Evaluation Difficulties Associated with Training in the Organisation
Training
The definition of and activities associated with training are rapidly evolving. Though the words training, development, and education are often used interchangeably, Nadler (1970) differentiates the three terms as follows. Training comprises "those activities designed to improve human performance on the job the employee is presently doing or is being hired to do" (Nadler, 1970, p. 40). Education is defined as human resource development activities "designed to improve the overall competence of the employee in a specified direction and beyond the job now held" (Nadler, 1970, p. 60). Development involves preparation of employees "to move with the organisation as it develops, changes, and grows" (Nadler, 1970, p. 88).
More recently, the definition of training has expanded due to changes in the relationship between training and traditional human resource (HR) roles and functions. Robinson and Robinson (1995) advocate a shift from focusing on what people need to learn (training) to what they must do (performance) (p. 7).
Ulrich (1998) suggested repositioning human resources with his statement that "HR should not be defined by what it does but by what it delivers: results that enrich the organisation's value to customers, investors, and employees" (p. 124). In this paper, training will emphasise a process to change how people perform their jobs. The significance of evaluation rests on the importance of training itself: training is an important tool for making a company competitive, for upgrading the skills required by new technologies, and for keeping the workforce employable.
In any industry, professionals must quickly learn and apply information from multiple bodies of knowledge: legal, administrative, technological, psychological, managerial, economic, and financial. Knowledge and skill gaps have been created by unprecedented change in technological advances, regulation, and compliance requirements. Employee flexibility and adaptation in complex, rapidly changing environments require effective knowledge transfer and skill development (Stewart, 1997). Given that contemporary organisations operate in a volatile economy, effective training can both contribute to better delivery and reduce the error and liability arising from inadequate acquisition and use of important knowledge and skills.
To acquire higher-paying positions within any industry, employees must constantly upgrade their skills. The level of knowledge required of workers in today's business environment is constantly increasing, and as a result, the number of positions requiring specific skills is also rapidly increasing. Skilled workers must be trained or retrained to address the changing needs of the financial services industry.
Technology will introduce change and turbulence into every industry and every job. In particular, the necessity for constant learning and constant adaptation by workers is a certain outgrowth of technological innovation (Jamieson & O'Mara, 1991). Without additional training, today's workers will no longer be employable except in low-paying, low-skill jobs (Jamieson & O'Mara, 1991). No matter what agency or method is used, workers must learn new skills, accumulate the necessary knowledge, and apply the skills and knowledge gained in a new work environment, or face unemployment.
Training also provides direct benefits to employees. From a broader perspective, the United States and the financial services industry benefit from training that upgrades knowledge and skills, but employees also benefit at a personal level.
Ultimately, "because of the growing importance of skill and its general applicability across institutions, workers who pay attention to education, training, and work experience can increase their control over their working lives" (Carnevale, 1991, p. 140). Employees may not only gain financial independence based on their increased value to an employer; as the quantity of training increases, their ability to work at various tasks also increases, allowing them a greater number of employment options (Carnevale, 1991).
Evaluation
Definitions of evaluation, particularly training evaluation, overlap in many ways. Tyler (1942), cited in Jamieson & O'Mara (1991), saw evaluation as a determination of whether program objectives had been achieved, i.e., a comparison of intended outcomes with actual outcomes. Similarly, other researchers have defined evaluation as the comparison of initial objectives with real program outcomes, using both quantitative and qualitative methods to assess the results (Phillips, 1997). Brinkerhoff (1981) later extended the definition of evaluation to encompass "the systematic inquiry into training contexts, needs, plans, operation and effects" (p. 66).
Additional nuances have been offered in more recent years. According to Feldman (1990), effective evaluation measures both the training and the trainee. Swenson (1991) defined evaluation as "disciplined inquiry undertaken to determine the value, including merit and worth, of some entity" (p. 81). Newby's (1992) definition is simply "the assessment of the worth of training" (p. 24). A more comprehensive definition is provided by Basarab and Root (1992): "a systematic process converting pertinent data into information for measuring the effects of training, helping in decision making, documenting results to be used in program improvement, and providing a method for determining the quality of training" (p. 2).
Evaluation, as defined by Scriven (1967) and discussed by Worthen et al. (1997), is determining the worth or merit of an evaluation object. Worthen and colleagues' expanded definition states that evaluation is "the identification, clarification, and application of defensible criteria to determine an evaluation object's value (worth or merit), quality, utility, effectiveness, or significance in relation to those criteria" (p. 5). The steps of evaluation are (a) determining the standards for judging worth, (b) collecting relevant information, and (c) applying the standards to determine worth. This process leads to recommendations intended to optimize evaluation objects in relation to their intended purposes (Worthen et al., 1997).
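To make these three steps concrete, the following minimal sketch (in Python, with hypothetical criteria, measures, and thresholds invented purely for illustration) models evaluation as applying standards to collected information:

```python
# A minimal sketch of the three evaluation steps described by Worthen et al.
# All measure names and thresholds below are hypothetical.

def evaluate(criteria: dict[str, float], observations: dict[str, float]) -> dict[str, bool]:
    """Step (c): apply each standard to the corresponding observation
    to judge whether the evaluation object meets it."""
    return {name: observations.get(name, 0.0) >= threshold
            for name, threshold in criteria.items()}

# Step (a): determine the standards for judging worth.
criteria = {"learner_satisfaction": 4.0, "post_test_score": 80.0}

# Step (b): collect relevant information about the training program.
observations = {"learner_satisfaction": 4.3, "post_test_score": 72.5}

print(evaluate(criteria, observations))
# {'learner_satisfaction': True, 'post_test_score': False}
```

Judgments of this kind then feed the recommendations Worthen et al. describe.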
If the definition of evaluation still seems vague, it may be because elusive terms such as value and judgment are often used when defining the goals and processes of evaluation. Scriven (1999) has suggested that this lack of definition contributes to an overall misunderstanding of how and what to evaluate. According to Scriven, evaluation has historically focused on at least three questions regarding an intervention: (a) Is it worth it? (b) Is there a better way to do it? (c) Did it have the desired impact?
Shrock & Geis (1999) suggest that even though there are various activities and contexts in which evaluation is conducted, for the most part information is being collected that allows one to make a judgment about value. Despite these various enhancements in meaning, it is clear that evaluation involves a planned effort to measure what happens in training, how it affects trainee knowledge, skills, abilities, and performance, and what impact training has on organisational outcomes.
Defining the purpose of evaluation is sometimes more informative than defining the term, and the uses of evaluation are numerous. For instance, evaluation can be used to improve an object. It can also provide information for decisions about programs, such as: (a) whether or not to continue the program, (b) whether to add or drop specific techniques in the program, (c) whether similar programs should be instituted elsewhere, (d) how to allocate resources among competing programs, and (e) whether to accept or reject a program approach or theory (Worthen et al., 1997).
Bramley and Newby (1984) identified five purposes of evaluation: feedback, control, research, intervention, and power games. Brinkerhoff (1981) suggests that the purpose of evaluation should be determined by the degree to which it is designed to change something in the environment. He links evaluation to three aspects of human resources programming: planning, delivering, and recycling.
Few research-based studies were found on why evaluation was important. Clegg (1987) asked 43 chief training officers at Fortune 500 companies why they thought evaluation should be done. The primary reasons given were (ranked in order of importance): (a) to find out how training can contribute more, (b) to determine if there is a payoff, (c) to measure progress toward objectives, (d) to justify existence of the training function, (e) to find out where improvement is needed, and (f) to establish guidelines for future programs.
Kirkpatrick's Four Levels of Evaluation Model
In training evaluation, the framework developed by Donald Kirkpatrick has received the most attention over the past forty years. "Almost every discussion of training and development evaluation begins by mentioning Donald Kirkpatrick's well-known four levels of evaluation" (Medsker & Roberts, 1992, p. 1). His framework is widely considered the standard for evaluating training and development programs (Kaufman & Keller, 1994). In the literature reviewed, the Kirkpatrick levels were mentioned in one-third of the articles, while other models were rarely mentioned at all.
During the late 1950s, while at the University of Wisconsin, Kirkpatrick wrote a series of four articles called "Techniques for Evaluating Training Programs," published in Training and Development, the journal of the American Society for Training and Development. Kirkpatrick's reason for developing his model was to clarify the elusive term "evaluation" (Kirkpatrick, 1994, p. xiii). The four-level model, referred to variously as stages, criteria, types, categories of measures, and, most commonly, "levels of evaluation" (p. 110), has been refined over the years and incorporates the various approaches of training and development professionals regarding the purpose of evaluation.
Kirkpatrick himself now refers to these techniques as the four-level model of evaluation (p. 110), which comprises: (1) reaction, or customer satisfaction; (2) learning of knowledge, skills, and/or attitudes; (3) behavior, or transfer of knowledge, skills, and attitudes to the workplace; and (4) results. Kirkpatrick (1994) suggests that these levels are intended primarily to help assure the relevance of training's effects to the organisation. A secondary purpose of the classification is to assist in evaluating the design and implementation of training so that it can be continuously improved. Each of these levels is briefly discussed in the following paragraphs.
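Purely as an illustration, the four levels can be summarized as a simple lookup table; the example measures listed are common instances drawn from the discussion that follows, not an exhaustive taxonomy:

```python
# Kirkpatrick's four levels as a lookup table: level number -> (name,
# question the level answers, example measures). Illustrative only.
KIRKPATRICK_LEVELS = {
    1: ("Reaction", "Did participants like the program?",
        ["end-of-course ratings", "satisfaction surveys"]),
    2: ("Learning", "Were knowledge, skills, or attitudes acquired?",
        ["pre- and post-tests", "skill demonstrations"]),
    3: ("Behavior", "Is the learning applied on the job?",
        ["supervisor observation", "performance appraisal data"]),
    4: ("Results", "Did the organisation benefit?",
        ["sales, turnover, and scrap rates", "financial reports"]),
}

for level, (name, question, measures) in KIRKPATRICK_LEVELS.items():
    print(f"Level {level} ({name}): {question} Typical measures: {', '.join(measures)}.")
```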
First, Level 1 evaluation involves measuring trainee or participant reaction. According to Kirkpatrick, reaction "may well be defined as how well the trainees liked the program" (1994, p. 1). Reasons for conducting a reaction-level evaluation include valuable feedback for future programs, feedback for the trainers, quantitative information for managers, and data for use in setting future program standards. Reaction-level evaluations also allow data gathering on several areas, including the trainee, the facilitator, the facilities, the schedule, and other aspects of the course (Kirkpatrick, 1994). It is the most frequently used type of evaluation because it is easy to administer and is not particularly threatening to trainers or trainees.
Kirkpatrick saw participant reaction evaluation as important for three reasons: (a) management decisions on whether to continue funding training programs are often based on comments from participants, (b) participants can provide information that helps improve programs (Kirkpatrick, 1994), and (c) participants must like training "to receive the maximum benefit from it" (Kirkpatrick, 1994, p. 4).
The literature supports the first reason: managers make decisions based on participant comments. "If the true purpose of a training program is to reward good performers or renew sagging spirits at company expense, an extensive performance based training evaluation is misguided. A simple reactions measure, or smile sheet, may be all that is really necessary" (McEvoy & Buller, 1990, p. 40).
The second reason, improving programs, may be viable only in the sense that it supports the first: if increasing participants' enjoyment of the program does not negatively affect the program's effectiveness or efficiency, then such changes can be seen as improvements.
The third reason, which ties enjoyment of the training to receiving benefits from it, has not been fully supported by the literature. Jones, in his list of 26 limitations of end-of-course ratings, lists as number one: "Ratings don't correlate with transfer of training. No available research shows a clear relationship between end-of-course ratings and the extent to which participants apply training on the job" (Jones, 1990, p. 20). Similarly, "studies of the relationship between actual learning achieved in a course and how participants complete reaction forms indicate such a relationship is either very small or nonexistent" (Dixon, 1990, p. 28).
Based upon the results of the aforementioned studies, it would appear that although participant reaction forms provide information that may be used to make the learning process more enjoyable and fulfilling, they do not evaluate the effectiveness of training. Nevertheless, most training managers would like to know that participants enjoyed a particular program. "What you're measuring with a happiness sheet," Kirkpatrick says, "is initial customer satisfaction with the training experience. The sheet only becomes sneer-worthy if you pretend it is telling you what is happening at higher levels of evaluation" (Gordon, 1991, p. 21).
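The analysis underlying findings like Jones's and Dixon's can be sketched as a simple correlation between reaction ratings and learning gains; the data below are invented to mimic the reported pattern, and only the method (a Pearson correlation) is standard:

```python
# Correlating end-of-course "smile sheet" ratings with learning gains.
# The scores are fabricated for illustration; real studies report the
# relationship as very small or nonexistent (Dixon, 1990; Jones, 1990).
from statistics import correlation  # requires Python 3.10+

reaction_ratings = [4.8, 3.2, 4.5, 2.9, 4.1, 3.7, 4.9, 3.4]  # 1-5 ratings
learning_gains = [12, 15, -3, 18, 6, 10, 2, 14]              # post minus pre, %

r = correlation(reaction_ratings, learning_gains)
print(f"Pearson r = {r:.2f}")  # negative here: liking a course is not learning
```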
Level 2 evaluates the learning that took place during training. Learning is defined in a rather limited way: "What principles, facts, and techniques were understood and absorbed by the conferees? In other words, we are not concerned with the on-the-job use of these principles, facts, and techniques" (Kirkpatrick, 1994, p. 6). Evaluation at this level can only establish that the skills and knowledge needed to perform a behavior on the job have been learned. It cannot assure that the employee (a) will have an opportunity to perform the behavior, (b) will know when to use the learned behavior, or (c) will use the behavior even when the opportunity is recognized.
One method of measuring an increase in trainee knowledge is pre- and post-testing. Level 2 evaluations of this kind are used less frequently than Level 1, largely because they require more effort to design a valid test, especially if the results will be used for decision-making purposes (Shrock & Geis, 1999). Jack Phillips (1997), in his chapter on evaluation design, discusses pretest-posttest designs and the validity issues arising from testing effects and other threats to internal validity. A minimal example of such a design is sketched below.
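The sketch uses hypothetical scores and shows only the basic arithmetic of a Level 2 gain analysis; a real design would also address the validity threats Phillips discusses:

```python
# Level 2 learning measurement via pre- and post-testing. Scores are
# hypothetical; this shows the gain computation only, not a full design
# that controls for testing effects or other threats to internal validity.
pre_scores = [55, 62, 48, 71, 66, 59]
post_scores = [78, 80, 65, 85, 79, 74]

gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_gain = sum(gains) / len(gains)
print(f"Per-trainee gains: {gains}")
print(f"Mean gain: {mean_gain:.1f} points")  # evidence of learning, not transfer
```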
Even when this form of evaluation is well designed and addresses validity issues, the best an evaluation at this level can do is establish whether the direct objectives of the training program were reached. Attaining training goals is necessary but not sufficient to guarantee that the goals of a program are met. "Instructors tend to think that if participants have mastered a skill during the learning event, they are adequately prepared to implement it on the job. However, research on the transfer of training does not support the view that the training adequately prepares participants to transfer the skills to the workplace" (Dixon, 1990, pp. 90-91).
Level 3 evaluation addresses this transfer question directly, measuring whether the knowledge, skills, and attitudes acquired in training are actually applied in the workplace. Level 4 of Kirkpatrick's model reflects the evaluation of training's impact on business results, such as "increasing sales, reducing accidents, reducing turnover, and reducing scrap rates" (p. 70); the data are often collected via operational performance records, financial reports, or perceptual measures. The goal of this evaluation is to determine the impact of an intervention on the organisational bottom line. "If the program's aim is tangible results, rather than to teach management concepts, theories, and principles, then it is desirable to evaluate in terms of results" (Kirkpatrick, 1994, p. 70).
Kirkpatrick offered little methodological guidance for this level of evaluation. It is conducted far less frequently than the other levels, and it is extremely difficult, if not impossible, to isolate the effects of training that does not have specific, measurable outcomes, e.g., leadership or diversity training.
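Where a training outcome does have a specific, measurable result, one widely cited approach (Phillips, 1997) compares monetized benefits with program costs. The sketch below uses invented figures purely to illustrate the arithmetic:

```python
# An illustrative Level 4 / ROI calculation, following the general form of
# Phillips (1997): ROI (%) = net program benefits / program costs * 100.
# All monetary figures are hypothetical.
program_cost = 40_000     # design, delivery, materials, trainee time
annual_benefit = 65_000   # e.g., scrap-rate savings attributed to training

net_benefit = annual_benefit - program_cost
roi_percent = net_benefit / program_cost * 100
print(f"Net benefit: ${net_benefit:,}; ROI: {roi_percent:.1f}%")  # ROI: 62.5%
```

The hard part, as noted above, is not the arithmetic but isolating how much of the measured benefit is actually attributable to training.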
One question arising over the years has been whether Kirkpatrick's model is hierarchical; that is, does measurement at a higher level require measurement at the lower levels? Research conducted on the Kirkpatrick model has indicated that the levels are not hierarchical. For example, in the evaluation studies examined, a high course satisfaction rating (Level 1) did not produce a high level of skill acquisition (Level 2), which in turn did not produce a high level of skill transfer to the job (Level 3), and so on. This finding has been cited to refute what was believed to be a common perception among training evaluators that the levels are causally linked. It has also been argued that these findings validate the belief that the four levels measure quite independent constructs, and that all four levels should therefore be conducted (Holton, 1996).
Several theorists have tried to improve the Kirkpatrick model by adding criteria levels. Kaufman and Keller (1994), for example, seek to add a fifth level that measures social or cultural costs and benefits. Level 5 evaluation is intended to go beyond the value of the organisation's products and services to the external environment and society in which it operates.
In adding this fifth level, Kaufman and Keller (1994) advocate the need to address performance improvement interventions such as "strategic planning, organisation development" (p. 373) and the quality and usefulness of outputs to the client and/or society as a whole. The level attempts to answer the question: "Is what we deliver contributing to the good of society in general as well as satisfying to the client?" (Kaufman et al., 1995, p. 11).
As an alternative to adding a fifth level, other researchers have emphasized a slightly different set of four levels, or even a sixth. Swanson and Sleezer in 1987 (cited in Brinkerhoff, 1997) outlined four levels: satisfaction, learning, job/organisation performance, and financial performance. Brinkerhoff's six-stage model modified the levels by addressing timeframe: goal setting, HRD program design, program implementation, immediate outcomes, intermediate or usage outcomes, and impacts and worth (Brinkerhoff, 1997).
Another approach incorporates multiple perspectives into training evaluation. Kaplan and Norton's (1996) Balanced Scorecard uses four similar categories: the customer perspective, the financial perspective, the internal business process perspective, and the learning and growth perspective. The Balanced Scorecard adds non-financial measures to traditional organisational measures, and it emphasizes the interactive nature of different perspectives in determining whether training has had beneficial outcomes.
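As a sketch only, the Scorecard's grouping of training measures might look like the following; the specific measures are invented examples, not Kaplan and Norton's:

```python
# Hypothetical training-related measures grouped under the four Balanced
# Scorecard perspectives (Kaplan & Norton, 1996). Measures are illustrative.
SCORECARD = {
    "customer": ["client satisfaction with service quality"],
    "financial": ["cost per trainee", "program ROI"],
    "internal business process": ["post-training error and rework rates"],
    "learning and growth": ["certifications earned", "course completion rate"],
}

for perspective, measures in SCORECARD.items():
    print(f"{perspective}: {'; '.join(measures)}")
```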
The literature review indicates that training professionals are increasingly being called upon to evaluate training. Evaluation of training at the organisational impact level involves, at a minimum, knowledge of training and development, evaluation, statistics, finance and accounting, and project management, as well as of the organisation's culture and business environment.
Few case studies in the literature illustrate applications of training impact studies. Evaluation of training to date has focused on the four-level Kirkpatrick model. While most evaluation models have used the four levels as a framework, there remains a need to explore how organisations' training evaluation models vary from industry to industry.