Training Evaluation
Junaidah Hashim
Department of Business Administration, Kulliyyah of Economics and
Management Sciences, International Islamic University Malaysia, Kuala Lumpur,
Malaysia
Journal of European Industrial Training, 25/7 [2001] 374-379

Introduction
Evaluation model
In the past, evaluators were urged to use one preferred set of methodological principles and procedures, those of the experimental model, to assess the extent to which programmes had attained their goals (Greene and McClintock, 1991). Today, evaluators can choose from a repertoire of more than 50 approaches to evaluation, representing a widened range of philosophical principles and scientific methodologies.
Evaluators face many choices: when to evaluate, which questions are relevant and which to ask, which stakeholders to consult, what methods to use, what to measure, and how to facilitate use of the findings (Shadish, 1992).
Most evaluation models focus on, and advocate the use of, formal, systematic, and sometimes comprehensive evaluation, so that its full benefits can be realised.
Case study
Malaysia has a vision to be a fully developed
nation by the year 2020. To achieve this
vision, Malaysia needs a highly educated and
trained workforce. This effort is shared by
both the public and the private sectors, not only by allocating greater budgets for training but also by contributing to the Human Resource Development Fund. Since a
vast amount of resources has been committed
to training programmes, and with the
prevailing situation of economic constraint,
the demand for justifying training expenses
is gaining impetus. As the agency responsible for human resource development and training for the private sector in Malaysia, the Human Resources Development Council is aware of the need to evaluate training programmes. For this purpose it has designed a standard evaluation form, which all training providers are required to complete and return to the Council for further analysis.
The author conducted a research project to find out the evaluation practices of training providers in Malaysia. The study covered all training providers registered with the Human Resource Development Council as approved training providers. The actual sample was 262 institutions, and the response rate was 49 per cent.
This study used a specially constructed
questionnaire with a five-point Likert-scale
(Cronbach's coefficient alpha = 0.7310
during pre-testing). In the first part of the
Table I
Distribution of respondents by evaluation method

                        Percentage of responses
Method                   1     2     3     4     5    Mean
Survey                  6.4  10.6  34.6  26.6  22.3   3.48
Interview               3.2   8.5  26.6  43.6  18.1   3.65
Observation             1.1   7.4  21.3  46.8  23.4   3.84
Document review         4.3  18.1  31.9  30.9  14.9   3.34
Organisational audit   22.3  20.2  31.9  19.1   6.4   2.67
Performance analysis    6.4  10.6  29.8  26.6  26.6   3.56
Expert review          19.1  17.0  37.2  19.1   7.4   2.79
Panel checklist        28.7  14.9  24.5  23.4   8.5   2.68
Site visit             11.7  21.3  26.6  29.8  10.6   3.06
Pilot tests            28.7  11.7  25.5  26.6   7.4   2.72
Trainee feedback        5.3   1.1  12.8  30.9  50.0   4.19
Simulation             23.4  22.3  26.6  19.1   8.5   2.67
Self-report            10.6  22.3  35.1  21.3  10.6   2.99
Work samples           18.1  19.1  25.5  27.7   9.6   2.91
Peer report            14.9  26.6  35.1  18.1   5.3   2.72
Supervisor             12.8  14.9  34.0  27.7  10.6   3.08
Competency test        18.1  12.8  28.7  23.4  17.0   3.08
Reaction form          11.7   8.5  28.7  21.3  29.8   3.49
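Each row mean in Table I is simply the Likert weights 1 to 5 averaged against that row's response percentages. A minimal sketch of the arithmetic (the `likert_mean` helper is illustrative, not part of the study):

```python
def likert_mean(percentages):
    """Weighted mean of a five-point Likert distribution.

    `percentages` holds the share of respondents choosing 1..5;
    dividing by the actual total tolerates rounding in the table.
    """
    weighted = sum(w * p for w, p in zip(range(1, 6), percentages))
    return weighted / sum(percentages)

# Survey row from Table I: 6.4, 10.6, 34.6, 26.6, 22.3
print(round(likert_mean([6.4, 10.6, 34.6, 26.6, 22.3]), 2))  # 3.48
```

The same computation reproduces the other row means, which is how the flattened columns of the extracted table can be matched back to their methods.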
Junaidah Hashim
Training evaluation: clients'
roles
Table II
Distribution of respondents by evaluation schedule

                                                        Percentage of responses
Formal evaluation                                        1     2     3     4     5    Mean
Comprehensive:
We evaluate our training programme before we plan
  for the training                                     17.0   7.4  16.0  33.0  26.6   3.45
We evaluate our training programme during the
  planning stage                                       13.8   3.2  12.8  52.1  18.1   3.57
We evaluate our training programme during the
  implementation stage                                 13.8   5.3   8.5  41.5  30.9   3.70
We evaluate our training programme right after the
  training is completed                                 3.2   3.2   5.3  18.1  70.2   4.49
We evaluate our training programme sometimes after
  the training                                         14.9  17.0  12.8  36.2  19.1   3.28
Systematic:
We evaluate all of our training programmes              2.1  10.6   8.5  25.5  53.2   4.17
Not all of the training programmes need evaluation     10.6  30.9  13.8  13.8  30.9   3.23
We have a regular evaluation schedule for certain
  programmes                                            6.4  10.6  12.8  40.4  29.8   3.77

Notes: 1 = strongly disagree, 2 = mildly disagree, 3 = neither disagree nor agree, 4 = mildly agree, 5 = strongly agree; N = 94
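The reliability figures quoted for the questionnaire scales (Cronbach's coefficient alpha of 0.7310 and, later, 0.8089) follow the standard formula alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). A minimal sketch of that computation, using made-up ratings rather than the study's data:

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's coefficient alpha for a respondents-by-items matrix."""
    k = ratings.shape[1]                          # number of items
    item_vars = ratings.var(axis=0, ddof=1)       # sample variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical five-point responses from six respondents on three items
ratings = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 5],
    [2, 3, 3],
    [4, 4, 5],
    [1, 2, 1],
])
print(round(cronbach_alpha(ratings), 2))  # 0.96
```

Values of roughly 0.7 and above are conventionally read as acceptable internal consistency, which is the sense in which the study reports its pre-test figure.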
Respondents' commitment towards evaluation was measured with eight items (Cronbach coefficient alpha = 0.8089), computed as a total score. More than half of the respondents (66.0 per cent) strongly agreed that effective training ensures improved performance (mean 4.44), and most (80.9 per cent) strongly disagreed that conducting evaluation is a waste of time (mean 1.30). As shown in Table V, overall commitment towards evaluation was moderate (mean 3.05).

The literature reviewed suggests that training providers are not committed to systematic evaluation, partly because they have been enjoying their best times: never before have organisations invested so heavily in training (Brinkerhoff, 1988). This may have been true in the early 1990s, but training institutions in Malaysia are no longer enjoying such times; many suffered in the recent economic turmoil, and some have had to move into other businesses or close down altogether. The downturn in the Malaysian economy has put increasing pressure on organisations to justify their investment in training. Evaluation is

Table III
Summary of statistics of respondents by comprehensive, formal and systematic evaluation

                 Mean   Std deviation
Formal           3.71   0.6970
Comprehensive    3.70   0.8413
Systematic       3.72   0.8247

Note: N = 94
Table IV
Distribution of respondents by clients' demand variables

                                                 Percentage of responses
Item                                              1     2     3     4     5    Mean
Clients never ask for evaluation                 1.1  20.2  26.6  35.1  17.0   3.47
Clients insist on evaluation                     5.3  18.1  31.9  29.8  14.9   3.31
Clients ask for indication of dollar return
  on training                                   12.8  20.2  28.7  22.3  16.9   3.08
Clients require formal evaluation for every
  programme                                     10.6  13.8  24.5  34.0  17.0   3.33
Clients require reaction evaluation              3.2   8.5  24.5  41.5  22.3   3.71

Notes: 1 = strongly disagree, 2 = mildly disagree, 3 = neither disagree nor agree, 4 = mildly agree, 5 = strongly agree; N = 94
Table V
Distribution of respondents by commitment towards evaluation

                                                  Percentage of responses
Statement                                          1     2     3     4     5    Mean
Statement 1                                       5.3  13.8  20.2  36.2  24.5   3.61
Statement 2                                       8.5  18.1   8.5  31.9  33.0   3.63
Statement 3                                      26.6  22.3  22.3  18.1  10.6   2.64
Statement 4                                      12.8   8.5  23.4  29.8  25.5   3.47
Statement 5                                      35.1  22.3  24.5  11.7   6.4   2.32
Conducting evaluation is a waste of time         80.9  12.8   3.2   2.1   1.1   1.30
Effective training ensures improved performance   3.2   4.3   4.3  22.3  66.0   4.44
Statement 8                                      14.9  14.9  33.0  24.5  12.8   3.05
Overall mean                                                                    3.05

Notes: 1 = strongly disagree, 2 = mildly disagree, 3 = neither disagree nor agree, 4 = mildly agree, 5 = strongly agree; N = 94
viewed not only as an important part of
programme development, but also as a tool
for aiding decisions on cutting budgets.
Contribution to evaluation
Three things have made this study of the practice of evaluation in Malaysia different from previous studies. The evaluation was moderately formal (mean 3.71), comprehensive (mean 3.70), and systematic (mean 3.72).
Conclusion
This study has provided valuable information, especially to training providers in Malaysia. To encourage and stimulate the private sector to introduce training, the Malaysian government has introduced the
References
Shadish, W.R. (1992), "Theory-driven metaevaluation", in Chen, H.-T. and Rossi, P.H. (Eds), Using Theory to Improve Program and Policy Evaluation, Greenwood Press, Westport, CT.
Shamsuddin, A. (1995), "Contextual factors associated with evaluation practices of selected adult and continuing education providers in Malaysia", unpublished PhD dissertation, University of Georgia, Athens, GA.
Shelton, S. and Alliger, G. (1993), "Who's afraid of level 4 evaluation?", Training & Development Journal, Vol. 47 No. 6, pp. 43-6.
Smith, A. (1990), "Evaluation of management training: subjectivity and the individual", Journal of European Industrial Training, Vol. 14 No. 1, pp. 12-15.
Smith, A.J. and Piper, J.A. (1990), "The tailor-made training maze: a practitioner's guide to evaluation", Journal of European Industrial Training, Vol. 14 No. 8, pp. 2-24.
Stufflebeam, D.L. (1983), "The CIPP model for program evaluation", in Madaus, G.F., Scriven, M.S. and Stufflebeam, D.L. (Eds), Evaluation Models, Kluwer-Nijhoff Publishing, Boston, MA.