
Training evaluation: clients' roles

Junaidah Hashim
Department of Business Administration, Kulliyyah of Economics and Management Sciences, International Islamic University Malaysia, Kuala Lumpur, Malaysia

Keywords
Training, Evaluation, Malaysia, Commitment, Clients

Abstract
Training evaluation is an elusive concept, especially when it comes to practice. The practice of evaluation in training has received a lot of criticism, largely explained by the unsystematic, informal, and ad hoc evaluation that has been conducted by training institutions. In Malaysia, training activities are monitored by the government, and organisations are required to obtain training services from approved training providers registered with the government. Examines the clients' demand for evaluation, the commitment given by training providers, and the overall practice of evaluation by the training providers in Malaysia. Finds that the government, clients and the economic situation have influenced evaluation practice in a positive direction.

Journal of European Industrial Training
25/7 [2001] 374-379
© MCB University Press
[ISSN 0309-0590]

Introduction

This paper addresses the issues of training evaluation practices in general, and examines training evaluation in Malaysia through a case study.

Since independence in 1957, the Malaysian government has demonstrated its commitment to education and human resource development. The emphasis was on education, because the government believed that it was the key input to national development. The government has recognised the importance of human resource development in its quest to achieve developed nation status. This commitment was translated into the establishment and growth of training agencies in the country.

As of 1999, records show that Malaysia has about 300 training institutions registered as training providers to various companies in the country. For a country like Malaysia this number is considered large, and it has made the training industry very competitive. The establishment of these institutions may have resulted from the government's new legislation, which requires every company to promote training for its workforce to ensure that the workforce is competent and can further contribute to the country in the realisation of Vision 2020. To that end, the Malaysian government passed an Act of Parliament, the Human Resources Development Act 1992, which requires companies to contribute the equivalent of 1 per cent of their monthly payroll to a fund which is then used to promote training. A special council was set up to manage this fund and to monitor the training activities of private companies in Malaysia. To facilitate the monitoring activities, companies are advised to obtain the training services they require from council-registered training providers only. This has caused the number of training providers in Malaysia to increase rapidly to cater for the demand.
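As a rough illustration of the levy rule just described (the 1 per cent rate comes from the Act; the payroll figure is invented), the monthly contribution can be computed directly, sketched here in Python:

# Minimal sketch of the Human Resources Development Act levy described above.
# The 1 per cent rate is from the text; the payroll figure is hypothetical.

def hrdf_levy(monthly_payroll: float, rate: float = 0.01) -> float:
    """Monthly contribution to the Human Resources Development Fund."""
    return monthly_payroll * rate

payroll = 250_000.00  # hypothetical monthly payroll (RM)
print(f"Monthly HRDF contribution: RM {hrdf_levy(payroll):,.2f}")  # RM 2,500.00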

The practice of training evaluation

Training evaluation is a systematic process of collecting and analyzing information for and about a training programme, which can be used for planning and guiding decision making as well as for assessing the relevance, effectiveness and impact of the various training components (Raab et al., 1991). Training evaluation may be undertaken for a variety of reasons. Research indicates that the most popular reason is to gather information that helps decision makers improve the training process and facilitate participants' job performance. Training institutions may also conduct evaluation for the purpose of maintaining training (Smith and Piper, 1990). A training provider needs to evaluate itself and its product in order to improve training, build a reputation, and maintain management's commitment to training.

Evaluation practice is one of the major dilemmas faced in the field of evaluation because it receives much criticism. As Philips (1991) states, when it comes to training evaluation, there still appears to be more talk than action. In many organisations, evaluation of training either is ignored or is approached in an unconvincing or unprofessional manner. Previous literature (Smith, 1990; Davidove and Schroeder, 1992; Shelton and Alliger, 1993; Shamsuddin, 1995) demonstrates that the practices of evaluation in training are unsystematic and based on simple means. Gutek (1988) states that the need for conducting evaluation is very low, and there is little or no demand on the part of the organisation to evaluate a training programme seriously; clients do not request one. Participants attend training, enjoy it, forget it, and carry on working exactly as before.




One of the reasons why clients do not demand evaluation is the clients' basic belief that a well-trained employee will be a productive employee (Barron, 1996). Another possible reason why evaluation is not conducted in training is that there is no serious enforcement; thus training providers can go without it.
Chen and Rossi (1992) comment that the evaluation knowledge found in the literature is not being fully utilised in programme evaluation practices. For example, out of the more than 50 evaluation models available, the framework that most training practitioners use is the Kirkpatrick model (Philips, 1991), and almost universally organisations evaluate their training programmes by emphasising one or more of that model's four or five levels. Currently, however, most employees' training is evaluated only at the reaction level. Evaluation at this level is associated with the terms "smile sheet" or "happiness sheet", because reaction information usually is obtained through a questionnaire administered to participants near or at the end of a training programme (Smith, 1990).
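As a minimal sketch (not from the article), the four Kirkpatrick levels and the reaction-level "smile sheet" they are so often reduced to might be represented as follows in Python; the item wordings and ratings are invented for illustration.

# Kirkpatrick's four evaluation levels, as named in the text above.
KIRKPATRICK_LEVELS = {
    1: "Reaction",   # did participants like the training?
    2: "Learning",   # did they acquire the intended knowledge and skills?
    3: "Behaviour",  # do they apply the training on the job?
    4: "Results",    # did organisational outcomes improve?
}

# A hypothetical end-of-course "smile sheet", covering level 1 only.
smile_sheet = {
    "The content was relevant to my job": 4,
    "The trainer communicated clearly": 5,
    "I would recommend this programme": 4,
}

average = sum(smile_sheet.values()) / len(smile_sheet)
print(f"Level 1 ({KIRKPATRICK_LEVELS[1]}) average rating: {average:.2f}")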
Admittedly, evaluation can never completely ascertain a training programme's effectiveness or its efficiency in achieving a beneficial effect. What worked at one time at one training location with a unique group of participants cannot necessarily be transferred to another time, setting and group and be expected to work as well. Still, evaluations build a case of support for training by providing an approximation of its value.

However, things are changing now. The importance of evaluating training ranks high among training consultants and top management as a means of justifying training investment. More than ever, training evaluation must demonstrate improved performance and financial results. Since training does not come cheap, it is understandable that top managers wish to see value for money, and they demand justification of training costs. Training providers need to show clients that they are getting good returns on their investment in training. The demand for accountability has been the major impetus for training evaluation in the past few years. Fiscal constraints have increased the competition among companies' activities for the available dollars and raised the question of value for money from those activities. Training professionals no longer enjoy the best of times, as they once did.

Evaluation model

In the past, evaluators were urged to use one preferred set of methodological principles and procedures, those of the experimental model, in order to assess the extent to which programmes had attained their goals (Greene and McClintok, 1991). Today, evaluators can choose from a repertoire of more than 50 approaches to evaluation, representing a widened range of philosophical principles and scientific methodologies.

Evaluators face many options about when to do evaluation, what questions are relevant, which questions to ask, which stakeholders to consult, what methods to use, what to measure, and how to facilitate use (Shadish, 1992).

Most of the evaluation models focus on and advocate the use of formal, systematic, and sometimes comprehensive evaluation, in order to make full use of the advantages of evaluation.

Case study

Malaysia has a vision to be a fully developed nation by the year 2020. To achieve this vision, Malaysia needs a highly educated and trained workforce. This effort is shared by both the public and the private sectors, not only by allocating greater budgets for training but also by contributing to the Human Resources Development Fund. Since a vast amount of resources has been committed to training programmes, and given the prevailing economic constraints, the demand for justifying training expenses is gaining impetus. As the agency responsible for human resource development and training for the private sector in Malaysia, the Human Resources Development Council is aware of the need to evaluate training programmes. For this purpose it has designed a standard evaluation form, and it requires all training providers to fill in this form and return it to the Council for further analysis.

The author conducted a research project to find out what the practices of training providers in Malaysia are. The study covered all the training providers registered with the Human Resources Development Council as approved training providers. The actual sample for this study was 262 institutions, and the response rate was 49 per cent.
This study used a specially constructed questionnaire with a five-point Likert scale (Cronbach's coefficient alpha = 0.7310 during pre-testing).
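The article does not show how that alpha was obtained, but Cronbach's coefficient alpha for a multi-item Likert scale is conventionally computed as sketched below in Python; the pre-test responses here are invented for illustration, so the printed value will not be 0.7310.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of Likert scores."""
    k = scores.shape[1]                         # number of items in the scale
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pre-test: six respondents answering four five-point items.
pretest = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 3, 4],
    [3, 2, 3, 3],
])
print(round(cronbach_alpha(pretest), 4))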


In the first part of the questionnaire, respondents were asked to indicate how frequently each of the evaluation methods listed was used in the evaluations they conducted. The results show that the respondents used all the evaluation methods commonly found in the literature. Trainee feedback was the most frequently used evaluation method (mean 4.19), as shown in Table I. Besides trainee feedback, other frequently used methods were observation (mean 3.84), interview (mean 3.65), performance analysis (mean 3.56) and reaction form (mean 3.49).
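Each mean in Table I is simply the average of the five-point scale weighted by the percentage distribution of responses; a quick consistency check in Python, using the trainee feedback row as the worked example, is sketched below.

# Weighted mean of a five-point item recovered from its percentage distribution.
scale = [1, 2, 3, 4, 5]
trainee_feedback = [5.3, 1.1, 12.8, 30.9, 50.0]  # percentages from Table I

mean = sum(s * p for s, p in zip(scale, trainee_feedback)) / sum(trainee_feedback)
print(round(mean, 2))  # 4.19, matching the mean reported in Table I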
The second part includes questions to find out how the evaluation was planned, and how frequently the evaluation was carried out. Stufflebeam (1983) suggested that evaluation should consist of context, input, process, and product evaluation, these evaluations being done at different stages of programme development.

Table II shows the percentage of responses to each statement asked. Respondents agreed that they evaluate their training right after the training is completed (88.3 per cent agreeing or strongly agreeing; mean 4.49). Evaluation during the implementation stage was the second most agreed statement (mean 3.70), and some respondents did evaluate their training during the planning stage (mean 3.57). The statements relevant to formal, comprehensive and systematic evaluation were each computed as a total score, and the results are depicted in Table III. They indicate that respondents did, to some extent, conduct formal, comprehensive and systematic evaluation.
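The article does not spell out how the Table III composites were built; a common approach, assumed in this Python sketch, is to average each respondent's ratings across the relevant items and then report the mean and standard deviation of those per-respondent composites. The response matrix below is invented, so the output will not reproduce Table III.

import numpy as np

# Hypothetical ratings: five respondents on the three "systematic" items (1-5).
responses = np.array([
    [4, 3, 4],
    [5, 4, 4],
    [3, 3, 2],
    [4, 5, 4],
    [2, 3, 3],
])

composite = responses.mean(axis=1)  # one composite score per respondent
print(round(composite.mean(), 2), round(composite.std(ddof=1), 4))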

Much of the literature has highlighted that clients seem not to demand that training providers conduct evaluation of the training they provide (Smith and Piper, 1990). This study attempted to find out some information about clients' demand for evaluation from the training providers' perspective. Eight statements were initially contained in the questionnaire, but three of the statements were deleted due to their low reliability. The five statements asked, and the responses received, are depicted in Table IV.

Referring to Table IV, the item with the highest mean was "clients require reaction evaluation" (mean 3.71). The responses for the other items were relatively high, all the means being above 3.00. The five statements were then computed as a total score, for the purpose of examining the overall demand from clients. On the overall score, the clients' demand was moderate (3.38); in the respondents' opinion, clients did to some extent demand that they conduct evaluation. This study involved corporate clients who hired training providers registered with the Human Resources Development Council, and it found that clients definitely preferred a training package that includes evaluation, because they pay for the training. Training providers have to oblige their clients' requirements if they want to provide training services to these clients and plan to continue doing so.
The other part of the questionnaire attempted to find out the training institutions' commitment towards evaluation. Eight items were asked for this purpose (Cronbach's coefficient alpha = 0.8089). It was found that training providers' commitment was moderate (mean 3.05). More than half of the respondents (66.0 per cent) strongly agreed that effective training ensures improved performance (mean 4.44), while most of the respondents (80.9 per cent) strongly disagreed (mean 1.30) that conducting evaluation is a waste of time. The eight statements were then computed as a total score; as shown in Table V, respondents' overall commitment towards evaluation was moderate (mean 3.05).

Table I
Distribution of respondents by evaluation method

Method                  1     2     3     4     5   Mean
Survey                 6.4  10.6  34.6  26.6  22.3   3.48
Interview              3.2   8.5  26.6  43.6  18.1   3.65
Observation            1.1   7.4  21.3  46.8  23.4   3.84
Document review        4.3  18.1  31.9  30.9  14.9   3.34
Organisational audit  22.3  20.2  31.9  19.1   6.4   2.67
Performance analysis   6.4  10.6  29.8  26.6  26.6   3.56
Expert review         19.1  17.0  37.2  19.1   7.4   2.79
Panel checklist       28.7  14.9  24.5  23.4   8.5   2.68
Site visit            11.7  21.3  26.6  29.8  10.6   3.06
Pilot tests           28.7  11.7  25.5  26.6   7.4   2.72
Trainee feedback       5.3   1.1  12.8  30.9  50.0   4.19
Simulation            23.4  22.3  26.6  19.1   8.5   2.67
Self-report           10.6  22.3  35.1  21.3  10.6   2.99
Work samples          18.1  19.1  25.5  27.7   9.6   2.91
Peer report           14.9  26.6  35.1  18.1   5.3   2.72
Supervisor            12.8  14.9  34.0  27.7  10.6   3.08
Competency test       18.1  12.8  28.7  23.4  17.0   3.08
Reaction form         11.7   8.5  28.7  21.3  29.8   3.49

Columns 1-5 show the percentage of responses at each scale point.
Notes: 1 = not at all, 2 = seldom, 3 = sometimes, 4 = frequently, 5 = very frequently; N = 94



Table II
Distribution of respondents by evaluation schedule

Formal evaluation                                                            1     2     3     4     5   Mean
Comprehensive:
We evaluate our training programme before we plan for the training        17.0   7.4  16.0  33.0  26.6   3.45
We evaluate our training programme during the planning stage              13.8   3.2  12.8  52.1  18.1   3.57
We evaluate our training programme during the implementation stage        13.8   5.3   8.5  41.5  30.9   3.70
We evaluate our training programme right after the training is completed   3.2   3.2   5.3  18.1  70.2   4.49
We evaluate our training programme some time after the training           14.9  17.0  12.8  36.2  19.1   3.28
Systematic:
We evaluate all of our training programmes                                  2.1  10.6   8.5  25.5  53.2   4.17
Not all of the training programmes need evaluation                         10.6  30.9  13.8  13.8  30.9   3.23
We have a regular evaluation schedule for certain programmes                6.4  10.6  12.8  40.4  29.8   3.77

Columns 1-5 show the percentage of responses at each scale point.
Notes: 1 = strongly disagree, 2 = mildly disagree, 3 = neither disagree nor agree, 4 = mildly agree, 5 = strongly agree; N = 94
The literature reviewed shows that training providers are not committed to providing systematic evaluation, part of the reason being that they have been enjoying their best times: never before have organisations invested so heavily in training (Brinkerhoff, 1988). This might have been true during the early 1990s. As of now, training institutions in Malaysia are no longer enjoying such good times, as many of them suffered in the recent economic turmoil; some have had to change to other businesses and some have had to close down. The downturn in the Malaysian economy has placed increasing pressure on organisations to justify investment in training. Evaluation is viewed not only as an important part of programme development, but also as a tool for aiding decisions on cutting budgets.

Table III
Summary statistics of respondents' formal, comprehensive and systematic evaluation scores

                 Mean   Std deviation
Formal           3.71   0.6970
Comprehensive    3.70   0.8413
Systematic       3.72   0.8247

Note: N = 94

Table IV
Distribution of respondents by clients' demand variables

Item                                                       1     2     3     4     5   Mean
Clients never ask for evaluation                          1.1  20.2  26.6  35.1  17.0   3.47
Clients insist on evaluation                              5.3  18.1  31.9  29.8  14.9   3.31
Clients ask for indication of dollar return on training  12.8  20.2  28.7  22.3  16.9   3.08
Clients require formal evaluation for every programme    10.6  13.8  24.5  34.0  17.0   3.33
Clients require reaction evaluation                       3.2   8.5  24.5  41.5  22.3   3.71

Columns 1-5 show the percentage of responses at each scale point.
Notes: 1 = strongly disagree, 2 = mildly disagree, 3 = neither disagree nor agree, 4 = mildly agree, 5 = strongly agree; N = 94

Table V
Distribution of respondents by commitment towards evaluation

Statement                                                     1     2     3     4     5   Mean
We devote significant resources to evaluation activities    5.3  13.8  20.2  36.2  24.5   3.61
We have staff assigned to evaluation                        8.5  18.1   8.5  31.9  33.0   3.63
We have an evaluation unit responsible for evaluation      26.6  22.3  22.3  18.1  10.6   2.64
We encourage clients to conduct evaluation                 12.8   8.5  23.4  29.8  25.5   3.47
Evaluation is the client's responsibility                  35.1  22.3  24.5  11.7   6.4   2.32
Conducting evaluation is a waste of time                   80.9  12.8   3.2   2.1   1.1   1.30
Effective training ensures improved performance             3.2   4.3   4.3  22.3  66.0   4.44
If training fails to improve performance it is a
  management failure, not a training failure               14.9  14.9  33.0  24.5  12.8   3.05
Commitment (overall)                                                                      3.05

Columns 1-5 show the percentage of responses at each scale point.
Notes: 1 = strongly disagree, 2 = mildly disagree, 3 = neither disagree nor agree, 4 = mildly agree, 5 = strongly agree; N = 94

Contribution to evaluation

Three things have made this study of the practice of evaluation in Malaysia different from previous studies. Unlike earlier findings, the evaluation observed was moderately formal (mean 3.71), comprehensive (mean 3.70), and systematic (mean 3.72), and three factors help to explain this.

The government enforcement
Government enforcement can help monitor the activities of training providers. In this study, the survival of training providers was found to depend very much on complying with the government's requirements.

The clients' demand
Clients have the bargaining power of customers to ask for a higher quality of service by insisting on evaluation for every training programme offered by training providers.

The economic condition
The economic downturn has pushed companies to justify training costs and the return on training investment. Justification is possible through evaluation; indeed, for justification, evaluation has to be done thoroughly by the training providers.

Conclusion

This study has provided valuable information, especially to the training providers in Malaysia. To encourage and stimulate the private sector to introduce training, the Malaysian government has introduced the Human Resources Development Act 1992, which requires organisations to contribute the equivalent of 1 per cent of their monthly payroll to the Human Resources Development Fund, a fund which can then be used to promote training. This move requires the Human Resources Development Council, the body in charge of the fund, to ensure high quality, standards and accountability among its training providers in the services they offer to organisations. In order to ensure that the training programmes offered are of a high standard, training institutions are required to conduct effective programme evaluation.

As the findings of this study demonstrate, clients are important in the context of evaluation practice. Clients should be aware of their privilege to demand, because they are not given an option to choose their own training providers if they want to reimburse their training expenditures from the Council. Clients' demand can improve evaluation practice positively.

Although this study demonstrates that the evaluation practice of the training institutions is quite moderately formal, comprehensive, and systematic, it could be further improved. To improve the practice, practitioners' knowledge needs to be enhanced so that they can make full use of the models available. Since each situation is unique, the search for an ideal model for a specific situation will not end; practitioners should increase their knowledge of the literature so that they can select a model, modify strategies from models appropriate to their situational setting, and implement those strategies within their constraints.

References

Barron, T. (1996), "A new wave in training funding", Training & Development Journal, Vol. 50 No. 5, pp. 28-32.
Brinkerhoff, R.O. (1988), "An integrated evaluation model for HRD", Training & Development Journal, Vol. 42 No. 2, pp. 66-8.
Chen, H.-T. and Rossi, P.H. (Eds) (1992), Using Theory to Improve Program and Policy Evaluations, Greenwood Press, Westport, CT.
Davidove, A.E. and Schroeder, P.A. (1992), "Demonstrating ROI of training", Training & Development Journal, Vol. 46 No. 8, pp. 70-1.
Greene, J.C. and McClintok, C. (1991), "The evolution of evaluation methodology", Theory into Practice, Vol. 30 No. 1, pp. 13-20.
Gutek, S.P. (1988), "Training-program evaluation: an investigation of perceptions and practices in non-manufacturing business organizations", doctoral dissertation, Western Michigan University, Kalamazoo, MI, Dissertation Abstracts International, 49/05A, AAC8811388.
Philips, J.J. (1991), Handbook of Training Evaluation and Measurement Methods, Gulf Publishing Company, Houston, TX.
Raab, R.T., Swanson, B.E. and Wentling, T.L. (1991), Improving Training Quality, Food and Agriculture Organisation of the United Nations, Rome.
Shadish, W.R. (1992), "Theory-driven meta-evaluation", in Chen, H.-T. and Rossi, P.H. (Eds), Using Theory to Improve Program and Policy Evaluations, Greenwood Press, Westport, CT.
Shamsuddin, A. (1995), "Contextual factors associated with evaluation practices of selected adult and continuing education providers in Malaysia", unpublished PhD dissertation, University of Georgia, Athens, GA.
Shelton, S. and Alliger, G. (1993), "Who's afraid of level 4 evaluation?", Training & Development Journal, Vol. 47 No. 6, pp. 43-6.
Smith, A. (1990), "Evaluation of management training: subjectivity and the individual", Journal of European Industrial Training, Vol. 14 No. 1, pp. 12-15.
Smith, A.J. and Piper, J.A. (1990), "The tailor-made training maze: a practitioner's guide to evaluation", Journal of European Industrial Training, Vol. 14 No. 8, pp. 2-24.
Stufflebeam, D.L. (1983), "The CIPP model for program evaluation", in Madaus, G.F., Scriven, M.S. and Stufflebeam, D.L. (Eds), Evaluation Models, Kluwer-Nijhoff Publishing, Boston, MA.
