Evaluating HRD Programs
Effectiveness
- The degree to which a training (or other HRD) program achieves its intended purpose
- Measures are relative to some starting point
- Measures how well the desired goal is achieved
HRD Evaluation
Textbook definition: The systematic collection of descriptive and judgmental information necessary to make effective training decisions related to the selection, adoption, value, and modification of various instructional activities.
In Other Words
Are we training:
- the right people
- the right stuff
- the right way
- with the right materials
- at the right time?
Evaluation Needs
- Descriptive and judgmental information needed
- Objective and subjective data
- Information gathered according to a plan and in a desired format
- Gathered to provide decision-making information
Purposes of Evaluation
- Determine whether the program is meeting the intended objectives
- Identify strengths and weaknesses
- Determine the cost-benefit ratio
- Identify who benefited most or least
- Determine future participants
- Provide information for improving HRD programs
Purposes of Evaluation (continued)
- Reinforce major points to be made
- Gather marketing information
- Determine if the training program is appropriate
- Establish a management database
Learning
Did they learn what they were supposed to?
Job Behavior
Was it used on the job?
Results
Did it improve the organization's effectiveness?
A Suggested Framework
Reaction
Did trainees like the training? Did the training seem useful?
Learning
How much did they learn?
Behavior
What behavior change occurred?
A Suggested Framework (continued)
Results
- What were the tangible outcomes?
- What was the return on investment (ROI)?
- What was the contribution to the organization?
Interviews
Advantages:
- Flexible
- Opportunity for clarification
- Depth possible
- Personal contact

Limitations:
- High reactive effects
- High cost
- Face-to-face threat potential
- Labor intensive
- Trained observers needed
Questionnaires
Advantages:
- Low cost to administer
- Honesty increased
- Anonymity possible
- Respondent sets the pace
- Variety of options

Limitations:
- Possible inaccurate data
- Response conditions not controlled
- Respondents set varying paces
- Uncontrolled return rate
Direct Observation
Advantages:
- Nonthreatening
- Excellent way to measure behavior change

Limitations:
- Possibly disruptive
- Reactive effects are possible
- May be unreliable
- Trained observers needed
Written Tests
Advantages:
- Low purchase cost
- Readily scored
- Quickly processed
- Easily administered
- Wide sampling possible

Limitations:
- May be threatening
- Possibly no relation to job performance
- Measures only cognitive learning
- Relies on norms
- Concern for racial/ethnic bias
Simulation/Performance Tests
Advantages:
- Reliable
- Objective
- Close relation to job performance
- Includes cognitive, psychomotor, and affective domains

Limitations:
- Time consuming
- Simulations often difficult to create
- High costs to develop and use
Validity
Does the device measure what we want to measure?
Practicality
Does it make sense in terms of the resources used to get the data?
Economic Data
- Profits
- Product liability claims
- Avoidance of penalties
- Market share
- Competitive position
- Return on investment (ROI)
- Financial utility calculations
Research Design
Specifies in advance:
- the expected results of the study
- the methods of data collection to be used
Control Group
Compares the performance of a group that received training against the performance of a similar group that did not
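As a minimal sketch of the control-group comparison (all scores below are hypothetical, not from the text), the training effect can be estimated as the difference in mean performance between the two groups:

```python
from statistics import mean

# Hypothetical post-training performance scores (e.g., units produced per day).
trained = [52, 55, 58, 54, 57]  # group that received training
control = [48, 50, 49, 51, 47]  # similar group without training

# Estimated training effect: difference in group means.
effect = mean(trained) - mean(control)
print(round(effect, 1))  # prints 6.2
```

A real evaluation would also test whether this difference is statistically significant rather than due to chance.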
Cost-effectiveness analysis
Focuses on increases in quality, reduction in scrap/rework, productivity, etc.
Return on Investment
Return on investment = Results/Costs
Example: preventable accidents dropped from 24 per year to 16 per year, a reduction of 8 accidents per year with a measured value of $48,000. Dividing this result by the cost of the training program gives:

ROI = Results / Costs = 6.8
SOURCE: From D. G. Robinson & J. Robinson (1989). Training for impact. Training and Development Journal, 43(8), 41. Printed by permission.
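The arithmetic behind the Robinson & Robinson accident example can be sketched in Python. The accident counts and the $48,000 value come from the example; the training cost below is a hypothetical figure (the source does not state it here), so the resulting ratio is illustrative only:

```python
# Figures from the Robinson & Robinson example:
accidents_before = 24   # preventable accidents per year, before training
accidents_after = 16    # preventable accidents per year, after training
reduction = accidents_before - accidents_after  # 8 fewer accidents per year
savings = 48_000        # dollar value of the reduction (from the example)

# Hypothetical training cost, for illustration only.
training_cost = 7_000

roi = savings / training_cost  # ROI = Results / Costs
print(reduction, round(roi, 1))
```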
Direct Costs
Instructor:
- Base pay
- Fringe benefits
- Travel and per diem
Indirect Costs
- Training management
- Clerical/administrative support
- Postage/shipping, telephone, computers, etc.
- Pre- and post-learning materials
- Other overhead costs
Development Costs
- Fee to purchase the program
- Costs to tailor the program to the organization
- Instructor training costs
Overhead Costs
- General organizational support
- Top management participation
- Utilities and facilities
- General and administrative costs, such as HRM
Measuring Benefits
- Change in quality per unit, measured in dollars
- Reduction in scrap/rework, measured in the dollar cost of labor and materials
- Reduction in preventable accidents, measured in dollars
- ROI = Benefits / Training costs
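The Benefits/Costs calculation can be sketched as follows, using made-up dollar figures for the benefit measures and cost categories listed above (every number here is hypothetical):

```python
# Hypothetical annual benefits, each converted to dollars as the text suggests.
benefits = {
    "quality improvement": 30_000,
    "scrap/rework reduction": 12_000,
    "preventable accidents avoided": 18_000,
}

# Hypothetical training costs by category (direct, indirect, development, overhead).
costs = {
    "direct": 15_000,
    "indirect": 5_000,
    "development": 8_000,
    "overhead": 2_000,
}

total_benefits = sum(benefits.values())  # 60,000
total_costs = sum(costs.values())        # 30,000
roi = total_benefits / total_costs       # ROI = Benefits / Training costs
print(roi)  # prints 2.0
```

An ROI above 1.0 means the program returned more in measured benefits than it cost.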
Utility Analysis
Uses a statistical approach to support claims of training effectiveness:

U = (N)(T)(dt)(SDy) - C

where:
- N = number of trainees
- T = length of time benefits are expected to last
- dt = true performance difference resulting from training
- SDy = dollar value of untrained job performance (in standard deviation units)
- C = cost of training
- Use credible and conservative estimates
- Share credit for successes and blame for failures
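A minimal sketch of the utility calculation U = (N)(T)(dt)(SDy) - C, with illustrative values (all numbers hypothetical):

```python
# Utility analysis: U = N * T * dt * SDy - C
N = 50         # number of trainees
T = 2.0        # years the benefits are expected to last
dt = 0.5       # true performance difference, in standard deviation units
SDy = 10_000   # dollar value of one SD of job performance
C = 100_000    # total cost of training

U = N * T * dt * SDy - C
print(U)  # prints 400000.0
```

A positive U means the estimated dollar value of improved performance over its expected life exceeds the training cost; conservative inputs keep the claim credible.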
Summary
- Training results must be measured against costs
- Training must contribute to the bottom line
- HRD must justify itself repeatedly as a revenue enhancer, not a revenue waster