Donald J. Ford - Bottom-Line Training - Performance-Based Results
The Author
Peter Chee
President and CEO
Institute of Training and Development
Penang, Malaysia
This book provides in-depth explanations, examples and exercises that will help the reader
distinguish different types of analysis such as performance analysis, job analysis, task analysis,
skill gap analysis, learner analysis and the ubiquitous needs analysis. More importantly, this
book continuously drives home the vital link between training activity and meaningful business
indicators.
Ethan S. Sanders
President and CEO
Sundial Learning Systems, Inc. and
Co-author of Performance Intervention Maps (ASTD Press)
Dr. Ford's approach to training design helps the reader to understand the importance of taking a
comprehensive and systematic approach to creating and implementing a training initiative from
beginning to end. A major strength of his model is the rigorous approach to front-end needs
analysis and assessment that he recommends ... This book is a "must read" for any person
considering the use of training for performance improvement in an organization.
Richard G. Wong
Manager of Training and Organization Development
Orange County Transportation Authority
Where other books on instructional design offer principles and systematic approaches,
this book goes beyond the ISD model to offer practical, down-to-earth guidance on
everything from designing performance tests to negotiating maintenance issues with
clients. I have used, and quoted from, Bottom-Line Training on numerous occasions. A must-have
for every instructional designer's bookshelf!
Carolyn Johnson
Instructional Design Supervisor
Southern California Gas Company, a Sempra Energy Utility
Don Ford's book hits the nail on the head. It provides a clear introduction and explanation
of the systematic training development process. At the same time, it never lets the reader
lose sight of how important it is for all training activities to be designed to make
significant contributions to organizational results. Bottom-line Training performs
beautifully in both arenas, giving the beginner and the experienced developer alike fresh
insights to our profession's business role and leaving them with a comprehensive set of
design tools that will serve them well.
Don Ford's newest book on Bottom-line Training is the perfect mixture of strategy and
techniques for implementing a systematic approach to training design and delivery. His
examples, stories and models for bottom-line training are a great help to any trainer.
Jean Barbazette
President
The Training Clinic and
Author of Instant Case Studies and The Trainer's Journey to Competence
DONALD J. FORD
Ph.D., Certified Performance Technologist (C.P.T.)
Bottom-Line Training
PERFORMANCE-BASED RESULTS
Copyright 2005 Training Education Management LLC.
All rights reserved. No portion of this publication may be reproduced,
distributed or transmitted in any form or by any means without the
written permission of the publisher.
Over the past half century, the design and development of training programs
has grown into a multi-billion dollar industry. With more organizations moving away
from classroom lecture as the sole training delivery method, instructional designers
have emerged as the new role model for training professionals.
Although much research has been conducted on how adults learn and how
instruction can be designed to facilitate learning and maximize performance, too
little of the cutting-edge knowledge in the field has made its way into the daily
practice of training in corporate America. The majority of programs, especially those
delivered in classrooms through traditional lecture, continue to be designed by
subject matter experts who rely on their own intuition and experience to guide them,
rather than a systematically applied theory of instructional design.
Though many fine programs are designed this way, a much larger number of
these programs fail to help trainees reach the stated objectives and do not prepare
them to apply what they learn on the job. Studies have found that as much as 50
percent of the training occurring today is not transferred back to the job, resulting in
a monumental waste of resources.
All is not bleak, however. A growing number of training programs, especially
those designed for delivery via the Internet, computer or other electronic media, are
developed using a systems model of instruction, so that the right content is taught to
the right people at the right time. These training programs are the source of today's
success stories in the workplace. It is this type of training that is transforming
organizations around the world into high-performance, efficient, customer-focused
firms that enjoy success in the marketplace. Increasing evidence supports the
notion that investments in training and other aspects of human capital account for a
large share of a company's financial success.
This book is aimed at working professionals in the field of training and
development and those newcomers who wish to gain systematic knowledge and
skills in the field. It is an easy-to-read introductory text that proceeds step-by-step
through the process of creating and delivering outstanding training programs. It will
make readers more effective trainers and more valuable to the people they serve.
Donald J. Ford
Torrance, California
SECTION ONE
Analysis
The first section of this book deals with the first phase of the results-based design
model – analysis. In this phase, trainers investigate the need for training and uncover
underlying causes for performance problems or new performance opportunities
requiring new knowledge, skills or attitudes. This phase is also called needs
assessment, needs analysis, opportunity analysis or front-end analysis – although
technically speaking, they are not synonymous. This section will present knowledge in needs
analysis and assessment, performance analysis, job analysis, task analysis, learner
analysis, context analysis and skill gap analysis.
Each of these types of analysis provides specific kinds of information to the
trainer that help determine whether training is needed at all; whether other
interventions, such as job redesign, rewards and recognition, process reengineering,
management or systems changes, are required; and exactly what new knowledge, skills and
attitudes are needed to achieve the intended results.
Although any given project may not require all of these analytical tools, the
accomplished analyst has them all available in the toolkit should they be necessary.
The growth of analytical tools and methods is one of the most exciting developments
in the field, for training design can never be better than the analysis that informs it.
The following model depicts the analysis phase of training design as a
learning map.
Needs Analysis
In the following seven chapters, you will learn about each of these types of
analysis and how to use them to define training programs.
First, though, it is necessary to introduce the Bottom-line Training model to set
the foundation for the chapters that follow. That is the subject of chapter one.
CHAPTER ONE
Introduction to
Bottom-Line Training
Design
In this book, results come first. Why should a book on training design start
with the results to be achieved? Because for too long, training and development
has not directly tied itself to business results that matter to clients and
customers. Is it any wonder, then, that when business turns sour, training is
among the first things to go?
Put yourself in the position of a struggling business owner. Where
would you put your limited resources when forced to choose: in production,
service delivery, marketing or employee development? Of course, you would
choose your products and services over the development of employees, unless
you were certain that an effectively trained workforce that performs superbly
on the job would boost your business’ success. So, the first priority for anyone
designing training programs today is to figure out how the proposed training
enhances an organization’s ability to deliver quality and thereby stay in
business. Any result less than that will relegate training to the fringe benefit
category of nice things to have – when the extra money is available to afford such
amenities.
The book’s organizing model is the most widely used instructional
design system available today – the Instructional Systems Design (ISD) or
ADDIE model, as it is sometimes known. It is illustrated below:
Figure 1-2:
Results-Based Training Design Model
Analyze
• Needs Analysis
• Needs Assessment
• Performance Analysis
• Job/Task Analysis
• Learner Analysis
• Context Analysis
• Skill Gap Analysis
Design
• Objectives
• Deliverables
• Budgets/Schedules
• Project Management
• Blueprints/Prototypes
Develop
• Materials
• Test/Assessments
• Quality Control
• Production
Implement
• Train the Trainer
• Classroom Delivery
• Non-classroom Delivery
Evaluate
• Formative Evaluation
• Reactions/Training
• Transfer of Training
• Business Results
RESULTS
• Learning
• Performance
• Financial
• Strategic
Performance Results
Thus, the first type of result organizations seek today is job performance. This
may be categorized into several types of performance, among the most com-
mon of which are the following:
1. Perform new job skills
2. Perform existing job skills better, faster or cheaper
3. Apply new knowledge to enhance performance
4. Exhibit new attitudes or change old attitudes about performance
Financial Results
If training has achieved results in terms of learning and job performance, then
it ought to follow that the financial performance of the organization will also
improve. But so few training programs are ever evaluated on their financial
results that it is simply unknown how many training programs
actually produce a positive financial benefit. Much of the problem is rooted in
the archaic accounting practices of most firms, especially with regard to labor
and human capital, which prevent many trainers from establishing a cause and
effect link between training and the firm’s financial performance. Adding to
the difficulty is the typical confounding of various causes that contribute to a
company’s financial success, including such variables as: worker skills, prod-
uct/service quality, research and development, technology, price, competition
and market conditions. All of these factors interact in determining how well a
company is doing at any moment in time.
All of these obstacles to measuring the financial impact of training have
served to create cynics among both trainers and executives. Many simply
declare that the financial benefits of training cannot be accurately measured
and must be accepted on faith alone. This approach is a cop-out, and will
impoverish the training profession if allowed to prevail.
Leaving aside for the moment the thorny issue of how training’s financial
impact should be measured (this will be revisited in the Evaluation section), we
turn next to strategic results.
Strategic Results
As challenging as tying training to financial results can be, it is still not enough.
One additional level of results must also emanate from effectively designed
training – strategic results. These are results that not only improve job per-
formance and financial results, but also enhance the organization’s ability to
compete and survive in the future. When training begins to influence strategic
results, it then enters a whole new realm of importance and serves organiza-
tions in wholly new and indispensable ways.
Among the strategic results that training frequently contributes to are
the following:
1. Create new products and services and the means to deliver them to
customers
2. Improve the organization’s ability to serve its customers, both current
and future
3. Open new markets for an organization’s products and services
4. Enhance organizational operations by redesigning processes and
functions
Book Layout
We will now explore the five phases of training design, periodically stopping to
consider the results we seek. The first task is to analyze the need for training,
presented in Section One, chapters two through eight.
Next we will deal with Section Two: Design, in chapters nine through
thirteen. Third, we will cover Section Three: Development in chapters fourteen
through seventeen. Fourth, we will consider Section Four: Implementation in
Discussion Questions
and Exercises
1. Think back on the last training project you completed or the last training
course that you took. How were the following bottom-line training steps
handled?
Figure 2-1:
Mager and Pipe’s Performance Analysis Model
(Flowchart: Describe Performance Discrepancy → Important? If not, ignore.
If it is a skill deficiency: Used to Do It? If not, arrange formal training.
If it is not a skill deficiency: Performance Punishing? If so, remove the punishment.)
(Needs analysis process: Surveillance → Investigation → Analysis → Action)
Surveillance
Surveillance is the on-going process of reviewing vital information
about an organization in order to understand the issues and problems it is
confronting. It is akin to the ubiquitous video cameras in the workplace that
record the movement and activities of employees and customers. The
cameras provide a detailed record of daily life in organizations which can be
reviewed when needed.
Likewise, trainers should be aware of the important events and
information impacting the organizations they work for. Internal trainers
accomplish a portion of this by simply networking and getting to know the
people with whom they work. External trainers have to rely more on printed
information and data searches to provide this background information. Other
ways to conduct surveillance include:
1. Have relevant company documents (periodic reports, senior executive
memos, marketing information, position papers, etc.) circulated to you
2. Stay abreast of current work performance policies, processes and
standards
3. Maintain effective networks of personal contacts in the organization
4. Deepen your knowledge of the key business issues senior executives
are working on
The Surveillance Worksheet in Appendix Two, page 271 provides a
more complete list of sources of information about organizations that can
impact training. You may refer to this to see if your surveillance methods
include everything you need to be effective. Surveillance, while unfocused
on a specific performance problem, provides the necessary background to
begin a more specific investigation quickly. It often generates business for
the training department as well, since good surveillance can alert the savvy
trainer to upcoming projects and initiatives that require a training component.
Many trainers find that the way they get invited to participate in key
organization projects is to maintain a wide network of contacts throughout
the organization who will alert them when training may be needed.
Analysis
Once data has been collected, the analysis phase begins. This is
the most challenging phase, requiring the highest levels of skill and
knowledge. Many methods of analysis are available and the key challenge is
to choose a method suitable for the data at hand.
Two basic analytical methods are:
• quantitative
• qualitative
Quantitative methods include statistical analysis, numerical
summaries, graphs, charts, tables and related methods to analyze numbers.
Qualitative methods include summaries of interviews, field notes,
ethnographic reports, work samples, video or audio tapes, content analysis
and other methods to analyze non-numeric data such as people’s words and
actions. More will be said about these analytical methods in later chapters of
this section.
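As a small illustration of the quantitative side, consider summarizing needs-analysis survey data. The sketch below is hypothetical: the skills, the 1–5 rating scale and the cut-off threshold are invented for the example, not drawn from this book.

```python
# Hypothetical needs-analysis survey: employees rate their own proficiency
# in several skills on a 1-5 scale. A simple quantitative summary computes
# the mean rating per skill and flags those below an assumed threshold.
from statistics import mean

ratings = {
    "customer service": [4, 5, 3, 4, 4],
    "product knowledge": [2, 3, 2, 1, 3],
    "report writing": [3, 3, 4, 2, 3],
}

THRESHOLD = 3.0  # assumed cut-off for flagging a potential skill gap

for skill, scores in sorted(ratings.items()):
    avg = mean(scores)
    flag = "needs attention" if avg < THRESHOLD else "adequate"
    print(f"{skill}: mean={avg:.2f} ({flag})")
```

A qualitative analysis of the same needs assessment would instead code and summarize interview transcripts or field notes; the two approaches are complementary.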
Action
When trainers and managers begin to make decisions based on the
data they have collected and analyzed, the action phase has begun. The
imperative to take action distinguishes needs analysis from pure research,
where creation of knowledge is the end-product. For needs analysis, even
the failure to take action is, in itself, an action, since that means the
organization has decided to live with a problem or forego an opportunity. In
most cases, though, some useful action results from needs analysis. Among
the typical action outcomes of needs analysis are the following:
• new performance skills and training content are designed and developed
• existing training content is updated or changed to meet new skill needs
and new audiences
• alternative training and support is provided on the job through structured
on-the-job training, electronic performance support systems, manuals,
the Internet, etc.
Summary
In this chapter, we have examined four different approaches to analyzing
performance problems and assessing training needs. In each of these
approaches, the emphasis is on establishing causal links of inadequate job
performance and categorizing these as either skill and knowledge deficits
that training can solve or motivational or organizational problems that require
other interventions. Once the need for training has been established,
techniques to assess the job, the content of training, the audience and the
work environment come next. These are discussed in the following chapters
of this section.
The following chart summarizes the typical steps that designers follow in the
needs analysis and needs assessment phase of training design.
Figure 2-7:
Needs Analysis and Needs Assessment Process Flow
(Needs Analysis: Surveillance → Investigation → Training Needs Assessment →
Skill Gap Analysis and Causal Analysis → Action Plan)
2. Discuss the pros and cons of each of the four analytical models presented
in this chapter. Which ones are you most inclined to use? Why?
A rapidly growing fast food company with a franchise business model has
begun to experience problems with some of its newly opened locations. Up
to 20 percent of them are failing within a year, at a huge cost to the company.
The reasons for failure seem numerous, from poor location and advertising,
to poor leadership and work planning, to untrained, unmotivated employees.
Performance Analysis
The performance analysis phase is also sometimes referred to as “front-end analysis.”
Causal Analysis
Causal analysis is the process of examining all the possible underlying
causes for a performance gap in order to design a comprehensive
intervention that will address all the causes. It is the key step in the human
performance technology model, since the correct identification of causes for
inadequate performance is essential to solving the problem. Among the
causes for performance problems, the following occur frequently:
• no consequences (or even positive consequences) for poor performance
• lack of incentives or rewards for good performance
• lack of information required for good performance
• lack of resources, tools, equipment, etc. to enable good performance
• an environment that does not reinforce good performance
• individuals’ capacity to perform up to standards
• lack of individual motivation to perform
• lack of knowledge and skill needed to perform
(Performance factors – Organization: resources, structure/process, information;
Individual: capacity, motivation, knowledge)
The first three factors – resources, structure/process and information
– are primarily the responsibility of the organization. The last three factors –
capacity, motivation and knowledge – are primarily individual employee
issues.
Returning to our example of the financial services firm about to
launch an annuity product line, the consultants found a number of underlying
causes for potential performance problems. First, they identified a need to
provide knowledge and skill to at least half the existing sales force on
annuities as an investment tool and the sales strategy needed to compete in
this marketplace. They also identified a critical need to train underwriters,
risk managers and others in the in-depth design of annuities.
Having addressed the skill gap issue, the consultants turned to other
underlying causes of potential performance problems. The widespread
confusion and skepticism about the new product launch was correctly seen
as a key obstacle to success. Further analysis revealed that many
employees had a poor impression of annuities, finding them less ‘sexy’ than
the other investment tools currently offered, and feared that the company
would siphon resources away from other product lines and thus damage the
firm’s overall health. Among the sales staff, concerns about the annuity
product line centered around the compensation package that had been
proposed. Many sales people felt there wasn’t sufficient incentive to sell this
new product, tempting them to stick to existing product lines instead which
had a proven sales track record. Finally, a third major cause of potential
problems surfaced with regards to the computer systems that would support
the sales and marketing of annuities. The information systems department
Intervention Selection
Once underlying causes have been identified and confirmed, the third
phase of HPT is selection of appropriate interventions to address all the
causes. An intervention is simply any conscious action designed to mitigate
or eliminate a cause for inadequate performance. The list of possible
interventions would be quite lengthy indeed, among the most common of
which might be:
• training
• on the job coaching
• culture change
• teambuilding
• management systems
• information systems
• tools and equipment
• environmental engineering
Anything that addresses an underlying cause of performance problems is a
good candidate for inclusion as an intervention.
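The matching of confirmed causes to candidate interventions can be sketched as a simple lookup. The cause-to-intervention pairings below are illustrative assumptions for the example, not an exhaustive mapping from the text.

```python
# Illustrative mapping from underlying causes (see the causal analysis list
# earlier in the chapter) to candidate interventions. The pairings are
# assumptions for demonstration; a real analysis would derive its own.
INTERVENTIONS_BY_CAUSE = {
    "lack of knowledge and skill": ["training", "on-the-job coaching"],
    "lack of incentives or rewards": ["management systems"],
    "lack of information": ["information systems"],
    "lack of resources or tools": ["tools and equipment"],
    "non-reinforcing environment": ["environmental engineering", "culture change"],
}

def candidate_interventions(confirmed_causes):
    """Return a de-duplicated list of interventions addressing the causes."""
    seen, result = set(), []
    for cause in confirmed_causes:
        for intervention in INTERVENTIONS_BY_CAUSE.get(cause, []):
            if intervention not in seen:
                seen.add(intervention)
                result.append(intervention)
    return result

print(candidate_interventions(
    ["lack of knowledge and skill", "lack of information"]))
# prints ['training', 'on-the-job coaching', 'information systems']
```

The point of the sketch is simply that each confirmed cause should pull in at least one intervention; causes with no matching intervention are a sign the analysis is incomplete.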
In the financial services company’s case, the interventions were
narrowed down to three major initiatives: training for all affected employees
on annuities, an internal public relations campaign to convince skeptical
employees of the merits of annuity products and a decision to delay product
launch until the necessary supporting software systems were available.
These three interventions addressed the three most important underlying
causes of potential performance problems and taken together, helped to
build a critical mass of support for the annuity product launch.
Implementation
The fourth phase of HPT is the implementation of the previously-
selected interventions. Implementation may be as simple as designing and
delivering training or as complicated as changing the entire culture of an
organization.
Whatever the implementation strategy is, it helps to have a clearly defined
change management process to drive the implementation and help everyone
involved to move from resistance to acceptance of the change. If the
implementation involves training, it is important to provide reasons for
employees to participate, especially those that pertain to what’s in it for them
personally. If changes are proposed to compensation or other human
resource systems, these must be carefully explained to employees to reduce
resistance. Changes to facilities, systems and equipment will likely require
communication, training, planning and adequate resources to be a success.
A key ingredient in reducing opposition to change is to involve people in the
change. The more involved people are, the less likely that they will resist.
Evaluation
The role of evaluation in human performance technology is no
different than the traditional role of evaluation in training. It is important to
determine whether the interventions selected and implemented are having
the desired impact. To do this, the four-level evaluation model developed by
Kirkpatrick (Kirkpatrick, 1975) can be applied. First, reactions from
employees, suppliers and especially customers can be collected by survey
or interview. Second, evidence of new skills and knowledge can be gained
from training tests, self-ratings or supervisor ratings. Third, evidence of
behavior change on the job can be found through job observation, interviews
or surveys of employees and their managers. Even better, the organization
may be able to examine existing metrics to spot trends that indirectly
demonstrate new job behaviors, such as increased sales, lower operating
costs, lower absenteeism and turnover, etc. Fourth, and most important,
firms should evaluate the results of performance interventions to determine if
the business’ performance and all-important bottom-line have been positively
impacted. A return on investment should be calculated for major initiatives.
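A common way to express the return-on-investment calculation mentioned above is net benefits divided by program costs. The sketch below uses invented figures purely for illustration.

```python
# A common ROI formula for training initiatives:
#   ROI (%) = (monetary benefits - program costs) / program costs * 100
# The benefit and cost figures below are invented for illustration only.
def training_roi(benefits, costs):
    """Return ROI as a percentage of program costs."""
    return (benefits - costs) / costs * 100

benefits = 250_000  # e.g. estimated savings from reduced turnover (assumed)
costs = 100_000     # e.g. design, delivery and trainee time (assumed)

print(f"ROI = {training_roi(benefits, costs):.0f}%")  # prints "ROI = 150%"
```

The arithmetic is trivial; the hard part, as the surrounding discussion makes clear, is isolating benefits actually attributable to training from the many confounding causes of a firm's financial results.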
Increasingly, performance evaluators are questioning the adequacy of
Kirkpatrick’s four-level model. There are two major problems with using the
model when evaluating performance improvement:
HPI Process
Evaluation Process
Initiating Event
The initiating event for HPT intervention is often a business problem
that cries out for a solution. But HPT is not merely meant to be a reactive
approach to business problems. It can be used with equal or perhaps even
greater effect by those who are more proactive in addressing performance
issues. Thus, it is an ideal companion to business and strategic planning
processes and to performance management systems. Whether proactive or
reactive, HPT may be initiated whenever evidence appears to suggest that
human performance is not reaching its full potential.
Opportunity Analysis
Clark prefers the term ‘opportunity’ for the front-end analysis phase
of his model because HPT is ultimately about seizing opportunities to
maximize performance.
(Clark’s model: Opportunity Analysis – performance information, organization
system information, performers information, cultural information; Causal
Analysis – Human System: knowledge → training/information/job aid/practice,
motivation (value, confidence) → coaching; Organization System: system →
barrier removal, culture → adaptation; then Evaluation)
Causal Analysis
In Clark’s model, causal analysis is a process of isolating symptoms of
performance problems and assigning them to one of four possible root
causes: motivation, knowledge, culture or systems. The first two are
characterized as belonging to the human system while the last two belong to
the organizational system. Proper diagnosis is essential since each of these
sub-systems requires different solutions.
Clark suggests starting with the human system first, since it is typically
easier to address. Within the human system, the first priority is to determine
if lack of knowledge is causing or contributing to a performance problem,
because knowledge is prerequisite to performance. Second, the motivation
of performers should be examined, with particular attention to two facets:
value and confidence. Value refers to how much performers are engaged
and committed to the tasks they perform, while confidence refers to their
sense of self-confidence with regard to the tasks at hand and directly affects
the amount of effort they are likely to expend.
Once the human system issues have been analyzed, the
organizational system is then examined. System components like
management, information technology, production, research and
development, sales, customer service, etc. should be analyzed to determine
if any of these are creating obstacles to performance and if so, for ways to
remove these barriers. Cultural information such as values, beliefs and norms
is analyzed to determine whether it supports or undermines performance. Where
organizational culture impedes performance, a change effort to adapt the
culture to performance requirements is recommended.
Implementation
Once a thorough analysis has been performed, a suggested course of
action emerges. Clark suggests the following remedies for common causes
of performance problems:
Evaluation
The final phase of the HPT Intervention model is evaluation. Like the ISPI
model, Clark adopts Kirkpatrick’s four-level evaluation model. He
recommends that each implemented solution have its own evaluation plan to
Summary
Human performance technology, or performance analysis for short,
has taken the world of instructional design by storm. It is not hard to see why
from the examples we have looked at in this chapter. This systems
approach to human performance problems offers a more robust set of tools
to address the myriad underlying causes of performance problems in the
workplace. Increasingly, complex problems defy a simple training solution.
Instead, like the soaring repair costs of the fictional transportation company,
the answers to performance problems are often found in a combination of
training, motivation and rewards, systems and culture change. Those who
can bring such comprehensive solutions to bear on today’s organizational
problems will reap the benefits of improved performance and profitability in
the new economy.
Discussion Questions
and Exercises
1. Compare the HPI model and the ISD (ADDIE) model. What similarities do
you see? What are the key differences?
2. What are the key differences between the Rosenberg & Deterline HPT
model and the Clark model?
1. How would you go about conducting a root cause analysis of this case?
2. What are some likely interventions that you would look at to solve the
business and performance problems?
3. How would you apply the HPI evaluation model to this case? What
would be some key measures you could use to track the business and
performance goals?
their work comes from and where it goes after they are done. By having this
understanding of the work process, and not simply their own set of duties,
employees can see the big picture and can better focus their efforts on what
really matters.
The next step in job analysis is to specify the behaviors and
competencies required to perform the duties contained in the job description.
Behaviors are the observable actions that employees take in the course of
their work. For example, a key behavior for a customer service center
representative is to answer customer phone calls. For a computer
programmer, a key behavior is writing programming code. Competencies
are the underlying capabilities required to perform jobs, and are not always
directly observable. For a customer service representative, a key
competency might be empathy for customers. For a computer programmer,
a key competency might be troubleshooting faulty code. Both of these
competencies are critical to the job, but not easily observed directly. To
establish competencies for jobs, analysts rely on direct observation of work,
interviews with incumbents, especially those recognized as star performers,
and increasingly, on survey research in which experts or job incumbents rate
and rank lists of competencies to determine those most applicable to a
particular job. Competency studies have become particularly popular to
document managerial and professional jobs that do not easily lend
themselves to traditional task analysis (Dubois, 1993). For a corporate
executive, the competencies required to perform the job might include the
following:
• strategic vision
• customer focus
• decisiveness
• communication abilities
• perseverance
• emotional intelligence
• analytical abilities
• credibility
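The survey research described above – experts or incumbents rating and ranking lists of competencies – can be summarized with a short ranking sketch. The raters and their 1–5 importance scores below are hypothetical.

```python
# Hypothetical competency survey: three experts rate each competency on a
# 1-5 importance scale. Sorting by mean rating suggests which competencies
# are most applicable to the job.
from statistics import mean

ratings = {
    "strategic vision": [5, 5, 4],
    "customer focus": [4, 5, 5],
    "decisiveness": [3, 4, 4],
    "perseverance": [3, 3, 4],
}

ranked = sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True)
for rank, (competency, scores) in enumerate(ranked, start=1):
    print(f"{rank}. {competency} (mean {mean(scores):.2f})")
```

In practice the cut-off between "core" and "peripheral" competencies is a judgment call made with subject matter experts, not a purely statistical one.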
The next step in the job analysis process is to define the compensation
plan for the job, including base pay, bonuses, promotional raises, rewards
and non-cash compensation such as health benefits, life insurance, etc.
While a complete treatise on compensation is beyond the scope of this
chapter, a number of good reference works can shed more light on this topic.
Summary
In this chapter we have briefly examined the specialized field of job analysis.
The scope and nature of the field have been defined and the typical process
steps have been described and illustrated with examples from various
professions. The field draws from many disciplines, including industrial
psychology, compensation, benefits, career development, quality assurance
and human resource development. Highly complex job analyses are
generally conducted by a team of specialists from each of these disciplines.
Discussion Questions
and Exercises
1. Using your own job as an example, conduct a brief job analysis covering
the following points:
b. Job Duties
c. Job Competencies
d. Compensation Plan
e. Career Path
How would you go about designing a job analysis for this new position?
Figure 5-2:
Relationships Among Job Elements
Title
Functions
Tasks
Steps
The job title is the most general of the four elements. It provides a
general classification of the job and its role in the organization, but little else.
For example, the job title “PC Support Specialist” tells us that the job involves
specialized knowledge of personal computers and the role is to provide
support and service to end-users. Beyond that, we know little of the job’s
specific duties. Likewise, the job title “Vice-President of Sales and
Marketing” informs us that this is an executive-level position having broad
management responsibility for an organization’s sales and marketing
functions, but little else.
When we turn to the functions of the job, we begin to define the
major parts of the position that require the performance of multiple tasks, or
that represent core responsibilities of the position. For a PC Support
Specialist and a Vice-President of Sales and Marketing, the following
functions might apply:
Table 5-3: Sample Job Functions

PC Support Specialist:
• Provide technical assistance to PC end-users
• Troubleshoot end-user PC problems
• Handle PC software upgrades and enhancements

V.P. Sales and Marketing:
• Develop marketing plans
• Hire and train sales staff
• Handle key accounts
The task level of analysis usually translates nicely into the major topics
and objectives of training programs. To take the PC Support Specialist job
above, the two tasks listed under the function “provide technical assistance
to PC end-users” could be translated into the following course objectives:
1. To answer common PC technical questions posed by end-users
2. To train end-users in the functions and operation of new computer
software programs
These two objectives, in turn, would form the basis for two modules in a
training program for PC Support Specialists.
Finally, the tasks of a job may be further divided into the steps
required to perform each task. Steps may be thought of as the detailed
procedures and processes required to perform a task. These would form the
content of a training program, just as the tasks of the job suggest the
objectives of training.
To illustrate this, the PC Support Specialist task of answering end-user
questions about PCs might be broken down into the following steps:
1. Receive the question (phone, e-mail, in person, etc.)
2. Analyze the question
3. Research the answer
4. Present the answer to the end-user
5. Check for understanding
These five steps would form the detailed content of a module on
answering end-user PC questions. It would also help to define the policies
and procedures of the job, if they did not already exist, and would provide a
uniform methodology for PC Support Specialists to employ in answering
questions. Of course, exceptions to this method might have to be noted for
simple questions that require no research or complex questions that require
Discussion Questions
and Exercises
1. What are the key steps in performing a task analysis?
2. Using your own job as an example, develop a task analysis for your most
important job duty.
How would you go about conducting a task analysis of the major duties of
this position?
Learning Styles
A great deal of research has been done on preferred learning styles
of both adults and children. Educational psychologists have identified three
primary learning preferences: auditory, visual and kinesthetic (Filipczak,
1995). The auditory learner learns best through hearing. These individuals
love to listen to lectures, ‘war stories’ and discussions, finding that they can
identify key points and synthesize information primarily by listening. The
visual learner learns best through seeing. Some researchers suggest that
the majority of Americans are visual learners, and that they enjoy watching
videos, seeing demonstrations and observing first-hand. From this, they can
deduce the information they need and see exactly how to apply it. Finally,
kinesthetic learners tend to be hands on, needing to physically interact with
content in order to learn it. These learners like to be active, and to touch and
manipulate physical objects in a laboratory setting.
Aside from these general characteristics of all learners, researchers
have also investigated the differences between how children and adults
learn. Malcolm Knowles was among the first researchers to recognize that
adult learners were different from children and required a different approach,
which he dubbed andragogy, to distinguish it from pedagogy (Knowles,
1984). He identified the following general characteristics of adult learners:
• self-directing
• motivated by self-interest
• life-centered and pragmatic
• driven to learn by change
• reliant on experience when learning
Knowles believes that trainers and instructional designers can
capitalize on these traits to create more effective training programs. He
offers the following examples of how training can accommodate the learning
styles of adults. To address adults’ need to be self-directing and in control of
their learning, he suggested the use of individual learning contracts in which
learners decide up front what they want to learn and how they wish to
demonstrate mastery of the content. To address adults’ reliance on their rich
reservoir of experience, he suggested a participative classroom setting in
which learners are encouraged to share their experiences with peers. He
also cautioned that when a learner's past experience contradicts the content
of new training, particular attention must be paid to helping the learner
unlearn the old habits before they can successfully master new ones. To
address adults’ desire for life-centered, pragmatic learning, Knowles
suggests designing training that can be immediately applied to the job, so
that adults see the relevance and deepen their learning. Finally, to address
adults’ tendency to become ready to learn when a change occurs in their
lives, he suggests that training programs be timed to coincide with major
Summary
In this chapter, we have examined some of the ways that designers can
analyze the characteristics and needs of learners. We have noted that
research on general learning styles of adults suggests that they learn in
different ways, using different strategies based on personal preferences and
past experience. To accommodate these diverse styles, designers should
use a wide variety of learning activities and teaching strategies, including
multisensory approaches to content. Analysis of specific learning
populations should focus on their prerequisite skills, their motivation to learn
and any diversity issues that may impact learning.
The more that designers know about the target audience for training, the
better they are able to customize the training to suit the unique needs of
learners. This is something that makes a great deal of sense both from the
point of view of designers who want to see successful learning take place,
and from the point of view of business decision makers who want to see
training targeted to maximize its impact and efficiency.
2. How would you describe your own learning style? How does your style
impact the way you like to learn?
Summary
In this chapter on context analysis, we have considered a number of
elements of the learning and the performance environment that impact
training design and delivery. Among the issues we have explored are the
following:
Discussion Questions
and Exercises
1. Using the course you are currently in or the last one you took, conduct a
brief context analysis, identifying the following key context issues:
Group size?
Training facility?
Trainers?
Course frequency?
Delivery costs?
Using the context analysis model presented in this chapter, analyze the key
contextual issues facing this company and identify some possible
alternatives to classroom training delivery.
[Figure: skill gap analysis model: measure existing skills to develop a skill
profile; estimate future skills to develop a vision; then develop a plan to
close the gap between the two.]
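The model's logic can be sketched as a simple computation. The skill names and the 1-5 ratings below are hypothetical, purely for illustration:

```python
# Skill gap analysis sketch: compare current skill ratings against the
# ratings that a future vision of the job requires (1-5 scale, made-up data).
current = {"coaching": 2, "budgeting": 4, "presentation": 3}
required = {"coaching": 4, "budgeting": 4, "presentation": 5}

# The gap is the shortfall per skill; skills already at or above the
# required level show a gap of zero.
gaps = {skill: max(required[skill] - current.get(skill, 0), 0)
        for skill in required}

# A plan to close the gap targets the largest shortfalls first.
plan = sorted(gaps.items(), key=lambda item: item[1], reverse=True)
print(plan)  # [('coaching', 2), ('presentation', 2), ('budgeting', 0)]
```

The same comparison scales up naturally to a survey of many incumbents, with the profile and vision built from averaged ratings.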
[Figure: sample process flow: the customer enters an order on the Internet;
the order is electronically transferred to the shipping department; the
shipping department fills the order.]
Discussion Questions
and Exercises
1. Assume that you have just been offered the job of your dreams.
Conduct a skill gap analysis on yourself to identify the key training that
you will need to succeed in your new position.
How would you design a skill gap analysis for this company? How would
you measure future and current skills? How would you propose to close the
skills gap?
[Table: project management matrix for a training design project,
cross-referencing instructional design elements (needs analysis: training
needs, job tasks, costs/benefits; objectives: sources; instructional
strategies: principles, presentations, demonstrations, discussions;
evaluation: designs, methods) against the design blueprint, budget/schedule,
deliverables and project organization.]
The matrix helps the designer to see which target behaviors go with
which content topics. It also can serve as a checkpoint to ensure that all the
objectives of a course cover the content and behaviors that have been
identified by the training needs assessment. A blank objectives matrix is
included in Appendix Two on p. 282 to assist you in developing program
objectives.
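The matrix's checkpoint role can be sketched in code. In this hypothetical example, each content topic maps to the set of target behaviors that some objective already pairs it with, and the check flags topics the needs assessment identified but no objective yet covers:

```python
# Objectives matrix sketch: rows are content topics, entries are the target
# behaviors some objective pairs with that topic. All names are hypothetical.
matrix = {
    "needs analysis": {"describe", "conduct"},
    "objectives":     {"write"},
    "evaluation":     set(),          # no objective covers this topic yet
}

# Topics identified by the training needs assessment.
required_topics = ["needs analysis", "objectives", "evaluation"]

# Flag any required topic with no associated behaviors.
uncovered = [topic for topic in required_topics if not matrix.get(topic)]
print(uncovered)  # ['evaluation']
```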
Sequencing Objectives
So far, we have considered objectives in isolation. In most training
programs, objectives do not exist in isolation. Instead, there may be several
objectives of differing importance and difficulty being learned. To help
instructional designers sort out the many objectives of a complex training
program, a hierarchy of objectives is sometimes constructed (Mager, 1975).
The hierarchy typically consists of the following elements:
1. Prerequisite Objectives – behaviors that learners must have prior to
beginning a given class, such as basic literacy in English.
2. Enabling Objectives – behaviors that learners must master before they
are able to perform the ultimate behavior being taught.
3. Terminal Objectives - behaviors that learners will be able to demonstrate
at the end of a training program. These are the ultimate goals of the
program.
4. Performance Objectives – behaviors that learners will be able to exhibit
on the job after they return from training.
5. Results Objectives – accomplishments that the organization will be able
to achieve as a result of better-trained employees, including financial
and strategic results.
Classifying Objectives
To classify objectives by type, it is helpful to ask two questions of
any objective after writing it. These are:
1. Why do learners need to be able to do that?
2. What do learners need to know to be able to do that?
The first question helps move up the hierarchy to identify the truly
meaningful skills that comprise terminal objectives. Keep asking this
question until you reach an objective that is indisputably important to
achieve.
The second question helps move down the hierarchy by identifying
prerequisite and enabling skills that must be learned first. Once these skills
have been identified, it is important to determine whether learners possess
them already or not. If they do not, they must be learned prior to teaching
the terminal objective.
From this exercise, we can see that the objectives that emphasize
efficiency, productivity, cost control and empowerment have the greatest
congruence with the company’s values, while those emphasizing higher
costs in the form of salaries and benefits and employee problems like
absenteeism contradict company values. Finally, several objectives related
to teamwork, diversity and management-employee relations are irrelevant to
the company’s values.
Summary
If training is to have the desired outcomes, we must write clear and
measurable training objectives. We start by analyzing the tasks of a job and
describing these as behaviors. The behaviors become the sources for writing
training objectives. A training objective has four components: target behavior,
content, conditions and standards.
Individual learning objectives are typically arranged in a hierarchy
when combined with other objectives. The hierarchy consists of
prerequisites, enabling objectives and terminal objectives. It is important to
sequence objectives properly in order to facilitate learning. Finally,
objectives should be screened against key organizational strategies and
values to be certain that they are congruent with the overall mission of the
organization. Objectives which clash with prevailing values and mores are
unlikely to be achieved without first changing the underlying cultural norms.
Discussion Questions
And Exercises
1. Examine the list of training objectives for a first-line supervisory financial
training program and determine the hierarchy among them by assigning
a level to each objective.
Use the following scale: P = Prerequisite, E = Enabling, T = Terminal,
PF = Performance, R = Results
___ Interpret monthly budget reports.
___ Inform employees of key budgetary issues.
___ Produce high-quality, low cost products for customers.
___ Read a company budget report.
___ Manage department budgets within 5 percent of plan.
Time-Based Estimates
One of the most commonly used techniques requires estimating how
long it will take to design and develop an hour of instruction. This is usually
expressed as a ratio. For example, a common industry rule of thumb for
print-based classroom training is 30 hours of design and development time
for each hour of instruction, or a 30:1 ratio. If this ratio is applied to a
one-day workshop, it yields an estimate of 240 hours (or 30 working days) of
design and development labor to complete a one-day course.
The time ratio method is easy to use and makes sense to both
designers and clients, but it is fraught with inaccuracy. A recent study
published by the American Society for Training and Development’s
Benchmarking Forum reported average design to delivery ratios ranging
from as little as 2:1 to as much as 100:1 for classroom training. With such a
huge range, picking any one ratio as a fixed standard is impossible. Part of
the problem with these reports is that people do not all include the same
things in the estimates. Some people count only design time, some only
development time while others include both design and development, plus
needs assessment time. It is well-known that needs assessments can take
as long as the design and development phase combined for a large, complex
project. Because of the tremendous variability in needs assessment time
requirements, I believe it is better to leave that phase out of the design and
development estimate. In many years of practice, I have observed that
design and development time typically runs in the range of 10-30 hours per
hour of classroom instruction.
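The arithmetic behind these ratio estimates is simple enough to sketch. The function below applies any chosen ratio to a course length; the function name is mine, while the 30:1 rule of thumb and the 10-30 observed range come from the discussion above:

```python
# Design-time estimate from an hours-per-instruction-hour ratio.
def design_hours(instruction_hours, ratio):
    return instruction_hours * ratio

workshop = 8  # a one-day course: eight hours of instruction

# The 30:1 rule of thumb for print-based classroom training:
print(design_hours(workshop, 30))  # 240 hours, i.e. 30 working days

# The observed 10-30 hours-per-hour range gives a band, not a point estimate:
low, high = design_hours(workshop, 10), design_hours(workshop, 30)
print(f"{low}-{high} hours")  # 80-240 hours
```

Quoting clients a band like this, rather than a single ratio, reflects the real variability the ASTD benchmarking data reports.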
For computer-based, video-based or other media-based training
designs, the time estimates escalate dramatically to cover the amount of time
[Figure 10-2: the elements of instruction: trainer, trainee, content and
process.]
Instructional Methods
In examining process more closely, it helps to place instructional
strategies on a continuum from active learning to passive, as the graphic
below illustrates.
On the left side of the continuum are active learning strategies like
trial and error and on-the-job learning that closely approximate the way most
people learn naturally. On the right side are passive learning strategies like
lecture and self-study.

[Continuum, from active to passive: trial & error, simulation, games, role
play, drill & practice, discussion, Q&A, self-study, lecture]
[Figure: instructional sequence: present, practice, evaluate, apply.]
Summary
In this chapter, we have considered the issue of training deliverables
and the related issue of choosing instructional methods that support various
kinds of deliverables. We have also examined the range of deliverables that
trainers create and some ways to estimate the amount of deliverables and
the time required to create them.
In the next chapter, we will delve into training design schedules and
budgets in detail, as part of the broader planning process for instructional
design.
Discussion Questions
and Exercises
1. Compare and contrast the following instructional methods. Think of an
example of a good application for each method from your personal
experience.
Lecture vs. Self-study
Discussion vs. Role play
Simulation vs. Game
Trial and Error vs. Practice drills
[Table fragments from the budgeting discussion: a development estimate for
graphics/artwork (10 per hour); cost elements including conference expenses;
and evaluation measures: participant reactions, participant learning,
on-the-job usefulness and business results.]
Summary
In this chapter, we have looked at the elements of budgeting and
scheduling design projects. In the budgeting area, we examined the various
cost elements of training budgets, and described techniques to assemble a
preliminary proposal budget and a final working budget. In the scheduling
area, we discussed techniques for assembling a schedule, ways to cut
design time and to use project management software to aid the process. We
also presented advice on estimating time requirements for design projects,
including the factors that increase and decrease design time.
In the next chapter, we will explore training design project
management in greater detail.
Discussion Questions
and Exercises
1. Describe your experience in estimating training schedules and budgets.
Which techniques presented in this chapter would be most useful?
[Figure: the project management triangle: project goals balanced against
cost, time and quality.]
Figure 12-2:
Integrated Project Planning &
Management Cycle

[Figure: a four-phase cycle revolving around central project goals:
1) Planning, Appraisal & Design (identification & formulation; feasibility
& appraisal; design); 2) Selection, Approval & Activation; 3) Implementation,
Supervision & Control, and Completion & Handover; 4) Evaluation & Follow-up,
with refinement of policy & planning. Feedback and interaction flow
throughout the cycle.]
[Figure: sample project schedule showing training design tasks
(registration/logistics, design blueprint, materials development, client
approval) plotted against a timeline, ending when implementation begins.]
Summary
In this chapter, we have examined a model for managing training
projects called the Integrated Project Planning and Management Cycle. We
have discussed the four phases of project management in detail and given
tips and examples regarding how to use the model to plan instructional
design projects. Among the useful tools introduced in this chapter are
project management software, Gantt and CPM charts, and budget and
scheduling tools. In the next chapter, we will explore the role of design
blueprints and prototypes in instructional design projects.
Discussion Questions
and Exercises
1. How would you work with a client to define priorities on a training design
project using the project management triangle?
2. Which of the project management steps do you find the most difficult?
Why?
3. Compare and contrast the Gantt chart and the CPM chart. What are the
advantages and disadvantages of each?
REASONS: New trainers need to know how to design and deliver effective
training programs
As the example above shows, the task analysis provides the content
and structure of the course. This information is valuable to include in the
design blueprint.
The remaining needs assessment data may be briefly summarized in
the design blueprint. The Performance Analysis findings should be restated,
especially those that indicate why training is needed. The Context Analysis
information that is useful to include covers how the course will be
implemented, including where it will be held, who will be presenting, and
what equipment and materials will be used. The Learner Analysis
information should cover how many people will be trained, their job titles,
existing knowledge and skill and preferred learning styles. This background
information will help clients see the rationale for the proposed training
program.
Once the preliminary information has been summarized, the
blueprint should then move into a detailed description of the course, starting
with the proposed objectives. The following sample blueprint for a train the
trainer course provides examples of the blueprint’s major elements.
Figure 13-2: Train the Trainer Design Blueprint
Training Objectives Module (4 hours)
Objectives:
1. To describe the four components of a training objective.
2. To write learning objectives for training programs.
Prerequisites:
Complete Training Needs Assessment module.
Content:
Instructional Strategy:
Present concept of a learning objective using lecture and demonstration.
Present principle of how to write an effective learning objective.
Analyze examples of effective and ineffective learning objectives.
Present principles for sequencing multiple learning objectives.
Key Points:
The four components of a learning objective are: target behavior,
content, conditions, standards. Learning objectives should be written
from learners’ point of view, and should use action verbs for behaviors
and specific nouns for content. Conditions and standards are optional.
An effective objective should clearly communicate what the learner will
be able to do after instruction and should be measurable, so there is a
Presentation:
Delivery Method:
Concepts and principles will be taught using interactive lecture and
demonstration. Participants will read along in their manual and will be
asked to analyze various examples of objectives to identify their
components and determine whether they are effective or not.
Media:
Concepts and principles will be summarized on overheads. Examples
will be presented in the student manual and presented on a flip
chart/white board for class discussion.
Practice:
Classroom Exercises:
Students will write sample objectives. Students will also identify
problems with sample objectives and edit them to improve them.
Tests:
Students will be asked to prepare the objectives for a training program
they teach and turn these in for grading. They will also be required to
take a final examination in the course, including questions about
objectives. They must pass this test with 80% or better.
Deliverables:
This module includes a participant manual, a leader’s guide, and
overheads. These three items will be developed, published and
delivered prior to the start of instruction.
Evaluation:
Students will be evaluated based on classroom participation, a
homework assignment and their performance on the final exam. Those
who successfully complete all the modules will receive a certificate.
Those who do not will be allowed to retake the modules they need and
try the exam again.
Rapid Prototyping
A technique that is increasingly popular, especially for multimedia
and other costly training designs, is rapid prototyping. This means preparing
rough drafts of training content quickly for the purpose of client reviews and
learner input so as to accelerate development. Typically, content is
formatted using a template, a shell or a storyboard so that the client can get
a sense of what the training will look like without all the expense of adding
graphics, color, management systems, leader’s guides and the like. For
print-based courses, the use of word processing templates allows content to
be quickly written in a format similar to what the final product will be. Often,
the prototype will consist of a single lesson as an example, with the design
blueprint serving as the source for other module content. For computer-
based training, the prototype may be a paper-based storyboard of the
proposed computer screens with content sketched out, but other graphic,
audio, video and interactive elements left out. An alternative approach is to
create a prototype using existing computer-based training templates.
These prototypes can be reviewed along with the design blueprint
and can actually be tried out with one or two learners to get a sense of
whether they will work for the audience. Feedback can be quickly
incorporated into the final design and help guide development. The use of
rapid prototyping in design engineering has reduced design time
dramatically, in some cases cutting it by half or more. In instructional
design projects, rapid prototyping saves time mainly on
large-scale projects or those involving multimedia, where the development
and production costs are very high.
The key to using rapid prototyping is to prepare materials only to the
point where they can be tried out, but are still easily modified if changes are
needed. Clients also need to understand that they are not viewing a finished
product, but rather a work in progress, so that they have the right
expectations about what they are reviewing.
Video/Multimedia Development
If the training program will use video or multimedia, it is particularly
important to use organizing tools to plan development carefully and avoid
wasted effort. For video, the standard development process is to first write a
treatment that describes the settings, plot, characters and structure of the
proposed video. A general treatment should be generated at the design
blueprint stage, but a more detailed treatment should be written at the outset
of the development phase. Once this has been approved and reviewed,
scriptwriters can then begin writing the script while cinematographers
develop a storyboard of the video. These can then be reviewed and
approved before moving into video production, where the costs of video
really begin to mount.
Once video production has begun, it is helpful to examine rough cuts
of the video as they are shot. These may simply be outtakes of the day’s
[Figure: excerpt of an e-learning development model: the Analyze phase
(identify platform issues: hardware requirements, software requirements,
support strategy, migration strategy), followed by the Design phase.]
Summary
In this chapter, you have examined the issue of creating draft
materials based on design blueprints. You have learned about why draft
materials are important, what they usually entail and how to work with
various types of materials developers to produce draft materials. Although
much of this discussion assumes that you are working with various experts in
a large instructional design project team, the same rules and techniques
apply when designers are asked to produce materials themselves. In the
latter case, the task at hand may be more difficult for one person to
accomplish, but this difficulty is offset by the relative ease with which a single
person or small project team communicates information about the project
and actualizes the designer’s original intent.
The next chapter deals with the specialized work of creating tests
and other assessment materials for training programs.
Discussion Questions
and Exercises
1. What do you think are the major challenges of working with materials
developers, especially those who are not trainers?
2. What are some techniques instructional designers can use when working
with print or multimedia developers?
3. How does e-learning design differ from classroom training design?
Types of Tests
Because tests are used for so many different purposes and the
consequences of their use can be potentially risky, different types of tests
have emerged over time for these differing purposes. The two most
commonly used tests are norm-referenced and criterion-referenced tests.
Each has its unique characteristics and uses. Norm-referenced tests are
used to determine the relative status of individuals with regard to a
particular body of knowledge. These tests sample a wide range of
knowledge and then spread out the individuals taking them, so that the
performance of the best and the worst can be easily identified. Such tests
are mainly used for selection of candidates or ranking of performers. In
these cases, the intent is to compare all the test-takers against each other
and identify those who are at the top. Norm-referenced tests are mainly
used in business for applicant screening and promotions, but they are widely
used in education for things like measuring public school pupil performance,
selecting candidates for special education programs, like gifted or remedial
education, and college admissions.
Norm-referenced tests have been around for nearly 100 years and
have the benefit of an extensive history of statistical validation. Some of the
most famous tests, such as the IQ, SAT and GRE, are norm-referenced.
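Norm-referenced scoring, as described, reports where each test-taker stands relative to the group rather than against a fixed standard. A minimal percentile-rank sketch, using hypothetical scores:

```python
# Percentile rank: the share of the group scoring below a given test-taker.
scores = [52, 61, 68, 74, 74, 80, 85, 91]  # hypothetical group of eight

def percentile_rank(score, group):
    below = sum(1 for s in group if s < score)
    return 100 * below / len(group)

print(percentile_rank(85, scores))  # 75.0: outscored six of eight test-takers
```

A criterion-referenced test would instead compare each score against a fixed mastery cutoff, ignoring how the rest of the group performed.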
Over the past 30 years, a different sort of test has emerged as a
useful tool for training: the criterion-referenced test. Unlike norm-referenced
tests, criterion-referenced tests determine the absolute status of individuals
with regard to the criterion being measured. The measurement focuses on
whether an individual has mastered a certain performance or skill level,
regardless of relative performance against other test takers. Criterion-
referenced tests are most useful for the kinds of measurement that trainers
are typically interested in, such as whether individuals have mastered certain
necessary skills, whether instruction is effective and whether learning has
occurred in training programs. For all these purposes, criterion-referenced
Armed with the test specifications, test developers can then write
test items that are reliable and valid measures of the learning objectives for
the test.
Besides a good set of test specifications, a number of other test
construction principles help test writers to develop good items. These
guidelines are summarized in the table below, which may be used as a job
aid for test developers.
In this example, the median is 80%; with 10 scores, the median is the
average of the fifth and sixth scores in the ordered list.
Finally, the last average is the mode, which is the most frequently
occurring score in a list. In the example above, the mode would also be
80%, since it is the only number that occurs twice. If no number occurred
twice, the list would not have a mode.
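These three averages can be computed directly with Python's standard statistics module. The score list below is hypothetical (the chapter's original list is not reproduced here), chosen so that the median and mode both come out to 80:

```python
import statistics

# Ten hypothetical test scores, already in ascending order.
scores = [55, 60, 65, 70, 80, 80, 85, 90, 95, 100]

print(statistics.mean(scores))    # 78, the arithmetic mean
print(statistics.median(scores))  # 80.0, average of the 5th and 6th scores
print(statistics.mode(scores))    # 80, the only score occurring twice
```

Note that `statistics.mode` raises an error in very old Python versions if no value repeats; since 3.8 it returns the first of the most common values.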
Of the three averages, the mean is the most accurate and most
useful measure of central tendency. The other two averages, median and mode, have
more specialized uses, primarily with very large data sets. One additional
phenomenon worth noting is that in perfectly distributed sets of data, or those
that comprise a true bell curve, the mean, median and mode are all identical,
since the three averages would all coalesce around the same mid-point if
scores were evenly distributed on both sides of the average. Thus, one
Performance Tests
Another category of tests that is frequently employed to measure
learning outcomes is the performance, or hands-on, test. This is defined as
any test that requires the actual performance of job skills under observation.
It is most often used with psychomotor skills like typing or repairing
equipment. The observer may be an instructor, supervisor, experienced
worker or outside expert. After observing the trainee perform, the
performance is rated against the standards set for the task. Those who meet
or exceed the standards pass the test, while those who fall short must retake
the exam or the training.
To assure that performance tests are reliable and valid, they must be
carefully designed so that the task to be performed is clear and the scoring is
unbiased and accurate. One technique which helps is to create performance
test specifications and scoring checklists like the ones in Appendix Two, p.
Summary
In this chapter, you have learned about the role of testing in training
design. You have been introduced to two different types of tests,
norm-referenced and criterion-referenced, and examined their role in training and
selection. You have also learned about two powerful concepts for
determining the quality of tests: reliability and validity, and ways to ensure a
test possesses both. You also examined how to construct good tests and
ways of analyzing the results using both descriptive and inferential statistics.
Now we are ready to move on to another key issue of training
development – quality control.
Discussion Questions
and Exercises
1. Compare and contrast norm-referenced and criterion-referenced
tests. Give an example of when you might use each type.
How would you develop a testing and assessment plan for this
certification program?
Summary
This chapter has briefly described the quality control function of
training design projects, with a focus on preventing problems by using
effective control techniques, including the use of a systematic instructional
design process. Responsibilities for quality control were described and
suggestions on how to manage design projects for quality were presented.
When problems do arise, advice on how to quickly take effective corrective
action focused on the use of a Training Design Corrective Action
process.
Once the quality of the training design process and products has
been assured, training design can move into full-scale production with
confidence, knowing that the right things have been done right the first time.
Discussion Questions
and Exercises
1. Who do you think should be held responsible for the quality of
training designs? Why?
How would you address this client’s concerns and get the project
back on track?
Reproduction Issues
Once the final masters have been produced and approved, the
project moves into a phase that often signals the beginning of the end of the
instructional designer’s involvement. This is when hand-offs to
reprographics, implementers and clients occur and when the risk of a fumble
is extremely high. To minimize problems, designers need to stay involved
with the project until all hand-offs have taken place and the distribution and
reproduction systems are established and running smoothly.
It makes sense to involve reproduction people in the orientation and
discussions you hold with producers. This way, they will know what the
project’s goals are, and will anticipate the support that they will be asked to
provide for the project. Once oriented, the next issues to tackle are the
method, schedule and budget for reproduction. For print materials, common
reproduction methods include photocopy and offset printing. Binding
methods include: three-hole punch and three-ring notebook, comb-bind,
velo-bind, staple, saddle stitch, etc. Other print material issues include use of
tabs and dividers, assembly of student and trainer materials, overheads or
flip charts, single versus double-sided reproduction, print and paper colors,
covers, collation, and packaging. For audio-visual and multimedia materials,
reproduction methods include: tape duplication, color photocopies, digital
copy, diskette, CD-ROM, DVD (digital video disc) reproduction, etc. Other
issues to be considered are: packaging, labels, storage requirements,
installation requirements, equipment and platform requirements and
technical support.
Once these issues have been discussed and a workable solution
has been agreed upon, those responsible for reproduction will be able to
quote a cost and develop a schedule to meet the requirements. The
schedule and budget will need to be carefully reviewed by the project
manager and the client to determine if it fits within the needs and parameters
of the project. If not, additional negotiation will be required to resolve any
outstanding issues.
Distribution Systems
Once the reproduction system has been identified and finalized, the
next issue to handle is the distribution system for materials, including
storage, shipping, and scheduling additional reproduction when needed.
These logistic issues are often thorny, since they typically involve more than
Maintenance Systems
An ancillary issue that must be addressed along with distribution is
the maintenance system for training materials that are anticipated to have a
long shelf-life. Maintenance issues refer not simply to the storage of existing
materials, but to the plan for updating materials as needed and maintaining
the instructional integrity of the training program, including on-going train the
trainer sessions for new instructors, conversion of multimedia materials to
new platforms and updated software systems, and the incorporation of new
training programs in the company’s curriculum.
Maintenance issues can become a whole separate function of the
instructional design department, especially for programs with volatile content.
For example, one large multinational company had two staff positions solely
dedicated to keeping a library of customer service and customer information
system self-study materials constantly updated and maintained. This is an
extreme case, since the software upon which the training program was
based underwent revision every year, but it illustrates the attention that
material maintenance deserves.
Among the maintenance issues that training designers must handle
are the following:
• how often will materials be updated?
Summary
In this chapter, you have learned about the key issues involved in
full-scale reproduction of materials, including moving from development to
full-scale production, working with production experts, creating a viable
reproduction, storage and distribution system and deciding how to handle
long-term course maintenance.
Discussion Questions
and Exercises
1. What are some key issues to attend to during full-scale production?
2. How can instructional designers work effectively with developers and
clients to accelerate production?
Figure 18-1:
Train the Trainer
[The figure shows two parallel delivery sequences: Orient, Present, Practice, Evaluate, Apply; and Set Up, Orient, Present, Practice, Evaluate, Apply.]
Summary
This chapter has discussed how to prepare trainers to deliver
training programs effectively. Both foundational and content-specific Train
the Trainer programs have been described, and specific instructional
strategies for conducting Train the Trainer sessions have been provided.
Finally, information on using Instructor Guides and lesson plans has been
presented.
Now it is time for the moment of truth – the implementation of a
newly-designed training program. This will be considered in the next two
chapters.
Prepare a Train the Trainer plan for this project, using the techniques
discussed in this chapter.
Orienting Learners
The first phase of training delivery is typically referred to as the
introduction, but I prefer the term orientation because it is a more accurate
description of the purpose of this phase. Adult learning psychologists tell us
that adults learn best when they are ready to learn (Knowles, 1984b).
Readiness is a function of need. That is, adults learn best when they sense
they need to, in order to make a change in their lives, to improve themselves,
to prepare for new challenges, etc. Thus, the key component of any training
introduction should be creating a need to learn in the mind of the learner.
This can be accomplished in several ways, but all rely to some extent
on motivating the learner. Adults are motivated by both extrinsic and intrinsic
rewards. Extrinsic rewards include the following:
• promotion
• pay raise/ money
• recognition
• peer approval
• honors
• gifts and other tangible items
Orient
• Motivate
• Assess
• Prepare
Present
• Demonstrate
• Explain
Practice
• Simulate
• Trial and Error
Evaluate
• Observe
• Feedback
• Coach
Apply
• Review
• Performance
• Skill Transfer
Once the role play is over, the learning really begins. This is when
the trainer should debrief the role play and lead a discussion about what was
learned from it. Failure to follow up role plays with discussion is one of the
biggest weaknesses of role plays. Too often, participants have a few laughs
playing some fictional character and then the class moves on, without any clear
idea of why the role play was conducted or what it means to participants’ real
life jobs. The debriefing portion of the discussion should focus on how
participants felt about their experience and how they would assess their
performance. Getting participants to share this information helps them
reflect on their own learning and recognize personal insights that might
otherwise be missed. Once everyone has had a chance to express their
feelings and personal insights, discussion should then turn to the key issues
and learning points raised by the role play. This is where the link to the
content of the lesson should occur and where participants begin to see how
to take what they learned in the role play and apply it to their own situation.
Some of this discussion can take place in small groups, especially for larger
classes, but at least a summary of the discussion ought to be conducted with
the whole group to avoid the tunnel vision syndrome mentioned earlier as a
drawback of role plays. If a particular group had an unsuccessful role play or
got themselves off on a tangent, the group discussion can help refocus them
on the key learning points that the role play was designed to accomplish and
integrate their experiences with the rest of the class.
Additional training delivery techniques are included in Appendix
Two, p. 302.
Summary
In this chapter, we have examined the key skills and techniques of
successful classroom trainers. These include: a basic model for classroom
lessons and strategies for using demonstrations, discussions, case studies
and role plays to enrich the learning process. With this foundation and lots of
practice, you will be ready to deliver training with impact. For more in-depth
treatment of classroom facilitation, consult the following authors in the
bibliography in Appendix One, p. AP-00 (Baird, Schnier and Laird, 1984;
Bentley, 1994; Craig, 1996; Knowles, 1984; Smith and Delahaye, 1987; Pike,
1997).
Even if your role is only to design training, it is important to
appreciate how training programs are delivered in classroom settings in
order to prepare training programs that are easy and effective to deliver.
In the next chapter, we will consider the growing and exciting world
of learning outside the classroom, a world that will play a dominating role in
the training profession in the years to come.
Set Up
• Facilities
• Hardware
• Software
• Tech. Support
Orient
• Market
• Motivate
• Assess
Present
• Demonstrate
• Simulate
• Inform
Practice
• Drill
• Trial & Error
• Activity
Evaluate
• Test
• Feedback
• Remediation
• Report
Apply
• EPSS
• OJT
Summary
In this chapter, we have briefly examined the major issues that
trainers face in implementing non-classroom delivery systems such as those
that use learning technology or structured on-the-job training. We have seen
that the use of these alternative delivery systems is increasing due to their
lower overall cost and better on-the-job results.
[Figure: Training and work environment evaluation framework.
Training Environment: Reactions (Learner, Client) and Learning (Learner, Organization).
Work Environment: Job Behavior (Learner, Organization) and Results (Performance, Financial, Strategic).
T = Training, M = Measurement; the timeline runs from T to M.]
From this table, we can see that the number of accidents peaked in
June, just before the training began. In the following month, as training was
rolled out throughout the company, accidents fell by two, both of which were in
the category that the training targeted. Over the next four months, accidents
fell every month, led by a sharp decline in lifting and falling accidents. The fact
that everyone had training on these two safety areas appears to be the cause
of the drop. When the Training Manager shared this information with the
Safety Manager and the Vice-President of Human Resources, they agreed that
the training had a positive effect on accident rates and rewarded the training
department with a bonus.
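The before-and-after comparison described here can be sketched with hypothetical monthly accident counts; the figures below are illustrative, not the company's actual data.

```python
from statistics import mean

# Hypothetical monthly accident counts (illustrative only).
# Training rolled out after June, so July onward is post-training.
pre_training = [8, 9, 10, 11, 12, 14]   # January through June
post_training = [12, 10, 9, 7, 6]       # July through November

# Average monthly accidents before versus after the training rollout.
drop = mean(pre_training) - mean(post_training)
print(round(drop, 2))
```

A simple difference of means like this is only suggestive; the interrupted time series techniques mentioned above are needed to rule out trends and seasonal effects.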
In Chapter 24, we will learn additional analytical techniques that can
be used with interrupted time series designs to gain even greater precision
about training’s impact.
Figure 21-10:
[The training process cycle: Analyze, Design/Develop, Implement, Evaluate.]
Summary
In this chapter, we have examined the role that evaluation plays in
improving the instructional design and training process and in demonstrating
the results of training. We have examined five common evaluation design
models, discussed when to evaluate training programs and how to write
evaluation reports that communicate clearly to decision makers.
Now that you understand the vital role evaluation plays in the training
process, we will turn to specific techniques for conducting various levels of
evaluation, starting with reactions and learning in the next chapter.
Discussion Questions
and Exercises
1. How does Kirkpatrick’s model help us evaluate training programs? What
does it leave out?
Evaluating Learning
A far more important task than evaluating reactions to training is
determining the amount of learning. Since learning is one of the key results
promised by training, and the only one completely under the control of the
training function, it is imperative that the majority of evaluation resources be
devoted to learning evaluation, not reactions.
You have already learned about the role of testing for individual
assessment in Chapter 15. Aside from determining whether individuals are
learning the objectives of training, it is also important to establish whether
groups of learners are mastering objectives and to use this information both to
improve the training process and to demonstrate the learning results of
training to clients and stakeholders.
Formal evaluations of learning rely on five primary methods:
1. Standardized, published tests
2. Criterion-referenced tests
3. Learning contracts/action learning
To illustrate this, let’s assume that you are asked to evaluate learning
in an introductory computer database course. Learners must pass this course
before they are allowed to assume a position as customer service
representative, since the reps use the database to look up customer account
information and provide service to customers. A performance test is created
around the following course objectives:
1. Access a customer’s account information.
2. Make a change to a customer’s basic account information.
3. Process a customer payment.
4. Print out a customer bill.
The test consists of four tasks, one each for the objectives listed
above. To pass the course, participants must achieve a passing score of 70
percent or more on each of the tasks.
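The mastery rule for this example can be expressed directly in code; the score lists below are hypothetical.

```python
# Passing standard from the example: 70 percent or more on every task.
PASSING_SCORE = 70

def passes_course(task_scores):
    """Return True only if every task score meets the 70 percent standard."""
    return all(score >= PASSING_SCORE for score in task_scores)

# Hypothetical participants, one score per task in the order listed above.
print(passes_course([85, 72, 90, 70]))  # every task at or above 70
print(passes_course([85, 72, 65, 70]))  # task three falls short
```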
To evaluate learning, 20 participants took a pre-test before starting
training. The mean scores for each of the tasks are listed below:
What do these results tell us? The first row shows that learners made
gains on all four tasks, ranging from a high of 88% on task two to a low of 44%
on task three. Notice that the percentage gain is a function of where learners
started and where they ended up. When pre-test scores are very low, as they
were for task two, learners are more likely to make large percentage
increases, whereas when pre-test scores are fairly high, as they were on task
one, the percentage increase is necessarily lower.
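One common way to express this kind of gain is the relative increase over the pre-test mean; the pre/post figures below are hypothetical, since the score table itself is not reproduced here.

```python
def percent_gain(pre_mean, post_mean):
    """Relative gain over the pre-test mean, expressed in percent."""
    return (post_mean - pre_mean) / pre_mean * 100

# Hypothetical means: a low starting score produces a large relative
# gain even when the absolute improvement is similar.
print(round(percent_gain(40, 75), 1))
print(round(percent_gain(70, 85), 1))
```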
Regarding the statistical t-test, the raw t scores and probability for
each task indicate that statistically significant learning occurred for at least
three of the four tasks. We would likely question the learning gains for task
three, since the probability of these occurring by chance alone is .10 or ten
percent. That’s a pretty high chance, and it calls into question whether
significant learning really occurred on this task. Typically, evaluators look for
probability scores of .05 or less, indicating that the chances of the difference being
due to random error are less than 5 percent.
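The t statistic behind these probability figures can be sketched for paired pre/post data; the gain scores are hypothetical, and 2.776 is the standard two-tailed .05 critical value for 4 degrees of freedom.

```python
import math
from statistics import mean, stdev

# Hypothetical gain scores (post-test minus pre-test) for five learners.
gains = [10, 12, 8, 10, 10]

n = len(gains)
# Paired t statistic: mean gain divided by its standard error.
t = mean(gains) / (stdev(gains) / math.sqrt(n))

# Two-tailed critical value for alpha = .05 with n - 1 = 4 degrees of freedom.
T_CRITICAL = 2.776

print(round(t, 2), t > T_CRITICAL)
```

In practice a statistics package reports the exact probability rather than a table comparison, but the decision rule is the same: gains are called significant only when chance alone is an unlikely explanation.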
Finally, if we look at the percentage of learners achieving the 70
percent mastery level, we see that learners did very well on tasks one and
four, but much worse on tasks two and three. Task three appears to be a real
concern, since less than half the group achieved mastery on this, even after
attending the training.
To interpret these findings, we would need to reexamine the original
instructional design and the way it was delivered to discover why participants
performed more poorly on task two and especially task three. We might find
that the training materials were weak or confusing in these two areas, or that
the instructor’s presentation and demonstration was flawed. We might simply
discover that these tasks are so much more difficult than tasks one and four that
additional practice time needs to be devoted to them before learners achieve
mastery. Whatever the cause of the deficiency, the learning data we have
evaluated has pointed us in some likely directions to search for continuous
improvement. This is one of the key uses of learning evaluation data.
A second key use of the data would be to inform the managers of
these learners that their performance on tasks one and four is excellent, but
they will need additional coaching and practice to fully master tasks two and
three. Alerting managers of this situation will help them prepare to support
learning transfer on the job and focus their efforts where they will do the most good.
Summary
In this chapter, we have examined how to conduct evaluations based
on reactions to training and participant learning. We have explored some
useful techniques to measure these variables and to use the information to
improve the training process and communicate to clients and stakeholders
about the results. Although these are important issues to evaluate, especially
learning, we cannot stop here. Instead, we must press ahead and look at how
to evaluate changes in job behavior and bottom-line results in order to
determine the full impact of training. We will start with behavior change in the
next chapter.
Discussion Questions
and Exercises
2. What are three ways to analyze test data? Compare and contrast each
method.
Notice how the action plan gets the learner to think very concretely
about what they have just learned and how they can use it to improve their
performance. Some of this may seem very obvious to a trainer, but such is not
the case for most learners, who are not experts at learning and who often are
clueless about how they will go about using what they have learned. After
completing this exercise, the learner has a specific plan of action that is
realistic and definite. With the right climate and support back on the job, this
learner is much more likely to fully apply what she learned in class than
someone who simply shows up back at the job with no fixed plan to use newly
learned skills.
The other key action that trainers can take to support learning transfer
to the job is to make use of job aids and other support systems that reduce the
difficulty of applying new skills on the job. As we discussed in the
development section of this book, job aids are any tools or reference materials
that can be used while performing a task to make it easier. A job
aid can be as simple as a list of commands and their functions, or even a list of
frequently called telephone numbers. Job aids can also be highly complex
flowcharts, tables, matrices and reference materials that are used in
conjunction with performing a job. In the example above, the learner referred
to two job aids: the Inquiry Matrix and the Customer Complaint Resolution Job
Aid. Both of these are summaries of procedures learned in class, arranged in
a useful way for performing tasks. They were both summarized on a single
sheet of paper and then laminated for durability so that they could be used at
the customer representative’s desk while talking on the telephone with
customers.
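The simplest job aid the text mentions, a list of commands and their functions, amounts to a lookup table; the commands below are hypothetical, not the utility's actual system.

```python
# Hypothetical command reference job aid for a customer database.
job_aid = {
    "LOOKUP": "Display a customer's account information",
    "CHANGE": "Edit basic account information",
    "PAY":    "Process a customer payment",
    "BILL":   "Print a customer bill",
}

def describe(command):
    """Return the plain-language description a representative would look up."""
    return job_aid.get(command, "Command not on the job aid")

print(describe("PAY"))
```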
[Survey results table; each row shows pre- and post-training mean ratings in pairs across the rater groups surveyed:]
Communicates effectively with employees: 3.5/4.5, 2.4/3.0, 2.1/2.9, 3.0/3.3, 2.8/3.4
Establishes clear performance expectations and goals: 2.7/4.1, 1.9/2.7, 3.1/3.9, 3.5/3.6, 2.8/3.6
Provides coaching and on-the-job training: 3.2/3.8, 3.0/3.3, 2.5/2.9, 3.7/3.7, 3.1/3.4
The results of this survey were further analyzed using Analysis of
Variance, a statistical technique for examining data involving two or more
groups or variables to determine if any differences are statistically significant.
Follow-up statistical tests allowed evaluators to pinpoint which group
differences were statistically significant. The results of the analysis showed that the post-
training survey results were significantly higher than the pre-test for all groups
surveyed as well as for the overall group average. Participants made
statistically significant progress in eight of the ten objectives measured.
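The logic of the analysis of variance can be sketched by hand; the three groups of ratings below are hypothetical and much smaller than a real evaluation sample.

```python
from statistics import mean

# Hypothetical ratings from three groups of raters (illustrative only).
groups = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]

grand_mean = mean(x for g in groups for x in g)

# Between-group sum of squares, each group weighted by its size.
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
df_between = len(groups) - 1

# Within-group sum of squares.
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
df_within = sum(len(g) for g in groups) - len(groups)

# F is the ratio of between-group to within-group variance; a large F,
# checked against an F table, indicates a statistically significant difference.
f_ratio = (ss_between / df_between) / (ss_within / df_within)
print(round(f_ratio, 2))
```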
This information was shared with the training staff responsible for the
supervisor training program, the participants, their managers and senior
management. As a result of this evaluation study, the training department won
approval to expand the supervisor training program to other departments at
the company and to make it mandatory for all newly-promoted supervisors.
The training department also used the data to improve the sections of the
course devoted to two objectives that participants had not demonstrated
significant progress in achieving on the job.
The final example uses a different approach to measuring job behavior
change – direct work output measures. An electronics manufacturer with a
polyglot workforce was experiencing quality and communications problems in
its factory due to the large immigrant workforce and low level of basic skills
among some of its assembly workers. To combat these problems, the
company engaged an outside vendor to provide custom basic skills and
Vocational English as a Second Language (VESL) courses for 60 of its hourly
assemblers, including those who had the lowest communication skills in
English. The 18-week, 72-hour program focused on teaching the vocabulary,
language and mathematics used in the factory. To evaluate the behavior
change of employees enrolled in the course, the training department used the
factory’s own computerized labor cost and accounting system data.
[Figure: Monthly productivity of the training group, January through December, with the annual average.]
Summary
In this chapter, we have examined evaluation methods for skill transfer
and job behavior change. You have learned why it is important to extend
evaluation to the work environment, ways that training can support skill
transfer through job aids and electronic performance support systems and
methods for evaluating behavior change. You also examined three real-world
examples of level three evaluation that illustrate innovative ways to measure
skill transfer.
In the next chapter, you will learn about one more reason to evaluate
skill transfer, and that is to be able to measure the business results of training.
Discussion Questions
and Exercises
1. What are some ways that training can promote skill transfer to the job?
2. What are some data collection methods for level three evaluation?
How would you create a level 3 evaluation plan for this project? What
are some likely productivity measures you could use?
Figure 24-1:
Bottom-Line Impact of Training
[The figure shows training producing bottom-line impact at successive levels: Cost Avoidance, Savings, Revenues, Growth, and Strategic.]
In this case, the training returned $7.42 in benefits for every dollar
invested, a huge ROI. Typically, ROI from training runs in the range of 100 to
400 percent, but reports of ROI as high as 2,000 percent have appeared in the
literature.
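The arithmetic that links a benefit-cost ratio to a percentage ROI can be shown with figures chosen to reproduce the $7.42-per-dollar result; the dollar amounts themselves are hypothetical.

```python
# Hypothetical totals chosen to match the $7.42-per-dollar ratio in the text.
benefits = 742_000   # total measured benefits of the training
costs = 100_000      # fully loaded cost of the training

bcr = benefits / costs  # benefit-cost ratio: dollars returned per dollar spent

# ROI expresses the net return as a percentage of cost.
roi_percent = (benefits - costs) / costs * 100

print(round(bcr, 2), round(roi_percent))
```

A $7.42 benefit-cost ratio therefore corresponds to an ROI of roughly 642 percent, well above the typical 100 to 400 percent range the text cites.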
Reporting training results in these bottom-line terms has a powerful
effect on executives and line managers. They not only see clearly, sometimes
for the first time, how powerful investments in human capital really are, but
they also gain new respect for the training function and its business acumen.
Trainers who can regularly report these kinds of results, even on a small
fraction of their total offerings, are bound to enjoy greater management
support for their efforts.
Figure 24-11:
Training Scorecard Results
Training Activity: % Payroll (US Avg. = 1.9% ‡); $/Employee/year (US Avg. = $569 ‡); Hrs./Empl./year (US Avg. = 36 ‡).
Training Efficiency: Cost/Hour (US Avg. = $28 ‡); % Billable Hrs. (US Avg. = 70% †).
Training Results: % Positive Course Ratings (Avg. = 91% †); % Learning Gain (Avg. = 45% †); % Behavior Change (Avg. = 25% †); Revenues/Employee (Avg. = $100K †); Market/Book (Avg. = 10:1 †).
Discussion Questions
and Exercises
1. What kinds of results does training produce?
2. Describe the process for determining bottom-line results from training.
3. Describe the method of calculating ROI from training.
4. Calculate a training balanced scorecard for your training organization.
Compared to the U.S. average, how is your organization doing? In what
areas should you make improvements?
Calculate the ROI of this training, assuming that all results and costs
are included in the table.
Let’s examine each of these best practices in more detail to see how
they impact training design. The first two involve the analysis phase of
training, which has grown enormously over the past decade. Traditional
instructional design focused almost exclusively on behavioral job task analysis
to determine the need for training and the specific objectives to be taught.
Today, needs analysis has expanded to include consideration of
competencies, as well as tasks, especially for knowledge or management
work that cannot be easily compartmentalized using traditional task analysis.
Secondly, needs analysis is often a team effort nowadays, with active
participation by clients, key stakeholders and other areas of an organization
like human resources and quality assurance.
Summary
Whatever the future might bring, training designers must prepare
themselves to deliver learning and performance in a world of constant change.
Truly, as never before in our history, the future of training is full of challenge
and hope. It is up to both theorists and practitioners today to work together in
advancing the field of training and instructional design to meet the future
demands placed upon it by society. If we can succeed in this grand endeavor,
we will help humankind in significant ways, for our success as a species on
this planet is a direct outgrowth of our incredible ability to learn and to change.
While we help others, we will also help ourselves by ensuring the survival and
growth of training as a profession worthy of admiration.
If this book moves us forward, in even the smallest ways, in the quest
for better human performance at work, it will have been worth every penny
spent in creating and publishing it.
Discussion Questions
and Exercises
1. Compare and contrast training in 1950 and today. What major changes
have occurred? What is still largely the same?
2. Which of the current training best practices do you use? How could you
apply the other best practices in your work?
3. What are the major challenges facing the training profession in the future?
How can trainers best prepare to meet them?
Design Section
5. Objectives Matrix 282
6. Budget Template 283
7. Task Analysis Summary Report 284
8. Blueprint Template
Development Section
9. Training Manual Template 288
10. Test Specifications 290
11. Training Quality Control Checklist 295
12. Production Evaluation Checklist 297
Implementation Section
13. Lesson Plan Template 301
14. Training Delivery Techniques Matrix 302
Evaluation Section
15. Sample Participant Reaction Survey 303
16. Sample Supervisory Behavior Change Survey 306
17. Training ROI Worksheet 310
Organizational Surveillance
Worksheet
Directions:
Scan the areas listed below and determine how many of them are relevant to
your organization and job. Where would you obtain current information about
relevant areas? What’s the current state in your organization?
Interview Questions
For the next series of questions, focus on each major task individually. Use
the Task Rating Form to help capture this information.
Directions:
On the following pages are a large number of task statements. For each task
statement do the following:
In the blank spaces at the end of the list of tasks, write in any tasks you
perform that are not listed. Be sure to provide information on Importance,
Difficulty, Frequency and Physical Effort for all tasks you write in.
1. [List Job Tasks Here.]
Directions:
Calculate the ratings for each task from the Ratings Survey by adding up the
rows. List the top ten tasks here in rank order. Discuss with the client whether
you should include lower rated tasks in the training.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
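The scoring rule in the directions, add up each task's ratings and rank the totals, can be sketched as follows; the task names and rating rows are hypothetical.

```python
# Hypothetical ratings per task: Importance, Difficulty, Frequency,
# Physical Effort, as gathered on the Ratings Survey.
ratings = {
    "Answer customer calls": [5, 2, 5, 1],
    "Repair meters":         [4, 5, 3, 4],
    "File daily reports":    [2, 1, 5, 1],
}

# Add up each row, then list tasks in rank order of total rating.
totals = {task: sum(scores) for task, scores in ratings.items()}
ranked = sorted(totals, key=totals.get, reverse=True)

for rank, task in enumerate(ranked, start=1):
    print(rank, task, totals[task])
```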
JOB/COURSE:
FUNCTION/SECTION:
TASK/LESSON:
PREREQUISITES:
REASONS:
STANDARDS:
INITIATING EVENT:
1.
2.
3.
4.
5.
6.
7.
Concepts/Principles: Examples:
Competencies
1. What is the level of students' current knowledge and skills in the subject area? Check
one:
_____ None
_____ Basic background
_____ Intermediate
_____ Advanced
___________________________________________________
3. What major misconceptions are new students likely to have regarding the
subject matter?
___________________________________________________
Attitudes
4. Are there topics toward which the students are likely to feel especially
positive?
___________________________________________________
5. Are there topics toward which the students are likely to feel especially
negative?
___________________________________________________
6. Are there any preferences the students have in instructional format or media?
___________________________________________________
Language Skills
7. What is the language level of the students?
_____ English as a Second Language/functional illiterate
_____ Native English up to high school
_____ College educated
_____ Specialized/technical vocabulary
Tool Skills
9. Are there any sensory-perceptual deficiencies that will require special attention?
_______________________________________________________
10. Are there any special skills students will bring to the course?
_______________________________________________________
Motivation
11. To what extent do the students value the training they will receive from the
course?
12. To what extent are students confident in their ability to succeed in the training?
14. Identify how the subgroups are different. (For example, different learning
preferences, language skills, attitudes, competencies, etc.)
_____ None
_____ Flip Chart/White Board
_____ Overhead projector
_____ Video system with VCR
_____ CBT workstation
_____ Multimedia workstation (incl. videodisk and/or CD-ROM)
_______________________________________________________
_____Yes
_____No
_______________________________________________________
_______________________________________________________
8. How can learning from this course be reinforced once the course is
completed?
_______________________________________________________
_______________________________________________________
_______________________________________________________
_____Yes
_____No
_____Yes
_____No
14. Recommend how to modularize the course according to the needs of different
learner groups.
15. Recommend how frequently the course should be offered per year.
Objectives Matrix
Directions:
Use this matrix to plan a course with multiple learning objectives. List the
target behaviors in the columns across the top and the content topics next to
the rows on the left. Place an X in boxes that represent a learning objective.
Make sure you have at least one objective per content area.
1.
2.
3.
4.
5.
6.
7.
8.
Directions:
Use this template to enter budgeted and actual costs for design projects. It
may be recreated in a spreadsheet program with formulas to automatically
calculate costs and tally totals.
Contract Labor
Blueprint Template
Directions:
Using the data from the job task analysis, context analysis, learner analysis,
and objectives matrix, complete the template by adding Content, Presentation,
Media, Practice, Deliverables, Learning Activities, and Tests/Assessments.
Blueprint for
_____________________________________ Course
Summary:
Time to complete:
Objectives:
Prerequisites:
Content:
Media:
Practice Activities:
Pre-work/Homework/Projects:
Evaluation Plan:
Deliverables:
Budget:
Milestones:
Directions:
The following template may be used as a participant workbook, a leader’s guide, or a combination of both. If used as a leader’s guide, the left column should show instructor notes and the right column, student material. If used only as a participant workbook, the left column may be deleted or used to display icons and headings.
Module Name p. 2
Purpose
The following test specifications are meant to provide uniform guidelines for
the preparation of test items in all training programs which include testing.
Two types of test specifications are included: paper and pencil (cognitive) tests
and performance (psychomotor) tests. Cognitive tests are appropriate when
evaluating the acquisition of new knowledge, attitudes or thinking skills.
Psychomotor tests are appropriate for evaluating behaviors and physical skills.
General Description
After attending a training session and reading the accompanying course
manual, trainees will answer questions about factual details by selecting,
from a choice of four, the one response that correctly answers the question.
The objective of this test is to identify facts which correctly answer questions
based on the content of training classes.
Sample Item
Directions: Read the text and question below. Then read the four possible
answers and select the one choice which correctly answers the question,
based on the content of the FSR training program you attended. Circle the
letter of the correct answer.
1. Arrival
A. Record arrival time.
B. Go to meter and record meter number and read
C. Verify whether meter is ON or OFF.
2. Shutdown
A. Follow FSR Shutdown Routine
B. Tell customer that you have to go outside to work on the
meter and that you will return to light and adjust
appliances.
1. According to the text, what is the second step in the gas off turn-on routine?
Stimulus Attributes
1. The questions will be taken directly from the course manual or other participant
handout. When appropriate, the material will be reprinted in the stimulus.
2. The questions will cover content which is specified in the objectives of the course.
3. The questions will include one of the following interrogative words: who, what, when,
where, why, how, how many, how much, etc. The questions posed must be directly
answered by material in the course manual or handouts.
Response Attributes
1. Trainees will be asked to circle the letter of one of four given response alternatives, or
to mark the letter of the answer on a pre-printed answer sheet. The four choices will
consist of one correct response and three distracters.
2. Distracters will be of four types:
a. Irrelevant detail: the distracter contains a detail from the course, but it is
irrelevant to the question posed.
b. Contradicted detail: the distracter directly contradicts information stated in
the instruction.
c. Inaccurate detail: the distracter inaccurately states a detail in the
instruction. It may be inaccurate because of different scope (too broad or
narrow), because a detail is omitted, or because incorrect information has
been added to the detail.
d. Unsupported detail: the distracter makes a statement about information in
the instruction which is not directly supported by details in the instruction.
The statement, however, is neither irrelevant nor contradictory.
Directions to Instructor:
Provide each trainee with an assembled Fisher S-102 Regulator, a drawing of
the regulator, and a standard FSR tool kit. Instruct the trainee to disassemble
the parts of the regulator listed below in the stimulus attributes and then to
reassemble these parts. Observe the trainees performing this task and rate
their performance on the Performance Observation Checklist (attached).
Tally the total score on the checklist and go over the results with the trainees.
Stimulus Attributes
Response Attributes
1. The instructor or designee will record whether the trainee performs each
step successfully on a Performance Observation Checklist (attached).
3. A passing score will be awarded to trainees who perform all six steps
correctly. Trainees who fail to perform all six steps will receive additional
instruction and be required to take the test again.
TOTAL
SCORE
_____________
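The scoring rule described in the response attributes above (record each step on the observation checklist, tally the total, and pass only trainees who perform all six steps correctly) can be sketched in a few lines; the function name and sample step results are illustrative:

```python
# Sketch of the Performance Observation Checklist scoring rule:
# tally the steps performed correctly; a trainee passes only if
# every step was performed correctly. Function name is illustrative.
def score_checklist(step_results):
    """Return (total_correct, passed) for a list of True/False step results."""
    total = sum(step_results)
    passed = all(step_results)
    return total, passed

# Illustrative observation: the trainee failed step 4 and must retest.
results = [True, True, True, False, True, True]
total, passed = score_checklist(results)
print(f"Score: {total}/{len(results)} - {'PASS' if passed else 'RETEST'}")
```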
2. Ensure reliable data collection and analysis.
_____________
12. Transition materials and course maintenance to
Implementers.
_____________
14. Ensure that any needed changes are
completed.
_____________
15. Conduct formal evaluation of reactions,
learning, skill transfer and business results.
Production Evaluation
Checklist
Production Media: Print Materials

Key Items to Check                                  Reviewed and Approved By

Copy editing
1. Grammar, punctuation, spelling, language
   and usage are all correct.                       _____________

Formatting
4. Page layout is correct, including margins,
   headers, footers, headlines, spacing, etc.       _____________
5. Page breaks are correct, and do not leave
   any widows or orphans.                           _____________

Impact
14. Visuals express their messages clearly
    and powerfully.                                 _____________
Materials Needed
_________________________________________________
Time
:00   [List key content outline points]   [List methods and materials to be used]   [Describe what trainees should be doing]
Training Evaluation
Course Name:________________________
Date:________________________
Instructor(s)/Learning System:
__________________________________________
Directions:
Please read each item carefully and place a check mark in the column which comes closest to
how strongly you agree or disagree with the statement. Check only one column for each item.
Write any comments or suggestions in the space provided on the reverse side of the form.
Strongly Agree    Agree    Neither    Disagree    Strongly Disagree
Course Administration
1. I was able to take this course when I needed it.
Course Content
4. I clearly understood the course objectives.
Course Instructor(s)/Presentation
9. The instructor(s) or presentation was
Course Relevance
13. My knowledge and skills increased
due to this course.
__________________________________________________________________
__________________________________________________________________
18. What would you like to see added or deleted from the course?
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________
Thank you for taking the time to complete this survey. Your opinions will
help us improve the quality of training programs and measure their results.
Directions:
Dear [Name]:
About three months ago, one or more of your staff attended [Name of Training
Program]. The course taught your employee(s):
• [List terminal performance objectives]
The attached survey asks some questions about the behavior and skills you
have observed since the training took place. The information you provide will
help us measure how well newly-learned skills are being transferred to the
workplace and how job performance improves as a result of training.
We are asking you to complete this because, as their supervisor, you are in
the best position to observe and judge how well your employees are
performing. Please answer the questions that follow as completely and
accurately as you can. It should take you about 10-15 minutes to complete the
survey.
Thank you very much for your support of training. Let me know if you have
any questions.
Sincerely,
[Training Manager]
Department: ___________________________
Part One:
2. How would you rate your overall satisfaction with the training your employees
received?
3. What was your employees’ average personal productivity before they started
training?
4. What is your employees’ average personal productivity now that they have
completed training?
For each of the skills listed below, please estimate the following:
1. The current skill level of your employees (Very High to Very Low).
2. How often they use the skill (Always to Never).
3. How important the skill is for their jobs (Very Important to Very Unimportant).
Use the five-point scale, with five being highest and one being lowest.
3.
4.
5.
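Once surveys are returned, the three ratings gathered above (skill level, frequency of use, and importance, each on the five-point scale) can be averaged per skill across supervisors. A minimal sketch, with hypothetical skill names and ratings:

```python
# Sketch: averaging supervisor ratings on the five-point scale
# (5 = highest, 1 = lowest). Skill names and all ratings below
# are hypothetical.
ratings = {
    "Skill A": {"level": [4, 5, 3], "frequency": [5, 4, 4], "importance": [5, 5, 4]},
    "Skill B": {"level": [2, 3, 3], "frequency": [3, 2, 3], "importance": [4, 4, 5]},
}

# Mean rating for each skill on each of the three dimensions.
averages = {
    skill: {dim: sum(vals) / len(vals) for dim, vals in dims.items()}
    for skill, dims in ratings.items()
}

for skill, dims in averages.items():
    print(skill, dims)
```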
Part Three:
1. Please check all of the following ways that you support training in your department.
Supervisor Coaching
Peer Coaching
Job Aids
Other? (What?
__________________________________________________)
Electronic Performance
Support Systems
Other (What?)
3. What barriers or obstacles (if any) make it difficult for your employees to apply
newly learned skills on the job?
4. What enablers or motivators help employees to apply newly learned skills on the
job?
5. What changes to the training your employees took would help them to perform
better?
Thank you for completing this survey. Please return it to: [Name, Address].
Training ROI
Worksheet
Directions:
Use the worksheet below to plan and calculate training ROI.
Enter data in the spreadsheet and the totals are automatically
calculated.
Indirect
Opportunity
Indirect
Opportunity
ROI = (B - C) / C x 100%
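The worksheet's ROI formula can be computed directly; a minimal sketch, assuming (as is standard for training ROI) that B stands for total training benefits and C for total training costs:

```python
# ROI formula from the worksheet: ROI = (B - C) / C x 100,
# assuming B = total training benefits and C = total training costs.
def training_roi(benefits, costs):
    """Return training ROI as a percentage."""
    return (benefits - costs) / costs * 100

# Illustrative figures only: $150,000 in benefits against
# $100,000 in costs yields a 50% return.
print(training_roi(150_000, 100_000))  # 50.0
```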
B
Baird, Schnier and Laird, 190
Basic skills, 56
Bell Curve, 145
Bentley, 190
Best Practices in Training Design, 257
Bloom, Hastings and Madaus, 224
Brainstorming sessions, 185
Broad and Newstrom, 22, 229
Budgets
examples of training design resources, 95
Final Working Budget, 96, 98
Preliminary Proposals, 95
Sample Preliminary Proposal Budget, 97
Business results, 1, 4, 5, 32, 72, 80, 97, 108, 112, 119, 131, 155, 166, 200, 201, 202, 203, 205, 214,
216, 217, 218, 239, 240, 244, 245, 247, 248, 253, 257
C
Case Studies and Role Plays To Simulate Reality, 187
Case study, 71, 76, 173, 187, 188, 209, 210, 220
Cause analysis, 23, 26, 31
CD-ROM, 163, 196
Change management, 24
Clark, 29, 31, 32
Classroom Delivery, 167, 173, 174
Cognitive psychology, 51, 174, 257, 258, 259
company organization chart, 36
Competencies, 39, 40
Computer programmers, 122, 128, 129, 160, 161
Computer-based training (CBT), 51, 53, 60, 62, 84, 86, 88, 100, 102, 119, 191, 196, 225
Context analysis, 9, 58, 59, 115
Elements, 58
Importance, 58
Continuous Learning, 261
Corrective Action, 154, 156, 158
Correlation coefficient, 147
Coscarelli and Shrock, 137, 224
Cost avoidance, 120, 203, 241
Cost Estimation Techniques, 97
Material Estimation Standards, 98
see also Budgets, 97
Cost savings, 62, 203, 242, 250, 251, 254
Cost/benefit ratio, 248
D
Declarative knowledge, 183
Demonstrations to Develop Skills, 182
Design, 6, 65, 73, 74, 81, 99, 100, 108, 114, 119, 120, 130, 152, 200, 258, 260
design blueprint review, 118
Training Blueprint Examples, 115 (see also Design Blueprint)
Design Blueprint, 73, 96, 97, 99, 108, 113, 114, 115, 117, 118, 119, 120, 122, 124, 127, 128, 129,
130, 152, 156, 157, 258
examples, 115
Gaining Client Approval, 115, 116
see also Design, 73, 96, 97, 99, 108, 113, 114, 115, 118, 119, 120, 122, 124, 127, 128, 129, 130,
152, 156, 157, 258
elements of the blueprint, 114
Development, 121
Draft Materials, 122
Full-scale Production, 121, 159
layered approach, 125
Multimedia Production, 162
Print-Based Development, 123
Producers and Publishers, 160
storyboards, 88, 125
video production, 124, 129
Video/Multimedia Development, 123
Discussion Questioning Techniques, 187
Discussions
content discussion, 184, 185
Discussion Process Model, 185
Open-ended questions, 184, 186
Distance learning, 59, 197, 259
Distribution Systems, 163
Dubois, 40
DVD (digital video disc), 163
E
EPSS, 181, 192, 195, 197, 233
Evaluating Learning, 169, 223
Evaluating Reactions to Training, 219
see also Evaluation, 219
Evaluation, 3, 24, 28, 32, 78, 92, 108, 112, 115, 116, 117, 131, 169, 172, 203, 204, 206, 217, 229,
233, 234, 246, 257, 260
Bottom-Line Results Evaluation Process, 246
Determining When to Evaluate, 215
Evaluating Reactions and Learning, 219
Examples of Level Three Evaluations, 235
How Can Training Support Skill Transfer, 231
How to Design, 207
How Training Contributes to Business Results, 244
Interrupted Time Series Design, 213
Level-four evaluation, 241, 247
Links Among Learning, Performance and Results, 249
Method for Converting Opinion Data to Monetary Data, 251
Methods for Calculating Return on Investment, 253
Methods for Evaluating Results, 245
Methods for Skill Transfer and Behavior Change, 233
Models for Measuring Behavior Change, 234
312 | Bottom-Line Training: Performance-based Results
Non-Equivalent Control Group Design, 211
One-Group Pre-Post Evaluation Design, 210
One-Shot Case Study Design, 209
Pre-Post Control Group (Classic Experimental Design), 212
Reporting Evaluation Results, 217
Skill Transfer Survey, 236, 237
Training Evaluation Percentage Guidelines, 217
Typical Evaluation Points for Training, 215
Why it is Neglected, 206
Expert Systems, 261
F
Ford, 10, 18, 36, 238
Formative evaluation, 112, 204, 222
Foundational Train the Trainer Curriculum, 169
Frequency table, 143, 144
front-end analysis, 10, 24, 29
Future Developments in Training Design, 261
see also Instructional Design, 261
G
Gaining Client Approval, 115
Gantt Chart, 109, 110, 111
Goodman and Love, 108
Graphic artists, 127, 128, 129, 160, 161
H
Halprin, 246
Herrmann, 53, 54
Histogram, 143
Howell and Silvey, 126, 197, 198
HTML (hypertext markup language), 259
Human Performance Technology (HPT), 22
Clark’s HPT Model, 29
definition, 23
evaluation, 32
evaluation’s role, 28
example, 32
initiating event, 29
models, 23
I
Implementation phase, 167, 168, 202
Implementation Process Model, 167
Individual assessment, 131, 223
Inferential statistics, 143, 147, 148, 149, 151
Instructional Design
Formal Learning Model, 92
Current State of The Art, 257
The Future of Training Design, 260
Instructional methods, 84, 91, 92, 93, 171, 256
lesson steps, 92
Instructional Models
ADDIE Model, 1, 262
Results-Based Training Design, 1
see also ADDIE Model, 1, 262
Instructional Strategies, 73, 84, 89, 170, 172
Instructional Systems Design (ISD), 1, 114, 116
Integrated Project Planning and Management Cycle, 108, 112, 113
International Society for Performance Improvement (ISPI), 23
Internet, 20, 70, 84, 124, 125, 129, 191, 196, 212, 259
Intervention selection, 24, 27
Interviewing, 250
Interviews, 250
Intranet, 196
IQ test, 136
ISD, 262
J
Job aids, 32, 63, 65, 74, 105, 181, 182, 183, 195, 197, 201, 231, 232, 233, 239
Job analysis, 9, 36, 38, 39
career paths, 42
compensation plans, 41
examples of inputs and outputs, 40
models, 38
origins, 36
process model, 39
Underlying Skills and Knowledge, 49
K
Kaye, 42
Kirkpatrick, 28, 32, 203, 205, 206, 229, 240
Knowledge database, 262
Knowles, 52, 190
Kuder-Richardson correlation (K-R 21), 135
L
Laserdisc, 196
Learner analysis, 9, 51, 115
Classifying learners, 54
Content-specific, 54
Diversity, 54, 56, 57
importance, 51
learning styles, 52
Methodologies, 56
Motivation to learn, 55, 180
Learner reaction surveys, 220
Learner Reactions
Analyzing and Applying Reaction Data, 222
Opinion Survey, 221
Learning
Analyzing and Reporting Learning Data, 224
Learning contracts, 223, 224
Learning Evaluation Process Model, 226
Learning organization, 258, 261, 262
Learning technology, 124, 191, 192, 193, 195, 197, 198, 199, 200, 256, 261
Learning Technology
Delivery Issues, 197
Implementation Planning, 199
Learning Technology Implementation Model, 192
Lectures and Discussions to Develop Knowledge, 184
M
Mager, 10, 11, 14, 15, 17, 79, 224
Maintenance Systems, 165
Materials Developers, 127
Materials Development Process, 121 (see also Development, 121)
Model for Classroom Training, 173
Orienting Learners, 173
Presenting New Information, 176
Role of Practice, 179
Motivation
Problems and Solutions, 11
Motivation and training, 11
Multimedia, 58, 59, 84, 88, 89, 119, 122, 123, 124, 125, 126, 127, 128, 129, 130, 159, 161, 162,
163, 164, 165, 166, 191, 193, 194, 196, 197, 256
Design and Development Flowchart, 126
Multimedia training (MMT), 122, 126, 191, 194, 196, 256
N
Needs Analysis
Action phase, 18, 20, 76, 232
Analysis phase, 9, 10, 15, 16, 18, 19, 22, 57
Analysis Process ‘Model T’, 18
Investigation, 18, 19
Organizational Systems Model, 15
Surveillance, 18, 19
Needs analysis and assessment
definitions, 10, 21
Needs Assessment Methods, 17
Needs Assessment Model, 16
Nolan, 200
Non-Classroom Delivery, 167, 191
Norm-referenced tests, 132
O
Objectives
Action versus Abstract Behaviors, 76
classifying, 81
components, 75
conditions, 77
content, 75, 77
hierarchy, 80
importance, 74
matrix, 78
selecting and prioritizing, 81
sequencing, 79
setting standards, 78
target behavior, 75, 76
Objectives, training, 6, 71, 74, 83, 91, 117, 130, 137, 140, 142, 143, 205, 206, 208, 225
Observation, 17, 48, 250
On-the-Job Training (OJT), 199
see also Structured On-the-Job Training, 199
Structured OJT, 200, 259
Opportunity analysis, 9, 29
P
‘Perfect’ practice, 179, 180
Performance, 3, 9, 10, 11, 16, 22, 23, 24, 29, 31, 32, 37, 115, 120, 124, 150, 169, 174, 221, 240,
249, 251, 257, 261
Performance analysis, 9, 10, 16, 22, 23, 24, 115
example, 25
future performance requirements, 38
implementation, 27
Performance Cause and Solution Matrix, 32
Performance documentation, 67, 234
Performance examples, 3
Performance improvement, 38, 152, 251, 258, 261, 262
Peters, 37
Phillips, 205, 240, 251
Pike, 190
Pipe, 10
Popham, 134, 137, 224
Post-test, 149, 208, 210, 211, 212, 216, 225, 226, 227
Pre-test, 139, 149, 208, 210, 211, 212, 216, 225, 226, 227, 237
Print Production, 161
Probability, 149, 227
Procedural knowledge, 183
Productivity, 239, 254
Profit growth, 203, 242, 243, 249
Project management, 105
basic elements, 105
Critical Path Method (CPM) Charts, 111
Gantt Charts, 109, 110, 111
Role of the Project Manager, 106
Project Management
Integrated Project Planning and Management Cycle, 108, 112, 113
Project management software, 101, 104, 113, 259
Q
Quality assurance data, 66, 67
Quality Control, 121, 152, 155, 257
Corrective Action Process, 156
Responsibilities, 155
Quality control system, 153
Quality standards, 129, 152, 153
R
Rapid Prototyping, 100, 101, 114, 119, 166, 249
Reliability, 134, 135
Report Card Reaction Survey, 221
Reproduction, 162
Results of training
example, 4
financial, 3
importance, 8
strategic, 4
Results-Based Evaluation, 240
see also Evaluation, 240
Results-Based Project Management, 108
see also Project Management, 108
S
Schedules, 99
Estimates Based on Hours of Instruction, 101
Factors Affecting Training Design Time, 103
Time Estimation Techniques, 101
Scriven, 204
Self-assessments, 224
Senge, 258
Simulations, 193
Skill Gap Analysis, 9, 65
Closing Skill Gaps, 71
Documenting Existing Skills, 66
Estimating Future Skill Needs, 68
Importance, 65
Model, 65
Skinner, 256
Smith and Delahaye, 190
Standard deviation, 146, 147, 149
Statistically significant, 148, 149, 212, 227, 237
Stolovitch and Keeps, 23
Storage systems, 163
Strategic results, 4, 71, 80, 120, 241, 243, 245, 249, 254, 257, 260
Structured On-the-Job Training Checklist, 201
Subject matter expert (SME), 48, 56, 68, 69, 89, 97, 115, 118, 122, 127, 128, 137, 252
Summative evaluation, 112, 204, 205
Survey research, 250, 251
Surveys, 250, 251
Swanson, 37
T
t test, 149
Task analysis, 9, 16, 43, 48, 50, 103, 115
definition, 45
example, 50
importance, 43
job descriptions, 39, 40, 43
Methodologies, 48
Process Model, 43
Sample Job Functions, 45
Sequencing Job Functions, Tasks and Steps, 46
steps, 46
Task Analysis Summary Report, 116
Taylor, 36
Teaching machines, 196, 256
Technology-based learning, 124
Templates, 100, 119, 123, 125, 154, 166, 259
see also Development, 100, 119, 123, 125, 154, 166, 259
Test bias, 141, 142
Test Construction Principles, 142
Test of Adult Basic Education (TABE), 67
Testing
4/5ths rule, 141
Analyzing and Reporting Test Results, 143
central tendency, 144, 146
Legal Issues, 139
passing scores, 137, 138, 139, 142, 147, 150, 225, 226, 227
range, 17, 41, 56, 84, 88, 93, 95, 102, 105, 132, 133, 134, 135, 146, 147, 159, 168, 173, 194,
220, 221, 254
Reliable, Valid Tests, 134
Role of Testing and Assessment, 131
Selection, 24, 27, 131, 141
Split Half Reliability, 135
test specifications, 142, 143, 147, 150
Tests, 66, 117, 121, 131, 132, 139
Performance Tests, 150
Train the Trainer, 77, 78, 104, 116, 117, 128, 167, 168, 169, 170, 171, 172, 175
Assessing the Need, 168
Instructional Strategies, 170
Sample Agenda, 170
Training Quality Assurance, 153
Training Blueprint
see also Design Blueprint, 115, 155
Training customer satisfaction survey, 222
Training deliverables, 84, 93, 108
estimating, 87
estimating guidelines, 89
examples, 84
Training Delivery
costs, 62
options, 60, 85
Training Design Budgets, 94
Training Design Schedules, 99 (see also Schedules, 99)
Training facilities, 60, 164, 197
Training transfer strategies, 62
Transfer of Training, 203, 229
Transferring skills to the job, 63, 179, 181
Tyler, 74
V
Validity, 135
Construct Validity, 136
Content Validity, 136
Criterion Validity, 136
Threats to validity, 208, 209
W
Workplace Literacy Training, 239
World Wide Web (WWW), 196