ROI
ATD Press
1640 King Street
Alexandria, VA 22314 USA
Ordering information: Books published by ATD Press can be purchased by visiting ATD’s
website at www.td.org/books or by calling 800.628.2783 or 703.683.8100.
ISBN-10: 1-950496-37-6
ISBN-13: 978-1-950496-37-2
e-ISBN: 978-1-950496-38-9
Contents

1. The Basics
   Defining ROI
   ROI and the Levels of Evaluation
   The Evaluation Puzzle
   The ROI Process Model
   Putting ROI to Use
   Getting It Done
2. Plan Your Work
   Aligning Programs With the Business
   The Alignment Model in Action
   Defining Program Objectives
   Developing the Evaluation Plan
   Getting It Done
3. Collect Data
   Selecting the Method
   Defining the Source
   Determining the Time of Data Collection
   Getting It Done
4. Isolate Program Impact
   Understanding Why Isolating Impact Is a Key Issue
   Applying the Techniques
   Building Credibility With the Process
   Getting It Done
5. Calculate ROI
6. Optimize Results
7. Sustain Momentum
Appendix: ROI Forecasting Basics
Preface
the basis for categorizing data and based on Raymond Katzell’s four
steps of evaluation, the ROI Methodology is the operational process
that ensures data collected and categorized within the framework are
reliable. More than that, it is a process that enables talent
development, and other functions within the organization, to do their
job—that is, drive value in the organization. Application of the process
is not limited. In fact, the ROI Methodology has reached audiences
beyond what we thought it would. Areas in which organizations apply
this process include:
• human resources/human capital
• training/learning/development
• leadership/coaching/mentoring
• knowledge management/transfer
• recognition/incentives/engagement
• work arrangement systems
• change management/culture
• talent management/retention
• policies/procedures/processes
• technology/systems/IT
• meetings/events/conferences
• marketing/advertisement/promotion
• compliance/risk management
• healthcare initiatives
• organization development/consulting
• project management solutions
• quality/Six Sigma/Lean engineering
• communications/public relations
• public policy/social programs
• creativity/innovation
• ethics/integrity
• safety/health/wellness programs
• environment/sustainability
• schools/colleges/universities
• public sector/nonprofits
• faith-based programs.
Some of our most exciting work is with nongovernmental
organizations, faith-based organizations, and the First Nation
community where we are working with the American Indian Fund to
demonstrate the value of tribal colleges.
Never before have we seen such interest in designing programs,
processes, initiatives, and institutions to drive results that matter to
employees, students, stockholders, taxpayers, and communities. The
good news is the process to do this hasn’t changed—it’s the same, only
better. Better in terms of the techniques to collect, analyze, report, and
optimize data. With greater use, broader adoption, and advances in
technology, we continue to learn. We strive to apply that learning by
advancing the measurement and evaluation field.
But the fundamentals are the fundamentals—they are foundational
and unchanging. They are what makes the ROI Methodology work and
why it continues to be the most applied and documented approach to
demonstrating impact and ROI for non-capital investments. This book
presents those fundamentals. It provides the basics of the ROI
Methodology.
What’s New
While retaining the fundamentals, this second edition improves on the
first, which came out in 2005. The question then is why an update?
Because the need for ROI is not going away.
Demonstrating the ROI of programs remains critical to how
organizations allocate resources to future programs. Senior executives
want to know this information. And yet, we keep hearing the same
story: Organizations spend millions of dollars on talent development,
but the business measures that matter are not improving! That
suggests a continuing gap in connecting how a program performs and
whether it delivers value.
Here is what we’ve changed in this update:
• We have focused more on alignment, as in connecting programs
to the business. This includes identifying payoff needs and
specific business measures that need to improve. It also includes
using a variety of techniques to identify the most feasible
solution given the business need.
• While there is so much to talk about on the technology front, we
could have written an entire book just on the application and
impact of technology on the adoption of the ROI Methodology.
Alas, we did not, although we do give a shout-out to a few
technologies that are either fundamental tools or new tools that
are sure to take measurement to new levels.
• We have also referenced new applications of the process to
talent development, demonstrating its increasing use.
What’s Inside?
Each chapter provides the basic steps in developing a comprehensive
evaluation that includes ROI. Although attempts have been made to
address some of the more difficult issues, readers will become most
comfortable with the basic techniques. By the end of the book, you will
have a basic understanding of the ROI Methodology and be able to
select programs for this type of analysis. You will be prepared to
develop a strategy to integrate ROI as part of your ongoing talent
development process.
Chapter 1, The Basics, provides an overview of ROI—what it
means, how it is reported, and when it should be used.
Chapter 2, Plan Your Work, introduces the alignment process,
which includes developing program objectives and the evaluation plans
that will support your moving forward with an ROI project.
Chapter 3, Collect Data, covers the appropriate data collection
procedures and shows how to implement them. This chapter answers
the questions: How do you collect data? From whom do you collect
data? When do you collect data?
Chapter 4, Isolate Program Impact, addresses one of the most
important steps in program evaluation, answering the basic question:
How do you know it was your program that improved the measures?
Chapter 5, Calculate ROI, presents the fundamental difference
between reporting activity and reporting ROI. It’s in the math. Only by
converting impact measures to monetary value and comparing that
value to the fully loaded cost of the program can an actual ROI be
reported.
Chapter 6, Optimize Results, focuses on communicating results
and using black box thinking. Without communication, you can’t
accomplish what you set out to accomplish by evaluating the program.
Without reflecting on the wins and losses, and nudging people to take
action, you cannot improve the program, the system, or the
organization. That’s what measurement and evaluation are all about.
Chapter 7, Sustain Momentum, builds on the previous chapter.
Anyone can conduct an ROI study, but can you integrate the ROI
process into the talent development process so that it is seamless and
still effective? This chapter describes how to do that.
The appendix, ROI Forecasting Basics, introduces the concept
of forecasting ROI prior to investing in programs, during program
implementation, and using Level 3: Application data.
Basic Rules
These rules present guiding principles and guidelines to ensure
consistent application of the ROI Methodology.
Noted
This icon flags sections with greater detail or an explanation
about a concept or a principle. Sometimes it is also used for a
short but productive tangent.
Getting It Done
The final section of each chapter supports your ability to take the
content of that chapter and apply it to your situation. The focus of this
section is mostly on job aids and tools for understanding the content.
Sometimes it contains a list of questions for you to ponder, sometimes
it is a self-assessment tool, and sometimes it lists action steps you can
take to improve your skills and help increase the chances for
participant success.
What Do We Mean?
Before delving into the material, let’s clarify a few terms so we’re on
the same page. Program refers to the initiative being evaluated. This
could be a course, a full-scale change initiative, or a learning
management system implementation. Talent development refers to
training, performance improvement, learning, development, and
education. The levels of evaluation refer to the Phillips five-levels
evaluation framework. ROI is defined in the true sense of the acronym
—earnings divided by investment or net benefits divided by costs.
We hope this book will help you as you move forward with ROI. Best
of luck to all of you who do!
Acknowledgments
A book is never developed by the authors alone. It begins with the
publisher’s willingness to take on the book project and then working
with the authors to frame the content in such a way that it will be
useful to the target audience. So, our first thanks go to Ann Parker,
senior community manager at ATD, for asking us to update the book.
We also thank the ATD editorial team for their work in producing the
book. ATD has been a long-time partner of ours and we appreciate the
opportunity they gave us to lead the measurement and evaluation
content for ATD. We believe in the methodology presented here and
have observed its successful application in organizations throughout
the world. We appreciate ATD’s recognition of its importance.
A huge thanks goes to Hope Nicholas, ROI Institute’s director of
publications. Hope jumped on this book and would not let it (or us) out
of her line of sight until it was completed. She always comes through to
help us develop quality publications. We would also like to thank our
entire team at the ROI Institute who make things happen while we
work on research and publications. As other authors will attest,
developing a book requires deep work—meaning, we get lost in our
work and sometimes come up only when hunger strikes. So many
thanks to Ann, Kylie, Sherri, Melissa, Becky, Andy, Brady, and Tim for
all you do when we’re away!
We’d also like to thank our many workshop participants and clients.
Without you, we would have few stories to tell and limited ways in
which we could tell them. We appreciate your candor and help in
addressing the many issues faced by talent development professionals
pursuing this challenging topic.
From Patti to Jack: As always, Jack, you support and encourage me
to be my best. I do this by trying to keep up with you! You are my rock,
my friend, my love. Thank you!
From Jack to Patti: Much of the success of our work belongs to
Patti. She is an outstanding researcher, consultant, teacher, speaker,
and author. Her vast knowledge and experience shine through this
book. Thank you for the amazing contributions you make each and
every day.
1 The Basics
This chapter explores the fundamentals of the ROI Methodology, a process that has become
integral to many organizations around the world.
The chapter covers three key topics:
• defining return on investment (ROI)
• following the ROI process model
• putting ROI to use.
Defining ROI
What is ROI? ROI is the ultimate measure of accountability that
answers the question: Is there economic value added to the
organization for investing in programs, processes, initiatives, and
performance improvement solutions? Organizations rely on many
economic indicators. Table 1-1 summarizes the typical financial
measures important to resource allocation decisions.
Table 1-1. Typical Financial Measures

Return on Equity (ROE): Measures a corporation's profitability by revealing how much profit a company generates with the money that shareholders have invested. Used for comparing the profitability of a company to that of other firms in the same industry.
Calculation: Compares the annual net income to shareholder equity.
ROE = (Net Income / Shareholder Equity) x 100

Return on Assets (ROA): Indicates how profitable a company is in relation to its total assets. Measures how efficient management is at using its assets to generate earnings.
Calculation: Compares annual net income (annual earnings) to total assets; expressed as a percentage.
ROA (%) = (Net Income / Total Assets) x 100

Return on Average Equity (ROAE): Modified version of ROA referring to a company's performance over a fiscal year.
Calculation: Same as ROA except the denominator is changed from total assets to average shareholders' equity, which is computed as the sum of the equity value at the beginning and end of the year divided by two.
ROAE = Net Income / Average Shareholder Equity

Return on Capital Employed (ROCE): Indicates the efficiency and profitability of a company's capital investments. ROCE should always be higher than the rate at which the company borrows; otherwise any increase in borrowing will reduce shareholders' earnings.
Calculation: Compares earnings before interest and tax (EBIT) to total assets minus current liabilities.
ROCE = EBIT / (Total Assets - Current Liabilities)

Present Value (PV): Measures the current worth of a future sum of money, given a specified rate of return over a specified period of time.
Calculation: Divides amount of cash flows (C; or sum of money) by the interest rate (r) over a period of time (t).
PV = C / (1 + r)^t

Net Present Value (NPV): Measures the difference between the present value of cash inflows and the present value of cash outflows. Another way to put it: measures the present value of future benefits with the present value of the investment.
Calculation: Compares the value of a dollar today to the value of that same dollar in the future, taking into account a specified interest rate over a specified period of time.
NPV = sum of Ct / (1 + r)^t for t = 0 to T, where C0 is the (negative) initial investment

Internal Rate of Return (IRR): Makes the net present value of all cash flows from a particular project equal to zero. Used in capital budgeting. The higher the IRR, the more desirable it is to undertake the process.
Calculation: Follows the NPV calculation as a function of the rate of return. A rate of return for which this function is zero is the internal rate of return.

Payback Period (PP): Indicates how long it takes before an investment pays back its cost.
Calculation: PP = Costs / Benefits

Benefit-Cost Ratio (BCR): Used to evaluate the potential costs and benefits of a project that may be generated if the project is completed. Used to determine financial feasibility.
Calculation: BCR = Benefits / Costs
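Several of these measures are easy to compute directly. The following is a minimal Python sketch, not from the book and with invented figures, showing how PV, NPV, and IRR relate: the IRR is simply the discount rate at which the NPV crosses zero, found here by bisection.

def present_value(cash_flow, rate, periods):
    """PV = C / (1 + r)^t."""
    return cash_flow / (1 + rate) ** periods

def npv(rate, cash_flows):
    """NPV: the sum of discounted cash flows; cash_flows[0] is the
    (negative) initial investment at t = 0."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """IRR by bisection: the rate at which NPV equals zero.
    Assumes NPV is positive at `lo` and negative at `hi`."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-100_000, 40_000, 40_000, 40_000]        # invest 100k; three annual inflows
print(round(present_value(40_000, 0.08, 1)))      # 37037 -> one year's 40k discounted at 8%
print(round(npv(0.08, flows)))                    # 3084 -> positive NPV at an 8% discount rate
print(round(irr(flows), 4))                       # 0.097 -> an IRR of about 9.7%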
years, his application of ROI found its way into the broader HR
community and has since been adopted across most disciplines.
ROI and BCR provide similar indicators of investment success, though
one, ROI, presents the earnings (net benefits) as compared to the cost
and is multiplied by 100 to report it as a percentage. The other, BCR,
compares gross benefits to costs. Below are the basic formulas used to
calculate the BCR and the ROI:
BCR = Program Benefits / Program Costs

ROI (%) = (Net Program Benefits / Program Costs) x 100
The result of the PP formula is the number of months or years before the projects pay
the cost back. The formula for PP is:

PP = Program Costs / Program Benefits
Noted
Periodically, someone will report a BCR of 3:1 and an ROI of 300 percent. This is not possible. ROI is the
net benefits divided by the costs; for a BCR of 3:1, that translates to an ROI of 200 percent. The net
benefit is equal to benefits minus costs.
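The relationship between the two metrics is mechanical, as this minimal Python sketch (not from the book; figures invented) shows: for the same benefits and costs, the ROI percentage is always (BCR - 1) x 100.

def bcr(program_benefits, program_costs):
    """Benefit-cost ratio: gross benefits compared to costs."""
    return program_benefits / program_costs

def roi_percent(program_benefits, program_costs):
    """ROI: net benefits (benefits minus costs) divided by costs, times 100."""
    return (program_benefits - program_costs) / program_costs * 100

benefits, costs = 750_000, 250_000   # illustrative figures only
print(bcr(benefits, costs))          # 3.0 -> a BCR of 3:1
print(roi_percent(benefits, costs))  # 200.0 -> an ROI of 200 percent, not 300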
Noted
The levels of evaluation are categories of data; timing of data collection does not necessarily define the level to which you are
evaluating. Level 1 data can be collected at the end of the program (as is typical) or in a follow-up evaluation months after the
program (not ideal).
Levels 4 and 5 data can be forecasted before a program is implemented or at the end of the program. The true impact is
determined after the program is implemented when actual improvement in key measures can be observed. Through analysis,
this improvement is isolated to the program, accounting for other factors. The basics of forecasting ROI are described in the
appendix.
[Figure: the data categories of the framework, consisting of Input, Reaction, Learning, Application, Impact, ROI, and Intangible Benefits]
provides the data required to improve programs when the ROI is less
than desirable.
Process Model
The process model serves as a step-by-step guide to help maintain a
consistent approach to evaluation. There are four phases to the process,
each containing critical steps that must be taken to get to credible
information. These four phases are described in more detail later in this
chapter:
1. Plan the Evaluation:
   o Align programs with the business.
   o Select the right
Noted
The ROI Methodology was originally developed in 1973 by Jack J. Phillips. Jack, at the time, was an electrical engineer at Lockheed
Aircraft (now Lockheed Martin) in Marietta, Georgia, who taught test pilots the electrical and avionics systems on the C-5A Galaxy. He
was also charged with managing a co-operative education program designed as part of Lockheed’s engineer recruiting strategy. His
senior leader told him that in order to continue funding the co-operative education program, Jack needed to demonstrate the return
on Lockheed’s investment (ROI). The senior leader was not looking for an intangible measure of value, but the actual ROI.
ROI and cost-benefit analysis had been around for decades, if not centuries. But neither had been applied to this type of
program. Jack did his research and ran across a concept referred to as the four steps of training evaluation, developed by an industrial-
organizational psychologist named Raymond Katzell. Don Kirkpatrick wrote about these steps and cited Katzell in his 1956 article titled
“How to Start an Objective Evaluation of Your Training Programs.” Because the concept had not been operationalized nor did it include
a financial metric describing the ROI, Jack added the economic theory of cost-benefit analysis to the four-step concept and created a
model and standards to ensure that reliable data, including the ROI, could be reported to his senior leadership team.
Jack’s 1983 Handbook of Training Evaluation and Measurement Methods put the five-level evaluation framework and the ROI
process model on the map. As he moved up in organizations to serve as head of learning and development, senior executive VP of
human resources, and president of a regional bank, he had his learning and talent development and HR teams apply this approach to
major programs.
Then, in 1994, his book, Measuring Return on Investment Volume 1, published by the American Society for Training &
Development (ASTD), now the Association for Talent Development (ATD), became the first book of case studies describing how
organizations were using the five-level framework and his process to evaluate talent development programs.
Over the years, Jack Phillips, Patti Phillips, and their team at ROI Institute have authored more than 100 books describing the use
of the ROI Methodology. The application of the process expands well beyond talent development and human resources. From
humanitarian programs to chaplaincy, and even ombudsmanship, Jack’s original work has grown to be the most documented and
applied approach to demonstrating value for money of all types of programs and projects.
Implementation
Conducting just one study adds little value to your efforts to
continuously improve and account for your talent development
programs. The key is implementation—the last and most critical piece
of the evaluation puzzle. Anyone can conduct one ROI study; the key is
sustaining the practice. Building the philosophy into everyday
decisions about your talent development process is imperative if you
want to sustain a culture of results-based talent development. This
requires assessing your organization’s culture for accountability;
assessing your organization’s readiness for ROI; defining the purpose
for pursuing this level of evaluation; building expertise and capability;
creating tools, templates, and standard processes; and adopting
technology that will enable optimal use of information that flows from
data collection and analysis.
Without a plan, it will be difficult for you to know where you are going, much
less when you arrive. Your plan begins with clarifying the business
needs for your program and ensuring the most feasible solution has
been identified given the needs. Once the correct program has been
identified, the next step is to develop specific, measurable objectives
and design the program around those objectives. From there you
develop your data collection plan. This includes defining the measures
for each level of evaluation, selecting the data collection instrument,
identifying the source of the data, and determining the timing of data
collection. Any available baseline data for the measures you are taking
should be collected during this time.
Next, develop the ROI analysis plan. This means selecting the most
appropriate technique to isolate the effects of the program on impact
data and the most credible method for converting data to money. Cost
categories and communication targets are developed. As you develop
these planning documents, you will also identify ways in which the
evaluation approach can be seamlessly integrated into the program.
[Figure: excerpt of the ROI process model, Analyze Data phase: Make It Credible (capture costs of program; identify intangible measures), leading to Level 5: ROI and Intangibles, then Optimize Results]
Collect Data
Once the planning phase is completed, data collection begins. Levels 1
and 2 data are collected during the program with common instruments,
including end-of-course questionnaires, written tests and exercises,
and demonstrations. Follow-up data, Levels 3 and 4, are collected
sometime after the program when application of the newly acquired
knowledge and skills becomes routine and when enough time has
passed to observe impact on key measures. A point to remember is that
if you have identified the measures that need to improve through initial
analysis, you will measure the change in performance in those same
measures during the evaluation. It is feasible to believe that your data
collection methods during the evaluation could be the same as those
used during the needs analysis.
Analyze Data
Once the data are available, analysis begins using the approach chosen
during the planning stage. Now it’s a matter of execution. Isolating the
effects of the program on impact data is a first step in data analysis.
This step is taken when collecting data at Level 4. Too often overlooked
in evaluating success of talent development programs, this step
answers the critical question, “How do you know it was your program
that improved the measures?” While some will say this is difficult, we
argue (and have argued for years), it doesn’t have to be. Besides,
without this step, the results you report will lack credibility.
The move from Level 4 to Level 5 begins with converting Level 4:
Impact measures to monetary value. Often this step instills the greatest
fear in talent development professionals, but once you understand the
available techniques to convert data along with the five steps of how to
do it (which are covered in chapter 5), the fear usually subsides.
Fully loaded costs are developed during the data analysis phase.
These costs include needs assessment (when conducted), design,
delivery, and evaluation costs. The intent is to leave no associated cost
unreported.
Intangible benefits are also identified during this phase. These are
the Level 4 measures not converted to monetary value. They can also
represent any unplanned program benefits.
The last step of the data analysis phase is the math. Using simple
addition, subtraction, multiplication, and division, the ROI is
calculated.
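To make the math concrete, here is a hypothetical end-to-end sketch in Python. Every figure, the contribution and confidence adjustments, and the cost amounts are invented for illustration; the isolation and conversion techniques themselves are covered in chapters 4 and 5.

annual_improvement_units = 1_200  # e.g., additional units of output per year
value_per_unit = 150.0            # standard value: money per unit of the measure

# Isolate the effects of the program: here, participants estimate the
# program's contribution, adjusted by their confidence in that estimate.
contribution_estimate = 0.60      # "60% of the improvement came from the program"
confidence = 0.80                 # "we are 80% confident in that estimate"

monetary_benefits = (annual_improvement_units * value_per_unit
                     * contribution_estimate * confidence)

# Fully loaded costs: leave no associated cost unreported.
costs = sum([
    10_000,  # needs assessment (prorated)
    25_000,  # design, development, and delivery
    15_000,  # participants' salaries and benefits for time in program
    6_000,   # facilities, travel, and materials
    4_000,   # evaluation
])

bcr = monetary_benefits / costs
roi = (monetary_benefits - costs) / costs * 100
print(f"Benefits: ${monetary_benefits:,.0f}  Costs: ${costs:,.0f}")
print(f"BCR: {bcr:.2f}  ROI: {roi:.0f}%")  # BCR: 1.44  ROI: 44%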
Optimize Results
This is the most important phase in the evaluation process. Evaluation
without communication and communication without action are
worthless endeavors. If you tell no one how the program is
progressing, how can you improve the talent development process,
secure additional funding, justify programs, and market programs to
future participants?
There are a variety of ways to report data. There are micro reports
that include the complete ROI impact study; there are macro reports
for all programs that include scorecards, dashboards, and other
reporting tools.
But communication must lead to action—and that action requires
stepping back and analyzing what is learned from the data. Black box
thinking is required if you want to get value from your program
evaluation investments. The job of talent development professionals is
not to “train” people, but to drive improvement in output, quality, cost,
time, customer satisfaction, job satisfaction, work habits, and
innovation. This occurs through the development of others; but to do it
well, it means assessing, measuring, and evaluating and then taking
action based on the findings. Figure 1-4 offers something to remember
about the evaluation process.
Justify Spending
Justification of spending is becoming more commonplace in the talent
development practice. Talent development managers are often
required to justify investing in new programs, the continuation of
existing programs, and changes or enhancements to existing programs.
New Programs
In the past, when the talent development function had “deep pockets,”
new programs were brought on board every time a business book hit
the New York Times bestseller list. While many of those programs were
inspiring, there was no business justification for them. Today, new
programs undergo a greater amount of scrutiny. At a minimum, talent
development managers consider the costs and provide some esoteric
justification for investing the resources.
For those who are serious about justifying investments in new
programs, ROI is a valuable tool. A new program’s ROI can be
forecasted using a variety of techniques, but some programs may
require pre-programming justification. There are two approaches for
this: pre-program forecasts and ROI in pilot programs. Although these
approaches are beyond the scope of this book, the appendix includes
basic descriptions of the forecasting techniques.
Existing Programs
Calculating ROI in existing programs is more common in practice than
forecasting success for new programs, although there is an increased
interest in program justification prior to launch. Typically, ROI is used
to justify investments in existing programs where development and
delivery have taken place, but there is concern that the value does not
justify continuing.
Along with justifying the continuation of existing programs, ROI is
used to determine the value of changing delivery mechanisms, such as
incorporating blended learning or placing a program online with no in-
person interaction. It is also used to justify investing in additional
support initiatives that supplement the learning transfer process. Four
approaches to ROI can assist in justifying the investment in existing
programs: forecasting at Levels 1, 2, and 3, and the post-program
evaluation. Post-program evaluation is the basis for this book.
Set Priorities
In almost all organizations, the need for talent development exceeds
the available resources. A comprehensive evaluation process, including
ROI, can help determine which programs rank as the highest priority.
Programs with the greatest impact (or the potential for greatest
impact) are often top priority. Of course, this approach has to be
moderated by taking a long view, ensuring that developmental efforts
are in place for a long-term payoff. Also, some programs are necessary
and represent commitments by the organization. Those concerns aside,
the programs generating the greatest impact or potential impact
should be given the highest priority when allocating resources.
Eliminate Unsuccessful Programs
You hate to think of eliminating programs—to some people this
translates into the elimination of responsibility and, ultimately, the
elimination of jobs. This is not necessarily true. For years, the talent
development function has had limited tools to eliminate what are
known to be unsuccessful, unnecessary programs. ROI provides this
tool.
Basic Rule 1
Not every program should be evaluated to impact and ROI. ROI is reserved for those programs that are
expensive, have a broad reach, drive business impact, have the attention of senior managers, or are highly
visible in the organization. However, when evaluation does go to impact and ROI, results should be reported
at the lower levels to ensure that the complete story is told.
Gain Support
A third use for ROI is to gain support for programs and the talent
development process. A successful talent development function needs
support from key executives and administrators. Showing the ROI for
programs can alter managers’ and supervisors’ perceptions and
enhance the respect and credibility of the learning staff.
effect of what employees do with what they learn, with particular
emphasis on measures representative of output, quality, cost, and time.
If talent development programs can show results linked to the business
and talent development staff can speak the language of business, mid-
level managers and supervisors may start to listen to them more
closely.
Employees
Showing the value of programs, including ROI, can enhance the talent
development function’s overall credibility. By showing employees that
the programs offered are serious programs achieving serious results,
the talent development function can show that training is a valuable
way to spend time away from their pressing duties. Also, by making
adjustments in programs based on the evaluation findings, employees
will see that the evaluation process is not just a superficial attempt to
show value.
Getting It Done
It is easy to describe the basics and benefits of using such a
comprehensive evaluation process as the ROI Methodology, but this
approach is not for everyone. Given that, your first step toward making
ROI work for your organization is assessing the degree to which your
talent development function is results based. Complete the assessment
in Exercise 1-1 to see where you stand. Then ask a client to complete
the survey and compare the results.
In the next chapter, you will learn how to create a detailed plan for
your evaluation.
Instructions: For each of the following statements, circle the response that best matches the talent development function at your
organization.
b. is determined by talent development and adjusted as needed
c. is based on a mission and a strategic plan for the function
2. The primary mode of operation of the talent development function is to:
a. respond to requests by managers and other employees to deliver training services
b. help management react to crisis situations and reach solutions through training services
c. implement many talent development programs in collaboration with management to prevent problems and crisis
situations
3. The goals of the talent development function are:
a. set by the talent development staff based on perceived demand for programs
b. developed consistent with talent development plans and goals
c. developed to integrate with operating goals and strategic plans of the organization
4. Most new programs are initiated:
a. by request of top management
b. when a program appears to be successful in another organization
c. after a needs analysis has indicated that the program is needed
Exercise 1-1. Talent Development Program Assessment (cont.)
b. moderate, usually by request, or on an as-needed basis
c. deliberately planned for all major talent development activities, to ensure a partnership arrangement
13. To ensure that talent development is transferred into performance on the job, you:
a. encourage participants to apply what they have learned and report results
b. ask managers to support and reinforce training and report results
c. use a variety of training transfer strategies appropriate for each situation
14. The talent development staff’s interaction with line management is:
a. rare; you almost never discuss issues with them
b. occasional; during activities, such as needs analysis or program coordination
c. regular; to build relationships, as well as to develop and deliver programs
15. Talent development’s role in major change efforts is to:
a. conduct training to support the project, as required
b. provide administrative support for the program, including training
c. initiate the program, coordinate the overall effort, and measure its progress—in addition to providing training
16. Most managers view the talent development function as:
a. a questionable function that wastes too much time of employees
b. a necessary function that probably cannot be eliminated
c. an important resource that can be used to improve the organization
17. Talent development programs are:
a. activity oriented (all supervisors attend the Talent Development Workshop)
b. individual results based (the participants will reduce their error rate by at least 20 percent)
c. organizational results based (the cost of quality will decrease by 25 percent)
18. The investment in talent development is measured primarily by:
a. subjective opinions
b. observations by management and reactions from participants
c. dollar return through improved productivity, cost savings, or better quality
19. The talent development effort consists of:
a. usually one-shot, seminar-type approaches
b. a full array of courses to meet individual needs
c. a variety of talent development programs implemented to bring about change in the organization
20. New talent development programs and projects, without some formal method of evaluation, are implemented at your
organization:
a. regularly
b. seldom
c. never
21. The results of talent development programs are communicated:
a. when requested, to those who have a need to know
b. occasionally, to members of management only
c. routinely, to a variety of selected target audiences
22. Management involvement in talent development evaluation:
a. is minor, with no specific responsibilities and few requests
b. consists of informal responsibilities for evaluation, with some requests for formal training
c. is very specific. All managers have some responsibilities in evaluation
23. During a business decline at your organization, the talent development function will:
a. be the first to have its staff reduced
b. be retained at the same staffing level
c. go untouched in staff reductions and possibly beefed up
24. Budgeting for talent development is based on:
a. last year’s budget
b. whatever the training department can “sell”
c. a zero-based system
25. The principal group that must justify talent development expenditures is:
a. the talent development department
b. the human resources or administrative function
c. line management
26. Over the last two years, the talent development budget as a percentage of operating expenses has:
a. decreased
b. remained stable
c. increased
27. Top management’s involvement in the implementation of talent development programs:
a. is limited to sending invitations, extending congratulations, and passing out certificates
b. includes monitoring progress, opening and closing speeches, and presentations on the outlook of the organization
c. includes participating in the program to see what’s covered, conducting major segments of the program, and requiring key
executives be involved
28. Line management involvement in conducting talent development programs is:
a. very minor; only talent development specialists conduct programs
b. limited to a few supervisors conducting programs in their area of expertise
c. significant; on the average, over half of the programs are conducted by key line managers
29. When an employee completes a talent development program and returns to the job, their supervisor is likely to:
a. make no reference to the program
b. ask questions about the program and encourage the use of the material
c. require use of the program material and give positive rewards when the material is used successfully
30. When an employee attends an outside seminar, upon return, they are required to:
a. do nothing
b. submit a report summarizing the program
c. evaluate the seminar, outline plans for implementing the material covered, and estimate the value of the
program
2 Plan Your Work
Noted
“There is nothing so useless as doing efficiently that which should not be done at all.”
—Peter Drucker, Austrian-born American management consultant, educator, and author
Payoff Needs
Every organization faces opportunities to make money, save money,
avoid cost, or contribute to the greater good while making money, saving
money, or avoiding cost. Identifying payoff needs is the first step in the
alignment process. Payoff needs can be opportunities to pursue or
problems to solve. They answer questions such as:
• Is this program worth doing?
• Is the problem worth solving?
• Is the opportunity worth pursuing?
Business Needs
While considering the payoff needs, the business needs will often
become apparent. Business needs are the specific organizational
measures that, if improved, will help address the payoff need.
Measures that represent business needs come in the form of output,
quality, cost, time, customer satisfaction, job satisfaction, work habits,
and innovation.
These measures represent either hard data or soft data. Hard data
are objectively based and easily converted to money, for example,
Performance Needs
Some talent development professionals are moving from order taker to
value creator. These individuals are resisting the temptation to say
“yes” to every request for a new program. Rather, they try to uncover
the problem or opportunity and identify business measures in need of
improvement. Then they identify a solution or solutions that will best
influence the business need. Their role is evolving into a performance
consulting role, positioning them as critical business partners to
leaders throughout the organization.
Success in this movement requires the talent development
professional to assess the performance gaps that, if changed, will
address business needs. This means they must have a mindset for
curiosity and inquiry and be willing to:
• Examine data and records.
• Initiate the discussion with the client.
Noted
You can use collaborative analytics to discern opportunities to improve output, quality, and cost, as well as employee
engagement, customer experience, and other business measures. It is also useful in determining the impact that change in
collaborative networks has on business measures. While its use is still in its infancy, it is important that talent
development professionals become familiar with the opportunities it offers. A good place to begin this learning journey is
a research piece authored by Rob Cross, Tom Davenport, and Peter Gray, titled “Driving Business Impact Through
Collaborative Analytics” (Connected Commons, April 2019).
Learning Needs
Addressing the performance needs uncovered in the previous step
typically requires a learning component to ensure all parties know
what they need to do and how to do it. In some cases, a learning
program becomes the solution. In other cases, nonlearning solutions
such as processes, procedures, policies, and technologies are the most
feasible approach to closing the performance gap that will lead to
improvement in business measures. Assessing learning needs is not
relegated only to pre- and post-knowledge assessments of program
participants. Examples of other techniques include: subject matter
expert input, job and task analysis, observations, demonstrations, and
management assessments.
It is important to go beyond technical knowledge and tactical skill
assessment, especially when there is great opportunity at stake. People
need to know the “how” as well as the “what,” “why,” and “when.” It is
also important to remember that learning needs assessment is
important for multiple stakeholders, not just program participants.
Supervisors, senior leaders, and the direct reports of the target
audience all play a role in ensuring programs are successful.
Preference Needs
Preference needs drive program requirements. Individuals prefer
certain content, processes, schedules, or activities for the structure of
a program. These preferences inform how best to roll out and deliver a
program. If the program is a solution to a problem or if it is leveraging
an opportunity, preference needs define how best to implement the
program and how participants should perceive it for it to be successful
from their perspective. Designing programs based on audience
preference increases the odds that participants will commit to them
and will be equipped to do what needs to be done to drive the
measures that matter.
Input Needs
The last phase of analysis is the project plan, which represents
projected investment in the program. Here needs are determined in
terms of number of offerings, who will likely participate when, and how
many people will participate during each session. The program team
will also decide on in-house and external resources to leverage. Travel,
food, facilities, and lodging issues are also defined at this stage. At the
end of this phase, the program team will estimate the full cost of the
program.
chief learning officer (CLO) and the president of operations for a large
chip manufacturing company (Phillips and Phillips 2005). The president
was concerned that his people spent too much time in training that did
not matter. Upon questioning the president, the CLO learned that the
concern was not too much training, but rather too much time in
meetings in general. She also gained insight into how the meetings
were being run and the extent to which follow-through on
commitments made in those meetings was taking place. Together they
came to an agreement that the president would actively engage in
or the measures that define those objectives are irrelevant to the need
for the program. Vague and irrelevant objectives hurt the design of the
program, impair the evaluation process, and lead to meaningless
results.
Objective: At the end of the course, participants will perceive program content as relevant to their jobs.
Measure: 80 percent of participants rate program relevance a 4.5 out of 5 on a Likert scale.
For those of you who are more research driven, you might want to take this a step further by defining (literally) what you mean by "relevance." For example, relevance may be defined as:
• knowledge and skills that participants can immediately apply in their work
• knowledge and skills that reflect participants' day-to-day work activity.
Now the measures of success can be even more detailed. Table 2-4
compares the broad objective to the more detailed measures. Success
with these two measures can be reported individually, or you can
combine the results of the two measures to create a “relevance index.”
Table 2-4. Compare a Broad Objective With More Specific and Detailed Measures
Objective: At the end of the course, participants will perceive program content as relevant to their jobs.
Measures:
• 80 percent of participants indicate that they can immediately apply the knowledge and skills in their work as indicated by a 4.5 rating out of 5 on a Likert scale.
• 80 percent of participants view the knowledge and skills as reflective of their day-to-day work activity as indicated by rating this measure a 4.5 out of 5 on a Likert scale.
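The combination arithmetic is simple. Here is a minimal Python sketch with invented ratings; the book leaves the index construction open, so averaging the two measures' success rates is just one reasonable, hypothetical choice.

applicability = [5, 4, 4.5, 5, 3.5, 5, 4.5, 5]  # "can immediately apply" ratings
reflects_work = [4.5, 5, 4, 5, 4, 4.5, 5, 3.5]  # "reflects day-to-day work" ratings

def pct_meeting(ratings, threshold=4.5):
    """Percentage of participants rating the measure at or above threshold."""
    return sum(r >= threshold for r in ratings) / len(ratings) * 100

m1 = pct_meeting(applicability)  # 75.0 -> short of the 80 percent objective
m2 = pct_meeting(reflects_work)  # 62.5
relevance_index = (m1 + m2) / 2  # 68.75 -> one number combining both measures
print(m1, m2, relevance_index)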
Breaking down objectives to multiple, specific measures provides a
clearer picture of success; however, multiple measures also lengthen
your Level 1 data collection instrument. The question to consider is,
“Do you need a long questionnaire with many questions representing
many measures to determine success with an objective?” For a
at lower levels. Detailed and specific, they spell out what the participant must
be able to do as a result of learning.
At the end of the course, participants will be able to use Microsoft Word.
Sounds reasonable. But what does “successful use” look like? How
will you know if you have achieved success? You need a measure, as
shown in Table 2-5. Now, you can evaluate the success of learning.
Objective: At the end of the course, participants will be able to use Microsoft Word.
Measures: Within a 10-minute time period, participants will be able to demonstrate to the facilitator the following applications of Microsoft Word with zero errors:
• File, save as, save as web page
• Format, including font, paragraph, background, and themes
• Insert tables, add columns and rows, and delete columns and rows
Objective: Participants will use effective meeting behaviors.
Measures:
• Participants will develop a detailed agenda outlining the specific topics to be covered for 100 percent of meetings.
• Participants will establish meeting ground rules at the beginning of 100 percent of meetings.
• Participants will follow up on meeting action items within three days following 100 percent of meetings.
Objective: Improve the quality of the X-1350.
Measures:
• Reduce the number of warranty claims on the X-1350 by 10 percent within six months after the program.
• Improve overall customer satisfaction with the quality of the X-1350 by 10 percent as indicated by a customer satisfaction survey taken six months after the program.
• Achieve top scores on product quality measures included in the industry quality survey.
Specific measures describe the meaning of success. They also serve
as the basis for the questions that you ask during the evaluation.
therefore, you let the raw data sit for days, months, and sometimes
years before you consider analyzing it to see what happened.
Defining the purpose of the evaluation helps determine the scope of
the evaluation project. It drives the type of data to be collected as well
as the type of data collection instruments to be used.
Evaluation purposes range from demonstrating the value of a
particular program to boosting credibility for the entire talent
development function. Typical evaluation purposes can be categorized
into three overriding themes:
• making decisions about programs
• improving programs and processes
• demonstrating program value.
different types of data that influence different decisions. Decisions are made with or without evaluation data; by providing data, the talent development team can influence the decision-making. Table 2-8 presents a list of decisions that evaluation data, including ROI, can influence.

Table 2-8. Decisions That Evaluation Data Can Influence
• The clients of the talent development team are deciding if they want to invest in expanding a pilot leadership program for the entire leadership team. (Level 5)
• Senior managers are planning next year's budget and are concerned about allocating additional funding to the talent development function. (Levels 1–5, scorecard)
• The talent development staff are deciding whether they should eliminate an expensive program that is getting bad reviews from participants, but a senior executive plays golf with the training supplier. (Level 5)
• A training supplier is trying to convince the talent development team that their leadership program will effectively solve the turnover problem. (Level 5, forecast/pilot)
• Supervisors want to implement a new initiative that will change employee behavior because they believe the talent development program did not do the job. (Level 3, focus on barriers and enablers)
Improving Programs and Processes
One of the most important purposes in generating comprehensive data
using the ROI Methodology is to improve talent development programs
and processes. As data are generated, the programs being evaluated
can be adjusted so that future presentations are more effective.
Reviewing evaluation data in the earlier stages allows the talent
development function to implement additional tools and processes that
can support the transfer of learning.
Evaluation data can help the talent development function improve
its accountability processes. By consistently evaluating programs, the
talent development function will find ways to develop data more
efficiently through technology or through the use of experts within the
Consumer Perspective
The consumers of talent development are those who have an immediate
connection with the program. Facilitators, designers, developers, and
participants represent consumers. Value to this group is represented at
Levels 1 and 2. Data provide the talent development staff feedback so
they can make immediate changes to the program as well as decide
where developmental needs exist. These data provide a look at what
the group thought about the program and how they each fared from a
knowledge and skills acquisition perspective compared to the group.
Some measures—those representing utility of knowledge gain—are
often used to predict actual application of knowledge and skills.
System Perspective
The system represents those people and functions that support
learning within an organization. This includes participant supervisors,
participant peers and team members, executives, and support
functions, such as the IT department or the talent development
function. In many cases, the system is represented by the client.
Although Level 3 data provide evidence of participant application of
newly acquired knowledge and skills, the greatest value in evaluating
at this level is in determining the extent to which the system supports the transfer of learning to the job.
Economic Perspective
The economic perspective is typically that of the client—the person or
group funding the program. Although the supervisor will be interested
in whether the program influenced business outcomes and the ROI, it
is the client—who is sometimes the supervisor, but more often senior
management—who makes the financial investment in the program.
Levels 4 and 5 provide data representing the economic value of the
investment.
Table 2-10 presents the value perspectives compared with the
frequency of use of the data provided by each level of evaluation.
Although there is value at all levels, the lower levels of evaluation are
implemented most frequently and tend to be of greater value to clients.
This is due to the feasibility of conducting evaluations at the lower
levels versus the higher levels.
Table 2-10. Value Perspective Versus Use
• Level 3 (Application and Implementation): System perspective
• Level 4 (Impact): Economic perspective
• Level 5 (Return on Investment): Economic perspective
Feasibility
Program evaluations have multiple purposes—when you evaluate at
Level 5 to influence funding decisions, you still need Level 1 data to
help you improve delivery and design. This is one reason the lower
levels of evaluation are conducted more frequently than the higher
levels. Other drivers that determine the feasibility of evaluating
programs to the various levels include the program objectives, the
availability of data, and the appropriateness for ROI.
Program Objectives
As described earlier, program objectives are the basis for evaluation.
Program objectives drive the design and development of the program
and show how to measure success. They define what the program is
intended to do, and how to measure participant achievement and
system support of the learning transfer process. All too often, however,
minimal emphasis is placed on developing objectives and their defined
measures at the higher levels of evaluation.
Availability of Data
A question to consider is “Can you get the information you need to
determine if the objectives are met?” The availability of data at Levels
1 and 2 is rarely a concern. Simply ask for the opinion of the program
participants, test them, or facilitate role plays and exercises to assess
their overall knowledge, skills, and insight. Level 3 data are often
obtained by going to participants, their supervisors, their peers, and
their direct reports. The challenge is in the availability of Level 4 data.
While the measures are typically monitored on a routine basis, the
question is often how the talent development team can access them.
The first step is to determine where they are housed, and then build a
relationship such that the owners of the measures will partner with you
so you can access the data you need. Occasionally, reliance on
participants to provide information on the measures is the best
approach. But if they are not the right audience, how will they access
the data?
Program objectives and data availability are key drivers in
determining the feasibility of evaluating a program to ROI; however,
some programs are just inappropriate for ROI.
Noted
Not all programs are suitable for impact and ROI evaluation; but when you do evaluate to these levels, use
at least one method to isolate the effects of the program and credibly convert data to monetary value.
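As a concrete illustration of that rule, here is a hypothetical Python sketch, with invented figures, pairing one isolation technique (a comparison with a control group that did not receive the program) with a standard-value conversion to money.

# Same business measure, tracked for a trained group and a similar
# untrained control group after the program (invented weekly figures).
experimental = [22, 25, 24, 27, 26, 25]  # e.g., units sold per rep per week
control      = [21, 22, 20, 23, 22, 21]  # comparable reps, no program

def avg(xs):
    return sum(xs) / len(xs)

program_effect = avg(experimental) - avg(control)  # improvement isolated to the program
print(round(program_effect, 2))  # 3.33 units per rep per week

# Convert to money using a standard value (assumed for illustration).
value_per_unit = 90.0  # profit contribution per unit
reps, weeks = 40, 48
annual_benefit = program_effect * value_per_unit * reps * weeks
print(f"${annual_benefit:,.0f}")  # $576,000 in annual monetary benefits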
Cost Categories
This section includes all costs for the program. These costs include the
needs assessment, program design and development, program
Intangible Benefits
Not all measures will be converted to monetary value. There is a four-
part test in chapter 5 that helps you decide which measures to convert
and which not to convert. Those measures you choose not to convert to
monetary value are considered intangible benefits. Move the Level 4
measures that you don’t convert to monetary value to this column.
Comments
The final element of the ROI analysis plan is the comments. Here, you
can put notes to remind yourself and your evaluation team of key
issues, comments regarding potential success or failure of the
program, reminders for specific tasks to be conducted by the
evaluation team, and so forth.
The importance of planning your data collection for your ROI
analysis cannot be stressed enough. Planning in detail what you are
going to ask, how you are going to ask, who you are going to ask, when
you are going to ask, and who will do the asking, along with the key
steps in
Table 2-12. Completed Data Collection Plan
Program: Effective Meetings   Responsibility:   Date:

Level 1: Reaction and Planned Action
• Program objective: Identify the extent and cost of meetings
• Measure of success: Given cost guidelines, identify the cost of the last three meetings
• Data collection method: Meeting profile
• Data sources: Participants
• Timing: At the beginning of the program (pre)
• Responsibilities: Facilitator

Level 2: Learning
• Program objective: Identify positives, negatives, and implications of basic meeting issues and dynamics
• Measure of success: From a list of 30 positive and negative meeting behaviors, correctly identify the implications of each behavior
• Data collection method: Written test
• Timing: At the end of the program (post)

Level 3: Application and Implementation
• Program objective: Use of effective meeting behaviors
• Measure of success: Reported change in behavior in planning and conducting meetings
• Data collection method: Action plan
• Data sources: Participants
• Timing: Three months
• Responsibilities: Program owner

• Program objective: Time savings from fewer meetings, shorter meetings, and fewer participants (hours savings per month)
• Measure of success: Time savings
• Data collection method: Questionnaire (for three groups)
• Data sources: Participants
• Timing: Three months
• Responsibilities: Program owner

Level 5: ROI
• Target an ROI of at least 25%

Comments:
Getting It Done
Now it is time for you to go to work. Before you go any further in this
book, select a program that is suitable for ROI. If this is your first ROI
study, consider selecting a program in which you are confident that
success will be achieved. Success with your first study is an incentive
for the next one.
Once you have identified the program, answer the questions
presented in Exercise 2-1. In the next chapter, you will learn methods
for collecting data and begin developing the data collection plan (Table
2-14 ).
Exercise 2-1
Program:
Evaluation Team:
Expected Date of Completion:
4. Transfer your answers to questions 2 and 3 to the first two columns in the data collection plan (Table 2-14).
Table 2-13. Completed ROI Analysis Plan
Program: Effective Meetings    Responsibility:    Date:

Data Items (Usually Level 4): Time savings; miscellaneous business measures
Methods for Isolating the Effects of the Program: Participants' estimates (for both measures)
Methods of Converting Data to Monetary Values: Hourly wage; participants' estimates using standard values (when available)
Cost Categories: Prorated cost of needs assessment; program fee per participant; travel, lodging, and meals; facilities; participants' salaries plus benefits for time in workshop; evaluation cost
Intangible Benefits: Improvement in individual productivity not captured elsewhere; stress reduction; improved planning and scheduling; greater participation in meetings
Communication Targets for Final Report: Business unit president; senior managers; managers of participants; participants; training and development staff
Other Influences or Issues During Application: Participants must see the need for providing measurement; follow-up process will be explained to participants during the program; three groups will be measured; participants must report productivity gains due to time saved
Comments: Participants will identify specific improvements as a result of meetings being conducted more effectively
Table 2-14. Data Collection Plan
Program:    Responsibility:    Date:

Level (1 through 5, with Level 5 being ROI) | Program Objectives | Measures of Success | Data Collection Method | Data Sources | Timing | Responsibilities
3 Collect Data
This chapter presents the basics in collecting data for your ROI study, which includes:
• selecting the data collection method
• defining the source of data
• determining the time of data collection.
Thank you for participating in the Leading Change in Organizations course. This is your opportunity to provide feedback as to how
we can improve this course.
Please respond to the following questions regarding your perception of the program as well as your anticipated use of the skills
learned during the program. We also would like to know how you think the skills applied from this course will affect business
measures important to your function.
6. The instructor discussed how I can apply the knowledge and skills taught in the class. ❑ ❑ ❑ ❑ ❑

II. Your reaction to the course content (1 = Strongly Disagree, 5 = Strongly Agree)
IV. Your expected application of knowledge and skills (1 = Strongly Disagree, 5 = Strongly Agree)
❑ 0% ❑ 10% ❑ 20% ❑ 30% ❑ 40% ❑ 50% ❑ 60% ❑ 70% ❑ 80% ❑ 90% ❑ 100%
17. On a scale of 0% (not at all) to 100% (extremely critical), how critical is applying the content of this course to your job success?
❑ 0% ❑ 10% ❑ 20% ❑ 30% ❑ 40% ❑ 50% ❑ 60% ❑ 70% ❑ 80% ❑ 90% ❑ 100%
18. What percentage of the new knowledge and skills learned from this course do you estimate you will directly apply to your job?
❑ 0% ❑ 10% ❑ 20% ❑ 30% ❑ 40% ❑ 50% ❑ 60% ❑ 70% ❑ 80% ❑ 90% ❑ 100%
19. What potential barriers could prevent you from applying the knowledge and skills learned from this course?
20. What potential enablers will support you in applying the knowledge and skills learned from this course?
1 2 3 4 5
Productivity ❑ ❑ ❑ ❑ ❑
Sales ❑ ❑ ❑ ❑ ❑
Quality ❑ ❑ ❑ ❑ ❑
Costs ❑ ❑ ❑ ❑ ❑
Time ❑ ❑ ❑ ❑ ❑
Job Satisfaction ❑ ❑ ❑ ❑ ❑
Customer Satisfaction ❑ ❑ ❑ ❑ ❑
Table 3-2. Action Plan
1. ________________________________________________________________
2. ________________________________________________________________
3. ________________________________________________________________
4. ________________________________________________________________
5. ________________________________________________________________
At Level 2, data are collected using a variety of techniques to
determine if learning occurred.
Fundamental questions answered at Level 2 address:
• new knowledge and skills acquired
• improvement in knowledge and skills
• confidence to apply knowledge and skills.
While it is sometimes assumed that testing is the only technique to
measure knowledge and skill acquisition, there are many other
techniques to gather this information. These include:
• written tests and exercises
• criterion-referenced tests
• performance demonstrations
• performance observations
• case studies
• simulations
• peer assessments
• self-assessments
• skill- and confidence-building exercises.
Technology-enabled learning allows for easier and more integrated
data collection at these lower levels of evaluation. By building
questions and exercises into a mobile module, for example, participants
can respond seamlessly as part of the program. Simple online polling
tools are excellent for incorporating data collection into in-person
talent development events. By asking questions about the usefulness of
content throughout a program and capturing those data in real time,
facilitators can address issues and improve programs as they go.
Games can also allow for real-time knowledge checks and the
opportunity for deeper dive feedback in class.
Integrating data collection during program implementation is
relatively easy given the tools available. The real data collection challenge is in the follow-up, when you want to know whether people are applying what they learned and how much the impact measures are improving.
Questionnaire ✓ ✓
Interviews ✓
Focus groups ✓
Program assignments ✓
Action planning ✓ ✓
Performance contracting ✓ ✓
Performance monitoring ✓ ✓
Monetary values ✓
Cost data ✓
Questionnaires
Questionnaires are the most frequently used data collection technique in ROI evaluations. They are inexpensive and easy to administer, and, depending on their length, they take very little of the respondent's time. Questionnaires can be sent via mail, internal mail, or email, or they can be distributed online, either posted on an intranet site or through an electronic survey tool.
Questionnaires also provide versatility in the types of data you can
collect. For example, you can gather data about the demographics of
participants, attitudes toward the program, knowledge gained during
the program, or how the participants applied that knowledge. In the
questionnaire, you can ask respondents to tell how much a particular
measure is worth. Participants, through a questionnaire, can tell how
much a measure has improved. They can also identify other variables
that influenced improvements in a given measure, and they can tell the
extent of the influence of those variables.
Questions can be open-ended, closed, or forced-choice. Participants may be asked to select multiple responses or one response from an array of options. Likert scale questions are very common in follow-up questionnaires, as are frequency scales, ordinal scales, and paired-comparison scales.
Noted
Technology enables us to ask questions in ways that make analysis easier than ever. For example, questions about monetary value can be asked and calculated automatically, so that neither the respondents nor the talent development professional has to worry about the math. Qualtrics.com is one such tool that provides survey developers and respondents with an improved survey experience.
Interviews
Interviews are the ideal method of data collection for a deep dive into an issue. They allow you to get more precise data than questionnaires, action plans, and even focus groups. Interviews can be
conducted in person or over the phone; online tools like web and video
conferencing platforms make participation in the interview process
more accessible to hard-to-reach target audiences.
5 4 3 2 1 N/A
Productivity ❑ ❑ ❑ ❑ ❑ ❑
Sales ❑ ❑ ❑ ❑ ❑ ❑
Quality ❑ ❑ ❑ ❑ ❑ ❑
Costs ❑ ❑ ❑ ❑ ❑ ❑
Efficiency ❑ ❑ ❑ ❑ ❑ ❑
Time ❑ ❑ ❑ ❑ ❑ ❑
Employee Satisfaction ❑ ❑ ❑ ❑ ❑ ❑
Customer Satisfaction ❑ ❑ ❑ ❑ ❑ ❑
2. What other measures were positively influenced by coaching?
3. Of the measures listed above, improvement in which one is most directly linked to coaching? (Check only one)
4. Please define the measure above and its unit of measurement.
5. How much did the measure identified in Questions 3 and 4 improve since you began this process?
7. Recognizing that other factors may have caused this improvement, estimate the percentage of improvement related directly to coaching.
8. For this measure, what is the monetary value of improvement for one unit of this measure? (Although this is difficult, please
make every effort to estimate the value.)
9. Please state your basis for the estimated value of improvement you indicated above.
10. What is the annual value of improvement in the measure you selected above?
11. What confidence do you place in the estimates you have provided in the prior questions? (0 percent is no confidence, 100
percent is complete certainty.)
Interviews are used when the evaluator needs to ask complex
questions, or the list of response choices is so long that it becomes
confusing if administered through a questionnaire. In-person
interviews are conducted when the information collected through the
interview process is considered confidential or when the respondent
would feel uncomfortable providing the information on paper (or
electronically) or over the phone. They also are useful when there is a
need to probe for more detail.
Interviews can be structured or unstructured. Structured interviews
work exactly like a questionnaire, except that there is a face-to-face
rapport between the evaluator and the respondent. The respondent has
the opportunity to elaborate on responses, and the evaluator can ask
follow-up questions for clarification. Unstructured interviews allow
greater depth of dialogue between the evaluator and the respondent.
Virtual interviews using online platforms simulate the in-person
interview without the cost of traveling to the same physical room.
While some people are less comfortable with this technology, it has
many advantages in addition to travel cost avoidance. Virtual face-to-
Focus Groups
Focus groups are a good way to get important information from a
group of people when dialogue among the group is important. Focus
groups work best when the topic is important to the participants. High-quality focus groups, built on well-crafted questions, produce discussions that address exactly the topics you want to hear about. The key to successful focus groups, however, is keeping
the focus group on topic. While focus groups are used for group
discussion, a fair amount of planning goes into designing the protocol.
The conversations that transpire during the focus group are
constructed conversations focusing on a key issue of interest. Table 3-5
presents a sample focus group protocol used to collect Level 3 data for
a study of an emergency response support program.
Noted
Collecting data using qualitative techniques such as interviews and focus groups is a noble idea, but one that often falls
short of its real potential. Two challenges present themselves. The first challenge is transcribing interview and focus group
responses. The second is making meaning out of the data. Gig workers, machine learning, and artificial intelligence (AI) are
enabling researchers to tackle both issues with more ease than in the past, enabling evaluators to leverage the value
qualitative data have to offer.
Table 3-5. Focus Group Protocol for a Study of an Emergency Response Support Program
This focus group is intended to help us understand how knowledge and skills gained in the program have been applied (Level 3).
During the focus group you will identify effectiveness with application, frequency of application, barriers, and enablers to
application.
What to Do
What to Take
1. Directions.
2. Point of contact’s telephone numbers.
3. Tent cards. Each tent card should have a number in a corner. Participants can write their first name just so you can call them by name, but your notes will refer to the participant number.
4. Refreshments—something light, but a treat because people respond to food, and it relaxes the environment.
5. Flipchart.
6. Markers for the tent cards and the flipchart.
7. Focus group notepads.
8. An umbrella.
What to Wear
You will be in a comfortable environment, so ties and high-heels are not necessary, but do dress professionally. No jeans and
tennis shoes: business casual.
What to Say
The intent is to understand how participants are applying what they learned during training. Start on time. You do not want to
keep the participants over the allotted time.
Questions
Each person will answer each question before moving to the next question. The idea is to allow each person to hear what the
others say so that they can reflect on their responses. You want to know what each individual thinks.
1. Now that you have had a chance to apply what you learned regarding your emergency response duties, how effectively
have you been able to execute those duties?
2. What specific barriers have interfered with your ability to execute your duties?
3. What has supported your efforts?
Location:
Facilitator:
Question:
Notes | Notable Quotes
Action Plans
In some cases, action plans are incorporated into the talent
development program, with participants completing one prior to
leaving the program. Action plans are used to collect Level 3:
Application and Implementation and Level 4: Impact data. They can
also be used to collect monetary values of measures and isolate
program effects when more robust approaches are inappropriate.
Action plans are completed during the program. The results,
categorized as Level 1: Reaction and Planned Action, give participants
a road map toward implementation of content. Sometime after the
program, the action plans will be reviewed to determine if actions
actually occurred, resulting in Level 3: Application and Implementation
data. Using action plans as a tool to collect Level 4: Impact data,
however, takes more effort.
Table 3-6 shows an action plan used to collect Levels 3 and 4 data.
In Section A, the participants include their name, objective, evaluation
period, measure for improvement, current performance, and target
performance. Identifying these measures prior to the program is an
important step in securing credible follow-up data. During the
program, participants complete Sections B and C with specific steps
they will take, the end results of those steps, and expected intangible
benefits. Questions 1, 2, and 3 in Section E are completed prior to the
participant coming to the program.
After the evaluation period ends, participants complete Questions 4,
5, 6, and 7 in Section E. In answering Question 4, the participant
indicates how much that measure actually changed during the last
month of the evaluation period
compared to the average before the training. The participant also explains the basis for this change. It is important that all claims of

Basic Rule 3
Extreme data items and unsupported claims should not be used in ROI calculations.
Table 3-6. Sample Action Plan
Name: ________    Objective: Elimination of unnecessary gelatin mass waste
Evaluation Period: June 1 to November 30
Improvement Measure: Gelatin mass (kg) wasted monthly    Target Performance: ________    Follow-Up to Date: ________
Expected Intangible Benefits: Gel mass waste will decrease to a minimum over time, reducing the probability of leftover medicine batches.
Sections A, B, and C capture the measure and targets, the action steps (1 through 5), and the end result.
Performance Records
Performance records are records of standard data used throughout the organization to report performance status for a variety of functions. It would be a wise investment of your time to learn
what data are currently housed within your organization, who has
access to the data, and how you can best access the data if you need
to. You may find there is more available than you think.
Response Rates
An often-asked question when considering the data collection process is, "How many responses do you need in order for the data to be valid and usable?" The typical approach to determining the
response rate needed for an accurate story of success and valid
evaluation results is to first consider the population and how diverse or
homogeneous the population is in terms of factors that could influence
their responses. The next consideration is how confident you want to
be that the sample responds the same as the population would given a
certain margin of error. Based on these considerations, you would
target a sample that was smaller than the population, and, with the
results, infer the findings to the larger population. While sampling has
more technical components than a simple calculator provides, you can
get a good rough estimate of an appropriate sample size by using the
various online calculators available to you.
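As a rough illustration of what those calculators do, here is a minimal sketch in Python using Cochran's formula with a finite population correction. The 95 percent confidence level (z = 1.96), 5 percent margin of error, and maximum-variability assumption (p = 0.5) are illustrative defaults, not values prescribed by the ROI Methodology.

import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    # Cochran's formula for an effectively infinite population
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    # Finite population correction shrinks the requirement for small groups
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(50))    # a population of 50 calls for roughly 45 responses
print(sample_size(1000))  # a population of 1,000 calls for roughly 278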
The thing to remember is that a population is made up of people
who know the answers to the questions you ask. When evaluating a
program, the population includes people who have been through it; the
population is not the target audience. Typically, when conducting an
ROI study, the population is a small group and results are reported for
that group. While others may have completed the program, capturing
enough data to make a statistically significant inference based on the
study group is often not feasible. The more people involved in an
evaluation, the more the evaluation will cost. So, the results of the
program evaluation focus on the study group. Then, logical inference,
versus statistical inference, comes into play when answering the question, "How likely is it that the results reflect those of the entire group?"
Another issue to consider that can affect the results of the study
group is the response rate from that group. Without enough responses
from your small study group, you could face a problem making
assumptions for that group. Let’s say you have a program you plan to
implement and evaluate that will include a population of 50 people.
Based on a sample size calculation, you would
Utility
The last consideration when selecting a data collection method is
utility. How useful will the data be, given the type of data you’ll be
collecting through the data collection process? Data collected through
a questionnaire can be easily coded and put into a database and
analyzed. With the help of automation, data generated through a
questionnaire can quickly be summarized and the story of success be
told. Data collected through focus groups and interviews, however, call
for a more challenging approach to analysis. Though you often take
those stories collected through dialogue with your respondents and
summarize the story in your report, a better analysis of what your
respondents are telling you can be conducted. This requires developing
themes for the data collected and coding those themes for statistical
analysis. This type of analysis can be quite time consuming and, in
some cases, frustrating if you do not immediately compile the data at
the conclusion of the interview or focus group. Although you often
make mental notes during data collection of this type, you will quickly
lose those notes if you don’t record them in some structured way.
Another issue with regard to utility is what you can do with the data. If the question does not map back to an objective of the program,
reconsider asking it. Also, whether the programs are being offered
through a corporate, government, nonprofit, community, or faith-based
Performance Records
Given the variety of sources for the data, one of the most credible will be your organization's internal performance records. These records
are not an individual’s performance record kept between the
individual, the supervisor, and HR. Rather they reflect performance in
a work unit, department, division, region, or organization. Performance
records can include all types of measures that are usually readily
available throughout the organization. This is the preferred method of
data collection for Level 4 evaluation, because it usually reflects
business impact data.
Participants
Participants are the primary and most widely used source of data for
ROI analysis. They are always asked about their reaction to the
program, and they are whom you assess to determine if learning has
occurred. They know what they do with what they learned when they
return to the job, as well as what may prevent them from applying
what they learned on the job. In addition, they are the ones who realize
what impact their actions have on the job.
Although many people perceive participants as the most biased
option, you have to keep in mind that people are typically honest. If you explain and reinforce to the participants that the evaluation is about the program, not about them, they will be more likely to set aside their personal feelings and provide more objective data than if they are uncertain of your intent.
Customers
If measures for improvement are customer related, the customer is the
best source of data. A simple example is if you want to know whether
the cashier acknowledged the customer in a friendly tone when the
customer walked up to check out; the credit card reader can pose a
question to the customer who, at the point of sale, can respond. Or, if
you want to know whether the customer’s perception of customer
service has improved, you can send a survey or text and have them
respond. Customer data are routinely collected and in a variety of
ways; the customer’s voice is the ultimate voice you want to hear when
determining the direction of your business. But from a training
evaluation perspective, the customer’s voice is not easily accessible.
So, you may need to rely on other sources to provide input from the
customer.
Other Sources
Internal and external experts and external databases provide good sources of data for some measures. Experts such as the business
intelligence unit or the human capital analytics team can offer insights
into improvement in measures that you may not have available. Experts
and databases can also be resources for the monetary value of
measures when you need it. The key to success in selecting sources of
Noted
Determining the timing of data collection for follow-up data can be tricky, so it is important to make the timing decision when establishing the program objectives. When deciding on the timing, consider the current state of the measure, the time it will take for participants to use what they learn on a routine basis, the availability of the data, and the convenience and constraints of collecting it.
Getting It Done
In the previous chapter, you read about developing objectives and you
worked through the process of defining the measures of your program.
Now it is time to complete the data collection plan. Complete the data
collection plan from chapter 2 (Table 2-14) by noting which data
collection method you plan to use to collect your data at the various
levels, the sources of your data, the timing for your data collection, and
the person or team responsible for the data collection.
In the next chapter, you will learn to make your results credible by
isolating the effects of the program from other influences that may
have contributed to business impact.
4 Isolate Program Impact
This step in the ROI Methodology attempts to delineate the direct contribution of the talent development program, isolating it from other influences. This chapter covers three critical areas:
• understanding why isolating impact is a key issue
• identifying the methods to do it
• building credibility with the process.
ness results measures. This step recognizes that other factors are almost always present and that the credit for improvement is shared with other functions in the organization. Just taking

Basic Rule 5
Use at least one method to isolate the effects of a project.
Control group (untrained): measurement
Experimental group (trained): program, then measurement
Case Study
Retail Merchandise Company (RMC) is a national chain of 420 stores.
The executives at RMC were concerned about the slow sales growth and
were experimenting with several programs to boost sales. One of their
concerns focused on the interaction with customers. Sales associates
were not actively involved in the sales process, usually waiting for a
customer to make a purchasing decision and then proceeding with
processing the sale. Several store managers had analyzed the situation
to determine if more communication with the customer would boost
sales. The analysis revealed that simple techniques to probe and guide
the customer to a purchase should boost sales in each store.
The senior executives asked the talent development staff to
experiment with a customer interactive skills program for a small group
of sales associates. The training staff would prefer a program produced
by an external supplier to avoid the cost of development, particularly if
112
Isolate Program Impact
the program was not effective. The specific charge from the management
team was to implement the program in three stores, monitor the results,
and make recommendations.
The talent development staff selected an interactive selling skills
program, which makes significant use of skill practices. The program
includes two days of training in which participants have an opportunity
to practice each of the skills with a fellow classmate, followed by three
weeks of on-the-job application. Then, there’s a final day of training that
includes a discussion of problems, issues, barriers, and concerns about
using the skills. Additional practice and fine-tuning of skills also take
place in the final one-day session. At RMC, this program was tried in the
electronics area of three stores, with 16 people trained in each store.
One of the most important parts of this evaluation is isolating the
effects of the training program. This is a critical issue in the planning
stage. The key question is, “When sales data are collected three months
after the program is implemented, how much of the increase in sales, if
any, is directly related to the program?” Although the improvement in
sales may be linked to the talent development program, other
nontraining factors contribute to improvement. Though the cause-and-effect relationship between training and performance improvement can be confusing and difficult to prove, it can be established with an acceptable degree of accuracy. In the planning process, the challenge is
to develop one or more specific strategies to isolate the effects of
training and include it on the ROI analysis plan.
In this case study, the issue was relatively easy to address. Senior
executives gave the talent development staff the freedom to select any
stores for implementation of the pilot program. The performance of the
three stores selected for the program was compared with the
performance of three other stores that were identical in every way
possible. This approach represents the most accurate way to isolate the
effects of a program. Although other strategies, such as trend line
analysis and estimation, would have also been feasible, the control group
analysis was selected because of the appropriateness of the situation and
the credibility of the analysis. The challenge in using a control versus
experimental group is to appropriately select both sets of stores.
While other factors could have had an influence on sales, there was
up-front agreement that these four criteria would be used to select three
stores for the pilot program and match them with three other stores. As
a fallback position, in case the control group arrangement did not work,
participant estimates were planned.
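A minimal sketch of the comparison itself, assuming three-month average sales per store; the figures are invented, because the case study does not report them.

# Hypothetical three-month average sales per store (the case study does
# not report the actual figures).
pilot_stores = [46_800, 51_200, 48_500]    # trained (experimental) group
control_stores = [44_100, 47_900, 45_600]  # matched untrained group

avg_pilot = sum(pilot_stores) / len(pilot_stores)
avg_control = sum(control_stores) / len(control_stores)

# With well-matched stores, the difference is attributed to the program.
print(f"Average pilot store sales:   ${avg_pilot:,.0f}")
print(f"Average control store sales: ${avg_control:,.0f}")
print(f"Improvement attributed to training: ${avg_pilot - avg_control:,.0f}")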
last group. These naturally occurring control groups often exist in major
talent development program implementations.
A second problem is that the control groups must be addressed early
enough to influence the implementation schedule so that similar groups
can be used in the comparison. Dozens of factors can affect employee
performance, some of them individual and others contextual. To tackle
the issue on a practical basis, it is best to select three to five variables
that will have the greatest influence on performance.
A third problem with the control group arrangement is
contamination, which can occur when participants in the program
influence others in the control group. Sometimes the reverse situation
occurs when members of the control group model the behavior from the
trained group.
In either case, the experiment becomes contaminated because the
influence of the program filters to the control group. This can be
minimized by ensuring that control groups and experimental groups are
at different locations, have different shifts, or are on different floors in
the same building. When this is not possible, it is sometimes helpful to
explain to both groups that one group will receive training now and
another will receive training at a later date. Also, it may be helpful to
appeal to the sense of responsibility of those being trained and ask them
not to share the information with others.
Noted
A challenge arises when the control group outperforms the experimental group. In some cases, the program was, in fact, a poor solution to the opportunity. But more often than not, when the control group outperforms the experimental group, there is a problem with the research design. Therefore, it is important to have an alternative approach readily available to determine how much improvement is due to the program.
A fourth problem is the length of the evaluation period: The longer the experiment goes on, the greater the likelihood that other influences will affect the
results. More variables will enter into the situation, contaminating the
results. On the other end of the scale, there must be enough time so that
a clear pattern can emerge between the two groups. Thus, the timing for
control group arrangement must strike a delicate balance of waiting long
enough for their performance differences to show but not so long that
the results become seriously contaminated.
A fifth problem occurs when the different groups function under
different environmental influences because they may be in different
locations. Sometimes the selection of the groups can help prevent this
problem from occurring. Also, using more groups than necessary and
discarding those with some environmental differences is another tactic.
A sixth problem with using control groups is that it may appear to be
too research oriented for the organization. For example, management
may not want to take the time to experiment before proceeding with a
program, or they may not want to withhold a program from a group just
to measure the impact of an experimental program. Because of this
concern, some professionals do not entertain the idea of using control
groups. When the process is used, however, some organizations conduct
it with pilot participants as the experimental group and nonparticipants
as the control group. Under this arrangement, the control group is not
informed of their control group status.
The primary advantage of using control versus experimental groups is
that it is the gold standard in demonstrating cause and effect. This level
of credibility is important when reporting results of a major talent
development initiative. When the experimental and control groups are
evenly matched and the program is the only other factor, it’s difficult for
someone to push back on results. In today’s era of agility and analytics,
experimentation is becoming more acceptable than in the past. Senior
leaders recognize the need to pivot quickly if a program is not working;
Case Study
In a warehouse where documents are shipped to fill consumer orders,
shipment productivity is routinely monitored. For one particular team,
the shipment productivity is well below where the organization desires it
to be. The ideal productivity level is 100 percent, reflecting that the
actual shipments equal the scheduled shipments.
Figure 4-2. Shipment productivity by month before and after the team training program (pre-program average 87.3 percent; goal 100 percent)
Figure 4-2 shows the data before and after the team training
program. As shown in the figure, there was an upward trend on the data
prior to conducting the training. Although the program apparently had a
dramatic effect on shipment productivity, the trend line shows that
improvement would have continued anyway, based on the trend that had
been previously established. It is tempting to measure the improvement
by comparing the average six months of shipments prior to the program
(87.3 percent) to the actual average of six months after the program
(94.4 percent), yielding a 7.1 percentage point difference. However, a
more accurate comparison is the six-month actual average after the
program (94.4 percent) compared with the trend line (92.3 percent); the difference of 2.1 percentage points is the improvement that can credibly be attributed to the program.
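The trend-line arithmetic can be restated as a short Python sketch. The monthly pre-program values below are invented so that their six-month average matches the reported 87.3 percent; statistics.linear_regression requires Python 3.10 or later.

from statistics import linear_regression, mean

# Hypothetical pre-program shipment productivity (%) for months 1-6;
# chosen so the average equals the reported 87.3 percent.
months = [1, 2, 3, 4, 5, 6]
pre_program = [85.2, 86.1, 86.9, 87.7, 88.5, 89.4]

slope, intercept = linear_regression(months, pre_program)

# Project the pre-program trend across the six post-program months (7-12)
projected = mean(slope * m + intercept for m in range(7, 13))
actual_post = 94.4  # reported six-month post-program average

print(f"Projected by trend line: {projected:.1f}%")            # about 92.3%
print(f"Attributed to the program: {actual_post - projected:.1f} points")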
Mathematical Modeling
A more analytical approach to isolating program effects is the use of
mathematical modeling. Correlation does not equal causation; this has
been proven time and again. Yet, scientists continue to demonstrate
approaches in which, under certain circumstances, correlational analysis
indicates causal outcomes (Mooij et al. 2016). Through the development
of models using robust statistical analysis, an evaluator can demonstrate
a relatively reliable connection between performance variables. This
approach represents a mathematical interpretation of the trend-line analysis just described.
Case Study
A basic example of how this type of analysis can be employed is in a
retail setting where two investments were being made to increase sales:
advertising and training. The marketing and advertising team tracked
investment in advertising and sales over time. Using the method of least
squares, they found that there was a mathematical relationship between
advertising and sales: Y = 140 + 40X, where Y represented the daily
sales per employee and X represented the investment in advertising
divided by 1,000. Prior to the program the average daily sales, using a
one-month average, was $1,100. The investment in advertising was
$24,000. In formula form: $1,100 = 140 + 40(24).
Six months after the program, average sales on a monthly basis was
$1,500. Investment in advertising was $30,000. However, there had
been a training program during that six months. While the senior
executive and talent development team discussed other factors, they
agreed the only other “significant” factor that could have influenced
sales was the training program. To account for the increase, the first
step was to solve for the contribution of advertising using the
mathematical formula: Y = 140 + 40(30). The output showed that
average sales due to advertising was $1,340. The difference between the
121
Isolate Program Impact
$1,500 and $1,340 was $160. This difference was attributed to the
training program.
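The case study's arithmetic, restated as a short Python sketch; the model Y = 140 + 40X comes from the case study itself.

def predicted_daily_sales(advertising_dollars):
    # Least-squares model from the case study: Y = 140 + 40X,
    # where X is the advertising investment divided by 1,000
    return 140 + 40 * (advertising_dollars / 1000)

before = predicted_daily_sales(24_000)              # $1,100, matching pre-program sales
due_to_advertising = predicted_daily_sales(30_000)  # $1,340 explained by advertising

actual_after = 1_500
attributed_to_training = actual_after - due_to_advertising  # $160 per employee per day
print(before, due_to_advertising, attributed_to_training)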
The focus group session should take about one hour (slightly more if
there are multiple factors affecting the results or there are multiple
business measures). The facilitator should be neutral to the process (that
is, the same individual conducting the program should not conduct this
focus group).
The task is to link a talent development program to business
performance. The group is presented with the improvement, and they
provide input on how much of the improvement is due to the program.
Twelve steps are recommended to arrive at the most credible value for
learning impact:
1. Explain the task. The first step is to describe the task to
members of the focus group. Participants should be clear that
there has been improvement in performance. While many factors
could have contributed to the performance, the task of this group
is to determine how much of the improvement is related to the
specific program.
2. Discuss the rules. Each participant should be encouraged to
provide input, limiting comments to two minutes per person for
any specific issue. Comments are confidential and will not be
linked to a specific individual.
3. Explain the importance of the process. The participant’s role
in the process is critical. Because it is their new actions,
behaviors, or processes that have led to performance improvement
in measures related to their work, they are in the best position to
indicate what has caused this improvement; they are the experts
in this process. Without quality input, the contribution of the
program (or any other processes) may never be known.
4. Select the first measure and show the improvement. Using
actual data, show the level of performance prior to and following
the program; in essence, the change in business results is
Total 100%

provide input into the actual monetary value of the unit.

Basic Rule
Adjust estimates of improvement for potential errors of estimation.
Questionnaire Approach
Sometimes focus groups are not available or are considered
unacceptable for use in data collection. The participants may not be
available for a group meeting, or the focus groups may become too
expensive. In these situations, it may be helpful to collect similar
information via a questionnaire. With this approach, participants address
the same issues as those addressed in the focus group, but through a series of impact questions embedded in a follow-up questionnaire.
The questionnaire may focus solely on isolating the effects of talent
development, as detailed in the previous example, or it may focus on the
monetary value derived from the program, with the isolation issue being
• Because only annualized values are used, it is assumed that there are no benefits from the program after the first year of implementation.

Basic Rule
If no improvement data are available for a population or from a specific source, it is assumed that little or no improvement has occurred.
Table 4-2. Sample of Input From Participants in a Leadership Program for New Managers

Participant No. | Improvement ($) | Basis | Confidence | Contribution From Program | Adjusted Value ($)
42 | $90,000 | Turnover reduction. Two turnover statistics per year. Base salary × 1.5 = $45,000 | 90% | 40% | $32,400
117 | $8,090 | Team project completed 10 days ahead of schedule. Annual salaries: $210,500 = $809 per day × 10 days | 90% | 45% | $3,276
118 | $159,000 | Under budget for the year by this amount | 100% | 30% | $47,700
Total | | | | | $113,721
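The adjustment behind each row is the claimed improvement multiplied by the participant's confidence and by the fraction attributed to the program. Here is a sketch using only the three rows shown; the full table contains additional rows, which is why its total is larger than this subtotal.

rows = [
    # (annual improvement $, confidence, fraction attributed to program)
    (90_000, 0.90, 0.40),   # participant 42: turnover reduction
    (8_090, 0.90, 0.45),    # participant 117: project finished 10 days early
    (159_000, 1.00, 0.30),  # participant 118: under budget for the year
]

adjusted = [value * confidence * contribution
            for value, confidence, contribution in rows]
for amount in adjusted:
    print(f"${amount:,.0f}")               # $32,400, $3,276, $47,700
print(f"Subtotal: ${sum(adjusted):,.0f}")  # $83,376 for these three rows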
series of impact questions is the critical challenge with this process. Participants must be primed to provide data, and this can be accomplished in six ways.

Basic Rule 8
Avoid use of extreme data items and unsupported claims.

• The quantity of data will improve. Participants will understand the chain of impact and how data will be used. They will complete more questions.

Basic Rule
Use only the first year of annual benefits in ROI analysis of short-term solutions.
• The quality of the data is enhanced. With up-front
expectations, there is greater understanding of the type of data
needed and improved confidence in the data provided. Perhaps
subconsciously, participants begin to think through consequences
of training and specific result measures.
Strengthening Credibility
It is not unusual for the ROI in talent development to be high. Even when
a portion of the improvement is allocated to other factors, the numbers
are still impressive in many situations. The audience should understand
that, although every effort was made to isolate the impact, it is still a
figure that is not precise and may contain error. It represents the best
estimate of the impact given the constraints, conditions, and resources
available.
One way to strengthen the credibility of the ROI is to consider the
different factors that influence the credibility of data. Table 4-4 is a
listing of typical factors that influence the credibility of data presented to
a particular group. The issue of isolating the effects of the talent
development program is influenced by several of these credibility
factors.
The reputation of the source of the data is important to consider. The
most knowledgeable expert must provide input and be involved in the analysis of this topic. Also, the motives of the researchers can have a
major influence on perceived credibility. A third party must facilitate any
focus group that is done, and the data must be collected objectively. In
addition, the assumptions made in the analysis and the methodology of
the study should be clearly defined so that the audience will understand
the steps taken to increase the credibility. The type of data focuses
directly on the impact data: The data have changed, and the challenge is
to isolate the effects on that change. Managers prefer to deal with hard
data, typically collected from the output of most programs. Finally, by
isolating the effects of only one program, the scope of analysis is kept
narrow, enhancing the credibility.
Table 4-4. Factors That Influence the Credibility of Data
• Reputation of the source of the data
• Reputation of the source of the study
• Motives of the researchers
• Personal bias of audience
• Methodology of the study
• Assumptions made in the analysis
• Realism of the outcome data
• Type of data
• Scope of analysis
Getting It Done
In chapter 2, you were introduced to the data collection plan and the ROI
analysis plan. In chapter 3, you completed the data collection plan for a
program you plan to evaluate to ROI. Here is where you begin
completing the ROI analysis plan.
Table 4-5 provides a blank ROI analysis plan. Transfer your Level 4
measures from your data collection plan to the first column of the ROI
analysis plan. Then, identify the techniques you will use to isolate the
effects of the program from other influences and write the techniques in
the second column aligned with each Level 4 measure. Remember, this
step must be taken, so a technique should be included for each objective.
In the next chapter, you will continue completing the ROI analysis
plan.
Table 4-5. ROI Analysis Plan
Program: ________    Responsibility: ________    Date: ________

Data Items (Usually Level 4) | Methods for Isolating the Effects of the Program | Methods of Converting Data to Monetary Values | Cost Categories | Intangible Benefits | Communication Targets for Final Report | Other Influences or Issues During Application | Comments
5 Calculate ROI
To continue building credibility for your talent development programs, you need to demonstrate the
economic value they add to the organization. Specifically, in this chapter you will learn the basic
steps to move from Level 4 to Level 5 by:
• converting data to monetary value
• tabulating fully loaded costs
• calculating the ROI.
Noted
There are five levels of data. Intangible benefits are impact data not converted to money. They represent a
sixth type of data when reporting an ROI due to their importance to the organization.
behind the data, and the motive in presenting the results are all
concerns when data are somewhat questionable. Don’t risk credibility
just to calculate an ROI. Intangible measures of success may be where
you stop.
Productivity • Quality • Cost Savings and Cost Avoidance • Time • Cost

Research Rank
Vulcan Materials Company produced 195 million tons of crushed stone during 2018.
Source: Annual Report.
IAMGOLD showed an ROI of 345 percent on a leadership program involving first-level managers.
Source: Parker, L., and C. Hubble. 2015. “Measuring ROI in a Supervisory Leadership Development
Program.” In Measuring the Success of Leadership Development, by P.P. Phillips, J.J. Phillips, and R.L. Ray.
Alexandria, VA: ATD Press.
St. Mary-Corwin’s Farm Stand Prescription Pantry saved money for the organization and avoided medical
costs for recipients of service so much so that it resulted in a 650 percent ROI. Source: Phillips, P.P., J.J.
Phillips, G. Paone, and C.H. Gaudet. 2019. Value for Money: How to Show the Value for Money for All Types
of Projects and Programs. Hoboken, NJ: John Wiley & Sons.
mistakes and errors in reporting, the cost (or value) of those mistakes
is the cost incurred in reworking the report.
Historical Costs
When no standard values exist, look for historical costs. These are
costs for which there is a receipt, so to speak. Using this technique
often requires more time and effort than desired. In the end, however,
you can develop a credible value for a given measure.
An example of using historical costs is the case of a sexual
harassment prevention program that was implemented in a large
health care organization. The measure of the investigation was formal,
staff calls the expert and asks for the estimate for the economic impact
of the rate adjustment.
External Databases
Sometimes there are no standard values, no receipts, and no expertise.
When this is the case, go to databases. Today, more than any time in
the past, talent development professionals have good research at their
fingertips. External databases provide a variety of information,
including the monetary value of an array of measures. Take the use of
external databases to convert a measure to monetary value in the case
of turnover. A company implemented a stress management program,
which was driven by the excessive turnover due to the stress that came
from changing a bureaucratic, sluggish organization into a competitive
force in the marketplace. After implementing the stress management
program, turnover was reduced along with improvements in other
measures, such as productivity and job satisfaction. In calculating the
ROI, the evaluators went to a variety of databases to determine the
value of turnover for a particular employee leaving the organization.
The turnover studies used in the research revealed that each departure in this job classification was costing the organization 85 percent of the employee's annual base pay. While senior managers
thought the cost of turnover was slightly overstated using the
databases, it did give them a basis from which to begin determining
the value of this measure.
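The conversion itself is a single multiplication; in this sketch the salary figure is hypothetical, while the 85 percent factor comes from the database research described above.

annual_base_pay = 60_000     # hypothetical salary for the job classification
turnover_cost_factor = 0.85  # from the external database research
cost_per_departure = turnover_cost_factor * annual_base_pay
print(f"Cost of one departure: ${cost_per_departure:,.0f}")  # $51,000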
Estimations
When the previous methods are inappropriate and you still want to
convert a measure to monetary value, use an estimation process that
has been proven conservative and credible with executives in your
organization. The estimates of monetary value can come from
participants, supervisors, managers, and even the talent development
staff. The process of using estimation to convert a measure to
monetary value is quite simple. The data can be gathered through
focus groups, interviews, or questionnaires (discussed in chapter 3).
The key is clearly defining the measure so that those who are asked to
provide the estimate have a clear understanding of that measure.
The first step in the estimation approach is to determine who is the
most credible source of the data. Typically, the participants realize the
contribution they are making to the organization after participating in
a talent development program. But, depending on what job group those
participants work in, you might develop data that are more credible if
you go to the supervisors or managers. Only fall back on the talent
development staff when you have no other option and are under
pressure to come up with a monetary value. The concern with using talent development staff is that their ownership of the program in question increases bias and often results in a loss of credibility, especially when reporting a very high ROI.
Let’s consider an example of using estimation to convert the
measure of absenteeism to monetary value. Say you have an
absenteeism problem, you implement a solution, and, as a result, the
absenteeism problem is resolved. You now want to place a monetary
value on an absence. You have no standard value. You don’t want to
invest the resources to develop a value using historical costs. There are
no internal or external experts who can tell you. You’ve been
unsuccessful in looking for an external database. You have no other
measures that have been converted to monetary value to which you can
link absenteeism. With pressure to come up with an ROI for this
particular program, you decide to go to estimation.
The first step is determining who knows best what happened when
an unexpected absence occurred. So, to convert the measure to
monetary value, you call in five supervisors from similar work units to
discuss the issue and help develop a value for an absence. Using a
structured focus group approach, the scenario plays out as follows.
At the beginning of the focus group session, discuss the issue with
the five supervisors, explaining why they have been brought together
functions and how much they believe it’s costing them when someone
doesn’t show up for work. Now, given what happens in your
organization and your estimated costs and what you have heard from
others, how confident are you that your estimate is accurate?” After
thinking this over, Supervisor 1 says, “Well, it is an estimate, but I
know what happens when people don’t show up for work and I can be
pretty sure what it’s costing us from a time perspective. Given that it is
an estimate and I’m not totally sure, I’ll say that I am 70 percent
confident in my estimate.”
Write it next to his or her estimate. Repeat the process with
Supervisors 2, 3, 4, and 5. Table 5-4 shows the estimates of the five
supervisors and their error adjustments. Multiply each estimate by the
error adjustment, then total and average the adjusted values. The
results are an average adjusted per-day cost for one absence of $1,061.
Figure 5-2 shows what happens when you adjust original estimates
by factoring for confidence level. The top line represents the original
estimate for each supervisor. The bottom line shows the adjusted value.
The additional step to adjust and estimate for error reduces variability
in the estimates and provides a more conservative value, hence
improving the reliability of the estimated value of one absence.
Table 5-4. Absenteeism Is Converted Using Supervisor Estimates
Figure 5-2. Estimated values for Supervisors 1 through 5, original versus adjusted ($0 to $2,500)
Is there a standard value? If yes, convert the measure to money.
If no: Is there a method to get there? If no, move to intangible benefits.
If yes: Can we convince our executive in two minutes that the value is credible? If no, move to intangible benefits; if yes, convert the measure to money.
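The decision flow can be restated as a small Python function; the question wording follows the figure above, with each answer supplied as a Boolean.

def classify_measure(has_standard_value, has_conversion_method,
                     credible_in_two_minutes):
    """Decide whether a measure is converted to money or reported as intangible."""
    if not has_standard_value and not has_conversion_method:
        return "intangible benefit"
    if not credible_in_two_minutes:
        return "intangible benefit"
    return "convert the measure to money"

print(classify_measure(True, False, True))   # convert the measure to money
print(classify_measure(False, False, True))  # intangible benefit
print(classify_measure(False, True, False))  # intangible benefit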
The value that you put in step 5 is the value that goes in the numerator of the formula.
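For reference, the ROI Methodology's two standard calculations, the benefit-cost ratio and the ROI percentage, computed in a short sketch; the benefit and cost figures are invented for illustration.

def benefit_cost_ratio(benefits, costs):
    return benefits / costs

def roi_percent(benefits, costs):
    # ROI (%) = (net program benefits / program costs) x 100
    return (benefits - costs) / costs * 100

benefits, costs = 250_000, 100_000  # hypothetical fully loaded figures
print(f"BCR: {benefit_cost_ratio(benefits, costs):.2f}")  # 2.50
print(f"ROI: {roi_percent(benefits, costs):.0f}%")        # 150%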
A
• Operating costs
• Support costs
• Facility costs
• Classroom costs

B
• Administrative costs
• Participant compensation

C
• Program development costs
• Administrative costs
• Classroom costs
• Participant costs

D
• Analysis costs
• Development costs
• Implementation costs
• Delivery costs
• Evaluation costs
• Overhead and administrative costs
If you selected category D, you are correct. The analysis and the
development costs are prorated over the life of the program, so one
ROI study will not be weighed down by the full costs of analysis and
development. But a fair portion of those costs will be included. The
lifetime of the program is considered the time until a major program
change occurs. Say you are evaluating a program that will not change
for one year and you offer the program 10 times during the year. When
you conduct an ROI study on one offering of that program, your
analysis costs and your development costs will be included only at the
rate of one-tenth of the total of the analysis and development costs. The
other offerings are going to benefit from the investment in analysis and
development as well. Program materials, instructor and facilitator
costs, facilities costs, travel, lodging, meals, participant salary and
benefits, and evaluation costs are expensed—they are the direct costs.
Overhead and administrative costs, however, are allocated based on
the number of days or hours required of participants to engage in the
program. Table 5-7 provides an example. As you see in the table, the
unallocated budget in the example is $548,061. To calculate the total
number of participant-days, consider the number of days for a program
and multiply it by the number of participants.
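A sketch of the proration and allocation arithmetic; apart from the $548,061 unallocated budget, every figure below is invented for illustration.

analysis_and_development_cost = 120_000  # hypothetical one-time cost
offerings_over_program_life = 10         # the program runs unchanged for a year

# This one ROI study carries one-tenth of the one-time costs
prorated = analysis_and_development_cost / offerings_over_program_life

# Overhead is allocated on participant-days
unallocated_budget = 548_061
total_participant_days = 5_000           # hypothetical organization-wide total
per_participant_day = unallocated_budget / total_participant_days

participants, program_days = 20, 2
overhead_for_this_study = per_participant_day * participants * program_days

print(f"Prorated analysis and development: ${prorated:,.0f}")  # $12,000
print(f"Overhead allocated to this study:  ${overhead_for_this_study:,.0f}")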
Table 5-8 provides a worksheet to help you develop the fully loaded
costs for your talent development programs.
Participant costs
Salaries and employee benefits
(no. of participants × avg. salary × employee benefits factor × hrs. or days of
training time)
Meals, travel, and accommodations
(no. of participants × avg. daily expenses × days of training)
Program materials and supplies
Participant replacement costs (if applicable)
Lost production (explain basis)
Remember that intangible benefits are those that you choose not to
convert to monetary value. But they are sometimes more important
than the actual ROI calculation. Typical intangible benefits that you do
not convert to monetary value are job satisfaction, organizational
commitment, teamwork, and customer satisfaction. You can convert
these measures to monetary value; typically, however, when job
satisfaction, organizational commitment, teamwork, and customer
satisfaction are improved, you’re satisfied enough with the
improvement in these measures that the dollar value with that
improvement is not relevant.
Getting It Done
You have completed almost all the steps in the ROI Methodology. Now,
it’s time to complete the next three columns in the ROI analysis plan.
In chapter 4, you transferred your Level 4 measures to the ROI analysis
plan; you selected techniques to isolate the effects of the program on
the measure. Now, determine how you will convert these measures to
monetary value. If your measure does not pass the four-part test
explained earlier, move the measure to the intangible benefits column.
Identify the program costs that you plan to consider and those benefits
that you plan to categorize as intangibles.
In the next chapter, you will read about the final phase in the ROI
Methodology: Optimize Results. This phase requires that you
communicate results to key stakeholders and use black box thinking to
drive an increase in your talent development funding.
6 Optimize Results
This chapter describes the basics of the last phase in the ROI Methodology: Optimize Results.
Specifically, this chapter covers:
• telling the story
• developing reports
• using black box thinking.
Figure 6-1. People Analytics Competencies That Are Important But Lacking (selected results: psychometrics 24%, IT systems 14%, employment law 2%, other 11%)
Source: i4cp and ROI Institute (2018)
might not have occurred, and what the goals are to improve a program
when it results in a negative ROI. You can use the communication
process to energize the talent development staff as well as senior
management and supervisors about an upcoming program. Finally, you
can demonstrate how tools, skills, or new knowledge can be applied to
the organization.
When a pilot program shows impressive results, use this
opportunity to stimulate interest among stakeholders in continuing the
program—and among employees in participating in it. Table 6-1
provides a list of possible purposes to consider in determining why you
want to communicate the process and the results.
Participants
Participants are a critical source of data. Without participants, there
are no data. Levels 1 and 2 data should be reported back to
participants immediately after the data have been analyzed. A
summary copy of the final ROI study should also be provided to
participants. In doing so, participants see that the data they are
providing are actually being used to make improvements to the
program. This enhances the potential for additional and even better
data in future evaluations.
One caveat is when the Level 4: Impact results and the Level 5: ROI
metric could lead to a negative response from participants. If this is
the situation, manage the communication with extra care. While the
impact and ROI results are important to those funding the program and
processes, they may sometimes communicate the wrong message to
participants and other employees. Also, following up with participants
after you have adjusted a program reinforces that what participants
tell you is important to the success of the program and contributes
value to the organization as a whole.
Participants’ Supervisors
Clients
The fourth group to whom you should always communicate the results
of your ROI study is the client, the person or persons who fund the
program. Here, it is important to report the full scope of success. The
client wants to see the program impact on the business as well as the
actual ROI. Because Levels 1 and 2 data are only marginally important
to the client, it is unnecessary to report these data to the
client immediately after the program. The client's greatest interest is
in Levels 4 and 5 data. Providing the client a summary report for the
comprehensive evaluations will ensure that the client sees that
programs are successful and, in the event of an unsuccessful program,
that a plan is in place to take corrective action.
Meetings
When considering meetings as the medium for telling your story, you
have several criteria to take into account. First, organizations are
swamped with endless meetings. It helps to review when regularly
scheduled meetings occur and plan for communication during them so
you are not disrupting your audiences’ schedules. However, you do run
the risk of having to wait to present your report until a future meeting
when you can be added to the agenda. Key players might be so
interested in your ROI study that they won’t mind you scheduling the
earliest possible meeting.
If it’s not a staff or management meeting, you might schedule a
discussion where you, a participant, and maybe a participant’s
supervisor create a panel to discuss a particular program. Panel
discussions can also occur at regularly scheduled meetings or at a
special meeting focused on the program.
Business update meetings also present opportunities to provide
information about your program. Best practice meetings, where each
function within an organization shares results, are another opportunity
to present the results of your programs. Or you might present your ROI
study at a large conference in a panel discussion, which includes talent development professionals from other organizations.
Internal Publications
Internal publications are another way in which you can communicate
with employees. You can use these internal publications—newsletters,
memos, breakroom bulletin boards—to report program progress and
results as well as to generate interest in current and future programs.
Internal hard copy communications are the perfect opportunity to
recognize program participants who have provided data or responded
promptly to your questionnaires. If you have offered incentives for
participation in a program or for prompt responses to questionnaires,
mention this in these publications. Use internal publications to tell
human interest stories and highlight activities, actions, and encounters
that occur during and as a result of the program. Be sure to accentuate
the positive and announce compliments and congratulations
generously.
Electronic Media
Digitization is leading to new types of electronic media that support the
dissemination of information. Websites, intranets, and group emailing
tend to remain foundational approaches, and are often used to promote
programs and processes being implemented in the organization. While
social media platforms are useful, one would never disseminate the
details of an ROI study through such means. However, they can be
useful for sharing snippets of information that promote program
success or even changes that resulted from your study. Take advantage
of these opportunities to spread the word about the activities and
successes of the talent development department.
Brochures
Formal Reports
A final medium through which to report results is in the formal report.
There are two types of reports—micro-level reports and macro-level
scorecards—that are used to tell the success of talent development
programs. Micro-level reports present the results of a specific program
and include detailed reports, executive summaries, general audience
reports, and single-page reports. Macro-level scorecards are an
important tool in reporting the overall success of the talent
development function. In the next section, we dive deeper into
developing these reports.
Developing Reports
There are five types of reports to develop to communicate the results of
the ROI studies. These include detailed reports, which are developed
for every evaluation project; executive summaries; general audience
reports; single-page reports; and macro-level scorecards.
Detailed Reports
The detailed report is the comprehensive report that details the
specifics of the program and the ROI study. This report is developed
for every comprehensive evaluation that you conduct. It becomes your
record and allows you the opportunity to replicate the study without
having to repeat the entire planning process. By building on an existing
study, you can save time, money, effort, and a great deal of frustration.
The detailed report contains six major headings: need for the program,
need for the evaluation, evaluation methodology, results, conclusions
and next steps, and appendixes.
Evaluation Methodology
Results
Now, it’s time for your story—the results section, where the talent
development program that has undergone this rigorous evaluation can
shine! Here, you will provide the results for all levels of evaluation.
Spell out the dollar values of the costs. The readers have already seen the
benefits in dollar amounts; now give them the costs. If the benefits
exceed the costs, then the pain of a very expensive program is relieved
because the audience can clearly see that the benefits outweigh the
costs. Finally, provide the ROI calculation.
The last part of the results section in the detailed report concerns
intangible benefits. As you've learned throughout the book, intangible
benefits are those items you choose not to convert to monetary value.
Highlight those intangible benefits and the unplanned benefits that
came about.

Basic Rule 14
Hold reporting the actual ROI until you have presented the results at the lower levels.
Appendixes
The appendixes include exhibits, detailed tables that could not feasibly
be included in the text, and raw data (keeping the data items
confidential). Again, the final report is a reference for you as well as a
story of success for others.
Throughout your report, incorporate quotes—positive and negative
—from respondents. Remember that there are ethical issues with
evaluation. It might be tempting to leave out negative comments;
however, you will enhance your credibility and gain respect for the
talent development function if you tell the story as it is. By developing
this detailed comprehensive report, you will have backup for anything
that you say during a presentation. When conducting a future ROI
study on a similar program, you will have your road map in front of
you. Table 6-2 presents a sample outline of a detailed report.

Table 6-2. Detailed Report Outline
Results (six measures: Levels 1, 2, 3, 4, 5, and intangibles)
• General Information
  » Response Profile
  » Relevance of Materials
• Participant Reaction
• Learning
• Application and Implementation
  » Success With Use
  » Barriers
  » Enablers
• Impact
  » General Comments
  » Linkage With Business Measures
• ROI
• Intangible Benefits
Appendix
Executive Summaries
Another important report to develop is the executive summary. The
executive summary follows the same outline as the detailed report
although you exclude the appendixes and do not develop each section
and subsection in detail. You will clearly and concisely explain the need
for the program, the need for the evaluation, and the evaluation
methodology. Always include the methodology prior to the results.
Why? When the reader understands and appreciates the methodology,
they typically have a greater appreciation for the results. Report the
data from Level 1 through Level 5 and include the sixth measure of
success—the intangible benefits. The executive summary is usually 10
to 15 pages in length.
Single-Page Reports
A final micro-level report is a single-page report or micro-level
dashboard that summarizes the results. Table 6-3 shows an example of
a single-page report. Figure 6-2 shows a dashboard from Explorance’s
Metrics That Matter (MTM) system. Single-page reports should be used
with great care. Reporting the success of your program using a
single-page report or micro-level dashboard can be risky if your
audience is unfamiliar with the process. If an audience sees the ROI of
a program without an appreciation for the methodology, members will
fixate on the ROI and never notice, much less appreciate, the
information developed at the other levels of evaluation. While these
simple micro-level summaries of results carry risks, they are an easy
way to communicate results to the appropriate audiences on a routine
basis.
Table 6-3. Single-Page Report
Level 2: Learning—Results
• Post-test scores average 84
• Pretest scores average 51
• Improvement 65%
• Participants demonstrated they could use skills successfully
Level 4: Impact—Results
Sexual Harassment Business Performance Measures (One Year Prior to Program / One Year After Program / Factor for Isolating the Effects of Program):
• Internal complaints: 55 / 35 / 74%
• External charges: 24 / 14 / 62%
• Litigated complaints: 10 / 6 / 51%
• Legal fees and expenses: $632,000 / $481,000
Level 5: ROI—Results
• Total annual benefits $3,200,908
• Total costs $277,987
• ROI 1,052%
Intangible Benefits
• Increased job satisfaction
• Increased teamwork
• Reduced stress
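As a quick check of the Level 5 figures, here is a minimal sketch of the benefit-cost and ROI calculations using the totals from Table 6-3; the result agrees with the reported ROI within rounding.

```python
def bcr(benefits: float, costs: float) -> float:
    """Benefit-cost ratio: program benefits divided by program costs."""
    return benefits / costs

def roi_pct(benefits: float, costs: float) -> float:
    """ROI (%) = (net benefits / costs) * 100."""
    return (benefits - costs) / costs * 100

total_annual_benefits = 3_200_908
total_costs = 277_987

print(f"BCR: {bcr(total_annual_benefits, total_costs):.2f}")        # about 11.5
print(f"ROI: {roi_pct(total_annual_benefits, total_costs):,.0f}%")  # about 1,051%
```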
Macro-Level Scorecard
Macro-level scorecards can provide the results of the overall training
process. These scorecards provide a macro-level perspective of success
and serve as a brief description of program evaluation, in contrast to
the detailed report. They show the connection between talent
development’s contribution and organizational objectives. Methods of
isolation are always included in the report to reinforce that you are
taking steps to give credit where credit is due. The scorecard
integrates a variety of types of data and demonstrates alignment
among programs, strategic objectives, and operational goals. Table 6-4
presents an outline of a macro-level scorecard.
presentation. If you give them the report, they will be flipping through the pages to find the ROI calculation. Keep the reports
beside you as you present your results.
Present the results to the senior management team just as you have written the report: need for program, need for
evaluation, evaluation methodology, results, conclusion, and next steps. Be thorough in reporting Levels 1 through 4, and do
not fixate on or hurry to the ROI calculation—the entire chain of impact is important to reporting the success of the programs.
Report Level 5: ROI and the intangible benefits. Then, present your conclusions and next steps. At the end of your presentation,
provide each senior manager a copy of your final report.
Do you really expect the senior management team to read this detailed report? No. At best, they will hand it off to
someone else to read and summarize the contents that you will have presented in the meeting. Why then go to the trouble
of preparing this printed copy of the detailed final report for senior managers? To build trust. You’ve told them your story;
now, all they have to do is look in the report to see that you covered the details and that you provided a thorough and
accurate presentation of the report’s contents.
After the first one or two studies, senior management will have bought into the ROI Methodology. Of course, if
you’ve worked the process well, they will have begun to learn the methodology long before your initial presentation. Given
that, after the first or second study, you can start distributing the executive summary. Limit the report you give senior
management to the 10- to 15-page executive summary. Again, it has all the components, but not as many details.
After about five ROI studies, you can begin reporting to senior management using the single-page report, dashboard,
scorecard, or even infographic. This will save time and money. Do remember, the talent development staff will always have
a copy of the detailed, comprehensive report. This will serve as a backup and a blueprint for future studies.
Data Visualization
Data can be displayed in a variety of ways; the more comprehensive the
display of data, the better the story is told within a limited space.
Edward R. Tufte is one of the preeminent authorities on the
graphical display of data. Tufte (1983) suggests that graphical displays
of data should:
• Show data.
• Induce the audience to think about the substance rather than the
technology of the graphic production.
• Avoid distorting the story that the data have to tell.
• Present many numbers in a small space.
• Make large data sets coherent.
• Encourage the eye to compare different pieces of data.
• Reveal the data at several layers of detail from broad overview to
fine structure.
• Serve a reasonably clear purpose: description, exploration,
tabulation, or decoration.
Noted
Use visual displays of data only if they make the information more accessible and
better nudge the audience toward action.
A basic table is shown in Table 6-5. This simple frequency table
shows the scores received on an exam in a four-week course. The first
column represents the scores; the second column represents the
frequency or the number of participants who achieved that score; the
third column represents the percent of the total number of participants
with that particular score. The valid percent column (sometimes
referred to as the adjusted percentage) accounts for missing data
(scores). In this example, there are no missing scores, so the valid
percent and the percent columns are the same. The cumulative percent
column shows the running total of the valid percentages.
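A minimal sketch of how such a frequency table could be assembled, using a hypothetical set of exam scores; unlike the book's example, one score is marked missing (None) so the percent and valid percent columns differ.

```python
from collections import Counter

# Hypothetical exam scores; None marks a missing score, so the percent
# and valid percent columns differ (in the book's example they match).
scores = [75, 80, 75, 90, 85, 80, 75, None, 90, 85]

valid = [s for s in scores if s is not None]
counts = Counter(valid)

print(f"{'Score':>5} {'Freq':>5} {'Pct':>7} {'Valid%':>7} {'Cum%':>7}")
cumulative = 0.0
for score in sorted(counts):
    freq = counts[score]
    pct = freq / len(scores) * 100       # based on all records
    valid_pct = freq / len(valid) * 100  # excludes missing scores
    cumulative += valid_pct              # running total of valid percent
    print(f"{score:>5} {freq:>5} {pct:>6.1f}% {valid_pct:>6.1f}% {cumulative:>6.1f}%")
```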
Table 6-6 is a one-way table that shows two variables along the same
axis. This means that two different variables are represented in
columns. In the first column is variable 1, which is
Diagrams
Diagrams are charts made up primarily of geometric shapes, such as
circles, rectangles, and triangles, connected by lines or arrows. They
show how people, ideas, and things relate. Text is frequently included
inside and outside these shapes to tell the story. Numerical values are
sometimes used, though to a lesser extent, because diagrams generally
display nonquantitative data. Flowcharts, critical path method charts,
organization charts, network charts, decision charts, and conceptual
charts are frequently presented in diagrams. The four-part test shown
as a flowchart in Figure 5-3 is an example of a diagram. Use diagrams
to present project timelines, as well as the conceptual framework
displaying the findings in an evaluation. Figure 6-4 represents a
diagram displaying a phased approach to implementing a full-blown
evaluation.
Figure 6-5 is another diagram displaying the argument for the
conceptual framework discovered through the evaluation process. In
this diagram, the course leads to positive reaction, knowledge
acquisition, and the use of knowledge and skills. As a result, there is
positive impact on network security, work stoppage, equipment
downtime, uptime, and costs of troubleshooting. Through the isolation
process, note the other variables that contribute to the Level 4
outcomes. Continuous learning and practice, research, real-world
exercises, knowledge application, and reliable staff are listed and
graphically depicted as influences on the outcomes. Providing a
pictorial framework summarizes the results of the evaluation study in a
manner that supports audience understanding.
Figure 6-4. A Phased Approach to a Comprehensive Evaluation
• Phase 1 (Summer 2016): 1st course evaluation; evaluation implementation strategy
• Phase 2 (Summer 2017): 2nd course evaluation
• Phase 3 (Fall 2018): 3rd and 4th course evaluations
• Phase 4 (Fall 2018): certification ROI study
As participants react positively to the course, acquire knowledge and skills, and apply knowledge and skills, results occur.
However, other intervening variables also influence measures; therefore, steps must be taken to isolate the effects of the
course on these measures.
Graphs
Graphs are the most commonly used displays of quantitative
relationships between two or more data types. Some types of graphs are illustrated below.
[Figure: Histogram of Score on Training Exam; N = 60.00, Mean = 72.1, Std. Dev = 12.00]
Box Plots
Box plots compare groups of data, showing how results differ across a
variety of measures. In Figure 6-8 the box plot shows training exam
scores for three groups. As you will notice for group one, the test
scores range from approximately 45 to 85. The box represents the
interquartile range, containing the middle 50 percent of the scores.
The dark line in the middle of the box represents the median score,
which for group one is 63.94. The standard deviation, or the spread of
scores, is 13.5, which tells you there is a wide distribution
of scores. Look at group two. The minimum score is not quite as low as
in group one—47.56. The maximum score—89.65—is slightly higher
than group one. The mean score is 73.57. The standard deviation for
group two is 10.61, less variability. For group three, the minimum
score—71.77—is well above the minimum scores for the other two
groups. The maximum score—89.69—is just slightly above the
maximum score for group two. The median score is 80.19. The
standard deviation for group three is only 4.41, making both the box
and the line between the minimum and maximum smaller than for
either of the other two groups. Using box plots you can clearly
communicate the difference among groups.
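For readers who want to reproduce this kind of display, here is a sketch using matplotlib with synthetic scores drawn to roughly match the three groups described; Figure 6-8 itself is based on actual study data.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=42)

# Synthetic scores approximating the three groups described:
# group 1 is widely spread, group 3 is tightly clustered.
groups = [
    rng.normal(loc=64, scale=13.5, size=20),    # group 1
    rng.normal(loc=73.6, scale=10.6, size=20),  # group 2
    rng.normal(loc=80, scale=4.4, size=20),     # group 3
]

fig, ax = plt.subplots()
ax.boxplot(groups)  # each box spans the interquartile range;
                    # the center line marks the group's median
ax.set_xticklabels(["1", "2", "3"])
ax.set_xlabel("Group")
ax.set_ylabel("Score on Training Exam")
plt.show()
```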
[Figure 6-8. Box plots of Score on Training Exam for groups 1, 2, and 3 (N = 20 per group)]
Line Graphs
A line graph is a good way to display multiple variables and how they
compare. Figure 6-9 compares data provided by three different sources
—participants of a training course, the supervisors of the participants,
and the customers of the participants. The example displays the extent
to which each source of data expects participants to apply the
knowledge and skills gained during a training course. As shown, the
supervisors have a higher expectation of performance for determining
performance gaps, defining root causes, and reconciling requests than
either the participant or the customer. The customer, on the other
hand, has the lowest expectations for defining root causes, managing
implementation, troubleshooting implementation, and recommending
the data, they found that one group of participants outperformed the
others by a significant amount. They decided to invest in creating a
high-performer profile to help their talent acquisition team identify job
candidates with the same characteristics. Now, they hire for the higher
standard of performer and can raise the bar for their learning output.
This is what we mean by optimization— learn from failure, use the
data, and make improvement to the program, the organization that
supports it, and the users of it.
Uses of Data
Sixty to 90 percent of job-related learning is wasted—that is, not used
on the job, although participants may want to use it (Bran 2018). The
primary culprit is system failure—that the system doesn’t support or
respect talent development. The system might not need the programs
being offered, leaving talent development with a lack of control, no
influence, and no seat at the table. While all of this, and more, may be
true, playing the victim doesn’t cut it. The purpose of talent
development and measurement, evaluation, and analytics is to nudge
people toward action that will lead to improvement in organization
performance. No more, no less.
Good measurement and evaluation addresses the right questions in
the right way and provides insights into the actions we can take to do
our jobs well. Table 6-7 summarizes a few uses of evaluation data at the
five levels.
Table 6-7 (excerpt). Uses of evaluation data at the five levels include adjusting program design, enhancing reinforcement, reducing costs, and marketing programs; checkmarks in the table indicate which levels of data support each use.
When to Act
Changes are made at different points based on data collected during
the talent development process. In the initial stages it is important to
have the right people involved in a program at the right time. Among
other things, this means that an opportunity must exist in their work
for them to use what they learn through a program or project. This
type of information is collected and validated with Level 0: Input data.
Level 1: Reaction and Planned Action data and Level 2: Learning
data determine whether the program content matters to participants.
Once a program is underway, assessment of buy-in begins. At any point
in time, if there is reason to believe that participants do not see value
in a program, adjustments must be made. The same is true with
knowledge acquisition. If evidence exists that participants “get it” and
the intent is for them to “use it,” it is time to act. Participants must be
7 Sustain Momentum
Now that you know the basics of developing an ROI impact study, it’s time to learn how to keep up
the momentum. This includes:
• identifying resistance to implementation
• overcoming resistance to implementation
• making the ROI Methodology routine.
Identifying Resistance
Resistance to comprehensive evaluation like the ROI Methodology will
be based on fear, lack of understanding, opposition to change, and the
efforts required to make a change successful. To not only identify
resistance but also overcome it, start with the talent development
team, go to the management team, and then do a gap analysis.
• This costs too much.
• We don't need this.
• This takes too much time.
• Who is asking for this?
• This is not in my job duties.
• I did not have input on this.
• I do not understand this.
• Our clients will never buy this.
• What happens when the results are negative?
• How can we be consistent with this?
• The ROI process is too subjective.
• Our managers will not support this.
• ROI is too narrowly focused.
• This is not practical.
Identifying a Champion
As a first step in the process, one or more individuals should be
designated as the internal leader for ROI. As in most change efforts,
someone must take responsibility for ensuring that the process is
implemented successfully. The ROI champion is usually the one who
understands the process best and sees the potential of the ROI
Methodology. This leader must be willing to teach and coach others.
Table 7-4 presents the various roles of the ROI champion.
Noted
It will only take 3 to 5 percent of your talent development budget to create and integrate a robust measurement
and evaluation practice. That's pennies compared to the value of the opportunities lost if you don't have one.
communicating results to target audiences. All the roles can come into
play at one time or another as the leader implements ROI in the
organization.
Developing the Staff
A group that will often resist the ROI Methodology is the staff who
must design, develop, deliver, and coordinate talent development
solutions. These staff members often see evaluation as an unnecessary
intrusion into their responsibilities—absorbing precious time and
stifling their freedom to be creative.
You should involve the staff on each key issue in the process. As
policy statements are prepared and evaluation guidelines developed,
staff input is essential. It is difficult for the staff to be critical of
something they helped design, develop, and plan. Using meetings,
brainstorming sessions, and task forces, the staff should be involved in
every phase of developing the framework and supporting documents
for ROI. In an ideal situation, the staff can learn the process in a two-
day workshop and, at the same time, develop guidelines, policy, and
application targets. This approach is very efficient, completing several
tasks at the same time.
known to the clients and the management group, if they are not aware
of it already. Lack of results will cause managers to become less
supportive of talent development. Dwindling support appears in many
forms, ranging from reducing budgets to refusing to let participants be
involved in programs. If the weaknesses of programs are identified and
adjustments are made quickly, not only will effective programs be
developed, but also the credibility and respect for the function and the
staff will be enhanced.
1. Purpose.
2. Mission.
3. Evaluate all programs, which will include the following levels:
• Level 1: Reaction and Planned Action (100%)
• Level 2: Learning (no less than 70%)
• Level 3: Application and Implementation (50%)
• Level 4: Impact (usually through sampling) (10%) (highly visible, expensive)
• Level 5: ROI (7%).
4. Evaluation support group (corporate) will provide assistance and advice in measurement and evaluation, instrument design,
data analysis, and evaluation strategy.
5. New programs are developed following logical steps beginning with needs analysis and ending with communicating results.
6. Evaluation instruments must be designed or selected to collect data for evaluation. They must be valid, reliable,
economical, and subject to audit by evaluation support group.
7. Responsibility for talent development program results rests with facilitators, participants, and supervisors of participants.
8. An adequate system for collecting and monitoring talent development costs must be in place. All direct costs should be
included.
9. At least annually, the management board will review the status and results of talent development. The review will include
plans, strategies, results, costs, priorities, and concerns.
10. Line management shares in the responsibility for program evaluation through follow-up, pre-program commitments, and
overall support.
11. Managers and supervisors must declare competence achieved through talent development programs. When not applicable,
the talent development staff should evaluate.
12. External consultants must be selected based on previous evaluation data. A central data or resource base should exist.
13. All external programs of more than one day in duration will be subjected to evaluation procedures. In addition, participants
will assess the quality of external programs.
14. Talent development program results must be communicated to the appropriate target audience. As a minimum, this
includes management (participants’ supervisors), participants, and all learning staff.
15. Key talent development staff members should be qualified to do effective needs analysis and evaluation.
16. A central database for program development must be in place to prevent duplication and serve as a program resource.
17. Union involvement is necessary in the total talent development plan.
The policy statement addresses critical issues that will influence the
effectiveness of the measurement and evaluation process. Typical
topics include adopting the five-level evaluation framework presented
in this book; requiring objectives at the higher levels at least for some,
if not all, programs; and defining responsibilities for talent
development.
Policy statements guide and direct the staff and others who work
closely with the ROI Methodology. They keep the process clearly
focused and enable the group to establish goals for evaluation. They
also provide an opportunity to communicate basic requirements and
Setting Targets
Establishing specific targets for evaluation levels is an important way
to make progress with measurement and evaluation. Targets enable
the staff to focus on the improvements needed with specific evaluation
levels. In this process, the percentage of programs planned for
evaluation at each level is developed. The first step is to assess the
present situation. The number of all programs, including repeated
sections of a program, is tabulated along with the corresponding level
of evaluation presently conducted for each course. Next, the
percentage of courses using Level 1: Reaction questionnaires is
you evaluate at each level? Do your targets have the approval of the key management
Figure 7-2. ROI Implementation Project Plan for a Large Petroleum Company
When a leadership program is selected as one of the ROI projects, all the key
staff involved in the program (design, development, and delivery)
should meet regularly to discuss the status of the project. This keeps
the project team focused on the critical issues, generates the best ideas
to tackle problems and barriers, and builds a knowledge base to
implement evaluation in future programs.
Noted
Not every offering of a program is evaluated to impact or ROI. This type of evaluation is typically conducted
on select offerings. So, while 20 unique programs may be targeted for ROI evaluation, it is likely only one
or two offerings of each will be evaluated to those levels.
Using Technology
Ensuring that measurement and evaluation are administered
efficiently and effectively requires the use of technology. Throughout
this book, a few types of technologies that support evaluation were
mentioned. Technologies can range from simple, inexpensive software
purchases to complete systems for managing large amounts of data.
Sharing Information
Because the ROI Methodology is new to many individuals, it is helpful
to have a peer group experiencing similar issues and frustrations.
Tapping into an international network, joining or creating a local
network, or building an internal network are all possible ways to use
the resources, ideas, and support of others.
One way to integrate the information needs of talent development
professionals for an effective ROI evaluation process is through an
internal ROI network. The concept of a network is simplicity itself. The
idea is to bring people who are interested in ROI together throughout
the organization to work under the guidance of trained ROI evaluators.
Typically, advocates within the department see both the need for
beginning networks and the potential of ROI evaluation to change how
the department does its work. Interested network members learn by
designing and executing real evaluation plans. This process generates
commitment for accountability as a new way of doing business for the
department.
Preparing the Management Team
Several actions can be taken with the management team to ensure that
they are supporting evaluation and using the data properly. In some
cases, they need to understand more about ROI. Four specific efforts
need to be considered.
First, present data to the management team routinely so that they
understand the value of talent development, particularly Level 3:
Application and Implementation, which translates directly into new
skills in the workplace, and Level 4: Impact, which relates directly to
each member can see what others have accomplished. This provides a
little of “what’s in it for me” for the participants. Action plans are used
to drive not only application and implementation data, but also impact
data.
Another built-in technique is to integrate the follow-up
questionnaire with the talent development program. Ample time should
be provided to review the items on the questionnaire and secure a
commitment to provide data. This step-by-step review of expectations
helps clarify confusing issues and improves response rates as
participants make a commitment to provide the data. This easy-to-
accomplish step can be a powerful way to enhance data collection. It
prevents the need for constant reminders to participants to provide
data at a later follow-up date.
Using Shortcuts
One of the most significant barriers to the implementation of
measurement and evaluation is the potential time and cost involved in
implementing the process. An important tradeoff exists between
additional analysis and the use of shortcut methods,
including estimation. In those tradeoffs, shortcuts win almost every
time. An increasing amount of research shows shortcuts and estimates,
when provided by those who know a process best (experts), can be
even more accurate than more sophisticated, detailed analysis.
Essentially, evaluators try to avoid the high costs of increasing
accuracy because it just doesn’t pay off.
Sometimes, the perception of excessive time and cost is only a
myth; at other times, it is a reality. Most organizations can implement
the evaluation methodology for about 3 percent to 5 percent of the
talent development budget. Nevertheless, evaluation still commands
significant time and monetary resources. A variety of approaches have
Use Participants
One of the most effective cost-saving approaches is to have participants
conduct major steps of the process. Participants are the primary source
Use Estimates
Estimates are an important part of the process. They are also the least
expensive way to arrive at a number or value. Whether isolating the
effects of the talent development program or converting data to
monetary value, estimates can be a routine and credible part of the
process. The important point is to make sure the estimate is credible
and follows systematic, logical, and consistent steps.
The good news is that many shortcuts can be taken to supply the
data necessary for the audience and manage the process in an efficient
way. All these shortcuts are important processes that can help make
evaluation routine because when evaluation is expensive, time
consuming, and difficult, it will never become routine.
Getting It Done
Now it is time to develop your ROI implementation plan using Exercise
7-1. Items may be added or removed so that this becomes a customized
document. This plan summarizes key issues presented in the book and
will help you as you move beyond the basics of ROI.
This document addresses a variety of issues that make up the complete measurement and evaluation strategy and plan. Each of the
following items should be explored and decisions made regarding the specific approach or issue.
Purposes of Evaluation
From the list of evaluation purposes, select the ones that are relevant to your organization:
Determine success in achieving program objectives.
Identify strengths and weaknesses in the talent development process.
Set priorities for talent development resources.
Test the clarity and validity of tests, cases, and exercises.
Identify the participants who were most (or least) successful with the program.
Reinforce major points made during the program.
Decide who should participate in future programs.
Compare the benefits to the costs of a talent development program.
Enhance the accountability of talent development.
Assist in marketing future programs.
Determine if a program was an appropriate solution.
Establish a database to assist management with decision making.
Stakeholder Groups
Identify specific stakeholders that are important to the success of measurement and evaluation:
Staffing
Indicate the philosophy of using internal or external staff for evaluation work and the number of staff involved in this process part
time and full time.
Responsibilities
Detail the responsibilities of different groups in talent development. Generally, specialists are involved in a leadership role in
evaluation, and others are involved in providing support and assistance in different phases of the process.
Group Responsibilities
Budget
The budget for measurement and evaluation in best-practice organizations is 3 to 5 percent of the learning and development
budget. What is your current level of measurement and evaluation investment? What is your target?
Level 1: Reaction and Planned Action
Questionnaires ❑ ❑
Focus groups ❑ ❑
Interviews ❑ ❑
Level 2: Learning
Objective tests ❑ ❑
Simulations ❑ ❑
Self-assessments ❑ ❑
Level 3: Application and Implementation
Follow-up surveys ❑ ❑
Observations ❑ ❑
Interviews ❑ ❑
Action planning ❑ ❑
Level 4: Impact
Follow-up questionnaires ❑ ❑
Action planning ❑ ❑
Performance contracting ❑ ❑
Building Capability
How will staff members develop their measurement and evaluation capability?
ROI certification
Coaching
ROI conferences
Networking
Use of Technology
How do you use technology for data collection, integration, and scorecard reporting, including technology for conducting ROI
studies? How do you plan to use technology?
Tests ❑ ❑
Integration ❑ ❑
ROI ❑ ❑
Scorecards ❑ ❑
Communication Methods
Indicate the specific methods you currently use to communicate results. What methods do you plan to use?
Newsletters ❑ ❑
Case studies ❑ ❑
Use of Data
Indicate how you currently use evaluation data by placing a “✓” in the appropriate box. Indicate your planned use of evaluation
data by placing an “X” in the appropriate box.
1 2 3 4 5
Adjust program design ❑ ❑ ❑ ❑ ❑
Reduce costs ❑ ❑ ❑ ❑ ❑
Appendix: ROI Forecasting Basics
Pre-Program Forecasts
Pre-program forecasts are ideal when you are deciding between two
programs designed to solve the same problem. They also serve well
when considering one very expensive program or deciding among two
or more delivery mechanisms. Whatever your need for pre-program
forecasting, the process is similar to post-program ROI evaluation.
Noted
When conducting a pre-program forecast, the step of isolating the effects of the program is omitted. It is
assumed that the estimated results are referring to the influence on the program under evaluation.
Figure A-1 shows the basic forecast model. An estimate of the change
in results data expected to be influenced by the program is the first step
in the process. From there data conversion, cost estimates, and the
calculation are the same as in post-program analysis. The anticipated
intangibles are speculative in forecasting, but they can be indicators of
which measures may be influenced beyond those included in the ROI
calculation.
Figure A-1. Basic ROI Forecasting Model
Estimate Business or Organizational Impact (Level 4) Data → Convert Data to Monetary Value → Calculate the Return on Investment; alongside these steps, Identify Intangible Benefits
Pilot Program
A more accurate forecast of program success is through a small-scale
pilot, and then developing ROI based on post-program data. There are
five steps to this approach:
1. As in the pre-program forecast, develop Levels 3 and 4
objectives.
2. Initiate the program on a small scale without all the bells and
whistles. This keeps the cost low without sacrificing the
fundamentals of the program.
3. Fully implement the program with one or more of the typical
groups of individuals who can benefit from it.
4. Calculate the ROI using the ROI Methodology for post-
program analysis.
5. Decide whether to implement the program throughout the
organization based on the results of the pilot program.
Using a pilot post-program evaluation as your ROI forecast will allow
you to report the actual story of program success for the pilot group,
showing results at all five levels of evaluation, including intangible
benefits.
Level 1 Forecasting
A simple approach to forecasting ROI for a new program is to add a few
questions to the standard Level 1 evaluation questionnaire. As in the
case of pre-program forecast, the data are not as credible as in an actual
post-program evaluation; however, a Level 1 evaluation at a minimum
relies on data from participants who have actually attended the program.
Table A-1 presents a brief series of questions that can develop a
forecast ROI at the end of a program. Using this series of questions,
participants detail how they plan to use what they have learned and the
results that they expect to achieve. They are asked to convert their
anticipated accomplishments into an annual monetary value and show
the basis for developing the values; they moderate their response with a
confidence estimate to make the data more credible while
1. As a result of this program, what specific actions will you attempt as you apply what you have learned?
2. Indicate what specific measures, outcomes, or projects will change as a result of your action.
3. As a result of these anticipated changes, estimate (in monetary values) the benefits to your organization over a period of one
year. $_______________________
5. What confidence, expressed as a percentage, can you put in your estimate? _______%
(0% = no confidence; 100% = complete certainty)
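A minimal sketch of how these responses can roll up into a forecast ROI, with each participant's estimate moderated by their stated confidence; the response values and the projected program cost are hypothetical.

```python
# Each tuple: (estimated annual benefit, confidence from 0 to 1).
# The responses and the projected cost are hypothetical.
responses = [
    (20_000, 0.80),
    (10_000, 0.50),
    (50_000, 0.60),  # large claims are discounted by lower confidence
]

# Moderate each estimate by the participant's confidence.
adjusted_benefits = sum(amount * conf for amount, conf in responses)

projected_cost = 25_000
forecast_roi = (adjusted_benefits - projected_cost) / projected_cost * 100

print(f"Confidence-adjusted benefits: ${adjusted_benefits:,.0f}")
print(f"Forecast ROI: {forecast_roi:.0f}%")
```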
Additional Approaches to Forecasting
Other approaches to forecasting include the use of Level 2 test data. A
reliable test, reflecting the content of talent development programs, is
validated against impact measures. With a statistically significant
relationship between test scores and improvement in impact measures,
test scores should relate to improved performance. The performance can
be converted to monetary value and the test scores can then be used to
estimate the monetary impact from the program. When compared to
projected costs, the ROI is forecasted.
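A sketch of this idea under assumed values: suppose a validation study found a reliable linear relationship between test scores and the annual monetary value of improved performance. The coefficients, scores, and cost below are hypothetical.

```python
# Hypothetical linear relationship from a validation study:
# annual monetary value of improvement = SLOPE * test_score + INTERCEPT.
SLOPE = 120.0         # dollars of annual improvement per test point
INTERCEPT = -6_000.0  # implies zero value below a score of 50

def forecast_value(test_score: float) -> float:
    """Estimate annual monetary impact from a Level 2 test score."""
    return max(0.0, SLOPE * test_score + INTERCEPT)

post_test_scores = [84, 76, 91, 68]  # scores for a pilot group
projected_cost = 12_000.0            # projected program cost

benefits = sum(forecast_value(s) for s in post_test_scores)
roi = (benefits - projected_cost) / projected_cost * 100
print(f"Forecast benefits: ${benefits:,.0f}; forecast ROI: {roi:.0f}%")
```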
Another approach is forecasting ROI at Level 3, which places monetary
value on competencies.
A very basic approach to forecasting ROI using improvement with
competencies is to:
1. Identify the competencies.
2. Determine the percentage of the skills applied on the job.
3. Determine the monetary value of the competencies using the
salary and benefits of participants.
4. Determine the increase in skill level.
5. Calculate the monetary benefits of the improvement.
6. Compare the monetary benefits to the cost of the program.
Table A-2 presents a basic example of forecasting ROI using Level 3
data.
Table A-2. Forecasting ROI Using Level 3 Data
Ten supervisors attend a four-day learning program.
3. Determine the monetary value of the competencies using the salary and benefits of participants:
• Multiply the percentage of skills used on the job by the value of the job: $50,000 × 80% = $40,000 per participant.
• Calculate the dollar value of the competencies for the group: $40,000 × 10 = $400,000.
• Multiply the dollar value of the competencies by the improvement in skill level: $400,000 × 10% = $40,000.
6. Compare the monetary benefits to the cost of the program ($15,000):
ROI (%) = ($40,000 − $15,000) ÷ $15,000 × 100 = 166%
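The same steps expressed as a short Python sketch, using the numbers from Table A-2; note the unrounded result is about 167 percent, reported as 166 percent in the text.

```python
# Level 3 forecast using the numbers from Table A-2.
participants = 10
salary_and_benefits = 50_000.0  # annual value per participant
pct_skills_used = 0.80          # share of competencies applied on the job
skill_increase = 0.10           # improvement in skill level
program_cost = 15_000.0

value_per_participant = salary_and_benefits * pct_skills_used  # $40,000
group_value = value_per_participant * participants             # $400,000
monetary_benefit = group_value * skill_increase                # $40,000

roi = (monetary_benefit - program_cost) / program_cost * 100
print(f"Forecast ROI: {roi:.0f}%")  # 167% unrounded; reported as 166%
```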
Noted
Forecasting ROI and the use of predictive analytics is becoming much more popular than in the past. Be
forewarned: Don’t rely on forecasting alone. While forecasting and predictive analytics are useful, they re-
sult in mere estimates of what could be. The real meaning is in what actually occurs—hence, the need for
post-program evaluation.
References
Cascio, W.F. 2000. Costing Human Resources: The Financial Impact of Behavior in
Organizations. Australia: South-Western College Publishing.
Delcol, K., S. Wolfe, and K. West, ed. n.d. “Can Innovation Tools Influence the New
Product Development Process?” http://debonogroup.com/return-on-investment.htm.
Institute for Corporate Productivity (i4cp) and ROI Institute. 2018. Four Ways to
Advance Your People Analytics Practice. Seattle, WA: Institute for Corporate
Productivity; Birmingham, AL: ROI Institute.
Harvard Business Review Analytic Services. 2017. How CEOs and CHROs Can
Connect People Strategy to Business Strategy. Boston: Harvard Business Review
Publishing.
Heskett, J., W. Sasser, and L. Schlesinger. 1997. The Service Profit Chain: How Leading
Companies Link Profit and Growth to Loyalty, Satisfaction, and Value. New York:
Free Press.
Howard, T. 2005. “Brewers Get Into the Spirits of Marketing.” USA Today, May 16.
Phillips, J.J., ed. 1997. Measuring Return on Investment, vol. 2. Alexandria, VA: ASTD
Press.
Phillips, J.J., and P.P. Phillips. 2005. ROI at Work. Alexandria, VA: ASTD Press.
Phillips, J.J., and P.P. Phillips. 2009. Measuring Success: What CEOs Really Think
Learning Investments. Alexandria, VA: ASTD Press.
Phillips, J.J., and P.P. Phillips. 2016. Handbook of Training Evaluation and
Measurement, 4th ed. New York: Routledge.
Phillips, J.J., R.D. Stone, and P.P. Phillips. 2001. The Human Resources Scorecard:
Measuring the Return on Investment. Boston: Butterworth-Heinemann.
Phillips, P.P., ed. 2001. Measuring Return on Investment, vol. 3. Alexandria, VA: ASTD
Press.
Phillips, P.P., ed. 2002. Measuring ROI in the Public Sector. Alexandria, VA: ASTD
Press.
Phillips, P.P. 2017. The Bottomline on ROI, 3rd ed. West Chester, PA: HRDQ.
Phillips, P.P., J.J. Phillips, G. Paone, and C. Huff-Gaudet. 2019. Value for Money: How to
Show the Value for Money for All Types of Projects and Programs in Governments,
Non-Governmental Organizations, Nonprofits, and Businesses. Hoboken, NJ: Wiley.
Phillips, P.P., J.J. Phillips, and R.L. Ray. 2016. Measuring the Success of Employee
Engagement: A Step-by-Step Guide for Measuring Impact and Calculating ROI.
Alexandria, VA: ATD Press.
Prest, A.R. 1965. “Cost-Benefit Analysis: A Survey.” The Economic Journal 75(300): 683–
735.
Thompson, M.S. 1980. Benefit-Cost Analysis for Program Evaluation. Thousand Oaks,
CA: Sage Publications.
Tufte, E.R. 1983. The Visual Display of Quantitative Information. Cheshire, CT: Graphics
Press.
Wharff, D. 2005. Presentation for ADET E8 Learning Analytics and Strategies: “Serving
the Enterprise.”
Additional Resources
Additional Books From the ROI Institute
Elkeles, T., and J.J. Phillips. 2007. The Chief Learning Officer: Driving Value Within a
Changing Organization Through Learning and Development. Waltham, MA:
Butterworth-Heinemann.
Elkeles, T., J.J. Phillips, and P.P. Phillips. 2016. The Chief Talent Officer: Driving Value
Within A Changing Organization Through Learning and Development, 2nd ed.
Abingdon, UK: Routledge.
Phillips, J.J., W. Brantley, and P.P. Phillips. 2011. Project Management ROI: A Step-by-
Step Guide for Measuring the Impact and ROI for Projects. Hoboken, NJ: John
Wiley.
Phillips, J.J., M.T. Breining, and P.P. Phillips. 2008. Return on Investment in Meetings &
Events: Tools and Techniques to Measure the Success of All Types of Meetings and
Events. Waltham, MA: Butterworth-Heinemann.
Phillips, J.J., V. Buzachero, P.P. Phillips, and Z.L. Phillips. 2012. Measuring ROI in
Healthcare: Tools and Techniques to Measure the Impact and ROI in Healthcare
Improvement Projects and Programs. New York: McGraw-Hill.
Phillips, J.J., M. Myhill, and J.B. McDonough. 2007. Proving the Value of Meetings &
Events: How and Why to Measure ROI. Birmingham, AL: ROI Institute; Dallas: MPI.
Phillips, J.J., and P.P. Phillips. 2007. Show Me the Money: How to Determine ROI in
People, Projects, and Programs. San Francisco: Berrett-Koehler.
Phillips, J.J., and P.P. Phillips. 2008. Beyond Learning Objectives: Develop Powerful
Objectives that Link to the Bottom Line. Alexandria, VA: ASTD Press.
Phillips, J.J., and P.P. Phillips. 2008. The Measurement and Evaluation Series – ROI
Fundamentals 1, Data Collection 2, Isolation of Results 3, Data Conversion 4, Costs
and ROI 5, Communication and Implementation 6. New York: Pfeiffer.
Phillips, J.J., and P.P. Phillips. 2010. The Consultant’s Guide to Results-Driven
Proposals. How to Write Proposals That Forecast the Impact and ROI. New York:
McGraw-Hill.
Phillips, J.J., and P.P. Phillips. 2011. 10 Steps to Successful Business Alignment.
Alexandria, VA: ASTD Press.
Phillips, J.J., and P.P. Phillips. 2012. Proving the Value of HR: How and Why to Measure
ROI. 2nd ed. Alexandria, VA: SHRM.
Phillips, J.J., and P.P. Phillips. 2015. High-Impact Capital Strategy: Addressing the 12
Major Challenges Today’s Organizations Face. New York: AMACOM.
Phillips, J.J., and P.P. Phillips. 2015. Making Human Capital Analytics Work: Measuring
the ROI of Human Capital Processes and Outcomes. New York: McGraw-Hill.
Phillips, J.J., and P.P. Phillips. 2016. Handbook of Training Evaluation and
Measurement, 4th ed. New York: Routledge.
Phillips, J. J., and P.P. Phillips. 2018. The Value of Innovation: Knowing, Proving, and
Showing the Value of Innovation and Creativity. Hoboken, NJ: Wiley.
Phillips, J.J., P.P. Phillips, and A. Pulliam. 2014. Measuring ROI in Environment, Health,
and Safety. Hoboken, NJ: Scrivener-Wiley.
Phillips, J.J., P.P. Phillips, and R.L. Ray. 2012. Measuring Leadership Development:
Quantify Your Program’s Impact and ROI on Organizational Performance. New
York: McGraw-Hill.
Phillips, J.J., P.P. Phillips, and K. Smith. 2016. Accountability in Human Resource
Management: Connect HR to Business Results, 2nd ed. Abingdon, UK: Routledge.
Phillips, J.J., and L. Schmidt. 2004. The Leadership Scorecard (Improving Human
Performance Series). New York: Elsevier; Waltham, MA: Butterworth-Heinemann.
Phillips, J.J., and R. Stone. 2001. The Human Resources Scorecard: Measuring Return
on Investment. New York: Elsevier; Waltham, MA: Butterworth-Heinemann.
Phillips, J.J., and R. Stone. 2002. How to Measure Training Results: A Practical Guide to
Tracking the Six Key Indicators. New York: McGraw-Hill.
Phillips, J.J., W.D. Trotter, and P.P. Phillips. 2015. Maximizing the Value of Consulting:
A Guide for Internal and External Consultants. Hoboken, NJ: Wiley.
Phillips, P.P., ed. 2001. Measuring Return on Investment (In Action), vol. 3. Alexandria,
VA: ASTD Press.
Phillips, P.P. 2017. The Bottomline on ROI: Benefits and Barriers to Measuring
Learning, Performance Improvement, and Human Resources Programs, 3rd ed.
West Chester, PA: HRDQ.
Phillips, P.P., and J.J. Phillips. 2007. The Value of Learning: How Organizations Capture
Value and ROI and Translate Them into Support, Improvement, and Funds. New
York: Pfeiffer.
Phillips, P.P., and J.J. Phillips. 2011. The Green Scorecard: Measuring the Return on
Investment in Sustainability Initiatives. Boston: Nicholas Brealey.
Phillips, P.P., and J.J. Phillips. 2013. Survey Basics: A Complete How-to Guide to Help
You: Design Surveys and Questionnaires, Analyze Data and Display Results, and
Identify the Best Survey Tool for Your Needs. Alexandria, VA: ASTD Press.
Phillips, P.P., and J.J. Phillips. 2016. Real World Evaluation Training: Navigating
Common Constraints for Exceptional Results. Alexandria, VA: ATD Press.
Phillips, P.P., and J.J. Phillips. 2017. The Business Case for Learning: Using Design
Thinking to Deliver Business Results and Increase the Investment in Talent
Development. West Chester, PA: HRDQ; Alexandria, VA: ATD Press
Phillips, P.P., J.J. Phillips, G. Paone, and C. Huff-Gaudet. 2019. Value for Money: How to
Show the Value for Money for All Types of Projects and Programs in Governments,
Non-Governmental Organizations, Nonprofits, and Businesses. Hoboken, NJ: Wiley.
Phillips, P.P., J.J. Phillips, R. Stone, and H. Burkett. 2006. The ROI Fieldbook: Strategies
for Implementing ROI in HR and Training. Waltham, MA: Butterworth-Heinemann.
Robinson, D.G., J.C. Robinson, J.J. Phillips, P.P. Phillips, and D. Handshaw. 2015.
Performance Consulting: A Strategic Process to Improve, Measure, and Sustain
Organizational Results, 3rd ed. Oakland, CA: Berrett-Koehler.
Phillips, J.J., ed. 1994. Measuring Return on Investment (In Action), vol. 1. Alexandria,
VA: ASTD Press.
Phillips, J.J., ed. 1997. Measuring Return on Investment (In Action), vol. 2. Alexandria,
VA: ASTD Press.
Phillips, J.J., ed. 1998. Implementing Evaluation Systems and Processes (In Action).
Alexandria, VA: ASTD Press.
Phillips, J.J., ed. 2000. Performance Analysis and Consulting (In Action). Alexandria, VA:
ASTD Press.
Phillips, J.J., and P.P. Phillips. 2006. ROI at Work: Best Practice Case Studies From the
Real World. Alexandria, VA: ASTD Press.
Phillips, J.J., and P.P. Phillips. 2010. Measuring for Success: What CEOs Really Think
About Learning Investments. Birmingham, AL: ROI Institute; Alexandria, VA: ASTD
Press.
Phillips, J.J., P.P. Phillips, and L. Zuniga. 2013. Measuring the Success of Organization
Development: A Step-by-Step Guide for Measuring Impact and Calculating ROI.
Alexandria, VA: ASTD Press.
Phillips, J.J., and P.P. Phillips. 2014. Measuring ROI in Employee Relations and
Compliance. Alexandria, VA: SHRM.
Phillips, P.P., and J.J. Phillips. 2002. Measuring ROI in the Public Sector (In Action).
Alexandria, VA: ASTD Press.
Phillips, P.P., and J.J. Phillips. 2008. ROI in Action Casebook. New York: Pfeiffer.
Phillips, P.P., and J.J. Phillips. 2012. Measuring the Success of Coaching: A Step-by-
Step Guide for Measuring Impact and Calculating ROI. Alexandria, VA: ASTD Press.
Phillips, P.P., and J.J. Phillips. 2012. Measuring ROI in Learning and Development: Case
Studies from Global Organizations. Alexandria, VA: ASTD Press.
Phillips, P.P., and J.J. Phillips. 2013. Measuring the Success of Sales Training: A Step-
by-Step Guide for Measuring Impact and Calculating ROI. Alexandria, VA: ASTD
Press.
Phillips, P.P., J.J. Phillips, and R.L. Ray. 2015. Measuring the Success of Leadership
Development: A Step-by-Step Guide for Measuring Impact and Calculating ROI.
Alexandria, VA: ATD.
Phillips, P.P., and J.J. Phillips. 2016. Measuring the Success of Employee Engagement:
A Step-by-Step Guide for Measuring Impact and Calculating ROI. Alexandria, VA:
ATD Press.
Phillips, P.P., and J.J. Phillips. 2018. Value for Money: Measuring the Return on Non-
Capital Investments. Birmingham, AL: BWE Press.
Pope, C., and J.J. Phillips, eds. 2001. Implementing E-learning Solutions (In Action).
Alexandria, VA: ASTD Press.
Schmidt, L., and J.J. Phillips, eds. 2003. Implementing Training Scorecards (In Action).
Alexandria, VA: ASTD Press.
American Society for Training & Development (ASTD). 2009. “The Four Levels of
Evaluation + ROI.” Infoline. Alexandria, VA: ASTD Press.
Association for Talent Development (ATD). 2015. “Train the Trainer Volume 4:
Measurement and Evaluation: Essentials for Measuring Training Success.” TD at
Work. Alexandria, VA: ATD Press.
Burkett, H., and P.P. Phillips. 2001. “Managing Evaluation Shortcuts.” Infoline.
Alexandria, VA: ASTD Press.
Glynn, K., and D. Tolsma. 2017. “Design Thinking Meets ADDIE.” TD at Work.
Alexandria, VA: ATD Press.
Guerra-López, I., and K. Hicks. 2015. “Turning Trainers into Strategic Business
Partners.” TD at Work. Alexandria, VA: ATD Press.
Neal, B. 2014. “How to Develop Training Quality Standards.” Infoline. Alexandria, VA:
ASTD Press.
Novak, C. 2012. “Making the Financial Case for Performance Improvement.” Infoline.
Alexandria, VA: ASTD Press.
Phillips, J.J., W. Jones, and C. Schmidt. 1999. “Level 3 Evaluation: Application.” Infoline.
Alexandria, VA: ASTD Press.
Phillips, J.J., and P.P. Phillips. 1998. “Level 5 Evaluation: Mastering ROI.” Infoline.
Alexandria, VA: ASTD Press.
Phillips, J.J., R. Shriver, and H.S. Giles. 1999. “Level 2 Evaluation: Learning.” Infoline.
Alexandria, VA: ASTD Press.
Phillips, J.J., and R.D. Stone. 1999. “Level 4 Evaluation: Business Results.” Infoline.
Alexandria, VA: ASTD Press.
Phillips, J.J., J.O. Wright, and S.I. Pettit-Sleet. 1999. “Level 1 Evaluation: Reaction and
Planned Action.” Infoline. Alexandria, VA: ASTD Press.
Phillips, P.P., and J.J. Phillips. 2003. “Evaluation Data: Planning and Use.” Infoline.
Alexandria, VA: ASTD Press.
Spitzer, D., and M. Conway. 2002. “Link Training to the Bottom Line.” Infoline.
Alexandria, VA: ASTD Press.
Waagen, A. 1997. “Essentials for Evaluation.” Infoline. Alexandria, VA: ASTD Press.
Knaflic, C.N. 2015. Storytelling With Data: A Data Visualization Guide for Business
Professionals. Hoboken, NJ: Wiley.
Tufte, E.R. 1997. Visual Explanations: Images and Quantities, Evidence and Narrative.
Cheshire, CT: Graphics Press.
About the Authors
Industry, based on his work on ROI. The Society for Human Resource
Management presented him with an award for one of his books and honored a
Phillips ROI study with its highest award for creativity. The Association
for Talent Development, formerly the American Society for Training &
Development, gave him its highest award, Distinguished Contribution to
Workplace Learning and Development, for his work on ROI. The
International Society for Performance Improvement presented Jack with
its highest award, the Thomas F. Gilbert Award, for his contribution to
human performance technology. His work has been featured in the Wall
Street Journal, BusinessWeek, and Fortune. He has been interviewed
on several television programs, including CNN, and served as
president of the International Society for Performance Improvement.
Jack regularly consults with clients in manufacturing, service, and
government organizations in 70 countries. He and his wife, Patti P.
Phillips, contribute to a variety of journals in addition to authoring more
than 100 books.
Jack has undergraduate degrees in electrical engineering, physics,
and mathematics; a master’s degree in decision sciences from Georgia
State University; and a PhD in human resource management from the
University of Alabama. He has served on the boards of several private
businesses—including two NASDAQ companies—and several nonprofits
and associations, including the Association for Talent Development and
the National Management Association. He is chairman of ROI Institute
Inc. and can be reached at [email protected].