Developing an Effective Evaluation Plan
Acknowledgments
This workbook was developed by the Centers for Disease Control and Prevention's
(CDC's) Office on Smoking and Health (OSH) and Division of Nutrition, Physical Activity,
and Obesity (DNPAO). It is one of a series of technical assistance workbooks for use by program managers and evaluators. The workbooks
are intended to offer guidance and facilitate capacity building on a wide range of
evaluation topics. We encourage users to adapt the tools and resources in this workbook
to meet their program's evaluation needs.
This workbook applies the CDC Framework for Program Evaluation in Public Health
(http://www.cdc.gov/eval/framework/index.htm). The Framework lays out a six-step
process for the decisions and activities involved in conducting an evaluation. While
the Framework provides steps for program evaluation, the steps are not always linear; the process is often iterative, and some steps can be completed concurrently. In
some cases, it makes more sense to skip a step and come back to it. The important
thing is that the steps are considered within the specific context of your program.
Suggested Citation: Developing an Effective Evaluation Plan. Atlanta, Georgia: Centers for
Disease Control and Prevention, National Center for Chronic Disease Prevention and
Health Promotion, Office on Smoking and Health; Division of Nutrition, Physical Activity,
and Obesity, 2011.
Table of Contents
The ESW: Why should you engage stakeholders in developing the evaluation plan?
Narrative Description
Logic Model
Stage of Development
Credible Evidence
Measurement
Budget
Ensuring Use
References
Resources
Figures
Figure 1: CDC Framework for Program Evaluation in Public Health
The "How" addresses the process for implementing a program and provides
information about whether the program is operating with fidelity to the
program's design. Additionally, the "How" (or process evaluation), along with
output and/or short-term outcome information, helps clarify if changes should
be made during
implementation.
1
• The "Why It Matters" provides the rationale for your program and the impact it
has on public health. This is also sometimes referred to as the "so what"
question. Being able to demonstrate that your program has made a difference is
critical to program sustainability.
There are several critical elements needed to ensure that your evaluation plan lives up to
its potential. These elements include ensuring (1) that your plan is collaboratively
developed with a stakeholder workgroup, (2) that it is responsive to program changes and
priorities,
(3) that it covers multiple years if your project is ongoing, and (4) that it addresses your
entire program rather than focusing on just one funding source or objective/activity. You
will, by necessity, focus the evaluation based on feasibility, stage of development, ability
to consume information, and other priorities that will be discussed in Steps 3 and 4 in
this
workbook. However, during the planning phase, your entire program should be
considered by the evaluation group.
• Title page: Contains an easily identifiable program name, dates covered, and basic focus of the evaluation.
• Intended use and users: Fosters transparency about the purpose(s) of the
evaluation and identifies who will have access to evaluation results. It is
important to build a market for evaluation results from the beginning. Clarifying
the primary intended users, the members of the stakeholder evaluation
workgroup, and the purpose(s) of the evaluation will help to build this market.
• Analysis and interpretation plan: Clarifies how information will be analyzed and
describes the process for interpretation of results. This section describes who will
get to see interim results, whether there will be a stakeholder interpretation
meeting or meetings, and methods that will be used to analyze the data.
• Use, dissemination, and sharing plan: Describes plans for use of evaluation
results and dissemination of evaluation findings. Clear, specific plans for
evaluation use should be discussed from the beginning. This section should
include a broad overview of how findings are to be used as well as more detailed
information about the intended modes and methods for sharing results with
stakeholders. This is a critical but often neglected section of the evaluation plan.
CDC's Framework for Program Evaluation in Public Health

Steps:
1. Engage stakeholders.
2. Describe the program.
3. Focus the evaluation design.
4. Gather credible evidence.
5. Justify conclusions.
6. Ensure use and share lessons learned.

In addition to CDC's Framework for Program Evaluation in Public Health, there are evaluation standards that will enhance the quality of evaluations by guarding against potential mistakes or errors in practice. The evaluation standards are grouped around four important attributes, indicated by the inner circle in Figure 1: utility, feasibility, propriety, and accuracy. It is critical to remember that these standards apply to all steps and phases of the evaluation plan.
• Utility: Serve the information needs of intended users.
• Feasibility: Be realistic, prudent, diplomatic, and frugal.
• Propriety: Behave legally, ethically, and with due regard for the welfare of those involved and those affected.
• Accuracy: Ensure the evaluation is comprehensive and grounded in the data.
(The Joint Committee on Standards for Educational Evaluation, 1994)
1. Rendering judgments: accountability (Patton, 2008)
Several questions pertaining to stakeholders may arise among program staff, including:
It is suggested that the program enlist the aid of an ESW of 8 to 10 members that represents the stakeholders who have the greatest stake or vested interest in the evaluation (Centers for Disease Control and Prevention, 2008). These stakeholders, or primary intended users, will serve in a consultative role on all phases of the evaluation. As members of the ESW, they will be an integral part of the entire evaluation process, from the initial design phase to interpretation, dissemination, and ensuring use. Stakeholders will play a major role in the program's evaluation, including consultation and possibly even data collection, interpretation, and decision making based on the evaluation results.
Sometimes stakeholders can have competing interests that may come to light in the evaluation planning process. It is important to explore agendas in the beginning and come to a shared understanding of roles and responsibilities, as well as the purposes of the evaluation. It is important that both the program and the ESW understand and agree to the importance and role of the workgroup in this process.
In order to meaningfully engage your stakeholders, you will need to allow time for
resolving conflicts and coming to a shared understanding of the program and evaluation.
However, the time is worth the effort and leads toward a truly participatory, empowerment
approach to evaluation.
Step 3: Focus the evaluation. Understanding the purpose of the evaluation and the
rationale for prioritization of evaluation questions is critical for transparency and
acceptance of evaluation findings. It is essential that the evaluation address those
questions of greatest need to the program and priority users of the evaluation.
• Identify intended users who can directly benefit from and use the
evaluation results.
• Identify an evaluation stakeholder workgroup (ESW) of 8 to 10 members.
• Engage stakeholders throughout the plan development process as well as
the implementation of the evaluation.
• Identify intended purposes of the evaluation.
• Allow for adequate time to meaningfully engage the evaluation
stakeholder workgroup.
EVALUATION TOOLS AND RESOURCES FOR STEP 1:
Narrative Description
A narrative description helps ensure a full and complete shared understanding of the
program. A logic model may be used to succinctly synthesize the main elements of a
program. While a logic model is not always necessary, a program narrative is. The
program description is essential for focusing the evaluation design and selecting the
appropriate methods. Too often groups jump to evaluation methods before they even
have a grasp
of what the program is designed to achieve or what the evaluation should deliver. Even
though much of this will have been included in your funding application, it is good
practice to revisit this description with your ESW to ensure a shared understanding and
that the program is still being implemented as intended. The description will be based on
your program's objectives and context but most descriptions include at a minimum:
Logic Model
The description section often includes a logic model to visually show the link between
activities and intended outcomes. It is helpful to review the model with the ESW to
ensure a shared understanding of the model and that the logic model is still an accurate
and complete reflection of your program. The logic model should identify available
resources (inputs), what the program is doing (activities), and what you hope to achieve
(outcomes). You might also want to articulate any challenges you face (the program's
context or environment). Figure 2 illustrates the basic components of a program logic
model. As you view the logic model from left to right, the farther a component is from the intervention, the more time is needed to observe outcomes. A major challenge in evaluating chronic disease
prevention and health promotion programs is one of attribution versus contribution and
the fact that distal outcomes may not occur in close proximity to the program
interventions or policy change. In addition, given the complexities of dynamic
implementation environments, realized impacts may differ from intended impacts.
However, the rewards of understanding the proximal and distal impacts of the program
intervention often outweigh the challenges.
• Activities: The actual interventions that the program implements in order to achieve health outcomes
Figure 2: Sample Logic Model
(Figure 2 depicts a logic model flowing from Inputs through Activities and Outputs to Short-term, Intermediate, and Long-term Outcomes, all set within the program's Environmental Context.)
Stage of Development
Another activity that will be needed to fully describe your program and prepare you to
focus your evaluation is an accurate assessment of the stage of development of the
program. The developmental stages that programs typically move through are planning,
implementation, and maintenance. In the example of a policy or environmental
initiative, the stages might look somewhat like this:
The stage of development conceptual model is complementary to the logic model. Figure
3.1 shows how general program evaluation questions are distinguished by both logic
model categories and the developmental stage of the program. This places evaluation
within the appropriate stage of program development (planning, implementation, and
maintenance). The model offers suggested starting points for asking evaluation questions
within the
logic model while respecting the developmental stage of the program. This will prepare
the program and the workgroup to focus the evaluation appropriately based on
program maturity and priorities.
Key evaluation questions and needs for information will differ based on the stage of
development of the program. Additionally, the ability to answer key evaluation
questions will differ by stage of development of the program, and stakeholders need to be aware of what the evaluation can and cannot answer. For the above policy program example, planning-stage questions might include:
Useful evaluations are not about special research interests or what is easiest to implement
but what information will be used by the program, stakeholders (including funders), and
decision makers to improve the program and make decisions. Establishing the focus of the
evaluation began with the identification of the primary purposes and the primary intended
users of the evaluation. This process was further solidified through the selection of the
ESW. Developing the purposeful intention to use evaluation information and not just
produce another evaluation report starts at the very beginning with program planning and
your evaluation plan. You need to garner stakeholder interests and prepare them for
evaluation use. This step facilitates conceptualizing what the evaluation can and cannot
deliver.
It is important to collaboratively focus the evaluation design with your ESW based on the
identified purposes, program context, logic model, and stage of development.
Additionally, issues of priority, feasibility, and efficiency need to be discussed with the
ESW and those responsible for the implementation of the evaluation. Transparency is
particularly important in this step. Stakeholders and users of the evaluation will need to
understand why some questions were identified as high priorities while others were
rejected or delayed.
Often if a funder requires an evaluation plan, you might notice text like this:
As you and the ESW take ownership of the evaluation, you will find that honing the
evaluation focus will likely solidify interest in the evaluation. Selection of final evaluation
questions should balance what is most useful to achieving your program's information
needs while also meeting your stakeholders' information needs. Having stakeholders
participate in the selection of questions increases the likelihood of their support for the evaluation and use of its results.
The ultimate goal is to focus the evaluation design such that it reflects the program
stage of development, selected purpose of the evaluation, uses, and questions to be
answered. Transparency related to the selection of evaluation questions is critical to
stakeholder acceptance of evaluation results and possibly even for continued support of
the program.
Even with an established multi-year plan, Step 3 should be revisited with your ESW
annually (or more often if needed) to determine if priorities and feasibility issues still hold
for the planned evaluation activities. This highlights the dynamic nature of the
evaluation plan. Ideally, your plan should be intentional and strategic by design and
generally cover multiple years for planning purposes. But the plan is not set in stone. It
should also be flexible and adaptive. It is flexible because resources and priorities
change and adaptive because opportunities and programs change. You may have a
new funding opportunity and a short-term program added to your overall program. This
may require insertion of a smaller evaluation plan specific to the newly funded project,
but with the overall program
evaluation goals and objectives in mind. Or, resources could be cut for a particular
program requiring a reduction in the evaluation budget. The planned evaluation may have
to be reduced or delayed. Your evaluation plan should be flexible and adaptive to
accommodate these scenarios while still focusing on the evaluation goals and objectives
of the program and the ESW.
Some options that may point you in the direction of qualitative methods:
• You are planning and want to assess what to consider when designing a program
or initiative. You want to identify elements that are likely to be effective.
• You are looking for feedback while a program or initiative is in its early stages and
want to implement a process evaluation. You want to understand approaches to
enhance the likelihood that an initiative (e.g., policy or environmental change) will
be adopted.
• Something isn't working as expected and you need to know why. You need to
understand the facilitators and barriers to implementation of a particular initiative.
• You want to truly understand how a program is implemented on the ground
and need to develop a model or theory of the program or initiative.
Some options that may point you in the direction of quantitative methods:
• You are looking to identify current and future movement or trends of a
particular phenomenon or initiative.
• You want to consider standardized outcomes across programs. You need to
monitor outputs and outcomes of an initiative. You want to document the impact
of a particular initiative.
• You want to know the costs associated with the implementation of a particular
intervention.
• You want to understand what standardized outcomes are connected with a
particular initiative and need to develop a model or theory of the program or initiative.
Or the most appropriate method may be a mixed methods approach wherein the
qualitative data provide value, understanding, and application to the quantitative data. It is beyond the scope of this workbook to address the full process of deciding which method(s) are most appropriate for which types of evaluation questions. The question is not whether
to apply qualitative or quantitative methods but what method is most appropriate to
answer the evaluation question chosen. Additional resources on this are provided in the
resource section in Part II.
Measurement
If the method selected includes indicators and/or performance measures, the
discussion
of what measures to include is critical and often lengthy. This discussion is naturally tied
to the data credibility conversation, and there is often a wide range of possible indicators
or performance measures that can be selected for any one evaluation question. You will
want to consult best practices publications for your program area and even other
programs in neighboring states or locales. The expertise that your ESW brings to the table
can facilitate this discussion. The exact selection of indicators or performance measures
is beyond the scope of this workbook. Resource information is included in Part II of this
workbook, such as the Key Outcome Indicators for Evaluating Comprehensive Tobacco
Programs guide.
Example row from an evaluation plan methods grid:
Evaluation Question: What process leads to implementation of the policy?
Indicator/Performance Measure: Interview description of process steps, actions, and strategies
Method: Case study, interviews, document reviews, etc.
Data Source: Site visits and reports
Frequency: Pre and post funding period
Responsibility: Contractor
Figure 4.2: Evaluation Plan Methods Grid Example
Columns: Evaluation Question | Indicators/Performance Measure | Potential Data Source (Existing/New) | Comments
An Evaluation Plan Methods Grid exercise and more examples can be found in Part II, Section 4.1.
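If your team maintains the methods grid electronically rather than on paper, each row can be stored as a simple record. The following is a minimal Python sketch, not a prescribed format: the field names simply mirror the grid columns, and the sample values are adapted from the policy example in this workbook.

    # Minimal sketch: one record per methods-grid row.
    # Field names mirror the grid columns; sample values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class MethodsGridRow:
        question: str        # evaluation question
        indicator: str       # indicator or performance measure
        method: str          # data collection method
        data_source: str     # note whether the source is existing or new
        frequency: str       # how often data are collected
        responsibility: str  # who is responsible

    grid = [
        MethodsGridRow(
            question="What process leads to implementation of the policy?",
            indicator="Interview description of process steps, actions, and strategies",
            method="Case study, interviews, document reviews",
            data_source="Site visits and reports (new)",
            frequency="Pre and post funding period",
            responsibility="Contractor",
        ),
    ]

    # A quick completeness check: every row needs a data source and an owner.
    for row in grid:
        assert row.data_source and row.responsibility
        print(f"{row.question} -> {row.method}; {row.frequency}")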
Budget
The evaluation budget discussion was most likely initially started during Step 3 when the
team was discussing the focus of the evaluation and feasibility issues. It is now time to
develop a complete evaluation project budget based on the decisions made about the
evaluation questions, methods, roles, and responsibilities of stakeholders. A complete
budget is necessary to ensure that the evaluation project is fully funded and can deliver
upon promises.
An Evaluation Budget exercise and more examples can be found in Part II, Section
4.2.
• Select the best method(s) that answer the evaluation question. This can often involve a mixed-methods approach.
• Gather the evidence that is seen as credible by the primary users of the
evaluation.
• Define implementation roles and responsibilities for program staff,
evaluation staff, contractors, and stakeholders.
• Develop an evaluation plan methods grid to facilitate a shared understanding of the overall evaluation plan and the timeline for evaluation activities.
Planning for analysis and interpretation is directly tied to the timetable begun in Step 4.
Errors or omissions in planning this step can create serious delays in the final evaluation
report and may result in missed opportunities if the report has been timed to correspond
with significant events. Often, groups fail to appreciate the resources, time, and
expertise required to clean and analyze data. This applies to both qualitative and
quantitative data. Some programs focus their efforts on collecting data, but never fully
appreciate the
time it takes to work with the data to prepare for analysis, interpretation, feedback, and
conclusions. These programs are suffering from "D.R.I.P.", that is, programs that are "Data Rich but Information Poor." Survey data remain "in boxes" or interviews are never
fully explored for theme identification.
After planning for the analysis of the data, you have to prepare to examine the
results to determine what the data actually say about your program. These results should be interpreted with the goals of your program, the social/political context of the program, and the needs of the stakeholders in mind.
Moreover, it is critical that your plans include time for interpretation and review by
stakeholders to increase transparency and validity of your process and conclusions.
The emphasis here is on justifying conclusions, not just analyzing data. This is a step that deserves due diligence in the planning process. The propriety standard plays a role in guiding the evaluator's decisions on how to analyze and interpret data to assure that all stakeholder values are respected in the process of drawing conclusions (Program Evaluation Standards, 1994). That is, it helps determine who needs to be involved in the evaluation for it to be ethical. This may include one or more stakeholder interpretation meetings to review interim data and further refine conclusions. A note of caution: in a stakeholder-driven process, there is often pressure to reach beyond the evidence when drawing conclusions.
Based on the uses for your evaluation, you will need to determine who should learn
about the findings and how they should learn the information. Typically, this is where
the final report is published, and most assume the evaluation is done. However, if
personal ownership of evaluation results is inserted here, such as through collaboration
with an ESW, the impact and value of the evaluation results will increase (Patton, 2008).
The program and the ESW take personal responsibility for getting the results to the
right
people and in a usable, targeted format. This absolutely must be planned for and included
in the evaluation plan. It will be important to consider the audience in terms of timing, style, tone, message source, method, and format. Remember that stakeholders will not suddenly become interested in your product just because you produced a report. You must sufficiently prepare the market for the product and for use of the evaluation results (Patton, 2008). Writing a straightforward and comprehensive evaluation report can help ensure use.
Communication and Dissemination Plans
Your evaluation results may not reach the intended audience with the intended impact
just because they are published. An intentional communication and dissemination
approach should be included in your evaluation plan. As previously stated, the planning
stage is
the time for the program and the ESW to begin to think about the best way to share
the lessons you will learn from the evaluation. The communication-dissemination
phase of
the evaluation is a two-way process designed to support use of the evaluation results
for program improvement and decision making. In order to achieve this outcome, a
program must translate evaluation results into practical applications and must
systematically distribute the information through a variety of audience-specific
strategies. To be effective, dissemination systems need to:
• orient toward the needs of the user, incorporating the types and levels
of information needed into the forms and language preferred by the
user,
• use varied dissemination methods, including written information, electronic media,
and person-to-person contact,
• include both proactive and reactive dissemination channels; that is, incorporate information that users have identified as important and information that users may not know to request but that they are likely to need,
• establish clear channels for users to make their needs and priorities known to
the disseminating agency,
• recognize and provide for the "natural flow" of the four levels of dissemination
that have been identified as leading to utilization: spread, exchange, choice,
and implementation,
• draw upon existing resources, relationships, and networks to the maximum extent
possible while building new resources as needed by users,
• include effective quality control mechanisms to assure that information included
is accurate, relevant, and representative,
• incorporate sufficient information so that the user can determine the basic
principles
underlying specific practices and the settings in which these practices may be
used most productively, and
• establish linkages to resources that may be needed to implement the information, usually referred to as technical assistance.
(National Institute on Disability and Rehabilitation Research,
2001)
The first step in writing an effective communications plan is to define your
communication goals and objectives. Given that the communication objectives will be
tailored to each
target audience, you need to consider with your ESW who the primary audience(s) are
(e.g., the funding agency, the general public, or some other group). Some questions to ask
about the potential audience(s) are the following:
• Who is a priority?
• What do they already know about the topic?
• What is critical for them to know?
• Where do they prefer to receive their information?
• What is their preferred format?
• What language level is appropriate?
• Within what time frame are evaluation updates and reports necessary?
Once the goals, objectives, and target audiences of the communication plan are
established, you should consider the best way to reach the intended audience by
considering which communication/dissemination tools will best serve your goals and
objectives. Will the program use newsletters/fact sheets, oral presentations, visual displays,
videos, storytelling, and/or press releases? Carefully consider the best tools to use by
getting feedback from
your ESW, by learning from others' experiences, and by reaching out to target audiences
to gather their preferences. An excellent resource to facilitate creative techniques for
reporting evaluation results is Torres, Preskill, and Piontek's (2004) Evaluation Strategies for Communicating and Reporting.
Complete the communication planning step by establishing a timetable for sharing
evaluation findings and lessons learned. Figure 5 can be useful in helping the program
to chart the written communications plan:
Figure 5: Communication Plan Table
Columns: Target Audience (Priority) | Goals | Tools | Timetable
Program Implementation Team | Inform them in real time about what's working well and what needs to be quickly adjusted during implementation | Monthly meetings and briefing documents | Monthly
Funding Decision Makers | Continue and/or enhance program funding | Executive summary; targeted program briefs | Within 90 days of conclusion of funding
It is important to note that you do not have to wait until the final evaluation report is
written in order to share your evaluation results. A system for sharing interim results to
facilitate program course corrections and decision making should be included in your
evaluation plan. Additionally, success stories that focus on upstream, midstream, and
downstream successes can facilitate program growth and visibility. A success story can
show movement in your program's progress over time and demonstrate its value and
impact. It can serve as a vehicle for engaging potential participants, partners, and funders, especially when it takes time for a program to mature to long-term outcomes (Lavinghouze, Price, & Smith, 2007).
The Communicating Results exercise can be found in Part II, Section 6.2 and can
assist you with tracking your audiences and ways to reach them. More information
on developing a communication and dissemination plan can be found in the
Resource Section in Part II of this workbook.
There are several practical steps you can include in your evaluation plan to help ensure evaluation findings are used. These steps might contain plans to:
• conduct regularly scheduled meetings with evaluation stakeholders as a forum
for sharing evaluation findings in real time and developing recommendations
for program improvement based on evaluation findings,
• review evaluation findings and recommendations in regularly scheduled staff
meetings,
• engage stakeholders in identifying ways they can apply evaluation findings
to improve their programs,
• coordinate, document, and monitor efforts program staff and partners are making to
implement improvement recommendations, and
• develop multiple, tailored evaluation reports to address specific stakeholders' information needs.
PULLING IT ALL TOGETHER
Thus far we have walked through the six steps of the CDC Framework for Program
Evaluation in Public Health to facilitate programs and their evaluation workgroups as
they think through the process of planning evaluation activities. We have described the
components of an evaluation plan and details to consider while developing the plan in the
context of the CDC Framework. In this section, we briefly recap information that you
should consider when developing your evaluation plan.
■ Title page
■ Question overview
■ Intended use and users
■ Program description
■ Evaluation focus
■ Methods
■ Analysis and interpretation plan
■ Use, dissemination, and sharing plan
However, your plan should be adapted to your specific evaluation needs and context.
Additionally, it is important to remember that your evaluation plan is a living, dynamic
document designed to adapt to the complexities of the environment within which your
programs are implemented. The plan is a guide to facilitate intentional decisions. If
changes are made, they are documented and done intentionally with a fully informed ESW.
Title page: This page provides easily identifiable program name, dates covered,
and possibly basic focus of the evaluation.
Intended use and users: This section fosters transparency about the purposes of the
evaluation and who will have access to evaluation results. It is important to build a
market for evaluation results from the beginning. This section identifies the primary
intended users and the ESW and describes the purposes and intended uses of the
evaluation.
Evaluation focus: There are never enough resources or time to answer every evaluation
question. Prioritization must be collaboratively accomplished based on the logic model/
program description, the stage of development of the program, program and
stakeholder priorities, intended uses of the evaluation, and feasibility issues. This section
will clearly delineate the criteria for evaluation prioritization and will include a discussion
of feasibility and efficiency.
Methods: This section covers indicators and performance measures, data sources and
selection of appropriate methods, roles and responsibilities, and credibility of
evaluation information. This section will include a discussion about appropriate
methods to fit the evaluation question. An evaluation plan methods grid is a useful tool
for transparency and planning.
Analysis and interpretation plan: Who will get to see interim results? Will there be a
stakeholder interpretation meeting or meetings? It is critical that your plans allow time for
interpretation and review from stakeholders (including your critics) to increase
transparency and validity of your process and conclusions. The emphasis here is on
justifying conclusions, not just analyzing data. This is a step that deserves due diligence in
the planning process. The propriety standard plays a role in guiding the evaluator's decisions on how to analyze and interpret data to assure that all stakeholder values are respected in
the process of drawing conclusions. A timeline that transparently demonstrates inclusion
of stakeholders facilitates acceptance of evaluation results and use of information.
Use, dissemination, and sharing plan: Plans for use of evaluation results,
communications, and dissemination methods should be discussed from the beginning.
This is a critical but often neglected section of the evaluation plan. A communication plan
that displays target audience, goals, tools, and a timeline is helpful for this section.
The exercises, worksheets, and tools found in Part II of this workbook are to help you
think through the concepts discussed in Part I. These are only examples. Remember,
your evaluation plan(s) will vary based on program and stakeholder priorities and context.
Centers for Disease Control and Prevention. Framework for Program Evaluation in Public
Health. Morbidity and Mortality Weekly Report 1999;48(No. RR-11):1-40.
Centers for Disease Control and Prevention. Introduction to Process Evaluation in
Tobacco Use Prevention and Control. Atlanta (GA): U.S. Department of Health and Human
Services, Centers for Disease Control and Prevention, National Center for Chronic Disease
Prevention and Health Promotion, Office on Smoking and Health, 2008 [accessed 2011 Oct
19].
Knowlton LW, Phillips CC. The Logic Model Guidebook: Better Strategies for Great Results. Thousand Oaks (CA): Sage Publications, 2009.
Lavinghouze R, Price AW, Smith K-A. The Program Success Story: A Valuable Tool for
Program Evaluation. Health Promotion Practice 2007; 8(4):323-331.
National Institute on Disability and Rehabilitation Research. Developing an Effective
Dissemination Plan. Austin (TX): Southwest Educational Developmental Laboratory,
2001 [accessed 2011 Oct 24].
Patton MQ. Utilization-Focused Evaluation. 4th ed. Thousand Oaks (CA): Sage
Publications, 2008.
Sanders JR, The Joint Committee on Standards for Educational Evaluation. The Program Evaluation Standards. 2nd ed. Thousand Oaks (CA): Sage Publications, 1994.
Torres R, Preskill H, Piontek ME. Evaluation Strategies for Communicating and Reporting.
2nd ed. Thousand Oaks (CA): Sage Publications, 2004.
Worthen BR, Sanders JR, Fitzpatrick JL. Program Evaluation: Alternative Approaches and
Practical Guidelines. 2nd ed. New York: Addison Wesley Longman, 1997.
Part II: Exercises, Worksheets, and Tools
Step 1: 1.1 Stakeholder Mapping Exercise
Columns: Priority | Person/Group
Centers for Disease Control and Prevention. Introduction to Process Evaluation in Tobacco Use Prevention and Control. Atlanta (GA):
U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease
Prevention and Health Promotion, Office on Smoking and Health, 2008 [accessed 2011 Oct 19].
Now, go back over your list of potential users of the evaluation results and consider their
level of priority on the list. For example, providing the information that funders or
decision makers need may take a higher priority over some clients even though the
clients are still very important. You might rate stakeholders in terms of "high," "medium,"
or "low" or you might rank order them in numerical order (i.e. from "1" to "n"). The choice
is yours.
Example mapping grid (Characteristic X across the top, Characteristic Y down the side):
Columns: Characteristic X High | Characteristic X Low
Characteristic Y High: Stakeholders A, B, C, E, G, I, K, M | Stakeholders D, F
The stakeholders that fall into the high box for both characteristics X and Y would
most likely be strong candidates to be invited to be a part of the 8 to 10 person ESW. As with any stakeholder group membership, potential participation would still involve additional conversations by program staff.
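If the workgroup records these ratings electronically, the sorting into grid cells can be scripted. Below is a minimal Python sketch; the stakeholder names and the two characteristics are hypothetical placeholders, not part of the worksheet itself.

    # Minimal sketch of the stakeholder mapping exercise.
    # Names and characteristics are hypothetical placeholders.
    from collections import defaultdict

    # Rate each stakeholder "high" or "low" on two characteristics,
    # for example interest in the evaluation (X) and ability to act on results (Y).
    ratings = {
        "Stakeholder A": ("high", "high"),
        "Stakeholder B": ("high", "high"),
        "Stakeholder D": ("low", "high"),
        "Stakeholder F": ("low", "low"),
    }

    # Bucket stakeholders into the four cells of the 2x2 grid.
    grid = defaultdict(list)
    for name, (x, y) in ratings.items():
        grid[(x, y)].append(name)

    # Stakeholders high on both characteristics are the strongest ESW candidates.
    print("ESW candidates:", ", ".join(sorted(grid[("high", "high")])))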
(Blank mapping grid: Characteristic X [High/Low] across the top, Characteristic Y [High/Low] down the side.)
In order to determine the evaluation purpose, the evaluation team should work with
those who are requesting the evaluation to identify the possible multiple purposes for the
evaluation from multiple sources. The first task is to consider what groups are interested
in an evaluation of the program. This might include the program staff, health department
staff, funders, state level decision makers, and other stakeholders. The second task
would be to align the specific group with what they are requesting to be evaluated. The
next task would be to ascertain what the potential uses of the evaluation results will be by
each group interested in the evaluation. Fourth, the team should develop a purpose statement relevant to each group and evaluation requested.
List the appropriate role for each stakeholder relevant to the evaluation and how and when
you might engage him or her in the evaluation. It is important to consider a stakeholder's
expertise, level of interest, and availability when developing the communication plan. If
there are specific deadlines for information such as a community vote or funding
opportunity, it is important to note those as well. Additional columns could be added for
comments.
Timing of Communication
A note on roles: Stakeholders need not be a member of the ESW in order to have a role
related to the evaluation. Given a stakeholder's specific expertise, interest, availability, or
intended use of the evaluation results, he or she may be involved in part or all of the
evaluation without being a specific member of the ESW. Roles might include but are not
limited to:
From your list of primary intended users (those who have a stake in the evaluation
results), identify what information each stakeholder will use.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
The stage of development conceptual model is complementary to the logic model. Figure
3.1 shows how general program evaluation questions are distinguished by both logic
model categories and the developmental stage of the program. This places evaluation
within the appropriate stage of program development (planning, implementation, and
maintenance). The model offers suggested starting points for asking evaluation questions
within the logic model while respecting the developmental stage of the program. This will
prepare the program and the ESW to focus the evaluation appropriately based on program
maturity and priorities.
To determine what stage of development your program is currently in, staff and
stakeholders should have a conversation about program maturation with the logic
model in hand. It is important to note that when a program is reinventing itself or
revitalization is occurring, the program may resemble the left-hand side of the logic
model and thus the program planning stage even when it has been in existence for
numerous years.
Describe your program's maturation:
Columns: Activities/Tasks That Have Been Completed | Activities/Tasks Working On | Activities/Tasks Not Yet Begun | Progress Achieved on Outputs or Outcomes (Indicate if Short, Intermediate, or Long Term)
Based on your description and consideration of the logic model, your program is in what
stage of development?
Example: Questions Based on Developmental Stage When Passing a Policy
Planning: Is there public support for the policy? What resources will be needed for implementation of the policy?
Implementation: Is there compliance with the policy? Is there continued or increased public support for the policy? Are there major exemptions or loopholes to the policy?
Maintenance: What is the health impact of the policy?
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
What evaluation questions are outside of the current stage of development of your
program? What implications does this have for your current evaluation? What
implications does this have for planning for future evaluations?
Based on your description and consideration of the logic model, your program is in what
stage of development?
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
In this exercise, you will need to consider all the information from previous exercises
in Step 1 through Step 2, the logic model, and your stakeholders' vested interest in
the evaluation.
From the Stakeholder Mapping exercise, list the stakeholders included in the high-high
or priority category for information needs:
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
Indicate your program's current stage of development:
Based on your description and consideration of the logic model, your program is in what stage of
development?
Given the overall purpose statement and the stage of development of the program, what
questions from the high-high stakeholder group are viable for the current evaluation
effort?
Next, the team should consider issues of feasibility related to those evaluation questions
that are viable options given the current program stage of development and the
evaluation purpose.
No chart, grid, or exercise can fully answer the question of how best to focus the
evaluation. However, the above information should facilitate informed discussions and
can help avoid evaluation activities that are misaligned with the program stage of
development, underfunded, or not of the highest priority for information needs. Additional
considerations that might help you prioritize your evaluation questions include:
• The questions most important to you and your key stakeholders (the "must answer" questions)
• Questions that provide results that you can use (e.g., for improvement)
• Questions you can answer fully with available or easy to gather data
• Questions within your resources to answer
Example row:
Evaluation Question: What process leads to implementation of the policy?
Indicator/Performance Measure: N/A
Method: Case study
Data Source: Site visits and reports
Frequency: Pre and post funding period
Responsibility: Contractor to be determined
Choose the grid that is most appropriate for your program and complete it given
your chosen evaluation questions from Step 3.
Additional possible evaluation plan data grids might look like:
Columns: Evaluation Question | Indicator/Performance Measure | Potential Data Source (Existing/New) | Comments
The team should consider roles and responsibilities, what services might be in kind and what
activities will cost additional money. Will you need to pay for additional questions on existing
surveys or can you use items that already exist? Are there existing data sources or will you
need to create new ones? Do not forget items such as copying costs for surveys or Web
services or technology needed in the field, such as recorders or mobile data collection devices.
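As line items accumulate, a small script can keep the paid and in-kind portions of the budget visible. The sketch below is a hypothetical Python illustration; the items and dollar amounts are placeholders, not recommended costs.

    # Minimal budget tally; line items and amounts are hypothetical placeholders.
    line_items = [
        # (description, cost in dollars, provided in kind?)
        ("Added questions on an existing survey", 5000, False),
        ("Copying costs for surveys", 800, False),
        ("Recorders and mobile data collection devices", 1200, False),
        ("Epidemiologist analysis time", 0, True),
    ]

    # Sum only the items that require new money.
    paid_total = sum(cost for _, cost, in_kind in line_items if not in_kind)
    in_kind_services = [desc for desc, _, in_kind in line_items if in_kind]

    print(f"Paid evaluation budget: ${paid_total:,}")
    print("In-kind services:", "; ".join(in_kind_services))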
Columns: Evaluation Question | Indicator/Performance Measure | Frequency | Responsibility | Cost Considerations
Don't be surprised if during this exercise you have to revisit Step 3 or earlier portions
of Step 4. Often the budget available doesn't match the evaluation desired. Either the
evaluation scope will need to be reduced or additional resources obtained. It is better
to thoroughly consider this now before implementation begins than have to change
course mid-implementation cycle.
Moreover, it is critical that your plans allow time for interpretation and review from
stakeholders (including your critics) to increase transparency and validity of your
process and conclusions. The emphasis here is on justifying conclusions not just
analyzing data. This is a step that deserves due diligence in the planning process. The
propriety standard plays a role in guiding the evaluator's decisions on how to analyze
and interpret data to assure that all stakeholder values are respected in the process of
drawing conclusions.* This may include one or more stakeholder interpretation meetings
to review interim data and further refine conclusions. A note of caution: in a stakeholder-driven process, there is often pressure to reach beyond the evidence when drawing conclusions. It is the
responsibility of the evaluator and the evaluation workgroup to ensure that conclusions
are drawn directly from the evidence.
*Sanders JR, The Joint Committee on Standards for Educational Evaluation. The Program Evaluation Standards. 2nd ed. Thousand Oaks (CA): Sage Publications, 1994.
Example reporting timeline:
Individual site evaluation reports and feedback | Within 1 month of site visit
Check-in with ESW and/or participants | Within 3 months of site visit or as appropriate during analysis phase
It is important to consider the time it takes to solicit and incorporate stakeholder
feedback in your evaluation project timeline. At this time, you should revisit your budget
and timeline created earlier to ensure adequate time and funding for the stakeholder
inclusion process.
In order to make sure your stakeholder interpretation meeting is a success, plan for
steps to help things run smoothly. Time for these activities needs to be included in
your evaluation timeline.
• Send the initial invitation at least 2 months in advance so that stakeholders can plan
for the meeting. Remind stakeholders of the overall evaluation purpose and
questions.
• Send the preliminary report or PowerPoint presentation within 2 weeks of the initial
invitation to allow stakeholders time to review. It is important to remind
stakeholders that results are a draft and should not be shared outside of the review
group.
• Send reminders about the meeting 1 or 2 weeks prior to the date. Identify any pre-existing documentation that may be useful for understanding context.
• Plan for appropriate technology (and backup) needed, such as recorders, a laptop and screen, and flipcharts.
• If feasible, use a professional meeting facilitator.
A checklist to facilitate the development of a formal stakeholder interpretation meeting
can be found at http://www.wmich.edu/evalctr/checklists/checklist topics/.
Also visit The Evaluation Center at Western Michigan University online for a free
evaluation report checklist:
http://www.wmich.edu/evalctr/checklists/checklist topics/.
• With which target audiences or groups of stakeholders will you share findings?
• What formats and channels will you use to share findings?
• When and how often do you plan to share findings?
• Who is responsible for carrying out dissemination strategies?
You can use the following matrix to help you plan your communication process.
How do you want to communicate?
Format(s)
Channel(s)
** This tool was adapted from DASH's Communication Matrix in Using Evaluation to Improve Programs:
Strategic Planning in the Strategic planning kit for school health programs. Available at:
http://www.cdc.gov/ healthyyouth/evaluation/sp toolkit.htm [accessed 2011 Oct 19].
This tool can help you track communications with your various audiences, including the communication format(s) (the layout of the communication, such as newsletters), the communication channel(s) (the route of communication, such as oral presentations), audience feedback on the communication message, and next steps you need to take in response.
Columns: Communication | Communication Date | Communication Format(s) | Communication Channel(s) | Audience Feedback and Next Steps
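Programs that log many communications may find it easier to keep this chart in electronic form. The following is a minimal Python sketch with hypothetical entries; the fields simply mirror the chart columns above.

    # Minimal sketch of a communication tracking log; entries are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class CommunicationEntry:
        audience: str
        date: str             # date of the communication
        formats: str          # e.g., "executive summary"
        channels: str         # e.g., "mail"
        feedback: str = ""    # audience feedback, recorded after the fact
        next_steps: str = ""

    log = [
        CommunicationEntry(
            audience="Funding decision makers",
            date="2011-10-01",
            formats="executive summary",
            channels="mail",
        ),
    ]

    # Flag entries still awaiting feedback or next steps.
    pending = [e for e in log if not e.feedback or not e.next_steps]
    print(f"{len(pending)} communication(s) awaiting follow-up")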
A second example of a tracking chart might look like this:
Columns: Target Audience | Goals | Tools | Timetable | Format(s) | Channel(s)
Program Implementation Team | Inform them in real time about what's working well and what needs to be quickly adjusted during implementation | Monthly meetings and briefing documents | Monthly
Funding Decision Makers | Continue and/or enhance program funding | Executive summary; targeted program briefs | Within 90 days of conclusion of funding
Columns: Communication | Communication Date | Communication Format(s) | Communication Channel(s) | Audience Feedback and Next Steps
OUTLINE: 7.1 BASIC ELEMENTS OF AN EVALUATION PLAN
Often, programs have multiple funding sources and, thus, may have multiple evaluation
plans. Ideally, your program will develop one overarching evaluation plan that
consolidates all activities and provides an integrated view of program assessment. Then,
as additional funding sources are sought and activities added, those evaluation activities
can be enfolded into the larger logic model and evaluation scheme.
Your plan should be adapted to your specific evaluation needs and context. Additionally,
it is important to remember that your evaluation plan is a living, dynamic document
designed to adapt to the complexities of the environment within which your programs
are implemented. The plan is a guide to facilitate intentional decisions. If changes are
made, they are documented and done intentionally with a fully informed ESW.
From the list of high-priority stakeholders identified above, think about their
information needs from the evaluation or about the program.
1.
2.
3.
4.
5.
6.
7.
8.
9.
Discuss the intended uses of the evaluation by primary intended users and program staff:
Intended Uses
1.
2.
3.
4.
5.
6.
7.
8.
9.
Stakeholder Goals/Objectives
4. Briefly describe your program (in your plan you will include your logic model(s) if
you have one):
Description of Program:
5. Think back to your program description you just wrote. Where are you in
your program's growth (beginning, middle, mature)?
Stage of Growth:
6. Based on where you are in your program's growth, what does that tell you
about what kinds of questions you can ask?
Beginning
Middle
Mature
8. Now, take each question and think about ways you might answer that question.
Will your method be qualitative, quantitative or both? Do you already have a data
source? Will you have some success stories? How much will it cost? What
resources do you have? Who needs to be involved to make the evaluation a
success? How will you ensure use of lessons learned? How and when will you
disseminate information? Below are two samples of tables you can use to
organize this information.
Columns: Evaluation Question | Indicator/Performance Measure | Method | Data Source | Frequency | Responsibility
9. Now think about the different ways you might communicate information from the
evaluation to stakeholders. Communication may include information to
stakeholders not on your ESW. You may want to provide preliminary results,
success stories, etc. throughout the evaluation. Additionally, your ESW may assist
in your communication efforts. What deadlines must be met, and what opportunities are lost if deadlines are not met? How will this impact the timetable you created in #8?
Format(s) Channel(s)
LOGIC MODEL EXAMPLES
OSH Logic Models Example
Resources*
*Resources are listed for the convenience of the user and do not constitute
endorsement by the U.S. Government.
WEB RESOURCES
American Evaluation Association
• http://www.eval.org/
• The American Evaluation Association (AEA) is an international professional
association of evaluators devoted to the application and exploration of program
evaluation, personnel evaluation, technology, and many other forms of evaluation.
Evaluation involves assessing the strengths and weaknesses of programs, policies,
personnel, products, and organizations to improve their effectiveness. AEA has
approximately 5,500 members representing all 50 states in the United States as
well as over 60 foreign countries. [accessed 2011 Jul 19]
Centers for Disease Control and Prevention (CDC) Division of Adolescent and School Health's Program Evaluation Resources and Tools
• http://www.cdc.gov/healthyyouth/evaluation/resources.htm
CDC Division of Sexually Transmitted Disease Prevention's Practical Use of Program Evaluation among Sexually Transmitted Disease (STD) Programs
• http://www.cdc.gov/std/program/pupestd/Introduction-SPREADS.pdf
• Effective program evaluation is a systematic way to improve and account for public
health actions that involve procedures that are useful, feasible, ethical, and
accurate. The framework guides public health professionals in their use of program
evaluation. It is a practical, nonprescriptive tool, designed to summarize and
organize essential elements of program evaluation. The framework comprises steps
in program evaluation practice and standards for effective program evaluation.
Adhering to the
steps and standards of this framework will allow an understanding of each
program's context and will improve how program evaluations are conceived and
conducted.
CDC Introduction to Program Evaluation for Public Health Programs: A Self Study Guide
• http://www.cdc.gov/getsmart/program-planner/downloads/Manual 04062006.pdf
Disseminating Program Achievements and Evaluation Findings to Garner Support
• http://www.cdc.gov/healthyyouth/evaluation/pdf/brief9.pdf
Impact and Value: Telling Your Program's Story
• http://extension.psu.edu/evaluation/
Western Michigan University: The Evaluation Center
• http://www.wmich.edu/evalctr/checklists/
• This site provides refereed checklists for designing, budgeting, contracting,
staffing, managing, and assessing evaluations of programs, personnel, students,
and
other evaluations; collecting, analyzing, and reporting evaluation information; and
determining merit, worth, and significance. Each checklist is a distillation of
valuable lessons learned from practice.
Becker HS. Writing for Social Scientists: How to Start and Finish Your Thesis, Book, or Article. 2nd ed. Chicago, IL: University of Chicago Press, 2007.
Heath C, Heath D. Made to Stick: Why Some Ideas Survive and Others Die. New York, NY: Random House, 2007.
Heath C, Heath D. Switch: How to Change Things When Change is Hard. New York,
NY: Random House, 2010.
Lavinghouze R, Price AW, Smith KA. The Program Success Story: A Valuable Tool for Program Evaluation. Health Promotion Practice 2007;8(4):323-331.
QUALITATIVE METHODS
Miles MB, Huberman AM. Qualitative Data Analysis. 2nd ed. Thousand Oaks, CA: Sage Publications, 1994.
Patton MQ. Qualitative Research and Evaluation Methods. 3rd ed. Thousand Oaks, CA: Sage Publications, 2001.
Yin RK. Case Study Research: Design and Methods (Applied Social Research Methods). 4th ed. Thousand Oaks, CA: Sage Publications, 2008.
Yin RK. Qualitative Research from Start to Finish. New York, NY: The Guilford Press, 2010.
QUANTITATIVE METHODS
Rothman KJ, Greenland S, Lash TL. Modern Epidemiology. 3rd ed. Philadelphia, PA: Lippincott Williams & Wilkins, 2008.
Tufte ER. The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press, 2001.
EVALUATION USE
Butterfoss FD. Coalitions and Partnerships in Community Health. San Francisco, CA: Jossey-Bass, 2007.
Mattessich PW. The Manager's Guide to Program Evaluation: Planning, Contracting, and Managing for Useful Results. St. Paul, MN: Amherst H. Wilder Foundation, 2003.
• http://www.cdc.gov/tobacco/stateandcommunity/best practices/index.htm
The evaluation approaches described in this toolkit and the findings of studies
conducted using these approaches may also be useful to stakeholders who are
interested in the effects of smoke-free laws, including business organizations (e.g., chambers of commerce, restaurant associations) and labor unions.
Introduction to Process Evaluation in Tobacco Use Prevention and Control
• Published in 2008, this guide will help state and federal program managers
and evaluation staff design and implement valid, reliable process evaluations
for tobacco use prevention and control programs.
• Published in 2001, this "how to" guide for planning and implementing evaluation
activities will help state tobacco control program managers and staff in the
planning, design, implementation, and use of practical and comprehensive
evaluations of tobacco control efforts.
• This 2005 document is intended to help state health departments, health care
organizations, and employers to contract for and monitor telephone-based tobacco
cessation services. It is also designed to help states, health care organizations, and
quitline operators enhance existing quitline services and to inform those who are
interested in learning more about population-based approaches to tobacco
cessation.
Smoking-Attributable Mortality, Morbidity, and Economic Costs (SAMMEC)
• http://apps.nccd.cdc.gov/sammec/
This online application allows you to estimate the health and health-related economic consequences of smoking to adults and infants.
State Tobacco Activities Tracking and Evaluation (STATE) System
• http://www.cdc.gov/tobacco/statesystem
The STATE System is an electronic data warehouse containing up-to-date
and historical state-level data on tobacco use prevention and control.
Surveillance and Evaluation Data Resources for Comprehensive Tobacco Control
Programs
• http://www.cdc.gov/tobacco/tobacco control programs/surveillance evaluation/
surveillance manual/index.htm
Published in 2001, this compilation of data sources for tobacco control programs
is useful for tobacco control programs that are conducting surveillance or
evaluation.
Surveillance and Evaluation Net-Conferences
• http://www.cdc.gov/obesity/downloads/EvaluationConsultationGroup.pdf
• An Evaluation Consultation Group (ECG) is required for all state obesity
programs funded by the Division of Nutrition, Physical Activity and Obesity
(DNPAO) to provide technical, programmatic, and related input to the program
evaluation of the state health department's NPAO work. This guidance provides
a systematic
approach to evaluating an ECG including a series of steps and tools for
conducting the evaluation.
Evaluation of State Nutrition, Physical Activity, and Obesity Plans
• www.cdc.gov/obesity/downloads/EvaluationofStateNPAOPlans.pdf
• This guide clarifies approaches to and methods of evaluation; provides examples
and tools specific to the scope and purpose of state nutrition, physical activity
and obesity programs; and recommends resources for additional reading.
• This resource provides a list of key references and tools for planning and
implementing program and/or project evaluations, focusing specifically on
physical activity programs and evaluations.