Monitoring and Evaluation Course notes
EVALUATION
Evaluation is a systematic approach to attributing changes in specific outcomes to program
activities. It has the following characteristics:
Conducted at important program milestones
(v) Key benefits of Monitoring and Evaluation
a. Provide regular feedback on project performance and show any need for ‘mid-
course’ corrections
b. Identify problems early and propose solutions
c. Monitor access to project services and outcomes by the target population;
d. Evaluate achievement of project objectives, enabling the tracking of progress
towards achievement of the desired goals
e. Incorporate stakeholder views and promote participation, ownership and
accountability
f. Improve project and programme design through feedback provided from baseline, mid-
term, terminal and ex-post evaluations
g. Inform and influence organizations through analysis of the outcomes and impact of
interventions, and the strengths and weaknesses of their implementation, enabling
development of a knowledge base of the types of interventions that are successful (i.e.
what works, what does not and why).
h. Provide the evidence basis for building consensus between stakeholders
Chapter 2 SELECTING INDICATORS, BASELINES AND TARGETS
a) The indicator: "An indicator is defined as a quantitative measurement of an objective to be
achieved, a resource mobilised, an output accomplished, an effect obtained or a context
variable (economic, social or environmental)". It provides the precise information needed to assess
whether intended changes have occurred. Indicators can be either quantitative (numeric) or qualitative
(descriptive observations). Indicators are typically taken directly from the logframe, but should
be checked in the process to ensure they are SMART (specific, measurable, achievable,
relevant and time-bound). The elements in (a) to (f) below are usually recorded together for each
indicator in an M&E plan; a minimal sketch follows this list.
b) The Indicator definition- key terms in the indicator that need further detail for precise and
reliable measurement.
c) The methods/sources- identifies sources of information and data collection methods and tools,
such as the use of secondary data, regular monitoring or periodic evaluation, baseline or
endline surveys, and interviews.
d) The frequency/schedules -how often the data for each indicator will be collected, such as
weekly, monthly, quarterly, annually, etc.
e) The person(s) responsible- lists the people responsible and accountable for the data
collection and analysis, e.g. community volunteers, field staff, project/programme
managers, local partner(s) and external consultants.
f) The information use/audience - identifies the primary use of the information and its
intended audience. Some examples of information use for indicators include:
• Monitoring project/programme implementation for decision-making
• Evaluating impact to justify intervention
• Identifying lessons for organizational learning and knowledge-sharing
• Assessing compliance with donor or legal requirements
• Reporting to senior management, policy-makers or donors for strategic planning
• Accountability to beneficiaries, donors and partners
• Advocacy and resource mobilization.
g) Types of Indicators
h) Characteristics of Good Indicators.
a) Specific – focused and clear
b) Measurable - quantifiable and reflecting change
c) Attainable - reasonable in scope and achievable within set time-frame
d) Relevant - pertinent to the review of performance
e) Time-Bound/Trackable - progress can be charted chronologically
Good indicators should also be CREAM: Clear, Relevant, Economical, Adequate and Monitorable.
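The elements (a) to (f) above are typically recorded together, indicator by indicator, in an M&E plan or indicator tracking table. The following Python sketch is purely illustrative; the class name, field names and example values are hypothetical and not prescribed by these notes.

from dataclasses import dataclass, field

@dataclass
class IndicatorPlanEntry:
    indicator: str                 # (a) the indicator, usually taken from the logframe
    definition: str                # (b) key terms needing precise definition
    methods_sources: str           # (c) data collection methods and sources
    frequency: str                 # (d) how often data are collected
    responsible: str               # (e) who collects and analyses the data
    information_use: list[str] = field(default_factory=list)  # (f) use and audience

# Hypothetical example entry
entry = IndicatorPlanEntry(
    indicator="% of households with access to safe drinking water",
    definition="'Safe water': improved source within 500 m of the dwelling (assumed definition)",
    methods_sources="Household survey; water point records (secondary data)",
    frequency="Baseline, midterm and endline",
    responsible="Field officers collect; M&E manager analyses",
    information_use=["Monitoring implementation for decision-making", "Reporting to donors"],
)
print(entry.indicator, "-", entry.frequency)

Each entry can then be checked against the SMART and CREAM characteristics above before data collection begins.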
Chapter 3: FRAMEWORKS FOR EVALUATION - THE LOGICAL FRAMEWORK
APPROACH (LFA)
Four types of frameworks dominate the M&E field:
a) Conceptual frameworks are also known as theoretical or causal frameworks.
b) Results-based frameworks are also known as strategic frameworks and serve as a
management tool with an emphasis on results. The purpose of results frameworks is to
increase focus, select strategies, and allocate resources accordingly.
Impact: The higher-order objective to which a development intervention is intended to contribute.
Outcome: The likely or achieved short-term and medium-term effects of an intervention's outputs.
Output: The products, capital goods and services which result from a development intervention; may also include changes resulting from the intervention which are relevant to the achievement of outcomes.
Activity: Actions taken or work performed through which inputs, such as funds, technical assistance and other types of resources, are mobilized to produce specific outputs.
Inputs: The financial, human and material resources used for the development intervention.
c) Logical frameworks are also known as LogFrames and are commonly used to help set
clear program objectives and define indicators of success. They also outline the critical
assumptions on which a project is based, similar to the results framework.
d) Logic models, also known as M&E frameworks, are commonly used to present a clear
plan for the use of resources to meet the desired goals and objectives. They are a useful tool
for presenting programmatic and evaluation components.
The choice of a particular type of framework—whether a conceptual framework, results
framework, logical framework or logic model—depends on the program’s specific needs, the M&E
team’s preferences and donor requirements.
In particular, the LFA is a systematic planning procedure for complete project cycle management and a
participatory planning, monitoring and evaluation tool.
The logframe matrix summarizes the project in four columns:
Narrative summary: a snapshot of the different levels of the project objectives, known as the "hierarchy of objectives".
Objectively verifiable indicators (OVI): how will we know we've been successful?
Means of verification (MOV): how will we check our reported results?
Assumptions/Risks: what assumptions underlie the structure of our project, and what is the risk they will not prevail?
The rows of the matrix are the levels of the hierarchy of objectives:
Goal (Impact): longer-term effects; the general or overall objective.
Purpose: why are we doing this? The direct and immediate effects (objectives/outcomes/results).
Outputs: what are the deliverables? The goods and services produced (operational objectives).
Activities: what tasks will we undertake to deliver the outputs? At this level the matrix records the Inputs (by what means do we carry out the activities?) and the Cost (what does it cost?).
Pre-conditions: what needs to be fulfilled before activities can start?
Chapter 4: MONITORING CRITERIA
a) Project monitoring & control cycle.
To achieve effective control over project implementation, it is necessary to assess progress at regular
intervals in terms of the physical completion of scheduled activities, the actual cost incurred in
performing those activities and the achievement of desired performance levels, comparing the
status with the plans to find deviations. This assessment process is known as 'monitoring'.
(Figure: the project monitoring and control cycle, in which actual status is compared with the plan and a decision is taken on whether variances exist.)
b) Types of monitoring
A project/programme usually monitors a variety of things according to its specific informational needs.
These monitoring types often occur simultaneously as part of an overall project/programme
monitoring system.
Table 1: Common types of monitoring
Results monitoring: Tracks effects and impacts to determine if the project/programme is on
target towards its intended results (inputs, activity, outputs, outcomes, impact,
assumptions/risks monitoring) and whether there may be any unintended impact (positive or
negative).
Process (activity) monitoring: Tracks the use of inputs and resources, the progress of activities and
how activities are delivered (the efficiency in time and resources), as well as the delivery of
outputs.
Compliance monitoring: Ensures compliance with, say, donor regulations and expected results,
grant and contract requirements, local governmental regulations and laws, and ethical
standards.
Context (situation) monitoring: Tracks the setting in which the project/programme operates,
especially as it affects identified risks and assumptions, and any unexpected considerations that
may arise, including the larger political, institutional, funding, and policy context that
affect the project/programme.
Beneficiary monitoring: Tracks beneficiary perceptions of a project/programme. It includes
beneficiary satisfaction or complaints with the project/programme, including their participation,
treatment, access to resources and their overall experience of change.
Financial monitoring: Accounts for costs by input and activity within predefined categories of
expenditure, to ensure implementation is according to the budget and time frame.
Organizational monitoring: Tracks the sustainability, institutional development and capacity
building in the project/programme and with its partners.
EVALUATION CRITERIA FOR PROJECTS
a) Five Part Evaluation Criteria
Relevance - Was/is the project a good idea given the situation it seeks to improve? Was the
logic of the project correct? Why or Why Not? -The validity of the Overall Goal and
Project Purpose at the evaluation stage.
Effectiveness - Have the planned results been achieved? Why or Why Not? -The degree to
which the Project Purpose has been achieved by the project Outputs.
Efficiency - Have resources been used in the best possible way? Why or Why Not? -The
productivity in project implementation. The degree to which Inputs have been
converted into Outputs.
Impact - To what extent has the project contributed towards its longer term goals? Why or
Why Not? Have there been any unanticipated positive or negative consequences of the
project? Why did they arise? -Positive and negative changes produced, directly or
indirectly, as a result of the Implementation of the project.
Sustainability - Can the outcomes be sustained after project funding ends, so that impacts
continue? Why or Why Not? -The durability of the benefits and development
effects produced by the project after its completion.
Chapter 5. TYPES OF EVALUATION
Three ways of classifying:
When it is done - Ex-ante evaluation; Formative evaluation; Summative – end of
project, and Ex-Post evaluation.
Who is doing it - External evaluation; Internal evaluation or self-assessment
What methodology or technicality is used- Real-time evaluations (RTEs); Meta-
evaluations; Thematic evaluations; Cluster/sector evaluations; Impact evaluations
The details are as follows: -
a) Ex-ante evaluation: Conducted before the implementation of a project as part of the
planning. Needs assessment determines who needs the program, how great the need is, and
what might work to meet the need. Implementation (feasibility) evaluation monitors the
fidelity of the program or technology delivery, and whether or not the program is
realistically feasible within the programmatic constraints.
b) Formative evaluation: Conducted during the implementation of the project. Used to
determine the efficiency and effectiveness of the implementation process, to improve
performance and assess compliance. Provides information to improve processes and learn
lessons. Process evaluation investigates the process of delivering the program or
technology, including alternative delivery procedures. Outcome evaluations
investigate whether the program or technology caused demonstrable effects on specifically
defined target outcomes. Cost-effectiveness and cost-benefit analysis address questions of
efficiency by standardizing outcomes in terms of their dollar costs and values.
c) Midterm evaluations are formative in purpose and occur midway through
implementation.
d) Summative evaluation: Conducted at the end of the project to assess the state of project
implementation and its achievements, and to collate lessons on content and the
implementation process. Summative evaluations occur at the end of project/programme
implementation to assess effectiveness and impact.
e) Ex-post evaluation: Conducted after the project is completed. Used to assess
sustainability of project effects, impacts. Identifies factors of success to inform other
projects. Conducted sometime after implementation to assess long-term impact and
sustainability.
f) External evaluation: Initiated and controlled by the donor as part of contractual
agreement. Conducted by independent people – who are not involved in implementation.
Often guided by project staff
g) Internal or self-assessment: Internally guided reflective processes. Initiated and
controlled by the group for its own learning and improvement. Sometimes done by
consultants who are outsiders to the project. Need to clarify ownership of information before
the review starts
h) Real-time evaluations (RTEs): are undertaken during project/programme
implementation to provide immediate feedback for modifications to improve on-going
implementation.
i) Meta-evaluations: are used to assess the evaluation process itself. Some key uses of meta-
evaluations include: taking an inventory of evaluations to inform the selection of future
evaluations; combining evaluation results; checking compliance with evaluation policy and good
practices; and assessing how well evaluations are disseminated and utilized for organizational
learning and change.
j) Thematic evaluations: focus on one theme, such as gender or environment, typically across
a number of projects, programmes or the whole organization.
k) Cluster/sector evaluations: focus on a set of related activities, projects or programmes,
typically across sites and implemented by multiple organizations
l) Impact evaluations: are broader and assess the overall or net effects, intended or
unintended, of the program or technology as a whole. They focus on the effect of a
project/programme rather than on its management and delivery. Therefore, they typically
occur after project/programme completion, during a final evaluation or an
ex-post evaluation. However, impact may be measured during implementation in longer
projects/programmes and when feasible.
Participatory/Collaborative Evaluation.- “What are the information needs of those closest
to the program?”- Engaging stakeholders in the evaluation process, so they may better
understand evaluation and the program being evaluated and ultimately use the evaluation
findings for decision-making purposes.
Empowerment Evaluation. "What are the information needs to foster improvement and self-
determination?" The use of evaluation concepts, techniques and findings to foster
improvement and self-determination; a catalyst for learning in the workplace; a social
activity in which evaluation issues are constructed by, and acted on by, organization
members.
Organizational Learning. "What are the information and learning needs of individuals,
teams, and the organization in general?" Evaluation is ongoing and integrated into all work practices.
Theory-Driven Evaluation.- “How is the program supposed to work? What are the
assumptions underlying the program’s development and implementation?”- Focuses on
theoretical rather than methodological issues to use the “program’s rationale or theory as the
basis of an evaluation to understand the program’s development and impact” using a plausible
model of how the program is supposed to work.
Success Case Method. "What is really happening?"- focuses on the practicalities of
defining successful outcomes and success cases, and uses some of the processes from theory-
driven evaluation to determine the linkages, which may take the form of a logic model, an
impact model or a results map. Evaluators using this approach gather stories within the
organization to determine what is happening and what is being achieved.
Chapter 6: THE EVALUATION PROCESS
Evaluation operates within multiple domains and serves a variety of functions at the same time.
Moreover it is subject to budget, time and data constraints that may force the evaluator to sacrifice
many of the basic principles of impact evaluation design. Before entering into the details of
evaluation methods it is important for the reader to have a clear picture of the way an evaluation
procedure works.
Major sources of data: secondary data; primary data (sample surveys, project output data);
qualitative studies (PRA, mapping, KIIs, FGDs, observation, checklists); external
assessments; participatory assessments.
Planning for data collection: prepare data collection guidelines, pre-test data
collection tools, train data collectors, address ethical issues.
a) Assess the existing readiness and capacity for monitoring and evaluation
b) Review current capacity within the organization (or outsourced externally) and its partners
which will be responsible for project implementation, covering: technical skills,
managerial skills, existence and quality of data systems, available technology and
existing budgetary provision.
c) Establish the purpose and scope
Why is M&E needed, and how comprehensive should the system be?
What should be the scope and rigour, and should the M&E process be participatory?
d) Identify and agree with main stakeholders the outcomes and development
objective(s).
Set a development goal and the project purpose or expected outcomes, outputs,
activities and inputs. Indicators, baselines and targets are similarly derived
e) Select key indicators, i.e. the qualitative or quantitative variables that measure project
performance and achievements for all levels of project logic with respect to inputs,
activities, outputs, outcomes and impact, as well as the wider environment; this
requires pragmatic judgment in the careful selection of indicators.
f) Develop an evaluation framework - set out the methods, approaches and
evaluation designs (experimental, quasi-experimental and non-experimental) to be
used to address the question of whether change observed through monitoring indicators
can be attributed to the project interventions.
g) Set baselines and plan for results - The baseline is the first measurement of an
indicator, which sets the pre-project condition against which change can be tracked and
evaluated.
h) Select data collection methods as applicable.
i) Setting targets and developing a results framework - A target is a specification of the
quantity, quality, timing and location to be realized for a key indicator by a given
date. Starting from the baseline level for an indicator, the desired improvement is defined,
taking account of planned resource provision and activities, to arrive at a performance
target for that indicator (see the sketch after this list).
j) Plan monitoring, data analysis, communication, and reporting: Monitoring and
Evaluation Plan
k) Implementation monitoring, tracking the inputs, activities and outputs in annual or
multi-year work plans, and results monitoring, tracking achievement of outcomes and
impact, are both needed. The demands for information at each level of management
need to be established, responsibilities allocated, and plans made for:
i. what data are to be collected and when;
ii. how data are collected and analysed;
iii. who collects and analyses data;
iv. who reports information;
v. when?
l) Facilitating the necessary conditions and capacities to sustain the M&E system -
organizational structure for M&E, partners' responsibilities and information
requirements, staffing levels and types, responsibilities and internal linkages,
incentives and training needs, relationships with partners and stakeholders,
horizontal and vertical lines of communication and authority, physical resource needs
and budget.
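As a simple illustration of steps (g), (i) and (k) above, the following Python sketch derives annual performance targets from a baseline and a planned improvement, then flags a variance when a monitored actual falls short of its target. The numbers, and the even spread of the improvement across years, are assumptions made for illustration only.

def annual_targets(baseline, planned_increase, years):
    """Spread the planned improvement evenly over the project years (assumption)."""
    step = planned_increase / years
    return [round(baseline + step * (year + 1), 1) for year in range(years)]

baseline = 40.0                      # e.g. 40% of children immunised at baseline
targets = annual_targets(baseline, planned_increase=30.0, years=3)  # aim for 70% by year 3
actuals = [48.0, 57.5, None]         # monitored values; year 3 not yet measured

for year, (target, actual) in enumerate(zip(targets, actuals), start=1):
    if actual is None:
        print(f"Year {year}: target {target} (no data yet)")
    else:
        status = "on track" if actual >= target else "variance - investigate"
        print(f"Year {year}: target {target}, actual {actual} -> {status}")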
Chapter 7: EVALUATION DESIGN
The following are brief descriptions of the most commonly used evaluation (and research) designs.
One-Shot Design: The evaluator gathers data following an intervention or program. For example, a survey of participants might be administered after they complete a workshop.
Retrospective Pre-test: As with the one-shot design, the evaluator collects data at one time but asks for recall of behaviour or conditions prior to, as well as after, the intervention or program.
One-Group Pre-test-Post-test Design: The evaluator gathers data prior to and following the intervention or program being evaluated.
Time Series Design: The evaluator gathers data prior to, during, and after the implementation of an intervention or program.
Pre-test-Post-test Control-Group Design: The evaluator gathers data on two separate groups prior to and following an intervention or program. One group, typically called the experimental or treatment group, receives the intervention. The other group, called the control group, does not receive the intervention. (An illustrative analysis sketch follows this list.)
Post-test-Only Control-Group Design: The evaluator collects data from two separate groups following an intervention or program. One group, typically called the experimental or treatment group, receives the intervention or program, while the other group, typically called the control group, does not. Data are collected from both of these groups only after the intervention.
Case Study Design: When evaluations are conducted for the purpose of understanding the program's context, participants' perspectives, the inner dynamics of situations, and questions related to participants' experiences, and where generalization is not a goal, a case study design, with an emphasis on the collection of qualitative data, might be most appropriate. Case studies involve in-depth descriptive data collection and analysis of individuals, groups, systems, processes, or organizations. In particular, the case study design is most useful when you want to answer how and why questions and when there is a need to understand the particulars, uniqueness, and diversity of the case.
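The design descriptions above do not prescribe an analysis method, but data from a pre-test-post-test control-group design are often summarized with a simple difference-in-differences comparison: the change in the treatment group minus the change in the control group. The Python sketch below uses hypothetical group means purely for illustration.

def difference_in_differences(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Estimate the programme effect as the change in the treatment group
    minus the change in the control group."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean outcome scores before and after the intervention
effect = difference_in_differences(
    treat_pre=42.0, treat_post=55.0,   # treatment group gained 13 points
    ctrl_pre=41.0, ctrl_post=46.0,     # control group gained 5 points
)
print(f"Estimated programme effect: {effect:.1f} points")  # prints 8.0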
Decisions for Designing an Evaluation Study
Common evaluation methods, approaches and tools (description and remarks):
(Row continued from the previous page; method name not shown) ...or the presence of examined behaviours. Remarks: ...establishing periodic measures of improvement.
Community book: A community-maintained document of a project belonging to a community. It can include written records, pictures, drawings, songs or whatever community members feel is appropriate. Remarks: Where communities have low literacy rates, a memory team is identified whose responsibility it is to relate the written record to the rest of the community in keeping with their oral traditions.
Community interview/meeting: A form of public meeting open to all community members. Remarks: Interaction is between the participants and the interviewer, who presides over the meeting and asks questions following a prepared interview guide.
Direct observation: A record of what observers see and hear at a specified site, using a detailed observation form. Observation may be of physical surroundings, activities or processes. Observation is a good technique for collecting data on behavioural patterns and physical conditions. Remarks: An observation guide is often used to reliably look for consistent criteria, behaviours or patterns.
Document review: A review of documents (secondary data) can provide cost-effective and timely baseline information and a historical perspective of the project/programme. Remarks: It includes written documentation (e.g. project records and reports, administrative databases, training materials, correspondence, legislation and policy documents) as well as videos, electronic data or photos.
Focus group discussion: Focused discussion with a small group (usually eight to 12 people) of participants to record attitudes, perceptions and beliefs relevant to the issues being examined. Remarks: A moderator introduces the topic and uses a prepared interview guide to lead the discussion and extract conversation, opinions and reactions.
Interviews: An open-ended (semi-structured) interview is a technique for questioning that allows the interviewer to probe and pursue topics of interest in depth (rather than just "yes/no" questions). A closed-ended (structured) interview systematically follows carefully organized questions (prepared in advance in an interviewer's guide) that only allow a limited range of answers, such as "yes/no" or a rating/number on a scale. Remarks: Replies can easily be numerically coded for statistical analysis.
Key informant interview: An interview with a person having special information about a particular topic. Remarks: These interviews are generally conducted in an open-ended or semi-structured fashion.
Laboratory testing: Precise measurement of a specific objective phenomenon, e.g. infant weight or a water quality test.
Mini-survey: Data collected from interviews with 25 to 50 individuals, usually selected using non-probability sampling techniques. Remarks: Structured questionnaires with a limited number of closed-ended questions are used to generate quantitative data that can be collected and analysed quickly.
Most significant change (MSC): A participatory monitoring technique based on stories about important or significant changes, rather than indicators. Remarks: They give a rich picture of the impact of development work and provide the basis for dialogue over key objectives and the value of development programmes.
Participant observation: A technique first used by anthropologists (those who study humankind); it requires the researcher to spend considerable time (days) with the group being studied and to interact with them as a participant in their community. Remarks: This method gathers insights that might otherwise be overlooked, but is time-consuming.
Participatory rapid (or rural) appraisal (PRA): This uses community engagement techniques to understand community views on a particular issue. Remarks: It is usually done quickly and intensively, over a two- to three-week period. Methods include interviews, focus groups and community mapping. Tools include stakeholder analysis, participatory rural appraisal, beneficiary assessment, and participatory monitoring and evaluation.
Questionnaire: A data collection instrument containing a set of questions organized in a systematic way, as well as a set of instructions for the data collector/interviewer about how to ask the questions. Remarks: Typically used in a survey.
Rapid appraisal (or assessment): A quick, cost-effective technique to gather data systematically for decision-making, using quantitative and qualitative methods, such as site visits, observations and sample surveys. Remarks: This technique shares many of the characteristics of participatory appraisal (such as triangulation and multidisciplinary teams) and recognizes that indigenous knowledge is a critical consideration for decision-making. Methods include: key informant interview, focus group discussion, community group interview, direct observation and mini-survey.
Statistical data review: A review of population censuses, research studies and other sources of statistical data.
Story: An account or recital of an event or a series of events. A success story illustrates impact by detailing an individual's positive experiences in his or her own words. Remarks: A learning story focuses on the lessons learned through an individual's positive and negative experiences (if any) with a project/programme.
Formal survey: Systematic collection of information from a defined population, usually by means of interviews or questionnaires administered to a sample of units in the population (e.g. persons, beneficiaries, adults). An enumerated survey is one in which the survey is administered by someone trained (a data collector/enumerator) to record responses from respondents. A self-administered survey is a written survey completed by the respondent, either in a group setting or in a separate location; respondents must be literate. Remarks: Includes multi-topic or single-topic household/living standards surveys, client satisfaction surveys and core welfare indicators questionnaires. Public expenditure tracking surveys track the flow of public funds and the extent to which resources actually reach the target groups. Sampling-related methods cover the sample frame, sample size and sampling method, e.g. random (simple, systematic or stratified) or non-random (purposive, cluster and quota sampling, etc.).
Visual techniques: Participants develop maps, diagrams, calendars, timelines and other visual displays to examine the study topics. Participants can be prompted to construct visual responses to questions posed by the interviewers, e.g. by constructing a map of their local area. Remarks: This technique is especially effective where verbal methods can be problematic due to low-literate or mixed-language target populations, or in situations where the desired information is not easily expressed in either words or numbers.
Cost-benefit and cost-effectiveness analysis: Assesses whether or not the costs of an activity can be justified by the outcomes and impacts. Remarks: Cost-benefit analysis measures both inputs and outputs in monetary terms; cost-effectiveness analysis measures inputs in monetary terms and outputs in non-monetary terms. (A worked sketch follows this table.)
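As a worked sketch of the cost-benefit and cost-effectiveness entry above, the following Python fragment compares two hypothetical interventions. The figures are assumed for illustration only: a cost-effectiveness ratio divides cost by a non-monetary outcome, while a benefit-cost ratio compares monetised benefits with costs.

interventions = {
    # name: (total cost, non-monetary outcome achieved, monetised benefit) - all hypothetical
    "Intervention A": (120_000, 3_000, 150_000),
    "Intervention B": (200_000, 4_000, 230_000),
}

for name, (cost, outcome, benefit) in interventions.items():
    cost_per_outcome = cost / outcome     # cost-effectiveness: cost per unit of outcome
    benefit_cost_ratio = benefit / cost   # cost-benefit: monetised benefit per unit of cost
    print(f"{name}: cost per unit of outcome = {cost_per_outcome:.1f}, "
          f"benefit-cost ratio = {benefit_cost_ratio:.2f}")

Which ratio is appropriate depends on whether the outcomes can credibly be expressed in monetary terms, as noted in the remarks above.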
Case studies: Provide a rich picture of what is happening, as seen through the eyes of many individuals; allow a thorough exploration of interactions between treatment and contextual factors; and can help explain changes or facilitating factors that might otherwise not emerge from the data. However, they require a sophisticated and well-trained data collection and reporting team, can be costly in terms of the demands on time and resources, and individual cases may be over-interpreted or overgeneralized.
Interviews: Usually yield the richest data, details and new insights; permit face-to-face contact with respondents; provide the opportunity to explore topics in depth; allow the interviewer to experience the affective as well as the cognitive aspects of responses; allow the interviewer to explain or help clarify questions, increasing the likelihood of useful responses; and allow the interviewer to be flexible in administering the interview to particular individuals or in particular circumstances. However, they are expensive and time-consuming and need well-qualified, highly trained interviewers; the interviewee may distort information through recall error, selective perceptions or a desire to please the interviewer; flexibility can result in inconsistencies across interviews; and the volume of information is very large and may be difficult to transcribe and reduce.
b) PARTICIPATORY M&E
Participatory evaluation is a partnership approach to evaluation in which stakeholders actively engage
in developing the evaluation and all phases of its implementation. Participatory evaluations often
use rapid appraisal techniques, a few of which are listed below.
Key Informant Interviews - Interviews with a small number of individuals who are most
knowledgeable about an issue.
Focus Groups - A small group (8-12) is asked to openly discuss ideas, issues and
experiences.
Mini-surveys - A small number of people (25-50) are asked a limited number of
questions.
Neighbourhood Mapping - Pictures show location and types of changes in an area to be
evaluated.
Flow Diagrams - A visual diagram shows proposed and completed changes in systems.
Photographs - Photos capture changes in communities that have occurred over time.
Oral Histories and Stories - Stories capture progress by focusing on one person’s or
organization’s account of change.
The term “data” refers to raw, unprocessed information while “information,” or “strategic
information,” usually refers to processed data or data presented in some sort of context.
Data –primary or secondary- is a term given to raw facts or figures before they have been
processed and analysed.
Information refers to data that has been processed and analysed for reporting and use.
Data analysis is the process of converting collected (raw) data into usable information.
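A minimal Python sketch of this distinction, using hypothetical survey records: the raw data are individual responses, while the information is the processed summary ready for reporting.

from statistics import mean

# Raw data: one record per respondent from a (hypothetical) household survey
raw_records = [
    {"district": "North", "satisfied": True,  "meals_per_day": 2},
    {"district": "North", "satisfied": False, "meals_per_day": 1},
    {"district": "South", "satisfied": True,  "meals_per_day": 3},
    {"district": "South", "satisfied": True,  "meals_per_day": 2},
]

# Information: data processed and analysed into summary indicators
satisfaction_rate = mean(1 if r["satisfied"] else 0 for r in raw_records)
average_meals = mean(r["meals_per_day"] for r in raw_records)

print(f"Beneficiary satisfaction: {satisfaction_rate:.0%}")   # 75%
print(f"Average meals per day: {average_meals:.1f}")          # 2.0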
(i) Quantitative and Qualitative data
Quantitative data measures and explains what is being studied with numbers (e.g. counts,
ratios, percentages, proportions, average scores, etc).
Qualitative data explains what is being studied with words (documented observations,
representative case descriptions, perceptions, opinions of value, etc).
Quantitative methods tend to use structured approaches (e.g. coded responses to surveys)
which provide precise data that can be statistically analysed and replicated (copied) for
comparison.
Qualitative methods use semi-structured techniques (e.g. observations and interviews) to
provide in-depth understanding of attitudes, beliefs, motives and behaviours. They tend to be
more participatory and reflective in practice.
Quantitative data is often considered more objective and less biased than qualitative data but recent
debates have concluded that both quantitative and qualitative methods have subjective (biased) and
objective (unbiased) characteristics.
Therefore, a mixed-methods approach is often recommended that can utilize the advantages of both,
measuring what happened with quantitative data and examining how and why it happened with
qualitative data.
Preparing the TOR; TOR is a written document presenting the purpose and scope of the
evaluation, the methods to be used, the standard against which performance is to be
assessed or analyses are to be conducted, the resources and time allocated, and reporting
requirements. The TOR also defines the expertise and tasks required of a contracted evaluator,
and serves as a job description for the evaluator.
Appointing evaluator(s);
Securing the budget for the evaluation;
Monitoring the evaluation work;
Providing comments on the draft;
Publicizing the evaluation report; and
Providing feedback from the results to concerned parties.
For example:
Formal reports developed by evaluators typically include six major sections:
(1) Background
(2) Evaluation study questions
(3) Evaluation procedures
(4) Data analyses
(5) Findings
(6) Conclusions (and recommendations)
Or, in more detail:
I. Summary sections
A. Abstract
B. Executive summary
II. Background
A. Problems or needs addressed
B. Literature review
C. Stakeholders and their information needs
D. Participants
E. Project’s objectives
F. Activities and components
G. Location and planned longevity of the project
H. Resources used to implement the project
I. Project’s expected measurable outcomes
J. Constraints
III. Evaluation study questions
A. Questions addressed by the study
B. Questions that could not be addressed by the study (when relevant)
IV. Evaluation procedures
A. Sample
1. Selection procedures
2. Representativeness of the sample
3. Use of comparison or control groups, if applicable
B. Data collection
1. Methods
2. Instruments
C. Summary matrix
1. Evaluation questions
2. Variables
3. Data gathering approaches
4. Respondents
5. Data collection schedule
V. Findings
A. Results of the analyses organized by study question
VI. Conclusions
A. Broad-based, summative statements
B. Recommendations, when applicable
Or:
Table of contents
Executive summary
Introduction
Evaluation scope, focus and approach
Project facts
Findings, Lessons Learned
o Findings
o Lessons learned
Conclusions and recommendations
o Conclusions
o Recommendations
Annexes/appendices
Country/Organization/Entity of the evaluation intervention
Names and organizations of evaluators
Name of the organization commissioning the evaluation
Acknowledgements
2. Table of contents
Should always include lists of boxes, figures, tables and annexes with page references.
3. List of acronyms and abbreviations
4. Executive summary
A stand-alone section of two to three pages that should:
Briefly describe the intervention (the project(s), programme(s), policies or other
interventions) that was evaluated.
Explain the purpose and objectives of the evaluation, including the audience for the
evaluation and the intended uses.
Describe key aspects of the evaluation approach and methods.
Summarize principal findings, conclusions and recommendations.
5. Introduction
Should:
Explain why the evaluation was conducted (the purpose), why the intervention is being
evaluated at this point in time, and why it addressed the questions it did.
Identify the primary audience or users of the evaluation, what they wanted to learn from the
evaluation and why, and how they are expected to use the evaluation results.
Identify the intervention (the project(s), programme(s), policies or other interventions) that was
evaluated—see the upcoming section on the intervention.
Acquaint the reader with the structure and contents of the report and how the information
contained in the report will meet the purposes of the evaluation and satisfy the information
needs of the report’s intended users.
6. Description of the intervention/project/process/programme —Provide the basis for report users to
understand the logic and assess the merits of the evaluation methodology and understand the
applicability of the evaluation results. The description needs to provide sufficient detail for the report
user to derive meaning from the evaluation. The description should:
Describe what is being evaluated, who seeks to benefit, and the problem or issue it seeks to
address.
Explain the expected results map or results framework, implementation strategies, and the key
assumptions underlying the strategy.
Link the intervention to national priorities, Development partner priorities, corporate strategic
plan goals, or other project, programme, organizational, or country specific plans and goals.
Identify the phase in the implementation of the intervention and any significant changes (e.g.,
plans, strategies, logical frameworks) that have occurred over time, and explain the
implications of those changes for the evaluation.
Identify and describe the key partners involved in the implementation and their roles.
Describe the scale of the intervention, such as the number of components (e.g., phases of a
project) and the size of the target population for each component.
Indicate the total resources, including human resources and budgets.
Describe the context of the social, political, economic and institutional factors, and the
geographical landscape within which the intervention operates and explain the effects
(challenges and opportunities) those factors present for its implementation and outcomes.
Point out design weaknesses (e.g., intervention logic) or other implementation constraints (e.g.,
resource limitations).
7. Evaluation scope and objectives - The report should provide a clear explanation of the evaluation’s
scope, primary objectives and main questions.
Evaluation scope—The report should define the parameters of the evaluation, for example, the
time period, the segments of the target population included, the geographic area included,
and which components, outputs or outcomes were and were not assessed.
Evaluation objectives—The report should spell out the types of decisions evaluation users will
make, the issues they will need to consider in making those decisions, and what the
evaluation will need to achieve to contribute to those decisions.
Evaluation criteria—The report should define the evaluation criteria or performance
standards used. The report should explain the rationale for selecting the particular criteria used
in the evaluation.
Evaluation questions—Evaluation questions define the information that the evaluation will
generate. The report should detail the main evaluation questions addressed by the
evaluation and explain how the answers to these questions address the information needs of
users.
8. Evaluation approach and methods - The evaluation report should describe in detail the selected
methodological approaches, theoretical models, methods and analysis; the rationale for their selection;
and how, within the constraints of time and money, the approaches and methods employed yielded
data that helped answer the evaluation questions and achieved the evaluation purposes. The description
should help the report users judge the merits of the methods used in the evaluation and the credibility
of the findings, conclusions and recommendations.
key questions addressed by the evaluation. They should address sustainability of the initiative and
comment on the adequacy of the project exit strategy, if applicable.
12. Lessons learned—As appropriate, the report should include discussion of lessons learned from the
evaluation, that is, new knowledge gained from the particular circumstance (intervention, context,
outcomes, even evaluation methods) that is applicable to a similar context. Lessons should be
concise and based on specific evidence presented in the report.
13. Report annexes—Suggested annexes should include the following to provide the report user with
supplemental background and methodological details that enhance the credibility of the report:
ToR for the evaluation
Additional methodology-related documentation, such as the evaluation matrix and data
collection instruments (questionnaires, interview guides, observation protocols, etc.) as
appropriate
List of individuals or groups interviewed or consulted and sites visited
List of supporting documents reviewed
Project or programme results map or results framework
Summary tables of findings, such as tables displaying progress towards outputs, targets, and
goals relative to established indicators
Short biographies of the evaluators and justification of team composition
Code of conduct signed by evaluators