DIPLOMA IN
MONITORING AND
EVALUATION
MODULE 1
Module One of the Diploma in Monitoring and Evaluation
TABLE OF CONTENTS
Introduction
Monitoring and Evaluation as an Integral Component of Project Planning and Implementation
Evaluation Types and Models
Monitoring and Evaluation Methods and Tools
Monitoring and Evaluation Planning, Design and Implementation
Data Analysis and Report Writing
Why Monitoring and Evaluation
Putting Planning, Monitoring and Evaluation Together: Results Based Management
Designing a Monitoring System
Baseline and Damage Control
Designing a Monitoring System: Case Study
Glossary
MONITORING AND EVALUATION
Module 1:
What is evaluation?
There are many definitions of evaluation in the literature and on websites. For the purpose of this guide, we will define evaluation as a structured process of assessing the success of a project in meeting its goals and of reflecting on the lessons learned. An evaluation should be structured so that there is some thought and intent as to what is to be captured, how best to capture it, and what the analysis of the captured data will tell us about our program.
Another term that is widely used is monitoring. Monitoring refers to setting targets and milestones to measure progress and achievement, and to checking whether the inputs are producing the planned outputs. In other words, monitoring checks whether the project is being implemented consistently with its design.
The key difference between monitoring and evaluation is that evaluation is about placing a
value judgment on the information gathered during a project, including the monitoring data.
The assessment of a project’s success (its evaluation) can be different based on whose value
judgment is used. For example, a project manager’s evaluation may be different to that of the
project’s participants, or other stakeholders.
1.1 Introduction
The last few years have witnessed an increased interest in strengthening project M&E by
donors and African civil society organizations. More African nonprofit and civil society
organizations are interested in strengthening their M&E capacity. This document reviews the
nature of program M&E, presents basic concepts, principles, tools and methods of M&E,
reviews the process of planning and implementing effective M&E processes for nonprofit
programs, and suggests ways for using M&E results. Many of the principles presented in this
document are also applicable for “for-profit” organizations.
There are many reasons why development project staff and managers of civil society
organizations should know about M&E. First, knowledge about M&E helps project staff to
improve their ability to effectively monitor and evaluate their projects, and therefore,
strengthen the performance of their projects. We should remember that project staff need not
be evaluation experts in order to monitor their projects; with basic orientation and training,
project staff can implement appropriate techniques to carry out a useful evaluation. Second,
program evaluations, carried out by inexperienced persons, might be time-consuming, costly
and could generate impractical or irrelevant information. Third, if development organizations are to recruit an external evaluation expert, they should be smart consumers who are aware of standards and know what to look for and require in this service.
Project managers and other stakeholders (including donors) need to know the
extent to which their projects are meeting their objectives and leading to their
desired effects.
Monitoring represents an on-going activity to track project progress against planned tasks. It
aims at providing regular oversight of the implementation of an activity in terms of input
delivery, work schedules, targeted outputs, etc. Through such routine data gathering, analysis and reporting, program/project monitoring aims at:
4) Enabling managers and staff to identify and reinforce initial positive project results,
strengths and successes. As well, monitoring alerts managers to actual and potential
project weaknesses, problems and shortcomings before it is too late. This would
provide managers with the opportunity to make timely adjustments and corrective
actions to improve the program/project design, work plan and implementation
strategies.
Monitoring actions must be undertaken throughout the lifetime of the project. Ad hoc
evaluation research might be needed when unexpected problems arise for which planned
monitoring activities cannot generate sufficient information, or when socio-economic or environmental conditions change drastically in the target area.
Effective monitoring needs adequate planning, baseline data, indicators of performance and results, and practical implementation mechanisms that include actions such as field visits, stakeholder meetings, documentation of project activities, regular reporting, etc. Project monitoring is normally carried out by project management, staff and other stakeholders.
1.4 Project Evaluation
3) Mid-term evaluations may serve as a means of validating the results of initial
assessments obtained from project monitoring activities.
5) Assisting managers to carry out a thorough review and re-thinking about their
projects in terms of their goals and objectives, and means to achieve them.
7) Improving the learning process. Evaluations often document and explain the causes as
to why activities succeeded or failed. Such documentation can help in making future
activities more relevant and effective.
Evaluation goals and objectives should be determined by project management and staff. Many organizations do not have the resources to carry out the ideal evaluation. Even so, it is preferable that they recruit an external evaluation consultant to lead the evaluation process. This would increase the objectivity of the evaluation. Project strengths and weaknesses might not be interpreted fairly when data and results are analyzed by project staff members who are responsible for ensuring that the program is successful.
If the organization does not have the technical expertise to carry out the evaluation and cannot afford outside help, or prefers to carry out the evaluation using its own resources, it is recommended that it engage an experienced evaluation expert to advise on developing the evaluation plan, selecting evaluation methods, and analyzing and reporting results.
Monitoring and evaluation are two different management tools that are closely related,
interactive and mutually supportive. Through routine tracking of project progress, monitoring
can provide quantitative and qualitative data useful for designing and implementing project
evaluation exercises. On the other hand, evaluations support project monitoring. Through the
results of periodic evaluations, monitoring tools and strategies can be refined and further
developed.
Some might argue that good monitoring substitutes for project evaluations. This might be true in small-scale or short-term projects, or when the main objective of M&E is to obtain information to improve the implementation of an ongoing project. However, when a final judgment regarding project results, impact, sustainability and future development is needed, an evaluation must be conducted.
Project evaluations are less frequent than monitoring activities, given the cost and time they require.
Item: Who is involved
Monitoring: Supervisors, community (beneficiaries), funders
Evaluation: External evaluators, supervisors, community (beneficiaries), funders
Chapter 2
MONITORING AND EVALUATION AS AN INTEGRAL COMPONENT OF PROJECT PLANNING AND IMPLEMENTATION
Monitoring and evaluation are integral components of the program/ project management
cycle. Used at all stages of the cycle, monitoring and evaluation can help to strengthen
project design, enrich quality of project interventions, improve decision-making, and enhance
learning. Likewise, the strength of project design can improve the quality of monitoring
and evaluation. It is important to remember that poorly designed projects are hard to monitor
or evaluate. The following section summarizes the logical framework approach to project
planning, implementation, and monitoring and evaluation.
The logical framework approach provides a structure for logical thinking in project design,
implementation and monitoring and evaluation. It makes the project logic explicit, provides
the means for a thorough analysis of the needs of project beneficiaries and links project
objectives, strategies, inputs, and activities to the specified needs. Furthermore, it indicates
the means by which project achievement may be measured.
The detailed description of the process of designing a program/project using the logical framework is beyond the scope of this report. However, the following section provides a summary of the milestones and main concepts and definitions:
• Problem analysis represents the first step in project design. It is the process through which stakeholders identify and analyze the problem(s) that the project is trying to overcome. The result of this analysis is usually summarized in a tree diagram that links problems with their causes.
• Next, project goals and objectives are developed and structured in a hierarchy
to match the analysis of problems. They can be represented as a mirror image
of the problem tree diagram. While projects are usually designed to address
long-term sectoral or national goals, objectives are specific to the project
interventions. They should also be clear, realistic in the timeframe for their
implementation and measurable for evaluation. Examples: school dropouts (in
a geographical area or for a target group) will be reduced by 10% (within a
specific timeframe); agricultural products (in a geographical area or for a
target group) will be increased by 15% (within a specific timeframe), etc.
• Outputs are the immediate physical and financial results of project activities. Examples: kilometers of agricultural roads constructed, number of schools renovated, number of farmers who attended a training course, number of textbooks printed, etc.
• Activities and inputs are developed to produce the outputs that will result in
achieving project objectives.
The product of this analytical approach is usually summarized in a matrix called the logical
frame matrix, which summarizes what the project intends to do and how, what kind of effects
are expected, what the project key assumptions are, and how outputs and outcomes will be
monitored and evaluated (see below).
The columns of the logical frame matrix represent the levels of project objectives (hierarchy
of objectives) and the means to achieve them. There are four levels in the logical frame and
each lower level of activity must contribute to the achievement of a higher level. For
example, the implementation of project activities would contribute to the achievement of
project outputs. The achievement of the project outputs would lead to the achievement of
project objectives. This is called the vertical logic. The rows indicate how the achievement of
objectives can be measured and verified. This is called the horizontal logic. Assumptions
(situations needed to promote the implementation of the project) must be systematically
recorded.
[Logical framework matrix: for each level in the hierarchy of objectives, the matrix records how achievement will be measured, who will collect and report the information (used during review and evaluation), and the assumptions underlying each output-to-objective/purpose linkage.]
Project description provides a narrative summary of what the project intends to achieve and
how. It describes the means by which desired ends are to be achieved.
Goal refers to the sectoral or national objectives for which the project is designed to
contribute, e.g. increased incomes, improved nutritional status, reduced crime. It can also be
referred to as describing the expected impact of the project. The goal is thus a statement of
intention that explains the main reason for undertaking the project.
14
Purpose refers to what the project is expected to achieve in terms of development outcome.
Examples might include increased agricultural production, higher immunization coverage,
cleaner water, or improved local management systems and capacity. There should generally
be only one purpose statement.
Component objectives. Where the project/program is relatively large and has a number of
components, it is useful to give each component an objective statement. These statements
should provide a logical link between the outputs of that component and the project purpose.
Poorly stated objectives limit the capacity of M&E to provide useful assessments for
decision-making, accountability and learning purposes.
Outputs refer to the specific results and tangible products (goods and services) produced by
undertaking a series of tasks or activities. Each component should have at least one
contributing output, and often have up to four or five. The delivery of project outputs should
be largely under project management's control.
Activities refer to all the specific tasks undertaken to achieve the required outputs. There are
many tasks and steps to achieve an output. However, the logical frame matrix should not
include too much detail on activities because it becomes too lengthy. If detailed activity
specification is required, this should be presented separately in an activity schedule/Gantt
chart format and not in the matrix itself.
Inputs refer to the resources required to undertake the activities and produce the outputs, e.g.,
personnel, equipment and materials. The specific inputs should not be included in the matrix
format.
Assumptions refer to conditions which could affect the progress or success of the project, but
over which the project manager has no direct control, e.g. price changes, rainfall, political
15
situation, etc. An assumption is a positive statement of a condition that must be met in order
for project objectives to be achieved. A risk is a negative statement of what might prevent
objectives being achieved.
Indicators refer to the information that would help us determine progress towards meeting
project objectives. An indicator should provide, where possible, a clearly defined unit of
measurement and a target detailing the quantity, quality and timing of expected results.
Indicators should be relevant, independent, and capable of being precisely and objectively defined in order to demonstrate that the objectives of the project have been achieved (see below).
Means of verification (MOVs). Means of verification should clearly specify the expected
source of the information we need to collect. We need to consider how the information will be collected (method), who will be responsible, and the frequency with which the information should be provided. In short, MOVs specify the means to ensure that the
indicators can be measured effectively, i.e. specification of the indicators, types of data,
sources of information, and collection techniques.
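To illustrate how these elements fit together, the sketch below represents a simplified logical framework as a nested data structure. It is only an illustration: the project, objectives, indicators, means of verification and assumptions shown are hypothetical and not drawn from any real logframe.

```python
# A minimal, hypothetical logical framework captured as a nested structure.
# Every entry below is an invented example for illustration only.
logframe = {
    "goal": {
        "description": "Increased incomes of smallholder farmers in the target district",
        "indicators": ["Average household income up 15% within five years"],
        "means_of_verification": ["National household survey"],
        "assumptions": ["Stable market prices for agricultural products"],
    },
    "purpose": {
        "description": "Increased agricultural production in the target area",
        "indicators": ["Crop yields up 20% by end of project"],
        "means_of_verification": ["Baseline and endline farm surveys"],
        "assumptions": ["Adequate rainfall during the project period"],
    },
    "outputs": [
        {
            "description": "Farmers trained in improved techniques",
            "indicators": ["500 farmers complete training by year two"],
            "means_of_verification": ["Training attendance records"],
            "assumptions": ["Trained farmers apply the techniques"],
        },
    ],
    "activities": ["Recruit trainers", "Develop training materials", "Deliver courses"],
}

# Vertical logic: activities produce outputs, outputs achieve the purpose,
# and the purpose contributes to the goal.
for level in ("goal", "purpose"):
    print(level.upper(), "-", logframe[level]["description"])
    print("  Indicator:", logframe[level]["indicators"][0])
```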
The horizontal logic of the matrix helps establish the basis for monitoring and
evaluating the project by asking how outputs, objectives, purpose and goal can be
measured, and what the suitable indicators are. The following table summarizes the link
between the logical frame and monitoring and evaluation.
Goal – Ex-post evaluation – Impact indicators
It is worth noting that the above table represents a simplified framework and should be
interpreted in a suitably flexible manner. For example, ex-post evaluation assesses whether or
not the purpose, component objectives and outputs have been achieved. Project/program
reviews are concerned with performance in output delivery and the extent of achieving
objectives.
Indicators
Indicators provide the quantitative and qualitative details to a set of objectives. In addition,
they provide evidence of the progress of program or project activities in the attainment of
development objectives. Indicators should be pre-established, i.e. during the project design
phase. When a direct measure is not feasible, indirect or proxy indicators may be used.
Indicators should be directly linked to the level of assessment (e.g. output indicators,
outcome indicators or impact indicators). Output indicators show the immediate physical
and financial outputs of the project. Early indications of impact (outcomes) may be obtained
by surveying beneficiaries’ perceptions about project services. Impact refers to
long-term developmental change. Measures of change often involve complex statistics
about economic or social welfare and depend on data that are gathered from beneficiaries.
They should also be clearly phrased to include change in a situation within a geographical
location, time frame, target etc. A popular code for remembering the characteristics of good
indicators is SMART.
S: Specific
M: Measurable
A: Achievable
R: Relevant
T: Time-bound
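As an illustration of how a single indicator can be checked against these characteristics, the sketch below decomposes one hypothetical indicator into its SMART components (the indicator and its details are invented for illustration only):

```python
# One hypothetical indicator decomposed into its SMART components.
indicator = {
    "statement": "School dropout rate in District X reduced from 12% to 8% by December 2026",
    "specific":   "School dropout rate in District X",              # what and where
    "measurable": "Percentage points, from school enrolment records",
    "achievable": "A 4-point reduction is realistic given planned interventions",
    "relevant":   "Directly linked to the project purpose on school retention",
    "time_bound": "By December 2026",
}

for criterion, detail in indicator.items():
    print(f"{criterion}: {detail}")
```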
1. Classifying project objectives into different levels requires that management develop systems to provide information at all levels, from basic accounting through sophisticated studies, in order to measure project outcomes.
• Project outcomes are often measured through the assessment of indicators that focus on whether beneficiaries have access to project services, their level of usage and their satisfaction with the services. Such evidence can also be provided easily and accurately through impact research, e.g. changes in health status or improvements in income.
3. Exogenous indicators focus on general social, economic and environmental factors that are out of the control of the project, but which might affect its outcome. Those factors might include the performance of the sector in which the project operates. Gathering data on project indicators and the wider environment places an additional burden on the project's M&E effort.
M&E designers should examine existing record keeping and reporting procedures used by the
project authorities in order to assess the capacity to generate the data that will be needed.
5. Some of the impact indicators, such as mortality rates or improvements in household income, are hard to attribute to the project in a cause-effect relation. In general, the higher the objective, the more difficult the cause-effect linkages become. Project impact will almost certainly be the result of a variety of factors, including that of the project itself. In such situations, the evaluation team might use comparisons with the situation before the project (baseline data), or with areas not covered by the project.
6. To maximize the benefits of M&E, the project should develop mechanisms to incorporate
the findings, recommendations and lessons learned from evaluations into the various
phases of the program or project cycle.
Chapter 3
EVALUATION TYPES AND MODELS
Program evaluations are carried out at different stages of project planning and implementation. They can include many types of evaluations (needs assessments, accreditation, cost/benefit analysis, effectiveness, efficiency, formative, summative, goal-based, process, outcomes, etc.). The type of evaluation you undertake to improve your programs depends on what you want to learn about the program.
In general, there are two main categories of evaluations of development projects:
Formative evaluations (process evaluations) examine the development of the project and
may lead to changes in the way the project is structured and carried out. Those types of
evaluations are often called interim evaluations. One of the most commonly used formative
evaluations is the midterm evaluation.
In general, formative evaluations are process oriented and involve a systematic collection of
information to assist decision-making during the planning or implementation stages of a
program. They usually focus on operational activities, but might also take a wider perspective
and possibly give some consideration to long-term effects. While staff members directly
responsible for the activity or project are usually involved in planning and implementing
formative evaluations, external evaluators might also be engaged to bring new approaches or
perspectives. Questions typically asked in those evaluations include:
• To what extent do the activities and strategies correspond with those presented in the
plan? If they are not in harmony, why are there changes? Are the changes justified?
• To what extent did the project follow the timeline presented in the work plan?
• To what extent are project actual costs in line with initial budget allocations?
• To what extent is the project moving toward the anticipated goals and objectives of
the project?
• Which of the activities or strategies are more effective in moving toward achieving
the goals and objectives?
• What barriers were identified? How and to what extent were they dealt with?
• To what extent are the project beneficiaries satisfied with project services?
Summative evaluations (also called outcome or impact evaluations) address the second set
of issues. They look at what a project has actually accomplished in terms of its stated goals.
There are two types of summative evaluations. (1) End evaluations aim to establish the
situation when external aid is terminated and to identify the possible need for follow up
activities either by donors or project staff. (2) Ex-post evaluations are carried out two to five
years after external support is terminated. The main purpose is to assess what lasting impact
the project has had or is likely to have and to extract lessons of experience.
• To what extent did the project meet its overall goals and objectives?
For each of these questions, both quantitative data (data expressed in numbers) and
qualitative data (data expressed in narratives or words) can be useful.
Summative evaluations are usually carried out as a program is ending or after completion of a
program in order to “sum up” the achievements, impact and lessons learned. They are useful
for planning follow-up activities or related future programs. Evaluators generally include
individuals not directly associated with the program.
Terms like "outcome" and "impact" are often used interchangeably. A distinction should be
made. Outcomes refer to any results or consequences of an intervention or a project. Impact
is a particular type of outcome. It refers to the ultimate results (i.e. what the situation will be
if the outcome is achieved). A UNICEF publication clarifies the relationship between the two
terms:
“Some people distinguish between outcomes and impacts, referring to outcomes as short-term results (on the level of purpose) and impacts as long-term results (on the level of broader goals). Outcomes are usually changes in the way people do things as a result of the project (for example, mothers properly treating diarrhea at home), while impacts refer to the eventual result of these changes (the lowered death rate from diarrhea disease).
Demonstrating that a project caused a particular impact is usually difficult since many
factors outside the project influence the results.” (UNICEF, A UNICEF Guide for
Monitoring and Evaluation: Making a Difference?, New York, 1991, p. 40.)
Impact evaluation should be carried out only after a program or project has reached a
sufficient level of stability. It is usually preceded by an implementation evaluation to
make sure that the intended program/ project elements have been put in place and are
operational before we try to assess their effects. Assessing the impact at an early stage is
meaningless and a waste of resources.
The main question that impact evaluations try to answer is whether the intervention or project
has made a difference for the target groups. There are different ways to find out and prove if
the intervention or project has made a difference. Those ways are referred to as evaluation
models.
Evaluation models differ in the extent to which they are able to identify and prove project
outcome or impact and link them with project interventions, i.e. to make a causal link
between the two. Some models are more likely than others to generate reliable results that
could establish a causal link. In evaluation terms this is called the scientific rigor or validity
of the model. There are many evaluation models. The following section reviews two
commonly used models: the pretest-posttest model and the comparison group model.
A. Pretest-Posttest Model
The basic assumption of this model is that without project interventions, the situation that existed before the implementation of the project would continue as before. As a result of the intervention, the situation will change over time. Therefore, we measure the situation before the project starts and repeat the same measures after the project is completed. The differences or changes between the two points in time can be attributed to the project interventions.
To increase the validity of this model, we have to control some biases that might result from the application of the model. For example, the pretest and posttest measures should be the same, and measures should be taken from the same groups, etc. In addition, to establish a strong link between project interventions and project impact, the model should take into account other biases that might occur between the two points in time. Some of those biases might be out of the project's control, i.e., social, political, economic and environmental factors.
Advantages: The main advantage of the pretest-posttest model is that it is relatively easy to
implement. It can be implemented with the same group of project beneficiaries (does not
require a control or comparison group). It does not usually require a high level of statistical
expertise to implement and is able to assess progress over time by comparing the results of
projects against baseline data.
Disadvantages: The main disadvantage of the pre and posttest model is that it lacks scientific
rigor. There are many biases that might take place between the pretest and the posttest that
could affect the results, and therefore, weaken the direct link between project interventions
and project outcomes or impact. In other words, changes in the situation before and after
project implementation might (at least in part) be attributed to other external factors. This
problem could be reduced by adopting what is called the multiple time-series model, i.e.
repeating the measures at different points of time during the implementation of the project
and not only at the beginning and end points of time. This way, results of measures can be
tracked over time and the effects of the external factors can be assessed and controlled.
However, this might increase the work burden and the cost of the evaluation.
Implementation Steps: Applying the pretest-posttest model involves the following main stages:
3. Apply the tools and instruments with the target group or a representative sample of the target group at the pretest time (at the beginning of the project implementation phase or before implementation starts).
4. Repeat the same measures at the posttest time (at the end of the project
implementation phase) with the same target group or a representative sample of the
target group.
5. Analyze, compare and interpret the two sets of evaluation data.
6. Report findings.
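As a minimal sketch of step 5, assuming the same indicator has been measured for the same group at the pretest and at the posttest, the comparison can be as simple as the snippet below. The indicator and all values are hypothetical.

```python
from statistics import mean

# Hypothetical pretest and posttest measurements of one indicator
# (e.g. school attendance days per term) for the same group of beneficiaries.
pretest = [12, 15, 10, 14, 13, 11, 16, 12]
posttest = [16, 18, 14, 17, 15, 15, 19, 16]

change = mean(posttest) - mean(pretest)
print(f"Mean at pretest:  {mean(pretest):.1f}")
print(f"Mean at posttest: {mean(posttest):.1f}")
print(f"Observed change:  {change:+.1f}")
# Note: without controlling for external factors, this change cannot be
# attributed to the project with certainty (see the disadvantages above).
```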
B. Comparison Group Model
This evaluation model assesses project outcomes or impact through the comparison of project results between two comparable groups at the same period of time (say, the end of the project implementation phase). The first group represents beneficiaries of the project and the second represents a group that has not benefited from the project. To control for design biases, the two groups should have the same characteristics in many aspects (socioeconomic status, gender balance, education, and other geographic and demographic aspects). Differences between the two groups can then be attributed to the project interventions.
Advantages: This model has relatively strong scientific rigor. It is able to link project impact
with project interventions or to attribute outcomes to the intervention. The implementation of
this model is relatively easy when naturally existing comparison groups can be found.
Implementation Steps: Applying the comparison group model involves the following main
stages:
2. Design evaluation tools and instruments for data collection.
4. Apply the tools and instruments with the target and comparison groups, or
representative samples of both, at the same time.
6. Report findings.
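A minimal sketch of steps 4 and 5 is shown below, assuming the same indicator is measured at the same time for a beneficiary group and a comparable non-beneficiary group; both groups and all values are hypothetical.

```python
from statistics import mean

# Hypothetical endline measurements of one indicator for project beneficiaries
# and for a comparison group with similar characteristics.
beneficiaries = [62, 70, 65, 68, 71, 66]
comparison = [55, 58, 60, 57, 59, 56]

difference = mean(beneficiaries) - mean(comparison)
print(f"Beneficiary group mean: {mean(beneficiaries):.1f}")
print(f"Comparison group mean:  {mean(comparison):.1f}")
print(f"Difference: {difference:+.1f}")
# If the two groups are truly comparable, this difference can be
# attributed to the project interventions.
```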
The impact or results of a project are difficult to prove if we do not know the situation prior to project implementation. Baseline surveys are surveys carried out before project implementation starts to generate data about the existing situation of a target area or group. Such data become the reference against which project/program impact can be assessed when summative evaluations are carried out. For example, if the objective of the project is to reduce school dropout rates, we have to know those rates prior to project implementation and compare them with the rates after the completion of the project.
Baseline surveys are especially important when the pretest-posttest evaluation model is adopted. The logic behind carrying out baseline surveys is that by comparing data that describe the situation to be addressed by a project or a program with data generated after the completion of the project, evaluators are able to measure progress or changes in the situation and link those changes to project interventions. As well, baseline data might be useful to track changes that the project brings about over time and to refine project indicators that are important for monitoring the project or for evaluating project impact.
Baseline surveys are especially important for assessing a project's higher-level objectives. Special focus is given to gathering information about the various indicators developed to measure project effects. Both quantitative and qualitative information are used in baseline surveys (see next section). To control methodological biases, the indicators, methods and tools used in the baseline survey should be repeated when carrying out summative evaluations.
Source: United Nations Development Programme (UNDP), Who Are the Question-makers? A Participatory Evaluation Handbook, OESP Handbook Series, 1997.
There are a number of interrelated dimensions along which the success of programs and projects can be measured, including: effectiveness, efficiency, relevance, impact, and sustainability. Following is a summary review of each of those dimensions:
1. Effectiveness
Effectiveness in simple terms is the measure of the degree to which the formally stated project objectives have been achieved or can be achieved. To make such measurement and verification possible, project objectives should be defined clearly and realistically. Often, evaluators have to deal with unclear and highly general objectives that are hard to assess.
2. Efficiency
Efficiency is the measure of the economic relationship between the allocated inputs and the
project outputs generated from those inputs (i.e. cost effectiveness of the project). It is a
measure of the productivity of the project, i.e., to what degree the outputs achieved derive
from an acceptable cost. This includes the efficient use of financial, human and material
resources. In other words, efficiency asks whether the use of resources in comparison with
the outputs is justified.
This might be easy to answer in the field of business. The question, however, becomes more difficult in the social context, especially when ethical considerations are involved. In such situations, the main difficulty in measuring efficiency is to determine what standards to follow as a point of reference. For example, how can we decide whether spending X amount of dollars to save the lives of Y children or to rehabilitate Z disabled persons is justified? What are the acceptable standards in such situations?
In the absence of agreed upon and predetermined standards, evaluators have to come up with
some justifiable standards. Following is a list of recommendations that evaluators may use:
• Compare project inputs and outputs against other comparable activities and projects.
• Ask questions such as: could the project or intervention achieve the same results at a
lower cost? Could the project achieve more results at the same cost?
3. Relevance
Relevance is a measure used to determine the degree to which the objectives of a program or
project remain valid as planned. It refers to an overall assessment to determine whether
project interventions and objectives are still in harmony with the needs and priorities of
beneficiaries. In other words, are the agreed objectives still valid? Is there a sufficient
rationale for continuing the project or activity? What is the value of the project in relation to
other priority needs? Is the problem addressed still a major problem?
Society's priorities might change over time as a result of social, political, demographic or environmental changes. As a result, a given project might not be as important as it was when it was initiated. For example, once an infectious epidemic has been eradicated, the justification for the project that dealt with the problem might no longer exist. Or, if a natural disaster happens, society's priorities shift to emergency or relief interventions, and other projects and interventions might become less important.
In many cases, the continued relevance of a project depends on the seriousness and quality of the needs assessment and the rationale upon which the project has been developed.
4. Impact
Project impact is a measure of all positive and negative changes and effects caused by the
project, whether planned or unplanned. While effectiveness focuses only on specific positive
and planned effects expected to accrue as a result of the project and is expressed in terms of
the immediate objective, impact is a far broader measure as it includes both positive and
negative project results, whether they are intended, or unintended. Impact is often the most
difficult and demanding part of the evaluation work since it requires the establishment of
complex causal conditions that are difficult to prove unless a strong evaluation model and a
diverse set of techniques are used.
In assessing impacts, the point of reference is the status of project beneficiaries and
stakeholders prior to implementation. Questions often asked in impact evaluations include:
what are the results of the project? What difference has the project made to the beneficiaries
and how many have been affected? What are the social, economic, technical, environmental,
and other effects on the direct or indirect individual beneficiaries, communities and
institutions? What are the positive or negative, intended and unintended, effects that come
about as a result of the project activities?
Project impacts can be immediate and long-range. Project staff and evaluators should decide
how much time must elapse until project impacts are generated. For example, an agricultural
project may produce important impacts after only a few months – whereas an educational
project might not generate significant effects until several years after the completion of the
project. Therefore, it is important to design the program or project in a way that will lend
itself to impact assessment at a later stage, e.g., through the preparation of baseline data,
setting of indicators for monitoring and evaluation, etc.
5. Sustainability
Many development initiatives fail once the implementation phase is over because neither the target group nor the responsible organizations have the means, capacity or motivation to provide the resources needed for the activities to continue. As a result, many development organizations have become more interested in the long-term and lasting improvements of projects. In addition, many donors are becoming interested in knowing for how long they will need to support a project before it can run with local resources.
During the last decade, the concept of sustainability has broadened from merely asking whether the project has succeeded in contributing to the achievement of its objectives, or whether the project will be able to cover its operational costs from local sources, to a wider set of issues, including whether the positive impacts are likely to continue after the termination of external support. In addition, environmental, financial,
institutional and social dimensions have become major issues in the assessment of
sustainability.
Since sustainability is concerned with what happens after external support is completed, it
should ideally be measured after the completion of the project. It will be difficult to provide
definitive assessment of sustainability while the project is still running. In such cases, the
assessment will have to be based on projections about future developments.
There are a number of factors that can be used to ensure that project interventions are likely
to become self-sustaining and continue after the termination of external funding, including:
Chapter 4
MONITORING AND EVALUATION METHODS AND TOOLS
Methods of data collection have strengths and drawbacks. Formal methods (surveys,
participatory observations, direct measurement, etc.) used in academic research would lead to
qualitative and quantitative data that have a high degree of reliability and validity. The
problem is that they are expensive. Less formal methods (field visits, unstructured interviews,
etc.) might generate rich information but less precise conclusions, especially because some of
those methods depend on subjective views and intuitions.
Qualitative methods, especially participatory methods of data collection, can bring rich and
in-depth analysis of the situation of the beneficiaries of projects and new insights into
peoples' needs for project planning and implementation. However, they demand more skills
than most quantitative methods. In addition, they require time and substantial talent in
communication and negotiation between planners and participants.
The quality of information, especially in terms of validity and reliability, should be a main
concern for the evaluator. The evaluator may simultaneously employ a number of methods
and sources of information in order to cross-validate data (triangulation). Triangulation is a
term used to describe the simultaneous use of multiple evaluation methods and information
sources to study the same topic. It provides the means to generate rich and contextual
information. As well, it provides the means to verify information and explain conflicting
evidence.
The following table provides an overview of some of the quantitative and qualitative data
collection methods commonly used during evaluations.
Method – Description/Purpose – Advantages – Disadvantages/Challenges

Questionnaires and surveys
Advantages: Many sample questionnaires already exist.

Interviews
Advantages: Increase the likelihood of useful responses; allow the interviewer to be flexible in administering the interview to particular individuals or circumstances.

Observation
Description/Purpose: Involves inspection, field visits and observation to understand processes, infrastructure/services and their utilization; gathers accurate information about how a program actually operates, particularly about processes.
Advantages: Well-suited for understanding the processes, views and operations of a program while they are actually occurring; can adapt to events as they occur in natural, unstructured settings; provides good opportunities for identifying unanticipated outcomes.
Disadvantages/Challenges: Dependent on the observer's understanding and interpretation; has limited potential for generalization; exhibited behaviors can be difficult to interpret; observations can be complex to categorize; the observer's presence can influence behavior.

Focus groups
Description/Purpose: A focus group brings together a representative group of 8 to 10 people who are asked a series of questions related to the task at hand. Used for the analysis of specific, complex problems, to identify attitudes and priorities in sample groups, and to explore a topic in depth through group reactions to an experience or suggestion, common complaints, etc.
Advantages: Efficient and reasonable in terms of cost; stimulates the generation of new ideas; quickly and reliably gets common impressions; can be an efficient way to get a wide range and depth of information in a short time; useful in project design and in assessing the impact of a project on a given set of stakeholders.
Disadvantages/Challenges: Responses can be hard to analyze; needs good facilitators; difficult to schedule 8 to 10 people together.

Case studies
Description/Purpose: In-depth review of beneficiaries' experiences in a program through comprehensive examination of selected cases.
Advantages: Fully depicts the client's experience of program input, process and results; a powerful means to portray the program to outsiders.
Disadvantages/Challenges: Usually time-consuming.
Source: Information on common qualitative methods is provided in the earlier User-Friendly
Handbook for Project Evaluation (NSF 93-152).
Evaluation can involve a number of methods. No recipe or formula is best for every situation.
Some methods are better suited for the collection of certain types of data. Each has
advantages and disadvantages in terms of costs and other practical and technical
considerations (such as ease of use, accuracy, reliability, and validity). For example, there is
no best way to conduct interviews. Your approach will depend on the practical considerations
of getting the work done during the specified time period. Using a focus group - which is
essentially a group interview - is more efficient than one-on-one interviews, if done well.
However, people often give different answers in groups than they do individually. They may
feel freer to express personal views in a private interview. At the same time, group
conversations can draw out deeper insights as participants listen to what others are saying.
Both approaches have value.
Project staff and evaluators must weigh the pros and cons against program goals. In selecting evaluation methods, evaluators consider methods that generate the most useful and reliable information, are the most cost-effective, and are the easiest to implement in a short period of time.
Following is a list of questions that might help in selecting appropriate evaluation methods:
1. What information is needed?
2. Of this information, how much can be collected and analyzed in a low-cost and
practical manner, e.g., using questionnaires, surveys and checklists?
6. Will the information appear as credible to decision makers, e.g., to donors or top
management?
7. Are the methods appropriate for the target group? If group members are illiterate, the
use of questionnaires might not be appropriate unless completed by the evaluators
themselves.
Ideally, the evaluator uses a combination of methods: for example, a questionnaire to quickly collect a great deal of information from a lot of people, and then interviews to get more in-depth information from certain respondents to the questionnaires. In addition, case studies could then be used for more in-depth analysis of unique and notable cases, e.g., those who did or did not benefit from the program, those who quit the program, etc.
Combining quantitative and qualitative research methods and approaches in monitoring and
evaluation of development projects has proved to be very effective.
Chapter 5
MONITORING AND EVALUATION PLANNING, DESIGN AND
IMPLEMENTATION
Monitoring and evaluation planning and design must be prepared as an integral part of the
program/project design. To increase the effectiveness of the M&E systems, program
managers should:
• Establish baseline data describing the problems to be addressed and build baseline indicators.
• Make sure that program/project objectives are clear, measurable and realistic.
• Agree with stakeholders on the specific indicators to be used for monitoring and
evaluating project performance and impact.
• Define the types and sources of data needed and the methods of data collection and
analysis required based on the indicators.
It should be noted that the monitoring and evaluation plan should not be seen as rigid. The plan should be subject to continuous review and adjustment as required, and should serve as a means for an effective learning process.
As mentioned above, evaluation planning and design depend on the type of information
needed. The type, quantity and quality of information should be thought of carefully before
planning M&E systems.
Project managers usually prepare annual work plans that translate the project document into
concrete tasks. The work plans should describe in detail the delivery of inputs, the activities
to be conducted and the expected results. They should clearly indicate schedules and the
persons responsible for providing the inputs and producing results. The work plans should be
used as the basis for monitoring the progress of program/project implementation.
1. A first step towards developing a good monitoring system is to decide what should be
monitored. The careful selection of monitoring indicators organizes and focuses the
data collection process.
2. The next question would be how to gather information, i.e. to select methods to track
indicators and report on progress (observation, interviews, stakeholder meetings,
routine reporting, field visits, etc.).
3. When to gather information and by whom. The monitoring plan should include who will gather the information and how often. Project staff at various levels will do most of the data collection, analysis and reporting. Staff should agree on what the monitoring report should include.
5. The monitoring plan should indicate the resources needed to carry out project
monitoring. Needed funds and staff time should be allocated to ensure effective
implementation.
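Taken together, these decisions can be recorded per indicator in a simple monitoring plan. The sketch below is illustrative only; the indicators, methods, frequencies and roles are hypothetical.

```python
# Hypothetical monitoring plan: one record per indicator, capturing what is
# monitored, how, how often, by whom and with what resources.
monitoring_plan = [
    {
        "indicator": "Number of farmers trained",
        "method": "Training attendance records",
        "frequency": "Monthly",
        "responsible": "Field officer",
        "resources": "Staff time, reporting forms",
    },
    {
        "indicator": "Kilometers of agricultural road constructed",
        "method": "Field visits and contractor reports",
        "frequency": "Quarterly",
        "responsible": "Project engineer",
        "resources": "Transport, per diem",
    },
]

for entry in monitoring_plan:
    print(f"{entry['indicator']}: {entry['frequency']} via {entry['method']} ({entry['responsible']})")
```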
Planning an Evaluation
There is no "perfect" evaluation design. It is far more important to do something than to wait until every last detail has been tested. However, to improve evaluation planning and design, it is useful to consider the following questions and issues:
a. What are the purposes of the evaluation? Which ones are more important than others?
This step involves identifying a manageable number of evaluation purposes and prioritizing
them. The best way to decide on the purposes of an evaluation is to ask who needs what type
of information and for what reason. When the evaluation purpose has been decided, it must
be clearly set forth in the Evaluation Terms of Reference.
b. What evaluation model is the most appropriate for the project or program?
As mentioned earlier, there are many evaluation models that can be considered. Each has
some strengths and weaknesses. The evaluation model that a specific project would utilize
should be selected during the project design phase. This is especially important if the project
plans to include a summative evaluation.
c. When should the evaluation be carried out? What is the timing of the evaluation within the project cycle?
The timing of major evaluations is determined by the project plan, the identification of significant problems during the course of monitoring, donors' requests, etc.
d. What is the scope and focus of the evaluation and questions for the evaluation to answer?
Determining the scope and focus of an evaluation includes identifying the geographic area,
type of activity and time period that the evaluation should cover. This would clarify the types
of questions to be asked.
Existing data should be identified and its quality assessed. In the process, some questions
might be answered. Other data sources might include documents (regular reports, field visits
notes, previous evaluation reports, etc.) and data generated by research projects (household
surveys, evaluation of similar programs, etc.).
In the early stages of planning an evaluation, resources should be clearly defined. In order for
evaluations to be effective, sufficient human, financial and logistic resources should be
allocated. We should remember that the amount of available resources influences the scope
and methods of the evaluation.
• Why - The purposes of the evaluation - who can/will use the results.
• What - The scope and focus of evaluation and questions for the evaluation to answer.
• Who - Those responsible for managing and those responsible for carrying out the
evaluation, specifying whether the evaluation team will be internal or external or a
combination of both.
• Resources - The supplies and materials, infrastructure and logistics needed for the
evaluation.
Chapter 6
DATA ANALYSIS AND REPORT WRITING
Analyzing Data
1. Data Management
Organizing evaluation data is an important step for ensuring effective analysis and reporting.
If the amount of quantitative data is very small and you are not familiar with computer
software and data entry, you might opt to manually organize and analyze data. However, if
the amount of data is huge or you need to carry out sophisticated analysis, you should enter
the data into a computer program. There are a number of software packages available to
manage the evaluation data, including SPSS, Access, or Excel. Each requires a different level
of technical expertise. For a relatively small project, Excel is the simplest of the three
programs and should work well as database software. In any event, the assistance of
statisticians and computer experts can be engaged at different stages of the evaluation.
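As a hedged sketch of this step, the snippet below uses Python with the pandas library (one possible alternative to Excel, Access or SPSS) to load and check a hypothetical survey file; the file name and column names are invented for illustration.

```python
import pandas as pd  # assumes pandas is installed; Excel, Access or SPSS could be used instead

# Hypothetical survey data: one row per respondent.
# Illustrative columns: respondent_id, gender, district, satisfaction_score
df = pd.read_csv("survey_responses.csv")

# Basic data management checks before analysis: completeness and summary statistics.
print(df.isna().sum())                      # missing values per column
print(df["satisfaction_score"].describe())  # count, mean and spread of a key indicator
```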
Analyzing the gathered quantitative and qualitative data is a major step in project evaluation.
Developing a data analysis plan is important to carry out a successful analysis and
interpretation of information gathered by the evaluation. Following are some tips to make
sense of the quantitative data:
Before analyzing your data, review your evaluation goals. This will help you organize your
data and focus your analysis. For example, if you want to improve your program by identifying its strengths and weaknesses, you can organize data into program strengths, weaknesses and suggestions to improve the program. If you are conducting an outcomes-based evaluation, you could categorize data according to the indicators for each outcome. In general, data analysis is facilitated if the project has clear and measurable goals and objectives.
Data analysis often involves the disaggregation of data into categories to provide evidence
about project achievements and to identify areas in which a program is succeeding and/or
needs improvement. Data can be broken down by gender, social and economic situation,
education, area of residence (urban or rural), marital status, age, etc. Decide what type of
disaggregation is relevant to your evaluation and project objectives and indicators. One of the
main advantages of statistical analysis is that it can be used to summarize the findings of an
evaluation in a clear, precise and reliable way. However, not all information can be analyzed
quantitatively. The most commonly used statistics include the following:
Percentage. A percentage tells us the proportion of activities, things, or people that have
certain characteristics within the total population of the study or sample. Percentage is
probably the most commonly used statistic to show the current status as well as growth over
time.
Mean. The mean is the most commonly used statistic to represent the average in research and evaluation studies. It is derived by dividing the sum of all values by the total number of units included in the summation. The mean has mathematical properties that make it appropriate to use with many statistical procedures.
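As a small worked illustration of these two statistics, using invented respondent records and the kind of disaggregation by gender discussed above:

```python
from statistics import mean

# Hypothetical respondent records: (gender, satisfied_with_services, monthly_income)
records = [
    ("F", True, 320), ("F", False, 280), ("M", True, 350),
    ("M", True, 300), ("F", True, 410), ("M", False, 260),
]

satisfied = sum(1 for _, ok, _ in records if ok)
print(f"Percentage satisfied: {100 * satisfied / len(records):.0f}%")  # 4 of 6 -> 67%

incomes = [income for _, _, income in records]
print(f"Mean income (all respondents): {mean(incomes):.0f}")           # 320

# Disaggregated by gender.
for g in ("F", "M"):
    group = [income for gender, _, income in records if gender == g]
    print(f"Mean income ({g}): {mean(group):.0f}")
```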
3. Analysis of Qualitative Information
The use of both quantitative and qualitative analysis in evaluation has become the preferred
model for many evaluators. Most evaluators and researchers agree that they should be
employed simultaneously. The analysis of qualitative data helps broaden the view of the
phenomena of interest in an evaluation, but can also increase depth and detail, where needed.
It is important to keep all documents for several years after completion in case they are
needed for future reference.
There is no common format for reporting. Following is a list of tips that might help in
improving your evaluation reports:
It is useful to start the preparation of the report before data collection. There are a number of
sections that can be prepared by using the material of the evaluation plan or proposal
(background section, information about the project and some aspects of the methodology,
evaluation questions, etc.). Those will remain the same throughout the evaluation. The
evaluation findings, conclusions, and recommendations generally need to wait for the end of
the evaluation.
One of the most challenging tasks that evaluators face is how to organize the huge amount of data gathered into a useful, concise and interesting report, and what data to include and leave out. It is useful to remember that only a small, concise selection of the tabulations prepared during the analysis phase should be reported. A report outline will help in classifying information. Always be guided by your key evaluation questions, the indicators you are assessing and the type of information that your audience needs.
Make your recommendations clear, concise and direct. Examples include:
1. Ways for improving management of the program (planning, decision making, policy
development, etc.) and where capacity building/technical assistance and training are
needed.
Remember that the level and content of evaluation reports depend on for whom the report is
intended, e.g., donors, staff, beneficiaries, the general public, etc. Presentation must be clear
and adjusted to the target group. The presentation must be made in simple language that can
be understood by non-professionals. Following is a list of suggestions that might help in
making your report more interesting and easier to read:
1. The first sentence of paragraphs should be used to make the main point, and the
remainder to supplement, substantiate and discuss the main point.
2. Keep the text as short as possible. This makes it more likely that a large number of people will read it.
3. The structure of the report should be simple. The text should be broken down into relatively small thematic or sequential parts, with simple and clear subtitles precisely identifying the topics discussed.
4. Make the report interesting to read. Display your data in graphs, diagrams, illustrations and tables that summarize numbers. This should reduce the amount of text needed to describe the results; furthermore, they are more effective than written text. Do not simply restate the graphs or illustrations in written form; focus only on the important points that relate to the problem under discussion. Effective use of qualitative information makes the report more interesting. In addition, direct quotes, short examples and comments heard during fieldwork personalize the findings, and photographs help in familiarizing readers with the conditions of the project beneficiaries.
5. Use simple language that the readers will understand. Avoid the use of long and
complicated sentences, unclear jargon and/or difficult words. Important technical
terms should be defined in the text or in the glossary at the end of the report.
8. Simple link words should be used to split sentences and indicate the direction in
which the argument is moving. Link words should be simple, such as “also,” “even
so,” “on the other hand,” and “in the same way.” Avoid long words like “moreover,”
“nevertheless,” and “notwithstanding.”
9. Only data tables or diagrams should contain detailed numbers. The written text should highlight the most important numbers and say what they mean. Percentages should in most cases be rounded to the nearest whole number. It should be possible for the reader to get the main message from a table without consulting the text. Every table must have a title, table number, reference to the source of information, sample size, and a full description of what each figure refers to.
10. Use space around the text. Ease of reading and understanding is more important than
reducing the volume of pages.
Suggested Contents of Evaluation Report
1. Title page
2. Table of Contents
3. Acknowledgments (optional)
4. Executive Summary
• Summarize the program/project evaluated, the purpose of the evaluation and the
methods used, the major findings, and the recommendations in priority order.
• Two to three pages (usually) that could be read independently without reference to
the rest of the report.
5. Introduction
• Describe the program/project being evaluated (the setting and problem
addressed, objectives and strategies, funding).
• Summarize the evaluation context (purposes, sponsors, composition of the
team, duration).
6. Evaluation Objectives and Methodology
• List the evaluation objectives (the questions the evaluation was designed to answer).
• Describe fully the evaluation methods and instruments (e.g., what data were
collected, specific methods used to gather and analyze them, rationale for visiting
selected sites).
• Limitations of the evaluation.
7. Findings
• State findings clearly, with data presented graphically in tables and figures.
• Explain the comparisons made to judge whether adequate progress was made.
• Identify reasons for accomplishments and failures, especially continuing constraints.
8. Recommendations
• List the recommendations, addressed to the appropriate decision-makers.
• Include a proposed timetable for implementing/reviewing recommendations.
9. Lessons Learned (optional)
• Identify lessons learned from this evaluation for those planning, implementing or
evaluating similar activities.
10. Appendices
• Terms of Reference.
• Case studies.
• Abbreviations.
Use of Monitoring and Evaluation Results
Disseminate the report to the various interested and related parties that might use it. Potential
users include: the funding organization (for the program or evaluation), project managers
and staff, board members of the organization, partner organizations/interested community
groups and other stakeholders, the general public, and external resources (researchers,
consultants, professional agencies, etc.).
Apart from distributing the evaluation report itself, common ways to disseminate evaluation
information are through the evaluation summaries, annual reports, bibliographies, thematic
reports, seminars, press releases, websites, newsletters, etc. The entire report should be
distributed to administrators and donors. The executive summary could be distributed more
widely, for example to other policy-making staff, political bodies or others involved in similar
programs.
The evaluation report highlights the project’s strengths and weaknesses and suggests solutions to
major problems. While it is important to know if the program is achieving its goals and
objectives, it is also important that the project manager and staff are able to use the results to
plan follow-up actions to further strengthen the program.
The project manager and staff should prepare an action plan to implement follow-up
activities. The action plan should have a time line and should identify individuals responsible
for carrying out the planned activities. The implementation of the follow-up action plan needs
to be monitored and evaluated. This makes the evaluation of program implementation and impact
an integral part of a process of continuous improvement.
One of the objectives of evaluations is to feed into the next planning phases of the
programming cycle of the organization as well as to provide a baseline for future planning.
Findings of evaluations reflect the situation of the target group and highlight follow up
actions. Such recommendations could be used to design new projects or interventions, or to
further develop existing projects.
4. Policy development
If the evaluation is well done and recommends policy changes, program managers can use it
as a tool for advocacy. Good evaluations forcefully demonstrate the potential beneficial
impact of suggested policy changes.
Evaluations can be used as a tool to obtain further support for the program/project. By
documenting what has been achieved, evaluators help project leaders gain the support of
government officials, increase credibility in the community and raise funds from donors,
especially if the results of the evaluation affirm that the project goals remain valid.
Chapter 7
Monitoring and evaluation enable you to check the “bottom line” (see Glossary of Terms) of
development work: Not “are we making a profit?” but “are we making a difference?”
Through monitoring and evaluation, you can:
Review progress;
In many organizations, “monitoring and evaluation” is something that is seen as a donor
requirement rather than a management tool. Donors are certainly entitled to know whether
their money is being properly spent, and whether it is being well spent. But the primary
(most important) use of monitoring and evaluation should be for the organisation or project
itself to see how it is doing against objectives, whether it is having an impact, whether it is
working efficiently, and to learn how to do it better.
Plans are essential but they are not set in concrete (totally fixed). If they are not working, or
if the circumstances change, then plans need to change too. Monitoring and evaluation are
both tools which help a project or organisation know when plans are not working, and when
circumstances have changed. They give management the information it needs to make
decisions about the project or organisation, about changes that are necessary in strategy or
plans. Through this, the constants remain the pillars of the strategic framework: the problem
analysis, the vision, and the values of the project or organisation. Everything else is
negotiable. Getting something wrong is not a crime. Failing to learn from past mistakes
because you are not monitoring and evaluating, is.
The effect of monitoring and evaluation can be seen in the following cycle. Note that you
will monitor and adjust several times before you are ready to evaluate and replan.
(Diagram: Plan → Implement → Monitor → Reflect/learn/decide/adjust → Implement → Monitor → ... → Evaluate/learn/decide → re-plan.)
It is important to recognize that monitoring and evaluation are not magic wands that can be
waved to make problems disappear, or to cure them, or to miraculously make changes without
a lot of hard work being put in by the project or organisation. In themselves, they are not a
solution, but they are valuable tools. Monitoring and evaluation can:
Push you to reflect on where you are going and how you are getting there;
Increase the likelihood that you will make a positive development difference.
Good planning combined with effective monitoring and evaluation can play a major role in
enhancing the effectiveness of development programmes and projects. Good planning helps
us focus on the results that matter, while monitoring and evaluation help us learn from past
successes and challenges and inform decision making so that current and future initiatives are
better able to improve people’s lives and expand their choices.
Understanding inter-linkages and dependencies between planning, monitoring
and evaluation
Planning can be defined as the process of setting goals, developing strategies, outlining the
implementation arrangements and allocating resources to achieve those goals. It is important
to note that planning involves looking at a number of different processes:
Determining and allocating the resources (financial and other) required to achieve the
vision and goals
Outlining implementation arrangements, which include the arrangements for
monitoring and evaluating progress towards achieving the vision and goals
There is an expression that “failing to plan is planning to fail.” While it is not always true that
those who fail to plan will eventually fail in their endeavors, there is strong evidence to
suggest that having a plan leads to greater effectiveness and efficiency. Not having a plan—
whether for an office, programme or project—is in some ways similar to attempting to build a
house without a blueprint, that is, it is very difficult to know what the house will look like,
how much it will cost, how long it will take to build, what resources will be required, and
whether the finished product will satisfy the owner’s needs. In short, planning helps us
define what an organization, programme or project aims to achieve and how it will go about
it.
Monitoring can be defined as the ongoing process by which stakeholders obtain regular
feedback on the progress being made towards achieving their goals and objectives. Contrary
to many definitions that treat monitoring as merely reviewing progress made in
implementing actions or activities, the definition used in this guide focuses on reviewing
progress against achieving goals. In other words, monitoring in this guide is not only
concerned with asking “Are we taking the actions we said we
would take?” but also “Are we making progress on achieving the results that we said we
wanted to achieve?” The difference between these two approaches is extremely important. In
the more limited approach, monitoring may focus on tracking projects and the use of the
agency’s resources. In the broader approach, monitoring also involves tracking strategies and
actions being taken by partners and non-partners, and figuring out what new strategies and
actions need to be taken to ensure progress towards the most important results.
The distinction between monitoring and evaluation and other oversight activities
Like monitoring and evaluation, inspection, audit, review and research functions are
oversight activities, but they each have a distinct focus and role and should not be confused
with monitoring and evaluation.
recommendations for improvement or corrective action. It is often performed when there is a
perceived risk of non-compliance.
Reviews, such as rapid assessments and peer reviews, are distinct from evaluation and more
closely associated with monitoring. They are periodic or ad hoc, often light assessments of
the performance of an initiative and do not apply the due process of evaluation or rigor in
methodology. Reviews tend to emphasize operational issues. Unlike evaluations conducted
by independent evaluators, reviews are often conducted by those internal to the subject or the
commissioning organization.
Monitoring involves:
Evaluation involves:
Looking at what the project or organisation intended to achieve – what difference did
it want to make? What impact did it want to make?
Assessing its progress towards what it wanted to achieve, its impact targets.
Looking at the strategy of the project or organisation. Did it have a strategy? Was it
effective in following its strategy? Did the strategy work? If not, why not?
Looking at how it worked. Was there an efficient use of resources? What were the
opportunity costs (see Glossary of Terms) of the way it chose to work? How
sustainable is the way in which the project or organisation works? What are the
implications for the various stakeholders in the way the organisation works?
There are many different ways of doing an evaluation. Some of the more common terms you
may have come across are:
External evaluation: This is an evaluation done by a carefully chosen outsider or
team of outsiders.
Why evaluate?
To identify how efficient the project was in converting resources (funded and in-kind)
into activities, objectives and goals
To assess how sustainable and meaningful the project was for participants
Evaluation is not just about demonstrating success, it is also about learning why things don’t
work. As such, identifying and learning from mistakes is one of the key parts of evaluation.
Evaluation can be a confronting undertaking, especially if you come to it unprepared. This
guide, along with the online evaluation toolbox, will allow you to plan and undertake an
evaluation of your project. An important thing to consider, and something that may lighten
the load, is to remember that evaluation is not about finding out about everything, but about
finding the things that matter.
Evaluation Questions
Evaluation questions should be developed up-front, and in collaboration with the primary
audience(s) and other stakeholders who you intend to report to. Evaluation questions go
beyond measurements to ask the higher order questions such as whether the intervention is
worth it, or could it have been achieved in another way (see Table 1). Overall, evaluation
questions should lead to further action such as project improvement, project mainstreaming,
or project redesign.
In order to answer evaluation questions, monitoring questions must be developed that will
inform what data will be collected through the monitoring process. The monitoring questions
will ideally be answered through the collection of quantitative and qualitative data. It is
important to not leap straight into the collection of data, without thinking about the evaluation
questions. Jumping straight in may lead to collecting data that provides no useful information,
which is a waste of time and money.
Examples from Table 1 include evaluation questions such as “Can the project be scaled up?” and
“What next?”, and, under theory of change, “Does the project have a theory of change?”, “Is the
theory of change reflected in the program logic?” and “How can the program logic inform the
research questions?”
Terminology
The language and terms used in evaluation can make the whole process quite daunting. This
is accentuated by many references providing different definitions for the same term. The
important thing for you to do is not to get bogged down in all the jargon, but to make sure you
use the same terms consistently within your evaluation. It may help to provide a brief
definition of the terms you select in your evaluation report (see Table 2), so that readers
know what you mean when you use words that may have different meanings.
Types of evaluation
Evaluation can be characterized as being either formative or summative (see Table 3).
Broadly (and this is not a rule), formative evaluation looks at what leads to an intervention
working (the process), whereas summative evaluation looks at the short-term to long-term
outcomes of an intervention on the target group. Formative evaluation takes place in the lead
up to the project, as well as during the project in order to improve the project design as it is
being implemented (continual improvement). Formative evaluation often lends itself to
qualitative methods of inquiry. Summative evaluation takes place during and following the
project implementation, and is associated with more objective, quantitative methods. The
distinction between formative and summative evaluation can become blurred. Generally it is
important to know both how an intervention works, as well as if it worked. It is therefore
important to capture and assess both qualitative and quantitative data.
Table 3 contrasts the two types of evaluation: formative evaluation asks how the intervention
works, what the learnings were and how to improve, while summative evaluation assesses the
short-term to long-term outcomes of the intervention on the target group.
Participatory evaluation is about valuing and using the knowledge of insiders (target group
and other stakeholders) to provide meaningful targets and information, as opposed to solely
relying on objective and external indicators of change. It also refers to getting stakeholders
involved in the collection and interpretation of results.
Participatory evaluation is not always appropriate in every project. There are a number of
constraints which may impact on the quality of the process, and hence its overall value to the
evaluation. These include:
For each type of evaluation, the main advantages and disadvantages are summarized below.

Internal evaluation
Advantages:
• The evaluators are very familiar with the work, the organizational culture and the aims and objectives.
• Sometimes people are more willing to speak to insiders than to outsiders.
• An internal evaluation is very clearly a management tool, a way of self-correcting, and much less threatening than an external evaluation. This may make it easier for those involved to accept findings and criticisms.
Disadvantages:
• The evaluation team may have a vested interest in reaching positive conclusions about the work or organisation. For this reason, other stakeholders, such as donors, may prefer an external evaluation.
• The team may not be specifically skilled or trained in evaluation.
• The evaluation will take up a considerable amount of organizational time – while it may cost less than an external evaluation, the opportunity costs (see Glossary of Terms) may be high.

External evaluation (done by a team or person with no vested interest in the project)
Advantages:
• The evaluation is likely to be more objective as the evaluators will have some distance from the work.
• The evaluators should have a range of evaluation skills and experience.
• Sometimes people are more willing to speak to outsiders than to insiders.
• Using an outside evaluator gives greater credibility to findings, particularly positive findings.
Disadvantages:
• Someone from outside the organisation or project may not understand the culture or even what the work is trying to achieve.
• Those directly involved may feel threatened by outsiders and be less likely to talk openly and co-operate in the process.
• External evaluation can be very costly.
• An external evaluator may misunderstand what you want from the evaluation and not give you what you need.
To improve the chances of success, attention needs to be placed on some of the common areas
of weakness in programmes and projects. Four main areas for focus are identified
consistently:
properly defined and clarified. This reduces the likelihood of experiencing major challenges
in implementation.
Communicate what you want clearly – good Terms of Reference (see Glossary of
Terms) are the foundation of a good contractual relationship.
Negotiate a contract which makes provision for what will happen if time frames and
output expectations are not met.
Maintain contact – ask for interim reports as part of the contract – either verbal or
written.
Do not expect any evaluator to be completely objective. S/he will have opinions and ideas –
you are not looking for someone who is a blank page! However, his/her opinions must be
clearly stated as such, and must not be disguised as “facts”. It is also useful to have some
idea of his/her (or their) approach to evaluation.
Chapter 8
RESULTS-BASED MANAGEMENT
Planning, monitoring and evaluation come together as RBM. RBM is defined as “a broad
management strategy aimed at achieving improved performance and demonstrable results,”
and has been adopted by many multilateral development organizations, bilateral development
agencies and public administrations throughout the world (as noted earlier, some of these
organizations now refer to RBM as MfDR to place the emphasis on development rather than
organizational results).
Good RBM is an ongoing process. This means that there is constant feedback, learning and
improving. Existing plans are regularly modified based on the lessons learned through
monitoring and evaluation, and future plans are developed based on these lessons.
Monitoring is also an ongoing process. The lessons from monitoring are discussed
periodically and used to inform actions and decisions. Evaluations should be done for
programmatic improvements while the programme is still ongoing and also inform the
planning of new programmes. This ongoing process of doing, learning and improving is
what is referred to as the RBM life-cycle approach, which is depicted in Figure 1.
RBM is concerned with learning, risk management and accountability. Learning not only
helps improve results from existing programmes and projects, but also enhances the capacity
of the organization and individuals to make better decisions in the future and improves the
formulation of future programmes and projects. Since there are no perfect plans, it is essential
that managers, staff and stakeholders learn from the successes and failures of each
programme or project.
There are many risks and opportunities involved in pursuing development results.
RBM systems and tools should help promote awareness of these risks and opportunities, and
provide managers, staff, stakeholders and partners with the tools to mitigate risks or pursue
opportunities.
RBM practices and systems are most effective when they are accompanied by clear
accountability arrangements and appropriate incentives that promote desired behavior. In
other words, RBM should not be seen simply in terms of developing systems and tools to plan,
monitor and evaluate results. It must also include effective measures for promoting a culture
of results orientation and ensuring that persons are accountable for both the results achieved
and their actions and behavior.
The main objectives of good planning, monitoring and evaluation—that is, RBM— are to:
Goal-based evaluation: assesses achievement of goals and objectives. Key questions: Were the
goals achieved? Efficiently? Were they the right goals? Methods: comparing baseline and
progress data; finding ways to measure indicators.
Goal-free evaluation: assesses the full range of project effects, intended and unintended. Key
questions: What are all the outcomes? What value do they have? Methods: independent
determination of needs and standards to judge project worth; qualitative and quantitative
techniques to uncover any possible results.
Our feeling is that the best evaluators use a combination of all these approaches, and that an
organisation can ask for a particular emphasis but should not exclude findings that make use
of a different approach.
Monitoring and evaluation should be part of your planning process. It is very difficult to go
back and set up monitoring and evaluation systems once things have begun to happen. You
need to begin gathering information about performance and in relation to targets from the
word go. The first information gathering should, in fact, take place when you do your needs
assessment (see the toolkit on overview of planning, the section on doing the ground work).
This will give you the information you need against which to assess improvements over time.
When you do your planning process, you will set indicators (see Glossary of Terms). These
indicators provide the framework for your monitoring and evaluation system. They tell you
what you want to know and the kinds of information it will be useful to collect. In this
section we look at:
What do we want to know? This includes looking at indicators for both internal
issues and external issues. (Also look at the examples of indicators later in this
toolkit.)
There is not one set way of planning for monitoring and evaluation. The ideas included in the
toolkits on overview of planning, strategic planning and action planning will help you to
develop a useful framework for your monitoring and evaluation system. If you are familiar
with logical framework analysis and already use it in your planning, this approach lends itself
well to planning a monitoring and evaluation system.
What we want to know is linked to what we think is important. In development work, what
we think is important is linked to our values.
Most work in civil society organizations is underpinned by a value framework. It is this
framework that determines the standards of acceptability in the work we do. The central
values on which most development work is built are:
Sustainability;
So, the first thing we need to know is: Is what we are doing and how we are doing it meeting
the requirements of these values? In order to answer this question, our monitoring and
evaluation system must give us information about:
Who is benefiting from what we do? How much are they benefiting?
Are beneficiaries passive recipients or does the process enable them to have some
control over their lives?
Are there lessons in what we are doing that have a broader impact than just what is
happening on our project?
Can what we are doing be sustained in some way for the long-term, or will the impact
of our work cease when we leave?
Are we getting optimum outputs for the least possible amount of inputs?
Should development work be evaluated in terms of the process (the way in which the work is
done) or the product (what the work produces)? Often, this debate is more about excusing
inadequate performance than it is about a real issue. Process and product are not separate in
development work. What we achieve and how we achieve it is often the very same thing. If
the goal is development, based on development values, then sinking a well without the
transfer of skills for maintaining and managing the well is not enough. Saying: “It was
taking too long that way. We couldn’t wait for them to sort themselves out. We said we’d
sink a well and we did” is not enough. But neither is: “It doesn’t matter that the well hasn’t
happened yet. What’s important is that the people have been empowered.”
Both process and product should be part of your monitoring and evaluation system.
But how do we make process and product and values measurable? The answer lies in the
setting of indicators and this is dealt with in the sub-section that follows.
Indicators
Indicators are measurable or tangible signs that something has been done or that something
has been achieved. In some studies, for example, an increased number of television aerials in
a community has been used as an indicator that the standard of living in that community has
improved. An indicator of community empowerment might be an increased frequency of
community members speaking at community meetings. If one were interested in the gender
impact of, for example, drilling a well in a village, then you could use “increased time for
involvement in development projects available to women” as an indicator. Common
indicators for something like overall health in a community are the infant/child/maternal
mortality rate, the birth rate, and nutritional status and birth weights. You could also look at
less direct indicators such as the extent of immunisation, the extent of potable (drinkable)
water available and so on. (See further examples of indicators later in this toolkit, in the
section on examples.)
Indicators are an essential part of a monitoring and evaluation system because they are what
you measure and/or monitor. Through the indicators you can ask and answer questions such
as:
Who?
How many?
How often?
How much?
But you need to decide early on what your indicators are going to be so that you can begin
collecting the information immediately. You cannot use the number of television aerials in a
community as a sign of improved standard of living if you don’t know how many there were
at the beginning of the process.
Some people argue that the problem with measuring indicators is that other variables (or
factors) may have impacted on them as well. Community members may be participating
more in meetings because a number of new people with activist backgrounds have come to
live in the area. Women may have more time for development projects because the men
of the village have been attending a gender workshop and have made a decision to share the
traditionally female tasks. And so on. While this may be true, within a project it is possible
to identify other variables and take them into account. It is also important to note that, if
nothing is changing, if there is no improvement in the measurement of the key indicators
identified, then your strategy is not working and needs to be rethought.
DEVELOPING INDICATORS
Step 1: Identify the problem situation you are trying to address. The following might be
problems:
Step 2: Develop a vision for how you would like the problem areas to be/look. (See the
toolkit on Strategic Planning, the section on vision.) This will give you impact indicators.
What will tell you that the vision has been achieved? What signs will you see that you can
measure that will “prove” that the vision has been achieved? For example, if your vision was
that the people in your community would be healthy, then you can use health indicators to
measure how well you are doing. Has the infant mortality rate gone down? Do fewer women
die during child-birth? Has the HIV/AIDS infection rate been reduced? If you can answer
“yes” to these questions then progress is being made.
Step 3: Develop a process vision for how you want things to be achieved. This will give you
process indicators.
If, for example, you want success to be achieved through community efforts and participation,
then your process vision might include things like community health workers from the
community trained and offering a competent service used by all; community organizes clean-
up events on a regular basis, and so on.
For example, if you believe that you can increase the secondary school pass rate by upgrading
teachers, then you need indicators that show you have been effective in upgrading the
teachers e.g. evidence from a survey in the schools, compared with a baseline survey.
Here you can set indicators such as: planned workshops are run within the stated timeframe,
costs for workshops are kept to a maximum of US$ 2.50 per participant, no more than 160
hours in total of staff time to be spent on organizing a conference; no complaints about
conference organisation etc.
With this framework in place, you are in a position to monitor and evaluate efficiency,
effectiveness and impact (see Glossary of Terms).
Indicators, and the ways you measure them, can be either quantitative or qualitative.
Quantitative measurement tells you “how much or how many”. How many people attended a
workshop, how many people passed their final examinations, how much a publication cost,
how many people were infected with HIV, how far people have to walk to get water or
firewood, and so on. Quantitative measurement can be expressed in absolute numbers (3 241
women in the sample are infected) or as a percentage (50% of households in the area have
television aerials). It can also be expressed as a ratio (one doctor for every 30 000 people).
One way or another, you get quantitative (number) information by counting or measuring.
Qualitative measurement tells you how people feel about a situation or about how things are
done or how people behave. So, for example, although you might discover that 50% of the
teachers in a school are unhappy about the assessment criteria used, this is still qualitative
information, not quantitative information. You get qualitative information by asking,
observing, interpreting.
Some people find quantitative information comforting – it seems solid and reliable and
“objective”. They find qualitative information unconvincing and “subjective”. It is a mistake
to say that “quantitative information speaks for itself”. It requires just as much interpretation
in order to make it meaningful as does qualitative information. It may be a “fact” that
enrolment of girls at schools in some developing countries is dropping – counting can tell us
that, but it tells us nothing about why this drop is taking place. In order to know that, you
would need to go out and ask questions – to get qualitative information. Choice of indicators
is also subjective, whether you use quantitative or qualitative methods to do the actual
measuring. Researchers choose to measure school enrolment figures for girls because they
believe that this tells them something about how women in a society are treated or viewed.
The monitoring and evaluation process requires a combination of quantitative and qualitative
information in order to be comprehensive. For example, we need to know what the school
enrolment figures for girls are, as well as why parents do or do not send their children to
school. Perhaps enrolment figures are higher for girls than for boys because a particular
community sees schooling as a luxury and prefers to train boys to do traditional and practical
tasks such as taking care of animals. In this case, the higher enrolment of girls does not
necessarily indicate higher regard for girls.
Your methods for information collecting need to be built into your action planning. You
should be aiming to have a steady stream of information flowing into the project or
organisation about the work and how it is done, without overloading anyone. The
information you collect must mean something: don’t collect information to keep busy, only
do it to find out what you want to know, and then make sure that you store the information in
such a way that it is easy to access.
Usually you can use the reports, minutes, attendance registers and financial statements that
are part of your work anyway as a source of monitoring and evaluation information.
However, sometimes you need to use special tools that are simple but useful to add to the
basic information collected in the natural course of your work. Some of the more common
ones are:
Case studies
Recorded observation
Diaries
Structured questionnaires
One-on-one interviews
Focus groups
Sample surveys
Almost everyone in the organisation or project will be involved in some way in collecting
information that can be used in monitoring and evaluation. This includes:
The administrator who takes minutes at a meeting or prepares and circulates the
attendance register;
participation in activities or women’s participation specifically, structure the
observations with facts. (Look at the fieldworker report format given later in this
toolkit.)
Record information in such a way that it is possible to work out what you need to
know. For example, if you need to know whether a project is sustainable financially,
and which elements of it cost the most, then make sure that your bookkeeping
records reflect the relevant information.
It is a useful principle to look at every activity and say: What do we need to know about this
activity, both process (how it is being done) and product (what it is meant to achieve), and
what is the easiest way to find it out and record it as we go along?
Designing a monitoring and/or evaluation process
As there are differences between the design of a monitoring system and that of an evaluation
process, we deal with them separately here.
Purpose
Methodology.
MONITORING
When you design a monitoring system, you are taking a formative view point and establishing
a system that will provide useful information on an ongoing basis so that you can improve
what you do and how you do it.
For a case study of how an organisation went about designing a monitoring system, go to the
section with examples, and the example given of designing a monitoring system.
Chapter 9
Below is a step-by-step process you could use in order to design a monitoring system for your
organisation or project.
Step 1: At a workshop with appropriate staff and/or volunteers, and run by you or a
consultant:
Introduce the concepts of efficiency, effectiveness and impact (see Glossary of Terms).
Clarify what variables (see Glossary of Terms) need to be linked. So, for example, do
you want to be able to link the age of a teacher with his/her qualifications in order to
answer the question: Are older teachers more or less likely to have higher
qualifications?
Step 2: Turn the input from the workshop into a brief for the questions your monitoring
system must be able to answer. Depending on how complex your requirements are, and what
your capacity is, you may decide to go for a computerized data base or a manual one. If you
want to be able to link many variables across many cases (e.g. participants, schools, parent
involvement, resources, urban/rural etc), you may need to go the computer route. If you have
a few variables, you can probably do it manually. The important thing is to begin by knowing
what variables you are interested in and to keep data on these variables. Linking and analysis
can take place later. (These concepts are complicated. It will help you to read the case study
in the examples section of the toolkit.)
From the workshop you will know what you want to monitor. You will have the
indicators of efficiency, effectiveness and impact that have been prioritised. You will then
choose the variables that will help you answer the questions you think are important.
So, for example, you might have an indicator of impact which is that “safer sex
options are chosen” as an indicator that “young people are now making informed and mature
lifestyle choices”. The variables that might affect the indicator include:
Age
Gender
Religion
Urban/rural
Economic category
Family environment
By keeping the right information you will be able to answer questions such as:
Does economic category make a difference, i.e. do young people in richer areas respond better
or worse to the message, or does it make no difference?
Answers to these kinds of questions enable a project or organisation to make decisions about
what they do and how they do it, to make informed changes to programmes, and to measure
their impact and effectiveness. Answers to questions such as:
Do more young people attend when sessions are over weekends or in the
evenings?
help with practical, day-to-day decisions about how activities are organized.
Step 3: Decide how you will collect the information you need (see collecting information)
and where it will be kept (on computer, in manual files).
Step 4: Decide how often you will analyze the information – this means putting it together
and trying to answer the questions you think are important.
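
To make Steps 2 to 4 a little more concrete for readers who keep their records electronically, below is a minimal Python sketch of one way to store each participant's variables alongside the indicator and then answer a linking question such as the economic-category one above. It is an illustration only, not part of the toolkit; the field names (such as "economic_category" and "chose_safer_sex") and the figures are hypothetical.

# A minimal sketch of a simple monitoring record kept as Python data.
# Field names and values are hypothetical illustrations of the variables
# and the indicator discussed above.

records = [
    {"age": 17, "gender": "F", "area": "urban", "economic_category": "higher", "chose_safer_sex": True},
    {"age": 19, "gender": "M", "area": "rural", "economic_category": "lower",  "chose_safer_sex": False},
    {"age": 16, "gender": "F", "area": "rural", "economic_category": "lower",  "chose_safer_sex": True},
    {"age": 18, "gender": "M", "area": "urban", "economic_category": "higher", "chose_safer_sex": True},
]

def indicator_rate(rows, **criteria):
    """Share of participants meeting the indicator, among those matching the criteria."""
    subset = [r for r in rows if all(r[k] == v for k, v in criteria.items())]
    if not subset:
        return None
    return sum(r["chose_safer_sex"] for r in subset) / len(subset)

# Linking a variable to the indicator: do participants in richer areas respond differently?
print("higher income:", indicator_rate(records, economic_category="higher"))
print("lower income: ", indicator_rate(records, economic_category="lower"))

However you store the data, the design point is the same: decide the variables up front, keep them against every case, and leave the linking and analysis for later.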
EVALUATION
Designing an evaluation process means being able to develop Terms of Reference for such a
process (if you are the project or organisation) or being able to draw up a sensible proposal to
meet the needs of the project or organisation (if you are a consultant).
The main sections in Terms of Reference for an evaluation process usually include:
Background: This is background to the project or organisation, something about the
problem identified, what you do, how long you have existed, why you have decided to
do an evaluation.
Purpose: Here you would say what it is the organisation or project wants the
evaluation to achieve.
Key evaluation questions: What the central questions are that the evaluation must
address.
Specific objectives: What specific areas, internal and/or external, you want the
evaluation to address. So, for example, you might want the evaluation to include a
review of finances, or to include certain specific programme sites.
Methodology: here you might give broad parameters of the kind of approach you
favor in evaluation (see the section on more about monitoring and evaluation). You
might also suggest the kinds of techniques you would like the evaluation team to use.
Logistical issues: These would include timing, costing, and requirements of team
composition and so on.
For more on some of the more difficult components of Terms of Reference, see the following
pages.
Purpose
The purpose of an evaluation is the reason why you are doing it. It goes beyond what you
want to know to why you want to know it. It is usually a sentence or, at most, a paragraph. It
has two parts:
To provide the organisation with information needed to make decisions about the future of the
project.
To assess whether the organisation/project is having the planned impact in order to decide
whether or not to replicate the model elsewhere.
To assess the programme in terms of effectiveness, impact on the target group, efficiency and
sustainability in order to improve its functioning.
The key evaluation questions are the central questions you want the evaluation process to
answer. They are not simple questions. You can seldom answer “yes” or “no” to them. A
useful evaluation question is thought-provoking and challenges assumptions.
The purpose of the evaluation is to assess how efficient the project is in delivering benefits to
the identified community in order to inform Board decisions about continuity and
replicability.
Do the inputs (in money and time) justify the outputs and, if so/if not, on what basis is
this claim justified?
What would improve the efficiency, effectiveness and impact of the current project?
What are the lessons that can be learned from this project in terms of replicability?
Note that none of these questions deals with a specific element or area of the internal or
external functioning of the project or organisation. Most would require the evaluation team to
deal with a range of project or organizational elements in order to answer them.
What are the most effective ways in which a project of this kind can address the
problem identified?
To what extent does the internal functioning and structure of the organisation impact
positively on the programme work?
What learnings from this project would have applicability across the full
development spectrum?
Clearly, there could be many, many examples. Our experience has shown us that, when an
evaluation process is designed with such questions in mind, it produces far more interesting
insights than simply asking obvious questions such as: Does the Board play a useful role in
the organisation? Or: What impact are we having?
Methodology
“Methodology” as opposed to “methods” deals more with the kind of approach you use in
your evaluation process. (See also more about monitoring and evaluation earlier in the
toolkit). You could, for example, commission or do an evaluation process that looked almost
entirely at written sources, primary or secondary: reports, data sheets, minutes and so on. Or
you could ask for an evaluation process that involved getting input from all the key
stakeholder groups. Most terms of reference will ask for some combination of these but they
may also specify how they want the evaluation team to get input from stakeholder groups, for
example:
Here too one would expect to find some indication of reporting formats: Will all reporting be
written? Will the team report to management, or to all staff, or to staff and Board and
beneficiaries? Will there be interim reports or only a final report? What sort of evidence
does the organisation or project require to back up evaluator opinions? Who will be involved
in analysis?
The methodology section of Terms of Reference should provide a broad framework for how
the project or organisation wants the work of the evaluation done.
Collecting Information
By damage control we mean what you need to do if you failed to get baseline information
when you started out.
Chapter 10
Ideally, if you have done your planning well and collected information about the situation at
the beginning of your intervention, you will have baseline data.
Baseline data is the information you have about the situation before you do anything. It is
the information on which your problem analysis is based. It is very difficult to measure the
impact of your initiative if you do not know what the situation was when you began it. (See
also the toolkit on overview of planning, the section on doing the ground work.) You need
baseline data that is relevant to the indicators you have decided will help you measure the
impact of your work.
General information about the situation, often available in official statistics e.g. infant
mortality rates, school enrolment by gender, unemployment rates, literacy rates and so
on. If you are working in a particular geographical area, then you need information
for that area. If it is not available in official statistics, you may need to do some
information gathering yourselves. This might involve house-to-house surveying,
either comprehensively or using sampling (see the section after this on methods), or
visiting schools, hospitals etc. Focus on your indicators of impact when you collect
this information.
If you have decided to measure impact through a sample of people or families with
whom you are working, you will need specific information about those people or
families. So, for example, for families (or business enterprises or schools or whatever
units you are working with) you may want specific information about income, history,
number of people employed, and number of children per classroom and so on. You
will probably get this information from a combination of interviewing and filling in of
basic questionnaires. Again, remember to focus on the indicators which you have
decided are important for your work.
If you are working with individuals, then you need “intake” information –
documented information about their situation at the time you began working with
them. For example, you might want to know, in addition to age, gender, name and so
on, current income, employment status, current levels of education, amount of money
spent on leisure activities, amount of time spent on leisure activities, ambitions and so
on, for each individual participant. Again, you will probably get the information from
a combination of interviewing and filling in of basic questionnaires, and you should
focus on the indicators which you think are important.
It is very difficult to go back and get this kind of baseline information after you have begun
work and the situation has changed. But what if you didn’t collect this information at the
beginning of the process? There are ways of doing damage control. You can get anecdotal
information (see Glossary of Terms) from those who were involved at the beginning and you
can ask participants if they remember what the situation was when the project began. You
may not even have decided what your important indicators are when you began your work.
You will have to work it out “backwards”, and then try to get information about the situation
related to those indicators when you started out. You can speak to people, look at records and
other written sources such as minutes, reports and so on.
One useful way of making meaningful comparisons where you do not have baseline
information is through using control groups. Control groups are groups of people,
businesses, families or whatever unit you are focusing on, that have not had input from your
project or organisation but are, in most other ways, very similar to those you are working
with.
For example: You have been working with groups of school children around the country in
order to build their self-esteem and knowledge as a way of combating the spread of
HIV/AIDS and preventing teenage pregnancies. After a few years, you want to measure what
impact you have had on these children. You are going to run a series of focus groups (see
methods) with the children at the schools where you have worked. But you did not do any
baseline study with them. How will you know what difference you have made?
You could set up control groups at schools in the same areas, with the same kinds of
profiles, where you have not worked. By asking both the children at those schools you have
worked at, and the children at the schools where you have not worked, the same sorts of
questions about self-esteem, sexual behavior and so on, you should be able to tell whether or
not your work has made any difference. When you set up control groups, you should try to
ensure that:
The profiles of the control groups are very similar to those of the groups you have
worked with. For example, it might be schools that serve the same economic group,
in the same geographical area, with the same gender ratio, age groups, ethnic or racial
mix.
There are no other very clear variables that could affect the findings or comparisons.
For example, if another project, doing similar work, has been involved with the
school, this school would not be a good place to establish a control group. You want
a situation as close as possible to the situation of your project’s beneficiaries
when you started out.
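
As a simple, hypothetical illustration of this comparison (not drawn from the toolkit itself), the short Python sketch below contrasts average scores from project schools and matched control schools. The scores and the scale are invented, and in practice you would also check that no other variables explain the difference.

# A minimal, hypothetical sketch of the control-group comparison described above.
# Scores are illustrative self-esteem ratings (say, 1-5) collected through the
# same questions at project schools and at matched control schools.

project_schools = [4, 5, 3, 4, 5, 4, 3, 5]   # children at schools where you worked
control_schools = [3, 2, 4, 3, 2, 3, 3, 2]   # children at matched schools where you did not

def average(scores):
    return sum(scores) / len(scores)

difference = average(project_schools) - average(control_schools)
print(f"Project schools average: {average(project_schools):.2f}")
print(f"Control schools average: {average(control_schools):.2f}")
print(f"Apparent difference associated with the project: {difference:.2f}")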
METHODS
In this section we are going to give you a “shopping list” of the different kinds of methods
that can be used to collect information for monitoring and evaluation purposes. You need to
select methods that suit your purposes and your resources. Do not plan to do a
comprehensive survey of 100 000 households if you have two weeks and very little money!
Use sampling in this case.
Sampling (see Glossary of Terms) is another important concept when using various tools for
a monitoring or evaluation process. Sampling is not really a tool in itself, but used with other
tools it is very useful. Sampling answers the question: Who do we survey, interview, include
in a focus group etc? It is a way of narrowing down the number of possible respondents to
make it manageable and affordable. Sometimes it is necessary to be comprehensive. This
means getting to every possible household, or school or teacher or clinic etc. In an
evaluation, you might well use all the information collected in every case during the
monitoring process in an overall analysis. Usually, however, unless numbers are very small,
for in-depth exploration you will use a sample. Sampling techniques include:
Random sampling (In theory random sampling means doing the sampling on a sort of
lottery basis where, for example all the names go into a container, are tumbled around
and then the required number are drawn out. This sort of random sampling is very
difficult to use in the kind of work we are talking about. For practical purposes you
are more likely to, for example, select every seventh household or every third person
on the list. The idea is that there is no bias in the selection.);
Stratified sampling (e.g. every seventh household in the upper income bracket, every
third household in the lower income bracket);
Cluster sampling (e.g. only those people who have been on the project for at least two
years).
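
For readers who keep their lists electronically, the following is a minimal Python sketch of the three techniques just listed. It is an illustration only, not part of the toolkit: the household list, the income-bracket split and the years-on-project figures are all invented.

# A minimal sketch of the sampling approaches listed above, using only the
# Python standard library. The household list and strata are hypothetical.
import random

households = [f"household_{i}" for i in range(1, 101)]  # e.g. a list of 100 households

# "Every seventh household" sampling with a random starting point, so that
# there is no bias in the selection.
start = random.randrange(7)
systematic_sample = households[start::7]

# Stratified sampling: different sampling intervals per income bracket.
upper = households[:30]   # assume the first 30 are upper income (illustrative)
lower = households[30:]
stratified_sample = upper[::7] + lower[::3]

# Cluster-style selection as in the document's example: only units meeting a
# criterion, here a hypothetical "years on project" lookup.
years_on_project = {h: random.randint(0, 5) for h in households}
cluster_sample = [h for h in households if years_on_project[h] >= 2]

print(len(systematic_sample), len(stratified_sample), len(cluster_sample))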
It is also usually best to use triangulation. This is a fancy word that means that one set of
data or information is confirmed by another. You usually look for confirmation from a
number of sources saying the same thing.
For each tool, the description, usefulness and disadvantages are summarized below.

Interviews
Description: These can be structured, semi-structured or unstructured (see Glossary of Terms). They involve asking specific questions aimed at getting information that will enable indicators to be measured. Questions can be open-ended or closed (yes/no answers). Can be a source of qualitative and quantitative information.
Usefulness: Can be used with almost anyone who has some involvement with the project. Can be done in person or on the telephone or even by e-mail. Very flexible.
Disadvantages: Requires some skill in the interviewer. For more on interviewing skills, see later in this toolkit.

Key informant interviews
Description: These are interviews that are carried out with specialists in a topic or someone who may be able to shed a particular light on the process.
Usefulness: As these key informants often have little to do with the project or organisation, they can be quite objective and offer useful insights. They can provide something of the “big picture” where people more involved may focus on detail.
Disadvantages: Needs a skilled interviewer with a good understanding of the topic. Be careful not to turn something into an absolute truth (cannot be challenged) because it has been said by a key informant.

Questionnaires
Description: These are written questions that are used to get written responses which, when analyzed, will enable indicators to be measured.
Usefulness: This tool can save lots of time if it is self-completing, enabling you to get to many people. Done in this way it gives people a feeling of anonymity and they may say things they would not say to an interviewer.
Disadvantages: With people who do not read and write, someone has to go through the questionnaire with them, which means no time is saved and the numbers one can reach are limited. With questionnaires, it is not possible to explore what people are saying any further.

Focus groups
Description: In a focus group, a group of about six to 12 people are interviewed together by a skilled interviewer/facilitator with a carefully structured interview schedule. Questions are usually focused around a specific topic or issue.
Usefulness: This can be a useful way of getting opinions from quite a large sample of people.
Disadvantages: It is quite difficult to do random sampling for focus groups and this means findings may not be generalised. Sometimes people influence one another either to say something or to keep quiet about something. If possible, focus group interviews should be recorded and then transcribed. This requires special equipment and can be very time-consuming.

Ranking
Description: This involves getting people to say what they think is most useful, most important, least useful etc.
Usefulness: It can be used with individuals and groups, as part of an interview schedule or questionnaire, or as a separate session. Where people cannot read and write, pictures can be used.
Disadvantages: Ranking is quite a difficult concept to get across and requires very careful explanation as well as testing to ensure that people understand what you are asking. If they misunderstand, your data can be completely distorted.

Visual/audio stimuli
Description: These include pictures, movies, tapes, stories, role plays, photographs, used to illustrate problems or issues or past events or even future events.
Usefulness: Very useful to use together with other tools, particularly with people who cannot read or write.
Disadvantages: You have to have appropriate stimuli and the facilitator needs to be skilled in using such stimuli.

Rating scales
Description: This technique makes use of a continuum, along which people are asked to place their own feelings, observations etc.
Usefulness: It is useful to measure attitudes, opinions and perceptions.
Disadvantages: You need to test the statements very carefully to ensure that there is no possibility of misunderstanding.

Critical event/incident analysis
Description: This method is a way of focusing interviews with individuals or groups on particular events or incidents. The purpose of doing this is to get a very full picture of what actually happened.
Usefulness: Very useful when something problematic has occurred and people feel strongly about it. If all those involved are included, it should help the evaluation team to get a picture that is reasonably close to what actually happened and to be able to diagnose what went wrong.
Disadvantages: The evaluation team can end up submerged in a vast amount of contradictory detail and lots of “he said/she said”. It can be difficult not to take sides and to remain objective.

Self-drawings
Description: This involves getting participants to draw pictures, usually of how they feel or think about something.
Usefulness: Can be very useful, particularly with younger children.
Disadvantages: Can be difficult to explain and interpret.
INTERVIEWING SKILLS
□ DO test the interview schedule beforehand for clarity, and to make sure questions cannot
be misunderstood.
□ DO ask if the interviewee minds if you take notes or tape record the interview.
□ DO watch for answers that are vague and probe for more information.
□ DO be flexible and note down everything interesting that is said, even if it isn’t on the
schedule.
□ DON’T show what you are thinking through changed tone of voice.
Analyzing information
Whether you are looking at monitoring or evaluation, at some point you are going to find
yourself with a large amount of information and you will have to decide how to make sense
of it or to analyze it. If you are using an external evaluation team, it will be up to this team to
do the analysis, but, sometimes in evaluation, and certainly in monitoring, you, the
organisation or project, have to do the analysis.
Analysis is the process of turning the detailed information into an understanding of patterns,
trends, interpretations. The starting point for analysis in a project or organizational context is
quite often very unscientific. It is your intuitive understanding of the key themes that come
out of the information gathering process. Once you have the key themes, it becomes possible
to work through the information, structuring and organizing it. The next step is to write up
your analysis of the findings as a basis for reaching conclusions, and making
recommendations.
Monitoring and evaluation have little value if the organisation or project does not act on the
information that comes out of the analysis of data collected. Once you have the findings,
conclusions and recommendations from your monitoring and evaluation process, you need
to:
Deal with resistance to the necessary changes within the organisation or project, or
even among other stakeholders.
REPORTING
Whether you are monitoring or evaluating, at some point, or points, there will be a reporting
process. This reporting process follows the stage of analyzing information. You will report
to different stakeholders in different ways, sometimes in written form, sometimes verbally
and, increasingly, making use of tools such as PowerPoint presentations, slides and videos.
Below is a table, suggesting different reporting mechanisms that might be appropriate for
different stakeholders and at different times in project cycles. For writing tips, go to the
toolkit on effective writing for organizations.
The choice of format depends on the target group and the stage of the project cycle. At the
evaluation stage, for example, an appropriate format is a written report with an Executive
Summary, together with a verbal presentation from the evaluation team.
For an outline of what would normally be contained in a written report, go to the following
page.
EXECUTIVE SUMMARY (Usually not more than five pages – the shorter the better –
intended to provide enough information for busy people, but also to whet people’s appetite so that they want to read the full report.)
PREFACE (Not essential, but a good place to thank people and make a broad comment
about the process, findings etc.)
CONTENTS PAGE (With page numbers, to help people find their way around the report.)
SECTION 1:
SECTION 2: FINDINGS (Here you would have sections dealing with the important areas of findings, e.g. efficiency, effectiveness and impact, or the themes that have emerged.)
SECTION 3: CONCLUSIONS (Here you would draw conclusions from the findings – the interpretation, what they mean. It is quite useful to use a SWOT Analysis – explained in the Glossary of Terms – as a summary here.)
SECTION 4: RECOMMENDATIONS (This would give specific ideas for a way forward in terms of addressing weaknesses and building on strengths.)
LEARNING
Learning is, or should be, the main reason why a project or organisation monitors its work or does an evaluation. By learning what works and what does not, what you are doing right and what you are doing wrong, you, as project or organisation management, are empowered to act in an informed and constructive way. This is part of a cycle of action-reflection. (See the diagram in the section on why do monitoring and evaluation?)
The purpose of learning is to make changes where necessary, and to identify and build on
strengths where they exist. Learning also helps you to understand, to make conscious,
assumptions you have. So, for example, perhaps you assumed that children at more affluent
schools would have benefited less from your intervention than those from less affluent
schools. Your monitoring data might show you that this assumption was wrong. Once you
realize this, you will probably view your interactions with these schools differently.
Being in a constant mode of action-reflection-action also helps to make you less complacent.
Sometimes, when projects or organizations feel they “have got it right”, they settle back and
do things the same way, without questioning whether they are still getting it right. They
forget that situations change, that the needs of project beneficiaries may change, and that
strategies need to be reconsidered and revised.
So, for example, an organisation provided training and programmes for community radio
stations. Because it had excellent equipment and an excellent production studio, it invited
stations to send presenters to its training centre for training in how to present the
programmes it (the organisation) was producing. It developed an excellent reputation for
high quality training and production. Over time, however, the community radio stations
began to produce their own programmes and what they really wanted was for the
organisation to send someone to their stations to help them workshop ideas and to give them
feedback on the work they were doing. This came out in an evaluation process and the organisation realized that it had become a bit smug in the comfort zone of what it was good
at, but that, if it really wanted to help community radio stations, it needed to change its
strategy.
Organizations and projects that don’t learn, stagnate. The process of rigorous (see Glossary
of Terms) monitoring and evaluation forces organizations and projects to keep learning -
and growing.
EFFECTIVE DECISION-MAKING
As project or organisation management, you need the conclusions and recommendations that
come out of monitoring and evaluation to help you make decisions about your work and the
way you do it.
The success of the process is dependent on the ability of those with management
responsibilities to make decisions and take action. The steps involved in the whole process
are:
1 Plan properly – know what you are trying to achieve and how you intend to achieve it.
2 Implement.
3 Monitor and evaluate your implementation.
4 Analyse the information you get from monitoring and evaluation and work out what it is telling you.
5 Look at the potential consequences to your plans of what you have learned from the analysis of your monitoring and evaluation data.
6 Work out what needs to be done and have clear motivations for why it needs to be done.
7 Look at the options critically in terms of which are likely to be the most effective.
8 Share adjustments and plans with the rest of the organisation and, if necessary, your donors and beneficiaries.
9 Get a mandate (usually from a Board, but possibly also from donors and beneficiaries) to do it.
10 Implement – do it.
Not everyone will be pleased about any changes in plans you decide need to be made.
People often resist change. Some of the reasons for this include:
People are comfortable with things the way they are – they don’t want to be pushed
out of their comfort zones.
People worry that any changes will lessen their levels of productivity – they feel
judged by what they do and how much they do, and don’t want to take the time out
necessary to change plans or ways of doing things.
People don’t like to rush into change – how do we know that something different will
be better? They spend so long thinking about it that it is too late for useful changes
to be made.
People don’t have a “big picture”. They know what they are doing and they can see
it is working, so they can’t see any reason to change anything at all.
People don’t have a long term commitment to the project or the organisation – they
see it as a stepping stone on their career path. They don’t want change because it will
delay the items they want to be able to tick off on their curricula vitae.
People feel they can’t cope – they have to keep doing what they are doing but also
work at bringing about change. It’s all too much.
Some ways of dealing with this resistance include:
Make the reasons why change is needed very clear – take people through the findings
and conclusions of the monitoring and evaluation processes, involve them in
decision-making.
Help people see the whole picture – beyond their little bit to the overall impact on the
problem analyzed.
Recognize anger, fear, and resistance. Listen to people; give them the opportunity to
express frustration and other emotions.
Find common ground – things that they also want to see changed.
Encourage a feeling that change is exciting, that it frees people from doing things that
are not working so they can try new things that are likely to work, that it releases
productive energy.
BEST PRACTICE
EXAMPLES OF INDICATORS
Please note that these are just examples – they may or may not suit your needs but they
should give you some idea of the kind of indicators you can use, especially for measuring
impact.
Employment, by occupation, by gender
Government employment
Per capita income
Death rate
Causes of death
Number of suicides
Causes of accidents
Number of homeless
Birth rate
Fertility rate
Rates of hospitalisation
Political/organisational Development Indicators
Chapter 11
What follows is a description of a process that a South African organisation called Puppets
against AIDS went through in order to develop a monitoring system which would feed into
monitoring and evaluation processes.
The main work of the organisation is presenting workshopped plays and/or puppet shows related to life-skills issues, especially those life skills to do with sexuality, at schools across the country. The organisation works with a range of age groups, with different “products”
(scripts) being appropriate at different levels.
Puppets against AIDS wanted to develop a monitoring and evaluation system that provided
useful information on the efficiency, effectiveness and impact of its operations. To this end,
it wanted to develop a database that:
Provided all the basic information the organisation needed about clients and services
given;
Produced reports that enabled the organisation to inform itself and other stakeholders, including donors, partners and even schools, about the impact of the work, and what affected the impact of the work.
The organisation made a decision to go for a computerised monitoring system. Much of the
day-to-day information needed by the organisation was already on a computerised database
(e.g. schools, regions, services provided and so on), but the monitoring system would require
a substantial upgrading and the development of database software specific to the
organisation’s needs. The organisation also made the decision to develop a system initially
for a pilot project, but with the intention of extending it to all the work over time. This pilot
project would work with about 60 schools, using different scripts each year, over a period of
three years. In order to raise the money needed for this process, Puppets against AIDS
needed some kind of a brief for what was required so that it could be costed.
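As an illustration of what such a computerised monitoring database might look like, the sketch below uses Python’s built-in sqlite3 module to set up a few tables. The table and column names (schools, visits, questionnaire_scores and so on) are hypothetical; they are not the organisation’s actual design, which was specified in its own brief.

    import sqlite3

    # A minimal, illustrative schema for a school-based monitoring database.
    # Table and column names are hypothetical, not the organisation's actual design.
    conn = sqlite3.connect("monitoring_pilot.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS schools (
        school_id   INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        region      TEXT
    );
    CREATE TABLE IF NOT EXISTS visits (
        visit_id    INTEGER PRIMARY KEY,
        school_id   INTEGER REFERENCES schools(school_id),
        visit_date  TEXT,           -- date the show or workshop was presented
        script_used TEXT            -- which script ("product") was used
    );
    CREATE TABLE IF NOT EXISTS questionnaire_scores (
        score_id    INTEGER PRIMARY KEY,
        school_id   INTEGER REFERENCES schools(school_id),
        stage       TEXT,           -- e.g. 'baseline' or 'follow-up'
        student_ref TEXT,           -- anonymised student identifier
        total_score INTEGER         -- aggregated questionnaire score
    );
    """)
    conn.commit()
    conn.close()

Linking day-to-day service records and questionnaire scores to the same school is what makes it possible to report on impact by school and by variable later on.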
At an initial workshop with staff, facilitated by consultants, the staff generated a list of
indicators for efficiency, effectiveness and impact, in relation to their work. These were the
things staff wanted to know from the system about what they did, how they did it, and what
difference it made. The terms were defined as follows:
Efficiency Here what needed to be assessed was how quickly, how correctly, how cost
effectively and with what use of resources the services of the organisation were offered.
Much of this information was already collected and was contained in reports which reflected
planning against achievement. It needed to be made “computer friendly”.
Effectiveness Here what needed to be assessed was getting results in terms of the strategy
and shorter-term impact. For example, were the puppet shows an effective means of
communicating messages about sexuality? Again, this information was already being
collected and just needed to be adapted to fit the computerised system.
Impact Here what needed to be assessed was whether the strategy worked in that it had an
impact on changing behaviour in individuals (in this case the students) and that that change
in behaviour impacted positively on the society of which the individuals are a part. The
organisation had a strong intuitive feeling that it was working, but wanted to be able to
measure this more scientifically and to be able to look at what variables made impact more
or less likely, or affected the degree of impact.
Staff generated a list of the different variables that they thought might be important in
assessing and accounting for differences of impact. The monitoring system would need to
link information on impact to these variables. The intention was to provide both qualitative
and quantitative information.
The consultants and a senior staff member then developed measurable indicators of impact and a tabulation of important variables, such as those to do with the school and school environment, the teacher profile, and the way in which the shows were presented.
Forms/questionnaires were developed to measure impact indicators before the first
intervention (to provide baseline information) and then at various points in the process, as
well as to categorise such concepts as “teacher profile”. The student questionnaire was designed in such a way as to make it possible to aggregate a score which could be compared when it was administered at different stages in the process. The
questionnaire took the form of a series of statements with which students were asked to
agree/disagree/strongly agree/strongly disagree etc. So, for example, statements to do with
an increase in student self-esteem included “When I look in a mirror, I like what I see”, and
“Most of the people I know like the real me”. The organisation indicated that it wanted the
system to generate reports that would enable it to know:
What difference is there between the indicator ratings on the impact objective at the
beginning and end of the process?
What difference is there between teacher attitudes at the beginning and end of the
process?
What variables to do with the school and school environment impact on the degree of
difference between indicators at the beginning and end of the process?
What variables to do with the way in which the shows are presented impact on the
degree of difference at the beginning and end of the process?
All this was written up as a brief which was given to software experts who then came up
with a system that would meet the necessary requirements. The process was slow and
demanding but eventually the system was in place and it is currently being tested.
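A minimal sketch of the scoring logic described above, assuming a simple four-point agree/disagree scale: the two statements are quoted from the case study, but the scale values, the sample responses and the function names are invented for illustration.

    # Illustrative scoring of a Likert-style self-esteem questionnaire.
    # Scale values and sample responses are invented; only the two statements
    # come from the case study above.
    SCALE = {"strongly disagree": 1, "disagree": 2, "agree": 3, "strongly agree": 4}

    STATEMENTS = [
        "When I look in a mirror, I like what I see",
        "Most of the people I know like the real me",
    ]

    def aggregate_score(responses):
        """Sum a student's scale values across all statements."""
        return sum(SCALE[responses[statement]] for statement in STATEMENTS)

    # Hypothetical baseline and follow-up responses for one student.
    baseline = {
        "When I look in a mirror, I like what I see": "disagree",
        "Most of the people I know like the real me": "agree",
    }
    follow_up = {
        "When I look in a mirror, I like what I see": "agree",
        "Most of the people I know like the real me": "strongly agree",
    }

    change = aggregate_score(follow_up) - aggregate_score(baseline)
    print(f"Change in aggregate self-esteem score: {change:+d}")

Comparing the aggregated score at baseline and at later stages is what allows the kind of before-and-after reporting the organisation asked for.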
The field visit report format shown below was used by an early childhood development learning centre to measure the following indicators in the informal schools with which it worked:
Records up-to-date.
Payments up-to-date.
Attendance at committee meetings.
________________________________________________________________________
CARE-ED FIELD VISIT REPORT
Date:
Name of school:
1. List the skills used by the teachers in the time period of your visit to the school:
2.
4. Record-keeping assessment:
   Bookkeeping
   Petty cash
   Filing
   Correspondence
   Stock control
   Registers
5.
Glossary
Activities What a program does with its inputs. Examples are construction of a kindergarten,
computer training for youth, counseling of women, raising public awareness regarding
childhood diseases, etc. Program activities result in outputs.
Background The contextual information that describes the reasons for the project, including
its goals, objectives, and stakeholders’ information needs.
Baseline data A baseline study is the analysis describing the situation prior to the
implementation of the project, which is used to determine the results and accomplishments
of an activity, and which serves as an important reference for the summative evaluation.
Case study An intensive, detailed description and analysis of a single project, program, or
instructional material in the context of its environment. Study based on a small number of
“typical” examples. Results provide in-depth review of the case but are not statistically
reliable.
Context (of an evaluation) The combination of factors accompanying the study that may
have influenced its results, including geographic location, timing, political and social
climate, economic conditions, and other relevant professional activities in progress at the
same time.
Data Information. The term "data" often describes information stored in numerical form.
Hard data is precise numerical information. Soft data is less precise verbal information. Raw
data is the name given to survey information before it has been processed and analyzed.
Data collection method The way facts about a program and its outcomes are gathered. Data
collection methods often used in program evaluations include literature search, file review,
natural observations, surveys, expert opinion, case studies, etc.
Development objective The ultimate and long-term objective of the development impact, which is expected to be attained after the project purpose is achieved.
Direct beneficiaries Usually institutions and/or individuals who are the direct recipients of
technical cooperation aimed at strengthening their capacity to undertake development tasks
that are directed at specific target groups. In micro-level interventions, the direct
beneficiaries and the target groups are the same.
Evaluation design The logical model or conceptual framework and the methods used to
collect information, analyze data and arrive at conclusions.
Finding Factual statement about the program or project based on empirical evidence
gathered through monitoring and evaluation activities.
Focus group A small group selected for its relevance to an evaluation that is engaged by a
trained facilitator in a series of discussions designed for sharing insights, ideas, and
observations on a topic of concern to the evaluation.
Impact The positive and negative changes produced by a program or a component, directly
or indirectly, intended or unintended.
Inputs The funds, personnel, materials, etc., necessary to produce the intended outputs of
development activities.
Lesson learned Learning from experience that is applicable to a generic situation rather than
to a specific circumstance.
Key informant Person carefully chosen for interview because of his/her special knowledge
of some aspect of the target population.
Logical framework approach A tool for development planning and monitoring applied by
some donor agencies.
Objective Purpose or goal representing the desired result that a program or project seeks to
achieve. A development objective is a long-term goal that a program or project aims to
achieve in synergy with other development interventions. An immediate objective is a
short-term purpose of a program or project.
Outcome indicators The specific items of information that track a program's success on
outcomes. They describe observable, measurable characteristics or changes that represent
achievement of an outcome.
Outcomes Results of a program or project relative to its immediate objectives that are
generated by the program or project outputs. Examples: increased rice yield, increased
income for the farmers.
Outputs The planned results that can be guaranteed with high probability as a consequence
of development activities/inputs. They are the direct results of program activities.
Program A group of related projects or services directed toward the attainment of specific
(usually similar or related) objectives.
A time-bound intervention that differs from a project in that it usually cuts across sectors,
themes and/or geographic areas, involves more institutions than a project, and may be
supported by different funding sources.
Project A planned undertaking designed to achieve certain specific objectives within a given
budget and within a specified period of time.
Project document A document that explains in detail the context, objectives, expected
results, inputs, risks and budget of a project.
Recommendations Suggestions for specific actions derived from analytic approaches to the
program components.
Relevance The degree to which the rationale and objectives of an activity are, or remain,
valid, significant and worthwhile, in relation to the identified priority needs and concerns.
Reliability A measurement is reliable to the extent that, when repeatedly applied to a given
situation, it consistently produces the same results if the situation does not change between
the applications. Reliability can refer to the stability of the measurement over time or the
consistency of the measurement from place to place.
Results A broad term used to refer to the effects of a program or project. The terms
"outputs", "outcomes" and "impact" describe more precisely the different types of results.
Stakeholders Groups that have a role and interest in the objectives and implementation of a
program or project. They include target groups, direct beneficiaries, those responsible for
ensuring that the results are produced as planned, and those that are accountable for the
resources that they provide to that program or project.
A person, group, organization or other body who has a “stake” in the area or field where
interventions and assistance are directed. Target groups are always stakeholders, whereas
other stakeholders are not necessarily target groups.
Structured interview An interview in which the interviewer asks questions from a detailed
guide that contains the questions to be asked and the specific areas for probing.
Subjective data Observations that involve personal feelings, attitudes and perceptions.
Subjective data can be quantitatively or qualitatively measured.
Sustainability Durability of positive program or project results after the termination of the
technical cooperation channeled through that program or project. Static sustainability is the
continuous flow of the same benefits, set in motion by the completed program or project, to
the same target groups. Dynamic sustainability is the use or adaptation of program or project
results to a different context or changing environment by the original target groups and/or
other groups.
Sustainability factors Six areas of particular importance to ensure that aid interventions are
sustainable, i.e. institutional, financial and economic, technological, environmental,
sociocultural, and political.
Target groups The main stakeholders of a program or project that are expected to gain from
the results of that program or project. Sectors of the population that a program or project
aims to reach in order to address their needs based on gender considerations and their
socioeconomic characteristics.
Terms of Reference (ToR) Action plan describing objectives, results, activities and
organization of a specific endeavor. Most often used to describe technical assistance, study
assignments, or evaluations.
ASSIGNMENT
1. Giving examples, differentiate between Monitoring and Evaluation.
2. Why is a baseline survey an important part of Project Management?
3. Distinguish between summative and formative evaluation methods, with examples.
4. Monitoring and evaluation uses both qualitative and quantitative methods to measure the success and impact of projects. However, economists and statisticians often adopt a one-sided (quantitative) method to analyze the results.
   a) Identify the potential dangers of a one-sided monitoring system.
   b) Critically analyze the quantitative method often employed by economists and statisticians in monitoring and evaluating development projects.
5. a) Define the Logical Framework.
   b) Define and explain the key components of the Logical Framework.