
2018

DIPLOMA IN MONITORING AND EVALUATION

MODULE 1

Module one of the Diploma in Monitoring and Evaluation
TABLE OF CONTENTS
Introduction
Monitoring and Evaluation as an Integral Component of Project Planning and Implementation
Evaluation Types and Models
Monitoring and Evaluation Methods and Tools
Monitoring and Evaluation Planning, Design and Implementation
Data Analysis and Report Writing
Why Monitoring and Evaluation
Putting Planning, Monitoring and Evaluation Together: Results-Based Management
Designing a Monitoring System
Baseline and Damage Control
Designing a Monitoring System: Case Study
Glossary
MONITORING AND EVALUATION

Module 1:

What is evaluation?
There are many definitions of evaluation in the literature and on websites. For the purpose of this
guide, we will define evaluation as a structured process of assessing the success of a project in
meeting its goals and of reflecting on the lessons learned. An evaluation should be structured so
that there is some thought and intent as to what is to be captured, how best to capture it, and
what the analysis of the captured data will tell us about our program.

Another term that is widely used is monitoring. Monitoring refers to setting targets and
milestones to measure progress and achievement, and to check whether the inputs are producing
the planned outputs. In other words, monitoring checks whether the project is being implemented
consistently with its design.

The key difference between monitoring and evaluation is that evaluation is about placing a
value judgment on the information gathered during a project, including the monitoring data.
The assessment of a project’s success (its evaluation) can be different based on whose value
judgment is used. For example, a project manager’s evaluation may be different to that of the
project’s participants, or other stakeholders.

Monitoring and Evaluation: Basic Concepts and Definitions

1.1 Introduction

Monitoring and evaluation (M&E) of development projects are increasingly recognized as
indispensable management functions. For many years, M&E of development projects in Africa
received little attention. Some of the main constraints and problems that hampered M&E in
development projects include: weak interest in and commitment to the evaluation function by
both donors and African civil society organizations; a weak culture of carrying out, sharing,
discussing and using the results of evaluation activities among African NGOs and donors; a
relative shortage of professional evaluation experts (in comparison with researchers, trainers,
etc.); insufficient technical resources; limited monetary allocation to M&E work by donors;
limited training opportunities in evaluation; and a shortage of trained staff.

The last few years have witnessed an increased interest in strengthening project M&E by
donors and African civil society organizations. More African nonprofit and civil society
organizations are interested in strengthening their M&E capacity. This document reviews the
nature of program M&E, presents basic concepts, principles, tools and methods of M&E,
reviews the process of planning and implementing effective M&E processes for nonprofit
programs, and suggests ways for using M&E results. Many of the principles presented in this
document are also applicable to “for-profit” organizations.

There are many reasons why development project staff and managers of civil society
organizations should know about M&E. First, knowledge about M&E helps project staff to
improve their ability to effectively monitor and evaluate their projects, and therefore,
strengthen the performance of their projects. We should remember that project staff need not
be evaluation experts in order to monitor their projects; with basic orientation and training,
project staff can implement appropriate techniques to carry out a useful evaluation. Second,
program evaluations carried out by inexperienced persons might be time-consuming and costly,
and could generate impractical or irrelevant information. Third, if development organizations
are to recruit an external evaluation expert, they should be smart consumers: aware of standards,
and knowing what to look for and require in this service.

1.2 The Need for Monitoring and Evaluation

There are many reasons for carrying out project M&E.

• Project managers and other stakeholders (including donors) need to know the extent to
which their projects are meeting their objectives and leading to their desired effects.

• M&E build greater transparency and accountability in terms of use of project resources.

• Information generated through M&E provides project staff with a clearer basis for
decision-making.

• Future project planning and development is improved when guided by lessons learned
from project experience.

1.3 Project Monitoring

Monitoring represents an ongoing activity to track project progress against planned tasks. It
aims at providing regular oversight of the implementation of an activity in terms of input
delivery, work schedules, targeted outputs, etc. Through such routine data gathering, analysis
and reporting, program/project monitoring aims at:

1) Providing project management, staff and other stakeholders with information on whether
progress is being made towards achieving project objectives. In this regard, monitoring
represents a continuous assessment of project implementation in relation to project plans,
resources, infrastructure, and use of services by project beneficiaries.

2) Providing regular feedback to enhance the ongoing learning experience and to improve the
planning process and effectiveness of interventions.

3) Increasing project accountability with donors and other stakeholders.

4) Enabling managers and staff to identify and reinforce initial positive project results,
strengths and successes. As well, monitoring alerts managers to actual and potential
project weaknesses, problems and shortcomings before it is too late. This would
provide managers with the opportunity to make timely adjustments and corrective
actions to improve the program/project design, work plan and implementation
strategies.

5) Checking on conditions or situations of a target group, and changes brought about by
project activities. In this regard, monitoring assists project management to check whether the
project continues to be relevant to the target group and/or geographical area, and whether
project assumptions are still valid.

Monitoring actions must be undertaken throughout the lifetime of the project. Ad hoc
evaluation research might be needed when unexpected problems arise for which planned
monitoring activities cannot generate sufficient information, or when socio-economic or
environmental conditions change drastically in the target area.

Effective monitoring needs adequate planning, baseline data, indicators of performance and
results, and practical implementation mechanisms that include actions such as field visits,
stakeholder meetings, documentation of project activities, regular reporting, etc. Project
monitoring is normally carried out by project management, staff and other stakeholders.

1.4 Project Evaluation

Program/project evaluation represents a systematic and objective assessment of ongoing or
completed projects or programs in terms of their design, implementation and results. In
addition, evaluations usually deal with strategic issues such as program/project relevance,
effectiveness and efficiency (expected and unexpected) in the light of specified objectives, as
well as program/project impact and sustainability. Those terms are described in detail in the
following sections and in the glossary.

Periodic evaluations of ongoing projects are conducted to review implementation progress,
predict the project's likely effects and highlight necessary adjustments in project design.
Terminal evaluations (or final evaluations) are carried out at the end of a project to provide an
overall assessment of project performance and effects/impact, as well as to assess the extent to
which the project has succeeded in meeting its objectives and the likely sustainability of its
results.

There are many reasons for conducting an evaluation, including:

1) Providing managers with information regarding project performance. Project plans might
change during the implementation process. Evaluations can verify whether the program is
really running as originally planned. In addition, they provide signs of project strengths and
weaknesses, and therefore enable managers to improve future planning, delivery of services
and decision-making.

2) Assisting project managers, staff and other stakeholders to determine in a systematic and
objective manner the relevance, effectiveness and efficiency of activities (expected and
unexpected) in light of specified objectives.

3) Mid-term evaluations may serve as a means of validating the results of initial assessments
obtained from project monitoring activities.

4) If conducted after the termination of a program/project, an evaluation determines the extent
to which the interventions were successful in terms of their impact and the sustainability of
results.

5) Assisting managers to carry out a thorough review and re-thinking about their
projects in terms of their goals and objectives, and means to achieve them.

6) Generating detailed information about the project implementation process and results. Such
information can be used for public relations, fundraising and promotion of services in the
community, as well as identifying possibilities for project replication.

7) Improving the learning process. Evaluations often document and explain why activities
succeeded or failed. Such documentation can help make future activities more relevant and
effective.

As in monitoring, evaluation activities must be planned at the program/project level. Baseline
data and appropriate indicators of performance and results must be established.

Evaluation goals and objectives should be determined by project management and staff. Many
organizations do not have the in-house resources to carry out the ideal evaluation. It is therefore
preferable to recruit an external evaluation consultant to lead the evaluation process; this also
increases the objectivity of the evaluation. Project strengths and weaknesses might not be
interpreted fairly when data and results are analyzed by project staff members who are
responsible for ensuring that the program is successful.

If the organization does not have the technical expertise to carry out the evaluation but cannot
afford to commission a full external evaluation, or prefers to carry out the evaluation using its
own resources, it is recommended to engage an experienced evaluation expert to advise on
developing the evaluation plan, selecting evaluation methods, and analyzing and reporting
results.

1.5 Relationship between Monitoring and Evaluation

Monitoring and evaluation are two different management tools that are closely related,
interactive and mutually supportive. Through routine tracking of project progress, monitoring
can provide quantitative and qualitative data useful for designing and implementing project
evaluation exercises. On the other hand, evaluations support project monitoring. Through the
results of periodic evaluations, monitoring tools and strategies can be refined and further
developed.

Some might argue that good monitoring can substitute for project evaluations. This might be
true in small-scale or short-term projects, or when the main objective of M&E is to obtain
information to improve the implementation process of an ongoing project. However, when a
final judgment regarding project results, impact, sustainability and future development is
needed, an evaluation must be conducted.

Project evaluations are less frequent than monitoring activities, considering their costs and
time needed.

The following table provides a comparison between monitoring and evaluation:

Frequency
  Monitoring: Periodic, regular
  Evaluation: Episodic

Main action
  Monitoring: Keeping track / oversight
  Evaluation: Assessment

Basic purpose
  Monitoring: Improving efficiency; adjusting the work plan
  Evaluation: Improving effectiveness, impact and future programming

Focus
  Monitoring: Inputs/outputs, processes, outcomes, work plans
  Evaluation: Effectiveness, relevance, impact, cost-effectiveness

Information sources
  Monitoring: Routine systems, field observations, progress reports, rapid assessments
  Evaluation: The same, plus surveys/studies

Undertaken by
  Monitoring: Project managers, community workers, supervisors, community (beneficiaries), funders
  Evaluation: Program managers, supervisors, funders, external evaluators, community (beneficiaries)

Chapter 2

MONITORING AND EVALUATION AS AN INTEGRAL COMPONENT OF THE PROJECT
PLANNING AND IMPLEMENTATION PROCESS

Monitoring and evaluation are integral components of the program/ project management
cycle. Used at all stages of the cycle, monitoring and evaluation can help to strengthen
project design, enrich quality of project interventions, improve decision-making, and enhance
learning. Likewise, the strength of project design can improve the quality of monitoring
and evaluation. It is important to remember that poorly designed projects are hard to monitor
or evaluate. The following section summarizes the logical framework approach to project
planning, implementation, and monitoring and evaluation.

The Logical Framework Approach to Project Design, Implementation and Evaluation

The logical framework approach provides a structure for logical thinking in project design,
implementation and monitoring and evaluation. It makes the project logic explicit, provides
the means for a thorough analysis of the needs of project beneficiaries and links project
objectives, strategies, inputs, and activities to the specified needs. Furthermore, it indicates
the means by which project achievement may be measured.

The detailed description of the processes of designing a program/project using the logical
framework is beyond the scope of this report. However, the following section provides a
summary of the milestones and main concepts and definitions:

Problem analysis represents the first step in project design. It is the process through which
stakeholders identify and analyze the problem(s) that the project is trying to overcome. The
result of this analysis is usually summarized in a tree diagram that links problems with their
causes.

• Next, project goals and objectives are developed and structured in a hierarchy
to match the analysis of problems. They can be represented as a mirror image
of the problem tree diagram. While projects are usually designed to address
long-term sectoral or national goals, objectives are specific to the project
interventions. They should also be clear, realistic in the timeframe for their
implementation and measurable for evaluation. Examples: school dropouts (in
a geographical area or for a target group) will be reduced by 10% (within a
specific timeframe); agricultural products (in a geographical area or for a
target group) will be increased by 15% (within a specific timeframe), etc.

• Outputs are the immediate physical and financial results of project activities.
Examples: kilometers of agricultural roads constructed, number of schools
renovated, number of farmers who attended a training course, number of
textbooks printed, etc.

• Activities and inputs are developed to produce the outputs that will result in
achieving project objectives.

The product of this analytical approach is usually summarized in a matrix called the logical
frame matrix, which summarizes what the project intends to do and how, what kind of effects
are expected, what the project key assumptions are, and how outputs and outcomes will be
monitored and evaluated (see below).

The rows of the logical frame matrix represent the levels of project objectives (hierarchy of
objectives) and the means to achieve them. Each lower level of activity must contribute to the
achievement of the level above it. For example, the implementation of project activities
contributes to the achievement of project outputs, and the achievement of the project outputs
leads to the achievement of project objectives. This is called the vertical logic. The columns
indicate how the achievement of objectives can be measured and verified. This is called the
horizontal logic. Assumptions (conditions needed for the implementation of the project to
succeed) must be systematically recorded.

The Logical Frame Matrix Structure


Goal: The broader development impact to which the project/program contributes, at a national
and/or sectoral level.
  Indicators: Measures of the extent to which a contribution to the goal has been made. Used
  during evaluation.
  Means of verification (MOV): Sources of information and methods used to collect and report it.

Purpose: The development outcome expected at the end of the project. All components will
contribute to this.
  Indicators: Conditions at the end of the project indicating that the purpose has been achieved.
  Used for project completion and evaluation.
  Means of verification (MOV): Sources of information and methods used to collect and report it.
  Assumptions: Assumptions concerning the purpose/goal linkage.

Component Objectives: The expected outcome of producing each component's outputs.
  Indicators: Measures of the extent to which component objectives have been achieved. Used
  during review and evaluation.
  Means of verification (MOV): Sources of information and methods used to collect and report it.
  Assumptions: Assumptions concerning the component objective/purpose linkage.

Outputs: The direct measurable results (goods and services) of the project which are largely
under project management's control.
  Indicators: Measures of the quantity and quality of outputs and the timing of their delivery.
  Used during monitoring and review.
  Means of verification (MOV): Sources of information and methods used to collect and report it.
  Assumptions: Assumptions concerning the output/component objective linkage.

Activities: The tasks carried out to implement the project and deliver the identified outputs.
  Indicators: Implementation/work program targets. Used during monitoring.
  Means of verification (MOV): Sources of information and methods used to collect and report it.
  Assumptions: Assumptions concerning the activity/output linkage.

A brief description of the terminology is given below:

Project description provides a narrative summary of what the project intends to achieve and
how. It describes the means by which desired ends are to be achieved.

Goal refers to the sectoral or national objectives to which the project is designed to contribute,
e.g. increased incomes, improved nutritional status, reduced crime. It can also be described as
the expected impact of the project. The goal is thus a statement of intention that explains the
main reason for undertaking the project.
Purpose refers to what the project is expected to achieve in terms of development outcome.
Examples might include increased agricultural production, higher immunization coverage,
cleaner water, or improved local management systems and capacity. There should generally
be only one purpose statement.

Component Objectives. Where the project/program is relatively large and has a number of
components, it is useful to give each component an objective statement. These statements
should provide a logical link between the outputs of that component and the project purpose.
Poorly stated objectives limit the capacity of M&E to provide useful assessments for
decision-making, accountability and learning purposes.

Outputs refer to the specific results and tangible products (goods and services) produced by
undertaking a series of tasks or activities. Each component should have at least one
contributing output, and often has up to four or five. The delivery of project outputs should be
largely under project management's control.

Activities refer to all the specific tasks undertaken to achieve the required outputs. There are
many tasks and steps to achieve an output. However, the logical frame matrix should not
include too much detail on activities because it becomes too lengthy. If detailed activity
specification is required, this should be presented separately in an activity schedule/Gantt
chart format and not in the matrix itself.

Inputs refer to the resources required to undertake the activities and produce the outputs, e.g.,
personnel, equipment and materials. The specific inputs should not be included in the matrix
format.

Assumptions refer to conditions which could affect the progress or success of the project, but
over which the project manager has no direct control, e.g. price changes, rainfall, political
situation, etc. An assumption is a positive statement of a condition that must be met in order
for project objectives to be achieved. A risk is a negative statement of what might prevent
objectives being achieved.

Indicators refer to the information that helps us determine progress towards meeting project
objectives. An indicator should provide, where possible, a clearly defined unit of measurement
and a target detailing the quantity, quality and timing of expected results. Indicators should be
relevant, independent and capable of being precisely and objectively defined, so that they can
demonstrate whether the objectives of the project have been achieved (see below).

Means of verification (MOVs). Means of verification should clearly specify the expected
source of the information we need to collect. We need to consider how the information will be
collected (method), who will be responsible, and the frequency with which the information
should be provided. In short, MOVs specify the means to ensure that the indicators can be
measured effectively, i.e. the specification of the indicators, types of data, sources of
information, and collection techniques.
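To show how these elements fit together for a single row of the matrix, the sketch below (in Python; the output, indicators, sources and assumption are hypothetical and not taken from this module) records one logframe row as a simple data structure, keeping its description, indicators, means of verification and assumptions together:

# A minimal sketch of one logframe row as a data structure (hypothetical content).
output_row = {
    "level": "Output",
    "description": "120 primary-school teachers trained in participatory methods",
    "indicators": [
        "Number of teachers completing the full course by month 9",
        "Share of trained teachers rated competent in a classroom observation",
    ],
    "means_of_verification": [
        "Training attendance registers (project records, monthly)",
        "Classroom observation checklists (external trainer, end of course)",
    ],
    "assumptions": [
        "Trained teachers remain posted in the target schools",
    ],
}

# Print the row in a readable form.
for field, value in output_row.items():
    print(f"{field}: {value}")

Keeping each row in a form like this makes it easy to check, during M&E planning, that every indicator has a matching means of verification.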

Link between the Logical Frame and Monitoring and Evaluation

The horizontal logic of the matrix helps establish the basis for monitoring and
evaluating the project by asking how outputs, objectives, purpose and goal can be
measured, and what the suitable indicators are. The following table summarizes the link
between the logical frame and monitoring and evaluation.

Logical frame hierarchy      Type of monitoring and evaluation activity      Indicators

Goal                         Ex-post evaluation                              Impact indicators
Purpose                      Program review                                  Outcome indicators
Component Objectives         Periodic and final evaluation                   Outcome indicators
Outputs                      Monitoring / periodic evaluation                Output indicators
Activities/Inputs            Monitoring                                      Output indicators

It is worth noting that the above table represents a simplified framework and should be
interpreted in a suitably flexible manner. For example, ex-post evaluation assesses whether or
not the purpose, component objectives and outputs have been achieved. Project/program
reviews are concerned with performance in output delivery and the extent of achieving
objectives.

Indicators

Indicators provide the quantitative and qualitative details of a set of objectives. In addition,
they provide evidence of the progress of program or project activities towards the attainment of
development objectives. Indicators should be pre-established, i.e. defined during the project
design phase. When a direct measure is not feasible, indirect or proxy indicators may be used.

Indicators should be directly linked to the level of assessment (e.g. output indicators, outcome
indicators or impact indicators). Output indicators show the immediate physical and financial
outputs of the project. Early indications of impact (outcomes) may be obtained by surveying
beneficiaries' perceptions about project services. Impact refers to long-term developmental
change. Measures of change often involve complex statistics about economic or social welfare
and depend on data that are gathered from beneficiaries.

Indicators should also be clearly phrased to specify the expected change in a situation within a
geographical location, time frame, target group, etc. A popular acronym for remembering the
characteristics of good indicators is SMART.

S: Specific

M: Measurable

A: Attainable (realistically achievable)

R: Relevant (reflects changes in the situation)

T: Trackable (can be tracked over a specific period of time)
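To make this concrete, the small sketch below (in Python, with hypothetical figures not taken from this module) shows how a SMART indicator with a baseline, a target and a deadline can be tracked: progress at any monitoring point is the share of the planned change achieved so far.

# Hypothetical SMART indicator: "Reduce the school dropout rate in District X
# from 20% to 18% (a 10% relative reduction) by the end of the project."

def progress_towards_target(baseline, target, current):
    """Share of the planned change achieved so far (0.0 = none, 1.0 = target met)."""
    planned_change = baseline - target      # 20.0 - 18.0 = 2.0 percentage points
    achieved_change = baseline - current    # 20.0 - 19.2 = 0.8 percentage points
    return achieved_change / planned_change

baseline_rate = 20.0   # dropout rate (%) found by the baseline survey
target_rate = 18.0     # planned rate at project completion
current_rate = 19.2    # rate observed at a mid-term monitoring point

print(f"Progress towards target: {progress_towards_target(baseline_rate, target_rate, current_rate):.0%}")
# Prints: Progress towards target: 40%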

Notes and Comments:

1. Classifying project objectives into different levels means that management will need to
develop systems to provide information at all levels, from basic accounting through
sophisticated studies, in order to measure project outcomes.

2. There are different means for measuring project indicators:

• Input indicators can be provided from management and accounting records. Input
indicators are used mainly by managers closest to the tasks of implementation.

• Output indicators are directly linked to project activities and inputs. Management of the
project has control of project activities and their direct results or outputs. Those outputs
can be verified through internal record keeping and analysis.

• By contrast, the achievement of project objectives normally depends on a number of
factors. Some might be controlled by the project; others cannot. For example, the
response of beneficiaries to project services is beyond the control of the project.
Responses of beneficiaries regarding benefits brought to them by the project require
consultation and data collection.

• Project outcomes are often measured through the assessment of indicators that focus on
whether beneficiaries have access to project services, their level of usage and their
satisfaction with services. Such evidence can also be provided easily and accurately
through impact research, e.g. changes in health status or improvements in income.

3. Exogenous indicators focus on general social, economic and environmental factors that are
out of the control of the project but which might affect its outcome. Those factors might
include the performance of the sector in which the project operates. Gathering data on
project indicators and the wider environment places an additional burden on the project's
M&E effort.

4. The importance of particular indicators can change during project implementation. For
example, at an early stage of the project, monitoring and evaluation focus on input and
process indicators; emphasis shifts later to outputs and impact. In other words, emphasis is
first placed on indicators of implementation progress, and later on indicators of
development results.

M&E designers should examine existing record keeping and reporting procedures used by the
project authorities in order to assess the capacity to generate the data that will be needed.

5. Some impact indicators, such as mortality rates or improvements in household income, are
hard to attribute to the project in a cause-effect relationship. In general, the higher the
objective, the more difficult the cause-effect linkages become. Project impact will almost
certainly be the result of a variety of factors, including the project itself. In such situations,
the evaluation team might use comparisons with the situation before the project (baseline
data), or with areas not covered by the project (see the illustrative sketch after these notes).

6. To maximize the benefits of M&E, the project should develop mechanisms to incorporate
the findings, recommendations and lessons learned from evaluations into the various
phases of the program or project cycle.
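As a hedged illustration of the comparisons mentioned in note 5, the sketch below (in Python, with invented figures) combines a before/after comparison in the project area with a with/without comparison against a similar area not covered by the project. The resulting "double difference" nets out change that would probably have happened anyway:

# Hypothetical before/after and with/without comparison (all figures invented).
project_area    = {"baseline": 420.0, "endline": 520.0}   # e.g. average household income
comparison_area = {"baseline": 410.0, "endline": 450.0}   # similar area without the project

change_project    = project_area["endline"] - project_area["baseline"]        # +100
change_comparison = comparison_area["endline"] - comparison_area["baseline"]  # +40

# The double difference is a rough estimate of the change attributable to the project,
# assuming the two areas would otherwise have developed in the same way.
double_difference = change_project - change_comparison                        # +60

print(f"Change in project area:    {change_project:+.0f}")
print(f"Change in comparison area: {change_comparison:+.0f}")
print(f"Estimated project effect:  {double_difference:+.0f}")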

Chapter 3

EVALUATION TYPES AND MODELS

3.1 Overview of Types of Evaluations

Program evaluations are carried out at different stages of project planning and implementation.
They can include many types of evaluations (needs assessments, accreditation, cost/benefit
analysis, effectiveness, efficiency, formative, summative, goal-based, process, outcomes, etc.).
The type of evaluation you undertake to improve your programs depends on what you want to
learn about the program.

In general, there are two main categories of evaluations of development projects:

Formative evaluations (process evaluations) examine the development of the project and
may lead to changes in the way the project is structured and carried out. Those types of
evaluations are often called interim evaluations. One of the most commonly used formative
evaluations is the midterm evaluation.

In general, formative evaluations are process oriented and involve a systematic collection of
information to assist decision-making during the planning or implementation stages of a
program. They usually focus on operational activities, but might also take a wider perspective
and possibly give some consideration to long-term effects. While staff members directly
responsible for the activity or project are usually involved in planning and implementing
formative evaluations, external evaluators might also be engaged to bring new approaches or
perspectives. Questions typically asked in those evaluations include:

• To what extent do the activities and strategies correspond with those presented in the
plan? If they are not in harmony, why are there changes? Are the changes justified?

• To what extent did the project follow the timeline presented in the work plan?

• Are activities carried out by the appropriate personnel?

• To what extent are project actual costs in line with initial budget allocations?

• To what extent is the project moving toward the anticipated goals and objectives of
the project?

• Which of the activities or strategies are more effective in moving toward achieving
the goals and objectives?

• What barriers were identified? How and to what extent were they dealt with?

• What are the main strengths and weaknesses of the project?

• To what extent are beneficiaries of the project active in decision-making and
implementation?

• To what extent do project beneficiaries have access to services provided by the
project? What are the obstacles?

• To what extent are the project beneficiaries satisfied with project services?

Summative evaluations (also called outcome or impact evaluations) form the second category.
They look at what a project has actually accomplished in terms of its stated goals. There are
two types of summative evaluations. (1) End evaluations aim to establish the situation when
external aid is terminated and to identify the possible need for follow-up activities either by
donors or project staff. (2) Ex-post evaluations are carried out two to five years after external
support is terminated. Their main purpose is to assess what lasting impact the project has had
or is likely to have, and to extract lessons of experience.

Summative evaluation questions include:

• To what extent did the project meet its overall goals and objectives?

• What impact did the project have on the lives of beneficiaries?

• Was the project equally effective for all beneficiaries?

• What components were the most effective?

• What significant unintended impacts did the project have?

• Is the project replicable?

• Is the project sustainable?

For each of these questions, both quantitative data (data expressed in numbers) and
qualitative data (data expressed in narratives or words) can be useful.

Summative evaluations are usually carried out as a program is ending or after completion of a
program in order to "sum up" the achievements, impact and lessons learned. They are useful
for planning follow-up activities or related future programs. Evaluators generally include
individuals not directly associated with the program.

3.2 Overview of Summative Evaluation Models

Terms like "outcome" and "impact" are often used interchangeably. A distinction should be
made. Outcomes refer to any results or consequences of an intervention or a project. Impact
is a particular type of outcome. It refers to the ultimate results (i.e. what the situation will be
if the outcome is achieved). A UNICEF publication clarifies the relationship between the two
terms:

“Some people distinguish between outcomes and impacts, referring to outcomes as short-term
results (on the level of purpose) and impacts as long-term results (on the level of broader
goals). Outcomes are usually changes in the way people do things as a result of the project
(for example, mother’s properly treating diarrhea at home), while impacts refer to the
eventual result of these changes (the lowered death rate from diarrhea disease).
Demonstrating that a project caused a particular impact is usually difficult since many
factors outside the project influence the results.” (UNICEF, A UNICEF Guide for
Monitoring and Evaluation: Making a Difference?, New York, 1991, p. 40.)

Impact evaluation should be carried out only after a program or project has reached a
sufficient level of stability. It is usually preceded by an implementation evaluation to
make sure that the intended program/ project elements have been put in place and are
operational before we try to assess their effects. Assessing the impact at an early stage is
meaningless and a waste of resources.

The main question that impact evaluations try to answer is whether the intervention or project
has made a difference for the target groups. There are different ways to find out and prove if
the intervention or project has made a difference. Those ways are referred to as evaluation
models.

Evaluation models differ in the extent to which they are able to identify and prove project
outcome or impact and link them with project interventions, i.e. to make a causal link
between the two. Some models are more likely than others to generate reliable results that
could establish a causal link. In evaluation terms this is called the scientific rigor or validity
of the model. There are many evaluation models. The following section reviews two
commonly used models: the pretest-posttest model and the comparison group model.

A. Pretest-Posttest Model

The basic assumption of this model is that, without project interventions, the situation that
existed before the implementation of the project would continue as before. As a result of the
intervention, the situation will change over time. Therefore, we measure the situation before
the project starts and repeat the same measures after the project is completed. The differences
or changes between the two points in time can be attributed to the project interventions.

To increase the validity of this model, we have to control some biases that might result from
the application of the model. For example, the pre- and posttests should be the same, and
measures should be taken from the same groups. In addition, to establish a strong link between
project interventions and project impact, the model should take into account other biases that
might occur between the two points in time. Some of those biases might be out of the project's
control, e.g. social, political, economic and environmental factors.

Advantages: The main advantage of the pretest-posttest model is that it is relatively easy to
implement. It can be implemented with the same group of project beneficiaries (it does not
require a control or comparison group). It does not usually require a high level of statistical
expertise to implement, and it is able to assess progress over time by comparing the results of
the project against baseline data.

Disadvantages: The main disadvantage of the pretest-posttest model is that it lacks scientific
rigor. Many biases might arise between the pretest and the posttest that could affect the results
and therefore weaken the direct link between project interventions and project outcomes or
impact. In other words, changes in the situation before and after project implementation might
(at least in part) be attributable to other external factors. This problem can be reduced by
adopting what is called the multiple time-series model, i.e. repeating the measures at different
points in time during the implementation of the project and not only at the beginning and end.
This way, results can be tracked over time and the effects of external factors can be assessed
and controlled. However, this might increase the work burden and the cost of the evaluation.

Implementation Steps: Applying the pretest-posttest model involves the following main
stages (a small worked example of the analysis step follows the list):

1. Prepare a list of indicators that would test project outcomes.

2. Design evaluation tools and instruments for data collection.

3. Apply the tools and instruments with the target group, or a representative sample of the
target group, at the pretest time (at the beginning of the project implementation phase or
before implementation starts).

4. Repeat the same measures at the posttest time (at the end of the project
implementation phase) with the same target group or a representative sample of the
target group.

5. Analyze, compare and interpret the two sets of evaluation data.

6. Report findings.
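The sketch below (in Python, using invented scores rather than data from this module) illustrates the analysis step: the same indicator is measured for the same respondents before and after the project, and the mean change is computed.

# Minimal sketch of step 5: compare pretest and posttest measures for the same group.
from statistics import mean

pretest  = [52, 61, 48, 55, 70, 44, 58, 63]   # e.g. baseline knowledge-test scores
posttest = [60, 66, 55, 57, 78, 52, 61, 70]   # the same respondents after the project

changes = [post - pre for pre, post in zip(pretest, posttest)]

print(f"Mean at pretest:  {mean(pretest):.1f}")
print(f"Mean at posttest: {mean(posttest):.1f}")
print(f"Mean change:      {mean(changes):+.1f}")

# If the respondents are a probability sample, a paired significance test (for example
# scipy.stats.ttest_rel applied to the same two lists) could be used to check whether
# the observed change is unlikely to be due to chance alone.

As the Disadvantages paragraph notes, even a clear before/after change cannot be attributed to the project alone unless external factors are taken into account.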

B. Comparison Group Model

This evaluation model assesses project outcomes or impact by comparing project results for
two comparable groups at the same point in time (say, the end of the project implementation
phase). The first group represents beneficiaries of the project and the second represents a group
that has not benefited from the project. To control for design biases, the two groups should
have similar characteristics in many respects (socioeconomic status, gender balance, education,
and other geographic and demographic aspects). Differences between the two groups can then
be attributed to the project interventions.

Advantages: This model has relatively strong scientific rigor. It is able to link project impact
with project interventions or to attribute outcomes to the intervention. The implementation of
this model is relatively easy when naturally existing comparison groups can be found.

Disadvantages: In many situations it is difficult to find a comparison group. In addition,
working with two different groups might increase the research burden and increase the cost of
the evaluation.

Implementation Steps: Applying the comparison group model involves the following main
stages:

1. Prepare a list of indicators that would test project outcomes.

2. Design evaluation tools and instruments for data collection.

3. Select a comparison group based on an appropriate set of criteria.

4. Apply the tools and instruments with the target and comparison groups, or
representative samples of both, at the same time.

5. Analyze, compare and interpret the two sets of evaluation data.

6. Report findings.
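A minimal sketch of the analysis step for this model is shown below (Python, invented values): the same indicator is measured at the same time for beneficiaries and for the comparison group, and the difference in group means is taken as a rough estimate of project effect.

# Minimal sketch of step 5: compare the beneficiary and comparison groups at the same time.
from statistics import mean

beneficiaries = [71, 65, 80, 74, 69, 77, 72, 68]   # e.g. endline indicator values
comparison    = [62, 60, 71, 66, 58, 69, 64, 61]   # comparable group not reached by the project

difference = mean(beneficiaries) - mean(comparison)

print(f"Mean, beneficiary group: {mean(beneficiaries):.1f}")
print(f"Mean, comparison group:  {mean(comparison):.1f}")
print(f"Difference (rough estimate of project effect): {difference:+.1f}")

# The estimate is only credible to the extent that the two groups really are comparable;
# an independent-samples test (e.g. scipy.stats.ttest_ind) could be added to assess
# whether the difference is larger than chance variation.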

3.3 Baseline Survey and Data

The impact or results of a project are difficult to prove if we do not know the situation prior to
project implementation. Baseline surveys are surveys carried out before project implementation
starts to generate data about the existing situation of a target area or group. Such data become
the reference against which project/program impact can be assessed when summative
evaluations are carried out. For example, if the objective of the project is to reduce school
dropout rates, we have to know those rates prior to project implementation and compare them
with the rates after the completion of the project.

Baseline surveys are especially important when the pretest-posttest evaluation model is
adopted. The logic behind carrying out baseline surveys is that by comparing data that describe
the situation to be addressed by a project or a program with data generated after the completion
of the project, evaluators are able to measure progress or changes in the situation and link those
changes to project interventions. As well, baseline data might be useful to track changes that
the project brings about over time and to refine project indicators that are important for project
monitoring or for evaluating project impact.

Baseline surveys are especially important for assessing a project's higher-level objectives.
Special focus is given to gathering information about the various indicators developed to
measure project effects. Both quantitative and qualitative information are used in baseline
surveys (see the next section). To control methodological biases, the indicators, methods and
tools used in the baseline survey should be repeated when carrying out summative evaluations.

Source: United Nations Development Programme (UNDP), Who Are the Question-makers? A
Participatory Evaluation Handbook, OESP Handbook Series, 1997.

3.4 Review of Key Outcome and Impact Evaluation Indicators

There are a number of interrelated dimensions along which the success of programs and
projects can be measured, including effectiveness, efficiency, relevance, impact and
sustainability. Following is a summary review of each of those dimensions:

1. Effectiveness

Effectiveness, in simple terms, is the measure of the degree to which the formally stated
project objectives have been achieved or can be achieved. To make such measurement and
verification possible, project objectives should be defined clearly and realistically. Often,
evaluators have to deal with unclear and highly general objectives that are hard to assess
("upgraded health conditions", "improved living conditions") or with unrealistic objectives (in
comparison with allocated resources, time or level of activities). In such situations, the
measurement of effectiveness becomes difficult. Evaluators have to work with project staff to
operationalise those objectives based on existing documents and to draw clear and realistic
objectives as the point of reference for measuring effectiveness.
2. Efficiency

Efficiency is the measure of the economic relationship between the allocated inputs and the
project outputs generated from those inputs (i.e. cost effectiveness of the project). It is a
measure of the productivity of the project, i.e., to what degree the outputs achieved derive
from an acceptable cost. This includes the efficient use of financial, human and material
resources. In other words, efficiency asks whether the use of resources in comparison with
the outputs is justified.

This might be relatively easy to answer in the field of business, where the main difficulty in
measuring efficiency is to determine what standards to follow as a point of reference. The
question, however, becomes more difficult in the social context, especially when ethical
considerations are involved. For example, how can we decide whether spending X amount of
dollars to save the lives of Y children or to rehabilitate Z disabled persons is justified? What
are the acceptable standards in such situations?

In the absence of agreed upon and predetermined standards, evaluators have to come up with
some justifiable standards. Following is a list of recommendations that evaluators may use:

• Compare project inputs and outputs against other comparable activities and projects.

• Use elements of best practice standards.

• Use criteria to judge what might be reasonable.

• Ask questions such as: could the project or intervention achieve the same results at a
lower cost? Could the project achieve more results at the same cost?
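One of these questions can be made concrete with a small worked example (Python, with invented figures; the benchmark project is hypothetical): comparing the cost per unit of output against a comparable project or activity.

# Worked example of an efficiency check: cost per unit of output (figures invented).
project_cost    = 150_000.0   # total project spending, e.g. in USD
project_outputs = 600         # e.g. number of farmers trained

benchmark_cost    = 180_000.0   # a comparable project used as a rough benchmark
benchmark_outputs = 800

cost_per_output           = project_cost / project_outputs       # 250.00 per farmer trained
benchmark_cost_per_output = benchmark_cost / benchmark_outputs   # 225.00 per farmer trained

print(f"Project cost per output:   {cost_per_output:.2f}")
print(f"Benchmark cost per output: {benchmark_cost_per_output:.2f}")

# A higher unit cost is not automatically "inefficient": differences in context, output
# quality and target groups must be weighed before making a judgment.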

3. Relevance

Relevance is a measure used to determine the degree to which the objectives of a program or
project remain valid as planned. It refers to an overall assessment to determine whether
project interventions and objectives are still in harmony with the needs and priorities of
beneficiaries. In other words, are the agreed objectives still valid? Is there a sufficient
rationale for continuing the project or activity? What is the value of the project in relation to
other priority needs? Is the problem addressed still a major problem?

Society's priorities might change over time as a result of social, political, demographic or
environmental changes. As a result, a given project might not be as important as it was when it
was initiated. For example, once an infectious epidemic has been eradicated, the justification
for the project that dealt with it might no longer exist. Or, if a natural disaster happens,
society's priorities shift to emergency or relief interventions, and other projects and
interventions might become less important.

In many cases, the continued relevance of a project depends on the seriousness and quality of
the needs assessment and on the rationale upon which the project was developed.

4. Impact

Project impact is a measure of all positive and negative changes and effects caused by the
project, whether planned or unplanned. While effectiveness focuses only on the specific,
positive and planned effects expected to accrue as a result of the project, expressed in terms of
the immediate objective, impact is a far broader measure: it includes both positive and negative
project results, whether intended or unintended. Impact is often the most difficult and
demanding part of evaluation work, since it requires the establishment of complex causal links
that are difficult to prove unless a strong evaluation model and a diverse set of techniques are
used.

In assessing impacts, the point of reference is the status of project beneficiaries and
stakeholders prior to implementation. Questions often asked in impact evaluations include:
What are the results of the project? What difference has the project made to the beneficiaries,
and how many have been affected? What are the social, economic, technical, environmental
and other effects on the direct or indirect individual beneficiaries, communities and
institutions? What are the positive or negative, intended and unintended, effects that come
about as a result of the project activities?

Project impacts can be immediate or long-range. Project staff and evaluators should decide
how much time must elapse before project impacts are generated. For example, an agricultural
project may produce important impacts after only a few months, whereas an educational
project might not generate significant effects until several years after its completion. Therefore,
it is important to design the program or project in a way that will lend itself to impact
assessment at a later stage, e.g. through the preparation of baseline data, the setting of
indicators for monitoring and evaluation, etc.

5. Sustainability

Sustainability, in simple terms, is a measure of whether the project or program and its positive
results continue after external support has been concluded. It has become a major issue in
development work and in the evaluation of projects.

Many development initiatives fail once the implementation phase is over because neither the
target group nor the responsible organizations have the means, capacity or motivation to
provide the resources needed for the activities to continue. As a result, many development
organizations have become more interested in the long-term, lasting improvements brought
about by projects. In addition, many donors want to know how long they will need to support a
project before it can run on local resources.

During the last decade, the concept of sustainability has broadened. It no longer merely asks
whether the project has succeeded in contributing to the achievement of its objectives, or
whether the project will be able to cover its operational costs from local sources; it also asks
whether the positive impacts are likely to continue after the termination of external support. In
addition, environmental, financial, institutional and social dimensions have become major
issues in the assessment of sustainability.

Since sustainability is concerned with what happens after external support is completed, it
should ideally be measured after the completion of the project. It is difficult to provide a
definitive assessment of sustainability while the project is still running; in such cases, the
assessment will have to be based on projections about future developments.

There are a number of factors that can be used to assess whether project interventions are
likely to become self-sustaining and continue after the termination of external funding,
including:

• Economic factors (future expenses, especially recurrent costs)

• Institutional factors (administrative capacity, technical capacity, institutional motivation,
ownership of the project, etc.)

• Social factors (community interest, political will, etc.)

• Factors related to overall environmental benefits.

Chapter 4

MONITORING & EVALUATION METHODS AND TOOLS

Review of Main Methods and Tools

Evaluations often produce controversial results. Therefore, they might be criticized, especially
in terms of whether data collection methods, analysis and results lead to reliable information
and conclusions that reflect the situation.

Methods of data collection have strengths and drawbacks. Formal methods (surveys,
participatory observations, direct measurement, etc.) used in academic research would lead to
qualitative and quantitative data that have a high degree of reliability and validity. The
problem is that they are expensive. Less formal methods (field visits, unstructured interviews,
etc.) might generate rich information but less precise conclusions, especially because some of
those methods depend on subjective views and intuitions.

Qualitative methods, especially participatory methods of data collection, can bring rich and
in-depth analysis of the situation of the beneficiaries of projects and new insights into
peoples' needs for project planning and implementation. However, they demand more skills
than most quantitative methods. In addition, they require time and substantial talent in
communication and negotiation between planners and participants.

The quality of information, especially in terms of validity and reliability, should be a main
concern for the evaluator. The evaluator may simultaneously employ a number of methods
and sources of information in order to cross-validate data (triangulation). Triangulation is a
term used to describe the simultaneous use of multiple evaluation methods and information
sources to study the same topic. It provides the means to generate rich and contextual
information. As well, it provides the means to verify information and explain conflicting
evidence.

The following table provides an overview of some of the quantitative and qualitative data
collection methods commonly used during evaluations.
Literature search
  Description/Purpose: Gather background information on the methods and results of
  evaluations carried out by others.
  Advantages: An economical and efficient way of obtaining information.
  Disadvantages/Challenges: It can be difficult to assess the validity and reliability of
  secondary data.

Questionnaires / written surveys
  Description/Purpose: Oral interviews or written questionnaires administered to a
  representative sample of respondents. Most appropriate when there is a need to get a lot of
  information from people quickly and/or easily, in a non-threatening way.
  Advantages: Produce reliable information; can be completed anonymously; easy to compare
  and analyze; can be administered easily to a large number of people; collect a lot of data in an
  organized manner; many sample questionnaires already exist.
  Disadvantages/Challenges: Demanding and can be costly; might not get careful feedback;
  wording can bias respondents' answers; data are analyzed for groups and are impersonal;
  surveys may need a sampling expert; provide numbers but do not get the full story;
  open-ended data may be difficult to analyze.

Interviews
  Description/Purpose: To fully understand someone's impressions or experiences, or to learn
  more about their answers to questionnaires. Individual or group interviews can be organized
  to assess the perceptions, views and satisfaction of beneficiaries.
  Advantages: Give the full range and depth of information and yield rich data, details and new
  insights; can be flexible with the respondent; permit face-to-face contact and the opportunity
  to explore topics in depth; allow the interviewer to probe, explain or clarify questions,
  increasing the likelihood of useful responses; allow the interviewer to adapt the interview to
  particular individuals or circumstances.
  Disadvantages/Challenges: Can be hard to analyze and compare; the interviewer can bias
  responses; can be expensive and time-consuming; need well-qualified and highly trained
  interviewers; interviewees may distort information through recall errors, selective perception
  and the desire to please the interviewer; flexibility can result in inconsistencies across
  interviews; the volume of information can be too large and difficult to reduce.

Documentation review
  Description/Purpose: Gain an impression of how the program operates, without interrupting
  it, by reviewing applications, finances, memos, minutes, etc.
  Advantages: Gives comprehensive and historical information; does not interrupt the program
  or clients' routines; the information already exists; few biases about the information.
  Disadvantages/Challenges: Often takes a lot of time; information may be incomplete and the
  quality of documentation might be poor; the reviewer needs to be clear about the purpose;
  not a flexible means of getting data, since data are restricted to what already exists.

Observation
  Description/Purpose: Involves inspection, field visits and observation to understand
  processes, infrastructure/services and their utilization. Gathers accurate information about
  how a program actually operates, particularly about processes.
  Advantages: Well suited for understanding the processes, views and operations of a program
  while they are actually occurring; can adapt to events as they occur in a natural, unstructured
  and flexible setting; provides direct information about the behavior of individuals and
  groups; permits the evaluator to enter into and understand the situation/context; provides
  good opportunities for identifying unanticipated outcomes.
  Disadvantages/Challenges: Dependent on the observer's understanding and interpretation;
  has limited potential for generalization; exhibited behaviors can be difficult to interpret;
  observations can be complex to categorize; can influence the behavior of program
  participants; can be expensive and time-consuming; needs well-qualified, highly trained
  observers and/or content experts; the investigator has little control over the situation.

Focus groups
  Description/Purpose: A focus group brings together a representative group of 8 to 10 people
  who are asked a series of questions related to the task at hand. Used for the analysis of
  specific, complex problems, in order to identify attitudes and priorities in sample groups, and
  to explore a topic in depth through group discussion, e.g. reactions to an experience or
  suggestion, common complaints, etc.
  Advantages: Efficient and reasonable in terms of cost; stimulates the generation of new ideas;
  quickly and reliably captures common impressions; can be an efficient way to get a wide
  range and depth of information in a short time; can convey key information about programs;
  useful in project design and in assessing the impact of a project on a given set of
  stakeholders.
  Disadvantages/Challenges: Responses can be hard to analyze; needs good facilitators;
  difficult to schedule 8-10 people together.

Case studies
  Description/Purpose: In-depth review of one or a small number of selected cases, to fully
  understand beneficiaries' experiences in a program and to conduct a comprehensive
  examination through cross-comparison of cases.
  Advantages: Well suited for understanding processes and for formulating hypotheses to be
  tested later; fully depicts the client's experience of program input, process and results; a
  powerful means of portraying the program to outsiders.
  Disadvantages/Challenges: Usually time-consuming to collect, organize and describe;
  represents depth of information rather than breadth.

Key informant interviews
  Description/Purpose: Interviews with persons who are knowledgeable about the community
  targeted by the project. A key informant is a person (or group) who has unique skills or a
  professional background related to the issue/intervention being evaluated, is knowledgeable
  about the project participants and/or has access to other information of interest to the
  evaluator.
  Advantages: A flexible, in-depth approach; easy to implement; provides information
  concerning causes, reasons and/or best approaches from an "insider" point of view;
  advice/feedback increases the credibility of the study; may have the side benefit of
  solidifying relationships between evaluators, beneficiaries and other stakeholders.
  Disadvantages/Challenges: Risk of biased presentation/interpretation by informants or the
  interviewer; the time required to select informants and get their commitment may be
  substantial; the relationship between the evaluator and informants may influence the type of
  data obtained; informants may interject their own biases and impressions.

Direct measurement
  Description/Purpose: Registration of quantifiable or classifiable data by means of an
  analytical instrument.
  Advantages: Precise; reliable; often requires few resources.
  Disadvantages/Challenges: Registers only facts, not explanations.
Source: Information on common qualitative methods is provided in the earlier User-Friendly
Handbook for Project Evaluation (NSF 93-152).

Selecting Monitoring and Evaluation Methods

Monitoring is an ongoing function and can be incorporated into daily management


operations. It can involve a wide range of methods such as interviews with project
beneficiaries, field visits, regular reports, observations, interviews with key informants, etc.

Evaluation can involve a number of methods. No recipe or formula is best for every situation.
Some methods are better suited for the collection of certain types of data. Each has
advantages and disadvantages in terms of costs and other practical and technical
considerations (such as ease of use, accuracy, reliability, and validity). For example, there is
no best way to conduct interviews. Your approach will depend on the practical considerations
of getting the work done during the specified time period. Using a focus group - which is
essentially a group interview - is more efficient than one-on-one interviews, if done well.
However, people often give different answers in groups than they do individually. They may
feel freer to express personal views in a private interview. At the same time, group
conversations can draw out deeper insights as participants listen to what others are saying.
Both approaches have value.

Project staff and evaluators must weigh these pros and cons against program goals. In selecting evaluation methods, evaluators should consider which methods will generate the most useful and reliable information, be the most cost-effective, and be the easiest to implement within a short period of time.

Following is a list of questions that might help in selecting appropriate evaluation methods:

1. What information is needed?

2. Of this information, how much can be collected and analyzed in a low-cost and
practical manner, e.g., using questionnaires, surveys and checklists?

3. How accurate will the information be?

4. Will the methods get all of the needed information?

5. What additional methods should and could be used if additional information is


needed?

6. Will the information appear as credible to decision makers, e.g., to donors or top
management?

7. Are the methods appropriate for the target group? If group members are illiterate, the
use of questionnaires might not be appropriate unless completed by the evaluators
themselves.

8. Who can administer the methods? Is training required?

9. How can the information be analyzed?

Ideally, the evaluator uses a combination of methods: for example, a questionnaire to quickly collect a great deal of information from a lot of people, followed by interviews to get more in-depth information from certain respondents to the questionnaire. In addition, case studies could then be used for more in-depth analysis of unique and notable cases, e.g., those who did or did not benefit from the program, those who quit the program, etc.

Combining quantitative and qualitative research methods and approaches in monitoring and
evaluation of development projects has proved to be very effective.

Chapter 5

MONITORING AND EVALUATION PLANNING, DESIGN AND
IMPLEMENTATION

Monitoring and evaluation planning and design must be prepared as an integral part of the
program/project design. To increase the effectiveness of the M&E systems, program
managers should:

• Establish baseline data describing the problems to be addressed and build baseline indicators.

• Make sure that program/project objectives are clear, measurable and realistic.

• Define specific program/project targets in accordance with the objectives.

• Agree with stakeholders on the specific indicators to be used for monitoring and
evaluating project performance and impact.

• Define the types and sources of data needed and the methods of data collection and
analysis required based on the indicators.

• Specify how the information generated from M&E will be used.

• Specify the format, frequency and distribution of reports.

• Develop an M&E schedule.

• Clarify roles and responsibilities for M&E.

• Allocate an adequate budget and resources for M&E.

It should be noted that the monitoring and evaluation plan should not be seen as rigid. The plan should be subject to continuous review and adjustment as required, and should serve as a means for an effective learning process.

Planning a Monitoring System

As mentioned above, evaluation planning and design depend on the type of information
needed. The type, quantity and quality of information should be thought of carefully before
planning M&E systems.

Project managers usually prepare annual work plans that translate the project document into
concrete tasks. The work plans should describe in detail the delivery of inputs, the activities
to be conducted and the expected results. They should clearly indicate schedules and the
persons responsible for providing the inputs and producing results. The work plans should be
used as the basis for monitoring the progress of program/project implementation.

As a management tool, monitoring should be organized at each level of management.


Monitoring systems should be linked to annual plans. A first step in designing a monitoring
plan is to identify who needs what information, for what purpose, how frequently, and in
what form. To develop an effective monitoring system, the following steps might be
followed:

1. A first step towards developing a good monitoring system is to decide what should be
monitored. The careful selection of monitoring indicators organizes and focuses the
data collection process.

2. The next question would be how to gather information, i.e. to select methods to track
indicators and report on progress (observation, interviews, stakeholder meetings,
routine reporting, field visits, etc.).

3. When to gather the information, and by whom. The monitoring plan should include who will gather the information and how often. Project staff at various levels will do most of the data collection, analysis and reporting. Staff should agree on what the monitoring report should include.

4. Progress reports should be reviewed by project staff and major stakeholders.


Feedback should be collected by project managers on a regular basis.

5. The monitoring plan should indicate the resources needed to carry out project
monitoring. Needed funds and staff time should be allocated to ensure effective
implementation.
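
To make these steps concrete, the following sketch shows one possible way, written in Python purely for illustration, of holding a monitoring plan as structured data so that every indicator is tied to a collection method, a frequency, a responsible person and a reporting line. The indicators, methods and roles named here are hypothetical examples, not items prescribed by this module.

from dataclasses import dataclass

@dataclass
class MonitoringItem:
    indicator: str      # what will be monitored
    method: str         # how the information will be gathered
    frequency: str      # when / how often it is gathered
    responsible: str    # who gathers and reports it
    reported_to: str    # who reviews the progress report

monitoring_plan = [
    MonitoringItem("Number of farmers trained",
                   "training attendance records", "monthly",
                   "field officer", "project manager"),
    MonitoringItem("Share of trained farmers applying the new practices",
                   "field visits and observation", "quarterly",
                   "M&E officer", "project manager and stakeholders"),
]

# A simple completeness check: every indicator needs someone responsible for it.
for item in monitoring_plan:
    assert item.responsible, f"No one assigned to: {item.indicator}"
    print(f"{item.indicator}: {item.method}, {item.frequency} ({item.responsible})")

Keeping the plan in a form like this makes it easy to review whether each indicator has a data source, a schedule and an owner before implementation begins.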

Planning an Evaluation

There is no "perfect" evaluation design. It is far more important to do something, rather than
wait until every last detail has been tested. However, to improve evaluation planning and
design, it is useful to consider the following questions and issues:

a. What are the purposes of the evaluation? Which ones are more important than others?

This step involves identifying a manageable number of evaluation purposes and prioritizing
them. The best way to decide on the purposes of an evaluation is to ask who needs what type
of information and for what reason. When the evaluation purpose has been decided, it must
be clearly set forth in the Evaluation Terms of Reference.

b. What evaluation model is the most appropriate for the project or program?

As mentioned earlier, there are many evaluation models that can be considered. Each has
some strengths and weaknesses. The evaluation model that a specific project would utilize
should be selected during the project design phase. This is especially important if the project
plans to include a summative evaluation.

c. When to carry out the evaluation. What is the timing of evaluation within the project cycle?

The timing of major evaluations is determined by the project plan, the identification of
significant problems during the course of monitoring, donors’ request, etc.

d. What is the scope and focus of the evaluation and questions for the evaluation to answer?

Determining the scope and focus of an evaluation includes identifying the geographic area,
type of activity and time period that the evaluation should cover. This would clarify the types
of questions to be asked.

e. Methods of gathering data to answer the questions.

Existing data should be identified and its quality assessed. In the process, some questions
might be answered. Other data sources might include documents (regular reports, field visits
notes, previous evaluation reports, etc.) and data generated by research projects (household
surveys, evaluation of similar programs, etc.).

Evaluators should be selective. Extensive data gathering is time-consuming, expensive and


can result in mountains of unnecessary information.

f. What resources are needed for the evaluation?

In the early stages of planning an evaluation, resources should be clearly defined. In order for
evaluations to be effective, sufficient human, financial and logistic resources should be
allocated. We should remember that the amount of available resources influences the scope
and methods of the evaluation.

A UNICEF publication summarizes the evaluation planning process as follows:

• Why - The purposes of the evaluation - who can/will use the results.

• When - The timing of evaluation in the program cycle.

• What - The scope and focus of evaluation and questions for the evaluation to answer.

• Who - Those responsible for managing and those responsible for carrying out the
evaluation, specifying whether the evaluation team will be internal or external or a
combination of both.

• How - The methods of gathering data to answer the questions.

• Resources - The supplies and materials, infrastructure and logistics needed for the
evaluation.

Chapter 6

DATA ANALYSIS AND REPORT WRITING

Analyzing Data

1. Data Management

Organizing evaluation data is an important step for ensuring effective analysis and reporting.
If the amount of quantitative data is very small and you are not familiar with computer
software and data entry, you might opt to manually organize and analyze data. However, if
the amount of data is huge or you need to carry out sophisticated analysis, you should enter
the data into a computer program. There are a number of software packages available to
manage the evaluation data, including SPSS, Access, or Excel. Each requires a different level
of technical expertise. For a relatively small project, Excel is the simplest of the three
programs and should work well as database software. In any event, the assistance of
statisticians and computer experts can be engaged at different stages of the evaluation.
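
As a small, hedged illustration of entering data into a computer program, the sketch below uses Python with the pandas library to read a file of survey responses and run a few basic quality checks before any analysis begins; the file name, column contents and the choice of pandas are assumptions made only for this example.

import pandas as pd

# Read the raw responses; an Excel workbook could be read with pd.read_excel instead.
data = pd.read_csv("survey_responses.csv")

print("Number of responses:", len(data))
print(data.isna().sum())    # missing values per question
print(data.dtypes)          # confirm numeric questions were read as numbers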

2. Analysis of Quantitative Data

Analyzing the gathered quantitative and qualitative data is a major step in project evaluation.
Developing a data analysis plan is important to carry out a successful analysis and
interpretation of information gathered by the evaluation. Following are some tips to make
sense of the quantitative data:

a. Start with the evaluation goals and objectives:

Before analyzing your data, review your evaluation goals. This will help you organize your
data and focus your analysis. For example, if you wanted to improve your program by
identifying its strengths and weaknesses, you can organize data into program strengths,
weaknesses and suggestions to improve the program. If you are conducting an
outcomes-based evaluation, you could categorize data according to the indicators for each
outcome. In general, data analysis is facilitated if the project has clear and measurable goals
and objectives.

b. Basic analysis of quantitative information

Data analysis often involves the disaggregation of data into categories to provide evidence
about project achievements and to identify areas in which a program is succeeding and/or
needs improvement. Data can be broken down by gender, social and economic situation,
education, area of residence (urban or rural), marital status, age, etc. Decide what type of
disaggregation is relevant to your evaluation and project objectives and indicators. One of the
main advantages of statistical analysis is that it can be used to summarize the findings of an
evaluation in a clear, precise and reliable way. However, not all information can be analyzed
quantitatively. The most commonly used statistics include the following:

Frequency Count. A frequency count provides an enumeration of activities, things, or people


that have certain pre-specified characteristics. Frequency counts can often be categorized
(e.g., 0, 1-5, 6-10, more than 10) in data analysis.

Percentage. A percentage tells us the proportion of activities, things, or people that have
certain characteristics within the total population of the study or sample. Percentage is
probably the most commonly used statistic to show the current status as well as growth over
time.

Mean. The mean is the most commonly used statistic to represent the average in research and
evaluation studies. It is derived by dividing the sum of all values by the total number of units included in
the summation. The mean has mathematical properties that make it appropriate to use with
many statistical procedures.

The level of sophistication of analysis is a matter of concern in evaluation. Tables,


percentages and averages often give a clear picture of the sample data particularly for
nonspecialists, and many users will only be interested in this level of analysis. In addition,
measures of spread, including percentiles and standard deviations, may add valuable
information on how a variable is distributed throughout a sample population. There is a
wealth of more sophisticated research methods that can be applied. However, much of the
evaluation work can be done using very basic methods.
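
To ground these statistics, the short Python sketch below (using pandas) computes a categorized frequency count, percentages, a mean, measures of spread and a disaggregation by gender on a small invented dataset; the column names and values are made up solely for illustration.

import pandas as pd

data = pd.DataFrame({
    "gender": ["F", "M", "F", "F", "M", "F", "M", "M"],
    "visits": [0, 3, 7, 12, 1, 5, 9, 2],                  # e.g. number of clinic visits
    "income": [120, 200, 150, 300, 180, 90, 210, 160],    # e.g. monthly income
})

# Frequency count, grouped into pre-specified categories (0, 1-5, 6-10, more than 10)
categories = pd.cut(data["visits"], bins=[-1, 0, 5, 10, float("inf")],
                    labels=["0", "1-5", "6-10", "more than 10"])
print(categories.value_counts())

# Percentage of respondents falling in each category
print((categories.value_counts(normalize=True) * 100).round())

# Mean and measures of spread (standard deviation and percentiles)
print(data["income"].mean(), data["income"].std())
print(data["income"].quantile([0.25, 0.5, 0.75]))

# Disaggregation: the same statistics broken down by gender
print(data.groupby("gender")["income"].agg(["count", "mean", "std"]))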

3. Analysis of Qualitative Information

The use of both quantitative and qualitative analysis in evaluation has become the preferred
model for many evaluators. Most evaluators and researchers agree that they should be
employed simultaneously. The analysis of qualitative data helps broaden the view of the
phenomena of interest in an evaluation, but can also increase depth and detail, where needed.

Qualitative data includes detailed descriptions, direct quotations in response to open-ended


questions, analysis of case studies, transcripts of group opinions, and observations of
different types. Qualitative analysis is best done in conjunction with the statistical analysis of
related (quantitative or qualitative) data. The evaluation should be designed so that the two
sorts of analysis, using different but related data, will be mutually reinforcing.

Analysis of qualitative methods may produce descriptions (patterns, themes, tendencies,


trends, etc.), and interpretations and explanations of these patterns. The data analysis should
include efforts to assess the reliability and validity of findings. Following is a list of some
useful tips to improve your analysis of qualitative data:

• Carefully review all the data.

• Organize comments into similar categories, e.g., concerns, suggestions, strengths,


weaknesses, similar experiences, program inputs, recommendations, outputs,
outcome indicators, etc.

• Try to identify patterns, or associations and causal relationships in the themes,


e.g., all people who attended programs in the evening had similar concerns, most
people came from the same geographic area, most people were in the same salary
range, processes or events respondents experience during the program, etc.

• Try to combine the results of the quantitative and qualitative data.
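
As a minimal illustration of the second and third tips above (organizing comments into categories and looking for patterns), the Python sketch below codes a handful of invented open-ended comments into themes by keyword and counts how often each theme occurs. In practice this coding is done by careful reading and judgment; the comments, themes and keywords here are purely hypothetical.

from collections import Counter

comments = [
    "The evening sessions were too short",
    "More practical examples would help",
    "Facilitators were excellent",
    "The venue was hard to reach in the evening",
]

# A very rough keyword-based coding of comments into themes
themes = {
    "timing and venue": ["evening", "venue", "short"],
    "content": ["example", "practical"],
    "facilitation": ["facilitator"],
}

counts = Counter()
for comment in comments:
    text = comment.lower()
    for theme, keywords in themes.items():
        if any(word in text for word in keywords):
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: {n} comment(s)")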

It is important to keep all documents for several years after completion in case they are
needed for future reference.

Development of an Evaluation Report

There is no common format for reporting. Following is a list of tips that might help in
improving your evaluation reports:

a. Start the preparation of the evaluation report at an early stage.

It is useful to start the preparation of the report before data collection. There are a number of
sections that can be prepared by using the material of the evaluation plan or proposal
(background section, information about the project and some aspects of the methodology,
evaluation questions, etc.). Those will remain the same throughout the evaluation. The
evaluation findings, conclusions, and recommendations generally need to wait for the end of
the evaluation.

Evaluations generate a huge amount of information. Therefore, it is useful to organize


evaluation data and field notes as soon as they are collected and to document fieldwork
experiences and observations as soon as possible. Finally, preparing sections of the findings
chapter during the data collection phase allows researchers to generate preliminary
conclusions or identify potential trends that need to be assessed by additional data collection
activities.

b. Make the report short and concise

One of the most challenging tasks that evaluators face is how to organize the huge amount of
data gathered into a useful, concise and interesting report and what data to include and not to
include. It is useful to remember that only a small, concise selection of the tabulations prepared
during the analysis phase should be reported. A report outline will help in classifying
information. Always abide by your key evaluation questions, the indicators you are assessing
and the type of information that your audience needs.

Make your recommendations clear, concise and direct. Examples include:

1. Ways for improving management of the program (planning, decision making, policy
development, etc.) and where capacity building/technical assistance and training are
needed.

2. Actions needed to increase effects of the project.

3. Actions needed to improve monitoring and evaluation processes and methods.

4. Topics for further research.

c. Make the presentation interesting

Remember that the level and content of evaluation reports depend on for whom the report is
intended, e.g., donors, staff, beneficiaries, the general public, etc. Presentation must be clear
and adjusted to the target group. The presentation must be made in simple language that can
be understood by non-professionals. Following is a list of suggestions that might help in
making your report more interesting and easier to read:

1. The first sentence of paragraphs should be used to make the main point, and the
remainder to supplement, substantiate and discuss the main point.

2. As much as possible, keep the text short. This will ensure that a large number of people
will read it.

3. The structure of the report should be simple. The text should be broken down in
relatively small thematic or sequential parts, with simple and clear subtitles precisely
identifying the topics discussed.

4. Make the report interesting to read. Display your data in graphs, diagrams, illustrations and tables that summarize numbers. This should reduce the amount of text needed to describe the results; furthermore, they are more effective than written text. Do not explain every detail of the graphs or illustrations in written form; focus only on the important points that relate to the problem under discussion. Effective use of qualitative information makes the report more interesting. In addition, direct quotes, short examples and comments heard during fieldwork personalize the findings, and photographs help in familiarizing readers with the conditions of the project beneficiaries.

5. Use simple language that the readers will understand. Avoid the use of long and
complicated sentences, unclear jargon and/or difficult words. Important technical
terms should be defined in the text or in the glossary at the end of the report.

6. Different main ideas should be presented in separate sentences.

7. The meaning of abbreviations and colloquial words should be explained.

8. Simple link words should be used to split sentences and indicate the direction in
which the argument is moving. Link words should be simple, such as “also,” “even
so,” “on the other hand,” and “in the same way.” Avoid long words like “moreover,”
“nevertheless,” and “notwithstanding.”

9. Only data tables or diagrams should contain detailed numbers. The written text should
highlight the most important numbers and say what they mean. Percentages should in
most cases be rounded to the nearest whole number. It should be possible for the

reader to get the main message from a table without consulting the text. Every table
must have a title, table number, reference to the source of information, sample size,
and full description of what each figure refers to.

10. Use space around the text. Ease of reading and understanding is more important than
reducing the volume of pages.

Consider the following format for your report:

Suggested Contents of Evaluation Report

1. Title page

2. Table of Contents

3. Acknowledgments (optional)

• Identify those who contributed to the evaluation.

4. Executive Summary

• Summarize the program/project evaluated, the purpose of the evaluation and the
methods used, the major findings, and the recommendations in priority order.

• Two to three pages (usually) that could be read independently without reference to
the rest of the report.

5. Introduction

• Identify program/project description/background.

• Describe the program/project being evaluated (the setting and problem addressed, objectives and strategies, funding).

• Summarize the evaluation context (purposes, sponsors, composition of the team, duration).

6. Evaluation Objectives and Methodology

• List the evaluation objectives (the questions the evaluation was designed to answer).

• Describe fully the evaluation methods and instruments (e.g., what data were collected, specific methods used to gather and analyze them, rationale for visiting selected sites).

• Limitations of the evaluation.

7. Findings and Conclusions

• State findings clearly, with data presented graphically in tables and figures. Include the effects of the findings on achievement of program/project goals.

• Explain the comparisons made to judge whether adequate progress was made.

• Identify reasons for accomplishments and failures, especially continuing constraints.

8. Recommendations

• List the recommendations for different kinds of users in priority order. Include costs of implementing them, when possible.

• Link recommendations explicitly with the findings, discussing their implications for decision-makers.

• Include a proposed timetable for implementing/reviewing recommendations.

9. Lessons Learned (optional)

• Identify lessons learned from this evaluation for those planning, implementing or evaluating similar activities.

10. Appendices

• Terms of Reference.

• Instruments used to collect data/information (copies of questionnaires, surveys, etc.).

• List of persons interviewed and sites visited.

• Data collection instruments.

• Case studies.

• Abbreviations.

• Any related literature.

• Other data/tables not included in the findings chapter.
Use of Monitoring and Evaluation Results

Results of evaluations can be used in many ways:

1. Dissemination of the report

Disseminate the report to the various interested and related parties that might use it. Potential
users include: the funding organization (for the program or evaluation), project managers
and staff, board members of the organization, partner organizations/interested community

groups and other stakeholders, the general public, and external resources (researchers,
consultants, professional agencies, etc.).

Apart from distributing the evaluation report itself, common ways to disseminate evaluation
information are through the evaluation summaries, annual reports, bibliographies, thematic
reports, seminars, press releases, websites, newsletters, etc. The entire report should be
distributed to administrators and donors. The executive summary could be distributed more
widely, for example to other policy-making staff, political bodies or others involved in similar
programs.

2. Improvement of project/ program performance

The evaluation report highlights project strengths and weaknesses and suggests solutions to
major problems. While it is important to know if the program is achieving its goals and
objectives, it is also important that the project manager and staff are able to use the results to
plan follow-up actions to further strengthen the program.

The project manager and staff should prepare an action plan to implement follow-up
activities. The action plan should have a time line and should identify individuals responsible
for carrying out the planned activities. The implementation of the follow-up action plan needs
to be monitored and evaluated. This makes program evaluation, implementation and impact,
an integral part of a process for continuous improvement.

3. Development of new projects

One of the objectives of evaluations is to feed into the next planning phases of the
programming cycle of the organization as well as to provide a baseline for future planning.
Findings of evaluations reflect the situation of the target group and highlight follow-up
actions. Such recommendations could be used to design new projects or interventions, or to
further develop existing projects.

4. Policy development

Results of evaluations could be discussed at regional or national levels through seminars or


workshops to discuss policy implications. Planners on the policy-level can use evaluation
results for decision-making.

If the evaluation is well done and recommends policy changes, program managers can use it
as a tool for advocacy. Good evaluations forcefully demonstrate the potential beneficial
impact of suggested policy changes.

5. Advocacy to increase support to the project

Evaluations can be used as a tool to obtain further support for the program/project. By
documenting what has been achieved, evaluators help project leaders gain the support of
government officials, increase credibility in the community and raise funds from donors,
especially if the results of the evaluation affirm that the project goals remain valid.

Chapter 7

WHY DO MONITORING AND EVALUATION?

Monitoring and evaluation enable you to check the “bottom line” (see Glossary of Terms) of
development work: Not “are we making a profit?” but “are we making a difference?”
Through monitoring and evaluation, you can:

 Review progress;

 Identify problems in planning and/or implementation;

 Make adjustments so that you are more likely to “make a difference”.

In many organizations, “monitoring and evaluation” is something that is seen as a donor
requirement rather than a management tool. Donors are certainly entitled to know whether
their money is being properly spent, and whether it is being well spent. But the primary
(most important) use of monitoring and evaluation should be for the organisation or project

itself to see how it is doing against objectives, whether it is having an impact, whether it is
working efficiently, and to learn how to do it better.

Plans are essential but they are not set in concrete (totally fixed). If they are not working, or
if the circumstances change, then plans need to change too. Monitoring and evaluation are
both tools which help a project or organisation know when plans are not working, and when
circumstances have changed. They give management the information it needs to make
decisions about the project or organisation, about changes that are necessary in strategy or
plans. Through this, the constants remain the pillars of the strategic framework: the problem
analysis, the vision, and the values of the project or organisation. Everything else is
negotiable. Getting something wrong is not a crime. Failing to learn from past mistakes
because you are not monitoring and evaluating, is.

The effect of monitoring and evaluation can be seen in the following cycle. Note that you
will monitor and adjust several times before you are ready to evaluate and replan.

The cycle runs: Plan → Implement → Monitor → Reflect/learn/decide/adjust → Implement → Monitor → Reflect/learn/decide/adjust → ... → Evaluate/learn/decide → re-plan.

It is important to recognize that monitoring and evaluation are not magic wands that can be
waved to make problems disappear, or to cure them, or to miraculously make changes without
a lot of hard work being put in by the project or organisation. In themselves, they are not a
solution, but they are valuable tools. Monitoring and evaluation can:

 Help you identify problems and their causes;

 Suggest possible solutions to problems;

 Raise questions about assumptions and strategy;

 Push you to reflect on where you are going and how you are getting there;

 Provide you with information and insight;

 Encourage you to act on the information and insight;

 Increase the likelihood that you will make a positive development difference.

Planning, monitoring and evaluation for development results

Good planning combined with effective monitoring and evaluation can play a major role in
enhancing the effectiveness of development programmes and projects. Good planning helps
us focus on the results that matter, while monitoring and evaluation help us learn from past
successes and challenges and inform decision making so that current and future initiatives are
better able to improve people’s lives and expand their choices.

Understanding inter-linkages and dependencies between planning, monitoring
and evaluation

 Without proper planning and clear articulation of intended results, it is not


clear what should be monitored and how; hence monitoring cannot be done
well.
 Without effective planning (clear results frameworks), the basis for evaluation
is weak; hence evaluation cannot be done well.
 Without careful monitoring, the necessary data is not collected; hence
evaluation cannot be done well.
 Monitoring is necessary, but not sufficient, for evaluation.
 Monitoring facilitates evaluation, but evaluation uses additional new data
collection and different frameworks for analysis.
 Monitoring and evaluation of a programme will often lead to changes in
programme plans. This may mean further changing or modifying data

collection for monitoring purposes.

Planning can be defined as the process of setting goals, developing strategies, outlining the
implementation arrangements and allocating resources to achieve those goals. It is important
to note that planning involves looking at a number of different processes:

Identifying the vision, goals or objectives to be achieved


Formulating the strategies needed to achieve the vision and goals

Determining and allocating the resources (financial and other) required to achieve the
vision and goals
Outlining implementation arrangements, which include the arrangements for
monitoring and evaluating progress towards achieving the vision and goals

There is an expression that “failing to plan is planning to fail.” While it is not always true that
those who fail to plan will eventually fail in their endeavors, there is strong evidence to
suggest that having a plan leads to greater effectiveness and efficiency. Not having a plan—
whether for an office, programme or project—is in some ways similar to attempting to build a
house without a blueprint, that is, it is very difficult to know what the house will look like,
how much it will cost, how long it will take to build, what resources will be required, and
whether the finished product will satisfy the owner’s needs. In short, planning helps us define what an organization, programme or project aims to achieve and how it will go about
it.

Monitoring can be defined as the ongoing process by which stakeholders obtain regular
feedback on the progress being made towards achieving their goals and objectives. Contrary
to many definitions that treat monitoring as merely reviewing progress made in
implementing actions or activities, the definition used in this Handbook focuses on reviewing progress against achieving goals. In other words, monitoring
in this Handbook is not only concerned with asking “Are we taking the actions we said we
would take?” but also “Are we making progress on achieving the results that we said we
wanted to achieve?” The difference between these two approaches is extremely important. In
the more limited approach, monitoring may focus on tracking projects and the use of the
agency’s resources. In the broader approach, monitoring also involves tracking strategies and
actions being taken by partners and non-partners, and figuring out what new strategies and
actions need to be taken to ensure progress towards the most important results.

Evaluation is a rigorous and independent assessment of either completed or ongoing


activities to determine the extent to which they are achieving stated objectives and
contributing to decision making. Evaluations, like monitoring, can apply to many things,
including an activity, project, programme, strategy, policy, topic, theme, sector or
organization. The key distinction between the two is that evaluations are done independently
to provide managers and staff with an objective assessment of whether or not they are on
track. They are also more rigorous in their procedures, design and methodology, and
generally involve more extensive analysis. However, the aims of both monitoring and
evaluation are very similar: to provide information that can help inform decisions, improve
performance and achieve planned results.

The distinction between monitoring and evaluation and other oversight activities

Like monitoring and evaluation, inspection, audit, review and research functions are
oversight activities, but they each have a distinct focus and role and should not be confused
with monitoring and evaluation.

Inspection is a general examination of an organizational unit, issue or practice to ascertain


the extent to which it adheres to normative standards, good practices or other criteria, and to make recommendations for improvement or corrective action. It is often performed when there is a
perceived risk of non-compliance.

Audit is an assessment of the adequacy of management controls to ensure the economical


and efficient use of resources; the safeguarding of assets; the reliability of financial and other
information; the compliance with regulations, rules and established policies; the
effectiveness of risk management; and the adequacy of organizational structures, systems
and processes. Evaluation is more closely linked to MfDR and learning, while audit focuses
on compliance.

Reviews, such as rapid assessments and peer reviews, are distinct from evaluation and more
closely associated with monitoring. They are periodic or ad hoc, often light assessments of
the performance of an initiative and do not apply the due process of evaluation or rigor in
methodology. Reviews tend to emphasize operational issues. Unlike evaluations conducted
by independent evaluators, reviews are often conducted by those internal to the subject or the
commissioning organization.

Research is a systematic examination completed to develop or contribute to knowledge of a


particular topic. Research can often feed information into evaluations and other assessments
but does not normally inform decision making on its own.

MORE ABOUT MONITORING AND EVALUATION

Monitoring involves:

 Establishing indicators of efficiency, effectiveness and impact;

 Setting up systems to collect information relating to these indicators;

 Collecting and recording the information;

 Analyzing the information;

 Using the information to inform day-to-day management.

Monitoring is an internal function in any project or organisation.

Evaluation involves:

 Looking at what the project or organisation intended to achieve – what difference did
it want to make? What impact did it want to make?

 Assessing its progress towards what it wanted to achieve, its impact targets.

 Looking at the strategy of the project or organisation. Did it have a strategy? Was it
effective in following its strategy? Did the strategy work? If not, why not?

 Looking at how it worked. Was there an efficient use of resources? What were the
opportunity costs (see Glossary of Terms) of the way it chose to work? How
sustainable is the way in which the project or organisation works? What are the
implications for the various stakeholders in the way the organisation works?

In an evaluation, we look at efficiency, effectiveness and impact (see Glossary of Terms).

There are many different ways of doing an evaluation. Some of the more common terms you
may have come across are:

 Self-evaluation: This involves an organisation or project holding up a mirror to itself


and assessing how it is doing, as a way of learning and improving practice. It takes a
very self-reflective and honest organisation to do this effectively, but it can be an
important learning experience.

 Participatory evaluation: This is a form of internal evaluation. The intention is to


involve as many people with a direct stake in the work as possible. This may mean
project staff and beneficiaries working together on the evaluation. If an outsider is
called in, it is to act as a facilitator of the process, not an evaluator.

 Rapid Participatory Appraisal: Originally used in rural areas, the same


methodology can, in fact, be applied in most communities. This is a qualitative (see
Glossary of Terms) way of doing evaluations. It is semi-structured and carried out by
an interdisciplinary team over a short time. It is used as a starting point for
understanding a local situation and is a quick, cheap, useful way to gather
information. It involves the use of secondary (see Glossary of Terms) data review,
direct observation, semi-structured interviews, key informants, group interviews,
games, diagrams, maps and calendars. In an evaluation context, it allows one to get
valuable input from those who are supposed to be benefiting from the development
work. It is flexible and interactive.

 External evaluation: This is an evaluation done by a carefully chosen outsider or
outsider team.

 Interactive evaluation: This involves a very active interaction between an outside


evaluator or evaluation team and the organisation or project being evaluated.
Sometimes an insider may be included in the evaluation team.

Why evaluate?

Conducting an evaluation is considered good practice in managing an intervention. The


monitoring phase of project evaluation allows us to track progress and identify issues early
during implementation, thus providing an opportunity to take corrective action or make
proactive improvements as required. End of project evaluation allows you to manage projects
and programs based on the results of the activities you undertake, and therefore provides
accountability to those that fund projects. It also allows you to repeat activities that have been
demonstrated to work, and you can improve on, or let go of, activities that do not work.

REASONS TO UNDERTAKE AN EVALUATION


To assess whether a project has achieved its intended goals
To understand how the project has achieved its intended purpose, or why it may not
have done so

To identify how efficient the project was in converting resources (funded and in-kind)
into activities, objectives and goals
To assess how sustainable and meaningful the project was for participants

To inform decision makers about how to build on or improve a project

Evaluation is not just about demonstrating success, it is also about learning why things don’t
work. As such, identifying and learning from mistakes is one of the key parts of evaluation.
Evaluation can be a confronting undertaking, especially if you come to it unprepared. This
guide, along with the online evaluation toolbox, will allow you to plan and undertake an
evaluation of your project. An important thing to consider, and something that may lighten
the load, is to remember that evaluation is not about finding out about everything, but about
finding the things that matter.

Evaluation Questions
Evaluation questions should be developed up-front, and in collaboration with the primary
audience(s) and other stakeholders to whom you intend to report. Evaluation questions go
beyond measurements to ask the higher order questions such as whether the intervention is
worth it, or could it have been achieved in another way (see Table 1). Overall, evaluation
questions should lead to further action such as project improvement, project mainstreaming,
or project redesign.
In order to answer evaluation questions, monitoring questions must be developed that will
inform what data will be collected through the monitoring process. The monitoring questions
will ideally be answered through the collection of quantitative and qualitative data. It is
important to not leap straight into the collection of data, without thinking about the evaluation
questions. Jumping straight in may lead to collecting data that provides no useful information,
which is a waste of time and money.

Table 1. Broad types of evaluation questions


After Davidson & Wehipeihana (2010)

Process: How well was the project designed and implemented (i.e. its quality)?

Outcome: Did the project meet the overall needs? Was any change significant, and was it attributable to the project? How valuable are the outcomes to the organisation, other stakeholders, and participants?

Learnings: What worked and what did not? What were the unintended consequences? What were the emergent properties?

Investment: Was the project cost-effective? Was there another alternative that may have represented a better investment?

What next: Can the project be scaled up? Can the project be replicated elsewhere? Is the change self-sustaining or does it require continued intervention?

Theory of change: Does the project have a theory of change? Is the theory of change reflected in the program logic? How can the program logic inform the research questions?

Terminology
The language and terms used in evaluation can make the whole process quite daunting. This
is accentuated by many references providing different definitions for the same term. The
important thing for you to do is not to get bogged down in all the jargon, but to make sure you
use the same terms consistently within your evaluation. It may help to provide a brief
definition of the terms you select in your evaluation report (see Table 2), so that readers
know what you mean when you use words that may have different meanings.

Table 2. Evaluation Terminology


Activities: The tasks that are required to be done in order to achieve project outputs (e.g. run a workshop, conduct an audit).

Efficiency: Refers to the extent to which activities, outputs and/or the desired effects are achieved with the lowest possible use of resources/inputs (funds, expertise, time).

Effectiveness: The extent to which the project meets its intended outputs and/or objectives.

Impact: Refers to the measures of change that result from the outputs being completed, such as responses to surveys, requests for further information, or number of products taken up (e.g. lights installed). Impact is sometimes used in place of short-term outcomes.

Qualitative: Refers to data that consist of words or communication (whether that is text, voice, or visual).

Quantitative: Refers to data that are counts or numbers.

Outcome: Measures the change in behavior or resource use in relation to the goal of the project. Outcomes are usually considered in terms of their expected timeframe: short-term (or immediate), intermediate, and long-term. Without thorough outcome evaluation, it is not possible to demonstrate whether a behavior change project has had the desired effect. It is important to capture both intended and unintended outcomes.

Outputs: Products or services delivered as part of the project's activities (e.g. workshops, audits, brochures).

Relevance: The extent to which the project purpose and goal meet the target group's needs or priorities.

Sustainability: In terms of a project, sustainability refers to the likelihood of the change continuing once the intervention activities have ceased.

Types of evaluation
Evaluation can be characterized as being either formative or summative (see Table 3).
Broadly (and this is not a rule), formative evaluation looks at what leads to an intervention
working (the process), whereas summative evaluation looks at the short-term to long-term
outcomes of an intervention on the target group. Formative evaluation takes place in the lead
up to the project, as well as during the project in order to improve the project design as it is
being implemented (continual improvement). Formative evaluation often lends itself to
qualitative methods of inquiry. Summative evaluation takes place during and following the
project implementation, and is associated with more objective, quantitative methods. The
distinction between formative and summative evaluation can become blurred. Generally it is
important to know both how an intervention works, as well as if it worked. It is therefore
important to capture and assess both qualitative and quantitative data.

Table 3. Types of evaluation


After Owen & Rogers (1999)

Types of evaluation, ranging from formative to summative: Proactive, Clarificative, Interactive, Monitoring, Outcome Evaluation.

Proactive
When: pre-project.
Why: to understand or clarify the need for the project.

Clarificative
When: project development.
Why: to make clear the theory of change on which the project is based.

Interactive
When: project implementation.
Why: to improve the project's design (continual improvement) as it is rolled out.

Monitoring
When: project implementation.
Why: to ensure that the project's activities are being delivered efficiently and effectively.

Outcome Evaluation
When: project implementation and post-project.
Why: to assess whether the project has met its goals, whether there were any unintended consequences, what the learnings were, and how to improve.

Participatory Monitoring and Evaluation


Participatory monitoring and evaluation refers to getting all project stakeholders, particularly
the target group, involved in a project evaluation (and also the design of the evaluation). The
level of participation can vary, from getting the target group to set objectives, targets, and
data sources themselves, to getting participants to gather data, tell their story, and interpret
results. Participatory evaluation generally requires good facilitation skills and commitment
from all the stakeholders, including the participants, to the process.

Participatory evaluation is about valuing and using the knowledge of insiders (target group
and other stakeholders) to provide meaningful targets and information, as opposed to solely
relying on objective and external indicators of change. It also refers to getting stakeholders
involved in the collection and interpretation of results.

Participatory evaluation is not always appropriate in every project. There are a number of
constraints which may impact on the quality of the process, and hence its overall value to the
evaluation. These include:

Cost and time involved in building capacity to implement participatory evaluation


Cost and time involved in collecting and analyzing data
The process can be unpredictable and result in unexpected consequences and this may
require facilitation skills and risk management processes.

ADVANTAGES AND DISADVANTAGES OF INTERNAL AND


EXTERNAL EVALUATIONS

Internal evaluation

Advantages:

 The evaluators are very familiar with the work, the organizational culture and the aims and objectives.

 Sometimes people are more willing to speak to insiders than to outsiders.

 An internal evaluation is very clearly a management tool, a way of self-correcting, and much less threatening than an external evaluation. This may make it easier for those involved to accept findings and criticisms.

 An internal evaluation will cost less than an external evaluation.

Disadvantages:

 The evaluation team may have a vested interest in reaching positive conclusions about the work or organisation. For this reason, other stakeholders, such as donors, may prefer an external evaluation.

 The team may not be specifically skilled or trained in evaluation.

 The evaluation will take up a considerable amount of organizational time – while it may cost less than an external evaluation, the opportunity costs (see Glossary of Terms) may be high.

External evaluation (done by a team or person with no vested interest in the project)

Advantages:

 The evaluation is likely to be more objective as the evaluators will have some distance from the work.

 The evaluators should have a range of evaluation skills and experience.

 Sometimes people are more willing to speak to outsiders than to insiders.

 Using an outside evaluator gives greater credibility to findings, particularly positive findings.

Disadvantages:

 Someone from outside the organisation or project may not understand the culture or even what the work is trying to achieve.

 Those directly involved may feel threatened by outsiders and be less likely to talk openly and co-operate in the process.

 External evaluation can be very costly.

 An external evaluator may misunderstand what you want from the evaluation and not give you what you need.

To improve the chances of success, attention needs to be placed on some of the common areas
of weakness in programmes and projects. Four main areas for focus are identified
consistently:

1. Planning and programme and project definition—Projects and programmes have a


greater chance of success when the objectives and scope of the programmes or projects are

properly defined and clarified. This reduces the likelihood of experiencing major challenges
in implementation.

2. Stakeholder involvement—High levels of engagement of users, clients and


stakeholders in programmes and projects are critical to success.

3. Communication—Good communication results in strong stakeholder buy-in and


performance. This clarity helps to ensure optimum use of resources.

4. Monitoring and evaluation—Programmes and projects with strong monitoring and


evaluation components tend to stay on track. Additionally, problems are often detected
earlier, which reduces the likelihood of having major cost overruns or time delays later.


SELECTING AN EXTERNAL EVALUATOR OR EVALUATION TEAM

Qualities to look for in an external evaluator or evaluation team:

An understanding of development issues.


An understanding of organizational issues.
Experience in evaluating development projects, programmes or organizations.
A good track record with previous clients.
Research skills.
A commitment to quality.
A commitment to deadlines.
Objectivity, honesty and fairness.
Logic and the ability to operate systematically.
Ability to communicate verbally and in writing.
A style and approach that fits with your organisation.
Values that are compatible with those of the organisation.
Reasonable rates (fees), measured against the going rates.

How do you find all this out? By asking lots of questions!


When you decide to use an external evaluator:

Check his/her/their references.

Meet with the evaluators before making a final decision.

Communicate what you want clearly – good Terms of Reference (see Glossary of
Terms) are the foundation of a good contractual relationship.

Negotiate a contract which makes provision for what will happen if time frames and
output expectations are not met.

Ask for a work plan with outputs and timelines.

Maintain contact – ask for interim reports as part of the contract – either verbal or
written.

Build in formal feedback times.

Do not expect any evaluator to be completely objective. S/he will have opinions and ideas –
you are not looking for someone who is a blank page! However, his/her opinions must be
clearly stated as such, and must not be disguised as “facts”. It is also useful to have some
idea of his/her (or their) approach to evaluation.

Chapter 8

PUTTING PLANNING, MONITORING AND EVALUATION TOGETHER:

RESULTS-BASED MANAGEMENT

Planning, monitoring and evaluation come together as RBM. RBM is defined as “a broad
management strategy aimed at achieving improved performance and demonstrable results,”
and has been adopted by many multilateral development organizations, bilateral development
agencies and public administrations throughout the world (as noted earlier, some of these
organizations now refer to RBM as MfDR to place the emphasis on development rather than
organizational results).

Good RBM is an ongoing process. This means that there is constant feedback, learning and
improving. Existing plans are regularly modified based on the lessons learned through
monitoring and evaluation, and future plans are developed based on these lessons.

Monitoring is also an ongoing process. The lessons from monitoring are discussed
periodically and used to inform actions and decisions. Evaluations should be done for
programmatic improvements while the programme is still ongoing and also inform the
planning of new programmes. This ongoing process of doing, learning and improving is
what is referred to as the RBM life-cycle approach, which is depicted in Figure 1.

RBM is concerned with learning, risk management and accountability. Learning not only
helps improve results from existing programmes and projects, but also enhances the capacity
of the organization and individuals to make better decisions in the future and improves the
formulation of future programmes and projects. Since there are no perfect plans, it is essential
that managers, staff and stakeholders learn from the successes and failures of each
programme or project.

There are many risks and opportunities involved in pursuing development results.
RBM systems and tools should help promote awareness of these risks and opportunities, and
provide managers, staff, stakeholders and partners with the tools to mitigate risks or pursue
opportunities.

RBM practices and systems are most effective when they are accompanied by clear
accountability arrangements and appropriate incentives that promote desired behavior. In
other words, RBM should not be seen simply in terms of developing systems and tools to plan
monitor and evaluate results. It must also include effective measures for promoting a culture
of results orientation and ensuring that persons are accountable for both the results achieved
and their actions and behavior.

The main objectives of good planning, monitoring and evaluation—that is, RBM— are to:

Support substantive accountability to governments, beneficiaries, donors, other


partners and stakeholders.
Prompt corrective action

Ensure informed decision making


Promote risk management
Enhance organization and individual learning

DIFFERENT APPROACHES TO EVALUATION


Goal-based
Major purpose: Assessing achievement of goals and objectives.
Typical focus questions: Were the goals achieved? Efficiently? Were they the right goals?
Likely methodology: Comparing baseline and progress data; finding ways to measure indicators.

Decision-making
Major purpose: Providing information.
Typical focus questions: Is the project effective? Should it continue? How might it be modified?
Likely methodology: Assessing the range of options related to the project context, inputs, process, and product. Establishing some kind of decision-making consensus.

Goal-free
Major purpose: Assessing the full range of project effects, intended and unintended.
Typical focus questions: What are all the outcomes? What value do they have?
Likely methodology: Independent determination of needs and standards to judge project worth. Qualitative and quantitative techniques to uncover any possible results.

Expert judgement
Major purpose: Use of expertise.
Typical focus questions: How does an outside professional rate this project?
Likely methodology: Critical review based on experience, informal surveying, and subjective insights.

Our feeling is that the best evaluators use a combination of all these approaches, and that an
organisation can ask for a particular emphasis but should not exclude findings that make use
of a different approach.

Planning for monitoring and evaluation

Monitoring and evaluation should be part of your planning process. It is very difficult to go
back and set up monitoring and evaluation systems once things have begun to happen. You
need to begin gathering information about performance and in relation to targets from the
word go. The first information gathering should, in fact, take place when you do your needs
assessment (see the toolkit on overview of planning, the section on doing the ground work).
This will give you the information you need against which to assess improvements over time.

When you do your planning process, you will set indicators (see Glossary of Terms). These
indicators provide the framework for your monitoring and evaluation system. They tell you
what you want to know and the kinds of information it will be useful to collect. In this
section we look at:

 What do we want to know? This includes looking at indicators for both internal
issues and external issues. (Also look at the examples of indicators later in this
toolkit.)

 Different kinds of information.

 How will we get information?

 Who should be involved?

There is not one set way of planning for monitoring and evaluation. The ideas included in the
toolkits on overview of planning, strategic planning and action planning will help you to
develop a useful framework for your monitoring and evaluation system. If you are familiar
with logical framework analysis and already use it in your planning, this approach lends itself
well to planning a monitoring and evaluation system.

WHAT DO WE WANT TO KNOW?

What we want to know is linked to what we think is important. In development work, what
we think is important is linked to our values.

Most work in civil society organizations is underpinned by a value framework. It is this
framework that determines the standards of acceptability in the work we do. The central
values on which most development work is built are:

 Serving the disadvantaged;

 Empowering the disadvantaged;

 Changing society, not just helping individuals;

 Sustainability;

 Efficient use of resources.

So, the first thing we need to know is: Is what we are doing and how we are doing it meeting
the requirements of these values? In order to answer this question, our monitoring and
evaluation system must give us information about:

 Who is benefiting from what we do? How much are they benefiting?

 Are beneficiaries passive recipients or does the process enable them to have some
control over their lives?

 Are there lessons in what we are doing that have a broader impact than just what is
happening on our project?

 Can what we are doing be sustained in some way for the long-term, or will the impact
of our work cease when we leave?

 Are we getting optimum outputs for the least possible amount of inputs?

Do we want to know about the process or the product?

Should development work be evaluated in terms of the process (the way in which the work is
done) or the product (what the work produces)? Often, this debate is more about excusing
inadequate performance than it is about a real issue. Process and product are not separate in
development work. What we achieve and how we achieve it is often the very same thing. If
the goal is development, based on development values, then sinking a well without the
transfer of skills for maintaining and managing the well is not enough. Saying: “It was
taking too long that way. We couldn’t wait for them to sort themselves out. We said we’d

sink a well and we did” is not enough. But neither is: “It doesn’t matter that the well hasn’t
happened yet. What’s important is that the people have been empowered.”

Both process and product should be part of your monitoring and evaluation system.

But how do we make process and product and values measurable? The answer lies in the
setting of indicators and this is dealt with in the sub-section that follows.

WHAT DO WE WANT TO KNOW?

Indicators

Indicators are measurable or tangible signs that something has been done or that something
has been achieved. In some studies, for example, an increased number of television aerials in
a community has been used as an indicator that the standard of living in that community has
improved. An indicator of community empowerment might be an increased frequency of
community members speaking at community meetings. If one were interested in the gender
impact of, for example, drilling a well in a village, then you could use “increased time for
involvement in development projects available to women” as an indicator. Common
indicators for something like overall health in a community are the infant/child/maternal
mortality rate, the birth rate, and nutritional status and birth weights. You could also look at
less direct indicators such as the extent of immunisation, the extent of potable (drinkable)
water available and so on. (See further examples of indicators later in this toolkit, in the
section on examples.)

Indicators are an essential part of a monitoring and evaluation system because they are what
you measure and/or monitor. Through the indicators you can ask and answer questions such
as:

 Who?

 How many?

 How often?

 How much?

But you need to decide early on what your indicators are going to be so that you can begin
collecting the information immediately. You cannot use the number of television aerials in a
community as a sign of improved standard of living if you don’t know how many there were
at the beginning of the process.

Some people argue that the problem with measuring indicators is that other variables (or
factors) may have impacted on them as well. Community members may be participating
more in meetings because a number of new people with activist backgrounds have come to
live in the area. Women may have more time for development projects because the men
of the village have been attending a gender workshop and have made a decision to share the
traditionally female tasks. And so on. While this may be true, within a project it is possible
to identify other variables and take them into account. It is also important to note that, if
nothing is changing, if there is no improvement in the measurement of the key indicators
identified, then your strategy is not working and needs to be rethought.

To see a method for developing indicators, go to the next page.

To see examples of indicators, go to examples.

DEVELOPING INDICATORS

Step 1: Identify the problem situation you are trying to address. The following might be
problems:

 Economic situation (unemployment, low incomes etc)

 Social situation (housing, health, education etc)

 Cultural or religious situation (not using traditional languages, low attendance at religious services etc)

 Political or organizational situation (ineffective local government, faction fighting etc)

There will be other situations as well.

Step 2: Develop a vision for how you would like the problem areas to be/look. (See the
toolkit on Strategic Planning, the section on vision.) This will give you impact indicators.

What will tell you that the vision has been achieved? What signs will you see that you can
measure that will “prove” that the vision has been achieved? For example, if your vision was
that the people in your community would be healthy, then you can use health indicators to
measure how well you are doing. Has the infant mortality rate gone down? Do fewer women
die during child-birth? Has the HIV/AIDS infection rate been reduced? If you can answer
“yes” to these questions then progress is being made.

Step 3: Develop a process vision for how you want things to be achieved. This will give you
process indicators.

If, for example, you want success to be achieved through community efforts and participation,
then your process vision might include things like community health workers from the
community trained and offering a competent service used by all; community organizes clean-
up events on a regular basis, and so on.

Step 4: Develop indicators for effectiveness.

For example, if you believe that you can increase the secondary school pass rate by upgrading
teachers, then you need indicators that show you have been effective in upgrading the
teachers e.g. evidence from a survey in the schools, compared with a baseline survey.

Step 5: Develop indicators for your efficiency targets.

Here you can set indicators such as: planned workshops are run within the stated timeframe,
costs for workshops are kept to a maximum of US$ 2.50 per participant, no more than 160
hours in total of staff time to be spent on organizing a conference; no complaints about
conference organisation etc.

With this framework in place, you are in a position to monitor and evaluate efficiency,
effectiveness and impact (see Glossary of Terms).
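To make this concrete, here is a minimal sketch of how efficiency indicators like those above could be checked against their targets; the indicator names, targets and actual figures are hypothetical, and a spreadsheet or database could do the same job:

    # Hypothetical efficiency indicators, each with a target and an actual value.
    efficiency_indicators = {
        "cost_per_participant_usd": {"target_max": 2.50, "actual": 2.10},
        "staff_hours_for_conference": {"target_max": 160, "actual": 172},
        "workshops_run_on_time_pct": {"target_min": 90, "actual": 95},
    }

    for name, data in efficiency_indicators.items():
        # An indicator meets its target when the actual value stays within the limit.
        if "target_max" in data:
            on_target = data["actual"] <= data["target_max"]
        else:
            on_target = data["actual"] >= data["target_min"]
        print(f"{name}: actual {data['actual']} -> {'meets target' if on_target else 'OFF TARGET'}")
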

DIFFERENT KINDS OF INFORMATION – QUANTITATIVE AND QUALITATIVE

Information used in monitoring and evaluation can be classified as:

 Quantitative; or

 Qualitative.

Quantitative measurement tells you “how much or how many”. How many people attended a
workshop, how many people passed their final examinations, how much a publication cost,
how many people were infected with HIV, how far people have to walk to get water or
firewood, and so on. Quantitative measurement can be expressed in absolute numbers (3 241
women in the sample are infected) or as a percentage (50% of households in the area have

television aerials). It can also be expressed as a ratio (one doctor for every 30 000 people).
One way or another, you get quantitative (number) information by counting or measuring.
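A quick hypothetical calculation (the figures are invented for illustration) shows the three ways of expressing the same counts:

    # Hypothetical survey counts, expressed as an absolute number, a percentage and a ratio.
    households_surveyed = 6482
    households_with_aerials = 3241
    doctors = 4
    population = 120_000

    percentage = 100 * households_with_aerials / households_surveyed
    people_per_doctor = population / doctors

    print(f"{households_with_aerials} households have aerials ({percentage:.0f}% of those surveyed)")
    print(f"One doctor for every {people_per_doctor:,.0f} people")
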

Qualitative measurement tells you how people feel about a situation or about how things are
done or how people behave. So, for example, although you might discover that 50% of the
teachers in a school are unhappy about the assessment criteria used, this is still qualitative
information, not quantitative information. You get qualitative information by asking,
observing, interpreting.

Some people find quantitative information comforting – it seems solid and reliable and
“objective”. They find qualitative information unconvincing and “subjective”. It is a mistake
to say that “quantitative information speaks for itself”. It requires just as much interpretation
in order to make it meaningful as does qualitative information. It may be a “fact” that
enrolment of girls at schools in some developing countries is dropping – counting can tell us
that, but it tells us nothing about why this drop is taking place. In order to know that, you
would need to go out and ask questions – to get qualitative information. Choice of indicators
is also subjective, whether you use quantitative or qualitative methods to do the actual
measuring. Researchers choose to measure school enrolment figures for girls because they
believe that this tells them something about how women in a society are treated or viewed.

The monitoring and evaluation process requires a combination of quantitative and qualitative
information in order to be comprehensive. For example, we need to know what the school
enrolment figures for girls are, as well as why parents do or do not send their children to
school. Perhaps enrolment figures are higher for girls than for boys because a particular
community sees schooling as a luxury and prefers to train boys to do traditional and practical
tasks such as taking care of animals. In this case, the higher enrolment of girls does not
necessarily indicate higher regard for girls.

HOW WILL WE GET INFORMATION?

Your methods for information collecting need to be built into your action planning. You
should be aiming to have a steady stream of information flowing into the project or
organisation about the work and how it is done, without overloading anyone. The
information you collect must mean something: don’t collect information to keep busy, only
do it to find out what you want to know, and then make sure that you store the information in
such a way that it is easy to access.

Usually you can use the reports, minutes, attendance registers and financial statements that
are part of your work anyway as a source of monitoring and evaluation information.

However, sometimes you need to use special tools that are simple but useful to add to the
basic information collected in the natural course of your work. Some of the more common
ones are:

 Case studies

 Recorded observation

 Diaries

 Recording and analysis of important incidents (called “critical incident analysis”)

 Structured questionnaires

 One-on-one interviews

 Focus groups

 Sample surveys

 Systematic review of relevant official statistics.

WHO SHOULD BE INVOLVED?

Almost everyone in the organisation or project will be involved in some way in collecting
information that can be used in monitoring and evaluation. This includes:

 The administrator who takes minutes at a meeting or prepares and circulates the
attendance register;

 The fieldworker who writes reports on visits to the field;

 The bookkeeper who records income and expenditure.

In order to maximize their efforts, the project or organisation needs to:

 Prepare reporting formats that include measurement, either quantitative or qualitative, of important indicators. For example, if you want to know about community participation in activities or women’s participation specifically, structure the fieldworker’s reporting format so that s/he has to comment on this, backing up observations with facts. (Look at the fieldworker report format given later in this toolkit.)

 Prepare recording formats that include measurement, either quantitative or qualitative, of important indicators. For example, if you want to know how many men and how many women attended a meeting, include a gender column on your attendance list.

 Record information in such a way that it is possible to work out what you need to know. For example, if you need to know whether a project is sustainable financially, and which elements of it cost the most, then make sure that your bookkeeping records reflect the relevant information.

It is a useful principle to look at every activity and say: What do we need to know about this
activity, both process (how it is being done) and product (what it is meant to achieve), and
what is the easiest way to find it out and record it as we go along?
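As a small illustration of the gender-column idea above, a register recorded that way can be tallied in a few lines; the names and the file layout here are hypothetical:

    import csv
    from collections import Counter
    from io import StringIO

    # A hypothetical attendance register that already includes a gender column,
    # so participation by gender can be counted without any extra data collection.
    register = StringIO(
        "name,gender,meeting_date\n"
        "A. Dlamini,F,2018-03-01\n"
        "B. Okoro,M,2018-03-01\n"
        "C. Mensah,F,2018-03-01\n"
    )

    attendance_by_gender = Counter(row["gender"] for row in csv.DictReader(register))
    print(attendance_by_gender)  # Counter({'F': 2, 'M': 1})
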

Designing a monitoring and/or evaluation process

As there are differences between the design of a monitoring system and that of an evaluation
process, we deal with them separately here.

Under monitoring we look at the process an organisation could go through to design a monitoring system.

Under evaluation we look at:

 Purpose

 Key evaluation questions

 Methodology.

MONITORING

When you design a monitoring system, you are taking a formative viewpoint and establishing
a system that will provide useful information on an ongoing basis so that you can improve
what you do and how you do it.

For a case study of how an organisation went about designing a monitoring system, go to the
section with examples, and the example given of designing a monitoring system.

Chapter 9

DESIGNING A MONITORING SYSTEM

Below is a step-by-step process you could use in order to design a monitoring system for your
organisation or project.

Step 1: At a workshop with appropriate staff and/or volunteers, and run by you or a
consultant:

Introduce the concepts of efficiency, effectiveness and impact (see Glossary of Terms).

Explain that a monitoring system needs to cover all three.

Generate a list of indicators for each of the three aspects.

Clarify what variables (see Glossary of Terms) need to be linked. So, for example, do
you want to be able to link the age of a teacher with his/her qualifications in order to
answer the question: Are older teachers more or less likely to have higher
qualifications?

Clarify what information the project or organisation is already collecting.

Step 2: Turn the input from the workshop into a brief for the questions your monitoring
system must be able to answer. Depending on how complex your requirements are, and what
your capacity is, you may decide to go for a computerized data base or a manual one. If you
want to be able to link many variables across many cases (e.g. participants, schools, parent
involvement, resources, urban/rural etc), you may need to go the computer route. If you have
a few variables, you can probably do it manually. The important thing is to begin by knowing
what variables you are interested in and to keep data on these variables. Linking and analysis
can take place later. (These concepts are complicated. It will help you to read the case study
in the examples section of the toolkit.)

From the workshop you will know what you want to monitor. You will have the
indicators of efficiency, effectiveness and impact that have been prioritised. You will then
choose the variables that will help you answer the questions you think are important.

So, for example, you might have the impact indicator “safer sex options are chosen” as a sign that “young people are now making informed and mature
lifestyle choices”. The variables that might affect the indicator include:

Age

Gender

Religion

Urban/rural

Economic category

Family environment

Length of exposure to your project’s initiative

Number of workshops attended.

By keeping the right information you will be able to answer questions such as:

Does age make a difference to the way our message is received?

Does economic category make a difference, i.e. do young people in richer areas respond better or worse to the message, or does it make no difference?

Does the number of workshops attended make a difference to the impact?

Answers to these kinds of questions enable a project or organisation to make decisions about
what they do and how they do it, to make informed changes to programmes, and to measure
their impact and effectiveness. Answers to questions such as:

Do more people attend sessions that are organized well in advance?

Do more schools participate when there is no charge?

Do more young people attend when sessions are over weekends or in the
evenings?

Does it cost less to run a workshop in the community, or to bring people to our training centre to run the workshop?

Answers to these questions enable the project or organisation to measure and improve its efficiency.
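As a rough sketch of how keeping the indicator together with its linked variables makes these questions answerable, consider the following; the record fields, categories and values are hypothetical:

    from collections import defaultdict

    # Hypothetical monitoring records: one row per participant, keeping the impact
    # indicator ("safer_sex_choice") alongside the variables we want to link to it.
    records = [
        {"age_group": "13-15", "workshops_attended": 1, "safer_sex_choice": False},
        {"age_group": "13-15", "workshops_attended": 3, "safer_sex_choice": True},
        {"age_group": "16-18", "workshops_attended": 2, "safer_sex_choice": True},
        {"age_group": "16-18", "workshops_attended": 4, "safer_sex_choice": True},
    ]

    def rate_by(variable):
        # Share of participants choosing safer sex, grouped by one variable.
        groups = defaultdict(list)
        for record in records:
            groups[record[variable]].append(record["safer_sex_choice"])
        return {value: sum(choices) / len(choices) for value, choices in groups.items()}

    print(rate_by("age_group"))            # does age make a difference?
    print(rate_by("workshops_attended"))   # does attendance make a difference?

The same grouping logic applies to any of the variables listed above, whether the records are kept manually, in a spreadsheet or in a database.
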

Step 3: Decide how you will collect the information you need (see collecting information)
and where it will be kept (on computer, in manual files).

Step 4: Decide how often you will analyze the information – this means putting it together
and trying to answer the questions you think are important.

Step 5: Collect, analyze, report.

EVALUATION

Designing an evaluation process means being able to develop Terms of Reference for such a
process (if you are the project or organisation) or being able to draw up a sensible proposal to
meet the needs of the project or organisation (if you are a consultant).

The main sections in Terms of Reference for an evaluation process usually include:

 Background: This is background to the project or organisation, something about the
problem identified, what you do, how long you have existed, why you have decided to
do an evaluation.

 Purpose: Here you would say what it is the organisation or project wants the
evaluation to achieve.

 Key evaluation questions: What the central questions are that the evaluation must
address.

 Specific objectives: What specific areas, internal and/or external, you want the
evaluation to address. So, for example, you might want the evaluation to include a
review of finances, or to include certain specific programme sites.

 Methodology: Here you might give broad parameters of the kind of approach you
favor in evaluation (see the section on more about monitoring and evaluation). You
might also suggest the kinds of techniques you would like the evaluation team to use.

 Logistical issues: These would include timing, costing, and requirements of team
composition and so on.

For more on some of the more difficult components of Terms of Reference, see the following
pages.

Purpose

The purpose of an evaluation is the reason why you are doing it. It goes beyond what you
want to know to why you want to know it. It is usually a sentence or, at most, a paragraph. It
has two parts:

 What you want evaluated;

 To what end you want it done.

Examples of an evaluation purpose could be:

To provide the organisation with information needed to make decisions about the future of the
project.

To assess whether the organisation/project is having the planned impact in order to decide
whether or not to replicate the model elsewhere.

To assess the programme in terms of effectiveness, impact on the target group, efficiency and
sustainability in order to improve its functioning.

The purpose gives some focus to the broad evaluation process.

Key evaluation questions

The key evaluation questions are the central questions you want the evaluation process to
answer. They are not simple questions. You can seldom answer “yes” or “no” to them. A useful evaluation question:

 Is thought-provoking;

 Challenges assumptions;

 Focuses inquiry and reflection;

 Raises many additional questions.

Some examples of key evaluation questions related to a project purpose:

The purpose of the evaluation is to assess how efficient the project is in delivering benefits to
the identified community in order to inform Board decisions about continuity and
replicability.

Key evaluation questions:

 Who is currently benefiting from the project and in what ways?

 Do the inputs (in money and time) justify the outputs and, if so/if not, on what basis is
this claim justified?

 What would improve the efficiency, effectiveness and impact of the current project?

 What are the lessons that can be learned from this project in terms of replicability?

Note that none of these questions deals with a specific element or area of the internal or
external functioning of the project or organisation. Most would require the evaluation team to
deal with a range of project or organizational elements in order to answer them.

Other examples of evaluation questions might be:

 What are the most effective ways in which a project of this kind can address the
problem identified?
 To what extent does the internal functioning and structure of the organisation impact
positively on the programme work?

 What learnings from this project would have applicability across the full
development spectrum?

Clearly, there could be many, many examples. Our experience has shown us that, when an
evaluation process is designed with such questions in mind, it produces far more interesting
insights than simply asking obvious questions such as: Does the Board play a useful role in
the organisation? Or: What impact are we having?

Methodology

“Methodology” as opposed to “methods” deals more with the kind of approach you use in
your evaluation process. (See also more about monitoring and evaluation earlier in the
toolkit). You could; for example, commission or do an evaluation process that looked almost
entirely at written sources, primary or secondary: reports, data sheets, minutes and so on. Or
you could ask for an evaluation process that involved getting input from all the key
stakeholder groups. Most terms of reference will ask for some combination of these but they
may also specify how they want the evaluation team to get input from stakeholder groups for
example:

 Through a survey;

 Through key informants;

 Through focus groups.

Here too one would expect to find some indication of reporting formats: Will all reporting be
written? Will the team report to management, or to all staff, or to staff and Board and
beneficiaries? Will there be interim reports or only a final report? What sort of evidence
does the organisation or project require to back up evaluator opinions? Who will be involved
in analysis?

The methodology section of Terms of Reference should provide a broad framework for how
the project or organisation wants the work of the evaluation done.

Collecting Information

Here we look in detail at:

 Baselines and damage control;

 Methods.

By damage control we mean what you need to do if you failed to get baseline information
when you started out.

Chapter 10

BASELINES AND DAMAGE CONTROL

Ideally, if you have done your planning well and collected information about the situation at
the beginning of your intervention, you will have baseline data.

Baseline data is the information you have about the situation before you do anything. It is
the information on which your problem analysis is based. It is very difficult to measure the
impact of your initiative if you do not know what the situation was when you began it. (See
also the toolkit on overview of planning, the section on doing the ground work.) You need
baseline data that is relevant to the indicators you have decided will help you measure the
impact of your work.
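A simple hypothetical calculation shows why the baseline figure matters; the numbers are invented:

    # Hypothetical impact indicator measured at the start (baseline) and again later.
    baseline_infant_mortality = 45.0   # deaths per 1 000 live births before the project
    current_infant_mortality = 38.0    # the same indicator measured some years later

    change = current_infant_mortality - baseline_infant_mortality
    percent_change = 100 * change / baseline_infant_mortality
    print(f"Change: {change:+.1f} per 1 000 live births ({percent_change:+.1f}%)")
    # Without the baseline of 45.0, the later figure of 38.0 says nothing about impact.
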

There are different levels of baseline data:

 General information about the situation, often available in official statistics e.g. infant
mortality rates, school enrolment by gender, unemployment rates, literacy rates and so
on. If you are working in a particular geographical area, then you need information
for that area. If it is not available in official statistics, you may need to do some
information gathering yourselves. This might involve house-to-house surveying,
either comprehensively or using sampling (see the section after this on methods), or
visiting schools, hospitals etc. Focus on your indicators of impact when you collect
this information.

 If you have decided to measure impact through a sample of people or families with
whom you are working, you will need specific information about those people or
families. So, for example, for families (or business enterprises or schools or whatever
units you are working with) you may want specific information about income, history,
number of people employed, and number of children per classroom and so on. You
will probably get this information from a combination of interviewing and filling in of
basic questionnaires. Again, remember to focus on the indicators which you have
decided are important for your work.
 If you are working with individuals, then you need “intake” information –
documented information about their situation at the time you began working with
them. For example, you might want to know, in addition to age, gender, name and so
on, current income, employment status, current levels of education, amount of money
spent on leisure activities, amount of time spent on leisure activities, ambitions and so
on, for each individual participant. Again, you will probably get the information from
a combination of interviewing and filling in of basic questionnaires, and you should
focus on the indicators which you think are important.

It is very difficult to go back and get this kind of baseline information after you have begun
work and the situation has changed. But what if you didn’t collect this information at the
beginning of the process? There are ways of doing damage control. You can get anecdotal
information (see Glossary of Terms) from those who were involved at the beginning and you
can ask participants if they remember what the situation was when the project began. You
may not even have decided what you’re important indicators are when you began your work.
You will have to work it out “backwards”, and then try to get information about the situation
related to those indicators when you started out. You can speak to people, look at records and
other written sources such as minutes, reports and so on.

One useful way of making meaningful comparisons where you do not have baseline
information is through using control groups. Control groups are groups of people,
businesses, families or whatever unit you are focusing on, that have not had input from your
project or organisation but are, in most other ways, very similar to those you are working
with.

For example: You have been working with groups of school children around the country in
order to build their self-esteem and knowledge as a way of combating the spread of
HIV/AIDS and preventing teenage pregnancies. After a few years, you want to measure what
impact you have had on these children. You are going to run a series of focus groups (see
methods) with the children at the schools where you have worked. But you did not do any
baseline study with them. How will you know what difference you have made?

You could set up control groups at schools in the same areas, with the same kinds of
profiles, where you have not worked. By asking both the children at those schools you have
worked at, and the children at the schools where you have not worked, the same sorts of
questions about self-esteem, sexual behavior and so on, you should be able to tell whether or

not your work has made any difference. When you set up control groups, you should try to
ensure that:

 The profiles of the control groups are very similar to those of the groups you have
worked with. For example, it might be schools that serve the same economic group,
in the same geographical area, with the same gender ratio, age groups, ethnic or racial
mix.

 There are no other very clear variables that could affect the findings or comparisons.
For example, if another project, doing similar work, has been involved with the
school, this school would not be a good place to establish a control group. You want
a situation as close as possible to what the situation was with the beneficiaries of your project
when you started out.
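Once the same questions have been asked at the project schools and at the control schools, the comparison itself is simple; the scores below are hypothetical, and a real analysis would also consider sample sizes and the other variables mentioned above:

    from statistics import mean

    # Hypothetical questionnaire scores from pupils at project schools and at
    # matched control schools where the project did not work.
    project_schools = [14, 16, 15, 17, 18, 16]
    control_schools = [12, 13, 14, 12, 15, 13]

    print(f"Project schools mean score: {mean(project_schools):.1f}")
    print(f"Control schools mean score: {mean(control_schools):.1f}")
    print(f"Difference (cautiously attributable to the project): "
          f"{mean(project_schools) - mean(control_schools):.1f}")
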

METHODS

In this section we are going to give you a “shopping list” of the different kinds of methods
that can be used to collect information for monitoring and evaluation purposes. You need to
select methods that suit your purposes and your resources. Do not plan to do a
comprehensive survey of 100 000 households if you have two weeks and very little money!
Use sampling in this case.

Sampling (see Glossary of Terms) is another important concept when using various tools for
a monitoring or evaluation process. Sampling is not really a tool in itself, but used with other
tools it is very useful. Sampling answers the question: Who do we survey, interview, include
in a focus group etc? It is a way of narrowing down the number of possible respondents to
make it manageable and affordable. Sometimes it is necessary to be comprehensive. This
means getting to every possible household, or school or teacher or clinic etc. In an
evaluation, you might well use all the information collected in every case during the
monitoring process in an overall analysis. Usually, however, unless numbers are very small,
for in-depth exploration you will use a sample. Sampling techniques include:

 Random sampling (In theory random sampling means doing the sampling on a sort of
lottery basis where, for example, all the names go into a container, are tumbled around
and then the required number are drawn out. This sort of random sampling is very
difficult to use in the kind of work we are talking about. For practical purposes you

are more likely to, for example, select every seventh household or every third person
on the list. The idea is that there is no bias in the selection.);

 Stratified sampling (e.g. every seventh household in the upper income bracket, every
third household in the lower income bracket);

 Cluster sampling (e.g. only those people who have been on the project for at least two
years).
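A rough sketch of how the first two techniques (and a simple lottery-style draw) might be carried out on a list of households; the household list and the strata are hypothetical:

    import random

    # A hypothetical list of 500 household identifiers.
    households = [f"HH-{i:03d}" for i in range(1, 501)]

    # Systematic sampling: every seventh household on the list.
    systematic_sample = households[::7]

    # Simple random sampling: the lottery approach, drawing 50 without replacement.
    random_sample = random.sample(households, 50)

    # Stratified sampling: sample separately within each (hypothetical) income stratum.
    strata = {"upper_income": households[:100], "lower_income": households[100:]}
    stratified_sample = {name: random.sample(group, 10) for name, group in strata.items()}

    print(len(systematic_sample), len(random_sample),
          {name: len(sample) for name, sample in stratified_sample.items()})
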

It is also usually best to use triangulation. This is a fancy word that means that one set of
data or information is confirmed by another. You usually look for confirmation from a
number of sources saying the same thing.
Each tool is described below, together with its usefulness and its disadvantages.

Interviews
Description: These can be structured, semi-structured or unstructured (see Glossary of Terms). They involve asking specific questions aimed at getting information that will enable indicators to be measured. Questions can be open-ended or closed (yes/no answers). Interviews can be a source of qualitative and quantitative information.
Usefulness: Can be used with almost anyone who has some involvement with the project. Can be done in person, on the telephone or even by e-mail. Very flexible.
Disadvantages: Requires some skill in the interviewer. For more on interviewing skills, see later in this toolkit.

Key informant interviews
Description: These are interviews carried out with specialists in a topic, or with someone who may be able to shed a particular light on the process.
Usefulness: As these key informants often have little to do with the project or organisation, they can be quite objective and offer useful insights. They can provide something of the “big picture” where people more involved may focus at the micro (small) level.
Disadvantages: Needs a skilled interviewer with a good understanding of the topic. Be careful not to turn something into an absolute truth (one that cannot be challenged) just because it has been said by a key informant.

Questionnaires
Description: These are written questions used to get written responses which, when analyzed, will enable indicators to be measured.
Usefulness: This tool can save lots of time if it is self-completing, enabling you to get to many people. Done in this way it gives people a feeling of anonymity and they may say things they would not say to an interviewer.
Disadvantages: With people who do not read and write, someone has to go through the questionnaire with them, which means no time is saved and the numbers one can reach are limited. With questionnaires, it is not possible to explore further what people are saying. Questionnaires are also over-used and people get tired of completing them. Questionnaires must be piloted to ensure that questions can be understood and cannot be misunderstood. If the questionnaire is complex and will need computerized analysis, you need expert help in designing it.

Focus groups
Description: In a focus group, a group of about six to 12 people are interviewed together by a skilled interviewer/facilitator with a carefully structured interview schedule. Questions are usually focused around a specific topic or issue.
Usefulness: This can be a useful way of getting opinions from quite a large sample of people.
Disadvantages: It is quite difficult to do random sampling for focus groups and this means findings may not be generalised. Sometimes people influence one another, either to say something or to keep quiet about something. If possible, focus group interviews should be recorded and then transcribed. This requires special equipment and can be very time-consuming.

Community meetings
Description: This involves a gathering of a fairly large group of beneficiaries to whom questions, problems and situations are put for input to help in measuring indicators.
Usefulness: Community meetings are useful for getting a broad response from many people on specific issues. They are also a way of involving beneficiaries directly in an evaluation process, giving them a sense of ownership of the process. They are useful to have at critical points in community projects.
Disadvantages: Difficult to facilitate – requires a very experienced facilitator. May require breaking into small groups followed by plenary sessions when everyone comes together again.

Fieldworker reports (see also the fieldworker reporting format under examples)
Description: Structured report forms that ensure that indicator-related questions are asked, answers recorded, and observations recorded on every visit.
Usefulness: Flexible and an extension of normal work, so cheap and not time-consuming.
Disadvantages: Relies on fieldworkers being disciplined and insightful.

Ranking
Description: This involves getting people to say what they think is most useful, most important, least useful etc.
Usefulness: It can be used with individuals and groups, as part of an interview schedule or questionnaire, or as a separate session. Where people cannot read and write, pictures can be used.
Disadvantages: Ranking is quite a difficult concept to get across and requires very careful explanation, as well as testing to ensure that people understand what you are asking. If they misunderstand, your data can be completely distorted.

Visual/audio stimuli
Description: These include pictures, movies, tapes, stories, role plays and photographs, used to illustrate problems or issues, or past or even future events.
Usefulness: Very useful together with other tools, particularly with people who cannot read or write.
Disadvantages: You have to have appropriate stimuli and the facilitator needs to be skilled in using such stimuli.

Rating scales
Description: This technique makes use of a continuum along which people are expected to place their own feelings, observations etc. People are usually asked to say whether they agree strongly, agree, don’t know, disagree or disagree strongly with a statement. You can use pictures and symbols in this technique if people cannot read and write.
Usefulness: It is useful to measure attitudes, opinions and perceptions.
Disadvantages: You need to test the statements very carefully to make sure that there is no possibility of misunderstanding. A common problem is when two concepts are included in the statement and you cannot be sure whether an opinion is being given on one or the other or both.

Critical event/incident analysis
Description: This method is a way of focusing interviews with individuals or groups on particular events or incidents. The purpose of doing this is to get a very full picture of what actually happened.
Usefulness: Very useful when something problematic has occurred and people feel strongly about it. If all those involved are included, it should help the evaluation team to get a picture that is reasonably close to what actually happened and to be able to diagnose what went wrong.
Disadvantages: The evaluation team can end up submerged in a vast amount of contradictory detail and lots of “he said/she said”. It can be difficult not to take sides and to remain objective.

Participant observation
Description: This involves direct observation of events, processes, relationships and behaviors. “Participant” here implies that the observer gets involved in activities rather than maintaining a distance.
Usefulness: It can be a useful way of confirming, or otherwise, information provided in other ways.
Disadvantages: It is difficult to observe and participate at the same time. The process is very time-consuming.

Self-drawings
Description: This involves getting participants to draw pictures, usually of how they feel or think about something.
Usefulness: Can be very useful, particularly with younger children.
Disadvantages: Can be difficult to explain and interpret.

INTERVIEWING SKILLS

Some dos and don’ts for interviewing:

□ DO test the interview schedule beforehand for clarity, and to make sure questions cannot
be misunderstood.

□ DO state clearly what the purpose of the interview is.

□ DO assure the interviewee that what is said will be treated in confidence.

□ DO ask if the interviewee minds if you take notes or tape record the interview.

□ DO record the exact words of the interviewee as far as possible.

□ DO keep talking as you write.

□ DO keep the interview to the point.

□ DO cover the full schedule of questions.

□ DO watch for answers that are vague and probe for more information.

□ DO be flexible and note down everything interesting that is said, even if it isn’t on the
schedule.

□ DON’T offend the interviewee in any way.

□ DON’T say things that are judgmental.

□ DON’T interrupt in mid-sentence.

□ DON’T put words into the interviewee’s mouth.

□ DON’T show what you are thinking through changed tone of voice.

Analyzing information

Whether you are looking at monitoring or evaluation, at some point you are going to find
yourself with a large amount of information and you will have to decide how to make sense
of it or to analyze it. If you are using an external evaluation team, it will be up to this team to
do the analysis, but, sometimes in evaluation, and certainly in monitoring, you, the
organisation or project, have to do the analysis.

Analysis is the process of turning the detailed information into an understanding of patterns,
trends, interpretations. The starting point for analysis in a project or organizational context is
quite often very unscientific. It is your intuitive understanding of the key themes that come
out of the information gathering process. Once you have the key themes, it becomes possible
to work through the information, structuring and organizing it. The next step is to write up
your analysis of the findings as a basis for reaching conclusions, and making
recommendations.

So, your process looks something like this:

Determine key indicators for the evaluation/monitoring process.

Collect information around the indicators.

Develop a structure for your analysis, based on your intuitive understanding of emerging themes and concerns, and where you suspect there have been variations from what you had hoped and/or expected.

Go through your data, organizing it under the themes and concerns.

Identify patterns, trends and possible interpretations.

Write up your findings and conclusions.

Work out possible ways forward.

Taking action

Monitoring and evaluation have little value if the organisation or project does not act on the
information that comes out of the analysis of data collected. Once you have the findings,
conclusions and recommendations from your monitoring and evaluation process, you need
to:

 Report to your stakeholders;

 Learn from the overall process;

 Make effective decisions about how to move forward; and, if necessary,

 Deal with resistance to the necessary changes within the organisation or project, or
even among other stakeholders.

REPORTING

Whether you are monitoring or evaluating, at some point, or points, there will be a reporting
process. This reporting process follows the stage of analyzing information. You will report
to different stakeholders in different ways, sometimes in written form, sometimes verbally
and, increasingly, making use of tools such as PowerPoint presentations, slides and videos.

Below is a table, suggesting different reporting mechanisms that might be appropriate for
different stakeholders and at different times in project cycles. For writing tips, go to the
toolkit on effective writing for organizations.
Board
 Interim, based on monitoring analysis: Written report.
 Evaluation: Written report, with an Executive Summary, and verbal presentation from the evaluation team.

Management Team
 Interim, based on monitoring analysis: Written report, discussed at a management team meeting.
 Evaluation: Written report, presented verbally by the evaluation team.

Staff
 Interim, based on monitoring analysis: Written and verbal presentation at departmental and team levels.
 Evaluation: Written report, presented verbally by the evaluation team and followed by in-depth discussion of relevant recommendations at departmental and team levels.

Beneficiaries
 Interim (but only at significant points) and evaluation: Verbal presentation, backed up by a summarized document, using appropriate tables, charts, visuals and audio-visuals. This is particularly important if the organisation or project is contemplating a major change that will impact on beneficiaries.

Donors
 Interim, based on monitoring: Summarized in a written report.
 Evaluation: Full written report with executive summary, or a special version focused on donor concerns and interests.

Wider development community
 Evaluation: Journal articles, seminars, conferences, websites.

For an outline of what would normally be contained in a written report, go to the following
page.

OUTLINE OF AN EVALUATION REPORT

EXECUTIVE SUMMARY (Usually not more than five pages – the shorter the better –
intended to provide enough information for busy people, but also to whet people’s appetite
so that they want to read the full report.)

PREFACE (Not essential, but a good place to thank people and make a broad comment
about the process, findings etc.)

CONTENTS PAGE (With page numbers, to help people find their way around the report.)

SECTION 1:

INTRODUCTION: (Usually deals with background to the project/organisation, background to the evaluation, the brief to the evaluation team, the methodology, the actual process and any problems that occurred.)

SECTION 2:

FINDINGS: (Here you would have sections dealing with the important areas of findings,
e.g. efficiency, effectiveness and impact, or the themes that have emerged.)

SECTION 3:

CONCLUSIONS: (Here you would draw conclusions from the findings – the
interpretation, what they mean. It is quite useful to use a SWOT Analysis – explained in
Glossary of Terms - as a summary here.)

SECTION 4:

RECOMMENDATIONS: (This would give specific ideas for a way forward in terms of
addressing weaknesses and building on strengths.)

APPENDICES: (Here you would include Terms of Reference, list of people interviewed, questionnaires used, possibly a map of the area and so on.)

LEARNING

Learning is, or should be, the main reason why a project or organisation monitors its work or
does an evaluation. By learning what works and what does not, what you are doing right and
what you are doing wrong, you, as project or organisation management, are empowered to
act in an informed and constructive way. This is part of a cycle of action reflection. (See
the diagram in the section on why do monitoring and evaluation?)

The purpose of learning is to make changes where necessary, and to identify and build on
strengths where they exist. Learning also helps you to understand, to make conscious,
assumptions you have. So, for example, perhaps you assumed that children at more affluent
schools would have benefited less from your intervention than those from less affluent
schools. Your monitoring data might show you that this assumption was wrong. Once you
realize this, you will probably view your interactions with these schools differently.

Being in a constant mode of action-reflection-action also helps to make you less complacent.

Sometimes, when projects or organizations feel they “have got it right”, they settle back and
do things the same way, without questioning whether they are still getting it right. They
forget that situations change, that the needs of project beneficiaries may change, and that
strategies need to be reconsidered and revised.

So, for example, an organisation provided training and programmes for community radio
stations. Because it had excellent equipment and an excellent production studio, it invited
stations to send presenters to its training centre for training in how to present the
programmes it (the organisation) was producing. It developed an excellent reputation for
high quality training and production. Over time, however, the community radio stations
began to produce their own programmes and what they really wanted was for the
organisation to send someone to their stations to help them workshop ideas and to give them
feedback on the work they were doing. This came out in an evaluation process and the
organisation realized that it had become a bit smug in the comfort zone of what it was good
at, but that, if it really wanted to help community radio stations, it needed to change its
strategy.

Organizations and projects that don’t learn, stagnate. The process of rigorous (see Glossary
of Terms) monitoring and evaluation forces organizations and projects to keep learning -
and growing.

EFFECTIVE DECISION-MAKING

As project or organisation management, you need the conclusions and recommendations that
come out of monitoring and evaluation to help you make decisions about your work and the
way you do it.

The success of the process is dependent on the ability of those with management
responsibilities to make decisions and take action. The steps involved in the whole process
are:

1 Plan properly – know what you are trying to achieve and how you intend to achieve it

2 Implement

3 Monitor and evaluate.

4 Analyse the information you get from monitoring and evaluation and work out what
it is telling you.

5 Look at the potential consequences to your plans of what you have learned from the
analysis of your monitoring and evaluation data.

6 Draw up a list of options for action.

7 Get consensus on what you should do and a mandate to take action.

8 Share adjustments and plans with the rest of the organisation and, if necessary, your
donors and beneficiaries.

9 Implement.

10 Monitor and evaluate.

The key steps for effective decision making are:

 As a management team, understand the implications of what you have learned.

 Work out what needs to be done and have clear motivations for why it needs to be
done.

 Generate options for how to do it.

 Look at the options critically in terms of which are likely to be the most effective.

 Agree as a management team.

 Get organisational/project consensus on what needs to be done and how it needs to be done.

 Get a mandate (usually from a Board, but possibly also from donors and
beneficiaries) to do it.

 Do it.

DEALING WITH RESISTANCE

Not everyone will be pleased about any changes in plans you decide need to be made.
People often resist change. Some of the reasons for this include:

 People are comfortable with things the way they are – they don’t want to be pushed
out of their comfort zones.

 People worry that any changes will lessen their levels of productivity – they feel
judged by what they do and how much they do, and don’t want to take the time out
necessary to change plans or ways of doing things.

 People don’t like to rush into change – how do we know that something different will
be better? They spend so long thinking about it that it is too late for useful changes
to be made.

 People don’t have a “big picture”. They know what they are doing and they can see
it is working, so they can’t see any reason to change anything at all.

 People don’t have a long term commitment to the project or the organisation – they
see it as a stepping stone on their career path. They don’t want change because it will
delay the items they want to be able to tick off on their curricula vitae.

 People feel they can’t cope – they have to keep doing what they are doing but also
work at bringing about change. It’s all too much.

How can you help people accept changes?

 Make the reasons why change is needed very clear – take people through the findings
and conclusions of the monitoring and evaluation processes, involve them in
decision-making.

 Help people see the whole picture – beyond their little bit to the overall impact on the
problem analyzed.

 Focus on the key issues – we have to do something about this!

 Recognize anger, fear, and resistance. Listen to people; give them the opportunity to
express frustration and other emotions.

 Find common ground – things that they also want to see changed.

 Encourage a feeling that change is exciting, that it frees people from doing things that
are not working so they can try new things that are likely to work, that it releases
productive energy.

 Emphasize the importance of everyone being committed to making it work.

 Create conditions for regular interaction – anything from a seminar to graffiti on a notice board - to discuss what is happening and how it is going.

 Pace change so that people can deal with it.

BEST PRACTICE

EXAMPLES OF INDICATORS

Please note that these are just examples – they may or may not suit your needs but they
should give you some idea of the kind of indicators you can use, especially for measuring
impact.

Economic Development Indicators

 Average annual household income

 Average weekly/monthly wages

 Employment, by age group

 Unemployment, by age group, by gender

 Employment, by occupation, by gender

 Government employment

 Earned income levels

 Average length of unemployment period

 Default rates on loans

 Ratio of home owners to renters

 Per capita income

 Average annual family income

 % people below the poverty line

 Ratio of seasonal to permanent employment

 Growth rate of small businesses

 Value of residential construction and/or renovation

Social Development Indicators

 Death rate

 Life expectancy at birth

 Infant mortality rates

 Causes of death

 Number of doctors per capita

 Number of hospital beds per capita

 Number of nurses per capita

 Literacy rates, by age and gender

 Student: teacher ratios

 Retention rate by school level

 School completion rates by exit points

 Public spending per student

 Number of suicides

 Causes of accidents

 Dwellings with running water

 Dwellings with electricity

 Number of homeless

 Number of violent crimes

 Birth rate

 Fertility rate

 Gini distribution of income (see Glossary of Terms)

 Infant mortality rate

 Rates of hospitalisation

 Rates of HIV infection

 Rates of AIDS deaths

 Number of movie theatres/swimming pools per 1000 residents

 Number of radios/televisions per capita

 Availability of books in traditional languages

 Traditional languages taught in schools

 Time spent on listening to radio/watching television, by gender

 Number of programmes on television and radio in traditional languages and/or dealing with traditional customs

 Church participation, by age and gender

Political/Organisational Development Indicators

 Number of community organisations

 Types of organised sport

 Number of tournaments and games

 Participation levels in organised sport

 Number of youth groups

 Participation in youth groups

 Participation in women’s groups

 Participation in groups for the elderly

 Number of groups for the elderly

 Structure of political leadership, by age and gender

 Participation rate in elections, by age and gender

 Number of public meetings held

 Participation in public meetings, by age and gender

Examples adapted from Using Development Indicators for Aboriginal Development, the Development Indicator Project Steering Committee, September 1991.
Chapter 11

DESIGNING A MONITORING SYSTEM – CASE STUDY

What follows is a description of a process that a South African organisation called Puppets
against AIDS went through in order to develop a monitoring system which would feed into
monitoring and evaluation processes.

The main work of the organisation is presenting workshopped plays and/or puppet shows
related to life skill issues, especially those life skills to do with sexuality, at schools, across
the country. The organisation works with a range of age groups, with different “products”
(scripts) being appropriate at different levels.

Puppets against AIDS wanted to develop a monitoring and evaluation system that provided
useful information on the efficiency, effectiveness and impact of its operations. To this end,
it wanted to develop a data base that:

 Provided all the basic information the organisation needed about clients and services
given;

 Produced reports that enabled the organisation to inform itself and other stakeholders,
including donors, partners and even schools, about the impact of the work, and what
affected the impact of the work.

The organisation made a decision to go for a computerised monitoring system. Much of the
day-to-day information needed by the organisation was already on a computerised data base
(e.g. schools, regions, services provided and so on), but the monitoring system would require
a substantial upgrading and the development of data base software specific to the
organisation’s needs. The organisation also made the decision to develop a system initially
for a pilot project, but with the intention of extending it to all the work over time. This pilot
project would work with about 60 schools, using different scripts each year, over a period of
three years. In order to raise the money needed for this process, Puppets against AIDS
needed some kind of a brief for what was required so that it could be costed.

At an initial workshop with staff, facilitated by consultants, the staff generated a list of
indicators for efficiency, effectiveness and impact, in relation to their work. These were the

things staff wanted to know from the system about what they did, how they did it, and what
difference it made. The terms were defined as follows:

Efficiency: Here what needed to be assessed was how quickly, how correctly, how cost-effectively and with what use of resources the services of the organisation were offered.
Much of this information was already collected and was contained in reports which reflected
planning against achievement. It needed to be made “computer friendly”.

Effectiveness Here what needed to be assessed was whether the organisation was getting
results in terms of its strategy and shorter-term impact. For example, were the puppet shows
an effective means of communicating messages about sexuality? Again, this information was
already being collected and just needed to be adapted to fit the computerised system.

Impact Here what needed to be assessed was whether the strategy worked, in that it had an
impact on changing behaviour in individuals (in this case the students) and that change
in behaviour impacted positively on the society of which the individuals are a part. The
organisation had a strong intuitive feeling that it was working, but wanted to be able to
measure this more scientifically and to be able to look at what variables made impact more
or less likely, or affected the degree of impact.

Staff generated a list of the different variables that they thought might be important in
assessing and accounting for differences of impact. The monitoring system would need to
link information on impact to these variables. The intention was to provide both qualitative
and quantitative information.

The consultants and a senior staff member then developed measurable indicators of impact
and a tabulation of important variables, which included:

 Gender and age profile of the proposed age cohort

 Economic profile of the school

 Religious profile of the school

 Teacher profile at the school

 Approach to discipline at the school

 Which scripts were used

 Which acting teams presented the scripts

and so on.
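To make the brief more concrete, the following is a minimal sketch in Python, purely
illustrative: the class and field names (such as SchoolProfile and baseline_score) are
assumptions, not the organisation’s actual database design. It shows one way a school’s
contextual variables could be linked to the impact scores recorded before and after an
intervention; reports of the kind asked for in the brief would then amount to grouping the
change in score by any of these variables.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SchoolProfile:
    """Contextual variables thought to affect impact (illustrative fields only)."""
    name: str
    economic_profile: str        # e.g. "low income", "middle income"
    religious_profile: str
    teacher_profile: str
    discipline_approach: str


@dataclass
class Intervention:
    """One presentation of a script at a school within the pilot."""
    school: SchoolProfile
    script: str                  # which script was used
    acting_team: str             # which team presented it
    baseline_score: Optional[float] = None   # aggregate questionnaire score before the show
    followup_score: Optional[float] = None   # aggregate score at a later stage

    def impact_change(self) -> Optional[float]:
        """Difference between follow-up and baseline scores, if both were measured."""
        if self.baseline_score is None or self.followup_score is None:
            return None
        return self.followup_score - self.baseline_score
```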

Forms/questionnaires were developed to measure impact indicators before the first
intervention (to provide baseline information) and then at various points in the process, as
well as to categorise such concepts as “teacher profile”. The student questionnaire was
designed so that responses could be aggregated into a score which could be compared when
the questionnaire was administered at different stages in the process. The questionnaire took
the form of a series of statements with which students were asked to
agree/disagree/strongly agree/strongly disagree, etc. So, for example, statements to do with
an increase in student self-esteem included “When I look in a mirror, I like what I see”, and
“Most of the people I know like the real me”. The organisation indicated that it wanted the
system to generate reports that would enable it to know:

What difference is there between the indicator ratings on the impact objective at the
beginning and end of the process?

What difference is there between teacher attitudes at the beginning and end of the
process?

What variables to do with the school and school environment impact on the degree of
difference between indicators at the beginning and end of the process?

What variables to do with the way in which the shows are presented impact on the
degree of difference at the beginning and end of the process?

All this was written up as a brief which was given to software experts who then came up
with a system that would meet the necessary requirements. The process was slow and
demanding but eventually the system was in place and it is currently being tested.
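As a rough illustration of how such a questionnaire score could be aggregated and then
compared between stages, consider the sketch below. The two statements are taken from the
case study above, but the 1–4 numeric mapping and the scoring function are assumptions made
for the sake of the example, not the system the organisation actually built.

```python
# Numeric mapping for the agree/disagree statements; the 1-4 scale is an assumption.
LIKERT = {"strongly disagree": 1, "disagree": 2, "agree": 3, "strongly agree": 4}


def aggregate_score(responses):
    """Sum the numeric values of one student's answers to all statements."""
    return sum(LIKERT[answer.lower()] for answer in responses.values())


baseline = aggregate_score({
    "When I look in a mirror, I like what I see": "disagree",
    "Most of the people I know like the real me": "agree",
})
followup = aggregate_score({
    "When I look in a mirror, I like what I see": "agree",
    "Most of the people I know like the real me": "strongly agree",
})

print(f"Change in aggregate self-esteem score: {followup - baseline:+d}")  # prints +2
```

The same difference, computed per school and per intervention, is what the reports listed
above would group by teacher attitudes, school environment and presentation variables.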

FIELDWORKER REPORTING FORMAT

This format was used by an early childhood development learning centre to measure the
following indicators in the informal schools with which it worked:

 Increasingly skilled educare teachers.

 Increased amount of self-made equipment.

 Records up-to-date.

 Payments up-to-date.

 Attendance at committee meetings.

________________________________________________________________________
CARE-ED FIELD VISIT REPORT

Date:

Name of school:

Information obtained from:

Report completed by:

Field visit number: _____

1. List the skills used by the teachers in the time period of your visit to the school:

2. List self-made equipment visible in the school:

3. List the fundraising activities the school committee is currently involved in:

4. Record-keeping assessment:

5.

Kind of record   | Up-to-date and accurate | Up-to-date but not very accurate | Not up-to-date | Not attempted
-----------------|-------------------------|----------------------------------|----------------|--------------
Bookkeeping      |                         |                                  |                |
Petty cash       |                         |                                  |                |
Filing           |                         |                                  |                |
Correspondence   |                         |                                  |                |
Stock control    |                         |                                  |                |
Registers        |                         |                                  |                |

6. Number of children registered:

Average attendance over past two months:

7. Number of payments outstanding for longer than two months:

8. Average attendance at committee meetings over past two months:

9. Comments on this visit:

10. Comparison with previous field visit:
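A form like this maps naturally onto a simple record per visit that a spreadsheet or small
database could store. The sketch below is purely illustrative: the field and function names
are assumptions chosen to mirror the form, not the centre’s actual system. It shows how item
10, the comparison with the previous field visit, might be computed from two such records.

```python
from dataclasses import dataclass


@dataclass
class FieldVisit:
    """One completed field visit report (illustrative fields mirroring the form)."""
    visit_number: int
    children_registered: int
    average_attendance: float      # average over past two months
    payments_outstanding: int      # outstanding for longer than two months
    committee_attendance: float    # average attendance at committee meetings


def compare_visits(previous, current):
    """A rough 'comparison with previous field visit' summary (item 10 on the form)."""
    return (
        f"Children registered: {current.children_registered - previous.children_registered:+d}; "
        f"average attendance: {current.average_attendance - previous.average_attendance:+.1f}; "
        f"outstanding payments: {current.payments_outstanding - previous.payments_outstanding:+d}"
    )


# Example with made-up figures
v3 = FieldVisit(3, children_registered=42, average_attendance=36.0,
                payments_outstanding=5, committee_attendance=7.0)
v4 = FieldVisit(4, children_registered=45, average_attendance=39.5,
                payments_outstanding=3, committee_attendance=8.0)
print(compare_visits(v3, v4))
```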

Glossary

Activities What a program does with its inputs. Examples are construction of a kindergarten,
computer training for youth, counseling of women, raising public awareness regarding
childhood diseases, etc. Program activities result in outputs.

Background The contextual information that describes the reasons for the project, including
its goals, objectives, and stakeholders’ information needs.

Baseline data Data from a baseline study, the analysis describing the situation prior to the
implementation of the project; it is used to determine the results and accomplishments of an
activity and serves as an important reference for the summative evaluation.

Case study An intensive, detailed description and analysis of a single project, program, or
instructional material in the context of its environment; a study based on a small number of
“typical” examples. Results provide an in-depth review of the case but are not statistically
reliable.

Conclusion (of an evaluation) A reasoned judgment based on a synthesis of empirical
findings or factual statements corresponding to a specific circumstance.

Context (of an evaluation) The combination of factors accompanying the study that may
have influenced its results, including geographic location, timing, political and social
climate, economic conditions, and other relevant professional activities in progress at the
same time.

Data Information. The term "data" often describes information stored in numerical form.
Hard data is precise numerical information. Soft data is less precise verbal information. Raw
data is the name given to survey information before it has been processed and analyzed.

Data collection method The way facts about a program and its outcomes are gathered. Data
collection methods often used in program evaluations include literature search, file review,
natural observations, surveys, expert opinion, case studies, etc.

Development objective The ultimate and long-term objective of the development impact,
which is expected to be attained after the project purpose is achieved.

Direct beneficiaries Usually institutions and/or individuals who are the direct recipients of
technical cooperation aimed at strengthening their capacity to undertake development tasks
that are directed at specific target groups. In micro-level interventions, the direct
beneficiaries and the target groups are the same.

Effectiveness A measure of the extent to which a project or program is successful in
achieving its objectives.

Efficiency A measure of the "productivity" of the implementation process – how
economically inputs are converted into outputs, or the optimal transformation of inputs into
outputs.

Evaluation An examination as systematic and objective as possible of an ongoing or
completed project or program, its design, implementation and results, with the aim of
determining its efficiency, effectiveness, impact, sustainability and the relevance of the
objectives. The purpose of an evaluation is to guide decision-makers.

Evaluation design The logical model or conceptual framework and the methods used to
collect information, analyze data and arrive at conclusions.

External evaluation Evaluation conducted by an evaluator from outside the organization
within which the object of the study is housed.

Finding Factual statement about the program or project based on empirical evidence
gathered through monitoring and evaluation activities.

Focus group A small group selected for its relevance to an evaluation that is engaged by a
trained facilitator in a series of discussions designed for sharing insights, ideas, and
observations on a topic of concern to the evaluation.

Impact The positive and negative changes produced by a program or a component, directly
or indirectly, intended or unintended.

In-depth interview A guided conversation between a skilled interviewer and an interviewee
that seeks to maximize opportunities for the expression of a respondent’s feelings and ideas
through the use of open-ended questions and a loosely structured interview guide.

Indicators Quantitative or qualitative statements, which can be used to describe situations
that exist and to measure changes or trends over a period of time. Indicators are used to
measure the degree of fulfillment of stated objectives, outputs, activities and inputs.

Inputs The funds, personnel, materials, etc., necessary to produce the intended outputs of
development activities.

Lesson learned Learning from experience that is applicable to a generic situation rather than
to a specific circumstance.

Key informant Person carefully chosen for interview because of his/her special knowledge
of some aspect of the target population.

Logical framework approach A tool for development planning and monitoring applied by
some donor agencies.

Monitoring A continuing function that aims primarily to provide program or project
management and the main stakeholders of an ongoing program or project with early
indications of progress or lack thereof in the achievement of program or project objectives.

Objective Purpose or goal representing the desired result that a program or project seeks to
achieve. A development objective is a long-term goal that a program or project aims to
achieve in synergy with other development interventions. An immediate objective is a
short-term purpose of a program or project.

Outcome indicators The specific items of information that track a program's success on
outcomes. They describe observable, measurable characteristics or changes that represent
achievement of an outcome.

Outcomes Results of a program or project relative to its immediate objectives that are
generated by the program or project outputs. Examples: increased rice yield, increased
income for the farmers.

Outputs The planned results that can be guaranteed with high probability as a consequence
of development activities/inputs. They are the direct results of program activities.

Program A group of related projects or services directed toward the attainment of specific
(usually similar or related) objectives.

A time-bound intervention that differs from a project in that it usually cuts across sectors,
themes and/or geographic areas, involves more institutions than a project, and may be
supported by different funding sources.

Project A planned undertaking designed to achieve certain specific objectives within a given
budget and within a specified period of time.

A time-bound intervention that consists of a set of planned, interrelated activities aimed at
achieving defined objectives.

Project document A document that explains in detail the context, objectives, expected
results, inputs, risks and budget of a project.

Qualitative evaluation The approach to evaluation that is primarily descriptive and
interpretative; observations are categorical rather than numerical and often involve
attitudes, perceptions and intentions.

Quantitative evaluation The approach to evaluation involving the use of numerical
measurement and data analysis based on statistical methods.

Recommendations Suggestions for specific actions derived from analytic approaches to the
program components.

Relevance The degree to which the rationale and objectives of an activity are, or remain,
valid, significant and worthwhile, in relation to the identified priority needs and concerns.

Reliability A measurement is reliable to the extent that, when repeatedly applied to a given
situation, it consistently produces the same results if the situation does not change between
the applications. Reliability can refer to the stability of the measurement over time or the
consistency of the measurement from place to place.

Results A broad term used to refer to the effects of a program or project. The terms
"outputs", "outcomes" and "impact" describe more precisely the different types of results.

Stakeholders Groups that have a role and interest in the objectives and implementation of a
program or project. They include target groups, direct beneficiaries, those responsible for
ensuring that the results are produced as planned, and those that are accountable for the
resources that they provide to that program or project.

A person, group, organization or other body who has a “stake” in the area or field where
interventions and assistance are directed. Target groups are always stakeholders, whereas
other stakeholders are not necessarily target groups.

Structured interview An interview in which the interviewer asks questions from a detailed
guide that contains the questions to be asked and the specific areas for probing.

Subjective data Observations that involve personal feelings, attitudes and perceptions.
Subjective data can be quantitatively or qualitatively measured.

Sustainability Durability of positive program or project results after the termination of the
technical cooperation channeled through that program or project. Static sustainability is the
continuous flow of the same benefits, set in motion by the completed program or project, to
the same target groups. Dynamic sustainability is the use or adaptation of program or project
results to a different context or changing environment by the original target groups and/or
other groups.

Sustainability factors Six areas of particular importance to ensure that aid interventions are
sustainable, i.e. institutional, financial and economic, technological, environmental,
sociocultural, and political.

Target groups The main stakeholders of a program or project that are expected to gain from
the results of that program or project. Sectors of the population that a program or project
aims to reach in order to address their needs based on gender considerations and their
socioeconomic characteristics.

Terms of Reference (ToR) Action plan describing objectives, results, activities and
organization of a specific endeavor. Most often used to describe technical assistance, study
assignments, or evaluations.

Triangulation In an evaluation, triangulation is an attempt to get a fix on a phenomenon or
measurement by approaching it via several (three or more) independent routes. This effort
provides redundant measurement.

ASSIGNMENT

1. Giving examples, differentiate between monitoring and evaluation.
2. Why is a baseline survey an important part of project management?
3. Distinguish between summative and formative evaluation methods,
with examples.
4. Monitoring and evaluation uses both qualitative and quantitative
methods to measure the success and impact of projects. However,
economists and statisticians often adopt a one-sided (quantitative)
method to analyze the results.
a) Identify the potential dangers of a one-sided monitoring system.
b) Critically analyze the quantitative method often employed by
economists and statisticians in monitoring and evaluating
development projects.
5. a) Define Logical Framework.
b) Define and explain the key components of a Logical Framework.

