A Practitioners’ Manual on Monitoring and Evaluation of Development Projects
By
Kultar Singh,
Dharmendra Chandurkar
and Varun Dutt
A Practitioners’ Manual on Monitoring and Evaluation of Development
Projects
All rights for this book are reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.
Chapter Seven............................................................................................ 90
Approaches to Evaluation
7.1 Utilisation Focused Evaluation
7.2 Feminist Evaluation
7.3 Transformative Evaluation
7.4 Real World Evaluation
7.5 Evaluation Approach Using Appreciative Inquiry
7.6 Realist Evaluation
7.7 Equity-Focused Evaluation
7.8 Developmental Approach to Evaluation
Chapter 3 explains the key concepts and fundamentals of M&E and the key differences between monitoring and evaluation. The key typologies and different types of M&E are explained, in addition to the critical concept of indicators and how to design good indicators.
Chapter 8 delineates data basics and how the data may be analysed
usefully for monitoring and evaluating a project. Univariate and
multivariate analysis are covered as part of the data analysis. Basic
quantitative data analysis using MS Excel and SPSS software has been
demonstrated using screenshots. The fundamentals of qualitative data analysis, and the popular software packages that can be used to conduct it, have also been touched upon.
UNDERSTANDING PROJECTS
as part of the project vary depending on the project objective and the
implementing organisation’s capacity. These activities form the key work
that is done as part of the project implementation.
Another important aspect of every project is that it has a specific start date
and a specific end date i.e., a specific time period within which it has to be
executed. The project is expected to achieve its desired objective within
this specific time period, which in the example quoted above, is a duration
of five years. Last but not least, it is critical to understand that
each project is allocated a limited set of resources. Resources, which may
be financial, human and physical, are allocated to a project so that its
activities may be implemented and its objectives achieved within a
specific time period. Accordingly, the example project is also allotted a
fixed budget, human resources and fixed physical resources with which its
activities may be implemented and its objective achieved in a specific
period of time.
From the beginning till the end of the project, the project cycle comprises various phases or stages. All the stages in the project cycle are delineated and implemented successively in a phased manner. Each of these stages is defined by its objective, information requirements, responsibilities and key outputs. The various stages of a generic project cycle are:
[Figure: the stages of a generic project cycle — Situation Analysis, Gap Analysis, Programme Planning, Implementation and Monitoring, and Evaluation.]
Discussing the project planning phase in reference to the example, the first
step is to define the project objective, which should be specific and
realistic. For this project, the objective is to make the project area ODF in
the next five years. The second step is to identify the activities that are
undertaken as part of the project to achieve its intended objective. The
project can undertake activities like building HH and community toilets,
creating awareness in the community about the ill effects of OD, tracking
people who defecate in the open and counselling them, and providing
information and technical support for building the right type of toilet etc.
After identifying the activities, targets are set, i.e., the number of HH and
community toilets that could be built within each year, and the number of
communication campaigns that need to be undertaken to motivate people
to not defecate in the open. This is followed by deployment of resources
for the project, primarily in the form of finances available for implementation
of the project. The money is utilised to recruit human resources (project
To take the case of the ODF project, the various activities that were
identified in the planning stage are executed during this stage. The project
staff is recruited and deployed, awareness campaigns are conducted about
the ill effects of OD, and subsidy is provided for toilet construction etc.
These activities are also simultaneously monitored to assess whether the
toilets are being constructed as planned, both in terms of quantity and
quality and whether the communication campaigns are being conducted as
planned etc.
Stage V: Evaluation
After project activities are completed, many stakeholders, such as project implementers, policy makers, the government and the external audience, among others, want to know whether there is any change in the ‘situation’. The stakeholders also want to know whether this change is due to the project intervention or to other external factors. An evaluation helps to systematically assess the impact, effectiveness and the contribution of the
project. Mid-term evaluations are helpful because they provide timely
learning which helps in course correction. Post project evaluations help in
getting insights that are helpful in formulation of other similar projects.
Various techniques or designs are thus adopted for different projects in
different situations. These evaluation designs are explained in detail in the
following chapters.
in case it has reduced, would it be right to say that it has reduced due to the
activities undertaken by the project?
After the various stages of the project cycle are examined, the practitioner
needs to assess the stage at which M&E needs to be conducted and the
specific M&E activities that should be performed at each stage of the
project cycle.
Objective
The objective of a project is the specific condition that the project aims to achieve by itself. An objective is derived from a goal and has the same intention as the goal, but it is more specific, quantifiable and verifiable (SMART, Characteristics of Good Objectives, 2016). Usually, the project objective is the positive mirror image of the core
problem. For example, if the core problem in a specific village is the ‘high
rate of OD’, then its corresponding objective would be to ‘reduce the rate
of OD from 35 per cent to zero per cent in a period of five years’.
A good objective satisfies the SMART criteria:
- Specific
- Measurable
- Attainable
- Realistic
- Time-bound
Using the SMART criteria to examine the objective to ‘reduce the rate of OD from 35 per cent to zero per cent in a period of five years’:
If the project objective fulfils all the five criteria as part of the SMART
framework, it is said to be a good objective. Another question which may
be considered is whether a project should have a single objective or
multiple objectives? By its definition, an objective is supposed to be
precise, specific and definite. A project should thus simply have a single
objective. In common parlance, the term 'project objectives' is often used.
However, this term is incorrect and this usage is against the spirit of a
‘project-based approach’. A project is essentially about the breaking down
of a large problem into its smaller constituent parts, such that each
problem is addressed one at a time in a single project. It is not about
tackling all problems at once by trying to find a general solution for the
various aspects of the problem. A project which has more than one
Goal
By now the practitioner knows that the planned activities have to lead to
fulfilment of the project objective. The objective also has a higher purpose
which extends beyond the precincts of the project. This higher purpose is
the goal which the project aims to contribute towards. Each project, by
achieving its objective contributes towards a larger goal. Thus, it can be
rightly said that while a project 'achieves' its objective, it 'contributes'
towards the goal. In other words, the goal is the macro-level change to which the project, working at the micro-level, contributes.
Thus, the objective of our example project, ‘reducing the rate of OD from
35 per cent to zero per cent in a period of five years’, at the same time
contributes to the higher goal of 'improving the sanitation conditions in
rural India’. Another current example would be of a project which aims to
contribute towards the goal of improving the standard of living of the rural
poor. Increasing the HH income of the rural poor, improving their access
to social schemes, improving the sanitation conditions, improving their
health, providing safe drinking water, increasing the HH assets etc., could
be a few of the project objectives which would contribute towards the
above mentioned larger goal. Thus, a goal is the intended change we seek;
it is expansive in its scope and all-encompassing in its vision. In this way,
many projects may refer to and contribute towards a single goal while
achieving their respective objectives.
DESIGNING PROJECTS
Operations are based on the 'if-and-then' logic. For example, if we put fuel in a car’s fuel tank, only then can we drive and go somewhere in it. This
'if-and-then' logic is the means-to-an-end relation or a cause-and-effect
connection between the system components. So, what implication does
this system model have for projects?
Every project, as we know, has its own rationale of intervention, one that
clearly addresses the nuts-and-bolts of the problem of 'what', 'when', 'why',
'how', 'who' and 'where'. The clearer a project is about the logic of change
underpinning its project activities or processes, the better it can deliver the
results or achieve the objective it has in mind.
Inputs: This includes the resources that are available or allocated for the
project. Input resources may be natural, human, and financial, depending
upon the nature of the project. For example, funds allocated, human
resources deployed, laptops allotted etc.
Impact: The final desired goal, or macro-level goal, that the project envisages is defined as its impact. The impact is what the project aims to contribute towards rather than what it claims to achieve by itself. For example, ‘decreasing the Maternal Mortality
Rate (MMR)’ is the impact which the project aims to contribute to by
providing the outcome, which is, ‘increase in the rate of institutional
delivery’. Also, ‘increase in the empowerment level of women’ is the
impact which the project aims to achieve through its outcome of
‘increased female literacy rate’.
Examples of the results chain are presented below in Figures 2.2 and 2.3
respectively.
Exercise: Can you think of other examples of results chain? Fill the results
chain given below with your example.
Usually, in a project, there are multiple results that are envisioned and therefore the project has multiple results chains. An example of a project with multiple results chains is presented below:
All the above are tools for facilitating project development and each one
has got its strengths and limitations. LFA is discussed in the following
sub-chapter in greater detail.
The steps that are followed in conducting an LFA are explained below. An
example is used to reinforce the reader's understanding of this concept.
For example, if a person goes to the doctor with high fever and the doctor
prescribes medicine for high fever without diagnosing the cause for the
fever, then the doctor is simply treating the symptom but not the cause of
the problem. If the real cause of the fever is not discovered, it is quite
possible that the fever will persist after the effect of the medicine wears
out. Similarly, projects which only address the effects of the problems and
not the underlying causes are unlikely to bring about sustainable benefits.
A common method used to identify the core problem is the problem tree,
which helps to visually depict the connection between the various possible
causes of the problem. This is best explored in a participatory workshop
with key project stakeholders, who are asked to write down both the
causes and effects of the problem in a structured way. This procedure
makes it possible to clearly visualise the causes of the focal problem and
its effects and to find out how different problems are related to each other.
The next step is to plan activities and implement them within the
framework of the project to treat the causes of these problems.
While the problem tree shows the cause-and-effect relationship between problems, the objective tree shows the means-to-an-end relationship between the objectives. This leads directly to developing the project’s
narrative description in the form of the Logical Framework Matrix.
The objective tree needs to be developed for the problems which the
project aims to solve through its intervention. The objective tree delineates
a logical sequence or cause and effect relation that needs to be followed to
solve the problems as presented in the problem tree.
After identifying the key risks associated with the project logic model, they can be plotted on a 2×2 matrix with Impact and Likelihood on its axes. The project management should aim to mitigate the risks which lie in the top-right quadrant, i.e., risks with high impact and high likelihood, as these can jeopardise the achievement of the project's objective. The 2×2 matrix used for plotting potential risks is presented in the figure given below.
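The quadrant rule above can be sketched in a few lines of code; the sample risks, the 1–5 scoring scale and the threshold of 3 are illustrative assumptions, not values from the text:

```python
# Classify risks on a 2x2 impact/likelihood matrix.
# Scores 1-5 and a threshold of 3 are assumed for illustration.
risks = [
    ("Banks do not easily provide loans to SHGs", 4, 4),
    ("Target group not motivated to adopt new IGAs", 3, 2),
    ("Minor delays in staff recruitment", 2, 2),
]

def quadrant(impact, likelihood, threshold=3):
    """Place a risk in one of the four quadrants of the 2x2 matrix."""
    hi_impact = impact >= threshold
    hi_likelihood = likelihood >= threshold
    if hi_impact and hi_likelihood:
        return "mitigate first (high impact, high likelihood)"
    if hi_impact:
        return "high impact, low likelihood"
    if hi_likelihood:
        return "low impact, high likelihood"
    return "monitor (low impact, low likelihood)"

for name, impact, likelihood in risks:
    print(f"{name}: {quadrant(impact, likelihood)}")
```

Risks landing in the "mitigate first" quadrant would be taken up for mitigation planning before the others.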
Also, in the case of the example used, several risks associated with the
project can be identified. It is a good practice to identify risks, which can
be both internal and external, at each level of programme implementation,
i.e., from activity to output level, output to outcome level, outcome to
impact level. The risks identified with the example project are listed
below:
External Risks:
Internal Risks:
1. Enterprises are not able to develop market linkages for selling their
products.
2. Banks do not easily provide loans to individuals or self-help groups
(SHGs) supported by the project.
3. The project target group is not self-motivated and enterprising
enough to adopt new IGAs.
Performance measures called indicators are set for each hierarchical level,
with sources of information listed for each indicator. Finally, assumptions
that make activities translate into outputs, and outputs to purpose, or
purpose to goal are also listed. This forms the horizontal logic of the PPM.
Columns and rows of the PPM are given below.
Horizontal Logic (columns): Narrative Summary | Objectively Verifiable Indicators | Means of Verification | Assumptions/Risks
Vertical Logic (rows): the objective hierarchy, from activities and inputs upwards

Activity | Inputs (Indicators) | Means of Verification
1.1.1 Forming SHGs | Number of SHGs formed under the project | Project MIS/Reports
1.1.2 Performing all pathway activities in SHG formation and ensuring quality of SHGs | Number of SHGs for which all pathway activities were performed | Project MIS/Reports
1.1.3 Linking SHGs with financial institutions for access to credit | Amount of credit provided through linkages with banks; rate of interest for credit | Project MIS/Reports
1.2.1 Providing skill-based vocational training to women | Number of people given vocational skills training | Project MIS/Reports
1.2.2 Providing life skills training to women | Number of people trained in life skills | Project MIS/Reports
The logframe should be used as the basis of the funding application and
then throughout the project lifecycle to track progress and adapt to
changing situations. It can be used to review assumptions and
implications, and to keep donors and other stakeholders informed of
significant changes related to the project.
CHAPTER THREE
UNDERSTANDING MONITORING
AND EVALUATION:
BASICS AND KEY CONCEPTS
that the project is being implemented in the right direction for it to achieve
its intended outcomes. In case the project is not moving in its intended
direction, a midcourse correction should be made. Monitoring is applicable to all programme levels, from inputs and processes to outputs and outcomes. Most
commonly, the focus is on output data, although it is also important to
track the goals and the objectives. Monitoring should ideally be an internal
function of the project management team. Monitoring, thus, plays a critical
role in the success of a project.
Monitoring vs Evaluation

Definition
- Monitoring: Concurrent analysis of project progress towards achieving the planned results, with the purpose of improving management decision making (Aquaknow, 2016).
- Evaluation: Assessment of the magnitude of change in the results proposed by the project that may be attributed to the project.

When is it done?
- Monitoring: A systematic activity that should be done regularly throughout project implementation.
- Evaluation: Done only at specific points of time, such as the middle of the project, at a change of phase, and at the end of the project.

Scope
- Monitoring: Focuses on activities, outputs and indicators of progress and change.
- Evaluation: Focuses on the delivery of project outcomes and impacts; assesses progress towards the project objectives and goals.

Who does it?
- Monitoring: Ideally an internal activity, done by project staff or the project's target beneficiaries.
- Evaluation: Ideally an external activity, to avoid conflict of interest; conducted by external evaluators while involving donors, project staff and project users.

Why is it done?
- Monitoring: To report project progress to the management, identify bottlenecks, take remedial action and modify project implementation plans.
- Evaluation: To ensure accountability of the project, learn broad lessons and provide recommendations to similar projects; it highlights the potential and the achievements of the project.
By juxtaposing M&E with the various stages of the results chain, the practitioner should be able to gauge the stage of the results chain, or theory of change, on which the M&E of the project should be focused.
With respect to the Logical Framework, impact and outcomes fall within
the domain of evaluation, whereas outputs, activities and inputs fall within
the domain of monitoring.
Monitoring Typologies
Evaluation Typologies
3.5 Indicators
The concept of indicators is pivotal to M&E. As per its dictionary
definition, an indicator is defined as a sign or a signal. In the context of
M&E, an indicator is said to be a quantitative standard of measurement or
an instrument which gives us information (UNAIDS, 2010). Indicators
help to capture data and provide information to monitor performance,
measure achievement, determine accountability and improve the effectiveness
of projects or programmes.
The SMART criteria:
- Specific
- Measurable
- Attainable
- Realistic
- Time-bound

The SPICED criteria:
- Subjective
- Participatory
- Interpreted and communicable
- Cross-checked and compared
- Empowering
- Diverse and disaggregated
For example, in the case of the project which aims to make its target area
ODF, the output level indicator is ‘The number of individual household
latrines (IHHL) constructed’. As constructing toilets is one of the key
outputs expected from the project, this indicator helps in measuring the
same. Similarly, as creating awareness about sanitation is another key
activity, ‘The number of village level meetings conducted to create
awareness about sanitation’ is another output level indicator.
Defining Indicators
Source: The sample survey conducted as part of the baseline, midline and
endline survey
Data Collection Frequency: In the first quarter of the first year, and in the last quarter of the third and fifth years of the project.
Denominator: The overall sample size of the number of women who had
deliveries in the last two years.
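With a numerator and denominator defined this way, the indicator value at each survey round is a simple proportion; a minimal sketch, with hypothetical survey counts:

```python
def indicator_value(numerator, denominator):
    """Indicator value as a percentage of the sampled denominator."""
    if denominator <= 0:
        raise ValueError("denominator (sample size) must be positive")
    return 100.0 * numerator / denominator

# Hypothetical baseline round: 120 of the 800 sampled women who had
# deliveries in the last two years meet the indicator's condition.
baseline = indicator_value(120, 800)
print(round(baseline, 1))  # 15.0 (per cent)
```

Recomputing the same ratio at the midline and endline rounds gives the indicator's trajectory over the project period.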
DEVELOPING A PERFORMANCE
MONITORING PLAN
Impact
Outcome
Outputs
1.............
2.............
The first step of populating the expected results column is given below:
Output
1. Women have increased access to obstetric care
2. Increased capacity of community to demand the rights and entitlements under government health programmes
3. Increased awareness and knowledge about important maternal and child health practices and contraception methods
4. Participatory formulation of AIP in consultation with village health committees (VHCs) (Practice Level Change)
In the example used, indicators are defined in the next step of developing the PMP; they are populated at each level of the results chain: impact, outcome and output.
Output 1 (Women have increased access to obstetric care):
- Number of women with access to contraceptives and safe abortion services

Output 2 (Increased capacity of community to demand the rights and entitlements under government health programmes):
- Number of village-level communities formed which are capable of demanding entitlements under the national government health programme
- Number of VHCs which are capable of registering complaints with the health department in case any entitlement under the government health programme is not received

Output 3 (Increased awareness and knowledge about important MCH practices and contraception methods):
- Number of women who are able to articulate the key danger signs during pregnancy
- Number of women who have knowledge of temporary methods of contraception
- Number of women who are aware of the availability of contraceptive and abortion services at various levels of the public health system

Output 4 (Practice Level Change): Participatory formulation of annual implementation plan (AIP) in consultation with VHCs
The next step is to decide the period or frequency at which each indicator
in the PMP is to be measured. Periodicity is decided based on factors like
the minimum time that is required for change to be visible in the condition
under observation, the cost of data collection of a specific indicator,
requirement of the indicator, etc. For example, if the change is expected within a short duration, the measurement interval can be short, such as quarterly or half-yearly. If the project intervention is expected to take longer to have an effect on the indicator value, then the interval would be longer, say annual or biennial. In the case of indicators for which data collection is a complex process, or where special investigative skills are required which make data collection costly, e.g., measuring agricultural production or testing the haemoglobin (Hb) level in blood, it is better to collect this data over a longer interval or for a smaller sample.
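The decision rule described above can be expressed as a small helper function; the month cut-offs below are illustrative assumptions, not prescriptions from the text:

```python
def measurement_frequency(months_to_visible_change, costly_collection=False):
    """Pick a measurement interval from how quickly change is expected
    and whether data collection is costly (illustrative cut-offs)."""
    if costly_collection:
        # Complex or expensive data collection: measure less often,
        # possibly on a smaller sample.
        return "annual or longer, possibly on a smaller sample"
    if months_to_visible_change <= 6:
        return "quarterly or half-yearly"
    return "annual or biennial"

print(measurement_frequency(3))    # change visible quickly
print(measurement_frequency(18))   # slow-moving indicator
print(measurement_frequency(18, costly_collection=True))
```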
the time when it is being used. In case the time gap is substantial, it is not
advisable to use this secondary data source. Data sources are decided
while considering factors like availability, reliability, cost etc.
The PMP of the sample project with its periodicity, source and the
person/agency responsible for gathering information is populated in the
table below:
Step 4: Put Milestone Values for each Indicator from the Baseline to
the Endline of the Project
After putting in the baseline value, key milestones and target values are
assigned at specific time gaps for the indicators. The target is the end condition that is envisaged at or after completion of the project. In other
words, it is the concrete result that is sought at the end of the project
efforts. Milestones are the benchmark values that are hoped to be achieved
at specific points of time within the project implementation years. The
milestone values are to be assigned based on the way the project
implementation has been planned. For instance, for a five-year project,
formative work is undertaken or the foundation is built during the first
year of project implementation. The implementation intensity picks up
from the second year and peaks during the third or fourth year. The fifth
year is usually reserved for the handing over of the project after ensuring
its sustainability. Therefore, the project milestones are set according to the
planned pace or intensity of work.
The PMP of the sample project after the key milestones values are
assigned is given below:
Output 1 (Women have increased access to obstetric care):
- Number of PHCs with referral services for complicated pregnancy — Periodicity: Annual; Source: CHC/PHC Register Audit, PAS; Milestones: 2, 5, 10 and 20 PHCs out of 30 have referral transport support; Responsibility: Designated Research Agency.
- Number of women with access to safe abortion services — Periodicity: Annual; Source: HH Structured Interview, PAS; Milestones: 5%, 10%, 15%, 25%; Responsibility: Designated Research Agency.

Output 2 (Increased capacity of community to demand the rights and entitlements under government health programmes):
- Number of VHCs formed which are capable of demanding entitlement under the national government health programme — Periodicity: Annual; Source: Project MIS; Baseline: out of 500 intervention villages, none have VHCs; Milestones: VHCs formed in 50, 100 and 250 intervention villages; Responsibility: MEL Manager.
- Number of Village Health Committees that are capable of registering complaints with the health department in case any entitlement under the government health programme is not received — Periodicity: Annual; Source: VHC Representative IDIs, PAS; Milestones: zero, 50, 100, 250; Responsibility: Designated Research Agency.

Output 3 (Increased awareness and knowledge about important MCH practices and contraception methods):
- Number of women who can articulate the key danger signs during pregnancy — Periodicity: Annual; Source: HH Structured Interview, PAS; Milestones: 15%, 20%, 25%, 50%; Responsibility: Designated Research Agency.
- Number of women with knowledge of at least two temporary methods of contraception — Periodicity: Annual; Source: HH Structured Interview, PAS; Milestones: 30%, 35%, 45%, 70%; Responsibility: Designated Research Agency.

Output 4 (Practice Level Change): Participatory formulation of AIP in consultation with VHCs.
In the case of projects where the perceived behavioural change falls in the
realm of outputs or outcomes which are not physically observable or
verifiable, outcome mapping serves as a vital tool to map and track
process-level changes along the pathway of change. Using a graded
measure of change known as 'progress markers', it seeks to unpack the
multiple layers of change.
These progress markers define and describe the milestone changes that are
required to achieve the behavioural change from the current situation to
the envisaged situation. These are progressive milestones, ranging from
the low-hanging fruit to the higher-order or difficult to attain changes. The
first milestone is an 'expect to see' yardstick, which refers to the minimally
acceptable level of change or first-level change expected from the
intervention. The second milestone is what is called 'like to see' or the
change that could realistically be achieved, while the final milestone is
known as ‘love to see’ or the most desired or ambitious change that is
foreseen. To use a real-life example, passing an exam is the minimally
acceptable or what we 'expect to see', whereas getting a first division is
what we would 'like to see' and finally, to get a first-class distinction is
what we would ‘love to see’.
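The three graded markers form an ordered scale against which observed change can be scored; a minimal sketch using the exam example, where the numeric grade cut-offs are assumptions for illustration:

```python
# Ordered progress markers, from minimally acceptable to most ambitious.
PROGRESS_MARKERS = ["expect to see", "like to see", "love to see"]

def achieved_markers(score, cutoffs=(40, 60, 75)):
    """Map an exam score to the progress markers reached.
    Assumed cut-offs: pass mark, first division, first-class distinction."""
    return [marker for marker, cutoff in zip(PROGRESS_MARKERS, cutoffs)
            if score >= cutoff]

print(achieved_markers(65))  # passed and got a first division
print(achieved_markers(30))  # no marker reached yet
```

The same structure applies to any behavioural outcome: each marker gets its own observable threshold, and monitoring records how far along the scale the change has progressed.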
The PMP of the example project is presented below after adding its behavioural outcomes and related progress markers:
Output (Practice Level Change): 4. Participatory formulation of AIP in consultation with VHCs
Progress Marker: District-wise government planning, including VHC representatives in AIP development meetings
Source: State government officials
Expect to See: Inputs of VHC representatives are taken while developing an AIP
Like to See / Love to See: A representative of the VHC is included in the executive body which develops the district-level AIP
To sum up, in this sub-chapter, the practitioner has learnt to develop a
PMP, which is a plan to monitor and manage results, by following a step-
by-step process.
For data quality assurance, systems, documented protocols and guidelines for ensuring monitoring data quality are developed across the various project stages. Checks and procedures are also defined at each stage, viz., the design of monitoring formats, the translation of monitoring formats, data collection, and data entry or digitisation.
Key aspects that should be included while developing a data quality
assurance plan are listed below:
Data Auditing
APPROACHES TO MONITORING
" Understand and explain the definition and key concept of progress
and performance monitoring
" Understand the definition and concept of Participatory Monitoring
" Explain the concept of Most Significant Change (MSC) and
Outcome Mapping
" Understand and explain the key levers for ensuring quality of
monitoring data
Progress Monitoring
Process Monitoring
A process check-list is developed by making a list of all the steps that are
followed as part of the ideal process implementation. Process monitoring
is usually done using these process checklists. The activities and the
processes are observed and recorded on the checklist. In case any
deviation is observed from the ideal required process, it is recorded.
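A process checklist of this kind can be represented as a simple structure that records deviations from the ideal process; a minimal sketch with hypothetical steps for a village awareness meeting:

```python
# Ideal steps for one monitored process (hypothetical example:
# conducting a village-level awareness meeting).
ideal_steps = [
    "meeting announced in advance",
    "attendance register maintained",
    "IEC material distributed",
    "feedback collected",
]

def monitor_process(observed_steps):
    """Return the deviations: ideal steps that were not observed."""
    observed = set(observed_steps)
    return [step for step in ideal_steps if step not in observed]

deviations = monitor_process(["meeting announced in advance",
                              "IEC material distributed"])
print(deviations)  # the steps skipped in this instance
```

Each filled checklist yields a deviation list, which process monitoring then aggregates across sites to spot recurring gaps in implementation.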
PM&E aims not only to measure the effectiveness of a project but also to ensure accountability to its beneficiaries and to increase transparency in the M&E process of the project. Making the process participatory empowers
the project beneficiaries and inculcates a sense of ownership amongst the
stakeholders of the project. Also, making the process participatory
buttresses the process of formulating corrective action for improving the
performance and outcomes of the project.
As the first step, all the stakeholders in the project, right from the project
funders, project implementers, government agencies, other key boundary
partners and its target beneficiaries are identified to be included in the
M&E process.
- What are the resources and limitations at hand for the PM&E?
- What is the role and responsibility of each stakeholder?
- What information is required?
- From whom is this information to be collected?
- How is it to be incorporated to improve the programme implementation?
Subsequently, the project PMP and the matrix will be developed as it has
been done in Chapter 4. All the stakeholders are jointly involved in
developing the PMP. Also, the responsibilities are distributed among all
stakeholders including the beneficiaries, unlike in the case of the
conventional M&E.
After the objective and framework are finalised, the next critical step is to
determine how to go ahead with the data collection. For this, the tools and
techniques used for data collection are identified. A wide range of tools is
available for this and a tool based on the context and requirements of the
project is selected. After the tool is selected, the actual process of data
The next step after the data collection is to analyse this collected data.
Unlike in conventional M&E where analysis is usually done by a third
party or by funders or implementers of the project, in PM&E, all
stakeholders, including the project beneficiaries, are involved in
conducting the data analysis. Workshops or joint meetings are conducted
in which all stakeholders analyse the data collectively, mull over the
problems and constraints in achieving the project objective, look for
solutions and examine the results of the project activities.
After the data is analysed and insights derived from it, the last step is to
document and report the process and the information generated from the
PM&E process. It is also very important to share and disseminate this
information to all stakeholders and relevant external audiences. Based on
the insights derived from this process, feedback and recommendations are
given to the project implementation team for further improvement and
course correction.
Limitations
Though there are various advantages in conducting PM&E, there are also some practical limitations in applying it. Conducting a participatory evaluation that fulfils its core objectives requires considerable commitment and focus from the donors and the implementers. Such an evaluation is very resource intensive, in terms of both time and finance. Moreover, adept professionals are required to facilitate this kind of process.
The process involves the collection of significant change (SC) stories from the grassroots level. These SC stories are then shortlisted by a panel of designated stakeholders or the project team. Subsequently, various representatives sit together, read out the shortlisted stories and discuss their value.
The MSC technique is implemented in the programme using the ten steps formulated by Rick Davies in 2005. They are:
This chapter deals with evaluation and its fundamental concepts, and with experimental and quasi-experimental evaluation designs. At the completion of the chapter, the practitioner will be able to:
Pipeline Design
The figure above illustrates the DID method where A is the value of the
output indicator of the treatment group at time t1 i.e., before the
intervention and B is the value of the output indicator of the treatment
group at time t2, i.e., after the intervention. Similarly, C is the value of the
output indicator of the comparison group at time t1 and D is its value at
time t2. DID separates out the change in the output indicator that is due to time alone and estimates the change in the output variable that can be attributed to the project. DID is computed as (B - A) - (D - C), as shown in the table given below.
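The DID computation can be sketched in a few lines of Python; the input values below are purely illustrative, not drawn from any actual project data.

```python
# Difference-in-differences (DID) sketch using the notation from the text:
# A, B are the treatment group's output indicator at t1 and t2;
# C, D are the comparison group's values at t1 and t2.
# The numbers below are purely illustrative.

def did(a, b, c, d):
    """Return the DID estimate (B - A) - (D - C)."""
    return (b - a) - (d - c)

A, B = 40.0, 65.0   # treatment group: before, after
C, D = 38.0, 48.0   # comparison group: before, after

impact = did(A, B, C, D)
print(impact)  # (65 - 40) - (48 - 38) = 15.0
```

The treatment group improved by 25 points, but 10 of those points reflect the time trend visible in the comparison group, leaving an attributable impact of 15.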
PSM is a very useful method when there are many potential characteristics
to match between a sample of project participants and a sample of non-
participants. Instead of aiming to ensure that the matched control for each
participant has the same value of the control variable X, the same result
can be achieved by matching the predicted probability of project
participation P, given X, which is the propensity score of X. The range of
propensity scores estimated for the treatment group should correspond
closely to that of the retained sample of non-participants. The closer the
propensity score, the better the match. A good comparison group comes
from the same economic environment and is administered the same
questionnaire as the treatment group by the same group of trained
interviewers.
To illustrate evaluation using the PSM design, one may take the example
of a programme which was started four years ago, and is currently on the
verge of completion. The objective of the programme was to create SHGs
and work towards giving its members greater access to financial resources.
In such cases, where baseline data is not available, propensity score matching is used to identify the comparison units for the treatment
units. The propensity score is calculated based on variables which are
expected to affect the probability of the person becoming a member of the
SHG. Variables like the education level, above poverty line (APL)/below
poverty line (BPL) status, caste, engagement in agricultural activities, and
participation in gram sabha activities can be used to calculate the
propensity score.
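The SHG example above can be sketched as follows. This is a minimal, numpy-only illustration: the covariates, coefficients and outcomes are all simulated (a real analysis would use a statistics package for the logistic regression), and the simple gradient-ascent fit and one-to-one nearest-neighbour matching with replacement are just one common set of choices.

```python
# Hedged sketch of propensity score matching (PSM) for the SHG example.
# All data are simulated; covariates stand in for education, APL/BPL
# status, agricultural activity and gram sabha participation.
import numpy as np

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 4))                      # hypothetical covariates
true_w = np.array([0.8, -0.5, 0.4, 0.6])         # drives SHG membership
treated = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_w)))).astype(int)

# Step 1: estimate propensity scores P(member | X) with a logistic
# regression fitted by plain gradient ascent (a stand-in for a package).
w = np.zeros(4)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))
    w += 0.1 * X.T @ (treated - p) / n
ps = 1 / (1 + np.exp(-(X @ w)))

# Step 2: match each SHG member to the non-member with the closest
# propensity score (nearest-neighbour matching with replacement).
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
matches = {i: c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))] for i in t_idx}

# Step 3: average the outcome difference across matched pairs
# (the simulated outcome embeds a true treatment effect of 0.5).
outcome = 0.4 * (X @ true_w) + 0.5 * treated + rng.normal(0, 0.2, n)
att = np.mean([outcome[i] - outcome[j] for i, j in matches.items()])
print(round(att, 2))  # should land near the simulated effect of 0.5
```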
The RD design exploits the discontinuity around the cut-off score in the
regression line to estimate the counterfactual. The cases just above and
below the cut-off point are similar in their characteristics and the
intervention is the main difference between them. Development
interventions like subsidy programmes based on poverty score, pension
programmes based on age, scholarship programmes based on marks etc.,
can be evaluated by using the RD design.
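A sharp RD estimate of the scholarship example can be sketched as below. The cut-off of 60 marks, the bandwidth and the simulated jump are all hypothetical, and fitting a separate straight line on each side of the cut-off within a narrow bandwidth is one simple, common estimation choice.

```python
# Illustrative sketch of a sharp regression discontinuity (RD) estimate:
# a hypothetical scholarship is awarded at or above 60 marks, and the
# effect is the jump between linear fits on either side of the cut-off.
import numpy as np

rng = np.random.default_rng(1)
marks = rng.uniform(30, 90, size=500)        # running (assignment) variable
cutoff = 60.0
treated = marks >= cutoff
# Simulated outcome: smooth in marks, with a jump of 5 at the cut-off
outcome = 0.2 * marks + 5.0 * treated + rng.normal(0, 1, size=500)

bandwidth = 10.0
left = (marks >= cutoff - bandwidth) & (marks < cutoff)
right = (marks >= cutoff) & (marks < cutoff + bandwidth)

# Fit a line on each side and compare the predictions at the cut-off
b_left = np.polyfit(marks[left], outcome[left], 1)
b_right = np.polyfit(marks[right], outcome[right], 1)
effect = np.polyval(b_right, cutoff) - np.polyval(b_left, cutoff)
print(round(effect, 1))  # close to the simulated jump of 5
```

The cases just inside the bandwidth on either side play the role of the near-identical groups described in the text, with treatment assignment as the only systematic difference between them.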
Enrol’ group, which consists of people who will enrol in the programme even if the programme is not promoted. The third group is ‘Enrol if Promoted’: people who will enrol in the training programme only if it is promoted amongst them. This last group is the set of participants on which the impact of the programme can be evaluated (Gertler et al., 2011).
The Fixed Effect (FE) model explores the relationship between predictor
and outcome variables within an entity. It assumes that something within
the individual may impact or bias the predictor or outcome variables and
during the study, the practitioner needs to control for the same.
In the case of the Random Effect model, the key rationale is that unlike in
the FE model, the variation is assumed to be random and uncorrelated with
the independent variables included in the model or to quote “…the crucial
distinction between fixed and random effects is whether the unobserved
individual effect embodies elements that are correlated with the regressors
in the model, not whether these effects are stochastic or not” (Greene,
2008).
One advantage of the random effect model over the FE model is that time-invariant variables can be included as regressors.
To decide whether to use the Fixed or the Random Effect model, the
Hausman test can be carried out wherein the null hypothesis is that the
preferred model is the random effect vs the alternative, the fixed effect
(Greene, 2008).
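The FE logic described above can be sketched with the "within" estimator on a toy panel. Everything here is simulated for illustration: each entity gets its own unobserved intercept that is deliberately correlated with the regressor, so pooled OLS is biased while demeaning each entity's data removes the fixed effect and recovers the true slope.

```python
# Minimal sketch of the fixed-effect "within" estimator on a toy panel.
# Demeaning x and y within each entity removes the entity's unobserved
# intercept, so OLS on the demeaned data recovers the slope (beta).
import numpy as np

rng = np.random.default_rng(2)
n_entities, n_periods, beta = 50, 6, 1.5
entity = np.repeat(np.arange(n_entities), n_periods)
alpha = rng.normal(0, 3, n_entities)[entity]     # entity fixed effects
x = alpha * 0.3 + rng.normal(size=entity.size)   # x correlated with the FE
y = alpha + beta * x + rng.normal(0, 0.5, entity.size)

def demean(v):
    """Subtract each entity's mean from its observations."""
    means = np.bincount(entity, weights=v) / np.bincount(entity)
    return v - means[entity]

x_d, y_d = demean(x), demean(y)
beta_fe = (x_d @ y_d) / (x_d @ x_d)      # within estimator, near 1.5
beta_ols = np.polyfit(x, y, 1)[0]        # pooled OLS, biased upward here
print(round(beta_fe, 2), round(beta_ols, 2))
```

Because the unobserved effect is correlated with the regressor, this is exactly the situation where, per the Greene quote above, the fixed effect model is appropriate and the random effect model would be inconsistent.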
Background:
It focused on building a school (with separate toilets for girls, and access
to drinking water) in each selected village. Also, some additional
interventions were introduced in these schools which included:
Scenario 1:
Scenario 2:
A new CSR head joined the foundation of XXXX organisation six months
after the commencement of the BPY project. He appreciates the
importance of having a robust IE and wants IE to be conducted for this
project. A baseline was not conducted initially, so the programme team is
uncertain about the IE design that should be adopted. The CSR head, who
has had a positive experience of working with your organisation, requests
you to conduct an IE. In your background research, you find that the YY
state Education Ministry had conducted a village level survey in the 20
worst performing districts which includes the 10 districts covered by BPY.
This data includes characteristics like the number of girls in the village,
the enrolment rate of girl students, the distance to the nearest school from
the village, the number of students in the nearest school etc. This
application data had been used by the implementing agency to rank each of the 450 initially shortlisted villages in terms of need, with the 200 villages ranked as the neediest being selected to receive the programme
interventions. Consider the following questions and prepare your
responses.
Scenario 3:
At the time when the five-year BPY programme is about to end, the CSR
board realises that they should assess how effective the programme has
been. The programme team is left with a limited budget, making it difficult
to do large scale primary data collection. However, it is found that the
state Ministry of Education has annually been collecting very good village
level data in 20 low performing districts, including in the 10 project
districts. The data includes important indicators like the enrolment rate of
girl and boy students, the number of toilets in the school, and the learning
test scores of both girl and boy students. Consider the following questions
and prepare your responses.
Scenario 4:
The BPY programme is about to start. The board of the CSR foundation is considering conducting an IE for the project. Due to strategic reasons
and political pressure, the 200 villages where the programme will be
implemented have already been decided. Your organisation is being called
to conduct an IE for this project. Please present your responses to the
questions mentioned below:
APPROACHES TO EVALUATION
This evaluation approach has two key components. Firstly, the specific
and real life primary intended users of the evaluation are identified and
engaged from the beginning to design the evaluation and decide the key
outputs of the evaluation based on their requirements. Secondly, the
decisions about the evaluation process should be made keeping in mind
the intended use of the evaluation by its intended users.
There are a few critical points that need to be taken care of while
implementing TE. It is suggested that the researchers should have a
discussion at the beginning of the planning phase in order to understand
and ascertain the background and cultural context of the programme. Also,
it is good to use mixed-method designs, as combining quantitative and qualitative data facilitates responsiveness to the different issues raised by different participants.
Also, the methodology adopted should be able to capture the cultural
complexity and it should be appropriate for the cultural groups in the
research study area.
constraints and come up with the best possible methodology in the context
of the programme.
RWE tries to address the quality challenge to ensure that the study has the
minimum acceptable methodological rigour and adapts the evaluation
design to the level of rigour required by decision makers. One of the key
advantages of RWE is that it can increase the uptake of
evidence into policy making as project stakeholders are involved across all
phases including design, implementation, analysis and dissemination. This
would also ensure that the evaluation focuses on the key issues highlighted
by the stakeholders and that it uses the client’s preferred style of
communication.
This approach was proposed by Ray Pawson and Nick Tilley in 1997.
While conventional evaluations aim to answer the question, ‘What
works?’ or ‘Does the programme work?’ realist evaluation aims to answer
the question ‘What works for whom in what circumstances and in what
respect, and how?’ (Tilley, 2004). To understand this, realist evaluation
aims to identify the mechanisms and processes and explain how the output
was achieved under the influence of the programme milieu.
Realist evaluation is method neutral i.e., it does not impose any particular
method. As with other evaluation approaches, realist evaluation too
believes that the choice of data collection and analysis methods and tools
depends upon the type of data that is needed to answer the evaluation
questions and to test the programme theory. Quantitative data is collected
to test the various levels of results and the programme context, whereas
qualitative data is collected on the processes and generative mechanisms.
In recent times, there has been increased demand and stronger focus on
equity in human and social development. A greater number of
organisations working in the development sector including national
governments are focusing on achieving equitable development results for
the vulnerable or disadvantaged groups amongst the target population.
While the focus is on equity-focused interventions, there is also the
challenge of how to evaluate the effect of interventions on equitable
outcomes.
highlights the gaps between the various strata of beneficiaries viz., 'the
best-off group', 'the average group' and 'the worst-off group'.
• The project does not yet have an intervention model, i.e., it needs to be developed
• The model is not foolproof and needs to be developed or modified
• The programme situation or context is complex, i.e., there is no simple cause and effect situation, and most of the important relationships between cause and effect are not clear
• A rapid response needs to be developed in case of a sudden major change in the project or situation
There are four types of measurement scales that are used in the
measurement of variables: nominal, ordinal, interval, and ratio scales,
which follow a hierarchy, with nominal being at the lowest rung of the
hierarchy. These four types of measurement scales are briefly defined
below:
are feasible and only measures such as median and range can be
calculated on this type of data.
3. Interval variables: Interval variables provide more flexibility in
terms of measurement as they not only allow measured items to be
ranked but also help to quantify the sizes of differences between
them. For example, temperature that is measured in degrees
Fahrenheit or Celsius constitutes an interval scale. Thus, although
the temperature of 80 degrees is higher than the temperature of 40
degrees, the temperature at 80 degrees is not twice as hot as 40
degrees. Another example of an interval variable is the time system of B.C. and A.D., wherein time is measured taking the B.C./A.D. boundary as the reference point (the initial point of reference is assumed to be zero). However, this does not mean that time did not exist before that point. A reference scale to measure time has been constructed, but it does not have a true or rational zero.
4. Ratio variables: Ratio variables are measured by scale and they
have an equidistant point as well as a rational zero. Thus, in
addition to all the properties of interval variables, ratio variables
feature an identifiable absolute zero point. A typical example of the
ratio scale is the Kelvin temperature scale, where not only is the
temperature of 60 degrees higher than the temperature of 20
degrees, but it can be specified that the temperature of 60 degrees is
thrice as high as the temperature of 20 degrees. Similarly, an object weighing 20 kg is twice as heavy as an object weighing 10 kg.
Most of the variables used for measuring in field situations
conform to ratio scale properties, though most statistical data
analysis procedures do not distinguish between the interval and
ratio properties of the measurement scales.
Data Classification
• By nature: Qualitative, Quantitative
• By source: Primary, Secondary
• By value: Discrete, Continuous
Qualitative Data
Quantitative Data
Primary Data
Secondary Data
Secondary data is data that has already been collected and is available from other sources. When primary data derived from primary sources is made available to the researcher, it is called secondary data, e.g., data from the Census of India, the AHS (Annual Health Survey), data already published in the literature etc. Secondary data is readily available, so it saves time and provides the researcher with a larger database. It also helps the researcher to identify the gaps and deficiencies in the available data and the additional information that is required.
Mode
Median
The median is the middle value in a data set that divides the ordered data
set exactly in half, with one half of the data being greater than the median
and one half being less than the median. The goal here is to determine the
precise midpoint of the data set. When the number of observations (N) is
an odd number, the median is the middlemost number in the ordered data
set. When N is an even number, the median is the average of the two
middlemost scores in an ordered data set. One very important property of
the median is that it is less sensitive than the mean to extreme scores.
Mean
The mean is the arithmetic average, i.e., the sum of all data divided by the
number of cases.
Often it is best to calculate more than one measure since each measure
tells us something different, although all refer to the centre of the data.
Generally, the mean is the most preferred measure of central tendency
because it uses every score in the distribution. The practitioner needs to
remember that the goal is to use one score to best represent the entire data set, though the mean might not always be the best choice.
When the Mode is the best: Since the mean requires interval or ratio data, the mode is often the best choice for nominal or ordinal data, though it can be used for all types. Moreover, it is very easy to use, since no calculations are necessary.
When the Median is preferred over the Mean: There are several
instances when the median is the preferred statistic:
1. When there are extreme scores: An extreme score pulls the mean
towards that score, which means that an extremely high score pulls
the mean higher, while an extremely low score pulls the mean
lower. If, for example, one of the groups in the data given above
saved INR 1000, the mean would not be the best way by which to
represent the central tendency.
2. If some scores are missing.
3. If one has ordinal data; since this data is discrete, the mean is
generally not used.
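The effect of an extreme score described in point 1 is easy to demonstrate. The figures below are hypothetical SHG monthly savings in INR; the INR 1000 outlier mirrors the example in the text.

```python
# How an extreme score pulls the mean but not the median, using
# hypothetical SHG monthly savings in INR (all figures illustrative).
from statistics import mean, median

savings = [40, 45, 50, 50, 55, 60]
print(mean(savings), median(savings))    # both equal 50

savings_with_outlier = savings + [1000]  # one group saved INR 1000
print(mean(savings_with_outlier))        # pulled up to roughly 186
print(median(savings_with_outlier))      # still 50
```

A single extreme saver nearly quadruples the mean while leaving the median untouched, which is why the median is preferred here.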
Range
The range is the difference between the highest and lowest score in the
distribution. Expressed as a single number, the range is the result of the
following formula:

Range = Highest score - Lowest score
Variance
s² = Σ(x - x̄)² / N, i.e., SS / N
The variance detects the differences that the range does not detect because
it uses all the data points in the calculation and not just the minimum and
the maximum.
The key to the variance is the SS: if the SS is large, the variance is large, and if the SS is small, the variance is small.
Standard Deviation
Because the variance is expressed in squared units, the researcher should go back to the original units by taking the square root:

σ = √( Σ(X - X̄)² / n ), i.e., √(SS / n)
This is the standard deviation, which is the most commonly used measure
of variability. It is frequently cited with the mean as the main piece of
information about a distribution.
It considers every piece of data in the distribution (as does the mean), measuring, on average, how much each piece of data deviates from the mean.
The standard deviation has been calculated below taking the SHG example
discussed above.
This should give the practitioner an idea of how much dispersion around
the mean there is in a particular distribution. It provides the overall notion
of dispersion in a distribution needed to complete the description of the
distribution.
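The three dispersion measures above can be computed together. The savings figures are again hypothetical SHG data in INR, and the population formulas from the text (dividing by N) are used.

```python
# Sketch computing range, variance and standard deviation for
# hypothetical SHG savings data (INR); values are illustrative only.
import math

savings = [40, 45, 50, 55, 60]
n = len(savings)
mean = sum(savings) / n                      # 50.0

data_range = max(savings) - min(savings)     # highest minus lowest
ss = sum((x - mean) ** 2 for x in savings)   # sum of squares, SS
variance = ss / n                            # s^2 = SS / N
std_dev = math.sqrt(variance)                # sigma = sqrt(SS / N)
print(data_range, variance, round(std_dev, 2))  # 20 50.0 7.07
```

Note how the variance (50 squared-rupees) is hard to interpret directly, while the standard deviation (about INR 7) is back in the original units.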
Regression
To identify the functions available in MS-Excel, the user needs to click the Formulas tab on the Ribbon and then click the Insert Function button
in the Function Library group. Alternatively, the user clicks the button on
the formula bar. Either approach provides the Insert Function dialog box
shown in the figure below:
The Search for a function box at the top of the Insert Function dialog box
enables the user to type a brief description of what they want to do. After
that, when the Go button is clicked, MS-Excel searches for and displays
the functions that may accomplish the task in the Select a function box. In
many situations, however, the user may want to browse through an entire
category of functions to see what is available. For this task, the user has
the choice of Or select a category box which contains a drop-down list of
several categories of functions provided by MS-Excel. The figure below
shows that the Statistical category is selected. As a result of this, MS-
Excel’s statistical functions appear in alphabetic order in the Select a
function box. The user will find the AVEDEV function listed first,
followed by the AVERAGE function, and so on.
For data analysis, the user can choose out of the key MS-Excel functions
displayed. For this purpose, the sample dataset Health_Data_2013-14.xlsx
is used.
The first step in any data analysis is to understand the dataset. In the
example used, it can be observed that the dataset consists of the following
variables:
• District Name
• Month of reporting
• Total number of pregnant women registered for ANC
• Deliveries conducted at facility (including C-sections)
• The number of newborns breastfed within one hour of birth
The data is culled from three districts viz., District-1, District-2 and
District-3 and contains 36 cases in all. Each case contains the monthly data
for a district and based on these observations, the user analyses the data.
The first step is to select the cell in which the output is to be displayed.
The cell next to the last entered value at the end of column D is selected.
In this case, the cell number is D38. After this cell has been selected, the
user needs to:
1. Go to the Formulas tab in the Ribbon and select the Insert Function
2. Select All from the drop-down menu of Or select a category
3. Select AVERAGE from the list of functions and press OK.
The AVERAGE function requires the user to give an input of the list of
numbers that are used to compute the average. To estimate the average
number of deliveries in facilities in a month, the entire range of the
variable for institutional deliveries i.e., D2 to D37 is selected. The user
needs to click OK to get the average number of deliveries conducted at the
facility per month across the three districts, which is 2684.7.
Though the required average figures may be obtained using this process,
several steps are involved in it. An easier way to do this is by using MS-
Excel Pivot Tables.
For the process of using MS-Excel functions, the user needs to:
The desired output can be obtained using this process and subsequently the
results can be interpreted. Given here is the list of functions that are most
frequently used:
Function: Use
MAX: Returns the largest value from a list of supplied numbers
MIN: Returns the smallest value from a list of supplied numbers
SUM: Returns the sum of a list of supplied numbers
IF: Tests a user-defined condition and returns one result if the condition is TRUE, and another result if the condition is FALSE
COUNT: Returns the number of numerical values in a supplied set of cells or values
COUNTIF: Returns the number of cells (of a supplied range) that satisfy a given criterion
CORREL: Returns the correlation coefficient between the array 1 and array 2 cell ranges
AVERAGE: Returns the average of a list of supplied numbers
MEDIAN: Returns the median (the middle value) of a list of supplied numbers
MODE: Returns the mode (the most frequently occurring value) of a list of supplied numbers (replaced by the MODE.SNGL function in MS-Excel 2010)
STDEV.S: Returns the standard deviation of a supplied set of values representing a sample of a population (new in MS-Excel 2010; replaces the STDEV function)
STDEV.P: Returns the standard deviation of a supplied set of values representing an entire population (new in MS-Excel 2010; replaces the STDEVP function)
Pivot Tables are one of the most powerful features of MS-Excel. Pivot
tables provide an easy way to summarise, analyse, explore and present
data. A Pivot Table is a versatile and user-friendly reporting tool that
makes it easy to extract information from large tables of data without the
use of formulae. By moving or pivoting fields of data from one location to
another or by using drag and drop, the same data can be seen in several
different ways.
To analyse the data using Pivot Tables, the user first clicks on any single cell in the dataset, then goes to the Insert tab and clicks Pivot Table.
The following dialog box appears. MS-Excel automatically selects the data
and the default location for a new pivot table is a New Worksheet.
The Pivot Table Fields will appear on the right side of the screen. For
example, to get the total number of women registered for ANC for each
district, the user drags the following fields to different areas on the Pivot
Table Field:
Pivot Tables also provide options to summarise the data using functions
such as count, average, standard deviation etc. These options can be
chosen from the value field settings in the values area.
To generate summary statistics with the Analysis ToolPak, the user needs to:
4. Select Descriptive Statistics from the list of analysis tools and click
OK. The following dialog box appears.
The user gives the input about the data to be analysed in the Input Range
box. To analyse the number of children breastfed within one hour of birth,
E1:E37 is selected as the Input Range. Since each column in the dataset represents a variable, the user selects Columns under Grouped By. Also, because Row 1, which includes the variable name, is included, the user ticks the Labels in First Row checkbox.
Next, under Output Options, the user decides where the results are to be written and selects a new sheet by clicking the New Worksheet Ply radio button, so that none of the existing data is accidentally overwritten.
The user needs to click OK for the new results to appear in a New
Worksheet.
The first box is the Input Range, where the user gives the input about the
variables for which the correlation is to be analysed.
For the Input Y Range, all the values of the dependent variable (the number of children breastfed within one hour of birth) are selected.
In the next step the independent variable (the number of deliveries in the
facility) is entered in the Input X Range box.
The Labels checkbox is ticked since the first row of data is the variable
label. Once again, the output is entered on a new spreadsheet.
When the user selects OK, the result appears in the New Worksheet.
Therefore, the equation for our model for this example would be:
Another important result that is derived from this model is the R squared
value. The R squared value denotes the fraction of the variance in the
dependent variable that is accounted for by the independent variable. In
this case, the R square value is 0.91, which suggests that 91 per cent of the variation in the dependent variable is accounted for by the independent variable.
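The same slope, intercept and R-squared that Excel reports can be computed directly. The numbers below are simulated stand-ins for the breastfeeding and delivery counts, not the actual values from Health_Data_2013-14.xlsx.

```python
# Sketch of the simple regression behind the Excel output described
# above: newborns breastfed within one hour (y) regressed on deliveries
# conducted at the facility (x). All values are simulated.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(1500, 4000, size=36)         # deliveries at facility
y = 0.8 * x + rng.normal(0, 200, size=36)    # breastfed within one hour

slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
# R-squared: share of variance in y explained by the fitted line
r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(round(slope, 2), round(r_squared, 2))
```

With the simulated noise level used here, R-squared lands near 0.9, mirroring the 0.91 figure reported in the text.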
1. The Data Window, with a blank data sheet ready for analysis, is the
first window the user encounters. It is used to define and enter data
and to perform statistical procedures. Data files have the suffix ‘.sav’ at the end of the file.
2. The Syntax Window is used to keep a record of all commands
issued in a session. The researcher does not necessarily have to
know the language for writing syntax, instead they can just select
the appropriate options from the menu and the dialog box, after
which they can select the Paste function option. This command
pastes the equivalent syntax of the selected operation in the Syntax
window. Beyond serving as a log for operations, it is possible to
run commands directly from the syntax window. It is recommended
that a syntax file is maintained with the syntax of all data analyses.
This ensures regeneration of the required tables and analysis
quickly. The syntax files have the suffix ‘.sps’ at the end of the file.
3. Output window: Whenever a procedure is run, the output is
directed to a separate window called the Output Window. The
tables can be directly copied from the Output Window and be saved
in MS-Excel or MS-Word files. Output files have the suffix ‘.spo’ at the end of the file.
Entering Data
The researcher can create a data file simply by entering the data. The
present section describes the step-by-step procedure for creating a data file
by entering variable information about subjects or cases. The SPSS main
data window provides two options on the bottom left hand corner of the
screen and the researcher can access either the Data View or the Variable
View window as shown in the figure given below.
• Name: Variable names in SPSS are not case sensitive, but they must begin with a letter. Further, a variable name should not exceed eight characters.
• Type: SPSS also provides the facility to specify the data type, i.e., whether the data is in numeric, string or date format etc., though the default format is numeric.
• Labels: SPSS also provides the facility of attaching labels to a variable name. A variable name is limited to a length of eight characters, but by using a variable label, a researcher can use up to 256 characters to attach a label to a variable name. This makes it possible to have very descriptive labels that appear in the output. The researcher can enhance the readability of the output by using the Labels option.
• Values: The researcher can assign value labels, e.g., male and female can be coded as 1 and 2, respectively.
Data Dictionary
A data dictionary is easily produced by selecting the File Info option from
the Utilities menu. The data dictionary first provides a list of variables on
the working file, wherein the variable name appears on the left-hand side
and the column number of the variable appears on the right side of the
output window. The data dictionary also provides the print and write
format after the variable name, followed by special characteristics of the
variable such as value labels.
Data can be imported in several ways, though the simplest is the Open option. In this case, the requisite database file can be directly opened by simply selecting the ‘Open’ option as displayed in the figure given below. Secondly, data files of other formats (dBase, Access) can be imported into SPSS through database capture. The third option is to read files that are saved in ASCII format, which further provides two choices: Freefield and Fixed Columns.
An SPSS file can be opened quite easily by clicking on the menu item and selecting File -> Open -> Data. The SPSS file type (having a ‘.sav’ extension) can then be selected and opened. Similarly, an SPSS output file can be opened by selecting File -> Open -> Output and choosing the output document file type (having a ‘.spo’ extension).
An Excel workbook can be opened by selecting the Read Text Data option
by navigating through File->Read Text Data. Excel file types (.xls) can be selected in file type and any Excel file can be selected to be opened as
shown in the figure given below.
After this, the Excel workbook can be selected which can be opened by
selecting the option 'Open and Continue'. This opens a screen like the one
shown in the figure below.
It is important to point out that in case the first row of the Excel workbook
contains the variable names, then it should be ensured that the checkbox
for this option is ticked. In case the data is not on the first worksheet, the
worksheet can be changed using the down arrow to select the worksheet
which has the data. SPSS also provides the facility to select the range from
the worksheet which the researcher wants to use, in case the researcher is
not interested in using all the data in the worksheet.
SPSS provides the facility to import the text file through ‘Text Wizard’.
Text data files can be read in a variety of formats mentioned below:
• Tab-delimited files
• Space-delimited files
• Comma-delimited files
• Fixed-format files.
At Step 2, the Text Import Wizard requests information about how variables are
arranged, i.e., whether the variables are defined by a specific character
(spaces, tabs or commas) or whether the variables have a fixed width,
where each variable is recorded in the same column. In the next step, it
further needs to be seen if the variable names are included at the top of the
file as the file may or may not contain variable names. In case the file
contains variable names of more than eight characters, then the variable
names are truncated. Further, in case the files do not contain variable
names, SPSS can allocate default names.
In the third step, the Text Import Wizard requests information about how
cases are represented and the number of cases that are to be imported.
Usually the first case begins with line 1 if no variable name is supplied and
line 2 if the variable name is supplied. Further, each line normally
represents a case. In this step, it is also necessary to specify the number of
cases the researcher wants to import, i.e., whether the researcher wants to
import all the cases, the first n cases, or a random sample.
Step 4 of the Text Import Wizard requests information about the file, i.e.,
whether it is comma delimited or space delimited. This step, in the case of
delimited files, allows selection of the character or symbol used as a
Analysing M&E Information 139
delimiter, whereas in case of fixed width files, the step displays vertical
lines on the file which can be moved if required.
In the last step, i.e., step 6 of the Text Import Wizard, SPSS provides a
facility to save the file specification for later use, or to paste the syntax.
a) Recoding Variables
For this, there are two options provided by SPSS for recoding of the
variable, viz., Recode into Different Variables and Recode into Same
Variables. It is strongly recommended that the researcher initially uses
only the Recode into Different Variables option: even if the researcher
makes an error, the original variable is still in the file and can be
recoded again.
The data can be recoded using several options. A particular value can be
changed into a new value by entering the value to be changed into the Old
Value box, while the new value is entered into the New Value box.
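As an illustrative equivalent outside SPSS, the same recode-into-a-different-variable logic can be sketched in Python with pandas; the variable names and codes here are hypothetical:

```python
import pandas as pd

# Hypothetical education codes: 1 = primary, 2 = secondary, 3 = higher
df = pd.DataFrame({"educ": [1, 2, 3, 2, 1]})

# Recode into a *different* variable so the original survives any mistake,
# mirroring SPSS's "Recode into Different Variables":
# old values 2 and 3 collapse into new value 2 ("secondary or above")
df["educ_rec"] = df["educ"].replace({1: 1, 2: 2, 3: 2})
```

Because the original column is untouched, the recode can be redone at any time.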
After typing the target variable, the computation that involves the
variables in the list is specified to create the target variable. The
researcher can use all the operations listed at the bottom of the screen,
though operations within parentheses are performed first.
140 Chapter Eight
The If command can also be used to create new variables based on the old
variables. This can be done by selecting Transform and then the Compute
function. The If button then opens the Include if Case Satisfies Condition
option, which is used to select subsets of cases using conditional
expressions.
There are several operators which can be used, though in the majority of
cases, one or more of the six relational operators (<, >, <=, >=, =, and ~=)
are used.
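A conditional compute of this kind can be sketched in pandas as well; the variable names are hypothetical, and cases failing the condition are left missing, as SPSS would leave them:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"age": [15, 32, 47, 68]})

# Equivalent of Compute with "Include if case satisfies condition":
# flag adults (age >= 18); non-matching cases stay missing
df["adult"] = np.where(df["age"] >= 18, 1.0, np.nan)
```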
SPSS provides the facility of splitting a data file into separate groups for
analysis based on the values of one or more grouping variables. In case
multiple grouping variables are to be selected, then cases are grouped by
each variable within categories of the first grouping variable, based on the
groups selected. For example, if Occupation is selected as the first
grouping variable and Education as the second grouping variable, then
cases will be grouped by Education classification within each Occupation
category.
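The nesting described above (Education within Occupation) corresponds to a two-key group-by; a minimal pandas sketch with hypothetical data:

```python
import pandas as pd

df = pd.DataFrame({
    "occupation": ["farm", "farm", "service", "service"],
    "education":  ["primary", "secondary", "primary", "secondary"],
    "income":     [100, 150, 200, 250],
})

# Split-file-style analysis: Education categories nested within Occupation
means = df.groupby(["occupation", "education"])["income"].mean()
```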
The Data > Split File option can be chosen from the menu as shown in the
figure below. At the next stage, groups are selected to organise the output
by groups. Note that the Split File option is selected from the Data Editor
window before splitting the file.
The Split File option can be turned off by selecting Data and then the Split
File option from the Data Editor window by clicking on the Analyze all
cases option.
SPSS also provides the facility to select subsets of cases for further
analysis. To arrive at this option, first Data is selected and then the option
Select Cases. This opens the Select Cases box and if the researcher wants
to select a subset of these cases, they can select the option If condition is
satisfied to select the subset of cases.
Further, at the bottom of the window, there is a check box to specify that
the unselected cases are filtered. By selecting this option, the unselected
cases can be used later by clicking on All Cases. However, if the Delete
option is selected, the unselected cases are permanently deleted.
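The filtered-versus-deleted distinction can be illustrated with a boolean mask in pandas (hypothetical data):

```python
import pandas as pd

df = pd.DataFrame({"age": [15, 32, 47]})

# "If condition is satisfied" with unselected cases *filtered*:
# a subset is built while the full df remains available,
# like returning to All Cases in SPSS
subset = df[df["age"] >= 18]

# The "Delete unselected cases" option would instead overwrite df:
# df = df[df["age"] >= 18]   # unselected rows are gone for good
```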
To weight the data using this variable just created, the researcher selects
Data and then Weight Cases. In the next step, the researcher selects the
circle to the left of Weight Cases.
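Weighting cases amounts to computing weighted statistics downstream; a small numeric sketch with hypothetical weights:

```python
import numpy as np

values  = np.array([10.0, 20.0, 30.0])   # observed responses
weights = np.array([1.0, 1.0, 2.0])      # hypothetical case weights

# Unweighted vs weighted mean: each case counts
# in proportion to its weight
unweighted = values.mean()                      # 20.0
weighted = np.average(values, weights=weights)  # (10 + 20 + 2*30) / 4 = 22.5
```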
g) Missing Values
Missing data can arise from various factors: the way in which the
interviewer administered the question, questions left blank, respondents
declining to answer certain questions, or human error in data coding and
data entry. There are a few techniques
to handle data with missing values: (1) complete case analysis (list wise
deletion), (2) available case methods (pair-wise deletion), and (3) filling in
the missing values with estimated scores (imputation).
Besides this, the researcher may use various functions and simple
arithmetic expressions to handle missing values in different ways.
Thus, the researcher can use the arithmetic expression (var1+var2+var3)/3,
where the result is treated as missing if any of the three variables, i.e.,
var1, var2 or var3, has a missing value.
The researcher can use the expression MEAN (var1, var2, var3) where the
result is treated as missing only if a case has missing values for all three
variables.
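The contrast between the arithmetic expression and the MEAN function can be reproduced with NumPy, where `np.nanmean` plays the role of SPSS's MEAN():

```python
import numpy as np

v = np.array([4.0, np.nan, 8.0])   # the second variable is missing

# Arithmetic expression (var1+var2+var3)/3: any missing value
# makes the whole result missing
arithmetic = v.sum() / 3

# MEAN(var1, var2, var3)-style: missing only if *all* values are missing,
# otherwise the mean of the non-missing values
mean_style = np.nanmean(v)   # (4 + 8) / 2 = 6.0
```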
h) Reliability Analysis
The figure above details the reliability coefficients for an awareness
scale, comprising eleven awareness variables about leprosy symptoms, which
has a high alpha value of .7672. A rule of thumb states that 0.60 is the
lower level of acceptability for alpha, though alphas in the 0.70s and
0.80s are preferable.
Further, the Inter-Item Correlations and Scale if Item Deleted options are
also very important indicators for assessing the reliability of the scale.
The Inter-Item Correlation option allows the researcher to see whether any
of the items are negatively correlated with the other items in the scale,
while the Scale if Item Deleted option reveals the alpha if each item is
deleted from the scale.
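Cronbach's alpha itself is simple to compute from the item variances and the variance of the total score; a NumPy sketch with hypothetical toy data (not the leprosy scale from the figure):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a cases-by-items matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                           # number of items
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

# Perfectly consistent items give alpha = 1.0
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```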
Data Analysis
a) Univariate Statistics
b) Frequencies
c) Descriptives
d) Explore
The SPSS package provides the option to examine the central tendency
and distributional characteristics of continuous variables by using the
Explore option. The researcher can select statistics such as M estimators,
outliers, and percentiles by using the Explore option. It also provides
options such as grouped frequency tables and displays, as well as
stem-and-leaf and box plots.
e) Cross Tabulations
Cross tabulation is one of the easiest ways of summarising the data which
can be of any size in terms of rows and columns. It generally allows the
researcher to identify relationships between the cross tabulated variables
based on their cell values.
The SPSS package provides the facility for generating bivariate cross
tabulations. A cross tabulation helps in analysing the association of one
variable with another variable and is extremely useful in cases where each
variable contains only a few categories.
f) The Mean

g) Chi-Square Test

The chi-square test is used only with measures which place cases into
categories. This test indicates whether the results from the two measures
are about what one would expect if the two were not related. This test can
be conducted by selecting Analyze from the top menu and Descriptive
Statistics and Crosstabs to open the Crosstabs dialog box. Further, in the
Crosstabs dialog box, the variables which the researcher wishes to crosstab
can be selected. Further, the researcher can select the Crosstabs: Statistics
box option, and then select the Chi-square box option to continue.
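The same test is available outside SPSS; a scipy sketch on a hypothetical 2×2 crosstab of counts:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical crosstab: rows = gender, columns = aware / not aware
table = np.array([[30, 20],
                  [10, 40]])

# chi2_contingency returns the statistic, p-value, degrees of freedom
# and the expected counts under the independence hypothesis
chi2, p, dof, expected = chi2_contingency(table)
```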
h) Independent-Samples t-Test
The Independent-Samples t-test procedure compares the means for two groups
of cases. In fact, there are two variants of unpaired t-tests based on the
assumption of equal and unequal variances between the two groups of cases.
In the case of an unpaired t-test, subjects are randomly assigned to two
groups, so that after carrying out the significance test, one can conclude
whether the difference in response is due to the treatment and not to other
factors.
At the next stage, the groups are defined by selecting the Define Groups
button, which opens the Define Groups box. Since males are coded as 1
and females are coded as 2, the researcher types 1 in the Group 1 box and
2 in the Group 2 box.
The table above shows the results of the two t-tests (assuming equal and
unequal variances). The table also gives the values for the degrees of
freedom and the observed significance level.
It is important to point out that in this test the null hypothesis is that men
and women have the same TV viewership frequency. The null hypothesis
can be easily tested by comparing the p value with the specified
significance level. This significance level value is the probability that the t
value would be this big or bigger simply by chance if the null hypothesis
were true. Since this probability is less than .05 (the significance level
researchers use by convention in social studies), the researcher can reject
the null hypothesis and conclude that probably there is a difference
between men and women in terms of TV viewership.
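Both variants of the unpaired test can be sketched with scipy's `ttest_ind`; the viewing-hours figures below are hypothetical, not the data behind the table above:

```python
from scipy.stats import ttest_ind

men   = [4.1, 3.8, 5.0, 4.4]   # hypothetical TV-viewing hours per day
women = [3.2, 2.9, 3.6, 3.1]

# equal_var=True  -> classic pooled-variance t-test
# equal_var=False -> Welch's t-test for unequal variances
t_eq, p_eq = ttest_ind(men, women, equal_var=True)
t_w,  p_w  = ttest_ind(men, women, equal_var=False)
```

With p below .05, the null hypothesis of equal means would be rejected for this toy sample.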
i) Paired-Samples t-Test
The test statistic is t with n-1 degrees of freedom. If the p-value
associated with t is low (< 0.05), there is evidence to reject the null
hypothesis, i.e., evidence of a difference in the means across paired
observations.
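A paired test can be sketched the same way with scipy's `ttest_rel` (hypothetical before/after scores for the same cases):

```python
from scipy.stats import ttest_rel

before = [52, 48, 60, 55, 49]   # hypothetical pre-intervention scores
after  = [58, 55, 61, 62, 54]   # scores for the same cases afterwards

# The statistic has n - 1 = 4 degrees of freedom for these 5 pairs
t, p = ttest_rel(after, before)
```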
j) Correlation
For example, one could hypothesise that as education increases, the level
of prestige of one’s occupation also increases. The Pearson Correlation
Coefficient could tell us the correlation between these two variables. This
can be assessed by selecting the Analyze menu option, then Correlate and
Bivariate sub-options.
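The education-and-prestige example can be sketched with scipy's `pearsonr` (the values below are hypothetical):

```python
from scipy.stats import pearsonr

education = [8, 10, 12, 14, 16]   # hypothetical years of schooling
prestige  = [30, 35, 45, 50, 60]  # hypothetical occupational prestige score

# r close to +1 indicates a strong positive linear association
r, p = pearsonr(education, prestige)
```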
k) Regression
At the next stage, the method of data analysis is selected by clicking on
the Method button, which is right below the Independent(s): box. The SPSS
statistics package provides several choices for doing regression analysis,
though the method selected here is the one used most frequently.
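A simple least-squares fit of the kind SPSS runs can be sketched with NumPy, using one hypothetical predictor:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # hypothetical predictor
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])    # hypothetical outcome

# Ordinary least squares: design matrix with an intercept column
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = coef
```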
Graphs
a) Bar graphs
To select the bar graph option, first Graphs is selected from the menu,
after which the Bar option is selected, which includes the options of
Simple, Clustered or Stacked bar graphs. After this, the independent
variable is placed on the category (horizontal) axis via the Category Axis
box, while the dependent variable is placed on the vertical axis via the
Y box.
b) Pie charts
The pie chart sub-option can be selected by going to the ‘Graph’ tab
through the menu bar. After selecting pie chart as the graphical
representation option, the variable for which the pie chart needs to be
created can be selected.
c) Line graphs
In the same way, the line graph chart option can also be selected from the
Graph tab in the menu bar option. SPSS provides the option of creating
different types of line graphs like simple line graph, multiple line graph or
drop line graph.
In the data view in SPSS, if the grey column heading is clicked on, the
whole column gets selected or highlighted. Similarly, if the grey row
number is clicked on, the whole row gets selected.
Writing Syntax
Syntax can also be typed directly into syntax files, which is often simpler
than menu-generated syntax. Users usually find it more efficient to program
by directly typing in the commands.
It is suggested that the syntax should be saved in syntax files for analysis
and tabulations done on a data set. This is very helpful as the analysis and
tables can be regenerated instantly multiple times using this syntax. It is
very easy to run the syntax commands. To run any syntax command, the
appropriate syntax is highlighted or selected in the Syntax Editor window.
Then the Run button on the Syntax window toolbar is clicked, the
corresponding choice is selected from the Run menu, or the shortcut Ctrl+R
is used.
The output of the analysis or the tabulations done can be viewed in the
SPSS Output Viewer window. The SPSS Viewer window displays output,
including the warning and error messages. The left-hand side of the window
contains an outline pane that allows navigation of all the outputs
generated in the opened output file.
To get Help in the SPSS Viewer for interpreting any output table, the
researcher right-clicks an output table and selects Results Coach in the
popup menu. Results Coach is a well-organised, context specific
introduction to interpreting output items.
it’s all about noting down what the respondent has reported about what he
or she feels. Thus, it is important to point out that there is the potential
problem of a researcher's bias during transcription, due to which the
researcher notes only those sections that seem relevant or interesting to
them. During qualitative data analysis, the researcher should take special
care to avoid this issue. Transcription is not an act which is done in
isolation; it involves a lot of processes, which ultimately contribute to
the collection of quality raw data that is ready for analysis. Transcription
coupled with content analysis, takes care of qualitative data analysis and
usually includes the following steps:
Analysis Methods
b) Grounded Theory: Grounded theory is by far the most widely used
framework for analysing qualitative data.
Grounded theory has been defined as ‘theory that was derived from data,
systematically gathered and analysed through the research process. In this
method, data collection, analysis and eventual theory stand in close
relationship with one another' (Strauss and Corbin, 1998). It consists of 'plausible
relationships’ among sets of concepts, which are directly developed from
data analysis. Thus, two central features of grounded theory are that it is
concerned with the development of theory out of data, and that the approach
is iterative, meaning that data collection and analysis proceed in tandem.
Grounded theory starts with a clear research question, and it owes much of
its analytical power to its constant comparative method. In this method,
concepts or categories emerging from one stage of the data analysis are
compared with concepts emerging from the next. In a way, by constantly
comparing the concepts and categories, the researcher looks for the
relationship between them, while looking out for the emerging theory.
This continues until the stage of theoretical saturation has been reached,
i.e., when there are no new significant categories or concepts which are of
interest. At the initial stage of data analysis, after familiarisation with field
notes, the researcher looks for indicators of categories in events and
behaviour, which are subsequently coded.
Text passages of interest in the notes are coded: codes are assigned
(mentally or by scribbling) to the passages, which are then compared to
find similarities and differences between the codes to reveal concepts and
categories. At the next stage, memos are developed on
the comparisons and emerging concepts and categories till the point of
saturation. Thus, eventually the emerging concepts and theories can be
combined in a logical way to find the relationship between them.
c) Framework Analysis:
The process of framework analysis starts by getting familiar with the raw
data. After this, the thematic framework needs to be developed by
reviewing and analysing the emerging issues. At the next stage, numerical
and text codes are assigned to specific pieces of data based on the
developed thematic framework. The data is represented pictorially under
the broad heads of the thematic framework by using charts and maps,
which later pave the way for identifying patterns, associations and
relationships. Framework analysis thus involves a few key stages, which
are mentioned below:
a) The Ethnograph

The Ethnograph uses the segment as the basic unit to retrieve text from
documents and can be described as text retrieval software. Segments are
coded with up to twelve code words, which can be nested or overlapped up to
seven levels deep, as a sort of query function to retrieve text. It is
important to point out that the search results depend on the query function
in the form of nesting using codes.
b) HyperQual
c) Hypersoft
Hypersoft provides an integrated facility for data entry and data retrieval
for theory building. This software also provides the options of indexing,
searching and analysing textual data. Hypersoft does so by providing
facilities for annotating, categorising, and coding of raw data.
e) Textbase Alpha
Textbase Alpha provides the facility for coding of both narrative text as
well as data which has internal structure. Textbase Alpha also provides the
facility for assembling coded segments, which can be represented by
frequency counts and data matrix output. Textbase Beta, the newer, modified
version of Textbase Alpha, provides the facility to code longer portions of
lines.
f) SONAR
g) ATLAS/ti
and theory building, rather than straight code and retrieve. Codes and
super codes which represent a series of statements expressing textual
relationships are used to build theory, concept and relationship through
operators.
ATLAS/ti is probably the most powerful qualitative analysis tool for text,
video and audio data. It can work with text, video and audio files of
various formats. Though it follows the analysis paradigm of Grounded Theory
and qualitative content analysis, it is used universally by qualitative
researchers from all realms of analysis theory. It helps to reveal
the complex phenomena hidden in qualitative data in an exploratory way.
Step II: After creating a HU, all the data files relevant to the project,
including the primary documents in the form of text, graphics and audio,
are associated with the created project or the HU.
Step III: After associating all relevant data files or Primary Documents
with the HU, the relevant text passages or pieces of special interest are
selected and assigned codes or memos. This step transforms all relevant
documents into codes and memos, which are used in later stages of query
indexing or theory building.
Step IV: After the initial stages of assigning codes and memos, the coded
data segments are reviewed thoroughly to ascertain whether it is necessary
to associate more files with the HU.
Step V: After assigning codes and memos to all relevant data files
associated with the project, all the primary documents, codes, and memos
are organised using ‘Families’. This is used later in developing theory.
Step VI: After organising the primary documents, codes and memos, as the
next step, semantic, conceptual or terminological networks are built from
the codes and memos which have been created and organised. These
conceptual networks along with codes, memos, super codes and families
help in crystallisation of the emerging theory.
Step VII: As the last step, based on the emerging theory, a report is
prepared. In case any further analysis is required, the data is exported for
further analysis using the statistical software.
h) NVivo
After the NVivo program is started by clicking on the software icon, one of
the three options provided can be selected, viz., Run a Tutorial, Start a
New Project or Open an Existing Project (Figure 8.31). To start a new
project, the 'New Project' dialog box is selected.
The NVivo menu provides the option to start a New Project, or to Open an
Existing Project.
Reports
Node summary – This provides a summary of each node, e.g., the number of
words coded at the node and the number of sources and cases.
" List the steps involved in executing MLE for development projects
or programmes
" Understand the operational dimensions involved in executing MLE
for developmental projects or programmes
After building on an agreed ToC, the evaluator works on the Logic Model
with the programme team and in case the programme does not have a clear
Logic Model, it is essential that the evaluator develops one in
collaboration with the relevant stakeholders. While working on the logic
model, it is important to have an agreement about its specific components,
viz., the inputs or the resources required to execute the intervention,
activities that need to be carried out, outputs, outcomes and the impact to
be achieved.
Executing Monitoring Evaluation and Learning 177
Further, as explained in the previous chapters, the Logic Model must also
outline the specific indicators for measuring the aspects of each
component.
Based on the purpose of the evaluation, the different terminologies that are
used to describe evaluations are as follows:
• Comprehensive Evaluation
• Formative Evaluation
• Process Evaluation
• Outcome Evaluation/Impact Evaluation
Based on the above tasks, the lessons learnt and best practices are
delineated so that they can further be adopted, adapted and replicated
across other projects for facilitating their efficient and effective
implementation.
Result Questions
Process Questions
The process questions are also related to the inter-linkage and sustainability
questions. While the process questions at each level try to draw causal
inferences about the intervention, the inter-linkage questions attempt to
analyse the outcome of interventions at one level as well as at the next
level of implementation. The questions framed under sustainability are
cross-cutting in nature and seek answers to issues like the viability of
interventions, the possibility of taking the interventions to scale and the
scope of internalisation.
Table 1.14: Review of the Indicators with Respect to the Logic Model
and Theory of Change
As part of the MEL planning process, it is essential for the MEL team to
review the indicators regarding the various stages of Logic Model and the
ToC. It should be assessed if they are appropriate to each stage and
whether they fulfil the SMART characteristics which each indicator
should ideally possess.
After the research tools are developed, they are translated into local
languages focusing on three basic aspects: semantic equivalence, conceptual
equivalence, and normative equivalence of items. For verification of the
translation quality, the translation/back translation method is used, having
independent translators translate from one language to another, and then
back again, to see if the original and re-translated item remains the same.
There are other important questions that need to be answered as part of the
pilot test regarding the research tool, viz.,
The field-tested tools are finalised in consultation with the client and serve
the following purposes:
mobiles/tablets used for an evaluation do not last for more than two to
three projects and hence, it is prudent to budget and plan accordingly.
One great advantage of CAPI is that the enumerators need to spend less
time thinking about what question to ask next as the skips are automated in
the programme. Thus, the CAPI interview has a more natural
conversational flow than a PAPI interview and it also tends to move faster,
reducing the total time it takes to administer the questionnaire. Further, it
is observed that respondents (and enumerators) are less likely to become
tired and/or frustrated with the interview.
Another critical benefit of CAPI is the timeliness of data. One can look at
and analyse the data in almost real time, and there is no time lag between
data collection, cleaning and analysis. This saves a lot of time, which may
be to the tune of one to three months depending on the sample size. More
importantly, timely availability of quality data helps to speed up decision
making. It also helps in building ownership, as all stakeholders can look
at the data on a real-time basis.
The quality of the data depends on the skill of the enumerators recruited,
trained and managed during the data collection exercise. Enumerators are
the fulcrum of the data collection exercise and, hence, they are also the
custodians of the data. The field staff consisting of field enumerators and
supervisors are primarily responsible for the data collection effort.
Thus, it is essential that the enumerators selected for the data collection
exercise possess the requisite skills. The first and foremost skillset
required is good communication or people skills as the key job of
enumerators is to communicate the purpose of the survey, solicit consent,
highlight the importance of the respondent’s participation and then solicit
responses to research questions listed in the research instrument. It is
always helpful to have enumerators who have either basic understanding
of thematic areas wherein they are supposed to collect data or have prior
experience of collecting data on similar thematic areas viz., public health,
nutrition, livelihood etc. For instance, in data collection exercises which
involve technical knowledge, as in the case of collecting data on the
quality of care at a health facility, it is in the interest of the
programme to have enumerators who are trained in nursing. Similarly, in
case the
survey is being done using PDAs, it needs to be assessed if the
investigators are comfortable using the same. The composition of the survey
team, i.e., the number of males or females required, depends on the nature
of the survey. For example, in the case of surveys related to
reproductive health and family planning to be administered to women, it is
advisable to have female investigators.
Another key skill set required for enumerators is the ability to grasp and
follow detailed instructions in addition to literacy, basic numeracy, and the
ability to absorb training. Language becomes one of the key criteria for the
selection of an enumerator and enumerators should be fluent in the local
language(s) native to respondents. In specific cases, an additional
language may come in handy, more so in the case of multi-lingual surveys.
The training should have a proper mix of classroom sessions and field
practice. Classroom training should focus on training the field team on the
structured interview schedule and related fundamental concepts. Besides
lectures, the training should also include mock exercises and role play
demonstration. Field practice sessions should be scheduled in non-sampled
areas, once enumerators are trained adequately on research instruments.
Debriefing sessions to reiterate and assess whether investigators have
understood the questions correctly should be scheduled thereafter.
The training agenda should have sufficient time for classroom training,
simulation and mock exercise, field training and then feedback. The
training methods selected should include active learning opportunities for
the trainee enumerators whenever possible. The training method should
provide enumerators with an opportunity to practice, apply, or respond. In
case PDAs are being used for conducting the surveys, technical training is
also important and sufficient time should be given to train enumerators on
the usage of PDAs.
Testing and Scoring

One way of ensuring the quality of the field team, and that they are ready
to go to the field for data collection, is to rate and score them. One can
use a mock survey exercise where one
enumerator chooses another enumerator as a respondent and completes a
mock questionnaire. Each investigator can be scored based on their
performance.
The team size, i.e., the number of enumerators required, depends on the
productivity of the enumerators, which in turn depends on the length and
complexity of the research instruments, which determine the time required
to conduct the survey. In addition to this, geographical location or
terrain plays a part in determining the team size. Thus, the number of
enumerators required may vary from survey to survey.
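Under stated assumptions about productivity and the fieldwork window (the figures below are hypothetical), the sizing arithmetic is straightforward:

```python
import math

sample_size = 1200        # interviews required (hypothetical)
interviews_per_day = 5    # per enumerator per day, after piloting
field_days = 20           # length of the data-collection window

# Round up: a fractional enumerator still requires a whole person
enumerators = math.ceil(sample_size / (interviews_per_day * field_days))
```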
The dry run is another activity which is the key to ensure data quality.
During the dry run, the field team is informed that their field work has
started, though in reality, it is like a practice session. The dry run helps the
research team to understand how the field team will perform in real world
situations. The research team and field managers can take stock of the
situation based on the dry run.
192 Chapter Nine
Enumerators have a central role in the study since they are the ones who
collect information from respondents. Thus, the success of the survey
depends on the quality of each interviewer’s work. Successful
interviewing is an art that should not be treated as a mechanical process.
The skill of interviewing develops with practice but there are certain basic
principles which should be followed by every successful interviewer. One
of the key tasks of an enumerator is building rapport with the respondent.
The enumerator should keep some of these points in mind while
conducting an interview:
Make a good first impression: When approaching the respondent for the
first time, the investigator should do their best to make the respondent feel
at ease. With a few well-chosen words, the respondent can be put in the
right frame of mind for the interview. The interview should be started with
a smile and salutation such as ‘namaste’ followed by the introduction of
the investigator.
During the fieldwork, the supervisor should assign work to the interviewers
after considering the competence of individual interviewers and ensuring an
equitable distribution of the workload. The HHs should be allocated in such
a way that all the investigators work close to each other. This will help
the supervisor in proper supervision and monitoring and in communicating
any problems to the field coordinator/project director.
• Also make sure that a larger workload is not consistently assigned to a
certain set of investigators. The supervisor should also ensure that each
interviewer has all the required information and materials for completing
the work assignment.
• Make sure that all the selected HHs have been interviewed before the
team leaves that area.
• Maintain a checklist of all completed records for each day, and
carefully monitor all assignments and work completed by each interviewer
and for each work area for completeness and accuracy.
• Make sure that all the mobiles, laptops etc. are charged and in proper
functioning condition in case the survey is being done using PDAs.
• Make sure that the interviewers fully understand the instructions given
to them and that they adhere to the work schedule.
The responsibility of the field manager is to manage the field work,
starting with the recruitment of the investigators and supervisors and
their training, ensuring that data collection is carried out in adherence
with the required quality standards, and finally ensuring that the data is
handed over to the research team. The field manager should concurrently
monitor the data collection, make spot checks and back checks, and look at
the data collected regularly to provide feedback to supervisors and
enumerators on data quality. The field manager should also conduct
debriefing sessions, initially and intermittently, to discuss any issues
which the field team might be facing and to provide feedback to the
investigators based on his or her own and the supervisors' observations.
The field manager and executives, in tandem with the supervisors, should
ensure best practices are followed at the data collection stage. Along with
this, there should be regular visits by the research team. Handholding
support should be provided to ensure data quality through spot checks and
back checks.
Back checks
The field manager and executives should ensure that back checks are done by
them or by the supervisors in all sampled areas wherein the completion rate
is found to be low or data quality seems to be a problem. Back checks are a
powerful tool for checking the quality of the data in which the information
for a few respondents is systematically checked. This is done by
conducting a short re-interview with some respondents and matching the
results with what had been collected by the interviewer. Back checks or re-
interviews help reduce the types of problems that affect the accuracy of the
survey data.
To ensure data quality and to minimise logical and data entry errors, basic
checks should be included in the programme or the survey tool. For example,
a response recording the outcome of a delivery cannot be missing, nor can
it have any value other than the two options of 'live birth' or 'still
birth'. A log of cases with such errors should be generated for each
survey.
Range Checks

Example:

Consistency Checks

Example:

Value     Indicator
Value 1   How many hours did you stay in this facility?
Value 2   After how many hours of reaching the facility did you deliver?
CHECK     Value 1 > Value 2
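Such programmed checks can be sketched as follows. This is a minimal illustration, assuming hypothetical variable names and a hypothetical plausible range for the length of stay; only the 'live birth'/'still birth' options and the Value 1 > Value 2 consistency rule come from the text above.

```python
# Sketch: basic value, range and consistency checks of the kind described
# above, applied to raw survey records. Variable names and the 0-720 hour
# range are illustrative assumptions.

ALLOWED_OUTCOMES = {"live birth", "still birth"}

def validate(record):
    """Return a list of error messages for one survey record."""
    errors = []
    # Value check: outcome must be present and one of the allowed options
    if record.get("birth_outcome") not in ALLOWED_OUTCOMES:
        errors.append("birth_outcome missing or invalid")
    # Range check: hours stayed in the facility must be plausible
    stay = record.get("hours_in_facility")
    if stay is None or not (0 <= stay <= 720):
        errors.append("hours_in_facility out of range")
    # Consistency check: total stay must exceed hours to delivery
    to_delivery = record.get("hours_to_delivery")
    if stay is not None and to_delivery is not None and not (stay > to_delivery):
        errors.append("hours_in_facility must exceed hours_to_delivery")
    return errors

records = [
    {"birth_outcome": "live birth", "hours_in_facility": 48, "hours_to_delivery": 6},
    {"birth_outcome": "born", "hours_in_facility": 12, "hours_to_delivery": 20},
]
# Log of cases with errors, as recommended above
error_log = {i: validate(r) for i, r in enumerate(records) if validate(r)}
print(error_log)
```

In a CAPI tool these checks would typically fire at data entry; run afterwards, they generate the per-survey error log described above.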
Data Cleaning

After the completion of data collection, when the field team hands over the
raw data, the research team carries out data cleaning. Data cleaning
consists of checking the data thoroughly to verify that it contains all the
variables and the required number of cases. Open-ended questions should
be post-coded as part of data cleaning, and it is also necessary to ensure
that the data is properly labelled. The data analyst looks for areas of data
inconsistency and clarifies them with the field team if required.
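The structural part of data cleaning can be sketched as below: a minimal illustration, assuming a hypothetical variable list and expected case count, of checking that all variables and the required number of cases are present.

```python
# Sketch: structural checks done during data cleaning -- are all expected
# variables present in each case, and has the required number of cases been
# received? Variable names and the expected count are illustrative.

EXPECTED_VARS = {"resp_id", "district", "hh_size", "birth_outcome"}
EXPECTED_CASES = 3

def clean_report(dataset):
    """Summarise case-count shortfalls and missing variables."""
    issues = []
    if len(dataset) < EXPECTED_CASES:
        issues.append(f"only {len(dataset)} of {EXPECTED_CASES} cases received")
    for i, row in enumerate(dataset):
        missing = EXPECTED_VARS - row.keys()
        if missing:
            issues.append(f"case {i}: missing {sorted(missing)}")
    return issues

data = [
    {"resp_id": "R001", "district": "A", "hh_size": 5, "birth_outcome": "live birth"},
    {"resp_id": "R002", "district": "B", "hh_size": 3},  # birth_outcome absent
]
for issue in clean_report(data):
    print(issue)
```

The resulting list of issues is what the data analyst would take back to the field team for clarification.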
The core team members, under the guidance of the project coordinator and
the data analyst, prepare the analysis or tabulation plan. The required
tables are generated using the latest versions of analysis software like
SPSS or STATA.
The choice and nature of data analysis depend on several factors, such as
the type and nature of the variable and the mode of analysis performed.
Evaluators can choose either SPSS 17.0 or STATA 10.0 for analysis. SPSS
is the most popular quantitative analysis software used today in social
research. SPSS 17.0 can be used for basic analysis and for generating
tabulated reports, descriptive statistics and complex statistical analyses;
the SPSS package also provides an easy-to-use interface for analytics.
STATA is another strong statistical package which provides robust modules
for evaluation methodologies such as difference-in-differences (DID),
regression discontinuity (RD) and instrumental variables (IV). Nowadays,
evaluators are also using R for its strong analytical power. Data analysis is
usually categorised into two broad categories:
Executing Monitoring Evaluation and Learning 201
Basic Analysis
Advanced Analysis
x Significance testing
x Calculating correlation coefficient
x Analysing internal consistency reliability of the scale using Alpha
Coefficient
x Inequality Analysis
x Multivariate Analysis
[Figure: evaluation evidence should be high quality, timely and used]
x Policy learning
x Organisational learning
x Process learning
It is important that learnings related to processes are also derived from
the programme evaluation. Key areas of non-compliance and bottlenecks
need to be identified and communicated to the programme team as part of
process learning.
x Knowledge Development
Programmes often do not have sufficient knowledge about the lives of the
programme recipients or beneficiaries they are trying to impact.
Evaluations provide an opportunity to build such knowledge. Evaluations
also provide opportunities to learn about the barriers that need to be
overcome. This knowledge can be fed back into the system with the aim of
improving programme design and delivery.
x Programme Impacts
One key goal is to develop clear structures within the MLE team to
support learning and feedback. This implies:
x Fit with the latest ‘political zeitgeist’ (the defining spirit or mood
of a period of history as shown by the ideas and beliefs of the time).
For example, the political zeitgeist may be conducive to
implementing health equity policies. Working closely with policy
makers and other policy and programme stakeholders can give
evaluators an avenue by which to understand the policy landscape
and, in some cases, to influence it by broadening policy horizons
and enhancing policy capacities (Carden, Knowledge to Policy:
Making the Most of Development Research, 2009).
x Raise the salience of the key policy considerations emerging from
the evaluations to those with the power to decide (Mark, 2004)
x Be simple enough to be communicable
x Be simple enough for those delivering to be able to apply it
x Be cogent enough to persuade the various agents in the delivery
chain that it is worth doing
x Be cogent enough to win over the required leadership - from the
start, and from the top, to the end
x Be embedded within the intervention teams, e.g., explore
perspectives with action research approaches that are helpful but
not captured by the intervention team - with people who can remind
the intervention team about the gap between their measures and the
intended outcome. Such embedded evaluators can mediate and present
the experience of others in similar contexts and advise on what is
likely to work
x Educate policy makers and practitioners that an evaluation is not
simply an instrument to determine if a programme is working
In addition to the above, meetings to share the learnings with the funders
should continue to take place as and when required.
To share the project learnings with the outside world, i.e., the policy
makers and other evaluators, the project should generate working papers
or methodological notes for presentation in local and international
conferences as well as in peer reviewed journals. These documents are
aimed primarily at policy makers/evaluators and highlight the new
approaches attempted in the project.
x Title Page
x Executive Summary
x Intended Use and Users
x Programme Description
x Evaluation Focus, Scope and Design
x Results and Conclusion
x Programmatic Implication
x Learning
x Annexure
Intended Use and Users: In line with the spirit of utilisation-focused
evaluation, the evaluation report clearly specifies its intended use and
intended users, for transparency and better use of evaluation findings.
This helps build demand for evaluation use and also helps the reader or
user interpret the findings better.
Evaluation Focus, Scope and Design: This section describes the evaluation
indicators, performance measures, data sources and methods used in the
evaluation.
Results and Conclusion: This section provides the findings derived from
the evaluation study. It also aims to provide a meaningful interpretation
of the data, going beyond mere presentation.
Real World Evaluation. (2016, July 2). Retrieved from Real World
Evaluation: http://www.realworldevaluation.org/Home_Page.html
Republic of Serbia. (2011). Guide to Logical Framework Approach: A Key
    Tool to Project Cycle Management.
Blundell, R., & Costa Dias, M. (2000). Evaluation Methods for Non-
    Experimental Data. Fiscal Studies, 427-468.
Rodrik, D. (2008, October). The New Development Economics: We Shall
    Experiment, but How Shall We Learn? Faculty Research Working
    Papers Series, John F. Kennedy School of Government, Harvard
    University.
Sanderson, I. (2003). Is it ‘What Works’ that Matters? Evaluation and
    Evidence-Based Policy Making. Research Papers in Education,
    329-343.
Sielbeck-Bowen, S. B. (2002). Exploring feminist evaluation: The ground
    from which we rise. New Directions for Evaluation.
SMART, Characteristics of Good Objectives. (2016, 01 04). Retrieved
    from Community Empowerment Collective:
    http://cec.vcn.bc.ca/cmp/modules/pd-smar.htm
Sridharan, S. G. (2008). Evaluation of Primary Prevention Intervention:
    Have a Heart Paisley Phase 2. Glasgow: NHS Health Scotland.
Pawson, R., & Tilley, N. (2004). Realist Evaluation. Sage Publications.
UNAIDS. (2010). An Introduction to Indicators.
UNDP Evaluation Office. (2002). Handbook on Monitoring and
Evaluation for Results.
UNICEF Evaluation Office. (2006). How to design and manage Equity-
focused evaluations. UNICEF.
USAID Centre for Development Information and Evaluation. (1996).
Preparing a Performance Monitoring Plan.