IE Intro


Introduction to Impact Evaluation
Part I: Motivation

Emmanuel Skoufias
The World Bank
PREM KL Forum
May 3-4, 2010
Outline of presentation

1. Why Evaluate?
2. Evaluation vs. Monitoring
3. Necessary ingredients of a good Impact Evaluation
1. Why Evaluate?

Why conduct an Impact Evaluation?

- Knowledge & learning
  - Improve the design and effectiveness of the program
- Economic reasons
  - To make resource allocation decisions: comparing program impacts allows governments to reallocate funds from less to more effective programs, and thus to increase social welfare
- Social reasons
  - Increases transparency & accountability
  - Supports public sector reform/innovation
- Political reasons
  - Credibility/break with "bad" practices of the past
Why Evaluate?

- Need evidence on what works
  - Limited budgets force choices
  - Bad policies could hurt
- Improve program/policy implementation
  - Design: eligibility, benefits
  - Operations: efficiency & targeting
  - Management tool to improve operations
- Information is key to sustainability
  - Budget negotiations
  - Informing public opinion and the press
  - Results agenda & aid effectiveness
Allocate limited resources

- Benefit-cost analysis
  - Comparison of choices
  - Highest-return investment
- Benefit:
  - Change in outcome indicators
  - Measured through impact evaluation
- Cost:
  - Additional cost of providing the benefit
  - Economic versus accounting costs
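The allocation logic above can be sketched in a few lines. This is a minimal illustration, not part of the presentation: the program names and all benefit and cost figures are invented, with benefits standing in for impact-evaluation estimates of the change in outcomes expressed in monetary terms.

```python
# Minimal sketch: ranking two hypothetical programs by benefit-cost ratio.
# All figures are invented for illustration.

def benefit_cost_ratio(benefit, cost):
    # benefit: estimated change in outcomes (from an impact evaluation),
    #          valued in monetary terms
    # cost:    additional economic (not just accounting) cost of the program
    return benefit / cost

programs = {
    "A": {"benefit": 120_000, "cost": 40_000},
    "B": {"benefit": 90_000, "cost": 50_000},
}

ratios = {name: benefit_cost_ratio(p["benefit"], p["cost"])
          for name, p in programs.items()}
best = max(ratios, key=ratios.get)

print(ratios)  # {'A': 3.0, 'B': 1.8}
print(best)    # 'A' -- reallocate funds toward the higher-return program
```

The comparison only makes sense if the benefit figures come from credible impact estimates, which is exactly what the rest of the presentation is about.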
What kinds of questions does IE answer?

- What is the effect of the program on outcomes?
- How much better off are beneficiaries because of the intervention?
- How would outcomes change under alternative program designs?
- Does the program impact people differently (e.g. females, the poor, minorities)?
- Is the program cost-effective?

Traditional M&E cannot answer these questions.
For example, IE answers:

- What is the effect of scholarships on school attendance & performance (test scores)?
- Does contracting out primary health care lead to an increase in access?
- Does replacing dirt floors with cement reduce parasites & improve child health?
- Do improved roads increase access to labor markets & raise incomes?
Summary

An impact evaluation informs:

- Whether we are doing the right things (strategy)
  - Rationale/justification
  - Clear theory of change
- Whether we are doing things right (operations)
  - Effectiveness in achieving expected outcomes
  - Efficiency in optimizing resources
  - Client satisfaction
- Whether there are better ways of doing it (learning)
  - Alternatives
  - Best practices
  - Lessons learned
2. Evaluation vs. Monitoring

Definitions

(Results-based) Monitoring: a continuous process of collecting and analyzing information to compare how well a project, program, or policy is performing against expected results.

(Results-based) Evaluation: an assessment of a planned, ongoing, or completed intervention to determine its relevance, efficiency, effectiveness, impact, and sustainability. The intent is to incorporate lessons learned into the decision-making process.
Monitoring and Evaluation

- IMPACT: effect on living standards
  - infant and child mortality
  - prevalence of specific diseases
- OUTCOMES: access, usage, and satisfaction of users
  - number of children vaccinated
  - percentage within 5 km of a health center
- OUTPUTS: goods and services generated
  - number of nurses
  - availability of medicine
- INPUTS: financial and physical resources
  - spending on primary health care
Evaluation Addresses:

- "Why" questions: what caused the changes we are monitoring?
- "How" questions: what was the sequence or process that led to successful (or unsuccessful) outcomes?
- Compliance/accountability questions: did the promised activities actually take place, and as they were planned?
- Process/implementation questions: was the implementation process followed as anticipated, and with what consequences?
Six Types of Evaluation

1. Performance logic chain assessment
2. Pre-implementation assessment
3. Process implementation evaluation
4. Case study
5. Impact evaluation
6. Meta-evaluation

Complementary Roles of Results-Based Monitoring and Evaluation

Monitoring:
- Clarifies program objectives
- Links activities and their resources to objectives
- Translates objectives into performance indicators and sets targets
- Routinely collects data on these indicators and compares actual results with targets
- Reports progress to managers and alerts them to problems

Evaluation:
- Analyzes why intended results were or were not achieved
- Assesses specific causal contributions of activities to results (= impact evaluation)
- Examines the implementation process (= operations evaluation)
- Explores unintended results (= spillover effects)
- Provides lessons, highlights significant accomplishments or program potential, and offers recommendations for improvement
Summary--1

- Results-based monitoring and evaluation are generally viewed as distinct but complementary functions.
- Each provides a different type of performance information.
- Both are needed to better manage policy, program, and project implementation.
Summary--2

- Implementing results-based monitoring and evaluation systems can strengthen World Bank and public sector management.
- Implementing such systems requires commitment by leadership and staff alike.
3. Necessary ingredients of a good Impact Evaluation:
A good counterfactual & robustness checks

What we need for an IE

- The difference in outcomes with the program versus without the program, for the same unit of analysis (e.g. individual, community, etc.)
- Problem: individuals have only one existence
- Hence, we have a missing counterfactual: a problem of missing data
Thinking about the counterfactual

- Why not compare individuals before and after (the reflexive comparison)?
  - Because the rest of the world moves on, and you cannot tell what was caused by the program and what by the rest of the world.
- We need a control/comparison group that allows us to attribute any change in the "treatment" group to the program (causality).
[Figure: a sequence of graphs of an outcome indicator over time. At t = 0 we observe Y0. After the intervention, its value rises to the observed Y1 at t = 1. The "ideal" counterfactual Y1* is the value the outcome would have taken at t = 1 without the program. Having the ideal counterfactual allows us to estimate the true impact:

Impact = Y1 - Y1*]

Comparison Group Issues

Two central problems:
- Programs are targeted
  - Program areas will differ in observable and unobservable ways, precisely because the program intended this.
- Individual participation is (usually) voluntary
  - Participants will differ from non-participants in observable and unobservable ways (selection based on observable variables such as age and education, and on unobservable variables such as ability, motivation, and drive).

Hence, a comparison of participants and an arbitrary group of non-participants can lead to heavily biased results.
Impact Evaluation methods

The methods differ in how they construct the counterfactual:
- Experimental methods / randomization
- Quasi-experimental methods
  - Propensity score matching (PSM)
  - Regression discontinuity design (RDD)
- Other econometric methods
  - Before and after (reflexive comparisons)
  - Difference-in-differences (dif-in-dif)
  - Instrumental variables
  - Encouragement design
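One of the methods listed, difference-in-differences, can be sketched in a few lines. The group means below are invented for illustration; the sketch assumes the method's identifying condition (parallel trends: absent the program, the treatment group's outcome would have changed by the same amount as the comparison group's).

```python
# Minimal difference-in-differences sketch with invented group means.

# Mean outcomes (hypothetical)
treat_before, treat_after = 40.0, 55.0   # treatment group, t=0 and t=1
comp_before, comp_after = 42.0, 47.0     # comparison group, t=0 and t=1

change_treat = treat_after - treat_before  # 15.0: program + everything else
change_comp = comp_after - comp_before     # 5.0: everything else only

# Under parallel trends, the comparison group's change proxies for the
# counterfactual change in the treatment group.
did_impact = change_treat - change_comp

print(did_impact)  # 10.0
```

Note how this nets out the "rest of the world moves on" problem that makes a simple before-after comparison unreliable.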
Thank you
