Impact Evaluation
What is different about an impact evaluation?
Impact means long-term effects, positive and negative, intended and unintended.
A good aid evaluation, according to the DAC, focuses on five areas including impact:

> relevance (were the objectives right?)
> effectiveness (how well were the objectives achieved?)
> efficiency (was it value for money?)
> impact
> sustainability (will benefits, particularly in systems or institutions, be sustained?)
However, a traditional evaluation will generally focus on objectives (eg designing the objectives and turning inputs into outputs and outcomes) and give only cursory attention to impacts. Higher-order outcomes and impacts are difficult to measure without substantial effort, and hence they are only rarely measured.
What makes an impact evaluation different is the emphasis and priority that it gives to establishing the impact of the initiative, and the rigorous social-science methods that it uses to do so. An impact evaluation would often consider all the traditional evaluation criteria, but place more weight on credible examination of impact. For example, the World Bank requires a rigorous quantitative specification of a counterfactual situation (ie a description of what would have happened if, hypothetically, there had been no intervention) for an evaluation to be described as an impact evaluation. A review in 2002 found that only about a quarter of their evaluations met this criterion.1
Qualitative techniques can facilitate a wider range of explanations; be better at identifying unintended impacts; sometimes avoid repeating conceptual mistakes made by designers at the beginning of programs; and sometimes be applied with moderate success even when data collection during the life of the initiative was poor. They can be useful for exploratory work to be confirmed later by quantitative studies; or for exploring findings from quantitative work in more depth.
Qualitative techniques can also sometimes be particularly suited to programs working closely with a relatively small number of people, such as a village-level community development program or an organisational capacity-building project. An example qualitative technique is the Most Significant Changes approach, where the evaluator begins the research by asking stakeholders 'what have been the most significant changes over this period?', and only then relates those changes to the intervention.
AusAID is just now beginning to use more quantitative techniques in evaluating aid impacts. Quantitative techniques allow a precise description of a hypothetical counterfactual situation (an answer to the question 'what would have happened if there were no intervention?'), which can then be compared to the real-life result of the program. A high degree of specialist statistical expertise is required to avoid pitfalls.
Constructing a convincing quantitative counterfactual requires a sound model of reality and good data collection. Ideally, as in a drug trial, the intervention is applied to randomly chosen subjects and the results compared to comparable control cases representing the no-intervention population. AusAID will be attempting an impact evaluation along these lines (requiring the evaluation to be built into the design from the beginning of the initiative) on a water supply and sanitation initiative in coming years.
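The randomised design described above can be sketched in a few lines. This is an illustrative simulation only: the outcome measure, the sample sizes, and the "true" impact of 5 points are all invented for the example, not drawn from any AusAID program.

```python
import random
import statistics

random.seed(42)

# Hypothetical setup: 200 villages, half randomly assigned the intervention.
villages = list(range(200))
random.shuffle(villages)
treated = set(villages[:100])

def outcome(village: int) -> float:
    """Simulated outcome score (eg household water access)."""
    baseline = random.gauss(50, 10)              # variation unrelated to the program
    effect = 5.0 if village in treated else 0.0  # assumed true impact, for illustration
    return baseline + effect

results = {v: outcome(v) for v in villages}
treated_mean = statistics.mean(results[v] for v in treated)
control_mean = statistics.mean(results[v] for v in villages if v not in treated)

# Because assignment was random, the control group stands in for the
# counterfactual, and the difference in means estimates the impact.
estimated_impact = treated_mean - control_mean
print(round(estimated_impact, 1))
```

The point of the randomisation is in the last step: no modelling of external variables is needed, because on average the two groups differ only in receiving the intervention.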
If randomisation is not possible for ethical or practical reasons, statistical techniques (such as those used by epidemiologists or econometricians) can be used to simulate a counterfactual. That is, a statistical model and analysis will identify and allow for the impacts of various external variables, in order to isolate and reach conclusions about the impact of the aid initiative. Although this avoids the problem of the evaluator making design decisions within the aid initiative (as has to happen when randomisation is used), it is still often necessary for this evaluation approach to be built in from the beginning, particularly to allow data collection. For example, data will often need to be collected in geographical areas, or on variables, that are not directly needed for managing the initiative. It is important that the evaluator be involved as early as possible in the initiative.
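One common econometric technique of this kind, not named in the text but shown here as an illustration, is difference-in-differences: the change over time in the treated area is compared with the change in an untreated comparison area, so that shared external trends cancel out. The survey averages below are invented figures, and the example also shows why baseline data must be collected before the initiative starts.

```python
# Hypothetical survey averages of an outcome score, before and after
# the initiative (illustrative numbers only).
treated_before, treated_after = 48.0, 58.0  # area receiving the intervention
control_before, control_after = 50.0, 53.0  # comparison area, no intervention

# The comparison area's change (+3) proxies for what would have happened
# anyway; the remainder of the treated area's change is attributed to
# the initiative.
estimated_impact = (treated_after - treated_before) - (control_after - control_before)
print(estimated_impact)  # 7.0
```

The estimate is only credible if the two areas would otherwise have followed similar trends, which is exactly why the data collection has to be designed in from the beginning rather than reconstructed afterwards.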
Collecting this extra information to identify impact is, however, necessary for higher-level management decisions such as scaling up an initiative, replicating it in other countries, or designing substantial changes of approach within an ongoing initiative.
Australia's Prime Minister has made it clear that the aid program will need to demonstrate impact to inform the highest level of decision making in the Australian system: the budget process. It would be disproportionate for AusAID to fund more than a handful of rigorous impact evaluations. However, a trickle of such evaluations is essential to inform AusAID's country-level and overall performance reporting.
Information collected for an impact evaluation can sometimes also be of use for management decisions outside the area of the original initiative, for example in planning or policy decisions elsewhere in the same sector. An impact evaluation could even be the impetus for encouraging better systematic data collection by partners.
Peter Ellis