
Topic 6

TYPES OF MONITORING


 Results monitoring tracks effects and impacts. This is where monitoring merges with
evaluation to determine if the project/programme is on target towards its intended results
(outputs, outcomes, impact) and whether there may be any unintended impact (positive or
negative). For example, a psychosocial project may monitor that its community activities
achieve the outputs that contribute to community resilience and ability to recover from a
disaster.
 Implementation monitoring is carried out during the roll-out of the project plan. It involves
basic tracking and reporting of information about a programme's inputs, such as funding and
education materials required for distribution, to check whether planned inputs are used for their
intended purpose. Inputs are the human resources, funds and equipment needed to carry out
project activities. Implementation monitoring is usually done annually to determine whether
planned projects and activities are effective and completed on time.
 Outcome monitoring tracks results after the use of outputs, to establish whether the desired
outcomes have been reached. It entails tracking information about programme clients, such as
their knowledge, attitudes, behaviour, beliefs, skills and access to services. It also gauges the
amount of activity carried out and compliance with the standards set in the project plan.
 Process (activity) monitoring tracks the use of inputs and resources, the progress of
activities and the delivery of outputs. It examines how activities are delivered – the efficiency
in time and resources. It is often conducted in conjunction with compliance monitoring and
feeds into the evaluation of impact. For example, a water and sanitation project may monitor
that targeted households receive septic systems according to schedule.
 Compliance monitoring ensures compliance with donor regulations and expected results,
grant and contract requirements, local governmental regulations and laws, and ethical
standards. For example, a shelter project may monitor that shelters adhere to agreed
national and international safety standards in construction.
 Context (situation) monitoring tracks the setting in which the project/programme operates,
especially as it affects identified risks and assumptions, but also any unexpected
considerations that may arise. It includes the field as well as the larger political, institutional,
funding, and policy context that affect the project/programme. For example, a project in a
conflict-prone area may monitor potential fighting that could not only affect project success
but endanger project staff and volunteers.
 Impact/beneficiary monitoring tracks beneficiary perceptions of a project/programme,
including beneficiary satisfaction or complaints, their participation, treatment, access to
resources and overall experience of change. It is sometimes referred to as beneficiary contact
monitoring. Impact is typically observed towards the end of the project.
 Financial monitoring accounts for costs by input and activity within predefined categories
of expenditure. It is often conducted in conjunction with compliance and process monitoring.
For example, a livelihoods project implementing a series of micro-enterprises may monitor
the money awarded and repaid, and ensure implementation is according to the budget and
time frame (a simple budget-tracking sketch follows this list).
 Organizational monitoring tracks the sustainability, institutional development and capacity
building in the project/programme and with its partners. It is often done in conjunction with
the monitoring processes of the larger, implementing organization. For example, a National
Society’s headquarters may use organizational monitoring to track communication and
collaboration in project implementation among its branches and chapters.
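
To make the financial monitoring example above concrete, here is a minimal Python sketch that
compares actual spending against budget by category and flags overruns. The categories and
figures are entirely hypothetical, invented only to illustrate input-by-category tracking; this
is not drawn from any IFAD or IFRC tool.

    # Hedged sketch: compare actual vs. budgeted expenditure per category.
    # All category names and amounts below are hypothetical.
    budget = {"staff": 50_000, "materials": 20_000, "transport": 8_000}
    actual = {"staff": 48_500, "materials": 23_400, "transport": 7_100}

    for category, planned in budget.items():
        spent = actual.get(category, 0)
        variance = spent - planned          # positive = overspend
        pct = 100 * variance / planned
        flag = "OVER BUDGET" if variance > 0 else "within budget"
        print(f"{category:10s} planned {planned:>7,} spent {spent:>7,} ({pct:+.1f}%) {flag}")

In practice the same comparison feeds compliance monitoring, since budget variances are often
reportable to donors.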

Monitoring best practices


 Monitoring data should be well-focused to specific audiences and uses (only what is
necessary and sufficient).
 Monitoring should be systematic, based upon predetermined indicators and assumptions (see
the indicator-tracking sketch after this list).
 Monitoring should also look for unanticipated changes in the project/programme and its
context, including any changes in project/programme assumptions and risks; this information
should be used to adjust project/programme implementation plans.
 Monitoring needs to be timely, so information can be readily used to inform
project/programme implementation.
 Whenever possible, monitoring should be participatory, involving key stakeholders – this can
not only reduce costs but can build understanding and ownership.
 Monitoring information is not only for project/programme management but should be shared,
when possible, with beneficiaries, donors and any other relevant stakeholders.
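
As a minimal illustration of monitoring against predetermined indicators, the Python sketch
below tracks each indicator's actual value against its target and flags any that fall below a
tolerance threshold so implementation plans can be adjusted. The indicator names, targets and
threshold are invented for illustration.

    # Hedged sketch: flag indicators that are off track against their targets.
    # Indicator names, targets, actuals and the tolerance are hypothetical.
    indicators = [
        {"name": "households reached",   "target": 1200, "actual": 950},
        {"name": "volunteers trained",   "target": 60,   "actual": 61},
        {"name": "latrines constructed", "target": 300,  "actual": 180},
    ]

    TOLERANCE = 0.90  # flag anything below 90% of target

    for ind in indicators:
        progress = ind["actual"] / ind["target"]
        status = "on track" if progress >= TOLERANCE else "OFF TRACK"
        print(f"{ind['name']:22s} {progress:6.1%}  {status}")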

What is Evaluation?
Evaluations involve identifying and reflecting upon the effects of what has been done, and
judging their worth. Their findings allow project/programme managers, beneficiaries, partners,
donors and other project/programme stakeholders to learn from the experience and improve
future interventions.

Types of evaluation according to evaluation timing

1. Formative evaluations occur during project/programme implementation to improve
performance and assess compliance.
2. Summative evaluations occur at the end of project/programme implementation to assess
effectiveness and impact.
3. Midterm evaluations are formative in purpose and occur midway through implementation.
For secretariat-funded projects/programmes that run for longer than 24 months, some type
of midterm assessment, evaluation or review is required. Typically, this does not need to
be independent or external, but may be, according to specific assessment needs.
4. Final evaluations are summative in purpose and are conducted (often externally) at the
completion of project/programme implementation to assess how well the project/programme
achieved its intended objectives. All secretariat-funded projects/programmes should have
some form of final assessment, whether it is internal or external.

Types of evaluation according to who conducts the evaluation


1. Internal or self-evaluations are conducted by those responsible for implementing a
project/programme. They can be less expensive than external evaluations and help build
staff capacity and ownership. However, they may lack credibility with certain
stakeholders, such as donors, as they are perceived as more subjective (biased or one-
sided). These tend to be focused on learning lessons rather than demonstrating
accountability.
2. External or independent evaluations are conducted by evaluator(s) outside of the
implementing team, lending them a degree of objectivity and often technical expertise. These
tend to focus on accountability.
Types of evaluation according to technicality or methodology
Real-time evaluations (RTEs) are undertaken during project/ programme implementation to
provide immediate feedback for modifications to improve ongoing implementation. Emphasis is
on immediate lesson learning over impact evaluation or accountability. RTEs are particularly
useful during emergency operations, and are required in the first three months of secretariat
emergency operations that meet any of the following criteria: more than nine months in length;
plan to reach 100,000 people or more; the emergency appeal is greater than 10,000,000 Swiss
francs; more than ten National Societies are operational with staff in the field.
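
The secretariat criteria above amount to a simple decision rule: an RTE is required if any one
criterion is met. Below is a hedged Python sketch of that rule; the function and parameter
names are my own shorthand, not official IFRC terminology.

    # Hedged sketch of the "any one criterion triggers an RTE" rule above.
    # Function and parameter names are illustrative, not official terms.
    def rte_required(months: int, people_reached: int,
                     appeal_chf: float, operational_societies: int) -> bool:
        return (months > 9
                or people_reached >= 100_000
                or appeal_chf > 10_000_000
                or operational_societies > 10)

    # Example: a 12-month operation reaching 40,000 people still triggers an RTE
    print(rte_required(months=12, people_reached=40_000,
                       appeal_chf=4_000_000, operational_societies=3))  # True
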
Meta-evaluations are used to assess the evaluation process itself. Some key uses of meta-
evaluations include: taking inventory of evaluations to inform the selection of future
evaluations; combining evaluation results; checking compliance with evaluation policy and good
practice; and assessing how well evaluations are disseminated and utilized for organizational
learning and change.

TYPES OF EVALUATION

Introduction to Evaluation

Evaluation is a methodological area that is closely related to, but distinguishable from more
traditional social research. Evaluation utilizes many of the same methodologies used in
traditional social research, but because evaluation takes place within a political and
organizational context, it requires group skills, management ability, political dexterity,
sensitivity to multiple stakeholders and other skills that social research in general does not
rely on as much. Here we introduce the idea of evaluation and some of the major terms and
issues in the field.

Definitions of Evaluation

 Evaluation is the systematic assessment of the worth or merit of some object.
 Evaluation is the systematic acquisition and assessment of information to provide useful
feedback about some object.
Both definitions agree that evaluation is a systematic endeavour and both use the
deliberately ambiguous term ‘object’ which could refer to a program, policy, technology,
person, need, activity, and so on.

The latter definition emphasizes acquiring and assessing information rather than assessing
worth or merit because all evaluation work involves collecting and sifting through data,
making judgements about the validity of the information and of inferences we derive from
it, whether or not an assessment of worth or merit results.

The Goals of Evaluation

The generic goal of most evaluations is to provide “useful feedback” to a variety of
audiences including sponsors, donors, client-groups, administrators, staff, and other relevant
constituencies. Most often, feedback is perceived as “useful” if it aids in decision-making.
But the relationship between an evaluation and its impact is not a simple one – studies that
seem critical sometimes fail to influence short-term decisions, and studies that initially seem
to have no influence can have a delayed impact when more congenial conditions arise.
Despite this, there is broad consensus that the major goal of evaluation should be to
influence decision-making or policy formulation through the provision of empirically-driven
feedback.

Types of Evaluation

There are many different types of evaluations depending on the object being evaluated and
the purpose of the evaluation. Perhaps the most important basic distinction in evaluation
types is that between formative and summative evaluation. Formative evaluations
strengthen or improve the object being evaluated – they help form it by examining the
delivery of the program or technology, the quality of its implementation, and the assessment
of the organizational context, personnel, procedures, inputs, and so on. Summative
evaluations, in contrast, examine the effects or outcomes of some object – they summarize it
by describing what happens subsequent to delivery of the program or technology; assessing
whether the object can be said to have caused the outcome; determining the overall impact
of the causal factor beyond only the immediate target outcomes; and, estimating the relative
costs associated with the object.

Formative Evaluation

Formative evaluation is typically conducted during the program period to improve or assess the
program delivery or implementation. With a formative evaluation, you have the opportunity to
apply your learnings as you go. The term formative evaluation can also refer to evaluation
activities involving the design or development of a program.
Figure 1 shows other terms which may be used to describe formative evaluation.

Formative evaluation includes several evaluation types:

 Needs assessment determines who needs the program, how great the need is, and
what might work to meet the need
 Evaluability assessment determines whether an evaluation is feasible and how
stakeholders can help shape its usefulness
 Structured conceptualization helps stakeholders define the program or technology,
the target population, and the possible outcomes
 Implementation evaluation monitors the fidelity of the program or technology
delivery
 Process evaluation investigates the process of delivering the program or technology,
including alternative delivery procedures

Summative Evaluation
Summative evaluation typically occurs at the end of a program, with a retrospective and
holistic scope that assesses all program aspects, including delivery, activities, impacts and
outcomes. This form of evaluation helps make judgements about the program’s overall success in
terms of its:

 Effectiveness – were the intended outcomes achieved?
 Efficiency – were resources used cost-effectively?
 Appropriateness – was the program a suitable way to address the needs of the group
targeted by the program?
Figure 2 shows other terms which may be used to describe summative evaluation.
Summative evaluation can also be subdivided:

 Outcome evaluations investigate whether the program or technology caused demonstrable
effects on specifically defined target outcomes
 Impact evaluation is broader and assesses the overall or net effects – intended or
unintended – of the program or technology as a whole
 Cost-effectiveness and cost-benefit analysis address questions of efficiency by
standardizing outcomes in terms of their dollar costs and values (a worked sketch follows
this list)
 Secondary analysis reexamines existing data to address new questions or use
methods not previously employed
 Meta-analysis integrates the outcome estimates from multiple studies to arrive at an
overall or summary judgement on an evaluation question
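
To ground the cost-effectiveness idea in arithmetic, the Python sketch below computes a simple
cost-effectiveness ratio (cost per unit of outcome) for two hypothetical programs. All figures
are invented; real analyses would also discount costs over time and define outcome units
carefully.

    # Hedged sketch: cost per unit of outcome for two hypothetical programs.
    programs = {
        "Program A": {"cost": 120_000, "outcome_units": 400},
        "Program B": {"cost": 90_000,  "outcome_units": 250},
    }

    for name, p in programs.items():
        cer = p["cost"] / p["outcome_units"]  # cost-effectiveness ratio
        print(f"{name}: {cer:,.2f} per outcome unit")
    # Program A costs 300.00 per unit and Program B 360.00, so A is the
    # more cost-effective option under these invented numbers.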

Which type do you need to consider?

This depends entirely on the objectives of your evaluation: are you looking to make a quick,
short-term improvement in a particular area, or a longer-term, broader impact? Many program
managers find a combination of both works best for them.

In formative research the major questions and methodologies are:

What is the definition and scope of the problem or issue, or what’s the question?

Formulating and conceptualizing methods might be used, including brainstorming, focus groups,
nominal group techniques, Delphi methods, brainwriting, stakeholder analysis, synectics,
lateral thinking, input-output analysis, and concept mapping.

Where is the problem and how big or serious is it?

The most common method used here is “needs assessment” which can include: analysis of
existing data sources, and the use of sample surveys, interviews of constituent populations,
qualitative research, expert testimony, and focus groups.
How should the program or technology be delivered to address the problem?

Some of the methods already listed apply here, as do detailing methodologies like simulation
techniques; multivariate methods like multi-attribute utility theory or exploratory causal
modelling; decision-making methods; and project planning and implementation methods like flow
charting, PERT/CPM, and project scheduling.

How well is the program or technology delivered?

Qualitative and quantitative monitoring techniques, the use of management information systems,
and implementation assessment would be appropriate methodologies here.

The questions and methods addressed under summative evaluation include:

What type of evaluation is feasible?

Evaluability assessment can be used here, as well as standard approaches for selecting an
appropriate evaluation design.

What was the effectiveness of the program or technology?

One would choose from observational and correlational methods for demonstrating whether
desired effects occurred, and quasi-experimental and experimental designs for determining
whether observed effects can reasonably be attributed to the intervention and not to other
sources.
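
As one concrete quasi-experimental approach to attribution, a difference-in-differences
comparison nets out changes that would have happened anyway. The toy numbers below are invented
purely to show the arithmetic, not taken from any study.

    # Hedged sketch: difference-in-differences estimate of programme effect.
    # All outcome values are hypothetical (e.g. average household income).
    treat_before, treat_after = 100.0, 130.0   # programme participants
    ctrl_before,  ctrl_after  = 100.0, 110.0   # comparison group

    # Participants' change minus the change the comparison group experienced
    # anyway isolates the effect attributable to the programme.
    effect = (treat_after - treat_before) - (ctrl_after - ctrl_before)
    print(effect)  # 20.0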

What is the net impact of the program?

Econometric methods for assessing cost effectiveness and cost/benefits would apply here,
along with qualitative methods that enable us to summarize the full range of intended and
unintended impacts.
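
As a minimal illustration of the cost/benefit side, the sketch below computes a net present
value by discounting hypothetical yearly benefits and costs; the cash flows and discount rate
are invented for illustration only.

    # Hedged sketch: net present value of programme benefits minus costs.
    # Cash flows (year 0, 1, 2) and the discount rate are hypothetical.
    costs    = [50_000, 10_000, 10_000]
    benefits = [0,      40_000, 45_000]
    r = 0.05  # annual discount rate

    npv = sum((b - c) / (1 + r) ** t
              for t, (c, b) in enumerate(zip(costs, benefits)))
    print(f"NPV: {npv:,.2f}")  # positive NPV means benefits outweigh costs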

Clearly, this introduction is not meant to be exhaustive. Each of these methods, and the many
not mentioned, is supported by an extensive methodological research literature. This is a
formidable set of tools. But the need to improve, update and adapt these methods to changing
circumstances means that methodological research and development needs to have a major place
in evaluation work.

Topic Resources
IFAD (2002). Managing Impact for Rural Development: A Guide for Project Monitoring and Evaluation.
Nuguti, E. O. (2009). Understanding Project Monitoring and Evaluation. Nairobi: Ekon Publishing.
Best, J. W. Research in Education.
https://conjointly.com/kb/sampling-terminology/
https://www.grosvenor.com.au/resources/types-of-program-evaluation/
