Monitoring and Evaluation

Monitoring and evaluation (M&E) involves the continuous collection and analysis of data to assess progress towards goals and objectives and determine the effectiveness of projects and programs. Monitoring tracks implementation and short-term outputs, while evaluation provides in-depth analysis of outcomes and impacts. Key benefits of M&E include providing feedback for improvements, identifying problems early, assessing achievement of objectives, and informing future program design. Selecting appropriate indicators is important for M&E. Indicators should be specific, measurable, attainable, relevant and time-bound. Baselines establish starting values and targets set desired levels of improvement. Logical frameworks and other evaluation frameworks can help structure M&E processes.


MONITORING & EVALUATION- LECTURE NOTES

SESSION 1: OVERVIEW OF MONITORING AND EVALUATION

(i) What is Monitoring and Evaluation


• Monitoring and Evaluation is the continual gathering and analysis of information to determine whether progress is being made towards pre-specified goals and objectives, and to highlight any unintended (positive or negative) effects of a project/programme and its activities.

(ii) What is Monitoring?


• Monitoring is a continuous process of collecting, analyzing, documenting and reporting information on progress towards achieving set project objectives. It helps identify trends and patterns, adapt strategies and inform decisions for project or programme management.

(iii) What is Evaluation?


• Evaluation is a periodic assessment, as systematic and objective as possible, of an on-going or completed project, programme or policy, covering its design, implementation and results. It involves gathering, analysing, interpreting and reporting information based on credible data. The aim is to determine the relevance and fulfilment of objectives, developmental efficiency, effectiveness, impact and sustainability.

(iv) Purpose/Importance of Monitoring and Evaluation


Timely and reliable M&E provides information to:
• Support project/programme implementation with accurate, evidence-based reporting that informs management and decision-making to guide and improve project/programme performance.
• Contribute to organizational learning and knowledge sharing by reflecting upon and sharing experiences and lessons.
• Uphold accountability and compliance by demonstrating whether or not work has been carried out as agreed, in compliance with established standards and with any other stakeholder requirements.
• Provide opportunities for stakeholder feedback.
• Promote and celebrate project/programme work by highlighting accomplishments and achievements, building morale and contributing to resource mobilization.
• Support strategic management by providing information to inform the setting and adjustment of objectives and strategies.
• Build the capacity, self-reliance and confidence of stakeholders, especially beneficiaries, implementing staff and partners, to effectively initiate and implement development initiatives.

v) Characteristics of monitoring and evaluation


Monitoring tracks changes in program performance or key outcomes over time. It has the following characteristics:
• Conducted continuously
• Keeps track and maintains oversight
• Documents and analyzes progress against planned program activities
• Focuses on program inputs, activities and outputs
• Looks at processes of program implementation
• Considers program results at output level
• Considers continued relevance of program activities to resolving the health problem
• Reports on program activities that have been implemented
• Reports on immediate results that have been achieved

Evaluation is a systematic approach to attribute changes in specific outcomes to program activities. It has the following characteristics:
• Conducted at important program milestones
• Provides in-depth analysis
• Compares planned with actual achievements
• Looks at processes used to achieve results
• Considers results at outcome level and in relation to cost
• Considers overall relevance of program activities for resolving health problems
• References implemented activities
• Reports on how and why results were achieved
• Contributes to building theories and models for change
• Attributes program inputs and outputs to observed changes in program outcomes and/or impact

(vi) Key benefits of Monitoring and Evaluation


a. Provide regular feedback on project performance and show any need for ‘mid-
course’ corrections
b. Identify problems early and propose solutions
c. Monitor access to project services and outcomes by the target population;
d. Evaluate achievement of project objectives, enabling the tracking of progress
towards achievement of the desired goals
e. Incorporate stakeholder views and promote participation, ownership and
accountability
f. Improve project and programme design through feedback provided from baseline,
mid-term, terminal and ex-post evaluations
g. Inform and influence organizations through analysis of the outcomes and impact
of interventions, and the strengths and weaknesses of their implementation,
enabling development of a knowledge base of the types of interventions that are
successful (i.e. what works, what does not and why).
h. Provide the evidence basis for building consensus between stakeholders

SESSIONS 2 & 3 SELECTING INDICATORS, BASELINES AND TARGETS


a) The indicator: “An indicator is defined as a quantitative measurement of an objective to be
achieved, a resource mobilised, an output accomplished, an effect obtained or a context
variable (economic, social or environmental)”. It specifies the precise information needed to
assess whether intended changes have occurred. Indicators can be either quantitative (numeric)
or qualitative (descriptive observations). Indicators are typically taken directly from the
logframe, but should be checked in the process to ensure they are SMART (specific,
measurable, achievable, relevant and time-bound).
b) The Indicator definition- key terms in the indicator that need further detail for precise and
reliable measurement.
c) The methods/sources- identifies sources of information and data collection methods and
tools, such as the use of secondary data, regular monitoring or periodic evaluation,
baseline or endline surveys, and interviews.
d) The frequency/schedules -how often the data for each indicator will be collected, such
as weekly, monthly, quarterly, annually, etc.
e) The person(s) responsible- lists the people responsible and accountable for the data
collection and analysis, e.g. community volunteers, field staff, project/programme
managers, local partner(s) and external consultants.
f) The information use/audience - identifies the primary use of the information and its
intended audience. Some examples of information use for indicators include:
• Monitoring project/programme implementation for decision-making
• Evaluating impact to justify intervention
• Identifying lessons for organizational learning and knowledge-sharing
• Assessing compliance with donor or legal requirements
• Reporting to senior management, policy-makers or donors for strategic planning
• Accountability to beneficiaries, donors and partners
• Advocacy and resource mobilization.
g) Types of Indicators



• Context indicators measure an economic, social or environmental variable concerning an entire region, sector or group and the project location, as well as relevant national and regional policies and programmes. They describe the situation before the project starts (the baseline data), primarily from official statistics.
• Input indicators measure the human and financial resources, physical facilities, equipment and supplies that enable implementation of a program.
• Process indicators reflect whether a program is being carried out as planned and how well program activities are being carried out.
• Output indicators relate to activities, measured in physical or monetary units; the results of program efforts (inputs and processes/activities) at the program level.
• Outcome indicators measure the program’s level of success in improving service accessibility, utilization or quality.
• Result indicators capture the direct and immediate effects arising from project activities and provide information on changes for the direct project beneficiaries.
• Impact indicators refer to the long-term, cumulative effects of programs over time, beyond the immediate and direct effects on beneficiaries.
• Exogenous indicators cover factors outside the control of the project but which might affect its outcome.
• Proxy indicators provide an indirect way to measure the subject of interest.

h) Characteristics of Good Indicators.


a) Specific – focused and clear
b) Measurable - quantifiable and reflecting change
c) Attainable - reasonable in scope and achievable within set time-frame
d) Relevant - pertinent to the review of performance
e) Time-Bound/Trackable - progress can be charted chronologically
Good indicators should also be CREAM: Clear, Relevant, Economical, Adequate and Monitor-able.

i) Baselines and Targets


• A baseline is qualitative or quantitative information that provides data at the beginning of, or just prior to, the implementation of an intervention.
• Targets are established for each indicator by starting from the baseline level and adding the desired level of improvement in that indicator, as illustrated in the sketch below.
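
The following is a minimal sketch, not part of the original notes, of how an indicator with its baseline and target might be represented and how progress towards the target could be computed. All field names and figures (the immunization example) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """Illustrative indicator record with its baseline and target."""
    name: str        # indicator taken from the logframe
    unit: str        # unit of measurement
    baseline: float  # value at, or just prior to, the start of the intervention
    target: float    # baseline plus the desired level of improvement
    current: float   # latest value from routine monitoring

    def progress(self) -> float:
        """Share of the planned improvement achieved so far."""
        planned_change = self.target - self.baseline
        if planned_change == 0:
            return 1.0
        return (self.current - self.baseline) / planned_change

# Hypothetical example: coverage rising from a 60% baseline towards a 90% target
coverage = Indicator("Children fully immunized", "% of under-fives", 60.0, 90.0, 72.0)
print(f"{coverage.name}: {coverage.progress():.0%} of planned improvement achieved")
# (72 - 60) / (90 - 60) = 40%
```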

SESSION 4: FRAMEWORKS FOR EVALUATION - THE LOGICAL FRAMEWORK APPROACH (LFA)


Four types of frameworks dominate the M&E field:
a) Conceptual frameworks are also known as theoretical or causal frameworks.
b) Results-based frameworks are also known as strategic frameworks and serve as a
management tool with an emphasis on results. The purpose of results frameworks is to
increase focus, select strategies, and allocate resources accordingly.
Impact: The higher-order objective to which a development intervention is intended to contribute.
Outcome: The likely or achieved short-term and medium-term effects of an intervention’s outputs.
Output: The products, capital goods and services which result from a development intervention; may also include changes resulting from the intervention which are relevant to the achievement of outcomes.
Activity: Actions taken or work performed through which inputs, such as funds, technical assistance and other types of resources, are mobilized to produce specific outputs.
Inputs: The financial, human and material resources used for the development intervention.

c) Logical frameworks, also known as logframes, are commonly used to help set clear program objectives and define indicators of success. They also outline the critical assumptions on which a project is based, similar to the results framework.
d) Logic models, also known as M&E frameworks, are commonly used to present a clear plan for the use of resources to meet the desired goals and objectives. They are a useful tool for presenting programmatic and evaluation components.
The choice of a particular type of framework—whether a conceptual framework, results framework, logical framework or logic model—depends on the program’s specific needs, the M&E team’s preferences and donor requirements.
In particular, the LFA is a systematic planning procedure for complete project cycle management and a participatory planning, monitoring and evaluation tool:



• A tool for planning a logical set of interventions;
• A tool for appraising a Programme document;
• A concise summary of the Programme;
• A tool for monitoring progress made with regard to delivery of outputs and activities;
• A tool for evaluating impact of Programme outputs, e.g. progress in achieving purpose and goal.
The logframe matrix has four columns:
• Narrative summary – a snapshot of the different levels of the project objectives, known as the “hierarchy of objectives”.
• Objectively verifiable indicators (OVI) – how will we know we have been successful?
• Means of verification (MOV) – how will we check our reported results?
• Assumptions/Risks – what assumptions underlie the structure of our project, and what is the risk that they will not prevail?
The rows of the narrative summary are:
• Goal (Impact) – longer-term effects; the general or overall objective.
• Purpose – why are we doing this? The direct and immediate effects/objectives (outcomes/results).
• Outputs – what are the deliverables? The goods and services produced (operational objectives).
• Activities – what tasks will we undertake to deliver the outputs? At this level the indicator and verification columns record the Inputs (by what means do we carry out the activities?) and the Cost (what does it cost?).
• Pre-conditions – what needs to be fulfilled before activities can start.

SESSION 5a: MONITORING CRITERIA


a) Project monitoring & control cycle.
To achieve effective control over project implementation, it is necessary to assess progress at regular intervals in terms of physical completion of scheduled activities, actual cost incurred in performing those activities and achievement of desired performance levels, comparing the status with the plans to find deviations. This assessment process is known as ‘monitoring’.

[Figure: the monitoring and control cycle – the PLAN is compared with ACTUAL STATUS; where VARIANCES are found, revised schedules, budgets and estimates to complete feed into an ACTION PLAN.]
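
A minimal sketch of the comparison step in this cycle is shown below; it computes simple schedule and cost variances from planned and actual figures. The figures and the earned-value conventions (negative values are unfavourable) are illustrative assumptions, not prescribed by the notes.

```python
def schedule_variance(earned_value: float, planned_value: float) -> float:
    """SV = EV - PV: negative means the work is behind schedule."""
    return earned_value - planned_value

def cost_variance(earned_value: float, actual_cost: float) -> float:
    """CV = EV - AC: negative means the work is over budget."""
    return earned_value - actual_cost

# Hypothetical status at the end of a reporting period (currency units)
planned_value = 100_000   # budgeted cost of work scheduled to date
earned_value = 80_000     # budgeted cost of work actually performed
actual_cost = 95_000      # actual cost incurred for that work

sv = schedule_variance(earned_value, planned_value)
cv = cost_variance(earned_value, actual_cost)
print(f"Schedule variance: {sv:+,.0f}  Cost variance: {cv:+,.0f}")
if sv < 0 or cv < 0:
    print("Variances found: prepare revised schedules, budgets and an action plan.")
```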

Key elements of project monitoring and control


• Project status reporting
• Conducting a project review with stakeholders
• Controlling schedule variances
• Controlling scope and change requests
• Controlling budget
• Tracking and mitigating risks



b) Types of monitoring
A project/programme usually monitors a variety of things according to its specific informational needs. These monitoring types often occur simultaneously as part of an overall project/programme monitoring system.
Table 1: Common types of monitoring
• Results monitoring: Tracks effects and impacts to determine whether the project/programme is on target towards its intended results (inputs, activity, outputs, outcomes, impact, assumptions/risks monitoring) and whether there may be any unintended impact (positive or negative).
• Process (activity) monitoring: Tracks the use of inputs and resources, the progress of activities, how activities are delivered – the efficiency in time and resources – and the delivery of outputs.
• Compliance monitoring: Ensures compliance with, say, donor regulations and expected results, grant and contract requirements, local governmental regulations and laws, and ethical standards.
• Context (situation) monitoring: Tracks the setting in which the project/programme operates, especially as it affects identified risks and assumptions, and any unexpected considerations that may arise, including the larger political, institutional, funding and policy context that affects the project/programme.
• Beneficiary monitoring: Tracks beneficiary perceptions of a project/programme. It includes beneficiary satisfaction or complaints with the project/programme, including their participation, treatment, access to resources and their overall experience of change.
• Financial monitoring: Accounts for costs by input and activity within predefined categories of expenditure, to ensure implementation is according to the budget and time frame.
• Organizational monitoring: Tracks the sustainability, institutional development and capacity building in the project/programme and with its partners.

c) Monitoring Questions and the LogFrame



SESSION 5 b. EVALUATION CRITERIA FOR PROJECTS
a) Five Part Evaluation Criteria
• Relevance – Was/is the project a good idea given the situation it seeks to improve? Was the logic of the project correct? Why or why not? The validity of the Overall Goal and Project Purpose at the evaluation stage.
• Effectiveness – Have the planned results been achieved? Why or why not? The degree to which the Project Purpose has been achieved by the project Outputs.
• Efficiency – Have resources been used in the best possible way? Why or why not? The productivity in project implementation; the degree to which Inputs have been converted into Outputs.
• Impact – To what extent has the project contributed towards its longer-term goals? Have there been any unanticipated positive or negative consequences of the project, and why did they arise? The positive and negative changes produced, directly or indirectly, as a result of the implementation of the project.
• Sustainability – Can the outcomes be sustained after the project funding ends to ensure continued impacts? Why or why not? The durability of the benefits and development effects produced by the project after its completion.

b) Evaluation Questions and the LogFrame



SESSION 6a. TYPES OF EVALUATION
Three ways of classifying:
 When it is done - Ex-ante evaluation; Formative evaluation; Summative – end of
project, and Ex-Post evaluation.
 Who is doing it - External evaluation; Internal evaluation or self-assessment
 What methodology or technicality is used- Real-time evaluations (RTEs); Meta-
evaluations; Thematic evaluations; Cluster/sector evaluations; Impact evaluations
The details are as follows: -
a) Ex-ante evaluation: Conducted before the implementation of a project as part of the planning. A needs assessment determines who needs the program, how great the need is, and what might work to meet the need. Implementation (feasibility) evaluation monitors the fidelity of the program or technology delivery, and whether or not the program is realistically feasible within the programmatic constraints.
b) Formative evaluation: Conducted during the implementation of the project. Used to
determine the efficiency and effectiveness of the implementation process, to improve
performance and assess compliance. Provides information to improve processes and
learn lessons. Process evaluation investigates the process of delivering the program or
technology, including alternative delivery procedures. Outcome evaluations
investigate whether the program or technology caused demonstrable effects on
specifically defined target outcomes. Cost-effectiveness and cost-benefit analysis
address questions of efficiency by standardizing outcomes in terms of their dollar
costs and values
c) Midterm evaluations are formative in purpose and occur midway through
implementation.
d) Summative evaluation: Conducted at the end of the project to assess state of project
implementation and achievements at the end of the project. Collate lessons on content
and implementation process. Occur at the end of project/programme implementation
to assess effectiveness and impact.
e) Ex-post evaluation: Conducted after the project is completed. Used to assess
sustainability of project effects, impacts. Identifies factors of success to inform other
projects. Conducted sometime after implementation to assess long-term impact and
sustainability.
f) External evaluation: Initiated and controlled by the donor as part of contractual
agreement. Conducted by independent people – who are not involved in
implementation. Often guided by project staff
g) Internal or self-assessment: Internally guided reflective processes. Initiated and
controlled by the group for its own learning and improvement. Sometimes done by
consultants who are outsiders to the project. Need to clarify ownership of information
before the review starts
h) Real-time evaluations (RTEs): are undertaken during project/programme
implementation to provide immediate feedback for modifications to improve on-going
implementation.
i) Meta-evaluations: are used to assess the evaluation process itself. Some key uses of
meta-evaluations include: take inventory of evaluations to inform the selection of
future evaluations; combine evaluation results; check compliance with evaluation
policy and good practices; assess how well evaluations are disseminated and utilized
for organizational learning and change, etc.
j) Thematic evaluations: focus on one theme, such as gender or environment, typically
across a number of projects, programmes or the whole organization.
k) Cluster/sector evaluations: focus on a set of related activities, projects or programmes,
typically across sites and implemented by multiple organizations
l) Impact evaluations: Broader evaluations that assess the overall or net effects – intended or unintended – of the program or technology as a whole. They focus on the effects of a project/programme rather than on its management and delivery. Therefore, they typically occur after project/programme completion, during a final evaluation or an ex-post evaluation. However, impact may also be measured during implementation in longer projects/programmes, when feasible.

SESSION 6b: EVALUATION MODELS AND APPROACHES


• Behavioral Objectives Approach – “Is the program, product, or process achieving its objectives?” Focuses on the degree to which the objectives of a program, product or process have been achieved.
• The Four-Level Model or Kirkpatrick Model – “What impact did the training have on participants in terms of their reactions, learning, behavior, and organizational results?” Often used to evaluate training and development programs; focuses on four levels of training outcomes: reactions, learning, behavior and results.
o Reaction – whether trainees found the training a valuable experience and felt good about the instructor, the topic, the material, its presentation and the venue.
o Learning – how much their knowledge has increased as a result of the training.
o Behavior – whether trainees have changed their behavior, based on the training they received.
o Results – whether the training was good for business, good for the employees, or good for the bottom line.
• Management Models – “What management decisions are required concerning the program?” The evaluator’s job is to provide information to management to help them in
making decisions about programs, products, etc. Daniel Stufflebeam’s CIPP Model has
been very popular. CIPP stands for context evaluation, input evaluation, process
evaluation, and product evaluation. Context evaluation includes examining and
describing the context of the program you are evaluating, conducting a needs and goals
assessment, determining the objectives of the program, and determining whether the
proposed objectives will be sufficiently responsive to the identified needs. It helps in
making program planning decisions. Input evaluation includes activities such as a
description of the program inputs and resources, a comparison of how the program
might perform compared to other programs, a prospective benefit/cost assessment (i.e.,
decide whether you think the benefits will outweigh the costs of the program, before the
program is actually implemented), an evaluation of the proposed design of the program,
and an examination of what alternative strategies and procedures for the program
should be considered and recommended. Process evaluation includes examining how a
program is being implemented, monitoring how the program is performing, auditing the
program to make sure it is following required legal and ethical guidelines, and
identifying defects in the procedural design or in the implementation of the program.
Product evaluation includes determining and examining the general and specific
anticipated and unanticipated outcomes of the program (i.e., which requires using
impact or outcome assessment techniques).
• Responsive Evaluation – “What does the program look like to different people?” Calls for evaluators to be responsive to the information needs of various audiences or stakeholders.
• Goal-Free Evaluation – “What are all the effects of the program, including any side effects?” Focuses on the actual outcomes rather than the intended outcomes of a program. Thus, the evaluator is unaware of the program’s stated goals and objectives.
• Adversary/Judicial Approaches – “What are the arguments for and against the program?” These adopt the legal paradigm for program evaluation: two teams of evaluators representing two views of the program’s effects argue their cases based on the evidence (data) collected. Then a judge or a panel of judges decides which side has made the better case and makes a ruling.
• Consumer-Oriented Approaches – “Would an educated consumer choose this program or product?” Helps consumers to choose among competing programs or products.
• Expertise/Accreditation Approaches – “How would professionals rate this program?” The accreditation model relies on expert opinion to determine the quality of programs. The purpose is to provide professional judgments of quality.
• Utilization-Focused Evaluation – “What are the information needs of stakeholders, and how will they use the findings?” Evaluation done for and with specific, intended primary users for specific, intended uses. Assumes stakeholders will have a high degree of involvement in many, if not all, phases of the evaluation.



• Participatory/Collaborative Evaluation – “What are the information needs of those closest to the program?” Engages stakeholders in the evaluation process, so they may better understand evaluation and the program being evaluated and ultimately use the evaluation findings for decision-making purposes.
• Empowerment Evaluation – “What are the information needs to foster improvement and self-determination?” The use of evaluation concepts, techniques and findings to foster improvement and self-determination; a catalyst for learning in the workplace and a social activity in which evaluation issues are constructed by, and acted on by, organization members.
• Organizational Learning – “What are the information and learning needs of individuals, teams, and the organization in general?” Evaluation is ongoing and integrated into all work practices.
• Theory-Driven Evaluation – “How is the program supposed to work? What are the assumptions underlying the program’s development and implementation?” Focuses on theoretical rather than methodological issues, using the program’s rationale or theory as the basis of an evaluation to understand the program’s development and impact, via a plausible model of how the program is supposed to work.
• Success Case Method – “What is really happening?” Focuses on the practicalities of defining successful outcomes and success cases, and uses some of the processes from theory-driven evaluation to determine the linkages, which may take the form of a logic model, an impact model or a results map. Evaluators using this approach gather stories within the organization to determine what is happening and what is being achieved.

SESSION 7: THE EVALUATION PROCESS


Evaluation operates within multiple domains and serves a variety of functions at the same time. Moreover, it is subject to budget, time and data constraints that may force the evaluator to sacrifice many of the basic principles of impact evaluation design. Before entering into the details of evaluation methods, it is important for the reader to have a clear picture of the way an evaluation procedure works.

(i) The M&E Plan/strategy


A comprehensive planning document for all monitoring and evaluation activities within a program. This plan documents the key M&E questions to be addressed: what indicators will be collected, how, how often, from where, and why; baseline values, targets and assumptions; how data are going to be analyzed/interpreted; and how and how often reports will be developed and distributed.

Typically, the components of an M&E plan are (a minimal sketch of such a plan as a data structure follows this list):

• Establishing goals and objectives
• Setting the specific M&E questions
• Determining the activities to be implemented
• The methods and designs to be used for monitoring and evaluation
• The data to be collected
• The specific tools for data collection
• The required resources
• The responsible parties to implement specific components of the plan
• The expected results
• The proposed timeline
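
The sketch below is not from the notes; it simply shows, under assumed field names and figures, how the components listed above could be captured as a structured M&E plan record so that each indicator stays linked to its question, method, frequency, responsibility and intended use.

```python
# Hypothetical M&E plan fragment: one entry per indicator, mirroring the
# components listed above (question, method, frequency, responsibility, use).
me_plan = {
    "objective": "Increase access to safe drinking water in District X",
    "indicators": [
        {
            "name": "% of households using an improved water source",
            "question": "Is access to safe water improving as planned?",
            "method": "Household survey",
            "baseline": 60.0,
            "target": 90.0,
            "frequency": "annually",
            "responsible": "M&E officer with field staff",
            "use": "Progress reporting to management and donor",
        },
    ],
    "budget": 15_000,          # resources earmarked for M&E activities
    "timeline": "2016-2018",   # proposed implementation period
}

for ind in me_plan["indicators"]:
    print(f"{ind['name']}: collected {ind['frequency']} via {ind['method']}")
```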

(ii) Monitoring And Evaluation Cycle

Step 1 – Identify the purpose and scope of the M&E system


• Formulating objectives
• Selecting indicators
• Setting baselines and targets
Step 2 – Plan for data collection and management



• Major sources of data – secondary data; primary data (sample surveys, project output data); qualitative studies (PRA, mapping, KIIs, FGDs, observation, checklists); external assessments; participatory assessments
• Planning for data collection – prepare data collection guidelines, pre-test data collection tools, train data collectors, address ethical issues

Step 3 – Plan for data analysis


Step 4 – Plan for information reporting and utilization
Step 5 – Plan for M&E human resources and capacity building
Step 6 – Prepare the M&E budget

(iii) Setting up an M&E system often involves the following aspects

a) Assess the existing readiness and capacity for monitoring and evaluation.
b) Review current capacity within the organization (or outsourced externally) and its partners which will be responsible for project implementation, covering: technical skills, managerial skills, existence and quality of data systems, available technology and existing budgetary provision.
c) Establish the purpose and scope.
Why is M&E needed and how comprehensive should the system be?
What should be the scope and rigour, and should the M&E process be participatory?
d) Identify and agree with main stakeholders the outcomes and development
objective(s).
Set a development goal and the project purpose or expected outcomes, outputs,
activities and inputs. Indicators, baselines and targets are similarly derived
e) Select key indicators, i.e. the qualitative or quantitative variables that measure project performance and achievements for all levels of project logic with respect to inputs, activities, outputs, outcomes and impact, as well as the wider environment; this requires pragmatic judgment in the careful selection of indicators.
f) Develop an evaluation framework – set out the methods, approaches and evaluation designs (experimental, quasi-experimental and non-experimental) to be used to address the question of whether change observed through monitoring indicators can be attributed to the project interventions.
g) Set baselines and plan for results – the baseline is the first measurement of an indicator, which sets the pre-project condition against which change can be tracked and evaluated.
h) Select data collection methods as applicable.
i) Set targets and develop a results framework – a target is a specification of the quantity, quality, timing and location to be realized for a key indicator by a given date. Starting from the baseline level for an indicator, the desired improvement is defined, taking account of planned resource provision and activities, to arrive at a performance target for that indicator.
j) Plan monitoring, data analysis, communication, and reporting: Monitoring and
Evaluation Plan
k) Plan both ‘implementation monitoring’, tracking the inputs, activities and outputs in annual or multi-year work plans, and ‘results monitoring’, tracking achievement of outcomes and impact. The demands for information at each level of management need to be established, responsibilities allocated, and plans made for:
i. what data are to be collected and when;
ii. how data are collected and analyzed;
iii. who collects and analyses data;
iv. who reports information;
v. when?
l) Facilitating the necessary conditions and capacities to sustain the M&E System -
organizational structure for M&E, partner’s responsibilities and information
requirements, staffing levels and types, responsibilities and internal linkages,
incentives and training needs, relationships with partners and stakeholders,



horizontal and vertical lines of communication and authority, physical resource
needs and budget.

SESSION 8: EVALUATION DESIGN

Developing an evaluation design includes:


• Determining what type of design is required to answer the questions posed
• Selecting a methodological approach and data collection instruments
• Selecting a comparison group
• Sampling
• Determining timing, sequencing, and frequency of data collection

Evaluation research may adopt a quantitative, a qualitative or a mixed-methods design. Quantitative designs normally take the form of experimental designs. Qualitative evaluation approaches are non-experimental approaches which answer ‘why’ and ‘how’ questions.

The following are brief descriptions of the most commonly used evaluation (and research)
designs.

One-Shot Design: The evaluator gathers data following an intervention or program. For example, a survey of participants might be administered after they complete a workshop.
Retrospective Pre-test: As with the one-shot design, the evaluator collects data at one time but asks for recall of behaviour or conditions prior to, as well as after, the intervention or program.
One-Group Pre-test–Post-test Design: The evaluator gathers data prior to and following the intervention or program being evaluated.
Time Series Design: The evaluator gathers data prior to, during, and after the implementation of an intervention or program.
Pre-test–Post-test Control-Group Design: The evaluator gathers data on two separate groups prior to and following an intervention or program. One group, typically called the experimental or treatment group, receives the intervention. The other group, called the control group, does not receive the intervention. (See the analysis sketch after this table.)
Post-test-Only Control-Group Design: The evaluator collects data from two separate groups following an intervention or program. One group, typically called the experimental or treatment group, receives the intervention or program, while the other group, typically called the control group, does not. Data are collected from both groups only after the intervention.
Case Study Design: When evaluations are conducted for the purpose of understanding the program’s context, participants’ perspectives, the inner dynamics of situations, and questions related to participants’ experiences, and where generalization is not a goal, a case study design, with an emphasis on the collection of qualitative data, might be most appropriate. Case studies involve in-depth descriptive data collection and analysis of individuals, groups, systems, processes, or organizations. In particular, the case study design is most useful when you want to answer how and why questions and when there is a need to understand the particulars, uniqueness, and diversity of the case.
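
The sketch below is an illustrative analysis of a pre-test–post-test control-group design, assumed for this handout rather than taken from it: the programme effect is estimated as the treatment group’s change minus the control group’s change (a difference-in-differences). All scores are invented.

```python
from statistics import mean

def difference_in_differences(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Programme effect = (treatment change) - (control change)."""
    treatment_change = mean(treat_post) - mean(treat_pre)
    control_change = mean(ctrl_post) - mean(ctrl_pre)
    return treatment_change - control_change

# Hypothetical indicator scores before and after the intervention
treat_pre = [52, 48, 55, 50, 47]
treat_post = [68, 63, 70, 66, 61]
ctrl_pre = [51, 49, 53, 50, 48]
ctrl_post = [55, 52, 57, 54, 51]

effect = difference_in_differences(treat_pre, treat_post, ctrl_pre, ctrl_post)
print(f"Estimated programme effect: {effect:.1f} points")
```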



Decisions for Designing an Evaluation Study

SESSION 9: METHODS OF EVALUATION AND TOOLS

a) Evaluation Methods
Informal and less-structured methods:
• Conversation with concerned individuals
• Community interviews
• Field visits
• Reviews of records
• Key informant interviews
• Participant observation
• Focus group interviews

Formal and more-structured methods:

• Direct observation
• Questionnaires
• One-time survey
• Panel survey
• Census
• Field experiments

The following list summarizes common evaluation methods/approaches/tools, with remarks on their use.

• Case study: A detailed description of individuals, communities, organizations, events, programmes, time periods or a story. Remarks: Useful in evaluating complex situations and exploring qualitative impact. Helps to illustrate findings and includes comparisons (commonalities); only when combined (triangulated) with other case studies or methods can one draw conclusions about key principles.
• Checklist: A list of items used for validating or inspecting whether procedures/steps have been followed, or the presence of examined behaviours. Remarks: Allows for systematic review that can be useful in setting benchmark standards and establishing periodic measures of improvement.
• Community book: A community-maintained document of a project belonging to a community. It can include written records, pictures, drawings, songs or whatever community members feel is appropriate. Remarks: Where communities have low literacy rates, a memory team is identified whose responsibility it is to relate the written record to the rest of the community in keeping with their oral traditions.
• Community interviews/meeting: A form of public meeting open to all community members. Remarks: Interaction is between the participants and the interviewer, who presides over the meeting and asks questions following a prepared interview guide.
• Direct observation: A record of what observers see and hear at a specified site, using a detailed observation form. Observation may be of physical surroundings, activities or processes. Observation is a good technique for collecting data on behavioural patterns and physical conditions. Remarks: An observation guide is often used to reliably look for consistent criteria, behaviours or patterns.
• Document review: A review of documents (secondary data) can provide cost-effective and timely baseline information and a historical perspective of the project/programme. Remarks: It includes written documentation (e.g. project records and reports, administrative databases, training materials, correspondence, legislation and policy documents) as well as videos, electronic data or photos.
• Focus group discussion: Focused discussion with a small group (usually eight to 12 people) of participants to record attitudes, perceptions and beliefs relevant to the issues being examined. Remarks: A moderator introduces the topic and uses a prepared interview guide to lead the discussion and extract conversation, opinions and reactions.
• Interviews: An open-ended (semi-structured) interview is a technique for questioning that allows the interviewer to probe and pursue topics of interest in depth (rather than just “yes/no” questions). A closed-ended (structured) interview systematically follows carefully organized questions (prepared in advance in an interviewer’s guide) that only allow a limited range of answers, such as “yes/no” or a rating/number on a scale. Remarks: Replies can easily be numerically coded for statistical analysis.
• Key informant interview: An interview with a person having special information about a particular topic. Remarks: These interviews are generally conducted in an open-ended or semi-structured fashion.
• Laboratory testing: Precise measurement of a specific objective phenomenon, e.g. infant weight or a water quality test.
• Mini-survey: Data collected from interviews with 25 to 50 individuals, usually selected using non-probability sampling techniques. Remarks: Structured questionnaires with a limited number of closed-ended questions are used to generate quantitative data that can be collected and analysed quickly.
• Most significant change (MSC): A participatory monitoring technique based on stories about important or significant changes, rather than indicators. Remarks: These stories give a rich picture of the impact of development work and provide the basis for dialogue over key objectives and the value of development programmes.
• Participant observation: A technique first used by anthropologists (those who study humankind); it requires the researcher to spend considerable time (days) with the group being studied and to interact with them as a participant in their community. Remarks: This method gathers insights that might otherwise be overlooked, but is time-consuming.
• Participatory rapid (or rural) appraisal (PRA): This uses community engagement techniques to understand community views on a particular issue. Remarks: It is usually done quickly and intensively – over a two- to three-week period. Methods include interviews, focus groups and community mapping. Tools include stakeholder analysis, participatory rural appraisal, beneficiary assessment, and participatory monitoring and evaluation.
• Questionnaire: A data collection instrument containing a set of questions organized in a systematic way, as well as a set of instructions for the data collector/interviewer about how to ask the questions. Remarks: Typically used in a survey.
• Rapid appraisal (or assessment): A quick, cost-effective technique to gather data systematically for decision-making, using quantitative and qualitative methods, such as site visits, observations and sample surveys. Remarks: This technique shares many of the characteristics of participatory appraisal (such as triangulation and multidisciplinary teams) and recognizes that indigenous knowledge is a critical consideration for decision-making. Methods include: key informant interview, focus group discussion, community group interview, direct observation and mini-survey.
• Statistical data review: A review of population censuses, research studies and other sources of statistical data.
• Story: An account or recital of an event or a series of events. A success story illustrates impact by detailing an individual’s positive experiences in his or her own words. Remarks: A learning story focuses on the lessons learned through an individual’s positive and negative experiences (if any) with a project/programme.
• Formal survey: Systematic collection of information from a defined population, usually by means of interviews or questionnaires administered to a sample of units in the population (e.g. persons, beneficiaries, adults). An enumerated survey is one in which the survey is administered by someone trained (a data collector/enumerator) to record responses from respondents. A self-administered survey is a written survey completed by the respondent, either in a group setting or in a separate location; respondents must be literate. Remarks: Includes multi-topic or single-topic household/living standards surveys, client satisfaction surveys and the core welfare indicators questionnaire. Public expenditure tracking surveys track the flow of public funds and the extent to which resources actually reach the target groups. Sampling-related methods cover the sample frame, sample size and sample method, e.g. random – simple (and systematic) or stratified – and non-random – purposive (and cluster) and quota sampling, etc.
• Visual techniques: Participants develop maps, diagrams, calendars, timelines and other visual displays to examine the study topics. Participants can be prompted to construct visual responses to questions posed by the interviewers, e.g. by constructing a map of their local area. Remarks: This technique is especially effective where verbal methods can be problematic due to low-literate or mixed-language target populations, or in situations where the desired information is not easily expressed in either words or numbers.
• Cost benefit and cost effectiveness analysis: Assesses whether or not the costs of an activity can be justified by the outcomes and impacts. Remarks: Cost-benefit analysis measures both inputs and outputs in monetary terms; cost-effectiveness analysis measures inputs in monetary terms and outputs in non-monetary terms. (A small worked example follows.)
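
As a small worked example (all figures assumed for illustration, not from the notes), the sketch below contrasts a benefit-cost ratio with a cost-effectiveness ratio for the same hypothetical intervention.

```python
# Hypothetical figures for one intervention (currency units)
total_cost = 200_000
monetized_benefits = 320_000   # used for cost-benefit analysis
outcome_units = 4_000          # e.g. children fully immunized (non-monetary outcome)

benefit_cost_ratio = monetized_benefits / total_cost   # > 1 suggests benefits exceed costs
cost_per_outcome = total_cost / outcome_units          # cost-effectiveness ratio

print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")        # 1.60
print(f"Cost per child fully immunized: {cost_per_outcome:.2f}")  # 50.00
```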

Advantages and disadvantages of selected M&E tools/methods:

Survey
Advantages: good for gathering descriptive data; can cover a wide range of topics; relatively inexpensive to use; can be analyzed using a variety of existing software.
Disadvantages: self-report may lead to biased reporting; data may provide a general picture but lack depth; may not provide adequate information on context.

Case studies
Advantages: provide a rich picture of what is happening, as seen through the eyes of many individuals; allow a thorough exploration of interactions between treatment and contextual factors; can help explain changes or facilitating factors that might otherwise not emerge from the data.
Disadvantages: require a sophisticated and well-trained data collection and reporting team; can be costly in terms of the demands on time and resources; individual cases may be over-interpreted or overgeneralized.

Interviews
Advantages: usually yield the richest data, details and new insights; permit face-to-face contact with respondents; provide opportunity to explore topics in depth; allow the interviewer to experience the affective as well as cognitive aspects of responses; allow the interviewer to explain or help clarify questions, increasing the likelihood of useful responses; allow the interviewer to be flexible in administering the interview to particular individuals or in particular circumstances.
Disadvantages: expensive and time-consuming; need well-qualified, highly trained interviewers; the interviewee may distort information through recall error, selective perceptions or a desire to please the interviewer; flexibility can result in inconsistencies across interviews; the volume of information may be very large and difficult to transcribe and reduce.

b) PARTICIPATORY M&E
Participatory evaluation is a partnership approach to evaluation in which stakeholders actively engage in developing the evaluation and all phases of its implementation. Participatory evaluations often use rapid appraisal techniques, a few of which are listed below.
• Key Informant Interviews – Interviews with a small number of individuals who are most knowledgeable about an issue.
• Focus Groups – A small group (8-12) is asked to openly discuss ideas, issues and experiences.
• Mini-surveys – A small number of people (25-50) is asked a limited number of questions.
• Neighbourhood Mapping – Pictures show location and types of changes in an area to be evaluated.
• Flow Diagrams – A visual diagram shows proposed and completed changes in systems.
• Photographs – Photos capture changes in communities that have occurred over time.
• Oral Histories and Stories – Stories capture progress by focusing on one person’s or organization’s account of change.

For example, specific applications of the focus group method in evaluations include:


• Identifying and defining problems in project implementation
• Pretesting topics or ideas
• Identifying project strengths, weaknesses, and recommendations
• Assisting with interpretation of quantitative findings
• Obtaining perceptions of project outcomes and impacts
• Generating new ideas

SESSION 10: DATA ANALYSIS AND REPORTING

The term “data” refers to raw, unprocessed information while “information,” or “strategic
information,” usually refers to processed data or data presented in some sort of context.
• Data – primary or secondary – is a term given to raw facts or figures before they have been processed and analysed.
• Information refers to data that has been processed and analysed for reporting and use.
• Data analysis is the process of converting collected (raw) data into usable information.



(i) Quantitative and Qualitative data
• Quantitative data measures and explains what is being studied with numbers (e.g. counts, ratios, percentages, proportions, average scores, etc.).
• Qualitative data explains what is being studied with words (documented observations, representative case descriptions, perceptions, opinions of value, etc.).
• Quantitative methods tend to use structured approaches (e.g. coded responses to surveys) which provide precise data that can be statistically analysed and replicated (copied) for comparison.
• Qualitative methods use semi-structured techniques (e.g. observations and interviews) to provide in-depth understanding of attitudes, beliefs, motives and behaviours. They tend to be more participatory and reflective in practice.

Quantitative data is often considered more objective and less biased than qualitative data but
recent debates have concluded that both quantitative and qualitative methods have subjective
(biased) and objective (unbiased) characteristics.

Therefore, a mixed-methods approach is often recommended that can utilize the advantages of
both, measuring what happened with quantitative data and examining how and why it happened
with qualitative data.

(ii) Some Data Quality Issues in Monitoring and Evaluation


• Coverage: Will the data cover all of the elements of interest?
• Completeness: Is there a complete set of data for each element of interest?
• Accuracy: Have the instruments been tested to ensure validity and reliability of the data?
• Frequency: Are the data collected as frequently as needed?
• Reporting schedule: Do the available data reflect the periods of interest?
• Accessibility: Are the data needed collectable/retrievable?
• Power: Is the sample size big enough to provide a stable estimate or detect change? (See the sketch below.)
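
As an illustration of the power/sample-size concern (values assumed, not from the notes), the sketch below uses the common formula n = z²·p·(1−p)/e² for estimating a proportion within a chosen margin of error.

```python
import math

def sample_size_for_proportion(p: float, margin_of_error: float, z: float = 1.96) -> int:
    """Minimum sample size to estimate a proportion p within +/- margin_of_error
    at ~95% confidence (z = 1.96), ignoring any finite-population correction."""
    n = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n)

# Hypothetical: expect ~50% prevalence, want +/- 5 percentage points precision
print(sample_size_for_proportion(p=0.5, margin_of_error=0.05))  # -> 385
```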

(iii) Data Analysis


Quantitative or qualitative research methods, or a complementary combination of both approaches, are used.
Analysis may include:
• Content or textual analysis, making inferences by objectively and systematically identifying specified characteristics of messages.
• Statistical descriptive techniques, the most common of which include: graphical description (histograms, scatter-grams, bar charts, ...); tabular description (frequency distributions, cross tabs, ...); parametric description (mean, median, mode, standard deviation, skewness, kurtosis, ...).
• Statistical inferential techniques, which involve generalizing from a sample to the whole population and testing hypotheses. Hypotheses are stated in mathematical or statistical terms and tested through two- or one-tailed tests (t-test, chi-square, Pearson correlation, F-statistic, ...). A brief sketch of descriptive and inferential steps follows.
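
The following is a minimal sketch (illustrative data, not from the notes) of a descriptive summary followed by an inferential two-sample t-test; it assumes SciPy is available, and all group names and values are invented.

```python
from statistics import mean, median, stdev
from scipy import stats  # assumes SciPy is installed

# Hypothetical indicator values for two groups of respondents
group_a = [62, 70, 68, 74, 66, 71, 69]
group_b = [58, 63, 60, 65, 61, 59, 62]

# Descriptive statistics
for name, data in (("A", group_a), ("B", group_b)):
    print(f"Group {name}: mean={mean(data):.1f}, median={median(data)}, sd={stdev(data):.1f}")

# Inferential statistics: two-sample (Welch) t-test of the difference in means
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference between group means is statistically significant at the 5% level.")
```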

SESSION 11: TERMS OF REFERENCE IN M&E AND EVALUATION REPORT TEMPLATE

(i) Terms of Reference in Evaluation


Evaluation organizers are usually the people in charge of a particular project who want to have the project evaluated in order to better manage project operations. The responsibilities of the evaluation organizers differ from those of the evaluators, who are usually consultants contracted for the evaluation.

Tasks of the evaluation organizers include:



• Preparing the TOR. The TOR is a written document presenting the purpose and scope of the evaluation, the methods to be used, the standard against which performance is to be assessed or analyses are to be conducted, the resources and time allocated, and reporting requirements. The TOR also defines the expertise and tasks required of a contractor as an evaluator, and serves as a job description for the evaluator.
• Appointing evaluator(s);
• Securing the budget for the evaluation;
• Monitoring the evaluation work;
• Providing comments on the draft;
• Publicizing the evaluation report; and
• Providing feedback from the results to concerned parties.

The role of evaluator includes:


• Preparing the detailed evaluation design;
• Collecting and analyzing information; and
• Preparing an evaluation report.

The role of Management includes:


• Management response
• Action on recommendations
• Tracking status of implementation of recommendations

Management Response Template


Prepared by:
Reviewed by:
Evaluation recommendation 1.
Management response:
Key action(s) | Time frame | Responsible unit(s) | Tracking* (Comments, Status)
1.1
1.2
Evaluation recommendation 2.
Management response:
Key action(s) | Time frame | Responsible unit(s) | Tracking* (Comments, Status)
2.1
2.2

ii) Evaluation Report Template


There is no single universal format for an evaluation report, but the template below is intended to serve as a guide for preparing meaningful, useful and credible evaluation reports that meet quality standards. It only suggests the content that should be included in a quality evaluation report; it does not purport to prescribe a definitive section-by-section format that all evaluation reports should follow.

E.g.
Formal reports developed by evaluators typically include six major sections:
(1) Background
(2) Evaluation study questions
(3) Evaluation procedures
(4) Data analyses
(5) Findings
(6) Conclusions (and recommendations)

Or detailed:
Summary sections
A. Abstract
B. Executive summary
II. Background



A. Problems or needs addressed
B. Literature review
C. Stakeholders and their information needs
D. Participants
E. Project’s objectives
F. Activities and components
G. Location and planned longevity of the project
H. Resources used to implement the project
I. Project’s expected measurable outcomes
J. Constraints
III. Evaluation study questions
A. Questions addressed by the study
B. Questions that could not be addressed by the study (when relevant)
IV. Evaluation procedures
A. Sample
1. Selection procedures
2. Representativeness of the sample
3. Use of comparison or control groups, if applicable
B. Data collection
1. Methods
2. Instruments
C. Summary matrix
1. Evaluation questions
2. Variables
3. Data gathering approaches
4. Respondents
5. Data collection schedule
V. Findings
A. Results of the analyses organized by study question
VI. Conclusions
A. Broad-based, summative statements
B. Recommendations, when applicable

Or

Table of contents
Executive summary
• Introduction
• Evaluation scope, focus and approach
• Project facts
• Findings and lessons learned
o Findings
o Lessons learned
• Conclusions and recommendations
o Conclusions
o Recommendations
• Annexes/appendices

Or as per organizational requirements (Modified from UNDP, 2009, Handbook on Planning,


Monitoring and Evaluating for Development Results)
The report should also include the following:

1. Title and opening pages—Should provide the following basic information:


• Name of the evaluation intervention
• Time frame of the evaluation and date of the report



• Country/Organization/Entity of the evaluation intervention
• Names and organizations of evaluators
• Name of the organization commissioning the evaluation
• Acknowledgements

2. Table of contents
• Should always include lists of boxes, figures, tables and annexes with page references.
3. List of acronyms and abbreviations
4. Executive summary
A stand-alone section of two to three pages that should:
• Briefly describe the intervention (the project(s), programme(s), policies or other interventions) that was evaluated.
• Explain the purpose and objectives of the evaluation, including the audience for the evaluation and the intended uses.
• Describe key aspects of the evaluation approach and methods.
• Summarize principal findings, conclusions, and recommendations.
5. Introduction
Should:
• Explain why the evaluation was conducted (the purpose), why the intervention is being evaluated at this point in time, and why it addressed the questions it did.
• Identify the primary audience or users of the evaluation, what they wanted to learn from the evaluation and why, and how they are expected to use the evaluation results.
• Identify the intervention (the project(s), programme(s), policies or other interventions) that was evaluated—see the upcoming section on the intervention.
• Acquaint the reader with the structure and contents of the report and how the information contained in the report will meet the purposes of the evaluation and satisfy the information needs of the report’s intended users.
6. Description of the intervention/project/process/programme—Provide the basis for report users
to understand the logic and assess the merits of the evaluation methodology and understand the
applicability of the evaluation results. The description needs to provide sufficient detail for the
report user to derive meaning from the evaluation. The description should:
 Describe what is being evaluated, who seeks to benefit, and the problem or issue it seeks
to address.
 Explain the expected results map or results framework, implementation strategies, and the
key assumptions underlying the strategy.
 Link the intervention to national priorities, Development partner priorities, corporate
strategic plan goals, or other project, programme, organizational, or country specific plans
and goals.
 Identify the phase in the implementation of the intervention and any significant changes
(e.g., plans, strategies, logical frameworks) that have occurred over time, and explain the
implications of those changes for the evaluation.
 Identify and describe the key partners involved in the implementation and their roles.
 Describe the scale of the intervention, such as the number of components (e.g., phases of
a project) and the size of the target population for each component.
 Indicate the total resources, including human resources and budgets.
 Describe the context of the social, political, economic and institutional factors, and the
geographical landscape within which the intervention operates and explain the effects
(challenges and opportunities) those factors present for its implementation and outcomes.
 Point out design weaknesses (e.g., intervention logic) or other implementation constraints
(e.g., resource limitations).
7. Evaluation scope and objectives - The report should provide a clear explanation of the evaluation’s
scope, primary objectives and main questions.
 Evaluation scope—The report should define the parameters of the evaluation, for example,
the time period, the segments of the target population included, the geographic area
included, and which components, outputs or outcomes were and were not assessed.
 Evaluation objectives—The report should spell out the types of decisions evaluation users
will make, the issues they will need to consider in making those decisions, and what the
evaluation will need to achieve to contribute to those decisions.



 Evaluation criteria—The report should define the evaluation criteria or performance
standards used. The report should explain the rationale for selecting the particular criteria
used in the evaluation.
 Evaluation questions—Evaluation questions define the information that the evaluation will
generate. The report should detail the main evaluation questions addressed by the
evaluation and explain how the answers to these questions address the information needs
of users.
8. Evaluation approach and methods - The evaluation report should describe in detail the selected
methodological approaches, theoretical models, methods and analysis; the rationale for their
selection; and how, within the constraints of time and money, the approaches and methods
employed yielded data that helped answer the evaluation questions and achieved the evaluation
purposes. The description should help the report users judge the merits of the methods used in the
evaluation and the credibility of the findings, conclusions and recommendations.

The description on methodology should include discussion of each of the following:


 Data sources—The sources of information (documents reviewed and stakeholders), the
rationale for their selection and how the information obtained addressed the evaluation
questions.
 Sample and sampling frame—If a sample was used: the sample size and characteristics; the
sample selection criteria (e.g., single women, under 45); the process for selecting the
sample (e.g., random, purposive); if applicable, how comparison and treatment groups
were assigned; and the extent to which the sample is representative of the entire target
population, including discussion of the limitations of the sample for generalizing results.
 Data collection procedures and instruments—Methods or procedures used to collect data,
including discussion of data collection instruments (e.g., interview protocols), their
appropriateness for the data source and evidence of their reliability and validity.
 Performance standards/indicators—The standard or measure that will be used to evaluate
performance relative to the evaluation questions (e.g., national or regional indicators,
rating scales).
 Stakeholder engagement—Stakeholders’ engagement in the evaluation and how the level of
involvement contributed to the credibility of the evaluation and the results.
 Ethical considerations—The measures taken to protect the rights and confidentiality of
informants.
 Background information on evaluators—The composition of the evaluation team, the
background and skills of team members and the appropriateness of the technical skill mix,
gender balance and geographical representation for the evaluation.
 Major limitations of the methodology—Major limitations of the methodology should be
identified and openly discussed as to their implications for evaluation, as well as steps taken
to mitigate those limitations.
9. Data analysis—The report should describe the procedures used to analyse the data collected to
answer the evaluation questions. It should detail the various steps and stages of analysis that were
carried out, including the steps to confirm the accuracy of data and the results. The report also
should discuss the appropriateness of the analysis to the evaluation questions. Potential weaknesses
in the data analysis and gaps or limitations of the data should be discussed, including their possible
influence on the way findings may be interpreted and conclusions drawn.
10. Findings and conclusions—The report should present the evaluation findings based on the analysis
and conclusions drawn from the findings.
 Findings—Should be presented as statements of fact that are based on analysis of the data.
They should be structured around the evaluation criteria and questions so that report users
can readily make the connection between what was asked and what was found. Variances
between planned and actual results should be explained, as well as factors affecting the
achievement of intended results. Assumptions or risks in the project or programme design
that subsequently affected implementation should be discussed.
 Conclusions—Should be comprehensive and balanced, and highlight the strengths,
weaknesses and outcomes of the intervention. They should be well substantiated by the
evidence and logically connected to evaluation findings. They should respond to key
evaluation questions and provide insights into the identification of and/or solutions to
important problems or issues pertinent to the decision making of intended users.
11. Recommendations—The report should provide practical, feasible recommendations directed to the
intended users of the report about what actions to take or decisions to make. The recommendations
should be specifically supported by the evidence and linked to the findings and conclusions around



key questions addressed by the evaluation. They should address sustainability of the initiative and
comment on the adequacy of the project exit strategy, if applicable.
12. Lessons learned—As appropriate, the report should include discussion of lessons learned from the
evaluation, that is, new knowledge gained from the particular circumstance (the intervention, context,
outcomes, even the evaluation methods) that is applicable to similar contexts. Lessons should
be concise and based on specific evidence presented in the report.
13. Report annexes—Suggested annexes should include the following to provide the report user with
supplemental background and methodological details that enhance the credibility of the report:
 ToR for the evaluation
 Additional methodology-related documentation, such as the evaluation matrix and data
collection instruments (questionnaires, interview guides, observation protocols, etc.) as
appropriate
 List of individuals or groups interviewed or consulted and sites visited
 List of supporting documents reviewed
 Project or programme results map or results framework
 Summary tables of findings, such as tables displaying progress towards outputs, targets,
and goals relative to established indicators (a minimal illustrative sketch of such a table follows this list)
 Short biographies of the evaluators and justification of team composition
 Code of conduct signed by evaluators
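To make the summary-table annex concrete, the following is a minimal, purely illustrative Python sketch. The indicator names, baselines, targets and actual values are hypothetical, and the progress measure (share of the planned baseline-to-target improvement actually achieved) is only one common way of expressing progress, not a prescribed formula.

```python
# Illustrative only: hypothetical indicators, baselines, targets and endline values.
# Shows one simple way to tabulate progress towards targets for a findings summary table.

indicators = [
    # (indicator, baseline, target, actual at evaluation)
    ("Households with access to safe water (%)", 40, 80, 68),
    ("Children fully immunized (%)",             55, 90, 88),
    ("Average crop yield (bags/acre)",            8, 12, 9),
]

print(f"{'Indicator':<45}{'Baseline':>10}{'Target':>8}{'Actual':>8}{'Progress':>10}")
for name, baseline, target, actual in indicators:
    # Progress = share of the planned improvement (target - baseline) actually achieved.
    progress = (actual - baseline) / (target - baseline) * 100
    print(f"{name:<45}{baseline:>10}{target:>8}{actual:>8}{progress:>9.0f}%")
```

A table of this kind lets report users see, indicator by indicator, how far achievements fall short of or exceed targets, which supports the findings and conclusions in the main report body.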

SESSION 12: BEST PRACTICES, EMERGING TRENDS & M&E CAPACITY BUILDING IN KENYA
(i) Monitoring Best Practices
 Monitoring data should be well-focused on specific audiences and uses (only what is necessary and sufficient).
 Monitoring should be systematic, based upon predetermined indicators and assumptions.
 Monitoring should also look for unanticipated changes in the project/programme and its context,
including any changes in project/programme assumptions/risks; this information should
be used to adjust project/programme implementation plans.
 Monitoring should be timely, so information can be readily used to inform project/programme
implementation.
 Monitoring should be participatory, involving key stakeholders; this reduces costs and builds
understanding and ownership.
 Monitoring information should serve not only project/programme management but should be shared,
where possible, with beneficiaries, donors and any other relevant stakeholders.

(ii) Good M&E Principles for Projects

 Participation: encourage participation “by all who wish to participate and/or who might
be affected by the review.”
 Decision Making: “Projects will utilize a structured decision-making process.”
 Value People: “Projects are not intended to result in a loss of employees but may result in
employees being re-deployed to other activities within the department.”
 Measurement: for accountability; measures should be accurate, consistent, flexible,
comprehensive but not onerous
 Integrated Program/Process Planning and Evaluation: incorporated into yearly business
plans
 Ethical Conduct/Openness: consider ethical implications, respect and protect rights of
participants
 Program/Process Focus: focus on improving program, activity or process
 Clear and Accurate Reporting of Facts and Review Results
 Timely Communication of Information and Review Results to Affected Parties
 Multi-Disciplinary Team Approach: include a range of knowledge and experience; seek
assistance from outside of the team as required
 Customer and Stakeholder Involvement: “External and internal customers and
stakeholders related to a project should be identified and consulted, if possible, throughout
the project.”



(iii) Basic Ethics to expect from an evaluator
 Systematic Inquiry – Evaluators conduct systematic, data-based inquiries about whatever
is being evaluated.
 Competence – Evaluators provide competent performance to stakeholders.
 Integrity/honesty – Evaluators ensure the honesty and integrity of the entire evaluation
process.
 Respect for people – Evaluators respect the security, personal dignity, autonomy and self-worth
of respondents, program participants, clients and other stakeholders with whom they interact,
including recognition of and special protections for those with diminished autonomy, such as
children or prisoners.
 Responsibilities for general and public welfare – Evaluators clarify and take into account
the diversity of interests and values that may be related to the general and public welfare.
 Beneficence: the obligation to protect people from harm by maximizing anticipated
benefits and minimizing potential risks of harm
 Justice: benefits and burdens of research should be distributed fairly. In other words, one
segment of society—the poor or people of one ethnicity—should not be the only subjects
in research designed to benefit everyone

(iv) Key Success Factors of Monitoring and Evaluation System


 Clear linkage with the strategic objectives
 Clear statements of measurable objectives for the project and its components.
 A structured set of indicators covering: inputs, process, outputs, outcomes, impact, and
exogenous factors.
 Data collection mechanisms capable of monitoring progress over time, including
baselines and a means to compare progress and achievements against targets.
 Availability of baselines and realistic results framework
 Clear mechanisms for reporting and use of M&E results in decision-making.
 Sustainable organizational arrangements for data collection, management, analysis,
and reporting.
 A good evaluation process should have six characteristics:
o stakeholder involvement,
o impartiality,
o usefulness,
o technical adequacy,
o cost effectiveness, and
o timely dissemination and feedback.

(v) Factors contributing to failure of M&E Systems


 Poor system design in terms of collecting more data than is needed or can be processed.
 Inadequate staffing of M&E both in terms of quantity and quality
 Missing or delayed baseline studies. Strictly, these should be done before the start of project
implementation if they are to facilitate 'with and without project' comparisons during
evaluation.
 Delays in processing data, often as a result of inadequate processing facilities and staff
shortages.
 Lack of the correct software and capable staff: personal computers can process data easily and
quickly, but making the most of these capabilities requires both.
 Inadequate utilization of results.

(vi) Status of M&E in Kenya


 Establishment of a National Monitoring and Evaluation Policy
 Monitoring and evaluation defined as 'a management tool that ensures that policy,
programme, and project results are achieved by gauging performance against plans; and
drawing lessons from experience of interventions for future implementation effectiveness



while fostering accountability to the people of Kenya’. (GOK, Monitoring and evaluation
policy in Kenya, 2012)
 Directorate of M&E created in 2003
 National Integrated M&E system: implementation coordinated by the Directorate of M&E,
Department of Planning, to monitor implementation of the Economic Recovery Strategy

 Rationale for M&E policy: the Constitution of Kenya provides a basis for M&E under Articles
10, 56, 174, 185, 201, 203, 225, 226 and 227
 Challenges include: -
i. Weak M&E culture: hard to determine whether M&E influences decision-making,
and M&E budgets not aligned to projects/programmes.
ii. Weak M&E reporting structures and multiple, uncoordinated M&E systems
within and among institutions, making it hard to obtain complete and harmonized
results-based information.
iii. Weak institutional, managerial and technical capacities, so evaluations are not
adequately conducted.
iv. Untimely and rarely analysed data, and low utilization of data/information.
v. Lack of an M&E policy and legal framework.
 Capacity development to complement policy
o Technical and managerial capacity – Equip officers with M&E skills and do
backstopping on M&E for state and non-state actors
o Standardize M&E activities
o MED in collaboration with local training institutions shall develop curriculum to
guide delivery of certificate, diploma, graduate, masters and post-graduate
diploma courses
o MED to spearhead real time reporting through uploading, downloading and data
analysis on ICT database platforms
o Institutional capacity
 Units charged with M&E
 Necessary enabling infrastructure at national and devolved levels
 Technical oversight committee
 National steering committee
 Ministerial M&E committees
 County M&E committees
 National and County Stakeholders fora
 Funds designated for M&E activities
 Non-state actors (NGOs, civil society and the private sector) to be supported by
MED in their M&E capacity development



EXERCISES

Exercise 1: Identify 5 key indicators and complete an indicator matrix for a project/programme
you are familiar with.
Indicator matrix columns: Indicator | Indicator Definition | Methods/Sources | Person(s) Responsible | Frequency/Schedule | Data Analysis | Information Use
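Where a spreadsheet is not at hand, the matrix can first be drafted as structured data. The following is a minimal illustrative Python sketch; the single example row is hypothetical and only shows the expected level of detail.

```python
import csv

# Columns of the indicator matrix used in this exercise.
COLUMNS = ["Indicator", "Indicator Definition", "Methods/Sources",
           "Person(s) Responsible", "Frequency/Schedule",
           "Data Analysis", "Information Use"]

# One hypothetical row to illustrate the expected level of detail.
rows = [{
    "Indicator": "% of trained farmers adopting improved seed",
    "Indicator Definition": "Farmers trained by the project who plant improved seed in the next season",
    "Methods/Sources": "Follow-up household survey",
    "Person(s) Responsible": "M&E officer",
    "Frequency/Schedule": "Every planting season",
    "Data Analysis": "Descriptive statistics, disaggregated by gender",
    "Information Use": "Quarterly progress report; adjust training content",
}]

# Write the template (header plus example row) to a CSV file for completion.
with open("indicator_matrix.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```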

Exercise 2: Identify a suitable project and complete a logical framework


Logical framework columns: Narrative Summary | Objectively Verifiable Indicators (OVI) | Means of Verification (MOV) | Important Assumptions
Rows:
GOAL
PURPOSE
OUTPUTS
ACTIVITIES (with Inputs listed in the OVI column)
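A logframe can also be drafted as a simple nested data structure before it is laid out as a table. The sketch below is purely illustrative; the entries for the GOAL level are hypothetical placeholders, not part of any prescribed project.

```python
# Minimal logframe skeleton: each level holds the four standard columns.
logframe = {
    "GOAL": {
        "narrative_summary": "Improved household food security in the target district",
        "ovi": ["% of households reporting fewer than 2 hungry months per year"],
        "mov": ["National household survey", "Project endline survey"],
        "assumptions": ["No major drought during the project period"],
    },
    "PURPOSE": {"narrative_summary": "", "ovi": [], "mov": [], "assumptions": []},
    "OUTPUTS": {"narrative_summary": "", "ovi": [], "mov": [], "assumptions": []},
    "ACTIVITIES": {"narrative_summary": "", "ovi": ["Inputs (budget, staff, materials)"],
                   "mov": [], "assumptions": []},
}

# Quick check of which levels still need a narrative summary.
for level, cells in logframe.items():
    print(level, "->", cells["narrative_summary"] or "(to be completed)")
```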

Exercise 3: Identify a suitable project and complete an Evaluation Grid using the five evaluation
criteria, which are Relevance, Effectiveness, Efficiency, Impact and Sustainability

Exercise 4: Identify a suitable project and complete an Evaluation Matrix using the five evaluation
criteria, which are Relevance, Effectiveness, Efficiency, Impact and Sustainability
Evaluation matrix columns: Relevant evaluation criteria | Key Questions | Specific Sub-Questions | Data Sources | Data Collection Methods/Tools | Indicators/Success Standard | Methods for Data Analysis
Rows: Relevance | Effectiveness | Efficiency | Impact | Sustainability
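The skeleton of this matrix can be generated programmatically so that every criterion receives the same set of columns. The sketch below is illustrative only; the single filled-in key question is a hypothetical example, not a model answer.

```python
CRITERIA = ["Relevance", "Effectiveness", "Efficiency", "Impact", "Sustainability"]
COLUMNS = ["Key Questions", "Specific Sub-Questions", "Data Sources",
           "Data Collection Methods/Tools", "Indicators/Success Standard",
           "Methods for Data Analysis"]

# Build an empty evaluation matrix: one row per criterion, one blank cell per column.
evaluation_matrix = {criterion: {column: "" for column in COLUMNS} for criterion in CRITERIA}

# Example of filling one cell (hypothetical question).
evaluation_matrix["Relevance"]["Key Questions"] = (
    "To what extent do the project objectives address priority needs of the target group?"
)

# Report how much of the matrix has been completed so far.
for criterion, cells in evaluation_matrix.items():
    completed = sum(1 for value in cells.values() if value)
    print(f"{criterion}: {completed} of {len(COLUMNS)} cells completed")
```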

Exercise 5: Identify 5 evaluation methods/techniques and complete an Evaluation Method/Technique Matrix in regard to a suitable project/programme.

Columns: Evaluation Method/Technique | What are they | What can it be used for | Advantages | Disadvantages | Resources required

Example row (Formal surveys): used to collect standardized information from samples | baseline data, comparing different groups, changes over time, etc. | findings from sampled items can be applied to the wider target group | data processing and analysis can be a bottleneck | finances, technical and analytical skills

Rows to complete: Rapid appraisal methods | Participatory methods
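Because the formal-survey row above depends on drawing a sample, a minimal Python sketch of selecting a simple random sample is given below. The sampling frame of 500 beneficiary households and the sample size of 50 are hypothetical assumptions for illustration.

```python
import random

# Hypothetical sampling frame: 500 registered beneficiary households.
sampling_frame = [f"HH-{i:03d}" for i in range(1, 501)]

random.seed(42)  # fixed seed so the sample can be reproduced and documented
sample = random.sample(sampling_frame, k=50)  # simple random sample of 50 households

print("First five sampled households:", sample[:5])
```

Documenting the frame, the selection procedure and the seed supports the "sample and sampling frame" disclosure expected in the evaluation report's methodology section.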

Exercise 6: Identify 5 evaluation models/approaches and complete an Evaluation Model/Approach Matrix.

Columns: Evaluation Model/Approach | What are some examples or situations in which you would use this approach? | What conditions need to exist to use this approach? | What are some limitations of this approach?
Rows: Goal-free evaluation | Kirkpatrick Four-level approach

Exercise 7: Evaluation Models


a) Applying Kirkpatrick Four-Level Approach to Evaluate Training
A sales training course covers basic topics, such as how to begin the sales discussion, how to ask the right
questions, and how to ask for the sale. Although the trainer believes that the training will be successful,
you have been requested to evaluate the training program. You decide to use the Kirkpatrick four-level
approach.

Columns: What aspects of the training will you evaluate? | What are some of the variables you will focus on? | What are some of the limitations of the evaluation and its findings?
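One way to start this exercise is to list, for each Kirkpatrick level, what would be measured and with which instrument. The sketch below is a hypothetical illustration for the sales training, not a prescribed answer; the measures and instruments are assumptions you would adapt to the actual course.

```python
# Hypothetical mapping of Kirkpatrick's four levels to the sales training example.
kirkpatrick_plan = {
    "Level 1 - Reaction":  {"measure": "Trainee satisfaction with the course",
                            "instrument": "End-of-session feedback form"},
    "Level 2 - Learning":  {"measure": "Knowledge of questioning and closing techniques",
                            "instrument": "Pre-test and post-test"},
    "Level 3 - Behaviour": {"measure": "Use of the techniques in real sales calls",
                            "instrument": "Supervisor observation checklist after 1-3 months"},
    "Level 4 - Results":   {"measure": "Change in sales volume and conversion rate",
                            "instrument": "Sales records compared with the pre-training period"},
}

for level, plan in kirkpatrick_plan.items():
    print(f"{level}: measure '{plan['measure']}' using '{plan['instrument']}'")
```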

b) Applying the CIPP evaluation model (Context, Input, Process, Product)


Columns: What aspects of the project will you evaluate? | What are some of the variables you will focus on? | What are some of the limitations of the evaluation and its findings?

Exercise 8: Identify 5 evaluation designs and complete an Evaluation Design Matrix



Columns: Evaluation Design | When would you use this design? | What data collection methods might you use? | What are some limitations of this design?
Rows: Retrospective pre-test | Case study design

=======++++++++++++++====END====++++++++++++++==========

