Monitoring and Evaluation
c) Logical frameworks, also known as logframes, are commonly used to help set clear
programme objectives and define indicators of success. Like results frameworks, they also
outline the critical assumptions on which a project is based.
d) Logic models, also known as M&E frameworks, are commonly used to present a clear
plan for the use of resources to meet the desired goals and objectives. They are a useful
tool for presenting programmatic and evaluation components.
The choice of a particular type of framework—whether a conceptual framework, results
framework, logical framework or logic model—depends on the program’s specific needs, the
M&E team’s preferences and donor requirements.
In particular, the Logical Framework Approach (LFA) is a systematic planning procedure for
complete project cycle management and a participatory planning, monitoring and evaluation tool.
[Diagram: Pre-conditions are what need to be fulfilled before activities can start.]
[Diagram: Monitoring control loop in which planned and actual performance are compared to establish status and variances.]
a) Assess the existing readiness and capacity for monitoring and evaluation.
b) Review current capacity within the organization (or outsourced) and among the partners
responsible for project implementation, covering: technical skills, managerial skills,
existence and quality of data systems, available technology and existing budgetary
provision.
c) Establish the purpose and scope.
Why is M&E needed and how comprehensive should the system be?
What should be the scope and rigour, and should the M&E process be participatory?
d) Identify and agree with the main stakeholders on the outcomes and development
objective(s).
Set a development goal and the project purpose or expected outcomes, outputs,
activities and inputs. Indicators, baselines and targets are derived in the same way.
e) Select key indicators, i.e. the qualitative or quantitative variables that measure project
performance and achievements at all levels of the project logic (inputs, activities,
outputs, outcomes and impact) as well as in the wider environment. Careful, pragmatic
judgement is required in selecting indicators.
f) Develop an evaluation framework - set out the methods, approaches and evaluation
designs (experimental, quasi-experimental and non-experimental) to be used to address
the question of whether change observed through monitoring indicators can be
attributed to the project interventions.
g) Set baselines and plan for results - the baseline is the first measurement of an indicator,
which establishes the pre-project condition against which change can be tracked and
evaluated.
h) Select data collection methods as applicable.
i) Set targets and develop a results framework - a target is a specification of the quantity,
quality, timing and location to be realized for a key indicator by a given date. Starting
from the baseline level for an indicator, the desired improvement is defined, taking
account of planned resource provision and activities, to arrive at a performance target
for that indicator (a simple worked sketch of baseline-to-target tracking follows this list).
j) Plan monitoring, data analysis, communication and reporting: the monitoring and
evaluation plan.
k) Implementation monitoring, which tracks the inputs, activities and outputs in annual or
multi-year work plans, and results monitoring, which tracks achievement of outcomes
and impact, are both needed. The demands for information at each level of
management need to be established, responsibilities allocated, and plans made for:
i. what data are to be collected and when;
ii. how data are collected and analysed;
iii. who collects and analyses data;
iv. who reports information;
v. when it is reported.
l) Facilitate the necessary conditions and capacities to sustain the M&E system - the
organizational structure for M&E, partners' responsibilities and information
requirements, staffing levels and types, responsibilities and internal linkages,
incentives and training needs, and relationships with partners and stakeholders.
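As referenced in steps (g) and (i) above, the following is a minimal sketch, using purely hypothetical indicator values, of how progress from a baseline towards a performance target might be tracked. The indicator name and all figures are illustrative assumptions, not drawn from any real project.

# Minimal sketch (hypothetical figures): tracking an indicator from its
# baseline towards a performance target, as in steps (g) and (i) above.

def progress_towards_target(baseline: float, target: float, current: float) -> float:
    """Return the share of the planned improvement achieved so far."""
    planned_change = target - baseline
    if planned_change == 0:
        raise ValueError("The target must differ from the baseline.")
    return (current - baseline) / planned_change

# Illustrative indicator: immunization coverage (all values are invented).
baseline = 0.55   # 55% coverage measured before the project starts
target = 0.80     # 80% coverage to be reached by the end of the project
current = 0.68    # 68% coverage measured at mid-term monitoring

print(f"Progress towards target: {progress_towards_target(baseline, target, current):.0%}")
# Prints "Progress towards target: 52%" for these illustrative values.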
The following are brief descriptions of the most commonly used evaluation (and research)
designs.
One-Shot Design: In using this design, the evaluator gathers data following an intervention or
program. For example, a survey of participants might be administered after they complete a
workshop.
Retrospective Pre-test: As with the one-shot design, the evaluator collects data at one time but
asks for recall of behaviour or conditions prior to, as well as after, the intervention or program.
One-Group Pre-test-Post-test Design: The evaluator gathers data prior to and following the
intervention or program being evaluated.
Time Series Design: The evaluator gathers data prior to, during, and after the implementation of
an intervention or program.
Pre-test-Post-test Control-Group Design: The evaluator gathers data on two separate groups prior
to and following an intervention or program. One group, typically called the experimental or
treatment group, receives the intervention. The other group, called the control group, does not
receive the intervention.
Post-test-Only Control-Group Design: The evaluator collects data from two separate groups
following an intervention or program. One group, typically called the experimental or treatment
group, receives the intervention or program, while the other group, typically called the control
group, does not. Data are collected from both of these groups only after the intervention.
Case Study Design: When evaluations are conducted for the purpose of understanding the
program's context, participants' perspectives, the inner dynamics of situations, and questions
related to participants' experiences, and where generalization is not a goal, a case study design,
with an emphasis on the collection of qualitative data, might be most appropriate. Case studies
involve in-depth descriptive data collection and analysis of individuals, groups, systems,
processes, or organizations. In particular, the case study design is most useful when you want to
answer how and why questions and when there is a need to understand the particulars,
uniqueness, and diversity of the case.
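To illustrate how data from the comparative designs above might be summarised, the following is a minimal sketch using entirely hypothetical knowledge-test scores for a pre-test-post-test control-group design: the average change in the treatment group is compared with the average change in the control group, and the difference between the two changes serves as a simple estimate of the effect of the intervention. The group sizes and scores are assumptions for illustration only.

from statistics import mean

# Hypothetical knowledge scores before and after a training workshop.
treatment_pre = [52, 47, 60, 55, 49]    # treatment group, before
treatment_post = [71, 66, 78, 74, 70]   # treatment group, after
control_pre = [50, 53, 58, 48, 51]      # control group, before
control_post = [54, 55, 60, 50, 53]     # control group, after

treatment_change = mean(treatment_post) - mean(treatment_pre)
control_change = mean(control_post) - mean(control_pre)

# The difference between the two average changes is a simple estimate of
# the change attributable to the intervention rather than to outside factors.
print(f"Treatment group change: {treatment_change:.1f}")
print(f"Control group change:   {control_change:.1f}")
print(f"Estimated effect:       {treatment_change - control_change:.1f}")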
a) Evaluation Methods
Informal and less-structured methods
Conversation with concerned individuals
Community interviews
Field visits
Reviews of records
Key informant interviews
Participant observation
Focus group interviews
Community book: A community-maintained document of a project belonging to a community. It
can include written records, pictures, drawings, songs or whatever community members feel is
appropriate. Where communities have low literacy rates, a memory team is identified whose
responsibility it is to relate the written record to the rest of the community in keeping with their
oral traditions.
Community interviews/meetings: A form of public meeting open to all community members.
Interaction is between the participants and the interviewer, who presides over the meeting and
asks questions following a prepared interview guide.
Direct observation: A record of what observers see and hear at a specified site, using a detailed
observation form. Observation may be of physical surroundings, activities or processes, and is a
good technique for collecting data on behavioural patterns and physical conditions. An
observation guide is often used to look reliably for consistent criteria, behaviours or patterns.
Document review: A review of documents (secondary data) can provide cost-effective and timely
baseline information and a historical perspective of the project/programme. It includes written
documentation (e.g. project records and reports, administrative databases, training materials,
correspondence, legislation and policy documents) as well as videos, electronic data or photos.
Focus group discussion: A focused discussion with a small group (usually eight to 12 people) of
participants to record attitudes, perceptions and beliefs relevant to the issues being examined. A
moderator introduces the topic and uses a prepared interview guide to lead the discussion and
extract conversation, opinions and reactions.
Interviews: An open-ended (semi-structured) interview is a technique for questioning that allows
the interviewer to probe and pursue topics of interest in depth (rather than just "yes/no"
questions). A closed-ended (structured) interview systematically follows carefully organized
questions (prepared in advance in an interviewer's guide) that only allow a limited range of
answers, such as "yes/no" or a rating/number on a scale; replies to closed-ended questions can
easily be numerically coded for statistical analysis.
Key informant interview: An interview with a person having special information about a
particular topic. These interviews are generally conducted in an open-ended or semi-structured
fashion.
Laboratory testing: Precise measurement of a specific objective phenomenon, e.g. infant weight
or a water quality test.
Mini-survey: Data collected from interviews with 25 to 50 individuals, usually selected using
non-probability sampling techniques. Structured questionnaires with a limited number of
closed-ended questions are used to generate quantitative data that can be collected and analysed
quickly.
Most significant change (MSC): A participatory monitoring technique based on stories about
important or significant changes, rather than indicators. The stories give a rich picture of the
impact of development work and provide the basis for dialogue over key objectives and the value
of development programmes.
Participant observation: A technique first used by anthropologists (those who study humankind);
it requires the researcher to spend considerable time (days) with the group being studied and to
interact with them as a participant in their community. This method gathers insights that might
otherwise be overlooked, but is time-consuming.
Participatory rapid (or rural) appraisal (PRA): This uses community engagement techniques to
understand community views on a particular issue. It is usually done quickly and intensively,
over a two- to three-week period. Methods include interviews, focus groups and community
mapping; tools include stakeholder analysis, participatory rural ...
Case studies
Advantages: provide a rich picture of what is happening, as seen through the eyes of many
individuals; allow a thorough exploration of interactions between treatment and contextual
factors; can help explain changes or facilitating factors that might ...
Limitations: require a sophisticated and well-trained data collection and reporting team; can be
costly in terms of the demands on time and resources; individual cases may be over-interpreted
or overgeneralized.
Interviews
Advantages: usually yield the richest data, details and new insights; permit face-to-face contact
with respondents; provide an opportunity to explore topics in depth; allow the interviewer to
experience the affective as well as cognitive aspects of responses; allow the interviewer to
explain or help clarify questions, increasing the likelihood of useful responses; allow the
interviewer to be flexible in administering the interview to particular individuals or in particular
circumstances.
Limitations: expensive and time consuming; need well-qualified, highly trained interviewers;
interviewees may distort information through recall error, selective perception or a desire to
please the interviewer; flexibility can result in inconsistencies across interviews; the volume of
information is very large and may be difficult to transcribe and reduce.
b) PARTICIPATORY M&E
Participatory evaluation is a partnership approach to evaluation in which stakeholders actively
engage in developing the evaluation and in all phases of its implementation. Participatory
evaluations often use rapid appraisal techniques, such as the following.
Key Informant Interviews - Interviews with a small number of individuals who are most
knowledgeable about an issue.
Focus Groups - A small group (8-12) is asked to openly discuss ideas, issues and
experiences.
Mini-surveys - A small number of people (25-50) is asked a limited number of
questions.
Neighbourhood Mapping - Pictures show location and types of changes in an area to be
evaluated.
Flow Diagrams - A visual diagram shows proposed and completed changes in systems.
Photographs - Photos capture changes in communities that have occurred over time.
Oral Histories and Stories - Stories capture progress by focusing on one person’s or
organization’s account of change.
The term "data" (primary or secondary) refers to raw, unprocessed facts or figures before they
have been processed and analysed, while "information", or "strategic information", refers to
data that have been processed and analysed for reporting and use.
Data analysis is the process of converting collected (raw) data into usable information.
Quantitative data is often considered more objective and less biased than qualitative data but
recent debates have concluded that both quantitative and qualitative methods have subjective
(biased) and objective (unbiased) characteristics.
Therefore, a mixed-methods approach is often recommended that can utilize the advantages of
both, measuring what happened with quantitative data and examining how and why it happened
with qualitative data.
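As an illustration of the distinction between data and information, the following is a minimal sketch, using a handful of hypothetical survey records, of how raw data might be processed into usable information: the quantitative responses are summarised to show what happened, while the qualitative comments are collected for thematic review of how and why it happened. The field names and values are invented for the example.

# Minimal sketch (hypothetical records): converting raw data into information.
raw_responses = [
    {"attended_training": True, "satisfaction": 4, "comment": "Useful sessions"},
    {"attended_training": True, "satisfaction": 5, "comment": "Would like a follow-up"},
    {"attended_training": False, "satisfaction": 2, "comment": "Venue too far"},
    {"attended_training": True, "satisfaction": 3, "comment": "Too short"},
]

# Quantitative summary: what happened (data processed into information).
attendance_rate = sum(r["attended_training"] for r in raw_responses) / len(raw_responses)
average_satisfaction = sum(r["satisfaction"] for r in raw_responses) / len(raw_responses)

# Qualitative material: the basis for exploring how and why it happened.
comments = [r["comment"] for r in raw_responses]

print(f"Attendance rate: {attendance_rate:.0%}")
print(f"Average satisfaction (1-5): {average_satisfaction:.1f}")
print("Comments for thematic review:", comments)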
For example, formal reports developed by evaluators typically include six major sections:
(1) Background
(2) Evaluation study questions
(3) Evaluation procedures
(4) Data analyses
(5) Findings
(6) Conclusions (and recommendations)
Or, in more detail:
I. Summary sections
A. Abstract
B. Executive summary
II. Background
Or:
Table of contents
Executive summary
Introduction
Evaluation scope, focus and approach
Project facts
Findings, Lessons Learned
o Findings
o Lessons learned
Conclusions and recommendations
o Conclusions
o Recommendations
Annexes/appendices
2. Table of contents
Should always include lists of boxes, figures, tables and annexes with page references.
3. List of acronyms and abbreviations
4. Executive summary
A stand-alone section of two to three pages that should:
Briefly describe the intervention (the project(s), programme(s), policies or other
interventions) that was evaluated.
Explain the purpose and objectives of the evaluation, including the audience for the
evaluation and the intended uses.
Describe the key aspects of the evaluation approach and methods.
Summarize the principal findings, conclusions and recommendations.
5. Introduction
Should:
Explain why the evaluation was conducted (the purpose), why the intervention is being
evaluated at this point in time, and why it addressed the questions it did.
Identify the primary audience or users of the evaluation, what they wanted to learn from
the evaluation and why, and how they are expected to use the evaluation results.
Identify the intervention (the project(s), programme(s), policies or other interventions) that
was evaluated (see the upcoming section on the intervention).
Acquaint the reader with the structure and contents of the report and how the information
contained in the report will meet the purposes of the evaluation and satisfy the information
needs of the report’s intended users.
6. Description of the intervention/project/process/programme—Provide the basis for report users
to understand the logic and assess the merits of the evaluation methodology and understand the
applicability of the evaluation results. The description needs to provide sufficient detail for the
report user to derive meaning from the evaluation. The description should:
Describe what is being evaluated, who seeks to benefit, and the problem or issue it seeks
to address.
Explain the expected results map or results framework, implementation strategies, and the
key assumptions underlying the strategy.
Link the intervention to national priorities, Development partner priorities, corporate
strategic plan goals, or other project, programme, organizational, or country specific plans
and goals.
Identify the phase in the implementation of the intervention and any significant changes
(e.g., plans, strategies, logical frameworks) that have occurred over time, and explain the
implications of those changes for the evaluation.
Identify and describe the key partners involved in the implementation and their roles.
Describe the scale of the intervention, such as the number of components (e.g., phases of
a project) and the size of the target population for each component.
Indicate the total resources, including human resources and budgets.
Describe the context of the social, political, economic and institutional factors, and the
geographical landscape within which the intervention operates and explain the effects
(challenges and opportunities) those factors present for its implementation and outcomes.
Point out design weaknesses (e.g., intervention logic) or other implementation constraints
(e.g., resource limitations).
7. Evaluation scope and objectives - The report should provide a clear explanation of the evaluation’s
scope, primary objectives and main questions.
Evaluation scope—The report should define the parameters of the evaluation, for example,
the time period, the segments of the target population included, the geographic area
included, and which components, outputs or outcomes were and were not assessed.
Evaluation objectives—The report should spell out the types of decisions evaluation users
will make, the issues they will need to consider in making those decisions, and what the
evaluation will need to achieve to contribute to those decisions.
SESSION 12: BEST PRACTICES, EMERGING TRENDS & M&E CAPACITY BUILDING IN KENYA
(i) Monitoring Best Practices
Focus data on specific audiences and uses (only what is necessary and sufficient).
Be systematic, based upon predetermined indicators and assumptions.
Also look for unanticipated changes in the project/programme and its context, including any
changes in project/programme assumptions/risks; this information should be used to adjust
project/programme implementation plans.
Be timely, so information can be readily used to inform project/programme implementation.
Be participatory, involving key stakeholders; this can reduce costs and build understanding
and ownership.
Share monitoring information not only with project/programme management but, where
possible, with beneficiaries, donors and any other relevant stakeholders.
Participation: encourage participation “by all who wish to participate and/or who might
be affected by the review.”
Decision Making: “Projects will utilize a structured decision-making process.”
Value People: “Projects are not intended to result in a loss of employees but may result in
employees being re-deployed to other activities within the department.”
Measurement: for accountability; measures should be accurate, consistent, flexible,
comprehensive but not onerous
Integrated Program/Process Planning and Evaluation: incorporated into yearly business
plans
Ethical Conduct/Openness: consider ethical implications, respect and protect rights of
participants
Program/Process Focus: focus on improving program, activity or process
Clear and Accurate Reporting of Facts and Review Results
Timely Communication of Information and Review Results to Affected Parties
Multi-Disciplinary Team Approach: include a range of knowledge and experience; seek
assistance from outside of the team as required
Customer and Stakeholder Involvement: “External and internal customers and
stakeholders related to a project should be identified and consulted, if possible, throughout
the project.”
Rationale for M&E policy: the Constitution of Kenya provides the basis for M&E under
Articles 10, 56, 174, 185, 201, 203, 225, 226 and 227.
Challenges include: -
i. Weak M&E culture: it is hard to determine whether M&E influences decision-making,
and M&E budgets are not aligned to projects/programmes.
ii. Weak M&E reporting structures and multiple, uncoordinated M&E systems within and
among institutions, making it hard to obtain complete and harmonized results-based
information.
iii. Weak institutional, managerial and technical capacities, so evaluations are not
adequately conducted.
iv. Untimely and rarely analysed data, and low utilization of data/information.
v. Lack of M&E policy and legal framework
Capacity development to complement policy
o Technical and managerial capacity – Equip officers with M&E skills and do
backstopping on M&E for state and non-state actors
o Standardize M&E activities
o MED, in collaboration with local training institutions, shall develop curricula to
guide the delivery of certificate, diploma, graduate, master's and postgraduate
diploma courses
o MED to spearhead real time reporting through uploading, downloading and data
analysis on ICT database platforms
o Institutional capacity
Units charged with M&E
Necessary enabling infrastructure at national and devolved levels
Technical oversight committee
National steering committee
Ministerial M&E committees
County M&E committees
National and County Stakeholders fora
Funds designated for M&E activities
Non-state actors (NGOs, civil society and the private sector) to be supported by
MED in their M&E capacity development
Exercise 1: Identify 5 key indicators and complete an indicator matrix for a project/programme
you are familiar with (a starter sketch follows the matrix outline below).
Indicator matrix columns: Indicator | Indicator Definition | Methods/Sources | Person/s
Responsible | Frequency/Schedules | Data Analysis | Information Use
Rows (levels of project logic): Goal, Purpose, Outputs, Activities (Inputs)
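As a starting point for Exercise 1, the following is a minimal sketch (all column entries are hypothetical) of how one completed row of the indicator matrix could be recorded and exported for sharing; the column names mirror the matrix headings above.

import csv

# One hypothetical row of the indicator matrix (illustrative entries only).
indicator_row = {
    "Level": "Output",
    "Indicator": "Number of health workers trained",
    "Indicator Definition": "Count of staff completing the full training course",
    "Methods/Sources": "Training attendance registers",
    "Person/s Responsible": "M&E officer",
    "Frequency/Schedules": "Quarterly",
    "Data Analysis": "Totals compared against quarterly targets",
    "Information Use": "Quarterly progress report to management and the donor",
}

# Writing completed rows to a CSV file keeps the matrix easy to share and update.
with open("indicator_matrix.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=indicator_row.keys())
    writer.writeheader()
    writer.writerow(indicator_row)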
Exercise 3: Identify a suitable project and complete an Evaluation Grid using the five evaluation
criteria, which are Relevance, Effectiveness, Efficiency, Impact and Sustainability
Exercise 4: Identify a suitable project and complete an Evaluation Matrix using the five evaluation
criteria, which are Relevance, Effectiveness, Efficiency, Impact and Sustainability
Evaluation matrix columns: Relevant evaluation criteria | Key Questions | Specific Sub-Questions |
Data Sources | Data Collection Methods/Tools | Indicators/Success Standard | Methods for Data
Analysis
For each evaluation model/approach below, consider: What are some examples or situations in
which you would use this approach? What conditions need to exist to use this approach? What
are some limitations of this approach?
Goal-free evaluation
Kirkpatrick four-level approach
For a training evaluation, also consider: What aspects of the training will you evaluate? What
are some of the variables you will focus on? What are some of the limitations of the evaluation
and its findings?
END