Monitoring and Evaluation Planning
Guidelines and Tools
by Scott G. Chaplowe
American Red Cross
Since 1943, Catholic Relief Services (CRS) has held the privilege of serving the poor
and disadvantaged overseas. Without regard to race, creed, or nationality, CRS
provides emergency relief in the wake of natural and manmade disasters. Through
development projects in fields such as education, peace and justice, agriculture,
microfinance, health and HIV/AIDS, CRS works to uphold human dignity and
promote better standards of living. CRS also works throughout the United States
to expand the knowledge and action of Catholics and others interested in issues of
international peace and justice. Our programs and resources respond to the U.S.
Bishops’ call to live in solidarity—as one human family—across borders, over oceans,
and through differences in language, culture and economic condition.
The American Red Cross helps vulnerable people around the world prevent, prepare
for, and respond to disasters, complex humanitarian emergencies, and life-threatening
health conditions through global initiatives and community-based programs. With a
focus on global health, disaster preparedness and response, restoring family links, and
the dissemination of international humanitarian law, the American Red Cross provides
rapid, effective, and large-scale humanitarian assistance to those in need. To achieve
our goals, the American Red Cross works with our partners in the International Red
Cross and Red Crescent Movement and other international relief and development
agencies to build local capacities, mobilize and empower communities, and establish
partnerships. Our largest program is currently the Tsunami Recovery Program,
which is improving community health and preventing disease outbreaks, supporting
communities as they rebuild their lives and reestablish their livelihoods, and helping
affected Red Cross and Red Crescent Societies and their communities develop disaster
preparedness capabilities.
This module was produced by CRS and the American Red Cross with financial support
from the United States Agency for International Development (USAID) Food for Peace
(FFP) grants: CRS Institutional Capacity Building Grant (AFP-A-00-03-00015-00) and
American Red Cross Institutional Capacity Building Grant (AFP-A-00-00007-00).
The views expressed in this document are those of the author and do not necessarily
represent those of USAID or FFP.
Preface
Our intention in writing the Monitoring and Evaluation Planning module was to
provide concise guidance to help readers develop a comprehensive M&E system
for international humanitarian relief and development programs. Please send
any comments or suggestions for this module to m&[email protected].
Acknowledgments
The author would like to acknowledge the considerable contribution and
guidance of Cynthia Green, formerly with the American Red Cross, in her
patient review and editing of this module, as well as the work of Dina Towbin
(consultant), in shepherding the module through the final stages, and Joe
Schultz (CRS) and Jeanne Ivy (consultant), who were responsible for the
graphic design work.
This module focuses on the key components of an M&E system that inform
M&E planning for projects. These components trace a logical train of thought
from hypotheses on how the project will bring about change in a specific
sector, to the specific objectives needed for these changes, methods for
measuring the project’s achievement of its stated objectives, and protocols
for collecting and analyzing data and information used in the measurement.
The four key components of an M&E system are:
1. Causal analysis framework
2. Logframe (logical framework)
3. Indicator matrix
4. Data collection and analysis plan
Following an overview of the M&E system, this module examines these four
key M&E components. It is important to stress that the various components
of an M&E system are interdependent and that M&E planning requires other
elements, whether stated explicitly or implicitly. Other key considerations
for M&E planning are presented in the final section of the module and
highlighted in relevant boxes throughout.
[Figure: Key M&E activities across the project cycle — Project Design (needs assessment, logframe and indicators), Project Planning (M&E plan development with indicator table, baseline survey), Project Implementation with ongoing project monitoring (quarterly project reports) and a midterm or annual reflection/evaluation, Project End (final evaluation with endline survey), and Project Follow-Up (dissemination and use of project lessons).]
Each project may have different M&E needs, depending on the operating
context, implementing agency capacity, donor requirements, and other
factors. In preparing an M&E plan, it is important to identify these needs
and coordinate the methods, procedures, and tools used to meet them; this
conserves resources and streamlines M&E planning.
There is not a single, recognized industry standard for assessing the quality
of an M&E system. However, some key criteria are summarized below (IFAD
2002, pp. 4-20):
▪▪ Utility: The proposed M&E system will serve the practical information
needs of intended users.
▪▪ Feasibility: The methods, sequences, timing and processing
procedures proposed are realistic, prudent and cost-effective.
▪▪ Propriety: The M&E activities will be conducted legally, ethically and
with due regard for the welfare of those affected by their results.
▪▪ Accuracy: The M&E outputs will reveal and convey technically
adequate information.
In M&E planning, the causal analysis underlying the project design identifies:
1. The major problem and condition(s) that the project seeks to change
2. Factors that cause the condition(s)
3. Ways to influence the causal factors, based on hypotheses of the
relationships between the causes and likely solutions
4. Interventions to influence the causal factors
5. The expected changes or desired outcomes (see Table 1).
Typical monitoring and evaluation questions at each level of the logframe include:
▪▪ Goal: To what extent has the project contributed towards its longer
term goals? Why or why not? What unanticipated positive or negative
consequences did the project have? Why did they arise?
▪▪ Outcomes: What changes have occurred as a result of the outputs
and to what extent are these likely to contribute towards the project
purpose and desired impact? Has the project achieved the changes for
which it can realistically be held accountable?
▪▪ Outputs: What direct tangible products or services has the project
delivered as a result of activities?
▪▪ Activities: Have planned activities been completed on time and within
the budget? What unplanned activities have been completed?
▪▪ Inputs: Are the resources being used efficiently?
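To illustrate how these levels nest, the short sketch below represents a simplified logframe as a nested data structure. It is only an illustrative assumption (the wording of the example objectives is invented), not a format prescribed by this module.

```python
# Illustrative sketch: a simplified logframe hierarchy with one invented example
# per level (goal, outcome, output, activities, inputs).
logframe = {
    "goal": "Reduced loss of life from natural disasters in target communities",
    "outcomes": [{
        "statement": "Target schools respond effectively to disaster warnings",
        "outputs": [{
            "statement": "School disaster drills conducted each quarter",
            "activities": ["Train school disaster committees",
                           "Schedule and observe quarterly drills"],
            "inputs": ["Field officer time", "Training materials", "Transport"],
        }],
    }],
}
```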
Decisions regarding indicators are linked to the overall research plan. The
type of data and information to be collected will depend on the research
question being addressed, the desired level of precision in measuring
project effects, and the project’s size and complexity. These issues need to be
considered when the logframe is being developed, since they are related to the
selection of interventions and project outputs, the proposed M&E budget, and
staffing levels.
It is important to note that there are other types of frameworks used to show
the relationships between project objectives and the indicators that will
demonstrate achievement or progress toward these objectives. This module
focuses on the logframe because it is widely used for development projects,
but it does have its limitations (see Box 3). Another framework used by the
U.S. Agency for International Development (USAID) and other donors is the
results framework, sometimes called a strategic framework. Using diagrams
to illustrate the steps or levels of results, the results framework emphasizes
the causal relationships that link each level of results to the intended impact.
Annex III provides a sample format for an indicator matrix, with illustrative
rows for outcome and output indicators. The following are the major
components (column headings) of the indicator matrix:
5. Frequency/Schedules: This column states how often the data for each
indicator will be collected, such as monthly, quarterly, or annually. It is
often useful to list the data collection timing or schedule, such as start-
up and end dates for collection or deadlines for tool development.
When planning for data collection timing, it is important to consider
factors such as seasonal variations, school schedules, holidays, and
religious observances (e.g., Ramadan).
6. Person(s) Responsible: This column lists the people responsible and
accountable for the data collection and analysis, e.g., community
volunteers, field staff, project managers, local partner(s), and external
consultants. In addition to specific people’s names, use the position
title to ensure clarity in case of personnel changes. This column is
useful in assessing and planning for capacity building for the M&E
system.
7. Data Analysis: This column describes the process for compiling
and analyzing the data to gauge whether the indicator has been
met or not. For example, survey data usually require statistical
analysis, while qualitative data may be reviewed by research staff or
community members.
8. Information Use: This column identifies the intended audience and
use of the information. For example, the findings could be used for
monitoring project implementation, evaluating the interventions,
planning future project work, or reporting to policy makers or donors.
This column should also state ways that the findings will be formatted
(e.g., tables, graphs, maps, histograms, and narrative reports) and
disseminated (e.g., Internet Web sites, briefings, community meetings,
listservs, and mass media).
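As a concrete illustration of these column headings, the sketch below represents a single indicator row as a simple record in Python. The field values are invented examples, not entries from Annex III.

```python
# Illustrative sketch: one indicator matrix row as a record.
# All values are hypothetical examples; adapt them to the project's own indicators.
indicator_row = {
    "indicator": "Percent of target schools conducting at least one disaster drill per quarter",
    "indicator_definition": {
        "numerator": "schools with a successful drill in the quarter",
        "denominator": "total targeted schools",
    },
    "methods_sources": ["drill observation checklist", "focus group discussions"],
    "frequency_schedule": "quarterly",                            # column 5
    "persons_responsible": "School Field Officer",                # column 6
    "data_analysis": "reviewed at quarterly reflection meeting",  # column 7
    "information_use": ["quarterly project report", "donor reporting"],  # column 8
}

# An indicator matrix is then simply a list of such rows, one per indicator.
indicator_matrix = [indicator_row]
```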
The data collection and analysis plan typically spells out the sampling
frame and methodology; timing and mode of data collection; research staff
responsibilities; enumerator selection, training, and supervision; fieldwork
timing and logistics; checks for data quality; data entry and storage;
hypothesized relationships among the variables; and data analysis methods.
Special analyses, such as disaggregating data by gender, age, location and
socio-economic status, should also be described.
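The following sketch, using the pandas library, shows one assumed way to produce such a disaggregated analysis. The file name and column names (gender, location, indicator_met) are hypothetical and would need to match the actual survey dataset.

```python
# Illustrative sketch: disaggregating a survey indicator by gender and location.
# File and column names are hypothetical placeholders.
import pandas as pd

survey = pd.read_csv("survey_data.csv")  # one row per respondent

# Percent of respondents meeting the indicator, by gender and location.
disaggregated = (
    survey.groupby(["gender", "location"])["indicator_met"]
          .mean()          # proportion meeting the indicator
          .mul(100)        # convert to percent
          .round(1)
)
print(disaggregated)
```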
It is important to provide the rationale for the data collection and analysis
methods. This includes the triangulation of methods (quantitative and/
or qualitative) and sources to reduce bias and ensure data reliability and
completeness. It should also be informed by the standards that guide
good practice of project evaluation. There are many useful resources in
the evaluation community that identify key principles to ensure ethical,
accountable, and quality evaluations (for example, American Evaluation
Association [AEA] 2004, Australian Evaluation Society [AES] 2002, and
Development Assistance Committee [DAC] 2008).
The plan should also discuss the purpose of data collection and analysis in
terms of specific monitoring and evaluation functions. Some key functions
of monitoring include compliance, process, results, context, beneficiary, and
organizational monitoring. Typically, a project will use a combination of these
monitoring functions and design data collection and analysis accordingly. For
project assessments, the discussion should identify not only the methods used,
but the timing of the assessment event (i.e., baseline studies, annual reviews,
midterm and final evaluations), and the rationale for selecting evaluators with
specific skill sets and independence (i.e., internal versus external evaluators).
See Annex IV for a more extensive list of data sources. Also, Annex I lists M&E
guides that describe the process of data collection and analysis.
It is also important to carefully plan for the data management of the M&E
system. This includes the set of procedures, people, skills, and equipment
necessary to systematically store and manage M&E data. If this step is
not carefully planned, data can be lost or incorrectly recorded, which
compromises not only data quality and reliability, but also subsequent data
analysis and use. Poorly managed data waste time and resources.
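As one small, assumed example of such a procedure, the sketch below checks a monitoring record for missing fields and implausible values before it is stored; the field names and accepted range are invented for illustration.

```python
# Illustrative sketch: basic data-quality checks before storing a monitoring record.
# Field names and the accepted range are hypothetical examples.
REQUIRED_FIELDS = ["school_id", "date", "drill_conducted"]

def validate_record(record: dict) -> list:
    """Return a list of data-quality problems; an empty list means the record can be stored."""
    problems = []
    for field in REQUIRED_FIELDS:
        if record.get(field) in ("", None):
            problems.append(f"missing required field: {field}")
    minutes = record.get("response_time_minutes")
    if minutes is not None and not 0 <= minutes <= 120:
        problems.append("response_time_minutes outside plausible range (0-120)")
    return problems

record = {"school_id": "S-014", "date": "2006-04-15",
          "drill_conducted": True, "response_time_minutes": 18}
print(validate_record(record) or "record OK")
```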
Reporting is closely related to M&E work, since data are needed to support
the major findings and conclusions presented in a project report. Often,
the focus and frequency of M&E processes are determined by reporting
requirements and schedules.
▪▪ Identify the various tasks and related skills that are needed, such
as ensuring adequate data collection systems in the field, research
design, and data entry and analysis
▪▪ Assess the relevant skills of the project team, partner organizations,
and the community beneficiaries
▪▪ Specify to what extent local stakeholders will (or will not) participate
in the M&E process (see Table 3)
▪▪ Assign specific roles and responsibilities to team members and
designate an overall M&E manager
▪▪ Recruit consultants, students, and others to fill in the skill gaps and
special needs such as translation, statistical analysis, and cultural
knowledge
▪▪ Identify the topics for which formal training is needed and hold
training sessions
▪▪ Encourage staff to provide informal training through on-the-job
guidance and feedback, such as commenting on a report or showing
how to use computer software programs
▪▪ Give special attention to building local capacity in M&E.
Cultivating nascent M&E skills takes time and patience, but in the end the
contributions of various collaborators will enrich M&E work and lead to
greater acceptance of M&E’s role in project implementation.
▪▪ List all M&E tasks and overall responsibilities, analyze the necessary
items associated with each task, and determine their cost
▪▪ Budget for staffing, including full-time staff, external consultants,
capacity building/training, and other human resource expenses
▪▪ Ensure that the budget includes all capital expenses, including facility
costs, office equipment and supplies, travel and lodging, computer
hardware and software, and other expenses
▪▪ Determine whether all tasks are included in the overall project
budget, such as support for an information management system, field
transportation and vehicle maintenance, translation, and printing and
publishing of M&E documents/tools
▪▪ Review the donor’s requirements to determine whether there are any
extra items that need to be budgeted, or conversely, items such as an
external evaluation that will be funded directly by the donor
A narrative justifying each line item can help guard against arbitrary budget
cuts. It may be necessary to clarify or justify expenses, such as wage rates
not normally paid to comparable positions, fees for consultants and external
experts, or the various steps in a survey that add up in cost (development
and testing the questionnaire, translation and back-translation, enumerator
training, enumerators’ and field supervisors’ daily rates, travel/lodging costs
for administering the survey, data analysis and write-up, and so on).
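As a purely illustrative sketch of how the steps of a survey add up in cost, the snippet below totals hypothetical line items; the figures are invented placeholders, not actual rates.

```python
# Illustrative sketch: summing invented line items for one survey round (USD).
survey_budget = {
    "questionnaire development and testing": 1500,
    "translation and back-translation": 800,
    "enumerator training": 1200,
    "enumerator and supervisor daily rates": 4500,
    "travel and lodging for fieldwork": 2000,
    "data entry, analysis, and write-up": 1800,
}

for item, cost in survey_budget.items():
    print(f"{item:<45} {cost:>8,}")
print(f"{'TOTAL':<45} {sum(survey_budget.values()):>8,}")
```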
Annex I References and Resources

References
AEA (American Evaluation Association). 2004. American Evaluation Association Guiding Principles for
Evaluators. www.eval.org/Publications/GuidingPrinciplesPrintable.asp.
AES (Australian Evaluation Society). 2002. Australian Evaluation Society Guidelines for the Ethical Conduct of
Evaluations. http://www.aes.asn.au/.
AusAID. 2006. “M&E Framework Good Practice Guide” (Exposure Draft: March 17, 2006). Australian
Government AusAID. www.mande.co.uk/docs/MEF%20QAG_Guide%20(ver7)%201703016.pdf.
Bamberger, Michael, Jim Rugh, and Linda Mabry. 2006. RealWorld Evaluation: Working Under Budget, Time, Data,
and Political Constraints. Thousand Oaks, London, New Delhi: SAGE Publications.
Barton, Tom. 1997. “Guidelines to Monitoring and Evaluation: How Are We Doing?” CARE International,
Kampala, Uganda.
http://pqdl.care.org/pv_obj_cache/pv_obj_id_1DCDB23F514606B280C36E5E42B6EF31F9D70700.
DAC (Development Assistance Committee). 2008. “DAC Evaluation Quality Standards (draft).” DAC Evaluation
Network, Organization for Economic Co-operation and Development (OECD), Paris, France.
http://www.oecd.org/dataoecd/30/62/36596604.pdf.
Frankel, Nina and Anastasia Gage. 2007. “M&E Fundamentals: A Self-Guided Minicourse.” United States Agency
for International Development (USAID), Washington, DC.
http://www.cpc.unc.edu/measure/publications/pdf/ms-07-20.pdf.
IFAD (International Fund for Agricultural Development). 2002. “A Guide for Project M&E.” IFAD, Rome.
http://www.ifad.org/evaluation/guide/toc.htm.
IFRC (International Federation of Red Cross and Red Crescent Societies). 2007. “Monitoring and Evaluation in a
Nutshell.” IFRC, Geneva. http://participation.110mb.com/PCD/M%20and%20E%20guide%20final.pdf.
Rugh, Jim. 2008. “The Rosetta Stone of Logical Frameworks.” Compiled by Jim Rugh for CARE International and
InterAction’s Evaluation Interest Group. http://www.mande.co.uk/docs/Rosettastone.doc.
Stetson, Valerie, Guy Sharrock, and Susan Hahn. 2004. “ProPak: The CRS Project Package.” Catholic Relief
Services, Baltimore. http://crs.org/publications/list.cfm?sector=19.
1 Internet links have been provided where possible to download references and resources. If a link is no longer active, try a keyword
search on the publication title with an Internet search engine, e.g., Google.
Theory of Change. 2008. A joint venture between ActKnowledge and the Aspen Institute Roundtable on
Community Change. http://www.theoryofchange.org/html/overview.html.
USAID (United States Agency for International Development). 1996. “Preparing a Performance Monitoring Plan.
Performance Monitoring and Evaluation TIPS: Number 7.” USAID, Washington, DC.
http://pdf.usaid.gov/pdf_docs/pnaby215.pdf
Resources
American Red Cross. 2006. “Integrated Planning Process: Project Design & Proposal Writing Guide.” American
Red Cross, Washington, DC. http://www.redcross.org/static/file_cont5976_lang0_2296.pdf.
Danida. 1996. “Logical Framework Approach: A Flexible Tool for Participatory Development.”
http://amg.um.dk/en/menu/TechnicalGuidelines/LogicalFrameworkApproach.
Department for International Development (DFID). 2006. “A Guide for DFID-contracted Research Programmes.”
http://www.dfid.gov.uk/research/me-guide-contracted-research.pdf.
MandE News. http://mande.co.uk/. A comprehensive M&E resource site, with links to other M&E resources. For
example, multiple resources on logical frameworks and their pros and cons can be assessed at: http://portals.
wdi.wur.nl/ppme/index.php?Logical_Framework_Approach and http://www.mande.co.uk/logframe.htm.
MEASURE (Monitoring and Evaluation to Assess and Use Results). 2008. http://www.cpc.unc.edu/
measure. Funded by USAID, the MEASURE framework offers M&E publications, tools, trainings, and other
resources.
Participatory Planning Monitoring & Evaluation (PPM&E) Resource Portal. 2008. Contains multiple resources for
M&E planning. http://portals.wdi.wur.nl/ppme/index.php?Home.
Performance Assessment Resource Centre (PARC). 2008. http://www.parcinfo.org/. Contains useful M&E
resources.
Resources for Methods in Evaluation and Social Research. 2008. http://gsociology.icaap.org/methods/. Lists useful
resources for program evaluation and social research methods.
UNAIDS. 2002. Monitoring and Evaluation Operations Manual. Joint United Nations Programme on HIV/AIDS.
http://www1.worldbank.org/hiv_aids/docs/M&EManual.pdf.
UNICEF Evaluation. http://www.unicef.org/evaluation/. Provides useful M&E resources, as well as links to other
M&E sites.
United Nations Development Programme (UNDP). 2008. http://www.undp.org/eo/. Useful M&E resources as
well as links to other M&E sites.
United Nations Evaluation Group (UNEG). 2008. http://www.uneval.org/. M&E standards, resources, and events.
United Nations Population Fund (UNFPA). 2000. “The Programme Manager’s Planning, Monitoring and
Evaluation Toolkit: Planning and Managing an Evaluation.” UNFPA Office of Oversight and Evaluation.
http://www.unfpa.org/monitoring/toolkit.htm.
USAID (United States Agency for International Development). 2002. “Developing a Participatory Monitoring and
Evaluation Plan.” Produced for USAID by the Synergy Project. www.synergyaids.com/.
———. 2007. “Monitoring and Evaluation Systems Strengthening Tool.” USAID, Washington, DC.
http://www.pepfar.gov/documents/organization/79624.pdf.
USAID Center for Development Information and Evaluation. 1996. Evaluation Publications at http://www.usaid.
gov/pubs/usaid_eval. M&E resources from 1996, but still very relevant and useful, most notably the concise
series of TIPS documents. For example, “Preparing a Performance Monitoring Plan. Performance Monitoring
and Evaluation TIPS: Number 7.”
USAID/DCHA/FFP. 2005. P.L. 480 Title II Program Policies and Proposal Interim Guidelines. March 14, 2005,
Annex A, p. 9.
W. K. Kellogg Foundation. 2001. “Logic Model Development Guide.” W.K. Kellogg Foundation, Battle Creek,
Michigan. http://www.wkkf.org/Pubs/Tools/Evaluation/Pub3669.pdf.
World Bank, Independent Evaluation Group (IEG). 2008. International Program for Development Evaluation
Training. Course modules provide an overview of key M&E concepts and practices.
http://www.worldbank.org/oed/ipdet/modules.html.
Hagens, Clara, with assistance from Guy Sharrock. 2008. “Hiring M&E Staff.”
McMillan, Della E., and Alice Willard. 2008. “Preparing for an Evaluation.”
McMillan, Della E., Guy Sharrock, and Alice Willard. 2008. “Guidelines and Tools for the Preparation and Use of
Indicator Performance Tracking Tables (IPTT).”
2 Note: The indicators in Annex III are illustrative and are not necessarily from the same project or objective.
Indicator — Example Outcome 2a: Percent of target schools that successfully conduct a minimum of one disaster drill per quarter

Indicator Definition:
1. "Schools" refers to K-12 in Matara District
2. Criteria of "success": drill unannounced through early warning system; response time under 20 minutes; school members report to designated area per the School Crisis Response Plan
3. Numerator: number of schools with successful scenario per quarter. Denominator: total number of targeted schools

Data Collection Methods/Sources:
1. Pre-arranged site visits during disaster drill
2. Complete disaster drill checklist and enter into quarterly project report (QPR)
3. School focus group discussions (teachers, students, administration)

Frequency/Schedules:
1. Checklist data collected quarterly
2. FGDs with teachers, students, and administration every six months
3. Begin data collection on 4/15/06
4. Scenario checklist completed by 3/8/06

Person(s) Responsible: School Field Officer (SFO): Shantha Mande

Data Analysis:
1. Post-drill meeting with School Disaster Committee, facilitated by SFO
2. Project management team during quarterly reflection meeting

Information Use:
1. Project implementation with School Disaster Committees
2. Monitoring process of school outreach training for project management with Sri Lankan Red Cross Society
3. Tsunami Recovery Program management
4. Impact evaluation to justify intervention to Ministry of Education, Ministry of Disaster Relief, donors
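To make the numerator/denominator arithmetic in this example explicit, here is a minimal sketch; the counts are invented for illustration.

```python
# Illustrative sketch: computing Outcome 2a for one quarter. Counts are invented.
schools_with_successful_drill = 18   # numerator
total_targeted_schools = 24          # denominator

percent_successful = 100 * schools_with_successful_drill / total_targeted_schools
print(f"Outcome 2a: {percent_successful:.1f}% of target schools conducted a successful drill")
```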
Checklist: A list of items used for validating or inspecting that procedures/steps have been followed, or the
presence of examined behaviors.
Closed-ended (structured) interview: A technique for interviewing that uses carefully organized questions that
only allow a limited range of answers, such as “yes/no,” or expressed by a rating/number on a scale. Replies can
easily be numerically coded for statistical analysis.
Community interviews/meeting: A form of public meeting open to all community members. Interaction is
between the participants and the interviewer, who presides over the meeting and asks questions following a
prepared interview guide.
Direct observation: A record of what observers see and hear at a specified site, using a detailed observation
form. Observation may be of physical surroundings, activities, or processes. Observation is a good technique for
collecting data on behavior patterns and physical conditions.
Focus group discussion: Focused discussion with a small group (usually 8 to 12 people) of participants to record
attitudes, perceptions, and beliefs pertinent to the issues being examined. A moderator introduces the topic and
uses a prepared interview guide to lead the discussion and elicit discussion, opinions, and reactions.
Key informant interview: An interview with a person having special information about a particular topic. These
interviews are generally conducted in an open-ended or semi-structured fashion.
Laboratory testing: Precise measurement of a specific, objective phenomenon, for example, infant weight or a water
quality test.
Mini-survey: Data collected from interviews with 25 to 50 individuals, usually selected using non-probability
sampling techniques. Structured questionnaires with a limited number of closed-ended questions are used to
generate quantitative data that can be collected and analyzed quickly.
Most significant change (MSC): A participatory monitoring technique based on stories about important or
significant changes, rather than indicators. They give a rich picture of the impact of development work and
provide the basis for dialogue over key objectives and the value of development programs.
Open-ended (semi-structured) interview: A technique for questioning that allows the interviewer to probe and
follow up topics of interest in depth (rather than just “yes/no” questions).
3 Note: This list is not exhaustive, as tools and techniques are emerging and evolving in the M&E field.
Participant observation: A technique first used by anthropologists; it requires the researcher to spend considerable
time with the group being studied (days) and to interact with them as a participant in their community. This
method gathers insights that might otherwise be overlooked, but is time-consuming.
Participatory rapid (or rural) appraisal (PRA): This uses community engagement techniques to understand
community views on a particular issue. It is usually done quickly and intensively – over a 2 to 3-week period.
Methods include interviews, focus groups, and community mapping.
Questionnaire: A data collection instrument containing a set of questions organized in a systematic way, as well as
a set of instructions to the enumerator/interviewer about how to ask the questions (typically used in a survey).
Rapid appraisal (or assessment): A quick cost-effective technique to gather data systematically for decision-
making, using qualitative and quantitative methods, such as site visits, observations, and sample surveys.
This technique shares many of the characteristics of participatory appraisal (such as triangulation and multi-
disciplinary teams) and recognizes that indigenous knowledge is a critical consideration for decision-making.
Self-administered survey: Written surveys completed by the respondent, either in a group setting or in a separate
location. Respondents must be literate (for example, it can be used to survey teacher opinions).
Statistical data review: A review of population censuses, research studies, and other sources of statistical data.
Survey: Systematic collection of information from a defined population, usually by means of interviews or
questionnaires administered to a sample of units in the population (e.g., persons, beneficiaries, and adults).
Visual techniques: Participants develop maps, diagrams, calendars, timelines, and other visual displays to
examine the study topics. Participants can be prompted to construct visual responses to questions posed by the
interviewers, for example, by constructing a map of their local area. This technique is especially effective where
verbal methods can be problematic due to low-literacy or mixed-language target populations, or in situations
where the desired information is not easily expressed in either words or numbers.
Written document review: A review of documents (secondary data) such as project records and reports,
administrative databases, training materials, correspondence, legislation, and policy documents.