
10 Steps to Design a Monitoring and Evaluation System

Before launching into the steps, please note that the development of an M&E system is a participatory exercise. Staff at different levels of the organization who will be expected to maintain or use the new M&E system should always be consulted. This might include staff at head offices or secretariats, staff in regional or country offices, and staff at programme or project level.
Step 1: Define the scope and purpose
This step involves identifying the evaluation audience and the purpose of
the M&E system. M&E purposes include supporting management and
decision-making, learning, accountability and stakeholder engagement.
Will the M&E be done mostly for learning purposes with less emphasis on
accountability? If this is the case, then the M&E system would be designed
in such a way as to promote ongoing reflection for continuous programme
improvement.
If the emphasis is more on accountability, then the M&E system could collect and analyse data with more rigor, timed to coincide with the reporting calendar of a donor.
It is important that the M&E scope and purpose be defined beforehand, so
that the appropriate M&E system is designed. It is of no use to have an M&E
system that collects mostly qualitative data on an annual basis while your
‘evaluation audience’ (read: 'donor') is keen to see the quantitative
results of Randomised Controlled Trials (RCTs) twice a year.
Step 2: Define the evaluation questions
Evaluation questions should be developed up-front and in collaboration with
the primary audience(s) and other stakeholders who you intend to report to.
Evaluation questions go beyond measurements to ask higher-order questions, such as whether the intervention is worth it or whether it could have been achieved in another way (see examples below).
Broad types of evaluation questions by focus area

Process: How well was the project designed and implemented (i.e. its quality)?

Outcome: To what extent did the project meet the overall needs? Was there any significant change, and to what extent was it attributable to the project? How valuable are the outcomes to the organization, other stakeholders, and participants?

Learnings: What worked and what did not? What were the unintended consequences? What were the emergent properties?

Investment: Was the project cost effective? Was there another alternative that may have represented a better investment?

What next: Can the project be scaled up? Can the project be replicated elsewhere? Is the change self-sustaining or does it require continued interventions?

Theory of change: Does the project have a theory of change? Is the theory of change reflected in the project logic? How can the program logic inform the research questions?
Step 3: Identify the monitoring questions
Monitoring questions break each evaluation question down into more specific questions that can be answered with data collected during implementation. For example, for an evaluation question pertaining to 'Learnings', such as "What worked and what did not?", you may have several monitoring questions such as "Did the workshops lead to increased knowledge on energy efficiency in the home?" or "Did the participants have any issues with the training materials?".
The monitoring questions will ideally be answered through the collection of
quantitative and qualitative data. It is important to not start collecting data
without thinking about the evaluation and monitoring questions. This may
lead to collecting data just for the sake of collecting data (that provides no
relevant information to the programme).
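To make the relationship concrete, the sketch below (in Python, purely illustrative; it reuses the hypothetical questions from the example above) shows one way of recording which monitoring questions feed into which evaluation question:

```python
# Illustrative sketch: mapping an evaluation question to the monitoring
# questions that will feed into it. The questions are hypothetical examples.
monitoring_plan = {
    "What worked and what did not?": [
        "Did the workshops lead to increased knowledge on energy efficiency in the home?",
        "Did the participants have any issues with the training materials?",
    ],
}

for evaluation_q, monitoring_qs in monitoring_plan.items():
    print(f"Evaluation question: {evaluation_q}")
    for q in monitoring_qs:
        print(f"  Monitoring question: {q}")
```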
Step 4: Identify the indicators and data sources
In this step you identify what information is needed to answer your
monitoring questions and where this information will come from (data
sources). It is important to consider data collection in terms of the type of data and the research design. Data sources could be primary, such as the participants themselves, or secondary, such as existing literature. You can then decide on the most appropriate method to
collect the data from each data source.
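One way of capturing the results of this step is a simple indicator matrix with one row per monitoring question. The sketch below is illustrative only; the field names and example entries are assumptions, not a prescribed format:

```python
# Illustrative indicator matrix: one row per monitoring question.
# Field names and example entries are hypothetical.
indicator_matrix = [
    {
        "monitoring_question": "Did the workshops lead to increased knowledge on energy efficiency?",
        "indicator": "% of participants with improved post-test scores",
        "data_source": "Pre/post workshop tests (primary)",
        "collection_method": "Written test administered at each workshop",
    },
    {
        "monitoring_question": "Did the participants have any issues with the training materials?",
        "indicator": "Number and type of issues reported",
        "data_source": "Participant feedback forms (primary)",
        "collection_method": "Feedback form collected after each session",
    },
]

for row in indicator_matrix:
    print(row["monitoring_question"], "->", row["indicator"])
```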
Step 5: Identify who is responsible for data collection, data storage,
reporting, budget and timelines
It is advisable to assign responsibility for the data collection and
reporting so that everyone is clear about their roles and responsibilities.
Collection of monitoring data may occur regularly over short intervals, or
less regularly, such as half-yearly or annually. Likewise, the timing of
evaluations (internal and external) should be noted.
You may also want to note any requirements that are needed to collect the
data (staff, budget etc.). It is advisable to have some idea of the cost
associated with monitoring, as you may have great ideas to collect a lot of
information, only to find out that you cannot afford it all.
Additionally, it is good to determine how the collected data will be stored. A
centralised electronic M&E database should be available for all project staff
to use. The M&E database options range from a simple Excel file to the use
of a comprehensive M&E software such as LogAlto.
LogAlto is a user-friendly cloud-based M&E software that stores all
information related to the programme such as the entire log frame (showing
the inputs, activities, outputs, outcomes) as well as the quantitative and
qualitative indicators with baseline, target and milestone
values. LogAlto also allows for the generation of tables, scorecards, charts
and maps. Quarterly Progress reports can also be produced from LogAlto.
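The internal structure of such a database depends on the organisation and the tool chosen. As a minimal sketch only, assuming a small project that does not use a dedicated M&E tool, a centralised store of monitoring records could look something like this in Python with SQLite (the table and column names are hypothetical and do not represent LogAlto's schema):

```python
import sqlite3

# Minimal sketch of a centralised M&E data store (illustrative only;
# table and column names are hypothetical, not a LogAlto schema).
conn = sqlite3.connect("me_database.db")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS monitoring_records (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        indicator TEXT NOT NULL,
        value REAL,
        collection_date TEXT,
        collected_by TEXT,
        data_source TEXT
    )
    """
)
conn.execute(
    "INSERT INTO monitoring_records (indicator, value, collection_date, collected_by, data_source) "
    "VALUES (?, ?, ?, ?, ?)",
    ("% participants with improved post-test scores", 72.0, "2024-06-30", "Field officer", "Workshop tests"),
)
conn.commit()

# Any project staff member (or reporting script) can then query the shared store.
for row in conn.execute("SELECT indicator, value, collection_date FROM monitoring_records"):
    print(row)
conn.close()
```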
Step 6: Identify who will evaluate the data and how it will be
reported
In most programmes there will be an internal and an independent
evaluation (conducted by an external consultant).
For an evaluation to be used (and therefore useful) it is important to
present the findings in a format that is appropriate to the audience. A
'Marketing and Dissemination Strategy’ for the reporting of evaluation
results should be designed as part of the M&E system. See my article, ‘4
Reasons Why No One Reads Your Evaluation Report’ for more information
on this.
Step 7: Decide on standard forms and procedures
Once the M&E system is designed there will be a need for planning
templates, designing or adapting information collection and analysis tools,
developing organisational indicators, developing protocols or methodologies
for service-user participation, designing report templates, developing
protocols for when and how evaluations and impact assessments are carried
out, developing learning mechanisms, designing databases, and the list goes on (Simister, 2009).
However, there is no need to re-invent the wheel. There may already be
examples of best practice within an organisation that could be exported to
different locations or replicated more widely. This leads to step 9.
Step 8: Use the information derived from Steps 1-7 above to fill in the M&E system template
You can choose from any of the templates presented in this article to
capture the information. Remember, they are templates, not cast in stone.
Feel free to add extra columns or categories as you see fit.
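As one hedged example of what such a template might contain, the short script below writes a blank template to a CSV file; the column names are assumptions drawn from Steps 1-7 and can be adjusted to suit your programme:

```python
import csv

# Illustrative M&E system template columns drawn from Steps 1-7.
# The column names are an assumption; add or remove columns as needed.
columns = [
    "Evaluation question",
    "Monitoring question",
    "Indicator",
    "Data source",
    "Collection method",
    "Frequency",
    "Responsible person",
    "Budget",
    "Reporting format/audience",
]

# Write a blank, single-header-row template that staff can fill in.
with open("me_system_template.csv", "w", newline="") as f:
    csv.writer(f).writerow(columns)

print("Blank template written to me_system_template.csv")
```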
Step 9: Integrate the M&E system horizontally and vertically
Where possible, integrate the M&E system horizontally (with other
organizational systems and processes) and vertically (with the needs and
requirements of other agencies) (Simister, 2009).
Try as much as possible to align the M&E system with existing planning
systems, reporting systems, financial or administrative monitoring systems,
management information systems, human resources systems or any other
systems that might influence (or be influenced by) the M&E system.
Step 10: Pilot and then roll-out the system
Once everything is in place, the M&E system may be first rolled out on a
small scale, perhaps just at the Country Office level. This will give the
opportunity for feedback and for the ‘kinks to be ironed out’ before a full
scale launch.
Staff at every level should be aware of the overall purpose(s), general overview and key focus areas of the M&E system.
It is also good to inform people about the areas in which they are free to develop their own solutions and those in which they are not. People will need
detailed information and guidance in the areas of the system where
everyone is expected to do the same thing or carry out M&E work
consistently.
Such guidance could include guides, training manuals, mentoring approaches, staff
exchanges, interactive media, training days or workshops.
Final Thoughts
In conclusion, my view is that a good M&E system should be robust enough
to answer the evaluation questions, promote learning and satisfy
accountability needs without being so rigid and inflexible that it stifles the
emergence of unexpected (and surprising!) results.

THE 12 KEY COMPONENTS OF M&E SYSTEMS


Monitoring and Evaluation Systems require twelve main components in
order to function effectively and efficiently to achieve the desired results.
These twelve M&E components are discussed in detail below:
1. Organizational Structures with M&E Functions
The adequate implementation of M&E at any level requires that there is a
unit whose main purpose is to coordinate all the M&E functions at its level.
While some entities prefer to have an internal unit to oversee their M&E functions, others prefer to outsource such services. This component of M&E emphasizes the need for an M&E unit within the organization, how clearly its roles are defined, how adequately its roles are supported by the organization's hierarchy and how other units within the organization are aligned to support the M&E functions within the organization.
2. Human Capacity for M&E
An effective M&E implementation requires not only that adequate staff are employed in the M&E unit, but also that the staff within this unit have the
necessary M&E technical know-how and experience. As such, this
component emphasizes the need to have the necessary human resource that
can run the M&E function by hiring employees who have adequate
knowledge and experience in M&E implementation, while at the same time
ensuring that the M&E capacity of these employees is continuously
developed through training and other capacity building initiatives to ensure
that they keep up with current and emerging trends in the field.
3. Partnerships for Planning, Coordinating and Managing the
M&E System
A prerequisite for successful M&E systems whether at organizational or
national levels is the existence of M&E partnerships. Partnerships for M&E
systems are important for organizations because they complement the organization's M&E efforts and act as a source of verification of whether M&E functions align with the intended objectives. They also serve
auditing purposes where line ministries, technical working groups,
communities and other stakeholders are able to compare M&E outputs with
reported outputs.
4. M&E frameworks/Logical Framework
The M&E framework outlines the objectives, inputs, outputs and outcomes
of the intended project and the indicators that will be used to measure all
these. It also outlines the assumptions that the M&E system will adopt. The
M&E framework is essential as it links the objectives with the process and
enables the M&E expert to know what to measure and how to measure it.
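As a minimal sketch, assuming an invented energy-efficiency project, the chain from inputs through to outcomes, together with indicators, targets and assumptions, could be recorded like this:

```python
# Illustrative logical framework entries; the project content is hypothetical.
logframe = {
    "inputs": ["Trainers", "Training materials", "Workshop venues"],
    "activities": ["Deliver energy-efficiency workshops"],
    "outputs": [
        {"description": "Households trained",
         "indicator": "Number of households trained",
         "target": 500},
    ],
    "outcomes": [
        {"description": "Improved household energy practices",
         "indicator": "% of trained households adopting at least one practice",
         "baseline": 0, "target": 60},
    ],
    "assumptions": ["Participants attend all sessions"],
}

# Print the results chain level by level.
for level in ("inputs", "activities", "outputs", "outcomes", "assumptions"):
    print(level, ":", logframe[level])
```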
5. M&E Work Plan and costs
Closely related to the M&E frameworks is the M&E Work plan and costs.
While the framework outlines objectives, inputs, outputs and outcomes of
the intended project, the work plan outlines how the resources that have
been allocated for the M&E functions will be used to achieve the goals of
M&E. The work plan shows how personnel, time, materials and money will
be used to achieve the set M&E functions.
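Purely as an illustration of how personnel, time and money can be tied to each M&E task (the tasks, durations and costs below are invented), a work plan can be reduced to a small costed table:

```python
# Illustrative M&E work plan entries; tasks, durations and costs are hypothetical.
work_plan = [
    {"task": "Baseline survey", "responsible": "M&E officer", "months": 2, "cost": 8000},
    {"task": "Quarterly monitoring visits", "responsible": "Field staff", "months": 12, "cost": 6000},
    {"task": "Mid-term evaluation", "responsible": "External consultant", "months": 1, "cost": 12000},
]

# Summing the cost column gives a first estimate of the overall M&E budget.
total_cost = sum(item["cost"] for item in work_plan)
print(f"Total estimated M&E cost: {total_cost}")
```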
6. Communication, Advocacy and Culture for M&E
This refers to the presence of policies and strategies within the organization
to promote M&E functions. Without continuous communication and
advocacy initiatives within the organization to promote M&E, it is difficult
to entrench the M&E culture within the organization. Such communication
and advocacy strategies need to be supported by the organization's hierarchy. The existence of an organizational M&E policy and the continuous use of M&E system outputs in communication channels are some of the ways of improving communication, advocacy and culture for M&E.
7. Routine Programme Monitoring
M&E consists of two major aspects: monitoring and evaluation. This
component emphasizes the importance of monitoring. Monitoring refers to
the continuous and routine data collection that takes place during project
implementation. Data needs to be collected and reported on a continuous
basis to show whether the project activities are driving towards meeting the
set objectives. Data collection also needs to be integrated into the program activities
for routine gathering and analysis.
8. Surveys and Surveillance
This mainly concerns national-level M&E plans and entails how frequently relevant national surveys are conducted in the country. National surveys and surveillance need to be conducted frequently and used to evaluate the progress of related projects. For example, for HIV and AIDS national M&E plans, there need to be HIV-related surveys carried out at least bi-annually and used to measure HIV indicators at the national level.
9. National and Sub-national databases
The data world is gradually becoming open source. More and more entities
are seeking data that are relevant for their purposes. The need for M&E
systems to make data available can therefore not be over-emphasized. This
implies that M&E systems need to develop strategies of submitting relevant,
reliable and valid data to national and sub-national databases.
10. Supportive Supervision and Data Auditing
Every M&E system needs a plan for supervision and data auditing.
Supportive supervision implies that an individual or organization is able to
regularly supervise the M&E processes in such a way that the supervisor
offers suggestions on ways of improvement. Data auditing implies that the
data is subjected to verification to ensure its reliability and validity.
Supportive supervision is important since it ensures the M&E process is run
efficiently, while data auditing is crucial since all project decisions are
based on the data collected.
11. Evaluation and Research
Alongside routine monitoring, evaluation (often complemented by research) is the other major aspect of M&E. Evaluation of projects is done at specific times, most often at mid-term and at the end of the project. Evaluation is an important component of M&E as it establishes whether the project has met the desired objectives. It usually provides for
organizational learning and sharing of successes with other stakeholders.
12. Data Dissemination and Use
The information that is gathered during the project implementation phase
needs to be used to inform future activities, either to reinforce the
implemented strategy or to change it. Additionally, the results of both monitoring and evaluation need to be shared with relevant stakeholders for accountability purposes. Organizations must therefore ensure that there is an information dissemination plan, either in the M&E plan, the work plan, or both.
