0% found this document useful (0 votes)
3K views148 pages

What Are The Different Types of Monitoring and Evaluation M&E

The document discusses different types of monitoring and evaluation (M&E) used in development and humanitarian sectors. It describes 7 types of monitoring: process, compliance, context, beneficiary, financial, organizational, and results monitoring. It then outlines 10 types of evaluation: formative, process, outcome, summative, impact, real-time, participatory, thematic, cluster or sector, and meta-evaluation. The types of M&E can be used individually or in combination to assess various aspects of a project at different stages. Internal staff typically conduct monitoring while internal or external consultants perform evaluations.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
3K views148 pages

What Are The Different Types of Monitoring and Evaluation M&E

The document discusses different types of monitoring and evaluation (M&E) used in development and humanitarian sectors. It describes 7 types of monitoring: process, compliance, context, beneficiary, financial, organizational, and results monitoring. It then outlines 10 types of evaluation: formative, process, outcome, summative, impact, real-time, participatory, thematic, cluster or sector, and meta-evaluation. The types of M&E can be used individually or in combination to assess various aspects of a project at different stages. Internal staff typically conduct monitoring while internal or external consultants perform evaluations.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
You are on page 1/ 148

What are the different types

of Monitoring and
Evaluation (M&E)?
May 28, 2019
Did you know that monitoring and evaluation (M&E) are classified into
different types based on the purpose, focus, timing and audience of the
assessment? In this article, we have attempted to explore the variety that is
commonly used in the development and humanitarian sector to help you plan
and execute your M&E more strategically.
Great news is, the variations of monitoring and evaluation are not mutually
exclusive, which means that they can be used in different combinations to
leverage the full potential of your project.

Before we jump into the types, let’s quickly explore how monitoring differs
from evaluation.

7 types of monitoring to get you started


1. Process monitoring

This is often referred to as ‘activity monitoring.’ Process monitoring is


implemented during the initial stages of a project as its sole purpose is
to track the use of inputs and resources, along with examining how
activities and outputs are delivered. It is often conducted in conjunction
with compliance monitoring and feeds into the evaluation of impact.
2. Compliance monitoring

Just as the name suggests, the purpose of compliance monitoring is to


ensure compliance with donor regulations, grant, contract requirements,
local governmental regulations and laws, ethical standards, and most
importantly compliance with the expected results of the project. The
need for compliance monitoring could arise at any stage of the project
life cycle.
3. Context monitoring

Context monitoring is often called ‘situation monitoring.’ It tracks the


overall setting in which the project operates. Context monitoring helps
us identify and measure risks, assumptions, or any unexpected
situations that may arise within the institutional, political, financial, and
policy context at any point during the project cycle. These assumptions
and risks are external factors and are not within the control of the
project, however, context monitoring helps us identify these on time to
influence the success or failure of a project.
4. Beneficiary monitoring

This type of monitoring is sometimes referred to as ‘Beneficiary


Contact Monitoring (BCM)’ and the need for this may arise at any
stage of the project cycle. Its primary purpose is to track the overall
perceptions of direct and indirect beneficiaries in relation to a project. It
includes beneficiary satisfaction or complaints with the project and its
components, including their participation, treatment, access to
resources, whether these are equitable, and their overall experience of
change. Beneficiary monitoring also tracks stakeholder complaints and
feedback mechanism.
5. Financial monitoring

The main purpose of financial monitoring is to measure financial


efficiency within a project. It tracks the real expenditure involved in a
project in comparison to the allocated budget and helps the project team
to form strategies to maximize outputs with minimal inputs. This
is often conducted in combination with ‘process’ and ‘compliance’
monitoring and is crucial for accountability and reporting purposes.
6. Organisational monitoring

As the name suggests, organisational monitoring tracks institutional


development, communication, collaboration, sustainability and capacity
building within an organisation and with its partners and stakeholders
in relation to project implementation.
7. Results monitoring
This is where monitoring entwines with evaluation. It gathers data to
demonstrate a project’s overall effects and impacts on the target
population. It helps the project team to determine if the project is on the
right track towards its intended results and whether there may be any
unintended impacts.
10 types of evaluation to put you on the right track
1. Formative evaluation

This is generally conducted before the project implementation phase.


But depending on the nature of the project, it may also continue
through the implementation stage. Its main purpose is to generate
baseline data to investigate the need for the project, raise awareness of
the initial project status, identify areas of concern and provide
recommendations for project implementation and compliance.
2. Process evaluation

It is conducted as soon as the project implementation stage begins. It


assesses whether the project activities have been executed as intended
and resulted in certain outputs. Process evaluation is useful in
identifying the shortcomings of a project while the project is still
ongoing to make the necessary improvements. This also helps to assess
the long-term sustainability of the project.
3. Outcome evaluation

This type of evaluation is conducted once the project activities have


been implemented. It measures the immediate effects or outcomes of
the activities in the target population and helps to make improvements
to increase the effectiveness of the project.
4. Summative evaluation

This occurs immediately after project conclusion to assess project


efficacy and the instant changes manifested by its interventions.
Summative evaluation compares the actual outcome data with baseline
data to determine whether the project was successful in producing the
intended outcomes or bringing about the intended benefits to the target
population. It provides evidence of project success or failure to the
stakeholders and donors to help them determine whether it makes sense
to invest more time and money for project extension.
5. Impact evaluation

Impact evaluation assesses the long term impact or behavioral changes


as a result of a project and its interventions on the target community or
population. It assesses the degree to which the project meets the
ultimate goal, rather than focusing on its management and delivery.
These typically occur after project completion or during the final stage
of the project cycle. However, in some longer projects, this can be
conducted in certain intervals during the project implementation phase,
or whenever there is a need for impact measurement.
6. Real-time evaluation

Real-time evaluation is undertaken during the project implementation


phase. It is often conducted during emergency scenarios, where
immediate feedback for modifications is required to improve ongoing
implementation. The emphasis is on immediate lesson learning over
impact evaluation or accountability.
7. Participatory evaluation

This type of evaluation is conducted collaboratively with the


beneficiaries, key stakeholders and partners to improve the project
implementation. Participatory evaluation can be empowering for
everyone involved as it builds capacity, consensus, ownership,
credibility and joint support.
8. Thematic evaluation

Such type of evaluation focuses on one theme across a number of


projects, programs or the whole organisation. The theme could be
anything, ranging from gender, migration, environment etc.
9. Cluster or sector evaluation

Just as the name suggests, this evaluation is implemented by larger


development and humanitarian sectors, including a group of different
organisations, programs or projects that are working on similar
thematic areas. It assesses a set of interconnected activities across
different projects and entities. As a result, it strengthens partnerships
within these key sectors, while improving their coordination,
accountability, predictability, and response capacity.
10. Meta-evaluation

This is used to assess the evaluation process itself. Meta-evaluations


could be useful to make a selection of future evaluation types, check
compliance with evaluation policy and good practices, assess how well
evaluations are utilized for organizational learning and change, etc.
Here’s the summarized version of the different types of monitoring and
evaluation (M&E) we have listed above.  
Quick note: Monitoring is conducted by an internal staff member, whereas,
evaluations, depending on its type, could be conducted by internal team
members or external consultants/experts. Internal evaluation could help build
staff capacity and ownership but they may be subjective at times. On the
other hand, external evaluation brings in a degree of objectivity and technical
expertise and tends to be more transparent and accountable.
The content in this article is based on the International Federation of Red
Cross and Red Crescent’s  Project/Program Monitoring and Evaluation
Guide. 
Before we sign off, just a quick reminder that this list is not all-inclusive and
there are many more types of monitoring and evaluation practiced in the
development and humanitarian sectors. If you know of any additional types
that you would like to see on this list then do reach out to us and we’d be
happy to add them here. 

Monitoring and evaluation


(M&E) plan for NGOs
June 12, 2019
Our simple guide articulates the key components and important steps in
developing a monitoring and evaluation (M&E) plan. Our intention is to help
you plan your M&E more strategically, so that your project can achieve the
intended results. Find out how and when to transform your M&E vision into
implementation, when to design an M&E framework, how to collect useful
data, how to disseminate and utilise results, and more.
What is a monitoring and evaluation (M&E) plan?
A monitoring and evaluation plan is a guide that explains the goals and
objectives of an M&E strategy and its key elements. In simple words, an
M&E plan is like a roadmap that describes how you will monitor and
evaluate your program, as well as how you intend to use evaluation results
for project improvement and decision making. An M&E plan helps to define,
implement, track and improve a monitoring and evaluation strategy within a
particular project or a group of projects; it includes all the steps, elements and
activities that need to happen from the project planning phase until the
project reaches its goal and creates the intended impact.
Let’s take a look at what these ‘steps,’ ‘elements’ and ‘activities’ entail:

 A proposed timeline for M&E


 Relevant M&E questions to ask at different stages of the project life
cycle
 Different methodologies
 An effective implementation strategy
 Expected results
 Defining who would implement the various components of the M&E
plan
 Appropriate M&E tools for data collection
 Identifying where data would be stored and how it would be analysed
 Defining how M&E findings would be reported to donors, stakeholders
and internal staff members to ensure project improvement,
transparency and data-driven decision making
 Other required resources and capacities
One thing to keep in mind – the specifics of each project’s M&E plan will
differ on a project-by-project basis, however, they should all follow the same
basic structure and include the same key elements.
When should we design an M&E plan?
Monitoring and evaluation plan should be created right in the beginning when
the project interventions are being planned. Planning project interventions
and designing an M&E strategy should go hand in hand. Planning the M&E
this early on also helps to ensure that there is a robust system in place to
monitor every little intervention and activity of the project and evaluate their
success. It also helps the project managers and other staff members
associated with the project to get a clear picture of key objectives and ensure
the project is on the right track.
It is important to involve project managers, evaluators, donors, and other
stakeholders in the designing of the M&E plan, as stakeholder involvement in
the early phase ensures the applicability and sustainability of M&E activities.
The idea is to identify opportunities and barriers as a team in the planning
stage with a focus on problem-solving and maximizing impact.
Step-by-step guide to designing an M&E plan
Developing an M&E plan is a dynamic and multi-faceted process as it
involves merging and connecting different elements of M&E into one holistic
system to measure the performance of interventions and impact of a project.
It is recommended to design the M&E work plan in a manner that it’s flexible
so adjustments could be made anytime within the context of the work plan to
account for issues that may arise during the M&E process.
Before we begin, it is essential to understand the rationale behind developing
the M&E plan, the key elements that will be included and the steps required
in developing it. Below, we have attempted to break down these elements
into different steps for more clarity.
Step 1: Identifying the focal problem and the need for a
project
Before we conceptualise a project, it is essential to understand the underlying
problem in the community of interest and explore what’s causing it, what
interventions could solve this problem and how long would the intervention
need to last for it to be effective. The project will thus be designed based on
the need for a certain assumed intervention.
There are many strategic ways to identify the focal problem and its causes,
but one common way organisations define these are through a ‘Problem Tree
Analysis.’ This is a group activity that involves input from project team
members, stakeholders and beneficiaries who can contribute relevant
technical and local knowledge.
The first step is to define the main problem that all the team members
mutually agree upon and visualise it on a flip-chart or a white board as the
trunk of a tree. Next, through many rounds of discussions and dialogues, the
team identifies the causes of the problem and visualises them as the roots of
the tree. Finally, the team brainstorms on the potential consequences of the
problem and exhibits them as the branches of the tree. Team members can
also add additional branches for solutions, concerns and decisions.
This is an effective practice as it maps out a realistic picture of a problem
from economic, political and socio-cultural dimensions, while building a
shared sense of purpose, action and understanding amongst the involved
parties.
Step 2: Planning the project
Once we have fully grasped the underlying problem and mapped out its
causes and consequences, we can begin to plan our project.
Identifying project goals, objectives and inputs/activities
Before we begin the groundwork of M&E plan, it is essential to understand
where we need to go and how we are going to get there. This is possible by
identifying clear and concise goals, objectives and relevant activities.
 Goals: The final impacts on the lives of the beneficiaries or the
environment that the project intends to achieve
 Objectives: longer-term change in the environment or the behaviour of
project beneficiaries that is needed to achieve the overall goal
 Activities/Input: direct interventions and processes of the project
Identifying key players

This step involves identifying key internal and external stakeholders who will
be involved in the project or who will benefit from the project. The key
stakeholders include the project team, donors, stakeholders in the wider
community (community groups, networks, residents etc.), partner
organisations, local and national policy makers, other government
bodies/ministries and the project beneficiaries.
Identifying monitoring and evaluation questions

In this step, program managers or M&E specialists with input from all
stakeholders and donors identify the most important M&E questions the
project will investigate. M&E questions, when answered will allow the
managers to determine their internal capacity and processes in terms of
vision, leadership, budget, management, sustainability etc. The M&E
questions also allow the managers to gauge the relevance, effectiveness,
impact and contributions of the interventions at different stages of the project
life-cycle.
By identifying these questions early on in the process, project managers or
M&E specialists are prepared to design tools, instruments, and methodologies
required to gather the needed information. M&E questions may require
revisions every now and then depending on the status of the project.
Roles and responsibilities

This is another important step to include while planning a project because


defining the roles of project staff members and stakeholders early on will
clarify who would be in charge of what activities, including communications,
project management, project design and implementation, data collection, data
analysis, reporting etc. and avoid unnecessary confusions later on during
project implementation.
Cost estimates for the monitoring and evaluation activities

It is essential to allocate tentative budget and provide an explanation of the


needed resources in the planning phase. This includes – money and
personnel, capacity development, infrastructure, etc. M&E experts suggest
allocating approximately 5 to 10 percent of total project costs for M&E
programming.
Understanding the overall context

It is important to understand the political and administrative structures of the


community where your project will take place, along with the roles and
influences of existing policies that may affect project implementation.
Likewise, it is also recommended to start thinking about the potential risks
and unexpected circumstances that might arise during project
implementation, for eg., any reluctance on the key players’ part for
cooperation etc.
Once a clear picture of the overall goals and objectives of the project are
defined, the key players are identified and the context is well understood, it is
time to select an appropriate approach and sketch out the detailed design of
the implementation plan.
Step 3: Defining a monitoring and evaluation framework
By the time we reach this step, we should have sufficient background
knowledge to design a framework. A framework increases understanding of
the project’s goals and objectives and defines the relationships between
factors key to implementation. A framework also articulates the external and
internal elements that could affect the project’s success.
It is important to keep in mind that there is no one size fits all when it comes
to frameworks. Different kinds of projects use different kinds of frameworks,
the best way to determine your ideal type is by understanding the scope of
your project and then choosing the one that best fits the purpose. These three
types of M&E frameworks are widely used in the development and
humanitarian sectors:
 Theory of Change – A theory of change shows a bigger picture (which
could sometimes get compex) of all the underlying processes and
possible pathways leading to long term behavioral changes in the
institutional, individual or community levels, while visualising all the
possible evidence and assumptions that are linked to those changes.
 Logical Framework (LogFrame)/Logic Model  –  Unlike the theory of
change, a LogFrame or a Logic Model is to the point and focuses only
on one specific pathway that a project deals with and creates a neat
and orderly structure for it. This makes it easier for the project
managers and stakeholders to monitor project implementation.
 Results Framework –  A results framework emphasises on results to
provide clarity around the key project objectives. In other words, it
outlines how each of the intermediate results/ outputs and outcomes
relates to and facilitates the achievement of each objective, and how
objectives relate to each other and the ultimate goal. Want some tips
on how to design a Results Framework?  Click here. 
These three frameworks may have some differences in practice, but there are
also some common elements that run through them, like the need for the
identification and involvement of key stakeholders; the need for well-defined
goals, objectives, activities and outputs, the same general purpose of
describing how the project will lead to results and the need for ongoing
monitoring and evaluation.
Check out our blog on Theory of Change vs. Logic Model to learn how they
differ in terms of structure, approach, rationale and usage.
Step 4: Identifying relevant indicators
Once the program’s goals and objectives are defined and an outline of an
M&E framework is in place, it is time to define indicators for tracking
progress towards achieving those goals. A good mix of process, outcome and
impact indicators is always recommended.
Process indicators track the progress of the project. These indicators help us
get clarity on whether activities are being implemented as planned. On the
other hand, outcome indicators track how successful program activities have
been at achieving project objectives. Unlike process indicators, these
indicators focus more on what the project is trying to achieve rather than how
it is being achieved. Impact indicators measure the long term goals or
impacts of a project.
Would you like to learn more about indicators? Click here.
Step 5: Identifying data collection tools and methodologies
After creating monitoring indicators, it is time to identify and collect relevant
data to demonstrate the actual results of the project interventions against our
indicators. M&E experts recommend to involve the project team and
stakeholders in the discussion to make the process more participatory. Before
collecting data, it is a good idea to discuss these questions as a team:
 Will the data be qualitative, quantitative, or a combination of the two?
 What baseline data already exists?
 What are the most relevant methods and tools to collect new data?
 How will the collected data be recorded?
 How and when will the data be analysed?
 Who will be responsible for data collection and analysis?
The golden rule to follow here is to collect fewer useful data properly than a
lot of data poorly. It is important for project managers to take into
consideration staff time and resource costs of data collection to see what is
reasonable.
What is a good way to determine the most relevant source of monitoring
data? This depends largely on what each indicator is trying to measure. The
program will likely need multiple data sources to answer all the monitoring
and evaluation questions. Data sources could be participants themselves,
literature, national statistics, the whole community, individual homes or
anyone or anything that can help to generate the relevant data. Once the
appropriate sources have been selected, the next step would be to decide on
the appropriate tools and methods to collect the data from the data source.
Some common types of data collection methods are as follows:
 Surveys
 Questionnaires
 Focus groups
 Case studies
 Interviews
 Workshops
 Content analysis of materials etc.
Apart from the traditional pen and paper methods, there are many digital data
collection tools available in the market to help data collectors gather data
faster and more efficiently. These online or offline tools also help to avoid
human errors that can arise during data collection and input. Some widely
used data collection tools are KoBo
Toolbox, CommCare, SurveyCTO, ONA etc.
Once the process of data collection is determined, it is also necessary to
decide how frequently data will be collected. This will depend on the needs
of the project, donor requirements, available resources, and the timeline of
the intervention. Most data will be continuously gathered by the program,
while others at certain intervals. Gathered data is usually recorded every few
months, depending on the agreed upon timeline. Want to know how to
prepare your dataset for analysis? Click here.
Step 6: Reviewing M&E Work Plan (M&E practitioners
recommend conducting this on a periodic basis)
Now that we’ve mapped out our indicators and data collection plan, it is time
to revisit our M&E plan to see our progress toward the project goals and
objectives and revise it based on the current needs of the project – what is the
status of the project? How well are the activities being implemented? Are
they generating intended outcomes or to what extent are our interventions in
line with the needs of the community? What needs to be improved, added or
changed at this point? etc.
At this stage, it is also good to revisit the fund allocation for the evaluation
and see if our plan fits well within the available budget and resources. Roles
and responsibilities for each component of the work plan should also be
clearly explained. Would we need to outsource a particular segment of the
evaluation to an external party?
Reviewing our M&E work plan also allows new team members, if any, to
familiarise with the project and get a sense of what his/her responsibilities are
and how the other roles and responsibilities are divided amongst the group.
Step 7: Reporting
Once data is gathered and analysed, it must be reported to the relevant
members as regularly as possible to discuss and interpret findings. The
intention of reporting should always be to provide clarity on the most up-to-
date results to staff members and stakeholders about the progress, success
and failure of the project and to help them make data-driven decisions for
modifications of project components and to develop future work plans as
necessary. Also, data must be reported so that it can increase knowledge and
make contributions to the related field for the future projects and practices to
be more effective. If the project results and data are not dissemination
adequately then it might lead to duplicate monitoring and evaluation efforts.  
Thus, the M&E work plan should include an effective strategy for internal
dissemination of data among the project team, as well as wider dissemination
among stakeholders, donors and external audiences. The plan should also
articulate what format will be used to share the findings – formal meetings
with donors and stakeholders, written reports, oral presentations, program
materials or community and stakeholder feedback sessions.
Besides the traditional reporting techniques, many organisations are also
opting for digital M&E tools and software like TolaData. These tools usually
come with dashboard and portfolio features that allow users to visualise data
into graphs, charts, reports and images for real-time reporting. These tools
make reporting so much easier and help organisations to provide more clarity
on their progress and ensure transparency and accountability at all levels.
Conclusion
We hope our step-by-step guide will serve as a helpful roadmap to develop
and implement monitoring and evaluation that is relevant, effective, timely,
and credible. According to the experts, if M&E is planned and executed well,
it can become a powerful tool for social and political change – all the more
reason to make M&E an integral component of our development and
humanitarian projects. Have questions or concerns about this article? Please
feel free to reach out to us.
Key references:
1. Guide to monitoring and evaluation | UNIVERSITY OF OXFORD
2. Developing a monitoring and evaluation work plan |  FHI 360
3. Developing a monitoring and evaluation plan for ICT for education
|  TINA JAMES AND JONATHAN MILLER
4. Step-by-step guide to create your M&E plan | EVALUATION TOOLBOX
5. How to develop a monitoring and evaluation plan |  COMPASS
6. Monitoring and evaluation plans | UN WOMEN

In conversation with M&E


Specialist Sarah Ulrich
July 10, 2019

“I always enjoyed being in direct contact with target groups of projects I


evaluated, because I find it amazing and gratifying to hear about the change
and impact they experience.” Monitoring and Evaluation Specialist Sarah
Ulrich talks about her love for the M&E sector and how she views M&E as
the facilitator in intersectoral collaborations. Read on for more interesting
insights from Sarah!
A quick introduction
Which organisation do you work for?
I work part-time for aqtivator gGmbH, a philanthropic organisation
supporting non-profit organisations (NPOs) with impact-scaling potential in
the educational sector. In addition to that, I am a freelance consultant in the
field of impact orientation and intersectoral partnering.
Which region(s) of the world have you worked in so far?
I am based in Germany but have occasionally worked as a consultant for
organisations in other European countries.

Where did you like working the most?


There is no specific country I prefer but I love to experience the (working)
cultural differences and similarities.
What is your favourite dish? 
Pasta is guaranteed to make me happy!

What are your favourite leisure activities? (other than M&E, of course!)
I relish rock music and try to go to concerts at least once a month. I also play
the guitar and sing in a band called TwoBricks which you will definitely not
find on Youtube. 
M&E Specialist Sarah Ulrich talks about the power of monitoring and
evaluation to enable partners with different sectoral backgrounds to come to
a common understanding of their individual and the mutual idea of values
and results.
Let’s talk about your career in M&E
How long have you worked in the M&E sector?
I have been working in this sector for eight years. Before that, I worked in
research, diagnostics and methodology, which, I guess, is somehow close to
M&E.

What inspired you to pursue a career in M&E?


It was by chance, really. Coming from a social psychological research
background I have always been interested in field research methods and
finding out about the cause-and-effect-mechanisms in practice.

When I joined Education Y (former buddY e.V.), a German NPO, it was a


rather unexpected turn of my career path. I had the chance to further explore
this interest in what I later found out was M&E in the role of an Internal
Consultant for Quality Management and Evaluation and loved it right from
the start.
Your most joyful or memorable moment related to your M&E work?
There have been various happy moments. For example, I always enjoyed
being in direct contact with target groups of projects I evaluated, because I
find it amazing and gratifying to hear about the change and impact they
experience.

I was also extremely proud when I was told that quite a lot of people who are
working in the field of Impact Orientation or M&E in German NPOs have
been inspired by one of my talks or papers on how Impact Monitoring and
Evaluation are more a matter of STARTING TO DO it, rather than endlessly
revolving around questions of research methodology or the countless
dilemmas of scientific validity of field research and qualitative data quality.

Your most frustrating or challenging moment related to your M&E work?
As a freelance consultant and M&E service provider, I always try to be very
transparent about the limits and opportunities of impact orientation,
especially in the intersectoral context. For instance, when I work with
corporate clients, scientific hardliners or public administration, I find it rather
demanding to ‘translate’ the logic of M&E in the context of social impact
into their respective value system.

I graduated from Zeppelin University‘s executive Master’s course ‘Intersectoral
Governance and Leadership’ in 2016. This has helped me a lot to understand
the main challenges of intersectoral communication, especially when it
comes to the interpretation and subjective value of data.

M&E in your perspective


What do you think are the biggest opportunities in M&E today?
I think we have to challenge the idea that any single sector can tackle major
societal problems on its own. But collective and intersectoral impact
projects and approaches are very complex. I think one of M&E’s biggest
challenges but also biggest opportunities is to be a facilitator in intersectoral
collaborations.
M&E experts are generally very good at formulating concise, SMART and
unambiguous outcome and impact goals and coming up with persuasive,
useful indicators. Thus they can enable partners with different sectoral
backgrounds to come to a common understanding of their individual and the
mutual idea of values and results. The data gathered by Monitoring and
Evaluation thereby becomes a common language for intersectoral
communication and a central premise for successful collective impact.

What do you think are the biggest challenges in M&E today?


I would really like to advocate for more openness towards methodological
creativity and diversity. As Chairperson of the Social Reporting Initiative
e.V., the provider of the Social Reporting Standard guide for annual social
reports, I have read countless annual reports of NPOs (worldwide) and come
across a plethora of creative and clever methodological solutions for social impact
monitoring, evaluation and reporting (case studies, impact narrations,
surveys, interviews, (big) data analysis, RCTs, SROI reports etc.).
I think as long as it delivers meaningful results in terms of the output,
outcome and impact goals and helps to gain insight into the success premises
and effects of a project or program, there should not be a dispute about
whether the evaluation method is ‘gold standard’ or not.

Are you aware of any emerging trends in the M&E sector? How do you
feel about them?
As with many aspects of our everyday life, I think digitisation will play an
important role in the M&E sector now and in the future. A lot of
organisations are still unaware of their data’s potential or are reluctant to
explore it out of concerns about data privacy and data abuse.

With the emergence of more and more data sources, I believe, it’s also
absolutely inevitable to have a concise Impact Logic Model (I-O-O-I)
because to me it is the basis and frame for any data collection and analysis.
It’s like a compass in a data jungle.
In your opinion, what are some good skills to have to work in M&E?
Although it’s really helpful to have deeper knowledge about research
methodology, I would not necessarily count that as a key skill. But to work in
M&E you ought to be creative, curious and receptive and have a hands-on
mentality. Analytical thinking, the flexibility to rethink and re-frame an
aspect or idea and the ability to clearly and comprehensibly communicate
issues in varying contexts are of great value as well. And – my personal
major challenge – patience.

Digital M&E
Have you used any digital tools for M&E? How was your experience?
The experiences I have had with digital tools have shown me that in
certain areas we are still far from a “digital society.” For instance, for project
evaluations in schools, I always try to use digital surveys, and the response
rate is generally lower than if I hand out pen-and-paper surveys, because of
technical challenges (no Wi-Fi, no devices, scepticism towards clicking a
link etc.). 
On the other hand, the data analysis options that come with digitisation
literally make me giddy with joy. There are just so. many. possibilities!
I am very far from being an expert in digital data analysis (unlike, for
instance, the folks at CorrelAid), but I just really like how it has made things
easier and swifter (as long as you have a good Impact Logic Model and
therefore know what you’re looking for).

Tips and recommendations


Do you have any advice for individuals who are thinking about starting a
career in M&E?
Be prepared for a lifelong learning process, a lot of creativity and joy and a
really fulfilling and valuable job. And if you can… learn a coding language
(e.g. R) as early as you can. From personal experience I can tell you that the
later you try, the harder it becomes.
Final words
What is the funniest reaction you’ve encountered when you told someone
(maybe from outside the non-profit sector) that you were “working in
M&E”? 
I tried to explain it to my grandparents who are both 92 years old by now. It
took me quite some time to get it across to them but when I finally
succeeded, my Grandma exclaimed, “Everyone should be doing that thing!
Then nobody would ever make a mistake!”
Any final words to your colleagues in the M&E sector?
I have learned and continue to learn so much from each and every M&E
person I have ever come across, so please get in touch because I’m eager to learn
more.

Contact:
Sarah Ulrich
Website – www.outcome-reporting.com
LinkedIn – Sarah Ulrich
Twitter – @sarah_civilian
We hope you enjoyed this interview with Ms. Ulrich. In case you have
additional questions for M&E experts, do send them our way and we will be
more than happy to include them in the next edition of our M&E specialist
interviews.

Using multiple data collection tools? Here’s how to integrate
your dataset with TolaData
August 22, 2019
Caught up in a data-silo dilemma? TolaData can help!
Do you use Ona, CommCare, SurveyCTO, KoBo Toolbox or other digital
tools to collect your data? Do you store your collected data in MS Excel
or Google Sheets? Is it difficult to bring all your data into one place
without losing it or compromising its quality? The struggle is real, but we
have an integrated solution for all your data silo problems.
Find out how to import, consolidate, analyse and report all your data in one
place with TolaData, a monitoring and evaluation platform.
Mobile data collection tools - creating opportunities in
the development and humanitarian sector
The phrase “Data for Development” reflects the crucial role of data in the
development and humanitarian sector. Fortunately, the massive growth of
technology in recent years has been improving monitoring and evaluation
practices in the sector by replacing the time-consuming paper-based data
collection process with effective digital tools.
There are many data collection tools available, including KoBo Toolbox,
Ona, CommCare, SurveyCTO etc. These platforms are versatile and reliable.
They enable users to create unique data collection forms that meet their
project requirements while helping development professionals discover
and utilise newer and richer data sources. Data collectors are able to reach
wider target groups, collect data more quickly, cheaply and efficiently, and
shorten the gap between data collection and decision making.

Digital data collection tools also ensure good data quality with their
automated data entry process and the possibility for real-time quality checks.
Moreover, their offline functionality allows organisations to collect data with
their mobile devices from any corner of the world and in any challenging
environment.

Import data from multiple data collection tools and
cloud storage systems into one platform
TolaData integrates seamlessly with many mobile data collection tools and
cloud-based document storage. We also support direct import of CSV files.
Moreover, you can import data from a range of services via pre-defined
connectors.

So, regardless of the source or the format of your data, you can import all
your data into our comprehensive M&E platform and connect them to their
respective projects. All team members and stakeholders with access to your
project on TolaData can see the data and collaborate in real-time. Moreover,
TolaData’s in-built form-builder helps you to collect more data, in case there
is a need for it.
TolaData integrates seamlessly with multiple data collection tools and cloud storage
systems to bring all your data into one platform for easy analysis and reporting. 
Import data from Open Data Kit (ODK) based data collection
tools - KoBo Toolbox and Ona
At TolaData, we have a robust API that allows our system to connect to the
APIs of a number of third-party data collection tools and cloud-based
storage systems.

Our interface integrates seamlessly with the Open Data Kit (ODK) based data
collection tools, including KoBo Toolbox and Ona. Simply insert your KoBo
or Ona API token into TolaData and import your raw dataset directly into
TolaData’s Data Tables. Your data from KoBo and Ona is automatically
imported as a JSON feed into our system.
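To picture what importing a JSON feed of submissions into a data table involves, here is a minimal Python sketch. The submission fields below are hypothetical examples, not the actual KoBo, Ona or TolaData schema.

```python
# Sketch: flattening a KoBo/Ona-style JSON feed of survey submissions
# into tabular rows. Field names are hypothetical, for illustration only.
submissions = [
    {"_id": 1, "village": "A", "household_size": "4", "has_latrine": "yes"},
    {"_id": 2, "village": "B", "household_size": "6"},  # missing a field
]

def to_table(records):
    """Use the union of all keys as columns, filling gaps with None."""
    columns = sorted({key for rec in records for key in rec})
    rows = [[rec.get(col) for col in columns] for rec in records]
    return columns, rows

columns, rows = to_table(submissions)
print(columns)   # ['_id', 'has_latrine', 'household_size', 'village']
print(rows[1])   # [2, None, '6', 'B']
```

Taking the union of keys matters because mobile surveys often produce submissions with skipped or conditional questions, so individual records rarely share an identical set of fields.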
Import Data from cloud-based document storage systems -
Google Drive and MS OneDrive for Business
TolaData connects easily with cloud-based document storage systems like
Google Drive and MS OneDrive for Business. If you use Google Sheets to
store and clean your data, by default Google will save your Google Sheets
into your Google Drive. So, all you will need to do is connect your Google
Drive to TolaData via our interface and import your data with a click of a
button.

Similarly, you can import data tables stored on MS OneDrive for Business.
MS Excel has both online and offline functionalities. You can import
your .xls files directly into TolaData via our API. In case you work on MS
Excel offline, your first step would be to save your file to your MS OneDrive
for Business account and then import it into TolaData via our interface.
Alternatively, you can choose to export your Excel file as a CSV and then
import it directly into TolaData. Simply choose the process that works for
you.

Import data from SurveyCTO and CommCare


Using CommCare and SurveyCTO? No problem! You can easily import
your data into TolaData. First, download your dataset as a CSV file to your
computer and then import it directly into TolaData. Our CSV import helper
guides users through importing their CSV files and supports different formats.
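As a rough illustration of what handling “different formats” can involve, the Python standard library’s `csv.Sniffer` detects the delimiter of an export before parsing it. This is a generic sketch, not TolaData’s actual import code.

```python
import csv
import io

def read_csv_any(text):
    """Parse CSV text whose delimiter (comma, semicolon, ...) is unknown."""
    dialect = csv.Sniffer().sniff(text)  # detect the delimiter in use
    return list(csv.reader(io.StringIO(text), dialect))

# The same data exported with two different delimiter conventions:
comma = "id,score\n1,80\n2,75\n"
semicolon = "id;score\n1;80\n2;75\n"

# Both exports parse to identical rows once the dialect is detected.
assert read_csv_any(comma) == read_csv_any(semicolon) == [
    ["id", "score"], ["1", "80"], ["2", "75"]
]
```

Semicolon-delimited CSVs are common from Excel installations with European locale settings, which is why an import helper cannot assume a comma.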
TolaData for data analysis and reporting
At TolaData, we understand the value of your data. Therefore, we have built
a system to help you leverage its full potential.
Our versatile toolkit supports your organisation with a complete indicator
workflow. We enable you to build a results framework to visualise the most
important elements of your projects, including your goals, objectives,
outcomes and activities. You can take advantage of our indicator section to
assign relevant indicators to all your activities. Use our form-builder and data
tables section to collect, import and manage all your data. Moreover, our
platform allows you to connect your data to your indicators to help you to
track your project performance and make improvements based on real
evidence.
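At its core, connecting data to indicators comes down to comparing actual values against targets. A toy sketch of that calculation (indicator names and figures invented for illustration, not taken from any real project):

```python
# Toy sketch of indicator tracking: compare actuals against targets.
# Indicator names and numbers are invented for illustration only.
indicators = [
    {"name": "Trainings delivered", "target": 40, "actual": 30},
    {"name": "Participants reached", "target": 800, "actual": 920},
]

def progress(indicator):
    """Percentage of the target achieved so far."""
    return round(100 * indicator["actual"] / indicator["target"])

for ind in indicators:
    status = "target met" if progress(ind) >= 100 else "in progress"
    print(f"{ind['name']}: {progress(ind)}% ({status})")
# Trainings delivered: 75% (in progress)
# Participants reached: 115% (target met)
```

A real M&E platform layers reporting periods, disaggregation and data sources on top of this, but the target-versus-actual comparison remains the basic evidence for tracking performance.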
Lastly, you could utilise our configurable dashboards to visualise your latest
data and share your results in real-time with your partners and stakeholders
across the globe. Dashboards open up space for real-time collaborations and
ensure transparency, accountability and learning.

Interested to explore the data collection tools mentioned in this article? Just
click on the links below:

 KoBo ToolBox
 CommCare
 Ona
 SurveyCTO
Digital technology is clearly transforming the way we approach development
problems and opportunities. To keep up with the evolving trends, our
TolaData team continues to improve and diversify our platform, so that you
can harness the latest in tech to reap the best out of your data. With that
being said, we look forward to hearing about how your organisation collects
and manages data and the challenges and opportunities that come with it.
Let’s keep this conversation going!

What is Monitoring and Evaluation
(M&E)?
September 6, 2019

At TolaData, we love getting queries about monitoring and evaluation
(M&E) from our clients and followers around the world.
Here’s one of our favourite inquiries of all time – What is monitoring and
evaluation? It may sound like a simple question but it is certainly an
important one.
Before incorporating M&E into a project, it is vital to have a clear
understanding of what it really is, its purpose, its significance and how
monitoring differs from evaluation while being two sides of the same coin.
Stay with us while we deep dive into monitoring and evaluation separately
with some key questions to help you navigate the M&E path more accurately.
A quick intro to Monitoring and Evaluation
Monitoring and Evaluation (M&E) is a powerful tool for transformational
change and learning.

When a robust M&E plan is incorporated into a project at an early stage, it
guides the project in the right direction from day one. M&E acts as a catalyst
to leverage resources, improve project performance, reach the intended
beneficiaries, provide accountability and transparency to donors,
stakeholders and beneficiaries, and maximise project impact, often within the
established quality standards, estimated time frame and allocated budget.
Monitoring and evaluation go hand-in-hand and are implemented
continuously throughout the life of a project. However, it is helpful to
understand each term on its own, so that you can have a thorough
understanding of the functions, how they are interconnected and how they fit
into the realm of a project.

What is Monitoring?
Simply put, monitoring is an ongoing activity that begins and ends with a
project. Monitoring involves collection and analysis of data/information on a
routine basis to identify patterns, changes and progress within the ongoing
development intervention, against predetermined targets, indicators and
objectives.

As monitoring is conducted by internal team members, it gives them a
first-hand overview of the project to revise strategies as needed and make
informed management decisions.
Monitoring helps the project team to discern whether their inputs and
resources are adequate to implement the activities. It also helps them to track
how the activities are being implemented and determine whether the
activities are generating the intended outputs. Moreover, monitoring allows
the team to identify gaps in implementation and formulate solutions to fix
them as they arise. The project team is able to check compliance of project
activities within the established standards and discern the effectiveness or
inefficacy in the use of allocated funds and resources.
Here’s a list of our 8 hand-picked Monitoring questions to get you started

What is Evaluation?
Evaluation, on the other hand, takes place at specific intervals of a project
life-cycle. Unlike monitoring, evaluation looks at the bigger picture and
delves deeper into project outcomes, impact and the overall goal and
investigates their significance. In other words, evaluation utilises data
(including monitoring data) to unravel the effectiveness of project design and
implementation, the relevance of project goals and objectives, the efficiency
of resource use, project strategies, organisational policies, operational area,
the quality of results, the sustainability of project impact etc.

Evaluation seeks to understand how and why the intervention has worked so
well or why it failed and suggests solutions for its improvement. Evaluation
thus paints a much more thorough picture and provides credible information and
recommendations to enable organisations to incorporate lessons learned into
their decision-making process for their long term growth and success.
Our 10 key Evaluation questions to put you on the right track
Before we sign off, we would like to reiterate that monitoring and evaluation
are interconnected processes; they both complement each other to help you
make the most out of your interventions. So, next time you conceptualize a
new project, make sure to include M&E from day one.
Did you find this article helpful? Feel free to share it with your network!

5 reasons why
monitoring & evaluation
is good for NGOs
September 6, 2019
The trials and errors in the development and humanitarian sector in the last
decades have been transforming the way development is practiced globally.
The evolving development trends and practices include the heightened
expectations of donors for more accountability, transparency and proof of
the effectiveness of projects. This has shifted the focus towards a more
quantifiable, results-based and data-driven approach to development.
How does this affect NGOs?
To meet the growing demand, NGOs are now under greater pressure to
demonstrate development success to donors in a clear, comprehensive,
compelling and innovative manner. 

NGOs are expected to make their operations as transparent and accountable
as possible – by providing a clear-cut picture from day one – regarding where,
why and how every dollar is spent and whether it’s contributing to foster any
change in the community of interest. It is vital for NGOs to showcase
tangible results and exhibit discernible improvements in the lives of the
beneficiaries, or the lack thereof, with clear data evidence. 

Successful development projects today are thus grounded in careful planning,
rigorous data collection, meticulous implementation, and thorough analysis
and reporting – and this is where monitoring and evaluation comes into play.
There are numerous advantages of monitoring and evaluation for NGOs,
but here’s a curated list of the top 5 benefits from our M&E experts:
1. Greater transparency and accountability
One of the greatest benefits of M&E is helping organisations to track, analyse
and report on relevant information and data throughout the life cycle of a
project. This allows the project team to provide robust evidence for all their
actions and decisions to stakeholders, donors and community members from
day one. On the other hand, stakeholders and donors acquire the information
and understanding they need to collaborate, communicate, provide inputs and
make informed decisions about strategy improvements and project
operations. Additionally, M&E helps donors to weigh the efficacy of their
funds in a project, which influences their current and future funding plans.

2. Improved project performance


A well planned M&E helps the project team to get a better understanding of
the target population’s needs. This helps to define the scope of the project
and design objectives that are relevant, measurable and achievable. A well
defined M&E plan also clarifies the process and interventions that will lead
to the project’s outputs and deliverables. Moreover, M&E helps the team to
plan an end-to-end indicator management system, identify effective tools and
methodologies to measure, analyse and demonstrate every intervention and
its impact on expected outcomes. This enables organisations to see their
progress and identify gaps as they arise and make timely amendments to
achieve the desired results.
3. Effective resource allocation
All project operations are interwoven around project budgets. The amount of
available cash dictates the duration and magnitude of interventions, choices
of resources, number of employees etc. M&E is an effective tool for
enhancing the efficiency and effectiveness of finances in project
implementation. M&E facilitates the estimation of the value, worth and
impact of project components, and enables the team to verify what works and
what doesn’t and where more money should be invested or where budget
should be cut. M&E allows the team to make appropriate changes to the
financial plan on a regular basis to avoid unfavorable contingencies.

4. Promotes learning & data-driven decision making


M&E data provides quantifiable results to help the involved parties to learn
from project successes and challenges and be more adaptive. Involved parties
are better prepared to respond to ever-evolving project situations, determine
what worked, what did not work and why, how it could be improved, and
make revisions based on data evidence, rather than
assumptions. The team is able to establish links between the past, present and
future actions to improve project implementation and to identify what could
be replicated and scaled up for sustainability of the current project and for
future endeavors.
5. Systematic management of organisation
M&E also functions as a performance management tool as it enables
organisations to gather, disseminate and utilise information and data to
improve their internal operations and add value to their organisation.
Organisations can thus focus on their objectives such as enhancing
performance, encouraging innovation, sharing and integrating lessons learned
for continuous improvement etc. M&E also streamlines organisational
procedures to achieve constructive coordination among different stakeholders
and organizational units.

We hope our article stimulates curiosity and inquiry about M&E and inspires
you and your organisation to make it an integral part of your projects and
internal operations.
Got a point or two to add to the list? Send them our way!
Key references:
 WHO Monitoring and Evaluation Toolkit
 UN WOMEN What is Monitoring and Evaluation

How are NGOs coping with the impacts of COVID-19
June 16, 2020
Photo credit: Centers for Disease Control and Prevention (CDC) 
COVID-19 has impacted billions of lives around the globe. Governments,
individuals, businesses, and civil society organizations are battling to save
lives, support families, and keep businesses, and organisations afloat. During
these unprecedented times, the role of NGOs has become paramount in
combating the coronavirus and its impact on society’s most vulnerable
populations, especially in countries and regions where government services
are struggling.
From TolaData’s conversations with clients and other NGOs, we are hearing
many reflections on how organisations are coping with the impacts of the
novel coronavirus, its implications on the economy and their aid efforts.
The organisations we have spoken with all agreed that the strain from the
novel coronavirus has been immense. The pandemic has impacted all aspects
of their work – from running programs, planning finances, coordinating staff
to how they collaborate with partners and stakeholders situated across the
globe. However, many NGOs also said the challenges were paving the way
for new opportunities and innovative ways of working in the sector – a
chance to renew how we tackle global problems together as a community.

Innovative management approaches and project adaptation
Many NGOs have been compelled to redesign or pivot their projects to
respond to the rapidly changing landscape caused by COVID-19.
Assessments on the challenges faced by communities in light of the pandemic
inform how organisations are adjusting objectives and implementation
strategies for 2020 and possibly beyond. Fortunately, many donors are easing
their protocols to allow implementing partners to redirect their funding and
program activities to the COVID-19 response.

We are seeing donors offering greater flexibility to partner organisations.


USAID, for example, is permitting organisations (on a case-by-case basis)
to take no-cost extensions on existing projects and introducing new
emergency measures like online reporting mechanisms to simplify
administration during the pandemic. Anja Schermer, Managing Director at
the Sarah Wiener Stiftung agrees that this kind of adaptation and flexibility is
critical in these difficult times.
“We had intense discussions with our major donor whether delivering training online (instead of
face-to-face) would be considered as having fulfilled our pre-agreed activities. In the end, they
agreed, which was a great relief for us.”

- Anja Schermer, Managing Director of Sarah Wiener Stiftung


It is likely however that project work not related to COVID-19 may be put on
hold or scaled back due to implementation constraints and financial
limitations. This could lead to some major setbacks in our collective
ambitions under the Sustainable Development Goals (SDGs). Many people
anticipate huge ripple effects from this crisis in the medium to long term that
we can only begin to grasp at the moment.
To rise to these challenges, we need to be creative and perhaps, question
some of the traditional approaches and take risks in order to find the right
solutions. We have heard many organisations are leveraging technology to
continue the implementation of their project activities. With this in mind, we
can see the many opportunities at hand for the development sector to advance
in the digital sphere.

Remote working has become the new norm


Many organisations are embracing new styles of working, communicating
and collaborating. Remote working and home-office have swiftly become the
new norm with a growing reliance on the web, cloud-based platforms and
new technologies to support projects, staff and communities across the world.
With this new trend, many are questioning whether offices are even needed
anymore.

“We’re very happy that we transferred the majority of our processes to the cloud already last year
- I can’t imagine how difficult it must be for organisations at the moment that have to change both
their processes and the content of their programs at the same time”.

- Anja Schermer, Managing Director of Sarah Wiener Stiftung

Changes to Monitoring & Evaluation (M&E) approaches
According to the World Bank, M&E has a critical role to play during COVID-19
in assessing the continued appropriateness of an organisation’s response to
the pandemic. However, with travel restrictions, lockdown, and health
concerns, M&E cannot be carried out the same way as before. Many
organisations have acknowledged the need for a restructured and adaptive
M&E, safe data gathering practices, and methods of verification and evidence
that can be submitted and stored virtually.
Reliance on digital technology and innovative approaches is increasing, from
digital data collection devices and applications, including mobile phone-
based feedback mechanisms, to remote sensing with satellites, to software
and platforms that allow for remote monitoring of data flows and sources. We
are also seeing remote reporting and verification replacing in-person
monitoring visits and assessments across the sector.
Will localization finally take the lead?
The limitations on the movement of aid workers mean that international
organisations and research institutions have to find ways to transfer more
responsibilities and decision-making into the hands of local staff and
partners. Leveraging the power of the internet to implement programs
remotely, the current circumstances are an opportunity to create a more
flexible, collective and collaborative leadership and management.

Shifting to more locally led delivery that harnesses in-country expertise
has great potential to enhance the long-term effectiveness of responses and
contribute to the sustainability of programs. This can also create more
inclusive and participatory forms of governance. Many hope that this will
encourage open dialogue and reinforce local and national action wherever
possible.

Rethinking financial models


The financial ramifications of COVID-19 for the sector are huge, with global
economic uncertainty, cancellations of fundraising events and delays or loss
of new grants. Big organisations are downsizing and smaller organisations
face an even bleaker future without new financial support to ensure the
continuation of their work. There is a challenge ahead that might ask us to re-
evaluate traditional business models and diversify income streams, and
perhaps this offers opportunities to build new alliances between NGOs and
between sectors.

Having said this, new funding opportunities are being made available for the
not-for-profit sector. According to the New Humanitarian, many public and
private donors have pledged billions in international aid to support NGOs and
the focus is now on ensuring that these resources reach those in need as
quickly as possible.
This is just a slice of the bigger conversations that are happening across the
globe on how the development sector is adapting during these unprecedented
times. Hope you found it helpful. If you or your organization have any ideas,
lessons, or insights to add, do write to us and we would be happy to keep this
discussion going.

Theory of Change vs. Logical Framework
- know the difference
January 27, 2021
Theory of Change (ToC) or Logical Framework? Given how both
frameworks are used for the same purpose, it could be a difficult choice to
make. However, if you know their unique attributes, their similarities and
differences, it might make your decision making process a bit easier.
Both ToCs and Logical Frameworks are widely popular and have been around
for decades, but neither term is clearly defined in the international
development literature. Some say it’s okay to use these terms interchangeably
as both are used to demonstrate how activities lead to results, whereas others
say they are different in terms of structure, approach, rationale and usage. 

Also, there is no set structure or rule for developing either of the two
frameworks. Although they must include some key project components, there
is also room for flexibility, both in terms of the process of developing them
and the look of the final product. In this article, we will explore some
similarities and differences between these two much debated approaches
in Monitoring and Evaluation.
Before we jump into the main discussion, let us clarify some project-related
terms that we will be referring to throughout the article.
What is a Logical Framework (LogFrame)?
The Logical Framework was first introduced in the late 1960s as a program
design methodology for USAID projects. By the mid-1970s it was being used
not only by USAID but also by Canadian and German development agencies
as a tool to design a wide range of projects and as a framework for
monitoring and evaluation. By the late 1980s, this model had become a
requirement for funding of most international development programs, and usage
expanded to include other bilateral donors, United Nations agencies, and the
World Bank, all of whom continue to utilize this tool.
Logical Frameworks, also referred to as logframes, operate at the project or
program level and describe in a concrete way, how your project or program
will create the desired change. Like most other frameworks, these are flexible
and evolving – you can easily create one based on your current understanding
of the project components and revise it as you go along. 
The structure of a logframe is quite standardized – it is logical, sequential, to
the point and has a simple linear chart format. It could be displayed as a
matrix or as a flow chart. A logframe is visually engaging as it clearly
illustrates the basic project components in the chart, which makes it easier for
stakeholders to identify project inputs, activities, outputs, outcomes and
impacts. In other words, through these components, indicators and
milestones, the chart triggers questions to help everyone involved in the
project to think about what they’re doing, what they hope to achieve and
what they need to do to get all the important stuff done. 
Some drawbacks of using a Logical Framework (LogFrame)

But some experts point out that a logframe can be quite limited and too
simplistic in its scope and approach, as it zooms in on only one specific
pathway within your program: it points out that activity A leads to outcome
B, leaving out the possibility of additional outcomes beyond outcome B.
There is therefore limited flexibility and little room for the emergence of
unexpected outcomes within a logframe. Moreover, it fails to explain the
“whys” – why activity A is expected to cause outcome B. Additionally, some
experts point out that a logframe often does not show the bigger impact of
an intervention or the evidence that something has been achieved.
Here’s an example of a Logical Framework
(LogFrame)
Let’s assume that our organisation has developed a project to implement in
‘Community A.’ Our overall project goal is to improve the sexual health of
the community members. Let’s look at how we can show the relationships
between our goals, outcomes, and activities in a logframe.

This is a simplified version of a Logical Framework. Depending on the nature of your project and
the preference of your organization, the complexity and forms of logical framework may differ.
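As a rough illustration (not the actual chart from this article), the rows of such a simplified logframe for this hypothetical project can be sketched as plain data, one row per level of the results chain. All wording and indicators below are invented:

```python
# A simplified, hypothetical logframe for the 'Community A' sexual health
# project. Every row pairs a results-chain level with an example indicator.
logframe = [
    {"level": "Goal",
     "summary": "Improved sexual health of community members",
     "indicator": "% decrease in reported STI cases after 2 years"},
    {"level": "Outcome",
     "summary": "Community members adopt safer sexual practices",
     "indicator": "% of surveyed members reporting condom use"},
    {"level": "Output",
     "summary": "Community members trained on sexual health",
     "indicator": "Number of people completing the training"},
    {"level": "Activity",
     "summary": "Run sexual health workshops in Community A",
     "indicator": "Number of workshops delivered"},
]

# The linear logic reads bottom-up: activities -> outputs -> outcomes -> goal
for row in reversed(logframe):
    print(f"{row['level']:>8}: {row['summary']}")
```

Note how each row links to exactly one row above it – precisely the single-pathway structure that the drawbacks section above criticises.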
What is a Theory of Change (ToC)?
When talking about theory of change, we like to kick off with M&E expert,
Piroska Bisits’ definition of the concept.
“ At the simplest level, a Theory of Change shows the big, messy “real world”
picture, with all the possible pathways leading to change, and why you think they lead
to change. Do you have evidence, or is it an assumption?”

M&E expert, Piroska Bisits

According to the experts, theory of change seems to have emerged from the
field of theory-driven evaluation, which came to prominence in the 1990s. A
ToC is best described as a flow chart or a web diagram that goes beyond a
simplistic input-outcome notion of evaluation and seeks to demonstrate
through all possible pathways how and why an intervention is assumed to
lead to a desired end-result or a long-term impact.
Like a logical framework or a results framework, a ToC is also a tool used to
design and evaluate projects by mapping out the logical sequence of an
initiative from goals, inputs, activities to outcomes but comparatively ToC is
a much more comprehensive methodology which shows a much bigger
picture.
While demonstrating the connection between activities and outcomes, ToC
also uses narratives to depict issues within a project that you can and cannot
control. It articulates how change happens in the wider context, including the
culture and power relations in which a specific project will take place;
clarifies the organisation’s role in contributing to change; and defines and
tests critical assumptions about behaviour, causal relations and contexts,
supporting these assumptions with evidence as far as possible. A ToC is
often used by organisations working in multiple sectors and in different
thematic areas on complex projects or programs.
Some drawbacks of using a Theory of Change (ToC)

Compared to other approaches, a ToC requires vast amounts of data, greater
time investment, deeper critical reflection, logical thinking and frequent
exchange of dialogue amongst colleagues and stakeholders. It also requires
an honest approach to answer difficult questions and identify assumptions
and hypotheses about how your efforts might influence change, given the
social and political circumstances, uncertainties and complexities that
surround the initiatives.
Hypotheses and assumptions are unique attributes of ToCs and really help
team members sketch out a broader picture of their intervention. However,
when there are too many assumptions, they can cause a lot of confusion and
lead the team in many different directions. Many critics
point out that due to ToC’s theory driven nature, it is sometimes unclear
whether a ToC evaluates the program itself or the program’s underlying
theory.
Here’s an example of a Theory of Change (ToC)
Taking the same scenario from our previous logical framework example, we
have created this simplified version of a Theory of Change. You will notice
how a ToC is much more complex and detailed than a logical framework.
This is a simplified version of a theory of change. Depending on the nature of your project and the
preference of your organization, the complexity and forms of ToCs may differ. 
Similarities between Theory of Change (ToC) and
Logical Framework (LogFrame)
We have already seen how ToCs and logical frameworks use slightly
different approaches, but if you look deeper you will notice that there is also
significant overlap between these two models. Both are used to improve our
value-for-money, both are tied to an organisation’s overarching mission and
both rest on a foundation of logic – specifically, the logic of how change
happens. Let’s explore some common features.
 Both are used for planning, monitoring and evaluation of projects.
 Both can greatly improve project design and outcomes.
 Both identify the main components of the project (inputs, activities,
outputs, outcomes) and demonstrate the relationship between these
elements to acknowledge and implement all of the steps it will take to
deliver the results you want to see in the world.
 Both frameworks enable you to measure, track and quantify your
progress and your impact.
 Both help you identify and explain why your strategy is a good solution
to the problem at hand.
 Both are generally represented in the form of visual charts, graphs and
diagrams.
 Both Logframe and ToC are outcome-focused – they answer the
question, “What do you want to accomplish?”
 Both depend on a set of assumptions about why these strategies
should make a difference in the desired outcomes.
 Both are grounded in your analysis of what it takes to create the
change, and your understanding of why conditions are as they are
now.
 With both models, project members can identify intermediate effects
and define measurable indicators for them. But note that some logframes
may not include indicators; this varies case by case. 
 Both help to communicate your interventions and outcomes more
effectively to your team, donors, stakeholders and local beneficiaries.
 Both models enhance accountability by keeping stakeholders focused
on outcomes.
 Both need to be regularly updated to reflect the project changes over
time.
Differences between Theory of Change (ToC) and
Logical Framework
As we mentioned above, both ToC and the Logical Framework are tools used
to describe how programs lead to results; however, if we dive deeper, we will
notice differences in their structure, approach, rationale and usage.
Here’s a list of features unique to each model.
Logical Framework
 The Logical Framework approach was introduced in the late 1960s as a tool to design a wide range of projects and as a framework for monitoring and evaluation.
 A logical framework is mainly used as a tool for monitoring and evaluation.
 A logical framework is descriptive in nature. It illustrates project components (project goals, activities, outputs, inputs, risks, assumptions and short and medium term outcomes) in one clear and specific pathway, and takes a narrow and practical look at the relationship between these elements. The risks and assumptions stated in a logframe are usually only basic and are not backed up by evidence for why you think one thing will lead to another.
 The structure of a logframe is quite standardized and linear, which means that all activities lead to outputs, which lead to outcomes and the goal. It is often presented as a table and there are no cyclical processes or feedback loops.
 Designing a logframe usually involves project staff within an organisation.
 Logframes are great when you are working on a small to medium sized project and need to summarize complex project theories into basic components that everyone involved in the project can understand at a glance.
 A logframe is often created after a project or program has been developed, working forward from inputs through activities, outputs and outcomes to the end result or goal. The question used in developing this framework is – if we plan to do activity A, then will that produce outcome B?
 A logframe states that Activity A causes Outcome B but does not show WHY that activity produces that particular outcome.
 A logframe focuses much more on the project or program itself and how it is operating, rather than on external factors.

Theory of Change
 ToC gained popularity only in the 1990s to capture the implementation and outcomes of complex projects in international development.
 A ToC is used as a tool for program design and evaluation.
 A ToC is explanatory in nature. It shows the big messy picture and takes a wide view of a desired change or long-term impact – carefully examining and thinking through each activity, input, output and outcome, the issues that you can and can’t control, and the preconditions that will enable or inhibit each step. A ToC probes the assumptions behind each step, demonstrates all possible pathways that lead to the desired change or impact, and provides evidence of how and why you think a certain activity will cause a certain change.
 ToCs have their core components but are rather flexible and do not have a standardized format. A ToC can take any form, from a flow chart to a comprehensive graphical diagram with narrative text. It could include cyclical processes and feedback loops, one box could lead to multiple other boxes, different shapes could be used, and so on.
 Designing a ToC involves a much bigger group of staff members, stakeholders, donors and beneficiaries. It is a time consuming, complex process, but when done right it inspires and supports innovation and improvement in projects and programs.
 A ToC is best when you need to design a complex initiative and want to have a rigorous plan for success. It evaluates appropriate outcomes at the right time and in the right sequence, and explains why an initiative worked or did not work, and what exactly went wrong.
 A ToC is best created before an intervention starts. Its development usually begins from the top, meaning it identifies the goal first and then works backwards to map the outcome pathways and the most appropriate interventions that may create the desired change(s). While developing a ToC, we ask – if we do activity A, then outcome B will take place because…
 A ToC is a causal model; it requires justifications at each step, meaning you have to articulate the assumption about WHY Activity A will cause Outcome B. Hence, a ToC makes much bigger strides in trying to establish the underlying causes of change.
 A ToC works to understand the context in which a program operates. It recognises that factors outside of the program will often have an influence on the end result.

How do you know which model is right for you?


Picking a model is not an easy task, especially when both the theory of
change and the logical framework are used for the same purpose, and both
models help you understand how your initiatives will bring about the changes
and results you expect to see for the community and its people. So, the
answer to this question is – it really depends on your organisation’s
preference and capacity. The complexity of your intervention, your
willingness to commit the necessary time, and the availability of resources,
skills and knowledge within your organisation could also influence your
choice. Some organisations may even
opt for both, given the size and scope of their interventions. But no matter
which approach you choose to go with, the aim of using models as such
should be to keep all the staff members and stakeholders moving in the same
direction by providing a common language and point of reference. 
We hope this article was helpful. Please let us know if you have any points to
add to the list or any comments or suggestions to share. Also, we’d love to
know which framework your organisation uses to plan, monitor and evaluate
projects.
Key References:
 Theory of change vs. logical framework – What’s the
difference? Tools4Dev.
 Developing a Logic Model or Theory of Change, Community Toolbox.
 Using logic models and theories of change better in
evaluation, BetterEvaluation.
 The Aspen Institute Roundtable on Community Change, the
community builder’s approach to ToC.
 Review of using Theory of Change in International Development, DFID.
 The Logical Framework, USAID 

Getting started with IATI - an
expert interview with Maaike Blom
April 21, 2021
Maaike Blom, CEO of IATI frontrunner Data4Development walks us
through the International Aid Transparency Initiative and the opportunities
& challenges of publishing to IATI. Maaike also shares some helpful tips and
resources for individuals and organisations looking to learn more while
reflecting upon her own journey through this initiative. Stay with us as we
explore IATI through Maaike’s perspective! 
Tell us about your role at Data4Development (D4D)?
I am the CEO and one of the founders of D4D. My role involves providing
strategic guidance to the implementation projects we are involved in while
steering the sales and marketing team and maintaining our external relations
– I represent D4D at different conferences, seminars, lectures etc.
When did your team at D4D first learn about the
International Aid Transparency Initiative (IATI)?
D4D was founded in 2015 and we have been connected to IATI since day
one. Our Chief Technology Officer (CTO) Rolf Kleef has also been an integral part
of the IATI discussions since its launch in 2008. My business partner Gyan
Mahadew and I got together while heading an IT transformation project,
including a project management system for Terre des Hommes NL back in
2014-15. IATI was a part of this project and we understood at that point in
time that IATI was there to stay. 
First, Gyan and I worked together as separate Interim Advisors in this
particular project for about two years and then we realized that we could
assist many more organisations with IT and IATI in the same manner – that’s
when we started a company of our own (D4D) to devote ourselves to helping
organisations with IATI, data and other information management support for
the non-profit sector.
Why did D4D decide to join IATI as a member?
We became a member of the IATI official governance structure in 2018 after
we’d gained experience in executing IATI related work for governments,
including the Department for International Development, UK (DFID now
FCDO) and the Dutch Ministry of Foreign Affairs. The IATI Secretariat is
hosted at UNDP under the flag of the United Nations, and the initiative
comprises over 90 members, including governments, multilaterals, foundations,
private sector and civil society organisations – all coming together to work
towards this global multi-stakeholder initiative. 
A governance board with representatives from different stakeholder groups is
responsible for steering the initiative in the right direction. Every year a
general members’ assembly takes place where IATI policies, developments
and issues are discussed. Besides this, there is a lively community of
practice  (IATI Connect) with different communities, including technical
community, data use community and data publishing community, where
members can provide input on the ongoing debates.
Data4Development is a member of these various communities, which keeps us
informed on what’s happening from the official perspective. It also
provides a channel to inform our clients on all the latest policy developments
and the latest discussions and vice versa. Plus, we can provide input and
recommendations on where the Standard should be going and how it can be
improved so it fits better for the use cases we encounter in practice.
In your opinion, how effective is the International Aid
Transparency Initiative (IATI)?
IATI provides international development professionals with a common
standard for the publication of aid information and makes it easy for them to
track the flow of aid. Let’s take a simple example – say Oxfam Netherlands
applies for funding from the European Commission. Once the funding is
approved, the funds are transferred to Oxfam Kenya, which further
distributes them to local NGOs and local partners – the ones who actually
work directly with target groups in a specific area. This is where IATI
comes into play: it can visualize this chain of aid transfer and demonstrate
how the money is distributed, but also what results are achieved at all
levels. This helps all actors work together towards more
sustainable development. 
A variety of tools is available for the IATI data, enabling organisations to
work with data, validate data, get insights from data, see who is working
where, doing what, how the money is spent and what needs to be improved.
When more people use it, more people become aware of the potential of IATI
to stimulate aid transparency as more people will see what’s happening where
and ask questions about projects.  When more people ask questions, it helps
NGOs to see whether something is wrong with the way they are publishing
their information or if the information is outdated. This could provide them
with an incentive to be more detailed, represent their interventions more
extensively and fine-tune or enrich their data as needed.
What benefits could implementing organisations gain
by complying with the IATI Standard?
To me, it’s all about aid effort coordination and aid effectiveness, the whole
discussion that has been taking place within the OECD-DAC and the G8
Summit, the Paris Declaration, the Busan Declaration etc. The major donors
have all agreed to share information on who is working where and with what
objectives in mind, so you can actually coordinate better, avoid double
financing or double implementation efforts and avoid scenarios where
everybody focuses on one area where maybe the need is not the highest. 
Additionally, publishing to IATI also provides organisations access to
accurate real-time data to make informed decisions. For example, during the
Ebola outbreak in Liberia, people from the Red Cross used data, including
IATI data to see which communities had the biggest Ebola outbreaks and
which hospitals in Liberia still had beds available for patients and which ones
did not – this helped the Red Cross to easily channel the patients to the right
places.
So, if more and more actors in international development publish what they
are doing to IATI, it will be easier to work together, operate effectively,
make data-driven decisions and avoid the potential mistakes that arise
because organisations don’t always know what their counterparts are doing –
and here, IATI can be an excellent standard. Of course, people still have
to fill it with data, and they have to make sure the data is of good quality
and kept up to date.
More on the IATI Standard.
What challenges do organisations face while
publishing or using IATI data?
In my opinion, one of the biggest challenges the average NGO user or
publisher faces during IATI publishing is the need to engage with XML and
make sure they understand the logic behind this data publishing standard and
format and how to structure their data in a logical manner. The IATI
Standard is an XML standard. In principle, XML is a very common format for
exchanging, harmonizing and visualizing data, so in that sense it is not
strange that IATI is based on XML – but for an average user, using and
publishing to IATI can be quite tricky because it is so technical. 
Oftentimes, people publishing to IATI put the XML file somewhere on the
registry without knowing exactly what they are publishing, where it’s going
or how it’s being used. In the process, the very purpose or intention behind
publishing to IATI can get lost if the format is a black box to people. That
is why visualising the data is so important.
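To make the XML hurdle more concrete, here is a minimal sketch of what a single activity in an IATI file can look like, built with Python's standard library. The element names follow the IATI Activity Standard (version 2.03) as commonly published; the organisation reference, identifier and titles below are entirely made up for illustration:

```python
import xml.etree.ElementTree as ET

def minimal_iati_activity(org_ref: str, activity_id: str, title: str) -> str:
    """Build a bare-bones IATI activity file and return it as an XML string."""
    # Root element: a file can contain many activities
    root = ET.Element("iati-activities", version="2.03")
    activity = ET.SubElement(root, "iati-activity")

    # A globally unique identifier, usually org-ref plus a project code
    ET.SubElement(activity, "iati-identifier").text = activity_id

    # Who is reporting this activity (type "21" = international NGO
    # in the IATI organisation-type codelist)
    reporting = ET.SubElement(activity, "reporting-org", ref=org_ref, type="21")
    ET.SubElement(reporting, "narrative").text = "Example NGO"

    # Human-readable title, wrapped in a <narrative> element
    title_el = ET.SubElement(activity, "title")
    ET.SubElement(title_el, "narrative").text = title

    # Status code "2" = Implementation in the activity-status codelist
    ET.SubElement(activity, "activity-status", code="2")

    return ET.tostring(root, encoding="unicode")

# All values here are hypothetical placeholders
xml_string = minimal_iati_activity("NL-KVK-12345678",
                                   "NL-KVK-12345678-PROJ1",
                                   "Community health project")
print(xml_string)
```

A real publication file would add dates, budgets, transactions and results, but even this skeleton shows why tooling that hides the XML behind a form or a "publish" button lowers the barrier considerably.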
Organisations can really benefit from using IATI publishing tools that are
easily available and ready for use. Also, training the in-house team or hiring a
specialist to work with IATI data can be helpful. NGOs can approach this as
a preparation for the future where data is going to be more important than
ever and where digital ways of working will be the norm in our common
future.
How does publishing data to IATI help organisations
with their monitoring and evaluation (M&E)?
IATI is focused on two important aspects – where the money goes and what
kind of results are achieved – and together these provide insight into what
is happening in a certain thematic area or country. In IATI, the results are
called the performance part, and this is directly related to M&E. 
The Standard already enables organisations to provide transparency on what
kind of results they are striving to achieve and now with the combined
services from TolaData and D4D, we make it possible for organisations to
combine their results and project data with financial data to present a
complete picture which is what organisations aim for. In that sense, the IATI
publication fosters stronger M&E support within an organisation.
How can Data4Development and TolaData help
organisations with IATI?
TolaData is a very user-friendly and flexible software for collaboration and
M&E. It makes tracking and reporting of project results easier with tools for
data collection, end-to-end indicator management support, results framework,
dashboard and portfolio features. It is also very simple to set up and most
organisations are easily able to adapt it to their project structure and needs.
And with our recent partnership with TolaData, organisations can now extract
their project results and other key information they have set up in TolaData
and combine it with the financial data and use D4D’s SpreadSheet2IATI to
validate and publish them to IATI, all in one platform. 
This takes away the burden of having to think about XML and all other
technical requirements of IATI because TolaData already helps you to
structure and organize your data and then it’s just a matter of pressing the
button at the end and the information that is delivered is an IATI ready report.
If organisations need further support, they can take advantage of IATI
training and consultancy services provided by D4D.
So in a way, our combined services help people to engage more with
activities that matter the most: achieving the outcomes and impact they aim
for while becoming a part of the international transparency movement,
leveraging data, tracking and reporting on their progress rather than dealing
with all kinds of technical stuff that is far from their daily work.
More on how TolaData and D4D can help your organisation with IATI.
How are D4D and TolaData different from other
publishing tools?
TolaData has its own strengths and so does D4D. Of course, there are other
tools that offer similar benefits, but what sets TolaData and D4D apart is
that their tools, features and support are integrated, which means organisations can set
up projects, manage and track their indicators, report on their progress and
actually publish those project results together with financial reports to IATI,
without the need to jump back and forth between multiple platforms.
Plus, with D4D’s IATI training and consultancy support, it’s like a one-stop-
shop service. Customers won’t have to sweat to get any help or information
on IATI, it’s all made easily available and accessible for them as one
package. Of course, it depends on the questions they have and issues they are
struggling with but together we can really answer almost any question or help
with any IATI related problems they face.
What improvements do you want to see in IATI data?
The IATI Standard is like a living thing: the more people start using it,
the more room for improvement you discover. It’s quite normal that within
such a big community of publishers you frequently have recommendations for
some part to be enriched or upgraded. For example,
previously it was only possible to publish quantitative results in IATI and
now they have also enabled the possibility of publishing qualitative data
using the Standard and that was all due to the discussion within the
community. These discussions really enable the IATI support team to
continuously mould the Standard in a way that it aligns with the work of
different types of organisations in the field.
There is an ongoing discussion on geolocations. At present, it is possible
to publish geolocations using the IATI Standard but it’s not fully
optimized – this is one of the improvements I am looking forward to and it
will probably be a part of the upcoming new release. It’s a bit like software,
there is always going to be room for improvements, but the basic elements
that IATI currently has are good to work with and it’s up to each organisation
to decide whether they want to engage with certain elements or not.
What does the future look like for IATI?
Over 1200 organisations now publish their development and humanitarian
spending to IATI and the number is expected to grow significantly in the
upcoming years. Different countries have taken different approaches to IATI,
for example, the UK, the Dutch and the Belgians have opted for obligatory
publishing to IATI, meaning every organisation funded by their governments
has to use IATI.
A number of other governments and donors, like the Swedish and German
governments, are more on the road of trying to engage people to work with
IATI data by setting examples and showing some of the benefits, without yet
making it compulsory. Perhaps when organisations are given a choice and
shown the benefits, they might feel more intrinsically motivated to join the
IATI movement – not just publishing to IATI because they have to, but also
engaging with the published data. We have
to wait and see how these different approaches work out in practice but one
thing is clear, the obligatory approach has certainly led to a high rise in IATI
publishers and much more data becoming available for public use. 
There have been about 888K activities published to IATI since the very
beginning and it keeps growing, so if more people start to use that richness
and investigate, the advantage of using IATI and being a part of this
transparency movement becomes even more evident for all kinds of
stakeholders.
Do you have any special advice or tips for
organisations considering publishing to IATI?
I think people should see IATI as a journey. In the beginning, it can be
quite intimidating because there are so many options, fields and components
that you can potentially use. What we always advise our clients is to start
small, get some experience, see which elements work for you and your
organisation, and become acquainted slowly on a smaller scale, instead of
thinking you have to comply with all the possibilities at once. If you try
to make it work for everything you do at once and engage with all the
options straight from the start, you make it difficult for yourself and the
chances are you might never start at all. 
So, again it’s better to start small and build it up, gain experience, learn along
the way and make mistakes, it’s all part of the data journey. In IATI you can
always go back and correct mistakes – you can overwrite your previous
dataset, make corrections and publish again. It’s a learning cycle – so just get
started with it and don’t think that you need to be perfect from the start.
Lastly, could you recommend some resources for
those who are interested to know more about this
initiative?
Certainly! For those of you who are interested to learn about IATI, you can
start with these helpful resources:
 IATI is growing in the EU
 IATI: Intro, benefits & guidelines for NGOs to publish as per the IATI
Standard
 About the International Aid Transparency Initiative (IATI) 
 Potential and feasibility of the international aid transparency initiative
(IATI)
 Using IATI: A guide for NGOs
 The rise of IATI: how the initiative is expanding throughout the world
We hope you found this interview with Maaike Blom helpful. In case you
have additional questions for Maaike and her team, do write them in the
comment section below or connect with Maaike on social media.     

Impact evaluation: overview,
benefits, types and planning tips
May 4, 2021
After a thorough consultation with our in-house M&E experts, we have
created a 2-part series on impact evaluation and this article is the first part
of the two. It provides an in-depth overview of impact evaluation and
answers questions such as – what is impact evaluation, when is a good time
to conduct an impact evaluation, whom to engage in the evaluation, what are
the benefits, types and challenges of impact evaluation, plus some top tips
and easy to follow steps to help you and your organisation plan and manage
your own impact evaluation. 
In the second part, we will walk you through the process of designing an
actual impact evaluation work plan and help you grasp its key elements,
including the objectives, purpose and scope of the evaluation, key questions,
methodologies and more.
Governments, donors, multilateral institutions and development organisations
around the world are investing billions of dollars every year in developing
and implementing projects, programs and policies to help reduce poverty,
improve lives, encourage learning, and protect the environment. As much as
it’s important to have these projects in place, good intentions alone are
not enough – it is equally crucial to understand the impacts these
interventions are making.
Since the Paris Declaration on Aid Effectiveness in 2005, many development
actors are required to report against their core strategic objectives and
demonstrate their effectiveness. This means NGOs and civil society
organisations have to go beyond simply tracking and reporting what they have
achieved, and focus on identifying the real difference their interventions
are making in the lives of the poorest and most vulnerable populations,
justifying that these efforts are effective in creating change.

In order to meet this growing demand for aid effectiveness, development
actors are increasingly incorporating impact evaluation into their monitoring
and evaluation plans as a tool for learning and accountability and to design
projects and policies that are evidence-based.

What is impact evaluation?
OECD-DAC defines impact as, “Positive and negative, primary and
secondary long-term effects produced by a development intervention, directly
or indirectly, intended or unintended.” 
According to the DAC Evaluation Network report, impact evaluation serves
both objectives of evaluation: lesson-learning and accountability. When done
correctly, impact evaluations should measure both positive and negative
changes in development outcomes that can be attributed to a specific
intervention, whether they are short or long-term, intended or unintended, 
direct or indirect. The intervention could be a small project, a large program,
a collection of activities, or a policy. 
But how do we understand and report on the changes our interventions are making? Change is not an easy concept to capture and explain, as it does not happen along a linear path. In impact evaluation, we cannot understand change simply by asking what we have achieved. We also have to ask how our efforts were connected to the change, who or what was involved in it, what strategies were used to bring it about, what contexts affected how it happened, and what the process or pathway of change was.
What are the benefits of impact evaluation?
Impact evaluation helps demonstrate project success or failure and provides accountability to all stakeholders, including donors and beneficiaries. It helps determine whether, and how well, an intervention worked to create change in a particular community of interest or in the lives of the target population, while demonstrating the extent of the impact and how it came about.
Impact evaluations are also useful for identifying the real needs on the ground and answering project or program design questions: which, among several alternatives, is the most effective approach, delivers the greatest benefits to the target communities, offers the best value for money, and is the most suitable for scale-up and replication? This gives organisations evidence for informed decisions when redesigning the current project or planning future interventions. Impact evaluation findings can also be used to advocate for changes in behaviour, attitudes, policy and legislation at all levels.
Impact evaluations have already proven to be valuable for development
interventions.
In 2015, a World Bank report found that, “Projects with impact evaluations
are more likely to implement their activities as planned and, in so doing, are
more likely to achieve their objectives.”
However, impact evaluation may not be applicable in every context, for reasons such as budget, timing, or the questions of interest. It is therefore best considered one tool within a wider spectrum of evidence-generating activities.
Outcomes vs. Impact - what's the difference?
Oftentimes, people confuse project outcomes with impact. Intermediate outcomes are evident during the life of the evaluation, as opposed to the long-term impacts of the intervention. Outcomes are the benefits an intervention is designed to deliver, whereas impacts are higher-level strategic goals or long-term effects of an intervention. Achieving the intermediate outcomes may contribute to the intended final impact. For example, when a project increases women's participation in community decision-making (an intermediate outcome), this may contribute to the improved economic, social and physical well-being of women, the long-term effect of the intervention (its impact). In other words, outcomes precede impact and are usually a precondition for it to occur.
Types of impact evaluation
An impact evaluation can be undertaken during as well as towards the end of
an intervention but the planning must begin early on. Based on the timing and
the purpose of evaluation, impact evaluation is categorized into two types.
How to plan and manage impact evaluation?
Before planning and implementing an impact evaluation, organisation staff and relevant stakeholders must clarify a few points, and should proceed with the evaluation only if it is appropriate and necessary. Here are some points to consider and steps to follow:
First of all,
 It’s important to determine how relevant the evaluation will be for
your organisation’s development strategy. 
 An impact evaluation should only be implemented when there is
clearly a need to understand the impacts of an intervention and when
impact evaluation is the best way to answer the questions about the
intervention.
Once the organisation has clarity on the above points, they can proceed with:
 Identifying what needs to be evaluated and generating evaluation questions accordingly.
 Identifying the availability of resources and determining how to
mobilize them. To estimate the amount of resources for impact
evaluation, organisations can refer to the budgets of previous similar
evaluations or use a budget analysis template made available by other
organisations. Check USAID’s budget template for reference. 
 Given the availability of resources and time, they have to determine
whether the findings will be credible and relevant.
 Determining whom to engage in the evaluation, decision-making and management, and outlining the required skills of the evaluation team; it is also important to secure commitment from all individuals who will be invested in this process.
 A clear understanding of the appropriate timing for impact evaluation
is also crucial. 
 Developing an evaluation design, methods and implementation work
plan. 
 Development and distribution of evaluation reports.
 Encouraging utilization of evaluation results. Impact evaluation is most useful when the findings from the current intervention can inform decisions about future projects, so it is important to have a clear understanding of how the findings will be used and by whom.
 Maintaining the quality of evaluation throughout the project cycle.
It is recommended that organisations first conduct an evaluability assessment before proceeding with the planning. (Check out this article by BetterEvaluation to learn more about evaluability assessment.)
When is a good time to conduct an impact evaluation?
Impact evaluation planning should begin in the early stages of a project. Impact evaluation methodologies require significant investment in preparation and enough time for the collection of baseline data and, where appropriate, the creation of a randomized control trial or comparison group, or the use of other strategies to investigate causal attribution.
Rather than treating it as a stand-alone component, it is important to address impact evaluation as part of an integrated monitoring and evaluation (M&E) plan. Meaningful impact evaluation cannot be carried out without drawing on data from other ongoing M&E activities and components. M&E enables impact evaluation by providing information on the nature of the intervention and its context, along with evidence on how the intervention has been progressing, whether an impact evaluation is necessary, and when is a good time to undertake it.
Although it is good practice to undertake impact evaluation early, since it provides useful information for modifying the project and improving its efficiency and benefits, be wary of starting too soon: impacts may be underestimated or go unnoticed because they have not yet had sufficient time to develop. Undertaken too late, the evaluation may miss the window to inform decisions.
Whom to engage in impact evaluation - taking a
participatory approach
Thinking about whom to involve in the evaluation process, why and how, is a crucial step in M&E; evaluation management arrangements should therefore be clearly described from the beginning of the process. This helps to develop an inclusive, context-specific participatory approach, which can bring a lot of value to the evaluation. The rationale for choosing a participatory approach to impact evaluation may be pragmatic or ethical, or a combination of both. Being clear and intentional about participatory approaches is an essential step for managing expectations and guiding implementation.
The nature of the project, the purpose of the evaluation, the expectations of
the donors, the goal of the intervention, plus the skills and competencies
available within the team, all determine how different team members and
stakeholders could be engaged in different stages of the evaluation process to
maximize the benefits. 
Additionally, asking these questions laid out by BetterEvaluation can also
help in designing an impact evaluation that is participatory in nature:
1. What purpose will stakeholder participation serve in this impact evaluation?
2. Whose participation matters, when and why?
3. When is participation feasible?
Tip: a common practice is to create an ‘Evaluation Management Team,’ a steering committee responsible for creating and supervising the ‘Evaluation Team.’ The management team also provides technical guidance, oversees quality assurance, and manages the budgets, field visits and other operational aspects of the evaluation. In addition, it creates and manages an ‘Evaluation Reference Group,’ which is responsible for providing technical and cultural advice. Members of this group are selected from a range of relevant stakeholders.
Challenges of impact evaluation
Impact evaluations are known to be time-consuming and expensive, to require special skills to conduct, and to pose many managerial and technical challenges, which puts them out of reach for organisations without large budgets and technical expertise. It is also difficult to determine the appropriate time to execute an impact evaluation: depending on its timing, different purposes and results may be reached.
Another key challenge is that many impact evaluation methodologies need to
be agreed upon from the start of an intervention, especially if they rely on
baseline surveys or randomisation.
This can be difficult in more complex interventions where goals and
objectives evolve over time. In such cases it may be more appropriate to use
methodologies that do not require extensive baselines.
The counterfactual is another big challenge in impact evaluation: identifying what would have happened in the community in the absence of the intervention. To figure that out, a comparison group must be selected to represent the counterfactual, and that requires careful thought; an inappropriate comparison group may invalidate the evaluation results.
Another major challenge of undertaking an impact evaluation is measuring the impact itself. Assessing impact is not an easy task: impact is often not visible during the life of a short-term intervention and is likely to be affected by other interventions and factors. In practice, a particular intervention is rarely sufficient to produce the intended impacts on its own; often, a combination of similar interventions and projects is required to achieve an impact.
There are several reasons for choosing impact evaluation for an
intervention. It plays a crucial role, not just in identifying project impacts but
also in assessing them and understanding their dimensions.  
We hope this article was helpful in explaining impact evaluation, its benefits, types and challenges, and in providing easy-to-follow steps to help you plan and manage your own impact evaluation. Check out the second part of our 2-part series on impact evaluation, “Designing an impact evaluation workplan: a step-by-step guide,” to learn how to design and conduct appropriate and effective impact evaluations and make them part of your overall M&E plan to improve your learning and accountability.
If you have any comments on this article or if you’d like to simply suggest an
M&E topic that you would like us to cover on our next blog post then please
write to us in the comment section below. 
Key Resources:
 Overview of Impact Evaluation – Patricia Rogers, UNICEF, 2014
 Impact Evaluation Methods for Youth Employment Interventions – ILO
 Choosing Appropriate Designs and Methods for Impact Evaluation – Australian Government
 Impact Evaluation: Resources – IDB
 Guidance for the Terms of Reference for Impact Evaluations – European Commission
Designing an impact evaluation work plan: a step-by-step guide
May 4, 2021
This article is the second part of our 2-part series on impact evaluation. In
the first article, “Impact evaluation: overview, benefits, types and planning
tips,” we introduced impact evaluation and some helpful steps for planning
and incorporating it into your M&E plan. 
In this blog, we will walk you through the next steps in the process, from understanding the core elements of an impact evaluation work plan to designing your own impact evaluation to identify the real difference your interventions are making on the ground. Elements in the work plan include, but are not limited to, the purpose, scope and objectives of the evaluation, key evaluation questions, designs and methodologies, and more. Stay with us as we take a deep dive into each element of the impact evaluation work plan!
Key elements in an impact evaluation work plan
Developing an appropriate evaluation design and work plan is critically important in impact evaluation. Evaluation work plans are also called terms of reference (ToR) in some organisations. While the format of an evaluation design may vary case by case, it must always include some essential elements, including:
1. Background and context
2. The purpose, objectives and scope of the evaluation
3. Theory of change (ToC)
4. Key evaluation questions the evaluation aims to answer
5. Proposed designs and methodologies
6. Data collection methods 
7. Specific deliverables and timelines
1. Background and context
This section provides information on the background of the intervention to be
evaluated. The description should be concise and kept under one page and
focus only on the issues pertinent for the evaluation – the intended objectives
of the intervention, the timeframe and the progress achieved at the moment of
the evaluation, key stakeholders involved in the intervention, organisational,
social, political and economic factors which may have an influence on the
intervention’s implementation etc.
2. Defining impact evaluation purpose, objectives and
scope
Consultation with the key stakeholders is vital to determine the purpose,
objectives and scope of the evaluation and identify some of its other
important parameters. 
The evaluation purpose refers to the rationale for conducting an impact
evaluation. Evaluations that are being undertaken to support learning should
be clear about who is intended to learn from it, how they will be engaged in
the evaluation process to ensure it is seen as relevant and credible, and
whether there are specific decision points around where this learning is
expected to be applied. Evaluations that are being undertaken to support
accountability should be clear about who is being held accountable, to whom
and for what.  
The objective of an impact evaluation reflects what the evaluation aims to find out, for instance to measure impact and to analyse the mechanisms producing it. It is best to have no more than two or three objectives, so the team can explore a few issues in depth rather than examine a broader set superficially.
The scope of the evaluation includes the time period, the geographical and
thematic coverage of the evaluation, the target groups and the issues to be
considered. The scope of the evaluation must be realistic given the time and
resources available. Specifying the evaluation scope enables clear
identification of the implementing organisation’s expectations and of the
priorities that the evaluation team must focus on in order to avoid wasting its
resources on areas of secondary interest. The central scope is usually
specified in the work plan or the terms of reference (ToR) and the extended
scope in the inception report.
3. Theory of change (ToC)
Theory of change (ToC) or project framework is a vital building block for
any evaluation work and every evaluation should begin with one. A ToC may
also be represented in the form of a logic model or a results framework. It
illustrates project goals, objectives, outcomes and assumptions underlying the
theory and explains how project activities are expected to produce a series of
results that contribute to achieving the intended or observed project
objectives and impacts. 
A ToC also identifies which aspects of the interventions should be examined,
what contextual factors should be addressed, what the likely intermediate
outcomes will be and how the validity of the assumptions will be tested. Plus,
a ToC explains what data should be gathered and how it will be synthesized
to reach justifiable conclusions about the effectiveness of the intervention.
Alternative causal paths and major external factors influencing outcomes may
also be identified in a project theory. 
A ToC also helps to identify gaps in logic or evidence that the evaluation
should focus on, and provides the structure for a narrative about the value and
impact of an intervention. All in all, a ToC helps the project team to
determine the best impact evaluation methods for their intervention. ToCs
should be reviewed and revised on a regular basis and kept up to date at all
stages of the project lifecycle – be this at project design, implementation,
delivery, or close.
More on the theory of change, logic model and results framework.
4. Key impact evaluation questions
Impact evaluations should be focused on key evaluation questions that reflect
the intended use of the evaluation. Impact evaluation will generally answer
three types of questions: descriptive, causal or evaluative. Each type of
question can be answered through a combination of different research designs
and data collection and analysis mechanisms.  
 Descriptive questions ask about how things were and how they are
now and what changes have taken place since the intervention. 
 Causal questions ask what produced the changes and whether or not,
and to what extent, observed changes are due to the intervention
rather than other factors.
 Evaluative questions ask about the overall value of the intervention, taking into account intended and unintended impacts. They determine whether the intervention can be considered a success, an improvement or the best option.
Examples of key evaluation questions for impact evaluation, based on the OECD-DAC evaluation criteria:

Relevance
 To what extent did the intended impacts match the stated priorities of the organisation and intended participants?

Effectiveness
 Did the intervention produce the intended impacts in the short, medium and long term? If so, for whom, to what extent and in what circumstances?
 What helped or hindered the intervention to achieve these impacts?
 What variations were there in the quality of implementation in different sites?
 To what extent are differences in impact explained by variations in implementation?
 Did implementation change over time as the intervention evolved?
 How did the intervention work in conjunction with other interventions to achieve outcomes?

Efficiency
 What resources and strategies have been utilized to produce these results?

Impact
 What unintended impacts, positive and negative, did the intervention produce?

Sustainability
 Are impacts likely to be sustainable?
 Have impacts been sustained?
5. Impact evaluation design and methodologies
Measuring direct causes and effects can be quite difficult; the choice of methods and designs for impact evaluation is therefore not straightforward and comes with a unique set of challenges. There is no single right way to undertake an impact evaluation: teams should discuss all the potential options and consider a combination of methods and designs suited to their particular situation.
Generally, the evaluation methodology is designed on the basis of how the
key descriptive, causal and evaluative evaluation questions will be answered,
how data will be collected and analysed, the nature of the intervention being
evaluated, the available resources and constraints and the intended use of the
evaluation. 
The choice of methods and designs also depends on causal attribution, including whether there is a need to form comparison groups and how they will be constructed. In some cases, quantifying the impacts of an intervention requires estimating the counterfactual, that is, what would have happened to the beneficiaries in the absence of the intervention. In most cases, mixed-method approaches are recommended, as they build on both qualitative and quantitative data and draw on several methodologies for analysis.
In all types of evaluations, it is important to dedicate sufficient time to developing a sound evaluation design before any data collection or analysis begins. The proposed design must be reviewed at the start of the evaluation and updated regularly; this helps manage the quality of the evaluation throughout the project cycle. Engaging a broad range of stakeholders, following established ethical standards, and having the evaluation reference group review the evaluation design and draft reports all contribute to its quality.
Descriptive Questions
In most cases, an effective combination of quantitative and qualitative data will provide a more comprehensive picture of what changes have taken place since the intervention. Data collection options include, but are not limited to: interviews; questionnaires; structured or unstructured, participatory or non-participatory observation recorded through notes, photos or video; biophysical measurements or geographical information; and existing documents and data, including data sets, official statistics, project records, social media data and more.
Causal Questions
Answering causal questions requires a research design that addresses “attribution” and “contribution.” Attribution means the observed changes were entirely caused by the intervention; contribution means the intervention partially caused, or contributed to, the changes. In practice, it is quite difficult for an organisation to fully claim attribution for a change, because changes within a community are likely to result from a mix of factors beyond the intervention itself, such as shifts in the economic and social environment, national policy and so on.
The design for answering causal questions could be ‘experimental,’ ‘quasi-
experimental’ or ‘non-experimental.’ Let’s take a look at each design
separately:
Experimental: involves the construction of a control group through random
assignment of participants. Experimental designs can produce highly credible
impact estimates but are often expensive and for certain interventions,
difficult to implement. Examples of experimental designs include:

 Randomized controlled trial (RCT) – in this type of experiment, two groups, a treatment group and a comparison group, are created, and participants are assigned to each group randomly. The two groups are statistically identical in terms of both observed and unobserved factors before the intervention, but the group receiving the treatment will gradually show changes as the project progresses. Outcome data for the treatment and comparison groups, along with baseline data and background variables, help determine the change.
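To make the RCT logic concrete: with random assignment, the impact estimate reduces to the difference in mean outcomes between the two groups. The following is a minimal sketch in Python; the outcome figures, function and variable names are hypothetical illustrations, not part of any standard toolkit.

```python
# Illustrative sketch: estimating impact in a randomized controlled trial.
# With random assignment, the treatment and comparison groups are
# statistically comparable, so the average treatment effect can be
# estimated as the difference in mean outcomes between the two groups.

def mean(values):
    return sum(values) / len(values)

def rct_impact(treatment_outcomes, comparison_outcomes):
    """Estimated average treatment effect: mean(treated) - mean(comparison)."""
    return mean(treatment_outcomes) - mean(comparison_outcomes)

# Hypothetical endline outcomes (e.g. a household income index) for two
# randomly assigned groups:
treated = [52, 58, 61, 55, 60]
comparison = [48, 50, 47, 52, 49]

effect = rct_impact(treated, comparison)
# In practice the estimate would be accompanied by a significance test
# to account for sampling error.
```

Real evaluations would compute this over survey data and test the difference statistically; the arithmetic of the estimate, however, is exactly this simple comparison.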
Quasi-experimental: unlike an experimental design, a quasi-experimental design involves constructing a valid comparison group through matching, regression discontinuity, propensity scores or other statistical means to control for and measure the differences between the individuals treated by the intervention being evaluated and those not treated. Examples of quasi-experimental designs include:

 Difference-in-differences: this measures the improvement or change over time of an intervention's participants relative to the improvement or change of non-participants.
 Propensity score matching: individuals in the treatment group are matched with non-participants who have similar observable characteristics. The average difference in outcomes between matched individuals is the estimated impact. This method assumes there is no unobserved difference between the treatment and comparison groups.
 Matched comparisons: this design compares the differences between participants of the intervention being evaluated and non-participants after the intervention is completed.
 Regression discontinuity: individuals are ranked on specific, measurable criteria, with a cut-off point determining who is eligible to participate. Impact is measured by comparing the outcomes of participants and non-participants close to the cut-off line, using outcome data, data on the ranking criteria (e.g. age or an index score) and socioeconomic background variables.
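The difference-in-differences arithmetic can be illustrated with a small example. The sketch below, using hypothetical figures, subtracts the non-participants' change over time from the participants' change; it is only valid under the assumption that both groups would otherwise have followed parallel trends.

```python
# Illustrative sketch: a difference-in-differences (DiD) impact estimate.
# DiD compares the change over time among participants with the change
# among non-participants; the gap between the two changes is the
# estimated impact, assuming both groups follow parallel trends.

def did_estimate(treat_before, treat_after, comp_before, comp_after):
    """Impact = (change among participants) - (change among non-participants)."""
    return (treat_after - treat_before) - (comp_after - comp_before)

# Hypothetical mean outcomes (e.g. school attendance rate in percent):
impact = did_estimate(
    treat_before=60, treat_after=75,  # participants improved by 15 points
    comp_before=58, comp_after=64,    # non-participants improved by 6 points
)
# The 9-point difference is the estimated impact of the intervention,
# valid only under the parallel-trends assumption.
```

In applied work this estimate is usually obtained from a regression with group, time and interaction terms, but the quantity being estimated is the same double difference shown here.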
Non-experimental: when experimental and quasi-experimental designs are not feasible, a non-experimental design can be used. This approach systematically examines whether the evidence is consistent with what would be expected if the intervention were producing the impacts, and whether other factors could provide an alternative explanation.

 Hypothetical and logical counterfactuals: an estimate of what would have happened in the absence of the intervention. It involves consulting key informants to identify either a hypothetical counterfactual (what they think would have happened without the intervention) or a logical counterfactual (what would logically have happened in its absence).
 Qualitative comparative analysis: this design is particularly useful where there are several different ways of achieving positive impacts, and where data can be gathered iteratively about a number of cases to identify and test patterns of success.
Evaluative Questions
To answer these questions one needs to identify criteria against which to
judge the evaluation results and decide how well the intervention performed
overall or how successful or unsuccessful an intervention was. This includes
determining what level of impact from the intervention will count as
significant. Once the appropriate data are gathered, the results will be judged
against the evaluative criteria. 
For this type of question, you should have a clear understanding of what indicates ‘success’: is it represented as an improvement in quality or in value? One way to find out is to use a specific rubric that defines different levels of performance for each evaluative criterion, deciding what evidence will be gathered and how it will be synthesized to reach defensible conclusions about the worth of the intervention.
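As an illustration of the rubric idea, the sketch below maps scores on each evaluative criterion to performance levels. The criteria, thresholds and labels are invented for the example, not drawn from any standard rubric.

```python
# Illustrative sketch: judging evaluation results against a rubric.
# Each evaluative criterion maps descending score thresholds to
# performance levels. The criteria and cut-offs here are hypothetical
# examples, not a standard rubric.

RUBRIC = {
    "reach":  [(80, "excellent"), (50, "adequate"), (0, "poor")],
    "equity": [(70, "excellent"), (40, "adequate"), (0, "poor")],
}

def rate(criterion, score):
    """Return the performance level whose threshold the score meets."""
    for threshold, level in RUBRIC[criterion]:
        if score >= threshold:
            return level
    return "poor"  # fallback for scores below every threshold

# Hypothetical scores from the evidence gathered for each criterion:
scores = {"reach": 65, "equity": 72}
assessment = {criterion: rate(criterion, s) for criterion, s in scores.items()}
```

The value of making the rubric explicit, as the text notes, is that judgements of "success" become transparent and defensible rather than implicit.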
These are just a handful of the impact evaluation methodologies commonly used in international development. To explore more, check out the Australian Government's guidelines, “Choosing Appropriate Designs and Methods for Impact Evaluation.”
6. Data collection methods for impact evaluation
According to BetterEvaluation, well-chosen and well-implemented methods for data collection and analysis are essential for all types of evaluations and must be specified during the evaluation planning stage. One should have a clear understanding of the objectives and assumptions of the intervention, what baseline data exist and are available for use, what new data need to be collected, how frequently and in what form, and what data the beneficiaries need to provide.
Reviewing the key evaluation questions can help to determine which data
collection and analysis method can be used to answer each question and
which data collection tools can be leveraged to gather all the necessary
information. Sources for data can be stakeholder interviews, project
documents, survey data, meeting minutes, and statistics, among others. 
However, many outcomes of a development intervention are complex and multidimensional and may not be captured by a single method. Using a combination of qualitative and quantitative data collection methods, also known as a mixed-methods approach, is therefore highly recommended: it combines the strengths and counteracts the weaknesses of both types of tools, makes for a stronger evaluation design overall, and provides a better understanding of the dynamics and results of the intervention.
But how do you know which method is right for you?
It is a good idea to consider all possible impact evaluation methods and to carefully weigh their advantages and disadvantages before making a choice. The methods you select must be credible, useful and cost-effective in producing the information that is important for your intervention. As mentioned above, many impact evaluations use mixed methods, and each method's shortcomings can be offset by using it in combination with others. Combining methods also increases the credibility of evaluation findings, as information from different data sources converges, and it can help the team gain a deeper understanding of the intervention, its effects and its context.
7. Impact evaluation deliverables and timelines
Deliverables include an ‘inception report,’ a ‘draft report’ and the ‘final evaluation report’; for complex evaluations, ‘monthly progress reports’ might also be required. These reports contain detailed descriptions of the methodology that will be used to answer the evaluation questions, as well as the proposed sources of information and data collection procedures. They must also set out a detailed schedule of the tasks to be undertaken, the activities to be implemented and the deliverables, plus a clarification of the roles and responsibilities of each member of the evaluation team.
We hope you found this article helpful. Our intention behind the 2-part series was to explain impact evaluation and its key components in a simple manner so that you can plan and implement your own impact evaluation more accurately and effectively.
Before we sign off, a quick reminder that this list is not all-inclusive but rather covers a few key elements that many organisations choose to include in their impact evaluation work plan or ToR. If you know of additional elements included in an evaluation work plan in your organisation, do reach out to us and we'd be happy to add them here.

This article is partly based on the methodological brief “Overview of Impact Evaluation,” by Patricia Rogers at UNICEF, 2014.
Additional Resources:
 Outline of Principles of Impact Evaluation – OECD
 Technical Note on Impact Evaluation – USAID
 Impact Evaluation – BetterEvaluation

Helpful guides & resources on Monitoring & Evaluation (M&E)
June 3, 2021
For any development project to be successful, it must have a
robust monitoring and evaluation system in place. However, the efficacy of
the monitoring and evaluation system depends on many factors, including the
timing of the assessment, the appropriate choice of tools and methodologies,
availability of resources, skills and capacity within the team, proper
mechanism for data collection and analysis, just to name a few. 
Our team at TolaData has put together this list of helpful resources for
different stages of the project cycle to guide you through your entire M&E
journey. These resources will help you choose the right approach, tools and
methodologies, build knowledge and capacity in M&E and enable you to
plan and design M&E systems and plans that are effective and strategic. 
The collection of resources and guides listed in this blog post are beneficial
for organisations, M&E practitioners or individuals who are interested in
learning more about M&E and are committed to understanding and
enhancing the outcomes and impacts of their development projects. 
Monitoring and Evaluation (M&E) resources for
beginners
Guiding principles for evaluators by the American Evaluation Association
is a must-have resource for anyone in the sector. This is intended as a guide
to the professional ethical conduct of evaluators. The five Principles listed in
the guide address systematic inquiry, competence, integrity, respect for
people, and common good and equity. The Principles govern the behaviour of
evaluators in all stages of the evaluation from the initial discussion of focus
and purpose, through design, implementation, reporting, and ultimately the
use of the evaluation.
Guidelines for project and programme evaluations by the Austrian
Development Agency is an excellent resource for those who are new to M&E
and would like to learn the basics. The guide was intended for projects or
programs supported by ADC but its content is relevant for many in the sector.
It introduces M&E and explains the purpose of evaluation while defining the
international evaluation principles and standards and key M&E
terminologies. Additionally, it includes helpful checklists and examples of
M&E reports, data collection planning worksheets and more. 
The roles of monitoring and evaluation in projects by FAO provides clear
and concise definitions of both monitoring and evaluation, their key roles and
purposes, tools, limitations, tips on how to write monitoring and evaluation
reports and more. 
10 steps to successful M&E by TolaData is a short article that lays out the
top ten tips from experts to help you navigate the M&E path successfully.
Topics include how to determine a favourable time to incorporate M&E into
your project, how to choose the right M&E tools and methodologies, the
crucial role of indicators and more.
Glossary of key terms in evaluation and results based management by
OECD has a list of the most common terms used in M&E and their
definitions in English, French and Spanish.
Resources to support your Monitoring and Evaluation
(M&E) planning and implementation
Project/programme monitoring and evaluation (M&E) guide by the
International Federation of Red Cross (IFRC) is a great resource that covers
the A-Z of M&E. It explains the general M&E concepts and considerations
and outlines the six key interconnected steps of an M&E system in detail.
The guide also includes a list of common M&E terminologies and their
definitions, list of factors affecting the quality of M&E information, a
checklist for the six key M&E steps, key data collection methods and tools
and helpful templates for M&E stakeholder assessment, M&E activity
planning and more.
Evaluation flashcards by Michael Quinn Patton introduces the core
evaluation concepts that every evaluator must know, including, evaluation
questions, M&E frameworks, standards, methods, types of evaluation and
more.
A step-by-step guide to monitoring and evaluation by the University of
Oxford is one of our favourite go-to resources. It explains M&E in an easy-to-
understand manner and provides tips and guidance for every step of the M&E
process – from determining which projects to monitor, whom to involve, how
to clarify your aims, objectives, activities and pathways to change to identify
the information you need to collect and how to collect, analyze and utilize
them and more. Plus, it includes a range of examples, templates and resources
that you can use and adapt into your own projects.
Monitoring and evaluation plan for NGOs by TolaData articulates the key
components and important steps in developing a monitoring and evaluation
(M&E) plan and guides you through each step of the process. The best part
about this guide is that it’s written in very simple language and is short and
easy to follow.
The nuts and bolts of monitoring and evaluation systems by the World
Bank group synthesizes existing knowledge about M&E systems and
provides it in a highly succinct, readily understandable, and credible manner.
Although it was intended for policymakers, it is relevant for anyone working
in M&E.
Designing evaluations is a comprehensive guide from the United States
Government Accountability Office. If you need help designing an M&E
system or a plan then this guide is for you. It walks you through the key steps
in designing an evaluation  – from defining the evaluation scope, selecting an
evaluation design, determining the key components of an evaluation design to
determining the criteria for a good evaluation design and more.
Need help formulating good evaluation questions?
Evaluation questions checklist for program evaluation by Lori Wingate
and Daniela Schroeter is an excellent resource. The purpose of this checklist
is to help evaluators in developing high-quality and appropriate evaluation
questions and in assessing the quality of existing questions. It identifies
characteristics of good evaluation questions, based on the relevant literature
and the authors’ own experiences with evaluation design, implementation,
and use.
Monitoring and Evaluation (M&E) guides and toolkits
for data collection
Toolkit for monitoring and evaluation data collection by Cardno (2017),
Pacific Women Shaping Pacific Development Support Unit outlines a set of
‘Guiding Principles’ to consider when collecting monitoring and evaluation
data and provides information, resources, templates and a range of data
collection tools for development professionals to consider, use and adapt
when planning for and collecting  both ‘routine monitoring data’ and
‘periodic internal evaluation data.’ It does not, however, provide guidance for
undertaking larger-scale end of program evaluations or for ‘externally led
evaluations.’
Data collection methods and tools for performance monitoring by USAID
covers the basics of data collection for performance monitoring, including,
primary and secondary types of data sources, common data collection
methods, the process of identifying appropriate data collection tools and
more.
Data collection methods handbook by Measure Evaluation covers 5 modules
to guide you through the process of data collection. The modules include data
collection strategies, data collection general rules, key issues about measures,
quantitative and qualitative data and a toolkit on common data collection
approaches. This handbook is quite thorough and provides a comprehensive
explanation of each data collection method and approach separately.
Monitoring and Evaluation (M&E) resources and
tutorials for data analysis
How to get your dataset ready for analysis is a short blog article by
TolaData that provides a list of best practices to guide you and your team to
get more out of the data you’ve collected. The article provides tips on laying
out your data on spreadsheets, formatting and structuring your data, entering
the data into a data set, plus it explains the most common dos and don’ts in
preparing your dataset for analysis.
Excel Easy is an excellent guide for those who use Excel spreadsheets for
data management, consolidation and analysis. Its tutorials on data analysis
illustrate the powerful features Excel offers for analyzing data – from
sorting your data to conditional formatting, pivot tables and creating charts,
to leveraging an array of tools in Excel such as the ‘Analysis Toolpak,’
‘What-if analysis,’ ‘Solver’ and more.
Resources for data reporting - how best to
communicate your M&E findings?
One page reports by Evaluate is a toolkit that provides evaluators with tools,
examples, and videos on creating one-page reports that summarize data,
findings, or recommendations. The toolkit also includes real-life examples to
further support evaluators in creating one-page reports in their own practice.
Data visualization checklist by Stephanie Evergreen & Ann K. Emery is a
guide for the development of high-impact data visualizations. Topics covered
include tips on how to arrange text, graphical elements, choosing the most
appropriate colours, lines etc. for impactful data visualizations.   
Communicating evaluation findings by Simon Hearn at BetterEvaluation
summarises some great new tools, methods and tips for communicating your
evaluation findings. Plus, he lists some no cost or low-cost tools for data
visualisation and reporting.
Additional websites with excellent resources and
guides on Monitoring and Evaluation (M&E)
BetterEvaluation is an international collaboration to improve the practice
and theory of evaluation. They create and curate information, resources,
guides and toolkits on choosing and using evaluation methods and processes,
including managing evaluations and strengthening evaluation capacity. Plus,
if you are interested in attending events, webinars or other programs on M&E
then keep an eye on their website or their social media channels. 
INTRAC is a not-for-profit organisation that builds the skills and knowledge
of civil society to be more effective in addressing poverty and inequality.
They have a wealth of knowledge on M&E and other international
development themes and topics and they share them in the form of excellent
downloadable guides, books, discussion papers, blog articles and more on
their website. 
EvaluATE is the evaluation hub for evaluators, project leaders and staff,
grant specialists and anyone working in the M&E sector. It covers all things
evaluation. All their resources, webinars, newsletters, blogs, and information
on M&E are open access.
Western Michigan University’s Evaluation Checklist Project’s mission is
to advance excellence in evaluation by providing high-quality checklists to
guide practice. Their checklists are on a diverse range of topics from
managing evaluation to applying specific evaluation approaches, managing
stakeholders and more.  
Racial Equity Tools have a section on evaluation that is designed to help
groups assess, learn from, and document their racial equity work, with special
attention to issues of power and privilege in the work, and in M&E.
We hope you found our compilation of guides and resources on M&E
helpful. Our goal is to promote a good understanding and successful practice
of monitoring and evaluation in international development. 
Didn’t find your go-to M&E guides on the list? Mention them in the comment
section below and we’ll be happy to add them to the list.

Qualitative and Quantitative data collection methods in M&E
June 17, 2021
Data is the heart and soul of monitoring and evaluation (M&E). Valid,
reliable and accurate data can reveal and improve the performance and
impact of your intervention and support decision making and learning, while
enhancing your credibility and accountability. Data is divided into different
categories based on how you source them and the techniques you employ to
gather and analyse them. 
In this article, we will explore:
 Qualitative data collection approach
 Quantitative data collection approach
 Key differences between the qualitative and quantitative data collection approaches and how they are used in M&E
 Some common qualitative data collection tools and methods and quantitative data collection tools and methods to help you choose the ones that best fit your project needs
 Plus, why a mixed-method approach might be your best option
Primary and secondary data
All data are categorized as either ‘primary data’ or ‘secondary data,’ based
on how you source them. Primary data are those that you and your team
collect directly from the main sources, whereas secondary data are those that
were collected by other organisations, government agencies, or independent
research institutions and individuals and are available for use. Secondary data
could be censuses, surveys, organizational records or other previous research,
extracted from books, journals, reports, newspapers, magazines, data
archives, databases etc. 
Once you are ready to collect your data, you will have to decide upon the
data collection tools and methods. This will depend on a number of things,
including the purpose of the data, the local context, cost, timeline, availability
of skills and resources, and most importantly, the indicators and key
questions you have identified and how the collected data will be utilized.
All data are further divided into two broad categories based on the techniques
employed in the field to gather and analyse them – ‘qualitative
data’ and ‘quantitative data.’  Stay with us as we walk you through each
approach and explain their key differences.
Qualitative data collection approach
Qualitative data collection plays an important role in monitoring and
evaluation as it helps you delve deeper into a particular problem and gain a
human perspective on it. It provides in depth information on some of the
more intangible factors like experiences, opinions, motivations, behaviours or
descriptions of a process, event or a particular context relevant to your
project. So, in other words, a qualitative approach uses people’s stories,
experiences and feelings to measure change. 
Compared to a quantitative approach, a qualitative approach is more open,
informal and unstructured or semi-structured, and it provides more flexibility
in how data is collected. Qualitative research is investigative in nature and the
data collected through this process answers the question ‘why’ or
‘how’ –  how do people feel about a situation, or why are health care
facilities underutilized?  This approach relies more heavily on interactive
interviews, discussions and deeper conversations. While using this approach,
many researchers also use triangulation or mixed methods to increase the
credibility and authenticity of their findings. Data is often recorded in the
form of field notes, sketches, audiotapes, photographs and other suitable
means.
Usually the findings drawn from qualitative research are not generalizable to
any specific population, rather each case study produces a unique piece of
evidence that can help identify patterns among different studies of the same
issue. The results produced from this approach can be subjective and as
such can be subject to bias in their interpretation. Analyzing such data can
also be quite complex and time-consuming which can make it an expensive
process.
Quantitative data collection approach
The quantitative approach uses numbers and statistics to quantify change and
is often expressed in the form of digits, units, ratios, percentages, proportions,
etc. Compared to the qualitative approach, the quantitative approach is more
structured, straightforward and formal. The quantitative approach is used to
derive answers to the questions ‘how much’ or ‘how many’ – how many
people attended the workshop, or how often do people visit the health
centre?
Quantitative research is useful for multi-site and cluster evaluations that
involve a large group of respondents or sample population. This approach
relies heavily on random sampling and structured data collection instruments
that fit diverse experiences into predetermined response categories. Typical
quantitative data gathering strategies include, experiments or clinical trials,
gathering relevant data from management information systems, administering
surveys with closed-ended questions or observing and recording well-defined
events. 
Because quantitative methods are not about gaining an in-depth
understanding but rather a general understanding of a particular
context with precise results, quantitative data is easier to collect and analyse
and there is less chance of bias in the interpretation of results. Results are
numerical, objective, conclusive and to the point, so the results are easier to
summarize and generalize and are useful for making comparisons across
different sites or interventions.
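As a toy illustration of how closed-ended responses become the percentages and summaries described above (the responses list is invented):

```python
from collections import Counter

# Hypothetical closed-ended answers to "Did you attend the workshop?"
responses = ["yes", "no", "yes", "yes", "no", "yes", "yes", "no"]

# Tally the answers, then express each count as a percentage of all responses
counts = Counter(responses)
percentages = {answer: 100 * n / len(responses) for answer, n in counts.items()}
print(percentages)  # {'yes': 62.5, 'no': 37.5}
```

The same tally could of course be done in a spreadsheet; the point is that closed-ended data translates directly into comparable numbers.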
Differences between the qualitative and quantitative data
collection approaches
In short: the qualitative approach is open, informal and unstructured or
semi-structured; it answers ‘why’ and ‘how’ questions, uses people’s stories,
experiences and feelings to measure change, and yields in-depth findings that
can be subjective and are not easily generalizable. The quantitative approach
is structured and formal; it answers ‘how much’ and ‘how many’ questions,
uses numbers and statistics to quantify change, and yields objective results
that are easier to summarize, generalize and compare.
Qualitative and quantitative data collection methods
Below, we have summarized key data collection methods and tools used in
monitoring and evaluation (M&E). Most methods and tools can be used in
combination with other tools and methods and are applicable in both
qualitative and quantitative research. However, this list is not complete, as
tools and techniques continually evolve and new tools and techniques keep
emerging in M&E.
The list is adapted from ‘Project/Programme Monitoring and Evaluation (M&E) Guide from the
International Federation of Red Cross and Red Crescent Societies (IFRC), 2011.’
The most common qualitative data collection methods and
tools
 Open-Ended Surveys: allow for a systematic collection of information
from a defined population, usually by means of interviews or
questionnaires administered to a sample of units in the population.
Qualitative surveys include a set of open-ended questions that aim to
gather information from people about their characteristics,
knowledge, attitudes, values, behaviours, experiences and opinions on
relevant topics. Surveys can be collected via pen/paper forms or
digitally via online/offline data collection apps.  
 Open-ended interviews: are useful when you want an in-depth
understanding of experiences, opinions or individual descriptions of a
process. They can be done individually or in groups. In a group, you will
ask fewer questions than in an individual interview, since everyone has
to have the opportunity to answer and there are limits to how long
people are willing to sit still. In-person interviews can be longer and
more in-depth.
 Community interviews/meeting: is a form of public meeting open to
all community members. Interaction is between the participants and
the interviewer, who moderates the meeting and asks questions
following a prepared interview guide. This is ideal for interacting with
and gathering insights from a big group of people.
 Focus group discussions (FGDs): are ideal when you want to interview a
small group of people (6-12 individuals) to informally discuss specific
topics relevant to the issues being examined. A moderator introduces
the topic and uses a prepared interview guide to lead the discussion
and extract insights, opinions and reactions but s/he can improvise
with probes or additional questions as warranted by the situation. The
composition of people in an FGD depends upon the purpose of the
research, some are homogenous, others diverse. FGDs tend to elicit
more information than individual interviews because people express
different views, beliefs and opinions and engage in a dialogue with one
another. 
 Case study:  is an in-depth analysis of individuals, organisations,
events, projects, communities, time periods or a story. As it involves
data collection from multiple sources, a case study is particularly
useful in evaluating complex situations and exploring qualitative
impact. A case study can also be combined with other case studies or
methods to illustrate findings and comparisons. They are usually
presented in written forms, but can also be presented as photographs,
films or videos. 
 Observation: It is a good technique for collecting data on behavioural
patterns, physical surroundings, activities and processes as it entails
recording what observers see and hear at a specified site. An
observation guide is often used to look for consistent criteria,
behaviours, or patterns. Observations can be obtrusive or unobtrusive.
It is ‘obtrusive’ when observations are made with the participant’s
knowledge and ‘unobtrusive’ when observations are done without the
knowledge of the participant.
 Ethnography: Ethnographic research involves observing and studying
research topics in a specific geographic location to understand
cultures, behaviors, trends, patterns and problems in a natural setting.
Geographic location can range from a small entity to a big
country. Researchers must spend a considerable amount of time,
usually several weeks or months, with the group being studied,
interacting with them as a participant in their community. This makes
ethnography a time-consuming and challenging research method that
cannot be confined to a short, fixed period.
 Visual techniques: in this method, participants are prompted to
construct visual responses to questions posed by the interviewers, the
visual content can be maps, diagrams, calendars, timelines and other
visual displays to examine the study topics. This technique is especially
effective where verbal methods can be problematic due to low-literate
or mixed-language target populations, or in situations where the
desired information is not easily expressed in either words or
numbers.
 Literature review and document review: is a review of secondary data
which can be either qualitative or quantitative in nature, e.g. project
records and reports, administrative databases, training materials,
correspondence, legislation and policy documents, as well as videos,
electronic data or photos that are relevant to your project. This
technique can provide cost-effective and timely baseline information
and a historical perspective of the project or intervention.
 Oral histories: it’s the process of establishing historical information by
interviewing a select group of informants and drawing on their
memories of the past. Oral history strives to obtain interesting and
provoking historic information from different perspectives, most of
which cannot be found in written sources. The insights from oral
history can be discussed, debated, and utilized in numerous capacities.
The most common quantitative data collection methods and
tools
 A structured closed-ended interview: this type of interview
systematically follows carefully organised questions that only allow a
limited range of answers, such as “yes/no” or expressed by a
rating/number on a scale. For quantitative interviews to be effective,
each question must be asked the same way to each respondent, with
little to no input from the interviewer. 
 Closed-ended surveys and questionnaires: are an ideal choice when you
want simple, quick feedback which can easily translate into statistics
for analysis. In quantitative research, surveys are structured
questionnaires with a limited number of closed-ended questions and
rating scales used to generate numerical data or data that can be
separated under ‘yes’ or ‘no’ categories. These can be collected and
analysed quickly using statistics such as percentages.
 Experimental research: is guided by hypotheses that state an
expected relationship between two or more variables, and an
experiment is conducted to support or disconfirm the hypothesis.
Usually, the independent variable is manipulated for one group (the
treatment group) while a comparable group (the control group) is not
exposed to it. The effect of the independent variable on the dependent
variable is observed and recorded in both groups to draw a reasonable
conclusion about the relationship between them. This research design
is mainly used in the natural sciences.
 Correlational research: is non-experimental research that studies
the statistical relationship between two or more variables – how one
variable varies with the other – without manipulating either of them.
Because extraneous variables are not controlled, correlation on its own
cannot establish causation. The collected data are analysed
mathematically and the results are presented in a diagram or as
statistics.
 Causal-comparative: also known as quasi-experimental research,
examines cause-and-effect relationships where the independent
variable cannot be manipulated, by comparing groups that already
differ on that variable. Participants are not randomly assigned to the
groups.
 Statistical data review: entails a review of population censuses,
research studies and other sources of statistical data.
 Laboratory testing: is the precise measurement of a specific objective
phenomenon, e.g. infant weight or a water quality test.
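Two of the strategies above can be sketched numerically. The figures below are invented for illustration only: a difference in group means as the simplest experimental comparison, and a Pearson coefficient as the standard correlational summary:

```python
from math import sqrt
from statistics import mean

# Experimental sketch: outcome scores for a treatment and a control group
treatment = [72, 78, 75, 80, 74]  # group that received the intervention
control = [68, 70, 69, 71, 67]    # group that did not
effect = mean(treatment) - mean(control)  # estimated effect of the intervention
print(f"Difference in means: {effect:.1f}")

# Correlational sketch: paired observations, e.g. training hours vs. test scores
x = [2, 4, 6, 8, 10]
y = [50, 55, 62, 66, 73]
mx, my = mean(x), mean(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
r = cov / (sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y)))
print(f"Pearson r: {r:.3f}")  # close to 1 -> strong positive relationship
```

A real evaluation would also test whether the mean difference is statistically significant (e.g. with a t-test), which this sketch omits.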
Why a mixed method approach might be your best
option for data collection
Each data collection tool and method has its own advantages but
development projects are complex and their intricate dynamics cannot be
disentangled through one method or one data collection tool alone. Therefore,
mixing qualitative and quantitative methods and using different data
collection techniques is recommended as it could add value to the monitoring
and evaluation of your development projects. 
Using a combination of quantitative and qualitative methods, which is often
called a mixed-method approach, enables researchers to gain a more holistic
understanding of the intervention and why it is or isn’t producing the
expected outcomes. It also addresses the shortcomings and limitations of each
method to provide more coherent, reliable and useful conclusions and
increases the overall confidence in the validity of the evaluation results.
Using mixed methods helps to capture a wider range of perspectives and in
some cases, one method can be used to help guide the use of another method,
or to explain the findings from that method. You can measure what happened
with quantitative data and examine how and why it happened with qualitative
data. Qualitative methods also help to uncover issues during the early stages
of an intervention that can then be further investigated using quantitative
methods, or quantitative methods can highlight particular issues that can be
examined in-depth with qualitative methods. Mixed methods can be more
costly and time-consuming, but in most cases the added insight justifies the
investment.
Want to learn more about mixed-method approaches? Check out this World
Bank’s document on “Combining Quantitative and Qualitative Methods for
Program Monitoring and Evaluation: Why Are Mixed Method Designs
Best?”
We hope you found our article on qualitative and quantitative data collection
methods helpful. As you can see, both approaches have their own advantages
and disadvantages, but when used in a balanced combination they can provide
reliable evidence of the progress and shortcomings of your development
projects and help you make data-driven decisions for timely improvement.
The choice of methods ultimately depends on the nature of your project.
Whichever approach you choose, it is important to observe the ethical
principles of research for all data collection methods and tools.
For more on the ethics of data collection, check out this short report by
INTRAC – Principles of Data Collection.

Monitoring & Evaluation interview with expert Linda Ntsiful
July 1, 2021
Ever wondered what a career in monitoring and evaluation (M&E) might
look like? As the initiator of the Women in Research, Monitoring, Evaluation
and Learning (RMEL) group and with over 5 years of professional
experience in the sector, expert Linda Kpormone Ntsiful reflects on her M&E
journey. Linda highlights the key challenges, opportunities and new trends in
the sector and shares excellent tips for the young and emerging evaluators
(YEEs) looking to advance their careers in M&E.
Let's get to know Linda...
Which organisation do you work for?
I work for Mobile Web Ghana as a Communications and Fundraising Officer.
In addition, I work on two personal initiatives:
1. Women in Research, Monitoring, Evaluation and Learning (RMEL), as the initiator
2. Creative Hive Foundation, as the founder
Which region(s) of the world have you worked in so far?
I have worked in Ghana (in West Africa), USA and Belgium (in Europe).
Where did you like working the most?
I enjoyed working in the USA the most as an Undergraduate Researcher
because the role included a training component that was linked with my past
research experience, allowing me to utilise my knowledge and past exposure
in oceanography and fisheries.
What is your favourite dish?
‘Waakye’ (wah -chay) is a popular Ghanaian dish made from cooked rice and
black-eyed beans. These are cooked together with red dried sorghum leaves.
Once served, it is accompanied by beef or egg, salad, spaghetti or grated
cassava (gari). I hope I am not whetting some people’s appetite at the wrong
time... haha.
What is your favourite leisure activity (other than M&E, of
course!)?
I enjoy doing a number of things for leisure, including reading, art, painting,
singing, sightseeing and watching documentaries.
Let’s talk about your career in M&E
How long have you worked in the M&E sector?
For 5 years now.
What inspired you to pursue a career in M&E?
I was inspired to pursue this career because of my desire to see how data
collected from projects and programs contributes to the lives of target groups
and beneficiaries. Data helps to identify and avoid risks and to manage
project resources better, so that problem areas can be adjusted quickly. Given
that my work and passion are centred on social development projects, I also
review and use data frequently to inform the design of these initiatives.
What has been your happiest/most joyful moment related to
your M&E work?
Generally, I feel good when I am able to support projects by providing data
that allows the team to demonstrate their impact towards fulfilling the goals
and objectives of their interventions. My biggest moment was when I was
able to provide data that allowed the cocoa farmers who belonged to the
largest cocoa cooperative in Ghana to keep track of their farming activities, to
monitor changes, ensure compliance to sustainable standards and to increase
cocoa yields resulting in improved livelihoods of the beneficiaries.
What has been your most frustrating/least joyful moment
related to your M&E work?
In one of my former roles as a Data Impact Coordinator, there was an
instance when a client had high expectations from a data collection platform
that my team had provided them with to gather information. They expected
the data file to be exported in a particular format only to realise that the
exported data file was not in the form they had anticipated. It was a very
challenging moment for the whole team, but we managed to resolve it by
providing the client with alternative options to analyse the data extracted
from these data files which enabled us to keep working on the project and
maintain a good working relationship with the client.
M&E in your perspective
In your opinion, what are the biggest opportunities in M&E
today?
It is great to see many opportunities emerging for improved collaborations
and expansion of networks between M&E associations from different regions
in the world, thanks to the growth in online events such as webinars and
training. These are great innovations and are vital in these unprecedented
times of the COVID-19 pandemic. Also, there are many scholarships and
other opportunities available for young and emerging evaluators (YEEs) who
want to advance their careers in M&E.
To get started, check out IPDET’s website or EvalYouth’s FB page and
their Twitter account for the latest updates and opportunities for new
evaluators.
What do you think are the biggest challenges in M&E today?
Below are the five biggest challenges I have noticed in M&E today: 
 Firstly, most challenges identified are tied to limited funding/budget
for M&E and limited access to project resources.
 Secondly, there are different career paths within M&E and so it might
be challenging for individuals new to the sector to identify which areas
of M&E to specialise in.
 Thirdly, for professionals working in advocacy, if their intervention fails
to push through a policy that benefits a vulnerable or marginalized
group, the emotional impact can be significant. Initiatives sometimes
overestimate the change they contribute in the short term but
underestimate the long-term change they may make.
 Also, there is a lack of data trust in the sector. Billions of dollars are
spent at the behest of donors but in most cases, data is collected just
because donors require international development projects to report
and not because they want to distil insights and learn from the data.
 Finally, I have often seen that many projects lack a theory of change
driven data collection mechanism. Most organisations’ data collection
is either non-existent or lacks a robust data strategy. Where data is
collected, the focus is often on activity and output data rather than
outcome and impact data, which usually does not align with or validate
the primary mission and vision of the organisation.
Are you aware of any emerging trends in the M&E sector?
How do you feel about them?
Here are the five emerging trends that I have identified in the M&E sector:
 Firstly, it is interesting to see how M&E is taking a different twist, not
only with evaluating processes but also with interconnected systems,
which are mostly used in environmental projects and are ideal for
evaluating several components within a single project.
 Secondly, Young and Emerging Evaluators (YEEs) are increasingly being
recognized, as demand for their work grows with the growing need for
M&E in the development sector.
 Thirdly, the M&E industry has witnessed strong growth, driven by the
increasingly personalized media interactions demanded by today’s
consumers, who are enthusiastic and very demanding in the way they
would like to consume content based on their choice of medium,
context, schedule and preferences.
 Needless to say, the industry is constantly innovating and bringing new
tools and applications to the market. 
 Finally, the actual reality is that the digital and online market is
disrupting the industry, and players in the traditional industry are
being impacted. This is evidenced by the growth in internet ads
expenditure. But the shift to digital platforms for media consumption
implies that demand for data will continue to grow.
In your opinion, what are some good skills to have to work in
M&E?
The field requires technical skills such as research, inductive and deductive
reasoning, project management, reporting, data collection, statistical and
analytical skills, as well as data management skills. It’s also good to be
equipped with additional skills, including coordination, proposal writing,
communications and networking, presentation, fundraising and digital
skills.
Digital Monitoring and Evaluation (M&E)
What are your thoughts on “the digitisation of M&E”?
It is a step in the right direction because technology is growing at a blistering
pace. Thanks to the development of apps, dashboards and M&E software and
platforms, processes such as developing log frameworks, theories of change,
data collection, visualisations, analysis and reporting have become much
easier.
Have you used any digital tools for M&E? How was your
experience?
Yes, I have used digital tools for M&E such as mobile data collection
platforms (e.g. Poimapper, Akvo Flow, Akvo Lumen, SurveyCTO,
CommCare, KoboToolbox and others) and experimented with M&E
platforms like Toladata. On all occasions, I have enjoyed the experience. It
makes data collection, management and reporting of results a lot easier, plus
it enables me to gain an enhanced insight from the collected data.
Tips and recommendations
Do you have any advice for individuals who are thinking
about starting a career in M&E?
I highly recommend interested individuals to join an association involved in
M&E to meet and engage with professionals, take courses available on
Massive Open Online Courses (MOOC) sites or enroll with a recognised
institution that offers degree courses on M&E. Also, participate in
webinars/workshops such as the gLOCAL Evaluation Week. Sign up as a
volunteer with an M&E organisation such as EvalYouth or participate in
evaluation competitions which will allow you to work on a project and
expand your network. And of course read publications on M&E.
Are there any books or resource materials on M&E that you
would like to recommend to our readers? (optional)
 I would like to recommend a manual by the International Fund for
Agricultural Development on M&E as well as the Ultimate Guide to
Effective Data Collection by Atlan, an Indian IT services company. There are
other interesting resources on M&E by INTRAC on their M&E universe
website. SoPact, Better Evaluation, Khulisa Management Services and Ann
Murray Brown’s websites also have a lot of excellent resources. If you are a
visual learner, there are many M&E resources on YouTube too.
Some final words for our readers...
What is the funniest reaction you’ve encountered when you
told someone (maybe from outside the non-profit sector)
that you were “working in M&E”?
I informed a friend about my interest in M&E and she said she had no
clue what it was all about so I decided to tell her. After I explained it to her,
she said, that’s for “sharks” (a term in Ghana used to describe a smart person)
like you. Surprisingly, I never assumed M&E was necessarily for smart
people…haha
Any final words to your colleagues in the M&E sector?
I would like to implore colleagues to join a local M&E association which will
serve as a gateway to engaging with other M&E associations in the world.
Take advantage of digital tools for M&E as well as embrace new innovations.
I would like to acknowledge the Ghana Monitoring and Evaluation Forum
and YEE which I belong to for creating the platform to broaden my
knowledge in M&E. 
I would also like to take this opportunity to encourage women RMEL
professionals to be a part of ‘Women in RMEL’, the online community I
started this year. It aims to get more women into the research, monitoring,
evaluation and learning (RMEL) sector and to be a support group where
people can share their experiences and opportunities with the next
generation of RMEL professionals.
A big thank you to Linda for taking out the time from her busy schedule to
answer our questions and for sharing her personal and professional
experiences in the M&E sector with us. We hope you enjoyed exploring M&E
from Linda’s perspective as much as we did and found her tips and
recommendations helpful.
 Do let us know if you have any questions that you would like us to feature on
our next M&E interview with an expert. 

How to write a good monitoring and evaluation report - guidelines and best practices
July 27, 2021
Reporting is an integral part of any monitoring and evaluation plan or
framework. Good reporting enables organisations to communicate the value
of their work and their impact while allowing them to demonstrate aid
effectiveness and enhance performance, collaboration, learning and
adaptation within their organisation and throughout the entire project cycle.
In this article, we will explain what monitoring and evaluation reporting is,
how it’s done and how your organisation can benefit from periodic
reporting. Plus, stay with us as we walk you through some of the best
practices of M&E reporting – M&E report formats and frequency, what to
include in your M&E report and some top guidelines from experts to help
you streamline your reporting process and write reports that are credible
and constructive.
What is monitoring and evaluation (M&E) reporting
and how is it done?
Reporting is the documentation and communication of M&E results to
appropriate audiences at specified times. The key purpose of reporting may
be to account for funds expended, to provide rich data for the decision-
making process or to improve targeting and coordination of investments and
on-ground actions. Reporting can be done at a project or program level. Most
M&E reports include a financial summary of a project as well as updates on its
progress and achievements, activities undertaken, inputs supplied, money
disbursed, key findings, results, impacts, plus, conclusions and
recommendations from the interventions that have been compiled from
various monitoring and evaluation activities and data sources. 
The goal of reporting is to present these collected and analyzed data as
information or evidence to key stakeholders and investors to utilize and to
increase their confidence in the project and the implementing team.
Benefits of periodic monitoring and evaluation (M&E)
reporting
Periodic reporting on M&E data helps internal staff and management teams
to assess and communicate their transparency and accountability to their
stakeholders, partners, funders, beneficiaries and others. It enables them to
identify and interpret the progress their interventions have made against their
set targets and indicators and its impact in the community of interest and its
people. 
M&E reports also help the team to test the effectiveness of their underlying
assumptions, project activities, design, strategy and suggest ways for future
adaptation and improvements. Moreover, M&E reports allow the team to
identify and share challenges they have encountered, unexpected changes that
have emerged in the process, along with underlying reasons for under-
performance or shortcomings of existing management and monitoring
systems and their proposed recommendations and action plans for
improvements of subsequent work plans and sustainability of their results. 
M&E reports help the stakeholders, partners, donors and others involved in
the project to grasp a clear picture of the performance of the project and its
real impact on the ground, helping them make evidence-based decisions to
improve the current intervention and design better projects in the future.
These reports also help the higher-up management teams to make
adjustments to their internal operations and make recommendations for the
state or country level policy amendments. Moreover, the evidence from such
reports also help the donors to direct aid and funding to where it’s needed
most – to address the most critical issues and help the most vulnerable
communities in need.
Monitoring and Evaluation (M&E) reports - frequency
and formats
Each organisation is different and so are its projects, hence every
organisation has its own unique reporting system. M&E reports can be
produced and distributed on a weekly, bi-weekly, monthly, quarterly, bi-
annually or on an annual basis. Weekly and bi-weekly reports are usually
concise and shared with the internal team and some external stakeholders to
keep them up to date on the project progress against their targets, budget, any
changes made to the project or the implementing team etc. However,
monthly, quarterly, bi-annual or annual reports are much more comprehensive
and include more details and evidence on the progress of the intervention,
project inputs, activities, outputs, outcomes, lessons learned,
recommendations etc. These are shared with a wider audience, including
partners, donors and other stakeholders.
Many organisations report on their M&E data in a traditional narrative format
in the form of paper reports. However, with the emergence of digital
technology and approaches, many others are adopting new and innovative
mediums to report, such as videos, recorded audio, interactive media,
mapping, data visualization, interactive dashboards, online presentations and
more. How often and in which format reports are produced and distributed
depends on the organisation and its reporting system. The frequency and
format also depend on the nature of the project, its M&E plan and log frames,
the resources available, the requirements of the donors and the audience of
the report – how and by whom reporting data will be utilized etc.  
Note that many organisations report on their monitoring and evaluation
activities together, however, there are exceptions. In some organisations,
monitoring and evaluation reporting is done separately. Monitoring is done
by implementing staff members and it’s undertaken more frequently than
evaluation. Evaluation on the other hand could be undertaken by internal staff
or external consultants.
See how TolaData’s configurable dashboards can add value to your
organisation’s reporting processes. 
Some key points to keep in mind before writing your
M&E report
 Have you identified indicators for each project activity? Indicators
must be relevant and easy to track, measure and report. Here’s how
to create indicators that make sense. 
 Have you identified key monitoring and evaluation questions and
determined what data will need to be collected and which tools and
methods will be used to collect them?
 How will you collect, consolidate and analyse your data and distil
insights from it? Here’s how you can integrate data from multiple
sources and tools. 
 How will you compile and consolidate the findings and results and
include them in your report? 
 How frequently will you send out your report and in what format?
 Be mindful of the audience and timing of your report. M&E reports are
effective only when they are submitted to the right people at the right
time and facilitate corrective decision-making.
What goes into an M&E report?
Please note that this list includes some common elements included in an
M&E report. As mentioned above, every organisation and every project is
different and so is their reporting system. Therefore, you will have to adjust
this list according to the nature of your project, the requirements of your
donors and stakeholders and the audience of your report.
This list has been adapted from the Evaluation Report Checklist by USAID.
Guidelines for writing credible and constructive
monitoring and evaluation (M&E) reports.
The following guidelines have been adapted from the document – Monitoring
and Evaluation Guidelines from the UN World Food Programme.
1. To help ensure efficiency, the purpose of reporting should be clearly
defined. Be sure to include a section in the introduction describing the
need to produce this report and its anticipated use. 
2. Make sure the information you are providing is accurate, complete,
reliable, timely, relevant and easy to understand. 
3. Be clear who your audience is and ensure that the information is
meaningful and useful to them. If needed, tailor the content, format
and timing of the report to suit the audiences’ needs. Information is of
little value if it is too late or infrequent for its intended purpose. 
4. Consistency is key. Reporting should adopt units and formats that
allow comparison over time, enabling progress to be tracked against
indicators, targets and other agreed-upon milestones.
5. Make sure your report is concise and the layout clean and consistent.
6. Focus on results and accomplishments and link the use of resources
allocated to their delivery and use. 
7. Be sure to include a section describing the data sources and data
collection methods used so that your findings are objectively
verifiable.
8. Write in plain language that can be understood by the target
audience. Avoid complex jargon and excessive detail if possible and be
consistent in your use of terminology, definitions and descriptions of
partners, activities and places. Be sure to define any technical terms or
acronyms in the annex section.
9. Make use of graphs and charts to communicate your findings. Present
complex data with the help of figures, summary tables, maps,
photographs, and graphs that are easier to understand.
10. Include references for sources and authorities.
11. Include a table of contents for reports over 5 pages in length.
12. Make sure your reporting system is cost effective. Avoid
excessive, unnecessary reporting. Information overload is costly and
can burden information flows and crowd out other, more relevant
information.
13. Be open to feedback. Make sure to include an email address, a
physical address or a telephone number for the recipients to send
their feedback on the report. 
As we can see, there are many benefits of good and timely reporting and it
should be a part of every development project and its monitoring and
evaluation system. However, many organisations continue to report on their
progress only as a part of the donor requirement, without giving much heed
to the huge prospect of collaboration, learning, adaptation and improvement.
Therefore, a good reporting system should have a balance of all these
elements, plus quality project management and good coordination and
communication flow within the team. 
We hope you found our article helpful. If you have any comments or
suggestions on how we can improve it, please leave a comment below.
Key References
 Monitoring and evaluation guidelines, UN World Food Programme
 Reporting, INTRAC
 Evaluation report checklist,  USAID

Qualitative indicators and their relevance in M&E
August 11, 2021
Indicators are integral to the design, implementation and monitoring and
evaluation of every development project. Different types of indicators are
used in monitoring and evaluation, such
as ‘quantitative’, ‘qualitative’ and ‘hybrid or mixed’. In this blog post, we
will deep dive into qualitative indicators and explain what they are, how
they differ from other indicators, how to formulate them, how to use
progress markers to set them and last but not least, how to measure
qualitative indicators and the tools you can use to measure them. 
But before we jump into qualitative indicators, let’s understand what
indicators are and why they are important in M&E.
What are indicators and why are they important?
Indicators are important tools to benchmark, target and monitor performance.
Simply put, indicators are variables that provide reliable means to measure
achievements, changes and transformations brought about by an intervention
which helps the project team to gain a holistic understanding of their
intervention, assess their actual progress against their targets, set priorities
and allocate resources based on real needs, learn from their experiences and
provide a clearer picture of their results to their stakeholders for making
evidence-based decisions. 
More on how to create indicators that make sense.
What are qualitative indicators?
Development projects are complex, therefore, in order to fully understand
their processes and outcomes, we need approaches that are not only based
exclusively on the measurement of tangible and material outcomes but also
approaches that are able to explain the intangible characteristics, properties,
abstract concepts and qualitative processes of a project. This is where
qualitative evaluation comes into play – qualitative evaluation sees
development projects as dynamic and evolving and not necessarily following
a predetermined direction. 
Qualitative indicators are naturalistic, which means they do not attempt to
manipulate the project or its participants for evaluation; rather, they study
the processes naturally as they unfold. The naturalistic approach helps to go
beyond pre-determined and expected outcomes to capture the unexpected,
differential impact and the actual changes that occur as a result of a project. 
Unlike quantitative indicators, qualitative indicators do not strictly involve
enumeration, which allows them to capture a much broader picture and
nuances of a project and describe the nature of the changes more thoroughly.
These indicators provide detailed information on any changes taking place,
the nature, character, extent and scope of these changes or the process leading
to those changes. The qualitative approach also emphasises the importance of
getting close to project participants in order to understand more authentically
their realities and the details of their everyday lives. 
Qualitative evaluation is holistic, meaning it sees a project as a working
whole which allows the team to understand and analyse the project from
many different perspectives. This holistic approach is also useful in capturing
different dimensions of a development project, including specific information
about the context, participants, interrelationships with other projects,
activities, patterns of behaviour or the nature of the relationships among
groups, individuals, organisations, etc. Therefore, qualitative measurements
are best suited to help track the progress of innovative, complex, multifaceted
or multi-dimensional projects, contexts and objectives that are not easily
measurable by just quantitative means.
How are qualitative indicators different from
quantitative and hybrid indicators?
How to formulate good qualitative indicators?
There are many criteria to consider when developing an appropriate set of
indicators for a project. The nature of indicators depends on your project’s
specific objectives and its social and cultural aspects. Identification of
indicators must be done in the project planning or design phase and it must be
a participatory activity with inputs from different team members and
stakeholders. Indicators must be reviewed and redefined on a regular basis to
match with the evolving context of the project. When developing indicators,
make sure they are realistic, intelligible, unambiguous, replicable,
comparable, easy to monitor, record and report on, yield high-quality data
and are context-specific. More on qualitative and quantitative data collection
methods in M&E.
Once indicators are developed, it is equally crucial to understand how to use
them. One single indicator can be measured with different methods and at
different levels of quantification. Therefore it is best to begin by asking these
questions: what do I want to measure and what do I want to use it for? Being
explicit about your assessment objectives is crucial for setting sound and
reliable indicators.
To establish a qualitative indicator, it is necessary to set strict criteria to
measure change over time and to reflect on the outcome, so that you can track
your actual progress against your set objectives. A qualitative indicator is
usually expressed in descriptive terms, which can include statements and
narratives. Examples of qualitative indicators include – NGO functional
capacity; level of participation of women in local governance; involvement in
decision making about service delivery; level of employee satisfaction;
changes in knowledge and attitudes, etc.
Top tips:
 Structure your indicator to allow annual assessments against
predetermined and well-defined criteria.
 Your statement needs to describe the gradual or milestone changes so
they can be assessed over time.
 Your indicators must meet the standards for indicator validity and
reliability and they must be SMART (specific, measurable, achievable,
relevant and time-bound).
 Consider the system for data collection, including who will do it and
how often. Do you have the required capacity and resources for data
collection?
 Establish controls to maintain the data quality.
 Leave room for flexibility: unexpected developments and changes
emerge all the time, so make sure you are able to adjust, remove or
add new indicators as needed.
Progress Markers as an innovative methodology to
set qualitative indicators
Outcome mapping (OM) is a methodology for planning, monitoring and
evaluating development initiatives in order to bring about sustainable social
change. This methodology is useful when you need to identify individuals,
groups, organisations or stakeholders with whom you will work directly to
influence behavioural change, plus when you need to track and monitor
behavioural change and the strategies to support those changes.
The methodology of Outcome Mapping introduces the concept of boundary
partners that can be useful in identifying intended users of an evaluation by
focusing on those who are directly associated with the initiative itself. It’s a
participatory approach to develop outcome challenges and progress markers
together with the boundary partners.
Progress markers are a set of statements describing a gradual progression or
milestone changes in a boundary partner leading to the outcome challenge.
Progress markers are very useful to measure progression through a qualitative
approach towards the outcome, therefore progress markers are central in the
monitoring and evaluation process. Progress markers can be seen as
qualitative indicators as they can be used to monitor the progress of the
project towards achieving its overall goal in a systematic manner.
Examples of milestones that you can set in Progress Markers are:
 Love to see: expanding influence, sharing expertise, behaviour change.
 Like to see: active engagement, learning, commitment.
 Expect to see: early encouraging response to the project, initial
engagement. 
Each Progress Marker describes a changed behaviour by the target group and
can be monitored and observed. Progress Markers are usually established as
a set, graduated by stages from easier to more difficult, reflecting the
change process of a target group.
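As a rough sketch of how such a graduated set of markers might be tallied during monitoring, consider the Python example below. The marker texts, their grouping into the "expect/like/love to see" tiers above, and the observed behaviours are all hypothetical:

```python
# Hypothetical progress markers for one boundary partner, grouped into
# the graduated "expect / like / love to see" tiers. All marker texts
# are invented for illustration.
progress_markers = {
    "expect to see": ["responds to project outreach", "attends first workshop"],
    "like to see":   ["applies new practices", "shares learning with peers"],
    "love to see":   ["advocates for change beyond the project"],
}

# Behaviours actually recorded during a monitoring visit (illustrative).
observed = {
    "responds to project outreach",
    "attends first workshop",
    "applies new practices",
}

# Tally how many markers in each tier have been observed so far.
achieved = {
    tier: sum(1 for marker in markers if marker in observed)
    for tier, markers in progress_markers.items()
}

for tier, markers in progress_markers.items():
    print(f"{tier}: {achieved[tier]}/{len(markers)} markers observed")
```

Repeating this tally at each monitoring round gives a simple, systematic picture of how far a boundary partner has moved along the graduated set.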
How to measure qualitative indicators?
When you set up qualitative indicators, it is important to think of your data
collection system. Some tools are more appropriate for process-oriented
indicators, some for indicators that measure certain criteria. Other formats are
good tools to assess particular attributes or for capacity building. It is
important to pick the one that provides the best kind of information you need
to measure your qualitative indicator.
Some tools you could use to collect data for qualitative indicators:

 Focus groups: these allow participants from a particular sample or
sub-set of the population to answer questions and offer opinions or insights
on a topic, event or perceived change.
 Direct observations: are consistent and organized site visits by
observers. These are ideal for collecting data about what goes on in practice,
rather than what goes on in theory. 
 Assessment of attributes: this is useful to assess the development of
specific outcomes, components or elements of a project. It takes a set of
attributes and asks questions about those attributes to gain a better
understanding of the progress of the project.
 Rubrics: you could develop a rubric that consists of statements that
describe criteria for assessing different levels of performance within your
outcomes.
 Milestone scales: this tool outlines sequential stages or milestones of
your project and its processes and measures movement along this scale.
These work best when each stage is clearly defined and when the defined
stages realistically represent the local context and processes involved.
 Key informant interviews (KII): this is a tool that could be structured
or semi-structured relying on a list of issues or topics to be discussed
according to the qualitative indicator.
 Questionnaires: the questions are organised around key aspects of a
particular subject or key aspects of a particular process, phenomenon,
function, or organisation.
 Case study:  this method is an important tool for gathering, integrating
and accessing information across a manageable set of well-selected cases. 
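To make the rubric and milestone-scale ideas above concrete, here is a minimal Python sketch; the indicator, the stage descriptions and the numeric scores are invented for illustration, not drawn from any standard scale:

```python
# A hypothetical milestone scale for one qualitative indicator, e.g.
# "level of participation of women in local governance". Each stage is
# a clearly defined, observable behaviour, ordered from easier to harder.
MILESTONES = [
    (0, "no participation in governance meetings"),
    (1, "women attend meetings as observers"),
    (2, "women speak and raise issues in meetings"),
    (3, "women hold formal decision-making roles"),
]

def score(observed_stage: str) -> int:
    """Map an observed stage description to its position on the scale."""
    for value, description in MILESTONES:
        if description == observed_stage:
            return value
    raise ValueError(f"unknown stage: {observed_stage!r}")

baseline = score("women attend meetings as observers")       # stage 1
midline = score("women speak and raise issues in meetings")  # stage 2
print(f"change since baseline: +{midline - baseline} stage")
```

Because each stage is tied to a defined, observable criterion, movement along the scale can be assessed consistently at each monitoring round, which is what makes a qualitative indicator trackable over time.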
Utilizing qualitative indicators in development projects is still in its
infancy. However, with new changes and progress within the development
sector, key development players and professionals have realized that
numerical indicators alone are rarely adequate to evaluate the complex
dynamics of development projects and capture their holistic picture.
Therefore, many are adopting a combination of qualitative, quantitative and
hybrid indicators into their projects to strike a balance between practicality
and comprehensiveness, and demonstrate a richer understanding of the
dynamics at play.
We hope you found our article helpful. If you have any suggestions or
comments on this article or ideas for improvement then please mention them
in the comment section below.
Key References
 Indicators,  INTRAC 
 Wageningen University & Research (2018). Outcome Mapping.
 Outcome Mapping Practitioner Guide, Outcome Mapping Learning
Community
 Handbook on Qualitative Indicators, USAID
 Combining Quantitative and Qualitative Aspects of Indicators for
Assessing Community Resilience, Daniel Becker, Stefan
Schneiderbauer, John Forrester and Lydia Pedoth.

Data Disaggregation & its key role in International Development
October 12, 2021
In the 2030 Agenda for Sustainable Development, the UN member states
have pledged to leave no one behind – an unequivocal commitment to
eradicate poverty, end discrimination and exclusion, and reduce the
inequalities and vulnerabilities in all its forms. However, progress in this
area has been slow. If we as a community are to truly reach the poorest of
the poor and combat discrimination and rising inequalities, then concerted
efforts are needed to boost the collection and use of reliable and high-quality
data, disaggregated across multiple dimensions.
Our article is aimed at practitioners who work across NGOs, INGOs,
governments, CSOs or other actors who are involved in development
projects. The guide is intended to help development actors understand the
concept of data disaggregation and inspire them to make it a part of every
intervention. We will explore what data disaggregation is, how it’s different
from data aggregation, its key benefits, common challenges, how it’s relevant
for decision and policy making and why it’s important in international
development.
What does Data Disaggregation mean?
Disaggregation of data refers to the breakdown of gathered information into
smaller units or variables to gain a deeper understanding of a situation or to
clarify underlying trends and patterns. Data may be grouped by different
dimensions, such as age, sex, geographic area, education, ethnicity, disability,
social status or other socioeconomic variables.
Disaggregation dimensions are the characteristics by which data is to be
disaggregated (by sex, age, geographic location) and the disaggregation
categories are the different characteristics under a certain disaggregation
dimension (i.e: female/male). 
According to the Inter-Agency and Expert Group on SDG indicators
(IAEG-SDGs),
“Disaggregation is the breakdown of observations within a common branch
of a hierarchy to a more detailed level to that at which detailed observations
are taken. With standard hierarchical classifications, categories can be split
or disaggregated when finer details are required and made possible by the
codes given to the primary observations.”
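As an illustration of the definitions above, here is a minimal Python sketch using invented beneficiary records and the example dimensions of sex and age group (none of the data or category labels come from the article). It shows how one aggregated headline figure breaks down into disaggregated views:

```python
from collections import Counter

# Hypothetical beneficiary records; each tuple is (sex, age_group).
records = [
    ("female", "15-24"), ("male", "15-24"),
    ("female", "25-49"), ("female", "15-24"),
    ("male", "25-49"),
]

# Aggregated view: a single headline number.
total_reached = len(records)

# Disaggregated views: the same data broken down by one dimension
# (sex) and cross-tabulated by two dimensions (sex and age group).
by_sex = Counter(sex for sex, _ in records)
by_sex_and_age = Counter(records)

print(total_reached)                         # 5 people reached overall
print(by_sex["female"])                      # 3 of them are female
print(by_sex_and_age[("female", "15-24")])   # 2 are females aged 15-24
```

Here "sex" and "age group" are the disaggregation dimensions, while "female"/"male" and the age bands are the disaggregation categories, mirroring the terminology above.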

Aggregated data vs. disaggregated data
Before we explore data disaggregation further, let’s first understand how it
differs from data aggregation. 
The above table shows the differences between the two concepts but it is
important to keep in mind that even disaggregated data can be further
aggregated by combining results from different activities, projects and
programs to get a more comprehensive understanding of an organisation’s
operations on a local, national or global level and to see its overall impact. 
More on the aggregation of disaggregated data and how TolaData can
assist you with the process. 
Benefits of Data Disaggregation for the International
Development community
Data disaggregation is critical to development success and sustainability. By
disaggregating data into many variables, policymakers, donors, community
leaders and beneficiaries are better equipped to understand their challenges
and opportunities which fosters informed decision and policymaking,
ultimately promoting a just and equitable distribution of resources. These
decisions in turn can help create appropriate and effective interventions for
more equitable outcomes for community members across different
dimensions. 

According to the United Nations Statistics Division (UNSD), data
disaggregation is now also one of the nine pillars of the “Data
Revolution.” Data disaggregation helps to improve the quality and
availability of statistics on local, national and global levels – high quality,
reliable, accessible, timely and open disaggregated data addresses key data
gaps and adds valuable information to the evidence base for future analysis
and research.
Here’s the summary of the common benefits of disaggregating data into
several relevant sub-units or variables:
 Data disaggregation portrays a more holistic picture of an
intervention. By providing a precise understanding of the target
group’s definition and characteristics, and of the nuances,
complexities, trends, patterns and links within a project, its
activities and outcomes, data disaggregation helps you to detect and
analyse problems and needs more accurately. 
 Data disaggregation helps to pinpoint populations with specific
needs. Data disaggregation helps you to see beyond just the number
of people you have reached through your project. It allows you to dive
deeper to see their characteristics, like their age, gender, location,
income level and other variables that you may choose to disaggregate
by. It allows for routine comparisons and in-depth trend and
characteristic analysis across different target groups and sub-groups.
This ultimately helps to shape projects and policies to support the
advancement of a community or a nation’s collective well-being and
ensure sustainable development.
 Disaggregation of data is an important way to ensure
inclusiveness. Disaggregation allows more detailed data analysis to
identify inequalities, making the issues and the voices of marginalized
and vulnerable populations more visible to policymakers and
stakeholders which makes it possible for an organisation to focus its
response where it’s needed most. 
 Disaggregated data is key to effective policymaking and interventions
because it provides enough evidence for development actors to
understand that what works for one demographic group may not work for
another. Different people face different constraints depending on
their gender, age, income, location, education level, production
system and other factors. This enables development actors and
policymakers to advocate for more tailored, effective and efficient
policies and interventions that align with the different groups and
subgroups’ needs, concerns and circumstances.
 Disaggregated data is also critical in the analysis of project
indicators which is key in determining the actual progress of a project
against its set targets and critical goals – this helps the project team to
make the necessary changes to the design and implementation of
their projects to maximize their impact and outcomes.
 Data disaggregation helps you to see your overall impact. When you
choose to aggregate your data results across multiple projects and
programs, it harmonizes data on multiple levels and across diverse
themes, enabling you to see, for example, your organisation’s overall
impact across multiple dimensions. This helps you to determine
whether or not your projects are creating the intended change or
impact.
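The indicator-analysis point above can be made concrete with a short, hedged Python sketch. The targets and actuals below are hypothetical: the aggregate figure (340 of 400 participants, 85%) looks healthy, but disaggregating progress by gender reveals that one group lags well behind its target:

```python
# Hypothetical indicator data: target vs. actual participants reached,
# disaggregated by gender (figures invented for illustration).
targets = {"female": 200, "male": 200}
actuals = {"female": 150, "male": 190}

def progress(actuals, targets):
    """Percentage progress against target for each disaggregation group."""
    return {g: round(100 * actuals[g] / targets[g], 1) for g in targets}

# The aggregate figure looks healthy: 340/400 reached = 85%.
print(round(100 * sum(actuals.values()) / sum(targets.values()), 1))  # 85.0

# Disaggregated progress shows where the project is falling short.
print(progress(actuals, targets))  # {'female': 75.0, 'male': 95.0}
```

A project team seeing the 75% figure for female participants can adapt design and implementation mid-course, which a single aggregate number would never prompt.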
Let’s take a simple example…
Let’s say you designed two surveys to collect data from 400 members of a
community to measure the average income. In your first survey, you only
include a few questions: the names of the respondents, total household
income or income of a specific family member, expenses etc. The result you
get from this survey will reflect the average income in that community,
which gets the job done but doesn’t provide any additional insights. 
In your second survey, however, you break your questions down into smaller
units, including the respondents’ age group, gender, ethnicity, location,
years of employment, educational levels and backgrounds, the nature of their
employment, how satisfied they are with their jobs etc. The insight you
derive from this survey will be much richer than from the first one. 
The second survey will not only show the average income in that community
but will also reflect more complexity, trends and issues related to the income
generation process within that community, and highlight the characteristics of
the respondents and many additional factors that play a crucial role in
shaping the economy of that community, which will ultimately help you
design or adapt your project in a better manner to mitigate the issues and
target the underlying problems. For example, by disaggregating the average
income by gender, you can clearly see any stark disparity in income levels
between the male and female respondents in that community.
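The difference between the two surveys can be sketched in a few lines of Python. The income figures below are invented purely for illustration: the single community-wide average from the first survey hides the gender gap that the disaggregated second survey reveals:

```python
from statistics import mean

# Hypothetical responses from the second, richer survey: each record
# carries the respondent's gender alongside their income.
responses = [
    {"gender": "female", "income": 220},
    {"gender": "female", "income": 250},
    {"gender": "male",   "income": 400},
    {"gender": "male",   "income": 430},
]

# The first survey can only report one community-wide average:
print(mean(r["income"] for r in responses))  # 325

# Disaggregating by gender exposes the disparity hidden in that average:
by_gender = {}
for r in responses:
    by_gender.setdefault(r["gender"], []).append(r["income"])
print({g: mean(v) for g, v in by_gender.items()})
# {'female': 235, 'male': 415}
```

Both surveys would report the same headline figure of 325; only the disaggregated one shows that female respondents earn barely half of what male respondents do in this made-up community.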
Is there a minimum set of suggested disaggregation?
Data can be disaggregated into various dimensions but the minimum
disaggregation types are determined by the objectives of your intervention,
the indicators and targets set in your results framework and your reporting
and donor requirements, so it varies on a case by case basis. However, the
UN Fundamental Principles of Official Statistics state that all standard
indicators can and should be disaggregated at least by gender, age, ethnic
origin, disability, income and geographic location.  
Before you select the disaggregation variables, make sure you understand
your data requirements – what information can help you understand the needs
of the target groups and analyse your outcomes and impact better and how
are you going to collect them? Which disaggregation variables will help you
meet those data needs? Thinking about your data needs beforehand will help
you set the most appropriate disaggregation questions for your survey which
will help you generate rich data and insights. 
Below is a summary of the data disaggregation principles set by the Global
Partnership for Sustainable Development Data which can help you get a head
start with the disaggregation process:
 First and foremost, identify the objective and scope for disaggregating
your data.
 All target populations must be included in the data because we can
only achieve the “leave no one behind” goal by empowering the
furthest behind. 
 Remember, all data should, wherever possible, be disaggregated in
order to accurately describe the target populations.
 Data should be drawn from all available sources for a comprehensive
understanding of the realities on the ground.
 Use both qualitative and quantitative data. (More on Qualitative vs.
Quantitative data.)
 Those responsible for the collection of data and production of
statistics must be ethical and accountable.
 Human and technical capacity to collect, analyse, and use
disaggregated data must be improved where needed, through
adequate and sustainable financing. Collecting and analyzing
disaggregated data needs specific skills and these must be built. 
 Boost the use of technology that supports disaggregation.  (See how
TolaData can help.)
 Don’t forget about data privacy and protection; your respondents
must feel safe at all times. Confidentiality and privacy must be
maintained to ensure personal data is not abused, misused, or used in
ways that put anyone at risk of identification or discrimination.
Source: Adapted from Global Partnership for Sustainable Development Data. Inclusive Data Charter vision and
principles.
Challenges in data disaggregation
Data disaggregation may look different across diverse contexts. However,
there are a few key issues and some overarching challenges that may arise
during data disaggregation within any project. 
The most common challenge comes with data privacy and protection related
to disaggregated data. Appropriate data protection architecture is necessary
for maintaining the confidentiality of respondents’ information through all
stages of the data cycle. Any potential risks to the respondent should be
avoided if possible or addressed in a timely manner.
Some projects choose the population census or other secondary data as one of
their data sources, but be aware that in many parts of the world the
population census is updated only about once a decade, so the data you are
collecting might not be accurate, timely and 100% reliable. Plus, census
data or other public statistics (e.g. ministry surveys) often lack the
information on sub-groups or the disaggregation details you need. The ideal
solution is to collect data from multiple sources, combining secondary data
with primary data and triangulating them. Data triangulation allows for
verification and comparison of data and captures different dimensions of
the same phenomenon.
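One simple way to operationalise this kind of triangulation is to compare the same figure from two sources and flag large gaps for follow-up verification. The sketch below is illustrative only: the figures, field names and 10% tolerance are assumptions, not a standard:

```python
# Hypothetical triangulation: compare an estimate from our own primary
# survey with the same figure from a (possibly decade-old) census, and
# flag large divergence for follow-up verification.
primary_estimate = 0.62  # e.g. share of households with safe water (own survey)
census_estimate = 0.48   # same figure from outdated census data

def triangulate(primary, secondary, tolerance=0.10):
    """Flag the figure for verification if the two sources diverge
    by more than `tolerance` (absolute difference)."""
    gap = abs(primary - secondary)
    return {"gap": round(gap, 2), "needs_verification": gap > tolerance}

print(triangulate(primary_estimate, census_estimate))
# {'gap': 0.14, 'needs_verification': True}
```

A divergence this large doesn't say which source is wrong; it simply tells the team where additional primary data collection or verification is worth the cost.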
In addition, there are other practical challenges that obstruct progress in
disaggregation. In some bigger projects, disaggregation requires more
intensive data collection, management and analysis and it could be
technically difficult or very costly, so many organisations may not choose to
prioritize it. Other challenges include limited coordination among the team
and other involved parties, lack of awareness of the importance of
disaggregated data, or limited knowledge, capacity and resources. 
Data disaggregation is a critical step towards achieving sustainable
development and identifying and addressing the needs of the target groups,
mostly of the vulnerable and marginalized communities. Collection and
analysis of high-quality disaggregated data can provide a solid evidence
base for use by governments, the private sector, academic researchers, and
others to advance equality and economic and social development. 
However, disaggregating data can be challenging in practice due to the lack
of resources, knowledge and capacity. But on the bright side, efforts continue
on a national and global level to develop relevant data disaggregation tools,
frameworks, guidance & resources for practitioners. In the meantime, all
individuals, organisations and countries working in disaggregation must
collaborate with each other as much as possible to discuss and share
experiences, good practices and lessons learned. 
We hope you found our article helpful. If you have any recommendations or
suggestions for improvement, then please feel free to mention them in the
comment section below.
Key References
 Disaggregated data is essential to leave no one behind; IISD
 Why data disaggregation is key during a pandemic; PAHO, WHO 
 The importance of disaggregated data; National Collaborating Center
for Aboriginal Health
 Leave no migrant behind, the 2030 agenda and data
disaggregation;  IOM
 Practical guidebook on data disaggregation for the sustainable
development goals, ADB
 Data Disaggregation and the Global Indicator Framework, United
Nations Statistical Division

Integrating a gender dimension into Monitoring and Evaluation (M&E) systems - best practices
November 30, 2021
Integrating a gender dimension into M&E structures is an important step
towards achieving broad-based and inclusive development. Our article
provides step-by-step guidance to help you and your organisation integrate
gender into your project and different components of your M&E system as a
crosscutting issue. Development actors working in M&E, anyone who provides
M&E support to organisations, and those interested in learning about Gender
and M&E can greatly benefit from this guide.
The new development agendas acknowledge gender equity and equality as
essential elements for combating poverty and stimulating sustainable
development. However, women, girls, gender non-conforming individuals,
and other minority groups continue to suffer marginalization, discrimination,
and violence in most parts of the world, as consistent, meaningful progress
toward gender equality remains elusive. To build an equitable and more
inclusive society, governments, organisations, policymakers, and industry
leaders across the globe must instill gender parity across all sectors. This
requires developing appropriate gender-sensitive policies, targeted
interventions and M&E systems.
Stay with us as we explain what gender-sensitive M&E means and how
development organisations and others working in the sector can benefit from
integrating a gender dimension into their M&E structures. In addition, we
explore a 4-step process to help development actors use specific gender-
related questions to ensure integration of gender components into all aspects
of their M&E system for the appropriate collection, compilation, analysis,
dissemination, and use of gender data for assessing and addressing the needs
of all individuals.
What is gender-sensitive monitoring and evaluation
(M&E)?
Before we explain gender-sensitive M&E, let’s try and understand how the
term ‘Gender’ is defined in international development literature. The
Glossary of Terms and Concepts by UNICEF explains ‘Gender’ as a socio-
cultural variable that refers to the comparative, relational, or differential
roles, responsibilities, and activities of females, males, and gender-
nonconforming individuals. While the sex of an individual is biologically-
determined, gender roles are socially constructed and it’s important to
recognize that gender roles and other attributes can change over time and
vary with different cultural contexts.
Likewise, ‘Gender Equality’ refers to the concept that women, men, girls,
boys, and gender non-conforming individuals have equal conditions,
treatment, and opportunities for realizing their full potential, human rights,
and dignity, and for contributing to and benefiting from economic, social,
cultural and political development. People’s rights, responsibilities, status,
and access to and control over resources and benefits should not depend on
their sex or gender. Instead, every person should be able to develop their
interests and abilities and make choices that are free from limitations set by
rigid expectations, responsibilities, and roles based on stereotypes and
discrimination. 
According to the UN Women Evaluation Handbook, ‘Gender-sensitive
monitoring and evaluation (M&E)’ is a powerful tool for learning, decision-
making, budgeting, and accountability that supports the achievement of
gender equality and the empowerment of women, girls, and other minority
groups. 
Gender-sensitive M&E uses the method of gender mainstreaming to
consciously incorporate the concerns and experiences of all individuals as an
integral dimension of the evaluation objectives, approaches, methods, and use
as well as project design and implementation so that all individuals benefit
equally from the intervention. Thus, gender-sensitive M&E is not only a
driver of positive change towards gender equality and the empowerment of
women and other minority groups, but the process itself also empowers the
involved stakeholders and can prevent further discrimination and exclusion.
The benefits of integrating gender into monitoring and
evaluation (M&E) systems
The development of an appropriate gender-sensitive Monitoring and
Evaluation (M&E) system that contains meaningful gender equality
outcomes and indicators can provide the following benefits for the
international development community:
 Gender-sensitive M&E discloses the extent to which an intervention
has addressed the different needs of different genders, and has made
an impact on their lives and overall social and economic well-being. 
 Gender-sensitive M&E provides evidence to governments and
policymakers to understand where government policies and programs
fall on the gender equality continuum and the extent to which their
interventions are relevant and effective in terms of achieving the
desired gender equality, women’s empowerment, and human rights
outcomes. 
 Gender-sensitive M&E encourages the collection of inclusive data,
disaggregated by sex, age, and other dimensions. Data disaggregated
by sex can provide richer insights into gender differentials in
knowledge, behavior, access to services/resources and their
utilization, and other outcomes. More on data disaggregation.
 It also improves project performance during implementation,
encourages midterm corrections, and enables development actors to
derive lessons for future projects.
 Gender-sensitive evaluations tend to be inclusive, participatory and
reflective, respectful, transparent, and accountable.
 Even when project designs are gender-blind, using gender-inclusive
indicators and conducting gender-sensitive M&E can help create
equitable project outcomes.
 The evidence from Gender-sensitive M&E can be used to advocate
change, address gender dimensions in different sectors, recommend
actions to improve the effectiveness of interventions in addressing
different needs of different genders and contribute to greater gender
equality. 
Not integrating a gender dimension into project planning, design,
implementation and M&E greatly reduces the relevance, effectiveness and
sustainability of an intervention. In fact, in many cases, this further reinforces
existing and unequal power relations between the sexes and can even
exacerbate them, diminishing the status of women, girls and gender
non-conforming individuals in a society. 
Making each component of the M&E system gender-
sensitive
At a systemic level, it’s important for donors and governments to establish
accountability systems to track their compliance with commitments to gender
equality. They must also build alliances with local entities and civil society
organisations to support the capacity of national statistical offices to produce
gender-sensitive data while strengthening gender policy and normative
frameworks at national and local levels. Moreover, governments and
development organisations should also work together in securing proper
financing for projects and programs that support gender equality and the
empowerment of women and minority groups. 
At an organisation level, NGOs must align with the national and donor’s
gender agendas to design gender-sensitive projects and M&E systems. This
requires incorporating gender mainstreaming throughout the project’s
lifecycle as well as an assessment mechanism with a focused gender lens
applied to all M&E system components. This enables the project team to
assess progress of their intervention in achieving broad-based and inclusive
development.
Here’s a 4-step process our in-house M&E experts recommend for
making each component of your M&E gender-sensitive.
Step 1 - Conducting a gender situation analysis
One of the prerequisites of incorporating gender dimension into a project is to
conduct a thorough gender-sensitive assessment. On the one hand, this type
of assessment helps to deepen the understanding of the present situation of
the target groups including the social, cultural, political, and economic
aspects as well as the predominant gender roles, disparities, concerns,
constraints, opportunities, gaps and structural and systemic causes of gender
discrimination and inequalities within the context of the intervention.
On the other hand, this assessment is useful to estimate how the situation
analysis relates to the objectives and goals of the project and whether or not it
adequately considers gender concerns throughout its planning and
implementation, regardless of whether the intervention explicitly targets the
empowerment of women or gender equality. Additionally, gender analysis
also helps to predict how the intervention is expected to change the existing
situation as well as its potential impacts on different individuals and
communities. 
Organisations must engage with as many local stakeholders, staff members,
beneficiaries, and gender experts as possible to conduct the situation
assessment. Input and feedback from multiple stakeholders can greatly
inform the project and M&E design and improve its overall efficacy. Based
on the assessment results derived from this close consultation and
participatory dialogue, organisations can adapt their interventions to align
with the underlying gender issues.
Key questions to ask:
 Are sufficient capacities in place for identifying and addressing gender
issues, gathering gender-sensitive information, and conducting gender
analysis? Do we have team members with appropriate expertise? If
not, what kind of capacity building will be needed to train the current
members? 
 Are funds being allocated for gender capacity building?
 Are the benchmark surveys or baseline information gender-sensitive,
and do they capture the relevant gender concerns? 
 Is there a gender focal point or staff in charge of gender concerns
within the organisation?
 Have key stakeholders had an opportunity to provide gender-related
inputs?
 Has the project plan been circulated for comments to the responsible
gender specialist or gender focal point (if there is one)? 
Step 2 - Integrating gender into project design and
M&E system
Based on the gender situation analysis, the project team can start defining the
evaluation purpose and scope, and identify appropriate gender-sensitive
project goals, objectives, targets, and indicators and establish an appropriate
monitoring and evaluation (M&E) system. It is important to include input
from women, men, and other minority groups among the project team, local
stakeholders and gender specialists (if available) when setting these goals
and objectives.
Identifying and selecting key gender-sensitive indicators for input, output,
outcome, and impact is a key step in setting up a gender-sensitive M&E.
Based on the indicators, the team can select the most appropriate gender-
sensitive evaluation questions, data collection methodologies, tools, data
analysis techniques and determine a proper timeline and budget. Gender-
sensitive evaluation must apply mixed-method data collection and analytical
approaches to account for the complexity of gender relations and to ensure
participatory and inclusive processes that are culturally appropriate. While
quantitative information helps to compare, qualitative information helps to
capture the more complex and less quantifiable causes and effects of gender
inequality.
At this stage, the team should also organize reporting and feedback
mechanisms by clearly identifying who will collect and analyze information
and when, and who will receive the information, and how it will be used. The
team must ensure that there is a good representation of women and other
minority groups in evaluation and data collection teams. 
Key questions to ask:
 Are the intervention’s goals and objectives gender-sensitive? Do they
adequately reflect all individuals’ needs? Have they been incorporated
into the project logframe, results framework, or the theory of
change? 
 Who should be involved in defining the vision of change, determining
the indicators, and gathering data? 
 Do the tools and methods selected for data collection reflect gender
outcomes and impacts?
 Are there male and female data collectors, and have they received
gender sensitivity training? 
 Are quantitative collection and analysis methods being complemented
with qualitative methods? Is gender analysis being integrated into
these? 
 Are both qualitative and quantitative project indicators, targets, and
milestones gender-inclusive? Do they need to be revised to better
capture the project’s impact on gender relations?  
 How can we ensure small changes will be measured? Which indicators
could capture the small, nuanced shifts in gender equality that tend to
happen over time? 
 What legal frameworks exist that may enable or inhibit gender
equality? 
 Is there a plan in place to protect the rights of the respondent,
including their privacy and confidentiality?
Step 3 - Project implementation
At this stage, the team can start collecting gender-sensitive data based on the
selected indicators. Plus, they can start monitoring the progress against
targets set for the period under evaluation and feedback results into the
system to allow for corrections and improvements as needed to obtain
expected gender-related outcomes.
Key questions to ask:

 What information already exists, or is being collected, to assist in
tracking changes?
 Is all data collected being disaggregated by sex, age, disability and
other diversities, and by gender-specific indicators?
 What expected effects does the intervention have on gender
relations? Are these effects regularly analysed? Is someone specifically
assigned to do this? 
 Are key project activities and outcomes being discussed with key
project partners? 
 How does the project strategy need to be adapted to increase the
gender-responsiveness of the intervention?  
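A routine monitoring check like the disaggregation question above can be automated. The Python sketch below uses hypothetical records and a made-up required-field list; it simply flags any record that arrives without the disaggregation fields the project's indicators need:

```python
# Hypothetical monitoring check: verify every collected record carries
# the disaggregation fields the indicators require (field names assumed).
REQUIRED = {"sex", "age", "disability"}

records = [
    {"id": 1, "sex": "female", "age": 34, "disability": "no"},
    {"id": 2, "sex": "male", "age": 41},  # missing 'disability'
    {"id": 3, "sex": "female", "age": 29, "disability": "yes"},
]

def incomplete(records, required=REQUIRED):
    """Return the ids of records missing any required disaggregation field."""
    return [r["id"] for r in records if not required <= r.keys()]

print(incomplete(records))  # [2]
```

Running such a check on each batch of incoming data lets the team send incomplete records back to data collectors for correction while the respondents can still be reached, rather than discovering the gaps at analysis time.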
Step 4 - Project analysis, reporting and lessons
learned
This is a good time to determine how the collected data will be analysed,
disseminated, and used. Assess the impact of gender integration in the overall
project context, followed by the assessment of the impact of the interventions
on women, men, boys, girls, and gender non-conforming individuals. The
findings can also assist in determining whether the intervention has prompted
changes in the existing norms, cultural values, power structure, and the roots
of gender inequalities and discrimination in the community of interest. 
Key questions to ask:

 What are the key results and how do they compare to the targets? 
 How will the results be communicated to the stakeholders?
 How has the intervention affected the men, women, boys, girls, and
gender non-conforming individuals in the target community? 
 How will people’s gender or sexuality affect the way they understand
and experience these changes? 
 Are the effects of the intervention on different genders and gender
relations part of every progress report? 
 Do the findings, conclusions, and recommendations reflect gender
analysis and explicitly address the gender-responsiveness and gender-
related performance of the project? 
 What are possible long-term effects on gender equality? Is there
sufficient information to know that? 
 Are the positive gender-related outcomes likely to be sustainable?
 What lessons can the team learn from the key results and how will it
inform the design and implementation of the current and future
interventions?
 Has the project established mechanisms to share knowledge related to
gender equality?
In summary, gender-sensitive M&E can be an extremely powerful tool for the
empowerment of women, girls, gender non-conforming individuals, and other
minority groups. It is also equally beneficial for reversing unequal
distributions of power, resources, and opportunities, addressing structural
barriers and challenging discriminatory laws, social norms, and stereotypes
that perpetuate inequalities and disparities in our societies. Therefore, the
development of gender-sensitive projects and M&E should be made
mandatory within the international development sector. Donors,
governments and policymakers, government agencies, civil society,
grassroots organisations, and all other development actors have a part to
play in realizing this common goal.
We hope our article was helpful. If you have any feedback or suggestions on
how we can improve it, please feel free to write to us in the comment section
below.
Key References:
 How to Manage Gender Responsive Evaluation, UN WOMEN
Independent Evaluation Office.
 Guidelines for Integrating Gender into an M&E Framework and system
Assessment, Measure Evaluation. 
 Gender Mainstreaming, European Institute for Gender Equality 
 Monitoring and Evaluation Framework for Gender Inclusive
Recruitment and Selection, USAID. 
 Integrating a Gender Dimension into Monitoring & Evaluation of Rural
Development Projects, The World Bank.
 Integrating gender equality in monitoring and evaluation, ILO
Evaluation Office. 
 Glossary of Terms and Concepts, UNICEF Regional Office for South
Asia, NOV 2017.

Maximizing the use of KoBoToolbox & more - interview with expert, Janna Rous
December 14, 2021
Janna Rous, a humanitarian information management expert, an aid worker,
a data enthusiast, and a mother of two shares her passion for digital data
collection tools and her own journey into data management in the
humanitarian and international development sector. Through her
initiative, Humanitarian Data Solutions, Janna aspires to help as many
development practitioners as possible to feel more confident with data –
collecting it, analysing it, visualising it, and using it to take action.
An expert in mobile data collection tools, Janna has already trained and
inspired thousands of field workers, practitioners and change makers to
leverage tools, like KoBoToolbox to improve their interventions and scale
their impact. In this interview, Janna explains the benefits of switching to
mobile data collection tools, the common challenges data collectors face
while using such tools, some skills that can come in handy and how
Humanitarian Data Solutions can help make the data collection and
management process more efficient and impactful for all. Stay with us as we
share some excellent tips, recommendations, resources and training
opportunities from Janna that can significantly improve your data literacy
and capacity.
Get to know Janna
Could you give us a short introduction? (where are you based, your
favourite leisure activities, favourite cuisine or anything else you’d like
our readers to know).

Sure! Originally, I’m from Canada. I met my husband while working in
North Sudan over 10 years ago. We now live in a small village outside of
Oxford. I’m a mom of two little ones (aged 2 and 4 years old) so my leisure
activities typically revolve around them – our favourites are jumping on
trampolines, diving into ball pits, and eating ice cream.    
Our favourite cuisine has to be from the Middle East – we lived there for a
few years and our eldest was born there. We can never seem to get enough of
hummus, falafel, and fresh flatbread!
How long have you worked in the international development sector?
Which region(s) of the world have you worked in so far and where did
you like working the most?

My interest in international development began at a young age through
exposure to various people who had travelled and worked in Central America
and Africa. In 2004, as a student of Water Resources Engineering, I started
volunteering with Engineers Without Borders Canada. In 2007, I was placed
in a rural District Water and Sanitation Team in Northern Ghana as a Junior
Fellow which was a life-changing experience for me as I lived with a
Ghanaian family and worked for a Ghanaian government team for 4 months.
I credit this experience, that family, and those colleagues for helping me
develop a lot of my personal theory and approach to international
development.

After graduation, I worked with an engineering firm in Canada for a few
years developing computer models of watersheds, which helped me polish my
technical skills in data management, GIS, and modelling. Since then, I’ve
worked in Sudan, Ghana, Jordan and the Middle East Region in humanitarian
and international development teams. Now, I’m based in the UK where I’ve
worked for a charity called Emerging Leaders as Head of Operations and lead
Humanitarian Data Solutions.

What was my favourite? Impossible question to answer. In each place, each
role, there were dear friends who taught me innumerable lessons and there
were incredible experiences that have been an irreplaceable part of my life
journey.

What inspired you to pursue a career in humanitarian information
management?
I would say I kind of “fell” into humanitarian information management. I was
hired to develop new humanitarian programmes a few years ago and was
introduced to OpenDataKit (now ODK) as a way to collect humanitarian
needs information. I suppose my background in engineering, computer
modelling and GIS allowed me to quickly turn that data around into
interesting analyses that we could develop into funding proposals and needs
assessment reports for donors. The possibilities seemed endless, with new
ways of working that hadn’t been possible before smartphones.
Since then, I’ve just tried to continue to push into interesting and scalable
use-cases for improving humanitarian and development programmes using
data.

What are your thoughts on the digitisation of
development and humanitarian response?
In brief, I think digitisation opens up amazing new possibilities that weren’t
possible when I first started in the sector. Putting amazing technology in the
hands of creative people who want to make people’s lives better has led to so
much innovation and continuous improvement. In the process of digitisation,
I think it’s important to remember that we’re all very human in this system.
People’s real, non-digital lives have been turned upside down in
humanitarian crises. And while digitisation can help deliver better, faster and
more relevant services – we still need to meet very physical, real, human
needs. 
Digitisation makes it even more important for us as aid and development
workers to ensure that tech tools are just tools that we use to support our very
human-centric work. When we interview people, we need to use eye contact
and share a laugh together. We need to be able to listen to their story, not just
use their data to make decisions. So yes – I love it, and I also think it
reinforces just how important it is for us to put our tech tools down and
remember to connect deeply, create art and music, dance, laugh, cry, share
meals and experiences.
There are so many digital data collection tools in the
market, what drew you to specialize in KoboToolbox
in particular?
You know what? I just love all of the ODK family of software teams. I got started with ODK and ODK Aggregate in 2014. That’s where I fell in love
with mobile data collection. In 2016, we moved our field team to
use ONA after I saw their CEO speak at a local conference, and I loved them
too, as they had great additional user management capabilities. I also got to
meet Chris, from SurveyCTO, at the MERLTech Conference in London pre-
COVID, and loved his demo, as he showed me the additional features they’d
built into their interface.
So why KoBoToolbox? Well, primarily because of my own mission to help field programme teams move into digital tools, especially local NGO teams. The challenge I often see them face at first is cost – KoBoToolbox allows anyone to get started easily and for free, while still learning the additional data management disciplines they need in order to turn their data into analyses, reports, and decision-making products. I just
love that this software has an easy-to-use front-end with enormous
capabilities to be used at scale and it’s free for users, which extends digital
transformation potential to even the tiniest or most remote team. Plus, they
have a fantastic community forum manager who’s always ready to help. So
KoBoToolbox is where I’ve spent a lot of time teaching people about quality
data collection!
In your opinion, what are the biggest benefits for
organisations to switch from paper to mobile data
collection tools like KoBoToolbox?
There are many benefits of switching to digital data collection tools, but these
are my top favourites:
 Timely Insights – In the past, we’d collect data on paper, and then
spend weeks entering it before doing the analysis. Now, we spend
a lot more time up-front, developing high-quality data collection tools,
and automating the analysis so that as data gets collected, we can see
almost real-time results of our efforts. Digital data collection totally
transforms the timeliness of insights. In a humanitarian team, for example, this could mean that rapid needs assessments after acute emergencies are used within hours to get sign-off on an emergency response.
 Data Quality – With paper data collection, you often end up with
missing data, answers that don’t make sense logically, or methodology that hasn’t been followed correctly. One of my favourite
features of tools like KoBoToolbox is that you can programme your
digital questionnaires so that they do little mini-calculations as the
data is being filled out to check that data makes sense and to make
sure nothing is missing. Overall, the change in the quality of the data
you can collect with a digital form is amazing.
 Data Triangulation – Another cool benefit is the possibility to collect
different types of data like images, GPS points, and ‘metadata’ like
timestamps that are all connected to the form. It’s a way to prove data
collection happened at a particular place, at a particular time, etc. If
accountability is especially important to an organisation or a donor,
then this is an amazing feature that allows local teams to really
showcase their programme quality.
 Global Consistency – Digital data collection can help teams think
about how their various programme locations collect similar data, use
similar data collection questionnaires, and can feed into cohesive
overviews. While this isn’t inherent in the software itself, it often leads organisations to give greater thought to how they’re managing data globally, and it seems to spur teams on to exciting new endeavours.
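The “little mini-calculations” mentioned under Data Quality can be pictured with a small sketch. Below is a minimal Python illustration of the kinds of required-field, consistency, and range checks a digital form can enforce as answers come in; all of the field names and the plausible-age range are hypothetical, not taken from any real KoBoToolbox form.

```python
# A minimal sketch of form-style data checks; all field names and the
# plausible-age range are hypothetical, not taken from a real form.
def validate_record(record):
    """Return a list of data-quality problems for one survey record,
    in the spirit of the 'required' and constraint checks a digital
    form can run while answers are being entered."""
    problems = []
    # Required-field check: a digital form can refuse to submit blanks.
    for field in ("household_size", "children_under_5", "age_of_respondent"):
        if record.get(field) is None:
            problems.append(f"{field} is missing")
    # Logical consistency: children can't outnumber household members.
    size = record.get("household_size")
    children = record.get("children_under_5")
    if size is not None and children is not None and children > size:
        problems.append("children_under_5 exceeds household_size")
    # Range check: flag implausible respondent ages.
    age = record.get("age_of_respondent")
    if age is not None and not (10 <= age <= 120):
        problems.append("age_of_respondent is out of plausible range")
    return problems

clean = {"household_size": 6, "children_under_5": 2, "age_of_respondent": 34}
dirty = {"household_size": 3, "children_under_5": 5, "age_of_respondent": None}
print(validate_record(clean))  # []
print(validate_record(dirty))  # a missing field plus a logic error
```

The difference from paper is that a digital form runs checks like these at entry time, so the problems get fixed while the enumerator is still with the respondent rather than weeks later at the analysis stage.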
What are the most common challenges data collectors
face while using tools like KoBoToolbox and how can
they avoid them?
I would say a couple of the primary challenges that data collectors and
organisations face include:
 Creating Quality Forms – Many people tend to take an existing paper-
based questionnaire, and just translate it directly into a digital format
without really utilizing the added benefits of a digital format – like
data validation, appropriate skip logic in the form, etc.  This results in
teams ending up with the same poorer-quality data they’d collect
using paper. I run a training programme for KoBoToolbox users called
“Mastering Form Design in KoBoToolbox” that addresses this problem
and introduces them to basic and advanced digital data collection
techniques to help them make full use of the digital format.
 Not turning data into insights – Teams will often stop after they
collect data, and don’t necessarily go on to analyse that data and turn
it into insights. So it’s possible for teams to have a lot of unused data lying around, unsure how best to use it. This is common when teams
have moved ahead with data collection without thinking through the
real value of each question being asked, and how it ties into their
‘bigger picture.’  To address this, Humanitarian Data Solutions has also
developed a course called “Getting Started in Power BI for
KoBoToolbox Users” – which at least helps people to start thinking
about visualising their data. 
 Data Collectors not using appropriate data collection
methodology – This is a common one I come across. When moving to
a digital data collection format, a lot of people put emphasis on the
development of the digital tool and training people on how to use it
but they forget about data collection workflows, which means that
they often misuse the data collection tools, or don’t follow the
expected methodology. It’s really important to think through the
human workflows around your digital tool, to practice collecting data,
to do role plays, to find the barriers to collecting data, and to address
them openly, with empathy for different people’s perspectives.
In your opinion, what additional skills can help a
practitioner enhance their use of digital tools like
KoBoToolbox?
 Having a firm understanding of your indicators – for example, what
are you trying to measure through your questionnaire? Understanding how each question you ask ties into the bigger picture of how you’ll use the data once it’s collected is super helpful. Many
people might think this only applies to donor indicators and M&E
frameworks – but it’s important for teams to also clarify what other
indicators they need and want to measure that allows them to learn
about what’s working (and what’s not) in their programmes so they
can learn, adapt and improve over time. So, really focusing on building
the skill of “What questions should I ask?”
 Next, getting clear on data protection – understanding how you’re
using people’s data, and how you’re protecting it to the necessary
level, is critical.
 Building your skills in data cleaning and data analysis – is also really
important and helpful. This doesn’t have to be complicated – Excel is
used for lots of analysis, and is a great tool. Other tools that I see lots
of people using include SPSS, [R], Python, and Business Intelligence
tools like Tableau, Power BI, and QLIK. You can do almost anything
using your favourite tool of choice, so it’s just about figuring out what
kinds of analysis you really need to accomplish, and then learning how
to do that in whatever tool you choose.
 Learning and building skills in data visualisation – learning how to
turn your analysis into usable formats for a variety of users is super
helpful. This can be as far-reaching as using interactive business
intelligence tools (e.g., Power BI), or getting into graphic design to help
simplify meaning and allow the analysis to be consumed more easily.
The possibilities are endless. Learning how to do some basic mapping in QGIS or other mapping software can be super useful too!
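As a small illustration of the data cleaning and analysis skills described above, here is a sketch using only the Python standard library. The CSV content, the “9999” no-answer code, and the column names are all invented for the example; a real workflow would read an export downloaded from the data collection tool.

```python
import csv
import io
from statistics import mean

# A hypothetical export from a mobile data collection tool; the column
# names and the "9999" no-answer code are invented for this example.
raw = """village,water_collection_minutes
Alpha,30
Alpha,
Beta,45
Beta,9999
Alpha,25
"""

def clean_rows(text):
    """Drop blank values and the assumed '9999' sentinel code, returning
    (village, minutes) pairs ready for analysis."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        value = row["water_collection_minutes"].strip()
        if not value or value == "9999":  # missing answer or no-answer code
            continue
        rows.append((row["village"], int(value)))
    return rows

def average_by_village(rows):
    """Group cleaned values by village and compute the mean of each group."""
    grouped = {}
    for village, minutes in rows:
        grouped.setdefault(village, []).append(minutes)
    return {village: mean(values) for village, values in grouped.items()}

cleaned = clean_rows(raw)
print(average_by_village(cleaned))  # average minutes per village
```

The same cleaning-then-aggregating pattern carries over to whichever tool you choose, whether that’s Excel, R, SPSS, or a business intelligence tool like Power BI.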
How can you and your team at Humanitarian Data
Solutions help?
Humanitarian Data Solutions is all about supporting people along their data
collection and information management journey. People who are looking for
a good overview of how to get started can enrol in Humanitarian Data
Solutions beginner courses “Getting Started in KoBoToolbox” and “Getting
Started in Power BI for KoBoToolbox Users”.  
If individuals or organisations are looking to develop their staff skills beyond
‘beginner’ level, so they can create great digital questionnaires, then
“Mastering Form Design in KoBoToolbox” is a great training programme to
enrol in. We’re also accepting applications for humanitarian and development
staff to join our Impact Data Practitioner Network launching in 2022.  If
you’re interested, send an email to [email protected]
Finally, we provide consulting services to organisations to help them make
the transition to digital data collection and visualisation. Just send an email
to [email protected] and we’ll connect. 
The best way to keep up to date with what’s coming up is to join our email
list here. Our next training will be launching in January 2022, so do get in
touch if you’re interested! (and let us know that you heard about it through
TolaData to get a small discount!) 
What improvements would you like to see in the
mainstream data collection, analysis and Monitoring
and Evaluation (M&E) processes?
I still think that the integrations between different tools are fiddly. Different
tools are really awesome at individual pieces of the puzzle, for example, data
collection OR data visualisation but it’s difficult to get all things working
together in a single system. I think this is where interesting tools
like TolaData come in, but in general, I still think there’s a lot of room for improvement in data modelling and integrations. Another area is a better understanding of
data protection and how that’s integrated into data collection, analysis, and
visualisation tools.
Any tips or advice for development professionals on
how to adapt their data collection and M&E during a
pandemic, such as COVID-19?
I wrote an article on this very question, so I’d love to point you to this
resource:
 In English: 8 Ways to Digitize Your MERL Practices During COVID-19
Response  
 In French: 8 manières d’adapter votre suivi & évaluation pendant la
pandémie de Covid-19
Any books or resource materials on digital data
collection or management that you would like to
recommend to our readers?
I’ll share some of my favourite books, but these are more “inspirations”
rather than exact data resources. 
 One of my favourite books is called “Creativity, Inc”.  It has nothing to
do with digital data collection – but it has so inspired me and I’ve used
it with my teams to think through how we ‘make the invisible visible,’
and allow ourselves the grace to fail.  When you start getting really
rigorous about your data, you often discover that things might not be
as ‘rosy’ as you thought they were. We need to be humble and
creative and adaptable. This book is just the best.
 Jackie Novogratz is a huge inspiration to me, and I highly recommend
her two books “Manifesto for a Moral Revolution” and “The Blue
Sweater”.  She’s someone who emphasizes the importance of the
human experience, humility, accompaniment, and also rigour –
amazing.
 If you can look into “Poor Economics” by Esther Duflo and Abhijit
Banerjee as well; their approach to using RCTs in development contexts is great, and I love their insights.
 Currently on my reading list that I haven’t had a chance to read yet,
but thought they’re also worth mentioning: someone recently recommended “Data Feminism” to me, and I’ve also picked up “Impact: Reshaping capitalism to drive real change”. Hopefully, they’re good ones, too!
What has been your happiest or most frustrating moment
related to your work?
Digital data collection has allowed me to be involved in remote programme
management, where data is used to verify needs and aid delivery in highly
insecure contexts. I remember a programme I was running that was
completely dependent on amazing local aid workers who input thousands of
data points over months of time to ensure their work could continue. I’d
experienced so many emotionally acute moments in digital-only format with
that team, reading intimate stories, seeing pictures and creating a ‘digital’
picture of a remote programme. But when I finally met the team in person it
was the most amazing moment. It was the happiest of times, where we
realized that without the digital data capacity, we wouldn’t have been able to
meet such incredible humanitarian needs together. For me, that just showed
the power of tech and of partnership across cultures and boundaries and I am
forever inspired by that team.
Any final words of advice to those looking to pursue a
career in data management or monitoring and
evaluation (M&E)?
Oh goodness, go for it! I would say, find your community. Learn how to ask
the right questions – grounded in a clear understanding of what you need to
know to create forward movement and critical decisions. Keep humble and
don’t worry about failing. Learn how to use your voice to advocate for the
needs and insights you unearth through the process. And then have FUN
playing with all the tools available to you. I find that using most software is a
bit like playing a computer game – you can get really creative using it,
turning your ideas into a functioning system. I’m always playing with new
software, researching possibilities, and reaching out to chat with people. It’s
such an amazing community of practitioners, so I know you’ll find friendly
help out there!
We hope you found our interview with Janna helpful. If you have additional
questions for Janna or would like to join one of her courses or training
programs, you can simply email her, connect with her on social media, or
just drop us a line below and we’ll be sure to let her know! 
-------------------------------------------------

Explore the power of Monitoring and Evaluation (M&E) with specialist Kandi Shejavali - part 1
December 15, 2021
M&E specialist Kandi Shejavali believes that M&E has the power to bring
enormous transformation to the wellbeing of people and the planet, bringing us closer to the type of society we want. Think that’s an exaggeration? Find
out why she feels that way – and learn about her unique culinary method for
determining the best countries to work in. Plus, stay with us as we explore
the key challenges and opportunities in M&E through Kandi’s perspective.
Can you give us a quick introduction?
Sure! My name is Kandi, also known as Dr. Reader to my family and close
friends, given my love for books (I was a total nerd growing up and still have
nerd tendencies!).
Aside from my reading obsession, I love to dance and I love to eat good food.
Favorite cuisine? Anything local to the place I happen to be in, the more
traditional the better!
Which organisation do you work for?
As an entrepreneur, I’m the organization I work for. (laughs) My M&E-
related business entity is called RM3 Consulting. But ultimately, I consider
who I work for to be my clients, whether they be companies (who hire me for
consulting work) or individuals (who benefit from my coaching services).
Why? Because they’re the ones I wake up every day to better understand and
serve. 
I love it when a client tells me that, thanks to my support, they are able to
enhance the effective management of their project and maximize positive
results (which is what M&E is all about). Similarly, when I’m able to help an
individual client conduct M&E with greater clarity, effectiveness, and ease,
that absolutely makes my day. So that’s who I work for, my clients.
Which region(s) of the world have you worked in so
far? Where did you like working the most?
Gosh, I’ve worked on projects that have been implemented in almost all
regions of the world, from Africa to the Americas and the Caribbean to Asia
and the Pacific to Europe. I haven’t necessarily had the opportunity to travel
to all my project countries, but out of the countries I have been fortunate
enough to be in, it’s really, really difficult to say where I liked working the
most because each one is so unique.
So, let’s base it on the level of deliciousness of the local food, shall we? On
that basis, these are the three countries that I’d love to return to and spend
extended periods working (and eating) in: Mozambique, Nepal, and Rwanda.
(Mozambican piri-piri, Nepali momos, Rwandan brochettes, mmm, I’m
licking my lips just thinking about it…!)
How long have you worked in the monitoring and
evaluation (M&E) sector?
I’m not sure if I would call M&E a “sector” per se (rather than a profession),
but I’ve conducted M&E work in some form or another across multiple
sectors for at least a decade and a half, broken up somewhere in the middle
when I hopped over to work in international trade and export controls for a
while.
What inspired you to pursue a career in monitoring
and evaluation (M&E)?
I wish I could say that it was intentional, but I kind of fell into M&E by
accident! I couldn’t decide what sector I wanted to specialize in, so I chose
public policy and public economics as the focus areas for my Masters at New
York University because that allowed me to explore a wide diversity of
sectors. My core public policy analysis skills (which are an element of M&E)
could be applied to all of the sectors – public health, education, tax, you name
it. An undecided person’s dream! 
And, true enough, over the course of my career, which started as an intern at
a small non-profit in midtown Manhattan and then as an intern-turned-
consultant at the United Nations, I’ve applied my M&E skills to all types of
sectors from agriculture to biodiversity finance to oil and gas to social
protection to tourism and many others in between.
What inspires me about M&E, aside from providing the opportunity to
dabble in a diverse variety of sectors, is the immense promise it offers to
maximize the positive results of a project. And the beauty of that is that it
makes M&E the responsibility of everyone working on a project because
everyone’s efforts affect the project’s results. This means that the entire
project team needs to understand how to interact with the M&E system, not
just the person with “M&E” in their job title or as part of their official job
responsibilities.
What has been your happiest and the most
challenging moment related to your M&E work?
Happiest moment: when I realized that my sector colleagues on a large
USD 304.5 million project for which I served as M&E Director were
excitedly running to me to talk M&E, rather than me having to chase them
down. That demonstrated to me that my and my team’s efforts to wave the
M&E flag and establish how useful M&E could be had paid off. It was so
gratifying. Honestly, I could have cried with joy.
Most frustrating or challenging moment: having to facilitate the
implementation of a top-down, donor-imposed evaluation that the affected
population had not been consulted on and whose methodology aligned
neither with that part of the country’s cultural context nor the project’s
implementation approach.
Ultimately, that evaluation’s exigencies may have hindered rather than
helped the achievement of positive project results – and it certainly didn’t
help that the donor-hired evaluation team leader who jetted in showed
everyone the level of his sensitivity to other cultures by casually leaning back
and propping his feet right up on the meeting room table at the first gathering
with selected stakeholders!
All in all, a classic case of M&E getting in the way of implementation and of
the achievement of results – and, on top of that, generating resentment.
Obviously not the way to do M&E, dear TolaData readers.
What is the funniest reaction you’ve encountered
when you told someone that you were “working in
monitoring and evaluation (M&E)”?
Usually, whenever I tell anyone outside of the non-profit or international
development sectors that I “work in M&E”, I expand on it a little bit right
away because I know they’ll be lost if I don’t. After hearing the explanation,
they’ll typically grab onto whatever they can identify with and, with great
relief and pride in their understanding, they’ll say something that ends up
being pretty reductionist, like “oh, KPIs!”
So it’s always funny to see what they sum M&E up as being. I think
“financial auditing” was one of the funniest responses – but also the saddest!
Maybe I needed to improve my explanation that time!
In your opinion, what are the biggest opportunities
and challenges in monitoring and evaluation (M&E)
today?
I’m taking the opportunity side of this question to first mean the biggest
opportunities presented by M&E, okay?
The biggest opportunity presented by M&E is the enormous potential that it
holds to bring us closer to the type of society we want. To me, that’s what
M&E is about. It’s not the techniques or tools or graphs and numbers. None
of that means anything if M&E system-generated evidence isn’t used to
inform decision-making so that projects are not only steered in a way that
maximizes intended results but, more broadly, so that projects are designed to
be beneficial for people and the planet in the long-term.
This is especially relevant now in the context of anthropogenic climate
change and biodiversity loss. I believe that evaluative judgments made
through M&E activities have to be arrived at through this lens. M&E has to step up to the plate and “help envision and articulate [the type of society we
want] and then help chart a path to making it a reality”.
That might seem like wishful thinking, but this isn’t a pipe dream. There are
many interesting discussions being had on related topics, as can be heard on
podcasts like the GIIN’s Next Normal: Re-imagining capitalism for our
future. The opportunity is there for M&E to play an incredibly important role
to transforming society for the better! The revised DAC criteria can even be
considered one step in this direction, but I think they are applied too late in
the project cycle. In my opinion, they should be considered at the planning
stage. 
So that’s the opportunity presented by M&E. 
In terms of opportunities in M&E for professionals, I’d say that independent
consulting work is huge. There are so many opportunities out there. I can’t
count the number of opportunities I’m invited to participate in that I have to
turn down or refer to other people.
That’s why I’m committed to coaching M&E professionals and letting them
benefit from this. My interest is in getting these opportunities to qualified
people, ideally, people who live and work in or near the communities
targeted by the project in question and who are committed to a decolonizing
approach to M&E. I mean, why should someone like me be flown into a
country to do M&E when a local professional could have the opportunity to
earn income while doing an amazing M&E job that would more intuitively
incorporate the local communities’ way of knowing and doing into the M&E
activities?
I think we really have to commit to what ‘development’ means and ensure
that these opportunities don’t keep going to the same people.
Turning to the biggest challenge in M&E, well, it’s sort of the flip side of its
opportunity to bring us closer to the type of society we want, and it’s the
same challenge faced in other fields and by humanity at large: how to bring
an elevated level of consciousness into our work and thus catalyze
transformative change. 
Eckhart Tolle once said “No change is possible without a shift in
consciousness”, and I tend to agree. I’ve started to reflect on what this would
look like in M&E practice, and I intend to explore it some more.
That’s actually the other reason I picked Nepal earlier as one of my favorite
countries to work in. I felt an amazing sense of peace while I was there (it
wasn’t just about the delicious momos, you see)… the zen in the air was
almost tangible, and my soul felt very much at ease. I’d love to bathe in that
atmosphere again, though of course, we can all cultivate such inner peace
from wherever we are.
And the elevated level of consciousness that it represents is what I think the
world needs each of us to uncover within ourselves in order to be more
effective at truly transforming our societies for the better, including through
our work, which of course includes M&E work.
On a more practical level (I know that the consciousness stuff can seem a
little esoteric and ‘woo-woo’ for Cartesian M&E types (laughs)), another
challenge in M&E is how to measure project-specific results when projects
typically operate within complex systems. Different parts of life are
intricately interrelated, and things don’t work in the linear fashion that may
be conveyed by a project’s theory of change (ToC). ToCs have to be
caveated, and, in conducting M&E, practical approaches have to be taken to
credibly estimate – note, I said estimate; this is not a precise science – a
project’s unique contribution to results that multiple other actors may be
working towards as well.
Another challenge is measuring what matters. We have lots of measures that
are lauded for being robust in a limited statistical sense (GDP, for example)
and that are widely used. Yet, they don’t say much about how well we’re
truly doing as living beings, as people and nature. I think measures such as
Bhutan’s Gross National Happiness need to be used much more widely, with
each country or community establishing its related targets and taking
measures relevant to its particular context to meet those targets. Imagine the
kind of societies we could create! We’d be rich in the way that actually
matters.
And I guess a final challenge is that sometimes M&E is seen as a complex
and onerous burden. I get it. But it doesn’t have to be that way. That’s why I
seek to share my M&E skills in a manner that’s straightforward but allows
for the resultant M&E to be robust.
We hope you enjoyed getting to know monitoring and evaluation (M&E)
specialist Kandi Shejavali and reading about her reflections on her
profession and her journey into M&E. 
Stay tuned for the second part of this interview, coming up in January 2022.
Kandi will share her thoughts on the emerging trends in M&E, skills that are
helpful to have as an M&E professional, reflections on digital M&E, plus
excellent tips, recommendations and resources for those new to the sector.
In the meantime, you can connect with Kandi on LinkedIn or read more blog
articles from her on her website.

Widen your M&E horizon w/ specialist Kandi Shejavali - Part 2
January 26, 2022
In this 2nd part of our 2-part M&E interview series with specialist Kandi
Shejavali, Kandi shares her reflections on the emerging new trends in M&E,
skills that are helpful to have as an M&E professional, her thoughts on
digital M&E, plus, excellent tips, recommendations and resources for those
new to the profession. And that’s not all, Kandi even has a special
gift prepared for the TolaData readers to help you all widen your M&E
horizons and navigate the M&E path more confidently. 
So stay with us as we explore M&E together with Kandi Shejavali.
If you’d like to find out what inspired Kandi to pursue a career in M&E and
read about her reflections on the profession and her thoughts on the power of
M&E, then check out the 1st part of this interview – Explore the power of
M&E w/ specialist Kandi Shejavali – P1
Are you aware of any emerging trends in the M&E
sector? How do you feel about them?
M&E is an evolving specialization, and most of the emerging trends that I’m
aware of are wonderfully promising. For example, I love that the profession
is continuing to be more and more oriented towards serving as a management
tool for project decision-makers rather than being a checkbox exercise that
projects only comply with in order to keep the money rolling in.
The curious part about that, though, is that donors seem to be leading the way
in this shift (I was recently on a call with a donor who explicitly stated that
part of their M&E requirements were aimed at supporting the use of M&E
system-generated evidence for project steering purposes), while some project
management teams still consider M&E a burden. 
And the project management teams can’t be blamed for feeling that way.
There are plenty of elements in today’s M&E that are leftovers from M&E’s
old days of being meaningless and onerous to project teams.
But such teams will be glad to know that by viewing and leveraging M&E
tools differently, they can be well on their way to making M&E serve them
rather than feeling like they have to serve a horrible master called M&E. I
hope that the gift that I’ve prepared for TolaData readers helps those who are
frustrated by M&E get on the path to this ideal. (Link for gift download.)
Related to this idea of M&E being at the service of management, there’s also
great work being done by the team over at SoPact, with the concept of
frequent impact experiments that generate short-term feedback loops about
what is working and what is not working, so that necessary adjustments can
be made sooner rather than later. I also like their approach of not spending
too much time developing a complex ToC but rather focusing on refining the
general program logic with the help of findings from the impact experiment
results.
I may be biased with this next one since it’s part and parcel of my M&E
approach, but I think there’s also an emerging trend towards M&E being seen
more holistically. I perceive a recognition that M&E is not just about the ‘m’
and the ‘e’; it’s about the full continuum of results articulation, measurement,
and management – including the related assumptions, lesson-learning, and
considerations of broader impact that I referred to earlier to ensure the
maximization of positive results.
I love that this approach appears to be appreciated by my clients, so the
demand – or at least receptivity – is there, too. And organizations such as
UNDP have started to reflect this evolution in their M&E requirements, and I
find this to be a really positive sign, hopefully, a sign of a larger trend. 
I also see the interest in decolonizing M&E methodologies as a reflection of
this same shift – at least for me, personally, as I explore that particular topic
more deeply.
So those are the emerging trends in M&E that I welcome with arms wide
open.
On the other hand, trends like calling M&E “MEL” or “MERL” or “MEAL”,
don’t appeal to me too much at all… so much so that I wrote a grumpy blog
article about it! (laughs) Based on the engagement that the article received,
many other professionals are animated by the topic as well.
In your opinion, what are some good skills to have to
work in Monitoring and Evaluation (M&E)?
Of course, mastery of all the core M&E concepts and an awareness of the
various methodological and analytical approaches out there to answer M&E
questions. Plus a good grasp of related skills such as data collation and data
analysis, though the extent to which an M&E professional actually needs to
know all the intricacies of those types of activities depends on the person’s
specific role in M&E (some M&E professionals perform highly complex
statistical analyses for evaluations, others are more the M&E systems folks
who need more generalist knowledge).
But the core technical skills are only a part of doing great M&E.
You also have to have excellent listening skills – and here I don’t just mean
hearing what different stakeholders of a project say but identifying what lies
beneath what they are saying, including ‘hearing’ what the relevant
documents are saying.
And related to listening skills, excellent translation skills are also needed. For
translating what? Well, for translating what technical sector people might say
to inform the ToC into language that can be understood by laypeople who
might come across that same ToC without being subject matter experts. For
translating affected populations’ expressions of desired results into
measurable variables. For translating donors’ M&E requirements into
practical aspects of the M&E system. And so on.
And if curiosity can be considered a skill, plenty of curiosity as well.
Conducting and/or coordinating M&E activities effectively means sticking
your nose into all areas of the project’s business – including easily
overlooked areas like procurement – and drawing from or informing those
areas in order to ensure that the potential for achieving positive results is
maximized.
Organization skills, creativity, and a good sense of humor round out my list
of good skills to have to work in M&E.
What are your thoughts on “the digitisation of
Monitoring and Evaluation (M&E)”?
I think it’s essential! Especially for large projects, gone are the days when
M&E data could be managed through the exchange of Excel (or similar) files.
Excel can still be great for things like planning out your indicator
documentation or for designing reports that will be generated from a digital
system. But when it comes to actual M&E data capture, cleaning, storage,
analysis, and reporting, using a digital system just makes sense. If well
applied, it should make everything easier, all while enhancing data quality as
well. 
(If you’d like to receive a free template for indicator documentation, feel free
to contact Kandi directly via her LinkedIn.) 
The digitization of M&E also makes itself manifest in the way in which
M&E-related activities such as data collection are undertaken. Digital data
capture for surveys, for example, allows for more immediate data quality
oversight and shortens the timeframe from the point of data collection to the
final report, all of which allows for the evidence to be used more promptly to
inform project management. This is true for both self-administered and
enumerator-administered data collection, though there are specific
considerations for when to use each, just as there are for non-digitized M&E.
I think the main thing to take into account in using digital tools is to ensure
that they enhance rather than detract from the M&E system’s objectives, and
we should particularly ensure that the use of such tools will not limit the
meaningful participation of affected populations.
Have you used any digital tools for Monitoring and
Evaluation (M&E)? How was your experience?
Yup! I’ve worked on projects where a bespoke management information
system was in place or being created to capture and store M&E data as well
as allow for some analysis. My experience was pretty good, especially since
I was able to provide feedback to refine the digital tools as some kinks still
needed to be worked out.
I’ve also had oversight over data collection exercises that used tailor-made
digital tools. Again, a good experience.
I don’t have much experience with off-the-shelf tools. I’m aware of
TolaData’s tool, of course. I’m eager to learn more about it and other
products out there and to consider them for use in my projects.
Do you have any advice for individuals who are
thinking about starting a career in M&E or those who
are new to the sector?
For those thinking about starting a career in M&E, I’d say stop thinking
already, and try it out! As I mentioned earlier, M&E is the business of
everyone on the project team, so even if you don’t end up becoming or
remaining an M&E specialist, learning M&E will stand you in good stead no
matter what your role on a project is, and especially if you eventually hold a
leadership role. It’s truly a critical skill, and project team members with a
good understanding of M&E are best positioned to help it realize its full
potential as a management tool that is not a burden but indispensable.
So I say get started, reach out to connect with others in the profession, and,
most importantly, start practising even before you’re officially an M&E
professional. The practical tips that I provide in some of my blog articles are
mostly directed at folks who work on projects in any capacity, not just M&E
specialists – and the tips can be applied right away. The idea is to achieve
immediate wins for your project’s M&E. Try the tips out and then let me
know how you did.
Then for those who have already said yes to M&E and are new to the
profession, I say welcome on board, buckle in, and enjoy the ride! I advise
that you recognize that M&E ‘advocacy’ is as much a part of your job as the
technical M&E work. You’re going to have to wave the M&E flag among
your colleagues, sometimes subtly, sometimes not-so-subtly, because not
everyone will be as much in love with M&E as you are. You’ll need to make
M&E as pain-free of an experience for your colleagues as possible (there are
many ways to do this but one critical one is to have M&E mirror the project)
and be ready to quietly demonstrate, again and again, how useful M&E can
be to project decision-making. 
Then rinse and repeat. 
If anyone who’s new to M&E would like a free cheat sheet on M&E basics,
you can download one that I prepared here.
Are there any books or resource materials on M&E
that you would like to recommend to our readers?
There are many great resources out there. There’s TolaData of course – you
guys’ articles are great, and I love that you aim to make M&E simple,
understandable, and fun – and I’ve already mentioned SoPact. Your readers
probably often get referred to the usual suspects such as BetterEvaluation, so
I’ll mention a few others that are somewhat off the beaten path:
 The course Decolonizing Evaluation by WitsX (available on edX);
 Xceval, a great resource for finding out about hundreds of M&E
opportunities;
 Articles by the folks behind Khulisa Management Services, such
as #EvalTuesdayTip: 7 Lessons from Made in Africa evaluation failures;
and
 Zenda Ofir’s blog in which she shares a reimagined view of evaluation.
I recently read her article on the power and powerlessness of
evaluation in the context of the situation in Afghanistan, and I really
appreciate how she breaks things down and distils lessons.
I also think it’s important to broaden one’s horizons when one thinks about
‘M&E resources’ to keep a pulse on the world’s broader needs and then, after
that, consider how M&E can serve those needs. Podcasts like the GIIN’s that
I mentioned earlier (Next Normal: Re-imagining capitalism for our future)
are excellent resources for that.
Any additional advice or final words for our readers?
Regardless of your role on a project, whether as an M&E specialist or not, let
the promise of M&E inspire you, and use it to improve your project’s results
and make as meaningful and positive of a difference as possible. M&E, if
approached wisely and used masterfully, is a powerful tool to help change the
world for the better, enhancing the wellbeing of people and the planet!
We hope our 2-part interview series with monitoring and evaluation (M&E) 
specialist Kandi Shejavali was helpful in sparking some M&E inspiration,
answering some of your M&E related questions and providing clarity on
what it’s really like to work in the sector. If you have feedback on this
interview, please leave us a comment below. 

And for more insights from Kandi, don’t miss the first part of this interview
>>> Explore the power of M&E w/ specialist Kandi Shejavali –
P1 where she shares her journey into M&E, her reflections on the profession
and her thoughts on the power of M&E.
You can also connect with Kandi on LinkedIn or read more blog articles
from her on her website. 
February 7, 2022

Double counting can pose a threat to transparent and true reporting.


Organisations that use data to report their work must ensure that the effects
of double counting are mitigated so that results are reliable and valid. This
article will explain double counting, looking at how it arises in international
development projects and how it can be prevented. 
What is Double Counting?
Double counting occurs when an organisation or project counts a recipient
or event more than once during a reporting period. When all the data is
combined, the overall project results are unintentionally inflated. This
inflation of project results is known as over-reporting. Over-reporting makes
it difficult to understand how effective the project actually is. This in turn
can affect decision-making around the project, as coverage gaps can be
masked, making it hard to see which areas are worth scaling up and which
aspects need to be improved.
Financial Double Counting During the Pandemic
Double counting occurs not only in international development projects; it
also poses an issue in other kinds of reporting, such as financial reporting. At
the beginning of the pandemic, the issue came into focus when the OECD’s
Development Assistance Committee agreed to allow donor countries to count
debt relief as official development assistance. The agreement came under
scrutiny because it meant that donors
could double count aid money, once when they provided a loan and again if
they provided debt relief. This double counting would make it possible for
donors to artificially inflate their aid statistics so that they hit their aid targets,
all while recipient countries could in fact be receiving less money than
before. This discrepancy between donor reporting and the amount of
assistance recipient countries actually received is an example of double
counting. 
How Does Double Counting Occur in International
Development Projects?
Within the contexts of international development projects, MEASURE
Evaluation identifies three different types of Double Counting in their
research:
 Within-Partner Double Counting of Individuals 
 Between-Partner Double Counting of Individuals; and 
 Double Counting of Sites
Within-Partner Double Counting of Individuals occurs when a partner or
project at one site provides the same service multiple times to the same
individual. In the reporting, this individual is counted as a separate recipient
each time they receive the service, thus inflating the number of
beneficiaries. For example, an organisation is running three different
workshops on quality teaching approaches. Teachers from all over the
country are invited to participate. Each teacher can choose to attend 1, 2, or all
3 of the workshops depending on which topics they are interested in. In their
reporting, the organisation wants to keep track of the number of teachers who
received training. However, since the same individual may take multiple
training workshops within that reporting period, the partner may over-report
the number of individuals trained if this is not taken into consideration. 
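To make this concrete, here is a minimal Python sketch with hypothetical teacher IDs and attendance lists: naively summing per-workshop attendance counts the same teacher more than once, while counting unique IDs does not.

```python
# Hypothetical attendance lists for the three workshops; teacher "T2"
# attends workshops 1 and 2.
workshop_attendance = [
    ["T1", "T2", "T3"],  # workshop 1
    ["T2", "T4"],        # workshop 2
    ["T5"],              # workshop 3
]

# Naive count: sums attendance per workshop, so "T2" is counted twice.
naive_count = sum(len(attendees) for attendees in workshop_attendance)

# Deduplicated count: each teacher is counted once across all workshops.
unique_teachers = {t for attendees in workshop_attendance for t in attendees}

print(naive_count, len(unique_teachers))  # → 6 5
```

The gap between the two figures (6 versus 5 here) is exactly the over-reporting that Within-Partner Double Counting introduces.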
Similarly, Between-Partner Double Counting of Individuals involves two or
more partners providing the same service to the same individual, either at the
same site or at different sites, within the reporting period. Both partners then
include that individual in their count of the number of beneficiaries. For
example, partners aim to count the number of beneficiaries of malaria
treatment across two projects active in the same region. If the reporting team
is not aware that an individual might be receiving treatment from both
project A and project B, that individual beneficiary ends up being counted
twice in the reporting period.
Lastly, Double Counting of Sites occurs when different partners provide
different supplies and/or services to the same organization within one
reporting period and each partner counts that same organization as one of its
service points. For example, partner A provides training on handwashing and
hygiene to providers at site Z. Partner B provides PPE to this same site.
When reporting the number of service outlets carrying out testing, both
partner A and partner B count and report site Z. 
Ways to Mitigate Double Counting
With these cases of double counting in mind, we can now look at how these
situations can be better accounted for in reporting. 
1. Separating Partners by Project or Site. 
To avoid Double Counting of Sites and some of the effects of Within Partner
Double Counting of Individuals, it can be helpful to separate out partners by
site. For example, Partner A provides clinical care to COVID patients in
Region 1 while Partner B provides care to COVID patients in Region 2.
Similarly, partners can also be separated by project. For example, Partner A
provides clinical care to COVID patients in Region X whilst Partner B
provides medical equipment in the same region. By separating partners by
site or project, project coverage can be maximized and duplication of effort
reduced. However, it should be noted that such a practice will not fully
eliminate Between-Partner Double Counting of Individuals. This is because
some beneficiaries may move between sites and receive services from more
than one partner. 
2. Estimating the Degree of Overlap in the Project. 
Often, partners will overlap both geographically and in their projects. To
mitigate all three different types of double counting, partners can adjust their
reporting after data has been collected. For example, if an organisation has a
KPI measuring the total number of beneficiaries across two of their projects
in one region, they will be aware that some beneficiaries are receiving
services from each project during the reporting period. The organisation can
estimate this overlap as a percentage. This estimated overlap can then be
included in the calculation of the adjusted total number of beneficiaries.
Just as this estimation can help with over-reporting, it can also be used to
tackle instances of under-reporting. For example, an indicator measuring the
total population in two different states uses census data from each state in its
calculation. However, historical analysis of this census data shows that for
state 1 the census reports are always 15% over-reported, whereas in state 2
they are 5% under-reported. With this in mind, applying these percentage
weightings can help organisations correct for either double counting or
under-reporting.
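As a rough illustration of this kind of percentage adjustment (all figures and the helper function below are hypothetical), a single correction factor can shrink a figure known to be over-reported or inflate one known to be under-reported:

```python
def apply_correction(reported_count, correction_pct):
    """Apply a percentage correction to a reported count.

    A positive correction_pct shrinks the figure (known over-reporting,
    e.g. an estimated overlap between projects); a negative value
    inflates it (known under-reporting).
    """
    return round(reported_count * (1 - correction_pct))

# Hypothetical census figures: state 1 is historically 15% over-reported,
# state 2 is 5% under-reported.
state_1 = apply_correction(200_000, 0.15)   # corrected downwards
state_2 = apply_correction(150_000, -0.05)  # corrected upwards
print(state_1 + state_2)  # → 327500
```

The same helper covers the overlap case: an estimated 10% overlap between two projects would simply be passed in as a positive correction on their combined total.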
3. Adjusting Client Visits to Number of Clients Reached.
To prevent Within-Partner Double Counting of Individuals, partners can
move away from reporting the raw number of client visits and instead
estimate the number of unique clients reached. To do this, the partner can
distribute surveys asking people how many visits to the facility they have
made during the reporting period. After collecting responses, the visit data
can be adjusted based on these empirical findings.
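A minimal sketch of this adjustment, with hypothetical figures: dividing the total number of recorded visits by the survey-derived average number of visits per client yields an estimate of unique clients reached.

```python
def estimate_clients_reached(total_visits, avg_visits_per_client):
    """Estimate the number of unique clients from a raw visit count.

    avg_visits_per_client comes from a survey asking respondents how
    many times they visited the facility during the reporting period.
    """
    if avg_visits_per_client <= 0:
        raise ValueError("average visits per client must be positive")
    return round(total_visits / avg_visits_per_client)

# Hypothetical: 4,500 recorded visits; surveyed clients report an
# average of 3 visits each during the period.
print(estimate_clients_reached(4500, 3))  # → 1500
```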
4. Unique identifiers. 
To avoid Within-Partner Double Counting of Individuals, organisations can
assign project beneficiaries unique identification numbers through either
paper-based or computer-based monitoring systems. If such unique identifiers
are shared between partners, this can also mitigate Between-Partner Double
Counting of Individuals. It’s important to ensure privacy and confidentiality
concerns are taken into account when sharing data in such a way. 
Unique identifiers can also be used within the indicator workflow itself to
help differentiate between the sites and services that different partners are
involved in. This can also help prevent Double Counting of Sites: if all
partners utilize the same unique identifiers in their indicator workflow, data
can be easily aggregated and adjusted.
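If partners do share identifiers, the deduplication itself is straightforward; the sketch below uses hypothetical record structures and IDs to show each beneficiary counted once across partners.

```python
def count_unique_beneficiaries(partner_records):
    """Count each beneficiary once across all partners' records.

    partner_records: one list of records per partner, where each record
    is a dict carrying a shared 'beneficiary_id'.
    """
    seen = set()
    for records in partner_records:
        for record in records:
            seen.add(record["beneficiary_id"])
    return len(seen)

# Hypothetical records: beneficiary "B002" received services from both
# partners, so a naive sum of counts (4) would double count them.
partner_a = [{"beneficiary_id": "B001"}, {"beneficiary_id": "B002"}]
partner_b = [{"beneficiary_id": "B002"}, {"beneficiary_id": "B003"}]
print(count_unique_beneficiaries([partner_a, partner_b]))  # → 3
```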
Double counting is a common problem in reporting. While it cannot always
be remedied, being aware of double counting and the way in which it can
present itself in projects is key to mitigating its effects. This ensures
transparent and powerful reporting for organizations. 
If you want to learn more about how you can adjust results for double
counting in TolaData check out our guide in the Help Center.
We hope our article was helpful. If you have any feedback or suggestions on
how we can improve it, please feel free to write to us in the comment section
below. 

You might also like