ForAfrika DMEAL Framework
2025
Author
Fabio Bezerra Correia Lima – Director of DMEAL
Thabo Miya – Programme Officer
Collaborators
Abeba Amene – Chief Programmes Officer
Dr Mary Okumu – Senior Director Technical Unit
Richard Mulandi – Director of Integrated Programming
Narciso Cau – DMEAL Specialist (Mozambique)
Allan Ssebutinde – DMEAL Specialist (Uganda)
Acknowledgements
We would like to express our deepest gratitude to all those who contributed their time,
expertise, and valuable insights in the development of this DMEAL Framework.
Special appreciation goes to the dedicated members of the Global Support Office (GSO)
Technical Unit. Your technical expertise and guidance were invaluable in shaping this robust
tool.
We also wish to thank the teams at the country offices, specifically the directors, programme
managers, and monitoring & evaluation officers. Your hands-on experience and intimate
knowledge of our work’s realities ensured that this guideline is both effective and practical in
measuring our impact at the grassroots level.
We are grateful to the members of the Executive Committee for their leadership and vision.
Your thorough review and final approval of this guideline not only ensures its quality but also
reaffirms our collective commitment to evidence-based decision-making and transparency.
Our gratitude extends to the communities we serve. The trust and co-creation of knowledge
you all facilitate is integral to our work. Your resilience and determination continually inspire
us to improve our methods and understand better the impacts of our work.
Together, we strive to create a world where everyone has the opportunity to lead a sustainable
and self-sufficient life.
Published in April 2023 - Updated in September 2024
1. Introduction 7
1.1 Background and purpose 7
1.2 Theory of Change 8
1.3 Importance of community participation in the DMEAL process 10
1.4 Importance of community participation in the DMEAL process 11
• Project Management 11
• Programme Management 11
1.5 DMEAL in the programme cycle 12
Direct Users of the DMEAL Framework 13
4. Community-based monitoring 33
4.1 Establish a monitoring plan 33
4.2 Collect data on programme progress 35
4.3 Participatory data verification and improvement 37
4.4 Monitor progress against targets and develop action plans 37
Chapter 4 Summary Table 39
5. Participatory evaluation 41
5.1 Establish an evaluation plan 41
Programme Prioritisation for Evaluation 41
Establishing the Evaluation Plan 42
Types of Evaluations and Evaluation Approaches 43
5.2 Ensure community ownership and empowerment 45
5.3 Evaluation management 46
5.4 Evaluation Action Plan 47
5.5 Internal Evaluations and Reviews 48
5.6 Impact Measurement 49
Chapter 5 Summary Table 51
6. Community-driven accountability 53
6.1 Establish community-based feedback channels 54
Suggested proactive feedback channels 55
Suggested reactive feedback channels 55
6.2 Handling and addressing complaints and grievances 56
6.3 Interpreting and responding to stakeholder feedback 60
6.4 Communicating responses 61
6.5 Foster community ownership 61
6.6 Reporting 62
6.7 Roles & Responsibilities 63
References 74
• Local ownership and empowerment: Community participation fosters local ownership and
empowerment, as participants become active stakeholders in the programme’s design,
implementation, monitoring, and evaluation, contributing to long-term success and
sustainability.
• Local capacities and resources: A localisation approach leverages the skills, knowledge,
and resources of local communities, building on their existing capacities and strengths to
support the development of locally driven solutions and promote self-reliance.
• Sustainability: A localisation approach fosters local ownership, capacity building, and the
development of community-driven solutions, supporting the long-term viability of
interventions even after the organisation’s direct involvement has ended.
• Project Management
According to the Project Management Institute and PMD Pro 1, a project is defined as “a
temporary endeavour undertaken to create a unique product, service, or result”.
Projects deliver integrated outputs (deliverables), which then result in better outcomes
(results) for communities and other stakeholders (such as donors). Projects are time-bound
and focus on a requirement to deliver specific benefits for communities in ways that are cost-
effective and measurable.
Project Management is the discipline of planning, organising and managing resources to bring
about the successful delivery of specific project goals, outcomes and outputs. The primary
challenge of project management is to achieve each of these, while managing project
constraints related to scope, budget, schedule and quality.
• Programme Management
Programmes are groups of related projects and activities (sometimes referred to as
’component parts of a programme’) that are managed in a coordinated way to achieve an
impact that is greater than if they were managed individually. In other words, the whole
(the benefit of the programme) is greater than the sum of its parts (the projects, activities and
tasks). ForAfrika organises projects into programmes to deliver outcomes that address a broad
range of needs and achieve exponential benefits for the communities in which we work.
Programmes, unlike projects, are generally implemented through a centralised management
system (Country Directors and Programme Managers) in which groups of projects are
coordinated to achieve a programme’s overall strategic objectives and benefits. This approach
enables us to achieve economies of scale and realise incremental change that would not be
possible if projects were managed separately.
Although the focus of this guideline is on the DMEAL process from a programme perspective, all
the tools and rationale presented below can also be applied to project management.
1. Project Management for Development Professionals Guide, developed by PM4NGOs. Accessible at https://www.pm4ngos.org/project-dpro/
DMEAL Director (GSO Level): The DMEAL Director is the highest authority in terms of the
Guideline's implementation. Their role includes overseeing the DMEAL practices across the
organisation, ensuring alignment with ForAfrika's values and strategies. They offer guidance
to all levels of staff, facilitate necessary training, ensure resources are allocated for DMEAL,
and promote a culture of continuous learning and improvement.
Programme Quality Manager (GSO level): The Programme Quality Manager ensures that
the quality of programme design and implementation aligns with the guidelines set forth. They
work closely with the DMEAL Director to translate the Guideline into practice and ensure that
the quality standards are met at all stages of the programme cycle. Critically, they are
responsible for ensuring that each programme aligns with ForAfrika's results framework at the
design stage, which means they actively guide and oversee the development and refinement
of programme plans, outcomes, and indicators.
Programme Data Officer (GSO level): The Programme Data Officer's primary role is to manage
and oversee the quality of data collected throughout the programme's cycle. They play an
instrumental role in implementing data-related aspects of the Guideline, such as designing
and maintaining data collection tools, ensuring data quality, and facilitating data analysis and
interpretation.
Programme Integration Director (GSO level): The Programme Integration Director's role is
to ensure that all different programmes and initiatives align with the Guideline. They facilitate
cross-departmental coordination, promote integration of DMEAL practices into all
programmes, and ensure that the DMEAL processes are incorporated in a way that
complements and enhances overall programme performance.
Country Directors: The Country Directors are responsible for implementing the Guideline at
the country level. They oversee their respective teams, ensuring that DMEAL practices are
incorporated into all programmes. They also facilitate coordination with the GSO level staff,
report on DMEAL activities, and ensure that country-level insights feed into organisational
learning.
DMEAL Managers and Officers (country level): The DMEAL Managers and Officers,
hereinafter called DMEAL staff, are the backbone of DMEAL implementation at the country
level. They facilitate data collection, manage data quality, conduct monitoring and evaluation
activities, and report findings in line with the Guideline. They also collaborate closely with
programme managers and community stakeholders to ensure participatory approaches are
utilised. Additionally, DMEAL staff play a key role in ensuring that country-level programme
design and implementation align with ForAfrika's results framework, providing the necessary
support and oversight to ensure this alignment.
Programme Managers and Staff (country level): Programme Managers have a critical role
in implementing the Guideline in their specific programmes. They are responsible for
incorporating DMEAL activities into programme design and implementation, collaborating with
DMEAL Staff to ensure data is collected and used effectively, and promoting participatory
approaches in DMEAL practices.
2. All secondary data sources should be explicitly referenced.
Gender and Representation: Ensure gender balance in the composition of data collection
teams and in participatory activities. At a minimum, all indicators involving numbers or
percentages of people should be disaggregated by gender. The DMEAL team will strive to
include individuals of all age groups and persons with disabilities in routine data collection
efforts.
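The disaggregation requirement above can be illustrated with a minimal sketch. The record structure and field names below are assumptions for illustration, not ForAfrika's actual data schema:

```python
from collections import Counter

# Illustrative respondent records; the field names are assumed,
# not taken from ForAfrika's real data collection tools.
respondents = [
    {"gender": "female", "age_group": "15-24", "disability": False},
    {"gender": "male",   "age_group": "25-59", "disability": True},
    {"gender": "female", "age_group": "25-59", "disability": False},
]

def disaggregate(records, key):
    """Count records by a disaggregation key (e.g. gender or age group)."""
    return Counter(r[key] for r in records)

by_gender = disaggregate(respondents, "gender")
print(by_gender)  # Counter({'female': 2, 'male': 1})
```

The same helper applies to any people-based indicator: swap `"gender"` for `"age_group"` or `"disability"` to produce the other disaggregations mentioned above.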
Informed Consent: Participants must be clearly informed about how their data will be used,
with whom it will be shared, and the process for submitting complaints or addressing concerns.
Consent must be explicitly obtained before data collection begins.
Ethical Clearance: Before contacting any respondents for participation in any needs
assessments and/or studies which collect data beyond our monitoring activities, ForAfrika
country office teams should submit applications to ethical clearance statutory bodies in their
relevant jurisdiction for the ethical vetting of the proposed research.
Mozambique (MZ): National Bioethics Committee for Health (CNBS – Comité Nacional de Bioética para a Saúde)
While many of these review boards pertain specifically to health, clearances for research in
other fields may be obtained from the internal review boards of most Higher Education
Institutions (HEIs). This is an important layer of compliance for the organisation to have
documented. Moreover, partnerships with these institutions may be established to secure
their support with this and other aspects of our research efforts.
When undertaking ethical clearance for research, always keep in mind the following:
• Identify the Relevant Body: Depending on the nature of your research (e.g., health, social
sciences), you may need to approach different bodies.
• Submit Proposal: Prepare a detailed research proposal, including information on ethical
considerations, and submit it to the appropriate committee or board.
• Follow Local Regulations: Ensure compliance with local laws and guidelines regarding
research ethics.
1. Mobilise resources: Each Country Office needs to account for funds for regular
community consultations in their country strategies. These will allow for a bank of
community-based evidence on local priorities. Further, every project proposal budget
must include resources for a complete or rapid needs-assessment.
2. Define objectives, scope, and timeline: Clearly outline the purpose and objectives of
the participatory needs assessment. Determine the scope by specifying the geographic
area, target population, and thematic focus (related to ForAfrika’s programme areas) and
develop a realistic timeline for the data collection;
3. Identify stakeholders and participants: Compile a list of key stakeholders, including
local community members, leaders, civil society organisations, government
representatives, and other relevant parties who should be involved in the process.
Consider diversity in terms of gender, age, ethnicity, socio-economic status, and other
factors to ensure a balanced representation of the community. Define your focal area
precisely to draw an appropriate and representative sample;
4. Select participatory tools and methods: Choose appropriate participatory tools and
methods that align with the objectives and context of the needs assessment. Ensure that
the selected methods are culturally sensitive, accessible, and inclusive to facilitate
meaningful participation from all community members.
5. Develop a plan for data collection, quality control, and reporting: Outline the data
collection strategy, including the methods, tools, and techniques to be used, as well as
the roles and responsibilities of the assessment team and community participants. Set up
a plan for how the team will ensure the quality standards set for the data are adhered to
as well as a plan for how the results of the data collection will be reported. Consider
ethical considerations and ensure informed consent from participants.
Stakeholder Engagement:
Data Collection:
• Train data collectors, including community members and assessment team members, on
the selected participatory tools and methods. Ensure they understand the objectives of the
needs assessment, ethical considerations, and how to collect and record data accurately
and respectfully.
• Conduct data collection activities using the selected participatory tools and methods to
capture needs, challenges and assets, and accurately record the data using formats
such as written notes, audio recordings, photographs, or visual representations. Ensure
that data is stored securely and confidentially.
• Regularly monitor the data collection process to ensure that it is progressing according to
the plan, addressing any challenges or issues that may arise.
Data Analysis:
• Gather all collected data and organise it in a structured and accessible format. This may
involve transcribing audio recordings, digitising handwritten notes, or sorting and
categorising data.
• Analyse qualitative data, such as focus group discussions or key informant interviews, by
identifying common themes, patterns, and trends. This may involve using techniques such
as content analysis, thematic analysis, or narrative analysis.
• Analyse the quantitative data, such as survey responses or numerical rankings, using
descriptive or inferential statistical methods, depending on the data and research
questions.
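The two analysis steps above can be sketched in their simplest possible form: counting recurring codes from qualitative notes, and computing descriptive statistics on quantitative responses. The codes and scores below are invented for illustration:

```python
import statistics
from collections import Counter

# Hypothetical codes assigned to focus-group excerpts (one per mention).
coded_mentions = ["water access", "school fees", "water access",
                  "road repair", "school fees", "water access"]

# Thematic analysis reduced to its most basic element: code frequency.
theme_counts = Counter(coded_mentions)
print(theme_counts.most_common(2))  # [('water access', 3), ('school fees', 2)]

# Hypothetical survey responses (e.g. 1-5 satisfaction ratings).
scores = [4, 3, 5, 4, 2, 4, 3]
print(statistics.mean(scores))    # descriptive statistic: mean
print(statistics.median(scores))  # descriptive statistic: median
```

In practice, thematic analysis also involves interpreting how themes relate to one another; frequency counts are only the starting point.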
Prioritisation:
• Share the findings of the data analysis with community members and stakeholders, using
accessible formats and language.
• Organise workshops or meetings to discuss the findings and collaboratively prioritise the
identified needs and challenges. Ensure that diverse community members and
stakeholders are included in these discussions.
• Establish criteria for prioritising needs and challenges, such as urgency, severity,
feasibility, or potential impact. These criteria should be agreed upon by the community
members and stakeholders involved in the process.
• Apply the prioritisation criteria to the identified needs and challenges, either through a
structured process such as matrix ranking or through group discussions and consensus-
building.
ACTIVITY TITLE | ACTIVITY DESCRIPTION | PERSON RESPONSIBLE | TOOLS REQUIRED
Timeline Establishment | Develop a realistic timeline for conducting the needs assessment. | Programme Quality Manager & DMEAL Staff | Gantt chart or other programme management tools
Tool and Method Selection | Choose appropriate participatory tools and methods that align with the objectives and context. | Programme Quality Manager & DMEAL Staff | Needs assessment tools and methods guide
Data Collection Plan Development | Outline the data collection strategy, including the methods, tools, etc. | Programme Quality Manager & DMEAL Staff | Data collection plan template
… Building | … establish trust. | … & DMEAL Staff | … phone calls
Roles and Responsibilities Definition | Clearly define and communicate the roles and responsibilities of stakeholders. | Programme Quality Manager & DMEAL Staff | Roles and responsibilities chart
Report Dissemination | Disseminate the approved report to relevant stakeholders. | Programme Quality Manager & DMEAL Staff | Emails, printed copies, community meetings
Impact: the long-term results ForAfrika aims to achieve, as defined by our mission and vision:
20 million people reaching self-sufficiency by 2030.
Ultimate Outcomes: the three high-level results which the Theory of Change identifies as the
steps necessary to achieve the impact. These currently relate to emergency response, building
back better and transformational development.
Intermediate Outcomes: the high-level goals we aim to achieve through each programmatic area,
which will enable the ultimate outcomes. They refer to changes at the level of community
systems, policies and collective behaviour. Each intermediate outcome can be achieved
by a combination of immediate outcomes. Please refer to the Pillars' LogFrames to harmonise the
programme outcomes with our global framework.
Immediate Outcomes: these are the first level of change in each programme area we expect to
observe. They are a consequence of successful implementation of one or more projects. They
normally refer to changes in knowledge, attitude and individual practices.
Outputs: Outputs are the immediate deliverables (not changes) we provide – i.e. services
and products. When Country Offices design projects, they need to link their outputs to the
immediate outcomes provided in the overarching LogFrame – hence contributing to strategic
alignment and reducing fragmentation of efforts.
Example
If a participatory needs assessment identified low literacy rates among women as a key challenge
in a community, one of the programme immediate outcomes might be to "Increase the literacy rate
among women aged 15-25 in community X from 50% to 75% over the next five years". This
objective directly addresses the identified need and is SMART.
An associated ultimate outcome might be "Improved economic opportunities for women in X
community". The Theory of Change would then map out the steps and conditions needed to move
from the outputs (such as "increased school enrolment for girls", "improved school retention
rates", and "increased participation in vocational training programmes") to the immediate
outcome (increased literacy rates) and on to the ultimate outcome (improved economic
opportunities). These steps need to explain why each lower level of results is expected to
generate the next level of effects.
3. https://pmdprostarter.org/problem_tree/
4. https://pmdprostarter.org/objectives_tree/
Results: The different results that the programme area aims to achieve. Each result should be specific, measurable, achievable, relevant, and time-bound (SMART). Start listing the Ultimate Outcome, then follow the sequence presented above. Each result should be inserted in one line, and followed by its indicators, means of verification and assumptions.
Indicators: The measures or metrics that will be used to assess progress towards achieving each result. Indicators should be specific, measurable, and relevant to the result. When selecting indicators, choose ones that are easy to measure and report on. This will help ensure that progress is monitored and reported accurately.
Means of Verification: How data will be collected to assess progress towards achieving each result. This should include the data sources, tools, or methods that will be used to collect the data. When identifying means of verification, consider the data sources and methods that will be most reliable and feasible to use. These may include surveys, interviews, observation, or other data collection methods.
Assumptions: External factors that need to be present for the results to be achieved. Assumptions should be specific and relevant to the result. They should also be realistic and based on available information. When listing assumptions, be specific about the external factors that need to be present for the results to be achieved. Consider both positive and negative external factors that may impact the project or programme's success.
Theory of Change: takes the Program Logic Model one step further by explicitly outlining the
causal pathways leading from inputs to outcomes, identifying the preconditions or intermediate
outcomes that must be in place for the final outcomes to be achieved. It also considers the
assumptions underlying these pathways and the external factors that might influence the
achievement of outcomes.
Identifying Indicators
• Review Programme Outputs and Outcomes: Revisit the defined outputs and outcomes
from the programme design. Each output and outcome should have associated indicators.
• Examine ForAfrika’s Global Indicator Tracking Tool and Pillar LogFrames: Consider
the indicators already established in these tools to promote consistency and comparability
across ForAfrika’s programmes.
• Involve Community Members: Conduct workshops or discussions with community
members to identify potential indicators. Community input ensures the indicators are
understandable, meaningful, and have local relevance.
• Select Indicators: Finalise a list of indicators, making sure each outcome has at least one
corresponding indicator. They can be qualitative (describing qualities or characteristics)
and/or quantitative (numerical data).
• Evaluate selected indicators against SMART, cultural appropriateness, and other
criteria: Indicators should be SMART – Specific, Measurable, Attainable, Relevant, and
Time-bound. They should also be culturally appropriate and feasible to collect with the
resources available. Indicators should also be useful for management, specify the
standards informing their targeting, and emphasise their connection to activities
undertaken.
Example: For a programme aiming to improve literacy rates (outcome), a possible
quantitative indicator could be “percentage of children at grade level reading proficiency,”
while a qualitative indicator might be "increased confidence in reading, as reported by
students."
Setting Targets
• Understand Baseline Data: Before setting targets, understand the current situation or
baseline against which progress will be measured. The baseline data for each indicator
can be gathered during the needs assessment phase.
• Consult with Community Members: Community consultation is key in setting targets to
ensure they are realistic and culturally appropriate.
Remember, indicators and targets may need to be revised as the programme evolves or as
the community’s needs and context change. Regular review ensures that they continue to be
relevant and aligned with the programme’s objectives and outcomes.
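Once a baseline and a target exist for an indicator, progress can be expressed as the share of the baseline-to-target gap closed so far. A minimal sketch, reusing the literacy figures from the example earlier in this chapter (baseline 50%, target 75%) with an assumed current measurement of 60%:

```python
def progress_toward_target(baseline, current, target):
    """Share of the baseline-to-target gap closed so far.

    Returns 0.0 at baseline, 1.0 at target; values above 1.0
    indicate the target has been exceeded.
    """
    return (current - baseline) / (target - baseline)

# Literacy example: baseline 50%, current 60%, target 75%.
p = progress_toward_target(50, 60, 75)
print(f"{p:.0%} of the way to target")  # 40% of the way to target
```

Expressing progress relative to the gap, rather than as a raw percentage-point change, keeps indicators with very different baselines comparable in review meetings.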
• Identify Activities: After defining the programme/project outcomes and outputs, list all
the activities that are necessary to achieve each output.
• Gather Quotes: Obtain quotes from partner suppliers for each of the identified activities
which comprise the entire project, and compare overall costs across suppliers. These could
include personnel costs, materials, equipment, training, and any other costs that are necessary
to implement the activity. This step should incorporate a comprehensive analysis of all
potential costs, including both direct and indirect costs such as overheads.
• Link Costs to Outputs: For each activity, identify which outputs it contributes to and link
the costs of the activity to these objectives. This can be done by assigning a percentage
of the cost of each activity to each output based on how much it contributes to the
achievement of that output. For example, if you have an activity that contributes equally
• Temporal Distribution: After identifying cost centres and estimating the resources,
distribute the costs over the programme’s lifetime on a monthly basis. This will create a
clear financial timeline for executing the activities and achieving the outputs and
outcomes.
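The temporal distribution step above can be sketched by spreading each activity's cost evenly across its active months. Even spreading is a simplifying assumption; real activities are often front- or back-loaded, and the figures below are illustrative:

```python
def spread_cost(total_cost, start_month, end_month, programme_months):
    """Distribute a cost evenly across the months an activity is active.

    Months are 1-indexed within the programme lifetime. Even spreading
    is an assumption; adjust the per-month shares for uneven activities.
    """
    active = range(start_month, end_month + 1)
    monthly = total_cost / len(active)
    return [monthly if m in active else 0.0
            for m in range(1, programme_months + 1)]

# Hypothetical: a 9,000 training activity running months 2-4 of a 6-month project.
timeline = spread_cost(9000, 2, 4, 6)
print(timeline)  # [0.0, 3000.0, 3000.0, 3000.0, 0.0, 0.0]
```

Summing such timelines across all activities yields the monthly financial plan referred to above.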
5. ResearchMethod.net. 2024. Stratified Random Sampling – Definition, Method and Examples. Available online at researchmethod.net [Accessed 30 September 2024].
ACTIVITY TITLE | ACTIVITY DESCRIPTION | PERSON RESPONSIBLE | TOOLS REQUIRED
Review Programme Outputs and Outcomes | Revisit the defined outputs and outcomes and associate each with appropriate indicators. | Programme Manager and sector specialists (lead); DMEAL Staff (support) | Programme LogFrame
INDICATORS AND TARGETS
Temporal Distribution of Costs | Distribute the costs over the lifetime of the programme. | Programme Manager and sector specialists | Budgeting tools
4. Community-based monitoring
After an inclusive and participatory programme design process, the journey does not stop.
The next crucial step is monitoring. This chapter elucidates the significance of a community-
centred approach to monitoring, turning to the community as valuable partners and
contributors in the process.
With Community-Based Monitoring (CBM), the programme data collected are not only more
reliable and relevant; the approach also fosters a shared ownership that strengthens
community engagement, empowerment, and sustainability.
While reflecting on the topics above, make sure the monitoring process is adequately
supported by factoring in the costs associated with each activity within the programme budget.
Consider the following elements when budgeting:
• Personnel: This includes wages for data collectors, analysts, and other staff involved in
the monitoring process.
• Training: Account for costs of training sessions for community members and other data
collectors.
Along with programme progress, data collection efforts should also encompass budget
utilisation. Tracking spending against the budget will highlight any discrepancies between
planned and actual expenditures, thus facilitating timely financial management decisions.
Financial data should be collected accurately through ForAfrika's financial system and
compiled in a monitoring spreadsheet that pairs output and outcome indicators with their
budgetary performance.
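The spend-versus-budget tracking described above reduces to a variance calculation per output line. The outputs, figures, and 10% flagging threshold below are illustrative assumptions, not ForAfrika's actual tolerances:

```python
def variance_pct(planned, actual):
    """Percentage deviation of actual spend from planned spend."""
    return 100 * (actual - planned) / planned

# Planned vs actual spend per output (illustrative figures, not real data).
budget_lines = [
    {"output": "Teaching materials produced", "planned": 12000, "actual": 14500},
    {"output": "Teachers trained",            "planned": 8000,  "actual": 6200},
]

for line in budget_lines:
    pct = variance_pct(line["planned"], line["actual"])
    # Deviations beyond a tolerance (10% here, an assumed threshold)
    # are flagged for timely management attention.
    flag = "REVIEW" if abs(pct) > 10 else "ok"
    print(f"{line['output']}: {pct:+.1f}% {flag}")
```

Both underspend and overspend are flagged, since each can signal implementation problems worth discussing with programme managers.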
6. Okello, F., Kitungulu, B., Kabore, I., Adhikary, R., Merrigan, M., Lew, K., and Etheredge, G. 2013. Participatory Data Verification and Improvement Tool: Framework and Operational Guide for Implementation. Available online from FHI [Accessed 4 September 2024].
Following the collective interpretation of results, the next step is developing an action plan
to address any identified deviations or challenges.
• Define Actions: Identify specific actions that need to be taken to address the issues
raised. For instance, if a literacy programme requires more engaging teaching materials
for children, one action might be to collaborate with teachers and students to develop these
resources.
• Assign Responsibilities: For each action, assign a responsible party. This ensures
accountability for the implementation of the action. For example, a specific team within
ForAfrika could be responsible for coordinating the development of the teaching materials.
• Set Timelines: Establish a realistic timeline for the implementation of each action. This
helps maintain momentum and allows for tracking of progress. For instance, the goal might
be to have the new teaching materials ready for use within three months.
ACTIVITY TITLE | ACTIVITY DESCRIPTION | PERSON RESPONSIBLE | TOOLS REQUIRED
… | … targets. | … (Support) | …
Assign Responsibilities | For each action, assign a responsible party. | Programme Manager (Lead), Programme Team (Support) | Action Plan Template
Set Timelines | Establish a realistic timeline for the implementation of each action. | Programme Manager (Lead), Programme Team (Support) | Action Plan Template, Project Management Tools
REPORTING
The evaluation approach sets the tone of the evaluation and influences the methods used
for data collection and analysis. Below are some of the possible approaches which can be
used:
• Participatory Evaluation: This approach involves stakeholders, particularly those
affected by the programme, in the evaluation process. The aim is to enhance the credibility
and usefulness of the evaluation by ensuring it addresses the concerns and perspectives
of these stakeholders. Participatory evaluation can be empowering and can contribute to
capacity building and sustainability.
• Utilisation-Focused Evaluation: This approach focuses on ensuring the evaluation is
useful and is actually used by stakeholders. It involves identifying the intended users of
the evaluation and their information needs at the beginning of the evaluation process and
then involving these users throughout the process to increase the likelihood that they will
find the evaluation findings credible and relevant and will use them in decision-making.
• Empowerment Evaluation: This approach helps communities assess their own programmes
in order to improve and empower themselves, emphasising capacity building and helping the
community learn about its programme.
• Gender and Equity-Focused Evaluation: This approach examines whether a programme is
reducing inequalities and promoting fairness among genders, asking whether all genders
have been able to access and benefit from the programme equally.
• Outcome Harvesting: This approach is useful in complex scenarios where the
relationship between cause and effect is not fully understood. It involves identifying what
has been changed (outcomes) and then working backward to determine whether and how
an intervention contributed to these changes.
• Theory-Based Evaluation: This approach involves a detailed examination and analysis
of why a programme works or doesn’t work. It includes an investigation of the underlying
theories of change and the assumptions that link the activities of the programme to its
expected outcomes.
• Blue Marble Evaluation: This relatively new approach to evaluation takes a global
perspective, appreciating the complexity, interconnections, and dynamism of today’s
world. Blue Marble Evaluators are guided by principles of interconnectedness, global
thinking, simultaneous attention to universality and uniqueness, and adoption of a worldly
gaze that embraces the global and the local.
The choice of evaluation type and approach should be guided by the purpose of the
evaluation, the questions it seeks to answer, and the needs and context of the stakeholders
involved. It’s important to note that the types and approaches are not mutually exclusive. An
evaluation might often combine elements of different types and approaches to best meet its
objectives.
Evaluations and reviews are essential for continuously monitoring and maintaining programme
quality while ensuring alignment with ForAfrika’s strategic objectives. Unlike external
evaluations which provide an independent and comprehensive assessment of programme
effectiveness, impact, and sustainability, internal evaluations and reviews offer a valuable
complement, enabling more regular, nimble checks on programme performance.
Internal evaluations are typically lighter exercises, requiring fewer resources than full-fledged
external evaluations. Conducted by DMEAL Staff within ForAfrika, these assessments provide
an opportunity for continuous learning and adaptation throughout the programme cycle.
Self-sufficiency in the context of ForAfrika's programmes refers to the level of autonomy and
resilience achieved by the people reached through our programmes in meeting their basic needs,
improving their livelihoods, and sustaining positive outcomes without continued external support.
Impact refers to the measurable positive or negative effects that result from the execution of
ForAfrika's programmes. ForAfrika primarily envisages impact manifesting as:
• Changes in the lives and well-being of targeted individuals, communities, and
ecosystems;
• Improvements in social, economic, and environmental conditions;
• Sustainable positive outcomes that endure beyond the duration of the intervention.
The concept of self-sufficiency implies that individuals and communities are able to access
and manage resources, knowledge, and skills in a way that allows them to meet their
immediate and strategic needs, create sustainable livelihoods, and maintain their well-being
over the long term, without depending on external assistance. Self-sufficiency
encompasses not only economic factors but also social, environmental, and cultural aspects
that enable individuals and communities to lead dignified, empowered, and sustainable lives,
taking into account local contexts, including cultural norms and behavioural changes.
Our annual impact measurement involves systematically evaluating the extent to which a
programme or intervention has achieved its intended outcomes and contributed to positive
changes in the lives of the target population.
Data collected through ‘blindfolded’8 interviews is then analysed using a thematic analysis
approach, which involves identifying patterns and themes in the data that relate to the
monitoring questions and objectives. This process allows for the identification of
commonalities and differences in experiences, as well as the exploration of causal
mechanisms and the influence of contextual factors on programme outcomes. By using QuIP,
the impact measurement process is enriched and provides a more comprehensive
understanding of the effects of ForAfrika’s programmes on the target communities.
7. Scoones, I. (1998) Sustainable Rural Livelihoods: A Framework for Analysis, IDS Working Paper 72, Brighton: IDS.
8. The blindfolding process in QuIP (Qualitative Impact Protocol) interviews is a key component of the methodology
that enhances the credibility and trustworthiness of the study findings. Blindfolding refers to withholding specific
information about the intervention, programme, or project being evaluated from the field monitoring officers who
conduct the interviews. The purpose of the blindfolding process is to minimise biases that may influence the way
interviewers ask questions, probe for details, or interpret the responses they receive from participants.
ACTIVITY TITLE: Programme Prioritisation for Evaluation
ACTIVITY DESCRIPTION: Prioritise programmes for evaluation based on budget, degree of innovation, and risk.
PERSON RESPONSIBLE: Country Director (Lead), DMEAL Officer, programme staff, community
TOOLS REQUIRED: Budget Reports, Programme documents, Risk Assessment

EVALUATION PLAN

ACTIVITY TITLE: Establishing the Evaluation Plan
ACTIVITY DESCRIPTION: Develop a comprehensive evaluation plan with clear objectives, evaluation questions, stakeholder engagement, evaluation design, budgeting, timeline, data collection and analysis plan, and reporting and dissemination plan.
PERSON RESPONSIBLE: DMEAL Officer (Lead), Programme Manager (Support)
TOOLS REQUIRED: Evaluation Plan Template
If possible, ask participants about their preferred types of feedback channels. These
questions could be integrated into a survey conducted at the design or start-up phase of the
project. Throughout implementation, regularly verify that the available channels are providing
useful and actionable feedback data from participants.
ForAfrika’s Safeguarding Policy guides operations to implement appropriate policies, procedures, and
practices that prevent harm to participants. This includes protection from sexual exploitation, abuse,
and harassment, and the protection of vulnerable children, adults, people with disabilities, and people of
diverse genders. The Safeguarding Policy applies to everyone involved in providing and receiving
humanitarian services throughout assistance delivery.
A clear referral pathway defining response protocols for the various requests, feedback, and complaints
from stakeholders and participants is a prerequisite. This enables appropriate and timely responses to
stakeholders.
TYPE OF CATEGORY: Request for information
PRIORITY LEVEL: Low or medium
EXPLANATION: An information request about the type of aid and services available at ForAfrika, the location or timing of a specific service, questions about targeting or registration criteria, eligibility criteria, etc.
RECORD: Yes – record into the Feedback Registry
RESPOND: Before the next programme interaction
REFER: If you don’t know the answer, refer to the protection actor responsible for information services

TYPE OF CATEGORY: Programmatic complaint – major dissatisfaction
PRIORITY LEVEL: High
EXPLANATION: Complaint about staff attitude, lack of access to aid distribution or service, lack of access to information, exclusion of a minority/vulnerable group, extortion of aid by a third party (with no ForAfrika involvement), refusal by ForAfrika staff to listen to or acknowledge a complaint, or inability of a client to reach the feedback hotline or any other mechanism
RECORD: Yes – record into the Feedback Registry
RESPOND: Provide status of resolution before the next programme interaction; conclude resolution in x months
REFER: Refer to the supervisor or programme manager, HR, or field coordinator for interpretation and action
External channels like police and community leaders can also be approached; however, this
does not guarantee that ForAfrika will receive and be able to address the complaint. The
decision to take the complaint outside the organisation should be based on the nature and
severity of the issue.
6.6 Reporting
High-quality reporting is an essential component of our stakeholder engagement. To
achieve this, country office teams need to make use of GSO’s report review process, based
on the tiering assigned to them.
Below is the tiering of country office teams as of 2024:
TIER | MEANING | COUNTRY OFFICES | GSO SUPPORT
Moreover, for those reports requiring review by GSO before submission to donor, the following
report workflow is applicable:
• 30 days to deadline: Development Unit (DU) sends reminders (or Salesforce auto
reminders)
• 20 days to deadline: Country Offices share draft report
• 10 days to deadline: GSO (DU, DMEAL, SME) return comments to Country Office
• 8 days to submission: Country Office submits second draft for review
• 2 days to submission: DU submits report to Affiliates/ Donor OR Country Office submits
report to donor as applicable
In order to ensure this workflow is functional, country office teams need to keep their grant
records and reporting schedules up to date in Salesforce, and to upload submitted reports
there and mark their status appropriately.
Country Directors:
• Provide overall leadership and direction for community accountability
• Allocate sufficient resources (staff, time, budget) for community accountability
mechanisms
• Review and sign off on major decisions related to community feedback
Programme Managers:
• Oversee design and implementation of feedback channels and community engagement
• Foster an Open and Supportive Environment: The first step in cultivating a learning
culture is creating an environment that encourages openness, collaboration, and
psychological safety. Staff should feel comfortable asking questions, sharing their
thoughts, and discussing their successes and failures. This requires strong leadership
commitment, inclusive communication practices, and respectful interpersonal dynamics.
For example, country offices might establish a “no-blame culture” where employees feel
safe expressing their opinions and learn from mistakes. If a community programme didn’t
achieve its targets, hold a review meeting in which all team members can discuss what
happened, without fear of being reprimanded.
• Celebrate Learning: Make learning a part of daily routines and celebrate it. Acknowledge
staff who take the initiative to learn something new or share their learnings with others.
This can be done through shout-outs in team meetings, annual awards ceremonies, and
features in the programme highlights and Postcards from the field.
• Learning as a Core Value: Learning is already embedded into our organisational values,
now it is our responsibility to make it present in work processes. Include it in staff
performance appraisals, meetings, and project management processes. Make it clear that
learning is not an optional activity but a core expectation for everyone in the organisation.
• Identify Lessons: After each programme, activity, or even significant meeting, take the
time to identify and document what was learned. This could be done in an “After-Action
Review” meeting, as described in the previous section.
Conduct Debriefs: Schedule a debriefing session at the close of each programme
milestone. These meetings should include all team members involved in the programme,
from design to execution. An example might include a meeting post-implementation of a
water sanitation programme to assess the accuracy of the needs assessment and context
mapping at the planning stage of the programme.
Use Structured Questions: Ask pointed questions such as, “What unexpected challenges
did we encounter and how did we adapt?” or “Which strategies led to the most significant
community impact?” This structured questioning can help unearth critical insights.
Include All Stakeholders: Ensure the involvement of community representatives in these
discussions. Their perspectives can bring to light unanticipated impacts or identify locally
relevant best practices.
• Catalogue Best Practices: Use Annex A - Programme Good Practices Report. This tool
allows other teams to learn from documented successes and potentially replicate them.
Identify Key Areas: Choose areas of significant success in the programme. For instance,
if a new method of community engagement led to higher participation rates, this method
should be highlighted.
Write Detailed Descriptions: Document these strategies clearly and comprehensively,
ensuring they can be easily replicated in different contexts. Include tangible details such
as the steps taken, resources needed, and challenges encountered.
• Share Lessons Broadly: Do not limit your sharing of lessons to internal channels. Share
your successes, failures, and learnings with partners, donors, and other stakeholders.
This could be done through blog posts, annual reports, or webinars.
Write Blog Posts: A blog post might detail the journey of a specific community
programme, highlighting the challenges faced, innovative solutions used, and the impact
achieved.
Include in Reports: Dedicate a section of your annual report to key learnings. Similarly,
when reporting to donors, emphasise what your team has learned and how it will shape
future programmes.
Organise Webinars: These could be targeted to specific audiences. For instance, a
webinar for other NGOs might focus on sharing best practices, while a webinar for
potential donors could highlight the organisation’s commitment to learning and improving.
• Leverage Technology: Utilise collaborative platforms and technologies to make the
sharing of lessons learned easier and more efficient. For instance, a shared online
document can be a useful way for teams to contribute their experiences and insights.
Digitise Key Resources: Using the Programmes SharePoint website, with documents
sorted into categories like “Best Practices”, “Programme Reports”, and “Community
Feedback”, helps make resources easily accessible to all staff.
Given the content of this DMEAL Framework, the following areas of skills would be important
to focus on for capacity building:
DMEAL Skills
• Design: These involve the ability to conceptualise and plan a project or programme
effectively. Key skills here include logical framework development, setting realistic and
measurable objectives, defining indicators, planning for data collection, and setting up
monitoring and evaluation mechanisms. Understanding the Theory of Change and how to
develop performance-based budgets also falls under design skills.
• Monitoring: This requires the ability to track and document the progress of a programme
towards its objectives. Skills involved include data collection, analysis, and interpretation,
using both quantitative and qualitative methods. Monitoring skills also involve
If done effectively, this process of capacity building leads to a more capable and engaged
team, more active and empowered community members, and more meaningful partnerships
with stakeholders. It will enhance the overall effectiveness and impact of our programmes.
ACTIVITY: Identification of Adaptations and Improvements
DESCRIPTION: Based on the learning, identify and agree on necessary programme adaptations and improvements.
PERSON RESPONSIBLE: Programme Quality Manager & DMEAL Staff
FREQUENCY: Quarterly
TOOLS REQUIRED: Meeting space, Improvement suggestions

ACTIVITY: Action Planning
DESCRIPTION: Develop an action plan outlining the changes to be made, responsibilities, required resources, and timelines.
PERSON RESPONSIBLE: Programme Quality Manager & DMEAL Staff
FREQUENCY: After each Reflection Session
TOOLS REQUIRED: Action plan template

ACTIVITY: Implementation of Action Plan
DESCRIPTION: Implement the action plan according to assigned roles, responsibilities, and timelines.
PERSON RESPONSIBLE: Assigned team members
FREQUENCY: As per Action Plan
TOOLS REQUIRED: As per Action Plan

ACTIVITY: Monitoring and Review
DESCRIPTION: After implementation, continue monitoring and conduct a review to assess the effectiveness of the adaptations.
PERSON RESPONSIBLE: Programme Manager & DMEAL Staff
FREQUENCY: Bi-annual
TOOLS REQUIRED: Monitoring data, Evaluation reports
Please use the Word template stored in the DMEAL SharePoint folder when creating your
report. The content below provides guidance, without using the recommended final
layout/style for this product.
Project/Programme:
Project/Programme Location:
Duration:
Target population:
Stakeholders: [involved in the project/programme at all levels including donors/government
departments at national and devolved tiers; community level]
ForAfrika personnel: [engaged in the project/programme]
ForAfrika [Country Office name]
Authors: [ForAfrika personnel writing (authors) of the project/programme model/best practice]
[month, year – when the document was prepared]
Executive Summary
• What is the purpose and scope of the project/programme?
• What are the key objectives, findings, and recommendations?
• Which practices were adopted, and why were they considered good or best practices?
• What results were achieved due to these practices?
• What recommendations can be made based on the project/programme’s experience?
1. Programming Framework
• What is the context of the project/programme within the organisation’s broader
programming framework?
• What is the project/programme theory of change?
• Where and when was the initiative implemented?
End notes:
1. Please select high-quality pictures for illustrating various sections of the document.
2. Please ensure credit is given for any material/content cited from other sources.
3. Please use diagrams wherever possible to make visual illustrations.
4. While documenting good or best practices, ensure the practice is highlighted, its impact is
demonstrated with evidence, and its alignment with the criteria for good or best practices is
shown. Also note its potential for replication and adaptation in different contexts.
Appendices
Terms of Reference
• The most relevant aspects of the economic, social, and political context for the object of
the evaluation are well and succinctly explained. (This can include strategies, policies,
goals, frameworks, and priorities at the international, national, and organisational
levels).
• The ToR spell out the national and international instruments as well as ForAfrika policies
on human rights, including child rights, and gender equality, and equity/inclusion that are
relevant to the object of the evaluation.
Question 2. Do the ToR include sufficient and relevant information on the object of the
evaluation?
• Overall, the object of the evaluation is briefly and clearly explained (including objective(s),
interventions, stakeholders, time period, budget, geographic scope, implementation
phase, and linkages to SDGs, ForAfrika outcome areas, and Country Office strategic
priorities).
• The logic model or the theory of change (ToC) is clearly presented. If not, the ToR specify
that the ToC will be reconstructed retroactively.
• Right holders and duty bearers are clearly identified by type and geographic location
where relevant.
• Key stakeholders, including ForAfrika, are clearly identified and their contribution to the
object of the evaluation is well described.
Question 3. Do the ToR specify the purpose of the evaluation, who will use it, and
how it will be used?
• The purpose of the evaluation identified in the ToR clearly states why the evaluation is
being done, including justification for why it is being done at this time.
• The ToR identify the primary intended users and intended uses of the evaluation.
Question 4. Do the ToR present clearly defined, relevant, and feasible objectives?
• The evaluation objectives clearly follow the overall purpose of the evaluation.
• It is clear from the ToR what type of an evaluation this is (formative, summative, mid-term,
ex-post, final, etc).
• Evaluation objectives clearly address gender, equity/inclusion, and child rights either
within objectives or as separate objectives, as appropriate.
Question 5. Do the ToR include a complete description of the scope of the
evaluation?
• The ToR clearly define what will and will not be covered in the evaluation (thematically,
chronologically, geographically), as well as the reasons for this scope (e.g., lack of access
to geographic areas for political or safety reasons at the time of the evaluation, lack of
data/evidence on elements of the intervention).
• The ToR clearly include gender, equity/inclusion, and child rights dimensions in the
evaluation scope, referring to Leave No one Behind.
Question 6. Do the ToR include a comprehensive and tailored set of evaluation
questions within the framework of the evaluation criteria?
• Evaluation questions and sub-questions are appropriate for meeting the objectives and
purpose of the evaluation and are aligned with the evaluation criteria.
• Any other relevant criteria are adequately addressed, including humanitarian criteria
(Coverage, Connectedness, Coordination, Protection, Security) in the case of
humanitarian programmes.
• The ToR include an assessment of relevant human rights, including child rights,
equity/inclusion (e.g., disability inclusion, Leave No one Behind), and gender equality
through the selection of the evaluation criteria and questions.
Question 7. Do the ToR specify the methods for data collection and analysis,
including information on the overall methodological design?
• Evaluability assessment: the ToR assess the availability and reliability of data (baseline,
indicator, targets, output and outcome data available through DMEAL) disaggregated by
gender, age, disability/socially excluded groups, etc.
• The overall methodological approach for the evaluation is explained, e.g., theory-based
evaluation, mixed methods, participatory (including vulnerable groups), utilisation-focused,
gender and human rights responsive, equity-focused. The evaluation design is suggested
(e.g. experimental, quasi-experimental, non-experimental).
• Data collection and analysis methods are adequately addressed (primary and secondary
data collection).
• The ToR specify that the evaluation will follow the ForAfrika Norms and Standards, as
well as ethical guidelines.
• The ToR specify an evaluation approach and data collection and analysis methods that
are human-rights based, including child-rights based and gender sensitive, and for
evaluation data to be disaggregated by sex, ethnicity, age, disability, etc.
• The overall methodology is appropriate to meet the objectives and scope of the evaluation
and to answer the evaluation questions.
Question 8. Do the ToR present complete information on the work plan and
evaluation management?
• The ToR state the list of products that will be delivered by the evaluation team.
• The ToR describe the structure of the evaluation report and links to the ForAfrika DMEAL
Standards.
• The ToR specify an appropriate length of products to be delivered. (An evaluation report
should be between 40-60 pages.)
• The ToR describe the key stages of the evaluation process (incl. meetings, consultations,
workshops with different groups of stakeholders, key points of interaction with a steering
committee, process for verification of findings, presentation of preliminary findings, etc.).
• The ToR define the level of expertise needed among the evaluation team on gender
equality and human rights, including child rights, equity/inclusion, and their responsibilities
in this regard, and call for a gender-balanced and culturally diverse team that makes use
of national/regional evaluation expertise.
• Roles and responsibilities of the evaluation team, ForAfrika CO, ForAfrika GSO, steering
committee, evaluation reference group, stakeholders are clearly stated.
• The ToR contain a section specifying that contractors are required to “clearly identify any
potential ethical issues and approaches, as well as the processes for ethical review and
oversight of the evaluation process in their proposal”.
• The ToR specify whether the evaluation will have to go through an ethical review board
based on the “Criteria for Ethical Review Checklist” annexed to the ToR.
• If relevant, the ToR explain how official ethical approvals will be received.
• The ToR describe basic steps of the evaluation quality assurance process (e.g., reference
groups, quality review of milestone documents, etc.).
• The ToR describe a proposed approach to dissemination and advocacy for the evaluation
findings and recommendations.
• The number of days allocated to each phase is realistic to produce high quality
deliverables, given the scope of the evaluation being planned.
• The style of the ToR is adequate (brief, to the point, logically structured and easy to
understand).
Question 1. Do the opening pages and introduction of the Inception Report contain all
the relevant information?
• The introduction contains a short description of the purpose and content of the IR, the key
activities undertaken for its preparation and its place in the evaluation process.
• The introduction highlights any emerging issues that have arisen during the inception
phase (if applicable).
• Basic elements in the opening pages are presented (evaluation title, country, years
covered by the evaluation, name(s) and/or organisation(s) of the evaluator(s), and
commissioning organisation on cover page, list of acronyms, table of contents, including
list of tables and figures).
Section B: Context and Description of the Object of the Evaluation (Weight 10%)
Question 2. Are the context and description of the object of the evaluation clearly
presented?
• Clear and relevant description of the context of the object of the evaluation (i.e. relevant
policy, socio-economic, political, cultural, power/privilege, institutional, international
factors) and how context relates to the implementation of the object of the evaluation.
• Linkages are drawn to the SDGs and relevant targets and indicators for the area being
evaluated.
• The object of the evaluation is briefly and clearly explained (its objectives, stakeholders
involved and their roles, contributions, and stakes, right holders/participants and their
status and needs, time period, budget, geographic scope, phase of implementation).
• The description of the object of the evaluation makes adequate references to human
rights, gender, and equity/inclusion.
Question 3. Are the purpose, objectives and scope of the evaluation clearly presented?
• The evaluation purpose is clearly presented, including the rationale behind the evaluation,
its intended use and what this use is expected to achieve, its primary intended users and
how they stand to gain or lose from the results of the evaluation.
• The evaluation objectives are clearly presented, with reference to any changes made to
the objectives included in the ToR.
• The scope of the evaluation is clearly defined (including what will and will not be covered,
the geographic location, period, thematic field(s) of intervention/interventions to be
evaluated, and levels (regional, country, municipal)). Changes from the ToR are clearly
indicated and justified.
• All of the evaluation criteria and questions are listed as per the ToR. If criteria/questions
differ from the ToR, the Inception Report justifies the changes; e.g., efforts to prioritise
questions and reduce the number of questions to address should be noted in the report.
Question 5. Are evaluation findings derived from the conscientious, explicit and
judicious use of the best available, objective, reliable and valid data and by accurate
quantitative and qualitative analysis of evidence?
• The Inception Report links the evaluation criteria and questions to the chosen
methodology through an evaluation matrix that includes indicators, benchmarks,
assumptions and/or other processes from which the analysis can be based and
conclusions drawn, referring to the Convention on the Rights of the Child (CRC), Leave
No one Behind (LNOB), and disability inclusion as appropriate.
• Indicators, data sources, and data collection methods are identified for each question.
• The indicators chosen are specific, easily measurable, and relevant to the corresponding
evaluation questions and ToC.
• The evaluation questions and indicators include reference to human rights, gender, and
equity dimensions.
• Clear and complete description of a relevant and robust methodological design and set of
methods that are suitable for the evaluation’s purpose, objectives, and scope. Any
adaptations to the methods proposed in the ToR are explained and justified.
• Key data sources are clearly presented and appropriate (includes list of documents for
desk review, the group of stakeholders to be interviewed, available databases, data gaps),
and appear comprehensive and reliable.
• Methodology allows for drawing causal connections between outputs and expected
outcomes.
• The sampling methods described for qualitative data collection are appropriate and
adequate (includes ALL of the following: sample size, the geographic area(s), specific
populations, sampled site/country visits, the rationale/criteria for selection, how
participants/interviewees will be selected, and criteria for selection of countries to be
visited/studied (if applicable)).
• The sampling methods described for quantitative data collection are appropriate and
adequate (includes ALL of the following: sample size, the geographic area(s), specific
populations, sampled site/country visits, the rationale/criteria for selection, how
participants/interviewees will be selected, and criteria for selection of countries to be
visited/studied (if applicable)).
• The data collection tools are linked to the specific evaluation questions (the way in which
the tools are designed should facilitate capturing the information needed to answer the
evaluation questions).
• Clear and complete description of evaluation limitations, potential biases and constraints
faced by the evaluation team, and mitigation strategies to be used.
• Description of ethical safeguards for participants appropriate for the issues described
(respect for dignity and diversity, right to self-determination, fair representation,
compliance with codes for vulnerable groups (i.e. adherence to ethical principles and
procedure, do no harm, confidentiality, and data collection)). For those cases where the
evaluation will involve interviewing children, explicit reference is made to procedures for
ethical research involving children.
• The evaluation phases are clearly described, including a timeline with associated
activities, number of days for each team member, locations and deliverables.
• The roles and responsibilities of each member of the evaluation team are clearly
described.
• If the evaluation requires official ethical approval, the process to be followed is clearly
described.
• The logistics of carrying out the evaluation are discussed (e.g. assistance required from
ForAfrika for interview arrangements, field visits, etc.) and the expected roles and
responsibilities from the commissioning organisation(s) or oversight committee are
adequately explained.
• The following elements are annexed to the Inception Report: logic model/ToC, evaluation
matrix, bibliography, data collection tools (draft interview protocols, survey, case study
formats), list(s) of people to be interviewed, if applicable and available ethical review
board approval form and/or informed consent form.
• Structure is easy to identify and navigate (for instance, with numbered sections, clear titles
and sub-titles, well formatted).
• Clear and relevant description of intended rights holders (participants) and duty bearers
(state and non-state actors with responsibilities regarding the object of the evaluation) by
type (i.e., institutions/organisations; communities; individuals…), by geographic
location(s) (i.e., urban, rural, particular neighbourhoods, town/cites, sub-regions…) and in
terms of numbers reached with disaggregation by gender, age, disability ... (as
appropriate to the purpose of the evaluation).
• Clear and relevant description of the context of the object of the evaluation (i.e. relevant
policy, socio-economic, political, cultural, power/privilege, institutional, international
factors) and how context relates to the implementation of the object of the evaluation.
• Linkages are drawn to the SDGs and relevant targets and indicators for the area being
evaluated.
• Clear and relevant description (where appropriate) of the status and needs of the right
holders/participants of the intervention.
• Specific identification of how the evaluation is intended to be used and what this use is
expected to achieve.
• Clear and complete description of what the evaluation seeks to achieve by the end of the
process, with reference to any changes made to the objectives included in the ToR and/or
in the inception report.
• Clear and relevant description of the scope of the evaluation: what will and will not be
covered (thematically, chronologically, geographically with key terms defined), as well as
the reasons for this scope (e.g., specifications by the ToR and/or inception report, lack of
access to particular geographic areas for political or safety reasons at the time of the
evaluation, lack of data/evidence on particular elements of the intervention).
• Clear and complete description of the intervention’s intended results or of the parts of the
results chain that are applicable to, or are being tested by, the evaluation.
• Causal relationship between outputs and outcomes is presented in narrative and graphic
form (e.g., results chain, logic model, theory of change, evaluation matrix).
• For theory-based evaluations, the theory of change or results framework is assessed, and
if requested in the ToR, it is reformulated/improved by the evaluators.
Question 7. Does the evaluation use questions and a relevant list of evaluation criteria
that are explicitly justified as appropriate for the purpose of the evaluation?
Not all OECD/DAC criteria are relevant to all evaluation objectives and scopes. Standard
OECD DAC Criteria include Relevance; Effectiveness; Efficiency; Sustainability; and Impact.
Evaluations should also consider equity, gender and human rights (these can be
mainstreamed into other criteria). Humanitarian evaluations should consider Coverage;
Connectedness; Coordination; Protection; and Security.
• Evaluation questions and sub-questions are appropriate for meeting the objectives and
purpose of the evaluation and are aligned with the evaluation criteria.
• In addition to the questions and sub-questions, the evaluation matrix includes indicators,
benchmarks, assumptions and/or other processes from which the analysis can be based
and conclusions drawn.
Question 8. Does the report specify methods for data collection, analysis, and
sampling?
• Clear and complete description of a relevant and robust methodological design and set of
data collection methods that are suitable for the evaluation’s purpose, objectives, and
scope.
• Sampling strategy is provided, describing how diverse perspectives were captured (or if
not, providing reasons for this).
• Methodology allows for drawing causal connections between outputs and expected
outcomes.
• Clear and complete description of evaluation limitations, biases and constraints faced by
the evaluation team and mitigation strategies used.
• Description of ethical safeguards for participants appropriate for the issues described: respect for dignity and diversity, right to self-determination, fair representation, and compliance with codes for vulnerable groups (i.e., adherence to ethical principles and procedures, do no harm, and confidentiality in data collection).
• If the evaluation required official ethical approval and informed consent, both forms are included as annexes in the draft final evaluation report.
Question 10. Do the findings clearly address all evaluation objectives and scope?
• If feasible and relevant to the purpose, a cost analysis is clearly presented (how costs compare to similar interventions or standards, and the most efficient way to achieve the expected results); if not feasible, an explanation is provided.
• Explicit use of the intervention’s results framework/ToC in the formulation of the findings.
Question 11. Are evaluation findings derived from the conscientious, explicit and judicious use of the best available, objective, reliable and valid data, and from accurate quantitative and qualitative analysis of evidence?
• Evaluation uses credible forms of qualitative and quantitative data, presenting both output
and outcome-level data as relevant to the evaluation framework. Triangulation is evident
through the use of multiple data sources.
Question 12. Does the evaluation assess and use the intervention’s Results Based
Management elements?
• Conclusions are clearly formulated and reflect the purpose and objectives of the evaluation. They are sufficiently forward-looking (if a formative evaluation, or if implementation is expected to continue or to have an additional phase).
• Conclusions are derived appropriately from findings and present a picture of the strengths
and limitations of the intervention that adds insight and analysis beyond the findings.
Question 14. Are logical and informative lessons learned identified? [N/A if lessons are
not presented and not requested in ToR]
• Identified lessons stem logically from the findings and have wider applicability and relevance beyond the object of the evaluation.
• Lessons are clearly and concisely presented, yet have sufficient detail to be useful for the intended audience.
• Recommendations align with the evaluation purpose, are clearly formulated and logically
derived from the findings and/or conclusions.
• Recommendations are useful and actionable for primary intended users and uses
(relevant to the intervention); guidance is given for implementation, as appropriate.
Question 17. Does the evaluation report include all relevant information?
The report includes: the name of the evaluated object; the timeframe of the object evaluated; the date of the report; the location of the evaluated object; the name(s) and/or organisation(s) of the evaluator(s); the name of the organisation commissioning the evaluation; a table of contents (including, as relevant, tables, graphs, figures, and annexes); a list of acronyms/abbreviations; and page numbers.
• Structure is easy to identify and navigate (for instance, numbered sections, clear titles and subtitles, and consistent formatting).
• Report is easy to understand (written in an accessible way for the intended audience) and generally free from grammar, spelling and punctuation errors.
• Frequent use of visual aids (such as infographics, maps, tables, figures, photos) to convey
key information. These are clearly presented, labelled, and referenced in text.
• Report is of reasonable length; it does not exceed number of pages that may be specified
in ToR.
Question 19. Did the evaluation design and style consider the incorporation of ForAfrika’s commitment to a human rights-based approach to programming, to gender equality, and to equity?
• Reference and use of rights-based framework, and/or CRC, and/or CCC, and/or CEDAW
and/or other rights related benchmarks in the design of the evaluation.
• Clear description of the level of participation of key rights holders and duty bearers in the
conduct of the evaluation, including in the development of recommendations, (for
example, a reference group is established, stakeholders are involved as informants or in
data gathering).
• Stylistic evidence of the inclusion of these considerations can include using human rights language; gender-sensitive and child-sensitive writing; and disaggregating data by gender, age, disability, and socially excluded groups.
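To illustrate the disaggregation described above, the following is a minimal sketch using pandas. The respondent records, column names, and scores here are entirely hypothetical and are only meant to show the mechanics of breaking a result down by gender, age group, and disability status before reporting it.

```python
import pandas as pd

# Hypothetical respondent-level records from an evaluation survey.
records = pd.DataFrame({
    "gender":     ["F", "M", "F", "M", "F", "M"],
    "age_group":  ["<18", "<18", "18-59", "18-59", "60+", "60+"],
    "disability": [True, False, False, True, False, False],
    "score":      [4, 3, 5, 2, 4, 3],
})

# Mean score disaggregated by gender and age group.
by_group = records.groupby(["gender", "age_group"])["score"].mean()
print(by_group)

# Mean score for respondents with and without a disability.
by_disability = records.groupby("disability")["score"].mean()
print(by_disability)
```

The same pattern extends to any socially excluded group recorded in the data: add the relevant column to the `groupby` call, and report each subgroup's result alongside the aggregate.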
Question 20. Does the evaluation assess the extent to which the implementation of the
intervention addressed equity?
• Evaluation assesses the extent to which the implementation of the intervention addresses child rights and Leave No One Behind commitments (gender and other excluded and marginalised groups), and whether it is disability inclusive.
• An executive summary is included that is appropriately concise and detailed for key users (maximum of five pages unless otherwise specified in the ToR).
• Includes all necessary elements as per the ToR: overview of the object of the evaluation; evaluation purpose, objectives and intended audience; evaluation methodology; key conclusions on findings; lessons learned, if requested; and key recommendations.
• Includes all significant information needed to understand the object of the evaluation and the evaluation itself, and does not introduce anything not presented in the rest of the report.
Dear Participant,
Thank you for participating in this survey to assess your skills in the Design, Monitoring,
Evaluation, Accountability, and Learning (DMEAL) framework. Your responses will help us
identify strengths and areas for improvement to enhance our programme effectiveness. The
survey covers DMEAL skills, including design, monitoring, evaluation, accountability, and
learning, as well as other relevant areas such as facilitation, data management, project
management, communication, capacity building, adaptive management, ethics, and
technology.
Your honest feedback will be kept confidential and used for internal assessment and capacity
building purposes.
Let’s begin the survey!
Most of the questions below will use a scale of 1-5 for you to assess your skill level on each
competency.
• 1: Very Low – The individual has minimal knowledge or skills in the particular area.
• 2: Low – The individual has some basic knowledge or skills but requires significant
improvement and guidance.
• 3: Moderate – The individual possesses satisfactory knowledge or skills and can perform
tasks with some level of independence.
• 4: High – The individual has a good level of knowledge or skills and can perform tasks
proficiently with minimal supervision.
• 5: Very High – The individual is highly knowledgeable or skilled and can perform tasks
expertly with a high level of independence.
Identification
• What is your name?
• In what office do you work?
• What is your current position?
• If you selected “Other” in the previous question, please specify below:
Design
Using the 1-5 scale described above, please rate your proficiency in the following programme design skills:
• Proficiency in logical framework development. (Logical framework development involves
creating a structured framework that outlines the programme’s goals, objectives,
activities, and indicators.)
• Ability to set realistic and measurable objectives. (Setting objectives that are specific, measurable, achievable, relevant, and time-bound.)
• Understanding of the Theory of Change. (The ToC explains how and why a programme’s activities will lead to the desired outcomes and impact.)
Monitoring
Using the 1-5 scale described above, please rate your proficiency in the following programme monitoring skills:
• Ability to collect and analyse quantitative data. (Collecting and analysing numerical data
using appropriate statistical methods.)
• Ability to collect and analyse qualitative data. (Collecting and analysing non-numerical
data, such as interviews or observations.)
• Proficiency in using monitoring data to identify issues early. (Using monitoring data to
detect challenges or deviations from the programme’s intended progress.)
Evaluation
Using the 1-5 scale described above, please rate your proficiency in the following programme evaluation skills:
• Ability to develop an evaluation plan. (Creating a comprehensive plan for assessing the
effectiveness, impact, and sustainability of a programme.)
• Competence in selecting appropriate evaluation methods and tools. (Choosing the most
suitable evaluation approaches and techniques based on the programme’s objectives and
available resources.)
• Ability to interpret findings and draw conclusions. (Making sense of evaluation results to
understand the programme’s achievements and areas for improvement.)
Accountability
Using the 1-5 scale described above, please rate your proficiency in the following programme accountability skills:
• Ability to establish feedback mechanisms. (Creating channels for stakeholders to provide
feedback and express concerns.)
Learning
Using the 1-5 scale described above, please rate your proficiency in the following programme learning skills:
• Ability to analyse and interpret data for learning purposes. (Examining data to extract
meaningful insights and lessons.)
• Ability to apply insights to programme design and implementation. (Using lessons learned
to inform decision-making and enhance programme effectiveness.)
Support Skills
Facilitation and Engagement
Using the 1-5 scale described above, please rate your proficiency in the following facilitation and engagement skills:
• Ability to facilitate group discussions effectively. (Guiding group discussions to ensure
active participation and meaningful outcomes.)
• Coaching and mentoring skills. (Guiding and supporting individuals to improve their
performance and professional growth.)
Please use this space to provide any additional comments, suggestions, or feedback you may
have regarding the DMEAL skills assessment, or any other aspect related to programme
effectiveness and capacity building.
Logframe template columns:
# | Result Description | Indicators | Means of Verification | Q1 Target | Q2 Target | Q3 Target | Q4 Target | Y1 Target | Assumptions

Project name:
Project code:

Roles and responsibilities template columns:
Role | Responsibilities

Risk register template columns:
Risk | Likelihood | Impact | Mitigation Strategy