Foundation Monitoring and Evaluation
Development Module
MONITORING AND EVALUATION
Foundation Level
2018
MONITORING AND EVALUATION – FOUNDATION LEVEL
1 INTRODUCTION
The purpose of this module is to provide introductory information about monitoring and
evaluation (M&E), including the purpose, application of M&E frameworks, and key issues
in education M&E. It provides a foundation to engage in this topic and apply advice from
staff with operational or expert levels of knowledge in education M&E.
Monitoring and evaluation can be used for a wide range of purposes, including tracking
expenditure, revenues, staffing levels, and goods and services produced. M&E is a key
element of development assistance, to understand and track mutual contributions to a
partnership. This is defined in DFAT’s Aid Programming Guide.
Importantly, M&E needs to be considered and defined before the start of any activity so
that it can provide the evidence required to assess program performance.
Key guidelines for developing M&E are provided in the DFAT Monitoring and Evaluation
Standards.
Sources: DFAT 2017a; DFAT 2017b.
Purpose of M&E
Monitoring and evaluation is an essential tool of management, extending to almost every
aspect of public sector activity, including development. There are multiple purposes of
M&E. It provides a basis for accountability to stakeholders. When reported clearly, M&E
processes and outcomes help identify shared learning about a range of areas, including
good practice, effective strategies and tools, and information about specific issues. M&E
supports well-informed management through evidence-based decision making. All donors,
bilateral and multilateral, conduct a large array of performance assessments at all stages of
project or program cycles as part of their ongoing commitment to M&E. Donors also tend
to align M&E to higher level, global commitments.
The Sustainable Development Goals (SDGs) and Education for All (EFA) Goals are probably
the best-known M&E mechanisms in development. The SDG and EFA indicators specify
time-bound targets for improving social and economic conditions in developing countries.
SDG 4 sets out the goal to ensure inclusive and quality education for all and promote
lifelong learning. Specific indicators of enrolment and primary completion are evaluated to
assess progress towards that goal.
The Australian aid program uses M&E to underpin its overall policy setting. Making
Performance Count: Enhancing the Accountability and Effectiveness of Australian Aid
articulates the high-level priorities, broad programs and specific investments. This policy
directive provides a credible and effective system for overseeing the performance of the
Australian aid effort.
Sources: United Nations 2017; DFAT 2014.
A theory of change defines the sequence of elements required to achieve the program’s
goal and objectives. It is usually presented visually as the program logic. The theory of
change is an important determinant of M&E. It sets out the hierarchy of inputs and
intended outputs and outcomes including the links to the higher-level intentions, all of
which provide a measurement frame.
Activities involve the processes of management and support. Outputs are the tangible
products of the activities that are within the control of the program to deliver. Outcomes
describe an end state, how things are, rather than how they are achieved.
Importantly, for education programs, the theory of change will usually seek to identify
the links between activities and their associated outcomes. It is generally assumed
that activities will contribute to outcomes, which are also influenced by a range of other
factors.
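The hierarchy described above can be sketched as a simple data structure. The program, activities and figures below are hypothetical, chosen only to illustrate the input → activity → output → outcome ordering:

```python
# Minimal sketch of a program logic (theory of change) for a
# hypothetical teacher-training activity. All names and figures
# are illustrative, not drawn from any real program.
program_logic = {
    "inputs": ["funding", "training materials", "trainers"],
    "activities": ["deliver in-service teacher training workshops"],
    # Outputs are tangible products within the program's control:
    "outputs": ["1,200 teachers trained in literacy instruction"],
    # Outcomes describe an end state, influenced by other factors too:
    "outcomes": ["improved classroom teaching practice"],
    "goal": "improved student literacy",
}

# Walk the chain from inputs up to outcomes to show the measurement frame.
for level in ("inputs", "activities", "outputs", "outcomes"):
    print(level, "->", program_logic[level])
```

Each level of this chain is a place where indicators can be attached, which is what makes the theory of change a measurement frame for M&E.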
An M&E framework presents the desired goals, results and/or impacts to be achieved and
establishes realistic measures, called indicators, against these. It presents the logical
ordering of inputs, activities, indicators, targets, outcomes and impacts as detailed in the
theory of change. Increasingly, M&E frameworks are being referred to as Monitoring,
Evaluation and Learning (MEL) Frameworks.
The M&E framework provides detail on how the evidence of success will be
assessed for each evaluation question and theory of change element, with a corresponding
means of verification. The M&E framework will usually also include information on:
baseline data; M&E activity reporting timeframes; relevant data sources; data
disaggregation; and responsibility for data collection. Performance indicators are usually
disaggregated by gender, social inclusion status and other variables to provide evaluative
insights into inclusion.
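One row of such a framework can be sketched as a structured record. The indicator, baseline, target and data source below are hypothetical, intended only to show the fields a framework typically records for each indicator:

```python
from dataclasses import dataclass

@dataclass
class IndicatorRecord:
    """One row of a hypothetical M&E framework (illustrative only)."""
    indicator: str        # the measure itself
    baseline: float       # value at program start
    target: float         # value sought by program end
    data_source: str      # means of verification
    disaggregation: list  # e.g. gender, disability status
    responsible: str      # who collects the data
    reporting: str        # reporting timeframe

example = IndicatorRecord(
    indicator="Net enrolment rate, primary (%)",
    baseline=78.0,
    target=90.0,
    data_source="Partner government EMIS annual school census",
    disaggregation=["gender", "disability status", "province"],
    responsible="Program M&E officer",
    reporting="Annual",
)

print(example.indicator, example.baseline, "->", example.target)
```

Note that the hypothetical data source is a partner government system, reflecting the advice below that indicator data should be aligned to, or collected within, partner systems wherever possible.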
For an example of an education program M&E framework see Ten steps to a results-based
monitoring and evaluation system: a handbook for development practitioners.
Source: Kusek & Rist 2004.
Program M&E should be agreed with partners and should reflect the planning cycle of the
partner country. Importantly, wherever possible data on indicators should be aligned to, if
not collected within, partner government data systems.
Typically, M&E for education projects will include common approaches to understanding and
comparing the general level of participation in education and the capacity of primary education.
The key indicators are the gross enrolment rate (enrolment at a given level of education,
regardless of age, expressed as a percentage of the population of official school age for
that level), the net enrolment rate (enrolment of the official school-age group only,
expressed as a percentage of that same population) and assessments of educational access.
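The two enrolment rates follow the standard UNESCO Institute for Statistics definitions (see the UIS glossary in the references). A minimal calculation, using hypothetical figures:

```python
def gross_enrolment_rate(total_enrolled: int, official_age_population: int) -> float:
    """Enrolment at a given level, regardless of age, as a percentage of the
    population of official school age for that level. Can exceed 100 when
    over-age or under-age pupils are enrolled."""
    return 100.0 * total_enrolled / official_age_population

def net_enrolment_rate(enrolled_of_official_age: int, official_age_population: int) -> float:
    """Enrolment of the official-age group only, as a percentage of the same
    population; never exceeds 100."""
    return 100.0 * enrolled_of_official_age / official_age_population

# Hypothetical figures: 110,000 pupils enrolled in primary school, of whom
# 95,000 are of official primary age, in a population of 100,000
# primary-age children.
ger = gross_enrolment_rate(110_000, 100_000)  # 110.0: over-age pupils push GER past 100
ner = net_enrolment_rate(95_000, 100_000)     # 95.0
print(f"GER {ger:.1f}%, NER {ner:.1f}%")
```

The gap between the two rates is itself informative: a GER well above the NER suggests many over-age or under-age pupils in the system.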
There are five key stages of education program activity when it is important to carry out
evaluation:
2. In the design stage – to ensure objectives are clear and baseline data is collected. It is
important to record evaluation questions that emerge during the design.
For more detail on the Australian aid program’s approach to M&E, see the Strategy for
Australia’s Aid Investments in Education 2015–2020. The strategy has specific implications
for evaluation. DFAT’s Performance Assessment Note can also provide additional
information.
Sources: DFAT 2017a; DFAT 2015.
The DAC Principles are perhaps the most important, and longstanding, definitions in the
field of development M&E. Those that are used in the Australian aid program are:
Relevance: the extent to which the aid activity is suited to the priorities and
policies of the target group, recipient and development partner. In evaluating the
relevance of a program or a project, it is useful to ask questions such as: To what
extent are the objectives of the program still valid? Are the activities and outputs
of the program consistent with the overall goal and the attainment of its
objectives?
Effectiveness: a measure of the extent to which an aid activity attains its
objectives. To what extent were the objectives achieved or are likely to be
achieved? What were the major factors influencing the achievement or non-
achievement of the objectives?
Alignment with key policy priorities: whether the aid activity is aligned with policy
priorities in disability, indigenous peoples and/or ethnic minorities, climate change
and disasters, private sector, and innovation.
The results-based approach explicitly incorporates strategic priorities into evaluation. This
enables assessment of how a program's expenditure and inputs contribute to achieving desired
outcomes. Results-based M&E focuses on outcomes and impact. As such, the Australian
aid program asks whether programs or policies have produced their intended results.
The lesson has gradually been learned that increasing enrolments is not equivalent to
improvements in learning. There has been movement away from measuring simple
enrolments to measuring primary school completion, academic achievement and ability to
progress to further study and ultimately employment. The focus on implementation
(inputs leading to outputs) is changing to a results-oriented approach, with an emphasis on
outcomes.
An M&E plan, based on the 2017 DFAT M&E Standards, establishes a clear way to define
M&E criteria, processes, outputs, timeframes, roles and responsibilities at the outset for a
well-managed program or activity.
Those responsible for M&E should assert themselves at the commencement of a program,
ensure that the measures and processes they are using are understood and agreed, and
support them with reliable data by accessing or creating relevant data sources.
M&E specialists should also see it as part of their roles to, where needed, strengthen the
capacities of local staff to build their M&E skills, particularly in data verification and analysis.
Source: DFAT 2017b.
As mentioned earlier, program designs and theories of change do not generally seek to
identify the causal relationships necessary to establish attribution (i.e. this input caused
that outcome). Instead activities are linked to outcomes, to establish their contribution to
a positive change (i.e. this activity, along with several others, contributed to that
outcome).
The Australian aid program can rarely claim that a given activity exclusively caused an
outcome (attribution). Rather, investments typically contribute to outcomes
(contribution).
Assessment questions
Answer the following questions by ticking ‘True’ or ‘False’. Once you have selected your
answers to all the questions, turn the page to ‘The correct answers are...’ to check the
accuracy of your answers.
Question 1
The DAC Principles have changed since they were developed in 1991.
Question 2
An M&E system can be described as a performance assessment framework.
Question 3
We do not need indicators at every level of monitoring and evaluation.
Question 4
An M&E system should be designed and built into an aid activity from the very beginning.
Question 5
Support from the Australian aid program shows its contribution to outcomes.
The correct answers are...
Question 1
The DAC Principles have changed since they were developed in 1991.
This statement is false. The DAC Principles are largely unchanged, but the way they are
applied has changed.
Question 2
An M&E system can be described as a performance assessment framework.
Question 3
We do not need indicators at every level of monitoring and evaluation.
This statement is false. We do need indicators at every level of M&E, although we should
be careful to select a few good indicators, rather than having too many. Indicators are at
the heart of M&E to measure what we are doing and to tell us whether we are on track to
achieve our goals.
Question 4
An M&E system should be designed and built into an aid activity from the very beginning.
This statement is true. If the necessary M&E elements are not incorporated at the outset,
such as baseline data, it will be very difficult to monitor progress or evaluate the program
at the end.
Question 5
Support from the Australian aid program shows its contribution to outcomes.
This statement is true. Support from the Australian aid program usually does not claim
that a given activity exclusively causes an outcome (attribution). Australian aid program
support typically contributes to outcomes (contribution).
REFERENCES
Department of Foreign Affairs and Trade (DFAT) 2014, Making performance count:
Enhancing the accountability and effectiveness of Australian aid, June, DFAT,
http://dfat.gov.au/about-us/publications/Documents/framework-making-performance-count.pdf
Department of Foreign Affairs and Trade (DFAT) 2015, Strategy for Australia’s aid
investments in education 2015–2020, September, DFAT,
http://dfat.gov.au/about-us/publications/Documents/strategy-for-australias-aid-investments-in-education-2015-2020.pdf
Kusek J & Rist R 2004, Ten steps to a results-based monitoring and evaluation system: A
handbook for development practitioners, World Bank,
https://openknowledge.worldbank.org/handle/10986/14926
Reid, K, Kleinhenz, E & Australian Council for Educational Research 2015, Supporting
teacher development: Literature review, DFAT, https://dfat.gov.au/aid/how-we-measure-
performance/ode/Documents/supporting-teacher-development-literature-review.pdf
UNESCO Institute for Statistics (UIS) 2017a, ‘Gross enrolment rate’, Glossary,
http://uis.unesco.org/en/glossary-term/gross-enrolment-ratio
UNESCO Institute for Statistics (UIS) 2017c, ‘Liberia: Participation in education’,
Education and literacy, data for the Sustainable Development Goals,
http://uis.unesco.org/en/country/lr
United Nations 2017, Goal 4: Ensure inclusive and quality education for all and promote
lifelong learning, Sustainable Development Goals, UN,
http://www.un.org/sustainabledevelopment/education/