
Education Learning and

Development Module

MONITORING AND
EVALUATION
Foundation Level
2018

CONTENTS
Acronyms
1 Introduction
2 Monitoring and evaluation: what do they mean?
3 Program logic/theory of change
4 Monitoring and evaluation frameworks
5 What should we monitor in education programs?
6 What should we evaluate in education programs?
7 The ‘DAC Principles’
8 Issues in education monitoring and evaluation
9 Education program evaluations
10 Summary: being ‘aware’ of monitoring and evaluation
11 Test your knowledge
References and links

ACRONYMS

DAC Development Assistance Committee (OECD)

DFAT Australian Government Department of Foreign Affairs and Trade

EFA Education for All

GER gross enrolment rate

JCSEE Joint Committee on Standards for Educational Evaluation

M&E monitoring and evaluation

MEL monitoring, evaluation and learning

NER net enrolment rate

NIR net intake rate

ODE Office of Development Effectiveness

OECD Organisation for Economic Co-operation and Development

SDGs Sustainable Development Goals

UIS UNESCO Institute for Statistics

UNESCO United Nations Educational, Scientific and Cultural Organization

1 INTRODUCTION
The purpose of this module is to provide introductory information about monitoring and
evaluation (M&E), including the purpose, application of M&E frameworks, and key issues
in education M&E. It provides a foundation to engage in this topic and apply advice from
staff with operational or expert levels of knowledge in education M&E.

2 MONITORING AND EVALUATION: WHAT DO THEY MEAN?

Monitoring and evaluation


Monitoring is the regular collection and analysis of information
to provide indicators of progress towards objectives. It includes
monitoring inputs, activities, outputs and progress towards
outcomes. Monitoring answers the question: ‘What is going on?’

Evaluation is the assessment of a planned, ongoing or completed activity to determine the achievement of objectives and to test the underlying theory of change assumptions. Evaluation answers the question: ‘What happened?’

Applying M&E practices


Monitoring and evaluation have a complementary relationship. Monitoring gives
information on the status of a policy, program, or project at any given time relative to
respective targets and outcomes. Evaluation gives evidence of why targets and outcomes
have (or have not) been achieved.

Monitoring and evaluation can be used for a wide range of purposes, including tracking expenditure, revenues, staffing levels, and goods and services produced. M&E is a key element of development assistance, used to understand and track mutual contributions to a partnership, as defined in DFAT’s Aid Programming Guide.

Importantly, M&E needs to be considered and defined before the start of any activity so that it can provide the evidence required to make assessments of program performance. Key guidelines for developing M&E are provided in the DFAT Monitoring and Evaluation Standards.
Sources: DFAT 2017a; DFAT 2017b.

Purpose of M&E
Monitoring and evaluation is an essential tool of management, extending to almost every
aspect of public sector activity, including development. There are multiple purposes of
M&E. It provides a basis for accountability to stakeholders. When reported clearly, M&E
processes and outcomes help identify shared learning about a range of areas, including
good practice, effective strategies and tools, and information about specific issues. M&E
supports well-informed management through evidence-based decision making. All donors,
bilateral and multilateral, conduct a large array of performance assessments at all stages of
project or program cycles as part of their ongoing commitment to M&E. Donors also tend
to align M&E to higher level, global commitments.

The Sustainable Development Goals (SDGs) and Education for All (EFA) goals are probably the best-known M&E mechanisms in development. The SDG and EFA frameworks specify time-bound goals and indicators for improving social and economic conditions in developing countries.
SDG 4 sets out the goal to ensure inclusive and quality education for all and promote
lifelong learning. Specific indicators of enrolment and primary completion are evaluated to
assess progress towards that goal.

The Australian aid program uses M&E to underpin its overall policy setting. Making
Performance Count: Enhancing the Accountability and Effectiveness of Australian Aid
articulates the high-level priorities, broad programs and specific investments. This policy
directive provides a credible and effective system for overseeing the performance of the
Australian aid effort.
Sources: United Nations 2017; DFAT 2014.

3 PROGRAM LOGIC/THEORY OF CHANGE


Developing a theory of change
Monitoring and evaluation is applied differently in different contexts and investments; however, most DFAT M&E is based on an evidence-based theory of change.

A theory of change defines the sequence of elements required to achieve the program’s goal and objectives. It is usually presented visually as the program logic. The theory of change is an important determinant of M&E. It sets out the hierarchy of inputs and intended outputs and outcomes, including links to higher-level intentions, all of which provide a measurement frame.

Activities involve the processes of management and support. Outputs are the tangible
products of the activities that are within the control of the program to deliver. Outcomes
describe an end state, how things are, rather than how they are achieved.
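
To make these elements concrete, the following minimal sketch (illustrative Python; the activities, outputs and figures are hypothetical, not drawn from DFAT guidance) represents one strand of a program logic as a chain from activities to outputs to an outcome:

# A minimal, hypothetical program logic strand for an education program.
# Each element sits one level higher in the results hierarchy.
program_logic = {
    "activities": [
        "Deliver in-service training to primary teachers",
        "Distribute early-grade reading textbooks",
    ],
    "outputs": [
        "1,200 teachers trained",          # tangible, within program control
        "150,000 textbooks in classrooms",
    ],
    # An outcome is an end state, influenced by the program together
    # with a range of other factors.
    "outcome": "Improved early-grade reading achievement",
}

for level, elements in program_logic.items():
    print(level, "->", elements)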

Importantly, for education programs, the theory of change will usually seek to determine
links between activities and the associated outcomes from them. It is generally assumed
that activities will contribute to outcomes, which are also influenced by a range of other
factors.

4 MONITORING AND EVALUATION FRAMEWORKS

Components of an M&E framework
The theory of change enables the formulation of key M&E questions which will in turn
direct M&E activity to provide information along specific lines of enquiry. This will ensure
evidence is provided to report on the effectiveness or otherwise of program
implementation.

An M&E framework presents the desired goals, results and/or impacts to be achieved and
establishes realistic measures, called indicators, against these. It presents the logical
ordering of inputs, activities, indicators, targets, outcomes and impacts as detailed in the
theory of change. Increasingly, M&E frameworks are being referred to as Monitoring,
Evaluation and Learning (MEL) Frameworks.

The M&E framework provides detail on how evidence of success will be assessed for each evaluation question and theory of change element, with a corresponding means of verification. The M&E framework will usually also include information on: baseline data; M&E activity reporting timeframes; relevant data sources; data disaggregation; and responsibility for data collection. Performance indicators are usually disaggregated by gender, social inclusion status and other variables to provide evaluative insights into inclusion.
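
As an illustration only, one row of such a framework could be captured as a small record. The field names below are our assumptions for the sketch, not a DFAT template:

from dataclasses import dataclass, field

@dataclass
class IndicatorRecord:
    """One hypothetical row of an M&E framework."""
    evaluation_question: str
    indicator: str                 # the realistic measure agreed for this question
    baseline: float                # value at program start
    target: float                  # value the program aims to reach
    data_source: str               # means of verification
    disaggregation: list = field(default_factory=lambda: ["gender"])
    responsible: str = "program M&E officer"
    reporting_timeframe: str = "annual"

row = IndicatorRecord(
    evaluation_question="Is participation in primary education improving?",
    indicator="Net enrolment rate, primary",
    baseline=78.0,
    target=85.0,
    data_source="Partner government education data system",
)
print(row)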

For an example of an education program M&E framework see Ten steps to a results-based
monitoring and evaluation system: a handbook for development practitioners.
Source: Kusek & Rist 2004.

5 WHAT SHOULD WE MONITOR IN EDUCATION PROGRAMS?

Monitoring
Robust M&E systems are an essential part of every aid investment made by the Australian Government. These systems need to collect, analyse and feed back information to decision makers. There is an increasing focus on real-time availability of data to improve education program performance management, rather than waiting for mid-term or end-of-program review points.

Program M&E should be agreed with partners and should reflect the planning cycle of the
partner country. Importantly, wherever possible data on indicators should be aligned to, if
not collected within, partner government data systems.

Typically, M&E for education projects will include common approaches to understanding and comparing the general level of participation in, and capacity of, primary education.

The key indicators are the gross enrolment rate, net enrolment rate and assessments of
educational access. Each of these indicators is discussed below:

Gross Enrolment Rate (GER)


The GER is the total enrolment within a country at a specific level of education, regardless of age, expressed as a percentage of the population in the official age group corresponding to that level of education. For example, if a nation has 900,000 people enrolled in school in the academic year 2016–17 and 1,000,000 school-age individuals, the GER is 900,000 divided by 1,000,000, or 90 per cent. GER can exceed 100 per cent because over-aged and under-aged students are included as a result of early or late entry and grade repetition.

Net Enrolment Rate (NER)


The NER is the total enrolment of the official age-group for a given level of education
expressed as a percentage of the corresponding population. For example, in 2014,
Liberia had the worst measured NER in primary education in the world at 38 per cent.
Thus, out of every 100 children within the official age-group for primary education, only
38 were enrolled in school.
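
A minimal sketch of both calculations, using the illustrative figures above (the function names are ours, not a standard formula library):

def gross_enrolment_rate(total_enrolled, official_age_population):
    """GER: all enrolments, regardless of age, over the official age group."""
    return 100 * total_enrolled / official_age_population

def net_enrolment_rate(enrolled_of_official_age, official_age_population):
    """NER: only enrolments within the official age group."""
    return 100 * enrolled_of_official_age / official_age_population

# GER example from the text: 900,000 enrolled, 1,000,000 of official age.
print(gross_enrolment_rate(900_000, 1_000_000))   # 90.0 per cent

# NER illustration in the spirit of the Liberia example: 38 of every 100
# children of official primary age enrolled.
print(net_enrolment_rate(38, 100))                # 38.0 per cent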

Assessments of educational access


Assessments of educational access should go beyond GER and NER. For a more nuanced
understanding of education access and participation, monitoring needs to include the
Grade One net intake rate (NIR), measures of attendance by grade level, and the
primary completion rate, among other indicators.
Sources: UIS 2017a; UIS 2017b; UIS 2017c.

6 WHAT SHOULD WE EVALUATE IN EDUCATION PROGRAMS?

Five key stages of education program activity evaluation
The Aid Programming Guide highlights the importance of evaluation. See Chapter 3: Aid
program management and performance reporting; Chapter 4: Investment management,
evaluation and quality reporting; and Chapter 5: Investment.

There are five key stages of education program activity when it is important to carry out
evaluation:

1. At program preparation stage – to consider other similar program evaluations and lessons to be drawn from them.

2. In the design stage – to ensure objectives are clear and baseline data is collected. It is important to record evaluation questions that emerge during the design.

3. During implementation – targeted evaluations to assess progress against objectives, sometimes along thematic lines of enquiry.

4. At completion – to see if the program has achieved expected objectives and outcomes, and to assess value for money.

5. Post-program – to assess ongoing impact and sustainability of benefits from the program, usually to inform an evidence base for other program preparation.

For more detail on the Australian aid program’s approach to M&E, see the Strategy for
Australia’s Aid Investments in Education 2015–2020. The strategy has specific implications
for evaluation. DFAT’s Performance Assessment Note can also provide additional
information.
Sources: DFAT 2017a; DFAT 2015.

7 THE ‘DAC PRINCIPLES’


The DAC Principles for evaluation of development assistance
The Development Assistance Committee (DAC) was established by the Organisation for Economic Co-operation and Development (OECD) to improve development cooperation between its member governments and the governments of developing or transitional countries.

In 1991, the OECD-DAC released Principles for Evaluation of Development Assistance, setting out key evaluation criteria. These evaluation guidelines have proved remarkably resilient and flexible, and have been updated over time. Donors, including the Australian aid program, have modified the criteria to suit their own perspectives. It is common to refer to the ‘DAC Principles’ as shorthand for these widely accepted evaluation criteria.

The DAC Principles are perhaps the most important, and longstanding, definitions in the
field of development M&E. Those that are used in the Australian aid program are:
• Relevance: the extent to which the aid activity is suited to the priorities and policies of the target group, recipient and development partner. In evaluating the relevance of a program or project, it is useful to ask: To what extent are the objectives of the program still valid? Are the activities and outputs of the program consistent with the overall goal and the attainment of its objectives?
• Effectiveness: a measure of the extent to which an aid activity attains its objectives. To what extent were the objectives achieved or are they likely to be achieved? What were the major factors influencing the achievement or non-achievement of the objectives?

• Efficiency: a measure of the outputs, qualitative and quantitative, in relation to the inputs. It is an economic term which signifies that the aid uses the least costly resources possible in order to achieve the desired results.
• Impact: the positive and negative changes produced by a development intervention, directly or indirectly, intended or unintended. Relevant questions are: What has happened as a result of the program or project? What real difference has the activity made to the beneficiaries?
• Sustainability: a measure of whether the benefits of an activity are likely to continue. To what extent did the benefits of a program or project continue after development partner funding ceased?

Source: Development Assistance Committee 1991.
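
One way a program team might operationalise these criteria is as a simple checklist pairing each criterion with its key evaluation questions. The sketch below is illustrative only and paraphrases the questions above:

# Hypothetical checklist pairing each DAC criterion with key questions.
DAC_QUESTIONS = {
    "relevance": [
        "To what extent are the objectives of the program still valid?",
        "Are activities and outputs consistent with the overall goal?",
    ],
    "effectiveness": [
        "To what extent were the objectives achieved?",
        "What factors influenced achievement or non-achievement?",
    ],
    "efficiency": ["Were the least costly resources used to achieve results?"],
    "impact": ["What real difference has the activity made to beneficiaries?"],
    "sustainability": ["Did benefits continue after donor funding ceased?"],
}

for criterion, questions in DAC_QUESTIONS.items():
    print(criterion.upper())
    for question in questions:
        print("  -", question)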

The DAC Principles and the Australian aid program


Australian aid performance M&E systems generally exclude ‘impact’, although as a greater results focus is adopted, aspects of the ‘impact’ principle gain relevance. The Australian aid program has also added criteria of its own:
• Monitoring and evaluation: whether an appropriate system is being used to assess progress towards meeting objectives.
• Analysis and learning: whether the aid activity is based on sound technical analysis and continuous learning.
• Gender equality: whether the aid activity is making a difference to gender equality and empowering women and girls.
• Alignment with key policy priorities: whether the aid activity is aligned with policy priorities in disability, indigenous peoples and/or ethnic minorities, climate change and disasters, private sector, and innovation.

How are the DAC Principles applied?


The DAC Principles remain largely unchanged, but the way they are applied has changed.
When the DAC Principles were formulated (1991), M&E was largely aimed at good aid
management and administration. Now, there is a focus on results and outcomes.

The results-based approach explicitly incorporates strategic priorities into evaluation. This
enables the assessment of expenditure and inputs of a program in achieving desired
outcomes. Results-based M&E focuses on outcomes and impact. As such, the Australian
aid program asks whether programs or policies have produced their intended results.

The lesson has gradually been learned that increasing enrolments is not equivalent to improving learning. There has been movement away from measuring simple enrolments towards measuring primary school completion, academic achievement and the ability to progress to further study and, ultimately, employment. The focus on implementation (inputs leading to outputs) is shifting to a results-oriented approach, with an emphasis on outcomes.

Developing indicators for equity and access


There is still much work to be done on developing indicators for different dimensions of
equity and access, but it is increasingly common to report the Gender Parity Index – the
ratio of female to male enrolments at a given level of schooling.
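
For example, a minimal sketch of the calculation with made-up enrolment figures:

def gender_parity_index(female_enrolled, male_enrolled):
    """GPI: ratio of female to male enrolments at a given level of schooling.
    1.0 indicates parity; below 1.0, fewer girls than boys are enrolled."""
    return female_enrolled / male_enrolled

# Hypothetical primary enrolments: 450,000 girls, 500,000 boys.
print(round(gender_parity_index(450_000, 500_000), 2))  # 0.9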

8 ISSUES IN EDUCATION MONITORING AND EVALUATION

Potential difficulties
There are potential difficulties at any stage of M&E in education programs and activities,
including:
• M&E may not be built in to activities or programs
• indicators and other measures may be poorly specified
• a lack of reliable and valid data
• a lack of access to M&E respondents
• incomplete data, including no baseline information
• limited capacity in data analysis
• M&E systems may be set up to include a focus on results, but evaluations of projects or programs tend to default to a model of evaluating inputs, activities and outputs
• education outcomes may not be well defined in the M&E system, may not be measurable, or may not be reliably and sensitively understood.

When do these difficulties become apparent?


Difficulties in M&E usually arise when very little attention is given to M&E during the project/program planning, implementing and reviewing cycle. Poor M&E is usually evidenced in poorly defined measures and procedures, unclear data collection methods, limited access to evidence and data, and little attention given to the impact of the activity.

An M&E plan, based on the 2017 DFAT M&E Standards, establishes a clear way to define
M&E criteria, processes, outputs, timeframes, roles and responsibilities at the outset for a
well-managed program or activity.

Those responsible for M&E should assert themselves at the commencement of a program, ensure that the measures and processes they are using are understood and agreed, and ensure those measures are supported by reliable data by accessing or creating relevant data sources.

M&E specialists should also see it as part of their role, where needed, to build the M&E skills of local staff, particularly in data verification and analysis.
Source: DFAT 2017b.

Attribution versus contribution


Attribution seeks to identify how a given activity specifically resulted in an identified outcome. Attribution is easier to establish when there is a clear causal relationship between the outcome and any preceding outputs; for example, immunising children against a disease resulted in fewer cases of that disease.

In education, attribution is difficult to establish, as it is hard to identify the specific factor that resulted in an outcome. For example, are children performing better in standardised tests because of teacher training, or the availability of textbooks, or changes to the school curriculum?

As mentioned earlier, program designs and theories of change do not generally seek to
identify the causal relationships necessary to establish attribution (i.e. this input caused
that outcome). Instead activities are linked to outcomes, to establish their contribution to
a positive change (i.e. this activity, along with several others, contributed to that
outcome).

The Australian aid program can rarely claim that a given activity exclusively caused an
outcome (attribution). Rather, investments typically contribute to outcomes
(contribution).

9 EDUCATION PROGRAM EVALUATIONS


An evaluation can address a specific education project or cover a whole education sector program. The choice of evaluation type will depend on the context, timing, resources and questions that need to be answered.
• Strategic evaluations are independently initiated and managed by the Office of Development Effectiveness (ODE). They are broad assessments of Australian aid that focus on policy directions or specific development themes. They typically examine a number of investments, often across multiple countries, regions or sectors.
• Program evaluations are initiated and managed by program areas, such as country and sector programs. Each education program undertakes an annual process to identify and prioritise a reasonable number of evaluations which it can use to improve its work. Programs may also be required to conduct thematic evaluations, mid-term reviews, or an Independent Completion Report as part of the DFAT quality assurance framework.

Some examples include:


In 2015, ODE undertook an evaluation of teacher development approaches titled
Investing in Teachers. The evaluation, together with the Supporting Teacher
Development: Literature Review, provides evidence for improving teacher development
programs. It examines 27 bilateral Australian aid investments in teacher development
from 2009 to 2015.

In 2016, the Education Team in Indonesia commissioned the Independent Completion Report for the Education Partnership. The Report describes the partnership’s evolution, captures its significant achievements and reports the program’s performance, by component, against the DAC criteria. It also looks at value for money and provides lessons for future programs.
Sources: ODE 2015; Reid et al. 2015; ODE 2016; DFAT 2016.

10 SUMMARY: BEING ‘AWARE’ OF MONITORING AND EVALUATION

• There are many ways of doing M&E, including managing for results, use of performance indicators and impact evaluations.
• These approaches are all based on the principle of trying to relate inputs and activities to outcomes, and to get the best value for money.
• M&E is best considered an approach rather than a specific technique.
• M&E is fundamentally a system of performance assessment.
• M&E needs to be built into an activity at the design stage.

11 TEST YOUR KNOWLEDGE

Assessment questions
Answer the following questions by ticking ‘True’ or ‘False’. Once you have selected your answers to all the questions, check them against ‘The correct answers are…’ below.

Question 1
The DAC Principles have changed since they were developed in 1991.

Is this statement true or false? □ True □ False

Question 2
An M&E system can be described as a performance assessment framework.

Is this statement true or false? □ True □ False

Question 3
We do not need indicators at every level of monitoring and evaluation.

Is this statement true or false? □ True □ False

Question 4
An M&E system should be designed and built into an aid activity from the very beginning.

Is this statement true or false? □ True □ False

Question 5
Support from the Australian aid program shows its contribution to outcomes.

Is this statement true or false? □ True □ False

The correct answers are...


Question 1
The DAC Principles have changed since they were developed in 1991.

This statement is false. The DAC Principles are largely unchanged, but the way they are
applied has changed.

Question 2
An M&E system can be described as a performance assessment framework.

This statement is true.

Question 3
We do not need indicators at every level of monitoring and evaluation.

This statement is false. We do need indicators at every level of M&E, although we should
be careful to select a few good indicators, rather than having too many. Indicators are at
the heart of M&E to measure what we are doing and to tell us whether we are on track to
achieve our goals.

Question 4
An M&E system should be designed and built into an aid activity from the very beginning.

This statement is true. If the necessary M&E elements are not incorporated at the outset,
such as baseline data, it will be very difficult to monitor progress or evaluate the program
at the end.

Question 5
Support from the Australian aid program shows its contribution to outcomes.

This statement is true. The Australian aid program usually does not claim that a given activity exclusively causes an outcome (attribution). Australian aid program support typically contributes to outcomes (contribution).

REFERENCES AND LINKS

All links retrieved July 2018.

Department of Foreign Affairs and Trade (DFAT) 2014, Making performance count: Enhancing the accountability and effectiveness of Australian aid, June, DFAT, http://dfat.gov.au/about-us/publications/Documents/framework-making-performance-count.pdf

DFAT 2015, Strategy for Australia’s aid investments in education 2015–2020, September, DFAT, http://dfat.gov.au/about-us/publications/Documents/strategy-for-australias-aid-investments-in-education-2015-2020.pdf

DFAT 2016, Education partnership – independent completion report: Performance oversight and monitoring (POM), December, DFAT, http://dfat.gov.au/about-us/publications/Documents/indonesia-education-partnership-completion-report.pdf

DFAT 2017a, Aid programming guide, March, DFAT, http://dfat.gov.au/about-us/publications/Documents/aid-programming-guide.pdf

DFAT 2017b, DFAT monitoring and evaluation standards, April, DFAT, http://dfat.gov.au/about-us/publications/Documents/monitoring-evaluation-standards.pdf

Development Assistance Committee 1991, Principles for evaluation of development assistance, OECD, http://www.oecd.org/dac/evaluation/dcdndep/41029845.pdf

Kusek, J & Rist, R 2004, Ten steps to a results-based monitoring and evaluation system: A handbook for development practitioners, World Bank, https://openknowledge.worldbank.org/handle/10986/14926

Office of Development Effectiveness (ODE) 2015, Investing in teachers, December, DFAT, http://dfat.gov.au/aid/how-we-measure-performance/ode/Documents/teacher-development-evaluation.pdf

ODE 2016, Teacher development, 20 May, DFAT, http://dfat.gov.au/aid/how-we-measure-performance/ode/other-work/Pages/teacher-quality.aspx

Reid, K, Kleinhenz, E & Australian Council for Educational Research 2015, Supporting teacher development: Literature review, DFAT, https://dfat.gov.au/aid/how-we-measure-performance/ode/Documents/supporting-teacher-development-literature-review.pdf

UNESCO Institute for Statistics (UIS) 2017a, ‘Gross enrolment rate’, Glossary, UNESCO, http://uis.unesco.org/en/glossary-term/gross-enrolment-ratio

UIS 2017b, ‘Net enrolment rate’, Glossary, UNESCO, http://uis.unesco.org/en/glossary-term/net-enrolment-rate

UIS 2017c, Liberia: Participation in education; Education and literacy, data for the Sustainable Development Goals, UNESCO, http://uis.unesco.org/en/country/lr

United Nations 2017, Goal 4: Ensure inclusive and quality education for all and promote
lifelong learning, Sustainable Development Goals, UN,
http://www.un.org/sustainabledevelopment/education/

Learn more about…


The DAC criteria for evaluating development assistance, found at:
http://www.oecd.org/dac/evaluation/daccriteriaforevaluatingdevelopmentassistance.htm
DFAT aid evaluation policy 2016, found at: http://dfat.gov.au/aid/how-we-measure-
performance/ode/Documents/dfat-aid-evaluation-policy-nov-2016.pdf
Joint Committee on Standards for Educational Evaluation (JCSEE) accepted international
standards, found at: http://www.jcsee.org/program-evaluation-standards
OECD’s principles for evaluating development cooperation, found at:
http://www.oecd.org/dac/evaluation/dcdndep/41612905.pdf
OECD-DAC 2002, Glossary of key terms in evaluation and results-based management,
found at: http://www.oecd.org/dac/evaluation/2754804.pdf
UNESCO’s 2016 Global Education Monitoring (GEM) Report: UNESCO 2016, Education for
people and planet: Creating sustainable futures for all, 2016 Global Education Monitoring
Report, found at: http://unesdoc.unesco.org/images/0024/002457/245745e.pdf
World Bank 2004, Influential evaluations: Evaluations that improved performance and
impacts of development programs, found at:
http://siteresources.worldbank.org/EXTEVACAPDEV/Resources/4585672-
1251727474013/influential_evaluations_ecd.pdf
World Bank 2004, Monitoring & evaluation: Some tools, methods & approaches, found at:
http://siteresources.worldbank.org/EXTEVACAPDEV/Resources/4585672-
1251481378590/MandE_tools_methods_approaches.pdf
