
ED/EPS/2006/PI/11

October 2006

Strategic Planning in Education: Some Concepts and Steps

Gwang-Chol Chang

Education Sector
UNESCO
This paper is a synthesis of a UNESCO publication entitled "National
Education Sector Development Plan: A result-based planning handbook"
(2006). It was distributed as background material at the Workshop on
Strategic Planning and Quality Assurance in Education, which took place in
Tbilisi, Georgia, from 23 to 26 October 2006. The Workshop was organized
by the Ministry of Education and Science of Georgia in collaboration with
UNESCO.

Any part of this paper may be freely reproduced with the appropriate
acknowledgement. The authors are responsible for the choice and presentation
of the facts contained in this paper and for the opinions expressed therein,
which are not necessarily those of UNESCO and do not commit the
Organization.

Published in 2006 by:
Division of Educational Policies and Strategies, UNESCO
7, place de Fontenoy, 75352 Paris 07 SP (France)

© United Nations Educational, Scientific and Cultural Organization


Contents

Abstract

I. Introduction

II. The Strategic Management Cycle
II.1. An Overview
II.2. Operational v. Strategic Planning
II.3. The Strategic Management Cycle

III. Three Stages of Strategic Planning
III.1. Sector Analysis
III.2. Policy Design
III.3. Action Planning

IV. Planning for Monitoring and Evaluation
IV.1. Performance Indicators
IV.2. Three Classifications of Evaluation
IV.3. Objects of Monitoring and Evaluation

Abstract

Keywords: Education Policies and Strategies, Strategic Planning, Education
Development Plan, Management Cycle.

In the context of national education development, the term "strategic
planning" is increasingly used, for several reasons. First, one may plan and
carry out all the activities deemed necessary, yet still fail to achieve the
ultimate goals. Furthermore, more resources do not necessarily yield the best
results: the way these resources are used can lead to very different levels of
benefit. Thirdly, it has become more and more difficult to plan everything one
would wish to do. One has to make choices, often tough ones, through
balanced decision-making, trade-offs across the system and consensus
building.

There is a variety of approaches to strategic planning, and one cannot say that
there is a "single perfect way" to conduct it. Each institution has its own
particular interpretation of the approaches and activities involved in strategic
management. However, what is generic to strategic planning and management
are certain typical stages involving similar activities carried out in a similar
sequence.

Any management involves four basic stages: analysis, planning,
implementation and evaluation. In the education sector, the management
operations related to "upstream" planning work consist of: (i) system
analysis; (ii) policy formulation; (iii) action planning.

Sector analysis consists of conducting data collection on and critical analysis
of the aspects relating to the education sector. Planners carefully review how
the system functions (internal dynamics) and examine various contextual,
determining factors (the environment of which education is a part), e.g.
macro-economic and socio-demographic situations and developments.

Policy and strategy formulation: Careful (and critical) analysis of the
educational system undertaken during the sector analysis leads to questions
about what the education sector must do in order to address the major issues,
challenges and opportunities. These questions include what overall results
(strategic goals) the system should achieve and the overall methods (or
strategies) to implement policies designed to bring about such objectives.

Action planning is a process whereby one translates the policy statements
(options and strategies) into executable, measurable and accountable actions.
In a broader sense, action planning includes specifying objectives, outputs,
strategies, responsibilities and timelines (what, what for, how, who and
when). The output of this process is a plan of action.

By Gwang-Chol CHANG
Education Sector, UNESCO Paris

I. Introduction

Planning is a process whereby a direction is set forth and then the ways and
means for following that direction are specified. There are many forms of
planning with several types of activities involved in this process.

A plan is the product of the planning process and can be defined as a set of
decisions about what to do, why, and how to do it. A plan of action is a living
reference framework for action. This implies that:

• As a reference for action, the plan is the result of a consensus-building
  process, to be agreed upon by all those working in the fields covered as
  well as the other stakeholders contributing to its implementation;
• As an indicative, living framework, it is designed in such a way as to
  allow for adjustments in light of new developments during
  implementation;
• As a working tool, it includes not only policy and expenditure
  frameworks, but also the hierarchy of objectives, key actions and
  institutional arrangements for implementation, monitoring and evaluation.

More and more, education managers are compelled to think and plan
strategically, for the following reasons:

• First, one may plan and carry out all the activities deemed necessary, yet
  still fail to achieve the ultimate goals.
• Furthermore, more resources do not necessarily yield the best results: the
  way these resources are used can lead to very different levels of benefit.
• Thirdly, it has become more and more difficult to plan everything one
  would wish to do. One has to make choices, often tough ones, through
  balanced decision-making, trade-offs across the system and consensus
  building.

These considerations lead to the adoption of strategic planning. A strategic
plan in the education sector is the physical product of the strategic planning
process; it embodies the guiding orientations on how to run an education
system within a larger national development perspective, which is evolving
by nature and often involves constraints.

II. The Strategic Management Cycle

II.1. An Overview

Like any other system, education has inputs, processes, outputs and
outcomes:

• Inputs to the education system include resources such as teachers,
  buildings, equipment, books, etc.
• These inputs go through a process (throughput) whereby they are mixed
  (input mix), combined and/or moved along to achieve results.
• Educational outputs are tangible results produced by processes in the
  system, such as enrolments, graduates and learning achievements.
• Another kind of result, which can be called an outcome, is the benefits
  for the students, their families and/or society as a whole.

As a way of strategic management, education systems should be analyzed and
thought out in terms of relevance, efficiency, effectiveness, impact and
sustainability: for example, one will wonder whether the inputs to the
education system are relevant for addressing the needs, to what extent the
processes (utilization of resources) are efficiently driven and how well the
anticipated outputs are effectively produced. Outcomes should be weighed in
terms of their impact and sustainability.

II.2. Operational v. Strategic Planning

In the past, planners usually referred to "long-range planning"; more recently,
they have tended to use the term "strategic planning". Although many still use
these terms interchangeably, strategic planning and long-range planning
differ. Long-range planning is generally understood as the development of a
plan aimed at achieving a policy or set of policies over a period of several
years, on the assumption that projection of (or extrapolation from) the past
and current situation is sufficient to ensure the implementation of future
activities. In other words, long-range planning assumes that the
environment is stable, while strategic planning assumes that a system
must be responsive to a dynamic and changing environment. The term
"strategic planning" is meant to capture the strategic (comprehensive, holistic,
thoughtful or fundamental) nature of this type of planning.

With regard to operational and strategic planning, a narrow definition would
be that strategic planning is done with the involvement of higher levels of
management, while operational planning is done at lower levels. However,
this document proposes a wider definition, as shown in the following table.

                   Operational planning                    Strategic planning
Focus              Routine activities                      Achieving goals
Purpose            Achieving the best use of               Planning the best courses of
                   available resources                     action
Rewards            Efficiency, stability                   Effectiveness, impact
Information        Present situation                       Future opportunities
Organization       Bureaucratic, stable                    Entrepreneurial, flexible
Problem solving    Relies on past experience               Finds new ways and alternatives
Risks              Low                                     High

II.3. The Strategic Management Cycle

There are a variety of terminologies used in strategic management and a
variety of approaches to carry it out. One cannot say that there is a "single
perfect way" to conduct strategic planning. Each institution has its own
particular interpretation of the approaches and activities in strategic
management. However, what is generic to strategic management are certain
typical stages involving similar activities carried out in a similar sequence.
Any management involves four basic stages: analysis, planning,
implementation and evaluation.

In a more sophisticated way, we can say that strategic management is a
continuum of successive stages such as: critical analysis of a system,
policy formulation and appraisal, action planning, management and
monitoring, review and evaluation. Experience and lessons learnt from
implementation, monitoring and evaluation provide feedback for
adjusting the current programme or for the next cycle of policy
formulation and action planning.

Diagram: The strategic management cycle

The diagram above outlines this cyclical pattern of strategic management:

• Any management cycle begins with analysis, whereby the current
  situation of a system and the critical issues pertaining to its status and
  functioning are first analysed.
• Findings and remedial options are then formulated and appraised, thus
  providing policy orientations.
• Once the system is analysed and the future directions are traced, one can
  proceed with planning the actions necessary to correct or improve the
  situation. A plan can be long range (6 to 10 years), medium term (3 to 5
  years) or short term (1 to 2 years).
• Operationalization consists of taking, before actual execution starts, the
  reform and institutional measures that are conducive to the smooth
  implementation of plans or programmes, including designing specific
  development projects or programmes and/or mobilizing the resources
  required to implement the planned actions and activities.
• Planning and management are subject to feedback-providing operations,
  i.e. monitoring, review and evaluation.

In the education sector, the management operations related to "upstream"
planning work consist of: (i) system analysis; (ii) policy formulation; (iii)
action planning.

Sector analysis: This diagnostic stage consists of conducting data collection
on and critical analysis of the aspects relating to (and surrounding) the
education sector. Planners carefully review how the system functions (internal
dynamics) and examine various contextual, determining factors (the
environment of which education is a part), e.g. macro-economic and socio-
demographic situations and developments. They look into the above aspects
from the perspective of the system's strengths, weaknesses, opportunities and
threats (better known as the SWOT analysis) regarding educational
development. This helps to identify the critical issues and challenges and to
construct remedial actions. Some call this phase of education sector analysis
(ESA) the diagnostic work; the terms sector review, system analysis, etc. are
also used.

Policy and strategy formulation: Careful (and critical) analysis of the
educational system undertaken during the sector analysis leads to questions
about what the education sector must do in order to address the major issues,
challenges and opportunities. These questions include what overall results
(strategic goals) the system should achieve and the overall methods (or
strategies) to implement policies designed to bring about such objectives. This
stage of strategic planning is called policy formulation.

Action planning: Action planning is a process whereby one translates the
policy statements (options and strategies) into executable, measurable and
accountable actions. In a broader sense, action planning includes specifying
objectives, outputs, strategies, responsibilities and timelines (what, what for,
how, who and when). The output of this process is a plan of action. For the
purpose of result-based planning, the Logical Framework Approach is also
widely used when preparing development projects, programmes and plans,
thus contributing to results-based programming, management and monitoring
in the education sector.

Usually, projections of resource requirements are included in strategic,
operating or work plans. Resources can be human, technical, physical and
financial. Information on financial resources includes: the cost estimates
required for implementing the plan, the budget likely to be available in the
future and the funding gaps (additional funding needed) to be filled for each
of the years covered by the plan, giving particular attention to the first years.
The MTEF (Medium-Term Expenditure Framework) processes in place in
some countries should contribute to fruitful negotiations and trade-offs
between the "top-down" budget ceiling and "bottom-up" initiatives for the
sector's resource envelope. Plans build on the MTEF and further detail how
the funds will be spent (by recurrent budget, capital budget, project budgets,
etc.).
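
To illustrate the arithmetic of a funding gap, the short sketch below subtracts the budget likely to be available from the estimated cost of the plan for each year of the plan period. It is only a toy example: the years, amounts and variable names are invented, and real costing exercises rely on far more detailed models.

```python
# Hypothetical annual funding-gap calculation (figures in millions of currency units, invented).
cost_estimates  = {2007: 120, 2008: 135, 2009: 150, 2010: 160}  # estimated cost of implementing the plan
expected_budget = {2007: 100, 2008: 110, 2009: 125, 2010: 140}  # budget likely to be available

for year in cost_estimates:
    gap = cost_estimates[year] - expected_budget[year]
    share = gap / cost_estimates[year]
    print(f"{year}: cost {cost_estimates[year]}, budget {expected_budget[year]}, "
          f"funding gap {gap} ({share:.0%} of cost)")
```

The gap for each year is what has to be covered through additional domestic resources or external funding, which is precisely what the negotiation around the budget ceiling is about.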

III. Three Stages of Strategic Planning

III.1. Sector Analysis

Sector analysis is the first stage of sector development planning. Sector
review, situation analysis, diagnosis, etc. are sometimes used interchangeably.
Basically, sector analysis consists in conducting data collection on and
critical analysis of the aspects relating to (and surrounding) the education
sector. Planners and managers carefully examine both internal and external
aspects of the education system. In other words, they:

• review how the system functions (internal dynamics) to meet people's
  needs and economic demand;
• examine various driving forces behind the education system and external
  conditions (the environment of which education is a part), e.g. macro-
  economic and socio-demographic situations and developments.

Planners and managers can look at the above aspects from the perspective of
the system's strengths, weaknesses, lessons and opportunities regarding
educational development. They also examine the relevance, efficiency and
effectiveness of the inputs, processes and outputs of the system in its current
setting. This helps to identify critical issues and challenges and to construct
remedial actions and policy provisions.

The main categories of aspects to be considered when conducting an
Education Sector Analysis (ESA) and/or when describing the diagnostic part
of an education sector development plan are: (i) macro-economic and socio-
demographic frameworks; (ii) access to and participation in education; (iii)
quality of education; (iv) external efficiency; (v) costs and financing of
education; and (vi) managerial and institutional aspects. Aspects (ii) to (vi)
can be documented by sub-sector (pre-school, primary and secondary
education, technical and vocational education, higher education, non-formal
education, etc.).

III.2. Policy Design

Education sector policies represent the government's public commitment to
the future orientation of the sector. A clearly formulated policy can play an
important "operational" role as a reference for action. It can help to guide
decisions and future actions in educational development, including the
interventions of international and bilateral cooperation agencies, in a coherent
way. It is important that policy promote the coordination and success of
programmes and projects. The formulation of a "good policy for education" is
a necessary step in promoting the emergence and effective implementation of
action plans, programmes and projects.

A policy is a set of goals and purposes (specific objectives). Often, education
policies are defined along the following three dimensions:

• access (access, participation, including gender and equity issues);
• quality (quality, internal efficiency, relevance and external effectiveness);
• management (governance, decentralization, resource management).

These dimensions are addressed (i) either as a whole, by programme
component or by sub-sector, and (ii) with target indicators by time range
(medium or long term) and with a few quantitative indicators. One cannot say
that there is a perfect way of writing policies or of listing different policy
aspects. An indicative checklist is presented below to specify some of the
fields requiring definition in an educational policy and its implementation
strategies. This list is not exhaustive:

• access to and participation in education;
• equity and the reduction of disparities between boys and girls, regional
  disparities, rural/urban disparities and social disparities;
• quality and the relevance of education at different levels (basic education,
  general secondary education, technical and professional education, higher
  education, adult education, etc.);
• the place that the private sector and local groups occupy in the
  organization of education;
• regulation of student flows between (i) formal and non-formal education;
  (ii) public and private education; (iii) general secondary, technical and
  professional education; (iv) short and longer higher education; (v)
  elementary and secondary, secondary and higher education, etc.;
• institutional aspects such as governance, management and planning,
  including the balance between decentralization, de-concentration and
  centralization;
• partnership and communication between actors and partners, and the
  level and form of participation and communication;
• cost control in recurrent and capital expenditure; and
• policies and strategies to mobilize resources in connection with
  decentralization, the development of the private sector and partnership
  development.

Particular emphasis should be placed on formulating quantified objectives
such as enrolment, admission and flow rates, pupil/teacher ratios, the
supervision rate, the space utilization rate and the share of education in the
national budget. For this purpose, simulation techniques and models have
been used successfully to define policies that can then be quantified for
consultation and for negotiating trade-offs between stakeholders and
development partners on issues such as enrolment objectives, the organization
of provision at different levels of education, and public, private and external
financial contributions.
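
As a purely illustrative sketch of what such a simulation model does, the fragment below projects enrolment, teacher requirements and the salary bill over a five-year horizon from a few policy levers (an assumed enrolment growth rate and a target pupil/teacher ratio). All figures and parameter names are hypothetical, and the model is far simpler than the sector-wide simulation tools referred to above.

```python
# Minimal enrolment-projection sketch (hypothetical figures, illustrative only).

def project(years, enrolment, ptr_start, ptr_target, growth, teacher_salary):
    """Project enrolment, teachers needed and the salary bill year by year.

    enrolment      -- pupils currently enrolled
    ptr_start      -- current pupil/teacher ratio
    ptr_target     -- pupil/teacher ratio the policy aims for in the final year
    growth         -- assumed annual enrolment growth rate (e.g. 0.04 = 4 per cent)
    teacher_salary -- average annual teacher salary (same currency as the budget)
    """
    rows = []
    for t in range(years + 1):
        pupils = enrolment * (1 + growth) ** t
        # Move the pupil/teacher ratio linearly from its current value to the target.
        ptr = ptr_start + (ptr_target - ptr_start) * t / years
        teachers = pupils / ptr
        salary_bill = teachers * teacher_salary
        rows.append((t, round(pupils), ptr, round(teachers), round(salary_bill)))
    return rows

if __name__ == "__main__":
    for year, pupils, ptr, teachers, cost in project(
        years=5, enrolment=500_000, ptr_start=52, ptr_target=45,
        growth=0.04, teacher_salary=2_400,
    ):
        print(f"year {year}: {pupils:>8} pupils, PTR {ptr:4.1f}, "
              f"{teachers:>6} teachers, salary bill {cost:>12,}")
```

Changing the target ratio or the growth assumption immediately changes the teacher and budget requirements, which is the kind of quantified trade-off that such models put on the table during consultations.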

III.3. Action Planning

A national policy should establish the framework for its implementation by
setting out the main goals and priorities, as well as the strategies to achieve
them. It should be credible, meaning that the human and financial resources
needed to carry out the policy are available. Action planning (or
programming) is the preparation for implementation. An action
programme (which could also be called an action plan) aims to translate
into operational terms the policy directions that education authorities
intend to implement in a given time horizon. It is a tool for "clarifying" to
some extent the goals and strategies in relation to the education policy,
programming the activities required, establishing the timing, indicating the
necessary resources, distributing institutional and administrative
responsibilities, preparing the budgets, etc. It is important to consult and
negotiate with the various development partners throughout the action
planning stage if the country is to mobilise their support for plan
implementation.

Note: It is necessary to differentiate between an action plan/programme
and an investment programme, which often deals with the infrastructure
and equipment needed to carry out the action plan and the recurrent
expenditure incurred by such investments. The duration of an action
programme is, in general, five years. One of the criteria of an action plan
(in order for a plan to be called an action plan) is that it goes beyond mere
policy statements and lists of activities to further define and prioritize the
actions, activities and required resources in a coherent manner. These
actions and resource projections should be defined within a given
macro-economic framework, using appropriate technical tools such as a
simulation model.

In general, the education policy framework document concerns the whole of
the education sector. The action plan, which is linked to this policy
framework, should also be sector-wide. Sometimes a policy statement may
concern either a particular sub-sector (secondary technical and professional
education, for example) or a cross-cutting theme (improvement of the quality
of education, for example), within an overall, sector-wide development
framework. Inasmuch as the sub-sectors represent fairly homogeneous groups,
an action programme can be developed first for each sub-sector; these
programmes are then assembled into a sectoral plan of action, ensuring that a
coherent whole is produced which faithfully reflects the policy framework.

An initial task for those in charge of developing an action plan is to draw up a
typology of concepts to be used: objectives, results, actions, activities,
measurements, resources, etc. It is necessary to achieve consensus on the
concepts and their logical arrangement.

Different methodologies and techniques of action planning have been
designed and used by different countries and agencies. Among them, two
instruments are emerging as reference tools in developing action plans in the
education sector: the Logical Framework Approach and simulation modelling.
In reality, these two and other approaches are used, not in isolation but to
complement each other, resulting in the preparation of a credible and coherent
action plan for educational development.
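
By way of illustration, the sketch below shows how the hierarchy of a logical framework (goal, purpose, outputs, activities, each with indicators, means of verification and assumptions) might be captured as a simple data structure. The field names and example entries are hypothetical and are not prescribed by the Logical Framework Approach itself.

```python
# A minimal, hypothetical data structure for a logical-framework matrix.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogframeRow:
    level: str                                              # "goal", "purpose", "output" or "activity"
    summary: str                                            # narrative summary of the intended result
    indicators: List[str] = field(default_factory=list)     # how achievement is measured
    verification: List[str] = field(default_factory=list)   # where the evidence comes from
    assumptions: List[str] = field(default_factory=list)    # external conditions assumed

logframe = [
    LogframeRow("goal", "Improved quality of basic education",
                ["national assessment scores in grade 5"],
                ["national assessment reports"]),
    LogframeRow("purpose", "Teachers apply the revised curriculum in class",
                ["share of observed lessons using the new curriculum"],
                ["classroom observation surveys"],
                ["textbooks distributed on time"]),
    LogframeRow("output", "250 inspectors trained in educational planning",
                ["number of inspectors trained per year"],
                ["training records"]),
    LogframeRow("activity", "Run regional training workshops",
                ["workshops held per region"],
                ["workshop attendance sheets"]),
]

for row in logframe:
    print(f"{row.level:>8}: {row.summary} (indicators: {', '.join(row.indicators)})")
```

Keeping the hierarchy explicit in this way makes it easy to check that every planned activity feeds an output, every output an objective, and that each level has at least one indicator to monitor against.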

IV. Planning for Monitoring and Evaluation

We are all accountable for the work we do. We are accountable for the use of
the resources that we are given. We are accountable to a variety of people, but
foremost to the people and communities we serve, though we are also
accountable to those who provide resources.

We also need to learn lessons. We need a system that is reflective and
analytical, examining performance both:
• on an ongoing, day-by-day and month-by-month basis, so that we can
  change direction and improve what we are doing; and
• on an occasional basis, perhaps annually or every three years, when we
  can examine our effectiveness and the changes that have occurred, so that
  we can build lessons from such experience into our future plans.

In response to these needs for accountability and feedback, three main
questions should be addressed when preparing education development plans
or programmes:

• What can enable us to judge and measure whether an objective or an
  expected result has been achieved and an activity implemented?
• How can we assess the achievement of an activity, an output or an
  objective?
• What level of result are we going to assess?

In general terms, monitoring and evaluation consist in measuring the
status of an objective or activity against an "expected target" that allows
judgement or comparison. This target is an indicator. This implies that one
has to define, at the planning stage, indicators that make it possible to
measure whether and how an output or an activity has been delivered in
comparison with the initial targets.

The second question concerns how to assess the status of each level of the
programme. Your boss might want you to produce results, no matter how you
achieve them. However, you ought to care about the use of the means that you
are given in order to attain the results expected by your boss. This can be done
by regular monitoring of the achievement of your activities. On the other
hand, you may need an external and objective point of view to assess the
impact of your activities, which can be provided through a more formal type
of assessment: an evaluation.

It is very important to plan M&E from the outset: e.g. when doing a strategic
plan or planning a programme or a project. A system is needed that will help
answer the questions of:

• Relevance: does the organization or project address identified needs?
• Efficiency: are we using the available resources wisely and well?
• Effectiveness: are the desired outputs being achieved? Is the organization
  or project delivering the results it set out to deliver?
• Impact: have the wider goals been achieved? What changes have occurred
  for the targeted individuals and/or communities?
• Sustainability: will the impact be sustainable? Will the structures and
  processes so established be sustained?

It is important to note that credible indicators cannot be constructed without a
reliable information system. Without the production of reliable statistics, the
quality of monitoring and evaluation will be questionable at the stage of plan
implementation. In other words, one must start by establishing a reliable
information system in order to ensure the quality of the monitoring and
evaluation.

IV.1. Performance Indicators

An indicator is a number or ratio (a value on a scale of measurement)
that can be obtained from a series of observed or calculated facts and
that can reveal relative changes as a function of time. Indicators are used to
measure performance; they play a crucial role in monitoring and evaluation:

• they specify realistic targets for measuring or judging whether the
  objectives have been achieved;
• they provide the basis for monitoring, review and evaluation, thus feeding
  back into the management of the organisation or project and into
  lesson-learning and planning for subsequent work;
• the process of setting indicators contributes to transparency, consensus
  and ownership of the overall objectives and plan.

The following introduces the general types of performance indicators that can
be used to assess progress towards the achievement of different types of
expected results and to answer the question: How do we know whether we
are achieving, or have achieved, our goal?

Direct or indirect indicators

Direct indicators (often statistical). These indicators are used for objectives
that relate to a directly observable change resulting from activities and
outputs. A direct indicator is simply a more precise, comprehensive and
operational restatement of the respective objective. If the expected result is to
increase the number of professionals trained in an area over a period of time,
one should ensure that quantified data are collected on a regular basis and
made available for monitoring, review or evaluation. For example, if the
expected result is to "train 250 inspectors in educational planning and
management over two years", then the direct statistical indicator would
simply be a count, by semester or by year, of the number of those actually
trained in this field.
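
As a toy illustration of tracking such a direct indicator, the sketch below compares the cumulative count of trained inspectors with the two-year target of 250; the semester labels and figures are invented for the example.

```python
# Hypothetical semester counts of trained inspectors, checked against a 250-person target.
TARGET = 250
semester_counts = {"Y1-S1": 40, "Y1-S2": 55, "Y2-S1": 70, "Y2-S2": 60}  # invented figures

cumulative = 0
for semester, trained in semester_counts.items():
    cumulative += trained
    print(f"{semester}: +{trained:3d} trained, cumulative {cumulative:3d} "
          f"({cumulative / TARGET:.0%} of target)")

print("Target achieved" if cumulative >= TARGET else
      f"Shortfall of {TARGET - cumulative} against the target")
```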

Indirect or proxy indicators may be used instead of, or in addition to, direct
indicators. They may be used if the achievement of objectives: (i) is not
directly observable, as with quality of life, organisational development or
institutional capacity; (ii) is directly measurable only at a cost too high to be
justified; or (iii) is measurable only after long periods of time beyond the life
span of the project. However, there must be a prima facie connection between
the proxy and the expected result. The following example illustrates how a
proxy indicator could be used to assess progress in what might seem to be an
intangible situation. If the expected result is "greater awareness among the
general public and policy-makers about the major challenges of HIV/AIDS in
education", a good proxy indicator might be the number of times public
figures spoke of these challenges and/or the number of times the mass media
reported on them. In this case, collecting data every six months would be
satisfactory. In the longer term, programme evaluation should offer statistical
data to ascertain more accurately the totality of the factors and variables at
play.

Qualitative or quantitative indicators

The Quantity-Quality-Time (QQT) maxim for constructing an indicator
generally works well. But its rigid application can leave out performance and
change that are difficult to quantify or to value appropriately. That a change
may be difficult to quantify, or that the analysis of qualitative data may not be
straightforward, are not reasons to sweep them under the carpet. Special effort
and attention need to be given to devising qualitative indicators. A balance of
indicators is needed, with some that focus on quantitative and others on
qualitative aspects.

Quantitative indicators may relate to:
• the frequency of meetings;
• the number of people involved;
• growth rates;
• the intakes of inputs, e.g. grants, buildings, teachers;
• the adoption and implementation of the outputs, etc.

In many instances where the expected result may be qualitative (change of
attitudes, capacity building, etc.), a non-statistical approach may be the only
possible way to develop an indication of "progress". Qualitative indicators
largely focus on the "process of change", asking stakeholders what they did
as a result of their participation in activities. This technique works especially
well in instances where training seminars and workshops are the pursued
outputs. However, when dealing with stakeholders, care needs to be taken to
avoid a focus simply on "satisfaction". Rather, the focus should be on what
happened as a result of the participation. It should also be noted that narrative
indicators can seldom be quantified easily over the short term.

Qualitative indicators relate to:
• the level of participation of a stakeholder group;
• stakeholder opinions and satisfaction;
• aesthetic judgements, e.g. taste, texture, colour, size, shape, etc.;
• decision-making ability;
• the emergence of leadership;
• the ability to self-monitor;
• attitudinal and behavioural changes;
• evidence of consensus.

Qualitative indicators are sometimes called narrative indicators. The
following example illustrates how narrative indicators could be used. If the
expected result is to "enhance provincial capacities for the organization and
management of non-formal education", then a valid narrative indicator might
be a follow-up questionnaire circulated among those individuals who
participated in training activities, asking them what they did in their provinces
as a result of the actions of the Ministry of Education. Such a questionnaire
should not be a survey of client satisfaction. It should ask: "What did you do
as a result of your participation in the training workshop?" It could be sent
out to stakeholders several times, at least once a year, in order to develop a
"baseline" and thus begin to assess the continuum of change. Oral interviews
could also be used in lieu of a formal written response. Narrative indicators
enable an organization to assess the interconnection of factors without
recourse to extremely expensive statistical research. In this way, one could
demonstrate "partial success" even if other factors may have prevented the
overall "enhancement of national capacity". This example also illustrates how
a proxy indicator could be combined with a narrative indicator. In this case, a
reliable proxy indicator might be the number of new non-formal education
centres. The proxy has not measured "enhanced capacity"; rather, it has
shown the impact.

IV.2. Three Classifications of Evaluation

Depending on the nature of a programme and the purpose of an evaluation,
there are different classifications of evaluation.

The first classification depends on who is conducting the evaluation:
• internal (when the evaluation concerns a programme implemented
  entirely within an institution and is carried out by persons belonging to
  the same institution as those managing the programme, sometimes with
  the assistance of external evaluators);
• self-evaluation (a form of internal evaluation done by those who
  implement the programme); or
• external (when the evaluation concerns a programme whose
  implementation involves persons from outside the institution; it is often
  carried out by evaluators independent of the institution).

The second classification depends on the use of the evaluation. An evaluation
can be:
• formative (its main goal is generally to correct the course taken by a
  programme, and its results are usually intended for those implementing it;
  it is sometimes called a mid-term evaluation because it is carried out
  while the programme is still being implemented);
• summative (it leads to conclusions about the value of the programme so
  that lessons can be learnt for the future; it is also called an
  end-of-programme evaluation); or
• ex-post (it is conducted some time after the completion of the programme
  in order to draw conclusions on the impact and sustainability of the
  programme; it is another form of summative evaluation).

The third classification, which is widely used in programme evaluation,
comprises three types: monitoring, review and evaluation. It is recommended,
however, that some flexibility be applied when conducting the types of
evaluation described below in combination with those mentioned above.

Monitoring: It is not an evaluation per se, but a process whereby the
progress of activities is regularly and continuously observed and analysed in
order to ensure that the expected result is achieved. It is done by regular
collection and analysis of information for checking the performance of the
programme activities.

Monitoring is usually done internally by those who are responsible for the
execution of activities (programme managers) in order to assess:
• whether and how inputs (resources) are being used;
• whether and how well planned activities are being carried out or
  completed; and
• whether outputs are being produced as planned.

Monitoring focuses on efficiency, that is, on the use of resources, especially
at the activity level (and sometimes at the output level).

Major data and information sources for monitoring are: financial accounts and
also internal documents such as mission reports, monthly/quarterly reports,
training records, minutes of meetings, etc.

Review, like monitoring, is a task usually performed by those who are
responsible for the activities, but it is a more substantial form of monitoring,
carried out less frequently, e.g. annually or at the completion of a phase.

Often called a mid-term review, its results are intended for those who are
implementing the activities as well as for the providers of funds. Reviews can
be used to adjust, improve or correct the course of programme activities.

Review focuses, in particular, on effectiveness and relevance. It assesses
whether the activities have delivered the expected outputs and whether the
latter are producing the expected outcomes, in other words whether there is
an indication that the outputs are contributing to the purpose of the project or
programme.

Key data and information sources for review are typically both internal and
external documents, such as annual status reports, survey reports, national
statistics (e.g. statistical yearbooks), consultants’ reports, etc.

Evaluation is, in many organisations, a general term used to include review.
Other organisations use it in the more restricted sense of a comprehensive
examination of the outputs of a programme and of how they contribute to the
purposes and goals of the programme.

Evaluations are usually carried out both by insiders (those belonging to the
same institution as the programme managers) and outsiders (external
evaluators) in order to help decision makers and other stakeholders to learn
lessons and apply them in future programmes. Evaluations focus, in
particular, on impact and sustainability.

Evaluations may take place:
• at the end of a project phase or at the completion of a project (terminal or
  summative evaluations), to assess immediate impact; and/or
• beyond the end of the project (ex-post evaluations), to assess the
  longer-term impact of the project and its sustainability.

Key data and information sources for evaluation are both internal and
external. They may include annual status reports, review reports, consultants’
reports, national and international statistics, impact assessment reports, etc.

IV.3. Objects of Monitoring and Evaluation

As described above, depending on the purpose and type of evaluation, the
focus of evaluation can be different.

Like any other system, the education sector has inputs, processes, outputs and
outcomes, as shown in the figure below. These are the main objects of
monitoring and evaluation.

Inputs are human, financial and other resources necessary for producing
outputs and achieving results. In the education system, they are teachers,
equipment, buildings, textbooks, etc. These inputs go through a process
(throughput) where they are mixed (input mix), combined and/or moved along
to achieve results.

Outputs are the products and services generated as the tangible results of
carrying out the planned activities. In an education system, they are, for
example, the graduates and the knowledge acquired during their studies.
Producing an output by itself can be meaningless: such outputs are sought for
the purpose of contributing to the achievement of an outcome.

Outcomes are the effects of utilizing the outputs. They are the overall
changes in situations and/or benefits, qualitative and/or quantitative, for the
students, their families and/or society as a whole. For example, in the
education sector, they are the gains that graduates from a given level of
education can actually obtain thanks to the knowledge they acquired at school.

Systems are often analyzed in terms of relevance, efficiency, effectiveness,
impact and sustainability: for example, one can wonder whether the inputs
to the education system are relevant for addressing identified needs, to what
extent the processes (utilization of resources) are efficient, and how far the
anticipated outputs are effectively produced. Outcomes and results will be
analyzed in terms of their impact and sustainability. These are the focuses of
monitoring and evaluation.

Figure: Relevance, efficiency and effectiveness (the chain from needs and
objectives through resources to outputs and outcomes, showing hypothetical
and real relevance).

Relevance can be hypothetical or real:

• Hypothetical relevance is defined in relation to needs, e.g. whether a goal,
  an objective or an expected result of a programme or project reflects the
  actual needs of the beneficiaries or not. This is the focus of evaluation
  when appraising a programme before its approval, and sometimes during
  the programme review.
• Real relevance measures the extent to which the outputs produced and/or
  outcomes achieved respond to the needs of the population. This is the
  focus of evaluation when conducting a programme review, most often
  during a programme evaluation.

Efficiency describes the relation between the quantity of outputs (products
and services) produced and the quantity of resources used to produce them.
Unit or average cost is often used to express efficiency. This is the focus of
evaluation during programme monitoring and review, and sometimes during
programme evaluation.
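
As a simple numerical illustration of efficiency as unit cost, the sketch below divides the resources actually spent by the number of outputs produced and compares the result with the unit cost assumed in the plan; all figures are invented.

```python
# Hypothetical unit-cost check: resources spent versus outputs produced.
spent = 180_000          # resources actually used (currency units), invented figure
trained = 225            # outputs actually produced (inspectors trained), invented figure
planned_unit_cost = 720  # unit cost assumed in the plan, invented figure

unit_cost = spent / trained
print(f"Actual unit cost: {unit_cost:,.0f} per inspector trained")
print(f"Planned unit cost: {planned_unit_cost:,.0f}")

ratio = unit_cost / planned_unit_cost
if ratio <= 1:
    print(f"Delivery cost is {1 - ratio:.0%} below the planned unit cost")
else:
    print(f"Unit cost exceeds the plan by {ratio - 1:.0%}")
```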

Effectiveness describes the extent to which an objective has been achieved. In
other words, it measures the level of achievement of an objective (or expected
result) pursued by a programme or project and of the effects (outputs and
outcomes) obtained. This is the focus of evaluation during the programme
review, and most often during the programme evaluation.

Impacts are the effects produced on the population and the environment by
the pursuit and achievement of an objective. The action involved in the
pursuit of an objective can change a situation in both predictable and
unpredictable ways. Sustainability is the extent to which the benefits
delivered and changes brought about by a programme or project continue
after its completion. Programme evaluation, and to a lesser extent project
review, focus on impact and sustainability.

References

[1] DFID, Tools for Development: A handbook for those engaged in
development activity. London: Department for International Development,
2002.

[2] European Commission, Manual Project Cycle Management. Brussels:
European Commission (Evaluation Unit of the EuropeAid Co-operation
Office), 2001.

[3] Jallade, L.; Radi, M.; Cuenin, S., National education policies and
programmes and international co-operation: What role for UNESCO?
(Education policies and strategies, ED-2001/WS/5). Paris: UNESCO, 2001.

[4] UNESCO, National Education Sector Development Plan: A result-based
planning handbook. Paris: UNESCO, 2006.
