Sphere for Monitoring and Evaluation
The Sphere unpacked guides
The Sphere unpacked series discusses the use of the Sphere standards in specific situations.
Sphere for Monitoring and Evaluation together with Sphere for Assessments explains how to
integrate key elements of Sphere's people-centred approach into the humanitarian programme cycle.
These guides indicate the relevant parts of the Sphere Handbook at different moments of the response
process and should therefore be used together with the Handbook.
Both Sphere unpacked guides are compatible in spirit with the Inter-Agency Standing Committee
(IASC) Humanitarian Programme Cycle guidance. They are particularly relevant for the IASC's needs
assessment and analysis, implementation and monitoring, and operational review and evaluation.
This guidance assumes a basic level of understanding of both monitoring and evaluation processes, and
access to the Sphere Handbook. It is intended to complement rather than replace agency-specific and
sector-specific monitoring and evaluation guidance and to promote an understanding of the added
value that Sphere can bring to programme implementation.
Author
Ben Mountfield
Acknowledgements
Daniel Arteaga, Francesca Bonino, John Borton, Scott Chaplowe, Hana Crowe, Astrid de Valon,
David Goetghebuer, Richard Garfield, Scott Green, Saul Guerrero, Maria Kiani, Tzvetomira Laub,
David Loquercio, Albert Maipisi, Warner Passanisi, Minja Peuschel, Nicolas Rost, Fiamma Rupp,
Elias Sagmeister, Claudia Schneider, Dina Szsz, William Wallis, Alexandra Warner, Cathy
Watson, Andy Wheatley, Gavin Wood, Kelly Wooster.
Terminology
This guidance uses the term results monitoring to cover results at all levels: outputs, outcomes and
even impact. Evaluations are often concerned with results as well, specifically at the levels of outcome
and impact.
Although the word indicator is used in a variety of ways, there is a useful distinction to be made
between the metric (the thing we actually measure) and a performance target, objective or ambition.
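To make the distinction concrete, the following minimal sketch (in Python, with illustrative names and values; the 15-litre figure echoes the widely cited Sphere water-supply guidance) shows an indicator record that keeps the metric separate from the target:

from dataclasses import dataclass

@dataclass
class Indicator:
    # The metric is the thing we actually measure; the target is the
    # performance ambition set against it.
    metric: str              # e.g. "average water available per person per day"
    unit: str                # unit in which the metric is expressed
    target: float            # performance target, objective or ambition
    handbook_reference: str  # where in the Sphere Handbook the indicator sits

water_supply = Indicator(
    metric="average water available per person per day",
    unit="litres/person/day",
    target=15.0,
    handbook_reference="Water supply standard 1",
)

Keeping the two apart means the metric can stay stable while the target is contextualised, as later sections of this guide discuss.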
Sphere for Monitoring and Evaluation. Published by the Sphere Project in Geneva, February 2015. SphereProject.org
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Sphere for Monitoring and Evaluation will be relevant and useful for the following groups and
individuals:
Needs assessment teams: Selecting indicators for needs assessment that are compliant with Sphere, that may work across agencies operating in the same sector and that will remain relevant throughout the rest of the programme cycle.
People responsible for programme design: Selecting robust, high-value indicators that cover all aspects of programme implementation and results, and relating them to the Sphere standards.
Programme managers: Ensuring that programmes properly contextualise the Sphere standards and that progress towards meeting them can be effectively measured in all areas.
People commissioning an evaluation: Understanding how Sphere can be used in designing an evaluation process to provide an appropriate benchmark for assessing the quality of humanitarian assistance. Considering how this can be done in situations where Sphere was not explicitly referenced in the project design and reporting.
People running and working in programmes being evaluated: Understanding the expectations related to the Sphere Standards and how they can be applied to programming. Maintaining a flexible approach to programme design and implementation; ensuring that good records are kept of decision-making processes and that the monitoring framework is sufficient. Being ready to support and make time for evaluation and other reflective practices.
People undertaking the evaluation: Understanding the varied ways in which Sphere Standards can be used to inform evaluation processes, and the value of using a globally recognised set of benchmarks. Appreciating the linkages between the Sphere Standards and the DAC criteria.
People and groups working on lessons learning: Building on strong and justified M&E information which can be used for global and joint lessons-learning processes in the sector.
The Sphere Handbook, Humanitarian Charter and Minimum Standards in Humanitarian Response,
explains and lists what needs to be in place in four life-saving sectors so that a population affected by
disaster or conflict can survive and recover with dignity. Because the way to achieve standards and
indicators varies according to context, the Sphere Handbook provides guidance on globally applicable
aspects of humanitarian aid.
Cross-cutting themes
Core Standards and minimum standards: These are qualitative in nature and specify the minimum
levels to be attained in humanitarian response across four technical areas. They always need to be
understood within the context of the emergency.
Key Actions: These are suggested activities and inputs to help meet the standards.
Key indicators: These are signals that show whether a standard has been attained. They provide a way
of measuring and communicating the processes and results of Key Actions. The key indicators relate
directly to the minimum standard, not to the Key Action.
If the required key indicators and actions cannot be met, the resulting adverse implications for the
affected population should be appraised and appropriate mitigating actions taken.
The key indicators are a mixture of qualitative and quantitative statements that describe a performance
target. A group of these together outline the expectations to be met to achieve each Core Standard and
each minimum standard. In many cases, the specific metric the aspect to be measured is only
implied in the Handbook, although some are described in detail in the Appendices.
Guidance notes: These include specific points to consider when applying the minimum standards,
Key Actions and key indicators in different situations. They provide guidance on tackling practical
difficulties, benchmarks or advice on priority issues. They may also include critical issues relating to
the standards, actions or indicators and describe dilemmas, controversies or gaps in current knowledge.
The Humanitarian Charter provides an alternative, unique and globally recognised framework for the
evaluation of humanitarian action.
It is possible that humanitarian actions which aim to improve one aspect of the lives of people
affected by a disaster can worsen another aspect. To minimise this, all humanitarian agencies should
be guided by the Protection Principles, even if they do not have a specific protection mandate or
capacity.
Ensure people's access to impartial assistance, in proportion to need and without discrimination;
Protect people from physical and psychological harm arising from violence and coercion;
Assist people to claim their rights, access available remedies and recover from the effects of abuse.
Monitoring the degree to which these principles are applied is difficult, and participatory approaches
can be helpful to achieve this. If issues are identified, they can often be addressed by adapting the
programme approach. See page 18 for more on Sphere's participatory approach to monitoring.
Agencies working in camps in Port-au-Prince quickly discovered that protection concerns cut across
technical sectors. It was difficult to find locations within crowded camps to place latrines which
needed vehicle access several times a day. But latrines placed at the edge of the camp in the dark were
a real concern, especially for women. Various approaches, including providing lighting, redesigning
the camp layout and alternative systems (Peepoo bags), were tested to reduce this risk.
1. The seven criteria proposed by the Development Assistance Committee (DAC) of the Organisation for Economic Co-operation and Development are discussed in detail at the end of this guide and in Appendix 4.
2. For example, see www.humanitarianresponse.info/applications/ir/indicators and use the tools to show indicators related to Protection; see also the guidance on protection mainstreaming on the Global Protection Cluster website. ALNAP will publish a scoping paper on protection-specific challenges in humanitarian evaluation, and additional guidance should follow in 2015.
The Core Standards, and Core Standard 5
There are six Core Standards, which are essential standards that are shared by all sectors. They
provide a single reference point for approaches and mostly relate to agency processes. An evaluation
process could examine performance against any (or all) of the Core Standards. Core Standard 5:
performance, transparency and learning is explicitly associated with the functions of evaluation and
monitoring and their role in supporting transparency and improving the quality of responses. This
Standard is explored in more detail below, starting on page 17, and the eight associated Key Actions
provide the structure for the middle section of this guidance.
The Sphere Handbook is explicit about the importance of considering cross-cutting themes
throughout the programme cycle, and the evaluation process should include these aspects as
appropriate to the context. In particular, evaluation of humanitarian action should consider the
gender-specific aspects of design, implementation and outcomes. This process is far easier if
assessment and monitoring data has been disaggregated for age and gender from the start.3
3. Mazurana, D., Benelli, P., Gupta, H. and Walker, P. (2011), Sex and Age Matter. Tufts University, USA.
What do we mean by monitoring and evaluation?
Monitoring
Monitoring compares intentions with results. It measures progress against project objectives and the
influence of the programme on people and the context as well as tracking the systems and processes of
the implementing agency. Monitoring information guides project revisions, verifies targeting criteria
and confirms that aid is reaching the people intended. It should be disaggregated for different groups:
women, men, boys and girls and other groupings as appropriate. It enables decision-makers to respond
to community feedback and identify emerging problems and trends.
Monitoring has a range of purposes, but the critical one is this: better outcomes for disaster-affected
populations. This means that management processes should be explicitly designed to consider and
respond to monitoring data.
This guide considers three different areas in which humanitarian action is monitored: the context in
which it takes place, the activities and processes undertaken and the results that these activities have
on the disaster-affected population.
The diagram below places a chain of processes and events running from left to right across the centre
and organises these three broad types of monitoring around it.
Evaluation
There are many variations of evaluation, but the ALNAP definition brings them together under two
broad purposes: accountability and learning. Some evaluations try to combine the two, while
others focus on one or the other.
Evaluations can be internal or external, but they always seek to be systematic, objective and credible.
They can explore the project design, its relevance, the implementation of activities, internal and
external relationships and coordination, the projects outputs, outcomes and impact, or some
combination of these areas.
The scope and methodology of an evaluation is normally agreed in advance and set out in the terms of
reference (TOR). The TOR usually set out a number of research questions which can be refined by
the evaluators through an inception report. The evaluation then seeks to answer these questions on
the basis of the evidence that emerges during the evaluation.6 Often these questions will be further
broken down into a number of sub-questions.
In humanitarian action, evaluations can take place at various times. The most common are:
Real-Time Evaluation: An evaluation undertaken soon after the operation begins which aims to
provide feedback to operational managers in real time and to ensure that the operation is on track.
Mid-Term Evaluation: An evaluation process that takes place around the middle of the planned
operational period. Mid-term evaluations tend to be used in larger or longer responses.
Final Evaluation: A final evaluation takes place at the end of the implementation period or after
the operation has closed. These evaluations are often used to capture learning and identify gap
areas that can inform future programming and evaluations.
Every humanitarian evaluation is different and no single diagram or process map can successfully
describe all of them. The diagram below represents a fairly typical process for an external evaluation
towards the end of a humanitarian action.7 It is not intended to be prescriptive or universal: a
participatory evaluation, for example, would follow a very different path.
4. OECD/DAC (2002): Organisation for Economic Co-operation and Development, Development Assistance Committee.
5. ALNAP (2013), Evaluation of Humanitarian Action, Pilot Guide, London, UK.
6. For example, see ALNAP (2014), Insufficient Evidence? London, UK.
7. For a more detailed consideration of evaluation processes, see http://betterevaluation.org/plan
Table 1: Using the Sphere Handbook at different points in a typical evaluation process
Identify the need for an evaluation; clarify the main purpose; identify stakeholders. Sphere: Consider the use of the Core Standards or the Protection Principles as the guiding framework for the evaluation.
Terms of Reference: Outline key questions and preferred methodology. Sphere: Use Sphere Core Standards and Minimum Standards as an explicit reference point against which to set key questions.
Refine key questions and scope. Sphere: Use Key Actions and Guidance notes to inform the development of sub-questions.
Inception report: Describe and justify methodology; outline sub-questions; propose report structure. Sphere: Use Key Actions and Guidance notes to inform the development of sub-questions and data collection tools.
Draft report: Present draft report. Sphere: Reference Sphere Standards in the framing of findings.
Final report: Present final report: observations, findings and recommendations. Sphere: Use Sphere Standards to frame and ground the recommendations, where appropriate.
Monitoring is usually continuous (or at least periodic and frequent) and internal, and is as much
concerned with activities and their immediate results as it is with systems and processes. Evaluation
tends to be an episodic, and often external, assessment of performance and can look at the whole of
the results chain, from inputs to sustainability.
Having said that, there are areas in which monitoring and evaluation overlap, in particular during
programme design and implementation. The set targets and monitored progress will later be evaluated
(see the chapter on Evaluation).
Some indicators can be used all the way through the programme cycle, in monitoring and in
evaluation: evaluation often builds on monitoring and uses monitoring data and reports as source
material.
Different actors working within the same operation can agree on adopting common indicators.
So even though evaluation is often an external process happening towards the end of an operation, it
needs to be planned for from the beginning of the response process.
The processes of monitoring and evaluation are much easier if the foundations have been laid during
the needs assessment phase and during programme design. Evaluation processes can be facilitated if
they can be built on a solid monitoring basis and evaluations should be built into project design from
the beginning so as to contribute effectively to learning and accountability.
The Sphere Handbook, while not mentioning this approach explicitly, is entirely consistent with it:
Core Standard 5 covers performance, transparency and learning.
In addition to the guidance that relates specifically to technical sectors, Sphere provides valuable
benchmarks for the whole programme cycle, and these can be especially valuable in situations where
the agency does not have internal targets or standard operating procedures.
Sphere also adds value through its emphasis on a rights-based and participatory approach. As an
articulation of humanitarian principles in practice and as part of efforts to improve quality and
accountability, the approach described in the Handbook should be incorporated as much as possible
throughout the humanitarian programme cycle.
Sphere provides two quite distinct types of guidance on monitoring and evaluating humanitarian
action:
Internal aspects and processes, such as programming processes, systems, capacities and performance; and
External aspects, such as the degree to which technical humanitarian standards are met.
Figure 4: Two applications of the Sphere Handbook to monitoring and evaluation processes
This guidance does not set out an approved list of indicators for each technical sector although work
is underway within the global clusters to achieve this aim.8 Rather, it aims to support the effective use
of the Sphere Handbook in selecting indicators and designing monitoring systems for humanitarian
response generally. Likewise, agencies often have their own tools and formats for monitoring, and
therefore this document does not attempt to suggest a standardised version; some guidance for
tracking indicators and targets is included in Appendix 3: The indicator tracking table.
8. See www.humanitarianresponse.info/applications/ir
Sphere and evaluation
Besides general and agency-specific guidance to frame a humanitarian evaluation, the Sphere
Handbook is a key resource, especially for projects that seek to demonstrate an adherence to the
Sphere Minimum Standards. The Handbook provides specific guidance on evaluation and many
benchmarks against which evaluation can take place.
The adaptability of the Sphere indicators means that they are useful regardless of the given evaluation
methodology. Sphere for Evaluation is not a guide on how to carry out an evaluation, but on how to
incorporate the Sphere standards and indicators into the methodology used by your organisation.
Accordingly, this guidance does not make any recommendation about specific evaluation
methodology.
Using indicators that are based on Sphere brings advantages: the minimum standards are globally
agreed and using standardised indicators improves comparability between projects.
Social norms and people's expectations vary from one location to another and each emergency has
unique constraints, consequences and opportunities. Therefore, understanding the context of any
emergency intervention is critical to its success. The context itself must be monitored and programme
assumptions that relate to the context should be reviewed on a regular basis.
The Sphere minimum standards are designed to apply in any environment. The key indicators, both
qualitative and quantitative, also apply in all situations, but may need to be considered in light of the
local context.
Depending on the situation, agencies may choose to set a target value for a specific quantitative
indicator above or below the level suggested by Sphere. This may be appropriate where:
The organisation has a sound understanding of the context before and after a disaster and has
analysed the impact of the context on the capacities and vulnerabilities of the affected population.
Adapting the Sphere indicator would help bring the affected community back to their normal way
of living and promote life with dignity.
Such adaptation would normally have been agreed prior to the disaster on the basis of the context and
norms of the area. The adapted target is informed by the Sphere minimum standard, but has been
revised on the basis of the context, considering the political, economic, social, technological, legal
and environmental background.
Adapting indicators must be done with consideration and care, taking the Key Actions and Guidance
notes into account and maintaining the spirit of the minimum standard. The indicators were
developed to mark the moment in which an affected population can survive in stable and dignified
conditions. Where an agency or cluster sets an adapted target in this way, this should be clearly
explained and justified. Efforts must be made to work towards meeting the indicators and to mitigate
any negative effects on the affected population.
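One way to honour this requirement in practice is to record any adapted target alongside the original Sphere level and the reasoning behind the change. A minimal sketch in Python, with hypothetical field names:

from dataclasses import dataclass
from typing import Optional

@dataclass
class AdaptedIndicator:
    metric: str
    sphere_target: float                     # level suggested by the Handbook
    adapted_target: Optional[float] = None   # context-specific target, if any
    justification: str = ""                  # required whenever a target is adapted

    def effective_target(self) -> float:
        # Use the adapted target only when it has been explained and justified.
        if self.adapted_target is not None:
            if not self.justification:
                raise ValueError("An adapted target must be explained and justified.")
            return self.adapted_target
        return self.sphere_target

# Invented example: target adapted below the suggested level because the
# host community norm is lower and higher provision risked tension.
water = AdaptedIndicator(
    metric="litres of water per person per day",
    sphere_target=15.0,
    adapted_target=12.5,
    justification="Host community average is below 13 l/p/d; see context analysis.",
)
print(water.effective_target())  # 12.5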
Collecting accurate baseline and reference information is a critical part of the needs assessment
process. Without such information, it is extremely difficult to monitor results. See Sphere for
Assessments for more information on selecting indicators and collecting baseline information.
When tracking the changes in an indicator, you can compare it to the normal value for the indicator
(known as the reference value) and you can also compare it to the situation immediately after the
disaster and before the intervention: the baseline value. Sometimes the reference value varies through
the seasons and a good understanding of this kind of variation is an important part of the context
analysis (see also Appendix 2: Seasonality, baselines and reference values for seasonal indicators).
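A minimal sketch of how these three values can sit together in a monitoring dataset (all figures are invented; the point is the comparison, not the numbers):

# Seasonal reference values: the 'normal' level for each month,
# e.g. percentage of household food needs met from own resources.
reference_by_month = {"Jan": 70, "Feb": 68, "Mar": 65, "Apr": 60}

baseline = 35   # measured immediately after the disaster, before the intervention
current = 55    # latest monitoring round, collected in March

change_since_baseline = current - baseline              # what the response has achieved
gap_to_reference = reference_by_month["Mar"] - current  # distance from the seasonal normal

print(f"Improvement since baseline: {change_since_baseline} percentage points")
print(f"Remaining gap to seasonal reference: {gap_to_reference} percentage points")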
Two specific contexts are worth highlighting (See Sphere Handbook: What is Sphere, p9):
9. Adapted from the video Humanitarian Standards in Context: Training Notes, The Sphere Project, 2013.
When the host population's living conditions are below the Sphere minimum standards, meeting
the standards would provide the displaced with a higher level than the host community and this
could cause tension between the two groups. In this situation, the agency may choose to adapt the
target value to a slightly lower level, in accordance with the protection principles, to reduce this
risk. It may also be appropriate to provide some support to the host community. Any adaptation of
targets should be clearly explained and justified.
When the needs far outweigh the resources available to meet the Sphere indicators, it may be
better at the outset to provide everybody with a basic level of assistance rather than fully meeting
the indicators for a small proportion of the affected population. At the same time, efforts should be
made to advocate, identify new partners, raise additional funds and increase the level of provision
accordingly.
We should never avoid making an effort to help, even when resources are inadequate. The risk of not
meeting the indicators is far less important than the risk of doing nothing.
See WASH Cluster Somalia (2012), Guide to WASH Cluster Strategy and Standards, also known as
the Strategic Operational Framework (SOF).
Monitoring and evaluation guidance can be found in various handbooks and guidelines. Of those, the four
Sphere Companion Standards are of particular relevance here, as they were developed in a Sphere-like manner
and are structured the same way. They are thus very compatible with the Sphere Handbook and with each other.
Thus this guide also has relevance for the sectors covered by those standards and their guidance can be valuable
for Sphere.
The four Companion Standards handbooks cover essentially two broad areas: children (protection and
education) and livelihoods (livestock management and economic recovery). Some specificities pertaining to each
particular handbook are highlighted here.
Child protection and education are included in the Sphere Handbook as cross-cutting themes and supported
by the Sphere Protection Principles.
The Minimum Standards for Child Protection in Humanitarian Action (CPMS) provide a structure for
agency-specific and inter-agency monitoring of the child protection situation and response on an ongoing
basis. Situation monitoring is addressed in detail in Standard 6: Child Protection Monitoring. Response
monitoring is usually structured around Standards 7-14: Standards to Address Child Protection Needs.
Programme monitoring usually takes place at the agency level. All the standards may contribute to
development of a programme monitoring framework. CPWG.net/minimum-standards
The INEE Minimum Standards for Education (INEE MS) set out global standards on monitoring and
evaluating education programmes and policies that range through all phases of emergency response, from
prevention to long-term development (see INEE Analysis Standards 3 and 4). Key Actions and Guidance notes
address which stakeholders to involve in M&E, education management information systems (EMIS),
monitoring learners, evaluating education response activities, capacity-building through evaluation and
sharing evaluation findings and lessons learned to inform future work. INEEsite.org
Livelihoods: All monitoring and evaluation should take the livelihoods issues of disaster-affected communities
into account as much as possible. Sphere's guidance on livelihoods (essentially in the Food security chapter) is
enhanced by guidance found in the MERS and LEGS handbooks. Both help assess key elements of disaster-affected
communities' livelihoods, which should be a key component of humanitarian response.
The Livestock Emergency Guidelines and Standards (LEGS) provide detailed guidance on monitoring and
evaluating livestock-based emergency responses. Linking with the Sphere Core Standards, LEGS Core
Standard 6 focuses on monitoring, evaluation and livelihoods impact and emphasises the importance of
establishing participatory M&E systems early in the project cycle. Chapter 3 includes references for
participatory methodologies. Each technical chapter of LEGS includes an M&E checklist, divided into
process and impact indicators. The LEGS Project has also developed a short online training tool for
monitoring and evaluating livestock-based emergency interventions. Livestock-Emergency.net
The Minimum Economic Recovery Standards (MERS) Assessment and Analysis Standards guide
users in continuous and ongoing analysis of the market dynamics and livelihoods strategies of affected
populations, for ongoing programme monitoring, evaluation and dissemination of results. They provide
guidance for designing household and market mapping, looking at institutions and governance, power
dynamics, gender and key market infrastructure. Timing guidelines emphasise the importance of seasonal
calendars, labour trends and ongoing assessment updates to respond to rapidly changing environments.
SEEPnetwork.org/MERS
The key indicators for Core Standard 5, with commentary:
Key indicator: Programmes are adapted in response to monitoring and learning information. (See Adapting the project in response to monitoring, page 27.)
Commentary: The primary purpose of monitoring is to maintain and improve the quality of the response. For this to happen effectively, the agency must be monitoring the right things, and it must have a mechanism in place that allows a prompt and appropriate reaction to adverse monitoring findings or new opportunities arising.
Key indicator: Monitoring and evaluation sources include the views of a representative number of people targeted by the response, as well as the host community if different. (See Participatory mechanisms, page 18.)
Commentary: Humanitarian action affects different groups and individuals in different ways. Effective monitoring needs to consider the impacts, intentional and unintentional, positive and negative, on the target population as well as on those not directly targeted, including the host population if appropriate. The data should be disaggregated for age and gender as a minimum and may need to be further broken down depending on the targeting criteria, the type of response and the context.
Key indicator: Accurate, updated, non-confidential progress information is shared with the people targeted by the response and relevant local authorities and other humanitarian agencies on a regular basis. (See Participatory mechanisms, page 18.)
Commentary: Agencies should be transparent with their stakeholders in terms of the processes and the outcomes of their action. Openness and communication about monitoring increases accountability to the affected population. Monitoring carried out by the population itself further enhances transparency and the quality and ownership of the information. Clarity about the intended use and users of the data should determine what is collected and how it is presented.
Key indicator: Performance is regularly monitored in relation to all Sphere Core and relevant technical minimum standards (and related global or agency performance standards) and the main results shared with key stakeholders. (See Monitoring processes and performance, page 20, and Monitoring the results of our interventions, page 24.)
Commentary: Agency performance is not confined to measuring the extent of its programme achievements. It covers the agency's overall function: its progress with respect to aspects such as its relationships with other organisations, adherence to humanitarian good practice, codes and principles, and the efficiency of its management systems.
Key indicator: Agencies consistently conduct an objective evaluation or learning review of a major humanitarian response in accordance with recognised standards of evaluation practice. (See Evaluation, page 31, and Reflection and learning, page 34.)
Commentary: Programme evaluations are typically carried out at the end of a response, while real-time evaluations and learning reviews may be carried out at any time. Evaluation and learning processes lead to changes in policy and practice. Evaluations are carried out by independent staff external to the project implementation team; they may be internal or external to the agency.
Core Standard 5 has eight Key Actions which provide the structure for the sections that follow.
Participatory mechanisms for monitoring and
evaluation
Core Standard 5, Key Action 1: Establish systematic but simple, timely and participatory
mechanisms to monitor progress towards all relevant Sphere standards and the programme's
stated principles, outputs and activities.
This Key Action describes the achievement of Sphere standards as an appropriate target for
humanitarian interventions, and requires tools and procedures to be used in a systematic manner to
monitor the progress towards this goal. The tools should be simple, which means that the data should
be easy to collect and relevant and the monitoring process cost-effective. There is no need to monitor
everything if a small number of critical indicators tell you what you need to know.
Participatory approaches often provide a broader perspective than top-down external approaches and
they can build ownership and empower participants. In particular, participatory approaches will help
to identify the contributions and capacities that affected populations bring to their own recovery.
Some work may need to be done to align the indicators identified through participatory approaches
with Sphere indicators.
Participatory evaluation is a specialised field in itself with its own literature. It is not a very common
practice within evaluations of humanitarian action. But participatory approaches can be adopted
relatively easily and add a valuable perspective and foundation to both the evaluation process and its
findings.10
Several of the Key Actions within Core Standard 1 specifically address issues of two-way communication
with the affected population, which is a key element contributing to accountability.
10. For example, see the ALNAP method note on participatory evaluation: www.alnap.org/resource/19163
There is an overlap here between impact monitoring and evaluation. The third Guidance note under
Core Standard 5 states:
The affected people are the best judges of changes in their lives; hence outcome and impact assessment must
include people's feedback, open-ended listening and other participatory qualitative approaches, as well as
quantitative approaches.
It is also possible to evaluate the quality of participatory processes within the project itself, as described
within Core Standard 1. These could be explored through evaluation questions or sub-questions such
as:
In what ways were the affected population involved in the various phases of the response: in
needs assessment, in setting priorities, in selecting appropriate response mechanisms, in
targeting, in monitoring processes and results?
Did effective and safe feedback mechanisms exist for the affected population? Did the population
use them and, if not, why not? What changes were made to the programme as a result of such
feedback?
Core Standard 5, Key Action 2: Establish basic mechanisms for monitoring the agency's overall
performance with respect to its management and quality control systems.
Process monitoring tells us how well (how effectively, how quickly, how efficiently) we are doing
things. It says nothing about whether those are the right things to be doing or the effects those
activities have on people.
Process monitoring includes all of the actions, systems and processes the agency uses to deliver its
programme, ranging from human resources, communications, accountability processes, data
collection and logistics to distribution and financial systems, among others (see also Core Standard 5
Guidance note 2). The systems an organisation uses will impact on the efficiency and effectiveness of
the outputs, and monitoring these processes provides an opportunity to identify problems and
opportunities early and respond to them.
This approach is often applied to individual agencies or responses, but can also operate at an inter-
agency level. For example, work has been undertaken to look at and improve the efficiency of the
cluster process.11
Implied metrics to be measured: the existence of a report meeting the specifications described, shared appropriately with stakeholders; the number of SCM (supply chain management) staff at each level trained in the appropriate parts of the SCM system; the total number of SCM staff at each level.
11. See IASC (2012), Reference Module for Cluster Coordination at the Country Level.
Distributions and the provision of services
If a programme has a component of distribution, this will require a range of processes ranging from
specification and tendering to contracting, taking delivery, quality control, storage and distribution.
Each of these stages involves numerous additional processes. All of them involve the collection of
management information that serves a range of purposes from supply chain management to audit.
Once the distribution is completed, it is also important to check that the goods actually reached the
household safely and completely, and that they are being used as intended and not, for example,
resold. This requires monitoring at the point of distribution and at the household level.
In addition to physical commodities, humanitarian agencies also provide other services such as health
care, psychosocial support, hygiene promotion and other information. Such activities also need to be
monitored at the point of delivery, and should also be monitored at the community or household level
to explore disaggregated levels of access to services, levels of take-up and the effects of such service
provision on different members of the community and the household.
Each of the technical chapters of the Sphere Handbook makes references to distributions, and they
highlight the wide range of factors that need to be considered when planning distributions. Many of
these considerations will also need to be monitored.
Accountability processes should be monitored. The indicators will vary depending on the mechanisms
being used. For example, if a suggestions box is provided at the project site, the numbers and types of
suggestions, including complaints, can be logged, as well as the agency's responses to them. This
information can then be shared with the community along with other project communication.
(Sphere Handbook, pages 254-256.)
Implied metrics to be measured: the number and type of consultation processes, and the proportion of the affected population able to access such consultations.
The Core Humanitarian Standard refers to accountability. See also the website of the Humanitarian
Accountability Partnership International (HAP) for detailed guidance on compliance issues.12
12. The Core Humanitarian Standard refers to both accountability and compliance issues. See www.corehumanitarianstandard.org and www.hapinternational.org
Communicating project achievements is another important aspect of accountability. This can be done
through reports to community representatives, through the media, through community meetings and
through non-verbal tools which are locally and culturally appropriate such as thermometer or
dashboard signboards showing levels of success against key targets.
Such communication also provides a safe mechanism for people to complain about services and gives
the agency the opportunity to deal with problems as they arise. The process is two-way, with a reply
being sent to the beneficiary once the complaint has been investigated.
Complaints mechanisms
A key aspect of accountability is a complaints mechanism, which must be safe, able to identify priority
issues and act swiftly upon them. The provision of such mechanisms is included within the Key
Actions associated with Core Standard 1. Guidance note 6 as well as Commitment 5 of the Core
Humanitarian Standard address complaints directly.13
An evaluation should look at the systems in place for complaints and feedback handling, as well as at
the changes to the programme that have come about as a result. Evaluation questions could ask:
Was an effective, safe and responsive system in place to handle complaints from the affected
population (and not just programme beneficiaries)?
What changes came about as a result of the complaints and feedback received?
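A complaints log that would let an evaluator answer these questions can be very simple. The sketch below uses hypothetical field names to show the essentials: each complaint, the reply that closes the loop, and any programme change that resulted:

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Complaint:
    received: date
    channel: str                          # e.g. suggestions box, helpline, in person
    summary: str
    priority: bool = False                # flagged for swift action?
    response_sent: Optional[date] = None  # date the reply reached the complainant
    programme_change: str = ""            # what, if anything, changed as a result

log = [
    Complaint(date(2015, 2, 3), "suggestions box",
              "Latrine lighting broken in block C", priority=True,
              response_sent=date(2015, 2, 5),
              programme_change="Lighting repaired; patrol route adjusted"),
]

# Checks an evaluator might run over the log:
open_priority = [c for c in log if c.priority and c.response_sent is None]
changes_made = [c.programme_change for c in log if c.programme_change]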
Core Standard 6 (Aid worker performance) states:
Humanitarian agencies provide appropriate management, supervisory and psychosocial support, enabling aid
workers to have the knowledge, skills, behaviour and attitudes to plan and implement an effective
humanitarian response with humanity and respect.
These are management responsibilities that can be monitored, for example with the following
indicators:
13. www.corehumanitarianstandard.org
14. See in particular People In Aid: www.peopleinaid.org and www.corehumanitarianstandard.org
Table 5: Key indicators associated with Core Standard 6: Aid worker performance
Key indicator: Staff and volunteers' performance reviews indicate adequate competency levels in relation to their knowledge, skills, behaviour, attitudes and the responsibilities described in their job descriptions. Implied metrics: frequency of (and triggers for) performance reviews; findings of performance reviews.
Key indicator: Aid workers who breach codes of conduct prohibiting corrupt and abusive behaviour are formally disciplined. Implied metrics: numbers and records of breaches and responses.
Key indicator: The principles, or similar, of the People In Aid Code of Good Practice (see footnote 15) are reflected in the agency's policy and practice. Implied metrics: existence of appropriate and compliant policy documents; no evidence of non-compliance.
Key indicator: The incidence of aid workers' illness, injury and stress-related health issues remains stable or decreases over the course of the disaster response. Implied metrics: frequency of stress-related illness amongst staff, possibly disaggregated by location and role.
This is also important territory for evaluations to consider. Indeed, whole evaluations can focus simply
on this area. More common, though, are evaluation questions such as:
Were the staff (and volunteers) sufficient in number, and properly trained and supported to
deliver the planned response?
15. www.peopleinaid.org/code/
Monitoring the results of our interventions
Core Standard 5, Key Action 3: Monitor the outcomes and, where possible, the early impact of a
humanitarian response on the affected and wider populations.
In order to monitor the results of a project, you must be able to observe a change in a relevant
indicator, and it must be possible to attribute this change to the project activities, in part or in full.
This implies that you must know the initial value of the indicator and that the programme logic is
sufficiently robust for you to be confident that the change observed has been caused, to some degree,
by the programme intervention. It also requires that you can have confidence in the quality of the data
you have collected.
Note that it may not be appropriate to try to measure the impact of an intervention in the early stages
of a humanitarian response, especially in sudden-onset emergencies. In other situations, it may be
appropriate. Efforts should always be made to measure outcomes, however.
One important aspect of monitoring results is to monitor the levels of satisfaction amongst the target
population, partner organisations and other stakeholders. This provides important additional
perspectives rather than seeing everything from the viewpoint of the project implementers. This aspect
can be linked with other accountability processes.
The qualitative Sphere minimum standards often include some quantitative guidance or targets within
the Guidance notes or within the appendices to each Handbook chapter. Indicators of results can be
expressed in qualitative or quantitative terms.
Minimum standard: Water supply standard 3: Water facilities. People have adequate facilities to collect, store and use sufficient quantities of water for drinking, cooking and personal hygiene and to ensure that drinking water remains safe until it is consumed.
Key indicator: Water collection and storage containers have narrow necks and/or covers for buckets, or other safe means of storage for safe drawing and handling, and are demonstrably used (see Guidance note 1).
Implied metrics to be measured: type and design of water containers at the household level; method of water storage; use of water containers and other storage systems.
Minimum standard: Management of acute malnutrition and micronutrient deficiencies standard 1: Moderate acute malnutrition. Moderate acute malnutrition is addressed.
Key indicator: More than 90 per cent of the target population is within less than one day's return walk (including time for treatment) of the programme site for dry ration supplementary feeding programmes, and no more than one hour's walk for on-site supplementary feeding programmes (see Guidance note 2).
Implied metrics to be measured: distance from the target population's homes to feeding centres; proportion of the target population below the target distance.
A further example of a key indicator: There is adequate access to a range of foods including a staple (cereal or tuber), pulses (or animal products) and fat sources that together meet nutritional requirements (see Guidance notes 2-3, 5).
Unintended effects
The Humanitarian Charter is explicit that humanitarian actions may have complex consequences and
that some of these will be unintended, adverse, or both. Clause 9 of the Charter states:
We are aware that attempts to provide humanitarian assistance may sometimes have unintended adverse
effects. In collaboration with affected communities and authorities, we aim to minimise any negative effects of
humanitarian action on the local community or on the environment.
Similarly, Protection Principle 1 is about avoiding exposing people to further harm as a result of your
actions.
Unintended results can be positive or negative and affect either beneficiaries or non-beneficiaries.
Monitoring systems need to consider these possibilities and management systems need to be willing to
recognise and respond to them.
Unintended results
An international medical NGO, as part of a Roll Back Malaria initiative, created a programme to
reduce the incidence of malaria for internally displaced persons in Guinea. Having conducted a needs
assessment, the team prioritised areas most affected by malaria and designed a project that involved
distribution of mosquito nets accompanied by training on the causes of malaria and correct usage of
the nets. The monitoring team, by visiting recipient households, discovered that several families had
used their mosquito nets to make wedding veils and dresses. Even though the family members knew
the causes of malaria and the correct usage of bed nets, they prioritised using the material as clothing
for special occasions.
In Gaza, the use of cash grants as an alternative to food distributions was reported by male and
female beneficiaries alike to have reduced levels of tension in the household and to have
contributed to a reduction in domestic violence. This was not a planned outcome of the programme
and was only discovered in focus groups during the evaluation.
Food distribution targets often rest on two implicit assumptions: that the food provided is being
consumed by the community members it was intended for, and that the community is not
contributing anything to its own food consumption.
For example, if a community has sufficient resources to meet 30% of its food needs according to the
minimum standards and the humanitarian community provides the remaining 70%, then the
minimum standard is likely to be met. The appropriate contribution is for the humanitarian agency to
bridge the gap between the community's own resources and the minimum standard. The Sphere video
Sphere in Context: Bringing Humanitarian Standards to Life shows how parents contribute to a school
feeding programme in the Democratic Republic of the Congo.
Similarly, if 40% of an affected population have their needs for non-food items met by one agency and
60% by another agency, the minimum standards will be achieved.
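The arithmetic in both examples generalises: the appropriate contribution is whatever bridges the gap between the resources already available and the minimum standard. A minimal sketch, with invented figures, where all values are expressed as fractions of the minimum standard (1.0 = fully met):

def remaining_gap(minimum_standard: float, *contributions: float) -> float:
    # Share of the minimum standard still unmet once the community's own
    # resources and other agencies' contributions are counted.
    return max(0.0, minimum_standard - sum(contributions))

# Community meets 30% of its food needs; the response should bridge the remaining 70%.
print(remaining_gap(1.0, 0.30))         # 0.7

# Two agencies covering 40% and 60% of non-food needs leave no gap.
print(remaining_gap(1.0, 0.40, 0.60))   # 0.0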
In monitoring and evaluation processes, questions around the ultimate use of distributed assets and
rations should be asked as a regular practice for all distributions in order to understand what happened
with the goods and if they did actually reach the intended beneficiaries (See for example the Food
transfers standard 6 on Food use).
Core Standard 5, Key Action 4: Establish systematic mechanisms for adapting programme
strategies in response to monitoring data, changing needs and an evolving context.
Key Action 4 directs agencies to establish systematic mechanisms for adapting programme strategies
in response to monitoring data. It is not sufficient to collect information: efforts must be made to
understand it and, where appropriate, respond. It is a waste of resources and a missed opportunity to
collect data if there are no processes or commitment to act upon it.
The timing of data collection and analysis may be critical in understanding changes caused by the
project or by changes in the context and reacting appropriately. For this reason, it is important to
consider the frequency with which each indicator is measured. A monitoring plan and an indicator
tracking table can make this process easier: see Appendix 3 for further details.
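As a sketch of what entries in such a monitoring plan might capture (field names and values are illustrative, not prescribed by Sphere; Appendix 3 describes the indicator tracking table itself):

# Each indicator carries its own measurement frequency so that data
# collection and analysis are timed to support decision-making.
monitoring_plan = [
    {"indicator": "litres of water available per person per day",
     "frequency_days": 7, "method": "household survey sample"},
    {"indicator": "proportion of latrines usable by all groups",
     "frequency_days": 30, "method": "site observation and focus groups"},
]

def due_for_collection(entry: dict, days_since_last: int) -> bool:
    # Flag indicators whose next measurement is overdue.
    return days_since_last >= entry["frequency_days"]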
Indicators will often only suggest that a programme is not delivering as expected. They may not
explain why not. Further research or analysis might be necessary prior to taking a decision.
In addition to monitoring the progress, the relevance of the programme should also be monitored (see
Core Standard 5, Guidance note 4). Changes in context can alter the relevance of an intervention.
Participatory approaches are probably the best way to monitor changes to a programme's relevance.
The project team responded by creating cash-for-work opportunities for men and women. They took
a phased approach that included raising awareness of rights of the whole community and ensuring
that all eligible individuals had appropriate training and support to participate. The team monitored
the project's progress and the acceptance of the community at every stage to ensure that the goals
were reached in a culturally appropriate manner.
See this example in the video Sphere in context: Bringing humanitarian standards to life
Security and risks: A well-designed programme is based on a solid understanding of the context. It
includes a robust risk analysis and assumptions that have been made accordingly. This risk analysis
provides a good starting point for ongoing monitoring of the context.
Coping: People affected by a disaster find ways to cope with the changed situation. Some coping
strategies have negative consequences. Monitoring people's coping strategies can provide valuable
information about changes to the context as well as the outcomes of your intervention.16
Markets: All humanitarian activities providing cash, goods or services will have an impact on local
market systems. While these impacts will normally be positive for the target group of the
intervention, they may have less positive impacts for other actors such as food producers or traders.
The impact of humanitarian interventions on market systems and prices should be monitored and
agencies must be willing to change approaches in order to minimise these negative impacts.17
Core Standard 4: Design and response. The humanitarian response meets the assessed needs of the disaster-affected population in relation to context, the risks faced and the capacity of the affected people and state to cope and recover.
Key indicator: Programme designs are revised to reflect changes in the context, risks and people's needs and capacities.
Implied metrics to be measured: critical aspects of context are monitored at an appropriate frequency; needs, capacities and coping strategies are monitored at an appropriate frequency; changes in programme design and implementation modality are tracked.
Food security, livelihoods standard 2: Income and employment. Where income generation and employment are feasible livelihood strategies, women and men have equal access to appropriate income-earning opportunities.
Key indicator: Responses providing employment opportunities are equally available to women and men and do not negatively affect the local market or negatively impact on normal livelihood activities (see Guidance note 7).
Implied metrics to be measured: proportion of men and women accessing income generation opportunities; changes in commodity prices during the intervention period, compared to norms; impact of the intervention on [other] normal livelihood activities.
16. See Protection Principle 4, page 43, and Food Security and Nutrition, Livelihoods standard 3, page 211, as well as Appendix 1, pages 214-215.
17. See the Emergency Market Mapping and Analysis (EMMA) Toolkit for one approach to market mapping during emergencies: emma-toolkit.org
Changing needs: monitoring cross-cutting themes
Cross-cutting themes in humanitarian action focus on particular areas of concern in disaster response
and address individual, group or general vulnerability issues. The Sphere Handbook identifies eight
such themes, which fall into two broad groupings: Specific needs or considerations and external
factors.
Specific needs or considerations: Children, Gender, People living with HIV and AIDS, Older
people, Persons with disabilities. Depending on the context and the type of intervention, monitoring
data should be disaggregated for these groups. As an absolute minimum, assessment and monitoring
data should be sufficiently detailed to allow disaggregation by age and gender, as outlined in Core
Standard 3.18
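As a minimal illustration of what such disaggregation means for monitoring data (invented records; the age bands follow the health-sector example given in footnote 18, and other sectors may use different breakdowns):

from collections import Counter

records = [
    {"gender": "F", "age_band": "1-4"},
    {"gender": "M", "age_band": "5-14"},
    {"gender": "F", "age_band": "60-69"},
    {"gender": "F", "age_band": "5-14"},
]

# Report reach by gender and age band rather than as a single total.
by_group = Counter((r["gender"], r["age_band"]) for r in records)
for (gender, band), count in sorted(by_group.items()):
    print(f"{gender} {band}: {count}")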
Table 9: Some Sphere key indicators are explicit about recognising differences between groups
Minimum standard: Excreta disposal standard 2: Appropriate and adequate toilet facilities. People have adequate, appropriate and acceptable toilet facilities, sufficiently close to their dwellings to allow rapid, safe and secure access at all times, day and night.
Key indicator: Toilets are appropriately designed, built and located to meet the following requirements [only one shown]: they can be used safely by all sections of the population, including children, older people, pregnant women and persons with disabilities (see Guidance note 1).
Implied metrics to be measured: appropriate design of toilets; disaggregated data on use.
Minimum standard: Non-food items standard 2: Clothing and bedding. The disaster-affected population has sufficient clothing, blankets and bedding to ensure their personal comfort, dignity, health and well-being.
Key indicator: All women, girls, men and boys have at least two full sets of clothing in the correct size that are appropriate to the culture, season and climate (see Guidance notes 1-5).
Implied metrics to be measured: availability and number of sets of appropriate clothing.
Cross-cutting themes relating to external factors: disaster risk reduction including climate change
issues, the environment and psycho-social support. These should be monitored where appropriate to
the context or the programme intervention. Such monitoring is sometimes explicitly described in the
minimum standards, but this is not always the case:
18. The degree of disaggregation by age varies with the context and the nature of the indicator. There is no common set of age breakdowns that applies across all sectors and in all situations. For example (see page 341), for specific health indicators, standard values may include: 0-11 months; 1-4 years; 5-14 years; 15-49 years; 50-59 years; 60-69 years; 70-79 years; 80+ years.
Table 10: Some Sphere key indicators are explicit about cross-cutting themes
Minimum standard: Shelter and settlement standard 5: Environmental impact. Shelter and settlement solutions and the material sourcing and construction techniques used minimise adverse impact on the local natural environment.
Key indicator: The construction processes and sourcing of materials for all shelter solutions demonstrate that adverse impact on the local natural environment has been minimised and/or mitigated (see Guidance note 4).
Implied metrics to be measured: an environmental assessment has been carried out; sources of construction materials; erosion mitigation measures.
Minimum standard: Essential health services, sexual and reproductive health standard 1: Reproductive health. People have access to the priority reproductive health services of the Minimum Initial Service Package (MISP) at the onset of an emergency and comprehensive reproductive health care as the situation stabilises.
Key indicator: All health facilities have trained staff, sufficient supplies and equipment for the clinical management of rape survivor services based on national or WHO protocols.
Implied metrics to be measured: number and distribution of trained staff; availability of supplies and equipment.
Core Standard 5, Key Action 6: Carry out a final evaluation or other form of objective learning
review of the programme, with reference to its stated objectives, principles and agreed
minimum standards.
Ways to use Sphere for process and performance evaluation were presented in an earlier section. Here,
we will reiterate the importance of considering evaluation as part of the entire programme cycle.
The needs assessment itself is a valid target for evaluation. Core Standard 3, for example, covers
areas that could all be appropriate for exploration through evaluation processes.
The quality of the needs assessment could be studied through evaluation questions such as:
To what degree did the needs assessment accurately reflect the situation on the ground and how
was it used to influence decision-making in the early phases of the response?
Needs assessment is the focus of another guide in the Sphere Unpacked series, Sphere for Assessments.
The Sphere standards describe good practice in setting programme activities and targets and in the
design of the monitoring framework. This means that two separate groups of questions, both derived
from Sphere, can be used in evaluation processes:
Did the designed activities themselves meet the Sphere technical minimum standards? Evaluation
questions might focus on the qualitative standards or on the quantitative measures found in some
of the indicators and Guidance notes.
19. See also Sphere for Assessments, www.sphereproject.org
Were Sphere Core Standards met during the processes of analysing potential response options,
design of activities and project delivery? The evaluation can also consider the internal logic of the
response and provide commentary on the quality of the logic model. To evaluate these areas
properly, it is essential that good documentation is kept about decision-making throughout the
project design phase.
What factors were considered in the process of deciding the most appropriate response? How
were the various factors weighted? Which options were discarded and why? What can be learned
from the quality of the response to influence this decision-making process in the future?
Was the risk analysis adequate for the context and the programme? Were the actions put in place
to mitigate risk sufficient?
Finally, an evaluation can look at the way in which the project used the monitoring data and how it
reacted to unexpected results and events. This relates to Core Standard 5.
If the agency has made a general commitment to observe or work towards Sphere Standards, then it is
appropriate to use them in evaluation. This commitment might be in policy documents, in agency
publications, on the agency's website, or in an agreement with a donor.
However, if no such commitment exists, then evaluators can work with the agency to find an
appropriate benchmark to use in the evaluation process. Sphere Minimum Standards and the
companion standards all provide such a benchmark as they are widely accepted within the
humanitarian domain and do not belong to any agency, donor or sector.
If Sphere is used retrospectively in an evaluation, this should be made explicit in the evaluation
report.
While it is quite possible to design an evaluation process around the Sphere Core Standards, it is more
common to use the set of seven criteria developed by the OECD Development Assistance Committee (DAC), themselves referenced in
Core Standard 5. Appendix 4 looks at six of these criteria (the criterion of Coherence applies largely to
the area of policy) and makes linkages between them and the Sphere Standards.
The DAC criteria can be seen to apply differently to different aspects of the results chain. The
following diagram shows the main areas in which the DAC criteria apply, although it is not intended
to be prescriptive.
Although the DAC criteria are commonly used within the evaluation of humanitarian action, they
provide a rather different lens than that used by participatory approaches and that implied by the
Sphere Handbook. That said, there are also strong overlaps.
The technical chapters of Sphere will find their greatest expression within the DAC criteria of
relevance, effectiveness and impact.
The Core Standards and Protection Principles find expression throughout the DAC criteria.
Core Standard 5, Key Action 5: Conduct periodic reflection and learning exercises throughout
the implementation of the response.
Sphere indicators can provide a useful framework for some of these reflection sessions. For example,
organisations may use the Sphere Core Standards to monitor and/or evaluate their own performance,
identifying appropriate key indicators to do so. These could be used in a self-assessment exercise, or
participatory approaches could be used and key informants identified to evaluate the organisation's
performance. In each case, the reflection process would lead to an action plan.
Opportunities for reflection and learning should be built into programme design. Time spent in self-
assessment and reflection is rarely wasted!
Reflective practices
Core Standard 5 calls upon humanitarian agencies and practitioners to adopt reflective practices and
seek to improve the quality of their responses. The term reflective practice describes a range of
activities designed to support continuous learning and that can be used in humanitarian activities as a
real-time check on the quality and relevance of the response.
While external evaluations are one example of reflective practices, in most cases they take place after
the activities are finished and mainly seek to influence future responses.
Other reflective practices exist, however, and humanitarian agencies can and should make an active
effort to learn, develop and improve practices even at the height of a humanitarian operation. Core
Standard 5 outlines a number of such practices: participatory impact assessments, listening exercises,
use of quality assurance tools, audits and internal learning and reflection exercises. Others are implied
within Core Standard 1, which explores participation. Core Standard 5, Key Actions 7 and 8 propose
to participate in joint, inter-agency and other collaborative learning initiatives wherever feasible and
to share key monitoring findings and, where appropriate, the findings of evaluation and other key
learning processes with the affected population, relevant authorities and coordination groups in a
timely manner.
This suggests an evaluation question such as: What actions were taken during the assessment, design and response phases to ensure that
opportunities were created for reflection and learning?
Because each intervention is challenging and different, meeting Sphere standards is not necessarily
synonymous with reaching all the related indicators. You will conform to Sphere when you meet
adapted indicators or when you work towards Sphere indicators while at the same time explaining the
gap (see also page 14: Placing Sphere in Context).
Sphere provides a yardstick against which to measure performance and outcomes as part of a broader
toolkit for performance accountability and learning and as a means to strengthen quality. We must be
both thoughtful and ambitious in applying Sphere.
By monitoring, evaluating and learning from the results, you are conforming with Sphere. The key is
to understand and act upon response gaps. It is this last point that constitutes active learning.
It is not helpful or cost-effective to try to measure every aspect of programme implementation and
impact. Collecting too much data can pull resources away from the project, overload the community
and the staff and make it harder to find the critical information. However, selecting the best indicators
can be a challenge. The following two-step process may help:
Step 1: Produce a long list of indicators based on the following criteria:
- Standard indicators for the Cluster, where these exist
- Standard indicators of the agency, where these exist
- Expectations of consortium members, partners, stakeholders and donors
- Context analysis, including the protection context and scenario planning
- Resources available (which will influence the type and number of monitoring tools used).

Step 2: Reduce this to the minimum list needed to answer the following questions:
- Are the needs of people being met?
- Are Sphere minimum standards being met?
- Are these indicators easy and cost-effective to collect? Do they avoid duplication?
- Will the results of this data collection be robust and free from bias?
- Can we effectively report on processes and results?
- Will we know in a timely manner if the programme is off-track?
- Will the selected indicators tell us about programme-critical changes in context, as identified in our risks and assumptions?
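The narrowing logic of Step 2 can be imagined as a simple filter over the long list. The sketch below is purely illustrative: the Candidate fields and the shortlist rule are assumptions standing in for an agency's own selection criteria, not part of Sphere.

    # A minimal sketch of the two-step narrowing process, assuming a simple
    # in-memory list of candidate indicators. All names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        answers_needs_question: bool   # Are the needs of people being met?
        tracks_sphere_standard: bool   # Are Sphere minimum standards being met?
        cost_effective: bool           # Easy and cost-effective to collect?
        duplicates_other: bool         # Already covered by another indicator?

    def shortlist(candidates):
        # Step 2: keep only indicators that earn their place on the minimum list.
        return [c for c in candidates
                if (c.answers_needs_question or c.tracks_sphere_standard)
                and c.cost_effective
                and not c.duplicates_other]

    long_list = [
        Candidate("% households with >= 15 litres water/person/day", True, True, True, False),
        Candidate("Number of hygiene kits held in warehouse", False, False, True, True),
    ]
    print([c.name for c in shortlist(long_list)])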
Good practice suggests that in most situations, a mixture of qualitative and quantitative indicators
provides the best understanding of the situation.
Participatory approaches may help you to identify the most valuable and informative indicators to use
to track the progress of the project. Participatory approaches tend to require higher levels of resourcing
and can take more time.
A well-selected indicator can be indicative of the wider situation. It can provide a warning flag that
something is going wrong and it can also provide the confidence that things are going to plan.
Cluster-wide agreed indicators help to improve quality, coherence and coordination within the sector
(see the IASC Indicator Registry: www.humanitarianresponse.info/applications/ir). In some cases, these
indicators are already linked to Sphere, and there is considerable overlap even where the linkages are
not made explicit.
Where agencies struggle to agree common indicators in the field, the Sphere Handbook provides a
common framework to begin this discussion.
As a result of effective coordination on issues like common indicators, it is possible to provide cluster-
wide reporting or reporting across common approaches such as cash transfers. Such coordination
makes demands on resources, so like monitoring processes themselves it must be included within
programme budgets and justified in terms of the expected outputs.
Agencies can also work together to meet Sphere minimum standards either by splitting up the
affected population and working in different areas or by splitting the intervention up into
complementary activities and sharing those out.
Even well selected indicators may not tell you why things are not working out as expected. However,
they can provide the trigger needed for further investigation.
Some indicators are fairly stable over time and others can vary quite dramatically. Sometimes, the
variation is seasonal.
For example, the incidence of malaria or diarrhoea can change in rainy and dry seasons. The prices of
foodstuffs and crops are often highest just before the harvest time. If you plan to measure indicators
like these in an emergency response situation, it is important to consider the normal seasonal variation.
Figure 6: Reference and baseline values of an indicator that changes with the seasons
[Figure: indicator value plotted over time, showing the seasonal average, the reference range, the shock and the post-shock baseline value.]
In the simplified example above, the value of an indicator varies in a regular manner every year. In
response to an external shock, it drops to a new low. This is the baseline value, and it will be measured
during the needs assessment process. Improvements as a result of humanitarian intervention can be
measured against this baseline. In this case, the intervention was successful and the indicator returns to
its normal pattern after a year.
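As a rough illustration of how the reference and baseline values in Figure 6 might be derived in practice, the sketch below assumes a hypothetical monthly series for three normal years; the figures and the shock month are invented for the example.

    # A minimal sketch: derive a seasonal reference (the month-by-month average
    # over normal years) and compare a post-shock baseline measurement to it.
    from statistics import mean

    # Hypothetical monthly values for three normal years (e.g. a staple food price).
    normal_years = [
        [100, 102, 105, 110, 118, 125, 122, 95, 90, 92, 95, 98],
        [101, 103, 106, 112, 120, 127, 124, 96, 91, 93, 96, 99],
        [ 99, 101, 104, 109, 117, 124, 121, 94, 89, 91, 94, 97],
    ]

    # Seasonal reference: the expected value for each month in a normal year.
    reference = [mean(year[m] for year in normal_years) for m in range(12)]

    shock_month = 4    # the shock hits in May (0-indexed), in this invented example
    baseline = 160     # value measured during the needs assessment, after the shock

    gap = baseline - reference[shock_month]
    print(f"Seasonal reference for this month: {reference[shock_month]:.1f}")
    print(f"Baseline after shock: {baseline} (gap of {gap:.1f} against the seasonal norm)")

Comparing the baseline against the seasonal reference, rather than against a single annual average, avoids mistaking normal seasonal movement for recovery or deterioration.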
The indicator tracking table provides a simple but thorough means to track the changes in the values
of important indicators through the life of the programme.
The programme will set performance targets which may take the form of qualitative statements (like
the Sphere minimum standards), quantitative targets or a combination of these. Following and
interpreting changes in these indicators over time can be a challenge. Using a tracking table can
provide structure to the task, make monitoring and reporting more transparent and support the
process of making decisions on the basis of monitoring data.
For any one indicator, the following information may be collected or calculated:
- The reference (or normal) value of the indicator, with a source; a note on the range of the indicator may be appropriate if it varies seasonally
- The baseline value (after the shock and before the intervention), with a date
- The target value for the end of the intervention (with a reference to the Sphere minimum standards where appropriate)
- The target value for the end of each period (daily, weekly, monthly or quarterly) for the duration of the intervention
- The actual value of the indicator at the end of each period (or the number achieved during that period)
- The actual value as a percentage of the target value for that period
Wherever the indicator is a number of people, the values should be disaggregated by age and gender
as a minimum.
The indicators can be clustered within the table to reflect the tools by which the data is collected or
the components of the programme, or to separate context, process and results monitoring.
It is worth investing some time in getting the format right at the start of the programme. This makes
subsequent recording, analysis, reporting and decision-making much easier.
Indicator tracking tables will vary between agencies, contexts and sectors. They are usually created in a
spreadsheet and can contain many columns, especially where disaggregated data is appropriate. An
example is provided below.
Indicator | Reference value | Source | Baseline value | Date | Target value | Sphere Standard

Then each indicator can be tracked over time using a structure similar to this:

              Period 1   Period 2   Period 3   Period 4   Period 5
Target value:     8          9         10         10         10
Actual value:     7          8          9         10         10
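A minimal sketch of how the period tracking shown above can be computed, using the same target and actual values; the output format is illustrative only, standing in for whatever spreadsheet or reporting layout the agency uses.

    # Compute the actual value as a percentage of the target for each period
    # and flag periods that fall below target.
    targets = [8, 9, 10, 10, 10]   # target value per period
    actuals = [7, 8, 9, 10, 10]    # actual value per period

    for period, (target, actual) in enumerate(zip(targets, actuals), start=1):
        pct = 100 * actual / target
        flag = "" if pct >= 100 else "  <- below target"
        print(f"Period {period}: target {target}, actual {actual}, {pct:.0f}% of target{flag}")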
Core Standard 1
Core Standard 1 is explicitly concerned with ensuring the appropriateness of humanitarian response
from the perspective of the affected population. This could be translated into evaluation questions
such as:
To what degree did the activities undertaken meet the needs and expectations of the affected
population? To what degree were community aspirations actually canvassed?
To what degree was disaggregated assessment data available and to what degree did such data
enable the design of responses?
Did project beneficiaries and non-beneficiaries have access to a safe and impartial complaints
mechanism?
Technical standards
The technical standards also speak strongly to the subjects of relevance and appropriateness. Here is
one example:
WASH standard 1 on WASH programme design and implementation states: WASH needs of the
affected population are met and users are involved in the design, management and maintenance of the
facilities where appropriate. The associated Guidance note says (of health promotion activities): The
assessment should look at resources available to the population as well as local knowledge and practices so
that promotional activities are effective, relevant and practical.
In terms of an evaluation process, this could translate into general or specific evaluation questions such
as:
To what degree were Sphere technical standards applied during the design phase to ensure the
relevance of the response to the affected population? To what degree was this population
consulted?
To what degree were the capacity, resources and cultural practices of the affected population
taken into account in the design of health promotion activities?
(This description and the others that follow are drawn from Beck (2006), Evaluating Humanitarian Action using the OECD-DAC Criteria, ALNAP, London, UK.)
Core Standard 4
Core Standard 4 includes a Key Indicator stating that Programme designs are revised to reflect
changes in the context, risks and people's needs and capacities. This is further expanded within the
Guidance notes:
Context and vulnerability: Social, political, cultural, economic, conflict and natural environment factors can
increase people's susceptibility to disasters; changes in the context can create newly vulnerable people.
Vulnerable people may face a number of factors simultaneously (for example, older people who are members of
marginalised ethnic groups). The interplay of personal and contextual factors that heighten risk should be
analysed and programmes should be designed to address and mitigate those risks and target the needs of
vulnerable people.
This also links well with the second of the DAC criteria, connectedness. Considering the ways in
which humanitarian response has responded to contextual changes is an important aspect of
evaluation, addressed through evaluation questions such as:
What systems were put in place to monitor changes in the external context, the security situation
or the nature of vulnerability during the implementation period? What changes were made to
activities or methods as the situation changed and evolved?
Clearly, there are links here to the ability of a programme or project to monitor the external changes,
context and risk associated with an intervention.
Core Standards 3 and 4 highlight the importance of understanding the context when carrying out
needs assessment and planning operations, including ensuring that complex environments and
interconnected problems are properly understood. This could be translated into evaluation questions
such as:
Are the planned activities appropriate, given the history of tension between the various resident
groups in the area?
Did emergency activities support or undermine the long-term development plan of the local
authority?
To what degree did immediate response actions support or undermine the potential of medium-
term recovery activities?
This standard highlights the importance of working appropriately with both displaced and resident
populations. Guidance note 3 states:
Hosting by families and communities: Displaced populations who are unable to return to their original homes
often prefer to stay with other family members or people with whom they share historical, religious or other ties
(see Core Standard 1 on page 55). Assistance for such hosting may include support to expand or adapt an
existing host family shelter and facilities to accommodate the displaced household or the provision of an
additional separate shelter adjacent to the host family. The resulting increase in population density should be
assessed and the demand on social facilities, infrastructure provision and natural resources should be evaluated
and mitigated.
Protection Principle 2
Protection Principle 2 requires governments and humanitarian actors to ensure people's access to
impartial assistance in proportion to need and without discrimination. The Principle expands this
idea by expressing the following expectation:
People can access humanitarian assistance according to need and without adverse discrimination. Assistance is
not withheld from people in need and access for humanitarian agencies is provided as necessary to meet the
Sphere standards.
Protection Principle 4
Protection Principle 4 requires humanitarian actors to Assist people to claim their rights, access
available remedies and recover from the effects of abuse.
Core Standard 4
Core Standard 4 covers the design and implementation of humanitarian response and it expects that:
The humanitarian response meets the assessed needs of the disaster-affected population in relation to
context, the risks faced and the capacity of the affected people and state to cope and recover. One of
the Key Actions anticipates that humanitarian actors will:
Using disaggregated assessment data, analyse the ways in which the disaster has affected different individuals
and populations and design the programme to meet their particular needs.
This could be explored through an evaluation question such as: Did the response target and reach all groups affected by the disaster?
Technical standards
In many cases, the technical standards echo this expectation. For example, Guidance note 2 of
Essential Health Services standard 1 states:
Access to health services should be based on the principles of equity and impartiality, ensuring equal access
according to need without any discrimination. In practice, the location and staffing of health services should be
organised to ensure optimal access and coverage. The particular needs of vulnerable people should be addressed
when designing health services. Barriers to access may be physical, financial, behavioural and/or cultural as
well as communication barriers. Identifying and overcoming such barriers to the access of prioritised health
services are essential.
Core Standard 2
Core Standard 2 (Coordination and Collaboration) outlines how effective coordination improves the
efficiency of the combined (multi-agency) response.
Core Standard 5
Core Standard 5 considers Performance, Transparency and Learning and states in Guidance note 2:
Agency performance is not confined to measuring the extent of its programme achievements. It covers the
agency's overall function its progress with respect to aspects such as its relationships with other organisations,
adherence to humanitarian good practice, codes and principles and the effectiveness and efficiency of its
management systems.
Efficiency is often considered in purely monetary terms, although there are other ways to consider it.
Evaluations often seek to explore efficiency through questions such as:
Were the financial, human, physical and information resources available utilised efficiently? (e.g.
were inputs used in the best way to achieve outcomes and in a cost-effective manner?) If not, why
not?
Was the assistance provided in a timely manner to meet beneficiary and community needs? Did
the integration approach adopted affect the timeliness of delivery? If so, how?
Were staffing requirements correctly estimated, and were staff appropriately recruited and
deployed?
Sphere promotes the same process. For example, the introduction to the section on Food Security
(cash and voucher transfers) states:
The choice of appropriate transfers (food, cash or vouchers) requires a context-specific analysis including cost
efficiency, secondary market impacts, the flexibility of the transfer, targeting and risks of insecurity and
corruption.
What process was put in place to consider the full range of possible options to respond to the
needs identified in the needs assessment?
What factors were considered in making the selection of the chosen response modality, targeting
and scale? Were these factors appropriate and sufficient?
Core Standard 4
Core Standard 4 seeks to: progressively close the gap between assessed conditions and the Sphere minimum
standards, meeting or exceeding Sphere indicators.
Technical standards
The Technical standards are concerned with outlining what these expected results should be. In most
cases, these results will have been included within the monitoring framework of the operation and it
should be possible to use this to understand the progress towards targets over time.
Core Standard 2
Core Standard 2 requires that aid is effectively coordinated. This leads to more general questions such as:
To what degree did the action complement, compete with or duplicate the activities of other
humanitarian actors?
(ALNAP (2006), Evaluating humanitarian action using the OECD-DAC Criteria: An ALNAP Guide for Humanitarian Agencies.)
At this level it is usually difficult to establish causation; project inputs and activities are instead
considered to contribute towards the desired impact. The key question asked in impact evaluation is
therefore simply: did it work? This can then be broken down into a range of specific questions. Only
two examples from many possible options are provided here:
Did the humanitarian action reach all the people it intended to reach?
What impact was experienced by the affected population in addition to that planned and
anticipated?
This second question relates to the fact that impacts can also be unplanned or negative and that they
can affect other groups in addition to the targeted households or community.
Humanitarian Charter, clause 9 states: We are aware that attempts to provide humanitarian assistance
may sometimes have unintended adverse effects. In collaboration with affected communities and authorities,
we aim to minimise any negative effects of humanitarian action on the local community or on the
environment.
Protection Principle 1
Protection Principle 1 succinctly states: Avoid exposing people to further harm as a result of your actions.
It may be possible to explore some measure of impact through monitoring data. However, impact is
more commonly assessed once the programme is completed, through evaluation processes. Monitoring
indicators more usually look at the levels of outputs and outcomes. This implies that evaluations of
impact will need to engage directly with the affected population.
The easiest way to demonstrate impact is by comparing the situation before the response with that
after the humanitarian action has been completed and trying to understand what has changed and
why. To do this for different groups as suggested by the definition requires a disaggregated baseline
and this is often not available. In some circumstances, it can be possible to build a retrospective
baseline but this obviously becomes more challenging the more time has passed.
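As an illustration of the before-and-after comparison described here, the sketch below assumes hypothetical disaggregated baseline and endline values; the group labels and figures are invented for the example.

    # Compare disaggregated baseline and endline values to estimate change by
    # group. The indicator could be, say, % of each group accessing a service.
    baseline = {"women 18-59": 42, "men 18-59": 48, "girls <18": 35, "boys <18": 37}
    endline  = {"women 18-59": 71, "men 18-59": 80, "girls <18": 52, "boys <18": 69}

    for group in baseline:
        change = endline[group] - baseline[group]
        pct = 100 * change / baseline[group]
        print(f"{group}: {baseline[group]} -> {endline[group]} ({pct:+.0f}%)")
    # A markedly smaller change for one group (girls <18 in this invented data)
    # is a trigger for further investigation, not a conclusion in itself.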
Launched on 12 December 2014, the Core Humanitarian Standard on Quality and Accountability
(CHS) describes the essential elements of principled, accountable and quality humanitarian action.
The CHS was developed through a 12-month consultation facilitated by HAP International, People
In Aid, Groupe URD and the Sphere Project. It draws together key elements of several existing
humanitarian standards and commitments including the Red Cross/Red Crescent Code of Conduct,
the Sphere Handbook Core Standards and the Humanitarian Charter, the 2010 HAP Standard, the
People In Aid Code of Good Practice and the Quality COMPAS. The Core Humanitarian Standard
is a voluntary code which humanitarian organisations may use to align their own internal procedures.
The CHS takes the form of nine commitments and quality criteria, each with associated actions and
responsibilities. The nine commitments are:
1. Communities and people affected by crisis receive assistance appropriate and relevant to their needs.
Humanitarian response is appropriate and relevant
2. Communities and people affected by crisis have access to the humanitarian assistance they need at
the right time.
Humanitarian response is effective and timely
3. Communities and people affected by crisis are not negatively affected and are more prepared,
resilient and less at-risk as a result of humanitarian action.
Humanitarian response strengthens local capacities and avoids negative effects
4. Communities and people affected by crisis know their rights and entitlements, have access to
information and participate in decisions that affect them.
Humanitarian response is based on communication, participation and feedback
5. Communities and people affected by crisis have access to safe and responsive mechanisms to handle
complaints.
Complaints are welcomed and addressed
6. Communities and people affected by crisis receive coordinated, complementary assistance.
Humanitarian response is coordinated and complementary
7. Communities and people affected by crisis can expect delivery of improved assistance as
organisations learn from experience and reflection.
Humanitarian actors continuously learn and improve
8. Communities and people affected by crisis receive the assistance they require from competent and
well-managed staff and volunteers.
Staff are supported to do their job effectively and are treated fairly and equitably.
9. Communities and people affected by crisis can expect that the organisations assisting them are
managing resources effectively, efficiently and ethically.
Resources are managed and used responsibly for their intended purpose.
Table 11: Quick Location Guide for the Core Standards in the CHS
[Table: each CHS commitment is cross-referenced against the Core Standards (CS1 People-centred response; CS2 Coordination and collaboration; CS3 Assessment; CS5 Performance, transparency and learning) and the Protection Principles*.]
CHS 1 Assessment: Appropriate and relevant response
CHS 4 Communication: Communication, participation, feedback
CHS 6 Coordination: Coordinated and complementary response
CHS 7 Learning: Continuous learning and improvement
CHS 9 Resources: Resources responsibly used for intended purposes
* Note that the CHS will not replace the Sphere Protection Principles, only the Core Standards; however,
it is useful to consider the overlap between the Protection Principles and certain CHS Commitments.
Albu (2010), Emergency Market Mapping and Analysis Toolkit (EMMA). Practical Action Publishing, UK.
ALNAP (Active Learning Network for Accountability and Performance in Humanitarian Action):
www.alnap.org
Beck (2006), Evaluating Humanitarian Action using the OECD-DAC Criteria, ALNAP, London, UK.
ECB Project (2007), The Good Enough Guide: Impact Measurement and Accountability in Emergencies.
Humanitarian Accountability Partnership International (2010), HAP Standard on Accountability and
Quality Management, Geneva, Switzerland.
IASC Indicator Registry: www.humanitarianresponse.info/applications/ir/indicators
IASC (2012), Reference Module for Cluster Coordination at the Country Level. Geneva, Switzerland.
IFRC (2011), Monitoring and Evaluation Guide.
INTRAC: www.intrac.org
Knox Clarke, P., Darcy, J. (2014), Insufficient evidence? The quality and use of evidence in humanitarian
action. ALNAP, London, UK.
Managing for Impact Portal (guidance on participatory planning, monitoring and evaluation):
www.managingforimpact.org
Mazurana, D., Benelli, P., Gupta, H., & Walker, P. (2011), Sex and Age Matter. Feinstein
International Centre, Tufts University, USA.
OECD/DAC evaluation of development programmes website:
www.oecd.org/development/evaluation
People in Aid (2013) Code of Good Practice, People in Aid, London, UK.
Pretty, J., Guijt, I., Thompson, J., Scoones, I. (1995), Participatory Learning and Action. International
Institute for Environment and Development, London, UK.
Quality COMPAS: www.compasqualite.org
Sphere Project (2011), Humanitarian Charter and Minimum Standards in Humanitarian Response. The
Sphere Project, Geneva, Switzerland.
Sphere Project (2013), Humanitarian Standards in Context. Video and video guide.
www.sphereproject.org/resources
Sphere Project (2013), Sphere for Assessments. The Sphere Project, Geneva, Switzerland.
WASH Cluster Somalia (2012), Guide to WASH Cluster Strategy and Standards, also known as the Strategic Operational Framework (SOF).
World Bank (2014), Ten Steps to a Results-Based Monitoring and Evaluation System.