Capacity Building
March 2003
MEASURE Evaluation Manual Series, No. 7
The manual series is made possible by support from USAID under the terms of Cooperative
Agreement HRN-A-00-97-00018-00. The opinions expressed are those of the authors, and do
not necessarily reflect the views of USAID.
Other manuals in the MEASURE Evaluation Manual Series include:
No. 2: Quick Investigation of Quality (QIQ): A User's Guide for Monitoring Quality of Care. February 2001.
No. 4: Measuring Maternal Mortality from a Census: Guidelines for Potential Users. July 2001.
Recommended Citation
LaFond, Anne and Brown, Lisanne. A Guide to Monitoring and Evaluation of Capacity-Building Interventions in the Health Sector in Developing Countries. MEASURE Evaluation Manual Series, No. 7.
Carolina Population Center, University of North Carolina at Chapel Hill. 2003.
Acknowledgements
We wish to acknowledge the contributions and support of a number of individuals and institutions that enabled the successful completion of this document. Ray Kirkland and Krista Stewart
of USAID were instrumental in the conception of the Guide. Sara Pacque-Margolis of USAID
provided the support to see it through to completion. Our sincere gratitude also goes to several
technical reviewers for their constructive and instructive comments on earlier versions of the
Guide. They are: Alfredo Fort (PRIME II), Diane Catotti (IPAS), Alison Ellis (MSH), Leo Ryan
(CSTS/ORC Macro), Eric Sarriot (CSTS/ORC Macro), Fred Carden (IDRC), and Doug Horton
(ISNAR). Kate Macintyre contributed her ideas and encouragement, as well as provided the
SAIDIA case material. Catherine Elkins and Kate Macintyre contributed to the MEASURE
working paper on measuring capacity in the health sector, which provided a basis for this guide.
Thom Eisele and Cira Endley reviewed and analyzed capacity-measurement tools and practices.
Case examples of capacity measurement were developed with the cooperation of PRIME /
INTRAH; SAIDIA; NGO Networks for Health; and PATH (in a workshop setting). Finally, we
are grateful to the many adventurous organizations and individuals working to build capacity in
the health sector in developing countries. Their experimentation in capacity-building monitoring
and evaluation is commendable and deserves further study. This guide would not have been possible without the support of the Offices of Health and Population at the United States Agency for
International Development (Contract Number: HRN-A-00-97-00018-00).
Prologue
Capacity development[1] has moved to center stage of the agendas of development organizations.
Substantial sums are being invested in capacity-building programs. Yet, their design and management leave much to be desired. Marred by untested, unrealistic assumptions, the results of
many programs fall short of their goals and expectations.
Evaluations are needed to test the theories and assumptions on which capacity development
programs are based, to document their results, and to draw lessons for improving future programs. However, few capacity development programs have been systematically and thoroughly
evaluated (Horton et al., 2000).
[1] Capacity building and capacity development are used interchangeably throughout this document.
Table of Contents

Acknowledgements
Prologue
List of Acronyms and Abbreviations
About This Guide
    Structure of the Guide
Introduction
    Defining Capacity-Building Monitoring and Evaluation
    Capacity-Building M&E Has Many Roles
Part 1. Concepts, Definitions, and Attributes of Capacity and Capacity Building
    Why Build Capacity?
    What is Capacity Building?
    Useful Definitions
    Attributes of Capacity and Capacity Building
    Capacity Building Is Behavior Change
    Why Monitor and Evaluate Capacity Building?
    What Is Different about M&E of Capacity Building?
    Implications for Capacity-Building M&E
    Summary for Managers and Evaluators
Part 2. Understanding the Role of Capacity in the Health Sector: Introducing a Conceptual Framework
    Overview Framework: The Role of Capacity in the Health Sector
    Capacity at a Single Level
    Defining Variables Related to Capacity and Performance
    Using These Conceptual Frameworks
    Summary for Managers and Evaluators
Part 3. Monitoring and Evaluating Capacity-Building Interventions
    STEP 1: Define the Purpose of the Evaluation
    STEP 2: Define Performance Objectives
        Defining Performance
    STEP 3: Mapping Capacity: Build a Conceptual Framework for a Specific Capacity-Building Intervention
        When to Map Capacity
        How to Map Capacity
        Single-Level Capacity Mapping
        Multi-Level Capacity Mapping
        Dealing with Context
        Interpreting and Using Capacity Maps
From the discussion that follows on the concept of capacity building and capacity-measurement techniques, readers will come to understand why this guide is neither prescriptive nor exhaustive. There are no standardized approaches to monitoring and evaluating capacity-building interventions because of the wide variety of circumstances in which capacity building takes place. Capacity building has been applied to actions as distinct as formulating policy, supplying basic health commodities, and identifying danger signs of malnutrition. In short, capacity building demands adaptation to its context, and capacity-building evaluation techniques must reflect this potential variation. The Guide acknowledges this and other challenges by providing a link between the theoretical and practical aspects of capacity measurement in the health sector and by offering an approach to monitoring and evaluation that is relevant in a variety of settings.
It is also important to keep in mind that the monitoring and evaluation of capacity building, while singled out for discussion in this document, is normally part of an overall plan or system for monitoring and evaluating a health program or health sector intervention. This guide should therefore be used as a tool for orienting planners to capacity measurement in the context of developing a project-level or overall program-level performance-monitoring plan (particularly for programs where sustainability and scaling up are central concerns). As such, it will aid the process of thinking through the role that capacity and capacity measurement play in improving performance.
Introduction
Over the last decade, capacity building has
become as central to the business of developing health systems in lesser-developed countries as providing financial resources and applying the latest science. Capacity is believed
to contribute directly to improving performance in the health sector, and is thought to
play an important role in sustaining adequate
performance over time. Despite increased
attention to capacity, experience in gauging
the effectiveness of capacity-building interventions in the health sector is still limited.
Unlike other aspects of health-related monitoring and evaluation (M&E), capacity measurement is not supported by a comprehensive
history of theory and practice. While methods
for monitoring and evaluating health service
coverage, access, and quality are well advanced, there are few tried and true approaches for capturing the interim state or
process that reflects the ability to achieve and
sustain coverage, access, and quality over
time (Brown, LaFond, and Macintyre, 2001).
Thus, capacity measurement in the health
sector is both new and experimental.
There are intrinsic challenges to measuring capacity that are reflected in the concept and role of capacity itself. For example, capacity derives its relevance from the contribution it makes to performance. There are endless areas where performance is required in the health sector, and an equally wide range of possible capacity variables that influence performance. In addition, contextual factors (factors outside the control of most health sector actors) can have a strong influence on capacity or on the desired outcome of a capacity-building intervention. These and other characteristics of capacity and capacity building explain why there are no gold standards for capacity-building M&E. There is no short list of valid indicators of capacity in the health sector, nor are there standardized methods for monitoring and evaluating it.
Capacity Assessment
Defining Capacity-Building
Monitoring and Evaluation
Most capacity-measurement experience to date has emphasized capacity assessment rather than M&E (Brown, LaFond, and Macintyre, 2001). Assessment normally takes place at the beginning of an intervention as part of an organizational diagnosis or formative design process. Evaluators can learn a great deal from capacity-assessment tools (as we have in developing this guide). However, it is worth noting that while capacity assessment is an important first step in planning a capacity-building intervention, capacity-building M&E differs from assessment by virtue of its explicit focus on measuring change. Capacity-building monitoring and evaluation tracks or identifies changes in capacity that take place in the course of a capacity-building intervention. It uses stated objectives for capacity building and performance improvement as the basis for measuring that change. Evaluation studies can be conducted to gain an understanding of the relationship between capacity-building interventions and capacity outcomes, or the links between capacity and performance variables.

The term "impact evaluation" is not appropriate or useful in the context of capacity-building M&E because of the difficulty of quantifying many elements of capacity and of attributing capacity change to any single intervention, or even to a range of interventions.
Part 1. Concepts, Definitions, and Attributes of Capacity and Capacity Building
through monitoring and evaluation. The dynamic nature of capacity is often a reflection
of the many different forces that influence its
development or decline.
Note: Some have labeled this level institutional development (Kotellos, 1998; INTRAC, 1998), while others use the terms organization and institution interchangeably. To avoid confusion, we have adopted the term system.
local practitioners and their external partners to think strategically about capacity
development and to learn, through practice, what works under different circumstances. At the same time, systematic
measurement of capacity contributes to
results-based management of programs
where capacity building is part of the
overall strategy for improving performance.
Part 2. Understanding the Role of Capacity in the Health Sector: Introducing a Conceptual Framework
The first step in developing a vision of capacity development, and a plan to measure it, is to understand the role capacity plays in the health sector in developing countries. What are the expectations and assumptions surrounding capacity and its relationship to performance and health outcomes? Clear thinking about these variables helps planners define realistic objectives for capacity-building interventions and express desired capacity outcomes explicitly and precisely. Evaluators must rely on these parameters of capacity building in order to develop a capacity-building M&E plan.
The following series of conceptual frameworks is provided as a reference to help planners and evaluators develop their own vision of the role capacity (and capacity building) plays in the health sector. We have found that directed discussion using these types of frameworks prior to M&E planning can stimulate strategic thinking within project or work teams, clarify individual and collective expectations, and thereby improve capacity-building M&E. Figure 1, the Overview, illustrates the critical role capacity plays in influencing and sustaining performance in the health sector. It takes a system-wide view of capacity, including all possible levels where capacity building might take place. The four other frameworks (Figures 2-5) take capacity at each level and break it down into defined components: inputs, processes, outputs, and outcomes (see Table 2). In breaking down capacity at each level, the frameworks provide a starting point for identifying the key variables that influence capacity and performance at that level.
Understanding the Role of Capacity in the Health Sector: Introducing a Conceptual Framework
Figure 1. Overview Framework: The Role of Capacity in the Health Sector. The diagram depicts capacity at four levels (health system, organization, health personnel, and individual/community) within the external environment, linking capacity at each level to its corresponding performance (health system performance, organizational performance, personnel performance, and individual/community behavior change) and, over time, to sustainable health system performance, sustained individual/community behavior change, and improved health status.
The framework includes a range of possible capacity inputs, processes, outputs, and outcomes that contribute to performance at this level.
The system level is a complex area in which
to define or address capacity development or
to assess changes in capacity resulting from
external or internal intervention. Despite the
use of an inputs-process-outputs-outcomes
framework, in practice, relationships among
elements of capacity are not perfectly linear.
Change (or the lack of it) in capacity results
from multiple influences, some of which can
be unexpected (Sarriot, 2002a). Contextual
factors such as political and economic stability can also play a dominant yet poorly understood role in ensuring system capacity. Good
examples come from health sector reform
activities that seek to improve national health
sector performance by changing sector priorities, laws, organizational structures, and financing arrangements. For instance, the actual
results of legal reform in Zambia were
achieved but not well communicated to health
workers, which led to internal resistance to
delinking or separating health workers from
the civil service (Lake et al., 2000). Despite
addressing key constraints such as laws or
regulations, capacity to manage human resources more effectively did not emerge as
planned.
Performance at the health system level is often defined in terms of access to services, quality of care, equity, and efficiency, although there are many other possible indicators of performance at this level.[5]

[5] The World Health Organization proposed new indicators for monitoring health system performance in the World Health Report 2000, including measures of stewardship, financing, resource generation, and service provision.
Figure 2. Capacity at the health system level

Inputs: infrastructure; public/private composition of services; organizational structure (public sector); existing health-related laws, regulations, and policies; information/communication systems; human resources; leadership; financial resources (public/private, internal/external); history and culture of the system

Processes: health policy making; enforcement of health-related laws and regulations; resource allocation; sector-wide strategy; resource generation; financial management; human resource development and management; donor coordination; information coordination and dissemination; multi-sectoral collaboration

Outputs: improved human resource availability in rural areas; coordinated donor interventions; timely analysis and dissemination of national health information

Capacity outcomes: accountability (financial and program transparency); capacity to assess and cope with internal and external change; financial self-reliance; effective monitoring of quality of care; responsiveness to client needs and demands; efficient/appropriate resource allocation

All of the above operate within the external environment and contribute to performance.
Outcome: Set of results that represent capacity (an ability to carry out stated objectives), often expected to change as a direct result of a capacity-building intervention.

Performance: Set of results that represent productivity and competence related to an established objective, goal, or standard. The four capacity levels together contribute to overall system-level performance.

Impact: Long-term results achieved through improved performance of the health system: a sustainable health system and improved health status. Impact measures are not addressed in capacity-building M&E.
Organization Level
Figure 3 depicts a similar categorization of
capacity variables at the organization level
that contribute to organizational performance.
Performance at the organization level might
be described in terms of the ability of the organization to produce goods and services to
an acceptable standard (e.g., the quality of
care; coverage of the catchment population).
This framework relates to organizations
whose main function might be health service
delivery (in the public or private sector) and
those considered to be civil society organizations (nongovernmental or nonhealth service
agencies). Civil society organizations generally are not involved in the direct delivery of
health services, but they do influence health
service delivery, policies, and behaviors in
many societies throughout the world. Civil
society organizations of particular importance
could be cooperatives, community development organizations, advocacy groups, informal pressure groups, and others. The MOH is
a unique organization for conceptualizing
capacity building since it can be a significant
actor at both the system and organization levels. The contextual factors influencing organizational capacity are represented at the
perimeter of the diagram and include system
level factors as well as typical political, economic, cultural, and other variables.
Figure 3. Capacity at the organization level

Inputs: infrastructure; organizational structure; mission; leadership; financial resources; equipment and supplies; human resources (technical and managerial); history and culture of the organization

Processes: financial management; logistics/supplies management

Outputs: strategic and operational plans; functional financial management system (i.e., resources available, costs contained); stakeholder involvement; functional health information and communication system (information collected, analyzed, and used)

Capacity outcomes: able to assess and cope with internal and external change; responsiveness to client needs and demands; financial self-reliance

All of the above operate within the external environment and contribute to organizational performance.
Figure 4. Capacity at the health personnel level

Inputs: human resources; financial resources (i.e., salaries, benefits, incentives); physical resources (venues, materials, supplies, equipment); national/organizational training policies, plans, and guidelines; up-to-date information on appropriate clinical and managerial practices; curricula

Outputs: staff trained/retrained as required; trainers trained/retrained as required; managers trained/retrained as required; supervision received

Capacity outcomes: motivated health personnel; professional or peer support networks; access to information

All of the above operate within the external environment and contribute to personnel performance.
Understanding the Role of Capacity in the Health Sector: Introducing a Conceptual Framework
23
Figure 5. Capacity at the individual/community level

Inputs (individual/family): education; income; family history; sex; perceptions of need/risk; willingness to seek care; ability to pay; exposure to programs/services

Inputs (community dimensions): community history; communication channels; citizen participation; cohesiveness; leadership; material and financial resources (internal and external to the community); social and interorganizational networks; values; skills

Processes: achieving consensus; utilization-enhancing activities (e.g., IEC, accessible services); securing resources; critical reflection; negotiation

Outputs: recognition of need for services; intention to use services; participation in community health committees; community plans

Capacity outcomes: recognition of symptoms and danger signs and the actions needed; ability to articulate needs and demands; knowledge of prevention behavior; community support for prevention behaviors; community support for community-based health care; community-based mobilization and empowerment for interacting with the health system

All of the above operate within the external environment and contribute to performance.
Here the individual/community level represents all those who could benefit from and
participate in the health care system; thus it
includes all current and potential clients of the
services offered and the communities in
which they live. The inclusion of individual
and community capacity in this framework
represents a departure from conventional
thinking on capacity in the health sector. References to community capacity are found
mostly in literature on community
empowerment and strategies for improving
community mobilization and participation
(Goodman et al., 1998; Israel et al., 1994; Israel et al., 1998; Eng and Parker, 1994).
The inputs in this framework represent the resources available to individuals and communities. They include individual/family factors, community factors, and factors outside the immediate influence of the community, such as exposure to health and education programs. Processes explain how individuals and communities use their resources to act in support of their own capacity development. Capacity outcomes relate to the knowledge, motivation, skills, and behavior that support individual and community health and well-being. Performance is the actual behavior on the part of individuals or communities, which might include interaction with the health system (participation or advocacy), as well as behavior that directly influences health outcomes: utilization of health services, self-treatment, compliance, and prevention behavior.
While it is useful to separate levels of capacity to facilitate M&E planning, these levels are clearly interdependent, as shown in the nesting of the health personnel and organization levels within the system level, and in the arrows connecting individuals/communities to the health system and its parts. A health system is made up of organizations and health personnel, and organizations cannot function without health personnel. Without individual users of health services, the other levels cannot begin to perform effectively. Going beyond one-dimensional diagrams to understand the dynamics of capacity building at each level and between levels will guide the development of M&E strategies and techniques.

For example, the processes listed at the system level are, in practice, often activities carried out by the MOH with support from donors and in collaboration with other actors in the health sector (e.g., NGOs, private companies). There is a clear overlap between system and organizational capacity, since the capacity of the system to carry out certain functions may depend directly on the capacity of the MOH to play its organizational role effectively. An M&E plan should attempt to monitor changes at both levels in order to explain capacity development (or the lack of it) well.

The overview diagram that describes the relationship between capacity, performance, and sustainability also suggests a logical progression from capacity to performance to sustained performance, when in fact both capacity and performance can improve or decline in uncoordinated or illogical ways. Because capacity is a fluid notion that responds to many influences, linear frameworks, often used in research and evaluation, are sometimes considered too mechanical for monitoring and evaluating capacity. Cause-and-effect chains related to capacity are seldom linear, suggesting the need to break out of a rigid, inflexible way of thinking.
The frameworks provide a starting point for identifying the key variables that influence capacity and performance at each level, and will help evaluators define capacity variables to track in the M&E plan.
Part 3. Monitoring and Evaluating Capacity-Building Interventions
Part 2 described a generic conceptual framework for understanding the role of capacity in the health sector and suggested possible capacity variables for each level. This part presents the six steps for developing a monitoring and evaluation plan for a specific capacity-building intervention. At the heart of this process is the development of a capacity map, or conceptual framework, that applies to the particular capacity-building intervention under study. The six steps are listed in Box 4.

Ideally, an M&E plan should be formulated during the design and planning of a capacity-building or performance-improvement intervention. Evaluators and program planners should work together with key stakeholders to conduct a needs assessment, define the intervention strategy, and construct an M&E plan. Since capacity building is often one strategy in a broader approach to improving performance, capacity-building M&E should fit into the overall performance-monitoring plan.
An M&E plan includes:
- a conceptual framework
- a definition of essential variables of capacity and performance
- hypotheses on important links between these capacity and performance variables
- identification of the stages of capacity
- indicators and methods
- a timeframe, and
- a dissemination strategy
STEP 1: Define the Purpose of the Evaluation
Needs assessment
Monitoring
Evaluation
DO
- Involve all stakeholders, both internal and external, in developing the M&E plan, and particularly in defining the purpose of the evaluation
- Be prepared to negotiate with stakeholders on the purpose of the evaluation, and make all expectations transparent

DON'T
- Base M&E plans only on the needs of external stakeholders (mostly donors) at the expense of meeting internal information needs
- Miss opportunities to reflect and learn about capacity development through M&E
STEP 2: Define Performance Objectives
Before launching into monitoring and evaluation of any capacity-building program or intervention, it is critical to step back and fully understand its focus and strategy. It is particularly crucial to understand how the stated capacity-building strategy is expected to improve performance and what signs of improved effectiveness are expected from capacity building. Although it is not possible to prove causality, it is important to clearly define the expected pathways between capacity building and performance.
To begin, evaluators should address the following questions:
Defining Performance
Performance objectives should relate to the
mandate or specific purpose of a system, organization, or community, or to health personnel functions. The more specific one can
be about performance expectations, the easier
it will be to construct a capacity map. If the
M&E plan is being developed after a capacity-building intervention has been designed,
then articulating the performance focus and
expectations should not be difficult (assuming
the design document is sufficiently explicit
about performance objectives). Moreover,
some organizations already may adhere to a
- Measurable
- Reflects a needed change
- Relates to a clear product or action
- Relates to a defined target population
- Performed by a specific delivery agent (e.g., organization, community group, etc.)
- Relevant to a particular context/situation
Examples
STEP 3: Mapping Capacity: Build a Conceptual Framework for a Specific Capacity-Building Intervention
Once performance objectives and expectations are defined, planners and evaluators
must make assumptions about the capacity
required to meet these objectives. Capacity
mapping is a structured process of thinking
through the role capacity plays in ensuring
performance by developing a conceptual
framework that is specific to a particular capacity-building intervention. During capacity
mapping, all the possible factors of capacity
that influence performance and the relationships between them must be identified. Once
the factors are all laid out, the program staff
or evaluator can focus on those that are most
essential for the evaluation.
Mapping capacity can be a critical step in developing an M&E plan. The map is a tool that guides the design of the plan, from the selection of indicators and methods to the presentation of evaluation results. As Morgan (1997) states, evaluation designers and their program partners need a sense of what capacities they need to develop and for what reason. Most groups and organizations can articulate such a vision of the future given sufficient time and productive discussion. Mapping capacity makes plain to all stakeholders the assumptions about key variables that affect the desired outcome of a capacity-building intervention. A mapping exercise is an excellent way to bring all stakeholders to a common understanding of the scope and focus of a capacity-building intervention, the performance outcomes expected from capacity development, and the role of M&E in tracking and influencing change.
For the evaluator, the objective of this stage
of M&E planning is to create a conceptual
Capacity mapping should refer to the logic of the overall program, project, or intervention. Horton et al. describe this approach as referring to a "theory of action" that binds interested parties into a single vision (Horton, 2001). Whether mapping capacity during intervention design or in the context of an already defined intervention strategy, it is advisable to refer to existing data on the intervention area, including needs assessments, capacity assessments, etc.
When mapping capacity it may be helpful to refer to the conceptual framework in Part 2 for a
general review of the role capacity plays in improving performance in the health sector and examples of capacity variables.
Be realistic about your expectations of the role of capacity. There is a tendency to treat every aspect of resources and behavior in an individual, organization, or system as a capacity variable, which risks measuring too much.
Look beyond individual capacity and training solutions to identify capacity variables. For example, during discussions on the capacity framework with SAIDIA, a Kenyan NGO (nongovernmental
organization) that provides health services and community development opportunities, staff at first
claimed that training health workers and community members was their only work in capacity building. Yet, with further discussion, participants illustrated a wide range of capacity-building activities
at all levels, including their work in coordination and collaboration with the public sector, and courting relations with donors that fund the NGO.
Map capacity with a wide range of stakeholders to inspire a sense of ownership of capacity
building and appreciation of the use of evaluation in programming. Since capacity-building M&E
delves into many internal characteristics and processes found within systems, organizations, and
communities, it requires considerable investment on the part of the members of these groups to
achieve success. The quality of information obtained from evaluation, therefore, is directly affected by the extent to which participants develop a feeling of ownership of the M&E activity and
value the data being collected.
Capacity Inputs: Leadership; Finances; Infrastructure; Human resources; Finance policy; Organizational culture
Capacity Processes: Financial management; Resource mobilization
Capacity Outputs: Staff trained; Functioning financial management system
Performance Objective: Consistent delivery of an essential package of good-quality family planning services to a defined population (coverage, quality, and consistency)
Capacity Inputs: Leadership; Financial resources; Infrastructure; Human resources; Technology; Organizational culture; Incentive practices
Capacity Processes: Operational planning; Human resource management & development; Research, monitoring & evaluation; Logistics/supplies management
Capacity Outputs: Operational plans developed and implemented; Quality assurance standards clearly stated & reference material available; Monitoring reports on quality, utilization, & client satisfaction; Functional relationships between facilities and suppliers
Capacity Outcome: Quality assurance practices institutionalized
Capacity Inputs: Leadership; Finances; Infrastructure; Human resources; History of health service organization; Organizational culture; Organizational incentive practices
Capacity Processes: Human resource management & orientation; M&E, research; Coordination and communication with referral units; Creation & maintenance of linkages with community groups; IEC; Community mobilization
Capacity Outputs: Functional community outreach & communication mechanisms; Feedback from routine client satisfaction & community monitoring; Quality of referral service monitored
Context or operational environment: National policy on consumer roles and rights; Published norms and standards of care
System level. Capacity Inputs: National policy on immunization and community-based workers.
Organizational level (Local NGO). Capacity Inputs: Curricula for Training of Trainers & for Community Health Workers. Capacity Processes: Supervision and mentoring of CHWs.
Health Personnel level. Capacity Processes: Participation in Training of Trainers; Participation in CHW training on IEC.
Community level. Capacity Inputs: Exposure to immunization program. Capacity Processes: Community meetings with CHWs. Capacity Outputs: Level of participation in health care learning activities. Capacity Outcomes: Recognition of need for immunization; Community knowledge of immunization benefits and side effects; Caregivers value immunization.
Each type of mapping (single-level or multiple-level) can be done in two or three iterations. The first iteration of a map should attempt to provide a full list of capacity variables that may influence capacity outcomes
Box 10:
Administrative
Is your organization influenced by the rules of other organizations, institutions, and groups to which it is related or might be expected to be related?
Is your organization influenced by the expectations of consumers, policymakers, suppliers, competitors, and other organizations in its external environment?
Are your organization's objectives and activities influenced by governments, donors, and other organizations?
Is your organization influenced by important sector rules and regulations?
Do administrative norms/values in your country support or hinder the work your organization
intends to carry out?
Legal
Do the laws of the country support the role played by your organization?
Does the legal framework support the organization's autonomy?
Is the legal framework clear?
Is the legal framework consistent with current practice?
Is the legal regulatory context conducive to your organization's work?
Does your organization monitor changes in the legal context that could affect the position of
the organization?
Political environment issues
Do the political and ideological trends of the government support the kind of work the organization does?
Does the government system facilitate collaborative arrangements?
Does the organization play a role in national or sector development?
Does the organization have access to government funding?
Does the organization have access to international funding?
Does the organization have access to the government's knowledge and publications?
Do government policies and programs support the organization?
Sociocultural environment
Is equity in the workplace a social value?
Does the organization account for the effect of culture on program complexity?
Do values found in the sociocultural environment support the work of the organization?
Does the organization have access to a pool of capable human resources to recruit staff?
Does the organization analyze and link demographic trends to its work?
Economic environment
Does the government's economic policy support the organization's ability to acquire technologies and financial resources?
Is money available to do the organization's work?
Do donors support the organization?
Technological environment
Is adequate physical infrastructure (telecommunications, transport) in place to support the organization's work?
Is the technology needed for your work supported by the overall level of national technology
development?
42
Does the government system facilitate the organization's process for acquiring needed technology?
Is the level of human resource development in your organization adequate to support new
technology?
Stakeholder environment
Is the community involved in the organization?
Are partners involved in the organization?
Do governments value the organization's products and services?
Do governments request or use the organization's products and services?
Do similar organizations compete or cooperate with your organization?
Do donors influence the organization?
Do funders support the organization?
The questions above are adapted from Enhancing Organizational Performance (Lusthaus et al., 1999). While they focus on the organizational level, many can be adapted to other levels of the health system.
STEP 4
Identify Capacity
Indicators
Lesson 1: Indicators should reflect an understanding of the change strategy for capacity development.
The process of choosing capacity indicators
should feed into the overall change strategy
designed for building capacity and improving
performance. Indicators should be developed
alongside capacity mapping while designing a
capacity-building intervention. Evaluators
also might seek to understand how information is currently used in the organization or
system to ensure that indicators become incentives for change and not barriers.
Box 11: Examples of Capacity Indicators from Non-health-Sector Capacity-Building Interventions
Example 1
1. Capacity indicator related to decentralized payment functions administered by local officials,
district assembly members, and financial and political employees:
Ability of the system to transfer funds between authority levels (for example, within 45 days
of the end of the quarter) and/or produce audited statements within six months of the end of
the fiscal year.
2. Capacity indicator related to community water management committees' role in water pump maintenance:
A functioning Pump Management Committee that meets at least once a month and keeps the
pump functioning 90 percent of the time in normal circumstances.
3. Capacity indicator related to coordination of information among six ministries working on soil
erosion:
Twenty-five percent increase in the number of projects that require contributions from two or
more departments.
4. Capacity indicator related to a government department's capacity to carry out joint surveys of client farmers in the delta area of the cotton region:
Acceptance of survey methods as an effective tool by senior research officers and their incorporation into the work program of the agencies.
Source: Morgan, 1997
Example 2
Indicators related to motivation
Motivation to implement the strategic approach
Motivation to undertake strategic planning
Interest in improving the management information system
Interest in designing and managing competitive projects
Indicators related to capacity
Knowledge of the strategic approach
Skills to undertake strategic planning
Knowledge about designing and managing competitive projects
Knowledge about the foundations of an information management system
Indicators related to context or environment
Degree to which tasks demand conceptual and methodological creativity and innovation
Positive appreciation of performance in institutional evaluations
Degree of autonomy to undertake work
Contribution to improvement of the management information system
Source: Horton et al, 2000
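Indicators with numeric targets, such as the 25-percent-increase indicator in Example 1 above, can be monitored with very simple arithmetic. The sketch below is illustrative only; the baseline and current counts are hypothetical numbers, not data from either example:

```python
def percent_increase(baseline, current):
    """Percent change from a baseline count to a current count."""
    return (current - baseline) / baseline * 100.0

def target_met(baseline, current, target_pct):
    """True when the observed increase reaches the indicator's target."""
    return percent_increase(baseline, current) >= target_pct

# Hypothetical monitoring data: cross-department projects rose from
# 20 at baseline to 26 at follow-up, against a 25% increase target.
met = target_met(20, 26, target_pct=25.0)  # a 30% increase meets the target
```

Keeping the target explicit in the indicator definition, as Morgan's examples do, is what makes this kind of mechanical check possible.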
Maps: Capacity maps with indicators across system, organization, and community levels, organized by capacity inputs, processes, outputs, outcomes, and performance. Entries include: supervisors; human resources; incentives; personnel and number of staff; supplies and supplies management; leadership; outreach; learning; links to community; provider-client interaction; number and outcome of contacts (including outcome of contacts in terms of client satisfaction); feedback; and quality of the referral system. Performance objective: consistent delivery of an essential package of good-quality family planning services to a defined population (coverage, quality, and consistency).
Map 6: Community Capacity Map on Multiple Levels with Indicators (in Italics)
Intervention
Performance objective: To increase demand for childhood immunization in Sierra Leone.
Capacity-building objective: Work with a local NGO to improve Community Health Workers' (CHW) capacity to provide Information, Education, & Communication (IEC) on childhood immunization.
Strategies and activities: Develop curricula for training of trainers and training of CHWs; conduct training of trainers and supervision; health personnel support CHWs from health centers; NGO supervises and supports health center personnel working in service delivery.
System level. Capacity Inputs: National policy on immunization & CHWs (Policy exists & is favorable).
Organizational level (Local NGO). Capacity Inputs: Curricula for Training of Trainers and Community Health Workers (Curriculum exists). Capacity Processes: Participation in Training of Trainers; Participation in CHW training on IEC (% of personnel or CHWs completing training).
Community level. Capacity Inputs: Exposure to immunization program (Past experience with childhood immunization). Capacity Outcomes: Community knowledge of immunization benefits and side effects (Index of immunization program message recall).
Examples of capacity indicators by level (health system, organization, health personnel, individual/community) and by stage (inputs, process, outputs, outcomes): adequacy of training materials/supplies has been assessed in one or more institutions; adequate training supplies available in sufficient quantities to support ongoing RH/FP training in one or more institutions; up-to-date curricula; percent of training budget from external assistance; average level of education attained in the district; mean income level; proportion of adults whose partner recently died in central hospital; community leadership (type and quality).
Further examples of indicators, spanning the organization, health personnel, and individual/community levels:
Average time/distance to the nearest reproductive health facility offering a specific service
Percent of facilities where percent of clients receive the service that meets the expected standards
Number/percent of trainees deployed to an appropriate service delivery point and job assignment
Percent of facilities that experience a stockout at any point during a given time period
Percent of health facilities providing STI services with adequate drug supply
Contraceptive prevalence rate (CPR)
Disability-adjusted life years (DALY)
Disability-adjusted life expectancy (DALE)
System responsiveness to clients
Index of equality of child survival
Total health expenditure as a percent of GDP
Public expenditure on health as a percent of total public expenditure
Out-of-pocket expenditure as a percent of total health expenditure
Percent of mothers examined every 30 minutes during the first two hours after delivery
Percent of data elements reported accurately in MIS reports
Family planning continuation rates in catchment population
Percent of annual revenue generated from diverse sources
Percent of target population that received DPT 3 immunization
Cost of one month's supply of contraceptives as a percent of monthly wages
Percent of deliveries in which a partograph is correctly used
Percent of newborns receiving immediate care according to MOH guidelines
Percent of pregnant women counseled and tested for HIV
Percent of STI patients appropriately diagnosed and treated
Percent of communities with an active health center management committee
Percent of non-users who intend to adopt a certain practice in the future
Percent of infants 0 - < 6 months of age who are exclusively breastfed
Percent using condoms at last higher-risk sex
STEP 5
Identify Appropriate
Methodological
Approach and Sources
of Data
The fifth step in developing a capacity-building M&E plan involves defining the methodological approach, identifying sources of data, and choosing (or developing) data collection tools. Evaluators should ask the following questions:
Which methodological approach is appropriate?
What sources of data are necessary for measuring the indicators defined in Step 4?
Are there any existing tools for measuring capacity that are appropriate for my purposes?
Methodological Approaches and Challenges
As discussed throughout this guide, monitoring and evaluation require different methodological approaches and have different data
needs. The choice of methods and data
sources relates mainly to the purpose of the
evaluation (see Step 1).
Is the purpose to monitor the implementation of a capacity-building intervention,
assess its effectiveness, or both?
Will the results be used mainly for internal improvements or external reporting?
Clearly, all capacity-building programs need to be monitored to ensure they are working well (i.e., to track changes in inputs, processes, outputs, and outcomes). However, the evaluation of program effectiveness happens less frequently and only for selected interventions, due to cost and complexity. In the case of capacity-building evaluation, it can be particularly difficult to conduct evaluations that look for an association between a capacity-building intervention and changes in capacity or performance. These changes can occur for a number of reasons in addition to the capacity-building intervention itself.
use data interpretation workshops to obtain input from a range of stakeholders involved in the program (both internal and
external).
Sources of Data
A number of data sources are available for
monitoring and evaluating capacity building.
Since capacity measurement often includes
the use of multiple indicators, monitoring and
evaluation usually requires multiple data
sources. Indicator design should take into
account the potential availability of data particularly from existing sources. Organizations
and systems often have records and reports
that provide insights into different aspects of
capacity. Some examples of existing data
sources are presented below.
In many cases, however, it will be necessary
to collect new data to operationalize the indicators selected. As noted above, issues such
as data sensitivity (with respect to its effect on
validity), the purpose of monitoring and
evaluation, and the cost in terms of time and
resources required should guide evaluators in
determining what data will be collected and
how they will be collected.
Health personnel: personnel records (job descriptions, performance evaluations, background checks, training summaries), supervision reports, self-evaluations.
Individual/Community: community-based and
social marketing surveys, community health
worker reports, meeting minutes, maps, focus
groups, and participatory appraisals.
In planning for data collection, it is often
helpful to develop a data chart that spells out
the key questions to be addressed, the indicator that links to the question, and the data
sources needed to answer the question. An
example of a data chart is found in Table 6.
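A data chart of this kind maps naturally onto a small structure in code. The sketch below is purely illustrative; the questions, indicators, and sources are hypothetical placeholders, not the contents of Table 6:

```python
# Hypothetical data chart: each row links a key evaluation question to
# its indicator, the methods used, and the data sources required.
data_chart = [
    {
        "question": "Did the intervention increase budgetary resources?",
        "indicator": "Amount of budgetary resources by source over time",
        "methods": ["record review"],
        "data_sources": ["finance manager", "donor/NGO representative"],
    },
    {
        "question": "Has reliance on donor funding decreased?",
        "indicator": "Share of annual budget from donor sources over time",
        "methods": ["record review", "survey"],
        "data_sources": ["financial records"],
    },
]

def sources_for(chart, question_fragment):
    """List every data source needed to answer a matching question."""
    return [
        src
        for row in chart
        if question_fragment.lower() in row["question"].lower()
        for src in row["data_sources"]
    ]
```

Listing the chart this way makes it easy to check, before fieldwork begins, that every question has at least one indicator and one feasible data source.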
Tools for Measuring Capacity at Different
Levels
A number of data-collection instruments and
tools have been developed and used to measure capacity at the four levels. (See Table 7
for a list of tools and their key characteristics). In most cases, these tools have been
used for capacity assessment rather than for
monitoring and evaluation. In addition, most
of the tools identified are designed to assess
organizational capacities, although many of
Table 6. Objective(s), Indicator, Method(s), and Data Sources
Objective 1: Determine whether capacity-building interventions increased budgetary resources of the organization and the number of trained personnel.
Indicators: Amount of budgetary resources by source over time; number of management and staff positions filled over time.
Methods: Record forms; survey data.
Data sources: Finance manager, accountant, donor/NGO representative.
Objective 2: Determine whether reliance on donor/NGO funding has decreased.
Methods: Record forms; survey data.
Objective 3: Determine the extent of networking and its effect on organizational behavior.
Methods: Survey data; organizational networking analysis.
Objective 4: Determine the effectiveness of training and mentoring.
Methods: Survey data.
Table: Tools for Measuring Capacity. Each entry lists the tool, developer, level, methods, self/external assessment, single/multiple tools, and a short description.

Enhancing Organizational Performance: A Toolbox for Self-Assessment (http://www.idrc.ca). Developed by IDRC. Level: organization. Methods: qualitative and quantitative; external and self-assessment; multiple tools. Measures the results of an organization's programs, products, and services and then integrates these results with the techniques of formative assessment, in which the assessment team becomes involved in helping the organization meet its goals.

Outcome Mapping (http://www.idrc.ca). Developed by IDRC. Levels: system, organization. Methods: qualitative and quantitative; self-assessment; multiple tools. Outcome Mapping characterizes and assesses the contributions a project or organization makes to significant and lasting changes (outcomes). In Outcome Mapping a program is assessed against its activities that contribute to a desired outcome, not against the outcome itself.

Integrated health facility assessment manual. Developed by BASICS. Level: organization. Methods: quantitative; external assessment; multiple tools. This manual outlines the key steps for planning and conducting an integrated health facility assessment at outpatient health facilities in developing countries. The assessment is designed for use by primary health care programs that are planning to integrate child health care services.

Management and Organizational Sustainability Tool (MOST). Developed by Family Planning Management Development (FPMD)/MSH. Level: organization. Methods: qualitative; self-assessment; single tool. MOST is a package (instrument and user's guide) designed to facilitate management self-assessment and to support management improvement. MOST uses an instrument to help focus an organization on the actual characteristics of its management, identify directions and strategies for improvement, and set priorities for the management development effort.

Management Development Assessment (MDA) (http://erc.msh.org/mainpage.cfm?file=95.50.htm&module=toolkit&language=English). Developed by FPMD/MSH. Level: organization. Methods: quantitative; self-assessment; single tool. This tool includes four steps: 1) develop a preliminary management map to guide assessment; 2) develop and administer the MDA questionnaire to collect information on the management capabilities of the organization; 3) analyze survey results and develop a post-survey management map; and 4) develop an action plan for making improvements.

Sustainability evaluation framework. Developed by the Child Survival Technical Support (CSTS) Project/ORC Macro. Levels: system (local), organization, community. Methods: qualitative and quantitative; multiple tools. Evaluation framework to systematically measure progress toward sustainable health goals; a process that projects can use to lead a participatory assessment with communities and local partners.

Self-assessment tool. Developed by the CSTS Project/ORC Macro. Levels: system (local), organization. Methods: qualitative and quantitative; multiple tools. This self-assessment tool is currently being pilot tested by CSTS.
INTRAH/PRIME Capacity Building In Training Questionnaire (http://www.prime2.org/prime2/techreport/home/50.html). Developed by INTRAH/PRIME II. Level: organization. Methods: qualitative and quantitative; self and internal client assessment; multiple tools.

Client-Oriented Provider Efficient (COPE) (http://www.engenderhealth.org/ia/sfq/qcope.html). Developed by EngenderHealth. Level: organization. Methods: qualitative and quantitative; self-assessment; multiple tools. Note: COPE has now been adapted for use with maternal health services and community partnership (http://www.engenderhealth.org/news/newsreleases/020516.html).

Transformational Development Indicators Field Guide (http://www.worldvision.org; tool not yet available online). Developed by World Vision. Level: community. Methods: qualitative and quantitative; external and self-assessment; multiple tools.

Communication for Social Change: An Integrated Model for Measuring the Process and Its Outcomes (http://164.109.175.24/Documents/540/socialchange.pdf). Developed by the Center for Communications Programs (CCP)/Johns Hopkins University. Level: community. Methods: qualitative and quantitative; external and self-assessment; multiple tools.
Institutional Assessment Instrument (IAI) (http://www.worldlearning.org or http://www.worldlearning.org/pidt/docs/wl_instcape.pdf). Developed by World Learning Project Inc. Level: organization. Methods: qualitative and quantitative; external assessment; multiple instruments.

Institutional Development Assessment (IDA) (http://www.fhasfps.org/documentsdownload/Institutional%20Development%20Assessments.PDF). Developed by SFPS. Level: organization. Methods: qualitative and quantitative; external assessment; multiple instruments.

Organizational Capacity Assessment Tool (OCAT) (http://www.pactworld.org). Developed by Pact/Ethiopia. Level: organization. Methods: quantitative; self-assessment; multiple instruments.

Management/Financial Sustainability Scale (MFSS) (http://www.pasca.org). Developed by PASCA. Level: organization. Methods: quantitative; external and self-assessment; single instrument.

CCP/Johns Hopkins University organization-level tool. Methods: quantitative; external and self-assessment; multiple instruments.
Participatory Organizational Evaluation Tool (POET). Developed by Education Development Center and PACT. Level: organization. Methods: qualitative and quantitative; self-assessment; single instrument. POET is an organizational capacity assessment tool used to measure and profile organizational capacities and consensus levels in seven critical areas and to assess, over time, the impact of these activities on organizational capacity (benchmarking). POET is based on a methodology called PROSE (Participatory, Results-Oriented Self-Evaluation), a methodology for assessing and enhancing organizational capacities. PROSE is designed for use by service organizations, schools, and government units. It is suitable for assessing capacity and catalyzing organizational change in relation to such concerns as practices related to exceeding customer expectations, organizational effectiveness in achieving mission, community participation, equity, decentralization, and managerial effectiveness.

National effort indexes. Developed by The Futures Group/Population Council. Levels: system (national), organization. Methods: quantitative and qualitative; external assessment; single instrument. Each index measures national-level effort and identifies strengths and weaknesses of those efforts.
STEP 6
Develop an
Implementation and
Dissemination Plan
The final step in planning for capacity-building M&E is to develop an implementation plan to monitor and evaluate capacity. At a minimum, the implementation plan should include a timetable for data gathering and review of data, individual responsibilities, a dissemination strategy, and a budget. In practice, capacity measurement, as a reflection of capacity development, is likely to be an iterative process rather than a perfunctory before-and-after look at capacity. Experienced
evaluators (Horton et al, 2000; Lusthaus,
1999; Earl et al., 2001; Morgan, 1997) recommend regular review and discussion of
monitoring results with stakeholders to guide
the process of capacity development and encourage ownership of the monitoring process.
Setting aside enough time to present the results periodically and allow for discussion and
feedback from the stakeholders will greatly
enhance data interpretation and the impact of
the evaluation itself. As Morgan (1997) notes, "Indicators by themselves provide few answers. The information they produce must be
Part 4
Annex A

I. Legal/Policy Support
Objective: National FP/RH service guidelines and training are official.
Indicator 1: Existence of updated official FP/RH service and training guidelines.
Scoring: 0 = nonexistent guidelines (both service and training), to 4 = complete/updated, disseminated, and official guidelines. Related scales: 0 = nonexistent written policy, to 4 = written/updated, disseminated, and official; 0 = no mention, to 4 = mentioned on several private and at least two public occasions.

II. Resources
Financial. Objective: Existence of a sufficient and diversified training budget.
Scoring: 0 = no in-country training budgets (funds are allocated on an ad hoc basis), to 4 = 20% or more of the training budget comes from external assistance; 0 = budget does not cover all aspects of training, to 4 = budget covers all training costs.
Venues/Equipment. Objective: Adequate venues.
Scoring: 0 = nonexistent venue (incrementally scoring coverage, capacity, and/or quality of venue), to 4 = fully accessible, high-quality, and sufficient-capacity local venue for training events.
Materials, equipment, and supplies (MES). Objectives: Appropriate and cost-efficient MES (including AV equipment and teaching aids); systems are in place for replacement and upgrading of MES.
Indicator 7: MES are pertinent, updated, sufficient, and adapted to local culture (including locally produced).
Scoring: 0 = MES are insufficient and/or outdated, to 4 = MES of standard technical and material quality and readability are available for each event participant; 0 = there are no or insufficient means for replacing MES, to 4 = the means exist to produce, replace, and upgrade MES.

Human. Objective: Trainers/preceptors have updated and standardized technical and presentation knowledge and skills.
Indicator 9: Trainers/preceptors are regularly formed (TOT), take periodic refresher courses, and pass standard tests on FP/RH technical and presentation knowledge and skills.
Scoring: 0 = trainers/preceptors are not regularly formed and/or do not update their technical and presentation knowledge and skills, to 4 = trainers/preceptors are constantly formed and undergo periodic (at least once every two years) refresher courses. Related scales: 0 = no training plan performance (training conducted on an ad hoc basis), to 4 = training plans are drawn up periodically (at least annually) and reviewed; 0 = no standard training curriculum, or the curriculum is inadequate/outdated with different ones used by different institutions, to 4 = there is a standard curriculum, reviewed periodically (at least once every two years) and used officially by training institutions.
IV. Organization
Leadership. Objective: Vision of training as a means to improve services.
Indicator 12: Training plans are linked with quality of care and increased service access.
Scoring: 0 = providers' training plans are not coupled with service and quality-of-care objectives, to 4 = training plans form part of the quality-of-care and service improvement strategies; 0 = training is not part of the organization's strategic plan, to 4 = training is part of the organization's long-term (multiannual) strategic plan.
Objective: Promotion of public-private collaboration.
Scoring: 0 = no evidence of public-private collaboration, to 4 = evidence of public-private collaboration.
Infrastructure. Objective: Existence of decentralized training units in all areas.
Scoring: 0 = no decentralized training units (even if there is one at the central level), to 4 = active training units at central and peripheral levels; 0 = training is not coupled with providers' improvement objectives, to 4 = training is part of HR development and performance.
V. Community Development and Participation
Technical capability. Objective: Technological transfer and development through networking, evaluation, and research.
Indicator 19: Contacts with other training institutions, and institutional evaluation and research, feed into training improvement (e.g., trainee selection, training contents and formats).
Scoring: 0 = no/little use of evaluation, research, or information from other training institutions to improve and update training capabilities, to 4 = extensive use of internal and external data and resources for improvement.
Track record. Objective: Proven capacity to conduct/replicate courses autonomously.
Scoring: 0 = no replica or independent courses carried out by the organization (or only done with foreign assistance), to 4 = evidence of ongoing replication/expansion of courses with institutional resources.
Community involvement scoring: 0 = no/little community involvement, to 4 = extensive involvement/participation in provider training and/or performance assessment; organized demand/petitions to improve services, etc.
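Indicator scores on 0-4 scales of this kind can be rolled up into dimension-level and overall summaries. The sketch below is illustrative only; the unweighted averaging rule, the dimension names, and the scores are assumptions, since the annex does not prescribe an aggregation method:

```python
# Hypothetical 0-4 indicator scores grouped by dimension
# (names and values are illustrative, not assessment data).
scores = {
    "Legal/Policy Support": [3.7, 3.0],
    "Resources": [2.2, 1.7, 2.8],
    "Organization": [2.0, 2.5],
}

def dimension_averages(by_dim):
    """Unweighted mean score per dimension, on the same 0-4 scale."""
    return {dim: sum(vals) / len(vals) for dim, vals in by_dim.items()}

def overall_average(by_dim):
    """Unweighted mean across all indicators, pooled over dimensions."""
    all_scores = [s for vals in by_dim.values() for s in vals]
    return sum(all_scores) / len(all_scores)
```

Comparing such averages across assessment rounds gives the kind of indicator-by-indicator trend shown in Annex B.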
Annex B
Figure: 1997-99 scores for each of the 20 indicators and the average score (indicator 21), plotted on a logarithmic scale. Individual indicator scores range from 1.0 to 4.0 on the 0-4 scale, with the 1997 values generally below the 1999 values.
Annex C
2. INTRAH/Prime II
http://www.prime2.org/
The PRIME II Project is a partnership combining leading global health care organizations dedicated to improving the quality and accessibility of family planning and reproductive health care
services throughout the world. Funded by USAID and implemented by the University of North
Carolina at Chapel Hill School of Medicine, PRIME II focuses on strengthening the performance
of primary care providers as they work to improve services in their communities. To accomplish
its goals, PRIME II applies innovative training, learning, and performance improvement approaches in collaboration with host-country colleagues to support national reproductive health goals and priorities.
Since 1997, the PRIME Project has been committed to applying the guiding principles of performance improvement (PI) to real-world reproductive health contexts. Work in Yemen, Burkina
Faso, the Dominican Republic, and India indicates that PI users like the clear, highly participatory process and the focus on cost-effective interventions to address the most important problem
areas.
This interactive Website, created by the PRIME II Project and INTRAH, presents a revised edition of Performance Improvement Stages, Steps and Tools, first issued in print form in 2000.
INTRAH/PRIME II published this site online in August 2002 (www.intrah.org/sst/).
For more information, please contact Marc Luoma by email ([email protected]).
3. JHPIEGO
http://www.jhpiego.org
Through advocacy, education and performance improvement, JHPIEGO helps host-country policymakers, educators and trainers increase access and reduce barriers to quality health services,
especially family planning and maternal and neonatal care, for all members of their society.
JHPIEGO's work is carried out in an environment that recognizes individual contributions and encourages innovative and practical solutions to meet identified needs in low-resource settings throughout Africa, Asia, and Latin America and the Caribbean.
TIMS is a computer-based tool to track and monitor training efforts. It is a Microsoft Access 2000 database application that stores each person's skills, qualifications, and location, the courses they have taken and taught, and information about training course content, timing, participants, and trainers. In the standard form, TIMS tracks the following training results over a period of time:
- Which providers from which service sites have been trained, and in what topic(s)
- Which trainers have been conducting courses, and how many people they have trained
- How many courses have been held, summarized by training center, district, or province
TIMS allows senior and mid-level program managers to monitor the variety of training activities and track results from a number of perspectives. TIMS is designed to be part of a country's training information system, replacing paper-based reporting and aggregation with a computer database. Ministries of Health, Planning, and/or Finance can use TIMS to supplement service information for policy decisions on training, retraining, and provider deployment.
For additional information about TIMS, contact Catherine Schenck-Yglesias by e-mail
([email protected]).
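The kind of relational tracking TIMS describes can be sketched with a small in-memory database. All table names, column names, and data below are illustrative assumptions, not TIMS's actual schema:

```python
import sqlite3

# Minimal relational sketch of a TIMS-style training information system.
# Table and column names are hypothetical, chosen only to illustrate the
# "who was trained, in what, and by whom" summaries described above.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE providers  (id INTEGER PRIMARY KEY, name TEXT, district TEXT);
CREATE TABLE courses    (id INTEGER PRIMARY KEY, topic TEXT, trainer TEXT);
CREATE TABLE attendance (provider_id INTEGER, course_id INTEGER);
""")
con.executemany("INSERT INTO providers VALUES (?, ?, ?)", [
    (1, "A. Mwangi", "Central"), (2, "B. Osei", "North"), (3, "C. Diallo", "North"),
])
con.executemany("INSERT INTO courses VALUES (?, ?, ?)", [
    (1, "Family planning counseling", "T. Kamau"),
    (2, "Infection prevention", "T. Kamau"),
])
con.executemany("INSERT INTO attendance VALUES (?, ?)",
                [(1, 1), (2, 1), (2, 2), (3, 2)])

# One of the summaries a tool like TIMS produces:
# providers trained, by district and topic.
rows = con.execute("""
    SELECT p.district, c.topic, COUNT(DISTINCT a.provider_id) AS trained
    FROM attendance a
    JOIN providers p ON p.id = a.provider_id
    JOIN courses   c ON c.id = a.course_id
    GROUP BY p.district, c.topic
    ORDER BY p.district, c.topic
""").fetchall()
for district, topic, n in rows:
    print(district, topic, n)
```

The same grouped query, run by trainer instead of district, would answer the second question in the list above (which trainers have trained how many people).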
9. Pact
http://www.pactworld.org/services/oca/index_oca.htm
http://www.pactworld.org/
Pact's unique methodology for organizational capacity assessment and strengthening (OCA)
helps organizations anticipate and overcome the greatest barriers to organizational change and
growth. Through a guided self-assessment and planning process, organizations reflect upon their
performance and select the tools and strategies they need to build capacity and broaden impact.
Pact's OCA is the product of ten years of research and field practice in partnership with the Education Development Center and USAID's Office of Private & Voluntary Cooperation. Hundreds
of local and international NGOs, private-sector corporations, and municipal governments around
the world have used this methodology.
OCA is a four-stage process that includes:
- Participatory tool design that empowers organizations to define the critical factors that influence their performance and to identify relevant indicators for evaluating their competency.
- Guided self-assessment that leads employees, board members, and constituents through structured discussions followed by individual scoring on a series of rigorous performance indicators.
- Reassessment for continual learning that allows organizations to monitor change, track the effectiveness of their capacity-building efforts, and integrate new learning as their needs change and capabilities increase.
For more information on Pact's Organizational Assessment, please contact Betsy Kummer by
email ([email protected]).
13. Capacity.org
http://www.capacity.org/index_en.html
Capacity.org is a Website dedicated to advancing the policy and practice of capacity building in
international development cooperation. Issue 14 of the web-based magazine Capacity.org presents highlights of the UNDP initiative on capacity building and related information on the policy
and practice of capacity building in international development cooperation (also see UNDP website at http://www.undp.org/dpa/publications/capacity.html).
This site presents the work of a global project, "Evaluating Capacity Development (The ECD Project)," which supports practitioners who are evaluating capacity development efforts in their own organizations in Africa, Asia, and Latin America. National and international research and development organizations are participating in the ECD Project, which is supported by five donor agencies and coordinated by ISNAR.
The site features the ECD Project's activities since 2000 and its results to date. It provides access to project reports and events. Lists of useful concepts and terms, bibliographic references, and Internet resources are also provided for use by capacity developers and evaluators.
17. EngenderHealth
http://www.engenderhealth.org
EngenderHealth works worldwide to improve the lives of individuals by making reproductive
health services safe, available, and sustainable. EngenderHealth provides technical assistance, training, and information, with a focus on practical solutions that improve services where resources are scarce, in partnership with governments, institutions, and health care professionals.
EngenderHealth's trademarked COPE (client-oriented, provider-efficient services) is a set of
flexible self-assessment tools that assist providers and supervisors to evaluate and improve the
care offered in clinic and hospital settings. Using self-assessment, client interviews, client-flow
analysis and facilitated discussion, staff identify areas needing attention and develop their own
solutions and action plans to address the issues. Originally developed for family planning services, COPE has been successfully applied in a variety of healthcare settings all over the world for
over 10 years. With the growing popularity of COPE, healthcare providers from related disciplines asked if the tools could be adapted to a wider range of health services. EngenderHealth
has answered the demand by creating these new products: COPE for Maternal Health Services
and Community COPE: Building Partnership with the Community to Improve Health Services.
11. Sociometrics
http://www.socio.com/eval.htm
Sociometrics offers a wide variety of evaluation products and services to professionals across the
world. Their evaluation workshops and training services, technical publications, evaluation tools,
and data sets are all designed to assist practitioners, administrators, evaluators, and funders of
social interventions to design and implement successful evaluation systems.
For additional information, contact Dr. Shobana Raghupathy by email ([email protected]) or
by phone at 1.800.846.3475 x209.
13. UNICEF
http://www.unicef.org/reseval/
This site lists some of the monitoring and evaluation tools recently developed by UNICEF and
its partners, including the UNICEF Guide to Monitoring and Evaluation.
Annex D

Performance Improvement in RH

What is it?
- Helps managers decide what PI strategy to use and whether performance changed as a result of the PI process.
- Guides planners and evaluators in viewing capacity systematically and identifying all areas that affect performance.
- Encourages understanding of capacity in the health sector as a system that includes four interdependent levels: the system, organizations, health personnel, and individuals and communities.

[The remaining rows of this comparison table, "When to use it?", "Focus of study/action", "Who is involved?", and "View of performance", were not recoverable.]
Glossary
Capacity is the ability to carry out stated objectives. It has also been described as the stock of
resources available to an organization or system as well as the actions that transform those resources into performance.
Capacity building (or capacity development) is a process that improves the ability of
a person, group, organization, or system to meet objectives or to perform better.
Capacity evaluation is normally more complex than monitoring, and is conducted to gain understanding of the relationship between capacity-building interventions and capacity outcomes,
or the links between capacity and performance variables.
Capacity mapping is a structured process of thinking through the role capacity plays in ensuring performance by developing a conceptual framework that is specific to a particular capacity-building intervention. During capacity mapping, all the possible factors of capacity that influence performance, and the relationships between them, must be identified. Once the factors are all laid out, the program staff or evaluator can focus on those that are most essential for the evaluation.
Capacity monitoring is normally used to understand the effectiveness and efficiency of a capacity-building intervention during implementation (i.e., is capacity improving, and at what cost?), to contribute to strategic or operational decisions related to capacity building, or to enable a periodic look at a program or system.
Cold chain: The system that ensures vaccine viability from manufacturing to delivery.
Contextual factors: External factors relating to the economic, social, cultural, and political environment; these are normally outside the control of most health sector actors.
Impact: Long-term results achieved through improved performance of the health system, namely a sustainable health system and improved health status. Impact measures are not addressed in capacity-building M&E.
Impact evaluation: An evaluation that uses experimental or quasi-experimental study design to
attribute changes in capacity or performance to program interventions. Impact evaluation is not
appropriate or useful in the context of capacity-building M&E because of the difficulty of quantifying many elements of capacity and attributing capacity change to any single intervention or
even a range of them.
Input: Set of resources, including service personnel, financial resources, space, policy orientation, and program service recipients, that are the raw materials that contribute to capacity at each
level (system, organization, health personnel, and individual/community).
Outcome: Set of results that represent capacity (an ability to carry out stated objectives), often
expected to change as a direct result of capacity-building intervention.
Output: Set of products anticipated through the execution of practices, activities, or functions.
Performance: Set of results that represent productivity and competence related to an established
objective, goal or standard. The four capacity levels together contribute to overall system-level
performance.
Performance Improvement (PI): A process for enhancing employee and organizational performance that employs an explicit set of methods and strategies.
Results are achieved through a systematic process that considers the institutional context; describes desired performance; identifies gaps between desired and actual performance; identifies
root causes; selects, designs and implements interventions to fix the root causes; and measures
changes in performance. PI is a continuously evolving process that uses the results of monitoring
and feedback to determine whether progress has been made and to plan and implement additional
appropriate changes.
Process: Set of activities, practices, or functions by which the resources are used in pursuit of the
expected results.
Theory of action: Part of a capacity-building plan that includes common objectives and shared
concepts. A coherent theory of action agreed on by the key groups involved in the process states
how activities are expected to produce intermediate and longer-term results and benefits. Without a theory of action, a capacity development effort could become a fragmented exercise in
wishful thinking, rather than a coherent initiative with a high probability of success (Horton,
2001).
Triangulation: The use of multiple data sources or methods to validate findings, discover errors
or inconsistencies, and reduce bias.
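Triangulation can be as simple as comparing scores for the same indicators collected by two methods and flagging disagreements for follow-up. The indicator names, scores, and tolerance below are hypothetical:

```python
# A small illustration of triangulation: compare 0-4 capacity scores
# from two data-collection methods and flag large discrepancies.
# Indicator names, scores, and the tolerance are hypothetical examples.

def flag_discrepancies(method_a, method_b, tolerance=1.0):
    """Return indicators whose scores differ by more than `tolerance`."""
    return sorted(
        name for name in method_a
        if abs(method_a[name] - method_b.get(name, method_a[name])) > tolerance
    )

self_assessment = {"training plans": 3.5, "supervision": 3.0, "community links": 2.0}
external_review = {"training plans": 3.2, "supervision": 1.5, "community links": 2.2}

print(flag_discrepancies(self_assessment, external_review))  # prints ['supervision']
```

A flagged indicator does not say which method is wrong; it marks a finding that needs a third source or further discussion before it is reported.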
Bibliography
Africa Bureau, Office of Sustainable Development USAID. 1999. Health and Family Planning
Indicators: Measuring Sustainability, Volume II. Washington: USAID.
Ampomah, K. 2000. PRIME's Technical Report 20: An Assessment of the Impact of PRIME's Interventions on the Training Capacity and Reproductive Health Service Delivery in Ghana. Chapel Hill, NC: INTRAH.
Bertrand, J. and Escudero, G. 2002. Compendium of Indicators for Evaluating Reproductive
Health Programs, Volume 1. Chapel Hill: MEASURE Evaluation Project.
Brown, L., LaFond, A., Macintyre, K. 2001. Measuring Capacity Building. Chapel Hill:
MEASURE Evaluation Project.
Catotti, D. 1999. PRIME's Technical Report 13: Improving the Quality and Availability of Family Planning and Reproductive Health Services at the Primary Care Level: Institutional Capacity Building in the El Salvador Ministry of Health. Chapel Hill: INTRAH.
Development Resources Team, World Vision. 2002. Transformational Development Indicators
Field Guide. Washington, DC: World Vision.
Earl, S., Carden, F., and Smutylo, T. 2001. Outcome Mapping: Building Learning and Reflection
into Development Programs. Ottawa: International Development Research Centre.
Eng, E. and Parker, E. 1994. Measuring Community Competence in the Mississippi Delta: The
Interface between Program Evaluation and Empowerment. Health Education Quarterly 21 (2):
199-220.
Figueroa, M.E., Kincaid D.L., Pani, M. and Lewis, G. 2002. Communication for Social Change:
An Integrated Model for Measuring the Process and Its Outcomes. Communication for Social
Change Working Paper Series, No. 1. Baltimore: Johns Hopkins Center for Communications
Programs.
Fort, A. 1999. PRIME's Technical Report 16: Capacity Building in Training: A Framework and Tool for Measuring Progress. Chapel Hill: INTRAH.
Franco, L.M., Bennett, S. and Kanfer, R. 2002. Health Sector Reform and Public Sector Health
Worker Motivation: A Conceptual Framework. Social Science and Medicine 54: 1255-1266.
Goodman, R.M., Speers, M.A., McLeroy, K., Fawcett, S., Kegler, M., Parker, E., et al. 1998.
Identifying and Defining the Dimensions of Community Capacity to Provide a Basis for Measurement. Health Educ Behav 25 (3): 258-278.
Gubbels, P. and Koss, C. 2000. From the Roots Up: Strengthening Organizational Capacity through Guided Self-Assessment. Oklahoma City: World Neighbors.
Lake, S., Daura, M., and Mabanddhala, M., et al. 2000. Analyzing the Process of Health
Financing Reform in South Africa and Zambia. Zambia Country Reports. Major Applied Research Technical Paper 1. Bethesda: Partnerships for Health Reform Project.
Lande, R.E. 2002. Performance Improvement. Population Reports, Series J, No. 52, Baltimore:
The Johns Hopkins Bloomberg School of Public Health, Population Information Program.
Luoma, M. 2000. PRIME's Technical Report 19: Dominican Republic Performance Improvement Project Evaluation. Chapel Hill: INTRAH.
Lusthaus, C., Adrien, M., Andersen, G., and Carden, F. 1999. Enhancing Organizational Performance: A Toolbox for Self-Assessment. Ottawa: International Development Research Centre.
Mackay, R. and Horton, D. 2002. Capacity Development in Planning, Monitoring, and Evaluation: Results of an Evaluation. Briefing Paper No. 51. ISNAR.
Management Sciences for Health. 1996. Planning for Sustainability: Assessing the Management
Capabilities of Your Organization. The Family Planning Manager. FPMD.
McCaffrey, J., Luoma, M., Newman, C., Rudy, S., Fort, A., and Rosensweig, F. 1999. PI Stages, Steps and Tools. Chapel Hill: INTRAH.
MEASURE Evaluation. 1998. The Needs Assessment Validation Study and 1998 Institutional
Capacity Assessment, PASCA Project. Chapel Hill: MEASURE Evaluation Project.
MEASURE Evaluation. 2001. Mapping Capacity in the Health Sector: Application of the
MEASURE Conceptual Framework in Measuring Capacity Building for a Complex Nongovernmental Organization. Draft.
Moore, M., Brown, L., and Honan, J. 2001. Toward a Public Value Framework for Accountability and Performance Management for International Non-Governmental Organizations. Presented at the Hauser Center/Keio University Workshop on Accountability for International Nongovernmental Organizations, November 2-11, 2001.
Morgan, P. 1997. The Design and Use of Capacity Development Indicators. CIDA.
Murray, C.J.L. and Frenk, J. 1999. A WHO Framework for Health System Performance Assessment. Geneva: World Health Organization.
Oakley, P. 2001. Evaluating Empowerment: Reviewing the Concept and Practice. INTRAC
NGO Management and Policy Series No.13. London: INTRAC.
Partnerships for Health Reform. 1997. Measuring Results of Health Sector Reform for System
Performance: A Handbook of Indicators. Bethesda: Partnerships for Health Reform.