
2018

DIPLOMA IN MONITORING AND EVALUATION
MODULE 2
Module two of the Diploma in Monitoring and Evaluation
TABLE OF CONTENTS
Indicators
Project Management Techniques in Monitoring
Understanding the Initiative or the Project
Stakeholder Analysis
Importance of Monitoring and Evaluation
Cluster Development
Community Based Participatory Research
Participatory Evaluation
Why You Should Have an Evaluation Plan

MODULE 2

Chapter 1

INDICATORS

Components and Indicator Monitoring

How will we know when we have achieved our desired outcomes? After examining the importance of

setting achievable and well-defined outcomes, and the issues and process involved in agreeing upon those

outcomes, we turn next to the selection of key indicators. Outcome indicators are not the same as

outcomes. Indicators are the quantitative or qualitative variables that provide a simple and reliable means

to measure achievement, to reflect the changes connected to an intervention, or to help assess the

performance of an organization against the stated outcome. Indicators should be developed for all levels

of the results-based M&E system, meaning that indicators are needed to monitor progress with respect to

inputs, activities, outputs, outcomes, and goals. Progress needs to be monitored at all levels of the system

to provide feedback on areas of success and areas in which improvement may be required.

Outcome indicators help to answer two fundamental questions: “How will we know success or

achievement when we see it? Are we moving toward achieving our desired outcomes?” These are the

questions that are increasingly being asked of governments and organizations across the globe.

Consequently, setting appropriate indicators to answer these questions becomes a critical part of our

10-step model.

Developing key indicators to monitor outcomes enables managers to assess the degree to which intended

or promised outcomes are being achieved. Indicator development is a core activity in building a

results-based M&E system. It drives all subsequent data collection, analysis, and reporting. There are also

important political and methodological considerations involved in creating good, effective indicators.

Indicators Are Required for All Levels of Results-Based M&E Systems

Setting indicators to measure progress in inputs, activities, outputs, outcomes, and goals is important in

providing necessary feedback to the management system. It will help managers identify those parts of an

organization or government that may, or may not, be achieving results as planned. By measuring

performance indicators on a regular, determined basis, managers and decision makers can find out

whether projects, programs, and policies are on track, off track, or even doing better than expected against

the targets set for performance. This provides an opportunity to make adjustments, correct course, and

gain valuable institutional and project, program, or policy experience and knowledge. Ultimately, of

course, it increases the likelihood of achieving the desired outcomes.

Translating Outcomes into Outcome Indicators

When we consider measuring “results,” we mean measuring outcomes, rather than only inputs and

outputs. However, we must translate these outcomes into a set of measurable performance indicators. It is

through the regular measurement of key performance indicators that we can determine if outcomes are

being achieved.

For example, in the case of the outcome “to improve student learning,” an outcome indicator regarding

students might be the change in student scores on school achievement tests. If students are continually

improving scores on achievement tests, it is assumed that their overall learning outcomes have also

improved. Another example is the outcome “reduce at-risk behavior of those at high risk of contracting

HIV/AIDS.” Several direct indicators might be the measurement of different risky behaviors for those

individuals most at risk.

As with agreeing on outcomes, the interests of multiple stakeholders should also be taken into account

when selecting indicators. We previously pointed out that outcomes need to be translated into a set of

measurable performance indicators. Yet how do we know which indicators to select? The selection

process should be guided by the knowledge that the concerns of interested stakeholders must be

considered and included. It is up to managers to distill stakeholder interests into good, usable performance

indicators. Thus, outcomes should be disaggregated to make sure that indicators are relevant across the

concerns of multiple stakeholder groups—and not just a single stakeholder group. Just as important, the

indicators have to be relevant to the managers, because the focus of such a system is on performance and

its improvement.

The “CREAM” of Good Performance Indicators

The “CREAM” of selecting good performance indicators is essentially a set of criteria to aid in

developing indicators for a specific project, program, or policy (Schiavo-Campo 1999, p. 85).

Performance indicators should be clear, relevant, economic, adequate, and monitorable. CREAM amounts
to an insurance policy, because the more precise and coherent the indicators, the better focused the
measurement strategies will be.

Clear: Precise and unambiguous
Relevant: Appropriate to the subject at hand
Economic: Available at a reasonable cost
Adequate: Provide a sufficient basis to assess performance
Monitorable: Amenable to independent validation


If any one of these five criteria is not met, formal performance indicators will suffer and be less useful.

Performance indicators should be as clear, direct, and unambiguous as possible. Indicators may be

qualitative or quantitative. In establishing results-based M&E systems, however, we advocate beginning

with a simple and quantitatively measurable system rather than inserting qualitatively measured indicators

upfront.

Quantitative indicators should be reported in terms of a specific number (number, mean, or median) or

percentage. “Percents can also be expressed in a variety of ways, e.g., percent that fell into a particular

outcome category . . . percent that fell above or below some targeted value . . . and percent that fell into

particular outcome intervals . . .” (Hatry 1999, p. 63). “Outcome indicators are often expressed as the

number or percent (proportion or rate) of something. Programs should consider including both forms. The

number of successes (or failures) in itself does not indicate the rate of success (or failure)—what was not

achieved. The percent by itself does not indicate the size of the success. Assessing the significance of an

outcome typically requires data on both number and percent” (Hatry 1999).
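To illustrate the point with invented figures, a small calculation can report both forms side by side; the participant and success counts below are hypothetical placeholders:

# Hypothetical outcome data: both the count and the rate are reported,
# since either one alone gives an incomplete picture of performance.
participants = 250   # people reached by the programme (assumed figure)
successes = 180      # people who achieved the targeted outcome (assumed figure)

rate = successes / participants * 100
print(f"{successes} of {participants} participants achieved the outcome ({rate:.1f}%)")

The number alone (180) hides the failure rate, while the percent alone (72.0%) hides the scale of the programme; reporting both gives the fuller picture described above.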

“Qualitative indicators/targets imply qualitative assessments . . . [that is], compliance with, quality of,

extent of and level of . . . . Qualitative indicators . . . provide insights into changes in institutional

processes, attitudes, beliefs, motives and behaviors of individuals” (U.N. Population Fund 2000, p. 7). A

qualitative indicator might measure perception, such as the level of empowerment that local government

officials feel to adequately do their jobs. Qualitative indicators might also include a description of a

behavior, such as the level of mastery of a newly learned skill. Although there is a role for qualitative

data, it is more time consuming to collect, measure, and distill, especially in the early stages. Furthermore,

qualitative indicators are harder to verify because they often involve subjective judgments about

circumstances at a given time.

Qualitative indicators should be used with caution. Public sector management is not just about

documenting perceptions of progress. It is about obtaining objective information on actual progress that

will aid managers in making more well-informed strategic decisions, aligning budgets, and managing

resources. Actual progress matters because, ultimately, M&E systems will help to provide information

back to politicians, ministers, and organizations on what they can realistically expect to promise and

accomplish. Stakeholders, for their part, will be most interested in actual outcomes, and will press to hold

managers accountable for progress toward achieving the outcomes.

Performance indicators should be relevant to the desired outcome, and not affected by other issues

tangential to the outcome. The economic cost of setting indicators should be considered. This means that

indicators should be set with an understanding of the likely expense of collecting and analyzing the data.

For example, in the National Poverty Reduction Strategy Paper (PRSP) for the Kyrgyz Republic, there are

about 100 national and sub national indicators spanning more than a dozen policy reform areas. Because

every indicator involves data collection, reporting, and analysis, the Kyrgyz government will need to

design and build 100 individual M&E systems just to assess progress toward its poverty reduction

strategy. For a poor country with limited resources, this will take some doing. Likewise, in Bolivia the

PRSP initially contained 157 national-level indicators. It soon became apparent that building an M&E

system to track so many indicators could not be sustained. The present PRSP draft for Bolivia now has 17

national level indicators.

Every indicator has cost and work implications. In essence, when we explore building M&E systems, we are considering a new M&E system for every single indicator. Therefore, indicators should be chosen carefully and judiciously.

Indicators ought to be adequate. They should not be too indirect, too much of a proxy, or so abstract that

assessing performance becomes complicated and problematic.

Indicators should be Monitorable, meaning that they can be independently validated or verified, which is

another argument in favor of starting with quantitative indicators as opposed to qualitative ones.

Indicators should be reliable and valid to ensure that what is being measured at one time is what is also

measured at a later time— and that what is measured is actually what is intended.

Caution should also be exercised in setting indicators according to the ease with which data can be

collected. “Too often, agencies base their selection of indicators on how readily available the data are, not

how important the outcome indicator is in measuring the extent to which the outcomes sought are being

achieved” (Hatry 1999, p. 55).

Use of Proxy Indicators

You may not always be precise with indicators, but you can strive to be approximately right. Sometimes it

is difficult to measure the outcome indicator directly, so proxy indicators are needed. Indirect, or proxy,

indicators should be used only when data for direct indicators are not available, when data collection will

be too costly, or if it is not feasible to collect data at regular intervals. However, caution should be

exercised in using proxy indicators, because there has to be a presumption that the proxy indicator is

giving at least approximate evidence on performance (box 3.1).

For example, if it is difficult to conduct periodic household surveys in dangerous housing areas, one could

use the number of tin roofs or television antennas as a proxy measure of increased household income.

These proxy indicators might be correctly tracking the desired outcome, but there could be other

contributing factors as well; for example, the increase in income could be attributable to drug money, or

income generated from the hidden market, or recent electrification that now allows the purchase of

televisions. These factors would make attribution to the policy or program of economic development

more difficult to assert.

The Pros and Cons of Using Predesigned Indicators

Predesigned indicators are those indicators established independently of an individual country,

organization, program, or sector context. For example, a number of development institutions have created

indicators to track development goals, including the following:

• MDGs

• The United Nations Development Programme’s (UNDP’s)

Sustainable Human Development goals

• The World Bank’s Rural Development Handbook

• The International Monetary Fund’s (IMF’s) Financial Soundness Indicators.

The MDGs contain eight goals, with attendant targets and indicators assigned to each. For example, Goal

4 is to reduce child mortality, while the target is to reduce by two-thirds the under-five mortality rate

between the years 1990 and 2015. Indicators include

a) under-five mortality rate;

b) infant mortality rate; and

c) proportion of one-year-old children immunized against measles.

In light of regional financial crises in various parts of the world, the IMF is in the process of devising a set

of Financial Soundness Indicators.

These are indicators of the current financial health and soundness of a given country’s financial

institutions, corporations, and households. They include indicators of capital adequacy, asset quality,

earnings and profitability, liquidity, and sensitivity to market risk (IMF 2003).

On a more general level, the IMF also monitors and publishes a series of macroeconomic indicators that

may be useful to governments and organizations. These include output indicators, fiscal and monetary

indicators, balance of payments, external debt indicators, and the like.

There are a number of pros and cons associated with using predesigned indicators:

Pros:

• They can be aggregated across similar projects, programs, and policies.

• They reduce costs of building multiple unique measurement systems.

• They make possible greater harmonization of donor requirements.

Cons:

• They often do not address country-specific goals.

• They are often viewed as imposed, as coming from the top down.

• They do not promote key stakeholder participation and ownership.

• They can lead to the adoption of multiple competing indicators.

There are difficulties in deciding on what criteria to employ when one chooses one set of predesigned

indicators over another.

Predesigned indicators may not be relevant to a given country or organizational context. There may be

pressure from external stakeholders to adopt predesigned indicators, but it is our view that indicators

should be internally driven and tailored to the needs of the organization and to the information

requirements of the managers, to the extent possible. For example, many countries will have to use some

predesigned indicators to address the MDGs, but each country should then disaggregate those goals to be

appropriate to their own particular strategic objectives and the information needs of the relevant sectors.

Ideally, it is best to develop indicators to meet specific needs while involving stakeholders in a

participatory process. Using predesigned indicators can easily work against this important participatory

element.

Constructing Indicators

Constructing indicators takes work. It is especially important that competent technical, substantive, and

policy experts participate in the process of indicator construction. All perspectives need to be taken into

account—substantive, technical, and policy—when considering indicators. Are the indicators

substantively feasible, technically doable, and policy relevant? Going back to the example of an outcome

that aims to improve student learning, it is very important to make sure that education professionals,

technical people who can construct learning indicators, and policy experts who can vouch for the policy

relevance of the indicators, are all included in the discussion about which indicators should be selected.

Indicators should be constructed to meet specific needs. They also need to be a direct reflection of the

outcome itself. And over time, new indicators will probably be adopted and others dropped. This is to be

expected. However, caution should be used in dropping or modifying indicators until at least three

measurements have been taken.

Taking at least three measurements helps establish a baseline and a trend over time. Two important

questions should be answered before changing or dropping an indicator: Have we tested this indicator

thoroughly enough to know whether it is providing information to effectively measure against the desired

outcome? Is this indicator providing information that makes it useful as a management tool?

It should also be noted that in changing indicators, baselines against which to measure progress are also

changing. Each new indicator needs to have its own baseline established the first time data are collected

for it.

In summary, indicators should be well thought through. They should not be changed or switched often

(and never on a whim), as this can lead to chaos in the overall data collection system. There should be

clarity and agreement in the M&E system on the logic and rationale for each indicator from top level

decision makers on to those responsible for collecting data in the field.

Performance indicators can and should be used to monitor outcomes and provide continuous feedback and

streams of data throughout the project, program, or policy cycle. In addition to using indicators to

monitor inputs, activities, outputs, and outcomes, indicators can yield a wealth of performance

information about the process of and progress toward achieving these outcomes. Information from

indicators can help to alert managers to performance discrepancies, shortfalls in reaching targets, and

other variabilities or deviations from the desired outcome.

Thus, indicators provide organizations and governments with the opportunity to make midcourse

corrections, as appropriate, to manage toward the desired outcomes. Using indicators to track process and

progress is yet another demonstration of the ways that a results-based M&E system can be a powerful

public management tool.

The central function of any performance measurement process is to provide regular, valid data on indicators of performance outcomes.
Chapter 2

PROJECT MANAGEMENT TECHNIQUES OF MONITORING

In the past, a company typically decided to undertake a project effort, assigned the project and the

"necessary" resources to a carefully selected individual and assumed they were using some form of

project management. Organizational implications were of little importance. Although the basic concepts

of project management are simple, applying these concepts to an existing organization is not. Richard P.

Olsen, in his article "Can Project Management Be Defined?" defined project management as "…the

application of a collection of tools and techniques…to direct the use of diverse resources toward the

accomplishment of a unique, complex, one-time task within time, cost, and quality constraints. Each task

requires a particular mix of these tools and techniques structured to fit the task environment and life cycle

(from conception to completion) of the task."

Employing project management technologies minimizes the disruption of routine business activities in

many cases by placing under a single command all of the skills, technologies, and resources needed to

realize the project. The skills required depend on each specific project and the resources available at that

time. The greater the amount of adjustments a parent organization must make to fulfill project objectives,

the greater chance exists for project failure. The form of project management will be unique for every

project endeavor and will change throughout the project.

The project management process typically includes four key phases: initiating the project, planning the

project, executing the project, and closing the project. An outline of each phase is provided below.

Initiating the Project

The project management techniques related to the project initiation phase include:

1. Establishing the project initiation team. This involves organizing team members to assist in

carrying out the project initiation activities.

2. Establishing a relationship with the customer. The understanding of your customer's organization

will foster a stronger relationship between the two of you.

3. Establishing the project initiation plan. Defines the activities required to organize the team while

working to define the goals and scope of the project.

4. Establishing management procedures. Concerned with developing team communication and

reporting procedures, job assignments and roles, project change procedure, and how project

funding and billing will be handled.

5. Establishing the project management environment and workbook. Focuses on the collection and

organization of the tools that you will use while managing the project.

Planning the Project

The project management techniques related to the project planning phase include:

1. Describing project scope, alternatives, and feasibility. The understanding of the content and

complexity of the project. Some relevant questions that should be answered include:

o What problem/opportunity does the project address?

o What results are to be achieved?

o What needs to be done?

o How will success be measured?

o How will we know when we are finished?

2. Dividing the project into tasks. This technique is also known as the work breakdown structure (a small illustrative sketch of such a structure appears after this list). This

step is done to ensure an easy progression between tasks.

3. Estimating resources and creating a resource plan. This helps to gather and arrange resources in

the most effective manner.

4. Developing a preliminary schedule. In this step, you are to assign time estimates to each activity

in the work breakdown structure. From here, you will be able to create the target start and end

dates for the project.

5. Developing a communication plan. The idea here is to outline the communication procedures

between management, team members, and the customer.

6. Determining project standards and procedures. The specification of how various deliverables are

produced and tested by the project team.

7. Identifying and assessing risk. The goal here is to identify potential sources of risk and the

consequences of those risks.

8. Creating a preliminary budget. The budget should summarize the planned expenses and revenues

related to the project.

9. Developing a statement of work. This document will list the work to be done and the expected

outcome of the project.

10. Setting a baseline project plan. This should provide an estimate of the project's tasks and resource

requirements.
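As a companion to step 2, a work breakdown structure is often easiest to keep as a simple hierarchy of tasks and sub-tasks. The sketch below is illustrative only and uses invented task names:

# Hypothetical work breakdown structure: each task maps to its sub-tasks.
wbs = {
    "1. Baseline survey": {
        "1.1 Design questionnaire": {},
        "1.2 Train enumerators": {},
        "1.3 Collect and clean data": {},
    },
    "2. Reporting": {
        "2.1 Analyse indicators": {},
        "2.2 Draft report": {},
    },
}

def print_wbs(node, depth=0):
    # Print each task indented according to its depth in the hierarchy.
    for name, children in node.items():
        print("    " * depth + name)
        print_wbs(children, depth + 1)

print_wbs(wbs)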

Executing the Project

The project management techniques related to the project execution phase include:

1. Executing the baseline project plan. The job of the project manager is to initiate the execution of

project activities, acquire and assign resources, orient and train new team members, keep the

project on schedule, and assure the quality of project deliverables.

2. Monitoring project progress against the baseline project plan. Using Gantt and PERT charts,

which will be discussed in detail further on in this paper, can assist the project manager in doing

this.

3. Managing changes to the baseline project plan.

4. Maintaining the project workbook. Maintaining complete records of all project events is

necessary. The project workbook is the primary source of information for producing all project

reports.

5. Communicating the project status. This means that the entire project plan should be shared with

the entire project team and any revisions to the plan should be communicated to all interested

parties so that everyone understands how the plan is evolving.

Closing Down the Project

The project management techniques related to the project closedown phase include:

1. Closing down the project. In this stage, it is important to notify all interested parties of the

completion of the project. Also, all project documentation and records should be finalized so that

the final review of the project can be conducted.

2. Conducting post project reviews. This is done to determine the strengths and weaknesses of

project deliverables, the processes used to create them, and the project management process.

3. Closing the customer contract. The final activity is to ensure that all contractual terms of the

project have been met.

The techniques listed above in the four key phases of project management enable a project team to:

• Link project goals and objectives to stakeholder needs.

• Focus on customer needs.

• Build high-performance project teams.

• Work across functional boundaries.

• Develop work breakdown structures.

• Estimate project costs and schedules.

• Meet time constraints.

• Calculate risks.

• Establish a dependable project control and monitoring system.

Tools

Project management is a challenging task with many complex responsibilities. Fortunately, there are many

tools available to assist with accomplishing the tasks and executing the responsibilities. Some require a

computer with supporting software, while others can be used manually. Project managers should choose a

project management tool that best suits their management style. No one tool addresses all project

management needs. Program Evaluation Review Technique (PERT) and Gantt Charts are two of the most

commonly used project management tools and are described below. Both of these project management

tools can be produced manually or with commercially available project management software.

Program Evaluation and Review Technique (PERT) is a scheduling method originally designed to plan a

manufacturing project by employing a network of interrelated activities, coordinating optimum cost and

time criteria. PERT emphasizes the relationship between the time each activity takes, the costs associated

with each phase, and the resulting time and cost for the anticipated completion of the entire project.

PERT is an integrated project management system. These systems were designed to manage the

complexities of major manufacturing projects, the extensive data necessary for such industrial efforts, and

the time deadlines created by defense industry projects. Most of these management systems developed

following World War II, and each has its advantages.

PERT was first developed in 1958 by the U.S. Navy Special Projects Office on the Polaris missile system.

Existing integrated planning on such a large scale was deemed inadequate, so the Navy pulled in the

Lockheed Aircraft Corporation and the management consulting firm of Booz, Allen, and Hamilton.

Traditional techniques such as line of balance, Gantt charts, and other systems were eliminated, and PERT

evolved as a means to deal with the varied time periods it takes to finish the critical activities of an overall

project.

PERT is a planning and control tool used for defining and controlling the tasks necessary to complete a

project. PERT charts and Critical Path Method (CPM) charts are often used interchangeably; the only

difference is how task times are computed. Both charts display the total project with all scheduled tasks

shown in sequence. The displayed tasks show which ones are in parallel, those tasks that can be

performed at the same time. A graphic representation called a "Project Network" or "CPM Diagram" is

used to portray graphically the interrelationships of the elements of a project and to show the order in

which the activities must be performed.

PERT planning involves the following steps:

1. Identify the specific activities and milestones. The activities are the tasks of the project. The

milestones are the events that mark the beginning and the end of one or more activities.

2. Determine the proper sequence of activities. This step may be combined with #1 above since the

activity sequence is evident for some tasks. Other tasks may require some analysis to determine

the exact order in which they should be performed.

3. Construct a network diagram. Using the activity sequence information, a network diagram can be

drawn showing the sequence of the successive and parallel activities. Arrowed lines represent the

activities and circles or "bubbles" represent milestones.

4. Estimate the time required for each activity. Weeks are a commonly used unit of time for activity

completion, but any consistent unit of time can be used. A distinguishing feature of PERT is its

ability to deal with uncertainty in activity completion times. For each activity, the model usually

includes three time estimates:

o Optimistic time - the shortest time in which the activity can be completed.

o Most likely time - the completion time having the highest probability.

o Pessimistic time - the longest time that an activity may take.

From this, the expected time for each activity can be calculated using the following weighted average:

Expected Time = (Optimistic + 4 x Most Likely + Pessimistic) / 6

This weighted average helps to bias time estimates away from the unrealistically short timescales normally assumed. (A worked numeric sketch of these calculations follows the planning steps below.)

5. Determine the critical path. The critical path is determined by adding the times for the activities

in each sequence and determining the longest path in the project. The critical path determines the

total calendar time required for the project. The amount of time that a non-critical path activity

can be delayed without delaying the project is referred to as slack time.

If the critical path is not immediately obvious, it may be helpful to determine the following four times for

each activity:

o ES - Earliest Start time
o EF - Earliest Finish time
o LS - Latest Start time
o LF - Latest Finish time

These times are calculated using the expected time for the relevant activities. The earliest start and finish

times of each activity are determined by working forward through the network and determining the

earliest time at which an activity can start and finish considering its predecessor activities. The latest start

and finish times are the latest times that an activity can start and finish without delaying the project. LS

and LF are found by working backward through the network. The difference in the latest and earliest

finish of each activity is that activity's slack. The critical path then is the path through the network in

which none of the activities have slack.

The variance in the project completion time can be calculated by summing the variances in the completion

times of the activities in the critical path. Given this variance, one can calculate the probability that the

project will be completed by a certain date assuming a normal probability distribution for the critical path.

The normal distribution assumption holds if the number of activities in the path is large enough for the

central limit theorem to be applied.

6. Update the PERT chart as the project progresses. As the project unfolds, the estimated times can

be replaced with actual times. In cases where there are delays, additional resources may be

needed to stay on schedule and the PERT chart may be modified to reflect the new situation. An

example of a PERT chart is provided below:
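The arithmetic behind steps 4 and 5 can also be worked through in a short script as well as drawn as a chart. The following is a minimal sketch in Python using a small hypothetical activity network (activity names, time estimates and dependencies are invented for illustration); it applies the expected-time formula, runs the forward and backward passes to find slack and the critical path, and estimates the probability of completing by a target date under the normal-distribution assumption discussed above:

from math import sqrt
from statistics import NormalDist

# Hypothetical activity network: (optimistic, most likely, pessimistic, predecessors)
activities = {
    "A": (2, 4, 6, []),
    "B": (3, 5, 9, ["A"]),
    "C": (4, 6, 8, ["A"]),
    "D": (1, 2, 3, ["B", "C"]),
}

# PERT expected time and variance for each activity
expected = {a: (o + 4 * m + p) / 6 for a, (o, m, p, _) in activities.items()}
variance = {a: ((p - o) / 6) ** 2 for a, (o, m, p, _) in activities.items()}

# Forward pass: earliest start (ES) and earliest finish (EF).
# Activities are listed so that every predecessor appears before its successors.
es, ef = {}, {}
for a, (_, _, _, preds) in activities.items():
    es[a] = max((ef[p] for p in preds), default=0)
    ef[a] = es[a] + expected[a]
duration = max(ef.values())

# Backward pass: latest finish (LF) and latest start (LS), then slack and critical path
lf, ls = {}, {}
for a in reversed(list(activities)):
    successors = [s for s, (_, _, _, preds) in activities.items() if a in preds]
    lf[a] = min((ls[s] for s in successors), default=duration)
    ls[a] = lf[a] - expected[a]
slack = {a: ls[a] - es[a] for a in activities}
critical_path = [a for a in activities if abs(slack[a]) < 1e-9]

# Probability of finishing by a target date, assuming the critical-path total
# is approximately normally distributed (central limit theorem)
target = 14
sigma = sqrt(sum(variance[a] for a in critical_path))
probability = NormalDist(duration, sigma).cdf(target)

print("Expected duration:", round(duration, 1), "Critical path:", critical_path)
print("Probability of finishing by day", target, ":", round(probability, 2))

In practice these calculations are handled by project management software, but working through them once in a short script makes the logic of a PERT chart easier to interpret.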

Benefits to using a PERT chart or the Critical Path Method include:

• Improved planning and scheduling of activities.

• Improved forecasting of resource requirements.

• Identification of repetitive planning patterns which can be followed in other projects, thus

simplifying the planning process.

• Ability to see and thus reschedule activities to reflect interproject dependencies and resource limitations, following known priority rules.

• It also provides the following: expected project completion time, probability of completion before

a specified date, the critical path activities that impact completion time, the activities that have

slack time and that can lend resources to critical path activities, and activity start and end dates.

Gantt charts are used to show calendar time task assignments in days, weeks or months. The tool uses

graphic representations to show start, elapsed, and completion times of each task within a project. Gantt

charts are ideal for tracking progress. The number of days actually required to complete a task that

reaches a milestone can be compared with the planned or estimated number. The actual workdays, from

actual start to actual finish, are plotted below the scheduled days. This information helps target potential

timeline slippage or failure points. These charts serve as a valuable budgeting tool and can show dollars

allocated versus dollars spent.

To draw up a Gantt chart, follow these steps:

1. List all activities in the plan. For each task, show the earliest start date, estimated length of time it

will take, and whether it is parallel or sequential. If tasks are sequential, show which stages they

depend on.

2. Head up graph paper with the days or weeks through completion.

3. Plot tasks onto graph paper. Show each task starting on the earliest possible date. Draw it as a

bar, with the length of the bar being the length of the task. Above the task bars, mark the time

taken to complete them.

4. Schedule activities. Schedule them in such a way that sequential actions are carried out in the

required sequence. Ensure that dependent activities do not start until the activities they depend on

have been completed. Where possible, schedule parallel tasks so that they do not interfere with

sequential actions on the critical path. While scheduling, ensure that you make best use of the

resources you have available, and do not over-commit resources. Also, allow some slack time in

the schedule for holdups, overruns, failures, etc.

5. Present the analysis. In the final version of your Gantt chart, combine your draft analysis (#3

above) with your scheduling and analysis of resources (#4 above). This chart will show when you

anticipate that jobs should start and finish. An example of a Gantt chart is provided below:
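For teams producing the chart with software rather than graph paper, the same steps can be sketched in a few lines of Python using the widely available matplotlib plotting library; the task names, start days and durations below are hypothetical placeholders:

import matplotlib.pyplot as plt

# Hypothetical task list: (task name, earliest start day, estimated duration in days)
tasks = [
    ("Define scope", 0, 3),
    ("Stakeholder analysis", 3, 5),
    ("Data collection", 8, 10),
    ("Reporting", 18, 4),
]

fig, ax = plt.subplots()
for row, (name, start, length) in enumerate(tasks):
    ax.barh(row, length, left=start)        # one horizontal bar per task
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([name for name, _, _ in tasks])
ax.invert_yaxis()                           # first task at the top, as on paper
ax.set_xlabel("Project day")
ax.set_title("Draft Gantt chart")
plt.show()

Dedicated project management packages offer richer Gantt features (dependencies, baselines, progress bars), but a simple plot like this is often enough for a small monitoring plan.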

Benefits of using a Gantt chart include:

• Gives an easy to understand visual display of the scheduled time of a task or activity.

• Makes it easy to develop "what if" scenarios.

• Enables better project control by promoting clearer communication.

• Becomes a tool for negotiations.

• Shows the actual progress against the planned schedule.

• Can report results at appropriate levels.

• Allows comparison of multiple projects to determine risk or resource allocation.

• Rewards the project manager with more visibility and control over the project.

Chapter 3

UNDERSTANDING THE INITIATIVE OR PROJECT

To produce credible information that will be useful for decision makers, evaluations must be designed

with a clear understanding of the initiative, how it operates, how it was intended to operate, why it

operates the way it does and the results that it produces. It is not enough to know what worked and what

did not work (that is, whether intended outcomes or outputs were achieved or not). To inform action,

evaluations must provide credible information about why an initiative produced the results that it did and

identify what factors contributed to the results (both positive and negative). Understanding exactly what

was implemented and why provides the basis for understanding the relevance or meaning of project or

programme results.

Therefore, evaluations should be built on a thorough understanding of the initiative that is being

evaluated, including the expected results chain (inputs, outputs and intended outcomes), its

implementation strategy, its coverage, and the key assumptions and risks underlying the Results Map or

Theory of Change. The questions outlined below help build this understanding of the key aspects of the initiative.

Key aspects of the initiative

Demand: What is the need or demand for the initiative? What problem or development opportunity is the initiative intended to address?

Beneficiaries: Who are the beneficiaries or targets of the initiative? Who are the individuals, groups or organizations, whether targeted or not, that benefit directly or indirectly from the development initiative?

Scope: What is the scope of the initiative in terms of geographic boundaries and number of intended beneficiaries?

Outputs and Outcomes: What changes (outcomes) or tangible products and services (outputs) are anticipated as a result of the initiative? What must the project, programme or strategy accomplish to be considered successful? How do the intended outcomes link to national priorities, UNDAF priorities and corporate Strategic Plan goals?

Activities: What activities, strategies or actions, both planned and unplanned, does the programme take to effect change?

Theory of Change or Results/Outcome Map: What are the underlying rationales and assumptions or theory that defines the relationships or chain of results that lead initiative strategies to intended outcomes? What are the assumptions, factors or risks inherent in the design that may influence whether the initiative succeeds or fails?

Resources: What time, talent, technology, information and financial resources are allocated to the effort?

Stakeholders and Partnership Strategy: Who are the major actors and partners involved in the programme or project with a vested interest? What are their roles, participation and contributions—including financial resources, in-kind contributions, leadership and advocacy—including UN organizations and others? How was the partnership strategy devised? How does it operate?

Phase of Implementation: How mature is the project or programme, that is, at what stage or year is the implementation? Is the implementation within the planned course of the initiative? Is the programme mainly engaged in planning or implementation activities?

Modifications from Original Design: What, if any, changes in the plans and strategies of the initiative have occurred over time? What are the potential implications for the achievement of intended results?

Evaluability: Can the project or programme as it is defined be evaluated credibly? Are intended results (outputs, outcomes) adequately defined, appropriate and stated in measurable terms, and are the results verifiable? Are monitoring and evaluation systems that will provide valid and reliable data in place?

Cross-cutting Issues: To what extent are key cross-cutting issues and UN values intended to be mainstreamed and addressed in the design, implementation and results?

THE EVALUATION CONTEXT

The evaluation context concerns two interrelated sets of factors that have bearing on the accuracy,

credibility and usefulness of evaluation results:

 Social, political, economic, demographic and institutional factors, both internal and

external, that have bearing on how and why the initiative produces the results (positive and

negative) that it does and the sustainability of results.

 Social, political, economic, demographic and institutional factors within the environment

and time frame of the evaluation that affect the accuracy, impartiality and credibility of the

evaluation results.

Examining the internal and external factors within which a development initiative operates helps explain

why the initiative has been implemented the way it has and why certain outputs or outcomes have been

achieved and others have not. Assessing the initiative context may also point to factors that impede the

attainment of anticipated outputs or outcomes, or make it difficult to measure the attainment of intended

outputs or outcomes or the contribution of outputs to outcomes. In addition, understanding the political,

cultural and institutional setting of the evaluation can provide essential clues for how best to design and

conduct the evaluation to ensure the impartiality, credibility and usefulness of evaluation results.

Guiding questions for defining the context

a) What is the operating environment around the project or programme?

b) How might factors such as history, geography, politics, social and economic conditions,

secular trends and efforts of related or competing organizations affect implementation

of the initiative strategy, its outputs or outcomes?

c) How might the context within which the evaluation is being conducted (for example,

culture, language, institutional setting, community perceptions, etc.) affect the

evaluation?

d) How does the project or programme collaborate and coordinate with other initiatives

and those of other organizations?

e) How is the programme funded? Is the funding adequate? Does the project or

programme have finances secured for the future?

f) What is the surrounding policy and political environment in which the project or
programme operates? How might current and emerging policy alternatives influence
initiative outputs and outcomes?

THE EVALUATION PURPOSE

All evaluations start with a purpose, which sets the direction. Without a clear and complete statement of

purpose, an evaluation risks being aimless and lacking credibility and usefulness. Evaluations may fill a

number of different needs. The statements of purpose should make clear the following:

 Why the evaluation is being conducted, and why at that particular point in time

 Who will use the information

 What information is needed

 How the information will be used

The purpose and timing of an evaluation should be determined at the time of developing an evaluation

plan (see Chapter 3 for more information). The purpose statement can be further elaborated at the time a

ToR for the evaluation is drafted to inform the evaluation design.

FOCUSING THE EVALUATION

EVALUATION SCOPE

The evaluation scope narrows the focus of the evaluation by setting the boundaries for what the evaluation

will and will not cover in meeting the evaluation purpose. The scope specifies those aspects of the

initiative and its context that are within the boundaries of the evaluation. The scope defines, for example:

 The unit of analysis to be covered by the evaluation, such as a system of related programmes,

policies or strategies, a single programme involving a cluster of projects, a single project, or a

subcomponent or process within a project

 The time period or phase(s) of the implementation that will be covered

 The funds actually expended at the time of the evaluation versus the total amount allocated

 The geographical coverage

 The target groups or beneficiaries to be included

The scope helps focus the selection of evaluation questions to those that fall within the defined

boundaries.

EVALUATION OBJECTIVES AND CRITERIA

Evaluation objectives are statements about what the evaluation will do to fulfill the purpose of the

evaluation. Evaluation objectives are based on careful consideration of: the types of decisions evaluation

users will make; the issues they will need to consider in making those decisions; and what the evaluation

will need to achieve in order to contribute to those decisions. A given evaluation may pursue one or a

number of objectives. The important point is that the objectives derive directly from the purpose and serve

to focus the evaluation on the decisions that need to be made.

Chapter 4

STAKEHOLDER ANALYSIS

A stakeholder analysis provides a means to identify the relevant stakeholders and assess their

views and support for the proposed project.

A stakeholder can be defined as any individual, group of people, institution or organization

that may have a significant interest in the success or failure of a potential project around the issue

of concern. These may be affected either positively or negatively by a proposed project.

Stakeholders therefore go beyond the target group, and extend to those that may have something

to bring to assist the project, or those that may resist the project taking place. When identifying

stakeholders, it is important to consider potentially marginalized groups, such as women, the

elderly, youth, the disabled and the poor, so that they are represented in the process, especially if

the issue will affect their lives.

It is important to identify and understand the different stakeholders and their varying levels of

interest and power to influence the project, and their motivation and capacity

(resources/knowledge/skills) that they bring to the issue. Having these matters identified and

clarified will make the process of identifying the causes of the problem and potential solutions

much easier.

You should aim to identify the motivation or constraints to change from the aspect of the target

group(s), so that you can better understand the underlying causes to the issue you seek to

overcome. This is particularly important if you have more than one target group, or a diverse

group (e.g. urban and rural households). You can use relevant and up to date information from

the literature review, as well as directly engaging stakeholders to complete the stakeholder

analysis.

Stakeholder analysis is used to understand who the key actors are around a given issue and to

gauge the importance of different groups' interests and potential influence. It also serves to

highlight groups who are most affected by a given issue and least able to influence the situation.

Stakeholder analysis should be focused on a single issue, e.g. girls’ education or recruitment of

child soldiers. It can serve as an analytical framework for processing data or as a data collection

exercise to be done in the field:

• based on review of existing information (documentary review);

• in group meetings;

• through key informant interviews (centrally or in the field).

It can serve in an assessment exercise, in a programme monitoring exercise (e.g. to further probe

positions/ interests as the programme advances) and in an evaluation (e.g. how have interests

changed, supporting or impeding programme progress).

What it can tell us

• Identify different groups that can be sources of information;

• Interpret perspectives provided by each group;

• Identify who could positively or negatively influence programme responses;

• To support realistic programme planning and management, data collectors must look carefully:

o within the group of primary stakeholders, recognizing that this group is not uniform but includes sub-groups with different characteristics (e.g. women, children, leaders); and

o at the wider group of actors that might positively or negatively influence a situation.

• A "do no harm" perspective must foresee which non-primary stakeholder groups might seek to benefit from a programme at the expense of primary stakeholders.

• Direct capacity-building efforts

• A capacity-building approach to the projects should seek to increase primary stakeholders’

influence over the achievement of a goal (i.e. move primary stakeholders towards sector 1 in

the Venn diagram below).

REPRESENTING STAKEHOLDERS AS A VENN DIAGRAM

[Diagram: a two-by-two grid whose vertical axis runs from "Win" (top) to "Lose" (bottom) and whose horizontal axis runs from "Influence" (left) to "Be influenced" (right), dividing stakeholders into sectors 1 to 4.]


Two circles distinguish stakeholders:

• Primary stakeholders (those who will benefit from an intervention) are represented inside the

dotted oval;

• The wider context of stakeholders is represented by the larger oval.

Two axes (influence/be influenced and win/lose) divide the diagram into four areas:
Sector 1: Those who can influence the situation and benefit from it; examples:

• Outsiders: local and international NGOs, political factions;

• Primary stakeholders: influential actors (e.g. leaders).

Sector 2: Those who are influenced by the changes and will benefit from it; examples:

• Primary stakeholders;

• Non-primary stakeholders who will nonetheless gain from the project’s outcomes.

Sector 3: Those who cannot influence the achievement of a goal and will be affected negatively

by it; examples:

• Primary stakeholders and outsiders whose status or relative wealth is changed by an activity.

Sector 4: Those who can influence but will lose from the achievement of a goal. This is an

important area to consider, as it will include those who actively oppose the achievement of a

project; examples:

• External factions of local leaders among the primary stakeholders opposed to change of their

status.
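The sector logic above can also be written down as a small classification rule. The sketch below is illustrative only; the two yes/no judgements are assessments the analyst supplies for each stakeholder:

def venn_sector(can_influence, will_benefit):
    # Return the sector (1 to 4) for a stakeholder, following the diagram above.
    if can_influence and will_benefit:
        return 1   # can influence the situation and benefits from it
    if not can_influence and will_benefit:
        return 2   # is influenced by the changes and benefits from them
    if not can_influence and not will_benefit:
        return 3   # cannot influence the goal and is affected negatively
    return 4       # can influence but will lose from the achievement of the goal

# Example: a local leader who can influence the project but stands to lose status
print(venn_sector(can_influence=True, will_benefit=False))   # -> 4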

MATRIX FOR STAKEHOLDER ANALYSIS

Identifying stakeholders: consider who has influence and who is affected, giving particular attention to potentially vulnerable groups.

To identify interests, consider: expectations (positive and negative), and the benefits or losses stakeholders are likely to face (power, status, economic resources: financial and non-financial).

Resources that can be mobilized in support of interests: information, economic resources, status (also leadership), legitimacy/authority, and coercion.

The matrix records, for each stakeholder group or sub-group: its key interests, the programme or decision's potential impact on it (+ or -), and its potential influence.

Source: Benjamin Crosby (March 1992), "Stakeholder analysis: A vital tool for strategic managers".
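Where a team also wants the matrix in machine-readable form alongside the written analysis, a simple structure such as the sketch below can hold it; the stakeholder groups, interests and ratings are hypothetical placeholders:

# Hypothetical stakeholder analysis matrix, following the four column headings above.
stakeholders = [
    # (stakeholder group or sub-group, key interests, potential impact (+/-), potential influence)
    ("District health officials", "Service coverage, reporting burden", "+", "high"),
    ("Community health volunteers", "Training, stipends", "+", "medium"),
    ("Private clinic owners", "Loss of fee-paying clients", "-", "medium"),
    ("Women's savings groups", "Access to services", "+", "low"),
]

header = ("Stakeholder group", "Key interests", "Impact (+/-)", "Influence")
rows = [header] + stakeholders
widths = [max(len(row[col]) for row in rows) for col in range(len(header))]
for row in rows:
    print("  ".join(cell.ljust(width) for cell, width in zip(row, widths)))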

Importance of Evaluation and its uses

Evaluation is critical for any development project to progress towards advancing human

development. Through the generation of ‘evidence’ and objective information, evaluations

enable managers to make informed decisions and plan strategically. The effective conduct and

use of evaluation requires adequate human and financial resources, sound understanding of

evaluation and most importantly, a culture of results-orientation, learning, inquiry and

evidence-based decision making.

When evaluations are used effectively, they support programme improvements, knowledge

generation and accountability.

• Supporting programme improvements—Did it work or not, and why? How could it be

done differently for better results?

The interest is on what works, why and in what context. Decision makers, such as managers, use

evaluations to make necessary improvements, adjustments to the implementation approach or

strategies, and to decide on alternatives. Evaluations addressing these questions need to provide

concrete information on how improvements could be made or what alternatives exist to address

the necessary improvements.

• Building knowledge for generalizability and wider-application—What can we learn from

the evaluation? How can we apply this knowledge to other contexts?

The main interest is in the development of knowledge for global use and for generalization to

other contexts and situations. When the interest is on knowledge generation, evaluations

generally apply more rigorous methodology to ensure a higher level of accuracy in the evaluation

and the information being produced to allow for generalizability and wider-application beyond a

particular context.

Chapter 5

IMPORTANCE OF MONITORING AND EVALUATION

Evaluations should not be seen as an event but as part of an exercise whereby different

stakeholders are able to participate in the continuous process of generating and applying

evaluative knowledge. UNDP managers, together with government and other stakeholders,

decide who participates in what part of this process (analyzing findings and lessons, developing a

management response to an evaluation, disseminating knowledge) and to what extent they will

be involved (informed, consulted, actively involved, equal partners or key decision makers).

These are strategic decisions for UNDP managers that have a direct bearing on the learning and

ownership of evaluation findings. An evaluation framework that generates knowledge, promotes

learning and guides action is an important means of capacity development and sustainability of

results.

Supporting accountability—Is UNDP doing the right things? Is UNDP doing things

right? Did UNDP do what it said it would do?

The interest here is on determining the merit or worth and value of an initiative and its quality.

An effective accountability framework requires credible and objective information, and

evaluations can deliver such information. Evaluations help ensure that UNDP goals and

initiatives are aligned with and support the Millennium Declaration, MDGs, and global, national

and corporate priorities. UNDP is accountable for providing evaluative evidence that links

UNDP contributions to the achievement of development results in a given country and for

delivering services that are based on the principles of human development. By providing such

objective and independent assessments, evaluations in UNDP support the organization’s

accountability towards its Executive Board, donors, governments, national partners and

beneficiaries.

The intended use determines the timing of an evaluation, its methodological framework, and

level and nature of stakeholder participation. Therefore, the use has to be determined at the

planning stage.

Monitoring and evaluation is important because:

• it provides the only consolidated source of information showcasing project progress;
• it allows actors to learn from each other's experiences, building on expertise and knowledge;
• it often generates (written) reports that contribute to transparency and accountability, and allows for lessons to be shared more easily;
• it reveals mistakes and offers paths for learning and improvements;
• it provides a basis for questioning and testing assumptions;
• it provides a means for agencies seeking to learn from their experiences and to incorporate them into policy and practice;
• it provides a way to assess the crucial link between implementers and beneficiaries on the ground and decision-makers;
• it adds to the retention and development of institutional memory; and
• it provides a more robust basis for raising funds and influencing policy.

Points to note:

For any monitoring and evaluation to be useful, the organization must ensure that the evaluation is:
1) Independent—Management must not impose restrictions on the scope, content,

comments and recommendations of evaluation reports. Evaluators must be free of

conflict of interest.

2) Intentional—The rationale for an evaluation and the decisions to be based on it should

be clear from the outset.

3) Transparent—Meaningful consultation with stakeholders is essential for the credibility

and utility of the evaluation.

4) Ethical—Evaluation should not reflect personal or sectoral interests. Evaluators must

have professional integrity, respect the rights of institutions and individuals to provide

information in confidence, and be sensitive to the beliefs and customs of local social and

cultural environments.

5) Impartial—removing bias and maximizing objectivity are critical for the credibility of

the evaluation and its contribution to knowledge.

6) Of high quality—All evaluations should meet minimum quality standards defined by the

Evaluation Office.

7) Timely—Evaluations must be designed and completed in a timely fashion so as to ensure

the usefulness of the findings and recommendations.

8) Used—Evaluation is a management discipline that seeks to provide information to be

used for evidence-based decision making. To enhance the usefulness of the findings and

recommendations, key stakeholders should be engaged in various ways in the conduct of

the evaluation.

UNDERSTANDING THE STAKEHOLDERS IN EVALUATION

You want to know more about how your group is doing, but others you work with want to know

whether you are making a difference. Welcome to the world of evaluation. If you are a

community initiative, you will want to evaluate your effort. You will need to devote some time

and energy to planning the evaluation process. Like many other aspects of community health and

development, an evaluation will ultimately be more beneficial if you spend the time and energy

searching for ways to successfully begin and complete an evaluation.

One step in the planning process includes understanding and recognizing the interests of

stakeholders in the evaluation. The stakeholders include community leaders, evaluators, and

funders, and you will want to know how the evaluation will be used by each of them. The

evaluation should respond to the interests of those three stakeholders, and nothing is more

productive than designing it together. The evaluation can serve the community leaders' interests,

the funders' interests, and the evaluators' interests in a single useful product, if you know what

they want before you start. It's important to define the stakeholders' interests in using the

evaluation so that it can focus on optimally answering questions important to all of them. What

do we mean by needs and interests? Needs and interests are those qualities which community

leadership, evaluators, and funders see as important for doing their jobs well. Because each of

these stakeholders is looking at the evaluation from a unique perspective, it helps to recognize

those differences, and incorporate them into the evaluation.

FOR STARTERS, LET'S CONSIDER WHY YOU'D WANT TO CONDUCT AN

EVALUATION IN THE FIRST PLACE.

There are many basic reasons why stakeholders want an evaluation:

• To be accountable as a public operation

• To assist those who are receiving grants to improve

• To improve a foundation's grant making

• To assess the quality or impact of funded programs

• To plan and implement new programs

• To disseminate innovative programs

• To increase knowledge

A stakeholder may want an evaluation for one, two or all of these reasons. Evaluators may want

to increase knowledge, funders may want to improve grant making and community leaders may

want to assess quality. Community leaders may not want to answer more than a phone interview

by a student intern, evaluators may be interested in systematic, disciplined inquiry, and funders

may look for accountability.

When it comes time for evaluation, you don't have to be specialists in order to make good

decisions about what you will do. You should, however, be knowledgeable about uses of

evaluations and how they match the many interests involved so that you can make informed

choices.

WHO ARE COMMUNITY LEADERS, EVALUATORS AND FUNDERS?

COMMUNITY LEADERS

May include staff, administrators, committee chairpersons, agency personnel and civic leaders,

and trustees of an initiative. They may have little knowledge of evaluation, and may not feel they have

much time to provide data or read data reports. Yet, the evaluation must be responsive, useful

and sensitive to their decision-making requirements. They often are interested in how to improve

the functioning of their initiative.

EVALUATORS

Are often professionals, though anyone can design and implement an evaluation. There are

several professional associations that support evaluators and have established standards of

practice, such as the American Evaluation Association or its Collaborative, Participatory, and

Empowerment Evaluation Topical Interest Group. Evaluators can be private consultants,

university or foundation staff, or a member of the initiative. Evaluators are often interested in the

systematic production of useful, reliable information.

FUNDERS

Are those individuals or organizations that provide financial support for the initiative. They

might include program officers or other representatives of government agencies, foundations, or

other sources of financial support. Some funders have built a formal evaluation into their regular

activities, but they are in the minority. Funders are often interested in whether the use of their

funds is having an impact on the problems facing communities.

WHY SHOULD YOU UNDERSTAND THE INTERESTS OF THESE GROUPS?

So, you understand the idea in principle, but why do you need to understand the needs of leaders,

evaluators and funders? The information needs of various groups can be very different, so it's

important to take into account the kinds of information that will be convincing and useful to the

target audiences. Knowing this will help you decide what information is needed and the tools

you could use to obtain it.

While you may know your group does good work, chances are good that other important

members of the community do not know what you do. Consequently, others who have supported

and encouraged your efforts will want to know what has worked, and what hasn't worked; and

what should change and what should stay the same. Because these groups or individuals might

be instrumental in assisting your work, financially or otherwise, it makes good sense to include

their needs in the evaluation process.

Even more important is the requirement that the information gathered is used effectively. The question is:

To whom is it useful? If there's no direction to your information gathering, you can collect just

about anything you want, but so what? If it doesn't matter to anyone else, it is meaningless. If I

collect information about the number of people that my agency serves, I may find that useful,

especially if I'm reimbursed for that number. But what if someone really wanted to know if the

efforts of the agencies in town had an impact on a health problem. The number of people my

agency serves might not be that useful.

THE INTERESTS, WHICH HELP US DETERMINE THE INFORMATION WE NEED,

LEAD US TO DEVELOP TOOLS TO COLLECT IT. IN OTHER WORDS, IT IS THE

INTERESTS OF THE STAKEHOLDERS THAT SHAPE THE INQUIRY.

• What are the interests of the stakeholders?

• What information will help them?

• What questions will you ask to get that information?

• What tools will help you collect it?

In the long run, including the stakeholders in the process will lead to greater collaboration and

organizational capacity to solve community problems. Understanding stakeholders' interests will

enable you to employ your resources better. Knowing what everyone wants and needs will help

you plan the optimal evaluation.

WHEN SHOULD YOU UNDERSTAND THE INTERESTS OF THESE GROUPS?

You will want to identify stakeholders from the get-go. By going through a process of

stakeholder identification before you begin evaluating, you will be able to obtain their views and

incorporate their ideas and needs into the evaluation itself.

Of course, the sooner you identify the needs and interests of those groups, the sooner you will be

able to gain understanding of the different issues each group is interested in without wasting time

or money. You also have to be watchful so that if interests change, you can adapt to those

changes in a timely manner and keep your evaluation valid.

WHAT ARE THE INTERESTS OF THESE GROUPS?

HOW DO WE FIND THESE PEOPLE?

First and foremost, you and other members of the group will need to sit down, pour a cup of Joe,

and grab a pencil. Think about the individuals and groups that have needs that should be

addressed in the evaluation. You should try to figure out what their interests are.

Of course, some people may ask, "Why them? Our group's interests and needs should be the

focus of the evaluation." In one sense, this is true. One of the main purposes of the evaluation

process includes providing feedback and ideas for the group itself so members can improve and

strengthen their efforts. But remember, everyone is in this together: the community, the funders, and

those who will conduct the evaluation. You want the best information possible: information that

will help you make the best decisions.

But, at the same time, there are other factors to consider.

Over the course of your brainstorming session, you should identify as many stakeholders as you

can.

To identify stakeholders, ask yourself these questions:

• Who provides funding for our initiative?

• Who will conduct the evaluation?

• Who do we collaborate with?

Once you know who these people are, find out what they want. To do this, let's take a look at the

groups. Then, we'll talk about the specific needs and interests of members of each of these

groups.

COMMUNITY LEADERS

What will this group need from your evaluation? The information should be:

• Clear and understandable: They may have limited knowledge about the goings-on of

your group, or about evaluations. Immediately, then, you know that the evaluation must

be clear and understandable.

• Efficient: They probably have a variety of different responsibilities which demand their

time and consideration, so they won't want to waste time reading information irrelevant

to their needs.

• Responsive: They may include decision-makers that can affect the future of your group.

Therefore, your evaluation needs to be responsive to their decision-making requirements.

• Sensitive: They will want to know what the initiative has accomplished, so the

evaluation should be sensitive to the activities and accomplishments of the initiative.

• Useful: They will include decision-makers for the initiative, so the evaluation needs to

show them how their efforts can be improved.

EVALUATORS

They will be assessing the effectiveness of the initiative in meeting its goals. What do they

need to get out of the evaluation?

• Input: The evaluation team needs to receive input from the initiative's clients --including

community leadership, funders, and members of the initiative itself, in order to know

what the clients want to learn about the initiative.

• Accurate and Complete Information: In a similar vein, the evaluators need accurate

and complete information in order to answer the questions posed by the stakeholders.

• Cooperation: Finally, the evaluators will need cooperation from participants and

officials in order to obtain needed data.

FUNDERS

They will need:

• Clear and timely reports: Because of their responsibilities for making decisions

concerning the continuation of financial support, the funders will need information about

the progress of the initiative.

• Evidence of community change and impact: Funders will need to be able to measure

the success of the initiative and report this to their own trustees or constituents.

HOW DO YOU DETERMINE INTERESTS?

Now that you know what interests you're looking for, you have to determine a way of finding them.

You need to match people with what their interests are, whether it's through a survey, an

interview, or some other method. Failure to determine interests is often the source of problems

and misunderstandings along the way and can become disastrous if it turns out that different

stakeholders had different expectations and priorities.

SOME OF THE QUESTIONS THAT YOU CAN ASK STAKEHOLDERS TO MATCH THEM

WITH THEIR INTERESTS IN AN EVALUATION ARE:

• What are the evaluation's strengths and weaknesses?

• Do you think the evaluation is moving toward its desired outcomes?

• Which kind of implementation problems came up, and how are they being addressed?

• How are staff and clients interacting?

• What is happening that wasn't expected?

• What do you like, dislike, or would like to change in the evaluation process?

From the answers you get, you can determine what each party wants out of the evaluation. You

can also group those who have similar interests. For instance, you may find out that a community

leader and an evaluator are interested in improving their managing abilities through the

evaluation process. You have made a match, and those two will work toward a common goal.
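
As a rough illustration only (the stakeholder names and interest labels below are invented for this sketch, not taken from any actual evaluation), grouping stakeholders by the interests they report can be as simple as the following Python snippet:

# Minimal sketch: group hypothetical stakeholders by the interests they reported,
# so that people working toward a common goal can be matched together.
from collections import defaultdict

reported_interests = {
    "community leader A": ["improve management", "show accomplishments"],
    "evaluator B": ["improve management", "produce reliable information"],
    "funder C": ["evidence of impact", "timely reports"],
}

groups = defaultdict(list)
for stakeholder, interests in reported_interests.items():
    for interest in interests:
        groups[interest].append(stakeholder)

# Any interest shared by two or more stakeholders is a potential match
for interest, people in groups.items():
    if len(people) > 1:
        print(f"Shared interest '{interest}': {', '.join(people)}")

The same grouping can of course be done on paper or a flip chart; the point is simply to make shared interests visible before the evaluation is designed.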

HERE ARE SOME WAYS OF DETERMINING INTERESTS AND MATCHING THEM

WITH STAKEHOLDERS:

• Interviews: Get a representative from each group of stakeholders and ask away. Be direct

in your questions so that you can quickly get to the point, that is,

what interests this stakeholder has and whether they match other stakeholders'

interests.

• Surveys: You can send out a written questionnaire to assess how the stakeholders rank

their interests and which group wants what. The survey must be succinct and direct,

asking clear questions about the evaluation in terms of quality and goals. Survey results

are easy to utilize and can be helpful for the evaluation presentation.

• Phone surveys: They can save you time and money, if you're doing them locally. You

can use the same questions you would use in a written survey, but leave more space for

commentary, as people tend to talk more when speaking to a person on the phone. Just be

sure your phone surveys don't stray from your objective.

• Brainstorm sessions: Arrange a meeting with stakeholder representatives and brainstorm

interests and possibilities for the evaluation's outcome. Bring up problems such as

continuity of the program, obtaining funds, coordinating activities, and attracting staff,

and let stakeholders have their say. Everybody will come out of the brainstorming

session with new ideas and a much better notion of everybody else's ideas. Besides these

methods, you should always conduct a survey after the completion of the evaluation. This

will benefit external audiences and decision-makers. Remember, if changes need to be

made, don't be afraid to follow through with them.

Remember that decisions about how to improve a program tend to be made in small, incremental

steps based on specific findings aimed at making the evaluation a better process for all

stakeholders involved.

Now armed with this list of needs and interests, you can find or develop the tools to obtain useful

information. The next sections will explore ways to select an evaluation team and present some

key questions for the evaluation process. Later, we will be discussing how to evaluate your

community initiative!

IN SUMMARY

Once you have a clear idea of what each stakeholder really wants, you are very likely to succeed

in your evaluation. Be sure to revisit the interests of all the stakeholders involved frequently so

that you don't lose focus of what you're looking for with your evaluation. The hard part of your

evaluation work starts now!

Chapter 6

CLUSTER DEVELOPMENT

Clusters can be defined as sectoral and geographical concentrations of enterprises, in particular

Small and Medium Enterprises (SMEs), faced with common opportunities and threats, which can:

a. Give rise to external economies (e.g. specialized suppliers of raw materials, components

and machinery; sector specific skills etc.);

b. Favor the emergence of specialized technical, administrative and financial services;

c. Create a conducive ground for the development of inter-firm cooperation and

specialization as well as of cooperation among public and private local institutions to

promote local production, innovation and collective learning.

UNIDO has been implementing technical cooperation projects based on a cluster and network

development (CND) approach which is built on three assumptions:

 that clustering and networking among enterprises promotes enterprise competitiveness,

 that public policy can help to facilitate clustering and networking; and

 that support programmes targeting groups of enterprises are more cost-efficient and

cost-effective than those targeting individual enterprises.

UNIDO has formulated a modular approach to guide the formulation and implementation of cluster

development initiatives. Each module represents a critical phase in the cluster development

process:

 Phase 1 - Cluster selection:

The selection of clusters to be supported has to be made according to specific and agreed

upon criteria. Such criteria should be determined in a transparent process and the ranking

be established by the implementing agency, the national counterpart agency and any

other bodies that have a clear stake in the initiative.

 Phase 2 - Cluster governance, trust building and the CDA:

The Cluster Development Agent (CDA) is a neutral professional or broker who facilitates

the process of cluster and network development. S/he plays a crucial, yet typically

temporary role in developing a sustainable cluster governance system and accompanies

the cluster development initiative over the subsequent development phases.

 Phase 3 - Cluster diagnostics:

A cluster diagnostic study or assessment forms the basis of developing a strategic vision

and action plan for the cluster. It is developed through a participatory exercise guided by

the CDA with a view to:

developing an understanding of the socioeconomic and institutional environment

of a cluster, detecting potential leverage points for the intervention, providing

a baseline for monitoring and evaluation, and building initial trust between the

CDA and the cluster stakeholders.

 Phase 4 - Vision building and action planning:

Based on the diagnostic study, the cluster stakeholders develop a long-term vision for the

cluster and a detailed plan for joint actions that are

aimed at realizing the cluster vision over specified periods (short, medium, long term).

 Phase 5 - Implementation:

Implementation refers to the entire set of joint actions that are required to realize the

long-term vision of the cluster. It is not the mandate of the implementing agency or, more

specifically of the CDA, to directly carry out all project activities. Rather, the CDA

facilitates the undertaking of activities through the establishment of partnerships with

both private organizations and other public institutions and, of course, based on the

capabilities (present and future) of the cluster firms.

 Phase 6: Monitoring and Evaluation (M&E):

While the M&E phase is the final one in the UNIDO cluster development methodology,

M&E activities have to start at the very outset of the intervention. Importantly, indicators

against which progress can be measured and reporting lines and responsibilities have to

be determined already during the very early stages of a cluster and network development

initiative. As already mentioned, the diagnostic phase will usually help with the

development of a monitoring framework, the specification of indicators and the

establishment of a baseline. At the same time, the vision building and action planning

phase will have to be considered to clearly reflect the strategic orientation of the

initiative in the monitoring framework. During the implementation phase, M&E activities

are performed regularly to assess progress and to undertake corrective action, where

necessary (a minimal indicator-tracking sketch follows below).
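
As a minimal sketch only (the indicator names, baseline and target figures below are invented for illustration and are not drawn from the UNIDO guidelines), an indicator record of the kind described above could track each indicator against its baseline and target:

# Minimal sketch of an indicator-monitoring record for a cluster initiative.
# All indicator names and figures are hypothetical.
indicators = [
    # (indicator, baseline, target, latest measured value)
    ("cluster firms engaged in joint actions", 5, 40, 22),
    ("workers trained through cluster institutions", 0, 300, 120),
]

for name, baseline, target, current in indicators:
    progress = (current - baseline) / (target - baseline) * 100
    print(f"{name}: {progress:.0f}% of the way from baseline to target")

Comparing each new measurement against the baseline in this way is what allows project staff to spot shortfalls early and take corrective action during implementation.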

Why Clusters

Clusters have gained increasing prominence in debates on economic development in recent

years.

Governments worldwide regard clusters as potential drivers of enterprise development and innovation.

Cluster initiatives are also considered to be efficient policy instruments in that they allow for a

concentration of resources and funding in targeted areas with a high growth and development potential

that can spread beyond the target locations (spillover and multiplier effects).

Examples of internationally renowned clusters, such as that of the Silicon Valley cluster in California,

the information technology cluster of Bangalore in India, or the Australian and Chilean wine clusters

demonstrate that clusters are environments where enterprises can develop a competitive and global edge,

while at the same time generating wealth and local economic development in the process. This is because

clustering provides enterprises with access to specialized suppliers and support services, experienced and

skilled labor and the knowledge sharing that occurs when people meet and talk about business.

Clusters are also particularly promising environments for SME development. Due to their small size,

SMEs individually are often unable to realize economies of scale and thus find it difficult to take

advantage of market opportunities that require the delivery of large stocks of standardized products or

compliance with international standards. They also tend to have limited bargaining power in inputs

purchase, do not command the resources required to buy specialized support services, and have little

influence in the definition of support policies and services.

Within clusters, SMEs can realize shared gains through the organization of joint actions between cluster

enterprises (e.g., joint bulk inputs purchase or joint advertising, or shared use of equipment) and between

enterprises and their support institutions (e.g., provision of technical assistance by business associations

or investments in infrastructure by the public sector). The advantage accruing to the cluster from such

collective efforts is referred to as collective efficiency.

The UNIDO Approach to Cluster Development focuses in particular on overcoming these

impediments to SME competitiveness and on unleashing their growth and sustainable development

potential.

Development Principles

The underlying concern of the UNIDO Approach is the stimulation of pro-poor growth, defined as a

pattern of economic growth that creates opportunities for the poor, and generates the conditions for them

to take advantage of those opportunities. In order to improve cluster performance, UNIDO addresses

economic and non-economic issues, especially those related to the fostering of human and social capital

with a view to enhancing labor force production capacities and increasing economic participation. Such

an approach requires measures that are aimed at empowering marginalized groups, improving access to

employment opportunities, and supporting the well-being of entrepreneurs and employees as well as the

development of their skills to boost productivity and enhance innovation capacity.

Facilitate the undertaking of joint actions to realize collective efficiency gains

The UNIDO Approach to cluster development focuses on initiatives that encourage enterprises and

institutions in selected clusters to undertake joint actions that could ultimately yield benefits to the

cluster as a whole and the communities in which they are embedded. It does so by brokering and

facilitating dialogue and by promoting activities oriented at building consensus within the cluster. A

distinctive feature of the UNIDO Approach is that – instead of targeting relatively large and successful

enterprises and hoping that the benefits will trickle down to smaller enterprises in the cluster - the cluster

vision and action plan are devised by a representative group of cluster stakeholders and thus comprise

activities that tackle issues of relevance to a majority of cluster stakeholders.

Provide targeted support to the cluster’s institutional support structure

The UNIDO Approach focuses on incentivizing public and private sector bodies to more effectively

promote cluster development and on strengthening their capacity to do so. Support is given to relevant

local, regional, and national institutions, including chambers of commerce, local governments, NGOs,

producer associations, universities and training institutes and regional as well as local economic

development agencies to gradually assume a strong supporting role in the development of the cluster.

UNIDO also technically assists financial and non-financial service providers (e.g., business development

service (BDS) providers, vocational schools and training institutes, large buyers and retailers, and the

suppliers of equipment and inputs) to make the services they offer more responsive to demands from

within clusters.

Involve public and private sector actors based on their respective capacities and competencies

While the role of the public sector in supporting a cluster development initiative normally includes

reacting to demands from within the cluster for changes in the business environment, as well as

larger-scale infrastructure development, the provision of an adequate framework for

education and broader skills development and the coordination and support of brokering activities, the

private sector can play an active role when it comes to mobilizing human and financial resources to be

invested in innovative ventures to increase the growth potential of the cluster; providing business

development and financial services on a commercially viable and sustainable basis; and establishing of

and/or participating in representative bodies to voice the interests of the business community in

dialogues with the public sector. A local public-private forum, e.g. in the context of the Cluster

Commission or other suitable dialogue mechanism, can also ensure that cluster development initiatives

within a country or region are linked with other public support programmes for private sector

development.

Monitor and evaluate project results to improve efficiency and effectiveness, enhance

accountability and demonstrate impact.

Monitoring and Evaluation (M&E) are an integral part of project management. They provide detailed

information about the project’s results, assessing any changes (both intended and unintended) an

intervention may have produced. Understanding the status quo is a prerequisite to determine the

intervention strategy of a project. As project staff and management can only take corrective measures if

they are aware of the outcomes produced and the (external) factors that influenced them, monitoring

information forms the basis for project-related decision making on a daily basis as well as the

coordination of actors and activities. In the UNIDO approach, the careful construction of a cluster

development initiative’s causal chain and the determination of key performance indicators are critical

steps to be undertaken right from the beginning of a typical project. To facilitate this, UNIDO has

developed step-by-step guidelines, based on a generic causal chain and a pool of relevant indicators, to

develop a tailor-made monitoring system for each cluster development initiative. A project evaluation,

typically carried out at the end of a cluster development initiative (for longer projects, a mid-term

evaluation is recommended), assesses several aspects of an intervention, including its relevance,

efficiency, effectiveness, impact and sustainability, in order to appraise its overall usefulness.

Chapter 7

COMMUNITY-BASED PARTICIPATORY RESEARCH

Della Roberts worked as a nutritionist at the Harperville Hospital. As an African American, she

was concerned about obesity among black children, and about the fact that many of Harperville’s

African American neighborhoods didn’t have access to healthy food in stores or restaurants. She

felt that the city ought to be doing something to change the situation, but officials didn’t seem to

see it as a problem. Della decided to conduct some research to use as a base for advocacy. Della

realized that in order to collect accurate data, she needed to find researchers who would be

trusted by people in the neighborhoods she was concerned about. What if she recruited

researchers from among the people in those neighborhoods? She contacted two ministers she

knew, an African American doctor who practiced in a black neighborhood, and the director of a

community center, as well as using her own family connections. Within two weeks, she had
gathered a group of neighborhood residents who were willing to act as researchers. They ranged

from high school students to grandparents, and from people who could barely read to others

who had taken college courses.

The group met several times at the hospital to work out how they were going to collect

information from the community. Della conducted workshops in research methods and in such

basic skills as how to record interviews and observations. The group discussed the problem of

recording for those who had difficulty writing, and came up with other ways of logging

information. They decided they would each interview a given number of residents about their

food shopping and eating habits, and that they would also observe people’s buying patterns in

neighborhood stores and fast food restaurants. They set a deadline for finishing their data

gathering, and went off to learn as much as they could about the food shopping and eating

behavior of people in their neighborhoods.

As the data came in, it became clear that people in the neighborhoods would be happy to buy

more nutritious food, but it was simply too difficult to get it. They either had to travel long

distances on the bus, since many didn’t have cars, or find time after a long work day to drive to

another, often unfamiliar, part of the city and spend an evening shopping. Many also had the

perception that healthy food was much more expensive, and that they couldn’t afford it.

Ultimately, the data that the group of neighborhood residents had gathered went into a report

written by Della and other professionals on the hospital staff. The report helped to convince the

city to provide incentives to supermarket chains to locate in neighborhoods where healthy food

was hard to find.

The group that Della had recruited had become a community-based participatory research team.

Working with Della and others at the hospital, they helped to determine what kind of information

would be useful, and then learned how to gather it. Because they were part of the community,

they were trusted by residents; because they shared other residents’ experience, they knew what

questions to ask and fully understood the answers, as well as what they were seeing when they

observed.

This section is about participatory action research: what it is, why it can be effective, who might

use it, and how to set up and conduct it.

WHAT IS COMMUNITY-BASED PARTICIPATORY RESEARCH?

In simplest terms, community-based participatory research (for convenience, we’ll primarily call

it CBPR for the rest of this section) enlists those who are most affected by a community issue –

typically in collaboration or partnership with others who have research skills – to conduct

research on and analyze that issue, with the goal of devising strategies to resolve it. In other

words, community-based participatory research adds to or replaces academic and other

professional research with research done by community members, so that research results both

come from and go directly back to the people who need them most and can make the best use

of them.

There are several levels of participatory research. At one end of the spectrum is academic or

government research that nonetheless gathers information directly from community members.

The community members are those most directly affected by the issue at hand, and they may (or

may not) be asked for their opinions about what they need and what they think will help, as well

as for specific information. In that circumstance, the community members don’t have any role in

choosing what information is sought, in collecting data, or in analyzing the information once it’s

collected. (At the same time, this type of participatory research is still a long step from research

that is done at second or third hand, where all the information about a group of people is

gathered from statistics, census data, and the reports of observers or of human service or health

professionals.)

At another level, academic or other researchers recruit or hire members of an affected group –

often because they are familiar with and known by the community – to collect data. In this case,

the collectors may or may not also help to analyze the information that they have gathered. A

third level of participatory research has academic, government, or other professional researchers

recruiting members of an affected group as partners in a research project. The community

members work with the researchers as colleagues, participating in the conception and design of

the project, data collection, and data analysis. They may participate as well in reporting the

results of the project or study. At this level, there is usually – though not always – an assumption

that the research group is planning to use its research to take action on an issue that needs to be

resolved

The opposite end of the participatory research continuum from the first level described involves

community members creating their own research group – although they might seldom think of it

as such – to find out about and take action on a community issue that affects them directly.

In this section, we’ll concern ourselves with the latter two types of participatory research – those

that involve community members directly in planning and carrying out research, and that lead to

some action that can influence the issue studied. This is what is often defined as community-based

participatory research. There are certainly scenarios where other types of participatory research

are more appropriate, or easier to employ in particular situations, but it’s CBPR that we’ll

discuss here.

Employing CBPR for purposes of either evaluation or long-term change can be a good idea for

reasons of practicality, personal development, and politics.

ON THE PRACTICAL SIDE, COMMUNITY-BASED PARTICIPATORY RESEARCH CAN

OFTEN GET YOU THE BEST INFORMATION POSSIBLE ABOUT THE ISSUE, FOR AT

LEAST THE FOLLOWING REASONS:

• People in an affected population are more likely to be willing to talk and give straight

answers to researchers whom they know, or whom they know to be in circumstances

similar to their own, than to outsiders with whom they have little in common

• People who have actually experienced the effects of an issue – or an intervention – may

have ideas and information about aspects of it that wouldn’t occur to anyone studying it

from outside. Thus, action researchers from the community may focus on elements of the

issue, or ask questions or follow-ups, that outside researchers wouldn’t, and get crucial

information that other researchers might find only by accident, or perhaps not at all

• People who are deeply affected by an issue, or participants in a program, may know

intuitively, or more directly, what’s important when they see or hear it. What seems an

offhand comment to an outside researcher might reveal its real importance to someone

who is part of the same population as the person who made the comment.

• Action researchers from the community are on the scene all the time. Their contact both

with the issue or intervention and with the population affected by it is constant, and, as a

result, they may find information even when they’re not officially engaged in research.

• Findings may receive more community support because community members know that

the research was conducted by people in the same circumstances as their own

When you’re conducting an evaluation, these advantages can provide you with a more accurate

picture of the intervention or initiative and its effects. When you’re studying a community issue,

all these advantages can lead to a true understanding of its nature, its causes, and its effects in the

community, and can provide a solid basis for a strategy to resolve it. And that, of course, is the

true goal of community research – to identify and resolve an issue or problem, and to improve

the quality of life for the community as a whole

In the personal development sphere, CBPR can have profound effects on the development and

lives of the community researchers, particularly when those who benefit from an intervention, or

who are affected by an issue, are poor or otherwise disadvantaged, lack education or basic skills,

and/or feel that the issue is far beyond their influence. By engaging in research, they not only

learn new skills, but see themselves in a position of competence, obtain valuable knowledge and

information about a subject important to them, and gain the power and the confidence to exercise

control over this aspect of their lives.

TWO COMMON POLITICAL RESULTS OF THE CBPR PROCESS:

• Through community-based participatory research, citizens can take more control of the

direction of their communities

• Community researchers – especially those who are poor or otherwise disadvantaged –

come to be viewed differently by professionals and those in positions of power. They

have vital information, and the ability to use it, and thus become accepted as contributing

members of the community, rather than as voiceless observers or dependents. They have

gained a voice, because they understand that they have something to say. Furthermore,

the research and other skills and the self-confidence that people acquire in a

community-based participatory research process can carry over into other parts of their

lives, giving them the ability and the assurance to understand and work to control the

forces that affect them. Research skills, discipline, and analytical thinking often translate

into job skills, making participatory action researchers more employable. Most important,

people who have always seen themselves as bystanders or victims gain the capacity to

become activists who can transform their lives and communities.

Community-based participatory research has much in common with the work of the Brazilian

political and educational theoretician and activist, Paulo Freire. In Freire’s critical education

process, oppressed people are encouraged to look closely at their circumstances, and to

understand the nature and causes of their oppressors and oppression. Freire believes that with the

right tools – knowledge and critical thinking ability, a concept of their own power, and the

motivation to act – they can undo that oppression. Many people see this as the “true” and only

reason for supporting action research, but we see many other reasons for doing so, and list some

of them both above and below.

Action research is often used to consider social problems – welfare reform or homelessness, for

example – but can be turned to any number of areas with positive results.

Some prime examples:

• The environment. It was a community member who first asked the questions and started

the probe that uncovered the fact that the Love Canal neighborhood in Niagara Falls, NY,

had been contaminated by the dumping of toxic waste.

• Medical/health issues. Action research can be helpful in both undeveloped and

developed societies in collecting information about health practices, tracking an

epidemic, or mapping the occurrence of a particular condition, to name three of numerous

possibilities.

• Political and economic issues. Citizen activists often do their own research to catch

corrupt politicians or corporations, trace campaign contributions, etc.

Just as it can be used for different purposes, CBPR can be structured in different ways. The

differences have largely to do with who comes up with the idea in the first place, and with who

controls, or makes the decisions about, the research. Any of these possibilities might involve a

collaboration or partnership, and a community group might well hire or recruit as a volunteer

someone with research skills to help guide their work.

Some common scenarios:

• Academic or other researchers devise and construct a study, and employ community

people as data collectors and/or analysts.

• A problem or issue is identified by a researcher or other entity (a human service

organization, for instance), and community people are recruited to engage in research on

it and develop a solution.

• A community based organization or other group gathers community people to define and

work on a community issue of their choosing, or to evaluate a community intervention

aimed at them or people similar to them.

• A problem is identified by a community member or group, others who are affected and

concerned gather around to help, and the resulting group sets out to research and solve

the problem on its own.

WHY WOULD YOU USE COMMUNITY-BASED PARTICIPATORY RESEARCH?

We’ve already alluded to a number of reasons why CBPR could be useful in evaluating a

community intervention or initiative or addressing a community issue. We’ll repeat them briefly

here, and introduce others as well.

Action research yields better, more complete, and more accurate information from the

community.

• People will speak more freely to peers, especially those they know personally, than to

strangers.

• Researchers who are members of the community know the history and relationships

surrounding a program or an issue, and can therefore place it in context.

• People experiencing an issue or participating in an intervention know what’s important to

them about it – what it disrupts, what parts of their lives it touches, how they have

changed as a result, etc. That knowledge helps them to formulate interview questions that

get to the heart of what they – as researchers – are trying to learn.

Involving the community in research is more likely to meet community needs. Action

research makes a reasonable resolution or accurate evaluation more probable in two ways.

First, by involving the people directly affected by the issue or intervention, it brings to bear the

best information available about what’s actually happening. Second, it encourages community

buy-in and support for whatever plans or interventions are developed. If people are involved in

the planning and implementation of solutions to community issues, they’ll feel they own the

process, and work to make it successful. It’s equitable, philosophically consistent for most

grassroots and community-based organizations, and practical in that it usually yields the best

results

Action research, by involving community members, creates more visibility for the effort in

the community.

Researchers are familiar to the community, will talk about what they’re doing (as will their

friends and relatives), and will thus spread the word about the effort.

Community members are more likely to accept the legitimacy of the research and buy into

its findings if they know it was conducted by people like themselves, perhaps even people

they know.

Citizens are more apt to trust both the truthfulness and the motives of their friends and neighbors

than those of outsiders.

Action research trains citizen researchers who can turn their skills to other problems as

well.

People who discover the power of research to explain conditions in their communities, and to

uncover what’s really going on, realize that they can conduct research in other areas than the one

covered by their CBPR project. They often become community activists, who work to change the

conditions that create difficulty for them and others. Thus, the action research process may

benefit the community not only by addressing particular issues, but by – over the long term –

creating a core of people dedicated to improving the overall quality of its citizens’ lives.

Involvement in CBPR changes people’s perceptions of themselves and of what they can do.

An action research project can have profound effects on community researchers who are

disadvantaged economically, educationally, or in other ways. It can contribute to their personal

development, help them develop a voice and a sense of their power to change things, and vastly

expand their vision of what’s possible for them and for the community. Such an expanded vision

leads to an increased willingness to take action, and to an increase in their control over their

lives.

Skills learned in the course of action research carry over into other areas of researchers’

lives.

Both the skills and the confidence gained in a CBPR project can be transferred to employment,

education, child-rearing, and other aspects of life, greatly improving people’s prospects and

wellbeing.

A participatory action research process can help to break down racial, ethnic, and class

barriers.

CBPR can remove barriers in two ways. First, action research teams are often diverse, crossing

racial, ethnic, and class lines. As people of different backgrounds work together, this encourages

tolerance and friendships, and often removes fear and distrust. In addition, as integral

contributors to a research or evaluation effort, community researchers interact with professionals,

academics, and community leaders on equal footing. Once again, familiarity breaks down

barriers, and allows all groups to see how much the others have to offer. It also allows for people

to understand how much they often misjudge others based on preconceptions, and to begin to

consider everyone as an individual, rather than as “one of those.”

A member of the Changes Project, a CBPR project that explored the impact of welfare reform on

adult literacy and ESOL (English as a Second or Other Language), learners wrote in the final

report: “What I learned from working in this project first off is, none of us are so great that

change couldn’t help us be better people... I walked into the first meeting thinking I was the

greatest thing to hit the pike and found that I, too, had some prejudices that I was not aware of. I

thought that no one could ever tell me I wasn’t the perfect person to sit in judgment of others

because I never had a negative thought or prejudiced bone in my body. Well, lo and behold, I

did, and seeing it through other people’s eyes I found that I, too, had to make some changes in

my opinions.”

Action research helps people better understand the forces that influence their lives. Just as

Paulo Freire found in his work in Latin America, community researchers, sometimes as a direct

result of their research, and sometimes as a side benefit, begin to analyze and understand how

larger economic, political, and social forces affect their own lives. This understanding helps

them to use and control the effects of those forces, and to gain more control over their own

destinies.

Community based action research can move communities toward positive social change.

All of the rationales for employing CBPR described above act to restructure the

relationships and the lines of power in a community. They contribute to the mutual respect and

understanding among community members and the deep understanding of issues that in turn lead

to significant and positive social change.

WHO SHOULD BE INVOLVED IN COMMUNITY-BASED PARTICIPATORY

RESEARCH?

THE SHORT ANSWER HERE IS PEOPLE FROM ALL SECTORS OF THE COMMUNITY,

BUT THERE ARE SOME SPECIFIC GROUPS THAT, UNDER MOST CIRCUMSTANCES,

ARE IMPORTANT TO INCLUDE.

• People most affected by the issue or intervention under study. These are the people

whose inclusion is most important to a participatory effort – both because it’s their

inclusion that makes it participatory, and because of what they bring to it. These folks, as

we discussed earlier, are closest to the situation, have better access to the population most

concerned, and may have insights others wouldn’t have. In addition, their support is

crucial to the planning and implementation of an intervention or initiative. That support is

much more likely to be forthcoming if they’ve been involved in research or evaluation.

• Other members of the affected population. People who may not themselves be directly

affected by the issue or intervention, but who are trusted by the affected population, can

be useful members of a CBPR team.

A businessman from the Portuguese community in a small city was an invaluable member of an

action research team examining the need for services in that community. He was quite

successful, had graduated from college in the US, and needed no services himself, but his

fluency in Portuguese, his credentials as a trusted member of the community, and his

understanding of both the culture of the Portuguese residents and the culture of health and human

service workers brought a crucial dimension to data gathering, analysis, and general information

about the community.

• Decision makers. Involving local officials, legislators, and other decision makers from

the very beginning can be crucial, both in securing their support, and in making sure that

what they support is in fact what’s needed. If they’re part of the team, and have all the

information that it gathers, they become advocates not just for addressing the issue, but

for recognizing and implementing the solution or intervention that best meets the actual

needs of the population affected.

• Academics with an interest in the issue or intervention in question. Academics who

have studied the issue often have important information that can help a CBPR team better

understand the data it collects. They usually have research skills as well, and can help to

train other team members. At the same time, they can learn a great deal from

community-based researchers – about the community and communities in general, about

approaching people, about putting assumptions and preconceptions aside – and perhaps,

as a result, increase the effectiveness of their own research

It’s important that they be treated, and treat everyone else, as equals. Everyone on a team has to

view other members as colleagues, not as superiors or inferiors, or as more or less competent or

authoritative. This can be difficult on both sides – i.e. making sure that officials, academics, or

other professionals don’t look down on community members, and that community members

don’t automatically defer to (or distrust) them. It may take some work to create an environment

in which everyone feels equally respected and valued, but it’s worth the effort. Both the quality

of the research and the long-term learning by team members will benefit greatly from the effort.

(There are some circumstances where actual equality among all team members is not entirely

possible. When community members are hired as researchers, for instance, the academic or other

researcher who pays the bills has to exercise some control over the process. That doesn’t change

the necessity of all team members being viewed as colleagues and treated with respect.)

• Health, human service, and public agency staff and volunteers. Like the previous

two groups, these people have both a lot to offer and – often – a lot to learn that will

make them more sensitive and more effective at their jobs in the long run. They may have

a perspective on issues in the community that residents lack because of their closeness to

the situation. At the same time, they may learn more about the lives of those they work

with, and better understand their circumstances and the pressures that shape their lives.

• Community members at large. This category brings us back to the statement at the

beginning of this portion of the section that members of all sectors of the community

should have the opportunity to be involved. That statement covers the knowledge, skills,

and talent that different people bring to the endeavor; the importance of buy-in by all

sectors of the community if any long-term change is to be accomplished; and what team

members learn and bring back to their families, friends, and neighbors as a result of their

involvement.

WHEN SHOULD YOU EMPLOY COMMUNITY-BASED PARTICIPATORY

RESEARCH?

THERE ARE TIMES WHEN ACTION RESEARCH MAY NOT BE APPROPRIATE, AND

THERE ARE TIMES WHEN IT’S THE BEST CHOICE. HOW DO YOU DECIDE?

One criterion is the amount of time you have to do the research on the issue or intervention.

Action research may take longer than traditional methods, because of the need for training, and

because of the time it often takes for community researchers to adjust to the situation (i.e. to

realize that their opinions and intuitions are important, even if they may not always be right, and

that their conclusions are legitimate). If your time is limited, CBPR may not be the right option

Another consideration is the type of research that’s necessary. Action research lends itself

particularly well to qualitative research. If you’re obligated to deliver complicated, quantitative

results to a funder, for instance, you may want to depend on professional researchers or

evaluators. Most CBPR isn’t oriented toward producing results couched in terms of statistical

procedures. (This isn’t to say that action research teams can’t do quantitative research, but

simply that it requires more training, and therefore time, and may require an outside source or an

academic team member to crunch the numbers.)

QUALITATIVE RESEARCH

Relies on information that can’t be expressed in mathematical terms – descriptions, opinions,

anecdotes, the comments of those affected by the issue under study, etc. The results of qualitative

research are usually expressed as a narrative or set of conclusions, with the analysis backed up

by quotes, observation notes, and other non-numerical data.

(Almost anything can be expressed in terms of numbers in some way. Interviewers, for instance,

can count the number of references to a particular issue, or even record the number of times that

an interviewee squirmed in his chair. Qualitative research, however, relies on elements that can’t

be adequately – or, in many cases, at all – described numerically. The number of squirms may

say something about how nervous an interviewee is, or it may indicate that he has to go to the

bathroom. The interviewer will probably be able to tell the difference, but the numbers won’t.)
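
For instance (a hypothetical sketch, with invented interview notes), counting how often a topic comes up across a set of interview notes could look like this:

# Minimal sketch: count how often a topic is mentioned across interview notes.
# The notes and the keyword are invented examples.
interview_notes = [
    "The bus ride to the supermarket takes over an hour.",
    "Healthy food feels too expensive on our budget.",
    "There is no supermarket within walking distance.",
]

keyword = "supermarket"
mentions = sum(note.lower().count(keyword) for note in interview_notes)
print(f"'{keyword}' was mentioned {mentions} times across {len(interview_notes)} notes")

As the paragraph above suggests, such counts only become meaningful when paired with the interviewer's qualitative judgement about what the mentions actually signify.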

QUANTITATIVE RESEARCH

Depends on numbers – the number of people served by an intervention, for instance, the number

that completed the program, the number that achieved some predetermined outcome (lowered

blood pressure, employment for a certain period, citizenship), scores on academic or

psychological or physical tests, etc. These numbers are usually then processed through one or

more statistical operations to tell researchers exactly what they mean. (Some statistics may, for

instance, help researchers determine precisely what part of an intervention was responsible for a

particular behavior change.)
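
As a minimal worked example (with invented figures), the kind of simple quantitative summary described above can be produced with nothing more than the Python standard library:

# Minimal sketch: summarize hypothetical pre/post blood-pressure readings
# for participants who completed a program. All figures are invented.
from statistics import mean

pre = [150, 142, 160, 155]   # readings before the intervention
post = [138, 140, 149, 147]  # readings after the intervention

completed = len(post)
avg_change = mean(after - before for before, after in zip(pre, post))
print(f"{completed} participants completed; average change: {avg_change:.1f} mmHg")

A real evaluation would go on to apply appropriate statistical tests, but the mechanics are the same: numbers in, summary measures out.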

It may seem that quantitative research is more accurate, but that’s not always the case, especially

when the research deals with human beings, who don’t always do what you expect them to. It’s

often important to get other information in order to understand exactly what’s going on

Furthermore, sometimes there aren’t any numbers to work with. The Changes Project was

looking at the possible effects of a change in the welfare system on adult learners. The project

was conducted very early in the change process, in order to try to head off the worst

consequences of the new system. There was very little quantitative information available at that

point, and most of the project involved collecting information about the personal experiences of

learners on welfare.

In other words, neither quantitative nor qualitative methods are necessarily “better,” but

sometimes one is better than the other for a specific purpose. Often, a mix of the two will yield

the richest and most accurate information.

IT’S PROBABLY BEST AND MOST EFFECTIVE TO USE ACTION RESEARCH WHEN:

• There’s time to properly train and acclimate community researchers

• The research and analysis necessary relies on interviews, experience, knowledge of the

community, and an understanding of the issue or intervention from the inside, rather than

on academic skills or an understanding of statistics (unless you have the time and

resources to teach those skills or the team includes someone who has them)

• You need an entry to the community or group from whom the information is being

gathered

• You’re concerned with buy-in and support from the community

• Part of the purpose of using CBPR is to have an effect on and empower the community

researchers

• Part of the purpose of using CBPR is to set the stage for long-term social change

HOW DO YOU INSTITUTE AND CARRY OUT COMMUNITY-BASED PARTICIPATORY

RESEARCH?

Once you’ve decided to conduct an action research project, there are a number of steps to take to

get it up and running. You have to find and train the participants; determine exactly what

information you’re looking for and how to go about finding it; plan and carry out your research;

analyze and report on your findings; translate the findings into recommendations; take, or bring

about, action based on those recommendations; evaluate the process; and follow up

What follows assumes an ideal action research project with a structure, perhaps one initiated by a

health or human service organization. A community group that comes together out of common

interest would probably recruit by having people already involved pull in their friends, and probably

wouldn’t do any formal training unless they invited a researcher to help them specifically in that

way. The nature of your group will help you determine how – or whether – you follow each of

the steps below.

RECRUIT A COMMUNITY RESEARCH TEAM

How you recruit a team will depend on the purpose of the project as well as on who might be

most effective in gaining and analyzing information. A team may already exist, as in the example

at the beginning of this section. Or a team may simply be a group that gets together out of

common concerns. Many CBPR projects aim for a diverse team, with the idea that a mix of

people will both provide the broadest range of benefit and allow for the greatest amount of

personal learning for team members. Other projects may specifically draw only from a particular

population – a language minority, those served by a certain intervention, those experiencing a

particular physical condition.

It often makes sense for at least half the team to be composed of people directly affected by the

issue or intervention in question. Those numbers both ensure good contact with the population from which information needs to be gathered, and make it less likely that community

researchers will be overwhelmed or intimidated by other (professional) team members or by the

task

Recruiting from within an organization or program may be relatively simple, because the pool of

potential researchers is somewhat of a captive audience: you know where to find them, and you

already have a relationship with them. Recruiting from a more general population, on the other

hand, requires attention to some basic rules of communication.

• Use language that your audience can understand, whether that means presenting your

message in a language other than English, or presenting it in simple, clear English

without any academic or other jargon.

• Use the communication channels that your audience is most likely to pay attention to. An

announcement in the church that serves a large proportion of your population, a program

newsletter, or word-of-mouth might all be good channels by which to reach a particular

population.

• Be culturally sensitive and appropriate. Couch your message in a form that is not only

respectful of your audience’s culture, but that also speaks to what is important in that

culture.

• Go where your audience is. Meet with groups of people from the population you want to

work with, put out information in their neighborhoods or meeting places. Don’t wait for

them to come to you.

Given all this, the best recruitment method is still face-to-face contact by someone familiar to the

person being recruited.

ORIENT AND TRAIN THE RESEARCH TEAM


Orientation and training may be part of the same process, or they might be separate. The two

have different purposes. Orientation is meant to give people an overall picture of what is expected and a chance to ask questions.

Orientation might include:

• Introductions all around, and an introductory activity to help team members get to know

one another

• Explanation of community-based participatory research, and basic information about this

project or evaluation

• Participants’ time commitment and the support available to them, if any. Are child care,

transportation, or other support services provided or paid for?

• An opportunity to ask questions, or to discuss any part of the project or evaluation that

team members don’t understand or agree with

Especially if the team is diverse, and especially if that diversity is one of education and research

experience, an important aspect of the orientation is to start building the team, and to ensure that

everyone sees it as a team of colleagues, rather than as one group leading or dominating or –

even worse – simply tolerating another. Each person brings different skills and experience to the

effort and has something to teach everyone else. Emphasizing that from the beginning may be

necessary, not only to keep more educated members from dominating, but also to encourage less

educated members not to be afraid to ask questions and give their opinions.

Training is meant to pass on specific information and skills that people will need in order to carry

out the work of the research. There are as many models for training as there are teams to be

trained. As noted above, orientation might serve as all or part of an introductory training session.

Training can take place all at once – in one or several multi-hour sessions on consecutive days –

or over the whole period of the project, with each training piece leading to the activity that it

concerns. It might be conducted by one person – who, in turn, could be someone from inside the

organization or an outside facilitator – by a series of experts in different areas, or by the team

members themselves. (In this last case, team members might, for instance, determine what they

need to know, and then decide on and implement an appropriate way to learn it.)

Regardless of how it’s done, here are some general guidelines for training that are usually

worth following:

• Find a comfortable space to hold the training

• Provide, or make sure that people bring, food and drink

• Take frequent, short breaks. It’s better for people’s concentration to take a three-minute

break every half hour than a 20-minute break every three hours

• Structure the space for maximum participation and interaction - chairs in a circle, room

to move around, etc.

• Vary the ways in which material is presented. People learn in a variety of ways – by

hearing, by seeing, by discussion, by example (watching others), and by doing. The more

of these methods you can include, the more likely you are to hold people’s attention and

engage everyone on the team.

• Use the training to build your team. Training is a golden opportunity for people to get to

know and trust one another, and to absorb the guiding principles for the work.

The actual content of the training will, of course, depend on the project you’re undertaking,

but general areas should probably include:

• Necessary research skills. These might include interview techniques, Internet searching,

constructing a survey, and other basic research and information-gathering methods.

• Important information about the community or the intervention in question.

• Meeting and negotiation skills. Many of the people on your team may not have had the

experience of participating in numerous meetings. They need time and support both to

develop meeting skills – following discussion, knowing when it’s okay to interrupt,

feeling confident enough to express their opinions – and to become comfortable with the

meeting process.

• Preparing a report. This doesn’t necessarily mean drafting a formal document.

Depending upon the team members, a flow chart, a slide show, a video, or a collage

might be informative and powerful ways to convey research results, as might oral

testimony or a sound recording.

• Making a presentation. Knowing what to expect, and learning how to make a clear and

cogent presentation can make the difference between having your findings and

recommendations accepted or rejected.

DETERMINE THE QUESTIONS THE RESEARCH OR EVALUATION IS MEANT TO

ANSWER

The questions you choose to answer will shape your research. Whether you're conducting an evaluation or researching a community issue, there are many types of questions you might ask.

An evaluation can focus on process: What is actually being done, and how does that compare

with what the intervention or initiative set out to do? It can focus on outcomes: Is the end result

of the intervention what you intended it to be? Or it can try to look at both, and to decide whether

the process in fact works to gain the desired outcome. An evaluation may also aim to identify

specific elements of the process that have to be changed, or to identify a whole new process to

replace one that doesn't seem to be working.

Research on a community issue also may be approached in a number of ways. You may simply

be trying to find out whether a certain condition exists in your community, or to what extent it

exists. You may be concerned with how, or how much, it affects the community, or what parts of

the community it affects. You may be seeking a particular outcome, and the research questions

you ask may be designed to help you reach that outcome.

PLAN AND STRUCTURE YOUR RESEARCH ACTIVITY

Given your time constraints, the capacity of your team, and the questions you’re considering,

plan your research.

Your plan should include:

• The kind and amount of information-gathering that best suits your project (e.g.,

interviews, library research, surveys)

• Who will be responsible for what

• The timeline – i.e., deadlines for completing each phase of the plan

• How and by whom the information will be analyzed

• What the report of the research or evaluation will look like

• When, how, and to whom the report will be presented
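One simple way to keep such a plan explicit and shareable is to record each element in a structured form. The sketch below is hypothetical; every method, assignment, and date in it is invented for illustration only.

```python
# Hypothetical sketch of a CBPR research plan kept as structured data.
research_plan = {
    "methods": ["interviews", "library research", "community survey"],
    "responsibilities": {
        "interviews": "community researcher pairs",
        "survey design": "whole team with outside facilitator",
        "analysis": "whole team at monthly meetings",
    },
    "deadlines": {
        "training complete": "2018-03-15",
        "data collection complete": "2018-06-30",
        "draft report": "2018-08-15",
    },
    "report_format": "community presentation plus short written summary",
    "audiences": ["project staff", "funders", "community meeting"],
}

# Print a simple timeline the whole team can check progress against.
for phase, deadline in research_plan["deadlines"].items():
    print(f"{phase}: due {deadline}")
```

A flip chart or a shared spreadsheet serves exactly the same purpose; what matters is that responsibilities and deadlines are written down where everyone can see them.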

ANTICIPATE AND PREPARE CONTINGENCY PLANS FOR PROBLEMS THAT MIGHT

ARISE

An action research group, like any other, can have internal conflicts, as well as conflicts with

external forces. People may disagree, or worse; some people may drop out, or may not do what

they promised; people may not understand, or may choose not to follow the procedures you’ve

agreed on. There will need to be guidelines to deal with each of these and other potential pitfalls.

IMPLEMENT YOUR RESEARCH PLAN

Now that you've completed your planning, it's time to carry it out.

PREPARE AND PRESENT YOUR REPORT AND RECOMMENDATIONS

The report, as explained previously, may be a written document, or may be in some alternative

form. If it’s an evaluation, it might be presented in one way to the staff of the intervention being

evaluated, and in another to funders or the community, depending upon your purposes.

Some possibilities for presentation include:

• A press conference

• A community presentation

• A newspaper or newsletter article

• A written report to funders and/or other interested parties

TAKE, OR TRY TO BRING ABOUT, APPROPRIATE ACTION ON THE ISSUE OR

INTERVENTION

Action can range from adjusting a single element of an intervention as a result of an evaluation,

to writing letters to the editor, advocating with legislators, taking direct action (a demonstration,

a lawsuit), and starting a community initiative that grows into a national movement. In most

cases, a CBPR effort is meant to lead to some kind of action, even if that action is simply further

research.

FOLLOW UP

An action research project doesn’t end with the presentation, or even with action. The purpose of

the research often has as much to do with the learning of the team members as it does with

research results. Even where that’s not the case, the skills and methods that action researchers

learn need to be cemented, so they can carry over to other projects.

• Evaluate the research process. This should be a collaborative effort by all team members,

and might also include others (those who actually implement an evaluated intervention,

for instance). Did things go according to plan? What were the strengths of the process?

What were its weaknesses? Was the training understandable and adequate? What other

support would have been helpful? What parts of the process should be changed?

• Identify benefits to the community or group that came about (or may come about) as a

result of the research process. These may have to do with action, with making the

community more aware of particular issues, or with creating more community activists.

• Identify team members’ learning and perceptions of changes in themselves. Some areas

to consider are basic and other academic skills; public speaking; meeting skills;

self-confidence and self-esteem; ability to influence the world and their own lives; and self-image (seeing themselves as proactive, rather than acted upon, for example).

• Maintain gains by keeping researchers involved. There are a number of ways to keep the

momentum of a CBPR team going, including starting another project, if there’s a reason

to do so; encouraging team members to be active on other issues they care about (and to

suggest some potential areas, and perhaps make introductions that make it easier for them

to do so); keeping the group together as a (paid) research consortium; or consulting, as a

group, with other organizations interested in conducting action research.

CBPR is not always the right choice for an initiative or evaluation, but it’s always worthy of

consideration. If you can employ it in a given situation, the rewards can be great.

Community-based participatory research can serve many purposes. It can supply accurate and

appropriate information to guide a community initiative or to evaluate a community intervention.

It can secure community buy-in and support for that initiative or intervention. It can enhance

participants’ personal development and opportunities. It can empower those who are most

affected by conditions or issues in the community to analyze and change them. And, perhaps

most important, it can lead to long-term social change that improves the quality of life for

everyone.

IN SUMMARY

Community-based participatory research is a process conducted by and for the people most

affected by the issue or intervention being studied or evaluated. It has multiple purposes,

including the empowerment of the participants, the gathering of the best and most accurate

information possible, garnering community support for the effort, and social change that leads to

the betterment of the community for everyone.

As with any participatory process, CBPR can take a great deal of time and effort. The

participants are often economically and educationally disadvantaged, lacking basic skills and

other resources. Thus, training and support – both technical and personal – are crucial elements

in any action research process. With proper preparation, however, participatory action research

can yield not only excellent research results, but huge benefits for the community over the long

run.

Chapter 8

PARTICIPATORY EVALUATION

Experienced community builders know that involving stakeholders - the people directly

connected to and affected by their projects - in their work is tremendously important. It gives

them the information they need to design, and to adjust or change, what they do to best meet the

needs of the community and of the particular populations that an intervention or initiative is

meant to benefit. This is particularly true in relation to evaluation.

As we have previously discussed, community-based participatory research can be employed in

describing the community, assessing community issues and needs, finding and choosing best

practices, and/or evaluation. We consider the topic of participatory evaluation important enough

to give it a section of its own, and to show how it fits into the larger participatory research

picture.

It's a good idea to build stakeholder participation into a project from the beginning. One of the

best ways to choose the proper direction for your work is to involve stakeholders in identifying

real community needs, and the ways in which a project will have the greatest impact. One of the

best ways to find out what kinds of effects your work is having on the people it's aimed at is to

include those on the receiving end of information or services or advocacy on your evaluation

team.

Often, you can see most clearly what's actually happening through the eyes of those directly

involved in it - participants, staff, and others who are involved in taking part in and carrying out

a program, initiative, or other project. Previously, we have discussed how you can involve those

people in conducting research on the community and choosing issues to address and directions to

go in. This section is about how you can involve them in the whole scope of the project,

including its evaluation, and how that's likely to benefit the project's final outcomes.

WHAT IS PARTICIPATORY EVALUATION?

When most people think of evaluation, they think of something that happens at the end of a

project - that looks at the project after it's over and decides whether it was any good or not.

Evaluation actually needs to be an integral part of any project from the beginning.

Participatory evaluation involves all the stakeholders in a project - those directly affected by

it or by carrying it out - in contributing to the understanding of it, and in applying that

understanding to the improvement of the work.

Participatory evaluation, as we shall see, isn't simply a matter of asking stakeholders to take part.

Involving everyone affected changes the whole nature of a project from something done for a

group of people or a community to a partnership between the beneficiaries and the project

implementers. Rather than powerless people who are acted on, beneficiaries become the co-pilots

of a project, making sure that their real needs and those of the community are recognized and

addressed. Professional evaluators, project staff, project beneficiaries or participants, and other

community members all become colleagues in an effort to improve the community's quality of

life.

This approach to planning and evaluation isn't possible without mutual trust and respect. These

have to develop over time, but that development is made more probable by starting out with an

understanding of the local culture and customs - whether you're working in a developing country

or in an American urban neighborhood. Respecting individuals and the knowledge and skills

they have will go a long way toward promoting long-term trust and involvement.

The other necessary aspect of any participatory process is appropriate training for everyone

involved. Some stakeholders may not even be aware that project research takes place; others may

have no idea how to work alongside people from different backgrounds; and still others may not

know what to do with evaluation results once they have them. We'll discuss all of these issues -

stakeholder involvement, establishing trust, and training - as the section progresses. The real

purpose of an evaluation is not just to find out what happened, but to use the information to make

the project better.

IN ORDER TO ACCOMPLISH THIS, EVALUATION SHOULD INCLUDE EXAMINING AT

LEAST THREE AREAS:

 Process. The process of a project includes the planning and logistical activities needed to

set up and run it. Did we do a proper assessment beforehand so we would know what the

real needs were? Did we use the results of the assessment to identify and respond to those

needs in the design of the project? Did we set up and run the project within the timelines

and other structures that we intended? Did we involve the people we intended to? Did we

have or get the resources we expected? Were staff and others trained and prepared to do

the work? Did we have the community support we expected? Did we record what we did

accurately and on time? Did we monitor and evaluate as we intended?

 Implementation. Project implementation is the actual work of running it. Did we do what

we intended? Did we serve or affect the number of people we proposed to? Did we use

the methods we set out to use? Was the level of our activity what we intended (e.g., did

we provide the number of hours of service we intended to)? Did we reach the

population(s) we aimed at? What exactly did we provide or do? Did we make intentional

or unintentional changes, and why?

 Outcomes. The project's outcomes are its results - what actually happened as a

consequence of the project's existence. Did our work have the effects we hoped for? Did

it have other, unforeseen effects? Were they positive or negative (or neither)? Do we

know why we got the results we did? What can we change, and how, to make our work

more effective?

Many who write about participatory evaluation combine the first two of these areas into process

evaluation, and add a third - impact evaluation - in addition to outcome evaluation. Impact

evaluation looks at the long-term results of a project, whether the project continues, or does its

work and ends.

Rural development projects in the developing world, for example, often exist simply to pass on

specific skills to local people, who are expected to then both practice those skills and teach

them to others. Once people have learned the skills - perhaps particular cultivation techniques,

or water purification - the project ends. If in five or ten years, an impact evaluation shows that

the skills the project taught are not only still being practiced, but have spread, then the project's

impact was both long-term and positive.

In order for these areas to be covered properly, evaluation has to start at the very beginning of the

project, with assessment and planning.

IN A PARTICIPATORY EVALUATION, STAKEHOLDERS SHOULD BE INVOLVED IN:

 Naming and framing the problem or goal to be addressed


 Developing a theory of practice (process, logic model) for how to achieve success

 Identifying the questions to ask about the project and the best ways to ask them - these

questions will identify what the project means to do, and therefore what should be

evaluated

What's the real goal, for instance, of a program to introduce healthier foods in school lunches?

It could be simply to convince children to eat more fruits, vegetables, and whole grains. It

could be to get them to eat less junk food. It could be to encourage weight loss in kids who are

overweight or obese. It could simply be to educate them about healthy eating, and to persuade

them to be more adventurous eaters. The evaluation questions you ask both reflect and

determine your goals for the program. If you don't measure weight loss, for instance, then

clearly that's not what you're aiming at. If you only look at an increase in children's

consumption of healthy foods, you're ignoring the fact that if they don't cut down on something

else (junk food, for instance), they'll simply gain weight. Is that still better than not eating the

healthy foods? You answer that question by what you choose to examine - if it is better, you

may not care what else the children are eating; if it's not, then you will care. (A brief sketch of how two different measurement choices tell two different stories appears after this list.)

 Collecting information about the project

 Making sense of that information

 Deciding what to celebrate, and what to adjust or change, based on information from the

evaluation
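Returning to the school-lunch example above, here is a deliberately simplified sketch (all figures are invented) of how two different measurement choices can tell two different stories about the same child:

```python
# Invented figures for one child, servings per week, before and after the program.
healthy_before, healthy_after = 5, 9
junk_before,    junk_after    = 12, 12

# Question 1: did consumption of healthy foods increase?
print("Change in healthy servings:", healthy_after - healthy_before)

# Question 2: did total consumption change (healthy plus junk)?
print("Change in total servings:",
      (healthy_after + junk_after) - (healthy_before + junk_before))
# Asking only the first question would call the program a success;
# the second shows the child is simply eating more food overall.
```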

WHY WOULD (AND WHY WOULDN'T) YOU USE PARTICIPATORY EVALUATION?

Why would you use participatory evaluation? The short answer is that it's often the most

effective way to find out what you need to know, both at the beginning of and throughout the

course of a project. In addition, it carries benefits for both individual participants and the

community that other methods don't.

SOME OF THE MAJOR ADVANTAGES OF PARTICIPATORY EVALUATION:

 It gives you a better perspective on both the initial needs of the project's

beneficiaries, and on its ultimate effects. If stakeholders, including project

beneficiaries, are involved from the beginning in determining what needs to be evaluated

and why - not to mention what the focus of the project needs to be - you're much more

likely to aim your work in the right direction, to correctly determine whether your project

is effective or not, and to understand how to change it to make it more so.

 It can get you information you wouldn't get otherwise. When project direction and

evaluation depend, at least in part, on information from people in the community, that

information will often be more forthcoming if it's asked for by someone familiar.

Community people interviewing their friends and neighbors may get information that an

outside person wouldn't be offered.

 It tells you what worked and what didn't from the perspective of those most directly

involved - beneficiaries and staff. Those implementing the project and those who are

directly affected by it are most capable of sorting out the effective from the ineffective.

 It can tell you why something does or doesn't work. Beneficiaries are often able to

explain exactly why they didn't respond to a particular technique or approach, thus giving

you a better chance to adjust it properly.

 It results in a more effective project. For the reasons just described, you're much more

apt to start out in the right direction, and to know when you need to change direction if

you haven't. The consequence is a project that addresses the appropriate issues in the

appropriate way, and accomplishes what it sets out to do.

 It empowers stakeholders. Participatory evaluation gives those who are often not

consulted - line staff and beneficiaries particularly - the chance to be full partners in

determining the direction and effectiveness of a project.

 It can provide a voice for those who are often not heard. Project beneficiaries are

often low-income people with relatively low levels of education, who seldom have - and

often don't think they have a right to - the chance to speak for themselves. By involving

them from the beginning in project evaluation, you assure that their voices are heard, and

they learn that they have the ability and the right to speak for themselves.

 It teaches skills that can be used in employment and other areas of life. In addition to

the development of basic skills and specific research capabilities, participatory evaluation

encourages critical thinking, collaboration, problem-solving, independent action, meeting

deadlines...all skills valued by employers, and useful in family life, education, civic

participation, and other areas.

 It bolsters self-confidence and self-esteem in those who may have little of either. This

category can include not only project beneficiaries, but also others who may, because of

circumstance, have been given little reason to believe in their own competence or value

to society. The opportunity to engage in a meaningful and challenging activity, and to be

treated as a colleague by professionals, can make a huge difference for folks who are

seldom granted respect or given a chance to prove themselves.

 It demonstrates to people ways in which they can take more control of their lives.

Working with professionals and others to complete a complex task with real-world

consequences can show people how they can take action to influence people and events.

 It encourages stakeholder ownership of the project. If those involved feel the project

is theirs, rather than something imposed on them by others, they'll work hard both in

implementing it, and in conducting a thorough and informative evaluation in order to

improve it.

 It can spark creativity in everyone involved. For those who've never been involved in

anything similar, a participatory evaluation can be a revelation, opening doors to a whole

new way of thinking and looking at the world. To those who have taken part in

evaluation before, the opportunity to exchange ideas with people who may have new

ways of looking at the familiar can lead to a fresh perspective on what may have seemed

to be a settled issue.

 It encourages working collaboratively. For participatory evaluation to work well, it has

to be viewed by everyone involved as a collaboration, where each participant brings

specific tools and skills to the effort, and everyone is valued for what she can contribute.

Collaboration of this sort not only leads to many of the advantages described above, but

also fosters a more collaborative spirit for the future as well, leading to other successful

community projects.

 It fits into a larger participatory effort. When community assessment and the planning

of a project have been a collaboration among project beneficiaries, staff, and community

members, it only makes sense to include evaluation in the overall plan, and to approach it

in the same way as the rest of the project. In order to conduct a good evaluation, its

planning should be part of the overall planning of the project. Furthermore, participatory

process generally matches well with the philosophy of community-based or grass roots

groups or organizations.

With all these positive aspects, participatory evaluation carries some negative ones as well.

Whether its disadvantages outweigh its advantages depends on your circumstances, but whether

you decide to engage in it or not, it's important to understand what kinds of drawbacks it might

have.

THE SIGNIFICANT DISADVANTAGES OF PARTICIPATORY EVALUATION INCLUDE:

 It takes more time than conventional process. Because there are so many people with

different perspectives involved, a number of whom have never taken part in planning or

evaluation before, everything takes longer than if a professional evaluator or a team

familiar with evaluation simply set up and conducted everything. Decision-making

involves a great deal of discussion, gathering people together may be difficult, evaluators

need to be trained, etc.

 It takes the establishment of trust among all participants in the process. If you're

starting something new (or, all too often, even if the project is ongoing), there are likely

to be issues of class distinction, cultural differences, etc., dividing groups of stakeholders.

These can lead to snags and slowdowns until they're resolved, which won't happen

overnight. It will take time and a good deal of conscious effort before all stakeholders

feel comfortable and confident that their needs and culture are being addressed.

 You have to make sure that everyone's involved, not just "leaders" of various

groups. All too often, "participatory" means the participation of an already-existing

power structure. Most leaders are actually that - people who are most concerned with the

best interests of the group, and whom others trust to represent them and steer them in the

direction that best reflects those interests. Sometimes, however, leaders are those who

push their way to the front, and try to confirm their own importance by telling others

what to do.

By involving only leaders of a population or community, you run the risk of losing - or never

gaining - the confidence and perspective of the rest of the population, which may dislike and

distrust a leader of the second type, or may simply see themselves shut out of the process. They

may see the participatory evaluation as a function of authority, and be uninterested in taking part

in it. Working to recruit "regular" people as well as, or instead of, leaders may be an important

step for the credibility of the process. But it's a lot of work and may be tough to sell.

 You have to train people to understand evaluation and how the participatory

process works, as well as teaching them basic research skills. There are really a

number of potential disadvantages here. The obvious one is that of time, which we've

already raised - training takes time to prepare, time to implement, and time to sink in.

Another is the question of what kind of training participants will respond to. Still another

concerns recruitment - will people be willing to put in the time necessary to prepare them

for the process, let alone the time for the process itself?

 You have to get buy-in and commitment from participants. Given what evaluators

will have to do, they need to be committed to the process, and to feel ownership of it.

You have to structure both the training and the process itself to bring about this

commitment.

 People's lives - illness, child care and relationship problems, getting the crops in, etc.

- may cause delays or get in the way of the evaluation. Poor people everywhere live on

the edge, which means they're engaged in a delicate balancing act. The least tilt to one

side or the other - a sick child, too many days of rain in a row - can cause a disruption

that may result in an inability to participate on a given day, or at all. If you're dealing

with a rural village that's dependent on agriculture, for instance, an accident of weather

can derail the whole process, either temporarily or permanently.

 You may have to be creative about how you get, record, and report information. If

some of the participants in an evaluation are non- or semi-literate, or if participants speak

a number of different languages (English, Spanish, and Lao, for instance), a way to

record information will have to be found that everyone can understand, and that can, in

turn, be understood by others outside the group.

 Funders and policy makers may not understand or believe in participatory

evaluation. At worst, this can lose you your funding, or the opportunity to apply for

funding. At best, you'll have to spend a good deal of time and effort convincing funders

and policy makers that participatory evaluation is a good idea, and obtaining their support

for your effort.

Some of these disadvantages could also be seen as advantages: the training people receive blends

in with their development of new skills that can be transferred to other areas of life, for instance;

coming up with creative ways to express ideas benefits everyone; once funders and policy

makers are persuaded of the benefits of participatory process and participatory evaluation, they

may encourage others to employ it as well. Nonetheless, all of these potential negatives eat up

time, which can be crucial. If it's absolutely necessary that things happen quickly (which is true

not nearly as often as most of us think it is), participatory evaluation is probably not the way to

go.

WHEN MIGHT YOU USE PARTICIPATORY EVALUATION?

So when do you use participatory evaluation? Some of the reasons you might decide it's the best

choice for your purposes:

 When you're already committed to a participatory process for your project.

Evaluation planning can be included and collaboratively designed as part of the overall

project plan.

 When you have the time, or when results are more important than time. As should

be obvious from the last part of this section, one of the biggest drawbacks to participatory

evaluation is the time it takes. If time isn't what's most important, you can gain the

advantages of a participatory evaluation without having to compensate for many of the

disadvantages.

 When you can convince funders that it's a good idea. Funders may specify that they

want an outside evaluation, or they may simply be dubious about the value of

participatory evaluation. In either case, you may have some persuading to do in order to

be able to use a participatory process. If you can get their support, however, funders may

like the fact that participatory evaluation is often less expensive, and that it has added

value in the form of empowerment and transferable skills.

 When there may be issues in the community or population that outside evaluators

(or program providers, for that matter) aren't likely to be aware of. Political, social,

and interpersonal factors in the community can skew the results of an evaluation, and

without an understanding of those factors and their history, evaluators may have no idea

that what they're finding out is colored in any way. Evaluators who are part of the

community can help sort out the influence of these factors, and thus end up with a more

accurate evaluation.

 When you need information that it will be difficult for anyone outside the

community or population to get. When you know that members of the community or

population in question are unwilling to speak freely to anyone from outside, participatory

evaluation is a way to raise the chances that you'll get the information you need.

 When part of the goal of the project is to empower participants and help them

develop transferable skills. Here, the participatory evaluation, as it should in any case,

becomes a part of the project itself and its goals.

 When you want to bring the community or population together. In addition to

fostering a collaborative spirit, as we've mentioned, a participatory evaluation can create

opportunities for people who normally have little contact to work together and get to

know one another. This familiarity can then carry over into other aspects of community

life, and even change the social character of the community over the long term.

WHO SHOULD BE INVOLVED IN PARTICIPATORY EVALUATION?

We've referred continually to stakeholders - the people who are directly affected by the project

being evaluated. Who are the stakeholders? That varies from project to project, depending on the

focus, the funding, the intended outcomes, etc.

THERE ARE A NUMBER OF GROUPS THAT ARE GENERALLY INVOLVED,

HOWEVER:

 Participants or beneficiaries. The people whom the project is meant to benefit. That

may be a specific group (people with a certain medical condition, for instance), a

particular population (recent Southeast Asian immigrants, residents of a particular area),

or a whole community. They may be actively receiving a service (e.g., employment

training) or may simply stand to benefit from what the project is doing (violence

prevention in a given neighborhood). These are usually the folks with the greatest stake

in the project's success, and often the ones with the least experience of evaluation.

 Project line staff and/or volunteers. The people who actually do the work of carrying

out the project. They may be professionals, people with specific skills, or community

volunteers. They may work directly with project beneficiaries as mentors, teachers, or

health care providers; or they may advocate for immigrant rights, identify open space to

be preserved, or answer the phone and stuff envelopes. Whoever they are, they often

know more about what they're doing than anyone else, and their lives can be affected by

the project as much as those of participants or beneficiaries.

 Administrators. The people who coordinate the project or specific aspects of it. Like

line staff and volunteers, they know a lot about what's going on, and they're intimately

involved with the project every day.

 Outside evaluators, if they're involved. In many cases, outside evaluators are hired to

run participatory evaluations. The need for their involvement is obvious.

 Community officials. You may need the support of community leaders, or you may

simply want to give them and other participants the opportunity to get to know one

another in a context that might lead to better understanding of community needs.

 Others whose lives are affected by the project. The definition of this group varies

greatly from project to project. In general, it refers to people whose jobs or other aspects

of their lives will be changed either by the functioning of the project itself, or by its

outcomes.

An example would be landowners whose potential use of their land

would be affected by an environmental initiative or a neighborhood

plan.

HOW DO YOU CONDUCT A PARTICIPATORY EVALUATION?

Participatory evaluation encompasses elements of designing the project as well as evaluating it.

What you evaluate depends on what you want to know and what you're trying to do. Identifying

the actual evaluation questions sets the course of the project just as surely as a standardized

testing program guides teaching. When these questions come out of an assessment in which

stakeholders are involved, the evaluation is one phase of a community-based participatory

research process.

A participatory evaluation really has two stages: One comprises finding and training stakeholders

to act as participant evaluators. The second - some of which may take place before or during the

first stage - encompasses the planning and implementation of the project and its evaluation, and

includes six steps:

 Naming and framing the issue

 Developing a theory of practice to address it

 Deciding what questions to ask, and how to ask them to get the information you need

 Collecting information

 Analyzing the information you've collected

 Using the information to celebrate what worked, and to adjust and improve the project

We'll examine both of these stages in detail.

FINDING AND TRAINING STAKEHOLDERS TO ACT AS PARTICIPANT EVALUATORS

Unfortunately, this stage isn't simply a matter of announcing a participatory evaluation and then

sitting back while people beat down the doors to be part of it. In fact, it may be one of the more

difficult aspects of conducting a participatory evaluation.

Here's where the trust building we discussed earlier comes into play. The population you're

working with may be distrustful of outsiders, or may be used to promises of involvement that

turn out to be hollow or simply ignored. They may be used to being ignored in general, and/or

offered services and programs that don't speak to their real needs. If you haven't already built a

relationship to the point where people are willing to believe that you'll follow through on what

you say, now is the time to do it. It may take some time and effort - you may have to prove that

you'll still be there in six months - but it's worth it. You're much more likely to have a successful

project, let alone a successful evaluation, if you have a relationship of mutual trust and respect.

But let's assume you have that step out of the way, and that you've established good relationships

in the community and among the population you're working with, as well as with staff of the

project. Let's assume as well that these folks know very little, if anything, about participatory

evaluation. That means they'll need training in order to be effective.

If, in fact, your evaluation is part of a larger participatory effort, the question arises as to

whether to simply employ the same team that did assessments and/or planned the project,

perhaps with some additions, as evaluators. That course of action has both pluses and minuses.

The team is already assembled, has developed a method of working together, has some training

in research methods, etc., so that they can hit the ground running - obviously a plus.

The fact that they have a big stake in seeing the project be successful can work either way: they

may interpret their findings in the best possible light, or even ignore negative information; or

they may be eager to see exactly where and how to adjust the work to make it go better.

Another issue is burnout. Evaluation will mean more time in addition to what an assessment

and planning team has already put in. While some may be more than willing to continue, many

may be ready for a break (or may be moving on to another phase of their lives). If the

possibility of assembling a new team exists, it will give those who've had enough the chance to

gracefully withdraw.

How you handle this question will depend on the attitudes of those involved, how many people

you actually have to draw on (if the recruitment of the initial team was really difficult, you may

not have a lot of choices), and what people committed to.

RECRUIT PARTICIPANT EVALUATORS

There are many ways to accomplish this. In some situations, it makes the most sense to put out a

general call for volunteers; in others, to approach specific individuals who are likely - because of

their commitment to the project or to the population - to be willing. Alternatively, you might

approach community leaders or stakeholders to suggest possible evaluators.

Some basic guidelines for recruitment include:

 Use communication channels and styles that reach the people you're aiming at

 Make your message as clear as possible

 Use plain English and/or whatever other language(s) the population uses

 Put your message where the audience is

 Approach potential participants individually where possible - if you can find people they

know to recruit them, all the better

 Explain what people may gain from participation

 Be clear that they're being asked because they already have the qualities that are

necessary for participation

 Encourage people, but also be honest about the amount and extent of what needs to be

done

 Work out with participants what they're willing and able to do

 Try to arrange support - child care, for example - to make participation easier

 Ask people you've recruited to recommend - or recruit - others

In general, it's important for potential participant evaluators - particularly those whose

connection to the project isn't related to their employment - to understand the commitment

involved. An evaluation is likely to last a year, unless the project is considerably shorter than

that, and while you might expect and plan for some dropouts, most of the team needs to be

available for that long.

In order to make that commitment easier, discuss with participants what kinds of support they'll

need in order to fulfill their commitment - child care and transportation, for instance - and try to

find ways to provide it. Arrange meetings at times and places that are easiest for them (and keep

the number of meetings to a minimum). For participants who are paid project staff, the

evaluation should be considered part of their regular work, so that it isn't an extra, unpaid, burden

that they feel they can't refuse.

Be careful to try to put together a team that's a cross-section of the stakeholder population. As

we've already discussed, if you recruit only "leaders" from among the beneficiary population, for

instance, you may create resentment in the rest of the group, not get a true perspective of the

thinking or perceptions of that group, and defeat the purpose of the participatory nature of the

evaluation as well. Even if the leaders are good representatives of the group, you may want to

broaden your recruitment in the hopes of developing more community leadership, and

empowering those who may not always be willing to speak out.

TRAIN PARTICIPANT EVALUATORS

Participants, depending on their backgrounds, may need training in a number of areas. They may

have very little experience in attending and taking part in meetings, for instance, and may need to

start there. They may benefit from an introduction to the idea of participatory evaluation, and

how it works. And they'll almost certainly need some training in data gathering and analysis.

How training gets carried out will vary with the needs and schedules of participants and the

project. It may take place in small chunks over a relatively long period of time - weeks or months

- or might happen all at once in the course of a weekend retreat, or might be some combination.

There's no right or wrong way here. The first option will probably make it possible for more

people to take part; the second allows for people to get to know one another and bond as a team,

and a combination might allow for both.

By the same token, there are many training methods, any or all of which might be useful with a

particular group. Training in meeting skills - knowing when and how to contribute and respond,

following discussion, etc. - may best be accomplished through mentoring, rather than instruction.

Interviewing skills may best be learned through role playing and other experiential techniques.

Some training - how to approach local people, for example - might best come from participants

themselves.

Some of the areas in which training might be necessary:

 The participatory evaluation process. How participatory evaluation works, its goals, the

roles people may play in the process, what to expect.

 Meeting skills. Following discussion, listening skills, handling disagreement or conflict,

contributing and responding appropriately, general ground rules and etiquette, etc.

 Interviewing. Putting people at ease, body language and tone of voice, asking open-ended

and follow-up questions, recording what people say and other important information,

handling interruptions and distractions, group interviews.

 Observation. Direct vs. participant observation, choosing appropriate times and places to

observe, relevant information to include, recording observations.

 Recording information and reporting it to the group. What interviewees and those

observed say and do, the non-verbal messages they send, who they are (age, situation,

etc.), what the conditions were, the date and time, any other factors that influenced the

interview or observation.

For people for whom writing isn't comfortable, where writing isn't feasible,

or where language is a barrier, there should be alternative recording and

reporting methods. Drawings, maps, diagrams, tape recording, videos, or

other imaginative ways of remembering exactly what was said or observed

can be substituted, depending on the situation. In interviews, if audio or

video recording is going to be used, it's important to get the interviewee's

permission first - before the interviewer shows up with the equipment, so

that there are no misunderstandings.

 Analyzing information. Critical thinking, what kinds of things statistics tell you, other

things to think about.

PLANNING AND IMPLEMENTING THE PROJECT AND ITS EVALUATION

There's an assumption here that all phases of a project will be participatory, so that not only its

evaluation, but its planning and the assessment that leads to it also involve stakeholders (not

necessarily the same ones who act as evaluators). If stakeholders haven't been involved from the

beginning, they don't have the deep understanding of the purposes and structure of a project that

they'd have of one they've helped form. The evaluation that results, therefore, is likely to be less

perceptive - and therefore less valuable - than one of a project they've been involved in from the

start.

NAMING AND FRAMING THE PROBLEM OR GOAL TO BE ADDRESSED


Identifying what you're evaluating defines what the project is meant to address and accomplish.

Community representatives and stakeholders, all those with something to gain or lose, work

together to develop a shared vision and mission. By collecting information about community

concerns and identifying available assets, communities can understand which issues to focus a

project on.

Naming a problem or goal refers to identifying the issue that needs to be addressed. Framing it has

to do with the way we look at it. If youth violence is conceived of as strictly a law enforcement

problem, for instance, that framing implies specific ways of solving it: stricter laws, stricter

enforcement, zero tolerance for violence, etc. If it's framed as a combination of a number of issues -

availability of hand guns, unemployment and drug use among youth, social issues that lead to the

formation of gangs, alienation and hopelessness in particular populations, poverty, etc. - then

solutions may include employment and recreation programs, mentoring, substance abuse treatment,

etc., as well as law enforcement. The more we know about a problem, and the more different

perspectives we can include in our thinking about it, the more accurately we can frame it, and the

more likely we are to come up with an effective solution.

DEVELOPING A THEORY OF PRACTICE TO ADDRESS THE PROBLEM

How do you conduct a community effort so that it has a good chance of solving the problem at

hand? Many communities and organizations answer this question by throwing uncoordinated

programs at the problem, or by assuming a certain approach (law enforcement, as in our

example, for instance) will take care of it. In fact, you have to have a plan for creating,

implementing, evaluating, adjusting, and maintaining a solution if you want it to work. Whatever

you call this plan - a theory of practice, a logic model, or simply an approach or process - it

should be logical, consistent, consider all the areas that need to be coordinated in order for it to

work, and give you an overall guideline and a list of steps to follow in order to carry it out.

Once you've identified an issue, for instance, one possible theory of practice might be:

 Form a coalition of organizations, agencies, and community members concerned with the

problem.

 Recruit and train a participatory research team which includes representatives of all

stakeholder groups.

 Have the team collect both statistical and qualitative, first-hand information about the problem, and identify community assets that might help in addressing it.

 Use the information you have to design a solution that takes into account the problem's

complexity and context.

This might be a single program or initiative, or a coordinated,

communitywide effort involving several organizations, the media, and

individuals. If it's closer to the latter, that's part of the complexity you have

to take into account. Coordination has to be part of your solution, as do ways

to get around the bureaucratic roadblocks that might occur and methods to

find the financial and personnel resources you need.

 Implement the solution.


 Carry out monitoring and evaluation that will give you ongoing feedback about how well

you're meeting objectives, and what you should change to improve your solution.

 Use the information from the evaluation to adjust and improve the solution.

 Go back to # 2 and do as much of it again as you need to until the problem is solved, or -

more likely, since many community problems never actually disappear - indefinitely in

order to maintain and increase your gains.

DECIDING WHAT EVALUATION QUESTIONS TO ASK, AND HOW TO ASK THEM TO

GET THE INFORMATION YOU NEED

As we've discussed, choosing the evaluation questions essentially guides the work. What you're

really choosing here is what you're going to pay attention to. There could be significant results

from your project that you're never aware of, because you didn't look for them - you didn't ask

the questions to which those results would have been the answers. That's why it's so important to

select questions carefully: they'll determine what you find.

Framing the problem is one element here - putting it in context, looking at it from all sides,

stepping back from your own assumptions and biases to get a clearer and broader view of it.

Another is envisioning the outcomes you want, and thinking about what needs to change, and

how, in order to reach them.

Framing is important in this activity as well. If you want simply to reduce youth violence,

stricter laws and enforcement might seem like a reasonable solution, assuming you're willing to

stick with them forever; if you want not only to reduce or eliminate youth violence, but to change

the climate that fosters it (i.e., long term social change), the solution becomes much broader and

requires, as we pointed out above, much more than law enforcement. And a broader solution

means more, and more complex, evaluation questions.

In the first case, evaluation questions might be limited to some variation of: "Were there more

arrests and convictions of youthful offenders for violent crimes in the time period studied, as

compared to the last period for which there were records before the new solution was put in

place?" "Did youthful offenders receive harsher sentences than before?" "Was there a reduction

in violent incidents involving youth?"

Looking at the broader picture, in addition to some of those questions, there might be questions

about counseling programs for youthful offenders to change their attitudes and to help ease their

transition back to civil society, drug and alcohol treatment, control of handgun sales, changing

community attitudes, etc.
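Even the narrower questions ultimately come down to comparing figures from before and after the solution was put in place. As a purely hypothetical sketch (the numbers below are invented), such a comparison might look like this:

```python
# Hypothetical before/after comparison of violent incidents involving youth.
incidents_before = 240   # comparable period before the new solution
incidents_after = 198    # period after the new solution was put in place

change = incidents_after - incidents_before
percent_change = change / incidents_before * 100
print(f"Change in incidents: {change} ({percent_change:+.1f}%)")
```

A real evaluation would also have to ask whether other factors (a change in population, in reporting practices, or in policing) could explain the difference.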

COLLECTING INFORMATION

This is the largest part, at least in time and effort, of implementing an evaluation.

Various evaluators, depending on the information needed, may conduct any or all of the

following:

 Research into census or other public records, as well as news archives, library

collections, the Internet, etc.

 Individual and/or group interviews

 Focus groups

 Community information-sharing sessions

 Surveys

 Direct or participant observation


In some cases - particularly with unschooled populations in developing countries - evaluators

may have to find creative ways to draw out information. In some cultures, maps, drawings,

representations ("If this rock is the headman's house..."), or even storytelling may be more

revealing than the answers to straightforward questions.

ANALYZING THE INFORMATION YOU'VE COLLECTED

Once you've collected all the information you need, the next step is to make sense of it. What do

the numbers mean? What do people's stories and opinions tell you about the project? Did you

carry out the process you'd planned? If not, did it make a difference, positive or negative? In

some cases, these questions are relatively easy to answer. If there were particular objectives for

serving people, or for beneficiaries' accomplishments, you can quickly find out whether they

were met or not. (We set out to serve 75 people, and we actually served 82. We anticipated that

50 would complete the program, and 61 actually completed.)
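
To make this concrete, a check like this amounts to a simple comparison of planned versus actual figures. Below is a minimal sketch in Python using the numbers above; the objective labels are invented for the example, not a required format.

# Minimal sketch: compare planned targets against actual results.
# Objective labels are illustrative; the numbers come from the example above.
objectives = {
    "people served": {"target": 75, "actual": 82},
    "program completions": {"target": 50, "actual": 61},
}

for name, figures in objectives.items():
    met = figures["actual"] >= figures["target"]
    print(f"{name}: target {figures['target']}, actual {figures['actual']}, "
          f"{'met' if met else 'not met'}")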

In other cases, it's much harder to tell what your information means. What if approximately half

of interviewees say the project was helpful to them, and the other half says the opposite? A result

like that may leave you doing some detective work. (Is there any ethnic, racial, geographic, or

cultural pattern as to who is positive and who is negative? Whom did each group work with?

Where did they experience the project, and how? Did members of each group have specific

things in common?)

While collecting the information requires the most work and time, analyzing it is perhaps the

most important step in conducting an evaluation. Your analysis tells you what you need to know

in order to improve your project, and also gives you the evidence you need to make a case for

continued funding and community support. It's important that it be done well, and that it makes

sense of odd results like that directly above. Here's where good training and good guidance in

using critical thinking and other techniques come in.

In general, information-gathering and analysis should cover the three areas we discussed early in

the section: process, implementation, and outcomes. The purpose here is both to provide

information for improving the project and to provide accountability to funders and the

community.

 Process. This concerns the logistics of the project. Was there good coordination and

communication? Was the planning process participatory? Was the original timeline for

each stage of the project - outreach, assessment, planning, implementation, evaluation -

realistic? Were you able to find or hire the right people? Did you find adequate funding

and other resources? Was the space appropriate? Did members of the planning and

evaluation teams work well together? Did the people responsible do what they were

expected to do? Did unexpected leaders emerge (in the planning group, for instance)?

 Implementation. Did you do what you set out to do - reach the number of people you

expected to, use the methods you intended, provide the amount and kind of service or

activity that you planned for? This part of the evaluation is not meant to assess

effectiveness, but only whether the project was carried out as planned - i.e., what you

actually did, rather than what you accomplished as a result. That comes next.

 Outcomes. What were the results of what you did? Did what you hoped for take place?

If it did, how do you know it was a result of what you did, as opposed to some other

factor(s)? Were there unexpected results? Were they negative or positive? Why did this

all happen?

USING THE INFORMATION TO CELEBRATE WHAT WORKED, AND TO ADJUST AND

IMPROVE THE PROJECT

While accountability is important - if the project has no effect at all, for example, it's just

wasted effort - the real thrust of a good evaluation is formative. That means it's meant to provide

information that can help to continue to form the project, reshape it to make it better. As a result,

the overall questions when looking at process, implementation, and outcomes are: What worked

well? What didn't? What changes would improve the project?

Answering these questions requires further analysis, but should allow you to improve the

project considerably. In addition to dropping or changing and adjusting those elements of the

project that didn't work well, don't neglect those that were successful. Nothing's perfect; even

effective approaches can be made better.

Don't forget to celebrate your successes. Celebration recognizes the hard work of everyone

involved, and the value of your effort. It creates community support, and strengthens the

commitment of those involved. Perhaps most important, it makes clear that people working

together can improve the quality of life in the community.

There's a final element to participatory research and evaluation that can't be ignored. Once

you've started a project and made it successful, you have to maintain it. The participatory

research and evaluation has to continue - perhaps not with the same team(s), but with teams

representative of all stakeholders. Conditions change, and projects have to adapt. Research into

those conditions and continued evaluation of your work will keep that work fresh and effective.

If your project is successful, you may think your work is done. Think again - community

problems are only solved as long as the solutions are actively practiced. The moment you turn

your back, the conditions you worked so hard to change can start to return to what existed before.

The work - supported by participatory research and evaluation - has to go on indefinitely to

maintain and increase the gains you've made.

IN SUMMARY

Participatory evaluation is a part of participatory research. It involves stakeholders in a

community project in setting evaluation criteria for it, collecting and analyzing data, and using

the information gained to adjust and improve the project.

Participatory process brings in the all-important multiple perspectives of those most directly

affected by the project, which are also most likely to be tied into community history and culture.

The information and insights they contribute can be crucial in a project's effectiveness. In

addition, their involvement encourages community buy-in, and can result in important gains in

skills, knowledge, and self-confidence and self-esteem for the researchers. All in all,

participatory evaluation creates a win-win situation.

Conducting a participatory evaluation involves several steps:

 Recruiting and training a stakeholder evaluation team

 Naming and framing the problem

 Developing a theory of practice to guide the process of the work

 Asking the right evaluation questions

 Collecting information

 Analyzing information

 Using the information to celebrate and adjust your work

The final step, as with so many of the community-building strategies and actions described in the

Community Tool Box, is to keep at it. Participatory research in general, and participatory

evaluation in particular, has to continue as long as the work continues, in order to keep track of

community needs and conditions, and to keep adjusting the project to make it more responsive

and effective. And the work often has to continue indefinitely in order to maintain progress and

avoid sliding back into the conditions or attitudes that made the project necessary in the first

place.

Chapter 9

WHY SHOULD YOU HAVE AN EVALUATION PLAN?

After many late nights of hard work, more planning meetings than you care to remember, and

many pots of coffee, your initiative has finally gotten off the ground. Congratulations! You have

every reason to be proud of yourself and you should probably take a bit of a breather to avoid

burnout. Don't rest on your laurels too long, though--your next step is to monitor the initiative's

progress. If your initiative is working perfectly in every way, you deserve the satisfaction of

knowing that. If adjustments need to be made to guarantee your success, you want to know about

them so you can jump right in there and keep your hard work from going to waste. And, in the

worst case scenario, you'll want to know if it's an utter failure so you can figure out the best way

to cut your losses. For these reasons, evaluation is extremely important.

There's so much information on evaluation out there that it's easy for community groups to fall

into the trap of just buying an evaluation handbook and following it to the letter. This might

seem like the best way to go about it at first glance-- evaluation is a huge topic and it can be

pretty intimidating. Unfortunately, if you resort to the "cookbook" approach to evaluation, you

might find yourself collecting a lot of data, analyzing it, and then just filing it

away, never to be seen or used again.

Instead, take a little time to think about what exactly you really want to know about the initiative.

Your evaluation system should address simple questions that are important to your community,

your staff, and (last but never least!) your funding partners. Try to think about financial and

practical considerations when asking yourself what sort of questions you want answered. The

best way to ensure that you have the most productive evaluation possible is to come up with an

evaluation plan.

HERE ARE A FEW REASONS WHY YOU SHOULD DEVELOP AN EVALUATION PLAN:

 It guides you through each step of the process of evaluation

 It helps you decide what sort of information you and your stakeholders really need

 It keeps you from wasting time gathering information that isn't needed

 It helps you identify the best possible methods and strategies for getting the needed

information

 It helps you come up with a reasonable and realistic timeline for evaluation

 Most importantly, it will help you improve your initiative!

WHEN SHOULD YOU DEVELOP AN EVALUATION PLAN?

As soon as possible! The best time to do this is before you implement the initiative. After that,

you can do it anytime, but the earlier you develop it and begin to implement it, the better off your

initiative will be, and the greater the outcomes will be at the end.

Remember, evaluation is more than just finding out if you did your job. It is important to use

evaluation data to improve the initiative along the way.

WHAT ARE THE DIFFERENT TYPES OF STAKEHOLDERS AND WHAT ARE THEIR

INTERESTS IN YOUR EVALUATION?

We'd all like to think that everyone is as interested in our initiative or project as we are, but

unfortunately that isn't the case. For community health groups, there are basically three groups of

people who might be identified as stakeholders (those who are interested, involved, and invested

in the project or initiative in some way): community groups, grant makers/funders, and

university-based researchers. Take some time to make a list of your project or initiative's

stakeholders, as well as which category they fall into.

WHAT ARE THE TYPES OF STAKEHOLDERS?

 Community groups: Hey, that's you! Perhaps this is the most obvious category of

stakeholders, because it includes the staff and/or volunteers involved in your initiative or

project. It also includes the people directly affected by it--your targets and agents of

change.

 Grant makers and funders: Don't forget the folks that pay the bills! Most grant makers

and funders want to know how their money's being spent, so you'll find that they often

have specific requirements about things they want you to evaluate. Check out all your

current funders to see what kind of information they want you to be gathering. Better yet,

find out what sort of information you'll need to have for any future grants you're

considering applying for. It can't hurt!

 University-based researchers: This includes researchers and evaluators that your

coalition or initiative may choose to bring in as consultants or full partners. Such

researchers might be specialists in public health promotion, epidemiologists, behavioral

scientists, specialists in evaluation, or some other academic field. Of course, not all

community groups will work with university-based researchers on their projects, but if

you choose to do so, they should have their own concerns, ideas, and questions for the

evaluation. If you can't quite understand why you'd include these folks in your evaluation

process, try thinking of them as auto mechanics--if you want them to help you make your

car run better, you will of course include them in the diagnostic process. If you went to a

mechanic and started ordering him around about how to fix your car without letting him

check it out first, he'd probably get pretty annoyed with you. Same thing with your

researchers and evaluators: it's important to include them in the evaluation development

process if you really want them to help improve your initiative.

Each type of stakeholder will have a different perspective on your organization as well as what

they want to learn from the evaluation. Every group is unique, and you may find that there are

other sorts of stakeholders to consider with your own organization. Take some time to

brainstorm about who your stakeholders are before you make your evaluation plan.

WHAT DO THEY WANT TO KNOW ABOUT THE EVALUATION?

While some information from the evaluation will be of use to all three groups of stakeholders,

some will be needed by only one or two of the groups. Grant makers and funders, for example,

will usually want to know how many people were reached and served by the initiative, as well as

whether the initiative had the community-level impact it intended to have. Community groups

may want to use evaluation results to guide them in decisions about their programs, and where

they are putting their efforts. University-based researchers will most likely be interested in

determining whether any improvements in community health were actually caused by your

programs or initiatives; they may also want to study the overall structure of your group or

initiative to identify the conditions under which success may be reached.

WHAT DECISIONS DO THEY NEED TO MAKE, AND HOW WOULD THEY USE THE

DATA TO INFORM THOSE DECISIONS?

You and your stakeholders will probably be making decisions that affect your program or

initiative based on the results of your evaluation, so you need to consider what those decisions

will be. Your evaluation should yield honest and accurate information for you and your

stakeholders; you'll need to be careful not to structure it in such a way that it exaggerates your

success, and you'll need to be really careful not to structure it in such a way that it downplays

your success!

Consider what sort of decisions you and your stakeholders will be making. Community groups

will probably want to use the evaluation results to help them find ways to modify and improve

your program or initiative. Grant makers and funders will most likely be making decisions about

how much funding to give you in the future, or even whether to continue funding your program

at all (or any related programs). They may also think about whether to impose any requirements

on you as a condition of that funding (e.g., a grant maker tells you that your program may have its funding

decreased unless you show an increase of services in a given area). University-based researchers

will need to decide how they can best assist with plan development and data reporting.

You'll also want to consider how you and your stakeholders plan to balance costs and benefits.

Evaluation should take up about 10-15% of your total budget. That may sound like a lot, but

remember that evaluation is an essential tool for improving your initiative. When considering

how to balance costs and benefits, ask yourself the following questions:

• What do you need to know?

• What is required by the community?

• What is required by funding?

HOW DO YOU DEVELOP AN EVALUATION PLAN?

THERE ARE FOUR MAIN STEPS TO DEVELOPING AN EVALUATION PLAN:

• Clarifying program objectives and goals

• Developing evaluation questions

• Developing evaluation methods

• Setting up a timeline for evaluation activities

CLARIFYING PROGRAM OBJECTIVES AND GOALS

The first step is to clarify the objectives and goals of your initiative. What are the main things

you want to accomplish, and how have you set out to accomplish them? Clarifying these will

help you identify which major program components should be evaluated. One way to do this is

to make a table of program components and elements.

DEVELOPING EVALUATION QUESTIONS

For our purposes, there are four main categories of evaluation questions. Let's look at some

examples of possible questions and suggested methods to answer those questions. Later on, we'll

tell you a bit more about what these methods are and how they work.

 Planning and implementation issues: How well was the program or initiative planned

out, and how well was that plan put into practice?

o Possible questions: Who participates? Is there diversity among participants? Why

do participants enter and leave your programs? Are there a variety of services and

alternative activities generated? Do those most in need of help receive services?

Are community members satisfied that the program meets local needs?

o Possible methods to answer those questions: monitoring system that tracks

actions and accomplishments related to bringing about the mission of the

initiative, member survey of satisfaction with goals, member survey of

satisfaction with outcomes.

 Assessing attainment of objectives: How well has the program or initiative met its

stated objectives?

o Possible questions: How many people participate? How many hours are

participants involved?

o Possible methods to answer those questions: monitoring system (see above),

member survey of satisfaction with outcomes, goal attainment scaling.

 Impact on participants: How much and what kind of a difference has the program or

initiative made for its targets of change?

o Possible questions: How has behavior changed as a result of participation in the

program? Are participants satisfied with the experience? Were there any negative

results from participation in the program?

o Possible methods to answer those questions: member survey of satisfaction with

goals, member survey of satisfaction with outcomes, behavioral surveys,

interviews with key participants.

 Impact on the community: How much and what kind of a difference has the program or

initiative made on the community as a whole?

o Possible questions: What resulted from the program? Were there any negative

results from the program? Do the benefits of the program outweigh the costs?

o Possible methods to answer those questions: Behavioral surveys, interviews with

key informants, community-level indicators.

DEVELOPING EVALUATION METHODS

Once you've come up with the questions you want to answer in your evaluation, the next step is

to decide which methods will best address those questions. Here is a brief overview of some

common evaluation methods and what they work best for.

Monitoring and feedback system

This method of evaluation has three main elements:

• Process measures: these tell you about what you did to implement your initiative;

• Outcome measures: these tell you about what the results were; and

• Observational system: this is whatever you do to keep track of the initiative while it's

happening.
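
As a rough illustration of how these three elements might be kept together, here is a minimal sketch in Python. The field names and sample entries are assumptions made for the example, not a prescribed format.

# Sketch of a simple monitoring and feedback log.
# Field names and sample entries are illustrative assumptions.
from datetime import date

monitoring_log = []

def record(kind, description, value=None, when=None):
    """Add one entry; kind is 'process', 'outcome', or 'observation'."""
    monitoring_log.append({
        "kind": kind,
        "description": description,
        "value": value,
        "date": when or date.today(),
    })

record("process", "Outreach sessions held this month", 4)
record("outcome", "Participants completing the program", 61)
record("observation", "Attendance dipped during school holidays")

# Simple feedback: summarize what has been logged so far.
for kind in ("process", "outcome", "observation"):
    entries = [e for e in monitoring_log if e["kind"] == kind]
    print(f"{kind}: {len(entries)} entries")

Even a spreadsheet with the same three kinds of entries would serve; the point is that process measures, outcomes, and ongoing observations are recorded in one place and reviewed together.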

Member surveys about the initiative

When Ed Koch was mayor of New York City, his trademark call of "How am I doing?" was

known all over the country. It might seem like an overly simple approach, but sometimes the best

thing you can do to find out if you're doing a good job is to ask your members. This is best done

through member surveys. There are three kinds of member surveys you're most likely to need to

use at some point:

• Member survey of goals: done before the initiative begins - how do your members think

you're going to do?

• Member survey of process: done during the initiative - how are you doing so far?

• Member survey of outcomes: done after the initiative is finished - how did you do?

Goal attainment report

If you want to know whether your proposed community changes were truly accomplished-- and

we assume you do--your best bet may be to do a goal attainment report. Have your staff keep

track of the date each time a community change mentioned in your action plan takes place. Later

on, someone compiles this information (e.g., "Of our five goals, three were accomplished by the

end of 1997.")

Behavioral surveys

Behavioral surveys help you find out what sort of risk behaviors people are taking part in and the

level to which they're doing so. For example, if your coalition is working on an initiative to

reduce car accidents in your area, one risk behavior to survey would be drunk driving.
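
Once the survey responses are in, summarizing the prevalence of a risk behavior is straightforward. The sketch below uses fabricated yes/no answers purely for illustration.

# Sketch: summarize one behavioral survey item on drunk driving.
# Responses are fabricated answers to "Have you driven after
# drinking in the past 30 days?"
responses = ["no", "no", "yes", "no", "yes", "no", "no", "no", "yes", "no"]

rate = responses.count("yes") / len(responses)
print(f"{rate:.0%} of respondents reported driving after drinking "
      "in the past 30 days.")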

Interviews with key participants

Key participants - leaders in your community, people on your staff, etc. - have insights that you

can really make use of. Interviewing them to get their viewpoints on critical points in the history

of your initiative can help you learn more about the quality of your initiative, identify factors that

affected the success or failure of certain events, provide you with a history of your initiative, and

give you insight which you can use in planning and renewal efforts.

Community-level indicators of impact

These are tried-and-true markers that help you assess the ultimate outcome of your initiative.

For substance abuse coalitions, for example, the U.S. Center for Substance Abuse Prevention

(CSAP) and the Regional Drug Initiative in Oregon recommend several proven indicators (e.g.,

single-vehicle nighttime car crashes, emergency transports related to alcohol) which help coalitions

figure out the extent of substance abuse in their communities. Studying community-level

indicators helps you provide solid evidence of the effectiveness of your initiative and determine

how successful key components have been.
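
In practice, tracking a community-level indicator usually means comparing its level before and during (or after) the initiative. The sketch below uses invented yearly counts of alcohol-related emergency transports; the figures are placeholders, not real data.

# Sketch: compare a community-level indicator across years.
# All counts are invented for illustration.
transports = {2015: 118, 2016: 124, 2017: 97, 2018: 89}

baseline_year = min(transports)
latest_year = max(transports)
change = (transports[latest_year] - transports[baseline_year]) / transports[baseline_year]
print(f"Change in alcohol-related emergency transports, "
      f"{baseline_year} to {latest_year}: {change:+.0%}")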

SETTING UP A TIMELINE FOR EVALUATION ACTIVITIES

When does evaluation need to begin?

Right now! Or at least at the beginning of the initiative! Evaluation isn't something you should

wait to think about until after everything else has been done. To get an accurate, clear picture of

what your group has been doing and how well you've been doing it, it's important to start paying

attention to evaluation from the very start. If you're already part of the way into your initiative,

however, don't scrap the idea of evaluation altogether--even if you start late, you can still

gather information that could prove very useful to you in improving your initiative.

Outline questions for each stage of development of the initiative

We suggest completing a table listing:

 Key evaluation questions (the four categories listed above, with more specific questions

within each category)

 Type of evaluation measures to be used to answer them (i.e., what kind of data you will

need to answer the question)

 Type of data collection (i.e., what evaluation methods you will use to collect this data)

 Experimental design (A way of ruling out threats to the validity - e.g., believability - of

your data. This would include comparing the information you collect to a similar group

that is not doing things exactly the way you are doing things.)

With this table, you can get a good overview of what sort of things you'll have to do in order to

get the information you need.
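
One way to draft the table before formatting it for a report is simply to keep one row per question, with the four columns listed above. The entries below are placeholders drawn from the example questions earlier in this chapter, not required wording.

# Sketch of the evaluation-planning table, one row per question.
# Column names follow the list above; entries are placeholders.
plan_rows = [
    {
        "question": "Do those most in need of help receive services?",
        "measures": "Participation counts broken down by target group",
        "data_collection": "Monitoring system; member survey",
        "design": "Comparison with a similar community without the program",
    },
    {
        "question": "How has behavior changed as a result of participation?",
        "measures": "Self-reported risk behaviors",
        "data_collection": "Behavioral surveys; interviews with key participants",
        "design": "Pre/post comparison of participants and non-participants",
    },
]

for row in plan_rows:
    for column, entry in row.items():
        print(f"{column}: {entry}")
    print()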

When do feedback and reports need to be provided?

Whenever you feel it's appropriate. Of course, you will provide feedback and reports at the end

of the evaluation, but you should also provide periodic feedback and reports throughout the

duration of the project or initiative. In particular, since you should provide feedback and reports

at meetings of your steering committee or overall coalition, find out ahead of time how often

they'd like updates. Funding partners will want to know how the evaluation is going as well.

When should evaluation end?

Shortly after the end of the project - usually when the final report is due. Don't wait too long after

the project has been completed to finish up your evaluation - it's best to do this while everything

is still fresh in your mind and you can still get access to any information you might need.

WHAT SORT OF PRODUCTS SHOULD YOU EXPECT TO GET OUT OF THE

EVALUATION?

The main product you'll want to come up with is a report that you can share with everyone

involved. What should this report include?

 Effects expected by stakeholders: Find out what key people want to know. Be sure to

address any information that you know they're going to want to hear about!

 Differences in the behaviors of key individuals: Find out how your coalition's efforts have

changed the behaviors of your targets and agents of change. Have any of your strategies

caused people to cut down on risky behaviors, or increase behaviors that protect them

from risk? Are key people in the community cooperating with your efforts?

 Differences in conditions in the community: Find out what has changed. Is the public

aware of your coalition or group's efforts? Do they support you? What steps are they

taking to help you achieve your goals? Have your efforts caused any changes in local

laws or practices?

You'll probably also include specific tools (i.e., brief reports summarizing data), annual reports,

quarterly or monthly reports from the monitoring system, and anything else that is mutually

agreed upon between the organization and the evaluation team.

WHAT SORT OF STANDARDS SHOULD YOU FOLLOW?

Now that you've decided you're going to do an evaluation and have begun working on your plan,

you've probably also had some questions about how to ensure that the evaluation will be as fair,

accurate, and effective as possible. After all, evaluation is a big task, so you want to get it right.

What standards should you use to make sure you do the best possible evaluation? In 1994, the

Joint Committee on Standards for Educational Evaluation issued a list of program evaluation

standards that are widely used to regulate evaluations of educational and public health programs.

The standards the committee outlined are for utility, feasibility, propriety, and accuracy.

Consider using evaluation standards to make sure you do the best evaluation possible for your

initiative.

ASSIGNMENTS:

1. What are the qualities of a good indicator? Give an example

2. As part of the Millennium Development Goals (MDGs), Universal education is a

right for all children. Different governments have implemented free primary

education in order to achieve this goal. With example from your country please

explain the following:

a) Critically evaluate the implementation programme of free primary

education for the first 2 years

b) Analyze the unintended outcomes of free primary education on job

creation within the same period

c) What would the monitoring exercise in free primary education wish to

achieve for the following stakeholders?

• Donors

• Primary School managers

• Government

3. You have been contracted by UNICEF to act as a consultant on a project (a joint

partnership between UNICEF and the Ministry of Gender and Children), a program that

gives direct funds to families caring for orphaned children, and to plan a monitoring

system for it.

a) What are the advantages of participatory evaluation methods?

b) Formulate the steps in planning a monitoring system.
