Monitoring and Evaluation
OVERVIEW
Brief description

This toolkit deals with the nuts and bolts (the basics) of setting up and using a monitoring and evaluation system for a project or an organisation. It clarifies what monitoring and evaluation are, how you plan to do them, and how you design a system that helps you monitor and an evaluation process that brings it all together usefully. It looks at how you collect the information you need and then how you save yourself from drowning in data by analysing the information in a relatively straightforward way. Finally, it raises, and attempts to address, some of the issues to do with taking action on the basis of what you have learned.

Why have a detailed toolkit on monitoring and evaluation?

If you don't care about how well you are doing or about what impact you are having, why bother to do the work at all? Monitoring and evaluation enable you to assess the quality and impact of your work against your action plans and your strategic plan. For monitoring and evaluation to be really valuable, you do need to have planned well. Planning is dealt with in detail in other toolkits on this website.

Who should use this toolkit?

This toolkit should be useful to anyone working in an organisation or project who is concerned about the efficiency, effectiveness and impact of the work of the project or organisation.

When will this toolkit be useful?

This toolkit will be useful when:
- You are setting up systems for data collection during the planning phases of a project or organisation;
- You want to analyse data collected through the monitoring process;
- You are concerned about how efficiently and how effectively you are working;
- You reach a stage in your project, or in the life of your organisation, when you think it would be useful to evaluate what impact the work is having;
- Donors ask for an external evaluation of your organisation and/or work.

Although there is a tendency in civil society organisations to see an evaluation as something that happens when a donor insists on it, monitoring and evaluation are in fact invaluable internal management tools. If you don't assess how well you are doing against targets and indicators, you may go on using resources to no useful end, without changing the situation you have identified as a problem at all. Monitoring and evaluation enable you to make that assessment.
CONTENTS

OVERVIEW
BASIC PRINCIPLES
  More about monitoring and evaluation: what is involved, and different approaches
  Monitoring
  Evaluation
  Methods
  Analysing information
  Taking action: reporting, learning, effective decision-making, dealing with resistance
BEST PRACTICE
RESOURCES
Examples
Efficiency tells you that the input into the work is appropriate in terms of the output. This could be input in terms of money, time, staff, equipment and so on. When you run a project and are concerned about its replicability or about going to scale (see Glossary of Terms), then it is very important to get the efficiency element right.

Effectiveness is a measure of the extent to which a development programme or project achieves the specific objectives it set. If, for example, we set out to improve the qualifications of all the high school teachers in a particular area, did we succeed?

Impact tells you whether or not what you did made a difference to the problem situation you were trying to address. In other words, was your strategy useful? Did ensuring that teachers were better qualified improve the pass rate in the final year of school? Before you decide to get bigger, or to replicate the project elsewhere, you need to be sure that what you are doing makes sense in terms of the impact you want to achieve.

From this it should be clear that monitoring and evaluation are best done when there has been proper planning against which to assess progress and achievements. There are three planning toolkits on this website that can help here: the overview of planning, strategic planning and action planning.
In many organisations, monitoring and evaluation is seen as a donor requirement rather than a management tool. Donors are certainly entitled to know whether their money is being properly spent, and whether it is being well spent. But the primary (most important) use of monitoring and evaluation should be for the organisation or project itself to see how it is doing against objectives, whether it is having an impact, whether it is working efficiently, and to learn how to do it better.

Plans are essential, but they are not set in concrete (totally fixed). If they are not working, or if the circumstances change, then plans need to change too. Monitoring and evaluation are both tools which help a project or organisation know when plans are not working and when circumstances have changed. They give management the information it needs to make decisions about the project or organisation, and about changes that are necessary in strategy or plans. Through all of this, the constants remain the pillars of the strategic framework: the problem analysis, the vision and the values of the project or organisation. Everything else is negotiable. (See also the toolkit on strategic planning.)

Getting something wrong is not a crime. Failing to learn from past mistakes because you are not monitoring and evaluating is. The effect of monitoring and evaluation can be seen in the following cycle. Note that you will monitor and adjust several times before you are ready to evaluate and replan.
[Diagram: a cycle of Implement → Monitor → Reflect/learn, decide/adjust → Implement again, repeated until it is time to evaluate and replan.]
It is important to recognise that monitoring and evaluation are not magic wands that can be waved to make problems disappear, or to cure them, or to miraculously make changes without a lot of hard work being put in by the project or organisation. In themselves, they are not a solution, but they are valuable tools. Monitoring and evaluation can:
- Help you identify problems and their causes;
- Suggest possible solutions to problems;
- Raise questions about assumptions and strategy;
- Push you to reflect on where you are going and how you are getting there;
- Provide you with information and insight;
- Encourage you to act on the information and insight;
- Increase the likelihood that you will make a positive development difference.
Monitoring is an internal function in any project or organisation. Evaluation involves:
- Looking at what the project or organisation intended to achieve: what difference did it want to make? What impact did it want to make?
- Assessing its progress towards what it wanted to achieve, its impact targets.
- Looking at the strategy of the project or organisation. Did it have a strategy? Was it effective in following its strategy? Did the strategy work? If not, why not?
- Looking at how it worked. Was there an efficient use of resources? What were the opportunity costs (see Glossary of Terms) of the way it chose to work? How sustainable is the way in which the project or organisation works? What are the implications for the various stakeholders in the way the organisation works?
In an evaluation, we look at efficiency, effectiveness and impact (see Glossary of Terms). There are many different ways of doing an evaluation. Some of the more common terms you may have come across are:

Self-evaluation: This involves an organisation or project holding up a mirror to itself and assessing how it is doing, as a way of learning and improving practice. It takes a very self-reflective and honest organisation to do this effectively, but it can be an important learning experience.

Participatory evaluation: This is a form of internal evaluation. The intention is to involve as many people with a direct stake in the work as possible. This may mean project staff and beneficiaries working together on the evaluation. If an outsider is called in, it is to act as a facilitator of the process, not an evaluator.

Rapid Participatory Appraisal: Originally used in rural areas, the same methodology can, in fact, be applied in most communities. This is a qualitative (see Glossary of Terms) way of doing evaluations. It is semi-structured and carried out by an interdisciplinary team over a short time. It is used as a starting point for understanding a local situation and is a quick, cheap, useful way to gather information. It involves the use of secondary (see Glossary of Terms) data review, direct observation, semi-structured interviews, key informants, group interviews, games, diagrams, maps and calendars. In an evaluation context, it allows one to get valuable input from those who are supposed to be benefiting from the development work. It is flexible and interactive.
For more on the advantages and disadvantages of external and internal evaluations, see below. For more on selecting an external evaluator, and on different approaches to evaluation, see the sections that follow.
[Comparison table: advantages and disadvantages of internal evaluation versus external evaluation (done by a team or person with no vested interest in the project).]
If you decide to go for external evaluation, you will find below some ideas for criteria to use in choosing an external evaluator.
- An understanding of development issues.
- An understanding of organisational issues.
- Experience in evaluating development projects, programmes or organisations.
- A good track record with previous clients.
- Research skills.
- A commitment to quality.
- A commitment to deadlines.
- Objectivity, honesty and fairness.
- Logic and the ability to operate systematically.
- The ability to communicate verbally and in writing.
- A style and approach that fits with your organisation.
- Values that are compatible with those of the organisation.
- Reasonable rates (fees), measured against the going rates.
How do you find all this out? By asking lots of questions! When you decide to use an external evaluator:
- Check his/her/their references.
- Meet with the evaluators before making a final decision.
- Communicate what you want clearly: good Terms of Reference (see Glossary of Terms) are the foundation of a good contractual relationship.
- Negotiate a contract which makes provision for what will happen if time frames and output expectations are not met.
- Ask for a workplan with outputs and timelines.
- Maintain contact: ask for interim reports, either verbal or written, as part of the contract.
- Build in formal feedback times.
Do not expect any evaluator to be completely objective. S/he will have opinions and ideas; you are not looking for someone who is a blank page! However, his/her opinions must be clearly stated as such, and must not be disguised as facts. It is also useful to have some idea of his/her (or their) approach to evaluation. Some common approaches are summarised below.
Common approaches to evaluation include: decision-making approaches, focused on providing information for decisions; goal-free evaluation; and expert judgement, which relies on the use of expertise.
Our feeling is that the best evaluators use a combination of all these approaches, and that an organisation can ask for a particular emphasis, but should not exclude findings that make use of a different approach. (Thanks to PACT's Evaluation Sourcebook, 1984, for much of this.)
There is not one set way of planning for monitoring and evaluation. The ideas included in the toolkits on overview of planning, strategic planning and action planning will help you to develop a useful framework for your monitoring and evaluation system. If you are familiar with logical framework analysis and already use it in your planning, this approach lends itself well to planning a monitoring and evaluation system. (See also in the toolkit on overview of planning, the section on planning tools overview, LFA.)
So, the first thing we need to know is: is what we are doing, and how we are doing it, meeting the requirements of these values? In order to answer this question, our monitoring and evaluation system must give us information about:
- Who is benefiting from what we do? How much are they benefiting?
- Are beneficiaries passive recipients, or does the process enable them to have some control over their lives?
- Are there lessons in what we are doing that have a broader impact than just what is happening on our project?
- Can what we are doing be sustained in some way for the long term, or will the impact of our work cease when we leave?
- Are we getting optimum outputs for the least possible amount of inputs?
Do we want to know about the process or the product? Should development work be evaluated in terms of the process (the way in which the work is done) or the product (what the work produces)? Often, this debate is more about excusing inadequate performance than it is about a real issue. Process and product are not separate in development work. What we achieve and how we achieve it are often the very same thing. If the goal is development, based on development values, then sinking a well without the transfer of skills for maintaining and managing the well is not enough. Saying, "It was taking too long that way. We couldn't wait for them to sort themselves out. We said we'd sink a well and we did", is not enough. But neither is, "It doesn't matter that the well hasn't happened yet. What's important is that the people have been empowered." Both process and product should be part of your monitoring and evaluation system. But how do we make process and product and values measurable? The answer lies in the setting of indicators, and this is dealt with in the sub-section that follows.
But you need to decide early on what your indicators are going to be, so that you can begin collecting the information immediately. You cannot use the number of television aerials in a community as a sign of improved standard of living if you don't know how many there were at the beginning of the process. Some people argue that the problem with measuring indicators is that other variables (or factors) may have impacted on them as well. Community members may be participating more in meetings because a number of new people with activist backgrounds have come to live in the area. Women may have more time for development projects because the men of the village have been attending a gender workshop and have made a decision to share the traditionally female tasks. And so on. While this may be true, within a project it is possible to identify other variables and take them into account. It is also important to note that, if nothing is changing, if there is no improvement in the measurement of the key indicators identified, then your strategy is not working and needs to be rethought. To see a method for developing indicators, read on below. To see examples of indicators, go to the examples section.
Step 1: Identify the problem situation you are trying to address. The following might be problems:
- Economic situation (unemployment, low incomes etc.)
- Social situation (housing, health, education etc.)
- Cultural or religious situation (not using traditional languages, low attendance at religious services etc.)
- Political or organisational situation (ineffective local government, faction fighting etc.)
There will be other situations as well. (See the section on problem analysis in the toolkit on overview of planning, in the section on doing the groundwork.)

Step 2: Develop a vision for how you would like the problem areas to be/look. (See the toolkit on strategic planning, the section on vision.) This will give you impact indicators. What will tell you that the vision has been achieved? What signs will you see, that you can measure, that will prove that the vision has been achieved? For example, if your vision was that the people in your community would be healthy, then you can use health indicators to measure how well you are doing. Has the infant mortality rate gone down? Do fewer women die during childbirth? Has the HIV/AIDS infection rate been reduced? If you can answer "yes" to these questions, then progress is being made.

Step 3: Develop a process vision for how you want things to be achieved. This will give you process indicators. If, for example, you want success to be achieved through community efforts and participation, then your process vision might include things like: community health workers from the community trained and offering a competent service used by all; the community organising clean-up events on a regular basis; and so on.

Step 4: Develop indicators for effectiveness. For example, if you believe that you can increase the secondary school pass rate by upgrading teachers, then you need indicators that show you have been effective in upgrading the teachers, e.g. evidence from a survey in the schools, compared with a baseline survey.

Step 5: Develop indicators for your efficiency targets. Here you can set indicators such as: planned workshops are run within the stated timeframe; costs for workshops are kept to a maximum of US$ 2.50 per participant; no more than 160 hours in total of staff time is spent on organising a conference; no complaints about conference organisation; and so on.

With this framework in place, you are in a position to monitor and evaluate efficiency, effectiveness and impact (see Glossary of Terms).
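To make these steps concrete, here is a minimal sketch in Python of how such an indicator framework might be recorded so that it can be tracked over time. All indicator names, baselines and targets below are invented examples, not prescriptions from this toolkit:

```python
# A minimal sketch of an indicator framework. The indicators, baselines
# and targets are invented examples; real ones would come from your own
# vision (Step 2), process vision (Step 3) and targets (Steps 4 and 5).

indicators = {
    "impact": [
        {"name": "infant mortality rate (per 1 000 live births)",
         "baseline": 45, "target": 30},
    ],
    "process": [
        {"name": "community clean-up events per quarter",
         "baseline": 0, "target": 4},
    ],
    "effectiveness": [
        {"name": "% of teachers upgraded (school survey vs baseline survey)",
         "baseline": 0, "target": 80},
    ],
    "efficiency": [
        {"name": "workshop cost per participant (US$)",
         "baseline": 4.00, "target": 2.50},
    ],
}

def report(current_values):
    """Print each measured indicator against its baseline and target."""
    for category, items in indicators.items():
        for ind in items:
            current = current_values.get(ind["name"])
            if current is None:
                continue  # not yet measured
            print(f"{category}: {ind['name']}: "
                  f"baseline {ind['baseline']}, now {current}, "
                  f"target {ind['target']}")

# Example: after a year of monitoring (invented measurement).
report({"infant mortality rate (per 1 000 live births)": 38})
```

Keeping the baseline, current and target values for each indicator together in one place makes the later monitoring, analysis and reporting steps much easier.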
Quantitative measurement tells you "how much" or "how many": how many people attended a workshop, how many people passed their final examinations, how much a publication cost, how many people were infected with HIV, how far people have to walk to get water or firewood, and so on. Quantitative measurement can be expressed in absolute numbers (3 241 women in the sample are infected) or as a percentage (50% of households in the area have television aerials). It can also be expressed as a ratio (one doctor for every 30 000 people). One way or another, you get quantitative (number) information by counting or measuring.

Qualitative measurement tells you how people feel about a situation, or about how things are done, or how people behave. So, for example, although you might discover that 50% of the teachers in a school are unhappy about the assessment criteria used, this is still qualitative information, not quantitative information. You get qualitative information by asking, observing and interpreting.

Some people find quantitative information comforting: it seems solid, reliable and "objective". They find qualitative information unconvincing and "subjective". It is a mistake to say that quantitative information "speaks for itself". It requires just as much interpretation in order to make it meaningful as does qualitative information. It may be a fact that enrolment of girls at schools in some developing countries is dropping; counting can tell us that, but it tells us nothing about why this drop is taking place. In order to know that, you would need to go out and ask questions to get qualitative information. The choice of indicators is also subjective, whether you use quantitative or qualitative methods to do the actual measuring. Researchers choose to measure school enrolment figures for girls because they believe that this tells them something about how women in a society are treated or viewed.

The monitoring and evaluation process requires a combination of quantitative and qualitative information in order to be comprehensive. For example, we need to know what the school enrolment figures for girls are, as well as why parents do or do not send their children to school. Perhaps enrolment figures are higher for girls than for boys because a particular community sees schooling as a luxury and prefers to train boys to do traditional and practical tasks such as taking care of animals. In this case, the higher enrolment of girls does not necessarily indicate a higher regard for girls.
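As a small illustration of the three ways of expressing quantitative information mentioned above (absolute numbers, percentages and ratios), here is a sketch in Python; all figures are invented:

```python
# Expressing the same kinds of quantitative information three ways.
# All figures below are invented for illustration.

households_with_aerials = 412
total_households = 824
doctors = 5
population = 150_000

absolute = households_with_aerials                             # absolute number
percentage = 100 * households_with_aerials / total_households  # 50.0%
people_per_doctor = population // doctors                      # 1 : 30 000 ratio

print(f"{absolute} households have television aerials")
print(f"{percentage:.0f}% of households have television aerials")
print(f"one doctor for every {people_per_doctor} people")
```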
In order to maximise their efforts, the project or organisation needs to:
- Prepare reporting formats that include measurement, either quantitative or qualitative, of important indicators. For example, if you want to know about community participation in activities, or women's participation specifically, structure the fieldworker's reporting format so that s/he has to comment on this, backing up observations with facts. (Look at the fieldworker report format given later in this toolkit.)
- Prepare recording formats that include measurement, either quantitative or qualitative, of important indicators. For example, if you want to know how many men and how many women attended a meeting, include a gender column on your attendance list.
- Record information in such a way that it is possible to work out what you need to know. For example, if you need to know whether a project is sustainable financially, and which elements of it cost the most, then make sure that your bookkeeping records reflect the relevant information.
It is a useful principle to look at every activity and say: What do we need to know about this activity, both process (how it is being done) and product (what it is meant to achieve), and what is the easiest way to find it out and record it as we go along?
Below is a step-by-step process you could use in order to design a monitoring system for your organisation or project. For a case study of how an organisation went about designing a monitoring system, see the examples section.

Step 1: At a workshop with appropriate staff and/or volunteers, run by you or a consultant:
- Introduce the concepts of efficiency, effectiveness and impact (see Glossary of Terms).
- Explain that a monitoring system needs to cover all three.
- Generate a list of indicators for each of the three aspects.
- Clarify what variables (see Glossary of Terms) need to be linked. So, for example, do you want to be able to link the age of a teacher with his/her qualifications, in order to answer the question: are older teachers more or less likely to have higher qualifications?
- Clarify what information the project or organisation is already collecting.
Step 2: Turn the input from the workshop into a brief for the questions your monitoring system must be able to answer.

Depending on how complex your requirements are, and what your capacity is, you may decide to go for a computerised database or a manual one. If you want to be able to link many variables across many cases (e.g. participants, schools, parent involvement, resources, urban/rural etc.), you may need to go the computer route. If you have only a few variables, you can probably do it manually. The important thing is to begin by knowing what variables you are interested in, and to keep data on these variables. Linking and analysis can take place later. (These concepts are complicated. It will help you to read the case study in the examples section of the toolkit.)

From the workshop you will know what you want to monitor. You will have the indicators of efficiency, effectiveness and impact that have been prioritised. You will then choose the variables that will help you answer the questions you think are important. So, for example, you might have an impact indicator that safer sex options are chosen, as a sign that young people are now making informed and mature lifestyle choices. The variables that might affect the indicator include:
- Age
- Gender
- Religion
- Urban/rural
- Economic category
- Family environment
- Length of exposure to your project's initiative
- Number of workshops attended.
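As an illustration of what "keeping data on these variables" might look like in practice, here is a minimal sketch in Python. The field names, records and the exposure threshold are invented for illustration; a real system, computerised or manual, would use the variables you prioritised at the workshop:

```python
# Each monitoring record keeps the indicator measurement alongside the
# variables of interest, so that links can be analysed later.
# All field names and data below are invented examples.

records = [
    {"age": 17, "gender": "F", "urban": True,  "workshops_attended": 3, "safer_sex_choice": True},
    {"age": 19, "gender": "M", "urban": False, "workshops_attended": 1, "safer_sex_choice": False},
    {"age": 18, "gender": "F", "urban": False, "workshops_attended": 4, "safer_sex_choice": True},
    {"age": 17, "gender": "M", "urban": True,  "workshops_attended": 2, "safer_sex_choice": False},
]

def indicator_rate(subset):
    """Share of records in the subset where safer sex options were chosen."""
    return sum(r["safer_sex_choice"] for r in subset) / len(subset) if subset else 0.0

# Link the indicator to one variable: length of exposure to the project,
# here approximated by the number of workshops attended.
high_exposure = [r for r in records if r["workshops_attended"] >= 3]
low_exposure = [r for r in records if r["workshops_attended"] < 3]

print(f"high exposure: {indicator_rate(high_exposure):.0%}")
print(f"low exposure:  {indicator_rate(low_exposure):.0%}")
```

The same filtering could be done against any other variable (age, gender, urban/rural and so on), which is exactly the kind of linking the workshop brief should specify.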
Answers to these kinds of questions enable a project or organisation to make decisions about what they do and how they do it, to make informed changes to programmes, and to measure their impact and effectiveness. Answers to questions such as:
- Do more people attend sessions that are organised well in advance?
- Do more schools participate when there is no charge?
- Do more young people attend when sessions are over weekends or in the evenings?
- Does it cost less to run a workshop in the community, or to bring people to our training centre to run the workshop?
enable the project or organisation to measure and improve their efficiency.

Step 3: Decide how you will collect the information you need (see the section on collecting information) and where it will be kept (on computer, in manual files).

Step 4: Decide how often you will analyse the information. This means putting it together and trying to answer the questions you think are important.

Step 5: Collect, analyse, report.
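To make Step 5 concrete, here is a sketch in Python that answers one of the efficiency questions above (does it cost less to run a workshop in the community or at the training centre?) from routinely recorded data. The venue names and figures are invented:

```python
# Comparing average cost per participant by venue, using routine
# monitoring records. All venues and figures are invented examples.

workshops = [
    {"venue": "community", "cost": 180.0, "participants": 60},
    {"venue": "community", "cost": 150.0, "participants": 75},
    {"venue": "training centre", "cost": 400.0, "participants": 80},
]

totals = {}  # venue -> (total cost, total participants)
for w in workshops:
    cost, people = totals.get(w["venue"], (0.0, 0))
    totals[w["venue"]] = (cost + w["cost"], people + w["participants"])

for venue, (cost, people) in totals.items():
    print(f"{venue}: US$ {cost / people:.2f} per participant")
```

This only works if the venue and the participant count were recorded for every workshop as it happened, which is why the recording formats discussed earlier matter so much.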
For more on some of the more difficult components of Terms of Reference, see the sections that follow.
Examples of an evaluation purpose could be:
- To provide the organisation with information needed to make decisions about the future of the project.
- To assess whether the organisation/project is having the planned impact, in order to decide whether or not to replicate the model elsewhere.
- To assess the programme in terms of effectiveness, impact on the target group, efficiency and sustainability, in order to improve its functioning.
The purpose gives some focus to the broad evaluation process.
Some examples of key evaluation questions related to a project purpose. The purpose of the evaluation is to assess how efficient the project is in delivering benefits to the identified community, in order to inform Board decisions about continuity and replicability. Key evaluation questions:
- Who is currently benefiting from the project, and in what ways?
- Do the inputs (in money and time) justify the outputs and, if so/if not, on what basis is this claim justified?
- What would improve the efficiency, effectiveness and impact of the current project?
- What are the lessons that can be learned from this project in terms of replicability?
Note that none of these questions deals with a specific element or area of the internal or external functioning of the project or organisation. Most would require the evaluation team to deal with a range of project or organisational elements in order to answer them. Other examples of evaluation questions might be:
- What are the most effective ways in which a project of this kind can address the problem identified?
- To what extent does the internal functioning and structure of the organisation impact positively on the programme work?
- What learnings from this project would have applicability across the full development spectrum?
Clearly, there could be many, many examples. Our experience has shown us that, when an evaluation process is designed with such questions in mind, it produces far more interesting insights than simply asking obvious questions such as, "Does the Board play a useful role in the organisation?" or, "What impact are we having?"
(For more on actual methods, see the later section on collecting information, methods.) Here too one would expect to find some indication of reporting formats: Will all reporting be written? Will the team report to management, or to all staff, or to staff and Board and beneficiaries? Will there be interim reports or only a final report? What sort of evidence does the organisation or project require to back up evaluator opinions? Who will be involved in analysis? The methodology section of Terms of Reference should provide a broad framework for how the project or organisation wants the work of the evaluation done.
By damage control we mean what you need to do if you failed to get baseline information when you started out.
It is also usually best to use triangulation (See Glossary of Terms). This is a fancy word that means that one set of data or information is confirmed by another. You usually look for confirmation from a number of sources saying the same thing.
Tool: Interviews
Description: These can be structured, semi-structured or unstructured (see Glossary of Terms). They involve asking specific questions aimed at getting information that will enable indicators to be measured. Questions can be open-ended or closed (yes/no answers). Can be a source of qualitative and quantitative information.
Usefulness: Can be used with almost anyone who has some involvement with the project. Can be done in person or on the telephone or even by email. Very flexible.
Disadvantages: Requires some skill in the interviewer. For more on interviewing skills, see later in this toolkit.

Tool: Key informant interviews
Description: These are interviews that are carried out with specialists in a topic, or with someone who may be able to shed a particular light on the process.
Usefulness: As these key informants often have little to do with the project or organisation, they can be quite objective and offer useful insights. They can provide something of the …
Disadvantages: Needs a skilled interviewer with a good understanding of the topic. Be careful not to turn something into an absolute truth (cannot be challenged) because it …
Tool: Questionnaires
Description: These are written questions that are used to get written responses which, when analysed, will enable indicators to be measured.
Usefulness: This can be a useful way of getting opinions from quite a large sample of people.

Tool: Focus groups
Description: In a focus group, a group of about six to 12 people are interviewed together by a skilled interviewer/facilitator with a carefully structured interview schedule. Questions are usually focused around a specific topic or issue.

Tool: Community meetings
Description: This involves a gathering of a fairly large group of beneficiaries to whom questions, problems and situations are put for input, to help in measuring indicators.
Usefulness: Community meetings are useful for getting a broad response from many people on specific issues. It is also a way of involving beneficiaries directly in an evaluation process, giving them a sense of ownership of the process. They are useful to have at critical points in community projects.

Tool: Structured report forms (fieldworker reports)
Description: Structured report forms that ensure that indicator-related questions are asked and answers recorded, and observations recorded, on every visit.
Usefulness: Flexible, an extension of normal work, so cheap and not time-consuming.
Tool: Visual/audio stimuli
Description: These include pictures, movies, tapes, stories, role plays and photographs, used to illustrate problems or issues, or past or even future events.

Tool: Rating scales
Description: This technique makes use of a continuum, along which people are expected to place their own feelings, observations etc. People are usually asked to say whether they agree strongly, agree, don't know, disagree, or disagree strongly with a statement. You can use pictures and symbols in this technique if people cannot read and write.

Tool: Critical event/incident analysis
Description: This method is a way of focusing interviews with individuals or groups on particular events or incidents. The purpose of doing this is to get a very full picture of what actually happened.
Usefulness: Very useful when something problematic has occurred and people feel strongly about it. If all those involved are included, it should help the evaluation team to get a picture that is reasonably close to what actually happened, and to be able to diagnose what went wrong.
Disadvantages: The evaluation team can end up submerged in a vast amount of contradictory detail and lots of "he said/she said". It can be difficult not to take sides and to remain objective.

Tool: Participant observation
Description: This involves direct observation of events, processes, relationships and behaviours. "Participant" here implies that the observer gets involved in activities rather than maintaining a distance.
Usefulness: It can be a useful way of confirming, or otherwise, information provided in other ways.

Tool: Self-drawings
Description: This involves getting participants to draw pictures, usually of how they feel or think about something.
DO test the interview schedule beforehand for clarity, and to make sure questions cannot be misunderstood.
DO state clearly what the purpose of the interview is.
DO assure the interviewee that what is said will be treated in confidence.
DO ask if the interviewee minds if you take notes or tape record the interview.
DO record the exact words of the interviewee as far as possible.
DO keep talking as you write.
DO keep the interview to the point.
DO cover the full schedule of questions.
DO watch for answers that are vague, and probe for more information.
DO be flexible and note down everything interesting that is said, even if it isn't on the schedule.
DON'T offend the interviewee in any way.
DON'T say things that are judgmental.
DON'T interrupt in mid-sentence.
DON'T put words into the interviewee's mouth.
DON'T show what you are thinking through a changed tone of voice.
- Collect information around the indicators.
- Develop a structure for your analysis, based on your intuitive understanding of emerging themes and concerns, and of where you suspect there have been variations from what you had hoped and/or expected.
- Go through your data, organising it under the themes and concerns.
- Identify patterns, trends and possible interpretations.
- Write up your findings and conclusions.
- Work out possible ways forward (recommendations).
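As an illustration of the organising and pattern-finding steps, here is a small sketch in Python that groups coded interview comments under analysis themes and counts how often each theme appears. The themes and comments are invented examples:

```python
# Organising qualitative data under analysis themes, then counting how
# often each theme or concern appears. Themes and comments are invented.

from collections import Counter

coded_comments = [
    ("participation", "women attend meetings but rarely speak"),
    ("participation", "youth well represented at meetings"),
    ("sustainability", "committee unsure how repairs will be funded"),
    ("participation", "attendance dropped during harvest season"),
]

theme_counts = Counter(theme for theme, _ in coded_comments)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} comments")
    for t, comment in coded_comments:
        if t == theme:
            print(f"  - {comment}")
```

Seeing which themes recur most often is one simple, transparent way of identifying the patterns that the write-up and recommendations should address.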
Appropriate reporting formats for different audiences:

Board
- On monitoring: Written report.
- On evaluation: Written report, with an Executive Summary, and a verbal presentation from the evaluation team.

Management Team
- On monitoring: Written report, discussed at a management team meeting.
- On evaluation: Written report, presented verbally by the evaluation team.

Staff
- On monitoring: Written and verbal presentation at departmental and team levels.
- On evaluation: Written report, presented verbally by the evaluation team and followed by in-depth discussion of relevant recommendations at departmental and team levels.

Beneficiaries
- Verbal presentation, backed up by a summarised document, using appropriate tables, charts, visuals and audio-visuals. This is particularly important if the organisation or project is contemplating a major change that will impact on beneficiaries.

Donors
- On monitoring: Summarised in a written report.
- On evaluation: Full written report with executive summary, or a special version focused on donor concerns and interests.

Wider development community
- Journal articles, seminars, conferences, websites.

For an outline of what would normally be contained in a written report, see below.
EXECUTIVE SUMMARY
SECTION 2: FINDINGS:
SECTION 3: CONCLUSIONS:
The key steps for effective decision-making are:
- As a management team, understand the implications of what you have learned.
- Work out what needs to be done, and have clear motivations for why it needs to be done.
- Generate options for how to do it.
- Look at the options critically, in terms of which are likely to be the most effective.
- Agree as a management team.
- Get organisational/project consensus on what needs to be done and how it needs to be done.
- Get a mandate (usually from a Board, but possibly also from donors and beneficiaries) to do it.
- Do it.
How can you help people accept changes?
- Make the reasons why change is needed very clear: take people through the findings and conclusions of the monitoring and evaluation processes, and involve them in decision-making.
- Help people see the whole picture, beyond their little bit, to the overall impact on the problem analysed.
- Focus on the key issues: we have to do something about this!
- Recognise anger, fear and resistance.
- Listen to people; give them the opportunity to express frustration and other emotions.
- Find common ground: things that they also want to see changed.
- Encourage a feeling that change is exciting; that it frees people from doing things that are not working so they can try new things that are likely to work; that it releases productive energy.
- Emphasise the importance of everyone being committed to making it work.
- Create conditions for regular interaction, anything from a seminar to graffiti on a notice board, to discuss what is happening and how it is going.
- Pace change so that people can deal with it.
(Thanks to Olive Publications, Ideas for a Change Part 4, June 1999, for the ideas used in this sub-section.)
Examples of economic indicators:
- Average annual household income
- Average weekly/monthly wages
- Employment, by age group
- Unemployment, by age group, by gender
- Employment, by occupation, by gender
- Government employment
- Earned income levels
- Average length of unemployment period
- Default rates on loans
- Ratio of home owners to renters
- Per capita income
- Average annual family income
- % of people below the poverty line
- Ratio of seasonal to permanent employment
- Growth rate of small businesses
- Value of residential construction and/or renovation
Examples of social indicators (health, education, housing, safety):
- Death rate
- Life expectancy at birth
- Infant mortality rate
- Causes of death
- Number of doctors per capita
- Number of hospital beds per capita
- Number of nurses per capita
- Literacy rates, by age and gender
- Student:teacher ratios
- Retention rate by school level
- School completion rates by exit points
- Public spending per student
- Number of suicides
- Causes of accidents
- Dwellings with running water
- Dwellings with electricity
- Number of homeless people
- Number of violent crimes
- Birth rate
- Fertility rate
- Gini distribution of income (see Glossary of Terms)
Examples of health and cultural indicators:
- Rates of hospitalisation
- Rates of HIV infection
- Rates of AIDS deaths
- Number of movie theatres/swimming pools per 1 000 residents
- Number of radios/televisions per capita
- Availability of books in traditional languages
- Traditional languages taught in schools
- Time spent listening to radio/watching television, by gender
- Number of programmes on television and radio in traditional languages and/or dealing with traditional customs
- Church participation, by age and gender
Examples of community and political indicators:
- Number of community organisations
- Types of organised sport
- Number of tournaments and games
- Participation levels in organised sport
- Number of youth groups
- Participation in youth groups
- Participation in women's groups
- Participation in groups for the elderly
- Number of groups for the elderly
- Structure of political leadership, by age and gender
- Participation rate in elections, by age and gender
- Number of public meetings held
- Participation in public meetings, by age and gender
Examples adapted from Using Development Indicators for Aboriginal Development, the Development Indicator Project Steering Committee, September 1991.
The organisation made a decision to go for a computerised monitoring system. Much of the day-to-day information needed by the organisation was already on a computerised database (e.g. schools, regions, services provided and so on), but the monitoring system would require a substantial upgrading and the development of database software specific to the organisation's needs. The organisation also made the decision to develop a system initially for a pilot project, but with the intention of extending it to all the work over time. This pilot project would work with about 60 schools, using different scripts each year, over a period of three years. In order to raise the money needed for this process, Puppets against AIDS needed some kind of a brief for what was required so that it could be costed.

At an initial workshop with staff, facilitated by consultants, the staff generated a list of indicators for efficiency, effectiveness and impact in relation to their work. These were the things staff wanted to know from the system about what they did, how they did it, and what difference it made. The terms were defined as follows:

Efficiency: Here what needed to be assessed was how quickly, how correctly, how cost-effectively and with what use of resources the services of the organisation were offered. Much of this information was already collected and was contained in reports which reflected planning against achievement. It needed to be made computer-friendly.

Effectiveness: Here what needed to be assessed was getting results in terms of the strategy and shorter-term impact. For example, were the puppet shows an effective means of communicating messages about sexuality? Again, this information was already being collected and just needed to be adapted to fit the computerised system.

Impact: Here what needed to be assessed was whether the strategy worked, in that it had an impact on changing behaviour in individuals (in this case the students) and that that change in behaviour impacted positively on …
Forms/questionnaires were developed to measure impact indicators before the first intervention (to provide baseline information) and then at various points in the process, as well as to categorise such concepts as "teacher profile". The student questionnaire was designed in such a way as to make it possible to aggregate a score which could be compared when the questionnaire was administered at different stages in the process. The questionnaire took the form of a series of statements with which students were asked to agree/disagree/strongly agree/strongly disagree etc. So, for example, statements to do with an increase in student self-esteem included "When I look in a mirror, I like what I see" and "Most of the people I know like the real me".

The organisation indicated that it wanted the system to generate reports that would enable it to know:
- What difference is there between the indicator ratings on the impact objective at the beginning and end of the process?
- What difference is there between teacher attitudes at the beginning and end of the process?
- What variables to do with the school and school environment impact on the degree of difference between indicators at the beginning and end of the process?
- What variables to do with the way in which the shows are presented impact on the degree of difference at the beginning and end of the process?
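As an illustration of how such a questionnaire can be aggregated into a comparable score, here is a sketch in Python. The scoring weights and responses are invented; the toolkit does not describe the actual scoring scheme used by Puppets against AIDS:

```python
# Aggregating a Likert-style self-esteem questionnaire into a single
# score, so that baseline and later administrations can be compared.
# The scoring weights and responses below are invented examples.

SCORES = {"strongly disagree": 1, "disagree": 2, "agree": 3, "strongly agree": 4}

def total_score(responses):
    """Sum the scores for one student's answers to all statements."""
    return sum(SCORES[answer] for answer in responses.values())

baseline = {
    "When I look in a mirror, I like what I see": "disagree",
    "Most of the people I know like the real me": "agree",
}
endline = {
    "When I look in a mirror, I like what I see": "agree",
    "Most of the people I know like the real me": "strongly agree",
}

print("change in score:", total_score(endline) - total_score(baseline))
```

Because the same statements are scored the same way each time, differences between administrations can be read as movement on the indicator rather than as an artefact of the instrument.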
All this was written up as a brief which was given to software experts who then came up with a system that would meet the necessary requirements. The process was slow and demanding but eventually the system was in place and it is currently being tested.
________________________________________________________________________
CARE-ED FIELD VISIT REPORT

Date:
Name of school:
Information obtained from:
Report completed by:
Field visit number: _____
---------------------------------------------------------------------------------------------
1. List the skills used by the teachers in the time period of your visit to the school:
3. List the fundraising activities the school committee is currently involved in:
4. Record-keeping assessment:
Kind of record: Bookkeeping / Petty cash / Filing / Correspondence / Stock control / Registers
Rate each kind of record as one of: up-to-date and accurate; up-to-date but not very accurate; not up-to-date; not attempted.
5. Number of children registered:
   Average attendance over past two months:
6. Number of payments outstanding for longer than two months:
7. Average attendance at committee meetings over past two months:
8. Comments on this visit:
Baseline data
Bottom line
Indicators
Opportunity costs
Progress data
Qualitative
Rigorous Sampling
Secondary data
SWOT Analysis
Terms of Reference
Variables
CIVICUS: World Alliance for Citizen Participation is an international alliance established in 1993 to nurture the foundation, growth and protection of citizen action throughout the world, especially in areas where participatory democracy and citizens' freedom of association are threatened. CIVICUS envisions a worldwide community of informed, inspired, committed citizens engaged in confronting the challenges facing humanity.

These CIVICUS Toolkits have been produced to assist civil society organisations to build their capacity and achieve their goals. The topics range from budgeting, strategic planning and dealing with the media, to developing a financial strategy and writing an effective funding proposal. All are available on-line, in MS Word and PDF format, at www.civicus.org and on CD-ROM.

For further information about CIVICUS:
CIVICUS: World Alliance for Citizen Participation
24 Pim Street, corner Quinn Street
Newtown, Johannesburg 2001
South Africa
P.O. Box 933