
Conceptualization of Program Evaluation

1. Definition of concepts
2. Types of program evaluation
3. Approaches/models of program evaluation: the CIPP Model
4. Principles of program evaluation
5. Program standards
6. Professional standards and ethical conduct
1. Definition of concepts

• Evaluation
• Program evaluation and project evaluation
• Program evaluation and evaluation research
• Monitoring
• Supervision
• Inspection
Evaluation
• In defining the concept, some look at evaluation as a way of providing information for decision-making.
• Others consider it an approach for identifying the discrepancy between what a program has achieved and what is expected.
• Evaluation is an integral part of the monitoring and reporting that feeds the decision-making process.
• Evaluation is defined as a study designed and conducted to assess the merit and worth of an object.
• An evaluation is a systematic assessment.
• Evaluations should follow a systematic and mutually agreed-on plan. Plans will typically include the following (a minimal sketch follows this list):
 Determining the goal of the evaluation: what is the evaluation question, and what is the evaluation to find out?
 Deciding how the evaluation will answer the question: what methods will be used?
 Making the results useful: how will the results be reported so that the organization can use them to make improvements?
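As an illustration only, such a plan can be pictured as a small data structure. The class name, fields, and sample program below are hypothetical, not taken from the source:

```python
from dataclasses import dataclass

@dataclass
class EvaluationPlan:
    # The three fields mirror the planning steps listed above.
    goal: str        # what the evaluation is to find out
    methods: list    # how the evaluation will answer the question
    reporting: str   # how results will be made useful

plan = EvaluationPlan(
    goal="Did the literacy program improve participants' reading skills?",
    methods=["pre/post reading assessments", "staff and participant interviews"],
    reporting="Short report with recommendations the organization can act on",
)
print(plan.goal)
```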
Aim, purpose
Evaluation is instrumental in:
 Supporting organizational learning; organizations benefit from a continuous learning process.
 Providing the information needed to guide the program towards achieving its set goals, enabling it to become a more effective organization within whatever constraints it has to operate.
 Providing early warning of activities and processes that need corrective action.
 Helping empower program partners by creating opportunities for them to reflect critically on the program’s direction and decide on improvements.
 Building understanding, motivation and capacity amongst those involved in the program.
Note that organizational learning does not happen only during evaluation; it also takes place during implementation and when feedback is incorporated into the entire program cycle.
Program Evaluation
• is the systematic assessment of the operation and/or outcomes of a program against a set of explicit or implicit standards, so as to contribute to the improvement of the program
• is the systematic collection of information about the activities, characteristics, and outcomes of programs to:
◦ make judgments about the program,
◦ improve program effectiveness, and/or
◦ inform decisions about future programming
• is the systematic collection, analysis and reporting of data about a program to assist in decision-making
Program evaluation provides processes and tools that agencies of all kinds can apply to obtain valid, reliable, and credible data to address a variety of questions about the performance of public and nonprofit programs.
Program evaluation aims at:
◦ Program improvement
◦ Continuation and/or dissemination
◦ Accountability
It also helps to:
◦ find out “what works” and “what does not work”
◦ show the effectiveness of a program to the community and funders
◦ improve staff’s frontline practice with participants
◦ increase a program’s capacity to conduct a critical self-assessment and plan for the future
◦ increase awareness of how various program components affect individual clients, which allows greater personalization of the program
◦ support managing for impact
How does evaluation help in managing for impact?
 The key idea underlying program evaluation is to help those responsible for managing the resources and activities of a program to enhance development results along a continuum, from short-term to long-term.
 Managing for impact means steering program interventions towards sustainable, longer-term impact along a plausibly linked chain of results: inputs produce outputs that engender outcomes that contribute to impact.
 Managing for impact also means the program manager and the responsible official should adapt the program to changing circumstances so that it has a better chance of achieving its intended objectives.
 Outcomes are defined as the medium-term effects of project outputs.
 Outcomes are observable changes that can be linked to project interventions.
 Impact, on the other hand, is defined as the positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended.
 Impact is often only detectable after several years and is usually not attained during the life cycle of one project. (A sketch of this results chain follows.)
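A minimal sketch of this results chain, with hypothetical example entries; only the stage names and their ordering come from the text:

```python
# Results chain: inputs -> outputs -> outcomes -> impact.
# The example entries are hypothetical.
results_chain = {
    "inputs":   ["funding", "trainers", "materials"],   # resources invested
    "outputs":  ["200 farmers trained"],                # direct, short-term products
    "outcomes": ["improved practices adopted"],         # medium-term effects of outputs
    "impact":   ["higher household incomes"],           # long-term effects, often years later
}

for stage, examples in results_chain.items():
    print(f"{stage:>8}: {', '.join(examples)}")
```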
◦Program evaluation assesses how well
planning and managing for future impact is
being done during the program cycle
◦Because programs are collaborative efforts,
partners have co-responsibility for achieving
outcomes and, ultimately, impact
Evaluation is not only beneficial; it also raises the following concerns:
◦ Evaluation will divert resources away from the
program
◦ Evaluation will be too complicated
◦ Evaluation will be an additional burden on staff
◦ Evaluation will produce negative results
◦ Evaluation is just another form of program
monitoring
Managing for impact is only possible if reliable sources of information are available about:
◦ the context in which activities are taking place,
◦ the extent of progress toward the project’s objectives, and
◦ the reasons for success or failure.
Program evaluation and evaluation research

 What are the differences and similarities between program evaluation and research?
◦ Program evaluation is NOT the same as research, although they share many characteristics.
 They differ in terms of:
◦ Questions
◦ Judgment
◦ Roles
◦ Publication
◦ Motivation of person doing the work
◦ Allegiance
◦ Audiences for results
◦ Autonomy
◦ Generalizability
 They are similar in that they:
◦ start with questions
◦ use similar methods
◦ provide similar information
◦ Program evaluation focuses on decisions.
◦ Research focuses on answering questions about
phenomena to discover new knowledge and test
theories/hypotheses.
◦ Research is aimed at truth.
◦ Program evaluation is aimed at action.
Program evaluation describes four levels of information gathered in the process of evaluation (a worked illustration follows this list):
◦ Effort (Volume): how much service activity is generated
◦ Performance (Reach): how well the program meets community needs
◦ Outcome (Effect): what impact it has on clients
◦ Efficiency (Value): how much effect you get for the cost
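A worked illustration of the four levels using hypothetical figures; only the level names and definitions come from the text:

```python
# Hypothetical figures for one program year.
sessions_delivered = 500      # Effort / Volume: service activity generated
clients_served = 120          # Performance / Reach: community members reached
clients_improved = 90         # Outcome / Effect: impact on clients
total_cost = 45_000.0         # program cost in arbitrary currency units

# Efficiency / Value: how much effect you get for the cost.
effect_per_1000 = clients_improved / (total_cost / 1000)
print(f"Effort: {sessions_delivered} sessions delivered")
print(f"Reach:  {clients_served} clients served")
print(f"Effect: {clients_improved} clients improved")
print(f"Value:  {effect_per_1000:.1f} improved clients per 1,000 cost units")
```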
Program and project evaluations

◦ A project is a temporary entity established to deliver specific (often tangible) outputs in line with predefined time, cost and quality constraints.
◦ A project should always be defined, executed and evaluated relative to an approved criterion which balances the costs, benefits and risks of the project.
◦ A program is a portfolio of multiple projects that are managed and coordinated as one unit with the objective of achieving (often intangible) outcomes and benefits for the organization.
 Hence, project evaluation assesses activities that are designed to perform a specified task in a specific period of time.
Monitoring
◦ is a process of checking on a program regularly to find out how far it is functioning according to plan
◦ represents a well-planned flow of information from program activities to the coordinating centre and vice versa
Supervision
◦ involves checking on the performance of programs
◦ aims to use the information generated on the spot by giving guidance to improve the performance of program staff
Inspection
◦ covers wider ground, including checking on the status of a program
◦ places its emphasis on order, compliance and discipline
2. Types of program Evaluation

Newcomer, Hatry, and Wholey (2010) identified the following types of/approaches to program evaluation:
• Formative vs. summative
• Ongoing vs. one-shot
• Objective observers vs. participatory
• Goal-based vs. goal-free
• Quantitative vs. qualitative
• Problem orientation vs. non-problem
We can also add one more to this:
• Internal vs. external evaluation
Internal Evaluation
◦ Is conducted by program staff
Advantages of Internal Evaluation
◦ information can be collected more regularly,
◦ the whole process is cheaper,
◦ program staff gain a deeper understanding of their program as they evaluate it, and
◦ the results of evaluation are easily implemented
Disadvantages
◦ development administrators and facilitators are
not always trained evaluators,
◦ program staff tend to have blind spots to some
aspects of their program,
◦ internal evaluators may find it difficult to be objective, especially if they or their close friends are the causes of weaknesses in the program
External Evaluation
◦Is done by outside evaluation specialists under
the following circumstances:
 if there are crucial difficulties within a program
and those who are responsible for the program
cannot cope with the difficulties,
 if an on-going program, or a program which has
just ended, requires a fresh inspection from
outside, and
 if the program becomes so complex that a solid
body of technical skills, not available in the
program, is necessary to evaluate the program.
◦The main strengths of the external evaluators
are that:
 they are normally carefully selected and in most
cases have relevant technical training and job
experience,
 unlike internal evaluators who may have personal
and other local interests to protect, external
evaluators normally have little interest to protect,
 external evaluators are not likely to be influenced
by local pressures or the temptation to please local
leaders, and
 external evaluators are likely to bring into the program new ideas and experiences from similar programs which they have studied elsewhere
◦ Disadvantages
 an external evaluator has to carry out the work hurriedly and often comes to the program without a solid knowledge of the program’s background
 it is also debatable whether external evaluators are free of political motivation or influence, and whether they are completely neutral
 the evaluation context is not only new and unfamiliar to the external evaluator; it may also be inaccessible
Formative Evaluation
◦ is an on-going evaluation
◦ its purpose is to generate information which
can be used to improve subsequent stages of
the program
◦ is based on the assumption that the program
under review is short term or has a beginning
and an end
Summative Evaluation
◦ It comes at the end of the program, looks back, and asks: What was intended? What happened? What are the outcomes?
◦ It generates information which helps program
managers and sponsors to decide
 whether the program should be terminated,
revived or continued
 whether the whole program was a success
or only partially successful
 whether or not other agencies would be
advised to replicate the project with or
without modifications
Participatory Evaluation
◦An attempt is made to conduct evaluation
through participation, or by involvement of
program managers, facilitators, and
recipients.
◦The evaluator becomes a facilitator who
helps the participants to decide:
 what should be evaluated;
 how the evaluation should be conducted, and
 how the information should be used
3. Approaches/Models of Program
Evaluation- (CIPP) Model
The term model is loosely used to refer to a
conception or approach or sometimes even a
method of doing evaluation….
Models are to paradigms as hypotheses are to
theories, which means less general but with
some overlap
There are different types of
program evaluation models based
on different approaches
The Context-Input-Process-Product
(CIPP) Model is one such model
The CIPP Model – is widely used in analysis
and planning of systems.
◦Evolved from a basic open system model
that includes input, process, and output
◦Daniel Stufflebeam added context, retained input and process, and relabeled output as product
◦It is used to break a program or any
other system into four parameters:
 context,
 input,
 process, and
 product
 This model has two main attributes:
 It provides a convenient sub-division of the program into four areas, each of which can form an evaluation task
 When a program is broken down into specific ingredients, it is easier to see the relationships between the various constituent components
Each of the four parameters is in turn analyzed and broken down into very specific issues, ideas and objects which can be inspected and assessed.
Context Evaluation - includes
◦examining and describing the context of the
program you are evaluating,
◦conducting a needs and goals assessment
◦determining the objectives of the program, and
◦determining whether the proposed objectives will
be sufficiently responsive to the identified needs
Input Evaluation – includes
◦description of the program inputs and resources,
◦a comparison of how the program might perform
compared to other programs, a prospective
benefit/cost assessment
◦an evaluation of the proposed design of the
program, and
◦an examination of what alternative strategies
and procedures for the program should be
considered and recommended
Process Evaluation - includes
◦examining how a program is being
implemented,
◦monitoring how the program is performing,
◦auditing the program to make sure it is following
required legal and ethical guidelines, and
◦identifying defects in the procedural design or in
the implementation of the program
◦ It is here that evaluators provide information
about what is actually occurring in the program
Product Evaluation – includes
◦determining and examining the general and
specific outcomes of the program
◦measuring anticipated outcomes,
attempting to identify unanticipated
outcomes,
◦assessing the merit of the program,
◦conducting a retrospective benefit/cost
assessment, and/or conducting a cost
effectiveness assessment

The four CIPP evaluations inform four corresponding types of decisions: context evaluation informs program planning decisions; input evaluation informs program structuring decisions; process evaluation informs implementing decisions; and product evaluation informs summative evaluation decisions. (A sketch of this mapping follows.)
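A sketch of that mapping as a lookup table; the structure is illustrative, while the pairings follow the sentence above:

```python
# Each CIPP evaluation informs a corresponding decision type.
cipp_decisions = {
    "Context": "program planning decisions",
    "Input":   "program structuring decisions",
    "Process": "implementing decisions",
    "Product": "summative evaluation decisions",
}

for component, decision in cipp_decisions.items():
    print(f"{component} evaluation informs {decision}.")
```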
4. Principles of Program Evaluation
It is important that program managers and responsible officials ensure that program evaluations:
◦ are credible and independent,
◦ contribute to organizational learning,
◦ are impartial, and
◦ reinforce accountability and transparency.
Additional guiding principles:
◦ Systematic inquiry,
◦ Provision of competent performance to stakeholders,
◦ Integrity and honesty,
◦ Respect for people, and
◦ Responsibility for the general and public welfare.
◦ Participatory: a program can only achieve sustainability if the local partners take ownership of the program during the design and implementation processes and after.

◦Furthermore, the whole process is
designed and managed in a
transparent and participatory way
to ensure that the evaluation:
 addresses the concerns of all
stakeholders,
 is useful to them and
 is carried out in an impartial and
balanced way
How do you strengthen Credibility and
Accountability?
The evaluation policy of an organization
guarantees that independent evaluations are
unbiased and of high quality:
They must be conducted by an external
evaluation consultant;
Decision-making related to independent
evaluation should be separated from decision-
making on the design and implementation of
the program
A designated evaluation focal point in the organization oversees the process, approves the TOR and the selection of the evaluator(s), and reviews the final report.
5. Program Standards
The Joint Committee on Standards for
Educational Evaluation (JCSEE) has
developed a set of standards for the
evaluation of educational programs that
are organized into five groups of
standards:
Utility Standards
Feasibility Standards
Propriety Standards
Accuracy Standards
Accountability Standards
Utility Standards
◦ The utility standards are intended to
increase the extent to which program
stakeholders find evaluation processes
and products valuable in meeting their
needs
◦ They guide evaluations so that they will
be informative, timely, and influential
◦ They require evaluators to acquaint
themselves with their audiences, define
the audiences clearly, ascertain the
audiences’ information needs, plan
evaluations to respond to these needs,
and report the relevant information
clearly and in a timely fashion
 Evaluator Credibility
 Attention to Stakeholders
 Negotiated Purposes
 Explicit Values
 Relevant Information
 Meaningful Processes and Products
 Timely and Appropriate
Communicating and Reporting
 Concern for Consequences and
Influence
Feasibility Standards
◦Feasibility standards recognize that
evaluations usually are conducted in a
natural, as opposed to a laboratory, setting
and consume valuable resources.
◦The feasibility standards are intended to
increase evaluation effectiveness and
efficiency.
◦Evaluations must not consume more
resources, materials, personnel, or time
than necessary to address the evaluation
questions
 Project Management
 Practical Procedures
 Contextual Viability
Propriety Standards
◦ Propriety standards reflect the fact that evaluations affect many people in a variety of ways
◦ These standards are intended to facilitate protection of the rights of individuals affected by an evaluation
◦ The propriety standards support what is proper, fair, legal, right and just in evaluations
◦ They promote sensitivity to and warn against unlawful, unscrupulous, unethical, and inept actions by those who conduct evaluations
◦ These standards require that individuals conducting evaluations learn about and obey laws concerning such matters as privacy, freedom of information, and the protection of human subjects
 Responsive and Inclusive Orientation
 Formal Agreements
 Human Rights and Respect
 Clarity and Fairness
 Transparency and Disclosure
 Conflicts of Interests
 Fiscal Responsibility
Accuracy Standards
◦The accuracy standards are intended to
increase the dependability and truthfulness of
evaluation representations, propositions, and
findings, especially those that support
interpretations and judgments about quality
◦Accuracy standards determine whether an
evaluation has produced sound information.
◦The evaluation program must be
comprehensive
◦The information must be technically
adequate, and the judgments rendered
must be linked logically to the data.
 Justified Conclusions and Decisions
 Valid Information
 Reliable Information
 Explicit Program and Context Descriptions
 Information Management
 Sound Designs and Analyses
 Explicit Evaluation Reasoning
 Communication and Reporting
Accountability Standards
◦The evaluation accountability standards
encourage adequate documentation of
evaluations and a metaevaluative
perspective focused on improvement and
accountability for evaluation processes and
products.
 Evaluation Documentation
 Internal Metaevaluation
 External Metaevaluation
6. Professional and Ethical Conduct
All those engaged in designing,
conducting and managing evaluation
activities should be guided by sound
professional standards and strong ethical
principles.
It is especially important that the
evaluation is carried out in a climate of
trust.
Confidential information should be
handled in a responsible manner, and
informants must not risk being
disadvantaged as a consequence of their
collaboration.
Hence,
Evaluators must have personal and professional
integrity.
Evaluators must respect the right of institutions and
individuals to provide information in confidence and
ensure that sensitive data cannot be traced to its
source.
Evaluators must take care that those involved in
evaluations have a chance to examine the
statements attributed to them.
Evaluators must be sensitive to beliefs, manners
and customs of the social and cultural environments
in which they work.
Evaluators must be sensitive to and address issues
of discrimination and gender inequality.
Evaluations sometimes uncover evidence of
wrongdoing. Such cases must be reported discreetly
to the appropriate investigative body.
