Evaluation: Purpose
Definition
Evaluation is the structured interpretation and giving of meaning to the predicted or actual
impacts of proposals or results. It looks at the original objectives, at what was either
predicted or accomplished, and at how it was accomplished. Evaluation can therefore
be formative, taking place during the development of a concept, proposal, project, or
organization with the intention of improving its value or effectiveness. It can also
be summative, drawing lessons from a completed action, project, or organization at a later
point in time or in changed circumstances.
Evaluation is an inherently theory-informed approach (whether explicitly or not), and
consequently any particular definition of evaluation is tailored to its
context: the theory, needs, purpose, and methodology of the evaluation process itself.
With that caveat, many formal definitions of evaluation have been proposed.
Discussion
Strict adherence to a set of methodological assumptions may make the field of evaluation
more acceptable to a mainstream audience, but it also works against evaluators
developing new strategies for dealing with the myriad problems that programs face. It is
claimed that only a minority of evaluation reports are used by the evaluand (client)
(Datta, 2006). One explanation is that "when evaluation findings are challenged or
utilization has failed, it was because stakeholders and clients found the inferences weak
or the warrants unconvincing" (Fournier and Smith, 1993). Possible reasons for this
situation include the evaluator's failure to establish a set of shared aims with the
evaluand, the setting of overly ambitious aims, and a failure to compromise and to
incorporate the cultural differences of individuals and programs within the evaluation's
aims and process. None of these problems stems from a lack of a definition of evaluation;
rather, they arise when evaluators impose preconceived notions and definitions of
evaluation on clients. The central reason for the poor utilization of evaluations is
arguably a failure to tailor evaluations to the needs of the client, owing to a predefined
idea (or definition) of what an evaluation is rather than to what the client's needs are
(House, 1980). Developing a standard methodology for evaluation will require workable
ways of asking, and stating the results of, questions about ethics such as agent-principal
relationships, privacy, stakeholder definition, and limited liability, as well as
could-the-money-be-spent-more-wisely issues.
Standards
Depending on the topic of interest, there are professional groups that review the quality
and rigor of evaluation processes.
Evaluating programs and projects, regarding their value and impact within the contexts in
which they are implemented, can be ethically challenging. Evaluators may encounter
complex, culturally specific systems resistant to external evaluation. Furthermore, the
project organization or other stakeholders may be invested in a particular evaluation
outcome. Finally, evaluators themselves may encounter conflict-of-interest (COI) issues,
or experience interference or pressure to present findings that support a particular
assessment.
General professional codes of conduct, as determined by the employing organization,
usually cover three broad aspects of behavioral standards: inter-collegial relations
(such as respect for diversity and privacy), operational issues (due competence,
documentation accuracy, and appropriate use of resources), and conflicts of interest
(nepotism, accepting gifts, and other kinds of favoritism). However, specific guidelines
particular to the evaluator's role are also required for managing the unique ethical
challenges of evaluation. The Joint Committee on Standards for
Educational Evaluation has developed standards for program, personnel, and student
evaluation. The Joint Committee standards are broken into four sections: Utility, Feasibility,
Propriety, and Accuracy. Various European institutions have also prepared their own
standards, more or less related to those produced by the Joint Committee. They provide
guidelines about basing value judgments on systematic inquiry, evaluator competence and
integrity, respect for people, and regard for the general and public welfare.
The American Evaluation Association has created a set of Guiding Principles for
evaluators. The order of these principles does not imply priority among them; priority will
vary by situation and evaluator role. The principles run as follows:
Perspectives
The word "evaluation" carries different connotations for different people, raising
questions about the process: what type of evaluation should be conducted, why an
evaluation is needed, and how the evaluation can be integrated into a program for the
purpose of gaining greater knowledge and awareness. The evaluation process itself also
involves critically examining the influences within a program, which requires gathering
and analyzing relevant information about that program.
Michael Quinn Patton advanced the view that the evaluation procedure should be directed
toward a program's activities, characteristics, and outcomes, in order to make judgments
about the program, improve its effectiveness, and inform programming decisions.
From another perspective on evaluation, offered by Thomson and Hoffman in 2003,
situations may be encountered in which the process is not advisable: for instance, when a
program is unpredictable or unsound, when it lacks a consistent routine, when the
concerned parties cannot reach agreement regarding the purpose of the program, or when an
influencer or manager refuses to incorporate relevant, central issues within the
evaluation.
Approaches
There exist several conceptually distinct ways of thinking about, designing, and conducting
evaluation efforts. Many of the evaluation approaches in use today make truly unique
contributions to solving important problems, while others refine existing approaches in
some way.
Classification of approaches
Two classifications of evaluation approaches, one by House and one by Stufflebeam and
Webster, can be combined into a manageable number of approaches in terms of their unique
and important underlying principles.
House considers all major evaluation approaches to be based on a
common ideology entitled liberal democracy. Important principles of this ideology include
freedom of choice, the uniqueness of the individual and empirical inquiry grounded
in objectivity. He also contends that they are all based on subjectivist ethics, in which
ethical conduct is based on the subjective or intuitive experience of an individual or group.
One form of subjectivist ethics is utilitarian, in which "the good" is determined by what
maximizes a single, explicit interpretation of happiness for society as a whole. Another form
of subjectivist ethics is intuitionist/pluralist, in which no single interpretation of "the good" is
assumed and such interpretations need not be explicitly stated nor justified.
These ethical positions have corresponding epistemologies—philosophies for
obtaining knowledge. The objectivist epistemology is associated with the utilitarian ethic; in
general, it is used to acquire knowledge that can be externally verified (intersubjective
agreement) through publicly exposed methods and data. The subjectivist epistemology is
associated with the intuitionist/pluralist ethic and is used to acquire new knowledge based
on existing personal knowledge, as well as experiences that are (explicit) or are not (tacit)
available for public inspection. House then divides each epistemological approach into two
main political perspectives. Firstly, approaches can take an elite perspective, focusing on
the interests of managers and professionals; or they also can take a mass perspective,
focusing on consumers and participatory approaches.
Stufflebeam and Webster place approaches into one of three groups, according to their
orientation toward the role of values and ethical consideration. The political orientation
promotes a positive or negative view of an object regardless of what its value actually is
and might be—they call this pseudo-evaluation. The questions orientation includes
approaches that might or might not provide answers specifically related to the value of an
object—they call this quasi-evaluation. The values orientation includes approaches
primarily intended to determine the value of an object—they call this true evaluation.
When the above concepts are considered simultaneously, fifteen evaluation approaches
can be identified in terms of epistemology, major perspective (from House), and
orientation. Two pseudo-evaluation approaches, politically controlled and public relations
studies, are represented. They are based on an objectivist epistemology from an elite
perspective. Six quasi-evaluation approaches use an objectivist epistemology. Five of them
—experimental research, management information systems, testing programs, objectives-
based studies, and content analysis—take an elite perspective. Accountability takes a
mass perspective. Seven true evaluation approaches are included. Two approaches,
decision-oriented and policy studies, are based on an objectivist epistemology from an elite
perspective. Consumer-oriented studies are based on an objectivist epistemology from a
mass perspective. Two approaches—accreditation/certification and connoisseur studies—
are based on a subjectivist epistemology from an elite perspective. Finally, adversary
and client-centered studies are based on a subjectivist epistemology from a mass
perspective.
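The three-way classification above (epistemology, major perspective, and orientation) can be summarized as a small lookup table. The following Python sketch is purely illustrative and not part of the evaluation literature; the approach names are abbreviated from the text, and the helper function is a hypothetical convenience for grouping:

```python
# Illustrative encoding of the fifteen evaluation approaches described above.
# Each approach maps to a (epistemology, perspective, orientation) tuple,
# following the combined House / Stufflebeam-and-Webster classification.
APPROACHES = {
    "politically controlled":          ("objectivist", "elite", "pseudo-evaluation"),
    "public relations":                ("objectivist", "elite", "pseudo-evaluation"),
    "experimental research":           ("objectivist", "elite", "quasi-evaluation"),
    "management information systems":  ("objectivist", "elite", "quasi-evaluation"),
    "testing programs":                ("objectivist", "elite", "quasi-evaluation"),
    "objectives-based":                ("objectivist", "elite", "quasi-evaluation"),
    "content analysis":                ("objectivist", "elite", "quasi-evaluation"),
    "accountability":                  ("objectivist", "mass", "quasi-evaluation"),
    "decision-oriented":               ("objectivist", "elite", "true evaluation"),
    "policy studies":                  ("objectivist", "elite", "true evaluation"),
    "consumer-oriented":               ("objectivist", "mass", "true evaluation"),
    "accreditation/certification":     ("subjectivist", "elite", "true evaluation"),
    "connoisseur":                     ("subjectivist", "elite", "true evaluation"),
    "adversary":                       ("subjectivist", "mass", "true evaluation"),
    "client-centered":                 ("subjectivist", "mass", "true evaluation"),
}

def by_orientation(orientation):
    """Return, sorted by name, the approaches sharing a values orientation."""
    return sorted(name for name, (_, _, o) in APPROACHES.items()
                  if o == orientation)

print(len(APPROACHES))                          # 15 approaches in total
print(len(by_orientation("pseudo-evaluation"))) # 2
print(len(by_orientation("quasi-evaluation")))  # 6
print(len(by_orientation("true evaluation")))   # 7
```

Grouping the table this way reproduces the counts given in the text: two pseudo-evaluation, six quasi-evaluation, and seven true evaluation approaches.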
Pseudo-evaluation