Assessment of Learning


EDUCATION

CHAPTER 6

EVALUATION
ALOGUIN, LUMABAO, BAUTISTA
Pseudo-evaluation
Politically controlled and public relations studies are based on an objectivist
epistemology from an elite perspective. Although both of these approaches
seek to misrepresent value interpretations about an object, they go about
it a bit differently. Information obtained through politically controlled studies
is released or withheld to meet the special interests of the holder.

Public relations information is used to paint a positive image of an object
regardless of the actual situation. Neither of these approaches is acceptable
evaluation practice, although the seasoned reader can surely think of a few
examples where they have been used.
Objectivist, elite, quasi-evaluation
As a group, these five approaches represent a highly respected
collection of disciplined inquiry approaches. They are considered quasi-evaluation
approaches because particular studies legitimately can focus
only on questions of knowledge without addressing any questions of value.
Such studies are, by definition, not evaluations. These approaches can
produce characterizations without producing appraisals, although specific
studies can produce both. Each of these approaches serves its intended
purpose well. They are discussed roughly in order of the extent to which
they approach the objectivist ideal.

Experimental research is the best approach for determining causal
relationships between variables. The potential problem with using this
as an evaluation approach is that its highly controlled and stylized
methodology may not be sufficiently responsive to the dynamically
changing needs of most human service programs.
Management information systems (MISs) can give detailed
information about the dynamic operations of complex programs.
However, this information is restricted to readily quantifiable data
usually available at regular intervals.

Testing programs are familiar to anyone who has attended school,
served in the military, or worked for a large company. These programs
are good at comparing individuals or groups to selected norms in a
number of subject areas or to a set of standards of performance.
However, they focus only on testee performance, and they might not
adequately sample what is taught or expected.
Objectives-based approaches relate outcomes to prespecified objectives,
allowing judgments to be made about their level of attainment.
Unfortunately, the objectives are often not proven to be important or
they focus on outcomes too narrow to provide the basis for determining
the value of an object.

Content analysis is a quasi-evaluation approach because content analysis
judgments need not be based on value statements. Instead, they can be
based on knowledge. Such content analyses are not evaluations. On the
other hand, when content analysis judgments are based on values, such
studies are evaluations.
Objectivist, mass, quasi-evaluation

Accountability is popular with constituents because it is intended to
provide an accurate accounting of results that can improve the quality
of products and services. However, this approach can quickly turn
practitioners and consumers into adversaries when implemented in a
heavy-handed fashion.
Objectivist, elite, true evaluation
Decision-oriented studies are designed to provide a knowledge base for
making and defending decisions. This approach usually requires close
collaboration between an evaluator and decision-maker, making it
susceptible to corruption and bias.

Policy studies provide general guidance and direction on broad issues by
identifying and assessing potential costs and benefits of competing
policies. The drawback is that these studies can be corrupted or subverted
by the politically motivated actions of participants.
Objectivist, mass, true evaluation
Consumer-oriented studies are used to judge the relative merits of goods
and services based on generalized needs and values, along with a
comprehensive range of effects. However, this approach does not
necessarily help practitioners improve their work, and it requires a very
good and credible evaluator to do it well.
Subjectivist, mass, true evaluation

Adversary approach - focuses on drawing out the pros and cons of
controversial issues through quasi-legal proceedings.

Client-centered studies - address specific concerns and issues of
practitioners and other clients of the study in a particular setting.
Evaluation methods and techniques

Evaluation is methodologically diverse, using both qualitative methods
and quantitative methods, including case studies, survey research,
statistical analysis, and model building, among others.
The CIPP Evaluation Model
Stufflebeam (1983) developed a very useful approach to educational
evaluation known as the CIPP (Context, Input, Process, Product)
approach. This approach essentially systematizes the way we evaluate
the different dimensions and aspects of curriculum development and the
sum total of student experiences in the educative process.
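The four CIPP stages can be pictured as an ordered checklist of guide questions that an evaluator works through. The following is only an illustrative Python sketch: the stage names follow Stufflebeam's model, but the sample questions and the helper function are our own invented examples, not part of the model itself.

```python
# Illustrative sketch of the CIPP stages as an ordered checklist.
# Stage names follow Stufflebeam; each sample question is one of the
# guide questions listed later in this chapter.

CIPP_STAGES = {
    "Context": "Is there a need for the course?",
    "Input": "Are the objectives 'SMART'?",
    "Process": "How well/actively do students participate?",
    "Product": "How do students use what they have learned?",
}

def build_checklist(stages):
    """Return (stage, question) pairs in order: Context, Input, Process, Product."""
    return [(stage, question) for stage, question in stages.items()]

for stage, question in build_checklist(CIPP_STAGES):
    print(f"{stage}: {question}")
```

In practice each stage carries many questions rather than one; the point of the sketch is only that CIPP walks the evaluator through the stages in a fixed order.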
I. Use of the Context, Input and Process
(CIPP Model)
A. Evaluation of Planning (Connected to Strategic Planning)

1. Context Evaluation (C): provides information for the development and
evaluation of mission, vision, values, goals and objectives, and priorities

a. Purposes
(1) define the characteristics of the environment
(2) determine general goals and specific objectives
(3) identify and diagnose the problems or barriers which might inhibit
achieving the goals and objectives

b. Tasks
(1) define the environment, both actual and desired
(2) define unmet needs and unused opportunities
(3) diagnose problems or barriers

c. Methods
(1) conceptual analysis to define limits of population to be served
(2) empirical studies to define unmet needs and unused opportunities
(3) judgment of experts and clients on barriers and problems
(4) judgment of experts and clients on desired goals and objectives
2. Input Evaluation (I): provides information for the development of program
designs through evaluation of data bases, internal and external stakeholders'
interests, and "WOTS UP?" (Weaknesses, Opportunities, Threats, and Strengths).

a. Purposes
(1) design a program (intervention) to meet the objectives
(2) determine the resources needed to deliver the program
(3) determine whether staff and available resources are adequate to implement
the program
b. Tasks
(1) develop a plan for a program through examination of various intervention
strategies
(a) examine strategies for achieving the plan
- time requirements
- funding and physical requirements
- acceptability to client groups
- potential to meet objectives
- potential barriers
(b) examine capabilities and resources of staff
- expertise to do various strategies
- funding and physical resources
- potential barriers
(2) develop a program implementation plan which considers time, resources,
and barriers to overcome
3. Process Evaluation (P): provides ongoing evaluation of the implementation of
major strategies through various tactical programs in order to accept, refine, or
correct the program design (e.g., evaluation of recruitment, orientation,
transition, and retention of first-year students).

a. Purpose
(1) provide decision makers with information necessary to determine if the
program needs to be accepted, amended, or terminated
b. Tasks
(1) identify discrepancies between actual implementation and intended design
(2) identify defects in the design or implementation plan
c. Methods
(1) a staff member serves as the evaluator
(2) this person monitors and keeps data on setting conditions and program
elements as they actually occurred
(3) this person gives feedback on discrepancies and defects to the decision
makers
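The first process-evaluation task, spotting discrepancies between the intended program design and what was actually implemented, can be illustrated with a minimal sketch. The program elements and counts below are invented examples, not a standard schema.

```python
# Illustrative sketch: flag program elements whose actual implementation
# differs from the intended design, for feedback to decision makers.
# The element names and values are hypothetical examples.

intended = {"orientation_sessions": 4, "mentoring_hours": 20, "follow_up_surveys": 2}
actual = {"orientation_sessions": 3, "mentoring_hours": 20, "follow_up_surveys": 1}

def find_discrepancies(intended, actual):
    """Return only the elements whose actual value differs from the design."""
    return {
        element: {"intended": intended[element], "actual": actual.get(element)}
        for element in intended
        if actual.get(element) != intended[element]
    }

report = find_discrepancies(intended, actual)
for element, values in report.items():
    print(f"{element}: intended {values['intended']}, actual {values['actual']}")
```

Elements that match the design are dropped from the report, so decision makers see only the gaps that may require the program to be amended.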
In this approach, the user is asked to go through a series of questions in
the context, input, process, and product stages. These questions are
reproduced below for convenience:
Context
• What is the relation of the course to other courses?
• Is the time adequate?
• What are the critical or important external factors?
• Should courses be integrated or separate?
• What are the links between the course and research/extension activities?
• Is there a need for the course?
• Is the course relevant to job needs?
Inputs
• What is the entering ability of students?
• What are the learning skills of students?
• What is the motivation of students?
• What are the living conditions of students?
• Are the aims suitable?
• Are the objectives derived from the aims?
• Are the objectives 'SMART'?
• Is the course content clearly defined?
• What knowledge, skills, and attitudes related to the subject do the teachers have?
Process
• What is the workload of students?
• How well/actively do students participate?
• Are there any problems related to teaching?
• Are there any problems related to learning?
• Is there effective two-way communication?
• Is knowledge only transferred to students, or do they use and apply it?
• Are there any problems which students face in using/applying/analyzing the knowledge and skills?
• Are the teaching and learning processes continuously evaluated?
• How is discipline maintained?
Product
• Is there one final exam at the end or several during the course?
• Is there any informal assessment?
• What is the quality of assessment?
• How do students use what they have learned?
• How was the overall experience for the teachers and for the students?
• What are the main 'lessons learned'?
• Has the teacher's reputation improved or been ruined as a result?


These guide questions are not answered by the teacher only or by a single
individual. Instead, there are many ways in which they can be answered.
Some of the more common methods are listed below:
• Discussion with class
• Informal conversation or observation
• Individual student interviews
• Evaluation forms
• Observation of teacher/trainer in class sessions by colleagues
• Video-tape of own teaching
• Organizational documents
• Participant contract
• Performance test
• Questionnaire
• Self-assessment
• Written test
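As a small illustration of one of these methods, evaluation forms, the ratings collected can be tallied and averaged per question. The question labels and the 1-5 rating scale below are assumed examples, not a prescribed instrument.

```python
# Illustrative sketch: averaging 1-5 ratings from student evaluation forms.
# Each form maps a guide-question label (hypothetical) to a rating.

forms = [
    {"workload_reasonable": 4, "participation_encouraged": 5},
    {"workload_reasonable": 3, "participation_encouraged": 4},
    {"workload_reasonable": 5, "participation_encouraged": 4},
]

def average_ratings(forms):
    """Compute the mean rating per question across all submitted forms."""
    totals = {}
    for form in forms:
        for question, rating in form.items():
            totals.setdefault(question, []).append(rating)
    return {question: sum(ratings) / len(ratings) for question, ratings in totals.items()}

print(average_ratings(forms))
```

Averages like these answer the process questions only in aggregate; they are usually read alongside the qualitative methods above (interviews, class discussion, peer observation).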
Summary
● Assessment is the process of gathering and analyzing specific
information as part of an evaluation.
● Competency evaluation is a means for teachers to determine the ability
of their students in ways other than the standardized test.
● Course evaluation is the process of evaluating the instruction given in a
course.
● Educational evaluation is evaluation that is conducted specifically in
an educational setting.
● Immanent evaluation is a mode of evaluation that Gilles Deleuze opposed
to value judgement.
● Performance evaluation is a term from the field of language testing. It
stands in contrast to competence evaluation.
● Program evaluation is essentially a set of philosophies and techniques
to determine if a program 'works'.
