
ASL 2 Topic 5 Product-Oriented Performance-Based Assessment

This document discusses product-oriented performance-based assessment. It defines student performances as targeted tasks that lead to a tangible product or learning outcome. Products can demonstrate a variety of skills. Performance-based assessments evaluate student proficiency on a task based on specific criteria. Rubrics are used to assess student performance levels, such as novice, skilled, and expert. Teachers should design tasks at an appropriate complexity level that encourage creativity and goal achievement. Scoring rubrics provide objective guidelines for evaluating student work based on predefined criteria.



ASL 2-Topic 5

PRODUCT-ORIENTED PERFORMANCE-BASED ASSESSMENT

The role of assessment in teaching happens to be a hot issue in education today. This has
led to an increasing interest in “performance-based education.” Performance-based education
poses a challenge for teachers to design instruction that is task-oriented. The trend is based on the
premise that learning needs to be connected to the lives of the students through relevant tasks that
focus on students’ ability to use their knowledge and skills in meaningful ways. In this case,
performance-based tasks require performance-based assessments in which the actual student
performance is assessed through a product, such as a completed project or work that
demonstrates levels of task achievement. At times, performance-based assessment has been
used interchangeably with “authentic assessment” and “alternative assessment.” In all cases,
performance-based assessment has led to the use of a variety of alternative ways of evaluating
student progress (journals, checklists, portfolios, projects, rubrics, etc.) as compared to more
traditional methods of measurement (paper-and-pencil testing).

5.1 Product-Oriented Learning Competencies

Student performances can be defined as targeted tasks that lead to a product or overall
learning outcome. Products can include a wide range of student works that target specific skills.
Some examples include communication skills such as those demonstrated in reading, writing,
speaking, and listening, or psychomotor skills requiring physical abilities to perform a given task.
Target tasks can also include behavior expectations targeting complex tasks that students are
expected to achieve. Using rubrics is one way that teachers can evaluate or assess student
performance or proficiency in any given task as it relates to a final product or learning outcome.
Thus, rubrics can provide valuable information about the degree to which a student has achieved a
defined learning outcome based on specific criteria that define the framework for evaluation.

The learning competencies associated with products or outputs are linked with an
assessment of the level of “expertise” manifested by the product. Thus, product-oriented learning
competencies target at least three (3) levels: novice or beginner’s level, skilled level, and expert
level. Such levels correspond to Bloom’s taxonomy in the cognitive domain in that they represent
progressively higher levels of complexity in the thinking processes.

There are other ways to state product or project-oriented learning competencies. For
instance, we can define learning competencies for products or outputs in the following way:

 Level 1: Does the finished product or project illustrate the minimum expected parts
or functions? (novice or beginner level)
 Level 2: Does the finished product or project contain additional parts and functions
on top of the minimum requirements which tend to enhance the final output?
(skilled level)
 Level 3: Does the finished product contain the basic minimum parts and functions,
have additional features on top of the minimum, and is it aesthetically pleasing?
(expert level)

Example: the desired product is a representation of a cubic prism made out of cardboard in
an elementary geometry class.

Learning Competencies: the final product submitted by the students must:

1. possess the correct dimensions (5” x 5” x 5”) – (minimum specifications)
2. be sturdy, made of durable cardboard, and properly fastened together – (skilled
specification)
3. be pleasing to the observer, preferably colored for aesthetic purposes – (expert level)

Example: the product desired is a scrapbook illustrating the historical event called EDSA I
People Power.

Learning Competencies: the scrapbook presented by the students must:

1. contain pictures, newspaper clippings and other illustrations of the main characters of
EDSA I People Power, namely: Corazon Aquino, Fidel V. Ramos, Juan Ponce Enrile,
Ferdinand E. Marcos, and Cardinal Sin – (minimum specifications)
2. contain remarks and captions for the illustrations, made by the student himself, on the
roles played by the characters of EDSA I People Power – (skilled level)
3. be presentable, complete, informative and pleasing to the reader of the scrapbook –
(expert level)

Performance-based assessment for products and projects can also be used for assessing
outputs of short-term tasks such as the one illustrated below for outputs in a typing class:

Example: the desired output is a typewritten document produced in a typing class.

Learning Competencies: the final typing outputs of the students must:

1. possess no more than five (5) errors in spelling – (minimum specifications)
2. possess no more than five (5) errors in spelling while observing proper format based on
the document to be typewritten – (skilled level)
3. possess no more than five (5) errors in spelling, have the proper format, and be readable
and presentable – (expert level)

Notice that in all of the above examples, product-oriented performance-based learning
competencies are evidence-based. The teacher needs concrete evidence that the student has
achieved a certain level of competence based on submitted products and projects.
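The evidence-based leveling above can be sketched as a small decision rule. The following is a minimal sketch for the typing-class example; the function name and the boolean inputs for format and presentability are illustrative assumptions, not part of the original competencies.

```python
def assess_typing_output(spelling_errors: int, proper_format: bool, presentable: bool) -> str:
    """Assign a product-oriented competency level to a typing output.

    Hypothetical helper: maps the three learning competencies in the text
    (minimum, skilled, and expert specifications) onto discrete levels.
    """
    if spelling_errors > 5:
        return "below minimum"   # fails the minimum specification
    if proper_format and presentable:
        return "expert"          # meets all three specifications
    if proper_format:
        return "skilled"         # minimum specification plus proper format
    return "novice"              # meets only the minimum specification
```

For instance, an output with three spelling errors, the proper format, and a presentable appearance would be judged expert level under this sketch.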

5.2 Task Designing

How should a teacher design a task for product-oriented performance-based
assessment? The design of the task in this context depends on what the teacher desires to
observe as outputs of the students. The concepts that may be associated with task designing include:

a. Complexity. The level of complexity of the project needs to be within the range of ability of the
students. Projects that are too simple tend to be uninteresting for the students, while projects
that are too complicated will most likely frustrate them.
b. Appeal. The project or activity must be appealing to the students. It should be interesting
enough so that students are encouraged to pursue the task to completion. It should lead to
self-discovery of information by the students.
c. Creativity. The project needs to encourage students to exercise creativity and divergent
thinking. Given the same set of materials and project inputs, how does one best present the
project? It should lead the students into exploring the various possible ways of presenting the
final output.
d. Goal-Based. Finally, the teacher must bear in mind that the project is produced in order to
attain a learning objective. Thus, projects are assigned to students not just for the sake of
producing something but for the purpose of reinforcing learning.

Example: paper folding is a traditional Japanese art. However, it can be used as an activity
to teach the concept of plane and solid figures in geometry. Provide the students with a given
number of colored papers and ask them to construct as many plane and solid figures from these
papers as they can without cutting them (by paper folding only).

5.3 Scoring Rubrics

Scoring rubrics are descriptive scoring schemes that are developed by teachers or other
evaluators to guide the analysis of the products or processes of students’ efforts (Brookhart,
1999). Scoring rubrics are typically employed when a judgment of quality is required and may be
used to evaluate a broad range of subjects and activities. For instance, scoring rubrics can be most
useful in grading essays or in evaluating projects such as scrapbooks. Judgments concerning the
quality of a given writing sample may vary depending upon the criteria established by the individual
evaluator. One evaluator may weigh the evaluation heavily upon the linguistic structure,
while another evaluator may be more interested in the persuasiveness of the argument. A
high-quality essay is likely to have a combination of these and other factors. By developing a
pre-defined scheme for the evaluation process, the subjectivity involved in evaluating an essay is
reduced and the process becomes more objective.
5.3.1 Criteria Setting. The criteria for a scoring rubric are statements which identify
“what really counts” in the final output. The following are the most often used major
criteria for product assessment:
 quality
 creativity
 comprehensiveness
 accuracy
 aesthetics

From the major criteria, the next task is to identify substatements that would make the major
criteria more focused and objective. For instance, if we were scoring an essay on “Three Hundred
Years of Spanish Rule in the Philippines,” the major criterion “Quality” may possess the following
substatements:

 Interrelates the chronological events in an interesting manner
 Identifies the key players in each period of the Spanish rule and the roles that they
played
 Succeeds in relating the history of Philippine Spanish rule (rated as Professional,
Not quite professional, and Novice)

The example below displays a scoring rubric that was developed to aid in the evaluation of
essays written by college students in the classroom (based loosely on Leydens & Thompson,
1997). The scoring rubric in this particular example exemplifies what is called a “holistic scoring
rubric”. It will be noted that each score category describes the characteristics of a response that
would receive the respective score. Describing the characteristics of responses within each score
category increases the likelihood that two independent evaluators would assign the same score to
a given response. In effect, this increases the objectivity of the assessment procedure using
rubrics. In the language of test and measurement, we are actually increasing the “inter-rater
reliability”.
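The idea that well-described score categories lead two independent evaluators to the same score can be quantified as a simple agreement proportion. This is a minimal sketch; the function name and the sample scores are illustrative assumptions.

```python
def exact_agreement(scores_a: list, scores_b: list) -> float:
    """Proportion of responses to which two raters assigned the same score."""
    if len(scores_a) != len(scores_b):
        raise ValueError("Both raters must score the same set of responses.")
    matches = sum(a == b for a, b in zip(scores_a, scores_b))
    return matches / len(scores_a)

# Two hypothetical raters scoring five essays on a 4-level holistic rubric
rater_1 = [4, 3, 2, 4, 1]
rater_2 = [4, 3, 3, 4, 1]
agreement = exact_agreement(rater_1, rater_2)  # 4 of the 5 scores match
```

A higher proportion indicates that the category descriptions are working as intended; systematic disagreement signals that the descriptions need clarification.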

Example of a scoring rubric designed to evaluate college writing samples.


 Major Criterion: Meets Expectations for a First Draft of a Professional Report
Substatements:
 The document can be easily followed. A combination of the following is apparent
in the document:
1. Effective transitions are used throughout.
2. A professional format is used.
3. The graphics are descriptive and clearly support the document’s purpose.
 The document is clear and concise, and appropriate grammar is used throughout.
Adequate
 The document can be easily followed. A combination of the following is apparent
in the document:
1. Basic transitions are used.
2. A structured format is used.
3. Some supporting graphics are provided, but are not clearly explained.
 The document contains minimal distractions that appear in a combination of the
following forms:
1. Flow of thought
2. Graphical presentations
3. Grammar/mechanics
Needs Improvement
 Organization of the document is difficult to follow due to a combination of the
following:
1. Inadequate transitions
2. Rambling format
3. Insufficient or irrelevant information
4. Ambiguous graphics
 The document contains numerous distractions that appear in a combination of the
following forms:
1. Flow of thought
2. Graphical presentations
3. Grammar/mechanics
Inadequate
 There appears to be no organization of the document’s contents.
 Sentences are difficult to read and understand.

When are scoring rubrics an appropriate evaluation technique?

Grading essays is just one example of performances that may be evaluated using scoring
rubrics. There are many other instances in which scoring rubrics may be used successfully: to
evaluate group activities, extended projects and oral presentations (e.g., Chicago Public Schools,
1999; Danielson, 1997a; 1997b; Schrock, 2000; Moskal, 2000). Also, rubric scoring cuts across
disciplines and subject matter, for it is equally appropriate to the English, Mathematics and
Science classrooms (e.g., Chicago Public Schools, 1999; State of Colorado, 1999; Danielson,
1997a; 1997b; Danielson & Marquez, 1998; Schrock, 2000). Where and when a scoring rubric is
used does not depend on the grade level or subject, but rather on the purpose of the assessment.

Other Methods

Authentic assessment schemes apart from scoring rubrics exist in the arsenal of the
teacher. For example, checklists may be used rather than scoring rubrics in the evaluation of
essays. Checklists enumerate a set of desirable characteristics for a certain product, and the
teacher marks those characteristics which are actually observed. As such, checklists are an
appropriate choice for evaluation when the information that is sought is limited to the determination
of whether specific criteria have been met. On the other hand, scoring rubrics are based on
descriptive scales and support the evaluation of the extent to which criteria have been met.
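The distinction above can be sketched in a few lines: a checklist only records whether each criterion was met, whereas a rubric would attach a descriptive scale to each one. The checklist items below, drawn loosely from the scrapbook example, are illustrative assumptions.

```python
# Hypothetical checklist for the EDSA I scrapbook product
checklist = {
    "contains pictures of the main characters": True,
    "contains newspaper clippings": True,
    "captions written by the student": False,
    "presentable and complete": True,
}

def checklist_summary(checklist: dict) -> tuple:
    """Return (criteria met, total criteria); a checklist records only yes/no."""
    met = sum(1 for observed in checklist.values() if observed)
    return met, len(checklist)

met, total = checklist_summary(checklist)  # 3 of 4 criteria observed
```

Note that the checklist cannot say how well the captions were written, only that they were absent; capturing degree requires the descriptive scale of a rubric.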

The ultimate consideration in using a scoring rubric for assessment is really the “purpose
of the assessment.” Scoring rubrics provide at least two benefits in the evaluation process. First,
they support the examination of the extent to which the specified criteria have been reached.
Second, they provide feedback to students concerning how to improve their performances. If these
benefits are consistent with the purpose of the assessment, then a scoring rubric is likely to be an
appropriate evaluation technique.

General Task versus Specific Task

In the development of scoring rubrics, it is well to bear in mind that a rubric can be used to
assess or evaluate either a specific task or a general, broad category of tasks. For instance,
suppose that we are interested in assessing the student’s oral communication skills. Then, a
general scoring rubric may be developed and used to evaluate each of the oral presentations given
by that student. After each oral presentation, the general scoring rubric is shown to the students,
which then allows them to improve on their previous performances. Scoring rubrics have the
advantage of providing a mechanism for immediate feedback.

In contrast, suppose now that the main purpose of the oral presentation is to determine the
students’ knowledge of the facts surrounding the EDSA I revolution; then perhaps a specific
scoring rubric would be necessary. A general scoring rubric for evaluating a sequence of
presentations may not be adequate since, in general, events such as EDSA I (and EDSA II) differ
in the surrounding factors (what caused the revolutions) and the ultimate outcomes of these
events. Thus, to evaluate the students’ knowledge of these events, it will be necessary to develop
a specific scoring rubric guide for each presentation.

Process of Developing Scoring Rubrics

The development of scoring rubrics goes through a process. The first step in the process
entails the identification of the qualities and attributes that the teacher wishes to observe in the
students’ outputs that would demonstrate their level of proficiency (Brookhart, 1999). These
qualities and attributes form the top level of the scoring criteria for the rubric. Once done, a
decision has to be made whether a holistic or an analytic rubric would be more appropriate. In an
analytic scoring rubric, each criterion is considered one by one and the descriptions of the scoring
levels are made separately. This will then result in separate descriptive scoring schemes for each
criterion or scoring factor. On the other hand, for holistic scoring rubrics, the collection of
criteria is considered throughout the construction of each level of the scoring rubric and the result is
a single descriptive scoring scheme.
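The structural difference between the two rubric types can be sketched as data. In this minimal sketch the criteria, level numbers, and descriptors are illustrative assumptions: an analytic rubric keeps one descriptive scale per criterion, while a holistic rubric keeps a single scale for the product as a whole.

```python
# Analytic rubric: one descriptive scale per criterion (hypothetical descriptors)
analytic_rubric = {
    "quality": {3: "interrelates events in an interesting manner",
                2: "events related, but plainly",
                1: "events listed without connection"},
    "accuracy": {3: "all facts correct",
                 2: "minor factual errors",
                 1: "major factual errors"},
}

# Holistic rubric: a single scale describing the whole product
holistic_rubric = {3: "expert: complete, accurate, and pleasing",
                   2: "skilled: complete, with minor lapses",
                   1: "novice: minimum parts only"}

def analytic_total(rubric: dict, ratings: dict) -> int:
    """Sum the per-criterion scores assigned under an analytic rubric."""
    return sum(ratings[criterion] for criterion in rubric)

score = analytic_total(analytic_rubric, {"quality": 3, "accuracy": 2})  # 5 of 6
```

Under the holistic rubric, by contrast, the evaluator would simply pick the single level whose description best matches the product.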

The next step after defining the criteria for the top level of performance is the identification
and definition of the criteria for the lowest level of performance. In other words, the teacher is
asked to determine the type of performance that would constitute the worst performance, or a
performance which would indicate lack of understanding of the concepts being measured. The
underlying reason for this step is for the teacher to capture the criteria that would suit a middle-level
performance for the concept being measured. In particular, therefore, the approach suggested
would result in at least three levels of performance.

It is of course possible to make greater and greater distinctions between performances.
For instance, we can compare the middle-level performance expectations with the best
performance criteria and come up with above-average performance criteria, or compare the
middle-level performance expectations with the worst level of performance to come up with slightly
below-average performance criteria, and so on. This comparison process can be used until the
desired number of score levels is reached or until no further distinctions can be made. If meaningful
distinctions between the score categories cannot be made, then additional score categories should
not be created (Brookhart, 1999). It is better to have a few meaningful score categories than to
have many score categories that are difficult or impossible to distinguish.

A note of caution: it is suggested that each score category should be defined using
descriptors of the work rather than value judgments about the work (Brookhart, 1999). For
example, “Student’s sentences contain no errors in subject-verb agreement” is preferable over
“Student’s sentences are good.” The phrase “are good” requires the evaluator to make a judgment,
whereas the phrase “no errors” is quantifiable. Finally, we can test whether our scoring rubric is
“reliable” by asking two or more teachers to score the same set of projects or outputs and
correlating their individual assessments. High correlations between the raters imply high inter-rater
reliability. If the scores assigned by the teachers differ greatly, then that would suggest a way to
refine the scoring rubric we have developed. It may be necessary to clarify the scoring rubric so
that it would mean the same thing to different scorers.
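The reliability check described above can be sketched directly: collect each teacher's scores for the same set of outputs and compute the correlation between them. The sketch below implements the standard Pearson correlation coefficient by hand; the rater score lists are illustrative assumptions.

```python
from math import sqrt

def pearson(xs: list, ys: list) -> float:
    """Pearson correlation between two raters' scores for the same outputs."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Two hypothetical teachers scoring the same five projects
teacher_a = [4, 3, 5, 2, 4]
teacher_b = [4, 3, 4, 2, 5]
r = pearson(teacher_a, teacher_b)  # close agreement yields r near 1
```

A value of r near 1 suggests the rubric means the same thing to both scorers; a low value flags descriptors that need clarification before the rubric is reused.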
