
Name : Yun Angraeni Saputri

Reg. Numb : G2Q118001

The advantages and disadvantages of different types of assessment.

ALTERNATIVES TO SELECTED-RESPONSE ASSESSMENT

Alternatively, an assessment can require a student to develop his or her own answer in response to a
stimulus, or prompt. An assessment of this form, such as one that requires an essay or a solution to a
mathematical problem, is called a constructed-response assessment. Neither the prompts nor the
responses need be written, however. Responses commonly include any form whose quality can be
judged accurately, from live performances to accumulated work products. For this reason, constructed-
response assessments are also called performance assessments. In our study, we also used the less
technical term alternative assessment as a synonym for both of these terms.

The classification system is based primarily on format—how the questions are presented and
how responses are produced. However, selected-response and constructed-response assessments differ in
many other ways, including the complexity of their development, administration, and scoring; the time
demands they place on students and teachers; their cost; and the cognitive demands they make on
students.

Written Assessments
Written assessments are activities in which the student selects or composes a response to a
prompt. In most cases, the prompt consists of printed materials (a brief question, a collection of historical
documents, graphic or tabular material, or a combination of these).
Performance Tasks
Performance tasks are hands-on activities that require students to demonstrate their ability to
perform certain actions. This category of assessment covers an extremely wide range of behaviors,
including designing products or experiments, gathering information, tabulating and analyzing data,
interpreting results, and preparing reports or presentations.
Senior Projects
Senior projects are distinct from written assessments and performance tasks because they are
cumulative, i.e., they reflect work done over an extended period rather than in response to a particular
prompt. The term senior project is used here to identify a particular type of culminating event in which
students draw upon the skills they have developed over time. It has three components: a research paper, a
product or activity, and an oral presentation, all associated with a single career-related theme or topic.
Portfolios
Like a senior project, a portfolio is a cumulative assessment that represents a student’s work and
documents his or her performance. However, whereas a senior project focuses on a single theme, a
portfolio may contain any of the forms of assessments described above plus additional materials such as
work samples, official records, and student-written information.
Some portfolios are designed to represent the student's best work, others are designed to show
how the student's work has evolved over time, and still others are comprehensive repositories for all the
student's work.
Portfolios present major scoring problems because each student includes different pieces. This
variation makes it difficult to develop scoring criteria that can be applied consistently from one piece to
the next and from one portfolio to the next. States that have begun to use portfolios on a large scale have
had difficulty achieving acceptable quality in their scoring (Stecher and Herman, 1997), but they are
making progress in this direction. One approach is to set guidelines for the contents of the portfolios so
that they all contain similar components.

COMPARING SELECTED-RESPONSE AND ALTERNATIVE ASSESSMENTS


For decades, selected-response tests (multiple-choice, matching, and true-false) have been the
preferred technique for measuring student achievement, particularly in large-scale testing programs. In
one form or another, selected-response measures have been used on a large scale for seventy-five years.
However, there are limitations to using multiple-choice and other selected-response measures. First, these
traditional forms of assessment may not measure certain kinds of knowledge and skills effectively. For
example, it is difficult to measure writing ability with a multiple-choice test. Similarly, a teacher using
cooperative learning arrangements in a classroom may find that selected-response measures cannot
address many of the learning outcomes that are part of the unit, including teamwork, strategic planning,
and oral communication skills. In these cases, multiple-choice tests can only provide indirect measures of
the desired skills or abilities (e.g., knowledge of subject-verb agreement, capitalization, and punctuation,
and the ability to recognize errors in text may serve as surrogates for a direct writing task). Second, when
used in high-stakes assessment programs, multiple-choice tests can have adverse effects on curriculum
and instruction. Many standardized multiple-choice tests are designed to provide information about
specific academic skills and knowledge. When teachers focus on raising test scores, they may emphasize
drill, practice, and memorization without regard to the students' ability to transfer or integrate this
knowledge. Instruction may focus on narrow content and skills instead of broader areas, such as critical
thinking and problem solving (Miller and Legg, 1993).
Alternative assessments, however, are not free of these problems; in fact, they may have many of the
same flaws cited for multiple-choice tests. Critics argue that poorly designed alternative assessments can
also be very narrow, so that teaching to them may also be undesirable. For example, mathematics
portfolios may overemphasize "writing about mathematics" at
the expense of learning mathematical procedures. In addition, alternative assessments have practical
problems, including high cost, administrative complexity, low technical quality, and questionable legal
defensibility (Mehrens, 1992).
