What’s still wrong with rubrics: Focusing on the consistency of performance criteria across scale levels
Robin Tierney & Marielle Simon
University of Ottawa
Scoring rubrics are currently used by students and teachers in classrooms from kindergarten to college across North
America. They are popular because they can be created for or adapted to a variety of subjects and situations. Scoring
rubrics are especially useful in assessment for learning because they contain qualitative descriptions of performance
criteria that work well within the process of formative evaluation. In recent years, many educational researchers have
noted the instructional benefits of scoring rubrics (for example, Arter & McTighe, 2001; Goodrich Andrade, 2000).
Popham noted their potential as “instructional illuminators” in a 1997 article entitled What’s Wrong - and What’s Right
- with Rubrics, but he also cautioned that “many rubrics now available to educators are not instructionally beneficial”
(p.72). Unfortunately, many rubrics are still not instructionally useful because of inconsistencies in the descriptions of
performance criteria across their scale levels. The most accessible rubrics, particularly those available on the Internet, contain design flaws that affect not only their instructional usefulness but also the validity of their results. For scoring
rubrics to fulfill their educational ideal, they must first be designed or modified to reflect greater consistency in their
performance criteria descriptors.
This article examines the guidelines and principles in current educational literature that relate to performance criteria
in scoring rubrics. The focus is on the consistency of the language that is used across the scale levels to describe
performance criteria for learning and assessment. According to Stiggins (2001), “Our objective in devising sound
performance criteria is to describe levels of quality, not merely judge them” (p. 299). What is valued in a classroom, in
terms of performances or products, is communicated through descriptive language. As such, performance criteria
descriptors are a critical component of rubric design and merit thorough consideration. The purpose of this article is
twofold:
1. To contribute to the educational literature aimed at improving the design of classroom assessment
rubrics.
2. To assist rubric developers in creating or adapting scoring rubrics with consistent performance criteria
descriptors.
In the following sections, the components of a rubric will be identified and defined, existing principles for performance
criteria descriptors will be discussed, and consistency will be examined closely as a design requirement for rubrics.
Anatomy of a Rubric for Learning and Assessment
Scoring rubrics can be adapted or created for a variety of purposes, from large-scale or high-stakes assessment to
personal self-assessment, and each has its own design features. The most useful rubrics for promoting learning in the
classroom have been called instructional rubrics (Goodrich Andrade, 2000), analytic-trait rubrics (Arter & McTighe,
2001; Wiggins, 1998), and skill-focused rubrics (Popham, 1999). This article is specifically concerned with the type of classroom rubric that can be described as a descriptive graphic rating scale that uses generic traits as analytic performance criteria (see Table 1 for an example).
The performance criteria in a rubric identify the dimensions of the performance or product that is being taught and
assessed. The rubric in Table 1 contains generic performance criteria to assess the mapping skills of elementary
students. This rubric does not attempt to dichotomously measure specific geographic knowledge as being present/absent
or right/wrong. Instead, it emphasizes the development of valuable skills on a continuum. This particular rubric evolved
from the curriculum model used in Ontario, Canada, where provincial curriculum standards are generally referred to as
expectations. Mertler (2001) offers a template for the development of such rubrics.
Table 1. Generic rubric for assessing the mapping skills of elementary students

Instructions: For each performance criterion, circle or highlight the level that best describes the observed performance. To aid in this decision, refer to exemplars of student work or the task indicator list that is provided with the assessment task.

Criterion: The map includes the expected conventions (e.g. title, legend, cardinal directions) and geographic elements (e.g. countries, cities, rivers).
Attribute: Breadth
Level 1: The map contains few of the expected map conventions and geographic elements.
Level 2: The map contains some of the expected map conventions and geographic elements.
Level 3: The map contains most of the expected map conventions and geographic elements.
Level 4: The map contains all of the expected map conventions and geographic elements.

Criterion: The map conventions are used correctly and the geographic elements are placed accurately.
Attribute: Accuracy
Level 1: The expected map conventions and the geographic elements are seldom accurate.
Level 2: The expected map conventions and the geographic elements are sometimes accurate.
Level 3: The expected map conventions and the geographic elements are usually accurate.
Level 4: The expected map conventions and the geographic elements are always accurate.

(The full rubric also includes criteria for Relevancy and Clarity, which vary in terms of intensity.)
The performance criteria in this type of rubric are designed to represent broad learning targets, rather than features of
a particular task, and this increases the universality of the rubric’s application. The trade-off for this benefit is that the
rubric does not contain concrete or task-specific descriptions to guide interpretation. As Wiggins (1998) suggests, generic
rubrics should always be accompanied by exemplars of student work or task indicator lists. The variability of student
and rater interpretation can be reduced significantly when generic terms are clarified with task-specific exemplars or
indicators. For example, a descriptor such as moderately clear becomes more observable when it is accompanied by a list
of possible indicators. Using the mapping skills example, the clarity of a student’s product could be affected by the
legibility of the labels, the border style, the background color, or the choice of font. However, these product-specific
indicators should not be explicitly stated on the rubric itself, not only because they limit the application of the rubric,
but also because they can be easily confused with the targeted criteria (Wiggins, 1998).
The attribute, or underlying characteristic of each performance criterion, on the other hand, should be explicitly stated
within the rubric. This concept was illustrated in a rubric that Simon & Forgette-Giroux (2001) put forth for scoring
post-secondary academic skills. In Table 1, the attribute is highlighted in a separate column. Each criterion statement is
clearly articulated in the left-side column, and then modified four times to describe each level of the performance’s
attribute(s). The choice of words that describe the changing values of the attribute is another dimension that must be
dealt with in rubric design. Verbal qualifiers, such as few, some, most and all, indicate what type of scale is being used for
each performance criterion. Three measurement scales are commonly used: amount, frequency, and intensity (Aiken,
1996; Rohrmann, 2003). Table 1 includes an example of each: The attribute breadth varies in terms of amount or
quantity, accuracy varies in terms of frequency, and the last two, relevancy and clarity, vary in terms of intensity.
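To make the anatomy just described concrete, the sketch below (in Python, purely for illustration) models a single rubric row as a small data structure holding a criterion statement, an explicitly stated attribute, the type of measurement scale, and one qualitative descriptor per level. The class and field names are assumptions introduced here, not terminology from the article or from any existing rubric tool.

from dataclasses import dataclass

# Illustrative sketch only: names and structure are assumed, not prescribed by the article.
@dataclass
class RubricRow:
    criterion: str          # criterion statement from the left-hand column
    attribute: str          # underlying characteristic, e.g. "breadth" or "accuracy"
    scale_type: str         # "amount", "frequency", or "intensity"
    descriptors: list[str]  # one qualitative descriptor per scale level

# The Accuracy row of Table 1 expressed with this structure (a frequency scale).
accuracy_row = RubricRow(
    criterion=("The map conventions are used correctly and the geographic "
               "elements are placed accurately."),
    attribute="accuracy",
    scale_type="frequency",
    descriptors=[
        "The expected map conventions and the geographic elements are seldom accurate.",
        "The expected map conventions and the geographic elements are sometimes accurate.",
        "The expected map conventions and the geographic elements are usually accurate.",
        "The expected map conventions and the geographic elements are always accurate.",
    ],
)
print(accuracy_row.attribute, len(accuracy_row.descriptors))  # accuracy 4

Keeping the attribute and the scale type as explicit fields makes the design decision visible: only the verbal qualifier should change from one descriptor to the next.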
Existing Principles for Performance Criteria Descriptors in Scoring Rubrics
Principles or guidelines for rubric design abound in current educational literature. This study analyzed 21 documents
directly related to rubric design. Most of the principles reported in these documents specifically addressed the issue of
performance criteria while many focused on the quality of the descriptors. Most frequently mentioned is the clarity of
the descriptors, and the impact of clarity on the reliability of the interpretations made by both the students and the
raters (Arter & McTighe, 2001; Harper, O’Connor & Simpson, 1999; Moskal, 2003; Popham, 1999; Stiggins, 2001;
Wiggins, 2001). Several authors also stressed that the performance levels (or score points) should be clearly
differentiated through description (Moskal, 2003; Wiggins, 1998). Others noted that a balance between generalized
wording, which increases usability, and detailed description, which ensures greater reliability, must be achieved
(Popham, 1997; Simon & Forgette-Giroux, 2001; Wiggins, 1998). Less frequently mentioned, but nonetheless a desirable
quality of central concern, is the need for consistent wording to describe performance criteria across the levels of
achievement (Harper et. al., 1999; Simon & Forgette-Giroux, 2003; Wiggins; 1998). This, in effect, is the heart of the
discussion.
Consistency of the Attributes in Performance Criteria Descriptors
Because consistency has not been discussed extensively in relation to rubric design, it is not widely
understood by rubric developers as a technical requirement. The variety of terms that have been used to date in the
literature on performance criteria may also have confused matters. One notion of consistency suggests that “parallel”
language should be used (Harper et al., 1999; Wiggins, 1998). Parallel language is helpful when the attribute is clear,
but this is regrettably not always the case. The performance criteria attributes in many of the rubrics that are found on
the Internet are implied rather than explicitly stated, and their nature shifts from level to level. In a list of technical requirements, Wiggins addresses this problem, describing rubrics with consistent descriptor attributes as coherent:
Although the descriptor for each scale point is different from the ones before and after, the changes concern
the variance of quality for the (fixed) criteria, not language that explicitly or implicitly introduces new
criteria or shifts the importance of the various criteria. (1998, p.185)
Simon & Forgette-Giroux (2003) also discuss consistency in performance criteria. They suggest that the descriptors for
each level should deal with the same performance criteria and attributes in order for the progressive scale to be
continuous and consistent from one level to the other.
Although the language that has been used in educational literature to discuss the consistency of performance criteria
varies somewhat, the idea is essentially the same. Consistency in performance criteria can basically be viewed as the
reference to the same attributes in the descriptors across the levels of achievement. In Table 1, the attribute, or
underlying characteristic, of each criterion is consistently present across the scale, and it is the degree of the attribute
that changes (e.g. level 4 reflects more accuracy than level 1). In another example, a rubric used in an intermediate
history class might contain a performance criterion such as: student demonstrates an accurate and thorough
understanding of the causes of the rebellion. The attributes of this criterion would be the accuracy and the depth of the
student’s understanding. In this case, accuracy and depth should be explicitly stated in the criterion statement, and
they should also be present in each of the qualitative descriptors for that criterion across the levels of achievement.
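As a rough illustration of this requirement (an assumed check, not a procedure proposed by the authors), the hypothetical Python function below verifies that every attribute term named in a criterion statement reappears in each level descriptor and flags any level that omits one.

def check_attribute_consistency(attributes, level_descriptors):
    """Return {level_number: [missing attribute terms]} for any inconsistent level.

    attributes: attribute terms stated in the criterion, e.g. ["accurate", "depth"]
    level_descriptors: one qualitative descriptor per scale level, lowest to highest
    """
    problems = {}
    for level, descriptor in enumerate(level_descriptors, start=1):
        missing = [term for term in attributes if term.lower() not in descriptor.lower()]
        if missing:
            problems[level] = missing
    return problems

# Hypothetical descriptors for the history-class criterion discussed above,
# written so that accuracy and depth are addressed at every level.
descriptors = [
    "The explanation of the causes of the rebellion is seldom accurate and lacks depth.",
    "The explanation of the causes of the rebellion is sometimes accurate and shows some depth.",
    "The explanation of the causes of the rebellion is usually accurate and shows considerable depth.",
    "The explanation of the causes of the rebellion is consistently accurate and thorough in depth.",
]
# Stem forms ("accurate", "depth") are used because this is a plain substring match.
print(check_attribute_consistency(["accurate", "depth"], descriptors))  # {} means consistent

A rubric row whose attributes shift from level to level would fail this kind of check at one or more levels.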
Improving the Consistency of Performance Criteria Descriptors
Describing performance criteria can be a challenging aspect of rubric construction, which is in itself a task that many
teachers find time-consuming. As an alternative to developing rubrics from scratch, teachers may adapt ready-made
versions for use in their classrooms. A quick investigation using any popular search engine reveals that there are
numerous sources for an endless variety of rubrics. When adapting a scoring rubric, it is important to realize that the
original purpose of the assessment may have resulted in design features that are not suitable for the adapted use. Many
of the rubrics that are accessible online were created by teachers for specific tasks, and others were originally designed
as holistic rubrics for large-scale assessment, where the goal is to create an overall portrait of the performance. The
latter are not necessarily intended to describe a continuum of learning as it is assessed in classrooms. The following
examples were created to illustrate how some of the consistency problems found in accessible rubrics can be corrected for
classroom use. In both examples, the problems are highlighted in the first row, and the modified versions are presented
in the following rows (see Tables 2 and 3).
Example One: Basic Consistency
Many ready-made rubrics have basic consistency problems, meaning that the attribute or the performance criterion
itself changes from level to level. Table 2 presents a task-specific rubric for assessing a science journal. The product, a
science journal, is listed as if it is a performance criterion. This provides very little guidance for students who are
learning to write a science journal. The attributes are implicit, and they change from level to level. At the Novice level,
the descriptors stress accuracy of spelling, organization and breadth. Organization is dropped at the Apprentice level,
but breadth and accuracy of spelling remain. At the Master level, only breadth remains of the original attributes, but
clarity is added. And, finally, at the Expert level, neatness is further added, along with clarity and a vague requirement
for creativity. In the modified version, an effort was made to stay true to the implied intent of the original criteria. The
changes involve stating the performance criteria and the attributes clearly, as well as describing the qualitative degrees
of performance more consistently from level to level. The modifications make the task, criteria, and attributes clearer for
students, and they broaden the possibilities for the rubric’s use. Accompanied by exemplars of student work or product-
specific indicators, this rubric could be used by teachers and students to assess journal writing in any content-area class.
It could also be used to assess the same skills in either a formative or a summative context, with the instructions adjusted accordingly.
The corrections for this example deal specifically with the performance criteria. To complete the rubric, a title, a
statement of purpose, and instructions for using the rubric should also be added.
Table 2. Rubric for assessing a science journal (problem and suggested correction)

Problem Criterion: Science Journal
Attribute: (not stated)
Novice: Writing is messy and entries contain spelling errors. Pages are out of order or missing.
Apprentice: Entries are incomplete. There may be some spelling or grammar errors.
Master: Entries contain most of the required elements and are clearly written.
Expert: Entries are creatively written. Procedures and results are clearly explained. Journal is well organized, presented in a duotang.
Suggested Correction
Criterion: The required elements are present for each journal entry (e.g. Lab Summary, Materials, Procedure, Results, Conclusion).
Attribute: Breadth
Level 1: Few of the required elements are present in each journal entry.
Level 2: Some of the required elements are present in each journal entry.
Level 3: Most of the required elements are present in each journal entry.
Level 4: All the required elements are present in each journal entry.
Criterion: The entries are clearly written (e.g. style, grammar enhance understanding).
Attribute: Clarity
Level 1: Journal entries are slightly clear.
Level 2: Journal entries are moderately clear.
Level 3: Journal entries are mainly clear.
Level 4: Journal entries are extremely clear.

Criterion: The journal is organized (e.g. visible titles, ordered pages, etc.).
Attribute: Organization
Level 1: The journal is slightly organized.
Level 2: The journal is moderately organized.
Level 3: The journal is mainly organized.
Level 4: The journal is extremely organized.
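The corrected rows above vary only the verbal qualifier from level to level while the criterion and attribute stay fixed. The following sketch (a hypothetical drafting aid, not something put forward in the article) shows how such descriptors can be generated mechanically so that only the degree of the attribute changes across the scale.

# Hypothetical helpers for drafting level descriptors that differ only in their qualifier.
AMOUNT_QUALIFIERS = ["Few", "Some", "Most", "All"]                        # amount scale
INTENSITY_QUALIFIERS = ["slightly", "moderately", "mainly", "extremely"]  # intensity scale

def amount_descriptors(stem):
    """Prefix each amount qualifier to a fixed stem, e.g. the Breadth row above."""
    return [f"{qualifier} {stem}" for qualifier in AMOUNT_QUALIFIERS]

def intensity_descriptors(subject, attribute_adjective):
    """Insert each intensity qualifier into a fixed frame, e.g. the Organization row above."""
    return [f"{subject} is {qualifier} {attribute_adjective}."
            for qualifier in INTENSITY_QUALIFIERS]

print(amount_descriptors("of the required elements are present in each journal entry."))
print(intensity_descriptors("The journal", "organized"))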
Table 3. Silent reading rubric (problem and suggested correction)

Problem Criterion: Silent Reading
Attribute: (not stated)
Level 1: Off task and disruptive during sustained silent reading period.
Level 2: Has difficulty choosing books for sustained silent reading.
Level 3: Reads independently during sustained silent reading.
Level 4: Chooses books with enthusiasm and reads independently during sustained silent reading.
Suggested Correction:
1. If reading ability is the target, rethink the criterion to ensure that the attribute
is meaningful.
2. If learning behaviors are being measured, and autonomy and attention are the
desired attributes, reword the descriptors as shown below.
Descriptors: Scoring rubrics; rating scales; performance criteria; consistency; classroom assessment; assessment for learning; student evaluation.
Citation: Tierney, Robin & Marielle Simon (2004). What's still wrong with rubrics: focusing on the consistency of performance criteria across
scale levels. Practical Assessment, Research & Evaluation, 9(2). Available online: http://PAREonline.net/getvn.asp?v=9&n=2.