Lesson 7
ALTERNATIVE METHODS
LESSON 7: Organization and Analysis of Assessment Data from Alternative Methods.
Introduction:
Assessment data provide a means to look at student performance to offer evidence about
student learning in the curriculum, provide information about program strengths and
weaknesses, and guide decision-making. Analysing the data gives meaning to the
information collected and is essential to appropriately utilize and communicate the
assessment results.
There are many tools to choose from for assessing student learning. Tools include (but
are not limited to) checklists, rating scales, rubrics, portfolios, exams, and peer
evaluations. An assessment tool must clearly indicate the criteria against which the
learning will be assessed, so that it informs educators and learners alike.
How do we quantify results from rubrics?
Before I get into the specifics of the scores themselves, I’m going to describe all the
things that happen before those points go into the grade book. I’ll do this with an
example scenario: Suppose I want my eighth-grade students to write a narrative account
of a true story. This will not be a personal narrative, but rather a journalistic piece that
illustrates some larger concept, such as the story of one student’s chaotic after-school
routine to illustrate the problem of some kids having too many activities and homework
after school.
Rubrics
A rubric is a chart or matrix that includes indicators describing different levels of
achievement for the major components or ‘elements’ of a performance. A typical rubric
contains a scale with a range of possible points for assessing work. Usually, high
numbers are associated with strong student performance and low numbers with poor
performance. Rubrics also use descriptors to assess student mastery and performance
levels.
To start with, I have to get clear on what the final product should look like. Although I
have my own opinions of what makes a well-written story, I need to put that into words
so my students know what I’m looking for. Ideally, these criteria should be
developed with my students. Any project will be more effective if students are part of the
conversation from the beginning; I would ask them what makes for a good story, what
kind of criteria should be used to judge its quality, and so on. To generate ideas for this
discussion, we would first read a few examples from magazines and websites of the type
of writing I want them to produce, and we’d figure out what qualities make these stories
work. Eventually we’d shape these ideas into a list of attributes for the rubric. (Full
disclosure: This is an ideal scenario. I often skipped the step of involving students to save
time, but that was ultimately not the best decision.)
I would also consult my standards and curriculum materials, to make sure I wasn’t
missing anything relevant and that the language in my rubric was aligned with those
standards.
If you have been working with single-point rubrics, you know that the left-hand column
is reserved for indicating how students need to improve. The right-hand column has a
different title than what I have used in the past. In earlier versions I titled this column
“Exceeds Expectations,” providing space to tell students how they exceeded the
standards. I have adapted it here to “Above and Beyond” to make it more open-ended. It
can be a place to describe where students have gone beyond the expectations, or it could
be a place where the teacher or the student could suggest ways the work could reach even
further, a place to set “stretch goals” appropriate to that student’s readiness and the task
at hand.
Once my criteria have been defined, and if I will ultimately be giving points for this
assignment, I need to decide how to divide those points across each category. Assuming
a total of 100 points for this assignment, I would weight certain components more heavily
than others. Because my main goal is for students to write a robust, well-developed story,
I would place more value on the top two categories—structure and idea development.
This is an area where subjectivity can take over, and where rubrics can really vary from
one teacher to another. So again, keep in mind that this is what it looks like for me.
For a 100-point assignment, I might distribute points as follows, adding them right into
the rubric with a space for inserting the student’s score when the task has been graded:
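As a sketch of that distribution and totaling, the category names and weights below are my own illustrative assumptions, not a published rubric:

```python
# Hypothetical point allocation for a 100-point narrative assignment.
# Category names and weights are illustrative only.
weights = {
    "structure": 30,
    "idea development": 30,
    "language and style": 20,
    "conventions": 10,
    "process": 10,
}

assert sum(weights.values()) == 100  # the allocation must cover the full assignment

# After grading, each category holds the points the student earned.
earned = {
    "structure": 27,
    "idea development": 25,
    "language and style": 18,
    "conventions": 9,
    "process": 10,
}

total = sum(earned.values())
print(total)  # the score entered into the grade book
```

The heavier weights on structure and idea development reflect the priorities described above; a different teacher would likely distribute the points differently.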
This part is crucial. Even if students are not included in the development of the rubric
itself, it’s absolutely vital to let them study that rubric before they ever complete the
assignment. The rubric loses most of its value if students aren’t aware of it until the work
is already done, so let them see it ahead of time. I typically provide students with a
printed copy of the rubric when we are in the beginning stages of working on a big
assignment like this, along with a prompt that describes the task itself.
Another powerful step that makes the rubric even more effective is to score sample
products as a class, using the rubric as a guide. I often created these samples myself,
building in the kinds of problems I often saw in that type of writing. Occasionally I
would use a piece of writing from a previous student with their name removed. Ideally,
we would score one or two of these as a whole-class activity, and then I would have
students do a few more in pairs. This process really gets students paying attention to the
rubric, asking questions about the criteria, and getting a much clearer picture of what
quality work looks like. When it comes time to craft their own pieces, they are better at
using this tool for peer review and self-assessment.
I put a check beside the criteria that have been satisfied in that draft, and add comments to
the left of those that need work. In the right-hand column, I add a few suggestions for
ways this student might push herself a bit more to make the piece even better.
You’ll notice that the space for scores has been left blank. There’s a reason for that:
When students are given both feedback and number or letter grades, their motivation
often drops and they tend to ignore the written feedback (Butler & Nisan, 1986). My own
experience bears this out: I often spent hours giving written feedback on
student writing, only to find that students ignored it. Now I know this was because the
feedback also included a grade. No-grades advocate Alfie Kohn, in his piece From
Degrading to De-Grading, recommends that teachers who want to avoid this effect “make
grades as invisible as possible for as long as possible.” With that in mind, in this round,
students only get feedback, not scores.
When students have improved their work and re-submitted it, if they have gotten much
closer to achieving the criteria, this would be an appropriate time to assign points to go
into the grade book. If the issues raised in the first round have now been addressed, they
are given a check to indicate that they are no longer a problem. In cases where all criteria
in a category have been satisfied, the full number of points will be given. If a problem
persists, new feedback may be added, and a portion of the points will be deducted. Again,
this is the subjective part: I try to consider the work as a whole and deduct only a small
percentage of the total points for a small problem. Really, if a problem is significant, the
assignment should be reworked until that problem has been resolved. Once each section
of the rubric has been scored, the points are totalled and that total is the score that’s
entered into the grade book.
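A minimal sketch of that second-round category scoring, assuming each category lists its criteria as met or unmet and a flat deduction per unresolved criterion (both assumptions of mine, not a fixed rule):

```python
# Sketch: score one rubric category in the second round.
# Full points when every criterion is satisfied; otherwise deduct a
# small share of the category's points per unresolved criterion.
def score_category(points_possible, criteria_met, deduction_per_miss=0.1):
    misses = sum(1 for met in criteria_met if not met)
    if misses == 0:
        return points_possible  # all criteria satisfied: full points
    deducted = points_possible * deduction_per_miss * misses
    return max(points_possible - deducted, 0)

# Example: a 30-point category with one criterion still unresolved.
print(score_category(30, [True, True, False]))  # 27.0
```

In practice, as the text notes, the deduction is a judgment call made on the work as a whole, and a significant problem should trigger another revision rather than a large deduction.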
The quality of information acquired through the use of checklists, rating scales and
rubrics is highly dependent on the quality of the descriptors chosen for assessment. Their
benefit is also dependent on students’ direct involvement in the assessment and
understanding of the feedback provided.
Checklist
A checklist is the least complex form of scoring; it examines the presence or absence of
specific elements in the product of a performance, like a light switch that is either on
or off. All elements are generally weighted the same, and gradations in quality are
typically not recognized.
Rating Scale
A rating scale attaches quality, numeric or descriptive, to the ‘elements’ in the process
or product. Unlike a checklist, a rating scale therefore records gradations of quality
rather than simple presence or absence.
Rating scales allow teachers to indicate the degree or frequency of the behaviours, skills,
and strategies displayed by the learner. If a checklist is like an on/off light switch, a
rating scale is like a dimmer switch that provides for a range of performance levels.
Rating scales state the criteria and provide three or four response selections to describe
the quality or frequency of student work.
Teachers can use rating scales to record observations and students can use them as self-
assessment tools. Teaching students to use descriptive words, such as always, usually,
sometimes and never helps them pinpoint specific strengths and needs. Rating scales also
give students information for setting goals and improving performance. In a rating scale,
the descriptive word is more important than the related number. The more precise and
descriptive the words for each scale point, the more reliable the tool.
Effective rating scales use descriptors with clearly understood measures, such as
frequency. Scales that rely on subjective descriptors of quality, such as fair, good or
excellent, are less effective because the single adjective does not contain enough
information on what criteria are indicated at each of these points on the scale.
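To make the contrast between a checklist's on/off judgment and a rating scale's graded descriptors concrete, here is a sketch; the elements and the descriptor-to-number mapping are assumptions for illustration:

```python
# A checklist records only presence/absence of each element.
checklist = {"states a claim": True, "cites evidence": False, "uses transitions": True}
elements_present = sum(checklist.values())

# A rating scale attaches a graded frequency descriptor to each element.
# The descriptor matters more than the number; this mapping is illustrative.
scale = {"never": 0, "sometimes": 1, "usually": 2, "always": 3}
ratings = {"states a claim": "always", "cites evidence": "sometimes", "uses transitions": "usually"}
scores = {element: scale[word] for element, word in ratings.items()}
print(elements_present, scores)
```

Note how the rating scale distinguishes "cites evidence sometimes" from "never", where the checklist could only mark the element absent.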
Added value
Increase the assessment value of a checklist or rating scale by adding two or three
additional steps that give students an opportunity to identify skills they would like to
improve or the skill they feel is most important. For example:
Put a star beside the skill you think is the most important for encouraging others.
Circle the skill you would most like to improve.
Underline the skill that is the most challenging for you.
Portfolios
Portfolios can be created for course assessment as well as program assessment. Although
the content may be similar, the assessment process is different.
Showcase Portfolios: Students select and submit their best work. The showcase portfolio
emphasizes the products of learning.
Developmental Portfolios: Students select and submit pieces of work that can show
evidence of growth or change over time. The growth portfolio emphasizes the process of
learning.
Showcase portfolio: Consider starting with one assignment plus a reflective essay from a
senior-level course as a pilot project. A faculty group evaluates the “mini-portfolios”
using a rubric. Use the results from the pilot project to guide faculty decisions on adding
to or modifying the portfolio process.
Developmental portfolio: Consider starting by giving a similar assignment in two
sequential courses: e.g., students write a case study in a 300-level course and again in a
400-level course. In the 400-level course, students also write a reflection based on their
comparison of the two case studies. A faculty group evaluates the “mini-portfolios” using
a rubric. Use the results to guide the faculty members as they modify the portfolio
process.
Suggested steps:
1. Determine the purpose of the portfolio. Decide how the results of a portfolio
evaluation will be used to inform the program.
2. Identify the learning outcomes the portfolio will address.
Tip: Identify at least 6 course assignments that are aligned with the outcomes the
portfolio will address. Note: When planning to implement a portfolio requirement,
the program may need to modify activities or outcomes in courses, the program, or
the institution.
3. Decide what students will include in their portfolio. Portfolios can contain a range of
items – plans, reports, essays, resumes, checklists, self-assessments, references from
employers or supervisors, audio and video clips. In a showcase portfolio, students
include work completed near the end of their program. In a developmental portfolio,
students include work completed early and late in the program so that development
can be judged.
Tip: Limit the portfolio to 3-4 pieces of student work and one reflective essay/memo.
4. Identify or develop the scoring criteria (e.g., a rubric) to judge the quality of the
portfolio.
Tip: Include the scoring rubric with the instructions given to students (#6 below).
5. Establish standards of performance and examples (e.g., examples of a high, medium,
and low scoring portfolio).
6. Create student instructions that specify how students collect, select, reflect, format,
and submit.
Tip: Emphasize to students the purpose of the portfolio and that it is their
responsibility to select items that clearly demonstrate mastery of the learning
outcomes.
Emphasize to faculty that it is their responsibility to help students by explicitly tying
course assignments to portfolio requirements.
Collect – Tell students where in the curriculum or co-curricular activities they will
produce evidence related to the outcomes being assessed.
Select – Ask students to select the evidence. Instruct students to label each piece of
evidence according to the learning outcome being demonstrated.
Reflect – Give students directions on how to write a one- or two-page reflective
essay/memo that explains why they selected the particular examples, how the pieces
demonstrate their achievement of the program outcomes, and/or how their
knowledge/ability/attitude changed.
Format – Tell students the format requirements (e.g., type of binder, font and style guide
requirements, online submission requirements).
Submit – Give submission (and pickup) dates and instructions.
7. A faculty group scores the portfolios using the scoring criteria. Use examples of
the standards of performance to ensure consistency across scoring sessions and
readers.
Tip: In large programs, select a random sample of portfolios to score (i.e.,
do not score every portfolio).
8. Share the results and use them to improve the program.
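The random-sampling tip for large programs might be sketched like this; the program size, sample size, and fixed seed are illustrative assumptions:

```python
import random

# Sketch: in a large program, score a random sample of portfolios
# rather than every one. The counts and seed are illustrative.
portfolio_ids = [f"P{i:03d}" for i in range(1, 201)]  # hypothetical 200 portfolios
random.seed(42)  # fixed seed so the same sample can be re-drawn for audit
sample = random.sample(portfolio_ids, k=30)  # sampling without replacement
print(len(sample))
```

Each portfolio in the sample would then be scored with the shared rubric, with the standard-of-performance examples used to calibrate readers before scoring begins.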