Chapter 7
Assessment and Evaluation

ISTE Standards addressed in this chapter

NETS-T 2. Design Digital-Age Learning Experiences and Assessments
Teachers design, develop, and evaluate authentic learning experiences and assessments incorporating contemporary tools and resources to maximize content learning in context and to develop the knowledge, skills, and attitudes identified in the NETS•S. This chapter addresses Standard 2.d: provide students with multiple and varied formative and summative assessments aligned with content and technology standards and use resulting data to inform learning and teaching.
Note: Standard 2.a is addressed in Chapter 4, 2.b in Chapter 5, and 2.c in Chapter 6.
I recently visited a local middle school to interview teachers and the principal. This middle school—a National Blue Ribbon School as designated by the U.S. Department of Education—is known for its innovative and exceptional use of technology. The school incorporates a web-based gradebook and reporting software that allows teachers to post student assignments and grades as well as communicate with parents online. When asked how this technology had affected her practice, one veteran teacher half-jokingly replied that she had been brought into the technology world "kicking and screaming" and that she would give up her paper-based gradebook only "over my dead body." However, she then went on to elaborate that despite her original reluctance to use the online gradebook, she couldn't imagine teaching without it now. It had helped to break down communication barriers between her and the parents, many of whom were in two-income families and did not have the opportunity to visit the classroom as often as parents did when she began her career. To her, it had "brought the parents into the classroom." Now she routinely uses the system not only to post final grades but also to inform parents of upcoming assignments and to prompt greater home involvement when students need extra help or encouragement with an assignment. The only negative feedback, joked the principal, had come from the students who groaned that their parents now know "what they did in school" every day and are able to ensure schoolwork is completed and supported at home.

The gradebook program allows teachers to analyze and chart student performance individually, by group, or by class, and to send confidential messages to parents with real-time grade data via e-mail. These data reports allow teachers to quickly determine whether they need to reteach content to the entire class, a group, or an individual. Some of the teachers in the school described how they post helpful web resources and even their class lecture notes or presentations on the website so that both parents and students can use it to support or supplement instruction outside of class. The veteran teacher reported that the parent of one of her students routinely downloaded her classroom presentations at work and that the whole office would look forward to reviewing them. Not only does she think that this practice provides great public relations for the school, but "it's like that parent's in the classroom getting the same lesson her child did that day."

Source: John Ross
the learning process. How can assessment support learning? Classroom assessments
that promote learning (Assessment Reform Group, 1999, p. 7):
· involve sharing learning goals with students
· aim to help students know and recognize the standards for which they are
aiming
· involve students in self-assessment
· provide feedback that leads to students recognizing their next steps and how
to take them
· are based on the belief that every student can improve
· involve both teachers and pupils reflecting on assessment data
Have you experienced assessments that were specifically designed to facilitate
learning? Perhaps you have had projects or assignments reviewed by your teachers
or peers prior to being allowed to revise and strengthen them. Perhaps you engaged
in simulations that allowed you to practice the steps of an experiment before demon-
strating your understanding on an exam. Formative assessments can include video-
taped performances, electronic journals, checklists and rubrics, as well as many other
formats that allow you to practice or check your understanding prior to being evalu-
ated on the “big test.” Formative assessments may or may not have a grade assigned
to them but if they do, these grades are often used for the purposes of monitoring
student progress or providing a benchmark rather than as a final judgment.
As suggested above, feedback is critical to formative assessment. Your students
will benefit most when they receive feedback about the quality of their work and sug-
gestions for improvement—not just whether their responses were right or wrong. To
improve learning, they must also receive feedback about their progress during
instruction. Outcome feedback, knowing whether a response is correct or not, is
the simplest and most common type of feedback, but it provides little guidance to
students (Butler & Winne, 1995). Early drill-and-practice software often provided
this type of feedback, in which student responses were boldly acknowledged as “COR-
RECT” or “INCORRECT,” but rarely with an explanation why. For formative assess-
ment to achieve maximum benefit, feedback must provide an explanation of why an
answer is correct or incorrect, should support students when they are using appropri-
ate strategies and content knowledge, and guide them when they are not. Cognitive
feedback refers to feedback that helps students develop a better understanding of
what is expected for performance, how their current understanding or skill levels
compare to those expectations, and how they might improve their performances
(Butler & Winne, 1995).
It’s important that you don’t isolate the assessment strategies and tools that you
use in the Monitor and Evaluate stages of your instruction and just “tack them on” at
the end. The cycle of instruction is iterative and the stages overlap. As you will learn
in the next section, you plan your assessments as you set goals for student learning.
In addition, some of the assessments you develop may be part of your instructional
actions.
Instructional Objectives
Many school systems require teachers to write instructional objectives, so it is useful to know how to do so. An instructional objective describes what your students should be able to do when they have successfully completed your lesson. The addition of the qualifying term, "instructional," places it within the context of your classroom and emphasizes that it should be more specific than a state or national objective that supports a content standard. The instructional objective bridges the gap between the content standard and the assessment measure. In other words, objectives can be thought of as specifications for your assessments. For example, an objective and the associated assessment measure might look like this:

Objective: Given multiple sample checks, locate the bank routing number and customer account number on the checks.

Assessment: Review the five blank checks located in the envelope. Circle the bank routing number and underline the customer account number on each check.

An instructional objective has three parts (Dick, Carey, & Carey, 2005). These parts describe

1. What your students are going to do (the skills and knowledge identified in the content standards and objectives)
2. How the students will do it (the conditions under which they will perform the skills or behavior, including necessary materials and resources)
3. How you will know the students have done it successfully (the criteria for a successful performance)

When considering the first part of an instructional objective, what your students are going to do, content standards help determine the knowledge your students will need and the skill level(s) required to demonstrate that knowledge. The second part of an instructional objective describes the conditions under which your students will perform the skills required by the standards, including the necessary materials and resources. The third part of an instructional objective describes the criteria for a successful performance such as time limits, degrees of accuracy, or percentage correct. Examine the objective below, which contains all three components:

Objective: Using a reproduction of the architect's blueprint of your school (the conditions), locate all restrooms and drinking fountains on the first floor (the performance) with 100 percent accuracy (the criteria).

Although content standards and objectives provide the focus for a good lesson plan, you still must use your experience, knowledge, and other resources to develop a lesson that will help your students effectively build the required content-related skills and knowledge. The standards, objectives, instructional activities, materials employed, and assessment measures are interrelated and should support each other. In other words, they must all be "in alignment."

For more information on converting content standards to objectives, see the textbook's companion website.
for interpreting standards at the local level. In other words, standards do not require
every teacher to teach every child in the exact same way.
Standards, also, do not routinely describe how students will be assessed. To
be more useful, standards must be converted to something more measurable. One
way to do that is to write instructional objectives (see Tools for Use); another way
is to develop your assessment instruments. Objectives and assessments are really
two sides of the same coin. Objectives describe the skills and knowledge your stu-
dents must master; assessments provide the specific means for them to actually dem-
onstrate those skills and knowledge. The assessment strategies and tools that you
select should be closely aligned with the learning goals you have developed for your
students; the content standards you have selected for your lesson and unit plans; and
the teaching strategies, activities, and materials used in your lesson. For example, if
your students will use science simulation software for an assessment, they must have
the opportunity to learn the skills that are needed to perform the simulation, includ-
ing practice with the software, if necessary, prior to completing the assessment. If
your students will compose a letter using word processing software, they must be
given adequate practice and experience not just in using a keyboard, but in using a
keyboard for letter-writing activities. Regardless of whether you actually create your
assessments prior to developing learning activities, assessment and learning activities
are closely linked and should be considered together.
in Table 7.1 were selected to demonstrate similar types of skills in each domain, but
at different grade levels. For example, the sample standards for mathematics are so
similar across grade levels, they use much of the same terminology:
Grade 4: The learner will understand and use graphs, probability, and data
analysis.
Grade 8: The learner will understand and use graphs and data analysis.
Grade 12: The learner will collect and interpret data to solve problems.
In these examples, the math standards and objectives relate to data use and graph-
ing. Although the global nature of the standards suggests which skills and knowledge are
necessary, they fall short in terms of specifics. Greater specificity is provided through
the supporting objectives. Although only one objective is listed for each of these exam-
ples, content standards often have multiple objectives that, in combination, make up
the requisite skills and knowledge your students must develop to demonstrate they
have mastered the standards. Together, the standards and objectives make up the cur-
riculum that your state asks you to teach. The complete standards and supporting
objectives for the eighth-grade mathematics example are given in Figure 7.1.
To achieve curriculum alignment, you must develop lessons that are matched
to the standards. Similarly, you must develop or select assessments that allow stu-
dents to demonstrate content mastery at the level demanded by the standards. This
means a couple of things: first, that your assessments are aligned with your lessons
in terms of knowledge and skills, and second, that they are aligned in terms of the
format and materials used. For example, if your objective is for students to be able to
compose a five-paragraph paper and your assessment is completed in a computer lab
Figure 7.1
Eighth grade mathematics standard and objectives. Standard 8.1: The learner will understand and use graphs and data analysis; supporting objectives ask students to interpret the meaning of a line as it relates to a problem and to make predictions.
Source: Adapted from Public Schools of North Carolina. (2006). North Carolina standard course of study.
Figure 7.2
Alignment among objectives, activities, and assessment. The diagram links objectives, the knowledge and skills they target, activities, and assessment, each supporting the others.
Developing Assessments
When you develop assessment instruments you need to consider the same three
components you considered when writing instructional objectives: 1) the behavior,
skill, knowledge, or attitude to be demonstrated, 2) the conditions under which
they will be demonstrated, and 3) the criteria that specify the level of performance
or knowledge required. As you may know, the behavior, skills, knowledge, or atti-
tudes your students are required to master are outlined in your content standards
(see Chapter 3). Performance conditions identify equipment, supplies, or other
resources, including technologies, which are allowed during the assessment of stu-
dent skills. They also include any time limits or other constraints imposed upon stu-
dents as they are demonstrating the skill. Criteria clearly describe what acceptable
performance looks like. Criteria can be conveyed through rubrics, checklists, or a sim-
ple list of what an acceptable answer should include (e.g., “Your report should include
five sources and no grammatical or factual errors."). When criteria are shared with students before they embark on an activity, students can set goals for their own performances and can continue to monitor those performances throughout their learning, whether creating a web page, writing an essay, or preparing a presentation for the class. They
can also determine the essential elements of complex projects, making the project
seem more approachable even to struggling students.
Whether an intended consequence or not, the assessment activities you select will
indicate to your students which skills and knowledge you feel are most important or
worthwhile. If you test factual knowledge, students will believe that facts are most
important; if you test critical and creative thinking skills, students will pay more
attention to developing these skills. So, if your goal is to create a learning environ-
ment that promotes deep understanding of content, what type of assessments
should you choose? How can you communicate the interest and excitement you
have for your content area, or even your interest in lifelong learning, through assess-
ments? Think about the messages you send to your students when you select and use
your assessment methods. Do these messages include a focus on meaningful under-
standing or are they focused on how to pass a test at the end of the year? How do
your classroom assessments communicate your expectations to your students?
There is no magic formula to determine the right assessment task or the appro-
priate technology for every performance goal or assessment task. In fact, a variety of
technologies and formats may fit any number of assessment tasks and can be used
equally well in multiple content areas. In practice, you should use a variety of assess-
ment formats and tools that are matched to your students’ learning goals and that
provide an adequate picture of student understanding.
Figure 7.3
Hot Potatoes is popular software for developing questions for students.
Source: © 2003–2008 Half-Baked Software Inc.
the room. These systems may also be connected to or built into some type of display
hardware such as an interactive whiteboard (see Chapter 5).
Detailed data collection, analysis, and reporting can occur instantaneously. This
type of live polling of responses is ideal for monitoring learning through formative
assessment and can help you and your students quickly determine content areas
that require further instruction or where there are obvious gaps in understanding.
Student data can also be stored and tracked over time to give you a picture of indi-
vidual student growth and the need for supplemental instruction.
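As a rough illustration of what a response system does with those clicks, the short Python sketch below tallies a class's responses to a single question and shows the kind of distribution a teacher would see on screen. The responses, correct answer, and the 70 percent reteaching threshold are hypothetical values chosen for illustration.

from collections import Counter

# Hypothetical clicker responses to one multiple-choice question.
responses = ["A", "C", "C", "B", "C", "C", "A", "C", "D", "C"]
correct = "C"

tally = Counter(responses)
for choice in "ABCD":
    count = tally.get(choice, 0)
    bar = "#" * count
    print(f"{choice}: {bar} ({count})")

# A quick formative check: did enough of the class answer correctly?
pct_correct = tally[correct] / len(responses)
flag = "- consider reteaching" if pct_correct < 0.7 else ""
print(f"{pct_correct:.0%} answered correctly {flag}")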
grammar or formatting is required. Depending on what you are assessing, you may or
may not want your students to express IMHO (in my humble opinion), LOL (laugh
out loud), or to wink ;- ) at each other. In this case, technology may actually lead to
superficial assessment activities.
Journaling is a common activity and can be used to facilitate self-reflection and
self-assessment. Journaling can be supported by a variety of technologies, such as
common word-processing software, designated journaling software, as well as blogs.
Many word-processing applications include note and tracking features that allow you
and your students to incorporate suggestions, options, and revisions within a single
document. They also include tools, such as grammar and spell checkers, that can sup-
port students with difficulties in these areas when critical thought outweighs the
need for mechanics. The use of grammar, spelling, and other writing tools is an
important part of communicating effectively when using information and communi-
cations tools in the 21st century.
Both stand-alone and web-based tools can incorporate video, graphics, and other
media that allow students to support their text-based entries. Web-based communi-
cations such as blogs or threaded discussions allow for an added layer of complexity
as you, other students, and even the original author can return to a posting and pro-
vide critique, clarification, or demonstration of further understanding or skill. This
type of collaborative journal can support peer review as a type of formative assess-
ment. Unique to threaded discussions and online forums is that messages can be sorted and organized by parameters such as the author, date, or subject, making it easy to follow a student's argument or line of reasoning and even to demonstrate growth over time.
Performance-Based Assessments
There are a variety of ways students can demonstrate mastery through performance.
Performance-based assessments are possible in all content areas but may most easily
be exemplified by domains that require oral communication skills or the development
of psychomotor skills in conjunction with other content knowledge, such as sports,
the fine arts, and many lab sciences. Performances can very quickly extend beyond
the demonstration of rudimentary skill and can require students to demonstrate
very complex behaviors that may exhibit choices based on personal values and
creativity.
Oral communication is a ubiquitous teaching and assessment strategy. Teachers
ask their students questions to determine prior knowledge, levels of understanding
or misunderstanding, or simply to clarify a point. This type of questioning and dialog
can be informal or can be used in formal settings, such as in the case of an oral exam
or interview. And although this is a book about the use of technology, it’s important
to emphasize that technology should only be used when it facilitates learning, and
not simply as a novelty. Sometimes you’ll just want to ask a question. No technology
required. But there are some ways that technology can support assessment through
dialog, primarily through recordkeeping.
The ease with which digital video can be captured and edited allows it to serve as
a tool for demonstrating student skills and knowledge. As demonstrated by the hun-
dreds, if not thousands, of live early morning news shows at elementary schools
across the nation, even very young students can master basic video capture and edit-
ing to record their progress. Class or small group discussions can be captured and
stored in portfolios. Student presentations can be recorded and kept as a record of
content understanding. And although this section focuses on technology that sup-
ports student assessment, the value of using videotape to record your own teaching
as a means of evaluating and developing skills (as presented in Chapter 12) cannot be
overstated.
Technology can provide a means for recording critical early literacy and numeracy
development—foundational skills for later learning. In early literacy classes, the
assessment technique of “running records” (see Figure 7.4) allows teachers to quickly
Figure 7.4
Wireless Generation uses handheld computers to allow teachers to gather and compile student responses almost instantaneously.
Source: http://www.wirelessgeneration.com/reading3d_demo3.html. © 2000–2008 Wireless Generation, Inc.
Project-Based Assessments
Related to performance-based assessments is the creation of products to support
project- and problem-based learning, as introduced in Chapter 3. Although
project-based assessments are especially well suited for formative purposes, they are
also employed in summative settings, such as the generation of capstone projects at
the end of a year or course of study. There are a variety of methods for incorporating
project-based assessments in a classroom, as well as many different tools to support
them.
Project-based assessments are often linked to a category called authentic
assessments. In Chapter 3 we introduced you to the concept of authentic intellec-
tual work and described how authentic instruction used real-world contexts to
engage students in the actual work of a discipline. In an authentic assessment, stu-
dents are required to demonstrate understanding of concepts and perform skills
within the context of that authentic activity, that is, by replicating real-world perfor-
mances as closely as possible (Svinicki, 2004). In these cases, the assessments may be
so intricately embedded or linked to the instruction that it may not be apparent to
students that there is a formal assessment. In science classes, students can perform
experiments using probes and other measurement devices that scientists use and
record their findings in a laboratory notebook—digital or paper; students can dem-
onstrate writing proficiency by using word processing and layout software to create
brochures or newspapers; students in math classes can use GPS devices to measure
buildings and land forms and generate scale drawings that show the application of
concepts such as area.
Wiggins (1998) lists six characteristics of an authentic assessment:
1. The assessment is realistic; it reflects the way the information or skills would
be used in the “real world.”
2. The assessment requires judgment and innovation; it is based on solving
unstructured problems that could easily have more than one answer and, as
such, requires the learner to make informed choices.
3. The assessment asks the student to “do” the subject, that is, to go through the
procedures that are typical to the discipline under study.
4. The assessment is done in situations as similar as possible to the context in
which the related skills are performed.
5. The assessment requires the student to demonstrate a wide range of skills
that are related to the complex problem, including some that involve
judgment.
6. The assessment allows for feedback, practice, and second chances to solve the
problem being addressed.
Project-based assessments, whether taking the form of authentic assessments or
not, can also support pedagogies related to problem-based learning. As described
in Chapter 3, problem-based learning, also called PBL, can help students meet the
demands of standards and learning goals that require higher-order thinking skills,
such as those related to problem identification, selecting and monitoring strategies
for solving the problem, applying knowledge, and evaluating the success of one’s
efforts. Problem-based learning can also utilize performance-based assessments, as a
well-designed problem can easily require students to demonstrate new knowledge
and skills through performance with the only limitations being the appropriate fit
to the content being explored.
Think of the typical science fair project. This is a project that is often built around
a specific problem, whether determined by the student or the sponsoring organiza-
tion. The problems often meet the requirements for authentic assessment as stu-
dents are required to perform like scientists in terms of the research they complete
and the tools they use. They also usually complete some type of performance in
terms of explaining their projects and their new understandings and skills that have
been developed. So in this case, you may have an authentic, problem-based project
that is assessed via performance and a product!
These distinctions are not as critical as developing assessments that appropriately
meet the demands of your curriculum and the needs of your students—it usually
doesn’t matter if you can explicitly state whether you are engaging in project- or
problem-based activities, or both. The bottom line is that project-based learning typ-
ically results in some type of product, perhaps a web page or a multimedia presenta-
tion, and it may or may not include some type of performance, such as an oral report
or class presentation using the products students have created.
use a digital camera to take pictures to support their presentations, create graphics
using drawing programs, and demonstrate their knowledge using presentation soft-
ware. Obviously, there are many tools you can use to support performance- or
project-based learning activities. In the following discussion, we’ll focus on just a
few of the many technologies that can help you assess student learning when using
performance- and project-based assessments: 1) concept maps, 2) simulations, and
3) portfolios and work samples.
Concept Maps
As you recall from Chapter 4, concept mapping is a graphic technique for represent-
ing student understanding. Concept maps traditionally consist of nodes representing
concepts and links that show the connection between nodes (see Figure 7.5 for an
example of one type of a concept map). Concept-mapping software is available for
use by young students; however, the concepts that can be addressed and the result-
ing maps can become extremely complex and thus, are often used more with older
students. Concept mapping is not dependent upon technology, but concept-
mapping software is widely available for facilitating the process and a variety of
map templates are readily available, including Venn diagrams (see Figure 7.6), plot
outlines, timelines, lab reports, and many others. Popular concept-mapping software
can also toggle back and forth between the visual map layout and a corresponding
text-based outline, allowing students to generate and review content in graphic or
text form.
The nodes and links in a concept map help reveal student thinking and can illu-
minate misconceptions. Your students can compare their maps to those created by
experts—including yourself—and this comparison can provide specific cognitive
feedback essential to good formative assessment.
Figure 7.5
Example of a concept map. The map relates rock types and their formation, for example, igneous rock formed by the cooling of magma, with example minerals and rocks such as quartz, sandstone, shale, and limestone at the nodes.
Figure 7.6
Example of a Venn diagram comparing dolphins and sharks, contrasting characteristics such as how each takes in oxygen, skeletons of bone versus cartilage, and evolutionary history, with shared traits such as living in open water as well as near and offshore.
Although paper-and-pencil concept maps may also be analyzed for understanding and student misconceptions, the widespread availability of concept-mapping software increases its utility. As such, it has
become a popular technology in classrooms at all grade levels.
One concern regarding using concept maps for assessment is how to implement
concept mapping consistently across classrooms to generate valid and reliable results.
Although technology can support concept mapping, it cannot resolve the human-
dependent implementation issue. Concept maps do rate high in terms of utility, how-
ever, and can be integrated into many different content areas.
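Under the hood, a concept map is essentially a small graph: a set of nodes plus labeled links between them. The hypothetical Python sketch below stores a student map and an expert map as sets of (concept, relation, concept) links and reports which expert links the student map is missing. The rock-cycle links are invented examples, and real concept-map scoring schemes are considerably more nuanced than simple link matching.

# Hypothetical sketch: a concept map as a set of (concept, relation, concept) links.

expert_map = {
    ("igneous rock", "formed by", "cooling magma"),
    ("sedimentary rock", "formed by", "compaction of particles"),
    ("metamorphic rock", "formed by", "heat and pressure"),
}

student_map = {
    ("igneous rock", "formed by", "cooling magma"),
    ("sedimentary rock", "formed by", "heat and pressure"),  # a misconception
}

# Links the student has that match the expert map.
matching = student_map & expert_map

# Expert links missing from the student map suggest gaps; extra links may be misconceptions.
missing = expert_map - student_map
unexpected = student_map - expert_map

print(f"Matched {len(matching)} of {len(expert_map)} expert links.")
for link in missing:
    print("Missing:", " ".join(link))
for link in unexpected:
    print("Not in expert map (possible misconception):", " ".join(link))

Comparing the sets in this way surfaces exactly the kind of specific cognitive feedback described above: not just a score, but which connections a student has, lacks, or holds incorrectly.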
Simulations
Simulation software can provide access to learning activities that might otherwise be
difficult or impossible to create in a classroom. The simulations available for use in
your classroom range in sophistication from simple animations that can be found
free online to sophisticated virtual environments. You can access online simulations
of weather-related events or space missions and collect student performance data
using an electronic journal or laboratory notebook. Students can access historical
simulations and work in small groups to create presentations to give to the rest of
the class, complete structured worksheets, or write reflective essays guided by ques-
tions you pose.
Simulation software is unique in that it not only provides opportunities for learn-
ing but it can support assessment, as well (see Figure 7.7). Simulation software can
capture different elements of student performance data, such as the paths they fol-
low through the software and the choices they make. That data can then be analyzed
and reported and used to make judgments about skill and knowledge proficiency.
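To make that concrete, the hypothetical Python sketch below walks a simple event log, the kind of path data a simulation might record, and counts how many attempts each student needed before a successful trial. All names, actions, and outcomes are invented for illustration.

# Hypothetical simulation event log: (student, action, outcome) in time order.
event_log = [
    ("Ana", "launch cannonball", "miss"),
    ("Ana", "adjust angle", ""),
    ("Ana", "launch cannonball", "hit"),
    ("Jin", "launch cannonball", "miss"),
    ("Jin", "launch cannonball", "miss"),
    ("Jin", "adjust velocity", ""),
    ("Jin", "launch cannonball", "hit"),
]

attempts = {}
for student, action, outcome in event_log:
    if action == "launch cannonball":
        record = attempts.setdefault(student, {"tries": 0, "hit": False})
        if not record["hit"]:
            record["tries"] += 1
            if outcome == "hit":
                record["hit"] = True

# Report attempts-to-success, one simple proxy for proficiency.
for student, record in attempts.items():
    status = "succeeded" if record["hit"] else "has not yet succeeded"
    print(f"{student}: {status} after {record['tries']} attempt(s)")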
It’s easy to imagine how games and virtual environments could be used for
assessment as well. You may have to generate scoring procedures for some simple
simulations or educational games that do not have data analysis and reporting fea-
tures built into them. You can combine the use of these software applications with
other assessment methods, such as journals or short quizzes, in order to check for
students’ progress on their learning goals. However, the selection and use of simula-
tions, games, or virtual environments for assessment should meet the same rigor for
Figure 7.7
This ballistics simulation has a simple form of self-assessment. Once the student correctly manipulates the
cannonball to hit the target, the target bursts into flames. Courtesy Oak Ridge National Laboratory.
Source: www.csm.ornl.gov
selecting any technology in your classroom and should appropriately match the needs
of the content, your students, and the learning goals they are trying to meet.
students can continue to access their work after they’ve left your school or classroom.
You don’t want them to depend on proprietary software that is not readily accessible.
By creating student work samples in common digital formats, such as word-processing
documents, portable document formats (PDF), graphic files, and HTML pages, you
may be able to store these artifacts in a portfolio management system as well as pro-
vide copies to students for use in common software applications. In fact, as Helen
Barrett from the University of Alaska demonstrates on her portfolio website, just about any type of software can support the development of portfolios. As you learned in Chapter 1, Dr. Barrett has created versions of her portfolio in word processing, database, spreadsheet, and several multimedia applications and has posted them on the web using HTML, graphics, video, and other formats.

WEB LINK
Review Dr. Helen Barrett's work on portfolios by visiting the textbook's companion website.
When incorporating portfolios in your instruction and assessment, it is impor-
tant to determine an organizing structure in advance and make it apparent to your
students. For example, your own portfolio for this class is probably organized around
the ISTE NETS-T. To facilitate learning and student growth, your students them-
selves should determine what artifacts should be selected. However, the guidelines
and criteria for the selection of materials contained in the portfolio should be
explained and made clear to your students. And sometimes you may want all stu-
dents to include a specific artifact or work sample as a means of monitoring their
mastery of expected standards.
As you know, portfolios also serve as a means for self-assessment and self-
reflection, as it is common that students write or explain why they have selected
the artifacts in their portfolios, what those artifacts demonstrate in terms of their
learning or understanding, and why they feel they are exemplars. Your students will
likely need guidance and practice in creating reflections. (See information about cre-
ating self-reflections in Chapter 2.) Reflection provides a natural opportunity for pro-
viding cognitive feedback to your students as you review and discuss the portfolio
artifacts with them.
Both the process of artifact selection and the subsequent assessment of portfolios
rely on some subjective judgment. Also, portfolio use is not standard across class-
rooms. Your students’ portfolios will be highly dependent on your classroom prac-
tices, and not all teachers will place the same emphasis on portfolio development,
allot the same amount of class time for their development, provide the same access
to outside resources or help, or collaborate with students to the same degree in terms
of selecting relevant artifacts. However, the main point to remember is that although
portfolios pose problems when compared across classrooms, one of the primary ben-
efits of portfolio use is that they can help teachers and students within a classroom
become more systematic in analyzing and learning from work samples than
would normally occur during instruction (Shepard, 2000).
1. Identify a technology you have little experience with and find a sample to use on your own.
You may be able to download freeware or trial versions of concept-mapping software,
check out software or hardware from the library, or borrow from a local school.
2. Develop a small-scale project using the tool that demonstrates how you might incorporate
this into your own teaching. How is the tool helpful? What limitations does it have, if any?
What resources or training are required to use it fully?
3. Share your findings with others in your class.
Scoring Keys
When an assessment allows you to make an objective judgment as to whether a
response is right or wrong, grading is rather straightforward. Your student either
got it right or not. When students are required to choose between a limited set of
possible answers, as with multiple-choice, matching, and true-false questions, grading
guidelines consist of a list of correct responses. These forced-choice assessment for-
mats easily lend themselves to computerized test administration and scoring.
A popular technology for scoring forced-choice responses is a test scanner. These
scanners use the famous “bubble sheets” with which you are undoubtedly familiar.
You can create multiple test forms to increase security and a grade scanner can score
all forms within a few minutes. Test scanners also allow you to do an item analysis of
an assessment to determine whether any items were too easy or too hard, which can
help you improve the match between your standards, instruction, and assessment.
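Item analysis itself is straightforward arithmetic: an item's difficulty is simply the proportion of students who answered it correctly. The short Python sketch below illustrates the idea; the answer key, response data, and the 0.9 and 0.3 cutoffs for "too easy" and "too hard" are hypothetical values chosen for illustration, not fixed conventions.

# Hypothetical illustration of basic item analysis (item difficulty).
# The answer key, student responses, and cutoffs below are invented examples.

answer_key = ["B", "D", "A", "C", "A"]

student_responses = [
    ["B", "D", "A", "C", "B"],   # each inner list is one student's answers
    ["B", "C", "A", "C", "A"],
    ["B", "D", "B", "C", "D"],
    ["A", "D", "A", "C", "C"],
]

num_students = len(student_responses)

for item, correct in enumerate(answer_key):
    # Difficulty = proportion of students answering this item correctly.
    p = sum(resp[item] == correct for resp in student_responses) / num_students
    flag = ""
    if p > 0.9:
        flag = "  <- may be too easy"
    elif p < 0.3:
        flag = "  <- may be too hard; recheck the item or the instruction"
    print(f"Item {item + 1}: difficulty = {p:.2f}{flag}")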
Some open-ended test formats require constructed responses that also can be
judged right or wrong. Short-answer questions and even some essay tests can be
scored using keys that consist of a list of acceptable answers. Scoring keys for these
open-ended formats should include common variations in wording for each accept-
able response. Although this type of test can also be administered and scored by com-
puter, developing a program to score constructed responses is more difficult than
developing one to score forced-choice responses.
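To make the contrast concrete, here is a small, hypothetical Python sketch of the two kinds of scoring keys described above: forced-choice items are checked against a single correct response, while short-answer items are checked against a list of acceptable wording variations. Real scoring programs are more sophisticated, but the core logic is similar; all item numbers and answers here are invented.

# Hypothetical scoring-key sketch: forced-choice vs. constructed response.

# Forced-choice key: one correct answer per item.
multiple_choice_key = {1: "C", 2: "A", 3: "D"}

# Constructed-response key: each item lists common acceptable variations.
short_answer_key = {
    4: ["routing number", "bank routing number", "routing transit number"],
    5: ["account number", "customer account number"],
}

def score_multiple_choice(item, response):
    return response.strip().upper() == multiple_choice_key[item]

def score_short_answer(item, response):
    # Normalize before comparing so "Routing Number" matches "routing number".
    normalized = response.strip().lower()
    return normalized in short_answer_key[item]

print(score_multiple_choice(1, "c"))                 # True
print(score_short_answer(4, "Bank routing number"))  # True
print(score_short_answer(5, "the number"))           # False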
You may have already taken essay tests that have been scored by a computer with-
out realizing it. At one time, these student responses were scored by multiple human
raters who had to undergo intensive training and were reviewed frequently for their
consistency and reliability. In several state and national tests, essays are now scored
strictly by technology using artificial intelligence. Vantage Learning has developed
software it calls IntelliMetric that uses artificial intelligence (AI) and examples of stu-
dent writing to review and score student writing assessments. The computer scoring
has been compared to and often surpasses the reliability of human scoring. The prob-
lem is that the scoring engine requires hundreds if not thousands of student samples
to provide the best results, so it is still impractical for use in most classrooms. However,
some states that have shifted to the assessment of student writing online also provide
websites that teachers and students can use to practice and receive feedback.
scoring. Usually, teachers turn to the use of checklists or rubrics to score these types
of assessments and to reduce the subjective quality of their judgments.
Checklists are a simple way to score the observation or demonstration of a skill.
These can be factual recall skills or more complex skills involving analysis and evalu-
ation. You can create a simple yes-no checklist in which the parameter or skill you are
looking for is deemed either present or not. For example, the science lab checklist in
Figure 7.8 simply lists standard safety behaviors students should follow.
A more complex checklist can use a scale that indicates the degree to which the
parameter you are looking for is present, from a little to a lot. A checklist can help
you monitor your students’ cognitive development. That is, you can determine gaps
in student schema (or patterns of understanding), which can help you determine
whether they need reteaching or other supplemental instruction. From a develop-
mental perspective, you can monitor what your students are able to do indepen-
dently and how their skills develop over time.
You can also use checklists for specific assessments, such as when observing a stu-
dent demonstrating required concepts during a think-aloud. In the think-aloud pro-
cess, students verbalize what they are thinking when completing a task, such as read-
ing a story, solving a math proof, or even working on complex problems that may
require advanced skill in collaboration with others. The think-aloud process helps to
generate artifacts of student cognition, as students are actually telling you what they
are thinking, the strategies they are using, and questions and concerns they have. The
chart in Figure 7.9 shows a simple method for collecting data from a think-aloud pro-
tocol used to monitor comprehension of a reading passage. Checks are placed in the
corresponding column for each time a behavior is observed. Comments can be
inserted in the final column and are not necessary for a simple performance audit.
Checklists can also be used to score projects or performances. In this case, some
care should be taken to determine whether the presence of an item on the checklist
actually corresponds to student understanding and skill. Are you grading the student
or the project? Multiple measures of assessment may help you develop a clear picture
of student understanding.
Checklists can be developed and implemented using a variety of technologies, and
since their purpose is to quickly and easily collect data, the use of technologies to
capture, store, and report that data makes them even more powerful. Some simple
checklists may be available for use on PDAs or tablet computers that allow you to
Figure 7.8
Science lab checklist.
Scoring Expectations and Practices 179
enter data onscreen with a stylus. And although not every helpful checklist or inven-
tory is currently available for handheld devices, common productivity software, such
as word-processing and database software, is supported by many of these devices and
can allow you to quickly develop those checklists you use regularly for assessment.
Storing the results of checklist data in spreadsheets and databases provides powerful
analysis and reporting features. Simple summaries and graphs can be created quickly
that give you individual and group profiles, and also can help you determine the need
to modify your instruction for reteaching or enrichment. Portable devices can also
support many common observation tools, such as checklists of content-based beha-
viors and interview protocols.
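As an illustration of why storing checklist data electronically pays off, the brief Python sketch below tallies a set of yes/no checklist observations into per-student and per-skill summaries, the same kinds of individual and group profiles a spreadsheet or database report would give you. The students, skills, observations, and the 75 percent reteaching threshold are hypothetical.

from collections import defaultdict

# Hypothetical checklist data: (student, skill, observed?) records such as
# a teacher might collect on a handheld device during a lab activity.
observations = [
    ("Ana",    "wears safety goggles", True),
    ("Ana",    "labels all samples",   True),
    ("Marcus", "wears safety goggles", True),
    ("Marcus", "labels all samples",   False),
    ("Jin",    "wears safety goggles", False),
    ("Jin",    "labels all samples",   True),
]

by_student = defaultdict(list)
by_skill = defaultdict(list)

for student, skill, observed in observations:
    by_student[student].append(observed)
    by_skill[skill].append(observed)

# Individual profiles: how many observed behaviors each student demonstrated.
for student, results in by_student.items():
    print(f"{student}: {sum(results)}/{len(results)} behaviors observed")

# Group profile: which skills may need reteaching for the whole class.
for skill, results in by_skill.items():
    rate = sum(results) / len(results)
    note = "  <- consider reteaching" if rate < 0.75 else ""
    print(f"{skill}: {rate:.0%} of students{note}")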
Generally, checklists are rather one-dimensional. Usually, either the students did the
tasks or they didn’t. Rubrics, however, provide an added dimension that allows both you
and your students to determine gradations in quality. Rubrics are common methods for
assessing performance-based projects, especially those supported by technology. Rubrics
are malleable and can be created for any content area and assessment mode, such as the
scoring of projects, essays, portfolios, or live or videotaped student performances.
Rubrics are framed by some type of scale, but the degrees of the scale are clearly
described or defined to demonstrate different levels of quality. Generally, a three-,
four-, or five-point scale is manageable depending on the complexity of the task, proj-
ect, or performance to be scored. Too many “quality” levels for a simple skill or too
few for complex skills erode the effectiveness of the rubric. Another consideration is
whether to begin your rubric scale at no points (0) or 1. Your rubric should relate to
the standards or learning goals for the activity, lesson, or project and the descrip-
tions should clearly describe the levels of performance rather than subjective judg-
ments (Brookhart, 1999). For example, a descriptor for an exemplary writing sample
that notes that all sentences and proper nouns begin with a capital letter is a clear
description of the expected level of performance, whereas use of the terms "good" or "weak" is subjective and provides little concrete feedback or a justifiable position.
Rubrics can be analytic or holistic (Brookhart, 1999). An analytic rubric breaks
the assessment down into component categories (see Figure 7.10). For example, an
analytic rubric for a student history presentation may include categories about accu-
racy of information, proper grammar and spelling, writing style, as well as elements
of design. A holistic rubric may have descriptors that touch on each of these ele-
ments but it does not break them down into separate rating scales per category (see
Figure 7.11).
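To see the difference in practice, the hypothetical Python sketch below represents an analytic rubric like the one in Figure 7.10 as a simple data structure: each category gets its own rating on the rubric's 0 to 3 scale, which can then be reported per category or totaled. A holistic rubric, by contrast, stores just one overall score per student. The category names and ratings are taken loosely from Figure 7.10 for illustration.

# Hypothetical analytic-rubric scoring sketch, modeled loosely on Figure 7.10.
# Each category is rated on the rubric's 0-3 scale.

categories = ["content features", "grammar and punctuation",
              "writing style", "general design features"]

# One student's ratings, as a teacher might record them.
ratings = {
    "content features": 3,
    "grammar and punctuation": 2,
    "writing style": 2,
    "general design features": 3,
}

# Analytic reporting: feedback per category plus a total.
for category in categories:
    print(f"{category}: {ratings[category]}/3")

total = sum(ratings.values())
print(f"Total: {total}/{3 * len(categories)}")

# A holistic rubric would instead record a single overall judgment, e.g.:
holistic_score = 2  # one rating against the full descriptors in Figure 7.11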
Rubrics have become popular due to their valuable pedagogical aspects. Rubrics
can help you determine the activities and resources needed in your instruction.
Since rubrics define the different degrees of exemplary products and performances,
they are likely to delineate the critical skills and knowledge necessary for mastery.
For example, a rubric that defines excellence regarding the appropriate citation of
Internet resources requires your students to 1) find and evaluate appropriate web-
based resources and then to 2) cite them according to an accepted standard.
Figure 7.9
Collecting data using a think-aloud checklist.
Content features
  Score 0: Several of the required content elements are not included or are inaccurate.
  Score 1: Some of the content elements are not included or are inaccurate.
  Score 2: All of the content elements are included.
  Score 3: All content elements are included and are explained with significant detail and supporting data.

Grammar and punctuation
  Score 0: There are many errors in grammar and punctuation.
  Score 1: There are a few errors in grammar and punctuation.
  Score 2: There are one or two errors in grammar and punctuation.
  Score 3: There are no errors in grammar and punctuation.

Writing style
  Score 0: The writing is very difficult to read throughout with little or no variation in sentence structure, and vocabulary is below grade level.
  Score 1: The writing is somewhat difficult to read in some points with little variation in sentence structure, and vocabulary is below grade level.
  Score 2: The writing is easy to read with some variation of sentence structures and appropriate grade-level vocabulary.
  Score 3: The writing is both interesting and easy to read with a variety of sentence structures and appropriate and challenging vocabulary.

General design features
  Score 0: The information is difficult to view and text is difficult to read. The use of images, colors, and other media consistently detracts from the presentation of the information.
  Score 1: Some of the information is difficult to view and/or text may be difficult to read. The use of images, colors, and other media may detract from the presentation of the information in some instances.
  Score 2: The information is presented clearly with text that is easy to read. The use of images, colors, and other media does not detract from the presentation of the information.
  Score 3: The information is presented in a creative manner with text that is easy to read. The use of images, colors, and other media is imaginative and adds to the presentation of the information.

Figure 7.10
Analytic rubric for multimedia project.
they’ve never done this before, your rubric reminds you to provide them with this
knowledge before they can successfully meet the required criteria.
Another pedagogical value of rubrics is realized when they are jointly developed
with students. Although this does take some time and you may not choose this
approach each time you create a rubric, the process helps students develop skills in
determining what constitutes best performance. Providing students with examples of
differences in quality of performance or products can help them grasp the differences
between various degrees of acceptable performance—for example, a 3-point and a 4-
point performance. Students can then apply this understanding when creating their
own products or performances.
Rubrics also provide a mechanism for providing detailed feedback to students
(Andrade, 2005). They set expectations at the beginning of your lesson so students
can better set their own goals for performance. They also provide support for forma-
tive self-assessment and peer assessment so that students can receive critical support
for determining their levels of performance and either continuing with or selecting
different strategies to complete their projects. Underlining or circling critical ele-
ments in the descriptors in a rubric can be much quicker than generating detailed
feedback for every student. A descriptor for exemplary performance that notes that
“there are no spelling errors” when compared to a descriptor that states “there are 2
or 3 misspelled words” gives the students a real measure for determining excellence.
3 Points—All content elements are included and are explained with significant detail and supporting data. There
are no errors in grammar and punctuation. The writing is both interesting and easy to read with a variety of sen-
tence structures and appropriate and challenging vocabulary. The information is presented in a creative manner
with text that is easy to read. The use of images, color, and other media is imaginative and adds to the presentation
of the information.
2 Points—All of the content elements are included. There are one or two errors in grammar and punctuation. The
writing is easy to read with some variation of sentence structures and appropriate grade-level vocabulary. The infor-
mation is presented clearly with text that is easy to read. The use of images, colors, and other media do not detract
from the presentation of the information.
1 Point—Some of the content elements are not included or are inaccurate. There are several errors in grammar and
punctuation. The writing is difficult to read in some parts with little variation in sentence structure and vocabulary is
below grade level. Some of the information is difficult to view and/or text may be difficult to read. The use of
images, colors, and other media may detract from the presentation of the information in some instances.
0 Points—Several of the required content elements are not included or are inaccurate. There are many errors in
grammar and punctuation. The writing is very difficult to read throughout with little or no variation in sentence struc-
ture and below grade-level vocabulary. The information is difficult to view and text is difficult to read. The use of
images, colors, and other media consistently detract from the presentation of the information.
Figure 7.11
Holistic rubric for multimedia project.
However, Shepard (2000) also cautions that simply providing explicit criteria may
not truly promote student learning if students learn to mechanically address the cri-
teria without actually developing the relevant skills or knowledge. She suggests that
students be allowed to use self-assessment as a way to understand what the criteria
mean, not just to apply them mechanically.
Rubrics can be created using commonly available software applications. There are
also websites that not only allow you to enter your descriptors to automatically gen-
erate a rubric but that house rubric examples and templates you can use for guidance.
The popular Rubistar website by 4Teachers allows you to quickly create, customize,
and save a rubric. If you are just starting out with creating rubrics, the Rubistar engine can even suggest descriptors for each level of your rubric for a range of common teaching models, such as the 6+1 Trait Writing Model, or even for specific skills, such as the use of manipulatives or the explanation of mathematical concepts. In addition to using this and similar web-based rubric generators, you may want to collaborate with other teachers in your school or district to develop rubrics based on your state's content standards. Joint development and use helps to improve the validity and reliability of your assessments and rubrics. Rubric templates, examples, and actual rubrics matched to lesson and unit plans can be stored electronically on shared directories or within lesson-planning software.

WEB LINK
Visit the textbook's companion website for resources to help you create rubrics for use in your classroom.
Figure 7.12
Electronic gradebooks typically include powerful graphic analysis tools. Courtesy Edline.
Reporting to Stakeholders
1. Investigate common features of student information systems (SIS). You can do this by
investigating product websites, reviewing product descriptions in journals and magazines,
visiting a local school that utilizes one, or attending an educational conference and view-
ing vendor demonstrations. You may even consider asking for a SIS demonstration for you
and others in your class from a product representative. It may be best to divide and con-
quer by having teams or groups of students investigate different products and share their
information.
2. As you review various systems, consider the following questions:
· What technology skills, required by the SIS, are you confident with and which ones
would require practice or training?
· What kinds of data are collected by the SIS? Can you include standards or learning
goals and student progress toward those standards?
· What kind of progress reports can be created for sharing with students in class or with
parents, such as in a parent conference?
· How is two-way communication supported between you and stakeholder groups? Does
it provide options for families with limited Internet access, or for exporting data in different formats?
3. Compare and discuss your findings with other members of your class.
Figure 7.13
Comparison of three states’ standards and objectives for writing.
objectives in each standard for each state. But many of the other content standards,
although not mentioning specific technologies, can easily be demonstrated by writing
projects that utilize common technology-supported activities, whether electronic
journaling, using a word processor, creating an electronic presentation, developing a
web page, or participating in a threaded discussion. The choices you make for doing
so are governed by your content knowledge and technology proficiency, as well as the
technology proficiency of your students.
Over time, many of these standards for writing—like many content standards—
will remain the same. The technologies, however, will change. By selecting technologies
appropriate for the assessment of these content standards, you are also providing the
opportunity to evaluate your students’ technology proficiencies. As you can see from
reviewing these example standards, you may do so explicitly or by embedding tech-
nology proficiencies in the way you teach and your students learn relevant content.
ISTE NETS-T Standard 2 requires you to “design, develop, and evaluate authentic
learning experiences and assessments incorporating contemporary tools and
resources to maximize content learning in context and to develop the knowledge,
skills, and attitudes identified in the NETS•S.” Furthermore, you are expected to “pro-
vide students with multiple and varied formative and summative assessments
aligned with content and technology standards and use resulting data to inform
learning and teaching.” Throughout this chapter, we have been preparing you to do
this. With the shifting emphasis from isolated technology standards to embedding
technology proficiencies in content standards, your decisions for assessing technol-
ogy proficiency fall in line with assessing students’ mastery of your content standards
as a whole. As you develop and implement lessons and units, the technologies you
select for your activities and assessments should be representative of those embed-
ded in your content domain. Assessing student technology proficiency then becomes
part of monitoring and evaluating how well your students achieve your learning
goals.
Chapter Summary
This chapter has emphasized the critical role of assessment when developing authen-
tic learning experiences. Assessment is more than the assigning of grades and serves
a critical role in monitoring and evaluating the academic progress of your students.
In Chapter 6, we discussed the value of using a variety of formal and informal data to
inform your instructional decisions. Assessments should be woven throughout
instruction to serve many purposes and can take many forms. Just as you will vary
your instruction and select multiple methods for presenting your instruction, you
will draw upon multiple assessment formats and tools to support them. And
although it will be important for you to be able to monitor the technology proficien-
cies of your students, you may find that the technologies you choose to do so will
depend most on making good choices for supporting your instruction and assess-
ment. Assessment data also help you determine the effectiveness of your own
instructional choices including the selection of technology-based resources. As a
teacher, it is important to know whether the way you teach, with or without digital technology, is effective in helping students learn the intended content.
YOUR PORTFOLIO
To begin to demonstrate competency in ISTE NETS-T Standard 2.d, return to the les-
son activities you began in earlier chapters and add your assessment strategies.
1. Develop assessments and scoring guidelines based on the lesson or unit plan,
to which technology is a major contributor.
2. Make sure your assessments are clearly connected to the content and technol-
ogy standards addressed by your lesson as well as the activities contained
within the lesson.
References
Andrade, H. G. (2005). Teaching with rubrics: The good, the bad, and the ugly. College Teaching, 53(1), 27–30.
Assessment Reform Group. (1999). Assessment for learning: Beyond the black box. Cambridge, UK: University of Cambridge School of Education.
Bennett, R. E. (1998). Reinventing assessment: Speculations on the future of large-scale educational testing. Princeton, NJ: Educational Testing Service (ETS).
Bennett, R. E. (2001). How the Internet will help large-scale assessment reinvent itself. Education Policy Analysis Archives, 9(5). Retrieved January 22, 2001, from http://epaa.asu.edu/epaa/v9n5.html
Brookhart, S. M. (1999). The art and science of classroom assessment: The missing part of pedagogy. ASHE-ERIC Higher Education Report, 27(1). Washington, DC: The George Washington University, Graduate School of Education and Human Development.
Butler, D. H., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65, 245–281.
Dick, W., Carey, L., & Carey, J. O. (2005). The systematic design of instruction (6th ed.). New York: HarperCollins.
Educational Testing Service (ETS). (2001, July/August). People in the know series: Ray Christensen. Capital News & Views, 1–3. Washington, DC: State and Federal Relations Office.
Fletcher, J. D. (1992). Individualized systems of instruction. Alexandria, VA: Institute for Defense Analyses. (ERIC Document Reproduction Service No. ED 355 917)
Meijer, R. R., & Mering, M. L. (1999). Computerized adaptive testing: Overview and introduction. Applied Psychological Measurement, 23, 187–194.
No Child Left Behind Act of 2001, Pub. L. No. 107-110.
Public Schools of North Carolina. (2006). North Carolina standard course of study. Retrieved May 9, 2006, from http://www.dpi.state.nc.us/curriculum/
Shepard, L. A. (2000). The role of classroom assessment in teaching and learning. CSE Technical Report 517. Los Angeles: Center for the Study of Evaluation, Standards, and Student Testing (CRESST).
Svinicki, M. D. (2004). Authentic assessment: Testing in reality. New Directions for Teaching and Learning, 2004(100), 23–39.
Wainer, H. (2000). Introduction and history. In H. Wainer (Ed.), Computerized adaptive testing: A primer (2nd ed., pp. 1–21). Mahwah, NJ: Lawrence Erlbaum.
Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco: Jossey-Bass.