Using CBM For Progress Monitoring in Reading
Teachers can use CBM to:
• identify students who are not demonstrating adequate progress and therefore require additional or alternative forms of instruction; and
• compare the efficacy of different forms of instruction and thereby design more effective, individualized instructional programs for problem learners.
Traditional assessments used in schools are generally lengthy tests that are not administered on
a regular basis. Many times, traditional assessments are administered to students once per year,
and teachers do not receive their students’ scores until weeks or months later, sometimes after
the school year is complete. Because teachers do not receive immediate feedback, they cannot
use these assessments to adapt their teaching methods or instructional programs in response to
the needs of their students.
Another problem with traditional assessments is that student scores are interpreted against national norms and averages. In fact, the students in a teacher’s classroom may differ tremendously from
a national sample of students. CBM allows teachers to compare an individual student’s data to
data on other students in their classroom. Schools or school districts may also collect normative
data on the students within their own school or district to provide teachers with a local
normative framework for interpreting scores.
Curriculum-based assessment (CBA) is a broader term than CBM. As defined by Tucker (1987), CBM meets the three requirements of curriculum-based assessment.
A second distinctive feature of CBM is that it is highly prescriptive and standardized, which yields reliable and valid scores. CBM provides teachers with a standardized set of
materials that has been researched to produce meaningful and accurate information. By
contrast, the adequacy of teacher-developed CBA tests and commercial CBA tests is largely
unknown. It is uncertain whether scores on those CBA tests represent performance on
meaningful, important skills and whether the student would achieve a similar score if the test
were re-administered.
CBM is used to monitor student progress across the entire school year. Students are given
standardized reading probes at regular intervals (weekly, bi-weekly, monthly) to produce
accurate and meaningful results that teachers can use to quantify short- and long-term student
gains toward end-of-year goals. With CBM, teachers establish long-term (i.e., end-of-year) goals
indicating the level of proficiency students will demonstrate by the end of the school year.
CBM tests (also called “probes”) are relatively brief and easy to administer. The probes are
administered the same way every time. Each probe is a different test, but the probes assess the
same skills at the same difficulty level. The reading probes have been prepared by researchers
or test developers to represent curriculum passages and to be of equivalent difficulty from
passage to passage within each grade level.
Probes are scored for reading accuracy and speed, and student scores are graphed for teachers
to consider when making decisions about the instructional programs and teaching methods for
each student in the class. CBM provides a practical and technically sound approach for quantifying student progress. Using CBM, teachers can determine quickly whether an educational intervention is helping a student.
Currently, CBM probes are available in reading, math, writing, and spelling. This manual
focuses on reading CBM. Appendix A contains a list of CBM resources and how to obtain CBM
reading probes and computer software.
CBM Research
Research has demonstrated that when teachers use CBM to inform their instructional decision
making, students learn more, teacher decision making improves, and students are more aware
of their own performance (e.g., Fuchs, Deno, & Mirkin, 1984). CBM research, conducted over
the past 30 years, has also shown CBM to be reliable and valid (e.g., Deno, 1985; Germann &
Tindal, 1985; Marston, 1988; Shinn, 1989).
Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to
establish growth standards for students with learning disabilities. School Psychology Review,
30, 507–526.
Fuchs, D., Roberts, P. H., Fuchs, L. S., & Bowers, J. (1996). Reintegrating students with learning
disabilities into the mainstream: A two-year study. Learning Disabilities Research and Practice, 11,
214–229.
Reports a study that evaluated the short- and long-term effects of 3 variants of a case-by-
case process for readying students to move successfully from resource rooms to regular
classrooms for math instruction. Preparation for this transition included use of curriculum-
based measurement and transenvironmental programming, each alone and in combination.
Teachers using the more complex variants of the case-by-case process were more successful
at moving students across settings and fostering greater math achievement and positive
attitude change, especially while the students were still in special education. At 1-year
follow-up, about half of the students either never were reintegrated or were moved to the
mainstream temporarily, only to be returned to special education.
Fuchs, L. S., & Deno, S. L. (1991). Paradigmatic distinctions between instructionally relevant
measurement models. Exceptional Children, 57, 488–501.
Explains how CBM differs from most other forms of classroom-based assessment.
Fuchs, L. S., & Deno, S. L. (1994). Must instructionally useful performance assessment be based
in the curriculum? Exceptional Children, 61, 15–24.
Examines the importance of sampling testing material from the students’ instructional
curricula; concludes that sampling from the curriculum is not essential; and proposes three
features critical to insure the instructional utility of measurement.
Fuchs, L. S., & Fuchs, D. (1992). Identifying a measure for monitoring student reading progress.
School Psychology Review, 21, 45–58.
Summarizes the program of research conducted to explore CBM reading measures other
than reading aloud.
Fuchs, L. S., & Fuchs, D. (1996). Combining performance assessment and curriculum-based
measurement to strengthen instructional planning. Learning Disabilities Research and Practice, 11,
183–192.
Fuchs, L. S., & Fuchs, D. (1998). Treatment validity: A unifying concept for reconceptualizing
the identification of learning disabilities. Learning Disabilities Research and Practice, 13, 204–219.
Summarizes a substantial portion of the research base on the technical features and
instructional utility of CBM; provides a framework for using CBM within a treatment
validity approach to LD identification, within which students are identified for special
education when their level of achievement and rate of improvement is substantially below
that of classroom peers and when, despite intervention efforts, they remain resistant to
treatment.
Fuchs, L. S., & Fuchs, D. (1999). Monitoring student progress toward the development of
reading competence: A review of three forms of classroom-based assessment. School Psychology
Review, 28, 659–671.
Fuchs, L. S., & Fuchs, D. (2000). Curriculum-based measurement and performance assessment.
In E. S. Shapiro & T. R. Kratochwill (Eds.), Behavioral assessment in schools: Theory, research, and
clinical foundations (2nd ed., pp. 168–201). New York: Guilford.
Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1993). Technological advances linking the assessment
of students’ academic proficiency to instructional planning. Journal of Special Education
Technology, 12, 49–62.
Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1994). Strengthening the connection between
assessment and instructional planning with expert systems. Exceptional Children, 61, 138–146.
Summarizes the program of research conducted on expert systems used in conjunction with
CBM to enhance teachers’ capacity to use classroom-based assessment to improve planning
and increase student learning.
Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (in press). Using technology to facilitate and enhance
curriculum-based measurement. In K. Higgins, R. Boone, & D. Edyburn (Eds.), The handbook of
special education technology research and practice. Whitefish Bay, WI: Knowledge by Design, Inc.
Describes a research program conducted over the past 18 years to examine how CBM
technology can be used to enhance implementation.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Phillips, N. B., & Karns, K. (1995). General educators’
specialized adaptation for students with learning disabilities. Exceptional Children, 61, 440–459.
Reports a study that examined general educators’ specialized adaptation for students with
learning disabilities, in conjunction with peer-assisted learning strategies and curriculum-
based measurement; findings revealed that (a) teachers who were provided with support to
implement adaptations engaged differentially in specialized adaptation, and their thinking
about how they planned for their students with LD changed and (b) although some teachers
implemented substantively important, individually tailored adjustments, others relied on
adaptations that were uninventive and limited.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Stecker, P. M. (1991). Effects of curriculum-based
measurement and consultation on teacher planning and student achievement in mathematics
operations. American Educational Research Journal, 28, 617–641.
Reports an experimental study contrasting CBM, CBM with expert systems, and standard
treatment; results showed the importance of helping teachers translate classroom-based
assessment information via instructional consultation.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Thompson, A., Roberts, P. H., Kubek, P., & Stecker, P. M.
(1994). Technical features of a mathematics concepts and applications curriculum-based
measurement system. Diagnostique, 19(4), 23–49.
Reports a study investigating the reliability and validity of a CBM system focused on the
concepts and applications mathematics curriculum; results supported the technical
adequacy of the CBM graphed scores as well as the CBM diagnostic skills analysis.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of
academic progress: How much growth can we expect? School Psychology Review, 22, 27–48.
Reports normative information on CBM slopes in reading, spelling, and math expected for
typically developing students.
Fuchs, L. S., Fuchs, D., Hosp, M., & Hamlett, C. L. (2003). The potential for diagnostic analysis
within curriculum-based measurement. Assessment for Effective Intervention, 28(3&4), 13–22.
Fuchs, L. S., Fuchs, D., Hosp, M., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of
reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading,
5, 239–256.
Considers oral reading fluency as an indicator of overall reading competence. The authors
examined theoretical arguments for supposing that oral reading fluency may reflect overall
reading competence, reviewed several studies substantiating this phenomenon, and
provided an historical analysis of the extent to which oral reading fluency has been
incorporated into measurement approaches during the past century.
Fuchs, L. S., Fuchs, D., Karns, K., Hamlett, C. L., Dutka, S., & Katzaroff, M. (2000). The
importance of providing background information on the structure and scoring of performance
assessments. Applied Measurement in Education, 13, 83–121.
Fuchs, L. S., Fuchs, D., Karns, K., Hamlett, C. L., Katzaroff, M., & Dutka, S. (1997). Effects of
task-focused goals on low-achieving students with and without learning disabilities. American
Educational Research Journal, 34(3), 513–544.
Reports a study that examined the effects of a task-focused goals treatment in mathematics,
using curriculum-based measurement. CBM students reported enjoying and benefiting from
CBM, chose more challenging and a greater variety of learning topics, and increased their
effort differentially. Increased effort, however, was associated with greater learning only for
low achievers in TFG without learning disabilities.
Fuchs, L. S., Fuchs, D., Karns, K., Hamlett, C. L., & Katzaroff, M. (1999). Mathematics
performance assessment in the classroom: Effects on teacher planning and student learning.
American Educational Research Journal, 36(3), 609–646.
Gersten, R., & Dimino, J. A. (2001). The realities of translating research into classroom practice.
Learning Disabilities Research and Practice, 16, 120–130.
Hosp, M. K., & Hosp, J. (2003). Curriculum-based measurement for reading, math, and
spelling: How to do it and why. Preventing School Failure, 48(1), 10–17.
Provides a rationale for collecting and using curriculum-based measurement (CBM) data as
well as providing specific guidelines for how to collect CBM data in reading, spelling, and
math. Relying on the research conducted on CBM over the past 25 years, the authors define
what CBM is and how it is different from curriculum-based assessment (CBA). Authors
describe in detail how to monitor student growth within an instructional program using
CBM data in reading, spelling, and math. Reasons teachers should collect and use CBM data
are also discussed.
Phillips, N. B., Hamlett, C. L., Fuchs, L. S., & Fuchs, D. (1993). Combining classwide curriculum-
based measurement and peer tutoring to help general educators provide adaptive education.
Learning Disabilities Research and Practice, 8, 148–156.
Provides an overview of the math PALS methods for practitioners, with a brief summary of
an efficacy study.
Stecker, P. M., & Fuchs, L. S. (2000). Effecting superior achievement using curriculum-based
measurement: The importance of individual progress monitoring. Learning Disabilities Research
and Practice, 15, 128–134.
Step 1: How to Place Students in a Reading CBM Task for Progress Monitoring
The first step in implementing CBM in reading is to decide which task is developmentally appropriate for each reader to be monitored over the academic year. For students who are developing at a typical rate in reading, the appropriate CBM tasks are as follows:
• At Kindergarten, Letter Sound Fluency.
– Select Letter Sound Fluency if you are more interested in measuring students’ progress toward decoding.
• At Grade 1, Word Identification Fluency.
• At Grades 2–3, Passage Reading Fluency.
– See the next section for determining which level of passages to use for progress monitoring.
• At Grades 4–6, Maze Fluency.
– Use the guidelines in the next section for determining which level of passages to use for progress monitoring.
Note: Once you select a task for CBM progress monitoring (and for Passage Reading Fluency or
Maze Fluency, a grade level of passages for progress monitoring), stick with that task (and level
of passages) for the entire year.
Step 2: How to Identify the Level of Material for Monitoring Progress for Passage
Reading Fluency and Maze Fluency
For Passage Reading Fluency (PRF) and Maze Fluency, teachers use CBM passages written at
the student’s current grade level. However, if a student is well below grade-level expectations, then he or she may need to read from a lower grade-level passage. If a student seems too delayed in reading for the grade-level passages to be appropriate, then teachers can find the appropriate CBM level by following these steps (a short code sketch after the list summarizes the logic).
1. Determine the grade level text at which you expect the student to read competently by
year’s end.
2. Administer 3 passages at this level. Use generic CBM Passage Reading Fluency (PRF)
passages, not passages that teachers use for instruction.
• If the student reads fewer than 10 correct words in 1 minute, then use the CBM word
identification fluency measure instead of CBM PRF or CBM Maze Fluency for
progress monitoring.
• If the student reads between 10 and 50 correct words in 1 minute but less than 85–
90% correct, then move to the next lower level of text and try 3 passages.
• If the student reads more than 50 correct words in 1 minute, then move to the
highest level of text where he/she reads between 10 and 50 words correct in 1
minute (but not higher than the student’s grade-appropriate text).
3. Maintain the student on this level of text for the purpose of progress monitoring for the
entire school year.
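The placement steps above reduce to a simple decision procedure. The sketch below is a hypothetical illustration: the function and variable names are my own, and it assumes the median of the 3 generic passages is used as the deciding score.

```python
# A minimal sketch of the placement steps above; not official CBM materials.
from statistics import median

def placement_decision(wcpm_scores, pct_correct, level, grade_level):
    """Decide what to do after one round of 3 passages at `level`.

    wcpm_scores: correct words per minute on the 3 passages
    pct_correct: proportion of words read correctly on the 3 passages
    level: grade level of the passages just tried
    grade_level: the student's grade-appropriate level (upper bound)
    """
    wcpm = median(wcpm_scores)
    if wcpm < 10:
        # Too few correct words: monitor with Word Identification Fluency.
        return ("use WIF", None)
    if wcpm <= 50:
        if median(pct_correct) < 0.85:
            # 10-50 words but under 85-90% correct: drop one level, retry.
            return ("retry", level - 1)
        return ("monitor here", level)
    # More than 50 words: try harder text, but never above grade level.
    if level < grade_level:
        return ("retry", level + 1)
    return ("monitor here", grade_level)

# Example: a 4th grader reads a median of 42 wcpm at 93% accuracy on
# grade 3 passages, so grade 3 is the monitoring level for the year.
print(placement_decision([40, 42, 45], [0.92, 0.93, 0.95], 3, 4))
```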
With Reading CBM, students read letters, isolated words, or passages within a 1-minute time
span. The student has a “student copy” of the reading probe, and the teacher has an “examiner
copy” of the same probe. The student reads out loud for 1 minute while the teacher marks
student errors. The teacher calculates the number of letters or words read correctly and graphs
this score on a student graph. The CBM score is a general overall indicator of the student’s
reading competency (Fuchs, Fuchs, Hosp, & Jenkins, 2001).
In reading, the following CBM tasks are available at these grade levels:
• Letter Sound Fluency (LSF): Kindergarten
• Word Identification Fluency (WIF): Grade 1
• Passage Reading Fluency (PRF): Grades 1–8
• Maze Fluency: Grades 1–6 (typically used beginning at Grade 4)
A description of each of these CBM tasks follows. Information on how to obtain the CBM
materials for each task is available in Appendix A.
CBM Letter Sound Fluency (LSF) is used to monitor student progress in beginning decoding at
kindergarten.
CBM LSF is administered individually. The examiner presents the student with a single page
showing 26 letters in random order (Figure 1). The student has 1 minute to say the sounds that
correspond with the 26 letters. The examiner marks student responses on a separate score sheet
(Figure 2). The score is the number of correct letter sounds spoken in 1 minute. If the student
finishes in less than 1 minute, then the score is prorated. Five alternate forms, which can be
rotated through multiple times, are available.
Examiner: I’m going to show you some letters. You can tell me what sound the letters
make. You may know the sound for some letters. For other letters, you may not know the
sounds. If you don’t know the sound a letter makes, don’t worry. Okay? What’s most
important is that you try your best. I’ll show you how this activity works. My turn first.
(Refer to the practice portion of the CBM LSF sheet.) This says /b/. Your turn now. What
sound does it say?
Student: /b/
Examiner: Very good. You told me what sound the letter makes. (Correction procedures are
provided in the CBM LSF manual.) You’re doing a really good job. Now it will be just your
turn. Go as quickly and carefully as you can. Remember to tell me the sounds the letters
make. Remember, just try your best. If you don’t know the sounds, it’s okay. Trigger the
stopwatch.
When scoring CBM LSF, short vowel sounds (rather than long vowel sounds) are scored as correct. If the student
answers correctly, then the examiner immediately points to the next letter on the student copy.
If the student answers incorrectly, then the examiner marks the letter as incorrect by making a
slash through that letter on the teacher’s score sheet. If a student does not respond after 3
seconds, then the examiner points to the next letter. As the student reads, the examiner does not
correct mistakes.
At 1 minute, the examiner circles the last letters for which the student provides a correct sound.
If the student finishes in less than 1 minute, then the examiner notes the number of seconds it
took to finish the letters. The score is adjusted if completed in less than 1 minute. Information
on adjusting scores is available in the administration and scoring guide.
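The administration and scoring guide contains the official adjustment; as a rough illustration only, the sketch below uses a simple proration that scales the count to a 1-minute rate. The function is a hypothetical example, not the published rule.

```python
def prorated_score(correct, seconds_used):
    """Scale a correct count to a 1-minute rate when the student
    finishes early (assumed proration, not the official table)."""
    return round(correct * 60 / seconds_used)

# Example: 22 correct letter sounds in 45 seconds is about 29 per minute.
print(prorated_score(22, 45))  # 29
```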
Look at the following CBM LSF score sheet (Figure 3). Abby mispronounced 5 letter sounds in 1
minute. The last letter sound she said correctly (/r/) is circled. Her score for the LSF would be
18. A score of 18 would be charted on Abby’s CBM graph.
CBM Letter Sound Fluency is available from the University of Maryland and Vanderbilt
University. See Appendix A for contact information.
CBM Word Identification Fluency (WIF) is used to monitor students’ overall progress in
reading at first grade.
CBM WIF is administered individually. The examiner presents the student with a single page
with 50 words (Figure 4). The 50 words are drawn from the Dolch list of the 100 most frequent words or from the 500 most frequent words in The Educator’s Word Frequency Guide (Zeno, Ivens, Millard, & Duvvuri, 1995), with 10 words randomly selected from each hundred. The
student has 1 minute to read the words. The examiner marks student errors on a separate score
sheet (Figure 5). The score is the number of correct words spoken in 1 minute. If the student
finishes in less than 1 minute, then the score is prorated. Twenty alternate forms are available.
Examiner: When I say, ‘go,’ I want you to read these words as quickly and correctly as you
can. Start here (point to the first word) and go down the page (run your finger down the
first column). If you don’t know a word, skip it and try the next word. Keep reading until I
say, ‘stop.’ Do you have any questions? Trigger the stopwatch for 1 minute.
The teacher scores a word as a “1” if it is correct and a “0” if it is incorrect. The examiner uses a
blank sheet to cover the second and third columns. As the student completes a column, the
blank sheet is moved to expose the next column. If the student hesitates, then after 2 seconds
he/she is prompted to move to the next word. If the student is sounding out a word, then
he/she is prompted to move to the next word after 5 seconds. As the student reads, the
examiner does not correct mistakes and marks errors on the score sheet.
At 1 minute, the examiner circles the last word the student reads. If the student finishes in less
than 1 minute, then the examiner notes the number of seconds it took to complete the word list,
and the student score is adjusted.
Look at the following CBM WIF score sheet (Figure 6). Shameka mispronounced 7 words in 1
minute. The last word she read correctly (car) is circled. Her score for the WIF is 29. A score of
29 is charted on Shameka’s CBM graph.
CBM Word Identification Fluency is available from Vanderbilt University. See Appendix A for
contact information.
CBM Passage Reading Fluency (PRF) is used to monitor students’ overall progress in reading at
Grades 1–8. Some teachers prefer Maze Fluency beginning at Grade 4.
CBM PRF is administered individually. In general education classrooms, students take one PRF
test each week. Special education students take two PRF tests each week. Each PRF test uses a different passage of equivalent difficulty at the same grade level. For higher-performing general
education students, teachers might administer PRF tests (also referred to as “probes”) on a
monthly basis and have each student read three probes on each occasion.
For each CBM PRF reading probe, the student reads from a “student copy” that contains a
grade-appropriate reading passage (Figure 7). The examiner scores the student on an “examiner
copy.” The examiner copy contains the same reading passage but has a cumulative count of the
number of words for each line along the right side of the page (Figure 8). The numbers on the
teacher copy allow for quick calculation of the total number of words a student reads in 1
minute.
Examiner: I want you to read this story to me. You’ll have 1 minute to read. When I say,
‘begin,’ start reading aloud at the top of the page. Do your best reading. If you have
trouble with a word, I’ll tell it to you. Do you have any questions? Begin. Trigger the timer
for 1 minute.
The examiner marks each student error with a slash (/). At the end of 1 minute, the last word
read is marked with a bracket (]). If a student skips an entire line of a reading passage, then a
straight line is drawn through the skipped line. When scoring CBM probes, the teacher
identifies the count for the last word read in 1 minute and the total number of errors. The
teacher then subtracts errors from the total number of words to calculate the student score.
There are a few scoring guidelines to follow when administering reading CBM probes.
Repetitions (words said over again), self-corrections (words misread, but corrected within 3
seconds), insertions (words added to passage), and dialectical differences (variations in
pronunciation that conform to local language norms) are all scored as correct.
Mispronunciations, word substitutions, omitted words, hesitations (words not pronounced
within 3 seconds), and reversals (two or more words transposed) are all scored as errors.
Numerals are counted as words and must be read correctly within the context of the passage.
With hyphenated words, each morpheme separated by a hyphen(s) is counted as a word if it
can stand alone on its own (e.g., Open-faced is scored as two words but re-enter is scored as one
word). Abbreviations are counted as words and must be read correctly within the context of the
sentence.
As teachers listen to students read, they can note the types of decoding errors that students
make, the kinds of decoding strategies students use to decipher unknown words, how miscues
reflect students’ reliance on graphic, semantic, or syntactic language features, and how self-
corrections, pacing, and scanning reveal strategies used in the reading process (Fuchs, Fuchs,
Hosp, & Jenkins, 2001). Teachers can use these more qualitative descriptions of a student’s
reading performance to identify methods to strengthen the instructional program for each
student. More information about noting student decoding errors is covered under “Step 7: How
to Use the Database Qualitatively to Describe Student Strengths and Weaknesses.”
If a student skips several connected words or an entire line of the reading probe, the omission is calculated as 1 error. When this happens, all but 1 of the omitted words are also subtracted from the total number of words attempted in 1 minute.
Look at the following example (Figure 9). The student omitted text 2 times during the 1-minute
CBM PRF. The examiner drew a line through the omitted text. The first omission was on words
26–40. The examiner counts 15 words as omitted and drops 14 of the words before calculating
the total words attempted. The student also omitted words 87–100. The examiner drops 13 of
the 14 words before calculating the total words attempted.
To calculate the total number of words read in 1 minute, the examiner subtracts the 27 omitted words (14 words from the first omission plus 13 words from the second omission) from the total number of words read in 1 minute (122). The adjusted number of words attempted is then 95. The student made 7
errors (5 errors marked by slashes and 2 errors from omissions). These 7 errors are subtracted
from the adjusted number of words attempted of 95. 95 – 7 = 88. 88 is the number of words read
correctly in 1 minute.
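The arithmetic from the Figure 9 example can be written out as a short routine. This is a sketch of the omission rule described above; the span notation for skipped runs is my own.

```python
def prf_words_correct(last_word_count, slash_errors, omitted_spans):
    """Words read correctly in 1 minute under the CBM PRF omission rule.

    last_word_count: cumulative count of the last word reached (bracket)
    slash_errors: errors marked with slashes
    omitted_spans: (first, last) cumulative word counts of each skipped run
    """
    attempted = last_word_count
    errors = slash_errors
    for first, last in omitted_spans:
        run = last - first + 1   # words in the skipped run
        attempted -= run - 1     # drop all but 1 word from words attempted
        errors += 1              # the whole run counts as a single error
    return attempted - errors

# Figure 9: bracket at word 122, 5 slashed errors, runs 26-40 and 87-100.
print(prf_words_correct(122, 5, [(26, 40), (87, 100)]))  # 88
```

The same routine reproduces Reggie’s score in Figure 10: `prf_words_correct(136, 8, [(x, x + 14)])` for any 15-word run gives 113.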
Look at this sample CBM PRF probe (Figure 10). Reggie made 8 errors while reading the
passage for 1 minute. The straight line drawn through the fourth line shows that he also
skipped an entire line. The last word he read was “and” and a bracket was drawn after this
word. In all Reggie attempted 136 words. He skipped 15 words in the fourth line. 14 of those
skipped words are subtracted from the total words attempted (136 – 14 = 122) and 1 of those
skipped words is counted as an error. Reggie made 8 additional errors for a total of 9 errors. The
9 errors are subtracted from the 122 words attempted. 122 – 9 = 113. 113 is Reggie’s reading
score for this probe.
CBM PRF tests can be obtained from a variety of sources. See Appendix A for contact
information.
CBM Maze Fluency is available for students in Grades 1–6, but typically teachers use CBM
Maze Fluency beginning in Grade 4. Maze Fluency is used to monitor students’ overall progress
in reading.
CBM Maze Fluency can be administered to a group of students at one time. The examiner
presents each student with a maze passage (Figure 11). With CBM Maze, the first sentence in a
passage is left intact. Thereafter, every seventh word is replaced with a blank and three possible
replacements. Only one replacement is semantically correct. Students have 2.5 minutes to read
the passage to themselves and circle the correct word for each blank. The examiner monitors the
students during the 2.5 minutes and scores each test later. When the student makes 3
consecutive errors, scoring is discontinued (no subsequent correct replacement is counted).
Skipped blanks (with no circles) are counted as errors. The score is the number of correct
replacements circled in 2.5 minutes. Thirty alternate forms are available for each grade level.
Examiner: Look at this story. (Place practice maze on overhead.) It has some places where
you need to choose the correct word. Whenever you come to three words in parentheses
and underlined (point), choose the word that belongs in the story. Listen. The story
begins, “Jane had to take piano lessons. Her Mom and Dad made her do it. Jane
(from/did/soda) not like playing the piano.” Which one of the three underlined words
(from/did/soda) belongs in the sentence? (Give time for response.) That’s right. The word
that belongs in the sentence is did. So, you circle the word did. (Demonstrate.) Continue
through entire practice activity.
Now you are going to do the same thing by yourself. Whenever you come to three words
in parentheses and underlined, circle the word that belongs in the sentence. Choose a
word even if you’re not sure of the answer. When I tell you to start, pick up your pencil,
turn your test over, and begin working. At the end of 2 and a half minutes, I’ll tell you to
stop working. Remember, do your best. Any questions? Start. Trigger the timer for 2.5
minutes.
When scoring CBM Maze Fluency, students receive 1 point for each correctly circled answer.
Blanks with no circles are counted as errors. Scoring is discontinued if 3 consecutive errors are
made. The number of correct answers within 2.5 minutes is the student score.
Look at the following CBM Maze score sheet (Figure 12). Juan circled 16 correct answers in 2.5
minutes. He circled 7 incorrect answers. However, Juan did make 3 consecutive mistakes, and 5
of his correct answers were after his 3 consecutive mistakes. Juan’s score for the Maze Fluency
Test would be 10. A score of 10 would be charted on Juan’s CBM graph.
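The discontinue rule can be made concrete with a few lines of code. In this sketch, each blank is encoded as 'C' (correct circle), 'E' (incorrect circle), or 'S' (skipped blank, which counts as an error); the encoding is my own.

```python
def maze_score(responses):
    """Correct replacements counted until 3 consecutive errors occur
    (skipped blanks count as errors; later correct answers earn no credit)."""
    score = 0
    run_of_errors = 0
    for r in responses:
        if r == "C":
            score += 1
            run_of_errors = 0
        else:  # "E" or "S"
            run_of_errors += 1
            if run_of_errors == 3:
                break
    return score

# 10 correct answers before 3 consecutive errors; the 3 correct answers
# after the run earn no credit.
print(maze_score(list("CCCCCECCCCCEEECCC")))  # 10
```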
CBM Maze is available from AIMSweb, Edcheckup, and Vanderbilt University. Some of these
products include computerized administration and scoring of CBM Maze Fluency. See
Appendix A for contact information.
Once the CBM data for each student have been collected, it is time to begin graphing student
scores. Graphing the score from every CBM probe on an individual student graph is a vital aspect of the
CBM program. These graphs give teachers a straightforward way of reviewing a student’s
progress, monitoring the appropriateness of the student’s goals, judging the adequacy of the
student’s progress, and comparing and contrasting successful and unsuccessful instructional
aspects of the student’s program.
CBM graphs help teachers make decisions about the short- and long-term progress of each
student. Frequently, teachers underestimate the rate at which students can improve (especially
in special education classrooms), and the CBM graphs help teachers set ambitious, but realistic,
goals. Without graphs and decision rules for analyzing the graphs, teachers often stick with low
goals. By using a CBM graph, teachers can apply a set of standards to create more ambitious student goals and thereby improve student achievement. Also, CBM graphs provide teachers with
actual data to help them revise and improve a student’s instructional program.
Teachers have two options for creating CBM graphs of the individual students in the classroom.
The first option is that teachers can create their own student graphs using graph paper and
pencil. The second option is that teachers and schools can purchase CBM graphing software
that graphs student data and helps interpret the data for teachers.
It is easy to graph student CBM scores on teacher-made graphs. Teachers create a student graph
for each individual CBM student so they can interpret the CBM scores of every student and see
progress or lack thereof.
Teachers should create a master CBM graph in which the vertical axis accommodates the range
of the scores of all students in the class, from 0 to the highest score (Figure 13). On the
horizontal axis, the number of weeks of instruction is listed (Figure 14). Once the teacher creates
the master graph, it can be copied and used as a template for every student.
Figure 13. Highest Scores for Labeling Vertical Axes on CBM Graphs
[Figure 14: a blank CBM graph with the vertical axis labeled “Correctly Read Words Per Minute” (0–200) and the horizontal axis labeled with the number of instructional weeks (“Weeks of Instruction,” 1–14).]
Every time a CBM probe is administered, the teacher scores the probe and then records the
score on a CBM graph (Figure 15). A line can be drawn connecting each data point.
[Figure 15: the same blank graph with each week’s CBM score plotted and connected by a line (correctly read words per minute, 0–200, across weeks 1–14).]
Once a few CBM scores have been graphed, it is time for the teacher to decide on an end-of-year
performance goal for the student. There are three options. Two options are utilized after at least
three CBM scores have been graphed. One option is utilized after at least 8 CBM scores have
been graphed.
Option 1: For typically developing students at the grade level where the student is being monitored,
identify the end-of-year CBM benchmark. (See recommendations in Figure 16.) This is the end-
of-year performance goal. The benchmark, or end-of-year performance goal, is represented on
the graph by an X at the date marking the end of the year. A goal-line is then drawn between
the median of at least the first 3 CBM graphed scores and the end-of-year performance goal.
Figure 16. CBM End-of-Year Benchmarks
Grade Benchmark
Kindergarten 40 letter sounds per minute (CBM LSF)
1st 60 words correct per minute (CBM WIF)
50 words correct per minute (CBM PRF)
2nd 75 words correct per minute (CBM PRF)
3rd 100 words correct per minute (CBM PRF)
4th 20 correct replacements per 2.5 minutes (CBM Maze)
5th 25 correct replacements per 2.5 minutes (CBM Maze)
6th 30 correct replacements per 2.5 minutes (CBM Maze)
For example, the benchmark for a first-grade student is reading 60 words correctly in 1 minute
on CBM WIF. The end-of-year performance goal of 60 would be graphed on the student’s
graph. The goal-line would be drawn between the median of the first few CBM WIF scores and
the end-of-year performance goal.
The benchmark for a sixth-grade student is correctly replacing 30 words in 2.5 minutes on CBM
Maze Fluency. The end-of-year performance goal of 30 would be graphed on the student’s
graph. The goal-line would be drawn between the median of the first few CBM Maze Fluency
scores and the end-of-year performance goal.
Option 2: Identify the weekly rate of improvement for the target student under baseline conditions, using
at least 8 CBM data points. Multiply this baseline rate by 1.5. Take this product and multiply it
by the number of weeks until the end of the year. Add this product to the student’s baseline
score. This sum is the end-of-year goal.
For example, a student’s first 8 CBM scores were 10, 12, 9, 14, 12, 15, 12 and 14. To calculate the
weekly rate of improvement, find the difference between the highest score and the lowest score.
In this instance, 15 is the highest score and 9 is the lowest score: 15 – 9 = 6. Since 8 scores have
been collected, divide the difference between the highest and lowest scores by the number of
weeks: 6 ÷ 8 = 0.75.
0.75 is multiplied by 1.5: 0.75 × 1.5 = 1.125. Multiply the product of 1.125 by the number of
weeks until the end of the year. If there are 14 weeks left until the end of the year: 1.125 × 14 =
15.75. The median score of the first 8 data points was 12.00. The sum of 15.75 and the median
score is the end-of-year performance goal: 15.75 + 12.00 = 27.75. The student’s end-of-year
performance goal would be 28.0.
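The same arithmetic can be wrapped in a few lines. This is a minimal sketch that uses the max-minus-min baseline rate and the median baseline score exactly as in the worked example; the function name is my own.

```python
from statistics import median

def end_of_year_goal(baseline_scores, weeks_remaining):
    """End-of-year goal from at least 8 baseline CBM scores.

    Weekly baseline rate = (highest - lowest) / number of scores,
    raised by 1.5 and projected over the weeks remaining.
    """
    rate = (max(baseline_scores) - min(baseline_scores)) / len(baseline_scores)
    return median(baseline_scores) + 1.5 * rate * weeks_remaining

# Worked example: scores 10, 12, 9, 14, 12, 15, 12, 14 with 14 weeks left.
print(end_of_year_goal([10, 12, 9, 14, 12, 15, 12, 14], 14))  # 27.75
```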
Option 3: For typically developing students at the grade level where the student is being monitored, identify the average rate of weekly increase from a national norm chart (Figure 17).
The teacher creates an end-of-year performance goal for the student using one of the three
options. The performance goal is marked on the student graph at the year-end date with an “X.”
A “goal-line” is then drawn between the median of the initial graphed scores and the end-of-
year performance goal (Figure 18). The goal-line shows the teacher and the students how
quickly CBM scores should be increasing to reach the year-end goal.
[Figure 18: a student graph of WIF correctly read words (0–100) across weeks 1–14. The X plotted at a score of 60 at the year-end date is the end-of-year performance goal; the goal-line connects the median of the initial scores to the X.]
After deciding on an end-of-year performance goal and drawing the goal-line, teachers
continually monitor the student graph to determine whether student progress is adequate. This
tells the teacher whether the instructional program is effective. When at least 7 or 8 CBM scores
have been graphed, teachers draw a trend-line to represent the student’s actual progress. By
drawing the trend-line, teachers can compare the goal-line (desired rate of progress) to the
trend-line (actual rate of progress).
To draw a trend-line, teachers use a procedure called the Tukey method. The Tukey method
provides a fairly accurate idea of how the student is progressing.
Teachers use the Tukey method after at least 7 or 8 CBM scores have been graphed. First, the
teacher counts the number of charted scores and divides the scores into 3 fairly equal groups. If
the scores cannot be split into 3 groups equally, then try to make the groups as equal as
possible.
Draw two vertical lines to divide the scores into 3 groups. Look at the first and third groups of
data points. Find the median (middle) data point for each group and mark this point with an X.
To draw the trend-line, draw a line through the two Xs (Figure 19).
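The Tukey steps translate directly into code. In this sketch, the function name and the handling of leftover points are my own choices: the trend-line passes through the median of the first group and the median of the third group, each plotted at the middle week of its group.

```python
from statistics import median

def tukey_trend(scores):
    """Return (slope, intercept) of the Tukey trend-line for weekly scores.

    The scores are split into 3 fairly equal groups; any leftover points
    are placed in the middle group (one reasonable reading of the steps).
    """
    n = len(scores)
    size = n // 3                    # size of the first and third groups
    first, third = scores[:size], scores[n - size:]
    x1 = (1 + size) / 2              # middle week of the first group
    x2 = (n - size + 1 + n) / 2      # middle week of the third group
    y1, y2 = median(first), median(third)
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

# Example: 9 weekly scores split into groups of 3; medians 11 (week 2)
# and 16 (week 8) give a trend of about 0.83 words per week.
print(tukey_trend([10, 12, 11, 14, 13, 15, 16, 15, 18]))
```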
After the initial 7 or 8 data points are graphed and the Tukey method is used to create a trend-
line, the student graphs should be re-evaluated using the Tukey method every 7 or 8 additional
data points. Instructional decisions for students are based on the ongoing evaluation of student
graphs.
Let’s practice using the Tukey method. Draw a trend-line using the Tukey method (Figure 20).
[Figure 20: a practice WIF graph (correctly read words per minute, 0–100, weeks 1–14), followed by answer graphs showing the scores split into three groups, the medians of the first and third groups marked with Xs, and the trend-line drawn through the two Xs.]
CBM computer management programs are available for schools to purchase. The computer
scoring programs create graphs for individual students after the student scores are entered into
the program and aid teachers in making performance goals and instructional decisions. Other
computer programs actually collect and score the data.
Various types of computer assistance are available at varying fees. Information on how to
obtain the computer programs is in Appendix A.
AIMSweb provides a computer software program that allows teachers to enter student CBM
data, once they have administered and scored the tests, and then receive graphs and automated
reports based on a student’s performance. Teachers can purchase the software from AIMSweb.
A sample CBM report produced by AIMSweb is available in Appendix A.
DIBELS operates an online data system that teachers can use for the cost of $1 per student, per
year. With the data system, teachers can administer and score tests and then enter student CBM
scores and have student graphs automatically prepared. The data system also provides reports
for the scores of an entire district or school. A sample CBM report produced by DIBELS is
available in Appendix A.
Edcheckup operates a computer assistance program that allows teachers to enter student data or to administer and score tests online. Reports and graphs that track class and student progress are generated automatically. The program also guides teachers to set annual goals and evaluate
student progress. The Edcheckup program is available for a fee.
Step 6: How to Apply Decision Rules to Graphed Scores to Know When to Revise
Programs and Increase Goals
CBM can be used to judge the adequacy of student progress and the need to change instructional
programs. Researchers have demonstrated that CBM can be used to improve the scope and
usefulness of program evaluation decisions (Germann & Tindal, 1985) and to develop
instructional plans that enhance student achievement (Fuchs, Deno, & Mirkin, 1984; Fuchs,
Fuchs, & Hamlett, 1989a).
After teachers draw CBM graphs and trend-lines, they use graphs to evaluate student progress
and to formulate instructional decisions. Standard CBM decision rules guide decisions about
the adequacy of student progress and the need to revise goals and instructional programs.
• If the most recent 4 consecutive CBM scores are above the goal-line, then the student’s
end-of-year performance goal needs to be increased.
• If the most recent 4 consecutive CBM scores are below the goal-line, then the teacher
needs to revise the instructional program.
• If the student’s trend-line is steeper than the goal-line, then the student’s end-of-year
performance goal needs to be increased.
• If the student’s trend-line is flatter than the goal-line, then the teacher needs to revise the
instructional program.
• If the student’s trend-line and goal-line are the same, then no changes need to be made.
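These rules amount to a small decision procedure, sketched below before the graphs are walked through. The numeric encoding of “steeper” and “flatter” as slope comparisons is my own.

```python
def cbm_decision(recent_scores, goal_line_values, trend_slope, goal_slope):
    """Apply the standard CBM decision rules.

    recent_scores: the most recent 4 consecutive CBM scores
    goal_line_values: the goal-line's value at those same 4 weeks
    trend_slope, goal_slope: slopes of the trend-line and goal-line
    """
    above = all(s > g for s, g in zip(recent_scores, goal_line_values))
    below = all(s < g for s, g in zip(recent_scores, goal_line_values))
    if above or trend_slope > goal_slope:
        return "increase the end-of-year performance goal"
    if below or trend_slope < goal_slope:
        return "revise the instructional program"
    return "no change needed"

# Example: 4 straight scores under the goal-line call for a program change.
print(cbm_decision([30, 32, 31, 33], [38, 40, 42, 44], 1.0, 1.0))
```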
Let’s look at each of these decision rules and the graphs that help teachers make decisions about
a student’s goals and instructional programs.
[Graph: a WIF graph (correctly read words per minute) on which the most recent 4 consecutive scores fall above the goal-line.]
On this graph, the most recent 4 scores are above the goal-line. Therefore, the student’s end-of-
year performance goal needs to be adjusted. The teacher increases the desired rate (or goal) to
boost the actual rate of student progress.
The point of the goal increase is notated on the graph as a dotted vertical line. This allows
teachers to visually note when the student’s goal was changed. The teacher re-evaluates the
student graph in another 7 or 8 data points to determine whether the student’s new goal is
appropriate or whether a teaching change is needed.
[Graph: a WIF graph showing the end-of-year X, the goal-line, and the most recent 4 consecutive points falling below the goal-line.]
On this graph, the most recent 4 scores are below the goal-line. Therefore, the teacher needs to
change the student’s instructional program. The end-of-year performance goal and goal-line never decrease; they can only increase. The instructional program should be tailored to bring a
student’s scores up so they match or surpass the goal-line.
The teacher draws a solid vertical line when making an instructional change. This allows
teachers to visually note when changes to the student’s instructional program were made. The
teacher re-evaluates the student graph in another 7 or 8 data points to determine whether the
change was effective.
[Graph: a WIF graph on which the trend-line, drawn through the two group-median Xs, is steeper than the goal-line.]
On this graph, the trend-line is steeper than the goal-line. Therefore, the student’s end-of-year
performance goal needs to be adjusted. The teacher increases the desired rate (or goal) to boost
the actual rate of student progress. The new goal-line can be an extension of the trend-line.
The point of the goal increase is notated on the graph as a dotted vertical line. This allows
teachers to visually note when the student’s goal was changed. The teacher re-evaluates the
student graph in another 7 or 8 data points to determine whether the student’s new goal is
appropriate or whether a teaching change is needed.
[Graph: a WIF graph on which the trend-line is flatter than the goal-line.]
On this graph, the trend-line is flatter than the performance goal-line. The teacher needs to
change the student’s instructional program. Again, the end-of-year performance goal and goal-
line are never decreased! A trend-line below the goal-line indicates that student progress is
inadequate to reach the end-of-year performance goal. The instructional program should be
tailored to bring a student’s scores up so they match or surpass the goal-line.
The point of the instructional change is represented on the graph as a solid vertical line. This
allows teachers to visually note when the student’s instructional program was changed. The
teacher re-evaluates the student graph in another 7 or 8 data points to determine whether the
change was effective.
[Graph: a WIF graph on which the trend-line matches the goal-line.]
If the trend-line matches the goal-line, then no change is currently needed for the student.
The teacher re-evaluates the student graph in another 7 or 8 data points to determine whether
an end-of-year performance goal or instructional change needs to take place.
Step 7: How to Use the CBM Database Qualitatively to Describe Student Strengths
and Weaknesses
Student miscues during CBM PRF can be analyzed to describe student reading strengths and
weaknesses. To complete a miscue analysis, the student reads a CBM PRF passage following the
standard procedures. While the student reads, the teacher writes student errors on the examiner
copy. (See Figure 29.) The first 10 errors are written on the Quick Miscue Analysis Table (see
Figure 30) and analyzed.
[Figure 30: the blank Quick Miscue Analysis Table, with columns for Written Word, Spoken Word, Graphophonetic, Syntax, and Semantics; rows numbered 1–10; and a final row for the percentage of “yes” answers in each column.]
To fill out the Quick Miscue Analysis table, the teacher writes the written word from the CBM
PRF passage in the Written Word column. The student mistake, or miscue, is written in the
Spoken Word column.
The teacher answers three questions for each mistake. If the student made a graphophonetic error, then the teacher writes a “yes” in the Graphophonetic column along with a brief description of the error. A graphophonetic error preserves some important phonetic features of the written word, even if it does not make sense (e.g., written word “friend”; spoken word “fried”). The teacher then answers “yes” or “no” in the Syntax and Semantics columns. A syntax error preserves the grammar of the written word: does the error have the same part of speech as the written word (e.g., “ran” is the same part of speech as “jogged”)? A semantics error preserves the meaning of the sentence: does the error preserve the meaning of the sentence (e.g., “The woman is tall” means the same as “The lady is tall”)?
Once the entire table is complete, the teacher calculates the percentage of graphophonetic,
syntax, or semantic errors that the student made. Let’s look at this example (Figure 31).
The examiner wrote the first 10 mistakes on the Quick Miscue Analysis Table. The percentage of
the time the student error was a graphophonetic, syntax, or semantics error is calculated at the
bottom of the table. To calculate the percentage, add together the number of “yes” answers and
divide the sum by 10. In the Graphophonetic column, 10 “yes” answers divided by 10 miscues
is 100%. In the Syntax column, 9 “yes” answers divided by 10 miscues is 90%. In the Semantics
column, 2 “yes” answers divided by 10 miscues is 20%. Calculating the percentages allows
teachers to glance at the various types of miscues and spot trends in student mistakes.
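The tallying itself is simple. The sketch below reproduces the Figure 31 percentages; the dictionary encoding of the table rows is my own.

```python
def miscue_percentages(rows):
    """Percentage of "yes" answers in each column of the miscue table.

    rows: one dict per miscue with boolean values for the three columns.
    """
    n = len(rows)
    return {col: 100 * sum(r[col] for r in rows) / n
            for col in ("graphophonetic", "syntax", "semantics")}

# Tallies matching Figure 31: 10, 9, and 2 "yes" answers out of 10 miscues.
rows = ([{"graphophonetic": True, "syntax": True, "semantics": True}] * 2
        + [{"graphophonetic": True, "syntax": True, "semantics": False}] * 7
        + [{"graphophonetic": True, "syntax": False, "semantics": False}])
print(miscue_percentages(rows))
# {'graphophonetic': 100.0, 'syntax': 90.0, 'semantics': 20.0}
```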
From the miscue analysis, the teacher gains insight about the strengths and weaknesses of the
student's reading. This student appears to rely on graphophonetic cues (especially at the
beginning and ending of words) and knowledge of syntax for identifying unknown words. The
student appears to ignore the middle portion of the unknown words, so the teacher could help
the student to sound out entire words, perhaps reading some words in isolation. However, the low semantics percentage shows that the student’s reading often does not make sense. The teacher should help the student learn to self-
monitor and self-correct. The student should ask himself/herself whether the word makes sense
given the context. Practice with the cloze procedure (similar to CBM Maze Fluency) may also
assist the student in focusing on comprehension. Tape recording the student's reading and
having the student listen to the tape also may help alert the student to inaccuracies that do not
make sense.
Now, look at another example (Figure 32). The examiner copy of the student reading is below.
Use the blank Quick Miscue Analysis Table and write in the student miscues (Figure 33).
[Figure 33: a blank Quick Miscue Analysis Table, identical in layout to Figure 30, for recording this student’s miscues.]
Your miscue analysis table should look like this (Figure 34). Based on this table, the teacher can
see that the student’s problem is mistakes on short, functional words rather than content words.
The teacher might choose to practice discrimination between similar words (e.g., this / that / the) and similar phrases (e.g., The big boy…, This big boy…, That big boy…). The teacher might
also choose to have the student echo read and complete writing and spelling exercises for the
short, functional words.
[Figure 34: the completed Quick Miscue Analysis Table for this example.]
What are the strengths and weaknesses of this student? What teaching strategies might you
choose to implement for this student?
Schools must determine the measure(s) to be used for adequate yearly progress (AYP) evaluation and the criterion for deeming an individual student “proficient” on this measure. Schools must quantify AYP for achieving the goal of universal proficiency by the school year 2013–2014. CBM can be used to fulfill the AYP evaluation in reading.
Schools can assess every student using CBM to identify the number of students who initially
meet benchmarks. This number of students represents a school’s initial proficiency status. Then
the discrepancy between initial proficiency and universal proficiency can be calculated. Once
the discrepancy between initial and universal proficiency is calculated, the discrepancy is
divided by the number of years available before meeting the 2013–2014 goal. The resulting
answer gives the number of additional students who must meet CBM end-of-year benchmarks
each year.
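A short sketch of this calculation follows; rounding up to whole students is my own assumption.

```python
import math

def annual_ayp_target(total_students, initially_proficient, years_left):
    """Additional students who must meet CBM end-of-year benchmarks each
    year to reach universal proficiency by the deadline."""
    gap = total_students - initially_proficient
    return math.ceil(gap / years_left)  # round up to whole students

# Harrisburg example (later in this guide): 378 students, 125 initially
# proficient, 11 years to the 2013-2014 deadline.
print(annual_ayp_target(378, 125, 11))  # 23
```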
Relying on CBM for specifying AYP provides several advantages. First, the CBM measures are
simple to administer and examiners can be trained to administer the tests in a reliable fashion in
a short amount of time. Second, because the tests are brief, schools can measure an entire
student body relatively efficiently and frequently. Routine testing allows a school to track its
own progress over the school year. Progress can be examined at the school, teacher, or student
level.
Using CBM for multi-level monitoring can transform AYP from a procedural compliance
burden into a useful tool for guiding education reform at the school level, for guiding the
instructional decision making of individual teachers about their reading programs, and for
ensuring that the reading progress of individual students is maximized.
CBM provides a multi-level monitoring system that helps schools ensure greater levels of
reading success. Here are a few examples of how CBM can be used in conjunction with a
school’s AYP.
CBM can be used to monitor across-year progress in achieving AYP (and toward achieving universal proficiency by the 2013–2014 deadline) (Figure 37).
[Figure 37: an across-year AYP graph. The number of students meeting CBM benchmarks rises from 257 at the end of the 2004 school year toward the X marking the goal of 497 students by the end of 2014.]
CBM can be used to monitor a school’s within-year progress towards achieving the AYP for the
year (Figure 38).
[Figure 38: a within-year progress graph for the 2005 school year (September–June). The X marks the target of 281 students meeting CBM benchmarks by year’s end.]
[Figure 39: a classroom-level within-year graph for the 2005 school year, tracking the number of students (0–25) who meet CBM benchmarks.]
CBM can be used to monitor a school’s special education performance within a school year
(Figure 40).
[Figure 40: the number of special education students (0–25) meeting CBM benchmarks, September–June 2005.]
[Figure 41: an individual student’s within-year graph of Grade 3 CBM Passage Reading Fluency scores (0–200), September–June 2005.]
For more information on using CBM for school accountability and AYP, see:
Fuchs, L. S., & Fuchs, D. (in press). Determining adequate yearly progress from kindergarten
through grade 6 with curriculum-based measurement. Assessment for Effective Intervention.
The first page of the CBM Class Report shows three graphs: one for the progress of the lower-
performing readers, another for the middle-performing readers, and one for the higher-
performing readers (Figure 42). The report also gives teachers a list of students to watch. These
are students who are in the bottom 25% of the class.
Figure 42. Sample CBM Teacher Report for Maze Fluency: Page 1
The second page of the CBM Class Report provides teachers with a list of each student’s CBM
Maze Fluency raw score, the percentage of words read correctly, and the slope of the student’s
CBM graph (Figure 43).
Figure 43. Sample CBM Teacher Report for Maze Fluency: Page 2
The third page of the CBM Class Report provides teachers with an average of the students in
the classroom and identifies students who are performing below their classroom peers both in
terms of the level (“score”) of their CBM performance and their rate (“slope”) of CBM
improvement (Figure 44).
Figure 44. Sample CBM Teacher Report for Maze Fluency: Page 3
Educational outcomes differ across a population of learners, and a low-performing student may ultimately not perform as well as his or her peers. Not all students achieve the same degree of reading competence. Low reading growth alone does not mean that a student should automatically receive special education services.
If a low-performing student is learning at a rate similar to the growth rate of other students in
the same classroom environment, he or she is demonstrating the capacity to profit from the
educational environment. Additional intervention is unwarranted.
However, when a low-performing student is not manifesting growth in a situation where others
are thriving, consideration of special intervention is warranted. Alternative instructional
methods must be tested to address the apparent mismatch between the student’s learning
requirements and those represented in the conventional instructional program.
CBM is a promising tool for identifying treatment responsiveness due to its capacity to model
student growth, to evaluate treatment effects, and to simultaneously inform instructional
programming.
[Graph: Sascha’s CBM graph (correctly read words per minute, 0–200, weeks 1–14), showing her goal-line and a flatter trend-line.]
Since Sascha’s trend-line is flatter than her goal-line, Mr. Miller needs to make a change to
Sascha’s instructional program. He has marked the week of the instructional change with a solid vertical line. To decide what type of instructional change might benefit Sascha, Mr.
Miller decides to do a Quick Miscue Analysis on Sascha’s weekly CBM PRF to find her
strengths and weaknesses as a reader.
This is Sascha’s Quick Miscue Analysis for her CBM PRF test (Figure 47).
Based on the Quick Miscue Analysis Table, what instructional program changes should Mr.
Miller introduce into Sascha’s reading program?
Last school year (2002–2003), all 378 students at the school were assessed with CBM PRF at the appropriate grade level. 125 students initially met CBM benchmarks, so 125 represents Harrisburg’s initial proficiency status. The discrepancy between initial proficiency and universal proficiency is 378 − 125 = 253 students. To find the number of students who must newly meet CBM benchmarks each year before the 2013–2014 deadline, the discrepancy of 253 students is divided by the number of years until the deadline (11): 253 ÷ 11 = 23. So 23 additional students need to meet CBM benchmarks each year in order for the school to demonstrate AYP.
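The same arithmetic, written as a short Python script using the numbers from the Harrisburg example:

    # AYP arithmetic from the Harrisburg example.
    total_students = 378
    initially_proficient = 125
    years_until_deadline = 11        # through the 2013-2014 deadline

    discrepancy = total_students - initially_proficient   # 253
    students_per_year = discrepancy / years_until_deadline

    # 23 additional students must meet CBM benchmarks each year.
    print(round(students_per_year))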
During the 2003–2004 school year, Dr. Eckstein is provided with these CBM graphs based on the
performance of the students in her school.
Based on this graph (Figure 48), what can Dr. Eckstein decide about her school’s progress since
the initial year of benchmarks?
[Figure 48: number of students meeting CBM benchmarks (0–400) plotted by end of school year, 2003–2014, with initial proficiency (125) and the universal-proficiency goal (378) marked.]
Based on this graph (Figure 49), what can Dr. Eckstein decide about her school’s progress since
the beginning of the school year?
[Figure 49: number of students meeting CBM benchmarks (0–200) across the months of the 2004 school year (Sept–June), with the current count (148) marked.]
Dr. Eckstein receives the next two graphs from two different second-grade teachers (Figures 50
and 51). What information can she gather from these graphs?
[Figure 50: number of students on track to meet CBM benchmarks (0–25) across the months of the 2004 school year (Sept–June) for one second-grade class.]
[Figure 51: the same plot (0–25, Sept–June 2004) for the other second-grade class.]
This is the graph that Dr. Eckstein receives based on the performance of Harrisburg’s Special
Education students (Figure 52). What should she learn from this graph?
Figure 52. Harrisburg Elementary: Within-Year Special Education Progress
[Graph: number of students on track to meet CBM benchmarks (0–25) across the months of the 2004 school year (Sept–June).]
Dr. Eckstein receives a graph for every student in the school. She gives these graphs to the
respective teachers of each student. How can the teachers use the graphs (Figures 53 and 54)?
Figure 53. Hallie Martin
[Graph: Grade 1 Word Identification Fluency CBM scores (0–100) across the months of the 2004 school year (Sept–June).]
[Figure 54: a student’s Grade 3 Passage Reading Fluency CBM scores (0–100) across the months of the 2004 school year (Sept–June).]
This is the first page of Mrs. Wilson’s CBM Class Report (Figure 55). How would you
characterize how her class is doing? How can she use this information to improve the reading of
the students in her classroom?
This is the second page of Mrs. Wilson’s Class Report (Figure 56). How can she use this class
report to improve her classroom instruction?
This is the third page of Mrs. Wilson’s Class Report (Figure 57). What information does she
learn on this page? How can she use this information?
[Figure: Joshua’s CBM graph, plotting PRF words read correctly per minute (0–200) across 24 weeks of instruction, with Joshua’s goal-line, his trend-lines, and the two instructional changes marked.]
After eight weeks, Mrs. Sanchez determined that Joshua’s trend-line was flatter than his goal-
line, so she made an instructional change to Joshua’s reading program. This instructional
change included having Joshua work on basic sight words that he was trying to sound out
when reading. The instructional change is the first thick, vertical line on Joshua’s graph.
After another eight weeks, Mrs. Sanchez realized that Joshua’s trend-line was still flatter than
his goal-line. His graph showed that Joshua had made no improvement in reading. So, Mrs.
Sanchez made another instructional change to Joshua’s reading program. This instructional
change included having Joshua work on basic letter sounds and how those letter sounds
combine to form words. The second instructional change is the second thick, vertical line on
Joshua’s graph.
Mrs. Sanchez has been conducting CBM for 20 weeks and has yet to see any improvement in Joshua’s reading despite two instructional changes. What could this graph tell Mrs. Sanchez about Joshua? Pretend you are at a meeting with your principal and IEP team members: what would you say to describe Joshua’s situation? What would you recommend as the next steps? How could Mrs. Sanchez use this class graph to help her with her decisions about Joshua (Figure 59)?
[Figure 59: class graph plotting PRF words read correctly per minute (0–200) across 24 weeks of instruction, with separate lines for the high-performing, middle-performing, and low-performing readers.]
AIMSweb is based on CBM. It provides materials for CBM data collection and supports data use, including the following passage sets:
– 3 graded and equivalent passages for Grades 1–8 for establishing fall, winter, and
spring benchmarks
– 24 total passages
– Also available in Spanish
• Standard Progress Monitoring Reading Assessment Passages:
– 3 Standard Assessment Reading Passages for Grades 1–8 have been prepared in a maze (multiple-choice cloze) format to use as another measure of reading comprehension
– 24 maze passages total
• Standard Progress Monitoring Reading Maze Passages:
– 30 graded and equivalent passages prepared in maze format for Grades 2–8
AIMSweb also has a progress monitoring computer software program available for purchase.
Once the teacher administers and scores the CBM tests, the scores can be entered into the
computer program for automatic graphing and analysis.
AIMSweb measures, administration guides, scoring guides, and software are available for
purchase on the internet:
http://www.aimsweb.com or http://www.edformation.com
Phone: 888-944-1882
Dynamic Indicators of Basic Early Literacy Skills (DIBELS) are a set of standardized,
individually administered measures of early literacy development. They are designed to be
short (one minute) fluency measures used to regularly monitor the development of pre-reading
and early reading skills. DIBELS measures are free to download and use. To obtain the
measures, teachers must register on the DIBELS Web site.
DIBELS also operates the DIBELS Data System: once a teacher has administered and scored the tests, the students’ scores can be entered online to generate automated reports. The cost for this service is $1 per student, per year.
DIBELS measures, administration guides, scoring guides, and information on the automated
Data System are on the internet:
http://dibels.uoregon.edu/
Edcheckup offers an assessment system for screening student performance and measuring
student progress toward goals in reading, based on the CBM model. The assessment system
administers and scores student tests via computer.
• Reports and graphs are automatically generated that follow class and student progress.
• Guidelines for setting annual goals and evaluating student progress are provided.
http://www.edcheckup.com
Phone: 952-229-1440
Mail: WebEdCo
7701 York Avenue South – Suite 250
Edina, MN 55435
Weekly 15-minute diagnostic CBM assessments provide teachers with the information they need to plan classroom instruction and meet individual student needs. Ongoing assessment across the entire curriculum allows teachers to measure the effectiveness of instruction as it takes place and to track both mastery and retention of grade-level skills. Yearly ProgressPro™ reports allow teachers and administrators to track progress against state and national standards at the individual student, class, building, or district level. Administrators can track progress toward AYP goals and disaggregate data demographically to meet NCLB requirements.
http://www.mhdigitallearning.com
CBM materials were developed and researched using standard CBM procedures.
• CBM Reading passages for Grades 1–8 (30 passages per grade)
• Maze Fluency passages for Grades 1–6 (30 passages per grade)
The CBM measures are free, except for copying costs, postage, and handling. The CBM
measures, scoring sheets, administration instructions, and scoring instructions are available:
Phone: 615-343-4782
Email: [email protected]
Appendix B: Resources
Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional
Children, 52, 219–232.
Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to
establish growth standards for students with learning disabilities. School Psychology
Review, 30, 507–524.
Deno, S. L., & Mirkin, P. K. (1977). Data-based program modification: A manual. Reston, VA:
Council for Exceptional Children.
Fuchs, L. S., & Deno, S. L. (1987). Developing curriculum-based measurement systems for data-
based special education problem solving. Focus on Exceptional Children, 19, 1–16.
Fuchs, L. S., & Deno, S. L. (1991). Paradigmatic distinctions between instructionally relevant
measurement models. Exceptional Children, 57, 488–501.
Fuchs, L. S., & Deno, S. L. (1994). Must instructionally useful performance assessment be based
in the curriculum? Exceptional Children, 61, 15–24.
Fuchs, L. S., Deno, S. L., & Mirkin, P. K. (1984). Effects of frequent curriculum-based
measurement of evaluation on pedagogy, student achievement, and student awareness
of learning. American Educational Research Journal, 21, 449–460.
Fuchs, L. S., & Fuchs, D. (1990). Curriculum-based assessment. In C. Reynolds & R. Kamphaus
(Eds.), Handbook of psychological and educational assessment of children (Vol. 1): Intelligence
and achievement. New York: Guilford Press.
Fuchs, L. S., & Fuchs, D. (1992). Identifying a measure for monitoring student reading progress. School Psychology Review, 21, 45–58.
Fuchs, L. S., & Fuchs, D. (1996). Combining performance assessment and curriculum-based
measurement to strengthen instructional planning. Learning Disabilities Research and
Practice, 11, 183–192.
Fuchs, L. S., & Fuchs, D. (1998). Treatment validity: A unifying concept for reconceptualizing
the identification of learning disabilities. Learning Disabilities Research and Practice, 13,
204–219.
Fuchs, L. S., & Fuchs, D. (1999). Monitoring student progress toward the development of
reading competence: A review of three forms of classroom-based assessment. School
Psychology Review, 28, 659–671.
Fuchs, L. S., & Fuchs, D. (2000). Curriculum-based measurement and performance assessment.
In E. S. Shapiro & T. R. Kratochwill (Eds.), Behavioral assessment in schools: Theory,
research, and clinical foundations (2nd ed., pp. 168–201). New York: Guilford.
Fuchs, L. S., & Fuchs, D. (in press). Determining adequate yearly progress from kindergarten through grade 6 with curriculum-based measurement. Assessment for Effective Intervention.
Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1989a). Effects of alternative goal structures within
curriculum-based measurement. Exceptional Children, 55, 429–438.
Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1989b). Effects of instrumental use of curriculum-based
measurement to enhance instructional programs. Remedial and Special Education, 10,
43–52.
Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1990). Curriculum-based measurement: A standardized
long-term goal approach to monitoring student progress. Academic Therapy, 25, 615–632.
Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1993). Technological advances linking the assessment
of students’ academic proficiency to instructional planning. Journal of Special Education
Technology, 12, 49–62.
Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1994). Strengthening the connection between
assessment and instructional planning with expert systems. Exceptional Children, 61,
138–146.
Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (in press). Using technology to facilitate and enhance
curriculum-based measurement. In K. Higgins, R. Boone, & D. Edyburn (Eds.), The
handbook of special education technology research and practice. Whitefish Bay, WI:
Knowledge by Design, Inc.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Phillips, N. B., & Karns, K. (1995). General educators’
specialized adaptation for students with learning disabilities. Exceptional Children, 61,
440–459.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Phillips, N. B., Karns, K., & Dutka, S. (1997). Enhancing
students’ helping behavior during peer-mediated instruction with conceptual
mathematical explanations. Elementary School Journal, 97, 223–250.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Stecker, P. M. (1991). Effects of curriculum-based
measurement and consultation on teacher planning and student achievement in
mathematics operations. American Educational Research Journal, 28, 617–641.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Thompson, A., Roberts, P. H., Kubek, P., & Stecker, P. S.
(1994). Technical features of a mathematics concepts and applications curriculum-based
measurement system. Diagnostique, 19, 23–49.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27–48.
Fuchs, L. S., Fuchs, D., Hosp, M., & Hamlett, C. L. (2003). The potential for diagnostic analysis
within curriculum-based measurement. Assessment for Effective Intervention, 28, 13–22.
Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator
of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies
of Reading, 5, 241–258.
Fuchs, L. S., Fuchs, D., Karns, K., Hamlett, C. L., Dutka, S., & Katzaroff, M. (2000). The
importance of providing background information on the structure and scoring of
performance assessments. Applied Measurement in Education, 13, 83–121.
Fuchs, L. S., Fuchs, D., Karns, K., Hamlett, C. L., & Katzaroff, M. (1999). Mathematics
performance assessment in the classroom: Effects on teacher planning and student
learning. American Educational Research Journal, 36, 609–646.
Fuchs, L. S., Fuchs, D., Karns, K., Hamlett, C. L., Katzaroff, M., & Dutka, S. (1997). Effects of
task-focused goals on low-achieving students with and without learning disabilities.
American Educational Research Journal, 34, 513–544.
Fuchs, D., Roberts, P. H., Fuchs, L. S., & Bowers, J. (1996). Reintegrating students with learning
disabilities into the mainstream: A two-year study. Learning Disabilities Research and
Practice, 11, 214–229.
Germann, G., & Tindal, G. (1985). An application of curriculum-based assessment: The use of direct and repeated measurement. Exceptional Children, 52, 244–265.
Gersten, R., & Dimino, J. A. (2001). The realities of translating research into classroom practice.
Learning Disabilities Research and Practice, 16, 120–130.
Gickling, E. E. (1981). The forgotten learner. Nevada Public Affairs Review, 1, 19–22.
Hosp, M. K., & Hosp, J. (2003). Curriculum-based measurement for reading, math, and
spelling: How to do it and why. Preventing School Failure, 48(1), 10–17.
Hosp, M. K., & Hosp, J. (2003). Progress monitoring: An essential factor for student success. The
Utah Special Educator, 24(2), 26–27.
Hosp, M. K., & Suchey, N. (2003). Progress monitoring: A guide for implementing curriculum-based measurement for reading. The Utah Special Educator, 24(3), 24–25.
Hutton, J. B., Dubes, R., & Muir, S. (1992). Estimating trend progress in monitoring data: A
comparison of simple line-fitting methods. School Psychology Review, 21, 300–312.
Jenkins, J. R., Mayhall, W., Peshka, C., & Townshend, V. (1974). Using direct and daily measures
to measure learning. Journal of Learning Disabilities, 10, 604–608.
Lembke, E., Deno, S. L., & Hall, K. (2003). Identifying an indicator of growth in early writing
proficiency for elementary school students. Assessment for Effective Intervention, 28(3-4),
23–35.
Marston, D., Mirkin, P. K., & Deno, S. L. (1984). Curriculum-based measurement: An alternative
to traditional screening, referral, and identification of learning disabilities of learning
disabled students. The Journal of Special Education, 18, 109–118.
Phillips, N. B., Hamlett, C. L., Fuchs, L. S., & Fuchs, D. (1993). Combining classwide curriculum-
based measurement and peer tutoring to help general educators provide adaptive
education. Learning Disabilities Research and Practice, 8, 148–156.
Shinn, M. R. (Ed.). (1989). Curriculum-based measurement: Assessing special children. New York:
Guilford Press.
Shinn, M. R., Tindal, G. A., & Stein, S. (1988). Curriculum-based measurement and the
identification of mildly handicapped students: A research review. Professional School
Psychology, 3, 69–86.
Stecker, P. M., & Fuchs, L. S. (2000). Effecting superior achievement using curriculum-based
measurement: The importance of individual progress monitoring. Learning Disabilities
Research and Practice, 15, 128–134.
Tindal, G., Wesson, C., Germann, G., Deno, S., & Mirkin, P. (1982). A data-based special education
delivery system: The Pine County Model. (Monograph No. 19). Minneapolis, MN:
University of Minnesota, Institute for Research on Learning Disabilities.
Tucker, J. (1987). Curriculum-based assessment is not a fad. The Collaborative Educator, 1, 4, 10.
Wesson, C., Deno, S. L., Mirkin, P. K., Sevcik, B., Skiba, R., King, P. P., Tindal, G. A., & Maruyama, G. (1988). A causal analysis of the relationships among ongoing measurement and evaluation, structure of instruction, and student achievement. The Journal of Special Education, 22, 330–343.
Yell, M. L., & Stecker, P. M. (2003). Developing legally correct and educationally meaningful
IEPs using curriculum-based measurement. Assessment for Effective Intervention, 28(3&4),
73–88.
Zeno, S. M., Ivens, S. H., Millard, R. T., & Duvvuri, R. (1995). The educator's word frequency
guide. New York, NY: Touchstone Applied Science Associates, Inc.