Computer-Adaptive Test (CAT) FAQ
The algorithm functions slightly differently depending on the test being administered:
• Mathematics, Science, and Social Studies: adaptive at the item level.
o Based on a student’s response to an item and blueprint requirements, the algorithm will
select the next appropriate item.
o Grades 5 and 8 Science and Biology 1 may contain one or two context-dependent (CD) item
sets where students respond to several items associated with the same stimuli.
• ELA Reading: adaptive at the passage/passage set level.
o Items on an ELA Reading test are attached to a specific passage; a passage and its associated items are referred to as a “passage set.” Based on a student’s responses to a passage set (rather than to an individual item) and blueprint requirements, the algorithm will select the next passage/passage set.
o The algorithm examines how well a student performed on the items associated with the previous passage set(s), along with the blueprint requirements that remain to be satisfied, to determine the next passage set and its associated items. In this way, the algorithm adapts at the passage level.
o Once the algorithm selects the next set, it is locked in (regardless of whether students go
back and change answers within the current set).
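As a rough illustration of the set-level behavior described above, the sketch below selects the next passage set from reporting categories the blueprint still requires and keeps it locked once chosen. All names (PassageSet, select_next_set) and the closest-difficulty rule are hypothetical; this is not the operational selection engine.

```python
from dataclasses import dataclass, field

@dataclass
class PassageSet:
    set_id: str
    reporting_category: str
    difficulty: float                      # summary difficulty of the set's items
    item_ids: list = field(default_factory=list)

def select_next_set(ability_estimate, remaining_blueprint, available_sets, delivered_ids):
    """Pick the next passage set from reporting categories the blueprint still
    requires, preferring the set whose difficulty is closest to the student's
    interim ability estimate. Once chosen, the set stays locked in even if the
    student later changes answers within the current set."""
    candidates = [
        s for s in available_sets
        if s.set_id not in delivered_ids
        and remaining_blueprint.get(s.reporting_category, 0) > 0
    ]
    if not candidates:
        return None                        # blueprint satisfied; end of test
    chosen = min(candidates, key=lambda s: abs(s.difficulty - ability_estimate))
    delivered_ids.add(chosen.set_id)
    remaining_blueprint[chosen.reporting_category] -= 1
    return chosen
```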
5. On grades 3–10 FAST tests, do the items selected by the algorithm go above and below the student’s
current grade level?
No; the item bank for each test only contains items within the student’s current grade level.
6. What is a test blueprint?
The test blueprints show the percentage of items from each reporting category that students will encounter during each progress monitoring (PM) window. Students see the full scope of grade-level/course content, with each reporting category represented within its stated percentage of the overall test length. Blueprint coverage ensures that each student sees a certain percentage of items from each reporting category; students will not necessarily see an item for every benchmark.
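As a rough sketch only, a blueprint can be represented as a percentage range per reporting category; the category names and numbers below are invented for illustration and are not taken from a published Florida blueprint.

```python
# Hypothetical blueprint: each reporting category maps to a (min, max) share of
# the overall test length. Values are illustrative, not actual blueprint figures.
example_blueprint = {
    "Reporting Category 1": (0.25, 0.35),
    "Reporting Category 2": (0.30, 0.40),
    "Reporting Category 3": (0.30, 0.40),
}

def meets_blueprint(delivered_items, blueprint):
    """Return True if the share of delivered items in every reporting category
    falls within that category's percentage range."""
    if not delivered_items:
        return False
    total = len(delivered_items)
    for category, (low, high) in blueprint.items():
        share = sum(1 for item in delivered_items if item["category"] == category) / total
        if not (low <= share <= high):
            return False
    return True
```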
7. How does the algorithm select the first item for each student?
The first time a student takes a test in a given subject, the algorithm will choose a mid-level difficulty
item for the student. If the system has prior data for a student in the same subject, either from a prior grade or a prior PM event, the algorithm will pick up where the student left off in terms of item difficulty. Exceptions to this are grade 3 ELA Reading and Mathematics, where students are transitioning from grade 2 to grade 3 assessments; grades 5 and 8 Science; the FAST ELA Reading Retake; and the EOC assessments, which do not carry data between attempts for students.
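A minimal sketch of that starting rule, assuming a hypothetical list of carried-over ability estimates (the function name and the mid-level value are illustrative, not the operational logic):

```python
def starting_ability(prior_estimates, mid_level=0.0):
    """Choose where on the difficulty scale a student's test begins.

    prior_estimates: ability estimates from earlier PM events in the same
    subject, oldest first. It is empty for first-time test takers and for the
    excepted assessments, which do not carry data between attempts."""
    if not prior_estimates:
        return mid_level            # no history: begin with a mid-level difficulty item
    return prior_estimates[-1]      # otherwise pick up where the student left off
```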
8. If a student starts with a low-level item, does it affect his or her ability to achieve a high scale score?
No. The test is long enough for all students to be presented with test content that allows them to demonstrate their true ability. Although a student may start with easier questions based on performance on a prior PM, if the student’s knowledge, skills, and abilities have increased, the test will adapt appropriately, allowing the student to move up one or more achievement levels.
9. How is the difficulty level determined for each item?
Data analyses conducted after field testing and after operational testing generate statistics for each item. These statistics indicate the degree to which the item differentiates between students of different abilities, the difficulty of the item, and the likelihood of success by guessing. These item statistics, along with others, determine an item’s level of difficulty. For instance, an item that a low percentage of students answer correctly is likely to have a higher difficulty level.
The algorithm first identifies the standard to be assessed based on the published test blueprint. Then, from the items that measure that standard, it selects the most appropriate item based on item difficulty and the student’s performance on previous items.
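The three statistics described above (differentiation between abilities, difficulty, and likelihood of success by guessing) correspond to the discrimination, difficulty, and guessing parameters used in item response theory. The sketch below uses the widely used three-parameter logistic (3PL) model for illustration; the parameter values are made up, and the document does not specify the exact operational model.

```python
import math

def p_correct_3pl(theta, a, b, c):
    """Probability that a student with ability theta answers an item correctly,
    given the item's discrimination (a), difficulty (b), and guessing (c)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# A harder item (larger b) gives a student of average ability (theta = 0) a
# lower chance of success, so it is treated as a higher-difficulty item.
print(round(p_correct_3pl(theta=0.0, a=1.2, b=1.5, c=0.2), 2))    # ~0.31
print(round(p_correct_3pl(theta=0.0, a=1.2, b=-1.5, c=0.2), 2))   # ~0.89
```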
10. If students enter an incorrect response to an item, will the algorithm select a lower-level item next
even if it is from a different reporting category?
The algorithm adapts based on overall performance, not performance within a particular reporting category. It will always select the next
item based on the blueprint requirements and the estimate of the student’s ability. This estimate
becomes more reliable as the student progresses through the test.
11. Will students all receive the same number of items?
For a given PM event, students will receive approximately the same number of items. On FAST PM3 and
on spring EOC and Science assessments, students will also see field test items on their assessment.
Depending on the number of field test items a student receives, their test may be a few items longer or
shorter.
12. Can students skip items?
For Mathematics and Social Studies assessments, and most items in Science assessments, students must
provide an answer for each item to move on to the next item. Grades 5 and 8 Science and Biology 1 may
contain one or two context-dependent (CD) item sets where students respond to several items
associated with the same stimuli. For these sets, students may move between the items without
providing an answer. For ELA Reading, students may move between items within a passage set without
providing an answer. However, students must answer all items in a CD set or passage set before moving
on to the next item or passage set. Students should always provide their best answer for each item as
they encounter it.
After a student answers each item (or set of items for ELA Reading and Science assessments), the
adaptive algorithm updates its interim ability estimate (or interim skill level). Based on that information,
plus the blueprint coverage requirements, the algorithm searches for the next best item or set of items
to be administered. This cycle continues until the end of testing, and it results in a tailored test form
with more engaging questions. The final ability estimate (or scale score) is derived from a student’s final
response set across all items seen during the test. A student must answer each question so that the
adaptive algorithm can search for the next best item or set of items at each step of testing.
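The cycle just described (answer, update the interim estimate, select the next item or set) might look roughly like the loop below. The helper functions get_response, update_estimate, and select_next_unit are placeholders for the statistical machinery and are purely hypothetical.

```python
def run_adaptive_session(first_unit, get_response, update_estimate,
                         select_next_unit, blueprint, test_length):
    """Simplified adaptive cycle: the student must respond before the engine
    updates its interim ability estimate and chooses the next item or item set."""
    estimate, responses = 0.0, []
    unit = first_unit
    while unit is not None and len(responses) < test_length:
        responses.append((unit, get_response(unit)))      # answer required to proceed
        estimate = update_estimate(estimate, responses)   # interim skill level
        unit = select_next_unit(estimate, blueprint, responses)
    return estimate, responses      # the final estimate feeds the reported scale score
```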
13. Can students return to an item and change the answer?
Yes. Students may return to items and change the answers they provided if they choose to do so.
14. How does going back and changing an answer affect the algorithm?
Returning to an item and changing an answer does not affect the algorithm since the subsequent items
have already been selected and delivered. However, changing an answer could affect a student’s overall
score (if it was changed from right to wrong or wrong to right).
If a student is unsure about an earlier item and goes back to change a previously provided response, the algorithm will not pull a new set of questions; however, the ability estimate (or scale score) is derived through a statistical process based on the student’s final response set.
15. What is the “Mark for Review” tool and does using it affect the test algorithm?
The Mark for Review tool is provided on all Florida computer-based tests, and it allows students to flag
an item they would like to review later after providing their best answer. Students select “Mark for
Review” from a drop-down menu on the item, and a flag is placed on that item on the item review
screen to remind the student that they may want to return to that item later. Using this tool does not
affect the algorithm or scoring in any way.
16. Are there specific test strategies that students should use when taking a CAT?
Because the test is computer-adaptive, students should do their best on each item as they progress
through the test. Students are not able to preview all items in the test because they have to provide a
response to each item or passage set before proceeding.
17. How is the test scored if a student does not complete all of the items?
Students’ tests are scored based on all of the available items on the assessment. Items students respond
to will be scored according to their difficulty and whether the student responded correctly or
incorrectly. Items that have no response, such as those that are left blank if a student does not complete
the test within the school day, are counted as incorrect.
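In other words, a blank response contributes to the score the same way an incorrect response does; a tiny sketch of that mapping, with hypothetical response values:

```python
def score_value(response):
    """Correct answers count as 1; incorrect and unanswered (None) items count as 0."""
    return 1 if response is True else 0

print([score_value(r) for r in [True, False, None, True]])   # [1, 0, 0, 1]
```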
18. How is the overall scale score determined?
Florida’s statewide assessments are scored using a method called “pattern scoring.” This means that the
pattern of answers provided by a student is analyzed in combination with the item statistics. In other
words, information about the pattern of answers (which questions were answered correctly and incorrectly) and the statistical qualities of the test items (e.g., difficulty level) are evaluated together to determine the scoring weights for each item and the most likely score for an individual student.
As a result of this method of scoring, students who answer the same number of items correctly may
have similar, but not necessarily identical, scale scores. Students who were successful with the more
challenging content in the item bank will have a higher scale score.
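One common statistical approach consistent with this description is maximum-likelihood scoring over the response pattern: the ability value that best explains which items were answered correctly, given each item’s statistics, underlies the reported scale score. The toy grid search below assumes a 3PL-style model (see the earlier sketch) and is not the operational scoring procedure.

```python
import math

def p_correct(theta, a, b, c):
    """3PL-style probability of a correct response."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def pattern_score(responses):
    """Toy 'pattern scoring': choose the ability value on a coarse grid that
    maximizes the likelihood of the observed response pattern.
    responses: list of (a, b, c, correct); correct is True, False, or None,
    and unanswered items (None) are treated as incorrect (see #17)."""
    def log_likelihood(theta):
        total = 0.0
        for a, b, c, correct in responses:
            p = p_correct(theta, a, b, c)
            total += math.log(p) if correct else math.log(1.0 - p)
        return total
    grid = [t / 10 for t in range(-40, 41)]    # candidate abilities from -4.0 to 4.0
    return max(grid, key=log_likelihood)

# Two students each answer two of three items correctly, but the second
# student's correct answers come on harder items (larger b values), so the
# second estimate lands higher, as described above.
easier_items = [(1.0, -1.5, 0.2, True), (1.0, -1.0, 0.2, True), (1.0, -0.5, 0.2, False)]
harder_items = [(1.0, 0.5, 0.2, True), (1.0, 1.0, 0.2, True), (1.0, 1.5, 0.2, False)]
print(pattern_score(easier_items), pattern_score(harder_items))
```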
For questions related to this document, please contact the Office of Assessment at [email protected].
Change Log
Location | Change | Date
#2 and #4 | Updated responses for 2024–25 school year to reflect that Science and Social Studies assessments are fully adaptive. | August 16, 2024
#17 | Added new FAQ regarding unanswered items. | August 16, 2024
#2 | Added link to Florida’s Statewide Kindergarten–Grade 2 Computer-Adaptive Tests (CAT) FAQ PDF. | January 30, 2025