ASSESSMENT OF LEARNING 1

CHAPTER 6:
ITEM ANALYSIS AND VALIDATION

LEARNING OUTCOMES
 Explain the meaning of item analysis, item validity, and discrimination index
 Determine the validity and reliability of given test items
 Determine the quality of a test item by its difficulty index, discrimination index, and the plausibility of its options (for a selected-response test)

The teacher normally prepares a draft of the test. Such a draft is subjected to item analysis and validation in order to ensure that the final version of the test will be useful and functional. First, the teacher tries out the draft test on a group of students with characteristics similar to those of the intended test takers (try-out phase). From the try-out group, each item is analyzed in terms of its ability to discriminate between those who know and those who do not know the material, and in terms of its level of difficulty (item analysis phase). The item analysis provides information that allows the teacher to decide whether to revise or replace an item (item revision phase). Finally, the final draft of the test is subjected to validation if the intent is to use the test as a standard test for the particular unit or grading period. We shall be concerned with these concepts in this chapter.

6.1. Item Analysis: Difficulty Index and Discrimination Index

There are two important characteristics of an item that will be of interest to the teacher. These are:
(a) item difficulty and (b) discrimination index. We shall learn how to measure these
characteristics and apply our knowledge in making a decision about the item in question.

The difficulty of an item, or item difficulty, is defined as the number of students who are able to answer the item correctly divided by the total number of students. Thus:

Item difficulty = number of students with correct answer / total number of students

The item difficulty is usually expressed as a percentage.

Example: What is the item difficulty index of an item if 25 students are unable to answer it
correctly while 75 answered it correctly?

Here, the total number of students is 100, hence the item difficulty index is 75/100 or 75%.

Another example: 25 students answered the item correctly while 75 students did not. The total number of students is 100, so the difficulty index is 25/100 or 25%.

This item is more difficult than the one with a difficulty index of 75%.

A high percentage indicates an easy item/question while a low percentage indicates a difficult item.

One problem with this type of difficulty index is that it may not actually indicate that the item is
difficult (or easy). A student who does not know the subject matter will naturally be unable to answer the
item correctly even if the question is easy. How do we decide on the basis of this index whether the item is
too difficult or too easy?

The following arbitrary rule is often used in the literature:


Range of Difficulty Index    Interpretation       Action

0 - 0.25                     Difficult            Revise or discard
0.26 - 0.75                  Right difficulty     Retain
0.76 and above               Easy                 Revise or discard
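
As an illustration, the following is a minimal Python sketch (ours, not the textbook's) that computes a difficulty index and applies the rule-of-thumb ranges above; the function names are hypothetical:

def difficulty_index(num_correct, num_students):
    """Proportion of students who answered the item correctly."""
    return num_correct / num_students

def classify_difficulty(p):
    """Apply the rule-of-thumb ranges from the table above."""
    if p <= 0.25:
        return "Difficult - revise or discard"
    elif p <= 0.75:
        return "Right difficulty - retain"
    return "Easy - revise or discard"

p = difficulty_index(75, 100)           # the first example above
print(p, "->", classify_difficulty(p))  # 0.75 -> Right difficulty - retain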

Difficult items tend to discriminate between those who know and those who do not know the answer.
Conversely, easy items cannot discriminate between these two groups of students. We are therefore
interested in deriving a measure that will tell us whether an item can discriminate between these two
groups of students. Such a measure is called an index of discrimination.

An easy way to derive such a measure is to measure how difficult an item is with respect to those in the
upper 25% of the class and how difficult it is with respect to those in the lower 25% of the class. If the upper
25% of the class found the item easy yet the lower 25% found it difficult, then the item can discriminate
properly between these two groups. Thus:

Index of discrimination = DU - DL (where DU is the difficulty index of the upper group and DL is the difficulty index of the lower group)

Example: Obtain the index of discrimination of an item if the upper 25% of the class had a
difficulty index of 0.60 (i.e. 60% of the upper 25% got the correct answer) while the lower 25% of the class
had a difficulty index of 0.20.

Here, DU = 0.60 while DL = 0.20, thus index of discrimination = .60 - .20 = .40.

The discrimination index is the difference between the proportion of the top scorers who got an item correct and the proportion of the lowest scorers who got the item right. The discrimination index ranges between -1 and +1. The closer the discrimination index is to +1, the more effectively the item distinguishes between the two groups of students. A negative discrimination index means that more students from the lower group got the item correct; such an item is not good and must be discarded.

Theoretically, the index of discrimination can range from -1.0 (when DU =0 and DL = 1) to 1.0 (when DU
= 1 and DL = 0). When the index of discrimination is equal to -1, then this means that all of the lower 25%
of the students got the correct answer while all of the upper 25% got the wrong answer. In a sense, such an
index discriminates correctly between the two groups but the item itself is highly questionable. Why should
the bright ones get the wrong answer and the poor ones get the right answer? On the other hand, if the
index of discrimination is 1.0, then this means that all of the lower 25% failed to get the correct answer
while all of the upper 25% got the correct answer. This is a perfectly discriminating item and is the ideal
item that should be included in the test.

From these discussions, let us agree to discard or revise all items that have a negative discrimination index, for although they discriminate between the upper and lower 25% of the class, the content of the item itself may be highly dubious or doubtful. As in the case of the index of difficulty, we have the following rule of thumb:

Index Range      Interpretation                               Action

-1.0 - -0.50     Can discriminate but item is questionable    Discard
-0.49 - 0.45     Non-discriminating                           Revise
0.46 - 1.0       Discriminating item                          Include
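
A small Python sketch (again ours, with hypothetical function names) makes the rule concrete; note that the 0.40 obtained in the earlier example falls in the "revise" band of this table:

def discrimination_index(du, dl):
    """D = DU - DL: upper-group difficulty index minus lower-group difficulty index."""
    return du - dl

def classify_discrimination(d):
    """Apply the rule-of-thumb ranges from the table above."""
    if d <= -0.50:
        return "Can discriminate but item is questionable - discard"
    elif d <= 0.45:
        return "Non-discriminating - revise"
    return "Discriminating item - include"

d = discrimination_index(0.60, 0.20)        # the example above
print(d, "->", classify_discrimination(d))  # 0.40 -> Non-discriminating - revise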

Example: Consider a multiple-choice type of test from which the following data were obtained:

Item 1       Options
             A     B*    C     D
Total        0     40    20    20
Upper 25%    0     15    5     0
Lower 25%    0     5     10    5
The correct response is B. Let us compute the difficulty index and index of discrimination:

Difficulty Index = no. of students getting correct response / total = 40/100 = 40%, within the range of a "good item"

The discrimination index can similarly be computed:

DU = no. of students in upper 25% with correct response / no. of students in the upper 25% = 15/20 = 0.75 or 75%

DL = no. of students in lower 25% with correct response / no. of students in the lower 25% = 5/20 = 0.25 or 25%

Discrimination Index = DU - DL = 0.75 - 0.25 = 0.50 or 50%

Thus, the item also has a "good discriminating power."

It is also instructive to note that distracter A is not an effective distracter, since it was never selected by the students. It is an implausible distracter. Distracters C and D appear to have good appeal as distracters; they are plausible distracters.
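
The whole worked example, including the distracter analysis, can be reproduced with a short Python sketch. The variable names are ours, and we take the group sizes (20 each) and the total of 100 examinees from the computations above:

# Option counts from the table above; B is the keyed answer.
counts_total = {"A": 0, "B": 40, "C": 20, "D": 20}
counts_upper = {"A": 0, "B": 15, "C": 5, "D": 0}    # upper 25% (20 students)
counts_lower = {"A": 0, "B": 5, "C": 10, "D": 5}    # lower 25% (20 students)
key = "B"
n_students = 100                                    # total used in the text

p = counts_total[key] / n_students                   # 40/100 = 0.40
du = counts_upper[key] / sum(counts_upper.values())  # 15/20 = 0.75
dl = counts_lower[key] / sum(counts_lower.values())  # 5/20 = 0.25
print(f"difficulty = {p:.2f}, discrimination = {du - dl:.2f}")

# Distracter analysis: an option no one chooses is implausible.
implausible = [o for o, n in counts_total.items() if o != key and n == 0]
print("implausible distracters:", implausible)       # ['A']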
Index of Difficulty

P = (Ru + RL) / T x 100

Where:
Ru - the number in the upper group who answered the item correctly
RL - the number in the lower group who answered the item correctly
T  - the total number of students who tried the item

Index of Item Discriminating Power

D = (Ru - RL) / (1/2 T)

Where:
P - percentage who answered the item correctly (index of difficulty)
R - number who answered the item correctly
T - total number who tried the item

Thus, with Ru = 6, RL = 2, and T = 20:

P = 8/20 x 100 = 40%

The smaller the percentage figure, the more difficult the item.

Estimate the item discriminating power using the formula below:

D = (Ru - RL) / (1/2 T) = (6 - 2) / 10 = 0.40

The discriminating power of an item is reported as a decimal fraction; maximum discriminating power is indicated by an index of 1.00.
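
In Python, the two upper-lower formulas and the worked numbers above (Ru = 6, RL = 2, T = 20) look like this; the sketch is ours:

def index_of_difficulty(ru, rl, t):
    """P = (Ru + RL) / T x 100, expressed as a percentage."""
    return (ru + rl) / t * 100

def discriminating_power(ru, rl, t):
    """D = (Ru - RL) / (1/2 T)."""
    return (ru - rl) / (t / 2)

print(index_of_difficulty(6, 2, 20))   # 40.0 (%)
print(discriminating_power(6, 2, 20))  # 0.4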

Maximum discrimination is usually found at the 50 percent level of difficulty.

0.00 – 0.20 = Very difficult

0.21 – 0.80 = Moderately difficult

0.81 – 1.00 = Very easy

For classroom achievement tests, most test constructors desire items with indices of difficulty no lower than 20% nor higher than 80%, with an average index of difficulty from 30% or 40% to a maximum of 60%.

The INDEX OF DISCRIMINATION is the difference between the proportion of the upper group who
got an item right and the proportion of the lower group who got the item right. This index is dependent
upon the difficulty of an item. It may reach a maximum value of 100 for an item with an index of difficulty of
50, that is, when 100% of the upper group and none of the lower group answer the item correctly. For items
of less than or greater than 50 difficulty, the index of discrimination has a maximum value of less than 100.

More Sophisticated Discrimination Index


Item discrimination refers to the ability of an item to differentiate among students on the basis of how
well they know the material being tested. Various hand calculation procedures have traditionally been
used to compare item responses to total test scores using high and low scoring groups of students.
Computerized analyses provide more accurate assessment of the discrimination power of items because they
take into account responses of all students rather than just high and low scoring groups.

The item discrimination index provided by ScorePak® is a Pearson Product Moment correlation
between student responses to a particular item and total scores on all other items on the test. This index is
the equivalent of a point-biserial coefficient in this application. It provides an estimate of the degree to which
an individual item is measuring the same thing as the rest of the items.

Because the discrimination index reflects the degree to which an item and the test as a whole are
measuring a unitary ability or attribute, values of the coefficient will tend to be lower for tests measuring a
wide range of content areas than for more homogeneous tests. Item discrimination indices must always be
interpreted in the context of the type of test which is being analyzed. Items with low discrimination indices
are often ambiguously worded and should be examined. Items with negative indices should be examined to
determine why a negative value was obtained. For example, a negative value may indicate that the item was
mis-keyed, so that students who knew the material tended to choose an unkeyed, but correct, response
option.

Tests with high internal consistency consist of items with mostly positive relationships with total test
score. In practice, values of the discrimination index will seldom exceed .50 because of the differing shapes of
item and total score distributions. ScorePak® classifies item discrimination as "good" if the index is above
.30; "fair" if it is between. 10 and .30; and "poor" if it is below .10.

A good item is one that has good discriminating ability and a sufficient level of difficulty (neither too difficult nor too easy).

At the end of the Item Analysis report, test items are listed according to their degrees of difficulty (easy, medium, hard) and discrimination (good, fair, poor). These distributions provide a quick overview of the test and can be used to identify items which are not performing well and which can perhaps be improved or discarded.
SUMMARY

The item-analysis procedure for norm-referenced tests provides the following information:

1. The difficulty of the item;
2. The discriminating power of the item; and
3. The effectiveness of each alternative.

Some benefits derived from item analysis are:

1. It provides useful information for class discussion of the test.
2. It provides data which help students improve their learning.
3. It provides insights and skills that lead to the preparation of better tests in the future.

6.2 VALIDATION AND VALIDITY


After performing the item analysis and revising the items which need revision, the next step is to validate
the instrument. The purpose of validation is to determine the characteristics of the whole test itself,
namely, the validity and reliability of the test. Validation is the process of collecting and analyzing evidence
to support the meaningfulness and usefulness of the test.

Validity. Validity is the extent to which a test measures what it purports to measure; it also refers to the appropriateness, correctness, meaningfulness, and usefulness of the specific decisions a teacher makes based on the test results. These two definitions of validity differ in the sense that the first refers to the test itself while the second refers to the decisions made by the teacher based on the test. A test is valid when it is aligned with the intended learning outcome.

A teacher who conducts test validation might want to gather different kinds of evidence. There are essentially three main types of evidence that may be collected: content-related evidence of validity, criterion-related evidence of validity, and construct-related evidence of validity. Content-related evidence of validity refers to the content and format of the instrument. How appropriate is the content? How comprehensive? Does it logically get at the intended variable? How adequately does the sample of items or questions represent the content to be assessed?

Criterion-related evidence of validity refers to the relationship between scores obtained using the
instrument and scores obtained using one or more other tests (often called criterion). How strong is this
relationship? How well do such scores estimate present or predict future performance of a certain type?

Construct-related evidence of validity refers to the nature of the psychological construct or characteristic being measured by the test. How well does a measure of the construct explain differences in the behavior of individuals or their performance on a certain task?

The usual procedure for determining content validity may be described as follows: The teacher
writes out the objectives of the test based on the Table of Specifications and then gives these together with
the test to at least two (2) experts along with a description of the intended test takers. The experts look at the
objectives, read over the items in the test and place a check mark in front of each question or item that they
feel does not measure one or more objectives. They also place a check mark in front of each objective not
assessed by any item in the test. The teacher then rewrites any item checked and resubmits to the experts
and/or writes new items to cover those objectives not covered by the existing test. This continues until the
experts approve of all items and also until the experts agree that all of the objectives are sufficiently covered
by the test.

In order to obtain evidence of criterion-related validity, the teacher usually compares scores on the test in question with scores on some other independent criterion test which presumably already has high validity. For example, if a test is designed to measure the mathematics ability of students and it correlates highly with a standardized mathematics achievement test (external criterion), then we say we have high criterion-related evidence of validity. In particular, this type of criterion-related validity is called concurrent validity. Another type of criterion-related validity is called predictive validity, wherein the test scores in the instrument are correlated with scores on a later performance (criterion measure) of the students. For example, the mathematics ability test constructed by the teacher may be correlated with the students' later performance in a Division-wide mathematics achievement test.
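
As a sketch of how such criterion-related evidence might be computed, the following Python snippet correlates teacher-made test scores with an external criterion; the data are invented for illustration:

import numpy as np

teacher_test = np.array([78, 85, 62, 90, 70, 88, 55, 95])  # hypothetical scores
criterion = np.array([75, 82, 65, 92, 68, 85, 58, 96])     # standardized test

r = np.corrcoef(teacher_test, criterion)[0, 1]
print(f"criterion-related validity coefficient: r = {r:.2f}")
# Using criterion scores obtained at the same time gives concurrent validity;
# using a later criterion (e.g., a Division-wide test) gives predictive validity.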

In summary, content validity refers to how well the test items reflect the knowledge actually required for a given topic area (e.g., math). It requires the use of recognized subject matter experts to evaluate whether test items assess the defined outcomes. Does a pre-employment test measure effectively and comprehensively the abilities required to perform the job? Does an English grammar test measure effectively the ability to write good English?

Criterion-related validity is also known as concrete validity because criterion validity refers to a test's correlation with a concrete outcome.

In the case of a pre-employment test, the two variables that are compared are test scores and employee performance.

There are two main types of criterion validity: concurrent validity and predictive validity. Concurrent validity refers to a comparison between the measure in question and an outcome assessed at the same time.


An example of concurrent validity is a comparison of scores on the NAT Math exam with course grades in Grade 12 Math. In predictive validity, we ask this question: do the scores on the NAT Math exam predict the Math grade in Grade 12?

6.3. Reliability
Reliability refers to the consistency of the scores obtained: how consistent they are for each individual from one administration of an instrument to another and from one set of items to another. We have already given formulas for computing the reliability of a test; for internal consistency, for instance, we could use the split-half method or the Kuder-Richardson formulae (KR-20 or KR-21).
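
As a sketch, KR-20 can be computed from a 0/1 score matrix as follows; the data are invented and the implementation is ours, using the standard formula KR-20 = (k / (k - 1)) x (1 - sum of pq / variance of total scores):

import numpy as np

# Rows = students, columns = items; 1 = correct (made-up data).
scores = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0],
    [0, 1, 1, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
])

k = scores.shape[1]                          # number of items
p = scores.mean(axis=0)                      # proportion correct per item
q = 1 - p                                    # proportion wrong per item
var_total = scores.sum(axis=1).var(ddof=1)   # sample variance of total scores

kr20 = (k / (k - 1)) * (1 - (p * q).sum() / var_total)
print(f"KR-20 reliability: {kr20:.2f}")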

Reliability and validity are related concepts. If an instrument is unreliable, it cannot yield valid results. As reliability improves, validity may improve (or it may not). However, if an instrument is shown scientifically to be valid, then it is almost certain that it is also reliable.

Predictive validity compares the measure in question with an outcome assessed at a later time. An example of predictive validity is a comparison of scores on the National Achievement Test (NAT) with first semester grade point average (GPA) in college. Do NAT scores predict college performance? Construct validity refers to the ability of a test to measure what it is supposed to measure. If, as a researcher, you intend to measure depression but actually measure anxiety, your research is compromised.

The following table is a standard followed almost universally in educational tests and measurement.

Reliability      Interpretation

.90 and above    Excellent reliability; at the level of the best standardized tests.

.80 - .90        Very good for a classroom test.

.70 - .80        Good for a classroom test; in the range of most. There are probably a few items which could be improved.

.60 - .70        Somewhat low. This test needs to be supplemented by other measures (e.g., more tests) to determine grades. There are probably some items which could be improved.

.50 - .60        Suggests need for revision of the test, unless it is quite short (ten or fewer items). The test definitely needs to be supplemented by other measures (e.g., more tests) for grading.

.50 or below     Questionable reliability. This test should not contribute heavily to the course grade, and it needs revision.
