Reliability

Reliability refers to the consistency of measurement results obtained from a test. It is assessed using statistical indices that measure the correlation between two sets of scores from the same test. Several methods are used to compute reliability coefficients, including test-retest, equivalent forms, split-half, and Kuder-Richardson. Reliability can be affected by factors related to the test itself, the test takers, and the test administrator. Consistency is important but does not guarantee accuracy or truth.


Reliability

• Reliability refers to the consistency of measurement.
• Scores from a first and a second administration of a test cannot be expected to be identical; some variation is always found.
• Reliability refers to the results (scores) obtained by using a tool (test), not to the tool itself.
• Consistency is considered over time, and the length of that period is significant.
• Consistency is also considered over tasks, so reliability has different interpretations.
• Reliability is a necessary but not a sufficient condition for establishing the truth.
• Reliability may or may not speak to the truth of what is measured.
• Reliability is assessed primarily with statistical indices. It refers to the coefficient of correlation between two sets of scores, which is also called the coefficient of reliability.
Methods of Computing the Coefficient of Reliability
• To compute the coefficient of reliability, it is essential to have two sets of measures obtained under identical conditions and to compare them. In practice, however, the conditions are hardly ever identical. The American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME) (1999) therefore suggested substitutes for this ideal procedure to determine the reliability coefficient. Some of the important methods are described below:
1. Test-Retest Method
1. A sample of individuals (e.g., 20) is selected from the target population.
2. The test is administered twice to the same sample of individuals, with a gap of some days between administrations.
3. The coefficient of correlation is computed from the two sets of scores.
4. The gap between the first and second administration should be neither too short nor too long.
Example: a sample of 5 students, first administration

Student   X    x = X - Mx   x²
A         10    4           16
B          5   -1            1
C          4   -2            4
D          6    0            0
E          5   -1            1
          ∑X = 30           ∑x² = 22

Mx = ∑X / N = 30 / 5 = 6.0
Second administration

Student   Y    y = Y - My   y²   xy
A          9    3            9   12
B          6    0            0    0
C          4   -2            4    4
D          6    0            0    0
E          5   -1            1    1
          ∑Y = 30           ∑y² = 14   ∑xy = 17

My = ∑Y / N = 30 / 5 = 6.0
• r = ∑xy / √(∑x²)(∑y²)
    = 17 / √(22 × 14)
    = 17 / 17.55
    ≈ 0.97
The coefficient of reliability comes out to be approximately 0.97, which is considered quite satisfactory. It shows a high degree of consistency or stability between the two sets of measures (scores).
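The same result can be checked with a short script. This is a minimal sketch in plain Python (no external libraries), using the scores from the two tables above; the variable names are illustrative only:

# Test-retest scores from the example above (5 students, two administrations)
x = [10, 5, 4, 6, 5]   # first administration (X)
y = [9, 6, 4, 6, 5]    # second administration (Y)

n = len(x)
mx = sum(x) / n
my = sum(y) / n

# Sums of deviation products and squared deviations
sum_xy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))   # 17
sum_x2 = sum((xi - mx) ** 2 for xi in x)                      # 22
sum_y2 = sum((yi - my) ** 2 for yi in y)                      # 14

r = sum_xy / (sum_x2 * sum_y2) ** 0.5
print(round(r, 2))   # 0.97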
2. Equivalent Forms Method
• In this method the coefficient of reliability is computed from two equivalent forms of the test. The two different but equivalent forms are administered to the same group of students in close succession, and the scores obtained on the two forms are correlated. The resulting coefficient of correlation is termed the coefficient of equivalence. It shows the degree to which the measures (scores) on the two equivalent forms of the test are related.
• Example: a sample of 5 students is administered the two forms of the test on the same day.

Student   Form A score   R1   Form B score   R2   d = R1 - R2   d²
A         10             1    9              1     0            0
B          6             2    7              2     0            0
C          4             4    3              4     0            0
D          5             3    6              3     0            0
E          0             5    0              5     0            0
                                                   ∑d² = 0

• rho = 1 - 6∑d² / (N(N² - 1))
      = 1 - (6 × 0) / (5 × (25 - 1))
      = 1.0, a perfect coefficient of correlation
• With a gap of 30 days between the two forms:

Student   Form A score   R1   Form B score   R2   d = R1 - R2   d²
A         10             1    7              1     0            0
B          6             2    4              3    -1            1
C          4             4    5              2     2            4
D          5             3    3              4    -1            1
E          0             5    1              5     0            0
                                                   ∑d² = 6

• rho = 1 - 6∑d² / (N(N² - 1))
      = 1 - (6 × 6) / (5 × (25 - 1))
      = 1 - 36 / 120
      = 0.7
0.7 is the coefficient of equivalence, and it is termed the coefficient of reliability; it is slightly low but acceptable.
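The rank-difference computation above can be reproduced with a minimal Python sketch (plain Python, no libraries). The ranks() helper is written here only for illustration and assumes no tied scores, as in this example:

def ranks(scores):
    # Rank 1 goes to the highest score; no ties occur in this data
    order = sorted(scores, reverse=True)
    return [order.index(s) + 1 for s in scores]

form_a = [10, 6, 4, 5, 0]   # Form A scores (students A-E)
form_b = [7, 4, 5, 3, 1]    # Form B scores after the 30-day gap

n = len(form_a)
sum_d2 = sum((a - b) ** 2 for a, b in zip(ranks(form_a), ranks(form_b)))   # 6

rho = 1 - (6 * sum_d2) / (n * (n ** 2 - 1))
print(round(rho, 2))   # 0.7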
3. Split-Half Method
- Selection of sample: 5 students
- Test of 20 items
- Administration of the test
- Division of the test into two halves for the purpose of scoring:
  Odd items:  1, 3, 5, 7, 9, 11, 13, 15, 17, 19
  Even items: 2, 4, 6, 8, 10, 12, 14, 16, 18, 20
Student   1st half score   R1   2nd half score   R2   d = R1 - R2   d²
A         22               2    24               2     0            0
B         20               3    18               4    -1            1
C         30               1    25               1     0            0
D         15               4    20               3     1            1
E         10               5    13               5     0            0
                                                       ∑d² = 2

rho = 1 - 6∑d² / (N(N² - 1))
    = 1 - (6 × 2) / 120
    = 0.9

Reliability of the full test = 2 × r of the two halves / (1 + r of the two halves)
                             = (2 × 0.9) / (1 + 0.9)
                             ≈ 0.95
The coefficient of reliability is approximately 0.95, which is quite satisfactory.
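A minimal Python sketch of the same split-half computation, including the Spearman-Brown correction for the full-length test; the ranks() helper is illustrative and assumes no tied scores:

def ranks(scores):
    # Rank 1 goes to the highest score; no ties occur in this data
    order = sorted(scores, reverse=True)
    return [order.index(s) + 1 for s in scores]

half1 = [22, 20, 30, 15, 10]   # scores on the odd items (students A-E)
half2 = [24, 18, 25, 20, 13]   # scores on the even items (students A-E)

n = len(half1)
sum_d2 = sum((a - b) ** 2 for a, b in zip(ranks(half1), ranks(half2)))   # 2
rho_half = 1 - (6 * sum_d2) / (n * (n ** 2 - 1))   # correlation of the two halves

# Spearman-Brown correction: reliability of the full-length test
r_full = (2 * rho_half) / (1 + rho_half)
print(round(rho_half, 2), round(r_full, 2))   # 0.9 0.95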
• 4. Kuder-Richardson Method:
- A method of estimating the reliability of test scores from a single administration of a single form of the test.
- For this, Kuder and Richardson suggested the following formulae:
(i) K.R.-20
(ii) K.R.-21

KR-20: r_KR = [k / (k - 1)] × [(s_x² - ∑pq) / s_x²]

Here,
r_KR = reliability coefficient of the whole test
k = number of items in the test
s_x² = variance of the test scores
p = proportion of the group answering a test item correctly
q = 1 - p (proportion of wrong responses to the item)
Example:
- Test of 20 items
- Sample of 5 students
- Administer the test

Student   Score (X)   x = X - M      x²
A         15          15 - 12 =  3    9
B         10          10 - 12 = -2    4
C          5           5 - 12 = -7   49
D         12          12 - 12 =  0    0
E         18          18 - 12 =  6   36
          ∑X = 60                    ∑x² = 98

Mean = ∑X / N = 60 / 5 = 12
V = ∑x² / N = 98 / 5 = 19.6
SD = √V = √19.6   (since V = SD²)
Item   R   W   p    q    pq
1      3   2   .6   .4   .24
2      4   1   .8   .2   .16
3      ...
...
20     ...
                         ∑pq = 4.50

• r_KR20 = [k / (k - 1)] × [(s_x² - ∑pq) / s_x²]
         = [20 / (20 - 1)] × [(19.6 - 4.50) / 19.6]
         = 0.81
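A minimal Python sketch of the KR-20 computation, using the summary values given above (k, the variance of the total scores, and ∑pq from the item table):

k = 20          # number of items
var_x = 19.6    # variance of the total test scores
sum_pq = 4.50   # sum of p*q over the 20 items

r_kr20 = (k / (k - 1)) * (var_x - sum_pq) / var_x
print(round(r_kr20, 2))   # 0.81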
K.R.-21
• r_KR21 = [k / (k - 1)] × [1 - M(k - M) / (k·s²)]
         = [20 / (20 - 1)] × [1 - (12 × (20 - 12)) / (20 × 19.6)]
         = (20 / 19) × (1 - 96/392)
         ≈ 0.79
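KR-21 needs only the number of items, the mean, and the variance of the total scores; a minimal Python sketch with the values from this example:

k = 20         # number of items
mean = 12.0    # mean of the total test scores
var_x = 19.6   # variance of the total test scores

r_kr21 = (k / (k - 1)) * (1 - mean * (k - mean) / (k * var_x))
print(round(r_kr21, 2))   # 0.79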
Factors Affecting the Reliability of Test Scores:
The factors affecting the reliability of test scores may be put into three categories.
1. Factors Relating to the Test:
I. Length of the test
II. Ambiguity in test items
III. Improper instructions
IV. Irrelevant items
V. Limited time
VI. Subjectivity / guesswork
2. Factors Relating to Test Takers (Students)
I. Mental readiness
II. Anxiety
III. Physical unfitness
IV. Unawareness of the order and format of the items
V. Determination and practice
VI. Coaching
3. Factors Relating to the Test Giver (Administrator):
I. Carelessness at the time of administering the test
II. Carelessness at the time of evaluating the answer scripts
III. Lack of proper rapport with the test taker
IV. Lack of proper guidance and direction
V. Negative reinforcement
