Module 4: Psychometric Properties
Topics to cover…
• Unit 2: Types of reliability - test-retest, alternate forms, split-half, coefficient
alpha, KR-20, and inter-scorer reliability.
• When all other factors are controlled, a reliable test is one that produces
essentially identical results from one occasion to another.
• Odd-even method: the odd-numbered items constitute one part of the test and
the even-numbered items constitute the second part.
• Each examinee gets two scores from a single administration (one from the odd
half, one from the even half); a sketch of the computation follows below.
Disadvantage:
• Temporary conditions and changes within the examinee and the environment work
either favorably or unfavorably, enhancing or depressing the reliability coefficient.
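A minimal sketch of the odd-even split in Python. The item matrix here is synthetic and the variable names are hypothetical; the Spearman-Brown step-up used at the end is the usual correction for estimating full-length reliability from a half-test correlation, although the slide does not name it.

```python
import numpy as np

# Synthetic illustration: 50 examinees x 20 dichotomous items driven by a common ability
rng = np.random.default_rng(0)
ability = rng.normal(size=(50, 1))
items = (ability + rng.normal(size=(50, 20)) > 0).astype(int)

# Odd-even split: odd-numbered items form one half, even-numbered items the other
odd_scores = items[:, 0::2].sum(axis=1)   # items 1, 3, 5, ...
even_scores = items[:, 1::2].sum(axis=1)  # items 2, 4, 6, ...

# One administration yields two scores per examinee; correlate the halves
r_half = np.corrcoef(odd_scores, even_scores)[0, 1]

# Spearman-Brown correction: estimated reliability of the full-length test
r_full = 2 * r_half / (1 + r_half)
print(f"half-test r = {r_half:.3f}, Spearman-Brown estimate = {r_full:.3f}")
```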
Internal consistency reliability
Kuder-Richardson Formulas
• K-R20 = [k / (k − 1)] × [1 − Σpq / σ²], where k = no. of items, p = proportion of
examinees passing each item, q = 1 − p, and σ = SD of total scores.
• K-R21 = [k / (k − 1)] × [1 − M(k − M) / (kσ²)], where M = mean total score; K-R21
assumes all items are of equal difficulty.
Requirements:
• Both formulas require dichotomously scored items (e.g. right/wrong) from a single
administration.
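A minimal sketch of both formulas in Python. The item matrix is synthetic and the function names are hypothetical; the computations follow the K-R20 and K-R21 expressions given above.

```python
import numpy as np

def kr20(items):
    """K-R20: [k/(k-1)] * (1 - sum(p*q) / variance of total scores)."""
    k = items.shape[1]                          # no. of items
    p = items.mean(axis=0)                      # proportion passing each item
    q = 1 - p                                   # proportion failing each item
    var_total = items.sum(axis=1).var(ddof=1)   # SD of total scores, squared
    return (k / (k - 1)) * (1 - (p * q).sum() / var_total)

def kr21(items):
    """K-R21: assumes equal item difficulty; needs only M, k and the total-score variance."""
    k = items.shape[1]
    total = items.sum(axis=1)
    m, var_total = total.mean(), total.var(ddof=1)
    return (k / (k - 1)) * (1 - m * (k - m) / (k * var_total))

# Synthetic dichotomous responses: 50 examinees x 20 items
rng = np.random.default_rng(0)
ability = rng.normal(size=(50, 1))
items = (ability + rng.normal(size=(50, 20)) > 0).astype(int)
print(f"KR-20 = {kr20(items):.3f}, KR-21 = {kr21(items):.3f}")
```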
Kuder-Richardson Formulas - Disadvantages
• Not appropriate for heterogeneous tests, i.e. tests whose items measure different
functions.
Types
Content or Curricular Validity
• Sampling validity - the extent to which the test samples the total content area.
The following points should be covered to ensure full content validation of a test:
• The area of content (or items) should be specified explicitly so that all major
portions are adequately covered by the items, in proper proportion.
• Before item writing starts, the content area should be fully defined in clear
words.
Concurrent Validity
• There is no time gap between obtaining the test scores and the criterion scores;
the test is correlated with a criterion that is available at the present time.
• The resulting coefficient indicates the concurrent validity of the test. If the
coefficient is high, the test has good concurrent validity.
Types
Concurrent Validity
1. Relevance
3. Reliability
4. Availability
Types
Construct Validity
• Determining whether or not all or some of the measures act as if they were
measuring the construct.
Types
Construct Validity
iv. Differences among well-defined groups on the test are theory-consistent.
v. Intervention effects produce changes in the test scores that are theory-consistent.
vi. The factor analysis of the test scores produces results that are understandable
in the light of the theory by which the test was constructed (see the sketch
below).
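A minimal sketch of point (vi), assuming a test built from two theorised subscales; the data, subscale names, and loadings pattern are all made up for illustration. Exploratory factor analysis of the item scores should recover loadings that make sense under the theory behind the test.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical test: items 1-4 written to tap "verbal", items 5-8 to tap "numerical"
rng = np.random.default_rng(0)
verbal = rng.normal(size=(100, 1))
numerical = rng.normal(size=(100, 1))
items = np.hstack([
    verbal + 0.5 * rng.normal(size=(100, 4)),     # verbal items
    numerical + 0.5 * rng.normal(size=(100, 4)),  # numerical items
])

# Two-factor solution; loadings should split cleanly by subscale if the theory holds
fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
print(np.round(fa.components_, 2))  # rows = factors, columns = items
```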
Types
Convergent validation
• Convergent validity shows that two measures that are supposed to be related are,
in fact, related.
Discriminant validation
• Discriminant validity shows that two measures that are not supposed to be related
are, in fact, unrelated (both are illustrated in the sketch below).
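A minimal sketch of checking both ideas with correlations. The measures (two anxiety scales and shoe size) and the data are hypothetical: the two scales of the same construct should correlate highly (convergent), while the unrelated measure should correlate near zero with them (discriminant).

```python
import numpy as np

rng = np.random.default_rng(1)
anxiety_a = rng.normal(size=200)
anxiety_b = anxiety_a + 0.4 * rng.normal(size=200)   # built to measure the same construct
shoe_size = rng.normal(size=200)                     # theoretically unrelated measure

r = np.corrcoef([anxiety_a, anxiety_b, shoe_size])
print(f"convergent   r(A, B)         = {r[0, 1]:.2f}")  # expected: high
print(f"discriminant r(A, shoe size) = {r[0, 2]:.2f}")  # expected: near zero
```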
E.g.:
• The expectancy table is one way of showing the relation between the
test scores and the criterion measures.
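A minimal sketch of an expectancy table in Python. The selection-test scores, the success criterion, and the score-band cut-offs are hypothetical: scores are grouped into bands and cross-tabulated against the criterion, giving the proportion of successes expected in each band.

```python
import pandas as pd

# Hypothetical data: selection test scores and whether the candidate later succeeded on the job
df = pd.DataFrame({
    "test": [35, 42, 55, 61, 48, 70, 66, 39, 58, 73, 45, 52],
    "success": [0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0],
})

# Band the test scores, then cross-tabulate bands against the criterion
df["band"] = pd.cut(df["test"], bins=[0, 45, 60, 100], labels=["low", "middle", "high"])
expectancy = pd.crosstab(df["band"], df["success"], normalize="index")
print(expectancy)  # each row: proportion failing / succeeding within that score band
```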
• Ambiguous Directions
• Socio-cultural Differences
• Group variability
• Environmental conditions
• Homogeneity of items
• Discrimination value
• Scorer reliability
Improve reliability of test scores
• The group of examinees should be heterogeneous.
• Discard the items that bring down the reliability; two techniques for this are
factor analysis and item analysis.
• Factor analysis checks that the test is unidimensional; tests are most reliable
when they measure a single dimension.
• Item analysis: the correlation between each item and the total score for the
test is examined (see the sketch below).
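A minimal sketch of the item-analysis step. The response matrix is synthetic and the 0.20 cut-off is only an illustrative threshold: each item is correlated with the total of the remaining items, and items with a low corrected item-total correlation are candidates for removal.

```python
import numpy as np

# Synthetic responses: 100 examinees x 10 items; item 10 is pure noise
rng = np.random.default_rng(0)
ability = rng.normal(size=(100, 1))
items = ability + 0.7 * rng.normal(size=(100, 10))
items[:, 9] = rng.normal(size=100)  # a "bad" item unrelated to the rest

for j in range(items.shape[1]):
    rest_total = np.delete(items, j, axis=1).sum(axis=1)  # total score without item j
    r = np.corrcoef(items[:, j], rest_total)[0, 1]
    flag = "  <-- consider discarding" if r < 0.2 else ""
    print(f"item {j + 1}: corrected item-total r = {r:.2f}{flag}")
```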
RELATION OF VALIDITY TO RELIABILITY
• Reliability and validity are the two dimensions of test efficiency.
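A standard way of stating the relation, not spelled out on the slide: a test's validity coefficient cannot exceed the square root of its reliability, and the correction for attenuation shows how unreliability in the test or the criterion depresses the observed validity.

```latex
% Upper bound: validity against any criterion is limited by the test's reliability
r_{xy} \le \sqrt{r_{xx}}

% Correction for attenuation: estimated validity if both test and criterion were perfectly reliable
r_{x_\infty y_\infty} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}
```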