Reliability
'Reliability' of any research is the degree to which it gives a consistent score across repeated
measurements. It can thus be viewed as 'repeatability' or 'consistency'. In summary:
Inter-rater: Different people, same test.
Test-retest: Same people, different times (see the sketch after this list).
Parallel-forms: Same people, same time, different test.
Internal consistency: Different questions, same construct.
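To make the test-retest idea concrete, the sketch below estimates reliability as the Pearson correlation between two administrations of the same test. The scores are made up for five hypothetical participants, purely for illustration.

```python
import numpy as np

# Hypothetical scores for the same five participants on two occasions.
time1 = np.array([12, 15, 9, 20, 14])
time2 = np.array([13, 14, 10, 19, 15])

# Test-retest reliability is commonly estimated as the Pearson
# correlation between the two administrations.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: r = {r:.2f}")
```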
(1) Inter-Rater Reliability (across different people; also called inter-observer reliability or inter-
coder reliability) - when multiple people are giving assessments of some kind, or are the
subjects of some test, then different raters should produce similar scores for the same subject. It can be
used to calibrate people, for example those being used as observers in an experiment.
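One common way to quantify inter-rater agreement is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. The sketch below uses hypothetical category labels from two raters; the data and variable names are illustrative only.

```python
import numpy as np

# Hypothetical category labels ("codes") assigned by two raters to
# the same ten observations.
rater_a = np.array(["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"])
rater_b = np.array(["yes", "no",  "no", "yes", "no", "yes", "yes", "no", "yes", "yes"])

# Observed agreement: proportion of items rated identically.
p_o = np.mean(rater_a == rater_b)

# Expected chance agreement, from each rater's marginal proportions.
categories = np.union1d(rater_a, rater_b)
p_e = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)

# Cohen's kappa corrects observed agreement for chance.
kappa = (p_o - p_e) / (1 - p_e)
print(f"Observed agreement: {p_o:.2f}, Cohen's kappa: {kappa:.2f}")
```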
(b) Split-half correlation divides the items that measure the same construct into two
half-tests, applies both to the same group of people, and then calculates the
correlation between the two total scores.
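As a rough illustration, the sketch below simulates item responses driven by a single construct, splits the items into odd and even halves, and correlates the half totals. The Spearman-Brown step at the end is not mentioned above but is the usual companion to split-half: it projects the half-test correlation up to the full test length. All data here are simulated.

```python
import numpy as np

# Simulated data: 30 respondents answering 8 items that all tap the
# same construct (latent ability plus item-level noise).
rng = np.random.default_rng(0)
ability = rng.normal(0.0, 1.0, size=(30, 1))
items = ability + rng.normal(0.0, 1.0, size=(30, 8))

# Split the items into two halves (odd/even positions is a common
# scheme) and total each half per respondent.
half1 = items[:, 0::2].sum(axis=1)
half2 = items[:, 1::2].sum(axis=1)

# Correlation between the two half-test totals.
r_half = np.corrcoef(half1, half2)[0, 1]

# Spearman-Brown correction: estimates full-test reliability from the
# half-test correlation.
r_full = 2 * r_half / (1 + r_half)
print(f"Split-half r = {r_half:.2f}, Spearman-Brown corrected = {r_full:.2f}")
```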