Lesson 17 Validity and Reliability of The Instrument (Cont)

RELIABILITY

RIC G. BRAZAL
RELIABILITY

Refers to the consistency of the results.
A reliable instrument yields the same results for individuals who take the test more than once.
METHODS IN ESTABLISHING
RELIABILITY

1. Test-retest or Stability Test – the same test is given to a group of respondents twice. The scores on the first test are correlated with the scores on the second test. A high correlation index indicates that the test has high reliability.
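The test-retest correlation described above can be sketched in a few lines of Python. This is an illustrative example only; the score lists are invented, not taken from the lesson.

```python
# Test-retest reliability: correlate two administrations of the same test.
# The score lists below are hypothetical illustration data.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

first_test  = [78, 85, 62, 90, 71, 88, 66, 80]   # first administration
second_test = [80, 83, 65, 92, 70, 85, 68, 79]   # same respondents, second administration

r = pearson_r(first_test, second_test)
print(f"Test-retest reliability: r = {r:.3f}")
```

A correlation near 1.0 would suggest stable scores across the two administrations, subject to the problems and weaknesses the next slides discuss.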
PROBLEMS TO CONSIDER

1. Some students may remember some of the items from the first test administration.
2. Scores may differ not only because of the unreliability of the test but also because the students themselves may have changed in some ways.
WEAKNESSES IDENTIFIED

1. Interpretation of a test-retest correlation is not necessarily straightforward. A low correlation may not indicate that the reliability of the test is low.
2. Reactivity: the first administration may itself change the respondents, and this effect cannot be logically separated from unreliability.
3. Overestimation due to memory.
INTERNAL CONSISTENCY

Items must be correlated with each other, and the test should be internally consistent.
SPLIT HALF

A method of establishing internal consistency wherein a test is given only once to the respondents. The test is split into two halves, and the scores on the halves are correlated.

Spearman-Brown Prophecy Formula:

r_t = 2r / (1 + r)

Where: r – the correlation coefficient computed for the split halves
r_t – the estimated reliability of the entire test
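The split-half procedure with the Spearman-Brown correction can be sketched as follows. The 0/1 item-score matrix is invented for illustration, and the odd/even split is one common choice of halves (the lesson does not prescribe a specific split).

```python
# Split-half reliability with the Spearman-Brown prophecy formula.
# Item scores (1 = correct, 0 = wrong) are hypothetical illustration data.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(item_matrix):
    """item_matrix: one row per respondent, one 0/1 column per item."""
    odd  = [sum(row[0::2]) for row in item_matrix]   # half 1: odd-numbered items
    even = [sum(row[1::2]) for row in item_matrix]   # half 2: even-numbered items
    r = pearson_r(odd, even)                         # correlation of the two halves
    return (2 * r) / (1 + r)                         # Spearman-Brown prophecy formula

scores = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 0, 1],
]
print(f"Estimated full-test reliability: {split_half_reliability(scores):.3f}")
```

The correction is needed because the half-test correlation understates the reliability of the full-length test.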
KUDER-RICHARDSON TEST

A method that measures the extent to which items in one form of a test share commonalities with one another, as do the items of an equivalent test form. This is called item-total correlation. The test items are said to be homogeneous if the reliability coefficient is high.
KUDER-RICHARDSON TEST

Kuder-Richardson Formula 20 (Catane, 2000):

KR20 = (k / (k − 1)) × (1 − Σpq / s²)

Where: KR20 – reliability coefficient of the whole test
k – number of items in the test
s² – variance of the total scores of the test (the standard deviation squared)
Σpq – the sum, over all items, of the proportion of persons who answered each item correctly (p) multiplied by the proportion who did not (q = 1 − p)
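A minimal sketch of the KR-20 computation, assuming dichotomous (right/wrong) item scores; the score matrix is invented for illustration and the population variance of the totals is used.

```python
# Kuder-Richardson Formula 20 for dichotomous (0/1) test items.
# The score matrix is hypothetical illustration data.

def kr20(item_matrix):
    """item_matrix: one row per respondent, one 0/1 column per item."""
    n = len(item_matrix)
    k = len(item_matrix[0])                               # k: number of items
    totals = [sum(row) for row in item_matrix]            # total score per respondent
    mean = sum(totals) / n
    variance = sum((t - mean) ** 2 for t in totals) / n   # s²: variance of total scores
    sum_pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_matrix) / n        # p: proportion correct on item j
        sum_pq += p * (1 - p)                             # q = 1 - p
    return (k / (k - 1)) * (1 - sum_pq / variance)

scores = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 0, 1],
]
print(f"KR-20 reliability coefficient: {kr20(scores):.3f}")
```

A high coefficient would indicate homogeneous items, as the slide above notes.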
OTHER CRITERIA FOR ASSESSING
QUANTITATIVE MEASURES
1. Sensitivity – The instrument should be able to
identify a case correctly.
2. Specificity – The instrument should be able to
identify a non-case correctly.
3. Comprehensibility – Subjects and researchers should
be able to comprehend the behavior required to
secure accurate and valid measurements.
4. Precision – An instrument should discriminate
between people who exhibit varying degrees of an
attribute as precisely as possible.
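The first two criteria, sensitivity and specificity, can be computed directly from screening results judged against a gold standard. The counts below are hypothetical, purely to show the arithmetic.

```python
# Sensitivity and specificity of an instrument, from hypothetical screening counts.

true_positives  = 45   # cases the instrument correctly identified
false_negatives = 5    # cases the instrument missed
true_negatives  = 90   # non-cases the instrument correctly identified
false_positives = 10   # non-cases the instrument wrongly flagged

sensitivity = true_positives / (true_positives + false_negatives)   # ability to identify a case
specificity = true_negatives / (true_negatives + false_positives)   # ability to identify a non-case

print(f"Sensitivity: {sensitivity:.2f}")
print(f"Specificity: {specificity:.2f}")
```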
OTHER CRITERIA FOR ASSESSING
QUANTITATIVE MEASURES
5. Speed – The researcher should not rush the measuring process, so that he or she can obtain reliable measurements.
6. Range – The instrument should be capable of detecting
the smallest expected value of the variable to the largest
in order to obtain meaningful measurements.
7. Linearity – A researcher normally strives to construct
measures that are equally accurate and sensitive over
the entire range of values.
8. Reactivity – The instrument should, as much as possible,
avoid affecting the attribute being measured.