Validity

The document discusses the definition and types of validity in educational testing. It defines validity as the degree to which a test measures what it is intended to measure. Four types of validity are discussed: face validity, which concerns whether a test appears appropriate on its surface; content validity, which focuses on how well test items represent the material being taught; criterion-related validity, which compares test scores to external measures; and construct validity, which evaluates how well a test measures hypothetical constructs. Examples are provided for each type to illustrate how validity can be determined for classroom tests.

Mikee Jane M. Sabillo
Educ 108 TTh 5:30-7:00 PM

TASK 1: DEFINITION OF VALIDITY


Validity (Asaad, 2004) refers to the degree to which a test actually measures what it
tries to measure. The validity of a test concerns what the test measures and how well it does
so.
Validity (Airasian, 2000) is concerned with whether the information obtained from an
assessment permits the teacher to make a correct decision about a student's learning. This
means that the appropriateness of score-based inferences or decisions is judged against the
students' test results. It is the extent to which a test measures what it is supposed to measure.
Cronbach (1970) considered validity as the accuracy of a specific prediction or
inference made from a test score.

TASK 2: SAMPLE ILLUSTRATION OF VALIDITY (Kubiszyn, 2010)


A valid test measures what it is supposed to measure. If it is supposed to be a test of third-grade
arithmetic ability, it should measure third-grade arithmetic skills, not fifth-grade arithmetic
skills and not reading ability. If it is supposed to be a measure of the ability to write behavioral
objectives, it should measure that ability, not the ability to recognize bad objectives.
TASK 3: FACTORS AFFECTING VALIDITY (Asaad, 2004)
1. Inappropriateness of the test items. Measuring understanding, thinking skills, and
other complex types of achievement with test forms that are appropriate only for
measuring factual knowledge will invalidate the results.
2. Directions of the test items. Directions that do not clearly state how students should
respond to the items and record their answers will tend to lessen the validity of the test
items.
3. Reading vocabulary and sentence structure. Vocabulary and sentence structures that
do not match the level of the students will result in the test measuring reading
comprehension or intelligence rather than what it intends to measure.
4. Level of difficulty of the test items. Test items that are too easy or too difficult
cannot discriminate between the bright and the poor students. Thus, they will lower the
validity.
5. Poorly constructed test items. Test items that unintentionally provide clues to the
answer will distort the important aspects of student performance that the test is
intended to measure.
6. Length of the test. A test should contain a sufficient number of items to measure
what it is supposed to measure.
7. Arrangement of the test items. Test items should be arranged in increasing order of
difficulty.
8. Pattern of the answers. A systematic pattern of correct answers enables students to
guess, and this will lower the validity of the test.
9. Ambiguity. Ambiguous statements in test items contribute to misinterpretation and
confusion.
TASK 4: TYPES OF VALIDITY
1. Face Validity (Asaad, 2004). Test questions are said to have face validity when they
appear to be related to the group being examined. This is determined by examining the
test to find out if, on its surface, it looks like a good one. There is no common numerical
method for determining face validity (Raagas, 2010).
Sample classroom illustration (Asaad, 2004):
Calculating the area of a rectangle, given that its length and width are 4 feet and 6 feet,
respectively.

2. Content Validity (Gabuyo, 2012). A type of validation that refers to the relationship
between a test and the instructional objectives; it establishes that the test
measures what it is supposed to measure. Things to remember about content validity:
a. The evidence of the content validity of a test is found in its Table of
Specifications.
b. This is the most important type of validity for a classroom teacher.
c. There is no coefficient for content validity. It is determined by experts
judgmentally, not empirically.
Content validity (Calmorin, 2004) is related to how adequately the content of the
test samples the domain about which inferences are to be made. It is established
through logical analysis; adequate sampling of test items is usually enough to assure
that a test has content validity (Oriondo, 1984).
Sample classroom illustration (Calmorin, 2004):
A teacher wishes to validate a test in Mathematics. He requests experts in
Mathematics to judge whether the items or questions measure the knowledge, skills,
and values they are supposed to measure.

3. Criterion-related Validity. A type of validation that refers to the extent to which
scores from a test relate to theoretically similar measures. It is a measure of how
accurately a student's current test scores can be used to estimate a score on a criterion
measure, such as performance in courses, classes, or other measurement instruments. For
example, classroom reading grades should indicate levels of performance similar to
standardized reading test scores (Gabuyo, 2012).

a. Concurrent validity. The criterion and the predictor data are collected at the same
time. This type of validity is appropriate for tests designed to assess a student's
current criterion status, or when you want to diagnose a student's status, as in a
diagnostic screening test. It is established by correlating test scores with the
criterion measure, typically using a correlation coefficient (Gabuyo, 2012).
It refers to the degree to which the test correlates with a criterion that is set up
as an acceptable measure or standard other than the test itself. The criterion is
always available at the time of testing (Asaad, 2004).
Sample classroom illustration (Asaad, 2004):
Ms. Fatima develops a test and wants to know if it is valid. She takes another test
of already-known validity and uses it as the criterion. She gives the two tests, her
test and the criterion test, to the same group of 10 students. Their scores are
shown below. Determine the validity of her test.

Her Test (X)   Criterion (Y)      XY       X²       Y²
     34              30          1020     1156      900
     40              37          1480     1600     1369
     35              25           875     1225      625
     49              37          1813     2401     1369
     50              45          2250     2500     2025
     38              29          1102     1444      841
     37              35          1295     1369     1225
     47              40          1880     2209     1600
     38              35          1330     1444     1225
     43              39          1677     1849     1521
Σ   411             352         14722    17197    12700

r = [nΣXY − (ΣX)(ΣY)] / √{[nΣX² − (ΣX)²][nΣY² − (ΣY)²]}

r = [10(14,722) − (411)(352)] / √{[10(17,197) − (411)²][10(12,700) − (352)²]}
r = 2,548 / √[(3,049)(3,096)]
r ≈ 0.83

Analysis: A 0.83 coefficient of correlation indicates that her test has high concurrent
validity.
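The hand computation above can be reproduced in a few lines of Python. This is a minimal sketch using the same raw-score Pearson formula as the worked example, with Ms. Fatima's data hard-coded:

```python
from math import sqrt

# Ms. Fatima's data: her test (X) and the criterion test (Y)
x = [34, 40, 35, 49, 50, 38, 37, 47, 38, 43]
y = [30, 37, 25, 37, 45, 29, 35, 40, 35, 39]

n = len(x)
sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(a * b for a, b in zip(x, y))   # ΣXY = 14722
sum_x2 = sum(a * a for a in x)              # ΣX² = 17197
sum_y2 = sum(b * b for b in y)              # ΣY² = 12700

# Raw-score Pearson product-moment correlation
r = (n * sum_xy - sum_x * sum_y) / sqrt(
    (n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2)
)
print(round(r, 2))  # 0.83
```

The rounded result matches the coefficient computed by hand, confirming the high concurrent validity of the test.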

b. Predictive validity. A type of validation that refers to the extent to which a
student's current test result can be used to estimate accurately the outcome of the
student's performance at a later time. It is appropriate for tests designed to assess
a student's future status on a criterion (Gabuyo, 2012).

This refers to the degree of accuracy with which a test predicts one's performance on
some subsequent outcome (Asaad, 2004).

Sample classroom illustration (Asaad, 2004):


Mr. Celso wants to know the predictive validity of his test administered in the
previous year by correlating the scores with the grades the same students
obtained at a later date. Their scores and grades are presented below. Determine
the validity of the test.
Grade (X)   Test (Y)      XY       X²      Y²
    89         40        3560     7921    1600
    85         37        3145     7225    1369
    90         45        4050     8100    2025
    79         25        1975     6241     625
    80         27        2160     6400     729
    82         35        2870     6724    1225
    92         41        3772     8464    1681
    87         38        3306     7569    1444
    81         29        2349     6561     841
    84         37        3108     7056    1369
Σ  849        354       30295    72261   12908

r = [nΣXY − (ΣX)(ΣY)] / √{[nΣX² − (ΣX)²][nΣY² − (ΣY)²]}

r = [10(30,295) − (849)(354)] / √{[10(72,261) − (849)²][10(12,908) − (354)²]}
r = 2,404 / √[(1,809)(3,764)]
r ≈ 0.92

Analysis: A 0.92 coefficient of correlation indicates that his test has high predictive
validity.

4. Construct validity. A type of validation that refers to the extent to
which a test measures a theoretical, unobservable quality (a construct), such as
intelligence, math achievement, performance anxiety, and the like, over a period of
time and on the basis of gathered evidence. It is established through intensive study of the
test or measurement, using convergent/divergent validation and factor analysis
(Gabuyo, 2012).
a. Congruent validity is a type of construct validation wherein a test has a high
correlation with another test that measures the same construct.
b. Divergent validity is a type of construct validation wherein a test has a low
correlation with a test that measures a different construct.
c. Factor analysis is another method of assessing the construct validity of a test
using complex statistical procedures.
Construct validity is the extent to which a test measures a theoretical trait. This
involves such tests as those of understanding and interpretation of data (Calmorin, 2004).

Sample classroom illustration (Calmorin, 2004):


A teacher might want to determine whether an educational program increases artistic ability
among preschool children. Construct validity here is a measure of whether the assessment
actually measures artistic ability, a somewhat abstract construct.
TASK 5: COMPUTATION ON PREDICTIVE VALIDITY
Mrs. Reyes wants to know the predictive validity of her test administered in the
previous year by correlating the scores with the grades the same students obtained at a
later date. Their scores and grades are presented below.
Grade (X)   Test (Y)      XY       X²      Y²
    78         39        3042     6084    1521
    83         49        4067     6889    2401
    90         59        5310     8100    3481
    79         33        2607     6241    1089
    85         34        2890     7225    1156
    86         58        4988     7396    3364
    83         27        2241     6889     729
    79         54        4266     6241    2916
    80         60        4800     6400    3600
    82         45        3690     6724    2025
Σ  825        458       37901    68189   22282

r = [nΣXY − (ΣX)(ΣY)] / √{[nΣX² − (ΣX)²][nΣY² − (ΣY)²]}

r = [10(37,901) − (825)(458)] / √{[10(68,189) − (825)²][10(22,282) − (458)²]}
r = 1,160 / √[(1,265)(13,056)]
r ≈ 0.29

Analysis: A 0.29 coefficient of correlation indicates that her test has low predictive validity.

TASK 6: COMPUTATION ON CONCURRENT VALIDITY


A colleague has just developed a test to measure the mathematics content you both
teach. Her test takes 30 minutes less to complete than the test you have used in the
past, which is a major advantage. You decide to evaluate the new test by giving it and the
old test to the same class of students. Using the following data, determine if the new test
has concurrent validity.
Scores on New Test (X)   Scores on Old Test (Y)     XY      X²      Y²
          6                       48               288      36    2304
          6                       44               264      36    1936
          8                       42               336      64    1764
         12                       34               408     144    1156
         14                       32               448     196    1024
         16                       31               496     256     961
         18                       28               504     324     784
         18                       25               450     324     625
         22                       23               506     484     529
         25                       22               550     625     484
Σ       145                      329              4250    2489   11567

r = [nΣXY − (ΣX)(ΣY)] / √{[nΣX² − (ΣX)²][nΣY² − (ΣY)²]}

r = [10(4,250) − (145)(329)] / √{[10(2,489) − (145)²][10(11,567) − (329)²]}
r = −5,205 / √[(3,865)(7,429)]
r ≈ −0.97

Analysis: A −0.97 coefficient of correlation indicates that the new test has no concurrent
validity; scores on the new test are inversely related to scores on the old test.

References

Asaad, Abubakar S. (2004). Measurement and evaluation: Concepts and application (3rd ed.).
Sampaloc, Manila: Rex Book Store, Inc.
Calmorin, Laurentina. (2004). Measurement and evaluation (3rd ed.). Mandaluyong City:
National Book Store, Inc.
Cronbach, L. J. (1970). Essentials of psychological testing. New York: Harper & Row. Cited in
J. O. O. Abiri, Elements of evaluation, measurement and statistical techniques in
education. Ilorin: Library and Publication Committee.
Gabuyo, Y. A. (2012). Assessment of learning I (textbook and reviewer). Sampaloc,
Manila: Rex Book Store, Inc.
Oriondo, L. (1984). Evaluating educational outcomes. Manila.
Raagas, Ester L. (2010). Measurement (assessment) and evaluation: Concept and application
(3rd ed.). Kauswagan, Cagayan de Oro City.
