Assessment of Student Learning 1: Cognitive Learning


List of Tables and Figures + ix

Preface + xi

Chapter 1 NATURE AND PURPOSES OF ASSESSMENT + 1
Rationale + 1
Definition of Terms + 2
Classroom Assessment Defined + 4
Four Essential Components of Implementing Classroom Assessment + 4
Purposes and Functions of Assessment + 4
Importance of Assessment + 6
Scope of Assessment + 6
Principles of Assessment + 6
Recent Trends in Classroom Assessment + 9
Non-Testing + 9
Portfolio + 10
Roles of Assessment in Making Instructional Decisions + 13
Commonly Used Assessments in the Classroom + 13
Norm- and Criterion-Referenced Interpretation + 14
Comparison of Norm- and Criterion-Referenced Assessments + 15
Nature of Measurement + 15
Review Exercises + 16

Chapter 2 PRINCIPLES OF HIGH-QUALITY ASSESSMENT + 17
Characteristics of High-Quality Assessments + 17
Productive Uses of Tests + 22
Unproductive Uses of Tests + 23
Classifications of Tests + 24
Other Types of Tests + 25
Review Exercises + 27

Chapter 3 SOCIAL, LEGAL, AND ETHICAL IMPLICATIONS OF TESTS + 28
Criticisms of Testing + 28
Ethical Testing Principles + 30
Ethical Testing Practices + 31
Unethical Testing Practices + 32
Review Exercises + 33

Chapter 4 FACTORS INFLUENCING TEST CONSTRUCTION AND TEST PERFORMANCE + 34
Factors that Influence Test Construction + 34
Extraneous Factors that Influence Test Performance + 37
Review Exercises + 39

Chapter 5 ESTABLISHING LEARNING TARGETS + 40
Purposes of Instructional Goals and Objectives + 40
Learning Targets + 42
Types of Learning Targets + 42
Bloom's Taxonomy of Learning Domains + 43
Review Exercise + 49

Chapter 6 PREPARATION OF CLASSROOM ASSESSMENT + 50
Planning the Teacher-Made Test + 50
Steps in Classroom Testing and Assessment + 52
Writing the Objective Short-Response Test + 53
Table of Specifications (TOS) + 54
How to Determine the Weights + 55
Review Exercises + 57

Chapter 7 DEVELOPMENT OF CLASSROOM ASSESSMENT + 58
Multiple-Choice Test + 58
Binary Item Test (True or False Test) + 65
Matching-Type Test + 69
Completion or Short-Answer Test + 72
Cloze Test + 74
Essay Test + 75
Oral Question + 80
Review Exercises + 84

Chapter 8 ITEM ANALYSIS + 85
Procedures in Item Analysis + 85
Computation of the D Value (Index of Discrimination) + 87
Computation of the P Value (Index of Difficulty) + 88
Interpreting Test Scores + 88
Review Exercise + 91

Chapter 9 MEASURES OF CENTRAL LOCATION AND OTHER POINT MEASURES + 92
Overview + 92
Arithmetic Mean + 93
Median + 97
Mode + 101
Comparison of the Three Indices of Central Location + 102
Point Measures + 104
Review Exercises + 111

Chapter 10 MEASURES OF VARIATION + 113
Overview + 113
Range + 114
Average Deviation + 114
Quartile Deviation or Semi-Interquartile Range + 117
Standard Deviation and Variance + 119
Review Exercises + 123

Chapter 11 SIMPLE CORRELATION + 126
Overview + 126
Pearson Product-Moment Correlation Coefficient (rxy) + 128
Spearman Rank-Order Correlation Coefficient (rs) + 131
Point Biserial Coefficient of Correlation (rpbis) + 132
Phi Correlation Coefficient (φ) + 134
Review Exercises + 136

Chapter 12 GRADING AND REPORTING PRACTICES + 138
Nature of Grades/Marks + 139
Functions of Grades/Marks + 139
Purposes of Grades/Marks + 140
Types of Grades/Marks + 140
Advantages of Grades/Marks + 141
Disadvantages of Grades/Marks + 141
Common Grading-Related Problems + 142
Averaging Scores to Determine a Grade + 142
Use of Zeroes + 142
Lowering Grades Because of Behavioral Infractions + 142
Guidelines for Effective Grading + 143
Criteria for a Marking-Reporting System + 143
Modes of Computing Final Grades + 143
Review Exercises + 145

Glossary + 147

References + 151

Index + 157

About the Authors


Chapter 2

Principles of High-Quality Assessment

OBJECTIVES

At the end of the chapter, the learners are expected to:
• identify what constitutes high-quality assessments;
• list down the productive and unproductive uses of tests; and
• classify the various types of tests.

Characteristics of High-Quality Assessments

High-quality assessments provide results that demonstrate and improve targeted student learning. They also inform instructional decision making. To ensure the quality of any test, the following criteria must be considered:
1. Clear and Appropriate Learning Targets
When designing a good assessment, start by asking if the learning targets are on the right level of difficulty for the students and if there is an adequate balance among the different types of learning targets.
A learning target is a clear description of what students know and are able to do. Learning targets are categorized by Stiggins and Conklin (1992) into five:
a. Knowledge learning target is the ability of the student to master a substantive subject matter.
b. Reasoning learning target is the ability to use knowledge and solve problems.
c. Skill learning target is the ability to demonstrate achievement-related skills like conducting experiments and operating computers.
d. Product learning target is the ability to create achievement-related products such as written reports, oral presentations, and art products.
e. Affective learning target is the attainment of affective traits such as attitudes, values, interests, and self-efficacy.

2. Appropriateness of Assessment Methods
Once the learning targets have been identified, match them with their corresponding methods by considering the strengths of various methods in measuring different targets.

Table 2.1
MATCHING LEARNING TARGETS WITH ASSESSMENT METHODS

Target      Objective   Essay   Performance-Based   Oral Question   Observation   Self-Report
Knowledge       5         4             3                 4              3             2
Reasoning       2         5             4                 4              2             2
Skills          1         3             5                 2              5             3
Products        1         1             5                 2              4             4
Affect          1         2             4                 4              4             5

(Higher numbers indicate better matches: 5 = excellent, 1 = poor.)

3. Validity
This refers to the extent to which the test serves its purpose or the efficiency with which it intends to measure. Validity is a characteristic that pertains to the appropriateness of the inferences, uses, and results of the test or any other method utilized to gather data.
There are factors that influence the validity of the test, namely: appropriateness of test items, directions, reading vocabulary and sentence structures, pattern of answers, and arrangement of items.

a. How Validity is Determined
Validity is always determined by professional judgment. However, there are different types of evidence to use in determining validity. The following major sources of information can be used to establish validity:
i. Content-related validity determines the extent to which the assessment is representative of the domain of interest. Once the content domain is specified, review the test items to be assured that there is a match between the intended inferences and what is on the test. A test blueprint or table of specifications will help further delineate what targets should be assessed and what is important from the content domain.

ii. Criterion-related validity determines the relationship between an assessment and another measure of the same trait. It provides validity by relating an assessment to some valued measure (criterion) that can either provide an estimate of current performance (concurrent criterion-related evidence) or predict future performance (predictive criterion-related evidence).
iii. Construct-related validity determines whether an assessment is a meaningful measure of an unobservable trait or characteristic like intelligence, reading comprehension, honesty, motivation, attitude, learning style, and anxiety.
iv. Face validity is determined on the basis of the appearance of an assessment: whether, based on a superficial examination of the test, there seems to be a reasonable measure of the objectives of a domain.
v. Instructional-related validity determines to what extent the domain of content in the test is taught in class.
b. Test Validity Enhancers
The following are suggestions for enhancing the validity of classroom assessments:
i. Prepare a table of specifications (TOS).
ii. Construct appropriate test items.
iii. Formulate directions that are brief, clear, and concise.
iv. Consider the reading vocabulary of the examinees. The test should not be made up of jargon.
v. Make the sentence structure of your test items simple.
vi. Never have an identifiable pattern of answers.
vii. Arrange the test items from easy to difficult.
viii. Provide adequate time for students to complete the assessment.
ix. Use different methods to assess the same thing.
x. Use the test only for intended purposes.
4. Reliability
This refers to the consistency with which a student may be expected to perform on a given test. It means the extent to which a test is dependable, self-consistent, and stable.
There are factors that affect test reliability. These include (1) the scorer's inconsistency because of his/her subjectivity, (2) limited sampling because of the incidental inclusion and accidental exclusion of some materials in the test, (3) changes in the individual examinee himself/herself and his/her instability during the examination, and (4) the testing environment.

a. How Reliability is Determined
There are various ways of establishing test reliability. Among these are the length of the test, the difficulty of the test, and the objectivity of the scorer. There are also four methods of estimating the reliability of a good measuring instrument:
i. Test-Retest Method or Test of Stability. The same measuring instrument is administered twice to the same group of subjects, and the scores of the first and second administrations are compared by a correlation coefficient. The limitations of this method are: memory effects may operate when the time interval is short; likewise, factors such as unlearning and forgetting may occur when the time interval is long, resulting in a low correlation of the test. Another limitation of the method is that other varying environmental conditions may affect the correlation of the test regardless of the time interval separating the two administrations.
ii. Parallel-Forms Method or Test of Equivalence. Parallel or equivalent forms of a test may be administered to the same group of subjects and the paired observations correlated. The two forms of the test must be constructed in such a manner that the content, type of item, difficulty, instructions for administration, and several others are similar but not identical.
iii. Split-Half Method. The test in this method may only be administered once, but the test items are divided into two halves. The common procedure is to divide a test into odd and even items. The two halves of the test must be similar but not identical in content, difficulty, means, and standard deviations.
iv. Internal-Consistency Method. This method is used with psychological tests, which are constructed of dichotomously scored items. The testee either passes or fails an item. The reliability coefficient in this method is obtained with the Kuder-Richardson formula.
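For reference, the computations behind the last two methods are standard and worth sketching here; the chapter names the formulas but does not print them (the correlations involved are typically Pearson's r, treated in Chapter 11). With the split-half method, the correlation between the two halves, $r_{hh}$, is usually stepped up to a full-length reliability estimate with the Spearman-Brown formula, and the internal-consistency method commonly uses the Kuder-Richardson formula KR-20:

$$r_{tt} = \frac{2\,r_{hh}}{1 + r_{hh}} \qquad\qquad KR_{20} = \frac{k}{k-1}\left(1 - \frac{\sum p\,q}{\sigma_t^{2}}\right)$$

where $k$ is the number of items, $p$ is the proportion of examinees passing an item, $q = 1 - p$, and $\sigma_t^{2}$ is the variance of the total scores.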
b. The Concept of Error in Assessment
The concept of error in assessment is critical to the understanding of reliability. Conceptually, whenever something is assessed, an observed score or result is produced. This observed score is the product of what the true score or real ability or skill is plus some degree of error:

Observed Score = True Score + Error

Thus, an observed score can be higher or lower than the true score, depending on the nature of the error. The sources of error are reflected in Table 2.2.

Table 2.2
SOURCES OF ERROR

Internal (within the student)        External (within the assessment)
Health                               Directions
Mood                                 Luck
Motivation                           Item ambiguity
Test-taking skills                   Heat in the room
Anxiety                              Lighting
Fatigue                              Sample of items
General ability                      Observer differences and bias
                                     Test interpretation and scoring
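A note on the observed-score equation above: in classical test theory (a standard formulation, not spelled out in this chapter), the same model is written as a decomposition of score variance, and reliability is interpreted as the proportion of observed-score variance that is true-score variance:

$$\sigma_X^{2} = \sigma_T^{2} + \sigma_E^{2} \qquad\qquad r_{tt} = \frac{\sigma_T^{2}}{\sigma_X^{2}}$$

The smaller the error variance contributed by sources such as those in Table 2.2, the closer the reliability coefficient comes to 1.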
c. Test Reliability Enhancers
The following should be considered in enhancing the reliability of classroom assessments:
i. Use a sufficient number of items or tasks. A longer test is more reliable.
ii. Use independent raters or observers who can provide similar scores to the same performances.
iii. Make sure the assessment procedures and scoring are objective.
iv. Continue the assessment until the results are consistent.
v. Eliminate or reduce the influence of extraneous events or factors.
vi. Assess the difficulty level of the test.
vii. Use shorter assessments more frequently rather than a few long assessments.

5. Fairness
This pertains to the intent that each question should be made as clear as possible to the examinees and that the test is absent of any bias. An example of a bias in an intelligence test is an item about a person or object that has not been part of the cultural and educational context of the test taker. In mathematical tests, for instance, the reading difficulty level of an item can be a source of unfairness. Identified elements of fairness are the student's knowledge of learning targets before instruction, the opportunity to learn, the attainment of prerequisite knowledge and skills, unbiased assessment tasks and procedures, and teachers who avoid stereotypes.

6. Positive Consequences
These enhance the overall quality of assessment, particularly the effect of assessments on the students' motivation and study habits.

7. Practicality and Efficiency
Assessments need to take into consideration the teacher's familiarity with the method, the time required, the complexity of administration, the ease of scoring and interpretation, and the cost, to be able to determine an assessment's practicality and efficiency. Administrability requires that a test must be administered with ease, clarity, and uniformity. Directions must be specific so that students and teachers will understand what they must do exactly. Scorability demands that a good test should be easy to score. Test results should readily be available to both students and teachers for remedial and follow-up measures.

Productive Uses of Tests

Learning Analysis. Tests are used to identify the reasons or causes why students do not learn and the solutions to help them learn. Ideally, a test should be designed to determine what students do not know so that the teachers can take appropriate actions.

Improvement of Curriculum. Poor performance in a test may indicate that the teacher is not explaining the material effectively, the textbook is not clear, the students are not properly taught, and the students do not see the meaningfulness of the materials. When only a few students have difficulties, the teacher can address them separately and extend special help. If the entire class does poorly, the curriculum needs to be revised or special units need to be developed for the class to continue.

Improvement of Teacher. In a reliable grading system, the class average is the grade the teacher has earned.


Improvement of Instructional Materials. Tests measure how effective instructional materials are in bringing about intended changes.

Individualization. Effective tests always indicate differences in students' learning. These can serve as bases for individual help.

Selection. When enrollment opportunity or any other opportunity is limited, a test can be used to screen those who are more qualified.

Placement. Tests can be used to determine to which category a student belongs.

Guidance and Counseling. Results from appropriate tests, particularly standardized tests, can help teachers and counselors guide students in assessing future academic and career possibilities.

Research. Tests can be feedback tools to find effective methods of teaching and learn more about students, their interests, goals, and achievements.

Selling and Interpreting the School to the Community. Effective tests help the community understand what the students are learning, since test items are representative of the content of instruction. Tests can also be used to diagnose general schoolwide weaknesses and strengths that require community or government support.

Identification of Exceptional Children. Tests can reveal exceptional students inside the classroom. More often than not, these students are overlooked and left unattended.

Evaluation of Learning Program. Ideally, tests should evaluate the effectiveness of each element in a learning program, not just blanket the information of the total learning environment.

Unproductive Uses of Tests

Grading. Tests should not be used as the only determinants in grading a student. Most tests do not accurately reflect a student's performance or true abilities. Poor performance on a certain task may not only indicate failure but lack or absence of the needed foundations as well.

Labeling. It is often a serious disservice to label a student, even if the label is positive. Negative labels may lead the students to believe the label and act accordingly. Positive labels, on the other hand, may lead the students to underachieve or avoid standing out as different or become overconfident and not exert effort anymore.

Threatening. Tests lose their validity when used as disciplinary measures.

Unannounced Testing. Surprise tests are generally not recommended. More often than not, they are the scapegoats of teachers who are unprepared, upset by an unruly class, or reprimanded by superiors. Studies reveal that students perform at a slightly higher level when tests are announced; unannounced tests create anxiety on the part of the students, particularly those who are already fearful of tests; unannounced tests do not give students sufficient time to prepare; and surprise tests do not promote efficient learning or higher achievement.

Ridiculing. This means using tests to deride students.

Tracking. Students are grouped according to deficiencies as revealed by tests without continuous reevaluation, thus sorting them into categories.

Allocating Funds. Some schools exploit tests to solicit for funding.

Classifications of Tests

Throughout the years, psychologists and educators have cooperatively constructed tests to measure the students' performance with greater accuracy. These tests may be classified according to:
1. Administration
a. Individual - given orally and requires the examinee's constant attention, since the manner of answering may be as important as the score. An example of this is the Wechsler Scales, three individually administered intelligence scales. Another is a PowerPoint presentation serving as a performance test in a speech class.
b. Group - used for measuring cognitive skills, such as achievement. Most tests in schools are considered group tests, where different test takers can take the tests as a group.
2. Scoring
a. Objective - independent scorers agree on the number of points an answer should receive, e.g., multiple choice and true or false.
b. Subjective - answers can be scored through various ways. These are then given different values by scorers, e.g., essay and performance tests.
3. Sort of Response being Emphasized
a. Power - allows examinees a generous time limit to be able to answer every item. The questions are difficult, and this difficulty is what is emphasized.
b. Speed - with severely limited time constraints, but the items are easy and only a few examinees are expected to make errors.
4. Types of Response the Examinees must Make
a. Performance - requires students to perform a task. This is usually administered individually so that the examiner can count the errors and measure the time the examinee takes to perform each task.
b. Paper-and-pencil - examinees are asked to write on paper.

5. What is Measured
a. Sample test - a limited representative test designed to measure the total behavior of the examinee, although no test can exhaustively measure all the knowledge of an individual.
b. Sign test - a diagnostic test designed to obtain diagnostic signs to suggest that some form of remediation is needed.
6. Nature of the Groups being Compared
a. Teacher-made test - for use within the classroom and contains the subject being taught by the same teacher who constructed the test.
b. Standardized test - constructed by test specialists working with curriculum experts and teachers.

Other Types of Tests

1. Mastery tests measure the level of learning of a given set of materials and the level attained.
2. Discriminatory tests distinguish the differences between students or groups of students. They indicate the areas where students need help.
3. Recognition tests require students to choose the right answer from a given set of responses.
4. Recall tests require students to supply the correct answer from their memory.
5. Specific recall tests require short responses that are fairly objective.
6. Free recall tests require students to construct their own complex responses. There are no right answers, but a given answer might be better than another.
7. Maximum performance tests require students to obtain the best score possible.
8. Typical performance tests measure the typical or usual or average performance.
9. Written tests depend on the ability of the students to understand, read, and write.
10. Oral examinations depend on the examinees' ability to speak. Logic is also required.
11. Language tests require instructions and questions to be presented in words.
12. Non-language tests are administered by means of pantomime, painting, or signs and symbols, e.g., Raven's Progressive Matrices and the Abstract Reasoning Tests.
13. Structured tests have very specific, well-defined instructions and expected outcomes.
14. Projective tests present ambiguous stimuli or questions designed to elicit highly individualized responses.
15. Product tests emphasize only the final answer.
16. Process tests focus on how the examinees attack, solve, or work out a problem.
17. External reports are tests where a ratee is evaluated by another person.
18. Internal reports are self-evaluations.
19. Open book tests depend on one's understanding and ability to express one's ideas and evaluate concepts.
20. Closed book tests depend heavily on the memory of the examinee.
21. Non-learning format tests determine how much information the students know.
22. Learning format tests require the students to apply previously learned materials.
23. Convergent format tests purposely lead the examinees to one right answer.
24. Divergent format tests lead the examinees to several possible answers.
25. Scale measurements distribute ratings along a continuum.
26. Test measurements refer to items that are dichotomous, either right or wrong, but not both.
27. Pretests measure how much is known about a material before it is presented.
28. Posttests measure how much has been learned after a learning material has been given.
29. Sociometrics reveal the interrelationships among members of a social structure of a group.
30. Anecdotal records reveal episodes of behavior that may indicate a profile of the students.

Table 2.3
COMPARISON BETWEEN TEACHER-MADE TESTS AND STANDARDIZED TESTS

Directions for administration and scoring
Teacher-made test: Usually, no uniform directions are specified.
Standardized test: Specific instructions standardize the administration and scoring procedures.

Sampling content
Teacher-made test: Both content and sampling are determined by the classroom teacher.
Standardized test: Content is determined by curriculum and subject matter experts. It involves intensive investigations of existing syllabi, textbooks, and programs. Sampling of content is done systematically.

Construction
Teacher-made test: May be hurriedly done because of time constraints; often no test blueprints, item tryouts, item analysis or revision; quality of test may be quite poor.
Standardized test: It uses meticulous construction procedures that include constructing objectives and test blueprints, employing item tryouts, item analysis, and item revisions.

Norms
Teacher-made test: Only local classroom norms are available.
Standardized test: In addition to local norms, standardized tests typically make available national, school district, and school building norms.

Purpose and use
Teacher-made test: Best suited for measuring particular objectives set by the teacher and for intraclass comparisons.
Standardized test: Best suited for measuring broader curriculum objectives and for interclass, school, and national comparisons.

Review Exercises
1. Explain why validity implies reliability but not the reverse.
2. Generate some other qualities that you believe contribute to making good assessments.
3. List down your personal experiences of unfair assessments.



Chapter 3

Social, Legal, and Ethical Implications of Tests

OBJECTIVES

At the end of the chapter, the learners are expected to:
• evaluate the soundness of the criticisms of testing; and
• cite the testing principles a teacher must observe.

Criticisms of Testing

In spite of the advantages of testing, still some quarters hurl some serious allegations against its use. Below are some criticisms:

Invasion of Privacy. Whether tests represent an invasion of privacy or not depends in part on how they are used. For sure, there is no invasion of privacy if the subjects were told how the test results will be used and if the subjects volunteered. When children are involved, the invasion of privacy is somewhat more complex. Legally, the school's function is to act as parents while the child is at school. In such cases, teachers, as parent substitutes, can require the students to take tests whose objectives are agreed upon by the school board. On the test administrators rests the responsibility of interpreting and using the test prudently. Anastasi and Urbina (1992) offer some factors to consider to observe the right to privacy:
1. Purpose of testing
2. Relevance of information
3. Informed consent
4. Confidentiality

Chase (1976) derived the following implications for teaching from the 1974 Family Educational Rights and Privacy Act (Buckley Amendment):
1. Teachers cannot post the grades of students.
2. Teachers cannot display the works of their students as examples of poor or excellent work.
3. Teachers are not allowed to let the students grade or correct any other student's paper.
4. Teachers cannot ask students to raise their hands to determine if they answered correctly or incorrectly to any item.
5. Teachers cannot distribute test papers in a manner that will permit other students to observe the scores of others.
6. Teachers cannot assume that letters of recommendation requested by students will be kept confidential.

Creation of Anxiety and Interference in Learning. Although a common criticism of testing, this is not true in all cases. Feldhusen's (1964) study, for instance, revealed that 80% of his respondents stated that tests helped them learn more. Another study by Fiske (1967) corroborates the findings of Feldhusen. Rudman (1977) found that teachers and administrators favor giving out tests, especially achievement tests. How tests affect students was studied by Kirkland (1971). In his study, the following conclusions regarding tests were reached based on the relationships between anxiety and learning:
1. A mild degree of anxiety usually facilitates learning, whereas a higher level of anxiety hinders learning in most cases.
2. The less able student incurs a higher level of anxiety from testing than capable ones.
3. Being familiar with the type of test to be administered reduces the level of anxiety.
4. Highly anxious students do better than less anxious ones on tests measuring rote recall. They perform less well, however, on tests requiring flexibility in thought.
5. Test anxiety increases with grade levels.
6. Although there appears to be no relationship between sex and anxiety among elementary school children, junior high school girls indicate that they experience more anxiety than boys at comparable levels.

Anxiety in testing can be demonstrated in nail biting, pencil tapping, or squirming. The following suggestions may help motivate students to prepare for and take examinations without creating unnecessary anxiety:
1. Emphasize tests for diagnosis and mastery rather than as means of punishing students who fail to live up to the expectations of the teachers or parents.
2. Avoid a "sudden death examination," in which passing or failing is a function of one test only.
3. Write personal notes on each examination paper encouraging students to keep up the good work or exert more effort.

4. Be sure each item has "face validity." Items should measure some important aspects of life as perceived by the students.
5. Avoid unannounced examinations.
6. Schedule personal conferences with students as often as possible to reduce anxiety and redirect learning.
7. Emphasize more on strengths, not on deficiencies.
8. Do not emphasize competitive examinations when some students are unable to compete.
9. Treat each student's grades confidentially.
10. Allow students to choose among activities of equal instructional value.

Permanent Categorization of Students (Tracking). The notion that measurement instruments are infallible and that the performance they measure is fixed and, therefore, unchangeable, has led teachers to categorize students permanently.

Penalizing Bright and Creative Students. One of the most consistent criticisms is that tests penalize bright and creative students for their unconventional responses. Of course, there are some vague items, but these are relatively few.

Discrimination against Minority Students. Many minority students do poorly on tests because they have not developed the skills, knowledge, and attitudes required to succeed. In this case, it is not reasonable to criticize the tests. Rather, the schools should be criticized for failing to give these students the needed preparation.

Measurement of Limited and Superficial Aspects of Behavior. A common criticism of tests is that they cannot measure important human traits.
Ethical Testing Principles

Tests presume an ethical, responsible attitude on the part of the examiner and a desire to cooperate on the part of the students. As in all social interactions, mutual trust and respect must be developed. Relevant to testing are the following ethical principles:

1. Confidentiality. This regulates or controls legal or lawful access. Who shall have access to the test results? Considerations such as the security of test content, the hazards of misunderstanding test results, and the need of various persons to know the results influence the answer in particular situations. Individuals have the right to know their test results, and for this purpose, the results should be free of technical labels, readily understandable, and oriented toward immediate testing. In the case of minors, one must also consider the right of parents and guardians to be informed about the test results. Third persons are usually not given access to records unless consent has been given by the owner of the record. However, confidentiality can be breached in the following instances:
a. When there is clear, immediate danger to the student and the teacher informs other professionals or authorities
b. If the students will benefit by talking to other professionals concerned with a case
c. If the student gives permission for confidential communications to be reported to others
2. Test Security. Tests are professional instruments and, as such, their dissemination is restricted to those with the technical competence to use them properly. No standardized tests should be left unsecured.
3. Test Scores and Interpretations. These should only be available to individuals who are qualified to use them. Test results should be interpreted to parents in a manner that will ensure against misuse and misinterpretations.
4. Test Publication. Standardized tests should provide a manual or technical handbook describing how the test can be used most effectively and who may use it. Advertisements about test results should be factual and descriptive but not emotional or persuasive.
Ethical Testing Practices

1. It is both ethical and advantageous to inform students in advance that they are about to take a test and to tell them something about the nature of the test. They should also be told of the advantages of taking the test and where the results would be used.
2. Teachers should explain the mechanics of taking a test and practice the students on how to fill out an answer sheet (i.e., making heavy marks, and erasing marks completely). It is, however, essential that the teacher does not make the questions available.
3. It is perfectly proper to try to motivate students to do as well as they can as long as they are not threatened or made anxious about their performance.
4. It is essential that all testing materials and results be kept secured before, during, and after testing.
5. It is ethical to combine classes for testing as long as there are adequate proctors to safeguard the test and make sure that the students are following instructions. The ideal ratio is one proctor to a maximum of 30 students.
6. Once an examination has been administered and scored, it is permissible for the teachers to examine results and determine the areas of the students' weaknesses. It is proper to modify the curriculum as a result of standardized achievement testing only if the teacher can demonstrate that the change conforms to overall school objectives. Modifying the curriculum solely for the purpose of increasing scores is unethical.

Unethical Testing Practices

1. To tutor students on the specific subject matter of an expected examination. This destroys the standardized procedures of test administration and distorts the meaning provided by the scores. Scores on standardized tests should only be interpreted when the tests are given exactly the same way they are to the norm or comparison group. It is unethical for teachers to examine the content of standardized tests to determine what is to be taught in their classrooms.
2. To use or give a test item from any part of the test in which only a word or phrase has been changed.
3. To construct or use any practice form that is similar to the actual test items to reflect the situations, options, or conditions of the original questions.
4. To copy and/or distribute the test before the scheduled date of the test.
5. For teachers to use standardized tests or mandated testing programs for their examinations. Similarly, it is unethical to use standardized tests as instructional materials.
6. To exclude some students from participating in tests, even though the teachers expect them to do poorly. Nor is it ethical to exclude the whole class if they are low achievers.
7. To allow students to use false records, identification papers, unauthorized identification cards, or computer access to official school documents.
8. To neglect the instruction of one student just to increase the test scores of other pupils. The goal of education is to maximize the achievement of each pupil, not the attainment of high test scores. In like manner, it is unethical to grant any advantage to one student over another to increase score in a given test.
9. To alter the directions, time limits, and scoring procedures.
10. To try to improve student performance by developing items parallel to those on standardized tests.
11. To create anxiety and rivalry about standardized tests among students and between classes and schools. Examinations are not contests and should not be treated as such.
12. To accept gratuities, gifts or favors that might impair or appear to influence professional decisions or actions regarding student testing and scores.
13. To disclose information about students obtained in the course of testing, unless disclosure serves a compelling professional purpose or is required by the school.
Review Exercises
1. Justify whether the exemption of students from taking examinations is an ethical or unethical testing practice.
2. Determine if the privacy of students is invaded through art works, compositions, chalkboard computations, and oral examinations.
3. Suggest other approaches on how students can be motivated to take examinations.
Chapter 4

Factors Influencing Test Construction and Test Performance

OBJECTIVES

At the end of the chapter, the learners are expected to:
• explain the factors that influence test construction; and
• identify the extraneous factors that affect students' performance in the cognitive type of test.

Factors that Influence Test Construction

There are several factors that influence the type of test a teacher has to construct. These are:

1. The Function of the Test in the Instructional Process
Before venturing into test construction, a teacher has to clarify the purpose for testing. Table 4.1 presents the various test purposes that teachers have to consider. As presented, the time of testing in the instructional process significantly influences the very purpose of testing and the features of the test to be made, such as item difficulty level and item sampling.

" Table 4.t


BASIC TYPES OF CLASSROOM TESTS ACCORDING TO FUNCTION
AND TIME OF ADMINISTRATION (Airasian & Madaus, 1972)

ffi&
f;ffiffis&sffi
strffixx
W
xffiffi *,itt$i[g*$ :rPlA&iiiiei:iii fiaririi*s[&.4 stit3ll*ii&trffi
Focus of Prerequisite Course Predefined Most Course or unit
measurement entry skills cr unit segment of common
objectives instruction learning
errors
Nature of Limited Broad Limited Limited Broad sample of
sample sample of sample of all sample of sample of all objectives
selected skills obiectives selected skills selected skills

34
(Table 4.t Basic Types of Clossroom Tests continued)

rii:.::i:lsxss,. E',xllil
'*:*"EU$ffi",'.^l
:{I{5f,&&}CXfi }li[:ill ffis*ffii
?st ruxcmox !ii*€a&i&t :,,ptiidi{iii€iiti &ii*llt&ii& &ffi$*tm ill$ii&*rii*i I
Item difficulty Typically has Typically has Varies Typicaliy has Typically has wide
low level of wide range with the low level of range of difficulty
?ST
difficulty of difficulty segment of difHculty
instruction
tc€ Time of Beginning Beginning Periodically As needed End of course or
administration of course or of course or during during unit
unit unit instruction instruction
Use of results Remedy Instructional lmprove Remedy Assign
entry planning and and direct errors grades, certify
deficiencies advanced learning related to accomplishment,
or placement through persistent evaluate teaching
assignment ongoing learning
'r to learning feedback difficulties
group
(Airasian & Madaus, 1972, as cited in Linn and Gronlund,
2000)

2. Testing Frequency
Testing frequency depends on the function the test is to serve. In most cases, single testing can suffice for selection, placement, and summative evaluation decisions. For diagnosis, formative evaluation, and motivation, frequent testing is recommended and necessary. If the purpose is to provide feedback, frequent testing can help the teacher guide and direct the students to the right track, particularly during the early stage of learning. Frequent testing helps the student acquire knowledge as efficiently as possible. The tests thus serve as learning drills. Frequent testing can also motivate the learners, particularly those with lower abilities. Although it has some disadvantages, the foremost of which is that it takes much of the instruction time, the decision still lies with the teacher as to which is to be prioritized.
to be prioritized.
3. The Use of open-Book and closed-Book Examinations
An open-book test permits the sfudents to use books or notes
during examinations. This is advisable when the teacher emphasizes
some of the higher objectives in Bloom's Taxonomy. Rather than
spend
the bulk of the time memorizing, it can be devotei instead to appiying
specific formula on information. The values of this type of test incfuA!
having sfudents apply rather than memorize information and use skills
in utilizing reference materials for a limited time. on the other hand, a
closed-book test is advisable and should be implemented when a test
emphasizes recall of information.

4. The Use of Take-Home Examinations
Take-home examinations are extensions of open-book exams. Their advantage is that the student can work at his/her leisure and at his/her most comfortable rate of speed using whatever reference is available. It is most useful when the student needs to resort to references not found in the classroom or even in the school. However, it has at least two disadvantages, namely, the difficulty of scoring the work objectively, and the possibility that the student might not do his/her own work. If the teacher emphasizes the exercise as a teaching device to indicate student strengths, weaknesses, and methods of improvement rather than grading, the use of take-home tests is maximized.

5. Mode of Item Presentation
Test items can be presented in various modes: oral, written on the board, mimeographed, and projected on screen. Each mode has its own advantages and disadvantages. Oral presentation benefits those who have difficulty in reading, but those with unrecognized hearing handicaps are at the losing end. A multiple choice test is discouraged when students have to work at the same speed but are unable to review previously answered items; true or false, completion, and identification are best suited for this mode of presentation. Questions can be written to favor those with hearing difficulties, but the teacher also has to read aloud for those with reading impairments. The slide presentation of items can minimize expenditures, but again, individual paces can be overlooked. Another consideration is the order in which items should appear. Sax and Cromack (1996) found that arranging items from the easiest to the most difficult will yield higher scores than those presented from the most difficult to the easiest, randomly arranged, or placement of easy items among the difficult ones. They further stated that the advantage of easy-to-hard sequencing is a big help to the students when time is limited. However, when items are to be presented by content (spelling, vocabulary, math or social science) or by format (multiple choice, completion, true or false items), the spiral-omnibus format, i.e., the arrangement of different types of items in order of difficulty, should still be used. This can be done by giving all the easy items of any subject, followed by the next set of items with more complex objectives. However, in the case of achievement tests, items should be presented per subject area. It is not advisable to intermix them. As to the format of item presentation, those that will require less time should be presented first.

Extraneous Factors that Influence Test Performance

Maximum performance tests (achievement, intelligence, and aptitude) require the student to obtain the highest possible scores, while typical performance tests (attitude, interest, and personality inventories) call for the students to obtain scores without exerting much effort. The goal of cognitive measurement is to obtain an examinee's best and highest level of performance. The purpose of affective assessment is to assess an examinee's usual, representative, and typical behavior (Hopkins & Stanley, 1981).

An examinee's performance in any test is influenced by personal traits, knowledge, or proficiency. Extraneous factors, oftentimes unrecognized, likewise exert equal or stronger influences. These include the following:

1. Test Sophistication or Test-Wiseness
Test-wiseness is defined as the examinee's ability to use the characteristics and format of the test and/or the test-taking situation to increase his/her score. Examinees unfamiliar with testing usually perform more poorly than those who have already taken tests several times (test sophistication). Whether familiar or unfamiliar with the test at hand, a test-wise examinee exhibits the following test-taking behavior:
a. Pays careful attention to directions and asks examiners for clarifications when necessary
b. Has more than one writing tool ready
c. Works as quickly as possible without being reckless and paces his/her rate in relation to the time limit while making sure that he/she has enough time to answer each item
d. Guesses at items that will take a disproportionate amount of time and marks the items left unanswered so that they can be returned to if time still permits
e. Uses any remaining time to double check answers, particularly the doubtful ones
f. Answers all items, even if there is a need to guess some, if only right answers are scored or the penalty for guessing is simply a correction for chance (see the note after this list)
g. Uses deductive reasoning by eliminating incorrect and implausible options and choosing from among the remaining ones
h. Rejects options that imply the correctness of each other
i. Utilizes relevant information found in other items
j. Puts himself in the shoes of the test constructor and adopts the intended level of sophistication

k. Uses relevant and extraneous cues to help identify the correct option (the correct answer is more likely to be qualified or longer, represents a different degree of generalization, or is composed of textbook or stereotyped phraseology)
l. Tries to uncover the test constructor's tendencies to favor certain response positions or to include disproportions between true-or-false answers
m. Recognizes the use of specific determiners and clues from grammatical construction, e.g., articles, agreement, and parallelisms
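As a note on item (f) above: the usual correction-for-chance scoring rule (a standard convention; the chapter mentions but does not define it) is

$$S = R - \frac{W}{n - 1}$$

where $S$ is the corrected score, $R$ the number of right answers, $W$ the number of wrong answers, and $n$ the number of options per item. Under this rule, blind guessing gains nothing on average, which is why a test-wise examinee still answers every item.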

2. Practice
Several studies show that there is an improvement in scores when an examinee takes a subsequent test on a particular test or its parallel form. Practice testing helps the examinee improve his/her scores the next time he/she takes the same test or any of its parallel forms. The following are the generalizations on the effect of practice on cognitive tests:
a. The effects of practice are more pronounced in people with a limited educational background or experience with tests.
b. The effects are greater on speed tests.
c. The effects are greater on repeated tests than on parallel forms of tests.
d. The greater the interval between retests, the smaller the effect.
e. With other factors considered equal, the effects appear to be slightly greater for examinees of high mental ability.
f. The effects are almost negligible for a group of typical examinees.

3. Coaching
The effect of coaching is smaller compared to the effect brought about by practice, as long as the objective questions are reasonably straightforward.

4. Anxiety and Motivation
Motivated students score higher than those who are not. A little anxiety may be of help, but over-anxiety often leads to poor performance. When the test items are not intrinsically interesting, effects can be moderate, but if items are too ego-involving, the examinee may become over-anxious, leading to adverse effects on his/her performance.

5. Response Styles
According to Cronbach (1984), these are the test-taking habits that cause people with the same abilities to score differently on a test.
a. Speed vs. Accuracy Set - Some examinees work slowly but accurately while some work quickly but less cautiously. The former may consume much time on just a few items, but the chance of getting the answer correct for each item is greater. The latter, on the other hand, may cover almost all items, but the accuracy of answers is less. Ideally, a test should be timed in such a way that 90% of examinees complete the test.
b. Acquiescence Set - Cronbach (1984) found that in case of uncertainty between a true or false item, most examinees answer true. Metfessel found that most instructors have the tendency to include more true items on the test (in Cronbach, 1984). The acquiescence set allows more people to get credits, usually undeserved, for true items than false ones.
c. Positional-Preference Set - Most students who are ignorant of the answer to an item do not answer in a random manner, but base their answer on some patterns.
d. Option Length Set - Most examinees choose longer options, thinking that they are more justified, hence more correct than others.
e. Set to Gamble - The examinees resort to this set when the test is a multiple choice or a matching type.
:.;pical
Review Exercises
1. Cite the arguments for and against testing frequency.
2. Discuss the distinct features of a take-home examination.
3. Explain why the effects of practice on student's learning are more pronounced than those of coaching.


Chapter 5

Establishing Learning Targets

OBJECTIVES

At the end of the chapter, the learners are expected to:
• differentiate the types of learning targets;
• give concrete examples of the different learning targets; and
• discuss the taxonomies of learning objectives.

&
A good classroom assessment has clear, appropriate learning targets.
i Defining learning targetsloutcomes is the first step in good teaching. It is also
hi essential in the assessment of student learning.
ttt
Purposes of Instructional Goals and Objectives

Instructional goals and objectives provide direction for the instructional process by clarifying the intended learning outcomes. They also convey instructional intent to stakeholders such as students, parents, other personnel, and the public. Most importantly, they provide a basis for assessing student learning by describing the performance to be measured. Instructional goals and objectives are sometimes stated in terms of actions to be taken. Thus, there is a statement saying: "demonstrate to students how to operate computers." This statement clearly indicates the teaching activity. However, it does not clearly specify the intended learning outcomes and does not point very explicitly to the type of student assessment that would be most appropriate.

Educational goals are general statements of what students will know and be able to do. Goals are written to cover most of the instructional time, such as a unit, semester, or year, and indicate in broad terms what will be emphasized during that time period. Goals are essential because they reflect educational philosophies about what is important. They also provide a starting point for more specific learning objectives. By beginning with goals, the general outline can be validated by parents, teachers, and school officials.
An example of a goal is "know how to think critically, be good followers of school policies, and work with peers."

Instructional objectives are sometimes stated in terms of actions to be taken. Thus, statements like these abound: "the students will demonstrate how to use the microscope" and "the student will underline the subject and predicate of each sentence."

Statements such as the ones mentioned direct attention to the students and the types of performance they are expected to exhibit as a result of instruction. Thus, the focus shifts from the teacher to the students and from the learning experiences to the learning outcomes.

In stating instructional objectives, it is important to keep in mind that the concern is on the product of learning rather than the process of learning. This is not meant to suggest that the process is unimportant. Remember, the long-term instructional objective concerns the product.

Instructional objectives are usually relatively specific statements of the student performance that should be demonstrated at the end of an instructional unit. They are meant as intended learning outcomes. They should be stated in terms of specific, observable, and measurable student responses. These types of objectives are characterized by the use of action verbs, such as list, outline, distinguish, categorize, apply, subtract, synthesize, underline, and define. Highly precise behavioral objectives include the following criteria:
1. Audience. This is a description of the students who are expected to perform or demonstrate the behavior of a specific grade level.
2. Behavior. Specific behavior is indicated by action verbs.
3. Criterion. This is a description of the criteria used to indicate whether the behavior has been demonstrated, like answering eight out of ten questions correctly.
4. Condition. This may be circumstances, equipment, or materials used when demonstrating the behavior, such as with or without the use of a calculator, an open book, or a dictionary.

When instructional objectives are written to include all these criteria, they will be highly specific, as in the example listed below:

Given one and a half hours without a calculator (condition), students now in the honors program (audience) will compute (behavior) the indexes of discrimination and difficulty of 15 multiple choice items (criterion).

Writing instructional objectives with these criteria is time-consuming and too confusing for teachers; however, it depicts very clearly what the students are expected to demonstrate.

Another approach is to state a goal or general objective followed by an example of specific objectives that indicate the various types of student performances required, for example:

General Objective: Understand what constitutes a good test.
Specific Objective: List the criteria of a good test.

General Objective: Know the importance of Bloom's Taxonomy of Educational Objectives in test item formulation.
Specific Objectives: Differentiate the various levels of difficulty of the several cognitive objectives; give examples of the different levels of cognitive objectives; distinguish the subtle difference between comprehension and application.

Learning Targets
Terms associated with learning targets are goal, objectives, competency, outcome, standard, and expectation. A learning target is defined as a statement of student performance that includes both a description of what students should know or be able to do at the end of a unit of instruction and the criteria for judging the level of performance demonstrated.
The word learning is used to convey the targets' emphasis on how students will change. Learning targets are composed of content and criteria. Content is what students should know and be able to do. On the other hand, criteria are dimensions of student performance used for judging attainment.
It is necessary that the criteria for judging levels of performance be included
in the target. The target "students will know the cities in the Philippines"
means something different if the students have to commit the major chartered
cities to memory than if the students can correctly match half of the names
of major chartered cities with provinces or regions. The criterion for judging
levels of performance needs to be communicated to the students prior to
instruction. Below is an example of a learning target:
Students will be able to explain why changing the government from presidential to parliamentary form is (un)necessary by writing an essay that indicates the favorable or unfavorable conditions the change would bring to the economy. The papers will be graded holistically by looking for evidence of reasons and knowledge of the forms of government and organization.

Types of Learning Targets


Knowledge Learning Targets. Knowledge of the subject matter is the foundation upon which other learning is based. Teachers expect their students to master at least some content. Marzano et al. (1993) suggested that knowledge is differentiated as declarative and procedural knowledge.
Reasoning Learning Targets. Due to the advent of technology, the accessibility of information has resulted in increased attention to thinking skills. Such capabilities may be described by a number of different terms, including problem solving, critical thinking, analysis, comparing, intellectual skills, intellectual abilities, higher-order thinking skills, and judgment.
Skill Learning Targets. In most classrooms, there are things teachers want their students to be able to do; for example, in a science class, a teacher may want students to operate a laboratory apparatus. In cases like these, success lies in doing the task well. The challenge for teachers in assessing skills lies in the clarity of terms or the usage of words, or both.
A skill is something that the student demonstrates, something done. Skill learning targets involve a behavior in which knowledge and reasoning are used in an overt manner.
Product Learning Targets. Products, like skills, are dependent on the prior attainment of knowledge and reasoning targets. Products are samples of student work that demonstrate the ability to use knowledge and reasoning in the creation of a tangible product like a term paper, investigative report, artwork, or other projects.
Thus, products are used to demonstrate knowledge, reasoning, and skills. Performance-based assessments are examples of how product learning targets are measured.
Affective Learning Targets. Affective learning includes emotions, feelings, and beliefs that are different from cognitive learning, like knowledge, reasoning, and skills. It can be described as being positive or negative, and most teachers hope that students will develop positive attitudes toward school subjects and learning, themselves as learners, and other students and schools. Affective learning targets are also referred to as motivational dispositions, values, and morals. Although most teachers believe that a positive affect is an important outcome as well as a determinant of cognitive learning, many believe that schools should only be concerned with cognitive learning targets because, even though affective targets are possible to assess, they are still complex.
Bloom's Taxonomy of Learning Domains
Educational psychologists have advocated various techniques of stating educational objectives. They have also assisted in specifying goals by constructing taxonomies of educational objectives. These taxonomies have classified the goals of education and are useful as a means both of communicating goals and understanding some relationships among them. Original plans for one classification system called for the development of taxonomies in three domains: cognitive, affective, and psychomotor.

The cognitive domain includes objectives that deal with the recall or
organization of knowledge and the development of intellectual abilities
and skills. The cognitive taxonomy contains six major classes of objectives
arranged in hierarchical order on the basis of the complexity of a task.
Bloom's Taxonomy enables teachers and educators to use exact and
varied terminologies for stating specific learning outcomes. These terms are
identified in the table below:
Table 5.1
FORMULATING COGNITIVE LEARNING TARGETS
KNOWLEDGE. It is defined as the recall of previously learned material. This may involve the recollection of a wide range of materials, from specific facts to complete theories, although remembering the appropriate information is the only thing required. Knowledge represents the lowest level of learning outcome in the cognitive domain.
Sample objectives: Knows common terms; knows specific facts; knows methods and procedures; knows basic concepts; knows principles.
Sample verbs: Defines; describes; identifies; labels; lists; matches; names; outlines; reproduces; selects; states.
COMPREHENSION. It is defined as the ability to grasp the meaning of the material. This may be shown by translating the material from one form to another, interpreting the material, and estimating future trends. Learning outcomes go one step beyond the simple remembering of a material. It represents the lowest level of understanding.
Sample objectives: Understands facts and principles; interprets verbal material; interprets charts and graphs; translates verbal formulas to mathematical ones; estimates future consequences; justifies methods and procedures.
Sample verbs: Converts; defends; distinguishes; estimates; explains; extends; infers; generalizes; gives examples; paraphrases; predicts; rewrites; summarizes.
APPLICATION. It refers to the ability to use learned material in a new, concrete situation. This may include the application of things such as rules, methods, concepts, laws, principles, and theories. Learning outcomes in this area require a higher level of understanding than those under comprehension.
Sample objectives: Applies concepts and principles to new situations; applies laws and theories to practical situations; solves mathematical problems; demonstrates correct usage of a method or procedure; constructs charts and graphs.
Sample verbs: Changes; computes; demonstrates; discovers; manipulates; modifies; operates; predicts; prepares; produces; relates; shows; solves; uses.

ANALYSIS. It refers to the ability to break down material into its component parts so that its organizational structure may be understood. This may include the identification of the parts and recognition of the organizational principles involved. Learning outcomes here represent a higher intellectual level than the comprehension of both the content and structural form of the material.
Sample objectives: Recognizes unstated assumptions; recognizes logical fallacies in reasoning; distinguishes between facts and opinions/inferences; evaluates the relevance of data; analyzes the organizational structure of a work.
Sample verbs: Breaks down; diagrams; differentiates; discriminates; distinguishes; identifies; illustrates; infers; outlines; points out; relates; selects; separates; subdivides.
SYNTHESIS. It refers to the ability to put parts together to form a new whole. This may involve the production of a unique communication (theme or speech), a plan of operations (research proposal), or a set of abstract relations (scheme for classifying information). Learning outcomes in this area stress creative behaviors, with major emphasis on the formulation of new patterns or structures.
Sample objectives: Writes a well-organized theme; gives a well-organized speech; writes a creative short story, poem, or music; proposes a plan for an experiment; integrates learning from different areas into a plan for solving a problem; formulates a new scheme for classifying objects, events, or ideas.
Sample verbs: Categorizes; combines; compiles; composes; creates; devises; designs; explains; generates; modifies; plans; organizes; rearranges; reconstructs; relates; reorganizes; revises; rewrites; summarizes; tells; writes.
EVALUATION. Evaluation is concerned with the ability to judge the value of a material (statement, novel, poem, research, and report) for a given purpose. The judgments are to be based on definite criteria, which may be determined by the students or given to them. Learning outcomes in this area are highest in the cognitive hierarchy because they contain elements of all the categories, plus conscious value judgments based on clearly defined criteria.
Sample objectives: Judges the logical consistency of written material; judges the adequacy with which conclusions are supported by data; judges the value of a work (art, music, and writing) by use of internal criteria; judges the value of a work (art, music, and writing) by use of external standards of excellence.
Sample verbs: Appraises; compares; concludes; contrasts; criticizes; describes; discriminates; explains; justifies; interprets; relates; summarizes; supports.

The affective domain is concerned with changes in interests, attitudes, and values and the development of appreciation and adjustment. It is divided into five major classes hierarchically arranged on the basis of involvement.
Teachers will have a great deal to do with the feelings, attitudes, and values developed in their classrooms regardless of whether they state or do not state affective objectives. Although teachers are tacitly expected to encourage mainstream values (for example, honesty and responsibility), they may inadvertently cause a conflict by espousing a more complex or more specific moral stance. Therefore, the only way to assess the attitudes or values people have is by observing what they do or say. The affective domain includes the manner in which we deal with things emotionally (Krathwohl, Bloom & Masia, 1973). The five major categories are listed from the simplest behavior to the most complex.
:i
Table 5.2
FORMULATING AFFECTIVE LEARNING TARGETS
Receiving (Attending) Phenomena: Awareness, willingness to hear, selected attention.
This is the lowest category in the affective domain. At this level, the student is aware of the existence of a condition or problem and is willing to at least listen attentively to what others have to say about it. The element of commitment is not present, and the behavior is somewhat analogous to "sitting on the fence." The student is aware of an issue, but has not yet made a decision about it.
Examples: Listens to others with respect; listens for and remembers the names of newly introduced people.
Key Words: asks; chooses; describes; follows; gives; holds; identifies; locates; names; points to; selects; sits erect; replies; uses.
Responding to Phenomena: Active participation on the part of the learners; attends and reacts to a particular phenomenon.
Learning outcomes may emphasize compliance in responding, willingness to respond, or satisfaction in responding (motivation). At this level, the student is willing to go along with an idea or a value, such as being willing to follow school rules, actively volunteers to respond, and takes satisfaction in the response. The level of commitment is minimal and the behavior is analogous to jumping off the fence, but holding on to it and being ready to jump back at any moment.
Examples: Participates in class discussions; gives a presentation; questions new ideals, concepts, models, etc. to fully understand them; knows the safety rules and practices them.
Key Words: answers; assists; aids; complies; conforms; discusses; greets; helps; labels; performs; practices; presents; reads; recites; reports; selects; tells; writes.

Valuing: The worth or value a person attaches to a particular object, phenomenon, or behavior.
This ranges from simple acceptance to the more complex state of commitment. Valuing is based on the internalization of a set of specified values. Clues to these values are expressed in the learner's overt behavior and are often identifiable. Here, the student demonstrates that an attitude has been accepted and is constantly preferred over competing attitudes and values. The commitment is clear. The student has walked away from the fence and is willing to be identified as someone holding the attitude or value.
Examples: Demonstrates belief in the democratic process; is sensitive toward individual and cultural differences (values diversity); shows the ability to solve problems; proposes a plan for social improvement and follows through with commitment; informs management on matters that one feels strongly about.
Key Words: completes; demonstrates; differentiates; explains; follows; forms; initiates; invites; joins; justifies; proposes; reads; reports; selects; shares; studies; works.
Organization: Organizes values into priorities by contrasting different values, resolving conflicts between them, and creating a unique value system; the emphasis is on comparing, relating, and synthesizing values.
The students eventually recognize that conflicts between values arise and must be resolved by setting priorities on values. To do so, students should use higher-level cognitive thinking, which will enable them to resolve value conflicts in a logical and defensible manner. They will then have greater confidence in their decisions. This level is a direct link between the cognitive and the affective domains.
Examples: Recognizes the need for balance between freedom and responsible behavior; accepts responsibility for one's behavior; explains the role of systematic planning in solving problems; accepts professional ethical standards; creates a life plan in harmony with abilities, interests, and beliefs; prioritizes time effectively to meet the needs of the organization, family, and self.
Key Words: adheres; arranges; combines; compares; completes; defends; explains; formulates; generalizes; identifies; integrates; modifies; orders; organizes; relates; synthesizes.
Internalizing Values (Characterization): Has a value system that controls behavior; the behavior is pervasive, consistent, predictable, and most importantly, a characteristic of the learner; instructional objectives are concerned with the student's general patterns of adjustment (personal, social, emotional).
This is the highest level of the affective domain. At this level, a person has developed and internalized a value system to the extent that those values are clearly reflected in the person's behavior. When we think of a miser or a spendthrift, we are thinking of someone who has reached the characterization level. That person has reason for holding particular values and is satisfied with those values. Since students are in the process of developing their value structures, only a few will reach the characterization level while in school.
Examples: Shows self-reliance when working independently; cooperates in group activities (displays teamwork); uses an objective approach in problem solving; displays a professional commitment to ethical practice on a daily basis; revises judgments and changes behavior in light of new evidence; values people for what they are, not how they look.
Key Words: acts; discriminates; displays; influences; listens; modifies; performs; practices; proposes; qualifies; questions; revises; serves; solves; verifies.



The psychomotor domain is concerned with the development of motor skills and neuromuscular control. Objectives in the psychomotor domain often contain elements of the cognitive and affective domains and vice versa, but the dominant characteristic and intent of the student's response is physical movement. The curricular areas in which psychomotor skills receive major emphasis include typing, shorthand, home economics, industrial education, art, music, and of course, physical education. It is important to keep in mind, however, that virtually all other curricular areas depend, to one degree or another, on psychomotor skills. Speaking, gesturing, writing, and eye-hand coordination are examples of psychomotor domain skills. The psychomotor domain has five levels; namely, imitation, manipulation, precision, articulation, and naturalization.
Table 5.3
FORMULATING PSYCHOMOTOR LEARNING TARGETS (Simpson et al., 2001)

1. Imitation - the early stage in learning a complex skill, overtly, after the individual has indicated a readiness to take a particular type of action. Imitation includes repeating an act that has been demonstrated or explained. It also includes trial and error until an appropriate response is achieved.
Key Words: begins; assembles; attempts; carries out; copies; calibrates; constructs; dissects; duplicates; follows; mimics; moves; practices; proceeds; repeats; reproduces; responds; organizes; sketches; starts; tries; volunteers.
2. Manipulation - the individual continues to practice a particular skill or sequence until it becomes habitual and the action can be performed with some confidence and proficiency. The response is more complex than at the previous level, but the learner is still not "sure of himself/herself."
Key Words: (same as imitation); acquires; assembles; completes; conducts; does; executes; improves; maintains; makes; manipulates; operates; paces; performs; produces; progresses; uses.
3. Precision - skill has been attained. Proficiency is indicated by a quick, smooth, accurate performance requiring minimum energy. The overt response is complex and performed without hesitation.
Key Words: (same as imitation and manipulation); achieves; accomplishes; advances; automatizes; exceeds; excels; masters; reaches; refines; succeeds; surpasses; transcends.
4. Articulation - involves an even higher level of precision. The skills are so well developed that the individual can modify movement patterns to fit special requirements or meet a problem situation.
Key Words: adapts; alters; changes; excels; rearranges; reorganizes; revises; surpasses; transcends.

5. Naturalization - the response is automatic. The individual begins to experiment, creating new motor acts or ways of manipulating materials out of the understandings, abilities, and skills developed. One acts "without thinking."
Key Words: arranges; combines; composes; constructs; creates; designs; refines; originates; transcends.

Review Exercise

Think of one specific lesson in your field of specialization. Prepare a series of objectives progressing from the low-level cognitive domain up to the appropriate affective domain.


Preparation of Classroom Assessment

OBJECTIVES

At the end of the chapter, the learners are expected to:


. list down the planning stages in preparing a classroom test;
. establish relationships among learning objectives, teaching, and
testing;
. cite the importance of the table of specifications in making the test
more valid and reliable; and
. construct a sample table of specifications.

Planning the Teacher-Made Test

Good tests do not just happen. They require adequate and extensive
planning so that the goals of instruction (objectives), the teaching strategy
to be employed, the textual material, and the evaluative procedures are all
related in some meaningful fashion. Most teachers recognize the importance
of having some systematic procedure for ascertaining the extent to which
the instructional objectives have been realized by their students. Yet, some
teachers still prepare their classroom tests with inadequate planning, or worse,
without planning at all.
In class, planning for each lesson and its accompanying evaluation starts
as early as the conceptualization of the curriculum. This practice has been
reiterated by authorities in the field of education, particularly Ralph Tyler
who, until now, is considered the "Father of Educational Evaluation" (Oliva, 2001).

Figure 6.1. TYLER'S EVALUATION FRAMEWORK (Oliva, 2001). [The original figure is a cyclical diagram of the planning stage: establish broad goals and performance objectives; classify goals and objectives; define objectives in behavioral terms; find situations in which achievement of objectives can be shown; develop or select measurement techniques; collect performance data; and compare performance data with the behaviorally stated objectives.]

Before a classroom teacher sits down to write the test items, he/she must ask himself/herself a series of questions: What do I want to do? What is the best way in which I can accomplish my goal? These are the two most general questions the classroom teacher must consider. The following questions should be asked by the classroom teacher in the test-planning stage:
. What skills, knowledge, attitudes, etc. do I want to measure?
. Have I clearly defined my instructional objectives in terms of student behavior?
. Have I prepared a table of specifications (TOS)?
. What kind of test (item format) do I want to use? Why?
. How long should the test be?
. What should be the discrimination level of my test items?
. How will I arrange the various item formats?
. How will I arrange the items within each item format?
. What do I need to do to prepare the students in taking the test?
. How are the students going to record their answers to the objective items?
. How will the objective portion be graded?
. For objective items, should guessing instructions be given? Should a correction for guessing be applied?

. How will the test scores be tabulated?
. How will the scores be assigned?
. How will the test results be reported?

Steps in Classroom Testing and Assessment

Linn and Gronlund (2000) offered a logical procedure for preparing valid, reliable, and useful tests as shown in Fig. 6.2.

GOAL: Improved learning and instruction
7. Using the test
6. Appraising the test
5. Assembling the test
4. Preparing relevant test items
3. Selecting appropriate test types
2. Developing the table of specifications
1. Determining the purpose of measurement

Figure 6.2. BASIC STEPS IN CLASSROOM TESTING

During the stage of thinking about the test, the teacher must consider the relationships among the objectives, teaching, and testing. The following checklist should assist the test constructor:
. Specify the course or unit content.
. List the major course or unit objectives.
. Define each objective in terms of student behavior.
. Discard unrealistic objectives.
. Prepare a table of specifications.
. Decide on the item format to be used.
. Prepare test items.



Writing the Objective Short-Response Test

The objective-type item was developed to overcome the criticisms leveled against essay questions: poor content sampling, unreliable scoring, time consumed for grading, and encouragement of bluffing. All objective item formats can be subdivided into two classes: the supply type (short answer) and the select type. One of the virtues of the objective type is that it is economical in obtaining information from a student because, in general, it takes less time to answer than an essay question.
Suggestions on how to construct an objective type of test are summarized as follows:
1. Objective test items must be written as simply and as clearly as possible so that all the examinees will be able to make the same interpretation of the item's intent.
2. Test items should be tailored to fit the age and ability level of the examinees.
3. Textbook language, technical jargon, and excessively difficult vocabulary should be avoided whenever possible. Otherwise, the test will be one of verbal fluency or general intelligence.
4. Irrelevant clues should be avoided. A test-wise student should not have any undue advantage over the comparably knowledgeable but non-test-wise student.
5. There should only be one correct or best answer. Items should not ask questions on which it is difficult to obtain agreement, even among experts, as to the "best" answer.
6. Test items must be reviewed, preferably by a fellow teacher.
7. Important ideas, rather than trivial details, should be stressed. Otherwise, rote memory is encouraged.
8. The short-answer item is well suited to objectives and content areas where the answer can be provided by a word(s), symbol, number, or formula.
9. For short-answer items, omit only key words, avoid over-mutilated sentences, use a direct question format when feasible, and avoid irrelevant clues. For numerical problems, tell the student the degree of precision desired and indicate whether the unit of expression is expected in the answer.
10. For matching exercises, keep the lists relatively short, perhaps only 5-12 entries in each list; keep each list homogeneous; arrange each list in a systematic fashion, for example, order by length of response or in an ascending or descending order for dates and numbers; have both lists appear on the same page; and have one list shorter than the other.

11. For true or false items, avoid double-barreled items, negative questions, and double negatives; have an approximately equal number of true and false statements to counteract the effects of the examinee's response set; and restrict the items to those for which the answer is clearly true or false.

Table of Specifications (TOS)

One of the major complaints of students with regard to teacher-made tests is that they are often invalid since the included items in the test were not discussed. Although a table of specifications (TOS) is no guarantee that errors will be corrected, a blueprint may help improve the content validity of teacher-made tests.
A TOS is a matrix where the rows consist of specific topics or skills and the objectives cast in terms of Bloom's Taxonomy. It is sometimes called a test blueprint, test grid, or content validity chart.
The main purpose of a TOS is to aid the test constructor in developing a balanced test, which matches the proportion of items to the amount of time spent in the classroom, the activities engaged in, and the topics discussed. It helps the teacher avoid the tendency to focus on materials that are easy to develop as test items. Such a tendency often limits the teacher to constructing items on knowledge.
Who Should Prepare Specifications?
There is nothing wrong in involving students in the development of a TOS. In fact, whenever feasible, teachers should encourage such student involvement, if for no other reason than to have students feel that they have played some role in planning the course. This attitude toward student participation should not be interpreted as giving the students complete control. Nor should it be regarded as an abrogation of the teachers' major and final responsibility. The teacher is the decision maker, not the students, although student input should still be considered by the teacher in making decisions. However, their opinions should only be used in an advisory capacity.

When to Prepare Specifications


Ideally and to be of most benefit, the TOS should be prepared before the beginning of instruction. It would be good to consider it part and parcel of the course preparation because it can help the teacher be more effective. It helps provide for optimal learning on the part of the students and optimal teaching efficiency on the part of the teachers. It serves as a monitoring agent and can help keep the teacher from straying off the instructional track.
Once the course content and instructional objectives have been specified, the teacher is ready to integrate them in some meaningful fashion so that the test, when completed, will be a valid measure of the student's knowledge.
One could delineate the course contents into finer subdivisions. Whether this needs to be done depends on the nature of the content and the manner in which the course content has been outlined and taught by the teacher. A good rule-of-thumb to follow in determining how detailed the content area should be is to have a sufficient number of subdivisions to ensure an adequate, detailed coverage. The more detailed the blueprint is, the easier it is to get ideas for test items.
Table 6.1 contains numbers in certain cells under levels of complexity, such as knowledge (K), comprehension (C), application (Ap), analysis (An), synthesis (S), and evaluation (E). The total of the last column will give the desired total number of items for each level of complexity. The number 50 is the desired total number of items appropriated to the different levels of complexity. The computed values (figures) in each cell in a certain level suggest the number of items that should be constructed on a specified topic.
Hence, the five knowledge questions must be taken from each of the following topics: structure of a topic sentence, writing a journal and editorial, methods of paragraph development, different figures of speech, and the uses of figures of speech in a sentence.
How to Determine the Weights

Potential Items (PI) are computed by dividing the time spent on a specific content by the total number of hours for the whole grading period and multiplying by 100. Functional Items (FI) determine the number of items to be constructed from a specific content. They are calculated by getting the product of the PI and the desired number of items in a certain level of complexity (e.g., 5 for knowledge) and dividing it by 100.


Table 6.1. SAMPLE TWO-WAY TABLE OF SPECIFICATIONS. [The original table lists the content topics in rows, with the time spent on each, and distributes the 50 items across the levels of complexity (K, C, Ap, An, S, E) in columns using the computed Potential and Functional Items.]


To compute for the Potential Items (PI):
PI = (time spent / total time spent for the quarter) x 100
Example: 4.5/40 = 0.1125; 0.1125 x 100 = 11.25

To compute for the Functional Items (FI):
FI = (PI x number of items allocated to each level of complexity) / 100
Example: 11.25 x 5 = 56.25; 56.25/100 = 0.5625
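These two formulas are easy to script when filling in a TOS. Below is a minimal sketch in Python; the topic names follow the topics listed above, but the hours per topic (other than the 4.5 hours of the worked example) and the allocation of the 50 items across the six levels (other than the 5 knowledge items) are hypothetical assumptions:

    # Minimal sketch of the PI/FI computation for a table of specifications.
    # Hours (except 4.5) and per-level item counts (except K = 5) are
    # hypothetical; they sum to the 40 hours and 50 items used in the text.
    topics_hours = {
        "Structure of a topic sentence": 4.5,
        "Writing a journal and editorial": 8.0,
        "Methods of paragraph development": 10.0,
        "Different figures of speech": 9.5,
        "Uses of figures of speech in a sentence": 8.0,
    }
    items_per_level = {"K": 5, "C": 10, "Ap": 10, "An": 10, "S": 10, "E": 5}

    total_hours = sum(topics_hours.values())  # 40 hours in the grading period

    for topic, hours in topics_hours.items():
        pi = hours / total_hours * 100  # Potential Items (a percentage weight)
        # Functional Items: the item share of each complexity level for this topic
        fi = {level: pi * n / 100 for level, n in items_per_level.items()}
        print(topic, round(pi, 2), {k: round(v, 4) for k, v in fi.items()})

For the 4.5-hour topic this prints PI = 11.25 and, at the knowledge level, FI = 0.5625, matching the worked examples above; the fractional FI values are rounded when the final item counts are entered in the table.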
Review Exercises

1. Based on the sample two-way TOS, identify where the items in the different levels of complexity should come from.
2. Compare the objectives of a lesson plan with the items in a test paper of a teacher in an elementary or secondary school.
3. Interview teachers on how they prepare a test for their students.

Development of Classroom Assessment

OBJECTIVES

At the end of the chapter, the learners are expected to:


. identify the different teacher-made tests, their advantages and
disadvantages; and
. formulate their own sample questions.

The quality of test construction depends largely on the classroom teacher. Every classroom teacher is interested to know how far and wide he/she can facilitate, orient, and guide his/her students with the knowledge, ideas, abilities, skills, and attitudes that he/she wishes to build up to achieve his/her teaching objectives and make his/her students responsive to the changing needs of the society. He/She is in the best position to ascertain the strengths, weaknesses, and needs of his/her students, and the goals he/she wants to achieve.
The classroom teacher usually gives the following types of test (item formats) in the classroom: multiple choice, true or false, matching type, completion, cloze, and essay.

Multiple-Choice Test
The multiple choice type of test is a form of assessment in which students
are asked to select the correct or best answer out of the choices from a list. In
this kind of test, an item consists of two parts: the stem and a set of options or
alternatives. It requires the students to select from the options that will make
the stem complete and correct. All incorrect or less appropriate responses are
called distracters or foils.
Oftentimes, multiple choice tests include a stimulus material where the
item or question is drawn. A stimulus material, or an introductory material, is
added information in the form of chart, graph, stanza of a poem, or a novel
pictorial.
The stem is the beginning part of the item that presents the item as a
problem to be solved, a question asked of the students, or an incomplete
statement to be completed. It can be presented in three ways: a direct question,
an incomplete statement, or a mathematical equation. If it is an incomplete
statement, all the options, or at least the last one, should end with a period. For elementary students, it is advisable to use a direct question.

Example of a Direct Question:


Who is the President of the Philippines after EDSA I?

Example of an Incomplete Statement:


The President of the Philippines after EDSA I is
a. President Gloria Arroyo    c. President Joseph Estrada
b. President Corazon Aquino    d. President Fidel Ramos

A stem may also be presented in the form of a mathematical equation.

Example:
In the equation 2x + 3 = 4, solve for x.
a. 4    d. 1.5
b. 10    e. 8
c. 0.5
(Here 2x = 1, so x = 0.5; the key is option c.)

The given options are the possible answers that the examinees can choose from, with the correct answer called the key. The minimum number of options is three while the maximum is five.
Advantages of the Multiple-Choice Test
sr (item 1. It has great versatility in measuring objectives from the level of rote
rg type, memorization to the most complex level.
2. It often requires less time to administer than tests requiring written
responses.
3. Because this style of test does not require a teacher to interpret the
;Lrdents answers, test-takers are graded purely on the selection, thus creating a
a list. In lower likelihood of teacher bias in the results. Factors inelevant to the
lons or assessed materials, such as handwriting and clarity of presentation,
l- rrake do not come into play in a multiple choice assessment.
lses are 4. Because student writing is rninimized, the teacher can cover a
substantial amount of course material in a relatively short time.
e:e the 5. Scoring is objective since only little interpretation is needed to count
e:iai, is the number of correct responses.
r rovel 6. Teachers can construct options that require studenh to discriminate
among them. These items vary in the degree of correctness.
r- ,5d
r:,ete
'. El:
Chapter 7: Development of Classroom Assessmen I
,*.i;W
.5.&.*,r

F
r
7. The effects of guessing are largely reduced since there are more options.
8. Items are more amenable to item analysis, and this can be used to detect areas of student weaknesses, evidence of item ambiguity, item difficulty, and the extent to which the item can measure individual differences.
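Item analysis itself is not elaborated here, but the indexes of difficulty and discrimination (the same indexes named in the behavioral objective example in Chapter 5) are conventionally computed as proportions. The following is a minimal Python sketch under that common convention; the upper/lower scoring groups and the sample figures are assumptions for illustration, not data from the text:

    def difficulty_index(num_correct, num_examinees):
        # Proportion of examinees answering the item correctly; higher = easier.
        return num_correct / num_examinees

    def discrimination_index(upper_correct, lower_correct, group_size):
        # Difference between the proportions correct in the upper and lower
        # scoring groups; positive values mean the item separates strong
        # examinees from weak ones.
        return (upper_correct - lower_correct) / group_size

    # Hypothetical item: 24 of 40 examinees correct overall; 10 of the 11
    # top scorers and 4 of the 11 bottom scorers answered it correctly.
    print(difficulty_index(24, 40))         # 0.6
    print(discrimination_index(10, 4, 11))  # about 0.55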

Disadvantages of the Multiple-Choice Test


1. This type of test is more time consuming to construct, particularly in terms of looking for plausible distracters.
2. Items may be ambiguous. Failing to interpret an item as the test maker intended can result in an incorrect response, even if the test taker's response is potentially valid. The term "multiple guess" has been used to describe this scenario because test-takers may attempt to guess, rather than determine, the correct answer.
3. In a multiple-choice test, a student who is incapable of answering a particular question can simply select a random answer and still have a chance of receiving a mark for it. It is a common practice for students with no time left to give all the remaining questions random answers in the hope that they will get at least some of them right.
4. Test-naive students complain of more than one defensible correct answer.

Suggestions for Writing Multiple-Choice Tests
1. The stem should introduce what is expected of the examinee. The essence of the problem should be in the stem. A poor stem leaves the students dealing with four possible answers, hence making the examinees anxious as to what to do. All the options should measure the same objective; that is, if the stem calls for a name of a person, all the choices should be names of persons.
2. Avoid repetition of words in the options. The stem should be written so that the key words are incorporated in the stem and will not have to be repeated in each option. This will save reading time on the part of the student.
3. When the incomplete statement format is used, the options should come at the end of the statement. All test items should present the problem to the student as early and clearly as possible.
4. Avoid specific determiners. Multiple-choice test items should not contain clues on what the correct answer is. One clue is option length; the longest option is usually the right one.



5. Use vocabulary suited to the maturity of the students. Consider the example for Grade 4 pupils:

Poor Example:
The foremost contribution of Magellan to civilization is that he was the first person to
a. circumnavigate the world.
b. discover the Atlantic Ocean.
c. land on American soil.
d. look for the fountain of youth.

Improved:
Magellan was the first person to
a. go around the world.
b. discover the Atlantic Ocean.
c. land on American soil.
d. look for the fountain of youth.

Although most students understand what Magellan had done, very few will understand the words foremost, civilization, and circumnavigate. The poor example measures vocabularies that most students in the fourth grade have not yet learned.
6. Stems and options should be stated positively whenever possible. Elementary grade pupils find negatives confusing. For older students, a negative in either the stem or option, but not both, is permissible. If the word "not" is used in the stem, it should be underlined to ascertain that it is not overlooked.
7. Options should be plausible and homogeneous. It is useless to include distracters that no examinees will choose.
8. Items should have a defensible correct or best option. To avoid this pitfall, the teacher should examine each option to make sure it is either the most defensible or clearly the wrong one. It is as important to justify the reasons for incorrect options as it is to be able to defend the correct ones.
9. Avoid items that measure opinions. All opinions are equally defensible.
10. Vary the placement of correct options.
11. Avoid overlapping options. A multiple-choice test should only have one correct option. Avoid having options like:
a. lady    c. woman
b. lass    d. girl
girl

12. Use "none of the above" as an option only if there is an
absolute
right answer.
13. Avoid asking sfudents for trivial information like middle initials,
specific dates or years, spellings, among others, from the options:
Example:
The first woman president of the philippines is
a. Pres. corazon A. Aquino c. pres. corazon c. Aquino
b. Pres. corazon B. Aquino d. pres. corazon D. Aquino
74. whenever possible, affange options in a logical order of magnitude,
temporal sequence, and so on.
15. The stem should be clear and grammatically correct and should
contain elements common in each option. Muliiple-choi..
tnrt ob"v
the standard English rules of puncfuation and grammar.
16. Use four or five options.

Suggestions for Measuring Complex Objectives with Multiple-Choice Items
1. The objective should permit the measurement of understanding.
2. Construct items in a form different from the one originally presented.
3. Use novel pictorials to measure principles that require students to apply knowledge.
4. Use analogies to measure relationships.
5. Have students identify assumptions and analyze criteria.
6. Have students discover relationships among similar topics.
7. Have students select examples of principles or concepts.
8. Use charts and tables.
Scoring the Multiple-Choice Test
Children below the fourth grade should probably answer questions on the test booklet itself rather than on a separate answer sheet. A separate answer sheet is an advantage to older children since scoring time can be shortened and scoring and counting errors can be reduced. It can also facilitate the analyzing of the class' responses to each item for diagnosis.
Another modification in scoring a multiple-choice test is giving credits for every option the examinee recognizes as incorrect, rather than penalizing students for responding incorrectly. However, a penalty should be given for eliminating a correct alternative.
Scoring can also be modified by having students identify the correct option if the correct answer is not among the options.
Determining the Optimal Number of Options
The number of items on a test and the number of alternatives for each item affect the accuracy of measurements. A very short test cannot differentiate among different levels of student knowledge, while too many options may prove to be confusing to students and will limit the number of items they can answer. Current evidence shows that the teacher would be better off with 80 items having three alternatives each than with 60 items having four options each. Three to five choices are reasonable for multiple-choice tests.
agnifude,
Types of Multiple-Choice Tests
d should Multiple-choice tests can be categorized into the following:
ests obey
1. Stimulus Material-Stem-Options
The papers, of course, had been full of tragedy_glaring
headlines, sandwiched biographies of every member or tr,"
tultiple- household and the usual familiar tag about the police having no
clue. Nothing was spared. The war was momentarily inactiveind
the newspapersseizedwith avidity on this crime in fashionable life:
ling. "The Mysterious Affair at styles" was the topic of the moment.
resented. "The Mysterious
Affair of Styles by Agatha Christie
-From 11
dents to why are the newspapers making The Mysterious Affair at
Sty/es their lead story? {l
. a. They are bored with regular news. I
b. The Cavendishes were fashionable.
c. The war is over.
How would one describe the newspapers' coverage of the
crime?
a. silly c. thorough
b. humorous
2. Stem-Options
Example:
Which of the following serves as an example of formative evaluation?
a. diagnostic test    c. periodical test
b. entrance test    d. short quizzes
3. Negative Stem
Example:
The following are examples of an adjective EXCEPT
a. albeit    c. gargantuan
b. august    d. titanic
4. Best Answer
Example:
Since there are no clear-cut or well-defined policies on observing privacy in all instances, the teacher is simply required to be
a. anonymous    c. secretive
b. carefree    d. sensitive
5. Contained Options
Example:
Identify the error in the sentence.
My parents was in Manila to assist my sister enroll in College.
(a)    (b)    (c)    (d)
e. No error
6. Correct Answer
Example:
What is the summer capital city of the Philippines?
a. Baguio City    c. Davao City
b. Cebu City    d. Olongapo City
7. Group Options
Example:
Write-
A if the item is a simple sentence.
B if the item is a compound sentence.
C if the item is a complex sentence.
D if the item is a phrase.
E if the item is a clause.
8. Morse Variety
Example:
Write-
A if W affects X, X affects Y, and Y affects Z.
B if W does not affect X, X does not affect Y, and Y does not affect Z.
C if W affects X, X does not affect Y, and Y affects Z.
D if W does not affect X, X affects Y, and Y does not affect Z.

Table 7.1
CHECKLIST FOR WRITING MULTIPLE-CHOICE ITEMS
1. Are the item and the main problem in the stem clearly presented?
2. Has the item been cast so that there is no repetition of key words or phrases for each option?
3. Do the options come at the end of the stem?
4. Have the responses been arranged in some systematic fashion, such as alphabetically or by the length of options?
5. Are all distracters plausible?
6. Have all irrelevant clues been avoided?
7. Are the correct answers randomly assigned throughout the test with approximately equal frequency?
8. Is there only one correct or best answer?
9. Has "all of the above" been avoided?
10. Has the "none of the above" option been used sparingly and only when appropriate?
11. Have the overlapping options been avoided?
12. Have negative statements been avoided? If used, has the negative been underlined or written in capital letters?
underlined or written in capital letters? |l
ft
Binary Item Test (True or False Test)
This type of test requires the examinee to recognize and mark an item as true or false. Other possible options are agree or disagree, yes or no, valid or invalid, fact or opinion, and cause or effect.

Advantages of the True or False Test


1. Item Sampling
Because true or false tests/items and answers tend to be short, teachers can examine students on more materials than they can with any other type of test. The true or false (T-F) test can help ensure an adequate sample of items when a great deal of subject matter must be covered.
2. Ease of Construction
Teachers can construct items of this type by lifting statements from the book and rewording some of them to make false items. However, this must be avoided since items may become ambiguous. The said practice, although it takes less time, likewise promotes rote memorization.

3. Ease of Scoring
Scoring is relatively mechanical as the student has to only agree or disagree with the item. The difficulty lies in the penmanship of the student, as some would write "T" in longhand and it may be read or appear as "F." This can be remedied by requiring students to write in print, write the full word, or shade a circle correspondingly.

Disadvantages of the True or False Test


1. Emphasis on Rote Memorization
Modern educational practices tend to lessen the emphasis on rote memorization except in gaining prerequisite knowledge for more complex skills. It is better for the student to apply particular skills after just having attained them. For example, how a student can apply the rules of multiplication is better than multiplication per se. The increasing complexity of life demands comprehension, analysis, synthesis, application, and evaluation from a student. If examinations only test the skills of memorization, students may oversimplify questions that require complex answers. This demands that the teacher be creative when casting T-F tests in such a way that more complex objectives can be measured.
2. Dependence on Absolute Judgment
The T-F test presumes a dichotomous world, where things are either a falsity or a truth and the possibility of intermediate values is not easily admitted. Since most facts are not entirely true or false and still require qualification, it is unfair to ask the students to guess at the teacher's criteria for the evaluation of the truth or falsity of any statement.
3. Likelihood of Guessing
This type of test allows a high degree of guessing. Statistically, an examinee always has the chance of obtaining 50% correct answers. Students uncertain of their answer can always guess and hope to answer correctly.

Pointers on Writing True or False Items


1. Construct items that measure important objectives. Requiring students to respond to new situations is one way to increase the thought content of T-F tests.
2. Avoid using specific determiners. Specific determiners give clues to correct answers. These include sweeping generalizations like always, never, all, and impossible.
3. Avoid using trick questions.

4. Limit each statement to the point that is being tested. Avoid equivocal items.
5. Avoid excess use of negative words and phrases.
6. Approximately half of the statements should be false. Because it is easier to construct true items, teachers inadvertently include more statements that are true. The chance of getting correct answers by guessing is higher since students who are in doubt would tend to mark the item as true.
7. Avoid qualitative terms like best, some, many, and several.
Modification of True or False Tests
Corrections for Guessing
Students can be penalized for guessing since guessing does not reflect learning or true performance. Arguments for corrections for guessing include, among others, the following:
a. Equate the scores of students who guess with those who work more carefully under restricted time limits. Under severe time limits, some students may work rapidly and guess on more items, while some may work more slowly and deliberately.
b. Discourage students from guessing, thus facilitating learning. On moral grounds, guessing is tantamount to becoming dishonest and unjust since students who guess take advantage of the nature of T-F tests and multiple-choice tests. On pedagogical grounds, chance scores may reinforce guessing, so students may get some items right even without studying.
c. Improve the extent to which tests are capable of predicting criteria. Corrected scores correlate more highly with the criteria than uncorrected scores.
incorrectness.
t/ers.
Arguments Against C orrections
>e to /or Guessing
1. Sfudents may be discouraged from attempting
to answer even though
they have some information.
2' students who make effors whether they guessed
or not can be
ents
penalized.
sht- 3. The corrections for guessing are laborious
to use.
4. The differences in scores may still go unnoticed
even if the test items
sto are increased.
ays,
Reducing the Effects of Guessing Using Other Ways
1. Encourage the students to guess. If the students are told that scores will be corrected for guessing, some may not attempt to answer some items they doubt, but those who are still set to gamble will continue guessing. To equalize the scores of the students, encourage them to guess, even though guessing is wrong on moral and pedagogical grounds.
2. Increase the number of items. It is recommended that a T-F test should contain 3 to 4 times more items than multiple-choice tests to make them equivalent in accuracy. Consequently, the time limit should be decreased.
3. Have the students revise false statements. Requiring the students to mark false items and have them corrected is one way of reducing guessing.
4. Confidence weighting. Aside from marking each statement as T or F, students can be asked to indicate the degree of their certainty in giving such responses.
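The text does not state the correction formula itself. The convention usually meant by "correction for guessing" is S = R - W/(k - 1), where R is the number of right answers, W the number of wrong answers, and k the number of options per item, so a two-option T-F test is scored R - W. A minimal Python sketch under that assumption:

    def corrected_score(right, wrong, options_per_item):
        # Conventional correction for guessing (an assumed convention, not a
        # formula from the text): omitted items are ignored, and each wrong
        # answer cancels the expected gain from random guessing.
        return right - wrong / (options_per_item - 1)

    # On an 80-item T-F test (k = 2), a student with 60 right and 20 wrong
    # scores 60 - 20 = 40: the score expected from truly knowing 40 items
    # and guessing the remaining 40 at random.
    print(corrected_score(60, 20, 2))  # 40.0
    print(corrected_score(60, 20, 4))  # about 53.33 for 4-option items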

Types of True or False Tests

1. Simple True or False
Write TRUE if the statement is correct and FALSE if otherwise.
Example:
Baguio City is the summer capital of the Philippines.
2. Modified True or False
Write TRUE if the statement is valid and FALSE if otherwise. If the statement is FALSE, underline the word(s) that make it wrong.
Example:
One calendar year has thirteen months.
3. True or False with Correction
Write TRUE if the statement is correct and FALSE if otherwise. If the statement is false, rewrite or change the statement to make it right.
Example:
5 x 3 = 8    Answer: False; 5 + 3 = 8 or 5 x 3 = 15
4. Cluster True or False
Circle T if the statement is TRUE and F if it is FALSE.
Example:
Tests are productively used when
T F 1. It analyzes students' learning.
T F 2. It allocates funds.
T F 3. It improves curriculum.
5. True or False with Options
Example:
Write A if only the first statement is TRUE.
B if only the second statement is TRUE.

C if both statements are TRUE.
D if both statements are FALSE.
(1) Recognition tests require the examinee to choose the right answer from the given options. (2) An example of a recognition test is a completion test.
6. Fact or Opinion
Example:
Identify if the statement is a FACT or an OPINION.
1. There are 12 months in a year.
2. During the month of March, it never rains.

7. Identifying Inconsistencies in a Paragraph
Example:
Circle the word(s) in any part of the paragraphs that make(s) the statement(s) wrong.
To a large extent, the grade a student gets may not truly reflect the authentic learning outcome due to the flaws in the test construction and administration. There are certain qualities a good test must possess. Reliability implies validity but not the reverse. The latter refers to the efficiency with which the test intends to measure. One of the factors that influence validity is directions, which inform the teachers on how to score students' responses to the items. The arrangement of the items must begin from difficult to easy. Necessarily, a test is administered with complexity, clarity, and uniformity.
Table 7.2
CHECKLIST FOR WRITING TRUE OR FALSE ITEMS
1. Was each item expressed in clear, simple language?
2. Was lifting statements verbatim from the text avoided?
3. Have negative statements been avoided where possible?
4. Have specific determiners, such as all, may, and sometimes, been avoided?
5. Have double-barreled items (part true and part false) been avoided?
6. Have trick questions been removed?
7. Is each item clearly true or false?
8. Is there approximately the same number of true and false items?
9. Have the items been edited?

Matching-Type Test

The matching-type test is similar to the multiple-choice test. In this kind of test, the examinee associates an item in one column with a choice in the second column.

Advantages of the Matching-Type Test
1. The matching-type test is simple to construct and score. It is well suited to measuring associations. Like a multiple-choice test, it presents the student with questions and alternatives.
2. It reduces the effects of guessing, although the chance of guessing increases as the student progresses in answering the items. This, however, is easily remedied by adding more options.

Disadvantages of the Matching-Type Test

1. It tends to ask students to associate trivial information. Unfortunately, most matching-type tests emphasize memorization, although it is possible to construct items that measure more complex cognitive skills.
2. In the case of commercial answer sheets, matching items can accommodate no more than five options.

Pointers on Writing Matching-Type Tests

1. If possible, the responses should consist of short phrases, single words, or numbers.
2. Use homogeneous options and items.
3. Have more options than the given items. Initially, a matching-item test decreases the students' tendencies to guess, but as the students progress in answering the test, the guessing tendencies increase. This can be avoided by increasing the options.
4. Arrange the options and items alphabetically, numerically, or by magnitude. This is one way to help the examinees since they can maximize their time by not searching for the correct answers, especially if there are many options.
5. Limit the number of items within each set. Ideally, the minimum is five items and the maximum is ten per set.
6. Place the shorter responses in column B. This time-saving practice allows the students to read the longer items first in column A and then search quickly through the shorter options to locate the correct alternative.
7. Provide complete directions. Directions should stipulate whether options can be used only once or more than once. They should also instruct the students on how to respond. The instructions should also clarify what columns A and B are about.


8. Place the list of options on the same page as the list of items. Time is wasted if students have to flip pages to search through all the options to locate the correct ones. Additionally, some students may overlook that there are still some options on the next page.
9. Avoid specific determiners and trivial information that can help the students find the correct response without any effort on their part. The use of "none of the above" as an option is recommended if it is the only correct answer.
10. Clearly explain the basis on which the match is to be made.
Suggestions for Measuring Complex Objectives with Matching-Type Tests

1. Match examples with terminologies. Perhaps this is the most direct and simplest method of increasing the thought content of matching tests, provided that the example has not been taught before.
2. Use novel pictorial materials.

Types of Matching-Type Tests

1. Perfect Matching happens when an option is the only answer to one of the items in column A.

   Column A (Provinces)    Column B (Tourist Destinations)
   1. Albay                a. Luneta Park
   2. Bohol                b. Mt. Mayon
   3. Banaue               c. Chocolate Hills
   4. Pangasinan           d. Rice Terraces
   5. Manila               e. Hundred Islands
                           f. Pagsanjan Falls
                           g. Malolos Church
2. Imperfect Matching happens when an option is the answer to more than one item in the column.

   Column A (Tourist Destinations)   Column B (Provinces)
   1. Luneta Park                    a. Albay
   2. Mines View Park                b. Manila
   3. Chocolate Hills                c. Banaue
   4. Camp John Hay                  d. Bohol
   5. Intramuros                     e. Pangasinan
                                     f. Baguio
                                     g. Palawan

3. Sequencing Matching requires the examinees to arrange things, steps, or events in chronological order.
Example:
Arrange the steps in historical research.
1. ...
2. Gathering of source materials
3. Problem formulation
4. Criticizing source materials
5. Interpreting historical data

4. Multiple Matching requires the examinees to match the items in column A to column B, then match the answers from column B to column C, and further match the answers from column C to column D.
Example:
Match the provinces listed in Column A with their capital towns in Column B and with the tourist spots they are known for in Column C.

   Column A           Column B             Column C
   1. Bohol           a. Tagaytay City     I. Underground River
   2. Camarines Sur   b. Tagbilaran City   II. Taal Volcano
   3. Batangas        c. Puerto Princesa   III. Water Sports
   4. Palawan         d. Pili              IV. Chocolate Hills
                      e. Batangas City     V. Mayon Volcano
L"i Table 7.3
.l ,-i i;
hr
CHECKLIST FOR WRIT!NG MATCh!ING-TYPE TEST
Yes

,'|l""" vr#fu." the student clear, explic-it instlucli-ot-s? -


Ar" th" t*Pon=
,/
J. I>LrllEli)LrttvrLLr_:l::_ ]H
L ^!L l:-!- L-^+.^,ann r +^
!v
'1E ontrirrs)
4. Hlr lruLll IIJLJ vlrvY!!rr , 'k

eomPlex? Are the i'esPonses


. r- _-^l
-L^*r)
5llllpledllu>r,Y|...- :
6. Are the responses arranggS!-i! a !)'steryaLc croerr --
L^+t- li-+. .ol:#irralrr froe n{ rirleq?
F

8. Do both lists appear on the sqfs iegg

Completion or Short-Answer Test

This format of testing requires the students to complete a sentence with the correct word or phrase.

Advantages of the Completion Test

1. Construction of the completion test is relatively easy. This type of test typically measures a low level of complexity; however, constructing completion or short-answer tests that measure the higher levels of Bloom's Taxonomy is difficult.
2. Guessing is eliminated. Because this test simply requires recall, it is not possible for students to recognize correct options. Students with incomplete information may recognize the correct answer from the options of a multiple-choice test but not in a completion test. Since guessing is not an option, the students may be unable to supply the correct answer.
3. Item sampling is improved. It takes less time to read and answer than do multiple-choice tests; hence, the teacher can give more items to measure the students' knowledge.

Disadvantages of the Completion Test

1. Completion tests are difficult to score.
2. They typically measure rote memory. Because they are usually restricted to short words, items tend to measure the recall of specific facts, names, places, and events and rarely measure more complex outcomes.

Pointers on Writing Completion and Short-Item Tests

1. ... desired; for ...
2. ...
3. ...
4. ...
5. ...
6. ...
7. ...
8. ...
9. ...
10. ... of items, restrict the number of ...
11. Answers should be equal in length.
12. ...
Types of Completion Tests

1. Identification Test
Example:
It refers to the process of summing up the results of tests, giving ________.

2. Enumeration
List down the three branches of the Philippine government.
1.
2.
3.

3. Filling the Blanks
Bayang Magiliw
Perlas ng ________
Alab ng puso
Sa ________ mo'y buhay.
________ hinirang
Duyan ka ng ________
Sa manlulupig
'Di ka ________

4. Analogy
Father : Son, Mother : ________

Cloze Test

A cloze or cloze deletion test is an exercise, test, or assessment consisting of a portion of a text with certain words removed (cloze text), and the students are asked to replace the missing words. The cloze test requires the ability to understand context and vocabulary to be able to identify the correct words or type of words that belong in the deleted passages of the text.

Words may be deleted from the text in question either mechanically (every nth word) or selectively, depending on what aspect the test intends to give emphasis to.

Example:
Today I went to the ________ and bought some milk and eggs. I knew it was going to rain, but I forgot to take my ________, and ended up getting wet on the way ________.

Essay Test

This type of test differs from the completion test in degree rather than in kind. Essays usually allow greater freedom of response to questions and require more writing.

Advantages of Essay Tests

1. Essays give students freedom to respond within broad limits. Essay examinations allow students to express their ideas with relatively few restraints.
2. Guessing is eliminated. Essays involve recall, and there are no options to select from. The student is expected to supply rather than select the proper response.
3. Essay items are practical for testing a small number of students. However, as the number of students increases, the advantage of essay tests decreases.
4. Essay tests reduce assembling time. Less time is required for typing, mimeographing, and assembling. If only a few questions are asked, the teacher can just write them on the board.
5. They can measure divergent thinking. Divergent thinking is indicated by unconventional, creative, relatively free responses. Because they allow great freedom in answering, the opportunity to obtain unusual responses is increased.
Disadvantages and Limitations of Essay Tests
1. Essays are difficult to score objectively because students have greater freedom of expression. Also, long, complex essays are more difficult to score than shorter, more limited ones.
2. Extended essays measure only limited aspects of student knowledge. Because extended responses require time to write, only a few questions can be given to the students. Thus, essay tests sample limited content and are not always a fair measure of what the student actually knows. This problem is less serious when responses are limited and the number of items is increased.
3. Essay questions are time-consuming for teachers and students. Students often spend much time answering only one or two extended essay questions, which may severely limit sampling their knowledge. Teachers, in the meantime, also devote much time reading lengthy responses. However, if time limits are kept constant, Coffman has shown that objectivity is improved by increasing the number of items rather than by allowing greater freedom in responding to fewer items.


4. Essays eliminate guessing but not bluffing. Poorly prepared students desperately attempt to get a passing grade by answering even if their responses are not related to the questions asked.
5. Most essays require little more than rote memory. In practice, very few essays require originality, and most emphasize lengthy enumeration of memorized facts or trends.
6. Essay tests place a premium on writing. Students can read much more rapidly than they can write. Much of the time allotted to answering an essay question is devoted to the mechanics of writing, and there is relatively little time to think about content. On more objectively scored tests, little time is spent in writing and more time is used in thinking about responses. If the teacher does not attempt to measure writing skills, a multiple-choice test will probably provide more information per unit of time than an essay.

The Use of Essay Tests to Facilitate Learning

There are varied ideas for and against essay testing. Below are the favorable comments:

1. Essay tests raise the quality of writing.
2. They teach students to organize, outline, and summarize assignments rather than simply look for the facts, dates, and details expected in T-F or multiple-choice tests.

Likewise, there are arguments against essay tests:

1. Essay tests do not allow students to revise and rewrite their work since time is limited.
2. The teachers' over-attention to details can destroy the themes of essays.

Situations that Suggest the Use of Essay Questions

1. If the test objectives specify that students have to write, recall, or supply information, an essay examination may be necessary. Objectives that suggest extended student responses also suggest the use of essays.
2. When the class size is small, the teacher can afford to spend more time reading essay responses. Reading extended responses for large classes may prove to be excessively time-consuming.
3. Since multiple-choice tests are difficult to construct but easy to score, they are considered more practical when the test can be reused. If a test can be used only once, an essay examination may be more convenient than a multiple-choice one.

Twenty Categories of Essay Questions (Carter)
e:. if their
1. Seiective recall (basis given)
:r'actice, Name the congressmen who ciied whiie still holding office.
: lengthy 2. Evaiuating recall ibasis given)
Name the three most important senators who worked on the
ai much improvement of quality eciucation.
lcned tc
>;r-iting 3. Comparison of rruo ihings (in general)
): ror€ Compare norm- and criierion-referenced evaiuation.
cre tirne 4. Comparison of tv",o things (on a single basis)
a,tempt
Compare the effects of extrer::e scores on the mean and the
:rovide
rnedian.
5. Causes or effects
Why did insurgency rapidly develop in the Philippines during
Martial Lai,r,,?
al'e the
6. Decisicn (for cr against)
Should there be a constitution amendment? Defend your
answer.
pnents 1
::.ed in 7 . Explanation of the us€ or exact meaning of some phrase or statements
in a passage 1

What rioes "be salt in the earth" mean? )


lt:'',vork
8. summary of one unit cf ihe test or some articles thai were read
Sun'lmarize. in not more than one page, the advantages and
limitations of essav rests.
rnes of
9. Analysis
Does nationai testing improve studenis learning?
10. Statement of reieiionship
s:pply Why dces validity imply reiiabriiiiy but noi the reverse?
'esthat
11. Illustraiions or exainples {the student's own) of the principles in
science, ccnstruciion in ianguago. ct other subject matter
i nore
r large 72. Ciassificaticn
To what group of compot-lnds do sucrose and iactose beicng?
_ccore,
Explain your answer.
sed If 13. Application cf rules or principles in r:ew siiuations
:l10re using the same principles on tesi construction. de'reiop a higher
order thinking question ir"r Science.

14. Discussion
   Discuss the Learning Theory of Piaget.
15. Statement of aim
   Why did the author end the story that way?
16. Criticism
   Critique the government's fiscal management.
17. Outline
   Outline the principal steps on how to conduct logical research.
18. Reorganization of facts
   Trace the development of the industrial preparation in contrast to the laboratory preparation of nitric acid.
19. Formulation of new questions (problems and questions raised)
   What else must you know in order to understand the matter under consideration?
20. New methods or procedures
   Devise another procedure for testing students who are unable to read. Discuss your method fully.

*" Pointers on Writing Essay Questions


l
The difficulty in scoring essays rests on the teacher's failure to
precisely specifu what they want their sfudents to do. Some teachers are
not sure of what they want; others know but fail to communicate this to
sfudents. In either case, the ambiguity of the essay questions and the lack
of scoring standards reduce the effectiveness of essay tests. The following
suggestions should be useful in writing essay questions:
1. Specify limitations. Tell the students the length of the desired
response and the weight each question will be given when
determining the scores. This includes the time to be spent on
each item, the approximate number of words per item, maximum
points per item, and maximum amount of space to be devoted
for each item.
2. Structurethetask. The instructions should clearly specify the task.
Most essay questions are so vague that the instructor's intent is
lost.
3. Make each item relatiuely short and increase the number of
items. The more items there are, the greater chance there is of
the sampling of knowledge.



4. Give all the students the same essay questions if content is relevant. Sometimes, teachers give the students the opportunity to deal with one or two items from a set of essay questions.
5. Ask questions in a direct manner. Avoid deviousness and pedanticism when framing questions.

Suggestions for Rating or Scoring Essay Questions

Essay tests may be scored in four ways: the analytic or point system, the universal or holistic approach, the sorting method, and demerits.

The analytic or point system is useful in scoring a large number of limited-response essay questions. Teachers using this method decide how much weight each question will have and inform students of the number of points necessary for a perfect score on each question. The student's total score is the sum of the points awarded to each answer.

The universal or holistic approach gives general impressions to all the answers to the questions. The student's total score is based on the overall quality of all the answers to all the questions.

The sorting method is more appropriate than the point system for rating longer essays. Rather than examining every sentence or main idea to determine how many points the students should receive, the best papers are placed on one pile, the worst on the other, and the intermediates in between them. After the papers are initially sorted, they are reread to ensure homogeneity. Maximizing the differences between or among groups and reducing the differences within groups should be the goal of the teacher.

Teachers using demerits deduct points for inconsistencies in the students' answers. This usually happens when the essay responses are expanded. As the students further discuss their answers, they become more prone to committing contradicting statements.

In the grading of essay responses, one must observe the following suggestions:
1. Remove names from papers before grading.
2. Read and evaluate each student's answer to the same question before going to the next.
3. Keep the scores of previously read items out of sight when evaluating the remaining questions.
4. Decide on a policy for dealing with irrelevant responses.
5. If possible, reread or have other teachers read the papers before returning them to the students.
6. Check the scoring key against actual responses.

7. Be consistent when grading.
8. The mechanics of expression should be judged separately from what the student writes.
9. If possible, have two independent readings of the test and use the average as the final score.
10. Provide comments and correct errors.
11. Set realistic standards.

Factors to Consider in Assigning Point Values

1. Time needed to respond
2. Complexity of the questions
3. Emphasis placed on the content

Other Considerations in Grading Essay Responses
1. Use appropriate methods to minimize biases.
2. Pay attention only to the significant aspects of the answer.
3. Avoid letting personal idiosyncrasies affect the grading.
4. Apply uniform standards in grading all the papers.
".J $tili Poptrlar?
Why Are Essag Tests
1.. Essay tests can in,.Sirectly rneasure attitudes, values, and opinions'
2. Good. sssay tasis are rnore easily prepared ihan
good objective tests'
3. Essay tesis are gcoci l*arning experiences'

Oral Question

Oral questioning provides immediate feedback to both pupils and teachers. It is frequently used by the teacher in the classroom. Every day, students are asked questions by their teachers. Although the answers to these questions may not be used by the teacher to help assign a final course grade, both teachers and students can, if they wish, profitably use the results obtained to improve the teaching-learning situation.

The oral question is a variation of the essay test. Although more frequently used in the final examinations of college students than as a measurement device for schoolchildren, it deserves brief mention because of its utility in the classroom, especially in the primary grades.

Both oral and essay examinations have some common advantages and limitations:

Advantages
1. Both permit the examiner to determine how well the student can synthesize and organize his/her ideas and express himself/herself.
2. Both are not dependent, as the multiple-choice test is, on the ability of the pupil to recognize the correct answer; both require that the students know and are able to supply the correct answer.
3. Both permit free responses by the students.

Limitations
1. Both provide only a limited sampling of content.
2. Both have lower rater reliability.

Rubrics for Essay Test

Setting Criteria

The following suggestions are helpful in developing rubrics for essay tests:

1. The descriptions must focus on the important aspects of an essay response.
2. The type of rating (holistic or analytic) must match the purpose of the assessment.
3. The descriptions of the criteria must be directly observable.
4. Ensure that the criteria are understood by the students, parents, and others.
5. The characteristics and traits used in the scale should be clearly and specifically defined.
6. Minimize errors in scoring. These errors may be generosity errors, central tendency errors, and severity errors.
7. Make the scoring system feasible.
Setting Performance Levels

To be able to differentiate the levels of gradation or performance levels, a rating scale is used. A rating scale indicates the degree to which a particular dimension is present. It provides a way to record and communicate qualitatively different levels of performance.

Rating scales can be qualitative, or numerical and quantitative combined. A numerical and quantitative scale uses numbers on a continuum to indicate different levels of performance. On the other hand, a qualitative scale uses verbal descriptions to indicate student performance.

Table 7.4
SAMPLE RUBRIC FOR PERSUASIVE ESSAY TESTS

Criterion: Claim was made.
4: I made a claim and explained why it was controversial.
3: I made a claim but did not explain why it was controversial.
2: I made a claim but it was confusing or unclear.
1: I did not make a claim.

Criterion: Reasons were given in support of the claim.
4: I gave clear, accurate reasons in support of the claim.
3: I gave reasons in support of the claim, but overlooked important reasons.
2: I gave one or two reasons which did not support the claim well. I gave irrelevant or confusing reasons.
1: I did not give convincing reasons in support of the claim.

Criterion: Reasons were considered against the claim.
4: I thoroughly discussed the reasons against the claim.
3: I discussed reasons against the claim, but left out important reasons and/or did not explain why the claim still stands.
2: I acknowledged that there were reasons against the claim but did not explain them.
1: I did not give reasons against the claim.


(Table 7.4. Sample Rubric for Persuasive Essay Tests continued)

Criterion: Organization
4: My writing was well organized; had a compelling opening; had a strong, informative body; had a satisfying conclusion; and had an appropriate paragraph format.
3: My writing had a clear beginning, middle, and end. I used an appropriate paragraph format.
2: My writing was mostly organized but got off topic at times. It had several paragraph format errors.
1: My writing was aimless and disorganized.

Criterion: Word choice
4: The words I used were striking, natural, varied, and vivid.
3: I mostly used routine words.
2: My words were dull and uninspired and sounded like I was trying too hard to impress.
1: I used the same words over and over. Some words were a bit confusing.

Criterion: Sentence fluency
4: My sentences were clear, complete, and of different lengths.
3: I wrote well-constructed but routine sentences.
2: My sentences were often flat or awkward. There were also some run-ons and fragments.
1: I had many run-ons, fragments, and awkward phrasing, making my essay hard to read.

Criterion: Conventions
4: I used the first person. I used correct sentence structure, grammar, punctuation, and spelling.
3: My spelling was correct on common words. There were some errors in grammar and punctuation.
2: Frequent errors were distracting to the reader but did not interfere with the meaning of my paper.
1: My errors in grammar, capitalization, spelling, and punctuation made my paper hard to read.

Table 7.5
CHECKLIST FOR WRITING ESSAY QUESTIONS

Factors                                                                 Yes/No
1. Are the questions restricted to measuring objectives that would not be assessed more efficiently by other test formats?
2. Does each question relate to some instructional objective?
3. Do the questions establish a framework to guide the students to the expected answer?

(Table 7.5. Checklist for Writing Essay Questions continued)

4. Are the questions novel? Do they challenge the student?
5. Is the weight given to each question related to:
   a. its difficulty?
   b. the time allowed for the student to respond?
   c. the complexity of the task?
6. Are all the students expected to answer the same questions?

Review Exercises

1. Formulate sample items for the following types of test:
   a. Multiple Choice with Stimulus Material
   b. Cluster True or False
   c. Sequencing Matching
   d. Filling the Blanks Completion
   e. Essay
2. Ask for or borrow sample test questions from your Field Study teachers, and analyze how the questions were formulated.
Item Analysis

OBJECTIVES

At the end of the chapter, the learners are expected to:


• point out the importance of item analysis in the improvement of constructed test items;
• analyze test items in terms of discrimination index and difficulty index;
• identify the various ways of interpreting test scores; and
• interpret correctly presented data in tabular and graphic forms.
Item analysis is applicable to test formats that require students to choose the correct or best answer from the given choices. Therefore, the multiple-choice test is most amenable to item analysis. Examinations that greatly influence the students' course grades, like midterms and final examinations, or serve other important decision-making functions, should be free from deceptive, ambiguous items as much as possible. Unfortunately, it is often difficult to recognize problems before the test has been administered.

Item analysis procedures allow teachers to discover items that are ambiguous, irrelevant, too easy or difficult, and non-discriminating. The same procedures can also enhance the technical quality of an examination by pointing out options that are non-functional and should be improved or eliminated. Another purpose of item analysis is to facilitate classroom instruction. In diagnostic testing, for example, item analysis identifies the areas of a student's weakness, providing information for specific remediation.

Procedures in Item Analysis

Test items must be evaluated thoroughly to determine their merits before they can be classified as good.

In computerized procedures, values come out in a matter of seconds but take a longer time to compute when done manually. Nonetheless, in the event that computers are not available, here are the guidelines in doing item analysis manually:
1. Check and score the answer sheets.
2. Arrange the papers from highest to lowest.
3. Remove the 27% highest and 27% lowest of the papers, leaving the remaining 46% intact (Sax, 1989). Select the top third and bottom third for comparison (Bergman, n.d.). Divide the papers into two groups using the median as reference. In case of a tie between the papers in the median, assign each paper into the lower and higher groups by chance (Downie, 1984).
4. Count the number of students in the upper 27% who responded to each option. Enter the data in the third column. Do the same for the lower 27% (see Table 8.1).
5. Subtract the number of students in the lower group who selected the correct alternative (marked with an asterisk) from the number of students who responded correctly in the upper 27%. Place the value in the fifth column.
6. Divide the difference found in the fifth column by the number of students in the upper 27% or lower 27%. The value obtained is the index of discrimination and is entered in the sixth column.
7. Count the number of students in the middle 46% who made the correct responses and place the value in the seventh column.
8. Add the number of individuals who responded correctly to the item (upper 27%, lower 27%, and middle 46%) and enter the data in the eighth column.
9. Divide the value in column 8 by N, the total number of examinees, and enter the value in the last column. This is the proportion of students who responded correctly; the quotient is the index of difficulty.

Below is an example of an item analysis for item number one with five options. Letter B was the correct answer, and 75 students took the examination.

Table 8.1
SAMPLE OF AN ITEM ANALYSIS

Item  Option  Upper 27%  Lower 27%  Difference  Index of        Middle 46%  Total  Index of
                                                Discrimination                     Difficulty
1     A       2          3                                      3
      *B      13         7          6           0.3             25          45     0.6
      C       3          9                                      2
      D       1          0                                      4
      E       1          1                                      1
Table 8.2
TABLE FOR INTERPRETING THE INDEX OF DISCRIMINATION
(D REGULAR VALUES)

-0.59 - -0.20    Questionable Item
-0.21 - 0.20     Not Discriminating Item
0.21 - 0.60      Moderately Discriminating Item
0.61 and above   Discriminating Item

Table 8.3
TABLE FOR INTERPRETING THE INDEX OF DIFFICULTY
(P REGULAR VALUES)

0.41 - 0.60      Moderately Difficult Item
0.61 - 0.80      Easy Item
0.81 and above   Very Easy Item
Computation of the D Value (Index of Discrimination)

1. Determine the difference between the number of students who got the correct answer from the upper 27% and the number of students who got the correct answer from the lower 27%.
2. Divide the difference by 27% of the total number of examinees.

   D value = Difference / (27% of N), where N is the total number of examinees

   D = (13 - 7) / 20
     = 6 / 20
     = 0.3

The computed D value is 0.3, which is interpreted as a discriminating item.
Computation of the P Value (Index of Difficulty)

1. Determine the total number of students who got the correct answer on the item from the upper and lower 27% and from the middle 46%.
2. Divide this sum by the total number of students who took the examination.

   P value = R / N, where R is the total number of students who got the right answer and N is the total number of students who took the examination

   P = 45 / 75
     = 0.6

The computed P value is 0.6, which is interpreted as an average item.
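Both computations are easy to automate. A minimal sketch in Python (the helper names are illustrative, not from this book), reproducing the values for the sample item in Table 8.1:

    def discrimination_index(upper_correct, lower_correct, group_size):
        """D = (correct in upper 27% - correct in lower 27%) / size of one group."""
        return (upper_correct - lower_correct) / group_size

    def difficulty_index(total_correct, n_examinees):
        """P = proportion of all examinees who answered correctly."""
        return total_correct / n_examinees

    n = 75                    # total examinees
    group = round(0.27 * n)   # 27% of 75 is about 20
    print(discrimination_index(13, 7, group))  # (13 - 7) / 20 = 0.3
    print(difficulty_index(45, n))             # 45 / 75 = 0.6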

Table 8.4
DECISION TABLE

Difficulty             Discrimination               Decision
Difficult              Not Discriminating           Improbable-Discard
                       Moderately Discriminating    May need revision
                       Discriminating               Accept
Moderately Difficult   Not Discriminating           Needs revision
                       Moderately Discriminating    May need revision
                       Discriminating               Accept
Easy                   Not Discriminating           Discard
                       Moderately Discriminating    Needs revision
                       Discriminating               Accept

Interpreting Test Scores

A test is usually scored by marking each item separately and finding the sum of the marks. The scores obtained when tests are corrected are called crude or raw scores since they tell only the number of points for which the pupil received credit: a mere numerical description of the student's performance on a test. The said score may or may not represent his/her ability in the subject nor his/her capacity to learn the subject. Tests that involve an element of chance, such as true or false tests, may be scored by a correction formula, which takes guessing into account. The most common way of scoring is to count one point for each correct item. It is frequently necessary to weigh scores to avoid allowing undue credit for certain items. The weighing may be done by either dividing or multiplying the score on one section of a test by a number calculated to give the desired weight to a particular exercise.
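As a small sketch of this weighting idea (the section names, scores, and weights below are hypothetical, not from this book):

    # Each section's raw score is multiplied by the weight chosen for it.
    sections = {
        "multiple choice": (38, 1.0),  # (raw score, weight)
        "essay":           (18, 2.0),  # essay answers count double
    }
    weighted_total = sum(score * weight for score, weight in sections.values())
    print(weighted_total)  # 38 * 1.0 + 18 * 2.0 = 74.0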
When giving meanings to crude or raw scores, a transmutation of the scores into a numerical grade is usually used. Crude scores remain meaningless unless interpreted.

Transmutation presumes that the highest score in the test is the perfect score, which happens very rarely. When transmuting the raw scores into a grade, equal cuts of grade or score intervals must be observed. Below is the formula to compute for a transmuted grade:

Transmuted Grade (TG) = (Score / Total Number of Items) x 50 + 50

Example:
In a 75-item exam, a student gets a raw score of 30. What is his/her transmuted grade?

TG = (30 / 75) x 50 + 50
   = 0.4 x 50 + 50
   = 70
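The transmutation formula is a one-liner in code; a quick sketch:

    def transmuted_grade(score, total_items):
        """TG = score / total number of items x 50 + 50."""
        return score / total_items * 50 + 50

    print(transmuted_grade(30, 75))  # 0.4 x 50 + 50 = 70.0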
Another way of interpreting test results is through ranking. Ranking is considered the first step in test score interpretation. It is the arrangement of the scores in order of magnitude or size. Determining the rank simply involves listing the scores from the highest to the lowest.

Example:
       Scores    Rank
(1)    78        1
(2)    74        2
(3)    73        3.5
(4)    73        3.5
(5)    68        5
(6)    66        6
(7)    65        8
(8)    65        8
(9)    65        8
(10)   57        10

The ranks assigned to the given scores are on the second column. The highest score, 78, received a ranking of 1, and the next is 74, ranked 2. The next two scores, both 73, are ranked 3.5: each receives the average of the ranks that would be assigned to them (enclosed in parentheses) if they were not tied. If unequal, their ranks would be 3 and 4; tied, the average of 3 and 4 becomes their rank. Similarly, the scores ranked 8 receive the average of their supposed ranks if they were not tied. The last score, 57, ranks 10, or N.
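Tie-averaged ranking, as illustrated above, can be sketched as follows:

    def rank_scores(scores):
        """Rank from highest to lowest, giving tied scores the average
        of the ranks they would occupy if they were not tied."""
        ordered = sorted(scores, reverse=True)
        average_rank = {}
        for value in set(ordered):
            positions = [i + 1 for i, v in enumerate(ordered) if v == value]
            average_rank[value] = sum(positions) / len(positions)
        return [average_rank[s] for s in scores]

    scores = [78, 74, 73, 73, 68, 66, 65, 65, 65, 57]
    print(rank_scores(scores))
    # [1.0, 2.0, 3.5, 3.5, 5.0, 6.0, 8.0, 8.0, 8.0, 10.0]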
Ranking has its uses and limitations. As a means of comparing scores within a group, ranking is a simple, readily obtained measure that has some value. It is useful in checking a group's performance in a test. On the other hand, many educators feel that the use of ranking tends to overemphasize individual competition to a greater extent than the practice of assigning letter marks. Moreover, ranking fails to indicate the extent or amount of difference in the achievement of the students being compared.

Graphing or tabulating data aids in easing the interpretation of test scores. At times, teachers want to present scores in tabular or graphic forms. Tabular or graphic forms give a fairly clear picture of how the students performed. Frequency distributions, histograms (bar graphs), frequency polygons, or cumulative frequency or percent curves can make data interpretable. Tabulating, however, may result in an appreciable sacrifice of accuracy.

Percentiles and percentile ranks may also be employed in interpreting test scores. A percentile is defined as a point below which a certain percent of the scores fall. A percentile rank gives a person's relative position, or the percent of the students' scores falling below his/her obtained score. To illustrate the computation of percentiles, consider the data in Table 8.5.

Percentiles should not be confused with percentages. Percentiles have the advantage of being easy to compute and interpret. In explaining a national norm percentile rank to a student, one would say, for example, "Your percentile rank is 80, meaning you have obtained a score higher than 80 out of every 100 students in a representative sample of fourth year students in the nation." If it is an 85 percentile rank, one can interpret that the student belongs to the upper 15% of those who took the examination.
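Under the definition used above (the percent of scores falling below the obtained score), a percentile rank can be computed as in this sketch; note that some textbooks instead add half of the frequency at the score itself:

    def percentile_rank(score, all_scores):
        """Percent of scores in the group falling below the given score."""
        below = sum(1 for s in all_scores if s < score)
        return 100 * below / len(all_scores)

    scores = [78, 74, 73, 73, 68, 66, 65, 65, 65, 57]
    print(percentile_rank(74, scores))  # 80.0: 74 is higher than 8 of 10 scores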
The most powerful means of interpreting test scores is by statistical analysis, such as measures of central tendency, measures of variability, and measures of correlation.
Table 8.5
DATA ILLUSTRATING THE COMPUTATION OF PERCENTILES

"dr#lAssessment
ziWtf
'
r'*$,ra
of Student Learning 1: Cognitive Learning
rt

Raw Score   %    f   cf   Percentile Rank
17          85   1   48   95
16          80   2   47   92
15          75   1   45   89
14          70   2   44   86
13          65   4   42   80
12          60   4   38   72
11          55   5   34   63
10          50   7   29   51
9           45   6   22   38
8           40   5   16   27
7           35   3   11   19
6           30   2   8    14
5           25   2   6    10
4           20   1   4    7
3           15   1   3    5
2           10   1   2    3
1           5    1   1    1
0           0    0   0    0
Review Exercise

With a complete set of test papers with multiple-choice items that has been administered to the students, find the index of difficulty and index of discrimination of each item and the entire test.
Grading and Reporting Practices

OBJECTIVES

At the end of the chapter, the learners are expected to:


• recognize the necessity for reporting schemes and discuss how reports can be useful for different groups of people;
• discuss different systems for reporting and identify the advantages and disadvantages of each; and
• identify desirable reporting practices.

""il1t
Examinations and marks have always been closely related in both
the
teachers' and sfudents' minds. Periodic and monthly examinations
usually
ri have_ a major part to play in the students, final grades.
students, parents, teachers, administratorq prosp ective employers,
student admission officers all need information from t-he schoolto
and
aisist ihem
in decision making. sfudents, primarily should receive such information
through daily interaction with their teachers, although formalperiodic
reports
can also help students when making decisions.
Many educators and sfudents believe that recording of grades motivates
sfudents to learn things they would not otherwise learn.-
Marks rcfer to those systems that use summary symbols of some type.
If only a single symbol is assigned, it should ,"pro"nt achievement in the
subject matter, not attitude and status rather ihun gro*th. It is also
an
objective judgment of one person (student) by another"(teacher).
Below are
some questions regarding r4arking:
1. Are marks an effective conveyor of information about the sfudent,s
achievement?
2. Can anyone achieve the mark helshe wishes if helshe tries hard
enough?
3. Are marks the means or an end to a student's achievement?
4. Is there any correlation between school marks received at
one level
of education and marks received at another?

138
5. Do marks bear any relation to success in life?
6. Do marking practices provide a justifiable introduction to competitive
adult life?
"ti/t{ ;,
Grading methods communicate the teachers' evaluative appraisals of the students' academic achievement and performance. In the process of grading, teachers convert different types of descriptive information and various measures of the students' academic performance into a single grade or mark that summarizes their assessment of the students' accomplishments (Guskey & Bailey, 2007).

Grading, on the other hand, is the process by which a teacher assesses student learning through classroom tests and assignments, the context in which good teachers establish that process, and the dialogue that surrounds grades and defines their meaning to various audiences (Walvoord & Anderson, 1998, p. 1).

Nature of Grades/Marks

Grades are the teacher's judgment on the performance of students based on certain criteria. Although meant to be objective, grades can be subjective from time to time and are also relative from one school to another, from one teacher to another, and from one student to another. Several variables, such as periodical examinations, class standing, and projects, are considered in grading the students.
rs. and
st them Tunctions o/ 6 rndes/rVorks
mation
reports The school can never escape relative judgments about sfudents. Grades
or marks have certain functions to perform and these functions are served
)tivates best by an unbiased grade. Such functions are as follows:
1. To help guide the students and the parents with respect to future
,e tgpe.
educational plans;
i in the
ilso an 2. To help the school decide upon a student's readiness to enroll in
certain selective programs or courses;
low are
3. To help higher educationallevels appraise an applicant's acceptability
for the program being offered; and
udent's
4. To help a potential employer decide on the suitability of the student
for certain jobs that depend on academic skills.
s hard
Grades or marks are necessary for guiding the sfudent in his/her school
work, understanding his/her personal trials and tribulations, helping him/
re level her plan his/her educational and occupational fufure, and cooperating with

Chapter 12: Grading and Reporting Practices.. ,,'


l3g
'a
future school officials and employers in selecting who may
-t suitably be
instructed or employed. Grades are, at best, the iaw materLi. ,.r, formulating
educational and vocational plans.

Purposes of Grades/Marks

Grades may serve the following purposes:
1. Administration - admission, selection or grouping, promotion, retention, dismissal, fitness for graduation
2. Guidance - diagnostic readiness, prediction of success, remediation, validation, career guidance, psychological assessment
3. Motivation - skill mastery, goal-setting, positive mobilizer

Generally, the major purposes of grading and reporting are as follows:

1. To communicate the achievement status of the students to their parents and other stakeholders;
2. To provide information that can be used by the students for self-evaluation;
3. To select, identify, or group students for certain educational programs;
4. To provide incentives for students to learn;
5. To evaluate the effectiveness of instructional programs; and
6. To provide evidence of the students' lack of effort or inappropriate responsibility.

Types of Grades/Marks

1. Percentage System (75-100). It is often used as it is easily and universally understood. It implies a precision of judgment that is hardly attainable by most measuring instruments.
2. Pass or Fail. This is good for survey subjects or vocational courses and higher-level courses in exact disciplines like math and physics. The most common justification for the P-F system is that it encourages students to take the courses they would otherwise not take because of a fear of lowering their grade point average (GPA) or general average. This system also reduces student anxiety, gives students greater control over the allocation of their study time, and shifts the students' efforts from grade-getting to learning.
3. Five-Point Multiple Scale. This enables one to categorize students. Examples of this grading type are A, B+, B, C+, and C, or 1, 1.25, 1.5, 1.75, 2.0, etc.
4. Dual System (any combination of the previous three types of grades/marks). It may be a letter grade or percentage system for academic subjects and P-F for non-academic and vocational courses.
5. Checklists and Rating Scales. These rating scales or checklists should include the major cognitive (psychomotor) objectives for each subject matter area. Checklists or rating scales on affective objectives should also be developed. This type of grading is appropriate for early elementary grades.
Advantages of Grades/Marks

Some of the advantages of marks are as follows:
1. Marks are the least time-consuming and most efficient method of reporting.
2. Symbols can be converted to numbers. Thus, general average grades can be computed. General average grades are useful in many types of selection, placement, and classification. They are the best predictors of success in future education.
3. Marks relate not only to the chances of obtaining good grades in future courses; they also relate somewhat to achievements beyond school.
4. Marks serve as an overall summary index. Students want and need to know how they did on each separate subject, as well as how they performed on the whole.

Disadvantages of Grades/Marks

1. Marks are inaccurate measures of competence and are not used in a comparable way from school to school, or even from instructor to instructor. For one teacher, B may just be an average grade, while for another, B may already be above average.
2. Marks are not related to the important objectives of the school.
3. Marks are not enough as a means of communication to the students' homes.
4. Marks produce side effects detrimental to the welfare of the child. The side effects usually are:
   a. the debilitating impact of failure;
   b. excessive competitiveness;
   c. cheating; and
   d. a distortion of educational values, which makes marks, instead of learning, the important criterion of success.

Common Grading-Related Problems

1. Grade Inflation. Many argue that more students receive failing grades not because of poor performance but because of the grading system. To minimize failures, many teachers practice grade inflation.
2. Questionable Grading Practices. Grading practices are clearly matters of opinion; no strong evidence confirms their value nor the harm they cause.

Averaging Scores to Determine a Grade

If the purpose of grading is to provide an accurate description of what students have learned, then averaging scores from past assessments with measures of current performance is inappropriate. Relying on data from past assessments can give the wrong information regarding the student's progress in the learning process.

Below are some guidelines for deciding what evidence or combination of evidence represents the truest, most appropriate summary of the students' achievements and performance:
1. Give priority to the most recent evidence. Scores from assessments at the end of the marking period are typically more indicative of what the students have learned than those gathered at the beginning.
2. Give priority to the most comprehensive evidence. If certain sources of evidence represent cumulative summaries of the knowledge and skills the students have acquired, these should hold the greatest weight in determining the students' grades.
3. Give priority to evidence related to the most important learning goals or standards. Rank the evidence gathered in terms of its importance to the course's learning goals or standards.
Use of Zeroes

Most educators believe that a zero is not an accurate reflection of the students' learning; instead, zeroes are typically assigned to punish students for not displaying appropriate effort or responsibility. A single zero has a profound effect when combined with the practice of averaging, as it drastically changes the average.
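The distortion is easy to see with a small illustration (the scores are hypothetical):

    scores = [85, 90, 88]
    with_zero = scores + [0]
    print(sum(scores) / len(scores))        # about 87.7: average without the zero
    print(sum(with_zero) / len(with_zero))  # 65.75: one zero pulls the average far down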

Lowering Grades Because of Behavioral Infractions

Behavioral infractions cannot be considered indicators of achievement or performance since they do not reflect product criteria.



Guidelines for Effective Grading

1. Describe the grading procedures to the students at the beginning of instruction.
2. Clarify to the students that the course grade will be based on achievement.
3. Explain how other factors, such as effort, work habits, and punctuality in the submission of requirements, will be treated.
4. Relate the grading procedures to the learning outcomes.
5. Obtain valid evidence as bases for assigning grades.
6. Take precautions to prevent cheating on tests, reports, and other types of evaluation.
7. Return and review all the test results as soon as possible.
8. Properly weigh the various types of achievement included in the grade.
9. Do not lower an achievement grade for tardiness, lack of effort, or misbehavior.
10. Avoid bias, and when in doubt, review the evidence. If still in doubt, assign the higher grade.
Criteria for a Marking-Reporting System

1. Is the system based on a clear statement of educational objectives?
2. Is the system understood both by those making the reports and those to whom they are communicated?
3. Does the system desirably affect the students' learning?
4. Is the system detailed enough to be diagnostic but still compact enough to be operational?
5. Does the system involve two-way communication between the students' homes and the school?
6. Does the system promote desirable public relations?
7. Is the system reasonably economical in terms of teacher time?
Modes of Computing Final Grades

There are two ways of computing the final grade: the averaging and cumulative grading systems.

The averaging grading system treats the student's performance in each grading period independently. Table 11.1 shows a student's academic performance per quarter in English. With 78, 86, 82, and 84 in each of the four quarters, his final grade based on averaging is 82.5.
Table 11.1
ILLUSTRATION OF THE AVERAGING GRADING SYSTEM

Subject        Q1   Q2   Q3   Q4   Final Grade
English        78   86   82   84   82.5
Math           76   77   83   88   81
Science        79   85   93   87   86
Filipino       84   88   88   91   87.75
MAKABAYAN      86   89   86   93   88.5
General Average                    85.15

The cumulative grading system assumes that the performance of students is, to a large extent, affected by their past performances. Thus, the cumulative grading system gets a certain percentage from the previous grade and adds it to the tentative present grade of a certain period. The student's final grade for each subject is the grade in the last grading period. Table 11.2 shows the cumulative computation in the second, third, and fourth quarters.

Table 11.2
ILLUSTRATION OF THE CUMULATIVE GRADING SYSTEM
(Tentative grades are in parentheses.)

Subject        Q1   Q2        Q3        Q4        Final Grade
English        78   (83) 82   (79) 80   (83) 82   82
Math           76   (79) 78   (85) 83   (84) 84   84
Science        79   (84) 83   (81) 82   (86) 85   85
Filipino       84   (80) 81   (82) 82   (87) 86   86
MAKABAYAN      86   (92) 90   (88) 89   (91) 90   90
General Average

The tentative grade for the second quarter is 83, and the final grade for the second quarter is 82. To compute for 82:

78 x .30 = 23.4
83 x .70 = 58.1
23.4 + 58.1 = 81.5 or 82
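A sketch contrasting the two systems in code, using the English grades from Tables 11.1 and 11.2 (the 30%/70% split and the rounding follow the worked example; the function name is illustrative):

    quarters = [78, 86, 82, 84]           # Table 11.1, English
    print(sum(quarters) / len(quarters))  # averaging system: 82.5

    def cumulative_finals(first_quarter, tentatives, w_prev=0.30, w_now=0.70):
        """Each quarter's final grade: 30% of the previous final grade
        plus 70% of the current tentative grade, rounded off."""
        final = first_quarter
        finals = [final]
        for tentative in tentatives:
            final = round(final * w_prev + tentative * w_now)
            finals.append(final)
        return finals

    # Table 11.2, English: Q1 final is 78; Q2-Q4 tentative grades are 83, 79, 83.
    print(cumulative_finals(78, [83, 79, 83]))  # [78, 82, 80, 82]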
