Assessment of Learning 1 - Gabuyo

Chapter 1 covers the fundamental concepts of assessment in education, including definitions and distinctions between assessment, evaluation, measurement, and testing. It emphasizes the importance of various types of assessments, such as formative, summative, diagnostic, and placement assessments, and their roles in enhancing the teaching-learning process. The chapter also discusses methods of interpreting assessment results, including norm-referenced and criterion-referenced interpretations.


CHAPTER 1

BASIC CONCEPTS IN ASSESSMENT

Learning Outcomes

At the end of the chapter, the students should be able to:

1. Define the terms: assessment, evaluation, measurement, test, testing, formative assessment, placement assessment, diagnostic assessment, summative assessment, traditional assessment, portfolio assessment, and performance assessment;
2. Discriminate among the different purposes of assessment;
3. Differentiate the different types of assessment;
4. Identify and discuss the general principles of assessment;
5. Discuss the different guidelines for effective student assessment; and
6. Differentiate norm-referenced from criterion-referenced interpretation.

INTRODUCTION

Assessment of Learning focuses on the development and utilization of assessment tools to improve the teaching-learning process. It emphasizes the use of testing for measuring knowledge, comprehension and other thinking skills. As part of the overall evaluation process, we need specifically to find out if the learners are actually learning (changing their behavior) as a result of the teaching. This will show us whether the teaching has been effective, which is ultimately the most important issue. Assessment is a means of finding out what learning is taking place. As well as specific knowledge and skills, we might also like to measure other changes in behavior related to personality, social skills, interests, and learning styles, among others.

There is a lot of debate about how to assess learning, and especially about how to evaluate performance. Our objectives give us guidance on what to assess, because they are written in terms of what the learners should be able to do. Based on these objectives, it is very useful to identify all the activities and skills which the learners will carry out, the conditions under which they will perform these tasks and activities, the possible results which might be obtained, and the standards by which their performance will be measured.

The assessment itself can be done in different ways:

1. Ask the learners to recall facts or principles (e.g., "What is x?").
2. Ask the learner to apply given or recalled facts or principles (e.g., "How does x help you solve this problem?").
3. Ask the learner to select and apply facts and principles to solve a given problem (e.g., "What do you know that will help you solve this problem?").
4. Ask the learner to formulate and solve her own problem by selecting, generating, and applying facts and principles (e.g., "What do I see as the problem here, and how can I reach a satisfying solution?").
5. Ask the learner to perform tasks that show mastery of the learning outcomes.

Once again, we need to stress the importance of participation, and this is especially important in assessment and evaluation. Learners should be actively involved in the development of learning objectives and, as much as possible, in their own assessment. In many education systems, assessment is used as a tool for sorting students for selection purposes (progression to a higher level of education, higher rewards, among others). Assessment where students are compared with others is known as norm-referencing. It is much better if learners are aware of what they need to learn and what they have learned, so they can set their own targets and monitor their own progress. Of course, teachers and trainers should advise the learners and guide them in order to help them learn; this is the key role of the teacher. Assessment of learners in relation to a particular target or level of performance is called criterion-referencing.

DIFFERENT TERMINOLOGIES: ASSESSMENT, TESTING, MEASUREMENT AND EVALUATION

Assessment, measurement and evaluation mean many different things. These terms are sometimes used interchangeably in the field of education. In this section, we shall point out the fundamental differences among the terms assessment, testing, measurement and evaluation.

The term assessment refers to the different components and activities of different schools. An assessment can be used to measure student learning and to compare student learning with the learning goals of an academic program. Assessment is defined as an act or process of collecting and interpreting information about student learning. Another source expands this statement by adding that it is a systematic process of gathering, interpreting, and using this information about student learning. It is a very powerful tool for educational improvement. It focuses on individual students or groups of individuals and on the academic program of a certain educational institution. There are different purposes of assessment, such as to provide feedback to students and to serve as a diagnostic tool for instruction. For these purposes, assessment usually answers the questions, "Was the instruction effective?" and "Did the students achieve the intended learning outcomes?"

Assessment is a general term that includes the different ways teachers gather information in the classroom: information that helps teachers understand their students, information that is used to plan and monitor classroom instruction, information that is used to create a worthwhile classroom culture, and information that is used for testing and grading. The most common form of assessment is giving a test. Since a test is a form of assessment, it also answers the question, "How does the individual student perform?" A test is a formal and systematic instrument, usually a paper-and-pencil procedure, designed to assess the quality, ability, skill or knowledge of the students by giving a set of questions in a uniform manner. A test is one of the many types of assessment procedures used to gather information about the performance of students. Hence, testing is one of the different methods used to measure the level of performance or achievement of the learners. Testing also refers to the administration, scoring, and interpretation of the procedures designed to get information about the extent of the performance of the students. Oral questioning, observations, projects, performances and portfolios are the other assessment processes that will be discussed later in detail.

Measurement is a process of quantifying or assigning numbers to an individual's intelligence, personality, attitudes and values, and to the achievement of the students. In other words, it expresses the assessment data in terms of numerical values and answers the question, "How much?" Common examples of measurement occur when a teacher gives scores to the tests of the students: Renzel got 23 correct answers out of 25 items in the Mathematics test; Princess Mae got 95% in her English first grading periodic test; Ronnick scored 88% in his laboratory test in Biology. In these examples, numerical values are used to represent the performance of the students in different subjects.
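The arithmetic behind these score examples can be sketched briefly. This is a minimal illustration, not part of the text: the function name is ours, and only Renzel's figures (23 of 25) come from the examples above.

```python
# A minimal sketch of measurement: assigning a numerical value to performance.

def percent_score(raw: int, total: int) -> float:
    """Express a raw test score as a percentage of the total items."""
    return round(100 * raw / total, 1)

# Renzel: 23 correct answers out of 25 items in the Mathematics test
print(percent_score(23, 25))  # 92.0
```

Note that the number itself is only measurement; judging whether 92% is "good enough" is evaluation, discussed below.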

After collecting the assessment data, the teacher will use these data to make decisions or judgments about the performance of the students in a certain instruction.

Evaluation refers to the process of judging the quality of what is good and what is desirable. It is the comparison of data to a set of standards or learning criteria for the purpose of judging worth or quality. For example, consider judging the quality of an essay written by students about their opinion of the first State of the Nation Address of Pres. Benigno C. Aquino: evaluation occurs after the assessment data have been collected and synthesized, because it is only at that time that the teacher is in a position to make a judgment about the performance of the students. Teachers evaluate how well, or to what extent, the students attained the instructional outcomes.

TYPES OF ASSESSMENT PROCEDURES

Classroom assessment procedures can be classified according to the nature of


assessment, format of assessment, use in the classroom instruction and methods of
interpreting the results (Gronlund and Linn,2000).

Nature of Assessment

1. Maximum Performance
It is used to determine what individuals can do when performing at their
best. Examples of instruments using maximum performance are aptitude
tests and achievement tests.
2. Typical Performance
It is used to determine what individuals will do under natural conditions. Examples of instruments using typical performance are attitude, interest, and personality inventories; observational techniques; and peer appraisal.

Format of Assessment

1. Fixed-choice Test
An assessment used to measure knowledge and skills effectively and
efficiently. Standard multiple-choice test is an example of instrument used
in fixed-choice test.
2. Complex-performance Assessment
An assessment procedure used to measure the performance of the learner in context and on problems valued in their own right. Examples of instruments used in complex-performance assessment are hands-on laboratory experiments, projects, essays, and oral presentations.

Role of Assessment in Classroom Instruction

"Teaching and learning are reciprocal processes that depend on and affect one another" (Swearingen, 2002; Kellough, 1999). The assessment component of the instructional process deals with the learning progress of the students and the teacher's effectiveness in imparting knowledge to the students.

Assessment enhances learning in the instructional process because its results provide feedback to both students and teachers. The information obtained from the assessment is used to evaluate the teaching methodologies and strategies of the teacher. It is also used to make teaching decisions. The results of assessment are used to diagnose the learning problems of the students.

Planning for assessment should start when the teacher plans the instruction, that is, from the writing of learning outcomes up to the time when the teacher assesses the extent to which the learning outcomes have been achieved. Teachers make decisions from the beginning of instruction up to the end of instruction. There are four roles of assessment in the instructional process. The first is placement assessment, a type of assessment given at the beginning of instruction. The second and third are formative assessment and diagnostic assessment, given during instruction, and the last is summative assessment, given at the end of instruction.

1. Beginning of Instruction
Placement assessment, according to Gronlund, Linn, and Miller (2009), is concerned with entry performance and typically focuses on these questions: Does the learner possess the knowledge and skills needed to begin the planned instruction? To what extent has the learner already developed the understanding and skills that are the goals of the planned objectives? To what extent do the student's interests, work habits, and personality indicate that one mode of instruction might be better than another? The purpose of placement assessment is to determine the prerequisite skills, the degree of mastery of the course objectives, and the best mode of learning.
2. During Instruction
During the instructional process, the main concern of a classroom teacher is to monitor the learning progress of the students. The teacher should assess whether students achieved the intended learning outcomes set for a particular lesson. If the students achieved the planned learning outcomes, the teacher should provide feedback to reinforce learning. Recent research shows that providing feedback to students is the most significant strategy for moving students forward in their learning. Garrison and Ehringhaus (2007) stressed in their paper "Formative and Summative Assessment in the Classroom" that feedback provides students with an understanding of what they are doing well, with links to classroom learning. If the outcomes are not achieved, the teacher will give group or individual remediation. During this process we shall consider formative assessment and diagnostic assessment.
Formative assessment is a type of assessment used to monitor the learning progress of the students during instruction. The purposes of formative assessment are the following: to provide immediate feedback to both student and teacher regarding the successes and failures of learning; to identify the learning errors that are in need of correction; to provide teachers with information on how to modify instruction; and to improve learning and instruction.
Diagnostic assessment is a type of assessment given at the beginning of instruction or during instruction. It aims to identify the strengths and weaknesses of the students regarding the topics to be discussed. The purposes of diagnostic assessment are to determine the level of competence of the students; to identify the students who already have knowledge about the lesson; to determine the causes of learning problems that cannot be revealed by formative assessment; and to formulate a plan for remedial action.
3. End of Instruction
Summative assessment is a type of assessment usually given at the end of a course or unit. The purposes of summative assessment are to determine the extent to which the instructional objectives have been met; to certify student mastery of the intended learning outcomes, as well as to use the results for assigning grades; to provide information for judging the appropriateness of the instructional objectives; and to determine the effectiveness of instruction.

Methods of Interpreting the Results

1. Norm-referenced Interpretation
It is used to describe student performance according to relative position in some known group. This method of interpretation assumes that the level of performance of students will not vary much from one class to another. Example: a student ranks 5th in a classroom group of 40.
2. Criterion-referenced Interpretation
It is used to describe students' performance according to a specified domain of clearly defined learning tasks. This method of interpretation is used when the teacher wants to determine how well the students have learned specific knowledge or skills in a certain course or subject matter. Examples: divides three-digit whole numbers correctly and accurately; multiplies binomial terms correctly.
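The contrast between the two interpretations can be sketched in code. This is a minimal illustration: the class scores, class size, and 75% mastery cutoff are hypothetical values chosen for the example, not taken from the text.

```python
# Norm-referenced: where does this score sit relative to the group?
def norm_referenced_rank(score: int, class_scores: list) -> int:
    """Rank relative to the group: 1 = highest score."""
    return 1 + sum(1 for s in class_scores if s > score)

# Criterion-referenced: does this score meet a fixed standard,
# regardless of how classmates performed?
def criterion_referenced_mastery(score: int, total: int, cutoff: float = 0.75) -> bool:
    return score / total >= cutoff

scores = [40, 38, 35, 33, 30, 28, 25]
print(norm_referenced_rank(35, scores))      # 3 (two classmates scored higher)
print(criterion_referenced_mastery(35, 50))  # False (70% is below the 75% cutoff)
```

The same raw score of 35 thus looks strong under a norm-referenced reading (3rd in the group) yet falls short under this criterion-referenced reading, which is exactly the distinction the text draws.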

There are different ways of describing classroom tests and other assessment procedures. The following summary of the different types of assessment procedures was adapted and modified from Gronlund, Linn, and Miller (2009).

Classification: Nature of assessment
1. Maximum Performance. Used to determine what individuals can do when performing at their best. Example instruments: aptitude tests, achievement tests.
2. Typical Performance. Used to determine what individuals will do under natural conditions. Example instruments: attitude, interest, and personality inventories; observational techniques; peer appraisal.

Classification: Form of assessment
1. Fixed-choice Test. An assessment used to measure knowledge and skills effectively and efficiently. Example instrument: standard multiple-choice test.
2. Complex-performance Assessment. An assessment procedure used to measure the performance of the learner in contexts and on problems valued in their own right. Example instruments: hands-on laboratory experiments, projects, essays, oral presentations.

Classification: Use in classroom instruction
1. Placement. An assessment procedure used to determine the learner's prerequisite skills, degree of mastery of the course goals, and/or best modes of learning. Example instruments: readiness tests, aptitude tests, pretests on course objectives, self-report inventories, observational techniques.
2. Formative. An assessment procedure used to determine the learner's learning progress, provide feedback to reinforce learning, and correct learning errors. Example instruments: teacher-made tests, custom-made tests from textbook publishers, observational techniques.
3. Diagnostic. An assessment procedure used to determine the causes of the learner's persistent learning difficulties, such as intellectual, physical, emotional, and environmental difficulties. Example instruments: published diagnostic tests, teacher-made diagnostic tests, observational techniques.
4. Summative. An assessment procedure used to determine end-of-course achievement for assigning grades or certifying mastery of objectives. Example instruments: teacher-made survey tests, performance rating scales, product scales.

Classification: Methods of interpreting results
1. Criterion-referenced. Used to describe student performance according to a specified domain of clearly defined learning tasks (e.g., multiplies three-digit whole numbers correctly and accurately). Example instruments: teacher-made tests, custom-made tests from textbook publishers, observational techniques.
2. Norm-referenced. Used to describe student performance according to relative position in some known group (e.g., ranks 5th in a classroom group of 40). Example instruments: standardized aptitude and achievement tests, teacher-made survey tests, interest inventories, adjustment inventories.
OTHER TYPES OF TEST

Other descriptive terms are used to describe tests in contrasting pairs, such as non-standardized versus standardized tests; objective versus subjective tests; supply versus fixed-response tests; individual versus group tests; mastery versus survey tests; and speed versus power tests.

Non-standardized Test versus Standardized Test

1. Non-standardized test is a type of test developed by classroom teachers.
2. Standardized test is a type of test developed by test specialists. It is administered, scored and interpreted under standard conditions.
Objective Test versus Subjective Test

1. Objective test is a type of test in which two or more evaluators give an examinee the same score.
2. Subjective test is a type of test in which the scores are influenced by the judgment of the evaluators, meaning there is no single correct answer.

Supply Test versus Fixed-response Test

1. Supply test is a type of test that requires the examinees to supply an answer, such
as an essay test item or completion or short answer test item.
2. Fixed-response test is a type of test that requires the examinees to select an
answer from a given option such as multiple-choice test, matching type of test, or
true/ false test.

Individual Test versus Group Test

1. Individual test is a type of test administered to a student on a one-on-one basis, usually through oral questioning.
2. Group test is a type of test administered to a group of individuals or students.

Mastery Test versus Survey Test

1. Mastery test is a type of achievement test that measures the degree of mastery of a limited set of learning outcomes, using criterion-referenced interpretation of the results.
2. Survey test is a type of test that measures students' general achievement over a broad range of learning outcomes, using norm-referenced interpretation of the results.

Speed Test versus Power Test

1. Speed test is designed to measure the number of items an individual can complete within a certain period of time.
2. Power test is designed to measure the level of performance rather than the speed of response. It contains test items that are arranged in increasing order of difficulty.

MODES OF ASSESSMENT

There are different modes of assessment used by a classroom teacher to assess the learning progress of the students. These are traditional assessment, alternative assessment, performance-based assessment, and portfolio assessment.

Traditional Assessment

It is a type of assessment in which the students choose their answers from a given list of choices. Examples of this type of assessment are the multiple-choice test, standard true/false test, matching-type test, and fill-in-the-blank test. In traditional assessment, students are expected to recognize that there is only one correct or best answer to the question asked.

Alternative Assessment

An assessment in which students create an original response to answer a certain question. Students respond to a question using their own ideas, in their own words. Examples of alternative assessment are short-answer questions, essays, oral presentations, exhibitions, demonstrations, performance assessments, and portfolios. Other activities included in this type are teacher observation and student self-assessment.

Components of Alternative Assessment

a. Assessment is based on authentic tasks that demonstrate students' ability to accomplish communication goals.
b. The teacher and students focus on communication, not on right and wrong answers.
c. Students help the teacher set the criteria for successful completion of communication tasks.
d. Students have opportunities to assess themselves and their peers.

Performance-based Assessment

Performance assessment (Mueller, 2010) is an assessment in which students are asked to perform real-world tasks that demonstrate meaningful application of essential knowledge and skills.

It is a direct measure of student performance because the tasks are designed to incorporate contexts, problems and solution strategies that students would use in real life. It focuses on processes and rationales. There is no single correct answer; instead, students are led to craft polished, thorough and justifiable responses, performances and products. It also involves long-range projects, exhibits, and performances that are linked to the curriculum. In this kind of assessment, the teacher is an important collaborator in creating tasks, as well as in developing guidelines for scoring and interpretation.

GUIDELINES FOR EFFECTIVE STUDENT ASSESSMENT

Improvement of student learning is the main purpose of classroom assessment.


This can be done if assessment is integrated with good instruction and is guided by
certain principles. Gronlund (1998) provided the general guidelines for using student
assessment effectively.

1. Effective assessment requires a clear concept of all intended learning outcomes.
2. Effective assessment requires that a variety of assessment procedures be used.
3. Effective assessment requires that the instructional relevance of the procedures be considered.
4. Effective assessment requires an adequate sample of student performance.
5. Effective assessment requires that the procedures be fair to everyone.
6. Effective assessment requires specification of the criteria for judging successful performance.
7. Effective assessment requires feedback to students that emphasizes strengths of performance and weaknesses to be corrected.
8. Effective assessment must be supported by a comprehensive grading and reporting system.

CHAPTER 2

Learning Outcomes

At the end of this chapter, the students should be able to:

1. Define the following terms: goals, objectives, educational objectives/instructional objectives, specific/behavioral objectives, general/expressive objectives, learning outcomes, learning activity, observable outcome, unobservable outcome, cognitive domain, affective domain, psychomotor domain, and educational taxonomy;
2. Write specific and general objectives;
3. Identify learning outcomes and learning activities;
4. Determine observable outcomes and non-observable learning outcomes;
5. Identify the different levels of Bloom’s taxonomy;
6. Identify the different levels of Krathwohl's 2001 revised cognitive domain;
7. Write specific cognitive outcomes;
8. Write specific affective outcomes;
9. Write specific psychomotor outcomes;

10. Write measurable and observable learning outcomes.

INTRODUCTION

Instructional goals and objectives play very important roles in both the instructional process and the assessment process. They serve as guides for the teaching and learning process, communicate the purpose of instruction to other stakeholders, and provide guidelines for assessing the performance of the students. Assessing the learning outcomes of the students is one of the most critical functions of teachers. A classroom teacher should classify the objectives of the lesson because this is very important for the selection of the teaching method and the selection of the instructional materials. The instructional material should be appropriate for the lesson so that the teacher can motivate the students properly. The objectives can be classified according to the learning outcomes of the lesson that will be discussed.

PURPOSES OF INSTRUCTIONAL GOALS AND OBJECTIVES

The purposes of instructional goals and objectives are as follows:

1. They provide direction for the instructional process by clarifying the intended learning outcomes.
2. They convey instructional intent to other stakeholders such as students, parents, school officials, and the public.
3. They provide a basis for assessing the performance of the students by describing the performance to be measured.

GOALS AND OBJECTIVES

The terms goals and objectives are two different concepts, but they are related to each other. Goals and objectives are very important, especially when you want to achieve something for the students in any classroom activity. Goals can never be accomplished without objectives, and without objectives you cannot accomplish what you want to achieve. Below are the different distinctions between goals and objectives.
Goals                                      Objectives
Broad                                      Narrow
General intentions                         Precise
Intangible                                 Tangible
Abstract (less structured)                 Concrete
Cannot be validated as is                  Can be validated
Long-term aims: what you want              Short-term aims: what you want
to accomplish                              to achieve
Hard to quantify or put in a timeline      Must be given a timeline to accomplish,
                                           to be more effective
Goals, General Educational Program Objectives, and Instructional Objectives

Goals. A broad statement of very general educational outcomes that does not include a specific level of performance. Goals tend to change infrequently and in response to societal pressure, e.g., learn problem-solving skills; develop high-level thinking skills; appreciate the beauty of an art; be creative; and be competent in the basic skills in the area of grammar.

General Educational Program Objectives. More narrowly defined statements of educational outcomes that apply to specific educational programs; formulated on an annual basis; developed by program coordinators, principals, and other school administrators.

Instructional Objectives. Specific statements of learner behaviors or outcomes that are expected to be exhibited by the students after completing a unit of instruction. A unit of instruction may mean a two-week lesson on polynomials, a one-week lesson on "parallelism after correlatives," or one class period on "katangian ng wika." Examples of instructional objectives: at the end of the lesson the students should be able to add fractions with 100% accuracy; the students should be able to dissect a frog following the correct procedures.

Typical Problems Encountered When Writing Objectives


1. Too broad or complex. The objective is too broad in scope or is actually more than one objective. Solution: simplify or break it apart.
2. False or missing behavior, condition, or degree. The objective does not list the correct behavior, condition, and/or degree, or it is missing. Solution: be more specific; make sure the behavior, condition, and degree are included.
3. False given. The objective describes instruction, not conditions. Solution: simplify; include ONLY the ABCDs.
4. False performance. No true overt, observable performance is listed. Solution: describe the behavior you must observe.

To avoid different problems encountered in writing objectives, let us discuss the


components of instructional objectives and other terms related to constructing a good
instructional objective.

Four Main Things That an Objective Should Specify

1. Audience
Who? Who are the specific people the objectives are aimed at?
2. Observable Behavior
What? What do you expect them to be able to do? This should be an overt, observable behavior, even if the actual behavior is covert or mental in nature. If you cannot see it, hear it, touch it, taste it, or smell it, you cannot be sure your audience really learned it.

3. Special Conditions
The third component of an instructional objective is the special conditions under which the behavior must be displayed by the students. How? Under what circumstances will the learning occur? What will the student be given, or already be expected to know, to accomplish the learning?
4. Stating Criterion Level
The fourth component of an instructional objective is the criterion level. The criterion level of acceptable performance specifies how many of the items the students must answer correctly for the teacher to attain his or her objectives. How much? Must a specific set of criteria be met? Do you want total mastery (100%), or do you want students to respond correctly 90% of the time, among others? A common (and totally non-scientific) setting is 90% of the time.
Always remember that the criterion level need not be specified as a percentage of the number of items correctly answered. It can also be stated as: the number of items correct; the number of consecutive items correct; essential features included (in the case of an essay question or paper); completion within a specified time; or completion with a certain degree of accuracy.
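The alternative ways of stating a criterion level listed above can be sketched as simple checks. This is an illustrative sketch only: the function names, the 90% default, and the sample figures are hypothetical, not from the text.

```python
# Criterion stated as a percentage of items correct (default 90%).
def meets_percentage(correct: int, items: int, criterion: float = 0.90) -> bool:
    return correct / items >= criterion

# Criterion stated as a required number of items correct.
def meets_item_count(correct: int, required: int) -> bool:
    return correct >= required

# Criterion stated as completion within a specified time.
def meets_time_limit(seconds_taken: float, limit_seconds: float) -> bool:
    return seconds_taken <= limit_seconds

print(meets_percentage(18, 20))   # True: 18 of 20 is exactly 90%
print(meets_item_count(7, 8))     # False: 7 correct, 8 required
print(meets_time_limit(110, 120)) # True: finished within the time limit
```

Whichever form is chosen, the point is the same: the objective states in advance an explicit, checkable standard of acceptable performance.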

Types of Educational Objectives

Educational objectives are also known as instructional objectives. There are two types of educational objectives: specific or behavioral objectives and general or expressive objectives (Kubiszyn and Borich, 2007).

1. Specific or Behavioral Objectives. Precise statements of the behavior to be exhibited by the students, the criterion by which mastery of the objective will be judged, and the conditions under which the behavior must be demonstrated.

Examples of behavioral objectives: (1) Multiply three-digit numbers with 95% accuracy. (2) List the months of the year in proper order from memory, with 100% accuracy. (3) Encode 30 words per minute, with at most three (3) errors, using a computer. These statements specify specific educational outcomes.

2. General or Expressive Objectives. Statements wherein the behaviors are not usually specified and the criterion performance level is not stated. They only describe the experience or educational activity to be done. The outcomes of the activity are expressed not in specific terms but in general terms such as understand, interpret or analyze. Examples of expressive objectives: (1) Interpret the novel The Lion, the Witch and the Wardrobe; (2) Visit the Manila Zoo and discuss what was of interest; (3) Understand the concept of the normal distribution. These examples specify only the activity or experience and a broad educational outcome.
An instructional objective is a clear and concise statement of the skill or skills
that students are expected to perform or exhibit after discussing a certain lesson
or unit of instruction. The components of an instructional objective are observable
behaviors, the special conditions under which the behavior must be exhibited, and
the performance level considered sufficient to demonstrate mastery.
When a teacher develops instructional objectives, he or she must include an
action verb that specifies learning outcomes. Some educators and education
students often confuse learning outcomes with learning activities. An
activity that implies a certain product or end result of instructional objectives is
called a learning outcome. If you write an instructional objective as a means or
process of attaining the end product, then it is considered a learning activity.
Hence, revise it so that the product of the activity is stated.

Examples:

Learning Activities: Study, Read, Watch, Listen
Learning Outcomes: Identify, Write, Recall, List

TYPES OF LEARNING OUTCOMES

After developing learning outcomes, the next step the teacher must consider is to
identify whether each learning outcome is stated as a measurable and observable
behavior or as a non-measurable and non-observable behavior. If a learning outcome
is measurable, then it is observable; therefore, always state the learning outcomes
as observable behavior. Teachers should always develop instructional objectives that
are specific, measurable statements of the outcomes of instruction that indicate
whether the instructional intents have been achieved (Kubiszyn, 2007). The
following are examples of verbs in terms of observable learning outcomes and
non-observable learning outcomes.
Observable Learning Outcomes Non-observable Learning Outcomes

Draw Understand

Build Appreciate

List Value
Recite Know

Add Be familiar

Examples of observable learning outcomes:

1. Recite the names of the characters in the story MISERY by Anton Chekhov.
2. Add two-digit numbers with 100% accuracy.
3. Circle the initial sounds of words.
4. Change the battery of an engine.
5. List the steps of hypothesis testing in order.

Examples of non-observable learning outcomes:

1. Be familiar with the constitutional provisions relevant to agrarian reforms.


2. Understand the process of evaporation.
3. Enjoy speaking Spanish.
4. Appreciate the beauty of an art.
5. Know the concept of normal distribution.
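The observable/non-observable distinction can be mechanized as a simple lookup on the outcome's leading verb. A minimal Python sketch, using only the verb lists given in this section (the helper name `classify_outcome` and its limited verb coverage are illustrative assumptions):

```python
# Sketch: flag a learning outcome as observable or not by its leading verb.
# Verb sets are taken from the examples in this section; real use would
# need a much fuller verb inventory.

OBSERVABLE = {"draw", "build", "list", "recite", "add", "circle", "change"}
NON_OBSERVABLE = {"understand", "appreciate", "value", "know", "enjoy"}

def classify_outcome(outcome):
    """Return 'observable', 'non-observable', or 'unknown' from the first verb."""
    text = outcome.lower()
    if text.startswith("be familiar"):
        return "non-observable"
    verb = text.split()[0]
    if verb in OBSERVABLE:
        return "observable"
    if verb in NON_OBSERVABLE:
        return "non-observable"
    return "unknown"

print(classify_outcome("List the steps of hypothesis testing in order."))
print(classify_outcome("Understand the process of evaporation."))
```

The first call reports "observable" and the second "non-observable", mirroring the two example lists above.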

Types of Learning Outcomes to Consider

Below is a list of learning outcomes classified as learning objectives. The
more specific outcomes should not be regarded as exclusive; they are merely suggestive
of categories to be considered (Gronlund, Linn, and Miller, 2009).

1. Knowledge
1.1 Terminology
1.2 Specific facts
1.3 Concepts and principles
1.4 Methods and procedures
2. Understanding
2.1 Concepts and principles
2.2 Methods and procedures
2.3 Written materials, graph, maps, and numerical data
2.4 Problem situations
3. Application
3.1 Factual information
3.2 Concepts and principles
3.3 Methods and procedures
3.4 Problem solving skills
4. Thinking skills
4.1 Critical thinking
4.2 Scientific thinking
5. General skills
5.1 Laboratory skills
5.2 Performance skills
5.3 Communication skills
5.4 Computational skills
5.5 Social skills
6. Attitudes

6.1 Social attitudes
6.2 Scientific attitudes
7. Interests
7.1 Personal interests
7.2 Educational interests
7.3 Vocational interests
8. Appreciations
8.1 Literature, art, and music
8.2 Social and scientific achievements
9. Adjustments
9.1 Social adjustments
9.2 Emotional adjustments

TAXONOMY OF EDUCATIONAL OBJECTIVES

Taxonomy of Educational Objectives is a useful guide for developing a


comprehensive list of instructional objectives. A taxonomy is primarily useful in
identifying the types of learning outcomes that should be considered when developing a
comprehensive list of objectives for classroom instruction.

Benjamin S. Bloom (1948, as cited by Gabuyo, 2011), a well-known psychologist
and educator, took the initiative to lead in formulating and classifying the goals and
objectives of the educational process. Three domains of educational activities were
determined: the cognitive domain, the affective domain, and the psychomotor domain.

1. Cognitive Domain calls for outcomes of mental activity such as memorizing,
reading, problem solving, analyzing, synthesizing, and drawing conclusions.
2. Affective Domain describes learning objectives that emphasize a feeling tone, an
emotion, or a degree of acceptance or rejection. Affective objectives vary from simple
attention to selected phenomena to complex but internally consistent qualities of
character and conscience. We found a large number of such objectives in the
literature expressed as interests, attitudes, appreciations, values, and emotional sets
or biases (Krathwohl et al., 1964 as cited by Esmane, 2011). It refers to the person's
awareness and internalization of objects and stimulations; it focuses on the emotions of
the learners.
3. Psychomotor Domain is characterized by the progressive levels of behaviors from
observation to mastery of physical skills (Simpson, 1972 as cited by Esmane,
2011). This includes physical movements, coordination, and use of the
motor-skill areas. Development of these skills requires practice and is measured
in terms of speed, precision, distance, procedures, or techniques in execution. It
focuses on the physical and kinesthetic skills of the learner.

Bloom and other educators working on the cognitive domain established and completed
its hierarchy of educational objectives in 1956; it was called Bloom's Taxonomy
of the cognitive domain. The affective and psychomotor domains were developed later by
other groups of educators.

CRITERIA FOR SELECTING APPROPRIATE OBJECTIVES

1. The objectives should include all important outcomes of the course or subject
matter.
2. The objectives should be in harmony with the content standards of the state and
with the general goals of the school.
3. The objectives should be in harmony with sound principles of learning.
4. The objectives should be realistic in terms of the abilities of the students, the
time, and the available facilities.

CLEAR STATEMENT OF INSTRUCTIONAL OBJECTIVES

To obtain a clear statement of instructional objectives you should define the


objectives in two steps. First, state the general objectives of instruction as intended
learning outcomes. Second, list under each objective a sample of the specific types of
performance that the students should be able to demonstrate when they have achieved
the objectives (Gronlund, 2000 as cited by Gronlund, Linn, and Miller, 2009). This
procedure should result in the statement of general objectives and specific learning
outcomes such as the given example below.

1. Understands the scientific principles

1.1 Describes the principle in their own words.
1.2 Identifies examples of the principle.
1.3 States reasonable hypotheses based on the principle.
1.4 Uses the principle in solving problems.
1.5 Distinguishes between two given principles.
1.6 Explains the relationships between the given principles.

In this example, the expected learning outcome is concerned with the students'
understanding of scientific principles. As the verb understands is expressed as a
general objective, the statement immediately starts with the word understands. It is
very important to start immediately with the verb so that the statement focuses only
on the intended outcomes; there is no need to add a phrase such as "the students
should be able to demonstrate that they understand," and the like. Beneath the
general objective are statements of specific learning outcomes that start immediately
with verbs that are specific and indicate definite, observable responses, that is, ones
that can be seen and assessed by outside observers or evaluators. The verbs describes,
identifies, states, uses, distinguishes, and explains are specific learning outcomes
stated in terms of observable student performance.

MATCHING TEST ITEMS TO INSTRUCTIONAL OBJECTIVES

When constructing test items, always remember that they should match the
instructional objectives. The learning outcomes and the learning conditions specified in
the test items should match the learning outcomes and conditions stated in the
objectives. If a test developer follows this basic rule, then the test is ensured to have
content validity. Content validity is very important because your goal is to assess the
achievement of the students; hence, do not ask tricky questions. To measure the
achievement of the students, ask them to demonstrate mastery of the skills
specified in the conditions of the instructional objectives.

Consider the following examples of matching test items to instructional
objectives, as the author adapted and modified Kubiszyn and Borich's (2007)
instructional objectives. In the examples below, items 1 and 3 have learning outcomes
that match the test items, while items 2, 4, and 5 have learning outcomes that do not
match the test items.
1. Objective: Discriminate fact from opinion in Pres. Benigno C. Aquino's first
State of the Nation Address (SONA).
Test item: From the State of the Nation Address (SONA) speech of President
Aquino, give five (5) examples of facts and five (5) examples of opinions.
Match? Yes

2. Objective: Recall the names and capitals of all the different provinces of
Regions I and II in the Philippines.
Test item: List the names and capitals of two provinces in Region I and three
provinces in Region II.
Match? No

3. Objective: List the main events in chronological order, after reading the short
story A VENDETTA by Guy de Maupassant.
Test item: From the short story A VENDETTA by Guy de Maupassant, list the
main events in chronological order.
Match? Yes

4. Objective: Circle the nouns and pronouns from the given list of words.
Test item: Give five examples of pronouns and five examples of verbs.
Match? No

5. Objective: Make a freehand drawing of Region II using your map as a guide.
Test item: Without using your map, draw the map of Region II.
Match? No

BLOOM’S REVISED TAXONOMY

Lorin Anderson, a former student of Bloom, together with Krathwohl, revised
Bloom's taxonomy of the cognitive domain in the mid-90s in order to fit the more
outcome-focused modern education objectives. There are two major changes: (1) the
names of the six categories were changed from nouns to active verbs, and (2) the order
of the last two highest levels was rearranged, as shown in the figure below. This new
taxonomy reflects a more active form of thinking and is perhaps more accurate.

1956 → 2001

Evaluation → Creating
Synthesis → Evaluating
Analysis → Analyzing
Application → Applying
Comprehension → Understanding
Knowledge → Remembering
(Noun to Verb Form)

Changes to Bloom's Taxonomy

*Adapted with written permission from Leslie Owen Wilson's Curriculum Pages,
Beyond Bloom – A New Version of the Cognitive Taxonomy.
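The two changes can be summarized in a small data sketch: the noun-to-verb renaming of the four lower levels, and the swap of the top two. The structure below is only an illustrative encoding of the figure above, not part of the source text:

```python
# The six levels of each taxonomy, ordered lowest to highest.
BLOOM_1956 = ["Knowledge", "Comprehension", "Application",
              "Analysis", "Synthesis", "Evaluation"]
REVISED_2001 = ["Remembering", "Understanding", "Applying",
                "Analyzing", "Evaluating", "Creating"]

# Change 1: the four lower levels are straight noun-to-verb renamings.
RENAMED = dict(zip(BLOOM_1956[:4], REVISED_2001[:4]))

# Change 2: the top two levels swap places. Synthesis (old rank 5)
# corresponds to Creating (new rank 6), and Evaluation (old rank 6)
# to Evaluating (new rank 5).
SWAPPED = {"Synthesis": "Creating", "Evaluation": "Evaluating"}

print(RENAMED["Knowledge"])   # Remembering
print(SWAPPED["Synthesis"])   # Creating
```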
Bloom’s Taxonomy in 1956 Anderson/Krathwolh’s Revision in 2001
1. Knowledge: Remembering or 1. Remembering: Objectives written on the
retrieving previously learned remembering level (lowest cognitive level):
material. Retrieving, recalling, or recognizing knowledge
Examples of verbs that relate to from memory. Remembering is when memory
this function are: identify, relate, is used to produce definitions, facts, or lists; to
list, define, recall, memorize, recite or retrieve material.
repeat, record name, recognize, Sample verbs appropriate for objectives
acquire written at the remembering level: state, tell,
underline, locate, match, state, spell, fill in the
blank, identify, relate, list, define, recall,
memorize, repeat, record, name, recognize,
acquire

2. Comprehension: The ability to grasp or construct meaning from material.
Examples of verbs that relate to this function: restate, locate, report, recognize,
explain, express, identify, discuss, describe, review, infer, conclude, illustrate,
interpret, draw, represent, differentiate

2. Understanding: Objectives written at the understanding level (a higher level of
mental ability than remembering, requiring the lowest level of understanding from
the student) call for constructing meaning from different types of functions, be they
written or graphic messages, through activities like interpreting, exemplifying,
classifying, summarizing, inferring, comparing, and explaining.
Sample verbs appropriate for objectives written at the understanding level:
restate, locate, report, recognize, explain, express, identify, discuss, describe,
review, infer, conclude, illustrate, interpret, draw, represent, differentiate


3. Application: The ability to use learned material, or to implement material in
new and concrete situations.
Examples of verbs that relate to this function: apply, relate, develop, translate,
use, operate, organize, employ, restructure, interpret, demonstrate, illustrate,
practice, calculate, show, exhibit, dramatize

3. Applying: Objectives written at the applying level require the learner to
implement (use) the information: carrying out or using a procedure through
executing or implementing. Applying refers to situations where learned material is
used through products like models, presentations, interviews, or simulations.
Sample verbs appropriate for objectives written at the applying level: apply,
relate, develop, translate, use, operate, organize, employ, restructure, interpret,
demonstrate, illustrate, practice, calculate, show, exhibit, dramatize
4. Analysis: The ability to break down or distinguish the parts of the material into
their components so that their organizational structure may be better understood.
Examples of verbs that relate to this function: analyze, compare, probe, inquire,
examine, contrast, categorize, differentiate, investigate, detect, survey, classify,
deduce, experiment, scrutinize, discover, inspect, dissect, discriminate, separate

4. Analyzing: Objectives written at the analyzing level require the learner to break
the information into component parts and describe the relationships: breaking
material or concepts into parts and determining how the parts relate or interrelate
to one another or to an overall structure or purpose. Mental actions included in
this function are differentiating, organizing, and attributing, as well as being able
to distinguish between the components or parts. When one is analyzing, he or she
can illustrate this mental function by creating spreadsheets, surveys, charts,
diagrams, or graphic representations.
Sample verbs appropriate for objectives written at the analyzing level: analyze,
compare, probe, inquire, examine, contrast, categorize, differentiate, investigate,
detect, survey, classify, deduce, experiment, scrutinize, discover, inspect, dissect,
discriminate, separate

5. Synthesis: The ability to put parts together to form a coherent or unique new
whole.
Examples of verbs that relate to this function: compose, produce, design,
assemble, create, prepare, predict, modify, plan, invent, formulate, collect, set up,
generalize, document, combine, propose, develop, arrange, construct, organize,
originate, derive, write

5. Evaluating: Objectives written at the evaluating level require the student to
make a judgment about materials or methods: making judgments based on criteria
and standards through checking and critiquing. Critiques, recommendations, and
reports are some of the products that can be created to demonstrate the process of
evaluation. In the newer taxonomy, evaluating comes before creating, as it is often
a necessary precursor to creating something. Remember that this level has now
changed places with the last one in the old taxonomy.
Sample verbs appropriate for objectives written at the evaluating level: appraise,
choose, compare, conclude, decide, defend, evaluate, give your opinion, judge,
justify, prioritize, rank, rate, select, support, value
6. Evaluation: The ability to judge, check, and even critique the value of material
for a given purpose.
Examples of verbs that relate to this function: judge, assess, compare, evaluate,
conclude, measure, deduce, argue, decide, choose, rate, select, estimate, validate,
consider, appraise, value, criticize, infer

6. Creating: Objectives written at the creating level require the student to generate
new ideas and ways of viewing things: putting elements together to form a
coherent or functional whole, reorganizing elements into a new pattern or
structure through generating, planning, or producing. Creating requires users to
put parts together in new ways or synthesize parts into something new and
different in form or product. This process is the most difficult mental function in
the new taxonomy. This level used to be No. 5 in Bloom's taxonomy and was
known as synthesis.
Sample verbs appropriate for objectives written at the creating level: change,
combine, compose, construct, create, invent, design, formulate, generate, produce,
revise, reconstruct, rearrange, visualize, write, plan

*Adapted with written permission from Leslie Owen Wilson's Curriculum Pages,
Beyond Bloom – A New Version of the Cognitive Taxonomy.

Cognitive Domain

Bloom’s taxonomy of cognitive domain is arranged according to the lowest level


to the highest level. Knowledge as the lowest level followed by comprehension, analysis,
application, synthesis and evaluation as the highest level.

1. Knowledge recognizes students' ability to use rote memorization and recall
certain facts. Test questions focus on identification and recall of information.

Sample verbs of stating specific learning outcomes:


Cite, define, identify, label, list, match, name, recognize, reproduce, select,
state

Instructional Objective:
At the end of the topic, the students should be able to identify the
different steps in testing a hypothesis.

Test Item:
What are the different steps in testing a hypothesis?

2. Comprehension involves students' ability to read course content, interpret
important information, and put others' ideas into their own words. Test questions
should focus on the use of facts, rules, and principles.

Sample verbs of stating specific learning outcomes:


Classify, convert, describe, distinguish between, give examples, interpret,
summarize

Instructional objective:
At the end of the lesson, the students should be able to summarize the
main events of the story INVICTUS in grammatically correct English.

Test Item:
Summarize the main events in the story INVICTUS in grammatically
correct English.

3. Application: students take new concepts and apply them to new situations. Test
questions focus on applying facts and principles.

Sample verbs of stating specific learning outcomes:

Apply, arrange, compute, construct, demonstrate, discover, extend,
operate, predict, relate, show, solve, use

Instructional objective:

At the end of the lesson, the students should be able to write a short poem
in iambic pentameter.

Test Item:

Write a short poem in iambic pentameter.

4. Analysis: students have the ability to take new information, break it down into
parts, and differentiate between them. Test questions focus on the separation of a
whole into component parts.

Sample verbs of stating specific learning outcomes:

Analyze, associate, determine, diagram, differentiate, discriminate,
distinguish, estimate, point out, infer, outline, separate

Instructional objective:
At the end of the lesson, the students should be able to describe the
statistical tools needed in testing the difference between two means.

Test Item:
What kind of statistical test would you run to see if there is a significant
difference between the pre-test and the post-test?

5. Synthesis: students are able to take various pieces of information and form a
whole, creating a pattern where one did not previously exist. Test questions
focus on combining new ideas to form a new whole.

Sample verbs of stating specific learning outcomes:


Combine, compile, compose, construct, create, design, develop, devise,
formulate, integrate, modify, revise, rewrite, tell, write

Instructional objectives:
At the end of the lesson, the students should be able to compare and
contrast the two types of error.

Test Item:
What is the difference between a Type I and a Type II error?

6. Evaluation involves students’ ability to look at someone else’ or principles and


the worth of the work and the value of the conclusion.

Sample verbs of stating specific learning outcomes:


Appraise, assess, compare, conclude, contrast, criticize, evaluate, judge,
justify, support

Instructional objective:
At the end of the lesson, the students should be able to draw a conclusion
about the relationship between two means.

Test Item:
What should the researcher conclude about the relationship in the
population?

Affective Domain

Affective domain describes learning objectives that emphasize a feeling tone, an


emotion, or a degree of acceptance or rejection. Affective objectives vary from simple
attention to selected phenomena to complex but internally consistent qualities of
character and conscience. We found a large number of such objectives in the literature
expressed as interests, attitudes, appreciations, values, and emotional sets or biases
(Krathwohl et al., 1964 as cited by Esmane, 2011). The affective domain includes objectives
pertaining to attitudes, appreciations, values, and emotions.
Krathwohl’s affective domain is perhaps the best known of any of the affective
domain. “The taxonomy is ordered according to the principles of internalization.”
Internalization refers to the process whereby a person’s affect toward an object passes
from a general awareness level to a point where the affect is internalized and
consistently guides or controls the person’s behavior. The arrangement of the affective
domain from lowest level to the highest level as articulated by Esmane (2011).

Level of Affective Domain


1. Receiving
Definition: Refers to being aware of or sensitive to the existence of certain ideas,
materials, or phenomena and being able to tolerate them. The learners are
willing to listen.
Example: Listens to the ideas of others with respect.
Sample verbs appropriate for objectives written at the receiving level: asks,
chooses, describes, follows, gives, holds, identifies, locates, names, points to,
selects, sits erect, replies, uses

2. Responding
Definition: Refers to the commitment in some measure to the ideas, materials, or
phenomena involved by actively responding to them; it answers questions about
ideas. The learning outcomes emphasize compliance in responding, willingness to
respond, or satisfaction in responding. The learners are willing to participate.
Example: Participates actively in class discussions.
Sample verbs appropriate for objectives written at the responding level: answers,
assists, aids, complies, conforms, discusses, greets, helps, labels, performs,
practices, presents, reads, recites, reports, selects, tells, writes

3. Valuing
Definition: Refers to the willingness to be perceived by others as valuing certain
ideas, materials, phenomena, or behavior. It is based on the internalization of a
set of specified values, while clues to these values are expressed in the learner's
overt behavior and are often identifiable. This ranges from simple acceptance to
the more complex state of commitment. The learners are willing to be involved.
Examples: Demonstrates belief in the democratic process. Shows the ability to
solve problems.
Sample verbs appropriate for objectives written at the valuing level: completes,
demonstrates, differentiates, explains, follows, forms, initiates, invites, joins,
justifies, proposes, reads, reports, selects, shares, studies, works

4. Organization
Definition: Refers to the ability to relate the value to those already held and bring
it into a harmonious and internally consistent philosophy. Commits to using ideas
and incorporating them into different activities. It emphasizes comparing, relating,
and synthesizing values. The learners are willing to be advocates.
Examples: Explains the role of systematic planning in solving problems.
Prioritizes time effectively to meet the needs of the organization, family, and self.
Sample verbs appropriate for objectives written at the organization level: adheres,
alters, arranges, combines, compares, completes, defends, explains, formulates,
generalizes, identifies, integrates, modifies, orders, organizes, prepares, relates,
synthesizes

5. Characterization by value or value set
Definition: Incorporates ideas completely into practice, recognized by the use of
them; the value system controls the learners' behavior. Instructional objectives are
concerned with the student's general patterns of adjustment: personal, social, and
emotional. The learners are willing to change their behavior, lifestyle, or way of
life.
Examples: Shows self-reliance when working independently. Values people for
what they are, not how they look.
Sample verbs appropriate for objectives written at the characterization level: acts,
discriminates, displays, influences, listens, modifies, performs, practices, proposes,
qualifies, questions, revises, serves, solves, verifies

Psychomotor Domain

The psychomotor domain is characterized by progressive levels of behaviors from
observation to mastery of physical skills. According to Esmane (2011), it includes
physical movement, coordination, and use of the motor-skill areas. Development of
these skills requires practice and is measured in terms of speed, precision, distance,
procedures, or techniques in execution. The seven major categories are listed from the
simplest behavior to the most complex. The psychomotor domain includes objectives
that require basic motor skills and/or physical movement, such as construct, kick, or
ski.

Level of Psychomotor Domain


1. Perception
Definition: The ability to use sensory cues to guide motor activity. This ranges
from sensory stimulation, through cue selection, to translation.
Examples: Detects nonverbal communication cues. Estimates where a ball will
land after it is thrown and then moves to the correct location to catch the ball.
Sample verbs appropriate for objectives written at the perception level: chooses,
describes, detects, differentiates, distinguishes, identifies, isolates, relates, selects

2. Set
Definition: Readiness to act. It includes mental, physical, and emotional sets.
These three sets are dispositions that predetermine a person's response to
different situations (sometimes called mindsets).
Examples: Recognizes one's abilities and limitations. Shows desire to learn a new
process (motivation). Note: This subdivision of the psychomotor domain is closely
related to the "responding to phenomena" subdivision of the affective domain.
Sample verbs appropriate for objectives written at the set level: begins, displays,
explains, moves, proceeds, reacts, shows, states, volunteers

3. Guided Response
Definition: The early stage in learning a complex skill, which includes imitation
and trial and error. Adequacy of performance is achieved by practicing.
Examples: Performs a mathematical equation as demonstrated. Follows
instructions to build a model.
Sample verbs appropriate for objectives written at the guided response level:
copies, traces, follows, reacts, reproduces, responds

4. Mechanism
Definition: The intermediate stage in learning a complex skill. Learned responses
have become habitual and the movements can be performed with some confidence
and proficiency.
Examples: Uses a personal computer. Repairs a leaking faucet. Drives a car.
Sample verbs appropriate for objectives written at the mechanism level:
assembles, calibrates, constructs, dismantles, displays, fastens, fixes, grinds,
heats, manipulates, measures, mends, mixes, organizes, sketches

5. Complex Overt Response
Definition: The skillful performance of motor acts that involve complex movement
patterns. Proficiency is indicated by a quick, accurate, and highly coordinated
performance, requiring a minimum of energy. This category includes performing
without hesitation and automatic performance. For example, players often utter
sounds of satisfaction or expletives as soon as they hit a tennis ball or throw a
football, because they can tell by the feel of the act what result it will produce.
Examples: Operates a computer quickly and accurately. Displays competence
while playing the piano.
Sample verbs appropriate for objectives written at the complex overt response
level: assembles, builds, calibrates, constructs, dismantles, displays, fastens, fixes,
grinds, heats, manipulates, measures, mends, mixes, organizes, sketches. Note:
The key words are the same as for mechanism, but with adverbs or adjectives
indicating that the performance is quicker, better, more accurate, etc.

6. Adaptation
Definition: Skills are well developed and the individual can modify movement
patterns to fit special requirements.
Examples: Responds effectively to unexpected experiences. Modifies instruction to
meet the needs of the learners.
Sample verbs appropriate for objectives written at the adaptation level: adapts,
alters, changes, rearranges, reorganizes, revises, varies

7. Origination
Definition: Creating new movement patterns to fit a particular situation or
specific problem. Learning outcomes emphasize creativity based upon highly
developed skills.
Example: Creates a new gymnastic routine.
Sample verbs appropriate for objectives written at the origination level: arranges,
builds, combines, composes, constructs, creates, designs, initiates, makes,
originates

Other Psychomotor Domains

Aside from Simpson's (1972) discussion of the psychomotor domain, there are two
other popular versions commonly used by educators: the works of Dave (1975) and of
Harrow (1972), as presented by Kubiszyn and Borich (2007), which are discussed
below.
Dave's (1975)

Imitation
Definition: Observing and patterning behavior after someone else. Performance
may be of low quality.
Example: Copying a work of art.

Manipulation
Definition: Being able to perform certain actions by following instructions and
practicing.
Example: Creating work on one's own, after taking lessons or reading about it.

Precision
Definition: Refining, becoming more exact. Few errors are apparent.
Example: Working and reworking something, so it will be "just right."

Articulation
Definition: Coordinating a series of actions, achieving harmony and internal
consistency.
Example: Producing a video that involves music, drama, color, sound, etc.

Naturalization
Definition: Having high-level performance become natural, without needing to
think much about it.
Examples: Michael Jordan playing basketball, Nancy Lopez hitting a golf ball,
etc.

Harrow’s (1972), Kubisxyn and Borich (2007)
Level Definition Example

Reflex movements Reactions that are not Flexion, extension,


learned. stretch, postural
adjustment

Fundamental movements Inherent movement Basic movements such as


patterns which are walking, grasping,
formed by combinations twisting, manipulating
of reflex movements, the
basis for complex skilled
movements.

Perception Response to stimuli such Coordinated movements


as visual, auditory, such as jumping rope,
kinesthetic, or tactile punting, catching
discrimination.

Physical abilities Stamina that must be Muscular exertion, quick


developed for further precise movement
development such as
strength and agility.

Skilled movements Advanced learned Skilled activities in sports,


movements as one would recreation and dance
find in sports or acting.

No discursive Effective body language, Body postures, gestures,


communication such as gestures and facial expressions
facial expressions. efficiently executed in
skilled and dance
movements and
choreographies

CHAPTER 3

DEVELOPMENT OF CLASSROOM ASSESSMENT TOOLS

Learning Outcomes

At the end of this chapter, the student should be able to:

1. Define the following terms: clarity of the learning target, appropriateness of
assessment tools, validity, reliability, fairness, objectivity, comprehensiveness,
ease in scoring and administering, practicality and efficiency, table of
specification, matching type test, multiple-choice test, true or false test,
completion test, objective test, stem, distracters, key options;
2. Discuss the different principles of testing/assessing;
3. Identify the different qualities of assessment tools;
4. Identify the different steps in developing test items;
5. Discuss the steps in developing a table of specification;
6. Construct a table of specification using the different formats;
7. Discuss the different formats of assessment tools;
8. Determine the advantages and disadvantages of the different formats of test
items;
9. Identify the different rules in constructing multiple-choice tests, matching type
tests, completion tests, and true or false tests; and
10. Construct multiple-choice tests, matching type tests, completion tests, and true
or false tests.

INTRODUCTION

In the previous chapter, we discussed the process of developing
instructional objectives. As discussed, instructional objectives must be specific,
measurable, and observable. Teachers must develop test items that match
the instructional objectives appropriately and accurately. In this section, we shall discuss
the general principles of testing, the different qualities of assessment tools, the steps in
developing assessment tools, the formats of the table of specifications, and the different
types of classroom assessment tools.

GENERAL PRINCIPLES OF TESTING

Ebel and Frisbie (1999) as cited by Garcia (2008) listed five basic principles that
should guide teachers in assessing the learning progress of the students and in
developing their own assessment tools. These principles are discussed below.

1. Measure all instructional objectives. When a teacher constructs test items to
measure the learning progress of the students, they should match all the learning
objectives posed during instruction. That is why the first step in constructing a
test is for the teacher to go back to the instructional objectives.
2. Cover all the learning tasks. The teacher should construct a test that contains a
wide range of sampling of items. In this case, the teacher can determine the
educational outcomes or abilities such that the resulting scores are representative of
the total performance in the areas measured.
3. Use appropriate test items. The test items constructed must be appropriate to
measure learning outcomes.
4. Make test valid and reliable. The teacher must construct a test that is valid so that
it can measure what it is supposed to measure from the students. The test is
reliable when the scores of the students remain the same or consistent when the
teacher gives the same test for the second time.
5. Use test to improve learning. The test scores should be utilized by the teacher
properly to improve learning by discussing the skills or competencies on the
items that have not been learned or mastered by the learners.

PRINCIPLES OF HIGH QUALITY ASSESSMENT

Assessing the performance of every student is a very critical task for the classroom
teacher. It is very important that a classroom teacher prepares the assessment
tool appropriately. Teacher-made tests are developed by a classroom teacher to assess
the learning progress of the students within the classroom. They have weaknesses and
strengths. The strength of a teacher-made test lies in its applicability and relevance in
the setting where it is utilized. Its weaknesses are the limited time and resources
for the teacher to utilize the test and also some of the technicalities involved in the
development of the assessment tools.

Test constructors believe that every assessment tool should possess good
qualities. Most literature considers validity and reliability the most common technical
concepts in assessment. Any type of assessment, whether traditional or
authentic, should be carefully developed so that it serves whatever purpose it is
intended for, and the test results are consistent with the type of assessment that
will be utilized.

In this section, we shall discuss the different terms such as clarity of the learning
target, appropriateness of an assessment tool, fairness, objectivity, comprehensiveness,
and ease of scoring and administering. Once these qualities of a good test are taken into
consideration in developing an assessment tool, the teacher will have accurate
information about the performance of each individual pupil or student.

Clarity of the Learning Target

When a teacher plans for his classroom instruction, the learning target should be
clearly stated and must be focused on student learning objectives rather than teacher
activity. The learning outcomes must be Specific, Measurable, Attainable, Realistic and
Time-bound (SMART) as discussed in the previous chapter. The performance task of the
students should also be clearly presented so that they can accurately demonstrate what
they are supposed to do and how the final product should be done. The teacher should
also discuss clearly with the students the evaluation procedures, the criteria to be used
and the skills to be assessed in the task.

Appropriateness of Assessment Tool

The type of test used should always match the instructional objectives or
learning outcomes of the subject matter posed during the delivery of the instruction.
Teachers should be skilled in choosing and developing assessment methods appropriate
for instructional decisions. The kinds of assessment tools commonly used to assess the
learning progress of the students will be discussed in details in this chapter and in the
succeeding chapter.

1. Objective Test. It is a type of test that requires students to select the correct
response from several alternatives or to supply a word or short phrase to answer
a question or complete a statement. It includes true-false, matching type, and
multiple-choice questions. The word objective refers to the scoring: it indicates
that there is only one correct answer.

2. Subjective Test. It is a type of test that permits the student to organize and
present an original answer. It includes either short answer questions or long
general questions. This type of test has no specific answer. Hence, it is usually
scored on an opinion basis, although there will be certain facts and
understanding expected in the answer.
3. Performance Assessment (Mueller, 2010). It is an assessment in which students
are asked to perform real-world tasks that demonstrate meaningful application
of essential knowledge and skills. It can appropriately measure learning
objectives which focus on the ability of the students to demonstrate skills or
knowledge in real-life situations.
4. Portfolio Assessment. It is an assessment that is based on the systematic,
longitudinal collection of student work created in response to specific known
instructional objectives and evaluated in relation to the same criteria (Ferenz, K.,
2001). Portfolio is a purposeful collection of student’s work that exhibits that
student’s efforts, progress and achievements in one or more areas over a period
of time. It measures the growth and development of students.
5. Oral Questioning. This method is used to collect assessment data by asking oral
questions. It is the most commonly used of all forms of classroom assessment, assuming
that the learner hears and shares the use of a common language with the teacher
during instruction. The ability of the students to communicate orally is very
relevant to this type of assessment. This is also a form of formative assessment.
6. Observation Technique. Another method of collecting assessment data is
through observation. The teacher observes how students carry out certain
activities, either observing the process or the product. There are two types of
observation techniques: formal and informal observations. Formal observations
are planned in advance, as when the teacher assesses an oral report or presentation
in class, while informal observation is done spontaneously during instruction, as in
observing the working behavior of students while performing a laboratory
experiment in a biology class and the like. The behavior of students during
instruction is systematically monitored, described, classified, and analyzed.
7. Self-report. The response of the students may be used to evaluate both
performance and attitude. Assessment tools could include sentence completion,
Likert scales, checklists, or holistic scales.

Different Qualities of Assessment Tools

1. Validity refers to the appropriateness of score-based inferences, or decisions
made based on the students’ test results; that is, the extent to which a test measures
what it is supposed to measure.
2. Reliability refers to the consistency of measurement; that is, how consistent test
results or other assessment results are from one measurement to another. We can
say that a test is reliable when it can be used to predict practically the same
scores when the test is administered twice to the same group of students and yields a
reliability index of 0.61 or above.
3. Fairness means the test item should not have any biases. It should not be
offensive to any examinee subgroup. A test can only be good if it is fair to all the
examinees.
4. Objectivity refers to the agreement of two or more raters or test administrators
concerning the score of a student. If two raters who assess the same student
on the same test cannot agree on the score, the test lacks objectivity and neither
of the scores from the judges is valid. Lack of objectivity reduces test validity in
the same way that lack of reliability influences validity.
5. Scorability means that the test should be easy to score; directions for scoring
should be clearly stated in the instructions. Provide the students an answer sheet, and
an answer key for the one who will check the test.
6. Adequacy means that the test should contain a wide range of sampling of items to
determine the educational outcomes or abilities, so that the resulting scores are
representative of the total performance in the areas measured.
7. Administrability means that the test should be administered uniformly to all
students so that the scores obtained will not vary due to factors other than
differences in the students’ knowledge and skills. There should be clear
provisions of instructions for the students, the proctors, and even the one who will
check the test or the test scorer.
8. Practicality and Efficiency refers to the teacher’s familiarity with the methods
used, time required for the assessment, complexity of the administration, ease of
scoring, ease of interpretation of the test results and the materials used must be
at the lowest cost.
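Reliability, as described in item 2 above, is often estimated with a test-retest approach: administer the same test twice to the same students and correlate the two sets of scores. A minimal Python sketch under that assumption; the function name and the score lists are illustrative, not from the text:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical scores of eight students on two administrations of one test.
first_try = [35, 28, 42, 30, 25, 38, 40, 33]
second_try = [37, 27, 43, 31, 26, 36, 41, 34]

r = pearson_r(first_try, second_try)
print(f"test-retest reliability: r = {r:.2f}")
# The text treats an index of 0.61 or above as acceptable.
print("reliable" if r >= 0.61 else "not reliable")
```

A high coefficient means the two administrations rank students consistently; a coefficient below the 0.61 threshold mentioned above would suggest the test needs revision.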

STEPS IN DEVELOPING ASSESSMENT TOOLS

1. Examine the instructional objectives of the topics previously discussed.


2. Make a table of specification (TOS).
3. Construct the test items.
4. Assemble the test items.
5. Check the assembled test items.
6. Write directions.
7. Make the answer key.
8. Analyze and improve the test items.

Let us discuss in detail the different steps needed in developing good assessment
tools. Following the different steps is very important so that the test items developed
will measure the different learning outcomes appropriately. In this case, the test will
measure what it is supposed to measure. Consider the following discussions in each step.

Examine the instructional Objectives of the Topic Previously Discussed

The first step in developing an achievement test is to examine and go back to the
instructional objectives so that you can match with the test items to be constructed.

Make a Table of Specification (TOS)

A Table of Specification (TOS) is a chart or table that details the content areas and
the cognitive levels assessed on a test, as well as the types and emphases of test items
(Gareis and Grant, 2008). The table of specification is very important in addressing the
validity and reliability of the test items. Validity of the test means that the
assessment can be used to draw appropriate conclusions because the
assessment guarded against any systematic error.

The table of specification provides the test constructor a way to ensure that the
assessment is based on the intended learning outcomes. It is also a way of ensuring
that the number of questions on the test is adequate to ensure dependable results that
are not likely caused by chance. It is also a useful guide in constructing a test and in
determining the type of test items that you need to construct.

Preparing a Table of Specification

Below are the suggested steps in preparing a table of specification used by the
test constructor. Consider these steps in making a two-way chart table of specification.
See also format 1 of the Table of Specification for the other steps.

a. Select the learning outcomes to be measured. Identify the instructional objectives
necessary to answer the test items correctly. The list of instructional
objectives will include the learning outcomes in the areas of knowledge,
intellectual skills or abilities, general skills, attitudes, interests, and appreciation. Use
Bloom’s taxonomy or Krathwohl’s 2001 revised taxonomy of the cognitive domain as a guide.

b. Make an outline of the subject matter to be covered in the test. The length of the test
will depend on the areas covered in its content and the time needed to answer.

c. Decide on the number of items per subtopic. Use the formula below to determine the
number of items to be constructed for each subtopic covered in the test, so that the
number of items in each topic is proportional to the number of class sessions:

Number of items = (Number of class sessions × desired total number of items) ÷ Total number of class sessions

d. Make the two-way chart as shown in Format 2 and Format 3 of a Table of
Specification.

e. Construct the test items. A classroom teacher should always follow the general
principles of constructing test items. The test item should always correspond with the
learning outcome so that it serves whatever purpose it may have.

If properly prepared, a table of specification will help you limit the coverage of the test
and identify the necessary skills or cognitive level required to answer the test items
correctly.

Different Formats of Table of Specification

Gronlund (1990) lists several examples and formats showing how a table of
specification should be prepared.

a. Format 1 of a Table of Specification

The first format of a table of specification is composed of the specific objectives, the
cognitive level, type of test used, the item number and the total points needed in each
item. Below is the template of the said format.
Specific Objectives | Cognitive Level | Type of Test | Item Number | Total Points

Solve worded problems involving consecutive integers. | Application | Multiple-choice | 1 and 2 | 4 points

Specific Objectives refers to the intended learning outcomes stated as specific
instructional objectives covering a particular test topic.

Cognitive Level pertains to the intellectual skill or ability needed to correctly answer a test
item, using Bloom’s taxonomy of educational objectives. We sometimes refer to this as
the cognitive demand of a test item. Thus, entries in this column could be “knowledge,
comprehension, application, analysis, synthesis, and evaluation.”

Type of Test Item identifies the type or kind of test a test item belongs to. Examples
of entries in this column could be “multiple-choice, true or false, or even essay.”

Item Number simply identifies the question number as it appears in the test.

Total Points summarizes the score given to a particular test item.

Example of how to compute the number of items for a topic:

Topic: Synthetic division
Number of class sessions discussing the topic: 3
Desired total number of items: 10
Total number of class sessions for the unit: 10

Number of items = (Number of class sessions × desired total number of items) ÷ Total number of class sessions
Number of items = (3 × 10) ÷ 10
Number of items = 30 ÷ 10
Number of items for the topic synthetic division = 3
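The allocation formula above can be wrapped in a small helper, assuming we round to the nearest whole item (the function name is illustrative):

```python
# Number of items per topic for a Table of Specification:
# (sessions on topic x desired test length) / total sessions.
def items_for_topic(sessions_on_topic, total_sessions, desired_total_items):
    return round(sessions_on_topic * desired_total_items / total_sessions)

# Worked example from the text: synthetic division,
# 3 of 10 class sessions, 10 items desired in total.
print(items_for_topic(3, 10, 10))  # 3 items
```

Applying the same helper to every topic in the unit keeps the item counts proportional to the teaching time spent on each topic.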

b. Format 2 of a Table of Specification (one-way table of specification)

Contents | Number of Class Sessions | Number of Items | Cognitive Level (K-C, A, HOTS) | Test Item Distribution
Basic Concepts of Fraction | 1 | 2 | | 1-2
Addition of Fraction | 1 | 2 | | 3-4
Subtraction of Fraction | 1 | 2 | | 5-6
Multiplication and Division of Fraction | 3 | 6 | | 7-12
Application/Problem Solving | 4 | 8 | | 13-20
Total | 10 | 20 | |

c. Format 3 of a Table of Specification (two-way table of specification)

Content | Class Sessions | Krathwohl’s Cognitive Level (Remembering, Understanding, Applying, Evaluating, Creating) | Total Items | Item Distribution
Concepts | 1 | | 2 | 1-2
z-score | 2 | | 4 | 3-6
t-score | 2 | | 4 | 7-10
Stanine | 3 | | 6 | 11-16
Percentile rank | 3 | | 6 | 17-22
Application | 4 | | 8 | 23-30
Total | 15 | | 30 |

Note:

The number of items for each level will depend on the skills the teacher wants to
develop in his students. In the case of the tertiary level, the teacher must develop more
higher-order thinking skills (HOTS) questions.

For the elementary and secondary levels, the guidelines in constructing tests as
stipulated in DepEd Order No. 33, s. 2004 must be followed. That is, factual
information 60%, moderately difficult or more advanced questions 30%, and higher-order
thinking skills 10% for distinguishing honor students.

Construct the Test Items

In this section, we shall discuss the different formats of objective test
items; the steps in developing objective and subjective tests; and their advantages and
limitations. The different guidelines for constructing the various types of objective and
subjective test items will also be discussed in this section.

General Guidelines for constructing Test Items

Kubiszyn and Borich (2007) suggested some general guidelines for writing test
items to help classroom teachers improve the quality of the test items they write.
1. Begin writing items far enough in advance so that you will have time to revise
them.
2. Match items to intended outcomes at appropriate level of difficulty to provide
valid measure of instructional objectives. Limit the question to the skill being
assessed.
3. Be sure each item deals with an important aspect of the content area and not with
trivia.
4. Be sure the problem posed is clear and unambiguous.
5. Be sure that each item is independent of all other items. The answer to one item
should not be required as a condition for answering the next item. A hint to one
answer should not be embedded in another item.
6. Be sure the item has one correct or best answer on which experts would agree.
7. Prevent unintended clues to the answer in the statement or question. Grammatical
inconsistencies such as “a” or “an” give clues to the correct answer to those students who
are not well prepared for the test.
8. Avoid replication of the textbook in writing test items; do not quote directly from
the textual materials. You are usually not interested in how well students
memorize the text. Besides, taken out of context, direct quotes from the text are
often ambiguous.
9. Avoid trick or catch questions in an achievement test. Do not waste time testing
how well the students can interpret your intentions.
10. Try to write items that require higher-order thinking skills.

Determining the Number of Test Items

Consider the following average times needed to answer test items of each format.
The length of the testing time and the type of item used are factors to be considered in
determining the number of items to be constructed in an achievement test. These
guidelines will be very important in determining appropriate assessment for college
students.

Assessment Format | Average Time to Answer
True-false | 30 seconds
Multiple-choice | 60 seconds
Multiple-choice of higher-level learning objectives | 90 seconds
Short answer | 120 seconds
Completion | 60 seconds
Matching | 30 seconds per response
Short essay | 10-15 minutes
Extended essay | 30 minutes
Visual image | 30 seconds
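Using the per-item averages in the table above, the total testing time for a planned test can be estimated with a short sketch (the dictionary keys and function name are illustrative, not part of the source):

```python
# Average seconds per item, taken from the table above.
SECONDS_PER_ITEM = {
    "true-false": 30,
    "multiple-choice": 60,
    "multiple-choice (higher-level)": 90,
    "short answer": 120,
    "completion": 60,
    "matching (per response)": 30,
}

def estimated_minutes(item_counts):
    """item_counts: mapping of item format -> number of items."""
    total_seconds = sum(SECONDS_PER_ITEM[fmt] * n
                        for fmt, n in item_counts.items())
    return total_seconds / 60

# e.g. 10 true-false, 20 multiple-choice, and 5 completion items
print(estimated_minutes({"true-false": 10,
                         "multiple-choice": 20,
                         "completion": 5}))  # 30.0 minutes
```

Comparing the estimate against the length of the class period gives a quick check on whether the planned number of items is realistic.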

The number of items included in a given assessment will also depend on the
length of the class period and the type of items utilized. The following guidelines will
assist you in determining an assessment appropriate for college-level students aside
from the previous formula discussed.
Yes/No Checklist:

The item is appropriate to measure a learning objective.
The item format is the most effective means of measuring the desired knowledge.
The item is clearly worded and can be easily understood by the target student population.
Items of the same format are grouped together.
Various item types are included in the assessment.
The students have enough time to answer all test items.
The test instructions are specific and clear.
The number of questions targeting each objective matches the weight of importance of that objective.
The scoring guidelines are discussed clearly and made available to students.

Assemble the Test Items

After constructing the test items following the different principles of constructing
test items, the next step is to assemble the test items. There are two steps in
assembling the test: (1) packaging the test; and (2) reproducing the test.

In assembling the test, consider the following guidelines:

a. Group all test items with similar format. All items in similar format must be
grouped so that the students will not be confused.
b. Arrange test items from easy to difficult. The test items must be arranged from
easy to difficult so that students will answer the first few items correctly and
build confidence at the start of the test.
c. Space the test items for easy reading.
d. Keep each item and its options on the same page.
e. Place the illustrations near the description.
f. Check the answer key.
g. Decide where to record the answer.

Write Directions

Check the test directions for each item format to be sure that they are clear for the
students to understand. The test directions should contain the numbers of the items to which
they apply; how to record the answers; the basis on which to select answers; and the
criteria for scoring or the scoring system.

Check the Assembled Test Items

Before reproducing the test, it is very important to proofread the test items first
for typographical and grammatical errors and make the necessary corrections, if any. If
possible, let others examine the test to validate its content. This can save time during
the examination and avoid distracting the students.

Make the Answer Key

Be sure to check your answer key so that the correct answers follow a fairly
random sequence. Avoid patterns such as TFTFTF or TTFFF for a true or false test,
and A B C D A B C D patterns for a multiple-choice test. The number of true answers should
be about equal to the number of false answers, and the correct answers should likewise be
distributed about equally among the multiple-choice options.
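One way to follow this advice is to generate the key by shuffling a balanced list, so the true and false answers stay roughly equal in number while the sequence remains fairly random. A minimal, purely illustrative sketch:

```python
import random

def balanced_tf_key(n_items, seed=None):
    """Build a true-false answer key with balanced T/F counts."""
    rng = random.Random(seed)
    # Half T and half F (F gets the extra one when n_items is odd) ...
    key = ["T"] * (n_items // 2) + ["F"] * (n_items - n_items // 2)
    rng.shuffle(key)  # ... then shuffle, so the counts stay balanced
    return key

key = balanced_tf_key(20, seed=7)
print("".join(key), "| T:", key.count("T"), "F:", key.count("F"))
```

The same idea extends to multiple-choice keys: build a list with an equal share of A, B, C, and D, then shuffle it before assigning answers to items.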

Analyze and Improve the Test Items

Analyzing and improving the test should be done after checking, scoring and
recording the test. The details of this part will be discussed in the succeeding chapter.

DIFFERENT FORMATS OF CLASSROOM ASSESSMENT TOOLS

There are different ways of assessing the performance of students: objective
tests, subjective tests, performance-based assessment, oral questioning, portfolio
assessment, self-assessment, and checklists. Each of these has its own function and use.
The type of assessment tool should always be appropriate to the objectives of the
lesson.

There are two general types of test items to use in an achievement test using a paper-and-pencil
test. They are classified as selection-type items and supply-type items.

Selection Type or Objective Test Item


Selection-type items require students to select the correct response from several
options. These are also known as objective test items. Selection-type items can be classified
as multiple-choice, matching type, true or false, or interpretative exercises.

An objective test item requires only one correct answer in each item.

Kinds of Objective Type Test

In this section, we shall discuss the different formats of objective test
items and the general guidelines in constructing a multiple-choice test; guidelines
in constructing the stem, options, and distracters; advantages and disadvantages of
multiple-choice tests; guidelines in constructing matching type tests; advantages and
disadvantages of matching type tests; guidelines in constructing true or false and
completion types of test; and advantages and disadvantages of true or false tests and
interpretative exercises.

a. Multiple-choice Test

A multiple-choice test is used to measure knowledge outcomes and other types


of learning outcomes such as comprehension and applications. It is the most commonly
used format in measuring student achievements at different levels of learning.

A multiple-choice item consists of three parts: the stem, the keyed option, and the
incorrect options or alternatives. The stem presents the problem or question, usually
expressed in completion form or question form. The keyed option is the correct answer.
The incorrect options or alternatives are also called distracters or foils.

General Guidelines in Constructing Multiple-choice Test

1. Make test items that are practical or with real-world applications for the
students.
2. Use a diagram or drawing when asking questions about application,
analysis, or evaluation.
3. When asking students to interpret or evaluate quotations, present actual
quotations from secondary sources such as published books or
newspapers.
4. Use tables, figures, or charts when asking questions that require interpretation.
5. Use pictures, if possible, when students are required to apply concepts
and principles.
6. List the choices/ options vertically not horizontally.
7. Avoid trivial questions.
8. Use only one correct answer or best answer format.
9. Use three to five options to discourage guessing.
10. Be sure that distracters are plausible and effective.
11. Increase the similarity of the options to increase the difficulty of the
item.
12. Do not use “none of the above” options when asking for a best answer.
13. Avoid using “all of the above” options. It is usually the correct answer
and makes the item too easy for the examinee with partial knowledge.

Guidelines in Constructing the Stem

1. The stem should be written in question form or completion form. Research


showed that it is more advisable to use question form.
2. Do not leave the blank at the beginning or at the middle of the stem when using
completion form of a multiple-choice type of test.
3. The stem should pose the problem completely.
4. The stem should be clear and concise.
5. Avoid excessive and meaningless use of words in the stem.
6. State the stem in positive form. Avoid using negative words like “not” or
“except”. Underline or capitalize the negative word if it cannot be avoided.
Example: Which of the following does not belong to the group? or Which of the
following does NOT belong to the group?
7. Avoid grammatical clues in the correct answer.

Guidelines in Constructing Options

1. There should be one correct or best answer in each item.
2. List options in vertical order, not horizontal order, beneath the stem.
3. Arrange the options in logical order and use capital letters to indicate each
option, such as A, B, C, D, E.
4. Avoid overlapping options; keep them independent.
5. All options must be homogeneous in content to increase the difficulty of an item.
6. As much as possible, the lengths of the options must be the same or equal.
7. Avoid using the phrase “all of the above”.
8. Avoid using the phrase “none of the above” or “I don’t know.”

Guidelines in Constructing the Distracters

1. The distracter should be plausible.


2. The distracter should be equally popular to all examinees.
3. Avoid using ineffective distracters. Replace distracter(s) that are not effective to
the examinees.
4. Each distracter should be chosen by at least 5% of the examinees but not more
than the key answer.
5. Revise distracter(s) that are overly attractive; they might be
ambiguous to the examinees.

Examples of Multiple-choice Items

1. Knowledge Level
The most stable measure of central tendency is the _______________.
A. Mean
B. Mean and median
C. Median
D. Mode

This kind of question is at the knowledge level because the students are required
only to recall the properties of the mean. The correct answer is option A.

2. Comprehension Level
Which of the following statements describes a normal distribution?
A. The mean is greater than the median.
B. The mean, median, and mode are equal.
C. The scores are concentrated at one part of the distribution.
D. Most of the scores are high.

This kind of question is a comprehension level type because the students are
required to describe the scores that are normally distributed. The correct answer
is option B.

3. Application Level
What is the standard deviation of the following scores of 10 students in a
mathematics quiz: 10, 13, 16, 16, 17, 19, 20, 20, 20, 25?
A. 3.90
B. 3.95
C. 4.20
D. 4.25

This kind of question is at the application level because the students are asked to
apply the formula and solve for the standard deviation. The correct answer is option C.

4. Analysis Level
What statistical test is used when you test the mean difference between pretest
and posttest scores?
A. Analysis of variance
B. t-test
C. Correlation
D. Regression analysis

This kind of question is an example of an analysis-level item because students are
required to distinguish which type of test is used. The correct answer is option B.
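The keyed answer to the application-level item above can be verified with Python's statistics module, assuming the sample (n − 1) formula for the standard deviation:

```python
import statistics

# The ten quiz scores from the application-level example.
scores = [10, 13, 16, 16, 17, 19, 20, 20, 20, 25]

# statistics.stdev uses the n-1 (sample) formula.
sd = statistics.stdev(scores)
print(round(sd, 2))  # 4.2, matching keyed option C (4.20)
```

Note that the population formula (statistics.pstdev, dividing by n) gives a smaller value, so the item implicitly assumes the sample formula.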

Advantages of Multiple-choice Test

1. It measures learning outcomes from the knowledge level to the evaluation level.
2. Scoring is highly objective, easy, and reliable.
3. Scores are more reliable than those of subjective types of test.
4. It measures broad samples of content within a short span of time.
5. Distracters can provide diagnostic information.
6. Item analysis can reveal the difficulty of an item and can discriminate between good and
poor performing students.

Disadvantages of Multiple-choice Test

1. It is time consuming to construct a good item.
2. It is difficult to find effective and plausible distracters.
3. Scores can be influenced by the reading ability of the examinees.
4. In some cases, there is more than one justifiable correct answer.
5. It is ineffective in assessing the problem-solving skills of the students.
6. It is not applicable when assessing the students’ ability to organize and express ideas.

b. Matching Type
A matching type item consists of two columns. Column A contains the descriptions
and is placed at the left side, while Column B contains the options and is placed
at the right side. The examinees are asked to match the options with the
descriptions they are associated with.

Guidelines in Constructing Matching Type Tests

1. The descriptions and options must be short and homogeneous.
2. The descriptions must be written at the left side and marked as Column A,
and the options must be written at the right side and marked as Column B,
to save time for the examinees.
3. There should be more options than descriptions, or indicate in the directions
that each option may be used more than once, to decrease the chance of
guessing.
4. The matching directions should specify the basis for matching. Failure to indicate
how matches should be marked can greatly increase the time consumed by the
teacher in scoring.
5. Avoid too many correct answers.
6. When using names, always include the complete name (first name and
surname) to avoid ambiguities.
7. Use numbers for the descriptions and capital letters for the options to avoid
confusing students who have reading problems.
8. Arrange the options in chronological or alphabetical order.
9. The descriptions and options must be written on the same page.
10. Use a minimum of three and a maximum of seven items for the elementary level,
and a maximum of seventeen items for the secondary and tertiary levels.

Examples of Matching Type Test

Direction: Match the function of each computer part in Column A with its name in Column B. Write the letter of your choice before the number.

Column A

_____ 1. Stores information waiting to be used
_____ 2. Considered as the brain of the computer
_____ 3. Hand-held device used to move the cursor
_____ 4. An example of an output device
_____ 5. Stores permanent information in the computer
_____ 6. Physical aspect of the computer
_____ 7. Used to display the output
_____ 8. The instructions fed into the computer
_____ 9. Pre-loaded data
_____ 10. Permits a computer to store large amounts of data

Column B

A. Central Processing Unit
B. Hard Drive
C. Hardware
D. Mass Storage Device
E. Mouse
F. Monitor
G. Processor
H. Printer
I. Random Access Memory
J. Read Only Memory
K. Software
L. Universal Serial Bus

Advantages of Matching Type Test

1. It is simpler to construct than a multiple-choice type of test.
2. It reduces the effects of guessing compared to the multiple-choice and true or false types of test.
3. It is appropriate for assessing the association between facts.
4. It provides easy, accurate, efficient, objective, and reliable test scores.
5. More content can be covered in a given set of test items.
Disadvantages of Matching Type Test

1. It measures only simple recall or memorization of information.
2. It is difficult to construct due to problems in selecting the descriptions and options.
3. It assesses only low levels of the cognitive domain, such as knowledge and comprehension.

c. True or False Type

Another format of an objective type of test is the true or false test item. In this type of test, the examinees determine whether the statement presented is true or false. The true or false test item is an example of a "forced-choice test" because there are only two possible choices. The students are required to choose the answer, true or false, in recognition of a correct or incorrect statement.

The true or false type of test is appropriate for assessing behavioral objectives such as "identify," "select," or "recognize." It is also suited to assess the knowledge and comprehension levels of the cognitive domain. This type of test is appropriate when there are only two plausible alternatives or distracters.

Guidelines in Constructing True or False Tests

1. Avoid writing a very long statement. Eliminate unnecessary word(s) in the statement (be concise).
2. Avoid trick questions.
3. Each item should contain only one idea, except for statements showing the relationship between cause and effect.
4. It can be used for establishing a cause and effect relationship.
5. Avoid using opinion-based statements; if they cannot be avoided, the statement should be attributed to somebody.
6. Avoid using negatives or double negatives. Construct the statement positively. If this cannot be avoided, bold or underline the negative words to call the attention of the examinees.
7. Avoid specific determiners such as "never," "always," "all," and "none," for they tend to appear in statements that are false.
8. Avoid specific determiners such as "some," "sometimes," and "may," for they tend to appear in statements that are true.
9. The number of true items must be the same as the number of false items.
10. Avoid grammatical clues that lead to the correct answer, such as the articles (a, an, the).
11. Avoid statements directly taken from the textbook.
12. Avoid arranging the answers in a recognizable pattern such as TTTTT-FFFFF, TFTFTF, or TTFFTTFF.
13. Directions should indicate where or how the students should mark their answers.

Example of True or False Type of Test

Direction: Write your answer before the number in each item. Write T if the statement is true and F if the statement is false.

T F 1. A test constructor should never phrase a test item in the negative.
T F 2. Photosynthesis is the process by which leaves make a plant's food.
T F 3. The equation 3x³ + x³ + 6 = 4x + 6.
T F 4. All parasites are animals.
T F 5. A statement of opinion may be used in a true or false test item.

Advantages of a True or False Test

1. It covers a lot of content in a short span of time.
2. It is easier to prepare compared to multiple-choice and matching types of test.
3. It is easier to score because it can be scored objectively, unlike a test that depends on the judgment of the rater(s).
4. It is useful when there are only two alternatives.
5. The scores are more reliable than those of an essay test.

Disadvantages of a True or False Test

1. It is limited to low-level thinking skills such as knowledge and comprehension, or recognition or recall of information.
2. There is a high probability of guessing the correct answer (50%) compared to a multiple-choice item with four options (25%).

Supply type or subjective type of test items

Supply type items require students to create and supply their own answer or perform a certain task to show mastery of knowledge or skills. This is also known as a constructed response test. Supply type items or constructed response tests are classified as:

a. Short answer or completion type
b. Essay type items (restricted response or extended response)

Another way of assessing the performance of the students is by using performance-based assessment and portfolio assessment, which are categorized under constructed response tests. Let us discuss the details of the selection type and supply type test items in this section, while performance-based assessment and portfolio assessment will be discussed in the succeeding chapters.

A subjective test item requires the students to organize and present an original answer (essay test), perform a task to show mastery of learning (performance-based assessment and portfolio assessment), or supply a word or phrase to answer a certain question (completion or short answer type of test).

The essay test is a form of subjective test. It measures complex cognitive skills or processes. This type of test has no single specific answer for each student. It is usually scored on an opinion basis, although certain facts and understandings will be expected in the answer. There are two kinds of essay items: extended response essays and restricted response essays.

Kinds of subjective type test items

The subjective type of test is another test format where the student supplies the answer rather than selecting the correct answer. In this section, we shall consider the completion type or short answer test and the essay type item. There are two types of essay items according to the length of the answer: extended response essays and restricted response essays.

The teacher must present and discuss the criteria used in assessing the answers of the students in advance to help them prepare for the test.

a. Completion type or short answer test

The completion or short answer type is an alternative form of assessment because the examinee needs to supply or create the appropriate word(s), symbol(s), or number(s) to answer a question or complete a statement rather than selecting the answer from given options. There are two ways of constructing completion type or short answer items: the question form and the completion (statement) form.

Guidelines in Constructing Completion Type or Short Answer Tests

1. The answer should require a single word or a brief and definite statement. Do not use indefinite statements that allow several answers.
2. Be sure that the language used in the statement is precise and accurate in relation to the subject matter being tested.
3. Be sure to omit only key words; do not eliminate so many words that the meaning of the statement changes.
4. Do not leave the blank at the beginning or within the statement. It should be at the end of the statement.
5. Use direct questions rather than incomplete statements. The statement should pose the problem to the examinee.
6. Be sure to indicate the units in which the answer is to be expressed when the statement requires a numerical answer.
7. Be sure that the answer the student is required to produce is factually correct.
8. Avoid grammatical clues.
9. Do not select textbook sentences.

Examples of Completion and Short Answer Items

Direction: Write your answer before the number in each item. Write the word(s), phrase, or symbol(s) needed to answer the question or complete the statement.

Question Form:
1. Which supply type item is used to measure the ability to organize and integrate material? (Answer: essay item)
2. What are the incorrect options in a multiple-choice item called? (Answer: distracters)
3. What do you call a polygon that has five sides? (Answer: pentagon)
4. What is the most complex level in Bloom's taxonomy of the cognitive domain? (Answer: evaluation)
5. Which test item measures the greatest variety of learning outcomes? (Answer: multiple-choice test item)

Completion Form:
1. The supply type item used to measure the ability to organize and integrate material is called _________.
2. The incorrect options in a multiple-choice test item are called _________.
3. A polygon with five sides is called _________.
4. The most complex level in Bloom's taxonomy of the cognitive domain is called _________.
5. The test item that measures the greatest variety of learning outcomes is called _________.

Advantages of a Completion or Short Answer Test

1. It covers a broad range of topics in a short span of time.
2. It is easier to prepare and less time consuming compared to multiple-choice and matching types of test.
3. It can effectively assess the lower levels of Bloom's taxonomy, since it assesses recall of information rather than recognition.
4. It reduces the possibility of guessing the correct answer, because it requires recall, compared to true or false and multiple-choice items.
5. It covers a greater amount of content than the matching type test.

Disadvantages of a Completion or Short Answer Test

1. It is only appropriate for questions that can be answered with short responses.
2. There is difficulty in scoring when the questions are not prepared properly and clearly. The question should be clearly stated so that the answer of the student is clear.
3. It can assess only the knowledge, comprehension, and application levels of Bloom's taxonomy of the cognitive domain.
4. It is not adaptable for measuring complex learning outcomes.
5. Scoring is tedious and time consuming.
b. Essay Items
The essay item is appropriate when assessing students' ability to organize and present their original ideas. It consists of a small number of questions wherein the examinee is expected to demonstrate the ability to recall factual knowledge, organize this knowledge, and present it in a logical and integrated answer.

Types of Essay Items

There are two types of essay item: extended response and restricted response
essay.

b.1. Extended Response Essays

An essay test that allows the students to determine the length and complexity of the response is called an extended response essay item (Kubiszyn and Borich, 2007). It is very useful in assessing the synthesis and evaluation skills of the students. When the objective is to determine whether the students can organize, integrate, and express ideas and evaluate information, it is best to use an extended response essay test. Using extended response essay items has advantages and disadvantages. The advantages are: it demonstrates learning outcomes at the synthesis and evaluation levels; answers can be evaluated with sufficient reliability to provide useful measures of learning; and it provides more freedom in responding to the question and allows creative integration of ideas. The disadvantages are: it is more difficult to construct extended response essay questions, and scoring is more time consuming than for restricted response essays.

Examples of Extended Response Essay Questions:

1. Present and describe the modern theory of evolution and discuss how it is supported by evidence from the areas of (a) comparative anatomy and (b) population genetics.
2. From the statement, "Wealthy politicians cannot offer fair representation to all the people," what do you think is the reasoning of the statement? Explain your answer.

b.2. Restricted Response Essay

An essay item that places strict limits on both the content and the response given by the students is called a restricted response essay item. In this type of essay, the content is usually restricted by the scope of the topic to be discussed, and the limitations on the form of the response are indicated in the question.

When there is a restriction on the form and scope of the answer of the students in an essay test, there can be advantages and disadvantages. The advantages are: it is easier to prepare questions; it is easier to score; and it is more directly related to specific learning outcomes. The disadvantages are: it provides little opportunity for the students to demonstrate their abilities to organize ideas, integrate material, and develop new patterns of answers; and it measures learning outcomes at the comprehension, application, and analysis levels only.

Examples of Restricted Response Essay Questions:

1. List the major facts and opinions in the first State of the Nation Address (SONA) of Pres. Benigno C. Aquino III. Limit your answer to one page only. The score will depend on the content, organization, and accuracy of your answer.
2. Point out the strengths and weaknesses of a multiple-choice type of test. Limit your answer to five strengths and five weaknesses. Explain each answer in not more than two sentences.
Guidelines in Constructing Essay Test Items

1. Construct essay questions used to measure complex learning outcomes only.
2. Essay questions should relate directly to the learning outcomes to be measured.
3. Formulate essay questions that present a clear task to be performed.
4. An item should be stated precisely, and it must clearly focus on the desired answer.
5. All students should be required to answer the same question.
6. The number of points and the time to be spent in answering each question must be indicated in each item.
7. Specify the number of words, paragraphs, or sentences for the answer.
8. The scoring system must be discussed or presented to the students.

Examples of Essay Test Items

1. Choose a leader you admire most and explain why you admire him or her.
2. Pick a controversial issue in the Aquino administration. Discuss the issue and suggest a solution.
3. If you were the principal of a certain school, describe how you would demonstrate your leadership ability inside and outside the school.
4. Describe the difference between norm-referenced assessment and criterion-referenced assessment.
5. Do you agree or disagree with the statement, "Education comes not from books but from practical experience"? Support your position.

Types of Complex Outcomes and Related Terms for Writing Essay Questions

Comparing
  Sample verbs: compare, classify, describe, distinguish between, explain, outline, summarize
  Sample question: Describe the similarities and differences between the Philippine educational system and the Singaporean educational system.

Interpreting
  Sample verbs: convert, draw, estimate, illustrate, interpret, restate, summarize, translate
  Sample question: Summarize briefly the content of the second SONA of President Benigno C. Aquino III.

Inferring
  Sample verbs: derive, draw, estimate, extend, predict, propose, relate
  Sample question: Using the facts presented, what is most likely to happen when ………?

Applying
  Sample verbs: arrange, compute, describe, illustrate, relate, summarize, solve
  Sample question: Solve the solution set of the equation x² + 5x – 24 = 0 using the factoring method.

Analyzing
  Sample verbs: break down, describe, differentiate, divide, list, outline
  Sample question: List and describe the characteristics of a good assessment instrument.

Creating
  Sample verbs: compose, design, draw, formulate, list, present, make up
  Sample question: Formulate a hypothesis about the problem "Mathematics attitude and competency levels of the education students of U.E."

Synthesizing
  Sample verbs: arrange, combine, construct, design, relate, group
  Sample question: Design a scoring guide in evaluating portfolio assessment.

Generalizing
  Sample verbs: construct, develop, explain, formulate, make, state
  Sample question: Explain the function of assessment of learning.

Evaluating
  Sample verbs: appraise, criticize, defend, describe, evaluate, explain, judge, rate, write
  Sample question: Describe the strengths and weaknesses of using performance-based assessment in evaluating the performance of the students.

Advantages of Essay Test

1. It is easier to prepare and less time consuming compared to other paper-and-pencil tests.
2. It measures higher-order thinking skills (analysis, synthesis, and evaluation).
3. It allows students freedom to express individuality in answering the given question.
4. The students have a chance to express their own ideas and plan their own answers.
5. It reduces guessing compared to any objective type of test.
6. It presents a more realistic task to the students.
7. It emphasizes the integration and application of ideas.

Disadvantages of Essay Test

1. It cannot provide an objective measure of the achievement of the students.
2. It needs much time to grade and to prepare scoring criteria.
3. The scores are usually not reliable, especially without scoring criteria.
4. It measures only a limited amount of content and objectives.
5. There is low variation of scores.
6. It usually encourages bluffing.

Suggestions for Grading Essay Tests

Zimmaro (2003) suggested different guidelines in scoring an essay test. These guidelines are very important in evaluating the performance of the students and in avoiding or lessening the subjectivity of the scoring.

1. Decide on a policy for dealing with incorrect, irrelevant, or illegible responses.
2. Keep the scores of previously read items out of sight.
3. The student's identity should remain anonymous while his/her paper is being graded.
4. Read and evaluate each student's answer to the same question before grading the next question.
5. Provide students with the general grading criteria by which they will be evaluated prior to the examination.
6. Use analytic scoring or holistic scoring.
7. Answer the test question yourself by writing the ideal answer to it, so that you can develop the scoring criteria from your answer.
8. Write your comments on their papers.

Checklist for Evaluating Essay Questions

Yes No
___ ___ The test item is appropriate for measuring the intended learning outcomes.
___ ___ The test item task matches the learning task to be measured.
___ ___ The questions constructed measure complex learning outcomes.
___ ___ The question states what is being measured and how the answers are to be evaluated.
___ ___ The terminology used clarifies and limits the task.
___ ___ All students are required to answer the same question.
___ ___ There is an established time limit for answering each question.
___ ___ Provisions for scoring answers are given (criteria for evaluating answers).

CHAPTER 4

ADMINISTERING, ANALYZING, AND IMPROVING TESTS

Learning Objectives

At the end of this chapter, the students should be able to:


1. Define the basic concepts regarding item analysis;
2. Identify the steps in improving test items;
3. Solve difficulty index and discrimination index;
4. Identify the level of difficulty of an item;
5. Perform item analysis properly and correctly;
6. Identify the item to be rejected, revised, or retained; and
7. Interpret the results of item analysis.

INTRODUCTION

One of the most important functions of a teacher is to assess the performance of the students. This is a very complicated task because you must consider many activities, such as the timing of the assessment process, the format of the assessment tools, and the duration of the assessment procedures.

After designing the assessment tools, package the test, administer it to the students, check the test papers, score them, and then record the scores. Return the test papers and give feedback to the students regarding the results of the test.

PACKAGING AND REPRODUCING TEST ITEMS

Assuming that you have already assembled the test, written the instructional objectives, prepared the table of specifications, and written test items that match the instructional objectives, the next thing to do is to package the test and reproduce it as discussed in the previous chapter.

1. Put the items with the same format together.


2. Arrange the test items from easy to difficult.
3. Give proper spacing for each item for easy reading.
4. Keep questions and options in the same page.
5. Place the illustrations near the options.
6. Check the key answer.
7. Check the direction of the test.
8. Provide space for name, date and score.
9. Proofread the test.
10. Reproduce the test.

ADMINISTERING THE EXAMINATION

After constructing the test items and putting them in order, the next step is to administer the test to the students. The administration procedures greatly affect the performance of the students in the test. Test administration does not simply mean giving the test questions to the students and collecting the test papers after the given time. Below are guidelines for administering the test before, during, and after the examination.

Guidelines Before Administering Examinations

1. Try to induce a positive test-taking attitude.
2. Inform the students about the purpose of the test.
3. Give oral directions as early as possible before distributing the tests.
4. Give test-taking hints about whether guessing, skipping, and the like are strictly prohibited.
5. Inform the students about the length of time allowed for the test. If possible, write on the board the time by which they must be finished answering the test. Give the students a warning before the end of the time limit.
6. Tell the students how to signal or call your attention if they have a question.
7. Tell the students what to do with their papers when they are done answering the test (how papers are to be collected).
8. Tell the students what to do when they are done with the test, particularly if they are to go on to another activity (also write these directions on the chalkboard so they can refer to them).
9. Rotate the method of distributing papers so you don't always start from the left or the front row.
10. Make sure the room is well lighted and has a comfortable temperature.
11. Remind students to put their names on their papers (and where to do so).
12. If the test has more than one page, have each student check to see that all pages are there.

Guidelines During the Examination

1. Do not give instructions or talk while the examination is going on; minimize interruptions and distractions.
2. Avoid giving hints.
3. Monitor to check student progress and discourage cheating.
4. Give a time warning if students are not pacing their work appropriately.
5. Make a note of any questions students ask during the test so that items can be revised for future use.
6. Test papers must be collected uniformly to save time and to avoid misplacing test papers.

Guidelines After the Examination

After the examination, the teacher needs to score the test papers, record the results of the examination, return the test papers, and, last, discuss the test items in class so that they can be analyzed and improved for future use.

1. Grade the papers (and add comments if you can); do test analysis (see the module
on test analysis) after scoring and before returning papers to students if at all
possible. If it is impossible to do your test analysis before returning the papers, be
sure to do it at another time. It is important to do both the evaluation of your
students and the improvement of your tests.
2. If you are recording grades or scores, record them in pencil in your class record before returning the papers. If there are errors or adjustments in grading, the grades are easier to change when recorded in pencil.
3. Return papers in a timely manner.
4. Discuss test items with the students. If students have questions, agree to look over their papers again, as well as the papers of others who have the same question. It is usually better not to agree to make changes in grades on the spur of the moment while discussing the tests with the students, but to give yourself time to consider what action you want to take. The test analysis may have already alerted you to a problem with a particular question that is common to several students, and you may already have made a decision regarding that question (to disregard the question and reduce the highest possible score accordingly, to give all students credit for that question, among others).

ANALYZING THE TEST

After administering and scoring the test, the teacher should also analyze the quality of each item in the test. Through this, you can identify the items that are good, the items that need improvement, and the items to be removed from the test. But when do we consider that a test is good? How do we evaluate the quality of each item in the test? Why is it necessary to evaluate each item in the test? Lewis Aiken (1997), an author on psychological and educational measurement, pointed out that a "postmortem" is just as necessary in classroom assessment as it is in medicine.

In this section, we shall introduce the technique to help teachers determine the
quality of a test item known as item analysis. One of the purposes of item analysis is to
improve the quality of the assessment tools. Through this process, we can identify the
item that is to be retained, revised or rejected and also the content of the lesson that is
mastered or not.

There are two kinds of item analysis, quantitative item analysis and qualitative
item analysis (Kubiszyn and Borich, 2007).

Item Analysis

Item analysis is the process of examining the students' responses to individual items in the test. It consists of different procedures for assessing the quality of the test items given to the students. Through item analysis we can identify which of the given test items are good and which are defective. Good items are to be retained, and defective items are to be improved, revised, or rejected.

Uses of Item Analysis

1. Item analysis data provide a basis for efficient class discussion of the test
results.
2. Item analysis data provide a basis for remedial work.
3. Item analysis data provide a basis for general improvement of classroom
instruction.
4. Item analysis data provide a basis for increased skills in test construction.
5. Item analysis procedures provide a basis for constructing a test bank.

Types of Quantitative Item Analysis

There are three common types of quantitative item analysis which provide teachers with three different types of information about individual test items: the difficulty index, the discrimination index, and response options analysis.

1. Difficulty Index
It refers to the proportion of the number of students in the upper and lower groups who answered an item correctly. The larger the proportion, the more students have learned the content measured by the item. To compute the difficulty index of an item, use the formula:

DF = n / N, where

DF = difficulty index
n = number of students selecting the correct answer in the upper group and in the lower group
N = total number of students who answered the test

Level of Difficulty

To determine the level of difficulty of an item, first find the difficulty index using the formula, then identify the level of difficulty using the ranges given below.

Index Range     Difficulty Level
0.00 – 0.20     Very Difficult
0.21 – 0.40     Difficult
0.41 – 0.60     Average/Moderately Difficult
0.61 – 0.80     Easy
0.81 – 1.00     Very Easy

The higher the value of the index of difficulty, the easier the item is. Hence, more
students got the correct answer and more students mastered the content measured by
that item.
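The formula and the ranges above can be sketched in a few lines of Python. The function names and the counts in the example are hypothetical, not part of the textbook:

```python
def difficulty_index(correct_upper, correct_lower, total_examinees):
    """DF = n / N: proportion of all examinees answering the item correctly,
    where n combines the correct responses of the upper and lower groups."""
    return (correct_upper + correct_lower) / total_examinees

def difficulty_level(df):
    """Classify a difficulty index using the ranges in the table above."""
    if df <= 0.20:
        return "Very Difficult"
    elif df <= 0.40:
        return "Difficult"
    elif df <= 0.60:
        return "Average/Moderately Difficult"
    elif df <= 0.80:
        return "Easy"
    return "Very Easy"

# Hypothetical item: 12 upper-group and 6 lower-group students answered
# correctly out of 40 examinees in total.
df = difficulty_index(12, 6, 40)
print(df, difficulty_level(df))  # 0.45 Average/Moderately Difficult
```

A difficulty index of 0.45 falls in the 0.41–0.60 range, so roughly half the class answered correctly and the item is of average difficulty.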

2. Discrimination Index
The discrimination index is the power of the item to discriminate between the students who scored high and those who scored low in the overall test. In other words, it is the power of the item to discriminate between the students who know the lesson and those who do not.

It is computed as the number of students in the upper group who got an item correctly minus the number of students in the lower group who got the item correctly, divided by the number of students in either the upper group or the lower group (use the higher number if they are not equal).
Discrimination index is the basis of measuring the validity of an item. This
index can be interpreted as an indication of the extent to which overall
knowledge of the content area or mastery of the skills is related to the response
on an item.

Types of Discrimination Index

There are three kinds of discrimination index: positive discrimination, negative discrimination, and zero discrimination.

1. Positive discrimination happens when more students in the upper group got the item correctly than students in the lower group.
2. Negative discrimination occurs when more students in the lower group got the item correctly than students in the upper group.
3. Zero discrimination happens when the numbers of students in the upper and lower groups who answered the item correctly are equal; hence, the test item cannot distinguish between the students who performed well in the overall test and the students whose performance was very poor.

Level of Discrimination
Ebel and Frisbie (1986) as cited by Hetzel (1997) recommended the use of Level
of Discrimination of an Item for easier interpretation.

Index Range      Discrimination Level
0.19 and below   Poor item, should be eliminated or needs to be revised
0.20 – 0.29      Marginal item, needs some revision
0.30 – 0.39      Reasonably good item but possibly for improvement
0.40 and above   Very good item

Discrimination Index Formula

         CUG – CLG
DI = ----------------- , where
             D

DI = discrimination index value
CUG = number of students selecting the correct answer in the upper group
CLG = number of students selecting the correct answer in the lower group
D = number of students in either the lower group or the upper group

Note: Use the higher number in case the sizes of the upper and lower groups are not equal.
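The formula and the Ebel and Frisbie ranges can be combined into a small Python sketch. The function names and the example counts are hypothetical:

```python
def discrimination_index(correct_upper, correct_lower, upper_size, lower_size):
    """DI = (CUG - CLG) / D, with D the larger group size if the groups are unequal."""
    d = max(upper_size, lower_size)
    return (correct_upper - correct_lower) / d

def discrimination_level(di):
    """Interpret DI using the Ebel and Frisbie (1986) ranges above."""
    if di <= 0.19:
        return "Poor item, should be eliminated or needs to be revised"
    elif di <= 0.29:
        return "Marginal item, needs some revision"
    elif di <= 0.39:
        return "Reasonably good item but possibly for improvement"
    return "Very good item"

# Hypothetical item: 12 of 20 upper-group and 6 of 20 lower-group students correct.
di = discrimination_index(12, 6, 20, 20)
print(di, discrimination_level(di))  # 0.3 Reasonably good item ...
```

Because more upper-group than lower-group students got the item right, the index is positive; a negative result would signal negative discrimination and a likely defective item.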

Steps in Solving Difficulty Index and Discrimination Index

1. Arrange the scores from highest to lowest.
2. Separate the scores into an upper group and a lower group. There are different methods of doing this: (a) if a class consists of 30 students who take an exam, arrange their scores from highest to lowest, then divide them into two groups: the higher scores belong to the upper group and the lower scores belong to the lower group; (b) other literature suggests using 27%, 30%, or 33% of the students for the upper and lower groups. In the Licensure Examination for Teachers (LET), the test developers always use 27% of the students who participated in the examination for the upper and lower groups.
3. Count the number of students who chose each alternative in the upper and lower groups for each item, and record the information using the template:
Options A B C D E

Upper Group

Lower Group

Note: Put asterisk for the correct answer.


4. Compute the value of the difficulty index and the discrimination index, and also analyze each response option (the distracters).
5. Make an analysis for each item.
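Steps 1–3 above can be sketched in Python. The score sheet below is hypothetical; it pairs each examinee's total score with the option chosen on a single item, takes the top and bottom 27%, and tallies the options for that item:

```python
from collections import Counter

def split_groups(records, fraction=0.27):
    """Rank (total_score, option_chosen) records from highest to lowest
    score and return the top and bottom fraction (steps 1 and 2)."""
    ranked = sorted(records, key=lambda r: r[0], reverse=True)
    k = max(1, round(len(ranked) * fraction))
    return ranked[:k], ranked[-k:]

def tally_options(group, options="ABCD"):
    """Count how many examinees in a group chose each option (step 3)."""
    counts = Counter(choice for _score, choice in group)
    return {opt: counts.get(opt, 0) for opt in options}

# Hypothetical class of 10 examinees; B is the keyed answer for this item.
records = [(48, "B"), (45, "B"), (44, "B"), (40, "A"), (38, "C"),
           (35, "B"), (30, "D"), (28, "A"), (25, "C"), (20, "A")]
upper, lower = split_groups(records)  # 27% of 10 rounds to 3 per group
print(tally_options(upper))  # {'A': 0, 'B': 3, 'C': 0, 'D': 0}
print(tally_options(lower))  # {'A': 2, 'B': 0, 'C': 1, 'D': 0}
```

The resulting tallies fill in the Upper Group and Lower Group rows of the template, after which the difficulty and discrimination formulas can be applied to the counts for the keyed option.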

Checklist for Discrimination Index


It is very important to determine whether a test item will be retained, revised, or rejected. Using the discrimination index we can identify the nonperforming question items; just remember that it seldom indicates what the problem is. Use the checklist given below:
Yes No

1. Does the key discriminate positively?
2. Do the incorrect options discriminate negatively?

If the answers to questions 1 and 2 are both YES, retain the item.
If the answer to one question is YES and the other is NO, revise the item.
If the answers to questions 1 and 2 are both NO, eliminate or reject the item.
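The retain/revise/reject decision can be applied directly to the option tallies. In the sketch below (a hypothetical function on hypothetical counts), the key discriminates positively when more upper- than lower-group students choose it, and a distracter is treated as discriminating negatively when at least as many lower- as upper-group students choose it:

```python
def item_decision(upper_counts, lower_counts, key):
    """Apply the checklist: retain if the key discriminates positively and
    every distracter discriminates negatively; reject if neither condition
    holds; otherwise revise."""
    key_ok = upper_counts[key] > lower_counts[key]
    distracters_ok = all(lower_counts[opt] >= upper_counts[opt]
                         for opt in upper_counts if opt != key)
    if key_ok and distracters_ok:
        return "retain"
    if not key_ok and not distracters_ok:
        return "reject"
    return "revise"

# Hypothetical tallies for one item; B is the key.
upper = {"A": 2, "B": 15, "C": 2, "D": 1}
lower = {"A": 6, "B": 7, "C": 4, "D": 3}
print(item_decision(upper, lower, "B"))  # retain
```

Here 15 upper-group versus 7 lower-group students chose the key, and every distracter drew more lower- than upper-group students, so both checklist questions are answered YES and the item is retained.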

3. Analysis or Response Options


Aside from identifying the difficulty index and the discrimination index, another way
to evaluate the performance of an entire test item is through the analysis of the
response options. It is very important to examine the performance of each
option in a multiple-choice item. Through this, you can determine whether the
distracters (incorrect answers) are effective. An incorrect option is attractive
when more students in the lower group than in the upper group
choose it. Analyzing the incorrect options allows teachers to improve the test
items so that they can be used again in the future.
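The attractiveness rule above can be sketched in code. The function and the labels it returns are illustrative, not from the text; the example counts come from Example 1 later in this chapter.

```python
# Sketch: an incorrect option is an effective (attractive) distracter
# when more lower-group than upper-group students choose it.

def classify_distracters(upper, lower, key):
    """upper/lower map each option letter to a count; key is the
    correct option, which is skipped."""
    result = {}
    for option in upper:
        if option == key:
            continue
        if lower[option] > upper[option]:
            result[option] = "effective"
        elif upper[option] == lower[option] == 0:
            result[option] = "ineffective (chosen by no one)"
        else:
            result[option] = "needs revision"
    return result

# Response counts from Example 1 below (the key is B).
upper = {"A": 3, "B": 10, "C": 4, "D": 0, "E": 3}
lower = {"A": 4, "B": 4, "C": 8, "D": 0, "E": 4}
print(classify_distracters(upper, lower, "B"))
```

The output matches the conclusion drawn in Example 1: options A, C, and E attract more lower-group students and are kept, while option D attracts no one and should be replaced.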

Distracter Analysis

1. Distracter

A distracter is the term used for an incorrect option in a multiple-choice type of
test, while the correct answer represents the key. It is very important for the test
writer to know if the distracters are effective or good distracters. Using
quantitative item analysis we can determine if the options are good or if the
distracters are effective.

Item analysis can identify nonperforming test items, but it seldom
indicates the error or the problem in the given item. There are several factors to
consider why students fail to get the correct answer to a given question.

a. It is not taught in the class properly.


b. It is ambiguous.
c. The correct answer is not in the given options.
d. It has more than one correct answer.
e. It contains grammatical clues to mislead the students.
f. The student is not aware of the content.
g. The students were confused by the logic of the question because it has double
negatives.

58
h. The student failed to study the lesson.
2. Miskeyed item
The test item is a potential miskey if more students from the upper
group choose an incorrect option than the key.
3. Guessing item
Students from the upper group have an equal spread of choices among the given
alternatives. Students from the upper group guess their answers for the
following reasons:
a. The content of the test is not discussed in the class or in the text.
b. The test item is very difficult.
c. The question is trivial.
4. Ambiguous item
This happens when students from the upper group choose an incorrect option
and the keyed answer in about equal numbers.

Qualitative Item Analysis

Qualitative item analysis (Zurawski, R.M.) is a process in which the teacher or an
expert carefully proofreads the test before it is administered, to check if there are
typographical errors, to avoid grammatical clues that may give away the
correct answer, and to ensure that the reading level of the material is appropriate. The
procedure can also include small group discussions on the quality of the examination
and its items with examinees who have already taken the test. According to Cohen,
Swerdlik, and Smith (1992), as cited by Zurawski, students who took the examination are
asked to express verbally their experience in answering each item in the examination.
This procedure can help the teacher determine whether the test takers
misunderstood a certain item, and it can also help determine why they
misunderstood it.

IMPROVING TEST ITEMS

As presented in the introduction of this chapter, item analysis enables
teachers to improve and enhance their skills in writing test items. To improve a
multiple-choice test item, we shall consider the stem of the item, the distracters, and
the key answer.

How to Improve the Test Item

Consider the following examples in analyzing a test item, with some notes on
how to improve the item based on the results of the item analysis.

Example 1. A class is composed of 40 students. Divide the group into two. Option
B is the correct answer. Based on the given data in the table, as a teacher, what would
you do with the test item?

Option A B* C D E

Upper 3 10 4 0 3
Group

Lower 4 4 8 0 4
Group

1. Compute the difficulty index.

n = 10 + 4 = 14
N = 40
DF = n/N
DF = 14/40
DF = 0.35 or 35%
2. Compute the discrimination index.
CUG = 10
CLG = 4
D = 20
DI = (CUG − CLG)/D
DI = (10 − 4)/20
DI = 6/20
DI = 0.30 or 30%
3. Make an analysis about the level of difficulty, discrimination, and distracters.
a. Only 35% of the examinees got the answer correctly; hence, the item is difficult.
b. More students from the upper group got the answer correctly; hence, it has a
positive discrimination.
c. Retain options A, C, and E because most of the students who did not perform
well in the overall examination selected them. Those options attract most
students from the lower group.
4. Conclusion: Retain the test item but change option D, making it more realistic so
that it becomes effective for the upper and lower groups. As a rule of thumb, an
incorrect option should be chosen by at least 5% of the examinees.

Example 2. A class is composed of 50 students. Use 27% to get the upper and the
lower groups. Analyze the item given the following results. Option D is the correct
answer. What will you do with the test item?
Option A B C D* E

Upper Group 3 1 2 6 2
(27%)

Lower Group 5 0 4 4 1
(27%)

1. Compute the difficulty index.
n = 6 + 4 = 10
N = 28
DF = n/N
DF = 10/28
DF = 0.36 or 36%

2. Compute the discrimination index.

CUG = 6
CLG = 4
D = 14
DI = (CUG − CLG)/D
DI = (6 − 4)/14
DI = 2/14
DI = 0.14 or 14%
3. Make an analysis.
a. Only 36% of the examinees got the answer correctly; hence, the item is difficult.
b. More students from the upper group got the answer correctly; hence, it has a
positive discrimination.
c. Modify options B and E because more students from the upper group
chose them compared with the lower group; hence, they are not effective
distracters, because most of the students who performed well in the overall
examination selected them as their answer.
d. Retain options A and C because most of the students who did not perform well
in the overall examination selected them as the correct answer. Hence,
options A and C are effective distracters.
4. Conclusion: Revise the item by modifying options B and E.

Example 3. A class is composed of 50 students. Use 27% to get the upper and the
lower groups. Analyze the item given the following results. Option E is the correct
answer. What will you do with the test item?
Option A B C D E*

Upper Group 2 3 2 2 5
(27%)

Lower Group 2 2 1 1 8
(27%)

1. Compute the difficulty index.
n = 5 + 8 = 13
N = 28
DF = n/N
DF = 13/28
DF = 0.46 or 46%
2. Compute the discrimination index.
CUG = 5
CLG = 8
D = 14
DI = (CUG − CLG)/D
DI = (5 − 8)/14
DI = −3/14
DI = −0.21 or −21%
3. Make an analysis.
a. 46% of the students got the answer to the test item correctly; hence, the test item
is moderately difficult.
b. More students from the lower group got the item correctly; therefore, it has a
negative discrimination. The discrimination index is −21%.
c. No need to analyze the distracters because the item discriminates negatively.
d. Modify all the distracters because they are not effective. Most of the students in
the upper group chose the incorrect options. The options are effective if most of
the students in the lower group choose the incorrect options.
4. Conclusion: Reject the item because it has a negative discrimination index.

Example 4. Potential Miskeyed Item. Make an item analysis of the table below.

What will you do with a test item that is a potential miskey?

Option A* B C D E

Upper Group 1 2 3 10 4

Lower Group 3 4 4 4 5

1. Compute the difficulty index.

n = 1 + 3 = 4
N = 40
DF = n/N
DF = 4/40
DF = 0.10 or 10%
2. Compute the discrimination index.
CUG = 1
CLG = 3
D = 20
DI = (CUG − CLG)/D
DI = (1 − 3)/20
DI = −2/20
DI = −0.10 or −10%
3. Make an analysis.
a. More students from the upper group chose option D than option A, even
though option A is supposedly the correct answer.
b. Most likely the teacher has written the wrong answer in the key.
c. The teacher should check whether the answer he/she thought is correct was
indeed miskeyed.
d. If the teacher miskeyed it, he/she must recheck and retally the scores on the
students' test papers before giving them back.
e. If option A is really the correct answer, revise the item to weaken option D; distracters
are not supposed to draw more attention than the keyed answer.
f. Only 10% of the students got the answer to the test item correctly; hence, the
test item is very difficult.
g. More students from the lower group got the item correctly; therefore, a negative
discrimination resulted. The discrimination index is −10%.
h. No need to analyze the distracters because the test item is very difficult and
discriminates negatively.
4. Conclusion: Reject the item because it is very difficult and has a negative
discrimination.

Example 5. Ambiguous Item. Below is the result of the item analysis of a test with an
ambiguous test item. What can you say about the item? Are you going to retain,
revise, or reject it?
Option A B C D E*

Upper Group 7 1 1 2 8

Lower Group 6 2 3 3 6

1. Compute the difficulty index.

n = 8 + 6 = 14
N = 39
DF = n/N
DF = 14/39
DF = 0.36 or 36%
2. Compute the discrimination index.
CUG = 8
CLG = 6
D = 20
DI = (CUG − CLG)/D
DI = (8 − 6)/20
DI = 2/20
DI = 0.10 or 10%
3. Make an analysis.
a. Only 36% of the students got the answer to the test item correctly; hence, the
test item is difficult.
b. More students from the upper group got the item correctly; hence, it
discriminates positively. The discrimination index is 10%.
c. About equal numbers of top students went for option A and option E; this
implies that they could not tell which is the correct answer. The students do
not know the content of the test, thus a reteach is needed.
4. Conclusion: Revise the test item because it is ambiguous.

Example 6. Guessing Item. Below is the result of the item analysis for a test item whose
answers were mostly based on guessing. Are you going to reject, revise, or retain the test item?
Option A B C* D E

Upper Group 4 3 4 3 6

Lower Group 3 4 3 4 5

1. Compute the difficulty index.

n = 4 + 3 = 7
N = 39
DF = n/N
DF = 7/39
DF = 0.18 or 18%
2. Compute the discrimination index.
CUG = 4
CLG = 3
D = 20
DI = (CUG − CLG)/D
DI = (4 − 3)/20
DI = 1/20
DI = 0.05 or 5%
3. Make an analysis.
a. Only 18% of the students got the answer to the test item correctly; hence, the
test item is very difficult.
b. More students from the upper group got the correct answer to the test item;
therefore, the test item has a positive discrimination. The discrimination index
is 5%.
c. Students responded about equally to all alternatives, an indication that they
were guessing.
Three possibilities why students guess the answer on a test item:
∙ The content of the test item had not yet been discussed in class
because the test was designed in advance;
∙ The test item was so badly written that the students had no idea what the
question was really about; and
∙ The test item was very difficult, as shown by the difficulty index and the
low discrimination index.
4. Conclusion: Reject the item because it is very difficult; reteach the material to the
class.

Example 7. Ineffective Distracter. The table below shows an item analysis of a test item with
an ineffective distracter. What can you conclude about the test item?
Option A B C* D E

Upper Group 5 3 9 0 3

Lower Group 6 4 6 0 4

1. Compute the difficulty index.

n = 9 + 6 = 15
N = 40
DF = n/N
DF = 15/40
DF = 0.38 or 38%
2. Compute the discrimination index.
CUG = 9
CLG = 6
D = 20
DI = (CUG − CLG)/D
DI = (9 − 6)/20
DI = 3/20
DI = 0.15 or 15%
3. Make an analysis.
a. Only 38% of the students got the answer to the test item correctly; hence, the
test item is difficult.
b. More students from the upper group answered the test item correctly; as a
result, the item has a positive discrimination. The discrimination index is 15%.
c. Options A, B, and E are attractive distracters.
d. Option D is ineffective; therefore, replace it with a more realistic one.
4. Conclusion: Revise the item by changing option D.

CHAPTER 5

UTILIZATION OF ASSESSMENT DATA

Learning Outcomes

At the end of this chapter, the students should be able to:

1. Apply statistics in research and in any systematic investigation;


2. Construct a frequency distribution for a given set of scores;
3. Graph the scores using a histogram and a frequency polygon;
4. Calculate the mean, median, mode, decile, quartile, and percentile of the
students' scores;
5. Identify the different properties of the measures of central tendency;
6. Identify the uses of the different measures of variability;
7. Calculate the value and make an analysis of the range, mean deviation, quartile
deviation, variance, and standard deviation of given scores;
8. Differentiate standard deviation from coefficient of variation;
9. Identify the properties of the different measures of variability;
10. Apply the concept of skewness in identifying the performance of the students;
11. Determine the spread of scores using the measures of variation;
12. Compare the performance of the students using measures of central tendency
and measures of variability;
13. Convert raw scores to standard scores;
14. Determine the relationship of two groups of scores; and
15. Compute the r and ρ values of scores and make an analysis.

INTRODUCTION

Statistics is a very important tool in the utilization of assessment data, most
especially in describing, analyzing, and interpreting the performance of students in
assessment procedures. Teachers should have the necessary background in the
statistical procedures used in the assessment of student learning in order to give a correct
description and interpretation of the achievement of students in a certain test,
whether it is a classroom assessment conducted by the teacher or a division or national
assessment conducted by the Department of Education.

In this chapter, we shall discuss the important tools in analyzing and interpreting
assessment results. These statistical tools are measures of central tendency, measures of
variation, skewness, correlation, and different types of converted scores.

DEFINITION OF STATISTICS

Statistics is a branch of science, which deals with the collection, presentation,


analysis and interpretation of quantitative data.

Branches of Statistics

Descriptive statistics is a method concerned with collecting, describing, and
analyzing a set of data without drawing conclusions (or inferences) about a larger group.

Inferential statistics is a branch of statistics concerned with the analysis of a
subset of data, leading to predictions or inferences about the entire set of data.
FREQUENCY DISTRIBUTION

A frequency distribution is a tabular arrangement of data into appropriate
categories showing the number of observations in each category or group. It has two
major advantages: (a) it condenses the data into a compact table; and (b) it makes the
data easier to interpret.

Parts of Frequency Table

1. Class limits are the groupings or categories defined by the lower and upper limits.
Examples: LL – UL
10 – 14
15 – 19
20 – 24
The lower class limit (LL) represents the smallest number in each group.
The upper class limit (UL) represents the highest number in each group.
2. Class size (c.i) is the width of each class interval.
Examples: LL – UL
10 – 14
15 – 19
20 – 24

The class size in this score distribution is 5.

3. Class boundaries are the numbers used to separate each category in the
frequency distribution without the gaps created by the class limits. The scores of
the students are discrete. Add 0.5 to the upper limit to get the upper class
boundary and subtract 0.5 from the lower limit to get the lower class boundary of
each group or category.
Examples: LL – UL LCB - UCB
10 – 14 9.5 – 14.5
15 – 19 14.5 – 19.5
20 – 24 19.5 – 24.5
4. Class marks are the midpoints of the lower and upper class limits. The formula is
XM = (LL + UL)/2.

Examples: LL – UL XM
10 – 14 12

15 – 19 17
20 – 24 22

Steps in Constructing Frequency Distribution


1. Compute the value of the range (R). The range is the difference between the highest
score and the lowest score.
R = HS – LS

Determine the class size (c.i). The class size is the quotient when you
divide the range by the desired number of classes or categories. The desired
number of classes is usually 5, 10, or 15, depending on the number of scores
in the distribution. If the desired number of classes is not identified, find it
using the formula k = 1 + 3.3 log n. Then

c.i = R / desired number of classes, or c.i = R/k.
2. Set up the class limits of each class or category. Each class is defined by a lower
limit and an upper limit. Use the lowest score as the lower limit of the first class.
3. Set up the class boundaries if needed: add 0.5 to each upper limit and subtract
0.5 from each lower limit.
4. Tally the scores in the appropriate classes.
5. Find the other parts if necessary, such as the class marks, among others.
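The five steps can be sketched in Python using the raw scores from the example that follows. This is an illustrative sketch (variable names are mine); note that the per-class tallies you obtain depend on the raw data exactly as transcribed.

```python
import math

# The 40 raw quiz scores from the example below.
scores = [17, 25, 30, 33, 25, 45, 23, 19,
          27, 35, 45, 48, 20, 38, 39, 18,
          44, 22, 46, 26, 36, 29, 15, 21,
          50, 47, 34, 26, 37, 25, 33, 49,
          22, 33, 44, 38, 46, 41, 37, 32]

R = max(scores) - min(scores)               # range: 50 - 15 = 35
k = int(1 + 3.3 * math.log10(len(scores)))  # number of classes (k formula)
ci = math.ceil(R / k)                       # class size, rounded up

# Build class limits starting at the lowest score, then tally.
classes = []
lower = min(scores)
while lower <= max(scores):
    upper = lower + ci - 1
    freq = sum(lower <= s <= upper for s in scores)
    classes.append((lower, upper, freq))
    lower = upper + 1

print(R, k, ci)  # 35 6 6
for lo, hi, f in classes:
    print(f"{lo} - {hi}: {f}")
```

The computed R = 35, k = 6, and c.i = 6 agree with the worked example below.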

Example: The raw scores of 40 students in a 50-item mathematics quiz are given below.
Construct a frequency distribution following the steps given previously.
17 25 30 33 25 45 23 19

27 35 45 48 20 38 39 18

44 22 46 26 36 29 15-LS 21

50-HS 47 34 26 37 25 33 49

22 33 44 38 46 41 37 32

R = HS – LS
R = 50 – 15
R = 35
n = 40

Solve for the value of k.
k = 1 + 3.3 log n
k = 1 + 3.3 log 40
k = 1 + 3.3 (1.602059991)
k = 1 + 5.286797971
k = 6.286797971
k ≈ 6

Find the class size.
c.i = R/k
c.i = 35/6
c.i = 5.833
c.i ≈ 6

Construct the class limits starting with the lowest score as the lower limit of the
first category. The last category should contain the highest score in the distribution.
Each category should have a width equal to the class size of 6. Count the number of
scores that fall in each category (f).

X Tally frequency (f)

15 – 20 //// 4

21 – 26 ///////// 9

27 – 32 /// 3

33 – 38 ////////// 10

39 – 44 //// 4

45 – 50 ////////// 10
n = 40

Find the class boundaries and class marks of the given score distribution.

X f Class Boundaries XM

15 – 20 4 14.5 – 20.5 17.5

21 – 26 9 20.5 – 26.5 23.5

27 – 32 3 26.5 – 32.5 29.5

33 – 38 10 32.5 – 38.5 35.5

39 – 44 4 38.5 – 44.5 41.5

45 – 50 10 44.5 – 50.5 47.5


n = 40

Graphical Representation of Scores in Frequency Distribution

The scores expressed in a frequency distribution can be meaningful and easier to
interpret when they are graphed. There are several methods of graphing a frequency
distribution: the bar graph or histogram, the frequency polygon, and the smooth curve.
The bar graph or histogram and the frequency polygon will be discussed in this section,
while the smooth curve will be discussed later under skewness.

A histogram consists of a set of rectangles having bases on the horizontal axis
which center at the class marks. The base widths correspond to the class size, and the
heights of the rectangles correspond to the class frequencies. The histogram is best used
for the graphical representation of discrete or non-continuous data.

Frequency polygon is constructed by plotting the class marks against the class
frequencies. The x-axis corresponds to the class marks and the y-axis corresponds to the
class frequencies. Connect the points consecutively using a straight line. Frequency
polygon is best used in representing continuous data such as the scores of students in a
given test.

Construct a histogram and a frequency polygon using the frequency distribution
of the scores of the 40 students in the 50-item mathematics quiz previously discussed.

X frequency (f)

15 – 20 4

21 – 26 9

27 – 32 3

33 – 38 10

39 – 44 4

45 – 50 10
n = 40

DESCRIBING GROUP PERFORMANCE

There are two major concepts in describing the assessed performance of a
group: measures of central tendency and measures of variability. Measures of central
tendency are used to determine the average score of a group of scores, while measures
of variability indicate the spread of scores in the group. These two concepts are very
important and helpful in understanding the performance of the group.

Measure of Central Tendency

A measure of central tendency provides a very convenient way of describing a set
of scores with a single number that describes the performance of the group. It is also
defined as a single value that is used to describe the "center" of the data. It is thought of
as a typical value in a given distribution. There are three commonly used measures of
central tendency: the mean, median, and mode. In this section, we shall
discuss how to compute the values and some of the properties of the mean, median, and
mode as applied in a classroom setting.

1. Mean
The mean is the most commonly used measure of the center of data and is also
referred to as the "arithmetic average."
Computation of the Population Mean

μ = ΣX/N = (X₁ + X₂ + X₃ + ⋯ + X_N)/N

Computation of the Sample Mean

x̄ = ΣX/n = (X₁ + X₂ + X₃ + ⋯ + Xₙ)/n

Computation of the Mean for Ungrouped Data

1. x̄ = ΣX/n
2. x̄ = Σfx/n
Example 1: The scores of 15 students in a 25-item Mathematics I quiz are listed
below. The highest score is 25 and the lowest score is 10. The scores are:
25, 20, 18, 18, 17, 15, 15, 15, 14, 14, 13, 12, 12, 10, 10. Find the mean of the
scores.
X (scores)
25
20
18
18
17
15
15
15
14
14
13
12
12
10
10
Ʃx = 228
n = 15
x̄ = ΣX/n = 228/15 = 15.2

Analysis:

The average performance of the 15 students who participated in the mathematics quiz
consisting of 25 items is 15.2. The implication of this is that students who got scores
below 15.2 did not perform well in the said examination, while students who got scores
higher than 15.2 performed well compared to the performance of the whole
class.
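Example 1 can be verified with a one-line computation (a sketch, not part of the original text):

```python
# Mean of the 15 quiz scores from Example 1.
scores = [25, 20, 18, 18, 17, 15, 15, 15, 14, 14, 13, 12, 12, 10, 10]
mean = sum(scores) / len(scores)
print(mean)  # 15.2
```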

Example 2: Find the Grade Point Average (GPA) of Ritz Glenn for the first
semester of the school year 2010 – 2011. Use the table below:
Subject Grade (xi) Units (wi) (wi) (xi)

BM 112 1.25 3 3.75

BM 101 1.00 3 3.00

AC 103N 1.25 6 7.50

BEC 111 1.00 3 3.00

MGE 101 1.50 3 4.50

MKM 101 1.25 3 3.75

FM 111 1.50 3 4.50

PEN 2 1.00 2 2.00

Ʃ(wi) = 26 Ʃ(wi) (xi) = 32.00

x̄ = Σ(wᵢ)(xᵢ) / Σ(wᵢ)
x̄ = 32/26
x̄ = 1.23

The Grade Point Average of Ritz Glenn for the first semester SY 2010 – 2011 is 1.23.
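The weighted-mean GPA computation from the table above can be sketched as follows (the lists mirror the grade and unit columns):

```python
# Weighted mean: GPA = sum(w_i * x_i) / sum(w_i), using the grades
# (x_i) and units (w_i) from Example 2.
grades = [1.25, 1.00, 1.25, 1.00, 1.50, 1.25, 1.50, 1.00]  # x_i
units = [3, 3, 6, 3, 3, 3, 3, 2]                           # w_i
gpa = sum(w * x for w, x in zip(units, grades)) / sum(units)
print(round(gpa, 2))  # 1.23
```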

Mean for Grouped Data

Grouped data are data or scores that are arranged in a frequency
distribution. A frequency distribution is the arrangement of scores according to
categories or classes, including the frequency. Frequency is the number of observations
falling in a category.

For this particular lesson we shall discuss only one formula for solving the mean
of grouped data, which is called the midpoint method. The formula is:

x̄ = ΣfXm / n

where x̄ = mean value
f = frequency in each class or category
Xm = midpoint of each class or category
ΣfXm = summation of the products of f and Xm

Steps of Solving Mean for Grouped Data

1. Find the midpoint or class mark (Xm) of each class or category using the
formula Xm = (LL + UL)/2.
2. Multiply the frequency and the corresponding class mark (fXm).
3. Find the sum of the results in step 2.
4. Solve the mean using the formula x̄ = ΣfXm / n.

Example 3: The scores of 40 students in a science class consist of 60 items and they
are tabulated below.

X f Xm fXm

10 – 14 5 12 60

15 – 19 2 17 34

20 – 24 3 22 66

25 – 29 5 27 135

30 – 34 2 32 64

35 – 39 9 37 333

40 – 44 6 42 252

45 – 49 3 47 141

50 – 54 5 52 260
n = 40 ΣfXm = 1,345

x̄ = ΣfXm / n
x̄ = 1,345 / 40
x̄ = 33.63

Analysis:

The mean performance of the 40 students in the science quiz is 33.63. Those
students who got scores below 33.63 did not perform well in the said examination,
while those students who got scores above 33.63 performed well.
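The midpoint method of Example 3 can be sketched as follows (a hedged sketch; the tuples mirror the class limits and frequencies of the table):

```python
# Midpoint-method mean for the grouped scores of Example 3:
# each class is (lower limit, upper limit, frequency).
classes = [(10, 14, 5), (15, 19, 2), (20, 24, 3), (25, 29, 5),
           (30, 34, 2), (35, 39, 9), (40, 44, 6), (45, 49, 3), (50, 54, 5)]
n = sum(f for _, _, f in classes)                        # 40
total = sum(f * (lo + hi) / 2 for lo, hi, f in classes)  # sum of f*Xm = 1345
mean = total / n
print(mean)  # 33.625, which the text rounds to 33.63
```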

Properties of the Mean

1. It measures stability. The mean is the most stable among the measures of central
tendency because every score contributes to its value.
2. The sum of each score's distance from the mean is zero.
3. It is easily affected by extreme scores.

4. It may not be an actual score in the distribution.
5. It can be applied to interval level of measurement.
6. It is very easy to compute.

When to Use the Mean

1. Sampling stability is desired.


2. Other measures are to be computed such as standard deviation, coefficient of
variation and skewness.

2. Median

The median is the second type of measure of central tendency. The median
divides the scores in the distribution into two equal parts: fifty percent (50%) of the
scores lie below the median value and 50% lie above it. It is also known as the
middle score or the 50th percentile. For classroom purposes, the first thing to do is to
arrange the scores in proper order, that is, from the lowest score to the highest score or
from the highest score to the lowest. When the number of cases is odd, the
median is the score that has the same number of scores below and above it. When the
number of cases is even, the median is the average of the two middlemost scores.
Median of Ungrouped Data

1. Arrange the scores (from lowest to highest or highest to lowest).
2. Determine the middlemost score in the distribution if n is an odd number,
and get the average of the two middlemost scores if n is an even number.

Example 1: Find the median score of 7 students in an English class.

X (score)
19
17
16
15
10
5
2

Analysis:

The median score is 15. Fifty percent (50%) or three of the scores are above 15
(19,17,16) and 50% or three of the scores are below 15 (10,5,2).

Example 2: Find the median score of 8 students in an English class.

X (score)
30
19
17
16
15
10
5
2

x̃ = (16 + 15)/2
x̃ = 15.5

Analysis:

The median score is 15.5, which means that 50% of the scores in the distribution
are lower than 15.5 (those are 15, 10, 5, and 2) and 50% are greater than 15.5 (those are
30, 19, 17, and 16). That is, four (4) scores are below 15.5 and four (4) scores are above
15.5.
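Both median examples can be verified with a small helper (an illustrative sketch; the function name is mine):

```python
# Median of ungrouped data: sort, then take the middle score (odd n)
# or the average of the two middlemost scores (even n).
def median(scores):
    s = sorted(scores)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

print(median([19, 17, 16, 15, 10, 5, 2]))      # 15 (Example 1)
print(median([30, 19, 17, 16, 15, 10, 5, 2]))  # 15.5 (Example 2)
```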

Median of Grouped Data


Formula:

x̃ = LB + [(n/2 − cfp)/fm] c.i

where:

x̃ = median value

MC = median class, the category containing n/2

LB = lower boundary of the median class (MC)

cfp = cumulative frequency before the median class, when the scores are
arranged from lowest to highest value

fm = frequency of the median class

c.i = size of the class interval

Steps in Solving Median for Grouped Data

1. Complete the table for cf<.
2. Get n/2 of the scores in the distribution so that you can identify the MC.
3. Determine LB, cfp, fm, and c.i.
4. Solve the median using the formula.

Example 3: Scores of 40 students in a science class consist of 60 items and they are
tabulated below. The highest score is 54 and the lowest score is 10.
X F cf<

10 – 14 5 5

15 – 19 2 7

20 – 24 3 10

25 - 29 5 15

30 – 34 2 17 (cfp)

35 – 39 9 (fm) 26

40 - 44 6 32

45 – 49 3 35
50 - 54 5 40

n = 40

Solution:

n/2 = 40/2 = 20

The category containing n/2 is 35 – 39, so MC = 35 – 39.

LL of the MC = 35

LB = 34.5

cfp = 17

fm = 9

c.i = 5

x̃ = LB + [(n/2 − cfp)/fm] c.i
x̃ = 34.5 + [(20 − 17)/9] 5
x̃ = 34.5 + (3/9) 5
x̃ = 34.5 + 15/9
x̃ = 34.5 + 1.67
x̃ = 36.17

Analysis:

The median value is 36.17, which means that 50% of the scores (20 scores) are less
than 36.17.
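The grouped-median formula of Example 3 can be sketched as follows (a hedged sketch; the function name and tuple layout are mine):

```python
# Grouped median: x = LB + [(n/2 - cfp)/fm] * c.i, where the median
# class is the first class whose cumulative frequency reaches n/2.
def grouped_median(classes):
    """classes: list of (lower limit, upper limit, frequency),
    arranged from lowest to highest."""
    n = sum(f for _, _, f in classes)
    cum = 0  # cumulative frequency before the current class (cfp)
    for lo, hi, f in classes:
        if cum + f >= n / 2:
            lb = lo - 0.5            # lower boundary of the median class
            ci = hi - lo + 1         # class size
            return lb + ((n / 2 - cum) / f) * ci
        cum += f

classes = [(10, 14, 5), (15, 19, 2), (20, 24, 3), (25, 29, 5),
           (30, 34, 2), (35, 39, 9), (40, 44, 6), (45, 49, 3), (50, 54, 5)]
print(round(grouped_median(classes), 2))  # 36.17
```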

Properties of the Median

1. It may not be an actual observation in the data set.


2. It can be applied in ordinal level.
3. It is not affected by extreme values because median is a positional measure.

When to Use the Median

1. The exact midpoint of the score distribution is desired.


2. There are extreme scores in the distribution.

3. Mode

The mode is the third measure of central tendency. The mode or modal score is the
score or scores that occur most frequently in the distribution. It is classified as unimodal,
bimodal, trimodal, or multimodal. Unimodal is a distribution of scores that consists
of only one mode. Bimodal is a distribution of scores that consists of two modes.
Trimodal is a distribution of scores that consists of three modes, while multimodal is a
distribution of scores that consists of more than two modes.

Example 1. Scores of 10 students of Section A, Section B, and Section C


Scores of Section A Scores of Section B Scores of Section C

25 25 25

24 24 25

24 24 25

20 20 22

20 18 21

20 18 21

16 17 21

12 10 18

10 9 18

7 7 18

The score that appeared most in Section A is 20; hence, the mode of Section A is
20. There is only one mode, therefore the score distribution is called unimodal. The modes
of Section B are 18 and 24, since both appeared twice. There are two modes in Section
B; hence, the distribution is a bimodal distribution. The modes for Section C are 18, 21,
and 25. There are three modes for Section C; therefore, it is called a trimodal or
multimodal distribution.
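The three section lists can be checked with a small helper (an illustrative sketch; the function name is mine):

```python
from collections import Counter

# Return every score that occurs with the highest frequency, sorted.
def modes(scores):
    counts = Counter(scores)
    top = max(counts.values())
    return sorted(s for s, c in counts.items() if c == top)

print(modes([25, 24, 24, 20, 20, 20, 16, 12, 10, 7]))   # [20] - unimodal
print(modes([25, 24, 24, 20, 18, 18, 17, 10, 9, 7]))    # [18, 24] - bimodal
print(modes([25, 25, 25, 22, 21, 21, 21, 18, 18, 18]))  # [18, 21, 25] - trimodal
```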

Mode for Grouped Data
In solving for the mode value using grouped data, use the formula:

x̂ = LB + [d1/(d1 + d2)] c.i

where:

LB = lower boundary of the modal class

Modal Class (MC) = the category containing the highest frequency

d1 = difference between the frequency of the modal class and the
frequency above it, when the scores are arranged from lowest to
highest

d2 = difference between the frequency of the modal class and the
frequency below it, when the scores are arranged from lowest to
highest

c.i = size of the class interval

Example 2. The scores of 40 students in a science class consist of 60 items and they
are tabulated below.
x f

10 – 14 5

15 – 19 2

20 – 24 3

25 – 29 5

30 – 34 2

35 – 39 9

40 – 44 6

45 – 49 3

50 – 54 5

n = 40

Modal Class = 35 – 39

LL of the MC = 35

LB = 34.5

d1 = 9 – 2 = 7
d2 = 9 – 6 = 3
c.i = 5

x̂ = LB + [d1/(d1 + d2)] c.i
x̂ = 34.5 + [7/(7 + 3)] 5
x̂ = 34.5 + 35/10
x̂ = 34.5 + 3.5
x̂ = 38

The mode of the score distribution of the 40 students is 38, which lies within the
modal class 35 – 39, the class with the highest frequency.
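The grouped-mode formula of Example 2 can be sketched as follows (a hedged sketch; the function name and tuple layout are mine):

```python
# Grouped mode: x = LB + [d1/(d1 + d2)] * c.i, where the modal class
# is the class with the highest frequency.
def grouped_mode(classes):
    """classes: list of (lower limit, upper limit, frequency),
    arranged from lowest to highest."""
    i = max(range(len(classes)), key=lambda j: classes[j][2])
    lo, hi, f = classes[i]
    d1 = f - (classes[i - 1][2] if i > 0 else 0)          # vs class above
    d2 = f - (classes[i + 1][2] if i < len(classes) - 1 else 0)  # vs class below
    lb = lo - 0.5                                         # lower boundary
    ci = hi - lo + 1                                      # class size
    return lb + d1 / (d1 + d2) * ci

classes = [(10, 14, 5), (15, 19, 2), (20, 24, 3), (25, 29, 5),
           (30, 34, 2), (35, 39, 9), (40, 44, 6), (45, 49, 3), (50, 54, 5)]
print(grouped_mode(classes))  # 38.0
```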

Properties of the Mode

1. It can be used when the data are qualitative as well as quantitative.


2. It may not be unique.
3. It is not affected by extreme values.
4. It may not exist.

When to Use the Mode

1. When the “typical” value is desired.


2. When the data set is measured on a nominal scale.

4. Quantiles

A quantile is a score point that divides the scores in a distribution into equal
parts. There are three kinds of quantiles. The quartile is a score point that divides the
scores in the distribution into four (4) equal parts. The decile is a score point that divides
the scores into ten (10) equal parts, and the percentile is a score point that divides the
scores into one hundred (100) equal parts.