EDUC 202 - FINALS REVIEWER

UNIT 4

Guiding Principles of Testing
- Guide teachers in assessing the learning progress of the students and in developing their own assessment tools.

5 Guiding Principles of Testing
1. Measures all Instructional Objectives
- The first step in constructing a test is for the teacher to check whether the items match all the learning objectives posed during instruction.
2. Cover all the Learning Tasks
- To determine if the test scores of students accurately represent their overall performance in the areas being assessed, teachers must use different types of tests.
3. Use Appropriate Test Items
- Items must be age-appropriate and suited to the developmental level of the students to obtain valid and reliable results.
- Do not use foul and/or unfamiliar words, terms, and phrases.
- Avoid complex word arrangements.
- Avoid using unnecessary words or phrases.
- Use simpler sentences instead of negatives or double negatives.
4. Make Tests Valid and Reliable
- Validity - the test's ability to measure what it is supposed to measure.
- Reliability - the consistency and stability of the scores obtained over time and across different situations.
5. Use Tests to Improve Learning
- Tests should be seen both as an evaluation tool and as a means to enhance students' learning and the teacher's instruction.

Principles of High-Quality Assessment
- Any type of assessment should be meticulously and carefully crafted to serve its intended purpose and achieve precise and dependable results.

1. Clarity of the Learning Target
Learning Target:
• Objectives / Learning Outcomes
- Knowledge
- Reasoning
- Affect
- Skills
- Products
2. Appropriateness of Assessment Tools
- The type of test should always match the learning objectives.

ASSESSMENT TOOLS
1. OBJECTIVE TEST
• Knowledge or the Lower Order Thinking Skills (LOTS)
• Selective type of test
• One definite answer
2. SUBJECTIVE TEST
• Reasoning or the Higher Order Thinking Skills (HOTS)
• Supply type of test
• Has no specific answer
3. PERFORMANCE ASSESSMENT
• Skills and Products, or the Applied Knowledge and Skills
• Students create products and/or perform real-world tasks
• Demonstration and application of the knowledge and skills is observed in situ.
4. PORTFOLIO ASSESSMENT
• Collection of students' works that exhibit their efforts, progress, achievement, growth, and development.
• The most complete representation of a student's progress.

2 TYPES OF OBSERVATION TECHNIQUE
• Formal Observation - planned in advance; used to assess an oral report or presentation.
• Informal Observation - done spontaneously during instruction; observes the working behavior of the student.

DIFFERENT QUALITIES OF ASSESSMENT TOOLS
1. VALIDITY - refers to the appropriateness of score-based inferences, or decisions made based on the students' test results; the extent to which a test measures what it is supposed to measure.
2. RELIABILITY - refers to the consistency of measurement; that is, how consistent test results or other assessment results are from one measurement to another. An acceptable reliability index is 0.61 and above.
3. FAIRNESS - means the test items should not have any biases.
4. OBJECTIVITY - refers to the agreement of two or more raters or test administrators concerning the score of a student.
5. SCORABILITY - means that the test should be easy to score; directions for scoring should be clearly stated in the instructions.
6. ADEQUACY - means that the test should contain a wide sampling of items to determine the educational outcomes or abilities, so that the resulting scores are representative of the total performance in the areas measured.
7. ADMINISTRABILITY - means that the test should be administered uniformly to all students so that the scores obtained do not vary due to factors other than differences in the students' knowledge and skills.
8. PRACTICALITY AND EFFICIENCY - refers to the teacher's familiarity with the methods used, the time required for the assessment, the complexity of administration, the ease of scoring, the ease of interpreting the test results, and keeping the cost of materials at a minimum.

TABLE OF SPECIFICATION - a chart or table that details the content and cognitive level assessed on a test, as well as the types and emphases of test items (Gareis & Grant, 2008).

STEPS IN PREPARING A TABLE OF SPECIFICATION
A. Select the learning outcomes to be measured.
B. Make an outline of the subject matter to be covered in the test.
C. Decide on the number of items per sub-topic.

Formula:
No. of items = (No. of class sessions on the topic x Desired total no. of items) / Total no. of class sessions

D. Make the two-way chart as shown in Formats 2 and 3 of a Table of Specification.
E. Construct the test items.
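The number-of-items formula above can be sketched in Python. This is a minimal illustration only: the topic names, session counts, and desired total are invented for the example.

```python
# Table of Specification item allocation:
# items per topic = (sessions on topic x desired total items) / total sessions.
# Topic names and session counts below are hypothetical.

sessions_per_topic = {"Validity": 4, "Reliability": 3, "Item Analysis": 5}
desired_total_items = 60

total_sessions = sum(sessions_per_topic.values())  # 12 sessions in all

for topic, sessions in sessions_per_topic.items():
    items = round(sessions * desired_total_items / total_sessions)
    print(f"{topic}: {items} items")
```

Rounding each topic's share to a whole number may make the allocations sum to slightly more or less than the desired total, so the counts are usually adjusted by hand afterward.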
DIFFERENT FORMATS OF TABLE OF SPECIFICATION

FORMAT 1
1. SPECIFIC OBJECTIVES - refers to the intended learning outcomes stated as specific instructional objectives covering a particular test topic.
2. COGNITIVE LEVEL - pertains to the intellectual skill or ability needed to correctly answer the test item, based on Bloom's taxonomy of educational objectives.
3. TYPE OF TEST ITEM - identifies the type or kind of test the item belongs to.
4. TOTAL POINTS - identifies the question number as it appears in the test.

FORMAT 2

FORMAT 3 (two-way table of specification)

DETERMINE THE NUMBER OF TEST ITEMS
% = (No. of days on the topic / Total no. of days) x 100

GENERAL GUIDELINES FOR CONSTRUCTING TEST ITEMS

KUBISZYN AND BORICH (2007)
- Suggest some general guidelines for writing test items to help classroom teachers improve the quality of the test items they write.

GENERAL GUIDELINES
1. Begin writing items far enough in advance that you will have time to revise them.
2. Match items to the intended outcomes at the appropriate level of difficulty to provide a valid measure of the instructional objectives.
3. Be sure each item deals with an important aspect of the content area and not with trivia.
4. Be sure the problem posed is clear and unambiguous.
5. Be sure each item is independent of all other items.

CHECKLIST FOR CONSTRUCTING TEST ITEMS
ASSEMBLE THE TEST ITEMS
1. PACKAGING THE TEST - refers to the process of preparing a test for distribution or implementation.
2. REPRODUCING THE TEST - refers to the process of repeating a test under the same conditions to verify the results.

Different Formats of Classroom Assessment Tools

Objective Test - an objective test item requires only one correct answer for each item.

Kinds of Objective-Type Test
1. Multiple-choice Test
2. Matching Type
3. True or False Type

MULTIPLE-CHOICE TEST - used to measure knowledge outcomes and other types of learning outcomes such as comprehension and application.
- The most used format in measuring student achievement.

Three parts of a Multiple-Choice item
1. STEM - presents the problem or question, usually expressed in completion form or question form.
2. KEYED OPTION - the correct answer.
3. DISTRACTORS/FOILS - the incorrect options or alternatives.

Matching Type Test - consists of two columns.
- Column A contains the descriptions and is placed at the left side, while Column B contains the options and is placed at the right side.
- The examinees are asked to match the options that are associated with the description(s).

True-False Test Items - typically used to measure the ability to identify whether statements of fact are correct.

Subjective Test - tests where students are asked to give an argument about a prompt and support it with evidence. These tests often look for explanation, application, synthesis, and demonstration of ideas.

Types of subjective test:

Short Answer - an alternative form of assessment where students need to complete a statement rather than selecting the answer from given options.

Essay - an essay test, a fundamental tool in academic assessment, measures a student's ability to express, argue, and structure their thoughts on a given subject through written words.
- There are two types of essay items:

Extended response essay - allows the students to determine the length and complexity of the response (Kubiszyn and Borich, 2007).
- These questions often address higher order thinking skills and require students to recall or research information and apply that information in different ways.

Restricted response essay - an essay item that places strict limits on both the content and the response given by the students.

ANALYSIS AND INTERPRETATION OF ASSESSMENT RESULTS

Item Analysis - a statistical technique used for selecting and rejecting the items of a test on the basis of their difficulty value and discrimination power.
Steps in Item Analysis
1. Score the test.
2. Arrange the test papers from highest to lowest.
3. Separate the top 27% and the lower 27%.
4. Make the item analysis (Difficulty Index, Separation or Discrimination Index, and Plausible Index or Distractor Analysis).

DIFFICULTY INDEX
- The proportion of the number of students in the upper and lower groups who answered an item correctly.
Difficulty Index = No. of students answering the item correctly / Total no. of students who answered the test

DISCRIMINATION INDEX
- The basis for measuring the validity of an item. It can be interpreted as an indication of the extent to which overall knowledge of the content area is related to the response on an item.

Three kinds of discrimination indexes:
1. Positive Discrimination
2. Negative Discrimination
3. Zero Discrimination

DISTRACTOR ANALYSIS
- A distractor is a term for the incorrect options in a multiple-choice test. It is important that the distractors are effective in challenging the students.

Mis-keyed item - a test item is potentially mis-keyed if more students from the upper group choose the incorrect options than the key.

Guessing item - students from the upper group have an equal spread of choices among the given alternatives; students may guess their answers for several possible reasons.

Ambiguous item - happens when students from the upper group choose an incorrect option equally with, or more than, the keyed answer.

EXAMPLES
Difficulty Index = No. of students answering the item correctly / Total no. of students who answered the test
Difficulty Index = 25 / 50
Difficulty Index = 0.5 = average/moderately difficult
Discrimination Index = DU − DL
DU = 8 / 14 = 0.57
DL = 5 / 14 = 0.36
ID = 0.57 − 0.36 = 0.21
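The difficulty and discrimination index computations above can be sketched in Python. The counts (25 of 50 answering correctly; 8 of 14 upper-group and 5 of 14 lower-group students correct) come from the worked example.

```python
# Item analysis indices from the worked example in the text.

def difficulty_index(correct: int, total: int) -> float:
    """Proportion of examinees answering the item correctly."""
    return correct / total

def discrimination_index(upper_correct: int, lower_correct: int,
                         group_size: int) -> float:
    """DU - DL: upper-group proportion minus lower-group proportion."""
    return upper_correct / group_size - lower_correct / group_size

p = difficulty_index(25, 50)        # 0.5 -> average/moderately difficult
d = discrimination_index(8, 5, 14)  # about 0.21 -> positive discrimination
print(round(p, 2), round(d, 2))
```

A positive result means the upper group outperformed the lower group on the item; a negative result would flag the item for review.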
BASIC STATISTICS

Statistics - a branch of science which deals with the collection, presentation, analysis, and interpretation of quantitative data.

Branches of Statistics:
1. Descriptive Statistics - a method concerned with collecting, describing, and analyzing a set of data without drawing conclusions.
2. Inferential Statistics - concerned with the analysis of a subset of data leading to predictions or inferences about the entire set of data.

Frequency Distribution - a presentation of the frequencies in a scientific and hierarchical arrangement of a set of observations or data about a population or sample.

Parts of a Frequency Distribution:
- Class Limits - the groupings or categories defined by the lower and upper limits.
- Class Size - the width of each class interval.
- Class Boundaries - the numbers used to separate each category in the frequency distribution, but without the gaps created by the class limits.
- Class Marks - the midpoints of the lower and upper class limits.

Steps in Constructing a Frequency Distribution
1. Get the Range (R):
R = HS − LS
= 50 − 15
R = 35
n = 50
2. Find the desired number of classes (K):
K = 1 + 3.3 log n
= 1 + 3.3 log 50
= 1 + 3.3 (1.70)
= 1 + 5.61 ≈ 7
3. Find the class size:
c.i. = R / K
= 35 / 7
= 5
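The range, class-count, and class-size steps above can be checked in Python. The highest score (50), lowest score (15), and n = 50 come from the example; note that rounding conventions for K vary by textbook, and rounding up is used here.

```python
import math

highest, lowest, n = 50, 15, 50

R = highest - lowest                 # Range: 50 - 15 = 35
K = 1 + 3.3 * math.log10(n)          # about 6.61 desired classes
K = math.ceil(K)                     # round up to 7 classes
class_size = math.ceil(R / K)        # 35 / 7 = 5

print(R, K, class_size)              # 35 7 5
```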
Measures of Central Tendency - a single value that is used to describe the "center" of the data. This provides a very convenient way of describing a set of scores with a single number that describes the performance of a group.

1. MEAN (Arithmetic Average) - the mean (or average) is the most popular and well-known measure of central tendency.

Grouped Data are data or scores arranged in a frequency distribution table.

Frequency distribution is the arrangement of scores according to categories or classes, including the frequency.

Frequency is the number of observations falling in a category.

UNGROUPED DATA
19, 17, 16, 15, 10, 5, 2, 1
MEAN: 85 ÷ 8 = 10.63
MEDIAN: (15 + 10) ÷ 2 = 12.5
MODE: the most frequent value
- bimodal - two modes
- trimodal - three modes
- multimodal - four or more modes
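The ungrouped-data computations above can be verified with Python's statistics module, using the eight scores from the example:

```python
import statistics

scores = [19, 17, 16, 15, 10, 5, 2, 1]

mean = statistics.mean(scores)        # 85 / 8 = 10.625
median = statistics.median(scores)    # (15 + 10) / 2 = 12.5
modes = statistics.multimode(scores)  # each score appears once: no single mode

print(mean, median)
```

Because every score in this particular set appears exactly once, `multimode` returns all eight values, illustrating that a data set may have no meaningful mode.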

GROUPED DATA

MEAN (grouped): x̄ = Σfx / n, where f is the class frequency and x is the class mark.

MEDIAN (grouped): Median = L + ((n/2 − cf) / f) × c.i., where L is the lower boundary of the median class, cf is the cumulative frequency before the median class, f is the frequency of the median class, and c.i. is the class size.

MODE (grouped): Mode = L + (d1 / (d1 + d2)) × c.i.
To get the values of d1 and d2, find the differences of the frequency of the modal class to the frequency before and to the frequency after the modal class.

Measures of Variability - describe how far apart data points lie from each other and from the center of a distribution. Along with measures of central tendency, measures of variability give you descriptive statistics that summarize your data.

- Range: the difference between the highest and lowest values
- Interquartile range: the range of the middle half of a distribution
- Standard deviation: the average distance from the mean
- Variance: the average of the squared distances from the mean

The standard deviation is the average amount of variability in your dataset. It tells you, on average, how far each score lies from the mean. The larger the standard deviation, the more variable the data set is.

Other related topics: Z-Score, Measures of Shape, Variance, Percentile, Quartiles.