
ASSESSMENT IN LEARNING 1

CHAPTER 1 : BASIC CONCEPTS OF ASSESSMENT

 Assessment for learning and quality feedback can and do promote increased learner progress.
 Assessment for learning has preoccupied the profession for many years in the effort to meet awarding-body requirements.
 Assessment of learning can detract from effective classroom practice and prevent assessment decisions from being fed back to learners with a view to improving their work.

The term assessment is widely used by educators for the processes of evaluating, measuring, and documenting students' academic readiness, learning progress, and skill acquisition throughout their learning.

Several related terms are used for assessment and evaluation, such as measurement, test, examination, appraisal, and evaluation.

Learning theories are conceptual frameworks describing how information is absorbed, processed, and retained during learning.

3 Purposes of Assessment

1. Assessment FOR learning - occurs when teachers use inferences about student progress to inform their teaching.

Example: formative assessment, using inferences about students' progress to inform teaching

2. Assessment OF learning - occurs when teachers use evidence of student learning to make judgments about student achievement against goals and standards.

Example: summative assessment, using evidence to compare students' achievement against goals

3. Assessment AS learning - occurs when students reflect on and monitor their progress to inform their future learning goals.

Example: students reflecting on and monitoring their own progress

MEASUREMENT, ASSESSMENT AND EVALUATION IN OUTCOMES-BASED EDUCATION

TEST - a tool for measuring the knowledge, skills, and attitudes of learners

- it comprises items in the area it is designed to measure

MEASUREMENT

 Measurement is the process of estimating the values of physical quantities such as time, temperature, weight, and length. Each measured value is expressed in standard units. When we measure, we use a standard instrument to find out how long, heavy, hot, voluminous, cold, fast, or straight something is. Such instruments include rulers, scales, thermometers, and pressure gauges. The values obtained are compared against standard quantities of the same type.
 Measurement is the assignment of a number to a characteristic of an object or event, which can be
compared with other objects or events. The scope and application of a measurement is dependent on
the context and discipline.
 Sometimes we can measure physical quantities by combining directly measurable quantities to form derived quantities. For example, to find the area of a rectangular piece of paper, we simply multiply the lengths of its sides. In the field of education, however, the quantities and qualities of interest are abstract and unobservable, so the measurement process becomes difficult; hence the need to specify the learning outcomes to be measured.

ISABELA STATE UNIVERSITY |ASSESSMENT IN LEARNING 1 1


ASSESSMENT

 Assessment is the process of gathering evidence of students' performance over a period of time to determine learning and mastery of skills. Such evidence can take the form of dialogue records, journals, written work, portfolios, tests, and other learning tasks.
 Assessment requires review of journal entries, written work, presentations, research papers, essays, stories written, test results, etc.

The overall goal of assessment is to improve student learning and provide students, parents and
teachers with reliable information regarding student progress and extent of attainment of the expected
learning outcomes.

 Assessments use as their basis the levels of achievement and standards required for the curricular goals appropriate to the grade or year level.
 Assessment results reveal more permanent learning and give a clearer picture of students' ability.

Assessment of skill attainment is relatively easier than assessment of understanding and other mental abilities. Skills can be practised and are readily demonstrable: either the skill exists at a certain level or it doesn't. Assessment of understanding is much more complex. We can assess a person's knowledge in a number of ways, but we must infer understanding from certain indicators, such as written descriptions.

PURPOSES OF ASSESSMENT

1. TEACHING AND LEARNING

The primary purpose of assessment is to improve students' learning and teachers' teaching as both respond to the information it provides. Assessment for learning is an ongoing process that arises out of the interaction between teaching and learning.

What makes assessment for learning effective is how well the information is used.

2. SYSTEM IMPROVEMENT

Assessment can do more than simply diagnose and identify students' learning needs; it can be used to assist improvements across the education system in a cycle of continuous improvement:

 Students and teachers can use the information gained from assessment to determine their next
teaching and learning steps.
 Parents and families can be kept informed of the next plans for teaching and learning and the progress being made, so they can play an active role in their children's learning.
 School leaders can use the information for school-wide planning, to support their teachers and
determine professional development needs.
 Communities and Boards of Trustees can use assessment information to assist their governance role
and their decisions about staffing and resourcing.
 The Education Review Office can use assessment information to inform their advice for school
improvement.
 The Department of Education can use assessment information to undertake policy review and
development at a national level, so that government funding and policy intervention is targeted
appropriately to support improved student outcomes.

EVALUATION

 Evaluation is a broader term that refers to all of the methods used to find out what happens as a result
of using a specific intervention or practice.
 Evaluation is the systematic assessment of the worth or merit of some object. It is the systematic
acquisition and assessment of information to provide useful feedback about some object.
 Evaluation is a process designed to provide information that will help us make a judgment about a
particular situation. The end result of evaluation is to adopt, reject or revise what has been evaluated.

INTERRELATION AMONG ASSESSMENT, EVALUATION AND MEASUREMENT



Though the terms assessment and evaluation are often used interchangeably (Cooper, 1999), many
writers differentiate between them. Assessment is defined as gathering information or evidence, and evaluation is
the use of that information or evidence to make judgments (Snowman, McCown, and Biehler, 2012).
Measurement involves assigning numbers or scores to an "attribute or characteristic of a person in such a way that the numbers describe the degree to which the person possesses the attribute" (Nitko and Brookhart, 2011, p. 507). Assigning grade equivalents to scores on a standardized achievement test is an example of measurement.
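As an illustration of measurement in this sense, assigning a number to a characteristic, here is a short Python sketch. The score-to-grade table below is invented for illustration; real grade equivalents come from a standardized norming study.

```python
# Hypothetical norming table mapping raw test scores to grade-equivalent
# values. These numbers are invented for illustration only.
NORMING_TABLE = [
    (0, 2.0),   # minimum raw score -> grade equivalent
    (10, 3.5),
    (20, 4.8),
    (30, 6.1),
    (40, 7.4),
]

def grade_equivalent(raw_score):
    """Assign a number (grade equivalent) to a characteristic (raw score)."""
    result = NORMING_TABLE[0][1]
    for threshold, grade in NORMING_TABLE:
        if raw_score >= threshold:
            result = grade
    return result

print(grade_equivalent(25))  # a score of 25 falls in the 20-29 band, so 4.8
```

The table plays the role of the "standard instrument": every raw score is compared against the same fixed scale, which is what makes the result a measurement rather than a judgment.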

CHAPTER 2

ROLES OF ASSESSMENT IN MAKING INSTRUCTIONAL DECISION

The types of assessment tasks that we ask our students to do determine how students will approach the
learning task and what study behaviours they will use. In the words of higher education scholar John Biggs,
“What and how students learn depends to a major extent on how they think they will be assessed.” (1999, p.
141).
Given the importance of assessment for student learning, it is important to consider how to best
measure the learning that you want your students to achieve. Assessment should integrate grading, learning, and motivation for your students. Well-designed assessment methods provide valuable information about student learning. They tell us what students learned, how well they learned it, and where they struggled. Good assessments allow you to answer the question: did my students learn what I intended them to learn?

CLASSIFICATIONS OF ASSESSMENT

TYPES OF ASSESSMENT

1. TRADITIONAL ASSESSMENT: Paper-and-pencil test

2. ALTERNATIVE ASSESSMENT: Performance test, projects, portfolios, journals, etc.

3. AUTHENTIC ASSESSMENT: Tests that reflect real-life situations or experiences.

4 types of assessment. Although these four are generally referred to simply as assessment, there are distinct differences among the four.

1. Placement assessment –
 Placement evaluation is used to place students, according to prior achievement or personal characteristics, at the most appropriate point in an instructional sequence, in a unique instructional strategy, or with a suitable teacher. It is conducted through placement testing, i.e. the tests that colleges and universities use to assess college readiness and place students into their initial classes.
 Placement evaluation, also referred to as pre-assessment or initial assessment, is conducted prior to instruction or intervention to establish a baseline from which individual student growth can be measured. This type of assessment is used to determine a student's skill level in the subject. It helps the teacher explain the material more efficiently. These assessments are not graded.
2. Formative Assessment.
 Formative assessment provides feedback and information during the instructional process, while learning is taking place.
 Formative assessment measures student progress, but it can also assess your own progress as an instructor. A primary focus of formative assessment is to identify areas that may need improvement. These assessments typically are not graded; they act as a gauge of students' learning progress and help determine teaching effectiveness (implementing appropriate methods and activities).
Types of Formative Assessment:
 Observations during in-class activities
 Homework exercises as review for exams and class discussions
 Reflection journals that are reviewed periodically during the semester
 Question and answer sessions, both formal—planned and informal—spontaneous
 Conferences between the instructor and student at various points in the semester
 In-class activities where students informally present their results



 Student feedback collected periodically
3. Diagnostic Assessment:
 Diagnostic assessment can help you identify your students' current knowledge of a subject, their skill sets and capabilities, and clarify misconceptions before teaching takes place. Knowing students' strengths and weaknesses can help you better plan what to teach and how to teach it.

Types of Diagnostic Assessments:

 Pre-tests (on content and abilities)


 Self-assessments (identifying skills and competencies)
 Discussion board responses (on content-specific prompts)
 Interviews (brief, private, 10-minute interview of each student)

4. Summative Assessment. Summative assessment takes place after the learning has been completed
and provides information and feedback that sums up the teaching and learning process. Typically, no
more formal learning is taking place at this stage, other than incidental learning which might take place
through the completion of projects and assignments.

Types of Summative Assessment

 Examinations (major, high-stakes exams)


 Final examination (a truly summative assessment)
 Term papers (drafts submitted throughout the semester would be a formative assessment)
 Projects (project phases submitted at various completion points could be formatively assessed)
 Portfolios (could also be assessed during their development as a formative assessment)
 Performances
 Student evaluation of the course (teaching effectiveness)
 Instructor self-evaluation

LEARNING THEORY

Learning theories are conceptual frameworks describing how information is absorbed, processed and
retained during learning. Cognitive, emotional, and environmental influences, as well as prior experience, all play
a part in how understanding, or a world view, is acquired or changed and knowledge and skills retained.
Constructivist learning theory states that learning is the process of adjusting our mental models to accommodate new experiences.

Behaviourism is a philosophy of learning that focuses only on objectively observable behavior and discounts mental activities. Behaviorists look at learning as an aspect of conditioning and advocate a system of rewards and targets in education.

Educators who embrace cognitive theory believe that the definition of learning as a change in behavior
is too narrow and prefer to study the learner rather than their environment and in particular the complexities of
human memory.

CLASSICAL CONDITIONING was proposed by the Russian physiologist Ivan Pavlov. According to this theory, behavior is learned through repeated association between a stimulus and a response.

OPERANT CONDITIONING was proposed by B.F. Skinner, who believed that internal thoughts cannot be used to explain behavior. Instead, one should look at the external and observable causes of human behavior.

1. Behaviorism - Behaviorism is a philosophy of learning that focuses only on objectively observable behaviors and discounts mental activities. Behavior theorists define learning as nothing more than the acquisition of new behavior. Experiments by behaviorists identify conditioning as a universal learning process. There are two different types of conditioning, each yielding a different behavioral pattern:

Example: once you get used to a behavior, you tend to want to repeat it.



 Classic conditioning occurs when a natural reflex responds to a stimulus.

The most popular example is Pavlov's observation that dogs salivate when they eat or even see food.
Essentially, animals and people are biologically "wired" so that a certain stimulus will produce a specific
response.

 Behavioral or operant conditioning occurs when a response to a stimulus is reinforced. Basically, operant conditioning is a simple feedback system: if a reward or reinforcement follows the response to a stimulus, then the response becomes more probable in the future. For example, leading behaviorist B.F. Skinner used reinforcement techniques to teach pigeons to dance and to bowl a ball in a mini-alley.

Example: rewarding yourself. If you are weak in History but fond of another subject such as TLE, you can promise yourself that you may only move on to the subject you enjoy after finishing your History work.

How does Behaviorism impact learning?

 Positive and negative reinforcement techniques of Behaviorism can be very effective.


 Teachers use Behaviorism when they reward or punish student behaviours.

2. Cognitivism - Jean Piaget authored a theory based on the idea that a developing child builds cognitive
structures, mental "maps", for understanding and responding to physical experiences within their
environment. Piaget proposed that a child's cognitive structure increases in sophistication with
development, moving from a few innate reflexes such as crying and sucking to highly complex mental
activities. The four developmental stages of Piaget's model, and the processes by which children progress through them, are the sensorimotor, preoperational, concrete operational, and formal operational stages. In the preoperational stage, the child is not yet able to conceptualize abstractly and needs concrete physical situations. As physical experience accumulates, the child starts to conceptualize, creating logical structures that explain their physical experiences (the concrete operational stage). Abstract problem solving is possible at the formal operational stage. For example, arithmetic equations can be solved with numbers, not just with objects. By this point, the child's cognitive structures are like those of an adult and include conceptual reasoning.

Piaget proposed that during all development stages, the child experiences their environment using
whatever mental maps they have constructed. If the experience is a repeated one, it fits easily - or is
assimilated - into the child's cognitive structure so that they maintain mental "equilibrium". If the
experience is different or new, the child loses equilibrium, and alters their cognitive structure to
accommodate the new conditions. In this way, the child constructs increasingly complex cognitive
structures.

How does Piaget's theory impact learning?

 Curriculum - Educators must plan a developmentally appropriate curriculum that enhances their
students' logical and conceptual growth.
 Instruction - Teachers must emphasize the critical role that experiences, or interactions with the
surrounding environment, play in student learning. For example, instructors have to take into account
the role that fundamental concepts, such as the permanence of objects, play in establishing cognitive
structures.

3. Constructivism - Constructivism is a philosophy of learning founded on the premise that, by reflecting on our experiences, we construct our own understanding of the world we live in. Each of us generates our own "rules" and "mental models," which we use to make sense of our experiences. Learning, therefore, is simply the process of adjusting our mental models to accommodate new experiences.

The guiding principles of Constructivism:

 Learning is a search for meaning. Therefore, learning must start with the issues around which students
are actively trying to construct meaning.
 Meaning requires understanding wholes as well as parts and parts must be understood in the context of
wholes. Therefore, the learning process focuses on primary concepts, not isolated facts.
 In order to teach well, we must understand the mental models that students use to perceive the world
and the assumptions they make to support those models. The purpose of learning is for an individual
to construct his or her own meaning, not just memorize the "right" answers and repeat someone else's
meaning. Since education is inherently interdisciplinary, the only valuable way to measure learning is to
make assessment part of the learning process, ensuring it provides students with information on the
quality of their learning.

How does Constructivism impact learning?



 Curriculum - Constructivism calls for the elimination of a standardized curriculum. Instead, it promotes
using curricula customized to the students' prior knowledge. Also, it emphasizes hands-on problem
solving.
 Instruction - Under the theory of constructivism, educators focus on making connections between facts
and fostering new understanding in students. Instructors tailor their teaching strategies to student
responses and encourage students to analyze, interpret and predict information. Teachers also rely
heavily on open-ended questions and promote extensive dialogue among students.
 Assessment - Constructivism calls for the elimination of grades and standardized testing. Instead,
assessment becomes part of the learning process so that students play a larger role in judging their
own progress.

Principles of High Quality Assessment

Formulating instructional objectives or learning targets is the first step in conducting both teaching and evaluation.

Once you have determined your objectives or learning targets, or have answered the question "what to assess?", you will probably be concerned with answering the question "how to assess?" At this point, it is important to keep in mind several criteria that determine the quality and credibility of the assessment methods that you choose.

1. CLARITY & APPROPRIATENESS OF LEARNING TARGETS

Assessment should be clearly stated and specified and centered on what is truly important.

"Teaching emphasis should parallel testing emphasis."


3. BALANCE
Assessment methods should be able to assess all domains of learning and hierarchy of objectives.

A. Cognitive Domain (BLOOM’S TAXONOMY)

The cognitive domain involves the development of our mental skills and the acquisition of knowledge. The six categories under this domain are:

1. Knowledge: the ability to recall data and/or information.

Example: A child recites the English alphabet.

2. Comprehension: the ability to understand the meaning of what is known.

Example: A teacher explains a theory in his own words.

3. Application: the ability to utilize an abstraction or to use knowledge in a new situation.

Example: A nurse intern applies what she learned in her Psychology class when she talks to patients.

4. Analysis: the ability to differentiate facts and opinions.

Example: A lawyer was able to win a case after recognizing logical fallacies in the reasoning of the offender.

5. Synthesis: the ability to integrate different elements or concepts in order to form a sound pattern or structure
so a new meaning can be established.



Example: A therapist combines yoga, biofeedback, and support group therapy in creating a care plan for his patient.

6. Evaluation: the ability to come up with judgments about the importance of concepts.

Example: A businessman selects the most efficient way of selling products.

B. Affective Domain

The affective domain involves our feelings, emotions, and attitudes. This domain is categorized into 5 subdomains, which include:

1. Receiving Phenomena: the awareness of feelings and emotions as well as the ability to utilize selected
attention.

Example: Listening attentively to a friend.

2. Responding to Phenomena: active participation of the learner.

Example: Participating in a group discussion.

3. Valuing: the ability to see the worth of something and express it.

Example: An activist shares his ideas on the increase in salary of laborers.

4. Organization: ability to prioritize a value over another and create a unique value system.

Example: A teenager spends more time in her studies than with her boyfriend.

5. Characterization: the ability to internalize values and let them control the person's behaviour.

Example: A man marries a woman not for her looks but for what she is.

C. Psychomotor Domain

The psychomotor domain involves the use and coordination of motor skills. The seven categories under this domain include:

1. Perception: the ability to apply sensory information to motor activity.

Example: A cook adjusts the heat of the stove to achieve the right temperature for the dish.

2. Set: the readiness to act.

Example: An obese person displays motivation in performing planned exercise.

3. Guided Response: the ability to imitate a displayed behavior or to utilize trial and error.



Example: A person follows the manual in operating a machine.

4. Mechanism: the ability to convert learned responses into habitual actions with proficiency and confidence.

Example: A mother was able to cook a delicious meal after practicing how to cook it.

5. Complex Overt Response: the ability to skilfully perform complex patterns of actions.

Example: Typing a report on a computer without looking at the keyboard.

6. Adaptation: the ability to modify learned skills to meet special events.

Example: A designer uses plastic bottles to create a dress.

7. Origination: creating new movement patterns for a specific situation.

Example: A choreographer creates a new dance routine.

4. VALIDITY
Assessment should be valid. There are several types of validity to be established.

Types of Validity

Validity tells you how accurately a method measures something. If a method measures what it claims to
measure, and the results closely correspond to real-world values, then it can be considered valid. There are four
main types of validity:

 Construct validity: Does the test measure the concept that it’s intended to measure?
 Content validity: Is the test fully representative of what it aims to measure?
 Face validity: Does the content of the test appear to be suitable to its aims?
 Criterion validity: Do the results correspond to a different test of the same thing?

Note that this section deals with types of test validity, which determine the accuracy of the actual components of a measure. If you are doing experimental research, you also need to consider internal and external validity, which deal with the experimental design and the generalizability of results.

1. Construct validity

Construct validity evaluates whether a measurement tool really represents the thing we are interested in
measuring. It’s central to establishing the overall validity of a method.

What is a construct?
A construct refers to a concept or characteristic that can’t be directly observed, but can be measured by
observing other indicators that are associated with it.

Constructs can be characteristics of individuals, such as intelligence, obesity, job satisfaction, or depression;
they can also be broader concepts applied to organizations or social groups, such as gender equality, corporate
social responsibility, or freedom of speech.

Example

There is no objective, observable entity called “depression” that we can measure directly. But based on
existing psychological research and theory, we can measure depression based on a collection of symptoms and
indicators, such as low self-confidence and low energy levels.

What is construct validity?



Construct validity is about ensuring that the method of measurement matches the construct you want to
measure. If you develop a questionnaire to diagnose depression, you need to know: does the questionnaire
really measure the construct of depression? Or is it actually measuring the respondent’s mood, self-esteem, or
some other construct?

To achieve construct validity, you have to ensure that your indicators and measurements are carefully
developed based on relevant existing knowledge. The questionnaire must include only relevant questions that
measure known indicators of depression.

The other types of validity described below can all be considered as forms of evidence for construct validity.

2. Content validity

Content validity assesses whether a test is representative of all aspects of the construct.

To produce valid results, the content of a test, survey or measurement method must cover all relevant parts
of the subject it aims to measure. If some aspects are missing from the measurement (or if irrelevant aspects are
included), the validity is threatened.

Example

A mathematics teacher develops an end-of-semester algebra test for her class. The test should cover every
form of algebra that was taught in the class. If some types of algebra are left out, then the results may not be an
accurate indication of students’ understanding of the subject. Similarly, if she includes questions that are not
related to algebra, the results are no longer a valid measure of algebra knowledge.

3. Face validity

Face validity considers how suitable the content of a test seems to be on the surface. It’s similar to content
validity, but face validity is a more informal and subjective assessment.

Example

You create a survey to measure the regularity of people’s dietary habits. You review the survey items,
which ask questions about every meal of the day and snacks eaten in between for every day of the week. On its
surface, the survey seems like a good representation of what you want to test, so you consider it to have high
face validity.
As face validity is a subjective measure, it’s often considered the weakest form of validity. However, it can be
useful in the initial stages of developing a method.

4. Criterion validity

Criterion validity evaluates how closely the results of your test correspond to the results of a different test.

What is a criterion?
The criterion is an external measurement of the same thing. It is usually an established or widely-used
test that is already considered valid.

What is criterion validity?


To evaluate criterion validity, you calculate the correlation between the results of your measurement and
the results of the criterion measurement. If there is a high correlation, this gives a good indication that your test is
measuring what it intends to measure.

Example

A university professor creates a new test to measure applicants’ English writing ability. To assess how
well the test really does measure students’ writing ability, she finds an existing test that is considered a valid
measurement of English writing ability, and compares the results when the same group of students take both
tests. If the outcomes are very similar, the new test has a high criterion validity.
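The correlation step described above can be made concrete in code. The following Python sketch (the scores are invented for illustration, not real test data) computes the Pearson correlation between scores on a new test and scores on an established criterion test:

```python
import statistics

def pearson_correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mean_x, mean_y = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Invented scores: the same students take the new writing test and an
# established, already-validated criterion test.
new_test = [72, 85, 60, 90, 78]
criterion = [70, 88, 58, 93, 75]

r = pearson_correlation(new_test, criterion)
print(round(r, 3))  # a value close to 1 suggests good criterion validity
```

A correlation near 1 means students are ranked almost identically by both tests, which is exactly the evidence criterion validity asks for; a correlation near 0 would suggest the new test measures something else.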

5. RELIABILITY



Assessment should yield consistent and stable results. There are established methods for measuring reliability, such as test-retest, split-half, and internal-consistency approaches.
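As one illustration of such a method (a sketch, not the module's prescribed procedure), the Python example below computes Cronbach's alpha, a common internal-consistency estimate of reliability; the item scores are invented:

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha: internal-consistency reliability of a set of items.

    item_scores is a list of items, each a list of per-student scores.
    """
    k = len(item_scores)
    totals = [sum(per_student) for per_student in zip(*item_scores)]
    item_variances = sum(statistics.pvariance(item) for item in item_scores)
    total_variance = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Invented data: 4 test items answered by 6 students, each scored 0-5.
items = [
    [3, 4, 2, 5, 1, 4],
    [2, 4, 3, 5, 2, 3],
    [3, 5, 2, 4, 1, 4],
    [2, 4, 2, 5, 2, 3],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # values near 1 indicate consistent (reliable) items
```

Alpha compares the variability of individual items with the variability of total scores: when items rank students consistently, total-score variance dominates and alpha approaches 1.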

6. FAIRNESS
Assessment should give equal opportunities to every student. There should be no discrimination of any kind (race, age, gender, etc.).

7. AUTHENTICITY
Assessment should reflect real-life situations and should emphasize practicability.

8. PRACTICALITY & EFFICIENCY


Assessment should save time, money, etc. It should be resourceful.

9. ASSESSMENT IS A CONTINUOUS PROCESS.


Because assessment is an integral part of the teaching-learning process, it should be continuous.

10. ETHICS IN ASSESSMENT

Assessment should not be used to derogate students. One example of an ethical consideration is the student's right to confidentiality.

11. CLEAR COMMUNICATION

Assessment's results should be communicated to the learners and the people involved. Communication should
also be established between the teacher and the learners by way of pre- and post-test reviews.
12. POSITIVITY OF CONSEQUENCE

Assessment should have a positive effect. It should motivate students to learn and do more, and it should give way to improving the teacher's instruction.

 Being aware of why we are testing students and what exactly we want to test can help make students' and instructors' experience of exams more useful.

The following tips will gear you towards issues you should think about during the entire exam process, from
planning to reflection.

Before you start preparing an exam

Why are you giving an exam to your students?

 To evaluate and grade students. Exams provide a controlled environment for independent work and
so are often used to verify students’ learning.
 To motivate students to study. Students tend to open their books more often when an evaluation is
coming up. Exams can be great motivators.
 To add variety to student learning. Exams are a form of learning activity. They can enable students to
see the material from a different perspective. They also provide feedback that students can then use to
improve their understanding.
 To identify weaknesses and correct them. Exams enable both students and instructors to identify
which areas of the material students do not understand. This allows students to seek help, and
instructors to address areas that may need more attention, thus enabling student progression and
improvement.
 To obtain feedback on your teaching. You can use exams to evaluate your own teaching. Students’
performance on the exam will pinpoint areas where you should spend more time or change your current
approach.
 To provide statistics for the course or institution. Institutions often want information on how students
are doing. How many are passing and failing, and what is the average achievement in class? Exams
can provide this information.
 To accredit qualified students. Certain professions demand that students demonstrate the acquisition
of certain skills or knowledge. An exam can provide such proof – for example, the Uniform Final
Examination (UFE) serves this purpose in accounting.
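
The course-level statistics mentioned above (pass/fail counts and class average) are straightforward to compute. The following is a minimal sketch in Python; the function name and the passing mark of 60 are illustrative assumptions, not prescriptions:

```python
def exam_statistics(scores, passing_mark=60):
    """Summarize class performance: how many passed, how many failed,
    and the class average (passing_mark is an assumed cutoff)."""
    passed = sum(1 for s in scores if s >= passing_mark)
    failed = len(scores) - passed
    average = sum(scores) / len(scores)
    return {"passed": passed, "failed": failed, "average": average}

# Invented scores for illustration only
print(exam_statistics([55, 72, 88, 64, 47, 90]))
```

In practice, an institution would define its own passing mark and might also report the median or score distribution rather than the average alone.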

ISABELA STATE UNIVERSITY |ASSESSMENT IN LEARNING 1 10


What do you want to assess?

What you want to assess should be related to your learning outcomes for the course.

 Knowledge or how it is used. You can design your test questions to assess students’ knowledge or
ability to apply material taught in class.
 Process or product. You can evaluate the process students follow to arrive at a solution by focusing
marks and feedback on their reasoning. Alternatively, you can evaluate only the end product.
 The communication of ideas. You can evaluate students' communication skills, that is, their ability to
express themselves, whether by writing a cogent argument or creating an elegant mathematical proof.
 Convergent thinking or divergent thinking. You can test your students’ ability to draw a single
conclusion from different inputs (convergent thinking). Or you may alternatively want them to come up
with different possible answers (divergent thinking). Do you expect different answers from students, or
do you expect all of them to provide the same answer?
 Absolute or relative standards. Is student success defined by learning a set amount of material or
demonstrating certain skills, or is student success measured by assessing the amount of progress the
students make over the duration of the course?

How do you decide what to test and how to test it?

The overall exam should be consistent with your learning outcomes for the course. There are a number of ways
to review and prioritize the skills and concepts taught in a course. You could:

 Use the topics list provided in your course outline
 Skim through your lecture notes to find key concepts and methods
 Review chapter headings and subheadings in the assigned readings

What are the qualities of a good exam?

 A good exam gives all students an equal opportunity to fully demonstrate their learning. With this
in mind, you might reflect on the nature and parameters of your exam. For example, could the exam be
administered as a take-home exam? Two students might know the material equally well, but one of
them might not perform well under the pressure of a timed or in-class testing situation. In such a case,
what is it that you really want to assess: how well each student knows the material, or how well each
performs under pressure? Likewise, it might be appropriate to allow students to bring memory aids to an
exam. Again, what is it that you want to assess: their ability to memorize a formula or their ability to use
and apply a formula?
 Consistency. If you give the same exam twice to the same students, they should get a similar grade
each time.
 Validity. Make sure your questions address what you want to evaluate.
 Realistic expectations. Your exam should contain questions that match the average student’s ability
level. It should also be possible to respond to all questions in the time allowed. To check the exam, ask
a teaching assistant to take the test – if they can’t complete it in well under the time permitted then the
exam needs to be revised.
 Multiple question types. Different students are better at different types of questions. In order to
allow all students to demonstrate their abilities, exams should include a variety of types of questions.
Read our Teaching Tip, Asking Questions: 6 Types.
 Multiple ways to obtain full marks. Exams can be highly stressful and artificial ways to
demonstrate knowledge. In recognition of this, you may want to provide questions that allow multiple
ways to obtain full marks. For example, ask students to list five of the seven benefits of multiple-choice
questions.
 Free of bias. Your students will differ in many ways including language proficiency, socio-economic
background, physical disabilities, etc. When constructing an exam, you should keep student differences
in mind to watch for ways that the exams could create obstacles for some students. For example, the
use of colloquial language could create difficulties for students whose first language is not English, and
examples easily understood by North American students may be inaccessible to international students.
 Redeemable. An exam does not need to be the sole opportunity to obtain marks. Assignments and
midterms allow students to practice answering your types of questions and adapt to your expectations.
 Demanding. An exam that is too easy does not accurately measure students’ understanding of the
material.
 Transparent marking criteria. Students should know what is expected of them. They should be able to
identify the characteristics of a satisfactory answer and understand the relative importance of those
characteristics. This can be achieved in many ways; you can provide feedback on assignments,
describe your expectations in class, or post model solutions.
 Timely. Spread exams out over the semester. Giving two exams one week apart doesn’t give students
adequate time to receive and respond to the feedback provided by the first exam. When possible, plan
the exams to fit logically within the flow of the course material. It might be helpful to place tests at the
end of important learning units rather than simply give a midterm halfway through the semester.
 Accessible. For students with disabilities, exams must be amenable to adaptive technologies such as
screen readers or screen magnifiers. Exams with visual content, such as charts, maps, and
illustrations, should include text descriptions or alternative formats.

COMMON OBSERVATION OF STUDENTS ON TEST QUESTIONS

 It is not within the scope of the lesson.
 It was not discussed in class.
 The questions and choices are too lengthy.
 The layout of the test is disorganized.
 The questions are confusing.
 None of the choices is the correct answer.
 The grammar is wrong.

POSSIBLE REASONS FOR FAULTY TEST QUESTIONS

 Questions are copied verbatim from the book or other resources.
 The course outline was not consulted.
 Too much weight is given to reducing printing costs.
 No Table of Specifications (TOS) was prepared, or the TOS was made only after writing the test.

FACTORS TO CONSIDER IN PREPARING A TEST QUESTION

 Purpose of the test
 Time available to prepare, administer, and score the test
 Number of students to be tested
 Skill of the teacher in writing the test
 Facilities available for reproducing the test

“To be able to prepare a GOOD TEST, one must have mastery of the subject matter, knowledge of the pupils
to be tested, skill in verbal expression, and familiarity with the different test formats.”

CHARACTERISTICS OF A GOOD TEST

 Validity – the extent to which the test measures what it intends to measure
 Reliability – the consistency with which a test measures what it is supposed to measure.
 Usability – the test can be administered with ease, clarity, and uniformity.
 Scorability – easy to score
 Interpretability – test results can be properly interpreted and serve as a major basis for making
educational decisions
 Economical – the test can be reused without compromising its validity and reliability
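
Reliability, as defined above, is often quantified by correlating two administrations of the same test (test-retest reliability): a correlation near 1.0 indicates consistent scores. A minimal sketch in plain Python follows; the score lists are invented for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two lists of scores, e.g. the same
    students' results on two administrations of the same test."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical first and second administrations for five students
first = [70, 82, 64, 90, 58]
second = [72, 80, 66, 88, 60]
print(round(pearson_r(first, second), 3))
```

Other reliability estimates (split-half, Cronbach's alpha) exist for single administrations; this sketch shows only the test-retest idea.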

STEPS IN PLANNING FOR A TEST

 Identifying test objectives
 Deciding on the type of objective test to be prepared
 Preparing a Table of Specifications (TOS)
 Constructing the draft test items
 Trying out and validating the test items
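
A Table of Specifications typically allocates test items to topics in proportion to the instructional time spent on each. The following sketch illustrates that proportional allocation; the topic names, hours, and item total are hypothetical:

```python
def table_of_specifications(topic_hours, total_items):
    """Allocate test items to topics in proportion to the hours
    spent teaching each topic (a simple TOS allocation rule)."""
    total_hours = sum(topic_hours.values())
    return {topic: round(total_items * hours / total_hours)
            for topic, hours in topic_hours.items()}

# Invented course breakdown: 12 hours of instruction, 60-item test
hours = {"Basic Concepts": 4, "Test Construction": 6, "Item Analysis": 2}
print(table_of_specifications(hours, 60))
```

Note that rounding can make the allocated items sum to slightly more or fewer than the intended total, so in practice the teacher adjusts the final counts by hand; a full TOS would also break each topic down by cognitive level.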
