
ALLAMA IQBAL OPEN UNIVERSITY, ISLAMABAD

Course: Educational Assessment and Evaluation (8602)


Semester: Autumn, 2021
Level: B.Ed. 1.5
Assignment No.1
Q.1 What are the types of assessment? Differentiate assessment for learning, of learning, and as learning.
Answer:
Types of assessment to use in your classroom
There’s a time and place for every type of assessment. Keep reading to find creative
ways of delivering assessments and understanding your students’ learning process!
1. Diagnostic assessment
Let’s say you’re starting a lesson on two-digit multiplication. To make sure the unit
goes smoothly, you want to know if your students have mastered fact families, place
value and one-digit multiplication before you move on to more complicated
questions.
When you structure diagnostic assessments around your lesson, you’ll get the
information you need to understand student knowledge and engage your
whole classroom.
Some examples to try include:
 Short quizzes
 Journal entries
 Student interviews
 Student reflections
 Classroom discussions
 Graphic organizers (e.g., mind maps, flow charts, KWL charts)
Diagnostic assessments can also help benchmark student progress. Consider giving
the same assessment at the end of the unit so students can see how far they’ve come!
Using Prodigy for diagnostic assessments
One unique way of delivering diagnostic assessments is to use a game-based
learning platform that engages your students.
Prodigy’s assessments tool helps you align the math questions your students see in-
game with the lessons you want to cover.
To set up a diagnostic assessment, use your assessments tool to create a Plan that
guides students through a skill. This adaptive assessment will support students with
pre-requisites when they need additional guidance.
2. Formative assessment
Just because students made it to the end-of-unit test doesn't mean they've
mastered the topics in the unit. Formative assessments help teachers understand student
learning while they teach, and provide them with information to adjust their
teaching strategies accordingly.
Meaningful learning involves processing new facts, adjusting assumptions and
drawing nuanced conclusions. As researchers Thomas Romberg and Thomas
Carpenter describe it:
“Current research indicates that acquired knowledge is not simply a collection of
concepts and procedural skills filed in long-term memory. Rather, the knowledge
is structured by individuals in meaningful ways, which grow and change over
time.”
In other words, meaningful learning is like a puzzle — having the pieces is one
thing, but knowing how to put it together becomes an engaging process that helps
solidify learning.
Formative assessments help you track how student knowledge is growing and
changing in your classroom in real-time. While it requires a bit of a time
investment — especially at first — the gains are more than worth it.
A March 2020 study found that providing formal formative assessment evidence
such as written feedback and quizzes within or between instructional units helped
enhance the effectiveness of formative assessments.
3. Summative assessment
Summative assessments measure student progress as an assessment of learning.
Standardized tests are a type of summative assessment and provide data for you,
school leaders and district leaders.
They can assist with communicating student progress, but they don’t always give
clear feedback on the learning process and can foster a “teach to the test” mindset
if you’re not careful.
Plus, they’re stressful for teachers. One Harvard survey found 60% of teachers said
“preparing students to pass mandated standardized tests” “dictates most of” or
“substantially affects” their teaching.
Sound familiar?
But just because it's a summative assessment doesn't mean it can't be engaging for
students and useful for your teaching. Try creating assessments that deviate from
the standard multiple-choice test, like:
 Recording a podcast
 Writing a script for a short play
 Producing an independent study project
No matter what type of summative assessment you give your students, keep some
best practices in mind:
 Keep it real-world relevant where you can
 Make questions clear and instructions easy to follow
 Give a rubric so students know what’s expected of them
 Create your final test after, not before, teaching the lesson
 Try blind grading: don’t look at the name on the assignment before you
mark it
Use these summative assessment examples to make them effective and fun for your
students!
4. Ipsative assessments
How many of your students get a bad grade on a test and get so discouraged they
stop trying?
Ipsative assessments are one of the types of assessment as learning that compares
previous results with a second try, motivating students to set goals and improve
their skills.
When a student hands in a piece of creative writing, it is often graded as a final
product rather than a first draft. Students practice athletic skills and musical talents
repeatedly to improve, but don't always get the same chance in other subjects like math.
A two-stage assessment framework helps students learn from their mistakes and
motivates them to do better. Plus, it tempers the focus on instant results and
teaches students that learning is a process.
You can incorporate ipsative assessments into your classroom with:
 Portfolios
 A two-stage testing process
 Project-based learning activities
One study on ipsative learning techniques found that when it was used with higher
education distance learners, it helped motivate students and encouraged them to act
on feedback to improve their grades.
In Gwyneth Hughes' book, Ipsative Assessment: Motivation Through Marking
Progress, she writes: "Not all learners can be top performers, but all learners can
potentially make progress and achieve a personal best. Putting the focus onto
learning rather than meeting standards and criteria can also be resource efficient."
While educators might use this type of assessment during pre- and post-test results,
they can also use it in reading instruction. Depending on your school's policy, for
example, you can record a student reading a book and discussing its contents. Then,
at another point in the year, repeat this process. Next, listen to the recordings
together and discuss their reading improvements.
What could it look like in your classroom?
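The pre/post comparison this section describes can be sketched in a few lines of Python; the skills and scores below are made up for illustration:

```python
# Minimal sketch of an ipsative comparison: each student is measured
# against their own previous attempt, not against peers.
# The skills and scores below are hypothetical.

def personal_progress(first_attempt, second_attempt):
    """Return the change from a student's first attempt to the second."""
    return second_attempt - first_attempt

attempts = {"reading fluency": (62, 74), "comprehension": (70, 69)}

for skill, (first, second) in attempts.items():
    change = personal_progress(first, second)
    print(f"{skill}: {change:+d} since the first attempt")
```

The point of the sketch is that the only reference data needed is the student's own earlier result, which is what makes the approach motivating for learners who would rank poorly against the whole class.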
5. Norm-referenced assessments
Norm-referenced assessments are tests designed to compare an individual to a
group of their peers, usually based on national standards and occasionally adjusted
for age, ethnicity or other demographics.
Unlike ipsative assessments, where the student is only competing against
themselves, norm-referenced assessments draw from a wide range of data points
to make conclusions about student achievement.
Types of norm-referenced assessments include:
 IQ tests
 Physical assessments
 Standardized college admissions tests like the SAT and GRE
Proponents of norm-referenced assessments point out that they accentuate
differences among test-takers and make it easy to analyze large-scale trends. Critics
argue they don’t encourage complex thinking and can inadvertently discriminate
against low-income students and minorities.
Norm-referenced assessments are most useful when measuring student
achievement to determine:
 Language ability
 Grade readiness
 Physical development
 College admission decisions
 Need for additional learning support
While they’re not usually the type of assessment you deliver in your classroom,
chances are you have access to data from past tests that can give you valuable
insights into student performance.
6. Criterion-referenced assessments
Criterion-referenced assessments compare the score of an individual student to a
learning standard and performance level, independent of other students around
them.
In the classroom, this means measuring student performance against grade-level
standards and can include end-of-unit or final tests to assess student understanding.
Outside of the classroom, criterion-referenced assessments appear in professional
licensing exams, high school exit exams and citizenship tests, where the student
must answer a certain percentage of questions correctly to pass.
Criterion-referenced assessments are most often compared with norm-referenced
assessments. While they’re both considered types of assessments of learning,
criterion-referenced assessments don’t measure students against their peers.
Instead, each student is graded to provide insight into their strengths and areas for
improvement.
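The difference between the two scoring approaches can be illustrated with a short sketch; the class scores and the 70% pass mark are hypothetical values, not from the source:

```python
# Illustrative sketch: norm-referenced vs. criterion-referenced scoring.
# The class scores and the 70-point cutoff are made-up values.

def percentile_rank(score, peer_scores):
    """Norm-referenced: where does this score sit relative to peers?"""
    below = sum(1 for s in peer_scores if s < score)
    return 100 * below / len(peer_scores)

def meets_criterion(score, cutoff=70):
    """Criterion-referenced: did the student reach the standard?"""
    return score >= cutoff

class_scores = [55, 62, 68, 71, 74, 80, 85, 91]
student = 71

print(percentile_rank(student, class_scores))  # 37.5: rank among peers
print(meets_criterion(student))                # True: passes the standard
```

The same raw score of 71 produces two very different statements: a middling standing relative to peers, but a clear pass against the fixed criterion.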
Differentiate assessment for learning, of learning, and as learning.
Assessment FOR learning is to determine where students “are at” in order to
determine how to plan for instruction so that they achieve curriculum goals.
This is almost synonymous with formative learning. It occurs during the process
of learning and helps a teacher plan rather than report.
Assessment AS learning is when learning happens while doing an assessment
activity. The students could learn what they do not know or they construct
knowledge more deeply or are more skilled by doing an activity or when some
piece of work is assessed. In this case assessment is more of a learning activity
than a tool for gauging achievement or progress.
Assessment OF learning is determining how much knowledge or skill a student
has learned for the purposes of determining their degree of success or failure. It
is summative and occurs at the end of some learning period in order usually to
help the teacher report on what has been learned.

Q.2 What do you know about the taxonomy of educational objectives? Write in detail.
Ans:
Taxonomies of Educational Objectives
BLOOM’S TAXONOMY
Information and quotations in this summary, except where otherwise
noted, are drawn from Krathwohl, D. R. (2002). A revision of Bloom's taxonomy:
An overview. Theory Into Practice, 41(4), 212-218. Krathwohl participated in
the creation of the original Taxonomy and was a co-author of the revised
Taxonomy.
“The Taxonomy of Educational Objectives is a framework for classifying
statements of what we expect or intend students to learn as a result of instruction.
The framework was conceived as a means of facilitating the exchange of test
items among faculty at various universities in order to create banks of items, each
measuring the same educational objective (p. 212).”
The Taxonomy of Educational Objectives provides a common language with
which to discuss educational goals.
Bloom’s Original Taxonomy
Benjamin Bloom of the University of Chicago developed the Taxonomy in 1956
with the help of several educational measurement specialists.
Bloom saw the original Taxonomy as more than a measurement tool. He believed
it could serve as a:
 common language about learning goals to facilitate communication across
persons, subject matter, and grade levels;
 basis for determining in a particular course or curriculum the specific
meaning of broad educational goals, such as those found in the currently
prevalent national, state, and local standards;
 means for determining the congruence of educational objectives, activities,
and assessments in a unit, course, or curriculum; and
 panorama of the range of educational possibilities against which the limited
breadth and depth of any particular educational course or curriculum could
be contrasted (Krathwohl, 2002).
Bloom’s Taxonomy provided six categories that described the cognitive
processes of learning: knowledge, comprehension, application, analysis,
synthesis, and evaluation. The categories were meant to represent educational
activities of increasing complexity and abstraction.
Bloom and associated scholars found that the original Taxonomy addressed only
part of the learning that takes place in most educational settings, and developed
complementary taxonomies for the Affective Domain (addressing values,
emotions, or attitudes associated with learning) and the Psychomotor Domain
(addressing physical skills and actions). These can provide other useful
classifications of types of knowledge that may be important parts of a course.
The Affective Domain
1. Receiving
2. Responding
3. Valuing
4. Organization
5. Characterization by a value or value complex
From Krathwohl, Bloom, & Masia. Taxonomy of Educational Objectives, the
Classification of Educational Goals. Handbook II: Affective Domain. (1973).
Psychomotor Domain
1. Reflex movements
2. Basic-fundamental movements
3. Perceptual abilities
4. Physical abilities
5. Skilled movements
6. Nondiscursive communication
From Harrow. A taxonomy of psychomotor domain: a guide for developing
behavioral objectives. (1972).
The Revised Taxonomy
Bloom’s Taxonomy was reviewed and revised by Anderson and Krathwohl, with
the help of many scholars and practitioners in the field, in 2001. They developed
the revised Taxonomy, which retained the same goals as the original Taxonomy
but reflected almost half a century of engagement with Bloom’s original version
by educators and researchers.
Original vs. Revised Bloom's Taxonomy
[1] Unlike Bloom’s original “Knowledge” category, “Remember” refers only to
the recall of specific facts or procedures
[2] Many instructors, in response to the original Taxonomy, commented on the
absence of the term “understand”. Bloom did not include it because the word
could refer to many different kinds of learning. However, in creating the revised
Taxonomy, the authors found that when instructors use the word “understand”,
they were most frequently describing what the original taxonomy had named
“comprehension”.
Structure of the Cognitive Process Dimension of the Revised Taxonomy
Remember – Retrieving relevant knowledge from long-term memory
Recognizing
Recalling
Understand – Determining the meaning of instructional messages, including
oral, written, and graphic communication
Interpreting
Exemplifying
Classifying
Summarizing
Inferring
Comparing
Explaining
Apply – Carrying out or using a procedure in a given situation
Executing
Implementing
Analyze – Breaking material into its constituent parts and detecting how the parts
relate to one another and to an overall structure or purpose
Differentiating
Organizing
Attributing
Evaluate – Making judgments based on criteria and standards
Checking
Critiquing
Create – Putting elements together to form a novel, coherent whole or make an
original product
Generating
Planning
Producing
One major change of the revised Taxonomy was to address Bloom’s very
complicated “knowledge” category, the first level in the original Taxonomy. In
the original Taxonomy, the knowledge category referred both to knowledge of
specific facts, ideas, and processes (as the revised category “Remember” now
does), and to an awareness of possible actions that can be performed with that
knowledge. The revised Taxonomy recognized that such actions address
knowledge and skills learned throughout all levels of the Taxonomy, and thus
added a second "dimension" to the Taxonomy: the knowledge dimension,
composed of factual, conceptual, procedural, and metacognitive knowledge.
Structure of the Knowledge Dimension of the Revised Taxonomy
 Factual knowledge – The basic elements that students must know to be
acquainted with a discipline or solve problems in it.
 Conceptual knowledge – The interrelationships among the basic elements
within a larger structure that enable them to function together.
 Procedural knowledge – How to do something; methods of inquiry; and
criteria for using skills, algorithms, techniques, and methods.
 Metacognitive knowledge – Knowledge of cognition in general as well as
awareness and knowledge of one's own cognition.
The two dimensions – knowledge and cognitive – of the revised Taxonomy
combine to create a taxonomy table with which written objectives can be
analyzed. This can help instructors understand what kind of knowledge and skills
are being covered by the course to ensure that adequate breadth in types of
learning is addressed by the course.
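As a rough illustration of the taxonomy table, one can key objectives by a (knowledge type, cognitive process) pair; the example objectives and their placements below are hypothetical, not drawn from the source:

```python
# Sketch of the revised Taxonomy's two-dimensional table: each written
# objective is classified by a (knowledge type, cognitive process) pair.
# The example objectives and their placements are hypothetical.

KNOWLEDGE = ["factual", "conceptual", "procedural", "metacognitive"]
COGNITIVE = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

table = {}  # (knowledge, process) -> list of objectives

def classify(objective, knowledge, process):
    """Place an objective in one cell of the taxonomy table."""
    assert knowledge in KNOWLEDGE and process in COGNITIVE
    table.setdefault((knowledge, process), []).append(objective)

classify("List the six levels of Bloom's taxonomy", "factual", "remember")
classify("Design a rubric for a unit test", "procedural", "create")

print(table[("factual", "remember")])
```

Scanning which cells of such a table stay empty is one concrete way to check whether a course covers an adequate breadth of knowledge types and cognitive processes.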
For examples of learning objectives that match combinations of knowledge and
cognitive dimensions see Iowa State University’s Center for Excellence in
Learning and Teaching interactive Flash Model by Rex Heer.
Structure of Observed Learning Outcomes (SOLO) taxonomy
Like Bloom’s taxonomy, the Structure of Observed Learning Outcomes (SOLO)
taxonomy developed by Biggs and Collis in 1982 distinguishes between
increasingly complex levels of understanding that can be used to describe and
assess student learning. While Bloom’s taxonomy describes what students do
with information they acquire, the SOLO taxonomy describes the relationship
students articulate between multiple pieces of information. Atherton (2005)
provides an overview of the five levels that make up the SOLO taxonomy:
1. Pre-structural: here students are simply acquiring bits of unconnected
information, which have no organization and make no sense.
2. Unistructural: simple and obvious connections are made, but their
significance is not grasped.
3. Multistructural: a number of connections may be made, but the meta-
connections between them are missed, as is their significance for the whole.
4. Relational level: the student is now able to appreciate the significance of
the parts in relation to the whole.
5. Extended abstract: the student is making connections not only within
the given subject area but also beyond it, and is able to generalize and
transfer the principles and ideas underlying the specific instance.
Q.3 How will you define attitude? Elaborate its components.
Ans:
Definition of Attitude:
Attitude is a manner, disposition, feeling, or position with regard to a person
or thing; a tendency or orientation, especially of the mind.
According to Gordon Allport, “An attitude is a mental and neural state of
readiness, organized through experience, exerting a directive or dynamic
influence upon the individual’s response to all objects and situations with which
it is related.”
Frank Freeman said, “An attitude is a dispositional readiness to respond to certain
institutions, persons or objects in a consistent manner which has been learned and
has become one’s typical mode of response.”
Thurstone said, "An attitude denotes the sum total of a man's inclinations and
feelings, prejudice or bias, preconceived notions, ideas, fears, threats, and
convictions about any specific topic."
Anastasi defined attitude as “A tendency to react favorably or unfavorably
towards a designated class of stimuli, such as a national or racial group, a custom
or an institution.”
According to N.L. Munn, “Attitudes are learned predispositions towards aspects
of our environment. They may be positively or negatively directed towards
certain people, service, or institution.”
“Attitudes are an ‘individual’s enduring favorable or unfavorable evaluations,
emotional feelings, and action tendencies toward some object or idea.” — David
Krech, Richard S. Crutchfield, and Egerton L. Ballackey.
“Attitude can be described as a learned predisposition to respond in a consistently
favorable or unfavorable manner for a given object.” — Martin Fishbein and Icek
Ajzen.
“An attitude is a relatively enduring organization of beliefs around an object or
situation predisposing one to respond in some preferential manner.” — Milton
Rokeach.
Characteristics of Attitude
Attitude can be described as a tendency to react positively or negatively to a
person or circumstances.
Thus the two main elements of attitude are this tendency or predisposition and
the direction of this predisposition.
It has been defined as a mental state of readiness, organized through experience,
which exerts a directive or dynamic influence on the responses.
These can also be explicit or implicit.
Explicit attitudes are those that we are consciously aware of and that clearly
influence our behaviors and beliefs. Implicit attitudes are unconscious but still
affect our beliefs and behaviors.
Psychologist Thurstone described attitude as the degree of positive or negative
affect associated with some psychological object, where a psychological object
includes symbols, words, slogans, people, institutions, ideas, etc.
Characteristics of Attitude are;
1. Attitudes are the complex combination of things we call personality,
beliefs, values, behaviors, and motivations.
2. It can fall anywhere along a continuum from very favorable to very
unfavorable.
3. All people, irrespective of their status or intelligence, hold attitudes.
4. An attitude exists in every person’s mind. It helps to define our identity,
guide our actions, and influence how we judge people.
5. Although the feeling and belief components of attitude are internal to a
person, we can view a person’s attitude from their resulting behavior.
6. Attitude helps us define how we see situations and define how we behave
toward the situation or object.
7. It provides us with internal cognitions or beliefs and thoughts about people
and objects.
8. It can also be explicit or implicit. Explicit attitudes are those we are
consciously aware of; implicit attitudes are unconscious but still affect
our behaviors.
9. Attitudes cause us to behave in a particular way toward an object or person.
10. An attitude is a summary of a person's experience; thus, an attitude
grounded in direct experience predicts future behavior more accurately.
11. It includes certain aspects of personality, such as interests, appreciation,
and social conduct.
12. It indicates the total of a man's inclinations and feelings.
13. An attitude is a point of view, substantiated or otherwise, true or false,
which one holds towards an idea, object, or person.
14. It has aspects such as direction, intensity, generality, or specificity.
15. It refers to one's readiness for doing work.
16. It may be positive or negative and may be affected by age, position, and
education.
Components of Attitude
Attitudes are simply expressions of how much we like or dislike various things.
Attitudes represent our evaluations, preferences, or rejections based on the
information we receive.
It is a generalized tendency to think or act in a certain way in respect of some
object or situation, often accompanied by feelings. It is a learned predisposition
to respond in a consistent manner with respect to a given object.
This can include evaluations of people, issues, objects, or events. Such
evaluations are often positive or negative, but they can also be uncertain at times.
Attitudes are ways of thinking, and they shape how we relate to the world both
in work and outside of work. Researchers also suggest that several different
components make up attitudes.
One can see this by looking at the three components of an attitude: cognition,
affect and behavior.
3 components of attitude are;
1. Cognitive Component.
2. Affective Component.
3. Behavioral Component.
Cognitive Component
The cognitive component of attitudes refers to the beliefs, thoughts, and attributes
that we would associate with an object. It is the opinion or belief segment of an
attitude. It refers to the part of attitude that relates to a person's general
knowledge.
Typically these come to light in generalities or stereotypes, such as ‘all babies are
cute’, ‘smoking is harmful to health’ etc.
Affective Component
The affective component is the emotional or feeling segment of an attitude.
It deals with feelings or emotions that are brought to the surface about something,
such as fear or hate. Using the earlier examples, someone might hold the attitude
that they love all babies because they are cute, or that they hate smoking because
it is harmful to health.
Behavioral Component
The behavioral component of an attitude consists of a person's tendency to behave
in a particular way toward an object. It refers to that part of attitude which
reflects the intention of a person in the short run or long run.
Using the earlier examples, the behavioral attitude may be 'I cannot wait to kiss
the baby' or 'We had better keep those smokers out of the library', etc.
Conclusion
Attitude is composed of three components: a cognitive component, an affective
or emotional component, and a behavioral component.
Basically, the cognitive component is based on the information or knowledge,
whereas the affective component is based on the feelings.
The behavioral component reflects how attitude affects the way we act or behave.
It is helpful in understanding their complexity and the potential relationship
between attitudes and behavior.
But for clarity's sake, keep in mind that the term attitude essentially refers to the
affective part of the three components.
In an organization, attitudes are important for their goal or objective to succeed.
Each one of these components is very different from the other, and they can build
upon one another to form our attitudes and, therefore, affect how we relate to the
world.
Q.4 What are the types of questions? Also write their advantages and
disadvantages.
Ans:
Types of Questions: Sample Question Types with Examples
What is a Question?
A question is a sentence that seeks an answer for information collection, tests,
and research. The right questions produce accurate responses and aid in collecting
actionable quantitative and qualitative data.
Over the years, questions have evolved into different question types that collect
different sets of information. The types of questions used in a research study are
decided by the information required, the nature of the study, the time needed to
answer, and the budget constraints of the study.
The art of asking the right questions helps to gain deep insights, make informed
decisions, and develop effective solutions. To know how to ask good questions,
understand the basic question types.
Below are some widely used types of questions with sample examples of these
question types:
1. The Dichotomous Question
The dichotomous question is generally a "Yes/No" close-ended question and is used
for basic validation. In the example below, a yes or no question is used to
understand if the person has ever used your online store to make a purchase. The
respondents who answer "Yes" and "No" can be bunched together into groups.
Then, you can ask different questions to both groups.
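The skip logic described above, where "Yes" and "No" respondents are routed to different follow-up questions, can be sketched as follows (the store questions and respondent names are hypothetical):

```python
# Hypothetical skip-logic sketch for a dichotomous question:
# respondents are split into groups by their yes/no answer and
# routed to different follow-up questions.

def next_question(answered_yes):
    """Return the follow-up question for a yes or no respondent."""
    if answered_yes:
        return "How satisfied were you with your last purchase?"
    return "What has kept you from buying from our store?"

responses = {"alice": True, "bob": False}
follow_ups = {name: next_question(ans) for name, ans in responses.items()}
print(follow_ups["bob"])
```

Most survey platforms express this same branching declaratively rather than in code, but the underlying logic is the simple two-way split shown here.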
2. Multiple Choice Questions
Multiple choice questions are a question type in which respondents have to select
one (single select multiple choice question) or many (multi-select multiple choice
question) responses from a given list of options. The multiple-choice question
consists of an incomplete stem (question), right answer or answers, incorrect
answers, close alternatives, and distractors. The question is designed so that it
best matches the expected outcome. Typically, single select questions are
denoted by radio buttons, and multi-select questions are denoted by check-boxes.
An example of a multi-select multiple-choice question is a bank that would like
to launch a new credit card and wants to understand payment merchants' usage:
This helps the bank understand respondents' payment merchant preferences and use
that insight in its new product launch.
3. Rank Order Scaling Question
The rank order question type allows the respondent to rank preferences in a
question in the order of choice. Use this question type to understand the weightage
that is offered by respondents to each option. The other type of rank order
question is a drag and drop question type, where the respondent can rearrange
options based on importance. An example of a rank order question is a sports
goods store looking to understand from respondents their choice of sports and the
order they would place them.
4. Text Slider Question
A text slider question is a rating scale question type that uses an interactive slider
in answer to select the most appropriate option. The options scale is well-defined
and on the same continuum. Rating scales are used to measure the direction and
intensity of attitudes. You can also use a text slider where either end of the option
has an anchor.
5. Likert Scale Question
Likert Scale is one of the most used tools by market researchers to evaluate their
target audience's opinions and attitudes. This type of question is essential in
measuring a respondent's opinion or belief towards a given subject. The answer
options scale is typically a five, seven, or nine-point agreement scale used to
measure respondents' agreement with various statements. Likert scales can be
unipolar, asking the respondent to consider the presence or absence of a quality,
or bipolar, presenting two opposite qualities and measuring their relative
proportion. For example, a telecom company might use a Likert question to measure
respondents' satisfaction with its services.
6. Semantic Differential Scale
Semantic differential scale is a type of question that asks people to rate a product,
company, brand, or any "entity" within the frames of a multipoint rating option.
The answer options are anchored by grammatically opposite adjectives at each
end. For example, if a national health association wants to collect feedback on
Government insurance policies from the general public, the following question
could be administered.
7. Stapel Scale Question
The Stapel scale question is a close-ended rating scale with a single adjective
(unipolar), developed to gather respondent insights about a particular subject or
event. The survey question comprises an even number of response options
without a neutral point. For example, an airline may use it to collect feedback
on multiple attributes of a respondent's flying experience.
8. Constant Sum Question
Constant Sum question is also a rank order question type where the respondent
can only select options in the form of numerics. A constant sum question allows
respondents to enter numerical values for a set of variables but requires them to
add up to a pre-specified total. Each numeric entry is summed and can be
displayed to the respondent. It is a great question type to use when asking
financial, budget-related questions, or percentage based questions. An example
of this question type is collecting data on how respondents allocate monthly
budgets based on their income.
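The defining rule of a constant sum question, that the entries must add up to a pre-specified total, amounts to a one-line validation check; the budget categories below are illustrative:

```python
# Validation sketch for a constant sum question: the numeric entries
# must add up to a pre-specified total (here 100, as for percentages).
# The budget categories are hypothetical.

def valid_constant_sum(entries, total=100):
    """Check that the respondent's entries sum to the required total."""
    return sum(entries.values()) == total

budget = {"rent": 40, "food": 25, "transport": 15, "savings": 20}
print(valid_constant_sum(budget))  # True: entries sum to 100
```

Survey tools typically show the running sum to the respondent and block submission until this check passes.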
9. Comment Box Open Ended Question
The comment box open-ended question is used to collect any feedback or
suggestions that could be very long. They use an open text format so that the
respondent can answer based on their complete knowledge, feelings, and
understanding. Hence, this question type is used when the organization
conducting the study would like to justify a selection in a prior question or when
extensive feedback is required from the respondent.
10. Text Question
A text question is similar to a comment box, but the data to be entered is
generally regulated and requires validation. This type of question has three
sub-types:
 Single row text: one line of text can be entered, for example, a house
address.
 Numeric textbox: only numbers can be entered; any other character is
rejected, for example, a contact number.
 Email address: an email address is entered for further correspondence.
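Each sub-type corresponds to a different validation rule. The sketch below
shows one way the three checks might be implemented; the regular expression
for email is deliberately simple and illustrative, not a complete email
grammar:

```python
import re

def validate_single_row(text):
    """One line of text: non-empty, no newline characters allowed."""
    return "\n" not in text and len(text) > 0

def validate_numeric(text):
    """Only digits allowed; any other character is rejected."""
    return text.isdigit()

def validate_email(text):
    """A simple, illustrative email pattern (not RFC-complete)."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", text) is not None

print(validate_numeric("03001234567"))     # True
print(validate_email("name@example.com"))  # True
```

A real survey platform would typically apply stricter, platform-specific
validation, but the principle is the same: each text sub-type maps to its own
input rule.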
11. Contact Information Question
This question type is an open-ended question with multiple labeled rows of
text, each with regulated input. It collects respondent information such as
full name, address, email address, phone number, age, and sex.
12. Demographic Question
The demographic question captures demographic data from a population set. It
is used to identify age, gender, income, race, geographic place of residence,
number of children, and so on. Demographic data helps you paint a more
accurate picture of the group of people being surveyed. For example, a survey
might ask respondents to select their age bracket from a list of ranges.
13. Matrix Table Question
Matrix table questions are arranged in tabular format, with the questions
listed down the left side of the table and the answer options across the top.
There are multiple variants of the matrix table question type: multipoint
matrix tables use radio buttons to select one answer per aspect, multi-select
matrix tables use check boxes so that several answers can be chosen, and
spreadsheet matrix tables accept typed text in each cell. For example, an
organization that wants feedback on several specific attributes of a product
could list the attributes as rows and rating options from "Poor" to
"Excellent" as columns.
14. Side-by-Side Matrix Question
If you need a survey that measures both the importance of and satisfaction
with the various services offered to users, you can use a side-by-side matrix
question. It lets you define multiple rating scales simultaneously, which
makes it easy to identify your strengths and improvement areas.
15. Star Rating Question
The star rating question is a type of rating question that uses an odd number
of stars to rank attributes or express feelings and emotions. An odd number is
used so that the scale has a middle, neutral point. The more stars selected,
the stronger the agreement with the statement. This question type allows
ratings on multiple rows to be collected for a single topic. For example, a
retail brand gathering feedback about its brand could ask for star ratings of
attributes such as product quality, value for money, and customer service.
16. Max Diff Question
Maximum difference scaling, or Max-Diff, is a question type in which
respondents are shown a set of attributes and asked to indicate the best and
the worst attribute in the set; each set allows exactly one of each. For
example, a bank that wants to understand preferences among payment merchants
could present sets of merchants and ask respondents to mark the most and least
preferred in each set.
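Max-Diff responses are often summarized with a simple count-based score: the
number of times an attribute was chosen as best, minus the number of times it
was chosen as worst, divided by the number of times it was shown. This
counting approach is one common analysis, not the only one, and the data below
is hypothetical:

```python
from collections import defaultdict

def maxdiff_scores(tasks):
    """tasks: list of (options_shown, best_pick, worst_pick) tuples.
    Returns (best count - worst count) / times shown for each option."""
    best, worst, shown = defaultdict(int), defaultdict(int), defaultdict(int)
    for options, b, w in tasks:
        for o in options:
            shown[o] += 1
        best[b] += 1
        worst[w] += 1
    return {o: (best[o] - worst[o]) / shown[o] for o in shown}

tasks = [
    (["Card", "Cash", "Wallet"], "Card", "Cash"),
    (["Card", "Cash", "Wallet"], "Wallet", "Cash"),
]
scores = maxdiff_scores(tasks)
print(scores["Card"])  # 0.5
```

Scores near +1 indicate an attribute consistently picked as best; scores near
-1 indicate one consistently picked as worst.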
It’s good to regularly review the advantages and disadvantages of the most
commonly used test questions and the test banks that now frequently provide
them.
MULTIPLE-CHOICE QUESTIONS
Advantages
 Quick and easy to score, by hand or electronically
 Can be written so that they test a wide range of higher-order thinking
skills
 Can cover lots of content areas on a single exam and still be
answered in a class period
Disadvantages
 Often test literacy skills: “if the student reads the question carefully,
the answer is easy to recognize even if the student knows little about
the subject” (p. 194)
 Give unprepared students the opportunity to guess; correct guesses
earn credit for things they do not know
 Expose students to misinformation that can influence subsequent
thinking about the content
 Take time and skill to construct (especially good questions)
TRUE-FALSE QUESTIONS
Advantages
 Quick and easy to score
Disadvantages
 Considered to be “one of the most unreliable forms of assessment”
(p. 195)
 Often written so that most of the statement is true save one small,
often trivial bit of information that then makes the whole statement
untrue
 Encourage guessing, and reward for correct guesses
SHORT-ANSWER QUESTIONS
Advantages
 Quick and easy to grade
 Quick and easy to write
Disadvantages
 Encourage students to memorize terms and details, so that their
understanding of the content remains superficial
ESSAY QUESTIONS
Advantages
 Offer students an opportunity to demonstrate knowledge, skills, and
abilities in a variety of ways
 Can be used to develop student writing skills, particularly the ability
to formulate arguments supported with reasoning and evidence
Disadvantages
 Require extensive time to grade
 Encourage use of subjective criteria when assessing answers
 If used in class, necessitate quick composition without time for
planning or revision, which can result in poor-quality writing
QUESTIONS PROVIDED BY TEST BANKS
Advantages
 Save instructors the time and energy involved in writing test
questions
 Use the terms and methods that are used in the book
Disadvantages
 Rarely involve analysis, synthesis, application, or evaluation (cross-
discipline research documents that approximately 85 percent of the
questions in test banks test recall)
 Limit the scope of the exam to text content; if used extensively, may
lead students to conclude that the material covered in class is
unimportant and irrelevant
We tend to think that these are the only test question options, but there are
some interesting variations. The article that prompted this review proposes
one: start with a question, and revise it until it can be answered with one
word or a short phrase. Do not list any answer options with the question
itself; instead, attach an alphabetized list of answers to the exam, from
which students select. Some of the answers may be used more than once, some
may not be used at all, and there are more answers listed than questions. It
is a ratcheted-up version of matching: the approach makes the test more
challenging and decreases the chance of getting an answer correct by
guessing.
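The format described above can be assembled mechanically: collect the one-word
answers, add extra distractors, and present an alphabetized answer bank. A
minimal sketch, using hypothetical questions of my own invention:

```python
def build_matching_exam(questions, distractors):
    """questions: list of (prompt, answer) pairs.  Returns the prompts plus
    an alphabetized answer bank holding every answer and extra distractors,
    so more answers are listed than there are questions."""
    answers = {a for _, a in questions} | set(distractors)
    bank = sorted(answers)               # alphabetized answer list
    prompts = [q for q, _ in questions]
    return prompts, bank

questions = [("Organ that pumps blood?", "heart"),
             ("Gas that plants absorb?", "carbon dioxide")]
prompts, bank = build_matching_exam(questions, ["oxygen", "liver"])
print(bank)  # ['carbon dioxide', 'heart', 'liver', 'oxygen']
```

Because the bank is shared across all questions and contains unused
distractors, students cannot succeed by elimination alone.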
Remember, students do need to be introduced to any new or altered question
format before they encounter it on an exam.
Q.5 Construct a test, administer it and ensure its reliability.
Ans:
Test of 6th Class English
CHOOSE THE CORRECT ANSWER BY CIRCLING THE CORRECT
LETTER:
1. Find the word that starts with a vowel:
a. Dog
b. Cat
c. Owl
d. Hat
2. Find the word that ends in the ‘f’ sound:
a. Plate
b. Graph
c. Sign
d. Fall
3. Find the word that rhymes with ‘lake.’
a. Like
b. Pick
c. Ankle
d. Rake
4. What is the opposite of tiny?
a. Small
b. Miniature
c. Huge
d. Limy
Please write three sentences in English about your favorite animal or food.
Why is it your favorite?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Read the story and answer the questions. Use complete sentences.
Ali went to the store with his mother in a green rickshaw. His mother was wearing
a white dress. He was wearing a blue shirt and black pants. He bought two eggs,
five candies, and three boxes of milk. He gave one candy to his sister. His mother
cooked two eggs and made an omelette. He had a happy day.
How many boxes of milk did Ali buy?
________________________________________________________________
What did his mother make with the eggs?
________________________________________________________________
What was Ali wearing that day?
________________________________________________________________
Test administration
Test administration procedures are developed for an exam program in order to
help reduce measurement error and to increase the likelihood of fair, valid, and
reliable assessment. Specifically, appropriate standardized procedures improve
measurement by increasing consistency and test security. Consistent,
standardized administration of the exam allows you to make direct comparisons
between examinees' scores, despite the fact that the examinees may have taken
their tests on different dates, at different sites, and with different proctors.
Furthermore, administration procedures that protect the security of the test help
to maintain the meaning and integrity of the score scale for all examinees.
Importance of Test Administration
Consistency
Standardized tests are designed to be administered under consistent procedures
so that the test-taking experience is as similar as possible across examinees. This
similar experience increases the fairness of the test as well as making examinees'
scores more directly comparable. Typical guidelines for test administration
sites state that all sites should be comfortable, with good lighting,
ventilation, and accessibility for examinees with disabilities. Interruptions and
distractions, such as excessive noise, should be prevented. The time limits that
have been established should be adhered to for all test administrations. The test
should be administered by trained proctors who maintain a positive atmosphere
and who carefully follow the administration procedures that have been developed.
Test Security
Test security consists of methods designed to prevent cheating, as well as to
protect the test items and content from being exposed to future test-takers. Test
administration procedures related to test security may begin as early as the
registration procedures. Many exam programs restrict examinees from registering
for a test unless they meet certain eligibility criteria. When examinees arrive at
the test site, additional provisions for test security include verifying each
examinee's identification and restricting materials (such as photographic or
communication devices) that an examinee is allowed to bring into the test
administration. If the exam program uses multiple, parallel test forms, these may
be distributed in a spiraled fashion, in order to prevent one examinee from being
able to copy from another. (Form A is distributed to the first examinee, Form B
to the second examinee, Form A to the third examinee, etc.) The test proctors
should also remain attentive throughout the test administration to prevent
cheating and other security breaches. When testing is complete, all test related
materials should be carefully collected from the examinees before they depart.
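The spiraled distribution described above simply cycles through the parallel
forms in seating order, so adjacent examinees never hold the same form. A
sketch of that assignment rule:

```python
from itertools import cycle

def spiral_forms(num_examinees, forms=("A", "B")):
    """Assign parallel test forms in a repeating cycle so that
    adjacent examinees receive different forms."""
    assignment = cycle(forms)
    return [next(assignment) for _ in range(num_examinees)]

print(spiral_forms(5))  # ['A', 'B', 'A', 'B', 'A']
```

The same function handles three or more parallel forms by extending the
`forms` tuple, which further reduces the chance that any two neighbors share a
form.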
Reliability of Test
Reliability refers to how dependably or consistently a test measures a
characteristic. If a person takes the test again, will he or she get a similar test
score, or a much different score? A test that yields similar scores for a person who
repeats the test is said to measure a characteristic reliably.
How do we account for an individual who does not get exactly the same test score
every time he or she takes the test? Some possible reasons are the following:
 Test taker's temporary psychological or physical state. Test
performance can be influenced by a person's psychological or physical
state at the time of testing. For example, differing levels of anxiety, fatigue,
or motivation may affect the applicant's test results.
 Environmental factors. Differences in the testing environment, such as
room temperature, lighting, noise, or even the test administrator, can
influence an individual's test performance.
 Test form. Many tests have more than one version or form. Items differ on
each form, but each form is supposed to measure the same thing. Different
forms of a test are known as parallel forms or alternate forms. These
forms are designed to have similar measurement characteristics, but they
contain different items. Because the forms are not exactly the same, a test
taker might do better on one form than on another.
 Multiple raters. In certain tests, scoring is determined by a rater's
judgments of the test taker's performance or responses. Differences in
training, experience, and frame of reference among raters can produce
different test scores for the test taker.
These factors are sources of chance or random measurement error in the
assessment process. If there were no random errors of measurement, the
individual would get the same test score, the individual's "true" score, each time.
The degree to which test scores are unaffected by measurement errors is an
indication of the reliability of the test.
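One common way to quantify this is the test-retest method: the same examinees
take the test twice, and the reliability coefficient is the Pearson
correlation between the two sets of scores. The sketch below uses hypothetical
scores; a coefficient near 1.0 indicates highly consistent measurement:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two score lists (test and retest)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

test1 = [70, 80, 90, 60, 85]   # first administration
test2 = [72, 79, 88, 64, 83]   # retest of the same examinees
print(round(pearson_r(test1, test2), 3))  # 0.999
```

In practice, coefficients above roughly 0.8 are usually considered acceptable
for classroom tests, though the required level depends on how the scores will
be used.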
Reliable assessment tools produce dependable, repeatable, and consistent
information about people. In order to meaningfully interpret test scores and make
useful employment or career-related decisions, you need reliable tools. This
brings us to the next principle of assessment.