Classroom Assessment


CLASSROOM ASSESSMENT

For ADE and B.Ed (Hons) Levels

IQRA DEGREE COLLEGE


HAVELIAN

Units 1-6 Credit Hours: 3


COURSE CONTENT
UNIT 01: Concept of classroom
UNIT 02: Achievement tests
UNIT 03: Test construction
UNIT 04: Test administration and analysis
UNIT 05: Interpreting test scores
UNIT 06: Grading and reporting results
COURSE OBJECTIVES

After studying this course, prospective teachers will be able to:

1. Understand the concepts and application of classroom assessment.


2. Integrate objectives with evaluation and measurement.
3. Acquire skills of assessing the learning outcomes.
4. Interpret test scores.
5. Know about the trends and techniques of classroom assessment.
UNIT 01: CONCEPT OF CLASSROOM ASSESSMENT

MEASUREMENT, ASSESSMENT AND EVALUATION
Concept of Measurement, Assessment and Evaluation
Despite their significant role in education, the terms measurement, assessment, and
evaluation are often confused with one another. Many people use them interchangeably
and find it difficult to explain the differences among them. Yet each of these terms has a
specific meaning, sharply distinguished from the others.
Measurement: In general, the term measurement is used to determine the attributes or
dimensions of an object. For example, we measure an object to know how big, tall or heavy
it is. From an educational perspective, measurement refers to the process of obtaining a
numerical description of a student's progress towards a pre-determined goal. This process
provides information about how much a student has learnt. Measurement provides a
quantitative description of the student's performance, for example, 'Rafaih solved 23
arithmetic problems out of 40.' It does not, however, include the qualitative aspect, for
example, 'Rafaih's work was neat.'
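The purely quantitative character of measurement can be illustrated with a short sketch. The figures come from the Rafaih example above; the function name and the one-decimal rounding are illustrative assumptions, not anything prescribed by the text.

```python
# Measurement yields only a numerical description of performance;
# it attaches no value judgment to the number it produces.
def measure(correct, total):
    """Return the raw score and the percentage of items answered correctly."""
    percentage = 100 * correct / total
    return correct, round(percentage, 1)

# Rafaih solved 23 arithmetic problems out of 40.
raw, percent = measure(23, 40)
print(raw, percent)  # 23 57.5
```

Note that the output is a bare number: whether 57.5% is "good" is a question for evaluation, not measurement.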
Testing: A test is an instrument or a systematic procedure to measure a particular
characteristic. For example, a test of mathematics will measure the level of the learners'
knowledge of this particular subject or field.
Assessment: Kizlik (2011) defines assessment as a process by which information is
obtained relative to some known objective or goal. Assessment is a broad term that
includes testing. For example, a teacher may assess knowledge of the English language
through a test and assess the language proficiency of the students through some other
instrument, for example an oral quiz or a presentation. Based upon this view, we can say
that every test is an assessment, but not every assessment is a test.
The term ‘assessment’ is derived from the Latin word ‘assidere’ which means ‘to sit
beside’. In contrast to testing, the tone of the term assessment is non-threatening
indicating a partnership based on mutual trust and understanding. This emphasizes that
there should be a positive rather than a negative association between assessment and the
process of teaching and learning in schools. In the broadest sense assessment is
concerned with children’s progress and achievement.
In a comprehensive and specific way, classroom assessment may be defined as:
the process of gathering, recording, interpreting, using and
communicating information about a child’s progress and achievement
during the development of knowledge, concepts, skills and attitudes.
(NCCA, 2004)

In short, we can say that assessment entails much more than testing. It is an ongoing
process that includes many formal and informal activities designed to monitor and
improve teaching and learning.

Evaluation: According to Kizlik (2011), evaluation is the most complex and the least
understood term. Hopkins and Antes (1990) defined evaluation as a continuous
inspection of all available information in order to form a valid judgment of students’
learning and/or the effectiveness of education program.
The central idea in evaluation is "value." When we evaluate a variable, we are basically
judging its worthiness, appropriateness and goodness. Evaluation is always done against a
standard, objective or criterion. In the teaching-learning process, teachers make
evaluations of students, usually in the context of comparisons between what was intended
(learning, progress, behaviour) and what was obtained.

Evaluation is a much more comprehensive term than measurement and assessment. It
includes both quantitative and qualitative descriptions of students' performance. It always
provides a value judgment regarding the desirability of the performance, for example,
'very good' or 'good'.
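The step from measurement to evaluation, comparing a score with a standard to arrive at a value judgment, can be sketched in a few lines. The percentage bands below are hypothetical cut-offs chosen purely for illustration; any real grading standard would be set by the institution.

```python
# Evaluation compares the measured score against a criterion and
# returns a value judgment. The bands here are hypothetical.
def evaluate(correct, total):
    percent = 100 * correct / total
    if percent >= 80:
        return "Very good"
    if percent >= 60:
        return "Good"
    if percent >= 40:
        return "Satisfactory"
    return "Needs improvement"

# Rafaih's 23 out of 40 (57.5%) falls in the hypothetical "Satisfactory" band.
print(evaluate(23, 40))
```

The contrast with the earlier measurement example is the point: the same raw score now yields a judgment of desirability rather than just a number.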
Kizlik (2011). http://www.adprima.com/measurement.htm

Activity 1.1: Distinguish among measurement, assessment and evaluation with the
help of relevant examples.

Classroom Assessment: Why, What, How and When


According to Carol Ann Tomlinson, "Assessment is today's means of modifying tomorrow's
instruction." It is an integral part of the teaching-learning process, and it is widely
accepted that the effectiveness of teaching and learning is directly influenced by
assessment. Hamidi (2010) developed a framework that addresses why, what, who, how,
when and how much to assess. This framework is helpful in understanding the true nature
of the concept.
Why to Assess: Teachers have clear goals for instruction, and they assess to ensure that
these goals have been or are being met. If objectives are the destination and instruction is
the path to it, then assessment is the tool that keeps efforts on track, confirms that the
path is right and, at the end of the journey, indicates that the destination has been
reached.
What to Assess: Teachers cannot assess whatever they themselves like. In classroom
assessment, teachers are supposed to assess students' current abilities in a given skill or
task. The teacher can assess students’ knowledge, skills or behaviour related to a
particular field.
Who to Assess: It may seem strange to ask whom a teacher should assess in the
classroom, but the issue is of great concern. Teachers should treat students as 'real
learners', not as course or unit coverers. They should also predict that some students are
more active and some are less active; some are quick at learning and some are slow at it.
Therefore, classroom assessment calls for a prior realistic appraisal of the individuals
teachers are going to assess.
How to Assess: Teachers employ different instruments, formal or informal, to assess
their students. Brown and Hudson (1998) reported that teachers use three sorts of
assessment methods – selected-response assessments, constructed-response assessments,
and personal-response assessments. They can adjust the assessment types to what they
are going to assess.
When to Assess: Educationists strongly agree that assessment is interwoven with
instruction. Teachers continue to assess students' learning throughout the process of
teaching. They particularly conduct formal assessments when they are going to make
instructional decisions at the formative and summative levels, even if those decisions are
small. For example, they assess when there is a change in the content, when there is a
shift in pedagogy, or when the effect of given materials or curriculum on the learning
process is examined.
How much to Assess: There is no touchstone for weighing the degree to which a teacher
should assess students. But this does not mean that teachers can evaluate their students to
whatever extent they prefer. It is generally agreed that, since students differ in ability,
learning styles, interests and needs, assessment should be matched to each individual's
needs, ability and knowledge. Teachers' careful and wise judgment in this regard can
prevent over-assessment or under-assessment.
Activity: Critically discuss the significance of the decisions that teachers take regarding
classroom assessment.

Types of Assessment
"As coach and facilitator, the teacher uses formative assessment to help support and
enhance student learning. As judge and jury, the teacher makes summative judgments
about a student's achievement..."

Atkin, Black & Coffey (2001)


Assessment is a purposeful activity aiming to facilitate students’ learning and to improve
the quality of instruction. Based upon the functions that it performs, assessment is
generally divided into three types: assessment for learning, assessment of learning and
assessment as learning.
a) Assessment for Learning (Formative Assessment)
Assessment for learning is a continuous and an ongoing assessment that allows teachers
to monitor students on a day-to-day basis and modify their teaching based on what the
students need to be successful. This assessment provides students with the timely,
specific feedback that they need to enhance their learning. The essence of formative
assessment is that the information yielded by this type of assessment is used on one hand
to make immediate decisions and on the other hand based upon this information; timely
feedback is provided to the students to enable them to learn better. If the primary purpose
of assessment is to support high-quality learning then formative assessment ought to be
understood as the most important assessment practice.

The National Center for Fair and Open Testing (1999). The Value of Formative
Assessment. http://www.fairtest.org/examarts/winter99/k-forma3.html
Assessment for learning has many unique characteristics; for example, this type of
assessment is treated as "practice." Learners should not be graded on skills and concepts
that have just been introduced; they should be given opportunities to practice. Formative
assessment helps teachers determine the next steps during the learning process as
instruction approaches the summative assessment of student learning. A good analogy for
this is the road test that is required to receive a driver's license. Before the final driving
test, or summative assessment, a learner practices and is assessed again and again so
that deficiencies in the skill can be pointed out.
Another distinctive characteristic of formative assessment is student involvement. If
students are not involved in the assessment process, formative assessment is not practiced
or implemented to its full effectiveness. One of the key components of engaging students
in the assessment of their own learning is providing them with descriptive feedback as
they learn. In fact, research shows descriptive feedback to be the most significant instructional
strategy to move students forward in their learning. Descriptive feedback provides students with
an understanding of what they are doing well. It also gives input on how to reach the next step in
the learning process.

The role of assessment for learning in the instructional process can best be understood
with the help of the following diagram.

Source: http://www.stemresources.com/index.php?option=com_content&view=article&id=52&Itemid=70
Garrison and Ehringhaus (2007) identified some of the instructional strategies that can be
used for formative assessment:
 Observations. Observing students’ behaviour and tasks can help teacher to identify
if students are on task or need clarification. Observations assist teachers in gathering
evidence of student learning to inform instructional planning.
 Questioning strategies. Asking better questions allows an opportunity for deeper
thinking and provides teachers with significant insight into the degree and depth of
understanding. Questions of this nature engage students in classroom dialogue that
both uncovers and expands learning.
 Self and peer assessment. When students have been involved in criteria and goal
setting, self-evaluation is a logical step in the learning process. With peer evaluation,
students see each other as resources for understanding and checking for quality work
against previously established criteria.
 Student record keeping. Keeping records also helps teachers look beyond a "grade,"
to see where the learner started and the progress being made towards the learning
goals.
b) Assessment of Learning (Summative Assessment)
Summative assessment, or assessment of learning, is used to evaluate students'
achievement at some point in time, generally at the end of a course. The purpose of this
assessment is to help the teacher, students and parents know how well a student has
completed the learning task. In other words, summative evaluation is used to assign a
grade to a student, which indicates his/her level of achievement in the course or program.
Assessment of learning is basically designed to provide useful information about the
performance of the learners rather than immediate and direct feedback to teachers and
learners; therefore, it usually has little effect on learning. High-quality summative
information can, however, help and guide teachers to organize their courses and decide
their teaching strategies, and educational programs can be modified on the basis of the
information it generates.
Many experts believe that all forms of assessment have some formative element. The
difference only lies in the nature and the purpose for which assessment is being
conducted.
Comparing Assessment for Learning and Assessment of Learning

 Formative: Checks how students are learning and whether there are any problems in
the learning process; it determines what to do next.
Summative: Checks what has been learned to date.

 Formative: Is designed to assist educators and students in improving learning.
Summative: Is designed to provide information to those not directly involved in
classroom learning and teaching (school administration, parents, school board), in
addition to educators and students.

 Formative: Is used continually.
Summative: Is periodic.

 Formative: Usually uses detailed, specific and descriptive feedback, in a formal or
informal report.
Summative: Usually uses numbers, scores or marks as part of a formal report.

 Formative: Usually focuses on improvement, compared with the student's own
previous performance.
Summative: Usually compares the student's learning either with other students'
learning (norm-referenced) or with the standard for a grade level (criterion-referenced).

Source: adapted from Ruth Sutton, unpublished document, 2001, in Alberta Assessment
Consortium
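The norm-referenced versus criterion-referenced contrast in the last point above can be sketched in code. The class scores and the 60% mastery cut-off below are illustrative assumptions, not values taken from the text.

```python
# Two ways of interpreting the same summative score.

def criterion_referenced(score, cutoff=60):
    """Judge the score against a fixed standard for the grade level."""
    return "mastery" if score >= cutoff else "not yet mastered"

def norm_referenced(score, class_scores):
    """Locate the score relative to peers: percentage of the class scoring below it."""
    below = sum(1 for s in class_scores if s < score)
    return round(100 * below / len(class_scores), 1)

class_scores = [35, 48, 52, 57, 61, 66, 72, 80, 88, 93]
print(criterion_referenced(57))           # not yet mastered (below the 60 cutoff)
print(norm_referenced(57, class_scores))  # 30.0 (scored above 30% of the class)
```

The same score of 57 thus yields two different statements: it falls short of a fixed standard, yet still places the student above some portion of the class.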
c) Assessment as Learning
Assessment as learning means to use assessment to develop and support students'
metacognitive skills. This form of assessment is crucial in helping students become
lifelong learners. As students engage in peer and self-assessment, they learn to make
sense of information, relate it to prior knowledge and use it for new learning. Students
develop a sense of efficacy and critical thinking when they use teacher, peer and self-
assessment feedback to make adjustments, improvements and changes to what they
understand.

Garrison, C., & Ehringhaus, M. (2007). Defining Formative and Summative Assessment.
http://www.education.vic.gov.au/images/content/studentlearning/forofas.jpg
Self Assessment: 'Formative assessment results in an improved teaching-learning
process.' Comment on the statement and give arguments to support your response.

Characteristics of Classroom Assessment


1. Effective assessment of student learning begins with educational goals.
Assessment is not an end in itself but a vehicle for educational improvement. Its
effective practice, then, begins with and enacts a vision of the kinds of learning we most
value for students and strive to help them achieve. Educational values/ goals should drive
not only what we choose to assess but also how we do so. Where questions about
educational mission and values are skipped over, assessment threatens to be an exercise
in measuring what's easy, rather than a process of improving what we really care about.

2. Assessment is most effective when it reflects an understanding of learning as
multidimensional, integrated, and revealed in performance over time.
Learning is a complex process. It entails not only what students know but what they can
do with what they know; it involves not only knowledge and abilities but values,
attitudes, and habits of mind that affect both academic success and performance beyond
the classroom. Assessment should reflect these understandings by employing a diverse
array of methods, including those that call for actual performance, using them over time
so as to reveal change, growth, and increasing degrees of integration. Such an approach
aims for a more complete and accurate picture of learning and, therefore, a firmer base
for improving our students' educational experience.
3. Assessment works best when it has clear, explicitly stated purposes.
Assessment is a goal-oriented process. It entails comparing educational performance with
educational purposes and expectations -- those derived from the institution's mission,
from faculty intentions in program and course design, and from knowledge of students'
own goals. Where program purposes lack specificity or agreement, assessment as a
process pushes a campus towards clarity about where to aim and what standards to apply;
assessment also prompts attention to where and how program goals will be taught and
learned. Clear, shared, implementable goals are the cornerstone for assessment that is
focused and useful.

4. Assessment requires attention to outcomes but also and equally to the experiences
that lead to those outcomes.
Information about outcomes is of high importance; where students "end up" matters
greatly. But to improve outcomes, we need to know about student experience along the
way -- about the curricula, teaching, and kind of student effort that lead to particular
outcomes. Assessment can help us understand which students learn best under what
conditions; with such knowledge comes the capacity to improve the whole of their
learning.

5. Assessment works best when it is ongoing, not episodic.


Assessment is a process whose power is cumulative. Though isolated, "one-shot"
assessment can be better than none, improvement is best fostered when assessment
entails a linked series of activities undertaken over time. This may mean tracking the
process of individual students, or of cohorts of students; it may mean collecting the same
examples of student performance or using the same instrument semester after semester.
The point is to monitor progress towards intended goals in a spirit of continuous
improvement. Along the way, the assessment process itself should be evaluated and
refined in light of emerging insights.

6. Assessment is effective when representatives from across the educational community
are involved.
Student education is a campus-wide responsibility, and assessment is a way of enacting
that responsibility. Thus, while assessment efforts may start small, the aim over time is to
involve people from across the educational community. Faculty play an important role,
but assessment's questions can't be fully addressed without participation by other
educators, librarians, administrators, and students. Assessment may also involve
individuals from beyond the campus (alumni/ae, trustees, employers) whose experience
can enrich the sense of appropriate aims and standards for learning. Thus understood,
assessment is not a task for small groups of experts but a collaborative activity; its aim is
wider, better-informed attention to student learning by all parties with a stake in its
improvement.
7. Assessment makes a difference when it begins with issues of use and illuminates
questions that people really care about.
Assessment recognizes the value of information in the process of improvement. But to
be useful, information must be connected to issues or questions that people really care
about. This implies assessment approaches that produce evidence that relevant parties
will find credible, suggestive, and applicable to decisions that need to be made. It means
thinking in advance about how the information will be used, and by whom. The point of
assessment is not to collect data and return "results"; it is a process that starts with the
questions of decision-makers, that involves them in the gathering and interpreting of data,
and that informs and helps guide continuous improvement.

8. Through effective assessment, educators meet responsibilities to students and to the
public.
There is a compelling public stake in education. As educators, we have a responsibility to
the public that supports or depends on us to provide information about the ways in which
our students meet goals and expectations. But that responsibility goes beyond the
reporting of such information; our deeper obligation -- to ourselves, our students, and
society -- is to improve. Those to whom educators are accountable have a corresponding
obligation to support such attempts at improvement. (American Association for Higher
Education, 2003)

Activity 1.2: Effective assessment involves representatives from across the educational
community. Discuss.

Role of Assessment
"Teaching and learning are reciprocal processes that depend on and affect one another.
Thus, the assessment component deals with how well the students are learning and how
well the teacher is teaching" (Kellough and Kellough, 1999).

Assessment does more than allocate a grade or degree classification to students – it plays
an important role in focusing their attention and, as Sainsbury & Walker (2007) observe,
actually drives their learning. Gibbs (2003) states that assessment has six main functions:
1. Capturing student time and attention
2. Generating appropriate student learning activity
3. Providing timely feedback to which students pay attention
4. Helping students to internalize the discipline's standards and notions of quality
5. Generating marks or grades which distinguish between students
6. Providing evidence for others outside the course to enable them to judge the
appropriateness of standards on the course.

Surgenor (2010) summarized the role of assessment in learning in the following points.
 It fulfills student expectations
 It is used to motivate students
 It provides opportunities to remedy mistakes
 It indicates readiness for progression
 It serves as a diagnostic tool
 It enables grading and degree classification
 It works as a performance indicator for students
 It is used as a performance indicator for teachers
 It is also a performance indicator for the institution
 It facilitates learning in one way or another.

Activity 1.3: List the different roles of formative and summative assessment in the
teaching-learning process.

Principles of Classroom Assessment


Hamidi (2010) described the following principles of classroom assessment.

1. Assessment should be formative


Classroom assessment should be carried out regularly in order to inform ongoing
teaching and learning. It should be formative because it refers to the formation of a
concept or process. To be formative, assessment is concerned with the way the student
develops, or forms; so it should be for learning. In other words, it has a crucial role in
"informing the teacher about how much the learners as a group, and how much
individuals within that group, have understood about what has been learned or still needs
learning, as well as the suitability of their classroom activities, thus providing feedback on
their teaching and informing planning." Teachers use it to see how far learners have
mastered what they should have learned. So classroom assessment needs to reach its full
formative potential if a teacher is to be truly effective in teaching.
2. Assessment should determine planning
Classroom assessment should help teachers plan for future work. First, teachers should
identify the purposes for assessment – that is, specify the kinds of decisions teachers want
to make as a result of assessment. Second, they should gather information related to the
decisions they have made. Next, they interpret the collected information—that is, it must
be contextualized before it is meaningful. Finally, they should make the final, or the
professional, decisions. The plans present a means for realizing instructional objectives
which are put into practice as classroom assessment to achieve the actual outcomes.

3. Assessment should serve teaching


Classroom assessment serves teaching by providing feedback on pupils' learning that
makes the next teaching event more effective, in a positive, upward direction.
Therefore, assessment must be an integral part of instruction. Assessment seems to drive
teaching by forcing teachers to teach what is going to be assessed. Teaching involves
assessment; that is, whenever a student responds to a question, offers a comment, or tries
out a new word or structure, the teacher subconsciously makes an assessment of the
student’s performance. So when they are teaching, they are also assessing. A good
teacher never ceases to assess students, whether those assessments are incidental or
intended.

4. Assessment should serve learning.


Classroom assessment is an integral part of learning process as well. The ways in which
learners are assessed and evaluated strongly affect the ways they study and learn. It is the
process of finding out who the students are, what their abilities are, what they need to
know, and how they perceive the learning will affect them. In assessment, the learner is
simply informed how well or badly he/she has performed. It can spur learners to set goals
for themselves. Assessment and learning are seen as inextricably linked and not separate
processes because of their mutually-influenced features. Learning by itself has no
meaning without assessment and vice-versa.

5. Assessment should be curriculum-driven


Classroom assessment should be the servant, not the master, of the curriculum.
Assessment specialists view it as an integral part of the entire curriculum cycle.
Therefore, decisions about how to assess students must be considered from the very
beginning of curriculum design or course planning.

6. Assessment should be interactive


Students should be proactive in selecting the content for assessment. It provides a context
for learning as meaning and purpose for learning and engages students in social
interaction to develop oral and written language and social skills. Assessment and
learning are inextricably linked, not separate, processes. Effective assessment is not a
process carried out by one person, such as a teacher, on another, a learner; it is a
two-way process involving interaction between both parties. Assessment, then, should be
viewed as an interactive process that engages both teacher and student in monitoring the
student's performance.

7. Assessment should be student-centered


Since learner-centered methods of instruction are principally concerned with learner
needs, students are encouraged to take more responsibility for their own learning and to
choose their own learning goals and projects. Therefore, in learner-centered assessment,
they are actively involved in the process of assessment. Involving learners in aspects of
classroom assessment minimizes learning anxiety and results in greater student
motivation.

8. Assessment should be diagnostic


Classroom assessment is diagnostic because teachers use it to find out learners' strengths
and weaknesses during in-progress class instruction. They also identify learning
difficulties. If the purpose of assessment is to provide diagnostic feedback, then this
feedback needs to be provided in a form, either verbal or written, that is easy for learners
to understand and use.

9. Assessment should be exposed to learners


Teachers are supposed to give learners accurate information about assessment. In
other words, assessment should be transparent to learners. They must know when the
assessments occur, what they cover in terms of skills and materials, how much the
assessments are worth, when they can get their results, and how the results are going to
be used. They must also be aware of why they are assessed, because they are part of the
assessment process. Because assessment is part of the learning process, it should be done
with learners, not to them. It is also important to provide an assessment schedule before
instruction begins.

10.Assessment should be non-judgmental


In classroom assessment, everything focuses on learning, which results from a number of
factors such as student needs, student motivation, teaching style, time on task, study
intensity, background knowledge, course objectives, etc. So there is no praise or blame
for a particular outcome of learning. Teachers should take no stance on determining who
has done better and who has failed to perform well. Assessment should allow students
reasonable opportunities to demonstrate their expertise without confronting barriers.

11.Assessment should develop a mutual understanding


Mutual understanding occurs when two people come to a similar sense of reality. In
second language learning, this understanding calls for a linguistic environment in which
the teacher and students interact with each other based on the assessment objectives.
Assessment can therefore create a shared picture by having individuals exchange
thoughts that are helpful in the learning process. When learning occurs, it is certainly the
result of common understanding between the teacher and students.

12.Assessment should lead to learner's autonomy


Autonomy is a principle by which students come to a state of making their own decisions
in language learning. They assume a maximum amount of responsibility for what they
learn and how they learn it. Autonomous learning occurs when students have made a
transition from teacher assessment to self-assessment. This requires that teachers
encourage students to reflect on their own learning, to assess their own strengths and
weaknesses, and to identify their own goals for learning. Teachers also need to help
students develop their self-regulating and metacognitive strategies. Autonomy is a
construct to be fostered in students by teachers, not taught.

13.Assessment should involve reflective teaching


Reflective teaching is an approach to instruction in which teachers are supposed to
develop their understanding of teaching quality based on data and information collected
through critical reflection on their teaching experiences. This information can be
gathered through formative assessment (i.e., using different methods and tools such as
class quizzes, questionnaires, surveys, field notes, feedback from peers, classroom
ethnographies, observation notes, etc.) and summative assessment (i.e., different types of
achievement tests taken at the end of the term).
Hamidi (2010). Fundamental Issues in L2 Classroom Assessment Practices. Academic
Leadership: The Online Journal, Volume 8, Issue 2.
http://www.sisd.net/cms/lib/TX01001452/Centricity/Domain/2073/ALJ_ISSN1533-7812_8_2_444.pdf

Self Assessment Questions


 Highlight the role of assessment in the teaching and learning process
 Discuss critically the principles of assessment with the help of relevant examples
 Differentiate between assessment for learning and assessment of learning
References/Suggested Readings

 Catherine Garrison, Dennis Chandler & Michael Ehringhaus (2009). Effective
Classroom Assessment: Linking Assessment with Instruction. NMSA & Measured
Progress Publishers.

 Kathleen Burke (2010). How to Assess Authentic Learning. California: Corwin Press.

 Charles Hopkins (2008). Classroom Measurement and Evaluation. Illinois: Peacock.

 Carolin Gipps (1994). Beyond Testing: Towards a Theory of Educational Assessment.
Routledge Publishers.
UNIT 02
ACHIEVEMENT TESTS
Achievement Tests
Achievement tests are widely used throughout education as a method of assessing and
comparing student performance. They may assess any or all of reading, math, and
written language, as well as subject areas such as science and social studies. These tests
are available for all grade levels and through adulthood. The test procedures are highly
structured so that the testing process is the same for all students who take them.
An achievement test is developed to measure skills and knowledge learned at a given
grade level, usually through planned instruction, such as training or classroom
instruction. Achievement tests are often contrasted with tests that measure aptitude, a
more general and stable cognitive trait.
Achievement test scores are often used in an educational system to determine what level
of instruction for which a student is prepared. High achievement scores usually indicate a
mastery of grade-level material, and the readiness for advanced instruction. Low
achievement scores can indicate the need for remediation or repeating a course grade.
Teachers evaluate students by observing them in the classroom, evaluating their day-to-
day class work, grading their homework assignments, and administering unit tests.
These classroom assessments show the teacher how well a student is mastering grade-
level learning goals and provide information the teacher can use to improve
instruction. Overall, achievement testing serves the following purposes:
 Assess level of competence
 Diagnose strength and weaknesses
 Assign Grades
 Achieve Certification or Promotion
 Advanced Placement/College Credit Exams
 Curriculum Evaluation
 Accountability
 Informational Purposes

(i) Types of Achievement Tests

(a) Summative Evaluation:
Testing is done at the end of the instructional unit. The test score is seen as the
summation of all knowledge learned during a particular subject unit.
(b) Formative Evaluation:
Testing occurs continually alongside learning so that teachers can evaluate the
effectiveness of teaching methods along with the assessment of students' abilities.
(ii) Advantages of Achievement Test:
 One of the main advantages of testing is that it can provide assessments
that are psychometrically valid and reliable, as well as results which are
generalizable and replicable.
 Another advantage is aggregation. A well designed test provides an
assessment of an individual's mastery of a domain of knowledge or skill
which at some level of aggregation will provide useful information. That is,
while individual assessments may not be accurate enough for practical
purposes, the mean scores of classes, schools, branches of a company, or
other groups may well provide useful information because of the reduction of
error accomplished by increasing the sample size.
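The error-reduction point above can be illustrated with a short simulation. This is only a sketch: the function name, the true score of 70, and the error spread of 10 points are illustrative assumptions, not values from the text.

```python
import random
import statistics

def mean_score_spread(group_size, trials=2000, true_score=70.0, noise_sd=10.0):
    """Spread (standard deviation) of the group-mean score across many
    simulated groups, when each student's observed score is the true
    score plus random measurement error."""
    random.seed(42)  # fixed seed so the sketch is reproducible
    means = []
    for _ in range(trials):
        scores = [random.gauss(true_score, noise_sd) for _ in range(group_size)]
        means.append(statistics.mean(scores))
    return statistics.stdev(means)

# The spread of the group mean shrinks roughly as noise_sd / sqrt(group_size),
# so aggregated (class or school) means carry less error than individual scores:
for n in (1, 4, 25, 100):
    print(n, round(mean_score_spread(n), 2))
```

Running this shows the mean of 100 scores varying far less from group to group than any single score does, which is why class- or school-level averages can be informative even when individual measurements are noisy.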

(iii) Designing the Test

Step 1: The first step in constructing an effective achievement test is to identify what you
want students to learn from a unit of instruction. Consider the relative importance of the
objectives and include more questions about the most important learning objectives.
Step 2: Once you have defined the important learning objectives and have, in the light of
these objectives, determined which types of questions and what form of test to use, you
are ready to begin the second step: writing the questions.
Step 3: Finally, review the test. Are the instructions straightforward? Are the selected
learning objectives represented in appropriate proportions? Are the questions carefully
and clearly worded? Special care must be taken not to provide clues to the test-wise
student. Poorly constructed questions may actually measure not knowledge, but test-
taking ability.

(iv) General Principles:


While the different types of questions--multiple choice, fill-in-the-blank or short answer,
true-false, matching, and essay--are constructed differently, the following principles
apply to constructing questions and tests in general.
 Make the instructions for each type of question simple and brief.
 Use simple and clear language in the questions. If the language is difficult, students
who understand the material but who do not have strong language skills may find it
difficult to demonstrate their knowledge. If the language is ambiguous, even a
student with strong language skills may answer incorrectly if his or her interpretation
of the question differs from the instructor's intended meaning.
 Write items that require specific understanding or ability developed in that course, not
just general intelligence or test-wiseness.
 Do not suggest the answer to one question in the body of another question. This makes
the test less useful, as the test-wise student will have an advantage over the student
who has an equal grasp of the material, but who has less skill at taking tests.
 Do not write questions in the negative. If you must use negatives, highlight them, as
they may mislead students into answering incorrectly.
 Specify the units and precision of answers. For example, will you accept numerical
answers that are rounded to the nearest integer?

(v) Interpreting the Test Results:


If you have carefully constructed an achievement test using the above principles, you can
be confident that the test will provide useful information about the students' knowledge of
the learning objectives. Considering the questions relating to the various learning
objectives as separate subtests, you can develop a profile of each student's knowledge of
or skill in the objectives. The scores of the subtests can be a useful supplement to the
overall test score, as they can help you identify specific areas which may need attention.
A carefully-constructed achievement test can, by helping you know what your students
are learning, help you to teach more effectively and, ultimately, help the students to
master more of the objectives.

Activity 3.1: Prepare an achievement test on content to be taught in any subject,
following the steps above, and discuss it with your course mates.

Aptitude Tests
Aptitude tests assume that individuals have inherent strengths and weaknesses, and are
naturally inclined toward success or failure in certain areas based on their inherent
characteristics.
Aptitude tests determine a person's ability to learn a given set of information. They do not
test a person's knowledge of existing information. The best way to prepare for aptitude
tests is to take practice tests.

Aptitude and ability tests are designed to assess logical reasoning or thinking
performance. They consist of multiple choice questions and are administered under exam
conditions. They are strictly timed and a typical test might allow 30 minutes for 30 or so
questions. Your test result will be compared with that of a control group so that
judgments can be made about your abilities.
You may be asked to answer the questions either on paper or online. The advantages of
online testing include immediate availability of results and the fact that the test can be
taken at employment agency premises or even at home. This makes online testing
particularly suitable for initial screening as it is obviously very cost-effective.

(i) Types of Aptitude Test


The following is a list of the different types of aptitude test used in the assessment
process.

(a) Critical Thinking


Critical thinking is defined as a form of reflective reasoning which analyses and evaluates
information and arguments by applying a range of intellectual skills in order to reach
clear, logical and coherent judgments within a given context. Critical thinking tests force
candidates to analyse and evaluate short passages of written information and make
deductions to form answers.

(b)Numerical Reasoning Tests


Numerical tests, sometimes known as numerical reasoning, are used during the
application process at all major investment banks and accountancy & professional
services firms. Test can be either written or taken online. The tests are usually provided
by a third party.

(c) Perceptual Speed Tests

Perceptual speed is the ability to quickly and accurately compare letters, numbers,
objects, pictures, or patterns. In tests of perceptual speed the things to be compared may
be presented at the same time or one after the other. Candidates may also be asked to
compare a presented object with a remembered object.

(d) Spatial Visualization Tests

Spatial visualization ability, or visual-spatial ability, refers to the ability to mentally
manipulate 2-dimensional and 3-dimensional figures. It is typically measured with simple
cognitive tests and is predictive of user performance with some kinds of user interfaces.

(e) Logical Reasoning Tests

Logical reasoning aptitude tests (also known as critical reasoning tests) may be either
verbal (word based, e.g. "Verbal Logical Reasoning"), numerical (number based, e.g.
"Numerical Logical Reasoning") or diagrammatic (picture based; see diagrammatic tests
for more information).

(f) Verbal Reasoning Tests

Verbal reasoning tests are a form of aptitude test used by interviewers to find out how
well a candidate can assess verbal logic. In a verbal reasoning test, you are typically
provided with a passage, or several passages, of information and required to evaluate a
set of statements by selecting one of the following possible answers.

(ii) Value of Aptitude Tests


Aptitude tests tell us what a student brings to the task regardless of the specific
curriculum that the student has already experienced. The difference between aptitude and
achievement tests is sometimes a matter of degree. Some aptitude and achievement tests
look a lot alike. In fact, the higher a student goes in levels of education, the more the
content of aptitude tests resembles achievement tests. This is because the knowledge that
a student has already accumulated is a good predictor of success at advanced levels.
In addition, group aptitude tests--usually given as part of a group achievement battery of
tests--can be given quickly and inexpensively to large numbers of children. Children who
obtain extreme scores can be easily identified to receive further specialized attention.
Aptitude tests are valuable in making program and curricula decisions.
 They are excellent predictors of future scholastic achievement.
 They provide ways of comparing a child's performance with that of other children in
the same situation.
 They provide a profile of strengths and weaknesses.
 They assess differences among individuals.
 They have uncovered hidden talents in some children, thus improving their
educational opportunities.
 They are valuable tools for working with handicapped children.
(iii) How can we use aptitude test results?
In general, aptitude test results have three major uses:

(a) Instructional
Teachers can use aptitude test results to adapt their curricula to match the level of their
students, or to design assignments for students who differ widely. Aptitude test scores
can also help teachers form realistic expectations of students. Knowing something about
the aptitude level of students in a given class can help a teacher identify which students
are not learning as much as could be predicted on the basis of aptitude scores. For
instance, if a whole class were performing less well than would be predicted from
aptitude test results, then curriculum, objectives, teaching methods, or student
characteristics might be investigated.

(b) Administrative
Aptitude test scores can identify the general aptitude level of a high school, for example.
This can be helpful in determining how much emphasis should be given to college
preparatory programs. Aptitude tests can be used to help identify students to be
accelerated or given extra attention, for grouping, and in predicting job training
performance.

(c) Guidance
Guidance counselors use aptitude tests to help parents develop realistic expectations for
their child's school performance and to help students understand their own strengths and
weaknesses.

Activity 3.2: Discuss with your course mates their aptitudes toward the teaching
profession and analyze their opinions.

Attitude
Attitude is a posture, action or disposition of a figure or a statue; in psychology, it is a
mental and neural state of readiness, organized through experience, exerting a directive
or dynamic influence upon the individual's response to all objects and situations with
which it is related.
Attitude is the state of mind with which you approach a task, a challenge, a person, love,
or life in general. Attitude has been defined as "a complex mental state involving beliefs
and feelings and values and dispositions to act in certain ways". These beliefs and
feelings differ because different people interpret the same events differently, and these
differences arise from the inherited characteristics mentioned earlier.
(i) Components of Attitude
1. Cognitive Component:
This part of attitude relates to a person's general knowledge and beliefs, for
example, the belief that smoking is injurious to health. Such an idea is
called the cognitive component of attitude.
2. Affective Component:
This part of attitude relates to feelings that affect another person. For
example, in an organization a personnel report is given to the general manager. In
the report he points out that the sales staff are not performing their due
responsibilities. The general manager forwards a written notice to the marketing
manager to negotiate with the sales staff.
3. Behavioral Component:
The behavioral component refers to that part of attitude which reflects the
intention of a person in the short run or long run. For example, before producing
and launching a product, a report is prepared by the production department
covering intentions in the near future and the long run, and this report is handed
over to top management for decision.

(ii) List of Attitude:


In the broader sense of the word there are only three attitudes: a positive attitude, a
negative attitude, and a neutral attitude. But in a general sense, an attitude is what it is
expressed through. Given below is a list of attitudes that are expressed by people, more
than the personality traits which you may have heard of, know of, or might even be
carrying:
 Acceptance
 Confidence
 Seriousness
 Optimism
 Interest
 Cooperation
 Happiness
 Respect
 Authority
 Sincerity
 Honesty
Activity 3.3: Develop an attitude scale for analyzing the factors motivating
prospective teachers to join the teaching profession.

Intelligence Tests
Intelligence involves the ability to think, solve problems, analyze situations, and
understand social values, customs, and norms. Two main forms of intelligence are
involved in most intelligence assessments:
 Verbal Intelligence is the ability to comprehend and solve language-based problems;
and
 Nonverbal Intelligence is the ability to understand and solve visual and spatial
problems.
Intelligence is sometimes referred to as intelligence quotient (IQ), cognitive functioning,
intellectual ability, aptitude, thinking skills and general ability.
Intelligence tests are psychological tests that are designed to measure a variety of
mental functions, such as reasoning, comprehension, and judgment.
An intelligence test is often defined as a measure of general mental ability. Of the
standardized intelligence tests, those developed by David Wechsler are among those most
widely used. Wechsler defined intelligence as “the global capacity to act purposefully, to
think rationally, and to deal effectively with the environment.” While psychologists
generally agree with this definition, they don't agree on the operational definition of
intelligence (that is, a statement of the procedures to be used to precisely define the
variable to be measured) or how to accomplish its measurement.
The goal of intelligence tests is to obtain an idea of the person's intellectual potential. The
tests center around a set of stimuli designed to yield a score based on the test maker's
model of what makes up intelligence. Intelligence tests are often given as a part of a
battery of tests.

(i) Types of Intelligence Tests


Intelligence tests (also called instruments) are published in several forms:
(a) Group Intelligence tests usually consist of a paper test booklet and
scanned scoring sheets. Group achievement tests, which assess academic
areas, sometimes include a cognitive measure. In general, group tests are
not recommended for the purpose of identifying a child with a disability.
In some cases, however, they can be helpful as a screening measure to
consider whether further testing is needed and can provide good
background information on a child's academic history.
(b) Individual intelligence tests may include several types of tasks.
(c) Computerized tests are becoming more widely available, but as with all
tests, examiners must consider the needs of the child before choosing this
format.
(d) Verbal tests evaluate your ability to spell words correctly, use correct
grammar, understand analogies and analyze detailed written information.
Because they depend on understanding the precise meaning of words,
idioms and the structure of the language, they discriminate strongly in
favour of native speakers of the language in which the test has been
developed. If you speak English as a second language, even if this is at a
high standard, you will be significantly disadvantaged in these tests.
There are two distinct types of verbal ability questions, those dealing
with spelling, grammar and word meanings, and those that try to measure
your comprehension and reasoning abilities. Questions about spelling,
grammar and word meanings are speed tests in that they don’t require
very much reasoning ability. You either know the answer or you don’t.
(e) Non-verbal tests are composed of a variety of item types, including
series completion, codes and analogies. However, unlike verbal
reasoning tests, none of the question types requires learned knowledge
for its solution. In an educational context, these tests are typically used
as an indication of a pupil’s ability to understand and assimilate novel
information independently of language skills. Scores on these tests can
indicate a pupil’s ability to learn new material in a wide range of school
subjects based on their current levels of functioning.

(ii) Advantages
In general, intelligence tests measure a wide variety of human behaviours better than any
other measure that has been developed. They allow professionals to have a uniform way
of comparing a person's performance with that of other people who are similar in age.
These tests also provide information on cultural and biological differences among people.
Intelligence tests are excellent predictors of academic achievement and provide an outline
of a person's mental strengths and weaknesses. Many times the scores have revealed
talents in many people, which have led to an improvement in their educational
opportunities. Teachers, parents, and psychologists are able to devise individual curricula
that match a person's level of development and expectations.

(iii) Disadvantages
Some researchers argue that intelligence tests have serious shortcomings. For example,
many intelligence tests produce a single intelligence score. This single score is often
inadequate in explaining the multidimensional nature of intelligence.
Another problem with a single score is the fact that individuals with similar intelligence
test scores can vary greatly in their expression of these talents. It is important to know the
person's performance on the various subtests that make up the overall intelligence test
score. Knowing the performance on these various scales can influence the understanding
of a person's abilities and how these abilities are expressed. For example, two people
have identical scores on intelligence tests. Although both people have the same test score,
one person may have obtained the score because of strong verbal skills while the other
may have obtained the score because of strong skills in perceiving and organizing various
tasks.
Furthermore, intelligence tests only measure a sample of behaviors or situations in which
intelligent behavior is revealed. For instance, some intelligence tests do not measure a
person's everyday functioning, social knowledge, mechanical skills, and/or creativity.
Along with this, the formats of many intelligence tests do not capture the complexity and
immediacy of real-life situations. Therefore, intelligence tests have been criticized for
their limited ability to predict non-test or nonacademic intellectual abilities. Since
intelligence test scores can be influenced by a variety of different experiences and
behaviors, they should not be considered a perfect indicator of a person's intellectual
potential.
Activity 3.4:

Discuss intelligence testing with your course mates, identify the methods used to
measure intelligence, and make a list of problems in measuring intelligence.

Personality Tests
Your personality is what makes you who you are. It's that organized set of unique traits
and characteristics that makes you different from every other person in the world. Not
only does your personality make you special, it makes you you.
“The particular pattern of behavior and thinking that prevails across
time and contexts, and differentiates one person from another.”

The goal of psychologists is to understand the causes of individual differences in


behavior. In order to do this one must first identify personality characteristics (often
called personality traits), and then determine the variables that produce and control them.
A personality trait is assumed to be some enduring characteristic that is relatively
constant as opposed to the present temperament of that person which is not necessarily a
stable characteristic. Consequently, trait theories are specifically focused on explaining
the more permanent personality characteristics that differentiate one individual from
another. For example, things like being; dependable, trustworthy, friendly, cheerful, etc.

A personality test is completed to yield a description of an individual’s distinct


personality traits. In most instances, your personality will influence relationships with
your family, friends, and classmates and contribute to your health and well being.
Teachers can administer a personality test in class to help students discover their
strengths and developmental needs. The driving force behind administering a
personality test is to open up lines of communication and bring students together to have
a higher appreciation for one another. A personality test can provide guidance to
teachers of what teaching strategies will be the most effective for their students.
Briefly, a personality test can benefit your students by:

 Increasing productivity
 Improving relationships with classmates
 Helping students realize their full potential
 Identifying effective teaching strategies for students
 Helping students appreciate other personality types.

(i) Types of Personality Tests


Personality tests are used to determine your type of personality, your values, interests
and your skills. They can be used to simply assess what type of person you are or, more
specifically, to determine your aptitude for a certain type of occupation or career.
There are many different types of personality tests such as self report inventory, Likert
scale and projective tests.

(a) Self-report Inventory


A self-report inventory is a type of psychological test often used in personality
assessment. This type of test is often presented in a paper-and-pencil format or may even
be administered on a computer. A typical self report inventory presents a number of
questions or statements that may or may not describe certain qualities or characteristics of
the test subject.
Chances are good that you have taken a self-report inventory at some time in the past. Such
questionnaires are often seen in doctors’ offices, in on-line personality tests and in market
research surveys. This type of survey can be used to look at your current behaviors, past
behaviors and possible behaviors in hypothetical situations.

(i) Strengths and Weaknesses of Self-Report Inventories


Self-report inventories are often a good solution when researchers need to administer a
large number of tests in a relatively short space of time. Many self-report inventories can be
completed very quickly, often in as little as 15 minutes. This type of questionnaire is an
affordable option for researchers faced with tight budgets.
Another strength is that the results of self-report inventories are generally reliable and
valid. Scoring of the tests is standardized and based on norms that have been
previously established.
However, self-report inventories do have their weaknesses; for example, people are able to
distort their answers to present themselves favourably.
Another weakness is that some tests are very long and tedious. For example, the MMPI
takes approximately 3 hours to complete. In some cases, test respondents may simply
lose interest and not answer questions accurately. Additionally, people are sometimes not
the best judges of their own behavior. Some individuals may try to hide their own
feelings, thoughts and attitudes.

Selection Type Items (objective type)


There are four types of test items in selection category of test which are in common use
today. They are multiple-choice, matching, true-false, and completion items.

Multiple Choice Questions


Multiple-choice test items consist of a stem (a question or incomplete statement) and
three or more alternative answers (options), with the correct answer sometimes called the
keyed response and the incorrect answers called distracters. The direct-question form of
stem is generally better than the incomplete-statement form because it is simpler and more natural.
Gronlund (1995) writes that the multiple choice question is probably the most popular
as well as the most widely applicable and effective type of objective test. Student selects
a single response from a list of options. It can be used effectively for any level of course
outcome. It consists of two parts: the stem, which states the problem and a list of three to
five alternatives, one of which is the correct (key) answer and the others are distracters
(incorrect options that draw the less knowledgeable pupil away from the correct
response). Multiple choice questions consist of three obligatory parts:
1. The question ("body of the question")
2. The correct answer ("the key of the question")
3. Several incorrect alternatives (the so-called "distracters")
and one optional part (especially valuable in self-assessment):
4. Feedback comment on the student's answer.

The stem may be stated as a direct question or as an incomplete statement. For example:

Direct question
Which is the capital city of Pakistan?------------------------(Stem)
A. Paris. --------------------------------------- (Distracter)
B. Lisbon. -------------------------------------- (Distracter)
C. Islamabad. ---------------------------------- (Key)
D. Rome. --------------------------------------- (Distracter)
Multiple choice questions are composed of one question with multiple possible answers
(options), including the correct answer and several incorrect answers (distracters). Typically,
students select the correct answer by circling the associated number or letter, or filling in the
associated circle on the machine-readable response sheet. Students can generally respond to
these types of questions quite quickly. As a result, they are often used to test students'
knowledge of a broad range of content. Creating these questions can be time consuming
because it is often difficult to generate several plausible distracters. However, they can be
marked very quickly.
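The fast, objective marking described above amounts to comparing each selected option against the keyed response. A minimal sketch (the function name, item numbers, and answer key here are hypothetical, not from the text):

```python
def score_mcq(answer_key, responses):
    """One point per item where the student's selected option matches the
    keyed response; unanswered items earn nothing."""
    return sum(1 for item, key in answer_key.items()
               if responses.get(item) == key)

# Hypothetical five-item test (item number -> keyed option):
key = {1: "C", 2: "A", 3: "D", 4: "B", 5: "C"}
student = {1: "C", 2: "A", 3: "B", 4: "B"}  # item 5 left blank
print(score_mcq(key, student))  # 3
```

Because the comparison is mechanical, the same responses always yield the same score, which is the objectivity advantage the text returns to under Reliability.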

Multiple Choice Questions Good for:


 Application, synthesis, analysis, and evaluation levels
RULES FOR WRITING MULTIPLE-CHOICE QUESTIONS
There are several rules we can follow to improve the quality of this type of written
examination.

1. Examine only the Important Facts!


Make sure that every question examines only the important knowledge. Avoid detailed
questions - each question has to be relevant for the previously set instructional goals of
the course.

2. Use Simple Language!


Use simple language, taking care of spelling and grammar. Spelling and grammar
mistakes (unless you are testing spelling or grammar) only confuse students. Remember
that you are examining knowledge about your subject and not language skills.

3. Make the Questions Brief and Clear!


Clear the text of the body of the question from all superfluous words and irrelevant
content. It helps students to understand exactly what is expected of them. It is desirable to
formulate a question in such way that the main part of the text is in the body of the
question, without being repeated in the answers.

4. Form the Questions Correctly!


Be careful that the formulation of the question does not (indirectly) hide the key to the
correct answer. Students adept at solving tests will recognize such clues easily and will
find the right answer from word combinations, grammar, etc., and not from their real
knowledge.
5. Take into Consideration the Independence of Questions!
Be careful not to repeat content and terms related to the same theme, since the answer to
one question can become the key to solve another.

6. Offer Uniform Answers!


All offered answers should be uniform, clear and realistic. For example, an implausible
answer, or uneven text length across the answers, can point to the right one; such a
question does not test real knowledge. The position of the key should be random. If the
answers are numbers, they should be listed in ascending order.

7. Avoid Asking Negative Questions!


If you use negative questions, negation must be emphasized by using CAPITAL letters,
e.g. "Which of the following IS NOT correct..." or "All of the following statements are
true, EXCEPT...".

8. Avoid Distracters in the Form of "All the answers are correct" or "None of the
Answers is Correct"!
Teachers use these statements most frequently when they run out of ideas for distracters.
Students, knowing what is behind such questions, are rarely misled by it. Therefore, if
you do use such statements, sometimes use them as the key answer. Furthermore, if a
student recognizes that there are two correct answers (out of 5 options), they will be able
to conclude that the key answer is the statement "all the answers are correct", without
knowing the accuracy of the other distracters.

9. Distracters must be Significantly Different from the Right Answer (key)!


Distracters which only slightly differ from the key answer are bad distracters. Good or
strong distracters are statements which themselves seem correct, but are not the correct
answer to a particular question.

10. Offer an Appropriate Number of Distracters.


The greater the number of distracters, the lesser the possibility that a student could guess
the right answer (key). In higher education tests questions with 5 answers are used most
often (1 key + 4 distracters). That means that a student is 20% likely to guess the right
answer.
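The 20% figure follows directly from the number of options: a blind guess succeeds with probability 1 divided by the option count. A tiny illustrative calculation (the function name is a hypothetical label):

```python
from fractions import Fraction

def blind_guess_probability(num_options):
    """Chance of hitting the keyed answer with no knowledge of the content:
    1 divided by the total number of options (key + distracters)."""
    return Fraction(1, num_options)

# 1 key + 4 distracters = 5 options -> a 20% chance per item:
print(float(blind_guess_probability(5)))  # 0.2
# Dropping to 4 options raises the chance to 25%:
print(float(blind_guess_probability(4)))  # 0.25
```

This is why adding distracters (provided they are plausible) lowers the payoff of guessing.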
Multiple-choice test items are not a panacea. They have advantages and disadvantages just
as any other type of test item. Teachers need to be aware of these characteristics in order
to use multiple-choice items effectively.

Advantages
Versatility

Multiple-choice test items are appropriate for use in many different subject-matter areas,
and can be used to measure a great variety of educational objectives. They are adaptable
to various levels of learning outcomes, from simple recall of knowledge to more complex
levels, such as the student’s ability to:
• Analyze phenomena
• Apply principles to new situations
• Comprehend concepts and principles
• Discriminate between fact and opinion
• Interpret cause-and-effect relationships
• Interpret charts and graphs
• Judge the relevance of information
• Make inferences from given data
• Solve problems
The difficulty of multiple-choice items can be controlled by changing the alternatives,
since the more homogeneous the alternatives, the finer the distinction the students must
make in order to identify the correct answer. Multiple-choice items are amenable to item
analysis, which enables the teacher to improve the item by replacing distracters that are
not functioning properly. In addition, the distracters chosen by the student may be used to
diagnose misconceptions of the student or weaknesses in the teacher’s instruction.

Validity
In general, it takes much longer to respond to an essay test question than it does to
respond to a multiple-choice test item, since the composing and recording of an essay
answer is such a slow process. A student is therefore able to answer many multiple-choice
items in the time it would take to answer a single essay question. This feature enables
the teacher using multiple-choice items to test a broader sample of course contents in a
given amount of testing time. Consequently, the test scores will likely be more
representative of the students’ overall achievement in the course.
Reliability
Well-written multiple-choice test items compare favourably with other test item types on
the issue of reliability. They are less susceptible to guessing than are true-false test items,
and therefore capable of producing more reliable scores. Their scoring is more clear-cut
than short answer test item scoring because there are no misspelled or partial answers to
deal with. Since multiple-choice items are objectively scored, they are not affected by
scorer inconsistencies as are essay questions, and they are essentially immune to the
influence of bluffing and writing ability factors, both of which can lower the reliability of
essay test scores.

Efficiency
Multiple-choice items are amenable to rapid scoring, which is often done by scoring
machines. This expedites the reporting of test results to the student so that any follow-up
clarification of instruction may be done before the course has proceeded much further.
Essay questions, on the other hand, must be graded manually, one at a time. Overall
multiple choice tests are:
 Very effective
 Versatile at all levels
 Minimum of writing for student
 Guessing reduced
 Can cover broad range of content

Disadvantages
Versatility

Since the student selects a response from a list of alternatives rather than supplying or
constructing a response, multiple-choice test items are not adaptable to measuring certain
learning outcomes, such as the student’s ability to:
• Articulate explanations
• Display thought processes
• Furnish information
• Organize personal thoughts
• Perform a specific task
• Produce original ideas
• Provide examples
Reliability
Although they are less susceptible to guessing than are true false-test items, multiple-
choice items are still affected to a certain extent. This guessing factor reduces the
reliability of multiple-choice item scores somewhat, but increasing the number of items
on the test offsets this reduction in reliability.
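The claim that adding items offsets the loss of reliability due to guessing can be illustrated with the Spearman-Brown prophecy formula, a standard psychometric result (not covered in this unit). The sketch below assumes the added items are of comparable quality to the existing ones:

```python
def spearman_brown(reliability, length_factor):
    """Predicted reliability when a test is lengthened by length_factor,
    assuming the new items are comparable to the old ones
    (Spearman-Brown prophecy formula)."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# A test with reliability 0.60, doubled in length:
print(round(spearman_brown(0.60, 2), 2))  # 0.75
# Tripled in length:
print(round(spearman_brown(0.60, 3), 2))  # 0.82
```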

Difficulty of Construction
Good multiple-choice test items are generally more difficult and time-consuming to
write than other types of test items. Coming up with plausible distracters requires a
certain amount of skill. This skill, however, may be increased through study, practice,
and experience.
Gronlund (1995) writes that multiple-choice items are difficult to construct. Suitable
distracters are often hard to come by, and the teacher is tempted to fill the void with a
"junk" response, which narrows the range of options available to the test-wise student.
They are also exceedingly time-consuming to fashion; one hour per question is by no
means the exception. Finally, multiple-choice items generally take students longer to
complete (especially items requiring fine discriminations) than do other types of
objective questions.
 Difficult to construct good test items.
 Difficult to come up with plausible distracters/alternative responses.

Activity 4.1: Construct two multiple-choice items in direct-question form and two in
incomplete-statement form, following the rules for multiple-choice items given above.

True/False Questions
A true-false test item requires the student to determine whether a statement is true or
false. The chief disadvantage of this type is the opportunity for successful guessing.
According to Gronlund (1995), an alternative-response test item consists of a declarative
statement that the pupil is asked to mark true or false, right or wrong, correct or
incorrect, yes or no, fact or opinion, agree or disagree, and the like. In each case there
are only two possible answers. Because the true-false option is the most common, this
type is usually referred to as the true-false type. Students make a judgment about the
validity of the statement. It is also known as a "binary-choice" item because there are
only two options to select from. These items are most effective for assessing knowledge,
comprehension, and application outcomes as defined in the cognitive domain of Bloom's
Taxonomy of educational objectives.
Example
Directions: Circle the correct response to the following statements.
1. Allama Iqbal is the founder of Pakistan. T/F
2. Democracy is a system of government for the people. T/F
3. Quaid-e-Azam was the first Prime Minister of Pakistan. T/F

Good for:
 Knowledge level content
 Evaluating student understanding of popular misconceptions
 Concepts with two logical responses

Advantages:
 Easily assess verbal knowledge
 Each item contains only two possible answers
 Easy to construct for the teacher
 Easy to score for the examiner
 Easier for weaker students to attempt
 Can test large amounts of content
 Students can answer 3-4 questions per minute

Disadvantages:
 Because they are so easy to construct, teachers may be tempted to test trivial
content.
 It is difficult to discriminate between students who know the material and those
who do not.
 Students have a 50-50 chance of getting the right answer by guessing, so a large
number of items is needed for high reliability.
 They assess lower-order thinking skills.
 They are a poor representation of students' learning achievement.
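The fifty-percent guessing factor and the need for many items go together: the binomial distribution shows how quickly the chance of a good score by pure guessing shrinks as the test grows. A small sketch (illustrative only; the function name is ours):

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    """Probability of getting at least k of n true-false items
    right by guessing alone (binomial distribution)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance of scoring 60% or better by pure guessing:
print(round(prob_at_least(6, 10), 3))   # 0.377 on a 10-item test
print(round(prob_at_least(30, 50), 3))  # far smaller on a 50-item test
```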

Tips for Writing Good True/False items:


 Avoid double negatives.
 Use only one central idea in each item.
 Don't emphasize the trivial.
 Use exact quantitative language
 Don't lift items straight from the book.
 Make more false than true (60/40). (Students are more likely to answer true.)
 The desired method of marking true or false should be clearly explained before
students begin the test.
 Construct statements that are definitely true or definitely false, without additional
qualifications. If opinion is used, attribute it to some source.

Avoid the following:


a. verbal clauses, absolutes, and complex sentences;
b. broad general statements that are usually not true or false without further
qualifications;
c. terms denoting indefinite degree (e.g., large, long time, or regularly) or absolutes
(e.g., never, only, or always).
d. placing items in a systematic order (e.g., TTFF, TFTF, and so on);
e. taking statements directly from the text and presenting them out of context.

Activity 4.2: Write five true-false items, indicating for each whether it is True (T) or False (F).

Matching items
According to Cunningham (1998), the matching items consist of two parallel columns.
The column on the left contains the questions to be answered, termed premises; the
column on the right, the answers, termed responses. The student is asked to associate
each premise with a response to form a matching pair.
For example;

Column “A” Capital City Column “B” Country


Islamabad Iran
Tehran Spain
Istanbul Portugal
Madrid Pakistan
Jeddah Netherlands
Turkey
West Germany

Matching test items are used to test a student's ability to recognize relationships and to
make associations between terms, parts, words, phrases, clauses, or symbols in one
column with related alternatives in another column. When using this form of test item, it
is a good practice to provide alternatives in the response column that are used more than
once, or not at all, to preclude guessing by elimination. Matching test items may have
either an equal or unequal number of selections in each column.
Matching-Equal Columns. When using this form, providing for some items in the
response column to be used more than once, or not at all, can preclude guessing by
elimination.
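The arithmetic of guessing by elimination makes the point concrete. The sketch below is ours (not from the text) and assumes each response may be used at most once:

```python
def elimination_guess_chance(num_responses, num_known):
    """Chance of guessing one remaining premise correctly after the
    responses already matched with certainty have been eliminated."""
    return 1 / (num_responses - num_known)

# Equal columns (5 premises, 5 responses): knowing 4 gives the 5th for free.
assert elimination_guess_chance(5, 4) == 1.0
# Unequal columns (5 premises, 7 responses): 3 candidates remain.
print(round(elimination_guess_chance(7, 4), 2))  # 0.33
```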

Good for:
 Knowledge level
 Some comprehension level, if appropriately constructed

Types:
 Terms with definitions
 Phrases with other phrases
 Causes with effects
 Parts with larger units
 Problems with solutions
Advantages:
The chief advantage of matching exercises is that a good deal of factual information can
be tested in minimal time, making the tests compact and efficient. They are especially
well suited to who, what, when and where types of subject matter. Further, students
frequently find these tests fun to take because of their puzzle-like quality.
 Maximum coverage at knowledge level in a minimum amount of space/prep time
 Valuable in content areas that have a lot of facts

Disadvantages:
The principal difficulty with matching exercises is that teachers often find that the subject
matter is insufficient in quantity or not well suited for matching terms. An exercise
should be confined to homogeneous items containing one type of subject matter (for
instance, authors-novels; inventions-inventors; major events-dates; terms-definitions;
rules-examples; and the like). Where unlike clusters of questions are used, the adept but
poorly informed student can often recognize the ill-fitting items by their irrelevant and
extraneous nature (for instance, the inclusion of the names of capital cities in a list of
authors).

The student identifies connected items from two lists. Matching is useful for assessing
the ability to discriminate, categorize, and make associations among similar concepts.
 Time consuming for students
 Not good for higher levels of learning

Tips for Writing Good Matching items:


Here are some suggestions for writing matching items:
 Keep both the list of descriptions and the list of options fairly short and
homogeneous – they should both fit on the same page. Title the lists to ensure
homogeneity and arrange the descriptions and options in some logical order. If
this is impossible you’re probably including too wide a variety in the exercise.
Try constructing two or more exercises.
 Make sure that all the options are plausible distracters for each description to
ensure homogeneity of lists.
 The list of descriptions on the left side should contain the longer phrases or
statements, whereas the options on the right side should consist of short phrases,
words or symbols.
 Each description in the list should be numbered (each is an item), and the list of
options should be identified by letter.
 Include more options than descriptions. If the option list is longer than the
description list, it is harder for students to eliminate options. You may include
options that do not match any of the descriptions, or some that match more than
one description, or both.
 In the directions, specify the basis for matching and whether options can be used
more than once.
 Use 15 items or fewer.
 Give good directions on basis for matching.
 Use items in response column more than once (reduces the effects of guessing).
 Make all responses plausible.
 Put all items on a single page.
 Put response in some logical order (chronological, alphabetical, etc.).

Activity 4.3: Keeping in view the nature of matching items, construct a matching
exercise of at least five items on any topic.

Completion Items
Like true-false items, completion items are relatively easy to write; completion tests are
perhaps the first tests classroom teachers construct and students take. As with all other
formats, though, there are good and poor completion items. The student fills in one or
more blanks in a statement, which is why these items are also known as "gap-fillers."
They are most effective for assessing knowledge and comprehension learning outcomes
but can be written for higher-level outcomes. e.g.
The capital city of Pakistan is-----------------.

Suggestions for Writing Completion or Supply Items


Here are our suggestions for writing completion or supply items:
I. If at all possible, items should require a single-word answer or a brief and
definite statement. Avoid statements that are so indefinite that they may be
logically answered by several terms.
a. Poor item:
World War II ended in ________.
b. Better item:
World War II ended in the year ________.
II. Be sure the question or statement poses a problem to the examinee. A direct
question is often more desirable than an incomplete statement because it provides
more structure.
III. Be sure the answer that the student is required to produce is factually correct. Be
sure the language used in the question is precise and accurate in relation to the
subject matter area being tested.
IV. Omit only key words; don’t eliminate so many elements that the sense of the
content is impaired.
a. Poor item:
The ________ type of test item is usually more ________ than the ________
type.
b. Better item:
The supply type of test item is usually graded less objectively than the
________ type.

V. Word the statement such that the blank is near the end of the sentence rather than
near the beginning. This will prevent awkward sentences.
VI. If the problem requires a numerical answer, indicate the units in which it is to be
expressed.

Activity 4.4: Construct five fill-in-the-blank items about Pakistan.

Supply Type Items


The instructor is able to determine a student's level of generalized knowledge of a
subject through the use of supply-type questions. There are four types of test items in
the supply-type category: completion items, short answers, restricted-response items,
and extended-response items (the essay type comprises the restricted and extended
responses).

Short Answer
The student supplies a response to a question; the answer might consist of a single word
or phrase. Short-answer items are most effective for assessing knowledge and
comprehension learning outcomes but can be written for higher-level outcomes. Short
answer items are of two types.
 Simple direct questions
Who was the first president of Pakistan?
 Completion items

The name of the first president of Pakistan is ________.


The items can be answered by a word, phrase, number or symbol. Short-answer tests are
a cross between essay and objective tests: the student must supply the answer, as with an
essay question, but in a highly abbreviated form, as with an objective question.
Good for:
 Application, synthesis, analysis, and evaluation levels

Advantages:
 Easy to construct
 Good for "who," what," where," "when" content
 Minimizes guessing
 Encourages more intensive study: the student must know the answer rather than
merely recognize it.
Gronlund (1995) writes that short-answer items have a number of advantages.
 They reduce the likelihood that a student will guess the correct answer
 They are relatively easy for a teacher to construct.
 They are well adapted to mathematics, the sciences, and foreign languages, where
specific types of knowledge are tested (The formula for ordinary table salt is
________.)
 They are consistent with the Socratic question and answer format frequently
employed in the elementary grades in teaching basic skills.

Disadvantages:
 May overemphasize memorization of facts
 Take care - questions may have more than one correct answer
 Scoring is laborious

According to Gronlund (1995) there are also a number of disadvantages with short-
answer items.
 They are limited to content areas in which a student’s knowledge can be
adequately portrayed by one or two words.
 They are more difficult to score than other types of objective-item tests since
students invariably come up with unanticipated answers that are totally or
partially correct.
 Short answer items usually provide little opportunity for students to synthesize,
evaluate and apply information.

Tips for Writing Good Short Answer Items:


 Use direct questions, not an incomplete statement.
 If you do use incomplete statements, don't use more than 2 blanks within an item.
 Arrange blanks to make scoring easy.
 Try to phrase question so there is only one answer possible.

Activity 4.5: Develop a test of short answers on democracy in Pakistan.

4.2.3 Essay Type Test Items:


Essay questions are supply or constructed response type questions and can be the best
way to measure the students' higher order thinking skills, such as applying, organizing,
synthesizing, integrating, evaluating, or projecting while at the same time providing a
measure of writing skills. The student has to formulate and write a response, which may
be detailed and lengthy. The accuracy and quality of the response are judged by the
teacher.
Essay questions provide a complex prompt that requires written responses, which can
vary in length from a couple of paragraphs to many pages. Like short answer questions,
they provide students with an opportunity to explain their understanding and demonstrate
creativity, but make it hard for students to arrive at an acceptable answer by bluffing.
They can be constructed reasonably quickly and easily but marking these questions can
be time-consuming and grade agreement can be difficult.
Essay questions differ from short answer questions in that the essay questions are less
structured. This openness allows students to demonstrate that they can integrate the
course material in creative ways. As a result, essays are a favoured approach to test
higher levels of cognition including analysis, synthesis and evaluation. However, the
requirement that the students provide most of the structure increases the amount of work
required to respond effectively. Students often take longer to compose a five-paragraph
essay than they would to compose a one-paragraph answer to a short answer question.
Essay items can vary from very lengthy, open-ended end-of-semester term papers or take-
home tests with flexible page limits (e.g. 10-12 pages, no more than 30 pages, etc.) to
essays with responses limited or restricted to one page or less. Essay questions are used
both as formative assessments (in classrooms) and summative assessments (on
standardized tests). There are two major categories of essay questions: short response
(also referred to as restricted or brief) and extended response.
 Restricted Response: more consistent scoring, outlines parameters of responses
 Extended Response Essay Items: synthesis and evaluation levels; a lot of
freedom in answers
A. Restricted Response Essay Items
An essay item that poses a specific problem for which a student must recall proper
information, organize it in a suitable manner, derive a defensible conclusion, and express
it within the limits of the posed problem (or within a page or time limit) is called a
restricted-response essay item. The statement of the problem specifies response
limitations that guide the student in responding and provide evaluation criteria for scoring.

Example 1:
List the major similarities and differences in the lives of people living in Islamabad and
Faisalabad.
Example 2:
Compare advantages and disadvantages of lecture teaching method and demonstration
teaching method.

When Should Restricted Response Essay Items be used?


Restricted Response Essay Items are usually used to:-
 Analyze relationship
 Compare and contrast positions
 State necessary assumptions
 Identify appropriate conclusions
 Explain cause-effect relationship
 Organize data to support a viewpoint
 Evaluate the quality and worth of an item or action
 Integrate data from several sources

B. Extended Response Essay Type Items


An essay-type item that allows the student to determine the length and complexity of the
response is called an extended-response essay item. This type of essay is most useful at
the synthesis and evaluation levels of the cognitive domain. Extended-response items are
used when we are interested in determining whether students can organize, integrate,
express, and evaluate information, ideas, or pieces of knowledge.

Example:
Identify as many different ways to generate electricity in Pakistan as you can. Give the
advantages and disadvantages of each. Your response will be graded on its accuracy,
comprehensiveness and practicality.
Good for:
 Application, synthesis and evaluation levels

Types:
 Extended response: synthesis and evaluation levels; a lot of freedom in answers
 Restricted response: more consistent scoring, outlines parameters of responses

Advantages:
 Students less likely to guess
 Easy to construct
 Stimulates more study
 Allows students to demonstrate ability to organize knowledge, express opinions,
show originality.

Disadvantages:
 Can limit amount of material tested, therefore has decreased validity.
 Subjective, potentially unreliable scoring.
 Time consuming to score.

Tips for Writing Good Essay Items:


 Provide reasonable time limits for thinking and writing.
 Avoid giving students a choice of questions. (You won't get a good picture of the
breadth of student achievement when each student answers only a subset of the
questions.)
 Give a definite task to the student: compare, analyze, evaluate, etc.
 Use a checklist or point system to score against a model answer: write an outline
and determine how many points to assign to each part.
 Score one question at a time across all papers, rather than grading whole papers
one by one.

Activity 4.6: Develop an essay type test on this unit while covering the levels of
knowledge, application and analysis.
Self Assessment Questions:
1. In an area in which you are teaching or plan to teach, identify several learning
outcomes that can best be measured with objective and subjective type
questions.
2. Criticize the different types of selection and supply categories. In your opinion which
type is more appropriate for measuring the achievement level of elementary
students?
3. What factors should be considered in deciding whether subjective or objective type
questions should be included in a classroom tests?
4. Compare the functions of selection and supply types items.
References/Suggested Readings
Airasian, P. (1994). Classroom Assessment (2nd ed.). New York, NY: McGraw-Hill.
American Psychological Association. (1985). Standards for Educational and
Psychological Testing. Washington, DC: American Psychological Association.
Anastasi, A. (1988). Psychological Testing (6th ed.). New York, NY: MacMillan
Publishing Company.
Cangelosi, J. (1990). Designing Tests for Evaluating Student Achievement. New York,
NY: Addison-Wesley.
Cunningham, G.K. (1998). Assessment in the Classroom. Bristol, PA: Falmer Press.
Gronlund, N. (1993). How to Make Achievement Tests and Assessments (5th ed.). New
York, NY: Allyn and Bacon.
Gronlund, N. E. & Linn, R. L. (1995). Measurement and Assessment in Teaching. New
Delhi: Baba Barkha Nath Printers.
Haladyna, T.M. & Downing, S.M. (1989). Validity of a taxonomy of multiple-choice
item-writing rules. Applied Measurement in Education, 2(1), 51-78.
Monahan, T. (1998). The Rise of Standardized Educational Testing in the U.S.: A
Bibliographic Overview.
Ravitch, D. (1985). The uses and misuses of tests. In The Schools We Deserve (pp.
172-181). New York: Basic Books.
Thissen, D., & Wainer, H. (2001). Test Scoring. Mahwah, NJ: Erlbaum.
Ward, A.W., & Murray-Ward, M. (1999). Assessment in the Classroom. Belmont, CA:
Wadsworth Publishing Co.
Wilson, N. (1997). Educational standards and the problem of error. Education Policy
Analysis Archives, 6(10).
UNIT 03:
TEST CONSTRUCTION

Purpose of a Test
Assessment of students in class is inevitable because it is an integral part of the teaching-
learning process. Assessment on one hand provides information to design or redesign
instruction and on the other hand promotes learning. Teachers use different techniques
and procedures to assess their students, e.g. tests, observations, questionnaires,
interviews, rating scales, discussions, etc. A teacher develops, administers, and marks
academic achievement tests and other types of tests in order to measure a student's ability
in a subject or to measure behaviour in class or in school. What are these tests? Does a
teacher really need to know what a test is? Yes, it is very important. The teaching-
learning process remains incomplete if a teacher does not know how well her class is
doing and to what extent her teaching is effective in terms of achievement of pre-defined
objectives. There are many technical terms related to assessment. Before we go any
further, it will be helpful to first define what a test is.

What is a Test?
A test is a device used to measure the behaviour of a person for a specific purpose; it is
an instrument that typically uses sets of items designed to measure a domain of learning
tasks. Tests are a systematic method of collecting information that leads to inferences
about the characteristics of people or objects. A teacher must understand that an
educational test is a measuring device and therefore involves rules (for administering and
scoring) for assigning numbers that will be used to describe the performance of an
individual. You should also keep in mind that it is not possible for a teacher to test all the
subject matter of a course that has been taught to the class in a semester or a year.
Therefore, the teacher prepares tests by sampling items from a pool of items in such a
way that the test represents the whole subject matter; content with many topics and
concepts taught over a semester or a year cannot be tested in one or two hours. In simple
words, a test should assess content areas in accordance with the relative importance the
teacher has assigned to them. A test is most commonly thought of as a simple paper-and-
pencil instrument, but nowadays other testing procedures have been developed and are
practiced in many schools.
Tests are of many types, but they can be placed into two main categories:
(i) Subjective type tests
(ii) Objective type tests
At elementary level students do not have much proficiency in writing long essay-type
answers to a question; therefore, objective type tests are preferred. Objective type tests
are also called selective-response tests. In this type of test the responses to an item are
provided and the students are required to choose the correct response. The objective types
of tests that are used at elementary level are:
(i) Multiple choice
(ii) Multiple binary-choice
(iii) Matching items
You will study the development process for each of these item types in the next units. In
this unit you have been given just an idea of what a test means for a teacher. After going
through this discussion you should be able to see from the preceding paragraphs why it is
important for a teacher to know about classroom tests and what purposes they serve. The
job of a teacher is to teach and to test for the following:

Purposes of a test:
You have learned that a test is a simple device which measures the achievement level of a
student in a particular subject and grade. A test therefore serves the following purposes:

1. Monitoring Student Progress


Why should teachers assess their students? The simple answer is that it helps teachers to
know whether their students are making satisfactory progress. The appropriate use of
tests and other assessment procedures allows a teacher to monitor the progress of their
students. A useful purpose of a classroom test is to know whether students are moving
satisfactorily towards the instructional goals. After identifying any weaknesses, the
teacher can modify his or her instructional design; if progress is adequate there will be no
need for instructional changes. The results obtained while monitoring students' progress
can further be utilized for formative assessment of instructional procedures. Formative
evaluation provides feedback to students as well as to teachers.

2. Diagnosing Learning Problems


Identification of students' strengths and weaknesses is one of the main purposes of a test.
An elementary teacher needs to know whether a student comprehends the content that he
or she reads. If the student reads with certain difficulties, then the teacher has to address
the problem instructionally; otherwise time and energy are wasted if the teacher moves
forward while students are not comprehending. Thus by measuring students' current
status a teacher can determine:
(i) How to improve students' weaknesses through instructional changes
(ii) How to avoid re-teaching already mastered skills and knowledge
Diagnosis undertaken before instruction is usually referred to as pre-testing or pre-
assessment. It tells the teacher the level of previous knowledge students possess at the
beginning of instruction.
3. Assigning Grades
A teacher assigns grades after scoring the test. The best way to assign grades is to collect
objective information related to students' achievements and other academic
accomplishments. Different institutions have different criteria for assigning grades;
mostly the letters A, B, C, D, or F are assigned on the basis of numerical evidence.

4. Classification and Selection of Students


A teacher makes different decisions regarding the classification, selection and placement
of students. Though these terms are used interchangeably, technically they have different
meanings. On the basis of test scores students are classified into high-ability, average-
ability and low-ability groups; tests can also be used to identify students with learning
disabilities, emotional disturbance, or some other category of special need (speech
handicap, etc.). On the basis of test scores students are selected or rejected for admission
to schools, colleges and other institutions. In contrast to selection, when making
placement decisions no one is rejected; rather, all students are placed in various
categories of educational levels, for example regular, remedial, or honors.

5. Evaluating Instruction
Students' performance on tests helps teachers to evaluate their own instructional
effectiveness, that is, to know how effective their teaching has been. Suppose a teacher
teaches a topic for two weeks and, after completing the topic, gives a test. If the scores
obtained by students show that they learned the skills and knowledge they were expected
to learn, the instruction worked; but if the obtained scores are poor, the teacher will have
to decide whether to retain, alter or totally discard the current instructional activities.

Activity-2.1: Visit some schools of your area and perform the following:
Conduct an interview of at least 10 teachers and ask them why they administer tests to
their students. Match their responses with the purposes of a test (1-5) given in
section 2.3.

Defining Learning Outcomes


Learning outcomes are statements indicating what a student is expected to be able to
do as a result of a learning activity. The major difference between learning objectives and
outcomes is that objectives focus on the instruction (what will be given to the students),
whereas outcomes focus on the students (what behaviour change they are expected to
show as a result of the instruction).
1. Different Definitions of Learning Outcomes
Adam (2004) defines learning outcomes as:
A learning outcome is a written statement of what the successful student/learner is
expected to be able to do at the end of the module/course unit, or qualification.

The Credit Common Accord for Wales defines learning outcomes as:
Statements of what a learner can be expected to know, understand and/or do as a result of
a learning experience. (QCA /LSC, 2004, p. 12)

University of Exeter (2007) defines:


Learning Outcome: An expression of what a student will demonstrate on the successful
completion of a module. Learning outcomes:
 are related to the level of the learning;
 indicate the intended gain in knowledge and skills that a typical student will
achieve;
 should be capable of being assessed.

2. Difference between Learning Outcomes and Objectives


‘Learning outcomes’ and ‘objectives’ are often used synonymously, although they are not
the same. In simple words, objectives are concerned with teaching and the teacher’s
intentions, whereas learning outcomes are concerned with student learning.
However, objectives and learning outcomes are usually written in the same terms. For further
detail check the following website.
http://www.qualityresearchinternational.com/glossary/learningoutcomes.htm

3. Importance of Learning Outcomes


Learning outcomes enable teachers to tell students more precisely what is expected of
them. Clearly stated learning outcomes:
 help students to learn more effectively. They know where they stand and the
curriculum is made more open to them.
 make it clear what students can hope to gain from a particular course or lecture.
 help instructors select the appropriate teaching strategy, for example lecture,
seminar, student self-paced, or laboratory class. It obviously makes sense to
match the intended outcome to the teaching strategy.
 help instructors tell their colleagues more precisely what a particular activity is
designed to achieve.
 assist in setting examinations based on the content delivered.
 help in the selection of appropriate assessment strategies.

Activity-2.4: Differentiate between a learning objective and a learning outcome with the
help of relevant examples.

4. SOLO Taxonomy
The SOLO taxonomy stands for:
Structure of
Observed
Learning
Outcomes

The SOLO taxonomy was developed by Biggs and Collis (1982) and further explained
by Biggs and Tang (2007). This taxonomy is used in Punjab for assessment.
It describes levels of increasing complexity in a student's understanding of a subject
through five stages, and it is claimed to be applicable to any subject area. Not all students
get through all five stages, of course, and indeed not all teaching is designed to take them there.

1. Pre-structural: here students are simply acquiring bits of unconnected
information, which have no organisation and make no sense.

2. Unistructural: simple and obvious connections are made, but their
significance is not grasped.

3. Multistructural: a number of connections may be made, but the meta-
connections between them are missed, as is their significance for the whole.

4. Relational: the student is now able to appreciate the significance of the
parts in relation to the whole.

5. Extended abstract: the student is making connections not only
within the given subject area, but also beyond it, able to generalise and
transfer the principles and ideas underlying the specific instance.
SOLO taxonomy
http://www.learningandteaching.info/learning/solo.htm#ixzz1nwXTmNn9

Preparation of Content Outline

First you must understand what content is. Here, content refers to the subject
matter that will be included in a measuring device. For example, a test of General
Science might include diagrams and pictures of different plants, insects or animals, or of
living and non-living things. A psychomotor test, such as conducting an experiment
in the laboratory, might require setting up the apparatus for the experiment. For an affective
device, the content might consist of a series of statements to which the students
choose the correct or best answer. Most tests taken by students are developed by teachers
who are already teaching the subject for which they have to develop the test, so
selection of test content may not be a problem for them. Selection and preparation of
content also depend on the type of decisions a teacher has to make about the students. If
the purpose of a test is to evaluate the instruction, then the content of the test must reflect
age appropriateness. If the test is made for making selection decisions, then the
content might be of a predictive nature; this type of test domain will provide information on
how well the student will perform in the program.
A teacher should know that items selected for the test come from the instructional material
which the teacher has covered during teaching. You may have heard students react
during examinations that the ‘test was out of course’. This indicates that the teacher, while
developing the test items, did not consider the content that was taught to the students:
the items included in the test may not have been covered during the instruction period.

Look at the following diagrams:

[Diagram: content taught vs. content of test items]
Figure-2.5 Poor representativeness

[Diagram: content taught vs. content of test items]
Figure-2.6 Inadequate representativeness

[Diagram: content taught vs. content of test items]
Figure-2.7 Inadequate representativeness

[Diagram: content taught vs. content of test items]
Figure-2.8 Completely inadequate representativeness

[Diagram: content taught vs. content of test items]
Figure-2.9 Adequate representativeness

In figures 2.5 to 2.9 the shaded area represents the test items which cover the content of
the subject matter, whereas the un-shaded area is the subject matter (learning domain) which the
teacher has taught in the class in the subject of social studies.
Figures 2.5-2.8 show poor or inadequate representativeness of the content of test
items. For example, in figure 2.5 the test covers only a small portion (shaded area) of the taught
content domain; the rest of the items do not coincide with the taught domain. In figures 2.6 and 2.7
most of the test items/questions have been taken from a specific part of the taught domain;
therefore, the representation of the taught content domain is inadequate, even though the test
items have been taken from the same content domain. The content of the test items in figure
2.8 gives a very poor picture of a test: none of the parts of the taught domain have been
assessed, so the test shows zero representativeness.
This implies that the content from which the test items are to be taken should be well
defined and structured. Without setting the boundary of the knowledge, behaviour, or skills
to be measured, the test development task becomes difficult and complex, and as a result
the assessment will produce unreliable results. Therefore a good test represents the taught
content to the maximum extent; a test which is representative of the entire content
domain is a good test. It is thus imperative for a teacher to prepare an
outline of the content that will be covered during the instruction. The next step is the
selection of subject matter and the designing of instructional activities. All these steps are
guided by the objectives: one must consider the objectives of the unit before selecting the
content domain and subsequently designing a test. It is clear from the above discussion
that the outline of the test content should be based on the following principles:
1. Purpose of the test (diagnostic test, classification, placement, or job employment)
2. Representative sample of the knowledge, behaviour, or skill domain being measured.
3. Relevancy of the topic with the content of the subject
4. Language of the content should be according to the age and grade level of the
students.
5. Developing table of specification.
A test which meets the criteria stated in the above principles will provide reliable and valid
information for making correct decisions regarding the individual. Now, keeping these
principles in view, go on to the following activity.
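As a rough illustration of principle 2 (representativeness), the overlap between taught and tested content can be modelled as simple topic sets. This is a sketch only; the topic names and the `coverage_report` helper are invented for illustration, and a real content outline would be far more fine-grained than whole-topic labels.

```python
# Illustrative sketch only: modelling taught and tested content as topic
# sets, to check a draft test's representativeness of the taught domain.

def coverage_report(taught_topics, tested_topics):
    """Return the share of taught topics the test covers, plus any
    'out of course' items that were never taught."""
    taught = set(taught_topics)
    tested = set(tested_topics)
    covered = taught & tested           # taught AND tested (the overlap)
    out_of_course = tested - taught     # tested but never taught
    coverage = len(covered) / len(taught) if taught else 0.0
    return coverage, sorted(out_of_course)

taught = ["climate", "resources", "population", "society"]
tested = ["climate", "resources", "glaciers"]
coverage, stray = coverage_report(taught, tested)
print(f"coverage: {coverage:.0%}, out-of-course items: {stray}")
```

A test like the one in figure 2.9 would score near 100% coverage with no out-of-course items, while a test like figure 2.8 would score near zero.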

Activity-2.5:

Visit an elementary school of your area and collect question papers/tests of the sixth class
of any subject developed by the school teachers. Now perform the following:
(1) a. How many items are related with the content?
    b. How many items (what percentage) are not related with the content covered
       for the testing period?
    c. Is the test representative of the entire content domain?
    d. Does the test fulfill the criteria of test construction? Explain.
(2) Share your results electronically with your classmates, and get their opinion
    on the clarification of the concepts discussed in unit-2.

Self- Assessment Questions:


(1) Explain with examples the purposes of a classroom test.
(2) How do you define an objective and an outcome? Differentiate between
objectives and outcomes with the help of examples.
(3) What is your understanding on the importance of learning outcomes?
(4) What is cognitive domain? Explain all levels with examples.
(5) Develop two objectives for measuring recall level, two objectives for
measuring application level and two for evaluation level for the 5th class
from an English textbook.
(6) Prepare a table of specification of 50 items for General Science subject
for 6th class.
UNIT 04

TEST ADMINISTRATION AND ANALYSIS

Planning a Test
The main objective of classroom assessment is to obtain valid, reliable and useful data
regarding students’ learning achievement. This requires determining what is to be
measured and then defining it precisely so that assessment tasks to measure the desired
performance can be developed. Classroom tests and assessments can be used for the
following instructional purposes:
i. Pre-testing
Tests and assessments can be given at the beginning of an instructional unit or course to
determine:
 whether the students have the prerequisite skills needed for the
instruction (readiness, motivation etc.)
 to what extent the students have already achieved the objectives of the
planned instruction (to determine placement or modification of
instruction)
ii. During-Instruction Testing
 provides a basis for formative assessment
 monitors learning progress
 detects learning errors
 provides feedback for students and teachers
iii. End-of-Instruction Testing
 measures intended learning outcomes
 is used for summative assessment
 provides a basis for grades, promotion etc.
Prior to developing an effective test, one needs to determine whether or not a test is the
appropriate type of assessment. If the learning objectives are primarily procedural
knowledge (how to perform a task), then a written test may not be the best
approach: assessment of procedural knowledge generally calls for a performance
demonstration assessed using a rubric. Where demonstration of a procedure is not
appropriate, a test can be an effective assessment tool.
The first stage of developing a test is planning the test content and length. Planning the
test begins with development of a blueprint or test specifications for the test structured on
the learning outcomes or instructional objectives to be assessed by the test instrument.
For each learning outcome, a weight should be assigned based on the relative importance
of that outcome in the test. The weight will be used to determine the number of items
related to each of the learning outcomes.
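The weighting step described above can be sketched in a few lines of code. This is a hypothetical illustration, not a procedure from the text; the outcome names and weights are invented examples.

```python
# Hypothetical sketch: allocating test items to learning outcomes in
# proportion to their assigned weights, as in a test blueprint.

def allocate_items(weights, total_items):
    """Split total_items across outcomes in proportion to their weights."""
    total_weight = sum(weights.values())
    counts = {name: round(total_items * w / total_weight)
              for name, w in weights.items()}
    # Rounding can leave the counts off by an item or two; adjust the
    # most heavily weighted outcome so the total comes out exact.
    diff = total_items - sum(counts.values())
    if diff:
        heaviest = max(weights, key=weights.get)
        counts[heaviest] += diff
    return counts

weights = {"knowledge": 40, "understanding": 40, "application": 20}
print(allocate_items(weights, 50))   # {'knowledge': 20, 'understanding': 20, 'application': 10}
```

The same idea underlies the table of specifications shown later: each row's percentage drives how many long-answer, short-answer and multiple-choice items it receives.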

ADMINISTRATION AND CONDUCTING THE TEST

Test Specifications
When an engineer prepares a design to construct a building and chooses the materials he
intends to use in construction, he usually knows what the building is going to be used for,
and therefore designs it to meet the requirements of its planned inhabitants. Similarly, in
testing, the table of specifications is the blueprint of the assessment, which specifies the
percentages and weightage of test items and the constructs being measured. It includes the constructs
and concepts to be measured, the tentative weightage of each construct, the number of items
for each concept, and a description of the item types to be constructed. It is not surprising that
specifications are also referred to as ‘blueprints’, for they are literally architectural drawings
for test construction. Fulcher & Davidson (2009) divided test specifications into the following
four elements:
 Item specifications: Item specifications describe the items, prompts or tasks, and
any other material such as texts, diagrams, and charts which are used as stimuli.
Typically, a specification at this sub-level contains two key elements: samples of
the tasks to be produced, and guiding language that details all information
necessary to produce the task.
 Presentation Model: The presentation model provides information on how the items
and tasks are presented to the test takers.
 Assembly Model: Assembly model helps the test developer to combine test
items and tasks to develop a test format.
 Delivery Model: Delivery Model tells how the actual test is delivered. It
includes information regarding test administration, test security/confidentiality
and time constraint.

Table 7.1: Table of Specifications for Social Studies Class VI

Objectives/ Knowledge Understanding Application Percentage


Contents LA SA MCQ LA SA MCQ LA SA MCQ
Climate 1 2 3 1 2 3 1 2 3 25%
Resources 1 2 3 1 2 3 1 2 3 25%
Population 1 2 3 1 2 3 1 2 3 25%
Society 1 2 3 1 2 3 1 2 3 25%
Total 4 8 12 4 8 12 4 8 12 100%
LA: Long Answer, SA: Short Answers, MCQ: Multiple Choice Questions
Note: Number of items/questions and percentage may be changed according to the
objectives/contents and hierarchy of learning.

General Considerations in Constructing Objective Test Items


The second step in test planning is determining the format and length of the test. The
format is based on the different types of items to be included in the test. The construction
of valid and good test items is a skill just like effective teaching. Some rules are to be
followed and some techniques are to be used to construct good test items. Test items can
be used to assess student’s ability to recognize concepts or to recall concepts. Generally
there are two types of objective test items:-
i. Select type.
ii. Supply type.

Select Type Items


A. Matching Items
According to W. Wiersma and S.G. Jurs (1990), the matching items consist of two
parallel columns. The column on the left contains the questions to be answered, termed
premises; the column on the right, the answers, termed responses. The student is asked to
associate each premise with a response to form a matching pair. For example
Column “A” Capital City Column “B” Country
Islamabad Iran
Tehran Spain
Istanbul Portugal
Madrid Pakistan
Hague Netherlands
Turkey
West Germany

According to W. Wiersma and S.G. Jurs (1990) in some matching exercises the number
of premises and responses are the same, termed a balanced or perfect matching exercise.
In others, the number and responses may be different.

Advantages
The chief advantage of matching exercises is that a good deal of factual information can
be tested in minimal time, making the tests compact and efficient. They are especially
well suited to who, what, when and where types of subject matter. Further, students
frequently find the tests fun to take because they have puzzle-like qualities.

Disadvantages
The principal difficulty with matching exercises is that teachers often find that the subject
matter is insufficient in quantity or not well suited for matching terms. An exercise
should be confined to homogeneous items containing one type of subject matter (for
instance, authors-novels; inventions-inventors; major events-dates; terms-definitions;
rules-examples; and the like). Where unlike clusters of questions are used, even a
poorly informed student can often recognize the ill-fitting items by their irrelevant and
extraneous nature (for instance, in a list of authors, the inclusion of the names of capital
cities). In a matching exercise the student identifies connected items from two lists; it is
useful for assessing the ability to discriminate, categorize, and see associations amongst
similar concepts.

Suggestions for Writing Matching Items


Here are some suggestions for writing matching items:
i. Keep both the list of descriptions and the list of options fairly short and
homogeneous – they should both fit on the same page. Title the lists to ensure
homogeneity and arrange the descriptions and options in some logical order. If
this is impossible, you’re probably including too wide a variety in the exercise.
Try constructing two or more exercises.
ii. Make sure that all the options are plausible distracters for each description to
ensure homogeneity of lists.
iii. The list of descriptions on the left side should contain the longer phrases or
statements, whereas the options on the right side should consist of short phrases,
words or symbols.
iv. Each description in the list should be numbered (each is an item), and the list of
options should be identified by letter.
v. Include more options than descriptions. If the option list is longer than the
description list, it is harder for students to eliminate options. If the option list is
shorter, some options must be used more than once. Always include some
options that do not match any of the descriptions, or some that match more than
one, or both.
vi. In the directions, specify the basis for matching and whether options can be used
more than once.

B. Multiple Choice Questions (MCQ’s)


Norman E. Gronlund (1990) writes that the multiple-choice question is probably the
most popular as well as the most widely applicable and effective type of objective test.
The student selects a single response from a list of options; it can be used effectively for any
level of course outcome. It consists of two parts: the stem, which states the problem, and a
list of three to five alternatives, one of which is the correct (key) answer and the others
are distracters (“foils”, or incorrect options that draw the less knowledgeable pupil away
from the correct response).
The stem may be stated as a direct question or as an incomplete statement. For example:

Direct question
Which is the capital city of Pakistan? -------- (Stem)
A. Lahore. -------------------------------------- (Distracter)
B. Karachi. ------------------------------------- (Distracter)
C. Islamabad. ---------------------------------- (Key)
D. Peshawar. ----------------------------------- (Distracter)
Incomplete Statement
The capital city of Pakistan is
A. Lahore.
B. Karachi.
C. Islamabad.
D. Peshawar.

RULES FOR WRITING MULTIPLE-CHOICE QUESTIONS


1. Use Plausible Distracters (wrong-response options)
 Only list plausible distracters, even if the number of options per question changes
 Write the options so they are homogeneous in content
 Use answers given in previous open-ended exams to provide realistic distracters

2. Use a Question Format


 Experts encourage multiple-choice items to be prepared as questions (rather than
incomplete statements)
Incomplete Statement Format:
The capital of AJK is in-----------------.
Direct Question Format:
In which of the following cities is the capital of AJK?

3. Emphasize Higher-Level Thinking


 Use memory-plus application questions. These questions require students to
recall principles, rules or facts in a real life context.
 The key to preparing memory-plus application questions is to place the concept in a
real-life situation or context that requires the student to first recall the facts and then
apply or transfer them to that situation.
 Seek support from others who have experience writing higher-level thinking
multiple-choice questions.

EXAMPLES:
Memory Only Example (Less Effective)

Which description best characterizes whole foods?


a. toast
b. bran cereal
c. grapefruit

Memory-Plus Application Example (More Effective)


Sana’s breakfast this morning included one glass of orange juice (from concentrate), one
slice of toast, a small bowl of bran cereal and a grapefruit. What “whole food” did Sana
eat for breakfast?
a. orange juice
b. toast
c. bran cereal
d. grapefruit

Ability to Interpret Cause-and-Effect Relationships Example


Why does investing money in common stock protect against loss of assets during
inflation?
a. It pays higher rates of interest during inflation.
b. It provides a steady but dependable income despite economic conditions.
c. It is protected by the Federal Reserve System.
d. It increases in value as the value of a business increases.

Ability to Justify Methods and Procedures Example


Why is adequate lighting necessary in a balanced aquarium?
a. Fish need light to see their food.
b. Fish take in oxygen in the dark.
c. Plants expel carbon dioxide in the dark.
d. Plants grow too rapidly in the dark.

4. Keep Option Lengths Similar


 Avoid making your correct answer the long or short answer
5. Balance the Placement of the Correct Answer
 Correct answers tend to be placed in the second and third options; distribute
them evenly across all positions

6. Be Grammatically Correct
 Use simple, precise and unambiguous wording
 If only the correct option fits the stem grammatically, students will be able
to select it without knowing the content

7. Avoid Clues to the Correct Answer


 Avoid giving away the answer to one question somewhere else in the
test
 Have the test reviewed by someone who can find mistakes, clues,
grammar and punctuation problems before you administer the exam to
students
 Avoid extremes – never, always, only
 Avoid nonsense words and unreasonable statements

8. Avoid Negative Questions


 31 of 35 testing experts recommend avoiding negative questions
 Students may be able to find an incorrect answer without knowing the
correct answer

9. Use Only One Correct Option (Or be sure the best option is clearly the best
option)
 The item should include one and only one correct or clearly best answer
 With one correct answer, alternatives should be mutually exclusive and
not overlapping
 Using MC with questions containing more than one right answer lowers
discrimination between students

10. Give Clear Instructions


Such as:
 Questions 1 - 10 are multiple-choice questions designed to assess your
ability to remember or recall basic and foundational pieces of knowledge
related to this course.
 Please read each question carefully before reading the answer options.
When you have a clear idea of the question, find your answer and mark
your selection on the answer sheet. Please do not make any marks on this
exam.
 Questions 11 – 20 are multiple-choice questions designed to assess your
ability to think critically about the subject.
 Please read each question carefully before reading the answer options.
 Be aware that some questions may seem to have more than one right
answer, but you are to look for the one that makes the most sense and is
the most correct.
 When you have a clear idea of the question, find your answer and mark
your selection on the answer sheet.
 You may justify any answer you choose by writing your justification on
the blank paper provided.

11. Use Only a Single, Clearly-Defined Problem and Include the Main Idea in the
Question
 Students must know what the problem is without having to read the
response options

12. Avoid the “All of the Above” Option


 Students merely need to recognize two correct options to get the answer
correct

13. Avoid the “None of the Above” Option


 You will never know if students know the correct answer

14. Don’t Use MCQs When Other Item Types Are More Appropriate
 e.g. when plausible distracters are limited, or when assessing
problem-solving and creativity

Advantages
The chief advantage of the multiple-choice question, according to N.E. Gronlund (1990),
is its versatility. For instance, it is capable of being applied to a wide range of subject
areas. In contrast, short-answer items limit the writer to those content areas that are
capable of being stated in one or two words, and matching items are bound to
homogeneous clusters containing one type of subject matter, whereas the multiple-choice
item is not. A multiple-choice question also greatly reduces the opportunity for a student
to guess the correct answer, from one chance in two with a true-false item to one in four
or five, thereby increasing the reliability of the test. Further, since a multiple-choice item
contains plausible incorrect or less correct alternatives, it permits the test constructor to
fine-tune the discriminations (the degree of homogeneity of the responses) and control them.
Disadvantages
N.E. Gronlund (1990) writes that multiple-choice items are difficult to construct. Suitable
distracters are often hard to come by, and the teacher is tempted to fill the void with a
“junk” response, which has the effect of narrowing the range of options available to the
test-wise student. They are also exceedingly time-consuming to fashion, one hour per
question being by no means the exception. Finally, they generally take students longer to
complete (especially items containing fine discriminations) than do other types of
objective questions.

ITEM ANALYSIS AND MODIFICATION

Suggestions for Writing MCQ Items


Here are some guidelines for writing multiple-choice tests:
I. The stem of the item should clearly formulate a problem. Include as much of the item
as possible, keeping the response options as short as possible. However, include
only the material needed to make the problem clear and specific. Be concise;
don’t add extraneous information.
II. Be sure that there is one and only one correct or clearly best answer.
III. Be sure wrong answer choices (distracters) are plausible. Eliminate unintentional
grammatical clues, and keep the length and form of all the answer choices equal.
Rotate the position of the correct answer from item to item randomly.
IV. Use negative questions or statements only if the knowledge being tested requires it. In
most cases it is more important for the student to know what a specific item of
information is rather than what it is not.
V. Include from three to five options (two to four distracters plus one correct answer) to
optimize testing for knowledge rather than encouraging guessing. It is not
necessary to construct additional distracters for an item simply to maintain the
same number of distracters for each item; this usually leads to poorly
constructed distracters that add nothing to test validity and reliability.
VI. To increase the difficulty of a multiple-choice item, increase the similarity of content
among the options.
VII. Use the option “none of the above” sparingly and only when the keyed answer
can be classified unequivocally as right or wrong.
VIII. Avoid using “all of the above”. It is usually the correct answer and makes the
item too easy for students with partial information.

II. Supply Type Items


A. Completion Items
Like true-false items, completion items are relatively easy to write; perhaps the first tests
classroom teachers construct, and students take, are completion tests. As with items of all
other formats, though, there are good and poor completion items. The student fills in one
or more blanks in a statement; these are also known as “gap-fillers.” They are most
effective for assessing knowledge and comprehension learning outcomes but can be
written for higher-level outcomes, e.g.
The capital city of Pakistan is -----------------.

Suggestions for Writing Completion or Supply Items


Here are our suggestions for writing completion or supply items:

I. If at all possible, items should require a single-word answer or a brief and
definite statement. Avoid statements that are so indefinite that they may be
logically answered by several terms.
a. Poor item:
Motorway (M1) opened for traffic in ________.
b. Better item:
Motorway (M1) opened for traffic in the year ________.
II. Be sure the question or statement poses a problem to the examinee. A direct
question is often more desirable than an incomplete statement because it provides
more structure.
III. Be sure the answer that the student is required to produce is factually correct. Be
sure the language used in the question is precise and accurate in relation to the
subject matter area being tested.
IV. Omit only key words; don’t eliminate so many elements that the sense of the
content is impaired.
c. Poor item:
The ________ type of test item is usually more ________ than the ________ type.
d. Better item:
The supply type of test item is usually graded less objectively than the ________
type.
V. Word the statement such that the blank is near the end of the sentence rather than
near the beginning. This will prevent awkward sentences.
VI. If the problem requires a numerical answer, indicate the units in which it is to be
expressed.

B. Short Answer
The student supplies a response to a question that might consist of a single word or phrase.
It is most effective for assessing knowledge and comprehension learning outcomes but can be
written for higher-level outcomes. Short-answer items are of two types:
 Simple direct questions
Who was the first president of Pakistan?
 Completion items

The name of the first president of Pakistan is ________.

The items can be answered by a word, phrase, number or symbol. Short-answer tests are
a cross between essay and objective tests: the student must supply the answer as with an
essay question, but in a highly abbreviated form as with an objective question.

Advantages
Norman E. Gronlund (1990) writes that short-answer items have a number of advantages.
 They reduce the likelihood that a student will guess the correct answer.
 They are relatively easy for a teacher to construct.
 They are well adapted to mathematics, the sciences, and foreign languages where
specific types of knowledge are tested (The formula for ordinary table salt is
________.).
 They are consistent with the Socratic question-and-answer format frequently
employed in the elementary grades in teaching basic skills.

Disadvantages
According to Norman E. Gronlund (1990) there are also a number of disadvantages with
short-answer items.
 They are limited to content areas in which a student’s knowledge can be
adequately portrayed by one or two words.
 They are more difficult to score than other types of objective-item tests since
students invariably come up with unanticipated answers that are totally or
partially correct.
 Short answer items usually provide little opportunity for students to synthesize,
evaluate and apply information.

Self Assessment Questions


 What strategies will you adopt to plan an annual exam for your class?
 Write down your reasons for preferring multiple-choice questions over
true-false test items.
UNIT 05

INTERPRETING TEST SCORES

Introduction of Measurement Scales and Interpretation of Test Scores

Interpreting Test Scores


All types of research data, test result data, survey data, etc. are called raw data and are
collected using four basic scales: nominal, ordinal, interval and ratio. Ratio is more
sophisticated than interval, interval is more sophisticated than ordinal, and ordinal is more
sophisticated than nominal. A variable measured on a nominal scale is a variable that does
not really have any evaluative distinction: one value is not greater than another. A good
example of a nominal variable is gender. With nominal variables, there is a qualitative
difference between values, not a quantitative one. Something measured on an ordinal scale
does have an evaluative connotation: one value is greater, larger or better than another, but
we only know that one value is better than another, e.g. that 10 is better than 9. A variable
measured on an interval or ratio scale has maximum evaluative distinction.
After the collection of data, there are three basic ways to compare and interpret the results
obtained. Students’ performance can be compared and interpreted against an
absolute standard, a criterion-referenced standard, or a norm-referenced
standard. Some examples from daily life and the educational context may make this clear:
Sr. No. | Standard | Characteristics | Daily life | Educational context
1 | Absolute | simply states the observed outcome | He is 6' 2" tall. | He spelled correctly 45 out of 50 English words.
2 | Criterion-referenced | compares the person's performance with a standard, or criterion | He is tall enough to catch the branch of this tree. | His score of 40 out of 50 is greater than the minimum cutoff point 33, so he must be promoted to the next class.
3 | Norm-referenced | compares a person's performance with that of other people in the same context | He is the third fastest bowler in the Pakistani squad of 15. | His score of 37 out of 50 was not very good; 65% of his class fellows did better.

All three types of score interpretation are useful, depending on the purpose for which
comparisons are made.
An absolute score merely describes a measure of performance or achievement without
comparing it with any set or specified standard. Scores are not particularly useful without
any kind of comparison. Criterion-referenced scores compare test performance with a
specific standard; such a comparison enables the test interpreter to decide whether the
scores are satisfactory according to established standards. Norm-referenced tests compare
test performance with that of others who were measured by the same procedure. Teachers
are usually more interested in knowing how children compare with a useful standard than
how they compare with other children; but norm-referenced comparisons may also
provide useful insights.

Interpreting Test Scores by Percentiles


Criterion-referenced scores are the easiest to understand and interpret because they are
straightforward and usually reported as percentages or raw scores, while norm-referenced
scores are often converted to derived standard scores or to percentiles. Derived standard
scores are usually based on the normal curve, with an arbitrary mean, so that respondents who
took the same test can be compared. Converting a student's score into a percentile score
indicates what percentage of the other students who took the same test fell below that
student's score. Percentiles are most often used for determining the relative standing of a
student within a population. Percentile ranks are an easy way to convey a student's standing
on a test relative to the other test takers.

For example, a score at the 60th percentile means that the individual's score is the same
as or higher than the scores of 60% of those who took the test. The 50th percentile is
known as the median and represents the middle score of the distribution.
Percentiles have the disadvantage that they are not equal units of measurement. For
instance, a difference of 5 percentile points between two individuals' scores will have a
different meaning depending on its position on the percentile scale, as the scale tends to
exaggerate differences near the mean and collapse differences at the extremes.

Percentiles cannot be averaged nor treated in any other way mathematically. However,
they do have the advantage of being easily understood and can be very useful when
giving feedback to candidates or reporting results to managers.
If you know your percentile score then you know how it compares with others in the
norm group. For example, if you scored at the 70th percentile, then this means that you
scored the same or better than 70% of the individuals in the norm group.
Percentile scores are especially useful when raw scores tend to bunch up around the average
of the group, i.e. when most of the students are of similar ability and their scores fall
within a very small range.
To illustrate this point, consider a typical subject test consisting of 50 questions. Most of
the students, who are a fairly similar group in terms of their ability, will score around 40.
Some will score a few less and some a few more. It is very unlikely that any of them will
score less than 35 or more than 45.

Raw achievement scores are a very poor way of analyzing such results; percentile scores,
however, interpret the results very clearly.
Definition
A percentile is a measure that tells us what percent of the total frequency scored at or
below that measure. A percentile rank is the percentage of scores that fall at or below a
given score. OR
A percentile is a measure that tells us what percent of the total frequency scored below
that measure. A percentile rank is the percentage of scores that fall below a given score.
Both definitions seem the same but are not statistically identical. For example:
Example No.1
If Aslam stands 25th in a class of 150 students, then 125 students rank below Aslam.

Formula:
To find the percentile rank of a score x out of a set of n scores, where x is included:

percentile rank = ((B + 0.5E) / n) × 100

Where B = number of scores below x
E = number of scores equal to x
n = number of scores
Using this formula, Aslam's percentile rank (B = 125, E = 1, n = 150) would be:

((125 + 0.5 × 1) / 150) × 100 = 83.67, i.e. the 84th percentile

Formula:
To find the percentile rank of a score x out of a set of n scores, where x is not included:

percentile rank = (number of scores below x / n) × 100

Using this formula, Aslam's percentile rank would be:

(125 / 150) × 100 = 83.3, i.e. the 83rd percentile

Therefore the two definitions yield different percentile ranks. The difference is
significant only for small data sets; if we have the raw data, we can find the percentile
rank using either formula.

Example No.2
The science test scores are: 50, 65, 70, 72, 72, 78, 80, 82, 84, 84, 85, 86, 88, 88, 90, 94,
96, 98, 98, 99 Find the percentile rank for a score of 84 on this test.

Solution:
First rank the scores in ascending or descending order
50, 65, 70, 72, 72, 78, 80, 82, 84, |84, 85, 86, 88, 88, 90, 94, 96, 98, 98, 99
Since there are 2 values equal to 84, assign one to the group "above 84" and the other to
the group "below 84".

Solution Using Formula (x included):

percentile rank = ((B + 0.5E) / n) × 100
= ((8 + 0.5 × 2) / 20) × 100 = (9 / 20) × 100 = 45th percentile

Solution Using Formula (x not included, with one of the two 84s counted below):

percentile rank = (number of scores below x / n) × 100
= (9 / 20) × 100 = 45th percentile

Therefore the score of 84 is at the 45th percentile for this test.

Example No.3
The science test scores are: 50, 65, 70, 72, 72, 78, 80, 82, 84, 84, 85, 86, 88, 88, 90, 94,
96, 98, 98, 99. Find the percentile rank for a score of 86 on this test.

Solution:
First rank the scores in ascending or descending order
Since there is only one value equal to 86, it will be counted as "half" of a data value for
the group "above 86" as well as the group "below 86".
Solution Using Formula (x included):

percentile rank = ((B + 0.5E) / n) × 100
= ((11 + 0.5 × 1) / 20) × 100 = (11.5 / 20) × 100 = 57.5, i.e. the 58th percentile

Solution Using Formula (x not included, with the single 86 counted as half below):

percentile rank = (number of scores below x / n) × 100
= (11.5 / 20) × 100 = 57.5, i.e. the 58th percentile

The score of 86 is at the 58th percentile for this test.
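The two percentile-rank formulas above can be sketched in Python; the function names are illustrative. Note that the strictly-below version gives 40 for the tied score of 84 unless, as in the worked solutions, tied scores are first split between the "above" and "below" groups.

```python
def percentile_rank_included(scores, x):
    """Percentile rank with x included: (B + 0.5E) / n * 100."""
    B = sum(1 for s in scores if s < x)    # number of scores below x
    E = sum(1 for s in scores if s == x)   # number of scores equal to x
    return (B + 0.5 * E) * 100 / len(scores)

def percentile_rank_excluded(scores, x):
    """Percentile rank with x not included: (scores strictly below x) / n * 100."""
    B = sum(1 for s in scores if s < x)
    return B * 100 / len(scores)

# The 20 science test scores from Example 2 above
science = [50, 65, 70, 72, 72, 78, 80, 82, 84, 84,
           85, 86, 88, 88, 90, 94, 96, 98, 98, 99]
print(percentile_rank_included(science, 84))  # 45.0
print(percentile_rank_included(science, 86))  # 57.5, i.e. the 58th percentile
```

The arithmetic is ordered as (B + 0.5E) × 100 / n so the exact decimal results (45.0, 57.5) come out without floating-point noise.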

Keep in Mind:
 Percentile rank is a number between 0 and 100 indicating the percent of cases
falling at or below that score.
 Percentile ranks are usually written to the nearest whole percent: 64.5% = 65%
= 65th percentile
 Scores are divided into 100 equally sized groups.
 Scores are arranged in rank order from lowest to highest.
 There is no 0 percentile rank - the lowest score is at the first percentile.
 There is no 100th percentile - the highest score is at the 99th percentile.
 Percentiles have the disadvantage that they are not equal units of measurement.
 Percentiles cannot be averaged nor treated in any other way mathematically.
 You cannot perform the same mathematical operations on percentiles that you can on
raw scores. You cannot, for example, compute the mean of percentile scores, as
the results may be misleading.
 Quartiles can be thought of as percentile measures. Remember that quartiles break the
data set into 4 equal parts. If 100% is broken into four equal parts, we have
subdivisions at 25%, 50%, and 75%, creating the:

First quartile (lower quartile) at the 25th percentile.
Median (or second quartile) at the 50th percentile.
Third quartile (upper quartile) at the 75th percentile.

Interpreting Test Scores by Percentages


The number of questions a student gets right on a test is the student's raw score (assuming
each question is worth one point). By itself, a raw score has little or no meaning. For
example, if a teacher says that Fatima has scored 8 marks, this information conveys no
meaning on its own. The meaning depends on how many questions are on the test and how hard
or easy the questions are. For example, if Umair got 10 right on both a math test and a
science test, it would not be reasonable to conclude that his level of achievement in the
two areas is the same. This illustrates why raw scores are usually converted to other types
of scores for interpretation purposes. Converting a raw score into a percentage conveys a
student's achievement in an understandable and meaningful way. For example, if Sadia got 8
questions right out of ten, then we can say that Sadia was able to solve

(8 / 10) × 100 = 80%

of the questions. If each question carries equal marks, then we can say that Sadia scored
80% marks. If different questions carry different marks, first count the marks obtained and
the total marks of the test, then use the following formula to compute the percentage of
marks:

% marks = (Marks Obtained / Total Marks) × 100
Example:
The marks detail of Hussan's math test is shown below. Find Hussan's percentage marks.

Question         Q1   Q2   Q3   Q4   Q5   Total
Marks            10   10    5    5   20    50
Marks obtained    8    5    2    3   10    28

Solution:
Hussan's marks obtained = 28
Total marks = 50

% marks = (Marks Obtained / Total Marks) × 100 = (28 / 50) × 100 = 56%
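The percentage-marks formula can be sketched as a small Python function (the name percent_marks is illustrative), applied here to the figures from the example above:

```python
def percent_marks(marks_obtained, total_marks):
    """% marks = (Marks Obtained / Total Marks) * 100."""
    return marks_obtained * 100 / total_marks

# Hussan's test: maximum marks and marks obtained per question
maximum = [10, 10, 5, 5, 20]
obtained = [8, 5, 2, 3, 10]
print(percent_marks(sum(obtained), sum(maximum)))  # 56.0
```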
A number can be used merely to label or categorize a response. This sort of
number (nominal scale) has a low level of meaning. A higher level of meaning comes
with numbers that order responses (ordinal data). An even higher level of meaning
(interval or ratio data) is present when numbers attempt to present exact scores, such as
when we state that a person got 17 correct out of 20. Although even the lowest scale is
useful, higher level scales give more precise information and are more easily adapted to
many statistical procedures.
Scores can be summarized by using either the mode (most frequent score), the median
(midpoint of the scores), or the mean (arithmetic average) to indicate typical
performance. When reporting data, you should choose the measure of central tendency
that gives the most accurate picture of what is typical in a set of scores. In addition, it is
possible to report the standard deviation to indicate the spread of the scores around the
mean.
Scores from measurement processes can be either absolute, criterion referenced, or norm
referenced. An absolute score simply states a measure of performance without comparing
it with any standard. However, scores are not particularly useful unless they are
compared with something. Criterion-referenced scores compare test performance with a
specific standard; such a comparison enables the test interpreter to decide whether the
scores are satisfactory according to established standards. Norm-referenced tests compare
test performance with that of others who were measured by the same procedure. Teachers
are usually more interested in knowing how children compare with a useful standard than
how they compare with other children; but norm referenced comparisons may also
provide useful insights.
Criterion-referenced scores are easy to understand because they are usually
straightforward raw scores or percentages. Norm-referenced scores are often converted to
percentiles or other derived standard scores. A student's percentile score on a test
indicates what percentage of other students who took the same test fell below that
student's score. Derived scores are often based on the normal curve. They use an arbitrary
mean to make comparisons showing how respondents compare with other persons who
took the same test.

Interpreting Test Scores by ordering and ranking


Organizing and reporting students' scores starts with placing the scores in ascending or
descending order. From ranked scores a teacher can find the smallest score, the largest
score, the range, and other facts such as the variability of the scores. A teacher may use
ranked scores to see the relative position of each student within the class, but ranked
scores do not yield any significant numerical value for result interpretation or reporting.

Measurement Scales
Measurement is the assignment of numbers to objects or events in a systematic fashion.
Measurement scales are critical because they relate to the types of statistics you can use
to analyze your data. An easy way to have a paper rejected is to have used either an
incorrect scale/statistic combination or to have used a low-powered statistic on a
high-powered set of data. The following four levels of measurement scales are commonly
distinguished so that the proper analysis can be applied to the data.

Nominal Scale
Nominal scales are the lowest scales of measurement. A nominal scale, as the name
implies, is simply some placing of data into categories, without any order or structure.
You are only allowed to examine if a nominal scale datum is equal to some particular
value or to count the number of occurrences of each value. For example, categorization of
blood groups of classmates into A, B. AB, O etc. In The only mathematical operation we
can perform with nominal data is to count. Variables assessed on a nominal scale are
called categorical variables; Categorical data are measured on nominal scales which
merely assign labels to distinguish categories. For example, gender is a nominal scale
variable. Classifying people according to gender is a common application of
a nominal scale.

Nominal Data
 classification or categorization of data, e.g. male or female
 no ordering, e.g. it makes no sense to state that male is greater than female (M >
F) etc
 arbitrary labels, e.g., pass=1 and fail=2 etc
Ordinal Scale
Something measured on an "ordinal" scale does have an evaluative connotation. You are
also allowed to examine if an ordinal scale datum is less than or greater than another
value. For example rating of job satisfaction on a scale from 1 to 10, with 10
representing complete satisfaction. With ordinal scales, we only know that 2 is better than
1 or 10 is better than 9; we do not know by how much. It may vary. Hence, you can 'rank'
ordinal data, but you cannot 'quantify' differences between two ordinal values. An ordinal
scale also possesses all the properties of a nominal scale.

Ordinal Data
 ordered, but differences between values are not important; the differences between
values may or may not be equal.
 e.g., political parties on left to right spectrum given labels 0, 1, 2
 e.g., Likert scales, rank on a scale of 1..5 your degree of satisfaction
 e.g., restaurant ratings

Interval Scale
An ordinal scale with quantifiable differences between values becomes an interval scale. You
are allowed to quantify the difference between two interval-scale values, but there is no
natural zero. A variable measured on an interval scale gives information about more or
better, as ordinal scales do, but interval variables have an equal distance between each
value: the distance between 1 and 2 is equal to the distance between 9 and 10. For
example, temperature scales are interval data, with 25C warmer than 20C, and a 5C
difference has some physical meaning. Note that 0C is arbitrary, so it does not make
sense to say that 20C is twice as hot as 10C, but there is exactly the same difference
between 100C and 90C as there is between 42C and 32C. Students' achievement scores
are measured on an interval scale.

Interval Data
 ordered, constant scale, but no natural zero
 differences make sense, but ratios do not (e.g., 30°-20° = 20°-10°, but 20° is
not twice as hot as 10°)
 e.g., temperature (C,F), dates

Ratio Scale
Something measured on a ratio scale has the same properties that an interval scale has
except, with a ratio scaling, there is an absolute zero point. Temperature measured in
Kelvin is an example. There is no value possible below 0 degrees Kelvin, it is absolute
zero. Physical measurements of height, weight, length are typically ratio variables.
Weight is another example: 0 lbs. is a meaningful absence of weight. Ratios between
measurements hold true regardless of the units in which the object is measured (e.g. meters
or yards), because there is a natural zero.

Ratio Data
 ordered, constant scale, natural zero
 e.g., height, weight, age, length
One can think of nominal, ordinal, interval, and ratio as being ranked in their relation to
one another. Ratio is more sophisticated than interval, interval is more sophisticated than
ordinal, and ordinal is more sophisticated than nominal.

Frequency Distribution
Frequency is how often something occurs. The frequency (f) of a particular observation
is the number of times the observation occurs in the data.

Distribution
The distribution of a variable is the pattern of frequencies of the observations.

Frequency Distribution
It is a representation, either in a graphical or tabular format, which displays the number of
observations within a given interval. Frequency distributions are usually used within a
statistical context.

Frequency Distribution Tables


A frequency distribution table is one way you can organize data so that it makes more
sense. Frequency distributions are also portrayed as frequency tables, histograms,
or polygons. Frequency distribution tables can be used for both categorical and numeric
variables. The intervals of a frequency table must be mutually exclusive and exhaustive.
Continuous variables should only be used with class intervals. By counting frequencies,
we can make a frequency distribution table. The following examples illustrate the
procedure for constructing one.
Example 1
For example, let’s say you have a list of IQ scores for a gifted classroom in a particular
elementary school. The IQ scores are: 118, 123, 124, 125, 127, 128, 129, 130, 130, 133,
136, 138, 141, 142, 149, 150, 154. That list doesn’t tell you much about anything. You
could draw a frequency distribution table, which will give a better picture of your data.
Step 1:
 Figure out how many classes (categories) you need. There are no hard rules about
how many classes to pick, but there are a couple of general guidelines:
 Pick between 5 and 20 classes. For the list of IQs above, we picked 5 classes.
 Make sure you have a few items in each category. For example, if you have 20
items, choose 5 classes (4 items per category), not 20 classes (which would give
you only 1 item per category).

Step 2:
 Subtract the minimum data value from the maximum data value. For example, the IQ
list above has a minimum value of 118 and a maximum value of 154, so:
154 – 118 = 36

Step 3:
 Divide your answer in Step 2 by the number of classes you chose in Step 1.
36 / 5 = 7.2

Step 4:
 Round the number from Step 3 up to a whole number to get the class width.
Rounded up, 7.2 becomes 8.

Step 5:
 Write down your lowest value for your first minimum data value:
The lowest value is 118

Step 6:
 Add the class width from Step 4 to Step 5 to get the next lower class limit:
118 + 8 = 126

Step 7:
 Repeat Step 6 for the other minimum data values (in other words, keep on
adding your class width to your minimum data values) until you have created the
number of classes you chose in Step 1. We chose 5 classes, so our 5 minimum
data values are:
118
126 (118 + 8)
134 (126 + 8)
142 (134 + 8)
150 (142 + 8)

Step 8:
 Write down the upper class limits. These are the highest values that can be in the
category, so in most cases you can subtract 1 from class width and add that to the
minimum data value. For example:
118 + (8 – 1) = 125
118 – 125
126 – 133
134 – 141
142 – 149
150 – 157

Step 9:
 Add a second column for the number of items in each class, and label the
columns with appropriate headings:
IQ          Number
118 – 125
126 – 133
134 – 141
142 – 149
150 – 157

Step 10:
 Count the number of items in each class, and put the total in the second column. The
list of IQ scores are: 118, 123, 124, 125, 127, 128, 129, 130, 130, 133, 136, 138,
141, 142, 149, 150, 154.
IQ          Number
118 – 125   4
126 – 133   6
134 – 141   3
142 – 149   2
150 – 157   2
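Steps 1 through 10 above can be sketched as a short Python function (frequency_table is an illustrative name). It applies the Step 8 rule (upper limit = lower limit + width − 1) consistently, which with width 8 gives the intervals 118–125 through 150–157:

```python
import math

def frequency_table(data, num_classes):
    """Build class intervals and frequencies following Steps 1-10 above."""
    width = math.ceil((max(data) - min(data)) / num_classes)  # Steps 2-4: class width
    lower = min(data)                                          # Step 5: first lower limit
    table = []
    for _ in range(num_classes):                               # Steps 6-7: next lower limits
        upper = lower + width - 1                              # Step 8: upper class limit
        count = sum(1 for x in data if lower <= x <= upper)    # Step 10: count items
        table.append(((lower, upper), count))
        lower += width
    return table

iq = [118, 123, 124, 125, 127, 128, 129, 130, 130, 133,
      136, 138, 141, 142, 149, 150, 154]
for (lo, hi), f in frequency_table(iq, 5):
    print(f"{lo} - {hi}: {f}")
```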
Example 2
A survey was taken in Lahore. In each of 20 homes, people were asked how many cars
were registered to their households. The results were recorded as follows:
1, 2, 1, 0, 3, 4, 0, 1, 1, 1, 2, 2, 3, 2, 3, 2, 1, 4, 0, 0
Use the following steps to present this data in a frequency distribution table.
1. Divide the results (x) into intervals, and then count the number of results in each
interval. In this case, the intervals would be the number of households with no
car (0), one car (1), two cars (2) and so forth.
2. Make a table with separate columns for the interval numbers (the number of cars
per household), the tallied results, and the frequency of results in each interval.
Label these columns Number of cars, Tally and Frequency.
3. Read the list of data from left to right and place a tally mark in the appropriate
row. For example, the first result is a 1, so place a tally mark in the row beside
where 1 appears in the interval column (Number of cars). The next result is a 2,
so place a tally mark in the row beside the 2, and so on. When you reach your
fifth tally mark, draw a tally line through the preceding four marks to make your
final frequency calculations easier to read.
4. Add up the number of tally marks in each row and record them in the final
column entitled Frequency.

Your frequency distribution table for this exercise should look like this:
Table 1. Frequency table for the number of cars registered in
each household

Number of cars (x)    Tally     Frequency (f)
0                     ||||      4
1                     ||||||    6
2                     |||||     5
3                     |||       3
4                     ||        2
By looking at this frequency distribution table quickly, we can see that out of
20 households surveyed, 4 households had no cars, 6 households had 1 car, etc.
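For ungrouped data such as the car counts, Python's collections.Counter performs the tallying in one step; this sketch reproduces the frequency column of Table 1:

```python
from collections import Counter

# Cars registered per household in the 20-home survey above
cars = [1, 2, 1, 0, 3, 4, 0, 1, 1, 1, 2, 2, 3, 2, 3, 2, 1, 4, 0, 0]
freq = Counter(cars)
for x in sorted(freq):
    print(f"{x} cars: {freq[x]} households")
```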
Relative frequency and percentage frequency
An analyst studying these data might want to know not only the frequency of each
observation, but also what proportion of the observations falls into each category or class
interval.
This relative frequency of a particular observation or class interval is found by dividing
the frequency (f) by the number of observations (n): that is, (f ÷ n). Thus:
Relative frequency = frequency ÷ number of observations
The percentage frequency is found by multiplying each relative frequency value by 100.
Thus:

Percentage frequency = relative frequency × 100 = (f ÷ n) × 100
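Continuing the car-survey sketch, the relative and percentage frequencies follow directly from the definitions f ÷ n and (f ÷ n) × 100:

```python
from collections import Counter

cars = [1, 2, 1, 0, 3, 4, 0, 1, 1, 1, 2, 2, 3, 2, 3, 2, 1, 4, 0, 0]
n = len(cars)
freq = Counter(cars)
relative = {x: f / n for x, f in freq.items()}           # f / n
percentage = {x: f * 100 / n for x, f in freq.items()}   # (f / n) * 100
print(percentage[1])  # 6 of the 20 households had one car -> 30.0
```

The relative frequencies of all categories sum to 1, and the percentage frequencies to 100, which is a quick consistency check on the table.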

Interpreting Test Scores by Graphic Displays of Distribution

The data from a frequency table can be displayed graphically. A graph can provide a
visual display of the distributions, which gives us another view of the summarized data.
For example, the graphic representation of the relationship between two different test
scores through the use of scatter plots. We learned that we could describe in general
terms the direction and strength of the relationship between scores by visually examining
the scores as they were arranged in a graph. Some other examples of these types of
graphs include histograms and frequency polygons.
A histogram is a bar graph of scores from a frequency table. The horizontal x-axis
represents the scores on the test, and the vertical y-axis represents the frequencies. The
frequencies are plotted as bars.

Histogram of Mid-Term Language Arts Exam

A frequency polygon is a line graph representation of a set of scores from a frequency


table. The horizontal x-axis is represented by the scores on the scale and the vertical y-
axis is represented by the frequencies.
Frequency Polygon of Mid-Term Language Arts Exam

A frequency polygon could also be used to compare two or more sets of data by
representing each set of scores as a line graph with a different color or pattern. For
example, you might be interested in looking at your students’ scores by gender, or
comparing students’ performance on two tests (see Figure 9.4).

Frequency Polygon of Midterm by Gender


Frequency polygons are a graphical device for understanding the shapes of distributions.
They serve the same purpose as histograms, but are especially helpful in comparing sets
of data. Frequency polygons are also a good choice for displaying cumulative frequency
distributions.

To construct a frequency polygon, draw the X-axis to cover the class intervals and the
Y-axis to indicate the frequency of each class. Place a point in the middle of each class
interval at the height corresponding to its frequency, and finally connect the points. You
should include one class interval below the lowest value in your data and one above the
highest value; the graph will then touch the X-axis on both sides.
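As a sketch of these drawing rules, the function below (polygon_points is an illustrative name) computes the points of a frequency polygon: class midpoints against frequencies, padded with an empty class on each side so the line touches the X-axis. The intervals reused here are the IQ classes from the earlier frequency-table example, assumed for illustration.

```python
def polygon_points(intervals, freqs):
    """Return x (class midpoints) and y (frequencies) for a frequency
    polygon, padded with one empty class on each side."""
    width = intervals[0][1] - intervals[0][0] + 1       # class width
    mids = [(lo + hi) / 2 for lo, hi in intervals]      # class midpoints
    xs = [mids[0] - width] + mids + [mids[-1] + width]  # pad one class each side
    ys = [0] + list(freqs) + [0]
    return xs, ys

intervals = [(118, 125), (126, 133), (134, 141), (142, 149), (150, 157)]
freqs = [4, 6, 3, 2, 2]
xs, ys = polygon_points(intervals, freqs)
print(list(zip(xs, ys)))
```

The (x, y) pairs can be passed directly to any line-plotting routine; summing ys cumulatively instead would give the points of a cumulative frequency polygon.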
A frequency polygon for 642 psychology test scores is shown in Figure 1. The first label
on the X-axis is 35. This represents an interval extending from 29.5 to 39.5. Since the
lowest test score is 46, this interval has a frequency of 0. The point labeled 45 represents
the interval from 39.5 to 49.5. There are three scores in this interval. There are 150 scores
in the interval that surrounds 85.
You can easily discern the shape of the distribution from Figure 1. Most of the scores are
between 65 and 115. It is clear that the distribution is not symmetric inasmuch as good
scores (to the right) trail off more gradually than poor scores (to the left). In the
terminology of Chapter 3 (where we will study shapes of distributions more
systematically), the distribution is skewed.

Figure 1: Frequency polygon for the psychology test scores.

A cumulative frequency polygon for the same test scores is shown in Figure 2. The graph
is the same as before except that the Y value for each point is the number of students in
the corresponding class interval plus all numbers in lower intervals. For example, there
are no scores in the interval labeled "35," three in the interval "45,"and 10 in the interval
"55."Therefore the Y value corresponding to "55" is 13. Since 642 students took the test,
the cumulative frequency for the last interval is 642.
Figure 2: Cumulative frequency polygon for the psychology test scores.

Frequency polygons are useful for comparing distributions. This is achieved by


overlaying the frequency polygons drawn for different data sets. Figure 3 provides an
example. The data come from a task in which the goal is to move a computer mouse to a
target on the screen as fast as possible. On 20 of the trials, the target was a small
rectangle; on the other 20, the target was a large rectangle. Time to reach the target was
recorded on each trial. The two distributions (one for each target) are plotted together
in Figure 3. The figure shows that although there is some overlap in times, it generally
took longer to move the mouse to the small target than to the large one.

Figure 3: Overlaid frequency polygons.


It is also possible to plot two cumulative frequency distributions in the same graph. This
is illustrated in Figure 4 using the same data from the mouse task. The difference in
distributions for the two targets is again evident.

Figure 4: Overlaid cumulative frequency polygons.

The raw scores for the 10 pt. quiz are:


10 9 8 8 7 7 6 6 5 4 2 10 9 8 8 7 6 6 5 5 3 10 9 8 7 7 6 6 5 4 3
Draw a frequency graph, bar graph, frequency polygon, and frequency curve.

Solution
Self Assessment Questions
1. The control group scored 47.26 on the pretest. Does this score represent nominal,
ordinal, or interval scale data?
2. The control group's score of 47.26 on the pretest put it at the 26th percentile.
Does this percentile score represent nominal, ordinal, or interval scale data?
3. The control group had a standard deviation of 7.78 on the pretest. Does this
standard deviation represent nominal, ordinal, or interval scale data?
4. Construct a frequency distribution with suitable class interval size of marks obtained
by 50 students of a class are given below:
23, 50, 38, 42, 63, 75, 12, 33, 26, 39, 35, 47, 43, 52, 56, 59, 64, 77, 15, 21, 51,
54, 72, 68, 36, 65, 52, 60, 27, 34, 47, 48, 55, 58, 59, 62, 51, 48, 50, 41, 57, 65,
54, 43, 56, 44, 30, 46, 67, 53
5. The Lakers scored the following numbers of goals in their last twenty matches: 3, 0,
1, 5, 4, 3, 2, 6, 4, 2, 3, 3, 0, 7, 1, 1, 2, 3, 4, 3
6. Which number had the highest frequency?
7. Which letter occurs the most frequently in the following sentence?

THE SUN ALWAYS SETS IN THE WEST.


8. Pi is a special number that is used to find the area of a circle. The following
number gives the first 100 digits of the number pi:
141 592 653 589 793 238 462 643 383 279 502 884 197 169 399 375 105 820
974 944 592 307 816 406 286 208 998 628 034 825 342 117 067
Which of the digits 0 to 9 occurs most frequently in this number?
9. Identify by correctly labeling the following graphic illustrations of results of a five
point quiz taken by ten students.
10. In each data set given, find the mean of the group
a) Times were recorded when learners played a game

   Time in seconds   36-45  46-55  56-65  66-75  76-85  86-95  96-105
   Frequency           5     11     15     26     19     13      6

b) The following data were collected from a group of learners

   Time in seconds   41-45  46-50  51-55  56-60  61-65  66-70  71-75  76-80
   Frequency           3      5      8     12     14      9      7      2
11. Following are the wages of 8 workers of a factory. Find the range and the
coefficient of range. Wages in (Rs) 14000, 14500, 15200, 13800, 14850, 14950,
15750, 14400.
12. The following distribution gives the numbers of houses and the number of
persons per house.
Number of Persons 1 2 3 4 5 6 7 8 9 10
Number of Houses 26 113 120 95 60 42 21 14 5 4
Calculate the range and coefficient of range.
UNIT 06

GRADING AND REPORTING RESULTS

Functions of Test Scores and Progress Reports

The task of grading and reporting students’ progress cannot be separated from the
procedures adopted in assessing students’ learning. If instructional objectives are well
defined in terms of behavioural or performance terms and relevant tests and other
assessment procedures are properly used, grading and reporting become a matter of
summarizing the results and presenting them in an understandable form. Reporting students'
progress is difficult, especially when the data are represented as a single letter grade or
numerical value (Linn & Gronlund, 2000).
Assigning grades and making referrals are decisions that require information about
individual students. In contrast, curricular and instructional decisions require information
about groups of students, quite often about entire classrooms or schools (Linn &
Gronlund, 2000).
There are three primary purposes of grading students. First, grades are the primary
currency for exchange of many of the opportunities and rewards our society has to offer.
Grades can be exchanged for such diverse entities as adult approval, public recognition,
college and university admission etc. To deprive students of grades means to deprive
them of rewards and opportunities. Second, teachers are accustomed to assessing their
students' learning through grades, and if teachers do not award grades, students may not
know their learning progress well. Third, grades motivate students: grades can serve as
incentives, and for many students incentives serve a motivating function.
The different functions of grading and reporting systems are given as under:

1. Instructional uses
The focus of grading and reporting should be the student's improvement in learning. This is
most likely to occur when the report: a) clarifies the instructional objectives; b) indicates
the student's strengths and weaknesses in learning; c) provides information concerning
the student's personal and social development; and d) contributes to the student's motivation.
The improvement of student learning is probably best achieved by the day-to-day
assessments of learning and the feedback from tests and other assessment procedures. A
portfolio of work developed during the academic year can be displayed to indicate
student’s strengths and weaknesses periodically.
Periodic progress reports can contribute to student motivation by providing short-term
goals and knowledge of results; both are essential features of effective learning. Well-
designed progress reports can also help in evaluating instructional procedures by
identifying areas that need revision. When the reports of the majority of students indicate
poor progress, it may indicate a need to modify the instructional objectives.
2. Feedback to students
Grading and reporting test results to students has been an ongoing practice in all the
educational institutions of the world. The mechanism or strategy may differ from country
to country or from institution to institution, but every institution observes this practice
in some way. Reporting test scores to students has a number of advantages. As the
students move up through the grades, the usefulness of the test scores for personal
academic planning and self-assessment increases. For most students, the scores provide
feedback about how much they know and how effective their efforts to learn have been.
They can learn their strengths and the areas that need special attention. Such feedback is
essential if students are expected to be partners in managing their own instructional time
and effort. These results help them to make good decisions for their future professional
development.
Teachers use a variety of strategies to help students become independent learners who are
able to take an increasing responsibility for their own school progress. Self-assessment is
a significant aspect of self-guided learning, and the reporting of test results can be an
integral part of the procedures teachers use to promote self-assessment. Test results help
students to identify areas that need improvement, areas in which progress has been strong,
and areas in which continued strong effort will help maintain high levels of achievement.
Test results can be used with information from teacher’s assessments to help students set
their own instructional goals, decide how they will allocate their time, and determine
priorities for improving skills such as reading, writing, speaking, and problem solving.
When students are given their own test results, they can learn about self-assessment while
doing actual self-assessment. (Iowa Testing Programs, 2011).
Grading and reporting results also provide students an opportunity for developing an
awareness of how they are growing in various skill areas. Self-assessment begins with
self-monitoring, a skill most children have begun developing well before coming to
kindergarten.

3. Administrative and guidance uses


Grades and progress reports serve a number of administrative functions. For example,
they are used for determining promotion and graduation, awarding honours, determining
sports eligibility of students, and reporting to other institutions and employers. For most
administrative purposes, a single letter-grade is typically required, although, of course, a
single letter-grade cannot fully represent a student's achievement.
Guidance and Counseling officers use grades and reports on student’s achievement, along
with other information, to help students make realistic educational and vocational plans.
Reports that include ratings on personal and social characteristics are also useful in
helping students with adjustment problems.

4. Informing parents about their children’s performance


Parents are often overwhelmed by the grades and test reports they receive from school
personnel. In order to establish a true partnership between parents and teachers, it is
essential that information about student progress be communicated clearly, respectfully
and accurately. Test results should be provided to parents using; a) simple, clear language
free from educational and test jargon, and b) explanation of the purpose of the tests used
(Canter, 1998).
Most of the time parents are either ignored or only minimally involved in being made aware of the
progress of their children. To strengthen the connection between home and school, parents
need to receive comprehensive information about their children's achievement. If parents
do not understand the tests given to their children, the scores, and how the results are
used to make decisions about their children, they are effectively prevented from helping their
children learn and from taking part in those decisions.
According to Kearney (1983), the lack of information provided to consumers about test
data has sweeping and negative consequences. He states;
Individual student needs are not met, parents are not kept fully informed of student
progress, curricular needs are not discovered and corrected, and the results are not
reported to various audiences that need to receive this information and need to know what
is being done with the information.
In some countries, there are prescribed policies for grading and reporting test results to
the parents. For example, Michigan Educational Assessment Policy (MEAP) is revised
periodically in view of parents’ suggestions and feedback. MEAP consists of criterion-
referenced tests, primarily in mathematics and reading, that are administered each year to
all fourth, seventh and tenth graders. MEAP recommends that policy makers at state and
local levels must develop strong linkages to create, implement and monitor effective
reporting practices. (Barber, Paris, Evans, & Gadsden, 1992).
Without any doubt, it is more effective to talk to parents face to face about their children's scores
than to send a score report home for them to interpret on their own. For a variety of
reasons, a parent-teacher or parent-student-teacher conference offers an excellent
occasion for teachers to provide and interpret those results to the parents.
1. Teachers tend to be more knowledgeable than parents about tests and the types of
scores being interpreted.
2. Teachers can make numerous observations of their student’s work and
consequently substantiate the results. In-consistencies between test scores and
classroom performance can be noted and discussed.
3. Teachers possess work samples that can be used to illustrate the type of
classroom work the student has done. Portfolios can be used to illustrate
strengths and to explain where improvements are needed.
4. Teachers may be aware of special circumstances that may have influenced the
scores, either positively or negatively, to misrepresent the students’ achievement
level.
5. Parents have a chance to ask questions about points of misunderstanding or about
how they can work with the student and the teacher in addressing apparent
weaknesses and in capitalizing on strengths. Wherever possible, test scores should
be given to the parents at the school. (Iowa Testing Program, 2011).
Under the Act of 1998, schools are required to regularly evaluate students and
periodically report to parents on the results of the evaluation, but in specific terms, the
NCCA guidelines make a recommendation that schools should report twice annually to
parents – one towards the end of the 1st term or beginning of the 2nd term, and the other towards
the end of the school year.
Under existing data protection legislation, parents have a statutory right to obtain scores
which their children have obtained in standardized tests. The NCCA has developed a set of
report card templates to be used by schools in communicating with parents, to be taken in
conjunction with Circular 0138, which was issued by the Department of Education in
2006.
In a case study conducted in the US context (www.uscharterschools.org) it was found
that ‘the school should be a source for parents, it should not dictate to parents what their
role should be’. In other words, the school should respect all parents and appreciate the
experiences and individual strengths they offer their children.

Types of Test Reporting and Marking


Usually two types of tests are used in schools, criterion-referenced and norm-referenced.
Criterion-referenced tests are used to measure student mastery of instructional objectives
or curriculum rather than to compare one student’s performance with another or to rank
students. They are often used as benchmarks to identify areas of strengths and/or
weaknesses in a given curriculum. Norm-referenced tests compare an individual’s
performance to that of his/her classmates, thus emphasizing relative rather than absolute
performance. Scores on norm-referenced tests indicate a student's relative position
within that group. Typical scores used with norm-referenced tests include raw scores,
grade norms, percentiles, stanines, and standard scores.

1. Raw scores
The raw score is simply the number of points received on a test when the test has been
scored according to the directions. For example, if a student responds to 65 items
correctly on an objective test in which each correct item counts one point, the raw score
will be 65.
Although a raw score is a numerical summary of a student's test performance, it is not very
meaningful without further information. In the above example, what does a
raw score of 65 really tell us? How many items were in the test? What kinds of problems
were asked? How difficult were the items?
2. Grade norms
Grade norms are widely used with standardized achievement tests, especially at
elementary level. The grade equivalent that corresponds to a particular raw score
identifies the grade level at which the typical student obtains that raw score. Grade
equivalents are based on the performance of students in the norm group in each of two or
more grades.

3. Percentile ranking
A percentile is a score that indicates the rank of the score compared to others (same
grade/age) using a hypothetical group of 100 students. In other words, a percentile rank
(or percentile score) indicates a student’s relative position in the group in terms of
percentage of students.
Percentile rank is interpreted as the percentage of individuals receiving scores equal to or
lower than a given score. A percentile rank of 25 indicates that the student's test performance
equals or exceeds that of 25 out of every 100 students on the same measure.
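The idea above can be sketched in a few lines of code (a hypothetical helper written for illustration, not part of any standard testing package):

```python
def percentile_rank(score, group_scores):
    """Percentage of scores in the group that are equal to or lower
    than the given score (the definition of percentile rank used above)."""
    at_or_below = sum(1 for s in group_scores if s <= score)
    return round(100 * at_or_below / len(group_scores))

# A hypothetical group of 100 students scoring 1..100:
group = list(range(1, 101))
print(percentile_rank(25, group))  # a score of 25 equals or exceeds 25 of the 100 scores
```

Note that a percentile rank describes position within the group, not the percentage of items answered correctly.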

4. Standard scores
A standard score is also derived from the raw scores, using the norm information
gathered when the test was developed. Instead of indicating a student's rank compared to
others, standard scores indicate how far above or below the average (Mean) an individual
score falls, using a common scale, such as one with an average of 100. Basically standard
scores express test performance in terms of standard deviation (SD) from the Mean.
Standard scores can be used to compare individuals of different grades or age groups
because all are converted into the same numerical scale. There are various forms of
standard scores such as z-score, T-score, and stanines.
Z-score expresses test performance simply and directly as the number of SD units a raw
score is above or below the Mean. A z-score is always negative when the raw score is
smaller than the Mean. Symbolically: z-score = (X − M)/SD.
T-score refers to any set of normally distributed standard cores that has a Mean of 50 and
SD of 10. Symbolically it can be represented as: T-score = 50+10(z).
Stanines are the simplest form of normalized standard scores that illustrate the process of
normalization. Stanines are single digit scores ranging from 1 to 9. These are groups of
percentile ranks with the entire group of scores divided into nine parts, with the largest
number of individuals falling in the middle stanines, and fewer students falling at the
extremes (Linn & Gronlund, 2000).
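The formulas above can be sketched as simple helper functions. The z- and T-score functions follow the formulas given in the text; the stanine helper uses a common approximation, round(2z + 5) clipped to 1–9, which is an assumption on my part rather than something stated above:

```python
def z_score(raw, mean, sd):
    """Number of SD units a raw score lies above (+) or below (-) the Mean."""
    return (raw - mean) / sd

def t_score(raw, mean, sd):
    """Normalized scale with Mean 50 and SD 10: T = 50 + 10z."""
    return 50 + 10 * z_score(raw, mean, sd)

def stanine(raw, mean, sd):
    """Single-digit score 1..9; common approximation round(2z + 5),
    clipped to the 1-9 range (an assumed convention, not from the text)."""
    return max(1, min(9, round(2 * z_score(raw, mean, sd) + 5)))

# A raw score one SD above the Mean:
print(z_score(60, 50, 10))   # 1.0
print(t_score(60, 50, 10))   # 60.0
print(stanine(60, 50, 10))   # 7
```

A raw score below the Mean yields a negative z-score, as noted above, while the T-score shifts the scale so that typical values stay positive.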

5. Norm reference test and traditional letter-grade system


It is the easiest and most popular grading and reporting system. The traditional
system is generally based on grades A to F. This rating is generally reflected as: Grade A
(Excellent), B (Very Good), C (Good), D (Satisfactory/Average), E (Unsatisfactory/
Below Average), and F (Fail).
This system, however, does not truly assess a student's progress in different learning domains. The first
shortcoming is that results reported in this system are difficult to interpret. Second, a
student’s performance is linked with achievement, effort, work habits, and good
behaviour; traditional letter-grade system is unable to assess all these domains of a
student. Third, the proportion of students assigned each letter grade generally varies
from teacher to teacher. Fourth, it does not indicate patterns of strengths and weaknesses
in the students (Linn &amp; Gronlund, 2000). In spite of these shortcomings, this system is
popular in schools, colleges and universities.

6. Criterion reference test and the system of pass-fail


It is a popular way of reporting students’ progress, particularly at elementary level. In the
context of Pakistan, as the majority of parents are illiterate or barely literate, they are
concerned only with the 'pass or fail' status of their children's performance in schools. This
system is mostly used for courses taught under a pure mastery learning approach i.e.
criterion-referenced testing.
This system has also many shortcomings. First, as students are declared just pass or fail
(successful or unsuccessful) so many students do not work hard and hence their actual
learning remains unsatisfactory or below desired level. Second, this two-category system
provides less information to the teacher, student and parents than the traditional letter-
grade (A, B, C, D) system. Third, it provides no indication of the level of learning.

7. Checklist of Objectives
To provide more informative progress reports, some schools have replaced or
supplemented the traditional grading system with a list of objectives to be checked or
rated. This system is more popular at elementary school level. The major advantage of
this system is that it provides a detailed analysis of the students’ strengths and
weaknesses. For example, assessment of reading comprehension can include the
following objectives.
 Reads with understanding
 Works out meaning and use of new words
 Reads well to others
 Reads independently for pleasure (Linn & Gronlund, 2000).

8. Rating scales
In many schools students’ progress is reported on a rating scale, usually 1 to 10,
instead of letter grades; 1 indicates the poorest performance while 10 indicates
excellent or extraordinary performance. In the true sense, each rating level
corresponds to a specific level of learning achievement. Such rating scales are also used
by the evaluation of students for admissions into different programmes at university
level. Some other rating scales can also be seen across the world.
In rating scales, we generally assess students’ abilities in the context of ‘how much’,
‘how often’, ‘how good’ etc. (Anderson, 2003). The continuum may be qualitative, such
as ‘how well a student behaves’, or quantitative, such as ‘how many marks a
student got in a test’. Developing rating scales has become a common practice nowadays,
but many teachers still do not possess the skill of developing an appropriate rating
scale for their particular learning situations.

9. Letters to parents/guardians
Some schools keep parents informed about the progress of their children by writing letters.
Writing letters to parents is usually done only by the few teachers who take a special interest
in their students, as it is a time-consuming activity. At the same time, some good
teachers avoid writing formal letters because they feel that many aspects cannot be clearly
conveyed, and some parents also do not feel comfortable receiving such letters.
Linn and Gronlund (2000) state that although letters to parents might provide a good
supplement to other types of reports, their usefulness as the sole method of reporting
progress is limited by several of the following factors.
 Comprehensive and thoughtful written reports require excessive amount of time and
energy.
 Descriptions of students learning may be misinterpreted by the parents.
 They fail to provide systematic and organized information

10.Portfolio
The teachers of some good schools prepare a complete portfolio for each of their students. A portfolio
is actually a cumulative record of a student which reflects his/her strengths and weaknesses
in different subjects over a period of time. It indicates what strategies were used by
the teacher to overcome the learning difficulties of the students. It also shows students’
progress periodically which indicates his/her trend of improvement. Developing portfolio
is really a hard task for the teacher, as he/she has to keep all record of students such as
teacher’s lesson plans, tests, students’ best pieces of works, and their assessments records
in an academic year.
An effective portfolio is more than simply a file into which student work products are
placed. It is a purposefully selected collection of work that often contains commentary on
the entries by both students and teachers.
No doubt, portfolio is a good tool for student’s assessment, but it has three limitations.
First, it is a time consuming process. Second, teacher must possess the skill of developing
portfolio which is most of the time lacking. Third, it is ideal for small class size and in
Pakistani context, particularly at elementary level, class size is usually large and hence
the teacher cannot maintain portfolio of a large class.

11.Report Cards
There is a practice of report cards in many good educational institutions in many
countries including Pakistan. Many parents desire to see the report cards or progress
reports in written form issued by the schools. A good report card explains the
achievement of students in terms of scores or marks, conduct and behaviour, and participation
in class activities. Well-written comments can offer parents and students suggestions
as to how to make improvements in specific academic or behavioural areas. These
provide teachers opportunities to be reflective about the academic and behavioural
progress of their students. Such reflections may result in teachers gaining a deeper
understanding of each student’s strengths and needs for improvement. Bruadli (1998) has
divided words and phrases into three categories about what to include and exclude from
written comments on report cards.
A. Words and phrases that promote positive view of the student
1. Gets along well with people
2. Has a good grasp of …
3. Has improved tremendously
4. Is a real joy to have in class
5. Is well respected by his classmates
6. Works very hard

B. Words and phrases to convey the students need help


1. Could benefit from …
2. Finds it difficult at time to …
3. Has trouble with …
4. Requires help with …
5. Needs reinforcement in …

C. Words and phrases to avoid or use with extreme caution


1. Always
2. Never
3. Can’t (or unable to)
4. Won’t
Report cards usually carry two shortcomings: a) regardless of how grades are assigned,
students and parents tend to use them normatively; and b) many students and parents (and
some teachers) believe that grades are far more precise than they actually are. In most grading
schemes, an ‘F’ denotes fail or unsatisfactory. Hall (1990) and Wiggins (1994) state
that not only are grades imprecise, they are vague in their meaning. They do not provide
parents or students with a thorough understanding of what has been learned or
accomplished.

12.Parent-teacher conferences
Parent-teacher conferences are mostly used in elementary schools. In such conferences
portfolios are discussed. This two-way flow of information provides a great deal of detail
to the parents. But one of the limitations is that many parents don’t come to
attend the conferences. It is also a time consuming activity and also needs sufficient
funds to hold conferences.
Literature also highlights the ‘parent-student-teacher conference’ instead of the ‘parent-teacher
conference’, as the student is also a key participant in the process and the one
directly benefitted. In many developed countries, it has become the most important way
of informing parents about their children’s work in school. Parent-teacher conferences are
productive when these are carefully planned and the teachers are skilled and committed.
The parent-teacher conference is an extremely useful tool, but it shares three important
limitations with informal letter. First, it requires a substantial amount of time and skills.
Second, it does not provide a systematic record of student’s progress. Third, some parents
are unwilling to attend conferences, and they cannot be compelled to do so.
Parent-student-teacher conferences are frequently convened in many states of the USA
and some other advanced countries. In the US, this has become a striking feature of
Charter Schools. Some schools rely more on parent conferences than written reports for
conveying the richness of how students are doing or performing. In such cases, a school
sometimes provides a narrative account of student’s accomplishments and status to
augment the parent conferences. (www.uscharterschools.org).

13.Other ways of reporting students results to parents


There are also many other ways to enhance communication between teacher and parent,
e.g. phone calls. Teachers can telephone parents to inform them about the child’s
curriculum, learning progress, and any special achievements, to share anecdotes, and to
invite parents to open meetings, conferences, and school functions.

Calculating CGPA and Assigning Letter Grades


CGPA stands for Cumulative Grade Point Average. It reflects the grade point average of
all subjects/courses, representing a student’s performance in a composite way. To calculate
CGPA, we should have following information.
 Marks in each subject/course
 Grade point average in each subject/course
 Total credit hours (by adding credit hours of each subject/course)
Calculating CGPA is simple: the total grade points earned are divided by the total credit
hours. For example, if a student of an MA Education programme has studied 20 courses, each
of 3 credit hours, the total credit hours will be 60. Since all courses carry equal credit hours,
the CGPA is simply the average of the GPA values across all courses. In the following table
the GPA calculated for a student of an MA Education program is given as an example.
Sr.#  Course Title                    Credits  Marks  Grade  GPA
1.    Philosophy of Education            3       85     A    4.0
2.    Curriculum and Instruction         3       78     B+   3.3
3.    Edul. Admin. & Supervision         3       72     B    3.0
4.    Computer in Education              3       77     B+   3.3
5.    Educational Technology             3       77     B+   3.3
6.    Instructional Technology           3       71     B    3.0
7.    Teacher Edu. in Islamic Pers.      3       79     B+   3.3
8.    History of TE in Pakistan          3       76     B+   3.3
9.    Master Research Project            3       81     A-   3.7
10.   Islamic System of Education        3       85     A    4.0
11.   Research Methods in Edu.           3       86     A    4.0
12.   Edul. Assessment & Evalu.          3       75     B+   3.3
13.   Comparative Education              3       82     A-   3.7
14.   Methods of Teaching Islamiat       3       85     A    4.0
15.   Teaching of Urdu                   3       80     A-   3.7
16.   Islamic Ideology & Ideology        3       81     A-   3.7
17.   Student Teaching & Obs. I          3       80     A-   3.7
18.   Student Teaching & Obs. II         3       88     A    4.0
19.   Education in Pakistan              3       88     A    4.0
20.   Teaching of Social Studies         3       81     A-   3.7
      Total                             60

The average of the GPA values gives the CGPA:

CGPA = (sum of GPA) / (total number of courses)

Assigning letter grades
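As a small illustration (hypothetical code, using the GPA column from the table above), the CGPA computation can be sketched as:

```python
def cgpa(gpas):
    """CGPA when all courses carry equal credit hours:
    the simple average of the per-course GPA values."""
    return sum(gpas) / len(gpas)

def cgpa_weighted(gpas, credits):
    """General form when credit hours differ:
    total grade points divided by total credit hours."""
    return sum(g * c for g, c in zip(gpas, credits)) / sum(credits)

# GPA values for the 20 courses in the table above:
gpas = [4.0, 3.3, 3.0, 3.3, 3.3, 3.0, 3.3, 3.3, 3.7, 4.0,
        4.0, 3.3, 3.7, 4.0, 3.7, 3.7, 3.7, 4.0, 4.0, 3.7]
print(round(cgpa(gpas), 2))  # 3.6
```

Because every course in the example carries 3 credit hours, the credit-weighted form gives the same result as the simple average.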
The letter-grade system is the most popular in the world, including in Pakistan. Most teachers face
problems while assigning grades. There are four core problems or issues in this regard:
1) what should be included in a letter grade? 2) how should achievement data be combined
in assigning letter grades? 3) what frame of reference should be used in grading? and 4)
how should the distribution of letter grades be determined?

1. Determining what to include in a grade


Letter grades are likely to be most meaningful and useful when they represent
achievement only. If they are combined with other factors or aspects, such as effort,
amount of work completed, personal conduct, and so on, their interpretation becomes
hopelessly confused. For example, a letter grade C may represent average achievement
with extraordinary effort and excellent conduct and behaviour, or vice versa.
If letter grades are to be valid indicators of achievement, they must be based on valid
measures of achievement. This involves defining objectives as intended learning
outcomes and developing or selecting tests and assessments which can measure these
learning outcomes.

2. Combining data in assigning grades


One of the key concerns while assigning grades is to be clear about which aspects of a student
are to be assessed and what weightage each element will carry. For
example, if we decide that 35 percent weightage is to be given to the mid-term assessment,
40 percent to the final-term test or assessment, and 25 percent to assignments, presentations,
classroom participation, and conduct and behaviour, we have to combine all the elements by
assigning the appropriate weight to each, and then use the resulting composite scores as a
basis for grading.
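The 35/40/25 weighting described above can be sketched as follows (the scores and helper function are hypothetical, for illustration only):

```python
def composite_score(scores, weights):
    """Weighted composite of component scores; weights should sum to 1.0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[name] * w for name, w in weights.items())

weights = {"mid_term": 0.35, "final_term": 0.40, "other": 0.25}
scores = {"mid_term": 80, "final_term": 70, "other": 90}  # hypothetical percentages
print(round(composite_score(scores, weights), 2))  # 0.35*80 + 0.40*70 + 0.25*90 = 78.5
```

The composite can then be mapped to a letter grade using whatever frame of reference the school has adopted.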

3. Selecting the proper frame of reference for grading


Letter grades are typically assigned on the basis of one of the following frames of
reference.
a) Performance in relation to other group members (relative grading)
b) Performance in relation to specified standards (absolute grading)
c) Performance in relation to learning ability (amount of improvement)
Assigning grades on relative basis involves comparing a student’s performance with that
of a reference group, mostly class fellows. In this system, the grade is determined by the
student’s relative position or ranking in the total group. Although relative grading has a
disadvantage of a shifting frame of reference (i.e. grades depend upon the group’s
ability), it is still widely used in schools, as most of the time our system of testing is
‘norm-referenced’.
Assigning grades on an absolute basis involves comparing a student’s performance to
specified standards set by the teacher. This is what we call as ‘criterion-referenced’
testing. If all students show a low level of mastery consistent with the established
performance standard, all will receive low grades.
Grading student performance in relation to learning ability is inconsistent with a standards-
based system of evaluating and reporting student performance, and improvement over a
short time span is difficult to judge. The resulting lack of reliability in judging achievement in relation to
ability, and in judging degree of improvement, produces grades of low dependability.
Therefore such grades are used only as supplements to other grading systems.

4. Determining the distribution of grades


The assigning of relative grades is essentially a matter of ranking the student in order of
overall achievement and assigning letter grades on the basis of each student’s rank in the
group. This ranking might be limited to a single classroom group or might be based on
the combined distribution of several classroom groups taking the same course.
If grading on the curve is to be done, the most sensible approach in determining the
distribution of letter grades in a school is to have the school staff set general guidelines
for introductory and advanced courses. All staff members must understand the basis for
assigning grades, and this basis must be clearly communicated to users of the grades. If
the objectives of a course are clearly stated and the standards for mastery
appropriately set, the letter grades in an absolute system may be defined as the degree to
which the objectives have been attained, as follows:
A = Outstanding (90-100%)
B = Very Good (80-89%)
C = Satisfactory (70-79%)
D = Very Weak (60-69%)
F = Unsatisfactory (less than 60%)
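Under these cut-offs, the mapping from a percentage to a letter grade in an absolute system can be sketched as (a hypothetical helper mirroring the bands listed above):

```python
def letter_grade(percentage):
    """Absolute grading: map a percentage score to a letter grade
    using the bands listed above."""
    if percentage >= 90:
        return "A"   # Outstanding
    if percentage >= 80:
        return "B"   # Very Good
    if percentage >= 70:
        return "C"   # Satisfactory
    if percentage >= 60:
        return "D"   # Very Weak
    return "F"       # Unsatisfactory

print(letter_grade(85))  # B
print(letter_grade(59))  # F
```

In a relative (norm-referenced) system the same letter would instead be assigned from the student's rank in the group, not from fixed percentage bands.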

Conducting Parent-Teacher Conferences


The first conference is usually arranged at the beginning of the school year to allow
parents and teachers to get acquainted and to prepare a plan for the coming months.
Teachers usually receive some training to plan and conduct such conferences. The following
steps may be observed for holding effective parent-teacher conferences.
1. Prepare for the conference
 Review the goals and objectives
 Organize the information to present
 If portfolios are to be discussed, ensure they are well arranged
 Start and keep positive focus
 Announce the final date and time as per convenience of the parents and
children
 Consider socio-cultural barriers of students / parents
 Check with other staff who work with your advisee
 Develop a conference packet including the student’s goals, samples of
work, and reports or notes from other staff.

2. Rehearse the conference with students by role-playing


 Students present their goals, learning activities, samples of work
 Students ask for comments and suggestions from parents

3. Conduct conference with student, parent, and advisor. Advisee takes the lead to the
greatest possible extent
 Have a comfortable setting of chairs, tables etc.
 Announce a workable timetable for the conferences
 Review goals set earlier
 Review progress towards goals
 Review progress with samples of work from learning activities
 Present students strong points first
 Review attendance and handling of responsibilities at school and
home
 Modify goals for balance of the year as necessary
 Determine other learning activities to accomplish goals
 Describe upcoming events and activities
 Discuss how the home can contribute to learning
 Parents should be encouraged to share their thoughts on students’
progress
 Ask parents and students for questions, new ideas

4. Do’s of parent-teacher conferences


 Be friendly
 Be honest
 Be positive in approach
 Be willing to listen and explain
 Be willing to accept parents’ feelings
 Be careful about giving advice
 Be professional and maintain a positive attitude
 Begin with student’s strengths
 Review student’s cumulative record prior to conference
 Assemble samples of student’s work
 List questions to ask parents and anticipate parents’ questions
 Conclude the conference with an overall summary
 Keep a written record of the conference, listing problems and
suggestions, with a copy for the parents

5. Don’ts of the parent teacher conference


 Don’t argue
 Don’t get angry
 Don’t ask embarrassing questions
 Don’t talk about other students, parents and teachers
 Don’t bluff if you don’t know
 Don’t reject parents’ suggestions
 Don’t blame parents
 Don’t talk too much; be a good listener (www.udel.edu.)

Activities
Activity 1:
List three pros and cons of test scores.
Activity 2:
Give a self-explanatory example of each of the types of test scores.

Activity 3:
Write down the different purposes and functions of test scores in order of importance as
per your experience. Add more purposes as many as you can.

Activity 4:
Compare the modes of reporting test scores to parents by MEAP and NCCA. Also
conclude which is relatively more appropriate in the context of Pakistan as per your point
of view.

Activity 5:
In view of the strengths and shortcomings in above different grading and reporting
systems, how would you briefly comment on the following characteristics of a multiple
grading and reporting system for effective assessment of students’ learning?
a) Grading and reporting system should be guided by the functions to be served.
b) It should be developed cooperatively by parents, students, teachers, and other school
personnel.
c) It should be based on clear and specific instructional objectives.
d) It should be consistent with school standards.
e) It should be based on adequate assessment.
f) It should provide detailed information of student’s progress, particularly
diagnostic and practical aspects.
g) It should have the space of conducting parent-teacher conferences.

Activity 6:
Explain the differences between relative grading and absolute grading by giving an
example of each.
Activity 7:
Faiza Shaheen, a student of MA Education (Secondary), has earned the following marks,
grades and GPA in the 22 courses at the Institute of Education &amp; Research, University of
the Punjab. Calculate her CGPA. Note that the maximum value of GPA in each
course is 4.

Activity 8:
Write Do’s and Don’ts in order of priority as per your perception. You may add more
points or exclude what have been mentioned above.

SELF ASSESSMENT QUESTIONS


1. Describe the various types of reporting test scores by giving examples from our
country context.
2. In what way do parent-teacher conferences play a significant role in providing
feedback to parents about their children’s academic growth and development?
QUIZ
MODEL PAPER OF CLASSROOM ASSESSMENT

Part-I: MCQs:

Encircle the best/correct response against each of the following statements.


1. Comparing a student’s performance in a test in relation to his/her classmates is
referred to as:
a) Learning outcomes
b) Evaluation
c) Measurement
d) Norm-referenced assessment
e) Criterion-referenced assessment

2. The unprocessed, first-hand data obtained from a test are called:


a) Frequency
b) Numeric
c) Raw score
d) True score
e) Cleaned data

3. A student’s relative position in the group, in terms of the percentage of students falling below his/her score, is referred to as the:


a) Mean
b) Median
c) Mode
d) Standard deviation
e) Percentile

4. A z-score is always negative when:


a) Raw score is smaller than mean
b) Raw score is greater than mean
c) Raw score is equal to mean
d) T-score = 50
5. The simplest form of normalized standard score is:
a) Standard score
b) Z-score
c) True score
d) Stanines
e) T-score

6. Grading and reporting works better when:


a) Assessment procedures are rarely used
b) Assessment procedures are mostly used
c) Assessment procedures are properly used
d) Students perform better
e) Awards are given to students

7. Periodic assessment is almost synonymous with:


a) Evaluation
b) Measurement
c) Summative assessment
d) Formative assessment
e) Monthly assessment

8. A student’s best work is generally compiled by a teacher in the form of:


a) Cumulative record
b) Portfolio
c) Assessment report
d) Comments by the teacher
e) Written comments

9. Self-assessment begins with:


a) Excellent work
b) Any academic contribution
c) Self-monitoring
d) Skill development
e) Knowledge updating

10. Who said that ‘lack of information provided to consumers about test data has
negative and sweeping consequences’?
a) Hopkins & Stanley
b) Anderson
c) Linn & Gronlund
d) Barber et al.
e) Kearney

11. The Michigan Educational Assessment Program (MEAP) is revised on the basis of parents’ suggestions:


a) Quarterly
b) Biannually
c) Annually
d) Every three years
e) Periodically

12. The system used in our BISEs is based on:


a) Letter grade
b) Pass-fail
c) Checklist of objectives
d) Rating scales
e) Portfolio

13. Report cards are the contribution of:


a) Hopkins & Stanley
b) Hall
c) Wiggins
d) Anderson
e) Bruadli

14. The first stage in parent-teacher conferences is:


a) Start and keep positive focus
b) Planning
c) Implementing/conducting
d) Rehearsal
e) Role play

Part-II: Short Answer Questions


1. How do Z-scores and T-scores differ?
2. Write down two strengths and two shortcomings of test scores.
3. Explain briefly the function of ‘instructional uses’ of grading and reporting.
4. What type of grading system is employed in public sector elementary schools of
Pakistan?
5. How does the ‘pass-fail’ system fail to truly assess students’ performance?
6. What do you mean by a ‘checklist of objectives’ in the context of grading and
reporting?
7. Enlist the activities that a teacher can consider in developing a portfolio of a
student.
8. Report cards are a good means of reporting results to parents. Comment.
9. What is the importance of assigning letter grades in assessing students’ performance?
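As a quick reference for short-answer question 1 above: a z-score expresses a raw score in standard-deviation units from the mean (z = (X − M)/SD, giving a mean of 0 and SD of 1, so it can be negative), while a T-score rescales it to a mean of 50 and SD of 10 (T = 50 + 10z), avoiding negative values for typical scores. A small sketch with illustrative numbers:

```python
# Standard score transformations (test mean and SD below are illustrative):
#   z = (raw - mean) / sd    -> mean 0, SD 1, negative below the mean
#   T = 50 + 10 * z          -> mean 50, SD 10

def z_score(raw, mean, sd):
    return (raw - mean) / sd

def t_score(raw, mean, sd):
    return 50 + 10 * z_score(raw, mean, sd)

# A raw score of 40 on a test with mean 50 and SD 10:
print(z_score(40, 50, 10))  # -1.0 (negative: raw score is below the mean)
print(t_score(40, 50, 10))  # 40.0
```

A raw score exactly at the mean gives z = 0 and T = 50, which is why “T-score = 50” is a distractor in MCQ 4 of Part-I.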

Part-III: Essay-type Questions


1. Describe the various types of reporting test scores, giving examples from our
country’s context.
2. In what way do parent-teacher conferences play a significant role in providing
feedback to parents about their children’s academic growth and development?
3. What should be the essentials of a good progress report? Discuss in detail with
respect to the public school system in Pakistan.
Key to MCQs

Q. No.  Correct Response    Q. No.  Correct Response
1.      D                   2.      C
3.      E                   4.      A
5.      D                   6.      C
7.      D                   8.      B
9.      C                   10.     E
11.     D                   12.     B
13.     E                   14.     B

THE END
