
ASSESSMENT

FOR LEARNING
(As per the new two year B.Ed.
Curriculum)

PRASANTH VENPAKAL
M.Com., M.Ed. , NET
Venpakal, Neyyattinkara,
Thiruvananthapuram, 695123
[email protected]
PREFACE
The course is designed keeping in mind the
critical role of assessment in enhancing learning. In
contrast to the traditional notion of assessment as an
act to be performed at the end of teaching, using a
paper-pencil test, the course situates assessment
within a constructivist paradigm. The course
critiques the traditional purpose of assessment as a
mechanism to filter learners as per their abilities or
potentials and thus reducing learning to a limited set
of ‘expertise’ that can be displayed on paper;
assessment as a selective and competitive act and
achievement as an outcome of an individual’s innate
factors.
With the constructivist understanding of
learning and assessment, assessment cannot be an
end-of-teaching activity. Rather, it has to be an
ongoing process where the teacher closely observes
learners during the process of teaching-learning,
records learning landmarks, and supports them by
providing relevant feedback. The need for giving
feedback to students and their guardians will be
highlighted, with practical experience of how to
record and report progress, and create forums for
engagement with the community. Student-teachers
will thus learn to explore diverse methods and tools
of assessing an array of learning/performance
outcomes of diverse learners. The course discusses
the relationship of assessment with self-esteem,
motivation, and identity as learners, with an
emphasis on ‘fixed’ or ‘growth’ mindsets
regarding notions of intelligence and ability.
The course will support student-teachers in
understanding the psycho-social and political
dimensions of assessment. They will see how
traditional assessment used for competitive
selection has provided legitimacy to iniquitous
systems of education and worked towards
perpetuating equations of power and hegemony in
society. The aim of this course is therefore to
develop a critical understanding of issues in
assessment and also explore realistic,
comprehensive and dynamic assessment processes
which are culturally responsive for use in the
classroom. This is one of the greatest challenges
before the Indian system and this course will
prepare prospective teachers to critically look at the
prevalent practices of assessment and selection, and
instead develop enabling processes which lead to
better learning and more confident and creative
learners.
PRASANTH VENPAKAL
CONTENTS
Unit I. Basics of Assessment
i) Meaning, Related terms- measurement,
evaluation, examination
ii) Role of Assessment in Learning- as learning, for
learning, of learning
iii) Formative and Summative assessment
iv) Purposes of Assessment
v) Principles of Assessment Practices –principles
related to selection of methods for assessment,
collection of assessment information, judging and
scoring of student performance, summarization and
interpretation of results, reporting of assessment
findings
Unit II. Assessment for Learning in Classroom
i) Student evaluation in the transmission-reception
(behaviorist) model of education - drawbacks
ii) Changing assessment practices- assessment in
constructivist approach-Continuous and
Comprehensive evaluation- projects, seminars,
assignments, portfolios; Grading
iii)Types of assessment- practice based, evidence
based, performance based, examination based
iv)Practices of assessment- dialogue, feedback
through marking, peer and self-assessment,
formative use of summative tests
Unit III. Tools & techniques for classroom
assessment
i) Tools & techniques for classroom assessment-
observation, Self- reporting, Testing; anecdotal
records, check lists, rating scale, Test- types of tests.
ii) Rubrics- meaning, importance
iii)Assessment Tools for affective domain- Attitude
scales, motivation scales-interest inventory
iv)Types of test items-principles for constructing
each type of item
Unit IV. Issues in classroom assessment
i) Major issues- Commercialisation of assessment,
poor test quality, domain dependency, measurement
issues, system issues
ii) Reforms in assessment-open book, IBA, on line,
on demand
iii)Examination reform reports
Unit V. Assessment in inclusive practices
i) Differentiated assessment- culturally responsive
assessment
ii) Use of tests for learner appraisal-achievement
test, Diagnostic test- construction of each-
preparation of test items- scoring key- marking
scheme-question wise analysis
iii)Quality of a good test
iv)Ensuring fairness in assessment
v) Assessment for enhancing confidence in
learning- Relationship of assessment with
confidence, self-esteem, motivation-ipsative
assessment
Unit VI. Reporting Quantitative assessment data
Statistical techniques for interpreting and reporting
quantitative data
i)Measures of central tendency
ii)Measures of dispersion
iii)Correlation
iv) Graphs & Diagrams

UNIT- I.
BASICS OF ASSESSMENT
MEANING OF RELATED TERMS-
ASSESSMENT, MEASUREMENT,
EVALUATION & EXAMINATION
ASSESSMENT
“Assessment is the systematic collection,
review, and use of information about educational
programs undertaken for the purpose of improving
student learning and development”.
T. Marchese (1987)
Educational assessment is the process of
documenting, usually in measurable terms,
knowledge, skills, attitudes and beliefs. Assessment
can focus on the individual learner, the learning
community (class, workshop, or other organized
group of learners), the institution, or the educational
system as a whole. According to the Academic
Exchange Quarterly: "Studies of a theoretical or
empirical nature (including case studies, portfolio
studies, exploratory, or experimental work)
addressing the assessment of learner aptitude and
preparation, motivation and learning styles,
learning outcomes in achievement and satisfaction
in different educational contexts are all welcome, as
are studies addressing issues of measurable
standards and benchmarks".
Assessment is a process by which
information is obtained relative to some known
objective or goal. Assessment is a broad term that
includes testing. A test is a special form of
assessment. Tests are assessments made under
contrived circumstances especially so that they may
be administered. In other words, all tests are
assessments, but not all assessments are tests. We
test at the end of a lesson or unit. We assess progress
at the end of a school year through testing, and we
assess verbal and quantitative skills through such
instruments as the SAT and GRE. Whether implicit
or explicit, assessment is most usefully connected to
some goal or objective for which the assessment is
designed. A test or assessment yields information
relative to an objective or goal. In that sense, we test
or assess to determine whether or not an objective
or goal has been obtained. Assessment of skill
attainment is rather straightforward. Either the skill
exists at some acceptable level or it doesn’t. Skills
are readily demonstrable. Assessment of
understanding is much more difficult and complex.
Skills can be practiced; understandings cannot. We
can assess a person’s knowledge in a variety of
ways, but there is always a leap, an inference that
we make about what a person does in relation to
what it signifies about what he knows. In the
context of behavioral verbs, to assess means to
stipulate the conditions by which the behavior
specified in an objective may be ascertained. Such
stipulations are usually in the form of written
descriptions.

Assessment Steps:
 Develop learning objectives.
 Check for alignment between the curriculum
and the objectives.
 Develop an assessment plan (must use direct
measures).
 Collect assessment data.
 Use results to improve the program.
 Routinely examine the assessment process
and correct, as needed.
Evaluation
Evaluation is the process by which we judge
the quality of something. It is the process of
determining the extent to which an objective is
achieved or the thing evaluated possesses the
qualities envisaged. Evaluation is a process of
assigning value to something. This is possible only
on the basis of specific pre-determined goals.
Therefore evaluation in education warrants the
determination of specific educational goals. From
the point of view of the classroom teacher,
instructional objectives act as the basis of
evaluation. This means that educational evaluation
is possible only if the instructional objectives are
determined earlier. Evaluation based on
pre-determined objectives is called objective-based
evaluation.
Evaluation is perhaps the most complex and
least understood of the terms. Inherent in the idea of
evaluation is "value." When we evaluate, what we
are doing is engaging in some process that is
designed to provide information that will help us
make a judgment about a given situation. Generally,
any evaluation process requires information about
the situation in question. A situation is an umbrella
term that takes into account such ideas as
objectives, goals, standards, procedures, and so on.
When we evaluate, we are saying that the process
will yield information regarding the worthiness,
appropriateness, goodness, validity, legality, etc., of
something for which a reliable measurement or
assessment has been made. Teachers, in particular,
are constantly evaluating students, and such
evaluations are usually done in the context of
comparisons between what was intended (learning,
progress, behavior) and what was obtained.
Functions of Evaluation
 Evaluation enhances the quality of teaching.
Through evaluation, teachers are able to find out
how far they have been successful in achieving the
objectives of education they had in mind. In other
words, they are able to assess the degree to which
they have succeeded in teaching. This assessment,
leading to a value judgment, enables them to
improve their instructional strategies.
 Guidance can be given on the basis of
evaluation.
Evaluation makes individual differences clear;
specific difficulties are also identified and
diagnosed. On the basis of this diagnosis, the
teacher can plan remedial activities, which in turn
help the realization of the goals to the maximum
extent possible. Hence evaluation is of great utility
in educational guidance. On the basis of the
measurement of abilities, predictions can be made
regarding the nature of performance of individuals
in a context or task. This will enable the teacher to
provide educational and vocational guidance.
 Evaluation helps in adjudging the position of
students within a group.
One of the important functions served by
evaluation is the ‘placement’ of students. Those
undergoing a course have to be judged on the basis
of their eligibility to proceed to a higher stage of
study.
 Evaluation differs from measurement.
A measurement can be ascertained at any moment
without reference to the past or future, whereas in
evaluation we consider previous results and the
goals or objectives anticipated. While measurement
aims only at ascertaining quantity, evaluation aims
at discovering strengths and weaknesses, if any;
here we are always concerned with whether we are
reaching the goal. Because of this nature,
evaluation is a continuous process, while
measurement is attempted only when it is needed.
As evaluation involves value judgment, it may not
be as precise as measurement, but it is more valid
and useful than measurement. However, proper
measurement can make evaluation more objective.
Steps in The Process of Evaluation
An effective process of evaluation involves
the following steps;
i. Setting up objectives of education
according to the needs of the learner.
ii. Writing the instructional objectives in
behavioral terms.
iii. Imparting learning experiences / engaging
learners with the learning environment.
iv. Developing tools and techniques of
evaluation in accordance with the
instructional objectives.
v. Implementing the tools and finding out the
results.
vi. Analysing and interpreting the results.
vii. Modifying with remedial teaching, if there
are any deviations.
viii. Recording for future use.
DIFFERENCE BETWEEN ASSESSMENT &
EVALUATION
1. Assessment emphasizes the teaching process and
progress; evaluation emphasizes the mastery of
competencies.
2. Assessment focuses on teacher activity or student
activity; evaluation focuses on student performance
or teacher performance.
3. Assessment methods include student critiques,
focus groups, interviews, reflective practice,
surveys and reviews; evaluation methods include
tests/quizzes, semester projects, demonstrations or
performances.
4. The purpose of assessment is to improve the
teaching and learning process; the purpose of
evaluation is to assign a grade or ranking.
5. Assessment is generally formative; evaluation is
generally summative.

MEASUREMENT
According to Stevens, “Measurement is the
assignment of numerals to objects, or events,
according to rules”.
According to Stufflebeam, “Measurement is
the assignment of numerals to entities according to
rules”.
Measurement is the process by which we
ascertain the quantity of something. It is merely the
assignment of a numerical index to the thing or
phenomenon we measure. Measurement refers to
the process by which the attributes or dimensions of
some physical object are determined. One exception
seems to be in the use of the word measure in
determining the IQ of a person. The phrase, "this
test measures IQ" is commonly used. Measuring
such things as attitudes or preferences also applies.
However, when we measure, we generally use some
standard instrument to determine how big, tall,
heavy, voluminous, hot, cold, fast, or straight
something actually is. Standard instruments refer to
instruments such as rulers, scales, thermometers,
pressure gauges, etc. We measure to obtain
information about what is. Such information may or
may not be useful, depending on the accuracy of the
instruments we use, and our skill at using them.
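The contrast between measuring (‘how much’) and evaluating (‘how good’) can be made concrete with a short sketch. This is only an illustration: the function names and the 80% mastery criterion below are invented assumptions, not part of any standard.

```python
# Measurement: assign a numeral according to a rule
# (here, the rule is simply counting correct answers).
def measure(responses, answer_key):
    return sum(1 for r, k in zip(responses, answer_key) if r == k)

# Evaluation: judge the measured quantity against a
# pre-determined objective (the 80% mastery criterion
# here is an invented, illustrative assumption).
def evaluate(score, total, mastery_ratio=0.8):
    if score / total >= mastery_ratio:
        return "objective achieved"
    return "needs remediation"

score = measure(["a", "c", "b", "d", "a"],
                ["a", "c", "b", "b", "a"])
print(score)               # a bare quantity: 'how much'
print(evaluate(score, 5))  # a value judgment: 'how good'
```

Note how the numeral alone says nothing about quality; only when it is compared with a pre-determined objective does a judgment emerge, which mirrors the point that proper measurement can make evaluation more objective.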
A Comparison Of Measurement And
Evaluation
1. Measurement is quantitative; it refers to ‘how
much’ without any reference to purpose, past or
future, and is concerned only with the present.
Evaluation is a qualitative judgment of value and
purpose; it refers to ‘how good’ with reference to
purposes, whether present, past or future.
2. Measurement is objective and impersonal; it does
not change with a change of individuals. Evaluation
is subjective and personal to a great extent.
3. Measurement is precise and scientific. Evaluation
is interpretative and philosophical.
4. Measurement is not a continuous process; it is
occasional. Tests are conducted only occasionally
to get a measure of pupils’ achievement. Evaluation
is a continuous process; teachers are evaluating
their pupils continuously, and in addition to tests,
observation, interview, sociometry, etc. are
common techniques used for the purpose.
5. Measurement is independent of evaluation,
whereas correct evaluation depends upon correct
measurement.
6. The scope of measurement is limited; the scope
of evaluation is unlimited.
TEST / EXAMINATION
A test or an examination (or "exam") is an
assessment intended to measure a test-taker's
knowledge, skill, aptitude, or classification in many
other topics (e.g., beliefs). In practice, a test may be
administered orally, on paper, on a computer, or in
a confined area that requires a test taker to
physically perform a set of skills. The basic
component of a test is an item, which is sometimes
colloquially referred to as a "question."
Nevertheless, not every item is phrased as a
question given that an item may be phrased as a
true/false statement or as a task that must be
performed (in a performance test). In many formal
standardized tests, a test item is often retrievable
from an item bank.
A test may vary in rigor and requirement. For
example, in a closed book test, a test taker is often
required to rely upon memory to respond to specific
items whereas in an open book test, a test taker may
use one or more supplementary tools such as a
reference book or calculator when responding to an
item. A test may be administered formally or
informally. An example of an informal test would
be a reading test administered by a parent to a child.
An example of a formal test would be a final
examination administered by a teacher in a
classroom or an I.Q. test administered by a
psychologist in a clinic. Formal testing often results
in a grade or a test score. A test score may be
interpreted with regards to a norm or criterion, or
occasionally both. The norm may be established
independently, or by statistical analysis of a large
number of participants. A formal test that is
standardized is one that is administered and scored
in a consistent manner to ensure legal defensibility.
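The norm-referenced versus criterion-referenced interpretations mentioned above can be sketched as follows; the pass mark of 50 and the cohort scores are invented purely for illustration.

```python
# Criterion-referenced interpretation: compare a score
# against a fixed standard (an invented pass mark of 50).
def criterion_referenced(score, pass_mark=50):
    return score >= pass_mark

# Norm-referenced interpretation: compare a score against
# the performance of a group, expressed here as the
# percentage of the cohort scoring at or below it.
def norm_referenced(score, cohort_scores):
    at_or_below = sum(1 for s in cohort_scores if s <= score)
    return 100 * at_or_below / len(cohort_scores)

cohort = [35, 42, 48, 55, 61, 67, 74, 82]  # invented scores
print(criterion_referenced(55))     # True: meets the fixed criterion
print(norm_referenced(55, cohort))  # 50.0: at the cohort midpoint
```

The same raw score of 55 thus carries two different meanings: one relative to a fixed standard, the other established, as the text says, by analysis of a number of participants.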
ROLE OF ASSESSMENT IN LEARNING- AS
LEARNING, FOR LEARNING, OF
LEARNING
Assessment For Learning
It involves teachers using information about
students’ knowledge, understanding and skills to
inform their teaching. It occurs throughout the
teaching-learning process to clarify students’
learning and understanding. It includes two
phases—initial or diagnostic assessment and
formative assessment. This type of assessment can
be based on a variety of information sources (e.g.,
portfolios, works in progress, teacher observation,
conversation). Verbal or written feedback given to
the student after this assessment is primarily
descriptive and emphasizes strengths, identifies
challenges, and points to next steps. Through this
assessment teachers check on understanding and
adjust their instruction to keep students on track. No
grades or scores are given; record-keeping is
primarily anecdotal and descriptive. It occurs
throughout the learning process, from the outset of
the course of study to the time of summative
assessment.
Assessment As Learning
It occurs when students act as their own
assessors. Students monitor their own learning, ask
questions and use a range of strategies to decide
what they know and can do for learning. It begins
as students become aware of the goals of instruction
and the criteria for performance. It encourages
students to take responsibility for their own
learning. It involves goal-setting, monitoring
progress, and reflecting on results. It implies
student ownership and responsibility for moving
his or her thinking forward (metacognition). It
occurs throughout the learning process.
Assessment Of Learning
It assists teachers in using evidence of students’
learning to assess achievements against outcomes
and standards. It is sometimes known as summative
assessment. This assessment helps in assigning
grades and ranks. It compares one student’s
achievement with standards. The results can be
communicated to the student and parents. It occurs
at the end of the learning unit.
FORMATIVE & SUMMATIVE
ASSESSMENT
Formative Assessment
The goal of formative assessment is to
monitor student learning to provide ongoing
feedback that can be used by instructors to improve
their teaching and by students to improve their
learning. Formative assessment provides feedback
and information during the instructional process,
while learning is taking place. Formative
assessment measures student progress, but it can
also assess your own progress as an instructor. A
primary focus of formative assessment is to identify
areas that may need improvement. These
assessments typically are not graded; they act as a
gauge of students’ learning progress and help to
determine teaching effectiveness.
Features of Formative Assessment
 Is diagnostic and remedial
 Makes the provision for effective feedback
 Provides the platform for the active
involvement of students in their own
learning.
 Enables teachers to adjust teaching to take
account of the results of assessment
 Recognizes the profound influence
assessment has on the motivation and self-
esteem of students, both of which are crucial
influences on learning
 Recognizes the need for students to be able to
assess themselves and understand how to
improve
 Builds on students' prior knowledge and
experience in designing what is taught.
 Incorporates varied learning styles into
deciding how and what to teach.
 Encourages students to understand the
criteria that will be used to judge their work
 Offers an opportunity to students to improve
their work after feedback,
 Helps students to support their peers, and
expect to be supported by them.
Types of Formative Assessment
 Observations during in-class activities, and of
students’ non-verbal feedback during lectures
 Homework exercises as review for exams and
class discussions
 Reflection journals that are reviewed
periodically during the semester
 Question and answer sessions, both formal
(planned) and informal (spontaneous)
 Conferences between the instructor and student
at various points in the semester
 In-class activities where students informally
present their results
 Student feedback collected by periodically
answering specific questions about the instruction
and their self-evaluation of performance and
progress
Summative Assessment
The goal of summative assessment is to
evaluate student learning at the end of an
instructional unit by comparing it against some
standard or benchmark. Summative assessment
takes place after the learning has been completed
and provides information and feedback that sums up
the teaching and learning process. Typically, no
more formal learning is taking place at this stage,
other than incidental learning which might take
place through the completion of projects and
assignments. Grades are usually an outcome of
summative assessment. Summative assessment is
more product-oriented and assesses the final
product, whereas formative assessment focuses on
the process toward completing the product.

Types of Summative Assessment
 Examinations (major, high-stakes exams)
 Final examination (a truly summative
assessment)
 Term papers (drafts submitted throughout the
semester would be a formative assessment)
 Projects (project phases submitted at various
completion points could be formatively
assessed)
 Portfolios (could also be assessed during their
development as a formative assessment)
 Performances
 Student evaluation of the course (teaching
effectiveness)
 Instructor self-evaluation
DIFFERENCE BETWEEN SUMMATIVE &
FORMATIVE ASSESSMENT
1. Summative assessment is for grades; formative
assessment enhances learning.
2. Summative assessment usually occurs at critical
points in the learning process (e.g. mid-term, final
exam); formative assessment is considered a part of
the course instruction.
3. Summative assessment is evaluated with a score;
formative assessment is evaluated by providing
feedback.
4. Once a summative evaluation is complete, it is
added to the student’s record, typically with no
opportunity for change; formative activities tend to
build upon the learning process (i.e. tasks flow into
each other, so learning becomes more of a process).
5. Summative assessment can be viewed as
"threatening", as the end result is more definitive;
formative assessment tends to be viewed as a
non-threatening approach.

PURPOSE OF ASSESSMENT
The primary purpose of assessment is to
improve students' learning and teachers' teaching as
both respond to the information it
provides. Assessment for learning is an ongoing
process that arises out of the interaction between
teaching and learning.

 Assessment drives instruction
A pre-test or needs assessment informs
instructors what students know and do not know at
the outset, setting the direction of a course. If done
well, the information garnered will highlight the gap
between existing knowledge and a desired
outcome. Accomplished instructors find out what
students already know, and use the prior knowledge
as a stepping off place to develop new
understanding. The same is true for data obtained
through assessment done during instruction. By
checking in with students throughout instruction,
outstanding instructors constantly revise and refine
their teaching to meet the diverse needs of students.

 Assessment drives learning
What and how students learn depends to a major
extent on how they think they will be
assessed. Assessment practices must send the right
signals to students about what to study, how to
study, and the relative time to spend on concepts
and skills in a course. Accomplished faculty
communicate clearly what students need to know
and be able to do, both through a clearly articulated
syllabus, and by choosing assessments carefully in
order to direct student energies. High expectations
for learning result in students who rise to the
occasion.

 Assessment informs students of their progress
Effective assessment provides students with a
sense of what they know and don’t know about a
subject. If done well, the feedback provided to
students will indicate to them how to improve their
performance. Assessments must clearly match the
content, the nature of thinking, and the skills taught
in a class. Through feedback from instructors,
students become aware of their strengths and
challenges with respect to course learning
outcomes. Assessment done well should not be a
surprise to students.

 Assessment informs teaching practice
Reflection on student accomplishments offers
instructors insights on the effectiveness of their
teaching strategies. By systematically gathering,
analyzing, and interpreting evidence we can
determine how well student learning matches our
outcomes / expectations for a lesson, unit or
course. The knowledge from feedback indicates to
the instructor how to improve instruction, where to
strengthen teaching, and what areas are well
understood and therefore may be cut back in future
courses.

 Assessment for Grading
Grades should be a reflection of what a student
has learned as defined in the student learning
outcomes. They should be based on direct evidence
of student learning as measured on tests, papers,
projects, and presentations, etc. Grades often fail to
tell us clearly about “large learning” such as critical
thinking skills, problem solving abilities,
communication skills (oral, written and listening),
social skills, and emotional management skills.

 Assessment motivates students
Studies have shown that students will be
motivated and confident learners when they
experience progress and achievement, rather than
the failure and defeat associated with being
compared to more successful peers.

METHODS OF ASSESSMENT
1. Group assessment : This develops interpersonal
skills and may also develop oral skills and
research skills (if combined, for example, with a
project).
2. Self-assessment : Self-assessment obliges
students more actively and formally to evaluate
themselves and may develop self-awareness and
better understanding of learning outcomes.
3. Peer assessment : By overseeing and evaluating
other students’ work, the process of peer
assessment develops heightened awareness of
what is expected of students in their learning.
4. Unseen examination : This is the ‘traditional’
approach. It tests the individual knowledge base
but questions are often relatively predictable
and, in assessment, it is difficult to distinguish
between surface learning and deep learning.
5. Testing skills : It can be useful to test students on
questions relating to material with which they
have no familiarity. This often involves creating
hypothetical knowledge scenarios. It can test true
student ability and avoids problems of rote- and
surface-learning.
6. Coursework essays : A relatively traditional
approach that allows students to explore a topic
in greater depth but can be open to plagiarism.
Also, it can be fairly time consuming and may
detract from other areas of the module.
7. Oral examination : With an oral exam, it is
possible to ascertain students’ knowledge and
skills. It obliges a much deeper and extensive
learning experience, and develops oral and
presentational skills.
8. Projects : These may develop a wide range of
expertise, including research, IT and
organisational skills. Marking can be difficult,
so one should consider oral presentation.
9. Presentations : These test and develop important
oral communication and IT skills, but can prove
to be dull and unpopular with students who do
not want to listen to their peers, but want instead
to be taught by the tutor.
10.Multiple choice : These are useful for self-
assessment and easy to mark. Difficulties lie in
designing questions and testing depth of
analytical understanding.
11.Portfolio: This contains great potential for
developing and demonstrating transferable skills
as an ongoing process throughout the degree
programme.
12.Computer-aided : Computers are usually used
with multiple-choice questions. Creating
questions is time consuming, but marking is very
fast and accurate. The challenge is to test the
depth of learning.
13.Literature reviews : These are popular at later
levels of degree programmes, allowing students
to explore a particular topic in considerable
depth. They can also develop a wide range of
useful study and research skills.
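Item 12's point that machine marking of multiple-choice items is fast and accurate can be illustrated with a minimal auto-marker; the responses and answer key below are invented for the example.

```python
# Minimal multiple-choice auto-marker: compares each response
# with the answer key and returns the total mark together with
# the (1-based) numbers of the incorrectly answered items.
def mark_mcq(responses, answer_key):
    wrong = [i + 1
             for i, (r, k) in enumerate(zip(responses, answer_key))
             if r != k]
    return len(answer_key) - len(wrong), wrong

key = ["b", "d", "a", "c"]  # invented answer key
mark, wrong_items = mark_mcq(["b", "d", "c", "c"], key)
print(mark, wrong_items)    # 3 [3]
```

Marking like this is instantaneous and consistent, but, as the text notes, the real effort lies in designing items that test depth of learning rather than recall.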
PRINCIPLES OF ASSESSMENT
Good principles will help those wishing to
evaluate their assessment designs or their
implementations in practice. The following are
important principles that should be kept in mind
while assessing the performance of learners.
1. It should be clear and directly linked with
outcomes :
The assessment strategies employed by the
teacher in the classroom need to be directly linked
to and reflect the syllabus outcomes. The methods
of assessment should be planned in a very clear
manner.
2. It should be integrated with teaching and
learning :
Effective assessment practice involves
selecting strategies that are directly derived from
well-structured teaching and learning activities.
These strategies should provide information
concerning student progress and achievement that
helps to inform ongoing teaching and learning as
well as the diagnosis of areas of strength and need.
3. It should be comprehensive and balanced :
An effective assessment program should give
results of performance in all areas, scholastic as
well as co-scholastic. The teacher should take care
to adopt a balanced set of assessment strategies.
4. Strategies adopted should be valid and reliable:
Valid and reliable assessment strategies are those that measure what the teacher actually intends to assess (validity) and that give consistent results not only in a particular situation but also in other situations (reliability).
5. It should be fair:
Effective assessment strategies are designed to ensure equal opportunity for success regardless of students' age, gender, physical or other disability, culture, language background, socio-economic status, etc.
6. It should be student centered:
The learning outcomes and the assessment process to be used should be made explicit to students. Students should participate in the negotiation of learning tasks and actively monitor and reflect upon their achievement and progress.
7. It should be time efficient and manageable:
Teachers need to plan carefully the timing, frequency and nature of their assessment strategies. Good planning ensures that assessment and reporting are manageable and maximizes the usefulness of the strategies selected.
8. It should recognize individual achievement and progress:
All students must be given appropriate opportunities to demonstrate achievement. To give constructive feedback to students, the assessment strategies should enable the teacher to evaluate learners individually.
9. It must ensure active involvement of parents:
School authorities should ensure full and informed participation by parents in the continuing development and review of the school policy on the assessment process.
TEST YOUR UNDERSTANDING
1. Define the term Assessment.
2. What do you mean by Evaluation?
3. Differentiate the terms Assessment and Evaluation.
4. Define the term Measurement.
5. Differentiate Measurement and Evaluation.
6. Differentiate Examination and Assessment.
7. Differentiate Formative and Summative Assessment.
8. Explain the advantages of formative assessment.
9. List out the purposes of Assessment.
10. Explain the different principles applied while assessing learners.
11. Explain the importance of assessment in our education system.
12. How will you conduct summative and formative assessment?
UNIT - II
ASSESSMENT FOR LEARNING IN
CLASSROOM
Learning is a relatively permanent change in, or acquisition of, knowledge, understanding or behavior. There are three ways of learning: transmission, reception and construction.
Student Evaluation in Transmission-Reception (Behaviorist) Model of Education
Reception is a model of learning in which knowledge is transmitted from an external source (for example, the teacher) to the receiver (the students). So learning here is being taught: the teacher gives students concepts and knowledge while the students merely receive them.
Transmission is the sending and receiving of messages, knowledge and signals. It leaves no scope for creativity, is rigid, and the method of teaching is generally the lecture method.
Behaviorism Theory of Learning
"Teachers must learn how to teach ... they need only to be taught more effective ways of teaching." - B. F. Skinner
Behaviorism assumes that a learner is essentially passive, responding to environmental stimuli. It holds that at birth the mind is a 'tabula rasa' (a blank slate), and that behavior is shaped by positive and negative reinforcement.
Behaviorism is primarily concerned with
observable behavior, as opposed to internal events
like thinking and emotion. Observable (i.e.
external) behavior can be objectively and
scientifically measured. Internal events, such as thinking, should be explained in behavioral terms (or eliminated altogether).
Assessment in Behaviorist Model of Education
Here the emphasis is on assessing how much of the information transmitted by the teacher the students have received. Knowledge transmission itself cannot be evaluated directly, but indirect methods can be used to assess attention or emotional states. The teacher can thus assess only the success of the teaching process. More weightage is given to the knowledge and understanding levels of attainment of objectives. Traditional Bloom's taxonomy is the base for assessment, and assessment here is summative in nature.
Drawbacks of Assessment in Behaviorist Model
of Education
 Assessment is only about the success of
teaching process.
 Students are passive listeners so proper
assessment of achievement is not possible.
 Less importance to psychological aspects of
learner.
 More importance to the product achieved by
the students.
 No weightage to the mental process of
learners.
 No continuous assessment of the learner.
 Less importance to co-scholastic
achievements.
Student Evaluation in Constructivist Model of
Education
Formalization of the theory of constructivism is generally attributed to Jean Piaget, who articulated mechanisms by which knowledge is internalized by learners. He suggested that through processes of accommodation and assimilation, individuals construct new knowledge from their experiences. "Teaching is not about filling up the pail, it is about lighting a fire." Constructivism focuses on knowledge construction: it is a theory of knowledge that argues that humans generate knowledge and meaning from an interaction between their experiences and their ideas. It has influenced a number of disciplines, including psychology, sociology, education and the history of science.
When individuals assimilate, they
incorporate the new experience into an already
existing framework without changing that
framework. This may occur when individuals’
experiences are aligned with their internal
representations of the world, but may also occur as
a failure to change a faulty understanding; for
example, they may not notice events, may
misunderstand input from others, or may decide that
an event is a fluke and is therefore unimportant as
information about the world. In contrast, when
individuals’ experiences contradict their internal
representations, they may change their perceptions
of the experiences to fit their internal
representations.
According to the theory, accommodation is
the process of reframing one’s mental
representation of the external world to fit new
experiences. Accommodation can be understood as
the mechanism by which failure leads to learning:
when we act on the expectation that the world
operates in one way and it violates our expectations,
we often fail, but by accommodating this new
experience and reframing our model of the way the
world works, we learn from the experience of
failure, or others’ failure.
It is important to note that constructivism is
not a particular pedagogy. In fact, constructivism is
a theory describing how learning happens,
regardless of whether learners are using their
experiences to understand a lecture or following the
instructions for building a model airplane. In both
cases, the theory of constructivism suggests that
learners construct knowledge out of their
experiences.
Assessment in Constructivist Model of
Education
Constructivism is often associated with pedagogic approaches that promote active learning, or learning by doing. The view of the learner
changed from that of a recipient of knowledge to
that of a constructor of knowledge, an autonomous
learner with metacognitive skills for controlling his
or her cognitive process during learning. Learning
involves selecting relevant information and
interpreting it through one’s existing knowledge.
Accordingly, the teacher becomes a participant with
the learner in the process of shared cognition, that
is, in the process of constructing meaning in a given
situation. Concerning instruction, the focus changed
from the curriculum to the cognition of the student.
Thus, instruction is geared toward helping the
student to develop learning and thinking strategies
that are appropriate for working within various
subject domains. Correspondingly, assessment is qualitative rather than quantitative, determining how the student structures and processes knowledge rather than how much is learned. Continuous and comprehensive assessment is one of the main strategies in constructivist learning. Here assessment is formative rather than summative, and weightage to learning objectives in the assessment is given on the basis of the revised Bloom's taxonomy.
Continuous and Comprehensive Evaluation
(CCE)
Continuous and comprehensive evaluation is
a process of assessment, mandated by the Right to
Education Act, of India. This approach to
assessment has been introduced by state
governments in India, as well as by the Central
Board of Secondary Education in India. The main aim of CCE is to evaluate every aspect of the child during their presence at the school. This is believed to reduce the pressure on the child during and before examinations, as the student sits for multiple tests throughout the year, and neither any test nor the syllabus it covered is repeated at the end of the year. The CCE method
is claimed to bring enormous changes from the
traditional chalk and talk method of teaching,
provided it is implemented accurately.
As a part of this new system, students' marks are replaced by grades, which are evaluated through a series of curricular and extra-curricular evaluations along with academics. The aim is to decrease the workload on the student by means of continuous evaluation, taking a number of small tests throughout the year in place of a single test at the end of the academic programme. Grades are awarded to students based on work-experience skills, dexterity, innovation, steadiness, teamwork, public speaking, behavior, etc., to evaluate and present an overall measure of the student's ability. This helps students who are not strong in academics to show their talent in other fields such as arts, humanities, sports, music and athletics, and also helps to motivate students who have a thirst for knowledge.
Objectives of CCE
1. To help develop cognitive, psychomotor and affective skills.
2. To give emphasis to thought processes and de-emphasis to memorization.
3. To make evaluation an integral part of teaching
learning process.
4. To use evaluation for improvement of student’s
achievement and teaching strategies.
5. To use evaluation as a quality control device to
increase standard of performance.
6. To make the teaching learning process a student
centered one.
Characteristics of CCE
 Teachers evaluate students on a day-to-day basis and use the feedback for improvement of the teaching-learning process.
 Teachers can use varieties of evaluation
methods over and above the written tests.
 Students can be assessed in both scholastic
and co-scholastic areas.
 Evaluation is done throughout the year and
therefore it is expected to provide more
reliable evidence of students’ progress.
 CCE encourages the students in forming
good study habits.
 The feedback provided by CCE can be
effectively used in remedial teaching to slow
learners.
Advantages of Continuous and Comprehensive
Evaluation (CCE)
CCE is child-centric and views each learner
as unique. This evaluation system aims to build on
the individual child’s abilities, progress and
development. So that the child does not feel burdened during the learning years, CCE made formative and summative assessments mandatory in all CBSE schools. The learner thus also benefits by having to focus on only a small part of the entire syllabus designed for an academic year at any one time.
Assessment of Projects
Assessment plays a major role in education.
A key role of assessment is the diagnostic process: by establishing what students have learned, it is possible to plan what students need to learn in the future. Project work is a method of allowing students to use what they have learned in statistics classes in a practical context. It is this practical application of projects that makes them such a useful part of the learning process.
Although project work may look easy, a brief acquaintance with this way of working shows how demanding it really is for both teachers and students. Students must make connections between one piece of learning and another. They have to
transfer the skills acquired in statistics to other areas
such as science and geography, and vice-versa.
They have to familiarise themselves with a wide
range of information. This is much more demanding
than learning one isolated fact after another.
Integrated work of this kind is often the best
preparation for higher education and future
employment. Project work allows students to
connect various pieces of knowledge together that
suits a solution to a chosen problem. Through the
following steps we can assess the project work of
students.
Criteria for Assessing Projects
Assessing the effort put in by a learner in carrying out project-based learning is not an easy task. We can use the following basic criteria for assessing the output:
I. Research skills: This includes assessment of the learner's involvement in the following elements:
Selection of topic
Framing objectives and hypotheses
Preparation of tools and techniques
Implementation of study and data collection
Analysis of collected Data and its interpretation
Participation in discussion
Creativity (thinks of new/next experiments/new
ideas)
Initiative
Interest in his/her work
Critical thinking
Professional conduct
Communication/sociability/time
management/teamwork
II. Written report
Process of writing
Appropriateness of language
Language: spelling, grammar, not unnecessarily
lengthy
Response to suggestions
Report defence during evaluation
Initiative/independence
Theoretical background
Presentation of results: clarity of tables, figures
Depth and critical analysis
Structure and line of reasoning
Foundation of conclusions
Use of references
Time management/layout/completeness
III. Oral presentation
i. Composition and design
The content of the presentation should meet the
requirements of the written report
Clarity of slides
Order of components
ii. Professional attitude
Response to questions and remarks
iii. Presentation technique
Use of language
Use of slides
Use of voice
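Criteria like these are often combined into a weighted rubric so that the three groups (research skills, written report, oral presentation) contribute to a single project score. The sketch below is only an illustration: the group weights, the 1-5 scale and the individual criterion names are assumptions for demonstration, not values prescribed by any examining body.

```python
# Illustrative sketch: aggregating project-assessment criteria into a
# weighted score. The criterion names, weights and the 1-5 scale are
# assumed values, not a prescribed scheme.

def project_score(marks: dict, weights: dict) -> float:
    """Average each criterion group (items scored 1-5), then combine
    the group averages using weights that must sum to 1.0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    total = 0.0
    for group, items in marks.items():
        group_avg = sum(items.values()) / len(items)
        total += weights[group] * group_avg
    return round(total, 2)

marks = {
    "research_skills": {"topic_selection": 4, "data_collection": 5,
                        "critical_thinking": 3},
    "written_report": {"language": 4, "structure": 4, "references": 5},
    "oral_presentation": {"slides": 3, "voice": 4, "responses": 4},
}
weights = {"research_skills": 0.5, "written_report": 0.3,
           "oral_presentation": 0.2}

print(project_score(marks, weights))  # prints 4.03
```

Averaging within a group before weighting keeps groups with many sub-criteria (such as research skills) from dominating the total simply because they have more items.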
ASSESSMENT OF SEMINARS
A seminar in the classroom is a socialized way of presenting content. It is defined as a session that provides the opportunity for students to engage in discussion of a particular topic. It helps to explore the content in more detail than might be covered in regular classes. It may be implemented in classrooms on a small scale or at a larger level. The following points must be remembered while assessing students' seminars:
1) When assessing written work consider the
following points:
 Depth of understanding of basic concepts and
issues
 Relevance to the assignment title or question
 Logical organisation and linking of ideas
(coherence)
 Personal evaluation of issues under
discussion and/or application of a descriptive
framework to data
 Analysis, including originality of examples
used; or originality of narrative / poetic
structure in creative work
 Knowledge of the relevant contexts of the
subject
 Critical use of secondary material
 Clarity of expression
 Accuracy of grammar and punctuation
 Systematic and standardised in-text and
bibliographical references
 Final copy presentation and layout.
2) When assessing presentations consider the following points:
 Engagement of audience
 Use of appropriate supporting
materials/technology (OHP, Slides,
PowerPoint, handouts, audio, video etc.)
 Indicative references for use of secondary material (e.g. on PowerPoint or handout).
 Time-keeping
 In the case of group presentations, group
cohesion and appropriate distribution of
roles.
Assessment Through Portfolio
Student portfolios are a collection of
evidence, prepared by the student and evaluated by
the faculty member, to demonstrate mastery,
comprehension, application, and synthesis of a
given set of concepts. To create a high quality
portfolio, students must organize, synthesize, and
clearly describe their achievements and effectively
communicate what they have learned. Portfolio
assessment strategies provide a structure for long-
duration, in-depth assignments. The use of
portfolios transfers much of the responsibility of
demonstrating mastery of concepts from the
professor to the student.
The overall goal of the preparation of a
portfolio is for the learner to demonstrate and
provide evidence that he or she has mastered a given
set of learning objectives. More than just thick
folders containing student work, portfolios are
typically personalized, long-term representations of
a student’s own efforts and achievements. Whereas
multiple-choice tests are designed to determine
what the student doesn’t know, portfolio
assessments emphasize what the student does know.
Some suggest that portfolios are not really
assessments at all because they are just collections
of previously completed assessments. But, if we
consider assessing as gathering of information
about someone or something for a purpose, then a
portfolio is a type of assessment. Sometimes the
portfolio is also evaluated or graded, but that is not
necessary to be considered an assessment.
Furthermore, in the more thoughtful portfolio
assignments, students are asked to reflect on their
work, to engage in self-assessment and goal-setting.
Those are two of the most authentic skills students
need to develop to successfully manage in the real
world. Research has found that students in classes
that emphasize improvement, progress, effort and
the process of learning rather than grades and
normative performance are more likely to use a
variety of learning strategies and have a more
positive attitude toward learning. Yet in education
we have shortchanged the process of learning in
favor of the products of learning. Students are not
regularly asked to examine how they succeeded or
failed or improved on a task or to set goals for future
work; the final product and evaluation of it receives
the bulk of the attention in many classrooms.
Consequently, students are not developing the
metacognitive skills that will enable them to reflect
upon and make adjustments in their learning in
school and beyond.
Portfolios provide an excellent vehicle for
consideration of process and the development of
related skills. So, portfolios are frequently included
with other types of authentic assessments because
they move away from telling a student's story through test scores and, instead, focus on a
meaningful collection of student performance and
meaningful reflection and evaluation of that work.
Evaluation refers to the act of making a
judgment about something. Grading takes that
process one step further by assigning a grade to that
judgment. Evaluation may be sufficient for a
portfolio assignment. What is (are) the purpose(s)
of the portfolio? If the purpose is to demonstrate
growth, the teacher could make judgments about the
evidence of progress and provide those judgments
as feedback to the student or make note of them for
her own records. Similarly, the student could self-
assess progress shown or not shown, goals met or
not met. No grade needs to be assigned. On the other
hand, the work within the portfolio and the process
of assembling and reflecting upon the portfolio may
comprise such a significant portion of a student's
work in a grade or class that the teacher deems it
appropriate to assign a value to it and incorporate it
into the student's final grade. Alternatively, some
teachers assign grades because they believe without
grades there would not be sufficient incentive for
some students to complete the portfolio. Some
portfolios are assessed simply on whether or not the
portfolio was completed. Teachers assess the entire
package: the selected samples of student work as
well as the reflection, organization and presentation
of the portfolio.
GRADING SYSTEM
Fundamentally, a grade is a score. When students' levels of performance are classified into a few classificatory units using letter grades, the system of assessment is called a grading system. Grading in education is the process of applying standardized measurements of varying levels of achievement in a course. A grading system is primarily a method of communicating the measure of achievement; the grade point average (GPA) may also take extra-curricular activities into account. Grades can be assigned as letters (generally A through F), as a range (for example 1 to 6), as a percentage of the total number of questions answered correctly, or as a number out of a possible total (for example out of 20 or 100).
Types of Grading
There are mainly two types of grading: direct and indirect.
Direct Grading
Here a grade is assigned to the answer to each individual question on the basis of its quality as judged by the evaluator. The grade point average is then calculated to obtain the overall grade of the student.
Indirect Grading
This is the process of giving grades through marks. In this procedure marks are awarded as usual, and the conversion of marks into grades is based on one of two viewpoints. The two types of indirect grading are absolute grading and relative grading.
In absolute grading, a fixed range of scores is determined in advance for each grade. On this basis the score obtained by a candidate in a subject is converted into a grade. It is a type of criterion-based grading.
In relative grading, the grade ranges are not fixed in advance. A candidate's grade varies with his or her relative position among the candidates.
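The contrast between the two forms of indirect grading can be shown with a short sketch. The grade boundaries and the quartile-based cut-offs below are illustrative assumptions; actual boards publish their own ranges.

```python
# Illustrative sketch of indirect grading. The absolute boundaries and
# the relative (quartile-based) scheme are assumed values, and the
# relative function assumes all candidates have distinct marks.

def absolute_grade(marks: int) -> str:
    """Absolute grading: fixed score ranges decided in advance."""
    boundaries = [(90, "A"), (75, "B"), (60, "C"), (45, "D")]
    for cutoff, grade in boundaries:
        if marks >= cutoff:
            return grade
    return "E"

def relative_grades(all_marks: list) -> dict:
    """Relative grading: a grade depends on a candidate's position in
    the group; here the top quarter gets A, the next quarter B, etc."""
    ranked = sorted(all_marks, reverse=True)
    n = len(ranked)
    grades = {}
    for marks in all_marks:
        rank = ranked.index(marks)   # position among all candidates
        quartile = rank * 4 // n     # 0..3
        grades[marks] = "ABCD"[quartile]
    return grades

print(absolute_grade(78))                 # prints B: fixed range 75-89
print(relative_grades([82, 80, 78, 76]))  # the same close scores spread A-D
```

Note how four candidates scoring 76-82 would all receive B under the fixed absolute ranges, yet are spread across A to D under relative grading, since only their positions in the group matter.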
Functions of Grading and Reporting Systems
Improve students’ learning by:
 clarifying instructional objectives for them
 showing students’ strengths & weaknesses
 providing information on personal-social
development
 enhancing students’ motivation (e.g., short-
term goals)
 indicating where teaching might be modified
Reports to parents/guardians
 Communicates objectives to parents, so they
can help promote learning
 Communicates how well objectives are being met, so parents can better plan
Administrative and guidance uses
 Help decide promotion, graduation, honors,
athletic eligibility
 Report achievement to other schools or to
employers
 Provide input for realistic educational,
vocational, and personal counseling
Advantages of Grading System
The New Scheme of Grading has been introduced with the aim that:
 It will minimize misclassification of students
on the basis of marks.
 It will eliminate unhealthy competition
among high achievers.
 It will reduce societal pressure and will
provide the learner with more flexibility.
 It will lead to a focus on a better learning environment.
 It will facilitate joyful and stress free
learning.
TYPES OF ASSESSMENT
PRACTICE BASED ASSESSMENT:
Constructivist and naturalistic classroom
environments give more opportunity for developing
practical abilities than behaviorist classrooms.
Assessing student learning in the practice setting is
one of the most sophisticated and complex forms of
activity. Assessment needs to include evaluation of
skill (technical, psychomotor and interpersonal),
attitudes and insights, and reasoning. Continuous and comprehensive evaluation techniques are mainly used for assessing learners' practical skills. Importance is given to assessing the practical capability to complete tasks in real-life situations. Some examples of practice-based assessment are structured clinical examinations, performance in viva, simulated practice scenarios, project works, preparation of presentations, etc.
EVIDENCE BASED ASSESSMENT:
Evaluating student achievement of expected learning outcomes should be treated as evidence-based assessment. It means teachers assess students' achievement of learning outcomes on the basis of evidence. The evidence may be an achievement score on a particular examination, a report submitted after completion of research, the solution found after completion of experiments, etc.
PERFORMANCE BASED ASSESSMENT:
Knowing how to do something is measured by performance tests such as portfolios, exhibitions and demonstrations. Performance tests or assessments provide greater realism of task than traditional pen-and-paper tests, but are very time-consuming. They can provide greater motivation for students by making learning more meaningful and clarifying goals. Performance assessment requires students to actively demonstrate what they know. There is a big difference between answering questions about how to give a speech or presentation and actually giving one.
Performance assessment may be used for diagnostic purposes. Information provided at the
beginning of the course may help decide where to
start or what needs special attention. To improve the results of performance assessment, the criteria being judged must be clearly defined. Instructions must also be clear and complete. Records must be made as soon as possible after the performance, and the evaluation form must be relevant and easy to
use. Also the use of portfolios and student
participation can contribute to the improvement of
performance assessments.
Performance assessment is an excellent way of determining whether pupils have mastered the outcomes. In other words, it provides realism of task even when that makes the task complicated or complex, and therefore shows whether pupils have understood the concepts taught. It is a skillful form of assessment for challenging one's cognitive skills.
Benefits of Performance Assessment
Performance assessment is an excellent indicator to display a child's true potential and ability. Benefits of performance assessments are:
 They systematically document what children know and can do based on activities they engage in on a daily basis in their classrooms. In
addition, performance assessment evaluates
thinking skills such as analysis, synthesis,
evaluation, and interpretation of facts and ideas
— skills which standardized tests generally
avoid.
 They are flexible enough to permit teachers to
assess each child's progress using information
obtained from ongoing classroom interactions
with materials and peers. In other words, they
permit an individualized approach to assessing
abilities and performance.
 They are a means for improving instruction,
allowing teachers to plan a comprehensive,
developmentally oriented curriculum based on
their knowledge of each child.
 They provide valuable, in-depth information for
parents, administrators, and other policy makers.
 They put responsibility for monitoring what
children are learning — and what teachers are
teaching — in the hands of teachers.
EXAMINATION BASED ASSESSMENT
Assessment of a learner's performance with the support of different forms of test or examination is known as examination-based assessment. A test or examination (informally, exam) is an assessment intended to measure a test-taker's knowledge, skill, aptitude, physical fitness, or classification in many other topics. A test may
be administered verbally, on paper, on a computer,
or in a confined area that requires a test taker to
physically perform a set of skills. Tests vary in
style, rigor and requirements. For example, in a
closed book test, a test taker is often required to rely
upon memory to respond to specific items whereas
in an open book test, a test taker may use one or
more supplementary tools such as a reference book
or calculator when responding to an item. A test
may be administered formally or informally and
with the help of standardised and non-standardised
tests.
A standardized test is any test that is
administered and scored in a consistent manner to
ensure legal defensibility. Standardized tests are
often used in education, professional
certification, psychology , the military, and many
other fields. A non-standardized test is usually
flexible in scope and format, variable in difficulty
and significance. Since these tests are usually
developed by individual instructors, the format and
difficulty of these tests may not be widely adopted
or used by other instructors or institutions. A non-
standardized test may be used to determine the
proficiency level of students, to motivate students to
study, and to provide feedback to students.
Written tests are tests that are administered
on paper or on a computer. A test taker who takes a
written test could respond to specific items by
writing or typing within a given space of the test or
on a separate form or document. The responses of the test taker give evidence of the achievement of the student. Examination-based assessment is thus a common form of assessment prevailing in all countries.
PRACTICES OF ASSESSMENT
Dialogue
The term dialogue derives from the Greek term di-a-logos, so an exploration of dialogue in Greek philosophy is a necessary start. According to Hamilton (2002), etymologically dialogue does not denote two people speaking with each other (the
conventional use in English). Rather the Greek
prefix di means ‘through’, thus explaining why
diaphanous means ‘see-through’. Logos has a dual
meaning. It can mean rationality but also
communication or discourse. It can then be
suggested that the combination of dia and logos
means “reasoning-through” and the dual meaning of
the term logos allows us to establish a link between
reasoning and communication. It seems therefore
justifiable to propose that Di-a-logos signifies
reasoning through interaction in a communicative
manner and hence proposing that the emergence of
rationality is mediated “through” interaction is one
of its constitutive characteristics.
In psychological terms, dialogue incorporates
activities aimed at shared knowledge construction;
in sociological terms, dialogue is akin to interactive
action, enabling learners to greater participation in
society; in literary terms dialogue may entail
interactive processes which open the reader to other
perspectives and broaden the reader’s conceptual
horizon to enter into the dimension of the writer’s
intentionality. All of these activities necessitate, at
least in some degree, the achievement of shared
meaning.
Education is widely believed to have the
power to shape society, and therefore it is not
surprising that sociologists have a special interest in
educational practices. Relationships in society often
are an amplified version of the teaching and
learning relationship. It is important at this point to
clarify the connection between education and
democracy with the view to argue that the infusion
of dialogue in education entails a democratisation of
educational practices.
Dialogue has been described as a method, a process,
an activity, an ethical relation, a model of cognition,
a semiotic exchange and a praxis. Its
conceptualization varies greatly in terms of definition and function. Pedagogical dialogue is in the first place a way of being rather than a method in the
process of learning. This entails the establishment
of relations that foster mutuality, respect for difference, trust, reciprocity and shared (but not necessarily convergent) understanding through the means available in a particular context of practice.
Dialogue should be infused in all educational
practices, including assessment.
The connection between assessment and
dialogue is not straightforward. Assessment and
dialogue may be seen as antithetical in some
quarters. Pedagogical dialogue and educational
practice are activities necessarily situated in specific
educational contexts. Therefore the contextual
dimension of such practices plays an important role
in their reconceptualization. More specifically it
also argues that pedagogical dialogue can offer a
productive theoretical basis for re-conceiving the
interaction between assessors and assessees in
educational assessment in order to maximise
students' development, both educational and personal. Dialogue and learning are both processes.
Therefore the association of dialogue with
assessment should lead to reframing assessment as
a process. So through dialogue or interaction
between teacher and students we can assess the
student’s performance.
Feedback Through Marking
Providing relevant and timely feedback to
pupils, both orally and in writing, promotes positive
behaviours in pupils. Marking is intended to serve the
purposes of valuing pupils’ learning, helping to
diagnose areas for development or next steps, and
evaluating how well the learning task has been
understood. Marking should be a process of creating
a dialogue with the learner, through which feedback
can be exchanged and questions asked; the learner
is actively involved in the process.

Marking And Feedback Strategies To Be Used In Schools
The following strategies can be used to mark,
assess and provide feedback:

1. Verbal Feedback

This means an adult having direct contact with a


child to discuss work that has been completed. It is
particularly appropriate with younger, less able or
less confident children. Verbal feedback will be the
main strategy being used in the Foundation Stage.
A discussion should be accompanied by the
appropriate marking code symbol, or a remark, in the
child's book to serve as a permanent record for
the child, teacher and parent. In some cases it may
be helpful to add a record of the time taken and
context in which the work was done.

2. Success Criteria Checklist

Success Criteria checklists can be used in all


subjects and may include columns for self/peer
assessment and teacher assessment. These should
be differentiated where appropriate.
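A success criteria checklist of this kind can be modelled as a simple table with one row per criterion and columns for each assessor. The sketch below is purely illustrative; the criteria and column names are hypothetical examples, not taken from any particular scheme of work.

```python
# A minimal sketch of a success criteria checklist with columns for
# self, peer and teacher assessment. The criteria are hypothetical.
criteria = [
    "Uses a clear opening sentence",
    "Includes at least three supporting details",
    "Ends with a concluding statement",
]

# One row per criterion; each assessment column starts unmarked (None).
checklist = {c: {"self": None, "peer": None, "teacher": None} for c in criteria}

# A pupil ticking off one criterion during self-assessment.
checklist["Uses a clear opening sentence"]["self"] = True

def met_count(checklist, column):
    """Count how many criteria are marked as met in a given column."""
    return sum(1 for row in checklist.values() if row[column] is True)

print(met_count(checklist, "self"))  # prints 1
```

A teacher could differentiate the checklist simply by varying the `criteria` list for different groups of pupils.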

3. Peer Marking

Children are encouraged to support each


other and feedback on learning and achievement.
Children should be given the opportunity to act as
response partners and pair mark work. Children
should be trained to do this and ground rules set and
displayed. Children should be able to first point out
things that they like then suggest ways to improve
the piece but only against the learning objective or
success criteria. The pairing of children should be
based on ability or trust.

4. Quality feedback comments.

Personalized quality feedback comments


should be used frequently in all subject areas to
extend learning and must be differentiated
appropriately. When marking, teachers will be
looking for opportunities to extend children’s
learning either by clarification or providing
prompts. All work should be marked in green pen
and written comments should reflect the school’s
handwriting style.

5. Marking codes

It is imperative that any marking codes are


used consistently across the school so that there is
no misunderstanding from the child’s point of view
as to what is expected of them.

Self & Peer Assessment

Self-assessment is a process of formative


assessment during which students reflect on and
evaluate the quality of their work and their learning,
judge the degree to which they reflect explicitly
stated goals or criteria, identify strengths and
weaknesses in their work, and revise accordingly.
According to Boud (1995), all assessment including
self-assessment comprises two main elements:
making decisions about the standards of
performance expected and then making judgments
about the quality of the performance in relation to
these standards. Students should be involved in
establishing the criteria for judgment as well as in
evaluating their own work. Regardless of the ways
in which the criteria are set up, students need to be
absolutely clear about the standards of work to
which they are aspiring, and if possible, have
practice in thinking about sample work in relation
to these criteria.
Need for Self Assessment

 Self-evaluation builds on a natural tendency to


check out the progress of one's own learning.
 Further learning is only possible after the
recognition of what needs to be learned.
 If a student can identify his/her learning
progress, this may motivate further learning.
 Self-evaluation encourages reflection on one's
own learning.
 Self-assessment can promote learner
responsibility and independence.
 Self-assessment tasks encourage student
ownership of the learning.
 Self-assessment tasks shift the focus from
something imposed by someone else to a
potential partnership.
 Self-assessment emphasizes the formative
aspects of assessment.
 Self-assessment encourages a focus on process.
 Self-assessment can accommodate diversity of
learners' readiness, experience and
backgrounds.
 Self-assessment practices align well with the
shift in the higher education literature from a
focus on teacher performance to an emphasis on
student learning.
Peer Assessment
There are many variants of peer assessment,
but essentially it involves students providing
feedback to other students on the quality of their
work. In some instances, the practice of peer
feedback will include the assigning of a grade, but
this is widely recognized to be a process that is
fraught with difficulties. “Peer assessment requires
students to provide either feedback or grades (or
both) to their peers on a product or a performance,
based on the criteria of excellence for that product
or event which students may have been involved in
determining”. Peer learning builds on a process that
is part of our development from the earliest years of
life.
Use of Peer Assessment
 Peer feedback can encourage collaborative
learning through interchange about what
constitutes good work.
 If the course wants to promote peer learning
and collaboration in other ways, then the
assessment tasks need to align with this. It is
also important to recognize the extra work
that peer learning activities may require from
students through the assessment.
 Students can help each other to make sense of
the gaps in their learning and understanding
and to get a more sophisticated grasp of the
learning process.
 The conversation around the assessment
process is enhanced. Research evidence
indicates that peer feedback can be used very
effectively in the development of students'
writing skills.
 Students engaged in commentary on the work
of others can heighten their own capacity for
judgment and making intellectual choices.
 Students receiving feedback from their peers
can get a wider range of ideas about their
work to promote development and
improvement.
 Peer evaluation helps to lessen the power
imbalance between teachers and students and
can enhance the students' status in the
learning process.
 The focus of peer feedback can be on process,
encouraging students to clarify, review and
edit their ideas.

 It is possible to give immediate feedback, so


formative learning can be enhanced. Peer
assessment processes can help students learn
how to receive and give feedback which is an
important part of most work contexts.
 Peer assessment aligns with the notion that an
important part of the learning process is
gradually understanding and articulating the
values and standards of a “community of
practice".
Formative use of summative assessment
Summative assessment
(assessment of learning) is the assessment that
involves an evaluation of student achievement
resulting in a grade or a certification. Both
formative assessment (assessment for learning) and
summative assessment have vital roles to play in the
education of students, and although on the surface
they may not seem to have much in common, there
are identified ways they can work together to
improve student learning. Making formative use of
summative assessment means using information
derived from summative assessment to improve
future student performance.
For the teacher it involves:
 providing a range of assessment tasks and
opportunities to make certain that a range of student
learning styles are catered for
 teaching students to prepare more efficiently for
summative assessment by making use of knowledge
about themselves as learners
 making use of the results of summative assessment
so that learning is emphasised.
For the student it involves:
 developing the ability to identify 'where I am now'
and 'where I need to be'… and to prepare for
summative assessment accordingly
 recognising that summative assessment experiences
are an opportunity for further learning and a chance
to improve future performance.

TEST YOUR UNDERSTANDING


1. How will you assess students in a behaviourist
classroom?
2. Explain the benefits of CCE.
3. What are the uses of grading?
4. What is examination-based assessment?
5. What are the benefits of performance-based
assessment?
6. List out the uses of practice-based assessment.
7. Explain the importance of feedback through
marking.
8. Differentiate self- and peer assessment.
9. Explain the concept of formative use of
summative assessment.

UNIT - III
TOOLS & TECHNIQUES FOR CLASSROOM
ASSESSMENT
Assessment is a systematic process of
gathering information about what a student knows,
is able to do, and is learning to do. Assessment
information provides the foundation for decision-
making and planning for instruction and learning.
Assessment is an integral part of instruction that
enhances, empowers, and celebrates student
learning. Using a variety of assessment techniques,
teachers gather information about what students
know and are able to do, and provide positive,
supportive feedback to students. They also use this
information to diagnose individual needs and to
improve their instructional programs, which in turn
helps students learn more effectively.
Assessment must be considered during the
planning stage of instruction when learning
outcomes and teaching methods are being targeted.
It is a continuous activity, not something to be dealt
with only at the end of a unit of study. Students
should be made aware of the expected outcomes of
the course and the procedures to be used in
assessing performance relative to the learning
outcomes. Students can gradually become more
actively involved in the assessment process in order
to develop lifelong learning skills.
Evaluation refers to the decision making
which follows assessment. Evaluation is a judgment
regarding the quality, value, or worth of a response,
product, or performance based on established
criteria and curriculum standards. Evaluation
should reflect the intended learning outcomes of the
curriculum and be consistent with the approach
used to teach the language in the classroom. But it
should also be sensitive to differences in culture,
gender, and socio-economic background. Students
should be given opportunities to demonstrate the
full extent of their knowledge, skills, and abilities.
Evaluation is also used for reporting progress to
parents or guardians, and for making decisions
related to such things as student promotion and
awards.
Classroom Assessment is a systematic approach to
formative evaluation, used by instructors to
determine how much and how well students are
learning. Classroom assessment tools and
techniques and other informal assessment tools
provide key information during the semester
regarding teaching and learning so that changes can
be made as necessary. The central purpose of
Classroom Assessment is to empower both teachers
and their students to improve the quality of learning
in the classroom through an approach that is learner-
centered, teacher-directed, mutually beneficial,
formative, context-specific, and firmly rooted in
good practice. It helps for assessing course-related
knowledge and skills, learner attitudes, values and
self-awareness and for assessing learner reactions to
instruction.
In the classroom, teachers are the primary
assessors of students. Teachers design assessment
tools with two broad purposes: to collect
information that will inform classroom instruction,
and to monitor students’ progress towards achieving
year-end learning outcomes. Teachers also assist
students in developing self-monitoring and self-
assessment skills and strategies. To do this
effectively, teachers must ensure that students are
involved in setting learning goals, developing
action plans, and using assessment processes to
monitor their achievement of goals. The different
tools and techniques used in classroom
assessment are the following:
 Observation
 Self Reporting
 Testing
 Anecdotal Records
 Checklists
 Rating Scales
OBSERVATION
From the earliest history of scientific
activity, observation has been the prevailing
method of inquiry. Observation of natural
phenomena, aided by systematic classification and
measurement, led to the development of theories
and laws of nature's forces. Observation is one of
the most refined modern research techniques.
Observation seeks to ascertain what people think
and do by watching them in action as they express
themselves in various situations and activities. It
can be made progressively more scientific to meet
the needs of the particular situation, and observation
is a fundamental tool even at the most advanced
levels of science.
Observation is recognized as the most direct
means of studying people when one is interested in
their overt behavior. Observation is defined as “a
planned methodological watching that involves
constraints to improve accuracy.” According to
Gardner (1975), observation is "the selection,
provocation, recording and encoding of that set of
behaviours and settings concerning organisms 'in
situ' which are consistent with empirical aims."

CHARACTERISTICS OF OBSERVATION

1. Observation is at once a physical as well as a
mental activity.
2. Observation is selective and purposeful.
3. Scientific observation is systematic.
4. Observation is specific.
5. Scientific observation is objective.
6. Scientific observation is quantitative.
7. The record of observation is made immediately.
8. Observation is verifiable.
9. Behaviour is observed in natural surroundings.
10. It enables understanding of significant events
affecting the social relations of the participants.
11. It determines reality from the perspective of
the observed person himself.
12. It identifies regularities and recurrences in
social life by comparing data in one study
with those in another study.
13. It is focused on hypothesis-free inquiry.
14. It avoids manipulation of the independent
variable.
15. Observation involves some controls
pertaining to the observer and to the means he
uses to record data.

TYPES OF OBSERVATION

1) Casual & Scientific observation


An observation may be either casual or
scientific. Casual observation occurs without any
previous preparations. Scientific observation is
carried out with the help of tools of measurement.
2) Simple and systematic observation
Observation is found in almost all research
studies, at least in the exploratory stage. Such data
collection is often called simple observation; its
practice is not highly standardized. Systematic
observation, by contrast, employs standardized
procedures, trained observers and schedules for
recording.
3. Subjective and Objective Observation
When one observes one's own
immediate experience, it is called subjective
observation. When the observer is
an entity apart from the thing observed, that type of
observation is called objective observation.
4. Intra – subjective and inter subjective
observation
If repeated observation of a constant
phenomenon by the same observer yield constant
data the observation is said to be intra subjective. If
repeated observations of a constant phenomenon
by different observers yield constant data the
observation is said to be inter subjective
5. Direct and indirect observation
The direct observation describes the
situation in which the observer is physically present
and personally monitors what take place. Indirect
observation is used to describe studies in which the
recording is done by mechanical, photographic or
electronic means.
6. Structured and Unstructured observation
Structured observation is organised and
planned; it employs formal procedures, has a
set of well-defined observation categories, and is
subjected to high levels of control and
differentiation. Unstructured observation is
loosely organized, and the process is largely left to
the observer to define.
7. Natural and Artificial Observation
Natural observation is one in which
observation is made in natural settings while
artificial observation is one in which observation is
made under laboratory conditions.
8. Participant and Non-participant
observation
When the observer participates in the
activities of those under study, it is called participant
observation.
Merits: acquiring wide information, ease of
exchange, and clear observation of natural and real
behaviour.
Limitations: more time required, greater resources
required, and lack of objectivity.
When the observer does not actually
participate in the activities of the group to be
studied but is simply present in the group, it is
known as non-participant observation. The
observer in this method makes no effort to exert
influence or to create a relationship between himself
and the group.
Merits: acquiring information without influence,
maintaining impartial status, and maintaining
objectivity and a scientific outlook.
Limitations: inadequate and incomplete
observation, subjectivity, and unnatural attitude of
the subjects of observation.
Organization Of Field Observation
For valid and useful field observation, the
following steps have to be taken:
1. Determination of the method of study, i.e. the
field observation in relation to the
phenomena.
2. Determination of the nature and limits of
observation, i.e. the preparation of a plan of
observation.
3. Decision as to the directness of observation,
i.e. the relationship between the observer and
the subject must be direct.
4. Determination of the agency of field
observation: the person who makes the
observation may be the researcher himself or
field workers.
5. Determination of the time, place and subject of
study.
6. Provision of the mechanical appliances needed,
i.e. various instrumental aids like cameras
and maps.
7. Data collection, having arranged all the
necessary tools and equipment needed for the
research.
8. Data analysis: the data should be analyzed and
processed through classification, tabulation etc.
9. Generalization: the interpretation leading to
the drawing of general conclusions.
Steps In Observation
1. Selection of the topic: this refers to
determining the issue to be studied
through observation, e.g. marital
conflict, riots etc.
2. Formulation of the topic: this
involves fixing the categories to be
observed and pointing out the situations in
which cases are to be observed.
3. Research design: this determines the
identification of subjects to be observed,
preparing an observation schedule, if any,
and arranging entry into the situations to be
observed.
4. Collection of data: this involves
familiarization with the setting,
observation and recording.
5. Analysis of data: the researcher
analyzes the data, prepares tables and
interprets them.
6. Report writing: this involves writing
the report for submission to the
sponsoring agency or for publication.

Guidelines To Effective Observation

1. Obtain prior knowledge of what to observe.
2. Examine the general and specific objectives.
3. Define and establish categories; each
category or level of data being collected
should be concisely and carefully described
by indicating the phenomena the
investigator expects to find in each.
4. Observe carefully and critically.
5. Rate specific phenomena independently,
using a well-defined rating scale.
6. Devise a method of recording results, i.e. the
observation schedule.
7. Become well acquainted with the recording
instrument.
8. Observers should separate the facts from
their interpretation: they can observe the
facts first and make interpretations at a later time.
9. Observations are to be checked and verified,
wherever possible, by repetition or by
comparison with those of other competent
observers.

Instruments In Observation

Instruments such as the camera, stopwatch,
light meter, audiometer, SET meter, audio and
video tape recorders, mechanical counters, and other
devices like detailed field notes, checklists, maps,
schedules, score cards and socio-metric scales
make possible observations that are more precise
than mere sense observations. Such devices are also
referred to as techniques of control, as used in
controlled observation.

Process Of Observation

Observation involves three processes:
sensation, attention and perception. Sensation is
gained through the sense organs and depends
upon the physical alertness of the observer. Then
comes attention or concentration, which is largely a
matter of habit. The third is perception, which
comprises the interpretation of sensory reports.
Thus sensation merely reports, while the mind
recognizes and interprets the facts.
Qualities Of A Good Observer

 The observer should possess efficient sense
organs.
 The observer must be able to estimate rapidly and
accurately.
 The observer must possess sufficient alertness
to observe several details simultaneously.
 The observer must be able to control the effects
of his personal prejudices.
 The observer should be in good physical
condition.
 The observer must be able to record immediately
and accurately.
 The observer should be a visiting stranger, an
attentive listener, an eager learner or a participant
observer.

VARIOUS STEPS OF GOOD OBSERVATION

1. Intelligent planning
Intelligent planning is needed for
good observation: the observer should be fully
trained and well equipped, too many variables should
not be observed simultaneously, and the conditions of
observation should remain constant.
2. Expert execution
Expert execution demands utilizing the
training received in terms of expertness: proper
arrangement of special conditions for the subject,
occupying a suitable physical position for observing,
focusing attention on the specific, well-defined
activities, observing discreetly, keeping in mind the
length, number and intervals of observation decided
upon, and handling well the recording instruments to
be used.
3. Adequate recording
The recording should be as comprehensive
as possible, covering all the points and not missing any
substantive issues.
4. Scientific interpretation
The observations recorded
comprehensively need to be interpreted carefully,
so the adequacies and competencies required for this
need to be present in an observer. This alone
facilitates a good interpretation.

ADVANTAGES OF OBSERVATION

1. It allows the collection of a wide range of
information.
2. It is a flexible technique in which the research
design can be modified at any time.
3. It is less complicated and less time
consuming.
4. It approaches reality in its natural structure
and studies events as they evolve.
5. It is relatively inexpensive.
6. The observer can assess the emotional
reactions of subjects.
7. The observer is able to record the context
which gives meaning to the respondent's
expression.
8. The behaviour being observed in its natural
environment will not cause any bias.
9. It is a superior method for collecting data on
nonverbal behaviour.
10. It offers greater accuracy and reliability of data.
11. Results are more dependable and convincing.
LIMITATIONS OF OBSERVATION
1. Establishing the validity of observations is
always difficult.
2. The problem of subjectivity is also involved.
3. There is the possibility of distortion of the
phenomena through the very act of
observing.
4. It is a slow and laborious process.
5. The events may not be easily classifiable.
6. The data may be unmanageable.
7. It can be a costly affair.
8. It cannot offer quantitative generalisations.

SELF REPORTING

Self-reporting is one of the modern techniques
of assessing students' views and personality. It
gives a clear-cut idea about students' needs,
attitudes, wants, etc. A self-report is a type of
survey, questionnaire, or poll in which respondents
read the question and select a response by
themselves without researcher interference. A self-
report is any method which involves asking a
participant about their feelings, attitudes, beliefs
and so on. Examples of self-reports are
questionnaires and interviews; self-reports are often
used as a way of gaining participants' responses in
observational studies and experiments.
Questionnaires are a type of self-report
method consisting of a set of questions, usually
in a highly structured written form. Questionnaires
can contain both open questions and closed
questions and participants record their own
answers. Interviews are a type of spoken
questionnaire where the interviewer records the
responses. Interviews can be structured whereby
there is a predetermined set of questions or
unstructured whereby no questions are decided in
advance. The main strength of self-report methods
is that they allow participants to describe their own
experiences rather than having these inferred from
observation. Questionnaires and
interviews are often able to study large samples of
people fairly easily and quickly. They are able to
examine a large number of variables and can ask
people to reveal behaviour and feelings which have
been experienced in real situations. However
participants may not respond truthfully, either
because they cannot remember or because they wish
to present themselves in a socially acceptable
manner. Social desirability bias can be a big
problem with self-report measures as participants
often answer in a way to portray themselves in a
good light. Questions are not always clear, and if
the respondent has not really understood the
question, we would not be collecting valid data.
If questionnaires are sent out, say via email or
through tutor groups, response rate can be very low.
Questions can often be leading. That is, they may be
unwittingly forcing the respondent to give a
particular reply.
Unstructured interviews can be very time
consuming and difficult to carry out whereas
structured interviews can restrict the respondents’
replies. Therefore psychologists often carry out
semi-structured interviews which consist of some
pre-determined questions and followed up with
further questions which allow the respondent to
develop their answers.
Closed questions are questions which provide
a limited choice (for example, a participant’s age or
their favourite type of football team), especially if
the answer must be taken from a predetermined list.
Such questions provide quantitative data, which is
easy to analyse. However these questions do not
allow the participant to give in-depth insights. Open
questions are those questions which invite the
respondent to provide answers in their own words
and provide qualitative data. Although these types of
questions are more difficult to analyse, they can
produce more in-depth responses and tell the
researcher what the participant actually thinks,
rather than being restricted by categories.
One of the most common rating scales for
self-reporting is the Likert scale. A statement is
used and the participant decides how strongly they
agree or disagree with the statements. One strength
of Likert scales is that they can give an idea about
how strongly a participant feels about something.
This therefore gives more detail than a simple yes
no answer. Another strength is that the data are
quantitative, which are easy to analyse statistically.
The great advantage of self-reporting is that it
gives respondents a free environment in which to
respond and show their emotions. At the same time,
there may be a possibility of respondents hiding
their natural emotions depending on the situation.
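The quantitative convenience of Likert data mentioned above can be shown with a short sketch. The 1-5 coding and the responses below are invented for illustration; they simply demonstrate how agreement levels become numbers that are easy to summarise.

```python
from collections import Counter

# Likert responses coded 1 (strongly disagree) to 5 (strongly agree).
# These participant responses are hypothetical.
responses = [4, 5, 3, 4, 2, 5, 4]  # one number per participant

# The mean shows the overall strength of agreement.
mean_score = sum(responses) / len(responses)

# A frequency count shows how responses are distributed across the scale.
distribution = Counter(responses)

print(round(mean_score, 2))   # prints 3.86
print(distribution[4])        # 3 participants chose "agree"
```

Because the data are already numeric, standard statistical summaries (means, frequencies, comparisons between groups) follow directly, which is exactly the strength of the Likert format noted in the text.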
ANECDOTAL RECORDS
A fundamental purpose of assessment is to
communicate what the child knows and is able to
do. Teacher-generated, anecdotal records provide
an insider’s perspective of the child’s educational
experience. This perspective is vital to
communication with the child and the child’s family
about academic progress. Anecdotal records also
facilitate assessment conversations as educational
professionals describe their observations of student
learning and consider ways to develop appropriate
strategies to build on strengths and address
academic needs. The more focused the
observational records, the more helpful they can be
in making daily decisions about instructional
approaches.
Anecdotal Records are collections of
narratives involving first-hand observations of
interesting, illuminating incidents in children’s
literacy development. Anecdotal records are reports
of the teacher's informal observations of
students. They help the teacher to collect details
regarding students' behaviour in different
situations. They can be a good tool for bringing
about positive behavioural patterns through daily
observation and correction. They involve the
following information:
 Social interactions and literacy exchanges that
teachers have observed
 Children’s everyday routines, such as what they
choose to do in center workshops; a particular
writing topic in a journal or on a sheet of paper
during independent writing time; the book they
choose during independent reading time; and
when they spend time with blocks, sand, painting,
or other forms of creative expression
 Children’s learning styles
 Recurring patterns in children’s ways of
understanding
 Changes in children’s behaviors
 Milestones in children’s development
Steps Involved In Preparation Of Anecdotal
Records
Teachers basically use the following steps in
the preparation of anecdotal records:
1. Observing children in instructional settings:
Formal and informal observation is the starting point
in the preparation of anecdotal records.
2. Maintaining a standards-based focus:
Follow established criteria as standards at the time of
observation.
3. Making anecdotal records :
Writing quality anecdotal records is facilitated by
keeping in mind the following considerations: Write
observable data, use significant abbreviations, write
records in the past tense.
4. Managing anecdotal records :
Once the records are coded for strengths,
needs, or information, simply list an abbreviated
summary of the strengths and the needs in the space
provided below the records. Separating the records
into strengths and needs allows the teacher to
summarize what patterns are being exhibited by the
student. The summary also helps clarify and
generate appropriate instructional
recommendations.
5. Analysis of anecdotal records:
Anecdotal records assessment is informed by
comparing the standards to the child’s performance.
The standards also inform the selection of strategies
and activities for instructional recommendations.
Periodically, analyze the compiled records for each
student. The time between analyses may vary
according to your own academic calendar.
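Step 4 above, separating coded records into strengths and needs before summarising, can be sketched as a simple grouping operation. The record texts and the "S"/"N" codes below are hypothetical examples.

```python
# Each anecdotal record is coded as a strength ("S") or a need ("N").
# The texts and codes are invented for illustration.
records = [
    ("S", "Retold the story in correct sequence"),
    ("N", "Reversed b/d when writing"),
    ("S", "Chose a book independently during reading time"),
    ("N", "Needed prompting to start writing"),
]

# Group the records so patterns of strengths and needs become visible.
grouped = {"S": [], "N": []}
for code, text in records:
    grouped[code].append(text)

print(len(grouped["S"]), len(grouped["N"]))  # prints 2 2
```

The grouped lists correspond to the abbreviated summary of strengths and needs that the teacher writes below the records, from which instructional recommendations can be generated.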
RATING SCALE
Rating scale is one of the scaling techniques
applied to the procedures for attempting to
determine quantitative measures of subjective,
abstract concepts. It gives an idea of the personality
of an individual, as the observer judges the behaviour
of a person on a limited number of aspects or traits.
Rating means the judgment of one person by
another. "Rating is in essence directed
observation," writes Ruth Strang. A.S. Barr and
others define: "Rating is a term applied to expressions
of opinion or judgment regarding some situation,
object or character. Opinions are usually expressed
on a scale of values. Rating techniques are devices
by which such judgments may be quantified."
A rating scale is a method by which we
systematize the expression of opinion concerning a
trait. The ratings are done by parents, teachers, a
board of interviewers and judges and by the self as
well.
Rating is a term applied to expression of
opinion or judgment regarding some situation,
object or character. Opinions are usually expressed
on a scale of values.
Rating scale refers to a set of points which
describe varying degrees of the dimension of an
attribute being observed.
CHARACTERISTICS
There are two characteristics of a rating
scale:
1. a description of the characteristics to be
rated, and
2. some method by which the quality,
frequency or importance of each item to be
rated may be given.
PRINCIPLES GOVERNING RATING SCALE
1. The trait to be rated should be readily observable.
2. The specific trait or mode of behavior must be defined properly. For example, if we want to rate a child’s originality in performing a task, we must first formulate a definition of ‘originality’ and then try to rate it.
3. The scale should be clearly defined, i.e., whether we are rating on a three-, four- or five-point scale.
4. Uniform standards of rating scale should be
observed.
6. The rater should observe the ratees in different situations involving the trait to be rated.
6. The number of characteristics to be rated
should be limited.
7. On the rating scale card, some space may be provided for the rater to write supplementary remarks.
8. The directions for using the rating scale should be clear and comprehensive.
9. Several judges may be employed to
increase the reliability of any rating scale.
10. Well-informed and experienced persons should be selected for rating.
TYPES OF RATING SCALE
A number of rating techniques have been
developed which enable the observers to assign
numerical values or ratings to their judgments of
behavior.
According to Guilford (1954, p. 263), these techniques have given rise to five broad categories of rating scale:
1. Numerical scale (Itemized rating scale)
2. Graphic scale
3. Standard scale
4. Rating by cumulative points
5. Forced choice ratings.
Numerical Scale
In the typical numerical scale, a sequence of defined numbers is supplied to the rater or observer, who assigns an appropriate number to each stimulus.
E.g., Guilford (1954, p. 263) used the following scale in obtaining ratings of the affective values of colours and odours:
10. Most pleasant imaginable
9. Most pleasant
8. Extremely pleasant
7. Moderately pleasant
6. Mildly pleasant
5. Indifferent
4. Mildly unpleasant
3. Moderately unpleasant
2. Extremely unpleasant
1. Most unpleasant
0. Most unpleasant imaginable
Thus in a typical numerical scale, numbers are assigned to each trait. If it is a seven-point scale, the number 7 represents the maximum amount of that trait in the individual and 1 the minimum.
Numerical rating scales are the easiest to construct and to apply, and the simplest in handling the results. But these rating scales are often rejected in favor of other types because they are believed to suffer from many biases and errors.
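The arithmetic behind a numerical scale is simple enough to sketch. The fragment below is an illustrative sketch only (the trait name, judges and scores are invented): several judges rate the same pupil on the same numerical scale, and their ratings are averaged, since employing several judges, as principle 9 above suggests, increases reliability.

```python
# Hypothetical sketch: averaging several judges' numerical-scale ratings.

def average_rating(ratings, low=1, high=7):
    """Return the mean of several judges' ratings on a numerical scale."""
    for r in ratings:
        if not (low <= r <= high):
            raise ValueError(f"rating {r} is outside the {low}-{high} scale")
    return sum(ratings) / len(ratings)

# Three hypothetical judges rate a pupil's 'co-operativeness' on a 7-point scale.
judges = [6, 5, 7]
print(average_rating(judges))  # 6.0
```

Averaging across judges dampens the individual biases discussed later under errors in rating.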
Graphic Scale
The graphic scale is the most popular and widely used type of rating scale. In this scale a straight line is shown, vertically or horizontally. The line is either segmented into units or continuous. Scale points with brief descriptions may be indicated along the line.
There are many advantages of the graphic scale:
- Simple and easy to administer
- Requires little added motivation
- Provides opportunity for fine discrimination
It has certain limitations also. Respondents may check at almost any position along the line, which may increase the difficulty of analysis, and the meaning of terms like ‘very much’ and ‘somewhat’ may depend upon the respondent’s frame of reference.
Standard Scales
In standard scales a set of standards is presented to the rater. The standards are usually objects of some kind to be rated, with pre-established scale values. The man-to-man scale and the portrait-matching scale are two other forms that conform more or less to the principle of standard scales. The man-to-man scale has been used in connection with military personnel. The portrait-matching technique was first used in connection with the studies of character by Hartshorne and May (1929).
Rating by Cumulative Points
Here the rater is asked to give the percentage of the group that possesses the trait on which the individual is rated.
Forced-Choice Ratings
In this method, the rater is asked not to say whether the ratee has a certain trait, nor how much of a trait the ratee has, but essentially whether the ratee has more of one trait than another of a pair. In the construction of a forced-choice rating instrument, descriptions are obtained concerning persons recognized as being at the highest and lowest extremes of the performance continuum for the particular group to be rated. The descriptions are analyzed into simple behavior qualities stated in very short sentences, called ‘elements’ by Sisson (1945), and preference values are determined for each element. In forming an item, elements are paired: two statements or terms with the same high preference value are paired, one of which is valid and the other not; two statements or terms with about equally low preference value are also paired, one being valid and the other not.
USE AND ADVANTAGES OF RATING SCALES
1. Helpful in measuring specified outcomes or objectives of education.
2. Helpful in supplementing other sources of
understanding about the child.
3. Helpful in their stimulating effect upon the individuals who are rated.
4. Helpful in writing reports to parents
5. Helpful in filling out admission forms.
6. Helpful in finding out student’s needs
7. Helpful in making recommendations to the
employers.
8. Helpful to the student in rating himself.
LIMITATIONS
1. Some characteristics are more difficult to rate than others.
2. A subjective element is present.
3. Lack of opportunities to rate students.
4. Raters tend to be generous.
ERRORS IN RATING
Rating scales have several limitations. Some
of them are discussed as under.
a) Generosity Error.
Sometimes raters would not like to bring down their
own people by giving them low ratings. The result
is that high ratings are given in almost all cases.
Such an error is known as generosity error.
b) Stringency Error
The opposite of generosity error may be called
stringency error.
Some raters have a tendency to rate all individuals
low.
c) Halo Error: ‘Halo’ means a tendency to rate in terms of general impressions about the ratee formed on the basis of some previous performance.
d) Error of Central Tendency: There is a tendency in some observers to rate all or most of the ratees near the midpoint of the scale. They would like to put most of the ratees as ‘average’.
e) The Logical Error: Such an error occurs when the characteristic or trait to be rated is misunderstood.
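The generosity, stringency and central-tendency errors above all show up in the distribution of a rater's scores, so they can be screened for numerically. The sketch below is purely illustrative: the 0.5 thresholds and the sample ratings are assumptions, not established cut-offs.

```python
# Hypothetical sketch: flagging possible rating errors from the
# distribution of one rater's scores on a 5-point scale.
from statistics import mean, pstdev

def diagnose_rater(ratings, low=1, high=5):
    """Return a rough diagnosis of a rater's score distribution."""
    mid = (low + high) / 2
    m, spread = mean(ratings), pstdev(ratings)
    if m >= high - 0.5:
        return "possible generosity error"        # nearly everyone rated high
    if m <= low + 0.5:
        return "possible stringency error"        # nearly everyone rated low
    if abs(m - mid) < 0.5 and spread < 0.5:
        return "possible central-tendency error"  # everyone near the midpoint
    return "no obvious distributional error"

print(diagnose_rater([5, 5, 4, 5, 5]))  # possible generosity error
print(diagnose_rater([3, 3, 3, 3, 2]))  # possible central-tendency error
```

Such a screen cannot detect the halo or logical errors, which concern the meaning of the trait rather than the shape of the score distribution.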
CHECK LIST
A checklist is a simple device consisting of a prepared list of items which are thought by the researcher to be relevant to the problem being studied. A checklist is a selected list of words, phrases, or sentences following which an observer records a check (✓) to denote the presence or absence of whatever is being observed. When we want to assess whether some traits are present or absent in the behavior of an individual, we can use the checklist method. It consists of a number of statements on various traits of personality, and the statement which applies to the individual is checked.
Thus responses to checklist items are a matter of ‘fact’, not of ‘judgment’. The checklist is an important tool in gathering facts for educational surveys, that is, for checking library, laboratory and games facilities, school buildings, textbooks, instructional procedures, etc. Checklists are sometimes used in the form of a questionnaire, which is completed by the respondent rather than by the observer.
CONSTRUCTION OF A CHECKLIST
The items, once determined, may be arranged in a logical and psychological order. There are various ways of writing and arranging the items in a checklist. Kempler (1960) has suggested four ways, and the researcher may make use of all or some of them to serve his purpose best.
1. The form in which the observer or respondent is asked to check all items found in a situation. For example: put a tick mark (✓) in the blank provided before each game played in your school.
___ Football
___ Hockey
___ Cricket
___ Volleyball
___ Basketball
2. The form in which questions with ‘yes’ or ‘no’ answers are to be encircled, underlined or checked in response to the item given. E.g., Does your university have a Teachers’ Union? Yes/No.
3. The form in which items are positive statements and the respondent or observer is asked to put a tick mark (✓) in the space provided.
E.g., Our school has a students’ union.
4. The form where items can best be put in sentences and the observer or respondent is asked to check, underline or encircle the appropriate word/words. E.g., The school organizes debates weekly / fortnightly / monthly / annually / irregularly.
The items of the checklist should be phrased in such a way that they are discriminative in quality; this will increase the validity of the checklist. A preliminary tryout of the checklist may also prove helpful in making the tool more objective.
ANALYSIS AND INTERPRETATION OF
CHECK LIST RESPONSES.
The tabulation, quantification and interpretation of checklist responses are done in very much the same way as those of questionnaire responses.
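The tabulation just mentioned amounts to counting, item by item, how many respondents checked each option. The following sketch is illustrative only; the items (library, laboratory, playground) and the three response sheets are invented data.

```python
# Hypothetical sketch: tallying yes/no checklist responses item by item.
from collections import Counter

# Each dict represents one completed checklist (respondent's sheet).
responses = [
    {"library": "yes", "laboratory": "yes", "playground": "no"},
    {"library": "yes", "laboratory": "no",  "playground": "no"},
    {"library": "no",  "laboratory": "yes", "playground": "no"},
]

# Tally the answers for every item across all sheets.
tally = {}
for sheet in responses:
    for item, answer in sheet.items():
        tally.setdefault(item, Counter())[answer] += 1

for item, counts in tally.items():
    print(item, dict(counts))  # e.g. library {'yes': 2, 'no': 1}
```

The same loop works for questionnaire items with fixed choices, which is why the two tools are tabulated alike.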
PROJECTIVE TECHNIQUES
The word projection has been described in many ways. According to Coville, Costello and others, it is “the mechanism by which the individual protects himself from awareness of his own undesirable traits or feelings by attributing them to others.”
Projection, according to Freud, means externalizing conflicts or other internal conditions that have given rise to conscious pain and anxiety. Projective tests of personality assessment are those which evoke responses from the unconscious and provide an opportunity to probe into the depths of the unconscious part of an individual’s personality.
DEFINITIONS OF PROJECTIVE TECHNIQUES
Lindzey (1961) defines: “A projective technique is an instrument that is considered especially sensitive to covert or unconscious aspects of behavior; it permits or encourages a wide variety of subject responses, is highly multidimensional, and evokes unusually rich response data with a minimum of subject awareness concerning the purpose of the test.”
Frank (1939) described projective techniques as a kind of “X-ray” into those aspects of personality which subjects either cannot or will not openly reveal.
CHARACTERISTICS OF PROJECTIVE
TECHNIQUES.
1. Ambiguous material: Projective tests often use ambiguous material to which the subject must respond freely, often in descriptive form. Ambiguous material means that every subject can interpret the test stimuli in his own way.
2. Evoke responses from the unconscious: The test stimuli evoke responses from the unconscious of the subject. The subject projects his inner feelings into the test situation.
3. Multidimensionality of responses: The dimensions in which the subject can respond are as various as the physical, intellectual, social and emotional. There is more freedom to respond to the unstructured stimuli of the tests, and it is possible for the subject to make a great variety of responses to the test task.
4. Freedom to respond: The projective techniques provide the subject full freedom in responding to the test stimuli. He is not restricted as regards the nature of responses.
5. Holistic approach: Projective tests attempt to study the totality of behavior. They do not explore the molecular behavior of the individual; they emphasize the molar approach to understanding personality.
6. Answers are not right or wrong: The responses of the subject are not scored or evaluated as right or wrong; they are evaluated qualitatively.
7. Purpose of the test is disguised. The purpose
of the test is not disclosed to the subject
otherwise he becomes test conscious and
may hide his real feelings.
TYPES OF PROJECTIVE MEASURES
Pictorial Techniques
• Rorschach Inkblot Test
• Thematic Apperception Test (TAT)
• Pictures
Verbal Techniques
• Story or Sentence Completion Test
• Word Association Test (WAT)
Play Techniques
• Doll play
Psychodrama or Sociodrama Techniques
• Role playing
Rorschach Inkblot Test
This is the best-known projective technique, developed by the Swiss psychiatrist Hermann Rorschach in 1921. In this test ten standard cards, each bearing an inkblot representing different diagnostic categories, are administered to subjects, who are then asked to interpret and describe what they see, i.e., the images that arise in their minds. The test administrator notes down this description for subsequent analysis. The scoring is done objectively on the basis of colour, form, movement, content, speed and originality. Scores can be categorized under three heads:
1. Location
2. Contents
3. Determinants
Location involves whether the response refers to the whole blot or to a part of it. Determinants include shape, colour, shading and movement; contents include human figures, animal figures, etc.
The Rorschach technique has been used in the clinical study of personality, as also of some aspects of the subject’s mental life, adjustment processes, depression, defence mechanisms, etc.
Thematic Apperception Test (TAT)
This test was devised by Morgan and Murray in 1935. It consists of 20 pictures, each ambiguous enough to permit a variety of interpretations. Presenting a picture, the tester asks the testee to make up a story of what is happening in it. Most people, when they make up such stories, identify themselves with one of the characters in the picture, and their stories may be little more than thinly disguised autobiographies. It takes about an hour to administer the test, and the testee may be asked to appear for an interview afterwards.
The stories are analysed to know the testee’s attitudes, wishes and mental life; they reflect the repressed motivations of the subject. The test is more useful in knowing general personality than in diagnosis, and it can be used with the Rorschach to obtain better results. The Children’s Apperception Test, in which pictures of animals are used, has been made for children.
Each story is scored under four main categories: vectors, levels, conditions and qualifiers.
Vectors: drives, feelings, direction of behaviour.
Levels: object description, wish, intention, night dream.
Conditions: psychological, physical and social conditions, valences, depression, anxiety, security.
Qualifiers: temporal characteristics, contingency, causality, negation.
This test is employed in clinical studies of maladjusted and abnormal subjects as well as of normal groups, and it permits wide quantitative and qualitative study of frustration and modes of adjustment.
Pictures: Instead of using dolls, the researcher presents pictures to the child and asks questions about them. One could present pictures of rural and urban persons, Rajasthani and Gujarati females, Hindus and Muslims, Brahmins and Dalits, and so on, and ask with whom the child would like to play.
VERBAL TECHNIQUES
Story or Sentence Completion Test
Lindzey calls this the completion technique. The respondents are given incomplete stories or sentences for completion. In a story, the end is not given and the children are asked to finish it; a partial sentence is to be completed with the first word or phrase that comes to mind. For example:
• A female teacher should be …………
• A male teacher should not be ……….
• A good housewife is …………….
• An efficient manager is ………………
• When someone interferes in my studies, I feel ………..
Word Association Test (WAT)
Lindzey calls this the association technique. In this test, the subject is given a list of words, one at a time, and asked to link each with the word that immediately comes to his mind. These words are recorded. For example, a teacher is asked about the roles which a teacher is expected to perform. It is not necessary that all respondents will point out all the roles a teacher performs, say, to teach, to guide, to control, to inculcate values, and so on; every respondent will answer the question as he perceives it. A doctor may be described as commercial-minded, greedy, inefficient, careless; a vegetable seller as a cheat, liar, greedy, impolite; a college or university lecturer or professor these days as a politician, a class-cutting person asking for more and more pay and privileges and less and less interested in studies, research, publications and seminars/conferences.
It is assumed that the respondent’s first thought is a spontaneous answer, because the subject does not have much time to think about it. It is only in a free association process that the person reveals his inner feelings about the subject. Word association tests are also affected by elapsed time. If a man who witnesses someone assaulting a young girl is immediately asked how to deal with the assaulter, his immediate reply could be “severe, retributive and deterrent punishment”; but if he is asked the same question after a month or so, he might only say, “he should be punished.”
PLAY TECHNIQUE
DOLL PLAY
This projective method is used extensively both in therapy and in data-gathering interviews. For example, an interviewer studying sibling rivalry can set up a scene containing a mother doll breast-feeding a baby, with a child doll looking on. The investigator then asks the child what happens when the child doll encounters the mother and baby (Yarrow, 1960: 584). Dolls have also been used extensively in studying prejudices.
PSYCHODRAMA OR SOCIODRAMA TECHNIQUE
Role Playing
Sometimes students in a college are asked to organize a ‘mock parliament’ session, and different students are asked to play the roles of Speaker, Prime Minister, Foreign Minister, Opposition Leader, MPs of different political parties, an independent MP and so on. This is called a third-person technique because it is a dynamic re-enactment of another person’s behavior in a given situation: the role player acts out someone else’s behavior in a particular setting. Many a time a student is asked to perform a teacher’s task; this technique can be used to determine the true feelings of a student about a teacher in a class situation. Role playing is particularly useful in investigating situations where interpersonal relationships are the subject of the research, e.g., husband–wife, shopkeeper–customer, employer–employee, officer–clerk, etc.
ADVANTAGES OF PROJECTIVE
TECHNIQUES
1. An individual reveals himself in various situations, sometimes without being aware of the fact; thus we get reliable information.
2. The connection between diagnosis and the situation is very close.
3. It is not possible for the individual to give ready-made, habitual or conventional responses, as the tasks presented are novel and unstructured.
4. These techniques encourage spontaneous responses.
5. They enable us to have a total view of the personality of an individual rather than a piecemeal one.
LIMITATIONS
- They are very subjective.
- They require a lot of training in their administration; only trained psychologists can administer them.
- They are time-consuming.
- They are difficult to interpret.
- There are very few standardized tests.
QUESTIONNAIRE
A questionnaire is a structured set of questions. It is described as “a document that contains a set of questions, the answers to which are to be provided personally by the respondents.” It is a device for securing answers to questions by means of a form which the respondent fills in by himself. It is the most flexible tool for collecting both quantitative and qualitative information.
A questionnaire cannot be judged as good or bad, efficient or inefficient, unless the job it was intended to accomplish is known. Developing a questionnaire requires a certain amount of technical knowledge. While framing a questionnaire, the researcher must decide points like the method of data collection, the procedure to be followed in approaching the respondent, the order or sequence of questions, and structured vs. unstructured questions.
Scope of Questionnaire
1. When very large samples are desired.
2. When costs have to be kept low.
3. When the target groups are specialized and likely to have high response rates.
4. When ease of administration is necessary.
5. When a moderate response rate is considered satisfactory.
It has been used for a wide range of problems, like:
1. The problems of teacher training.
2. Administrative difficulties.
3. Suitability of the curriculum.
4. Methods of teaching.
5. Study habits.
6. Testing of achievement.
7. Duties and difficulties of teachers.
8. Rating of school textbooks, etc.
Characteristics of A Good Questionnaire.
1. It deals with an important or significant topic, so that it enthuses the respondent to give responses. Its significance is carefully stated on the questionnaire itself.
2. It seeks only that data which cannot be obtained from sources like books, reports and records.
3. It is as short as possible, because long questionnaires are frequently thrown into the waste-paper basket.
4. It is at the same time as comprehensive as necessary, so that it does not leave out any relevant and crucial information.
5. It is attractive in appearance, neatly arranged and clearly duplicated or printed.
6. Directions are clear and complete, important terms are clarified, each question deals with a single idea and is worded as simply and clearly as possible, providing an opportunity for easy, accurate and unambiguous responses.
7. The questions are objective, with no clues, hints or suggestions as to the responses desired. Leading questions are carefully avoided.
8. Questions are presented in good psychological order, proceeding from general to more specific responses.
9. Offending, annoying or embarrassing questions are avoided as far as possible.
10. Items are arranged in categories to ensure easy and accurate responses.
11. Descriptive adjectives and adverbs that have no agreed-upon meaning are avoided.
12. Double negatives are also avoided.
13. The questions carry an adequate number of alternatives.
14. Double-barrelled questions, i.e., putting two questions in one, are also avoided.
15. It is easy to tabulate, summarize and interpret.
Various Forms of Questionnaire
Questions in a questionnaire may vary with respect to a number of criteria.
1. Primary, Secondary and Tertiary Questions
On the basis of the nature of information elicited, questions may be classified as primary, secondary and tertiary. Primary questions elicit information directly related to the research topic. Secondary questions elicit information which does not relate directly to the topic, i.e., information of secondary importance. Tertiary questions only establish a framework that allows convenient data collection and sufficient information without exhausting or biasing the respondent.
2. Closed-ended and Open-ended Questions
Closed-ended questions are fixed-choice questions. They require the respondent to choose a response from those provided by the researcher. They are easy to fill out, take less time, keep the respondent on the subject, are relatively more objective, more acceptable and convenient to the respondent, and are fairly easy to tabulate and analyse.
Open-ended questions permit respondents to answer in their own words; the subject reveals his mind and gives his responses freely. This type of item is sometimes difficult to interpret, tabulate and summarize in the research report.
3. Structured and Non-structured Questions
Structured questionnaires contain definite, concrete and direct questions, whereas non-structured ones may consist of partially completed questions or statements. A non-structured questionnaire is often used as an interview guide, which is non-directive: the interviewer possesses only a blueprint of the enquiries and is largely free to arrange the form or wording of the questions.
Steps in Questionnaire Construction
Questionnaires are constructed in a systematic manner. The process goes through a number of interrelated steps:
1. Preparation: The researcher thinks of the various items to be covered in the questionnaire and the arrangement of these items in relation to one another.
2. Constructing the first draft: The researcher formulates a number of questions, including all types of questions.
3. Self-evaluation: The researcher considers the relevance of the items, clarity of language, etc.
4. External evaluation: The first draft is given to one or two experts/colleagues for scrutiny and suggestions for changes.
5. Revision: After receiving suggestions, some questions are eliminated, some changed and some added.
6. Pre-test or pilot study: A pre-test is undertaken to check the suitability of the questionnaire as a whole.
7. Revision: Minor and major changes may be made on the basis of the experience gained in pre-testing.
8. Second pre-test: The revised questionnaire is then subjected to a second test and amended if necessary.
9. Preparing the final draft: After editing, checking spelling, space for responses and pre-coding, the final draft is prepared.
Administering the Questionnaire
It can be administered in several ways:
1. Self-administered questionnaire: There are two types of self-administered questionnaires.
a) Self-administered questionnaires in the presence of the researcher: The presence of the researcher is helpful in that it enables any queries or uncertainties to be addressed immediately with the questionnaire designer.
b) Self-administered questionnaires without the presence of the researcher: The absence of the researcher helps the respondents to complete the questionnaire in private, devoting as much time as needed in familiar surroundings. It can be inexpensive to operate.
2. Postal questionnaires: The postal questionnaire is the best form of survey in an educational inquiry. For a postal questionnaire, use a good-quality envelope, typed and addressed to a named person wherever possible, and a first-class, rapid postage service to send the questionnaire. Also enclose a first-class stamped envelope for the respondent’s reply.
3. Telephone: Respondents can be contacted at a convenient time, even in the evening, and responses can be recorded by machine.
4. Internet: This is conducted with the help of computers. It can be administered only between persons who both have computer and internet facilities.
Advantages of Questionnaire
1. It has great potential when properly used; without it, progress in many areas of education would be greatly handicapped.
2. It is an economical way for educators to collect information.
3. It permits nation-wide or even international coverage.
4. It can cover a large group at the same time.
5. It is easy to plan, construct and administer.
6. Once it has been constructed skillfully, the investigator may ask anybody to administer it on his behalf.
7. Confidential information may often be obtained more readily by means of a questionnaire.
8. It places less pressure on the subject for immediate response.
9. It helps in focusing the respondent’s attention on all the significant items.
10. It may be used as a preliminary tool for conducting an in-depth study later by some other method.
Limitations of Questionnaire
1. Mailed questionnaires can be used only with educated people, which restricts the number of respondents.
2. The return rate of questionnaires is low.
3. The mailing addresses may not be correct, which may omit some eligible respondents.
4. Different respondents may interpret the same questions differently.
5. As the researcher is not present to explain the meaning of certain concepts, the respondent may leave questions blank.
6. It does not provide an opportunity for collecting additional information.
7. The respondent can consult others before filling in the questionnaire, so the responses cannot be considered as his own views alone.
8. There is a lack of depth or of probing for a more specific answer.
SOCIOGRAMS
Social interaction plays an important role in the development of the personality of an individual. Children in school situations mostly interact in groups. Teachers, parents, social workers, psychologists and other persons who are interested in the improvement of social relations must study the mechanisms that operate in social interaction. To deal effectively with social groups one must study the dynamics of social behavior.
Sherif and Sherif, in their book on social psychology, defined a group as “a social unit consisting of a number of individuals who stand in role and status relationships to one another, stabilized in some degree at the time, and who possess a set of values or norms of their own regulating their behavior, at least in matters of consequence to the group.”
CHARACTERISTICS OF CLASS AS A
GROUP
A class in the school fulfills all the
characteristics of a group. The class has the
following essential properties which make it a
group in the psychological sense:
1. A common goal
2. Organised structure
3. Motivation
4. Leadership
MEASUREMENT OF SOCIAL RELATIONS - SOCIOMETRY
An Austrian psychologist, J.L. Moreno, invented the technique of sociometry. Sociometry is the study of those aspects of the socio-emotional climate in the classroom having to do with the feelings of attraction, rejection and indifference which pupils show when faced with situations calling for interaction within the classroom.
A few weeks after the commencement of school, the teacher has to conduct this test. It is not really a test like an intelligence test; it is meant to test the reactions of students among themselves. Within a few weeks each one would have known one another sufficiently to get close as friends or to maintain a distance. The teacher has to prepare an open-ended questionnaire. This could be administered quite informally in one of the class hours, assuring students of the utmost confidentiality of their responses. They should be urged to be frank and forthright.
Sometimes students may be asked to state the names of three classmates for each question, in order of preference. Students tend to be a little reserved in the beginning, particularly in giving their negative choices. Tact is needed on the part of teachers to establish rapport and trust that the responses would never be leaked out. The responses are recorded on a rectangular card on which the student writes his name at the top, along with the question number and his choices of class-fellows, so that they can be easily processed and tabulated. On the basis of the students’ reactions the teacher can prepare a socio-matrix.
[Socio-matrix (illustrative): a table with the choosers A–E across the top and the chosen students A–E down the side. Each cell records +1 for a positive choice and −1 for a rejection, and the column totals (here 3, −1, +1, −3) give each student’s net score of acceptance or rejection.]
Each card is checked and the choices entered in the matrix in the form of tallies, so that in a class of 40 students there would be 40 squares horizontally and 40 vertically, making a total of 1600 squares, of which the 40 squares on the diagonal from the top left to the bottom right would be eliminated. The total for each student can then be counted and entered. This gives a measure of acceptance or popularity for positive responses, and of rejection or unpopularity for negative responses. Some students might fear that revelation of their negative choices would invite trouble from bullies and embitter relationships. If teachers can ensure the confidentiality of pupil responses and avert leakage of preferences, students can be persuaded to fill in both sets of questions.
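The tallying just described can be sketched in a few lines of code. The fragment below is a rough illustration of the procedure, not a record of any real class: the pupil names and the (chooser, chosen, ±1) choice triples are invented, with +1 recording a positive choice and −1 a rejection, as in the matrix above.

```python
# Hypothetical sketch: building a socio-matrix from choice cards and
# totalling each column to get every pupil's net acceptance score.

pupils = ["A", "B", "C", "D", "E"]
# (chooser, chosen, value) triples transcribed from the response cards.
choices = [("A", "B", +1), ("B", "C", +1), ("C", "B", +1),
           ("D", "A", +1), ("E", "B", +1), ("A", "D", -1),
           ("B", "D", -1), ("C", "D", -1)]

# matrix[chooser][chosen] holds the tally for that cell; the diagonal stays 0.
matrix = {p: {q: 0 for q in pupils} for p in pupils}
for chooser, chosen, value in choices:
    matrix[chooser][chosen] += value

# Column totals: how often each pupil was chosen (+) or rejected (-).
totals = {p: sum(matrix[q][p] for q in pupils) for p in pupils}
print(totals)  # {'A': 1, 'B': 3, 'C': 1, 'D': -3, 'E': 0}
```

In this invented class, B's total of +3 marks a popular pupil and D's −3 a rejected one, which is exactly the reading the teacher makes from the column totals of the socio-matrix.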
SOCIOGRAMS
The matrix can also be represented in the form of a diagram called a sociogram. To draw such a diagram, a few rules have to be followed. E.g., if ‘A’ likes ‘B’, it is represented by an arrow from A to B (A → B). If ‘B’ in turn likes ‘A’, it becomes a reciprocated choice (A ↔ B). If ‘A’ dislikes ‘B’, it is represented by a broken line (A ----> B). If ‘A’ were to like ‘B’ and ‘B’ were to reject ‘A’, a continuous arrow is drawn one way and a broken arrow the other. If neither a broken line nor a continuous line is drawn towards a student, it has to be understood that the student is ignored.
To draw a sociogram for a class consisting of 30 or 40 students, four concentric squares, one within another, are drawn, and students are placed in various positions depending upon the scores they obtained in the socio-matrix. Thus a sociogram is a diagrammatic representation of the mutual choices, rejections and indifference of the pupils in a classroom towards one another. On the basis of these relationships, the students in a class may be classified into four types:
1. Stars
2. Isolates
3. Mutual pairs
4. Chains
1. STARS
Stars are those students in the classroom to whom a large number of students are attracted, or whom many students like. Such students are known as the stars or popular students of the class.
CHARACTERISTICS OF STARS
1. They have an attractive physique or good health.
2. They are usually of above-average intelligence.
3. They have better or higher achievement in the class.
4. They have extrovert personalities.
5. They have high self-esteem, a high self-concept and a high level of aspiration.
6. They are talkative, take part in all types of conversation and have self-confidence.
7. They are very co-operative and helpful to others.
The teacher can take the help of popular
students in organizing effective teaching. They are
helpful in adjusting the isolates of the class.
Classroom problems can be easily solved by the
teacher taking them into his confidence, but he
should not give undue weightage to them. They may
play a constructive role in the classroom teaching-
learning situation.

2. ISOLATES

Isolates are those students of the classroom
whom no student of the class likes or makes friends
with. Such students are called isolates or rejected
students of the class. They require the help of the
teacher.

ADJUSTMENT OF ISOLATES IN THE CLASS

1. The teacher should try to identify their
problems by discussing with them. Physical,
psychological and educational tests should
be used for diagnostic purposes.
2. The isolates should be made aware of the
characteristics of stars.
3. The teacher should give moderate praise to
the isolates whenever they succeed in some
school work.
4. The teacher must find out those skills and
hobbies in which isolates show promise and
should try to develop them.
5. The teacher should discuss the problems of
the isolates with their parents.
6. They should be encouraged, and the teacher
should deal with them sympathetically by
developing rapport with them.

3. MUTUAL PAIRS OR FRIENDS

Mutual pairs are those students who have a
mutual attraction or liking for each other. Students
who have a close friendship or mutual attraction
are known as mutual pairs or friends.

4. CHAINS OR STUDENT RELATIONSHIPS

There are chains of attraction among the
students of a class. The mutual pairs have a liking
for a third or fourth student. The third and fourth
have an attraction or liking for a sixth or seventh
student. Thus their liking or attraction forms chains
of relationship among the classmates.
Another category of students is the "rejectee".
A rejectee is one who creates nuisance in class by
frequent fighting and quarrelling. His classmates
may avoid him out of fear. He may be a bully. One
who receives the maximum number of negative
scores is a rejectee. He is disliked by most of his
classmates.
SOME GUIDEPOSTS IN THE
ADMINISTRATION OF SOCIOMETRY
1. Students in the class should be well
acquainted with each other. A sociometric
test should not be administered in the first
week; an interval of at least six weeks should
be allowed.
2. A positive teacher-pupil relationship should
exist.
3. Student responses should be kept
confidential.
4. Students should know that the results will be
used positively.
5. A relaxed, informal classroom atmosphere
should prevail when it is administered.
6. No prior announcement is needed. It should
not take more than ten to fifteen minutes.
7. Directions should be clear and simple.
[Example sociogram of a class of 15 students,
placed on concentric squares according to their
sociomatrix scores]
Star - 10
Isolates - 5, 12, 1
Mutual pairs - (9,10), (10,13), (6,15)
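The classification in the example above can be recomputed from a table of positive choices. The choice lists below are invented so as to reproduce that example; only liking is modelled here, not rejection, and names like `choices` are illustrative.

```python
# Hypothetical positive-choice data for the 15-student example.
# Students absent from any choice list are simply not chosen.
choices = {
    1: [], 5: [], 12: [],                       # these three choose no one
    9: [10], 10: [9, 13], 13: [10], 6: [15], 15: [6],
}

# Count how many classmates chose each student.
received = {}
for chooser, chosen in choices.items():
    for s in chosen:
        received[s] = received.get(s, 0) + 1

# A "star" receives the most choices; an "isolate" receives none.
star = max(received, key=received.get)
everyone = set(choices) | set(received)
isolates = sorted(s for s in everyone if received.get(s, 0) == 0)

# A mutual pair: A chooses B and B chooses A.
pairs = sorted({tuple(sorted((a, b)))
                for a, chosen in choices.items()
                for b in chosen if a in choices.get(b, [])})
print(star, isolates, pairs)
```

With this data the output matches the worked example: star 10, isolates 1, 5 and 12, and the three mutual pairs.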
We must remember that sociometry is
concerned with feelings as opposed to considered
judgements; spontaneity underlies sociometric
choice. Feelings are not always based on reason.
Every member must be present on the day of the
test. It should not be administered shortly after a
new student has joined the class.
TEST AND TESTING
A test is an instrument or systematic procedure
for measuring a sample of behaviour by posing a set
of questions in a uniform manner. A test is a form
of assessment. It answers the question: how well did
the individual perform? This can be judged either in
comparison with others or in comparison with a
domain of performance tasks.
So we can say a test is a type of assessment
consisting of a set of questions administered during
a fixed period of time under reasonably comparable
conditions for all students.
Purpose of Testing
The use of psychological testing is to
evaluate behaviour, cognitive abilities, personality
traits and other individual and group characteristics
in order to assist in making judgments, predictions
and decisions about people. Specifically, tests are
used for screening applicants for jobs, educational
programs, etc., and to classify and place people in
the right contexts. Testing helps to counsel and
guide individuals, to prescribe psychological
treatment, and much more. To get an apt result from
a test there is a need to follow certain steps.
Steps in the testing program
1. Determining the purpose of testing
The first step in the testing program is to
define specifically the purpose of testing and the
type of information being sought through testing.
As is emphasized by the first standard for test users
in the Code of Fair Testing Practices in Education,
it is critical that the purpose be clearly defined and
that the test match the purpose.
2. Selecting the appropriate test
To make a proper selection, we must first
identify the objectives and specific learning
outcomes of the instructional program. This is
necessary for choosing a relevant test, irrespective
of whether a single test or a school-wide testing
program is involved. Selection must be preceded by
an analysis of the intended use of the results and the
type of data most appropriate for each use. When
need and use are identified, a list of possible tests
can be had from test publishers. The users should
select tests that meet the intended purpose and that
are appropriate for the intended test takers.
Points to be kept in mind while selecting the test
* Review and select tests based on the
appropriateness of test content, skills tested and
content coverage.
* Review materials provided by test developers and
select tests for which clear, accurate and complete
information is provided.
* Evaluate evidence of the technical quality of the
test provided by the test developer and any
independent reviewers.
* Evaluate representative samples of test questions,
directions, answer sheets, manuals and score reports
before selecting a test.
* Evaluate procedures and materials used by test
developers, as well as the resulting test, to ensure
that potentially offensive content or language is
avoided.
* Select tests with appropriately modified forms or
administration procedures for test takers with
disabilities who need special accommodations.
3. Administering the test
The main requirement in administering a test is
that the testing procedures prescribed in the test
manual be rigorously followed. When we alter the
procedures for administering a published test we
lose the basis for a meaningful interpretation of the
scores.
The administration of a group test is
relatively simple:
a) Motivate the students to do their best
b) Follow the directions closely
c) Keep time accurately
d) Record any significant events that might
influence test scores
e) Collect the materials promptly
a. Motivate the students
In testing, our goal should be to obtain
maximum performance within the standard
conditions set forth in the testing procedures. We
want all students to earn as high a score as they are
capable of achieving. This obviously means that
they must be motivated to put forth their best effort;
they will not work seriously at the task unless they
are convinced that the test results will be beneficial
to them.
b. Follow directions strictly
The importance of following the directions
given in the test manual cannot be over-emphasized.
Unless the test is administered in exact accordance
with the standard directions, the results may contain
errors that prevent proper interpretation and use.
c. Keep time accurately
To ensure accurate timing, keep a written
record of starting and ending test time.
d. Record significant events
The students should be carefully observed
during testing, and a record must be made of any
unusual behaviour or events that might influence the
scores.
e. Collect test materials promptly
When the test ends, the test materials should
be collected promptly so that students cannot work
on or correct the materials after the time limit.

4. Scoring the test


Essay tests may be scored holistically or
analytically. In both cases the examinee should be
informed of the method used. Numerical scores
supplemented with written comments and
explanations are often helpful in providing feedback
on essay test performance.
In the case of objective type tests, computers
and other machines take the place of human scorers.
Machine scoring is generally superior in terms of
speed and accuracy but less flexible than hand
scoring.
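Whether done by hand or by machine, objective scoring amounts to comparing each response sheet against a predetermined key. A minimal sketch, with an invented key and answer sheet:

```python
# Hypothetical sketch: scoring an objective answer sheet against a key.
key = ["b", "d", "a", "c", "b"]

def score(responses, key):
    """One mark per response that matches the key; blanks earn nothing."""
    return sum(1 for given, correct in zip(responses, key) if given == correct)

sheet = ["b", "d", "c", "c", None]   # None = unanswered
print(score(sheet, key))             # 3 correct
```

Because the key is fixed in advance, two scorers (or a scorer and a machine) applying it to the same sheet must arrive at the same total, which is exactly the objectivity claimed for such tests.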
5. Analyzing and interpreting the scores
Test results can be interpreted in terms of the
types of task that can be performed or the relative
position held in reference to a group. One refers to
what a person can do and the other to how the
performance compares with that of others.
6. Applying the results
The object of a test is to bring about some change
in instruction, educational support or some other
aspect for which the test was conducted. This
cannot be achieved unless the results are interpreted
correctly and reported accurately and appropriately
to those who need them. The feedback that the test
administration provides to the test taker and the
other relevant authorities is of great importance.
Achievement and learning ability tests can serve
many different purposes in the school educational
program. They help to identify the level and range
of ability among students; to identify areas of
instruction needing greater emphasis; to identify
learning errors and plan remedial instruction; and to
identify individual differences and provide
individualized instruction. Exceptional students can
be identified and necessary steps can be taken to
promote their education by enabling them to opt for
the right course.
7. Retesting to determine success of program
After applying the results, a retest should be
conducted to find out the success of the remedial
programs.
8. Making suitable records and reports
The final step is to keep suitable records and
reports of the testing program. The results should be
reported clearly, so that they can be easily
understood and used for future purposes.
TYPES OF TEST ITEMS
Objective type tests are constructed to measure
educational achievement, aptitude and intelligence.
Objective type tests are much more precise than
essay type tests. Objective type tests are
standardized and are mainly used in research work,
guidance and counselling, and also in administration
for selecting candidates for different jobs. The
obtained scores are transformed into standard scores
which can be easily interpreted and understood.
R.L. Ebel and D.A. Frisbie (1986) define an
objective test as "one that can be provided with a
simple predetermined list of correct answers, so that
subjective opinion or judgement in the scoring
procedure is eliminated."
W. Wiersma and S.G. Jurs (1990) state,
"Objective items are items that can be objectively
scored, items on which persons select a response
from a list of options." The three types of objective
test items are the following:
1. Alternate-response test items
2. Matching type items
3. Multiple choice type items
ALTERNATE-RESPONSE TEST ITEMS
According to N.E. Gronlund (1985), "The
alternative-response test item consists of a
declarative statement that the pupil is asked to mark
true or false, right or wrong, correct or incorrect,
yes or no, fact or opinion, agree or disagree, and the
like. In each case there are only two possible
answers. Because the true-false option is the most
common, this item type is most frequently referred
to as the true-false item."
In the alternative-response test, of the two
responses only one is correct. Some of the common
variations of the alternate-response test are:
a) True-False b) Yes-No
c) Right-Wrong d) Correct-Incorrect
For example: The Vedas are the religious books of
the Hindus. Yes/No
MERITS OF ALTERNATE-RESPONSE
TEST ITEMS
1. They are easy to correct.
2. They are capable of sampling very quickly
a wide range of the subject matter.
3. They are more suitable for young children
who have a poor vocabulary.
4. They are more reliable per unit of testing
time.
5. They can be scored objectively.
6. They are adaptable to most content areas.
7. They are easy to construct.
8. They are time savers.
9. They provide a simple and direct means for
measuring the outcomes of formal
instruction.

LIMITATIONS
1. Generally they emphasize rote memorization.
2. The examinees are not required to apply
principles to new situations.
3. As there are only two choices, they allow a
high degree of guessing.
4. They may motivate students to study and
accept only over-simplified statements of
factual details.
5. They can be attempted even by those who
know nothing of the subject matter.
6. They are largely limited to learning
outcomes in the knowledge domain.
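One common remedy for the guessing problem noted above (an addition here, not taken from the text) is the standard correction-for-guessing formula: corrected score = right − wrong / (choices − 1). For two-choice true/false items every wrong answer therefore cancels one right answer.

```python
# The standard correction-for-guessing formula (an assumption here,
# not stated in the text): score = right - wrong / (choices - 1).
def corrected_score(right, wrong, choices=2):
    return right - wrong / (choices - 1)

# For true/false items (two choices) each wrong answer cancels a right one:
print(corrected_score(right=40, wrong=10))             # 30.0
# For a four-option multiple choice test the penalty is milder:
print(corrected_score(right=40, wrong=10, choices=4))  # about 36.67
```

Omitted (blank) items are neither rewarded nor penalized under this formula, which removes the incentive to guess blindly.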

SUGGESTIONS FOR TRUE OR FALSE
ITEMS
1. Be sure that the item as written can be
classified unequivocally as either true or
false.
2. Avoid ambiguous and indefinite terms of
degree or amount.
3. Keep true and false statements
approximately equal in length.
4. Employ a random order of true and false
statements to avoid giving irrelevant clues.
5. Avoid double negative statements.
6. The directions regarding the answers should
be very clear.
7. Long and complex statements should not be
used, because they measure reading
comprehension also, which may not be the
objective of the examiner.

MATCHING TYPE TEST ITEMS
According to N.E. Gronlund (1985), "The matching
exercise consists of two parallel columns with each
word, number or symbol in one column being
matched to a word, a sentence or a phrase in the
other column. The items in the column for which a
match is sought are called premises and the items in
the column from which the selection is made are
called responses."
There are several varieties of matching tests.
In the traditional format, a matching test consists of
two columns. The examinee is required to make
some sort of association between each premise and
each response in the two columns; he pairs the
corresponding elements and records his answers.
MERITS OF MATCHING TESTS
1. Many questions can be asked in a limited
time because they require little reading time.
2. Reliability of the test increases, as they
afford an opportunity to have a large
sampling of the content.
3. Scoring is comparatively easier.
4. Matching tests can be constructed relatively
easily and quickly.
5. There is less scope for guessing as
compared with true-false tests.
6. A good deal of space can be saved.

LIMITATIONS
1. They are not well adapted to testing
the acquisition of knowledge or
understanding of, and the ability to
use, relatively complex interpretive
ideas.
2. They may encourage serial
memorization rather than association
if sufficient care is not taken in their
construction.
3. Generally they provide clues.
4. It is at times difficult to get clusters of
questions that are sufficiently similar
that a common set of responses can be
used.

SUGGESTIONS FOR CONSTRUCTING
MATCHING TESTS
1. Keep each list relatively short.
2. Each matching test should consist of
homogeneous items.
3. Avoid an equal number of premises
and responses.
4. All items of the test should be on the
same page.
5. Do not mix statements or responses
highly dissimilar in character.
6. Avoid using matching tests for testing
small units of the subject matter.

MULTIPLE CHOICE TYPE TEST ITEM

“ According ti N.E Gronlund (1985)”A multiple


choice
item consists of a problem and lists suggested
solutions. The problem may be stated as direct
acquisition or an incomplete statements and is
called the stern of item. The tent of suggested
solutions may include words numbers, symbols or
phrases and are called alternatives. The pupil is
typically requested to read the stem and the list of
alternatives and to select the one correct or best
alternative” A multiple item of two parts.
1. The “stem” which contains the problem.
2. Options or responses it list of suggested
answers. The stem be stated as direct
question or an incomplete statements.
FORMS OF MULTIPLE CHOICE TESTS
a) The correct answer form
It contains three or more choices, but only one of
them is correct.
b) The best answer form
More than one choice may be correct, but one of
them is the best answer; the examinee is required to
select the best one.
c) The multiple response form
The correct answer may consist of more than one
choice, and the examinee is asked to identify all
those which are correct.
d) The complete statement form
The stem is incomplete and can be completed by
the correct choice. The examinee is asked to select
one.
e) The substitution form
A word underlined in the stem is to be substituted
by the correct response. Responses are given and
the examinee is asked to select the one which can
substitute the desired word.
f) The combined response form
The choices are different phrases of a sentence or
paragraph. The examinee is required to give the
correct order of the phrases or sentences.

MERITS OF MULTIPLE CHOICE TEST
ITEMS
1. They can measure cognitive levels better
than true-false items, because examinees do
not score for merely knowing whether a
statement is true or false but for knowing
which is the correct answer.
2. They can measure from the most elementary
knowledge level to the most complex level.
3. A substantial amount of the subject matter
can be tested because the examinees do not
require much time for writing the answer.
4. They are objective in scoring because the
key for the correct answers is prepared along
with the test.
5. They reduce the effect of guessing because
there are three or four choices.
6. Their format is helpful in item analysis to
find out the areas of weakness of the
examinee.
7. They can be easily adapted for machine
scoring.
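The item analysis mentioned in merit 6 is commonly done with two simple indices (standard practice, not detailed in the text): difficulty, the proportion of examinees answering an item correctly, and discrimination, the difference between the top-scoring and bottom-scoring groups on that item. A minimal sketch with invented data:

```python
# Hypothetical sketch of simple item analysis for one test item.

def item_difficulty(correct_flags):
    """Proportion of examinees who answered the item correctly."""
    return sum(correct_flags) / len(correct_flags)

def discrimination(upper_flags, lower_flags):
    """Difference in difficulty between the top and bottom scoring groups.
    A positive value means the item separates strong from weak examinees."""
    return item_difficulty(upper_flags) - item_difficulty(lower_flags)

# 1 = answered correctly, 0 = answered wrongly, one flag per examinee
upper = [1, 1, 1, 0]   # the four highest total scorers
lower = [1, 0, 0, 0]   # the four lowest total scorers
print(item_difficulty(upper + lower))   # 0.5
print(discrimination(upper, lower))     # 0.5
```

An item that nearly everyone answers correctly (difficulty near 1.0) or that weak examinees answer better than strong ones (negative discrimination) is a candidate for revision.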

LIMITATIONS
1. They do not permit the examinees to express
their own views.
2. They cannot measure attitudes or motor
skills.
3. It is difficult to find four choices for each
item out of which three may be plausible
incorrect answers.
4. They cannot evaluate the ability to organize
and present ideas.
5. They require more time to construct.
6. They check only limited knowledge.

SUGGESTIONS FOR MULTIPLE CHOICE
QUESTIONS
1. Be sure the stem of the item clearly
formulates a problem.
2. Include as much of the item as possible in
the stem and keep options as short as
possible.
3. Include in the stem only the material
required to make the problem clear and
specific.
4. Use negative statements sparingly in the
stem of the item.
5. Repetition of words in the options should be
avoided.
6. Unfamiliar and difficult symbols and
vocabulary should be avoided.

MERITS OF OBJECTIVE TYPE TESTS
1. The new type examinations are more
objective in their scoring; they are free from
the personal factor of the teacher.
2. They may be very comprehensive and can
be made to cover a great deal more material
than the old type of examinations.
3. They are very easy to score.
4. They are more educative for the pupils.
5. They discourage cramming and encourage
thinking, observation and scrutiny.
6. They are more reliable.
7. Objective tests can be standardized by
administering them beforehand to a large
number of students of the same age group
before the actual examination.

LIMITATIONS
1. The pupil does not have an opportunity to
show his ability to organize his thoughts.
This type of test is not diagnostic, in that it
does not tell where the pupil's reasoning
process goes wrong or where he stops
reasoning altogether and starts guessing.
2. It is commonly said that this type of test
fails to check cramming.
Short Answer Questions

Short-answer questions are open-ended


questions that require students to create an answer.
They are commonly used in examinations to assess
the basic knowledge and understanding (low
cognitive levels) of a topic before more in-depth
assessment questions are asked on the topic. Short
Answer Questions do not have a generic structure.
Questions may require answers such as complete
the sentence, supply the missing word, short
descriptive or qualitative answers, diagrams with
explanations etc. The answer is usually short, from
one word to a few lines. Often students may answer
in bullet form.

Advantages of Short Answer Questions

 Short Answer Questions are relatively fast to


mark and can be marked by different
assessors, as long as the questions are set in
such a way that all alternative answers can be
considered by the assessors.
 Short Answer Questions are also relatively
easy to set compared to many assessment
methods.
 Short Answer Questions can be used as part
of a formative or summative assessment; as
the structure of short answer questions is
very similar to examination questions,
students are more familiar with the practice
and feel less anxious.
 Unlike MCQs, there is no guessing at
answers; students must supply an answer.

Disadvantages of Short Answer Questions

 Short Answer Questions (SAQs) are only
suitable for questions that can be answered
with short responses. It is very important that
the assessor is very clear on the type of
answers expected when setting the questions.
Because an SAQ is an open-ended question
and students are free to answer any way they
choose, short-answer questions can lead to
difficulties in grading if the question is not
worded carefully.
 Short Answer Questions are typically used
for assessing knowledge only, students may
often memorize Short Answer Questions
with rote learning. If assessors wish to use
Short Answer Questions to assess deeper
learning, careful attention (and much
practice) in setting appropriate questions is
required.
 Accuracy of assessment may be influenced
by handwriting/spelling skills
 There can be time management issues when
answering Short Answer Questions
Design A Good Short Answer Question

 Design short answer items which are


appropriate assessment of the learning
objective
 Make sure the content of the short answer
question measures knowledge appropriate to
the desired learning goal
 Express the questions with clear wordings
and language which are appropriate to the
student population
 Ensure there is only one clearly correct
answer in each question
 Ensure that the item clearly specifies how the
question should be answered (e.g. Student
should answer it briefly and concisely using
a single word or short phrase? Is the question
given a specific number of blanks for
students to answer?)
 Consider whether the positioning of the item
blank promotes efficient scoring
 Write the instructions clearly so as to specify
the desired knowledge and specificity of
response
 Set the questions explicitly and precisely.
 Direct questions are better than those which
require completing the sentences.
 For numerical answers, let the students know
if they will receive marks for showing partial
work (process based) or only the results
(product based); also indicate the
importance of the units.
 Let the students know what your marking
style is like, is bullet point format acceptable,
or does it have to be an essay format?
 Prepare a structured marking sheet; allocate
marks or part-marks for acceptable
answer(s).
 Be prepared to accept other equally
acceptable answers, some of which you may
not have predicted.
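The structured marking sheet recommended above can be sketched as a table of acceptable points with part-marks. The points and mark values below are invented for illustration; a real sheet would list the answers expected for a specific question, plus any equally acceptable answers discovered during marking.

```python
# Hypothetical structured marking sheet for one short answer question:
# each acceptable point carries a part-mark.
marking_sheet = {
    "defines the term correctly": 2,
    "gives a suitable example": 2,
    "states one limitation": 1,
}

def mark(points_awarded):
    """Sum the part-marks for the points an answer actually contains."""
    return sum(marking_sheet[p] for p in points_awarded)

# An answer covering the definition and a limitation, but no example:
print(mark(["defines the term correctly", "states one limitation"]))  # 3
```

Because every assessor awards marks from the same sheet, different markers (or the same marker on different days) arrive at the same total for the same answer.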

ESSAY TYPE TEST


The word essay has been derived from the
French word 'essayer', which means 'to try' or 'to
attempt'.
"An essay test is a test that requires the student
to structure a rather long written response, up to
several paragraphs."
- William Wiersma
The essay test refers to any written test that requires
the examinee to write a sentence, a paragraph or
longer passages. Essay questions provide a complex
prompt that requires written responses, which can
vary in length from a couple of paragraphs to many
pages. Like short answer questions, they provide
students with an opportunity to explain their
understanding and demonstrate creativity, but make
it hard for students to arrive at an acceptable answer
by bluffing. They can be constructed reasonably
quickly and easily but marking these questions can
be time-consuming and grader agreement can be
difficult.
Essay questions differ from short answer questions
in that the essay questions are less structured. This
openness allows students to demonstrate that they
can integrate the course material in creative ways.
As a result, essays are a favoured approach to test
higher levels of cognition including analysis,
synthesis and evaluation.
Characteristics of essay tests:
1. The length of the required responses varies
with reference to marks and time.
2. It demands a subjective judgment: judgment
means assessing, whereas subjective means based
on personal impression, i.e. it differs from person
to person.
3. Most familiar and widely used: the essay has
become a major part of formal education.
Secondary students are taught a structured essay
format to improve their writing skills.
Types of Essay Test
1. Restricted response questions:
The restricted response question usually limits both
the content and the response. The content is usually
restricted by the scope of the topic to be discussed;
limitations on the form of response are generally
indicated in the question. Another way of restricting
responses in essay tests is to base the questions on
specific problems. For this purpose, introductory
material like that used in interpretive exercises can
be presented. Such items differ from objective
interpretive exercises only by the fact that essay
questions are used instead of multiple choice or true
or false items. Because the restricted response
question is more structured, it is most useful for
measuring learning outcomes requiring the
interpretation and application of data in a specific
area.
2. Extended response questions:
No restriction is placed on the student as to the
points he will discuss and the type of organization
he will use. Teachers frame such questions so as to
give students the maximum possible freedom to
determine the nature and scope of the response,
provided it relates to the topic and is given within
the stipulated time frame.
The student may select the points he thinks are
most important, pertinent and relevant, and arrange
and organize the answer in whichever way he
wishes. So they are also called free response
questions. This enables the teacher to judge the
students' abilities to organize, integrate and
interpret the material and express themselves in
their own words. It also gives an opportunity to
comment on or look into students' progress, the
quality of their thinking, the depth of their
understanding, their problem-solving skills and the
difficulties they may be having. These skills interact
with one another and with the knowledge and
understanding the problem requires. Thus it is at
the levels of synthesis and evaluation of writing
skills that this type of question makes the greatest
contribution.
Merits of essay writing:
1. It is relatively easier to prepare and administer a
six-question extended response essay test than to
prepare and administer a comparable 60-item
multiple choice test.
2. It is the only means that can assess an
examinee’s ability to organize and present his ideas
in a logical and coherent fashion and in effective
prose.
3. It can be successfully employed for practically
all school subjects.
4. Some of the objectives, such as the ability to
organize ideas effectively, the ability to criticize or
justify a statement, and the ability to interpret, can
be measured by this type of test.
5. Logical thinking and critical reasoning,
systematic presentation etc. can be best developed
by this type of test.
6. It helps induce good study habits such as
making outlines and summaries, organizing the
arguments for and against, etc.
7. The student can show their initiative, the
originality of their thought and the fertility of their
imagination, as they are permitted freedom of
response.
8. The response of the students need not be
completely right or wrong. All degrees of
comprehensiveness and accuracy are possible.
9. It largely eliminates guessing.

Demerits of essay writing:
Every coin has two sides; in the same way, the
essay test has demerits along with its merits, so we
will examine the demerits of essay test writing.
1. Limited sampling of the content:
This means only a few questions can be included in
a given test. For example, if a particular book has
18 chapters, the teacher cannot ask questions from
all the chapters; some areas have to be neglected.

2. Subjectivity of scoring:
If all students write the same answer to one
question, why do they get different marks? In an
essay test, answers to a question are scored
differently by different teachers. Even the same
teacher scores the answer differently at different
times.

3. Halo effect:
This means the teacher knows the particular student
very well and has a good impression because of his
previous papers and writing skills.

4. Mood of the examiner:
This is the general feeling of all students after
writing a board paper: "I hope the teacher who is
checking my paper has not quarrelled with
someone."
5. Ambiguous wording of the question:
Sometimes essay questions are so worded that
students do not know the exact implications of the
questions.
6. Examiner contaminated by various factors:
The examiner is influenced by various factors like
handwriting, spelling, grammar, etc. Some students
who have good verbal knowledge may write many
things on an essay topic.
7. It requires excessive time on the part of students
to write, while for the assessor reading essays is
very time-consuming and laborious.

8. Only a teacher or competent professional can
assess it.

9. The speed of writing can influence the
performance of the learner. This results in low
scores even if the learner knows the correct answers
to all questions.
10. It may not provide a true picture of the
comprehension level of the learner. Students strong
in grammar may get good marks.
Suggestions for improving essay questions:
1. Restrict the use of essay questions to those
learning outcomes that cannot be satisfactorily
measured by objective items. Such functions as the
ability to organize, to express, to interpret and to
elicit understanding may be tested through essay
questions.
2. Do not start the essay questions with words
such as what, who, when, enumerate etc. in general
start with compare, contrast, discuss, explain etc.
3. Write the essay question in such a way that the
task is clearly and unambiguously defined for each
examinee.
4. Directions for the test should be explicitly
written.
For example, (a) each question carries 20 marks (b)
Marks will be deducted for spelling mistakes.
5. Avoid the use of optional questions. A fairly
common practice in the use of essay questions is to
provide pupils with more questions than they are
expected to answer. When pupils answer different
questions, it is obvious they are taking different
tests, and the common basis for evaluating their
achievement is lost.
6. Students are often found to be misinformed
about the meanings of important terms used in
essay questions. For example, students frequently
discuss or describe when asked to define. A
solution would be to supply the necessary training
to the students in answering essay questions.
7. Allow a liberal time limit so that the essay test
will not be a test of speed of writing. Set up the
question paper so that it can be answered in the
allotted time, leaving some time for reading the
questions, drawing up an outline of the answers
and, finally, for revision.
8. We have seen that the essay examination
suffers from lack of adequate sampling. This defect
can to some extent be overcome by increasing the
number of questions and limiting the length of their
answers. A question paper with 10 questions would
represent a better sample than one with 5 questions
only.
9. One of the favourite questions of examiners is,
for example: write short notes on: 1. Social
education. 2. Homework.
This type of question is worse than the one
mentioned above. The student does not know the
limits and goes on writing page after page. The
better way is, e.g., write short notes on (a) the
objectives of social education and (b) the misuse of
homework.
Suggestions for scoring the essay examination
1. Prepare an outline of the expected answers in
advance, showing what points are required and the
credit to be allowed for each. This provides a
common frame of reference for evaluating the
individual papers.
2. Decide in advance what factors are to be
measured. If the ability to organize, to interpret or
to apply principles is to be assessed, the examiner
should not allow himself to be biased by bad
handwriting, spelling, sentence structure or
neatness. The ability to write, to spell or to use
correct English can be assessed through other
suitable tests.
3. The examination should be scored, as far as
possible, by the one who framed the questions,
since he can give the clearest picture of the
expected responses. Whenever more than one
examiner is involved, they should be brought
together to develop a uniform scoring procedure.
Model answers and marking schemes may be
discussed and finalized in this meeting.
4. Grade the papers as nearly anonymously as
possible. The less you know about who wrote an
answer, the more objectively you can grade it.
5. Score one question through all of the papers
before considering another question. This type of
scoring permits the examiner to concentrate on the
answer to a single question and to judge better the
merits of several pupils' responses to the same
question.

6. When important decisions such as selection for
awards or scholarships are to be based on the
results, obtain two or more independent ratings and
average them.

7. The mechanics of expression (legibility,
spelling, punctuation, grammar) should be judged
separately from what the student writes, i.e. the
subject-matter content. Provide comments and
correct answers in the answer book. This will
explain the teacher's method of assigning a
particular score or grade to a particular paper.
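Suggestion 6 above, obtaining two or more independent ratings and averaging them, can be sketched in a few lines. The marks, the 20-mark scale and the discrepancy threshold below are illustrative assumptions, not part of the text:

```python
# Sketch: combine two independent essay ratings (suggestion 6),
# averaging them and flagging papers whose ratings disagree widely.
# The sample marks and the gap threshold are invented for illustration.

def combine_ratings(rater_a, rater_b, max_gap=4):
    """Average paired ratings; flag pairs differing by more than max_gap."""
    combined = []
    for paper, (a, b) in enumerate(zip(rater_a, rater_b), start=1):
        avg = (a + b) / 2
        flagged = abs(a - b) > max_gap
        combined.append((paper, avg, flagged))
    return combined

scores_a = [14, 18, 9, 16]   # marks out of 20 from examiner A
scores_b = [12, 17, 15, 16]  # marks out of 20 from examiner B

for paper, avg, flagged in combine_ratings(scores_a, scores_b):
    note = "  <- re-examine" if flagged else ""
    print(f"Paper {paper}: {avg:.1f}{note}")
```

A flagged paper (here paper 3, where the examiners differ by 6 marks) would be returned to both raters for discussion before the average is accepted.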
RUBRIC
The traditional meanings of the
word rubric stem from "a heading on a document
(often written in red — from Latin, rubrica, red
ochre, red ink), or a direction for conducting church
services". In modern education circles, rubrics have
recently come to refer to an assessment tool. The
first usage of the term in this new sense is from the
mid-1990s, but scholarly articles from that time do
not explain why the term was co-opted. Perhaps
rubrics are seen to act, in both cases,
as metadata added to text to indicate what
constitutes a successful use of that text. It may also
be that the color of the traditional red marking pen
is the common link.
In education terminology, rubric means "a
scoring guide used to evaluate the quality of
students' constructed responses". Rubrics usually
contain evaluative criteria, quality definitions for
those criteria at particular levels of achievement,
and a scoring strategy. They are often presented in
table format and can be used by teachers when
marking, and by students when planning their work.
A scoring rubric is an attempt to
communicate expectations of quality around a task.
In many cases, scoring rubrics are used to delineate
consistent criteria for grading. Because the criteria
are public, a scoring rubric allows teachers and
students alike to evaluate criteria, which can be
complex and subjective. A scoring rubric can also
provide a basis for self-evaluation, reflection, and
peer review. It is aimed at accurate and fair
assessment, fostering understanding, and indicating
a way to proceed with subsequent
learning/teaching. This integration of performance
and feedback is called ongoing assessment
or formative assessment.
The benefits of using a rubric are that it
creates a more objective method of scoring; specific
criteria are identified and the students are evaluated
only on those criteria. Students can often be
involved in the creation of a rubric in order to have
a say in what they believe to be the most important
aspects of the task; this can help with student
motivation and investment. Even if students are not
involved in the creation of the rubric, they should
have a copy of it so they are aware of what is being
assessed. This ensures fairness is maintained and
pushes students to prepare to the best of their ability.

Components Of A Scoring Rubric
Scoring rubrics include one or more
dimensions on which performance is rated,
definitions and examples that illustrate the
attribute(s) being measured, and a rating scale for
each dimension. Dimensions are generally referred
to as criteria, the rating scale as levels, and
definitions as descriptors. The components of
rubrics are:
1) One or more traits or dimensions that serve as
the basis for judging the student response
2) Definitions and examples to clarify the meaning
of each trait or dimension
3) A scale of values on which to rate each dimension
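The three components above (criteria, descriptors, and a rating scale) can be represented as data and combined into an overall score. This is only an illustrative sketch; the criterion names, weights and level descriptors are invented:

```python
# Sketch of an analytic rubric as data: each criterion (dimension) has
# level descriptors on a 1-4 scale and a weight. Names, descriptors and
# weights are hypothetical, chosen only to show the structure.

RUBRIC = {
    "organization": {"weight": 2, "levels": {1: "no structure", 2: "partial",
                                             3: "clear", 4: "compelling"}},
    "evidence":     {"weight": 2, "levels": {1: "none", 2: "thin",
                                             3: "adequate", 4: "rich"}},
    "mechanics":    {"weight": 1, "levels": {1: "many errors", 2: "frequent",
                                             3: "few", 4: "error-free"}},
}

def score(ratings):
    """Combine per-criterion level ratings into a weighted percentage."""
    earned = sum(RUBRIC[c]["weight"] * level for c, level in ratings.items())
    possible = sum(spec["weight"] * max(spec["levels"])
                   for spec in RUBRIC.values())
    return 100 * earned / possible

print(score({"organization": 4, "evidence": 3, "mechanics": 2}))  # → 80.0
```

Keeping the descriptors in the same structure as the weights means the printed rubric handed to students and the scoring code cannot drift apart.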
Steps To Create A Scoring Rubric
Scoring rubrics may help students become
thoughtful evaluators of their own and others’ work
and may reduce the amount of time teachers spend
evaluating student work
1. Have students look at models of good versus
"not-so-good" work. A teacher should
provide sample assignments of variable
quality for students to review.
2. List the criteria to be used in the scoring
rubric and allow for discussion of what
counts as quality work. Asking for student
feedback during the creation of the list also
allows the teacher to assess the students’
overall writing experiences.
3. Articulate gradations of quality. These
hierarchical categories should concisely
describe the levels of quality (ranging from
bad to good) or development (ranging from
beginning to mastery).
4. Practice on models. Students can test the
scoring rubrics on sample assignments
provided by the instructor. This practice can
build students' confidence by teaching them
how the instructor would use the scoring
rubric on their papers. It can also aid
student/teacher agreement on the reliability
of the scoring rubric.
5. Ask for self and peer-assessment.
6. Revise the work on the basis of that feedback.
7. Use teacher assessment, which means using
the same scoring rubric the students used to
assess their work

Types Of Rubrics

1. Analytic Rubric
Definition: Each criterion (dimension, trait) is
evaluated separately.
Advantages: Gives diagnostic information to the
teacher; gives formative feedback to students;
easier to link to instruction than holistic rubrics;
good for formative assessment and adaptable for
summative assessment (if you need an overall score
for grading, you can combine the criterion scores).
Disadvantages: Takes more time to score than
holistic rubrics; takes more time to achieve
inter-rater reliability than with holistic rubrics.

2. Holistic Rubric
Definition: All criteria (dimensions, traits) are
evaluated simultaneously.
Advantages: Scoring is faster than with analytic
rubrics; requires less time to achieve inter-rater
reliability; good for summative assessment.
Disadvantages: A single overall score does not
communicate what to do to improve; not good for
formative assessment.

3. General Rubric
Definition: The description of work gives
characteristics that apply to a whole family of
tasks (e.g., writing, problem solving).
Advantages: Can be shared with students, explicitly
linking assessment and instruction; the same rubric
can be reused with several tasks or assignments;
supports learning by helping students see "good
work" as bigger than one task; supports student
self-evaluation; students can help construct
general rubrics.
Disadvantages: Lower reliability at first than with
task-specific rubrics; requires practice to apply
well.

4. Task-Specific Rubric
Definition: The description of work refers to the
specific content of a particular task (e.g., gives
the answer, specifies a conclusion).
Advantages: Teachers sometimes say using these
makes scoring "easier"; requires less time to
achieve inter-rater reliability.
Disadvantages: Cannot be shared with students (it
would give away the answers); a new rubric must be
written for each task; for open-ended tasks, good
answers not listed in the rubric may be evaluated
poorly.

How To Use Rubrics Effectively
 Develop a different rubric for each assignment.
Although this takes time in the beginning, you'll
find that rubrics can be changed slightly or re-
used later.
 Give students a copy of the rubric when you
assign the performance task.
 Require students to attach the rubric to the
assignment when they hand it in.
 When you mark the assignment, circle or
highlight the achieved level of performance for
each criterion.
 Include any additional comments that do not fit
within the rubric’s criteria.
 Decide upon a final grade for the assignment
based on the rubric.
 Hand the rubric back with the assignment.
 If an assignment is being submitted to an
electronic drop box you may be able to develop
and use an online rubric. The scores from these
rubrics are automatically entered in the online
grade book in the course management system.

ASSESSMENT TOOLS FOR AFFECTIVE DOMAIN
The affective domain is part of a system published
in 1964 for identifying, understanding and
addressing how people learn. It describes learning
objectives that emphasize a feeling tone, an
emotion, or a degree of acceptance or rejection. It
is a far more difficult domain to analyze and assess
objectively, since affective objectives vary from
simple attention to selected phenomena to complex
but internally consistent qualities of character and
conscience. Nevertheless, much of the educative
process needs to deal with assessment and
measurement of students' abilities in this domain.
In practice, much of the process of education today
is aimed at developing the cognitive aspects of
development, and very little or no time is spent on
the development of the affective domain.
The Taxonomy in the Affective Domain
The taxonomy in the affective domain contains a
large number of objectives in the literature,
expressed as interests, attitudes, appreciations,
values, and emotional sets or biases. The
descriptions of the steps in the taxonomy were
culled from Krathwohl's Taxonomy of the Affective
Domain:
1. Receiving is being aware of or sensitive to the
existence of certain ideas, materials, or phenomena
and being willing to tolerate them. Examples: to
differentiate, to accept, to listen, to respond to.
2. Responding is being committed in some small measure
to the ideas, materials, or phenomena involved by
actively responding to them. Examples: to comply
with, to follow, to commend, to volunteer, to spend
leisure time in, to acclaim
3. Valuing is willing to be perceived by others as
valuing certain ideas, materials, or phenomena.
Examples: to increase measured proficiency in, to
relinquish, to subsidize, to support, to debate
4. Organization is to relate the value to those
already held and bring into a harmonious and
internally consistent philosophy. Examples: To
discuss, To theorize, To formulate, To balance, To
examine
5. Characterization by value or value set is to act
consistently in accordance with the values he or she
has internalized. Examples: To revise, To require,
To be rated high in the value, To avoid, To resist,
To manage, To resolve .
Development of Assessment Tools/Standard
Assessment Tools
Assessment tools in the affective domain are
those which are used to assess attitudes, interest,
motivations and self efficacy. These include:
1. Self-report. This is the most common measurement
tool in the affective domain. It essentially requires
an individual to provide an account of his attitude
or feelings toward a concept or idea or people. It is
also called "written reflections" (e.g., "Why I Like
or Dislike Mathematics"). The teacher ensures that
the students write something which demonstrates the
various levels of the taxonomy (receiving to
characterization).
2. Rating Scales refers to a set of categories
designed to elicit information about a quantitative
attribute in social science. Common examples are
the Likert scale and 1-10 rating scales, for which a
person selects the number considered to reflect the
perceived quality of a product. The basic
feature of any rating scale is that it consists of a
number of categories. These are usually assigned
integers.
3. Semantic Differential (SD) Scales tries to assess
an individual’s reaction to specific words, ideas or
concepts in terms of ratings on bipolar scales
defined with contrasting adjectives at each end.
4. Checklists: Checklists are the most common and
perhaps the easiest instruments in the affective
domain. They consist of simple items that the
student or teacher marks as "absent" or "present".
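As a rough illustration of how a semantic-differential instrument (tool 3 above) might be scored, the sketch below reverse-keys the pairs whose favourable pole is printed on the left and averages the ratings. The adjective pairs and their keying are assumptions for the example:

```python
# Sketch: scoring a semantic-differential item set rated on 1-7
# bipolar scales. Which pairs are reverse-keyed (favourable adjective
# printed on the left) is a hypothetical choice for this example.

PAIRS = [("good", "bad", True),          # True = favourable pole on left
         ("dull", "interesting", False),
         ("pleasant", "unpleasant", True)]

def sd_score(responses):
    """Mean rating after reverse-keying, so higher = more favourable."""
    keyed = [(8 - r) if rev else r
             for (r, (_, _, rev)) in zip(responses, PAIRS)]
    return sum(keyed) / len(keyed)

# A respondent circles 2 on good-bad, 6 on dull-interesting,
# 1 on pleasant-unpleasant; all three are favourable after keying.
print(sd_score([2, 6, 1]))
```

The same reverse-keying step applies to any bipolar rating instrument in which half the items run in the opposite direction.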
ATTITUDE SCALES
The term personality is broad and complex. It has
inner as well as outer aspects. The inner aspects of
personality (motivations, perceptions, feelings,
attitudes, interests, values, preferences and
prejudices) are the basis of one's behavior, and
they play a significant role in the performance of
an individual. The measures of attitude, interests
and values are different, as these are independent
traits, yet these aspects of one's personality
influence one another.
An attitude is a tendency to react favourably
or unfavourably towards a designated class of
stimuli, such as a custom, a caste, an institution
or a nation. An attitude cannot be observed
directly; it stands for response consistency to
certain categories of stimuli. Attitude is
frequently associated with social stimuli and
emotionally toned responses.
Meaning And Definition Of Attitude
An attitude is a variable which cannot be directly
observed but is inferred from overt behavior, both
verbal and non-verbal responses. In more objective
terms, the concept of attitude may be said to
connote a response tendency with regard to certain
categories of stimuli.
In actual practice the term attitude has been
most frequently associated with emotionally toned
responses. Deep-rooted feelings are attitudes which
cannot be changed easily. An attitude is defined as
a tendency to react in a certain way towards a
designated class of stimuli or an object.
Attitude has been defined by others in the
following ways: "An attitude is essentially a form of
anticipatory response, a beginning of action not
necessarily completed” –K. Young
“An attitude can be defined as an enduring
organization of motivational, emotional, perceptual
and cognitive processes with respect to some aspect
of the individual's world" - Krech and Crutchfield.
"Attitude is the sum total of an individual's
inclinations, feelings, prejudices or biases,
preconceived notions, ideas, fears, threats and
convictions or beliefs about any specific object" -
L.L. Thurstone.
"An attitude is a mental and neural state of
readiness, exerting a directive or dynamic influence
upon the individual's response to all objects and
situations with which it is related" - Britt.
Characteristics Of Attitude
 There are individual differences in attitudes.
 It is a bi-polar trait, as it is a position
towards an object, either for or against.
 It may be overt or covert, and it is fathomless
or unlimited.
 It is integrated into an organized system and
cannot be changed easily.
 It varies from culture to culture and society to
society.
 It implies a subject-object relationship.
Determinants Of Attitude
The following factors may influence the
attitudes of a person:
 Cultural and social factors
 Psychological factors (needs, emotions,
perceptions, experiences etc.)
 Functional factors (role of temperament)
Attitudes are formed on the following basis:
 Acceptance of social norms and values
 Emotional and personal experiences
 Ego-involvement and social perceptions
 Technological changes and economic
developments
 Suggestions and self-concept or ideals of life
MEASUREMENT OF ATTITUDES
There are various techniques for the
measurement of attitudes. The projective techniques
used are Rorschach, T.A.T, Word Association Test
and Sentence Completion test, Questionnaires,
inventories, Situational test and interviews are also
helpful. The most important technique of measuring
attitudes is the ‘Scaling’ techniques.
TYPES OF ATTITUDE SCALES
 Numerical Scales
 Graphic scales
 Standard scales
 Check lists
 Forced choice scales
 Ranking method
 Q sort method
Numerical Scales: One of the simplest to construct
and easiest to use is the numerical rating scale.
This type of tool usually consists of several items,
each of which names or describes the behaviour to be
rated and then offers as alternative responses a
series of numbers representing points along the
scale. The simple numerical scale does have face
validity and therefore seems to be widely accepted,
but it is a more subjective tool, prone to bias.
Graphic Scales: If the format of the rating scale is
such that the characteristic to be rated is
represented as a straight line along which some
verbal guides are placed, the tool is referred to as
a graphic rating scale.
It is easy to construct and easy to administer,
and is therefore the most widely used of all the
specific types of rating scales, but it is a less
reliable measure.
Standard Scale: In the standard-scale approach an
attempt is made to provide the rater with more than
verbal cues to describe the various scale points.
Ideally, several samples of the objects to be rated
are included, each with a given scale value which
has been determined in experimental studies prior to
the use of the scale.
Check Lists:
An approach which is widely popular, because it is
simple to administer and still permits wide coverage
in a short time, is the behavior check list. It
contains a long list of specific behaviors which
supposedly represent individual differences, and the
rater simply checks whether each item applies. The
behavior index of the individual is obtained by
summing up the items which have been checked.
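The scoring rule just described, the behavior index as the sum of checked items, is simple to express in code. The checklist items here are invented for illustration:

```python
# Sketch: a behavior check list scored by summing the items marked
# "present" (the behavior index). The item wording is hypothetical.

ITEMS = ["volunteers answers", "completes homework",
         "helps classmates", "works without prompting"]

def behaviour_index(checked):
    """Count of listed behaviors marked 'present' for one pupil."""
    return sum(1 for item in checked if item in ITEMS)

print(behaviour_index({"volunteers answers", "helps classmates"}))  # → 2
```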
Forced Choice Scale
One of the most recent innovations in the rating-
scale area is the forced-choice technique, which has
been designed to overcome the major difficulties
faced with earlier techniques. In a forced-choice
rating the rater is required to consider not just
one attribute but several characteristics at one
time, and is asked to select the statement which is
most appropriate.
Ranking Method
It is not possible for a rater to judge accurately
equivalent distances at various points along the
scale. Under these conditions a ranking method,
which requires only that the subjects being rated be
placed in order on each trait, can be used. This
approach is essential when a large number of persons
are to be rated.
The ranking approach has the advantage of
forcing the judge to make definite discriminations
among the ratees, eliminating the subjective
differences faced by the judges; a second advantage
is that group ranking is uniform.
Q Sort
The Q-sort technique was developed by Stephenson. It
is one of the best approaches to obtaining a
comprehensive description of an individual, while
the ranking method gives a comprehensive picture of
a group of individuals. The Q sort is widely used
for rating persons in school or on the job for
individual guidance.
SOME MAJOR APPROACHES TO SCALE
CONSTRUCTION
Thurstone scale
The outstanding feature of this scale is the use of
judges to determine the points on the attitude
continuum. Thurstone's quantification of judgement
data represented a great achievement in attitude-
scale construction. Several hundred statements are
gathered which seem to express various degrees of
negative and positive attitudes towards the object
being studied. Several hundred persons are then
chosen as judges. Each judge is handed all the
statements and asked to sort them into 11 piles,
from extremely favourable through neutral to
extremely unfavourable. The judges are not to
indicate their own attitudes but only to classify
the statements. The median position assigned to each
statement is regarded as its scale value, and the
variability of the judgements is taken as an index
of its ambiguity. Items are chosen so as to
represent minimum variability and a wide spread of
scale values, providing equal spacing across the
11-point range. The scale position for each item is
taken as the median intensity judgement. The final
scale consists of twenty or so items which spread
most evenly over the intensity range; ideally, the
items should have median intensity judgements
respectively of 0, 0.5, 1.0, 1.5 and so on. In the
final form of the scale, the statements are
presented in random order, without any indication of
their scale values. The respondent's score is the
median scale value of all the statements he
endorses.
By these procedures, Thurstone (1959) and his
co-workers prepared about 20 scales for measuring
attitudes towards war, the church, patriotism,
capital punishment, censorship, and many other
institutions, practices, issues and groups.
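The core computations of the Thurstone procedure (the median pile position as a statement's scale value, the spread of judgements as its ambiguity index, and the respondent's score as the median scale value of the statements endorsed) can be sketched as follows. The statements and judge data are invented for illustration:

```python
# Sketch of Thurstone's method: each judge sorts a statement into one
# of 11 piles; scale value = median pile, ambiguity = interquartile
# range of the judgements. All data below are hypothetical.
from statistics import median, quantiles

judge_piles = {   # statement -> pile numbers (1-11) assigned by judges
    "War is never justified":     [1, 1, 2, 1, 2, 1],
    "War has some benefits":      [6, 7, 5, 8, 6, 6],
    "War is sometimes necessary": [9, 10, 9, 11, 10, 9],
}

def scale_value(piles):
    """Median pile position assigned by the judges."""
    return median(piles)

def ambiguity(piles):
    """Interquartile range of the judgements (larger = more ambiguous)."""
    q1, _, q3 = quantiles(piles, n=4)
    return q3 - q1

def respondent_score(endorsed):
    """Score = median scale value of the statements endorsed."""
    return median(scale_value(judge_piles[s]) for s in endorsed)

print(respondent_score(["War has some benefits",
                        "War is sometimes necessary"]))  # → 7.75
```

Statements with a large ambiguity index would be discarded before the final twenty or so items are selected.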
Likert-Type Scale
Likert (1932) developed a scale that is easier to
construct. At the same time it yields satisfactory
reliability. It also starts with the collection of a large
number of positive and negative statements about an
object. Judges are not employed in this method.
Instead, the scale is derived by item analysis
techniques. The items are administered to a group
of subjects. Each item is rated on a five point
continuum. Only those items which have high
correlation with total score are retained for the
attitude scale. The principal basis for item selection
is internal consistency. This method more directly
determines whether or not only one attitude is
involved in the items collected. On the five point
scale an individual gets scores from 5 to 1 for
positive items and from 1 to 5 for negative items.
His final score is obtained by summing up the item
scores.
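The scoring rule described above (5 to 1 for positive items, 1 to 5 for negative items, summed) amounts to reverse-scoring the negative statements. A minimal sketch, with the item keying assumed for illustration:

```python
# Sketch of Likert scoring: 5-point responses, negatively worded items
# reverse-scored (1..5 -> 5..1), total = sum of item scores. Which
# items are negative is a hypothetical choice for this example.

NEGATIVE = {1, 3}   # zero-based indices of negatively worded items

def likert_total(responses):
    """Sum item scores after reverse-scoring the negative items."""
    return sum((6 - r) if i in NEGATIVE else r
               for i, r in enumerate(responses))

# A respondent answers five items on the 1-5 continuum:
print(likert_total([5, 2, 4, 1, 5]))  # → 5 + 4 + 4 + 5 + 5 = 23
```

In item analysis, each item's score would then be correlated with this total, and only items with high item-total correlations retained.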
Comparing the Likert and Thurstone methods, the
Likert approach is more empirical because it deals
directly with respondents' scores rather than
employing judges.
The Likert method more directly determines
whether or not only one attitude is involved in the
original collection of items, and the scale which is
derived measures the most general attitudinal factor
present. The use of a five-point scale for each item
provides more information than a simple dichotomy of
agree or disagree.
The only respect in which the Thurstone method
might be superior is in the direct meaningfulness of
scale scores, since the Likert method fails to
provide absolute meaning.
The Likert scale also uses more statements as a
rule, and is therefore more reliable than the
Thurstone type.
Minnesota Teacher Attitude Inventory
It is a modified form of the Likert-type scale. Each
statement is marked in the same way on the five-
point scale, but the numerical weights for the
responses are based on criterion keying rather than
on the usual 1-to-5 scale. It was developed by
administering over 700 items to 100 teachers
nominated as superior in student-teacher
relationships and 100 teachers nominated as inferior
in this relationship. Cross-validation of the final
150-item inventory in different groups yielded
concurrent-validity coefficients of 0.46 to 0.60
with a composite criterion derived from the
principal's estimate, pupils' ratings and evaluation
by an expert. Subsequent longitudinal studies by the
author found predictive validity against the same
criterion.
The Bogardus Social Distance Scale
Bogardus developed a technique for measuring
attitudes towards different national groups. Unlike
scaling procedures such as the Thurstone and Likert
methods, the Bogardus scale is characterized by a
novel type of item in social-distance form. The
Bogardus social distance scale is much easier to
construct than other scales.
The Guttman Method of Scale
An interesting newer approach to attitude scaling is
the procedure developed by Guttman in connection
with studies of the morale of American soldiers
during the Second World War. The response pattern
found in a perfect Guttman scale is exactly what
would be obtained if people were rank-ordered on a
physical continuum. The purpose of the Guttman
procedure is to test whether or not a collection of
attitude statements will exhibit this characteristic
cumulative pattern.
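The characteristic pattern Guttman tests for is cumulative: a respondent who endorses a stronger statement should also endorse every weaker one. A minimal check of that pattern, assuming the items are already ordered from easiest to hardest to endorse:

```python
# Sketch: testing whether one respondent's answers fit the perfect
# Guttman pattern. Items are assumed ordered easy -> hard; 1 = endorse.

def is_guttman_pattern(responses):
    """True if the 1s form an unbroken prefix (e.g. 1,1,1,0,0)."""
    seen_zero = False
    for r in responses:
        if r == 0:
            seen_zero = True
        elif seen_zero:       # a 1 after a 0 breaks the cumulative pattern
            return False
    return True

print(is_guttman_pattern([1, 1, 1, 0, 0]))  # → True
print(is_guttman_pattern([1, 0, 1, 0, 0]))  # → False
```

Across many respondents, the proportion of answers fitting this pattern underlies Guttman's coefficient of reproducibility.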
INTEREST INVENTORY
The nature and strength of one's interests and
attitudes represent an important aspect of
personality. These characteristics materially affect
educational and occupational achievement,
interpersonal relations, etc. The greatest aid to
attention is interest. Interest is a feeling,
pleasurable or painful, that is generally
accompanied by attention; without interest,
attention cannot be held for long. To secure
attention among students, an effort has to be made
to evoke their interest in the classroom. One of the
cardinal aims of education is fostering the many-
sided interests of students. Broadly speaking,
interests are our likes and dislikes. We are not
born with specific interests; we acquire them
through environmental stimulation. Healthy interests
are called 'hobbies'. Achievement is a resultant of
aptitude and interest. The large majority of
interest inventories are designed to assess the
individual's interest in different fields of work.
Meaning And Definition
Interest is a disposition in its dynamic
aspect. It is a feeling of liking associated with a
reaction, and it is the motivating force that impels
one to activity. If an individual is interested in a
job, he will probably like the job. As interests are
subjective experiences, the direct way to measure
them is to ask the individual to report his likes
and dislikes. We measure interests from the
responses we get from the individual by
administering interest inventories.
In the words of Cronbach (1949), interest is
"a tendency to seek out an activity or a tendency to
choose it rather than some alternatives".
Qualitatively, interests can be classified
under various headings such as social, vocational,
intellectual, scientific, literary and musical, and
business interests. Many researchers have studied
the phenomenon of interest in terms of duration,
extensity and intensity: duration denotes the
temporal aspect of an interest, extensity the number
of interests the individual shows, and intensity
their strength.
Classification Of Interests
Interest in an object or an activity reveals
itself in a heightening of attention to it. There
are mainly three types of interests:
[A] Expressed Interests: Expressed interests are
identified by asking a pupil to tell or write about
the activities, and the vocational and avocational
interests, which he most and least enjoys.
[B] Manifested Interests: Manifested interests may
be identified by directly observing the pupil or by
finding out about his hobbies and other activities.
[C] Interests Inferred from Tests: Interests can
also be inferred from tests. Inventoried interests
are those measured with standardized instruments
which require a person to choose from a large
number of activities.
Types of Interest Inventories
In the year 1920, the Carnegie Institute of
Technology, USA, organized a graduate seminar on
"Interest". Interest at the time was considered a
verbal expression of one's aptitude. Interest
inventories are those instruments designed to assess
the individual's interest in different fields of
work. It is generally observed that what a person
says about his interests may not reflect his actual
interests. Interest inventories have both
stimulative and informative value.
The stimulative value is seen from the fact
that an inventory encourages the person to undertake
a thoughtful self-scrutiny in depth. The informative
function follows from the fact that these
instruments are specially constructed to obtain
various kinds of information regarding a person's
likes and dislikes. Interest inventories must be
able to identify not only a person's specific likes
and dislikes but also the major trends of interest.
Research has shown that the measurement of a
person's present interests provides symptoms
indicative of what his interests are likely to be in
the future. Such an assessment throws light on four
possibilities for the person:
1) It will indicate the probability of his liking
the actual work of the occupation he is considering
well enough to follow it.
2) It will indicate the probability of his finding
himself among congenial associates with interest
patterns similar to his own.
3) It will indicate the symptoms of his future
abilities.
4) It will suggest alternative fields of occupation
which may not yet have been seriously considered.
Measurement of Interest
Interest has been found to be more amenable to
measurement than personality. E. K. Strong, Jr.
developed the first interest inventory, known as
Strong's Vocational Interest Blank. Many instruments
have since been constructed to measure different
interests among individuals, and tests and scales
with real psychological meaning have been produced.
Some of the instruments employed in the measurement
of interest are given below:
[A] Interest Questionnaire for High School
Students: In this questionnaire there are 68 items
indicating liking, indifference or dislike. The
questionnaire can predict success in the curriculum
of the subject's choice more accurately than a
general intelligence test. The instrument is
carefully and completely constructed.
[B] Strong's Vocational Interest Blank:
This blank lists 100 occupations, 38 amusements and
36 school subjects, and contains 46 items having to
do with types and peculiarities of people. Responses
are scored in terms of 'L' (for likes), 'D' (for
dislikes) and 'I' (for indifference). Self-ratings
of preferences, habits and traits are also
solicited. Generally this blank has been found
useful when combined with other criteria.
[C] Vocational Interest Blank for Women:
This has been constructed particularly for women and
follows the same technique of construction and the
same general organization as are embodied in the
Vocational Interest Blank for men. It includes
references to 17 occupations and the traits of
masculinity-femininity. The blank has been
standardized on mature women; its applicability is
therefore limited.
[D] Kuder Preference Record:
It consists of 14 sets of three-choice items. There
is no time limit, but the time required is usually
about 40 minutes. Scores are classifiable into nine
areas: mechanical, computational, scientific,
persuasive, artistic, literary, musical, social
service and clerical. The Preference Record has been
shown to be reliable enough for counselling.
MOTIVATION SCALE
Motivation is of particular interest to educational
psychologists because of the crucial role it plays in
student learning. However, the specific kind of
motivation that is studied in the specialized setting
of education differs qualitatively from the more
general forms of motivation studied by
psychologists in other fields. Motivation in
education can have several effects on how students
learn and how they behave towards subject matter.
It can:
 Direct behavior toward particular goals
 Lead to increased effort and energy
 Increase initiation of, and persistence in,
activities
 Enhance cognitive processing
 Determine what consequences are
reinforcing
 Lead to improved performance.

Because students are not always internally
motivated, they sometimes need situated
motivation, which is found in environmental
conditions that the teacher creates. If teachers
decided to extrinsically reward productive student
behaviors, they may find it difficult to extricate
themselves from that path. Consequently, student
dependency on extrinsic rewards represents one of
the greatest detractors from their use in the
classroom. Generally, motivation is conceptualized
as either intrinsic or extrinsic.

Intrinsic motivation occurs when people are
internally motivated to do something because it
brings them pleasure, they think it is
important, or they feel that what they are learning is
significant. Extrinsic motivation comes into play
when a student is compelled to do something or act
a certain way because of factors external to him or
her (like money or good grades).

Thematic Apperception Test

The Thematic Apperception Test (TAT) is
a projective psychological test. Proponents of the
technique assert that subjects' responses, in the
narratives they make up about ambiguous pictures
of people, reveal their underlying motives,
concerns, and the way they see the social
world. Historically, the test has been among the
most widely researched, taught, and used of such
techniques.

The TAT was developed during the 1930s by
the American psychologist Henry A. Murray and
lay psychoanalyst Christiana D. Morgan at the
Harvard Clinic at Harvard University. Murray
wanted to use a measure that would reveal
information about the whole person but found the
contemporary tests of his time lacking in this
regard. Therefore, he created the TAT. The
rationale behind the technique is that people tend to
interpret ambiguous situations in accordance with
their own past experiences and current motivations,
which may be conscious or unconscious. Murray
reasoned that by asking people to tell a story about
a picture, their defenses to the examiner would be
lowered as they would not realize the sensitive
personal information they were divulging by
creating the story. Later, in the 1970s, the Human
Potential Movement encouraged psychologists to
use the TAT to help their clients understand
themselves better and stimulate personal growth.

TEST YOUR UNDERSTANDING


1. Explain the different tools and techniques
used in assessment in schools.
2. Explain the term ‘anecdotal record’.
3. Explain the use of rating scales.
4. Differentiate questionnaires and checklists.
5. How does observation help in the assessment
of a child?
6. What is self-reporting?
7. What are the different types of test items?
8. Explain the term ‘testing’.
9. Explain the importance of rubrics.
10. Explain the importance of assessing the
affective domain.
11. List out the different types of attitude
scales and interest inventories.

UNIT - IV.
ISSUES IN CLASSROOM ASSESSMENT

MAJOR ISSUES IN CLASSROOM ASSESSMENT
The role of assessment in our classrooms is
increasing day by day. Assessment in classrooms
benefits both the teacher and the learner: the
output of assessment programmes changes the
teaching strategies of teachers as well as the
vision of learners. Our assessment strategies have
come a long way, from the paper-and-pencil test to
digital evaluation, yet we still face many issues.
The major issues facing our educational system
are:
 Commercialisation of Assessment
 Poor test quality
 Domain dependency
 Measurement issues
 System issues
Commercialisation of Assessment
From the beginning of structured assessment,
educators played a dominant role in setting the
objectives of assessment, preparing assessment
tools, implementing them in classrooms and
analyzing their outputs. But nowadays, as a part of
commercialisation, educators take little effort
over preparation, because plenty of ready-made
assessment materials are available in the market in
the form of books and guides, as well as on
educational websites. Educators collect these
materials and apply them to learners, sometimes
without checking their genuineness. The parents of
bright children also collect such materials for
their children to practice at home, but children
from low-income groups cannot afford them, so these
assessment patterns can sometimes produce negative
results. Teachers must prepare their own assessment
strategies, avoiding all these commercial
materials.
Poor Test Quality
With the change in technology, the whole
education system has also changed, and the
educational assessment sector has seen many such
changes too. But most educators are not aware of
how to prepare good, valid and reliable test items;
they often prepare test items without proper
planning, so the tools never yield the right
results. In some areas teachers do not receive
proper training and awareness programmes on the
preparation and use of good test items. Nowadays
most curricula follow revised versions of
assessment patterns from school to college level,
but paper setters still depend upon the oldest
forms of test items.
Domain Dependency
The basis for most assessment programmes is
Bloom’s taxonomy and the revised Bloom’s taxonomy.
There are three domains in which the whole set of
instructional objectives is arranged in a
systematic manner: the cognitive, affective and
psychomotor domains. But almost all assessment
procedures concerned with academic excellence give
importance only to the cognitive domain.
Following this reasoning, to be maximally
effective, formative assessment requires the
interaction of general principles, strategies, and
techniques with reasonably deep cognitive-domain
understanding. That deep cognitive-domain
understanding includes the processes, strategies and
knowledge important for proficiency in a domain,
the habits of mind that characterise the community
of practice in that domain, and the features of tasks
that engage those elements. It also includes those
specialised aspects of domain knowledge central to
helping students learn.
This claim has at least two implications. The first
implication is that a teacher who has weak
cognitive-domain understanding is less likely to
know what questions to ask of students, what to
look for in their performance, what inferences to
make from that performance about student
knowledge, and what actions to take to adjust
instruction. The second implication is that the
intellectual tools and instrumentation we give to
teachers may differ significantly from one domain
to the next because they ought to be specifically
tuned for the domain in question.
A possible approach to dealing with the domain
dependency issue is to conceptualise and instantiate
formative assessment within the context of specific
domains. Any such instantiation would include a
cognitive-domain model to guide the substance of
formative assessment, learning progressions to
indicate steps toward mastery on key components of
the cognitive-domain model, tasks to provide
evidence about student standing with respect to
those learning progressions, techniques fit to that
substantive area, and a process for teachers to
implement that is closely linked to the preceding
materials. It may be workable, for instance, to
provide formative assessment materials for the key
ideas or core understandings in a domain, which
should be common across curricula. That would
leave teachers to either apply potentially weaker,
domain-general strategies to the remaining topics
or, working through the teacher learning
communities, create their own formative materials,
using the provided ones as models.
Measurement Issue
A basic definition of educational
measurement is that it involves four activities:
designing opportunities to gather evidence,
collecting evidence, interpreting it, and acting on
interpretations. Although programmes that target
the development of teachers’ assessment literacy
cover much of this territory, the formative
assessment literature gives too little attention to that
third activity, in particular to the fundamental
principles surrounding the connection of evidence –
or what we observe – to the interpretations we make
of it. This problem was touched upon earlier in the
context of the effectiveness issue, when it was noted
that formative assessment is not simply the
elicitation of evidence but also includes making
inferences from that evidence. Because this idea is
so foundational, and only just beginning to become
integrated into definitions of formative assessment,
we return to it now.
Assessment, like all educational measurement, is
an inferential process, because we cannot know with
certainty what understanding exists inside a
student’s head. We can only make conjectures based
on what we observe from such things as class
participation, class work, homework, and test
performance. Backing for the validity of our
conjectures is stronger to the extent we observe
reasonable consistency in student behaviour across
multiple sources, occasions, and contexts. Thus,
each teacher-student interaction becomes an
opportunity for posing and refining our conjectures,
or hypotheses, about what a student knows and can
do, where he or she needs to improve, and what
might be done to achieve that change.
The centrality of inference in formative
assessment becomes quite clear when we consider
the distinctions among errors, slips,
misconceptions, and lack of understanding. It is
worth noting that the generation and testing of
hypotheses about student understanding is made
stronger to the extent that the teacher has a well
developed, cognitive-domain model. Such a model
can help direct an iterative cycle, in which the
teacher observes behaviour, formulates hypotheses
about the causes of incorrect responding, probes
further, and revises the initial hypotheses.
Formative inferences are not only subject to
uncertainty, they are also subject to systematic,
irrelevant influences that may be associated with
gender, race, ethnicity, disability, English language
proficiency, or other student characteristics. Put
simply, a teacher’s formative actions may be
unintentionally biased. A teacher may more or less
efficaciously judge student skill for some, as
opposed to other, groups, with consequences for
how appropriately instruction is modified and
learning facilitated.
We can, then, make formative assessment more
principled, from a measurement perspective, by
recognising that our characterisations of students
are inferences and that, by their very nature,
inferences are uncertain and also subject to
unintentional biases. We can tolerate more
uncertainty, and even bias, in our inferences when
the consequences of misjudgment are low and the
decisions based upon it are reversible. Such
conditions are certainly true of formative contexts.
System Issue
This last issue may be the most challenging of all.
The ‘system issue’ refers to the fact that formative
assessment exists within a larger educational
context. If that context is to function effectively in
educating students, its components must be
coherent (there are two types of coherence:
internal and external). Assessment components can be
considered internally coherent when they are
mutually supportive; in other words, formative and
summative assessments need to be aligned with one
another. Those components must also be externally
coherent in the sense that formative and summative
assessments are consistent with accepted theories of
learning, as well as with socially valued learning
outcomes. External coherence, of course, also
applies to other system components, including pre-
service training institutions which must give
teachers the fundamental skills they need to support
and use assessment effectively. In any event, if
these two types of coherence are not present,
components of the system will either work against
one another or work against larger societal goals.
A common reality in today’s education systems
is that, for practical reasons, summative tests are
relatively short and predominantly take the
multiple-choice or short-answer formats. Almost
inevitably, those tests will measure a subset of the
intended curriculum, omitting important processes,
strategies, and knowledge that cannot be assessed
efficiently in that fashion. Also almost inevitably,
classroom instruction and formative assessment
will be aligned to that subset and, as a consequence,
the potential of formative assessment to engender
deeper change will be reduced.
Thus, the effectiveness of formative assessment
will be limited by the nature of the larger system in
which it is embedded and, particularly, by the
content, format, and design of the accountability
test. Ultimately, we have to change the system, not
just the approach we take to formative assessment,
if we want to have maximum impact on learning and
instruction. Changing the system means remaking
our accountability tests and that is a very big
challenge indeed.
REFORMS IN ASSESSMENT
OPEN-BOOK EXAMINATION
An "open book examination" is one in which
examinees are allowed to consult their class notes,
textbooks, and other approved material while
answering questions. This practice is not
uncommon in law examinations, but in other
subjects, it is mostly unheard of. Radical and
puzzling though the idea may sound to those who
are used to conventional examinations, it is ideally
suited to teaching programmes that especially aim
at developing the skills of critical and creative
thinking. Open-book examinations are similar to
traditional examinations. The major difference is
that in open-book examinations, students are
allowed to bring their textbooks, notes or other
reference materials into the examination situations.
Teachers may also assign a standard set of teaching
materials or a standard set of examination questions
to their students before the examination, so that
students can prepare in advance with the assigned
resources.
Structure of Open-book Examination
There are various ways of arranging an open-
book examination in a course. The following
approaches are some examples:
Students are allowed to bring or to have
access to resources and references during an
examination. Questions are given to students prior
to the examination and students can utilize their
prepared resources in the examination. Another
format can be setting the examination in a take-
home format. Take-home questions can be handed
out to students. These take-home questions can be
essay questions, short answer questions and
multiple choice questions. Students then have to
return the examination paper within a specified
period of time without getting help from other
people.
Advantages of Open-book Examination
 Less demanding on memory (regurgitation of
memorized materials) because it is no longer
necessary for students to cram a lot of facts,
figures and numbers for open-book
examination
 Provides a chance for students to acquire the
knowledge during the preparation process of
gathering suitable learning materials rather
than simply recalling or rewriting it
 Enhances information retrieval skills of
students through finding the efficient ways to
get the necessary information and data from
books and various resources
 Enhances the comprehension and
synthesizing skills of students because they
need to reduce the content of books and other
study materials into simple and handy notes
for examination
Disadvantages of Open-book Examination
 Difficult to ensure that all students are
equally equipped regarding the books they
bring into the exam with them, because the
stocks of library books may be limited and
also some books may be expensive to
students
 More desk space is needed for students
during the examination because students
often need lots of desk space for their
textbooks, notes and other reference
materials
 Sometimes students may spend too much
time on finding out which parts of the books
to look for answers instead of applying the
knowledge, practical skills and reasoning
ability
 A lot of students are unfamiliar with open-
book examinations. They must be provided
with clear procedures and rules.
Design Of A Good Open-Book Examination
Assessment
 Set questions that require students to do
things with the information available to them,
rather than to merely locate the correct
information and then summarize or rewrite it
 Make the actual questions straightforward
and clear to understand. Students usually read
the questions quickly because they often
want to save their time searching answers
from textbooks and notes
 Arrange a bigger venue to hold the
examinations because students may need
larger desks for examinations
 Make sure there is enough time for students
taking the examination. The length of open-
book examination is usually longer than the
traditional examination because students
need extra time for searching information and
data from their notes and textbooks.
 Set up the appropriate marking criteria for
open-book examinations as the aspects to be
assessed in open-book examinations may be
different from those in traditional
examinations.

International Baccalaureate Organisation (IBO)


The International Baccalaureate (IB),
formerly known as The International Baccalaureate
Organization (IBO), is an international
educational foundation headquartered
in Geneva, Switzerland, founded in 1968. IB offers
four educational programmes for children aged 3–
19. The organization's name and logo were changed
in 2007 to reflect a reorganisation. In the mid-
1960s, a group of teachers from the International
School of Geneva (Ecolint) created the International
Schools Examinations Syndicate (ISES), which
would later become the International Baccalaureate
(IB). International Baccalaureate Africa, Europe
and Middle-East (IBAEM) was established in 1986
and International Baccalaureate Asia Pacific
(IBAP) established during the same period.
The IB Middle Years Programme (MYP)
adheres to the study of eight subject areas and was
developed and piloted in the mid-1990s. Within five
years 51 countries had MYP schools. The IB
Primary Years Programme (PYP) was piloted in
1996 in thirty primary schools on different
continents, and the first PYP school was authorised
in 1997, with as many as 87 authorised schools in
43 countries within five years. The newest offering
from the IB, the IB Career-related Programme is
designed for students of ages 16 to 19 who want to
engage in career-related learning. The IB
introduced its newly reviewed MYP for first
teaching in September 2014. As the IB’s mission in
action, the learner profile concisely describes the
aspirations of a global community that shares the
values underlying the IB’s educational philosophy.
The IB learner profile describes the attributes and
outcomes of education for international-
mindedness.
The International Baccalaureate (IB) aims to
develop inquiring, knowledgeable and caring young
people who help to create a better and more peaceful
world through intercultural understanding and
respect. To this end the organisation works with
schools, governments and international
organisations to develop challenging programmes
of international education and rigorous assessment.
These programmes encourage students across the
world to become active, compassionate and lifelong
learners who understand that other people, with
their differences, can also be right.

ELECTRONIC ASSESSMENT

Electronic assessment, also known as
e-assessment, computer assisted/mediated
assessment and computer-based assessment, is the
use of information technology in various forms of
assessment such as educational assessment, health
assessment, psychiatric assessment,
and psychological assessment. This may utilize
an online computer connected to a network. This
definition embraces a wide range of student activity
ranging from the use of a word processor to on-
screen testing. Specific types of e-assessment
include computerized adaptive
testing and computerized classification testing.
Different types of online assessments contain
elements of one or more of the following
components, depending on the assessment's
purpose: formative, diagnostic, or
summative. Instant and detailed feedback may (or
may not) be enabled.

E-assessment can be used not only to assess
cognitive and practical abilities but also anxiety
disorders, such as social anxiety disorder (e.g.
the SPAI-B), and it is widely used in psychology.
Cognitive abilities are
assessed using e-testing software, while practical
abilities are assessed using e-
portfolios or simulation software. Online
assessment is used primarily to
measure cognitive abilities, demonstrating what has
been learned after a particular educational event has
occurred, such as the end of an instructional unit or
chapter. When assessing practical abilities or to
demonstrate learning that has occurred over a
longer period of time, an online portfolio
(or ePortfolio) is often used. The first element that
must be prepared when teaching an online course is
assessment. Assessment is used to determine if
learning is happening, to what extent and if changes
need to be made.

Electronic marking, also known as e-marking
and onscreen marking, is the use of
digital educational technology specifically designed
for marking. The term refers to the electronic
marking or grading of an exam. E-marking is an
examiner-led activity closely related to other
e-assessment activities, such as e-testing or
e-learning, which are student-led. E-marking allows
markers to
mark a scanned script or online response on a
computer screen rather than on paper. There are no
restrictions to the types of tests that can use e-
marking, with e-marking applications designed to
accommodate multiple choice, written, and even
video submissions for performance examinations.
E-marking software is used by individual
educational institutions and can also be rolled out to
the participating schools of awarding exam
organisations.

Online Assessment
Online assessment is a procedure by which
specific abilities or characteristics can be evaluated
via the Internet. Such assessments are most
frequently used in the area of personnel selection, in
order to determine how suitable a candidate is for a
specific job. Online Assessments consist of several
tests or questionnaires to be completed by the
candidate. Depending on the position which you
have applied for, various abilities and
characteristics are determined. It is often possible
for you to choose the order in which you do the
tests. It is usually not necessary to complete all the
tests in one sitting, but rather you can take breaks
between the tests.
There are several types of online tests that can
be categorised as follows: If the aim of a test is to
determine abilities such as concentration, logical
conclusions or text comprehension, we refer to them
as performance tests. Qualities such as willingness
to cooperate, ambition or sensitivity are determined
with the aid of personality questionnaires; specific
professional knowledge is determined with the help
of knowledge tests.
Online Assessment has some very clear
benefits in comparison to traditional assessments:
Firstly, no supervisors or invigilators are needed for
Online Assessment. This means that ‘gut feeling’
plays no role, neither while taking the test nor
during the assessment, so Online Assessments are
very objective. Secondly, Online Assessment can
also predict, with a relatively high degree of
accuracy, how suitable a candidate is for a specific
position.
Advantages of Online Assessment
There are some definite advantages to online
assessing:

 Although creating online tests is
labor-intensive, once a test is developed in
Blackboard, it is relatively easy to transfer it
and repeat it in other Blackboard courses.
 Blackboard allows for a high degree of
customization in the feedback students get in
response to each answer that they submit. As
an instructor, you could leverage this tool as
another way to engage with students about
course content.
 Online tests are asynchronous and can be
accessed on a variety of devices. If students
buy the Blackboard mobile app, they can
even take a test from their smartphone. The
flexibility offered by online testing can be a
great solution for learners with busy
schedules or when unexpected class
cancellations occur.
 While it is hard to prevent cheating,
Blackboard tests do offer many settings for
instructors to randomize questions, impose
test taking time limits, and restrict attempts.
However, make sure to explain all the
settings to students before they begin taking
the test.
 Testing in an online environment can be a lot
more interactive than traditional paper and
pen tests. Instructors can embed multimedia
in test questions to provide more engaging
assessments. For example, students may be
asked to identify a particular area of an image
by directly clicking on it instead of having to
answer in written form.
 In all likelihood, students are already using
online tools as study aids for their courses.
Instructors can better serve students by
providing them with custom made study aids
like online practice tests, rather than
entrusting students to rely on outside
resources that may not be valid sources of
information.
 For objective question types like multiple-
choice, Blackboard will automatically grade
student responses, saving time for the
instructor and providing more immediate
feedback to students.
 Online tests can be more accessible to
students with disabilities who have assistive
technologies built into their computers than
hand written tests are.
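The automatic grading of objective items mentioned in the list above can be illustrated with a short sketch. This is hypothetical Python code, not Blackboard's actual implementation; the function name, answer key and responses are invented for illustration:

```python
# Minimal sketch of automatic grading for objective
# (multiple-choice) items. All data are hypothetical.

def grade_objective_test(answer_key, responses):
    """Score one student's responses against the answer key
    and return the score plus per-question feedback."""
    feedback = {}
    score = 0
    for question, correct in answer_key.items():
        given = responses.get(question)
        is_correct = given == correct
        score += is_correct
        feedback[question] = (
            "correct" if is_correct
            else f"incorrect (expected {correct})"
        )
    return score, feedback

key = {"Q1": "b", "Q2": "d", "Q3": "a"}
student = {"Q1": "b", "Q2": "c", "Q3": "a"}
score, feedback = grade_objective_test(key, student)
print(score, feedback["Q2"])  # 2 incorrect (expected d)
```

Because each response is compared against the key mechanically, the same feedback rules apply identically to every student, which is what makes objective items cheap to grade and feed back immediately at scale.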

De-merits of Online Assessment

 Unlike collaborative, project-based online
assessments, multiple choice or essay tests
online can feel even more impersonal than
they do in the classroom which may
contribute to an online student’s sense of
isolation.
 While it is tempting to use the multiple choice
quizzes provided by the textbook publisher,
these types of assessments lack creativity and
may not be suitable to the specific needs of
your learners.
 Creating online tests in Blackboard can be
very tedious and time-consuming. It is not as
easy as simply uploading the Microsoft Word
version of your test. Instead, instructors have
to copy and paste each question’s text and
each individual answer’s text into
Blackboard, mark the correct answers, and
customize feedback and setting options.
 Some students will not be accustomed to
taking quizzes and tests online, and they may
need some hand-holding early in the semester
before they feel comfortable with the
technology.
 Cheating on an online test is as simple as
opening up another window and searching
Google or asking a classmate for the correct
answers. Furthermore, cheating on online
multiple choice tests is near impossible for
the instructor to prevent or catch.
 Though the technology that makes online
tests possible is a great thing, it can also cause
problems. If you do online testing, have a
back-up plan for students who have technical
difficulties and be ready to field some frantic
emails from students who have poor internet
connections or faulty computers.

ON DEMAND ASSESSMENT
On Demand Assessment is an online resource
for teachers to use when, where and how they
choose. Tests are designed to link to curriculum
and standards. It is a time-saving tool that can be
administered to a single student and/or a whole
class. The On Demand testing program uses the
VCAA Assessment Online software, a
decentralised computer-based system where
schools download tests from the VCAA central
server and distribute them via the school's Local
Area Network (LAN). Government schools can
access the On Demand program via the
CASES/CASES 21 Server.
The On Demand assessment program is a
valuable tool for schools, enabling them to conduct
assessment in a reliable and standardised
manner. These tests have been constructed by the
VCAA and are on offer for schools to
download. Once completed, tests are computer
marked and a range of reports are available for
teachers to immediately view and analyse the
results. To assist teachers with a variety of
assessment needs, On Demand tests can be used for:
 Pre-testing students prior to beginning a topic
 Applying the same test to post-test a topic of
work
 Testing new intake students or a late arrival
 Identifying individual student's strengths and
weaknesses
 Corroborating teacher judgments
 Assisting in forward planning of teaching
programs.
On Demand Testing can save time for teachers
by automatically marking tests (with the exception
of extended-response questions) and generating
analytical reports.
The computer-based system will instantly mark a
test and present the results to students when it is
applicable and generate different types of reports
for the teachers. This immediate feedback can be
used to support, encourage and motivate students in
their learning programs. Depending on the type of
test administered, teachers have the option to
display scores to students and view results through
a range of available reports. The system has the
capacity to store results from a range of assessment
tasks, enabling teachers to track and monitor student
progress over time.
Types of On Demand Assessments
Computer Adaptive Tests
Computer Adaptive Tests deliver sets of
questions to students that vary according to student
ability. Depending on the responses given in
previous questions, the system presents
progressively easier or more difficult questions to
the student. There are currently three reports
available for Computer Adaptive Tests - the ‘Class
Standard Score Report’, the ‘Student Test Session
Performance Report’, and the ‘Student Tracking
Report’. These reports provide immediate feedback
on the results for each student, including an
estimated ability score. Question level analysis is
also possible for Computer Adaptive Tests through
the ‘Student Test Session Performance Report’.
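The adaptive logic described above, moving to a harder question after a correct answer and an easier one after an incorrect answer, can be sketched as follows. This is a deliberately simplified illustration with invented names and data; operational adaptive tests estimate student ability statistically (for example with item response theory) rather than stepping one difficulty level at a time:

```python
# Simplified sketch of computer-adaptive question selection.
# Difficulty moves one level up after a correct response and
# one level down after an incorrect one, within levels 1..5.

def run_adaptive_test(item_bank, answer, start_difficulty=3, n_items=5):
    """item_bank maps difficulty level (1..5) to a list of questions;
    answer(question) returns True if the student responds correctly."""
    difficulty = start_difficulty
    administered = []
    for _ in range(n_items):
        question = item_bank[difficulty].pop(0)
        administered.append((difficulty, question))
        if answer(question):
            difficulty = min(5, difficulty + 1)   # harder next
        else:
            difficulty = max(1, difficulty - 1)   # easier next
    return administered

bank = {d: [f"level-{d} item {i}" for i in range(1, 6)]
        for d in range(1, 6)}
# A student who answers everything correctly climbs to the
# hardest level and stays there.
path = run_adaptive_test(bank, answer=lambda q: True)
print([d for d, _ in path])  # [3, 4, 5, 5, 5]
```

The sequence of difficulties administered is itself evidence about the student, which is why adaptive reports can include an estimated ability score rather than only a raw total.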
Linear Tests
In a linear test, students receive a fixed set of
questions using a variety of question types. All
students are presented with the same questions in
the same order during the test. Student responses are
saved and stored by the computer and teachers are
able to view and analyse the results at a student,
class or question level.
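Because every student receives the same questions in the same order, question-level analysis of a linear test reduces to simple aggregation, for example the proportion of the class answering each item correctly. The following is a hypothetical sketch; the function name and data are invented:

```python
# Sketch of question-level analysis for a linear test, where
# every student answers the same fixed set of questions.

def question_difficulty(results):
    """results: list of per-student dicts mapping question -> True/False.
    Returns the proportion of correct responses per question."""
    questions = results[0].keys()
    return {
        q: sum(r[q] for r in results) / len(results)
        for q in questions
    }

class_results = [
    {"Q1": True,  "Q2": False, "Q3": True},
    {"Q1": True,  "Q2": False, "Q3": False},
    {"Q1": True,  "Q2": True,  "Q3": True},
    {"Q1": False, "Q2": False, "Q3": True},
]
print(question_difficulty(class_results))
# {'Q1': 0.75, 'Q2': 0.25, 'Q3': 0.75}
```

An item that most of the class missed (here Q2) signals either a weakness in teaching or a flawed question, which is exactly the kind of class-level and question-level insight such reports provide.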
BLOOM'S DIGITAL TAXONOMY
Bloom's Taxonomy and Bloom's Revised
Taxonomy are key tools for teachers and
instructional designers. Benjamin Bloom published
the original taxonomy in the 1950s, and Lorin
Anderson published the revised taxonomy in 2001.
Since the most recent publication of the taxonomy
there have been many changes and developments that
must be addressed.
So, this is an update to Bloom's Revised Taxonomy
to account for the new behaviours, actions and
learning opportunities emerging as technology
advances and becomes more ubiquitous. Bloom's
Revised Taxonomy accounts for many of the
traditional classroom practices but does not account
for the new technologies and the processes and
actions associated with them, nor does it do justice
to the “digital children”.
The Original taxonomy and the revised
taxonomy by Anderson and Krathwohl are both
focused within the cognitive domain. For a
classroom practitioner these are useful, but they do
not by themselves address the activities undertaken
in the classroom.
This Digital Taxonomy is not restricted to the
cognitive domain rather it contains cognitive
elements as well as methods and tooling. These are
the elements that as a practitioner I would use in my
classroom practice. Like the previous taxonomies,
it is the quality of the action or process that defines
the cognitive level, rather than the action or process
alone. While Bloom's in its many forms, does
represent the learning process, it does not indicate
that the learners must start at the lowest taxonomic
level and work up. Rather, the learning process can
be initiated at any point, and the lower taxonomic
levels will be encompassed within the scaffolded
learning task. An increasing influence on learning is
the impact of collaboration in its various forms.
These are often facilitated by digital media and are
increasingly a feature of our digital classrooms.
This taxonomy is not about the tools and
technologies; these are just the medium. Instead, it
is about using these tools to achieve recall,
understanding, application, analysis, evaluation and
creativity.
Remembering
While the recall of knowledge is the lowest
of the taxonomic levels it is crucial to learning.
Remembering does not necessarily have to occur as
a distinct activity, for example the rote learning of
facts and figures. Remembering or recall is
reinforced by application in higher-level activities.
This element of the taxonomy refers to the retrieval
of material. In a digital
age, given the vast amount of information available
to us it is not realistic to expect students to
remember every fact or figure. However, it is
crucial that students can use digital means to find,
record, organise, manage and retrieve the important
resources they need. This is a key element given the
growth in knowledge and information.
The digital additions and their justifications are as
follows:

Bullet pointing – This is analogous to listing, but in
a digital format.
Highlighting – This is a key element of most
productivity suites; encouraging students to pick out
and highlight key words and phrases is a technique
for recall.
Bookmarking or favouriting – this is where
students mark web sites, resources and files for later
use. Students can then organise these.
Social networking – this is where people develop
networks of friends and associates. It forges and
creates links between different people. Like social
bookmarks (see below), a social network can form a
key element of collaborating and networking.
Social bookmarking – this is an online version of
local bookmarking or favourites; it is more
advanced because you can draw on others'
bookmarks and tags.
Searching or "googling" – search engines are now
key elements of students' research.
Understanding
Understanding builds relationships and links
knowledge. At this taxonomic level the students
should understand the processes and concepts;
essentially, they are able to explain or describe
these. They can summarise and rephrase them in
their own words. There is a clear difference between
remembering, the recall of facts and knowledge in
its various forms (listing, bullet points, highlighting,
etc.), and understanding.
The digital additions and their justifications are as
follows:
Advanced and Boolean Searching - This is a
progression from the previous category. Students
require a greater depth of understanding to be able
to create, modify and refine searches to suit their
search needs.
Blog journalling – This is the simplest of the uses
for a blog: a student simply "talks", "writes" or
"types" a daily or task-specific journal. This shows
a basic understanding of the activity reported upon.
The blog can be used to develop higher-level
thinking when used for discussion and
collaboration.
Categorising and tagging – digital classification:
organising and classifying files, web sites and
materials using folders, and using del.icio.us and
similar tools beyond simple bookmarking. This can
be organising, structuring and attributing online
data, meta-tagging web pages, etc. Students need to
be able to understand the content of the pages in
order to tag them.
Commenting and annotating – a variety of tools
exist that allow the user to comment on and annotate
web pages, PDF files and other documents. The
user develops understanding simply by commenting
on the pages. This is analogous to writing notes on
handouts, but is potentially more powerful, as you
can link and index these.
Subscribing – subscription takes bookmarking in its
various forms, and simple reading, one level further.
The act of subscription by itself does not show or
develop understanding, but the process of reading
and revisiting the subscribed feeds often leads to
greater understanding.
Applying
Applying relates to situations where learned
material is used through products like models,
presentations, interviews and simulations. A
student applies facts and processes he has learnt to
a situation. Applying could be using a process, skill
or set of facts. The digital additions and their
justifications are as follows:
Running and operating – this is the action of
initiating a program: operating and manipulating
hardware and applications to obtain a basic goal or
objective.
Playing – the increasing emergence of games as a
mode of education leads to the inclusion of this term
in the list. Students who successfully play or operate
a game are showing understanding of process and
task, and application of skills.
Uploading and sharing – uploading materials to
websites and sharing materials via sites like Flickr.
This is a simple form of collaboration, a higher-
order skill.
Hacking – hacking in its simpler forms is applying
a simple set of rules to achieve a goal or objective.
Editing – with most media, editing is a process or
procedure that the editor employs.
Analysing
Breaking material or concepts into parts,
determining how the parts relate or interrelate to one
another or to an overall structure or purpose. Mental
actions include differentiating, organizing and
attributing as well as being able to distinguish
between components. The digital additions and
their justifications are as follows:
Mashing – mash-ups are the integration of several
data sources into a single resource. Mashing data is
currently a complex process, but as more options
and sites evolve it will become an increasingly easy
and accessible means of analysis.
Linking – this is establishing and building links
within and outside of documents and web pages.
Reverse-engineering – this is analogous to
deconstruction. It is also related to cracking, often
without the negative implications associated with
it.
Cracking – cracking requires the cracker to
understand and operate the application or system
being cracked, analyse its strengths and weaknesses
and then exploit these.
Evaluating
Making judgements based on criteria and
standards through checking and critiquing. The
digital additions and their justifications are as
follows:
Blog/vlog commenting and reflecting – constructive
criticism and reflective practice are often facilitated
by the use of blogs and video blogs. Students
commenting on and replying to postings have to
evaluate the material in context before responding.
Posting – posting comments to blogs, discussion
boards and threaded discussions is an increasingly
common element of students' daily practice. Good
postings, like good comments, are not simple one-
line answers; rather, they are structured and
constructed to evaluate the topic or concept.
Moderating – this is high-level evaluation: the
moderator must be able to evaluate a posting or
comment from a variety of perspectives, assessing
its worth, value and appropriateness.
Collaborating and networking – collaboration is an
increasing feature of education. In a world
increasingly focused on communication,
collaboration leading to collective intelligence is a
key aspect. Effective collaboration involves
evaluating the strengths and abilities of the
participants and evaluating the contribution they
make. Networking is a feature of collaboration:
contacting and communicating with relevant
persons via a network of associates.
Testing (alpha and beta) – testing of applications,
processes and procedures is a key element in the
development of any tool. To be an effective tester
you must have the ability to analyse the purpose of
the tool or process, what its correct function should
be, and what its current function is.
Validating – With the wealth of information
available to students combined with the lack of
authentication of data, students of today and
tomorrow must be able to validate the veracity of
their information sources. To do this they must be
able to analyse the data sources and make
judgements based on these.
Creating
Creativity involves all of the other facets of the
taxonomy. In the creative process the student/s,
remembers, understands & applies knowledge,
analyses and evaluates outcomes, results, successes
and failures as well as processes to produce a final
product. The digital additions and their
justifications are as follows:
Programming – whether it is creating their own
applications, programming macros or developing
games or multimedia applications within structured
environments, students are routinely creating their
own programs to suit their needs and goals.
Filming, animating, videocasting, podcasting,
mixing and remixing – these relate to the increasing
trend and availability of multimedia and multimedia
editing tools. Students frequently capture, create,
mix and remix content to produce unique products.
Directing and producing – to direct or produce a
product, performance or production is a highly
creative act. It requires the student to have vision,
understand the components and meld them into a
coherent product.
Publishing – whether via the web or from home
computers, publishing in text, media or digital
formats is increasing. Again, this requires a broad
overview not only of the content being published,
but of the process and product. Related to this
concept are video blogging (the production of video
blogs), blogging, and wiki-ing (creating, adding to
and modifying content in wikis). Creating or
building mash-ups would also fit here.
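The digital additions discussed above can be summarised as a simple lookup table pairing each level of the revised taxonomy, from lowest to highest, with sample digital actions. This structure is just one convenient representation of the lists given in this section, not part of the taxonomy itself:

```python
# Bloom's Digital Taxonomy: each cognitive level paired with sample
# digital actions drawn from the discussion above (lowest to highest).
DIGITAL_TAXONOMY = {
    "remembering":   ["bullet pointing", "highlighting", "bookmarking",
                      "social networking", "social bookmarking", "searching"],
    "understanding": ["advanced searching", "blog journalling",
                      "categorising and tagging", "commenting", "subscribing"],
    "applying":      ["running and operating", "playing",
                      "uploading and sharing", "hacking", "editing"],
    "analysing":     ["mashing", "linking", "reverse-engineering", "cracking"],
    "evaluating":    ["commenting and reflecting", "posting", "moderating",
                      "collaborating and networking", "testing", "validating"],
    "creating":      ["programming", "filming", "podcasting", "remixing",
                      "directing and producing", "publishing"],
}

def level_of(action):
    """Return the taxonomic level a digital action belongs to, if listed."""
    for level, actions in DIGITAL_TAXONOMY.items():
        if action in actions:
            return level
    return None
```

A teacher planning a task could use such a table to check which cognitive level a proposed digital activity targets, remembering that it is the quality of the process, not the action alone, that fixes the level.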
EXAMINATION REFORM REPORTS IN
INDIA

Ever since they were instituted, with the
establishment of the universities of Calcutta,
Madras and Bombay in 1857, examinations have
been under criticism. Their effectiveness, the
purposes they serve, and their relevance have
remained controversial issues. Even the earliest of
the reviews of education in India, dating from 1886,
points out that the university entrance examination,
"Matriculation", had apparently stimulated the
holding of at least six external examinations
extending down to the lower primary stages. The
Indian University Commission (1902), which also
considered the matriculation examination and its
effects, carried forward the recommendations of the
Hunter Commission (1882) when it stated, "It is
beyond doubt that the greatest evil from which the
system of Indian university education suffers is
that teaching is subordinated to examination and
not examination to teaching."

The Calcutta University Commission (1917-19)
recommended the creation of boards of secondary
education so as to end the domination of school
education by the universities. Intermediate classes
were also introduced as a buffer between
universities and secondary education. They were of
two years duration after ten years of schooling and
provided preparatory education for universities and
professional education. This commission also
identified several shortcomings in the examination
system and specifically indicated its unhappiness
about alternative questions, mechanical system of
marking, grace marks, frequency of examination,
and so on.
The transfer of administrative responsibilities for
education from the British to Indian ministers in
1921-22, and the emergence of provincial autonomy
in 1935, brought all the stages of education under the
effective control of Indians themselves. This
political development led to the establishment of
Boards of Secondary Education in the states. The
main function of these boards was to conduct
external examinations at the school-leaving stage.
Even though such boards began holding
examinations, the matriculation examinations,
which were conducted by the universities
simultaneously with the boards' own, continued to
dominate the scene.
The Hartog Committee Report (1929) criticized
the academic bias of examinations at the school
level which continued to be geared to the needs of
university entrance and provided no opportunities
for the majority of students to take up industrial,
commercial, or technical courses as a preparation
for life. The report of the Central Advisory Board
for Post-War Educational Development in India,
known as the Sargent Plan (1944) , again criticized
the subordination of the high school curriculum to
the requirements of universities, particularly in
view of the fact that only one out of ten or fifteen
high school leavers went on to a university.
Post-Independence Era
After India became an independent nation,
the University Education Commission (1948) was
equally vocal in its criticism of examinations,
stating that, if members were asked to make just one
recommendation for reforming education, they
would identify the area of examinations as the one
where greatest priority and urgency for introducing
reforms should be applied.
Almost at the same time, the state
governments became increasingly conscious about
improving their systems of education. In 1948, the
United Provinces (nowadays Uttar Pradesh)
Government appointed a Committee on the
Reorganization of Primary and Secondary
Education . In the same year, a Committee on the
Reorganization of Secondary Education was also
appointed by the Government of Central Provinces
and Berar. Both committees deliberated on the
problems of examinations in the context of
education and suggested immediate action for
reforming them. Soon afterwards, a Secondary
Education Reorganization Committee (1953) was
appointed in Uttar Pradesh. This committee made
the positive suggestions that external examinations
might be replaced by an assessment made by the
teacher, and that continuous evaluation could be the
main basis for a final assessment of a student.
Mudaliar Commission
The Secondary Education Commission,
popularly known as the Mudaliar Commission
(1952-53) , made the following specific
recommendations in regard to examination reform :
1. The number of external examinations should be
reduced, and the element of subjectivity in the
essay-type tests should be minimized by
introducing objective tests and also by changing
the type of question.
2. In order to assess the pupil's all-round progress
and to determine his future, a proper system of
school records should be maintained for every
pupil. These would indicate the work done by
him during successive periods, and his
attainments in each of the different spheres.
3. In the final assessment of the pupils, due credit
should be given to the 'internal' (in-school) tests
and the school records of the pupils.
4. A system of symbolic rather than numerical
marking should be adopted for evaluating and
grading the work of the pupils in external and
internal examinations, and in maintaining the
school records.
5. There should be only one public examination at
the completion of the secondary school course.
6. The certificate awarded should contain, besides
the results of the public examination in different
subjects, results of the school tests in subjects not
included in the public examination; as well as the
gist of the school records.
7. The final public examination should be
transformed into a system of compartmental
examinations. These were conceived as
supplementary to the main public examination.
They provided an opportunity for students who
had secured the minimum qualifying marks in
most subjects, but had failed in one or two
subjects by a small margin, to retake the
examinations in the deficient subjects.
The All India Council For Secondary Education
After the Mudaliar Commission submitted its
report, the Union Ministry of Education appraised
these recommendations and began seeking ways of
implementing them. For this purpose, the All India
Council for Secondary Education (AICSE) was
established. The then Minister for Education,
Maulana Abul Kalam Azad, summarized the main
functions of this body as 'an organization to advise
the Government of India and state governments on
the manner in which the recommendations of the
commission could be effectively implemented'.
The AICSE started working on a variety of
problems but soon realized that, to be effective, it
should concentrate its efforts on a smaller number
of specific priority problems. It therefore circulated
a questionnaire to a large number of educational
agencies and eminent individuals, to help determine
priorities for a plan of action. From an analysis of
the responses, the following priorities in its various
fields of work were identified: examination reform;
pre-service and in-service teacher education;
curriculum for higher secondary schools;
methodology, apparatus, and equipment for science
teaching; administration and organization of multi-
purpose schools.
Without losing time, AICSE organized a seminar
on 'Examination reform' at Bhopal, 22-29 February
1956. Besides other recommendations for
improving examinations, the seminar also
recommended the creation of an expert body—to be
called the Central Examination Unit—to work in
this specialized area.
Examination Reform As A National Programme
The growing consensus among Indian
educationists about the need for reform was
receiving the active attention of the Ministry of
Education, Government of India. The decision
accepting Bhopal Seminar's recommendation
regarding establishment of a Central Examination
Unit (CEU) was the earliest outcome. The next task
became one of developing a comprehensive plan of
action for moving forward quickly and effectively.
In doing so, the Ministry was eager to draw upon
the experiences of other countries as well as the
expertise available at the national level. In 1957, it
invited Dr. Benjamin S. Bloom, then Chief
Examiner of the University of Chicago, to advise on
the examination reform task. In the course of his
brief stay in India, Dr. Bloom met with Indian
educationists and educational administrators in
different parts of the country and worked with about
300 school and university teachers in seven
workshops. He then assisted the Ministry of
Education in developing the required plan of action.
Due to a paucity of trained personnel to man the
CEU, it could only be started on 13 January 1958
as a pilot unit with five officers within AICSE.
Simultaneously, ten other educators selected from
different parts of the country were sent for training
in Curriculum and Educational Evaluation, and the
pilot unit was subsequently expanded into a full-
fledged unit. It was with this event that the
Examination Reform Programme took the form of
a national movement.
In 1959, AICSE and its Central Examination
Unit were absorbed into the Union Ministry of
Education. This occurred in a period when a
supporting project of State Evaluation Units,
sponsored and financed by the Ministry, was being
initiated. In 1961, with the establishment of the
National Council of Educational Research and
Training (NCERT), the CEU became part of a
body which already had a strong involvement with
secondary education and which, in 1967, was given
responsibilities for the improvement of primary
education as well. By this time, too, several
universities had approached the CEU for help in
reforming their examinations. NCERT's
reorganized and re-named Examination Reform
Unit has functioned as such since 1974.
Kothari Commission
The examination reform movement was given
strong impetus when the Kothari Commission was
established in 1964 by the Government of India.
This commission was different from the earlier
ones as its terms of reference extended to all stages
of education. It could, therefore, study India's
education system as a whole and give concrete
recommendations on examination reform for all
stages of education. The programme now being
pursued is largely based on the Kothari
recommendations, which were as follows:
1. The new approach to evaluation will attempt:
(a) to improve the written examination so that it
becomes a valid and reliable measure of
educational achievement; and (b) to devise
techniques for measuring those important
aspects of the student's growth that cannot be
measured by written examinations.
2. Evaluation at the lower primary stage should
   help pupils to improve their achievement of
   basic skills and to develop constructive habits
   and attitudes.
It would be desirable to treat classes I to IV as
an ungraded unit. This would enable children to
advance at their own pace. Where this is not
feasible, classes I and II may be treated as one
block divided into two groups—one for slow
and the other for fast learners. Teachers should
be appropriately trained for the ungraded system
assessment. Diagnostic testing should be done
through simple teacher made tests. Cumulative
record cards are important in indicating pupils'
growth and development. Even so, at this level
they should be very simple and should be
introduced in a phased manner.
3. Although the first national standard of
attainment is to be set at the end of the primary
stage, it is not considered necessary or desirable
to prescribe a rigid and uniform level of
attainment tested by a compulsory external
examination. However, for the proper
maintenance of standards, periodic surveys of
the levels of achievement in primary schools
should be conducted by district school
authorities, using refined tests prepared by state
evaluation organizations.
4. The district educational authority may arrange
for a common examination at the end of primary
stage for schools in the district, using
standardized and refined tests. This
examination will have greater validity and
reliability than the in-school examination, and
will provide inter-school comparability of
levels of performance.
5. The certificate at the end of the primary course
should be given by the school and should be
accompanied by the cumulative record card and
the statements of results of the common
examinations, if any.
6. In addition to the common examinations,
special tests may be held at the end of the
primary course for the award of scholarships or
certificates of merit and for the purpose of
identifying talent.
7. External examinations should be improved by
raising the technical competence of paper-
setters; orienting question papers to objectives
other than the simple acquisition of knowledge;
improving the nature of questions; adopting
scientific scoring procedures; and mechanizing
the scoring of scripts and the processing of
results.
The certificate issued by the state board of
school education on the basis of the results of
the external examination should give the
candidate's performance in the different subjects
for which he has appeared, and there should be
no remark to the effect that he has passed or
failed in the whole examination. The candidate
should be permitted to appear again, if he so
desires, for the entire examination or for
separate subjects in order to improve his
performance record. Schools of proven merit
should be given the right to hold their own final
examination at the end of class X, one which
will be regarded as equivalent to the external
examinations of the state board of school
education. The latter body will issue certificates
to the successful candidates of these schools on
the recommendation of the schools.
A committee set up by the state board of school
education should develop carefully worked-out
criteria for the selection of such schools. The
schools should be permitted to frame their own
textbooks, and conduct their educational
activities without external restrictions.
Internal assessment by the schools should be
comprehensive enough to evaluate all aspects of
student growth, including those not measured
by the external examinations. It should be
descriptive as well as quantified. Written
examinations conducted by schools should be
improved, and teachers trained appropriately.
The internal assessment should be shown
separately from the external examination
marks. During the transition period, higher
secondary students will have to appear for two
successive external examinations (at the end of
classes X and XI) within one year. Where,
however, the courses in classes IX to XI are
integrated, the examination at the end of class X
need not be insisted upon.

TEST YOUR UNDERSTANDING


1. Explain the major issues in the classroom
assessment in India.
2. List out the benefits of on-line exams.
3. What are the advantages of on-line
assessment ?
4. Explain the concept of open book exam.
5. Explain the term on-demand assessment.
6. Explain the concept of IBO.
7. Briefly explain the different reports on
examination reforms in India.
8. Explain the concept of Bloom's Digital
Taxonomy.

UNIT - V
ASSESSMENT IN INCLUSIVE PRACTICES
DIFFERENTIATED ASSESSMENT
“Differentiation allows students multiple
options for taking in information, making sense of
ideas, and expressing what they have learned. A
differentiated classroom provides different avenues
to acquiring content, to processing or making sense
of ideas, and to developing products so that each
student can learn effectively.”
- Carol Ann Tomlinson
Differentiated assessment is using a variety
of tasks that reflect the learning differences present
in the class and allow opportunities for all learners
to demonstrate what they know and are able to do.
In differentiated assessment, (1) students are active
in setting goals based on student readiness,
interests, and abilities. They may choose the topic
and plan the practice, but they should also help
decide how and when they want to be evaluated, as
well as whether they should be evaluated on the
basis of growth or of attainment. This gives them a
feeling of ownership in their own learning process
and of partnership with the instructor, and generally
motivates as well as empowers them. Motivation is
an important factor in learning, and is all too often
underemphasized in the assessment phase; and (2)
assessment of student readiness and growth is
ongoing and built into the curriculum. Teachers
continuously assess student readiness and interest to
provide support when students need additional
instruction and guidance as well as evaluate when a
student or group of students is ready to move ahead
to another phase of curriculum.
Variety in measurement can be achieved by
assessing students through different measures that
allow you to see them apply what they have learned
in different ways and from different perspectives.
Teachers need to create a variety of entry points to
ensure that students' differing abilities, strengths,
and needs are all taken into consideration.
Students then need varying opportunities to
demonstrate their knowledge based on the teaching,
hence differentiated assessment. Key features of
differentiated assessment are:
 Choice is key to the process. Choice of learning
activity as well as choice in the assessment (how
the student will demonstrate understanding).
 The learning tasks always consider the students’
strengths/weaknesses. Visual learners will have
visual cues, auditory learners will have auditory
cues, etc.
 Groupings of students will vary, some will work
better independently, and others will work in
various group settings.
 Multiple intelligences are taken into consideration,
as are the students' learning and thinking styles.
 Lessons are authentic to ensure that all students
can make connections.
 Project- and problem-based learning are also key
in differentiated instruction and assessment.
 Lessons and assessments are adapted to meet the
needs of all learners.
 Opportunities for children to think for
themselves are clearly evident.

CULTURALLY RESPONSIVE
EDUCATIONAL SYSTEMS

Culturally responsive educational systems
are grounded in the belief that we live in a society
where specific groups of people are afforded
privileges that are not accessible to other groups. By
privileging some over others, a class structure is
created in which the advantaged have more access
to high quality education and later, more job
opportunities in high status careers. This leads to
socio-economic stratification and the development
of majority/minority polarity. We can turn the tide
on this institutionalized situation by building
systems that are responsive to cultural difference
and seek to include rather than exclude difference.

Students who come from culturally and
linguistically diverse backgrounds can excel in
academic endeavors if their culture, language,
heritage, and experiences are valued and used to
facilitate their learning and development. These
systems are concerned with instilling caring ethics
in the professionals that serve diverse students,
support the use of curricula with ethnic and cultural
diversity content, encourage the use of
communication strategies that build on students’
cultures, and nurture the creation of school cultures
that are concerned with deliberate and participatory
discourse practices. Moreover, culturally
responsive educational systems create spaces for
teacher reflection, inquiry, and mutual support
around issues of cultural differences. For the
effective implementation of culturally responsive
education, the head of the institution must consider
the following factors;

1. Every student must have an equal opportunity to
achieve her or his full potential.

2. Every student must be prepared to competently
participate in an increasingly intercultural society.
3. Teachers must be prepared to effectively
facilitate learning for every student, no matter how
culturally different from or similar to her or himself.
4. Schools must be active participants in ending
oppression of all types, first by ending oppression
within their own walls, then by producing socially
and critically active and aware students.
5. Education must become more fully student-
centered and inclusive of the voices and experiences
of the students.
6. Educators, activists, and others must take a more
active role in reexamining all educational practices
and how they affect the learning of all students:
testing methods, teaching approaches, evaluation
and assessment, school psychology and counseling,
educational materials and textbooks, etc.
Culturally Responsive Assessment
Schools today are becoming increasingly
diverse and culturally rich. The optimum
educational environment is a dynamic place where
students who are culturally and linguistically
diverse are provided the opportunity to learn and
grow. In reality, some aspects of the school system
provide less than optimal conditions for this diverse
group of students. As a result, a percentage of
students from culturally and ethnically diverse
backgrounds are over-represented in certain high-
incidence disability categories in special education
including mild mental retardation, emotional
disturbance, speech impairment and learning
disabilities.
In light of this over-representation, the role of
culturally competent assessment has gained
importance in the field of special education
eligibility. Obtaining knowledge and skills in
appropriate assessment techniques is imperative
and ethically necessary. The validity of the Full and
Individual Assessment results is an issue of vital
importance as these results are used to inform
important decisions that impact a student's life.
Securing valid results is contingent upon many
factors, including the selection of appropriate
instruments for the evaluation as well as the
selection of instruments that have been validated
against samples from the population to which the
student being assessed belongs. Additionally, if an
assessment was translated, there must be sufficient
evidence that this process occurred with integrity
and fidelity.
Consideration of student factors such as the
history of immigration, acculturative status and
stress, socio-economic status, history of educational
programs and language assessment should provide
helpful information for the selection of appropriate
assessment instruments in the evaluation process.
For assessment personnel, culturally competent
assessment requires the integration of culturally
sensitive attitudes, knowledge, interview skills,
intervention strategies and evaluation
practices. Ultimately, the purpose of assessment is
to determine appropriate intervention techniques
and strategies designed to promote success. The
focus of nondiscriminatory assessment should be to
analyze the data fairly in order to link the results to
intervention. Therefore, the value in the evaluation
is not limited to identification or classification;
rather it should be extended to inform appropriate
instructional interventions, accommodations and
instructional program development.
ACHIEVEMENT TEST

An achievement test is designed to evaluate a unit during the teaching-learning process. It has great significance in all types of instructional progress of the individual. It focuses upon an examinee's attainments at a given point in time. A classroom teacher depends upon achievement tests for measuring the progress of his students in his subject area. Importantly, several educational and vocational decisions about students are taken on the basis of their performance in achievement tests.
The most instructionally-relevant
achievement tests are those developed by the
individual teacher for use with a particular class.
Teachers can tailor tests to emphasize the
information they consider important and to match
the ability levels of their students. If carefully
constructed, classroom achievement tests can
provide teachers with accurate and useful
information about the knowledge retained by their
students.

Many educators have defined an achievement test in several ways. According to Thorndike and Hagen, "The type of ability test that describes what a person has learned to do is called an Achievement Test".

Gronlund describes an achievement test as "a systematic procedure for determining the amount a student has learned through instructions".

In the words of Wiersma and Jurs, an achievement test "is a measure of knowledge and skills in a content area".

Purpose Of Achievement Tests:

Achievement tests are universally used mainly for the following purposes:
 To measure whether students possess the pre-
requisite skills needed to succeed in any unit or
whether the students have achieved the
objectives of the planned instruction.
 To monitor students' learning and to provide
ongoing feedback to both students and teachers
during the teaching-learning process.
 To identify the students' learning
difficulties whether persistent or recurring.
 To assign grades.

Characteristics Of A Good Achievement Test

 A good achievement test is tried out and selected on the basis of its difficulty level and discriminating power.
 It should have a description of
measured behaviour.
 It should contain sufficient number of test items
for each measured behaviour.
 It should be divided into different knowledge
and skills according to behaviours to be
measured.
 It should be standardized for different users.
 It carries a test manual for its administration
and scoring.
 It provides equivalent and comparable forms of
the test.
Uses Of Achievement Tests

 It helps to get a better understanding of the needs and abilities of the pupils.
 It helps to discover the type of learning
experiences that will achieve the objectives
with best possible results.
 It helps to evaluate the extent to which the
objectives of education are being achieved.
 To evaluate, to revise and to improve the
curriculum in the light of these results.
 The teacher will be able to discover backward
children and provide proper remedial
instruction for their betterment.
 The teacher will be able to determine and
diagnose the weaknesses of the students in
various subjects.
 It helps the parents in recognizing the strength
and weakness of their children.
 By studying the results of an achievement test
the teacher will be able to determine whether or
not the students are working at their maximum
capacity.
 It helps to determine the general level of
achievement of a class and thus to judge the
teaching efficiency of the teacher.
CONSTRUCTION OF ACHIEVEMENT
TESTS

The basis for the construction of achievement tests in the traditional classroom was the theory of Bloom's Taxonomy. Nowadays, however, educators give importance to constructivist classrooms and to the assessment of the attainment of mental processes, so weightage is given to the mental processes / instructional objectives drawn from the theory of the Revised Bloom's Taxonomy. With the emergence of the Digital Taxonomy, educators also use online assessment technologies for assessing the achievement of students. There are several steps involved in the construction of achievement tests. They are:

 Planning of the test
 Preparation of a design for the test
 Preparation of the blueprint
 Writing of items
 Preparation of the scoring key and marking scheme
 Preparation of question-wise analysis
Planning of the test
The first and the most important step in
planning a test is to identify the instructional
objectives. Each subject has a different set of
instructional objectives. In the subjects of Science,
Social Sciences, and Mathematics the major
objectives are categorized as knowledge,
understanding, application and skill, while in
languages the major objectives are categorised as
knowledge, comprehension and expression. Before
starting the writing of items the paper setter should
plan the learning outcomes to be realized from the
achievement test. The paper setter should also determine the maximum time, maximum marks of the test and the nature of the test.

Preparation of a design for the test.


The second step in planning a test is to make
the "Design". The Design specifies weightages to
different (a) instructional objectives, (b) types (or
forms) of questions, (c) units and sub-units of the
course content, (d) levels of difficulty. It also
indicates as to whether there are any options in the
question paper, and if so, what their nature is. The
design, in fact, is termed as an instrument which
reflects major policy decisions of the examining
agency, whether it is a Board or an individual.

 Weightage to Objectives

It indicates the weightage to be given for the assessment of instructional objectives. There will be a variation in the level of attainment of objectives from the lowest to the highest level of instructional objectives. Only by giving weightages can we assess which objective is attained and at which level.

 Weightage to Content / Sub-units

It shows the weightage given to each sub-topic of a unit in a lesson or content area. The weightage is given according to the importance and depth of the content.

 Weightage to Form of Questions

It indicates the weightage given to each type of question, i.e., objective type questions, short answer type questions and essay type questions. The paper setter should select those forms of questions that are suitable to the objectives and contents to be tested.

 Weightage to Difficulty Level

In a classroom there will be students with individual differences: high, average and low achievers. So the paper setter should give due weightage to all these types of learners while assigning weightage to the difficulty of test items. Hence the achievement test will contain easy, average and difficult questions, giving each type of learner an opportunity to respond.
DESIGN FOR AN ACHIEVEMENT TEST
Class : VIII Marks : 50
Subject : Physical Science Time : 2hrs
I. Weightage to Objectives

S.No  Objectives      Marks  Percentage
1     Knowledge         6        12
2     Understanding    16        32
3     Application      23        46
4     Skill             5        10
      Total            50       100

II. Weightage to Content

S.No  Content              Marks  Percentage
1     Centre of gravity     15        30
2     Simple machines       10        20
3     Levers and pulleys    16        32
4     Friction               9        18
      Total                 50       100

III. Weightage to Difficulty Level

S.No  Level      Marks  Percentage
1     Easy        10        20
2     Average     30        60
3     Difficult   10        20
      Total       50       100

IV. Weightage to Form of Questions

S.No  Question Type  Marks  Percentage
1     Objective       20        40
2     Short Answer    20        40
3     Essay           10        20
      Total           50       100
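The percentage column of each weightage table is derived from the marks alone. A minimal sketch of that arithmetic, using the objective-wise figures from the sample design above (the function name is illustrative, not a standard API):

```python
# Sketch: derive the percentage column of a weightage table from the marks.
# Figures mirror the sample design above; weightage_table is an illustrative name.

def weightage_table(marks_by_category, total=None):
    """Return (category, marks, percentage) rows, followed by a total row."""
    total = total if total is not None else sum(marks_by_category.values())
    rows = [(name, marks, round(100 * marks / total))
            for name, marks in marks_by_category.items()]
    rows.append(("Total", total, 100))
    return rows

objectives = {"Knowledge": 6, "Understanding": 16, "Application": 23, "Skill": 5}
for name, marks, pct in weightage_table(objectives):
    print(f"{name:<15}{marks:>6}{pct:>12}")
```

The same helper reproduces the content-wise, difficulty-wise and form-wise tables by passing in the corresponding marks allocation.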
Preparation of The Blueprint
The third step is to prepare the "Blueprint".
The policy decisions, as reflected in the design
of the question paper, are translated into action
through the Blueprint. “A blue print is a three-
dimensional chart, showing distribution of
questions reflecting numerical weightages in terms
of emphasis to be given to different units,
instructional objectives and forms of questions”.
The three dimensions of the blueprint consist of
content areas in horizontal rows and objectives and
forms of questions in vertical columns. Once the
blueprint is prepared, the paper setter can select the
items and prepare the question paper. It is at this stage that the paper setter decides how many questions are to be set for different objectives.
BLUEPRINT

(Forms of questions: O = Objective, S = Short Answer, E = Essay. Figures within brackets indicate the number of questions; figures outside the brackets indicate the marks per question.)

Objectives           Knowledge     Understanding     Application       Skill
Forms of Questions  O     S    E   O     S     E    O     S     E    O   S    E   Total

Centre of gravity  (3)1          (3)1  (1)2              (2)2                (1)5   15
Simple machines    (1)1          (3)1  (2)2        (2)1                             10
Levers and
pulleys                 (1)2     (3)1  (1)2        (2)1        (1)5                 16
Friction                                           (3)1  (3)2                        9

Sub Total           4     2    0   8     8     0    8    10     5    0   0    5     50
Total                    6              16               23              5
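Hand-totalling a blueprint is error-prone, so the row, column and grand totals can be recomputed from the cells and checked against the figures entered in the design. A minimal sketch, storing each cell as (number of questions, marks per question); the cells below are a small illustrative subset, not the full blueprint:

```python
# Sketch: blueprint cells keyed by (content, objective, form), each holding
# (number_of_questions, marks_per_question). Totals are recomputed from the
# cells so they can be checked against the design. Illustrative subset only.

blueprint = {
    ("Friction", "Application", "Objective"):    (3, 1),  # 3 questions x 1 mark
    ("Friction", "Application", "Short Answer"): (3, 2),  # 3 questions x 2 marks
    ("Centre of gravity", "Skill", "Essay"):     (1, 5),  # 1 question x 5 marks
}

def marks_where(pred):
    """Total marks over all cells whose (content, objective, form) key matches."""
    return sum(n * m for key, (n, m) in blueprint.items() if pred(key))

friction_row = marks_where(lambda k: k[0] == "Friction")     # content-wise total
application  = marks_where(lambda k: k[1] == "Application")  # objective-wise total
grand_total  = marks_where(lambda k: True)
print(friction_row, application, grand_total)
```

With the full set of cells entered, the same predicates reproduce every sub-total and the grand total of the blueprint.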
Writing of Questions / Items

The fourth step, after the finalization of the blueprint, is writing appropriate questions in accordance with the broad parameters set out in the blueprint. The basic criteria of a good question paper are validity, reliability and usability or practicability.

Validity refers to the relevance of testing, or 'the extent to which a test measures what it intends to measure'. Reliability means 'how accurately and consistently it measures the achievement from time to time, whatever it measures'. A question paper is usable or practicable if it is easy to construct, administer, score and interpret.

There are mainly three kinds of questions - essay, short answer and objective type. Each question should be made appropriately for meeting the required instructional objectives and skills.

 Objective Type Question:


An objective question is one which is free from
any subjective bias - either from the tester or the
marker. Objective questions can take various forms
such as Simple recall, Multiple choice, True or
false, Matching block, etc.., but invariably they
require brief answers with little or no writing. There
can only be one right or objective answer to an
objective question. A simple tick or a quick oral
answer may be enough.

 Short Answer Questions:

Short answer questions generally require exact answers and usually take less than five minutes to read and answer; many (very short answer) take less than two minutes.

 Extended / Essay Type Question:

The extended / essay type includes questions which require pupils to write a brief description, draw a map, make a list, perform a calculation, translate a sentence and so on.

Sample Question Paper

Physical Science
Standard : VIII Mark : 50
Time : 2 hrs
PART - A
I. Choose the Correct Answer 1 x 20 = 20
1. Mechanics is a branch of
a) Chemistry b) Physics c) Electronics d) Statics
2. In neutral equilibrium the centre of gravity is
a) Lowered b) Raised c) Lowered and raised d) Neither raised nor lowered
3. An example of a simple machine is
a) Generator b) Reactor c) Diesel engine d) Pulleys
4. In simple machines the mechanical advantage is equal to
a) Load x power b) Power/load c)Load/power d) None of the
above
5. An example of a second order lever is
a) A pair of scissors b) Wheel barrow c) See-saw d) Forceps
6. Which doll, when tilted, returns to its initial position?
a) Barbie Doll b) Agra Doll c) Thanjavur Doll d) None of the above
7. A person is able to lift a stone of 200 kg.wt by applying a force
of 10 kg.wt; the mechanical advantage is
a) 200 b) 10 c) 20 d) 2000
8. An example of a third order lever is
a) See-saw b) Bottle opener c) Wheel barrow d) Forceps
9. …………. can be reduced by using ball bearings
a) Surface tension b) Friction c) Gravity d) Force
10. In a single movable pulley, the mechanical advantage is equal to
a) 1 b) 3 c) 2 d) 4
II. Fill in the blanks
11. ………………… deals with the study of motion of bodies
12. The ratio of the mechanical advantage to the velocity ratio is
called ……………
13. The position of the centre of gravity of a body determines the
………… of the body.
14. Polishing and smoothening of rough surfaces reduce ………..
15. Staircases and Ghat roads are based on the principle
of ……………

III. Match the following

16. A funnel with its base on a table  - reduces the friction
17. Load x load arm                    - l/h
18. Pulley                             - Stable equilibrium
19. Mechanical Advantage               - Changes the direction of force
20. Lubricant                          - Power x power arm
PART - B

IV. Answer the following 2 x 10 = 20
21. Define centre of gravity
22. Why are racing cars built low with their wheels wide apart?
23. Define the efficiency of a simple machine.
24. Give two examples of an inclined plane.
25. Where are ball bearings used?
26. What is a simple machine?
27. Mention two disadvantages of friction you have noticed in your surroundings.
28. State the law of levers
29. Why do we use ball bearings in wheels?
30. Mention two factors that affect friction.
PART - C

V. Answer in detail 5 x 2 = 10
31. Explain the method to determine the centre of gravity of an
irregular lamina with a neat diagram
32. Explain various types of levers with examples
Preparation of The Scoring Key and Marking Scheme

The fifth step is to prepare the "Marking Scheme". The marking scheme helps prevent inconsistency in judgment. In the marking scheme, possible responses to items in the test are structured. The various value points for responses are determined and the marks allotted to each value point are indicated. The marking scheme ensures objectivity in judgment and eliminates differences in score which may be due to inconsistency of the evaluator. The marking scheme, of course, includes the scoring key, which is prepared in respect of objective type questions.

The factors contributing to variations in the standards of assessment can be controlled by supplying a detailed scheme of marking along with the expected answers, so that every examiner may interpret the questions in the same way and attain the same standard of marking without being too lenient or too strict, or varying in his assessment. Subjectivity is thus minimised, and this is believed to give a more reliable picture of the students' performance.
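For the objective part, a scoring key simply maps each item number to its accepted answer, so responses can be scored mechanically. A minimal sketch, using the first three items of the sample paper (the function name and marking policy shown are illustrative):

```python
# Sketch of a scoring key for objective items: item number -> accepted answer.
# The keys correspond to the first three items of the sample paper above;
# the score() helper and its no-negative-marking policy are illustrative.

scoring_key = {1: "b", 2: "d", 3: "d"}

def score(responses, key=scoring_key, mark_per_item=1):
    """One mark per item whose response matches the key; no negative marking."""
    return sum(mark_per_item for item, ans in responses.items()
               if key.get(item) == ans)

print(score({1: "b", 2: "a", 3: "d"}))  # two of the three responses match the key
```

For short answer and essay items the same idea extends to a marking scheme: each value point in the expected answer carries its own mark allotment, so different examiners award marks identically.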

Preparation of Question-wise Analysis

The sixth step is that of question-wise analysis. After preparing an adequate number of questions, it is better to have a final look at the questions, or a question-wise review with reference to the blueprint. If there is any discrepancy, the question can be modified or changed to fit the blueprint. Such an exercise helps the paper setter to ensure that there is no imbalance in the question paper.
QUESTION-WISE ANALYSIS

S.No  Content             Objective      Form of Question  Difficulty Level  Marks

PART-A
1     Centre of gravity   Knowledge      Objective         Easy              1
2     Centre of gravity   Understanding  Objective         Easy              1
3     Simple machines     Knowledge      Objective         Easy              1
4     Simple machines     Understanding  Objective         Average           1
5     Lever and pulley    Application    Objective         Average           1
6     Centre of gravity   Knowledge      Objective         Easy              1
7     Simple machines     Application    Objective         Difficult         1
8     Lever and pulley    Application    Objective         Average           1
9     Friction            Application    Objective         Average           1
10    Lever and pulley    Understanding  Objective         Average           1
11    Centre of gravity   Knowledge      Objective         Easy              1
12    Simple machines     Understanding  Objective         Average           1
13    Centre of gravity   Understanding  Objective         Average           1
14    Friction            Application    Objective         Average           1
15    Simple machines     Application    Objective         Easy              1
16    Centre of gravity   Understanding  Objective         Average           1
17    Lever and pulley    Understanding  Objective         Average           1
18    Lever and pulley    Understanding  Objective         Easy              1
19    Simple machines     Understanding  Objective         Average           1
20    Friction            Application    Objective         Easy              1

PART-B
21    Centre of gravity   Understanding  Short Answer      Average           2
22    Centre of gravity   Application    Short Answer      Difficult         2
23    Simple machines     Understanding  Short Answer      Average           2
24    Centre of gravity   Application    Short Answer      Average           2
25    Lever and pulley    Knowledge      Short Answer      Easy              2
26    Simple machines     Understanding  Short Answer      Average           2
27    Friction            Application    Short Answer      Average           2
28    Lever and pulley    Understanding  Short Answer      Average           2
29    Friction            Application    Short Answer      Average           2
30    Friction            Application    Short Answer      Difficult         2

PART-C
31    Centre of gravity   Skill          Essay             Difficult         5
32    Lever and pulley    Application    Essay             Average           5
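The question-wise review can be partly mechanized: summing the marks by objective, form or difficulty level and comparing the totals with the design's weightage tables exposes any imbalance at a glance. A minimal sketch, using a small illustrative subset of the rows above:

```python
# Sketch: aggregate question-wise analysis rows and compare the totals with
# the weightages fixed in the design. Rows are an illustrative subset of the
# full analysis; totals_by is an illustrative helper name.

analysis = [
    # (content, objective, form, difficulty, marks)
    ("Friction",          "Application", "Objective",    "Average",   1),
    ("Friction",          "Application", "Short Answer", "Average",   2),
    ("Centre of gravity", "Skill",       "Essay",        "Difficult", 5),
]

def totals_by(field_index):
    """Sum marks grouped by one field of each row (e.g. 1 = objective)."""
    totals = {}
    for row in analysis:
        totals[row[field_index]] = totals.get(row[field_index], 0) + row[4]
    return totals

print(totals_by(1))  # marks per objective
print(totals_by(3))  # marks per difficulty level
```

Any mismatch between these recomputed totals and the design's weightage tables points to a question that should be modified or replaced before the paper is finalized.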
DIAGNOSTIC TEST

The process of determining the causes of educational difficulties is known as educational diagnosis. The scope of educational diagnosis is much larger than the use of tests and examinations. It is not proper to limit the scope of diagnosis to locating the causes that interfere with the ordinary academic prognosis of the pupils. An adequate diagnosis may involve the use of intelligence tests, both general and specific, of diagnostic achievement tests, and of laboratory apparatus for measuring sensory acuity, co-ordination and the like. Other forms of appraisal such as rating scales, controlled observation, questionnaires and interviews can also be used for diagnosis in education.

Educational diagnosis is the basis of effective and intelligent teaching. Diagnosis in education means a case study of the condition of learning, to determine its nature and to find out its causation, with the main purpose of correcting and remedying the difficulty involved. The major function of diagnosis is to facilitate the optimum development of every student. It is the determination of the nature of learning difficulties and deficiencies.

English and English (1958) have defined a diagnostic test in these words: "one designed to locate the particular source of a person's difficulties in learning, especially in school subjects, thus providing clues to what further measures of instruction, guidance, or study are needed." A diagnostic test is a test designed to locate specific learning deficiencies in the case of specific individuals at a specific stage of learning, so that 'specific efforts' could be made to overcome those deficiencies. It helps the teacher in identifying the status of the learner at the end of a particular lesson, unit or course of learning, as to what 'specific teaching or learning points' have been properly grasped by the learner. After administering a diagnostic test or a battery of diagnostic tests to students, a teacher takes remedial measures to overcome the deficiencies thus discovered.

Characteristics Of A Diagnostic Test

The following are the characteristics of an educational diagnostic test:

(i) Objectives

Diagnosis is essentially the task of locating more specifically those factors which bear a causal relation to the progress of learning of a pupil or a group of pupils, if educational diagnosis is to be a handmaid to effective teaching. The essence of educational diagnosis is the identification of some of the causes of learning difficulty and some of the potential educational assets so that, by giving proper attention to these factors, more effective learning may result.

(ii) Validity

Validity refers to the evidence of causal factors in the attainment of the objectives. Investigations have shown that the attempt to diagnose children's difficulties in arithmetic by inspection of the test papers was reasonably valid for detecting the kinds of examples that they could or could not solve correctly, but the method was not valid for determining the mental processes involved in the children's method of work. This shows that a method of diagnosis may be valid for discovering certain factors while not valid for determining other factors.

(iii) Objectivity

The third characteristic of a satisfactory diagnosis is its objectivity. The elimination of widely varying personal judgments in diagnosis is essential if diagnostic procedures are to be used with any degree of precision.

(iv) Reliability

Increase in reliability is related to the decrease in the fluctuation in conclusions that can be secured by providing a more adequate and representative sample of pupil reaction upon which the conclusions are based. The improvement of the reliability of any diagnosis involves the utilization of a more satisfactory sample of pupil reaction as a basis for the diagnosis.
(v) Comparability

An interpretation of the results of a diagnosis usually rests upon experience with similar data. Hence, diagnostic procedures that give comparable results are basic to intelligent interpretation. The progress of the pupil over a period of time is basic to the appraisal of the effect of remedial teaching.

(vi) Exactness

Some diagnostic tests give only vague results. Diagnostic tests may be tried with typical classes to discover their exactness. The exactness may be increased by analysing the characteristics of the progress in learning more minutely and utilizing the symptoms thus identified as the basis of the diagnosis. Teachers make a very minute diagnosis in certain limited aspects of pupil activity and no diagnosis at all in other aspects. This incompleteness is dangerous because the attention of teacher and learner is apt to be directed primarily towards those things for which a thorough diagnosis has been made.

(vii) Appropriateness

Certain desirable changes in boys and girls usually develop under a wide variety of educational environments without the necessity of giving very specific treatment. These are the changes that we consider characteristic of maturity. For such cases, an educational diagnosis is unnecessary and inappropriate. Any satisfactory diagnosis must be appropriate to the programme.

(viii) Practicability

Many of the most valid and reliable diagnostic procedures that have been developed are impracticable for use in all schools. New diagnostic procedures need to be developed that meet the other qualifications of a satisfactory diagnosis and that at the same time are capable of extensive use under school conditions.

Functions Of Diagnostic Test

The following are the different functions of a diagnostic test:

(1) To direct curriculum emphasis by:

(i) Focusing attention on as many of the important ultimate objectives of education as possible
(ii) Clarifying educational objectives to teachers and pupils
(iii) Determining elements of strength and weakness in the instructional programme of the school
(iv) Discovering inadequacies in curriculum content and organisation

(2) To provide for the educational guidance of pupils by:

(i) Providing a basis for the preliminary grouping of pupils in each learning area
(ii) Discovering special aptitudes and disabilities
(iii) Determining the difficulty of material a pupil can read with profit
(iv) Determining the level of problem-solving ability in various areas

(3) To stimulate the learning activities of pupils by:

(i) Enabling pupils to think of their achievements in objective terms
(ii) Giving pupils satisfaction for the progress they make, rather than for the relative level of achievement they have reached
(iii) Enabling pupils to compete with their past performance record
(iv) Measuring achievement objectively in terms of accepted educational standards, rather than by the subjective appraisal of the teachers

(4) To direct and motivate administrative and supervisory efforts by:

(i) Enabling teachers to discover the areas in which they need supervisory aid
(ii) Affording the administrative and supervisory staff an over-all measure of the effectiveness of the school organization and supervisory policies

Use Of Diagnostic Tests

The important uses of diagnostic tests are:
(i) Items, units or skills which are understood by a majority of students can be located, and teaching can be adjusted to the situation
(ii) Items, units or skills which are not understood by a majority of pupils can be located, and thereby special emphasis on these aspects can be attempted
(iii) The causes of difficulty in certain items can be found out, for which remedial measures can be taken
(iv) Individual weaknesses can be found out, which would serve as the baseline for individual corrective work and personal guidance
(v) Diagnostic tests may be used for prognosis. They help to predict the possible success in certain types of courses or vocations, and therefore help in providing guidance and counseling
(vi) Diagnostic tests can be made the basis of individualized instruction. Differentiated teaching methods, ability grouping, individual drill, differentiated assignments etc. can be attempted on the basis of the results of diagnostic tests
(vii) Diagnostic tests measure 'real understanding' as opposed to the superficial mastery of subject areas measured by achievement tests
(viii) Diagnostic tests can assist pupils in locating their own weaknesses, so that these can be corrected with maximum ease and economy
(ix) Diagnostic tests can indicate the effectiveness of specific methods of teaching in dealing with specific teaching situations
(x) Diagnosis of pupils' weaknesses and self-discovery can lead to motivation and interest, and can generate cooperation in future teaching-learning situations
CONSTRUCTION OF DIAGNOSTIC TEST
Diagnostic test may be either standardized or
teacher made. Teacher-made tests besides being
more economical are also more effective, as each
teacher can frame it according to the specific needs
of students. Following are the different steps
involved in the construction of diagnostic test.

(i) Planning
(ii) Writing items
(iii) Assembling the test
(iv) Providing directions
(v) Preparing the scoring key and marking scheme
(vi) Reviewing the test

(i) Planning

The unit on which a diagnostic test is based requires a detailed, exhaustive content analysis. It is broken into learning points without omitting any point. The diagnostic procedure is based on the premise that mastery of the total process cannot be stronger than that of the weakest link in the chain of related concepts and skills. Accordingly, each concept, skill or learning point called into play is identified at the time of constructing the test.

As far as a diagnostic test is concerned, it is not very necessary to know the relative importance of the learning points. All the learning points have to be covered in an unbroken sequence. Each learning point should have an adequate number of questions to help identify the area of weakness.

(ii) Writing Items

All the forms of questions (essay, short answer and objective types) can be used for testing different learning points. However, for diagnostic purposes, short answer questions involving one or two steps are used most widely. Whatever the form of the questions, they should in general be easy, suitable for average students of that age or grade. The questions have to be specifically related to the learning points and should be such as to throw light on the weaknesses of the students. The questions should be written in simple language. The scope of the expected answer should be clear to the students. The questions are clubbed around the learning points, even when they are of different forms; the learning points are arranged sequentially from simple to complex, which ensures that students do not have to change their mental sets very frequently.
(iii) Assembling The Test

Preparation of a blueprint may altogether be avoided. No rigid time limit need be specified, though for administrative ease a time limit may be set.

(iv) Providing Directions And Preparing Scoring Key

A set of instructions, clear and precise, is drafted. The test should also be provided with a scoring key and marking scheme.

(v) Reviewing The Test

Before printing the test, it should be carefully edited and reviewed. This ensures that any inadvertent errors are eliminated.

Administration Of Diagnostic Test

The following points need to be kept in view:

(i) The first task of the teacher is to win the confidence of the students and reassure them that the test is to help them in the improvement of their learning, rather than for declaring pass or fail.
(ii) It should be administered in a relaxed environment.
(iii) Students should be seated comfortably.
(iv) Students should be asked not to consult each other while taking the test.
(v) If any student is not able to follow something, he should be allowed to seek clarification from the teacher.
(vi) The teacher may ensure that the students taking the test attempt all questions.
(vii) The time schedule should not be enforced strictly. If any student takes a little more time, he should be allowed to do so.

REMEDIAL TEACHING

Remedial teaching is the process of instruction that follows immediately after diagnostic testing and analysis of the results. The teacher first plans strategies for remedial teaching on the basis of the nature of the difficulties and the reasons behind each. This may be at the group level or the individual level, depending on the scope of diagnosis and the spread of difficulty within the group. Additional learning experiences for solving the difficulties identified are to be provided. Through success in remedial classes, resistant children become cooperative, apprehensive children become self-confident, discouraged children become hopeful, and socially maladjusted children become acceptable to the group.

Need For Remedial Teaching

Teaching involves communication. That is, messages are being sent at one end and received at the other. When the messages are received as they are transmitted, then effective communication is believed to have taken place. Sometimes the message may not get across at all, or may reach the other end in a garbled, distorted and unrecognizable version. In such instances a 'gap' develops between 'teaching' and 'learning'. Frequently the learner has not learnt what the teacher intended him to learn. In this case, a message is received, but it is not the one which was sent out.

Several problems arise in dealing with this situation. First of all, the teacher has to find out if the message received by the student is the one sent out. For that, the teacher has to rely on feedback from the student about what he has received. Usually the student finds it hard to express what he has received, and this gives the teacher the impression that learning has not taken place at all. So the teacher tries to get the message across through repetition. But if the message received is a wrong one, it has to be 'cancelled' before the correct one can be 'written in', in order not to create problems of interference. This is one of the functions of remediation.

Learning problems are of different kinds, and each calls for a different remedial solution. Most of the problems are caused by incomplete or inadequate learning. Wrong learning inevitably results wherever there is teaching, and it interferes with the desired learning. There can also be different kinds or degrees of learning requiring different strategies of remediation. The diagnosis of the learning problem is, hence, very important. Remediation may be regarded as an activity parallel to the teaching function of the teacher, who maintains constant vigil over his students. It is possible to create in the students' minds the same kind of 'alertness' which his presence seems to ensure. It must be made felt that it is important for the learner not to make mistakes and draw forth censure and ridicule. The correction of wrong 'concepts' and insights, and the strengthening of desired 'concepts', can be effected through explanations of various kinds. If an error seems to be due to interference, a comparison of the two language systems at that point may be provided. The wrong learning of certain concepts may also have to be remediated. The learner can be prevented from practicing a wrong concept only if there is constant and effective monitoring, so that the correction is immediate. Unmonitored practice will invariably result in the strengthening of any wrong concept which exists.

The greatest problem in any type of
remediation is to make the new learning abide. Old
errors have the habit of 'coming home to roost'.
However effectively they are remediated, there is a
point beyond which remediation is impossible
because no more learning takes place at that stage.
The errors become fossilized. Development of the
necessary attitudes and determination on the part of
the learner is far more crucial than the development
of 'concepts' or mere 'habits'. It can be inferred that
diagnosis is an important factor in imparting
instruction. Instruction will be incomplete without
diagnosis and remedial teaching. Individuals differ
in abilities. Pupils of different levels of ability are
likely to be present in a class of forty or fifty. Slow
learners, fast learners and average learners all have
to be catered to in different ways. The highly
talented should be provided with additional work
which requires a higher level of intelligence,
whereas the slow learners have to be specially cared
for in order to bring them to the level of the average
student. It is valid to consider insight-formation,
application, consolidation and revision.

Ideally, new learning should not be permitted
until wrong learning has been cancelled and
corrected. This is, however, impractical since
remediation is a slow and laborious process. A thing
once learnt is difficult to cancel, whether correct or
incorrect. Remediation, hence, has to go on
simultaneously with the other teaching functions.
The more teaching a learner has had, the more he
may be in need of remediation. The possible causes
of failure in learning can be due to interference from
concepts previously learnt or over-generalization on
the basis of previous learning. These errors of
learning are caused by the learner taking an active
part in the process of learning. They tend to adopt a
particular learning strategy. Here, the learner tries
to simplify the task of learning or transfers his
previous learning to a new situation. The teacher is
in no way responsible for these errors. He can
probably do nothing to prevent them.
Learners seem to learn through their errors. It
follows that the teacher should not only permit
certain kinds of errors but assist the learner to form
rules or hypotheses which may be used as
touchstones and amended if necessary. Each time an
error is made, the learner receives 'feedback' which
he uses to amend his self-made rules, until he finally
arrives at competence. The appropriate strategy of
remediation can be determined by the types of
errors which have to be dealt with. They need
classifying into groups and types, as all the
individual errors cannot be dealt with practically.
Remedial teaching is basically cognitive: the aim is
to make the learner conscious of the rules of
concept attainment and of his own use of them. A
teacher cannot treat remediation as a mere 'follow-
up' or an optional activity.

BASIC PRINCIPLES OF REMEDIAL INSTRUCTION
Remedial instruction consists of remedial
activities taking place along with the regular
instruction or outside the regular class instruction
and usually conducted by a special teacher. The
type of remedial treatment given to the students
depends on the character of the diagnosis made. If
physical factors are responsible, remedial attention
should be provided. The results of diagnosis have
significance only if they constitute the basis for
corrective instruction and for remedial procedures,
which remove, alleviate or compensate for causal
factors in the child and his / her environment. If a
teacher can identify several children who lack a
thorough understanding of certain concepts, he /
she may re-teach these concepts through group
instruction, demonstrations, supplementary
silent reading by the pupils, etc. General
backwardness in a subject is frequently due to
inadequate mastery of the basic skills of reading,
arithmetic, language, handwriting and spelling, or
an inadequate command of work-study skills.
Hence corrective work in the basic skills, plus
improved motivation in the subject, may be
sufficient to effect improvement. The following are
the general principles of remedial teaching:
(i) Individual consideration of the backward pupil
with recognition of his mental, physical and
educational characteristics
(ii) Thorough diagnosis with a pretest
(iii) Early success for the pupil in his backward
subject or subjects by use of suitable methods and
materials
(iv) Dissipation of emotional barriers through early
success, praise, continuous help, sympathetic
consideration of his difficulties and sustained
interest.
(v) The need for a new orientation towards the
backward subject through new methods involving
play way approaches, activities and appropriately
graded materials
(vi) Frequent planned remedial lessons
(vii) Co-operation with the parents
Preparation Of Remedial Materials
Preparation of remedial materials for a child
is a crucial aspect of corrective instruction.
Remedial materials prepared should meet the
following criteria:
(i) The difficulty of the remedial material should be
geared to the child's readiness and maturity in the
subject or skill to be improved. A set of remedial
materials should provide a wide range of difficulty,
covering several grades
(ii) The remedial measures should be designed to
correct the pupils' individual difficulties. Through
the use of observation, interviews and diagnostic
testing materials, the teacher would have analysed
the work of the backward children in order to locate
their specific retraining needs. An adequate amount
of remedial materials must be provided, designed to
correct the specific difficulties identified
(iii) The remedial materials should be self-
directive. Children may differ widely as to the
instructional materials needed to correct their
difficulties
(iv) The remedial measures must permit individual
rates of progress
(v) A method should be provided for recording
individual progress. When the child has an
opportunity to record his / her successes on a
progress record, he / she is given an additional
incentive to achieve.
Implementation Of The Remedial Instructional
Programme
Although the selection of the remedial
material is highly important, it is only one aspect of
the teacher's approach to learning difficulties and
their underlying causative factors. The following
principles should guide the teacher in planning and
carrying out the programme:
(i) One of the first steps should be the correction of
any physical factors, which affect learning
(ii) The co-operation of the parents should be
obtained in correcting such physical factors,
alleviating emotional tensions, providing better
study conditions, and the like
(iii) If the child has little desire to learn, immediate
steps should be taken to try to improve his / her
attitude through activities which make the child
enjoy learning
(iv) Corrective instruction should begin by
analyzing with the child the specific strengths and
needs, and showing how the instructional materials
are designed to correct his / her deficiencies.
Making the child aware of his / her problems and
providing a method of solving them, based on
individual effort, helps to establish a powerful
motivating force
(v) Instruction should begin at or slightly below the
learner's present level of achievement. Short-term
goals should be established which the learner
considers reasonable and possible to attain. By
means of progress charts, praise and social
recognition, the child's feeling of successful
accomplishment should be reinforced
(vi) Since corrective instruction must usually
proceed on the basis of a tentative diagnosis, the
teacher must be ready to modify the remedial
programme if the approach and materials selected
seem to be ineffective
(vii) Corrective procedures must be modified for
children of relatively inferior or superior mental
ability
(viii) The results of corrective instruction should be
evaluated. Comparable forms of a standardized test
should be administered before and after a period of
concentrated instruction. The effectiveness of the
programme must be evaluated for each child rather
than in terms of class averages
(ix) A cumulative record should be made of the
results of diagnosis, of methods and materials used,
and of the results of corrective instruction. Such a
record is helpful in the determination of next steps,
and is of invaluable help to the next teacher when
the child is promoted
TOOLS AND TECHNIQUES OF
ASSESSMENT
The word tool literally means an implement for
mechanical operations. But in educational
assessment, a tool may be defined as an instrument
to collect evidence of the student's achievement.
The achievement test, anecdotal record, cumulative
record, check list, rating scale, questionnaire, etc.
are the main tools of evaluation in education.
Achievement test
As far as teachers are concerned, the most
commonly used tool is the achievement test. In the
evaluation approach, the term achievement has to
be understood in relation to the objectives of
instruction that are translated into behavioural
changes. The same learning points might have been
learnt by different students at different levels. The
teacher is interested in knowing the level of
achievement of each student in each of the learning
points and evaluates these on the basis of his pre-
determined instructional objectives. A test meant
for the above purpose is known as an achievement
test.
Anecdotal Record
Anecdotal records are reports of informal
teacher observations regarding pupils. A teacher
will get opportunities to observe certain behaviours
of his students on specific occasions that reveal
their attitudes or certain personality traits. This may
be either in the classroom or outside. The incident
should be recorded soon after it is observed, and it
should be recorded accurately and objectively. The
record should have two columns, one for the
description of the incident and the other for its
interpretation.
Cumulative Record
The Progress in the developmental pattern of
each student is recorded cumulatively from period
to period in a comprehensive record designed for
the purpose. Such a record is known as cumulative
record. It will have provision for recording the
details of a variety of dimensions like physical
development, health conditions, level of attainment
in various subjects, and participation in co-
curricular activities.
Check List
The check list is a simple laundry-list type
of device, consisting of a prepared list of items. It
is easy to construct and easy to use. It is a two-
dimensional chart in which the traits measured are
noted in one dimension and the names of the
examinees in the other. The results can be recorded
by putting a tick mark against each item.
Rating Scale.
This tool is a more sophisticated modification
of the check list. In a check list we simply record
the presence of a particular variable; there is no
provision for expressing how much of that variable
is found. In order to overcome this limitation, each
trait can be scored on any convenient number of
points, each point representing a particular degree,
such as good, average and poor on a three-point
scale.
Questionnaire
It is a flexible tool for gathering quantitative
information. It is possible to cover various aspects
of a broad problem, or several problems, through
the questionnaire. It is very easy to administer and
to collect responses using a questionnaire. But
inadequate coverage, misinterpretation of questions
and the varying understanding of individual
respondents are the major limitations of this tool.
TECHNIQUES OF ASSESSMENT
Testing, observation, interview, case study,
sociometry, projective techniques, etc. are the
major techniques of assessment in education. Of
these, testing is quite common. The other
techniques are discussed below.
Observation
Continuously observing an individual, and
thereby measuring the different dimensions of his
behaviour relevant to the teacher, is one of the most
effective techniques used for evaluation. The
relevant features noticed in this behaviour should
be recorded as objectively as possible. There are
different types of observation: controlled and
uncontrolled observation, and participatory and
non-participatory observation.
Interview
Here the teacher tries to observe the child's
behaviour directly and gather information orally.
The two see one another, hear each other's voice
and understand one another's language. There are
several types of interview: survey interviews are
used to gather information, diagnostic interviews
are used to understand the child's problems,
therapeutic interviews are used to plan suitable
therapy, and counselling interviews are used to
solve personal, educational or vocational problems.
Case Study
The most reliable method of studying a single
child in his totality is the case study method. In this
method, the teacher collects data relating to the
individual's socio-economic status, family
conditions, study habits, health and mental
conditions, etc. The case study attempts to
synthesise and interpret the data collected from
several sources using various methods in order to
study the problems of the child.
Sociometry
It is a method developed by J. L. Moreno for
assessing social relationships among members of a
social group. It will help the teacher to identify
stars, isolates and cliques. Stars are those who are
chosen by many; isolates are those who are chosen
by nobody; and a clique is a small group who have
close relationships exclusively among themselves.
The graphical representation of sociometric data is
called a sociogram.
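The identification of stars and isolates can be sketched from pupils' choice data. This is a minimal illustration; the pupil names and the star threshold are invented assumptions: count how often each pupil is chosen (the column sums of the sociomatrix), then label the heavily chosen as stars and the never chosen as isolates.

```python
# Illustrative sketch: pupil names and the star threshold are invented.

def tally_choices(choices):
    """Column sums of the sociomatrix: how many times each pupil is chosen."""
    counts = {pupil: 0 for pupil in choices}
    for chosen in choices.values():
        for pupil in chosen:
            counts[pupil] += 1
    return counts

def stars_and_isolates(choices, star_threshold=3):
    """Stars: chosen by at least star_threshold pupils. Isolates: chosen by nobody."""
    counts = tally_choices(choices)
    stars = [p for p, n in counts.items() if n >= star_threshold]
    isolates = [p for p, n in counts.items() if n == 0]
    return stars, isolates

choices = {
    "Anu":    ["Biju"],
    "Biju":   ["Anu", "Chitra"],
    "Chitra": ["Biju"],
    "Dev":    ["Biju"],
    "Esha":   [],
}
print(stars_and_isolates(choices))  # → (['Biju'], ['Dev', 'Esha'])
```

Detecting cliques needs more machinery (mutual choices forming a closed group), but the same choice dictionary is the starting point.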
Projective Techniques
Abnormal cases, whose behaviours are often
controlled by the unconscious mind, are especially
not susceptible to the direct techniques above. In
such cases, what is possible is to provide the clients
with some stimulus that might make them respond
in such a way as to project their inner selves in an
unconscious manner. These responses may then be
interpreted. The Rorschach Ink Blot Test, Thematic
Apperception Test (TAT), etc. are examples of
projective techniques.
QUALITIES OF A GOOD TEST
To obtain valid and truthful information from
the tools used for assessment, the tools must
possess certain good qualities. Any mistake in the
construction of a tool will bring false results. A
good test must possess the following qualities:
Objectivity
By the objectivity of a question we mean the
definiteness of the answer expected. It can be
maintained by pinpointing the specific behaviour to
be evaluated, determining the expected answer by
which this behaviour can be tested, deciding upon
the scoring procedure, and then re-examining the
questions in terms of the above aspects.
Objective Basedness
Before setting items for a test intended to
measure the attainment of learners in a particular
unit of study, the evaluator should think of the
objectives and the resulting specifications with
which the unit was taught, and prepare a sufficient
number of items suitable to measure the degree of
attainment of each of these specifications. This is
what is meant by saying that a test should be
objective-based.
Comprehensiveness
The test should cover the whole syllabus.
Due importance should be given to all the relevant
learning materials. It should also cover all the
anticipated objectives. If these two exist the test
may be said to possess comprehensiveness.
Validity.
A test is said to be valid if it measures what it
is intended to measure. The different types of
validity are given below.
1. Content Validity: If the test content agrees
with the course content with regard to the
dimensions of instructional objectives and
subject matter, the test may be said to possess
content validity.
2. Predictive Validity: In order to make the
prediction authentic, the validity of the
predictive test has to be established. This is
often done by correlating the test results with
some external criterion that has already been
proved to predict efficiency authentically.
This correlation is statistically determined,
and if the test scores are found to be highly
correlated with the external criterion, the test
will be adjudged valid
3. Concurrent Validity: Here the test results are
compared with some other measure of the
same phenomenon (e.g. a rating scale)
obtained simultaneously. A high correlation
between the two sets of scores establishes the
validity of the test, and hence it is known as
concurrent validity.
4. Construct Validity: The trait measured is
associated with something constructed by the
tester to represent it, that is, a 'construct'.
Such a test is critically examined by asking
how well the test scores correspond to the
construct, and hence such validity is known
as construct validity
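The correlation step behind predictive and concurrent validity can be sketched directly. This is a minimal illustration with invented scores: compute the Pearson product-moment correlation between the new test's scores and an external criterion measure; a high coefficient supports the validity claim.

```python
import math

# Illustrative sketch: all scores below are invented figures.

def pearson(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

test_scores = [42, 55, 61, 70, 78]  # scores on the new test
criterion   = [40, 52, 65, 68, 80]  # established external criterion measure
print(round(pearson(test_scores, criterion), 2))  # → 0.98
```

For concurrent validity the criterion (e.g. a rating scale) is obtained at the same time; for predictive validity it is a later measure of the efficiency the test is meant to forecast. The computation is the same in both cases.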
Reliability:
Reliability of a test refers to the degree of
consistency with which it measures what it is
intended to measure. Reliability is a prerequisite for
validity, but reliability alone cannot ensure validity.
Considering the relation between reliability and
validity in the opposite direction, a test with high
validity has to be reliable also. There are different
methods for determining the reliability of a test. In
the test-retest method, a test is administered twice
to the same group with a short interval in between.
The scores are tabulated and the correlation
calculated; the higher the correlation, the greater
the reliability. In the parallel or equivalent forms
method, reliability is determined using two
equivalent forms of the same test content; here also
the comparison is made by determining the
correlation between the two sets of scores.
In the split-half method, the scores on the odd
and even items are taken and the correlation
between the two sets of scores is determined, on the
assumption that the two half-scores are comparable.
This gives the reliability of the half test; using the
Spearman-Brown prophecy formula we can then
calculate the reliability of the full test.
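The split-half procedure, including the Spearman-Brown step-up r_full = 2·r_half / (1 + r_half), can be sketched as follows. The 1/0 item marks below are invented for illustration; pearson is a standard product-moment correlation.

```python
import math

# Illustrative sketch: the 1/0 item marks below are invented.

def pearson(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """item_scores: one row of 1/0 item marks per student."""
    odd  = [sum(row[0::2]) for row in item_scores]  # 1st, 3rd, 5th items
    even = [sum(row[1::2]) for row in item_scores]  # 2nd, 4th, 6th items
    r_half = pearson(odd, even)
    # Spearman-Brown prophecy formula: step the half-test reliability up to full length.
    return 2 * r_half / (1 + r_half)

scores = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 1],
    [1, 0, 1, 1, 0, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
print(round(split_half_reliability(scores), 2))  # → 0.77
```

Here the half-test correlation comes out lower than the full-test estimate, which is exactly what the prophecy formula corrects for: a longer test is more reliable than either of its halves.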
Discriminating Power
A test should be able to discriminate among
the respondents, namely gifted students, average
students and low achievers, on the basis of the
phenomenon measured.
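One common index of discriminating power can be sketched numerically. This is a minimal illustration with invented pass counts: take the top and bottom scorers on the whole test and compute, for each item, D = (upper-group passes − lower-group passes) / group size. Items with higher D separate high and low achievers better.

```python
# Illustrative sketch: the pass counts below are invented figures.

def discrimination_index(upper_passes, lower_passes, group_size):
    """D = (passes in the upper group - passes in the lower group) / group size."""
    return (upper_passes - lower_passes) / group_size

# Of 10 high scorers, 9 answered the item correctly; of 10 low scorers, 3 did.
d = discrimination_index(9, 3, 10)
print(d)  # → 0.6: the item separates high and low achievers well
```

An item everyone passes (or everyone fails) gives D = 0 and contributes nothing to discrimination, however well it covers the syllabus.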
Practicability:
To ensure the practicability of a test, the test
setter should plan for economy of the time, effort
and finance required.
Comparability:
A test possesses comparability when the
scores obtained by administering it can be
interpreted in terms of a common base that has a
natural or accepted meaning. There are two
methods for establishing
the comparability of standard tests. They are
making available equivalent forms of a test and
making available adequate norms.
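The "adequate norms" route to comparability can be sketched with a percentile-rank conversion. The norm-group scores below are invented for illustration: a raw score is interpreted against the common base of a norm group, here as the percentage of the norm group scoring below it.

```python
# Illustrative sketch: the norm-group scores are invented figures.

def percentile_rank(raw_score, norm_scores):
    """Percentage of the norm group scoring below the given raw score."""
    below = sum(1 for s in norm_scores if s < raw_score)
    return 100 * below / len(norm_scores)

norm_group = [35, 40, 44, 50, 55, 58, 62, 67, 71, 80]
print(percentile_rank(62, norm_group))  # → 60.0
```

A raw score of 62 means little by itself; "60th percentile of the norm group" is the common base that makes scores from different administrations comparable.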
Utility:
Utility of a test may be considered as the final
master criterion. A test has utility if it provides the
test conditions that would facilitate realization of
the purpose for which it is meant. For achieving
utility, it is essential that the test is constructed in
the light of a well-thought-out purpose and that its
interpretations are used in obtaining desirable
results.
DIFFERENCE BETWEEN ACHIEVEMENT
TEST AND DIAGNOSTIC TEST

Achievement Test vs Diagnostic Test
(i) Achievement tests are designed to measure the
achievement of a learner; diagnostic tests are
designed to measure the non-achievement of a
learner.
(ii) Achievement tests are used for the placement of
students; diagnostic tests are used for remedial
teaching.
(iii) In an achievement test all topics are given
importance; in a diagnostic test only the content
area that gives problems is given importance.
(iv) Intensive evaluation of a particular area is not
attempted in an achievement test; it is attempted in
a diagnostic test.
(v) The time factor is important in answering an
achievement test; it is not important in answering a
diagnostic test.

STANDARDIZED TESTS
Standardized assessments are defined as
assessments constructed by experts and published
for use in many different schools and classrooms.
These assessments are used in various contexts and
serve multiple purposes. Americans first began
seeing standardized tests in the classroom in the
early 20th century. Currently, standardized tests are
widely used in grade school and are even required
in most states due to the No Child Left Behind Act
of 2001. Standardized tests may consist of
different types of items, including multiple-choice,
true-false, matching, essay, and spoken items.
These assessments may also take the form of
traditional paper-pencil tests or be administered via
computer. In some instances, adaptive
testing occurs when a computer is used. Adaptive
testing is when the students' performance on items
at the beginning of the test determines the next items
to be presented. Standardized testing allows
educators to determine trends in student progress.
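The adaptive idea described above can be sketched in a deliberately simplified form. This is not a real computerized-adaptive-testing algorithm; the 1-5 difficulty band and the one-step rule are invented for illustration: a correct answer moves the next item one difficulty step up, a wrong answer one step down.

```python
# Deliberately simplified sketch of adaptive item selection; the 1-5
# difficulty band and one-step rule are invented assumptions.

def next_difficulty(current, was_correct, lo=1, hi=5):
    """Step difficulty up after a correct response, down after a miss, clamped to [lo, hi]."""
    proposed = current + 1 if was_correct else current - 1
    return max(lo, min(hi, proposed))

level = 3  # start in the middle of the band
for was_correct in [True, True, False, True]:
    level = next_difficulty(level, was_correct)
print(level)  # → 5
```

Real adaptive tests select items to maximise information about the student's ability estimate, but the driving principle is the same: performance on earlier items determines which items come next.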

Advantages of Standardized Tests


There are many advantages of standardized testing:

1. Standardized tests are practical: they are easy
to administer, and they consume less time to
administer than other assessments.
2. Standardized testing results are quantifiable.
By quantifying students' achievements,
educators can identify proficiency levels and
more easily identify students in need of
remediation or advancement.
3. Standardized tests are scored via computer,
which frees up time for the educator.
4. Since scoring is completed by computer, it is
objective and not subject to educator bias or
emotions.
5. Standardized testing allows educators to
compare scores to students within the same
school and across schools. This information
provides data on not only the individual
student's abilities but also on the school as a
whole. Areas of school-wide weaknesses and
strengths are more easily identifiable.
6. Standardized testing provides a longitudinal
report of student progress. Over time,
educators are able to see a trend of growth or
decline and rapidly respond to the student's
educational needs.

Disadvantages of Standardized Tests

1. Standardized test items are not parallel with
typical classroom skills and behaviors. Due
to the fact that questions have to be
generalizable to the entire population, most
items assess general knowledge and
understanding.
2. Since general knowledge is assessed,
educators cannot use standardized test results
to inform their individual instruction
methods. If recommendations are made,
educators may begin to 'teach to the test' as
opposed to teaching what is currently in the
curriculum or based on the needs of their
individual classroom.
3. Standardized test items do not assess higher-
level thinking skills.
4. Standardized test scores are greatly
influenced by non-academic factors, such as
fatigue and attention.

ENSURING FAIRNESS IN ASSESSMENT


ASPECTS OF ENSURING FAIRNESS IN
CLASSROOM ASSESSMENT

There is no single or direct path to guarantee
fairness in classroom assessment. Teacher educators
have to give special consideration to different
aspects like classroom environment, accuracy,
consistency, the elimination of bias, and the
transparency of assessment in order to bring
fairness into classroom assessment. Fair and just
assessment tasks provide all students with an equal
opportunity to demonstrate the extent of their
learning. Achieving fairness throughout your
assessment of students involves considerations
about workload, and the timing and complexity of
the task.

The teaching and learning activities must
provide students with sufficient exposure and
practice in the work before the assessment. An
assessment of laboratory skills, without providing
the appropriate teaching and learning activities to
practice these skills, even if a written guide had
been made available, would not be considered a
very fair assessment task. The timing of feedback is
also important. Feedback must be provided early
enough for students to be able to do something with
it. Providing feedback on a draft essay 24 hours
prior to the final submission date is not a fair
approach.
Consensus moderation processes need to be
used to ensure every student will have their learning
assessed equally and appropriately regardless of
who is marking. In addition, a fair assessment must
take into consideration issues surrounding access,
equity and diversity. Assessment practices need to
be as free as possible from gender, racial, cultural
or other potential bias and provisions need to be
made for students with disabilities and/or special
needs. For bringing fair assessment practices
teachers must give importance to the following
aspects also;

Appropriate For the Desired Learning Outcome

The type of assessment used must be suitable
for "what" it is that is being assessed, i.e. fit for
purpose. It would not be appropriate, for example, to
assess a dental student's ability to fill a tooth cavity,
via a multiple choice or written examination. Such
traditional written examinations and assessments
may effectively measure a student's knowledge
about a skill, and even their knowledge and
understanding of how that skill can be applied. It
will not capture the student's ability to actually "do"
the skill. For this to be effectively measured,
assessment needs to be designed in such a way that
the student can demonstrate their ability to do or
perform the skill.

Valid For Assessing Learning


There are different types of validity,
including content validity. Content validity refers to
the extent to which a test measures what it claims to measure.
There must be a genuine relationship between the
task and the learning required to complete the task.
A valid assessment task will be a measure of the
student's learning and not of something else. The ability to
complete an assessment task successfully should,
thus, be dependent on the student learning what is
required in the course or unit of study. If the student
is able to do the task without that learning, the
assessment is not a valid measure of student
learning. Hence, a valid assessment does not
require knowledge or skills that are irrelevant to
what is actually being assessed. In order to provide
sound evidence of the extent of a student's learning;
the assessment must be representative of the area of
learning being assessed.

Reliable Assessment Techniques and Tools


Reliability of assessment refers to the
accuracy and precision of measurement, and
therefore also to its reproducibility. When an
assessment provides an accurate and precise
measurement of student learning, it will yield the
same, consistent result regardless of when the
assessment occurs or who does the marking.
Assessments need to be reliable if the decisions
based on the results are to be trusted and defensible.
Transparency of Assessment Practices
Transparency refers to how clear the
assessment expectations are for students.
Transparency can be greatly enhanced by:
• A clear task description: so students know what
it is they are expected to do
• A clear set of criteria and standards: so students
know what it is against which they will be
assessed
• The use of model exemplars across a range: so
students know the level of performance expected
and what that "looks like"
The language used for assessment tasks and
criteria needs to be as clear and unambiguous as
possible. Even then, different students may make
different interpretations. The use of model
exemplars is a useful way to help students recognise
what is expected of them, i.e. what a good-quality
piece of work should look like.

Authentic Assessment Tasks / Activities


Authentic assessment tasks are those that are
relevant and reflect what occurs in the work-place
beyond the university environment. Such 'real' tasks
can motivate and stimulate students more than the
same material in abstract form. Students appreciate
assessment with relevance to real-life, industry
and/or vocational interests. Authentic assessment
tasks will stimulate the intrinsic motivation of
students and prepare them for work outside the
university.
This doesn't mean that a task must be carried out in
an external work setting to be authentic. Using
models or simulations that 'mimic' key elements of
authentic contexts can help to create an authentic
experience.

Manageable Workload
Overloading students can inhibit learning.
One common recommendation is that students
should spend about 1 hour of learning/week for each
credit point of a course. This includes the class time
spent in lectures, tutorials and labs/workshops;
preparation and reading time for in class activities;
any additional time needed to seek assistance or
resources and the assessment. While more time on
task is a major contributor to learning, it can shift to
overload and must be carefully considered. The
complexity and/or introduction of an unfamiliar
form of assessment must also be considered.
Students respond and perform better with complex
and unfamiliar forms of assessment if these are
introduced gently and progressively, moving to
tasks that are increasingly complex and demanding.

Engaging the Learner with the Study Material


Time spent on a task is critically important to
effective learning, with more effective learning
expected with greater time on task. Students will
spend more time on an assessment task if it is
something with which they can actively engage.
Being actively engaged involves a degree of
emotional and cognitive engagement. This needs
to be balanced with the need to keep the student
assessment load requirement manageable for both
the students and the academics involved. Unless
students experience some kind of emotional
investment in what they are doing, they are
unlikely to commit the time to many of the useful
assessment tasks that are available.
PRINCIPLES ENSURING FAIRNESS IN
CLASSROOM ASSESSMENTS
Educators, after conducting much research in
assessment, have developed certain principles for
bringing fairness into assessment. Important among
them are:
I. Developing and Choosing Methods for
Assessment
II. Collecting Assessment Information
III. Judging and Scoring Student Performance
IV. Summarizing and Interpreting Results
V. Reporting Assessment Findings
I. Developing and Choosing Methods for
Assessment
Assessment methods should be appropriate
for and compatible with the purpose and context of
the assessment. Assessment method is used here to
refer to the various strategies and techniques that
teachers might use to acquire assessment
information. These strategies and techniques
include, but are not limited to, observations, text-
and curriculum-embedded questions and tests,
paper-and-pencil tests, oral questioning,
benchmarks or reference sets, interviews, peer-and
self-assessments, standardized criterion-referenced
and norm-referenced tests, performance
assessments, writing samples, exhibitions, portfolio
assessment, and project and product assessments.
Several labels have been used to describe subsets of
these alternatives, with the most common being
“direct assessment,” “authentic assessment,”
“performance assessment,” and “alternative
assessment.” However, for the purpose of the
Principles, the term assessment method has been
used to encompass all the strategies and techniques
that might be used to collect information from
students about their progress toward attaining the
knowledge, skills, attitudes, or behaviors to be
learned.
1. Assessment methods should be developed or
chosen so that inferences drawn about the
knowledge, skills, attitudes, and behaviors
possessed by each student are valid and not open to
misinterpretation. Validity refers to the degree to
which inferences drawn from assessment results
are meaningful. Therefore, development or
selection of assessment methods for collecting
information should be clearly linked to the purposes
for which inferences and decisions are to be made.
2. Assessment methods should be clearly related to
the goals and objectives of instruction, and be
compatible with the instructional approaches used.
To enhance validity, assessment methods should be
in harmony with the instructional objectives to
which they are referenced. Planning an assessment
design at the same time as planning instruction will
help integrate the two in meaningful ways. Such
joint planning provides an overall perspective on
the knowledge, skills, attitudes, and behaviors to be
learned and assessed, and the contexts in which they
will be learned and assessed.
3. When developing or choosing assessment
methods, consideration should be given to the
consequences of the decisions to be made in light of
the obtained information. The outcomes of some
assessments may be more critical than others.
4. More than one assessment method should be used
to ensure comprehensive and consistent indications
of student performance. To obtain a more complete
picture or profile of a student’s knowledge, skills,
attitudes, or behaviors, and to discern consistent
patterns and trends, more than one assessment
method should be used.
5. Assessment methods should be suited to the
backgrounds and prior experiences of students.
Assessment methods should be free from bias
brought about by student factors extraneous to the
purpose of the assessment. Possible factors to
consider include culture, developmental stage,
ethnicity, gender, socio-economic background,
language, special interests, and special needs.
6. Content and language that would generally be
viewed as sensitive, sexist or offensive should be
avoided. The vocabulary and problem situation in
each test item or performance task should not favour
or discriminate against any group of students. Steps
should be taken to ensure that stereotyping is not
condoned. Language that might be offensive to
particular groups of students should be avoided. A
judicious use of different roles for males and
females and for minorities and the careful use of
language should contribute to more effective and,
therefore, fairer assessments.
7. Assessment instruments translated into a second
language or transferred from another context or
location should be accompanied by evidence that
inferences based on these instruments are valid for
the intended purpose. Translation of an assessment
instrument from one language to another is a
complex and demanding task. Similarly, the
adoption or modification of an instrument
developed in another country is often not simple and
straightforward. Care must be taken to ensure that
the results from translated and imported instruments
are not misinterpreted or misleading.
II. Collecting Assessment Information
Students should be provided with a sufficient
opportunity to demonstrate the knowledge, skills,
attitudes, or behaviors being assessed. Assessment
information can be collected in a variety of ways
(observations, oral questioning, interviews, oral and
written reports, paper-and-pencil tests). The
guidelines which follow are not all equally
applicable to each of these procedures.
1. Students should be told why assessment
information is being collected and how this
information will be used. Students who know the
purpose of an assessment are in a position to
respond in a manner that will provide information
relevant to that purpose.
2. An assessment procedure should be used under
conditions suitable to its purpose and form.
Optimum conditions should be provided for
obtaining data from and information about students
so as to maximize the validity and consistency of
the data and information collected. Common
conditions include such things as proper light and
ventilation, comfortable room temperature, and
freedom from distraction. Adequate work-space,
sufficient materials, and adequate time limits
appropriate to the purpose and form of the
assessment are also necessary.
3. In assessments involving observations,
checklists, or rating scales, the number of
characteristics to be assessed at one time should be
small enough and concretely described so that the
observations can be made accurately. Student
behaviors often change so rapidly that it may not be
possible simultaneously to observe and record all
the behavior components. In such instances, the
number of components to be observed should be
reduced and the components should be described as
concretely as possible. One way to manage an
observation is to divide the behavior into a series of
components and assess each component in
sequence.
4. The directions provided to students should be
clear, complete, and appropriate for the ability, age,
and grade level of the students. Lack of
understanding of the assessment task may prevent
maximum performance or display of the behavior
called for.
5. In assessment involving selection items (e.g.,
true-false, multiple-choice), the directions should
encourage students to answer all items without
threat of penalty.
A correction formula is sometimes used to
discourage “guessing” on selection items. The
formula is intended to encourage students to omit
items for which they do not know the answer rather
than to “guess” the answer.
6. When collecting assessment information,
interactions with students should be appropriate and
consistent. Care must be taken when collecting
assessment information to treat all students fairly.
7. Unanticipated circumstances that interfere with
the collection of assessment information should be
noted and recorded. Events such as a fire drill, an
unscheduled assembly, or insufficient materials
may interfere in the way in which assessment
information is collected. Such events should be
recorded and subsequently considered when
interpreting the information obtained.
8. A written policy should guide decisions about the
use of alternate procedures for collecting
assessment information from students with special
needs and students whose proficiency in the
language of instruction is inadequate for them to
respond in the anticipated manner. It may be
necessary to develop alternative assessment
procedures to ensure a consistent and valid
assessment of those students who, because of
special needs or inadequate language, are not able
to respond to an assessment method.
III. Judging and Scoring Student Performance
Procedures for judging or scoring student
performance should be appropriate for the
assessment method used and be consistently applied
and monitored. Judging and scoring refers to the
process of determining the quality of a student’s
performance, the appropriateness of an attitude or
behavior, or the correctness of an answer. Results
derived from judging and scoring may be expressed
as written or oral comments, ratings,
categorizations, letters, numbers, or as some
combination of these forms.
1. Before an assessment method is used, a procedure
for scoring should be prepared to guide the process
of judging the quality of a performance or product,
the appropriateness of an attitude or behavior, or the
correctness of an answer.
To increase consistency and validity, properly
developed scoring procedures should be used.
Different assessment methods require different
forms of scoring. Scoring selection items requires
the identification of the correct or, in some
instances, best answer.
2. Before an assessment method is used, students
should be told how their responses or the
information they provide will be judged or scored.
Informing students prior to the use of an assessment
method about the scoring procedures to be followed
should help ensure that similar expectations are held
by both students and their teachers.
3. Care should be taken to ensure that results are not
influenced by factors that are not relevant to the
purpose of the assessment. Various types of errors
occur in scoring, particularly when a degree of
subjectivity is involved.
4. Comments formed as part of scoring should be
based on the responses made by the students and
presented in a way that students can understand and
use them.
Comments, in oral and written form, are provided to
encourage learning and to point out correctable
errors or inconsistencies in performance. In
addition, comments can be used to clarify a result.
Such feedback should be based on evidence
pertinent to the learning outcomes being assessed.
5. Any changes made during scoring should be
based upon a demonstrated problem with the initial
scoring procedure. The modified procedure should
then be used to rescore all previously scored
responses.
6. An appeal process should be described to
students at the beginning of each school year or
course of instruction that they may use to appeal a
result. Situations may arise where a student believes
a result incorrectly reflects his/her level of
performance. A procedure by which students can
appeal such a result should be developed and made
known to them.
IV. Summarizing and Interpreting Results
Procedures for summarizing and interpreting
assessment results should yield accurate and
informative representations of a student’s
performance in relation to the goals and objectives
of instruction for the reporting period. Summarizing
and interpreting results refers to the procedures used
to combine assessment results in the form of
summary comments and grades which indicate both
a student’s level of performance and the valuing of
that performance.
1. Procedures for summarizing and interpreting
results for a reporting period should be guided by a
written policy. Summary comments and grades,
when interpreted, serve a variety of functions. They
inform students of their progress. Parents, teachers,
counselors, and administrators use them to guide
learning, determine promotion, identify students for
special attention, and to help students develop
future plans.
2. The way in which summary comments and
grades are formulated and interpreted should be
explained to students and their parents/guardians.
Students and their parents/guardians have the right
to know how student performance is summarized
and interpreted. With this information, they can
make constructive use of the findings and fully
review the assessment procedures followed. It
should be noted that some aspects of summarizing
and interpreting are based upon a teacher’s best
judgment of what is good or appropriate. This
judgment is derived from training and experience
and may be difficult to describe specifically in
advance.
3. The individual results used and the process
followed in deriving summary comments and
grades should be described in sufficient detail so
that the meaning of a summary comment or grade is
clear. Summary comments and grades are best
interpreted in the light of an adequate description of
the results upon which they are based, the relative
emphasis given to each result, and the process
followed to combine the results.
4. Combining disparate kinds of results into a single
summary should be done cautiously. To the extent
possible, achievement, effort, participation, and
other behaviors should be graded separately.
5. Summary comments and grades should be based
on more than one assessment result so as to ensure
adequate sampling of broadly defined learning
outcomes. More than one or two assessments are
needed to adequately assess performance in multi-
facet areas such as Reading.
6. The results used to produce summary comments
and grades should be combined in a way that
ensures that each result receives its intended
emphasis or weight. When the results of a series of
assessments are combined into a summary
comment, care should be taken to ensure that the
actual emphasis placed on the various results
matches the intended emphasis for each student.
When numerical results are combined, attention
should be paid to differences in the variability, or
spread, of the different sets of results and
appropriate account taken where such differences
exist.
7. The basis for interpretation should be carefully
described and justified. Interpretation of the
information gathered for a reporting period for a
student is a complex and, at times, controversial
issue. Such information, whether written or
numerical, will be of little interest or use if it is not
interpreted against some pertinent and defensible
idea of what is good and what is poor.
8. Interpretations of assessment results should take
account of the backgrounds and learning
experiences of the students. Assessment results
should be interpreted in relation to a student’s
personal and social context.
9. Assessment results that will be combined into
summary comments and grades should be stored in
a way that ensures their accuracy at the time they
are summarized and interpreted.
10. Interpretations of assessment results should be
made with due regard for limitations in the
assessment methods used, problems encountered in
collecting the information and judging or scoring it,
and limitations in the basis used for interpretation.
V. Reporting Assessment Findings
Assessment reports should be clear, accurate,
and of practical value to the audiences for whom
they are intended.
1. The reporting system for a school or jurisdiction
should be guided by a written policy. Elements to
consider include such aspects as audiences,
medium, format, content, level of detail, frequency,
timing, and confidentiality. The policy to guide the
preparation of school reports should be developed
by teachers, school administrators, and other
jurisdictional personnel in consultation with
representatives of the audiences entitled to receive
a report.
2. Written and oral reports should contain a
description of the goals and objectives of instruction
to which the assessments are referenced. The goals
and objectives that guided instruction should serve
as the basis for reporting. A report will be limited
by a number of practical considerations, but the
central focus should be on the instructional
objectives and the types of performance that
represent achievement of these objectives.
3. Reports should be complete in their descriptions
of strengths and weaknesses of students, so that
strengths can be built upon and problem areas
addressed. Reports can be incorrectly slanted
towards faults in a student or toward giving
unqualified praise. Both biases reduce the validity
and utility of assessment. Accuracy in reporting
strengths and weaknesses helps to reduce
systematic error and is essential for stimulating and
reinforcing improved performance. Reports should
contain the information that will assist and guide
students, their parents/guardians, and teachers to
take relevant follow-up actions.
4. The reporting system should provide for
conferences between teachers and
parents/guardians. Whenever it is appropriate,
students should participate in these conferences.
5. An appeal process should be described to
students and their parents/guardians at the
beginning of each school year or course of
instruction that they may use to appeal a report.
6. Access to assessment information should be
governed by a written policy that is consistent with
applicable laws and with basic principles of fairness
and human rights.
7. Transfer of assessment information from one
school to another should be guided by a written
policy with stringent provisions to ensure the
maintenance of confidentiality.
ASSESSMENT FOR ENHANCING
CONFIDENCE IN LEARNING
Confidence is not something that can be
learned like a set of rules; confidence is a state of
mind. Positive thinking, practice, training,
knowledge and talking to other people are all useful
ways to help improve or boost your confidence
levels. Confidence comes from feelings of well-
being, acceptance of your body and mind (self-
esteem) and belief in your own ability, skills and
experience. Low-confidence can be a result of
many factors including: fear of the unknown,
criticism, being unhappy with personal appearance
(self-esteem), feeling unprepared, poor time-
management, lack of knowledge and previous
failures. Confidence is not a static measure, our
confidence to perform roles and tasks can increase
and decrease; some days we may feel more
confident than others.
The ultimate aim of formative, summative, and all
other types of assessment is to make learners
confident in facing all the challenging situations in
life. Only by enhancing their level of confidence
can students make positive behavioural changes.
Periodic assessment in classrooms, with the results
given as feedback, enhances learners' confidence to
face the summative assessments. Timely assessment
of students in all aspects from childhood, together
with timely guidance, helps learners build
confidence. The assessment of the knowledge level
of learners and the giving of immediate feedback to
them is the key feature of confidence-based
learning.
Confidence-Based Learning begins the learning
process by asking the learner a set of questions and
then filling knowledge gaps with critical content,
whereas most traditional online learning approaches
deliver content first and then test to validate each
learner's understanding of the content.
Relationship Of Assessment With Confidence,
Self-Esteem, Motivation
There exists a mutual relationship between
assessment and confidence, self-esteem, and
motivation, and psychological theories and research
offer many conclusions on this aspect of
assessment. The output of an assessment provides
feedback on a learner's performance in a particular
test, and this feedback is directly related to the
learner's level of confidence, self-esteem, and
motivation. In confidence-based scoring of an
exam, the number of points the student gets for a
correct answer to a question decreases as the stated
confidence level decreases; conversely, obtaining
the maximum number of correct responses raises
the learner's confidence level. In every classroom
these three aspects, confidence, self-esteem and
motivation, play an important role.
Assessment results have both positive and negative
impacts on these three aspects. If the result is up to
the student's expectations, it will increase the level
of confidence, self-esteem, and motivation; a result
below his or her expectation will reduce them.
Research shows that pupils with higher levels of
confidence and self-esteem have achieved top
positions in the world and also possess higher levels
of intrinsic motivation, whereas pupils with low
self-confidence and self-esteem have shown certain
mental and personality disorders. Assessment
therefore plays a significant role in enhancing the
level of confidence, self-esteem, and motivation
through learning.
IPSATIVE ASSESSMENT
Ipsative assessment is an assessment based
on a learner’s previous work rather than based on
performance against external criteria and standards.
Learners work towards a personal best rather than
always competing against other students. When
threshold standards must be met for an award,
ipsative feedback could be combined with
traditional grades.
Benefits of Ipsative Assessment To Learners
 Standards- and criteria-referenced assessment
can be demotivating for learners who do not
achieve high grades, while ipsative
assessment emphasises the progress learners
are making and is more motivating.
 Ipsative feedback helps learners to develop
by highlighting where there is more work to
do. Ipsative feedback can also help high
performing students to achieve even more.
 Ipsative assessment helps learners self-assess
and become more self-reliant.
Benefits of Ipsative Assessment To Teachers
 Providing ipsative feedback helps focus on
what the learner needs to do next rather than
dwelling on the inadequacies of current
performance.
 Some students are more likely to act on
ipsative feedback than highly critical
feedback.
 A focus on learner progress will distinguish
between poorly performing students who are
progressing, albeit slowly, and those who are
not progressing and who are therefore
unsuitable for the course.
Disadvantages Of Ipsative Assessment
The biggest problem is that assessors need to
have access to records of a learner’s past
assessments to make comparisons and these are not
always available, although electronic records can
help. It also means that assessments of different
modules in a modular scheme may need to be linked
and this may be difficult if the modules have very
different learning outcomes. Finally, ipsative
assessment requires a different way of thinking
about assessment and this may take time for
teachers and students to get used to.
TEST YOUR UNDERSTANDING
1. Explain the concept of Differentiated assessment.
2. What is culturally responsive assessment?
3. What are the advantages and disadvantages of an Achievement test?
4. Differentiate between an Achievement test and a Diagnostic test.
5. List out the purposes of an achievement test.
6. Explain the different steps involved in the construction of an achievement test.
7. Explain the term Blueprint.
8. List out the different steps involved in a diagnostic test.
9. What are the different tools and techniques of assessment?
10. Explain the different qualities of a good test.
11. What is the use of Remedial teaching?
12. How does assessment enhance the confidence of learners?
13. Explain the term Ipsative assessment.
UNIT VI
REPORTING QUANTITATIVE ASSESSMENT DATA

MEASURES OF CENTRAL TENDENCY
In statistics, a central tendency (or, more
commonly, a measure of central tendency) is a
central or typical value for a probability
distribution. It may also be called a center
or location of the distribution. Colloquially,
measures of central tendency are often
called averages. The term central tendency dates
from the late 1920s. The most common measures of
central tendency are the arithmetic mean,
Geometric mean ,Harmonic mean, the median and
the mode. A central tendency can be calculated for
either a finite set of values or for a theoretical
distribution, such as the normal distribution.
Occasionally authors use central tendency to denote
"the tendency of quantitative data to cluster around
some central value." A measure of central tendency
is a single value that attempts to describe a set of
data by identifying the central position within that
set of data. As such, measures of central tendency
are sometimes called measures of central location.
They are also classed as summary statistics.
Arithmetic mean, Geometric mean and Harmonic
means are usually called Mathematical averages
while Mode and Median are called Positional
averages.
ARITHMETIC MEAN
The mean (or average) is the most popular
and well known measure of central tendency. It can
be used with both discrete and continuous data,
although its use is most often with continuous data.
To find the arithmetic mean, add the values of all
the terms and then divide the sum by the number of
terms; the quotient is the arithmetic mean. There
are three methods to find the mean:
(i) Direct method: For an individual series of
observations x1, x2, …, xn the arithmetic mean is
obtained by the following formula:

M = (x1 + x2 + … + xn)/n = (∑x)/n

For a frequency distribution, M = (∑fx)/N, where N = ∑f.
(ii) Short-cut method: This method is used to
make the calculations simpler. Let A be any assumed
mean (or any assumed number) and d = x − A the
deviation from the assumed mean; then we have

M = A + (∑fd)/N

(iii) Step-deviation method: If in a frequency table
the class intervals have equal width, say i, it is
convenient to use the following formula:

M = A + i(∑fu)/N

where u = (x − A)/i, i is the length of the class
interval, and A is the assumed mean.

Example 1. Compute the arithmetic mean of the
following by both the direct and short-cut methods:

Class     : 20-30  30-40  40-50  50-60  60-70
Frequency :   8     26     30     20     16

Solution.

Class   Mid-value x    f      fx     d = x − A (A = 45)    fd
20-30       25          8     200          −20            −160
30-40       35         26     910          −10            −260
40-50       45         30    1350            0               0
50-60       55         20    1100           10             200
60-70       65         16    1040           20             320
Total              N = 100  ∑fx = 4600                  ∑fd = 100
By direct method

M = (∑fx)/N = 4600/100 = 46.

By short cut method.

Let assumed mean A= 45.

M = A + (∑ fd )/N = 45+100/100 = 46.
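The two calculations above can be checked with a short script. The block below is an illustrative sketch (not part of the original text) that recomputes the Example 1 mean from the class mid-values and frequencies by both methods.

```python
# Direct and short-cut methods for the mean of a grouped frequency table
# (data from Example 1: classes 20-30 ... 60-70).
midpoints = [25, 35, 45, 55, 65]
freqs = [8, 26, 30, 20, 16]

N = sum(freqs)  # total frequency, here 100

# Direct method: M = (sum of f*x) / N
mean_direct = sum(f * x for f, x in zip(freqs, midpoints)) / N

# Short-cut method with assumed mean A = 45: M = A + (sum of f*d) / N, d = x - A
A = 45
mean_shortcut = A + sum(f * (x - A) for f, x in zip(freqs, midpoints)) / N

print(mean_direct, mean_shortcut)  # both methods agree: 46.0 46.0
```

Both methods necessarily give the same value, since the short-cut formula is just the direct formula rewritten around the assumed mean A.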

Example 2. Compute the mean of the following
frequency distribution using the step-deviation method:

Class     : 0-11  11-22  22-33  33-44  44-55  55-66
Frequency :   9    17     28     26     15      8

Solution.

Class   Mid-value x    f    d = x − A (A = 38.5)   u = (x − A)/i (i = 11)   fu
0-11        5.5         9          −33                     −3              −27
11-22      16.5        17          −22                     −2              −34
22-33      27.5        28          −11                     −1              −28
33-44      38.5        26            0                      0                0
44-55      49.5        15           11                      1               15
55-66      60.5         8           22                      2               16
Total              N = 103                                            ∑fu = −58

Let the assumed mean A= 38.5, then

M = A + i(∑fu )/N = 38.5 + 11(-58)/103

= 38.5 - 638/103 = 38.5 - 6.194 = 32.306
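The step-deviation arithmetic can be checked the same way. This sketch (not part of the original text) follows the formula M = A + i(∑fu)/N with the Example 2 data.

```python
# Step-deviation method for Example 2: u = (x - A)/i, M = A + i * (sum of f*u) / N.
midpoints = [5.5, 16.5, 27.5, 38.5, 49.5, 60.5]
freqs = [9, 17, 28, 26, 15, 8]
A, i = 38.5, 11          # assumed mean and common class width

N = sum(freqs)           # 103
sum_fu = sum(f * (x - A) / i for f, x in zip(freqs, midpoints))  # -58.0
mean = A + i * sum_fu / N

print(round(mean, 3))    # 32.306
```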

MEDIAN
The median is defined as the measure of the
central term, when the given terms (i.e., values of
the variate) are arranged in the ascending or
descending order of magnitude. In other words, the
median is the value of the variate for which the
total of the frequencies above this value equals the
total of the frequencies below it. The median is the
value of the variable which divides the group into
two equal parts, one part comprising all values
greater, and the other all values less, than the
median.

For example, the marks obtained by seven
students in a paper of Statistics are 15, 20, 23, 32,
34, 39, 48, the maximum marks being 50. The
median is 32, since it is the value of the 4th term,
situated such that the marks of the 1st, 2nd and 3rd
students are less than this value and those of the
5th, 6th and 7th students are greater than this value.

(a) Median in individual series.
Let n be the number of values of a variate (i.e., the total
of all frequencies). First of all we write the values
of the variate (i.e., the terms) in ascending or
descending order of magnitudes

Here two cases arise:

Case 1. If n is odd, then the value of the
((n + 1)/2)th term gives the median.

Case 2. If n is even, then there are two central
terms, the (n/2)th and the ((n/2) + 1)th. The mean
of these two values gives the median.

(b) Median in continuous series (or grouped series).
In this case, the median (Md) is computed by the
following formula:

Md = l + ((N/2 − cf)/f) × i

where Md = median
l = lower limit of the median class
cf = total of all frequencies before the median class
f = frequency of the median class
i = class width of the median class.

Example 1. According to the census of 2011, the
following are the population figures, in thousands,
of 10 cities:

1400, 1250, 1670, 1800, 700, 650, 570, 488,
2100, 1700.

Find the median.

Solution. Arranging the terms in ascending order:

488, 570, 650, 700, 1250, 1400, 1670, 1700, 1800, 2100.

Here n = 10, therefore the median is the mean of
the 5th and 6th terms. The 5th term is 1250 and the
6th term is 1400.

Median (Md) = (1250 + 1400)/2 thousand
            = 1325 thousand
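The two cases for an individual series can be written out as a small script. This sketch (not from the original text) applies the even-n rule to the ten population figures above.

```python
# Median of an ungrouped series (the ten population figures, in thousands).
data = [1400, 1250, 1670, 1800, 700, 650, 570, 488, 2100, 1700]

values = sorted(data)
n = len(values)
if n % 2 == 1:
    median = values[n // 2]                              # odd n: middle term
else:
    median = (values[n // 2 - 1] + values[n // 2]) / 2   # even n: mean of the two central terms

print(median)  # 1325.0
```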

Example 2. Find the median for the following
distribution:

Wages in Rs.   : 0-10  10-20  20-30  30-40  40-50
No. of workers :  22    38     46     35     20
Solution. We shall calculate the cumulative
frequencies.

Wages in Rs.   No. of workers (f)   Cumulative frequency (c.f.)
0-10                 22                       22
10-20                38                       60
20-30                46                      106
30-40                35                      141
40-50                20                      161

Here N = 161. Therefore the median is the measure
of the ((N + 1)/2)th term, i.e. the 81st term. Clearly
the 81st term is situated in the class 20-30. Thus
20-30 is the median class. Consequently,

Md = 20 + ((161/2 − 60)/46) × 10
   = 20 + 205/46 = 20 + 4.46 = 24.46
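The grouped-median formula can be sketched in code; this illustrative snippet (not part of the original text) uses N/2 both to locate the median class and in the formula, which lands in the same class 20-30 as the computation above.

```python
# Grouped median: Md = l + ((N/2 - cf)/f) * i, applied to the wage table of Example 2.
classes = [(0, 10), (10, 20), (20, 30), (30, 40), (40, 50)]
freqs = [22, 38, 46, 35, 20]

N = sum(freqs)    # 161
half = N / 2      # 80.5

cf = 0            # cumulative frequency of the classes before the current one
for (l, u), f in zip(classes, freqs):
    if cf + f >= half:                          # this is the median class
        median = l + (half - cf) / f * (u - l)  # i = u - l is the class width
        break
    cf += f

print(round(median, 2))  # 24.46
```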

Example 3. Find the median of the following
frequency distribution:

Marks           No. of students
Less than 10         15
Less than 20         35
Less than 30         60
Less than 40         84
Less than 50        106
Less than 60        120
Less than 70        125

Solution. The cumulative frequency distribution
table:

Class (Marks)   Frequency f (No. of students)   Cumulative frequency (c.f.)
0-10                     15                              15
10-20                    20                              35
20-30                    25                              60
30-40                    24                              84
40-50                    22                             106
50-60                    14                             120
60-70                     5                             125
Total                  N = 125
Median = measure of the ((125 + 1)/2)th term
       = 63rd term.

Clearly the 63rd term is situated in the class 30-40.
Thus the median class is 30-40.

Md = 30 + ((125/2 − 60)/24) × 10
   = 30 + 25/24
   = 30 + 1.04 = 31.04

MODE
The word mode is derived from the French
phrase ‘la mode’, which means ‘in fashion’.
According to Dr. A. L. Bowley, ‘the value of the
graded quantity in a statistical group at which the
numbers registered are most numerous is called the
mode or the position of greatest density or the
predominant value.’
According to other statisticians, ‘the value of the
variable which occurs most frequently in the
distribution is called the mode.’ The mode of a
distribution is the value around which the items
tend to be most heavily concentrated. It may be
regarded as the most typical value of the series. The
mode is that value (or size) of the variate for which
the frequency is maximum, or the point of maximum
frequency or maximum density. In other words, the
mode is the maximum ordinate of the ideal curve
which gives the closest fit to the actual distribution.

Example 1. Find the mode from the following sizes
of shoes:

Size of shoe : 1  2  3  4  5  6  7  8  9
Frequency    : 1  1  1  1  2  3  2  1  1

Here the maximum frequency is 3, corresponding to
size 6. Hence the mode is shoe size 6.

In a continuous frequency distribution the
computation of the mode is done by the following
formula:

Mode = l + (f2/(f0 + f2)) × i

where l = lower limit of the modal class (the class
with the maximum frequency f1), f0 = frequency of
the class just preceding the modal class, f2 =
frequency of the class just following the modal
class, and i = class interval.

Example 2. Compute the mode of the following
distribution:

Class     : 0-7  7-14  14-21  21-28  28-35  35-42  42-49
Frequency :  19   25    36     72     51     43     28

Solution. Here the maximum frequency 72 lies in
the class interval 21-28. Therefore 21-28 is the
modal class.

l = 21, f1 = 72, f0 = 36, f2 = 51, i = 7

Mode = 21 + (51/(36 + 51)) × 7
     = 21 + 357/87
     = 21 + 4.103 = 25.103
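The modal-class computation can be sketched as below. This illustrative snippet (not part of the original text) applies the same formula used in the worked example, Mode = l + (f2/(f0 + f2)) × i; note that some texts use the alternative formula l + ((f1 − f0)/(2f1 − f0 − f2)) × i, which gives a slightly different value.

```python
# Mode of a grouped distribution, following the computation in Example 2:
# Mode = l + (f2 / (f0 + f2)) * i, where f0 and f2 are the frequencies of
# the classes adjacent to the modal class (the class with maximum frequency).
classes = [(0, 7), (7, 14), (14, 21), (21, 28), (28, 35), (35, 42), (42, 49)]
freqs = [19, 25, 36, 72, 51, 43, 28]

k = freqs.index(max(freqs))          # index of the modal class, here 21-28
l, u = classes[k]
f0, f2 = freqs[k - 1], freqs[k + 1]  # preceding and following frequencies

mode = l + f2 / (f0 + f2) * (u - l)

print(round(mode, 3))  # 25.103
```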

MEASURES OF DISPERSION
An average gives an idea of the central
tendency of the given distribution, but it is also
necessary to know how the variates are clustered
around, or scattered away from, the average.
The degree to which numerical data tend to
spread about an average value is called the
variation, dispersion, or spread of the data. Various
measures of dispersion are available; the most
common are the following:
(a) Range
(b) Mean deviation from mean
(c) Variance
(d) Standard deviation
RANGE
It is the simplest possible measure of
dispersion. The range of a set of numbers (data) is
the difference between the largest and the least
numbers in the set, i.e. the values of the variable. If
this difference is small, the series of numbers is
considered regular; if it is large, the series is
considered irregular.
Range = Largest – Smallest
Example : Compute the range for the following
observation
15 20 25 25 30 35
Solution: Range = Largest – Smallest
i.e., 35-15=20
SEMI-INTER-QUARTILE RANGE
The inter quartile range of a set of data is
defined by
Inter-quartile range = Q3-Q1
Where Q1 and Q3 are respectively the first
and third quartiles for the data.
Semi-inter quartile range (or quartile deviation) is
denoted by Q and is defined by
Q =(Q3 – Q1)/2
Where Q1 and Q3 have the same meaning as
given above.
The semi-inter-quartile range is a better measure of
dispersion than the range and is easily computed. Its
drawback is that it does not take into account all the
items.
MEAN DEVIATION
The average (or mean) deviation about any
point M of a set of N numbers x1, x2, …, xN is
defined by

δ = (∑ |xi − M|) / N

where M is the mean, median, or mode according as
the mean deviation from the mean, median, or mode
is to be computed; |xi − M| represents the absolute
(or numerical) value of the deviation. Thus |−5| = 5.
If x1, x2, …, xk occur with frequencies f1, f2, …, fk
respectively, then the mean deviation (δm) is
defined by

δm = (∑ fi |xi − M|) / ∑ fi

Mean deviation depends on all the values of the


variables and therefore it is a better measure of
dispersion than the range or the quartile deviation.
Since signs of the deviations are ignored (because
all deviations are taken positive), some artificiality
is created. In case of grouped frequency distribution
the mid-values are taken as x.
Example 1. Find the mean deviation from the
arithmetic mean of the following distribution:

Marks:           0-10  10-20  20-30  30-40  40-50
No. of students:   5     8     15     16      6

Solution. Let the assumed mean A = 25 and i = 10.

Class  Mid-value x  Frequency f  u=(x-A)/i   fu   x-M  f|x-M|
0-10        5            5          -2      -10   -22    110
10-20      15            8          -1       -8   -12     96
20-30      25           15           0        0    -2     30
30-40      35           16           1       16     8    128
40-50      45            6           2       12    18    108
Total              Σf = 50               Σfu = 10   Σf|x-M| = 472

M = A + (Σfu/Σf) × i = 25 + (10/50) × 10 = 27

The required mean deviation from the arithmetic mean
= Σf|x – M| / Σf = 472/50 = 9.44
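The arithmetic of Example 1 can be reproduced in Python from the class mid-values and frequencies:

```python
# Mean deviation from the arithmetic mean for grouped data (Example 1).
x = [5, 15, 25, 35, 45]        # class mid-values
f = [5, 8, 15, 16, 6]          # frequencies
N = sum(f)
mean = sum(fi * xi for fi, xi in zip(f, x)) / N
mean_dev = sum(fi * abs(xi - mean) for fi, xi in zip(f, x)) / N
print(mean, mean_dev)  # 27.0 9.44
```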
STANDARD DEVIATION
The standard deviation is a measure that
summarises the amount by which every value
within a dataset varies from the mean. Effectively it
indicates how tightly the values in the dataset are
bunched around the mean value. It is the most
widely used measure of dispersion since, unlike the
range and inter-quartile range, it takes every value
in the dataset into account. When the values in a
dataset are tightly bunched together, the standard
deviation is small; when the values are spread apart,
the standard deviation is relatively large. The
standard deviation is usually presented in
conjunction with the mean and is measured in the
same units.
The standard deviation (or S.D.) is the positive
square root of the arithmetic mean of the squared
deviations of the values from their arithmetic mean
M. It is usually denoted by σ. Thus

    σ = √[(1/N) Σ (xi – M)²]

When the deviations are calculated from the
arithmetic mean M, the root-mean-square deviation
becomes the standard deviation. The square of the
standard deviation, σ², is called the variance.
Example 1. Calculate the S.D. and coefficient of
variation (C.V.) for the following table:

Class:     0-10 10-20 20-30 30-40 40-50 50-60 60-70 70-80
Frequency:   5   10    20    40    30    20    10     5

Solution. We prepare the following table for the
computation of the S.D.

Class  Mid-value x   f   u=(x-35)/10    fu   fu²
0-10        5        5       -3        -15    45
10-20      15       10       -2        -20    40
20-30      25       20       -1        -20    20
30-40      35       40        0          0     0
40-50      45       30        1         30    30
50-60      55       20        2         40    80
60-70      65       10        3         30    90
70-80      75        5        4         20    80
Total          N = Σf = 140        Σfu = 65  Σfu² = 385

Let the assumed mean A = 35 and h = 10.

A.M.: M = A + h(Σfu)/N = 35 + 10(65/140)
        = 35 + 4.64 = 39.64

S.D.: σ = h√[Σfu²/N – (Σfu/N)²]
        = 10√[385/140 – (65/140)²]
        = 10√(2.75 – 0.2156) ≈ 15.92

C.V. = (σ/M) × 100 = (15.92/39.64) × 100 ≈ 40.2%
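The step-deviation computation of this example (A = 35, h = 10) can be verified with a short Python sketch:

```python
import math

# Step-deviation method for grouped data: u = (x - A) / h.
x = [5, 15, 25, 35, 45, 55, 65, 75]       # class mid-values
f = [5, 10, 20, 40, 30, 20, 10, 5]        # frequencies
A, h = 35, 10                             # assumed mean and class width
N = sum(f)
u = [(xi - A) // h for xi in x]
sum_fu = sum(fi * ui for fi, ui in zip(f, u))
sum_fu2 = sum(fi * ui * ui for fi, ui in zip(f, u))
mean = A + h * sum_fu / N
sd = h * math.sqrt(sum_fu2 / N - (sum_fu / N) ** 2)
cv = sd / mean * 100
print(round(mean, 2), round(sd, 2), round(cv, 1))
```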

Example 2. Calculation of the mean, mode,
variance and standard deviation:

Class    f     M     fM      fM²
0-1     31    0.5    15.5     7.75
1-2     57    1.5    85.5   128.25
2-3     26    2.5    65.0   162.50
3-4     14    3.5    49.0   171.50
4-5      6    4.5    27.0   121.50
5-6      3    5.5    16.5    90.75
Total  137          258.5   682.25

a) Mean = ΣfM / Σf = 258.5/137 = 1.89
b) Mode: the modal class is 1-2, so the mode = 1.5
(the mid-value of the modal class).
c) Variance:
σ² = [ΣfM² – (ΣfM)²/N] / N
   = [682.25 – (258.5)²/137] / 137 = 1.4197
d) Standard deviation:
σ = √σ² = √1.4197 = 1.1915
CORRELATION
Correlation is a bivariate analysis that
measures the strength of association between
two variables. In statistics, the value of the
correlation coefficient varies between +1 and
–1. When the value of the correlation
coefficient lies near ±1, there is said to be a
perfect degree of association between the two
variables. As the correlation coefficient value
goes towards 0, the relationship between the
two variables becomes weaker. Usually, in
statistics, we use three methods to find
correlations: (1) Scatter Plot, (2) Karl
Pearson’s coefficient of correlation, and (3)
Spearman’s Rank-correlation coefficient.
Methods Of Determining Correlation
1) Scatter Plot (scatter diagram or dot diagram):
In this method the values of the two variables are
plotted on graph paper, one along the horizontal
(x-axis) and the other along the vertical (y-axis).
By plotting the data, we get points (dots) on the
graph which are generally scattered, hence the name
‘Scatter Plot’.
The manner in which these points are scattered
suggests the degree and the direction of correlation.
The degree of correlation is denoted by ‘r’ and its
direction is given by its sign, positive or negative.
i) If all the points lie on a rising straight line, the
correlation is perfectly positive and r = +1 (see fig. 1).
ii) If all the points lie on a falling straight line, the
correlation is perfectly negative and r = –1 (see fig. 2).
iii) If the points lie in a narrow strip rising upwards,
there is a high degree of positive correlation (see fig. 3).
iv) If the points lie in a narrow strip falling downwards,
there is a high degree of negative correlation (see fig. 4).
v) If the points are spread widely over a broad strip
rising upwards, there is a low degree of positive
correlation (see fig. 5).
vi) If the points are spread widely over a broad strip
falling downwards, there is a low degree of negative
correlation (see fig. 6).
vii) If the points are scattered without any specific
pattern, correlation is absent, i.e. r = 0 (see fig. 7).
Though this method is simple and gives a rough
idea of the existence and degree of correlation, it is
not reliable. As it is not a mathematical method,
it cannot measure the degree of correlation precisely.

2) Karl Pearson’s coefficient of
correlation: The Pearson r correlation is widely
used in statistics to measure the degree of the
relationship between linearly related variables.
For example, in the stock market, if we want to
measure how two commodities are related to
each other, the Pearson r correlation is used to
measure the degree of relationship between the
two commodities. The following formula is used
to calculate the Pearson r correlation:

    r = Σ(x – x̄)(y – ȳ) / √[Σ(x – x̄)² Σ(y – ȳ)²]

where x̄ and ȳ are the means of x and y, and N is
the number of pairs of observations.
Note: r is also known as the product-moment
coefficient of correlation.
Equivalently,

    r = cov(x, y) / (σx σy)

where the covariance of x and y is defined as

    cov(x, y) = (1/N) Σ(x – x̄)(y – ȳ).
Example. Calculate the coefficient of correlation
between the heights of fathers and sons for the
following data.

Height of father (cm): 165 166 167 168 167 169 170 172
Height of son (cm):    167 168 165 172 168 172 169 171

Solution: n = 8 (pairs of observations)

x̄ = Σx/n = 1344/8 = 168,  ȳ = Σy/n = 1352/8 = 169

Father x  Son y   x-x̄   y-ȳ   (x-x̄)(y-ȳ)  (x-x̄)²  (y-ȳ)²
  165      167     -3    -2        6          9       4
  166      168     -2    -1        2          4       1
  167      165     -1    -4        4          1      16
  167      168     -1    -1        1          1       1
  168      172      0     3        0          0       9
  169      172      1     3        3          1       9
  170      169      2     0        0          4       0
  172      171      4     2        8         16       4
Σx=1344  Σy=1352    0     0       24         36      44

Calculation:

    r = 24 / √(36 × 44) = 24 / √1584 ≈ 0.6

Since r is positive and about 0.6, the correlation is
positive and moderate (i.e. direct and reasonably
good).
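The father-and-son calculation can be checked with a small Pearson r function written from the deviation formula above:

```python
import math

def pearson_r(xs, ys):
    """Product-moment correlation coefficient from raw data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

father = [165, 166, 167, 168, 167, 169, 170, 172]
son = [167, 168, 165, 172, 168, 172, 169, 171]
print(round(pearson_r(father, son), 3))  # 0.603
```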
Example From the following data compute the
coefficient of correlation between x and y.
Example. If the covariance between x and y is 12.3
and the variances of x and y are 16.4 and 13.8
respectively, find the coefficient of correlation
between them.
Solution: Given — covariance = cov(x, y) = 12.3,
variance of x (σx²) = 16.4,
variance of y (σy²) = 13.8.
Now,
    r = cov(x, y)/(σx σy) = 12.3/√(16.4 × 13.8)
      = 12.3/15.04 ≈ 0.82
Example Marks obtained by two brothers FRED
and TED in 10 tests are as follows:

Find the coefficient of correlation between the two.


Solution: Here x0 = 60, c = 4, y0 = 60 and d = 3
Calculation:

3) Spearman rank correlation: The Spearman
rank correlation is a non-parametric test that
is used to measure the degree of association
between two variables. It was developed by
Spearman, and hence the name. The Spearman
rank correlation test makes no assumptions
about the distribution of the data and is the
appropriate correlation analysis when the
variables are measured on a scale that is at
least ordinal. The following formula is used to
calculate the Spearman rank correlation:

    rs = 1 – 6ΣDi² / [n(n² – 1)]

where Di is the difference between the ranks of
the i-th pair and n is the number of pairs.

This produces a correlation coefficient with a
maximum value of 1, indicating a perfect positive
association between the ranks, and a minimum
value of –1, indicating a perfect negative
association between the ranks. A value of 0
indicates no association between the ranks of the
observed values of X and Y.
Example: The following are the ranks of ten
countries on the GNP scale and on the CBR scale.

Country       GNP   CBR    Di     Di²
Algeria        4     4      0      0
India          9     7      2      4
Mongolia       8     2.5    5.5   30.25
El Salvador    7     2.5    4.5   20.25
Ecuador        6     5.5    0.5    0.25
Malaysia       5     5.5   -0.5    0.25
Ireland        2     9     -7     49
Argentina      3     8     -5     25
France         1    10     -9     81
Sierra Leone  10     1      9     81
Total                       0    291.00

The value of the Spearman rank correlation
coefficient is

    rs = 1 – 6(291)/[10(10² – 1)] = 1 – 1746/990 ≈ –0.76

indicating a fairly strong negative association
between the two sets of ranks.
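The rank-correlation arithmetic can be verified directly from the squared rank differences in the table:

```python
# Squared rank differences Di^2 from the GNP/CBR table above.
d_squared = [0, 4, 30.25, 20.25, 0.25, 0.25, 49, 25, 81, 81]
n = len(d_squared)                          # 10 countries
rs = 1 - 6 * sum(d_squared) / (n * (n ** 2 - 1))
print(round(rs, 3))  # -0.764
```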

Graphical Representations of Data (Graphs &
Diagrams)

Visualization techniques are ways of creating
and manipulating graphical representations of data.
We use these representations to gain better insight
into and understanding of the problem we are
studying — pictures can convey an overall message
much better than a list of numbers. Statistics deals
with (usually large) numerical data, and such data
can be represented graphically. In fact, the graphical
representation of statistical data is an essential step
during statistical analysis.
Statistical surveys and experiments provide
valuable information in the form of numerical
scores. For better understanding, and for making
conclusions and interpretations, the data should be
managed and organized in a systematic form. A
graph is a representation of data using graphical
symbols such as lines, bars, pie slices and dots; it
presents numerical data in a visual form and
conveys important information at a glance. The
following are the different types of graphs and
diagrams used to represent quantitative data.

Line or Dot Plots


Line plots are graphical representations of
numerical data. A line plot is a number line with x’s
placed above specific numbers to show their
frequency. By the frequency of a number we mean
the number of occurrences of that number. Line plots
are used to represent one group of data with fewer
than 50 values.

Example
Suppose thirty people live in an apartment building.
Their ages are as follows:

58 30 37 36 34 49 35 40 47 47
39 54 47 48 54 50 35 40 38 47
48 34 40 46 49 47 35 48 47 46

Make a line plot of the ages.

The line plot shows all the ages of the people
who live in the apartment building. It shows that the
youngest person is 30 and the oldest is 58, that most
people in the building are over 46 years of age, and
that the most common age is 47.

Line plots allow several features of the data to


become more obvious. For example, outliers,
clusters, and gaps are apparent.
• Outliers are data points whose values are
significantly larger or smaller than other values,
such as the ages of 30, and 58.
• Clusters are isolated groups of points, such as the
ages of 46 through 50.
• Gaps are large spaces between points, such as 41
and 45.
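The frequencies a line plot displays can be tallied in code with collections.Counter; for the ages in the example above:

```python
from collections import Counter

ages = [58, 30, 37, 36, 34, 49, 35, 40, 47, 47,
        39, 54, 47, 48, 54, 50, 35, 40, 38, 47,
        48, 34, 40, 46, 49, 47, 35, 48, 47, 46]
counts = Counter(ages)
print(min(ages), max(ages))   # 30 58  (youngest and oldest)
print(counts.most_common(1))  # [(47, 6)]  (most common age)
```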

Stem and Leaf Plots


Another type of graph is the stem-and-leaf
plot. It is closely related to the line plot, except that
the number line is usually vertical and digits are
used instead of x’s. To illustrate the method,
consider the following scores which twenty students
got on a history test:

69 84 52 93 61 74 79 65 88 63
57 64 67 72 74 55 82 61 68 77

We divide each data value into two parts: the
left part is called the stem and the remaining digit
on the right is called the leaf. We display horizontal
rows of leaves attached to a vertical column of
stems, giving the following table:

5 | 2 7 5
6 | 9 1 5 3 4 7 1 8
7 | 4 9 2 4 7
8 | 4 8 2
9 | 3

where the stems are the tens digits of the scores and
the leaves are the ones digits.
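Building the stem-and-leaf table programmatically is a one-pass grouping by the tens digit:

```python
from collections import defaultdict

scores = [69, 84, 52, 93, 61, 74, 79, 65, 88, 63,
          57, 64, 67, 72, 74, 55, 82, 61, 68, 77]
plot = defaultdict(list)
for s in scores:
    plot[s // 10].append(s % 10)   # stem = tens digit, leaf = ones digit
for stem in sorted(plot):
    print(stem, "|", *plot[stem])
```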

The disadvantage of stem-and-leaf plots is
that the data must be grouped according to place
value. What if one wants to use different groupings?
In that case histograms, to be discussed below, are
better suited.

Frequency Distributions and Histograms


When we deal with large sets of data, a good
overall picture and sufficient information can often
be conveyed by distributing the data into a number
of classes (class intervals) and determining the
number of elements belonging to each class, called
the class frequency. For instance, the following
table shows some test scores from a math class.

65 91 85 76 85 87 79 93 82 75
100 70 88 78 83 59 87 69 89 54
74 89 83 80 94 67 77 92 82 70
94 84 96 98 46 70 90 96 88 72

It is hard to get a feel for the data in this format
because it is unorganized. To construct a frequency
distribution:
• Compute the class width
    CW = (largest value – smallest value) / number of classes.
• Round CW up to the next whole number so that
the classes cover the whole data.
Thus, if we want to have 6 class intervals, then
CW = (100 – 46)/6 = 9; taking the class width as 10
gives the six classes 41-50, 51-60, …, 91-100 shown
below. The low number in each class is called the
lower class limit, and the high number is called the
upper class limit.

With the above information we can construct the
following table, called a frequency distribution.

Class Frequency
41-50 1
51-60 2
61-70 6
71-80 8
81-90 14
91-100 9
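The class frequencies in the table can be counted mechanically; a sketch using the six classes of width 10 chosen above:

```python
scores = [65, 91, 85, 76, 85, 87, 79, 93, 82, 75,
          100, 70, 88, 78, 83, 59, 87, 69, 89, 54,
          74, 89, 83, 80, 94, 67, 77, 92, 82, 70,
          94, 84, 96, 98, 46, 70, 90, 96, 88, 72]
freq = {}
for lo in range(41, 100, 10):          # classes 41-50, 51-60, ..., 91-100
    hi = lo + 9
    freq[f"{lo}-{hi}"] = sum(lo <= s <= hi for s in scores)
print(freq)
```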

Once frequency distributions are constructed,


it is usually advisable to present them graphically.
The most common form of graphical representation
is the histogram. In a histogram, each of the classes
in the frequency distribution is represented by a
vertical bar whose height is the class frequency of
the interval. The horizontal endpoints of each
vertical bar correspond to the class endpoints.

One advantage of the stem-and-leaf plot over
the histogram is that the stem-and-leaf plot displays
not only the frequency for each interval but also
all of the individual values within that interval.

Bar Graphs
Bar Graphs, similar to histograms, are often
useful in conveying information about categorical
data where the horizontal scale represents some
non-numerical attribute. In a bar graph, the bars are
non-overlapping rectangles of equal width and they
are equally spaced. The bars can be vertical or
horizontal. The length of a bar represents the
quantity we wish to compare.

Line Graphs
A line graph (or time series plot) is
particularly appropriate for representing data that
vary continuously. A line graph typically shows the
trend of a variable over time. To construct a time
series plot, we put time on the horizontal scale and
the variable being measured on the vertical scale,
and then we connect the points using line segments.

For example: the population (in millions) of the
US for the years 1860-1950 is as follows: 31.4 in
1860; 39.8 in 1870; 50.2 in 1880; 62.9 in 1890; 76.0
in 1900; 92.0 in 1910; 105.7 in 1920; 122.8 in 1930;
131.7 in 1940; and 151.1 in 1950. Make a time plot
showing this information.

Circle Graphs or Pie Charts


Another type of graph used to represent data
is the circle graph. A circle graph, or pie chart,
consists of a circular region partitioned into disjoint
sections, with each section representing a part or
percentage of a whole. To construct a pie chart we
first convert the distribution into a percentage
distribution. Then, since a complete circle
corresponds to 360 degrees, we obtain the central
angles of the various sectors by multiplying the
percentages by 3.6. We illustrate this method in the
next example.

Example
A survey of 1000 adults uncovered some interesting
housekeeping secrets. When unexpected company
comes, where do we hide the mess? The survey
showed that 68% of the respondents toss their mess
in the closet, 23% shove things under the bed, 6%
put things in the bath tub, and 3% put the mess in
the freezer. Make a circle graph to display this
information.

Solution.
We first find the central angle corresponding to each
case:

in closet 68 × 3.6 = 244.8


under bed 23 × 3.6 = 82.8
in bathtub 6 × 3.6 = 21.6
in freezer 3 × 3.6 = 10.8
Note that

244.8 + 82.8 + 21.6 + 10.8 = 360.
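The central-angle conversion (percentage × 3.6) can be done for the whole distribution in one dictionary comprehension:

```python
# Percentage distribution from the survey, converted to central angles.
percentages = {"closet": 68, "under bed": 23, "bathtub": 6, "freezer": 3}
angles = {place: pct * 3.6 for place, pct in percentages.items()}
print(angles)
```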

Pictographs
One type of graph seen in newspapers and
magazines is a pictograph. In a pictograph, a
symbol or icon is used to represent a quantity of
items. A pictograph needs a title to describe what is
being presented and how the data are classified as
well as the time period and the source of the data.
An example of a pictograph is given in Figure 30.8.
Scatterplots

A relationship between two sets of data is


sometimes determined by using a scatterplot. Let’s
consider the question of whether studying longer for
a test will lead to better scores. A collection of data
is given below

Study Hours 3 5 2 6 7 1 2 7 1 7

Score 80 90 75 80 90 50 65 85 40 100

Based on these data, a scatterplot may look like
the following figure.
Frequency Polygon

To plot a frequency polygon, as in the case
of a histogram, the values of the variable are taken
along the horizontal axis of the graph and the
frequencies along the vertical axis. For a frequency
polygon, however, one indicates the mid-points of
the class intervals on the horizontal axis instead of
the boundaries of the intervals. The mid-points of
the interval just before the lowest interval and just
after the highest interval are also indicated. Above
each mid-point, a point is plotted at a height
corresponding to the frequency of the interval. For
the two additional mid-points the frequency is zero,
so those points lie on the X-axis itself. The adjacent
points so plotted are then joined by straight line
segments.

The data which has been shown in tabular
form may also be displayed in pictorial form by
using a graph. A well-constructed graphical
presentation is the easiest way to depict a given set
of data, and the different types of graphs and
diagrams explained above help the reader to grasp
the information in tabular data at a glance.
TEST YOUR UNDERSTANDING
1. Explain the concept of measures of central
   tendency.
2. What do you mean by Mean, Median & Mode?
3. What is Range?
4. Explain the use of calculating standard
   deviation.
5. What is correlation?
6. Explain the different methods of calculating
   the correlation coefficient.
7. Explain the different types of graphs and
   diagrams.
8. Explain the different uses of graphical
   representation of data.

Sample Question Paper on

EDU 08-ASSESSMENT FOR LEARNING

PART – A
Answer all questions. Each carries 2 marks.

1. Differentiate the terms Measurement and
   Test.
2. What is the role of Evaluation in classrooms?
3. Create an operational definition for the term
   ‘Assessment’.
4. What is Ipsative Assessment?
5. What is the purpose of assessment?
6. Explain any two uses of Rubrics.
7. Elaborate the concept of IBO.
8. List some tools for assessing the affective
   domain of the learners.
9. List some principles followed in the
   construction of short-answer type test items.
10. Explain the concept of correlation.
(10 × 2 = 20)

PART – B

Answer any ten questions. Each carries 4 marks.

11. Differentiate summative assessment and
    formative assessment.
12. Explain the concepts of assessment for
    learning, as learning and of learning.
13. How will you ensure fairness in classroom
    assessment?
14. How does a teacher assess the performance of
    students in behaviourist and constructivist
    classroom environments?
15. Explain the different qualities required for a
    good assessment tool.
16. Explain the different principles followed in
    assessment practice.
17. Explain the different types of grading and
    their uses.
18. Explain the different types of measures of
    dispersion.
19. ‘Objective type test items help to maintain
    objectivity in assessment’ – Do you agree?
    Give reasons.
20. Explain the use of graphical representation
    of data.
21. What are the major issues in classroom
    assessment? As a teacher, how will you solve
    them?
22. Differentiate Achievement test and
    Diagnostic test.
(10 × 4 = 40)

PART – C

Answer any two questions. Each carries 10 marks.

23. Explain the different types of assessment
    practices in today’s classroom.
24. The modern education system has witnessed
    many reforms in educational assessment.
    Give a brief explanation of the different
    reforms.
25. ‘Classroom assessment enhances the
    confidence of learners’ – Justify the statement.
    Bring out the relationship of assessment with
    confidence, self-esteem and motivation.
(2 × 10 = 20)
