
Unit- 1

Concept of Assessment

1.1. Definition and meaning of screening, assessment, evaluation, testing and measurement.
1.2. Assessment for diagnosis and certification – intellectual assessment, achievement,
aptitude and other psychological assessments.
1.3. Developmental assessment and educational assessment –entry level, formative and
summative assessments.
1.4. Formal and informal assessment – concept, meaning and role in educational settings. Standardized/norm-referenced tests (NRT) and teacher-made/informal criterion-referenced tests (CRT).
1.5. Points to consider while assessing students with developmental disabilities.

1.1 Definition and meaning of screening, assessment, evaluation, testing and measurement.

Screening- Screening is a preliminary process used to identify individuals who may be at risk
for certain conditions or who may need further evaluation.
It is a quick, broad procedure designed to flag potential concerns without providing a detailed
diagnosis. For instance, in education, screening might involve checking a child’s reading
level to determine if they might need additional support.
Screening is important in early intervention, especially in special education, as it helps in
identifying students who might need additional support or specialized instruction at an early
stage.
The primary goal of screening is to quickly and efficiently identify individuals who may be at
risk for certain conditions or who may benefit from more in-depth assessment or specialized
services.
Screening is typically broad and general, not aimed at providing a diagnosis but rather at
detecting potential concerns that warrant further investigation.
Screening often involves simple, quick, and easy-to-administer tools such as questionnaires,
checklists, or basic tests. These tools are designed to flag individuals who show signs of
needing further assessment.
The outcome of a screening is usually a decision on whether further evaluation is needed. If a
person is "flagged" during screening, they may be referred for more comprehensive
assessments to determine specific needs or conditions.

Screening is defined as a preliminary process used to identify individuals who may have
specific characteristics, conditions, or risk factors that require further evaluation or
intervention. It is a quick, broad, and general assessment aimed at detecting potential issues
early on, rather than providing a detailed diagnosis or comprehensive evaluation.

"Screening is the process of identifying a subset of the population who may have a certain
condition or characteristic, usually by means of a brief assessment. The purpose of screening
is to detect potential problems early and to refer individuals for further diagnostic
evaluation." Robert M. Kaplan and Dennis P. SaccuzzoPsychological Testing: Principles,
Applications, and Issues (8th Edition).

"Screening is the use of a test or a battery of tests to identify individuals who are at risk for a
condition or disorder, allowing for early intervention or further diagnostic evaluation."
American Academy of Pediatrics, Developmental and Behavioral Pediatrics (4th Edition).

"Screening refers to a brief procedure designed to identify children who may need a more
comprehensive assessment or intervention in order to maximize their developmental
outcomes." Mary McMullen Assessing Young Children: A Developmental Perspective.

"Screening is a preliminary process that involves administering quick and simple tests to
detect the possibility of a specific problem or condition in an individual, which then may
necessitate further, more comprehensive evaluation." Lori S. Wiggins, Screening and Assessment in Early Childhood Education.

"Screening is the presumptive identification of unrecognized disease or defect by the


application of tests, examinations, or other procedures that can be applied rapidly. Screening
tests sort out apparently well persons who probably have a disease from those who probably
do not." World Health Organization (WHO) Principles and Practice of Screening for
Disease.
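To make the flag-and-refer logic above concrete, here is a minimal, purely illustrative sketch in Python. The checklist length, the 0/1 item scoring, and the cutoff of 3 are assumptions invented for this example; they are not taken from any published screening instrument.

```python
# Hypothetical screening logic: score a short teacher checklist and flag any
# child at or above an assumed cutoff for referral to comprehensive assessment.

CUTOFF = 3  # assumed referral threshold, not from a real instrument


def needs_referral(checklist_responses):
    """Return True if the child should be referred for further assessment.

    checklist_responses: list of 0/1 ratings, where 1 means a concern was observed.
    """
    return sum(checklist_responses) >= CUTOFF


# Example: two children rated on an assumed six-item reading-concern checklist.
children = {
    "Child A": [1, 0, 1, 1, 0, 0],  # score 3 -> flagged
    "Child B": [0, 0, 1, 0, 0, 0],  # score 1 -> not flagged
}

for name, responses in children.items():
    decision = ("refer for comprehensive assessment"
                if needs_referral(responses)
                else "no referral at this time")
    print(f"{name}: {decision}")
```

Note that the outcome is only a referral decision; the screening score itself is not a diagnosis.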

Assessment – Assessment is a continuous process for understanding an individual and planning the services required for him or her. It involves collecting and organizing information in order to specify and verify decisions.
These decisions may span a wide spectrum, ranging from screening and identification to the evaluation of a teaching plan.
Assessment is a systematic process of collecting, analyzing, and interpreting information
about an individual’s abilities, behaviors, skills, or characteristics. The purpose of assessment
is to gain a comprehensive understanding of a person’s strengths, needs, and progress, often
to inform decisions about education, intervention, or treatment.
The selection of assessment tools and methods varies depending on the purpose for which the assessment is carried out.

The primary goal of assessment is to gather detailed and relevant data that can be used to
make informed decisions about a person's education, development, or care. This can include
identifying areas of need, measuring progress, or evaluating the effectiveness of
interventions.
Assessment is typically more detailed and comprehensive than screening. It often involves
multiple methods and tools to gather a wide range of information about an individual.
Assessment can involve a variety of tools, such as standardized tests, observations,
interviews, checklists, and informal assessments. These tools are used to gather both
qualitative and quantitative data.
The outcome of an assessment is a detailed understanding of the individual’s abilities and
needs. This information is often used to create personalized educational plans, determine
eligibility for special services, or evaluate the impact of interventions.
Assessment is defined as the systematic process of collecting, analyzing, and interpreting
information to understand an individual’s abilities, behaviors, and needs. The goal of
assessment is to gather data that can inform decisions about interventions, education, or care.

Wallace, Larsen, & Elkinson-1992 - “Assessment refers to the process of gathering and
analyzing information in order to make instructional, administrative, guidance decision for an
individuals.”

Robert M. Kaplan and Dennis P. Saccuzzo "Assessment is the systematic evaluation and
measurement of psychological, educational, or behavioral characteristics of an individual,
often through the use of standardized tests, interviews, and observation."

National Association of School Psychologists (NASP): "Assessment is the process of gathering information about a student's learning, behavior, and environment through various methods to make informed educational decisions and interventions."

Susan M. Brookhart: "Assessment is the process of gathering evidence of student learning to inform instructional decisions and improve teaching and learning outcomes."

James McMillan: "Assessment refers to the various methods and tools used to evaluate
student learning, performance, and understanding, aiming to provide feedback to improve
both teaching and learning."

Paul Black and Dylan Wiliam: "Assessment is the process of eliciting evidence of learners’
understanding and abilities, typically for the purpose of making educational decisions that
promote further learning."

Why Assessment?
Taylor (1981) answers this by describing the stages of assessment:

 Stage 1 – To screen and identify those students with potential problems.
 Stage 2 – To determine and evaluate the appropriate teaching program and strategies for a particular student.
 Stage 3 – To determine the current level of functioning and educational needs of a student.

Purpose of assessment
Anyone who is involved in assessment process should know clearly the purpose for which he
is conducting the assessment. Knowing this is very important as it decides the type of
assessment tools and means of gathering information for decision making.
For example, if the purpose is only screening and identification, we use a short screening schedule; for program planning, we use a checklist that helps in assessing the current performance level and in selecting content for teaching.

There are many purposes of assessment.

1. Initial screening and identification
2. Determination and evaluation of teaching programs and strategies (pre-referral intervention)
3. Determination of current performance level and educational need
4. Decisions about classification and program placement
5. Development of individual educational programs (including goals, objectives and evaluation procedures)
6. Evaluation of the effectiveness of the Individualized Educational Program

1- Initial screening and identification

Students who need special attention or educational services are first identified through
assessments. These assessments can be informal, like observing or analyzing mistakes, or
formal, like using achievement or intelligence tests. In simple terms, assessment helps to find
out which children need further evaluation.

Assessment is also used to check for children who are at "high risk" of developing problems.
These children may not yet have issues requiring special education but show behaviors that
suggest they might have problems in the future. By identifying these children early, we can
continuously monitor their problem areas and create a stimulation program, if needed, to
prevent the issues from developing.

2- Evaluation of teaching program and strategies (pre-referral)

One of the important roles of assessment is to help decide on the right programs and
strategies for students. Assessment information can be used in four main ways:

I. Assisting Regular Teachers: Before sending a student to a special education program, assessments can help regular teachers figure out what to teach and the best way to teach it.
II. Evaluating Teaching Methods: Assessment is used to check how effective a
particular teaching program or strategy is. By using assessments in this way, a formal
referral to special education can often be avoided. In other words, assessment
information can be used to create and review intervention programs before making a
formal referral. For example, if a student, like Student X, is getting poor marks
because of spelling mistakes, the regular teacher can assess the student's work and
provide a remediation program. If the student shows improvement, there may not be a
need for further action.
III. Documenting the Need for Formal Referral: If the pre-referral intervention does
not solve the problem, such as the spelling issues mentioned earlier, the assessment
can show that the student needs to be referred to a special education program.
IV. Incorporating Information into the Individual Education Program (IEP): The
information from pre-referral interventions can be used to create the IEP for students
who qualify for and receive special education.

3- Determining the current performance level and educational need

The assessment of the current performance level of a student in subjects or skills is essential to establish the need for a special education program. This information helps the teacher or examiner:
 to identify subject(s) or skill(s) that need special assistance.
 to identify strengths and weaknesses of students.
 to select appropriate strategies and procedures.

4- Decisions about classification and program placement: Assessment data is used to classify and place students with special needs in the right educational programs. The goal of classifying students is to identify common characteristics and relationships among their educational challenges, and to create a shared language that helps professionals communicate effectively (Taylor, 1993).

Based on the assessment information, students are classified and placed in suitable
programs. For example, a 6-year-old child diagnosed with intellectual disability
would be placed in a special education program designed for children with similar
needs.

5- Development of the Individualized Educational Program: The most important use of assessment information is to determine the goals, objectives, and strategies for teaching children who are identified as having special educational needs. As each individual child's needs are different, we have to plan an educational program that meets those needs. A systematically planned individualized educational program is a blueprint for teachers to follow.

6- Evaluation of the effectiveness of the Individualized Educational Program:
In an Individualized Educational Program (IEP), evaluation procedures are listed along with goals, objectives, methods, and materials. These procedures help the teacher regularly check how well the student is doing. This monitoring gives feedback, which can be positive or negative, to both the teacher and the student. Depending on the feedback, the teacher might adjust her plan, keep the same plan, or choose a new activity. For example, if the child shows improvement during these regular checks, the teacher will continue with the current plan. If there is no improvement, the teacher may need to make changes to the IEP.
Assessment is a continuous process.
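As a rough illustration of the IEP monitoring loop described above, the sketch below tallies which objectives have been met and suggests whether to continue or revise the plan. The objective names, mastery criteria, and the fifty-percent decision rule are assumptions made up for this example, not part of any prescribed IEP procedure.

```python
# Hypothetical IEP progress check: compare observed performance with each
# objective's mastery criterion, then decide whether to keep or revise the plan.

iep_objectives = {
    "Reads 20 sight words": {"criterion": 20, "observed": 18},
    "Writes own name independently": {"criterion": 1, "observed": 1},
    "Counts objects up to 10": {"criterion": 10, "observed": 6},
}

mastered = [name for name, data in iep_objectives.items()
            if data["observed"] >= data["criterion"]]

progress = len(mastered) / len(iep_objectives)
print(f"Objectives mastered: {len(mastered)} of {len(iep_objectives)} ({progress:.0%})")

# Assumed decision rule for the example: revise the IEP if fewer than half
# of the objectives show mastery; otherwise continue with the current plan.
if progress < 0.5:
    print("Decision: review and revise the IEP")
else:
    print("Decision: continue with the current plan")
```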

Evaluation
Evaluation is a systematic process of collecting, analyzing, and interpreting information to
make judgments or decisions about a subject, such as a program, person, product, or policy.
The purpose of evaluation is to determine the value, effectiveness, or significance of the
subject being evaluated. Here are some key points to understand about evaluation:

Evaluation is perhaps the most complex and least understood of the terms. Inherent in the
idea of evaluation is "value." When we evaluate, what we are doing is engaging in some
process that is designed to provide information that will help us make a judgment about a
given situation.
The term "evaluation" originated in 1755, meaning "action of appraising or valuing."
It is a technique by which we come to know to what extent the objectives are being achieved.
It is a decision-making process that assists in grading and ranking.
Tyler, Ralph W. (1950): "Evaluation is the process of determining to what extent the
educational objectives are actually being realized."


Stufflebeam, Daniel L. (1971): "Evaluation is the process of delineating, obtaining, and providing useful information for judging decision alternatives."

Worthen, Blaine R. and Sanders, James R. (1987): "Evaluation is the formal determination
of the quality, effectiveness, or value of a program, product, project, process, objective, or
curriculum."

Patton, Michael Quinn (1997): "Evaluation is the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future programming."

Fitzpatrick, Sanders, and Worthen (2004): "Evaluation is a systematic process to determine merit, worth, value, or significance of something using criteria against a set of standards."

Rossi, Peter H., Lipsey, Mark W., and Freeman, Howard E. (2004): "Evaluation is the
systematic application of social research procedures for assessing the conceptualization,
design, implementation, and utility of social intervention programs."

Joint Committee on Standards for Educational Evaluation (2011): "Evaluation is the systematic assessment of the worth or merit of an object. It involves collecting and analyzing data to determine the effectiveness and efficiency of programs, policies, or products."

Key Components of Evaluation

1. Purpose: Evaluation aims to assess the effectiveness, efficiency, and impact of a particular subject. It helps in making informed decisions, improving outcomes, and ensuring accountability.
2. Criteria: Evaluation involves setting specific criteria or standards against which the
subject will be judged. These criteria can be based on goals, objectives, or
benchmarks.
3. Data Collection: Evaluation requires the collection of data or evidence. This can be
done through various methods such as observations, surveys, interviews, tests, or
performance metrics.
4. Analysis: Once the data is collected, it is analyzed to determine the extent to which
the subject meets the established criteria. This may involve statistical analysis,
qualitative analysis, or a combination of both.
5. Judgment: Based on the analysis, evaluators make judgments about the subject. This
can include identifying strengths and weaknesses, determining effectiveness, or
making recommendations for improvement.
6. Feedback: Evaluation results are communicated to stakeholders, such as program
managers, educators, or policymakers. This feedback can be used to make decisions,
implement changes, or improve future performance.
Types of Evaluation

1. Formative Evaluation: Conducted during the development or implementation of a program or process. Its purpose is to provide feedback and make adjustments to improve effectiveness.
2. Summative Evaluation: Conducted after a program or process is completed. Its
purpose is to assess the overall impact and effectiveness of the program.
3. Diagnostic Evaluation: Aimed at identifying specific problems or areas for
improvement. It helps in understanding the underlying causes of issues.
4. Process Evaluation: Focuses on the implementation of a program or process. It
examines how activities are carried out and whether they are aligned with the
intended plan.
5. Outcome Evaluation: Looks at the results or outcomes of a program or process. It
measures the extent to which the desired goals and objectives have been achieved.

Importance of Evaluation

 Improvement: Evaluation helps in identifying areas for improvement and making necessary changes to enhance effectiveness.
 Accountability: It provides evidence of the success or failure of a program, ensuring
that resources are used effectively and responsibly.
 Decision-Making: Evaluation provides data and insights that support informed
decision-making by stakeholders.
 Learning: Through evaluation, individuals and organizations can learn from their
experiences and apply these lessons to future activities.

Nature of Evaluation

 Systematic Process: Evaluation follows a structured and organized approach.


 Continuous and Dynamic: It’s an ongoing process that changes as needed.
 Identifies Strengths and Weaknesses: Evaluation helps to find out what is working
well and what needs improvement in a program.
 Uses Various Tests and Techniques: Different methods and tools are used to gather
information.
 Focuses on Major Objectives: The main goals of an educational program are the
focus of the evaluation.
 Based on Data: The results come from the information collected during testing.
 Involves Decision-Making: Evaluation helps in making informed decisions.

When we evaluate something, we gather information about its worth, suitability, quality,
correctness, or legality. This is done through reliable measurements or assessments to help us
make informed decisions.

Test
A test or testing is a method used to measure and evaluate a person’s knowledge, skills,
abilities, or other attributes.

Definition: A test is a tool or method designed to assess a person's performance, understanding, or abilities in a specific area.
Tests help determine how well someone knows a subject, how skilled they are at a task, or
how effectively they can apply certain abilities.

A test or exam is a way to check how much someone knows or can do. It can be given orally, on paper, on a computer, or through the performance of tasks, and its items may be questions, true/false statements, or practical tasks. The term "test" has been used since the 1590s to mean a trial or examination to check correctness. According to Barrow and McGee, a test is a tool or method used to obtain responses from students in order to judge their fitness, skills, knowledge, and values.

Barrow and McGee (1979): "A test is a specific tool or procedure or a technique used to
obtain responses from the students in order to gain information which provides the basis to
make judgment or evaluation regarding some characteristics such as fitness, skill, knowledge,
and values."

Thorndike and Thorndike-Christ (2010): "A test is a systematic procedure for comparing
an individual’s performance to that of others or to established standards."

Freeman (1999): "A test is a systematic tool designed to measure a person’s ability,
knowledge, or skills in a specific area."

Miller (2005): "A test is a device or method used to assess a person's proficiency in specific areas or skills, usually involving a series of tasks or questions."

Anastasi and Urbina (1997): "A test is a measurement device or procedure used to obtain a
sample of behavior from which to make inferences about an individual's abilities, traits, or
performance."

Types of Tests

1. Academic Tests: Measure knowledge in subjects like math, science, or literature. Examples include quizzes, exams, and standardized tests.
2. Skill Tests: Assess specific skills, such as typing speed, driving ability, or technical
skills. Examples include practical exams or job-related skill assessments.
3. Diagnostic Tests: Identify strengths and weaknesses in a person's abilities or
knowledge. They help in planning further instruction or intervention. For example, a
diagnostic test might reveal gaps in a student’s understanding of a subject.
4. Achievement Tests: Measure what a person has learned or accomplished in a
particular area. For instance, a final exam in a course evaluates how well a student has
learned the material throughout the course.
5. Psychological Tests: Assess mental processes and characteristics, such as personality
traits, intelligence, or cognitive abilities. Examples include IQ tests or personality
assessments.

How Tests Work

 Administering: Tests are given under controlled conditions to ensure fairness and
consistency.
 Scoring: The results are scored based on correct answers, performance standards, or
other criteria.
 Interpreting: The scores are analyzed to make decisions or provide feedback. For example, test scores might be used to decide whether a student passes a course or needs additional help (see the brief scoring sketch after this list).
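The scoring and interpreting steps above can be sketched in a few lines of Python. The answer key, the student responses, and the 60% pass mark are invented purely for illustration and do not come from any real test.

```python
# Hypothetical illustration of scoring and interpreting a short objective test:
# responses are scored against an answer key, then judged against an assumed pass mark.

answer_key = ["B", "D", "A", "C", "B"]          # assumed correct answers
student_responses = ["B", "D", "C", "C", "B"]   # one student's answers

raw_score = sum(1 for given, correct in zip(student_responses, answer_key)
                if given == correct)
percentage = 100 * raw_score / len(answer_key)

PASS_MARK = 60  # assumed criterion for this example only

print(f"Raw score: {raw_score}/{len(answer_key)} ({percentage:.0f}%)")
print("Interpretation:", "pass" if percentage >= PASS_MARK else "needs additional help")
```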

Importance of Testing

 Evaluation: Tests provide a way to measure and evaluate performance and abilities.
 Feedback: They offer feedback to both individuals and educators about strengths and
areas needing improvement.
 Decision-Making: Test results can guide decisions, such as placement in a course, job
suitability, or educational needs.

Nature of Test

 Reliable: It consistently gives the same results under the same conditions.
 Valid: It measures what it is supposed to measure.
 Objective: It does not depend on the tester’s opinion.
 Norm-Referenced: It compares results to a standard or norm.
 Affordable: It should not be too expensive.
 Time-Efficient: It should not take too long to complete.
 Effective: It should produce useful results and be implemented correctly.
 Feasible: It should be practical to administer.
 Educationally Valuable: It should have a purpose in learning.

A test is a tool, question, or set of questions used to measure a person’s abilities, knowledge,
performance, or achievements. Tests can be more or less strict. For example, in a closed book
test, you rely on memory, while in an open book test, you can use references like a book or
calculator.

Tests can be formal or informal. An informal test might be a reading quiz given by a parent,
while a formal test could be a final exam given by a teacher or an IQ test given by a
psychologist. Formal tests usually result in grades or scores and may be based on a large
number of participants or statistical analysis.

Examination
The term "examination" originated in the 1610s, meaning "test of knowledge."
Exams and tests are a useful way to assess what students have learned in particular subjects. They show which parts of the lesson each student has taken the most interest in and has remembered.

an "exam," is a formal assessment used to evaluate a person's knowledge, skills, abilities, or


other attributes in a specific subject or field.

What is an Examination – The main goal of an examination is to measure how well someone understands or can apply what they have learned. It helps in assessing their level of proficiency or achievement in a particular area.

Scriven, Michael (1967): "An examination is a process of assessing the extent to which
students have achieved the educational objectives of a course or program."
Tyler, Ralph W. (1950): "An examination is a tool used to determine how well students
have achieved the objectives of an educational program, through a systematic and structured
assessment."

Bloom, Benjamin S. (1956): "An examination is a systematic procedure for measuring and
evaluating the extent to which a learner has acquired the desired knowledge and skills."

Freeman, Richard K. (1999): "An examination is a formal assessment designed to evaluate a person's understanding and ability in a specific area through structured questions and tasks."

Anastasi, Anne, and Urbina, Susana (1997) "An examination is a measurement tool used
to evaluate the knowledge, skills, or abilities of individuals, often resulting in scores or
grades."

Nunnally, Jum C. (1978): "An examination is a method of assessing the extent of knowledge or competence that an individual has acquired, typically involving standardized procedures and scoring."

McMillan, James H. (2001): "An examination is a structured evaluation used to measure students' learning outcomes and to assess their understanding of course content."

Types:

o Written Exams: Include multiple-choice questions, essays, or short answers.


o Oral Exams: Involve answering questions spoken by an examiner.
o Practical Exams: Assess skills through hands-on tasks or demonstrations.
o Standardized Exams: Have consistent procedures and scoring methods, often
used for large-scale assessments.

Examples: Examples of examinations include school finals, professional certification tests, and college entrance exams.

Measurement

Measurement is the process of determining the size, quantity, amount, or degree of something
using a specific method or tool.

Measurement is the act of finding out how much or how many of something there is. It
involves comparing an unknown quantity to a standard unit of measure.

The purpose of measurement is to obtain accurate and consistent information about different
attributes, such as length, weight, volume, time, or performance.

Measurement can also include things like attitudes or preferences. However, when we
measure, we usually use a standard tool to find out how big, tall, heavy, or how hot or cold
something is. Measurement involves collecting information in numbers and recording
performance or details needed to make a judgment.
Stevens, S. S. (1946): "Measurement is the process of assigning numbers to objects or events
according to some rule or system, typically to describe the attributes of those objects or
events."

Nunnally, Jum C. (1978): "Measurement is the process of determining the extent of an individual's characteristics or attributes by comparing them to a standard or reference."

Anastasi, Anne (1988): "Measurement is the process of quantifying the attributes of an individual or object by using specific instruments or tools to gather numerical data."

Gronlund, Norman E. (1981): "Measurement refers to the process of collecting numerical data to assess the characteristics or performance of individuals or objects."

Cronbach, Lee J. (1971): "Measurement is the systematic process of assigning numbers to objects or events according to rules that allow for comparisons and judgments."

According to R.N. Patel: Measurement is an act or process that involves the assignment of
numerical values to whatever is being tested. So it involves the quantity of something.

Nature of Measurement
· It should be quantitative in nature
· It must be precise and accurate (instrument)
· It must be reliable
· It must be valid
· It must be objective in nature

Standard instruments are tools like rulers, scales, thermometers, and pressure gauges used for
measuring things. We use these instruments to get information about what we are measuring.
How useful this information is depends on how accurate the tools are and how well we use
them.

In everyday life, we measure things like the size of a classroom in square feet, the temperature of a room with a thermometer, or the voltage and resistance in a circuit with a multimeter. These measurements give us data based on established rules or standards, but they do not by themselves assess or judge anything.

Assessment is different from measurement. While measurement gives us straightforward data, assessment involves evaluating and making judgments based on that data for different purposes.
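The distinction can be shown with a small, purely hypothetical sketch: measurement produces a number according to a fixed rule, and assessment then interprets that number against a benchmark. The reading-rate rule and the benchmark of 60 words per minute are assumptions made up for this example only.

```python
# Hypothetical contrast between measurement and assessment.

# Measurement: apply a fixed rule to obtain a number
# (here, words read correctly per minute).
words_read_correctly = 95
minutes = 2
reading_rate = words_read_correctly / minutes   # yields 47.5 wpm; no judgment attached

# Assessment: interpret the measured value against an assumed benchmark
# to reach a judgment about the student's need for support.
BENCHMARK_WPM = 60  # assumed grade-level benchmark for this example only
judgment = ("may need additional reading support"
            if reading_rate < BENCHMARK_WPM
            else "on track")

print(f"Measured reading rate: {reading_rate:.1f} words per minute")
print(f"Assessment judgment: {judgment}")
```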
Basic comparison of screening, assessment, evaluation, testing, and measurement-

Concept | Purpose | Scope | Characteristics | Examples
Screening | To identify individuals who may need further investigation or support | Broad and general; an initial check | Quick and efficient; provides an initial overview; not diagnostic | Health screenings, educational screening tests
Assessment | To gather detailed information about an individual's abilities, strengths, and weaknesses | Detailed and comprehensive; involves multiple methods | Uses various tools and techniques; provides a thorough understanding; informative | Educational assessments, psychological evaluations
Evaluation | To assess the effectiveness or quality of a program, process, or performance | Focused on effectiveness or quality; compares outcomes to criteria | Judgment-based; outcome-focused; provides feedback for improvement | Program evaluations, performance reviews
Testing | To measure specific skills, knowledge, or attributes | Focused on specific aspects of performance or knowledge | Structured and formal; provides quantitative data; specific focus | Academic tests, skill proficiency tests
Measurement | To determine the size, amount, or degree of something using standardized tools | Precise and quantitative; involves physical attributes or performance metrics | Quantitative; uses standardized tools; precise and accurate | Measuring length with a ruler, weight with a scale, temperature with a thermometer

This table provides a clear overview of each concept’s purpose, scope, characteristics, and
examples.

1.2 Assessment for diagnosis and certification – intellectual assessment, achievement, aptitude and other psychological assessments.

Assessment is the process of gathering information to make decisions. It helps us understand the starting point for intervention, while evaluation checks how well the intervention worked.

In clinical practice, both assessment and evaluation are important. Here’s what assessment
aims to do:
a. Identify the Condition: Determine if a condition exists based on specific
criteria and decide if mental health services are needed.
b. Treat Risk Factors: Find and address the causes and risk factors related to
intellectual disabilities.
c. Design a Plan: Identify needs related to the condition and create a plan to
reduce its impact.
d. Match Needs with Interventions: Find the best methods to address the
condition based on its nature and needs.
e. Evaluate Intervention Effectiveness: Check if the intervention is working
and how effective it is.

Psychological Assessments or Psychological Tests are tools used to evaluate a person's behavior through verbal or written tests. These tests help understand various aspects of
human behavior, such as why some people excel in certain areas while others do not.
However, since humans are complex and unique, there are criticisms of psychological testing
due to its subjective nature and individual differences.

The classification of the types of psychological tests is as follows:

As per the nature of the psychological test, i.e., standardized versus non-standardized (informal) methods of testing.
As per the function of the psychological test, such as intelligence tests, personality tests, interest inventories, aptitude tests, etc.

Intellectual assessment - Intellectual assessment has changed a lot over the years. In the early
20th century, these assessments focused on language and speech patterns or sensory skills,
mainly for those with mental disabilities. Over time, assessments evolved into standardized
tests used in the 1990s that measure more complex cognitive skills for all levels of
intelligence.

The Wechsler scales, which are commonly used, were developed more from clinical practice
than from theory. In recent years, new tests are increasingly based on theories from
psychometrics and neurology, rather than just clinical experience. Despite this shift, the
Wechsler tests remain dominant because of extensive research and familiarity among
clinicians.

Psychologists are very comfortable using the Wechsler scales, often knowing them well
enough to administer them without needing the manual. However, the field is gradually
changing, with computer technology likely to play a bigger role in the future. Already,
computer-based scoring and reporting systems are in use, and more advanced technology is
expected to further transform intelligence testing by 2020.

Here are the major types of intelligence tests:

1. Wechsler Scales - These tests assess a range of cognitive abilities through both verbal and
performance-based tasks. They provide scores in different areas such as verbal
comprehension, perceptual reasoning, working memory, and processing speed.
Purpose: To offer a comprehensive evaluation of intellectual ability and cognitive function.
Examples: WAIS (for adults), WISC (for children).
2. Stanford-Binet Intelligence Scales- This test measures five key cognitive factors through
various tasks, including fluid reasoning (problem-solving), knowledge, quantitative reasoning
(mathematical skills), visual-spatial processing, and working memory.
Purpose: To evaluate overall intelligence and cognitive abilities across different age groups.
3. Raven's Progressive Matrices- This non-verbal test involves solving patterns and visual
puzzles to assess abstract reasoning and problem-solving skills.
Purpose: To measure general cognitive ability, especially in populations where language and
cultural biases might affect performance.
Examples: Raven's Standard Progressive Matrices, Raven's Colored Progressive Matrices.
4. Kaufman Assessment Battery for Children (KABC)- This test includes subtests that
measure various cognitive abilities such as sequential processing, simultaneous processing,
and planning skills, using both verbal and non-verbal tasks.
Purpose: To assess cognitive development and learning abilities in children, and to identify
specific learning needs.
Example: Kaufman Assessment Battery for Children (KABC-II).
5. Woodcock-Johnson Tests of Cognitive Abilities-This test provides a broad evaluation of
cognitive abilities through various subtests, measuring areas like general intellectual ability,
cognitive efficiency, and academic achievement.
Purpose: To assess a wide range of cognitive functions and academic skills, and to identify
learning disabilities.
Example: Woodcock-Johnson III Tests of Cognitive Abilities.
6. Differential Ability Scales (DAS)- This test evaluates cognitive abilities, including verbal
and non-verbal reasoning, through a series of tasks designed to measure general intelligence
and specific abilities.
Purpose: To measure cognitive development and identify learning disabilities in children.
Example: Differential Ability Scales (DAS-II).
7. Cattell Culture Fair Intelligence Test (CFIT)- Focuses on assessing cognitive abilities
through non-verbal tasks, aiming to minimize cultural and language biases.
Purpose: To evaluate general intelligence while reducing the influence of cultural
differences.
Example: Cattell Culture Fair Intelligence Test (CFIT).
These intelligence tests each serve specific purposes and are used to assess different aspects
of cognitive functioning, helping in diagnosis, educational placement, and understanding
individual abilities.

8. Bhatia Battery- The Bhatia Battery is a cognitive assessment tool developed to evaluate
various aspects of intelligence and cognitive function. It is designed to assess mental abilities
across different domains, providing a comprehensive profile of an individual's cognitive
strengths and weaknesses.

Purpose: To measure multiple dimensions of intelligence and cognitive abilities, including problem-solving skills, memory, and spatial reasoning. It is used for educational and clinical
purposes to understand cognitive development and potential learning issues.

Example: The Bhatia Battery might be used to assess spatial reasoning abilities in children,
providing insights into their ability to visualize and manipulate objects.

9. MISIC (Malin's Intelligence Scale for Indian Children)- The Malin's Intelligence
Scale for Indian Children (MISIC) is a cognitive assessment tool specifically designed to
measure the intellectual abilities of children in India. It provides a comprehensive evaluation
of various cognitive functions.

Purpose: To assess different aspects of intelligence in Indian children, including verbal and
non-verbal reasoning, and to identify intellectual strengths and weaknesses. This helps in
understanding a child's cognitive development and planning appropriate educational
interventions.

Example: The MISIC might include tasks to assess a child's ability to understand and use
language, solve problems, and recognize patterns.

10. SFBT (Seguin Form Board Test) : The Seguin Form Board Test (SFBT) is a test
designed to evaluate visual-motor integration skills and spatial reasoning. It is used to assess
a child’s ability to perceive and manipulate shapes and forms.

Purpose: To measure visual-motor coordination, spatial abilities, and cognitive function related to the manipulation of shapes and objects. It helps in identifying developmental delays
and difficulties in spatial reasoning.
Example: The Seguin Form Board Test might require a child to fit geometric shapes into
matching spaces on a board, evaluating their spatial reasoning and motor coordination.

These assessments help in understanding cognitive and developmental abilities, guiding educational and therapeutic interventions.

Achievement tests are used to measure how much someone knows in specific academic
areas. These tests show how well a person has learned and understood the material over time.
They help to see what skills or knowledge a person has mastered and how well they perform
tasks related to that knowledge.

 Purpose: Achievement tests check how much a student has learned in subjects like
math, science, or language. They show whether a student has mastered the material
and is ready to move on to more advanced topics.
 How it Works: The test looks at what the person has learned by reviewing their
current performance. It measures their understanding and ability to use that
knowledge accurately and quickly.
 Use in Schools: Schools use achievement tests to see if students are doing well
enough to move to the next grade. For example, if a student scores high, it means they
have mastered the content and can advance. If the score is low, it might mean they
need to improve or repeat the subject.
 Action Plan: Based on the test results, students can create a study plan to improve. A
high score might lead to more challenging courses, while a low score might indicate
areas needing more focus.
 In Education and Work: Achievement tests are useful in schools and job settings to
evaluate performance and readiness for new challenges.

Overall, achievement tests help in understanding a person’s current level of knowledge and
skills, guiding them to either advance or improve as needed.

Here are the common types of standardized achievement tests, with their descriptions, authors/organizations, and the psychologists or developers associated with each test:

Iowa Assessments
Description: A comprehensive test covering subjects like reading, language arts, mathematics, science, and social studies. It is used for grades K-12 and assesses students' academic progress and proficiency.
Author/Organization: University of Iowa, College of Education
Psychologist(s): E.F. Lindquist

Stanford Achievement Test (SAT)
Description: A widely used test that assesses students in reading, mathematics, language, spelling, listening, science, and social science. Administered to students in grades K-12 to evaluate their academic skills and content knowledge.
Author/Organization: The Psychological Corporation (now Pearson Education)
Psychologist(s): Truman L. Kelley, Lewis M. Terman

California Achievement Test (CAT)
Description: Tests students' abilities in areas such as reading, math, language, spelling, and study skills. It is often used in private schools and homeschools to provide diagnostic information and guide instruction.
Author/Organization: McGraw-Hill Education
Psychologist(s): Richard Madden, Stuart V. Stevens

Measures of Academic Progress (MAP)
Description: An adaptive test that adjusts its difficulty based on the test-taker's responses. It provides a personalized assessment of a student's academic abilities and tracks student growth over time in areas such as math, reading, and language usage.
Author/Organization: Northwest Evaluation Association (NWEA)
Psychologist(s): Allen J. Qualls

Woodcock-Johnson Tests of Achievement
Description: A battery of tests that measure academic achievement and cognitive abilities. Suitable for all age groups, these tests are used for educational diagnosis, planning, and to assess a range of skills including reading, writing, and mathematics.
Author/Organization: Richard W. Woodcock and Mary E. Bonner Johnson
Psychologist(s): Richard W. Woodcock, Kevin S. McGrew

SAT (Scholastic Assessment Test)
Description: A standardized test widely used for college admissions in the United States. The SAT assesses students' readiness for college and provides colleges with a common data point for comparing applicants. It covers evidence-based reading, writing, and math.
Author/Organization: College Board and Educational Testing Service (ETS)
Psychologist(s): Carl Brigham

Wechsler Individual Achievement Test (WIAT)
Description: Designed to assess the academic achievement of children and adults. The WIAT measures a range of skills including reading, writing, mathematics, and oral language. It is often used in educational settings for diagnosing learning disabilities and planning interventions.
Author/Organization: David Wechsler and Pearson Education
Psychologist(s): David Wechsler

Some Indian standardized achievement tests, with descriptions, authors, and associated psychologists or researchers:

Bhatia's Battery of Performance Tests of Intelligence
Description: A set of five tests designed to measure the intelligence of individuals aged 11 to 16 years and also of uneducated adults. It is often used to assess cognitive abilities in various educational and clinical settings. The test includes tasks like Koh's Block Design Test, Alexander Pass-Along Test, Pattern Drawing Test, Immediate Memory Test, and Picture Construction Test.
Author/Organization: C.M. Bhatia
Psychologist(s)/Researcher(s): C.M. Bhatia

Malin's Intelligence Scale for Indian Children (MISIC)
Description: An adaptation of the Wechsler Intelligence Scale for Children (WISC) specifically designed for Indian children. It measures the intelligence of children aged 6 to 15 years and includes verbal and performance scales to evaluate cognitive abilities in different domains such as comprehension, arithmetic, vocabulary, and digit span.
Author/Organization: Arthur J. Malin (based on David Wechsler's WISC)
Psychologist(s)/Researcher(s): Arthur J. Malin, David Wechsler

Seguin Form Board Test (SFBT)
Description: A non-verbal performance test used to assess the cognitive and motor abilities of children and adults with intellectual disabilities or developmental delays. The test requires the subject to fit different shaped blocks into corresponding slots on a board, measuring factors like problem-solving ability, motor coordination, and visual-spatial skills.
Author/Organization: Adapted and used in Indian contexts by various psychologists
Psychologist(s)/Researcher(s): Edward Seguin (original test), various Indian psychologists

Raven's Progressive Matrices (Indian Adaptation)
Description: A non-verbal group test used to measure abstract reasoning; it is considered a good estimate of general intelligence. The Indian adaptation of Raven's Progressive Matrices takes into account cultural and linguistic differences in India, providing a more accurate assessment for Indian populations.
Author/Organization: John C. Raven (original test), adapted by Indian researchers
Psychologist(s)/Researcher(s): John C. Raven, Indian educational psychologists

NCERT National Achievement Survey (NAS)
Description: A comprehensive survey conducted by the National Council of Educational Research and Training (NCERT) to assess the learning levels of students in different states across India. It evaluates the academic performance of students in subjects like mathematics, language, and environmental studies, providing data for policy-making and educational planning.
Author/Organization: NCERT (National Council of Educational Research and Training)
Psychologist(s)/Researcher(s): NCERT researchers and educational experts

Group Test of General Mental Ability
Description: Designed to measure the general mental ability of students in Indian schools, this test assesses various cognitive skills such as reasoning, comprehension, and problem-solving. It is used for educational and vocational guidance and helps identify students who need special educational support.
Author/Organization: D.N. Srivastava
Psychologist(s)/Researcher(s): D.N. Srivastava, Indian educational psychologists

Jalota's General Mental Ability Test
Description: A test developed to measure the general mental ability of high school and college students in India. The test includes verbal and non-verbal sections, evaluating areas such as reasoning, numerical ability, and comprehension. It is commonly used in academic and vocational settings for assessment and guidance purposes.
Author/Organization: S.S. Jalota
Psychologist(s)/Researcher(s): S.S. Jalota

Chhaya's Verbal and Non-Verbal Intelligence Test
Description: A test developed to assess the intelligence of individuals aged 6 to 16 years. It includes both verbal and non-verbal components, measuring skills like vocabulary, comprehension, numerical ability, and pattern recognition. The test is widely used in schools and clinical settings in India to evaluate cognitive abilities and guide educational interventions.
Author/Organization: C.M. Chhaya
Psychologist(s)/Researcher(s): C.M. Chhaya

Kundu's Verbal Test of Creative Thinking
Description: An Indian test designed to measure creative thinking abilities in children and adults. The test includes tasks that assess divergent thinking, originality, and fluency in idea generation. It is used in educational and psychological assessments to identify individuals with high creative potential and to support the development of creative skills in different settings.
Author/Organization: K.K. Kundu
Psychologist(s)/Researcher(s): K.K. Kundu

Nehra's Creativity Test (NCT)
Description: A test developed to measure the creative thinking abilities of students in Indian schools. The NCT assesses various aspects of creativity, such as fluency, flexibility, originality, and elaboration, through tasks that require verbal and non-verbal responses. It is used to identify and nurture creative talent in educational settings, helping teachers and parents support the development of creative skills in students.
Author/Organization: Dr. R.S. Nehra
Psychologist(s)/Researcher(s): Dr. R.S. Nehra

Aptitude test - An aptitude test is a type of exam used to measure a person's ability to learn
or perform specific skills, such as intellectual or motor skills, through future training. These
tests are based on the idea that people have different special abilities, which can help predict
their success in future activities.

An aptitude test is a type of standardized test designed to assess an individual's ability or potential to learn and perform in a particular area or skill set. These tests aim to evaluate how well a person can acquire, with training, the competencies needed for specific types of activities or professions.

Characteristics of Aptitude Tests

1. Predictive Ability: Aptitude tests are used to predict future performance and success
in a specific field or job.
2. Skills and Talents: They measure specific abilities such as verbal reasoning,
numerical ability, abstract reasoning, spatial awareness, and mechanical aptitude.
3. Objective Measurement: Aptitude tests provide an objective way to compare an
individual's abilities with others or with a set standard.
4. Career Guidance: They are commonly used in educational and career settings to help
individuals identify their strengths and potential career paths.
5. Variety of Formats: Aptitude tests can be administered in various formats, including
multiple-choice, performance tasks, or computer-based simulations.
Types of Aptitude Tests

 General Aptitude Tests: These assess a broad range of skills to provide a general
overview of an individual’s capabilities. Examples include the SAT, ACT, and DAT.
 Specific Aptitude Tests: These focus on measuring aptitude for a particular
profession or skill, such as the Law School Admission Test (LSAT) for aspiring
lawyers or the Medical College Admission Test (MCAT) for future doctors.
 Vocational Aptitude Tests: Designed to help individuals understand their strengths
in different occupational areas, such as the Armed Services Vocational Aptitude
Battery (ASVAB).

General Aptitude Tests

 General aptitude tests are similar to intelligence tests because they evaluate a wide
range of skills.
 These skills include verbal comprehension, reasoning, numerical operations,
perceptual speed, and mechanical knowledge.
 Examples of general aptitude tests in the United States are the Scholastic Assessment
Test (SAT) and the American College Testing Exam (ACT).

Professional and Special Ability Tests

 Aptitude tests can also measure potential in specific fields, such as law or medicine,
and assess special abilities like clerical speed or mechanical reasoning.
 The Differential Aptitude Test (DAT) is an example that measures specific abilities
like clerical speed, mechanical reasoning, and general academic skills.

Common Aptitude Tests

People take many types of aptitude tests throughout their personal and professional lives,
starting from school age.

 Fighter Pilot Aptitude Test: Assesses a person's suitability to become a fighter pilot.
 Air Traffic Controller Career Test: Evaluates a person's potential to work as an air
traffic controller.
 High School Career Aptitude Test: Helps students decide which career paths to
pursue.
 Computer Programming Test: Tests a job candidate's ability to solve hypothetical
problems in programming.
 Physical Ability Test: Measures the physical capabilities needed for certain jobs, like
those of a police officer or firefighter.
Other Psychological Tests and Assessments

Category | Tests
Personality Tests | 16 Personality Factor Questionnaire (16-PF), Basic Personality Inventory (BPI), Thematic Apperception Test (TAT), Rorschach Test
Achievement Tests | Kaufman Test of Educational Achievement (K-TEA), Wechsler Individual Achievement Test, Woodcock-Johnson Psychoeducational Battery (Achievement)
Attitude Tests | Likert Scale, Thurstone Scale, etc.
Aptitude Tests | Abstract Reasoning Test, Visual Reasoning Test, etc.
Emotional Intelligence Tests | Emotional & Social Competence Inventory, Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT), etc.
Intelligence Tests | Wechsler Individual Achievement Test, Wechsler Adult Intelligence Scale, Universal Nonverbal Intelligence Test
Neuropsychological Tests | Ammons Quick Test, Beck Depression Inventory, Beck Anxiety Inventory, Beck Hopelessness Scale
Projective Tests | Rorschach Inkblot Test, Thematic Apperception Test (TAT), Draw-A-Person Test, House-Tree-Person Test
Observation (Direct) Tests | Direct Observation

Other Psychological Tests and Assessments

1. Wechsler Adult Intelligence Scale (WAIS)


 Description: Measures adult intelligence across four domains.
 Psychologist: David Wechsler

2. Stanford-Binet Intelligence Scales


 Description: Assesses general intellectual ability.
 Psychologists: Lewis Terman, Alfred Binet

3. Raven's Progressive Matrices


 Description: Nonverbal test measuring abstract reasoning.
 Psychologist: John C. Raven

4. Minnesota Multiphasic Personality Inventory (MMPI)


 Description: Assesses personality traits and psychopathology.
 Psychologists: Starke R. Hathaway, J.C. McKinley

5. Thematic Apperception Test (TAT)


 Description: Projective test that reveals underlying motives, concerns, and the way a
person sees the social world.
 Psychologists: Henry A. Murray, Christiana D. Morgan

6. Rorschach Inkblot Test


 Description: Projective test to analyze personality characteristics and emotional
functioning.
 Psychologist: Hermann Rorschach
7. Beck Depression Inventory (BDI)
 Description: Measures the severity of depression.
 Psychologist: Aaron T. Beck

8. Beck Anxiety Inventory (BAI)


 Description: Assesses the severity of anxiety symptoms.
 Psychologist: Aaron T. Beck

9. Myers-Briggs Type Indicator (MBTI)


 Description: Personality test based on Jung's theory of psychological types.
 Psychologists: Isabel Briggs Myers, Katharine Cook Briggs
10. Millon Clinical Multiaxial Inventory (MCMI)
 Description: Assesses personality disorders and clinical syndromes.
 Psychologist: Theodore Millon
11. Cattell’s 16 Personality Factor Questionnaire (16PF)
 Description: Measures 16 primary personality traits.
 Psychologist: Raymond B. Cattell
12. Bender Visual-Motor Gestalt Test
 Description: Assesses visual-motor functioning and neurological impairments.
 Psychologist: Lauretta Bender
13. Wechsler Intelligence Scale for Children (WISC)
 Description: Measures the intellectual ability of children.
 Psychologist: David Wechsler
14. Kaufman Assessment Battery for Children (KABC)
 Description: Measures cognitive development in children.
 Psychologists: Alan S. Kaufman, Nadeen L. Kaufman
15. Peabody Picture Vocabulary Test (PPVT)
 Description: Assesses receptive vocabulary knowledge.
 Psychologists: Lloyd M. Dunn, Leota M. Dunn
16. Woodcock-Johnson Tests of Cognitive Abilities
 Description: Assesses cognitive abilities and academic skills.
 Psychologists: Richard W. Woodcock, Mary E. Bonner Johnson
17. California Psychological Inventory (CPI)
 Description: Measures interpersonal behavior and social interaction.
 Psychologist: Harrison G. Gough
18. Eysenck Personality Questionnaire (EPQ)
 Description: Assesses personality traits based on three dimensions: extraversion,
neuroticism, and psychoticism.
 Psychologist: Hans Eysenck
19. Big Five Inventory (BFI)
 Description: Assesses personality traits based on the Big Five model: openness,
conscientiousness, extraversion, agreeableness, and neuroticism.
 Psychologists: Oliver P. John, V. Benet-Martinez
20. Rotter’s Locus of Control Scale
 Description: Measures an individual's perception of control over events in their life.
 Psychologist: Julian B. Rotter
21. State-Trait Anxiety Inventory (STAI)
 Description: Measures anxiety as both a state and trait.
 Psychologists: Charles D. Spielberger, R.L. Gorsuch, R.E. Lushene
22. Children’s Apperception Test (CAT)
 Description: A projective test to assess children’s personality and emotional
functioning.
 Psychologists: Leopold Bellak, Sonya Sorel Bellak
23. Denver Developmental Screening Test (DDST)
 Description: Assesses developmental problems in young children.
 Psychologist: William K. Frankenburg
24. Child Behavior Checklist (CBCL)
 Description: Assesses behavioral and emotional problems in children.
 Psychologist: Thomas M. Achenbach
25. Vineland Adaptive Behavior Scales
 Description: Measures adaptive behavior in individuals from birth to adulthood.
 Psychologists: Edgar A. Doll, Sara Sparrow, David A. Balla
26. Revised NEO Personality Inventory (NEO-PI-R)
 Description: Assesses personality traits based on the Five-Factor Model.
 Psychologists: Paul T. Costa, Jr., Robert R. McCrae
27. Trail Making Test (TMT)
 Description: Assesses visual attention and task switching.
 Psychologist: Part of the Halstead-Reitan Neuropsychological Battery
28. Wisconsin Card Sorting Test (WCST)
 Description: Measures executive functions like flexibility in thinking.
 Psychologist: David A. Grant, Esta A. Berg
29. Brief Psychiatric Rating Scale (BPRS)
 Description: Measures psychiatric symptoms such as depression, anxiety,
hallucinations, and unusual behavior.
 Psychologists: John E. Overall, Donald R. Gorham
30. Beery-Buktenica Developmental Test of Visual-Motor Integration (Beery VMI)
 Description: Assesses visual-motor integration skills.
 Psychologists: Keith E. Beery, Norman A. Buktenica
31. Bayley Scales of Infant and Toddler Development
 Description: Assesses developmental functioning of infants and toddlers.
 Psychologist: Nancy Bayley
32. Stanford Hypnotic Susceptibility Scales
 Description: Measures a person's ability to be hypnotized.
 Psychologists: Andre M. Weitzenhoffer, Ernest R. Hilgard
33. Hamilton Rating Scale for Depression (HAM-D)
 Description: Assesses the severity of depression symptoms.
 Psychologist: Max Hamilton
34. Conners' Rating Scales
 Description: Measures behavioral issues and ADHD in children.
 Psychologist: C. Keith Conners
35. Children's Depression Inventory (CDI)
 Description: Assesses depressive symptoms in children and adolescents.
 Psychologist: Maria Kovacs
36. Reynolds Adolescent Depression Scale (RADS)
 Description: Assesses depressive symptoms in adolescents.
 Psychologist: William M. Reynolds
37. Brief Symptom Inventory (BSI)
 Description: Assesses psychological distress and symptoms.
 Psychologist: Leonard R. Derogatis
38. Hopkins Symptom Checklist (HSCL)
 Description: Measures symptoms of anxiety and depression.
 Psychologist: John E. Ware, Jr.
39. Piers-Harris Children's Self-Concept Scale
 Description: Measures self-concept in children and adolescents.
 Psychologist: Ellen V. Piers
40. Marital Satisfaction Inventory (MSI)
 Description: Assesses the quality and satisfaction within marriage.
 Psychologist: Douglas K. Snyder
41. Dyadic Adjustment Scale (DAS)
 Description: Measures the quality of adjustment in married or cohabiting couples.
 Psychologist: Graham B. Spanier
42. Tennessee Self-Concept Scale (TSCS)
 Description: Measures self-concept and identity.
 Psychologist: William H. Fitts
43. House-Tree-Person Test (HTP)
 Description: Projective test to measure aspects of a person’s personality.
 Psychologist: John N. Buck
44. Kaufman Brief Intelligence Test (KBIT)
 Description: Measures verbal and non-verbal intelligence.
 Psychologists: Alan S. Kaufman, Nadeen L. Kaufman
45. Revised Children's Manifest Anxiety Scale (RCMAS)
 Description: Assesses anxiety levels in children.
 Psychologist: Cecil R. Reynolds, Bert O. Richmond
46. Adult Attachment Interview (AAI)
 Description: Assesses adult attachment patterns.
 Psychologist: Mary Main
47. Wechsler Memory Scale (WMS)
 Description: Measures different memory functions in adults.
 Psychologist: David Wechsler
48. Cognitive Assessment System (CAS)
 Description: Assesses cognitive processing abilities.
 Psychologists: J.P. Das, Jack A. Naglieri, John R. Kirby
49. Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT)
 Description: Measures emotional intelligence.
 Psychologists: John D. Mayer, Peter Salovey, David R. Caruso
50. Multidimensional Personality Questionnaire (MPQ)
 Description: Assesses multiple dimensions of personality.
 Psychologist: Auke Tellegen

Indian Psychological Tests

1. Bhatia Battery of Performance Intelligence Tests


 Description: A non-verbal intelligence test battery designed to assess the intelligence
of individuals in India, particularly those who may not be literate.
 Psychologist: C.M. Bhatia
2. NIMHANS Neuropsychological Battery
 Description: A comprehensive battery developed for the assessment of cognitive
functions in the Indian population, especially for diagnosing neurological and
psychiatric disorders.
 Institution: National Institute of Mental Health and Neuro-Sciences (NIMHANS),
Bangalore
3. Raven's Standard Progressive Matrices (Indian Adaptation)
 Description: A culturally adapted version of Raven's test to assess non-verbal
abstract reasoning in the Indian population.
 Psychologist: John C. Raven (Adapted by various Indian psychologists)
4. Sinha's Anxiety Scale
 Description: A test designed to measure anxiety levels among the Indian population.
 Psychologist: Durganand Sinha
5. Anjali Mukherjee's Career Interest Record
 Description: Assesses career interests and helps in vocational guidance, specifically
tailored for Indian students.
 Psychologist: Anjali Mukherjee
6. PGI Memory Scale
 Description: A test to measure different aspects of memory functioning in the Indian
population.
 Psychologists: Dwarka Pershad, N.N. Wig (Postgraduate Institute of Medical
Education and Research, Chandigarh)
7. Ahluwalia's Comprehensive Personality Test
 Description: Measures various personality traits, tailored to suit the Indian cultural
context.
 Psychologist: Manju Ahluwalia
8. Malin's Intelligence Scale for Indian Children (MISIC)
 Description: An Indian adaptation of the Wechsler Intelligence Scale for Children
(WISC), assessing the intellectual abilities of Indian children.
 Psychologist: A.J. Malin
9. Gujarat Test of Intelligence
 Description: A group intelligence test developed for school children in the state of
Gujarat.
 Psychologists: Developed by educational institutions in Gujarat
10. Verma's Sociability Scale
 Description: Measures sociability and social behavior among Indian students.
 Psychologist: N.P. Verma
11. Shamshad-Jasbir Old Age Adjustment Inventory
 Description: Assesses the adjustment levels of elderly individuals in India across
various domains like health, social, and emotional aspects.
 Psychologists: Shamshad Hussain, Jasbir Kaur
12. Bhatnagar-Gayen Personality Inventory
 Description: A tool to measure personality traits, especially focusing on Indian
cultural contexts.
 Psychologists: R.P. Bhatnagar, K.K. Gayen
13. Jalota's Group Test of General Mental Ability
 Description: A widely used group intelligence test in Indian educational settings.
 Psychologist: S.S. Jalota
14. Indian Child Intelligence Test (ICIT)
 Description: Measures the intelligence of Indian children, considering cultural and
linguistic diversity.
 Psychologist: Developed by various Indian educational institutions
15. Adjustment Inventory for School Students (AISS)
 Description: Assesses the adjustment of school students in various areas such as
emotional, social, and educational domains.
 Psychologist: A.K.P. Sinha, R.P. Singh
16. Deo-Mohan Achievement Motivation Scale
 Description: Measures achievement motivation among Indian students.
 Psychologists: Pratibha Deo, Asha Mohan
17. Rao’s Social Maturity Scale
 Description: Assesses social maturity in children and adolescents in the Indian
context.
 Psychologist: Nalini Rao
18. Indian Adaptation of the Draw-A-Man Test
 Description: Measures cognitive and intellectual development in children through
their drawings.
 Psychologist: Adapted by various Indian psychologists
19. Bhargava's Depression Inventory
 Description: Measures the level of depression in individuals, tailored to the Indian
population.
 Psychologist: Mahesh Bhargava
20. BIS (Bureau of Indian Standards) Intelligence Test
 Description: A test developed for assessing the intelligence of Indian school children,
incorporating Indian cultural nuances.
 Institution: Bureau of Indian Standards

1.3 Developmental assessment and educational assessment – entry level, formative and
summative assessments.

Developmental Assessment- Developmental assessment is a comprehensive evaluation of a
child's physical, emotional, social, and cognitive development. It is used to determine
whether a child is meeting typical developmental milestones or if there are delays or
variations that may require further evaluation or intervention.

Purpose: The main goals of developmental assessment are:

 To identify children who may be at risk for developmental delays or disabilities.


 To provide early intervention services to help children reach their full potential.
 To guide parents and caregivers on how to support their child's development.

Areas Assessed:

1. Motor Skills: Gross motor skills (such as crawling, walking) and fine motor skills
(like holding a pencil).
2. Language and Communication: Understanding and use of language, including
vocabulary, sentence structure, and ability to communicate effectively.
3. Cognitive Skills: Problem-solving, memory, understanding of concepts, and the
ability to learn new information.
4. Social and Emotional Development: Ability to interact with others, form
relationships, regulate emotions, and show empathy.
Educational Assessment- Educational assessment refers to the systematic process of
documenting and using empirical data to measure knowledge, skills, attitudes, and beliefs. It
is focused on evaluating a student's learning, performance, and educational needs.

Purpose: The primary objectives of educational assessment are:

 To determine a student’s academic progress and achievement.


 To inform instruction and improve teaching strategies.
 To identify students who need additional support or advanced instruction.
 To provide feedback to students, parents, and educators about learning outcomes.

Entry-Level Assessment- Entry-level assessment refers to a set of evaluations designed to
measure the readiness and skill level of students, employees, or candidates entering a new
academic program, job, or position. These assessments aim to determine the current
capabilities of individuals, identify gaps in knowledge or skills, and help in planning
appropriate educational or training pathways.

Assessing the academic skills of new students is crucial in higher education, both nationally
and internationally. It's important to know if students are ready to handle the reading and
writing tasks they'll face in their studies. This readiness is what we mean by "academic
literacy." Academic literacy means that new students should have a basic understanding or
ability to learn how to:

 Read and understand complex texts.


 Pay attention to the structure and organization of the material they read.
 Be active and critical readers.
 Write responses to academic tasks that are well-organized, clear, and precise.

Purpose of Entry-Level Assessment

 Placement: To place students or candidates in the appropriate level or class based on
their existing skills and knowledge (see the sketch after this list).
 Readiness Check: To assess whether individuals are prepared to meet the demands of
a new educational program or job role.
 Identifying Strengths and Weaknesses: To highlight areas where individuals excel
and where they may need additional support or training.
 Customized Learning or Training: To create personalized learning or training plans
that cater to the specific needs of the individual.
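
To make the placement purpose above concrete, here is a minimal sketch, in Python, of how an
entry-level test score might be mapped to a course level. The cut-off scores (75 and 50 out of
100) and the section names are illustrative assumptions for this example, not part of any
standard placement scheme.

```python
# A minimal placement sketch: map an entry-level score (out of 100) to a level.
# The cut-off scores and section names below are illustrative assumptions.

def place_student(score: int) -> str:
    """Return a hypothetical course placement for an entry-level test score."""
    if score >= 75:
        return "Advanced section"
    elif score >= 50:
        return "Regular section"
    else:
        return "Foundation/support section"

# Example: three hypothetical candidates and their placements
for name, score in [("Asha", 82), ("Ravi", 61), ("Meena", 38)]:
    print(f"{name}: {score} -> {place_student(score)}")
```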

Formative Assessment

Formative assessment is a range of informal and formal methods used by teachers to evaluate
students' comprehension, learning needs, and academic progress during a lesson, unit, or
course. Unlike summative assessments, which evaluate student learning at the end of an
instructional period, formative assessments are integrated into the learning process and are
used to adapt teaching strategies and improve student understanding on an ongoing basis.
Formative assessment provides feedback and information during the learning process. It
measures how students are progressing and also helps teachers understand how well they are
teaching. For example, teachers can use formative assessments to decide whether to keep or
change an activity based on students' responses. The main goal of formative assessment is to
identify areas where students need improvement. These assessments usually aren't graded and
are used to measure students' learning progress and teaching effectiveness.

For instance, at the end of the third week of a semester, a teacher might informally ask
students questions that could be on a future exam. This helps the teacher see if the students
understand the material. A fun and effective way to check students' understanding is by using
clickers, which are interactive devices. For example, if many students get a question wrong or
are confused about something, the teacher can revisit that topic or present it differently to
make it clearer. This type of formative assessment allows teachers to adjust their teaching to
ensure students are on the right track. It's good practice to use this type of assessment to
check students' knowledge before giving a big exam.
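
As a rough illustration of this kind of quick check, the sketch below (in Python) tallies
clicker-style responses for each question and flags topics where fewer than 70% of students
answered correctly. The 70% re-teach threshold and the response data are assumptions chosen
only for the example; a teacher would set their own cut-off.

```python
# A minimal formative-check sketch: each item is recorded as 1 (correct) or
# 0 (incorrect) per student. Items below the threshold are flagged for re-teaching.

responses = {
    "Q1": [1, 1, 1, 0, 1, 1, 1, 1],   # most students answered correctly
    "Q2": [0, 1, 0, 0, 1, 0, 1, 0],   # many students answered incorrectly
    "Q3": [1, 1, 0, 1, 1, 1, 0, 1],
}

RETEACH_THRESHOLD = 0.70  # assumed cut-off: revisit a topic below 70% correct

for item, answers in responses.items():
    pct_correct = sum(answers) / len(answers)
    action = "re-teach / present differently" if pct_correct < RETEACH_THRESHOLD else "on track"
    print(f"{item}: {pct_correct:.0%} correct -> {action}")
```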

Key Characteristics of Formative Assessment

1. Ongoing and Continuous: It occurs throughout the learning process rather than at
the end.
2. Diagnostic and Informative: It helps identify what students know and don’t know,
and provides detailed feedback for both students and teachers.
3. Adaptive: The results of formative assessments are used to modify teaching and
learning activities to meet students’ needs.
4. Low Stakes: Typically, formative assessments do not carry significant grading
weight. Their primary purpose is to provide feedback and facilitate learning, not to
assign grades.

Benefits of Formative Assessment

1. Improved Student Learning: By providing timely feedback and targeted instruction,
formative assessments help students build on their strengths and address their
weaknesses.
2. Increased Student Motivation: Students are more motivated to learn when they
receive regular feedback and see evidence of their progress.
3. Better Teaching Strategies: Teachers gain insights into the effectiveness of their
instructional methods and can adjust their teaching to better meet the needs of their
students.
4. Greater Student Involvement: Formative assessments encourage students to take an
active role in their learning, fostering a sense of ownership and responsibility.

Techniques and Methods of Formative Assessment

1. Observations: Teachers observe students as they work, noting their engagement,
participation, and understanding.
2. Questioning: Asking open-ended questions to encourage discussion and deeper
thinking.
3. Exit Tickets: At the end of a class, students write down their answers to a question or
a summary of what they learned, helping teachers gauge understanding.
4. Quizzes and Polls: Short, informal quizzes or polls that provide immediate feedback
on students' grasp of the material.
5. Think-Pair-Share: Students think about a question individually, discuss their
thoughts with a partner, and then share their conclusions with the class.
6. Peer and Self-Assessment: Students evaluate their own work or the work of their
peers, which promotes self-reflection and critical thinking.
7. Graphic Organizers: Tools like concept maps or Venn diagrams help students
organize and visualize their understanding of the material.
8. Learning Journals: Students keep a journal of their learning experiences, reflections,
and questions, which can be reviewed by the teacher for insights into their learning
process.
9. Feedback: Providing specific, constructive feedback on assignments, discussions, or
activities that guides students toward improvement.

More Specifically, Formative Assessments:

 Help students identify their strengths and weaknesses and areas they need to work on.
 Help teachers recognize where students are struggling and address problems quickly.
 Are generally low stakes, meaning they have little or no point value.

Examples of formative assessments include asking students to:

 Draw a concept map in class to show their understanding of a topic.


 Write one or two sentences to identify the main point of a lecture.
 Turn in a research proposal for early feedback.

Formative assessment is a vital component of the teaching and learning process. It provides ongoing
feedback that helps both teachers and students identify areas of improvement and adapt their
strategies to enhance learning outcomes. By focusing on the process of learning rather than the final
product, formative assessments create a supportive and dynamic learning environment that fosters
growth and development.

Summative Assessment- Summative assessment is a type of evaluation conducted at the end of
an instructional unit, course, or program to determine the extent of student learning and
achievement of the intended learning outcomes. It is typically used to assign grades, make
judgments about student performance, and assess the effectiveness of instructional methods.

Summative assessments are given at specific points in time to determine what students know
and don’t know. They are often associated with standardized tests like state assessments but
are also an important part of district and classroom programs. Summative assessments are
used as part of the grading process and are more focused on the final product, while
formative assessments focus on the process of completing the product. Once a project is
completed, no further changes can be made. However, if students are allowed to make
changes, the assessment becomes formative because students can improve.

Key Characteristics of Summative Assessment

1. Final Evaluation: Summative assessments are typically given at the end of a learning
period, such as the end of a unit, semester, or course.
2. Cumulative: They measure the total learning and understanding of the material
covered over a specific period.
3. High Stakes: These assessments often carry significant weight in determining a
student’s final grade or progress in a program.
4. Standardized: Summative assessments are usually designed to be consistent and
objective, allowing for fair comparisons across different students or groups.

Types of Summative Assessment

1. Examinations:
o Final Exams: Comprehensive tests covering all material taught during a
course.
o Midterm Exams: Tests given in the middle of a course to assess students'
progress.
o Standardized Tests: Exams that measure student performance against a
common set of criteria or standards.
2. Projects and Presentations:
o Research Projects: In-depth investigations into a specific topic, culminating
in a written report or presentation.
o Capstone Projects: Comprehensive projects that integrate learning from an
entire program or course of study.
o Presentations: Oral or multimedia presentations that demonstrate a student’s
understanding of a topic.
3. Essays and Papers:
o Term Papers: Extended essays that require students to explore a topic in
depth.
o Research Papers: Papers that involve researching a topic, analyzing data, and
presenting findings.
4. Portfolios:
o Student Portfolios: Collections of a student’s work over time that
demonstrate growth and learning.
o Digital Portfolios: Online collections of student work that can include written
work, multimedia projects, and other artifacts.
5. Performances and Demonstrations:
o Performances: Activities like musical recitals, theater performances, or dance
shows that showcase student skills.
o Demonstrations: Practical demonstrations of skills, such as science
experiments, technical tasks, or cooking demonstrations.
6. Competitions and Debates:
o Academic Competitions: Events where students compete in knowledge or
skill-based activities.
o Debates: Structured discussions where students argue for or against a specific
position or topic.
Comparison of entry-level assessment, formative assessment, and summative assessment, aspect
by aspect:

Timing
Entry-Level Assessment: Conducted before starting a course, program, or job.
Formative Assessment: Conducted throughout the learning process.
Summative Assessment: Conducted at the end of a course, unit, or instructional period.

Purpose
Entry-Level Assessment: To assess the current knowledge, skills, and abilities of students or
candidates; to determine readiness for a course or job; to place individuals in appropriate
levels or groups; to identify gaps that may need to be addressed early on.
Formative Assessment: To monitor student progress and understanding continuously; to provide
ongoing feedback to both students and instructors; to identify areas where students need
additional support or instruction; to adjust teaching strategies based on student needs.
Summative Assessment: To measure the extent to which students have achieved the learning
objectives; to assign final grades or certifications; to evaluate the overall effectiveness of
the instruction and curriculum; to make decisions about student progression or qualification.

Impact on Learning
Entry-Level Assessment: Helps tailor instruction to match students' current abilities; guides
the initial pace and content of the course or training; sets the foundation for personalized
learning plans; ensures that students are ready for the expected level of challenge.
Formative Assessment: Provides insights into how well students are understanding the material;
allows for real-time adjustments to teaching methods and materials; encourages a growth
mindset by focusing on improvement; helps students identify their strengths and areas needing
work.
Summative Assessment: Provides a final evaluation of what students have learned; influences
final grades and academic or career progression; assesses the effectiveness of instructional
methods and curriculum; often used for accountability purposes, such as meeting educational
standards.

Focus
Entry-Level Assessment: Evaluates existing knowledge and skills; focuses on readiness and
placement; identifies initial learning gaps.
Formative Assessment: Focuses on the learning process and progress; aims to improve
understanding and skills during instruction; provides feedback for ongoing development.
Summative Assessment: Focuses on the final product or outcome; evaluates overall achievement
of learning objectives; assesses cumulative learning over a specific period.

Stakes
Entry-Level Assessment: Generally low or moderate; used for placement and initial planning,
not for final grading.
Formative Assessment: Low stakes; primarily used to provide feedback and support, not for
grading.
Summative Assessment: High stakes; often carries significant weight in grading or
certification; can impact final evaluations, academic records, and future opportunities.

Examples
Entry-Level Assessment: Placement tests (e.g., math placement exams); diagnostic tests;
pre-course surveys or assessments; job skill assessments (e.g., technical skills tests).
Formative Assessment: Quizzes and in-class activities; homework and assignments; class
discussions and student reflections; exit tickets and polls; peer and self-assessments.
Summative Assessment: Final exams and end-of-term tests; major projects or research papers;
standardized tests (e.g., state assessments, national exams); final presentations and
performances; capstone projects.

Summary

 Entry-Level Assessment: Focuses on understanding what students or candidates already know
before starting a course or job. It helps in placing them at the right level and identifying
areas where they may need additional support.
 Formative Assessment: Occurs during the learning process to provide ongoing
feedback and monitor progress. It helps adjust teaching methods and supports student
development throughout the course.
 Summative Assessment: Evaluates the cumulative learning and achievement at the
end of a course or instructional period. It provides a final measure of performance and
is used for grading and certification.

1.4 Formal and informal assessment – concept, meaning and role in educational settings.
Standardized/Norm referenced tests (NRT) and teacher made/informal Criterion
referenced testing (CRT).

Formal assessments are supported by data that back up the conclusions drawn from the test.
We usually refer to these types of tests as standardized measures. Such tests have been tried
out on students beforehand and carry statistics that support conclusions such as "the student
is reading below average for his or her age." The data are mathematically computed and
summarized. Scores such as percentiles or standard scores are most commonly reported from
this type of assessment.
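
For readers who want to see what lies behind such scores, here is a minimal sketch of how a
raw score can be converted into a z-score, a standard score, and an approximate percentile.
The norm-group mean and standard deviation used below are illustrative assumptions, not values
from any published test manual.

```python
# A minimal sketch of score conversion for a formal, standardized assessment.
# The norm-group statistics here are illustrative, not from a real test manual.
from statistics import NormalDist

raw_score = 62
norm_mean, norm_sd = 50.0, 10.0          # illustrative norm-group mean and SD

z = (raw_score - norm_mean) / norm_sd    # z-score: distance from the mean in SD units
standard_score = 100 + 15 * z            # common standard-score metric (mean 100, SD 15)
percentile = NormalDist().cdf(z) * 100   # percent of norm group expected to score below

print(f"z = {z:.2f}, standard score = {standard_score:.0f}, percentile = {percentile:.0f}")
```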

Formal assessment refers to a structured and standardized method of evaluating students'
learning, skills, and knowledge. These assessments are systematically designed and
implemented to provide objective, quantifiable data on student performance. They often use
established benchmarks and statistical analysis to ensure consistency and fairness.

Definition: Formal assessment is a method of evaluating students' abilities and achievements
through standardized tests or tools. These assessments are typically used to measure overall
learning outcomes, compare student performance against established norms or benchmarks,
and make decisions about grades, promotions, or further educational needs.

Characteristics of Formal Assessment:


 Structured: Uses predefined formats, procedures, and scoring systems.
 Standardized: Administered and scored in a consistent manner across different
settings and students.
 Objective: Aims to provide unbiased results based on data and statistical analysis.
 Quantifiable: Results are often presented in numerical or categorical terms, such as
scores, percentiles, or grades.

Examples of Formal Assessment:

1. Standardized Tests: Entrance examinations such as the CUET (Common University Entrance
Test), NEET (National Eligibility cum Entrance Test), and GATE (Graduate Aptitude Test in
Engineering), which are used for admissions.
Purpose: To measure students' readiness for college by assessing their skills in
reading, writing, and math compared to a national or state-wide benchmark.
2. Final Exams: End-of-term exams in high school subjects like biology or history.
Purpose: To evaluate students' understanding of all material covered during the
course, contributing to their final grade.
3. National or State Assessments: State-wide standardized tests in mathematics or
reading, such as the MCAS in Massachusetts or the STAAR in Texas.
Purpose: To assess student performance against state or national standards and to
track progress over time.
4. Certified Examinations: Professional certification exams like the CPA (Certified
Public Accountant) exam.
Purpose: To evaluate candidates' knowledge and skills required for professional
certification in fields like accounting or engineering.
5. Pre- and Post-Tests: Tests given before and after a training program to measure the
effectiveness of the instruction.
Purpose: To assess the growth in knowledge or skills achieved by the participants
during the program.

Informal Assessment- Informal assessment refers to a less structured and more flexible way
of evaluating students' learning and performance. Unlike formal assessments, which use
standardized methods and data, informal assessments focus on everyday observations and
interactions to gauge how well students are grasping the material.

Informal assessment is a method of evaluating students through direct observation and
interactions rather than standardized tests. It helps teachers understand students' progress,
strengths, and areas needing improvement in a more flexible and immediate way.

Characteristics of Informal Assessment:

 Flexible: Adapted to the specific context and needs of the students.


 Observational: Based on what the teacher sees and hears during normal classroom
activities.
 Qualitative: Often focuses on the quality of students' performance rather than
numerical scores.
 Immediate Feedback: Provides quick insights into student understanding and
learning.
Examples of Informal Assessment:

1. Classroom Observations: Noting how actively a student participates in group
discussions or how they solve problems during class activities.
o Purpose: To understand students’ engagement and grasp of the material in
real-time.
2. Running Records: Tracking a student’s reading progress by noting errors and
accuracy while they read a book aloud.
o Purpose: To evaluate reading skills and fluency with specific texts.
3. Exit Tickets: Asking students to write a quick response to a question at the end of a
lesson, such as summarizing what they learned.
o Purpose: To get immediate feedback on what students have understood from
the lesson.
4. Peer Reviews: Having students review each other's work and provide feedback based
on a given rubric.
o Purpose: To encourage collaborative learning and self-assessment.
5. Checklists and Rubrics: Using a checklist to observe if students meet specific
criteria in a project or activity.
o Purpose: To assess various aspects of performance in a less formal way.
6. One-on-One Conversations: Speaking with students individually to discuss their
understanding of a topic or to address questions they might have.
o Purpose: To provide personalized feedback and support.

Norm-Referenced Test - A Norm-Referenced Test (NRT) is a type of assessment designed to
measure how a student's performance compares to that of a larger, representative group of
people. These tests are used to rank students and see where they stand in relation to their
peers.
A Norm-Referenced Test is a standardized test that compares an individual's performance to
a norm group—a representative sample of people who have taken the same test. The test results
are used to determine how well the individual performs relative to others.

In other words, a Norm-Referenced Test (NRT) is a traditional way of measuring how a person's
abilities compare to those of others. These tests are standardized and used to see how a test
taker's performance stacks up against a sample of peers.
Norm-Referenced Assessment involves using tests that have been standardized on a large
group of people. The tests follow specific directions for administration, scoring, and
interpreting results. These results are used to compare individuals to others.

Characteristics:

 Standardized: Administered and scored in a consistent way across different settings.


 Comparative: Results are compared to the performance of a norm group.
 Rank-Based: Provides information on how a student ranks relative to others
Examples:

1. Stanford-Binet Intelligence Scales:


o Purpose: Measures general intelligence and cognitive abilities.
o Example Use: If a student scores at the 75th percentile, it means they scored
better than 75% of the students in the norm group.
2. Wechsler Intelligence Scale for Children (WISC):
o Purpose: Assesses cognitive abilities in children.
o Example Use: A child's score is compared to the scores of a large,
representative sample of children of the same age.
3. SAT (Scholastic Assessment Test):
o Purpose: Evaluates readiness for college by measuring skills in reading,
writing, and math.
o Example Use: A student’s score is compared to the scores of other students
who have taken the test nationally.
4. ACT (American College Testing):
o Purpose: Assesses high school students’ academic readiness for college.
o Example Use: Scores are compared to the scores of other students taking the
same test to determine college readiness.

Advantages

1. Benchmarking Performance: Provides a clear comparison of a student's performance
relative to others. This helps in understanding where the student stands in relation to
their peers.
2. Standardization: Results are based on standardized procedures, making them
consistent and reliable across different test-takers and settings.
3. Wide Application: Useful for a variety of purposes, such as identifying students who
may need special education services or determining eligibility for certain programs.
4. Objective Evaluation: Offers an objective measure of performance that is not
influenced by subjective opinions, making it easier to communicate results and
decisions.
5. Research-Based: Many norm-referenced tests are well-researched and have
established norms based on large, representative samples, providing a robust
framework for assessment.

Disadvantages

1. Limited Usefulness for Individual Needs: Results are often too general to guide
specific instructional decisions. They don't always indicate exactly what a student
needs to improve.
2. Focus on Comparison: Emphasizes ranking students against each other rather than
measuring individual progress. This can lead to an overemphasis on relative
performance rather than personal growth.
3. May Not Reflect Current Abilities: Norms are based on a sample from a specific
time, and a student's abilities or circumstances may change, making the results less
relevant over time.
4. Cultural and Contextual Bias: Some tests may have biases that affect the
performance of students from different cultural or socioeconomic backgrounds,
leading to unfair comparisons.
5. Pressure and Anxiety: Can create pressure and anxiety for students, as they are
aware that their performance is being compared to others, which may affect their test-
taking experience.

Criterion-Referenced Tests (CRTs)- Concept: Criterion-Referenced Testing (CRT) evaluates
whether a student has achieved specific skills or knowledge according to pre-set standards or
criteria. Unlike norm-referenced tests, which compare students to each other, CRTs focus on
whether students meet the established learning goals.
Criterion-Referenced Testing (CRT) focuses on whether a student can perform a specific skill
or meet certain criteria, rather than comparing their performance to others. In CRT, students
are assessed on how well they meet set standards or goals, rather than how they rank
compared to their peers.

Definition: A Criterion-Referenced Test measures a student's performance against specific
criteria or learning objectives that have been set in advance. It assesses whether the student
has mastered particular skills or concepts, regardless of how their performance compares to
that of other students.
A Criterion-Referenced Test measures how well a student meets specific skills or criteria that
have been set in advance. It looks at whether students have mastered the skills needed before
moving on to more advanced topics.
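
In computational terms, a criterion-referenced decision is a simple pass/fail check against
the pre-set criterion, with no reference to how other students performed. The sketch below
assumes an 80% mastery criterion, in line with the addition example that follows; the numbers
are illustrative only.

```python
# A minimal criterion-referenced check: does the student meet the pre-set criterion?
# No comparison with other students is involved; the 80% criterion is an assumption.

def meets_criterion(correct: int, total: int, criterion: float = 0.80) -> bool:
    """Return True if the proportion correct meets or exceeds the criterion."""
    return (correct / total) >= criterion

# e.g. addition problems checked against an 80% mastery criterion
print(meets_criterion(17, 20))   # True  -> ready to move on to multiplication
print(meets_criterion(14, 20))   # False -> needs more practice on addition
```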

Examples:

1. Math Proficiency Test: A test designed to measure whether a student can correctly
solve basic addition problems. The criterion might be to solve 80% of addition
problems correctly to demonstrate proficiency before moving on to multiplication.
2. Reading Comprehension Assessment: A test that checks if a student can understand
and summarize a passage of text. The criterion could be summarizing key points of a
passage with 90% accuracy.
3. Spelling Test: A spelling test where the criterion is to correctly spell 20 out of 25
words from a given list. This ensures that the student has achieved a set level of
spelling skill.
4. Driving Test: A driving test where the criterion is to pass a series of driving
maneuvers and safety checks. The student must meet the set standards for each
maneuver to be considered proficient.

Advantages

1. Clear Standards: Provides clear and specific criteria for what students need to know
or be able to do. This helps in setting precise learning goals and expectations.
2. Focus on Mastery: Helps ensure that students have mastered particular skills or
concepts before moving on to more advanced topics. It focuses on individual student
progress and achievement.
3. Direct Instructional Guidance: Offers direct feedback on which specific skills or
knowledge areas need more focus. Teachers can use this information to tailor
instruction and interventions to meet individual student needs.
4. Useful for Identifying Needs: Helps identify specific areas where students may need
additional support or remediation. This can be particularly helpful in planning
targeted instructional strategies.
5. Ongoing Assessment: Can be used regularly to track student progress and adjust
teaching methods accordingly. This ongoing feedback is beneficial for continuous
improvement.

Disadvantages

1. Setting Criteria: Determining the exact criteria for passing a skill can be challenging.
Deciding what level of proficiency is required can affect how students are assessed
and how they perform.
2. Narrow Focus: The focus on meeting specific criteria may lead to a narrow view of
what students need to learn. It may result in teaching only the skills tested and not
considering broader educational needs.
3. Limited Comparisons: Since CRTs do not compare students to each other, they may
not provide information on how a student’s performance stands in relation to peers.
This can limit understanding of a student's relative standing.
4. Potential for Misalignment: The skills assessed might become the primary focus of
teaching, which could lead to teaching to the test rather than addressing broader
educational goals.

Curriculum-Based Assessment (CBA)

Concept: Curriculum-Based Assessment (CBA) is a type of evaluation that focuses on assessing
a student's performance based on the actual curriculum being taught in the classroom. It is
used to monitor how well students are learning and applying the material that is part of
their regular instruction.
Curriculum-Based Assessment (CBA) is used to understand how well students are doing with
the curriculum they are being taught. It helps identify educational needs and adjust teaching
methods based on what students are learning in the classroom.

Definition:
CBA involves regularly measuring a student’s performance on tasks that are directly linked
to their classroom curriculum. It checks how well students are meeting the goals of their
curriculum.

Curriculum-Based Assessment involves regularly measuring a student's progress on tasks and
objectives that are directly linked to the curriculum. The goal is to determine how well
students are meeting the expected outcomes of their education and to adjust instruction
accordingly.

Key Features:

1. Direct Link to Curriculum: The assessments are based on the specific content and
skills outlined in the curriculum. This ensures that the tests reflect what is being
taught in the classroom.
2. Frequent Measurement: Assessments are conducted regularly or frequently to track
ongoing progress. This helps in making timely adjustments to teaching strategies.
3. Formative Focus: Often used for formative purposes, meaning it helps teachers
understand how well students are learning and informs instructional decisions.
4. Skill-Based Evaluation: Assesses specific skills or knowledge areas outlined in the
curriculum, rather than comparing students to each other.
Examples:

 Classroom Tests: Regular quizzes or tests based on what’s currently being taught in
class.
 Performance Tasks: Activities that show how well students understand and apply
what they’ve learned.

Procedure for Developing CBA:

1. Define Curriculum: Break down the curriculum into specific tasks and goals.
2. Placement: Determine what skills students have learned and what they still need to
learn.
3. Teaching Methods: Choose the best methods, materials, and classroom organization
for teaching.
4. Evaluate Progress: Continuously check how students are progressing and adjust
teaching as needed (a brief sketch of this step follows this list).
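
A minimal sketch of step 4 (Evaluate Progress): weekly curriculum-based probe scores are
recorded and a simple rate of improvement (the slope of scores over weeks) is compared with a
target growth rate. The probe scores and the target of 1.5 points per week are illustrative
assumptions, not recommended values.

```python
# A minimal progress-monitoring sketch: compute the rate of improvement (slope)
# of weekly probe scores. Scores and the target growth rate are assumptions.

weeks = [1, 2, 3, 4, 5, 6]
scores = [18, 20, 23, 22, 26, 29]        # e.g. words read correctly per minute

n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(scores) / n
# Least-squares slope: average gain in score per week
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
         / sum((x - mean_x) ** 2 for x in weeks))

TARGET_GROWTH = 1.5                      # assumed target gain per week
print(f"Rate of improvement: {slope:.2f} points per week")
print("On track" if slope >= TARGET_GROWTH else "Adjust instruction")
```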

Distinctions between Criterion-referenced and Norm-referenced testing


PURPOSE
Criterion-Referenced: To determine whether each student has achieved specific skills or
concepts; to find out how much students know before instruction begins and after it has
finished.
Norm-Referenced: To rank each student with respect to the achievement of others in broad
areas of knowledge; to discriminate between high and low achievers.

CONTENT
Criterion-Referenced: Measures specific skills which make up a designated curriculum. These
skills are identified by teachers and curriculum experts. Each skill is expressed as an
instructional objective.
Norm-Referenced: Measures broad skill areas sampled from a variety of textbooks, syllabi, and
the judgments of curriculum experts.

ITEM CHARACTERISTICS
Criterion-Referenced: Each skill is tested by at least four items in order to obtain an
adequate sample of student performance and to minimize the effect of guessing. The items which
test any given skill are parallel in difficulty.
Norm-Referenced: Each skill is usually tested by fewer than four items. Items vary in
difficulty. Items are selected that discriminate between high and low achievers.

SCORE INTERPRETATION
Criterion-Referenced: Each individual is compared with a preset standard for acceptable
achievement; the performance of other examinees is irrelevant. A student's score is usually
expressed as a percentage. Student achievement is reported for individual skills.
Norm-Referenced: Each individual is compared with other examinees and assigned a score,
usually expressed as a percentile, a grade equivalent score, or a stanine. Student achievement
is reported for broad skill areas, although some norm-referenced tests do report student
achievement for individual skills.

Difference Between Norm-Referenced Tests (NRT) and Criterion-Referenced Tests (CRT)

Purpose
NRT: Compare a student's performance to that of others (peers).
CRT: Assess whether a student meets specific learning criteria or standards.

Focus
NRT: Relative performance compared to a norm group.
CRT: Mastery of specific skills or knowledge as per set criteria.

Comparison
NRT: Measures how a student performs in relation to a norm group.
CRT: Measures how well a student meets predetermined criteria.

Standardization
NRT: Standardized against a large sample to establish norms.
CRT: Standardized based on specific learning goals or standards.

Feedback
NRT: Provides information on how a student ranks compared to others.
CRT: Provides information on whether a student has achieved specific learning objectives.

Use in Instruction
NRT: Less directly useful for guiding individual instruction.
CRT: Directly informs instruction and identifies areas needing improvement.

Examples
NRT: SAT, ACT, IQ tests, etc.
CRT: Driving tests, proficiency tests in specific subjects.

Results Interpretation
NRT: Results are used to rank students and identify relative standing.
CRT: Results indicate whether specific learning goals have been achieved.

Assessment Outcome
NRT: Helps in understanding relative ability and making comparisons.
CRT: Helps in determining mastery of skills and guiding next steps.

Typical Use
NRT: Often used in admissions and selection processes.
CRT: Often used in educational settings to track progress and guide teaching.

1.5 Points to consider while assessing students with developmental disabilities.


1. Understand the Disability

 Nature of the Disability: Different developmental disabilities affect learning in various
ways. For instance, a student with Autism Spectrum Disorder (ASD) might struggle with social
interactions and communication, while a student with Down Syndrome may have challenges with
cognitive skills but excel in other areas.
 Example: For a student with ASD, an assessment might focus on understanding their
social skills and communication abilities, rather than just academic performance.

2. Use a Multi-Method Approach

 Varied Assessments: Combine standardized tests with observational data and informal
assessments to get a complete picture. This could include using tests for cognitive ability,
behavior checklists, and direct observations of classroom performance.
 Dynamic Assessment: Use methods that can adapt based on the student's responses.
For example, if a student with ADHD is struggling with a traditional test, using a
hands-on activity or oral assessment might provide better insights.
 Example: A student with developmental delays might be assessed using a
combination of IQ tests, classroom observations, and parent interviews to understand
their abilities comprehensively.

3. Consider the Student’s Communication Needs

 Adapt Assessments: Modify assessments to fit the student's communication style. For
non-verbal students, use picture-based tests or communication boards.
 Alternative Methods: Implement assistive technologies like speech-to-text software
or communication apps.
 Example: A student with limited verbal communication might use a picture exchange
system to answer questions during an assessment.

4. Create a Supportive Testing Environment

 Comfortable Setting: Ensure the testing environment is quiet and free from
distractions. This might involve testing in a separate room or using noise-cancelling
headphones.
 Flexible Timing: Allow extra time or breaks. For instance, a student with ADHD
might need frequent breaks to maintain focus during a long assessment.
 Example: A student with sensory processing issues might be given the option to take
tests in a calm, dimly-lit room with minimal noise.

5. Incorporate Observational Data

 Daily Performance: Collect data on how the student performs tasks in everyday
situations, such as during classroom activities or at home.
 Functional Skills: Assess practical skills, like how a student with motor delays
manages daily tasks or navigates the school environment.
 Example: Observing a student’s ability to follow multi-step instructions in a
classroom setting can provide insight into their cognitive and functional abilities.

6. Involve Multiple Perspectives

 Input from Parents and Caregivers: Gather information from parents about the
student’s behavior and skills at home. For example, a parent might provide valuable
insights into the student’s social interactions or challenges.
 Team Collaboration: Collaborate with other professionals, such as special education
teachers and therapists, to get a holistic view of the student’s needs.
 Example: A team might include a speech therapist to assess communication skills, an
occupational therapist for motor skills, and a special education teacher for academic
performance.

7. Be Culturally and Linguistically Responsive

 Cultural Sensitivity: Ensure assessments are culturally appropriate. For example, avoid
using culturally biased tests that may disadvantage students from diverse backgrounds.
 Language Considerations: Provide assessments in the student’s primary language or
use bilingual tools. For a student whose first language is Spanish, offer the assessment
in Spanish if possible.
 Example: Using culturally relevant examples in math problems can make
assessments more fair for students from different cultural backgrounds.

8. Focus on Strengths and Needs

 Strength-Based Approach: Identify and build on the student's strengths. For example, if a
student excels in visual-spatial tasks, use visual aids to support their learning.
 Targeted Goals: Set specific goals based on the student’s needs. For instance, if a
student struggles with reading comprehension, create targeted interventions to
improve this skill.
 Example: A student with strong problem-solving skills might be encouraged to work
on projects that involve critical thinking and creativity.

9. Regular Monitoring and Adaptation

 Continuous Assessment: Regularly track the student’s progress and adjust teaching
strategies as needed. Use frequent assessments to see how well interventions are
working.
 Feedback Loop: Provide ongoing feedback and modify teaching methods based on
the student’s progress. For instance, if a particular strategy isn’t effective, try a
different approach.
 Example: Adjusting a student’s learning plan based on monthly assessments to
ensure they are making progress towards their goals.

Assessing students with learning disabilities can be tricky. Some students, like those with
ADHD or autism, may have a hard time with traditional tests because they can't stay focused
long enough to finish. However, assessments are crucial as they give these students a chance
to show what they know and can do. For many students with learning disabilities, written
tests should be the least used method.

Here are some alternative ways to assess students with learning disabilities:
1. Use the Latest Standardized Tests: Make sure to use current and valid versions of
any standardized tests.
2. Employ Multiple Methods: Use a variety of assessment tools and sources, such as:
o Case History and Interviews: Gather information from parents, teachers,
professionals, and the student if possible.
o Parent and Teacher Reports: Include evaluations and feedback from parents
and teachers.
o Direct Observations: Collect informal observations and data from various
settings and times.
o Reliable Standardized Tests: Use tests that are valid, reliable, and suitable
for the student’s culture, language, development, and age.
o Curriculum-Based Assessments: Use assessments related to the curriculum,
task and error analysis, portfolios, diagnostic teaching, and other non-standard
methods.
o Progress Monitoring: Continuously track progress during instruction and
over time.
3. Consider IDEA Definitions: Take into account all aspects of specific learning
disabilities as defined by IDEA (Individuals with Disabilities Education Act) 2004,
including:
o Exclusionary Factors: Things that might rule out other causes of learning
issues.
o Inclusionary Factors: Factors that include different types of learning
disabilities.
o Eight Areas of Learning Disabilities: Skills like oral expression, listening
comprehension, written expression, basic reading, reading comprehension,
reading fluency, math calculation, and problem-solving.
o Individual Differences: Look at patterns of strengths and weaknesses relative
to age, grade level standards, or intellectual development.
4. Examine Different Areas of Functioning: Assess abilities in motor skills, sensory
processing, cognition, communication, and behavior. Pay attention to specific
cognitive difficulties like memory, attention, sequencing, motor planning, and
reasoning.
5. Follow Proper Procedures: Use recommended procedures for administering,
scoring, and reporting standardized tests. Report results in a way that allows for
comparisons across measures, avoiding age or grade equivalents.
6. Include Confidence Intervals: Provide confidence intervals and standard error
measures if available (a worked example follows this list).
7. Combine Data Sources: Integrate both standardized and informal data collected.
8. Discuss All Information: Balance and discuss both standardized and non-
standardized data to understand the student’s academic performance and functional
skills. Use this information to make decisions about identification, eligibility,
services, and teaching strategies.
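
A brief worked example for point 6 above: the standard error of measurement (SEM) is commonly
estimated as SEM = SD x sqrt(1 - reliability), and a 95% confidence interval is the observed
score plus or minus 1.96 x SEM. The test standard deviation and reliability coefficient used
below are illustrative assumptions, not values from a real test manual.

```python
# A minimal worked example of SEM and a 95% confidence interval around a score.
# The test SD and reliability below are illustrative assumptions.
import math

observed_score = 85
test_sd = 15.0           # standard deviation of the test's standard scores
reliability = 0.90       # reliability coefficient of the test

sem = test_sd * math.sqrt(1 - reliability)   # SEM = SD * sqrt(1 - reliability)
lower = observed_score - 1.96 * sem          # 95% CI uses z = 1.96
upper = observed_score + 1.96 * sem

print(f"SEM = {sem:.2f}; 95% CI = {lower:.1f} to {upper:.1f}")
```
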
Practice Questions-

1. What is the primary purpose of 'screening' in education?


A) To assess final academic performance
B) To identify students needing further evaluation
C) To measure individual learning progress
D) To provide detailed feedback on student work
2. Which term refers to the process of gathering information to make informed judgments about a
student's learning?
A) Testing
B) Screening
C) Assessment
D) Measurement
3. What distinguishes 'evaluation' from 'assessment'?
A) Evaluation is a process, while assessment is a tool
B) Evaluation involves making judgments about data collected, whereas assessment is the
process of collecting data
C) Evaluation is informal, while assessment is formal
D) Evaluation and assessment are synonyms
4. Which of the following best describes 'testing'?
A) Collecting data through observations
B) The process of measuring student performance through exams or quizzes
C) Reviewing academic records
D) Providing general feedback on student behavior
5. How does 'measurement' differ from 'assessment'?
A) Measurement provides quantitative data, while assessment may include both quantitative and
qualitative data
B) Measurement is used only in informal settings, while assessment is formal
C) Measurement and assessment are the same
D) Measurement is concerned with opinions, while assessment deals with facts
6. Which assessment type evaluates cognitive abilities?
A) Achievement assessment
B) Intellectual assessment
C) Aptitude test
D) Psychological assessment
7. What is the purpose of an 'achievement assessment'?
A) To measure a student's potential abilities
B) To assess academic skills and knowledge in specific subjects
C) To diagnose learning disabilities
D) To certify student performance
8. Which test measures potential abilities in specific areas?
A) Achievement test
B) Aptitude test
C) Intellectual assessment
D) Psychological assessment
9. For which purpose is 'psychological assessment' commonly used?
A) To determine academic achievement
B) To identify cognitive abilities
C) To diagnose psychological conditions
D) To assess future learning potential
10. Which type of assessment is used primarily for certification?
A) Developmental assessment
B) Psychological assessment
C) Intellectual assessment
D) Diagnostic assessment
11. What does 'entry level assessment' determine?
A) Academic progress throughout the year
B) Baseline skills and knowledge before instruction starts
C) Overall academic achievement
D) Final student performance
12. Which type of assessment provides feedback to enhance learning during instruction?
A) Summative assessment
B) Formative assessment
C) Entry level assessment
D) Certification assessment
13. When is a 'summative assessment' typically used?
A) During ongoing instruction
B) To provide feedback for improvement
C) At the end of an instructional period to evaluate overall learning
D) To diagnose learning difficulties
14. Which type of assessment informs instructional decisions and supports student learning?
A) Summative assessment
B) Formative assessment
C) Entry level assessment
D) Certification assessment
15. What is the main focus of developmental assessments?
A) Measuring academic performance
B) Tracking growth and developmental milestones
C) Diagnosing learning disabilities
D) Evaluating standardized test results
16. Which assessment method involves standardized procedures and scoring?
A) Informal assessment
B) Formal assessment
C) Criterion-referenced testing
D) Teacher-made quiz
17. An example of an informal assessment is:
A) A state standardized test
B) A teacher-made quiz
C) A nationally norm-referenced test
D) A standardized diagnostic tool
18. Criterion-referenced tests (CRT) are used to:
A) Compare student performance against a set standard
B) Measure student performance relative to peers
C) Predict future academic success
D) Diagnose psychological conditions
19. How do standardized tests differ from norm-referenced tests (NRT)?
A) NRTs compare performance to peers, while standardized tests measure against a fixed
criterion
B) Standardized tests are informal, while NRTs are formal
C) NRTs are used only for certification, while standardized tests are for diagnosis
D) There is no difference between them
20. The role of teacher-made assessments in education is primarily to:
A) Compare student scores nationally
B) Evaluate student progress and understanding in a specific context
C) Provide a standardized measure of achievement
D) Certify student performance

Answers
1. B) To identify students needing further evaluation
2. C) Assessment
3. B) Evaluation involves making judgments about data collected, whereas assessment is the process of
collecting data
4. B) The process of measuring student performance through exams or quizzes
5. A) Measurement provides quantitative data, while assessment may include both quantitative and
qualitative data
6. B) Intellectual assessment
7. B) To assess academic skills and knowledge in specific subjects
8. B) Aptitude test
9. C) To diagnose psychological conditions
10. D) Diagnostic assessment
11. B) Baseline skills and knowledge before instruction starts
12. B) Formative assessment
13. C) At the end of an instructional period to evaluate overall learning
14. B) Formative assessment
15. B) Tracking growth and developmental milestones
16. B) Formal assessment
17. B) A teacher-made quiz
18. A) Compare student performance against a set standard
19. A) NRTs compare performance to peers, while standardized tests measure against a fixed criterion
20. B) Evaluate student progress and understanding in a specific context

Long Answer Type Questions

1. Define the term 'screening' in the context of educational assessment.


2. What is the primary difference between 'assessment' and 'evaluation'?
3. Explain the purpose of 'testing' in educational settings.
4. How does 'measurement' differ from 'assessment'?
5. Provide an example of a situation where 'screening' would be used in an educational context.
6. What is intellectual assessment, and why is it important for diagnosis?
7. Describe the purpose of 'achievement assessment' in educational settings.
8. What are aptitude tests designed to measure, and how are they used?
9. Give an example of a psychological assessment tool and its typical use.
10. How does assessment for certification differ from assessment for diagnosis?
11. What is 'entry level assessment,' and how does it inform instructional planning?
12. Explain the difference between formative and summative assessments with examples.
13. Why is it important to use developmental assessments in early childhood education?
14. Describe a situation where summative assessment would be most appropriate.
15. How can formative assessments be used to improve student learning outcomes?
16. Differentiate between formal and informal assessments with examples.
17. What is a standardized test, and how does it differ from a norm-referenced test (NRT)?
18. Explain the concept of criterion-referenced testing (CRT) and provide an example.
19. Discuss the role of teacher-made assessments in the classroom.
20. How do informal assessments complement formal assessments in educational settings?
