
NAME: NAYAB AMJAD

ROLL NO: MCF1900609 

PROGRAM: MA EDUCATION (MORNING)


LECTURE No: 1
Introduction to Educational Assessment
1.1) Concept of Test, Measurement, Assessment, Evaluation
TEST
 A test is a form of questioning or measuring tool used to assess the status of one's skill, attitude, or fitness.
 It is an assessment intended to measure a test-taker's knowledge, skill, aptitude, performance, or classification in many other areas.

MEASUREMENT
 Measurement is an act or process that involves the assignment of numerical values to whatever is being tested.
 It involves the quantity of something.

ASSESSMENT
 Assessment in education is the process of gathering, interpreting, recording, and using information about pupils' responses to an educational task.
 It is the process of gathering quantitative and qualitative data about what a student can do and how much a student possesses.

EVALUATION
 Evaluation is concerned with a whole range of issues in and beyond education; lessons, programs, and skills can all be evaluated.
 It produces a global view of achievement, usually based on many different types of information such as observation of lessons, test scores, assessment reports, course documents, or interviews with students and teachers.

1.2) Instructional Process and Role of Assessment


 Successful student learning is most effective with an aligned system of standards,
curriculum, instruction, and assessment.
 When assessment is aligned with instruction, both
students and teachers benefit.
 Students are more likely to learn because instruction is
focused and because they are assessed on what they are
taught.
 Teachers are also able to focus, making the best use of
their time.
1.2.1) Instructional Design Process (the ADDIE Model)

Five essential phases:

1) Analyse

This phase involves:

 Assessing goals
 Conducting a needs analysis
 Identifying the knowledge gap
 Conducting an audience analysis

2) Design

This phase involves:

 Selecting an appropriate delivery method
 Determining training structure and duration
 Developing storyboards and media

3) Develop

This phase involves:

 Creating the prototype
 Completing a tabletop review
 Running a training pilot

4) Implement

This phase involves:

 Printing and preparing training materials
 Preparing the trainers for delivery
 Notifying and enrolling learners
 Launching the course

5) Evaluate

This phase involves:

 Collecting training evaluation data
 Assessing project performance
 Reporting performance results
1.2.2) Role of Assessment and Instruction:
 Assessments are the tools and methods educators use to determine what students know and are able to do.
 Assessments range from teacher questioning techniques to state-wide assessments like PARCC.
 Assessments are only useful if they provide information that is used to improve
student learning.
 Assessment inspires us to ask these hard questions:
 "Are we teaching what we think we are teaching?"
 "Are students learning what they are supposed to be learning?"
 "Is there a way to teach the subject better, thereby promoting better learning?"

 Teachers are responsible for providing instruction by identifying teaching practices that are effective for all students, since not every student learns or retains information in the same way.
 This is where teachers get to be creative in how they engage students in learning.

---Curriculum provides a "map" for how students will master the standards. Decisions about
what that map looks like are made by districts, schools, and teachers. This map includes the
materials (e.g. lesson plans, assignments, tests, resources) that will make learning possible.

1.3) Assessment for Learning and Assessment of Learning

Comparing Assessment for Learning and Assessment of Learning

Assessment for Learning (Formative Assessment):

 Checks learning to determine what to do next and then provides suggestions of what to do—teaching and learning are indistinguishable from assessment.
 Is designed to assist educators and students in improving learning.
 Is used continually by providing descriptive feedback.
 Usually uses detailed, specific, and descriptive feedback—in a formal or informal report.
 Is not reported as part of an achievement grade.
 Usually focuses on improvement, compared with the student's "previous best" (self-referenced, making learning more personal).
 Involves the student.

Assessment of Learning (Summative Assessment):

 Checks what has been learned to date.
 Is designed for the information of those not directly involved in daily learning and teaching (school administration, parents, school board, Alberta Education, post-secondary institutions) in addition to educators and students.
 Is presented in a periodic report.
 Usually compiles data into a single number, score, or mark as part of a formal report.
 Is reported as part of an achievement grade.
 Usually compares the student's learning either with other students' learning (norm-referenced, making learning highly competitive) or with the standard for a grade level (criterion-referenced, making learning more collaborative and individually focused).
 Does not always involve the student.

1.4) Principles of Assessment

 VALID: the assessment evidence meets all assessment criteria and all learning outcomes
 AUTHENTIC: all the work is the learner's own
 RELIABLE: assessment evidence is consistent and generates outcomes that would be replicated were the assessment repeated
 CURRENT: assessment evidence is up to date
 SUFFICIENT: enough work is available to justify the credit value, and to enable a consistent and reliable judgement about the learner's achievement
 COMPARABLE: all assessment evidence is comparable in standard between assessments within a unit/qualification, and between learners of the same level
 MANAGEABLE: all assessment places reasonable demands on all learners
 FAIR AND FREE FROM BIAS: assessments are fair to all learners irrespective of their characteristics (for example, age, gender, etc.)

1.5) Classification of Assessment on the basis of:

---Nature of Assessment
Three major reasons for conducting assessment:

1) Program Improvement

 Helps program developers identify areas of improvement for the program.
 Shows program developers the actual impact the program has on students.

2) Recruitment

 Provides parents with evidence of the value a program has for their child.
 Gives prospective students evidence of why they should participate.

3) Accountability

 Meets university and school annual program reporting and program review requirements.
 Addresses accrediting agency program evaluation requirements.
 May apply to other funding or regulatory agencies' requirements.

---Purpose of Assessment

 To gather evidence of learning
 To place young children in infant or early childhood programs or to provide special services
 To be able to describe what the child has achieved
 Program planning
 Research

---Forms of Assessment

 Performance tasks
 Classroom quizzes
 Portfolios of student work
 Teacher observations
 Standardized tests

---Methods of Interpreting results

Test interpretation is the process of analysing the scores in a test, translating qualitative data into quantitative form and grades into numerical values. Score interpretation is the same as test interpretation.

METHODS:

A referencing framework is a structure you can use to compare student performance to something external to the assessment itself.

1) Criterion Referencing Framework

 A criterion referencing framework permits us to describe an individual's performance without referring to the performance of others.
 Describes student performance according to a specified domain or clearly defined learning tasks.
 Concerned with national examinations and other assessment bodies.
 Used in the assessment of vocational and academic qualifications.
 Results are given on a pass/fail, competent/not competent basis.
 Results are conclusive and usually open to review.

2) Norm Referencing Framework

 A norm-referenced interpretation tells us how an individual compares with other students who have taken the same test.
 How much a student knows is determined by his or her standing or rank within the reference group.
 This means that a student's score is not treated individually but as part of the group to which the student belongs.

The most common types of norms are:
 Grade norms (e.g., 5.5)
 Percentile norms (e.g., scored higher than 85% of the group)
 Standard score norms (based on the normal curve)
 Stanines (a nine-point scale)
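To see how these norms relate, consider an illustrative example (hypothetical numbers, not from the lecture): a raw score of 60 on a test with a group mean of 50 and standard deviation of 10 gives a standard score of z = (60 - 50) / 10 = 1.0, which lies at roughly the 84th percentile and falls in stanine 7.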

---Teacher made VS Standardised Test

Teacher Made Test

Basically, teacher-made tests are used to evaluate the progress of students in school. However, the specific use of tests may vary from school to school and from teacher to teacher. The test results can be used for students, teachers, and other administrative purposes.

 These tests are very simple to use.
 Easy for the students.
 Teachers can assess the strengths and weaknesses of students.
 Tests are conducted continuously and children get immediate feedback.
 Tests are not so carefully and scientifically prepared.
 The items of teacher-made tests are seldom analysed and edited.
 The types of behavioural changes covered are often limited in scope.

Standardized Tests

A standardized test is an instrument of measurement which measures what it aims to measure accurately and with consistent results.

The process of standardization demands a more critical analysis of the:

 Subject matter
 Rigorous planning of the test
 More accurate construction of test items
 Analysis and refinement of the conditions for administration and scoring.

Lecture no 2:
Instructional aims, goals and objectives:
Aims:
 Aims are concerned with purpose, whereas objectives are concerned with achievement.
 Usually an educational objective relates to gaining an ability, a skill, some knowledge, a new attitude, etc., rather than having merely completed a given task.
Goals:
 In broad terms, Educational Goals are statements that describe the competences,
skills, and attributes that students should possess upon completion of a course or
program.
 They often operate within the interacting domains of knowledge, skills and attitude.

Objectives:
 In education, learning objectives are brief statements that describe what students will be expected to learn by the end of a school year, course, unit, lesson, project, or class period.

Instructional objectives as learning outcomes:

An instructional objective is a statement that describes what the learner will be able to do after completing the instruction. Instructional objectives are specific, measurable, short-term, observable student behaviours. They indicate the desirable knowledge, skills, or attitudes to be gained.

Relationship between learning experiences and learning outcomes:

Learning objectives vs learning outcomes
 A learning objective is the instructor's purpose for creating and teaching their course.
 Learning outcomes, in contrast, are the specific, measurable knowledge and skills that the learner will gain by taking the course.
---General objectives vs outcomes:

 Objective – A course objective describes what a faculty member will cover in a course.
 Student Learning Outcome – A detailed description of what a student must be able to do at the conclusion of a course.
---Specific learning outcomes:

 Specific outcomes relate content to ability by formulating, as precisely as possible, the knowledge, skills, or abilities that a learner must acquire or improve during or by the end of a learning situation.
---Appropriate learning outcomes:

Good learning outcomes are focused on what the learner will know or be able to do by the end of a defined period of time, and indicate how that knowledge or skill will be demonstrated.

Lecture no 3:

Developing test specifications

Test specification and its importance:

The purpose of a Table of Specifications is to identify the achievement domains being measured and to ensure that a fair and representative sample of questions appears on the test.

 A Table of Specifications provides the teacher with evidence that a test has content validity, that it covers what should be covered.
 It provides clear instructions on the intent, performance, and construction of the project. It can reference the quality and standards which should be applied.
Building a table of specifications:

The TOS is typically constructed as a table that includes key information to help teachers align the learning objectives (the content and cognitive levels students are intended to achieve) with the class time spent and the number of test items.
---Preparing a list of instructional objectives:

A well-written objective should meet the following criteria:

(1) Describe a learning outcome,
(2) Be student oriented.
For example, "to correctly spell the spelling words on page seventeen" describes a learning outcome.
Outlining the course content:
When creating your course outline there are some essential pieces that you need to include:

 Course Description from the Academic Calendar
 Course Goals
 Student Learning Objectives/Outcomes
 Assessment Overview
 Assessment Plan
 Schedule of Activities
 Plagiarism Announcement
 Reading List
 Preparing the two-way charts (see the illustrative chart below)
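To make the two-way chart concrete, here is a minimal illustrative sketch (the content areas, cognitive levels, and item counts are hypothetical, not from the lecture); each cell shows how many test items target that content area at that cognitive level:

Content area    Knowledge   Comprehension   Application   Total items
Fractions           2             2              1             5
Decimals            1             2              2             5
Percentages         2             1              2             5
Total               5             5              5            15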

Lecture no 5:
Taxonomy of educational objectives:
Cognitive domain:
 The cognitive domain involves knowledge and the development of intellectual skills.
 This includes the recall or recognition of specific facts, procedural patterns, and concepts that serve in the development of intellectual abilities and skills.

Categories of cognitive domain:

Bloom's Taxonomy was created in 1956 under the leadership of educational psychologist Dr
Benjamin Bloom in order to promote higher forms of thinking in education, such as analysing
and evaluating concepts, processes, procedures, and principles, rather than just remembering
facts (rote learning). It is most often used when designing educational, training, and learning
processes.

Bloom's taxonomy of cognitive development

Bloom identified six levels within the cognitive domain:

 Knowledge is defined as remembering previously learned material. Knowledge represents the lowest level of learning outcomes in the cognitive domain.
 Comprehension is defined as the ability to grasp the meaning of material.
 Application refers to the ability to use learned material in new and concrete situations. This may include the application of such things as rules, methods, concepts, principles, laws, and theories.
 Analysis refers to the ability to break down material into its component parts so that its organizational structure may be understood.
 Synthesis refers to the ability to put parts together to form a new whole. This may involve the production of a unique communication (theme or speech), a plan of operations (research proposal), or a set of abstract relations (scheme for classifying information).
 Evaluation is concerned with the ability to judge the value of material (statement, novel, poem, research report) for a given purpose. The judgments are to be based on definite criteria.
The Three Domains of Learning

The committee identified three domains of educational activities or learning (Bloom, et al. 1956):

 Cognitive: mental skills
 Affective: growth in feelings or emotional areas
 Psychomotor: manual or physical skills

Lecture no 6:
Classification on the basis of functional role in instruction:
A Guide to Types of Assessment:
 Diagnostic
 Formative
 Interim
 Summative
The multi-faceted nature of assessments means that educators can leverage them in a number
of ways to provide valuable formal or informal structure to the learning process.

The main thing to remember is that the assessment is a learning tool. What all assessments
have in common is that they provide a snapshot of student understanding at a particular time
in the learning process.

Classification on the basis of interpretation role in instruction:

Defining the Instructional Framework
The following definitions of terms will help to interpret the framework and to clarify the relationships between and among the levels.

Instructional Models
 Models represent the broadest level of instructional practices and present a
philosophical orientation to instruction.
 Models are used to select and to structure teaching strategies, methods, skills, and
student activities for a particular instructional emphasis.
Instructional Strategies
 Within each model several strategies can be used.
 Strategies determine the approach a teacher may take to achieve learning objectives.
 Strategies can be classed as direct, indirect, interactive, experiential, or independent.

Lecture no 7:
Restricted Response Test:
 Imposes restrictions on pupils regarding the freedom of expression.
 Form and scope of the answer are restricted.

Extended-Response Question:

 Free to select any factual information to organise the answer.
 Assesses the ability to analyse a problem, organize ideas, and express them in one's own words.
 Choice of question and scoring procedure require attention.

Extended-response essays:

Ability to:
 Organize, produce, and express ideas.
 Summarize (write a summary of a story).
 Construct creative stories (e.g., a narrative essay).
Advantages of Essay-type Tests:
 Measure complex learning outcomes.
 Easy to construct, as they include few questions.
 Promote basic skills like summarizing and organising ideas.
 Develop vocabulary.
Limitations:
Unreliability of scoring; covers a limited sample of content, as few questions are developed.
Suggestions for constructing essay questions:
Use essay questions only for those learning outcomes that cannot be measured by objective items.

Lecture no 8:
Rubric:
 A scoring guide for the evaluation of a performance, a product, or a project.
 Makes grading and ranking simpler, more transparent, and fairer; consists of a checklist of items.

---Analytic scoring Rubric:

 Enables us to focus on one characteristic at a time.
 Gives specific feedback.
Advantages of Analytic Rubric
 Provides very specific feedback to students.
 Sub-skills can be weighted differently.
 Produces more accurate and consistent scores.

Disadvantages of Analytic Rubric

 More time required to score students' texts.
 Harder to develop.
 Difficult to avoid descriptor overlap and ambiguity.
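A minimal illustrative analytic rubric for an essay (the criteria and weights are hypothetical, not from the lecture) might look like this:

Criterion       Weight    Scored 4 (excellent) down to 1 (poor)
Content           40%
Organisation      30%
Language          20%
Mechanics         10%

Each criterion is scored separately and the weighted scores are summed, which is what makes the feedback specific and the sub-skill weighting possible.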

---Holistic scoring Rubric:

 A single score is given.
 For rapid scoring.

Advantages of Holistic Rubric

 Speeds up the marking process.
 Consistent marking across multiple students.
 Yields a single score.
 Easier to create than an analytic rubric.

Disadvantages of the Holistic Rubric

 Collapses a lot of criteria into a single grade.
 Difficult to assess students accurately.
 Does not provide specific feedback on strengths and weaknesses.
 Inconsistent across multiple scorers.

Scoring of Essay questions:

1. Decide what learning outcomes we are going to measure.
2. Phrase outcomes, questions, and the scoring rubric in accordance with the learning outcomes.
3. Give directions regarding the type of required answer.
Suggestions for scoring Essay questions:
Prepare an outline of the expected response in advance.
(a) Major points
(b) Characteristics of the answer
(c) The amount of credit.
Use of the most appropriate scoring rubric:
(a) Analytic rubric
(b) Holistic rubric
Decide how to handle factors that are irrelevant to the learning outcomes:
For example:
Legibility of handwriting, spelling, sentence structure, and neatness.
Evaluate all responses to one question before going on to the next one:
For
(a) Maintaining a uniform standard
(b) Improving reliability.

(a) A student's general impression may influence the scoring of a test,
so,
(b) Conceal students' identity.
For special decisions obtain two or more ratings:
For example,
Awarding scholarships and special trainings.

Lecture no 9:

Child observation and Assessment: Anecdotal records and Checklists

Writing Anecdotal Records
An anecdotal record is a written record of what children do or say during an everyday activity.
Being objective in anecdotal records:

Write down what you see and hear,


 Don't assume the child's feelings.

Write down facts rather than opinion,


 Use words that describe but do not judge.

Benefits of using checklists

 Can be used for multiple purposes (i.e., whole class, small group, individual).
 Frequent monitoring allows for adjustment of instruction.
 Several observers can gather information on children.
 Communicate with a child's family about progress shown on the checklist.

Child observation and Assessment:

 Recording information
o Anecdotal records
o Checklists
 Organising information
 Interpreting information
 Using information
The Assessment–Instruction cycle:
Observation
|
Documentation
(via anecdotal records and checklists)
|
Interpretation
(hypothesis setting)
|
Instruction
Decision to Assess:
 Teach
 Assess
 Adjust
Decisions over time using Benchmark Ratings:
o Organize and summarize the data.
o What does the information reveal about the child's development?
 What the child can do.
 What the child needs to learn.

Understanding Assessment information:

 Look for areas of concern for individuals or groups of children.
Gather information:
 Anecdotal records – document what children do in everyday activities.
 Checklists – document children's progress toward specific skills.
Analyse information:
 Determine what children can do.
 Determine what children need.

Lecture no 10
Observation
An observation is the act of noticing something or a judgment or inference from something
seen or experienced.
EXAMPLE: A doctor watching a patient's reaction to a medication

Types of observation:
 Naturalistic
Observing people or animals in their natural habitat: observing behaviour in natural settings, without the subjects' awareness and without any manipulation or intervention.

 Participant
Observing behaviour in a natural setting, through active participation in the situation and/or manipulation of the environment.

 Laboratory
Observing behaviour in a controlled lab setting, with or without participants' awareness, in an environment arranged by the researcher.
Potential Issues
Reactance
Reactance does not arise in naturalistic observation; in participant and laboratory observation it may appear, because in these two settings the situation is actively observed, subjects may be aware of the observer, and they can react to being studied.
Expectancy effects
May include conscious or unconscious influences on subject behaviour including creation of
demand characteristics that influence subjects, and altered or selective recording of
experimental results themselves
Logistic issues
The results appear to support an increased use of participant observation in qualitative
logistics research, particularly when investigating inter organizational aspects. The analysis
highlights values, general limitations and challenges of using participant observation in
logistics

Unexpected events
An unexpected event means that Situation understanding is imperfect, it usually occurs in
naturalism and participant environments because in the laboratory you are fully aware.
Control (internal validity)
Laboratory observation, as opposed to naturalistic observation, refers to observing the
behaviour of subjects that are in a controlled environment. Because of the controlled
environment variable factors can be controlled which therefore leads to a limited number of
possible responses
Ecological validity
When research has high ecological validity it means that behaviour recorded within the
research can be applied to everyday life. This means that the results are more useful
Ecological validity is central to any laboratory assessment in order to ensure that the situation is representative of the 'reality' of the family. The concept of ecological validity was first proposed by Brunswik.

DATA COLLECTION
An observation is a data collection method, by which you gather knowledge of the researched
phenomenon through making observations of the phenomena, as and when it occurs.

"LIVE" Vs. Recorded


Interval recording provides an estimate of behaviour. Whole interval recording typically
underestimates the overall duration of the behaviour because if behaviour occurs - but not for
the entire interval-it is not recorded or documented as occurring.
Whereas a live response collects all of the relevant data from the system that will be used to
confirm whether an incident occurred

Recording data, types

Some of the most significant record types are:

 Property records – title deeds and settlements.
 Accounting papers – including rentals, vouchers, surveys and valuations.
 Legal papers.
 Inventories.
 Correspondence.
 Enclosure papers.

Sampling methods
Time sampling
Time samples are a useful way to collect and present observation data over a long period of
time. Time samples can be used to observe a child's behaviour to identify possible concerns.

Event sampling
Event sampling is used to determine how often a specified event or behaviour occurs. Thus, it does not identify the causes of behaviour. A simple example would be to record how often and for how long a child engages with a play activity.
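A minimal illustrative event-sampling record (hypothetical times, not from the lecture) might look like this:

Behaviour observed: engaging with block play.
9:05–9:12 (7 min); 9:40–9:44 (4 min).
Total: 2 occurrences, 11 minutes.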

Lecture no 11:
TEACHER MADE TESTS
Teacher-made tests are normally prepared and administered for testing the classroom achievement of students, evaluating the method of teaching adopted by the teacher, and other curricular programmes of the school. The teacher-made test is one of the most valuable instruments in the hands of the teacher to serve this purpose.

Types of Test:

 Written/oral test
o Written tests are administered on paper or on a computer. The oral exam is a practice in many schools and disciplines in which an examiner poses questions to the student in spoken form.
 Performance (authentic) test
o In general, a performance-based assessment measures students' ability to apply the skills and knowledge learned from a unit or units of study.
 Observation (rubric scores)
o Classroom observation is another form of ongoing assessment. Most teachers can "read" their students, observing when they are bored, frustrated, excited, motivated, etc. These notes serve to document and describe student learning relative to concept development, reading, social interaction, communication skills, etc.
Building a good test:
 Focus the test on your lessons and objectives.
 Start your test with the easiest questions and move toward those that are more difficult.
 Test multiple learning levels.
 Give your students experience with the types of questions with which you will be testing.

Planning the test:

 Determine the purpose of the assessment (pre-test, formative, or summative).
 Develop the test specifications (this is the table you are creating).
 Select the appropriate assessment tasks (form and type).
 Prepare the relevant assessment tasks.
 Assemble the assessment.
Preparing the test:
 Proof your test ahead of time
 Have another teacher proof the test and give you feedback if possible
 Throw out poor items
 Have multiple forms of measure on the test, not just one type
Things to Consider:
 Questions need to be clear.
 They should not give away the answer.
 They should not hide the answer.
 Group like items together (by objective).
 Place easy items at the beginning and harder items at the end.
 Avoid patterns.

Lecture No 12:
Benefits of Standardized assessment:

 Can be obtained easily and is available at the researcher's convenience.
 Can be adopted and implemented quickly.
 Helps to score objectively.
 Provides external validity for the test.
 Helps to provide reference-group measures.
 Makes longitudinal comparisons possible.

Lecture No 13:
Questionnaire Structure
Content questions:
Group questions in sections, use filter questions, put more difficult questions towards the end, and don't give away the answers to later questions.

Questionnaire Layout
Allow space, give instructions, number items for computer coding, and use one side of the page only.
Telephone questionnaires:
 Retention problem.
 Provide guidance for the interviewer on the questionnaire.
 Make clear what is to be read out and what is not.

Types of Questions:
 Leading (Why don't you go more often to the supermarket?)
 Ambiguous (Is your work made more difficult because you are expected)
 Multiple content (Have you suffered from headache or sickness?)
 Implicit (How old is your car?)
 Embarrassing (sexual mores, alcohol consumption)
o Over-complex vocabulary: avoid big words and technical terms.
o Over-complex questions:
 Vague
 Too factual
 Complex
Lecture no 14
Research Interview
Interviewer:
Encodes questions, drawing on knowledge and perceptions of the respondent.
Decodes answers, taking into account own knowledge about the respondent and perception of the respondent's knowledge about self.
Respondent:
Decodes questions, taking into account own knowledge about the interviewer and perception of the interviewer's knowledge about self.
Encodes answers, drawing on knowledge and perceptions of the interviewer.
Different interviews:
 News
 Talk shows
 Documentary
o Interviews are seen as making sense of our lives.
 Epistemological position.
 Knowledge must be grounded in ontology.

Lecture no 15:
Measurement and Reliability
Measurement:
 A process where numbers are assigned to an abstract concept according to a set of rules.
Objective measures
Measurement Errors:
 Systematic
 Unsystematic
Controlling error:
Reduce inconsistency.
Reliability:
 The degree of consistency of a measure.
 Precision in measurement.
Test–Retest:
 Assesses the external consistency (stability) of a test.
Inter-rater:
 The degree to which different raters give consistent estimates of the same behaviour.
Split-half:
 The test is split into two parts, and both parts are given to one group of students at the same time.
Parallel forms:
 Assesses the consistency of the results of two tests constructed in the same way from the same content.
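A standard psychometric detail worth noting on split-half reliability (a textbook formula, not given in the lecture): the correlation between the two halves is usually stepped up with the Spearman–Brown formula, full-test reliability = 2r / (1 + r), where r is the half-test correlation. For example, if the two halves correlate at r = 0.6, the estimated full-test reliability is 2 × 0.6 / 1.6 ≈ 0.75.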

Lecture no 16:
Dispersion and variability;
The range;
The smallest score subtracted from the largest.
Example;
 Number of friends of Facebook users:
 22, 40, 53, 93, 98, 103, 108, 116, 121, 252.
 Range = 252 - 22 = 230
 Very biased by outliers.
Quartiles;
The three values that split the sorted data into four equal parts; the middle one is the median.
 Lower quartile = median of the lower half of the data.
 Upper quartile = median of the upper half of the data.
 Need to order the individuals first.
 One quarter of the individuals fall in each inter-quartile range.
Example;
 Number of friends on Facebook:
 Lower quartile = 53
 Median = (98 + 103) / 2 = 100.5
 Upper quartile = 116
Used on a box plot;
(e.g., ages of health and illness students, marked with upper quartile, median, and lower quartile)
Histogram;
The height/length of a bar indicates frequency.
Variance and standard deviation;
The sum of each individual's deviation from the mean.
Example;
Five ratings: 1, 2, 3, 3, 4; mean = 2.6.

Score   Mean   Deviation
1       2.6    -1.6
2       2.6    -0.6
3       2.6     0.4
3       2.6     0.4
4       2.6     1.4
Total           0

Variance;
The deviations always sum to zero, so we square them:

Score   Mean   Deviation   Squared deviation
1       2.6    -1.6        2.56
2       2.6    -0.6        0.36
3       2.6     0.4        0.16
3       2.6     0.4        0.16
4       2.6     1.4        1.96
Total                      5.20

Standard deviation;
 The variance has one problem: it is measured in squared units.
 The square root of the variance is measured in the original units; this is the standard deviation.
 Here, the sample variance is 5.20 / (5 - 1) = 1.30 and the standard deviation is √1.30 ≈ 1.14.
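A minimal Python sketch (not from the lecture; Python is used here purely for illustration) that reproduces these dispersion measures for the Facebook-friends data:

import statistics as st

friends = [22, 40, 53, 93, 98, 103, 108, 116, 121, 252]  # already sorted

data_range = max(friends) - min(friends)  # 252 - 22 = 230
lower_q = st.median(friends[:5])          # median of the lower half = 53
upper_q = st.median(friends[5:])          # median of the upper half = 116
median = st.median(friends)               # (98 + 103) / 2 = 100.5
variance = st.variance(friends)           # sample variance (divides by n - 1)
sd = st.stdev(friends)                    # sample standard deviation

print(data_range, lower_q, upper_q, median, round(sd, 1))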
Using SPSS;
 Gives mean, median, SD, variance, min, max, range, and skew.
 Can also produce stem-and-leaf plots and histograms.
Charts in SPSS;
 Use the Chart Builder from the Graphs menu rather than the legacy menus.
 Do this in SPSS first, before cutting and pasting into Word.
 Label the chart properly in SPSS or in Word.
Pie chart;
Distribution of employment categories in a bank.
Employment categories;
 Clerical
 Office trainee
 Security officer
 College trainee
 MBA trainee
 Technical
What is normally distributed?
 People's height
 IQ
 Hours spent viewing TV
 Blood pressure
What is not normally distributed?
● Income: many earn low incomes, a few earn very high incomes.
● Family size in the UK: most have one or two children; just a few have three or more children.

Lecture no 17, 18
Mean;
The arithmetic mean is the average of the values, found by adding the values together and dividing by the total number of values.
Median;
The middle value, found by listing all numbers in numerical order. If there is an odd number of values, this is the value in the middle; if there is an even number of values, we average the two numbers which appear in the middle.
Mode;
The number or numbers that occur most frequently.
Example;
The mean score on a set of 20 tests is 75. What is the sum of the 20 test scores?

Mean = Sum / Number of scores
75 = x / 20
x = 75 × 20 = 1500
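A similar quick check for the median and mode (illustrative numbers, not from the lecture): for the ordered scores 70, 75, 75, 80, 90 the median is 75 (the middle value) and the mode is also 75 (the most frequent value); for the even-sized set 70, 75, 80, 90 the median is (75 + 80) / 2 = 77.5.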

Lecture no 19
Multiple item tests;
 Easy to make
 Can be improved
 Easy to grade
 Can be stored
Student’s performance;
 Initial test
 More difficult
 Simpler test
Different learners;
An item answered correctly by low-performing students must be expected to be answered correctly also by high-performing students.
Bloom's taxonomy;
Create;
Produce new or original work.
Evaluate;
Justify a stand or decision.
Analyse;
Draw connection among ideas
Apply;
Use information in new situation
Understand;
Explain ideas or concepts.
Remember;
Recall facts and basic concepts.
Item analysis components
Item difficulty;
The proportion of examinees who answered the item correctly, out of the total number of test takers.
Discrimination index;
A number representing how well high-performing versus low-performing students were able to answer the test item correctly.
Distracter analysis;
Numbers that tell something about the difficulty and discriminating ability of both the correct choice and the alternative choices.
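A minimal Python sketch of these two indices (hypothetical response data, not from the lecture), using the upper and lower 27% groups:

# 1 = correct, 0 = incorrect; examinees already ranked by total test score
upper = [1, 1, 1, 0, 1, 1]  # top 27% group (hypothetical)
lower = [0, 1, 0, 0, 1, 0]  # bottom 27% group (hypothetical)

p_upper = sum(upper) / len(upper)  # proportion correct in the top group
p_lower = sum(lower) / len(lower)  # proportion correct in the bottom group

difficulty = (sum(upper) + sum(lower)) / (len(upper) + len(lower))
discrimination = p_upper - p_lower  # D index, ranges from -1 to +1

print(round(difficulty, 2), round(discrimination, 2))  # 0.58 0.5

A positive D means more high performers than low performers got the item right; a D near zero or negative flags an item to revise or discard.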
What cells are used by cnidarians to capture prey?
 Nematocyst
 Echinocytes
 Gametocytes
 Leukocytes
Whether to retain, revise, or eliminate an item depends on its discrimination, distracter information, and your instruction:
 Item difficulty
 Item discrimination
 Distracters
 Instruction
Lecture no 20
Task 1
 Represent the discrimination index range on a number line.
 Label the number line to show the ranges where poor, good, and very good items would fall.
 What would a discrimination index of 0 mean?
Task 2
Calculate the discrimination index of the question given in Task 1.
 Are the distracters good?
 What would you suggest be done with this item: keep it for future use, revise it, or discard it? Why?
 Distracter information can be analysed to determine which distracters were effective and which ones were not.
 Complete the given worksheet.
Item analysis
Item analysis means examining students' responses to individual test items in order to assess the quality of those items and of the test as a whole.
Difficulty index;
This is a measure of the percentage of students who answered the item correctly.
Discrimination;
If the same proportion of high and low achievers make the correct or incorrect response to an item, it suggests that the item is either too easy or is ambiguous.

Lecture no 21
Distracter analysis;
Example item: Thomas Jefferson, ____ Madison, and John Hancock were all framers of the Constitution.
○ James
○ Herbier
○ Philip
○ Terrance
Item writing guidance;
 Validity of the taxonomy of multiple-choice item-writing rules.
Quality over quantity;
Write as many plausible alternatives as can be written; three alternatives are generally sufficient.
 Be sure to analyse the distribution of the top and bottom 27%.
 Index of discrimination
 Item difficulty
 Distracter analysis
