
CIT 210: Instructional Methods and Strategies

Topic 9: Evaluation of Learning Effectiveness

The teacher needs a way to determine how successful the instruction has been for the class in general and for individual students in particular.

Assessments can show whether or not the newly designed instruction has met its objectives.

ASSESSMENT: This is a measure of performance.

EVALUATION: This refers to the interpretation of assessment, either in terms of grades (e.g. A, B, C) or qualities (e.g. Good, Fair, Poor). It involves the collection and analysis of data for the purpose of making decisions.

TEST: This refers to any procedure used to assess the performance described in an objective.

Types of Evaluation.

1. Criterion-referenced Evaluation: In this type of evaluation, the standard of performance is set before assessment is undertaken. The score is interpreted in terms of the behaviors that an examinee is expected to be able to perform. Deciding what is satisfactory or not is often a major issue in criterion-referencing.
2. Norm-referenced Evaluation: In this type of evaluation, the standard of performance is set after assessment, as a function of the performance of the entire class or group. Norm-referenced tests are interpreted in terms of a student's performance relative to other students' performance: the performance of one learner is judged by comparing it with the performance of the other learners in the class.
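The contrast between the two interpretations can be sketched in a few lines of Python. This is purely illustrative: the 80% mastery cut-off and the percentile grade boundaries are hypothetical choices, not values from these notes.

```python
# Criterion-referenced vs. norm-referenced interpretation of the same score.
# The 80% cut-off and the grade boundaries below are illustrative only.

def criterion_referenced(score, cutoff=80):
    """The standard is fixed before assessment, regardless of the group."""
    return "Satisfactory" if score >= cutoff else "Not yet satisfactory"

def norm_referenced(score, all_scores):
    """The standard emerges from the group: grade by relative standing."""
    rank_below = sum(s < score for s in all_scores)
    percentile = 100 * rank_below / len(all_scores)
    if percentile >= 75:
        return "A"
    elif percentile >= 50:
        return "B"
    elif percentile >= 25:
        return "C"
    return "D"

scores = [55, 62, 70, 78, 85, 91]
print(criterion_referenced(85))        # judged against the fixed cut-off
print(norm_referenced(85, scores))     # judged against the rest of the class
```

Note that the same raw score of 85 can earn different norm-referenced grades in different classes, while its criterion-referenced interpretation never changes.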

Statistical Considerations.

Anticipated Variability: Is instruction planned for learners who differ considerably with regard to the course content, or does one expect to work with learners who are uniformly uninformed? Is instruction individualized to the extent of creating variability among the learners? That is, everyone is expected to learn, but better learners are expected to learn and improve more than those with lesser ability.

When individual differences are present at the initial stages, then we expect variability (i.e. a spreading out of performance). In this case, the instruction is individualized but the length of time is fixed. This is the main set-up in Kenya's education system: pupils enter class one and are all expected to be in class eight at the same time to attempt the KCPE. The system does not give room for pupils to repeat classes on account of being slow to grasp what is taught in class.

If one plans to teach each individual until they all reach a predetermined standard of performance
and then stop teaching, then one should expect little variation if any. In this case no individual
differences should be expected in the final assessments. Here, instructional time is
individualized to reduce variability in performance.

Validity: This refers to the degree to which a test measures what it is intended to measure, so that its scores can be interpreted correctly.

Content Validity: This refers to the degree to which the test content matches the instructional objectives.

Reliability: This is the degree to which a measure can be trusted to give stable, consistent results over multiple conditions (different occasions, forms, or raters).
Improving Tests.

1. Improving a test involves both editorial review and empirical data analysis.
2. Items are carefully edited for accuracy, grammar, unintended clues, and derogatory language.
3. Editing can be done by the author of the items. However, this exercise can be done more effectively by a panel of editors. While in our schools individual teachers attempt to edit their work themselves, it might be advisable to set up editing teams according to departments.

Item Difficulty Index: This is the proportion of examinees who answer an item correctly; items that nearly everyone gets right or nearly everyone gets wrong tell us little. A related statistic, the item discrimination index, provides information about whether an item helps us tell successful learners from unsuccessful ones.
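These two indices reduce to simple arithmetic. The sketch below uses made-up response data: difficulty is the proportion correct, and a basic upper-lower discrimination index compares how the top and bottom scorers did on the item.

```python
# Item difficulty (p) and a simple upper-lower discrimination index (D).
# item_marks: 1/0 per examinee on one item; total_scores: their overall scores.
# All data below is invented for illustration.

def difficulty_index(item_marks):
    """p = proportion of examinees who answered the item correctly."""
    return sum(item_marks) / len(item_marks)

def discrimination_index(item_marks, total_scores, fraction=0.5):
    """D = p(upper group) - p(lower group), groups formed by total score."""
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    k = max(1, int(len(order) * fraction))
    lower = [item_marks[i] for i in order[:k]]    # weakest examinees overall
    upper = [item_marks[i] for i in order[-k:]]   # strongest examinees overall
    return sum(upper) / len(upper) - sum(lower) / len(lower)

item   = [1, 1, 0, 1, 0, 0, 1, 0]          # who got this item right
totals = [90, 85, 40, 75, 35, 50, 80, 45]  # overall test scores
print(difficulty_index(item))               # 0.5: half the class got it right
print(discrimination_index(item, totals))   # 1.0: item separates the groups well
```

Here D is near its maximum because only high scorers answered the item correctly; an item with D near zero (or negative) would be a candidate for revision.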

Distractor Analysis: This is the process of improving multiple-choice test items by identifying the distractors (wrong choices) as effective or not. The wrong choices given in a test item should not give away the correct alternative; they should be as close to the correct choice as possible, so that candidates must think carefully in making their choices. Ineffective distractors can be modified to salvage an item that might otherwise be rejected on the basis of its discrimination value.
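The first step of a distractor analysis is simply tallying how often each option was chosen; a distractor nobody picks is doing no work. A minimal sketch, with invented responses to one four-option item:

```python
# Distractor analysis for one multiple-choice item (made-up data).
# An effective distractor attracts at least some examinees; one that is
# never chosen adds nothing to the item and should be rewritten.
from collections import Counter

def distractor_counts(choices, correct, options="ABCD"):
    """Tally how often each option was chosen; flag unused distractors."""
    tally = Counter(choices)
    unused = [opt for opt in options if opt != correct and tally[opt] == 0]
    return tally, unused

choices = ["B", "B", "A", "B", "C", "B", "A", "B"]  # answers to one item
tally, unused = distractor_counts(choices, correct="B")
print(tally)   # how attractive each option was
print(unused)  # distractors nobody chose: candidates for revision
```

A fuller analysis would also split the tallies by high- and low-scoring groups, since a good distractor should attract weaker examinees more than stronger ones.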

Criterion-referenced assessment is important for evaluating both learners' progress and the quality of the instruction itself. The results of criterion-referenced assessment indicate to the teacher exactly how well learners were able to achieve each instructional objective. They also indicate which components of the instruction worked well and which ones need to be revised. Criterion-referenced assessment enables learners to reflect on their own performance.

All of the above is possible because a criterion-referenced assessment is an instrument composed of items or performance tasks that directly measure the skills described in one or more instructional objectives. The test items correspond one-to-one with the instructional objectives: the performance required in the objective must match the performance required in the test item.

Thus, clarity in specifying instructional objectives and the criteria for adequate (acceptable)
performance is necessary as a guide to adequate test construction.

Types of Criterion-Referenced Tests


1. Entry Skills (Behaviors) Tests: These are given to learners before they begin instruction
to assess the learners’ mastery of prerequisite skills. They cover areas of information and
skills that are determined to be essential for learning the planned material.
2. Pretests: They cover the material that is planned to be taught. They are administered to learners before instruction commences, for the sake of efficiency. If the skills to be taught are found to have already been mastered by the target learners, then the planned instruction is not necessary. If the skills are found to have been only partially mastered, then perhaps a review lesson is sufficient and the learners are allowed to move on to new learning. If, as expected, few or none of the tested skills have already been mastered, then full-scale instruction is conducted. A pretest is valuable only when it is likely that learners may have partial or full mastery of the skills that the lesson is being planned to cover.
3. Practice (Embedded) Tests: These are administered to find out whether the planned
learning is actually taking place. They are used as a tool to monitor the progress of
learning. They focus on small segments of the material being learned. They provide for
active learner participation during instruction. They are used to provide corrective
feedback and to monitor the pace of instruction.
4. Posttests: Like the pretests, they cover the material that is planned for the lesson. They
are administered after instruction has been conducted and concluded. They measure
attainment of the objectives of the instruction. They assess all the objectives covered by
the planned instruction. They may be used to assess learners’ performance and to assign
credit for successful completion of the study.

Authentic Assessment: There is a close relationship between learning contexts and assessment contexts. Authentic assessment:

-involves many different skills and aspects of knowledge,

-may continue over a relatively long period as teaching and learning proceed,

-is non-algorithmic, i.e. the path of action is not fully specified in advance,

-can involve individuals working alone or in small groups,

-gives students much more autonomy and involvement in planning their own tasks and assessment procedures; the locus of control rests with the learner, and test takers are active participants in assessment activities,

-offers real challenges based on real tasks,

-often culminates in the learner's own research product, for which content is mastered as a means rather than an end,

-can be complex to score, so a wide range of scoring techniques may be required,

-does not allow the construction of parallel forms, so claims of high predictive validity cannot be made.

Why do we need Rubrics? They:

-are easy to use and explain,

-make teachers' expectations very clear,

-provide students with more informative feedback about their strengths and areas in need of improvement than traditional forms of assessment,

-increase inter-rater reliability where multiple graders are assessing student work,

-support learning and the development of skills and understanding.

5. Placement Tests: These help to identify the starting point for each learner. They can be used to exempt some learners from taking some courses in a programme if it is established that those learners already have mastery of the skills which such courses are designed to equip the learners with.

6. Diagnostic Tests: They are constructed to measure prerequisite skills. They are helpful for learners who are falling behind in group instruction. Such learners can be helped through remedial instruction on prerequisite skills.
7. Progress Tests: They are tests which are administered after lessons. They help ensure students have mastered the lesson's objectives. They can be used as practice tests over desired objectives.

Assessment Rubrics: A rubric is a list of criteria against which to compare elements of a performance. It is essentially a one- or two-page document that describes varying levels of quality, from excellent to poor. It is used for relatively complex assignments and gives learners informative feedback about their work in progress. While the formats of rubrics vary, two features are common: (1) a list of criteria (what counts) in a project or assignment and (2) gradations of quality.
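The two common features, criteria and gradations of quality, map naturally onto a small data structure. The sketch below is illustrative only: the criteria, level descriptors, and four-point scale are invented, not taken from these notes.

```python
# A minimal rubric as a data structure: criteria mapped to quality levels.
# The criteria and descriptors below are hypothetical examples.

rubric = {
    "Organisation": {4: "Ideas flow logically", 3: "Mostly logical",
                     2: "Some order evident",   1: "No clear order"},
    "Evidence":     {4: "Claims fully supported", 3: "Most claims supported",
                     2: "Little support",         1: "No support"},
}

def score_work(ratings, rubric):
    """Sum a grader's level ratings (1-4 here) across all rubric criteria."""
    assert set(ratings) == set(rubric), "every criterion must be rated"
    return sum(ratings.values()), 4 * len(rubric)  # (earned, maximum)

earned, maximum = score_work({"Organisation": 3, "Evidence": 4}, rubric)
print(f"{earned}/{maximum}")
```

Because the per-criterion levels are kept rather than collapsed into one mark, the learner sees exactly which criterion needs improvement, which is the informative feedback the notes describe.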
