Teaching II Concepts

This document discusses what makes a good test and different types of testing. It outlines three key aspects that determine test validity: content validity (does the test test what it's supposed to), construct validity (are test items well-built to evaluate the intended content), and face validity (does the test appear to measure the intended content). It also discusses test reliability in terms of consistency of results and scoring. The document then briefly covers test practicality, backwash effect, direct vs indirect test items, discrete-point vs integrative testing, norm-referenced vs criterion-referenced testing, and Bloom's and Marzano's taxonomies for organizing learning outcomes.


Teaching II Stage 1

What makes a “Good” Test Good?

Validity

Content Validity
Does the test test what it is supposed to test? An exam with proper content validity evaluates students' representative knowledge gained during a period of time.

Construct Validity
Does the test test what it is supposed to test and nothing else? An exam with proper construct validity has items and tasks "well built" to evaluate what it claims to test.

Face Validity
Does the test appear to test what it is trying to test? A test taker, an evaluator, a professor, etc. determines that the test "looks like" a test that correctly measures what it is intended to measure.

Reliability

Test Reliability
If it were possible to give the same person the same test at the same time, would the result be the same? Three things help keep test reliability:

Testing techniques: tests should be varied (different types of items) and familiar (items should be like the ones performed before).
Instructions: make the instructions clear!
Restrict the task: restrict or guide students so that everyone has the same chance and the intended aspects can be evaluated.

Scorer Reliability
If you gave the same test to two different people to mark (to evaluate it), would they give the same score? Teachers should always have an answer key, a rubric, or a rating scale in order to keep scorer reliability. The more open the questions, the more likely this type of reliability is to fail.
Practicality

How practical is the test to administer? In other words, which resources do we need to design, apply, and evaluate tests, and which resources do we actually have to do so? Practicality is measured in terms of time, personnel, space and equipment, and money.

Backwash

It is the effect that a final test has on the teaching program that leads up to it.

Kinds of Testing

Direct Test Items
Items where the student performs productive tasks (speaking or writing). Students answer with freedom, using language.

Indirect Test Items
Items where students demonstrate the sub-skills that help to build productive or receptive skills. Students answer controlled items. Ex: a grammar test seen as an indirect test of writing.

Discrete-Point Testing
Assumes that language is divided into different elements (such as grammar, vocabulary, etc.), which are tested by multiple-choice or recognition tasks. Students answer one thing at a time.

Integrative Testing
Evaluates language as a whole: students perform the different elements of language in one single event, such as comprehension tasks, speaking, writing, etc. Students answer using all the language elements required to accomplish the task.
Forms of Evaluation

Norm-Referenced Testing
Tests designed to notice differences between and among students' achievement. Ex: PLANEA, Prueba Nuevo León, etc.

Criterion-Referenced Testing
Tests designed to determine students' knowledge or abilities individually, checking their performance against a specified set of objectives for the subject or program.

Types of Tests

Proficiency Tests
Designed to measure people's abilities in a language.
Ex: the TOEFL test.

Achievement Tests
To establish how successful students have been in achieving course objectives.
Ex: your Teaching II test.

Diagnostic Tests
To identify students' strengths and weaknesses.
Ex: exams applied at the beginning of a new level/stage.

Placement Tests
Provide information to place students at a certain stage in a language program.
Ex: the placement test inside CCL or a language center.
Hybrid Approach to Test Design
Keep your test balanced in order to evaluate language features from a
natural context and real-life situations that are relevant to students’
communicative needs.
Bloom and Marzano’s Taxonomies
Taxonomies are a way of cataloguing and organizing learning outcomes into levels of complexity (from the lowest to the highest). Two of the most important are the ones designed by Benjamin Bloom and Robert Marzano.

Bloom’s taxonomy

Marzano’s Taxonomy
