ELT Handout

ENGLISH LANGUAGE ASSESSMENT: A Handout

Lailatul Musyarofah, S.Pd., M.Pd.

ENGLISH EDUCATION STUDY PROGRAM
STKIP PGRI SIDOARJO

Revised in 2016
COURSE SYLLABUS
ELT I
Course Description
This course is designed to broaden your perspective on language testing. You will be
introduced to the basic principles of language testing and to language testing in
Indonesian contexts, whether in elementary, secondary, or tertiary education, and
whether in formal or informal educational institutions. Then, you will be asked to
choose certain topics in language testing that interest you most. You will try to deepen
and broaden your understanding of those topics and share them with your peers
through classroom presentations. In addition, you will also be required to design and
construct a test, try it out, and analyze it using relevant procedures, either manually or
using appropriate software.
Course Objectives
Course Assessment
ELT II
Course Description
This course is designed to enrich and deepen your knowledge of language assessment
and other key areas related to testing and evaluation, as well as computer applications
in ELT. It also prepares you to be critical in analyzing existing tests found in school
formative/summative tests or in textbooks.
Course Objectives
Course Assessment
This handout is dedicated to my dearest students: you are my motivation to be a better
teacher.
CHAPTER I
Assessment, on the other hand, is an ongoing process that encompasses a much wider
domain. Whenever a student responds to a question, offers a comment, or tries out a
new word or structure, the teacher subconsciously makes an assessment of the
student’s performance.
Tests, then, are a subset of assessment; they are certainly not the only form of
assessment that a teacher can make. Tests can be useful devices, but they are only one
among many procedures and tasks that teachers can ultimately use to assess students.
Teaching
Assessment
Test
Teaching sets up the practice games of language learning: the opportunities for
learners to listen, think, take risks, set goals, and process feedback from the “coach”
and then recycle through the skills that they are trying to master.
Informal assessment can take a number of forms, starting with coaching and other
impromptu feedback to students. Examples include saying "Nice job!" or "Good
work!", asking "Did you say can or can't?", or putting an encouraging mark on some
homework. Informal assessment does not stop there. A good deal of a teacher's
informal assessment is embedded in classroom tasks designed to elicit performance
without recording results and making fixed judgments about a student's competence.
has accomplished objectives, but does not necessarily point the way to future
progress. Final exams in a course and general proficiency exams are examples of
summative assessment.
Can you offer your students an opportunity to convert tests into “learning
experiences”?
Discrete-point and integrative testing
This historical perspective underscores two major approaches to language testing that
were debated in the 1970s and early 1980s. These approaches still prevail today, even
if in mutated form: the choice between discrete-point and integrative testing methods.
Discrete-point tests are constructed on the assumption that language can be broken
down into its component parts and that those parts can be tested successfully. These
components are the skills of listening, speaking, reading, and writing, and various
units of language (discrete points) of phonology/graphology, morphology, lexicon,
syntax, and discourse.
What does an integrative test look like? Two types of tests have historically been
claimed to be examples of integrative tests: cloze tests and dictations. A cloze test is a
reading passage (perhaps 150 to 300 words) in which roughly every sixth or seventh
word has been deleted; the test-taker is required to supply words that fit into the
blanks. The cloze test is claimed to cover knowledge of vocabulary, grammatical
structure, discourse structure, reading skills and strategies, and an internalized
"expectancy" grammar (enabling one to predict an item that will come next in a
sequence).
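By way of illustration only (this procedure is not prescribed in the handout), the following minimal Python sketch shows how a fixed-ratio cloze passage could be generated by blanking roughly every seventh word; the function name, the deletion ratio, and the sample passage are assumptions for demonstration.

    # Minimal fixed-ratio cloze sketch (assumed helper, not from the handout):
    # every n-th word is replaced by a numbered blank and kept in the answer key.
    def make_cloze(passage, n=7):
        words = passage.split()
        answers = []
        for i in range(n - 1, len(words), n):
            answers.append(words[i])
            words[i] = "({}) ______".format(len(answers))
        return " ".join(words), answers

    sample = ("A cloze test is a reading passage in which roughly every sixth or "
              "seventh word has been deleted and the test-taker supplies the "
              "missing words from the surrounding context.")
    cloze_text, answer_key = make_cloze(sample, n=7)
    print(cloze_text)
    print(answer_key)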
reading with long pauses between every phrase (to give the learner time to write down
what is heard), and a third reading at normal speed to give test-takers a chance to check
what they wrote.
Exercises:
CHAPTER II
PRACTICALITY
RELIABILITY
A reliable test is consistent and dependable. If you give the same test to the same
students or matched students on two different occasions, the test should yield similar
results. The issue of the reliability of a test may be addressed by considering a number
of factors that may contribute to unreliability.
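As a hedged illustration of the test-retest idea above (not a procedure from the handout), the short Python sketch below correlates two sets of invented scores from the same students on two occasions; a correlation close to 1.0 would suggest consistent, and therefore reliable, results.

    # Hypothetical scores for the same ten students on two administrations.
    occasion_1 = [70, 65, 80, 90, 55, 75, 60, 85, 72, 68]
    occasion_2 = [72, 63, 78, 92, 58, 74, 61, 83, 70, 66]

    def pearson(x, y):
        # Pearson correlation coefficient between two equal-length score lists.
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
        sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
        return cov / (sd_x * sd_y)

    print(round(pearson(occasion_1, occasion_2), 2))  # close to 1.0 = consistent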
Student-Related Reliability
category are such factors as a test-taker’s “test-wiseness” or strategies for efficient
test taking.
Rater Reliability
Human error, subjectivity, and bias may enter into the scoring process. Inter-rater
unreliability occurs when two or more scorers yield inconsistent scores for the same
test, possibly because of lack of attention to scoring criteria, inexperience, inattention,
or even preconceived biases. Intra-rater unreliability, meanwhile, is a common
occurrence for classroom teachers because of unclear scoring criteria, fatigue, bias
toward particular "good" and "bad" students, or simple carelessness.
Test Administration Reliability
Unreliability may also stem from the conditions in which the test is administered:
street noise that prevents a tape recording from being heard clearly, photocopying
variations, the amount of light in different parts of the room, variations in temperature,
and even the condition of desks and chairs.
Test Reliability
If a test is too long, test-takers may become fatigued by the time they reach the later
items and hastily respond incorrectly. Timed tests may discriminate against students
who do not perform well on a test with a time limit.
VALIDITY
Validity is the extent to which inferences made from assessment results are
appropriate, meaningful, and useful in terms of the purpose of the assessment.
Consequential Validity
Face Validity
Face validity refers to the degree to which a test looks right, and appears to measure
the knowledge or abilities it claims to measure, in the judgment of the test-takers who
take it, the administrative personnel who decide on its use, and other psychometrically
unsophisticated observers. Face validity will likely be high if learners encounter
AUTHENTICITY
WASHBACK
Washback is the effect of testing on teaching and learning. Washback enhances a
number of basic principles of language acquisition: intrinsic motivation, autonomy,
self-confidence, language ego, interlanguage, and strategic investment, among others.
One way to enhance washback is to comment generously and specifically on test
performance.
Informal performance assessment is by nature more likely to have built-in washback
effects because the teacher is usually providing interactive feedback. Formal tests can
also have positive washback, but they provide no washback if the students receive only
a simple letter grade or a single overall numerical score.
iii. Give an appropriate relative weight to each section.
4. Is the procedure face valid and “biased for best”?
a. Directions are clear
b. The structure of the test is organized logically
c. Its difficulty level is appropriately pitched
d. The test has no “surprises”, and
e. Timing is appropriate
Test-Taking Strategies
Before the Test
1. Give students all the information you can about the test: exactly what
will the test cover? Which topics will be the most important? What
kind of items will be on it? How long will it be?
2. Encourage students to do a systematic review of the material. For
example, they should skim the textbook and other materials, outline
major points, and write down examples.
3. Give practice tests or exercises, if available.
4. Facilitate formation of a study group, if possible.
5. Caution students to get a good night’s rest before the test.
6. Remind students to get to the classroom early.
During the Test
1. After the test is distributed, tell students to look over the whole test
quickly in order to get a good grasp of its different parts.
2. Remind them to mentally figure out how much time they will need for
each part.
3. Advise them to concentrate as carefully as possible.
4. Warn students a few minutes before the end of the class period so that
they can finish on time, proofread their answers, and catch careless
errors.
After the Test
1. When you return the test, include feedback on specific things the
student did well, what he or she did not do well, and, if possible, the
reasons for your comments.
2. Advise students to pay careful attention in class to whatever you say
about the test results.
3. Encourage questions from students.
4. Advise students to pay special attention in the future to points on
which they are weak.
(Keep in mind that what comes before and after the test also contributes to its face
validity. Good class preparation will give students a comfort level with the test,
and good feedback, that is, washback, will allow them to learn from it.)
5. Are the test tasks as authentic as possible?
a. Is the language in the test as natural as possible?
b. Are items as contextualized as possible rather than isolated?
c. Are topics and situations interesting, enjoyable, and/or humorous?
d. Is some thematic organization provided, such as through a story line or
episode?
e. Do tasks represent, or closely approximate, real-world tasks?
Consider the following two examples. Which one is the better multiple-choice task?
Example 2
1. There are three countries I would like to visit. One is Italy.
a. The other is New Zealand and other is Nepal
b. The others are New Zealand and Nepal
c. Other are New Zealand and Nepal
a. Swimming
b. To swimming
c. To swim
3. When Mr. Brown designs a website, he always creates it ____________.
a. Artistically
b. Artistic
c. Artist
4. Since the beginning of the year, I _______ at Millennium Industries.
a. Am working
b. Had been working
c. Have been working
5. When Mona broke her leg, she asked her husband ______ her to work.
a. To drive
b. Driving
c. Drive
Exercises:
1. Review the five basic principles of language assessment that are defined and
explained in this chapter. Be sure to differentiate among several types of
evidence that support the validity of a test, as well as four kinds of reliability.
2. It is stated that "Washback is the effect of testing on teaching and learning.
Washback enhances a number of basic principles of language acquisition:
intrinsic motivation, autonomy, self-confidence, language ego, interlanguage,
and strategic investment, among others." Discuss the connection between
washback and the above-named general principles of language learning and
teaching. Come up with some specific examples for each.
3. Washback is described here as a positive effect. Can tests provide negative
washback? Explain.
CHAPTER III
Basic terminology:
1. Multiple-choice items are all receptive, or selective, response items in that the
test-taker chooses from a set of responses rather than creating a response (the
latter is commonly called a supply type of response). Other receptive item types
include true-false questions and matching lists.
2. Every multiple-choice item has a stem, which presents a stimulus, and several
(usually between three and five) options or alternatives to choose from.
3. One of those options, the key, is the correct response, while the others serve
as distractors.
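To make the terminology concrete, here is a minimal, purely illustrative Python sketch of a multiple-choice item as a data structure; the class and field names are assumptions, and the sample item reuses Example 2 from Chapter II.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class MultipleChoiceItem:
        stem: str           # the stimulus presented to the test-taker
        options: List[str]  # the alternatives to choose from
        key: int            # index of the correct response

        def distractors(self):
            # every option that is not the key serves as a distractor
            return [o for i, o in enumerate(self.options) if i != self.key]

    item = MultipleChoiceItem(
        stem="There are three countries I would like to visit. One is Italy.",
        options=["The other is New Zealand and other is Nepal",
                 "The others are New Zealand and Nepal",
                 "Other are New Zealand and Nepal"],
        key=1,
    )
    print(item.distractors())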
(say 99 percent of respondents get it right) or too difficult (99 percent get it
wrong) really does nothing to separate high-ability and low-ability test-takers.
It is not really performing much "work" for you on a test. The formula looks
like this:

IF = (# of Ss answering the item correctly) / (total # of Ss responding to that item)

For example, if you have an item on which 13 out of 20 students respond
correctly, your IF index is 13 divided by 20, or .65 (65 percent). Appropriate
test items will generally have IFs that range between .15 and .85.
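A worked illustration of the IF formula above, using the same 13-out-of-20 figures; the Python lines are only a sketch and are not part of the handout.

    # Item facility: proportion of students answering the item correctly.
    responses = [1] * 13 + [0] * 7          # 13 of 20 students answered correctly
    item_facility = sum(responses) / len(responses)
    print(item_facility)                    # 0.65
    print(0.15 <= item_facility <= 0.85)    # True: within the suggested range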
b. Item Discrimination (ID) is the extent to which an item differentiates
between high- and low-ability test-takers. Suppose your class of 30 students
has taken a test. Once you have calculated final scores for all 30 students,
divide them roughly into thirds; that is, create three rank-ordered ability
groups including the top 10 scores, the middle 10, and the lowest 10. To find
out which of your 50 or so test items were most powerful in discriminating
between high and low ability, eliminate the middle group, leaving two groups
with results that might look something like this:

Item #23                       # Correct    # Incorrect
High-ability Ss (top 10)           7             3
Low-ability Ss (bottom 10)         2             8
would be zero. In most cases you would want to discard an
item that scored near zero.
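As a worked illustration using the counts in the table for item #23, the sketch below applies the usual textbook computation of ID (correct responses in the high group minus correct responses in the low group, divided by the number of students in one group); this formulation is stated here as an assumption rather than quoted from the handout.

    # Item discrimination for item #23: 7 of the top 10 and 2 of the bottom 10 correct.
    high_correct, low_correct, group_size = 7, 2, 10
    item_discrimination = (high_correct - low_correct) / group_size
    print(item_discrimination)   # 0.5, a moderately discriminating item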
SCORING, GRADING, AND GIVING FEEDBACK
Scoring
Grading
Giving Feedback
You might choose to return the test to the student with one of, or a combination of,
the possibilities below:
1. A letter grade
2. A total score
3. Four subscores (speaking, listening, reading, writing); see the sketch after this list
for one way subscores might be weighted into a total
4. For the listening and reading sections
a. An indication of correct/incorrect responses
b. Marginal comments
5. For the oral interview
a. Scores for each element being rated
b. A checklist of areas needing work
c. Oral feedback after the interview
d. A post-interview conference to go over the results
6. On the essay
a. Scores for each element being rated
b. A checklist of areas needing work
c. Marginal and end-of-essay comments, suggestions
d. A post-test conference to go over work
e. A self-assessment
7. On all or selected parts of the test, peer checking of results
8. A whole-class discussion of results of the test
9. Individual conferences with each student to review the whole test
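In connection with options 2 and 3 above, the following minimal Python sketch shows one way four section subscores could be combined into a weighted total; the weights and scores are invented, since the handout does not prescribe any particular weighting.

    # Hypothetical section weights (they should sum to 1) and subscores out of 100.
    weights = {"listening": 0.25, "speaking": 0.25, "reading": 0.25, "writing": 0.25}
    subscores = {"listening": 80, "speaking": 70, "reading": 90, "writing": 75}

    total = sum(weights[section] * subscores[section] for section in weights)
    print(total)   # weighted total score out of 100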
Review the nine different options for giving feedback to students on assessment.
Review the practicality of each and determine the extent to which practicality is
justifiably sacrificed in order to offer better washback to learners.
CHAPTER IV
STANDARDIZED TESTING
A standardized test presupposes certain standard objectives, or criteria, that are held
constant from one form of the test to another. The criteria in large-scale standardized
tests are designed to apply to a broad band of competencies that are usually not
exclusive to one particular curriculum. A good standardized test is the product of a
thorough process of empirical research and development. It dictates standard
procedures for administration and scoring. And finally, it is typical of a
norm-referenced test, the goal of which is to place test-takers on a continuum across a
range of scores and to differentiate test-takers by their relative ranking.
Examples:
Advantages:
A ready-made previously validated product that frees the teacher from having
to spend hours creating a test.
Administration to large groups can be accomplished within reasonable time
limits.
In the case of multiple-choice formats, scoring procedures are streamlined (for
either scannable computerized scoring or hand-scoring with a hole-punched
grid) for fast turnaround time.
There is often an air of face validity to such authoritative-looking instruments.
Disadvantages:
The inappropriate use of such tests, for example, using an overall proficiency
test as an achievement test simply because of the convenience of the
standardization.
Some standardized tests include tasks that do not directly specify performance
in the target objective.
Exercises:
1. Tell the class about the worst test experience you’ve ever had. Briefly analyze
what made the experience so unbearable, and try to come up with suggestions
for improvement of the test and/or its administrative conditions.
2. Compile a brief list of pros and cons of standardized testing. Cite illustrations
of as many items in each list as possible.
3. Select a standardized test that you are quite familiar with. Mentally evaluate
that test using the five principles of practicality, reliability, validity,
authenticity, and washback.
CHAPTER V
ASSESSING LISTENING
Micro skills (attending to the smaller bits and chunks of language, in a more
bottom-up process)
11. Recognize cohesive devices in spoken discourse.
INTENSIVE LISTENING
(b) My girlfriend can go to the party
Paraphrase recognition
RESPONSIVE LISTENING
SELECTIVE LISTENING
Flight seven-oh-six to Portland will depart from gate seventy-three at nine-thirty p.m.
Flight ten-forty-five to Reno will depart at nine-fifty p.m. from gate seventeen.

squirrel was on top of the bird feeder while the larger bird sat at the bottom of the
feeder screeching at the squirrel. The smaller bird was flying around the squirrel,
trying to scare it away.
1. Note-taking
2. Editing
3. Interpretative tasks
4. Retelling
Exercises:
1. Given that we spend much more time listening than we do speaking, why are
there many more tests of speaking than listening?
2. Look at the list of micro- and macro skills of listening. Brainstorm some tasks
that assess those skills.
3. It is noted that one cannot actually observe listening and reading performance.
Do you agree? And do you agree that there isn’t even a product to observe for
speaking, listening and reading? How then, can one infer the competence of a
test-taker to speak, listen, and read a language?
CHAPTER VI
ASSESSING SPEAKING
4. Interactive. The difference between responsive and interactive speaking is in
the length and complexity of the interaction, which sometimes includes
multiple exchanges and/or multiple participants.
5. Extensive (monologue). It includes speeches, oral presentations, and story-
telling, during which the opportunity for oral interaction from listeners is
either highly limited (perhaps to nonverbal responses) or ruled out altogether.
Macro skills
13. Use appropriate styles, registers, implicature, redundancies, pragmatic
conventions, conversation rules, floor-keeping and -yielding, interrupting,
and other linguistic features in face-to-face conversations.
14. Convey links and connections between events and communicate such
relations as focal and peripheral ideas, events and feelings, new information,
generalization and exemplification.
15. Convey facial features, kinesics, body language, and other nonverbal cues
along with verbal language.
16. Develop and use a battery of speaking strategies, such as emphasizing key
words, rephrasing, providing a context for interpreting the meaning of words,
appealing for help, and accurately assessing how well your interlocutor is
understanding you.
IMITATIVE SPEAKING
INTENSIVE SPEAKING
Tell me that you aren't interested in tennis.
Tell him to come to my office at noon.
Remind him what time it is.

conveys its most novel ideas as if they were timeless truths, while American writing
exaggerates; if you believe half of what is said, that's enough. The former uses
understatement; the latter, overstatement. There are also disadvantages to each
characteristic approach. Readers who are used to being screamed at may not listen
when someone chooses to whisper politely. At the same time, the individual who is
used to a quiet manner may reject a series of loud imperatives.
RESPONSIVE SPEAKING
My weekend in the mountains was fabulous. The first day we backpacked into the
mountains and climbed about 2,000 feet. The hike was strenuous but exhilarating.
By sunset we found these beautiful alpine lakes and made camp there. The sunset
was amazingly beautiful. The next two days we just kicked back and did little day
hikes, some rock climbing, bird watching, swimming, and fishing. The hike out on
the next day was really easy, all downhill, and the scenery was incredible.
Test-takers respond with two or three sentences.
INTERACTIVE SPEAKING
Interview
EXTENSIVE SPEAKING
Oral presentations
Picture-cued story-telling
Visual pictures
Photographs
Diagrams
Charts
Listening comprehension
Production (oral discourse features, fluency, and interaction with the hearer)
Longer texts are presented for the test-taker to read in the native language and then
translate into English. Those texts could come in many forms: a dialogue, directions
for assembly of a product, a synopsis of a story or play or movie, directions on how to
find something on a map, and other genres.
Exercises:
1. Review the five basic types of speaking that were outlined at the beginning.
Offer examples of each and pay special attention to distinguishing between
imitative and intensive, and between responsive and interactive.
2. What makes speaking difficult? Devise a list that could form a set of
specifications to pay special attention to in assessing speaking.
CHAPTER VII
ASSESSING READING
Micro skills
Macro skills
8. Recognize the rhetorical forms of written discourse and their significance for
interpretation.
9. Recognize the communicative functions of written texts, according to form
and purpose.
10. Infer context that is not explicit by using background knowledge.
11. From described events, ideas, etc., infer links and connections between events,
deduce causes and effects, and detect such relations as main idea, supporting
idea, new information, generalization, and exemplification.
12. Distinguish between literal and implied meanings.
13. Detect culturally specific references and interpret them in a context of the
appropriate cultural schemata.
14. Develop and use a battery of reading strategies, such as scanning and
skimming, detecting discourse markers, guessing the meaning of words from
context, and activating schemata for the interpretation of texts.
1. Multiple Choice

Perceptive Reading
Minimal Pair Distinction
Circle "S" for same or "D" for different.
1. led    let    S  D
2. bit    bit    S  D
3. seat   sit    S  D
4. too    to     S  D

Grapheme Recognition Task
Circle the "odd" item, the one that doesn't "belong."
1. piece  peace  piece
2. book   book   boot

Selective Reading
1. He's not married. He's _____________.
a. young
b. single
c. first
d. a husband
2. If there's no doorbell, please ______ on the door.
a. kneel
b. type
c. knock
d. shout
3. The bank robbery occurred ________ I was in the restaurant.
a. that
b. during
c. while
d. which
2. Picture-cued Items

Perceptive Reading
Test-takers hear: Point to the part of the picture that you read about here.
Test-takers see the picture and read the sentence written on a separate card:
1. The man is reading a book.
2. The cat is under the table.

Selective Reading
Test-takers read a three-paragraph passage, one sentence of which is: During at least
three quarters of the year, the Arctic is frozen.
Click on the chart that shows the relative amount of time each year that water is
available to plants in the Arctic.
3. Editing

Selective Reading
1. The abrasively action of the wind wears away softer layers of rock.
2. There are two way of making a gas condense: cooling it or putting it under
pressure.
3. Researchers have discovered that the application of bright light can sometimes be
uses to overcome jet lag.

Interactive Reading
(1) Ever since super market first appeared, they have been take over the world.
(2) Supermarkets have changed people's life styles, yet and at the same time, changes
in people's life styles have encourages the opening of supermarkets. (3) As a result
this, many small stores have been forced out of business. (4) Moreover, some small
stores will be able to survive this unfavorable situation.
Extensive Reading (Skimming Tasks)
What is the main idea of this text?
What is the author's purpose in writing the text?
What kind of writing is this?
How easy or difficult do you think this text will be?

Summarizing and Responding
Write a summary of the text. Your summary should be about one paragraph in length
(100-150 words) and should include your understanding of the main idea and
supporting details.
Exercises:
1. Look at the list of micro- and macro skills of reading. Brainstorm some tasks
that assess those skills.
2. What makes reading difficult? How can these difficulties be managed in a test?
CHAPTER VIII
ASSESSING WRITING
1. Imitative. This category includes the ability to spell correctly and to perceive
phoneme-grapheme correspondences in the English spelling system.
2. Intensive (controlled). Producing appropriate vocabulary within a context,
collocations and idioms, and correct grammatical features up to the length of a
sentence.
3. Responsive. Assessment tasks require learners to perform at a limited
discourse level, connecting sentences into a paragraph and creating a logically
connected sequence of two or three paragraphs.
4. Extensive. It implies successful management of all the processes and
strategies of writing for all purposes, up to the length of an essay, a term
paper, a major research project report, or even a thesis.
Micro skills
4. Use acceptable grammatical systems (tense, agreement, pluralization),
patterns, and rules.
5. Express a particular meaning in different grammatical forms.
6. Use cohesive devices in written discourse.
Macro skills
IMITATIVE WRITING
1. Copying
2. Listening cloze selection tasks
3. Picture-cued tasks
4. Converting numbers and abbreviations to words
Spelling Tasks and Detecting Phoneme-Grapheme Correspondences
1. Spelling tests
2. Picture-cued tasks
3. Multiple choice techniques
4. Matching phonetic symbols
A paragraph is read at normal speed, usually two or three times; then the teacher asks
students to rewrite the paragraph to the best of their recollection. In one of several
variations of the dicto-comp technique, the teacher, after reading the passage,
distributes a handout with key words from the paragraph, in sequence, as cues for the
students.
Picture-cued tasks
picture on the wall over the couch. Test-takers are asked to describe the
picture using four of the following prepositions: on, over, under, next to,
around. As long as the prepositions are used appropriately, the criterion is
considered to be met.
3. Picture sequence description. A sequence of three to six pictures depicting a
story line can provide a suitable stimulus for written production. The pictures
must be simple and unambiguous, because an open-ended task at the selective
level would give test-takers too many options.
Paraphrasing
d. The overall effectiveness or impact of the paragraph as a whole
3. Development of main and supporting ideas across paragraphs
a. Addressing the topic, main idea, or principal purpose
b. Organizing and developing supporting ideas
c. Using appropriate details to undergird supporting ideas
d. Showing facility and fluency in the use of language
e. Demonstrating syntactic variety
CHAPTER IX
ALTERNATIVE ASSESSMENT
Characteristics:
Performance-based assessment
The characteristics:
3. Tasks are meaningful, engaging, and authentic.
4. Tasks call for the integration of language skills.
5. Both process and product are assessed.
6. Depth of a student’s mastery is emphasized over breadth.
Portfolios
Materials:
Collecting
Reflecting
Assessing
Documenting
Linking
Evaluating
Benefits:
Journals
Steps:
5. Provide optimal feedback in your responses:
a. Cheerleading feedback, in which you celebrate successes with the
students or encourage them to persevere through difficulties,
b. Instructional feedback, in which you suggest strategies or materials,
suggest ways to fine-tune strategy use, or instruct students in their
writing, and
c. Reality-check feedback, in which you help the students set more
realistic expectations for their language abilities.
Observations
Frequency of student-initiated responses
Quality of teacher-elicited responses
Latencies, pauses, silent periods
Length of utterances
Evidence of listening comprehension
Affective states
Evidence of attention-span issues, learning style preferences
Students' verbal or nonverbal responses to materials, types of activities, and
teaching styles.
Culturally specific linguistic and nonverbal factors.
CHAPTER X
ERROR ANALYSIS
Error Analysis
Brown (2000: 218) describes error analysis as arising from the fact that learners
do make errors, and that these errors can be observed, analyzed, and classified to
reveal something of the system operating within the learner; this led to a surge in the
study of learners' errors. Nation and Newton (2009: 141), meanwhile, argue that error
analysis is the study of errors to see what processes gave rise to them, and that
correcting errors is best done when there is some understanding of why the error
occurred.
communicative contexts is present. However, some correction is beneficial. The
lecturer has to determine what errors to correct and how to correct them.
Dulay, Burt, and Krashen (1982: 140) state that the instant and widespread appeal
of error analysis (EA) stemmed perhaps from the refreshing alternative it provided to
the prevailing but more restrictive "contrastive analysis" approach to errors. The
contrastive analysis (CA) treatment of errors rested on a comparison between the
learner's native and target languages. It was thought that contrastive analysis of
the learner's two languages would predict the areas in the target language that would
pose the most difficulty.
Kinds of Error
violation of one segment of a sentence, allowing the hearer/reader to make an
accurate guess about the intended meaning.
Lennon (1991), in Brown (2000: 223), suggests that two related dimensions of
error, domain and extent, should be considered in any error analysis. Domain is the
rank of linguistic unit (from phoneme to discourse) that must be taken as context in
order for the error to become apparent, and extent is the rank of linguistic unit that
would have to be deleted, replaced, supplied, or reordered in order to repair the
sentence.
To describe the errors, Dulay, Burt, and Krashen (1982: 138-139) list six of the
most common errors produced by learners, as shown in the following table:
Learners' Common Errors
2. Double marking a semantic feature (e.g. past tense): She didn't went back
5. Using two or more forms in random alternation even though the language requires
the use of each only under certain conditions: random use of he and she regardless of
the gender of the person of interest
In the research literature, L2 errors have most frequently been compared to
errors made by children learning the target language as their first language and to
equivalent phrases or sentences in the learner’s mother tongue. Those comparisons
have yielded the two major error categories in this taxonomy: developmental errors
and interlingual errors. Two other categories that have been used in comparative
analysis taxonomies are derived from the first two: ambiguous errors, which are
classifiable as either developmental or interlingual; and, of course, the grab-bag
category, Other, for errors that are neither.
Developmental errors are errors similar to those made by children learning the
target language as their first language, for example, dog eat it. Developmental errors
consist of omissions, additions, misformations, and misordering. Interlingual errors
are similar in structure to a semantically equivalent phrase or sentence in the learner’s
native language, for example the man skinny. Interlingual errors here, simply refer to
L2 errors that reflect native language structure, regardless of the internal processes or
external conditions that spawned them. Ambiguous errors are those that could be
classified equally well as developmental or interlingual. That is because these errors
reflect the learner’s native language structure, and at the same time, they are of the
type found in the speech of children acquiring a first language, for example in the
utterance I no have car. Other errors are the errors that do not fit into any other
category, for example in the utterance She do hungry (Dulay, Burt, and Krashen,
1982: 165-172). Several studies have been conducted on errors made by learners;
although precise proportions differ from study to study, all the investigations
conducted to date have reached the same conclusion: the majority of errors made by
second language learners are not interlingual, but developmental.
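Purely as an illustration of how the comparative taxonomy described above might be applied to annotated data (this is not a procedure from Dulay, Burt, and Krashen), the Python sketch below tallies a handful of errors by category; the annotations simply reuse the example utterances given in this chapter.

    from collections import Counter

    # Example utterances from this chapter, paired with their category labels.
    annotated_errors = [
        ("dog eat it", "developmental"),
        ("the man skinny", "interlingual"),
        ("I no have car", "ambiguous"),
        ("She do hungry", "other"),
    ]

    counts = Counter(category for _, category in annotated_errors)
    for category, n in counts.most_common():
        print(category, n)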
In this study, error analysis was used to categorize and to find out the types of
students' errors that were given corrective feedback by the lecturer. Students' errors
were classified, analyzed, commented on, and grouped based on their categories.
Thus, a clear explanation of the findings presented in this study can be given. The
linguistic categories proposed by Corder (1981), namely errors in grammar and errors
in vocabulary, are used to categorize the students' errors given corrective feedback by
the lecturer.
Sources of Errors
To understand why certain errors are made and what cognitive strategies, styles,
and even personality variables underlie them, identifying the sources of errors is an
important step toward understanding how learners' cognitive and affective processes
relate to the linguistic system and toward formulating an integrated understanding of
the process of second language acquisition. Brown (2000: 224-227) states that there
are four factors causing errors: interlingual transfer, intralingual transfer, context of
learning, and communication strategies.
Researchers have found that the early stages of language learning are
characterized by a predominance of interference (interlingual transfer), but once
learners have begun to acquire parts of the new system, more and more intralingual
transfer (generalization within the target language) is manifested. This of course
follows logically from the tenets of learning theory. As learners progress in the
second language, their previous experience and their existing subsumers begin to
include structures within the target language itself. Negative intralingual transfer, or
overgeneralization, has already been illustrated in such utterances as "Does John can
sing?", "He goed," and "I don't know what time is it." Once again, the lecturer and
researcher cannot always be certain of the source of an apparent intralingual error,
but repeated systematic observations of a learner's speech data will often remove the
ambiguity of a single observation of an error.
A third major source of error, although it overlaps both types of transfer, is the
context of learning. "Context" refers, for example, to the classroom with its lecturer
and its materials in the case of school learning, or the social situation in the case of
untutored second language learning. Students often make errors because of a
misleading explanation from the lecturer, faulty presentation of a structure or word in
a textbook, or even because of a pattern that was rotely memorized in a drill but
improperly contextualized. Two vocabulary items presented contiguously, for
example point at and point out, might in later recall be confused simply because of
the contiguity of presentation.
The following table gives examples of errors and their causes, found by Richards
(1974), Duskova (1969), and Lennon (1991), cited in Nation and Newton (2009: 141):
Learners’ Errors and Causes
According to Ur (1996: 242), the use of positive and negative feedback should
not be separated since both support the students’ output in the second language.
When a lecturer gives negative feedback to students, they may think that
something was wrong with their response in the L2. To avoid over-judgmental
behavior, lecturers are expected to provide positive feedback as well and to help
students understand that mistakes are natural.
In analyzing the data, the sources of the errors were considered as the students'
background in producing the errors. So, besides being categorized into grammatical
errors, meaning errors, and mispronunciations, students' errors were also analyzed
based on their sources.
CHAPTER XI
BARRETT TAXONOMY
Although the classroom teacher who focuses on these higher-level questions has to
allow more time for the varied responses, the degree of learning that can be evaluated
is at least as great, and often greater, since an adequate response to questions at these
levels must incorporate the information that could have been gathered by "fact"
questions. Therefore, as much or more can be gained for the teacher and for the
students from a lesson with only a few higher-level questions and the varied
responses, since all the "facts" are checked while the students get practice in using
higher cognitive thinking processes.
1.2 Recall
1.2.1 Recall of Details
1.2.2 Recall of Main Ideas
1.2.3 Recall of a Sequence
1.2.4 Recall of Comparison
1.2.5 Recall of Cause and Effect Relationships
1.2.6 Recall of Character Traits
2.0 Reorganization
2.1 Classifying
2.2 Outlining
2.3 Summarizing
2.4 Synthesizing
4.0 Evaluation
4.1 Judgments of Reality or Fantasy
4.2 Judgments of Fact or Opinion
4.3 Judgments of Adequacy and Validity
4.4 Judgments of Appropriateness
4.5 Judgments of Worth, Desirability and Acceptability
5.0 Appreciation
5.1 Emotional Response to the Content
5.2 Identification with Characters or Incidents
5.3 Reactions to the Author’s Use of Language
5.4 Imagery
The Complete Barrett Taxonomy
1.0 Literal Comprehension
Literal comprehension focuses on ideas and information which are explicitly stated in
the selection. Purposes for reading and teacher’s questions designed to elicit
responses at this level may range from simple to complex. A simple task in literal
comprehension may be the recognition or recall of a single fact or incident. A more
complex task might be the recognition or recall of a series of facts or the sequencing
of incidents in a reading selection. (Or these tasks may be related to an exercise
which may itself be considered as a reading selection.) Purposes and questions at this
level may have the following characteristics.
1.1 Recognition
Recognition requires the student to locate or identify ideas or information explicitly
stated in the reading selection itself or in exercises which use the explicit ideas and
information presented in the reading selection. Recognition tasks are:
"When." (This exercise, even though it involves the recognition of sixteen
separate details, is considered one question.) Skim (or read) for locations,
names, or dates.
1.1.4 Recognition of Comparison
The student is requested to locate or identify likenesses and differences in characters,
times, and places that are explicitly stated in the selection. (Levels 1.1.4, 1.2.4, and 3.4
involve comparisons. Seeing likenesses and differences, seeing relationships, and
making comparisons between characters, incidents, and situations are fairly
synonymous at these levels. However, when a cause and effect relationship exists, it
shall be classified at the next higher level of the taxonomy provided the criteria of
some other level are not more nearly met. There is a level for recognition of
comparisons, a level for recall of comparisons, and a level for inferring of
comparisons. Examples for each of these levels define what constitutes a comparison
question.)
recognized.)
4. Find the sentence that tells why _____ did (or was) _____ .
5. What happened to shorten his stay at _____ ?
1.2 Recall
Recall requires the student to produce from memory ideas and information explicitly
stated in the reading selection. Recall tasks are:
1.2.1 Recall of Details
The student is asked to produce from memory facts such as the names of characters,
the time of the story, or the place of the story. (Recall of almost any explicit fact or
detail from the selection is included. A single detail as well as several details
scattered throughout the story are both level 1.2.1 questions.)
4. Over what kind of land did they travel? (This question requires recall of
details from several places in the story; however, no sequencing or
reorganization is asked for.)
5. Write a list of all the details you can remember.
6. Recite the _____ listed.
the recall but are not sufficient.)
3. Number these _____ in the order in which they took place in the selection.
4. Make a chart that shows the _____ throughout the selection.
5. Tell in correct order _____ .
6. What happened on the fourth day?
5. Why did _____ decide to _____ ?
6. How did _____ accomplish _____ ? (This action in such instances causes an
effect.)
7. What was the reaction of _____ to _____ ?
2.0 Reorganization
Reorganization requires the student to analyze, synthesize, and/ or organize ideas or
information explicitly stated in the selection. To produce the desired thought product,
the reader may utilize the statements of the author verbatim or he or she may
paraphrase or translate the author’s statements. Reorganization tasks are:
2.1 Classifying
In this instance the student is required to place people, things, places, and / or events
into categories. (When pupils are asked to recognize or recall certain kinds of details,
relationships, or traits, they are in effect classifying, but at a lower level of the
taxonomy. The key to this level is that things must be sorted into a category or a
class.)
EXAMPLES AND PATTERNS:
1. Read each phrase below. Does it tell you “who,” “what,” “when,” “how,” or
“where?”
2. “Sank here.” (A phrase taken from a selection)
3. Which of the following are _____ ?
4. Place the following under the proper heading.
5. Classify the following according to _____ .
6. Which of the following _____ does not belong? (Where based upon the
selection and not merely a matter of word meaning. Care also has to be
exercised in such cases to make sure the inferring of a comparison, level 3.4,
is not necessitated.)
2.2 Outlining
The student is requested to organize the selection in outline form using direct
statements or paraphrased statements from the selection.
2.3 Summarizing
The student is asked to condense the selection using direct or paraphrased statements
from the selection. (This level is interpreted as also being applicable when less than
the entire selection is condensed.)
EXAMPLES AND PATTERNS:
1. What has happened up to this point?
2. Tell the story in your own words.
2.4 Synthesizing
In this instance, the student is requested to consolidate explicit ideas or information
from more than one source. (The pupil is required to put together information from
more than one place. More is required than just a collecting of information for this
information must become fused so that information from more than one source
provides a single answer to a question. While the taxonomy refers to a single
selection, quite often in order t answer a question, information obtained from a
previous selection or selections must be utilized. The intent of the taxonomy, despite
its restrictive reference to the selection, is not only the reading comprehension
questions from review units, lessons, and exercise, but also many other reading
comprehension questions.)
70
by the student may be either convergent or divergent in nature and the student may be
asked to verbalize the rationale underlying his or her inferences. In general, then,
inferential comprehension is stimulated by purposes for reading and teachers’
questions which demand thinking and imagination that go beyond the printed page.
(Personal experience is interpreted to include formal learning experiences, as well as
those things which the reader has personally experienced in a first hand situation.
Prior knowledge, regardless of where this knowledge came from, is an integral part of
inference. The crucial factor distinguishing inference questions from recognition and
recall questions is that their answers are not explicitly stated but must be inferred.)
3.2 Inferring Main Ideas
The student is required to provide the main idea, general significance, theme, or
moral which is not explicitly stated in the selection. (Such questions may pertain to
part of a selection.)
3.4 Inferring Comparisons
The student is required to infer likenesses and differences in characters, times, places,
things, or ideas. Such inferential comparisons revolve around ideas such as: here and
there, then and now, he and she, and she and she.
9. What is the result of _____ ?
10. What might have happened if _____ ?
11. What makes this _____ a _____ ?
12. What makes you think _____ ?
13. Did _____ because _____ ?
14. How could _____ ?
15. Why is it helpful to have a _____ ?
3. Will he help them?
4. Someone may predict _____ ?
5. Read _____ and guess what will happen.
4.0 Evaluation
Purposes for reading and teacher’s questions, in this instance, require responses by
the student which indicate that he or she has made an evaluative judgment by
comparing ideas presented in the selection with external criteria provided by the
teacher, other authorities, or other written sources, or with internal criteria provided
by the reader’s experiences, knowledge, or values. In essence evaluation deals with
judgment and focuses on qualities of accuracy, acceptability, desirability, worth, or
probability of occurrence. (Evaluative judgment is the key to this category.)
Evaluative thinking may be demonstrated by asking the student to make the following
judgments.
EXAMPLES AND PATTERNS:
1. Is _____ imaginary?
2. How many unreal things can you find?
3. Did _____ really happen?
4. Is _____ fact or fiction?
5. Is _____ possible?
3. Why was _____ true? not true?
4. Is adequate information given about _____ ?
5. Is _____ really _____ ?
6. Which ideas are still accepted and which ones are no longer believed?
7. Label each _____ true or false.
8. Find proof from other sources that _____ ?
5.0 Appreciation
Appreciation involves all the previously cited cognitive dimensions of reading, for it
deals with the psychological and aesthetic impact of the selection on the reader.
Appreciation calls for the student to be emotionally and aesthetically sensitive to the
work and to have a reaction to the worth of its psychological and artistic elements.
Appreciation includes both the knowledge of and the emotional response to literary
techniques, forms, styles, and structures.
EXAMPLES AND PATTERNS:
1. What words will describe the feelings of _____ ?
2. How did they feel when _____ ?
3. Will _____ be difficult for _____ ? (This goes beyond level 3.7, prediction.)
4. Would you _____ ?
5. Encourage pupils to identify with _____ .
6. Do you think he will follow the advice?
7. Did she act recklessly? (This would be an example of level 4.5, except that in
order to make a decision as to whether or not she acted recklessly, the
situation must be identified with.)
8. Write your own ending to this story. (It is believed that this question goes
beyond inferring of a sequence and the making of a prediction and falls at
level 5.2.)
9. Devise a conversation between _____ and _____ .
10. What would you do if you were _____ ?
11. What is _____ thinking?
12. How would you have felt if you were _____ ?
13. How did _____ talk when _____ ?
14. Relate _____ to your own life.
EXAMPLES AND PATTERNS:
1. Questions requiring recognition or discussion of qualifiers.
2. Why is _____ a good term?
3. Demonstrate how _____ ’s voice sounded when he spoke _____ .
4. What personifications, allegory, puns, malapropisms did the author use?
5. What “loaded” language was used? propaganda? understatements?
exaggerations? emotion-laden words?
6. How did the author express the idea of _____ ?
7. In what way is the word _____ used in the selection?
5.4 Imagery
In this instance, the reader is required to verbalize his or her feelings with regard to
the author's artistic ability to paint word pictures which cause the reader to visualize,
smell, taste, hear, or feel.
8. Reenact the _____ scene.
9. How does _____ make you feel?
10. Take the role of _____ . (This goes beyond identification)
11. Questions requiring appreciation of dialogue may require utilization of this
level.
12. What _____ has the author created?
13. How did the author cause you to _____ ?
CHAPTER XII
CORRECTIVE FEEDBACK
correct structure of the erroneous utterance, or iii) providing metalinguistic
information describing the nature of the error, or any combination of these.
Metalinguistic clues: Without providing the correct form, the lecturer poses questions
or provides comments or information related to the formation of the student's
utterance, e.g. "Do we say it like that?", "Is it feminine?"
Elicitation: The lecturer directly elicits the correct form from the student by asking
questions (e.g. "How do we say that in French?"), by pausing to allow the student to
complete the lecturer's utterance (e.g. "It's a..."), or by asking the student to
reformulate the utterance (e.g. "Say that again."). Elicitation questions differ from
questions that are defined as metalinguistic clues in that they require more than a
yes/no response.
There are different types of corrective feedback that can be given in response to
students' errors. This study uses Lyster and Ranta's (1997) model to identify the
lecturer's techniques in giving corrective feedback, for two reasons. First, it offers a
complete set of types covering both explicit and implicit corrective feedback. Second,
its explanations clearly differentiate one type from another. After the study, the types
most frequently used in the oral classroom will be identified.
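As a purely illustrative sketch of the kind of frequency count the study describes, the Python lines below tally coded feedback moves from a hypothetical transcript; the category labels follow Lyster and Ranta's types, but the data are invented.

    from collections import Counter

    # Hypothetical coded feedback moves from one classroom transcript.
    feedback_moves = ["recast", "elicitation", "metalinguistic clues", "recast",
                      "explicit correction", "clarification request", "recast"]

    frequencies = Counter(feedback_moves)
    for feedback_type, count in frequencies.most_common():
        print(feedback_type, count)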
REFERENCES
Lyster, R. & Ranta, L. (1997). Corrective Feedback and Learner Uptake: Negotiation
of Form in Communicative Classrooms. Studies in Second Language
Acquisition, 19, 37-66. In Heift, T. (2004). Corrective Feedback and Learner
Uptake in CALL. Cambridge University Press 16: 418.
Nation, I. S. P. and Newton, J. (2009). Teaching ESL/EFL Listening and Speaking.
UK: Routledge
Ur, P. (1996). A Course in Language Teaching. Cambridge: Cambridge University
Press.