Some writers report starting by articulating their principles. Bell and Gower (2011, pp. 142–6), for example, set out the following principles which they wanted to guide their writing:
Flexibility
From text to language
Engaging content
Natural language
Emphasis on review
Personalized practice
Integrated skills
Balance of approaches
Learning to learn
Professional respect
Before planning or writing materials for language teaching, there is one crucial
question we need to ask ourselves. The question should be the first item on the
agenda at the first planning meeting. The question is this: How do we think
people learn language?
All teachers develop theories of learning and teaching which they apply in their
classrooms (even though they are often unaware of doing so). Many researchers
(e.g. Schön, 1983) argue that it is useful for teachers to try to articulate their theories by reflecting on their practice. In this way
evaluators can make overt their predispositions and can then both make use of
them in constructing criteria for evaluation and be careful not to let them
weight the evaluation too much towards their own bias. At the same time
evaluators can learn a lot about themselves and about the learning and
teaching process.
Other teachers have articulated a variety of such theories as a result of reflecting on their own practice.
Pre-use evaluation
This involves making predictions about the potential value of materials for their users before the materials are used.
Whilst-use evaluation
This involves measuring the value of materials while using them or while
observing them being used. It can be more objective and reliable than pre-use
evaluation as it makes use of measurement rather than prediction. However, it
is limited to measuring what is observable (e.g. ‘Are the instructions clear to
the learners?’) and cannot claim to measure what is happening in the learners’
brains. It can measure short-term memory through observing learner
performance on exercises but it cannot measure durable and effective learning
because of the delayed effect of instruction. It is therefore very useful but
dangerous too, as teachers and observers can be misled by whether the
activities seem to work or not. Exactly what can be measured in a whilst-use
evaluation is controversial, but it can include the following:
Clarity of instructions
Clarity of layout
Comprehensibility of texts
Credibility of tasks
Achievability of tasks
Achievement of performance objectives
Potential for localization
Practicality of the materials
Teachability of the materials
Flexibility of the materials
Appeal of the materials
Motivating power of the materials
Impact of the materials
Effectiveness in facilitating short-term learning
Post-use evaluation
This involves measuring the actual effects of the materials on their users after the materials have been used. It can answer such important questions as:
What do the learners know which they did not know before
starting to use the materials?
What do the learners still not know despite using the materials?
What can the learners do which they could not do before starting
to use the materials?
What can the learners still not do despite using the materials?
To what extent have the materials prepared the learners for their
examinations?
To what extent have the materials prepared the learners for their
post-course use of the target language?
What effect have the materials had on the confidence of the
learners?
What effect have the materials had on the motivation of the
learners?
To what extent have the materials helped the learners to become
independent learners?
Did the teachers find the materials easy to use?
Did the materials help the teachers to cover the syllabus?
Did the administrators find the materials helped them to
standardize the teaching in their institution?
In other words, it can measure the actual outcomes of the use of the
materials and thus provide the data on which reliable decisions about the use,
adaptation or replacement of the materials can be made. Ways of measuring
the post-use effects of materials include, for example, tests, interviews and questionnaires.
The main problem, of course, is that it takes time and expertise to measure
post-use effects reliably (especially as, to be really revealing, there should be
measurement of pre-use attitudes and abilities in order to provide data for
post-use comparison). But publishers and ministries do have the time and can
engage the expertise, and teachers can be helped to design, administer and
analyse post-use instruments of measurement. Then we will have much more
useful information, not only about the effects of particular courses of materials
but about the relative effectiveness of different types of materials. Even then,
though, we will need to be cautious, as it will be very difficult to separate such
variables as teacher effectiveness, parental support, language exposure outside
the classroom, intrinsic motivation, etc.
Universal criteria
Universal criteria are those which would apply to any language learning
materials anywhere for any learners. So, for example, they would apply equally
to a video course for 10-year-olds in Argentina and an English for academic
purposes textbook for undergraduates in Thailand. They derive from principles
of language learning and the results of classroom observation and provide the
fundamental basis for any materials evaluation. Brainstorming a random list of
such criteria (ideally with other colleagues) is a very useful way of beginning an
evaluation, and the most useful way I have found of doing it is to phrase the
criteria as specific questions rather than to list them as general headings.
The universal criteria used in Tomlinson and Masuhara (2013) to evaluate six current global coursebooks, for example, were all phrased as questions beginning ‘To what extent is the course likely to …?’
If a question is an analysis question (e.g. ‘Does each unit include a test?’) then
you can only give the answer a 1 or a 5 on the 5-point scale which is
recommended later in this suggested procedure. However, if it is an evaluation
question (e.g. ‘To what extent are the tests likely to provide useful learning
experiences?’) then it can be graded at any point on the scale.
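As a minimal illustrative sketch in Python (the constants, the helper function and the sample scores below are hypothetical and not part of the procedure described in the text), the grading rule for the two types of question could be expressed like this:

# Illustrative sketch only: hypothetical names, not taken from the source.
ANALYSIS = "analysis"      # yes/no questions, e.g. 'Does each unit include a test?'
EVALUATION = "evaluation"  # 'to what extent' questions that can use the whole scale

def valid_score(question_type: str, score: int) -> bool:
    """Return True if a score respects the 5-point scale rules described above."""
    if score not in range(1, 6):      # every score must fall on the 5-point scale
        return False
    if question_type == ANALYSIS:
        return score in (1, 5)        # analysis questions are effectively yes/no
    return True                       # evaluation questions may take any point

print(valid_score(ANALYSIS, 3))       # False: an analysis question cannot be graded 3
print(valid_score(EVALUATION, 3))     # True: an evaluation question can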
Many criteria in published lists ask two or more questions and therefore cannot
be used in any numerical grading of the materials. For example, Grant (1987)
includes the following question which could be answered ‘Yes; No’ or ‘No; Yes’:
‘1 Is it attractive? Given the average age of your students, would they enjoy
using it?’ (p. 122). This question could be usefully rewritten as:
1 Is the book likely to be attractive to your students?
2 Is it suitable for the age of your students?
3 Are your students likely to enjoy using it?
It might seem obvious that every criterion should be answerable, but in many published lists of criteria some questions are so large and so vague that they cannot usefully be answered. Or sometimes they cannot be answered without reference to other criteria, or they require expert knowledge which the evaluator might not have.
For example:
Is it culturally acceptable?
Does it achieve an acceptable balance between knowledge about the
language and practice in using the language?
Does the writer use current everyday language, and sentence structures
that follow normal word order?
The questions should reflect the evaluators’ principles of language learning but
should not impose a rigid methodology as a requirement of the materials. If
they do, the materials could be dismissed without a proper appreciation of
their potential value. For example, the following questions make assumptions about pedagogical procedures which not all coursebooks actually follow:
Are the various stages in a teaching unit (what you would probably call
presentation, practice and production) adequately developed?
Do the sentences gradually increase in complexity to suit the growing
reading ability of the students?
Some terms and concepts which are commonly used in applied linguistics are
amenable to differing interpretations and are best avoided or glossed when
attempting to measure the effects of materials. A question containing such a term could be interpreted in a number of ways by different evaluators.
It is very useful to rearrange the random list of universal criteria into categories
which facilitate focus and enable generalizations to be made. An extra
advantage of doing this is that you often think of other criteria related to the
category as you are doing the categorization exercise.
Possible categories for universal criteria would be:
Learning Principles
Cultural Perspective
Topic Content
Teaching Points
Texts
Activities
Methodology
Instructions
Design and Layout
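Purely as an illustration of this categorization exercise (the criterion questions below are invented examples; only the category labels come from the list above), a brainstormed random list could be regrouped like this in Python:

# Hypothetical example of regrouping a brainstormed, random list of criteria
# under the categories listed above; the questions are invented for illustration.
from collections import defaultdict

brainstormed = [
    ("Texts", "To what extent are the texts likely to engage the learners?"),
    ("Instructions", "To what extent are the instructions likely to be clear to the learners?"),
    ("Activities", "To what extent do the activities provide opportunities for meaningful use of the language?"),
    ("Design and Layout", "To what extent is the layout likely to be clear and attractive to the learners?"),
]

by_category = defaultdict(list)
for category, question in brainstormed:
    by_category[category].append(question)

for category, questions in sorted(by_category.items()):
    print(category)
    for question in questions:
        print("  -", question)

Grouping the criteria in this way makes it easier to see gaps in coverage and, as noted above, often prompts further criteria for a sparsely filled category.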
Media-specific criteria
These are criteria which ask questions of particular relevance to the medium used by the materials being evaluated (e.g. criteria for books, for audio cassettes, for videos, etc.).
Content-specific criteria
These are criteria which relate to the topics and/or teaching points of the materials being evaluated. Thus there would be a set of topic-related criteria which would be relevant to the evaluation of a business English textbook but not to a general English coursebook; and there would be a set of criteria relevant to a reading skills book which would not be relevant to the evaluation of a grammar practice book, and vice versa.
Age-specific criteria
These are criteria which relate to the age of the target learners. Thus there would be criteria which are only suitable for 5-year-olds, for 10-year-olds, for teenagers, for young adults and for mature adults. These criteria would relate to cognitive and affective development, to previous experience, to interests and to wants and needs. Examples of such criteria would be:
Are there short, varied activities which are likely to match the attention
span of the learners?
Is the content likely to provide an achievable challenge in relation to the
maturity level of the learners?