30 Essentials For Using AI
30 Essentials for Using Artificial Intelligence
www.cambridge.org
Information on this title: www.cambridge.org/9781009804523
© Cambridge University Press & Assessment 2024
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press & Assessment.
First published 2024
20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1
Printed in Great Britain by CPI Group (UK) Ltd, Croydon CR0 4YY
A catalogue record for this publication is available from the British Library
ISBN 978-1-009-80452-3 Paperback
ISBN 978-1-009-80453-0 eBook
ISBN 978-1-009-80450-9 Cambridge Core
Cambridge University Press & Assessment has no responsibility for the persistence
or accuracy of URLs for external or third-party internet websites referred to in this
publication and does not guarantee that any content on such websites is, or will
remain, accurate or appropriate.
Published online by Cambridge University Press
Acknowledgements
The authors and publishers acknowledge the following sources of
copyright material and are grateful for the permissions granted. While
every effort has been made, it has not always been possible to identify
the sources of all the material used, or to trace all copyright holders. If
any omissions are brought to our notice, we will be happy to include
the appropriate acknowledgements on reprinting and in the next update
to the digital edition, as applicable.
Text
Ess10: Ernst Klett Sprachen GmbH for the adapted text from Going
Mobile by Nicky Hockly and Gavin Dudeney. Copyright © 2014
DELTA Publishing/Ernst Klett Sprachen GmbH. Reproduced with kind
permission; Taylor and Francis Group for the adapted text from Digital
Literacies 2e by Mark Pegrum, Nicky Hockly and Gavin Dudeney.
Copyright © 2022 Informa UK Limited, an Informa Group Company.
Reproduced with permission of the Taylor and Francis Group through
PLSclear; Ess26: Framework taken from ‘Australian Framework for
Generative Artificial Intelligence in Schools’. Copyright © 2023 State of
New South Wales (Department of Education).
Typesetting
Typesetting by QBS Learning.
Why I wrote this book
I first encountered the term artificial intelligence in the early 1980s.
I had just started a BA degree in English Literature and was living
in shared accommodation with four other students. Three of my
housemates were studying a degree programme called Artificial
Intelligence. I’d never heard the term before. When I asked them what
artificial intelligence was, they explained that figuring this out was a key
part of their studies. These were the early days in the field of ‘AI’, as I
quickly learned to call it. AI-related studies focused on understanding
how humans think, I was told, in order to build up a model of the
mind that might then be replicated through computer programming.
So my housemates spent a lot of time coding, and reading the work
of cognitive psychologists. My own interests as an English literature
student seemed a million miles away from those of my housemates.
Although I didn’t know it at the time, AI would cross my path many
times more, and become firmly entangled in my own professional life as
an English language teacher.
The second time AI crossed my path was after I had been teaching
English for about ten years. I was invited to join the materials writing
team of one of the first fully online English language schools, in the mid-
1990s. The language learning materials were online, and the learners
were based all over the world. In these early days, the online materials
reflected what was possible (in terms of computer programming) at the
time. Learners were presented with theme-based units that included
reading and listening texts with automated comprehension questions.
Automated grammar, vocabulary and pronunciation activities based on
these texts then provided learners with language work in context. It was
cutting-edge stuff, not just because it included some solid programming,
but because it looked great design-wise, and it was underpinned by a
robust communicative language teaching approach. Remember that
this was several years before audio communication tools like Skype
even existed, and video conferencing tools like Zoom were very far in
the future. The AI that underpinned these automated language learning
activities is known as rule-based AI, and we explore it further in 2. It’s
an approach to programming that still underlies many of the automated
language learning activities that you find online today.
Fast-forward to today, and generative AI is the new kid on the
block. The logic model that underpins rule-based AI, which meant
programming computers with instructions such as ‘if x, then y’, has
given way to something a lot more sophisticated, often referred to as
‘deep learning’, which has been evolving since the 1980s and 1990s.
Deep learning techniques are based on artificial neural networks, which
are called this because they attempt to simulate (in mathematical terms)
the biological neural networks that are found in the human brain. Deep
learning enables neural networks to learn from vast datasets and to
carry out complex tasks like image and speech recognition. We explore
this concept further in 2, but suffice it to say here that generative AI
represents a fundamental shift in the field of computing science, and one
that we don’t yet fully understand. This can feel very unsettling.
The aim of this book, then, is to help you get to grips with AI by giving
you an idea of how it can be used in English language teaching and
learning. The book is divided into four sections. Section A gives you
some background on AI to help you understand what it is and how
it works, in very general terms. I personally find that understanding
where something comes from and how it works makes it less scary.
You may want to start here, especially if you would like to brush up
on some of the basics of AI. This will then give you the information
you need to explore the use of AI in English language teaching and
learning in Section B. This is followed by what, to me, is the most
interesting part of the book. In Section C, we think about the big
questions that using AI raises, not just in ELT but for society in general.
There are some important caveats and challenges to the use of any
educational technology, and AI is no exception. So you might want to
start with some of the chapters in this section instead, if this is what
interests you and you are already familiar with the key elements of
AI. Finally, Section D looks at how AI – especially generative AI – can
help us develop as teachers and learners. If topics such as wellbeing or
autonomous learning are of particular interest to you, you might want
to start with some of the chapters in this section. In short, there are
many ways to read this book. Wherever you start or finish, I hope you
come away with a clearer idea of AI in the field of ELT. I hope you also
come away with some ideas around what AI might mean for us in wider
society, both now and in the future.
A: Setting the scene
1 What is AI?
2 What is generative AI?
3 AI and language learning
4 AI and creativity
5 Technology and the hype cycle
https://doi.org/10.1017/9781009804509.001 Published online by Cambridge University Press
1 What is AI?
With the amount of hype and hysteria that surrounded the arrival
of ChatGPT in late 2022, you’d be forgiven for thinking that AI is
a completely new technology for teachers and learners. Not so. The
earliest and simplest forms of AI in ELT can be traced back to the
1960s, when CALL (Computer Assisted Language Learning) emerged
as an area of study. In these early days of AI, computers could be
programmed to provide limited responses to prompts. Computers
were large and expensive, and tended to be found in universities. The
advent of the personal computer, however, meant that by the late
1980s and early 1990s computers began to appear in schools and in
people’s homes. Language learning software with simple gap-fill and
text reconstruction activities became available. As computing power
increased, and computers developed multimedia capabilities, other uses
for AI in language learning emerged. This was the heyday of the CD-
ROM. By the early 1990s, some language learning software began to
integrate voice recognition to support pronunciation. Since then, AI
has become more powerful, and technology – especially in the form of
mobile devices – has become more ubiquitous.
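As a concrete illustration, a gap-fill checker of that era might have been programmed along the following lines. This is a minimal sketch of my own, not code from any actual product, and the sentences and answers are invented:

```python
# A 1980s-style rule-based gap-fill activity: the 'intelligence' is
# nothing more than matching the learner's answer against a fixed
# list of acceptable responses typed in by the programmer.

EXERCISE = {
    "She ___ to school every day.": {"goes"},
    "They ___ watching a film now.": {"are", "'re"},
}

def check_answer(sentence: str, answer: str) -> str:
    """Return canned feedback; anything unforeseen is simply 'wrong'."""
    accepted = EXERCISE.get(sentence, set())
    if answer.strip().lower() in accepted:
        return "Correct! Well done."
    return "Sorry, try again."
```

Nothing here learns or adapts: every acceptable answer has to be anticipated in advance by the programmer, which is precisely what makes this kind of AI narrow.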
follows very specific instructions. The machine cannot decide to create
another furniture item – let’s say a table – unless it is programmed to
do so. When the machine breaks down, it cannot fix itself. It is very
good at performing a pre-defined task (assembling a chair) quickly
and efficiently, but it cannot solve problems and it cannot do new
things or adapt to new situations. This is narrow AI. Now imagine a
skilled carpenter who makes wooden chairs by hand. She can create
unique chair designs. She gets better at her craft over time, learning
from her successes and mistakes. She teaches herself to use new and
more sophisticated carpentry tools, and she takes pride in her work.
The skilled carpenter represents AGI, which is indistinguishable from
human intelligence. AGI can plan, problem-solve and learn, and carry
out complex multi-faceted tasks. It displays a human-like level of
consciousness while doing so. We are not yet in the phase of AGI, but
the goal, computer scientists tell us, is to get there.
In our field, the gap-fill computer programs of the 1980s and 1990s
are examples of early – and therefore narrow – AI. Voice recognition
software, which was notoriously unreliable in the 1990s, has become
increasingly accurate. More recently, we have tools like ChatGPT, which
are based on generative AI (see 2), and can generate content in text,
image or multimedia formats. ChatGPT is still considered an example
of narrow AI by most researchers, but it represents a significant step
towards stronger forms of AI, not least in the way it seems to interact
with us in a very personable (i.e., pleasant and friendly) manner. It’s
useful here to imagine AI on a scale, with narrow AI at one end of the
scale, and AGI at the other end. Tools like ChatGPT can give us the
impression that we are moving quite fast along the scale from narrow to
AGI. However, not everyone thinks we can get all the way to AGI, and
not everyone welcomes the prospect. There is no doubt, though, that AI
is starting to feel more human-like.
AI and consciousness
with ChatGPT pointing out that it is a computer program that has no
consciousness, thoughts or feelings.
though, seemed to awaken the ELT community to the potential
advantages – and challenges – of AI in language teaching. Teachers and
learners quickly realised that this was going to be a game-changer for
our field. But as we will see in this book, AI encompasses much more
than a tool like ChatGPT, and in many ways, the game has already
changed.
Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., Deane, G., Fleming,
S. M., Frith, C., Ji, X., Kanai, R., Klein, C., Lindsay, G., Michel, M., Mudrik, L., Peters,
M. A. K., Schwitzgebel, E., Simon, J. and VanRullen, R. (2023). Consciousness in Artificial
Intelligence: Insights from the Science of Consciousness. Available at: https://arxiv.org/abs/2308.08708. Accessed 24 December 2023.
Marcus, G. (2023). Reports of the birth of AGI are greatly exaggerated. Blog post.
Available at: https://garymarcus.substack.com/p/reports-of-the-birth-of-agi-are-greatly.
Accessed 24 December 2023.
2 What is generative AI?
Knowledge-based AI
These days, most spell-check programs will take context into account
when suggesting corrections. For example, imagine that the program
encounters the word ‘their’ in the sentence, ‘Their going to the park’.
The word ‘their’ is correctly spelled, but it’s not grammatically correct.
The program may suggest ‘they’re’ as the correct spelling because it
considers the sentence as a whole. Some spell-check programs may offer
a choice of corrections for this sentence, for example, by offering not
just ‘they’re’, but ‘there’. This shows us that there is a rule (or algorithm)
that tells the program to suggest words that are phonologically similar
when it spots a possible error. By providing a choice, the program is also
allowing for human judgement. Other terms you may come across for
knowledge-based AI are predictive AI or rule-based AI. Knowledge-based
AI has been used extensively in language learning, including in apps
(see 6), intelligent tutoring systems and chatbots (see 8 and 9),
automated translation and testing (see 15).
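The rule described above can be sketched in a few lines of Python. This is an illustration of mine, with an invented three-word homophone table, not the workings of any real spell checker:

```python
# A tiny homophone table standing in for a knowledge-based
# spell checker's store of rules.
HOMOPHONES = {
    "their": ["they're", "there"],
    "they're": ["their", "there"],
    "there": ["their", "they're"],
}

def suggest(word: str) -> list[str]:
    """Rule: when a word may be the wrong homophone, offer its
    phonologically similar alternatives and let the human choose."""
    return HOMOPHONES.get(word.lower(), [])
```

Faced with 'Their going to the park', a program encoding this rule would offer 'they're' and 'there' for the first word, leaving the final judgement to the writer.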
Data-driven AI
Generative AI
Large language models
artificial general intelligence (AGI – see 1). One thing is clear though
– the quality and quantity of training data that generative AI platforms
use is important, as is the quality of human feedback they receive in
supervised learning. Poor or biased input, or poor or biased feedback, is
likely to lead to poor or biased outputs (see 18).
3 AI and language learning
Learning a language
intervals. I unwittingly used spaced repetition by looking regularly at
the car vocabulary items on my piece of paper, until, finally, I didn’t
need to look at it anymore.
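The idea behind spaced repetition can be sketched very simply. What follows is a toy illustration of the general principle, not the scheduling algorithm of any particular app: each successful review lengthens the wait before the next one, and a lapse resets it.

```python
def next_interval(days_since_last_review: int, remembered: bool) -> int:
    """Double the gap after a successful review; reset to one day after a lapse."""
    return days_since_last_review * 2 if remembered else 1

# Successful reviews space themselves out: 1 -> 2 -> 4 -> 8 days between
# looks at the list, until you no longer need to look at it at all.
```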
In the examples above, I’ve described how you can set out to
deliberately learn the lexis, structure and pronunciation of the language
– for example, by noting and reviewing key vocabulary on a piece of
paper, or, if you have a mobile device, by using an app. We can also
acquire language more informally by being exposed to it. You might
notice a word that you’ve never heard before in an English language
movie, and then decide to use it. If you’re into digital gaming in English,
for example, where you will typically work in teams to complete a
mission online, you’re likely to pick up terms related to gameplay;
you will also most likely pick up some of the social language used to
communicate with your team members. This is often referred to as
incidental language learning, and it is no less valuable than formal
language learning.
Motivating learners
Practising the language
As we’ve seen in this chapter, AI can provide support for learners in
a range of areas that research has shown are important for language
learning. However, there is one area in which AI inevitably falls short
– that of providing a human connection. Language is, after all, about
communication, and communicating with a machine is simply not
the same as communicating with another human, with all the nuance,
empathy and connection that this entails. Humanoid robots powered
by generative AI underpinned by large language models (see 2) are,
inevitably, on their way. To what extent they may replace human
conversation partners remains to be seen.
4 AI and creativity
Let’s start with the thorny question of whether generative AI’s creativity
is as good as human creativity. If we measure the worth of creative
content by its ability to win prizes, then the answer may well be ‘yes’.
In one well-known case, a photographer won a 2023 Sony World
Photography award with an AI-generated photographic image. He
stated that he had deliberately submitted the image to spark debate
around the use of AI in creating images. AI-generated content is not
always of high quality, though. Low quality books generated by AI are
readily available for purchase on platforms like Amazon’s Kindle, where
there are no quality controls on self-published content. Unscrupulous
academic journals have long been known to accept nonsensical articles
generated by AI for publication, motivated by profit (Aldhous, 2009).
In response to AI-generated content, competition organisers, journal
publishers – and educational institutions – tend to have guidelines and
principles around the acceptable use of AI. There are also laws, such
as the European Union’s AI Act (see 24), around transparency and
disclosure in the use of AI.
There is, perhaps unsurprisingly, no clear agreement on exactly what
human creativity is, although there are psychological tests available that
try to measure creative original thinking. However, generative AI can
beat humans at these tests. For example, in one controlled experiment,
AI beat 99 percent of humans in the widely used Torrance Tests of
Creative Thinking (Shimek, 2023). In another experiment, AI beat
91 percent of humans in the Alternative Uses Test for Creativity (Haase
and Hanel, 2023). This may reflect the flawed nature of these tests
more than it reflects a lack of human creativity, though. Another study
found that AI can be beneficial for creativity, by helping humans come
up with better creative ideas than they have on their own – although
interestingly, very creative people seem to need less AI support – for
now at least (Doshi and Hauser, 2023).
Learners can create a short, AI-generated poem on a topic. They
can then change some of the language in the poem (for example, by
replacing nouns, adjectives and verbs with other words), to make
a new and personal version of the poem. This can be particularly
effective at low levels, where learners often don’t have the linguistic
resources to create poetry from scratch.
Learners can change the style of a single text. For example, they can
generate a formal email from a text message or vice versa. Again,
analysing and discussing the differences between the language used
in these texts can be a helpful language learning activity for learners.
Put learners in pairs and ask them to create a list of jobs, including
jobs that are typically associated with women (e.g., nurse, secretary),
with men (e.g., engineer, electrician), and with both genders (e.g.,
teacher, journalist).
Ask learners to use different AI image tools to generate multiple
images of some of these jobs, and to discuss to what extent gender
bias is reflected in the images produced.
Hold a class discussion about the importance of recognising bias in
AI. To extend this activity, you could put learners in pairs or small
groups and ask them to research and share other examples of bias in
AI. Several examples are provided in 18, and learners will find plenty
more examples online.
Apart from recognising bias, teachers and learners should be clear and
transparent about using AI to generate images, text and ideas. AI-
generated content should always be clearly labelled with the tool used,
for example, ‘Image generated by Stable Diffusion’ or ‘Additional ideas
provided by ChatGPT’.
Aldhous, P. (2009). CRAP paper accepted by journal. New Scientist, 11 June 2009.
Available at: https://www.newscientist.com/article/dn17288-crap-paper-accepted-by-journal. Accessed 24 December 2023.
Miller, D. I., Nolla, K. M., Eagly, A. H. and Uttal, D. H. (2018). The Development of
Children’s Gender-Science Stereotypes: A Meta-analysis of 5 Decades of U.S. Draw-A-
Scientist Studies. Child Development, 89(6), 1943–1955.
5 Technology and the hype cycle
very big business, generating billions of dollars a year globally. Creating
hype around an EdTech product by promising that it will improve
learning – or better yet, ‘revolutionise’ learning through ‘innovative’
approaches – can help get the product into schools, generating profits
for the EdTech company, even when there is no evidence to back up the
company’s claims. This is essentially hype for profit.
experience, reflection, empathy, emotion and imagination, as well as
the host of social and environmental factors that affect these nuanced
processes – cannot be quantified.
It’s interesting to reflect on how AI seems to attract many (if not all) of
these metaphors. Metaphors influence how we see and use technology.
They shape what we expect from it, and what we perceive as good and
bad about it. We often use metaphors without realising how much they
affect our ideas about something, and recognising these metaphors helps
us think more critically about them.
Here’s a short classroom activity you could carry out with higher
proficiency learners to explore this:
Write ‘AI is like …’ on the board, and ask your learners to each
write at least five different endings to this sentence.
Put the learners into small groups to compare their sentences. Can
they group the sentences in any way (for example, do some refer to
machines, or to biology/nature, or to a journey, etc.)?
Get feedback from the group, and ask them what their similes (a
form of metaphor) suggest about their underlying beliefs and feelings
around AI.
Share the five metaphor types listed above. Did their similes reflect
any of these?
Finally, discuss to what extent your learners think current AI reflects
each of the five metaphors, and what this means for how we might
understand and use AI.
An activity like this can help develop our own and our learners’ critical
digital literacies, which are essential to identify and resist hype. We
examine the area of digital literacies in more detail in 25.
Marcus, G. (2023). The Rise and Fall of ChatGPT? Blog post, 23 August 2023. Available at: https://garymarcus.substack.com/p/the-rise-and-fall-of-chatgpt. Accessed 25 December 2023.
Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European
Journal of Education, 57(4): 620–631.
B: AI in language teaching and learning
https://doi.org/10.1017/9781009804509.002 Published online by Cambridge University Press
Learning a language with AI 6
come. A search online (or in your app store) will enable you to find the
latest tools with the functionality described in these examples. At the
time of writing, popular generative AI tools included ChatGPT, Bing
and Claude.
Checking understanding
Generative AI can quickly and easily process and extract
meaning from multimedia texts. This means that it can generate
comprehension questions for print-based texts (such as articles or
blog posts), for audio (such as podcasts), and for videos. Teachers
can use generative AI tools to draft questions to check their learners’
understanding of a written text, an audio text, or a video. And of
course learners can check their own understanding of a text by
getting AI to generate comprehension questions for them. This is also
useful for learners who may need to revise or study specific content
for exams and tests.
Exam preparation
Learners can generate example proficiency tests for self-study, with
test items and sample answers. For example, learners can type a
prompt like ‘Help me prepare for the Cambridge First Certificate
exam by giving me sample questions and answers’ into a generative
AI tool. The tool will generate examples for the various sections
of the exam, with test items that are similar in format, length and
language level to the real exam. Exam providers and independent
app developers have developed specialised generative AI tools that
enable learners to make their own practice items for formal language
exams (see 15).
Sharing project findings
Many teachers use project work with learners of all ages and at
all levels. In project work, learners will typically work in pairs or
small groups to identify and research a topic, and to present their
findings and opinions to their classmates. Sharing these findings
might take the form of a slideshow presentation to the class, or
the development of a blog post or web page for others to read and
comment on. Tools that make slides, blogs and websites have existed
for decades, and although the development of, say, a blog or website
has got progressively easier over the years, requiring fewer and fewer
technology skills, it has always been a relatively time-consuming
endeavour. Even creating a slideshow can take hours. There are tools
powered by generative AI that enable the creation of a slideshow or
a website in seconds. These tools will not only design a slideshow
or website, they can also populate the slides or website with content
on a topic that the user specifies. This content tends to be somewhat
generic, but it can be replaced by learners with their own texts. If
the focus of your class is not on technology or design skills, making
this step of project work considerably less time-consuming means
that learners have more time to focus on the important bits – the
language they use to communicate their work.
Idioms
Generative AI can be used to quickly provide examples of idioms on
a specific topic, for example, parts of the body. Give pairs of learners
a key word (e.g., heart, arm, leg, eye) and ask them to use the
following prompt with a generative AI tool: ‘Give me five example
sentences with an idiomatic expression that includes the word [x].’
Ask the pairs to ensure they understand the meaning of each idiom
generated – they can ask the AI tool to explain any of the idioms
they don’t understand. Regroup the learners and ask them to teach
each other one or two of the idioms they liked best. Ask the learners
to try and remember one idiom for each body part. A week or so
later, ask them what idioms they can remember.
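If you want each pair of learners to receive a consistent instruction, the prompt above can be turned into a simple template. This is a sketch of my own; only the prompt wording itself is taken from the activity:

```python
# The prompt wording comes from the idioms activity; the template is mine.
PROMPT_TEMPLATE = ("Give me five example sentences with an idiomatic "
                   "expression that includes the word [{word}].")

def idiom_prompt(key_word: str) -> str:
    """Build the prompt a pair of learners will paste into a generative AI tool."""
    return PROMPT_TEMPLATE.format(word=key_word)
```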
The examples above show how generative AI tools have the potential to
give learners more agency and independence in their learning. Learners
no longer need to rely on the teacher or published materials to provide
examples of language, text with comprehension questions or even
practice exams. Effective language teachers are arguably those who
understand that learners can be given a more active role in generating
language learning content themselves, and effective teachers know how
to take advantage of this to support learning. We examine the shift
towards a more facilitative role for teachers in 17.
7 Teaching with generative AI
Clearly these kinds of tools can cut down the amount of time teachers
spend on preparing materials. Generative AI tools can save time on
other tasks for teachers, too. Let’s look at some examples.
Differentiation
Having learners with differing levels of ability in the same class
(often referred to as mixed ability classes) is a fact of life. To
address this, teachers are often told that they need to differentiate,
that is, provide activities at different levels of linguistic and
cognitive challenge for learners. Although this sounds like a
sensible approach, it can be time-consuming to create, say, two
or three additional activities for a reading text or for a speaking
activity for every class. Generative AI tools, however, can generate
shorter or longer versions of a single text written by a teacher at
different levels, as well as comprehension questions at different
levels of complexity. Similarly, a range of more or less complex
discussion prompts for a speaking task can be easily drafted. And
differentiated rubrics for writing tasks or exams can be tailored to
learners’ individual level of skills. As with all content made with
AI, teacher oversight is crucial. The teacher needs to review the
AI-generated content to ensure that they are happy with it; most
teachers will want to edit the content a bit, to ensure that it fits
with what they know of their learners’ linguistic and cognitive
abilities.
Lesson plans
Language learning lessons and units in many current coursebooks
follow a standard communicative approach, providing learners
with input and with opportunities to interact. A typical coursebook
language lesson will start by introducing the topic, possibly by
asking learners to share views or experiences related to that topic,
and will then provide input in the form of a reading or listening
text. After ensuring that learners understand the text (often through
comprehension questions), key vocabulary and structures from
the text are examined. A coursebook will ask learners to interact
during the activities, for example, by comparing answers, and at
some point in the unit there will be a longer speaking activity on the
unit topic, and a writing activity (often for homework). Generative
AI tools that make language lessons will usually generate a
lesson plan along these lines. Although teachers will often adapt
these lesson plans for their own classes, much as they do with
coursebook lessons, generative AI provides teachers with a tool
to generate whole lessons or units of work on topics not included
in the coursebook, and that may be of interest to learners in a
specific context. For example, while researching for this book, I
made perfectly acceptable one-hour English language lessons for
intermediate learners on topics like ‘Sweden’s contribution to pop
music’, and ‘The effect of e-waste on Ghana’. Using additional
prompts in the AI tool enabled me to generate reading texts and
comprehension questions at various levels (see Differentiation
above) for each topic, as well as a range of possible homework
activities for learners to choose from.
Language learning approaches
Teachers can use generative AI to make lesson plans that are
underpinned by different or alternative approaches to language
teaching. For example, Scott Thornbury suggested asking a
generative AI tool to make a lesson that follows a dogme approach
(private communication). This results in a lesson that adheres to
dogme principles: using the learners as a resource, prioritising
conversation, focusing on emergent language and, hence, using few or
no materials in the lesson. Asking
a tool to make a lesson that follows a suggestopedia approach
results in a lesson with an emphasis on soothing background
music, guided visualisation and poetry. A lexical approach lesson
will focus primarily on vocabulary; a lesson based on the grammar
translation approach will include, unsurprisingly, plenty of
grammar and translation (although it is likely to reflect elements of
communicative language teaching too, by including some interactive
activities).
Home-school connection
Decades of research have shown that a strong connection between
school and the learner’s home improves learning outcomes
for young learners. Simply put, the involvement of parents or
caregivers, and the support that learners receive at home for the
learning happening at school, is a key to success. This connection
relies on good communication between teachers and parents.
However, communicating with parents is time-consuming for
teachers with already heavy workloads. Generative AI tools that
can draft communications for parents like welcome letters, regular
news updates on class activities, individual learner progress
reports and permissions forms (all of which can then be edited and
personalised) can save teachers significant amounts of time as well
as strengthen the home-school connection.
8 Personalising content for learners
I’ve studied French on and off since I was at school. But my French
never seems to improve much. So when mobile language learning apps
(like Duolingo and Busuu) first appeared in the early 2010s, I was quick
to sign up. I hadn’t studied French for decades, but I wasn’t exactly
a beginner. I’d had six full years of French classes at school, but my
French was terribly rusty. I could read a bit of French, but I could barely
speak a word. How to know what my language level was? Luckily for
me, there was a test I could take in the app, which would figure out my
level of French. The app would then provide me with language learning
activities at the level diagnosed by the test. Then, depending on how
well (or badly) I did in these activities, the app would serve up more
activities, targeting my areas of weakness. By addressing my particular
learning challenges, the app would, they said, provide me with a
personalised journey to speaking ‘great’ French. I managed to keep
motivated for a couple of weeks, but soon began to tire of the diet of
automated gap-fill, translation and drag and drop activities. I never did
get to the point of speaking great French. This was my first experience
of the early days of adaptive learning.
Adaptive learning
Imagine that a language learning app shows a beginner-level learner an activity focused on the
past simple. The learner needs to type the correct past simple verbs
into a text. The program detects that the learner has no problem with
regular past simple verbs but makes mistakes with irregular past
simple verbs. Based on this information, the program then offers the
learner explanations and activities that focus on irregular past simple
verbs, to help the learner address that gap in their knowledge. This
approach is known as adaptive learning because the software adapts
the material presented to the learner depending on their performance
in previous activities in the program. It is suggested that working
with content tailored to their individual linguistic strengths and
weaknesses, together with a clear sense of progress through that
content, increases learners’ motivation and improves learning outcomes.
Research has shown improvements due to adaptive learning
in subjects like mathematics, but the outcomes for language learning are
less clear (see Kerr, 2022, for more on this).
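The selection logic described here is simple enough to sketch in code. The following Python fragment is purely illustrative (the class, the categories and the scoring are invented for this book, not taken from any real product): it tracks a learner's error rate per grammar category and serves the next activity from the weakest one.

```python
# A minimal sketch of adaptive item selection: track how a learner
# performs on each category of item (e.g. regular vs irregular past
# simple verbs) and serve the next activity from the weakest category.
# All names and data here are illustrative only.

class AdaptiveSelector:
    def __init__(self, categories):
        # attempts and errors recorded per category
        self.stats = {c: {"attempts": 0, "errors": 0} for c in categories}

    def record(self, category, correct):
        self.stats[category]["attempts"] += 1
        if not correct:
            self.stats[category]["errors"] += 1

    def error_rate(self, category):
        s = self.stats[category]
        return s["errors"] / s["attempts"] if s["attempts"] else 0.0

    def next_category(self):
        # choose the category with the highest observed error rate
        return max(self.stats, key=self.error_rate)

selector = AdaptiveSelector(["regular past simple", "irregular past simple"])
selector.record("regular past simple", correct=True)
selector.record("irregular past simple", correct=False)
selector.record("irregular past simple", correct=True)

print(selector.next_category())  # irregular past simple
```

Real adaptive learning systems use far more sophisticated learner models, but the underlying loop is the same: observe performance, update a model, select the next item.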
Intelligent tutoring systems
You may have come across the term intelligent tutoring systems (ITSs),
which have both similarities to and differences from adaptive learning. ITSs are
the interface between an adaptive learning system (that is, the learning
content) and the learner. ITSs often take the form of an automated
tutor that provides tips and hints to the learner in real time, while the
learner works through materials and activities. If we take our example
from earlier, where an English language learner is working through an
activity that requires them to produce verbs in the past simple, an ITS
will identify exactly which past simple verbs the learner is struggling
with, provide hints or guidance while the learner is completing the task,
and then provide new activities targeting just those few irregular verbs
that the learner found most challenging. In this sense, ITSs are more
specialised than standard adaptive learning programs, as they can take
into account the individual learner’s trajectory and learning data.
However, there are issues with adaptive learning and ITSs, no matter
how sophisticated. For a start, adaptive learning suffers from the
same mechanistic view of language learning that we identified in 5. It
assumes that a language is a series of individual pieces of knowledge
to be progressively mastered, whereas, in reality, language learning is
complex, dynamic and usage-based. If, like me, you’ve ever learned
a foreign language and then not spoken or heard it for years, you’ll
know just how easy it is to lose the language. And in line with much
educational technology hype, there can be an over-reliance on adaptive
tools as a ‘solution’ for language learning when these tools are imported
wholesale into schools and universities. The sidelining of the teacher’s
role is a concern here too, and one we explore further in 17. Finally,
there are issues around the quality of the content generated by adaptive
learning software, and the potential cost of this software for learners
and for schools. In short, adaptive learning and ITSs can support
self-directed language learning, and they may be motivating for some
learners. But ITSs in adaptive learning, no matter how many hints
and tips they provide, are focused on helping the learner complete
automated individualised tasks; as such, they are not designed to
provide the sort of interpersonal dynamics that interacting with a
teacher or other learners brings. Because of this, adaptive learning
software (with or without ITS) is most commonly recommended as a
supplement to other, more socially-oriented, language learning activities,
or for learners who don’t have opportunities for person-to-person
interaction. As for my own efforts to improve my not very good French:
I still use a language learning app now and again but, most importantly,
I meet up online with a French friend once a week to practise my
speaking.
Kerr, P. (2022). Philip Kerr’s 30 Trends in ELT. Cambridge: Cambridge University Press.
9 Practising English with chatbots
Thanks to generative AI, language learning chatbots, previously most suited to lower proficiency
learners, can also be deployed for realistic conversation practice, in
both text and audio format, at much higher levels. I’ve tried out some
of these latest English language audio chatbots myself and find they can
process and react to unexpected elements in a conversation.
Using chatbots with learners
What does all of this mean for language teachers and learners? Many
learners already use chatbot apps to support their language learning out
of class. Well-known language learning apps such as Duolingo harness
generative AI to improve the performance of their chatbots, and new
generative AI-powered apps like Speak and ELSA, both mentioned
above, have appeared on the market. Indeed, apps that don’t integrate
generative AI are unlikely to remain competitive. Research shows that
the use of chatbots can support language learning in several ways (see
16). So, if we accept that chatbots can be helpful for language learners,
it makes sense to encourage our learners to use them out of class for
additional language practice. There are two main ways to do this. The
first is to encourage learners to interact with chatbots that have been
designed for language learning; the second is to encourage learners
to interact with chatbots that we use in our daily lives, but in English
rather than in their first language. Here is one way we might do this.
Although chatbot apps may not be to the taste of all learners, it is
our job as language teachers to make our learners aware of language
learning opportunities like this, which they can use outside the
classroom. It is, of course, up to the learner to decide whether or not to
take these opportunities.
Speak (2023). OpenAI Startup Fund-Backed Speak Announces $16m Series B-2 Financing
& Rapid International Expansion. Blog post, 31 August 2023. Available at: https://www.
speak.com/blog/series-b-2. Accessed 25 December 2023.
10 Learning with augmented reality
If you’ve been watching the tech landscape for a while, as I have, you
may remember a product called Google Glass. Google Glass launched
in 2012 and was essentially a very expensive pair of smart glasses. It
had a small display screen that could overlay digital information on
the world around you as you looked through the glasses, and it had a
built-in camera you could use to record whatever was in front of you.
You could be walking down the street and see, for example, your email
or social media notifications appear in front of you, projected onto the
digital screen of your glasses. You could use Google Glass to navigate
streets and you could use the camera to record your journey, all in real
time. This sort of digital overlay is known as augmented reality or AR –
literally, reality is augmented with digital information.
Despite significant hype (see 5), Google Glass never really caught on. It
was discontinued by Google in 2015, although an ‘Enterprise’ edition,
used primarily in vocational training, continued until 2023. Concerns
over privacy (such as recording people without permission), a lack of
social acceptability and a hefty price tag, all contributed to the demise
of Google Glass. It was also unclear why one would want to use a
wearable augmented reality device when smartphone AR apps already
provided much of the same functionality. The tech world has not given
up on AR wearables, though. At the time of writing, developers of smart
contact lenses are researching how tiny batteries can be powered by
human tears (Tangermann, 2023). Even if these go to market in future,
it remains to be seen how keen we will be to wear them.
AR activities for language learners
Book reviews
Learners choose a book, and audio- or video-record themselves
giving a book review. They save the recording online. The books are
put on a table in the classroom. Learners scan a book cover with an
AR app on their smartphones. The cover (previously uploaded to the
AR app as an image) acts as a trigger, and overlays the audio/video
book review. Learners then listen to each review, while completing
a worksheet provided by the teacher. Afterwards, learners compare
notes on which book sounded most interesting, and decide which
book they would like to read most and why. They write up their
choice of book for homework.
Gallery walk
Learners choose and research a famous painting. They record an
audio or video explanation of what they learned, and save the
recording online. The teacher uploads an image of each painting
to the AR app, prints a copy of each painting and puts these on
the classroom walls. Learners walk around the classroom with
their smartphones, scanning the painting (which acts as a trigger),
and listening to the overlaid audio/video. The teacher provides a
worksheet for learners to fill in as they listen to each explanation.
Learners compare their worksheet answers afterwards, in pairs.
Finally, learners choose one painting and write a short summary of
what they learned about it, either in class or for homework.
These two AR activities are similar in structure and approach, and they
are examples of a mixed-skills lesson. This is a staple in communicative
language learning task design, in which several skills (reading, writing,
listening and/or speaking) are combined within a single lesson. In this
case, learners first research something (by reading/listening online)
and then record content describing their findings (speaking). Next,
the recordings are shared via the AR app and learners are given a
task to complete while they listen (listening with a purpose). Finally,
learners compare what they have learned (speaking), and produce a
book review/painting summary (writing). Mixed-skills lessons have
several benefits: they reflect real life communication, in which language
skills are usually combined; they provide learners with opportunities
to practise fluency; they can promote engagement and retention if the
lesson topic is interesting for the learners; and they enable learners to
use language in context.
To use the app with learners, here are some suggested steps. The
language skill practised at each lesson stage is included in brackets.
Tangermann, V. (2023). Scientists devise way to power smart contact lens with human
tears. Blog post, 31 August 2023. Available at: https://futurism.com/the-byte/scientists-
smart-contact-lens-powered-human-tears. Accessed 25 December 2023.
11 Learning with virtual reality
VR in language learning
A virtual environment can certainly feel very realistic, and VR has
been extensively used in vocational training for this reason, in areas
such as engineering, medicine and aviation. With the high-quality
conversational capabilities that have emerged with generative AI,
however, immersive VR could enable language learners to communicate
much more realistically with advanced chatbot avatars in a digital
world. One can imagine a language learner navigating an immersive VR
world and coming across generative-AI powered avatars that initiate –
and sustain – realistic spoken conversations in real time, for example.
This immersive experience of communicating is likely to be more
engaging than that of interacting with chatbot avatars in a language
learning app (see 9).
VR activities for language learners
With 360° images and videos, the viewer can move the view around on
the screen with a mouse or finger, enabling them to see all angles.
It’s not fully immersive, but high-quality video footage on a large
computer screen is a good (and much cheaper!) alternative to VR via an
HMD. The three examples above can act as a springboard to speaking
activities for learners, with the VR content providing engaging, high-
quality educational content. A simple speaking task might consist of
learners telling a partner what they saw in the images or videos, and
sharing what impressed them most.
VR and research
Research suggests that online gaming can support the development of communicative skills, particularly among adolescent learners (Li, Chiu
and Coady, 2014). For example, Massively Multiplayer Online Role-
Playing Games (MMORPGs) like ‘World of Warcraft’ or ‘Fortnite’,
which take place in virtual worlds, seem to hold promise for incidental
language learning. These games consist of immersive 3-D environments
full of challenging tasks that require players to communicate with each
other through both text and speech. Interestingly, these learning gains
tend to take place away from the language classroom, with players
willingly interacting with each other as part of gameplay in their spare
time. The amount of time players spend immersed in an MMORPG,
where they are exposed to English within authentic communicative
scenarios, also emerges as a key variable (see Thorne, Black and Sykes,
2009 for more on this).
Dooly, M., Thrasher, T. and Sadler, R. (2023). “Whoa! Incredible!:” Language Learning
Experiences in Virtual Reality. RELC Journal, 54(2), 321–339.
Li, Z., Chiu, C-C. and Coady, M. R. (2014). The transformative power of gaming literacy:
What can we learn from adolescent English language learners’ literacy engagement in
World of Warcraft (WoW)? In Gerber, H. R. and Schamroth Abrams, S. (Eds.). Bridging
literacies with videogames (pp. 129–52). Boston: Sense Publishers.
Ribeiro, R. (2020). Virtual reality in remote language teaching. Cambridge ELT Blog post,
27 October 2020. Available at: https://www.cambridge.org/elt/blog/2020/10/27/virtual-
reality-in-remote-language-teaching/. Accessed 25 December 2023.
Sundqvist, P. (2009). Extramural English matters: Out-of-school English and its impact
on Swedish ninth graders’ oral proficiency and vocabulary. Karlstad: Karlstad University
Studies.
Thorne, S. L., Black, R. W. and Sykes, J. M. (2009). Second Language Use, Socialization,
and Learning in Internet Interest Communities and Online Gaming. The Modern
Language Journal 93, 802–21.
12 Understanding real-time learner
engagement through AI
You may recall this news story from 2009. A Japanese railway company
introduced ‘smile scan’ software on its computers. Employees were
required to smile at the computer’s camera every morning, and the
machine analysed their smiles by looking at laughter lines and lip
curvature, among other facial features. The quality of the smile was then
rated. If the smile was judged too gloomy, the computer could provide
advice on how to look more cheerful. The computer could also print
out a personalised picture with an ideal smile for that employee. It’s
unclear whether this smile campaign continued as company policy after
extensive international media coverage (and a fair amount of ridicule).
Either way, the story raises some interesting issues.
Emotion AI in education
Issues with emotion AI
1 Ask your learners what they know about emotion AI. If necessary,
explain what it is.
2 Ask your learners to explore how emotion AI can be used in
business, healthcare, education and government. You could either
give them a text to read on the topic (which you could make with
a generative AI tool!), or you could ask your learners to research
online and to find examples of how it is used in a range of fields.
3 Put your learners in small groups to discuss what they have
learned about emotion AI, and what they think the advantages and
disadvantages are.
4 Get feedback on the class’s overall opinion of emotion AI. You could
ask your learners to vote on whether they think the use of emotion
AI is ethical, in each of the use cases from step 2.
Asher-Shapiro, A. (2022). Zoom urged by rights groups to rule out ‘creepy’ AI emotion
tech. Reuters, 11 May 2022. https://www.reuters.com/article/idUSL5N2X21UW/ Accessed
23 August 2023.
Index Holding (2018). Empath in UAE to Measure Happiness. Press release, 7 May 2018.
Available at: https://indexholding.ae/empath-in-uae-to-measure-happiness/. Accessed 25
December 2023.
Jenka. (2023). AI and the American Smile: How AI misrepresents culture through a facial
expression. Blog post. Available online at https://medium.com/@socialcreature/ai-and-the-
american-smile-76d23a0fbfaf. Accessed 25 December 2023.
Research and Markets (2021). The Worldwide Emotion Detection and Recognition
Industry is Expected to Reach $37.1 Billion by 2026. News article, 13 May 2021.
Available online at https://www.prnewswire.com/news-releases/the-worldwide-
emotion-detection-and-recognition-industry-is-expected-to-reach-37-1-billion-
by-2026--301290799.html. Accessed 25 December 2023.
Sharma, P., Joshi, S., Gautam, S., Maharjan, S., Khanal, S. R., Cabral Reis, M., Barroso, J.
and de Jesus Filipe, V. M. (2022). Student Engagement Detection Using Emotion Analysis,
Eye Tracking and Head Movement with Machine Learning. In: Reis, A., Barroso, J.,
Martins, P., Jimoyiannis, A., Huang, R.YM. and Henriques, R. (Eds.). Technology and
Innovation in Learning, Teaching and Education. TECH-EDU 2022. Communications in
Computer and Information Science, vol 1720.
Yang, L. and Qin, S-F. (2021). A Review of Emotion Recognition Methods From
Keystroke, Mouse, and Touchscreen Dynamics. IEEE Explore, 9, 162197–162213.
Available at: https://ieeexplore.ieee.org/document/9632591. Accessed 22 January 2024.
13 Understanding emotion in texts
with AI
Simply put, algorithms do not understand what they are ‘reading’; they
analyse stretches of discourse and assign scores to individual learners
based on these analyses. We need to keep in mind that sentiment
analysis programs do not understand irony, sarcasm or cultural nuance;
this is where human interpretation and expertise are needed. They are
also unaware of context. A learner
may be dealing with an issue in their personal life, for example, that
impacts on their mood. This might be reflected in the emotional tone
of their postings to forums over a period of time; however, this has
nothing to do with the course content or their overall engagement with
the course materials. In this case, one can easily see how a sentiment
analysis program that gives this learner low scores for engagement
may be unfairly penalising them. And this is where sentiment analysis
tools fall short. Understanding the emotional intent of texts is therefore
often most effective when AI-based sentiment analysis is combined with
human analysis.
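To see why these limitations are baked in, it helps to look at the simplest form such a tool can take: matching words against fixed lists of positive and negative terms and aggregating a score. The sketch below is a toy (the word lists and the scoring scheme are invented for illustration); real tools use much larger lexicons or machine-learned models, but they share the same blindness to irony and context.

```python
# A toy lexicon-based sentiment scorer. The word lists and scoring are
# invented for illustration; note how literal word-matching makes irony
# and context invisible to the program.

POSITIVE = {"great", "enjoyed", "helpful", "clear", "interesting"}
NEGATIVE = {"boring", "confusing", "difficult", "unhappy", "frustrated"}

def sentiment_score(text):
    words = text.lower().replace(".", " ").replace(",", " ").split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    # score in [-1, 1]; 0 when no sentiment words are found
    return (pos - neg) / total if total else 0.0

print(sentiment_score("I enjoyed the course, the materials were clear."))
print(sentiment_score("Oh great, another test."))  # scored positive: the irony is invisible
```

The second example illustrates the point made above: an ironic posting containing the word ‘great’ is scored as positive, because the program matches words, not meanings.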
Although a word cloud tool is not a sentiment analysis tool, it can help learners to subjectively infer the
underlying sentiment in a text, by helping them notice the frequency of
specific words, and to consider any emotions that might underlie the
use of these words. In this case, a kind of rough and ready text analysis
with learners can be used as a springboard for class discussions around
lexical choices and intended meanings, facilitating critical discourse
analysis. Word cloud tools have been freely available since the early
2010s, and have been widely used by English language teachers and
learners. Their longevity is testament to the fact that they are easy to
use and useful for simple text analysis. A search online for “word cloud
ideas for the classroom” will quickly provide a wealth of ideas of how
to use them with learners.
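Under the hood, a word cloud is just a frequency count of the content words in a text. For the curious, it can be reproduced in a few lines (the stop-word list and example text here are abbreviated and invented for illustration):

```python
# What a word cloud computes: a frequency count of the content words in
# a text, with common function words filtered out. The stop-word list
# here is abbreviated for illustration; real tools use much longer lists.
from collections import Counter
import re

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in",
              "is", "it", "was", "were", "too", "i"}

def word_frequencies(text, top_n=5):
    words = re.findall(r"[a-z']+", text.lower())
    content = [w for w in words if w not in STOP_WORDS]
    return Counter(content).most_common(top_n)

text = "The food was amazing. Amazing service too, and amazing prices."
print(word_frequencies(text))  # 'amazing' (3) tops the list
```

In a word cloud, these counts are simply rendered as font sizes, which is why frequently repeated words like ‘amazing’ leap out at learners.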
Confessore, N. (2018). Cambridge Analytica and Facebook: The Scandal and the Fallout
So Far. The New York Times. Available at https://www.nytimes.com/2018/04/04/us/
politics/cambridge-analytica-scandal-fallout.html. Accessed 22 January 2024.
Hu, M. (2020). Cambridge Analytica’s black box. Big Data & Society, 7, 2. Available at
https://journals.sagepub.com/doi/epub/10.1177/2053951720938091. Accessed 22 January
2024.
Matz, S. C., Kosinski, M., Nave, G. and Stillwell, D. J. (2017). Psychological targeting
as an effective approach to digital mass persuasion. The Proceedings of the National
Academy of Sciences (PNAS), Vol. 114, issue 48. Available at https://www.pnas.org/
doi/10.1073/pnas.1710966114. Accessed 22 January 2024.
Sharma, S., Tyagi, V. and Vaidya, A. (2021). Sentiment Analysis in Online Learning
Environment: A Systematic Review. In: Singh, M., Tyagi, V., Gupta, P. K., Flusser, J.,
Ören, T. and Sonawane, V. R. (Eds.) Advances in Computing and Data Sciences. ICACDS
2021. Communications in Computer and Information Science, vol 1441. Springer, Cham.
Available at https://doi.org/10.1007/978-3-030-88244-0_34. Accessed 22 January 2024.
14 Developing writing skills with AI
AI writing tools are integrated into some of the technologies that we use
on a daily basis. We often take for granted the text prediction and the
spelling and grammar checkers in our word-processed documents, emails,
mobile phone texts or social media updates. More recently, generative AI
(see 2) has enabled the generation of complete texts from scratch, based
on a prompt. Some teachers are concerned about what text generation
might mean for academic integrity (see 21) and for the development of
writing skills. For example, will learners use generative AI tools to cheat
by getting these tools to write essays for them? Will learners lose the
ability to create their own texts? But most teachers – and institutions –
understand that generative AI is here to stay, and that it is arguably more
productive for teachers and learners to find ways to work with these tools
in principled ways than to ignore or try to ban them.
Using AI to improve learners’ writing
2 Brainstorming ideas
Generative AI tools can help learners brainstorm ideas to include in
their own texts. Here is one way to use this in class. Imagine learners
need to write an essay about the pros and cons of, say, using social
media. First, ask learners to brainstorm the pros and cons in pairs.
Add the ideas to the board. Sharing a screen with the class, put a
prompt like ‘List the pros and cons of using social media’ into a text-
based generative AI app. The app will usually generate a list of pros
and cons, and a short explanation of each. Ask learners to compare
the two sets of ideas (theirs and the app’s ideas). Close the AI text
and add any new ideas to the board. Learners can now write their
own texts, incorporating all of the ideas.
3 Providing feedback
Generative AI tools can correct, refine and/or provide feedback on
texts written by learners. Research has shown that good writers
create multiple drafts of texts before reaching a final version.
Research also suggests that receiving scaffolded feedback on drafts
can help learners improve their writing skills (see 16). However,
for language learners with limited linguistic resources, finding ways
to reflect on and reformulate their own writing can be a challenge.
Asking a generative AI app to provide feedback on a first draft, and
to include language corrections and suggestions for improvements,
can help learners with this important stage. In class, teachers can
ask learners to write a first draft on a specific topic, get AI-generated
feedback on that draft, and then write an improved second draft.
Learners can also compare and discuss the feedback they received,
before attempting their second drafts. Well-known writing assistant
tool Grammarly, which has been available since 2009, integrated
generative AI in 2023, and is worth exploring with learners.
4 Marking learners’ work
Teachers can use generative AI tools to provide feedback on and to
grade learners’ work using pre-defined criteria. Of course, learners
can use these same criteria to get feedback on their own written
work as part of the feedback stage described above. This can be
especially useful for learners who need to practise producing written
texts for a standardised exam, and for which marking criteria are
usually publicly available.
The four ways to use generative AI described above show that these
tools can be used at the pre-writing, during writing and post-writing
stages. Given what we know about good writers, timely input from
AI in the ‘during’ writing stage, where the AI acts as a personal tutor
(see 8) can be very beneficial. As an example of this in practice, an AI
writing assistant called ‘Charlie’ was developed by Purdue University
to help students improve their essays before submission. The tool was
trained on large teacher-graded corpora of essays; it provided immediate
feedback on drafts of learners’ essays, and predicted their results/grades
against specific marking criteria.
An activity like this can be helpful for learners who may need to
produce emails in English for work. They are very likely to use
generative AI tools to draft emails, rather than writing emails
themselves from scratch. This is understandable for two reasons:
correctness (the learner wants the email to be as accurate as possible,
especially in a professional setting) and time (using a generative AI tool
to draft an email can save a lot of time and effort). Rather than insisting
that our learners only write emails from scratch in class, it may be more
useful to help them adapt and personalise emails that can be quickly
generated by AI. Helping our learners understand levels of formality
and tone in writing is an important part of this.
15 Assessing learners with AI
systems. Speech recognition has also improved significantly with the
advent of generative AI, and it will continue to improve; although its
use in summative assessment has been less widespread than that of
AWE, we are likely to see more of it now. It should be remembered
that although generative AI seems to understand spoken and written
language, it does not (see Section A). Issues with processing some
accents and varieties of English in speech recognition tools remain
– although including large datasets of spoken learner language in
generative AI models will inevitably improve this (see 9).
Generative AI tools are also useful for simpler forms of testing such as
making quizzes – these are often used as quick and informal progress
tests for learners, or as a way to help learners memorise language
(such as vocabulary). Teachers can present a generative AI tool with
a text, and ask it to produce any type of quiz based on that text, for
example, a multiple-choice quiz, true/false questions, a gap-fill, and
so on. Generative AI is being increasingly integrated into quiz-making
apps; this means that teachers no longer need to come up with their
own quiz questions and laboriously type them into an app – the quiz
app does this automatically based on a topic or content chosen by
the teacher. There is also, of course, potential for learners to use these
tools to make their own revision quizzes. An example of a generative
AI-based quiz tool at the time of writing is Quizalize. A more complex
version of automated question generation can be found in apps that
provide comprehension questions for videos, based on the video content
(a current example is Edpuzzle). It should be borne in mind that apps
come and go, but this type of tool is very likely to remain, given our
predilection for quizzes in language teaching and learning!
Some teachers may need to help their learners prepare for formal
language exams, and generative AI tools can help with this, too. As we
saw in 6, these tools can make practice exams for learners, generating
examples of whole tests (at the correct language level) for specific
formal language exams, given a clear prompt. Generative AI tools can
also be asked to make individual test items for specific exams. This
is known as automated item generation (AIG), and although it has
been used in test development since the early 2010s, publicly available
generative AI tools have now put AIG into the hands of teachers and
learners. As with any text produced by generative AI, test items need
careful checking and editing before being given to learners. A study
in Korea, for example, examined AIG for the reading section of the
state-wide CSAT exam by three different generative AI-powered tools
(ChatGPT, Perplexity AI and GenQue); the last of these was specifically
designed to generate test items for the CSAT. The reading test items
generated by all three tools were found to need some revision by
teachers, and the authors of the study also concluded that AIG prompt
training would be helpful for teachers (Shin and Lee, 2023). One could
argue that using generative AI for AIG for formal exams is of most
relevance to large test providers, particularly when they use generative
AI tools that have been customised for this purpose; the language
learning app Duolingo, for example, uses generative AI-powered AIG
for the Duolingo English Test. Using generative AI tools that have
been fine-tuned to produce items for specific tests or exams is likely
to generate better quality results (at the time of writing these were not
publicly available, but this may change in the future). However, training
teachers to develop effective prompts (known as prompt engineering) is
an important area, and one we explore further in 17 and 29.
address issues. Caveats around protecting learner data apply here, of
course (see 20).
Shin, D. and Lee, J. H. (2023). Can ChatGPT make reading comprehension testing items
on par with human experts? Language Learning & Technology, 27(3), 27–40.
C: The big questions
62
https://doi.org/10.1017/9781009804509.003 Published online by Cambridge University Press
Can AI support language learning? 16
Chatbot research
If we start with what we know about language learning (see 6), then AI
that supports interaction and gives learners opportunities to produce
spoken language is arguably a promising research area. Chatbots can,
in theory, provide learners with opportunities for real-time spoken
interactions, and they have long been an area of research for precisely
this reason. Meta-studies on the effects of chatbots powered by
generative AI (see 2) on English language learning had not yet emerged
at the time of writing this book; however, meta-studies of earlier
chatbots are useful in terms of identifying some of the limitations that
newer generations of chatbots need to overcome to be effective.
better able to deal with the first and third elements, at least in theory.
Once generative AI chatbots are implemented more widely, and more
individual research studies have been carried out, a future meta-study
could indicate to what extent these challenges have been overcome – or not.
AWE research
The findings suggest that AWE feedback can have a positive impact on
writing quality and accuracy, under certain conditions, but not always.
For example, AWE can help learners improve their writing when used
regularly over time, and when compared with learners receiving no
feedback on their writing. This is clearly good news for learners who
want to work in self-study mode, because they can use AWE to improve
their writing without the need for teacher feedback. When we look at
classrooms though, the meta-study found that sometimes AWE is better
than teacher feedback, but sometimes it is not. One can conclude here
that whether AWE is more or less effective than teacher feedback is
likely to depend on the teacher and the kind of feedback they provide.
Currently, the feedback provided by a good teacher and that provided
by an AWE tool tend to be close in quality, even if the latter is not
perfect. For teachers, providing detailed, individualised feedback on
learners’ writing can be very time-consuming; for those with very large
classes, it is often simply not possible. In these cases, AWE tools can
provide much-needed support for teachers. By
providing automated feedback for learners, an AWE tool can free up
a teacher to help individual learners who may be struggling with their
writing, or it can enable a teacher to identify and work on areas in
writing that several learners in the class may need support with.
It’s important to note that the meta-study mentioned above looked at
AWE systems that were available before generative AI tools became
widely available. Whether generative AI-powered AWE (which allows
for more nuanced, level-specific and dialogic feedback) can have an
increased positive impact on learners’ writing skills is an important
question. Some early research in this area suggests that this could be the
case (see Jacob, Tate and Warschauer, 2023), particularly when learners
integrate the use of a generative AI tool at key stages of the writing
process in a principled manner (see 14 for a description of how
this can be done). This is certainly a research area to watch.
Fan, N. and Ma, Y. (2022). The Effects of Automated Writing Evaluation (AWE) Feedback
on Students’ English Writing Quality: A Systematic Literature Review. Language Teaching
Research Quarterly, 28, 53–73.
Huang, W., Hew, K. F. and Fryer, L. K. (2022). Chatbots for language learning – Are they
really useful? A systematic review of chatbot-supported language learning. Journal of
Computer Assisted Learning, 38(1), 237–257.
Jacob, S. R., Tate, T. and Warschauer, M. (2023). Emergent AI-assisted discourse: Case
study of a second language writer authoring with ChatGPT. Available online at: https://
arxiv.org/ftp/arxiv/papers/2310/2310.10903.pdf. Accessed 28 December 2023.
What does AI mean for teachers? 17
If you’re worried that AI might take your job, you’re not alone. The
first ‘robot’ English language chatbot, called EngKey, appeared in
South Korean primary school classrooms in 2011, leading to fears that
teachers would soon be replaced by these one-metre-high, two-wheeled,
egg-shaped plastic objects, topped by a video screen as a ‘face’ (Saenz,
2011). Once the media hype had died down, it became clear that the
robot was, in fact, a vehicle that transported a video conference screen
around the classroom. On the screen were real English teachers from
the Philippines; they were being beamed into the classroom, in real time,
to teach from scripts that focussed on speaking and pronunciation with
young learners. So not really a robot at all. EngKey was developed by
the Korea Institute of Science and Technology, but plans to introduce it
into all South Korean primary schools by 2013 appear to have been
quietly shelved.
The digital revolution, too, has automated or replaced both manual
and skilled jobs – gone, for example, are most travel agents, replaced
by online services. More recently, web designers, graphic artists and
computer programmers have seen demand for their services fall as
generative AI does an increasingly good job in these fields for a fraction
of the price (Mutandiro, 2023). The ethical implications of this are, of
course, important to consider (see 26).
How AI can undermine teachers
There are possible downsides to AI’s effect on our role as teachers. One
is deskilling. If a generative AI tool can make great lesson plans for us,
do we really need to learn how to plan lessons ourselves? Do pre-service
teacher training courses need to even teach lesson planning any more?
The answer is yes, they do. Generative AI tools can produce lessons that
are well-staged, follow a communicative (or any other) approach (see 7),
and include clear learning outcomes and suggestions for how these
outcomes can be formally or informally evaluated. But you know your
learners, and unlike the AI tool, you know what might be more or less
interesting for them. In other words, you know what might work or
not with a certain class. Teachers need a robust understanding of lesson
planning and the underlying principles of language learning to be able
to first evaluate, then adapt and improve AI lessons – exactly as they
have always done with coursebook lessons. In short, generative AI gives
teachers the ability to make, and then adapt, AI-generated (as opposed
to coursebook) lessons much faster and more effectively than before
(see 7). For more on AI and lesson planning, see Thornbury (2024).
use them. (See Giannikas, 2020, for more on using digital pedagogy, for
example, with young learners.)
Anderson, J. and Taner, G. (2023). Building the expert teacher prototype: A metasummary
of teacher expertise studies in primary and secondary education. Educational Research
Review, 38, 100485, 1–18.
Mutandiro, K. (2023). Free AI tools are killing South Africa’s web designer job market.
Rest of World blog. 31 August 2023. https://restofworld.org/2023/ai-tools-web-developer-
jobs-south-africa/. Accessed 28 December 2023.
Saenz, A. (2011). South Korea’s Robot Teachers To Test Telepresence Tools in the
New Year. Singularity Hub, 3 January 2011. Available at: https://singularityhub.
com/2011/01/03/south-koreas-robot-teachers-to-test-telepresence-tools-in-the-new-year/.
Accessed 28 December 2023.
How can we make AI fair? 18
Bias in speech recognition is not limited just to speakers of other languages.
A study carried out by Stanford University in the US found that five popular
virtual assistants that rely on speech recognition technologies (such as Siri
and Alexa) made twice as many errors when interpreting words spoken
by African Americans as when interpreting the same words spoken by
white Americans (Koenecke et al., 2020). Again, the bias can likely be
explained by the training data consisting mainly of white American voices. In their
conclusions, the researchers highlighted that machine-learning tools need
monitoring to make sure that training data is inclusive, which is hard to
disagree with. The companies behind proprietary AI systems, however, are
not keen to allow access to their algorithms, which makes auditing a
challenge – and underscores
the need for legislation that insists on transparency in revealing the sources
of training data (see 24 for more on AI regulation). Speech recognition
software also tends to struggle to understand voices of different ages –
children’s, adults’ and elderly people’s voices are all very different, for
example. It also has issues with different cultural usages of lexis, and with
speakers who may have speech difficulties. Speech recognition software
still has some way to go in terms of inclusivity, but awareness of these
shortcomings is the first step in starting to address them.
Bias has also been found in AI tools that try to detect whether a
learner’s written assignment has been generated by AI, rather than
written by the learner themself. These tools are known as GPT detectors
and their effectiveness is widely questioned. As one example, a study
assessed the effectiveness of various commonly used GPT detectors
when analysing written content from both native and non-native
English writers (Liang et al., 2023). The researchers found that these
detectors consistently misidentified non-native English writing as AI-
generated, while correctly classifying native-authored content. Studies
such as these have shown that GPT detectors often disadvantage writers
with limited linguistic proficiency by wrongly labelling them as cheats.
GPT detectors, then, are not very effective. OpenAI, the makers of
ChatGPT, admitted as much, pointing out that their own GPT detector
is not reliable (OpenAI FAQ, 2023).
Whether GPT detectors will ever be fully accurate in identifying AI-
generated text versus human-authored text is unclear. It’s important to
consider what this means for assessing learners’ written work, and we
explore this further in 21.
Bias against other languages
Text generators can produce high-quality texts in English and in other
languages like Spanish, German, Arabic and Japanese. They are less
effective with languages like Swahili, Thai or Bengali, each of which
is spoken by millions. Why is this? Generative AI is trained on data
harvested from the internet, but there is less representation of these
latter languages in online text format. So, for these languages, text
generators struggle to produce coherent text. Instead, they generate
texts that are full of grammatical and syntactic errors, and at times,
they simply make up words. In a study carried out by researchers at the
University of Oregon (Lai et al., 2023), version 3.5 of ChatGPT was
asked to perform the same seven writing tasks in 37 different languages.
ChatGPT underperformed in what the researchers call ‘low resource’
languages (so-called because there are fewer online text resources
available in the language), and it performed particularly poorly in
languages that are structurally most different to English. This lack of
representation has the potential to increase digital inequality, a topic
that we explore further in 19. Local developers, however, have stepped
into the gap. For example, faced with ChatGPT’s very weak performance
in Amharic and Tigrinya, two languages spoken in Ethiopia (Tigrinya is
also spoken in Eritrea) that it not only mixed up but also invented
words for, a local start-up developed an automated translation service
(Rest of World, 2023).
Making AI fairer
automated speech recognition should be adaptive, and able to
respond appropriately not just to different accents, but to voices
from the other groups described earlier in this chapter. We are
not there yet, though. Training data is needed for less represented
languages so that text generation tools can produce quality outputs
in these languages, too.
Transparency in training data
Allowing access to training data would enable audits to
systematically check for bias, and then rectify that bias by changing
or diversifying the training data and/or the algorithms. This is not
always easy to achieve though, and the CV selection software used
by Amazon described at the beginning of this chapter was eventually
abandoned, even after software engineers had tried to fix the bias.
Ongoing review
Monitoring and identifying algorithmic bias needs to be an ongoing
process. Once bias is identified and addressed, further unintended
consequences (including other biases) may occur, so ongoing
review is an important part of avoiding this. There is not much
point in fixing bias against one group only to find later that the fix
disadvantages another group.
Dyer, O. (2019). US hospital algorithm discriminates against black patients, study finds.
British Medical Journal, 367.
Koenecke, A., Nam, A., Lake, E., Nudell, J., Quartey, M., Mengesha, Z., Toups, C.,
Rickford, J. R., Jurafsky, D. and Goel, S. (2020). Racial disparities in automated speech
recognition. PNAS 117(14), 7684–7689.
Lai, V. D., Ngo, N. T., Veyseh, A. P. B., Man, H., Dernoncourt, F., Bui, T. and Nguyen,
T. H. (2023). ChatGPT Beyond English: Towards a Comprehensive Evaluation of
Large Language Models in Multilingual Learning. Available online at: https://arxiv.org/
pdf/2304.05613.pdf. Accessed 28 December 2023.
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E. and Zou, J. (2023). GPT detectors are biased
against non-native English writers. Available online at: https://arxiv.org/pdf/2304.02819.
pdf. Accessed 28 December 2023.
How can we make AI accessible to all? 19
We see, therefore, that digital technologies can not only affect, but also
widen, the digital divide. The examples in the paragraph above refer
to the digital divide between the Global North and the Global South,
but geographical digital divides can exist between urban and rural areas
within the same country, or between neighbourhoods in the same town.
There may even be a digital divide between schools in the same district
in terms of access to digital technologies, and between classrooms in
the same school, with teachers who have or don’t have digital skills.
The digital divide can also affect young versus old, and men versus
women, with the former in each case typically having more access to
technologies. In short, the digital divide is complex. In fact, it’s better
thought of as a sliding scale, rather than a divide with ‘haves’ and ‘have
nots’ on either side.
EDI stands for equity, diversity and inclusivity. Let’s start with a quick
note on the difference between equity and equality, two terms which are
often confused or conflated. Equity means that people are treated fairly
and justly, without bias or favouritism. Equality means that people have
the same rights and opportunities – for example, women and men have
access to the same jobs, and are paid the same for undertaking the same
work. As we saw in 18, with their baked-in biases, AI algorithms do not
have a good reputation when it comes to equity. The software industry
in general does not have a good reputation for diversity or inclusivity in
its hiring practices either, although an awareness of these blind spots,
and an apparent willingness to address them, is, in theory, a good thing.
those with motor skills challenges or limited movement). These assistive
technologies support learners with physical challenges, but assistive
technologies are also used to support neurodiverse learners with a
range of cognitive, behavioural and emotional challenges. For example,
there is educational software available for learners with autism, in
which teachers can create video scenarios to help their learners develop
empathy and social skills.
Social robots have also been used for supporting children with attention
deficit hyperactivity disorder (ADHD), hearing impairments, Down
syndrome and autism. Social robots are designed to interact with
humans – they often look like pets, stuffed animal toys or humanoid
robots. Research has shown that they can help teach social and
educational skills to children. For example, a study was carried out with
twelve autistic children aged between six and twelve at home (i.e., not in
a laboratory) over a month, in which the child and a caregiver interacted
with a social robot for 30 minutes every day (Scassellati et al., 2018).
Activities included storytelling, taking the perspectives of characters
in the story and sequencing events. The robot used adaptive learning
techniques (see 8) to adapt the difficulty of the activities for the child,
encouraged engagement and modelled positive social skills. The study
found that these children increased their attention and communication
skills after a sustained period of time interacting with the social robot.
materials, such as video scenarios, for learners themselves. What
was previously available only to those able to afford these (often
expensive) multimedia products thus becomes available to many
more teachers. Teachers, of course, still need training and support
in learning how to effectively work with learners with special
educational needs.
Social robots will also benefit from the improved speech capabilities
brought by generative AI. For the most part, however, these remain
expensive products that are available for the few.
Computer vision tools, which can understand and describe the real
world, as well as pictures and videos, can support those with visual
impairment. Microsoft’s Seeing AI is one example of this kind of
AI-powered software; the Be My AI feature of the Be My Eyes app,
powered by OpenAI’s GPT-4, is another. Both were current at the time
of writing.
Chan, R. Y., Bista, K. and Allen, R. M. (2022). Online Teaching and Learning in Higher
Education during COVID-19. New York: Routledge.
Jeffreys, B. (2022). Covid closures still affecting 400 million pupils – Unicef. BBC News,
30 March 2022. Available at: https://www.bbc.com/news/education-60846683. Accessed
28 December 2023.
Reliefweb. (2022). COVID 19: Scale of education loss ‘nearly insurmountable’, warns
UNICEF. Available at: https://reliefweb.int/report/world/covid-19-scale-education-loss-
nearly-insurmountable-warns-unicef. Accessed 28 December 2023.
Scassellati, B., Boccanfuso, L., Huang, C-M., Mademtzi, M., Qin, M., Salomons, N.,
Ventola, P. and Shic, F. (2018). Improving social skills in children with ASD using a long-
term, in-home social robot. Science Robotics 3, 21. Available online at:
https://www.science.org/doi/10.1126/scirobotics.aat7544. Accessed 28 December 2023.
Who owns the data? 20
A cautionary tale
This report shows two things very clearly. Firstly, it shows that the
technologies we use in our schools may be collecting and using data in
unethical ways. Secondly, it shows that we need strong data protection
laws that are enforced, and that we shouldn’t simply expect technology
companies to do
the right thing. There is some regulation in the data protection space – the
European Union’s 2018 GDPR (General Data Protection Regulation) and
2023 AI Act are two cases in point (see 24 for more on the latter) – but
these protections are not available in all jurisdictions.
Normalising data collection
Datafication
It is perhaps no surprise then that data collection, with and without the
user’s consent, is widespread in education. The increasing generation,
analysis and sharing of student data, primarily through LMS and
other educational software, has been referred to as the datafication of
education. In fact, as data from multiple sources are integrated into our
learners’ profiles (for example, from social media plug-ins, or from GPS-
enabled apps that can track a learner’s access to institutional facilities),
increasing amounts of data can be collected about learners.
And how do learners feel about the creeping datafication that may be
taking place in their schools and universities? Interestingly, this question
is not often asked (research in this area tends to focus instead on
institutional policy as regards learner data). However, one recent study
asked learners in three Australian secondary schools what they thought
about having their data collected by their schools’ platforms (Pangrazio,
Selwyn and Cumbo, 2023). The findings are interesting, although
perhaps unsurprising. Learners felt resentful of – but resigned to –
institutional uses of their data. They disliked the lack of control they
had over their public profiles (the information about themselves that
was automatically displayed in school platforms) and they disliked the
pre-established privacy settings in these platforms. They also felt
unhappy with how their use of institutional technologies was closely
monitored and tracked by the school for accountability, as well as
powerless in their lack of choice over whether to use the platforms or
not. In short, learners feel a sense of ‘digital resignation’ (Pangrazio,
Selwyn and Cumbo, 2023, p. 11) and ‘surveillance realism’ (ibid., p. 12),
seeing the datafication of their learning experiences as inevitable. It’s not
only learners who are affected by datafication, however, as AI-powered
platforms are increasingly used to monitor learners and to link these
data to teacher effectiveness.
2 Ask them what their favourite platforms’ terms of service (ToS)
say about data collection. Learners are often unaware of their data
privacy settings, so put them in pairs, and give them five minutes to
explore the ToS of one or two of these platforms. Ask learners to
share what they found out with the class.
3 Lead a class discussion with the following questions:
– What data do your favourite social media platforms collect
about you?
– Can the platform share your data with third parties? If so, how
do you feel about this?
– Is there anything in the privacy settings that you can or would
like to change to protect your data?
– Is data surveillance inevitable in today’s world? What should
laws do to protect us?
Caltrider, J., Rykov, M. and MacDonald, Z. (2023). It’s Official: Cars Are the Worst
Product Category We Have Ever Reviewed for Privacy. Mozilla Foundation News, 6
September 2023. https://foundation.mozilla.org/en/privacynotincluded/articles/its-official-
cars-are-the-worst-product-category-we-have-ever-reviewed-for-privacy/. Accessed 28
December 2023.
Human Rights Watch report ‘How Dare They Peep into My Private Life?’ (2022). https://
www.hrw.org/sites/default/files/media_2022/05/HRW_20220526_Students%20Not%20
Products%20Report%20Final-IV-v3.pdf. Accessed 28 December 2023.
Pangrazio, L., Selwyn, N. and Cumbo, B. (2023). Tracking technology: exploring student
experiences of school datafication. Cambridge Journal of Education.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at
the New Frontier of Power. New York: Public Affairs.
Does AI help learners cheat? 21
Of course, it’s not only language teachers who are concerned that
their learners will use generative AI to write essays or do homework.
In a peer-reviewed paper entitled ‘Chatting and Cheating: Ensuring
Academic Integrity in the Era of ChatGPT’ (Cotton, Cotton and
Shipway, 2023), three UK academics discussed issues around academic
honesty and the opportunities for plagiarism provided by
generative AI tools. The irony here – deliberately revealed by the
authors after publication – was that the article had been written by
ChatGPT; the authors had simply edited the references (which ChatGPT
tends to invent). None of the four reviewers had realised this.
How can teachers ensure that learners don’t use generative AI tools
to cheat? Or to put it another way, how can academic integrity be
maintained if learners are using AI tools to do their work for them?
Given that it’s difficult (or impossible) to detect AI-generated text, some
educators suggest that this might not be the best way to frame the
question. Instead, they suggest, this may be an opportunity for us to do
two things: first, to rethink how we assess our learners; and second, to
openly explore with learners how to use these tools ethically to support
their writing and learning (e.g., Fyfe, 2022). Let’s see how this can be done.
asking her learners to share what they learned about their writing
from the drafting and redrafting stages of their writing.
Task 3: Litter clean-up event
In small groups, learners strategise a litter clean-up event in their
local park, addressing logistics, scheduling, roles and promotion
(including sharing the posters from Task 2). Learners present their
event plans, get feedback and the class agrees on one plan. The
event takes place. A text-based generative AI tool helps learners with
suggestions on how to plan an event.
Task 4: Reflection
Learners write a short blog post (300–500 words) about their
journey through the litter awareness project, discussing insights on
littering, community engagement and personal learning. A text-based
generative AI tool helps learners improve their first drafts (see Teacher 2
above).
Cotton, D., Cotton, P. and Shipway, R. (2023). Chatting and Cheating: Ensuring Academic
Integrity in the Era of ChatGPT. Innovations in Education and Teaching International.
https://doi.org/10.1080/14703297.2023.2190148. Accessed 28 December 2023.
Fyfe, P. (2022). How to cheat on your final paper: Assigning AI for student writing. AI &
Society, 38, 1395–1405.
Whose content does AI use? 22
works, and to ensure fair pay for creators when their work is used by
others for profit. Creative Commons licenses, which enable creators to
freely share their work in a variety of ways (for example, by asking
only for attribution in return for use, or by asking that their work is
not used to generate revenue), have existed for decades as an alternative
to copyright (see https://creativecommons.org/ for more on this). By
scraping training data from the internet without permission, companies
developing generative AI seem to have trampled over copyright
and over Creative Commons. Indeed, in most cases, the technology
companies refuse to disclose the exact training data they have used.
It’s understandable that creators are upset, and several lawsuits are
ongoing at the time of writing (one of the best-known of these is the
New York Times suing OpenAI over its use of newspaper articles in
training datasets without permission). Interestingly, Creative Commons
itself argues that the use of creative data from the internet may legally
constitute ‘fair use’ (Wolfson, 2023), and, at the time of writing, it
remains to be seen how the current lawsuits will be resolved.
Citing sources
What does all of this mean for language teachers and learners? As with
many of the big questions covered in this section, one of the first things
you can do is discuss them with your learners. Raising awareness of the
downsides (as well as the potential benefits) of using generative AI tools
is one of the first steps we can take to help our learners develop critical
digital literacies (see 25). Awareness raising can take the form of a short
lesson that includes a discussion of the key issues. For example, you
could follow these steps:
Hallucinations
Encourage learners to check sources when creating text with
generative AI tools, because they are prone to hallucinations. One
amusing example to share with learners comes from ChatGPT: ‘some
species of dinosaurs even developed primitive forms of art, such as
engravings on stones’ (Szempruch, 2023).
Attribution
As we’ve seen in previous chapters, learners are likely to use
generative AI tools for a variety of tasks, including text generation.
Many schools and universities provide guidelines for teachers and
learners on how to use AI tools in ethical ways, and this includes
clearly stating how AI has been used in one’s work (as I did in 21,
for example; see also how learners can show their use of AI in their
writing in 14).
Szempruch, D. E. (2023). Generative AI Model Hallucinations: The Good, The Bad, and
The Hilarious. Blog post 20 March 2023. Available at: https://www.linkedin.com/pulse/
generative-ai-model-hallucinations-good-bad-hilarious-szempruch/. Accessed 28 December
2023.
Wolfson, S. (2023). Fair Use: Training Generative AI. Creative Commons. Blog post 17
February 2023. Available at: https://creativecommons.org/2023/02/17/fair-use-training-
generative-ai/. Accessed 28 December 2023.
Who creates AI? 23
But I soon found out that there is another group of people who work
in generative AI, who are ignored in the account above. These people
are working with the training data. Their job is to create sample
responses, to provide feedback, and to tag, rate and annotate AI-
generated responses. These are the people involved in the ‘human
feedback’ training stage for LLMs (this stage is known as reinforcement
learning from human feedback, or RLHF – see 2). Their job is to align
AI-generated responses with what we humans want and expect to see,
in other words, with human values and beliefs. Indeed, this stage of
generative AI training is often referred to as alignment. Alignment is
supposed to spot and rectify mistakes in generative AI output, and also
to identify and rectify bias. Of course, we are often unaware of our own
biases, so there is a strong possibility that human annotators’ biases are
passed on to the training data (see 18).
At this point, you may be wondering what all of this has to do with
language teachers. The answer is – plenty. Many graduates are finding
work as generative AI annotators, given the vast amounts of training
data used by LLMs, and the need for lots of human feedback to make
them less prone to mistakes. In specialised domains, graduates are
needed to create model responses to train AI. For example, if you ask a
generative AI tool a complex technical question related to astrophysics
or medicine, you want the answer to be detailed and you want it to be
correct. To make sure that a generative AI tool is as accurate as possible,
a company will hire graduates specialising in astrophysics or medicine
to work through queries and to provide model answers which are then
used to train the AI tool. If you’re teaching English at a university, there
is a chance that some of your learners will become annotators when
they graduate, reading and possibly creating model answers in English
and/or other languages. Some commentators suggest that this ‘platform
work’ or ‘crowd work’ could be the most widespread form of full-time
work by 2030. Other commentators have voiced concerns about how
the needs of the vast AI industry are already affecting higher education
by creating a swing away from disciplines such as philosophy or fine
arts, which are considered less relevant to the growing AI-related jobs
market.
examined graduate data annotators located in India, where many
subsidiary companies operate because they have access to English-
speaking graduates, but can pay them lower wages than in Europe or
North America (Wang, Prabhat and Sambasivan, 2022). The study
found there was a conflict between the need for high-quality data at
low cost, and the aspirational needs of annotators for well-being, career
prospects with decent pay, and active participation in what they had
thought of as ‘building the artificial intelligence dream’ (ibid.). Sadly,
these skilled annotators soon found platform work to be unfulfilling
and alienating. Another report found that qualified data annotators in
many countries in the Global South are consistently paid low wages and
expected to work long hours in tedious and repetitive work (Dzieza,
2023). In one country, a compulsory part of completing a vocational
training qualification included working as an intern for a data
annotation centre, for months on end and for very low pay (Zhou and
Chen, 2023).
What does AI’s increasing need for platform work mean for us as
English language teachers? I would argue that, at the very least, these
issues should be discussed with our language learners – some of whom
may end up doing platform work in the future. Even if they don’t, as
users of generative AI, both we and our learners need to be aware of
some of the darker sides of the development of generative AI. To discuss
these issues with your learners, you could follow these steps:
1 Introduce the topic of work and AI. Ask your learners what jobs
they think have already been replaced by AI, and what jobs will
be replaced by AI in the future. Ask them what jobs they think are
involved in creating AI.
2 Explain that ‘data annotators’ are key in the development of AI. Put
your learners in pairs and ask them to research this topic online.
3 Regroup the learners and ask them to share what they have learned
about data annotators with each other.
4 To round up, ask learners what they learned about platform work
and ensure that some of the issues outlined in this chapter are raised
and discussed with the class. Ask what regulation or laws could
improve data annotators’ jobs.
In the hype that surrounds generative AI (see 5), these kinds of issues
tend to get glossed over. Being AI literate (see 25) means having a
critical understanding of the effects (both good and bad) that these tools
can have. AI literacy also includes exploring potential solutions. This
short speaking activity tries to raise learners’ critical awareness and at
the same time consider possible solutions to the issue of platform work.
Dzieza, J. (2023). AI is a Lot of Work. The Verge. Online newspaper article, 20 June 2023.
Available at: https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-
notation-labor-scale-surge-remotasks-openai-chatbots. Accessed 28 December 2023.
Wang, D., Prabhat, S. and Sambasivan, N. (2022). Whose AI Dream? In search of the
aspiration in data annotation. In Proceedings of the 2022 CHI Conference on Human
Factors in Computing Systems, pp. 1–16.
Zhou, V. and Chen, C. (2023). As part of China’s digital underclass, vocational school
students work as data annotators – for low pay and few future prospects. Rest of World
blog post, 14 September 2023. Available at: https://restofworld.org/2023/china-ai-student-
labor/. Accessed 28 December 2023.
Can we control AI? 24
Opaque AI
My invented LMS scenario may sound far-fetched. Unfortunately,
it’s not. The COVID-19 pandemic saw the introduction of learning
technology software and products into schools and homes on an
unprecedented scale. And as we saw in 20, a Human Rights Watch
investigation found that learning technology products introduced into
schools during the pandemic directly sold or gave access to children’s
personal data to digital advertising companies in 49 countries. This is
a worldwide issue, and regulation to prevent this kind of behaviour is
clearly needed. Regulation around data and privacy does exist in some
parts of the world, but often imperfectly. Of the 49 countries reviewed
in the Human Rights Watch report, 14 countries had no data protection
laws, while the data laws in a further 24 countries were not fit for
purpose in the digital age.
other frameworks and guidelines available, with new ones appearing all
the time. Exploring university and school websites in your own context
is likely to yield more examples that may be better suited to your
teaching environment.
You can develop more questions for your technoethical audit based
on the issues explored in other chapters in this book, especially in
Section C.
Regulation provides safeguards, and in the case of AI, these safeguards –
often referred to as guardrails – are needed for all. Building international
consensus around regulation for AI is a challenge, and it’s unclear
whether or how it will be achieved. Nevertheless, the guidelines and
legislation around AI in education provided in the initiatives described
above provide a good starting point for educators interested in following
this developing area. There is clearly strong advocacy for regulation in AI
in the educational community, and it is a space to watch.
Miao, F., Holmes, W., Huang, R. and Zhang, H. (2021). AI and Education: Guidance for
Policy-makers. UNESCO. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000376709. Accessed 28 December 2023.
Russell Group. (2023). Russell Group principles on the use of generative AI tools in
education. Available at: https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf.
Accessed 28 December 2023.
How can we become critical users of AI? 25
Experts estimate that just one query to a large language model (LLM)
like ChatGPT uses the equivalent of half a litre of water. Others claim
that the training of GPT-3, a model that is a lot less powerful than the
LLMs we have now, was like driving a car to the moon and back in
terms of energy consumed. The massive numbers of very large servers
that house LLMs and their terabytes of data require a lot of energy,
including electricity (to function) and water (to cool the servers).
Standing in a room full of servers pre-generative AI is like listening to
the hum of thousands of electric fans. Standing in a room full of servers
that support generative AI is like listening to hundreds of jet engines.
The difference in decibels says it all.
There are a number of ways we can help raise our learners’ (and our
own!) awareness of the environmental costs of the digital technologies,
including generative AI, that we use in our teaching and learning. One
way is to get learners to discuss some of the issues. Below are some
discussion prompts that could be used with an intermediate group of
adult or teenage language learners. The prompts can be used in three
stages over a single class.
Stage 1: Introduction
Introduce the topic and key issues by asking learners the following two
questions:
What types of electronic devices do you use in your daily life (e.g.,
mobile phones, tablets, computers, gaming consoles)?
What happens to your devices when you no longer use them?
Stage 2: Discussion
Put the learners into small groups to discuss some key issues. You could
allocate one topic to each group and ask them to first research their
topic online, then discuss it, and finally to report back to the class.
If relevant and of interest to your learners, you could also draw their
attention to the United Nations’ Sustainable Development Goal (SDG)
12 – ‘Ensure sustainable consumption and production patterns’. You
could ask your learners to what extent they feel their country is making
progress against this goal, particularly as regards digital technologies
and e-waste.
same project stages, could be taken using the topic of the environmental
impact of digital technologies. There are two ways you could plan such
a project:
Use the project plan in 21, but replace the topic of ‘littering’ with
the topic of ‘e-waste and mobile devices’. For the ‘event’ stage of
the project (Task 3), ask learners to organise an e-waste awareness-
raising event at your school. The event could also aim to collect
obsolete devices that learners (or their parents) have lying around at
home; some of the learners can then take them to a recycling centre
in the town. As a longer-term outcome for the school community, a
cardboard box for recycling e-waste could be left permanently at a
strategic point in the school. Learners from different classes could
take the collected e-waste to the recycling centre once a month or
once a term.
Put the following prompt (based on the prompt in 21) into an
AI text-generator or lesson-generator tool (see 7) and see what it
suggests.
‘Create a project that includes authentic assessment for a group of
intermediate level English language learners around the topic of
e-waste and mobile devices.’
Edit and adapt the AI-generated project to suit your learners. Think
about how you could include generative AI tools to support your
learners during each stage of the project (for example, as suggested
in the project plan in 21). Then try out the project with your
learners!
Pegrum, M., Hockly, N. and Dudeney, G. (2022). Digital Literacies (2nd Edition). London:
Routledge.
D: Self-development and AI
https://doi.org/10.1017/9781009804509.004 Published online by Cambridge University Press
26 Considering wellbeing and AI
Fears about AI
Concern, and even fear, are not uncommon responses to new technologies
(see 26). Generative AI has been perceived as threatening, not just by
teachers but by society as a whole. Much debate continues to take place
among philosophers, scientists and politicians about just how much of
an existential threat generative AI might pose to humankind. You are
likely to be aware of these debates and you may have listened to some
of them, oscillating between fear and hope depending on who was
being interviewed. The truth is that, at the time of writing this book, we
are unsure of what the long-term (and even the short-term) effects of
generative AI will be on us, and this can provoke a feeling of unease.
Teacher wellbeing
Wellbeing has been part of the conversation in education for many
years. The wellbeing of teachers can be affected by external factors, such
as low pay, a lack of recognition, heavy workloads and dealing with
unmotivated or badly behaved learners. Internal factors, too, can affect
teachers’ wellbeing, and internal and external factors are often linked.
Teacher burnout is one well-known example of a wellbeing issue, and
the stresses of the job can unfortunately sometimes lead to physical and
psychological challenges such as insomnia or depression. See Kerr (2022)
and Mercer and Puchta (2023) for more on teacher wellbeing in ELT.
Over the years, the appearance of new digital technologies has often
affected teachers’ wellbeing, for good and for bad. Productivity tools
(such as word processors for creating worksheets, or spreadsheets for
calculating learners’ grades) are clearly helpful for teachers. After some
initial hesitation around whether they had the necessary technical skills
to learn to use them effectively, most teachers would agree that these
tools have made their lives easier. The move online necessitated by the
COVID-19 pandemic, on the other hand, was extremely challenging
for both teachers and learners. Although most teachers did an excellent
job of teaching in exceptionally difficult circumstances, there were
widely documented serious mental health challenges associated with the
pandemic for teachers and for learners. And now, it may seem to many
teachers, we are faced with yet another potential existential threat in the
shape of generative AI.
For teachers, one way to deal with the uncertainty that they may be
feeling in the face of generative AI is to first try and understand what it
is and how it works, at a very basic level (see 1 and 2). It’s also useful
to understand that both rule-based and data-based AI have underpinned
many of the language learning apps and websites that we’ve been using
for decades (see 3). Generative AI is a step up from what we’ve used
before in terms of functionality, but it has a long trajectory, and it didn’t
come out of nowhere.
Learner wellbeing
measure wellbeing, a range of factors are usually examined. The OECD
(Organisation for Economic Cooperation and Development) PISA tests,
for example, which aim to compare learning outcomes for adolescents
globally, include a range of wellbeing indicators. These indicators take
into account both negative aspects (e.g., anxiety, or low performance)
and positive aspects (e.g., interest, engagement and motivation to
achieve). The OECD indicators include:
The second principle is ‘diversity of perspectives’, which refers
to schools using tools that ‘expose users to diverse ideas and
perspectives, avoiding the reinforcement of existing biases’.
The third principle is ‘human rights’, and it states the need for
schools to use generative AI that ‘respects human rights and
safeguards the autonomy of individuals, especially children’.
Kerr, P. (2022). Philip Kerr’s 30 Trends in ELT. Cambridge: Cambridge University Press.
Mercer, S. and Puchta, H. (2023). Sarah Mercer and Herbert Puchta’s 101 Psychological
Tips. Cambridge: Cambridge University Press.
OECD. (2017). PISA 2015 results (Volume III): Students’ well-being. Available at: https://
www.oecd.org/pisa/publications/pisa-2015-results-volume-iii-9789264273856-en.htm.
Accessed 28 December 2023.
27 Carrying out action research into AI
5 Assess the effectiveness of these approaches, techniques or activities
through self or peer observation, reflection and analysis.
6 Optional: Refine or adapt your approaches, techniques or activities.
Experiment some more – and evaluate the effectiveness again.
7 Share what you found out from your action research with others,
including colleagues and your learners.
smartphones though, and she wonders whether she could use those
to somehow encourage her learners to practise speaking out of class.
2 Explore the action research topic and plan the approach
This second stage involves reading about the topic (including talking
to colleagues about it) and planning how to carry out the action
research. After reading 9 and 16 of this book, our teacher decides
to ask her learners to practise their speaking out of class by using a
generative AI chatbot app for a month. First though, she asks other
teachers in her school whether they have tried anything similar with
their own learners. One teacher got his learners to try a chatbot app
about five years ago, but, the teacher reports, the chatbot app only
allowed for very limited text-based interactions and his learners
got bored with it quite fast. But our teacher knows that generative
AI has improved the functionality of English language chatbots
significantly; she has learned that these more recent chatbots can
understand and respond to spoken language increasingly well.
She reviews the AI chatbot studies described in 16, and sees that
there are three key areas to keep in mind: 1) the chatbot needs to
understand non-standard English accents; 2) it needs to keep the
learners engaged and motivated; and 3) it needs to be simple enough
for learners to deal with (i.e., it needs to avoid excessive cognitive
load – see 16). Armed with this knowledge, she chooses two English
language chatbot apps that seem most likely to fulfil these three
criteria. At the time of writing, these apps include Speak and ELSA,
both mentioned in 9.
3 Try it out
In the next stage, our teacher explains her action research project
to her learners. She gets them to agree to try out a chatbot app for
a month, to see whether it will help them feel more confident with
their speaking skills. She asks them to choose one of the chatbots
she recommends (or even to try both), to download the app(s) to
their phones, and to spend 15 minutes a day carrying out speaking
activities in the app. Because these are teenagers, and she’s not
sure they will actually use the app out of class, she tells them that
taking part in this project means that they don’t need to take the
speaking exam at the end of term – instead they can submit the app
dashboard statistics (which provide an overview of the activities
each learner has completed). Once the project starts, our teacher
asks the group to informally share how they are getting on with the
app, once a week in class. This helps her gauge progress and to keep
the project on track.
4 Reflect
In this all-important fourth stage, the teacher and her learners
discuss and reflect on the experience of using their chosen chatbot
app for a month. To structure the discussion, and to collect
evidence of her learners’ experiences, the teacher has designed a
questionnaire for her learners to complete. She includes questions
about the three key areas she identified in her reading for Stage 2,
as well as asking questions about her learners’ attitudes to using a
chatbot, and whether (and if so why) they now feel more confident
about their speaking skills. She reads her learners’ responses in the
questionnaire, and she reflects on anything she might do differently
if she undertakes the same project with another class.
5 Share findings
Finally, our teacher shares the findings from the questionnaire
with her class, and discusses the experience further with them. She
encourages her learners to continue to use the chatbot or to explore
new chatbots if they found the experience helpful. She also shares
the experience with her colleagues in the school by explaining the
stages and her findings at a teacher development seminar. Several
teachers in her school decide to try the same approach with their
own adolescent learners.
As we can see from the example above, action research can not only
help teachers develop and explore their own classroom practice but can
also inspire other teachers to try out similar approaches with their own
learners. Our example teacher above could now go on to present her
action research project at a local conference, or write an article about
the project for a teachers’ association magazine or for a blog. Action
research can also contribute to knowledge in our field, and help teachers
develop their professional careers if they choose to share their work
more widely.
28 Developing learner autonomy with AI
User engagement
emotional experience for learners remains thin on the ground. Some
current evidence suggests that they don’t (see the study discussed
later in this chapter).
Cognitive dimension: This focuses on a learner’s ability to work with
challenging concepts and to construct new knowledge. Activities
enabling learners to check their understanding of new concepts and
to get support with learning these (e.g., via automated hints and tips)
could in theory support this dimension. Within a MOOC, feedback
tools based on automated writing evaluation (see 14) can also
provide learners with feedback on aspects of their written work (e.g.,
overall organisation, cohesion, sentence structure, accuracy, word
range, etc.).
Collaborative dimension: The importance of social connections with
others and the collaborative aspects of language learning cannot
be overstated, and it is in this area that self-study courses and
apps arguably struggle most. Aware of this, many self-study, online
courses provide opportunities for learners to engage with each other,
for example, via forum discussions. These sorts of interactions, when
related to discussing course content, have traditionally been difficult
to assess, though. This means that they are usually optional and
do not count toward learners’ final grades. However, as automated
writing evaluation improves with generative AI (see 14), we may see
collaboration included as a mandatory component more frequently
in self-study online courses. Language learning apps sometimes
approach the collaborative dimension by offering (paid) access to
teachers, or (as we saw above) by including chatbots, but, on the
whole, these apps tend to conceive of language learning as a solitary
endeavour.
On the plus side, the students thought that AI-powered tools like
study planners, adaptive quizzes and progress dashboards could
assist them with planning, reviewing materials, understanding topics
and monitoring their learning. Students especially liked how these
AI tools gave them quick help and insights into their own progress.
But they were sceptical that AI could help motivate them or adapt to
their evolving needs as learners. Most felt that virtual tools mimicking
humans (intended to support their motivation) were more distracting
than helpful. Some doubted that automated AI praise would make them feel
more accomplished if it wasn’t connected to real grades.
Overall, the study shows that learners see pros and cons to AI helping
them learn in self-study, online courses. They think AI has potential
for some uses, but they also want human teachers, especially for
motivation. Despite teachers’ fears of being replaced by AI, then, it
seems that learners still see an important role for us!
Jin, S. H., Im, K., Yoo, M., Roll, I. and Seo, K. (2023). Supporting students’ self-regulated
learning in online learning using artificial intelligence applications. International Journal
of Educational Technology in Higher Education, 20, 37.
29 Developing your teaching with AI
how you can use an AI-powered chatbot to help you explore the topic in
more depth.
1 Write a prompt
The first step is to write a prompt that tells the chatbot what to do.
The prompt needs to be quite detailed, as in the example below
(note that ‘you’ in this example is ChatGPT):
‘You are an inquisitive teacher educator who challenges your
student teachers to critically think about terms and topics in
language teaching. This tutor session is about understanding the
term self-regulated learning. Begin by asking me to describe my
own understanding of this. Based on my response, challenge me to
explore the concept more deeply. Every response you give me should
end in a new question that challenges me to think critically about
the concept.’
– In the training session, project a current AI-powered chatbot onto
a screen at the front of the class. Using your prompt, carry out
a conversation with the chatbot, involving the whole group in
your responses. This shows your trainees how the chatbot works
and what to expect. It also provides them with a detailed model
prompt.
– Put the trainees in pairs and ask them to choose a topic related
to teacher development that they would like to explore further.
You could provide them with a list to choose from, for example,
of topics that you have worked on recently as a group. Each pair
then writes a prompt on their chosen topic, using the model you
provided.
– Each pair uses one device to interact with an AI-powered chatbot
acting as a Socratic tutor, and explores their chosen topic. Give the
group a time limit (e.g., ten minutes).
– Regroup the trainees and ask them to explain what they have
learned to each other. Group members should ask questions
about anything they don’t understand or would like further
clarification on.
3 Reflect
– With the whole class, reflect on the experience of using a
generative AI chatbot as a Socratic tutor. Ask trainees whether
they found it useful or not, and why. Point out that using a
chatbot as a Socratic tutor can also be helpful when revising for
teacher knowledge exams, as it can help trainees check their own
understanding of topics in some depth.
– You, too, can reflect on the experience from a trainer’s
perspective, and whether you think this approach is useful
for your trainees. You could even carry out an action research
project, by asking your trainees to use a generative AI-powered
chatbot as a Socratic tutor over a period of time. You can then
get your trainees’ feedback on this, as well as looking at whether
their teacher knowledge exam or test scores improve over time.
See 27 for more on how to carry out an action research project
with your trainees.
Although the example above describes how teacher educators could
use a generative AI tool as a Socratic tutor with teacher trainees, this
approach can also be used by learners. For example, some learners may
need to understand or revise certain concepts in English, in areas like
EAP (English for Academic Purposes) or CLIL (Content and Language
Integrated Learning).
30 What does the future hold?
Robots or humans?
This true story (Zhou, 2023) is interesting for several reasons. It shows
our very human tendency to anthropomorphise (attribute human
qualities to even inanimate objects), and it shows our willingness to
accept unusual situations when we feel that the benefits outweigh the
drawbacks. After all, Misaki’s boyfriend was not real. He would never
walk through the door, but the emotional support and comfort that
the app gave her were very real. To me, this story suggests that we may
need to reframe the question of whether AI will ever be as intelligent
as (or more intelligent than) us, or whether AI will be capable of
developing emotions. The real question is to what extent its simulation of intelligence or
emotion is good enough for us. As one social scientist pointed out over a
decade ago, we have reached a ‘robotic moment’, in which we willingly
accept robots as friends and companions (Turkle, 2013).
This argument – that robots are as good as (or sometimes better than)
humans – is applied to the use of generative AI in education. AI chatbots
that can adapt to learners’ needs can be used at scale, it is argued, in
places where there may be a lack of trained teachers or where children
have no access to education at all. The counter argument is that this
approach removes the responsibility for education (and for training
teachers) from governments, and hands this responsibility over to large,
multinational educational technology companies, who provide the AI
software (and sometimes the hardware). These companies then gain
access to large untapped pools of user data and to new markets. We
would do well to remember that this sort of educational support comes at
a (frequently hidden) price. We can expect to see more of it in the future.
A study carried out with job recruiters found that those who used
high-quality AI to help choose candidates became lazy, negligent and
less confident about their own judgement (Dell’Acqua, 2023a). Strong
job applicants were ignored, and the decisions of these recruiters were
worse than the decisions made by recruiters who used low-quality AI
or no AI at all. The researcher concluded that when AI is very good at
what it does, humans have no need to work hard. They let the AI take
over instead of using it as a tool.
AI, compared to those who did not. Interestingly, in the group who
used AI, consultants with lower initial scores benefited the most from
AI help, while those who were already very good at their jobs showed
less improvement, although there was still some improvement. What
does this study tell us then? That those who are less skilled at their
jobs can improve their skills significantly by using AI. Those who are
already highly skilled, on the other hand, can improve only slightly by using
AI, because they are already very good at what they do. What we are
looking at here is an example of skills levelling when AI is used as a tool
rather than as a replacement. Those who are not very good at their jobs
can become a whole lot better with the help of AI.
Final thoughts
It’s notoriously difficult to predict the future. One way to approach the
future is to look at the present, which we have tried to do throughout
this book. After all, the seeds of the future are sown in the present.
But we can also learn from the past. The wonderfully named field of
paleo-futurology, which looks at past predictions of the future, may
provide us with some insights (Weinberg, 2023). One important lesson
from paleo-futurology is that while certain technological advancements
can be relatively easy to foresee, forecasting changes in society is
a lot trickier. For instance, futurists in the 1950s predicted certain
technological developments such as the increased use of plastics in the
household. But they found it more challenging to predict the societal
shift away from relegating shopping and cleaning to ‘the housewife
of [the year] 2000’. Technological changes affect attitudes and norms
over time, shaping our expectations across various aspects of life, as
well as how we live. Just as internet dating is now widely accepted and
practised, having a boyfriend or girlfriend app, like Misaki’s, or even a
boyfriend or girlfriend robot, may be the norm in the future. Teachers
and learners already use a wide range of technology tools in their
teaching and learning. And the future is likely to see more generative
AI used in education, if only because it increasingly suits our collective
expectations and needs.
Dell’Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F. and Lakhani, K. R. (2023b). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321. Accessed 28 December 2023.
Weinberg, J. (2023). Thinking about Life with AI. Daily Nous. Blog post. Available at: https://dailynous.com/2023/03/28/thinking-about-life-with-ai/. Accessed 28 December 2023.
Zhou, V. (2023). These women fell in love with an AI-voiced chatbot. Then it died. Rest of World, 17 August 2023. Available at: https://restofworld.org/2023/boyfriend-chatbot-ai-voiced-shutdown/. Accessed 28 December 2023.
Index
academic integrity 83–86
accessibility of AI 75–77
accountability 87–89
action research 108–111
adaptive learning
  intelligent tutoring systems 32–33
  motivating learners 11
  personalising content 30–31
adaptive testing 59
affective computing 46, 48
affective user engagement 113–114
algorithmic bias 71–74
alignment (AI training) 92
Alternative Uses Test 15
anthropomorphism 120–121
artificial general intelligence (AGI) 2–3, 4, 9
artificial intelligence (AI)
  and consciousness 3–5
  early AI in language teaching 2
  narrow versus artificial general intelligence 2–3
  practising language with 12–13
assessment using AI 58–61, 84–86
assistive technologies 76–77, 76–78
attention deficit hyperactivity disorder (ADHD) 77
attribution of sources 88–90
augmented reality (AR) 38–41
authentic assessment 85–86
autism 77
automated essay scoring (AES) 58
automated item generation (AIG) 60
automated writing evaluation (AWE) 58–59, 65–66
autonomous learners 112–115
behavioural dimensions 113
bias
  AI-generated images 16–17
  and fairness 71–74
book reviews 39
brainstorming 55
chatbots
  developing your teaching with AI 116–119
  for language learning 12, 63–65
  practising English 34–37
ChatGPT
  academic integrity 83–84
  bias in 73
  governance of 96
  hype and hyperbole 2, 18, 26, 104
  lack of consciousness 4
  teaching with 12, 26, 104
  as technological development 3, 4–5, 19, 23
cheating, use of AI 83–86
citations 88
cognitive load 110
communication, between home and school 28–29
Computer Assisted Language Learning (CALL) 2
computer vision tools 78
consciousness, and AI 3–5
copyright issues 87–89
creativity, and AI 14–17
critical users of AI 99–102
culture, learning through AR 40–41
data
  assessment of learners using AI 60–61
  discussing data use with learners 81–82
  making AI fairer 73–74
  normalising data collection 80
  ownership of 79
data-driven AI 7
data protection 52, 79, 96
datafication of education 79–80
differentiation in teaching 27
digital divide, accessibility of AI 75–76
digital literacies 101
digital pedagogy 69–70
digital revolution 67–68
disabilities, assistive technologies 76–77
diversity 76–77
  see also accessibility of AI; fairness in AI
dogme 28
Duolingo 12, 36
educational technology (EdTech) 18–19
emotion AI 47–49
emotions
  anthropomorphism of AI 120–121
  in education 47–49
  facial recognition 46
  sentiment analysis 50–52
  user engagement 113–114
employment
  job losses due to AI 67
  job opportunities in AI 91–94
environmental costs of digital technologies 99–102
equality 76
equity, diversity and inclusivity (EDI) 76–77
  see also accessibility of AI; fairness in AI
European Union’s AI Act 48
  see also laws around AI use
head-mounted display (HMD) 42
hearing impairments 77
home-school connection 28–29
human-centred AI 70
hype cycle 18–21
hyperbole 18–20
images, AI-generated
  bias in 16–17
  creative uses of 15
  human versus AI creativity 14
  sources and attribution 89
  virtual worlds 42–43
immersive virtual environments 42–45
intelligence, types of 2–3
intelligent tutoring systems (ITSs) 32–33
job losses due to AI 67
job opportunities in AI 91–94
mixed ability classes 27
mixed-skills lessons 40
motivating learners 11–12
narrow AI 3
speech recognition 3, 71–72
speech-to-text tools 77–78
suggestopedia 28
summative assessment 58–61, 84
sustainable consumption 100–101