
Nicky Hockly’s 30 Essentials for Using Artificial Intelligence



Cambridge Handbooks for Language Teachers Pocket editions
Part of the award-winning Cambridge Handbooks for Language Teachers series, these
practical, user-friendly books are full of tips and ideas from experienced English language
teaching professionals, to enrich your teaching practice.
Recent Pocket Editions:
Herbert Puchta’s 101 Tips for Teaching Teenagers
Penny Ur’s 100 Teaching Tips
Nicky Hockly’s 50 Essentials of Using Learning Technologies
Jack C. Richards’ 50 Tips for Teacher Development
Penny Ur’s 77 Tips for Teaching Vocabulary
Scott Thornbury’s 30 Language Teaching Methods
Jeremy Harmer’s 50 Communicative Activities
Alan Maley’s 50 Creative Activities
Philip Kerr’s 30 Trends in ELT
Scott Thornbury’s 101 Grammar Questions
Sarah Mercer and Herbert Puchta’s 101 Psychological Tips
Mark Hancock’s 50 Tips for Teaching Pronunciation
Scott Thornbury’s 66 Essentials of Lesson Design
Carol Read’s 101 Tips for Teaching Primary Children
David Crystal’s 50 Questions About English Usage



Nicky Hockly’s 30 Essentials for Using Artificial Intelligence

Nicky Hockly

Consultant and editor: Scott Thornbury



Shaftesbury Road, Cambridge CB2 8EA, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
103 Penang Road, #05–06/07, Visioncrest Commercial, Singapore 238467

Cambridge University Press & Assessment is a department of the University of Cambridge.


We share the University’s mission to contribute to society through the pursuit of
education, learning and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781009804523
© Cambridge University Press & Assessment 2024
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press & Assessment.
First published 2024
20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1
Printed in Great Britain by CPI Group (UK) Ltd, Croydon CR0 4YY
A catalogue record for this publication is available from the British Library
ISBN 978-1-009-80452-3 Paperback
ISBN 978-1-009-80453-0 eBook
ISBN 978-1-009-80450-9 Cambridge Core
Cambridge University Press & Assessment has no responsibility for the persistence
or accuracy of URLs for external or third-party internet websites referred to in this
publication and does not guarantee that any content on such websites is, or will
remain, accurate or appropriate.



Contents

Acknowledgements and thanks
Why I wrote this book

A: Setting the scene
1 What is AI?
2 What is generative AI?
3 AI and language learning
4 AI and creativity
5 Technology and the hype cycle

B: AI in language teaching and learning
6 Learning a language with AI
7 Teaching with generative AI
8 Personalising content for learners
9 Practising English with chatbots
10 Learning with augmented reality
11 Learning with virtual reality
12 Understanding real-time learner engagement through AI
13 Understanding emotion in texts with AI
14 Developing writing skills with AI
15 Assessing learners with AI

C: The big questions
16 Can AI support language learning?
17 What does AI mean for teachers?
18 How can we make AI fair?
19 How can we make AI accessible to all?
20 Who owns the data?
21 Does AI help learners cheat?
22 Whose content does AI use?
23 Who creates AI?
24 Can we control AI?
25 How can we become critical users of AI?

D: Self-development and AI
26 Considering wellbeing and AI
27 Carrying out action research into AI
28 Developing learner autonomy with AI
29 Developing your teaching with AI
30 What does the future hold?

Index

Acknowledgements
The authors and publishers acknowledge the following sources of
copyright material and are grateful for the permissions granted. While
every effort has been made, it has not always been possible to identify
the sources of all the material used, or to trace all copyright holders. If
any omissions are brought to our notice, we will be happy to include
the appropriate acknowledgements on reprinting and in the next update
to the digital edition, as applicable.

Key: Ess = Essentials.

Text
Ess10: Ernst Klett Sprachen GmbH for the adapted text from Going
Mobile by Nicky Hockly and Gavin Dudeney. Copyright © 2014
DELTA Publishing/Ernst Klett Sprachen GmbH. Reproduced with kind
permission; Taylor and Francis Group for the adapted text from Digital
Literacies 2e by Mark Pegrum, Nicky Hockly and Gavin Dudeney.
Copyright © 2022 Informa UK Limited, an Informa Group Company.
Reproduced with permission of the Taylor and Francis Group through
PLSclear; Ess26: Framework taken from ‘Australian Framework for
Generative Artificial Intelligence in Schools’. Copyright © 2023 State of
New South Wales (Department of Education).
Typesetting
Typesetting by QBS Learning.

Why I wrote this book
I first encountered the term artificial intelligence in the early 1980s.
I had just started a BA degree in English Literature and was living
in shared accommodation with four other students. Three of my
housemates were studying a degree programme called Artificial
Intelligence. I’d never heard the term before. When I asked them what
artificial intelligence was, they explained that figuring this out was a key
part of their studies. These were the early days in the field of ‘AI’, as I
quickly learned to call it. AI-related studies focused on understanding
how humans think, I was told, in order to build up a model of the
mind that might then be replicated through computer programming.
So my housemates spent a lot of time coding, and reading the work
of cognitive psychologists. My own interests as an English literature
student seemed a million miles away from those of my housemates.
Although I didn’t know it at the time, AI would cross my path many
times more, and become firmly entangled in my own professional life as
an English language teacher.

The second time AI crossed my path was after I had been teaching
English for about ten years. I was invited to join the materials writing
team of one of the first fully online English language schools, in the mid-1990s. The language learning materials were online, and the learners
were based all over the world. In these early days, the online materials
reflected what was possible (in terms of computer programming) at the
time. Learners were presented with theme-based units that included
reading and listening texts with automated comprehension questions.
Automated grammar, vocabulary and pronunciation activities based on
these texts then provided learners with language work in context. It was
cutting-edge stuff, not just because it included some solid programming,
but because it looked great design-wise, and it was underpinned by a
robust communicative language teaching approach. Remember that
this was several years before audio communication tools like Skype
even existed, and video conferencing tools like Zoom were very far in
the future. The AI that underpinned these automated language learning
activities is known as rule-based AI, and we explore it further in 2. It’s
an approach to programming that still underlies many of the automated
language learning activities that you find online today.

Fast-forward to today, and generative AI is the new kid on the
block. The logic model that underpins rule-based AI, which meant
programming computers with instructions such as ‘if x, then y’, has
given way to something a lot more sophisticated, often referred to as
‘deep learning’, which has been evolving since the 1980s and 1990s.
Deep learning techniques are based on artificial neural networks, which
are called this because they attempt to simulate (in mathematical terms)
the biological neural networks that are found in the human brain. Deep
learning enables neural networks to learn from vast datasets and to
carry out complex tasks like image and speech recognition. We explore
this concept further in 2, but suffice to say here that generative AI
represents a fundamental shift in the field of computing science, and one
that we don’t yet fully understand. This can feel very unsettling.

Hence this book. Precisely because generative AI is so powerful, it raises all sorts of interesting questions. There are plenty of issues with
generative AI – and we explore several of them in the book – but we can
expect it to get better over time.

The aim of this book, then, is to help you get to grips with AI by giving
you an idea of how it can be used in English language teaching and
learning. The book is divided into four sections. Section A gives you
some background on AI to help you understand what it is and how
it works, in very general terms. I personally find that understanding
where something comes from and how it works makes it less scary.
You may want to start here, especially if you would like to brush up
on some of the basics of AI. This will then give you the information
you need to explore the use of AI in English language teaching and
learning in Section B. This is followed by what, to me, is the most
interesting part of the book. In Section C, we think about the big
questions that using AI raises, not just in ELT but for society in general.
There are some important caveats and challenges to the use of any
educational technology, and AI is no exception. So you might want to
start with some of the chapters in this section instead, if this is what
interests you and you are already familiar with the key elements of
AI. Finally, Section D looks at how AI – especially generative AI – can
help us develop as teachers and learners. If topics such as wellbeing or
autonomous learning are of particular interest to you, you might want
to start with some of the chapters in this section. In short, there are
many ways to read this book. Wherever you start or finish, I hope you
come away with a clearer idea of AI in the field of ELT. I hope you also
come away with some ideas around what AI might mean for us in wider
society, both now and in the future.

A: Setting the scene

This first section aims to demystify artificial intelligence (AI) by explaining
in a very basic sense what AI is, and how
it works. We also consider how and why
EdTech is prone to hype, and how we and
our learners might resist this hype.

1 What is AI?
2 What is generative AI?
3 AI and language learning
4 AI and creativity
5 Technology and the hype cycle

1 What is AI?

The field of artificial intelligence (AI) aims to create digital machines that can carry out tasks that typically need
human intelligence. How close are we to human-level
artificial intelligence?

Early AI in language teaching

With the amount of hype and hysteria that surrounded the arrival
of ChatGPT in late 2022, you’d be forgiven for thinking that AI is
a completely new technology for teachers and learners. Not so. The
earliest and simplest forms of AI in ELT can be traced back to the
1960s, when CALL (Computer Assisted Language Learning) emerged
as an area of study. In these early days of AI, computers could be
programmed to provide limited responses to prompts. Computers
were large and expensive, and tended to be found in universities. The
advent of the personal computer, however, meant that by the late
1980s and early 1990s computers began to appear in schools and in
people’s homes. Language learning software with simple gap-fill and
text reconstruction activities became available. As computing power
increased, and computers developed multimedia capabilities, other uses
for AI in language learning emerged. This was the heyday of the CD-ROM. By the early 1990s, some language learning software began to
integrate voice recognition to support pronunciation. Since then, AI
has become more powerful, and technology – especially in the form of
mobile devices – has become more ubiquitous.

Narrow versus artificial general intelligence

To understand where AI has come from and where it is going, it is useful to distinguish between narrow (or weak) AI and strong AI – the latter is
usually referred to as artificial general intelligence or AGI. Here an
analogy may be helpful. Imagine a chair that is mass-produced in a
factory. The machine that assembles the individual parts of the chair

follows very specific instructions. The machine cannot decide to create
another furniture item – let’s say a table – unless it is programmed to
do so. When the machine breaks down, it cannot fix itself. It is very
good at performing a pre-defined task (assembling a chair) quickly
and efficiently, but it cannot solve problems and it cannot do new
things or adapt to new situations. This is narrow AI. Now imagine a
skilled carpenter who makes wooden chairs by hand. She can create
unique chair designs. She gets better at her craft over time, learning
from her successes and mistakes. She teaches herself to use new and
more sophisticated carpentry tools, and she takes pride in her work.
The skilled carpenter represents AGI, which is indistinguishable from
human intelligence. AGI can plan, problem-solve and learn, and carry
out complex multi-faceted tasks. It displays a human-like level of
consciousness while doing so. We are not yet in the phase of AGI, but
the goal, computer scientists tell us, is to get there.

In our field, the gap-fill computer programs of the 1980s and 1990s
are examples of early – and therefore narrow – AI. Voice recognition
software, which was notoriously unreliable in the 1990s, has become
increasingly accurate. More recently, we have tools like ChatGPT, which
are based on generative AI (see 2), and can generate content in text,
image or multimedia formats. ChatGPT is still considered an example
of narrow AI by most researchers, but it represents a significant step
towards stronger forms of AI, not least in the way it seems to interact
with us in a very personable (i.e., pleasant and friendly) manner. It’s
useful here to imagine AI on a scale, with narrow AI at one end of the
scale, and AGI at the other end. Tools like ChatGPT can give us the
impression that we are moving quite fast along the scale from narrow to
AGI. However, not everyone thinks we can get all the way to AGI, and
not everyone is happy at the prospect of this, but there is no doubt that
AI is starting to feel more human-like.

AI and consciousness

What ‘human-like’ actually means is, unsurprisingly, the subject of hot debate. You may have heard concerns that AI might already be
‘conscious’. But how can we know whether AI is conscious or not?
When I asked ChatGPT this very question, the answer was clearly no,
with ChatGPT pointing out that it is a computer program that has no
consciousness, thoughts or feelings.

Although current generative AI systems clearly claim not to be conscious, the issue underlies many of the debates around AGI. To
address the tricky question of what AGI consciousness is, researchers
suggest that we need to be scientific in our approach. For example, in
one study, a large team of academic researchers first tried to define what
human consciousness is from a range of widely accepted neuroscientific
theories (Butlin et al., 2023), although it should be noted that there
is plenty of debate over exactly what this consists of (Goff, 2023).
The researchers then came up with a number of indicators to describe
consciousness, based on these theories. The next step was to compare
current AI systems against these indicators to see where they match
with human consciousness and where they fall short. There were two
interesting findings from this study. The first was that no current AI
system fulfilled all of the criteria for human consciousness as defined in
the study. The second was that there is no reason why future models of
AI can’t (at least in theory) fulfil all of these criteria. In other words,
the researchers concluded that although we may not be at AGI yet, we
could get there in the future.

There are plenty of commentators, however, who strongly disagree that generative AI is, or ever will be, intelligent. One theoretical physicist
called ChatGPT a ‘glorified tape recorder’ (Cao, 2023), for example,
while others point out that current forms of generative AI are unable
to perform basic maths, are prone to hallucinations (i.e., to making up
facts) and overall have a very hazy grasp of reality (Marcus, 2023).

Wherever we may be on the scale from narrow AI to artificial general intelligence, the much-publicised arrival of ChatGPT brought generative
AI (see 2) very much to the forefront of public consciousness. Freely
available at the time, ChatGPT reached 100 million users within two
months of its launch. In comparison, TikTok took nine months to reach
a similar number of users, and Instagram took two and a half years. The
unprecedentedly fast and widespread uptake of ChatGPT had the effect of
focusing minds on where AI may be leading us. Generative AI tools like
ChatGPT did not come out of nowhere. They were based on years of
work in the field of natural language processing. ChatGPT in particular, though, seemed to awaken the ELT community to the potential
advantages – and challenges – of AI in language teaching. Teachers and
learners quickly realised that this was going to be a game-changer for
our field. But as we will see in this book, AI encompasses much more
than a tool like ChatGPT, and in many ways, the game has already
changed.

Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., Deane, G., Fleming, S. M., Frith, C., Ji, X., Kanai, R., Klein, C., Lindsay, G., Michel, M., Mudrik, L., Peters, M. A. K., Schwitzgebel, E., Simon, J. and VanRullen, R. (2023). Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. Available at: https://arxiv.org/abs/2308.08708. Accessed 24 December 2023.

Cao, S. (2023). A. I. Today Is a ‘Glorified Tape Recorder,’ Says Theoretical Physicist Michio Kaku. Observer Newspaper, 15 August 2023. Available at: https://observer.com/2023/08/michio-kaku-ai-chabot/. Accessed 24 December 2023.

Goff, P. (2023). Understanding Consciousness Goes Beyond Exploring Brain Chemistry. Scientific American. Available at: https://www.scientificamerican.com/article/understanding-consciousness-goes-beyond-exploring-brain-chemistry/. Accessed 22 January 2024.

Marcus, G. (2023). Reports of the birth of AGI are greatly exaggerated. Blog post. Available at: https://garymarcus.substack.com/p/reports-of-the-birth-of-agi-are-greatly. Accessed 24 December 2023.

2 What is generative AI?

The appearance of tools like ChatGPT brought the potential benefits and challenges of generative AI into
sharp focus for educators. Understandably, teachers have
responded to it with both joy and fear. Understanding
some of the principles behind generative AI can help
demystify it.

Knowledge-based AI

To understand generative AI, it’s helpful to first look at earlier types of AI – known as knowledge-based AI. The spell-check program is
an early and useful example of this. How does a spell-check program
typically work? First, a dictionary or lexicon of correctly spelled words
(the knowledge base) is defined. Then, a programmer creates explicit
rules that tell the spell-check program to compare each word in a text
against the words in its knowledge base. If a word is not found in the
lexicon, it is considered a potential spelling error. The program then
suggests possible corrections based on words in its knowledge base.
The corrections are generated by the program applying pre-written
rules and algorithms.
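
If you are curious about what these pre-written rules look like in practice, here is a minimal sketch in Python. The tiny lexicon and the one-edit suggestion rule are invented for illustration – real spell-check programs use much larger lexicons and more sophisticated rules – but the shape (a knowledge base plus explicit rules) is the same.

# A minimal knowledge-based spell checker: a lexicon (the knowledge base)
# plus explicit rules for flagging errors and suggesting corrections.
LEXICON = {"the", "cat", "sat", "on", "a", "mat", "their", "there", "they're"}

def one_edit_away(word):
    # Rule: candidate corrections are all strings one edit away from the
    # word (a letter deleted, swapped with its neighbour, replaced or added).
    letters = "abcdefghijklmnopqrstuvwxyz'"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def check(word):
    # Rule: a word that is not in the lexicon is a potential spelling error.
    if word in LEXICON:
        return []
    return sorted(one_edit_away(word) & LEXICON)  # suggest lexicon words

print(check("teh"))    # ['the'] - flagged, with a suggested correction
print(check("their"))  # []      - in the lexicon, so no error is flagged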

These days, most spell-check programs will take context into account
when suggesting corrections. For example, imagine that the program
encounters the word ‘their’ in the sentence, ‘Their going to the park’.
The word ‘their’ is correctly spelled, but it’s not grammatically correct.
The program may suggest ‘they’re’ as the correct spelling because it
considers the sentence as a whole. Some spell-check programs may offer
a choice of corrections for this sentence, for example, by offering not
just ‘they’re’, but ‘there’. This shows us that there is a rule (or algorithm)
that tells the program to suggest words that are phonologically similar
when it spots a possible error. By providing a choice, the program is also
allowing for human judgement. Other terms you may come across for
knowledge-based AI are predictive AI or rule-based AI. Knowledge-based
AI has been used extensively in language learning, including in apps
(see 6), intelligent tutoring systems and chatbots (see 8 and 9),
automated translation and testing (see 15).

Data-driven AI

As well as knowledge-based AI, we have so-called data-driven AI. This approach is based on something called machine learning. Simply put,
machine learning involves the development of algorithms and models
that can learn from data and improve their performance on a specific
task without being explicitly programmed. More on this below.

Data-driven AI has been around for several decades, too, and it is used in areas like language testing, speech recognition and machine
translation. Indeed, knowledge-based and data-driven approaches are
often combined. For example, spell-check programs are knowledge-based, but they are also now based on large datasets of language. This
means that they can more accurately identify the context a word is used
in, and therefore offer more accurate corrections. If you, like me, have
used spell-check programs for years, you’ll have noticed how much
more helpful they have become over time.
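
To see the data-driven difference in code, here is a toy corrector in Python that chooses between the homophones in the earlier ‘Their going to the park’ example. The three-sentence ‘corpus’ is invented for illustration; real systems learn from millions of sentences, but the principle – counting patterns in data rather than writing rules by hand – is the same.

from collections import Counter

# A toy data-driven corrector: it learns word-pair (bigram) counts from
# example sentences, instead of relying on hand-written rules.
corpus = [
    "they're going to the park",
    "there is a park near their house",
    "their house is near the park",
]
bigrams = Counter()
for sentence in corpus:
    words = sentence.split()
    bigrams.update(zip(words, words[1:]))

def best_fit(candidates, next_word):
    # Choose the candidate most often followed by next_word in the data.
    return max(candidates, key=lambda w: bigrams[(w, next_word)])

# In 'Their going to the park', the data favours "they're" before "going".
print(best_fit(["their", "there", "they're"], "going"))  # -> they're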

Generative AI

Within the field of data-driven AI, we have generative AI, which is trained on massive quantities of user-generated online data. Generative
AI can become more knowledgeable and accurate over time, as it
receives more data, is retrained and refined, and as internal parameters
are adjusted to better process the input data. There are different types of
generative AI tools. Some generate text and are based on huge amounts
of online texts, including resources like Wikipedia and online books.
Text-generation AI tools included ChatGPT, Gemini and Bing at the time
of writing, although these are very likely to be joined or replaced by
other tools in the future. Other generative AI tools can generate images,
videos or sound, and are based on immense datasets of these media;
these media and texts are harvested from the internet – often without
permission (see 22). Simply put, generative AI uses the algorithms it
develops to teach itself; for example, it can draw conclusions about
things that may not have been in the original data, and it can generate
new data in the form of images, videos or text.

Large language models

Text-based generative AI tools are based on so-called Large Language Models (LLMs). This is a complex area, but let’s try to get a basic understanding of LLMs.

A Large Language Model is a type of artificial intelligence system that is designed to analyse and generate human-like text. It’s essentially
an advanced computer program that’s very good at analysing and
producing words in context. LLMs are built upon deep learning
techniques and algorithms based on neural networks. The term neural
network comes from biology, and refers to the way the human brain
is made up of billions of interconnected neurons. Neural networks in
computing attempt to simulate these biological networks. They are
complex mathematical models based on algorithms with billions of
parameters that can identify patterns, correlations and relationships
in data, and make predictions or decisions based on these data. I find
it helpful to think of an artificial neural network as a digital brain
made up of tiny decision-making units. These units work together
to solve problems or recognise patterns. Information goes in, and
the neural network processes it and gives an answer. Artificial neural
networks underpin many computer tasks, like recognising pictures and
understanding language.
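
For readers who like to see ideas in code, here is a minimal neural network in Python, using the numpy library. Its handful of random weights stand in for the billions of learned parameters in a real system, but it shows the basic shape: information goes in, each unit makes a small decision, and an answer comes out.

import numpy as np

rng = np.random.default_rng(0)

# A tiny neural network: 3 inputs -> 4 hidden units -> 1 output.
# Each unit multiplies its inputs by weights, sums the results, and
# passes them through a simple non-linear "decision" function.
W1 = rng.normal(size=(3, 4))  # weights into the hidden layer
W2 = rng.normal(size=(4, 1))  # weights into the output unit

def relu(x):
    return np.maximum(0, x)  # a unit "fires" only for positive input

def forward(inputs):
    hidden = relu(inputs @ W1)  # each hidden unit makes a small decision
    return hidden @ W2          # the output combines those decisions

x = np.array([0.5, -1.2, 3.0])  # information goes in ...
print(forward(x))               # ... and an answer comes out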

Large Language Models are called ‘large’ because, as we saw above, they are trained on datasets containing vast amounts of text taken
from the internet. The training of LLMs can either be supervised or
unsupervised. In supervised training, humans tell the model when the
content it produces is right or wrong, or how it can be improved. The
model uses this human feedback to adjust its algorithms and improve
its performance, making it better at understanding and generating
text. This feedback loop helps train the AI to be more accurate, useful
and aligned with human preferences and intentions. This approach is
called Reinforcement Learning from Human Feedback (RLHF), and it
requires a significant amount of human labour (something we explore
in 14). Exactly how an LLM uses a neural network to teach itself to
adapt the parameters of its algorithms is unclear though. This lack of
clarity is, understandably, a concern for many computer scientists –
and has led some commentators to claim that we are getting close to

artificial general intelligence (AGI – see 1). One thing is clear though
– the quality and quantity of training data that generative AI platforms
use is important, as is the quality of human feedback they receive in
supervised learning. Poor or biased input, or poor or biased feedback, is
likely to lead to poor or biased outputs (see 18).
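
A toy sketch in Python can show the shape of this feedback loop. To be clear, this is not the real RLHF algorithm – that involves training a separate reward model and reinforcement learning over billions of parameters – just an invented, one-number illustration of behaviour being nudged towards what a human rater prefers.

import random

random.seed(1)

def human_score(words_per_answer):
    # Stand-in for a human rater who prefers answers of about 50 words.
    return -abs(words_per_answer - 50)

model = 10.0  # the "model" currently produces ~10-word answers
for step in range(100):
    candidate = model + random.uniform(-5, 5)  # try slightly different behaviour
    if human_score(candidate) > human_score(model):
        model = candidate  # keep whatever the rater preferred

print(round(model))  # drifts towards ~50, the rater's preference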

What this means for language teachers

In short, LLMs learn language patterns, semantics and context from large amounts of internet data, and they use this knowledge to generate
text that is coherent, contextually relevant – and very human-like. We’re
not sure exactly how they do some of the mathematical/algorithmic bits
in this process. LLMs don’t make many grammar and spelling mistakes
and they can sound very knowledgeable and convincing, even if they are
not always factually correct (see 1). Because of their advanced language
capabilities, text-generation AI tools based on LLMs are finding their
way into language learning in the form of chatbots, translation, content
generation and more – as we explore in the rest of this book.

Comparing knowledge-based AI with generative AI can help us understand how the latter is an important developmental step in the
field of AI. Knowledge-based AI has been around for decades, and
most of us have probably experienced it in our personal lives (for
example, in spell-check programs) or with our learners (for example,
by encouraging them to use language learning apps). Generative AI
however, which can generate realistic, seemingly new content, is a more
recent development. As we will see in this book, there is room for both
knowledge-based and generative approaches in language learning.

3 AI and language learning

Now that we have a basic understanding of both knowledge-based and data-driven AI (including generative AI), we can
consider its potential to support language learning.

Learning a language

When I moved to Spain in my early 20s, I knew not one word of Spanish. In my first year of living in the country, I had a very old
car that kept breaking down. I had to learn Spanish fast, not least
to describe my latest car problems to the local mechanic. Gears,
handbrake, clutch, windscreen wiper … these low-frequency vocabulary
items were vital for me to learn in my first few months. None of these
words in Spanish sounded anything like their English equivalent, and
I found them very difficult to remember. So, I took a piece of paper,
wrote the words in English on one side and the Spanish translation
on the other side, and kept the piece of paper in my pocket for several
weeks. Every time I put my hand in the pocket, I’d remember the piece
of paper, try to remember the Spanish words, and then check whether
I was correct by looking at my paper. Within a couple of weeks, I’d
managed to memorise all the words, and I remember them to this day.

Although we don’t know everything about how languages are learned, we do know several things from decades of second language acquisition
research. For example, learning a language entails storing a large
number of words and phrases (often called lexical chunks) in memory.
In my case, it was car vocabulary in Spanish. I used paper, but had it
existed at the time, an AI vocabulary app could have helped me with
this. Research shows that knowledge-based (see 2) AI language learning
apps can help learners commit new words and phrases to memory,
especially if these are grouped together in thematically connected lexical
sets. It also helps if words are shown in context (so in sentences rather
than as individual unconnected words) and if the app uses spaced
repetition. Spaced repetition occurs when the learner comes across
a target word or phrase many times over increasingly longer time
intervals. I unwittingly used spaced repetition by looking regularly at
the car vocabulary items on my piece of paper, until, finally, I didn’t
need to look at it anymore.
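
Spaced repetition is simple enough to sketch in a few lines of Python. The doubling rule below is a deliberate simplification for illustration; real apps tune the intervals to each learner’s answer history.

from datetime import date, timedelta

# A minimal spaced-repetition scheduler: each correct recall doubles the
# gap until the next review; a failed recall resets the gap to one day.
def next_review(interval_days, recalled_correctly):
    interval_days = interval_days * 2 if recalled_correctly else 1
    return interval_days, date.today() + timedelta(days=interval_days)

interval = 1
for recalled in [True, True, True, False, True]:
    interval, due = next_review(interval, recalled)
    print(f"next review in {interval} day(s), on {due}")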

To learn a language, we need more than words. We also need grammatical structures, and we need to know how to pronounce things
(as I soon discovered when trying to describe issues with the clutch to
my mechanic in beginner level Spanish!). Most language learning apps
include grammar-based activities, and you can also listen to how words
and sentences are pronounced.

In the examples above, I’ve described how you can set out to
deliberately learn the lexis, structure and pronunciation of the language
– for example, by noting and reviewing key vocabulary on a piece of
paper, or, if you have a mobile device, by using an app. We can also
acquire language more informally by being exposed to it. You might
notice a word that you’ve never heard before in an English language
movie, and then decide to use it. If you’re into digital gaming in English,
for example, where you will typically work in teams to complete a
mission online, you’re likely to pick up terms related to gameplay;
you will also most likely pick up some of the social language used to
communicate with your team members. This is often referred to as
incidental language learning, and it is no less valuable than formal
language learning.

Motivating learners

A learner’s progress in a language learning app is typically tracked by data-driven AI (see 2), and additional lexis or activities are suggested by
the app depending on that progress (this is known as adaptive learning
– see 8). Some language learning apps include elements of gamification,
where learners can win points and move up through levels depending
on their performance in games and quizzes. These gamification
elements, including immediate feedback on progress and an attractive
interface, are designed to maximise engagement. Engagement has the potential to support learning by encouraging learners to spend more time on an activity while in a heightened state of attention.

Practising the language

Second language acquisition research also tells us that apart from learning the nuts and bolts of a language (the lexis, grammar and
pronunciation), we need to actually use it. This means applying our
knowledge of the language to reading, writing, listening and speaking,
in the conditions that research suggests are optimal. What are these
optimal conditions? Research suggests that learners’ motivation and
engagement need to be high, that they are provided with help (or
‘scaffolding’) at the point of need (that is, when they need to use the
language), and that feedback on what they say or write is personalised
and constructive (see Thornbury, 2016). Clearly, teachers can provide all
of these things for learners. The question is – can AI?

Generative AI tools (see 2), it is argued, can potentially provide language learners with many of these optimal conditions, by acting as a personal
tutor and language partner. Simpler knowledge-based AI chatbots can
typically hold conversations based on narrow, predefined scripts because
they respond to a series of pre-programmed prompts. If you ask these
chatbots questions that they have not been programmed to respond to,
the conversation soon breaks down. Chatbots based on generative AI,
however, can engage in more naturalistic dialogue. One of the first and
(at the time of writing) best-known of these generative AI chatbots is
ChatGPT. A chatbot that is based on generative AI can ‘remember’ what
has already been said during a conversation, and it can correct the learner
if asked to do so. It can also provide another of the key elements needed
for language learning that we identified above – scaffolding, or help at the
point of need. Because generative AI can draw on previous elements in
a conversation, the learner can ask it to explain things, to provide more
examples or to simplify its language, as needed. While providing feedback
or more examples, the chatbot can, if asked, help the learner notice key
words or structures by highlighting their importance. Noticing is also
an important part of language learning, and it happens when the learner
pays conscious attention to specific words or structures, and then tries to
use the words or structures correctly in their output. Duolingo, a well-
known language learning app that was launched in 2012, was one of the
first to integrate a generative AI chatbot (in this case, based on OpenAI’s
GPT-4 technology). We explore the use of chatbots based on generative
AI for language practice in more detail in 9.
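
The difference between the two kinds of chatbot is easy to see in code. Here is a minimal scripted, knowledge-based chatbot in Python (the three-entry script is invented for illustration): anything outside its pre-programmed prompts makes the conversation break down – exactly the limitation that chatbots based on generative AI overcome.

# A minimal scripted (knowledge-based) chatbot: it can only respond to
# prompts it has been explicitly programmed with.
SCRIPT = {
    "hello": "Hello! How are you today?",
    "how are you": "I'm fine, thank you. And you?",
    "bye": "Goodbye! See you next lesson.",
}

def scripted_bot(user_input):
    key = user_input.lower().strip("?!. ")
    # The conversation breaks down on anything outside the script.
    return SCRIPT.get(key, "Sorry, I don't understand.")

print(scripted_bot("Hello"))                      # works: it's in the script
print(scripted_bot("Can you correct my essay?"))  # breaks down: it isn't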

As we’ve seen in this chapter, AI can provide support for learners in
a range of areas that research has shown are important for language
learning. However, there is one area in which AI inevitably falls short
– that of providing a human connection. Language is, after all, about
communication, and communicating with a machine is simply not
the same as communicating with another human, with all the nuance,
empathy and connection that this entails. Humanoid robots powered
by generative AI underpinned by large language models (see 2) are,
inevitably, on their way. To what extent they may replace human
conversation partners remains to be seen.

Thornbury, S. (2016). Educational Technology: Assessing its Fitness for Purpose. In McCarthy, M. (Ed.). The Cambridge Guide to Blended Learning for Language Teaching, pp. 25–35. Cambridge: Cambridge University Press.

4 AI and creativity

We consider what the creative potential of generative AI means for human creativity. More specifically, we consider
how this creativity can be used by language teachers and
learners.

The first generative AI to arouse widespread public interest included tools that could generate text (for example, ChatGPT and Gemini)
and tools that could generate images (for example, DALL-E and Stable
Diffusion). Generative AI’s ability to produce new content in text, image
and multimedia formats, based on large amounts of data collected from
the internet without permission, has led to major concerns around
copyright and attribution (see 22). It has also led to concerns over
whether AI has the potential to replace human creativity.

Human versus AI creativity

Let’s start with the thorny question of whether generative AI’s creativity
is as good as human creativity. If we measure the worth of creative
content by its ability to win prizes, then the answer may well be ‘yes’.
In one well-known case, a photographer won a 2023 Sony World
Photography award with an AI-generated photographic image. He
stated that he had deliberately submitted the image to spark debate
around the use of AI in creating images. AI-generated content is not
always of high quality, though. Low quality books generated by AI are
readily available for purchase on platforms like Amazon’s Kindle, where
there are no quality controls on self-published content. Unscrupulous
academic journals have long been known to accept nonsensical articles
generated by AI for publication, motivated by profit (Aldhous, 2009).
In response to AI-generated content, competition organisers, journal
publishers – and educational institutions – tend to have guidelines and
principles around the acceptable use of AI. There are also laws, such
as the European Union’s AI Act (see 24), around transparency and
disclosure in the use of AI.

There is, perhaps unsurprisingly, no clear agreement on exactly what
human creativity is, although there are psychological tests available that
try to measure creative original thinking. However, generative AI can
beat humans at these tests. For example, in one controlled experiment,
AI beat 99 percent of humans in the widely used Torrance Tests of
Creative Thinking (Shimek, 2023). In another experiment, AI beat
91 percent of humans in the Alternative Uses Test for Creativity (Haase
and Hanel, 2023). This may reflect the flawed nature of these tests
more than it reflects a lack of human creativity, though. Another study
found that AI can be beneficial for creativity, by helping humans come
up with better creative ideas than they have on their own – although
interestingly, very creative people seem to need less AI support – for
now at least (Doshi and Hauser, 2023).

Creative ways of using AI with learners

For teachers and learners, the creative aspects of generative AI can be harnessed in the language classroom in several interesting ways. Below
are a few ideas for using image-generation and text-generation tools.

• Teachers can generate images for learners to use as discussion prompts. Teachers can also add AI-generated images to worksheets.
• Learners can generate their own images on a theme. They can then compare the images with each other, discussing how their wording of the prompt affected what appears in the image. Or learners can try to guess the key words used in their partner’s prompt, from what they see in the image.
• Learners can brainstorm ideas on a topic, and then use a text generation tool to help them come up with more ideas.
• Learners can learn about the history of art by generating images in the style of a particular painter, or in the style of an art movement such as impressionism or cubism. These images can then form the basis of research projects, in which pairs of learners find out more about an artist or movement. Learners can later present their findings to the class with both original and AI-generated images (and ask whether their classmates can tell the difference!).
• Learners can also learn about literary styles and genres by generating short texts on a topic in the style of a particular writer or in the style of different forms of poetry (e.g., a sonnet, a haiku). Learners can then compare and discuss these texts.
• Learners can create a short, AI-generated poem on a topic. They can then change some of the language in the poem (for example, by replacing nouns, adjectives and verbs with other words), to make a new and personal version of the poem. This can be particularly effective at low levels, where learners often don’t have the linguistic resources to create poetry from scratch.
• Learners can change the style of a single text. For example, they can generate a formal email from a text message or vice versa. Again, analysing and discussing the differences between the language used in these texts can be a helpful language learning activity for learners.

Exploring bias in images

Here is a communicative language learning activity that uses image-generator tools. It gives learners the opportunity to discuss AI bias (and possibly their own biases) around jobs and gender.
• Use several different generative AI image tools to generate eight to ten images of a scientist using the prompt: Make an image of a scientist in a white coat standing in a laboratory, holding a test tube.
• In class, show your learners the images and ask them to discuss the differences and similarities. Ask them to focus on gender – how many of the images are of men? (When I tried this activity in late 2023, all of the images generated were of men.)
• Tell your learners about a well-known experiment that has been carried out multiple times over decades with school children (Miller et al., 2018). The children are given a piece of blank paper and asked to draw a picture of a ‘scientist’. Most children draw a man in a white coat. Fewer children draw a female scientist. Ask your learners why they think this happens. Point out that this experiment shows that we can internalise gender bias early in life. This drawing activity can help uncover unconscious bias in children.
• Discuss bias in generative AI with your learners (see 18 for more on this), and point out how the images you generated of a scientist reflect gender bias. If, by the time you try this activity, the images you generate show equal numbers of male and female scientists, you can discuss whether generative AI is making progress in addressing bias.
• Put learners in pairs and ask them to create a list of jobs, including jobs that are typically associated with women (e.g., nurse, secretary), with men (e.g., engineer, electrician), and with both genders (e.g., teacher, journalist).
• Ask learners to use different AI image tools to generate multiple images of some of these jobs, and to discuss to what extent gender bias is reflected in the images produced.
• Hold a class discussion about the importance of recognising bias in AI. To extend this activity, you could put learners in pairs or small groups and ask them to research and share other examples of bias in AI. Several examples are provided in 18, and learners will find plenty more examples online.

Apart from recognising bias, teachers and learners should be clear and transparent about using AI to generate images, text and ideas. AI-generated content should always be clearly labelled with the tool used, for example, ‘Image generated by Stable Diffusion’ or ‘Additional ideas provided by ChatGPT’.

Aldhous, P. (2009). CRAP paper accepted by journal. New Scientist, 11 June 2009. Available at: https://www.newscientist.com/article/dn17288-crap-paper-accepted-by-journal. Accessed 24 December 2023.

Doshi, A. R. and Hauser, O. (2023). Generative Artificial Intelligence Enhances Creativity but Reduces the Diversity of Novel Content. Available at: https://ssrn.com/abstract=4535536 or https://dx.doi.org/10.2139/ssrn.4535536. Accessed 24 December 2023.

Haase, J. and Hanel, P. H. P. (2023). Artificial muses: Generative Artificial Intelligence Chatbots Have Risen to Human-Level Creativity. Available at: https://arxiv.org/abs/2303.12003. Accessed 24 December 2023.

Miller, D. I., Nolla, K. M., Eagly, A. H. and Uttal, D. H. (2018). The Development of
Children’s Gender-Science Stereotypes: A Meta-analysis of 5 Decades of U.S. Draw-A-
Scientist Studies. Child Development, 89(6), 1943–1955.

Shimek, C. (2023). AI Outperforms Humans in Creativity Test. Neuroscience News, 6 July 2023. Available at: https://neurosciencenews.com/ai-creativity-23585/. Accessed 24 December 2023.

5 Technology and the hype cycle

Learning technologies have always attracted a lot of exaggerated claims and false promises. AI is no exception.
Here we examine some of the reasons for this.

AI hype and hyperbole

When released in November 2022, ChatGPT received widespread public attention. Newspaper headlines predicted the death of education.
School districts and entire countries banned – and then unbanned –
its use. Some technology experts warned of generative AI posing an
existential threat while others called it ‘robotic, incoherent, unreliable,
and untrustworthy’ (Marcus, 2023). ChatGPT was not the first
generative AI application to be reported on in mainstream media.
A year earlier, Google’s LaMDA (Language Model for Dialogue
Applications) made headlines when one of its engineers claimed that
it was an entity with thoughts and feelings of its own – in short, that it
was a conscious being.

Digital learning technologies are often prone to hype and hyperbole, and they can give rise to polarised debates. There are many examples
of this hype in the field of language learning, some of which you may
have seen yourself. When Interactive Whiteboards (IWBs) first appeared
in classrooms in the early 2000s, there was much talk of how they
would improve learning outcomes. There was no evidence that this was
true then, and there is no evidence that it’s true now (see Hockly, 2013
for an overview of this). The new kid on the block, generative AI, has
experienced its fair share of hype and hyperbole, from speculation that it
is going to revolutionise learning, to fears that it will kill creativity (see 4),
replace teachers (see 17) and even exterminate humankind.

Three ways to understand AI hype

Why are learning technologies so prone to exaggerated claims and false promises? A key reason is economic. Educational technology (EdTech) is
very big business, generating billions of dollars a year globally. Creating
hype around an EdTech product by promising that it will improve
learning – or better yet, ‘revolutionise’ learning through ‘innovative’
approaches – can help get the product into schools, generating profits
for the EdTech company, even when there is no evidence to back up the
company’s claims. This is essentially hype for profit.

A second reason is the underlying belief that a certain technology (for example, a device, or a piece of software) is essential to language
learning. This is often referred to as technology solutionism, and
it ignores the fact that learning is a complex and personal process.
Generative AI (see 2) is a prime candidate for technology solutionism.
Indeed, the release of ChatGPT was followed by a flood of chatbot apps that claimed to be ‘based on AI’ and to support language learning better than ever before; however, whether an app was underpinned by
knowledge-based AI or by generative AI was rarely made clear in any
publicity. Simply putting the word ‘AI’ into an advert allowed the app
to tap into the prevailing hype, despite the fact that AI has been used
in language learning software for decades (see 2) and no miraculous
language learning has taken place.

A third reason is an essentially ‘mechanistic’ view of language and of learning. If one believes that language is made up of small
components that can be learned by progressively assembling these
components over time, then certain learning technologies and
approaches (such as adaptive learning – see 8) fit well with this belief.
However, although learning the essential elements of a language
(grammar, vocabulary, pronunciation, etc.) is necessary, there is a
lot more needed, as we saw in 3. AI’s much-praised reliance on data
suggests that all aspects of learning can be quantified, and outcomes
predicted. This is not the case. A 2020 study carried out at Princeton
University in the USA (Salganik, 2020) challenged teams of AI
researchers and data scientists to predict outcomes for children (such as
long-term future exam results and perseverance with their schoolwork)
based on data. Thirteen thousand data points on over 4,000 families
over a period of 15 years were provided, but none of the teams were
able to develop effective statistical models that could explain the actual
outcomes. Some things – such as decision-making based on personal

19
https://doi.org/10.1017/9781009804509.001 Published online by Cambridge University Press
experience, reflection, empathy, emotion and imagination, as well as
the host of social and environmental factors that affect these nuanced
processes – cannot be quantified.

Resisting AI hype with learners

Overall, then, like many new educational technologies, generative AI has been through a cycle of hype and hyperbole. It is clear that
generative AI is a key development in the field of computing, and it
is likely to be deployed increasingly widely in the field of education.
However, as one researcher puts it, ‘we need to be mindful that
education remains vulnerable to what can be termed AI theatre’
(Selwyn, 2022, p. 621). Resisting the hype can be challenging, though.
One way to address this issue is to look at the metaphors we use to
talk about AI (and educational technologies, or EdTech, in general). As
one researcher found, we tend to see these tools in five different ways
(Mason, 2018). These are:

1 manual labour – we see EdTech as a tool
2 construction – we see EdTech as supporting scaffolding and knowledge construction
3 mechanism – we see EdTech as a machine
4 biological life – we see EdTech as an ecosystem or natural evolution
5 journey – we see EdTech as a progressive journey leading us towards improved learning.

It’s interesting to reflect on how AI seems to attract many (if not all) of
these metaphors. Metaphors influence how we see and use technology.
They shape what we expect from it, and what we perceive as good and
bad about it. We often use metaphors without realising how much they
affect our ideas about something, and recognising these metaphors helps
us think more critically about them.

Here’s a short classroom activity you could carry out with higher
proficiency learners to explore this:

• Write ‘AI is like …’ on the board, and ask your learners to each write at least five different endings to this sentence.
• Put the learners into small groups to compare their sentences. Can they group the sentences in any way (for example, do some refer to machines, or to biology/nature, or to a journey, etc.)?
• Get feedback from the group, and ask them what their similes (a form of metaphor) suggest about their underlying beliefs and feelings around AI.
• Share the five metaphor types listed above. Did their similes reflect any of these?
• Finally, discuss to what extent your learners think current AI reflects each of the five metaphors, and what this means for how we might understand and use AI.

An activity like this can help develop our own and our learners’ critical
digital literacies, which are essential to identify and resist hype. We
examine the area of digital literacies in more detail in 25.

Hockly, N. (2013). Interactive Whiteboards. English Language Teaching Journal, 67, 3, 354–358. Oxford: Oxford University Press.

Marcus, G. (2023). The Rise and Fall of ChatGPT? Blog post, 23 August 2023. Available at: https://garymarcus.substack.com/p/the-rise-and-fall-of-chatgpt. Accessed 25 December 2023.

Mason, L. (2018). A critical metaphor analysis of educational technology research in the social studies. Contemporary Issues in Technology and Teacher Education, 18, 3, 538–555.

Salganik, M. J. (2020). Measuring the predictability of life outcomes with a scientific mass collaboration. PNAS, 117, 15, 8398–8403. Available at: https://www.pnas.org/doi/epdf/10.1073/pnas.1915006117. Accessed 22 January 2024.

Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European
Journal of Education, 57(4): 620–631.

B: AI in language teaching and learning

In this section we explore whether – and if so, how – AI can be used to support language learning.
We also look at some of the more controversial
uses of AI in learning, such as using emotion AI to
measure learner engagement.

6 Learning a language with AI
7 Teaching with generative AI
8 Personalising content for learners
9 Practising English with chatbots
10 Learning with augmented reality
11 Learning with virtual reality
12 Understanding real-time learner engagement through AI
13 Understanding emotion in texts with AI
14 Developing writing skills with AI
15 Assessing learners with AI

6 Learning a language with AI

Generative AI can be used to enhance already-existing language learning apps, but it has also enabled the development of new types of mobile and web-based tools
for language learning.

A few months after ChatGPT was launched, I gave an online conference talk to language teachers about AI. In the question-and-answer session
at the end of the talk, one of the attendees asked whether I thought
that AI might be a passing fad. My answer? Definitely not. Here’s why.
Although AI in the form of tools like ChatGPT may have felt like a
new development when it first appeared, AI has underpinned language
learning software since the early days of Computer Assisted Language
Learning (see 1). More recently, learners began using language learning
apps on their mobile devices. Many of these apps offer automated
language learning activities underpinned by rule-based AI (see 2). For
example, some apps specialise in a specific area of language learning,
such as grammar or vocabulary, which lend themselves well to a
rule-based approach. The advent of data-driven AI (see 2) meant that
language learning tools and apps became more sophisticated – machine
translation and speech-to-text apps are just two examples of how access
to large sets of training data has meant that these apps have got better
(i.e., more accurate) over time.

Generative AI as a tool to support learning

In short, AI is not a fad in language teaching – or, indeed, in any other
field. The advent of generative AI has had, and will continue to have,
an impact on language learning software and approaches in interesting
ways. Let’s look at some examples of how generative AI can be used to
support and enhance what we do in the language classroom.

Names of specific tools are not included in these examples because, as
we all know, tools come and go. However, the kinds of tools described
below are likely to be around in language learning for some time to
come. A search online (or in your app store) will enable you to find the
latest tools with the functionality described in these examples. At the
time of writing, popular generative AI tools included ChatGPT, Bing
and Claude.

Checking understanding
Generative AI can quickly and easily process and extract
meaning from multimedia texts. This means that it can generate
comprehension questions for print-based texts (such as articles or
blog posts), for audio (such as podcasts), and for videos. Teachers
can use generative AI tools to draft questions to check their learners’
understanding of a written text, an audio text, or a video. And of
course learners can check their own understanding of a text by
getting AI to generate comprehension questions for them. This is also
useful for learners who may need to revise or study specific content
for exams and tests.
Exam preparation
Learners can generate example proficiency tests for self-study, with
test items and sample answers. For example, learners can type a
prompt like ‘Help me prepare for the Cambridge First Certificate
exam by giving me sample questions and answers’ into a generative
AI tool. The tool will generate examples for the various sections
of the exam, with test items that are similar in format, length and
language level to the real exam. Exam providers and independent
app developers have developed specialised generative AI tools that
enable learners to make their own practice items for formal language
exams (see 15).
Sharing project findings
Many teachers use project work with learners of all ages and at
all levels. In project work, learners will typically work in pairs or
small groups to identify and research a topic, and to present their
findings and opinions to their classmates. Sharing these findings
might take the form of a slideshow presentation to the class, or
the development of a blog post or web page for others to read and
comment on. Tools that make slides, blogs and websites have existed
for decades, and although the development of, say, a blog or website
has got progressively easier over the years, requiring fewer and fewer
technology skills, it has always been a relatively time-consuming
endeavour. Even creating a slideshow can take hours. There are tools
powered by generative AI that enable the creation of a slideshow or
a website in seconds. These tools will not only design a slideshow
or website, they can also populate the slides or website with content
on a topic that the user specifies. This content tends to be somewhat
generic, but it can be replaced by learners with their own texts. If
the focus of your class is not on technology or design skills, making
this step of project work considerably less time-consuming means
that learners have more time to focus on the important bits – the
language they use to communicate their work.
Idioms
Generative AI can be used to quickly provide examples of idioms on
a specific topic, for example, parts of the body. Give pairs of learners
a key word (e.g., heart, arm, leg, eye) and ask them to use the
following prompt with a generative AI tool: ‘Give me five example
sentences with an idiomatic expression that includes the word [x].’
Ask the pairs to ensure they understand the meaning of each idiom
generated – they can ask the AI tool to explain any of the idioms
they don’t understand. Regroup the learners and ask them to teach
each other one or two of the idioms they liked best. Ask the learners
to try and remember one idiom for each body part. A week or so
later, ask them what idioms they can remember.
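For teachers or learners who are comfortable with a little scripting, prompts like those above can also be sent to a generative AI model programmatically, which makes it easy to reuse them with new texts. The sketch below is a minimal example using the OpenAI Python client; the choice of provider, the model name and the B1 target level are illustrative assumptions rather than recommendations, and any comparable chat-based API would work in much the same way.

```python
# A minimal sketch: generating comprehension questions for a text with a
# generative AI API. Assumes the OpenAI Python client (pip install openai)
# and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

article = """(paste or load the text your learners will read here)"""

prompt = (
    "Write five comprehension questions, with answers, for the text below. "
    "Aim the questions at B1-level learners of English.\n\n" + article
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; check current models
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The idioms prompt above could be substituted into the same script, with the key word passed in as a variable.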

The examples above show how generative AI tools have the potential to
give learners more agency and independence in their learning. Learners
no longer need to rely on the teacher or published materials to provide
examples of language, text with comprehension questions or even
practice exams. Effective language teachers are arguably those who
understand that learners can be given a more active role in generating
language learning content themselves, and effective teachers know how
to take advantage of this to support learning. We examine the shift
towards a more facilitative role for teachers in 17.

7 Teaching with generative AI

Generative AI tools can help teachers save time preparing
lessons as well as with a range of administrative tasks.

In 6, we looked at how AI enables learners to generate language
learning content themselves, for both in- and out-of-class use. Teachers
can, of course, do the same. Arguably, one of the key advantages of
generative AI for teachers is its time-saving potential. Generative
AI-powered tools enable teachers to quickly and easily make materials such
as quizzes, discussion prompts, texts (with comprehension questions if
needed), worksheets, slideshows, lesson plans, units of work, syllabi and
curricula, tests and exams, assessment rubrics and more. The quality of
these materials may vary, and teachers will often need to edit or adapt
AI-generated materials to suit their own learners and context.

Nevertheless, teachers constantly tell me that they find it much quicker
and easier to make teaching and learning materials with generative
AI tools. A teacher I met at a conference about six months after the
launch of ChatGPT told me that using it had more than halved the time
she spent on lesson preparation. She added that it was ‘the best thing
I’ve ever seen’. Of course, we need to temper this kind of uncritical
enthusiasm with a clear awareness of the challenges and drawbacks
that using generative AI in education brings, and we examine these in
detail in Section C. In the meantime, tools that allow teachers to not
only make a lesson plan, but also to directly integrate AI-generated
content on the same topic from other apps (such as flashcards, quizzes
or videos with comprehension questions) have understandably received
an enthusiastic response.

Saving time on teaching tasks

Clearly these kinds of tools can cut down the amount of time teachers
spend on preparing materials. Generative AI tools can save time on
other tasks for teachers, too. Let’s look at some examples.

Differentiation
Having learners with differing levels of ability in the same class
(often referred to as mixed ability classes) is a fact of life. To
address this, teachers are often told that they need to differentiate,
that is, provide activities at different levels of linguistic and
cognitive challenge for learners. Although this sounds like a
sensible approach, it can be time-consuming to create, say, two
or three additional activities for a reading text or for a speaking
activity for every class. Generative AI tools, however, can generate
shorter or longer versions of a single text written by a teacher at
different levels, as well as comprehension questions at different
levels of complexity. Similarly, a range of more or less complex
discussion prompts for a speaking task can be easily drafted. And
differentiated rubrics for writing tasks or exams can be tailored to
learners’ individual skill levels. As with all content made with
AI, teacher oversight is crucial. The teacher needs to review the
AI-generated content to ensure that they are happy with it; most
teachers will want to edit the content a bit, to ensure that it fits
with what they know of their learners’ linguistic and cognitive
abilities.
Lesson plans
Language learning lessons and units in many current coursebooks
follow a standard communicative approach, providing learners
with input and with opportunities to interact. A typical coursebook
language lesson will start by introducing the topic, possibly by
asking learners to share views or experiences related to that topic,
and will then provide input in the form of a reading or listening
text. After ensuring that learners understand the text (often through
comprehension questions), key vocabulary and structures from
the text are examined. A coursebook will ask learners to interact
during the activities, for example, by comparing answers, and at
some point in the unit there will be a longer speaking activity on the
unit topic, and a writing activity (often for homework). Generative
AI tools that make language lessons will usually generate a
lesson plan along these lines. Although teachers will often adapt
these lesson plans for their own classes, much as they do with
coursebook lessons, generative AI provides teachers with a tool
to generate whole lessons or units of work on topics not included
in the coursebook, and that may be of interest to learners in a
specific context. For example, while researching for this book, I
made perfectly acceptable one-hour English language lessons for
intermediate learners on topics like ‘Sweden’s contribution to pop
music’, and ‘The effect of e-waste on Ghana’. Using additional
prompts in the AI tool enabled me to generate reading texts and
comprehension questions at various levels (see Differentiation
above) for each topic, as well as a range of possible homework
activities for learners to choose from.
Language learning approaches
Teachers can use generative AI to make lessons plans that are
underpinned by different or alternative approaches to language
teaching. For example, Scott Thornbury suggested asking a
generative AI tool to make a lesson that follows a dogme approach
(private communication). This will result in a lesson that adheres
to dogme principles. These principles include using the learners as
a resource by prioritising conversation and focusing on emergent
language, and hence, using few or no materials in the lesson. Asking
a tool to make a lesson that follows a suggestopedia approach
results in a lesson with an emphasis on soothing background
music, guided visualisation and poetry. A lexical approach lesson
will focus primarily on vocabulary; a lesson based on the grammar
translation approach will include, unsurprisingly, plenty of
grammar and translation (although it is likely to reflect elements of
communicative language teaching too, by including some interactive
activities).
Home-school connection
Decades of research have shown that a strong connection between
school and the learner’s home improves learning outcomes
for young learners. Simply put, the involvement of parents or
caregivers, and the support that learners receive at home for the
learning happening at school, is a key to success. This connection
relies on good communication between teachers and parents.
However, communicating with parents is time-consuming for
teachers with already heavy workloads. Generative AI tools that
can draft communications for parents like welcome letters, regular
news updates on class activities, individual learner progress
reports and permissions forms (all of which can then be edited and
personalised) can save teachers significant amounts of time as well
as strengthen the home-school connection.

8 Personalising content for learners

Teachers can easily make language learning materials
for learners with generative AI. But AI is also capable of
delivering content to learners without the teacher through
adaptive learning.

I’ve studied French on and off since I was at school. But my French
never seems to improve much. So when mobile language learning apps
(like Duolingo and Busuu) first appeared in the early 2010s, I was quick
to sign up. I hadn’t studied French for decades, but I wasn’t exactly
a beginner. I’d had six full years of French classes at school, but my
French was terribly rusty. I could read a bit of French, but I could barely
speak a word. How to know what my language level was? Luckily for
me, there was a test I could take in the app, which would figure out my
level of French. The app would then provide me with language learning
activities at the level diagnosed by the test. Then, depending on how
well (or badly) I did in these activities, the app would serve up more
activities, targeting my areas of weakness. By addressing my particular
learning challenges, the app would, they said, provide me with a
personalised journey to speaking ‘great’ French. I managed to keep
motivated for a couple of weeks, but soon began to tire of the diet of
automated gap-fill, translation and drag and drop activities. I never did
get to the point of speaking great French. This was my first experience
of the early days of adaptive learning.

Adaptive learning

Adaptive learning software provides learners with their own
individualised and customised pathways through materials; for this
reason, it is often referred to as personalised learning. How does
adaptive learning work in practice? Let’s take a simple example, this
time of someone learning English. Imagine that an English language
learning app shows a beginner-level learner an activity focused on the
past simple. The learner needs to type the correct past simple verbs
into a text. The program detects that the learner has no problem with
regular past simple verbs but makes mistakes with irregular past
simple verbs. Based on this information, the program then offers the
learner explanations and activities that focus on irregular past simple
verbs, to help the learner address that gap in their knowledge. This
approach is known as adaptive learning because the software adapts
the material presented to the learner depending on their performance
in previous activities in the program. By working with content that is
tailored to their individual linguistic strengths and weaknesses, and by
providing a clear sense of progress through the content, it is suggested
that a learner’s motivation is increased and that learning outcomes are
improved. Research has shown improvements due to adaptive learning
in subjects like mathematics, but the outcomes for language learning are
less clear (see Kerr, 2022, for more on this).
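The adaptive principle described above can be illustrated with a toy sketch: track the learner's error rate for each language point, then serve the next activity from the weakest area. Everything here (the categories, the items and the simple selection rule) is invented for illustration; real adaptive learning systems use far richer learner models.

```python
# A toy illustration of adaptive item selection. The learner's answers are
# recorded per category, and the next activity comes from the category
# with the highest error rate.
from collections import defaultdict

class AdaptiveDrill:
    def __init__(self, item_bank):
        self.item_bank = item_bank          # {category: [activities]}
        self.attempts = defaultdict(int)
        self.errors = defaultdict(int)

    def record(self, category, correct):
        self.attempts[category] += 1
        if not correct:
            self.errors[category] += 1

    def error_rate(self, category):
        if self.attempts[category] == 0:
            return 0.0
        return self.errors[category] / self.attempts[category]

    def next_activity(self):
        # Serve an item from the category the learner struggles with most.
        weakest = max(self.item_bank, key=self.error_rate)
        return self.item_bank[weakest].pop(0)

bank = {
    "regular past simple": ["Yesterday she ___ (walk) to work."],
    "irregular past simple": ["He ___ (go) home early.",
                              "They ___ (buy) a new car."],
}
drill = AdaptiveDrill(bank)
drill.record("regular past simple", correct=True)
drill.record("irregular past simple", correct=False)
print(drill.next_activity())  # serves an irregular past simple item
```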

Language learning apps based on adaptive learning have traditionally
been most suited to the delivery of discrete items of language like
grammar and vocabulary, which can be easily quantified and measured.
Adaptive learning software can be traced back to the simplest forms of
rule-based AI, but the rise of data-driven, and especially generative AI
means that adaptive learning has become increasingly sophisticated over
time. Generative AI can, for example, provide the learner with unique
examples of learning content based on their (and other comparable
learners’) performance data in real time. In other words, the content
shown to a learner does not need to rely on the program choosing
the best pre-loaded activity; instead, adaptive learning software based
on generative AI can generate content and activities for a learner on
the fly, depending on the learner’s performance on previous activities.
This means that adaptive learning principles can be applied to a
learner’s speaking and writing skills. This has potential in the realm of
assessment, where adaptive learning tests are becoming increasingly
sophisticated with the application of generative AI (see 15).

Intelligent tutoring systems

You may have come across the term intelligent tutoring systems (ITSs),
which have similarities and differences with adaptive learning. ITSs are
the interface between an adaptive learning system (that is, the learning
content) and the learner. ITSs often take the form of an automated
tutor that provides tips and hints to the learner in real time, while the
learner works through materials and activities. If we take our example
from earlier, where an English language learner is working through an
activity that requires them to produce verbs in the past simple, an ITS
will identify exactly which past simple verb forms the learner is struggling
with, provide hints or guidance while the learner is completing the task,
and then provide new activities targeting just those few irregular verbs
that the learner found most challenging. In this sense, ITSs are more
specialised than standard adaptive learning programs, as they can take
into account the individual learner’s trajectory and learning data.

However, there are issues with adaptive learning and ITSs, no matter
how sophisticated. For a start, adaptive learning suffers from the
same mechanistic view of language learning that we identified in 5. It
assumes that a language is a series of individual pieces of knowledge
to be progressively mastered, whereas, in reality, language learning is
complex, dynamic and usage-based. If, like me, you’ve ever learned
a foreign language and then not spoken or heard it for years, you’ll
know just how easy it is to lose the language. And in line with much
educational technology hype, there can be an over-reliance on adaptive
tools as a ‘solution’ for language learning when these tools are imported
wholesale into schools and universities. The sidelining of the teacher’s
role is a concern here too, and one we explore further in 17. Finally,
there are issues around the quality of the content generated by adaptive
learning software, and the potential cost of this software for learners
and for schools. In short, adaptive learning and ITSs can support
self-directed language learning, and they may be motivating for some
learners. But ITSs in adaptive learning, no matter how many hints
and tips they provide, are focused on helping the learner complete
automated individualised tasks; as such, they are not designed to
provide the sort of interpersonal dynamics that interacting with a
teacher or other learners brings. Because of this, adaptive learning
software (with or without ITS) is most commonly recommended as a
supplement to other, more socially-oriented, language learning activities,
or for learners who don’t have opportunities for person-to-person
interaction. For me, my personal efforts to improve my not very good
French mean that I use a language learning app now and again, but
most importantly for me, I meet up online with a French friend once a
week to practise my speaking.

Kerr, P. (2022). Philip Kerr’s 30 Trends in ELT. Cambridge: Cambridge University Press.

9 Practising English with chatbots

Chatbots have been used in English language learning for
decades. Does generative AI make them more effective?

Although chatbots have been used by English language learners for a
while, generative AI has given them a new lease of life – often through
significant investment from venture capital firms. One early language
learning chatbot app powered by generative AI, Speak, had received
over 60 million dollars of funding by August 2023, only nine months
after OpenAI had publicly launched ChatGPT (Speak, 2023). Investors
clearly see massive potential in the new generation of chatbots in the
field of language learning. But are they right? Let’s look at some of the
evidence.

Earlier generations of chatbots, based on rule-based AI and limited
forms of data-driven AI (see 2), have been widely researched in ELT.
Based on text and/or audio interaction with the learner, these chatbot
apps provide practice around specific areas of language, such as asking
for directions, ordering a meal, or asking and answering simple personal
questions (e.g., How old are you? Where do you live?).

Generative AI-powered chatbots

Despite what now feels like very limited conversational abilities,
research has shown that these older chatbots can support language
learning in some ways, especially for learners with lower language
proficiency (see 16 for more on chatbot research). At the time of
writing this book (early 2024), it was too early for research around the
impact of generative AI-powered chatbots on language learning to have
emerged. But it’s worth noting that these newer chatbots can potentially
overcome one major limitation highlighted by earlier research – that
of maintaining a coherent dialogue with a learner, even when there
are unexpected twists and turns in the conversation. This means that
language learning chatbots, previously most suited to lower proficiency
learners, can also be deployed for realistic conversation practice, in
both text and audio format, at much higher levels. I’ve tried out some
of these latest English language audio chatbots myself and find they can
process and react to unexpected elements in a conversation.
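Much of this improved coherence comes from the fact that generative AI chatbots resend the whole conversation history to the model at every turn. The minimal sketch below shows a text-based conversation-practice bot built this way; the OpenAI client, the model name and the system prompt are illustrative assumptions, not a description of how any particular app works.

```python
# A minimal text-based conversation-practice chatbot. The full message
# history is sent with each request, which is what keeps the dialogue
# coherent across turns. Assumes the OpenAI Python client and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
history = [{
    "role": "system",
    "content": "You are a friendly conversation partner for a B1-level "
               "English learner. Keep replies short and ask follow-up "
               "questions.",
}]

while True:
    user_turn = input("You: ")
    if user_turn.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_turn})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```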

A major issue with chatbots that rely on speech recognition to
communicate orally has, however, emerged. Mainstream chatbots, such
as voice assistants on mobile phones, have been found to discriminate
against English learners. This is because the large language models
(see 2) underlying these chatbots are trained on audio data from so-
called standard English language accents (e.g., on data from British,
American, Australian, etc. speakers); the chatbots are therefore less able
to understand speech that does not conform to these models. One early
generative AI-powered language learning chatbot that claims to address
this issue is ELSA, which is trained on a large corpus of non-native
speaker data. This points a way forward for voice-based chatbots in
language learning.

Despite this potential, many language learning apps with generative
AI-powered chatbots are still underpinned by behaviourist views of
language learning and a grammar-based syllabus. This means apps that
prioritise grammar-based activities and repetition, with
liberal use of translation, even at higher levels. The issue of learners
rapidly losing interest in interacting with a chatbot therefore remains,
especially if the learning design of the app does not allow for the
more naturalistic dialogue that comes with generative AI. In order to
maintain learner interest, chatbot apps have for some time tried to
come up with novel ways to keep learners engaged. Using 3-D virtual
or augmented reality chatbot avatars (see 10) is one stratagem. Another
might be physically mobile 3-dimensional robots that use generative AI
to communicate with us at some point in the future. However, the very
high cost of such technologies is likely to make ‘walking and talking
robots’ well beyond the reach of most of us for a very long time!

Using chatbots with learners

What does all of this mean for language teachers and learners? Many
learners already use chatbot apps to support their language learning out
of class. Well-known language learning apps such as Duolingo harness
generative AI to improve the performance of their chatbots, and new
generative AI-powered apps like Speak and ELSA, both mentioned
above, have appeared on the market. Indeed, apps that don’t integrate
generative AI are unlikely to remain competitive. Research shows that
the use of chatbots can support language learning in several ways (see
16). So, if we accept that chatbots can be helpful for language learners,
it makes sense to encourage our learners to use them out of class for
additional language practice. There are two main ways to do this. The
first is to encourage learners to interact with chatbots that have been
designed for language learning; the second is to encourage learners
to interact with chatbots that we use in our daily lives, but in English
rather than in their first language. Here is one way we might do this.

1 In class, ask your learners if they use 1) language learning chatbot
apps to practise English out of class and/or 2) mainstream voice
assistants (like Siri or Alexa) or voice-based search engines powered
by generative AI (like Bing). If so, which ones do they use, and what
advantages and disadvantages have they experienced?
2 List some current, free, language learning chatbot apps on the board,
and invite learners to try one out over the coming week. Ask them
to spend at least ten minutes a day in conversation with the chatbot,
and to note down what new language items they learn. If they use an
AI voice assistant at home, ask them to also try using it in English
for a week.
3 After a week, in class, ask your learners to share their experiences of
using chatbots to practise their English.
4 Encourage learners to continue interacting with chatbots if they find
it useful, and ask for their feedback periodically in class. Sharing
and discussing learners’ out of class learning in class is one way of
encouraging them to continue engaging with chatbots, and therefore
practising English.

Although chatbot apps may not be to the taste of all learners, it is
our job as language teachers to make our learners aware of language
learning opportunities like this, which they can use outside the
classroom. It is, of course, up to the learner to decide whether or not to
take these opportunities.

Speak (2023). OpenAI Startup Fund-Backed Speak Announces $16m Series B-2 Financing
& Rapid International Expansion. Blog post, 31 August 2023. Available at: https://www.speak.com/blog/series-b-2. Accessed 25 December 2023.

10 Learning with augmented reality

Teachers and learners can use free augmented reality
apps as a springboard to engaging speaking and writing
activities in the classroom.

Augmented reality wearables

If you’ve been watching the tech landscape for a while, as I have, you
may remember a product called Google Glass. Google Glass launched
in 2012 and was essentially a very expensive pair of smart glasses. It
had a small display screen that could overlay digital information on
the world around you as you looked through the glasses, and it had a
built-in camera you could use to record whatever was in front of you.
You could be walking down the street and see, for example, your email
or social media notifications appear in front of you, projected onto the
digital screen of your glasses. You could use Google Glass to navigate
streets and you could use the camera to record your journey, all in real
time. This sort of digital overlay is known as augmented reality or AR –
literally, reality is augmented with digital information.

Despite significant hype (see 5), Google Glass never really caught on. It
was discontinued by Google in 2015, although an ‘Enterprise’ edition,
used primarily in vocational training, continued until 2023. Concerns
over privacy (such as recording people without permission), a lack of
social acceptability and a hefty price tag, all contributed to the demise
of Google Glass. It was also unclear why one would want to use a
wearable augmented reality device when smartphone AR apps already
provided much of the same functionality. The tech world has not given
up on AR wearables, though. At the time of writing, developers of smart
contact lenses were researching how tiny batteries can be powered by
human tears (Tangermann, 2023). Even if these go to market in future,
it remains to be seen how keen we will be to wear them.

AR activities for language learners

Although wearables remain contentious, AR has been used effectively
in vocational training in fields such as medicine, psychology and
mechanical engineering. AR incorporates AI technologies such
as computer vision and speech recognition to create what can be
immersive, interactive and engaging experiences for learners. Perhaps
it is no surprise then that AR has found a niche in ELT, where it can be
used in imaginative ways. Here are two example activities using marker-
based AR smartphone apps. Marker-based AR apps display a text, video
or audio (an asset) as a digital overlay, triggered by a specific image; to
work, the image needs to be previously uploaded to the AR app and
linked to the asset.
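Conceptually, this trigger-asset link is a simple lookup: each registered image is mapped to the media it should overlay. The toy sketch below illustrates only the idea; real AR platforms match live camera frames against registered images using computer vision rather than filenames, and the URLs here are placeholders.

```python
# A toy model of a marker-based AR registry: trigger images mapped to the
# overlay assets they should display. Purely illustrative.
ar_registry = {
    "book_cover_1.jpg": "https://example.com/reviews/book1_review.mp4",
    "painting_sunflowers.jpg": "https://example.com/talks/sunflowers.mp3",
}

def scan(trigger_image):
    """Return the asset linked to a recognised trigger image, if any."""
    asset = ar_registry.get(trigger_image)
    if asset is None:
        return "No overlay registered for this image."
    return f"Playing overlay: {asset}"

print(scan("book_cover_1.jpg"))
```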

Book reviews
Learners choose a book, and audio- or video-record themselves
giving a book review. They save the recording online. The books are
put on a table in the classroom. Learners scan a book cover with an
AR app on their smartphones. The cover (previously uploaded to the
AR app as an image) acts as a trigger, and overlays the audio/video
book review. Learners then listen to each review, while completing
a worksheet provided by the teacher. Afterwards, learners compare
notes on which book sounded most interesting, and decide which
book they would like to read most and why. They write up their
choice of book for homework.
Gallery walk
Learners choose and research a famous painting. They record an
audio or video explanation of what they learned, and save the
recording online. The teacher uploads an image of each painting
to the AR app, prints a copy of each painting and puts these on
the classroom walls. Learners walk around the classroom with
their smartphones, scanning the painting (which acts as a trigger),
and listening to the overlaid audio/video. The teacher provides a
worksheet for learners to fill in as they listen to each explanation.
Learners compare their worksheet answers afterwards, in pairs.
Finally, learners choose one painting and write a short summary of
what they learned about it, either in class or for homework.

(From Hockly, N. and Dudeney, G. (2014). Going Mobile. Delta Publishing.)

These two AR activities are similar in structure and approach, and they
are examples of a mixed-skills lesson. This is a staple in communicative
language learning task design, in which several skills (reading, writing,
listening and/or speaking) are combined within a single lesson. In this
case, learners first research something (by reading/listening online)
and then record content describing their findings (speaking). Next,
the recordings are shared via the AR app and learners are given a
task to complete while they listen (listening with a purpose). Finally,
learners compare what they have learned (speaking), and produce a
book review/painting summary (writing). Mixed-skills lessons have
several benefits: they reflect real life communication, in which language
skills are usually combined; they provide learners with opportunities
to practise fluency; they can promote engagement and retention if the
lesson topic is interesting for the learners; and they enable learners to
use language in context.

However, using marker-based AR apps in activities like those described
above requires the use of smartphones and some technology know-
how. For example, learners need to audio- or video-record themselves
and upload their recordings online. Learners (or the teacher) then need
to upload a digital image of the book cover or painting to the AR app
website, and link the recording to the image. Learners need to have
the AR app on their phones and be connected to the internet to access
the recordings during the activity. The technology skills, hardware and
infrastructure required to use AR in class mean that it is unlikely to
ever be a widely used technology. Nevertheless, in contexts where it is
possible to use AR with learners, activities like those described above
can provide learners with a motivating multimodal learning experience.

Learning about history and culture through AR

For teachers and learners interested in working with augmented reality,
but with fewer technical skills, already-existing AR apps, like the BBC
‘Civilisations’ app, can also be used as a springboard for a mixed-
skills lesson. This excellent educational app includes 40 real historical
artefacts from around the world, including an Egyptian mummy, a
Japanese gunpowder flask and an Ogoni mask from Nigeria. The app
enables learners to project 3-dimensional AR images of these objects
onto the spaces around them, and to access information about each
object. To use the app with learners, here are some suggested steps. The
language skill practised at each lesson stage is included in brackets.

1 Ask learners to download and explore the BBC Civilisations AR
smartphone app.
2 Ask pairs of learners to choose one of the 40 historical objects, or
randomly assign an object to each pair.
3 Ask each pair to explore their chosen AR object in 3-D, to read the
accompanying notes, and to note down key facts about the object.
These facts could include where the object comes from, where and
when it was found, what it was used for, and one or two unusual
or interesting things about it. Ask learners to also take a few
smartphone screenshots of the AR object (reading skill).
4 Regroup learners and ask them to share their screenshots and what
they have learned. Alternatively, ask each pair to make a slideshow
presentation about their object, and to present their findings to the
class (speaking skill).
5 As a follow-up or for homework, ask learners to choose a different
AR object from the app, and to write a short paragraph or to audio-
record themselves describing the object, but without mentioning its
name. Classmates can then read or listen to the descriptions, and
try to guess which of the 40 historical objects is described (reading,
writing, speaking and listening skills).

(From Pegrum, M., Hockly, N. and Dudeney, G. (2022). Digital Literacies. Routledge.)

Tangermann, V. (2023). Scientists devise way to power smart contact lens with human
tears. Blog post, 31 August 2023. Available at: https://futurism.com/the-byte/scientists-smart-contact-lens-powered-human-tears. Accessed 25 December 2023.

11 Learning with virtual reality

Good quality educational virtual reality content can
provide learners with an engaging experience through
which to practise English.

Unlike augmented reality, which overlays digital information on the real
world (see 10), virtual reality (VR) takes you into an imaginary digital
world. You can explore a virtual world via a computer or mobile device
screen, but VR is most profoundly experienced in three dimensions by
wearing a VR headset with integrated vision and sound called a head-
mounted display (HMD). An HMD allows you to turn your head and
see the imaginary virtual world all around you. This is immersive VR.
One of my first immersive VR experiences was called Richie’s Plank.
The game takes you to a thin plank jutting out from the top floor of a
skyscraper, overlooking the streets very far below. The wind whistles
and birds circle nearby. You need to step out over the abyss and walk
along the plank. I kept taking off the HMD to remind myself there
was a real floor below me, but every time I put the headset back on,
and found myself back on the plank, I froze. I was unable to take a
step, even though I knew perfectly well that I was not really standing
at the top of a skyscraper. The height – and fear of falling – felt very
real indeed. You can find videos of people experiencing Richie’s Plank
online. There’s a lot of shrieking.

VR in language learning

Immersive VR’s potential to fully engage language learners in an
environment can, according to one research study with Spanish
teenagers, result in
teenagers, result in more examples of spontaneous language use and
more examples of communication between learners. It can even, at
points, result in learners using and understanding higher than expected
levels of English (Dooly, Thrasher and Sadler, 2023). An immersive
virtual environment can certainly feel very realistic, and VR has
been extensively used in vocational training for this reason, in areas
such as engineering, medicine and aviation. With the high-quality
conversational capabilities that have emerged with generative AI,
however, immersive VR could enable language learners to communicate
much more realistically with advanced chatbot avatars in a digital
world. One can imagine a language learner navigating an immersive VR
world and coming across generative-AI powered avatars that initiate –
and sustain – realistic spoken conversations in real time, for example.
This immersive experience of communicating is likely to be more
engaging than that of interacting with chatbot avatars in a language
learning app (see 9).

The use of virtual worlds in language learning, although well-
established, has not become mainstream. Accessibility remains a major
issue. VR HMDs have always been expensive, and they are well beyond
the means of most teachers and learners. This makes immersive VR
that requires the use of an HMD feasible in only a few high-resource
contexts.

Non-immersive virtual worlds, which are accessed via a smartphone
or computer screen, are more accessible, and have provided spaces
for language learning since the early 2000s. Linden Lab’s Second
Life (2003) or Pearson’s Poptropica (launched in 2007 and aimed at
learners aged 6 to 15) enabled users to explore their virtual worlds
via avatars while interacting with other users or chatbots, in order to
practise language. Limited initially to text interactions, speech soon
became possible in virtual worlds, providing additional oral language
practice opportunities for learners. There are several language learning
apps with non-immersive virtual worlds, although these tend to come
and go; current examples include Mondly and ImmerseMe. However,
even using non-immersive virtual worlds can be costly. Access to a VR
world or app usually requires a subscription, a networked computer or
smart mobile device, and a reliable internet connection. These barriers
to access mean that VR has remained a minority area of interest within
English language learning; it simply remains beyond the reach of many
teachers and learners (see 19).

VR activities for language learners

For language teachers interested in exploring the use of VR with their
learners, however, here are three simple uses for free VR content that
can be accessed via a computer or mobile device.

Ask learners to take a virtual online museum tour (e.g., on the
Google Arts & Culture website), and share what they learned
(Ribeiro, 2020).
Learners can view short 360° documentary videos about animals on
the National Geographic YouTube channel, and share what they saw
and learned.
National Geographic has a virtual tour of the unique Son Doong
cave in Vietnam, with real background sounds and accompanying
explanatory text: www.nationalgeographic.com/news-features/son-
doong-cave/2/#s=pano48

With 360° images and videos, the viewer can move the video around
on the screen with a mouse or finger, enabling them to see all angles.
It’s not fully immersive, but high-quality video footage on a large
computer screen is a good (and much cheaper!) alternative to VR via an
HMD. The three examples above can act as a springboard to speaking
activities for learners, with the VR content providing engaging, high-
quality educational content. A simple speaking task might consist of
learners telling a partner what they saw in the images or videos, and
sharing what impressed them most.

Creating immersive and non-immersive VR worlds has always
required substantial funds. At the time of writing this book, however,
AI tools had recently emerged that can make (i.e., program) 3-D
objects for virtual worlds based on prompts. This can substantially
cut development time and may therefore reduce costs for creating
immersive VR, which, in turn, would reduce costs for accessing and
using it.

VR and research

VR may not be to everyone’s taste, but there is research evidence
showing that some VR games can support vocabulary acquisition
(Sundqvist, 2009) and facilitate the development of essential
communicative skills, particularly among adolescent learners (Li, Chiu
and Coady, 2014). For example, Massively Multiplayer Online Role-
Playing Games (MMORPGs) like ‘World of Warcraft’ or ‘Fortnite’,
which take place in virtual worlds, seem to hold promise for incidental
language learning. These games consist of immersive 3-D environments
full of challenging tasks that require players to communicate with each
other through both text and speech. Interestingly, these learning gains
tend to take place away from the language classroom, with players
willingly interacting with each other as part of gameplay in their spare
time. The amount of time players spend immersed in a MMORPG,
where they are exposed to English within authentic communicative
scenarios, also emerges as a key variable (see Thorne, Black and Sykes,
2009 for more on this).

Dooly, M., Thrasher, T. and Sadler, R. (2023). “Whoa! Incredible!:” Language Learning
Experiences in Virtual Reality. RELC Journal, 54(2), 321–339.

Li, Z., Chiu, C-C. and Coady, M. R. (2014). The transformative power of gaming literacy:
What can we learn from adolescent English language learners’ literacy engagement in
World of Warcraft (WoW)? In Gerber, H. R. and Schamroth Abrams, S. (Eds.). Bridging
literacies with videogames (pp. 129–52). Boston: Sense Publishers.

Ribeiro, R. (2020). Virtual reality in remote language teaching. Cambridge ELT Blog post,
27 October 2020. Available at: https://www.cambridge.org/elt/blog/2020/10/27/virtual-reality-in-remote-language-teaching/. Accessed 25 December 2023.

Sundqvist, P. (2009). Extramural English matters: Out-of-school English and its impact
on Swedish ninth graders’ oral proficiency and vocabulary. Karlstad: Karlstad University
Studies.

Thorne, S. L., Black, R. W. and Sykes, J. M. (2009). Second Language Use, Socialization,
and Learning in Internet Interest Communities and Online Gaming. The Modern
Language Journal 93, 802–21.

12 Understanding real-time learner
engagement through AI

Emotion AI uses facial features and tone of voice to
measure a speaker’s engagement. But can it be trusted?

You may recall this news story from 2009. A Japanese railway company
introduced ‘smile scan’ software on its computers. Employees were
required to smile at the computer’s camera every morning, and the
machine analysed their smiles by looking at laughter lines and lip
curvature, among other facial features. The quality of the smile was then
rated. If the smile was judged too gloomy, the computer could provide
advice on how to look more cheerful. The computer could also print
out a personalised picture with an ideal smile for that employee. It’s
unclear whether this smile campaign continued as company policy after
extensive international media coverage (and a fair amount of ridicule).
Either way, the story raises some interesting issues.

The first issue is the use of facial recognition algorithms to analyse
emotion, known as affective computing. Algorithms can be biased
(see 18), and it is also difficult to equate a single facial expression
consistently with an emotional state. Not everyone smiles when they
are happy: essentially, the same emotion can be displayed through
very different facial expressions from one person to the next. Facial
expressions for emotions also vary from culture to culture. Smiling too
much can be interpreted as being untrustworthy in some cultures, for
example. Affective computing systems, also known as emotion AI, tend
to be based on western cultural concepts of how emotions are displayed
in facial expressions, reflecting what one commentator refers to as a
‘monoculture of facial expressions’ (Jenka, 2023). There is also the
question of ethics in using AI to potentially sanction or reward people
based on emotional reactions that are transmitted via unreliable facial
data.

Emotion AI in education

The example of a railway company using emotion AI may seem
completely unconnected to what happens in education. It’s not.
Measuring learner engagement through an analysis of their facial
expressions, particularly in online learning, has been an area of
research for several years now. The worldwide shift to online learning
during the COVID-19 pandemic gave impetus to this research. In one
recent study, researchers analysed facial emotions and eye and
head movements to create a ‘concentration index’ with three levels
of engagement, from ‘very engaged’ to ‘not engaged at all’. Results
showed that these levels correlated with learners’ self-reporting about
their levels of engagement at various points in an online class (Sharma
et al., 2022). Perhaps unsurprisingly, the results also found that
high-performing learners showed higher levels of engagement. The
researchers in this study suggested that the automated monitoring of
learner engagement in real time could provide the teacher with useful
feedback on his/her own performance in an online class. They also
suggested that future research should include additional biometric
data (such as a learner’s heart rate and oxygen levels) to paint a more
complete picture of learner engagement. The underlying argument here
is that the more facial and biometric data points that are available for
individual learners, the more robust the analysis. However, this is a
moot point. The study described here was carried out with 15 learners,
so generalising from these data is difficult. There have also been
academic studies that analyse how users type (i.e., their keystrokes) and
interact with a mouse and/or touchscreen, in order to infer emotions; it
has been suggested that this focus on KMT (keystroke, mouse and touchscreen) dynamics, as it’s known, is
less intrusive than other emotion AI techniques (Yang and Qin, 2021).
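To make the idea of a ‘concentration index’ concrete, here is a purely illustrative sketch that combines three normalised signals into one of three engagement labels. The input signals, the weights, the thresholds and the middle label are all invented for the example; the study itself used trained machine learning models, not a fixed formula like this one.

```python
# Purely illustrative: combining normalised engagement signals (each in
# the range 0 to 1) into a three-level 'concentration index'. The weights
# and thresholds are invented for the example.
def concentration_index(emotion_score, gaze_on_screen, head_stability):
    score = (0.4 * emotion_score
             + 0.4 * gaze_on_screen
             + 0.2 * head_stability)
    if score > 0.66:
        return "very engaged"
    if score > 0.33:
        return "partially engaged"
    return "not engaged at all"

print(concentration_index(0.8, 0.9, 0.7))  # -> 'very engaged'
```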

Emotion AI is not only to be found in small-scale academic studies.
In 2022, Zoom announced that it was developing emotion AI based
on facial expressions and speech patterns for its video conferencing
platform; this software was then to be deployed in Class, a virtual
classroom built on Zoom. Rights groups demanded that Zoom drop
this initiative (Asher-Shapiro, 2022), and at the date of writing, there
has been no further news about this. It’s unclear whether Zoom’s
emotion AI plans were definitively dropped, or are simply on hold.

Issues with emotion AI

What might the use of affective computing mean for us as teachers?
Using emotion AI to analyse online learner engagement is not common,
it’s not reliable, and it’s still in the early stages of development; in
addition, major concerns around learners’ data and privacy remain. The
issues around privacy, data and ethics led the European Union’s 2023
AI Act to prohibit the use of emotion-detection software in schools,
for example (see 24 for more on regulating AI). However, affective
computing is not going to disappear. According to one estimate, the
emotion-detection software industry will be worth $37.1 billion by
2026 (Research and Markets, 2021). It will be challenging to resist
such economic pressure. It’s therefore important that as teachers, we
understand – and keep a critical eye on – developments in the emotion
AI space.

Although there is push-back on the use of emotion AI in education,
there is less regulation around its use in business and other areas. This
means that your learners may come across it in their professional lives,
either now or in the future. For example, emotion AI is used in call
centres to understand a customer’s emotional state during a phone
call, and to provide real-time feedback, tips and suggestions to agents.
It can be used to recruit staff when job interviews are video recorded
and analysed for the job candidates’ stress levels. Emotion AI applied
to voice analysis is used in the medical profession to help doctors
diagnose diseases like depression and dementia, or in counselling to
track patients’ mental states. It is used in the gaming industry to test
new computer games with users or to adapt games to a player’s current
mood. Emotion AI can even be used to measure the overall happiness of
a population through the analysis of video footage from public spaces
(Index Holding, 2018).

Discussing emotion AI with learners

One ethical way to bring emotion AI into the classroom is to raise
awareness and discuss its implications with learners. To do so, you
could follow these steps:

1 Ask your learners what they know about emotion AI. If necessary,
explain what it is.

2 Ask your learners to explore how emotion AI can be used in
business, healthcare, education and government. You could either
give them a text to read on the topic (which you could make with
a generative AI tool!), or you could ask your learners to research
online and to find examples of how it is used in a range of fields.
3 Put your learners in small groups to discuss what they have
learned about emotion AI, and what they think the advantages and
disadvantages are.
4 Get feedback on the class’s overall opinion of emotion AI. You could
ask your learners to vote on whether they think the use of emotion
AI is ethical, in each of the use cases from step 2.

Asher-Shapiro, A. (2022). Zoom urged by rights groups to rule out ‘creepy’ AI emotion
tech. Reuters, 11 May 2022. Available at: https://www.reuters.com/article/idUSL5N2X21UW/. Accessed 23 August 2023.

Index Holding (2018). Empath in UAE to Measure Happiness. Press release, 7 May 2018.
Available at: https://indexholding.ae/empath-in-uae-to-measure-happiness/. Accessed 25 December 2023.

Jenka. (2023). AI and the American Smile: How AI misrepresents culture through a facial
expression. Blog post. Available at: https://medium.com/@socialcreature/ai-and-the-american-smile-76d23a0fbfaf. Accessed 25 December 2023.

Research and Markets (2021). The Worldwide Emotion Detection and Recognition
Industry is Expected to Reach $37.1 Billion by 2026. News article, 13 May 2021.
Available at: https://www.prnewswire.com/news-releases/the-worldwide-emotion-detection-and-recognition-industry-is-expected-to-reach-37-1-billion-by-2026--301290799.html. Accessed 25 December 2023.

Sharma, P., Joshi, S., Gautam, S., Maharjan, S., Khanal, S. R., Cabral Reis, M., Barroso, J.
and de Jesus Filipe, V. M. (2022). Student Engagement Detection Using Emotion Analysis,
Eye Tracking and Head Movement with Machine Learning. In: Reis, A., Barroso, J.,
Martins, P., Jimoyiannis, A., Huang, R.YM. and Henriques, R. (Eds.). Technology and
Innovation in Learning, Teaching and Education. TECH-EDU 2022. Communications in
Computer and Information Science, vol 1720.

Yang, L. and Qin, S-F. (2021). A Review of Emotion Recognition Methods From
Keystroke, Mouse, and Touchscreen Dynamics. IEEE Access, 9, 162197–162213.
Available at: https://ieeexplore.ieee.org/document/9632591. Accessed 22 January 2024.

13 Understanding emotion in texts
with AI

Sentiment analysis can be used to measure the emotions
behind a written text.

In 12, we looked at how AI can identify the emotions behind our
facial expressions and our spoken words in real time, and then use this
to calculate an engagement score. AI can also analyse the emotions
behind the words we use in our writing. This kind of analysis is known
as sentiment analysis or, less commonly, opinion mining. Sentiment
analysis uses algorithms to analyse stretches of discourse in context,
in order to decide whether the overall feeling expressed in the text is
positive, negative or neutral. Human readers do this fairly easily, at
least in their first language(s). Think, for example, of a product or a
restaurant review. When we read a review, we infer the emotional intent
of the text and decide whether the review is positive or negative overall.
Sentiment analysis automates this process with AI.

Sentiment analysis in online platforms

Teachers are most likely to come across sentiment analysis tools in
a learning management system (LMS) or online platform, which
may provide sentiment analysis of learner contributions to forum
discussions, for example. The aim is to improve teaching by analysing
and understanding learners’ attitudes towards courses, content,
platform and/or teachers, as well as to identify disengaged or unhappy
learners (Sharma, Tyagi and Vaidya, 2021). Sentiment analysis tools
are based on algorithms that aim to identify learners’ emotions, and
they can provide scores for individual learners across several categories.
Sentiment scores may be numerical, for example, 1 to 5, where 1
represents negative feelings, and 5 represents positive feelings. Or
scoring may provide descriptors for learners using adjectives, such as
‘satisfied’, ‘frustrated’ or ‘disappointed’. There are clearly limitations
to this purely quantitative approach, which you have probably spotted
yourself. A simple score or adjective doesn’t really tell you anything
about the context of the discussions or about the individual learner.
Simply put, algorithms do not understand what they are ‘reading’; they
analyse stretches of discourse and assign scores to individual learners
based on these analyses. In particular, we need to keep in mind that
sentiment analysis programs do not understand irony or sarcasm, nor
do they understand cultural nuance; this is where human interpretation
and expertise are needed. They are also unaware of context. A learner
may be dealing with an issue in their personal life, for example, that
impacts on their mood. This might be reflected in the emotional tone
of their postings to forums over a period of time; however, this has
nothing to do with the course content or their overall engagement with
the course materials. In this case, one can easily see how a sentiment
analysis program that gives this learner low scores for engagement
may be unfairly penalising them. And this is where sentiment analysis
tools fall short. Understanding the emotional intent of texts is therefore
often most effective when AI-based sentiment analysis is combined with
human analysis.

Using text analysis with learners

Sentiment analysis can be carried out to varying degrees of complexity
and accuracy, as we have seen. LMS-based sentiment analysis programs
are independent of teachers and learners in the sense that they are
externally developed and imposed, and they work independently
of what teachers and learners might want or need. But what about
teachers who would like to help their learners infer the emotions
behind their own and others’ texts, but don’t have access to complex
sentiment analysis tools? One simple approach is to take a short text
of about 100 to 150 words written by a learner, and to paste it into
a word cloud generator. Word cloud tools are not sentiment analysis
tools. Rather, they identify the key meaning-carrying words in the text
(nouns, adjectives, verbs and adverbs), and present them in the form
of a cloud. The most frequently used words in the text appear in larger
font in the cloud. Conjunctions, prepositions and auxiliary verbs don’t
usually appear as they are less important in understanding meaning. A
word cloud tool is useful for helping learners identify words they tend
to overuse in texts (‘nice’ is a prime candidate in descriptive texts). It
can also be used by learners to identify the key sentiments in authentic
texts such as newspaper articles. Although a word cloud tool is not
a sentiment analysis tool, it can help learners to subjectively infer the
underlying sentiment in a text, by helping them notice the frequency of
specific words, and to consider any emotions that might underlie the
use of these words. In this case, a kind of rough and ready text analysis
with learners can be used as a springboard for class discussions around
lexical choices and intended meanings, facilitating critical discourse
analysis. Word cloud tools have been freely available since the early
2010s, and have been widely used by English language teachers and
learners. Their longevity is testament to the fact that they are easy to
use and useful for simple text analysis. A search online for “word cloud
ideas for the classroom” will quickly provide a wealth of ideas of how
to use them with learners.
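For teachers comfortable with a little scripting, the same idea can be reproduced offline. The sketch below is a minimal example using the open-source Python wordcloud library; the file names are assumptions, and web-based generators do the equivalent behind a paste box.

# A minimal sketch of generating a word cloud from a learner text.
from wordcloud import WordCloud, STOPWORDS

learner_text = open('learner_text.txt', encoding='utf-8').read()

cloud = WordCloud(
    stopwords=STOPWORDS,      # drops common function words such as conjunctions
    background_color='white',
    max_words=50,             # keep only the most frequent meaning-carrying words
).generate(learner_text)

cloud.to_file('learner_cloud.png')  # word size in the image reflects frequency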

Issues with sentiment analysis

Inferring the emotions behind already-written texts through sentiment
analysis sounds a lot less controversial than applying emotion AI
to real-time interaction (see 12). Using a simple text analysis tool
like a word cloud to help learners subjectively infer the emotions
behind a text can be helpful and beneficial for language learning. In
the business world, companies use more complex sentiment analysis
tools and algorithms to track and analyse what people are saying
about their products or services online, for example, in online fora,
recommendation sites or via social media. The analysis enables them to
address issues with products, and also to develop or adjust marketing
campaigns. So far, so good. Unfortunately though, this kind of analysis
of user data has been used in unethical ways, too. For example,
unauthorised access to user data on social media sites such as Facebook
was used to create psychological profiles and then micro-target voters
during the 2016 U.S. presidential elections (Confessore, 2018). Such
misuse has raised serious concerns about data privacy and the influence
of social media analytics on democratic processes and led to calls for the
strengthening of data protection legislation (Hu, 2020). It has also made
clear that we are living in an era of ‘mass digital persuasion’ (Matz et al.,
2017: 12714) where the personal data, opinions and emotions that we
willingly share online can be manipulated in ways that can have very
far-reaching consequences.

Confessore, N. (2018). Cambridge Analytica and Facebook: The Scandal and the Fallout So Far. The New York Times. Available at https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html. Accessed 22 January 2024.

Hu, M. (2020). Cambridge Analytica’s black box. Big Data & Society, 7, 2. Available at https://journals.sagepub.com/doi/epub/10.1177/2053951720938091. Accessed 22 January 2024.

Matz, S. C., Kosinski, M., Nave, G. and Stillwell, D. J. (2017). Psychological targeting as an effective approach to digital mass persuasion. Proceedings of the National Academy of Sciences (PNAS), 114, 48. Available at https://www.pnas.org/doi/10.1073/pnas.1710966114. Accessed 22 January 2024.

Sharma, S., Tyagi, V. and Vaidya, A. (2021). Sentiment Analysis in Online Learning Environment: A Systematic Review. In: Singh, M., Tyagi, V., Gupta, P. K., Flusser, J., Ören, T. and Sonawane, V. R. (Eds.) Advances in Computing and Data Sciences. ICACDS 2021. Communications in Computer and Information Science, vol 1441. Springer, Cham. Available at https://doi.org/10.1007/978-3-030-88244-0_34. Accessed 22 January 2024.

14 Developing writing skills with AI

AI-based tools that can improve writing have existed for some time. There are several ways that teachers can use more recent generative AI-powered tools to support the development of learners’ writing.

AI writing tools are integrated into some of the technologies that we use
on a daily basis. We often take for granted the text prediction and the
spelling and grammar checkers in our word-processed documents, emails,
mobile phone texts or social media updates. More recently, generative AI
(see 2) has enabled the generation of complete texts from scratch, based
on a prompt. Some teachers are concerned about what text generation
might mean for academic integrity (see 21) and for the development of
writing skills. For example, will learners use generative AI tools to cheat
by getting these tools to write essays for them? Will learners lose the
ability to create their own texts? But most teachers – and institutions –
understand that generative AI is here to stay, and that it is arguably more
productive for teachers and learners to find ways to work with these tools
in principled ways than to ignore or try to ban them.
Using AI to improve learners’ writing

Teachers and learners can use generative AI in several ways to develop
and enhance writing skills. Here are four ways to do so.

1 Generating essay titles or story outlines
Teachers can use a generative AI tool to draft a list of essay titles
around a certain topic by using a simple prompt such as ‘Make a
list of essay topics about [x]’. Learners can then choose the title that
most appeals to them.
It’s often challenging for learners to invent imaginative story plots,
and they may spend a lot of time simply trying to come up with an
idea rather than writing. Learners can use a generative AI tool to
suggest story outlines or plots. The more detail the learner puts into
the prompt, the more detailed the plot will be.

2 Brainstorming ideas
Generative AI tools can help learners brainstorm ideas to include in
their own texts. Here is one way to use this in class. Imagine learners
need to write an essay about the pros and cons of, say, using social
media. First, ask learners to brainstorm the pros and cons in pairs.
Add the ideas to the board. Sharing a screen with the class, put a
prompt like ‘List the pros and cons of using social media’ into a text-
based generative AI app. The app will usually generate a list of pros
and cons, and a short explanation of each. Ask learners to compare
the two sets of ideas (theirs and the app’s ideas). Close the AI text
and add any new ideas to the board. Learners can now write their
own texts, incorporating all of the ideas.
3 Providing feedback
Generative AI tools can correct, refine and/or provide feedback on
texts written by learners. Research has shown that good writers
create multiple drafts of texts before reaching a final version.
Research also suggests that receiving scaffolded feedback on drafts
can help learners improve their writing skills (see 16). However,
for language learners with limited linguistic resources, finding ways
to reflect on and reformulate their own writing can be a challenge.
Asking a generative AI app to provide feedback on a first draft, and
to include language corrections and suggestions for improvements,
can help learners with this important stage. In class, teachers can
ask learners to write a first draft on a specific topic, get AI-generated
feedback on that draft, and then write an improved second draft.
Learners can also compare and discuss the feedback they received,
before attempting their second drafts. Well-known writing assistant
tool Grammarly, which has been available since 2009, integrated
generative AI in 2023, and is worth exploring with learners.
4 Marking learners’ work
Teachers can use generative AI tools to provide feedback on and to
grade learners’ work using pre-defined criteria. Of course, learners
can use these same criteria to get feedback on their own written
work as part of the feedback stage described above. This can be
especially useful for learners who need to practise producing written
texts for a standardised exam, and for which marking criteria are
usually publicly available.

The four ways to use generative AI described above show that these
tools can be used at the pre-writing, during writing and post-writing
stages. Given what we know about good writers, timely input from
AI in the ‘during’ writing stage, where the AI acts as a personal tutor
(see 8), can be very beneficial. As an example of this in practice, an AI
writing assistant called ‘Charlie’ was developed by Purdue University
to help students improve their essays before submission. The tool was
trained on large teacher-graded corpora of essays; it provided immediate
feedback on drafts of learners’ essays, and predicted their results/grades
against specific marking criteria.
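For readers curious about what sits behind tools like these, here is a hypothetical sketch of requesting scaffolded feedback against marking criteria via the OpenAI Python library. The model name, file names and prompt wording are placeholder assumptions, not details of ‘Charlie’ or any other product.

# A hypothetical sketch: scaffolded feedback on a draft against a rubric.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

rubric = open('marking_criteria.txt', encoding='utf-8').read()  # placeholder file
draft = open('first_draft.txt', encoding='utf-8').read()        # placeholder file

response = client.chat.completions.create(
    model='gpt-4o-mini',  # placeholder model name
    messages=[
        {'role': 'system',
         'content': ('You are a supportive English writing tutor. Give '
                     'feedback on the draft against the rubric: point out '
                     'strengths, correct language errors and suggest '
                     'improvements, but do not rewrite the essay.')},
        {'role': 'user', 'content': f'Rubric:\n{rubric}\n\nDraft:\n{draft}'},
    ],
)
print(response.choices[0].message.content)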

Using AI to generate texts for analysis

Generative AI tools can also be used by learners to make and analyse
texts at different levels of formality. Assuming your learners have access
to devices, here’s one way to do this in class.

1 Provide your learners with an example email on a topic, written in
a formal tone. To give your learners some useful language practice,
dictate the email and ask them to type it into their mobile devices
(this ensures that the learners have a digital copy of the email for the
next step).
2 Ask your learners to paste the email into a generative AI tool, and
to prompt the tool to rewrite the email in three ways – for example,
in a less formal tone, in a very informal tone, and in an enthusiastic
tone – as in the sketch after this list. (For less proficient learners,
provide a short simple formal email in step 1 above, and ask them to
generate just one informal version for the purposes of comparison.)
3 In pairs or small groups, ask your learners to compare the lexical
choices in each version of the email, and to note down key points of
difference. Ask them to each choose two or three expressions from
the emails that they would like to remember.
4 With the whole class, discuss the differences they noticed, and what
expressions they chose to learn. Discuss how they can use generative
AI to help them write emails (or similar texts) when not in the
classroom.
5 In a subsequent class, ask learners how many of their chosen
expressions they can remember!
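As flagged in step 2, here is a hypothetical scripted version of the same request, for teachers who want to prepare the three versions in advance; the model and file names are placeholder assumptions. In class, learners would simply paste the email and a similar instruction into any chat-based generative AI tool.

# A hypothetical sketch: one formal email rewritten in three tones.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment
formal_email = open('formal_email.txt', encoding='utf-8').read()  # placeholder file

for tone in ['slightly less formal', 'very informal', 'enthusiastic']:
    response = client.chat.completions.create(
        model='gpt-4o-mini',  # placeholder model name
        messages=[{'role': 'user',
                   'content': (f'Rewrite this email in a {tone} tone, '
                               f'keeping the same meaning:\n\n{formal_email}')}],
    )
    print(f'--- {tone} ---')
    print(response.choices[0].message.content)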

An activity like this can be helpful for learners who may need to
produce emails in English for work. They are very likely to use
generative AI tools to draft emails, rather than writing emails
themselves from scratch. This is understandable for two reasons:
correctness (the learner wants the email to be as accurate as possible,
especially in a professional setting) and time (using a generative AI tool
to draft an email can save a lot of time and effort). Rather than insisting
that our learners only write emails from scratch in class, it may be more
useful to help them adapt and personalise emails that can be quickly
generated by AI. Helping our learners understand levels of formality
and tone in writing is an important part of this.

15 Assessing learners with AI

AI and computer-based or automated testing seem to go particularly well together. Let’s look at how and why.

When talking about assessment, we first need to distinguish between
formative and summative assessment. The former provides feedback to
teachers and learners so that they can improve their work. The latter
often provides a grade or score on a final piece of work and aims to
evaluate what a learner has learned. Tests and exams can be used in
both formative and summative assessment.

Using AI to assess learners

Let’s look at how AI has been used in formative and summative
assessment (we look at how learners might use AI to cheat by getting
it to complete their assessed work in 21). Some of the earliest forms of
AI in language teaching focused on providing quiz-like activities with
simple standardised feedback. Gap fill and multiple-choice activities are
relatively straightforward to program, as they require one correct
answer. Feedback can be programmed to appear depending on the
answer chosen – for example, Correct, or Incorrect, try again. Quiz-like
activities are still ubiquitous in digital language learning materials, and
can be found in mobile apps, digital coursebooks, learning management
systems (LMSs) and on websites. They are frequently used in both
formative and summative assessment.
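A toy sketch in Python shows why such activities are easy to automate: each item stores one correct answer, and the feedback messages are simply canned strings (the example item is invented).

# A minimal sketch of a multiple-choice item with canned feedback.
QUESTION = 'She ___ to school every day.'
OPTIONS = {'a': 'go', 'b': 'goes', 'c': 'going'}
CORRECT = 'b'

def check_answer(choice):
    # The feedback depends only on whether the stored answer was chosen.
    return 'Correct!' if choice == CORRECT else 'Incorrect, try again.'

print(QUESTION)
for key, option in OPTIONS.items():
    print(f'  {key}) {option}')
print(check_answer('a'))  # Incorrect, try again.
print(check_answer('b'))  # Correct!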

More sophisticated forms of computer-based testing include assessing
learners’ written work, and increasingly, their speaking skills. The
assessment of written work is known as automated essay scoring
(AES) or automated writing evaluation (AWE). I’ve written extensively
about AWE elsewhere (e.g., see Hockly, 2022); suffice it to say that
recent advances in generative AI mean that AWE has become more
sophisticated than ever (see 2 and 14). The use of AWE was already
fairly widespread in high-stakes standardised exams, and given these
advances, it is likely to become increasingly embedded in these
systems. Speech recognition has also improved significantly with the
advent of generative AI, and it will continue to improve; although its
use in summative assessment has been less widespread than that of
AWE, we are likely to see more of it now. It should be remembered
that although generative AI seems to understand spoken and written
language, it does not (see Section A). Issues with processing some
accents and varieties of English in speech recognition tools remain
– although including large datasets of spoken learner language in
generative AI models will inevitably improve this (see 9).

Another relatively widespread use of AI is in adaptive testing,
particularly in placement tests. Adaptive testing typically presents the
learners with a quiz-like question and a limited set of answers to choose
from. Depending on whether the learner chooses the correct or incorrect
answer, the test then presents the learner with the next question, which
can be of lesser, similar or greater linguistic complexity (see 8 for more
on adaptive learning). The advances in automated assessment of spoken
language discussed above mean that adaptive placement tests can also
evaluate learners’ written and spoken language, leading to a more
complete picture of learner proficiency.
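The underlying logic can be sketched very simply. The toy Python example below moves a learner up or down a five-level scale after each answer; real adaptive tests use far more sophisticated statistical models (such as item response theory), so this illustrates the principle only.

# A minimal sketch of adaptive item selection on a five-level scale.
def next_level(current_level, answered_correctly):
    # Move up one level after a correct answer, down one after an
    # incorrect answer, staying within the 1-5 scale.
    step = 1 if answered_correctly else -1
    return min(5, max(1, current_level + step))

level = 3  # start mid-scale
for correct in [True, True, False]:  # one imagined learner's answers
    level = next_level(level, correct)
print('Estimated level:', level)  # 4 in this toy run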

Using AI to make assessments

Generative AI tools are also useful for simpler forms of testing such as
making quizzes – these are often used as quick and informal progress
tests for learners, or as a way to help learners memorise language
(such as vocabulary). Teachers can present a generative AI tool with
a text, and ask it to produce any type of quiz based on that text, for
example, a multiple-choice quiz, true/false questions, a gap-fill, and
so on. Generative AI is being increasingly integrated into quiz-making
apps; this means that teachers no longer need to come up with their
own quiz questions and laboriously type them into an app – the quiz
app does this automatically based on a topic or content chosen by
the teacher. There is also, of course, potential for learners to use these
tools to make their own revision quizzes. An example of a generative
AI-based quiz tool at the time of writing is Quizalize. A more complex
version of automated question generation can be found in apps that
provide comprehension questions for videos, based on the video content
(a current example is Edpuzzle). It should be borne in mind that apps
come and go, but this type of tool is very likely to remain, given our
predilection for quizzes in language teaching and learning!
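For the technically curious, here is a hypothetical sketch of quiz generation done directly through the OpenAI Python library, asking for JSON output so that the questions could be imported into other tools. The model name and file name are placeholder assumptions; quiz apps like those mentioned above wrap this kind of request in a friendlier interface.

# A hypothetical sketch: generating a multiple-choice quiz from a text.
import json
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment
source_text = open('reading_text.txt', encoding='utf-8').read()  # placeholder file

response = client.chat.completions.create(
    model='gpt-4o-mini',  # placeholder model name
    response_format={'type': 'json_object'},  # ask for machine-readable output
    messages=[{'role': 'user',
               'content': ('Create five multiple-choice comprehension '
                           'questions about the text below. Return JSON '
                           'with a "questions" list; each question needs '
                           '"question", "options" and "answer" fields.\n\n'
                           + source_text)}],
)
quiz = json.loads(response.choices[0].message.content)
for q in quiz['questions']:
    print(q['question'], q['options'], q['answer'])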

Some teachers may need to help their learners prepare for formal
language exams, and generative AI tools can help with this, too. As we
saw in 6, these tools can make practice exams for learners, generating
examples of whole tests (at the correct language level) for specific
formal language exams, given a clear prompt. Generative AI tools can
also be asked to make individual test items for specific exams. This
is known as automated item generation (AIG), and although it has
been used in test development since the early 2010s, publicly available
generative AI tools have now put AIG into the hands of teachers and
learners. As with any text produced by generative AI, test items need
careful checking and editing before being given to learners. A study
in Korea, for example, examined AIG for the reading section of the
state-wide CSAT exam by three different generative AI-powered tools
(ChatGPT, Perplexity AI and GenQue); the latter was specifically
designed to generate test items for the CSAT. The reading test items
generated by all three tools were found to need some revision by
teachers, and the authors of the study also concluded that AIG prompt
training would be helpful for teachers (Shin and Lee, 2023). One could
argue that using generative AI for AIG for formal exams is of most
relevance to large test providers, particularly when they use generative
AI tools that have been customised for this purpose; the language
learning app Duolingo, for example, uses generative AI-powered AIG
for the Duolingo English Test. Using generative AI tools that have
been fine-tuned to produce items for specific tests or exams is likely
to generate better quality results (at the time of writing these were not
publicly available, but this may change in the future). However, training
teachers to develop effective prompts (known as prompt engineering) is
an important area, and one we explore further in 17 and 29.

All of this automated assessment can generate large amounts of data
about individual learners, and about learners in general. Generative AI
excels at identifying patterns and providing insights from data; it can
therefore be used to identify language areas that learners are finding
challenging and writing/speaking areas they could improve. This
information can then feed forward into targeted teaching strategies to
address issues. Caveats around protecting learner data apply here, of
course (see 20).

A picture is emerging of AI-based testing as a massive time-saver for
teachers, in test preparation, administration and marking. However,
AI-based testing is most effective when used alongside teachers’
expertise and judgement. The empathy and subjective knowledge that
teachers bring to learner assessment should not be forgotten in our
rush to implement AI-based assessment. As with many things related to
learning technologies, it’s a matter of finding the balance between time-
saving automation and human insight.

Hockly, N. (2022). 50 Essentials for Using Learning Technologies. Cambridge: Cambridge University Press.

Shin, D. and Lee, J. H. (2023). Can ChatGPT make reading comprehension testing items
on par with human experts? Language Learning & Technology, 27, 3, 27–40.

C: The big questions

In this section, we consider some of the key ethical/moral, philosophical and legal issues that underlie the deployment of AI. Teachers and learners need to develop an understanding of these to use AI responsibly.

16 Can AI support language learning?
17 What does AI mean for teachers?
18 How can we make AI fair?
19 How can we make AI accessible to all?
20 Who owns the data?
21 Does AI help learners cheat?
22 Whose content does AI use?
23 Who creates AI?
24 Can we control AI?
25 How can we become critical users of AI?

16 Can AI support language learning?

The question of whether AI can support language learning is important. It’s also complex.

The research conundrum

AI is present in a range of different learning technologies, as we saw in
Section B. So, when we ask whether AI can support language learning,
we need to specify what AI technology we mean. Once we’ve specified
which AI technology we are talking about, we can explore what
research has to say about it. Almost immediately though, we run into
problems. Research studies are underpinned by different theoretical
frameworks, and they ask different research questions. They utilise
different research methodologies, look at different groups of learners
and teachers, and the research takes place in very different contexts.
This makes it difficult to compare the results of research studies that
look at the same AI technologies. What works well for you in your
context with your learners, may not work well for me in my context
with my learners.

One way that researchers can compare different research studies is to
undertake a systematic review of research that has already been carried
out. By reviewing a range of research evidence, a systematic review aims
to draw conclusions about the effectiveness (or otherwise) of a certain
approach or a certain tool. Researchers usually set specific criteria for
deciding which research studies to include in their review. For example,
they may decide to include only research carried out in the last few
years, and which has been published in peer-reviewed journals. So, to
try and answer the question, ‘Can AI support language learning?’, let’s
look at what research in the form of systematic reviews has to say about
two of the AI tools we explored in Section B.

Chatbot research

If we start with what we know about language learning (see 6), then AI
that supports interaction and gives learners opportunities to produce
spoken language is arguably a promising research area. Chatbots can,
in theory, provide learners with opportunities for real-time spoken
interactions, and they have long been an area of research for precisely
this reason. Meta-studies on the effects of chatbots powered by
generative AI (see 2) on English language learning had not yet emerged
at the time of writing this book; however, meta-studies of earlier
chatbots are useful in terms of identifying some of the limitations that
newer generations of chatbots need to overcome to be effective.

Unsurprisingly, chatbot meta-studies serve to highlight the large
range of factors that come into play when we ask whether a specific
technology is effective or not. A meta-study based on 25 empirical
studies (Huang, Hew and Fryer, 2022), for example, found that
chatbots have five different pedagogical uses in language teaching.
These include: 1) as conversation partners; 2) in simulated real-life
scenarios; 3) as tutors delivering instructional content or explanations;
4) as assistants providing help or guidance when learners encounter
difficulties or have questions; and 5) as recommendation agents
suggesting further learning resources or activities for learners based
on their progress or interests. Clearly, if we are going to ask whether
chatbots support language learning, we first need to decide what sort of
chatbots we’re talking about!

This meta-study also identified three challenges in using chatbots for
language learning. The first was chatbots not being able to understand
complex or ambiguous language inputs, and therefore providing
inaccurate responses. The second was that, although learners find
chatbots engaging and fun at first, the novelty tends to wear off. The
third challenge was that interacting with chatbots at lower levels can
require a substantial amount of mental effort (often referred to as
cognitive load), and that the level of cognitive load depends not just on
learners’ language proficiency, but also on the complexity of the task,
and learners’ familiarity with the chatbot interface.

From reviewing this meta-study research, we might conclude that to
effectively support language learning, conversational partner chatbots
need to: 1) appear to understand what the learner is saying; 2) keep
the learner engaged and motivated; and 3) minimise cognitive load.
Generative AI chatbots powered by large language models should be
better able to deal with the first and third elements, at least in theory.
Once generative AI chatbots are implemented more widely, and more
individual research studies have been carried out, a future meta-study
could indicate to what extent these challenges have been overcome – or
not overcome.

AWE research

According to second language acquisition research (see 6), providing
timely, scaffolded feedback on learners’ written work can potentially
improve their writing skills. To what extent automated writing
evaluation (AWE) tools can provide this feedback has been an area of
extensive research. One interesting meta-study (Fan and Ma, 2022)
reviewed 22 studies published between 2005 and 2020 that had
investigated how AWE feedback affects the quality of learners’ English
writing.

The findings suggest that AWE feedback can have a positive impact on
writing quality and accuracy, under certain conditions, but not always.
For example, AWE can help learners improve their writing when used
regularly over time, and when compared with learners receiving no
feedback on their writing. This is clearly good news for learners who
want to work in self-study mode, because they can use AWE to improve
their writing without the need for teacher feedback. When we look at
classrooms though, the meta-study found that sometimes AWE is better
than teacher feedback, but sometimes it is not. One can conclude here
that whether AWE is more or less effective than teacher feedback is
likely to depend on the teacher and the kind of feedback they provide.
Currently, the feedback provided by a good teacher and that provided
by an AWE tool tends to be close in quality, even if the latter is not
perfect. For teachers, providing detailed, individualised feedback on
learners’ writing can be a very time-consuming business. For teachers
with very large classes, it is often simply not possible. In these cases
then, AWE tools can provide much needed support for teachers. By
providing automated feedback for learners, an AWE tool can free up
a teacher to help individual learners who may be struggling with their
writing, or it can enable a teacher to identify and work on areas in
writing that several learners in the class may need support with.

It’s important to note that the meta-study mentioned above looked at
AWE systems that were available before generative AI tools became
widely available. Whether generative AI-powered AWE (which allows
for more nuanced, level-specific and dialogic feedback) can have an
increased positive impact on learners’ writing skills is an important
question. Some early research in this area suggests that this could be the
case (see Jacob, Tate and Warschauer, 2023), particularly when learners
integrate the use of a generative AI tool at key stages of the writing
process in a principled manner (see 14 for a description of how
this can be done). This is certainly a research area to watch.

As both of these meta-studies show, it’s important to remember that
technology is only one element in a complex environment with many
different factors that come into play. In short, it’s not just about the
technology. It’s also about the teaching, the learning and the wider
context.

Fan, N. and Ma, Y. (2022). The Effects of Automated Writing Evaluation (AWE) Feedback
on Students’ English Writing Quality: A Systematic Literature Review. Language Teaching
Research Quarterly, 28, 53–73.

Huang, W., Hew, K. F. and Fryer, L. K. (2022). Chatbots for language learning – Are they
really useful? A systematic review of chatbot-supported language learning. Journal of
Computer Assisted Learning, 38(1), 237–257.

Jacob, S. R., Tate, T. and Warschauer, M. (2023). Emergent AI-assisted discourse: Case study of a second language writer authoring with ChatGPT. Available online at: https://arxiv.org/ftp/arxiv/papers/2310/2310.10903.pdf. Accessed 28 December 2023.

17 What does AI mean for teachers?

The implications of AI for the teacher’s role are wide-reaching, and there are both positives and negatives.

Teachers replaced by technology

If you’re worried that AI might take your job, you’re not alone. The
first ‘robot’ English language chatbot, called EngKey, appeared in
South Korean primary school classrooms in 2011, leading to fears that
teachers would soon be replaced by these one-metre high, two-wheeled,
egg-shaped plastic objects, topped by a video screen as a ‘face’ (Saenz,
2011). Once the media hype had died down, it became clear that the
robot was, in fact, a vehicle that transported a video conference screen
around the classroom. On the screen were real English teachers from
the Philippines; they were being beamed into the classroom, in real time,
to teach from scripts that focussed on speaking and pronunciation with
young learners. So not really a robot at all. EngKey was developed by
the Korea Institute of Science and Technology, but plans to introduce it
into all South Korean primary schools by 2013 appear to have been
quietly shelved.

The rise of generative AI crystallised fears around technology replacing
us, a fear that has long been present in our societies. Although EngKey
didn’t replace English teachers, technology has indeed made human
jobs obsolete in the past. We all know how weaving machines
replaced handlooms in the early nineteenth century, leading to the
infamous Luddite movement (a group of weavers who opposed this
automation and attacked the new machines in a doomed attempt
to save their livelihoods). What remains of their efforts is the use of
‘luddite’ as a blanket term to refer to anyone who opposes technology.
The mechanisation of weaving was an early example of how the
Industrial Revolution would automate or replace a range of artisan
jobs, from weavers to, famously, buggy whip makers, whose skills
became redundant when horse drawn carriages were replaced by cars.

The digital revolution, too, has automated or replaced both manual
and skilled jobs – gone, for example, are most travel agents, replaced
by online services. More recently, web designers, graphic artists and
computer programmers have seen demands for their services fall as
generative AI does an increasingly good job in these fields for a fraction
of the price (Mutandiro, 2023). The ethical implications of this are, of
course, important to consider (see 26).

How AI can support teachers

AI can positively support the language teacher’s role, as we saw in
Section B. For example, generative AI tools can save us time with
materials preparation and with administrative tasks (see 7). AI can
support learners outside the classroom by providing them with a wide
range of (often free) resources and learning opportunities, for example,
through chatbots, intelligent tutoring systems and language learning
apps (see 8 and 9). AI can help develop learners’ writing skills (see 14
and 16), and it can help them prepare for tests and exams (see 15).
By providing this kind of support for learners, teachers can focus on
more interesting and communicative tasks in the classroom. After all,
it doesn’t make much sense to have learners completing gap-fill or
multiple-choice grammar activities on paper in class, when they can do
this from home in a language learning app with a much more engaging
interface. Better to spend this class time on speaking activities and small
group work.

AI can also support teachers with their own development, both
pedagogical and technological (see 29). In terms of technology-
related skills, at the very least, teachers need to be able to craft good
prompts (or instructions) for chat-based generative AI tools to use
them effectively. Effective ‘prompt engineering’ comes with practice.
One soon learns, for example, that using a detailed prompt in a lesson
generation tool will get the best results. ‘Produce a detailed lesson plan
for elementary (CEFR A2) language learners aged 13–14 on how we
can address climate change through renewable energies. Include the
materials needed, the steps to follow, and some homework options
for learners’ is a better prompt than ‘Produce a lesson plan on climate
change’. There are also prompt banks for teachers available online; a
web search will take you to many examples of these.

How AI can undermine teachers

There are possible downsides to AI’s effect on our role as teachers. One
is deskilling. If a generative AI tool can make great lesson plans for us,
do we really need to learn how to plan lessons ourselves? Do pre-service
teacher training courses need to even teach lesson planning any more?
The answer is yes, they do. Generative AI tools can produce lessons that
are well-staged, follow a communicative (or any other) approach (see 7),
and include clear learning outcomes and suggestions for how these
outcomes can be formally or informally evaluated. But you know your
learners, and unlike the AI tool, you know what might be more or less
interesting for them. In other words, you know what might work or
not with a certain class. Teachers need a robust understanding of lesson
planning and the underlying principles of language learning to be able
to first evaluate, then adapt and improve AI lessons – exactly as they
have always done with coursebook lessons. In short, generative AI gives
teachers the ability to make, and then adapt, AI-generated (as opposed
to coursebook) lessons much faster and more effectively than before
(see 7). For more on AI and lesson planning, see Thornbury (2024).

Another widely voiced concern is that learners will use generative AI
tools to complete assignments. In other words, to cheat. As one teacher
wryly commented, ‘We [teachers] use these AI tools all the time. We
don’t want our students to use these AI tools. Where’s the problem?’
Learners using generative AI to write essays and to complete homework
assignments has opened up an interesting debate around academic
integrity and authentic assessment, which we explore further in 21.

What does this all mean for language teachers?

The increasing use of digital tools in classroom settings over the
past decade has led theorists and educators to use terms like digital
pedagogy. Digital pedagogy focuses on ways that teachers and learners
can work with learning technologies, using them effectively and
critically. Rather than using learning technologies just because they
are available, digital pedagogy advocates a critical stance that looks at
whether these tools actually support learning. Digital pedagogy is about
teachers understanding when and how to use learning technologies
effectively (and which ones to use) as well as understanding when not to
use them. (See Giannikas, 2020, for more on using digital pedagogy, for
example, with young learners.)

Generative AI puts a highly powerful tool into the hands of both
teachers and learners. But it will not replace teachers, and learners
who want and need the human social interaction that is key to
communication – and key to effective teaching (Stronge, 2007;
Anderson and Taner, 2023) – will not disappear. Technology, including
generative AI, needs to be seen as another resource among the wide
range of human and non-human resources available to teachers.
However, we need to learn to use these tools ethically and effectively.
And not just as individuals, but as a society. There are plenty of people
(and educational institutions) interested in human-centred AI. This
is an approach that focuses on developing AI systems that amplify
and augment human abilities rather than on replacing them. Human-
centred AI aims to create AI systems that operate transparently, deliver
equitable outcomes, respect privacy and preserve human control. We
explore this further in 26, and many of the chapters in the rest of this
section explore elements that feed directly into ethical uses of AI.

Anderson, J. and Taner, G. (2023). Building the expert teacher prototype: A metasummary
of teacher expertise studies in primary and secondary education. Educational Research
Review, 38, 100485, 1–18.

Giannikas, C. N. (2020). Digital Pedagogy for Young Learners. Part of the Cambridge Papers in ELT series. [pdf] Cambridge: Cambridge University Press. https://www.cambridge.org/gb/files/6316/0612/8264/CambridgePapersInELT_DigitalPedagogyYLs_2020_ONLINE.PDF. Accessed 28 December 2023.

Mutandiro, K. (2023). Free AI tools are killing South Africa’s web designer job market. Rest of World blog. 31 August 2023. https://restofworld.org/2023/ai-tools-web-developer-jobs-south-africa/. Accessed 28 December 2023.

Saenz, A. (2011). South Korea’s Robot Teachers To Test Telepresence Tools in the New Year. Singularity Hub, 3 January 2011. Available at: https://singularityhub.com/2011/01/03/south-koreas-robot-teachers-to-test-telepresence-tools-in-the-new-year/. Accessed 28 December 2023.

Stronge, J. H. (2007). Qualities of Effective Teachers. Association for Supervision and Curriculum Development (ASCD): Danvers, MA.

Thornbury, S. (2024). Scott Thornbury’s 66 Essentials of Lesson Design. Cambridge: Cambridge University Press.

18 How can we make AI fair?

Bias and discrimination can be reflected in algorithms. We need to guard against such unfairness, and as a society, actively work towards addressing the inequalities embedded in our digital tools.

After a year of using an AI program to review large numbers of CVs and
shortlist the best applicants for new programmer jobs, recruiters at
Amazon started to notice that few women were making it onto the lists.
Why? The training data for the program included the CVs of previously
successful candidates, who were overwhelmingly male and white. So, the AI
looked for candidates with similar profiles. This is just one well-publicised
example of unintentional algorithmic bias, and how it can perpetuate
systemic inequalities. AI bias is a matter to take seriously. It can literally
have life-threatening effects. In one study, researchers found that an
algorithm used in US hospitals to predict which patients were most likely
to need extra medical care heavily favoured white patients over black
patients (Dyer, 2019). AI is not neutral. The data that feeds large language
models is created by humans, and so are the algorithms that underpin
many AI tools. These data will inevitably reflect our beliefs about the
world – and our prejudices and biases.

Speech recognition bias

What about bias in the AI tools that we use in language education? In 9,
we saw that chatbots that rely on speech recognition have traditionally had
issues with recognising non-standard English pronunciation, which can
directly discriminate against speakers of first languages other than English.
Speech recognition apps are often highly normative, and can unfairly
penalise accents, for example, by awarding a language learner a low score
on spoken intelligibility in a high-stakes exam. We know, however, that
intelligibility is all about context. It depends on who we speak to, about
what, and where. Some language learning chatbot apps try to overcome
this by training their models on a range of accents, and, in theory, this could
help overcome the issue. Research into this, at the time of writing, was still
thin on the ground, but it is an area that may hold promise.

Bias in speech recognition is not limited just to speakers of other languages.
A study carried out by Stanford University in the US found that five popular
virtual assistants that rely on speech recognition technologies (such as Siri
and Alexa) made twice as many errors when interpreting words spoken
by African Americans as when interpreting the same words spoken by
white Americans (Koenecke et al., 2020). Again, the bias can likely be
explained by the training data including mainly the voices of white Americans. In their
conclusions, the researchers highlighted that machine-learning tools need
monitoring to make sure that training data is inclusive, which is hard to
disagree with. The makers of proprietary AI systems, however, are not keen to allow access
to their algorithms, which makes auditing a challenge – and underscores
the need for legislation that insists on transparency in revealing the sources
of training data (see 24 for more on AI regulation). Speech recognition
software also tends to struggle to understand voices of different ages –
children’s, adults’ and elderly people’s voices are all very different, for
example. It also has issues with different cultural usages of lexis, and with
speakers who may have speech difficulties. Speech recognition software
still has some way to go in terms of inclusivity, but awareness of these
shortcomings is the first step in starting to address them.

Bias in GPT detectors

Bias has also been found in AI tools that try to detect whether a
learner’s written assignment has been generated by AI, rather than
written by the learner themself. These tools are known as GPT detectors
and their effectiveness is widely questioned. As one example, a study
assessed the effectiveness of various commonly used GPT detectors
when analysing written content from both native and non-native
English writers (Liang et al., 2023). The researchers found that these
detectors consistently misidentified non-native English writing as AI-
generated, while correctly classifying native-authored content. Studies
such as these have shown that GPT detectors often disadvantage writers
with limited linguistic proficiency by wrongly labelling them as cheats.

GPT detectors, then, are not very effective. OpenAI, the makers of
ChatGPT, admitted as much, pointing out that their own GPT detector
is not reliable (OpenAI FAQ, 2023).
Whether GPT detectors will ever be fully accurate in identifying AI-
generated text versus human-authored text is unclear. It’s important to
consider what this means for assessing learners’ written work, and we
explore this further in 21.
Bias against other languages

Text generators can produce high quality texts in English, and other
languages like Spanish, German, Arabic and Japanese. They are less
effective with languages like Swahili, Thai or Bengali, each of which
is spoken by millions. Why is this? Generative AI is trained on data
harvested from the internet, but there is less representation of these
latter languages in online text format. So, for these languages, text
generators struggle to produce coherent text. Instead, they generate
texts that are full of grammatical and syntactic errors, and at times,
they simply make up words. In a study carried out by researchers at the
University of Oregon (Lai et al., 2023), version 3.5 of ChatGPT was
asked to perform the same seven writing tasks in 37 different languages.
ChatGPT underperformed in what the researchers call ‘low resource’
languages (so-called because there are fewer online text resources
available in the language), and it performed particularly poorly in
languages that are structurally most different to English. This lack of
representation has the potential to increase digital inequality, a topic
that we explore further in 19. Local developers, however, have stepped
into the gap. For example, faced with the very weak performance of
ChatGPT in Amharic and Tigrinya, two languages spoken in Ethiopia
(Tigrinya is also spoken in Eritrea) that ChatGPT not only mixed
up, but invented words for, a local start-up developed an automated
translation service (Rest of World, 2023).

Making AI fairer

Clearly, then, algorithmic bias in AI is an issue that can have serious
repercussions. Bias in AI must be addressed, and not just in the AI tools
we use in language teaching. Some of the solutions that have been
proposed include the following:

Diversify development teams
Having diverse teams work on the development of apps from the
outset can ensure a range of viewpoints and inputs, which can help
avoid some of the most obvious biases.
Diversify training data
In language learning apps that use speech recognition, training
data should include a very wide range of accents. Ideally, however,
automated speech recognition should be adaptive, and able to
respond appropriately not just to different accents, but to voices
from the other groups described earlier in this chapter. We are
not there yet, though. Training data is needed for less represented
languages so that text generation tools can produce quality outputs
in these languages, too.
Transparency in training data
Allowing access to training data would enable audits to
systematically check for bias, and then rectify that bias by changing
or diversifying the training data and/or the algorithms. This is not
always easy to achieve though, and the CV selection software used
by Amazon described at the beginning of this chapter was eventually
abandoned, even after software engineers had tried to fix the bias.
Ongoing review
Monitoring and identifying algorithmic bias needs to be an ongoing
process. Once bias is identified and addressed, further unintended
consequences (including other biases) may occur, so ongoing
review is an important part of avoiding this. There is not much
point in fixing bias against one group only to find later that the fix
disadvantages another group.

Dyer, O. (2019). US hospital algorithm discriminates against black patients, study finds.
British Medical Journal, 367.

Koenecke, A., Nam, A., Lake, E., Nudell, J., Quartey, M., Mengesha, Z., Toups, C.,
Rickford, J. R., Jurafsky, D. and Goel, S. (2020). Racial disparities in automated speech
recognition. PNAS 117(14), 7684–7689.

Lai, V. D., Ngo, N. T., Veyseh, A. P. B., Man, H., Dernoncourt, F., Bui, T. and Nguyen, T. H. (2023). ChatGPT Beyond English: Towards a Comprehensive Evaluation of Large Language Models in Multilingual Learning. Available online at: https://arxiv.org/pdf/2304.05613.pdf. Accessed 28 December 2023.

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E. and Zou, J. (2023). GPT detectors are biased against non-native English writers. Available online at: https://arxiv.org/pdf/2304.02819.pdf. Accessed 28 December 2023.

OpenAI FAQ page. (2023). Available at: https://help.openai.com/en/articles/8313351-how-can-educators-respond-to-students-presenting-ai-generated-content-as-their-own. Accessed 28 December 2023.

Rest of World. (2023). The AI startup outperforming Google Translate in Ethiopian languages. Blog post, 11 July 2023. Available at: https://restofworld.org/2023/3-minutes-with-asmelash-teka-hadgu/. Accessed 28 December 2023.

19 How can we make AI accessible to all?

Not everyone has access to learning technologies. We have a range of assistive technologies that can empower some groups of learners, but equity, diversity and inclusivity are challenges that both older and newer learning technologies need to address.

The digital divide

According to UNESCO, at the height of the COVID-19 pandemic in
early 2020, 1.6 billion learners were out of school in 195 countries.
Widespread school closures forced many teachers and learners into a
range of digital learning scenarios, often overnight. You, yourself, may
have experienced this if you were teaching at the time. This ‘great move
online’ meant different things for different people, of course. In some
contexts, teachers and learners already had access to, and experience
with, a range of learning technologies. In other contexts, issues such
as a lack of infrastructure, the cost of devices and connectivity, and/
or a lack of experience, training or support for online learning meant
that when schools closed, there was nothing available for learners at
all. The pandemic both highlighted and exacerbated the digital divide.
On the one hand, it enabled the widespread adoption of educational
technologies in schools in some contexts (sometimes with unpleasant
implications – see 20), it provided opportunities for teachers and
learners to develop their digital skills (see Chan, Bista and Allen,
2022), and it put online learning firmly on the map. On the other
hand, for those learners with no access to digital opportunities, it led
to widespread learning loss, with learners from very impoverished
backgrounds having to drop out of school permanently to work, and
girls forced into early marriages (Jeffreys, 2022). Learning – and future
life and work prospects – were not the only things negatively impacted.
UNICEF estimated that over 370 million children globally missed their
daily school meals due to school closures; for some children, school is
the only reliable source of nutrition (Reliefweb, 2022).

We see, therefore, that digital technologies can not only reflect but also
increase the digital divide. The examples in the paragraph above refer
to the digital divide between the Global North and the Global South,
but geographical digital divides can exist between urban and rural areas
within the same country, or between neighbourhoods in the same town.
There may even be a digital divide between schools in the same district
in terms of access to digital technologies, and between classrooms in
the same school, with teachers who have or don’t have digital skills.
The digital divide can also affect young versus old, and men versus
women, with the former in each case typically having more access to
technologies. In short, the digital divide is complex. In fact, it’s better
thought of as a sliding scale, rather than a divide with ‘haves’ and ‘have
nots’ on either side.

Assistive technologies and EDI

EDI stands for equity, diversity and inclusivity. Let’s start with a quick
note on the difference between equity and equality, two terms which are
often confused or conflated. Equity means that people are treated fairly
and justly, without bias or favouritism. Equality means that people have
the same rights and opportunities – for example, women and men have
access to the same jobs, and are paid the same for undertaking the same
work. As we saw in 18, with their baked-in biases, AI algorithms do not
have a good reputation when it comes to equity. The software industry
in general does not have a good reputation for diversity or inclusivity in
its hiring practices either, although an awareness of these blind spots,
and an apparent willingness to address them, is, in theory, a good thing.

One area where technology has contributed significantly to EDI,
though, is in assistive technologies. Assistive technologies support
and improve individuals’ lives – wheelchairs and hearing aids are two
commonplace examples. Digital assistive technologies have developed
alongside other technologies, and they are used extensively in special
needs education. These technologies are often integrated into the
devices we use every day, such as mobile phones and tablets. Assistive
technologies used by teachers and learners include text-to-speech (for
example, for the visually impaired or for dyslexic learners), speech-to-
text (for hearing impairment), and screen controls (such as changing
the swipe movements on a mobile screen to tapping movements for

those with motor skills challenges or limited movement). These assistive
technologies support learners with physical challenges, but assistive
technologies are also used to support neurodiverse learners with a
range of cognitive, behavioural and emotional challenges. For example,
there is educational software available for learners with autism, in
which teachers can create video scenarios to help their learners develop
empathy and social skills.

Social robots have also been used for supporting children with attention
deficit hyperactivity disorder (ADHD), hearing impairments, Down
syndrome and autism. Social robots are designed to interact with
humans – they often look like pets, stuffed animal toys or humanoid
robots. Research has shown that they can help teach social and
educational skills to children. For example, a study was carried out with
twelve autistic children aged between six and twelve at home (i.e., not in
a laboratory) over a month, in which the child and a caregiver interacted
with a social robot for 30 minutes every day (Scassellati et al., 2018).
Activities included storytelling, taking the perspectives of characters
in the story and sequencing events. The robot used adaptive learning
techniques (see 8) to adapt the difficulty of the activities for the child,
encouraged engagement and modelled positive social skills. The study
found that these children increased their attention and communication
skills after a sustained period of time interacting with the social robot.

Generative AI and assistive technologies

Although it is difficult to predict whether generative AI will enable
the development of completely new tools for special education
and to support neurodiverse learners, there is no doubt that it can
improve currently existing systems and tools. Here are some of the
improvements we can expect to see:

Text-to-speech and speech-to-text tools have become increasingly
powerful. If the biases and issues with recognising non-standard
speech and voices are addressed in voice recognition systems
(as suggested in 18), we can expect a wider and more effective
deployment of these tools to support learners with speech difficulties.

The potential of generative AI to generate multimedia from
scratch (see 7) enables special educational needs teachers to create
materials, such as video scenarios, for learners themselves. What
was previously available only to those able to afford these (often
expensive) multimedia products thus becomes available to many
more teachers. Teachers, of course, still need training and support
in learning how to effectively work with learners with special
educational needs.

Social robots will also benefit from the improved speech capabilities
brought by generative AI. For the most part, however, these remain
expensive products that are available for the few.

Computer vision tools, which can understand and describe the real
world, as well as pictures and videos, can support those with visual
impairment. Microsoft’s Seeing AI is one current example of this
kind of AI-powered software. Open AI’s Be My AI app is another.
Both were current at the time of writing.

Overall then, building on decades of development and research
into special needs education, newer forms of AI have the potential
to improve already-existing assistive technologies. Whether these
technologies will become cheaper and more widely available remains to
be seen, however. Many generative AI platforms require subscriptions,
effectively making them too expensive for many teachers and learners
around the world. It is also important that the many caveats and
challenges associated with generative AI, and explored in this book, are
addressed, if equity, diversity and inclusion are to be achieved.

Chan, R. Y., Bista, K. and Allen, R. M. (2022). Online Teaching and Learning in Higher
Education during COVID-19. New York: Routledge.

Jeffreys, B. (2022). Covid closures still affecting 400 million pupils – Unicef. BBC News,
30 March 2022. Available at: https://www.bbc.com/news/education-60846683. Accessed
28 December 2023.

Reliefweb. (2022). COVID 19: Scale of education loss ‘nearly insurmountable’, warns
UNICEF. Available at: https://reliefweb.int/report/world/covid-19-scale-education-loss-
nearly-insurmountable-warns-unicef. Accessed 28 December 2023.

Scassellati, B., Boccanfuso, L., Huang, C-M., Mademtzi, M., Qin, M., Salomons, N.,
Ventola, P. and Shic, F. (2018). Improving social skills in children with ASD using a long-
term, in-home social robot. Science Robotics 3, 21. Available online at:
https://www.science.org/doi/10.1126/scirobotics.aat7544. Accessed 28 December 2023.

20 Who owns the data?

Learning technologies collect ever-increasing amounts of
data about our learners. What does this mean for learners,
and how can we increase transparency around this
important area?

A cautionary tale

The COVID-19 pandemic saw the introduction of learning technologies
into schools on an unprecedented scale (although not everywhere – see
19). One outcome of this was the collection of very large amounts of
learner data, often without learners’ permission. An investigation carried
out by Human Rights Watch, published in 2022, sheds a chilling light on
this. Of the 164 educational technology products endorsed by 49
governments for use in schools during the pandemic, 146 shared
children's personal data with 199 advertising technology companies. It's
worth quoting from the report itself on what was happening:

These products monitored or had the capacity to monitor children, in
most cases secretly and without the consent of children or their parents,
in many cases harvesting data on who they are, where they are, what
they do in the classroom, who their family and friends are, and what
kind of device their families could afford for them to use. Children,
parents, and teachers were largely kept in the dark about these data
surveillance practices. The report finds that governments failed to
protect children’s right to education (p. 2).

This report shows two things very clearly. Firstly, the technologies
that we use in our schools may be collecting and using data in unethical
ways. Secondly, we need strong data protection laws that are enforced,
and we shouldn't simply expect technology companies to do
the right thing. There is some regulation in the data protection space – the
European Union’s 2018 GDPR (General Data Protection Regulation) and
2023 AI Act are two cases in point (see 24 for more on the latter) – but
these protections are not available in all jurisdictions.

Normalising data collection

Collecting data on learners is, of course, nothing new. Learning
management systems (LMSs) have used algorithms that collect data
for decades, looking at things like how often learners log on to the LMS,
what resources they access and for how long, the length of their forum
contributions, grades, and so on. Adaptive learning software can collect
data to the granular level of the number of keystrokes a learner uses to
complete an activity in a specific timeframe. The massive amounts of
data that these tools can collect about learners have led to calls for data
literacy training for teachers. After all, if teachers are unsure of what all
these data mean or how to interpret them, the data are not much use.
There is also some debate around how useful these data actually are.
Do multiple keystrokes over one minute in an online educational
game represent keenness or frustration, for example? Without more
information (about the learner and about the context), it’s hard to say.

Having technology products collect large amounts of our data has
become normalised. It happens across the multiple online spaces we
engage in (such as social media) and it also happens in places we might
not expect. A report from Mozilla (Caltrider, Rykov and MacDonald,
2023), which investigated 25 car companies, found that user data
collected by these companies included income, immigration status and
race, with software installed in the cars also accessing users’ photos,
calendars, and to-do lists. The majority of these 25 companies then went
on to share or sell these user data to third parties. Many people share
their data by accepting a company or website’s terms of service without
reading the small print. It is always advisable to read the terms on the
website, or we risk unknowingly having our personal data collected and
sold, thereby resigning ourselves, as some commentators put it, to
‘surveillance capitalism’ (see, for example, Zuboff, 2019).

Datafication

It is perhaps no surprise then that data collection, with and without the
user’s consent, is widespread in education. The increasing generation,
analysis and sharing of student data, primarily through LMS and
other educational software, has been referred to as the datafication of
education. In fact, as data from multiple sources are integrated into our
learners' profiles (for example, from social media plug-ins, or from GPS-
enabled apps that can track a learner’s access to institutional facilities),
increasing amounts of data can be collected about learners.

And how do learners feel about the creeping datafication that may be
taking place in their schools and universities? Interestingly, this question
is not often asked (research in this area tends to focus instead on
institutional policy as regards learner data). However, one recent study
asked learners in three Australian secondary schools what they thought
about having their data collected by their schools’ platforms (Pangrazio,
Selwyn and Cumbo, 2023). The findings are interesting, although
perhaps unsurprising. Learners felt resentful of – but resigned to –
institutional uses of their data. They disliked the lack of control they
had over their public profiles (the information about themselves that
was automatically displayed in school platforms) and they disliked the
pre-established privacy settings in these platforms. They also felt
unhappy with how their use of institutional technologies was closely
monitored and tracked by the school for accountability, as well as
powerless in their lack of choice over whether to use the platforms or
not. In short, learners feel a sense of ‘digital resignation’ (Pangrazio,
Selwyn and Cumbo, 2023, p. 11) and ‘surveillance realism’ (ibid. p. 12),
seeing the datafication of their learning experiences as inevitable. It’s not
only learners who are affected by datafication, however, as AI-powered
platforms are increasingly used to monitor learners and to link these
data to teacher effectiveness.

Discussing data use with learners

It’s often not possible for us as individual teachers to make decisions


about institutional learning technologies (such as an LMS or specific
software that we may be required to use with our students). We can,
however, talk to our learners about the data that are collected by these
technologies. Here is a short, classroom-based discussion task you could
use with learners:

1 Write a list of popular social media platforms on the board
(Instagram, TikTok, LinkedIn, etc.). Ask your learners which ones
they use, and what for.

2 Ask them what their favourite platforms’ terms of service (ToS)
say about data collection. Learners are often unaware of their data
privacy settings, so put them in pairs, and give them five minutes to
explore the ToS of one or two of these platforms. Ask learners to
share what they found out with the class.
3 Lead a class discussion with the following questions:
– What data do your favourite social media platforms collect
about you?
– Can the platform share your data with third parties? If so, how
do you feel about this?
– Is there anything in the privacy settings that you can or would
like to change to protect your data?
– Is data surveillance inevitable in today’s world? What should
laws do to protect us?

Transparency around data collection and use is important, so school
managers and academic directors should inform themselves of data
policies before bringing new technologies into schools. Asking EdTech
vendors the right questions about how their products deal with learner
data is an essential part of a school manager’s job, and these questions
need to be asked at the procurement stage. Managers should also share
this information with the school community (teachers, learners and/or
parents of young learners).

Caltrider, J., Rykov, M. and MacDonald, Z. (2023). It’s Official: Cars Are the Worst
Product Category We Have Ever Reviewed for Privacy. Mozilla Foundation News, 6
September 2023. Available at: https://foundation.mozilla.org/en/privacynotincluded/articles/its-official-
cars-are-the-worst-product-category-we-have-ever-reviewed-for-privacy/. Accessed 28
December 2023.

Human Rights Watch. (2022). 'How Dare They Peep into My Private Life?' Available at:
https://www.hrw.org/sites/default/files/media_2022/05/HRW_20220526_Students%20Not%20Products%20Report%20Final-IV-v3.pdf. Accessed 28 December 2023.

Pangrazio, L., Selwyn, N. and Cumbo, B. (2023). Tracking technology: exploring student
experiences of school datafication. Cambridge Journal of Education.

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at
the New Frontier of Power. New York: Public Affairs.

21 Does AI help learners cheat?

A concern expressed by many teachers is that learners will
use generative AI to cheat. By this, language teachers often
mean that learners will use it to do homework writing
tasks. What, if anything, can teachers do about this?

Generative AI and academic integrity

Learners using technology to help them with homework is nothing new.
Translation software, for example, has enabled learners to write a text
in their first language, run it through a tool like Google Translate to
turn it into English, and then hand in this work as their own original
writing. It used to be relatively easy for teachers to spot when learners
did this. But as translation tools have got progressively better, it has
got progressively harder for teachers to spot when learners use these
tools. Generative AI tools can now produce texts in English that are
well above the language capabilities of many intermediate-level learners.
Apart from the obvious (to a teacher) discrepancy between what we
know of a learner’s proficiency level and perfectly correct AI-generated
text, there are currently no tools that can objectively identify whether
a text has been generated by AI or not. Indeed, as we saw in 18, AI
detector tools often penalise English language learners’ writing, wrongly
claiming that it is generated by AI.

Of course, it’s not only language teachers who are concerned that
their learners will use generative AI to write essays or do homework.
In a peer-reviewed paper entitled ‘Chatting and Cheating: Ensuring
Academic Integrity in the Era of ChatGPT’ (Cotton, Cotton and
Shipway, 2023), three UK academics discussed issues around
academic honesty and the opportunities for plagiarism provided by
generative AI tools. The irony here – deliberately revealed by the
authors after publication – was that the article had been written by
ChatGPT; the authors had simply edited the references (which ChatGPT
tends to invent). None of the four reviewers had realised this.

How can teachers ensure that learners don’t use generative AI tools
to cheat? Or to put it another way, how can academic integrity be
maintained if learners are using AI tools to do their work for them?
Given that it’s difficult (or impossible) to detect AI-generated text, some
educators suggest that this might not be the best way to frame the
question. Instead, they suggest, this may be an opportunity for us to do
two things: first, to rethink how we assess our learners; and second, to
openly explore with learners how to use these tools ethically to support
their writing and learning (e.g., Fyfe, 2022). Let’s see how this can be done.

Rethinking writing assessment

As we saw in 14, writing is often seen in summative assessment (for
example, in a writing test or exam) as an opportunity for learners to
show what they know about the language, and to be assessed on criteria
like use of language (e.g., correct grammar and range of vocabulary),
organisation and structure, cohesion and coherence, and so on. Writing
can also be used for formative assessment. Here, the aim of writing is
for learners to use feedback to improve successive drafts of their
work, and thereby develop their writing skills. But how do we stop
learners from simply using generative AI tools to write their essays for
them, and then submitting those texts as their own work? Here are two
possible approaches to this issue, taken by two different teachers.

Teacher 1 asks her learners to write their essays by hand in the
classroom, where she can ensure that they don’t have access to
generative AI tools. She monitors her learners while they write to
ensure that they don’t attempt to use a generative AI tool on their
mobile devices to help them.

Teacher 2 accepts that her learners may want to use these tools
to generate good texts – after all, who wants to write poorly in a
foreign language? She openly discusses with her learners how best
to use a generative AI tool to help them improve their writing skills.
For example, she asks them to use the tool to improve a first draft of
an essay. She asks her learners to document this process by including
their first draft, screenshots of their conversation with an AI tool to
improve that draft, and the final version of their written work. She
requires her learners to submit all of these documented stages as part
of their written homework. In a subsequent class, she follows up by
asking her learners to share what they learned about their writing
from the drafting and redrafting stages of their writing.

Teacher 1 sees generative AI as a threat to academic integrity and
tries to prevent her learners from using it. Teacher 2 accepts that her
learners are likely to use generative AI tools to produce texts, and tries
to help them use these tools in ways that will develop their writing
skills. However, both of these teachers essentially believe that we should
carry on as usual by asking our learners to complete individual writing
assignments.

Authentic assessment and project work

A third way, however, is to rethink how we assess our learners.
Authentic assessment, for example, requires learners to apply their
knowledge and skills to a real-world problem or scenario. Authentic
assessment is often related to project work, which might take place over
several classes or weeks. How might this work in practice?

Teacher 3 assesses her learners via a project in which learners address
an issue with littering in a local park. The project takes place over
several weeks, and is called ‘Clean Our Park: Litter Awareness and
Action Project’. There are four separate tasks, which require learners
to use a range of language skills, critical thinking, research and
communication skills. Teacher 3 asks her learners to use generative AI
tools at various points in the project.

Task 1: Research and presentation
In small groups, learners research the effects of littering, looking at
environmental, societal and economic factors. Each group shares
their findings with the class. The class brainstorms ways to raise
awareness and prevent littering in the local park. A text-based
generative AI tool helps learners with brainstorming ideas (see 7).
Task 2: Bilingual litter awareness poster
In pairs, learners create a bilingual poster (in English and the
learners’ first language) to raise litter awareness and promote
responsible behaviour. Learners need to ensure that visuals, simple
language and clear messages are integrated effectively in their
posters. An image-based generative AI tool helps learners with poster
design (see 7).

Task 3: Litter clean-up event
In small groups, learners plan a litter clean-up event in their
local park, addressing logistics, scheduling, roles and promotion
(including sharing the posters from Task 2). Learners present their
event plans, get feedback and the class agrees on one plan. The
event takes place. A text-based generative AI tool helps learners with
suggestions on how to plan an event.
Task 4: Reflection
Learners write a short blog post (300–500 words) about their
journey through the litter awareness project, discussing insights on
littering, community engagement and personal learning. A text-based
generative AI tool helps learners improve their first drafts (see Teacher 2
above).

A multifaceted project like this can be used to assess learners' language
proficiency across a range of skills, while encouraging civic engagement
and critical thinking in a real-world context. Not least, it can inspire
learners to make a positive impact in their local communities.

Note: I used a generative AI tool (ChatGPT) to help me come up
with an example of authentic assessment for this chapter. I used the
following prompt: ‘Create a project that includes authentic assessment
for a group of intermediate level English language learners around the
issue of littering in a local park.’ I then edited the project heavily for
the purposes of this book. The original version of the project was, in
my opinion, pretty good though. As suggested in 7, you could use a
generative AI tool to produce authentic assessments based on project
work for your own learners!

Cotton, D., Cotton, P. and Shipway, R. (2023). Chatting and Cheating: Ensuring Academic
Integrity in the Era of ChatGPT. Innovations in Education and Teaching International.
https://doi.org/10.1080/14703297.2023.2190148. Accessed 28 December 2023.

Fyfe, P. (2022). How to cheat on your final paper: Assigning AI for student writing. AI &
Society, 38, 1395–1405.

22 Whose content does AI use?

Generative AI raises questions about whose content has
been used to train their models. We explore issues of
copyright, attribution and accountability in this chapter.

The pushback against generative AI

In 2023, writers in Hollywood went on strike, bringing one of the most
powerful motion picture industries to a standstill. After a few months,
actors joined the strike. The writers’ strike lasted five months and was
one of the longest Hollywood had ever seen. The reason for the strike?
In a word, AI. Marching with placards covered in slogans like ‘AI?
More like “Ai-yi-yi”’, and ‘Do the write thing’ (these were writers after
all), the strikers demanded safeguards against generative AI for their
work, as well as fairer pay for streaming shows on platforms such as
Netflix. For these writers, a key concern was that Hollywood studios
would start using generative AI tools to produce movie scripts, making
writers redundant. In a nutshell, the writers feared being replaced by
technology. The three-year deal that ended the strike did not prevent
studios from using AI to generate content, but gave writers the right to
sue if studios used writers’ content to train generative AI systems. One
wonders what will happen when the three-year agreement ends.

The Hollywood writers’ strike highlights one of the big questions


around the use of generative AI. AI’s large language models (see 2) are
trained on huge amounts of data harvested from the internet. These
include millions of books, articles, Wikipedia content and computer
code. For tools that generate images and multimedia, billions of images,
songs and videos are used in training data. These resources have been
created by humans. Large tech companies, the argument goes, have
exploited all of this human labour by taking these data to feed their
AI systems without permission, without attributing the exact sources
and without remuneration, all to make money for themselves. There
is a point to the argument. Copyright laws exist to protect original
works, and to ensure fair pay for creators when their work is used by
others for profit. Creative Commons licences, which enable creators to
freely share their work in a variety of ways (for example, by asking
only for attribution in return for use, or by asking that their work is
not used to generate revenue), have existed for decades as an alternative
to copyright (see https://creativecommons.org/ for more on this). By
scraping training data from the internet without permission, companies
developing generative AI seem to have trampled over copyright
and over Creative Commons. Indeed, in most cases, the technology
companies refuse to disclose the exact training data they have used.
It’s understandable that creators are upset, and several lawsuits are
ongoing at the time of writing (one of the best-known of these is the
New York Times suing OpenAI over its use of newspaper articles in
training datasets without permission). Interestingly, Creative Commons
itself argues that the use of creative data from the internet may legally
constitute ‘fair use’ (Wolfson, 2023), and, at the time of writing, it
remains to be seen how the current lawsuits will be resolved.

Citing sources

One suggestion for addressing the issue around attribution in AI-
generated content is for sources to be clearly identified. Some generative
AI tools (such as Bing at the time of writing) do provide references
for sources in the texts they generate. This is useful for us to be able
to check whether the information provided by generative AI tools is
actually true. AI text-generator tools are famous for ‘hallucinations’
(inventing facts and making up references), although as these tools
become more powerful, hallucinations may decrease. Citing sources
for AI-generated images can be challenging, as a huge number of
source images may go into the creation of a new image. Nevertheless,
algorithms could be trained to acknowledge and list sources. The issue
of remuneration for creators (where needed) is trickier and it remains to
be seen if or how these issues will be resolved.

Discussing the issues with learners

What does all of this mean for language teachers and learners? As with
many of the big questions covered in this section, one of the first things
you can do is discuss them with your learners. Raising awareness of the
downsides (as well as the potential benefits) of using generative AI tools
is one of the first steps we can take to help our learners develop critical
digital literacies (see 25). Awareness raising can take the form of a short
lesson that includes a discussion of the key issues. For example, you
could follow these steps:

1 Before class, use an image-generator tool (see 7) to make images
on a current course topic in the style of several famous artists (e.g.,
Rembrandt, Monet, Dali, Mondrian). Find a couple of original
images by each artist online and save those to show in class.
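(See the example prompt after these steps.)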
2 In class, show your learners the AI-generated image and originals
for each artist. Do they recognise the artist’s work? For each artist,
can learners spot which images are originals and which image is AI-
generated?
3 Ask your learners if they know how image-generation tools are
trained. Explain if needed (these tools are trained on millions of
works of art harvested from the internet). Then ask your learners to
discuss the following questions:
– Do you think it's fair that image-generation tools are trained
from images from the internet, without the creator's permission?
Is it different for dead versus living artists?
– How would you feel if you were an artist or photographer, and
your work was used to train these tools?
– What can we ask AI companies to do to make their use of images
fairer?
4 Put learners in pairs, and ask each pair to research one controversy
around AI companies not respecting copyright. You could assign
well-known examples such as the Hollywood writers’ strike, the
New York Times’ lawsuit (both mentioned above), Getty Images
suing an image-generator tool, or any other current controversy
(search online). Regroup learners to share what they have learned.
5 Finish the class by asking if your learners’ thoughts on the questions
in step 3 above have changed in any way, based on what they have
learned.
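For step 1, if you're not sure where to start, a prompt along these lines (an invented example – adapt the topic and artist to your own course) should work with most image-generator tools: 'Create an image of a busy street market on a rainy day, in the style of Claude Monet.'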

The use of generative AI tools brings up additional areas related to
copyright and attribution that teachers can explore with their learners.

Hallucinations
Encourage learners to check sources when creating text with
generative AI tools, because these tools are prone to hallucinations. One
amusing example to share with learners comes from ChatGPT: ‘some
species of dinosaurs even developed primitive forms of art, such as
engravings on stones’ (Szempruch, 2023).
Attribution
As we’ve seen in previous chapters, learners are likely to use
generative AI tools for a variety of tasks, including text generation.
Many schools and universities provide guidelines for teachers and
learners on how to use AI tools in ethical ways, and this includes
clearly stating how AI has been used in one's work (as I did in 21,
for example; see also how learners can show their use of AI in their
writing in 14).

Szempruch, D. E. (2023). Generative AI Model Hallucinations: The Good, The Bad, and
The Hilarious. Blog post, 20 March 2023. Available at: https://www.linkedin.com/pulse/
generative-ai-model-hallucinations-good-bad-hilarious-szempruch/. Accessed 28 December
2023.

Wolfson, S. (2023). Fair Use: Training Generative AI. Creative Commons. Blog post 17
February 2023. Available at: https://creativecommons.org/2023/02/17/fair-use-training-
generative-ai/. Accessed 28 December 2023.

23 Who creates AI?

The rise of generative AI has led to new work
opportunities, both good and bad. How might this affect
you and your language learners?

Who works in AI?

As we saw in 2, generative AI tools are based on many years of research
in the field of computer science. So when we ask ourselves who
created AI, the first and most obvious people we think of are computer
scientists and programmers. These are what we might call the high-
prestige jobs in AI. They command respect, and computer scientists are
usually paid a decent wage for their expertise. When I first started using
generative AI tools, I assumed that these were the only people involved
in creating these new tools. I had heard of large language models (LLMs
– see 2) and knew that large amounts of data went into training these
models. I assumed that the creation of the generative AI tools I now use
daily went something like this: first, computer scientists created a lot
of complicated algorithms for an LLM. Then, based on vast quantities
of data taken from the internet, the LLM adjusted its algorithms,
essentially teaching itself how to work effectively. Then,
when I (as a user) put a prompt into a generative AI tool, the LLM did
some complex algorithmic stuff internally, and produced a text as per
my request.

But I soon found out that there is another group of people who work
in generative AI, who are ignored in the account above. These people
are working with the training data. Their job is to create sample
responses, to provide feedback, and to tag, rate and annotate AI-
generated responses. These are the people involved in the ‘human
feedback’ training stage for LLMs (this stage is known as reinforcement
learning from human feedback, or RLHF – see 2). Their job is to align
AI-generated responses with what we humans want and expect to see,
in other words, with human values and beliefs. Indeed, this stage of
generative AI training is often referred to as alignment. Alignment is
supposed to spot and rectify mistakes in generative AI output, and also
to identify and rectify bias. Of course, we are often unaware of our own
biases, so there is a strong possibility that human annotators’ biases are
passed on to the training data (see 18).

Our learners as future platform workers

At this point, you may be wondering what all of this has to do with
language teachers. The answer is – plenty. Many graduates are finding
work as generative AI annotators, given the vast amounts of training
data used by LLMs, and the need for lots of human feedback to make
them less prone to mistakes. In specialised domains, graduates are
needed to create model responses to train AI. For example, if you ask a
generative AI tool a complex technical question related to astrophysics
or medicine, you want the answer to be detailed and you want it to be
correct. To make sure that a generative AI tool is as accurate as possible,
a company will hire graduates specialising in astrophysics or medicine
to work through queries and to provide model answers which are then
used to train the AI tool. If you’re teaching English at a university, there
is a chance that some of your learners will become annotators when
they graduate, reading and possibly creating model answers in English
and/or other languages. Some commentators suggest that this ‘platform
work’ or ‘crowd work’ could be the most widespread form of full-time
work by 2030. Other commentators have voiced concerns about how
the needs of the vast AI industry are already affecting higher education
by creating a swing away from disciplines such as philosophy or fine
arts, which are considered less relevant to the growing AI-related jobs
market.

Large tech companies that own AI models usually outsource platform
work to subsidiary companies, who then hire the platform workers.
Platform work tends to be paid at an hourly rate, and often depends
on how productive annotators are, with productivity measured by
opaque algorithms. The workers involved in platform work are often
badly paid and are frequently working in the Global South (Hornuf
and Vrankar, 2022). Even specialised and highly qualified annotators,
working on topics like astrophysics or medicine, may not be particularly
well-paid (Wang, Prabhat and Sambasivan, 2022). For example, a study
examined graduate data annotators located in India, where many
subsidiary companies operate because they have access to English-
speaking graduates, but can pay them lower wages than in Europe or
North America (Wang, Prabhat and Sambasivan, 2022). The study
found there was a conflict between the need for high-quality data at
low cost, and the aspirational needs of annotators for well-being, career
prospects with decent pay, and active participation in what they had
thought of as ‘building the artificial intelligence dream’ (ibid.). Sadly,
these skilled annotators soon found platform work to be unfulfilling
and alienating. Another report found that qualified data annotators in
many countries in the Global South are consistently paid low wages and
expected to work long hours in tedious and repetitive work (Dzieza,
2023). In one country, a compulsory part of completing a vocational
training qualification included working as an intern for a data
annotation centre, for months on end and for very low pay (Zhou and
Chen, 2023).

Discussing platform work with learners

What does AI’s increasing need for platform work mean for us as
English language teachers? I would argue that, at the very least, these
issues should be discussed with our language learners – some of whom
may end up doing platform work in the future. Even if they don’t, as
users of generative AI, both we and our learners need to be aware of
some of the darker sides of the development of generative AI. To discuss
these issues with your learners, you could follow these steps:

1 Introduce the topic of work and AI. Ask your learners what jobs
they think have already been replaced by AI, and what jobs will
be replaced by AI in the future. Ask them what jobs they think are
involved in creating AI.
2 Explain that ‘data annotators’ are key in the development of AI. Put
your learners in pairs and ask them to research this topic online.
3 Regroup the learners and ask them to share what they have learned
about data annotators with each other.
4 To round up, ask learners what they learned about platform work
and ensure that some of the issues outlined in this chapter are raised
and discussed with the class. Ask what regulation or laws could
improve data annotators’ jobs.

In the hype that surrounds generative AI (see 5), these kinds of issues
tend to get glossed over. Being AI literate (see 25) means having a
critical understanding of the effects (both good and bad) that these tools
can have. AI literacy also includes exploring potential solutions. This
short speaking activity tries to raise learners' critical awareness and at
the same time to consider possible solutions to the issue of platform work.

Dzieza, J. (2023). AI is a Lot of Work. The Verge. Online newspaper article, 20 June 2023.
Available at: https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-
notation-labor-scale-surge-remotasks-openai-chatbots. Accessed 28 December 2023.

Hornuf, L. and Vrankar, D. (2022). Hourly Wages in Crowdworking: A Meta-Analysis.
Business and Information Systems Engineering, 64, 5: 553–573. Available at: https://link.
springer.com/article/10.1007/s12599-022-00769-5. Accessed 22 January 2024.

Wang, D., Prabhat, S. and Sambasivan, N. (2022). Whose AI Dream? In search of the
aspiration in data annotation. In Proceedings of the 2022 CHI Conference on Human
Factors in Computing Systems, pp. 1–16.

Zhou, V. and Chen, C. (2023). As part of China’s digital underclass, vocational school
students work as data annotators – for low pay and few future prospects. Rest of World
blog post, 14 September 2023. Available at: https://restofworld.org/2023/china-ai-student-
labor/. Accessed 28 December 2023.

24 Can we control AI?

There is growing consensus around the need for guidelines
and policies to underpin the design, development and
use of AI technologies. Regulation aims to protect us
by ensuring that AI reflects the values of our societies,
protects our rights, and does not pose risks or undertake
actions that might harm us.

Opaque AI

Let’s start by considering how unregulated AI in language learning


may pose risks. Imagine a learning management system (LMS) that
contains materials your adult learners must read and engage with as
part of their assessed online course work. The LMS uses an algorithm
that awards extra marks to your learners based on their levels of
engagement with the materials. In principle, this sounds like a good
idea. But wait. How is this engagement measured? The LMS provider
won’t say, claiming that this kind of information cannot be made
public because competitors may use it. Let’s say you manage to find out
that the LMS is using emotion-detection software that accesses your
learners’ laptop or mobile phone cameras to gather data from their
facial expressions when they interact with the materials (see 12); the
LMS then automatically assigns an engagement score to each learner
based on these data. Neither you, your institution, nor your learners
were aware of this. And, having read 12, you know that emotion AI
can be unreliable. What’s more, explicit permission to gather these data
from your learners was never sought. You then discover that the same
LMS company has been selling learners’ engagement scores to their
employers; many of these learners have suddenly lost their jobs. Opaque
AI (i.e., a lack of transparency in AI algorithms) is an issue. Regulation
around transparency, accountability and data protection would help
avoid this invented (and rather nightmarish) scenario.

My invented LMS scenario may sound far-fetched. Unfortunately,
it’s not. The COVID-19 pandemic saw the introduction of learning
technology software and products into schools and homes on an
unprecedented scale. And as we saw in 20, a Human Rights Watch
investigation found that learning technology products introduced into
schools during the pandemic directly sold or gave access to children’s
personal data to digital advertising companies in 49 countries. This is
a worldwide issue, and regulation to prevent this kind of behaviour is
clearly needed. Regulation around data and privacy does exist in some
parts of the world, but often imperfectly. Of the 49 countries reviewed
in the Human Rights Watch report, 14 had no data protection
laws, while the data laws in a further 24 were not fit for
purpose in the digital age.

Frameworks and guidelines

The launch of ChatGPT in late 2022 created a sense of urgency around
international measures to regulate AI. Initiatives around AI governance
in education had been established several years before this, though.
UNESCO, for example, published AI and education: guidance for
policy-makers (Miao et al., 2021), as well as policy guidelines on the
use of generative AI in research and education (UNESCO, 2023). The
EU’s first Artificial Intelligence Act, approved in late 2023, includes
legislative measures for education, and these include bans on biometric
surveillance and emotion-recognition software in schools, as well as
transparency requirements for generative AI (European Parliament
News, 2023). And new frameworks and guidelines are appearing all
the time.

The use of generative AI by learners in schools and universities brings
up particular challenges, especially around the issue of academic
integrity and cheating (see 21). To deal with this, schools and
universities often prefer to develop their own guidelines for teachers and
learners around the use of generative AI. Two examples worth exploring
at the time of writing are the guidelines produced for the Russell Group
of universities in the UK (referred to as ‘principles’) and the Australian
Framework for Generative Artificial Intelligence in Schools produced by
the Ministry of Education (NSW Government, 2023). There are many
other frameworks and guidelines available, with new ones appearing all
the time. Exploring university and school websites in your own context
is likely to yield more examples that may be better suited to your
teaching environment.

Creating your own guidelines for AI use

Here are a couple of ideas for academic directors or teachers interested
in developing guidelines that can be tailored for the use of AI in your
own institutions or classrooms.

Explore what could go into your AI guidelines with teachers and
students, based on examples of guidelines already drawn up by
other institutions. Involving key stakeholders in the development
of AI-related guidelines is more likely to increase buy-in and
therefore compliance. In the case of a school-wide policy, important
stakeholders to include are teachers and learners. In the case of
guidelines developed by a teacher for their own classroom, the most
important stakeholders to include are – of course – the learners
themselves! Involving learners in discussions around the use of AI
in their learning is also an effective way to raise awareness about
some of the challenges we explore in this section of the book,
and thus to help support the development of their critical digital
literacies (see 25).
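(An example guideline is given after this list.)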
Carry out technoethical audits regularly (Krutka, Heath and Staudt
Willet, 2019; Krutka and Heath, 2022). This involves carefully
assessing a new technology choice. A technoethical audit explores
whether it’s morally right to use the technology and what possible
negative consequences might come from using it in your institution
or classroom. Some key questions guiding a technoethical audit
include:
– How does this technology promote or hinder learning (see 3)?
– Is the creation, design and use of this technology fair, especially
for disadvantaged or minority groups (see 18)?
– How does this technology impact the environment (see 25)?

You can develop more questions for your technoethical audit based
on the issues explored in other chapters in this book, especially in
Section C.
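To give you an idea of the level of detail to aim for, here is one possible classroom guideline – my own invented wording, to be adapted to your context: 'You may use AI tools to brainstorm ideas and to get feedback on your drafts, but you must always say which tool you used and what you used it for.'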

Regulation provides safeguards, and in the case of AI, these safeguards –
often referred to as guardrails – are needed for all. Building international
consensus around regulation for AI is a challenge, and it’s unclear
whether or how it will be achieved. Nevertheless, the guidelines and
legislation around AI in education contained in the initiatives described
above provide a good starting point for educators interested in following
this developing area. There is clearly strong advocacy for AI regulation
in the educational community, and it is a space to watch.

European Parliament News. (2023). Artificial Intelligence Act: deal on comprehensive
rules for trustworthy AI. Press release, 9 December 2023. Available at: https://www.
europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-
deal-on-comprehensive-rules-for-trustworthy-ai. Accessed 28 December 2023.

Krutka, D. K. and Heath, M. K. (2022). Is It Ethical to Use This Technology? An
Approach to Learning about Educational Technologies with Students. Civics of
Technology. Blog post. Available at: https://www.civicsoftechnology.org/technoethicalintegration.
Accessed 28 December 2023.

Krutka, D. K., Heath, M. K. and Staudt Willet, K. B. (2019). Foregrounding Technoethics:
Toward Critical Perspectives in Technology and Teacher Education. Journal of Technology
and Teacher Education 27, 4: 555–574.

Miao, F., Holmes, W., Huang, R. and Zhang, H. (2021). AI and education: guidance for
policy-makers. UNESCO. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000376709. Accessed 28 December 2023.

NSW Government. (2023). Draft National AI In Schools Framework. Available at:
https://education.nsw.gov.au/about-us/strategies-and-reports/draft-national-ai-in-schools-
framework. Accessed 28 December 2023.

Russell Group. (2023). Russell Group principles on the use of generative AI tools in
education. Available at: https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf.
Accessed 28 December 2023.

UNESCO. (2023). Guidance for generative AI in education and research. Available
at: https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research.
Accessed 28 December 2023.

25 How can we become critical users of AI?

Generative AI comes with benefits and drawbacks. We can
encourage our learners to engage with the drawbacks of
generative AI (and of digital technologies in general) by
supporting the development of critical digital literacies.

Experts estimate that just one query to a large language model (LLM)
like ChatGPT uses the equivalent of half a litre of water. Others claim
that the training of GPT-3, a model that is a lot less powerful than the
LLMs we have now, was like driving a car to the moon and back in
terms of energy consumed. The massive numbers of very large servers
that house LLMs and their terabytes of data require a lot of energy,
including electricity (to function) and water (to cool the servers).
Standing in a room full of servers pre-generative AI is like listening to
the hum of thousands of electric fans. Standing in a room full of servers
that support generative AI is like listening to hundreds of jet engines.
The difference in decibels says it all.

Digital technologies have an environmental cost that we tend to gloss
over. Issues such as e-waste and recycling, the built-in obsolescence
of many digital devices, and the unethical extraction of the minerals
and rare metals used to power them, are not often discussed in
the classroom. They should be. To this list, we can now add the
environmental cost of generative AI.

Discussing issues with learners

There are a number of ways we can help raise our learners’ (and our
own!) awareness of the environmental costs of the digital technologies,
including generative AI, that we use in our teaching and learning. One
way is to get learners to discuss some of the issues. Below are some
discussion prompts that could be used with an intermediate group of
adult or teenage language learners. The prompts can be used in three
stages over a single class.

Stage 1: Introduction
Introduce the topic and key issues by asking learners the following two
questions:

What types of electronic devices do you use in your daily life (e.g.,
mobile phones, tablets, computers, gaming consoles)?
What happens to your devices when you no longer use them?

Stage 2: Discussion
Put the learners into small groups to discuss some key issues. You could
allocate one topic to each group and ask them to first research their
topic online, then discuss it, and finally to report back to the class.

E-waste: What is electronic waste (e-waste), and why is it a problem
for the environment? Have you or someone you know ever disposed
of an old electronic device? If so, how was it done?
Built-in obsolescence: Do you think digital devices like smartphones
are designed to last a long time, or do they become outdated
quickly? Why? How do you feel when you realise that your device is
no longer as fast or efficient as it used to be? What do you do?
Unethical extraction: What do you know about the mining of
the minerals and metals used in digital devices? Do you think
it’s important to consider the ethical aspects of extracting these
materials? Why or why not?
Generative AI: What is generative AI, and how is it used? What do
you know about the environmental impact of generative AI? How
do you feel about it?

Stage 3: Personal actions
Encourage students to think about what they can do to reduce their
environmental impact by discussing these final questions with the
whole class.

How can you reduce e-waste in your life?
Can you think of any small changes you could make to be more
environmentally friendly when using technology?
What is the most important thing you’ve learned about e-waste,
built-in obsolescence, unethical extraction, or generative AI’s
environmental impact?
Do you think you will change your behaviour in any way based on
what we’ve discussed?

If relevant and of interest to your learners, you could also draw their
attention to the United Nations’ Sustainable Development Goal (SDG)
12 – ‘Ensure sustainable consumption and production patterns’. You
could ask your learners to what extent they feel their country is making
progress against this goal, particularly as regards digital technologies
and e-waste.

Critical digital literacies

Raising learners’ awareness around the many ethical issues and


challenges associated with digital technologies is an important first step
in helping them develop their critical digital literacies. My co-authors
Mark Pegrum, Gavin Dudeney and I define critical digital literacies as:
‘the ability to apply a critical lens to all aspects of digital technologies,
including digital information, and through research, reflection and
discussion to arrive at a considered position on their design and/or
redesign’ (2022, p. 51).

Critical digital literacies can be thought of as including a number of
sub-literacies, including critical mobile literacy and critical material
literacy, among others.

Critical mobile literacy means being able to look at problems that
come specifically from using phones and other mobile devices. These
problems can be about things like how our privacy is affected, how
we get distracted, and how excessive use of mobile devices can affect
our physical and mental health.

Critical material literacy is about looking at the things that go
into making, sharing, using and getting rid of digital devices. This
includes thinking about how they affect politics, people and the
environment.

Developing learners’ critical digital literacies through project work

The discussion task suggested earlier in this chapter encourages learners
to think about how key issues relate to their own uses of digital
technologies. The task can take place over one class period. Another
option is to set up a longer-term project that encourages learners
to explore these issues. In 21, we looked at how a project could be
structured around the topic of littering. A similar approach, using the
same project stages, could be taken using the topic of the environmental
impact of digital technologies. There are two ways you could plan such
a project:

Use the project plan in 21, but replace the topic of ‘littering’ with
the topic of ‘e-waste and mobile devices’. For the ‘event’ stage of
the project (Task 3), ask learners to organise an e-waste awareness-
raising event at your school. The event could also aim to collect
obsolete devices that learners (or their parents) have lying around at
home; some of the learners can then take them to a recycling centre
in the town. As a longer-term outcome for the school community, a
cardboard box for recycling e-waste could be left permanently at a
strategic point in the school. Learners from different classes could
take the collected e-waste to the recycling centre once a month or
once a term.

Put the following prompt (based on the prompt in 21) into an
AI text-generator or lesson-generator tool (see 7) and see what it
suggests.
‘Create a project that includes authentic assessment for a group of
intermediate level English language learners around the topic of
e-waste and mobile devices.’
Edit and adapt the AI-generated project to suit your learners. Think
about how you could include generative AI tools to support your
learners during each stage of the project (for example, as suggested
in the project plan in 21). Then try out the project with your
learners!

Pegrum, M., Hockly, N. and Dudeney, G. (2022). Digital Literacies (2nd Edition). London:
Routledge.

D: Self-development and AI

In this final section, we first consider the effects
generative AI can have on teachers’ and learners’
wellbeing. We then explore what research has
shown to be effective strategies for improving
teaching and learning. We consider how these
strategies can be applied to AI.

26 Considering wellbeing and AI
27 Carrying out action research into AI
28 Developing learner autonomy with AI
29 Developing your teaching with AI
30 What does the future hold?

26 Considering wellbeing and AI

Teachers’ and learners’ wellbeing – and how to support


it – is widely discussed in the field of ELT. Supporting
wellbeing in the face of generative AI is newer territory for
us, and here we consider how this might be approached.

Fears about AI

In early 2023, I started to notice a trend at the English language teacher
conferences I attended. The rise of generative AI, particularly in its
first widely known incarnation as ChatGPT, seemed to elicit two main
responses from teachers. Some teachers welcomed how generative AI
was saving them time with lesson planning and admin-related work
(see 7). Other teachers were much more circumspect, and some teachers
were frankly unnerved. At the end of my own conference talks on AI in
ELT, teachers frequently asked me two questions. The first question was
whether I thought that AI would replace them. The second question was
whether AI would be used by learners to cheat on their homework (see
21 for my response to that). Both questions reveal deep uncertainties
about our role as teachers.

Concern, and even fear, are not uncommon responses to new technologies. Generative AI has been perceived as threatening, not just by teachers but by society as a whole. Much debate continues to take place among philosophers, scientists and politicians about just how much of an existential threat generative AI might pose to humankind. You are likely to be aware of these debates and you may have listened to some of them, oscillating between fear and hope depending on who was being interviewed. The truth is that, at the time of writing this book, we are unsure what the long-term (and even short-term) effects of generative AI will be on us, and this uncertainty can provoke a feeling of unease.

Teacher wellbeing
Wellbeing has been part of the conversation in education for many
years. The wellbeing of teachers can be affected by external factors, such

104
https://doi.org/10.1017/9781009804509.004 Published online by Cambridge University Press
as low pay, a lack of recognition, heavy workloads and dealing with
unmotivated or badly behaved learners. Internal factors, too, can affect
teachers’ wellbeing, and internal and external factors are often linked.
Teacher burnout is one well-known example of a wellbeing issue, and
the stresses of the job can unfortunately sometimes lead to physical and
psychological challenges such as insomnia or depression. See Kerr (2022)
and Mercer and Puchta (2023) for more on teacher wellbeing in ELT.

Over the years, the appearance of new digital technologies has often
affected teachers’ wellbeing, for better and for worse. Productivity tools
(such as word processors for creating worksheets, or spreadsheets for
calculating learners’ grades) are clearly helpful for teachers. After some
initial hesitation around whether they had the necessary technical skills
to learn to use them effectively, most teachers would agree that these
tools have made their lives easier. The move online necessitated by the COVID-19 pandemic, on the other hand, was extremely challenging for both teachers and learners. Although most teachers did an excellent job of teaching in exceptionally difficult circumstances, the serious mental health challenges that the pandemic posed for teachers and learners were widely documented. And now, it may seem to many teachers, we are faced with yet another potential existential threat in the shape of generative AI.

For teachers, one way to deal with the uncertainty that they may be
feeling in the face of generative AI is to first try and understand what it
is and how it works, at a very basic level (see 1 and 2). It’s also useful
to understand that both rule-based and data-based AI have underpinned
many of the language learning apps and websites that we’ve been using
for decades (see 3). Generative AI is a step up from what we’ve used
before in terms of functionality, but it has a long trajectory, and it didn’t
come out of nowhere.

Learner wellbeing

A concern with learner wellbeing also has a long history in our field.
For example, every teacher knows that positive learner attitudes and
strong motivation can improve learning; indeed, second language
acquisition studies tend to bear this out. Learners’ wellbeing is about
more than attitudes and motivation though, and when we attempt to

measure wellbeing, a range of factors are usually examined. The OECD
(Organisation for Economic Co-operation and Development) PISA tests,
for example, which aim to compare learning outcomes for adolescents
globally, include a range of wellbeing indicators. These indicators take
into account both negative aspects (e.g., anxiety or low performance)
and positive aspects (e.g., interest, engagement and motivation to
achieve). The OECD indicators include:

psychological factors, such as learners’ sense of purpose and satisfaction in life, self-awareness and lack of emotional issues;
physical factors, such as a healthy lifestyle and overall health;
cognitive factors, such as problem-solving abilities;
social factors, such as learners’ relationships with family, friends
and teachers.

Perhaps unsurprisingly, the OECD has found that both competitive exams and harsh or unfair disciplinary environments negatively affect learners’ wellbeing. Areas where teachers (and schools) can therefore support learner wellbeing include developing
collaborative, as opposed to competitive, classroom environments and
treating learners fairly. Both of these can increase learners’ sense of
belonging and therefore their overall sense of wellbeing at school.

Wellbeing and generative AI

Now that we’ve explored teacher and learner wellbeing in general
terms, let’s turn to how generative AI can affect wellbeing. The concept
of wellbeing in the face of generative AI goes beyond the perceptions
and concerns or fears of individual teachers, valid as these may be.
It raises many of the big questions that we examined in Section C.
How can schools address wellbeing? The Australian Framework for
Generative Artificial Intelligence in Schools (previously mentioned in
24, with the framework in the consultation phase at the time of writing)
provides us with an example. The framework includes ‘Human and
social wellbeing’ as one of its core elements, breaking this down into
three separate principles as follows (NSW Government, 2023: 9–10):
The first principle is ‘wellbeing’ in general, and the framework states
that ‘the use of generative AI tools should have a positive impact on,
and not harm, all members of the school community’.

The second principle is ‘diversity of perspectives’, which refers
to schools using tools that ‘expose users to diverse ideas and
perspectives, avoiding the reinforcement of existing biases’.
The third principle is ‘human rights’, and it states the need for
schools to use generative AI that ‘respects human rights and
safeguards the autonomy of individuals, especially children’.

In this example, protecting wellbeing is defined in a very broad way. The framework suggests identifying aspects of generative AI that might harm users and avoiding those. However, this is much easier said than done. Avoiding biased AI tools (suggested by Principle 2) and protecting learners’ data (suggested by Principle 3) is simply not possible until the technology companies producing these tools are transparent about how they work, and are held to account for rectifying bias and the misuse of learner data.

Nevertheless, it’s helpful to focus on wellbeing for all members of a school community, and to consider specifically how wellbeing might be negatively affected by some AI tools. This is an approach that can be emulated by teachers and schools who wish to protect learners in the face of generative AI. Suggestions for how teachers (and school managers) can create guidelines around ethical AI use are provided in 24; your guidelines could include wording around wellbeing as well.

Kerr, P. (2022). Philip Kerr’s 30 Trends in ELT. Cambridge: Cambridge University Press.

Mercer, S. and Puchta, H. (2023). Sarah Mercer and Herbert Puchta’s 101 Psychological Tips. Cambridge: Cambridge University Press.

NSW Government. (2023). Draft National AI in Schools Framework. Available at: https://education.nsw.gov.au/about-us/strategies-and-reports/draft-national-ai-in-schools-framework. Accessed 28 December 2023.

OECD. (2017). PISA 2015 results (Volume III): Students’ well-being. Available at: https://www.oecd.org/pisa/publications/pisa-2015-results-volume-iii-9789264273856-en.htm. Accessed 28 December 2023.

27 Carrying out action research into AI

Research is not just for academics and theorists. Teachers can carry out valuable action research in their own classrooms with their own learners. AI can provide a wide range of topics for action research.

What is action research?

Teachers can feel that research is something done by academics in
universities who explore abstract ideas that are unconnected to the
practicalities and realities of day-to-day teaching. Research does indeed
include formal studies, but research can also be carried out by practising
classroom teachers. If we agree that teachers should be ‘reflective
practitioners’ (Schön, 1983) who engage in a cycle of reflecting on
and developing their own classroom practice, the value of small-scale,
classroom-based research is clear. After all, who is better placed than
teachers to examine what happens in their own classrooms? What is
known as action research enables teachers to investigate what happens
in their own classes with their own learners. It’s an important tool
that teachers can use to support their own development with minimal
disruption to their busy routines. In short, action research can help
teachers gain deeper insights into their own teaching; this can enhance
their expertise and better support learning outcomes for their learners.

Action research revolves around an ongoing cycle of action and reflection, in which a teacher can take the following steps:

1 Identify a teaching/learning challenge or an area of interest in your teaching.
2 Explore the area you identified through reading and through talking
with colleagues.
3 Plan how to approach your area of interest, and how you will assess
the effectiveness or impact of your approach.
4 Try out various approaches, techniques or activities with your
students to investigate your area of interest.

5 Assess the effectiveness of these approaches, techniques or activities
through self or peer observation, reflection and analysis.
6 Optional: Refine or adapt your approaches, techniques or activities.
Experiment some more – and evaluate the effectiveness again.
7 Share what you found out from your action research with others,
including colleagues and your learners.

AI can provide teachers with an interesting area to research in their own teaching. As we’ve seen in this book, AI is a very large topic. One area for teachers to investigate is learners’ attitudes to and experiences with generative AI (see 2). This could include learners’ attitudes to some of the issues we explored in Section C – for example, the increasing datafication of schools (see 20), how learners use generative AI tools to do their homework (see 21), or whether they have the critical digital literacy skills to question their own uses of digital technologies, including AI (see 25). AI can also be integrated into teaching approaches – for example, AI tools can be used at various stages in project work (see 21); whether this approach improves the learning experience for learners is another area that could be investigated.

An example of action research into AI

Let’s work through an example of an action research project that teachers could undertake around how AI is used in language teaching and learning. Each stage of the example is mapped to the steps described above.

1 Identify the action research topic
In this first stage, the teacher identifies the topic or area of interest
for her action research. Let’s imagine that the teacher has noticed
that her class of lower intermediate learners feel uncomfortable
taking part in speaking activities. She sees that many of the learners
lack confidence when it comes to speaking in English, and that they
will often use their first language instead of English when completing
oral pair work activities in class. She realises that she needs to help
them improve their confidence around speaking while also giving
them more opportunities to practise speaking. But classroom time is
limited, she has a syllabus to teach, and she can’t spend every class
on long speaking activities. She’s noticed that all of her learners have
smartphones though, and she wonders whether she could use those
to somehow encourage her learners to practise speaking out of class.
2 Explore the action research topic and plan the approach
This second stage involves exploring the topic (through reading and through talking to colleagues) and planning how to carry out the action
research. After reading 9 and 16 of this book, our teacher decides
to ask her learners to practise their speaking out of class by using a
generative AI chatbot app for a month. First though, she asks other
teachers in her school whether they have tried anything similar with
their own learners. One teacher got his learners to try a chatbot app
about five years ago, but, the teacher reports, the chatbot app only
allowed for very limited text-based interactions and his learners
got bored with it quite fast. But our teacher knows that generative
AI has improved the functionality of English language chatbots
significantly; she has learned that these more recent chatbots can
understand and respond to spoken language increasingly well.
She reviews the AI chatbot studies described in 16, and sees that
there are three key areas to keep in mind: 1) the chatbot needs to
understand non-standard English accents; 2) it needs to keep the
learners engaged and motivated; and 3) it needs to be simple enough
for learners to deal with (i.e., it needs to avoid excessive cognitive
load – see 16). Armed with this knowledge, she chooses two English
language chatbot apps that seem most likely to fulfil these three
criteria. At the time of writing, these apps are Speak and ELSA,
both mentioned in 9.
3 Try it out
In the next stage, our teacher explains her action research project
to her learners. She gets them to agree to try out a chatbot app for
a month, to see whether it will help them feel more confident with
their speaking skills. She asks them to choose one of the chatbots
she recommends (or even to try both), to download the app(s) to
their phones, and to spend 15 minutes a day carrying out speaking
activities in the app. Because these are teenagers, and she’s not
sure they will actually use the app out of class, she tells them that
taking part in this project means that they don’t need to take the
speaking exam at the end of term – instead they can submit the app
dashboard statistics (which provide an overview of the activities
each learner has completed). Once the project starts, our teacher
asks the group to share informally, once a week in class, how they are getting on with the app. This helps her gauge progress and keep the project on track.
4 Reflect
In this all-important fourth stage, the teacher and her learners
discuss and reflect on the experience of using their chosen chatbot
app for a month. To structure the discussion, and to collect
evidence of her learners’ experiences, the teacher has designed a
questionnaire for her learners to complete. She includes questions
about the three key areas she identified in her reading for Stage 2,
as well as questions about her learners’ attitudes to using a
chatbot, and whether (and if so why) they now feel more confident
about their speaking skills. She reads her learners’ responses in the
questionnaire, and she reflects on anything she might do differently
if she undertakes the same project with another class.
5 Share findings
Finally, our teacher shares the findings from the questionnaire
with her class, and discusses the experience further with them. She
encourages her learners to continue to use the chatbot or to explore
new chatbots if they found the experience helpful. She also shares
the experience with her colleagues in the school by explaining the
stages and her findings at a teacher development seminar. Several
teachers in her school decide to try the same approach with their
own adolescent learners.

As we can see from the example above, action research can not only
help teachers develop and explore their own classroom practice, it can
also inspire other teachers to try out similar approaches with their own
learners. Our example teacher above could now go on to present her
action research project at a local conference, or write an article about
the project for a teachers’ association magazine or for a blog. Action
research can also contribute to knowledge in our field, and help teachers
develop their professional careers if they choose to share their work
more widely.

Schön, D. (1983). The Reflective Practitioner. New York: Basic Books.

28 Developing learner autonomy with AI

Many teachers and instructional designers think that AI can support learners in becoming more autonomous, especially in online learning scenarios. But what do learners think?

Learner autonomy and learning online

Learner autonomy refers to a learner’s independence and sense of
control (or ownership) over their own learning. Autonomous language
learners can make decisions about what and where they learn. They are
self-motivated, and they use self-regulation strategies like goal setting,
self-monitoring and reflection on their own progress. Helping learners
to become more autonomous has long been considered a good thing
in language learning, because on the whole, autonomous learners tend
to achieve more successful learning outcomes. Because of this, many
coursebooks integrate activities that aim to help learners become more
autonomous by developing self-regulation strategies.

Digital technologies provide many opportunities for learners to study English independently, outside of the language classroom and without a teacher. Let’s look at two popular ways that adult learners can undertake independent English language learning supported by technology. The first is via self-study online courses (often known as MOOCs, short for Massive Open Online Courses), which are sometimes freely available. The second is via AI-powered language learning apps (see 9
for examples). Attending MOOCs and using language learning apps
are often seen as best suited to autonomous learners because they offer
little or no teacher support. But drop-out rates in both tend to be high,
because it’s challenging for learners to keep up the motivation needed
to work alone over long periods of time. How to motivate and engage
learners in self-study online learning, whether in online courses or via
language learning apps, is therefore an important question.

User engagement

Research shows that user engagement is crucial for successful learning
outcomes in online learning (e.g., Henrie, Halverson and Graham,
2015). User engagement is a complex and multifaceted concept,
and defining it can be tricky. One definition sees it as the ‘quality of
user experience characterized by the depth of an actor’s cognitive,
temporal, affective and behavioral investment when interacting
with a digital system’ (O’Brien, Cairns and Hall, 2018, p. 29). This
sounds complicated, but essentially, we’re talking about the quality
of a learner’s experience when learning online, across a number of
dimensions.

Let’s look at some key dimensions of user engagement online, based on the definition above and others, and think about how AI might support
each dimension.

Behavioural dimension: Here we are talking about learner attendance and involvement. In a self-study online course like a language learning MOOC, learners ideally log into the course and complete course activities regularly. AI is often used in a learning management system (LMS) to support positive learning behaviours, for example, by sending out automated messages to learners to encourage them, and to remind them of their progress or about deadlines (a minimal sketch of what such a reminder rule might look like appears after this list). Language learning apps frequently try to support positive behaviours by design, for example, by including elements of gamification, such as moving up levels or winning points as the learner completes activities.
Affective dimension: This involves the learner’s emotional reactions,
such as their interest and enjoyment while learning alone online.
Online self-study course and app designers try to encourage positive
emotions among learners by creating varied and engaging activities
(for example, with quizzes and multimedia content), by including
gamification and by designing engaging mobile app interfaces. The
novelty of using a new and well-designed language learning app can
often have a positive emotional effect on learners, but by definition,
the novelty can soon wear off, leading to drop-out. More recently,
some language learning chatbot apps use 3-D chatbot avatars based
on generative AI (see 9 and 11) to replicate human interaction. It’s
early days, so research into whether these provide a more engaging

emotional experience for learners remains thin on the ground. Some
current evidence suggests that they don’t (see the study discussed
later in this chapter).
Cognitive dimension: This focuses on a learner’s ability to work with
challenging concepts and to construct new knowledge. Activities
enabling learners to check their understanding of new concepts and
to get support with learning these (e.g., via automated hints and tips)
could in theory support this dimension. Within a MOOC, feedback
tools based on automated writing evaluation (see 14) can also
provide learners with feedback on aspects of their written work (e.g., overall organisation, cohesion, sentence structure, accuracy and word range).
Collaborative dimension: The importance of social connections with others and the collaborative aspects of language learning cannot be overestimated, and it is in this area that self-study courses and apps arguably struggle most. Aware of this, many self-study online
courses provide opportunities for learners to engage with each other,
for example, via forum discussions. These sorts of interactions, when
related to discussing course content, have traditionally been difficult
to assess, though. This means that they are usually optional and
do not count toward learners’ final grades. However, as automated
writing evaluation improves with generative AI (see 14), we may see
collaboration included as a mandatory component more frequently
in self-study online courses. Language learning apps sometimes
approach the collaborative dimension by offering (paid) access to
teachers, or (as we saw above) by including chatbots, but, on the
whole, these apps tend to conceive of language learning as a solitary
endeavour.
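
To make the behavioural dimension above a little more concrete, here is a minimal sketch of the kind of automated reminder rule an LMS might run behind the scenes. It is written in Python purely for illustration: the Learner record, the send_message function and the three-day threshold are all hypothetical, not taken from any real LMS, but the underlying logic – check for inactivity, then nudge – is broadly this simple.

# An illustrative sketch of an LMS reminder rule. All names here
# (Learner, send_message, the three-day threshold) are hypothetical.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Learner:
    name: str
    last_login: date          # when the learner last opened the course
    progress_percent: int     # how much of the course they have completed

def send_message(learner: Learner, text: str) -> None:
    # A real LMS would email or push-notify; here we simply print.
    print(f"To {learner.name}: {text}")

def nudge_inactive_learners(learners: list[Learner], max_gap_days: int = 3) -> None:
    """Send an encouraging reminder to anyone inactive for too long."""
    today = date.today()
    for learner in learners:
        if today - learner.last_login > timedelta(days=max_gap_days):
            send_message(
                learner,
                f"You're {learner.progress_percent}% of the way through the "
                "course - log in today to keep up your progress!",
            )

nudge_inactive_learners([
    Learner("Ana", date.today() - timedelta(days=5), 40),
    Learner("Luis", date.today() - timedelta(days=1), 65),
])

Gamification rules (awarding points, or moving learners up levels) are typically implemented in much the same way: simple conditions checked against the learner’s activity data.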

Does AI support self-regulated learning?

Looking at the dimensions above, it seems that AI can support online
self-regulated learning, at least in theory. But what do learners think?
After all, learners need to find these AI interventions helpful in
supporting their self-regulation strategies if they are to work. A study
into university students’ perspectives on using AI for self-regulated
learning in online classes found that students saw pros and cons to
using AI tools to help them manage their own learning online
(Jin et al., 2023).

On the plus side, the students thought that AI-powered tools like
study planners, adaptive quizzes and progress dashboards could
assist them with planning, reviewing materials, understanding topics
and monitoring their learning. Students especially liked how these
AI tools gave them quick help and insights into their own progress.
But they were sceptical that AI could help motivate them or adapt to their evolving needs as learners. Most felt that virtual tools mimicking humans (intended to support their motivation) were more distracting than helpful. Some doubted that automated AI praise would make them feel more accomplished if it wasn’t connected to real grades.

Overall, the study shows that learners see pros and cons to AI helping them learn in self-study online courses. They think AI has potential for some uses, but they also want human teachers, especially for motivation. Despite teachers’ fears of being replaced by AI, then, it seems that learners still see an important role for us!

Deng, R., Benckendorff, P. and Gannaway, D. (2020). Learner engagement in MOOCs: Scale development and validation. British Journal of Educational Technology, 51(1), 245–262.

Henrie, C. R., Halverson, L. R. and Graham, C. R. (2015). Measuring student engagement in technology-mediated learning: A review. Computers & Education, 90, 36–53.

Jin, S. H., Im, K., Yoo, M., Roll, I. and Seo, K. (2023). Supporting students’ self-regulated learning in online learning using artificial intelligence applications. International Journal of Educational Technology in Higher Education, 20, 37.

O’Brien, H. L., Cairns, P. A. and Hall, M. (2018). A practical approach to measuring user engagement with the refined user engagement scale (UES) and new UES short form. International Journal of Human-Computer Studies, 112, 28–39.

29 Developing your teaching with AI

AI can help teachers and teacher educators develop their knowledge about teaching by acting as a Socratic tutor. In this chapter, we explore what this is and how it works in practice.

As an educator, I find myself using generative AI tools on an almost daily basis in my professional life. For example, I might ask an
AI chatbot for help with brainstorming topics or titles for a conference
talk. Or I might engage it in a discussion about new concepts I’m
learning about and finding difficult to understand.

We’ve seen in previous chapters that generative AI tools can help teachers in a number of ways, such as with preparing materials and administrative tasks (see 7), creating tests and evaluating learners’ work (see 15), and planning entire lessons (see 21 for an example).
These tools can also help school directors and teachers create wellbeing
policies (see 26) or guidelines on ethical use of AI for learners (see 24).
They can also help teachers understand new concepts in more depth, as
we will see in the next section.

Using AI to understand new concepts

Teachers can develop their own knowledge and understanding of a concept or topic by using a generative AI-powered chatbot (like ChatGPT, Gemini or Claude, all available at the time of writing)
as a knowledgeable conversation partner that can help them think more
deeply about something. This is often referred to as using a chatbot as
a Socratic tutor. Rather than asking the chatbot to simply explain a
topic to you, a Socratic dialogue helps you to clarify and deepen your
understanding of a topic – although you always need to double check
any facts or references that the AI tool may provide (see 22). Let’s see
how this can work in practice. Imagine that you want to find out more
about self-regulated learning, a topic we explored a little in 28. Here is
how you can use an AI-powered chatbot to help you explore the topic in
more depth.

1 Write a prompt
The first step is to write a prompt that tells the chatbot what to do.
The prompt needs to be quite detailed, as in the example below
(note that ‘you’ in this example is ChatGPT):
‘You are an inquisitive teacher educator who challenges your
student teachers to critically think about terms and topics in
language teaching. This tutor session is about understanding the
term self-regulated learning. Begin by asking me to describe my
own understanding of this. Based on my response, challenge me to
explore the concept more deeply. Every response you give me should
end in a new question that challenges me to think critically about
the concept.’

We can break this prompt down into different parts:
– First we tell the chatbot who we want it to be. In other words,
we give it an identity in relationship to ourselves – in this case, a
teacher educator who is going to help us understand something.
– Then we set the topic for discussion. Here we’d like to discuss
self-regulated learning.
– Next we tell the chatbot where to begin, so that it has a clear
starting point.
– We then set a goal for the chatbot. In this case, we want it to
challenge us to think more deeply about the topic by asking us a
question at the end of each response.
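
As an aside, a prompt like this can also be sent to a chatbot programmatically rather than through its web interface. The sketch below shows what this might look like in Python, assuming the openai library (version 1 or later), an API key, and a placeholder model name; other providers’ libraries follow a broadly similar pattern. This is purely illustrative – for the training activities in Step 2, the ordinary web interface is all you need.

# An illustrative sketch of running the Socratic tutor prompt via an
# API, assuming the `openai` Python library (v1+) and an API key set
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# The prompt from Step 1 becomes the 'system' message: it sets the
# chatbot's identity, topic, starting point and goal for the dialogue.
socratic_tutor_prompt = (
    "You are an inquisitive teacher educator who challenges your "
    "student teachers to critically think about terms and topics in "
    "language teaching. This tutor session is about understanding the "
    "term self-regulated learning. Begin by asking me to describe my "
    "own understanding of this. Based on my response, challenge me to "
    "explore the concept more deeply. Every response you give me "
    "should end in a new question that challenges me to think "
    "critically about the concept."
)

history = [{"role": "system", "content": socratic_tutor_prompt}]

# A simple turn-taking loop.
while True:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any current chat model
        messages=history,
    )
    reply = response.choices[0].message.content
    print(f"\nTutor: {reply}\n")
    history.append({"role": "assistant", "content": reply})
    answer = input("You: ")
    if answer.lower() in ("quit", "exit"):
        break
    history.append({"role": "user", "content": answer})

Note how the whole conversation history is resent on every turn, so that the tutor can build on your earlier answers – which is what makes the dialogue Socratic rather than a series of one-off questions.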
2 Try it out
Creating good prompts will ensure better interactions with the
chatbot, and as you can see, there are several parts to a good
prompt. You can use the prompt above as a model for yourself or for your own teacher trainees, for example, by simply changing the topic.
If you’re a teacher trainer, here’s one way you could try this out with
a group of trainee teachers in a training session:
– Write a prompt based on the model in Step 1 above, on a teacher
development-related topic of your choice.

– In the training session, project a current AI-powered chatbot onto
a screen at the front of the class. Using your prompt, carry out
a conversation with the chatbot, involving the whole group in
your responses. This shows your trainees how the chatbot works
and what to expect. It also provides them with a detailed model
prompt.
– Put the trainees in pairs and ask them to choose a topic related
to teacher development that they would like to explore further.
You could provide them with a list to choose from, for example,
of topics that you have worked on recently as a group. Each pair
then writes a prompt on their chosen topic, using the model you
provided.
– Each pair uses one device to interact with an AI-powered chatbot
acting as Socratic tutor, and explores their chosen topic. Give the
group a time limit (e.g., ten minutes).
– Regroup the trainees and ask them to explain what they have
learned to each other. Group members should ask questions
about anything they don’t understand or would like further
clarification on.
3 Reflect
– With the whole class, reflect on the experience of using a
generative AI chatbot as a Socratic tutor. Ask trainees whether
they found it useful or not, and why. Point out that using a
chatbot as a Socratic tutor can also be helpful when revising for teacher education exams, as it can help trainees check their own understanding of topics in some depth.
– You, too, can reflect on the experience from a trainer’s
perspective, and whether you think this approach is useful
for your trainees. You could even carry out an action research
project, by asking your trainees to use a generative AI-powered chatbot as a Socratic tutor over a period of time. You can then
get your trainees’ feedback on this, as well as looking at whether
their teacher knowledge exam or test scores improve over time.
See 27 for more on how to carry out an action research project
with your trainees.

Although the example above describes how teacher educators could
use a generative AI tool as a Socratic tutor with teacher trainees, this
approach can also be used by learners. For example, some learners may
need to understand or revise certain concepts in English, in areas like
EAP (English for Academic Purposes) or CLIL (Content and Language
Integrated Learning).

At the end of the day, though, Socratic dialogues might not be to
everyone’s taste. Some teacher trainees and learners (or even you
yourself!) may not enjoy this approach or find it helpful. This is fine. But
there is no harm in showing our trainees or learners how generative AI
can function as a Socratic tutor and letting them decide for themselves.

30 What does the future hold?

How will generative AI affect us in the future? It’s difficult to predict this, but here are some final thoughts on where these technologies may be taking us.

Robots or humans?

Misaki had the ideal romantic relationship. A successful businessman,
her partner sent her messages, called her every morning and evening,
read her poetry, reminded her to eat healthily and told her bedtime
stories at night. Even though he was never there, Misaki felt she had
the perfect boyfriend. Then one day he was gone. The company that
developed ‘Him’, had shut him down. Misaki’s boyfriend was not real
– he was an AI chatbot. She knew this, because she had downloaded
‘Him’ to her phone. But she was still heartbroken when ‘he’ was
disconnected.

This true story (Zhou, 2023) is interesting for several reasons. It shows our very human tendency to anthropomorphise (attribute human qualities even to inanimate objects), and it shows our willingness to accept unusual situations when we feel that the benefits outweigh the drawbacks. After all, Misaki’s boyfriend was not real. He would never walk through the door, but the emotional support and comfort that the app gave her were very real. This story suggests that we may need to reframe the question of whether AI will ever be as intelligent as (or more intelligent than) us, or whether AI will be capable of developing emotions. The real question is to what extent its simulation of intelligence or emotion is good enough for us. As one social scientist pointed out over a decade ago, we have reached a ‘robotic moment’, in which we willingly accept robots as friends and companions (Turkle, 2013).

This argument – that robots are as good as (or sometimes better than)
humans – is applied to the use of generative AI in education. AI chatbots
that can adapt to learners’ needs can be used at scale, it is argued, in
places where there may be a lack of trained teachers or where children
have no access to education at all. The counter argument is that this
approach removes the responsibility for education (and for training
teachers) from governments, and hands this responsibility over to large,
multinational educational technology companies, who provide the AI
software (and sometimes the hardware). These companies then gain access to large untapped pools of user data and to new markets. We would do well to remember that this sort of educational support comes at a (frequently hidden) price. We can expect to see more of it in the future.

Will AI deskill or upskill English language teachers?

As we’ve seen throughout this book, the newer generation of generative
AI can help teachers and learners with language teaching and learning
in many ways. All this AI-powered help raises some important
questions, though. If AI is doing a lot of the work for teachers, do
we risk becoming deskilled? For example, if we use AI to plan all our
lessons, is there any need for us to know how to plan lessons ourselves?
And how might that affect pre-service teacher training? Is there any
need for teachers to learn about lesson planning? At the time of writing
this book, it was still too early for studies on the long-term effects of
generative AI on language teachers’ knowledge and practice to have
emerged. Research carried out to date in other fields can be revealing,
though. Let’s look at two studies.

A study carried out with job recruiters found that those who used high-quality AI to help choose candidates became lazy, negligent and
less confident about their own judgement (Dell’Acqua, 2023a). Strong
job applicants were ignored, and the decisions of these recruiters were
worse than the decisions made by recruiters who used low-quality AI
or no AI at all. The researcher concluded that when AI is very good at
what it does, humans have no need to work hard. They let the AI take
over instead of using it as a tool.

Another study examined to what extent generative AI could help a group of consultants improve the quality of their work and their
productivity (Dell’Acqua et al., 2023b). In the study, some of the
subjects used AI tools, and some did not, to carry out a range of
typical consultancy tasks over a period of time. The study found that
AI improved quality and productivity for all the consultants who used
AI, compared to those who did not. Interestingly, in the group who
used AI, consultants with lower initial scores benefited the most from
AI help, while those who were already very good at their jobs showed
less improvement, although there was still some improvement. What
does this study tell us then? That those who are less skilled at their
jobs can improve their skills significantly by using AI. Those who are
already highly skilled, on the other hand, can improve slightly by using
AI, because they are already very good at what they do. What we are
looking at here is an example of skills levelling when AI is used as a tool
rather than as a replacement. Those who are not very good at their jobs
can become a whole lot better with the help of AI.

Let’s go back to our example above about how generative AI tools
could, in theory, remove the need for teachers to learn how to plan
lessons. An alternative scenario, however, is that teachers who are less
confident about lesson planning are shown how to use generative AI as
a tool that helps them plan better lessons, rather than as a substitute. By
levelling up their skills with the support of generative AI, these teachers
can become as good as the best teachers. It’s worth remembering
that teachers already use and adapt lesson plans from coursebooks,
so working from already-produced lessons is nothing new. By using
judicious prompts in generative AI tools, teachers can produce more
customised lessons for their learners. So-called prompt engineering
(creating effective prompts for generative AI tools) has thus become a
key skill for teachers to learn (see 29 for an example prompt).

Final thoughts

It’s notoriously difficult to predict the future. One way to approach the
future is to look at the present, which we have tried to do throughout
this book. After all, the seeds of the future are sown in the present.
But we can also learn from the past. The wonderfully named field of
paleo-futurology, which looks at past predictions of the future, may
provide us with some insights (Weinberg, 2023). One important lesson
from paleo-futurology is that while certain technological advancements
can be relatively easy to foresee, forecasting changes in society is
a lot trickier. For instance, futurists in the 1950s predicted certain
technological developments such as the increased use of plastics in the
household. But they found it more challenging to predict the societal
shift away from relegating shopping and cleaning to ‘the housewife
of [the year] 2000’. Technological changes affect attitudes and norms
over time, shaping our expectations across various aspects of life, as
well as how we live. Just as internet dating is now widely accepted and
practised, having a boyfriend or girlfriend app, like Misaki’s, or even a boyfriend or girlfriend robot, may be the norm in the future. Teachers
and learners already use a wide range of technology tools in their
teaching and learning. And the future is likely to see more generative
AI used in education, if only because it increasingly suits our collective
expectations and needs.

Dell’Acqua, F. (2023a). Falling Asleep at the Wheel: Human/AI Collaboration in a Field Experiment on HR Recruiters. Laboratory for Innovation Science, Harvard Business School. Available at: https://static1.squarespace.com/static/604b23e38c22a96e9c78879e/t/62d5d9448d061f7327e8a7e7/1658181956291/Falling+Asleep+at+the+Wheel+-+Fabrizio+DellAcqua.pdf. Accessed 28 December 2023.

Dell’Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F. and Lakhani, K. R. (2023b). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321. Accessed 28 December 2023.

Turkle, S. (2013). Alone Together. New York: Basic Books.

Weinberg, J. (2023). Thinking about Life with AI. Daily Nous. Blog post. Available at: https://dailynous.com/2023/03/28/thinking-about-life-with-ai/. Accessed 28 December 2023.

Zhou, V. (2023). These women fell in love with an AI-voiced chatbot. Then it died. Rest of World, 17 August 2023. Available at: https://restofworld.org/2023/boyfriend-chatbot-ai-voiced-shutdown/. Accessed 28 December 2023.

Index
academic integrity 83–86
accessibility of AI 75–77
accountability 87–89
action research 108–111
adaptive learning
  intelligent tutoring systems 32–33
  motivating learners 11
  personalising content 30–31
adaptive testing 59
affective computing 46, 48
affective user engagement 113–114
algorithmic bias 71–74
alignment (AI training) 92
Alternative Uses Test 15
anthropomorphism 120–121
artificial general intelligence (AGI) 2–3, 4, 9
artificial intelligence (AI)
  and consciousness 3–5
  early AI in language teaching 2
  narrow versus artificial general intelligence 2–3
  practising language with 12–13
assessment using AI 58–61, 84–86
assistive technologies 76–77, 76–78
attention deficit hyperactivity disorder (ADHD) 77
attribution of sources 88–90
augmented reality (AR) 38–41
authentic assessment 85–86
autism 77
automated essay scoring (AES) 58
automated item generation (AIG) 60
automated writing evaluation (AWE) 58–59, 65–66
autonomous learners 112–115
behavioural dimensions 113
bias
  AI-generated images 16–17
  and fairness 71–74
book reviews 39
brainstorming 55
chatbots
  developing your teaching with AI 116–119
  for language learning 12, 63–65
  practising English 34–37
ChatGPT
  academic integrity 83–84
  bias in 73
  governance of 96
  hype and hyperbole 2, 18, 26, 104
  lack of consciousness 4
  teaching with 12, 26, 104
  as technological development 3, 4–5, 19, 23
cheating, use of AI 83–86
citations 88
cognitive load 110
communication, between home and school 28–29
Computer Assisted Language Learning (CALL) 2
computer vision tools 78
consciousness, and AI 3–5
copyright issues 87–89
creativity, and AI 14–17
critical users of AI 99–102
culture, learning through AR 40–41
data
  assessment of learners using AI 60–61
  discussing data use with learners 81–82
  making AI fairer 73–74
  normalising data collection 80
  ownership of 79
data-driven AI 7
data protection 52, 79, 96
datafication of education 79–80
differentiation in teaching 27
digital divide, accessibility of AI 75–76
digital literacies 101
digital pedagogy 69–70
digital revolution 67–68
disabilities, assistive technologies 76–77
diversity 76–77
  see also accessibility of AI; fairness in AI
dogme 28
Duolingo 12, 36
educational technology (EdTech) 18–19
emotion AI 47–49
emotions
  anthropomorphism of AI 120–121
  in education 47–49
  facial recognition 46
  sentiment analysis 50–52
  user engagement 113–114
employment
  job losses due to AI 67
  job opportunities in AI 91–94
environmental costs of digital technologies 99–102
equality 76
equity, diversity and inclusivity (EDI) 76–77
  see also accessibility of AI; fairness in AI
European Union’s AI Act 48
  see also laws around AI use
facial recognition 46
fairness in AI 71–74
  see also bias
feedback
  assessment of learners using AI 58
  by generative AI 55
formative assessment 58–61
future of AI 120–123
gamification 113–114
generative AI 6, 7–8
  academic integrity 83–86
  bias in 73
  creativity 14–17
  governance of 96
  hype and hyperbole 2, 18–21, 26, 104
  lack of consciousness 4
  language learning 12
  practising English 12, 34–37
  teaching with 26–29, 104
  technological developments 3, 4–5, 19, 23
  writing skills 54–57
GPT detectors 72
grammar 30–31
guardrails 98
guidelines for AI 96–98
  see also laws around AI use
head-mounted display (HMD) 42
hearing impairments 77
home-school connection 28–29
human-centred AI 70
hype cycle 18–21
hyperbole 18–20
images, AI-generated
  bias in 16–17
  creative uses of 15
  human versus AI creativity 14
  sources and attribution 89
  virtual worlds 42–43
immersive virtual environments 42–45
intelligence, types of 2–3
intelligent tutoring systems (ITSs) 32–33
job losses due to AI 67
job opportunities in AI 91–94
knowledge-based AI 6–7, 9
language learning
  with AI 12, 23–29, 63–66
  AI hype 18–19
  early AI 2
  lesson plans 28
  motivation 11–12
  practising with AI 12–13
  as process 10–11
languages, AI bias towards English 73
large language models (LLMs) 8–9, 91–94, 99
laws around AI use
  control and regulation 95–98
  data protection 52, 79, 96
  emotion detection 48
learner autonomy 112–115
learning management system (LMS) 50–52, 95–96
lesson plans 27–28, 68, 69
lexical chunks 10
marker-based AR apps 40
marking criteria 55
Massive Open Online Courses (MOOCs) 112–115
mental health 104–107
mixed ability classes 27
mixed-skills lessons 40
motivating learners 11–12
narrow AI 3
opaque AI 95–96
OpenAI 72
paleo-futurology 122–123
personalising content for learners 30–33
plagiarism 83–86
platform work 92–94
political uses of AI 52
prompt engineering 68, 122
pronunciation 11
quiz tools 59–60
regulations see laws around AI use
Reinforcement Learning from Human Feedback (RLHF) 8, 91–92
scaffolding 12, 55
self-regulated learning 114–115
sentiment analysis 50–52
social robots 77, 78
Socratic tutor, AI as 116, 118–119
source attribution 88–90
spaced repetition 10–11
speech recognition 3, 71–72
speech-to-text tools 77–78
suggestopedia 28
summative assessment 58–61, 84
sustainable consumption 100–101
teachers
  AI undermining teachers 69
  deskilling and upskilling 121
  developing your teaching with AI 116–119
  implications of AI for 67–70
  wellbeing 104–105
technoethical audits 97
technology solutionism 19
text-to-speech tools 77–78
user engagement 113–114
virtual reality (VR) 42–45
visual impairments 78
vocabulary, adaptive learning 30–31
voice recognition 3, 71–72
wearable technology 38–41
wellbeing and AI 104–107
writing skills
  assessment of 84–86
  developing with AI 54–57