
Higher Education

https://doi.org/10.1007/s10734-024-01288-w

Generative AI chatbots in higher education: a review of an emerging research area

Cormac McGrath1 · Alexandra Farazouli1 · Teresa Cerratto‑Pargman2

Accepted: 9 August 2024


© The Author(s) 2024

Abstract
Artificial intelligence (AI) chatbots trained on large language models are an example of
generative AI, which brings both promise and threats to the higher education sector. In this
study, we examine the emerging research area of AI chatbots in higher education (HE),
focusing specifically on empirical studies conducted since the release of ChatGPT. Our
review includes 23 research articles published between December 2022 and December
2023 exploring the use of AI chatbots in HE settings. We take a three-pronged approach to
the empirical data. We first examine the state of the emerging field of AI chatbots in HE.
Second, we identify the theories of learning used in the empirical studies on AI chatbots
in HE. Third, we scrutinise the discourses of AI in HE framing the latest empirical work
on AI chatbots. Our findings contribute to a better understanding of the eclectic state of
the nascent research area of AI chatbots in HE, the lack of common conceptual groundings
about human learning, and the presence of both dystopian and utopian discourses about the
future role of AI chatbots in HE.

Keywords AI chatbots · Generative AI · Large language models · Discourses · Theories of learning

Introduction

Technological developments may act as catalysts for educational development, but they can
also lead to inflated hype and both dystopian and utopian discourses (Bearman et al., 2022).
Examples of such developments in education are plentiful and range from the emergence of
the calculator in the 1950s and its widespread use in schools (Banks, 2011), to more recent
phenomena in higher education (HE), such as Massive Open Online Courses (MOOCs)
(Barman et al., 2019). Generative artificial intelligence (GAI) using large language models
(LLMs) is a recent example of a technological development that brings promise but also
fear and concern in education. The desire to deploy AI in education, specifically GAI, can

* Cormac McGrath
[email protected]

1 Department of Education, Stockholm University, Stockholm, Sweden
2 Department of Computers and Systems Sciences, Stockholm University, Stockholm, Sweden


be seen against the backdrop of societal progress and a pervasive trend towards digitising
core societal sectors such as education (McGrath & Åkerfeldt, 2019).
While HE institutions and students may wish to further understand the outcomes of
engaging with GAI chatbots, there is a lack of consolidated knowledge about their impact
on HE practices and student learning. Consequently, understanding the potential of emerg-
ing technologies for educational practices may be difficult for university teachers (Ortegón
et al., 2024). To address this knowledge gap, this review collects and examines empiri-
cal studies about the use of GAI chatbots in HE settings. As GAI chatbots have been sur-
rounded by inflated expectations (Fütterer et al., 2023), there is still concern in the HE sec-
tor about the changes such chatbots may bring to students’ learning and teachers’ practices.
Reviews play an essential role in synthesising knowledge on a topic, preventing the dupli-
cation of research efforts, and providing additional insights through the comparison and/or
combination of individual pieces of research (Petticrew & Roberts, 2008).
The research questions guiding this review are as follows:
RQ1: What is the current state of empirical research on GAI chatbots in HE?
RQ2: What theories of learning underpin the studies of GAI chatbots?
RQ3: What discourses about AI are found in the literature?

Previous research

To date, we identify three existing review studies examining the impact of GAI chatbots
(Alemdag, 2023; Ansari et al., 2023; Wu & Yu, 2023). Wu and Yu (2023) and Alemdag
(2023) conduct meta-analyses on empirical studies and collect data on effect sizes irre-
spective of domain, discipline, and contextual setting in primary, secondary, and tertiary
education settings. Alemdag’s (2023) study suggests paying more attention to how chatbots
provide a conversational exchange and environment, while Wu and Yu (2023) suggest that
future studies should focus on both learning outcomes and the negative impact of GAI
chatbots on students’ learning due to the scant research on this topic. Wu and Yu (2023)
also find no significant differences between chatbot groups and control groups concerning
learning engagement, confidence, motivation, and performance. Similarly, Alemdag (2023)
finds no significant difference between experimental and control groups on vocabulary
learning and reading skills in English as a Foreign Language (EFL) education. Ansari et al.
(2023), however, present a systematic review examining HE, including both conceptual
and empirical work to map the global evidence of chatbots’ effects. They find that students
use ChatGPT as a personal tutor for various learning purposes but report “concerns related
to accuracy, reliability, academic integrity, and potential negative effects on cognitive and
social development identified in the selected articles” (Ansari et al., 2023, p. 1).
All three studies are valuable contributions in different ways, but all are limited due to
the scope of their engagement with empirical studies or in their choice of educational set-
ting. Therefore, there is a need to engage with empirical research further to study the cur-
rent state of the emerging field of GAI chatbots in HE specifically. To this end, this study
conducts a review focusing on empirical studies conducted since the release of ChatGPT
in 2022. It includes research articles on GAI chatbots in HE settings that were published
between December 2022 and December 2023, representing 1 year of academic output.
Taking a three-pronged approach to the empirical data, this study (1) presents and exam-
ines the current empirical work in the emerging field of GAI chatbots in HE; this is done
to offer an overview of the empirical work in this emerging field, (2) identifies the theories
of learning used in the studies and argues that learning theories may be used to frame
how new technologies mediate learning, and (3) scrutinises the discourses of AI (Bearman
et al., 2022), arguing that discourses on AI configure specific understandings of the impact
of GAI chatbots on student learning and teaching (Cerratto Pargman et al., 2024).

The emergence of GAI chatbots

GAI has emerged as a result of technological developments in machine learning and the
development of LLMs that utilise natural language processing. In broad terms, GAI is a
type of AI based on unsupervised or self-supervised machine learning models pre-trained
on certain datasets that can generate original texts, images, and sounds. LLMs are artificial
neural networks that process and generate natural language text and mimic human-created
text (Wei et al., 2022). Using deep learning algorithms, LLMs learn the form (i.e., pat-
terns and structures) of language from massive amounts of textual data and then use that
information to generate new text based on prompts or inputs (Jurafsky & Martin, 2023).
Chatbots date back to the 1960s, when ELIZA, developed at MIT, first simulated a con-
versation (Weizenbaum, 1966). Chatbots are now extensively used in language learning for
commercial purposes and in educational contexts. The introduction of transformer architec-
ture in 2017 (Vaswani et al., 2017) heralded significant developments in natural language
processing (NLP). This architecture, which underpins modern LLMs, was designed to process
large amounts of textual data efficiently and perform a wide range of complex language tasks. Since 2018,
several generative pre-trained transformer (GPT) models have been introduced, and each
iteration has been more powerful than the last. The original GPT was trained on a mas-
sive corpus of text data and could generate text in various styles and genres. This was fol-
lowed in 2019 by GPT-2 (Radford et al., 2019), which was an even larger and more pow-
erful model that could produce more humanlike text. This model was followed by GPT-3
in 2020, which, with 175 billion parameters, could be trained to
perform new tasks with just a few examples of labelled data (Brown et al., 2020). Each
incremental development means that AI chatbots can now offer responses that mimic natu-
ral language (Jurafsky & Martin, 2023).

Learning theories in GAI chatbot research

Drawing on Khalil et al. (2023), this study considers the importance of learning theory
in the scientific output on emerging GAI chatbots and their impact in HE. Khalil et al.’s
(2023) scoping review examines the use and application of learning theories in learn-
ing analytics and explores the theoretical points of departure as well as the ontological
and epistemological assumptions. Positioning an empirical study in a broader theoretical
framework may help the HE research community understand how empirical studies can
increase the transferability of findings to other settings. Empirical work does not, as previously
noted, occur in decontextualised settings, and understanding the context in which
it takes place may be important in order for others to better understand and transfer the
findings. Here, theories can help to situate the work both ontologically in understanding the
difference between hype and substantiated claims about the potential impact of GAI chat-
bots in HE and epistemologically in how the value of the findings is perceived. Previous
calls for an increase in the use of theory, made in various educational contexts,
have sometimes led to a greater cosmetic use of theories of learning (McGrath et al., 2020).
As the impact of GAI chatbots in HE is currently an emerging research area, this paper
argues for greater consideration of whether and how learning theories inform the design of
the studies conducted, and how they are used.

Discourses of AI

This study draws on Bearman et al.’s (2022) critical review of how AI is used in the HE
literature. In their study, following Gee (2004), the authors conduct a discourse analysis to
interpret references to AI in terms of situated meanings, social language, intertextuality,
figured worlds, and conversations. The authors find that AI is often associated with change
and expressed by either dystopian or utopian discourses. They report that few articles
provide clear definitions of AI, and most associate it with either technological concepts
or artefacts. Two predominant discourses are identified: (1) the discourse of imperative
response and (2) the discourse of altering authority. The discourse of imperative response
portrays AI as a force of unprecedented and inevitable change that requires HE to either
resist it or adapt to it. This discourse is portrayed either as dystopia-is-now, where universi-
ties need to resist AI-forced change, or utopia-just-around-the-corner, where institutions
need to respond positively. The authors reason about how these discourses construct dif-
ferent identities and perspectives for institutions, staff, and students along a spectrum of
dystopian and utopian views. The discourse of altering authority articulates how AI chal-
lenges the locus of authority and agency within HE. Bearman et al. (2022) discuss how this
discourse depicts the shifting power dynamics between humans and machines and between
actors such as teachers, students, corporations, and universities. More specifically, they
explain that the altering authority discourse reflects how AI challenges the role and pur-
pose of teachers, as AI is to “invest the authority of the teacher into the technology” (Bear-
man et al., 2022, p. 378), as well as how students can exist and function in an AI-embedded
educational reality. The dystopia-is-now and utopia-just-around-the-corner duality is pre-
sent in this discourse, too, representing harmful agency loss and benevolent enhancement,
respectively.
This study examines the extent to which these discourses are reflected in the empirical
literature on GAI chatbots in HE. Identifying and understanding discourses is central in the
study of an emerging research field. Discourses, considered fusions of text and social
materiality, reflect how the written word is simultaneously a product
and a producer of social practice (Kivle & Espedal, 2022).

Method

In order to answer the research questions, a modified rapid review (Grant & Booth, 2009) of
generative AI chatbot research was conducted, covering 1 year of academic output (Dec 2022–Dec
2023). The review included original research articles published in the 31 highest impact
journals as listed in Google Scholar (top 20 h5-index), SCImago Journal Rank (≥ 1700
Q1 SJR), and Journal Citation Reports by Clarivate (≥ 5.9 JIF). More specifically, this
included HE journals but excluded those that were discipline-specific (e.g., language
learning, STEM education), as well as education psychology journals and educational
research methodology journals (see Table 1). The top 31 journals focusing on HE prac-
tices, teachers, and students were included.

Table 1  Alphabetical list of journals and articles included in the study

1. Active Learning in Higher Education – −
2. Assessment & Evaluation in Higher Education – Farazouli et al., 2023
3. British Journal of Educational Technology – −
4. Computers and Education – −
5. Computers and Education: Artificial Intelligence – Habibi et al., 2023; Rodway et al., 2023; Yilmaz and Karaoglan Yilmaz, 2023; Pursnani et al., 2023; Li et al., 2023; Lai et al., 2023; Kohnke et al., 2023; Hallal et al., 2023; Dakakni et al., 2023; Bernabei et al., 2023
6. Education and Information Technologies – Maheshwari, 2023; Mohamed, 2023; Yan, 2023; Zou et al., 2023; Guo and Wang, 2023
7. Higher Education – −
8. Higher Education for the Future – −
9. Higher Education Quarterly – −
10. Higher Education Research & Development – −
11. Innovations in Education and Teaching International – Al-Zahrani, 2023; Khosravi et al., 2023
12. International Journal of Computer-Supported Collaborative Learning – −
13. International Journal of Educational Technology in Higher Education – Escalante et al., 2023; Chan et al., 2023; Barrett et al., 2023
14. International Journal of Higher Education – −
15. International Journal of Sustainability in Higher Education – −
16. Internet and Higher Education – −
17. Journal of Applied Research in Higher Education – Jafari et al., 2023
18. Journal of Computing in Higher Education – −
19. Journal of Diversity in Higher Education – −
20. Journal of Further and Higher Education – −
21. Journal of Higher Education Policy and Management – −
22. Journal of International Students – −
23. Journal of Studies in International Education – −
24. Journal of University Teaching & Learning Practice – −
25. Research in Higher Education – −
26. Studies in Higher Education – −
27. Teaching in Higher Education – −
28. The Internet and Higher Education – −
29. The Journal of Higher Education – −
30. The Review of Higher Education – −
31. Trends in Higher Education – Schwenke et al., 2023

Only empirical studies conducted between December 2022 and December 2023 were
selected, i.e. those conducted since the release of ChatGPT. However, the study was open to all GAI
chatbots and not restricted to any version of GPTs. While this review may be instrumental
in helping the research community to take stock of the knowledge gained so far and identify
underexplored areas of research, it may, more importantly, offer nuance in a time of hyperbole
and hype (Bearman et al., 2022). No other such reviews were found. The paper will now detail
the four-step review process followed.
Step 1 (determine initial search question and field of inquiry): The scope of the review was
defined by specifying the questions to address and the focus on HE journals only. Initial con-
ceptual and definitional work was chosen as the basis of the review’s conceptual framework.
Previous review work on addressing the impact of chatbots across broader educational settings
was identified, but a decision was made to target HE specifically.
Step 2 (selection criteria): The scope was restricted to studies published between December
2022 and December 2023, drawn from the 31 most prominent and influential HE journals
(based on Google Scholar, SCImago Journal Rank, and Journal Citation Reports by Clari-
vate). The search specified relevant sources (higher education) and used essential keywords
(chatbots, generative pre-trained transformers, GPT, generative artificial intelligence (GAI)).
Step 3 (data extraction): The empirical studies conducted within HE were identified, and
descriptive codes for variables such as country, study design, outcome metric/type, cohort
size, subject, and stakeholders were assigned. A total of 23 articles were included in the study
(see Fig. 1).
Step 4 (data analysis): This involved a three-step process. First, the study design, stakehold-
ers, location, cohort size, research questions, and main findings were descriptively mapped
across the corpus. The data were then analysed thematically, identifying the stakeholders,
study designs, and main findings of the studies on the impact of GAI chatbots. The second
step was checking the data for theories of learning to determine how theories of learning were
used to conceptualise the impact of chatbots on student learning.
In the third and final step, the data were searched for elements of the discourses presented
by Bearman et al. (2022), which involved examining the terms and concepts ascribed to GAI
chatbots and their use or potential use in educational settings. The analytical process involved
mapping the discourses suggested in Bearman et al. (2022) by following Winther Jørgensen
and Phillips’s understanding of discourse analysis (2002). As such, we identified key signifiers
(e.g., power) and their potential combination with other key signifiers (e.g., future) to understand
“the chains of meaning that discourses bring together in this way, […] [to] identify
discourses (and identities and social spaces)” (Winther Jørgensen & Phillips, 2002, p. 27). We
mapped the discourse of imperative response and the discourse of altering authority. There
were also signifiers with a dystopia-is-now or a utopia-around-the-corner meaning. These
were illustrated by statements like “The invention of AI-Chatbot is undeniably one of the most
remarkable achievements by humanity, harnessing an unparalleled level of power and poten-
tial. In the near future, AI-chatbots are expected to become valuable tools in education, aiding
students in their learning journeys”, where we identify the italicised phrases as signifiers in
alignment with Bearman et al.’s (2022) taxonomy.
Fig. 1  Flow chart of the literature search:
- Articles retrieved from the selected journals’ online databases (N = 52)
- Titles and abstracts screened (N = 52); excluded: review and position articles (N = 16)
- Full texts assessed for eligibility (N = 36); excluded: articles not meeting the inclusion criteria (N = 13)
- Full-text literature included (N = 23)



Limitations

A limitation of this study is that literature on the periphery of HE was not examined,
though the purpose was to examine research relevant to HE scholars and practitioners,
published in venues where they are actively engaged. Within that scope, this study represents an
accurate and contemporaneous overview of the field of HE in a broad sense. Finally, the
small sample is also a limitation, though there is value in HE academics approaching the
emergence of GAI chatbots with more nuance and in more robust methodologically and
theoretically informed ways. Since we conducted the data collection, we also recognise
that more studies will have been published, but we argue those studies may be viewed and
considered in light of our findings.

Findings

The three research questions of this study structure the findings section. First, a descriptive
overview of the data is presented, followed by a thematic overview. Subsequently, the theo-
ries of learning used to frame, examine, or understand the implications of GAI chatbots are
detailed. Finally, the discourses of AI manifested in the corpus are laid out.

RQ1: What is the current state of empirical research on GAI chatbots in HE?

A tabular overview of the studies’ characteristics, with information about study design,
stakeholders, location, cohort size, research questions, and main findings, is provided
online via Notion (https://grateful-lunge-409.notion.site/470a920be036438883bc87eea0df3822?v=4f6dfee70d8f4e0595eed77f9a31b3ee).
Here, we identify (1) a range of study designs and methodological approaches; (2) generative AI chatbot performance studies; (3) studies examining issues of trust, reliability, and teachers’ reflections on the potential of GAI chatbots in education; and (4) studies examining students’ motivation for using GAI chatbots.

Study designs and methodological approaches of GAI chatbot research

In order to ensure the stringency and robustness of the selected empirical studies, the methodological
choices made in them were assessed (see the Notion page, https://grateful-lunge-409.notion.site/470a920be036438883bc87eea0df3822?v=4f6dfee70d8f4e0595eed77f9a31b3ee,
for an overview of the results; a CSV file can be made available upon request). Here, the
corpus consists of one small-scale but in-depth autoethnographic study (n = 1) (Schwenke
et al., 2023), a survey study (n = 505) addressing students’ perceptions of whether they were
ready to engage with generative AI in their future roles (Al-Zahrani, 2023), and another
large-scale study (n = 1117 participants) examining the uptake of technology (Habibi et al.,
2023). Survey studies (n = 11) use self-reported data to identify the impact (Rodway &
Schepman, 2023), readiness (Al-Zahrani, 2023), or willingness (Lai et al., 2023) to engage
with GAI in educational practice. There are three interview studies (e.g. Jafari & Keykha,
2023) and four studies using mixed methods, including either in-depth or semi-structured
interviews as part of their data collection (e.g. Dakakni & Safa, 2023).
One study utilising a randomised control design (Yilmaz & Karaoglan Yilmaz, 2023)
suggests that students in the chatbot intervention did better in post-test creativity and
computer programming tests. However, that study has a very small sample size of only 45
students.

GAI chatbots’ performance studies

Several studies use subject experts to validate the output of chatbot use and interaction.
This is illustrated in Farazouli et al. (2023), Hallal et al. (2023), Khosravi et al. (2023),
and Li et al. (2023). Expert validation, where AI-generated responses are assessed by experts
for validity, provides data about how chatbots perform in HE settings and how they
could challenge core practices in education (e.g. Farazouli et al., 2023), but such studies do not target
students’ learning. A number of studies focus on how well AI chatbots provide answers to
questions in various disciplines. In these studies, it is common to conduct an experiment
where a chatbot’s performance is tested and validated by experts. For example, Khosravi
et al. (2023) examine ChatGPT’s performance in questions on genetics, finding that 70%
of its responses are correct. They report that the chatbot performs better
and more accurately on descriptive and memorisation tasks than on those requiring critical
analysis and problem-solving. Additionally, Pursnani et al. (2023) examine GPT-4’s abil-
ity to answer questions from the US Fundamentals of Engineering examination and con-
clude that there have been significant improvements in its mathematical capabilities and
problem-solving of complex engineering cases. Hallal et al. (2023), testing the claim that
GAI chatbots are valuable tools in students’ learning, examine the performance of GPT-
3.5, GPT-4, and Bard in text-based structural notations in organic chemistry and conclude
that chatbots’ integration in education needs careful consideration and monitoring. Finally,
Farazouli et al. (2023) examine ChatGPT’s ability to answer examination questions in law,
philosophy, sociology, and education, as well as teachers’ assessments of ChatGPT’s texts
in comparison with student texts. They report that teachers have difficulty discerning stu-
dent texts from GAI texts and observe a tendency for teachers to downgrade student texts
when suspecting the use of a chatbot.

Studies examining trust, reliability, and teachers’ reflections

A number of the selected studies focus on second language learning (n = 5) and suggest
that chatbots help students structure their thoughts in the second language (e.g. Yan, 2023;
Zou & Huang, 2023). In this category, Escalante et al. examine the differences between
chatbot feedback and tutor feedback and find that students do not value feedback from
tutors more than feedback from chatbots (Escalante et al., 2023). At the same time, there
are examples of chatbots being unreliable: “Nonetheless, its generative nature also gave
rise to concerns for learning loss, authorial voice, unintelligent texts, academic integrity as
well as social and safety risks” (Zou & Huang, 2023, p. 1).
A number of the studies focus on teachers’ concerns about how GAI chatbots may
impact students’ learning. Dakakni and Safa (2023), for example, find that 67% of instruc-
tors have feelings of distrust towards GAI chatbots as they feel they encourage students to
plagiarise, though 83% are in favour of GAI training, if only for “policing” students’ ten-
dency to plagiarise. Barrett and Pack (2023) find that teachers and students have a shared
understanding of what constitutes appropriate AI use. Kohnke et al. (2023) suggest that
familiarity and confidence in using AI-driven teaching tools play an important role in
adoption, but stress there are challenges that language instructors face that require tailored
support and professional development.
Studies on students’ motivation for use

In one study, 85.2% of the students report using AI technologies, primarily Bard
and Quillbot (38%) and ChatGPT (25.6%) (Dakakni & Safa, 2023). Explaining why
students use chatbots, Lai et al. (2023) identify an intrinsic motivation to learn as the
strongest motivator, which is consistent with the prior literature on technology accept-
ance, where “perceived usefulness” is a strong predictor of behavioural intention (Lai
et al., 2023). Schwenke et al.’s (2023) single respondent autoethnographic study reports
a perceived value in using a GAI chatbot to structure thoughts while writing a degree
thesis, but also that the process requires continuous validation of the output of the chat-
bot-generated material. Al-Zahrani (2023) reports that students have a positive outlook
on GAI and are aware of the ethical concerns associated with its use, but he does not
detail these concerns in the study. Focusing on the impact that GAI has had on stu-
dents’ learning, Yilmaz and Karaoglan Yilmaz (2023) find significant improvements in
the computational thinking skills, programming self-efficacy, and motivation of students
using GAI compared to the control group.

RQ2: What theories of learning are used in studies of GAI chatbots and their impact
on student learning?

Of the 23 selected studies, only three make use of learning theories explicitly, referring
to experiential learning (Li et al., 2023; Yan, 2023), reflective learning (Li et al., 2023;
Yan, 2023), active learning (Lai et al., 2023), and self-regulated learning (SRL) (Lai
et al., 2023). Li et al. (2023) use experiential learning and reflective learning to investi-
gate the effectiveness of ChatGPT in generating reflective writing and also the potential
challenges it poses to academic integrity. They also construct a framework for assess-
ing the quality of reflective writing based on experiential learning and reflective activ-
ity. Yan (2023) uses reflective learning and experiential learning to investigate students’
behaviour and reflections on using ChatGPT in writing classrooms. The
design of the study is informed by experiential learning, where students use ChatGPT as
part of their practicum. Lai et al. (2023) use active learning theory and SRL to motivate
their empirical approach, and their study identifies these theories as key components for
exploring the impact of chatbots on students’ motivation in their learning. They also use
the technology acceptance model (TAM) to examine undergraduate students’ motivation
and intention to use ChatGPT.
Although only Li et al. (2023), Yan (2023), and Lai et al. (2023) make explicit ref-
erence to learning theories, a number of other studies (Bernabei et al., 2023; Chan &
Hu, 2023; Habibi et al., 2023; Maheshwari, 2023) draw on behavioural research theo-
ries such as the technology acceptance model (TAM), the theory of planned behaviour
(TPB), and the unified theory of acceptance and use of technology (UTAUT). Other
conceptual frameworks, such as Biggs’ 3Ps (presage–process–product) model, are also used to
examine technology acceptance and use. In these studies, the focus is on exploring factors
that influence individuals’ behavioural intentions when adopting a new technology,
such as performance expectancy, effort expectancy, and hedonic motivation. Across the
rest of the examined work (n = 20), theories of students’ learning and teachers’ practices
are absent.
RQ3: What discourses about AI are found in the literature?

Bearman et al.’s (2022) discourses of imperative response and altering authority are found
in the studies in this review. The discourse of imperative response predominantly frames
the selected studies and articulates a need for universities to respond to the emerging tech-
nology. There is a tendency to label emerging, potentially disruptive change either as a pos-
itive thing offering new possibilities for teaching and learning or as an existential threat to
university practices. Discourses of utopia-just-around-the-corner appear in several studies
(e.g. Dakakni & Safa, 2023; Rodway & Schepman, 2023), highlighting the significant
advantages attributed to AI in education: supporting tailored learning experiences, improving student
learning, facilitating the identification of students’ strengths and weaknesses, and adapting
lessons to students’ individual learning needs. This discourse also appears in other studies
(Chan & Hu, 2023; Hallal et al., 2023; Jafari & Keykha, 2023; Lai et al., 2023) that argue
for the “undeniable” benefit of GAI that HE institutions “need to harness”. These studies
highlight the positive impact GAI chatbots may have on students’ learning by providing
tailored feedback on assignments, pinpointing areas for improvement, avoiding potential
embarrassment from judgmental teacher criticism, creating interactive learning activities
and suggesting resources, and enabling students to learn at their own pace. Hallal et al.
(2023) go as far as to claim: “The invention of AI-Chatbot is undeniably one of the most
remarkable achievements by humanity, harnessing an unparalleled level of power and
potential. In the near future, AI-chatbots are expected to become valuable tools in educa-
tion, aiding students in their learning journeys” (p. 1).
The dystopia-is-now discourse is also represented in several studies. Al-Zahrani
(2023) portrays AI in education as having a disruptive character, potentially displacing
teachers’ roles in education, and he anticipates a negative impact on knowledge work
productivity. Studies such as Escalante et al. (2023), Farazouli et al. (2023), and Li et al.
(2023) raise concerns about GAI chatbots’ disruptive impact on education practice, par-
ticularly their potential threat to assessment and their negative consequences for stu-
dents’ reflective writing and critical thinking.
Al-Zahrani (2023) also expresses the utopia-around-the-corner thematisation in the
framing of the study, noting that “Dwivedi et al. (2023) argue that GPTs, in particular,
will disrupt education and believe their biggest impact will be on knowledge work
productivity”. Examples of this discourse also appear in the discussion of the same
paper, when the authors note: “In summary, the findings indicate the readiness of the
higher education community in Saudi Arabia to integrate AI technologies in research
and development” (Al-Zahrani, 2023, p. 11). This bold claim, that the education
community of an entire country is ready to take on the challenge of AI, is made on the
basis of self-reported data from a survey of students’ perceptions of AI.
The discourse of altering authority is also represented in the corpus. Several stud-
ies seem to align with this discourse regarding the integration of AI in education, such
as Dakakni and Safa (2023) and Jafari and Keykha (2023), where the authors refer to
AI learning systems as beneficial for students’ convenience and personalised learn-
ing without the intervention of teachers. Additionally, Rodway and Schepman (2023)
and Mohamed et al. (2023) discuss AI in education as enabling teachers to offer stu-
dents individualised and customised experiences. In several studies, this discourse also
appears when framing GAI chatbots’ integration in HE.
In Farazouli et al. (2023), for example, the findings suggest teachers lose their sense
of control, which we understand as an expression of the discourse of altering authority.
Here, agency is not necessarily only altered, but there are concerns that it may be lost
entirely. Finally, Khosravi et al. (2023) portray GAI chatbots as a potential game changer
in clinical decision-making and genetics education and suggest that if their accuracy is
improved, they could assist teachers in teaching and evaluating students.
In different ways, the selected studies thus reflect Bearman et al.’s (2022) discursive
positions, framing GAI chatbots both as an imperative response and as altering authority
in the context of HE, and representing both the dystopia-is-now and the utopia-around-
the-corner approaches.

Discussion

Given the lack of consolidated knowledge about GAI in HE, the purpose of this review
was to examine the research published since the launch of ChatGPT in 2022. The search
covered 31 journals relevant to HE and yielded 23 empirical studies on GAI chatbots.
While there is value in understanding the current trends in this emerging research area,
the review is exploratory in nature and seeks to better enable university teachers to make
informed decisions about their practice (Ortegón et al., 2024). We now discuss the main
points to be drawn from this review.
First, little empirical work is available within the specified time frame. The studies
examined in this review employ a wide variety of methods and draw samples from a wide
range of disciplines. The eclectic character of these studies calls into question the value of,
for instance, conducting a meta-synthesis at the present time. It may also be misleading to
compare effect sizes across studies that are so methodologically diverse, ranging from
language learning to engineering. There is only one randomised controlled trial (Yilmaz &
Karaoglan Yilmaz, 2023), which is not surprising given that only one year has passed since
the release of OpenAI’s ChatGPT; we also note that this study has a small sample. The diversity of
the studies, with such different objects of analysis, makes comparison difficult. Moreover,
there is a predominance of studies published in journals focusing on technology, comput-
ers, and education (n = 20). This suggests that HE journals that do not target technology-
mediated learning are not currently engaging with research on GAI chatbots to the same
degree. Similar findings have been reported elsewhere, most recently by Sperling et al.
(2024), who illustrate how matters relevant to AI in education settings appear more
frequently in computer and technology journals. We argue that education journals need to stay abreast of
developments in technology-mediated learning to offer nuance from a position of educa-
tion values and ethics (Holmes et al., 2022; McGrath et al., 2023).
Although diverse, several of the studies included in this review have in common that
they are exploratory (Farazouli et al., 2023; Hallal et al., 2023; Jafari & Keykha, 2023;
Khosravi et al., 2023), using expert validation techniques to examine the quality of
chatbot responses to authentic student examination questions. Most studies are either
observational, based on small datasets, or rely on self-reported surveys of the likelihood
of and motivation for using GAI chatbots; as such, they cannot yet support robust
findings or generalisations for university teachers and their practices. Major technological
advances, such as GAI based on LLMs, can cause significant public and educational
concerns (Barman et al., 2019; Fütterer et al., 2023). However, given the currently
available empirical work, it may still be too early to determine the value of GAI chatbots
and their impact on students’ learning and teachers’ practice, or to draw far-reaching conclusions.
The second main point from this review is that only three studies in the corpus explicitly
draw on learning theories. It is likely that theories of learning or theories of work-based
practices will need to become more central to studies on GAI chatbots, as evidenced in
other contexts where theory becomes more important as a field develops (Khalil et al., 2023;
McGrath et al., 2020). As this research field progresses, studies examining students’ will-
ingness to engage with GAI chatbots and teachers’ perceptions about the meaning of this
new technology for teacher-student relationships will need to emerge. Studies regarding
what GAI chatbots can do and how they perform on specific tasks may very well predomi-
nate in the early work in the field. It seems reasonable to suppose that work on the impact
of GAI chatbots in HE on student learning or teacher practices will be better informed
by theories of learning or teachers’ practice. When claims are made about GAI chatbots
enhancing education and student learning, we find ourselves asking what such positive
statements mean in practice. More specifically, we ask what enhancement and learning
mean. What are the tacit assumptions feeding such beliefs? It may be appropriate to clarify
such fundamental positions vis-à-vis theories of learning in future empirical work.
The third main point is that the prevailing discourses identified by Bearman et al. (2022)
appear throughout the body of work examined in this study. These discourses are used
to frame and position the studies and to infer meaning about future challenges and
opportunities. This review focuses on mapping the existing discourses, which are
conceptualised as either highlighting the benefits or focusing on the risks. Accordingly,
the role of institutions is framed as either harnessing the potential of GAI chatbots
within HE or urgently adapting HE practices in order to safeguard learning and academic
integrity. This aligns with observations elsewhere that the literature on integrating AI
into society emphasises the significant influence of discourse in shaping both current and
future sociotechnical trajectories (Bareis & Katzenbach, 2022). We argue that the
scientific literature on GAI chatbots should be
grounded in robust findings about the impact on existing practices and should engage less
with ad hoc visions about what AI might bring.

Conclusion and future directions

This study examines a selection of empirical studies in HE to address the lack of
consolidated knowledge about the impact of GAI chatbots on HE practices and student
learning. It shows that a wide variety of approaches appear in the literature, covering many
disciplines. Very few of the studies utilise theories of learning to frame, examine, or explain
the impact of GAI chatbots on university teacher practices or students’ learning. There is
a tendency in the studies to use inflated discursive language. At times, there is also a dis-
connect between the presented findings and the bold claims encapsulated in the discourse
of altering authority. This is an important observation, given that: “Not only do groups
develop technologies with cultural assumptions and power relations in place that guide
development efforts, but people also construct certain uses and purposes for technology
through discourse that is itself, in turn, shaped in profound ways by cultural beliefs about
technology” (Haas, 1996, p. 227). While the sample of this study is too small to draw any
broader inferences, it is noteworthy that dystopian and utopian stances are often ascribed
to other scholars and studies in the framing of empirical work and less in the empirical
evidence presented in the studies. Here, we fear that such inflated discourses serve
primarily as rhetorical devices (McGrath et al., 2020). In this sense, we argue that the scientific
literature needs to resist contributing to the dystopian and utopian hyperbole when framing
and considering the transferability of research findings. Instead, we see the need to conduct
research engaging with questions aimed at better understanding the intricacies of GAI in
university practice from critical standpoints. Here, there is a clear need for future studies
to focus on various stakeholder populations, including specific groups of students, in order
to gain a broader understanding of the changes GAI chatbots bring to educational practices.
Methodologically speaking, qualitative studies, using ethnographic methods, that enable
“thick descriptions” (Geertz, 2008) could play a key role in better understanding how
different stakeholders use GAI chatbots in their everyday tasks. As a final point, it is
important to observe the social impact of GAI chatbots on HE as a social arena, not just as
a site for student learning and teaching. This includes remaining vigilant about the
potential inequities that GAI chatbots may create among different student groups.

Funding Open access funding provided by Stockholm University.

Declarations
Conflict of interest The authors declare no competing interests.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long
as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Com-
mons licence, and indicate if changes were made. The images or other third party material in this article
are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly
from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References
Al-Zahrani, A. M. (2023). The impact of generative AI tools on researchers and research: Implications for
academia in higher education. Innovations in Education and Teaching International, 1–15. https://​doi.​
org/​10.​1080/​14703​297.​2023.​22714​45
Alemdag, E. (2023). The effect of chatbots on learning: A meta-analysis of empirical research. Journal of
Research on Technology in Education, 1–23. https://​doi.​org/​10.​1080/​15391​523.​2023.​22556​98
Ansari, A. N., Ahmad, S., & Bhutta, S. M. (2023). Mapping the global evidence around the use of ChatGPT
in higher education: A systematic scoping review. Education and Information Technologies, 29, 1–41.
https://​doi.​org/​10.​1007/​s10639-​023-​12223-4
Banks, S. (2011). A historical analysis of attitudes toward the use of calculators in junior high and high
school math classrooms in the United States since 1975. https://​files.​eric.​ed.​gov/​fullt​ext/​ED525​547.​
pdf. Accessed 17 Aug 2024
Bareis, J., & Katzenbach, C. (2022). Talking AI into being: The narratives and imaginaries of national AI
strategies and their performative politics. Science, Technology, & Human Values, 47(5), 855–881.
https://​doi.​org/​10.​1177/​01622​43921​10300​07
Barman, L., McGrath, C., & Stöhr, C. (2019). Higher education; for free, for everyone, for real? Mas-
sive open online courses (MOOCs) and the responsible university: History and enacting rationali-
ties for MOOC initiatives at three Swedish universities. In Sørensen, M. P., Geschwind, L., Kekäle,
J., Pinheiro, R. (Eds.), The Responsible University. Palgrave Macmillan. https://​doi.​org/​10.​1007/​
978-3-​030-​25646-3_5
Barrett, A., & Pack, A. (2023). Not quite eye to A.I.: Student and teacher perspectives on the use of gen-
erative artificial intelligence in the writing process. International Journal of Educational Technology in
Higher Education, 20(1), 59. https://doi.org/10.1186/s41239-023-00427-0
Bearman, M., Ryan, J., & Ajjawi, R. (2022). Discourses of artificial intelligence in higher education: A criti-
cal literature review. Higher Education, 1–17. https://​doi.​org/​10.​1007/​s10734-​022-​00937-2
Bernabei, M., Colabianchi, S., Falegnami, A., & Costantino, F. (2023). Students’ use of large language
models in engineering education: A case study on technology acceptance, perceptions, efficacy, and
detection chances. Computers and Education: Artificial Intelligence, 5, 100172. https://​doi.​org/​10.​
1016/j.​caeai.​2023.​100172
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sas-
try, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A.,
Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language models are few-shot learners
(arXiv:​2005.​14165). arXiv. https://​doi.​org/​10.​48550/​arXiv.​2005.​14165
Cerratto Pargman, T., Sporrong, E., Farazouli, A., & McGrath, C. (2024). Beyond the hype: Towards a criti-
cal debate about AI chatbots in Swedish higher education. Högre Utbildning, 14(1), 74–81. https://​doi.​
org/​10.​23865/​hu.​v14.​6243
Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and chal-
lenges in higher education. International Journal of Educational Technology in Higher Education,
20(1), 43. https://​doi.​org/​10.​1186/​s41239-​023-​00411-8
Dakakni, D., & Safa, N. (2023). Artificial intelligence in the L2 classroom: Implications and challenges
on ethics and equity in higher education: A 21st century Pandora’s box. Computers and Education:
Artificial Intelligence, 5, 100179. https://​doi.​org/​10.​1016/j.​caeai.​2023.​100179
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M.,
Koohang, A., Raghavan, V., Ahuja, M., & Albanna, H. (2023). Opinion paper: “so what if Chat-
GPT wrote it?” multidisciplinary perspectives on opportunities, challenges and implications of gen-
erative conversational AI for research, practice and policy. International Journal of Information
Management, 71, 102642. https://​doi.​org/​10.​1016/j.​ijinf​omgt.​2023.​102642
Escalante, J., Pack, A., & Barrett, A. (2023). AI-generated feedback on writing: Insights into efficacy
and ENL student preference. International Journal of Educational Technology in Higher Educa-
tion, 20(1), 57. https://​doi.​org/​10.​1186/​s41239-​023-​00425-2
Farazouli, A., Cerratto-Pargman, T., Bolander-Laksov, K., & McGrath, C. (2023). Hello GPT! Goodbye
home examination? An exploratory study of AI chatbots impact on university teachers’ assessment
practices. Assessment and Evaluation in Higher Education, 1–13. https://​doi.​org/​10.​1080/​02602​
938.​2023.​22416​76
Fütterer, T., Fischer, C., Alekseeva, A., Chen, X., Tate, T., Warschauer, M., & Gerjets, P. (2023). Chat-
GPT in education: Global reactions to AI innovations. Scientific Reports, 13(1), 15310. https://​doi.​
org/​10.​1038/​s41598-​023-​42227-6
Gee, J. P. (2004). Discourse analysis: What makes it critical? In An introduction to critical discourse
analysis in education (pp. 49–80). Routledge.
Geertz, C. (2008). Thick description: Toward an interpretive theory of culture. In The cultural geogra-
phy reader (pp. 41–51). Routledge.
Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated
methodologies. Health Information & Libraries Journal, 26(2), 91–108. https://​doi.​org/​10.​1111/j.​
1471-​1842.​2009.​00848.x
Guo, K., & Wang, D. (2023). To resist it or to embrace it? Examining ChatGPT’s potential to support
teacher feedback in EFL writing. Education and Information Technologies. https://doi.org/10.1007/
s10639-023-12146-0
Haas, C. (1996). Writing technology: Studies on the materiality of literacy. Routledge.
Habibi, A., Muhaimin, M., Danibao, B. K., Wibowo, Y. G., Wahyuni, S., & Octavia, A. (2023). Chat-
GPT in higher education learning: Acceptance and use. Computers and Education: Artificial Intel-
ligence, 5, 100190. https://​doi.​org/​10.​1016/j.​caeai.​2023.​100190
Hallal, K., Hamdan, R., & Tlais, S. (2023). Exploring the potential of AI-Chatbots in organic chemistry:
An assessment of ChatGPT and Bard. Computers and Education: Artificial Intelligence, 5, 100170.
https://​doi.​org/​10.​1016/j.​caeai.​2023.​100170
Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., ... & Koedinger,
K. R. (2022). Ethics of AI in education: Towards a community-wide framework. International
Journal of Artificial Intelligence in Education, 1–23. https://​doi.​org/​10.​1007/​s40593-​021-​00239-1
Jafari, F., & Keykha, A. (2023). Identifying the opportunities and challenges of artificial intelligence in
higher education: A qualitative study. Journal of Applied Research in Higher Education, ahead-of-
print(ahead-of-print). https://​doi.​org/​10.​1108/​JARHE-​09-​2023-​0426
Jurafsky, D., & Martin, J. H. (2023). Speech and language processing. Retrieved 25 March 2023, from
https://​web.​stanf​ord.​edu/​~juraf​sky/​slp3/
Khalil, M., Prinsloo, P., & Slade, S. (2023). The use and application of learning theory in learning ana-
lytics: A scoping review. Journal of Computing in Higher Education, 35(3), 573–594. https://​doi.​
org/​10.​1007/​s12528-​022-​09340-3
Khosravi, T., Al Sudani, Z. M., & Oladnabi, M. (2023). To what extent does ChatGPT understand genet-
ics? Innovations in Education and Teaching International, 1–10. https://doi.org/10.1080/14703297.
2023.2258842
Kivle, B. M. T., & Espedal, G. (2022). Identifying values through discourse analysis. In Researching
values: Methodological approaches for understanding values work in organisations and leadership
(pp. 171–187). Springer International Publishing.
Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). Exploring generative artificial intelligence prepar-
edness among university language instructors: A case study. Computers and Education: Artificial
Intelligence, 5, 100156. https://​doi.​org/​10.​1016/j.​caeai.​2023.​100156
Lai, C. Y., Cheung, K. Y., & Chan, C. S. (2023). Exploring the role of intrinsic motivation in ChatGPT
adoption to support active learning: An extension of the technology acceptance model. Computers and
Education: Artificial Intelligence, 5, 100178. https://​doi.​org/​10.​1016/j.​caeai.​2023.​100178
Li, Y., Sha, L., Yan, L., Lin, J., Raković, M., Galbraith, K., Lyons, K., Gašević, D., & Chen, G. (2023). Can
large language models write reflectively. Computers and Education: Artificial Intelligence, 4, 100140.
https://​doi.​org/​10.​1016/j.​caeai.​2023.​100140
Maheshwari, G. (2023). Factors influencing students’ intention to adopt and use ChatGPT in higher educa-
tion: A study in the Vietnamese context. Education and Information Technologies. https://​doi.​org/​10.​
1007/​s10639-​023-​12333-z
McGrath, C., & Åkerfeldt, A. (2019). Educational technology (EdTech): Unbounded opportunities or just
another brick in the wall? In Digital Transformation and Public Services (pp. 143–157). Routledge.
https://​doi.​org/​10.​4324/​97804​29319​297-9
McGrath, C., Cerratto Pargman, T., Juth, N., & Palmgren, P. J. (2023). University teachers’ perceptions
of responsibility and artificial intelligence in higher education-An experimental philosophical study.
Computers and Education: Artificial Intelligence, 4, 100139. https://​doi.​org/​10.​1016/j.​caeai.​2023.​
100139
McGrath, C., Liljedahl, M., & Palmgren, P. J. (2020). You say it, we say it, but how do we use it? Communi-
ties of practice: A critical analysis. Medical Education, 54(3), 188–195. https://​doi.​org/​10.​1111/​medu.​
14021
Mohamed, A. M. (2023). Exploring the potential of an AI-based chatbot (ChatGPT) in enhancing English as
a Foreign Language (EFL) teaching: Perceptions of EFL Faculty Members. Education and Information
Technologies. https://​doi.​org/​10.​1007/​s10639-​023-​11917-z
Ortegón, C., Decuypere, M., & Williamson, B. (2024). Mediating educational technologies: Edtech
brokering between schools, academia, governance, and industry. Research in Education,
00345237241242990. https://​doi.​org/​10.​1177/​00345​23724​124299
Petticrew, M., & Roberts, H. (2008). Systematic reviews in the social sciences: A practical guide. Wiley.
Pursnani, V., Sermet, Y., Kurt, M., & Demir, I. (2023). Performance of ChatGPT on the US fundamen-
tals of engineering exam: Comprehensive assessment of proficiency and potential implications for
professional environmental engineering practice. Computers and Education: Artificial Intelligence, 5,
100183. https://​doi.​org/​10.​1016/j.​caeai.​2023.​100183
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsuper-
vised multitask learners. OpenAI blog, 1(8), 9. https://​www.​seman​ticsc​holar.​org/​paper/​Langu​age-​Mod-
els-​are-​Unsup​ervis​ed-​Multi​task-​Learn​ers-​Radfo​rd-​Wu/​9405c​c0d61​69988​371b2​755e5​73cc2​8650d​
14dfe
Rodway, P., & Schepman, A. (2023). The impact of adopting AI educational technologies on projected
course satisfaction in university students. Computers and Education: Artificial Intelligence, 5, 100150.
https://​doi.​org/​10.​1016/j.​caeai.​2023.​100150
Schwenke, N., Söbke, H., & Kraft, E. (2023). Potentials and challenges of chatbot-supported thesis writing:
An autoethnography. Trends in Higher Education, 2(4), 611–635. https://​doi.​org/​10.​3390/​highe​redu2​
040037
Sperling, K., Stenberg, C. J., McGrath, C., Åkerfeldt, A., Heintz, F., & Stenliden, L. (2024). In search of
Artificial Intelligence (AI) literacy in teacher education: A scoping review. Computers and Education
Open, 6, 100169. https://​doi.​org/​10.​1016/j.​caeo.​2024.​100169
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., & Gomez, A. N. (2017). Attention is all you
need. Advances in Neural Information Processing Systems, 30. https://proceedings.neurips.cc/paper_
files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., ... & Fedus, W. (2022). Emergent abili-
ties of large language models. arXiv preprint arXiv:2206.07682.
Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language communication
between man and machine. Communications of the ACM, 9(1), 36–45. https://​doi.​org/​10.​1145/​357980.​
357991
Winther Jorgensen, M. W., & Phillips, L. J. (2002). Discourse analysis as theory and method. Sage.
Wu, R., & Yu, Z. (2023). Do AI chatbots improve students learning outcomes? Evidence from a meta‐analy-
sis. British Journal of Educational Technology. https://​doi.​org/​10.​1111/​bjet.​13334
Yan, D. (2023). Impact of ChatGPT on learners in a L2 writing practicum: An exploratory investiga-
tion. Education and Information Technologies, 28(11), 13943–13967. https://​doi.​org/​10.​1007/​
s10639-​023-​11742-4
Yilmaz, R., & Karaoglan Yilmaz, F. G. (2023). The effect of generative Artificial Intelligence (AI)-based
tool use on students’ computational thinking skills, programming self-efficacy and motivation. Com-
puters and Education: Artificial Intelligence, 4, 100147. https://​doi.​org/​10.​1016/j.​caeai.​2023.​100147
Zou, M., & Huang, L. (2023). The impact of ChatGPT on L2 writing and expected responses: Voice
from doctoral students. Education and Information Technologies. https://​doi.​org/​10.​1007/​
s10639-​023-​12397-x

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
