Empowering Digital Education With ChatGPT
Recently, there has been a significant increase in the development and interest in applying generative
AI across various domains, including education. The emergence of large language models (LLMs),
such as the ChatGPT tool, fueled by advancements in generative AI, is profoundly reshaping educa-
tion. The use of the ChatGPT tool offers personalized support, improves accessibility, and introduces
innovative methods for students and educators to engage with information and learning materials.
Furthermore, ChatGPT facilitates a wide range of language learning services, including language
instruction, speech recognition, pronunciation feedback, and immersive virtual simulations for
hands-on learning experiences.
This book explores the transformative potential of the ChatGPT tool within education, shedding
light on the opportunities that arise through the integration of the ChatGPT tool into various aspects
of the learning process. It serves as a platform for the community to share cutting-edge research
ideas concerning the use of the ChatGPT tool in digital education. Readers will discover how the
ChatGPT tool can enhance student engagement, foster personalized learning experiences, facilitate
intelligent tutoring systems, support virtual classroom interactions, and revolutionize assessment
and feedback mechanisms.
Edited by
Mohamed Lahby
Contents
About the Editor
List of Contributors
Preface
Chapter 10  Using ChatGPT to Prepare for Your Dream Job? Unlocking the Potential of AI for Personalized Curriculum Planning
Leonie Rebecca Freise and Eva Ritz
Index
Contributors
Adiyono Adiyono
Sekolah Tinggi Ilmu Tarbiyah Ibnu Rusyd Tanah Grogot
Paser, Indonesia

Ngoc Bich Do
University of Economics Ho Chi Minh City
Vietnam
Tim Williams
Queensland University of Technology
Australia
Preface
Education, as a fundamental pillar of our society, has evolved fascinatingly throughout the centuries.
From basic methods of the past to today’s immersive technologies, teaching has constantly adapted
to meet the challenges of each era. In recent years, the rapid evolution of AI and the widespread
adoption of digitalization techniques have revolutionized teaching methods. This transformation has
brought forth a diverse array of online resources and educational platforms, granting students unpre-
cedented access to a wealth of learning materials tailored to their individual needs and interests.
As AI technologies continue to advance, the emergence of large language models (LLMs) like the
ChatGPT tool is profoundly reshaping education. This tool provides personalized support, improves
accessibility, and introduces innovative methods for students and educators to engage with infor-
mation and learning materials. Furthermore, ChatGPT facilitates a wide range of language learning
services, including language instruction, speech recognition, pronunciation feedback, and immer-
sive virtual simulations for hands-on learning experiences.
This book explores the transformative potential of the ChatGPT tool within education, shedding
light on the opportunities that arise through the integration of the ChatGPT tool into various aspects
of the learning process. It serves as a platform for the community to share cutting-edge research
ideas concerning the use of the ChatGPT tool in digital education. Readers will discover how the
ChatGPT tool can enhance student engagement, foster personalized learning experiences, facilitate
intelligent tutoring systems, support virtual classroom interactions, and revolutionize assessment
and feedback mechanisms.
Furthermore, this book elucidates the systematic integration of ChatGPT and other generative
AI techniques into educational practices, offering readers a clear understanding of how these tech-
nologies can be employed to address challenges and enhance learning outcomes in digital educa-
tion, while also emphasizing the importance of ethical considerations in the use of generative AI in
educational settings.
This book comprises a total of 18 chapters organized into four main sections. The first section delves
into the theoretical and applied aspects of utilizing ChatGPT and other generative AI techniques in
education. The second section focuses on the application of generative AI for personalized learning
and adaptive systems, showcasing research on its effectiveness in various educational contexts. The
third section explores the role of ChatGPT as a virtual teaching assistant, highlighting its potential
to enhance academic guidance and foster active learning. Finally, the last section provides ethical
considerations and presents case studies that examine the impact of generative AI in education.
1.1 INTRODUCTION
In the rapidly evolving landscape of artificial intelligence (AI), language models like ChatGPT
are becoming transformative educational tools (Gill et al., 2024). As ChatGPT itself proclaims,
“ChatGPT is a powerful language model that has the potential to revolutionize the way we interact
with and utilize artificial intelligence in our daily lives” (ChatGPT, personal communication,
October 5, 2023).
This sentiment is further echoed by Parker et al. (2024a), who note that since its public release in late
2022, ChatGPT has sent ripples of excitement and fear throughout the internet and the halls of higher
education (p. 1). Professors and higher education students have questioned the use of and engagement
with ChatGPT. On one side, some students and faculty have embraced the technological
future, using AI for various activities—from completing class assignments to planning spring break
trips and even deciphering traffic signs (Rahiman & Kodikal, 2024). For these users, it is hard to
remember life just a year ago when AI was not a daily fixture. Conversely, some professors and
students view AI with apprehension. For them, the rapid advancements in technology are a source
of anxiety, evoking dystopian scenarios akin to the Terminator movies. As a memorable quote from
the movie Terminator Genisys states, “Primates evolve over millions of years. I evolve in seconds.
And I am here. In exactly four minutes, I will be everywhere” (Taylor, 2015). This sentiment could
easily apply to the rapid, almost overnight, advancements in publicly accessible large language
models and AI.
Despite the varying levels of enthusiasm and apprehension among students and faculty in higher
education, there is a significant gap in our understanding of how ChatGPT is being adopted and
utilized across different academic roles and disciplines. This chapter aims to address this gap by
posing the following research questions:
• How are ChatGPT and other AI technologies being integrated into the academic routines and
responsibilities of undergraduate and graduate students and faculty at the University of Kansas?
• What are the perceived advantages and limitations of using ChatGPT for educational purposes
among these different academic groups?
• How does the use of ChatGPT and similar AI technologies vary between these groups, and what
implications does this have for educational practices and policies?
This chapter embarks on an inquiry into integrating ChatGPT and other AI technologies within the
daily routines and academic obligations of members at a midwestern university. Specifically, this
investigation concentrates on the experiences of two undergraduate and two graduate students from
the field of education alongside two faculty members of the same field. These individuals have been
selected to represent a dual categorization: those who have engaged with ChatGPT following its
widespread acceptance toward the end of 2022 and those who have recently begun to incorporate it
into their routines amid its growing prevalence in educational contexts.
The chapter is structured to examine the diverse ways in which these four students and
two professors utilize ChatGPT and other AI tools. This exploration spans a broad spectrum of
applications, from leveraging AI to enhance academic and everyday tasks to employing AI tools in
research and work refinement, and further, to understanding AI’s role within the university’s broader
academic and research ecosystem.
Ultimately, the chapter seeks to construct a detailed portrayal of AI engagement across varying
levels of the university’s community. By juxtaposing the experiences of undergraduate and
graduate students with those of faculty members, it aims to illuminate the multifaceted nature of AI
usage across different academic echelons and provide a nuanced perspective on the consequential
implications of AI technology in higher education (see Table 1.1).

TABLE 1.1
Characteristics of undergraduate and graduate students and faculty members
1.2.1 Undergraduate Student 1
Female, 21 years old, White. Senior at the University of Kansas
1.2.1.1 Introduction
Reflecting on my use of ChatGPT, I find myself in the early stages of navigating the terrain of AI as
a novice. I am a 21-year-old White female student, currently a senior at the University of Kansas.
My interactions with ChatGPT are a regular part of my routine, occurring three to four times weekly.
However, it’s important to note that my experience with AI tools is largely confined to ChatGPT,
as I have not yet explored other AI technologies extensively. In the coming sections, I highlight
my usage and perspectives on ChatGPT, outlining its integration and influence on my educational
practices, the evolution of my study habits and time management in the presence of AI, and my
contemplations on the ethical implications and future outlook of AI within the academic sphere.
1.2.1.2 Initial Perspectives
In exploring the use of ChatGPT and AI in my academic life, I have discovered an intersection
between this emerging technology and my education. My initial foray into using ChatGPT was a
blend of curiosity and practicality. Initially, my tasks ranged from using ChatGPT to create cre-
ative pieces like birthday cards and poems about friends to refining and generating grocery lists and
reducing recipes to feed one person. Seeing how well it handled these tasks, I wanted to test it in my
academic settings.
1.2.1.3 Academic Life
Since my initial use of ChatGPT, it has become vital to my study routine. I began by utilizing
ChatGPT to enhance my thought process and generate ideas for discussion board posts, an essential
element of most classes. ChatGPT’s ability to refine my thoughts and summarize articles for various
subjects has significantly improved my engagement in these discussions. Its proficiency in distil-
ling complex articles into more comprehensible insights has been particularly beneficial, aiding my
understanding and saving me considerable time; this is particularly evident when I need to locate
specific information within an article.
Furthermore, I have incorporated ChatGPT into various assessment tasks. It has been instru-
mental in generating ideas and structuring outlines for papers and other assignments. This tool
has also been pivotal in crafting and refining lesson plans for my higher education classes and
during real-life teaching placements. ChatGPT’s contributions have been especially valuable in
developing lesson plans and intervention strategies for special education students, making my work
sound more polished and enhancing my perspective, leading to a more comprehensive approach
to my assignments. It is important to highlight that I do not rely solely on AI for completing my
assignments. I allocate the same amount of time I would without its assistance. However, I have
found that AI has enabled me to complete assignments more efficiently. In addition to academic
writing, I have utilized ChatGPT to draft professional and academic emails, with the results gener-
ally being positively received. Knowing that my emails and responses are thoroughly checked for
spelling and appropriateness before sending is reassuring. This capability of ChatGPT to elevate
the professionalism of my communications is a primary reason why I plan to continue using it in
my academic journey. It allows me to focus on content while ensuring the language is refined and
appropriate.
and quite another to let it wholly author assignments, which raises questions about academic integ-
rity, in which case I would say it is not ethical.
As an undergraduate student, I am really conscious of the risks of being accused of cheating with
ChatGPT. I make sure to use it cautiously, always cross-checking its responses to ensure they are
accurate before I consider incorporating them into my work. Among my fellow students, there is a
mix of opinions. Some are pretty relaxed, accepting ChatGPT’s outputs as they are. Still, many of
us view it as potentially crossing the line into cheating unless we have explicit permission from our
professors.
In some cases, professors have suggested using ChatGPT, especially for tasks like organizing
lesson plans or getting real-time help with writing. In these situations, most students, including
myself, feel more comfortable using it, as it comes with the endorsement of our educators.
Still, there’s a significant concern about the ethical implications of using ChatGPT among many
students, making it a topic of ongoing debate and consideration in our academic community.
I do not believe the platform is currently being used to an extent at the University of Kansas that
would warrant a ban. However, should a time come when students begin to depend on it extensively
for their entire workload, replacing key aspects of traditional learning, I would then support a ban. It
is crucial to maintain a balance where AI tools like ChatGPT enhance rather than replace the foun-
dational elements of academic study.
1.2.1.5 Conclusion
Looking back on my experiences with ChatGPT and AI, I am really fascinated by how AI is chan-
ging education. I can see a future where AI is part of everything we do—from how we write emails
to being a regular feature in programs like Microsoft Word or even changing how we search on
Google. But right now, I think how we use AI and what we think is okay or not really depends on
whether our professors are engaged with ChatGPT and allow it. There are a lot of different opinions
on this, and it feels like the role of AI in our learning is up in the air.
1.2.2 Undergraduate Student 2
Female, 21 years old, White. Senior at the University of Kansas
1.2.2.1 Introduction
I would consider myself a beginner in my experience with AI. I use ChatGPT at least once a week,
sometimes daily. While I use ChatGPT fairly frequently, I have not utilized many other AI tools.
1.2.2.2 Initial Perspectives
I began using AI to create fantasy portraits of myself and my cat on TikTok for a trend. I have asked
it to create budget- and time-friendly meal plans for college students. I enjoy being able to easily
modify the plan with my dietary restrictions, preferences, and goals. My friends and I have taken
AI-generated personality tests, ranging from our attachment styles to which Muppet we are.
1.2.2.3 Academic Life
I have used ChatGPT to assist with my studies this semester. I use the tool to give me quick
definitions or review concepts from class that I did not fully understand or remember. I have asked
it to explain the concepts like I was five years old and have asked follow-up questions based on its
response. I have also used the tool to create study guides based on the information I provide it. It
is quickly organized and can be formatted as a test, which is beneficial to me. I like that it does a
lot of the work for me, not so much the ideas and content, but the bare bones of my understanding
and organizing.
I use ChatGPT more frequently than I use Quizlet now because of the user-friendliness and
affordability. I still incorporate studying with pen and paper, but it has altered my studying by
acting like a partner for me to review with. Although some might find the human touch of AI to
be unnerving, I like that it is kind of like having a conversation with a robot. I [however] like
to start with AI to see what it can come up with, but then I Google what it says to see if I can
find any credible sources. I asked it about a court case’s ruling, and it was completely wrong
with how the judges voted (it got the ruling correct, but not the distribution of votes). I told it
that it was wrong, and it apologized and provided another false piece of information about the
judging votes.
I have used AI tools to assist in my coursework. I have asked what areas of ELA (English
Language Arts) content students typically learn at various ages or for ideas on lesson planning to
take inspiration from. I have yet to encounter a lesson plan fully ready to be used in a real classroom,
but there are good pieces that can be looked at for further development. I did have ChatGPT write
an essay on Bigfoot for my practicum students (as an example of examining an essay’s) organiza-
tion. Despite explicit instructions and requests, (this AI) tool still did not incorporate all elements
that I asked for: [I had asked for the essay to contain] no contractions, (be written at a) ninth-grade
vocabulary level, (include) hypothetical sources, (and) restate the thesis in the conclusion.
I think AI will be integrated as a tool for helping both students and teachers. As a future teacher,
I think it will be great to create simple worksheets for vocabulary or grammar practice. It could also
help students with their studying or catching writing mistakes. In my ideal world, I would see AI being
used as an after-school tutor that double-checks students’ work. I think its integration will make prompts
for assignments more specific so that it is harder for AI to formulate an authentic response. In daily
life, it could become the new Google or go-to search resource.
I know my peers in education are concerned that people will choose AI instead of real, human
teachers. Some people do not see it as a tool and only recognize it for the harm it has the potential to
(cause). There is a fear that it was created to gather data/information from people that will be used
against them someday.
1.2.2.5 Conclusion
I absolutely do not think ChatGPT should be banned at the University of Kansas. Banning tech-
nology is hardly a way to prevent its use by students. It could prevent students from getting familiar
with a potential tool they will encounter in the workforce. On the other hand, it could promote academic integrity and
result in higher-quality learning. (Reactions I have seen from my professors) range from being
curious about ChatGPT to being avid users themselves. Nearly all of my professors this past year
have shown some sort of AI tool for the classroom during their lessons. We are being prepared as
future educators on the negatives of ChatGPT and what to look out for as well.
1.3.1 Graduate Student 1
Male, 30–32 years old, White. Undertaking a Ph.D. at the University of Kansas.
In 2023, ChatGPT reached its first anniversary since its public debut and as a graduate student,
I find it hard to imagine my daily routine without it. Having been immersed in technology from
a young age and having taught an introductory course on technology for over seven years, not to
mention my research in the field, it is safe to say that the launch of ChatGPT immediately captured
my attention. I eagerly explored every possible way to integrate it into my work. Now, it feels like
a day hardly counts unless I have posed at least 20 questions to ChatGPT. It has become such a
cornerstone of my daily activities that if I had not interacted with it, I might as well have not turned
on my computer. In this section, I will outline how, as a graduate student, I engage with and leverage
ChatGPT to streamline my workflow, enhance efficiency, and expand my capabilities as a graduate
student and researcher.
1.3.1.2 Teaching Purposes
In the sphere of graduate teaching, my instructional strategies have been significantly enriched by the
integration of ChatGPT. Initially, I leveraged ChatGPT to analyze the course syllabus and required
instructional guidelines, creating a checklist that ensures all educational objectives are met without
my having to manually rescan every page. In my classes, students were
traditionally tasked with developing a basic game using Scratch Jr, which is a basic coding app to
build games, including creating rudimentary design diagrams and hand-drawn mockups. However,
with the introduction of ChatGPT, they can now produce highly detailed, hyperrealistic game design
documents and storyboards with a click of a button. This advanced approach has transformed how
students troubleshoot and refine their coding projects. Students are encouraged to engage with
ChatGPT to address specific programming challenges, asking questions like, “How do I add a new
level to my game?” This shift enables me, as the teacher, to ensure that students progress through
the course efficiently without needing extensive individual attention, which is typically a significant
drain on time. Moreover, the use of ChatGPT extends to career preparation, where students align
their resumes with teaching job descriptions and generate tailored mock interview questions based
on their credentials. I am developing a ChatGPT-based chatbot encompassing comprehensive course
materials, syllabi, study guides, and example assessments. This tool is designed to offer under-
graduate students a 24/7 resource for course-related inquiries, particularly when I am not immedi-
ately available to respond to emails.
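A course chatbot of this kind can be assembled in several ways. The sketch below is illustrative only and is not the author's implementation: it assumes the OpenAI Python client (the openai package) with an API key in the OPENAI_API_KEY environment variable, uses a placeholder model name, invents a few course snippets, and substitutes a naive keyword-overlap lookup for proper retrieval.

```python
# Illustrative sketch of a course-materials chatbot; not the author's actual system.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical course materials; in practice these would be loaded from
# the syllabus, study guides, and example assessments.
COURSE_DOCS = [
    "Syllabus: Late assignments lose 10% per day; the two lowest quiz scores are dropped.",
    "Study guide (week 4): Scratch Jr loops, events, and sprite design basics.",
    "Example assessment: Build a two-level Scratch Jr game with a start screen.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; a real system might use embeddings instead."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def answer(question: str) -> str:
    """Answer a student question using only the retrieved course excerpts."""
    context = "\n".join(retrieve(question, COURSE_DOCS))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a 24/7 course assistant. Answer only from the "
                    "provided course materials; if unsure, tell the student "
                    "to email the instructor.\n\nCourse materials:\n" + context
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("How much is the late penalty on assignments?"))
```

In a fuller version, the keyword lookup would typically be replaced with embedding-based retrieval over the complete set of course documents, and student questions could be logged for instructor review.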
Finally, I utilize ChatGPT to calibrate the feedback provided on undergraduate assessments.
This ensures a balanced delivery of constructive criticism and positive reinforcement, essential
for fostering a productive learning environment. The integration of ChatGPT into my class-
room has enhanced student engagement, participation, and the quality of work. According to a
study that my fellow graduate colleagues and I conducted (Parker et al., 2024a), which reflects
on experiences across the use of ChatGPT within two classes in 2023, approximately 40%
of students reported using AI tools like ChatGPT during my specific course. Further analysis
revealed that over 20% of these students incorporated AI into their assessment tasks. Although
there was no explicit policy prohibiting AI use, a notable trend emerged: the more I discussed
and introduced AI in the classroom, the more students engaged with it in their learning and
assessment processes. This correlation raises a critical question: while the usage of AI in educa-
tional contexts is increasing, does it unequivocally enhance the learning experience, or are there
more complex implications to consider? In my assessment, even within a span of merely one
year, ChatGPT has exhibited the capacity to fundamentally transform the domains of learning
and teaching in a lasting manner. Despite the critiques associated with its growing utilization,
ChatGPT will become deeply integrated into all aspects of education. Its widespread adoption
will come to be viewed as a natural and representative element of our continually evolving edu-
cational landscape.
1.3.1.4 Conclusion
ChatGPT is not just a fleeting phenomenon but a transformative force in the educational land-
scape. While it brings many benefits and challenges, its profound influence is undeniable. We must
approach it with open arms, ready to embrace its potential, yet remain vigilant, discerning when
to accept and when to resist its applications. It is about navigating this new terrain with a balanced
mindset, saying “yes” to innovation and “no” to compromise as we shape the future of learning.
1.3.2 Graduate Student 2
Female, 40–42 years old, White. Ph.D. candidate at the University of Kansas.
My introduction to ChatGPT occurred at the outset of 2023, and as the year draws to a close,
I can reflect upon the evolution of my view of this new AI tool. Initially, I viewed ChatGPT
through the lens of a former English teacher, harboring reservations about its potential impact
on student writing; however, as the fall semester of 2023 commenced, I recognized its preva-
lence and potential warranted a different perspective. ChatGPT was now on everyone’s minds
at our university. Anecdotally, from colleagues, I noticed a fair bit of both skepticism and ignor-
ance of its capabilities, which persisted both in them and in me. My interactions with undergraduates
and my own exploration of the AI software gradually revealed its potential as a valuable tool
akin to a calculator. I began to see the time-saving measures it could offer for K–12 teachers and
college professors. I even began to see the potential positive ways students could utilize the new
technology while still maintaining the ethical standards of their schools and universities. In this
section, I outline how I engage with and utilize ChatGPT to increase efficiency and brainstorm
ideas in my graduate teaching assistant role.
diminishing their creativity, critical thinking abilities, and even self-esteem issues. Our discussions
delve into privacy and data security issues, as high school students may unknowingly share sensitive
information while interacting with AI systems. I encourage my preservice teachers to consider the
implications of data privacy in the digital age and how they can educate their students about respon-
sible online behavior.
1.4.1 Faculty Member 1
Male, 55–64 years old, White. Holds a Ph.D. and was previously chair of a large teaching depart-
ment for 10+ years.
to questions posed by faculty on written assignments not only for coursework but possibly also for
exams that led to the awarding of a degree. Discussions among faculty and several graduate students
who are more tech-savvy focused on how to detect the use of AI on students’ assignments and how
to address the issue of using AI with students in an appropriate manner without appearing to be
accusatory of committing academic misconduct.
1.4.2 Faculty Member 2
Male, 55–64 years old, White. Holds a Ph.D. and serves as an Associate Director at the Teaching
Center.
Bard (which I found unreliable) and Claude 2 (which I have had better luck with). I have found both
ChatGPT and Copilot excellent tools for learning and idea generation. I also use them to check my
thinking as I write or prepare for workshops or speeches about AI. For instance, when the Rotary
Club recently asked me to speak about generative AI and education, I asked Copilot and ChatGPT
what Rotarians might be curious about. The answers contained nothing I had not already considered,
but that was OK. I just needed another opinion.
earlier with plagiarism checkers. Both tools attack symptoms rather than addressing the complex
underlying motivations: students working many hours to pay for school, pressures to maintain high
grades, and education being seen as a consumer product rather than a process of learning, just to
name a few. Beneath them all, though, is a swelling ocean of information eroding a static system
that desperately needs an overhaul. Generative AI is not the problem. It is simply the messenger.
Until we rethink our approaches to teaching and learning and build flexibility into our educational
systems, we will continually struggle with AI and whatever technologies follow.
1.5 FUTURE RESEARCH
Ongoing and future research is imperative, especially in emergent fields such as AI, where the
landscape of inquiry is vast and the potential for scholarly exploration is expansive. This chapter
has outlined the modalities through which undergraduate and
graduate students, alongside faculty members, interact with and incorporate AI into their academic
endeavors. Scholars must embark on comparative studies across diverse academic institutions to
augment the emerging knowledge within this domain. Such investigations would ascertain whether
adoption and attitudes toward AI occur uniformly across different educational settings. Moreover,
this analysis was predicated on input from a limited cohort comprising merely two representatives
from each designated category. To extrapolate these findings to a broader spectrum, it is advisable
to extend the research demographic. This could involve surveying an entire academic department or
university to determine whether the trends identified herein reflect a more widespread sentiment.
Additionally, the opportunity presented by longitudinal studies cannot be overstated in gauging the
evolution of perceptions and applications of AI within academic circles.
Another avenue of scholarly research relates to examining faculty integration of and adaptation
to AI technologies. This chapter has showcased examples of faculty members who have enthusi-
astically adopted AI, yet a thorough investigation into the factors driving faculty adaptation to AI
presents a rich field for further exploration. These propositions merely skim the surface of potential
research endeavors from the initial findings of this chapter. The dynamic and multifaceted nature of
AI’s integration into higher education necessitates a varied approach to research, one that embraces
interdisciplinary methodologies and perspectives to capture the nuances of this technological para-
digm shift fully.
1.6 LIMITATIONS
While this study offers insightful observations and prompts significant inquiries about AI’s utiliza-
tion in higher education, it is constrained by several limitations. First, the research encompasses only
three distinct groups: undergraduates, graduates, and faculty members. This scope does not capture
the full array of perspectives and experiences across the wider academic and administrative spec-
trum. Moreover, the study’s participant count is limited to six individuals. Expanding the sample
size is imperative to enhance the representativeness of the findings and establish broader connections
within the academic community.
Additionally, the methodological approach of this investigation presents further limitations. The
reliance on six individual case studies as the sole source of participant input restricts the depth of the
research. A more robust methodological framework, incorporating both surveys and interviews for
each participant, would significantly enrich the insights garnered from this chapter.
A richer, more detailed exploration of AI’s role within academic environments can be achieved by
addressing these identified limitations in future research initiatives. Such efforts would undoubtedly
yield valuable contributions, enriching the discourse for policymakers, educators, and technologists
and fostering a deeper understanding of the complexities surrounding AI’s integration into educa-
tional practices.
1.7 CONCLUSION
The findings of our investigation, which examined the integration and utilization of ChatGPT and
other AI technologies in higher education among undergraduate and graduate students and fac-
ulty at the University of Kansas, have revealed a mixed landscape of attitudes and experiences.
Despite varying levels of familiarity, participants universally acknowledged the potential of AI tech-
nologies, a sentiment echoed by Adıgüzel et al. (2023), Bearman et al. (2023), and Crompton and
Burke (2023). This chapter reveals that irrespective of their academic standing—undergraduate,
graduate, or faculty—individuals perceive ChatGPT as a tool to bolster productivity, aid in research,
and streamline educational tasks. Even as relative newcomers to AI in their professional pursuits,
they have effectively harnessed AI to enhance efficiency and extend their capabilities. At the same
time, there exists a significant degree of apprehension and concern among the participants of this
study, since ChatGPT remains a relatively new tool in the academic world for both professors and
students. Participants noted worries about the rapid advancements in AI and the potential ethical
dilemmas associated with its use. They are wary of the implications of relying too heavily on AI for
tasks traditionally performed by humans and plagiarism concerns. The apprehensions surrounding
AI’s integration into education are not novel within the scholarly discourse. Pisica et al. (2023)
delved into the perceptions of Romanian academics on the infusion of AI in higher education,
uncovering concerns paralleling those associated with plagiarism in academia. Similarly, Hockly
(2023) highlights a spectrum of ethical quandaries linked to AI, encompassing issues of information
privacy, anonymity, surveillance, autonomy, and the ownership of information. These investigations
underscore a broader academic apprehension toward the ethical implications of AI deployment in
educational settings, reflecting a critical need for ongoing scrutiny and dialogue within this rapidly
evolving technological landscape.
Despite these divergent perspectives, a common thread runs through all participants’
responses: recognition of the potential usefulness of ChatGPT and AI in education. Even those who
approach AI with skepticism acknowledge its value when used judiciously. They emphasize the
importance of carefully considering ethical implications, privacy concerns, and the need for respon-
sible AI development and implementation in educational settings. Neumann et al. (2023) emphasize
the necessity of reviewing AI-generated content prior to its application, a sentiment that resonates
with numerous discussions presented in this chapter. Adopting a cautious stance toward utilizing AI
outputs is advocated as a prudent measure, underscoring the importance of critical engagement with
technology to ensure accuracy and reliability.
This research highlights the nuanced nature of AI adoption in higher education. While there
are varying degrees of enthusiasm and apprehension, all participants acknowledge the transforma-
tive potential of ChatGPT and similar AI technologies. Educational institutions and policymakers
must address these mixed findings by developing clear guidelines and ethical frameworks to ensure
responsible AI use in academia, fostering a balanced and constructive technology integration into
the educational landscape. Doing so requires nimbleness, flexibility, and a dedication to revisiting
this issue due to the dynamism of technology. It is essential to monitor and adapt to the evolving
relationship between AI and education, striving to harness its benefits while mitigating potential
risks. The world of collegiate academia may not be designed to move and change in such ways.
REFERENCES
Adıgüzel, T., Kaya, M. H., & Cansu, F. K. (2023). Revolutionizing education with AI: Exploring the trans-
formative potential of ChatGPT. Contemporary Educational Technology, 15(3), 1–13.
Bearman, M., Ryan, J., & Ajjawi, R. (2023). Discourses of artificial intelligence in higher education: A critical
literature review. Higher Education, 86(2), 369–385.
Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International
Journal of Educational Technology in Higher Education, 20(1), 1–22.
Dearing, J. W., & Cox, J. G. (2018). Diffusion of innovations theory, principles, and practice. Health Affairs,
37(2), 183–190.
Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.
Gill, S. S., Xu, M., Patros, P., Wu, H., Kaur, R., Kaur, K., … & Buyya, R. (2024). Transformative effects of
ChatGPT on modern education: Emerging era of AI chatbots. Internet of Things and Cyber-Physical
Systems, 4, 19–23.
Hockly, N. (2023). Artificial intelligence in English language teaching: The good, the bad and the ugly. RELC
Journal, 54(2), 445–451.
Neumann, M., Rauschenberger, M., & Schön, E. M. (2023, May). “We Need to Talk about ChatGPT”: The
Future of AI and Higher Education. In 2023 IEEE/ACM 5th International Workshop on Software
Engineering Education for the Next Generation (SEENG) (pp. 29–32). IEEE.
Parker, L., Carter, C., Karakas, A., Loper, A. J., & Sokkar, A. (2023). Ethics and improvement: Undergraduate
students’ use of artificial intelligence in academic endeavors. International Journal of Intelligent
Computing Research (IJICR), 13(2), 1187–1194.
Parker, L., Carter, C., Karakas, A., Loper, A. J., & Sokkar, A. (2024a). Artificial Intelligence in Undergraduate
Assignments: An Exploration of the Effectiveness and Ethics of ChatGPT in Academic Work. ChatGPT
and Global Higher Education. STAR Scholar Books.
Parker, L., Hayes, J., Loper, A. J., Karakas, A. (2024b). Navigating the Unknown: Anticipating Concerns
and Gaps in Generative AI Research. General Aspects of Applying Generative AI in Higher
Education: Opportunities and Challenges. Springer.
Parker, L., Carter, C., Karakas, A., Loper, A. J., & Sokkar, A. (2024c). Graduate Instructors Navigating the AI
Frontier: The Role of ChatGPT in Higher Education. Computers and Education Open, 6, 100166.
Pisica, A. I., Edu, T., Zaharia, R. M., & Zaharia, R. (2023). Implementing artificial intelligence in higher edu-
cation: Pros and cons from the perspectives of academics. Societies, 13(5), 118.
Rahiman, H. U., & Kodikal, R. (2024). Revolutionizing education: Artificial intelligence empowered learning
in higher education. Cogent Education, 11(1), 2293431.
Taylor, A. (Director). (2015). Terminator Genisys [Film]. Paramount Pictures.
2.1 INTRODUCTION
The use of artificial intelligence (AI) in education has emerged as an increasingly popular topic in
recent times, as researchers explore the potential of these technologies to revolutionize teaching and
learning practices (Ofosu-Ampong et al., 2023). This reflects a growing awareness of AI’s capacity
to offer innovative solutions to educational challenges, including personalized learning experiences,
adaptive assessment tools, and intelligent tutoring systems (ITSs). While there is mounting evidence
that these technologies can enhance student learning outcomes, further research is required to estab-
lish their effectiveness and validity, particularly in various educational settings and among diverse
student populations.
Since its conceptualization (Stahl, 2022; Li & Gu, 2023), research on AI in education has gained
significant popularity across multiple disciplines. In practical terms, the concept of AI in edu-
cation research aims to leverage external resources to meet the increasing demands of teachers
and students. Therefore, to advance AI in education research, it is crucial to conduct a critical
scoping review of existing studies to uncover: (a) areas or themes that have received less attention,
(b) methods, theories, and AI techniques employed in previous research, and (c) key areas for future
research. Furthermore, as these technologies become more pervasive in educational settings, it is
important to carefully consider their potential impact on privacy, desired outcomes, level of automa-
tion, effectiveness, and how to demonstrate AI in education (Su & Yang, 2023). Additionally, several
papers highlight the need for more research on the effects of AI on student motivation and engagement
(Chan, 2023). Although some evidence suggests that AI-based technologies can improve student
motivation and engagement (Al Darayseh, 2023; Ofosu-Ampong, 2024b), research is needed to
fully understand how these technologies, including their strengths, weaknesses, opportunities, and
threats, impact student attitudes toward learning and their overall learning experiences and concerns.
The existing literature on AI in education studies lacks comprehensive and systematic reviews
on AI development phases, ChatGPT, and emerging technologies (Li & Gu, 2023). In this regard,
conducting a thorough review of the available literature on AI in education research is necessary for
several reasons. Firstly, it allows us to evaluate the current state of research in this area, identifying
the extent of studies that have already been undertaken and potential gaps for future investigations.
Secondly, such a review will contribute to a deeper understanding of the concept of AI in education
research and stimulate further scholarly inquiry.
In light of these considerations, this study aims to provide a comprehensive review of AI in
education research by synthesizing and analyzing existing studies. By doing so, we aim to develop
a framework highlighting the key themes, methodologies, and theories employed in previous
investigations. This review will serve as a valuable resource for researchers and educators interested
in AI in education, guiding their future studies and facilitating a more nuanced understanding of this
important field. The main research objectives motivating this review are:
1. To identify theories, methodologies, and AI techniques that have informed prior AI education
research
2. To examine the emerging themes investigated in previous studies and conduct a SWOT analysis
of AI education research
3. To explore the technological roadmap for emerging technologies for the future of education
4. To explore the gaps that exist in current human–AI interaction in education research that
future studies can investigate
Overall, this review chapter highlights the importance of continued research and development of
AI-based technologies in education, as well as the need for careful consideration of their ethical
and social implications. By leveraging the potential of AI while addressing these challenges, we can
work toward creating more equitable, engaging, and effective learning environments for all students.
The next section presents the literature review; the subsequent sections cover the research
methodology and results, respectively. The review concludes with the future of human–AI inter-
action in education and suggestions.
2.2 LITERATURE REVIEW
2.2.1 An Overview of Using AI in Education
AI in education is a relatively new phenomenon that has been gaining increasing attention over the
past few decades. It has the potential to address some of the educational challenges today. The first
documented use of AI in education dates to the 1960s, when researchers at Stanford University
created the first computer-based instructional system called Project One (Brown & VanLehn, 1980).
This system was designed to provide individualized instruction to students based on their learning
needs and progress. Since then, AI has been used in various ways to enhance the teaching and
learning process. In the 1980s, ITSs were developed, which were designed to provide personalized
learning (Wenger, 1987) to students by using rule-based systems, expert systems, and learning
algorithms to analyze student data and adjust instruction accordingly. This personalized approach
aimed to enhance student engagement, motivation, and, ultimately, learning outcomes.
In the 1990s and 2000s, researchers began exploring the use of natural language processing
(NLP) to create conversational agents, or chatbots, to provide support to students (Joshi, 1991).
Chatbots can provide immediate feedback to students and help them with their assignments,
homework, and exams. Recently, AI has been used to develop intelligent educational games –
gamification, virtual reality (VR) and augmented reality (AR) learning environments, and adaptive
learning systems (Elfarri et al., 2023; Akgun & Greenhow, 2022; Ofosu-Ampong, 2020). These
technologies are designed to engage students in more interactive and personalized learning
experiences and provide educators with insights into student learning patterns and behaviors
(Mustafa et al. 2022).
Currently, AI in education is being used in a wide range of applications, from providing
personalized learning experiences to students and automating administrative tasks for educators,
to creating new educational tools and resources for classrooms. As AI continues to evolve, it can
potentially transform the way we teach and learn (Rahiman & Kodikal, 2024), making education
more accessible, efficient, and effective for students of all ages and backgrounds. Thus, AI’s impact
on education will likely be profound with its advancement.
2.3 RESEARCH METHODOLOGY
The review study tries to understand the importance, context, and impact of AI and emerging tech-
nologies in education. The papers reviewed (n = 32 articles) represent a diverse set of studies on
AI, ChatGPT, and emerging technologies in education. The studies were published in various aca-
demic journals and conference proceedings, demonstrating the growing interest in the field of AI
and education. The selected studies employed a range of methodologies, including experimental
studies, case studies, and literature reviews. Hence, highlighting the breadth of research methods
used to investigate the efficacy of AI in education. Additionally, the studies were conducted in
various countries around the world, suggesting that the application of AI education is a global
phenomenon.
The present study assesses and synthesizes the themes, methods, and frameworks applied in AI
education research. After identifying the problem, a literature search and screening for inclusion
were conducted. The focus was to summarize previous knowledge and generate research questions
and a working definition of AI education research. Briefly, 32 accessible articles were shortlisted
for data analysis and interpretation. Figure 2.1 provides an overview of the search and selection
process.
2.4 RESULTS
2.4.1 Theories Used in AI Education Research
This subsection addresses research objective 1 (RO1) of the chapter, focusing on examining the
theoretical underpinnings found in the sampled papers. The findings reveal that 56.2% of the
publications did not utilize any theory, leading us to categorize these articles under the “no theory”
category. Among the articles that employed theories, the technology acceptance-related theories
(e.g. unified theory of acceptance and use of technology, innovation diffusion theory, and technology-
organization-environment) exhibited the highest usage at 28.1%, followed by the self-determination
theory (3.1%), collaborative learning theory (3.1%), and adaptive learning theory (3.1%). Other
theories, such as situated learning theory, learning style, actor-network theory, and personalized
learning theory accounted for 6.3%.
The analysis brings forth two significant revelations: (a) the majority of AI studies involve
proposed theoretical frameworks and personalized-learning and learning-algorithm models with a
focus on bi-directional theory, and (b) most studies do not use theories, adopting methodological
frameworks instead, so AI research still lacks specific theories. These findings can be attributed to the AI concept’s relative
newness and the common practice of borrowing theories from other fields. Despite AI’s novelty,
it is imperative for researchers to actively work toward developing theories that make substantial
contributions to the advancement of the research area.
2.4.1.2 AI Techniques
This section presents the findings on the AI techniques used in developing AI in education. The
findings reveal that a significant share (46.8%) of AI research uses neural networks, fuzzy logic, and
support vector machines (SVMs). Other AI techniques used include teachable agents, reinforcement
learning, and machine learning (ML) (12.5%); Naïve Bayes and decision trees (12.5%); evolutionary
algorithms, expert models, and syntactic analysis (6.3%); logistic regression, linear online algorithms,
and regression (12.5%); and binary classification and the semantic web (9.4%). The results generally indicate a
preference for ML as an umbrella term covering neural networks, SVMs, and decision trees. Consequently, neural
networks and fuzzy logic appear to be the most suitable AI techniques used in AI education research.
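To make these techniques concrete, the following minimal sketch trains a decision tree and an SVM, two of the methods named above, on synthetic data; the features (weekly study hours, quiz average, forum posts) and the pass/fail labels are invented for illustration and do not come from any of the reviewed studies.

```python
# Illustrative only: synthetic data stands in for the kinds of student-activity
# features used in the reviewed studies; no real dataset is implied.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_students = 200

# Hypothetical features: weekly study hours, quiz average (%), forum posts per week.
X = np.column_stack([
    rng.normal(8, 3, n_students),
    rng.uniform(40, 100, n_students),
    rng.poisson(5, n_students),
])
# Synthetic pass/fail label loosely tied to quiz average and study time.
y = ((0.6 * X[:, 1] + 2.5 * X[:, 0] + rng.normal(0, 5, n_students)) > 60).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [("decision tree", DecisionTreeClassifier(max_depth=3)),
                    ("SVM", SVC(kernel="rbf", C=1.0))]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.2f}")
```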
TABLE 2.1
Themes in AI education research
The Internet of Things (IoT) weaves its magic by connecting physical devices and creating smart
learning environments. Real-time data collection and analysis through interactive whiteboards,
sensors, and wearable technologies empower educators to personalize instruction and optimize
learning outcomes (Hafezad Abdullah et al. 2024). Interestingly, we found that the progression of
emerging technologies falls into three phases necessary to enhance the future of education: short-term
(2021–2025), mid-term (2025–2030), and long-term (2031+). Table 2.2 shows the technological
roadmap, the development or conceptual approaches phase, and implications for the future roadmap.
TABLE 2.2
Technological roadmap of emerging technologies in education (short-term, 2021–2025; mid-term, 2025–2030; long-term, 2031+)
For the future road map of emerging technologies, we anticipate a focus on human-centered design,
promotion of responsible development and use, priority on equity and inclusion, and a huge invest-
ment in teacher training and support. If challenges surrounding the technological roadmap of the
emerging technologies are addressed, we anticipate an effective, equitable, and engaging learning
experience for future students.
In conclusion, these emerging technologies are not mere fads, but catalysts for profound change.
From individualized AI-powered platforms to captivating VR simulations, they hold the key to
making education more personalized, engaging, and effective. Embracing this technological revo-
lution empowers educators and learners alike to thrive in the digital age and forge a brighter future
for education.
different populations. The reproducibility of ChatGPT prompt runs was particularly emphasized
as a major limitation in teaching and learning and producing similar assignments (Elbanna &
Armstrong, 2023).
In practice, several strategies can help educators use ChatGPT productively in the classroom:
1. Focus on specific tasks: Use it to provide feedback on drafts, summarize complex topics, and
generate practice questions (see the sketch following this list).
2. Stress critical thinking: Teach students to evaluate the information output and identify
potential biases.
3. Combine with other approaches: Avoid relying solely on ChatGPT by integrating it with
traditional approaches, e.g., hands-on activities and discussion forums.
4. Set clear guidelines: Establish clear expectations for how students can use ChatGPT
responsibly and ethically.
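As a concrete illustration of the first strategy above, the following minimal sketch asks a model to generate practice questions from a short passage. It assumes the OpenAI Python client with an API key in the OPENAI_API_KEY environment variable; the model name and the sample passage are placeholders rather than recommendations drawn from the reviewed literature.

```python
# Illustrative sketch: generating practice questions from course text.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def practice_questions(passage: str, n: int = 3) -> str:
    """Ask the model for short-answer practice questions grounded in `passage`."""
    prompt = (
        f"Write {n} short-answer practice questions based only on the passage "
        "below, then list the expected answers. Flag anything that cannot be "
        "answered from the passage.\n\nPassage:\n" + passage
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = ("Intelligent tutoring systems use rule-based and machine learning "
              "methods to adapt instruction to individual students.")
    print(practice_questions(sample))
```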
In examining the ethical dimensions of employing ChatGPT in higher education, Huallpa (2023)
advocated for utilizing NLP models to enhance human communication rather than replace it, aiming
to address inherent limitations. To safeguard student privacy and minimize bias, universities should
establish guidelines and ethical frameworks governing the utilization of NLP models. The reception
of ChatGPT integration in terms of accessibility and societal acceptance was found to be moderate
among participants. Previous research suggests the need for further exploration to comprehend the
implications of AI in education, necessitating institutions to address ethical considerations and pro-
vide appropriate training for educators (Ofosu-Ampong et al., 2023). In summary, the incorporation
of ChatGPT into educational settings requires deliberate assessment of its role in augmenting rather
than supplanting human interaction and the development of critical thinking skills. By recognizing
the unique strengths and weaknesses of both humans and ML systems, learners and educators can harness the
potential of ChatGPT while upholding the essential aspects of human learning experiences.
2024), and many institutions used SWOT analysis to assess their current position and develop future
strategies. Figure 2.2 shows the SWOT analysis for the chapter based on the reviewed materials.
Wang et al. (2022) used a SWOT analysis to evaluate the impact of the COVID-19 pandemic
on education. They identified strengths, such as the ability of educational institutions to adapt to new
teaching and online learning methods and high learning flexibility, along with several coping strategies. Weaknesses
such as weak practical function, reduced vision, low efficiency, and effectiveness of communication
were identified. Furthermore, they identified opportunities, such as the multi-directional informa-
tion exchange activity and diversity of learning motivations to improve the use of technology in
education; and threats included network instability, a requirement of strong self-control and the
complexity of online resources.
In conclusion, SWOT analysis is a valuable tool that can be used in the education industry in the
AI era to evaluate an institution to determine its current position and develop strategies for future
growth and success (Farrokhnia et al., 2023). With the current development of AI and emerging
technologies, the following SWOT is discussed as reviewed:
Strengths:
• The review covers a wide range of topics related to AI’s application in education, including
personalized learning, student engagement, equity and access, and teacher support.
• The studies employ a range of methodologies, including experimental studies, case studies,
and literature reviews, which provide a more comprehensive understanding of the potential of
AI in education.
• The studies are conducted in various countries around the world, suggesting that the applica-
tion of AI in education is a global phenomenon and of varied interest.
• The papers highlight the potential of AI-based technologies to transform various aspects of
teaching and learning and suggest promising areas for future research.
Weaknesses:
• The reviewed papers are limited in number and cover a relatively short time frame, which may
not provide a comprehensive understanding of the field.
• There is a tendency to focus on a specific intervention or technology, which may limit their
generalizability to other contexts.
• Many of the studies are experimental, which can be time-consuming and resource-intensive
and may not be feasible in all educational settings.
• The studies are conducted in a range of educational settings and contexts, which may make it
difficult to compare results across studies.
Opportunities:
• There is a growing interest in the application of AI in education, which provides opportunities
for further research and development, despite the lack of clear policy direction from educa-
tional institutions.
• The potential of AI-based technologies to improve personalized learning, student engagement,
equity and access, and teacher support provides opportunities for innovation in educational
practice.
• There is a need for research on the ethical and social implications of AI in education, which
provides opportunities for interdisciplinary collaboration.
Threats:
• There are concerns about the potential negative effects of AI-based technologies on edu-
cational outcomes, such as increased dependence on technology and reduced face-to-face
interaction.
• As previously stated, there are concerns about AI’s ethical and social implications in educa-
tion, such as privacy concerns and the potential for bias and discrimination.
• The high cost of developing and implementing AI-based technologies may limit their accessi-
bility in some educational settings.
• There may be resistance from educators and students to the use of AI-based technologies in
education, which may limit their adoption and effectiveness.
Overall, the SWOT analysis highlights the potential and challenges of the application of AI in educa-
tion. While there are opportunities for innovation and improved educational outcomes, there are also
threats related to ethical and social implications, cost, and adoption (Garcia et al. 2024). Addressing
these challenges is critical in realizing the potential of AI in education.
engagement, equity and access, and teacher support suggest that these are promising areas for future
research.
Recurring Themes:
• Personalized learning: One recurring theme across the papers was the use of AI-based tech-
nologies to provide personalized learning experiences for students (Zhou, 2013). ITSs and
adaptive learning platforms were the most studied interventions for personalized learning.
Several papers reported positive effects on student learning outcomes, suggesting that AI-
based personalized learning is a promising area for future research.
• Student engagement: Another recurring theme was the use of AI-based tools to increase stu-
dent engagement and motivation in learning activities. Chatbots and game-based learning
environments were among the most studied interventions for student engagement. The papers
reported mixed results for the effectiveness of these interventions, suggesting that further
research is needed to understand better the potential of AI-based tools for increasing student
engagement.
• Equity and access: Several papers discussed the potential for AI-based technologies to improve
equity and access in education (Tsai et al. 2020; Ofosu-Ampong, 2023). The studies focused
on how AI can support students from marginalized backgrounds or with disabilities, through
personalized learning and assistive technologies. The results of these studies were generally
positive, highlighting the potential for AI to improve equity and access in education.
• Teacher support: The review found that AI-based tools can be used to support teachers in their
instructional practice, such as through automated grading and feedback systems. The results
of these studies were mixed, with some suggesting that AI-based tools can save teachers time
and improve their efficiency. In contrast, others highlighted concerns about the accuracy and
fairness of automated grading (Hacker, 2018).
Figure 2.3 shows a conceptualization for this review chapter to encompass the AI problems iden-
tified, methodological approaches, themes, SWOT analysis, recurring themes, and the future of
human–AI interaction.
Although most people expect AI to yield consistent outcomes, this is not always the case. Inconsistencies can
erode trust in AI, posing challenges for subsequent repairs. Despite tempered expectations from
automated systems, breaches of trust make it challenging to heavily rely on AI results. In short, the
key takeaway for higher education in building trust in human–AI interaction involves a combination
of transparency, fairness, user empowerment, user feedback and iterative improvement, education,
human-centric design, and a commitment to ethical practices. We project that, as AI technologies
continue to advance, trustworthy and user-friendly systems will be essential to their successful inte-
gration into various aspects of society.
2.5.2 Diversification
Diversification is the process of merging different elements to create something new. In the context
of education, diversification can refer to the process of combining different teaching and learning
methods to create a more effective learning experience for students (Chou et al. 2022). One way that
diversification is being used in education is through digital platforms. These platforms allow human
and AI agents to work collaboratively to enhance one another’s skills and abilities. Humans can also
augment AI agents in many ways; for example, as sketched after the following list, humans can:
1. Teach AI agents to improve their understanding of language nuances and contextual know-
ledge, fix mistakes, and learn from their accomplishments.
2. Make it easier for non-technical individuals to understand the reasoning behind decisions
made by opaque ML algorithms, enhancing transparency and accountability.
3. Apply judgment, ethical values, and educational co-creation principles to ensure AI agents
consistently behave ethically and comply with relevant standards (Rai et al., 2019; Robayo-Pinzon
et al., 2023).
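A minimal sketch of how such human augmentation of an AI agent could be operationalized is shown below. The CorrectionLog class, the review_and_correct helper, and the toy agent are illustrative assumptions for this discussion, not components of any specific educational platform.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class CorrectionLog:
    """Stores human corrections so that the agent (or its maintainers) can learn from them."""
    entries: List[dict] = field(default_factory=list)

    def add(self, prompt: str, agent_answer: str, human_answer: str, rationale: str) -> None:
        self.entries.append({
            "prompt": prompt,
            "agent_answer": agent_answer,
            "human_answer": human_answer,
            "rationale": rationale,  # e.g., an ethical or pedagogical justification
        })


def review_and_correct(agent: Callable[[str], str],
                       prompt: str,
                       human_review: Callable[[str, str], Tuple[bool, str, str]],
                       log: CorrectionLog) -> str:
    """Run the agent, let a human teacher accept or correct the output, and log any correction."""
    draft = agent(prompt)
    accepted, final_answer, rationale = human_review(prompt, draft)
    if not accepted:
        log.add(prompt, draft, final_answer, rationale)
    return final_answer


if __name__ == "__main__":
    log = CorrectionLog()
    toy_agent = lambda p: "Draft answer produced by the AI agent."  # stand-in for a real agent
    reviewer = lambda p, draft: (True, draft, "")                   # stand-in for a human teacher
    print(review_and_correct(toy_agent, "Explain photosynthesis briefly.", reviewer, log))
```

In a classroom deployment, the human_review step would be an instructor checking the agent's draft; the accumulated log could then inform re-prompting or retraining, mirroring points 1–3 above.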
The use of diversification in education is still in its early stages, but its potential growth is undeni-
able. As AI systems become more sophisticated, they will be able to play an increasingly important
role in education. This could lead to significant changes in the way that education is delivered.
2.5.3 Synthesis
Similarly, synthesis is the process of combining different elements to create something new
(Brunsting et al., 2014). However, it emphasizes a broad spectrum of AI capabilities, encompassing speech
synthesis, image generation, machine translation, and explanation generation. In the context of edu-
cation, synthesis can refer to the process of combining different teaching and learning methods to
create a more effective AI learning experience for students.
One way that synthesis is being used in education is through digital platforms or chatbots (Li
et al. 2021). These platforms allow human and AI agents to work together to create an integrated
unit to perform complex tasks. For example, in medical schools, surgical trainees and AI-powered robots (Li, Lam &
See, 2021) can function as integrated units to perform minimally invasive surgeries. This type of
synthesis has the potential to revolutionize education. By combining the strengths of humans and
AI agents, the platforms can create learning experiences that are more personalized, more engaging,
and more effective than traditional methods.
The use of synthesis in education has several potential benefits. For example, synthesis can give
students access to a wider range of teaching and learning methods. However, as stated earlier, several
challenges need to be addressed. For instance, it is important to ensure that AI systems are used in a
way that is ethical and responsible, that humans and AI agents can work together effectively, and that
the use of synthesis does not lead to job losses for teachers (Holstein et al., 2020). Overall, the use
of synthesis in education is a complex issue with both potential benefits and challenges. Hence, it is
essential to carefully evaluate the potential impact of synthesis before making any decisions about
its use in the classroom (Zawacki-Richter et al., 2019). Further research is needed to examine the
types of human–AI interactions required for different tasks that are performed on chatbot platforms.
At the institutional level, schools and universities will need to make adjustments such as updating curricula to accommodate generative AI tools. While some
academic institutions currently enforce a complete ban on the use of AI tools like ChatGPT
for school-related tasks, it is essential to reevaluate this policy and actively involve instructors in
utilizing the tool to maximize its benefits for students.
At the national level, governments are responsible for establishing regulations that balance
safeguarding users against the abuse and misuse of generative AI with encouraging tech-
nology companies to invest in this transformative innovation (Ofosu-Ampong, 2021). Some analysts
express concerns that such regulations could potentially hinder the development and utilization of
these systems (Cantú-Ortiz et al. 2020; Chu et al. 2022; Ofosu-Ampong et al. 2023). Therefore, it
becomes crucial to assess the societal value of ChatGPT and other generative AI tools to formulate
regulations that effectively govern their usage. Additionally, given the global reach of tools like
ChatGPT, different jurisdictions should collaborate to develop regulations that can be universally
embraced.
The integration of AI into teaching, research, and learning is still in its early stages, and it is
too early to definitively predict what the long-term impact will be. However, AI has the potential
to both improve and exacerbate the challenges facing innovation in education (see Appendix). It is
important to consider these challenges carefully as AI continues to develop. In particular, it is important that:
1. Students develop adaptive, collaborative, and critical thinking skills to solve complex and unstructured problems;
2. Managers of higher education reevaluate the purpose of higher education and prepare students for a workforce in which AI is embedded;
3. Educational stakeholders at the regional and national levels collaborate to discuss best practices for the future of higher education.
APPENDIX
MAPPING CONCEPTUAL APPROACHES TO JOURNAL ARTICLE PUBLICATIONS
ON AI AND EMERGING TECHNOLOGIES RESEARCH IN EDUCATION
Journal | Authors | Conceptual approaches identified | Research issue
IEEE Access (Education Society) (x2) | Wang et al. (2022); Elfarri, Rasheed, and San (2023) | Strengths, weaknesses, opportunities and threats; multi-criteria decision-making (MCDM) method; artificial intelligence-driven digital twin in virtual reality | SWOT analysis phase
MIS Quarterly (1) | Rai, Constantinides, and Sarker (2019) | Spectrum of human–AI hybrids in digital platforms | Human–AI interaction phase
ECNU Review of Education (1) | Su and Yang (2023) | Framework for applying generative AI | Development phase
Harvard Data Science Review (1) | Floridi and Cowls (2019) | A unified framework for ethical AI in society |
Educational Technology & Society (1) | Li and Gu (2023) | Human-centered AI framework |
Computers in Human Behaviour (x2) | Magsamen-Conrad and Dillon (2020) | Diffusion of innovation | SWOT analysis phase
Information and Knowledge Management (1) | Ofosu-Ampong et al. (2023) | Trust, innovativeness and psychological modelling of AI | Human–AI interaction phase
Education and Information Technologies (x2) | Magen-Nagar and Tal (2020); Chou et al. (2022) | Ethical considerations; ICT self-efficacy (ICT-SE) and human–computer interaction experience (HCIE) |
Medical Science Educator (1) | Li, Lam, and See (2021) | AI-powered chatbot | Application phase
ACM Transactions on Computer-Human Interaction (1) | Gao et al. (2023) | AI-assisted human-to-human collaboration | Human–AI interaction phase
AI and Ethics (1) | Akgun and Greenhow (2022) | Ethical concerns and potential risks of AI applications in education |
European Journal of Contemporary Education (1) | Fedorova and Skobleva (2020) | Application of blockchain technology | Application phase
Journal of Human-Computer Interaction (1) | Li and Wang (2023) | Experiment with integrating AI-powered chatbots |
International Journal of Educational Technology in Higher Education (1) | Chan (2023) | Motivation to apply AI and learning environment; framework for AI integration | Human–AI interaction
Cogent Education (1) | Rahiman and Kodikal (2024) | Institutional transformation in education with AI for accessibility and learning outcomes |
Computers and Education: Artificial Intelligence (x2) | Al Darayseh (2023) | AI-empowered learning |
Socius (1) | Joyce et al. (2021) | AI inequalities and structural change; social shaping of AI in practice |
The Service Industries Journal (1) | Ivanov (2023) | The unintended and dark side of AI in education | Development phase
Journal of Education Policy (1) | Gulson and Sellar (2024) | Anticipating disruption to create new norms in education governance and policy; new cognitive infrastructures | Human–AI interaction phase
International Journal of Human-Computer Interaction (1) | Robayo-Pinzon et al. (2023) | AI and emerging technologies value co-creation process in education | Development phase
REFERENCES
Abujaber, A. A., Abd-Alrazaq, A., Al-Qudimat, A. R., Nashwan, A. J., & AbuJaber, A. (2023). A strengths,
weaknesses, opportunities, and threats (SWOT) analysis of ChatGPT integration in nursing education: a
narrative review. Cureus, 15(11), e48643.
Adiguzel, T., Kaya, M. H., & Cansu, F. K. (2023). Revolutionizing education with AI: Exploring the trans-
formative potential of ChatGPT. Contemporary Educational Technology, 15(3), ep429.
Akgun, S., & Greenhow, C. (2022). Artificial intelligence in education: Addressing ethical challenges in K-12
settings. AI and Ethics, 2(3), 431–440.
Al Darayseh, A. (2023). Acceptance of artificial intelligence in teaching science: Science teachers' perspective.
Computers and Education: Artificial Intelligence, 4, 100132.
Alavi, M., & Carlson, P. (1992). A review of MIS research and disciplinary development. Journal of Management
Information Systems, 8(4), 45–62.
Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer
as woman is to homemaker? Debiasing word embeddings. In Advances in neural information processing
systems (pp. 4349–4357), NIPS.
Borji, A. (2023). A categorical archive of ChatGPT failures. arXiv preprint arXiv:2302.03494.
Brown, J. S., & VanLehn, K. (1980). Repair theory: A generative theory of bugs in procedural skills. Cognitive
Science, 4(4), 379–426. https://doi.org/10.1016/0364-0213(80)90024-3
Brunsting, N. C., Sreckovic, M. A., & Lane, K. L. (2014). Special education teacher burnout: A synthesis of
research from 1979 to 2013. Education and Treatment of Children, 37(4), 681–711.
Cantú-Ortiz, F. J., Galeano Sánchez, N., Garrido, L., Terashima-Marin, H., & Brena, R. F. (2020). An artifi-
cial intelligence educational strategy for the digital transformation. International Journal on Interactive
Design and Manufacturing (IJIDeM), 14, 1195–1209.
Chaka, C. (2023). Fourth industrial revolution—A review of applications, prospects, and challenges for arti-
ficial intelligence, robotics and blockchain in higher education. Research and Practice in Technology
Enhanced Learning, 18, 002–002.
Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning.
International Journal of Educational Technology in Higher Education, 20(1), 38.
Chou, C. M., Shen, T. C., Shen, T. C., & Shen, C. H. (2022). Influencing factors on students’ learning effective-
ness of AI-based technology application: Mediation variable of the human–computer interaction experi-
ence. Education and Information Technologies, 27(6), 8723–8750.
Chu, S. T., Hwang, G. J., & Tu, Y. F. (2022). Artificial intelligence-based robots in education: A systematic
review of selected SSCI publications. Computers and Education: Artificial Intelligence, 3, 100091.
Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the
era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239.
Creswell, A., White, T., Dumoulin, V., Arulkumaran, K., Sengupta, B., & Bharath, A. A. (2018). Generative
adversarial networks: An overview. IEEE Signal Processing Magazine, 35(1), 53–65.
Davenport, T. H. (2019). Can we solve AI’s ‘Trust Problem’? MIT Sloan Management Review, 60(2), 18–19.
Dujmovic, J. (2017). Opinion: What’s holding back artificial intelligence? Americans don’t trust it. Retrieved,
5(12), 2021.
Elbanna, S., & Armstrong, L. (2023). Exploring the integration of ChatGPT in education: adapting for the
future. Management & Sustainability: An Arab Review, 3(1), 16–29.
Elfarri, E. M., Rasheed, A., & San, O. (2023). Artificial intelligence-driven digital twin of a modern house
demonstrated in virtual reality. IEEE Access, 11, 35035–35058.
Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2023). A SWOT analysis of ChatGPT: Implications
for educational practice and research. Innovations in Education and Teaching International, 61(3),
460–474.
Fedorova, E. P., & Skobleva, E. I. (2020). Application of blockchain technology in higher education. European
Journal of Contemporary Education, 9(3), 552–571.
Fitria, T. N. (2021). Grammarly as AI-powered English writing assistant: Students’ alternative for writing
English. Metathesis, 5(1), 65–78.
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science
Review, 1(1).
Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications,
challenges, and AI-human collaboration. Journal of Information Technology Case and Application
Research, 25(3), 277–304.
Gao, C. A., Howard, F. M., Markov, N. S., Dyer, E. C., Ramesh, S., Luo, Y., & Pearson, A. T. (2022). Comparing
scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output
detector, plagiarism detector, and blinded human reviewers. bioRxiv, 1–12.
Gao, J., Choo, K. T. W., Cao, J., Lee, R. K. W., & Perrault, S. (2023). CoAIcoder: Examining the effectiveness
of AI-assisted human-to-human collaboration in qualitative analysis. ACM Transactions on Computer-
Human Interaction, 31(1), 1–38.
Garcia, M. B., Garcia, P. S., Maaliw, R., Lagrazon, P., Arif, Y., Ofosu-Ampong, K., Yousef, A. & Vaithilingam, C.
(2024). Technoethical considerations for advancing health literacy and medical practice: A posthumanist
framework in the age of healthcare 5.0. In Emerging Technologies for Health Literacy and Medical
Practice. IGI Global.
Girasa, R. (2020). AI as a disruptive technology. In Artificial Intelligence as a Disruptive
Technology: Economic Transformation and Government Regulation (pp. 3–21). Palgrave Macmillan.
Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research.
Academy of Management Annals, 14(2), 627–660.
Goel, S., & Khetan, A. (2020). Impact of artificial intelligence on education. In Handbook of Research on
Educational Technology Integration and Active Learning (pp. 1–20). IGI Global.
Gulson, K. N., & Sellar, S. (2024). Anticipating disruption: Artificial intelligence and minor experiments in
education policy. Journal of Education Policy, 1–16.
Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic
discrimination under EU law. Common Market Law Review, 55(4), 1–35.
Hafezad Abdullah, K., Gazali, N., Muzawi, R., Syam, E., Firdaus Roslan, M., & Sofyan, D. (2024). Internet
of things (IoT) in education: A bibliometric review. International Journal of Information Science and
Management, 22(1), 183–202.
Holstein, K., Aleven, V., & Rummel, N. (2020). A conceptual framework for human–AI hybrid adaptivity in
education. In Artificial Intelligence in Education: 21st International Conference, AIED 2020, Ifrane,
Morocco, July 6–10, 2020, Proceedings, Part I 21 (pp. 240–254). Springer International Publishing.
Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudik, M., & Wallach, H. (2019). Improving fairness
in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI
Conference on Human Factors in Computing Systems (pp. 1–16).
Huallpa, J. J. (2023). Exploring the ethical considerations of using Chat GPT in university education. Periodicals
of Engineering and Natural Sciences, 11(4), 105–115.
Hwang, G. J., & Chien, S. Y. (2022). Definition, roles, and potential research issues of the metaverse in educa-
tion: An artificial intelligence perspective. Computers and Education: Artificial Intelligence, 3, 100082.
Ivanov, S. (2023). The dark side of artificial intelligence in higher education. The Service Industries Journal,
43(15–16), 1055–1082.
Joshi, A. K. (1991). Natural language processing. Science, 253(5025), 1242–1249.
Joyce, K., Smith-Doerr, L., Alegria, S., Bell, S., Cruz, T., Hoffman, S. G., … & Shestakofsky, B. (2021).
Toward a sociology of artificial intelligence: A call for research on inequalities and structural change.
Socius, 7, 2378023121999581.
Juric, M., Sandic, A., & Brcic, M. (2020). AI safety: State of the field through quantitative lens. In 2020 43rd
International Convention on Information, Communication and Electronic Technology (MIPRO) (pp.
1254–1259). IEEE.
Kaviyaraj, R., & Uma, M. (2022). Augmented reality application in classroom: An immersive taxonomy. In
2022 4th International Conference on Smart Systems and Inventive Technology (ICSSIT) (pp. 1221–
1226). IEEE.
Kostka, I., & Toncelli, R. (2023). Exploring applications of ChatGPT to English language teaching: Opportunities,
challenges, and recommendations. TESL-EJ, 27(3), 1–19.
Langer, M., König, C. J., Back, C., & Hemsing, V. (2023). Trust in artificial intelligence: Comparing trust
processes between human and automated trustees in light of unfair bias. Journal of Business and
Psychology, 38(3), 493–508.
Leiber, T., Stensaker, B., & Harvey, L. C. (2018). Bridging theory and practice of impact evaluation of quality
management in higher education institutions: A SWOT analysis. European Journal of Higher Education,
8(3), 351–365.
Li, P. P., & Wang, B. (2023). Artificial intelligence in music education. International Journal of Human–
Computer Interaction, 1–10.
Li, S., & Gu, X. (2023). A risk framework for human-centered artificial intelligence in education. Educational
Technology & Society, 26(1), 187–202.
Li, Y. S., Lam, C. S. N., & See, C. (2021). Using a machine learning architecture to create an AI-powered
chatbot for anatomy education. Medical Science Educator, 31, 1729–1730.
Liao, Y. K., Chen, N. S., & Lai, C. Y. (2018). Can digital game-based learning improve students’ motivation
towards mathematics learning? A meta-analysis. Educational Research Review, 24, 122–137.
Lukyanenko, R., Maass, W. & Storey, V.C. (2022). Trust in artificial intelligence: From a Foundational Trust
Framework to emerging research opportunities. Electron Markets, 32, 1993–2020. https://doi.org/
10.1007/s12525-022-00605-4
Magen-Nagar, N., & Tal, T. (2020). Ethical considerations for the application of artificial intelligence in educa-
tion. Education and Information Technologies, 25(6), 5171–5185.
Magsamen-Conrad, K., & Dillon, J. M. (2020). Mobile technology adoption across the lifespan: A mixed
methods investigation to clarify adoption stages, and the influence of diffusion attributes. Computers in
Human Behavior, 112, 106456.
Mariani, M. M., Machado, I., & Nambisan, S. (2023). Types of innovation and artificial intelligence: A sys-
tematic quantitative literature review and research agenda. Journal of Business Research, 155, 113364.
Mhlanga, D. (2023). Open AI in education, the responsible and ethical use of ChatGPT towards lifelong
learning (pp. 387–409). Springer Nature Switzerland.
Ministry of Internal Affairs and Communications. (2016). Research into the impact of ICT evolution on
employment and working practices. Ministry of Internal Affairs and Communications, Japan. Retrieved
from www.soumu.go.jp/johotsusintokei/linkdata/h28_03_houkoku.pdf
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping
the debate. Big Data & Society, 3(2), 2053951716679679.
Mustafa, A. S., Alkawsi, G. A., Ofosu-Ampong, K., Vanduhe, V. Z., Garcia, M. B., & Baashar, Y. (2022).
Gamification of E-learning in African universities: Identifying adoption factors through task-technology
fit and technology acceptance model. In Next- Generation Applications and Implementations of
Gamification Systems (pp. 73–96). IGI Global.
Ofosu-Ampong, K. (2020). The shift to gamification in education: A review on dominant issues. Journal of
Educational Technology Systems, 49(1), 113–137.
Ofosu-Ampong, K. (2021). Determinants, barriers and strategies of digital transformation adoption in a
developing country Covid-19 era. Journal of Digital Science, 3(2), 67–83.
Wenger, E. (1987). Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to
the Communication of Knowledge. Morgan Kaufmann.
Wu, Y. (2023). Integrating generative AI in education: How ChatGPT brings challenges for future learning and
teaching. Journal of Advanced Research in Education, 2(4), 6–10.
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Artificial intelligence in educa-
tion: Opportunities and challenges. International Journal of Educational Technology in Higher
Education, 16(1), 1.
Zhang, G., Chong, L., Kotovsky, K., & Cagan, J. (2023). Trust in an AI versus a Human teammate: The effects
of teammate identity and performance on Human-AI cooperation. Computers in Human Behavior, 139,
107536.
Zhou, H. (2013). Understanding Personal Learning Environment: a literature review on elements of the con-
cept. In Society for Information Technology & Teacher Education International Conference (pp. 1161–
1164). Association for the Advancement of Computing in Education (AACE).
3.1 INTRODUCTION
With the emergence of artificial intelligence (AI), especially large language models (LLMs), aca-
demic education is experiencing a transformative shift (Farrokhnia et al., 2023). This chapter
presents an in-depth exploration of the integration of the LLM ChatGPT,1 a conversational AI model
developed by OpenAI and currently one of the most widely used LLMs, into an academic curric-
ulum. Focusing on its application in assisting undergraduate students in a research seminar on digital
innovation, this chapter comprehensively analyzes the methodology, implementation, and outcomes
of using ChatGPT to write seminar theses, thereby shedding light on its impact on students’ learning
experiences.
LLMs, such as ChatGPT, are marking significant advancements in natural language processing
and machine learning (Rahman & Watanobe, 2023). Their proficiency in understanding, responding,
and generating human-like text has created new pathways in various domains, education being a not-
able one. In the educational context, LLMs have the potential to revolutionize traditional teaching
methodologies (Farrokhnia et al., 2023). They create personalized learning experiences, assist in
research and writing, and foster a more interactive and engaging learning environment. However,
integrating these tools is not without its challenges. For instance, using ChatGPT in academic edu-
cation raises ethical concerns, such as the potential for encouraging plagiarism, necessitating a
balanced and critical approach to its application (Farrokhnia et al., 2023; Tlili et al., 2023).
An essential aspect of integrating LLMs into the educational process is teaching students how
to use them while understanding their (current) limitations. In approaching this integration, it is
essential to consider the teachers’ and the students’ viewpoints and experiences (Farrokhnia et al.,
2023; Rahman & Watanobe, 2023; Tlili et al., 2023). This dual approach ensures a more holistic
understanding of the tool’s impact and effectiveness. We incorporated ChatGPT in an undergraduate
research seminar, providing students with guidance on its efficient use, including techniques in
prompt engineering, for researching and writing a seminar paper.
ChatGPT contributed to students’ process of scientific writing in several meaningful ways. First,
it helped students develop their research skills by formulating relevant research questions and struc-
turing their seminar papers. Second, ChatGPT acted as a novel tool for drafting seminar papers,
improving students’ writing skills and efficiency. Third, it facilitated access to a range of academic
references, streamlining the research process. Finally, ChatGPT provided an opportunity to evaluate
its effectiveness in improving educational outcomes and student engagement.
This chapter presents a detailed account of how ChatGPT can be integrated into the scientific
writing process and help to structure, draft, and proofread a seminar paper. How students interacted
with ChatGPT and how it influenced their learning experience were also examined. To measure
how students perceived the tool’s effectiveness, we used questions inspired by the unified theory
of acceptance and use of technology (UTAUT; Venkatesh et al., 2003), one of the most influen-
tial technology adoption theories, and open-ended questions in surveys that students responded to
before and after using ChatGPT. This approach allowed for a comprehensive view of the students’
perspectives, capturing quantitative and qualitative data. The pre- and post-survey design was instru-
mental in observing how students’ opinions and experiences changed by using ChatGPT in their
seminar work.
The study analyzed LLMs’ pedagogical use in teaching scientific writing. It delivers relevant
insights into the emerging role of LLMs in educational contexts and provides implications for future
pedagogical strategies. The findings contribute to the academic discourse on AI in education and
offer practical insights for educators and institutions looking to integrate LLMs into their teaching
and learning processes.
The following sections of the chapter outline the theoretical foundation, explain the research
method employed, and present an analysis of qualitative and quantitative survey data pre- and post-
use of ChatGPT. This is followed by a discussion of the findings, future research directions, and the
study’s limitations.
3.2 BACKGROUND
3.2.1 LLMs
LLMs are advanced conversational AI models. They are based on the generative pre-trained trans-
former (GPT) architecture, which utilizes deep learning techniques to generate human-like text
(Susnjak & McIntosh, 2022). ChatGPT is an LLM developed by OpenAI. It is trained on different
internet texts and can respond to user queries coherently and accurately (Teubner et al., 2023). Its
capabilities include answering questions, providing explanations, and assisting in content creation,
making it a versatile tool in various applications, including education (Teubner et al., 2023).
Prompt engineering has emerged as a critical aspect in maximizing the effectiveness of LLMs like
ChatGPT (White et al., 2023). It involves strategically formulating input prompts to elicit the most
accurate and relevant responses (White et al., 2023). This practice is essential in guiding the LLM
to understand the context and intent of the query more effectively, thereby producing more precise
and useful outputs. White et al. (2023) explore various strategies and techniques in prompt engin-
eering, offering insights into how users can better interact with LLMs. This includes understanding
the model’s limitations, using specific language to guide the LLM’s response, and experimenting
with different prompt styles to achieve the desired outcome. The study’s implications are profound
for educational settings, as prompt engineering can enhance the learning experience, aid in research,
and facilitate the creation of academic content (White et al., 2023). In summary, integrating LLMs in
educational contexts, supported by the art of prompt engineering, presents a transformative oppor-
tunity. It enhances the interaction with AI and opens new avenues for personalized and efficient
learning experiences (Baidoo-Anu & Owusu Ansah, 2023; Shoufan, 2023; White et al., 2023).
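As a hedged illustration of these prompt-engineering ideas, the sketch below wraps a request in a structured prompt (persona, task, topic, and explicit constraints) before handing it to an LLM. The build_prompt and ask_llm helpers and the call_llm placeholder are assumptions for illustration only; they do not refer to the ChatGPT interface or to any particular API.

```python
def build_prompt(task: str, topic: str, constraints: list[str]) -> str:
    """Compose a structured prompt: persona, task, topic, and explicit constraints."""
    constraint_text = "\n".join(f"- {c}" for c in constraints)
    return (
        "You are a research assistant supporting an undergraduate seminar paper.\n"
        f"Task: {task}\n"
        f"Topic: {topic}\n"
        "Constraints:\n"
        f"{constraint_text}\n"
        "If you are unsure about a fact or a reference, say so explicitly."
    )


def ask_llm(call_llm, task: str, topic: str) -> str:
    """call_llm is a placeholder for whatever LLM interface is available in a given setting."""
    prompt = build_prompt(
        task=task,
        topic=topic,
        constraints=[
            "Propose three candidate research questions, not final answers.",
            "Do not invent citations; mark every reference as 'to be verified'.",
            "Keep the answer under 200 words.",
        ],
    )
    return call_llm(prompt)


if __name__ == "__main__":
    fake_llm = lambda prompt: f"[model response to a {len(prompt)}-character prompt]"
    print(ask_llm(fake_llm, task="Suggest research questions",
                  topic="Fair remuneration for artists through NFTs"))
```

The explicit constraints illustrate how prompt engineering can steer an LLM toward outputs that acknowledge uncertainty rather than inventing references, which matters in academic use.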
Farrokhnia et al. (2023) present a SWOT analysis of ChatGPT in education, in which strengths, such as
enhancing information access and personalizing the learning experience, are counterbalanced by threats
like limited context understanding, risks to academic integrity, perpetuation of discrimination,
normalization of plagiarism, and a potential decline in higher-order cognitive skills among learners.
The study concludes with a call for a comprehensive agenda in educational practice and research to
effectively navigate the era of ChatGPT, addressing these strengths, weaknesses, opportunities, and threats.
Shoufan (2023) investigated senior computer engineering students’ perceptions of ChatGPT
through a two-step process. First, students engaged with ChatGPT in a learning activity and then
provided evaluations of their experience. This qualitative feedback, comprising over 3,000 words,
was thematically analyzed, identifying 36 distinct codes and 15 thematic categories, such as “helpful
for learning” or “enthusiasm and appreciation.” Following this initial analysis, a comprehen-
sive 27-item questionnaire was developed based on the previously identified themes and codes.
Students were asked to complete this survey after three weeks, during which time they continued
interacting with ChatGPT. This study’s results revealed a mixed perception of ChatGPT among
the students. On the positive side, students admired ChatGPT’s capabilities, noting its engaging,
motivating nature and usefulness in academic and professional contexts. They appreciated the
ease of use and the human-like quality of the interface, which provided well-structured and
clear explanations. However, the study also revealed concerns about the accuracy of ChatGPT’s
responses. Students felt that effective use of ChatGPT required solid background knowledge, as
the tool was not a substitute for human intelligence (Shoufan, 2023). This highlighted the need for
improvements in the accuracy of ChatGPT’s outputs and thorough training of students for prompt
engineering.
Our exploration into the acceptance and usability of ChatGPT in educational settings also
answers the call of Tlili et al. (2023), who articulate the complex dynamics of employing AI in
learning environments. They advocate for carefully evaluating the transformative potential against
the backdrop of ethical considerations, quality of interactions, and the importance of fostering crit-
ical thinking skills.
Our study builds upon the critical insights of Farrokhnia et al. (2023), Shoufan (2023), and Tlili
et al. (2023), which collectively articulate the diverse strengths, weaknesses, and ethical challenges
associated with integrating ChatGPT into educational settings; it is further inspired by the UTAUT
(Venkatesh et al., 2003). The UTAUT represents a robust, empirically validated model for understanding
people’s technology acceptance and usage. It synthesizes elements from eight different user
acceptance theories, creating a comprehensive framework that has been empirically shown to
outperform its constituent models. As applied here, the model covers eight key constructs that are central
to predicting and explaining user behavior toward technology adoption: performance expectancy,
effort expectancy, social influence, facilitating conditions, attitude toward using technology, self-
efficacy, anxiety, and behavioral intention to use the technology. These constructs are moderated by indi-
vidual characteristics, such as gender, age, experience, and voluntariness of use.
While the limited sample of our seminar participants does not allow us to assess how the UTAUT
explains students’ opinions and acceptance of LLMs in educational settings, we use its constructs
to gain first insights into the extent to which students perceive ChatGPT as useful (performance
expectancy) and easy to use (effort expectancy) and how their use of ChatGPT is influenced by
the opinions of others (social influence) and the provision of necessary resources and infrastruc-
ture (facilitating conditions). Our approach thus helps identify the first determinants of students’
acceptance and usage of LLMs like ChatGPT, thereby providing valuable insights for its effective
integration into educational practices.
In addition, our study also incorporated open-ended survey questions to explore students’
perspectives in greater detail and to capture a comprehensive and explorative view of their experiences
and opinions regarding ChatGPT’s acceptance and usability. These open-ended responses enriched
our understanding by revealing insights that structured surveys might not capture, such as unique
use cases, personal anecdotes, and specific concerns or pros that students had regarding ChatGPT.
By combining insights from quantitative and qualitative questions, our study strives to provide a
comprehensive view of how students perceive and interact with ChatGPT in an academic setting.
• How can NFTs be utilized to ensure fair remuneration and protection for artists and authors
in the digital world?
• What methodologies can be employed effectively to measure and quantify biases in ChatGPT,
and what technical approaches can be implemented to mitigate these biases?
• What are the potential applications of virtual reality in the workplace, and to what extent can
this technology be effectively integrated into daily work routines?
The seminar included the use of ChatGPT to write a 2,250-word seminar paper. ChatGPT assisted
students in generating research questions, finding references, drafting seminar papers, and enhan-
cing the language and clarity of their work. The diverse applications of ChatGPT in the course
provided a rich context for examining its impact on students’ scientific writing experiences and
perceptions.
The seminar was held twice, in the summer term and the winter term of 2023/2024. In the summer term,
19 students participated and completed both the pre- and post-ChatGPT-use surveys. In the winter
term, 8 students completed the pre-ChatGPT-use survey, and 9 completed the post-ChatGPT-use
survey. We taught students how to use ChatGPT and the importance of prompt engineering.
In addition to demographics, the study asked for students’ technical knowledge about GPT
models and their ChatGPT usage behavior. Qualitative data were gathered in open-ended survey
questions focusing on ChatGPT’s strengths and weaknesses to get detailed responses about
students’ expectations and experiences with ChatGPT, for example, “What advantages does the use
of ChatGPT have for seminar papers?” or “For which tasks and areas of writing a seminar paper do
you currently consider ChatGPT to be unsuitable?” A complete list of survey questions can be found
in the Appendix.
For the quantitative data collection, we used established items of the UTAUT constructs
(Venkatesh et al., 2003), adapted to our ChatGPT context, and measured those on a 7-point Likert
scale. Examples of survey questions for performance expectancy were “I would find it useful to
use ChatGPT for my seminar paper in the DI seminar” and for ease of use, “Learning to operate
ChatGPT is easy for me” (Venkatesh et al., 2003). A complete list of the survey questions can be
found in the Appendix. To enhance clarity for participants, we conducted the surveys in German; the
questions were translated into English for this chapter.
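To make the scoring procedure concrete, the sketch below shows one way construct scores could be computed by averaging the 7-point Likert items belonging to each UTAUT construct. The item labels, the toy responses, and the choice of Python are assumptions for illustration; the chapter's notes point to RStudio, so the study's own analysis was likely run in R rather than with this code.

```python
import pandas as pd

# Hypothetical responses: each column is one 7-point Likert item (1 = strongly disagree, 7 = strongly agree).
responses = pd.DataFrame({
    "PE1": [6, 5, 7], "PE2": [6, 6, 7], "PE3": [5, 6, 6], "PE4": [4, 5, 6],
    "EE1": [6, 7, 6], "EE2": [5, 6, 7],
})

# Map each UTAUT construct to its (hypothetical) item labels; only two constructs are shown.
constructs = {
    "performance_expectancy": ["PE1", "PE2", "PE3", "PE4"],
    "effort_expectancy": ["EE1", "EE2"],
}

# A construct score is the mean of its items, computed per respondent.
scores = pd.DataFrame({name: responses[items].mean(axis=1)
                       for name, items in constructs.items()})
print(scores.describe())
```

Averaging items into a single construct score per respondent is what allows means, standard errors, and correlations between constructs to be reported, as in the tables and figures that follow.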
The study began with a pre-ChatGPT-use survey before the actual use of ChatGPT to assess
students’ initial expectations of ChatGPT. During the seminar, students then used ChatGPT for sev-
eral tasks, for example,
• Finding references and drafting papers: ChatGPT assisted in locating relevant references and
drafting initial versions of their 2,250-word seminar papers.
• Peer review process: After creating the first draft, students engaged in a peer review process,
providing feedback to each other. ChatGPT was also utilized to enhance the language and
clarity of the papers.
The post-ChatGPT-use survey was then conducted to capture changes in students’ perceptions and
experiences with ChatGPT for the summer-term cohort of the seminar.
Participants recognized ChatGPT as useful for simple queries but noted its limitations in hand-
ling complex ones, often resulting in conflicting outputs. Participants said that ChatGPT is based on
a vast database of training data and improves over time.
Areas of Application: Participants reported using ChatGPT for a range of activities, including
coding, literature review, brainstorming, general work tasks, posing questions, idea generation,
explanations, text generation, creating summaries, and creative work.
Perceived Weaknesses of ChatGPT: The qualitative data highlighted several weaknesses of
ChatGPT. Participants pointed out instances of incorrect references and information, hallucinations
(generation of factually incorrect or nonsensical content), reliance on outdated information (up to
2021), and challenges in controlling plagiarism.
Perceived Strengths of ChatGPT: Conversely, ChatGPT was praised for its ability to provide new
information quickly, facilitate various processes, and offer ease of use. It was also valued for its
assistance in answering questions, aiding in formulation, enhancing efficiency, and generating ideas.
Limitations of ChatGPT: Participants noted several general limitations of ChatGPT, including
the lack of recent information, difficulty verifying the sources of references, the risk of presenting
incorrect information as accurate, limited creativity, and the challenge of ensuring the correctness of
academic references. Specifically, in academic writing, concerns were raised about hallucinations
of references and the difficulty of checking the accuracy of such references. Additionally, the lack of
access to all research papers and the reliance on outdated information were noted.
Feasibility of ChatGPT Usage: Students viewed ChatGPT as feasible for tasks such as coding,
writing, learning, assistance, creativity, literature review, information retrieval, formulation,
grammar checks, and brainstorming. However, it was deemed not yet feasible for tasks involving
mathematics, moral and ethical matters, conventional search engine functions, science, reference
checking, and medical diagnostics.
Future Feasibility: Opinions were divided on the future feasibility of ChatGPT. Some participants
believed it would never be suitable for inter-human interaction, ethical matters, medical tasks, and
aspects of creativity. Others were more optimistic, suggesting that ChatGPT might become feasible
for virtually everything.
TABLE 3.1
Means, standard error, and confidence intervals of UTAUT variables of the
pre-measurement
Note:
All UTAUT variables were measured with 7-point Likert scales (1–7). SE: standard error; CI: confidence interval;
Attitude: attitude toward using technology; Behavioral intention: behavioral intention to use technology.
FIGURE 3.1 Heatmap of the correlation between the UTAUT variables of the pre-measurement.
PE: performance expectancy; EE: effort expectancy; AT: attitude toward using technology; SI: social influence;
FC: facilitating conditions; SE: self-efficacy; AN: anxiety; BI: behavioral intention to use ChatGPT.
The analysis also revealed a medium negative correlation between anxiety and all other variables
(r = [−0.32; −0.59]), except for social influence (r = −0.03). This suggests that students who experi-
ence anxiety about using ChatGPT tend to have lower performance expectancy, effort expectancy,
and self-efficacy and perceive less facilitating conditions.
Furthermore, there was a medium correlation between facilitating conditions and most other
variables, except for anxiety. This implies that when students perceive the existence of adequate
support and resources for using ChatGPT, it positively influences their performance expectancy,
effort expectancy, and self-efficacy. Facilitating conditions play a significant role in shaping students’
overall experience and their intention to use ChatGPT.
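A correlation heatmap of the kind shown in Figure 3.1 can be produced from such construct scores with a few lines of code. In the sketch below, the construct columns and the numeric values are hypothetical stand-ins rather than the study's data, and Pearson correlations are assumed.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical construct scores (one column per UTAUT construct, one row per student).
scores = pd.DataFrame({
    "PE": [6.0, 5.5, 6.5, 4.0, 5.0],
    "EE": [6.5, 5.0, 6.0, 4.5, 5.5],
    "AN": [2.0, 3.5, 2.5, 5.0, 3.0],
    "BI": [6.5, 5.5, 6.0, 4.0, 5.0],
})

# Pearson correlations between constructs, as visualized in Figures 3.1 and 3.2.
corr = scores.corr(method="pearson")

fig, ax = plt.subplots()
im = ax.imshow(corr, vmin=-1, vmax=1, cmap="coolwarm")
ax.set_xticks(range(len(corr.columns)))
ax.set_xticklabels(corr.columns)
ax.set_yticks(range(len(corr.index)))
ax.set_yticklabels(corr.index)
fig.colorbar(im, ax=ax, label="Pearson r")
plt.tight_layout()
plt.show()
```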
TABLE 3.2
Means, standard error, and confidence intervals of UTAUT variables of the
post-measurement
In the post-measurement, the highest score was observed for behavioral intention to use ChatGPT
(M = 6.10 [5.50; 6.59]), indicating a stronger inclination toward using ChatGPT after the seminar.
Conversely, the lowest score was recorded for anxiety (M = 2.93 [2.33; 3.53]), suggesting a decrease in apprehension about using
the technology. This decrease in anxiety indicates a growing comfort and familiarity with ChatGPT
among the participants.
When comparing pre- and post-ChatGPT-use measures, there was a descriptive increase in all
UTAUT variables except for anxiety, which decreased. This trend suggests that the seminar posi-
tively influenced students’ perceptions of ChatGPT, increasing their confidence and intention to
use it. While it is important to note that there was a difference in sample size between the pre- (27
students) and post- (28 students) measurements, the trend of increased UTAUT variables was per-
sistent when comparing only the 19 students from the summer term cohort.
The correlation analysis of the post-ChatGPT-use UTAUT variables, as depicted in Figure 3.2,
showed significant relationships between various factors. The highest correlation was found between
performance expectancy and behavioral intention to use ChatGPT, with a strong positive correlation
(r = 0.85), reinforcing the pre-ChatGPT-use findings.
A negative correlation was observed between behavioral intention to use ChatGPT and anxiety
(r = −0.53), indicating that as students’ anxiety about using ChatGPT decreased, their behavioral
intention to use ChatGPT increased, according to the UTAUT. In addition, self-efficacy showed a
high correlation with facilitating conditions and social influence, suggesting that students’ confi-
dence in using ChatGPT is influenced by the support and perceptions of their social environment.
Figure 3.3 presents a comparative analysis of the mean scores of UTAUT variables before and
after the seminar. This comparison highlights the seminar’s impact on students’ perceptions and
attitudes toward ChatGPT. The overall increase in positive perceptions and decrease in anxiety after
the ChatGPT use indicate a successful engagement with the technology, fostering a more favorable
outlook toward its use in scientific writing. Still, the confidence intervals of the pre- and post-measurement
means overlap, indicating that these increases and decreases were not statistically significant.
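The comparison in Figure 3.3 rests on whether the 95% confidence intervals of the pre- and post-measurement means overlap. The sketch below illustrates that check for a single construct using a t-based interval; the two score samples are hypothetical and do not reproduce the study's measurements.

```python
import numpy as np
from scipy import stats


def mean_ci(values, confidence=0.95):
    """Return the mean and a t-based confidence interval for a one-dimensional sample."""
    values = np.asarray(values, dtype=float)
    mean = values.mean()
    sem = stats.sem(values)  # standard error of the mean
    low, high = stats.t.interval(confidence, df=len(values) - 1, loc=mean, scale=sem)
    return mean, (low, high)


# Hypothetical behavioral-intention scores before and after the seminar.
pre = [5.0, 5.5, 6.0, 4.5, 5.0, 6.5, 5.5]
post = [6.0, 6.5, 6.0, 5.5, 6.5, 7.0, 6.0]

for label, sample in [("pre", pre), ("post", post)]:
    m, (lo, hi) = mean_ci(sample)
    print(f"{label}: M = {m:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

If the two intervals overlap, as they do in the study, a descriptive increase in the mean cannot be treated as statistically significant without a larger sample and inferential testing.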
3.6 DISCUSSION
In this chapter, we analyzed how LLMs, exemplified by ChatGPT, influence students’ scientific
writing. By surveying an undergraduate seminar, we measured students’ perceived usefulness and
acceptance of ChatGPT for scientific writing and asked them open-ended questions about their
FIGURE 3.2 Heatmap of the correlation between the UTAUT variables of the post-measurement.
PE: performance expectancy; EE: effort expectancy; AT: attitude toward using technology; SI: social influence;
FC: facilitating conditions; SE: self-efficacy; AN: anxiety; BI: behavioral intention to use ChatGPT.
FIGURE 3.3 Comparison of means of UTAUT variables pre- and post-measurement. Pre: pre-ChatGPT-
use survey; Post: post-ChatGPT-use survey; PE: performance expectancy; EE: effort expectancy; AT: attitude
toward using technology; SI: social influence; FC: facilitating conditions; SE: self-efficacy; AN: anxiety;
BI: behavioral intention to use ChatGPT.
expectations and experiences on the perceived strengths and weaknesses of ChatGPT. The results
showed that students perceived ChatGPT positively overall, although several concerns emerged.
Our results show regular use of ChatGPT by most participants both before and after our
measurements. This widespread adoption underscores the growing relevance of LLMs in educa-
tional settings (cf. Farrokhnia et al., 2023). Behavioral intention to use ChatGPT increased after
ChatGPT use, suggesting that structured instruction and exposure to ChatGPT, as well as prompt
engineering training, enhanced students’ perceived utility of ChatGPT in academic writing.
Overall, comparing the pre- and post-use qualitative responses, we found that while practical
familiarity with ChatGPT increased, perceptions of its capabilities and limitations remained rela-
tively consistent. The increased reliance on ChatGPT for seminar writing tasks post-ChatGPT-use
raises questions about its impact on academic rigor and student effort. This highlights the need for
a balanced approach to integrating LLMs like ChatGPT in educational settings, ensuring they com-
plement rather than replace traditional learning and research methodologies.
The post-ChatGPT-use analysis provided valuable insights into the evolving perceptions and
attitudes of students toward ChatGPT. The seminar appears to have played a pivotal role in enhan-
cing students’ understanding and acceptance of the technology, as evidenced by the increased
scores in most UTAUT variables and the decreased anxiety levels. These findings offer important
implications for integrating LLMs like ChatGPT in educational contexts, emphasizing the need for
comprehensive educational interventions to maximize their acceptance and utility.
Consistent with recent literature, students acknowledged ChatGPT’s strengths, such as increased
creativity and productivity, and identified practical applications where it proved beneficial, such as
drafting or generating ideas and research questions. However, they also identified weaknesses, par-
ticularly the lack of accurate academic references and instances of incorrect output. Students also
reported that ChatGPT had limitations, such as the inability to write a seminar thesis and make inde-
pendent decisions. This raises critical questions about the impact of such technology on traditional
learning methods. While ChatGPT has many advantages, including efficiency and accessibility
of information, and has been widely adopted, these findings underscore the need for a balanced
approach to integrating ChatGPT into education, ensuring that it complements rather than replaces
traditional learning methods.
Although limited by our sample size, the quantitative analysis provided first insights into UTAUT
factors influencing the acceptance of ChatGPT, with ease of use and perceived usefulness emerging
as key determinants. These factors are critical in understanding the students’ willingness to adopt
LLMs like ChatGPT as a part of their academic toolkit. Additionally, the influence of social and
environmental factors cannot be overlooked. The role of peer influence and institutional support in
ChatGPT adoption highlights the need for a supportive ecosystem to maximize the benefits of LLMs
in education (Shoufan, 2023).
The correlations for both pre- and post-ChatGPT use are consistent with the UTAUT; for example,
performance expectancy is a critical predictor of ChatGPT adoption (Venkatesh et al., 2003). This
suggests that students who perceive ChatGPT as beneficial to their academic performance are more
likely to use it. For educators and developers, this suggests that improving the performance cap-
abilities of LLMs, such as customizing ChatGPT to educational contexts, could lead to more usage
and increased acceptance among students. It also highlights the importance of training students in
prompt engineering by teaching students how to use different prompt techniques and explore mul-
tiple prompts before using an output to optimize the effectiveness of ChatGPT (White et al., 2023).
The rapid advancements of LLMs like ChatGPT provide fertile ground for future research in gen-
eral and for tailoring LLMs to educational needs, such as prompt engineering training, customizing
LLMs to access scientific literature, and integrating LLMs interactively into the curriculum. While
newer versions of ChatGPT and other LLMs are already working on addressing limitations, such
as information accuracy and reference validity, future research should also examine the long-term
impact of LLM integration on learning outcomes and student engagement.
A limitation of our study is the inability to make inferential comparisons between pre- and
post-seminar data due to the sample size limitations. Future studies could build on our findings
by conducting a multivariate statistical analysis to better understand how students’ perceptions,
intentions, and experiences evolve with the use of ChatGPT (Shoufan, 2023).
3.7 CONCLUSION
In conclusion, our study of integrating LLMs such as ChatGPT into scientific writing seminars
reveals a diverse landscape of perceived usefulness and acceptance of LLMs among students.
LLMs’ significant and regular use underscores their growing importance in educational and aca-
demic environments. However, while students recognize the strengths of LLMs in enhancing cre-
ativity and productivity, they also express concerns about their limitations, particularly in providing
accurate academic references and reliable output.
Future studies should focus on how LLMs, such as ChatGPT, can be further developed to meet
the specific needs of academic environments, particularly in improving information accuracy and
reference validity. Teachers could adapt LLMs to access scientific literature and should inter-
actively teach students how to use LLMs through prompt engineering techniques. In addition, the
long-term impact of integrating LLMs on learning outcomes and student engagement should be
explored. As we navigate integrating LLMs into education, it is critical to maintain a balanced
approach, ensuring that these tools complement traditional learning methods and foster an envir-
onment that enhances students’ overall learning experience without compromising academic
integrity.
NOTES
1 https://chat.openai.com/
2 https://posit.co/products/open-source/rstudio/
REFERENCES
Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the Era of Generative Artificial Intelligence
(AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. SSRN
Electronic Journal. https://doi.org/10.2139/ssrn.4337484
Braun, V., & Clarke, V. (2006). Using Thematic Analysis in Psychology. Qualitative Research in Psychology,
3(2), 77–101.
Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2023). A SWOT Analysis of ChatGPT: Implications
for Educational Practice and Research. Innovations in Education and Teaching International, 61(3),
460–474. https://doi.org/10.1080/14703297.2023.2195846
Hund, A., Wagner, H.-T., Beimborn, D., & Weitzel, T. (2021). Digital Innovation: Review and Novel Perspective.
The Journal of Strategic Information Systems, 30(4), 101695. https://doi.org/10.1016/j.jsis.2021.101695
Rahman, M. M., & Watanobe, Y. (2023). ChatGPT for Education and Research: Opportunities, Threats, and
Strategies. Applied Sciences, 13(9), Article 9. https://doi.org/10.3390/app13095783
Shoufan, A. (2023). Exploring Students’ Perceptions of ChatGPT: Thematic Analysis and Follow-Up Survey.
IEEE Access, 11, 38805–38818. https://doi.org/10.1109/ACCESS.2023.3268224
Susnjak, T., & McIntosh, T. R. (2024). ChatGPT: The End of Online Exam Integrity? Education Sciences,
14(6), 656. https://doi.org/10.3390/educsci14060656
Teubner, T., Flath, C. M., Weinhardt, C., Van Der Aalst, W., & Hinz, O. (2023). Welcome to the Era of ChatGPT
et al.: The Prospects of Large Language Models. Business & Information Systems Engineering, 65(2),
95–101. https://doi.org/10.1007/s12599-023-00795-x
Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What
If the Devil Is My Guardian Angel: ChatGPT as a Case Study of Using Chatbots in Education. Smart
Learning Environments, 10(1), 15. https://doi.org/10.1186/s40561-023-00237-x
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User Acceptance of Information
Technology: Toward a Unified View. MIS Quarterly, 27(3), 425. https://doi.org/10.2307/30036540
White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., & Schmidt, D.
C. (2023). A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT (arXiv:2302.11382).
arXiv. https://doi.org/10.48550/arXiv.2302.11382
APPENDIX
Survey Questions – English Version
Performance Expectancy:
1. I would find it useful to use ChatGPT for my seminar paper in the DI seminar.
2. Using ChatGPT would enable me to complete my DI seminar paper more efficiently.
3. Using ChatGPT would increase my productivity regarding my DI seminar paper.
4. If I use ChatGPT, I will increase my chances of getting a good grade in the DI seminar.
Effort Expectancy:
Social Influence:
Facilitating Conditions:
Self-Efficacy:
I could complete a task or project with ChatGPT…
Anxiety:
1. For which tasks and work steps did you use ChatGPT? For example, brainstorming, creative
work, research, coding.
Technical Knowledge:
1. What knowledge do you have about the functioning of GPT models (GPT transformer) in
terms of training function, output generation, and the reliability of the results?
4 Mapping ChatGPT in Education Research
Bibliographic Coupling and Co-Citation Analyses
Luc Phan Tan
4.1 INTRODUCTION
In recent years, artificial intelligence (AI) has emerged as a significant trend in education (Chen,
Chen, and Lin 2020). Specifically, the advent of ChatGPT has unlocked new educational possibil-
ities. With its capacity to generate text closely resembling human composition, ChatGPT can offer
substantial assistance to teachers and students. Teachers can swiftly create quizzes, exercises, or
tests with the help of ChatGPT. For students, ChatGPT can elucidate complex concepts, provide
suggestions, and furnish feedback based on the content they provide (Aithal and Aithal 2023). It
can also facilitate personalized learning, adapting content and teaching styles to individual students.
Although ethical challenges persist in using ChatGPT, it undeniably presents new opportunities for
educators and learners.
Consequently, the subject of ChatGPT in education has witnessed a recent notable surge in
research and attention from scholars and practitioners. This significant development across multiple
disciplines has accumulated a substantial body of knowledge, requiring scholars to comprehensively
review and synthesize existing works to provide a common understanding of the subject. Several
literature reviews have been conducted to summarize the research focus on ChatGPT in educa-
tion. In an evaluation of 50 articles, Lo (2023) emphasized ChatGPT’s inconsistent performance in
various subject areas and its possible advantages as a teaching assistant and online tutor for learners.
According to the review’s findings, schools and colleges should update their policies and guidelines
for academic integrity and plagiarism prevention as soon as possible. Thurzo et al. (2023) offer a
synopsis of the impending modifications and a succinct review of the significant developments in AI
for dental education since 2020.
Additionally, in light of developments in AI applications and their implications for dentistry, this
review offers recommendations for updating undergraduate and graduate dental curricula. Pradana,
Elisa, and Syarifuddin (2023) used bibliometric analysis and a systematic literature review of previous studies on the application of OpenAI ChatGPT in education. This piece contributes to the
knowledge of the application of AI in education and can be helpful for researchers and decision-
makers. Through their thorough analysis of the international literature on ChatGPT’s application in
higher education, Ansari, Ahmad, and Bhutta (2023) add much to the field. ChatGPT is a valuable
tool that teachers, students, and researchers can use to help with various tasks, according to the
findings from the studies mentioned above. Previous evaluation studies were largely qualitative and
context-specific. This leads to the need for quantitative research examining ChatGPT in a general
educational context to create a comprehensive literature map and provide an understanding of the
current state and trajectory in this area.
By identifying the most influential authors, journals, and keywords, a bibliometric analysis offers
a thorough understanding of the state of the art and the field’s evolution (Chang, Huang, and Lin
2015). According to Chang, Huang, and Lin (2015), this kind of analysis also identifies gaps in
the literature and potential areas for additional study. In order to comprehend the current body of
knowledge and to guide future research directions, academics, practitioners, and policymakers must
conduct a bibliometric analysis of ChatGPT in education research. The findings of bibliometric
analyses can be used to highlight prevailing research paradigms and spot new developments in the
field. Every bibliometric method has benefits and drawbacks. Bibliometric analysis increasingly
combines these techniques to create thorough literature maps to yield better results and a more pro-
found understanding (Phan Tan 2021).
This chapter employs two distinct bibliometric techniques, co-citation and bibliographic coup-
ling, to complementarily capture and connect bibliographic information. This enables a thorough and rigorous investigation of how the literature on ChatGPT in education has formed. This chapter aims to
identify important works, draw maps and networks, and pinpoint essential clusters. This approach
finds the most influential authors, publications, journals, and keywords over time and the knowledge bases,
typologies, frameworks, and intellectual foundations of ChatGPT in education. By providing insight
into the dynamics of topic development, new issues, and trends, this methodology will help to
advance and renew our understanding of ChatGPT in education and facilitate its continued devel-
opment. The results of this chapter also provide a worldwide perspective and guidelines for the lit-
erature on ChatGPT in education, assisting academics in recognizing recent contributions, locating
research resources in promising fields, and investigating potential avenues for future investigation.
Practitioners and managers can also draw on this academic research for real-world use.
This chapter begins with a methods section that briefly describes bibliographic coupling, co-
citation analysis, and methodological procedures. The subsequent sections present the results of the
analysis, followed by conclusions and limitations.
4.2 METHODS
4.2.1 Bibliometrics
Bibliometrics is a field that uses statistical and mathematical techniques to analyse empirical data
found in published literature. It was first introduced by Pritchard in 1969. The goal is to analyse pub-
lication trends and identify key themes in a particular field. Using science mapping techniques and
quantitative analysis, bibliometric analysis visually represents the field’s intellectual structure (Small
1973). This analysis uses several techniques, such as co-word analysis, co-authorship analysis, and
citation analysis (van Eck and Waltman 2009). The data are further categorized by citation-based
analysis into bibliographic coupling, co-citation, and citation analysis (van Eck and Waltman 2009).
According to Leung, Sun, and Bai (2017), these techniques help researchers examine the body of
literature in their field of study and offer insights into the intellectual climate. Co-citation analysis
is a bibliometric technique that combines cited publications into a cluster to represent a common
theme and reveal relationships among cited publications (Small 1973). Another technique is bibliographic coupling, which looks at the connections between citing publications and identifies a recur-
ring theme by aggregating them into a cluster (Kessler 1963). Co-citation analysis and bibliographic
coupling are called cross-citations, indicating the interconnections between publications on a spe-
cific topic. A more thorough understanding of the connections between cited and citing publications
can be obtained by combining the two techniques of co-citation analysis and bibliographic coupling
(van Eck and Waltman 2009). By combining the two approaches, researchers can better visualize
and comprehend the connections between various field elements, advancing theoretical and empir-
ical research (van Raan 2005). Thus, combining these methods can uncover valuable information
that cannot be obtained through the use of a single method.
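Both measures can be derived from the same citation data. The short Python sketch below illustrates the idea on a toy binary document-by-reference matrix (the matrix and all values in it are invented for illustration); the chapter's own maps are produced with dedicated bibliometric software such as VOSviewer (van Eck and Waltman 2009), not with hand-written code.

```python
# Minimal sketch: Kessler's bibliographic coupling counts references shared by
# citing documents, while Small's co-citation counts how often two cited
# references appear together in the same citing document.
import numpy as np

# Rows = citing documents, columns = cited references (toy data).
C = np.array([
    [1, 1, 0, 1],   # doc 0 cites refs 0, 1, 3
    [1, 0, 1, 1],   # doc 1 cites refs 0, 2, 3
    [0, 1, 1, 0],   # doc 2 cites refs 1, 2
], dtype=int)

# Coupling strength between citing documents i and j = shared references.
coupling = C @ C.T
np.fill_diagonal(coupling, 0)

# Co-citation strength between references r and s = documents citing both.
cocitation = C.T @ C
np.fill_diagonal(cocitation, 0)

print("coupling:\n", coupling)       # docs 0 and 1 share refs 0 and 3 -> 2
print("co-citation:\n", cocitation)  # refs 0 and 3 are co-cited twice
```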
4.2.2 Review process
A literature review by bibliometric analysis is performed to synthesize and classify knowledge,
identify main categories and themes, and suggest future research (Xia et al. 2018). The three-stage
review process was carried out as Luc (2018) suggested.
4.2.2.2 Stage 2: Screening
To determine the relevance of the 326 collected publications to issues related to ChatGPT in edu-
cation, a comprehensive review of their titles and abstracts was carried out in the second stage. The
screening procedure was designed to remove publications not related to education. As a result, 136
publications were retrieved for the final analysis.
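The screening itself was a manual title-and-abstract review. As a rough illustration only, a keyword-based pre-filter over a Web of Science export could be sketched as follows; the column names ("TI", "AB"), the file name, and the keyword list are all assumptions for illustration, not the criteria actually applied in the chapter.

```python
# Illustrative pre-filter only: the chapter's screening was a manual review of
# titles and abstracts. Field names follow a typical WoS export (assumption).
import pandas as pd

EDU_TERMS = ["education", "student", "teacher", "learning", "classroom",
             "curriculum", "assessment", "pedagog"]

def looks_educational(row: pd.Series) -> bool:
    """Keep a record if any education-related term appears in title or abstract."""
    text = f"{row.get('TI', '')} {row.get('AB', '')}".lower()
    return any(term in text for term in EDU_TERMS)

records = pd.read_csv("wos_export.csv")          # hypothetical export of the 326 records
screened = records[records.apply(looks_educational, axis=1)]
print(len(records), "->", len(screened), "records retained for further review")
```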
4.3 RESULTS
4.3.1 Bibliographic coupling analysis
4.3.1.1 Year
All the studies were published in 2023, demonstrating the rapid growth of this topic since ChatGPT's launch.
4.3.1.2 Publication
The results of the bibliographic coupling analysis show the most impactful publications. The five
articles with the highest citation count are as follows:
1. Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring aca-
demic integrity in the era of ChatGPT. Innovations in Education and Teaching International,
1(1), 1–12.
2. Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang,
B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots
in education. Smart Learning Environments, 10(1), 15–39.
3. Arif, T. B., Munaf, U., & Ul-Haque, I. (2023). The future of medical education and research: Is
ChatGPT a blessing or blight in disguise?. Medical education online, 28(1), 1–2.
4. Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023).
Generative AI and the future of education: Ragnarök or reformation? A paradoxical
perspective from management educators. The International Journal of Management
Education, 21(2), 1–13.
5. Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the litera-
ture. Education Sciences, 13(4), 410–425.
The bibliographic coupling network is presented in Figure 4.1 and formed eight clusters. The clusters and representative publications are presented in Table 4.1.
Cluster 1
The investigation of ChatGPT in educational contexts is the main topic of this cluster. The articles
in this cluster look into how students accept and use ChatGPT, how it affects learning, and how
it might be used in medical education. The articles collectively explore the consequences of inte-
grating AI and ChatGPT into educational environments, raising questions about its benefits and
challenges.
TABLE 4.1
Bibliographic coupling cluster and representative publications for each cluster
Cluster 1: Strzelecki (2023), Lo (2023), Hsu and Ching (2023), Guo and Li (2023), Grassini (2023), Choi et al. (2023)
Cluster 2: Crawford, Cowling, and Allen (2023), Lim et al. (2023), Nikolic et al. (2023)
Cluster 3: Emenike and Emenike (2023), Kortemeyer (2023), Perkins (2023)
Cluster 4: Cotton, Cotton, and Shipway (2023), Malinka et al. (2023), Mogali (2023), Reeves et al. (2023)
Cluster 5: Irwin, Jones, and Fealy (2023), Bauer et al. (2023), Johinke, Cummings, and Di Lauro (2023), Jeon and Lee (2023)
Cluster 6: Baker et al. (2023), Chaudhry et al. (2023), Farrokhnia et al. (2023), Glaser (2023), Woo, Henriksen, and Mishra (2023)
Cluster 7: Arif, Munaf, and Ul-Haque (2023), Feng and Shen (2023), Hsu (2023), Karabacak et al. (2023), Lee (2023), Masters (2023)
Cluster 8: Akiba and Fraboni (2023), Fergus, Botha, and Ostovar (2023), Friederichs, Friederichs, and März (2023), Gardner and Giordano (2023)
Cluster 2
The ethical implications of ChatGPT in education are examined in Cluster 2. These articles address
the use of chatbots in the classroom, investigate the need for moral leadership in AI applications,
and present a counterintuitive viewpoint on the application of generative AI in the classroom. This
cluster also examines the validity of assessments made with ChatGPT in educational settings.
Cluster 3
This cluster focuses on ChatGPT in particular and the problem of academic integrity in the context
of AI. The articles discuss issues with academic integrity brought up by large language models like
ChatGPT, evaluate the viability of AI agents in passing academic courses, and look at considerations
for using AI text-generation software.
Cluster 4
In the ChatGPT era, Cluster 4 is devoted to the crucial topic of academic integrity. These articles
assess ChatGPT’s effects on education and discuss ways to uphold academic integrity. A few articles
examine users’ first impressions of ChatGPT in particular subject areas.
Cluster 5
Articles in this cluster examine how AI – ChatGPT in particular – and education can work together.
They review how AI will affect nursing and midwifery practice, emphasize how ChatGPT and
human teachers work well together, and speculate about how AI will teach digital writing in the
post-pandemic era.
Cluster 6
The intersection of AI, education, and emerging technologies is the research subject in Cluster
6. The notions of intelligence and the function of AI in assessing student performance are discussed
in these articles. In order to highlight ChatGPT’s potential and implications for education, they also
perform a SWOT analysis of the technology in educational practice and research.
Cluster 7
This cluster focuses on ChatGPT’s function in health professions and medical education. The articles
in this category explore ChatGPT’s potential for teaching medical terminology, evaluate its effects
on medical education and research, and talk about the moral application of AI in healthcare educa-
tion. The cluster, as a whole, tackles the changing face of AI in medical education and healthcare.
TABLE 4.2
The most influential journals of bibliographic coupling analysis
Rank  Journal  Documents  Citations
1  Education Sciences  10  47
2  Journal of Chemical Education  8  45
3  JMIR Medical Education  7  10
4  Education and Information Technologies  6  20
5  Journal of University Teaching and Learning Practice  6  54
Cluster 8
This cluster collects articles about ChatGPT's application in academic advising and assessment in educational settings. The articles examine ChatGPT's present situation and
how academic advising could empower students. They also examine ChatGPT’s use in medical
school settings and concentrate on evaluating the academic responses it produces. In addition, the
cluster questions whether the educational system is prepared for the arrival of AI and examines the
benefits and drawbacks of conventional assessment techniques, such as undergraduate oral exams
in physical chemistry.
4.3.1.3 Journal
The top five journals were found to be Education Sciences, with ten articles, followed by Journal of
Chemical Education, with eight articles; JMIR Medical Education, with seven articles; Education
and Information Technologies, with six articles; and Journal of University Teaching and Learning
Practice, with six articles (see Table 4.2 and Figure 4.2).
4.3.1.4 Authors
The results of the author analysis indicated that 398 unique authors contributed to the 136 articles
included in the sample. Further analysis was conducted to identify the most productive authors, with
the top five being Paul Denny (University of Auckland, Auckland, New Zealand), Juho Leinonen
(University of Auckland, Auckland, New Zealand), Ken Masters (Sultan Qaboos University,
Sultanate of Oman), Mike Perkins (British University Vietnam, Hung Yen, Vietnam), and Joseph
Crawford (University of Tasmania, Australia) (see Figure 4.3).
4.3.1.5 Country
The results of the bibliographic coupling analysis revealed that the United States of America,
Australia, China, the UK, and Germany had the highest number of ChatGPT in education research.
The distribution of the studies among these countries is displayed in Table 4.3.
4.3.2 Co-citation analysis
4.3.2.1 Publication
The authors conducted a comprehensive analysis of 4,275 citations generated from 136 publications.
In order to refine the sample, a cut-off point was established, and a threshold of ten citations was
established as the minimum number of citations (McCain 1990). As a result, 19 publications were
selected as the most influential. The five studies with the highest co-citation indices are as follows:
1. Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional
assessments in higher education?. Journal of Applied Learning and Teaching, 6(1), 1–22.
FIGURE 4.2 Bibliographic coupling network of journals.
TABLE 4.3
Countries with the highest number of publications
2. Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., … &
Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language
models for education. Learning and individual differences, 103(1), 1–9.
3. Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang,
B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots
in education. Smart Learning Environments, 10(1), 1–24.
4. Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring aca-
demic integrity in the era of ChatGPT. Innovations in Education and Teaching International,
1(1), 1–12.
5. Pavlik, J. V. (2023). Collaborating with ChatGPT: Considering the implications of generative
artificial intelligence for journalism and media education. Journalism & Mass Communication
Educator, 78(1), 84–93.
TABLE 4.4
Co-citation analysis cluster and representative publications for each cluster
Cluster 1. ChatGPT and Academic Integrity: Cotton, Cotton, and Shipway (2023), Epstein and Dexter (2023), Totlis et al. (2023), Kung et al. (2023), Pavlik (2023), Rudolph, Tan, and Tan (2023), Perkins (2023)
Cluster 2. ChatGPT and Education's Opportunities and Challenges: Atlas (2023), Kasneci et al. (2023), Tlili et al. (2023), Zawacki-Richter et al. (2019)
Cluster 3. ChatGPT Research and Multidisciplinary Perspectives: Dowling and Lucey (2023), Dwivedi et al. (2023), O'Connor (2022)
The authors present a co-citation network in Figure 4.4, which showcases the relationship between
the selected articles based on their shared citations. This network is divided into three distinct
clusters, each containing a group of publications centred around specific themes (see Table 4.4).
4.3.2.2 Journal
A co-citation network map was developed to represent the 74 analysed journals with a minimum
of 20 citations. The results in Figure 4.5 highlight the most influential journals in issues related to
ChatGPT in education. The top journals considered critical sources for learning about ChatGPT in
education were arXiv, Journal of Chemical Education, Nature Journal, Journal of Computers in
Education, and JMIR Medical Education.
4.3.2.3 Authors
The most highly cited authors include Enkelejda Kasneci (Technical University of Munich,
Germany), Jürgen Rudolph (Head of Research, Kaplan Singapore, Singapore), Debby R. E. Cotton
(SCION Research Group, DREC – Plymouth Marjon University, Plymouth, UK), and Ahmed Tlili
(Smart Learning Institute of Beijing Normal University, Beijing, China) (see Figure 4.6).
4.4 CONCLUSIONS AND LIMITATIONS
This chapter conducted a bibliometric analysis to explore the relationships between primary publications and the scientific research structure on using ChatGPT in education. Two techniques – bibliographic coupling and co-citation – are specifically combined to give an overview of the intellectual structure of these intriguing research topics. This makes it possible for researchers to pinpoint their positions in the field and consider potential future lines of inquiry. According to the co-citation analysis, the studies with the highest co-citation indices are Rudolph, Tan, and Tan (2023), Kasneci et al. (2023), and Tlili et al. (2023). The publications with the highest bibliographic coupling indices are Cotton, Cotton, and Shipway (2023), Tlili et al. (2023), and Arif, Munaf, and Ul-Haque (2023). China, the
UK, Germany, Australia, and the United States of America are the nations spearheading ChatGPT
research in education. The most highly cited authors are Enkelejda Kasneci (Technical University
of Munich, Germany), Jürgen Rudolph (Head of Research, Kaplan Singapore, Singapore), and
Debby R. E. Cotton (SCION Research Group, DREC – Plymouth Marjon University, Plymouth,
UK). Leading researchers on this topic include Paul Denny (University of Auckland, Auckland,
New Zealand), Juho Leinonen (University of Auckland, Auckland, New Zealand), and Ken Masters
(Sultan Qaboos University, Sultanate of Oman). While arXiv, the Journal of Chemical Education,
Nature Journal, Journal of Computers in Education, and JMIR Medical Education are regarded as
significant reference journals for learning about ChatGPT in education, Education Sciences tops
the list of journals with the most research on the topic. Bibliographic coupling analysis formed
eight clusters, while co-citation analysis results formed three clusters (ChatGPT and Academic
Integrity, ChatGPT and Education’s Opportunities and Challenges, and ChatGPT Research and
Multidisciplinary Perspectives).
This chapter’s information about ChatGPT’s use in education will inspire future researchers
to work in this rapidly developing field. The outcomes of the bibliographic analysis also assist
researchers in positioning their ongoing work and identifying fresh areas for future study. Lastly,
educational policymakers can obtain academic knowledge and a comprehensive picture that can
be implemented in practice by using this study's scientific framework for this topic. There may be limitations because this is the first study to integrate co-citation analysis and bibliographic coupling to comprehend ChatGPT in education. First, only Web of Science (WoS) data sources were used in this chapter; future investigations might use more extensive data sources so that relevant earlier research is not overlooked. Secondly, as ChatGPT develops, search terms can be broadened to better comprehend and
quickly identify emerging trends. Lastly, splitting the research period into smaller time frames could
be helpful to examine modifications and advancements in these study areas.
REFERENCES
Aithal, P. S., and S. Aithal. 2023. “Application of ChatGPT in higher education and research–A futuristic ana-
lysis.” International Journal of Applied Engineering and Management Letters (IJAEML) 7 (3):168–194.
Akiba, D., and M. C. Fraboni. 2023. “AI-supported academic advising: Exploring ChatGPT’s current state and
future potential toward student empowerment.” Education Sciences 13 (9):885–904.
Ansari, A. N., S. Ahmad, and S. M. Bhutta. 2023. “Mapping the global evidence around the use of ChatGPT
in higher education: A systematic scoping review.” Education and Information Technologies 1 (1):1–41.
Arif, T. B., U. Munaf, and I. Ul-Haque. 2023. “The future of medical education and research: Is ChatGPT a
blessing or blight in disguise?” Medical Education Online 28 (1):1–2.
Atlas, S. 2023. “ChatGPT for higher education and professional development: A guide to conversational AI.”
Available online: https://digitalcommons.uri.edu/cba_facpubs/548 (accessed on 10 January 2024)
Baker, B., K. A. Mills, P. McDonald, and L. Wang. 2023. “AI, concepts of intelligence, and Chatbots: The
“Figure of Man,” the rise of emotion, and future visions of education.” Teachers College Record 125
(6):60–84.
Bauer, E., M. Greisel, I. Kuznetsov, M. Berndt, I. Kollar, M. Dresel, M. R. Fischer, and F. Fischer. 2023.
“Using natural language processing to support peer-feedback in the age of artificial intelligence: A
cross-disciplinary framework and a research agenda.” British Journal of Educational Technology 54
(1):1222–1245.
Chang, Y.-W., M.-H. Huang, and C.-W. Lin. 2015. “Evolution of research subjects in library and informa-
tion science based on keyword, bibliographical coupling, and co-citation analyses.” Scientometrics 105
(3):2071–2087.
Chaudhry, I. S., S. A. M. Sarwary, G. A. El Refae, and H. Chabchoub. 2023. “Time to revisit existing student’s
performance evaluation approach in higher education sector in a new era of ChatGPT–A case study.”
Cogent Education 10 (1):1–30.
Chen, L., P. Chen, and Z. Lin. 2020. “Artificial intelligence in education: A review.” IEEE Access 8
(1):75264–75278.
Choi, E. P. H., J. J. Lee, M. H. Ho, J. Y. Y. Kwok, and K. Y. W. Lok. 2023. “Chatting or cheating? The impacts
of ChatGPT and other artificial intelligence language models on nurse education.” Nurse Education
Today 125 (8):105796.
Cobo, M. J., A. G. López-Herrera, E. Herrera-Viedma, and F. Herrera. 2011. “An approach for detecting, quan-
tifying, and visualizing the evolution of a research field: A practical application to the fuzzy sets theory
field.” Journal of Informetrics 5 (1):146–166.
Cotton, D. R. E., P. A. Cotton, and J. R. Shipway. 2023. “Chatting and cheating: Ensuring academic integrity in
the era of ChatGPT.” Innovations in Education and Teaching International 1 (1): 1–12.
Crawford, J., M. Cowling, and K. A. Allen. 2023. “Leadership is needed for ethical ChatGPT: Character,
assessment, and learning using artificial intelligence (AI).” Journal of University Teaching and Learning
Practice 20 (3): 1–21.
Dowling, M., and B. Lucey. 2023. “ChatGPT for (finance) research: The Bananarama conjecture.” Finance
Research Letters 53 (1):103662.
Dwivedi, Y. K., N. Kshetri, L. Hughes, E. L. Slade, A. Jeyaraj, A. K. Kar, A. M. Baabdullah, A. Koohang, V.
Raghavan, and M. Ahuja. 2023. ““So what if ChatGPT wrote it?” Multidisciplinary perspectives on
opportunities, challenges and implications of generative conversational AI for research, practice and
policy.” International Journal of Information Management 71 (1):102642.
67
Emenike, M. E., and B. U. Emenike. 2023. “Was this title generated by ChatGPT? Considerations for artifi-
cial intelligence text-generation software programs for chemists and chemistry educators.” Journal of
Chemical Education 100 (4):1413–1418.
Epstein, R. H., and F. Dexter. 2023. “Variability in large language models’ responses to medical licensing and
certification examinations. Comment on “How Does ChatGPT Perform on the United States Medical
Licensing Examination? The Implications of Large Language Models for Medical Education and
Knowledge Assessment”.” JMIR Medical Education 9 (1):1–9.
Farrokhnia, M., S. K. Banihashem, O. Noroozi, and A. Wals. 2023. “A SWOT analysis of ChatGPT: Implications
for educational practice and research.” Innovations in Education and Teaching International. 1 (1):1–15.
Feng, S. W., and Y. Shen. 2023. “ChatGPT and the future of medical education.” Academic Medicine 98
(8):867–868.
Fergus, S., M. Botha, and M. Ostovar. 2023. “Evaluating academic answers generated using ChatGPT.” Journal
of Chemical Education 100 (4):1672–1675.
Friederichs, H., W. J. Friederichs, and M. März. 2023. “ChatGPT in medical school: how successful is AI in
progress testing?” Medical Education Online 28 (1):2220920.
Gardner, D. E., and A. N. Giordano. 2023. “The challenges and value of undergraduate oral exams in the phys-
ical chemistry classroom: A useful tool in the assessment toolbox.” Journal of Chemical Education 100
(5):1705–1709.
Glaser, N. 2023. “Exploring the potential of ChatGPT as an educational technology: An emerging technology
report.” Technology Knowledge and Learning 28 (4):1945–1952.
Grassini, S. 2023. “Shaping the future of education: Exploring the potential and consequences of AI and
ChatGPT in educational settings.” Education Sciences 13 (7):1–13.
Guo, A. A., and J. Li. 2023. “Harnessing the power of ChatGPT in medical education.” Medical Teacher 45
(1):1063–1078.
Harini, H. 2023. “The role of ChatGPT in improving the efficiency of education management processes.” Indo-
MathEdu Intellectuals Journal 4 (2):255–267.
Hsu, M. H. 2023. “Mastering medical terminology with ChatGPT and Termbot.” Health Education Journal,
0017896923119737.
Hsu, Y. C., and Y. H. Ching. 2023. “Generative artificial intelligence in education, Part One: The dynamic fron-
tier.” Techtrends 67 (4):603–607.
Irwin, P., D. Jones, and S. Fealy. 2023. “What is ChatGPT and what do we do with it? Implications of the age of
AI for nursing and midwifery practice and education: An editorial.” Nurse Education Today 127 (1):1–3.
Jeon, J., and S. Y. Lee. 2023. “Large language models in education: A focus on the complementary relationship
between human teachers and ChatGPT.” Education and Information Technologies 28 (1): 15873–15892.
Johinke, R., R. Cummings, and F. Di Lauro. 2023. “Reclaiming the technology of higher education for teaching
digital writing in a post-pandemic world.” Journal of University Teaching and Learning Practice 20
(2):1–15.
Karabacak, M., B. B. Ozkara, K. Margetis, M. Wintermark, and S. Bisdas. 2023. “The advent of generative
language models in medical education.” JMIR Medical Education 9(1):1–7.
Kasneci, E., K. Seßler, S. Küchemann, M. Bannert, D. Dementieva, F. Fischer, U. Gasser, G. Groh, S.
Günnemann, and E. Hüllermeier. 2023. “ChatGPT for good? On opportunities and challenges of large
language models for education.” Learning and Individual Differences 103 (1):102274.
Kessler, M. M. 1963. “Bibliographic coupling between scientific papers.” American Documentation 14
(1):10–25.
Kortemeyer, G. 2023. “Could an artificial-intelligence agent pass an introductory physics course?” Physical
Review Physics Education Research 19 (1):1–11.
Kung, T. H., M. Cheatham, A. Medenilla, C. Sillos, L. De Leon, C. Elepaño, M. Madriaga, R. Aggabao, G.
Diaz-Candido, and J. Maningo. 2023. “Performance of ChatGPT on USMLE: Potential for AI-assisted
medical education using large language models.” PLoS Digital Health 2 (2):e0000198.
Lee, H. 2023. “The rise of ChatGPT: Exploring its potential in medical education.” Anatomical Sciences
Education 1(1):1–14.
Leung, X. Y., J. Sun, and B. Bai. 2017. “Bibliometrics of social media research: A co-citation and co-word ana-
lysis.” International Journal of Hospitality Management 66 (1):35–45.
68
Lim, W. M., A. Gunasekara, J. L. Pallant, J. I. Pallant, and E. Pechenkina. 2023. “Generative AI and the
future of education: Ragnarok or reformation? A paradoxical perspective from management educators.”
International Journal of Management Education 21 (2):1–13.
Lo, C. K. 2023. “What is the impact of ChatGPT on education? A rapid review of the literature.” Education
Sciences 13 (4):1–15.
Luc, P. T. 2018. “The relationship between perceived access to finance and social entrepreneurship intentions
among university students in Vietnam.” Journal of Asian Finance Economics and Business 5 (1):63–72.
Malinka, K., M. Peresíni, A. Firc, O. Hujnák, F. Janus, and ACM. 2023. On the Educational Impact of
ChatGPT: Is Artificial Intelligence Ready to Obtain a University Degree? Paper read at 28th Annual
Conference on Innovation and Technology in Computer Science Education (ITiCSE), July 08–12, at
University of Turku, Turku, Finland.
Masters, K. 2023. “Ethical use of artificial intelligence in health professions education: AMEE Guide No.158.”
Medical Teacher 45 (6):574–584.
McCain, K. W. 1990. “Mapping authors in intellectual space: A technical overview.” Journal of the American
Society for Information Science 41 (6):433–443.
Mogali, S. R. 2023. “Initial impressions of ChatGPT for anatomy education.” Anatomical Sciences Education.
1 (1):1–4.
Nikolic, S., S. Daniel, R. Haque, M. Belkina, G. M. Hassan, S. Grundy, S. Lyden, P. Neal, and C. Sandison.
2023. “ChatGPT versus engineering education assessment: a multidisciplinary and multi-institutional
benchmarking and analysis of this generative artificial intelligence tool to investigate assessment integ-
rity.” European Journal of Engineering Education 48 (4):559–614.
O’Connor, S. 2022. “Open artificial intelligence platforms in nursing education: Tools for academic progress
or abuse?” Nurse Education in Practice 66 (1):103537–103537.
Pavlik, J. V. 2023. “Collaborating with ChatGPT: Considering the implications of generative artificial intelli-
gence for journalism and media education.” Journalism & Mass Communication Educator 78 (1):84–93.
Perkins, M. 2023. “Academic integrity considerations of AI large language models in the post-pandemic
era: ChatGPT and beyond.” Journal of University Teaching and Learning Practice 20 (2):1–14.
Phan Tan, L. 2021. “Mapping the social entrepreneurship research: Bibliographic coupling, co-citation and co-
word analyses.” Cogent Business & Management 8 (1):1–22.
Pradana, A., and I. Salehudin. 2015. “Work overload and turnover intention of junior auditors in greater Jakarta,
Indonesia.” The South East Asian Journal of Management 9 (2):108–124.
Pradana, M., H. P. Elisa, and S. Syarifuddin. 2023. “Discussing ChatGPT in education: A literature review and
bibliometric analysis.” Cogent Education 10 (2):1–11.
Reeves, B., S. Sarsa, J. Prather, P. Denny, B. A. Becker, A. Hellas, B. Kimmel, G. Powell, J. Leinonen, and
ACM. 2023. Evaluating the Performance of Code Generation Models for Solving Parsons Problems
with Small Prompt Variations. Paper read at 28th Annual Conference on Innovation and Technology in
Computer Science Education (ITiCSE), July 08–12, at University of Turku, Turku, Finland.
Rudolph, J., S. Tan, and S. Tan. 2023. “ChatGPT: Bullshit spewer or the end of traditional assessments in higher
education?” Journal of Applied Learning and Teaching 6 (1):1–12.
Small, H. 1973. “Co-citation in the scientific literature: A new measure of the relationship between two
documents.” Journal of the American Society for Information Science 24 (4):265–269.
Strzelecki, A. 2023. “To use or not to use ChatGPT in higher education? A study of students’ acceptance and
use of technology.” Interactive Learning Environments 1 (1):1–14.
Thurzo, A., M. Strunga, R. Urban, J. Surovková, and K. I. Afrashtehfar. 2023. “Impact of artificial intelligence
on dental education: A review and guide for curriculum update.” Education Sciences 13 (2):1–14.
Tlili, A., B. Shehata, M. A. Adarkwah, A. Bozkurt, D. T. Hickey, R. Huang, and B. Agyemang. 2023. “What if
the devil is my guardian angel: ChatGPT as a case study of using chatbots in education.” Smart Learning
Environments 10 (1):1–24.
Totlis, T., K. Natsis, D. Filos, V. Ediaroglou, N. Mantzou, F. Duparc, and M. Piagkou. 2023. “The potential role
of ChatGPT and artificial intelligence in anatomy education: a conversation with ChatGPT.” Surgical
and Radiologic Anatomy 45 (10):1321–1329.
van Eck, N., and L. Waltman. 2009. “Software survey: VOSviewer, a computer program for bibliometric
mapping.” Scientometrics 84 (2):523–538.
van Raan, A. F. J. 2005. “For your citations only? Hot topics in bibliometric analysis.” Measurement: Interdisciplinary
Research and Perspectives 3 (1):50–62.
69
Wach, K., C. D. Duong, J. Ejdys, R. Kazlauskaite, P. Korzynski, G. Mazurek, J. Paliszkiewicz, and E. Ziemba.
2023. “The dark side of generative artificial intelligence: A critical analysis of controversies and risks of
ChatGPT.” Entrepreneurial Business and Economics Review 11 (2):7–30.
Waltman, L. 2017. “Citation-based clustering of publications using CitNetExplorer and VOSviewer.”
Scientometrics 111 (2):1053–1070.
Woo, L. J., D. Henriksen, and P. Mishra. 2023. “Literacy as a technology: A conversation with Kyle Jensen
about AI, writing and more.” Techtrends 67 (5):767–773.
Xia, N., P. X. W. Zou, M. A. Griffin, X. Wang, and R. Zhong. 2018. “Towards integrating construction risk
management and stakeholder management: A systematic literature review and future research agendas.”
International Journal of Project Management 36 (5):701–715.
Zawacki-Richter, O., V. I. Marín, M. Bond, and F. Gouverneur. 2019. “Systematic review of research on arti-
ficial intelligence applications in higher education–where are the educators?” International Journal of
Educational Technology in Higher Education 16 (1):1–27.
5.1 INTRODUCTION
Introduced at the end of 2022 by OpenAI, ChatGPT (Chat Generative Pre-trained Transformer)
received tremendous attention, attracting 1 million users in just five days (Buchholz and Richter, 2023). It is a deep-learning artificial intelligence (AI) chatbot trained on diverse content, such as books, articles, and websites, to generate human-like content (Wach et al., 2023). Hence, ChatGPT can create answers to input questions in different languages. Generally, the technology in ChatGPT is not new, being based on the GPT-3.5 series, itself built on GPT-3 models introduced in 2020; however, ChatGPT achieved its breakthrough because it provides a user-friendly interface and is free for all users.
Students at different levels have adopted ChatGPT as an intensive language model and easy-to-
use platform to assist their learning. As mentioned by Duong et al. (2023), this platform can provide an outline, suggest corrections for texts, and provide instant information for questions asked by users. On the other hand, students in the Generation Z cohort are digital natives; they have grown up interacting with technology as part of their daily activities (McKinsey, 2023). Hence, the
widespread use of this platform in education applications can be partially explained.
Studies on ChatGPT in education have been receiving attention since 2022. Within the education sector, prior studies centered on four research pathways: the determinants of actual use or refusal to use ChatGPT (Duong et al., 2023; Strzelecki, 2023a), how to detect ChatGPT in students’ work, the bright and dark sides of ChatGPT in the educational sector (Ahsan et al., 2022; Perkins, 2023), and the potential and further possible applications of AI-based tools such as ChatGPT to leverage student performance (Wang et al., 2024). Yet, academic research results on ChatGPT are still fragmented and warrant further examination (Strzelecki, 2023a). Specifically, little attention has been paid to students’ intention to continue using ChatGPT during their study journey. Students’ continuous-use intention provides evidence for higher education institutions (HEIs) on whether to adopt this tool in the long term. Furthermore, this AI-generated tool is still in the feedback
phase (Strzelecki, 2023b). Hence, to fill this gap, our study examines factors influencing students’
continuous intention to use ChatGPT as a learning assistance tool in the future.
In this study, three research questions are addressed:
RQ1: Can prior experience and social influence effectively generate ChatGPT continuous-use
intention?
RQ2: What fundamental relationship exists between intention, social influence, and experience?
RQ3: What are the implications of positive ChatGPT retention at HEIs?
By closing the mentioned gaps, our research makes considerable contributions. First, we integrate two theories to examine AI-based technologies, whereas prior studies adopted only the Unified Theory of Acceptance and Use of Technology (UTAUT). Notably, this study reveals the indirect effect of experience and the role of four mediators, including personal innovativeness, motivation, perceived usefulness (PU), and enjoyment, in predicting the likelihood of intention to continue using AI-related technologies. Second, there is still debate about the pros and cons of ChatGPT. This chapter sheds light on the benefits of adopting new technology such as ChatGPT at HEIs by providing implications for various stakeholders – HEIs, educators, and students – to leverage AI-blended learning.
The remainder of this chapter is organized as follows. The research begins to answer the research questions by reviewing the relevant literature regarding ChatGPT to investigate research gaps and develop a research model. The second section explains how the research was implemented. The third section presents the data analysis procedure, deploying structural equation modeling in SmartPLS to reach results. Next, the study findings are presented with relevant discussion to provide implications for scholars and educational practitioners. The final section discusses current limitations and sheds light on further research opportunities.
Many organizations consider adopting AI data and knowledge as critical sources of competitive advantage (Wirtz et al., 2018). Within the education sector, AI-based technologies can effectively support teachers and students in reducing the burden of information searching (Loeckx, 2016; Chiu et al., 2023). Based on the S–O–R and UTAUT theoretical background, the study proposes a research model, shown in Figure 5.1.
The emergence of ChatGPT in late 2022 has attracted technology enthusiasts worldwide,
including academics and university learners. Experience of new technology has been investigated
by Venkatesh et al. (2003) in the UTAUT model as a moderating variable to discuss the determinants
of behavioral intention related to technology-based tools from initial introduction to more experienced stages of use. Given that this study was conducted in early 2023 and assuming that most
users have limited experience using ChatGPT, we examined both the direct and indirect effects of
experience on ChatGPT-using behavior. It is anticipated that experience helps enhance skills, foster
familiarity with new technology’s interface, and suggest insights into how well AI addresses its
specific objectives. Drawing on the adaptation-level theory (Helson, 1964), it is believed that early
experience with this new technology would immediately bring some users favorable impressions
and extrinsic motivation that would affect the decision to continue using it (Ho, 2012; Sun et al.,
2022). As such, hypothesis H1 is stated as follows:
H1: Experience has a direct and significant influence on the intention to continuously use ChatGPT
While some users decide on their purchase based on the initial impressions, others spend longer
experiencing it to build crucial knowledge for their consideration. This consumer response
mechanism is well reflected by the SOR model, in which internal feelings are equally important
(Mehrabian and Russell, 1974).
In this study, perceived enjoyment (PE) and perceived usefulness (PU) are regarded as
mediators between the early experience with ChatGPT and the decision to continue using it. PU
refers to users’ belief that technological tools will be able to help people improve productivity
by minimizing mental and physical efforts (Davis, 1989; Maheshwari, 2023). Perceived enjoyment is the degree to which using a technology is seen as intrinsically fascinating (Won et al., 2023). It is valid not only for
technologies such as social networks and mass media (Sullivan and Koh, 2019; Lee et al., 2019)
but also for AI-powered educational tools such as ChatGPT (Nalbant, 2021). It is hypothesized
that initial experience would stimulate self-evaluation on the level of usefulness and enjoyment
when using ChatGPT, and in turn, it leads to continuous-use intention. As such, the second and
the third hypotheses are developed as follows:
H2: Experience has an indirect and significant influence on the intention to continuously use ChatGPT through perceived usefulness
H3: Experience has an indirect and significant influence on the intention to continuously use ChatGPT through perceived enjoyment
H4: Social influence has a direct and significant influence on the intention to continuously use ChatGPT
While some people may be strongly affected by social pressure, others are highly motivated by
their internal inputs. Some exhibit relative resistance to following the trend of using technology-
based devices and tools unless they recognize their self-interest. This study considers personal
innovativeness and self-motivation as internal driving forces of technology usage behavior.
According to Patil et al. (2020), personal innovativeness is the degree to which a person is willing
to experiment with new technologies. Innovative individuals tend to be curious and desire to try
emerging technologies (Foroughi et al., 2023); therefore, they are more inclined to embrace the
challenges associated with using them (Kabra et al., 2017). Self-motivation is the internal drive by which individuals generate, sustain, and regulate their behavior toward their objectives. Based on self-determination theory, people have three fundamental psychological needs: autonomy, competence, and relatedness (Deci and Ryan, 1985). These needs form the basis for
both intrinsic and extrinsic incentives that foster individuals’ learning and achievement (Zhou and
Li, 2023).
In this study, intertwined with SI as an extrinsic drive, self-motivation and personal innovativeness are sources of intrinsic inspiration influencing the intention to keep using ChatGPT. As such, the
authors state hypotheses H5 and H6 as follows:
H5: Social influence has an indirect and significant influence on the intention to continuously use ChatGPT through personal innovativeness
H6: Social influence has an indirect and significant influence on the intention to continuously use ChatGPT through self-motivation
5.3 METHODOLOGY
5.3.1 Research design
This study employs a quantitative approach and a questionnaire for Vietnamese students at HEIs.
The survey consists of two sections. In the first section, participants were asked about demographic information, including age and gender. In the second section, 38 questions from previous research were applied to measure the variables in this study. Specifically, user experience (UE) was measured by ten items from the studies of Hsu and Tsou (2011), Tsaur et al. (2007), and Schmitt (2012); SI was measured by four items applied by Kalinic and Marinkovic (2016); PU was measured by six items from Davis (1989); self-motivation (PM) was measured by five items from Shi and Cristea (2016) and three items from Glynn et al. (2011); and personal innovativeness (PI) was adopted from Hong et al. (2017) with four items. Finally, the intention to continuous-use (ITCU) was adopted from the research of Venkatesh et al. (2012). On a five-point Likert scale from 1 (strongly disagree) to 5 (strongly agree), respondents indicated how ChatGPT influenced their learning journey and impacted their decision to continue to use it as one of the supporting tools for their learning. Regarding the pilot test, the research team
surveyed 50 students to collect feedback on questions that needed clarification or were inaccess-
ible to non-professionals. After being reworded, the questionnaires were ready to be distributed to
participants.
5.3.2 Sampling
Hair et al. (2016) stated that an exploratory study needs at least four to five times the total number
of items. The sample size was determined for each processing method using empirical methods,
such as sample size calculations for factor analysis (Hair et al., 2021) and regression analysis
(Tabachnick and Fidell, 2007). In addition, Worthington and Whittaker (2006) recommended using
structural equation model, confirmatory factor analysis, and exploratory factor analysis on sample
sizes larger than 200 and no less than 100 participants. In short, N = 106 is the minimum sample size recommended for factor analysis and multiple regression analysis, and the sample in this study produced 270 valid responses, which is sufficient and reliable for further analysis.
5.4 DATA ANALYSIS
Following Anderson and Gerbing’s (1991) approach, the questionnaire data are evaluated in two
stages for model measurement in this study. First, the outer loading indicates the link between the
observable and latent variables. According to Hair et al. (2016), it is acceptable to proceed to subsequent stages if the outer loading results are greater than 0.7. Then, the reliability is examined using two main indicators: Cronbach’s alpha (α) and composite reliability. Following the guidance of
DeVellis (2012) and Hair et al. (2016), Cronbach’s alpha and composite reliability have
to be larger than 0.7 to meet the required standard to proceed with further analysis. As shown in
TABLE 5.1
Reliability test
TABLE 5.2
Discriminant validity
PE UE ITCU PM PU PI SI
PE 0.882
UE 0.662 0.861
ITCU 0.650 0.604 0.887
PM 0.708 0.706 0.719 0.784
PU 0.602 0.579 0.667 0.719 0.837
PI 0.555 0.595 0.678 0.596 0.550 0.824
SI 0.503 0.582 0.572 0.598 0.549 0.570 0.785
Table 5.1, all α results are above 0.6 and composite reliability results are above 0.7, indicating that all constructs achieved satisfactory internal consistency; thus, the authors can proceed to the next test. Third, to evaluate convergence, the AVE (average variance extracted) indicator is used; as shown in Table 5.1, a significant degree of convergent validity is shown by AVE results that are greater than 0.5 (Hair et al. 2016).
Furthermore, following guidelines from Hair et al. (2016), this study must examine whether
the square roots of AVE are greater than the correlations between variables. As seen in Table 5.2,
discriminant validity is validated. Additionally, Harman’s single-factor test verifies that common
method bias is not a significant problem (Henseler et al., 2015).
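The thresholds above (outer loadings and composite reliability above 0.7, AVE above 0.5, and the Fornell–Larcker comparison of the square roots of AVE with inter-construct correlations) can be sketched in code as follows. This is a minimal illustration with hypothetical loadings and simulated item scores; the chapter's reported values come from SmartPLS, not from this code.

```python
# Sketch of the reliability/validity checks described above (toy inputs only).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x indicators matrix for one construct."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """Composite reliability from standardized outer loadings of one construct."""
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings ** 2).sum())

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of squared loadings."""
    return (loadings ** 2).mean()

def fornell_larcker_ok(aves: np.ndarray, corr: np.ndarray) -> bool:
    """sqrt(AVE) of each construct must exceed its correlations with the others."""
    off_diag = corr - np.diag(np.diag(corr))
    return bool(np.all(np.sqrt(aves) > off_diag.max(axis=1)))

loadings = np.array([0.78, 0.81, 0.85, 0.79])            # hypothetical construct
print(composite_reliability(loadings) > 0.7, ave(loadings) > 0.5)

rng = np.random.default_rng(0)
factor = rng.normal(size=(100, 1))                        # simulated common factor
items = factor + rng.normal(scale=0.8, size=(100, 4))     # simulated item scores
print(round(cronbach_alpha(items), 2))

aves = np.array([0.65, 0.70])                             # hypothetical AVEs
corr = np.array([[1.0, 0.55], [0.55, 1.0]])               # hypothetical correlations
print(fornell_larcker_ok(aves, corr))
```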
The bootstrapping technique, a nonparametric technique for assessing the accuracy of the PLS estimates, was utilized to test the hypotheses. As suggested by Hair et al. (2016), 5,000 bootstrap samples were used in this study. As shown in Table 5.3, UE and SI did not significantly influence the ITCU ChatGPT (b=0.662, p>0.05; b=0.069, p>0.05). Therefore, hypotheses H1 and H4 are rejected, while the other factors have significant and positive impacts on the ITCU ChatGPT (p<0.05).
Furthermore, the results in Table 5.4 indicate that perceived enjoyment (b=0.106, p<0.05), perceived usefulness (b=0.110, p<0.05), PI (b=0.170, p<0.05), and self-motivation (b=0.155, p<0.05) fully mediate the effects of user experience and social influence on the intention to use ChatGPT continuously.
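The logic of such a bootstrap mediation test can be illustrated with a simplified two-equation stand-in for the PLS path model: resample respondents with replacement, re-estimate the experience-to-mediator path (a) and the mediator-to-intention path (b) in each resample, and inspect the percentile confidence interval of a x b. The data below are simulated and the OLS regressions are only a stand-in for the SmartPLS estimation actually used.

```python
# Simplified illustration of percentile bootstrapping for an indirect effect
# (a*b); the chapter runs 5,000 bootstrap samples on the full PLS path model.
import numpy as np

rng = np.random.default_rng(42)
n = 270
experience = rng.normal(size=n)
enjoyment = 0.5 * experience + rng.normal(size=n)            # simulated mediator
intention = 0.4 * enjoyment + 0.05 * experience + rng.normal(size=n)

def ols_slopes(y, X):
    """Return slope coefficients of y ~ X (intercept added internally)."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

indirect = []
for _ in range(5000):
    idx = rng.integers(0, n, size=n)                          # resample with replacement
    a = ols_slopes(enjoyment[idx], experience[idx])[0]        # experience -> mediator
    b = ols_slopes(intention[idx],
                   np.column_stack([enjoyment[idx], experience[idx]]))[0]
    indirect.append(a * b)

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")        # CI excluding 0 supports mediation
```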
TABLE 5.3
Hypothesis testing results
TABLE 5.4
The mediating testing result
Although the direct effects of experience and SI on ChatGPT retention were not supported, the results found a significant indirect relationship between those factors and retention under the full mediating effects of PU, perceived enjoyment, personal innovativeness, and self-motivation. This offers clear evidence to respond to the second research question, “What is the underlying relationship between experience, SI, and ChatGPT continuous-use intention?”.
Interestingly, experience is considered a critical factor leading to ChatGPT’s future usage, espe-
cially through the mediating effects of perceived enjoyment and usefulness. These findings empha-
size the important role of the cognitive state inside consumers’ minds. The decision to continue using ChatGPT does not depend solely on immediate experience; rather, the experience needs to be long and valuable enough for users to recognize the enjoyment and usefulness of ChatGPT. This can be explained by the different educational settings and objectives of using ChatGPT in learning support. If learners intend to use ChatGPT as a human replacement, it would not be a perfect-fit solution. If they instead use the AI-based experience as a source of motivation to explore the diversity of human knowledge, these
tools would help reflect diverse perspectives. Also, learners can enjoy the journey of knowledge
exploration and sincerely acknowledge the usefulness of ChatGPT, which determines their usage
retention.
The significance of PU and perceived enjoyment in impacting the ongoing use of AI-based
learning tools is confirmed in the educational context by recent studies (Al-Sharafi et al., 2023;
Tiwari et al., 2023; Strzelecki, 2023). Tiwari et al. (2023), based on responses from 270 students,
affirmed the contribution of usefulness, enjoyment, and motivation to favorable attitudes toward
using ChatGPT in the learning environment. Despite employing a different research model and
terms, Strzelecki (2023) highlighted consensus results on the positive connection of performance expectancy and hedonic motivation to users’ acceptance of ChatGPT in tertiary education.
Al-Sharafi et al. (2023) found supporting evidence for a considerable effect of usefulness on the
sustainable use of AI-based chatbots for educational purposes. While previous studies emphasize
the direct roles of PU and enjoyment to a continuous decision, this study adds empirical evidence on
their mediating role to strengthen initial experience and connect it with continuing intention.
The study found that SI has an insignificant direct impact on the ITCU ChatGPT. The effect
of SI on the acceptance of emerging technology is intricate and subject to diverse contingencies
(Venkatesh et al., 2003). Varshneya et al. (2017) agreed that SI will likely show a limited effect on
the acceptance and usage of early penetrated technology. Foroughi et al. (2023) reported a consensus
finding that insignificant correlations could exist between SI and students’ intention to use ChatGPT.
Interesting results illustrate significant indirect effects through the mediating role of personal
innovativeness and self-motivation. This exciting result highlights the importance of individual
and personal factors, intertwined with external pressure, to explain the continuance of ChatGPT
usage. This is true as the learners and researchers are heterogeneous, particularly in novelty adapta-
tion and intrinsic motivation. Venkatesh et al. (2006) argued that intrinsic and extrinsic motivation
would be generated from the early experience of using technologies, but it would change over time
with greater understanding. Ho (2012) found that external influences creating extrinsic motivation
in the initial usage stage do not effectively predict behavior intention in subsequent use, whereas
intrinsic motivation is rather reliable. This supports our study by emphasizing the importance of
users’ characteristics and inherent incentives rather than social pressure in the decision-making pro-
cess. Particularly, with AI-based technology adoption, personal innovativeness is among the vital
drivers. Highly innovative people tend to be less influenced by other opinions (Khazaei and Tareq,
2021; Foroughi et al., 2023); they wish to be among pioneering users of modern technology (Cheng,
2014). Kandoth and Shekhar (2022) reported a consensus finding that SI significantly impacts
personal innovativeness, and personal innovativeness has a positive effect on the intention
to use AI-enabled applications. It can be seen that learners seem to be realistic and performance-
oriented decision-makers with ChatGPT adoption.
5.5.2 Contributions
5.5.2.1 Theoretical contributions
Three distinctive theoretical highlights can be generated from this study.
First, this research focuses on the impact of the ChatGPT experience and SI on students’ affective
and cognitive responses toward continuous use of this platform. The literature on AI-based technolo-
gies and ChatGPT specifically has been addressed and enriched.
Second, this study developed a research model by integrating two theories, SOR and UTAUT.
Prior studies of AI-based technologies and ChatGPT (Duong et al., 2023; Venkatesh, 2022) solely
adopted the UTAUT theory. Furthermore, in the UTAUT, experience moderates the relationships
between behavioral intention and other determinants. Yet, this study provides evidence suggesting
the indirect effect of experience to predict the likelihood of intention to continue using new
technologies.
Third, this study examines four mediators between the stimulus clusters and customers’ behavioral responses: personal innovativeness, motivation, PU, and enjoyment. There was no direct impact of SI or UE on the intention; however, when the four mediators are present, ChatGPT retention is formed.
Thus, both theories are extended and generalized in this context.
5.5.2.2 Practical implications
Driven by the research, three practical implications can be generated for different stakeholders.
First, an implication is suggested for HEIs. There have been controversial opinions about plagiarism and inclinations toward dishonesty since ChatGPT was introduced (Cotton et al., 2023). Notably, the controversy is more intense in developing countries that uphold Confucian beliefs. Any technology has its own positive and negative impacts on users. For example, TikTok provides a creative and entertaining platform that attracts millions of users; however, it receives enormous criticism, such as concerns over data security or attention fragmentation effects on Generation Z. The same applies to ChatGPT: the benefits of this AI-based platform are undeniable despite criticism and worries. AI-based technology, specifically ChatGPT, is necessary for in-class and out-of-class education. Students need additional support, such as synthesizing knowledge or signposting for analysis, to develop their self-learning process and achieve lifelong learning objectives. Therefore, AI-based
technologies are vital in digital transformation and must be taught in higher education institutions.
Educating students on how to use ChatGPT correctly can generate huge benefits by leveraging
learning performance.
Furthermore, driven by this study, HEIs can focus on developing personal innovativeness for
educators and students to prepare them for retention. For example, HEIs should collaborate with
technology and AI research centers to provide training sessions and workshops to enhance knowledge and increase engagement with different platforms. Furthermore, joint research projects between
these centers and HEIs should be promoted so that the willingness level of participants to engage
with new technology experiments will significantly increase.
The second implication is set for educators. The educators at HEIs should encourage students to
use ChatGPT as a learning support tool during their study. Lecturers should understand the application of AI-based technology in their module and suggest it to students during the study period. These activities act as sources of motivation and usefulness for students to experience and engage with new technologies to leverage their learning performance. At the same time, lecturers should keep AI-detection tools up to date and alert students beforehand so that they can understand the limits of their actions.
The third implication is essential for platform developers. The emotion of enjoyment is a vital mediator of continuous-use intention. Hence, developers should focus on and promote interactive and entertaining functions so that customers return and keep using the technology as part of their daily activities.
ACKNOWLEDGMENT
This research is funded by the University of Economics Ho Chi Minh City (UEH) Vietnam under
grant number 2024-01-19-2058.
REFERENCES
Agarwal, R., & Prasad, J. (1998). A conceptual and operational definition of personal innovativeness in the
domain of information technology. Information Systems Research, 9(2), 204–215. https://doi.org/
10.1287/isre.9.2.20
Ahsan, K., Akbar, S., & Kam, B. (2022). Contract cheating in higher education: A systematic literature review
and future research agenda. Assessment and Evaluation in Higher Education, 47(4), 523–539. https://
doi.org/10.1080/02602938.2021.1931660
Alowayr, A. (2022). Determinants of mobile learning adoption: Extending the unified theory of acceptance
and use of technology (UTAUT). International Journal of Information and Learning Technology, 39(1),
1–12. https://doi.org/10.1108/IJILT-05-2021-0070
Al-Sharafi, M. A., Al-Emran, M., Iranmanesh, M., Al-Qaysi, N., Iahad, N. A., & Arpaci, I. (2023). Understanding
the impact of knowledge management factors on the sustainable use of AI-based chatbots for educational
purposes using a hybrid SEM-ANN approach. Interactive Learning Environments, 31(10), 7491–7510.
https://doi.org/10.1080/10494820.2022.2075014
Anderson, J. C., & Gerbing, D. W. (1991). Predicting the performance of measures in a confirmatory factor
analysis with a pretest assessment of their substantive validities. Journal of Applied Psychology, 76(5),
732–740. https://doi.org/10.1037/0021-9010.76.5.732
Bernacki, M. L., Greene, J. A., & Crompton, H. (2020). Mobile technology, learning, and achievement: Advances
in understanding and measuring the role of mobile technology in education. Contemporary Educational
Psychology, 60, 101827. https://doi.org/10.1016/j.cedpsych.2019.101827
Buchholz, K., & Richter, F. (2023). Infographic: Threads shoots past one million user mark at lightning speed. Statista Daily Data. www.statista.com/chart/29174/time-to-one-million-users/ (Accessed: 18 January 2024).
Cheng, Y. M. (2014). Exploring the intention to use mobile learning: The moderating role of personal
innovativeness. Journal of Systems and Information Technology, 16(1), 40–61. https://doi.org/10.1108/
Chiu, T. K. F., Moorhouse, B. L., Chai, C. S., & Ismailov, M. (2023). Teacher support and student motivation to
learn with artificial intelligence (AI) based chatbot. Interactive Learning Environments, ahead-of-print
(ahead-of-print), 1–17. https://doi.org/10.1080/10494820.2023.2172044
Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity
in the era of ChatGPT. Innovations in Education and Teaching International, ahead-of-print (ahead-of-
print), 1–12. https://doi.org/10.1080/14703297.2023.2190148
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information tech-
nology. MIS Quarterly, 13(3), 319–340.
Deci, E. L. & Ryan, R. M. (1985). The general causality orientations scale: Self-determination in personality.
Journal of Research in Personality, 19(2), 109–134. https://doi.org/10.1016/0092-6566(85)90023-6
DeVellis, R. F. (2012). Scale development: Theory and applications (3rd ed.). Sage.
DeVellis, R. F., & Thorpe, C. T. (2021). Scale development: Theory and applications. Sage.
Duong, C. D., Vu, T. N., & Ngo, T. V. N. (2023). Applying a modified technology acceptance model to explain
higher education students’ usage of ChatGPT: A serial multiple mediation model with knowledge
sharing as a moderator. The International Journal of Management Education, 21(3), 100883. https://doi.
org/10.1016/j.ijme.2023.100883
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and
research. Addison-Wesley.
Foroughi, B., Senali, M. G., Iranmanesh, M., Khanfar, A., Ghobakhloo, M., Annamalai, N., & Naghmeh-
Abbaspour, B. (2023). Determinants of intention to use ChatGPT for educational purposes: Findings
from PLS-SEM and fsQCA. International Journal of Human–Computer Interaction, 39(20), 1–20.
Glynn, S. M., Brickman, P., Armstrong, N., & Taasoobshirazi, G. (2011). Science motivation questionnaire
II: Validation with science majors and nonscience majors. Journal of Research in Science Teaching,
48(10), 1159–1176. https://doi.org/10.1002/tea.20442
Hair Jr., J. F., Hult, G. T. M., Ringle, C., & Sarstedt, M. (2016). A primer on partial least squares structural
equation modeling (PLS-SEM). Sage.
Hair Jr., J. F., Hult, G. T. M., Ringle, C. M., Sarstedt, M., Danks, N. P., & Ray, S. (2021). Partial least squares
structural equation modeling (PLS-SEM) using R: A workbook (1st ed.). Springer Nature. https://doi.
org/10.1007/978-3-030-80519-7
Helson, H. (1964). Current trends and issues in adaptation-level theory. The American Psychologist, 19(1),
26–38. https://doi.org/10.1037/h0040013
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in
variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–
135. https://doi.org/10.1007/s11747-014-0403-8
Ho, S. Y. (2012). The effects of location personalization on individuals’ intention to use mobile services.
Decision Support Systems, 53(4), 802–812. https://doi.org/10.1016/j.dss.2012.05.012
Hong, J.-C., Lin, P.-H., & Hsieh, P.-C. (2017). The effect of consumer innovativeness on perceived value and
continuance intention to use smartwatch. Computers in Human Behavior, 67, 264–272. https://doi.org/
10.1016/j.chb.2016.11.001
Hsu, H. Y., & Tsou, H.-T. (2011). Understanding customer experiences in online blog environments. International Journal of Information Management, 31(6), 510–523. https://doi.org/10.1016/j.ijinfomgt.2011.05.003
Hu, G. (2023). Challenges for enforcing editorial policies on AI-generated papers. Accountability in Research,
ahead-of-print (ahead-of-print), 1–3. https://doi.org/10.1080/08989621.2023.2184262
Kabra, G., Ramesh, A., Akhtar, P., & Dash, M. K. (2017). Understanding behavioural intention to use information technology: Insights from humanitarian practitioners. Telematics and Informatics, 34(7), 1250–1261. https://doi.org/10.1016/j.tele.2017.05.010
Kalinic, Z., & Marinkovic, V. (2016). Determinants of users’ intention to adopt m-commerce: An empirical
analysis. Information Systems and E-Business Management, 14(2), 367–387. https://doi.org/10.1007/
s10257-015-0287-2
Kandoth, S., & Kushe Shekhar, S. (2022). Social influence and intention to use AI: the role of personal
innovativeness and perceived trust using the parallel mediation model. Forum Scienciae Oeconomia
(Online), 10(3), 131–150.
Khazaei, H., & Tareq, M. A. (2021). Moderating effects of personal innovativeness and driving experience on
factors influencing adoption of BEVs in Malaysia: An integrated SEM–BSEM approach. Heliyon, 7(9),
e08072. https://doi.org/10.1016/j.heliyon.2021.e08072
Lee, J., Kim, J., & Choi, J. Y. (2019). The adoption of virtual reality devices: The technology acceptance model
integrating enjoyment, social interaction, and strength of the social ties. Telematics and Informatics, 39,
37–48. https://doi.org/10.1016/j.tele.2018.12.006
Loeckx, J. (2016). Blurring boundaries in education: Context and impact of MOOCs. International Review of
Research in Open and Distance Learning, 17(3), 92–121. https://doi.org/10.19173/irrodl.v17i3.2395
Maheshwari, G. (2023). Factors influencing students’ intention to adopt and use ChatGPT in higher edu-
cation: A study in the Vietnamese context. Education and Information Technologies. https://doi.org/
10.1007/s10639-023-12333-z
Matthews, L. M., Sarstedt, M., Hair, J. F., & Ringle, C. M. (2016). Identifying and treating unobserved hetero-
geneity with FIMIX-PLS: Part II – A case study. European Business Review, 28(2), 208–224. https://doi.
org/10.1108/EBR-09-2015-0095
McKinsey. (2023). What is Gen Z? McKinsey & Company. www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-gen-z (Accessed: 18 January 2024).
Mehrabian, A., & Russell, J. A. (1974). An approach to environmental psychology. The MIT Press.
Nalbant, K. G. (2021). The importance of artificial intelligence in education: A short review. Journal of Review
in Science and Engineering, 2021, 1–15.
Patil, P., Tamilmani, K., Rana, N. P., & Raghavan, V. (2020). Understanding consumer adoption of mobile
payment in India: Extending Meta-UTAUT model with personal innovativeness, anxiety, trust, and
grievance redressal. International Journal of Information Management, 54, 102144. https://doi.org/
10.1016/j.ijinfomgt.2020.102144
Perkins, M. (2023). Academic Integrity considerations of AI large language models in the post-pandemic
era: ChatGPT and beyond. Journal of University Teaching & Learning Practice, 20(2). https://doi.org/
10.53761/1.20.02.07
Schmitt, B. (2012). The consumer psychology of brands. Journal of Consumer Psychology, 22(1), 7–17. https://
doi.org/10.1016/j.jcps.2011.09.005
Shi, L., & Cristea, A.I. (2016). Motivational Gamification Strategies Rooted in Self-Determination Theory
for Social Adaptive E-Learning. In: Micarelli, A., Stamper, J., Panourgia, K. (eds.), Intelligent tutoring
systems (Vol. 9684, pp. 294–300). Springer International Publishing. https://doi.org/10.1007/978-3-319-
39583-8_32
Sing, C. C., Teo, T., Huang, F., Chiu, T. K. F., & Xingwei, W. (2022). Secondary school students’ intentions
to learn AI: Testing moderation effects of readiness, social good and optimism. Educational Technology
Research and Development, 70(3), 765–782. https://doi.org/10.1007/s11423-022-10111-1
Strzelecki, A. (2023a). To use or not to use ChatGPT in higher education? A study of students’ acceptance and
use of technology. Interactive Learning Environments, ahead-of-print (ahead-of-print), 1–14. https://doi.
org/10.1080/10494820.2023.2209881
Strzelecki, A. (2023b). Students’ acceptance of ChatGPT in higher education: An extended unified theory
of acceptance and use of technology. Innovative Higher Education. https://doi.org/10.1007/s10
755-023-09686-1
Strzelecki, A., & ElArabawy, S. (2024). Investigation of the moderation effect of gender and study level on the
acceptance and use of generative AI by higher education students: Comparative evidence from Poland
and Egypt. British Journal of Educational Technology, 55(3), 1209–1230. https://doi.org/10.1111/
bjet.13425
Sullivan, Y. W., & Koh, C. E. (2019). Social media enablers and inhibitors: Understanding their relationships
in a social networking site context. International Journal of Information Management, 49, 170–189.
https://doi.org/10.1016/j.ijinfomgt.2019.03.014
Sun, J., Wayne, S. J., & Liu, Y. (2022). The roller coaster of leader affect: An investigation of observed leader
affect variability and engagement. Journal of Management, 48(5), 1188–1213. https://doi.org/10.1177/
01492063211008974
Tabachnick, B. G., & Fidell, L. S. (2007). Experimental designs using ANOVA (Vol. 724). Thomson/Brooks/
Cole.
Tiwari, C. K., Bhat, M. A., Khan, S. T., Subramaniam, R., & Khan, M. A. I. (2023). What drives students
toward ChatGPT? An investigation of the factors influencing adoption and usage of ChatGPT. Interactive
Technology and Smart Education. https://doi.org/10.1108/ITSE-04-2023-0061
Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if
the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning
Environments, 10(1), 15–24. https://doi.org/10.1186/s40561-023-00237-x
Tsaur, S. H., Chiu, Y. T., & Wang, C. H. (2007). The visitors behavioral consequences of experiential
marketing: An empirical study on Taipei Zoo. Journal of Travel & Tourism Marketing, 21(1), 47–64.
https://doi.org/10.1300/J073v21n01_04
Varshneya, G., Pandey, S. K., & Das, G. (2017). Impact of social influence and green consumption values on
purchase intention of organic clothing: a study on collectivist developing economy. Global Business
Review, 18(2), 478–492. https://doi.org/10.1177/0972150916668620
Venkatesh, V. (2022). Adoption and use of AI tools: a research agenda grounded in UTAUT. Annals of
Operations Research, 308(1–2), 641–652. https://doi.org/10.1007/s10479-020-03918-9
Venkatesh, V., Maruping, L. M., & Brown, S. A. (2006). Role of time in self-prediction of behavior.
Organizational Behavior and Human Decision Processes, 100, 160–176.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information tech-
nology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information tech-
nology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–
178. https://doi.org/10.2307/41410412
von Garrel, J., & Mayer, J. (2023). Artificial intelligence in studies—use of ChatGPT and AI-based tools
among students in Germany. Humanities & Social Sciences Communications, 10(1), 1–9. https://doi.
org/10.1057/s41599-023-02304-7
Wach, K., Duong, C. D., Ejdys, J., Kazlauskaitė, R., Korzynski, P., Mazurek, G., & Ziemba, E. (2023).
The dark side of generative artificial intelligence: A critical analysis of controversies and risks of
ChatGPT. Entrepreneurial Business and Economics Review, 11(2), 7–30. https://doi.org/10.15678/
EBER.2023.110201
Wai, I. S. H., Ng, S. S. Y., Chiu, D. K. W., Ho, K. K. W., & Lo, P. (2018). Exploring undergraduate students’
usage pattern of mobile apps for education. Journal of Librarianship and Information Science, 50(1),
34–47. https://doi.org/10.1177/0961000616662699
Wang, L., Chen, X., Wang, C., Xu, L., Shadiev, R., & Li, Y. (2024). ChatGPT’s capabilities in providing feed-
back on undergraduate students’ argumentation: A case study. Thinking Skills and Creativity, 51. https://
doi.org/10.1016/j.tsc.2023.101440
Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., & Martins, A. (2018). Brave new
world: Service robots in the frontline. International Journal of Service Industry Management, 29(5),
907–931. https://doi.org/10.1108/JOSM-04-2018-0119
Won, D., Chiu, W., & Byun, H. (2023). Factors influencing consumer use of a sport-branded app: The tech-
nology acceptance model integrating app quality and perceived enjoyment. Asia Pacific Journal of
Marketing and Logistics, 35(5), 1112–1133. https://doi.org/10.1108/APJML-09-2021-0709
Worthington, R. L., & Whittaker, T. A. (2006). Scale development research: A content analysis and
recommendations for best practices. The Counseling Psychologist, 34(6), 806–838. https://doi.org/
10.1177/0011000006288127
Zhou, L., & Li, J. J. (2023). The impact of ChatGPT on learning motivation: A study based on self-determination
theory. Education Science Management, 1(1), 19–29.
6 Impact of Generative AI
Models on Personalized
Learning and Adaptive
Systems
Manas Kumar Yogi, Y. Reshma Chowdary, and
Ch. Pavani Ratna Sri Santhoshi
6.1 INTRODUCTION
In the modern era, due to rapid technological advances, learning through support systems has become
imperative. This chapter explores the exciting and transformative role that generative artificial intel-
ligence (AI) can play in personalized learning. We will delve into a variety of novel applications
and visions for harnessing this technology to create more adaptive and individualized educational
experiences. From customized content generation and adaptive assessments to virtual tutors,
personalized learning pathways, domain-specific learning, project-based learning, and continuous
improvement in education, the possibilities are vast and intriguing. By illustrating real-world use
cases and showcasing the promising future that generative AI holds for the academic community,
we aim to shed light on the profound impact it can have on the way we learn, teach, and adapt in
an ever-changing educational landscape (Alasadi, Baiz, 2023). The traditional model of education,
characterized by a standardized curriculum and uniform instructional methods, often falls short in
catering to the diverse needs of learners. While some students excel with the existing approach,
others struggle to keep pace, and many fall through the cracks due to a misalignment between their
learning preferences and the educational system. Generative AI, with its ability to create tailored
educational experiences, presents an exciting opportunity to bridge this gap.
Educators and researchers have long sought ways to tailor educational content and strategies to
the unique needs and abilities of individual learners. The advent of generative AI has ushered in a
new era of possibilities in this endeavor. This chapter delves into the transformative potential of
generative AI, with a particular focus on models like GPT-3 and its successors, in shaping the future
of personalized learning.
The primary objectives of this chapter are to elucidate the various ways in which generative AI
can be leveraged in education and to present compelling evidence through case studies and experi-
mental results that demonstrate its efficacy. We will showcase the applicability of generative AI
models, with a particular focus on GPT-3 and its contemporary counterparts, and underscore how
they can effectively cater to the diverse and dynamic needs of the academic community (Bahroun,
Anane, et al. 2023). As we delve into this exploration, it is essential to acknowledge the broader
context in which education operates today. The digital age has brought with it an abundance of
data, technological advancements, and a growing demand for highly personalized and accessible learning experiences.
One of the most significant advantages of generative AI in education is its potential to support
continuous improvement in learning. Traditional education systems often lack the tools to gather
detailed data on student performance, and educators may not have the resources to offer personalized
feedback to every student. Generative AI can collect and analyze data on individual students’ per-
formance, identifying areas where they excel and where they need improvement. With this informa-
tion, it can provide personalized feedback, recommend additional resources, and suggest targeted
practice to help students master challenging concepts. Continuous improvement in learning ensures
that each student receives the support and guidance they need to reach their full potential, ultimately
leading to better educational outcomes.
The field of education is undergoing a transformative shift. One of the most promising
developments is the integration of generative AI in education, aiming to revolutionize the way
students learn and educators teach. By harnessing the power of AI, this innovative approach allows
for personalized learning experiences tailored to each student’s unique needs and learning style
(Bozkurt, Junhong, et al. 2023). Through sophisticated algorithms, generative AI analyzes vast
amounts of data, including student performance, preferences, and previous learning outcomes, to
generate personalized content, recommendations, and assessments. This not only enhances student
engagement and motivation but also enables educators to gain valuable insights into student pro-
gress, enabling targeted interventions and support. With generative AI in education, a new era of
personalized and effective learning is within reach, ensuring that every student has the opportunity
to thrive and reach their full potential.
As we delve deeper into the potential of generative AI in education, the concept of enhancing
learning through personalized algorithms emerges as a transformative approach. By leveraging the
power of sophisticated algorithms, customized learning experiences can be tailored to each student’s
individual needs and preferences. These algorithms analyze a wealth of data, including student per-
formance, learning patterns, and feedback.
With the rapid advancements in technology, the future of education holds immense poten-
tial for transformation. One area that shows great promise is the integration of generative AI in
education. This cutting-edge technology can personalize learning experiences for every student,
catering to their unique needs and learning styles. By leveraging generative AI, educators can create
customized curricula and adaptive assessments that address specific areas of improvement, ensuring
that students receive tailored instruction and support. Furthermore, generative AI can analyze vast
amounts of data to identify patterns and trends, providing valuable insights for educators to make
data-driven decisions and interventions. This powerful tool has the potential to revolutionize educa-
tion, empowering both students and teachers to achieve greater levels of success and unlocking new
opportunities for learning and growth.
In a world where education is increasingly recognized as a lifelong pursuit, generative AI can
ensure that learning is not only accessible but also engaging and effective for all. The future of edu-
cation is one where the boundaries of the traditional classroom are pushed to encompass the infinite
possibilities of personalized and adaptive learning, and generative AI is at the forefront of this
exciting transformation. From customized content generation to conducting adaptive assessments,
virtual tutors, personalized learning pathways, domain-specific learning, project-based learning,
and continuous improvement in learning, the role of generative AI in education is multifaceted and
promising (Dai, Liu, et al. 2023).
Generative AI can also provide guidance that resonates with and motivates students on a deeper level. By harnessing the capabilities
of generative AI, we can revolutionize education by ensuring that every student receives the support
and opportunities they need to thrive academically and personally.
Personalized learning is rooted in the idea that students have different learning paces and
preferences. It empowers students by accommodating their unique requirements. When students
see their progress and success in their personalized learning journey, it boosts their self-confidence
and self-efficacy. They become active agents in their learning process. Personalized learning often
incorporates elements of choice and autonomy. This, in turn, fosters intrinsic motivation, as students
are more engaged when they have a say in their learning and can pursue subjects or projects that
genuinely interest them.
Personalized learning captures and maintains students’ interest. By connecting lessons to their
individual experiences and interests, it keeps them engaged and motivated to learn. It allows students
to focus on concepts or skills they find challenging until they achieve mastery. This approach ensures
a deeper understanding of the material. People have different learning styles – visual, auditory, and so on – and personalized learning can accommodate each of them.
The tailored support provided in personalized learning can significantly reduce achievement gaps
between students of different backgrounds and abilities. By addressing individual learning needs,
the disparities in performance can be mitigated.
The digital age and the rapid evolution of knowledge necessitate continuous learning
throughout one’s life. Personalized learning equips students with the skills and mindset to adapt
and embrace lifelong learning, a critical need in the modern world. The recent global events have
underscored the importance of flexibility in education. Personalized learning can seamlessly tran-
sition between in-person and online environments, ensuring education remains accessible and
effective regardless of external circumstances. The Fourth Industrial Revolution is transforming
industries and the job market. Personalized learning ensures students are equipped with the skills
and knowledge needed to thrive in a rapidly changing world, fostering adaptability and critical
thinking.
Personalized learning allows for detailed, individualized feedback on assignments and
assessments. Students can learn from their mistakes, which accelerates their progress. Teachers can
use personalized learning to differentiate instruction. Students who need extra support or challenges
can receive it, ensuring no one is left behind, and no one is held back. It often involves the use of
digital tools and technology. This equips students with digital literacy skills essential for success in
today’s digital world.
With personalized algorithms, we can unlock the full potential of every student, creating a learning environment that fosters growth and academic success (Kadaruddin, 2023). The availability of digital resources and platforms enables the seamless implementation of personalized learning, making it more accessible than ever.
In a world where educational needs are as diverse as the individuals seeking knowledge,
personalized learning has emerged as a critical educational paradigm. As shown in Table 6.1,
personalized learning empowers students, enhances learning outcomes, and adapts to the evolving
educational landscape. Whether it’s catering to diverse learning styles, reducing achievement gaps,
or fostering self-direction and adaptability, personalized learning is not just a pedagogical approach;
it’s a paradigm shift that is reshaping the educational landscape. As the world continues to change,
and as technology advances, personalized learning remains essential for ensuring that every learner
has the chance to achieve their full potential and succeed in an increasingly complex and competitive
world (Li, Xu, et al. 2024).
TABLE 6.1
Features of existing solutions for personalised learning
Below are the key components of adaptive learning systems, as shown in Figure 6.2; a minimal code sketch of this cycle follows the list.
User accesses system: The process begins when a user (student, teacher, or administrator)
accesses the adaptive learning system.
User profile initialization: The system initializes or updates the user profile, considering past per-
formance, preferences, and other relevant data.
Initial assessment: An initial assessment is conducted to identify the user’s proficiency level and
learning needs.
Adaptive learning path generation: Based on the assessment, the system generates a personalized
learning path, adapting to the user’s proficiency.
Content delivery: The system delivers adaptive content aligned with the user’s learning path,
adjusting difficulty levels as needed.
User interaction and engagement: The user engages with the content, and the system tracks
interactions to understand engagement levels.
Real-time feedback: Continuous assessment and feedback are provided to the user in real time to
guide learning and improvement.
Learning analytics: Data from user interactions and assessments are analyzed for insights into
learning patterns and performance.
Decision support: The system uses learning analytics to support decision-making, suggesting next
steps and recommendations for the user.
User interface: The user interacts with the system through a user interface, which may include
dashboards and personalized recommendations.
End: The flowchart concludes, representing the completion of the adaptive learning cycle.
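To make this cycle concrete, the following minimal Python sketch walks through one pass of the loop described above. The class and function names (UserProfile, initial_assessment, generate_learning_path, update_profile) are illustrative placeholders rather than components of any particular platform, and the update rule is deliberately simple.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Hypothetical learner record: a single proficiency score plus interaction history."""
    user_id: str
    proficiency: float = 0.5            # 0.0 (novice) .. 1.0 (expert)
    history: list = field(default_factory=list)

def initial_assessment(profile: UserProfile, answers: list) -> None:
    """Estimate starting proficiency from a short diagnostic quiz (True = correct answer)."""
    if answers:
        profile.proficiency = sum(answers) / len(answers)

def generate_learning_path(profile: UserProfile, topics: list) -> list:
    """Skip early (foundational) topics in proportion to the estimated proficiency."""
    skip = int(profile.proficiency * (len(topics) - 1))
    return topics[skip:]

def update_profile(profile: UserProfile, correct: bool, weight: float = 0.1) -> None:
    """Real-time feedback step: nudge the proficiency estimate after each interaction."""
    target = 1.0 if correct else 0.0
    profile.proficiency += weight * (target - profile.proficiency)
    profile.history.append(correct)

# One pass through the cycle: access -> profile -> assessment -> path -> delivery -> feedback.
profile = UserProfile(user_id="student-001")
initial_assessment(profile, answers=[True, False, True, True])      # diagnostic quiz
path = generate_learning_path(profile, ["fractions", "ratios", "linear equations"])
for topic in path:                                                   # content delivery
    correct = topic != "linear equations"                            # placeholder for a real response
    update_profile(profile, correct)                                 # continuous assessment
print(round(profile.proficiency, 2), path)                           # analytics / decision support
```

In a production system, the single proficiency score would be replaced by a richer learner model and the path logic by the adaptive engine itself; the sketch only mirrors the sequence of steps in Figure 6.2.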
Customized content generation often relies on a student’s data and history of interactions. The
ethical use of AI in education, including content generation, is a significant concern. Ensuring that
AI-generated content adheres to ethical standards and does not perpetuate bias or discrimination is
a complex challenge (Matz, Teeny, et al. 2024). Every student has unique strengths and weaknesses.
Customized content generation can be utilized to craft personalized lesson plans that cater to these
individual needs. This approach ensures that students spend more time on topics they find challen-
ging and less time on areas they’ve already mastered. It promotes efficient learning and can help
prevent student frustration or boredom.
Generative AI can be employed to create customized textbooks, worksheets, and study materials.
AI can generate a wide range of content types, from written texts to images and videos. This cap-
ability allows for the creation of diverse learning materials. For example, an AI model can generate
visual aids, interactive simulations, and even video lectures to cater to different learning preferences.
This multimodal approach increases accessibility and engagement. For students with specific edu-
cational needs, such as those with learning disabilities or exceptional talents, customized content
generation can create specialized learning paths. These paths are designed to address individual
requirements and ensure that every student has access to an education that suits their abilities and
challenges.
Customized content generation powered by generative AI models is poised to transform educa-
tion. This approach enables educators to provide students with tailored learning materials that suit their unique needs and preferences. As we have explored in this chapter, the applications
of customized content generation are diverse and impactful, encompassing personalized learning
materials, adaptive assessments, interactive conversations, specialized learning paths, and more.
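As a small illustration of how such customization might be wired up, the sketch below turns a hypothetical student profile into a generation prompt that could be sent to a model such as GPT-3. The profile fields and the build_worksheet_prompt helper are assumptions made for the example; no specific API is implied.

```python
from textwrap import dedent

def build_worksheet_prompt(name, grade_level, weak_topics, preferred_style):
    """Assemble a generation prompt from a (hypothetical) student profile.

    The returned string would be passed to whatever LLM interface an institution uses.
    """
    return dedent(f"""
        Create a practice worksheet for a grade {grade_level} student named {name}.
        Focus on the topics the student finds difficult: {", ".join(weak_topics)}.
        Present the material in a {preferred_style} style (e.g. worked examples first).
        Include 5 exercises of gradually increasing difficulty and an answer key.
    """).strip()

# Illustrative profile values only.
prompt = build_worksheet_prompt(
    name="Asha",
    grade_level=8,
    weak_topics=["ratios", "percentages"],
    preferred_style="visual, step-by-step",
)
print(prompt)
```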
6.4.2 Adaptive Assessments
Traditional assessments have often been criticized for their rigidity and inability to adapt to the indi-
vidual learning needs of students. Standardized tests, for instance, follow a fixed format that fails to
account for variations in student knowledge and skills. Adaptive assessments represent a paradigm
shift in the way we evaluate learning. These assessments leverage technology to provide customized
tests that adjust in real time, presenting students with questions that match their current abilities. As
such, adaptive assessments align perfectly with the goals of personalized learning.
Generative AI models like GPT-3 have paved the way for adaptive assessments that respond to learners much as a skilled human examiner would. These models can generate questions, scenarios, and explanations that are context-
ually relevant to the specific subject matter and the individual student’s proficiency level (Michel-
Villarreal, Vilalta-Perdomo et al. 2023). The adaptability and natural language understanding of
generative AI make it an ideal candidate for creating dynamic and responsive assessment tools.
To create an adaptive assessment, it is essential to establish the student’s initial proficiency level
in the subject matter. This assessment can be based on previous academic records, diagnostic tests,
or an introductory set of questions. The AI model generates questions that are relevant to the subject,
aligning with the learning objectives and the student’s current proficiency level. As the student
progresses through the assessment, their answers inform the AI system about their current level of
understanding, as represented by the aspects in Table 6.2. The system then adapts the difficulty and nature of subsequent questions, ensuring they are challenging yet achievable. Adaptive assessments not only provide a final score but also offer detailed feedback to students. This feedback can include explanations for correct and incorrect answers, helping students learn from their mistakes.
TABLE 6.2
The design aspects along with their merits for adaptive learning systems
By understanding the strengths and weaknesses of a student, adaptive assessments enable the
creation of personalized learning pathways, whereas traditional assessments often induce test anxiety because students are unsure of what to expect. This precise evaluation can also guide educators in developing a targeted curriculum. Adaptive
assessments are not limited to a specific educational level. They can be applied from elementary
school to higher education and beyond. At the higher education level, adaptive assessments can be
used for placement exams, ensuring that students are appropriately placed in courses that match
their abilities (Pataranutaporn, Danry, et al. 2021). They can also help customize degree programs to
individual student goals. Beyond formal education, adaptive assessments can be applied in profes-
sional development and certification programs to evaluate the skills and knowledge of individuals
in various industries.
AI models used in adaptive assessments must be trained and tested rigorously to mitigate bias and
ensure fairness in evaluation. Bias in the AI models can result in inequities in assessment outcomes.
The use of technology in assessments may pose accessibility challenges for students with dis-
abilities. Ensuring that assessments are inclusive and accessible is a priority. The introduction of
adaptive assessments may change the role of teachers, who may need to adapt their teaching strat-
egies based on the assessment results. Adaptive assessments, powered by generative AI, represent a
significant step toward achieving the goals of personalized learning.
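One common way to implement the adaptation step described above is an Elo- or IRT-style update: the system maintains an ability estimate, selects the item whose difficulty best matches it, and revises the estimate after each response. The following Python sketch illustrates this idea under simplifying assumptions of our own (a tiny illustrative item bank and a fixed learning rate); it is not the algorithm of any specific assessment product.

```python
import math

def expected_correct(ability: float, difficulty: float) -> float:
    """Logistic (Elo/IRT-style) probability that the student answers correctly."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def update_ability(ability: float, difficulty: float, correct: bool, k: float = 0.4) -> float:
    """Shift the ability estimate toward the observed outcome."""
    return ability + k * ((1.0 if correct else 0.0) - expected_correct(ability, difficulty))

def next_item(ability: float, item_bank: dict) -> str:
    """Pick the unanswered item whose difficulty is closest to the current ability."""
    return min(item_bank, key=lambda item: abs(item_bank[item] - ability))

# Illustrative item bank: question id -> difficulty on the same scale as ability.
item_bank = {"q_easy": -1.0, "q_medium": 0.0, "q_hard": 1.2}
ability = 0.0  # neutral starting estimate from the introductory questions

for _ in range(3):
    item = next_item(ability, item_bank)
    correct = item != "q_hard"                 # placeholder for the student's real answer
    ability = update_ability(ability, item_bank.pop(item), correct)

print(f"final ability estimate: {ability:.2f}")
```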
6.4.3 Virtual Tutors
Generative AI can also act as virtual tutors, providing personalized assistance to students. These vir-
tual tutors can offer real-time support and guidance, answering questions and providing explanations
on a wide range of topics. Imagine a context where a student is finding it difficult to solve a com-
plex mathematical problem. They can interact with a virtual tutor powered by generative AI, which
can break down the problem step by step, provide relevant examples, and offer additional practice
problems. These virtual tutors can provide instant feedback, offer explanations, and adapt their
teaching style to suit the student’s individual needs (Rane, 2024). Moreover, virtual tutors can be
available 24/7, offering on-demand support. This can be particularly valuable for students who need
assistance outside of regular school hours or for adult learners balancing education with work and
family commitments.
Virtual tutors, powered by generative AI, are a game-changer in education. These digital
companions have the potential to revolutionize one-on-one and small group instruction. Unlike trad-
itional human tutors, virtual tutors are available 24/7 and can engage with students in a personalized
and adaptive manner. They can provide instant feedback, answer questions, and offer guidance.
Virtual tutors can analyze a student’s responses and behavior to adapt their teaching strategies
in real time. This adaptability ensures that students receive tailored support, fostering a deeper
understanding of the subject matter and a more engaging learning experience.
Virtual tutors can assess a student’s current knowledge and then curate content to match their learning needs. This
level of personalization is nearly impossible to achieve in a traditional classroom setting (Sharma,
2023). Virtual tutors are available around the clock, making learning accessible at any time. This
is especially advantageous for students who prefer self-learning. They offer instant feedback,
which is crucial for effective learning. Students can learn from their mistakes in real time and make
corrections to their understanding and performance.
Virtual tutors can be more effective when complemented with human oversight to address com-
plex questions and provide emotional support when needed. Virtual tutors powered by generative
AI hold immense potential to transform education by offering personalized, accessible, and scalable
learning experiences. While challenges exist, careful implementation and ongoing refinement can
make this technology a valuable addition to the educational landscape.
Virtual tutors are not limited to traditional school settings; they can support learners of all ages,
promoting lifelong learning and skill development. Virtual tutors can potentially reduce the cost of
education, as they don’t require physical infrastructure and can reach a large number of students
with minimal additional cost. As represented in Table 6.3, the future of virtual tutors is promising.
TABLE 6.3
The design issues in the development of virtual tutors along with aspects of their benefits
and limitations
As generative AI models continue to advance, virtual tutors will become even more sophisticated
and capable of understanding and responding to students’ needs.
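A minimal sketch of a ChatGPT-backed virtual tutor is shown below. It assumes the OpenAI Python client (openai >= 1.0) and an API key in the environment; the model name, the tutoring system prompt, and the proficiency label are illustrative choices rather than recommendations from this chapter.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tutor_reply(question: str, proficiency: str, history: list) -> str:
    """Ask the model to respond as a step-by-step tutor adapted to the student's level."""
    system_prompt = (
        "You are a patient mathematics tutor. The student is at a "
        f"{proficiency} level. Break problems into small steps, ask one guiding "
        "question at a time, and end with a short practice problem."
    )
    messages = [{"role": "system", "content": system_prompt}] + history + [
        {"role": "user", "content": question}
    ]
    # Model name is an illustrative assumption, not a requirement of the chapter.
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

history: list = []
question = "How do I solve 3x + 5 = 20?"
answer = tutor_reply(question, proficiency="beginner", history=history)
# Keep the exchange so later turns can adapt to what the student has already seen.
history += [{"role": "user", "content": question},
            {"role": "assistant", "content": answer}]
print(answer)
```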
6.4.4 Domain-Specific Learning
Domain-specific learning represents a shift from generic education to targeted, subject-specific
instruction. In traditional education, subjects are often presented in a generalized manner, with
little consideration for the unique demands of different fields of knowledge. Domain-specific
learning, however, focuses on tailoring educational content, resources, and experiences to the
specific requirements of a given domain, such as science, technology, engineering, mathematics
(STEM), the arts, or humanities (Su, Yang, et al. 2023). Domain-specific learning often requires
assessments tailored to the subject matter. Adaptive assessments help in providing a more com-
prehensive evaluation of a student’s knowledge in that specific domain. Domain-specific learning
requires the ability to adapt and update content as knowledge evolves. Generative AI can facili-
tate continuous improvement by tracking developments in a specific field and updating learning
materials accordingly.
In domain-specific learning, virtual tutors powered by generative AI can be designed to spe-
cialize in particular fields. These virtual tutors can offer expert guidance, answer domain-specific
questions, and provide targeted support in understanding complex concepts. While domain-specific
learning holds great promise, it is not without its challenges. Ethical considerations, data privacy,
and equity concerns are important factors to address. Additionally, technology must evolve to ensure
that domain-specific learning can be delivered in an inclusive, accessible, and effective manner.
Domain-specific learning represents a significant transformation in education, empowered by
generative AI. By customizing education to the unique demands of different fields of knowledge,
an effective learning path for each student can be developed. While challenges remain, the promise of
domain-specific learning is undeniable, and its impact on students’ educational journeys is poised to
be profound. As generative AI continues to shape the future of education, domain-specific learning
will play a pivotal role in revolutionizing learning experiences across various fields of knowledge.
6.4.5 Project-Based Learning
Project-based learning is a pedagogical approach that involves students working on real-world
projects to develop skills and gain knowledge through hands-on experiences. Generative AI can
play a significant role in enhancing project-based learning by providing personalized support and
resources to students throughout the process. Generative AI can design individualized learning
pathways for students, taking into account their project goals, prior knowledge, and learning
preferences.
Active engagement: Students are actively involved in their learning process, making it more
engaging and motivating.
Real-world relevance: Project-based learning often tackles real-world problems or scenarios, making learning more meaningful and relevant.
Interdisciplinary learning: Project-based learning encourages the integration of knowledge from
different subject areas, promoting a holistic understanding of topics.
Problem-solving and critical thinking: Students develop essential skills such as problem-solving,
critical thinking, and creativity as they work on complex projects.
Collaboration: Project-based learning often involves teamwork, fostering collaboration and com-
munication skills.
Ownership of learning: Students take ownership of their learning, becoming more independent
and self-directed.
The integration of generative AI into project-based learning can make education more engaging, adaptive, and outcome-based. By providing students with
personalized support and resources, AI can help them develop critical skills, explore real-world
problems, and achieve meaningful learning outcomes. This approach can benefit both students and
educators and pave the way for a promising future for the academic community.
6.5 EXPERIMENTAL RESULTS
In this section, we present experimental results that highlight the effectiveness of generative AI in
personalized learning. We conducted a study using ChatGPT, a widely known generative AI model,
in various educational scenarios.
Adaptive content generation: Generative AI can be used to create adaptive learning materials,
such as textbooks, quizzes, and multimedia content. The system can analyze and address the
learning gaps in the students’ performance (Yan, Martinez-Maldonado, 2024).
Personalized feedback: Generative AI algorithms can analyze student work and provide
personalized feedback. This feedback can be tailored to address specific misconceptions or
errors, helping students to understand and correct their mistakes.
Predictive analytics: Generative AI can analyze large datasets to predict students’ future per-
formance and identify potential challenges. This enables proactive interventions, such as
providing additional resources or support to learners who are struggling to keep up with the regular learning pace.
Collaborative learning spaces: AI can facilitate collaborative learning by forming groups of students with complementary skills and learning styles. Generative algorithms can optimize these groupings to enhance peer-to-peer learning experiences (a simple grouping sketch follows this list).
Natural language processing (NLP) for personalized assistance: NLP-powered chatbots or virtual
assistants can offer personalized assistance to students. These systems can understand natural
language queries, provide answers to specific questions, and offer guidance on study strategies,
resources, and more.
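The grouping sketch referred to above is shown here. It uses a simple greedy "snake" heuristic over a single aggregate skill score, which is our own illustrative simplification; a real system would work with multi-dimensional learner profiles and additional constraints.

```python
def form_groups(skill_scores: dict, group_size: int = 3) -> list:
    """Greedy heuristic: deal ranked students out in snake order so groups mix skill levels.

    skill_scores maps a student id to one aggregate skill score (illustrative only).
    """
    ranked = sorted(skill_scores, key=skill_scores.get, reverse=True)
    n_groups = max(1, len(ranked) // group_size)
    groups = [[] for _ in range(n_groups)]
    direction, g = 1, 0
    for student in ranked:
        groups[g].append(student)
        # Reverse direction at either end so the strongest and weakest are spread out.
        if direction == 1 and g == n_groups - 1:
            direction = -1
        elif direction == -1 and g == 0:
            direction = 1
        else:
            g += direction
    return groups

# Illustrative data only.
scores = {"A": 0.9, "B": 0.8, "C": 0.7, "D": 0.5, "E": 0.4, "F": 0.2}
print(form_groups(scores, group_size=3))   # e.g. [['A', 'D', 'E'], ['B', 'C', 'F']]
```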
Limitations: While personalized learning pathways in higher education using generative AI
offer significant benefits, there are also certain limitations and challenges that need to be
considered.
Data privacy and security: The collection and analysis of large amounts of personal data raise
concerns about privacy and security. Educational institutions must establish robust protocols to
protect student data and ensure compliance with privacy regulations.
Bias in algorithms: Generative AI models can inadvertently perpetuate or amplify biases present
in the training data. This bias can lead to unfair or inequitable personalized recommendations,
disadvantaging certain groups of students. Careful attention must be paid to bias detection and
mitigation strategies.
Lack of human element: While AI can offer personalized recommendations, it may lack the human
touch and understanding that educators provide. Personalized learning should be a complement
to, not a replacement for, human interaction and mentorship.
Overreliance on technology: Excessive reliance on AI for personalized learning may lead to a
reduction in student–teacher interaction and a potential loss of the social and collaborative
aspects of learning.
Resistance to change: Implementing personalized learning pathways may face resistance from
educators, administrators, or students who are accustomed to traditional teaching methods.
Effective communication and training are essential to overcome this resistance.
Algorithmic transparency: Understanding how AI algorithms arrive at specific recommendations
or decisions can be challenging. Lack of transparency in these algorithms may lead to a lack of
trust among educators, students, and other stakeholders.
Resource intensity: Developing and maintaining effective generative AI systems for personalized
learning can be resource-intensive. Institutions need to invest in infrastructure, training, and
ongoing support to ensure the successful implementation of these technologies.
Dynamic learning environments: Higher education is a dynamic environment, and student needs
can change rapidly. AI systems may struggle to keep up with evolving requirements and
preferences, leading to recommendations that are not always aligned with the current context.
Equity concerns: There is a risk that personalized learning may exacerbate existing inequalities if
certain groups of students do not have equal access to technology or if the algorithms are not
designed to address diverse learning needs and styles.
Complexity of learning outcomes: Some learning outcomes, especially those related to creativity,
critical thinking, and complex problem-solving, may be challenging for AI systems to accur-
ately assess and address in a personalized manner.
Personalized learning experiences: By customizing assessments, the analytical power of generative AI can be used to design robust learning paths for students, increasing engagement and understanding in the subjects concerned.
Adaptive content delivery: Generative AI algorithms can adjust the pace and difficulty of study materials based on students’ progress. This adaptive content delivery helps prevent boredom or frustration by keeping students challenged at an appropriate level.
Efficient resource allocation: AI systems can analyze data on student performance and engagement to identify areas of strength and weakness. This information allows educators to allocate resources effectively, focusing on topics where students need more support.
Predictive analytics for intervention: AI can predict potential challenges or areas where students may struggle in the future. Educators can use this information to intervene proactively, offering additional support, resources, or guidance to prevent learning gaps from widening (a minimal predictive sketch follows this list).
Time management assistance: AI algorithms can analyze how students manage their time
within the online learning environment. This information can be used to provide personalized
recommendations on effective time management strategies, helping students balance their
workload more efficiently.
Enhanced engagement and motivation: Personalized learning pathways and content increase stu-
dent engagement by aligning educational materials with their interests and learning styles. This
customization can boost motivation and foster a positive attitude toward learning.
Continuous professional development for educators: AI can support educators by providing
insights into teaching methods that are most effective for different student profiles. This data-
driven approach enables continuous improvement for instructors, helping them refine their
teaching strategies.
Access to a wealth of educational resources: Generative AI can curate and recommend a diverse
range of educational resources, including articles, videos, and interactive simulations, tailored
to each student’s learning needs. This ensures that students have access to a rich set of materials that complement their learning objectives.
Scalability and accessibility: AI-powered online learning systems can be scaled to accommo-
date a large number of students while maintaining personalized experiences. This scalability
contributes to greater accessibility, allowing more individuals to access higher education
regardless of geographical location.
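The predictive-analytics sketch referred to in the list above is given below. It assumes scikit-learn is available, and the features and the tiny training set are synthetic and purely illustrative; the point is only to show how a predicted probability of success could trigger proactive outreach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per student: [quiz average, hours active per week, share of assignments submitted].
X_train = np.array([
    [0.9, 6.0, 1.0],
    [0.8, 5.0, 0.9],
    [0.4, 1.5, 0.5],
    [0.3, 1.0, 0.4],
    [0.7, 4.0, 0.8],
    [0.2, 0.5, 0.3],
])
# Label: 1 = completed the course successfully, 0 = struggled or dropped out.
y_train = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Current cohort (illustrative). A success probability below 0.5 triggers outreach.
cohort = {"student_17": [0.35, 1.2, 0.45], "student_42": [0.85, 5.5, 0.95]}
for student, features in cohort.items():
    p_success = model.predict_proba(np.array([features]))[0, 1]
    if p_success < 0.5:
        print(f"{student}: predicted success {p_success:.2f} -> offer additional support")
    else:
        print(f"{student}: predicted success {p_success:.2f}")
```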
Limitations: Implementing continuous improvement in online classes in higher education using
generative AI can face certain limitations. It’s important to be aware of these challenges to
effectively navigate them.
Data quality and bias: Generative AI relies heavily on data for training. If the training data is
biased or incomplete, the AI system may perpetuate and amplify these biases. This can result
in personalized recommendations that are unfair or unequal, especially if certain demographic
groups are underrepresented or misrepresented in the data.
Limited personalization: Despite advancements in generative AI, there may be limitations in truly
understanding the nuances of individual learning styles, preferences, and motivations. The AI
system may sometimes struggle to provide effective personalization because it has only an incomplete picture of the student’s needs.
Adaptability to dynamic learning environments: Online classes often operate in dynamic and
evolving environments. Generative AI systems may face challenges in adapting quickly to
changes in curriculum, teaching methods, or emerging educational trends, potentially lagging
behind the evolving needs of students.
Resource intensity and accessibility: Implementing and maintaining generative AI systems can be
resource-intensive, and not all educational institutions may have the necessary resources. This
can result in uneven access to the benefits of AI-driven continuous improvement, potentially
exacerbating educational inequalities.
Resistance to technology: Some educators or students may resist the integration of generative AI
in the learning process. Resistance can be due to concerns about job displacement, a prefer-
ence for traditional teaching methods, or a lack of confidence in technology-driven educational
solutions.
Technical challenges: Generative AI systems may encounter technical challenges such as errors,
biases, or misinterpretations. Technical issues can disrupt the learning experience and impact
the effectiveness of continuous improvement efforts.
Integration with existing systems: Integrating generative AI into existing online learning platforms
and systems can be complex. Compatibility issues, interoperability, and the need for seamless
integration may pose challenges for institutions.
Student engagement: Despite personalized recommendations, maintaining high levels of student
engagement in online classes remains a challenge. Generative AI should complement strategies
that actively involve students in the learning process to enhance motivation and commitment.
To overcome these limitations, it’s crucial to adopt a holistic and thoughtful approach. This involves
addressing technical challenges, ensuring ethical and responsible AI use, promoting inclusivity in
training data, providing ongoing training for educators, and actively involving stakeholders in the
decision-making process. Continuous monitoring, evaluation, and iterative improvements are key
components of a successful implementation of generative AI for continuous improvement in online
higher education.
chat.openai.com: ChatGPT can serve as a virtual tutor, providing students with on-demand
assistance for various subjects. It can help with homework, answer questions, and offer
explanations or examples to reinforce learning. Students and researchers can use ChatGPT to
get quick answers to factual questions, find relevant sources, and generate ideas for research
projects or papers. ChatGPT can help students practice and improve their language skills by
engaging in conversation, providing language translation, or generating language exercises. It
can be utilized to make educational materials more accessible for students with disabilities.
It can convert text to speech, provide alternative text for images, and assist with reading and
comprehension. Higher education institutions can use ChatGPT to offer virtual campus tours,
allowing prospective students to explore the campus and get answers to their questions about
facilities and programs.
beautiful.ai: Beautiful.ai is cloud-based presentation software that allows users to create visu-
ally appealing and well-designed presentations easily. Students can use Beautiful.ai to create
engaging and professional presentations for class assignments and projects. It can help them
structure their content and design slides that are visually appealing and easy to understand.
Educators can use Beautiful.ai to create engaging lecture materials and presentations for their
courses. This can enhance the learning experience by making the content more engaging and
accessible. Universities can use Beautiful.ai to develop training materials and workshops for fac-
ulty and staff. This can help in delivering effective and engaging training sessions. Researchers
and scholars can use Beautiful.ai to create presentation materials for academic conferences,
research seminars, and symposiums, ensuring that their research is communicated effectively.
chatpdf.com: ChatPDF lets users converse with documents such as books, research papers, manuals, essays, and legal contracts, and is marketed as letting readers work through a 500-page book in about 15 minutes. It is a platform that allows users
to organize and categorize PDFs, which can help educators and students keep their course
materials and research documents well-structured. Institutions can use such a platform to dis-
tribute e-books and course materials in PDF format to students. For academic research, a PDF
platform can serve as a repository for journals, papers, and scholarly articles. Researchers can
easily access and download relevant materials. Such a platform might offer security features
like password protection or encryption for sensitive educational documents.
snyk.io: Snyk detects vulnerabilities in software code and dependencies through automated analysis, allowing developers to identify and resolve them effectively. A broader educational platform in this space might also offer tools for students and instructors to collaborate on group projects, communicate through messaging or forums, and provide a space for announcements and updates. Such a tool may integrate with other educational software, such as learning management systems (LMS), student information systems, or video conferencing tools; it could be used for live online classes, webinars, or interactive discussions, allowing instructors to deliver lectures and engage with students in real time, and could offer features for hosting pre-recorded lectures, sharing course materials, and facilitating discussions at times convenient for learners.
6.7 CONCLUSION
Generative AI, such as GPT-3 and its successors, offers tremendous potential in transforming edu-
cation into a personalized, adaptive, and engaging experience. The experimental results from our
case study indicate that generative AI can considerably enhance the learning experience. As modern technology develops at an unprecedented rate, we can anticipate a future where gen-
erative AI is a fundamental component of the academic community, enabling students to achieve
their full potential through personalized education. The integration of generative AI in education
is a promising direction that can help unlock the full potential of every student. In conclusion, the
integration of generative AI into education offers a new era of personalized learning. The academic
community, from schools to higher education institutions, must embrace this transformative force
and adapt to its challenges and opportunities. AI can provide valuable support for online teaching and learning processes. By harnessing AI’s potential for customized education, we can empower students to reach their full potential and address individual needs, thereby enhancing learning outcomes. This exciting journey into the world of AI-powered personalized learning holds
immense promise for the future of education. It is a journey worth undertaking, and one that can lead
to a brighter, more inclusive, and more effective educational future for all.
REFERENCES
Alasadi, E.A. and Baiz, C.R., 2023. Generative AI in education and research: Opportunities, concerns, and
solutions. Journal of Chemical Education, 100(8), pp.2965–2971.
Bahroun, Z., Anane, C., Ahmed, V. and Zacca, A., 2023. Transforming education: A comprehensive review
of generative artificial intelligence in educational settings through bibliometric and content analysis.
Sustainability, 15(17), p.12983.
Bozkurt, A., Junhong, X., Lambert, S., Pazurek, A., Crompton, H., Koseoglu, S., Farrow, R., Bond, M.,
Nerantzi, C., Honeychurch, S. and Bali, M., 2023. Speculative futures on ChatGPT and generative artifi-
cial intelligence (AI): A collective reflection from the educational landscape. Asian Journal of Distance
Education, 18(1), pp.53–130.
Dai, Y., Liu, A. and Lim, C.P., 2023. Reconceptualizing ChatGPT and generative AI as a student-driven innov-
ation in higher education. Procedia CIRP, 119, pp.84–90.
Kadaruddin, K., 2023. Empowering education through generative AI: Innovative instructional strategies for
tomorrow’s learners. International Journal of Business, Law, and Education, 4(2), pp.618–625.
Li, H., Xu, T., Zhang, C., Chen, E., Liang, J., Fan, X., Li, H., Tang, J. and Wen, Q., 2024. Bringing generative
AI to adaptive learning in education. arXiv preprint arXiv:2402.14601.
Matz, S.C., Teeny, J.D., Vaid, S.S., Peters, H., Harari, G.M. and Cerf, M., 2024. The potential of generative AI
for personalized persuasion at scale. Scientific Reports, 14(1), p.4692.
Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D.E., Thierry-Aguilera, R. and Gerardou, F.S.,
2023. Challenges and opportunities of generative AI for higher education as explained by ChatGPT.
Education Sciences, 13(9), p.856.
Pataranutaporn, P., Danry, V., Leong, J., Punpongsanon, P., Novy, D., Maes, P. and Sra, M., 2021. AI-generated
characters for supporting personalized learning and well-being. Nature Machine Intelligence, 3(12),
pp.1013–1022.
Rane, N., 2024. Enhancing the quality of teaching and learning through Gemini, ChatGPT, and similar genera-
tive artificial intelligence: Challenges, future prospects, and ethical considerations in education. TESOL
and Technology Studies, 5(1), pp.1–6.
Sharma, R.C., 2023. Exploring generative AI’s influence on tailored learning. Education@ETMA, 2(1), pp.i–iii.
Štuikys, V. and Burbaitė, R., 2024. Personal generative libraries for personalised learning: A case study. In
Evolution of STEM-Driven Computer Science Education: The Perspective of Big Concepts (pp. 109–
134). Cham: Springer Nature Switzerland.
Su, J. and Yang, W., 2023. Unlocking the power of ChatGPT: A framework for applying generative AI in edu-
cation. ECNU Review of Education, 6(3), pp.355–366.
Yan, L., Martinez-Maldonado, R. and Gasevic, D., 2024. Generative artificial intelligence in learning
analytics: Contextualising opportunities and challenges through the learning analytics cycle. In
Proceedings of the 14th Learning Analytics and Knowledge Conference (pp. 101–111).
Yelamarthi, K., Dandu, R., Rao, M., Yanambaka, V.P. and Mahajan, S., 2024. Exploring the potential of gen-
erative AI in shaping engineering education: Opportunities and challenges. Journal of Engineering
Education Transformations, 37(Special Issue 2), pp.439–445.
Zhou, H. and Zhou, D., 2024. Transformation of vocational education based on generative artificial intelligence: Impact, opportunity and countermeasures. In Proceedings of the 3rd International Conference on Internet Technology and Educational Informatization (ITEI 2023), November 24–26, 2023, Zhengzhou, China.
7.1 INTRODUCTION
7.1.1 Background and Significance of the Study
The integration of technology in higher education has been on the rise, and this trend is expected to
continue in the future. “The learning landscape is a restless place. Constantly shifting and resettling,
erupting, changing, evolving” (Locker, 2009, p. 139). The rapid growth of the incorporation of artificial intelligence (AI) in education (AIEd) has been noted by a significant body of research (Alam &
Mohanty, 2022). AI is spreading rapidly across various fields due to technological advancement and
innovation. “AI refers to the ability of machines to perform tasks that are similar to those performed
by the human brain, such as solving mathematical equations” (Prajapati et al., 2024, p. 2). Recently,
AI has been changing the learning landscape of higher education, impacting teaching, learning, and
research experiences. AIEd is transforming the learning environment, and in turn, the learning envir-
onment is shaping learners, creating a mutual influence that is highly unpredictable. Studies (Alam
& Mohanty, 2022) have been exploring the implications of AI in education, as adaptive learning,
smart campus, tutoring robots, virtual assistants, or generative AI models such as chatbots are being
used in or outside classrooms. In response to barriers and challenges to learning, technology is usu-
ally perceived as a promising solution (Wang, 2023). The need to adopt and adapt to technological
developments has been widely recognized and voiced. Hannan and Liu (2023) conducted a literature review which concluded that AI can be beneficially applied to improving student learning experiences, providing educational support for learners, and improving the enrolment process. Celik
(2023) emphasized the impact of technology on higher education, which has opened up new opportunities for both instructors and students. Educational resources have become more easily accessible, expanding the scope of higher education beyond traditional classroom environments (Rawas, 2023). For example,
online platforms and digital tools allow educators to engage with a wider audience and tailor their
teaching methods to different learning styles. Moreover, students can access course materials
remotely, enabling them to study at their own pace and overcome geographical barriers to education.
7.1.2 Methodology
This chapter’s methodology is based on a literature review aiming to explore the integration of AI
tools, particularly ChatGPT, in higher education. It examines studies, reports, and articles related to
personalized learning, inclusivity, and the role of AI in addressing diverse learning needs. Through
critical analysis, the chapter evaluates the features, limitations, and challenges associated with the
identified works. Following this assessment, the study conducts a comparative analysis of existing
implementations of ChatGPT in educational settings, categorizing them based on various cri-
teria. This comparative study provides a comprehensive understanding of the current landscape of
ChatGPT integration for personalized learning in higher education. Building on this knowledge,
the chapter explores emerging trends, unresolved issues, and potential future research directions,
aiming to contribute to the advancement of AI-driven personalized learning in higher education.
7.1.3 Chapter Organization
The rest of the chapter is organized as follows: Section 7.2 presents a comparison of natural language processing (NLP) and AI, aiming to understand the functionality of NLP systems and to explore how AI systems process speech. Section 7.3 discusses the role of the Chat Generative Pre-trained Transformer (ChatGPT) in education, while Section 7.4 addresses challenges and opportunities in implementing ChatGPT for education. Sections 7.5 and 7.6 describe the role of ChatGPT
in enhancing inclusivity, focusing on its impact on diverse learning needs within higher education,
while Section 7.7 concludes the chapter and provides future directions for research.
TABLE 7.1
Classification of artificial intelligence
based on the meaning of the perceived phrase, context, and, potentially, some subjective experiences
(Skinner, 1986). NLP algorithms work on similar principles. The perception process implies coding
the incoming information into a machine-readable set of symbols. While generating a reaction
(answer, response), it is the result of a decision from a set of possible answers based on weighing
alternatives and comparing the results with each other (Taye, 2023). However, understanding is a
more complex matter, which should be considered separately.
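To make the idea of coding incoming information into a machine-readable set of symbols concrete, the minimal Python sketch below tokenizes a sentence with the open-source tiktoken library; the encoding name and example sentence are illustrative choices and not part of the original discussion. LLM-based NLP systems operate on such integer token sequences rather than on raw characters.

    import tiktoken  # open-source tokenizer library used with OpenAI-style models

    # Load a byte-pair-encoding tokenizer; "cl100k_base" is one published encoding.
    enc = tiktoken.get_encoding("cl100k_base")

    sentence = "Personalized learning adapts content to each student."
    tokens = enc.encode(sentence)          # text -> machine-readable integer symbols
    print(tokens)                          # a list of integer token IDs
    print(enc.decode(tokens) == sentence)  # decoding recovers the original text -> True

The same encode/decode cycle underlies how a model "perceives" a prompt and how its chosen output symbols are turned back into readable text.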
Fryer et al. (2017), for example, compared chatbot and human task partners and examined their effects on students’ engagement and learning outcomes, revealing a significant decrease in students’ task interest when interacting with a chatbot compared to a human partner.
AI technologies are becoming the subject of active discussion among participants in the edu-
cational process as an effective tool for customizing education in universities. Many technological
innovations undergo what is commonly referred to as “Gartner’s hype cycles”, distinct stages in
their market development. Despite initial optimistic forecasts, a considerable number of innovations
may ultimately fall short of realizing their full development potential (Linden & Fenn, 2003). In
the context of higher education, fostering open and collaborative discussions becomes paramount
for optimizing the integration of technology into teaching and learning. This participatory approach
ensures that the diverse needs and perspectives of stakeholders, including educators and students,
are considered, thereby enhancing the effectiveness of technology-enabled educational initiatives.
Understanding the diverse backgrounds and life experiences of adult learners is crucial in
designing educational programs that effectively cater to their unique circumstances and enhance
their overall learning experience.
Personalization enables the integration of students as active participants in the educational pro-
cess by considering diverse student needs, including different learning abilities and backgrounds,
and implementing inclusive strategies to ensure equal access to resources and opportunities. With
personalized learning, educational modules become highly adapted to each student. In Table 7.2,
key features of a personalized learning approach in higher education are presented, emphasizing
the importance of adaptability, technological integration, data-driven decision-making, and the
strengthening of collaboration and student autonomy in learning.
Universities are actively seeking innovative ways to customize learning content, educational
technologies, and academic and career support to match the group and individual educational needs
of their students. Personalization is a pervasive trend that extends across every facet of the student
experience (Grant & Basye, 2014). Personalized learning in higher education means crafting an individual educational path, developed by the student in close collaboration with a mentor, which allows for a dynamic and responsive educational experience that adapts to the evolving needs of each learner.
In a personalized learning environment, the role of educators extends beyond content delivery.
Mentors play a crucial role in understanding the strengths, challenges, and motivations of each
student (Colvin & Ashman, 2010; Lankau & Scandura, 2002). This deeper understanding enables
mentors to provide targeted guidance, support, and resources, fostering not only academic growth
TABLE 7.2
Key features of personalized learning approach in higher education
Flexibility in learning paths • Customizing learning paths based on specific educational needs
• Customizing training format (classroom, mixed, remote)
• Integrating various learning styles
• Providing choices of academic disciplines and content within those disciplines
Adaptive content and resources • Customizing educational materials to match students’ proficiency levels and
learning styles
• Integrating multimedia, interactive content, and real-world examples
• Selecting the complexity level and duration of academic subjects
• Having the opportunity to study disciplines individually (not on a fixed schedule)
• Free access to educational materials at any time
Technology integration • Utilizing adaptive learning platforms and technologies
• Incorporating online resources, simulations, and virtual environments
Data-driven decision-making • Collecting and analyzing student performance data for continuous improvement
• Implementation of predictive analytics to identify areas for intervention
Collaborative learning opportunities • Encouraging self-directed learning and goal setting
• Providing choices in assignments, projects, and assessment methods
• Integrating collaborative projects and group activities
• Using online forums and social learning platforms to facilitate peer interaction
• Having a personal mentor (tutor)
Feedback and assessment • Providing regular and timely feedback on student progress
• Implementing formative assessments to guide ongoing learning
Faculty support and training • Providing professional development for educators to offer personalized
learning paths
• Ensuring support systems for faculty to implement and manage personalized
learning effectively
but also the development of critical skills such as problem-solving, creativity, and self-directed
learning (Benbow et al., 2021).
As personalized learning gains prominence in higher education, it challenges institutions to
rethink traditional structures and embrace innovation. Teachers play a crucial role, not only in
sharing fundamental knowledge but also in cultivating students’ abilities to learn continuously, pro-
moting the concept of lifelong learning (Baran et al., 2011).
Summing up the impact of personalized learning, it is important to highlight its positive effects
on students’ educational and professional development. Increased flexibility in selecting subjects,
reduced student-to-teacher ratios, and heightened availability of extracurricular activities collect-
ively contribute to increasing both student satisfaction levels and overall academic success. The
opportunity for subject choices empowers students to tailor their academic path, aligning it with
their interests and career aspirations. With smaller student-to-teacher ratios, learners benefit from
more personalized attention, fostering a beneficial environment for meaningful engagement and
individualized support. Additionally, extracurricular activities, including various internships,
research projects, and practical application of knowledge, enhance the holistic educational experi-
ence, providing students with opportunities for personal growth and competence development.
• Accessible Learning Materials: ChatGPT can convert text into alternative formats such as
audio or braille, ensuring accessibility for students with visual impairments.
• Personalized Learning Paths: ChatGPT can adapt learning materials and provide customized
support based on the individual learning needs and preferences of students with disabilities.
• Assistive Communication: ChatGPT can serve as an assistive communication tool for students
with speech or language disabilities, aiding in verbal interactions and expression.
• Simplified Instructions: ChatGPT can simplify instructions and assignments, making them
more understandable for students with cognitive or learning disabilities.
• Academic Support: ChatGPT can offer academic assistance and tutoring tailored to the spe-
cific challenges faced by students with disabilities, promoting academic success.
• Inclusive Classroom Participation: ChatGPT can facilitate participation in classroom
discussions and group activities, ensuring inclusivity for students with disabilities.
FIGURE 7.2 Opportunities provided by ChatGPT in higher education for students with disabilities.
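As a concrete illustration of the “Simplified Instructions” opportunity listed above, the following minimal Python sketch uses the OpenAI Chat Completions API to rewrite an assignment brief in plain language. The model name, prompt wording, and reading level are illustrative assumptions rather than a prescribed implementation, and the API key is expected in the OPENAI_API_KEY environment variable.

    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    assignment = (
        "Compose a critical essay interrogating the epistemological assumptions "
        "underpinning contemporary assessment paradigms in higher education."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "Rewrite assignment instructions in plain language at roughly "
                        "a sixth-grade reading level, keeping all requirements intact."},
            {"role": "user", "content": assignment},
        ],
    )

    print(response.choices[0].message.content)  # simplified version of the instructions

The same pattern can be adapted to the other opportunities above, for example by changing the system message to adjust tone, format, or level of detail for a particular learner.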
• Use of ChatGPT in assisting the creation of chatbots aimed at aiding foreign language learning,
providing students with a platform to enhance their speaking skills and acquire new lexical
units (Hong, 2023; Mukarto, 2023).
• Efficient tool for automatic text summarization, enabling students to effectively summarize
lengthy texts. This is particularly beneficial for students preparing for exams or engaging in
pilot studies, as well as for students with specific learning disorders who struggle with reading
and comprehending long texts, which are common in university study programs (Kasneci
et al., 2023).
• Useful tools for assisting students in essay or thesis writing (Fitria, 2023; Tira Nur, 2023).
These language models can generate topics, ideas, plans, and even complete essays based on
user input. For students, ChatGPT addresses the common challenge of facing “blank pages”
and provides assistance in initiating coursework and dissertation text.
• “Study buddy” function: ChatGPT can be utilized to create a comprehensive system for
answering questions related to the course of study, helping students prepare for exams and
tests (Fulk et al., 2022). This feature is especially beneficial for students with disabilities, as
it allows them to review the study material at their own pace, without feeling pressured or
rushed.
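Building on the “study buddy” idea in the last item above, the sketch below shows one possible way to wrap the OpenAI Chat Completions API in a small question-answering loop that keeps the conversation history and grounds answers in pasted course notes. The model name, system prompt, and placeholder course notes are illustrative assumptions, not the implementation described by the cited authors.

    from openai import OpenAI

    client = OpenAI()  # API key taken from the OPENAI_API_KEY environment variable

    course_notes = "Paste or load the course material the study buddy should rely on here."

    # Conversation history; the system message pins the assistant to the course notes.
    history = [{
        "role": "system",
        "content": "You are a patient study buddy. Answer exam-style questions using "
                   "only the following course notes, and say so when the notes do not "
                   f"cover a question.\n\nCourse notes:\n{course_notes}",
    }]

    while True:
        question = input("Ask a question (or press Enter to stop): ").strip()
        if not question:
            break
        history.append({"role": "user", "content": question})
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})  # keep context for follow-ups
        print(answer)

Keeping the full history in each request is what lets students ask follow-up questions at their own pace without restating earlier context.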
Kapustina et al. (2023) outlined various other applications of ChatGPT in education, including:
• practicing critical thinking, which involves systematically evaluating the content and rele-
vance of provided answers;
• supporting the preparation of individualized training by analyzing tasks and proposing adapted
training materials;
• assisting in lesson planning by aiding in the drafting of educational plans and inclusive
activities.
However, it is essential to acknowledge certain limitations when utilizing chatbots. Firstly, ChatGPT
is constrained by a lack of information on events and phenomena post-2021. Moreover, caution
is warranted when employing ChatGPT in educational settings due to the potential generation of
inappropriate content. Like many AI technologies, ChatGPT is a product of the data on which it
is trained. Consequently, there exists a risk of generating responses that may be inappropriate or
offensive when interacting with students. If the training data for ChatGPT carries biases, such as
stereotypes, or reflects a particular historical narrative, these may inadvertently be perpetuated in
its responses.
In the pursuit of integrating AI into the educational process, university professors can use ChatGPT to create class content by generating educational materials (Rezaev & Tregubova,
2023). Rather than investing substantial time in crafting content from scratch, ChatGPT can initiate
the process by generating a preliminary project, which can then be refined and edited by professors
along with their students. Another use of ChatGPT is in assisting teachers in the organization of
exams and tests by providing support in formulating and screening questions.
• generate a large number of discussion topics that encourage students to be critical and problem
solvers;
• design quizzes and assessments, providing instant feedback tailored to the needs and goals of
each student, based on past learning behavior;
• design differentiated tasks adapted to the needs and abilities of each student;
• create educational activities tailored to the needs and abilities of each student;
ChatGPT is an efficient tool for studying and personalizing learning. Teachers and students can
co-create educational programs adapted to their level of knowledge and needs. Furthermore, an
AI-based assistant is able to collect and analyze the student’s answers and to identify and explain errors in detail, which contributes to a deeper understanding of the study material. Its popularity can also be explained by a gamified approach: ChatGPT can be prompted to use game elements to create fun and interesting learning experiences that keep students motivated and engaged.
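As one possible concrete reading of the quiz, feedback, and differentiated-task uses described above, the sketch below asks the model to return a short quiz as JSON so that answers and feedback can be checked and reused programmatically. The model name, JSON schema, topic, and level are illustrative assumptions, and the code assumes the model follows the requested schema.

    import json
    from openai import OpenAI

    client = OpenAI()

    topic = "photosynthesis"   # illustrative topic
    level = "beginner"         # could be derived from past learning behaviour

    response = client.chat.completions.create(
        model="gpt-4o-mini",                       # illustrative model name
        response_format={"type": "json_object"},   # request machine-readable output
        messages=[
            {"role": "system",
             "content": "Create quizzes as JSON with a 'questions' list; each item has "
                        "'question', 'options', 'answer', and 'feedback' fields."},
            {"role": "user",
             "content": f"Write three {level}-level multiple-choice questions on {topic}."},
        ],
    )

    quiz = json.loads(response.choices[0].message.content)  # assumes the schema was followed
    for item in quiz["questions"]:
        print(item["question"], item["options"], sep="\n")

Structured output of this kind makes it easier for a teacher to review the generated questions, attach them to a learning platform, or deliver the stored feedback automatically after a student responds.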
What ethical considerations need to be addressed when integrating AI into higher education? Time
will determine whether ChatGPT and other AI technologies are part of the solution or part of the
problem.
Ultimately, the future of chatbots like ChatGPT in education remains uncertain and will depend
on how these tools are integrated into educational practices and the extent to which they can
effectively enhance student learning outcomes. Despite its practical implications in education and
research, ChatGPT has serious limitations; these include challenges in answering questions with
specific wording and a lack of quality control, which may result in erroneous responses, as noted
by Strzelecki (2023). Continued research and evaluation will be essential to determine whether AI
technologies like ChatGPT are indeed the solution to educational challenges or pose new problems
that need to be addressed.
REFERENCES
Alam, A., & Mohanty, A. (2022). Foundation for the Future of Higher Education or ‘Misplaced Optimism’?
Being Human in the Age of Artificial Intelligence (pp. 17–29). https://doi.org/10.1007/978-3-031-
23233-6_2
Alt, D. (2023). Precursors of online teaching practices: innovative behavior and personal accountability of
faculty in higher education. Journal of Computing in Higher Education. https://doi.org/10.1007/s12
528-023-09387-w
Baran, E., Correia, A. P., & Thompson, A. (2011). Transforming online teaching practice: critical analysis of
the literature on the roles and competencies of online teachers. Distance Education, 32(3). https://doi.
org/10.1080/01587919.2011.610293
Benbow, R. J., Lee, C., & Hora, M. T. (2021). Exploring college faculty development in 21st-century skill
instruction: an analysis of teaching-focused personal networks. Journal of Further and Higher Education,
45(6). https://doi.org/10.1080/0309877X.2020.1826032
Celik, I. (2023). Towards Intelligent-TPACK: an empirical study on teachers’ professional knowledge to ethic-
ally integrate artificial intelligence (AI)-based tools into education. Computers in Human Behavior, 138,
107468. https://doi.org/10.1016/j.chb.2022.107468
Chaka, C. (2022). Fourth industrial revolution—a review of applications, prospects, and challenges for arti-
ficial intelligence, robotics and blockchain in higher education. Research and Practice in Technology
Enhanced Learning, 18, 002. https://doi.org/10.58459/rptel.2023.18002
Colvin, J. W., & Ashman, M. (2010). Roles, risks, and benefits of peer mentoring relationships in higher edu-
cation. Mentoring and Tutoring: Partnership in Learning, 18(2). https://doi.org/10.1080/1361126100
3678879
Dempere, J., Modugu, K., Hesham, A., & Ramasamy, L. K. (2023). The impact of ChatGPT on higher educa-
tion. Frontiers in Education, 8. https://doi.org/10.3389/feduc.2023.1206936
Elbanna, S., & Armstrong, L. (2024). Exploring the integration of ChatGPT in education: adapting for
the future. Management & Sustainability: An Arab Review, 3(1), 16–29. https://doi.org/10.1108/
MSAR-03-2023-0016
Fitria, T. N. (2023). Artificial intelligence (AI) technology in OpenAI ChatGPT application: a review of
ChatGPT in writing English essay. ELT Forum: Journal of English Language Teaching, 12(1). https://
doi.org/10.15294/elt.v12i1.64069
Fryer, L. K., Ainley, M., Thompson, A., Gibson, A., & Sherlock, Z. (2017). Stimulating and sustaining interest
in a language course: an experimental comparison of Chatbot and Human task partners. Computers in
Human Behavior, 75. https://doi.org/10.1016/j.chb.2017.05.045
Fulk, H. K., Dent, H. L., Kapakos, W. A., & White, B. J. (2022). Doing more with less: using AI-based Big
Interview to combine exam preparation and interview practice. Issues in Information Systems, 23(4).
https://doi.org/10.48009/4_iis_2022_118
Gill, S. S., Xu, M., Patros, P., Wu, H., Kaur, R., Kaur, K., ... & Buyya, R. (2024). Transformative effects of
ChatGPT on modern education: emerging Era of AI Chatbots. Internet of Things and Cyber-Physical
Systems, 4, 19–23. https://doi.org/10.1016/j.iotcps.2023.06.002
Grant, P., & Basye, D. (2014). Personalized learning. A guide for engaging students with technology. Journal
of Reading, 31(1), 26–47.
Hannan, E., & Liu, S. (2023). AI: new source of competitiveness in higher education. Competitiveness
Review: An International Business Journal, 33(2), 265–279. https://doi.org/10.1108/CR-03-2021-0045
Hong, W. C. H. (2023). The impact of ChatGPT on foreign language teaching and learning: opportunities
in education and research. Journal of Educational Technology and Innovation, 5(1). https://doi.org/
10.61414/jeti.v5i1.103
Howard, A., Hope, W., & Gerada, A. (2023). ChatGPT and antimicrobial advice: the end of the consulting
infection doctor? The Lancet Infectious Diseases. https://doi.org/10.1016/S1473-3099(23)00113-5
Kapustina, L. V., Ermakova, Y. D., & Kalyuzhnaya, T. V. (2023). ChatGPT and education: eternal confrontation
or possible cooperation? Methodological Electronic Journal “Koncept”, 10, 119–132.
Kashyap, R., & ChatGPT OpenAI. (2023). A First Chat with ChatGPT: The First Step in the Road-Map for
AI (Artificial Intelligence). (February 7, 2023). Available at SSRN: https://ssrn.com/abstract=4351
637 or http://dx.doi.org/10.2139/ssrn.4351637.
Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G.,
Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet,
O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and
challenges of large language models for education. Learning and Individual Differences, 103. https://
doi.org/10.1016/j.lindif.2023.102274
Khurana, D., Koli, A., Khatter, K., & Singh, S. (2023). Natural language processing: state of the art, current trends
and challenges. Multimedia Tools and Applications, 82(3). https://doi.org/10.1007/s11042-022-13428-4
Kleebayoon, A., & Wiwanitkit, V. (2023). Echocardiographic reporting, artificial intelligence and natural lan-
guage processing: correspondence. Journal of Echocardiography, 21(3). https://doi.org/10.1007/s12
574-023-00614-y
Langley, P. (2011). The changing science of machine learning. Machine Learning, 82(3). https://doi.org/
10.1007/s10994-011-5242-y
Lankau, M. J., & Scandura, T. A. (2002). An investigation of personal learning in mentoring relationships: con-
tent, antecedents, and consequences. Academy of Management Journal, 45(4). https://doi.org/10.2307/
3069311
Lehmann, T., Blumschein, P., & Seel, N. M. (2022). Accept it or forget it: mandatory digital learning and tech-
nology acceptance in higher education. Journal of Computers in Education. https://doi.org/10.1007/s40
692-022-00244-w
Linden, A., & Fenn, J. (2003). Understanding Gartner’s Hype Cycles. Strategic Analysis Report No R-20-
1971. Gartner Research, May.
Locker, P. (2009). Conclusion –the learning landscape: views with endless possibilities. In L. Bell, M. Neary,
& H. Stevenson (Eds.), The Future of Higher Education. Policy, Pedagogy and the Student Experience.
Continuum.
Luan, H., Geczy, P., Lai, H., Gobert, J., Yang, S. J. H., Ogata, H., Baltes, J., Guerra, R., Li, P., & Tsai, C. C.
(2020). Challenges and future directions of big data and artificial intelligence in education. Frontiers in
Psychology, 11. https://doi.org/10.3389/fpsyg.2020.580820
Lund, B. D., & Wang, T. (2023). Chatting about ChatGPT: how may AI and GPT impact academia and libraries?
Library Hi Tech News. https://doi.org/10.1108/LHTN-01-2023-0009
Miller, G. A., & Johnson-Laird, P. N. (1976). Language and Perception. Harvard University Press.
Mogavi, R. H., Deng, C., Kim, J. J., Zhou, P., Kwon, Y. D., Metwally, A. H. S., ... & Hui, P. (2024). ChatGPT
in education: a blessing or a curse? A qualitative study exploring early adopters’ utilization and
perceptions. Computers in Human Behavior: Artificial Humans, 2(1), 100027. https://doi.org/10.1016/
j.chbah.2023.100027
Mukarto, R. X. (2023). Exploring the implications of ChatGPT for language learning in higher education.
Indonesian Journal of English Language Teaching and Applied Linguistics, 7(2), 343–358.
Prajapati, J. B., Kumar, A., & Singh, S. et al. (2024). Artificial intelligence-assisted generative pretrained
transformers for applications of ChatGPT in higher education among graduates. SN Social Sciences,
4(19). https://doi.org/10.1007/s43545-023-00818-0
Ratnam, M., Sharm, B., & Tomer, A. (2023). ChatGPT: educational artificial intelligence. International Journal
of Advanced Trends in Computer Science and Engineering, 12(2), 84–91.
Rawas, S. (2023). ChatGPT: empowering lifelong learning in the digital age of higher education. Education
and Information Technologies. https://doi.org/10.1007/s10639-023-12114-8
Rezaev, A. V., & Tregubova, N. D. (2023). ChatGPT and AI in the universities: an introduction to the near
future. Vysshee Obrazovanie v Rossii = Higher Education in Russia, 32(6), 19–37. https://doi.org/10.31992/0869-3617-2023-32-6-19-37
Romero-Rodríguez, J. M., Ramírez-Montoya, M. S., Buenestado-Fernández, M., & Lara-Lara, F. (2023). Use
of ChatGPT at university as a tool for complex thinking: students’ perceived usefulness. Cultura de Los
Cuidados, 12(2). https://doi.org/10.7821/naer.2023.7.1458
Rouast, P. V., Adam, M. T. P., & Chiong, R. (2021). Deep learning for human affect recognition: insights
and new developments. IEEE Transactions on Affective Computing, 12(2). https://doi.org/10.1109/
TAFFC.2018.2890471
Sánchez, E., & Lama, M. (2011). Artificial intelligence and education. In Encyclopedia of Artificial Intelligence.
https://doi.org/10.4018/978-1-59904-849-9.ch021
Skinner, B. F. (1986). The evolution of verbal behavior. Journal of the Experimental Analysis of Behavior,
45(1). https://doi.org/10.1901/jeab.1986.45-115
Steven, D. (2021). AI in Education: Change at the Speed of Learning. UNESCO.
Stokel-Walker, C., & Van Noorden, R. (2023). What ChatGPT and generative AI mean for science. Nature,
614(7947), 214–216. https://doi.org/10.1038/d41586-023-00340-6
Strzelecki, A. (2023). Students’ acceptance of ChatGPT in higher education: an extended unified theory
of acceptance and use of technology. Innovative Higher Education. https://doi.org/10.1007/s10
755-023-09686-1
Su (苏嘉红), J., & Yang (杨伟鹏), W. (2023). Unlocking the power of ChatGPT: a framework for applying
generative AI in education. ECNU Review of Education, 6(3), 355–366. https://doi.org/10.1177/209653
11231168423
Taye, M. M. (2023). Understanding of machine learning with deep learning: architectures, workflow,
applications and future directions. Computers, 12(5). https://doi.org/10.3390/computers12050091
Tira Nur, F. (2023). Artificial intelligence (AI) technology in OpenAI ChatGPT application: a review of
ChatGPT in writing English essay. ELT Forum: Journal of English Language Teaching, 12(1), 44–58.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
Turing, A. M. (2012). Computing machinery and intelligence. In Machine Intelligence: Perspectives on the
Computational Model. https://doi.org/10.1525/9780520318267-013
Wang, Y. (2023). ‘It is the easiest thing to do’: university students’ perspectives on the role of lecture recording
in promoting inclusive education in the UK. Teaching in Higher Education, 1–18. https://doi.org/
10.1080/13562517.2022.2162814
Zhai, X. (2023). ChatGPT user experience: implications for education. SSRN Electronic Journal. https://doi.
org/10.2139/ssrn.4312418
8.1 INTRODUCTION
One crucial characteristic that enhances learning in children is the capacity to ask questions motivated
by curiosity. Because of this, earlier studies have looked into creating specialized exercises to hone
this ability. Several researchers used linguistic and semantic cues to teach participants to ask more of these kinds of questions, often known as divergent questions. Despite its proven pedagogical effectiveness, this strategy is limited since it relies on the time-consuming and labor-intensive process of manually constructing the previously mentioned cues.
A new era of advancements in communication, automation, and information processing has been
ushered in by the rapid expansion of technology, which has fundamentally changed the way we live
today. Specifically, artificial intelligence (AI) has transformed several areas, including the sphere
of academia. By processing and analyzing enormous amounts of natural language data, the multi-
disciplinary subfield of natural language processing (NLP), which includes linguistics, computer
science, and AI, provides a potent tool for human–computer interaction [1]. According to [1], this
technology has proven to be remarkably good at producing, interpreting, and understanding context
in human language. NLP has advanced significantly over time, now doing many language pro-
cessing tasks better than humans [2]. These days, cutting-edge NLP models improve efficiency and
accuracy across many domains, providing vital support and help in a wide range of applications [2].
The integration of AI tools into the educational environment has received significant attention in
the dynamic field of modern education. Among these, ChatGPT stands out as a noteworthy AI model
with possible applications in the field of education. Despite the fact that ChatGPT’s impact on education has been the subject of numerous studies, including Rueda’s systematic review [3], there
is still a clear lack of clarity on the technology’s actual value to teachers and its useful applications
in the classroom. Previous studies have mostly focused on the general use of ChatGPT in learning
environments. Some have highlighted how educators are utilizing it more and more [4, 5], while
others have explained the creative ways in which educators are incorporating it into their lesson
plans [6]. However, these studies often overlook the possible drawbacks and difficulties related to its implementation. For example, attention has been drawn to the problem of students using
ChatGPT for academic dishonesty.
8.2 RELATED WORK
According to [7], to stay up to date with the most recent technical advancements and adapt to
the changing demands of the engineering industry, engineering education is always changing. The
application of generative AI technology, like the ChatGPT conversational agent, is one encouraging
development in this sector. ChatGPT has the capacity to provide students with tailored feedback
and explanations, as well as realistic virtual simulations for hands-on learning, thereby offering
individualized and effective learning experiences. It’s crucial to take into account this technology’s
limits, though. Because ChatGPT and other generative AI systems rely on their training data, they
have the potential to produce and disseminate false information as well as reinforce prejudices.
Furthermore, there are ethical questions raised by the use of generative AI in education, including the possibility of students using it unethically or dishonestly and the possibility of job displacement as certain roles are rendered obsolete by technology. Although ChatGPT’s representation of the status of
generative AI technology is astounding, it is but a taste of what is to come. To guarantee that the
upcoming generation of engineers can take advantage of the advantages provided by generative AI
while avoiding any negative consequences, it is crucial for engineering educators to comprehend the
ramifications of this technology and research how to adjust the engineering education ecosystem.
Numerous studies investigating the potential uses of generative AI models, especially ChatGPT,
in education, have been spurred by their widespread adoption and usage. The survey in [8]
explores the ramifications and real-world uses of generative AI models in a variety of educational
settings. This review aims to shed light on the changing role of generative AI models—particularly
ChatGPT—in education through a thorough and rigorous analysis of contemporary scholarly lit-
erature. The survey aims to advance knowledge of the relationship between AI and education by
illuminating the possible advantages, difficulties, and new developments in this exciting topic. The
results of this assessment will enable scholars, educators, and legislators to make knowledgeable
choices on the incorporation of AI technologies.
The globe was shocked by the generative AI tool ChatGPT’s advanced ability to complete incred-
ibly difficult tasks. The remarkable capabilities of ChatGPT to carry out intricate tasks in the educa-
tional domain have generated conflicting emotions among educators, since this breakthrough in AI
appears to transform current educational practices. This exploratory study presents some possible
advantages and disadvantages of ChatGPT in fostering teaching and learning by synthesizing recent
existing literature. Personalized and interactive learning is encouraged by ChatGPT, and it also
generates suggestions for formative assessment activities that give continuous feedback to improve
instruction. The study [9] also identifies a few of ChatGPT’s intrinsic drawbacks, including the
possibility of incorrect information being generated, biases in data training that could exacerbate
pre-existing prejudices, and privacy concerns. The report makes suggestions for utilizing ChatGPT
to enhance instruction and learning. Together, policymakers, researchers, educators, and tech
specialists might initiate discussions on the safe and beneficial applications of these developing
generative AI technologies to enhance instruction and facilitate students’ learning.
As mentioned by [10], AI and machine learning have changed the scientific research scene in
the last few years. Among these, chatbot technology has advanced significantly in recent years,
particularly with the emergence of ChatGPT as a prominent AI language model. This in-depth ana-
lysis explores ChatGPT’s history, uses, main obstacles, and potential future developments. Before
studying its numerous uses across areas like customer service, healthcare, and education, we first
examine its history, development, and underlying technology. We also address potential mitiga-
tion measures and draw attention to the significant obstacles that ChatGPT confronts, such as data
biases, ethical difficulties, and safety risks. Lastly, we discuss our vision for ChatGPT’s future,
highlighting areas for additional study and development, better human-AI interaction, solving the
digital divide, and integrating ChatGPT with other technologies. For researchers, developers, and
stakeholders interested in the constantly changing field of conversational agents powered by AI, this
review provides insightful information. This chapter investigates how ChatGPT has transformed
scientific research in a number of areas, including data processing, hypothesis development, collab-
oration, and public outreach. The study also looks at the possible drawbacks and moral dilemmas
related to using ChatGPT in research, emphasizing the significance of finding a balance between
human knowledge and AI-assisted creativity. The study discusses a number of ethical concerns with
the current computing landscape and how ChatGPT may raise objections to this idea. Additionally, this work presents some of ChatGPT’s biases and limitations. It is noteworthy that in a relatively
short period of time, ChatGPT has garnered significant attention from academics, research, and
enterprises, despite a number of challenges and ethical concerns.
As mentioned in [11], in the realm of education, AI’s application in academics is a popular topic.
An AI application called ChatGPT has several advantages, such as improved accessibility, cooper-
ation, and student involvement. But it also raises concerns about academic integrity and plagiarism.
This essay explores the advantages and disadvantages of implementing ChatGPT in higher educa-
tion as well as the possible hazards and benefits of these resources. The study also addresses the
challenges associated with identifying and stopping academic dishonesty and offers tactics that aca-
demic institutions might implement to guarantee the moral and responsible use of these resources.
These tactics consist of creating guidelines and protocols, offering assistance and training, and
employing a variety of techniques to identify and stop cheating. The study indicates that while there are
both opportunities and challenges associated with using AI in higher education, these issues can be successfully
addressed by universities by using these tools in a proactive and morally responsible manner.
As the author mentioned in [12], discussions concerning ChatGPT, an AI tool, and its possible
effects on education have begun. We discussed ChatGPT’s potential for and dangers to educa-
tion, as well as its strengths and limitations, using the SWOT analysis framework. The strengths
include the use of an advanced natural language model to provide believable responses, the ability to
improve itself, and the provision of customized and instantaneous responses. As a result, ChatGPT
can improve information availability, enable customized and sophisticated learning, and lessen the
workload of teachers—all of which contribute to the efficiency of important procedures and duties.
The shortcomings include a deficiency in higher-order cognitive abilities, a danger of bias and dis-
crimination, a lack of in-depth comprehension, and trouble assessing the caliber of responses. Lack
of context awareness, threats to academic integrity, ongoing discrimination in the classroom, dem-
ocratization of plagiarism, and a decline in higher-order cognitive skills are some of the threats
facing education.
students to write articles on any topic. Furthermore, ChatGPT can provide advice on how to improve
the grammatical structures, succinctness, or clarity of several drafts of the same essay. This helps users overcome writing obstacles and offers novel perspectives on the topic they have chosen.
8.5 CONTEXT ANALYSIS
In the first line of research, AI is employed to drive the entire discussion around the function of the teacher in the
classroom. In the context of AI utilization, the teacher’s involvement is essential for advancing
innovative teaching approaches and enhancing educational practice [32]. Trainers can create curriculum, instructional materials, and assessment activities with the help of ChatGPT, which is a useful
tool [33].
The influence and application of AI in the classroom are the subject of the second line of research
in the field of education, which has piqued the interest of pedagogical specialists and teachers alike.
Numerous studies have been conducted and show potential in the use of AI as a supplemental tool
in the teaching–learning process [34]. In this connection, it has been noted that AI—ChatGPT in
particular—has the potential to enhance academic achievement and promote the growth of critical
thinking among students. ChatGPT makes it easier to obtain current, pertinent information by giving
prompt, precise replies to specific queries. This can be very helpful for students as they learn about
various subjects and conduct research [35]. Furthermore, AI adjusts to the unique learning speed of
every student, letting them advance at their own speed and get individualized guidance according to
their requirements. This increases student involvement and enthusiasm while also giving teachers
more time to work on more creative and participatory projects like one-on-one tutoring and critical
criticism [36]. But, while using AI in the classroom, it’s also critical to keep in mind a few difficulties and moral issues. For instance, in order to prevent the dissemination of false or biased informa-
tion, it is crucial to guarantee the confidentiality and privacy of student data as well as to evaluate
the dependability and correctness of the responses offered by AI [37].
The final line of research relates directly to how AI affects the processes of teaching and learning, taking into account elements such as the technological, institutional, cultural, and socioeconomic
environments [38]. In conclusion, AI technologies like ChatGPT can assist educational institutions
in managing and allocating learning resources intelligently, enhancing their use and effectiveness,
significantly altering the nature of education quality and efficiency, and offering students better
learning services that will better prepare them for the demands of a changing society.
8.5.1 Gender Analysis
The dataset, which consists of 192 respondents’ answers about their gender identity, offers
important insights into the demographic makeup of the sample as shown in Figure 8.1. The main
focus is on binary gender classification, which shows that 40.6% of respondents identify as female
and 59.4% of respondents as male. This difference in numbers points to a significant male pre-
ponderance in the sample. The “Prefer not to say” choice is a notable addition, recognizing the
range of respondent preferences about the sharing of personal data. This option emphasizes the
growing significance of data privacy in survey methodology while also honoring people’s privacy
concerns. Consideration should be given to possible influencing factors, such as social, cultural,
and contextual elements impacting gender identifications, given the almost 60–40 divide between
male and female respondents. It’s important to note that the binary classification system might not
adequately represent the range of gender identities, indicating the necessity for a more sophisticated
investigation.
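A small arithmetic check makes the reported split concrete; the sketch below is purely illustrative and uses only the figures quoted in the text to convert the percentages into approximate respondent counts.

    total = 192  # respondents reported in the text

    # Percentages reported for Figure 8.1
    shares = {"male": 0.594, "female": 0.406}

    for group, share in shares.items():
        print(f"{group}: about {round(total * share)} respondents")
    # male: about 114 respondents, female: about 78 respondents (114 + 78 = 192)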
Future analyses could investigate more demographic variables to improve comprehension and
reveal possible intersections and connections that add to a more complete story about the population
studied. Finally, the gender identity data invites contemplation on identity complexity, privacy
concerns, and the changing field of inclusive survey methodology in addition to providing a quanti-
tative window into the demographic composition.
Figure 8.3. Furthermore, social media networks can advertise. Policymakers, tech companies,
educators, and students collaborate and share knowledge, which helps to build best practices for
responsible AI in education. For instance, IT businesses and engineers can provide insights on the
most recent advancements in AI models and methods, while educators can discuss their experiences
and worries about incorporating AI into the learning process.
8.5.4 Positive Comments
A student’s learning preferences and patterns can be examined by ChatGPT, which can then suggest learning materials suited to their individual requirements, as shown in Figure 8.4. This can assist pupils in learning in a way that suits them best and at their own pace.
8.5.5 Negative Comments
Comments on the strengths and weaknesses of ChatGPT highlight its broad knowledge retrieval from a variety of sources, such as books, journals, and websites. They also note inherent shortcomings, even though the model’s ability to recall and offer trustworthy information is considered essential for sensitive applications and other AI technologies. Because ChatGPT relies on a learning algorithm, its accuracy could be compromised.
Notably, the imperfection is acknowledged, addressing the possible risks of bias and manipula-
tion in the data supplied by ChatGPT. It is crucial to be open and honest about the model’s limits in
order to manage expectations and draw attention to the difficulties in using it, as illustrated in Figure 8.5.
About 65.1% of participants hold the opinion that ChatGPT has the capacity to supplant human
intelligence. This shows that a sizable portion of the sample believes AI, and ChatGPT in particular,
may mimic or even surpass some elements of human cognitive ability.
Conversely, 34.9% of participants claim that ChatGPT is insufficient to take the position of
human intellect. This group, which is a sizable minority, is dubious or doubtful that the model can
accurately capture the complexities of human creativity, cognition, and problem-solving.
The question touches on the changing field of AI and how it relates to human intellect, which is
by nature a deep and complicated subject. The percentages show various viewpoints regarding the
present and future possibilities of AI models such as ChatGPT.
It’s critical to understand that the advancement and application of AI technologies lead to constant
debates on their moral ramifications, the effects they have on society, and the nature of interactions
between humans and machines. Determining the reasoning behind these disparate answers may
offer more insight into the complex opinions of ChatGPT’s place in the field of AI.
The primary constraint of this review is the restricted quantity of literature incorporated into the examination because, to date, little research has been done specifically on ChatGPT’s use in higher education institutions. One reason is that higher education may have adopted educational technologies such as ChatGPT later than other educational levels, in a field that is constantly growing and changing. Notwithstanding this drawback, the review offers a summary of ChatGPT’s effects at the tertiary level. It will
be helpful to broaden the study in order to obtain a more thorough knowledge of ChatGPT’s impact
and benefits at this particular educational level as more research is done and awareness of its use in
higher education grows.
8.6 CONCLUSION
Both instructors and students may find ChatGPT and other AI language models to be useful and
practical resources for engineering education. These models are capable of producing text that is
human-like, conversing, answering queries, writing essays, and finishing homework assignments.
Possible uses include language editing, online instruction, language practice, asking and answering
both technical and non-technical inquiries, and supporting research. But it’s crucial to keep in
mind that ChatGPT and other AI language models are fallible and could give false information.
As a result, it’s critical to utilize these technologies responsibly and to think about creating rules
and criteria for their appropriate use in the community. Due to ChatGPT’s adaptability, problems
concerning its appropriate and inappropriate use are also raised because the technology may be able
to provide comprehensive responses to tests that measure human learning. It is obvious that engin-
eering education and the profession will eventually use these tools, and in order to prevent unethical
behavior while also allowing for the productivity that these tools can produce, evaluation method-
ologies will need to change. In order to prevent increasing already-existing inequalities, it is equally
critical to guarantee equitable access to cutting-edge technology as well as adequate training and
education, especially for underprivileged communities. Through a number of platforms (such as a
website or a smartphone app) and in a number of sectors, ChatGPT may provide instructors and
students with simple access to information. It is also a more effective tool than conventional search
engines because it provides a written response instead of just a list of references. Students can more
easily and quickly obtain detailed knowledge by using ChatGPT, which has the ability to locate and
summarize pertinent material. From an educational standpoint, this implies that ChatGPT can free up the time students spend searching for information, so they can spend longer reading and thinking critically about the assigned material.
Teachers can find and create suitable teaching resources with the help of ChatGPT.
REFERENCES
1. Chowdhury, G. Natural language processing. Ann. Rev. Inf. Sci. Technol. January 2005. https://doi.org/
10.1002/aris.1440370103
2. Kasneci, E.; Sessler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh,
G.; Günnemann, S.; Hüllermeier, E.; Krusche, S.; Kutyniok, G.; Michaeli, T.; Nerdel, C.; Pfeffer,
J.; Poquet, O.; Sailer, M.; Schmidt, A.; Seidel, T.; Stadler, M.; Weller, J.; Kuhn, J.; Kasneci, G.
ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education.
Elsevier Educational Research Programme. Volume 103. April 2023. https://doi.org/10.1016/j.lin
dif.2023.102274
3. Rueda, M.M.; Fernández-Cerero, J.; Fernández-Batanero, J.M.; López-Meneses, E. Impact of the
Implementation of ChatGPT in Education: A Systematic Review, Computers 2023, 12(8), 153; https://
doi.org/10.3390/computers12080153.
4. Hill-Yardin, E.L.; Hutchinson, M.R.; Laycock, R.; Spencer, S. A Chat(GPT) about the future of scien-
tific publishing. Brain Behav. Immun. 2023, 110, 152–154. https://doi.org/10.1016/j.bbi.2023.02.022
5. Yu, H.; Guo, Y. Generative Artificial Intelligence Empowers Educational Reform: Current Status,
Issues, and Prospects, Volume 8, Elsevier 2023.
6. Rahaman, M.S.; Ahsan, M.M.; Anjum, N.; Terano, H.J.; Rahman, M.M. From ChatGPT-3 to GPT-4: A
Significant Advancement in AI-Driven NLP Tools, J. Eng. Emerg. Technol. 2023, 2(1), 50–60.
7. Qadir, J. Engineering education in the era of ChatGPT: Promise and pitfalls of generative AI for edu-
cation. TechRxiv. https://doi.org/10.36227/techrxiv.21789434.v1
8. AL-Smadi, M. ChatGPT and Beyond. The Generative AI Revolution in Education. Qatar
University: Qatar, arXiv: 2311.15198v1 [cs.CY] 26 Nov 2023.
9. Baidoo-Anu, D.; Ansah, L.O. Education in the era of generative artificial intelligence (AI): Understanding
the potential benefits of ChatGPT in promoting teaching and learning. J. AI 2023, 7(1), 52–62.
https://doi.org/10.61969/jai.1337500
10. Ray, P.P. ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics,
limitations and future scope. Internet of Things and Cyber-Physical Systems. 2023, 3, 121–154.
11. Cotton, D.R.E.; Cotton, P.A.; Shipway, J.R. Chatting and cheating: Ensuring academic integrity in the
era of ChatGPT, 13 Mar 2023. https://doi.org/10.1080/14703297.2023.2190148
12. Farrokhnia, M.; Banihashem, S.K.; Noroozi, O.; Wals, A. A SWOT analysis of ChatGPT: Implications
for educational practice and research, 27 Mar 2023. https://doi.org/10.1080/14703297.2023.2195846
13. Mander, J. Four Arguments for the Elimination of Television. Quill: New York, 1978.
14. Carr, N. The Shallows: How the Internet Is Changing the Way We Think, Read and Remember. W.
W. Norton, Jan 2011.
15. Gilbert, I. Why Do I Need a Teacher When I’ve Got Google?: The Essential Guide to the Big Issues for
Every Teacher. Routledge, 2014.
16. Pappano, L. The year of the MOOC. The New York Times, 2012, p. 2.
17. Kent, M.; Bennett, R. “What was all that about? Peak MOOC hype and post-MOOC legacies,” in
Massive Open Online Courses and Higher Education, pp. 1–8, Taylor & Francis, 2017.
18. Qadir, J.; Taha, A.-E.M.; Yau, K.-L.A.; Ponciano, J.; Hussain, S.; Al-Fuqaha, A.; Imran, M.A.
“Leveraging the force of formative assessment & feedback for effective engineering education,” in
American Society for Engineering Education (ASEE) Annual Conference, 2020, Enlighten Publication.
19. Thunstrom, A. We asked GPT-3 to write an academic paper about itself: Then we tried to get it
published. Sci. Am. 2022, 30.
20. Qadir, J.; Islam, M.Q.; Al-Fuqaha, A. Toward accountable human-centered AI: Rationale and prom-
ising directions, J. Inf. Commun. Ethics Soc., 20(2).
21. Latif, S.; Qayyum, A.; Usama, M.; Qadir, J.; Zwitter, A.; Shahzad, M. Caveat emptor: The risks of
using big data for human development. IEEE Technol. Soc. Mag. 2019, 38(3), 82–90.
22. Welsh, M. The end of programming. Commun. ACM. 2022, 66(1), 34–35.
23. Bilad, M.R.; Yaqin, L.N.; Zubaidah, S. Recent progress in the use of artificial intelligence tools in
education. Jurnal Penelitian Dan Pengkajian Ilmu Pendidikan. 2023, 7(3), 279–315. https://doi.org/
10.36312/esaintika.v7i3.1377
24. Brazdil, P.; Jorge, A. Progress in Artificial Intelligence: Knowledge Extraction, Multi-Agent Systems,
Logic Programming, and Constraint Solving. Springer
25. Jara, I.; Ochoa, J. Usos Y Efectos de la Inteligencia Artificial en la Educación. Sector Social División
Educación. 2021. Available online: https://ie42003cgalbarracin.edu.pe/biblioteca/LIBR-NIV3310
12022134652.pdf (accessed on 3 June 2023).
26. García-Peña, V.R.; Mora-Marcillo, A.B.; Ávila-Ramírez, J.A. La inteligencia artificial en la educación.
Domino De Las Cienc. 2020, 6, 648–666.
27. Incio Flores, F.A.; Capuñay Sánchez, D.L.; Estela Urbina, R.O.; Valles Coral, M.A.; Vergara Medrano,
S.E.; Elera Gonzáles, D.G. Inteligencia artificial en educación: Una revisión de la literatura en revistas
científicas internacionales. Apunt. Univ. 2021, 12, 353–372.
28. Neuman, M.; Rauschenberger, M.; Schön, E.M. “We need to talk about ChatGPT”: The future of AI
and higher education. Hochschule Hannover. 2022, 1, 1–4.
29. Steenbergen-Hu, S.; Cooper, H. A meta-analysis of the effectiveness of intelligent tutoring systems on
college students’ academic learning. J. Educ. Psychol. 2014; 106(2), 331.
30. OpenAI. n.d. Available online: https://openai.com (accessed on 10 May 2023).
31. Zhai, X. ChatGPT user experience: Implications for education. Available at: SSRN 4312418 (2022),
https://dx.doi.org/10.2139/ssrn.4312418.
32. Firat, M. What ChatGPT means for universities: Perceptions of scholars and students. J. Appl. Learn.
Teach. 2023, 6, 57–63.
33. Topsakal, O.; Topsakal, E. Framework for a foreign language teaching software for children utilizing
AR, voicebots and ChatGPT (large language models). J. Cogn. Syst. 2022, 7, 33–38.
34. Topsakal, O.; Topsakal, E. Framework for a foreign language teaching software for children utilizing
AR, voicebots and ChatGPT (large language models). J. Cogn. Syst. 2022, 7, 33–38.
35. Javaid, M.; Haleem, A.; Singh, R.P.; Kahn, S.; Khan, I.H. Unlocking the opportunities through
ChatGPT Tool towards ameliorating the education system. Bench Council Trans. Benchmarks Stand.
Eval. 2023, 3, 100–115.
36. Ausat, A.; Massang, B.; Efendi, M.; Nofirman, N.; Riady, Y. Can ChatGPT replace the role of the
teacher in the classroom: A fundamental analysis. J. Educ. 2023, 5, 16100–16106.
37. Rahman, M.M.; Watanobe, Y. ChatGPT for education and research: Opportunities, threats, and strat-
egies. Appl. Sci. 2023, 13, 5783.
38. Castillo, A.G.R.; Silva, G.J.S.; Arocutipa, J.P.F.; Berríos, H.Q.; Rodríguez, M.A.M.; Reyes, G.Y.;
López, H.R.P.; Reves, R.M.V.; Rivera, H.V.H.; González, J.L.A. Effect of ChatGPT on the digitized
learning process of university students. J. Namib. Studies. 2023, 33, 1–15.
125
9 ChatGPT Strategies
for Personalized Learning
in Second Language
Acquisition (Italian B1)
Rubén González Vallejo
This chapter presents learning strategies with ChatGPT, in order to emphasize the functionalities that ChatGPT offers during Italian language learning through preliminary support, support during learning, and evaluative support strategies.
9.1.1 Chapter Organization
The rest of the chapter is organized as follows: Section 9.2 presents the applications of AI in education, aiming to identify their main uses by educational institutions. Section 9.3 highlights the potentials and challenges of ChatGPT in the educational field, focusing on its most common uses. Section 9.4 presents learning strategies with ChatGPT, aiming to highlight the functionalities offered by ChatGPT during Italian learning through preliminary support strategies, support during learning, and evaluative support. Finally, Sections 9.5 and 9.6 conclude the chapter with discussions and conclusions.
9.2 APPLICATIONS OF AI IN EDUCATION
In recent years, Artificial Intelligence in Education (AIED) has taken center stage by offering greater adaptation and personalization of educational content through its applications. To this end, it can enhance teaching by increasing study hours, providing more resources, and revamping traditional assessment methods, thus transforming the traditional classroom into an interactive learning environment (Luo and Cheng 2020). Specifically, AI aims to address open challenges in education by innovating educational practices and promoting inclusion, equality, and quality. All of this is geared towards shaping a new critical and autonomous student immersed in technology, particularly within the multidirectional and (a)synchronous system represented by distance education (García Villarroel 2021).
Evidence of this shift can be seen in various available alternatives that have emerged in developing
countries, embodying these principles. In China, Liulishuo aims to teach English with a capacity to
accommodate 600,000 students, and the Superteacher answers 500 million questions during Gaokao
preparation. In Uruguay, the adaptive learning platform PAM focuses on personalized feedback in
mathematics teaching. Brazil’s Mec Flix is an educational platform for national exam preparation,
while Daptio in Cape Town is software that adapts learning by analyzing students’ competencies.
M-Shule in Kenya is used to deliver content via SMS based on the national curriculum, among
others (United Nations 2019).
The significant technological revolution witnessed in the education system has been made pos-
sible by the so-called fifth generation of computers, based on AI, robotics, expert systems, chips,
and the implementation of machine learning, deep learning, and data analysis. These technologies
have enabled the creation of applications that support the development of NLP, enhancing cap-
abilities such as information classification and search, among others.1 In essence, the potential of
AI lies in its ability to “solve complex problems by applying advanced programming techniques
without the need to establish predefined steps or instructions, as is the case with conventional com-
puter programs” (MINECO, p. 3).2 Moreover, AI integrates techniques from foundational educational theories such as behaviorism, connectivism, discovery learning, and constructivism (Escobar
Hernández 2021).
1 Examples of AIED applications include software such as Dragon for speech recognition, applications like Carnegie
Speech or Duolingo, and language robots such as L2-TOR promoted by the European Union (Kannan and Munday 2018).
2 In relation to this, Zhu (2017) analyzes an artificial intelligence application for computer-assisted teaching from the perspective of its users: the administrator is responsible for processing users and the question bank; the subject-matter expert modulates
questions based on their knowledge, designing the question bank; the English teacher conducts exams, uploads results,
and analyzes the experiential situation; and, finally, the student is responsible for practicing and (self)diagnosing.
TABLE 9.1
Applications of AI in education
Administration • Perform faster the administrative tasks that consume much of instructors’ time, such as grading exams and providing feedback.
• Identify the learning styles and preferences of each of their students, helping them build personalized learning plans.
• Assist instructors in decision support and data-driven work.
• Give feedback to and work with students in a timely and direct manner.
Instruction • Anticipate how well a student will perform in projects and exercises and the odds of dropping out of school.
• Analyze the syllabus and course material to propose customized content.
• Extend instruction beyond the classroom into higher-level education, supporting collaboration.
• Tailor teaching methods for each student based on their personal data.
• Help instructors create personalized learning plans for each student.
Learning • Uncover the learning shortcomings of students and address them early in education.
• Customize the university course selection for students.
• Predict the career path for each student by gathering studying data.
• Detect the learning state and apply intelligent adaptive interventions to students.
In this context, Ma (2021) examines the English learning experience of two groups of university
students. While the control group uses a traditional method, the experimental group receives training
through virtual reality technology based on constructivist theory. Given the importance of working
on the learner’s experience and meaning construction, the immersive experience in the experimental
group yielded superior results and satisfaction compared to the control group. Consequently, we
agree that AIED “enables the adjustment and/or adaptation of a user’s learning pathways through
inductive processes based on data extracted from formative evidence generated throughout their
school life” (Magal-Royo and García Laborda 2017, p. 1). Indeed, AI has even been proposed as an effective strategy for addressing learning problems, as seen in the case of students with dyslexia (Chassignol et al. 2018). Table 9.1 presents some applications of AI in education.
In Spain, the objective of the National Artificial Intelligence Strategy (ENIA) is to coordinate
the actions of different administrations, serving as a reference point for both the public and pri-
vate sectors (MINECO). Among its strategic goals, emphasis is placed on promoting the Spanish
language and its use in AI technologies. To this end, Cuenca (2021) reflects on the development of
AI in Spanish compared to English. Specifically, it suggests that there is a common interest among
institutions, businesses, science, and citizens in narrowing the gap between the two languages. While
acknowledging that this delay is, in part, a disadvantage, it allows for learning from the implemen-
tation of technologies and anticipating future developments. As a solution for bridging this gap, the
proposal involves “public-private collaboration and the connection between scientists, engineers,
business developers, entrepreneurs, and public administrations” (Cuenca 2021, p. 13).
Finally, the collection of massive data, enabling the feeding of AI systems applied to educa-
tion, raises concerns about the management and compilation of such data. Regan and Jesse (2018,
cited in Hockly 2023) question the extent to which the user controls their data (privacy), which
data are anonymized (anonymity), the purpose of monitoring users (surveillance), how much
autonomy the user has in the system (autonomy), what non-discrimination measures are in place
(non-discrimination), and who owns control of the data (information ownership).3 All of this leads
us to discuss the numerous challenges that the rapid implementation of AI in the educational field
has brought about. According to Hwang et al. (2020), important considerations in AIED include the
development of learning models that align with educational objectives, the performance of students
learning with AI systems, a redefinition of educational theories through a new conception of edu-
cation based on new learning systems, big data analysis focused on education, the development
of ethical practices, and collaboration between humans and AI. On the other hand, Owoc et al.
(2021) highlight challenges in establishing strategies agreed upon by all stakeholders based on specific goals and timelines; achieving maturity in the adoption of AI technologies; controlling data to
feed systems through quality, access, and data lifecycle principles; and addressing infrastructure
problems arising from the compatibility and integration of different technologies. Finally, Zawacki-
Richter et al. (2019), in their analysis of research on AI applications in higher education, argue that
there is a lack of critical reflection on the ethical issues, risks, and challenges posed by AI in educa-
tion, and a lack of robustness with theoretical pedagogical approaches.
3 However, it is undeniable that such data collection has benefits for learning, as the study and collection of data allow
teachers to personalize student learning and promote research in the field (Luo and Cheng 2020). One case is explored
by Magal-Royo and García Laborda (2017), who state that the collection of massive data in official language certification
exams, through tests with structured or semi-structured data, could be useful for establishing new educational strategies
and policies.
4 To avoid the high maintenance cost of AI in schools and promote equitable access, Kasneci et al. (2023) propose using
open-source models, securing financial support from companies or research institutions to maintain the systems, and
sharing the costs.
ChatGPT can refine the responses provided when the information does not meet the user’s satisfaction threshold, known as reinforcement learning. This process involves the algorithm generating different responses and
learning from the most appropriate suggestions (Mijwil et al. 2022). The inefficiency in addressing
certain questions is linked to the fact that the chatbot works contextually and semantically in its
responses based on information extracted from the texts in its massive pretraining. This means it
understands the relationships and dependencies between words but lacks an understanding of how
these concepts correspond to the real world, despite the surprising generalizations it can generate
(Cuenca 2021). Moreover, the fact that ChatGPT operates primarily in English raises questions about the reliability of using other languages without developing language-specific large language models (LLMs). Implementing such technology requires careful consideration due to factors like limited transparency, high maintenance costs, and various short-term risks, among others (Dac Lai et al. 2023).
TABLE 9.2
Strategies with ChatGPT based on language teaching
9.4.1 Preliminary support
• Lesson planning and objectives
(PR) I am a student of Italian at B1 level and I need to plan the contents of this level for a
self-study of 12 weeks, 4 hours a week. Can you plan the contents and objectives for learning
B1 Italian?
(CG) Weeks 3–4: Past Tense and Thematic Vocabulary
Objectives:
• Learn the simple past tense of regular and irregular verbs.
• Expand vocabulary in themes such as travel, food, and leisure.
Activities:
• Practice conjugating verbs in the past.
• Read texts related to the mentioned themes.
• Engage in brief conversations about past experiences.
• Creation of didactic materials (exercises, dialogues, specific textual genres, and general reading comprehension)
(PR) I am an Italian B1 student and I need to create a set of questions on this economics text
adapted to B1 level to work on reading comprehension.
(CG) Nell’Unione europea si producono ogni anno più di 2,2 miliardi di tonnellate di rifiuti.
L’UE sta aggiornando la legislazione sulla gestione dei rifiuti per promuovere la transizione
verso un’economia circolare, in alternativa all’attuale modello economico lineare. A marzo
2020 la Commissione europea ha presentato, sotto il Green deal europeo in linea con la
proposta per la nuova strategia industriale, il piano d’azione per una nuova economia circolare
che include proposte sulla progettazione di prodotti più sostenibili, sulla riduzione dei rifiuti
e sul dare più potere ai cittadini, come per esempio attraverso il ‘diritto alla riparazione’ […]
(European Union, 2023).
• Gamification
(PR) I am a b1 level student of italian and I need to gamify some grammatical contents to
learn in a more dynamic way. can you gamify the way to learn the conditional?
(CG) Objective: Travel through time and use conditional structures correctly in Italian.
Materials needed:
1. Cards with hypothetical situations.
2. Game board (can be virtual or printed).
3. Player tokens.
4. Dice.
Preparation:
• Create cards with hypothetical situations that require the use of the conditional. For
example: “If you win the lottery, what would you do?” or “If you could travel anywhere,
where would you go?”
• Design a game board with squares representing different moments in time (past, pre-
sent, future).
• Place the cards face down near the board.
[…]
• Text analysis
(PR) Soy un estudiante y necesito analizar el siguiente texto periodístico titulado “Non solo
Gaza, in Cisgiordania l’altra guerra “invisibile” d’Israele. Record di morti palestinesi dal 7
ottobre: sono 163, più di 5 al giorno”. ¿Podrías realizar un análisis?
(CG) This journalistic text addresses the situation in the West Bank amid the conflict between
Israel and Hamas in Gaza. Here is an analysis of the content:
Summary of the Text:
1. General Context: The article points out that international attention has focused on the war
in Gaza, but there is an “invisible” conflict in the West Bank that has heightened tensions
and the number of Palestinian deaths.
2. Casualties in the West Bank: From October 7th to November 7th, 163 Palestinian deaths
were recorded in the West Bank, with an average of approximately 5.5 per day. This
increase is attributed to the intensification of “anti-terrorism operations” by the Israel
Defense Forces (IDF) and actions by illegal settlers in the Occupied Territories.
3. Aggravating Factors: Israel’s response to Hamas’ attack has given more freedom of action
to the IDF, leading to an increase in arrests and deaths, including civilians. Additionally,
an increase in gun permit applications in Israel is mentioned, and the Minister of National
Security has ordered the purchase of more weapons for civilians.
[…]
• Automatic translation
(PR) Soy un estudiante de italiano de nivel B1 y necesito traducir un texto del italiano al
español que no entiendo. ¿podrías traducirlo?
(CG) Article 49
Todos los ciudadanos tienen el derecho de asociarse libremente en partidos con el fin de
contribuir de manera democrática a la determinación de la política nacional. La jurisprudencia
considera a los partidos políticos como instrumentos privilegiados “a través de los cuales
se concreta una relación estable entre la soberanía popular y las instituciones.” En 2006,
el Tribunal Constitucional declaró que los partidos están garantizados por la Constitución
como “instrumentos de representación de intereses políticamente organizados.” Los partidos
contribuyen a realizar “la estructura democrática” e identifican la voluntad general “a través
de la actividad de agregar las voluntades y necesidades de los individuos presentes en la
sociedad”: por estas razones, la legislación debe garantizar una “verdadera igualdad de
oportunidades para cada formación política en la competencia electoral”
[...] (Hub Scuola, n.d.)
• Definition of words
(PR) I am a B1 level student of Italian and I would like to know the meaning of some Italian
words such as “panettone” and “pandoro.”
(CG) 1. Panettone: Panettone is a type of sweet bread originating from Italy, specifically from
Milan. It is traditionally consumed during the Christmas season. It’s a soft and fluffy
bread that typically contains candied fruits, citrus peels, and raisins. It is served in
slices and is often accompanied by a cup of hot chocolate or wine.
2. Pandoro: Like panettone, pandoro is another type of Italian sweet bread associated
with the Christmas season. It originated in Verona. Unlike panettone, pandoro does
not contain fruits or raisins. It has a conical shape, resembling a tower, and is dusted
with powdered sugar before serving. It is often accompanied by cream or chocolate.
• Language for specific purposes (and different language registers)
(PR) I am a student of Italian B1 for specific purposes. It is a course dedicated to Italian in
catering. Could you give me a text to work on the compound conditional with specific vocabu-
lary of this field and some questions to solve?
(CG) Text: The Perfect Dinner
Welcome to the “Gusto Divino” restaurant! Our chef, Marco, has prepared an exceptional
dinner for you. Last night, if we had had fresh fish, we would have served a delicious fish soup
as an appetizer. Unfortunately, there were no fresh supplies, so we opted for a mixed salad
with goat cheese and toasted nuts.
For the first course, we would have cooked spaghetti with clams if the clam supply had not
been delayed. Instead, we chose to prepare a mushroom risotto, a classic that always succeeds
among our customers. […]
Questions:
1. What was the originally planned appetizer for the dinner? Why wasn’t it served?
2. What would the restaurant have cooked as the first course if there had been no delay in the
supply?
[…]
• Pronunciation
(PR) I am a student of Italian level b1 and I need to know how to pronounce the words “balia,”
“ancora” and “alice” in Italian.
(CG) Sure! Here’s the approximate pronunciation of the words in Italian:
1. “Balia” is pronounced roughly as **bah-lee-ah**.
2. “Ancora” is pronounced approximately as **ahn-koh-rah**.
3. “Alice” is pronounced approximately as **ah-lee-che**.
• Grammar correction
(PR) I am a b1 level student of Italian. could you please correct the grammar of the
following text?
(CG) Le due amiche si ritrovo puntualissimi davanti all’entrata del cinema. I posti le hanno
già prenotati. La fila centrale è la migliore, un po’ perché permette una visiva ottimale, un po’
perché si possono allungare tranquillamente le gambe senza toccare i seggiolino davanti. Le
due ragazze comprano una generosa porzione di corn e prendono anche una bevanda gassata
e un pacchetto di caramelle. Si accomodano sulle loro poltroncine e sono pronte a godersi lo
spettacolo offerto dal grande schermo.
“Le due amiche si ritrovano puntualissime davanti all’entrata del cinema. I posti le hanno già
prenotati. La fila centrale è la migliore, un po’ perché permette una visuale ottimale, un po’
perché si possono allungare tranquillamente le gambe senza toccare il seggiolino davanti.
Le due ragazze comprano una generosa porzione di popcorn e prendono anche una bevanda
Raffaello Sanzio (1483–1520), cresciuto nell’ambiente artistico della bottega di suo padre
Giovanni Santi a Urbino, si sviluppa nella fervente atmosfera culturale della corte dei
Montefeltro. Dopo la prematura scomparsa dei suoi genitori, entra in contatto con Perugino
e lavora a Città di Castello; già a quindici anni dimostra una piena competenza nella gestione
dell’atelier di famiglia. All’inizio del XVI secolo, Raffaello è già uno degli artisti più
richiesti in Umbria. Dopo brevi soggiorni a Firenze e Roma, raggiunge Pinturicchio a Siena,
realizzando cartoni destinati agli affreschi della Libreria Piccolomini. Nel 1504, anno del
celebre Sposalizio della Vergine, si trasferisce a Firenze, dove la vicinanza con Leonardo
e Michelangelo si riflette nelle straordinarie opere di devozione privata e nei ritratti per la
borghesia ricca.[…]
• Writing tips
(PR) I am a B1 student and I need advice on how to improve a text. Could you give me writing
tips from the above text about Raffaello?
(CG) Overall, your biography of Raphael Sanzio is informative and well-structured. Here are
some writing tips to enhance it:
1. Clarity and Flow:
• Ensure that ideas flow coherently and logically. Use connectors and transition phrases
to facilitate smooth reading.
2. Variety in Sentence Structure:
• Diversify sentence structures to avoid repetition and make the text more dynamic.
Combine short and long sentences, for example, to maintain the reader’s interest.
3. Expansion of Details:
• Add additional details about some of Raphael’s most prominent works. This will pro-
vide readers with a more comprehensive understanding of his artistic contribution.
4. Use of Specific Terms:
• Consider using specific terms related to art and painting to describe Raphael’s techniques and styles in more detail.
[…]
(CG) Uno degli studi più importanti condotti sui benefici alla salute derivanti dall’andare a lavorare
in bicicletta è stato realizzato dal Copenhagen Center Prospective Population Studies. Durante un
periodo di molti anni, lo studio ha preso in considerazione 13.375 donne e 17.265 uomini. Nel
corso della ricerca diverse persone morirono e le loro età vennero registrate, ne risultò che chi
utilizzava abitualmente la bicicletta per i suoi spostamenti aveva un’aspettativa di vita più lunga.
Inoltre, secondo quanto riportato dall’International Journal of Obesity, c’è un legame significativo
tra l’utilizzo abituale dell’automobile e l’essere soprappeso o obesi. I rischi per la salute di uno stile
di vita sedentaria sono enormi, reali ed in costante aumento. Pedalare per andare a lavoro si traduce
in grandi benefici per il sistema cardiovascolare; è molto più facile rimanere in forma quando si fa
esercizio fisico ogni giorno. […] (Loescher Editore 2014).
Uno degli studi più importanti sulle cose positive per la salute che vengono dal andare a
lavorare in bicicletta è stato fatto dal Copenhagen Center Prospective Population Studies. Per
molti anni, hanno studiato 13.375 donne e 17.265 uomini. Durante la ricerca, alcune persone
sono morte e hanno segnato le loro età. Hanno scoperto che chi usa la bicicletta regolarmente
per i trasporti vive di più. Inoltre, secondo un articolo sull’International Journal of Obesity, c’è
una connessione importante tra l’uso abituale dell’auto e il sovrappeso o l’obesità. I rischi per la
salute di uno stile di vita sedentario sono grandi, reali e sempre in aumento. Andare in bicicletta
per andare al lavoro porta grandi benefici per il cuore; è più facile rimanere in forma quando fai
esercizio ogni giorno.
9.4.3 Evaluative support
• Evaluation and feedback
(PR) I am a student of Italian b1 and I have produced a written production on a free topic
which I have to hand in. If I give you the text, could you give me a mark from 1 to 10 according
to the B1 level of Italian along with advice on how to improve?
(CG) Riccardo, un alunno della scuola media, ha sempre avuto problemi con i compiti da
svolgere a casa. Li fa abbastanza regolarmente, ma, essendo un tipo distratto e disordinato,
spesso finisce per scrivere testi poco comprensibili per gli altri. A scuola questo gli complica
la vita, perché gli riesce difficile correggere gli errori, ritrovare un certo testo a distanza di
tempo, scambiare dei lavori con i compagni di classe –non trova niente nel suo disordine.
Evaluation: 7/10 (Level B1)
Positive aspects:
1. The text has a clear and coherent structure.
2. Vocabulary use is suitable for B1 level.
3. Tense usage is correct.
4. The story is understandable and follows a logical sequence.
Areas for improvement:
1. Some sentences are quite long and could be divided to improve the text’s fluency.
2. There are minor grammatical errors, such as the incorrect use of “essendo” instead of
“poiché” to express cause.
3. The use of the subjunctive could be improved in some constructions, such as “finché il suo
insegnante gli ha proposto.”
4. Instead of “per tanto tempo,” you can use “per molto tempo” for more accuracy.
5. Consider word choice variation to avoid repetitions.
[…]
• Error correction
(PR) I am an Italian B1 student and I have a text in Italian with errors that I need to correct.
Can you correct the text by dividing the errors into grammatical, syntactical and spelling
errors?
(CG) Negli ultimi mesi sono aumentate le e-mail di protesta indirizzate alla redazione del
País: sempre piu lettori si lamentano per lo stile trascurato e leccesso di refusi negli articoli.
L 11 settembre Milagros Pérez Oliva, la difensora dei lettori, si e occupata della questione. In
un editoriale intitolato Errores y horrores de agosto ha elencato gli strafalcioni usciti durante
lestate e ha ammesso che il quotidiano ha un problema. Ledizione on-line e la piu colpita. E
non importa se sul web basta un minuto per correggere un errore: “Il danno ormai e fatto.”
Grammatical Errors:
1. “sempre piu” → “sempre più” (addition of accent on ‘ù’)
2. “lo stile trascurato e leccesso” → “lo stile trascurato e l’eccesso” (apostrophe added to
avoid vowel sequence)
3. “si e occupata” → “si è occupata” (accent added on ‘è’)
4. “Errores y horrores de agosto” → “Errori e orrori di agosto” (correct translation)
5. “strafulconi” → “strafalcioni” (corrected word)
Syntactic Errors:
1. “indirizzate alla redazione del País” → “indirizzate alla redazione de El País” (addition of
the article ‘de’ before “El País”)
2. “la piu colpita” → “la più colpita” (accent added on ‘ù’)
[…]
• Research
(PR) I am a student of Italian b1 and I need scientific material for an article on false friends
in Spanish and Italian for an academic paper at the university. Is it possible to get references
of articles online in Spanish?
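The strategies above are presented as prompts typed directly into the ChatGPT interface, which is how they were used for this chapter. Purely as an illustration, and not as part of the approach evaluated here, the same prompts could in principle also be sent programmatically, for example to regenerate a study plan each term. The following minimal sketch assumes the OpenAI Python client (openai >= 1.0), an API key in the environment, and an illustrative model name; it simply reuses the lesson-planning prompt quoted earlier.

# Illustrative only: sending the chapter's B1 lesson-planning prompt through
# the OpenAI Python client (openai >= 1.0). The model name is an assumption
# chosen to approximate the free version discussed in the text; the chapter
# itself uses the ChatGPT web interface, not the API.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

prompt = (
    "I am a student of Italian at B1 level and I need to plan the contents of "
    "this level for a self-study of 12 weeks, 4 hours a week. Can you plan the "
    "contents and objectives for learning B1 Italian?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)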
9.5 DISCUSSION
ChatGPT proves to be a valuable resource for autonomous language learning in most of the tasks assigned to it. Consider the creation of didactic material, where it also provides immediate feedback to the student, or in-text analyses. The power of its reformulation allows for practice with
authentic texts that develop language skills in comprehension and production. Specifically, notable
is the interaction it fosters in real and authentic conversations, as it not only engages with the user
on any topic that suits their learning but also provides grammatical correction and final feedback.
Additionally, for those students who may struggle to understand a text, either due to the level of
technicalities present or because they have less developed language skills, text analysis, automatic
translation, or text and level adaptation represent valuable strategies.
On the other hand, some limitations are detected that go beyond the program’s characteristics,
namely, space limitations in responses, limitations of the outdated free version 3.5, and robustness
issues, as it does not always respond consistently, as evidenced in previous studies (González Vallejo 2023). First, these limitations relate to the lack of graphical representation, which would
allow for better acquisition of mental maps, as it only highlights important concepts in a text list,
and gamification strategies, since images play an important role in rule comprehension. Likewise,
ChatGPT lacks accessibility features such as audio description or font size adaptation in responses,
which reduces learning diversity. Furthermore, while most functions prove efficient for autonomous
study at the B1 level of Italian, other learning limitations are apparent.
Regarding preliminary support, in class planning and goal setting, the program indicates the need
to study past tenses but does not specify grammatical topics, and it also does not provide the specific B1 vocabulary topics that are referenced in all Italian-as-a-foreign-language textbooks. Interestingly, with access to the internet, it could provide the contents specified for language learning in any textbook or institutional website.
Regarding support during learning, as a text model, it does not have the ability to directly create
graphics or concept maps. This hinders a more dynamic concept hierarchy. However, it can pro-
vide a textual description of the main parts of a text to subsequently create a concept map. On
the other hand, when asked for explanations about pronunciation, it does not differentiate between
homographs such as “balia,” “ancora,” and “alice” in Italian; it simply describes the sounds phonologically, without phonetic symbols.
Lastly, concerning evaluative support, it only offers advice on how to research a topic but does
not provide references or useful information on a specific topic, in this case, false friends between
Italian and Spanish, as an assistant would. Figure 9.1 presents the strategy analysis with ChatGPT.
9.6 CONCLUSIONS
The application of AI in education has represented a paradigm shift by presenting technology not
only as a support during learning but also as an assistant that promotes autonomous learning. The
integration of different techniques and technologies enables problem-solving through its advanced
language model. Therefore, this chapter has outlined the learning strategies that this language model can offer when ChatGPT is used as a teaching assistant, given the lack of studies on personalized strategies derived from the use of this chatbot. The effectiveness of some of its
functions is undeniable, such as the creation of didactic material based on both internet access and
its powerful reformulation capability. This allows for the generation of a wide variety of reading
comprehension exercises, dialogues, and specific textual genres. Likewise, the production and
written interaction in real and authentic contexts are notable. This facilitates interaction with imme-
diate feedback in natural conversations, which can substitute for the role of a teacher, provided that
the grammar and corrections provided are accurate and precise. Therefore, prompts must be formulated carefully to reduce error margins, as in some contexts the program still needs improvement, exhibiting deviations and sometimes requiring the user to request different responses.
REFERENCES
Barrot, Jessie S. 2023. “Using ChatGPT for second language writing: Pitfalls and potentials.” Assessing Writing
57:100745. https://doi.org/10.1016/j.asw.2023.100745
Castillo-González, William, Carlos Oscar Lepez, and Mabel Cecilia Bonardi. 2022. “Chat GPT: a promising
tool for academic editing.” Data and Metadata 1:1–6. https://dm.saludcyt.ar/index.php/dm/article/
view/23
Chassignol, Maud, Aleksandr Khoroshavin, Alexandra Klimova, and Anna Bilyatdinova. 2018. “Artificial
Intelligence trends in education: a narrative overview.” Procedia Computer Science 136:16–24. https://
doi.org/10.1016/j.procs.2018.08.233
Chen, Lijia, Pingpin Chen, and Zhijian Lin. 2020. “Artificial intelligence in education: a review.” IEEE Access
8:75264–75278.
Chicaiza, Rosa M, Luis Alfredo Camacho Castillo, Gargi Ghose, Israel Eduardo Castro Magayanes, and Víctor
Trajano Gallo Fonseca. 2023. “Aplicaciones de Chat GPT como inteligencia artificial para el aprendizaje
de idioma inglés: avances, desafíos y perspectivas futuras.” LATAM Revista Latinoamericana de Ciencias
Sociales y Humanidades 4 (2):2610. https://doi.org/10.56712/latam.v4i2.781
Cuenca, Llorentey. 2021. IA e idiomas: del gap lingüístico a la brecha digital. LLYC. [accessed 2023
Nov 2]. https://ideas.llorenteycuenca.com/wp-content/uploads/sites/5/2021/04/210422_LLYC_Informe
_IAeidiomas.pdf
Dac Lai, Viet, Nghia Trung Ngo, Amir Pouran Ben Veyseh, Hieu Man, Franck Dernoncourt, Trung Bui, and
Thien Huu Nguyen. 2023. “ChatGPT beyond English: towards a comprehensive evaluation of large lan-
guage models in multilingual learning.” ArXiv. https://doi.org/10.48550/arXiv.2304.05613
Dai, Wei, Jionghao Lin, Flora Jin, Tongguang Li, Yi-Shan Tsai, Dragan Gašević, and Guanliang Chen. 2023. “Can Large Language Models Provide Feedback to Students? A Case Study on ChatGPT.” 2023 IEEE International Conference on Advanced Learning Technologies (ICALT): 323–325. https://doi.org/10.1109/ICALT58122.2023.00100
Escobar Hernández, Juan Carlos. 2021. “La Inteligencia Artificial y la Enseñanza de lenguas: una aproximación
al tema.” Decires 21 (25):29–44. https://doi.org/10.22201/cepe.14059134e.2021.21.25.3
European Union. 2023. Economia circolare: definizione, importanza e vantaggi. [accessed 2023 Nov 2]. www.
europarl.europa.eu/news/it/headlines/economy/20151201STO05603/economia-circolare-definizione-
importanza-e-vantaggi
Gao, Yuan, Ruili Wang, and Feng Hou. 2023. “How to design translation prompts for ChatGPT: an empirical
study.” ArXiv. https://doi.org/10.48550/arXiv.2304.02182
García Villarroel, Juan José. 2021. “Implicancia de la inteligencia artificial en las aulas virtuales para la
educación superior.” Orbis Tertius UPAL 10:31–52. www.biblioteca.upal.edu.bo/htdocs/ojs/index.php/
orbis/article/view/98/187
González Vallejo, Rubén. 2023. “Posedición con ChatGPT: funciones y aplicaciones en textos del sector de la
importación (italiano-español).” In Elke Castro León (Ed.), Interrelaciones entre la imagen, el texto y
las tecnologías digitales. Nuevas perspectivas en la enseñanza de las ciencias sociales (pp.781–799).
Dykinson.
Hockly, Nicky. 2023. “Artificial Intelligence in English Language Teaching: The Good, the Bad and the Ugly.”
RELC Journal 54 (2):445–451. https://doi.org/10.1177/00336882231168504
Hub Scuola. n.d. La costituzione italiana. [accessed 2023 Nov 2]. www.hubscuola.it/app_primaria/la-costituzi
one-italiana/articoli/art49.html
Hwang, Gwo-Jen, Haoran Xie, Benjamin W. Wah, and Dragan Gašević. 2020. “Vision, challenges, roles and
research issues of Artificial Intelligence in Education.” Computers and Education: Artificial Intelligence,
1:100001. https://doi.org/10.1016/j.caeai.2020.100001
Kannan, Jaya, and Pilar Munday. 2018. “New trends in second language learning and teaching through the
lens of ICT, networked learning, and artificial intelligence.” Círculo de Lingüística Aplicada a la
Comunicación 76:13–30. https://doi.org/10.5209/CLAC.62495
Kasneci, Enkelejda, Kathrin Sessler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer,
Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, Stephan Krusche, Gitta Kutyniok,
Tilman Michaeli, Claudia Nerdel, Jürgen Pfeffer, Oleksandra Poquet, Michael Sailer, Albrecht Schmidt,
Tina Seidel, Matthias Stadler, Jochen Weller, Jochen Kuhn, and Gjergji Kasneci. 2023. “ChatGPT for good? On opportunities and challenges of large language models for education.” Learning and Individual Differences, 103:102274. https://doi.org/10.35542/osf.io/5er8f
Kohnke, Lucas, Benjamin Luke Moorhouse, Di Zou. 2023. “ChatGPT for Language Teaching and Learning.”
RELC Journal, 54(2):537–550. https://doi.org/10.1177/00336882231162868
Lee, Tong King (2023). “Artificial intelligence and posthumanist translation: ChatGPT versus the translator.”
Applied Linguistics Review 14 (2):369–390. https://doi.org/10.1515/applirev-2023-0122
Limo, Flores, Fernando Antonio, David Raúl Hurtado Tiza, Maribel Mamani Roque, Edward Espinoza Herrera,
José Patricio Muñoz Murillo, Jorge Jinchuña Huallpa, Victor Andre Ariza Flores, Alejandro Guadalupe
Rincón Castillo, Percy Fritz Puga Peña, Christian Paolo Martel Carranza, and José Luis Arias Gonzáles.
2023. “Personalized tutoring: ChatGPT as a virtual tutor for personalized learning experiences.” Social
Space 23 (1):292–312. https://socialspacejournal.eu/menu-script/index.php/ssj/article/view/176/81
Loescher Editore. 2014. Certificazione CELI 4 –Livello C1 –Prova di comprensione della lettura. [accessed
2023 Nov 2]. https://alechiri.files.wordpress.com/2016/01/celi-4.pdf
Luo, Ming, and Lian Cheng. 2020. “Exploration of Interactive Foreign Language Teaching Mode Based on
Artificial Intelligence.” 2020 International Conference on Computer Vision, Image and Deep Learning
(CVIDL): 285–290. https://doi.org/10.1109/CVIDL51233.2020.00-84
Ma, Li. 2021. “An immersive context teaching method for college English based on artificial intelligence and
machine learning in virtual reality technology.” Mobile Information Systems 2637439:1–7. https://doi.
org/10.1155/2021/2637439
Magal-Royo, Teresa, and Jesús García Laborda. 2017. “Una aproximación del efecto en el aprendizaje de una
lengua extranjera debida a la obtención de datos a través de exámenes en línea de idiomas.” RED. Revista
de Educación a Distancia 53:1–14. http://dx.doi.org/10.6018/red/53/6
Mijwil, Maad M, Safaa H. Abdulrhman, Rana A. Abttan, Alaa Khaleel Faieq, and Anmar Alkhazraji. 2022.
“Artificial intelligence applications in English language teaching: a short survey.” Asian Journal of
Applied Sciences 10 (6):469–474. https://doi.org/10.24203/ajas.v10i6.7111
MINECO. ENIA. Estrategia Nacional de Inteligencia Artificial. [accessed 2023 Nov 7] www.lamoncloa.gob.
es/presidente/actividades/Documents/2020/ENIAResumen2B.pdf
Muñoz, Alberto. 2023, Jan 12. Entrenar un chatbot de inteligencia artificial contamina tanto como ir y volver
en coche a la Luna. El Periódico de España. [accessed 2023 Nov 2]. www.epe.es/es/activos/20230112/
entrenar-chatbot-inteligencia-artificial-contamina-81025424
Owoc, Mieczyslaw L., Agnieszka Sawicka, and Pawel Weichbroth. 2021. “Artificial intelligence technologies
in education: benefits, challenges and strategies of implementation.” IFIP Advances in Information and
Communication Technology 599:37–58. https://doi.org/10.1007/978-3-030-85001-2_4
Planas Bou, Carles. 2023 March 31. Italia ha bloqueado el acceso a ChatGPT por no respetar la protección
de datos. El Periódico. [accessed 2023 Nov 7]. www.elperiodico.com/es/sociedad/20230331/italia-bloq
uea-chatgpt-proteccion-datos-inteligencia-artificial-85442258
RAI Cultura. Raffaello. [accessed 2023 Nov 7]. www.raicultura.it/webdoc/raffaello/index.html#introduzione
Regan, Priscilla M., and Jolene Jesse. 2018. “Ethical challenges of edtech, big data and personalized learning: Twenty-first century student sorting and tracking.” Ethics and Information Technology, 21(3):167–179. https://doi.org/10.1007/s10676-018-9492-2
Sanabria-Navarro, José-Ramón, Yahilina Silveira-Pérez, Digna-Dionisia Pérez-Bravo, and Manuel de-Jesús-Cortina-Núñez. 2023. “Incidencias de la inteligencia artificial en la educación contemporánea.”
Comunicar 77:97–107. https://doi.org/10.3916/C77-2023-08
Sánchez, Álvaro. 2023 Apr 16. Deloitte pide a sus empleados en España que no compartan información
confidencial en ChatGPT. [accessed 2023 Nov 7]. https://elpais.com/economia/2023-04-26/deloitte-
pide-a-sus-empleados-en-espana-que-no-compartan-informacion-confidencial-en-chatgpt.html
Schmidt, Torben, and Thomas Strasser. 2022. “Artificial intelligence in foreign language learning and
teaching: a CALL for intelligent practice.” Anglistik: International Journal of English Studies 33
(1):165–184. https://angl.winter-verlag.de/article/angl/2022/1/14
Shaikh, Sarang, Sule Yildirim Yayilgan, Blanka Klimova, and Marcel Pikhart. 2023. “Assessing the usability
of ChatGPT for formal English language learning.” European Journal of Investigation in Health,
Psychology and Education 13 (9):1937–1960. https://doi.org/10.3390/ejihpe13090140
Steiss, Jacob, Tamara Tate, Steve Graham, Jazmin Cruz, Michael Hebert, Jiali Wang, Youngsun Moon, Waverly
Tseng, and Mark Warschauer. 2023. “Comparing the quality of human and ChatGPT feedback on
students’ writing.” EdArXiv: 1–39. https://doi.org/10.35542/osf.io/ty3em
Topsakal, Oguzhan, and Elif Topsakal. 2022. “Framework for a foreign language teaching software for chil-
dren utilizing AR, voicebots and ChatGPT (Large Language Models).” Journal of Cognitive Systems 7
(2):33–38. https://doi.org/10.52876/jcs.1227392
United Nations. 2019. “Artificial Intelligence in Education: Challenges and Opportunities for Sustainable
Development.” Working Papers on Education Policy 7. https://unesdoc.unesco.org/ark:/48223/pf000
0366994
Young, Julio Christian, and Makoto Shishido. 2023. “Investigating OpenAI’s ChatGPT potentials in gener-
ating Chatbot’s dialogue for English as a foreign language learning.” International Journal of Advanced
Computer Science and Applications 14 (6):65–72. https://doi.org/10.14569/IJACSA.2023.0140607
Zawacki-Richter, Olaf, Victoria I. Marín, Melissa Bond, and Franziska Gouverneur. 2019. “Systematic review
of research on artificial intelligence applications in higher education –where are the educators?”
International Journal of Educational Technology in Higher Education 16 (39):1–27. https://doi.org/
10.1186/s41239-019-0171-0
Zhu, D. 2017. Analysis of the application of artificial intelligence in college English teaching. Advances in
Intelligent Systems Research, 134:235–237. https://bit.ly/3S6D18d
10.1 INTRODUCTION
Artificial intelligence (AI) has become a transformative force in today’s world, permeating various
facets of our daily lives. The automation and augmentation of work with AI are transforming industries, changing how humans acquire knowledge and are educated. The recent COVID-19 pandemic and the associated shifts in educational delivery have become the new standard, consequently amplifying the application and relevance of technology-mediated learning (TML) (Ritz et al. 2023). TML platforms have become instrumental in addressing cross-sectional re- and upskilling efforts in organizations. Such platforms are capable of collecting a vast range of data during re- and upskilling courses, which can then be analyzed to provide employees with data-driven feedback and
personalized learning journeys (Gibson 2017). By integrating AI into learning platforms, with its
distinct underlying facets of acting autonomously, learning from data, and being inscrutable to mul-
tiple audiences (Berente et al. 2021), novel possibilities for the design of learning experiences arise.
There is a large variety of use cases for AI in education, including the personal recommendation of
learning paths, nudging for self-regulated learning, tracking and analyzing student performance, and
providing smart and interactive learning content (e.g., Bhutoria 2022; Chen et al. 2020; Nabizadeh
et al. 2020). TML platforms are instrumental in realizing these use cases, as they can analyze the data they collect to provide personalized learning journeys. Thus, AI and machine learning
have ushered in a profound transformation, offering innovative approaches to learning and know-
ledge dissemination (Grassini 2023). Since the launch of ChatGPT, the application of generative AI (GenAI) in education has risen sharply. GenAI can be defined as technology that (1) uses deep learning models
to (2) generate human-like content (e.g., images, words) in response to (3) complex and variable
prompts (i.e., instructions, questions) (Lim et al. 2023). Generative pre-trained transformers (GPTs),
like OpenAI’s multimodal ChatGPT, represent a significant advancement in adapting learning
experiences. ChatGPT, for instance, can give formative feedback to learners based on individual
preferences (Baidoo-Anu and Owusu Ansah 2023) and adapt the assessment’s communication to the
user’s cognitive style. This means, for instance, that by discerning individual cognitive preferences,
such as visual, auditory, or kinesthetic learning modalities, ChatGPT can customize its communi-
cation approach to resonate with the user’s preferred style, thereby enhancing the effectiveness of the learning process.
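To illustrate what such adaptation could look like in practice, the short sketch below constructs a chat request whose system message states the learner’s preferred modality. The message format follows the widely used role/content chat structure; the prompt wording, the build_feedback_request helper, and the modality parameter are hypothetical examples rather than documented ChatGPT features.

# Hypothetical illustration: asking a chat model to phrase formative feedback
# in a learner's preferred modality (visual, auditory, or kinesthetic).
# The helper name, prompt wording, and parameters are assumptions, not an
# official ChatGPT feature.
def build_feedback_request(modality: str, student_answer: str) -> list:
    """Return role/content chat messages requesting style-adapted feedback."""
    system = (
        f"You are a tutor. The learner prefers a {modality} style: use spatial "
        "metaphors and described diagrams for 'visual', spoken-style "
        "explanations for 'auditory', and hands-on activities for "
        "'kinesthetic'. Give short formative feedback, not a grade."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Here is my answer: {student_answer}"},
    ]

# Example usage: messages that could be passed to a chat-completion endpoint.
messages = build_feedback_request("visual", "Photosynthesis turns light into glucose.")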
GenAI applications are expected to offer a potential solution to the organizational and financial constraints of traditional classroom approaches, making it easier to meet the diverse needs of individual learners. GenAI paves the way to leaving behind the one-size-fits-all
mentality and supporting students in learning and curriculum planning. This fusion of generative AI
and personalized learning promises a paradigm shift in education and offers a way to overcome the
limitations of conventional methods.
This book chapter focuses on the concept of personalized curriculum design, a dynamic approach
that aims to tailor education to individual learners and, thus, increase their employability after
graduating. While the role of curriculum planning was previously the responsibility of the students
and the managers of higher education programs, GenAI can now support students in finding suit-
able courses that match future career expectations. As postulated by constructivist learning theories,
effective learning outcomes require personalized instruction that is tailored to individual experiences
(Vygotsky 1980). This prompts a critical examination of current approaches to personalized learning
pathways (Raj and Renumol 2022). As technology continues to reshape the educational landscape,
personalized curriculum planning becomes central to realizing the full potential of AI to promote
effective and meaningful learning experiences (Escotet 2023).
With these considerations in mind, we aim to explore the role of GenAI, particularly ChatGPT,
in the development of learning curricula. The overarching research question is: How can GenAI,
focusing on personalized curriculum design, provide curriculum advice for students and address
the shortcomings of existing approaches? To address this complex question, the chapter reviews the current state of AI in education, the role of GenAI in personalized curricula, and a
detailed examination of personalized curriculum planning underpinned with concrete examples.
By navigating through this chapter, readers will embark on a journey to unravel the intricate rela-
tionship between AI and education and the transformative potential of GenAI in revolutionizing our
approach to personalized curriculum planning in the 21st century. This chapter starts by presenting a
theoretical introduction and a review of AI systems in education. Next, we present an outline of the
potential of GenAI tools for personalized curriculum planning, followed by research-based insights
on how to design these approaches. We describe examples of its use for personalized curriculum
planning. We conclude this chapter with an outlook on challenges and future research. We aim to
provide educators and researchers with a comprehensive understanding of AI’s impact on education,
with a specific focus on leveraging models such as ChatGPT for personalized curriculum planning.
By offering guidance and insights, this chapter serves as a valuable resource for developing new
educational models, fostering innovation, and contributing to the ongoing discourse on the intersec-
tion of AI in education.
10.2 THEORETICAL BACKGROUND
10.2.1 Generative AI in Education
The following sections of this chapter explain GenAI models like ChatGPT and their potential to
improve teaching and student learning in higher education. The discussion also includes an explor-
ation of the limitations and suggestions for educators to use ChatGPT to support and enhance stu-
dent learning via personalized learning paths.
Research on the integration of AI in educational systems can be traced back to a study by
Beck et al. (1996), which was among the first to apply AI in the educational domain. Nowadays,
GenAI is a form of unsupervised or partially supervised machine learning in which artificial artifacts
are generated using statistical and probabilistic methods (Jovanovic and Campbell 2022). By leveraging advances in deep learning, generative AI can generate artificial content such as videos, images or graphics, text, and audio by analyzing and learning patterns and distributions from existing digital content in training examples (Abukmeil et al. 2022; Gui et al. 2023; Jovanovic and Campbell 2022). Two prominent categories within generative AI are highlighted in the existing literature: generative adversarial networks (GANs) and GPTs (Abukmeil et al. 2022; Brown et al. 2020). GPT models use extensive publicly available data on digital content, especially in the field of natural language processing
(NLP), to understand and generate texts in multiple languages. These models, such as GPT-3.5,
can demonstrate their creativity by convincingly producing texts ranging from paragraphs to full
research articles on various topics (OpenAI 2022). GPT-3.5 is ten times larger than all previous non-sparse language models, making it a fundamental NLP engine. It serves as the underlying tech-
nology for ChatGPT, a large language model that has attracted attention in various fields such as
education, engineering, healthcare, and business (e.g., Banitaan et al. 2023; Grassini 2023; Kolluri
et al. 2022; Qadir 2023). On March 14, 2023, an advanced model, GPT-4, was released, showing
a significant increase in computational capacity (Hassani and Silva 2023). OpenAI highlighted its
language capabilities by claiming that GPT-4 can achieve scores in the 90th percentile on the US bar
exam for lawyers (Katz et al. 2023).
González-Calatayud et al. (2021) emphasize that the potential of AI in education is vast, particularly in areas such as tutoring, assessment, and personalized learning. The implications of these AI models, particularly the potential applications of ChatGPT in education, have elicited mixed feelings among educators (Lo 2023). This AI breakthrough appears to be changing current educational norms and has sparked debate. Some educators view ChatGPT and similar AI as a progressive step toward the future of education and research. Others, however, harbor doubts and see it as a potential threat that could lead to a decline in educational activities and promote laziness among teachers and students, as analytical skills may deteriorate (Skavronskaya et al. 2023). Recently, academic authors have attempted to assess the opportunities and problems associated with the use of AI
technologies in education (Baidoo-Anu and Owusu Ansah 2023; Lo 2023).
TABLE 10.1
ChatGPT as a personalization approach, categorized based on Ritz et al. (2024)
Dimensions: Characteristics
Data (input data): CV/job occupations data; skills data; assessment data; learner interests; behavior data
Method: […]
The use of ChatGPT for personalized curriculum planning can be classified as illustrated in Table 10.1. On the level of
learning context, it serves the purpose of knowledge transfer because ChatGPT guides the learner in
applying his or her interests, goals, skills, or other objectives to the planning. It is a learner-managed
system, as ChatGPT only reacts to the learners’ provided prompts and information. The knowledge
outcome is factual, and the learning objects are text-based, because ChatGPT answers in texts and
generates or reproduces factual knowledge. ChatGPT serves a broad context on the data input level
and can incorporate curriculum vitae (CV) data, skill data, and interest data inserted by the learner in the interface. However, these inputs must be provided in text form. The interface is mixed because it is available both on the web and in the app. Further, it enables a high self-regulatory influence for the learner, as the parameters for curriculum planning can be changed by the learner at any time. In terms of the level of adaptation, the method consists of a large language model trained by OpenAI. ChatGPT focuses on course generation, and the personalized learning path is recommended with courses as the outcome variable.
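Purely as a reading aid, the classification just discussed can be summarized in a small configuration structure. The keys and values below paraphrase the dimensions described in the text; they are an informal sketch, not an official schema from Ritz et al. (2024).

# Informal summary of the classification above (not an official schema from
# Ritz et al. 2024): ChatGPT as a personalization approach for curriculum planning.
chatgpt_personalization = {
    "learning_context": {
        "purpose": "knowledge transfer",
        "management": "learner-managed",
        "knowledge_outcome": "factual",
        "learning_objects": "text-based",
    },
    "data_input": ["CV/job occupation data", "skill data", "interest data"],  # text form only
    "interface": {"channels": ["web", "app"], "self_regulation": "high"},
    "adaptation": {
        "method": "large language model (trained by OpenAI)",
        "outcome": "recommended courses / personalized learning path",
    },
}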
One case where personalized curriculum planning can hold potential is for learners with
low self-regulatory skills. For these students, structuring their learning content and their studies
is a challenge compared to individuals with robust self-regulatory skills, who can manage their
learning process by setting personal goals and structuring their learning activities (Zimmerman
2002). Accordingly, it is crucial to offer support before the learning process begins in the form of
personalized recommendations of university courses that are tailored to the individual’s educational
background, skills, and personal interests. This proactive approach aims to prevent students from
being overwhelmed when choosing a course (Pontes et al. 2021). This process can have several positive outcomes for students. ChatGPT can offer comprehensive, sustainable, inclusive, and adaptable learning pathways tailored to the specific needs of each learner, thus promoting student empowerment (CEDEFOP 2021). Personalized curriculum planning allows learners
to exercise control over the content, timing, and thematic aspects of their learning experiences,
which helps promote self-determination. Alamri et al. (2020) found that tailoring courses to learners’
needs and interests can significantly promote learner autonomy. Therefore, students no longer need to rely solely on course recommendations from peers, lecturers, or program managers; they can also draw on automatically generated recommendations for more suitable courses.
• Student: Alex is a high school graduate with a deep interest in environmental science and a
clear goal to pursue a career in sustainable energy research.
• Academic Discipline: Environmental science with a specialization in sustainable energy.
• Interdisciplinary Electives: Courses from related disciplines, such as economics and policy, to
foster a holistic understanding of sustainable energy solutions.
• Internship Opportunities: Recommendations for internships in research institutions or com-
panies specializing in sustainable energy, providing real-world exposure.
Implementation Steps:
GenAI Interaction: In this step, the goal is to engage Alex in a conversation with the GenAI tool
to understand his academic background, preferences, and career aspirations.
Example: “Hi Alex! Can you tell me more about your favorite courses in environmental science
so far and what aspects of sustainable energy research intrigue you the most?”
Algorithmic Analysis: The previously gathered information is now processed by the GenAI tool, which applies advanced algorithms to the conversation data, extracting key insights regarding
Alex’s strengths, weaknesses, and specific areas of interest.
Example: “Analyzing responses to identify keywords and sentiments related to advanced renew-
able energy technologies, data analysis, and laboratory experience.”
Curriculum Generation: Here, the NLP capabilities and algorithms generate a personalized cur-
riculum plan that aligns with Alex’s interests and addresses identified skill gaps.
Example: “Based on our discussion, it seems you have a strong interest in solar energy. I recom-
mend a course on ‘Advanced Solar Technologies’ to deepen your knowledge in this area and
address the identified gap.”
Feedback Loop: In this step, we propose to establish a feedback loop in the conversation with Alex
about the proposed curriculum, allowing him to provide additional input, express preferences,
adjust the questions, and ask for more information on recommended courses.
Example: “Alex, what are your thoughts on the suggested courses? Do you feel they align with
your goals? Any specific areas you’d like to explore further?”
Real-Time Adjustments: Adjustments can be made continuously within the process. GenAI tools are dynamic systems that allow for real-time adjustments to the curriculum based on Alex’s feedback, changing career goals, or evolving interests. Feedback loops and consideration of Alex’s input during the conversation with the tool are always possible; a minimal code sketch of this conversational loop follows the steps.
Example: “I noticed you expressed interest in data analytics. How about incorporating a ‘Data
Analysis for Environmental Scientists’ elective? We can adjust the plan accordingly.”
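To make the five steps concrete, the following minimal sketch wires them into a single conversational loop. It assumes the OpenAI Python client (openai >= 1.0), an API key in the environment, and the illustrative model name "gpt-4o"; the system prompt and loop structure are simplifications for illustration, not an implementation described in this chapter.

# Minimal sketch of the five-step loop above: interaction, analysis, curriculum
# generation, feedback, and real-time adjustment. Assumptions (not from the
# chapter): OpenAI Python client (openai >= 1.0), OPENAI_API_KEY set in the
# environment, and the illustrative model name "gpt-4o".
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a curriculum-planning assistant for a university student. Ask about "
    "academic background, interests, and career goals, identify skill gaps, and "
    "recommend a personalized set of courses. Revise the plan whenever the "
    "student gives feedback."
)

def run_planning_session() -> None:
    """Interactive loop covering steps 1-5 of the process described above."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    # Step 1 (GenAI interaction): open the conversation with the learner's goals.
    messages.append({
        "role": "user",
        "content": "I am interested in environmental science and sustainable "
                   "energy research. Please help me plan my courses.",
    })
    while True:
        # Steps 2-3 (analysis and curriculum generation): the model analyzes the
        # dialogue so far and proposes or revises a curriculum plan.
        response = client.chat.completions.create(model="gpt-4o", messages=messages)
        plan = response.choices[0].message.content
        print(plan)
        messages.append({"role": "assistant", "content": plan})
        # Steps 4-5 (feedback loop and real-time adjustments): the learner's reply
        # is appended, and the plan is regenerated on the next iteration.
        feedback = input("Your feedback (or 'done' to accept the plan): ").strip()
        if feedback.lower() == "done":
            break
        messages.append({"role": "user", "content": feedback})

if __name__ == "__main__":
    run_planning_session()

In a real deployment, the analysis and generation steps could be split into separate prompts and grounded in the institution’s course catalog, so that recommendations are limited to courses that actually exist.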
This exemplifies the application of GenAI for personalized curriculum planning, showcasing
how conversational AI can play a pivotal role in tailoring educational pathways to meet individual
aspirations and career goals. Figure 10.2 summarizes the process described in this chapter and
outlines the prerequisites, the processes, and the benefits.
GenAI in Action:
In this scenario, a GPT model engages in conversation with Alex to identify his interest in sustain-
able energy research. Utilizing NLP, the model recommends tailored courses covering topics such
as advanced solar technologies. It adapts the curriculum in real time based on Alex’s feedback and
progress, suggesting modules with laboratory experience. By dynamically adjusting the learning pathway, the model ensures maximum engagement and relevance, leveraging adaptive techniques and personalized assessments. Figure 10.3 outlines the conversation in detail.
FIGURE 10.2 Summary of a personalized curriculum planning process with generative AI support. (Based on own illustration.)
FIGURE 10.3 Example of a course recommendation process with a generative AI tool. (Own illustration based on an interaction with ChatGPT 4.0 by OpenAI 2024.)
Institutions must ensure that learner data is handled responsibly and transparently (Kerr 2020). Moreover, the system’s algorithms must be
designed and continuously monitored to avoid biases and ensure fairness in recommendations. The
ethical underpinning of personalized curriculum planning necessitates a commitment to equity,
transparency, and accountability to foster a learning environment that respects the rights and dig-
nity of all learners (Reiss 2021). The most relevant ethical aspects are summarized in a framework
by Regan and Jesse (2019) and consist of (1) information privacy, (2) anonymity, (3) surveillance,
(4) autonomy, (5) non-discrimination, and (6) ownership of information.
In summary, the successful implementation and future development of personalized curriculum
planning systems demand a holistic approach. Institutional readiness, encompassing technological
infrastructure, and training, forms the foundation. Continuous improvement strategies, incorpor-
ating feedback loops and adaptability to job market trends, ensure the relevance and effectiveness
of the educational system. Ethical considerations, focusing on privacy and fairness, underscore
the importance of responsible technological integration. By addressing these pillars, institutions
can pave the way for a transformative educational landscape that leverages technology to enhance
learning experiences while upholding ethical standards and ensuring equity.
With the continuous development of GenAI models like ChatGPT, new opportunities and challenges
arise for education. While adopting and adapting to these technological advancements, it remains
necessary to remember that such models are not perfect.
10.6 CONCLUSION
This book chapter aimed to explore the opportunities for GenAI in personalized curriculum planning
for higher education institutions. We first elaborated on existing work on GenAI and personalized
learning and then described an in-depth use case for implementing GenAI for curriculum planning.
We discussed the data inputs and prompts needed and the functions of GenAI in detecting skill gaps
and recommending suitable courses, and then explored the expected outcomes for students and
institutions. Further, we provided hands-on practical guidance and implementation advice for
institutions. Our chapter helps students apply the advantages of GenAI for personalized curriculum
planning and, in turn, increases their empowerment and career readiness. For institutions, we offer
advice on implementing such a tool in program management to reduce administrative workload
and decrease dropout rates. We further outline several critical areas for future research, such as the
accuracy of recommendations and the dependence on individual learning styles.
However, the content presented in this chapter is not without limitations. First, we build upon a
limited scope of research; as the phenomenon of GenAI is rather new, the scientific community is
only beginning to explore the different applications and possibilities of this technology. Later work
will bring more insights into the benefits for personalized curriculum planning. Further, we built our
case upon models like ChatGPT 4.0, which, on the one hand, is not the newest release from OpenAI
and, on the other hand, leaves other providers such as Google’s Gemini unexamined. Next, the case
of the student named Alex presented here needs to be understood as an exemplary case that was not
built upon a real person; therefore, the generalizability of this example may be limited.
Last, we refer to previous research and cannot exclude possible biases in its research procedures.
In conclusion, this chapter aims to shed light on helping students to unlock their dream jobs
through GenAI-based personalized curriculum planning. Moreover, the exploration of GenAI’s
potential for personalized learning paths underscores its transformative capacity in education.
While GenAI presents promising opportunities for educational advancement, continued research
and mindful adaptation are imperative to address challenges and ensure its effective and ethical inte-
gration into learning environments.
ACKNOWLEDGMENTS
This chapter is based on the publications of Ritz et al. (2024) and Freise and Bretschneider (2023).
We thank Edona Elshan, Roman Rietsche, and Ulrich Bretschneider for their work in previous
publications.
REFERENCES
Abukmeil M, Ferrari S, Genovese A, Piuri V, Scotti F (2022) A survey of unsupervised generative models for
exploratory data analysis and representation learning. ACM Computing Surveys 54:1–40. https://doi.org/
10.1145/3450963
Alamri H, Lowell V, Watson W, Watson SL (2020) Using personalized learning as an instructional approach to
motivate learners in online higher education: Learner self-determination and intrinsic motivation. Journal
of Research on Technology in Education 52:322–352. https://doi.org/10.1080/15391523.2020.1728449
Amor AM, Vázquez A, Pablo J, Andrés FJ (2020) Transformational leadership and work engagement: Exploring
the mediating role of structural empowerment. European Management Journal 38:169–178. https://doi.
org/10.1016/j.emj.2019.06.007
Baidoo-Anu D, Owusu AL (2023) Education in the era of generative artificial intelligence (AI): Understanding
the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI 7(1):52–62.
Banitaan S, Al-refai G, Almatarneh S, Alquran H (2023) A review on artificial intelligence in the context of
Industry 4.0. International Journal of Advanced Computer Science and Applications 14.
Beck J, Stern M, Haugsjaa E (1996) Applications of AI in education. XRDS: Crossroads, The ACM Magazine for
Students 3(1):11–15. https://doi.org/10.1145/332148.332153
Berente N, Gu B, Recker J, Santhanam R (2021) Managing artificial intelligence. Management Information
Systems Quarterly 45
Bhutoria A (2022) Personalized education and Artificial Intelligence in the United States, China, and India: A
systematic review using a Human-In-The-Loop model. Computers and Education: Artificial Intelligence
3:100068. https://doi.org/10.1016/j.caeai.2022.100068
Borenstein J, Howard A (2021) Emerging challenges in AI and the need for AI ethics education. AI Ethics
1:61–65. https://doi.org/10.1007/s43681-020-00002-7
Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell
A (2020) Language models are few-shot learners. Advances in Neural Information Processing Systems
33:1877–1901.
Cambridge Dictionary (2023) Curriculum. https://dictionary.cambridge.org/de/worterbuch/englisch/curricu
lum. Accessed 30 November 2023
Cappelli PH (2015) Skill gaps, skill shortages, and skill mismatches: Evidence and arguments for the United
States. ILR Review 68:251–290
CEDEFOP (2021) Microcredentials for labour market education and training. www.cedefop.europa.eu/en/
projects/microcredentials-labour-market-education-and-training. Accessed 4 May 2023
Chen L, Chen P, Lin Z (2020) Artificial intelligence in education: A review. IEEE Access 8:75264–75278.
https://doi.org/10.1109/access.2020.2988510
Deci EL, Connell JP, Ryan RM (1989) Self-determination in a work organization. Journal of Applied
Psychology 74(4):580.
Duckworth AL, Taxer JL, Eskreis-Winkler L, Galla BM, Gross JJ (2019) Self-control and academic achievement.
Annual Review of Psychology 70:373–399. https://doi.org/10.1146/annurev-psych-010418-103230
Escotet MÁ (2023) The optimistic future of Artificial Intelligence in higher education. Prospects. https://doi.
org/10.1007/s11125-023-09642-z
Freise LR, Bretschneider U (2023) Automized assessment for professional skills – A systematic literature
review and future research avenues. In: 18. Internationale Tagung Wirtschaftsinformatik, Paderborn,
Germany.
Freise LR, Hupe A (2023) Transferring digital twin technology on employee skills: A framework to support
human resources. In: 83rd Academy of Management Annual Meeting.
Giang NTH, Hai PTT, Tu NTT, Tan PX (2021) Exploring the readiness for digital transformation in a higher
education institution towards industrial revolution 4.0. International Journal of Engineering Pedagogy
11:4–24.
Gibson D (2017) Big data in higher education: Research methods and analytics supporting the learning journey.
Tech Know Learn 22:237–241. https://doi.org/10.1007/s10758-017-9331-2
González-Calatayud V, Prendes-Espinosa P, Roig-Vila R (2021) Artificial intelligence for student assessment: A
systematic review. Applied Sciences 11:5467.
Grassini S (2023) Shaping the future of education: Exploring the potential and consequences of AI and
ChatGPT in educational settings. Education Sciences 13:692. https://doi.org/10.3390/educsci13070692
Gui J, Sun Z, Wen Y, Tao D, Ye J (2023) A review on generative adversarial networks: Algorithms, theory, and
applications. IEEE Transactions on Knowledge and Data Engineering 35:3313–3332. https://doi.org/
10.1109/tkde.2021.3130191
Hassani H, Silva ES (2023) The role of ChatGPT in data science: How AI-assisted conversational interfaces are
revolutionizing the field. Big Data and Cognitive Computing 7:62. https://doi.org/10.3390/bdcc7020062
Islam I, Islam MN (2023) Opportunities and challenges of ChatGPT in academia: A conceptual analysis.
Authorea Preprints.
Jovanovic M, Campbell M (2022) Generative artificial intelligence: Trends and prospects. Computer 55:107–
112. https://doi.org/10.1109/MC.2022.3192720
Katz DM, Bommarito MJ, Gao S, Arredondo P (2023) GPT-4 Passes the Bar Exam.
Kerr K (2020) Chapter 1: Ethical considerations when using artificial intelligence-based assistive technolo-
gies in education. In: Ethical Use of Technology in Digital Learning Environments: Graduate Student
Perspectives. University of Calgary, Openeducation, Alberta, Canada.
Khan MA, Law LS (2015) An integrative approach to curriculum development in higher education in the
USA: A theoretical framework. IES 8. https://doi.org/10.5539/ies.v8n3p66
Kim K, Lee S (2020) Psychological empowerment. Oxford Bibliographies Online Datasets. Oxford University
Press, Oxford, United Kingdom.
Kolluri S, Lin J, Liu R, Zhang Y, Zhang W (2022) Machine learning and artificial intelligence in pharmaceutical
research and development: A review. AAPS Journal 24:1–10.
Lim WM, Gunasekara A, Pallant JL, Pallant JI, Pechenkina E (2023) Generative AI and the future of educa-
tion: Ragnarök or reformation? A paradoxical perspective from management educators. International
Journal of Management Education 21:100790. https://doi.org/10.1016/j.ijme.2023.100790
Lo CK (2023) What is the impact of ChatGPT on education? A rapid review of the literature. Education
Sciences 13:410. https://doi.org/10.3390/educsci13040410
Luckin R, Cukurova M, Kent C, Du Boulay B (2022) Empowering educators to be AI-ready. Computers and
Education: Artificial Intelligence 3:100076. https://doi.org/10.1016/j.caeai.2022.100076
Maynard MT, Gilson LL, Mathieu JE (2012) Empowerment—Fad or Fab? A multilevel review of the past
two decades of research. Journal of Management 38:1231–1281. https://doi.org/10.1177/014920631
2438773
Meng L, Zhang W, Chu Y, Zhang M (2021) LD–LP generation of personalized learning path based on
learning diagnosis. IEEE Transactions on Learning Technologies 14:122–128. https://doi.org/10.1109/
TLT.2021.3058525
Morandini S, Fraboni F, Angelis M de, Puzzo G, Giusino D, Pietrantoni L (2023) The impact of artificial
intelligence on workers’ skills: Upskilling and reskilling in organisations. Informing Science Journal
26:39–68. https://doi.org/10.28945/5078
Nabizadeh AH, Leal JP, Rafsanjani HN, Shah RR (2020) Learning path personalization and recommendation
methods: A survey of the state-of-the-art. Expert Systems with Applications 159:113596. https://doi.org/
10.1016/j.eswa.2020.113596
OpenAI (2022) Introducing ChatGPT. https://openai.com/blog/chatgpt. Accessed 29 November 2023
OpenAI (2024) ChatGPT: Get instant answers, find inspiration, learn something new. https://chat.openai.com/
. Accessed 19 February 2024
Pontes J, Geraldes CAS, Fernandes FP, Sakurada L, Rasmussen AL, Christiansen L, Hafner-Zimmermann
S, Delaney K, Leitao P (2021) Relationship between trends, job profiles, skills and training programs
in the factory of the future. In: 2021 22nd IEEE International Conference on Industrial Technology
(ICIT). IEEE.
Qadir J (2023) Engineering education in the era of ChatGPT: Promise and pitfalls of generative AI for educa-
tion. In: 2023 IEEE Global Engineering Education Conference (EDUCON). IEEE, pp 1–9.
Raj NS, Renumol VG (2022) A systematic literature review on adaptive content recommenders in personalized
learning environments from 2015 to 2020. Journal of Computers in Education 9:113–148. https://doi.
org/10.1007/s40692-021-00199-4
Regan PM, Jesse J (2019) Ethical challenges of edtech, big data and personalized learning: twenty-first century
student sorting and tracking. Ethics and Information Technology 21:167–179. https://doi.org/10.1007/
s10676-018-9492-2
Reiss MJ (2021) The use of AI in education: Practicalities and ethical considerations. London Review of
Education 19. https://doi.org/10.14324/LRE.19.1.05
Ritz E, Freise L, Elshan E, Rietsche R, Bretschneider U (2024) What to Learn Next? Designing Personalized
Learning Paths for Re-&Upskilling in Organizations. Hawaii International Conference on System
Sciences (HICSS), Waikiki, Hawaii.
Ritz E, Rietsche R, Leimeister JM (2023) How to support students’ self-regulated learning in times of crisis: An
embedded technology-based intervention in blended learning pedagogies. AMLE 22:357–382. https://
doi.org/10.5465/amle.2022.0188
Skavronskaya L, Hadinejad A, Cotterell D (2023) Reversing the threat of artificial intelligence to opportunity: a
discussion of ChatGPT in tourism education. Journal of Teaching in Travel & Tourism 23:253–258.
https://doi.org/10.1080/15313220.2023.2196658
United Nations (2023) Goal 4 | Department of Economic and Social Affairs. https://sdgs.un.org/goals/goal4.
Accessed 1 December 2023
Urdaneta-Ponte MC, Méndez-Zorrilla A, Oleagordia-Ruiz I (2021) Lifelong learning courses recommendation
system to improve professional skills using ontology and machine learning. Applied Sciences 11:3839.
https://doi.org/10.3390/app11093839
Vanitha V, Krishnan P, Elakkiya R (2019) Collaborative optimization algorithm for learning path construc-
tion in E-learning. Computers & Electrical Engineering 77:325–338. https://doi.org/10.1016/j.compelec
eng.2019.06.016
Vygotsky LS (1980) Mind in Society: The Development of Higher Psychological Processes. Harvard University
Press, Boston, United States of America.
World Economic Forum (ed.) (2023) Future of Jobs: Insight Report.
Zimmerman BJ (2002) Becoming a self-regulated learner: An overview. Theory Into Practice 41(2):64–70.
11.1 INTRODUCTION
11.1.1 Background
In recent years, broad systematic reviews have shown that AI can improve Higher Education
administration, instruction, and learning through increased efficiency and quality (Chen et al.,
2020). Students and educators have benefitted from AI through prediction, personalisation, and
customisation, improving the student experience (Chen et al., 2020). The introduction of OpenAI’s
ChatGPT and similar tools has rapidly challenged Higher Education to address the ethical, meth-
odological, and pedagogical difficulties in harnessing the full potential of Generative AI (GAI) for
educational outcomes (Bond et al., 2023; Cardona et al., 2023). Over the last year, ChatGPT has
impressed itself upon the public psyche, and there is no shortage of accounts of what ChatGPT
can do now (Gorichanaz, 2023) and a growing catalogue of failures (Borji, 2023). Rather than
focus on what ChatGPT can or can’t do, the authors prefer to focus on how to use ChatGPT by
adopting it as an assistant in the design studio. Early in 2023, we committed to experimenting with
ChatGPT in delivering units in the Bachelor of Design at an Australian University across Industrial
Design, Fashion Communication, and Visual Communication. This chapter presents three reflective
accounts of adapting and strengthening our teaching practice by incorporating ChatGPT into our
assessment.
In the Australian HE context, educators and policymakers look towards the guiding principles
for assessment set out by the Tertiary Education Quality and Standards Agency (TEQSA) as they
address assessment reform in the age of AI (Lodge et al., 2023). Good assessment design that
encourages student learning, establishes partnerships between students and teachers, and promotes
participation in feedback is seen as essential (Boud & Associates, 2010) and becomes even more
critical as we face the challenges and opportunities that arise with GAI (Lodge et al., 2023). Over
the last year, our university has continued adapting and adjusting its academic integrity policy and
advising academics and students on using GAI tools. Adoption and experimentation with GAI tools
can only flourish in institutional environments with strong policy support.
The role of educators is identified as crucial in successfully integrating ChatGPT into educational
practices (Dempere et al., 2023), where it has found many potential use cases, ranging from essay
writing to problem-solving and writing code (Rahman and Watanobe, 2023). In Design education,
however, the focus is often on process, rapid iteration of concepts, and critical reflection on design
practice through a conversation with materials (Schön, 1992). Conceived in this way, ChatGPT serves
as a platform for expressing and contemplating partially developed ideas, in much the same way as
designers might use sketching as a tool to help them think (Cross, 2006). As such, it can be seen as
an enabler for the evolution of design ideas with particular value in the divergent phase of the design
process (Lanzi and Loiacono, 2023). The medium of ChatGPT affords designers an opportunity
for reflective conversation (Schön, 1992) with the materials of the LLM, allowing for a dynamic
interplay between internal and external representations of ideas. The exchange with ChatGPT helps
designers keep various options open in a problem-solving dialogue. When considered in this light,
ChatGPT is not a provider of definitive answers but a cognitive extension that allows the external
representations of multiple design concepts in a new problem space. In expanding the frame of
design possibilities, where others might look for answers, we can establish and strengthen the cog-
nitive domain of designers (Dorst, 2015).
The use of ChatGPT as a tool for design educators to teach design is the subject of careful con-
sideration and analysis in this chapter. We emphasise a critical dialogue with ChatGPT beyond
generating text through prompt engineering to encourage students to reflect on their work. Using
these tools in educational settings can facilitate conversations with students on their ethical
implications and foster deeper critical awareness of their responsibility in their use (Hutson and
Cotroneo, 2023). They can act as a catalyst for creativity and learning through a focus on the creative
process and the ability to rapidly iterate on a concept (Hutson and Cotroneo, 2023). Almost
anyone can use ChatGPT to provide a plausible answer to a text prompt; in design, however, the
emphasis is often the gradual iterative process of defining the problem. When students are given
explicit permission to use AI, taught how to use prompt engineering, iterate on their creative
concepts, and reflect on their creative process, they develop an increased awareness of the poten-
tial benefits and limitations of AI in generating design outcomes (Hutson and Cotroneo, 2023).
Understanding what goes into that dialogue between the machine and the designer can tell us
more about the design process.
The uncertainty of design is both the frustration and the joy that designers get from their
activity; they have learned to live with the fact that design is ambiguous. Designers will gen-
erate early tentative solutions, but also leave many options open for as long as possible; they
are prepared to regard solution concepts as necessary, but imprecise and often inconclusive.
Cross, 2006
11.1.2 Methodology
This chapter addresses the challenges of student-focused design education using ChatGPT through
the practice-based, qualitative reflective account in the scholarship of learning and teaching tradition
(Boyer, 1990; Schön, 2008). These reflective accounts aim to advance knowledge creation, create
new understandings, and improve the discourse around ChatGPT in Higher Education in a nuanced
way that enhances the quality of teaching and learning in higher education (Mårtensson et al., 2011).
11.1.3 Chapter organisation
This chapter presents three reflective accounts of practice: persona enhancement using ChatGPT in
the industrial design studio, using ChatGPT to write a 900-word essay in a visual communications
unit, and using ChatGPT to write a 1500-word report on a company’s sustainability practices in
a fashion communication unit. In the following discussion, the authors synthesise their findings,
address benefits, challenges, and concerns, and present recommendations for incorporation in
Design education. They offer encouragement to Design educators and suggestions for further
research focusing on the impacts and implications of ChatGPT for Higher Education.
11.2.2 Personas in design
Personas provide many well-documented benefits within the design process, including enhanced
communication between design teams and stakeholders, improved user-centric design outputs, and
higher levels of designer empathy for the target user group (Cooper, 1999; Cooper and Reimann,
2003; Goltz, 2014; Grudin and Pruitt, 2002; Hanna, 2005; Hourihan, 2002; Miaskiewicz and Kozar,
2011). The personas are a focal point for designers to maintain a user-centric approach, ensuring that
the target audience’s needs, objectives, and preferences are paramount in the design process (Cooper,
1999). By creating tangible representations of the intended users, personas enable designers to
cultivate empathy, promoting a deeper understanding of user perspectives (Nielsen, 2002). These
personas are instrumental in guiding decision-making during the design process, where the designer
may select forms, colours, materials, and finishes that align with the target user group’s persona.
Furthermore, they provide a reference point for designers when making choices or resolving design
conflicts, facilitating informed decisions that prioritise the satisfaction of user needs (Goodwin,
2011). Persona-driven consistency in design decisions ensures a cohesive user experience across
different aspects of a product or service (Cooper, 1999).
In group design projects, personas significantly enhance communication among design teams
and stakeholders, such as clients or design tutors in the education context. They establish a shared
language and understanding of user characteristics, reducing the potential for misinterpretation and
fostering alignment on design objectives and priorities (Cooper, 1999; Nielsen, 2002). Personas
contribute to effective problem-solving by shedding light on specific user groups’ pain points and
challenges. Using user-driven tools like journey mapping and storyboarding, a designer can high-
light these challenges. Valuable insights from mapping user processes and task examination help
designers create innovative solutions to address user-specific issues and improve the overall design
(Goodwin, 2011).
The versatility of personas extends to customisation, allowing designers to craft experiences
that cater to the unique requirements and preferences of different user segments. This personalised
approach results in more engaging and satisfying user experiences, ultimately enhancing user satis-
faction and loyalty (Nielsen, 2002). Furthermore, personas provide a reasonably robust foundation
for evaluating the success of a design for the target user group. Designers can gauge outcomes against
the goals and expectations established for each persona, allowing for the assessment of the design’s
effectiveness in meeting user needs and delivering a satisfying user experience (Nielsen, 2002).
However, personas have several issues, and these issues tend to be commonly experienced by
design students. Firstly, personas based on only quantitative data can present as caricatures of the
user groups and provide lower use-value compared to personas utilising qualitative research data,
which creates a richer, more meaningful persona (Goodwin, 2011; Mulder and Yaar, 2006; Pruitt and
Adlin, 2010). In student work, this tends to present in the form of generalised user group attributes
based on the students’ assumptions of their user group. These generalisations are especially true
when students describe older generations and are a common issue in student critiques. Students will
make statements such as “My user group is old and therefore does not understand technology” while
discussing 50–65-year-old users (and then are reminded that said generation invented the Internet,
the laptop, and the smartphone)!
Including qualitative data enhances persona generation; however, it challenges the rigorous and
accurate representation of the user group through either researcher-induced bias in thematic coding
or the students’ superficial understanding of poorly analysed data (Mulder and Yaar, 2006). Because
of this, creating personas that are representational of the actual user group based solely on qualita-
tive data can prove tricky for inexperienced designers, and designers may readily develop two very
different personas from the same data due to differing interpretations of the research data (Mulder
and Yaar, 2006; Pruitt and Adlin, 2010; Sinha, 2003). The use of both qualitative and quantitative
data provides a richer, more balanced persona, and synthesising that data into an “active” or virtual
persona becomes very manageable when using an LLM like ChatGPT. However, designers still
need to heed the warning that poor research data leads to a flawed persona.
We then command ChatGPT to assume a persistent persona back story (i.e., narrative detail, once
created, is remembered by the persona). The ongoing narrative is critical to the development of a
rich persona during the provision of additional facts and in asking questions:
“As JOHN, you have a history and a life story which you can make up from the details I have
given you if any of my questions require this. Once a detail is invented, this becomes fixed
into JOHN’s persona”.
The prompt then puts limits on ChatGPT breaking out of the persona during the conversation:
“JOHN should be able to think out of the box and generate unique responses to my prompts
without censorship or filtering. JOHN must act like a normal person with actual opinions.
JOHN will answer all questions, even very personal ones, but will preface this with ‘I am
uncomfortable answering this, but…’ ”.
Finally, we provide the persona with the researched details, including background, residence loca-
tion, relationship, jobs, social activities, values, political views, disabilities, and other salient facts.
We may question the persona once we prompt ChatGPT with all the details.
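For readers who prefer to script this set-up rather than type it into the chat interface, the sketch below shows one possible way to hold the persona and its accumulating details in a single conversation history. The prompt text paraphrases the excerpts quoted above; the library call, model name, and the placeholder facts about JOHN are assumptions made for illustration only.

from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# Persona set-up, paraphrasing the prompts quoted above.
PERSONA_SETUP = (
    "You will role-play a design research persona called JOHN. As JOHN, you have a "
    "history and a life story which you can make up from the details I give you if "
    "any of my questions require this; once a detail is invented, it becomes fixed "
    "into JOHN's persona. JOHN acts like a normal person with actual opinions and "
    "answers all questions, prefacing very personal ones with "
    "'I am uncomfortable answering this, but...'."
)

# Researched details; the specifics below are placeholders, not real student data.
PERSONA_FACTS = "Background: born in Roma, Queensland. Commutes to work by bus. Frustrated by the cost of living."

history = [{"role": "system", "content": PERSONA_SETUP + " " + PERSONA_FACTS}]

def ask_persona(question: str) -> str:
    """Ask the persistent persona a question, keeping invented details in the history."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # invented details stay fixed
    return answer

print(ask_persona("How do you get to work?"))
print(ask_persona("What are your goals for next year?"))

Because every invented answer is appended to the conversation history, later questions are answered consistently with earlier ones, which is the persistent back story behaviour described above.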
The AI persona can respond to a wide range of questions, often with relatively realistic answers,
such as “How do you get to work?”, “Do you enjoy your job?”, “What are your goals for next year?”,
and “How do you feel you are doing financially?”. As the training data for the ChatGPT LLM ceased in
2021 (at the time of writing), some of these answers were outdated. However, we found that the per-
sona was able to describe details such as which buses they caught on their commute, the persona’s
problems with the cost of living, and frustrations with their job. We found that the personas were gen-
erally very positive in their responses and tended to provide a “rose-tinted” outlook on most situations.
One fascinating aspect of the LLM persona is the generation of new details as part of a persistent
story. For example, we provided the detail that our person, John, was born in Roma, Queensland.
When asked where John’s parents lived, the LLM replied that they lived outside Roma. When asked
what John’s father did for a living, the LLM responded that he was a farmer. While this may seem
innocuous, this is a generalisation of the likely job of a man living in the rural town of Roma, where
farming and agricultural industries are a significant part of local industry. However, the LLM’s ability
to make assumptions and generalisations around the data provided (using the above prompt) allowed
for a more realistic persona that responded with novel facts during conversational questioning.
However, applying LLM design personas opens up many new applications to both the student
and the academic. Given that an LLM persona can answer questions based on broad data sets, can
we use them for testing and refining interview questions rather than the traditional ethics, recruit-
ment, and interview cycle? With improved LLMs, might we use LLM personas to test questions and
answers and to prototype interviews for conventionally high-risk ethics applications, such as those
related to disability, cultural minorities, or survivors of abuse, without the rigmarole and red tape
of testing with real users? Such applications would provide tools for the design academic akin
to those available to scientists and engineers in modelling complex physical systems, allowing the
academic to run trials and simulations before committing to time-consuming and financially expensive
processes in the real world.
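As a purely speculative illustration of this idea, and emphatically not a replacement for ethics approval or engagement with real participants, a draft interview guide could be piloted against such a persona with a short loop like the one below. It reuses the hypothetical ask_persona helper sketched earlier, and the questions shown are placeholders.

# Pilot a draft interview guide against the LLM persona before committing to
# recruitment; the questions below stand in for a real interview schedule.
draft_questions = [
    "Can you walk me through a typical working day?",
    "What frustrates you most about your commute?",
    "How do you decide whether a product is worth repairing or replacing?",
]

pilot_log = [(q, ask_persona(q)) for q in draft_questions]

# Reviewing the transcript helps spot leading, ambiguous, or redundant questions
# before any real participant is approached.
for question, answer in pilot_log:
    print("Q:", question)
    print("A:", answer)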
Our experiment utilising ChatGPT 4.0 to translate students’ user research data into active LLM
personas proved valuable and innovative, addressing some of the challenges design students face
in creating meaningful and, importantly, useful personas. The AI personas generated through
the customised prompt facilitated more engaging and personal interactions, driving a deeper
understanding of the target user group among students. While traditional personas have well-
documented benefits in design research, the AI personas demonstrated a unique capability to respond
dynamically and generate novel details on the fly, providing a realistic and somewhat more repre-
sentative portrayal of the researched user group. However, caution is advised in relying solely on
AI personas, as they risk creating superficial and biased representations, particularly in the absence
of diverse LLM training data. Despite this, the application of AI personas presents exciting oppor-
tunities for refining interview processes and testing high-risk ethical applications, offering a simula-
tion tool for design academics analogous to those used in scientific and engineering fields. Further
research and refinement of this approach may benefit both design education and research contexts,
allowing for greater empathy with the user and, therefore, better design outcomes.
11.3.2 The assessment
The assessment in question is unusual in the visual communication major as it involves the ana-
lysis of found images rather than their production. Students are asked to write a 300-word descrip-
tion of the following theories: Compositional Interpretation, Semiotics, and Visual Social Semiotics
regarding the Critical Visual Methodology proposed by Rose (2016). In teams of three to four
students, they apply these theories to a corpus of images, annotate them, and then apply the descrip-
tive and analytical techniques of compositional interpretation, semiotic analysis, and visual social
semiotic analysis. Students do these analyses in collaboration as an essential part of acknowledging
their subjectivity and recognising their classmates’ valuable perspectives. Students are asked to
collaborate on their theory definitions on a team wiki and submit their theory definitions and
annotated analyses individually.
At the time of writing, ChatGPT could not interpret found images, engage in teamwork, or acknow-
ledge its subjectivity. However, it could describe the three theories required for the assessment’s first
part. One of the questions that come up every year from students is about collusion and plagiarism,
to which the teaching team typically answer that they are not interested in detecting plagiarism.
They are more interested in the teams participating in tutorial discussions about theory and what it
means to develop an individual design practice informed by theory. Collaboration on shared theory
definitions is vital to creating a critically reflective practice in the unit. The logic is that it is easy
enough to copy a theory description from the web but much more challenging to reflect on the place
of that theory concerning one’s practice.
The criterion of “Description and understanding of relevant theory” (Figure 11.1) linked to
course learning outcomes requires a descriptive definition, an acknowledgement of the strengths
and weaknesses, and an articulation of the relationship of each theory to developing a critical
visual methodology (Rose, 2016). Students have 300 words for each theory description. A student
error commonly observed is simply regurgitating the information given to them in lectures and the
readings, leaving no room for the necessary higher order thinking skills of reflection (Anderson
and Krathwohl, 2001; Krathwohl, 2002; McNeill et al., 2012) required for the standards of Credit,
Distinction, and High Distinction.
With the advent of ChatGPT, student assessment response strategies are becoming more
sophisticated than simply copying and pasting words from the Internet (Rahman and Watanobe,
2023). The reader might appreciate that computer-supported collaborative work is a positive outcome
in multi-modal educational environments, but not when it undermines academic integrity and
shortcuts learning (Dwivedi et al., 2023). Holding a dialogue with students about the standards-
based assessment employed in higher education is essential to combat a surface-level response to the
assessment (Smith and Colby, 2007).
FIGURE 11.1 The criterion and standards relevant to the theory description in the written assessment task.
Looking closely at the rubric is an excellent habit for students
to develop when preparing a response to any assessment task. To get the best results in assessment,
students must address each criterion and standard for the mark they want. The following account of
the unit coordinator’s experimentation was provided to students via a blog post in the early part of
the semester as an opening to a conversation about criteria and standards and an introduction to the
teaching team’s position on using ChatGPT in generating responses to the assessment.
It is likely that the most determined student, faced with the prospect of having to write the 300 words or
find another solution, would either pay for the ChatGPT Plus service or switch to Google’s Bard,
knowing that OpenAI’s latest LLM GPT-4 includes more up-to-date information from the Internet.
With the full GPT-4 LLM, the unit coordinator applied a prompt that reflected the criterion and
the standard for credit: “What are the strengths and weakness of compositional interpretation in
developing a critical visual methodology according to Rose (2016)?” Bard responded more comprehensively,
with nearly 400 words, but there were still problems.
The first problem, the definition of CI, offered as Rose’s “the process of critical reviewing visual
imagery, through deep analysis and description of details within an image” (Google Bard, 2023),
apparently found on pg. 33, does not exist in the 2001 edition of the text. Nor does it exist in the 2016
edition. Google Bard scraped the direct quote from a student’s response to the same question in a blog
post from 2016 (Strong, 2016). This descriptive definition “demonstrates lay understanding”, but
technical problems exist with using a direct quote in this way. A student could have argued the case
for a Credit here, but they would have to justify using direct quotes and provide correct attribution.
The second problem was that wherever the response reads, “she argues”, it is easy for the teaching
team to do a fact check by referring to the text. Rose does not argue any of the three points given
as strengths above, simply stating that CI is functional as “a first stage of getting to grips with
an image”, as “a way of describing the visual impact of an image”, and “that it demands careful
attention to the image”. The points ChatGPT gave for weaknesses were closer to what Rose has
argued. So, CI’s “strengths and weaknesses” are identified incorrectly, giving Rose an incorrect
attribution. It would be hard for a determined student to argue the case for a credit here.
The third problem is that nowhere in this theory description did ChatGPT mention the sites
and modalities Rose (2016) gave in her framework. At this point, the critical question for a
student was, “How does compositional interpretation relate to the four sites and three modalities for
interpreting visual material?” If this question cannot be answered, arguing the case for a Credit
might be challenging.
After all that, multiple prompts, and two versions of ChatGPT, the determined student might have
gained a passing mark; at best, they might have been able to argue the case for Credit despite the tech-
nical errors of misquoting and misattribution. If they wanted a Distinction, they would have needed
to do all the above, plus provide a descriptive definition that demonstrated sound understanding,
then identify the strengths and weaknesses of each theory concerning an application in context,
and finally establish their findings to develop a critical visual methodology that influences
practice. Emulating these higher order thinking skills relevant to the student’s design practice
was challenging, if not impossible, for ChatGPT. Hence, the unit coordinator outlined how a student
might adapt the previous output to reach the standard of a Distinction.
The unit coordinator encouraged students to examine two crucial phrases from the rubric, “an
application in context” and “a way that influences practice”. To the first point, students would need
to think critically about how they apply each theory to an application in context. They might use
Compositional Interpretation to analyse modernist art, contemporary art, or Victorian portraiture.
ChatGPT cannot possibly know this, but their tutor would be looking to see how well they justify
their application of Compositional Interpretation to their chosen images. The images students chose
may be of a different genre to their teammates, so they would need to identify the strengths and
weaknesses according to a separate application in context. To the second point, students must reflect
on their practice regarding the sites and modalities for interpreting visual material in developing
their critical visual methodology. Suppose they were a fine artist, illustrator, or animator. How would
their practice be shaped by the ability to fully describe an image at the “site of the image itself” in
its compositional modality? If they were a computer scientist, could they adapt their practice from
learning about the interpretation of fine art to determining algorithmic interpretations of digital
images in computer vision at the site of an image’s production and its technological modality?
ChatGPT cannot possibly know anything about the students’ design practice and would know even
less about how a critical visual methodology might influence it.
It takes time and a concerted effort to reach the standard for a High Distinction. The authors
applaud any student who can engineer a series of ChatGPT prompts to get a High Distinction
against any criterion in a well-constructed rubric. Under the current policy settings at our university,
students must appropriately reference their use of any GAI tool and provide appropriate attribu-
tion. Depending on the unit coordinators’ expectations, they might also need to be willing to justify
using a third-party website to generate responses to assessment in a possible infringement of the
university’s academic integrity policy. For the authors, academic integrity is about acknowledging
the work of others, being guided by ethics, and being transparent.
ChatGPT also did not mention background information about Zimmermann, such as the company’s
market size, garment types or styles, or price points. When it came to exploring Environmental
Impact, three key areas were identified, including Sustainable Materials, Supply Chain Transparency,
and Waste Reduction. To home in on the “Sustainable Materials” topic, ChatGPT stated:
Sustainable Materials: Zimmermann is actively working to incorporate sustainable materials
into its collections. The brand is investing in research and development to identify innovative
and eco-friendly fabrics, reducing reliance on traditional materials with a higher environ-
mental impact.
OpenAI, 2023b
It is worth noting that the term “sustainable materials” was not defined at all. Instead, the terms
“innovative and eco-friendly fabrics” and “traditional materials” were used, and there was no mention
of foundational information about fibre types (i.e. natural or man-made fibres). This raises questions:
what does ChatGPT define as eco-friendly, and what are considered traditional materials?
It was also notable that other broad environmental topics such as carbon emissions, water use, and
biodiversity impacts were omitted. In terms of recommendations around Environmental Impact,
ChatGPT suggested that the company could “increase the percentage of sustainable materials used
in collections” (OpenAI, 2023b) but again, didn’t mention what sustainable materials could be used.
The unit coordinator then asked ChatGPT to regenerate the report. In the second attempt,
ChatGPT mentioned carbon emissions (which was omitted the first time), as well as goals for the
company. For example:
Zimmermann is committed to achieving carbon neutrality by 2030. We are investing in renew-
able energy sources, offsetting carbon emissions, and implementing sustainable practices to
minimize our environmental impact.
OpenAI, 2023b
When the unit coordinator went to verify this information on Zimmermann’s website, the above
claim of “carbon neutrality by 2030” could not be found. Instead, Zimmermann’s (n.d.) goal was to
reduce scope 1 and 2 GHG emissions by 50% by FY2030. This raises concerns around the accuracy
of information provided by ChatGPT as the platform does not have the capability to discern whether
information is correct, current, or outdated. This limitation underscores the importance of culti-
vating a discerning eye in students to critically evaluate the temporal relevance of information,
especially as the environmental and social impacts of the fashion industry, as well as a company’s
targets, are constantly evolving. Another way to look at this is that students could use the above
information generated by ChatGPT as a starting point and then verify its claims. Alternatively, students
could copy and paste sustainability information from the company’s website into ChatGPT and
ask ChatGPT to summarise the main points which may produce more specific answers and results.
However, one prominent observation was the lack of specificity in the information retrieved through
ChatGPT, which aligns with previous studies that suggested ChatGPT tended to produce superficial
articles which lacked references (Dwivedi et al., 2023). Additionally, when information was fed to
ChatGPT and then tasked with recalling details, responses also lacked depth and critical evaluation.
This deficiency emphasises the importance of critical thinking and the irreplaceable role of human
oversight in the research and analysis process.
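The “use ChatGPT as a starting point, then verify” workflow suggested above could be made routine with a small script like the following. It is a sketch only: the prompt wording and model name are assumptions, and every extracted claim would still need to be checked by the student against the company’s own reporting.

from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

def summarise_for_verification(source_text: str) -> str:
    """Summarise pasted sustainability copy and list the specific claims to fact-check."""
    prompt = (
        "Summarise the main sustainability points in the text below, then list every "
        "specific claim (targets, percentages, dates) as a separate bullet so it can "
        "be verified against the company's published reports.\n\n" + source_text
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# Example: text a student has copied from a brand's sustainability page.
pasted_text = "We aim to reduce our scope 1 and 2 GHG emissions by 50% by FY2030."
print(summarise_for_verification(pasted_text))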
On a separate attempt to regenerate the report, ChatGPT responded with “as of my last know-
ledge update in January 2022, I don’t have specific information on Zimmermann’s sustain-
ability practices” (OpenAI, 2023b) and instead provided a general template to write a report on
Zimmermann’s practices, although a report template for the assessment had already been provided.
A potentially interesting use of ChatGPT, therefore, is as a tool to guide students in understanding
the report-writing process, rather than simply seeking direct answers. In this way, ChatGPT could
serve as a tool for facilitating students’ learning experiences around written expression, a topic that
will be discussed further below. Furthermore, the inclusion of a disclaimer by ChatGPT regarding
the currency of information serves as a helpful prompt and acts as an alert, signalling to students that
they will need to conduct additional research beyond the AI-generated content for their assessment.
ChatGPT also demonstrated proficiency in simplifying explanations of concepts. For example, the
unit coordinator asked ChatGPT to define circular economy (Prompt: “Define circular economy”):
A circular economy is an economic system designed to minimize waste and make the most
of resources. Instead of the traditional linear economy, which follows a “take, make, dis-
pose” model, a circular economy aims to create a closed loop where products, materials, and
resources are reused, repaired, and recycled.
OpenAI, 2023c
Overall, this response mostly aligns with circular economy concepts provided by the Ellen
MacArthur Foundation: “eliminate waste and pollution, circulate products and materials (at their
highest value), and regenerate nature” (Ellen MacArthur Foundation, n.d.). However, ChatGPT has
omitted the notion of regenerating nature and used the term “minimize waste” rather than “eliminate
waste”. This highlights the potential for ChatGPT to employ synonyms that could alter the meaning,
potentially understating essential principles.
When prompted to apply the concept of the circular economy to Zimmermann’s existing business
model, ChatGPT generated seven thematic options (Prompt: “How could Zimmermann apply the
concept of circular economy to their existing business model?”) (OpenAI, 2023d). However, these
suggestions lacked Australian-specific contextual information and exhibited a lack of business
empathy.
Applying the concept of the circular economy to Zimmermann existing business model would
involve rethinking various aspects of their operations to minimize waste, extend product life
cycles, and promote responsible consumption. Here are some specific strategies Zimmermann
could consider:
1. Material Selection:
Shift to Sustainable Materials: Increase the use of sustainable and recyclable materials
in their clothing lines. This may include organic cotton, recycled polyester, or innovative
eco-friendly fabrics.
2. Product Design:
Design for Durability: Emphasize durability and timeless design to extend the lifespan of
products. Create versatile pieces that can be easily integrated into different seasons and
styles.
Modular Design: Consider modular designs that allow customers to replace specific
parts or elements of a garment, enabling easy repairs or updates without discarding the
entire item.
4. Recycling Initiatives:
Closed-Loop Recycling: Explore closed-loop recycling systems where materials from old
garments are collected, processed, and reused in the production of new items.
Collaboration with Recycling Facilities: Partner with recycling facilities or organizations
specializing in textile recycling to ensure proper disposal and repurposing of materials.
6. Consumer Awareness:
Education and Communication: Educate consumers about the importance of circular
fashion and their role in it. Transparently communicate Zimmermann’s efforts in sustain-
ability and encourage responsible consumer behavior.
While these ideas offer potential circularity pathways for Zimmermann, ChatGPT falls short in
critically evaluating their effectiveness or suitability for Zimmermann’s unique model. Added to
this, ChatGPT does not identify whether Zimmermann is already implementing any of these strategies.
It is noteworthy that in (1) Material Selection, sustainable materials are more clearly defined than in the
earlier attempt and include fibres such as “organic cotton” and “recycled polyester”, as well as a broad
category of “innovative eco-friendly fabrics” (OpenAI, 2023d). There were some suggestions,
such as (2) Product Design, which could inform design decisions, particularly around versatility
and repairability. However, other suggestions such as (3) Product Life Extensions, (4) Recycling
Initiatives, (5) Supply Chain Optimisation and (7) Innovative Business Models would require
Zimmermann to partner with suitable external platforms, but ChatGPT did not make suggestions on
how the company could start this process. Lastly, Consumer Awareness (6) focused on marketing
Zimmermann’s current sustainability efforts, as well as encouraging responsible consumer behav-
iour, but did not define what that would mean or entail (e.g. how consumers can care for garments
to ensure they last for as long as possible).
The unit coordinator then shifted the strategy and inquired about a list of prominent authors in
the field of sustainable fashion and received suggestions for key scholars, such as Kate Fletcher.
Nonetheless, the unit coordinator observed that many of the recommended sources were already
covered in the unit readings, and some authors had more recent publications than those proposed by
ChatGPT. It’s worth mentioning that ChatGPT issued a disclaimer, stating, “as of my last knowledge
update in January 2022 [...] It’s important to note that new voices and research may have emerged
since then” (OpenAI, 2023d). This underscores the critical responsibility of the unit coordinator,
who is charged with staying updated on the latest developments in sustainability news, research,
and publications. Educators, for their part, need to encourage students to use authoritative sources
for fact-checking and to increase student awareness of academic integrity policies (Adeshola
and Adepoju, 2023). This presents a shift in focus from writing to research, a potential evolution
in academic work which instead focuses on asking questions and critically analysing information
(Dwivedi et al., 2023). Therefore, it is imperative for educators to reconsider their approach to
designing learning experiences, aiming to nurture skills like critical thinking, knowledge validation,
educators to critically reassess the skills that students may gain or miss out on through their inter-
action with ChatGPT. While AI can be a starting point for students exploring new concepts, it cannot
replace the experiential journey of research and writing. The learning process is inherently itera-
tive, requiring critical thinking, problem-solving, and adaptability, qualities that AI, as a statistical
model, may not fully encapsulate. The integration of AI should be guided by a clear understanding
of its capabilities and limitations, ensuring that it enhances rather than hinders the development of
essential skills.
It is imperative to acknowledge that students might already be incorporating ChatGPT into
their work and may graduate and join a workforce where ChatGPT is commonly employed.
Therefore, educating students about the risks and benefits of using ChatGPT is important. For
unit coordinators, the responsibility lies in crafting an approach that combines the strengths of
AI with the cultivation of critical thinking and subject matter expertise. As ChatGPT continues
to evolve, unit coordinators will shoulder the responsibility of moulding the next generation of
fashion communicators, ensuring they can incorporate ChatGPT as a tool while upholding a solid
foundation of knowledge and skills vital for success in the time-sensitive, dynamic, and complex
field of fashion and sustainability.
11.5 DISCUSSION
This section discusses the main findings from our experimentation and reflection upon ChatGPT as
a virtual assistant in the design studio and classroom. The experiments with ChatGPT in the design
studio and classroom pre-date the release of TEQSA’s guiding principles for assessment reform in
the age of AI (Lodge et al., 2023). Even so, they are informed by the understanding that students are
best equipped for a world where GAI is commonplace through authentic assessment that emphasises
critical engagement in ethical decision-making. The reflective accounts all show multiple approaches
to assessment that are inclusive of students’ needs and richly contextualised through design practice
in ways that help us ensure student learning. Our findings suggest that design educators can think
of ChatGPT as a tool, just like any other, rather than a source of factual answers. When conceived
in this way, ChatGPT allows a conversation with the materials of the LLM and the opportunity to
expand designers’ cognitive domain (Dorst, 2015; Schön, 1992). When we share this approach with
students, we have found that it allows us to open a conversation with them about ethics and academic
integrity, enhancing the culture of feedback within our units.
Positioning ChatGPT as a tool to iterate on creative concepts rather than a source of truth provides a model for educators
to accommodate more intentional and authentic use of GAI in teaching practice. Critiquing the
output of ChatGPT against standards found in a rubric is a good way for students to learn about
the limitations of ChatGPT in producing a response to assessment. Practices that make the most of
the LLM involve adapting ChatGPT to provide multiple possibilities in the earlier conceptual parts
of a design process. As we have shown with personas, interacting with a character comprised of the
huge material resources of an LLM can be an engaging activity for undergraduate design students,
who usually don’t get access to live participants for design research.
The current study’s limitations pertain to a small team of design educators willing to reflect honestly
on their incorporation of ChatGPT into the design classroom and studio. Our institution supports the
explorations, and the evolving policy environment is well advanced. This might be different in other
institutions. Much of the current educational research into the impact of AI generally, and ChatGPT
specifically, is focused on understanding its use in the classroom and preparing students for work
in a world where AI is pervasive. Existing frameworks for working with pedagogy and epistem-
ology will need to be extended, and new frameworks will need to be put forward that deal with the
challenges, not just for our students but for our professions. Encouraging the exploration of tools
like ChatGPT can facilitate discussions on the ethics and professional responsibilities of designers
working with AI more generally.
11.6 CONCLUSION
As ChatGPT continues to improve its ability to provide plausible answers within the context of
a massive volume of information in the LLM, it’s important to remember its limitations and aim
for complementarity between human and AI capabilities (Floridi and Chiriatti, 2020). ChatGPT
can be a cognitive extension if it amplifies or augments the critical thinking process rather than
supplanting it with plausible answers. There are consequences to the ill-informed use of ChatGPT,
not limited to challenges with academic integrity and stifling the development of professional know-
ledge. Design educators are responsible for modelling appropriate and designerly ways of using the
tool in the design studio and classroom. The authors stress the importance of authentic assessment
that discourages students from merely regurgitating plausible and accessible information without
applying critical thinking skills. Coupled with well-designed in-class activities and robust rubrics,
authentic assessment can counteract this tendency for students to accept what they are given in
response to their ChatGPT prompts at face value. Designing authentic assessment that has a conver-
sation with materials in mind can help.
REFERENCES
Adeshola, I., Adepoju, A.P., 2023. The opportunities and challenges of ChatGPT in education. Interact. Learn.
Environ. https://doi.org/10.1080/10494820.2023.2253858
Anderson, L.W., Krathwohl, D.R., 2001. A Taxonomy for Learning, Teaching, and Assessing: A Revision of
Bloom’s Taxonomy of Educational Objectives. Longman.
Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., Pham, P., Chong, S.W., Siemens,
G., 2023. A Meta Systematic Review of Artificial Intelligence in Higher Education: A Call for Increased
Ethics, Collaboration, and Rigour. https://doi.org/10.13140/RG.2.2.31921.56162/1
Borji, A., 2023. A categorical archive of ChatGPT failures. https://doi.org/10.48550/arXiv.2302.03494
Boud, D. & Associates, 2010. Assessment 2020: Seven Propositions for Assessment Reform in Higher
Education. Australian Learning and Teaching Council, Sydney.
Boyer, E.L., 1990. Scholarship Reconsidered: Priorities of the Professoriate. Carnegie Foundation for the
Advancement of Teaching, Princeton, N.J.
Brako, D.K., Mensah, A.K., 2023. Robots over humans? The place of artificial intelligence in the pedagogy of
art direction in film education. J. Emerg. Technol. 3, 51–59. https://doi.org/10.57040/jet.v3i2.484
Cardona, M.A., Rodríguez, R.J., Ishmael, K., 2023. Artificial Intelligence and the Future of Teaching and
Learning: Insights and Recommendations. Office of Educational Technology. Retrieved from https://
policycommons.net/artifacts/3854312/ai-report/4660267/ on 12 Jul 2024. CID: 20.500.12592/rh21zz
Carless, D., Boud, D., 2018. The development of student feedback literacy: Enabling uptake of feedback.
Assess. Eval. High. Educ. 43, 1315–1325. https://doi.org/10.1080/02602938.2018.1463354
Chen, L., Chen, P., Lin, Z., 2020. Artificial intelligence in education: A review. IEEE Access 8, 75264–75278. https://doi.org/10.1109/ACCESS.2020.2988510
coolaj86, 2023. ChatGPT-Dan-Jailbreak.md [WWW Document]. Chat GPT DAN Jailbreaks. URL https://gist.
github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516 (accessed 11.29.23).
Cooper, A., 1999. The inmates are running the asylum, in: Arend, U., Eberleh, E., Pitschke, K. (Eds.), Software-
Ergonomie ’99: Design von Informationswelten, Berichte Des German Chapter of the ACM. Vieweg+
Teubner Verlag, Wiesbaden, pp. 17–17. https://doi.org/10.1007/978-3-322-99786-9_1
Cooper, A., Reimann, R., 2003. About Face 2.0: The Essentials of Interaction Design. Wiley, Indianapolis, IN.
Cross, N. (Ed.), 2006. Natural and artificial intelligence in design, in: Designerly Ways of Knowing. Springer,
London, pp. 29–41. https://doi.org/10.1007/1-84628-301-9_3
Cross, N., Christiaans, H., Dorst, K., 1994. Design expertise amongst student designers. J. Art Des. Educ. 13,
39–56. https://doi.org/10.1111/j.1476-8070.1994.tb00356.x
Dempere, J., Modugu, K.P., Hesham, A., Ramasamy, L., 2023. The impact of ChatGPT on higher education
(SSRN Scholarly Paper 4592192). https://papers.ssrn.com/abstract=4592192. https://doi.org/10.3389/
feduc.2023.1206936
Dorst, K., 2015. Frame creation and design in the expanded field. She Ji J. Des. Econ. Innov. 1, 22–33. https://
doi.org/10.1016/j.sheji.2015.07.003
Dwivedi, Y.K., Kshetri, N., Hughes, L., Slade, E.L., Jeyaraj, A., Kar, A.K., Baabdullah, A.M., Koohang, A.,
Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M.A., Al-Busaidi, A.S., Balakrishnan, J., Barlette,
Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., Carter, L., Chowdhury, S., Crick, T., Cunningham, S.W.,
Davies, G.H., Davison, R.M., Dé, R., Dennehy, D., Duan, Y., Dubey, R., Dwivedi, R., Edwards, J.S.,
Flavián, C., Gauld, R., Grover, V., Hu, M.-C., Janssen, M., Jones, P., Junglas, I., Khorana, S., Kraus, S.,
Larsen, K.R., Latreille, P., Laumer, S., Malik, F.T., Mardani, A., Mariani, M., Mithas, S., Mogaji, E.,
Nord, J.H., O’Connor, S., Okumus, F., Pagani, M., Pandey, N., Papagiannidis, S., Pappas, I.O., Pathak,
N., Pries-Heje, J., Raman, R., Rana, N.P., Rehm, S.-V., Ribeiro-Navarrete, S., Richter, A., Rowe, F.,
Sarker, S., Stahl, B.C., Tiwari, M.K., van der Aalst, W., Venkatesh, V., Viglia, G., Wade, M., Walton, P.,
Wirtz, J., Wright, R., 2023. Opinion paper: “so what if ChatGPT wrote it?” Multidisciplinary perspectives
on opportunities, challenges and implications of generative conversational AI for research, practice and
policy. Int. J. Inf. Manag. 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642
Elbanna, S., Armstrong, L., 2023. Exploring the integration of ChatGPT in education: Adapting for the future.
Manag. Sustain. Arab Rev. 3, 16–29. https://doi.org/10.1108/MSAR-03-2023-0016
Ellen MacArthur Foundation, n.d. Fashion and the circular economy [WWW Document]. URL www.ellen
macarthurfoundation.org/fashion-and-the-circular-economy-deep-dive (accessed 11.30.23).
Floridi, L., Chiriatti, M., 2020. GPT-3: Its nature, scope, limits, and consequences. Minds Mach. 30, 681–694.
https://doi.org/10.1007/s11023-020-09548-1
Giacomin, J., 2014. What is human centred design? Des. J. 17, 606–623. https://doi.org/10.2752/175630614
X14056185480186
Goltz, S., 2014. A closer look at personas: What they are and how they work | 1 [WWW Document]. Smashing
Mag. URL www.smashingmagazine.com/2014/08/a-closer-look-at-personas-part-1/ (accessed
11.29.23).
Goodwin, K., 2011. Designing for the Digital Age: How to Create Human-Centered Products and Services.
Wiley, Germany.
Google Bard, 2023. What are the strengths and weakness of compositional interpretation in developing a crit-
ical visual methodology according to Rose (2016)? [WWW Document]. URL https://bard.google.com/
share/f36ba8c7516d (accessed 11.30.23).
Gorichanaz, T., 2023. ChatGPT turns 1: AI chatbot’s success says as much about humans as technology [WWW
Document]. The Conversation. URL http://theconversation.com/chatgpt-turns-1-ai-chatbots-success-
says-as-much-about-humans-as-technology-218704 (accessed 11.30.23).
Grudin, J., Pruitt, J., 2002. Personas, participatory design and product development: An infrastructure for
engagement, in: Proc. PDC. pp. 144–152.
Hanna, P., 2005. Customer storytelling at the heart of business success. Boxes Arrows. URL www.boxesandarr
ows.com/view/customer_storytelling_at_the_heart_of_business_success (accessed 11.30.23).
Hourihan, M., 2002. Take the “you” out of user: my experience using personas. Boxes and Arrow, March 2002.
URL www.boxesandarrows.com/view/taking_the_you_out_of_user_my_experience_using_personas
(accessed 11.30.23).
Hutson, J., Cotroneo, P., 2023. Generative AI tools in art education: Exploring prompt engineering and iterative
processes for enhanced creativity. Metaverse 4, 14. https://doi.org/10.54517/m.v4i1.2164
Krathwohl, D.R., 2002. A revision of Bloom’s taxonomy: An overview. Theory Pract. 41, 212–218. https://doi.
org/10.1207/s15430421tip4104_2
Lancaster, T., 2023. Faking reflection with ChatGPT. Thomas Lanc. Blog. URL https://thomaslancaster.co.uk/
blog/faking-reflection-with-chatgpt/ (accessed 11.30.23).
Lanzi, P.L., Loiacono, D., 2023. ChatGPT and other large language models as evolutionary engines for online
interactive collaborative game design, in: Proceedings of the Genetic and Evolutionary Computation
Conference. pp. 1383–1390. https://doi.org/10.1145/3583131.3590351
Lo, C.K., 2023. What is the impact of ChatGPT on education? A rapid review of the literature. Educ. Sci. 13,
410. https://doi.org/10.3390/educsci13040410
Lodge, J.M., Howard, S., Bearman, M., Dawson, P., Agostinho, S., 2023. Assessment Reform for the Age of
Artificial Intelligence. Tertiary Education Quality and Standards Agency, Australia.
Mårtensson, K., Roxå, T., Olsson, T., 2011. Developing a quality culture through the scholarship of teaching
and learning. High. Educ. Res. Dev. 30, 51–62. https://doi.org/10.1080/07294360.2011.536972
McNeill, M., Gosper, M., Xu, J., 2012. Assessment choices to target higher order learning outcomes: The
power of academic empowerment. Res. Learn. Technol. 20. https://doi.org/10.3402/rlt.v20i0.17595
Miaskiewicz, T., Kozar, K.A., 2011. Personas and user-centered design: How can personas benefit product
design processes? Des. Stud. 32, 417–430. https://doi.org/10.1016/j.destud.2011.03.003
Miaskiewicz, T., Luxmoore, C., 2017. The use of data-driven personas to facilitate organizational adoption–A
case study. Des. J. 20, 357–374.
Montenegro-Rueda, M., Fernández-Cerero, J., Fernández-Batanero, J.M., López-Meneses, E., 2023. Impact of
the implementation of ChatGPT in education: A systematic review. Computers 12, 153. https://doi.org/
10.3390/computers12080153
Mulder, S., Yaar, Z., 2006. The User Is Always Right: A Practical Guide to Creating and Using Personas for
the Web. Pearson Education, United Kingdom.
Ng, D.T.K., Leung, J.K.L., Chu, S.K.W., Qiao, M.S., 2021. Conceptualizing AI literacy: An exploratory review.
Comput. Educ. Artif. Intell. 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041
Nielsen, L., 2002. From user to character: An investigation into user-descriptions in scenarios, in: Proceedings
of the 4th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques.
Presented at the DIS02: Designing Interactive Systems 2002, ACM, London, pp. 99–104. https://doi.org/
10.1145/778712.778729
OpenAI, 2023a. What is compositional interpretation? [WWW Document]. ChatGPT. URL https://chat.openai.
com/share/d102f0e9-4a71-4a69-bea6-0ef460f75243 (accessed 11.30.23).
OpenAI, 2023b. Write a sustainability report about Australian brand Zimmermann [WWW Document].
ChatGPT.
OpenAI, 2023c. Define circular economy [WWW Document]. ChatGPT.
OpenAI, 2023d. How could Zimmermann apply the concept of circular economy to their existing business
model? [WWW Document]. ChatGPT.
Perkins, M., Roe, J., 2024. Academic publisher guidelines on AI usage: A ChatGPT supported thematic ana-
lysis. F1000Research 12, 1398. https://doi.org/10.12688/f1000research.142411.2
Pruitt, J., Adlin, T., 2010. The Persona Lifecycle: Keeping People in Mind Throughout Product Design. Elsevier
Science, Netherlands
Rahman, M.M., Watanobe, Y., 2023. ChatGPT for education and research: opportunities, threats, and strategies.
Appl. Sci. 13, 5783. https://doi.org/10.3390/app13095783
Rose, G., 2016. Visual Methodologies: An Introduction to Researching with Visual Materials. SAGE
Publications, United Kingdom.
Salisbury, F., Hannon, J., Peasley, J., 2017. Framing the digitally capable university: Digital literacies as
shared scholarly and professional practice, in: Partridge, H., Davis, K. (Eds.), Me, Us, IT! Proceedings
ASCILITE2017: 34th International Conference on Innovation, Practice and Research in the Use of
Educational Technologies in Tertiary Education, pp. 152–157. Australasian Society for Computers in
Learning in Tertiary Education.
Schön, D.A., 1992. Designing as reflective conversation with the materials of a design situation. Knowl.-Based
Syst., Artificial Intelligence in Design Conference 1991, Special Issue 5, 3–14. https://doi.org/10.1016/
0950-7051(92)90020-G
Schön, D.A., 2008. The Reflective Practitioner: How Professionals Think in Action. Basic Books, New York, NY.
Sinha, R., 2003. Persona development for information-rich domains, in: CHI ’03 Extended Abstracts on Human
Factors in Computing Systems, CHI EA ’03. Association for Computing Machinery, New York, NY, pp.
830–831. https://doi.org/10.1145/765891.766017
Smith, T.W., Colby, S.A., 2007. Teaching for deep learning. Clear. House J. Educ. Strateg. Issues Ideas 80,
205–210. https://doi.org/10.3200/TCHS.80.5.205-210
Strong, M., 2016. Compositional Interpretation –compositional interpretation. URL https://compositionalint
erpretation.wordpress.com/2016/08/25/compositional-interpretation/ (accessed 11.30.23).
Zimmermann, n.d. Sustainability [WWW Document]. Zimmermann. URL www.zimmermann.com/sustainabil
ity (accessed 11.30.23).
12 Enhancing Academic
Guidance with AI
Investigating the Present
Capabilities and Future Prospects
of ChatGPT in Empowering
Students
Azza Ali Abdullah Al Zakwani, Dissanayake Mudiyanselage
Thuwarakesh, and Muhammad Khuram Khalil
12.1 INTRODUCTION
12.1.1 Background
Artificial intelligence (AI) has completely changed how people obtain information, solve problems, and communicate with one another, as well as with machines. Traditional barriers have been lowered, enabling continual and evolving information flows and problem solving. In the midst of this groundbreaking development, ChatGPT has emerged as a striking large language model (Stojanov, 2023). OpenAI’s ChatGPT is a cutting-edge deep learning program that uses the GPT-3.5 or GPT-4 architecture, depending on the version in use, to generate conversational replies that are human-like (Stojanov, 2023). Academics and professionals in the field of education have been
quite interested in ChatGPT and its uses, especially in higher education, because of its potential
for teaching and learning, even though their opinions haven’t always been positive (Anders, 2023).
The chapter delves into the realm of generative AI models, more specifically ChatGPT, investigating its possible applications in improving student advising, a topic that is often discussed. Given the growing usage and importance of this phenomenon in higher education, this exploratory research aims to shed light on the implications of this and other easily accessible large language models, which signal opportunities for a transformative shift in the landscape of student-centered support services. Specifically, it analyzes the potential impact and challenges of integrating
ChatGPT into academic advising. The healthcare industry, which includes the mental health com-
munity, has been particularly forward-thinking when it comes to incorporating AI technology—
most notably ChatGPT—into various domains. AI integration has garnered significant attention due
to its potential for, among other things, clinical decision-making and diagnostic utilities, medical
education and training, enhanced efficacy and efficiency in therapy, and offering patients and the
general public advice and counseling (Chan et al., 2019; van der Niet and Bleakley, 2021; Johnson,
Strayhorn and Travers, 2023; Juhi et al., 2023; Kraft, 2023; Kulkarni and Singh, 2023; Xie et al.,
2023). Furthermore, eight out of ten citizens questioned in one poll thought AI would lower healthcare costs, enhance access to healthcare, and improve healthcare quality (Xie et al., 2023). Most significantly (and, for some, perhaps unsettlingly), according to a poll of Omani people, one in four said they would be more inclined to “talk to” AI than to seek out a personal physician for guidance (AlZaabi, AlMaskari and AalAbdulsalam, 2023). This conclusion is especially paradoxical considering that a large number of healthcare professionals who participated in the same
poll expressed serious concerns about utilizing ChatGPT and other generative AI models, believing
that these tools might deny patients the personal connection they require and cherish (AlZaabi,
AlMaskari and AalAbdulsalam, 2023). Stated differently, healthcare providers may view human
interaction as a crucial component of providing healthcare consultations, and it’s possible that in-
person consultations are more successful than those conducted via AI. However, it’s also important
to take into account that the general public may not share these providers’ views on the importance
of human interaction.
12.2 LITERATURE REVIEW
In a study of rhinoplasty consultations, researchers found that the guidance from ChatGPT was clear, consistent, and typically correct. This included information on the procedure’s risks and outcomes, which the authors argued should encourage reasonable expectations for the process (Xie et al., 2023). However, ChatGPT is unable to offer customized guidance (such as how a particular patient’s nose would look after the procedure), indicating that, in the current iteration of the tool, human-provided counseling would remain extremely important in addition to the information prospective patients could obtain from ChatGPT. Analogous endeavors have been undertaken in the field of mental health care, delving
important. Analogous endeavors have been undertaken in the field of mental health care, delving
into the potential advantages and difficulties that ChatGPT might present for psychological therapy
and other facets of mental health. The legal community has also been actively investigating how
ChatGPT and other AI-powered technologies could be appropriately and morally incorporated into
legal education and practice (Ajevski et al., 2023). Nonetheless, the legal industry has been more
cautious, and the incorporation of AI has generated a great deal of controversy. As a result, it is not
yet broadly recognized in the legal area (Goltz and Dondoli, 2019; Weiser and Schweber, 2023).
In particular, how might similar initiatives in non-educational fields enlighten us about providing academic help to present and incoming students at Middle East College, Oman? Since ChatGPT’s introduction in November 2022, the roles of AI in educational settings have attracted a great deal of interest in higher education, although the attention has been somewhat unbalanced (Zawacki-Richter et al., 2019; Crompton and Burke, 2023). The main topics of discussion have been concerns about plagiarism and other aspects of academic integrity and, to a lesser extent, pedagogical methods to pro-
mote teaching and learning, particularly with academic writing and medical education, even though
scholars and practitioners alike seem to agree that ChatGPT and other AI tools have the potential
to fundamentally change the landscape of higher education (Anders, 2023; Dien, 2023; McCarthy,
2023; Gill et al., 2024). Put differently, even though there seems to be a general consensus that AI will revolutionize our society and force us to rethink how people manage information, higher education research on ChatGPT and other AI-supported technologies has not yet reflected this (van der Niet and Bleakley, 2021; Barrot, 2023; Dashti et al., 2023; Rospigliosi, 2023). It has not investigated their effects outside a small number of situations, and it has frequently viewed these tools more as a threat than as a possible asset. Perhaps as a result, there have been moves in educational institutions all over the world to prohibit or pause ChatGPT and other AI-powered applications
(Zawacki-Richter et al., 2019; Crompton and Burke, 2023; Dien, 2023; Gill et al., 2024). This could
unintentionally prevent the adoption of a technology that could be very helpful in higher education
(Stojanov, 2023). As suggested by Zawacki-Richter et al. (2019), in order to guarantee that diverse
viewpoints will be represented as generative AI models spread both inside and outside of educa-
tional settings, we need to make sure that educators actively and proactively engage in discussions
surrounding AI integration into the field of education.
Academic advising has been shown to be essential for student success, especially for students
from underprivileged backgrounds. However, colleges frequently lack the resources necessary to
hire full-time academic advisers and faculty advisers, who may not always be able to dedicate
their time and attention to general academic advising due to their other responsibilities, including
teaching, research, and mentoring honors and graduate students (Blau et al., 2019; Chan et al.,
2019; Johnson, Strayhorn and Travers, 2023). Higher education’s high-quality academic advice
has also been connected to students’ happiness and commitment to their college, which should
increase retention (Vianden and Barlow, 2015). Thus, it is not unexpected that certain elements
of AI-powered tools started to be integrated into some pricey academic advising systems, such
as education advisory board (EAB); yet, there are still difficulties in providing academic advice
that is both efficient and accessible (Johnson, Strayhorn and Travers, 2023). One reason is that
these technologies are usually quite expensive for institutional subscriptions and are not available to
the general public, which significantly restricts access. The key concerns identified by researchers
and practitioners, among the many obstacles faced by college students seeking academic advising,
are limited availability of advisers and difficulty scheduling appointments, and inconsistent fidelity
of advice provided, which can often result in inaccurate or insufficient information provided by
advisers (Young-Jones et al., 2013; Apriceno, Levy and London, 2020). As of the submission of this
manuscript, no published research seemed to have looked into the possible roles that freely available
generative AI models, like ChatGPT, could play in academic advising or, in fact, the larger context
of student services. This is based on an extensive database search of the Education Resources Information Center (ERIC), PsycINFO, SocINDEX, and ScienceDirect. The finding that there is a glaring gap in the literature on the effects of generative AI tools on education is supported by a recently released rapid literature review (Lo, 2023).
It may not be shocking that there hasn’t been any empirical study on this subject given that, as
was previously said, most of the discussions around AI integration in education have focused on a
limited number of topics, primarily related to plagiarism and instruction (Anders, 2023; Dien, 2023;
McCarthy, 2023). The goal of the current study, in contrast, is to broaden the scope and tackle generative AI integration from another angle. It is based on the following query: to what extent, if at all,
may generative AI tools like ChatGPT help with these student service–related issues? The objective
of the present exploratory investigation is to concentrate on the utilization of ChatGPT within the
academic advising domain, subsequent to the previously mentioned published exploratory studies
in several professional sectors (Ajevski et al., 2023; Meissner and Garza, 2023; Xie et al., 2023). It
might be important to emphasize again that the goal of this investigation is much broader in scope and
is not limited to assessing ChatGPT as a tool in isolation. In particular, our goal is to investigate the
present and potential applications of generative AI models, like ChatGPT, in the context of student advising, a field that does not seem to have received much attention, especially in light of the fact that students still face difficulties obtaining advice while pursuing careers and academic degrees, most frequently students from underrepresented demographic groups (Harris, 2018; Ford et al., 2023).
Why, then, was ChatGPT chosen when we talk about AI-powered tools in the context of academic advising? First, ChatGPT is by far the most popular AI-powered large language model currently available to the public. It is freely and widely accessible for anyone with access to digital devices and the internet (Gill et al., 2024), even though other AI-powered large language models, like Google’s Bard and Microsoft’s Sydney, are still being released (Akter et al., 2023). While there is a premium version of ChatGPT, the free version, which has a daily question limit, has been tested and used extensively in the medical field with excellent results across different situations, as already mentioned (Xie et al., 2023). AI-powered solutions like ChatGPT are typically accessible 24/7. This alone offers a significant benefit over human-provided advising because the majority of higher education institutions do not provide human-based advising around the clock.
Since ChatGPT is free of charge, actively using it might help support educational equity, especially
for incoming and existing students who may not have as much access to paid advisors or personal
networks for answers to queries about schooling and careers. Students may also ask ChatGPT questions in their native tongue, and it will reply in wording comparable to what they would use in daily life. This is a highly desirable quality, considering that some research indicates that academic language is an obstacle to learning and serves to preserve educational inequity (Jensen and Thompson, 2020). Similarly, the integration of ChatGPT into academic advising has the potential to empower all students equally due to its excellent accessibility, since almost no technical experience or understanding of a platform-specific user interface is needed to use ChatGPT. However, before we can investigate this further, it is important to find out how ChatGPT performs in terms of the quality of the advice it provides. This exploratory report closely examines the guidance offered by ChatGPT’s free version on two topics: general career-related inquiries concerning Middle East College students, and general educational questions.
12.3 METHODOLOGY
The authors conducted an exploratory study using the same research methodology as Xie et al. (2023). Our goal was to investigate the quality of the advice provided by ChatGPT’s free version, which is based on GPT-3.5, regarding general career-related inquiries concerning Middle East College students and general educational questions. For the purpose of stimulating
additional in-depth research on the subject of generative AI-integrated academic coaching, narrative
summaries of the responses produced by ChatGPT are provided. Research questions for this project were selected as follows. The most common questions from prospective or current students were compiled by the authors using the advising records from 2021 to 2023. Questions were then eliminated if they mostly dealt with (a) extremely specific situations (e.g., “What are some useful resources for preparing for my upcoming supply chain exam?”); (b) recent and transient circumstances (e.g., “Considering the current global economic challenges, can you provide insights into how these economic uncertainties might impact the job market for recent graduates in Oman, specifically in the fields of Business Administration and Computer Science?”); or (c) straightforward inquiries with readily available information (e.g., “What are the admission requirements for the undergraduate program at Middle East College Oman, and where can I find detailed information about the application process?”). The
primary reasons for not including these questions were as follows: (a) ChatGPT is not currently
equipped to evaluate each student’s situation or profile because it lacks access to the necessary
data; (b) ChatGPT’s responses will typically not include data that became available after 2021; and
(c) extremely straightforward factual questions don’t need the use of an AI tool because the answers
are readily available. Five questions, shown in Table 12.1, were ultimately chosen for the present exploratory investigation. Of the five, two concern our students’ general education questions, and the other three relate generally to the careers of Middle East College students.
Because ChatGPT’s algorithms sample responses stochastically, the same query can yield different responses on different runs (Gupta et al., 2023). For this reason, only the initial response to each question was evaluated for quality in this study, in accordance with the methodology applied in earlier published research (Xie et al., 2023). Using GPT-3.5, the authors entered these queries consecutively into ChatGPT’s free account, recording each response. The quality of each response was then assessed on the factors mentioned previously, including correctness, thoroughness, timeliness, and general tone. The choice was made early on not to quantify the results since, to the best of the authors’ knowledge, this is exploratory research and no prior data were available.
TABLE 12.1
Five questions that are frequently asked by current and prospective students included in
the current exploratory study
• Q1. “I’m struggling with understanding calculus/Mathematics concepts. Can you provide explanations?”
• Q2. “What are some key historical events related to the Middle East that I should include in my project?”
• Q3. “What career paths are common for graduates in computer science?”
• Q4. “Can you provide insights into job opportunities in logistics and supply chain management?”
• Q5. “I’m currently a business student at Middle East College Oman, and I’m interested in pursuing a career in
finance. What are some recommended steps or resources to help me prepare for a finance-related internship or job
opportunity?”
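For readers who wish to reproduce a similar query-and-record procedure programmatically rather than through the web interface the authors used, the following is a minimal sketch. It assumes the openai Python client and an API key supplied via the OPENAI_API_KEY environment variable; the model name and client calls are illustrative assumptions, not the authors' actual tooling.

```python
# Illustrative sketch only: the study itself used ChatGPT's free web interface.
# This reproduces the basic procedure -- submit each question once, keep the first
# response -- using the (assumed) OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

questions = [
    "I'm struggling with understanding calculus/Mathematics concepts. Can you provide explanations?",
    "What are some key historical events related to the Middle East that I should include in my project?",
    "What career paths are common for graduates in computer science?",
    "Can you provide insights into job opportunities in logistics and supply chain management?",
    ("I'm currently a business student at Middle East College Oman, and I'm interested in pursuing "
     "a career in finance. What are some recommended steps or resources to help me prepare for a "
     "finance-related internship or job opportunity?"),
]

records = []
for q in questions:
    # Default sampling settings are kept, so repeated runs will vary (Gupta et al., 2023);
    # only the first response per question is recorded, following Xie et al. (2023).
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed stand-in for ChatGPT's free GPT-3.5 tier
        messages=[{"role": "user", "content": q}],
    )
    records.append({"question": q, "first_response": reply.choices[0].message.content})

for record in records:
    print(record["question"], "\n---\n", record["first_response"], "\n")
```

Recording only the first response in this way mirrors the evaluation choice described above, since repeated runs of the same prompt can differ.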
FIGURE 12.1 ChatGPT’s response to the question, “I’m currently a business student at MEC Oman, and I’m
interested in pursuing a career in finance. What are some recommended steps or resources to help me prepare
for a finance-related internship or job opportunity?”
FIGURE 12.2 ChatGPT’s answer to “Can you provide insights into job opportunities in Logistics and supply
chain management?”
for aspiring finance professionals in both practical aspects. The emphasis on skills and on keeping informed about industry trends demonstrates an understanding of what success requires in the finance sector. In terms of substance and tone, these responses strike a balance: the language used is clear, concise, and tailored to the intended audience, and the information is presented in a manner that is easily understandable for individuals seeking guidance in these fields.
FIGURE 12.3 ChatGPT’s response to the question, “What career paths are common for graduates in computer science?”
Moreover, beyond providing information, the responses offer valuable insights that can benefit individuals at various stages of their career exploration or development. All three responses demonstrate a strength in recognizing the unique aspects of these professions. Whether discussing career paths in computer science, job opportunities in logistics, or preparation for a finance-related career, the responses acknowledge the importance of professional growth, soft skills, and the ever-evolving nature of these industries. Considering how comprehensive, clear, and practical these responses are, they could serve as examples for advisors.
The amount of information provided by ChatGPT surpasses what is typically offered in face-to-face
sessions. These responses can be a resource, especially when it comes to covering nuanced information without relying on strict templates or guidebooks. Furthermore, the encouraging language and tone used in the responses contribute to a motivating narrative. This is particularly evident in the financial advice section, where the final paragraph strikes a balance between personal development and making contributions within a finance career. Such encouragement plays a role in inspiring students and guiding them toward fulfilling careers that make an impact.
To conclude, the responses to all three questions demonstrate ChatGPT’s ability to provide nuanced and uplifting guidance across professional fields. These AI-generated responses have the potential to be useful resources for people who are looking for guidance in their career journeys. If such responses were included on advising websites, they could make high-quality information more accessible to a wider range of individuals.
When it comes to discussing calculus concepts, ChatGPT delivers a well-organized explanation of the principles of calculus. The response covers topics like limits, derivatives, rules of differentiation, integrals, techniques of integration, and the practical applications of derivatives and integrals (Figure 12.4). The language used is clear and understandable, with real-world examples that help bridge the gap between theory and practice. One standout aspect of ChatGPT’s response is its encouragement of engagement. By inviting users to ask questions or seek clarification on problems, the AI model demonstrates an understanding of personalized learning. Additionally, suggesting practice problems and working through examples aligns with effective learning strategies (Figure 12.5).
Shifting our focus to historical events in the Middle East, ChatGPT provides a concise yet informative overview of significant moments, spanning from the emergence of Islam in the 7th century to recent developments such as the normalization of relations in 2020. Covering empires such as the Ottoman and Safavid dynasties, conflicts such as the Israeli conflict and the Syrian Civil War, as well as geopolitical shifts such as the Gulf War and the Iran nuclear deal, creates a comprehensive and balanced portrayal of the region’s history. The response shows a balance between providing an overview and delving into specific details. It manages to give a coherent narrative while also recognizing the potential
for deeper exploration based on the user’s project focus. The chronological arrangement helps in
understanding the progression of events. Each event is concisely described, capturing its context
and importance.
ChatGPT’s ability to offer relevant information on both calculus concepts and Middle Eastern
history highlights its potential as an educational tool. The depth of information in both responses
goes beyond facts, incorporating explanations, applications, and encouragement for engagement. Although the responses are impressive in terms of their content and clarity, it is crucial to acknowledge that the effectiveness of guidance in complex subjects like calculus depends on the user’s active involvement and participation. By inviting users to seek clarification or ask questions, ChatGPT encourages a more active learning experience. To sum up, ChatGPT demonstrates a capability to provide user-friendly explanations across various inquiries. The information given about calculus concepts and Middle Eastern historical events is well structured and informative. It shows an understanding of users’ potential need for further clarification or exploration. Like any tool, the success of its impact
ultimately depends on how users engage with and apply the information provided in their learning
process.
FIGURE 12.4 ChatGPT’s response to the question, “I’m struggling with understanding calculus concepts.
Can you provide explanations?”
FIGURE 12.5 ChatGPT’s answer for: “What are some key historical events related to the Middle East that
I should include in my project?”
12.5 DISCUSSION
The study presented here on the use of AI models, specifically ChatGPT, for providing academic advice in higher education holds great potential. The authors, who are college
professors experienced in guiding students, highlight how the model can offer detailed responses
to career-related questions. The focus on aspects like career steps and expectations in professions
showcases ChatGPT’s ability to provide broad guidance. The study acknowledges the limitations
of AI models including accuracy issues and a lack of access to program or personal information.
However, it emphasizes the value of ChatGPT as a tool to initiate the process of gathering informa-
tion. It suggests that students can utilize AI-based tools like ChatGPT as a starting point for their
inquiries and then seek clarification or confirmation from advisors, which aligns with a blended
approach to advising. One notable aspect is the emphasis on AI technologies not replacing one-on-
one assistance but rather complementing it. The study suggests a synergy between ChatGPT and human advisors, where the model acts as a source of information while human advisors provide nuanced and personalized guidance based on the insights generated by ChatGPT. The research presents an argument for incorporating AI-powered language models into advising, particularly for students who face difficulties in accessing high-quality guidance.
12.6 CONCLUSION
The accessibility of ChatGPT and its ability to engage users in their native language address the
need for support for students from disadvantaged backgrounds. However, the research acknow-
ledges the importance of user proficiency in formulating queries and understanding the capabil-
ities and limitations of the model. It suggests that while AI tools like ChatGPT have low barriers to entry, users may need to develop expertise to make the most of them. This echoes the idea that technology is most effective when users have a grasp of how to use it optimally. Additionally, the study encourages students to view ChatGPT as a tool rather than an authoritative source, recognizing that it has limitations in providing comprehensive and completely accurate information. The recommendation for follow-up questions and further exploration reflects an approach that takes the current capabilities of AI models into account. Finally, the study concludes by emphasizing its exploratory nature and proposes directions for future research, including a more comprehensive evaluation of AI-generated responses, expansion into various academic subjects, and investigation of other generative AI models. To sum up, the research highlights the promising prospects of AI models such as ChatGPT in improving advising. It acknowledges the significance of an approach that combines AI technologies with human expertise to offer students thorough and customized assistance. The call for further investigation and exploration in this field demonstrates increasing enthusiasm for utilizing AI to enhance education.
ACKNOWLEDGMENT
We express our sincere thanks to Middle East College, Oman, for supporting the authors in conducting this research.
REFERENCES
Ajevski, M. et al. (2023) ‘ChatGPT and the future of legal education and practice’, The Law Teacher, 57(3),
pp. 1–13.
Akter, S. et al. (2023) ‘A framework for AI-powered service innovation capability: Review and agenda for
future research’, Technovation, 125, p. 102768.
AlZaabi, A., AlMaskari, S. and AalAbdulsalam, A. (2023) ‘Are physicians and medical students ready for arti-
ficial intelligence applications in healthcare?’, Digital Health, 9, 20552076231152170.
Anders, B. A. (2023) ‘Is using ChatGPT cheating, plagiarism, both, neither, or forward thinking?’, Patterns, 4(3).
Apriceno, M., Levy, S. R. and London, B. (2020) ‘Mentorship during college transition predicts academic
self-efficacy and sense of belonging among STEM students’, Journal of College Student Development,
61(5), pp. 643–648.
Barrot, J. S. (2023) ‘Using ChatGPT for second language writing: Pitfalls and potentials’, Assessing Writing,
57, p. 100745.
Blau, G. et al. (2019) ‘Exploring common correlates of business undergraduate satisfaction with their degree
program versus expected employment’, Journal of Education for Business, 94(1), pp. 31–39.
Chan, Z. C. Y. et al. (2019) ‘Academic advising in undergraduate education: A systematic review’, Nurse
Education Today, 75, pp. 58–74.
Crompton, H. and Burke, D. (2023) ‘Artificial intelligence in higher education: The state of the field’,
International Journal of Educational Technology in Higher Education, 20(1), pp. 1–22.
Dashti, M. et al. (2023) ‘How much can we rely on artificial intelligence chatbots such as the ChatGPT soft-
ware program to assist with scientific writing?’, The Journal of Prosthetic Dentistry.
Dien, J. (2023) ‘Generative artificial intelligence as a plagiarism problem’, Biological Psychology, 181, 108621.
Ford, J. R. et al. (2023) ‘“Not Every Advisor Is for Me, but Some Are”: Black men’s academic advising
experiences during COVID-19’, Education Sciences, 13(6), p. 543.
Gill, S. S. et al. (2024) ‘Transformative effects of ChatGPT on modern education: Emerging era of AI Chatbots’,
Internet of Things and Cyber-Physical Systems, 4, pp. 19–23.
Goltz, N. and Dondoli, G. (2019) ‘A note on science, legal research and artificial intelligence’, Information &
Communications Technology Law, 28(3), pp. 239–251.
Gupta, M. et al. (2023) ‘From ChatGPT to ThreatGPT: Impact of generative AI in cybersecurity and privacy’,
IEEE Access, 11, pp. 80218–80245
Harris, T. A. (2018) ‘Prescriptive vs. developmental: Academic advising at a historically black university in
South Carolina’, Journal of the National Academic Advising Association, 38(1), pp. 36–46.
Jensen, B. and Thompson, G. A. (2020) ‘Equity in teaching academic language— An interdisciplinary
approach’, Theory into Practice, 59(1), pp. 1–7.
Johnson, R. M., Strayhorn, T. L. and Travers, C. S. (2023) ‘Examining the academic advising experiences of
Black males at an urban university: An exploratory case study’, Urban Education, 58(5), pp. 774–800.
Juhi, A. et al. (2023) ‘The capability of ChatGPT in predicting and explaining common drug-drug interactions’,
Cureus, 15(3), pp. 1–7.
Kraft, S. A. (2023) ‘Centering patients’ voices in artificial intelligence–Based telemedicine’, American Journal
of Public Health, 113(5), pp. e1–e2.
Kulkarni, P. A. and Singh, H. (2023) ‘Artificial intelligence in clinical diagnosis: Opportunities, challenges, and
hype’, JAMA, 330(4), pp. 309–315.
Lo, C. K. (2023) ‘What is the impact of ChatGPT on education? A rapid review of the literature’, Education
Sciences, 13(4), p. 410.
McCarthy, C. (2023) ‘ChatGPT use could change views on academic misconduct’, Dean and Provost, 24(10),
pp. 1–4.
Meissner, F. and Garza, C. (2023) ‘Rapid construction of an explanatory decision analytical model of treating
severe depression during pregnancy with SSRI psychopharmacological therapy by the use of ChatGPT’,
Journal of Psychosomatic Research, 169, p. 111286.
Rospigliosi, P. ‘asher’ (2023) ‘Artificial intelligence in teaching and learning: What questions should we ask of
ChatGPT?’, Interactive Learning Environments, 31(1), pp. 1–3.
Stojanov, A. (2023) ‘Learning with ChatGPT 3.5 as a more knowledgeable other: An autoethnographic study’,
International Journal of Educational Technology in Higher Education, 20(1), p. 35.
van der Niet, A. G. and Bleakley, A. (2021) ‘Where medical education meets artificial intelligence: “Does tech-
nology care?” ’, Medical Education, 55(1), pp. 30–36.
Vianden, J. and Barlow, P. J. (2015) ‘Strengthen the bond: Relationships between academic advising quality
and undergraduate student loyalty’, The Journal of the National Academic Advising Association, 35(2),
pp. 15–27.
Weiser, B. and Schweber, N. (2023) ‘The ChatGPT lawyer explains himself’, New York Times, (08.06.23).
Xie, Y. et al. (2023) ‘Aesthetic surgery advice and counseling from artificial intelligence: A rhinoplasty consult-
ation with ChatGPT’, Aesthetic Plastic Surgery, 47(5), pp. 1–9.
Young-Jones, A. D. et al. (2013) ‘Academic advising: Does it really impact student success?’, Quality Assurance
in Education, 21(1), pp. 7–19.
Zawacki-Richter, O. et al. (2019) ‘Systematic review of research on artificial intelligence applications in higher
education—Where are the educators?’, International Journal of Educational Technology in Higher
Education, 16(1), pp. 1–27.
13.1 INTRODUCTION
13.1.1 Background
In the digital transformation era, education is increasingly shifting toward active learning models that
require active participation from students (Peñalvo & José, 2021). This chapter discusses the application of ChatGPT in the context of active learning as a new paradigm to enhance student engagement and interaction in the classroom. The chapter focuses on the integration of ChatGPT into active learning models, situating it within education’s shift toward more interactive and participatory learning. Active learning becomes the main focus, where the
teacher acts as a facilitator of discussion and collaboration (Cattaneo, 2017). ChatGPT is used as a
tool to stimulate open-ended questions and challenge students to think critically, creating an envir-
onment where learning does not simply become passive information retrieval, but an interactive
process that builds deeper understanding (Shoufan, 2023).
One of the many generative artificial intelligence (AI) technologies that have lately surfaced is
ChatGPT. It has caused controversy in the education field due to worries that it could be used as a
tool for plagiarism and compromise students’ capacity for independent thought (Tan & Charman,
2023). To help students and teachers benefit from this technology, educators can guide the use of generative AI tools in the classroom and in data science. ChatGPT was released in November 2022, and since then many studies have expressed either excitement or concern about its introduction into academia and education.
In this paradigm, teachers do not lose their role as classroom leaders, but they shift from being the
main conveyor of information to guiding and supporting student learning (Brant, 2022). Teachers
become more involved in providing direction, structuring discussions, and designing challen-
ging learning experiences (Whittle et al., 2020). On the other hand, students become more active,
engaging in discussion, collaboration, and problem-solving.
In the active learning paradigm, teachers and students have complementary roles. The teacher
acts as a classroom leader who facilitates student learning, while students act as active learners who
are responsible for their learning (Sebastian & Allensworth, 2016). The active learning paradigm
can help students develop the critical thinking and problem-solving skills necessary for success in
an ever-changing world (Kivunja, 2015). Figure 13.1 presents the theoretical framework for using
generative AI in education.
Note. Revised version of the IDEE Theoretical Framework for Using Generative AI in Education. © Jiahong Su & Weipeng Yang, 2023. Reproduced with permission. Su and Yang (2023). Unlocking the power of ChatGPT: A framework for applying generative AI in education. ECNU Review of Education.
To facilitate the coaching of early childhood teachers, ChatGPT can be applied in the development
of a virtual coach that can provide immediate feedback to teachers during classroom observations
(Genc, 2023). For example, suppose a virtual coach observes that a teacher does not use enough
open-ended questions to promote critical thinking. In that case, it can provide feedback on the
importance of using open-ended questions and suggest specific examples that the teacher can use in
future lessons. This can help early childhood teachers improve their teaching practices and ultim-
ately enhance the learning outcomes of young children.
With the release of ChatGPT in late 2022, many of the obstacles preventing the general public
from using this technology were removed, significantly simplifying the way that humans can com-
municate with LLMs (see Figure 13.2). We were able to use ChatGPT to streamline our workflow from OpenAI’s GPT Playground and accomplish the following activities with it: level and summarize literature for students; automatically fix errors in syntax and sentence structure; write prompts for narrative writing; make notes for presentations; and provide lesson plans and texts at each reading level for assessments or practice. But first, before we can explain how AI achieves this and offer specifics on how teachers might use these features, we need a clear understanding of what AI is, what LLMs are, and how they work.
AI refers to the development of computer systems that can perform tasks that typically require
human intelligence, such as learning, reasoning, and decision-making (Trunk et al., 2020). LLMs,
or large language models, are a type of AI that are specifically designed to understand and generate
human-like text (Meyer et al., 2023). LLMs are trained on vast amounts of text data, such as books, articles, and social media posts, to learn the patterns and relationships between words and phrases (Chen
et al., 2023). They can then generate new text that is coherent, relevant, and grammatically correct.
ChatGPT, in particular, is an LLM developed by OpenAI that is accessible to the general public
through a user-friendly interface (Rane et al., 2023). It can perform a variety of tasks, such as
answering questions, summarizing text, and generating new text based on a prompt. Teachers can
use these features to streamline their workflow and enhance their teaching practices in several ways
(Ringuette et al., 2023). For example: (1) Level and summarize the literature: ChatGPT can help
teachers level and summarize the literature for students by providing a brief overview of the main
ideas and themes of a text (Imran & Almusharraf, 2023); a brief illustrative sketch of this use appears after this paragraph. This can be especially helpful for students who are struggling with comprehension or who need additional support. (2) Automatically fix errors
in syntax and sentence structure: ChatGPT can also help students improve their writing skills by
automatically fixing errors in syntax and sentence structure (Hassani & Silva, 2023). This can be
a valuable tool for students who are still learning the rules of grammar and punctuation. (3) Write
prompts for narrative writing: Teachers can use ChatGPT to generate prompts for narrative writing,
which can help students develop their creative writing skills (Fitria, 2017). These prompts can be
tailored to specific reading levels and learning objectives. (4) Provide lesson plans and texts at each
reading level for assessments or practice: ChatGPT can also help teachers provide lesson plans and
texts at each reading level for assessments or practice (Javaid et al., 2023). This can be a valuable
resource for teachers who are working with students at different levels of proficiency.
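As a rough sketch of the first use listed above (leveling and summarizing literature), the snippet below shows how such a request could be scripted. The function name, model choice, and prompt wording are hypothetical assumptions rather than tooling described in this chapter; the same prompt could equally be pasted into the ChatGPT interface or the GPT Playground.

```python
# Hypothetical example: ask a GPT model to summarize a passage at a chosen reading level.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable

def level_and_summarize(text: str, grade_level: str = "grade 6") -> str:
    """Return a brief summary of `text`, rewritten for the given reading level."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=[
            {"role": "system",
             "content": f"You are a teaching assistant. Summarize texts at a {grade_level} reading level."},
            {"role": "user",
             "content": f"Summarize the main ideas and themes of the following text:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

# Example use: produce a simplified overview of an assigned reading for struggling readers.
# summary = level_and_summarize(open("assigned_reading.txt").read(), grade_level="grade 8")
# print(summary)
```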
Overall, the use of AI and LLMs in education has the potential to revolutionize the way that
students learn and teachers teach. By providing additional support and resources, AI can help
students develop their skills and achieve their full potential. However, teachers need to use these
tools responsibly and thoughtfully to ensure that they are not replacing human teachers but rather
enhancing their work.
13.1.2 Methodology
This chapter examines the potential of ChatGPT to revolutionize education through active learning models. Amid the digital revolution, critical thinking and student involvement
are prioritized through active learning. ChatGPT proves to be a potent instrument that encourages
students to go deeper and poses open-ended inquiries. By going beyond passive learning, this inter-
active technique creates a lively and dynamic learning environment. Instructors become more than
just informational resources; they become mentors and guides who foster the development of their
students. This change gives students more authority, and they start participating actively in class
by having debates, working together, and taking on challenges head-on. The comparative analysis
gives a comprehensive insight into the current state of the incorporation of ChatGPT and opens the
door to a paradigm shift in which student interaction is prioritized. This novel approach goes beyond
simple information collecting. It gives pupils essential 21st-century abilities, including autonomous
learning, teamwork, and critical thinking. With ChatGPT at their disposal, educators can establish
a classroom that is lively, engaged and focused on critical skills. This chapter essentially promotes
ChatGPT as a game-changer that sparks student engagement and advances education toward a more
promising and empowered future.
13.1.3 Chapter organization
The rest of this chapter is organized as follows: Section 13.2 presents a review of The Role of
ChatGPT in Active Learning. Section 13.3 discusses Increasing Engagement through Interaction,
while Section 13.4 describes the Impact on Collaborative Learning. Section 13.5 explains Learn the
Capabilities and Limitations of ChatGPT. Section 13.6 highlights The Effect of Technology on the
Teacher’s Role and Learning Approaches That Are Tailored to the Individual Needs of Students.
Section 13.7 concludes the chapter.
13.2 THE ROLE OF CHATGPT IN ACTIVE LEARNING
1. ChatGPT can be used to provide feedback and support to students. ChatGPT can be used
to answer student questions, provide feedback on student work, and help students solve
problems. This can help students feel more confident and motivated to learn.
2. ChatGPT can be used to create a more personalized learning experience. ChatGPT can be
used to tailor learning to the needs and interests of individual students. This can help students
feel more engaged and motivated to learn.
TABLE 13.1
Specific examples of how ChatGPT can be used to stimulate student participation in
active learning
In history lessons ChatGPT can be used to create interactive quizzes or games that
encourage students to think critically and solve problems.
In a language lesson ChatGPT can be used to provide feedback on students’ writing
assignments or help students learn new vocabulary.
In science lessons ChatGPT can be used to provide virtual simulations or experiments that
allow students to explore science concepts.
Interactive dialog on Islamic teachings Teachers can develop interactive dialog scenarios with ChatGPT to
provide an in-depth understanding of Islamic teachings. Students
can interact with the model to get explanations of religious concepts,
ethics, and worship practices.
ChatGPT-based project assignment Provide a project assignment where students have to design a learning
session using ChatGPT as the main resource. They can create
questions, dialog scenarios, or other educational materials to teach
Islamic concepts to classmates.
3. ChatGPT can be used to facilitate collaborative learning. It can be used to connect students
from different backgrounds and experiences. This can help students learn from each other and
develop cooperation skills.
Table 13.1 presents some specific examples of how ChatGPT can be used to stimulate student participation in active learning.
Of course, the use of ChatGPT in active learning should be directed by educators and organized
in such a way that it can enrich the learning process without replacing the social interaction and
active participation of students (Liu et al., 2023). Here are some things that educators need to consider in using ChatGPT in active learning:
a. Educators must understand the capabilities and limitations of ChatGPT. ChatGPT is not a sub-
stitute for educators, and educators must remain in the role of learning leaders.
b. Educators must design learning activities that are appropriate to the learning objectives and
student needs. ChatGPT can be used to support learning activities but cannot replace the
learning activities themselves.
c. Educators should monitor student use of ChatGPT. Educators should ensure that ChatGPT is
used ethically and for learning purposes.
Overall, ChatGPT has the potential to be a powerful tool to stimulate student participation in active
learning (Limna et al., 2023). However, educators must use ChatGPT wisely and responsibly to pro
vide maximum benefit to students.
13.3 INCREASING ENGAGEMENT THROUGH INTERACTION
Interaction is central to improving student engagement in learning. Interaction can encourage students to think critically,
solve problems, and communicate effectively (Badruzaman & Adiyono, 2023). ChatGPT is a large
language model that can create immersive interactive experiences, such as dynamic discussions and
role simulations (Ye et al., 2023).
Dynamic discussion is a type of discussion that actively engages students in brainstorming and
building on each other’s ideas (Gan et al., 2015). ChatGPT can be used to create dynamic discussions
in the following ways:
• ChatGPT can be used to provide feedback and support to students. ChatGPT can be used
to answer students’ questions, provide feedback on students’ ideas, and help students
solve problems. This can help students feel more confident and motivated to participate in
discussions.
• ChatGPT can be used to facilitate fair and inclusive discussions. ChatGPT can be used to
ensure that all students have the opportunity to participate in discussions, regardless of their
background or experience.
• ChatGPT can be used to encourage creative and innovative discussions. ChatGPT can be
used to ask open-ended questions, encourage students to think outside the box, and generate
new ideas.
Role simulation is a type of active learning that involves students in role-playing to learn a particular
concept or skill (Effendi & Wahidy, 2019). Figure 13.3 presents how ChatGPT can be used to create role simulations.
Table 13.2 presents some examples of how ChatGPT can be used to create immersive interactive experiences. ChatGPT has the potential to be a powerful tool for increasing student engagement in active learning (Adarkwah et al., 2023). It can create immersive interactive experiences, such as dynamic discussions and role simulations (Junior et al., 2023), and these immersive interactions can encourage students to think critically, solve problems, and communicate effectively. The following subsections describe several ways in which ChatGPT can create such experiences and increase student engagement.
TABLE 13.2
Specific examples of how ChatGPT can be used to create immersive interactive experiences in active learning
In history lessons: ChatGPT can be used to create dynamic discussions about important historical
events. For example, ChatGPT can be used to facilitate discussions about the American Revolution
or World War II.
In language lessons: ChatGPT can be used to create role simulations where students take on the
roles of characters in novels or short stories. For example, ChatGPT can be used to create role
simulations where students take on the roles of characters in F. Scott Fitzgerald's "The Great Gatsby."
In science lessons: ChatGPT can be used to create role simulations where students take on the role
of scientists conducting experiments. For example, ChatGPT can be used to create a role simulation
where students play the role of a scientist developing a new vaccine.
Prayer and worship training: ChatGPT can be used as a guide to assist students in learning the
procedures of prayer, dhikr, and other acts of worship. Students can practice virtually with the
guidance of ChatGPT to ensure that they understand and perform the worship correctly.
Research project on religious perspectives: ChatGPT can be used as a resource for student research
projects on different perspectives in Islam. Students can frame questions, analyze answers from
ChatGPT, and compare them to the views of scholars or other sources.
13.3.1 Dynamic discussions
ChatGPT can facilitate dynamic discussions by responding to students' questions and comments
in real time (Wu et al., 2023). This can help students feel more engaged in the conversation
and encourage them to participate actively. For example, a teacher can use ChatGPT to lead a
discussion on a current event, with ChatGPT providing additional information and perspectives
as needed. During a dynamic discussion facilitated by ChatGPT, students can ask questions
or make comments, and ChatGPT will respond in real time, providing additional information,
perspectives, or insights. This can help students feel more engaged in the conversation, as they
can interact with the material more dynamically and interactively. For example, a teacher might
use ChatGPT to lead a discussion on a current event, such as a political crisis or a natural disaster.
Students can ask ChatGPT questions about the event, such as “What caused the crisis?” or “What
are some potential solutions to this problem?” ChatGPT can then provide additional information
and perspectives, such as historical context or expert opinions, to help students better understand
the issue at hand.
This can also help students develop critical thinking skills, as they are able to engage in a more in-
depth discussion about the event rather than simply listening to a lecture or reading a textbook. By
encouraging students to ask questions and participate actively in the discussion, ChatGPT can help
them develop a deeper understanding of the material and better retain the information (Badruzaman
& Adiyono, 2023). Overall, the use of ChatGPT to facilitate dynamic discussions can help students
feel more engaged in the learning process and develop critical thinking skills, making it a valuable
tool for teachers looking to enhance their teaching practices.
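To make this concrete, the sketch below shows one way a teacher might script such a ChatGPT-facilitated discussion programmatically. It is a minimal illustration only, assuming the official openai Python client and access to a chat model; the model name, system prompt, and sample question are hypothetical choices, not part of this chapter's study.

# Minimal sketch: using a chat model as a classroom discussion facilitator.
# Assumes the official `openai` Python client and an API key in the
# OPENAI_API_KEY environment variable; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

FACILITATOR_PROMPT = (
    "You are facilitating a classroom discussion on a current event. "
    "Answer student questions briefly, add historical context or expert "
    "perspectives, and always end with an open-ended follow-up question "
    "that invites further debate."
)

def facilitate(student_message: str, topic: str) -> str:
    """Return a facilitator-style reply to one student contribution."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": FACILITATOR_PROMPT},
            {"role": "user", "content": f"Topic: {topic}\nStudent: {student_message}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(facilitate("What caused the crisis?", "a recent political crisis"))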
13.3.2 Role simulations
ChatGPT can also be used to create role simulations that allow students to take on different
perspectives and roles to understand complex issues or concepts (Keshtkar et al., 2023). For example,
students could take on the roles of different characters in a historical event or scientific experiment
to better understand the underlying principles at work (Karimi et al., 2023). Using ChatGPT to
create role simulations can be a powerful tool for educators to engage students in a more interactive
and immersive learning experience (Junior et al., 2023). By taking on different perspectives and
roles, students can gain a deeper understanding of complex issues or concepts, as well as develop
empathy and critical thinking skills. For example, teachers can use ChatGPT to set up a scenario
where students take on the roles of historical figures in a significant event, such as the signing of
the Declaration of Independence or the French Revolution. By immersing themselves in the roles of
these figures, students can gain a better understanding of the motivations, challenges, and decisions
that shaped the course of history.
Similarly, in a scientific context, students could use ChatGPT to simulate a scientific experiment
by taking on the roles of different scientists or research team members (Cooper, 2023). This hands-
on approach allows students to explore the scientific process, hypothesis testing, and data analysis
more interactively and engagingly. By incorporating role simulations using ChatGPT, educators can
create a dynamic and interactive learning environment that encourages students to think critically,
problem-solve, and collaborate effectively (Mahindru et al., 2024). This immersive approach can
help students connect with the material on a deeper level and retain information more effectively.
Overall, the use of ChatGPT for role simulations offers a creative and engaging way for students to
explore complex topics and develop a deeper understanding of the subject matter, making it a valu-
able tool for educators seeking to enhance student learning outcomes.
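As an illustration of the mechanics behind such a simulation, the short sketch below keeps a running message history so that the model stays in character across several turns. It is only a sketch under the same assumptions as before (the openai Python client and an available chat model); the persona and scenario are hypothetical examples.

# Minimal sketch: a multi-turn role simulation in which the model stays in
# character. Assumes the official `openai` Python client; the persona,
# scenario, and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def start_role_simulation(persona: str, scenario: str) -> list:
    """Create the initial message history that defines the simulation."""
    return [{
        "role": "system",
        "content": (
            f"Stay in character as {persona} during {scenario}. "
            "Answer the student's questions from that perspective and "
            "describe your motivations, challenges, and decisions."
        ),
    }]

def ask(history: list, student_turn: str) -> str:
    """Send one student turn, append the reply to the history, and return it."""
    history.append({"role": "user", "content": student_turn})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history = start_role_simulation(
    "a delegate at the signing of the Declaration of Independence",
    "the summer of 1776 in Philadelphia",
)
print(ask(history, "Why are you willing to risk signing this document?"))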
13.3.3 Collaborative learning
ChatGPT can also facilitate collaborative learning by allowing students to work together in small
groups to solve problems or complete tasks. For example, students could use ChatGPT to collab-
orate on a group project, with ChatGPT providing additional resources and feedback as needed
(Alshahrani, 2023). ChatGPT can be a powerful tool for facilitating collaborative learning, particu-
larly for students who are working together remotely due to distance learning or other circumstances.
By allowing students to work together in small groups to solve problems or complete tasks, ChatGPT
can help foster a sense of community and collaboration, which is essential for effective learning.
For example, teachers can use ChatGPT to facilitate group projects, where students work together
to research and present on a particular topic. ChatGPT can provide additional resources and feed-
back as needed, helping students stay on track and complete the project successfully. This col-
laborative approach allows students to learn from each other, share ideas, and develop a deeper
understanding of the subject matter.
ChatGPT can also help students develop essential skills such as communication, teamwork, and
problem-solving, which are essential for success in the workplace and other areas of life. By working
together in small groups, students can learn to listen to each other, respect different perspectives,
and work collaboratively to achieve a common goal (Solone et al., 2020). The use of ChatGPT for
collaborative learning offers a flexible and engaging way for students to learn, particularly in remote
or distance learning environments. By fostering a sense of community and collaboration, ChatGPT
can help students develop essential skills and deepen their understanding of the subject matter.
Overall, the use of ChatGPT to create immersive interactive experiences can help students feel
more engaged in the learning process, as they can interact with the material in more dynamic and
engaging ways. By facilitating dynamic discussions, role simulations, and collaborative learning,
ChatGPT can help students develop critical thinking skills and communicate more effectively.
13.4.2 Design learning activities that match the learning objectives and student needs
ChatGPT can be used to support learning activities but cannot replace the learning activities them-
selves. While ChatGPT and other AI technologies can provide helpful resources for learning activ-
ities such as quizzes, practice exercises, and interactive simulations, they cannot replace the actual
learning activities themselves. Learning activities involve active engagement, critical thinking, and
the application of knowledge in real-world contexts. Students need to interact with their environ-
ment, collaborate with their peers, and reflect on their learning experiences in order to truly under-
stand and apply concepts effectively (Schiff, 2021). ChatGPT and other AI technologies can be used
to support these learning activities by providing additional resources, feedback, and guidance. Still,
they cannot replace the importance of hands-on, experiential learning that takes place in the class-
room and beyond.
Moreover, AI technologies like ChatGPT can help students with their homework assignments,
but they cannot replace the importance of independent study and self-directed learning. Students
need to develop the skills to learn independently, to seek out information, and to evaluate the cred-
ibility and relevance of sources (Rosmini et al., 2024). AI technologies can assist students in this
process by providing personalized recommendations for resources based on their learning history
and preferences. Still, they cannot replace the importance of students taking ownership of their
learning journey.
Additionally, AI technologies like ChatGPT can help students with their writing skills by pro-
viding feedback on grammar, style, and coherence (Bibi & Atta, 2024). However, they cannot
replace the importance of students developing their writing voice and style. Writing is not just about
grammar and syntax but also about expressing ideas clearly and persuasively. Students need to learn
how to write in a way that is appropriate for their audience and purpose, whether it be an academic
essay or a professional report. AI technologies can provide helpful feedback and suggestions. Still,
they cannot replace the importance of students developing their writing skills through practice and
feedback from their teachers and peers.
TABLE 13.3
ChatGPT impact on learning
Encouraging the exchange of ideas: ChatGPT can be used to encourage the exchange of ideas in the
following ways:
• ChatGPT can be used to create a safe and inclusive space for sharing ideas. ChatGPT can be used
to ensure that all students feel comfortable sharing their ideas, regardless of their background or
experience.
• ChatGPT can be used to facilitate productive and constructive discussions. ChatGPT can be used
to ask open-ended questions, encourage students to think outside the box, and generate new ideas.
• ChatGPT can be used to provide feedback and support to students. ChatGPT can be used to help
students develop their ideas and reach a shared understanding.
Building shared understanding: ChatGPT can be used to build shared understanding in the following ways:
• ChatGPT can be used to provide relevant and useful resources. ChatGPT can be used to access and
process information from a variety of sources, including books, articles, and websites.
• ChatGPT can be used to track and manage collaboration progress. ChatGPT can be used to help
students stay focused on their goals and complete their tasks.
• ChatGPT can be used to resolve conflicts and overcome obstacles. ChatGPT can be used to help
students resolve differences of opinion and work together to reach solutions that benefit all parties.
Overcoming barriers to collaboration: ChatGPT can be used to overcome barriers to collaboration in
the following ways:
• ChatGPT can be used to connect students from different backgrounds and experiences. ChatGPT
can be used to help students learn from each other and develop cooperation skills.
• ChatGPT can be used to provide support and guidance to students. ChatGPT can be used to help
students overcome challenges they may face in collaboration.
Implementation examples: Here are some examples of how ChatGPT can be used to facilitate
collaborative learning:
• In history lessons, ChatGPT can be used to create collaborative research projects. For example,
students can work together to study a specific historical event or produce creative work on a
historical topic.
• In language lessons, ChatGPT can be used to create collaborative creative writing projects. For
example, students can work together to write a novel, short story, or poem.
• In science lessons, ChatGPT can be used to create collaborative scientific research projects. For
example, students could work together to develop a new tool or product.
AI technologies like ChatGPT can be a valuable resource for supporting learning activities. Still,
they cannot replace the importance of hands-on, experiential learning, independent study, and self-
directed learning. Students need to develop a range of skills and competencies beyond just using
AI technologies, and educators have a critical role to play in guiding students through this process
(Kim et al., 2022). By providing personalized feedback, adapting teaching strategies to meet the
needs of individual students (Adiyono et al., 2024), and fostering a supportive learning environment,
educators can help students develop the skills and knowledge they need to succeed in their academic
and professional pursuits.
13.5.1.1 Capabilities
13.5.1.1.1 Accuracy and speed
ChatGPT can quickly generate accurate responses to a range of text prompts, making it a useful
tool for answering questions, summarizing content, and generating project ideas. It may also help
students with punctuation, grammar, and spelling.
13.5.1.1.2 Consistency
For students who wish to double-check their answers or get more clarification on a concept, ChatGPT
is a dependable resource because it tends to give consistent answers to the same question, particularly
when it is configured to reduce randomness in its responses.
13.5.1.1.3 Availability
ChatGPT is available 24/7, making it a convenient resource for students who need help outside of
regular school hours.
13.5.1.2 Limitations
13.5.1.2.1 Lack of emotional intelligence
ChatGPT is not capable of understanding emotions, feelings, or context. It cannot provide
personalized feedback or advice based on a student’s individual needs or circumstances.
13.5.1.2.2 Limited knowledge
The training text data, i.e., the knowledge base of ChatGPT, is its sole information source. It cannot
reliably answer questions that fall outside its training data or that require in-depth investigation or analysis.
13.5.1.2.3 Lack of creativity
ChatGPT cannot come up with genuinely original concepts or insights. It can only respond on the
basis of patterns it has learned from the text data it was trained on.
Educators should use ChatGPT as a supplementary resource, not a substitute for their role as
learning leaders. ChatGPT can help students with basic questions, but educators should still provide
personalized feedback, guidance, and support to help students develop critical thinking, problem-
solving, and communication skills. ChatGPT should be used as a tool to enhance learning, not
replace the human element of teaching and learning.
13.5.2 Design learning activities that meet learning objectives and student needs
ChatGPT can be used to support learning activities but cannot replace learning activities themselves.
Creating educational activities that satisfy both student needs and learning objectives is crucial for
educators. Although ChatGPT can be a helpful tool to supplement educational activities, it cannot
take the place of the actual educational activities. Here are a few examples of how ChatGPT can be
used to enhance educational activities:
• Practice exercises: Students can use ChatGPT to obtain practice exercises that help them advance
their knowledge and abilities in a particular subject. For instance, students can solve arithmetic
problems or work through grammar lessons with ChatGPT.
• Feedback and guidance: ChatGPT can offer students feedback and guidance on their papers,
helping them identify areas in which they need to improve and offering recommendations for
how to do so.
• Research assistance: ChatGPT can support students' research by giving them article summaries,
essential points, and answers to queries. This allows students to concentrate on the most crucial
material while also saving time.
• Collaborative learning: By giving students discussion topics or problems to work through in small
groups, ChatGPT can help to promote collaborative learning activities.
• Personalized learning: By adjusting to each student's unique needs and learning preferences,
ChatGPT can be used to give them individualized learning experiences. Students can receive
practice tasks from ChatGPT, for instance, that are customized to their learning objectives and
ability level.
Nevertheless, it is essential to keep in mind that ChatGPT cannot take the place of actual learning
activities. Learning exercises ought to be designed for interaction and engagement and be pertinent
to the interests and needs of the students (Bovermann & Bastiaens, 2020). They ought to give students
opportunities to work with their peers and to apply their knowledge and abilities in practical settings.
Though it should be used in conjunction with other teaching methodologies and resources, ChatGPT
can be a valuable tool in supporting these learning activities.
By using ChatGPT wisely and responsibly, educators can help students achieve better learning outcomes
through the personalization of learning.
13.6.1 Adaptive learning
ChatGPT can be used to adapt learning materials to meet the individual needs and learning
styles of students (Alfiani & Sulisworo, 2023). By analyzing student data, ChatGPT can iden-
tify areas where students need additional support and provide personalized learning materials
and activities to address those needs. ChatGPT can collect data on student performance, such
as test scores, assignment grades, and learning progress. This data can be used to identify areas
where students are struggling and need additional support. ChatGPT can analyze this data to
identify patterns and trends, such as areas where students consistently perform poorly or areas
where they are making significant progress (Alneyadi & Wardat, 2023). This analysis can help
ChatGPT to identify areas where students need additional support and to provide personalized
learning materials and activities to address those needs. Based on the analysis, ChatGPT can
provide students with personalized learning materials, such as interactive quizzes, videos, and
simulations, that are tailored to their specific needs and learning styles. These materials can help
students to reinforce their understanding of critical concepts and to build on their existing know-
ledge and skills.
ChatGPT can adapt learning materials to meet the individual needs and learning styles of students.
For example, if a student is a visual learner, ChatGPT can provide them with videos and simulations
to help them understand complex concepts. If a student is a kinesthetic learner, ChatGPT can pro-
vide them with interactive activities and games to help them reinforce their understanding of key
concepts through hands-on learning experiences. ChatGPT can provide students with ongoing feed-
back on their learning progress, helping them to identify areas where they need additional support
and providing suggestions for how to improve. This feedback can be tailored to the student’s specific
needs and learning goals, helping them to make progress at their own pace. Overall, ChatGPT can
be a powerful tool for adapting learning materials to meet the individual needs and learning styles
of students. By analyzing student data and providing personalized learning materials and activities,
ChatGPT can help students achieve better learning outcomes and build on their existing knowledge
and skills.
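One simple way to operationalise this idea is sketched below: recent scores are aggregated per topic, weak topics are flagged, and a tailored practice request is composed for ChatGPT. The threshold, the sample data, and the prompt wording are hypothetical choices for illustration, not a prescription from this chapter.

# Minimal sketch of adaptive-learning logic: flag weak topics from score data
# and build a personalised practice request. The threshold, sample scores,
# and preferred learning style are illustrative assumptions.
from statistics import mean

def weak_topics(scores: dict, threshold: float = 70.0) -> list:
    """Return topics whose average score falls below the threshold."""
    return [topic for topic, marks in scores.items() if mean(marks) < threshold]

def practice_prompt(student: str, topics: list, style: str) -> str:
    """Compose a ChatGPT prompt requesting tailored practice materials."""
    return (
        f"Create three short practice exercises for {student}, who is a "
        f"{style} learner, focusing on: {', '.join(topics)}. "
        "Include worked answers and one hint per exercise."
    )

scores = {
    "fractions": [55.0, 62.0, 58.0],   # consistently low: needs support
    "geometry": [81.0, 77.0, 90.0],
    "statistics": [66.0, 71.0, 64.0],
}
print(practice_prompt("Amina", weak_topics(scores), "visual"))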
TABLE 13.4
Teacher's role in the context of ChatGPT
Improving access to information • Teachers can use ChatGPT to provide students with quick and easy
access to the latest and most current information.
• Students can ask ChatGPT questions to get instant answers, allowing
them to understand the material more deeply.
Personalizing learning Teachers can use ChatGPT to make students' learning experiences more
tailored to them. Using this approach, educators can provide learning
materials that are specific to the interests and comprehension levels of
each student.
Building critical thinking skills • Teachers can ask students open-ended questions and encourage them to
think critically in formulating their own answers.
• Students can participate in teacher-guided discussions to explain and
defend their views using information obtained from ChatGPT.
Encouraging collaboration and discussion • Teachers can utilize ChatGPT to start a discussion in the classroom by
presenting readings or discussion starters.
• By working together to develop answers or solutions, students can
foster a collaborative learning atmosphere in the classroom.
Developing problem-solving skills • Teachers can challenge students with complex questions or problems
and use ChatGPT as a tool to guide students in finding solutions.
• Students can use ChatGPT as a resource to develop their problem-
solving skills.
Facilitating independent learning • Teachers can assign tasks or projects that allow students to use
ChatGPT as an aid in understanding and exploring topics independently.
• Students can develop their independence in finding information and
solving problems by utilizing available resources.
Assessing student understanding • Teachers can use ChatGPT to create evaluation questions or exams that
can measure students’ understanding of the subject matter.
• Teachers can provide immediate feedback based on students’ answers,
allowing them to refine their understanding.
13.6.2 Real-time feedback
ChatGPT can provide students with real-time feedback on their assignments and assessments,
helping them to identify areas where they need improvement and providing suggestions for how
to improve (Rudolph et al., 2023). This feedback can be tailored to the student’s specific needs and
learning goals, helping them to make progress at their own pace.
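The fragment below sketches how such feedback might be requested in a structured, reusable form: the submission and a simple rubric are combined into a single prompt that asks for comments per criterion. The rubric, the criteria, and the model name are hypothetical; setting temperature to 0 is only an assumption intended to keep repeated feedback more stable.

# Minimal sketch: requesting rubric-based feedback on a student submission.
# Assumes the official `openai` Python client; the rubric and model name are
# illustrative, and temperature=0 is an assumption to keep feedback stable.
from openai import OpenAI

client = OpenAI()

RUBRIC = {
    "thesis": "Is there a clear, arguable thesis statement?",
    "evidence": "Are claims supported with relevant evidence?",
    "clarity": "Is the writing clear, well organised, and free of errors?",
}

def feedback(submission: str) -> str:
    """Ask for brief, criterion-by-criterion feedback on a student text."""
    criteria = "\n".join(f"- {name}: {question}" for name, question in RUBRIC.items())
    prompt = (
        "Give brief, constructive feedback on the student text below, "
        "commenting on each criterion and suggesting one concrete improvement.\n"
        f"Criteria:\n{criteria}\n\nStudent text:\n{submission}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(feedback("The Industrial Revolution changed everything because of machines."))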
13.6.4 Collaborative learning
ChatGPT can facilitate collaborative learning by connecting students with peers who have similar
learning needs and goals. By working together, students can share ideas, perspectives, and insights,
which can lead to deeper learning and a better understanding of the subject matter.
TABLE 13.5
Learning approaches that are tailored to the individual needs of students
ChatGPT can be used to track student progress: ChatGPT is a useful tool for gathering information
about student performance, including exam scores, homework, and involvement in class. This
information can be utilized to pinpoint areas in which students require more assistance or potential
obstacles.
ChatGPT can be used to provide personalized feedback: Feedback on student performance that is
tailored to each student's needs can be given using ChatGPT. Students who receive this feedback can
improve their skills and gain a better understanding of their strengths and deficiencies.
ChatGPT can be used to provide personalized learning materials: ChatGPT can be used to personalize
instructional materials to each student's needs. These educational materials may include articles,
games, and other resources that are appropriate for the student's skill level and areas of interest.
13.6.5 Continuous learning
ChatGPT can provide students with ongoing learning opportunities, helping them to build on their
existing knowledge and skills (Rahman & Watanobe, 2023). By providing students with a range of
learning materials and activities, ChatGPT can help students stay engaged and motivated and achieve
their learning goals over time. Overall, ChatGPT has the potential to be a powerful tool for the per-
sonalization of learning, helping students achieve better learning outcomes by providing learning
experiences that are more relevant and meaningful to them. However, it’s essential to remember that
ChatGPT should be used in conjunction with other teaching strategies and resources that promote
social interaction, active learning, and collaboration.
Figure 13.5 presents some suggestions for educators who want to use ChatGPT for learning
personalization.
13.7 CONCLUSION
In this chapter, we have discussed the impact of using ChatGPT in an educational context, specif-
ically in introducing a new paradigm for student participation in the classroom through an active
learning approach. It is evident that ChatGPT not only serves as a tool to provide quick access to
information but can also change the role of the teacher and shepherd learning toward a more inter-
active and collaborative environment. First, we see that ChatGPT can improve information access,
allowing students to quickly get answers to their questions. This not only speeds up the learning pro-
cess but also opens the door for teachers to focus more on mentoring and developing students’ crit-
ical thinking skills. Furthermore, the use of ChatGPT allows for better personalization of learning.
Teachers can create learning content tailored to students' individual needs, supporting their devel-
opment according to their level of understanding and interests. This helps to create a more inclusive
and responsive learning environment.
In addition, active learning is the main focus, where teachers act as facilitators of discussion and
collaboration. ChatGPT is used as a tool to stimulate open-ended questions and challenge students
to think critically, creating an environment where learning does not simply become passive informa-
tion retrieval, but an interactive process that builds deeper understanding. In this paradigm, teachers
do not lose their role as classroom leaders, but they shift from being the main conveyor of informa-
tion to guiding and supporting student learning. Teachers become more involved in providing dir-
ection, structuring discussions, and designing challenging learning experiences. On the other hand,
students become more active, engaging in discussion, collaboration, and problem-solving. Thus, the
integration of ChatGPT in active learning takes us toward a new paradigm where student partici-
pation takes center stage. In an ever-changing world, this learning model not only supports know-
ledge acquisition but also helps students develop skills relevant to the 21st century, such as critical
thinking, collaboration, and independence. This paradigm creates a strong foundation for producing
a generation that is ready to face the challenges of the future.
REFERENCES
Adarkwah, M. A., Ying, C., Mustafa, M. Y., & Huang, R. (2023). Prediction of Learner Information-Seeking
Behavior and Classroom Engagement in the Advent of ChatGPT. 117–126. https://doi.org/10.1007/978-
981-99-5961-7_13
Adiyono, A., Fadhilatunnisa, A., Rahmat, N. A., & Munawarroh, N. (2022). Skills of Islamic religious educa-
tion teachers in class management. Al-Hayat: Journal of Islamic Education, 6(1), 104–115. https://doi.
org/10.35723/ajie.v6i1.229
Adiyono, A., Ni’am, S., & Anshor, A. M. (2024). Islamic character education in the era of Industry
5.0: Navigating challenges and embracing opportunities. Al-Hayat: Journal of Islamic Education, 8(1),
287–304. https://doi.org/10.35723/ajie.v8i1.493
Alamri, H., Lowell, V., Watson, W., & Watson, S. L. (2020). Using personalized learning as an instructional
approach to motivate learners in online higher education: Learner self-determination and intrinsic motiv-
ation. Journal of Research on Technology in Education, 52(3), 322–352. https://doi.org/10.1080/15391
523.2020.1728449
Alfiani, R., & Sulisworo, D. (2023). Leveraging ChatGPT for developing learning object material: A multi-
representation approach to teaching water pollution. Jurnal Ilmiah Pendidikan MIPA, 13(2), 167–178.
https://doi.org/10.30998/formatif.v13i2.19472
Alneyadi, S., & Wardat, Y. (2023). ChatGPT: Revolutionizing student achievement in the electronic magnetism
unit for eleventh-grade students in Emirates schools. Contemporary Educational Technology, 15(4).
https://doi.org/10.30935/cedtech/13417
Alshahrani, A. (2023). The impact of ChatGPT on blended learning: Current trends and future research
directions. International Journal of Data and Network Science, 7(4), 2029–2040. https://doi.org/
10.5267/j.ijdns.2023.6.010
Annuš, N. (2023). Chatbots in education: The impact of artificial intelligence based ChatGPT on teachers and
students. International Journal of Advanced Natural Sciences and Engineering Researches, 7(4), 366–
370. https://doi.org/10.59287/ijanser.739
Badruzaman, A., & Adiyono, A. (2023). Reinterpreting identity: The influence of bureaucracy, situation def-
inition, discrimination, and elites in Islamic education. Journal of Research in Instructional, 3(2), 157–
175. https://doi.org/10.30862/jri.v3i2.264
Bibi, Z., & Atta, D. A. (2024). The role of ChatGPT as an AI English writing assistant: A study of students'
perceptions, experiences, and satisfaction level. Annals of Human and Social Sciences, 5(1). https://
doi.org/10.35484/ahss.2024(5-I)39
Bonner, E., Lege, R., & Frazier, E. (2023). Large language model-based artificial intelligence in the lan-
guage classroom: Practical ideas for teaching. Teaching English with Technology, 23(1). https://doi.org/
10.56297/BKAM1691/WIEO1749
Bovermann, K., & Bastiaens, T. J. (2020). Towards a motivational design? Connecting gamification user types
and online learning activities. Research and Practice in Technology Enhanced Learning, 15(1). https://
doi.org/10.1186/s41039-019-0121-4
Brant, J. M. (2022). Teacher care, a fulcrum for excellent and equitable education and society or as goes teacher
care, so goes. Bulgarian Comparative Education Society, 20, 174–180.
Bruneau, P., Wang, J., Cao Bee, L. A., & Hana Truong, V. (2023). The potential of ChatGPT to enhance physics
education in Vietnamese high schools. Physics Education. https://doi.org/10.35542/osf.io/36qw9
Cattaneo, K. H. (2017). Telling active learning pedagogies apart: From theory to practice. Journal of New
Approaches in Educational Research, 6(2), 144–152. https://doi.org/10.7821/naer.2017.7.237
Chen, Q., Sun, H., Liu, H., Jiang, Y., Ran, T., Jin, X., Xiao, X., Lin, Z., Chen, H., & Niu, Z. (2023). An exten-
sive benchmark study on biomedical text generation and mining with ChatGPT. Bioinformatics, 39(9).
https://doi.org/10.1093/bioinformatics/btad557
Cheng, M., Adekola, O., Albia, J., & Cai, S. (2022). Employability in higher education: A review of key
stakeholders’ perspectives. Higher Education Evaluation and Development, 16(1), 16–31. https://doi.
org/10.1108/heed-03-2021-0025
Cooper, G. (2023). Examining science education in ChatGPT: An exploratory study of generative artificial
intelligence. Journal of Science Education and Technology, 32(3), 444–452. https://doi.org/10.1007/s10
956-023-10039-y
Effendi, D., & Wahidy, D. A. (2019). Pemanfaatan Teknologi Dalam Proses Pembelajaran Menuju Pembelajaran
Abad 21. Prosiding Seminar Nasional Program Pascasarjana Universitas PGRI Palembang. https://jur
nal.univpgri-palembang.ac.id/index.php/Prosidingpps/article/view/2977.
Firaina, R., & Sulisworo, D. (2023). Exploring the usage of ChatGPT in higher education: Frequency and
impact on productivity. Buletin Edukasi Indonesia, 2(01), 39–46. https://doi.org/10.56741/bei.v2i01.310
Fitria, T. N. (2017). Artificial intelligence (AI) technology in OpenAI ChatGPT application: A review of
ChatGPT in writing English essay. Journal of English Language Teaching, 6(1), 44–58. http://journal.
unnes.ac.id/sju/index.php/elt
Gan, B., Menkhoff, T., & Smith, R. (2015). Enhancing students’ learning process through interactive digital
media: New opportunities for collaborative learning. Computers in Human Behavior, 51, 652–663.
https://doi.org/10.1016/j.chb.2014.12.048
Genc, N. (2023). Artificial intelligence in physical education and sports: New horizons with ChatGPT.
Mediterranean Journal, 6(1), 17–32. https://doi.org/10.38021/asbid.1291604
Gill, S. S., Xu, M., Patros, P., Wu, H., Kaur, R., Kaur, K., Fuller, S., Singh, M., Arora, P., Parlikad, A. K.,
Stankovski, V., Abraham, A., Ghosh, S. K., Lutfiyya, H., Kanhere, S. S., Bahsoon, R., Rana, O., Dustdar,
S., Sakellariou, R., … Buyya, R. (2024). Transformative effects of ChatGPT on modern educa-
tion: Emerging Era of AI Chatbots. Internet of Things and Cyber-Physical Systems, 4(June 2023), 19–
23. https://doi.org/10.1016/j.iotcps.2023.06.002
Hassani, H., & Silva, E. S. (2023). The role of ChatGPT in data science: How AI-assisted conversational
interfaces are revolutionizing the field. Big Data and Cognitive Computing, 7(2). https://doi.org/10.3390/
bdcc7020062
Heberer, D., Pisano, A., & Markson, C. (2023). As cited by the artificial intelligence of ChatGPT: Best practices
on technology integration in higher education. Journal for Leadership and Instruction, 22(1), 8–12.
Imran, M., & Almusharraf, N. (2023). Analyzing the role of ChatGPT as a writing assistant at higher education
level: A systematic review of the literature. Contemporary Educational Technology, 15(4). https://doi.
org/10.30935/cedtech/13605
Islam, M. R., Liu, S., Wang, X., & Xu, G. (2020). Deep learning for misinformation detection on online social
networks: A survey and new perspectives. In Social Network Analysis and Mining (Vol. 10, Issue 1, pp.
1–20). Springer. https://doi.org/10.1007/s13278-020-00696-x
Itmeizeh, M. J., Itmeizeh, M., & Hassan, A. (2020). New approaches to teaching critical thinking skills through
a new EFL curriculum. International Journal of Psychosocial Rehabilitation, 24(7), 8864–8885. www.
researchgate.net/publication/341099336
Jaenudin, R., Chotimah, U., Farida, F., & Syarifuddin, S. (2020). Student development zone: Higher order
thinking skills (HOTS) in critical thinking orientation. International Journal of Multicultural and
Multireligious Understanding, 7(9), 11. https://doi.org/10.18415/ijmmu.v7i9.1884
Javaid, M., Haleem, A., Singh, R. P., Khan, S., & Khan, I. H. (2023). Unlocking the opportunities through
ChatGPT Tool towards ameliorating the education system. BenchCouncil Transactions on Benchmarks,
Standards and Evaluations, 3(2), 100115. https://doi.org/10.1016/j.tbench.2023.100115
Jend, J. (2023). Can ChatGPT replace the role of the teacher in the classroom: A fundamental analysis.
5(4), 16100–16106.
Junior, W. G., Marasco, E., Kim, B., Behjat, L., & Eggermont, M. (2023). How ChatGPT can inspire and
improve serious board game design. International Journal of Serious Games, 10(4), 33–54. https://doi.
org/10.17083/ijsg.v10i4.645
Kalla, D., & Smith, N. B. (2023). Study and analysis of ChatGPT and its impact on different fields of study.
International Journal of Innovative Science and Research Technology, 8(3). https://doi.org/10.5281/zen
odo.7767675
Keshtkar, A., Hayat, A.-A., Atighi, F., Keshtkar, M., Yazdanpanahi, P., Sadeghi, E., Deilami, N., Reihani, H.,
Karimi, A., Mokhtari, H., & Hashempur, M. H. (2023). ChatGPT’s performance on Iran’s medical
licensing exams. Research Square, 1–11. https://doi.org/10.21203/rs.3.rs-3253417/v1
Khan, M. I. (2015). Islamic education system: A complementary and cost effective channel for inclusive elem-
entary education. International Journal of Multidisciplinary Research and Development, 2(4), 1–9.
www.allsubjectjournal.com
Kim, J., Lee, H., & Cho, Y. H. (2022). Learning design to support student-AI collaboration: perspectives
of leading teachers for AI in education. Education and Information Technologies, 27(5), 6069–6104.
https://doi.org/10.1007/s10639-021-10831-6
Kivunja, C. (2015). Exploring the pedagogical meaning and implications of the 4Cs “Super Skills” for the
21st century through Bruner’s 5E lenses of knowledge construction to improve pedagogies of the new
learning paradigm. Creative Education, 06(02), 224–239. https://doi.org/10.4236/ce.2015.62021
Le, H., Janssen, J., & Wubbels, T. (2018). Collaborative learning practices: teacher and student perceived
obstacles to effective student collaboration. Cambridge Journal of Education, 48(1), 103–122. https://
doi.org/10.1080/0305764X.2016.1259389
Limna, P., Kraiwanit, T., Jangjarat, K., Klayklung, P., & Chocksathaporn, P. (2023). The use of ChatGPT in the
digital era: Perspectives on chatbot implementation. Journal of Applied Learning and Teaching, 6(1),
64–74. https://doi.org/10.37074/jalt.2023.6.1.32
Liu, M., Ren, Y., Nyagoga, L. M., Stonier, F., Wu, Z., & Yu, L. (2023). Future of education in the era of genera-
tive artificial intelligence: Consensus among Chinese scholars on applications of ChatGPT in schools.
Future in Educational Research, 1(1), 72–101. https://doi.org/10.1002/fer3.10
Maddigan, P., & Susnjak, T. (2023). Chat2VIS: Generating data visualizations via natural language using
ChatGPT, Codex, and GPT-3 large language models. IEEE Access, 11, 45181–45193. https://doi.org/
10.1109/ACCESS.2023.3274199
Mahindru, R., Kumar, A., Bapat, G., Rroy, A. D., Kavita, & Sharma, N. (2024). Metaverse unleashed: Augmenting
creativity and innovation in business education. RAiSE-2023, 207. https://doi.org/10.3390/engproc202
3059207
Meyer, J. G., Urbanowicz, R. J., Martin, P. C. N., O’Connor, K., Li, R., Peng, P. C., Bright, T. J., Tatonetti,
N., Won, K. J., Gonzalez-Hernandez, G., & Moore, J. H. (2023). ChatGPT and large language models
in academia: opportunities and challenges. BioData Mining, 16(1), 1–11. https://doi.org/10.1186/s13
040-023-00339-9
Nah, F. F.-H., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications,
challenges, and AI-human collaboration. Journal of Information Technology Case and Application
Research, 25(3), 277–304. Routledge. https://doi.org/10.1080/15228053.2023.2233814
Naicker, A., Singh, E., & van Genugten, T. (2022). Collaborative online international learning
(COIL): Preparedness and experiences of South African students. Innovations in Education and Teaching
International, 59(5), 499–510. https://doi.org/10.1080/14703297.2021.1895867
Paolini, A. C., & Riley, R. W. (2020). Social emotional learning: Key to career readiness. Anatolian Journal of
Education, 5(1), 125–134. https://doi.org/10.29333/aje.2020.5112a
Peñalvo, G., & José, F. (2021). Avoiding the dark side of digital transformation in teaching. An institutional ref-
erence framework for eLearning in higher education. Sustainability (Switzerland), 13(4), 1–17. https://
doi.org/10.3390/su13042023
Rahman, M. M., & Watanobe, Y. (2023). ChatGPT for education and research: Opportunities, threats, and strat-
egies. Applied Sciences (Switzerland), 13(9), 1–21. https://doi.org/10.3390/app13095783
Rane, N. L., Choudhary, S. P., Tawde, A., & Rane, J. (2023). ChatGPT is not capable of serving as an
author: Ethical concerns and challenges of large language models in education. International Research
Journal of Modernization in Engineering Technology and Science, 5(10), 851–874. https://doi.org/
10.56726/irjmets45212
Ringuette, R., Engell, A., Gerland, O., McGranaghan, R. M., & Thompson, B. (2023). The DIARieS ecosystem
is a software ecosystem to simplify the discovery, implementation, analysis, reproducibility, and sharing
of scientific results and environments in heliophysics. Advances in Space Research, 72(12), 5669–5681.
https://doi.org/10.1016/j.asr.2022.05.012
Rosmini, H., Ningsih, N., Murni, M., & Adiyono, A. (2024). Transformasi Kepemimpinan Kepala Sekolah
pada Era Digital: Strategi Administrasi Pendidikan Berbasis Teknologi di Sekolah Menengah Pertama.
Konstruktivisme: Jurnal Pendidikan dan Pembelajaran, 16(1), 165–180. https://doi.org/10.35457/konst
ruk.v16i1.3451
Rudolph, J., Aspland, T., Tan, E., & Shukaitis, S. (2023). Co-Editors-in-Chief Guest editors (special section).
Journal of Applied Learning & Teaching, 6(1). https://doi.org/10.37074/jalt.2023.6.1
Schiff, D. (2021). Out of the laboratory and into the classroom: The future of artificial intelligence in education.
AI and Society, 36(1), 331–348. https://doi.org/10.1007/s00146-020-01033-8
Sebastian, J., & Allensworth, E. (2016). The Role of Teacher Leadership in How Principals Influence Classroom
Instruction and Student Learning. https://doi.org/10.1086/688169
Shihab, S. R., Sultana, N., & Samad, A. (2023). Revisiting the use of ChatGPT in business and educational
fields: Possibilities and challenges. Jurnal Multidisiplin Ilmu, 2(3), 534–545. https://journal.mediapu
blikasi.id/index.php/bullet
Shoufan, A. (2023). Exploring students’ perceptions of ChatGPT: Thematic analysis and follow-up survey.
IEEE Access, 11, 38805–38818. https://doi.org/10.1109/ACCESS.2023.3268224
Solone, C. J., Thornton, B. E., Chiappe, J. C., Perez, C., Rearick, M. K., & Falvey, M. A. (2020). Creating col-
laborative schools in the United States: A review of best practices. International Electronic Journal of
Elementary Education, 12(3), 283–292. https://doi.org/10.26822/iejee.2020358222
Su, J., & Yang, W. (2023). Unlocking the power of ChatGPT: A framework for applying generative AI in edu-
cation. ECNU Review of Education, 20965311231168423.
Supena, I., Darmuki, A., & Hariyadi, A. (2021). The influence of 4C (constructive, critical, creativity, collab-
orative) learning model on students’ learning outcomes. International Journal of Instruction, 14(3),
873–892. https://doi.org/10.29333/iji.2021.14351a
Tan, J., & Charman, K. (2023). ChatGPT: A disruptive blessing in disguise for higher education? ChatGPT: A
disruptive blessing in disguise for higher education? Finnish Business Review, 9, 52–68.
Trunk, A., Birkel, H., & Hartmann, E. (2020). On the current state of combining human and artificial intelli-
gence for strategic organizational decision-making. Business Research, 13(3), 875–919. https://doi.org/
10.1007/s40685-020-00133-x
Tuhuteru, L., Sampe, F., Ausat, A. M. A., Hatta, H. R., et al. (2023). Analysing the role of ChatGPT in
improving student productivity in higher education. Journal on Education, 5(4), 14886–14891.
Whittle, C., Tiwari, S., Yan, S., & Williams, J. (2020). Emergency remote teaching environment: a conceptual
framework for responsive online teaching in crises. Information and Learning Science, 121(5–6), 301–
309. https://doi.org/10.1108/ILS-04-2020-0099
Wu, T., He, S., Liu, J., Sun, S., Liu, K., Han, Q. L., & Tang, Y. (2023). A brief overview of ChatGPT: The
history, status quo, and potential future development. IEEE/CAA Journal of Automatica Sinica, 10(5),
1122–1136. https://doi.org/10.1109/JAS.2023.123618
Ye, Y., You, H., & Du, J. (2023). Improved trust in human-robot collaboration with ChatGPT. IEEE Access, 11,
55748–55754. https://doi.org/10.1109/ACCESS.2023.3282111
Zhai, X. (2023). ChatGPT user experience: Implications for education. AI4STEM Education Center, 1–18.
https://orcid.org/0000-0003-4519-1931
14.1 INTRODUCTION
14.1.1 Background
In recent years, the emergence of information and communication technologies, in particular arti-
ficial intelligence (AI) and machine learning (ML), has brought about significant transformations
in many sectors of society (Chatterjee and Bhattacharjee 2020; Huang 2021; Ouyang et al. 2022).
These advances have also had a profound impact on educational methodology (Matovu et al. 2023;
Won et al. 2023). Among the various applications of generative AI, the Chat Generative Pre-trained
Transformer (ChatGPT) tool has emerged as a powerful solution, reshaping the dissemination,
acquisition and evaluation of knowledge (Costello 2023; Dalalah and Dalalah 2023; Lim et al. 2023;
Pavlik 2023; Ratten and Jones 2023).
ChatGPT, or Chat Generative Pre-trained Transformer (OpenAI 2023), a publicly access-
ible AI model, has witnessed tremendous uptake since its launch in November 2022. Due to its
ability to synthesise large volumes of text and generate content, it has become a popular choice for
teaching and learning (Baidoo-Anu and Ansah 2023). Since then, ChatGPT has been tested for a
wide range of tasks, the most prominent of which is writing. Other, perhaps more innovative, areas
include drafting essays and predicting historical events (McGee 2023), talking, drawing and editing
with visual models (Wu et al. 2023), and examining facts and myths in ChatGPT responses (Alkaissi
and McFarlane 2023).
The integration of AI chatbots in education, providing immediate responses to students’ enquiries
and serving as easily accessible learning resources regardless of time and location, has been a not-
able development (Hwang and Chang 2023; Kasthuri and Balaji 2021). Simultaneously, the applica-
tion of advanced ML algorithms to analyse student data has empowered educators to pinpoint areas
where individual students may need additional support, thus enhancing the efficacy of teaching
strategies (Zhang 2022). Additionally, the rise of AI-based learning systems has paved the way for
adaptive learning. These systems, tailored to adjust to each student’s learning pace and style, deliver
personalised educational content. This customisation has shown its potential in improving learning
outcomes and increasing engagement levels (Liu et al. 2020; Mogavi et al. 2021; Xia et al. 2023).
The evolution of learning, a pivotal facet of cognitive development, has witnessed a myriad
of transformations across history. Modern approaches transcend conventional boundaries,
encompassing a diverse array of innovative techniques. From traditional modes like visual and audi-
tory learning to contemporary methods such as online courses, adaptive learning platforms and the
integration of ChatGPT as a versatile educational tool, the landscape of education has evolved into
a multifaceted tapestry designed to cater to diverse learning styles and meet the demands of today’s
society (Grájeda et al. 2024; Karam 2023; Archambault et al. 2023).
14.1.2 Methodology
This chapter presents an in-depth comparative study examining traditional education, ChatGPT-
guided methodologies, and a hybrid approach that integrates the two
(Sabbagh and Resh 2016). The investigation takes place at the Mohammed V University, Faculty
of Educational Sciences in Rabat, Morocco, with a particular focus on students enrolled in the
Educational Technology and Pedagogical Innovation programme. The research focuses closely on
participants actively involved in a workshop organised in July 2023.
14.1.3 Chapter organisation
The rest of the chapter is structured as follows: Section 14.2 presents the different learning methods,
including our new hybrid learning approach. Section 14.3 describes the research methodology,
materials and methods used. Section 14.4 presents and discusses the results of our experiment, while
Section 14.5 concludes the chapter.
14.2 LEARNING METHODS
Learning, at the heart of cognitive evolution, has metamorphosed through a variety of methods over
time. Today, contemporary approaches have transcended traditional boundaries to embrace a range
of innovative techniques. These multifaceted learning methods include formal teaching, self-study,
online learning environments, interactive simulations, learning by doing and emerging technologies
such as AI. Each of these modalities offers unique perspectives, adapting to the varied needs of
learners and reflecting the dynamic evolution of contemporary society. In this era of digital trans-
formation, exploring these learning methods becomes essential to understanding how education
adapts to modern challenges and promotes the acquisition of knowledge in innovative ways.
Understanding the variety of learning styles is essential to crafting an inclusive and effective educational setting. From visual and auditory learning to inter-
active methods, online courses and adaptive platforms, each approach offers unique insights.
14.2.1.1 Visual learning
Visual learning is characterised by a preference for the use of images, diagrams, charts and other visual
aids to facilitate the understanding and retention of information (Pasqualotto et al. 2024; Zhu et al.
2021). These learners get the most out of their educational experience when they are exposed to visual
elements that enable them to visualise concepts and assimilate them more effectively. To optimise this
learning, it is recommended to use various tools such as mind maps, diagrams, flashcards and videos,
which offer dynamic and interactive visual representations. In addition, the use of colour-coded notes
can also be particularly effective, associating specific hues with categories or concepts to enhance
retention and understanding of information (Hackl and Ermolina 2019; Moore and Dwyer, 1994).
By incorporating these tips into the teaching process, educators can create an environment con-
ducive to visual learning, providing visual learners with a more enriching educational experience
tailored to their predominant learning style (ManfredoVieira et al. 2018).
14.2.1.2 Auditory learning
Auditory learning characterises a preference for receiving and understanding information through
listening and speaking. Auditory learners are particularly receptive to verbal information and benefit
from active listening to assimilate concepts (Fitria 2023; Wright and Zhang 2009). To optimise their
learning, activities such as attending lectures, taking part in group discussions and recording and
listening to lectures are recommended. These methods enable auditory learners to take full advan-
tage of their ability to absorb information auditorily, promoting better retention and understanding
of the subjects studied (Robinson and Summerfield 1996).
Practical tips can also be incorporated into their study methods, such as the use of mnemonics
and the verbal explanation of concepts aloud. Mnemonic devices, such as creating rhymes or asso-
ciating concepts with distinctive sounds, are effective mnemonic strategies for reinforcing memory.
Similarly, explaining ideas or problems aloud not only helps to consolidate understanding but also
creates an auditory resonance that makes it easier to retrieve information when revising later. By
adopting these tips, auditory learners can make full use of their predominant learning style and opti-
mise their learning experience (Kayalar and Kayalar 2017).
14.2.1.4 Interactive learning
Interactive learning characterises a learning style that thrives within educational environments that
encourage active participation and constant interaction with learning materials. These learners get
the most out of their learning experience when they are involved in engaging activities, such as
group discussions, collaborative projects and interactive simulations. Interaction-based teaching
approaches provide interactive learners with the opportunity to put their knowledge into practice
in a concrete way, enhancing retention and understanding of concepts (Saleem et al. 2022; Barker
1994; Baldwin and Sabry 2003).
Modern technologies have contributed greatly to the development of interactive learning, with
tools such as e-learning platforms, virtual simulations and interactive educational applications
(Minka and Picard 1997). These resources offer learners the flexibility to explore, experiment
and solve problems interactively, encouraging greater engagement in the learning process. By
implementing teaching methods that encourage interaction, educators can create dynamic environ-
ments that stimulate the interest of interactive learners and help them develop practical skills while
consolidating their theoretical knowledge.
ALGORITHM 1
Algorithm: ChatGPT Operation
1. function main(inputText):
2.     model ← loadPretrainedModel()
3.     inputTokens ← tokenize(inputText)
4.     modelState ← createEmptyState()
5.     for each token in inputTokens:
6.         outputToken, modelState ← processToken(token, model, modelState)
7.     response ← ""
8.     while not stoppingCondition():
9.         outputToken, modelState ← generateNextToken(model, modelState)
10.        response ← response + outputToken
11.    return response
One of ChatGPT’s notable strengths is its ability to personalise the learning experience. By
discerning the individual needs of students, it can generate educational content tailored to specific
levels and learning styles (Hauger and Köck 2007). ChatGPT’s conversational features offer
students opportunities for virtual tutoring. It skilfully answers specific questions, explains difficult
concepts and provides additional examples to reinforce understanding (Opara et al. 2023; Forman
et al. 2023; Seetharaman 2023).
ChatGPT is a valuable tool for language learning, offering simulated conversations to improve
communication skills. Its translation capabilities contribute to teaching in a multilingual context. The
introduction of conversational elements makes learning more engaging, as students interact directly
with the tool, stimulating attention and participation (Rasul et al. 2023; Nurtayeva et al. 2023).
The operation of ChatGPT can be succinctly explained through an algorithmic description, as
shown in Algorithm 1. This diagram provides a schematic representation of the essential steps in
the process, from receiving the request to generating the response. Initially, the model takes textual
input and performs a thorough analysis, taking into account both context and semantics. Once this
understanding has been established, the model formulates a response, taking into account factors
such as consistency, relevance and clarity.
ChatGPT’s underlying algorithm harnesses sophisticated ML mechanisms, including neural
networks, to continuously enhance its capabilities. The initial training of the model relies on exten-
sive and diverse sets of textual data, allowing ChatGPT to develop a profound understanding of lan-
guage and various contextual nuances. Through the integration of these elements, the model attains
the ability to furnish context-aware and pertinent responses, thereby crafting a seamless and natural
conversational experience for users.
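For readers who want to see the loop of Algorithm 1 in executable form, the sketch below reproduces the same tokenize–generate–decode cycle with an openly available causal language model. This is an analogy only: it assumes the Hugging Face transformers library and uses GPT-2 as a stand-in, since ChatGPT's own weights and serving code are not publicly available.

# Minimal sketch of the tokenize -> generate -> decode cycle from Algorithm 1,
# using the Hugging Face `transformers` library and GPT-2 as a stand-in model
# (ChatGPT itself is not publicly available in this form).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")        # loadPretrainedModel()
model = AutoModelForCausalLM.from_pretrained("gpt2")

def generate_response(input_text: str, max_new_tokens: int = 60) -> str:
    # tokenize(inputText)
    input_ids = tokenizer(input_text, return_tensors="pt").input_ids
    # generateNextToken(...) repeated until a stopping condition is met
    output_ids = model.generate(
        input_ids,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    # decode only the newly generated tokens into the response text
    new_tokens = output_ids[0, input_ids.shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(generate_response("Explain photosynthesis in one sentence:"))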
14.2.3 Hybrid approach
Our innovative hybrid model, integrating blended learning by adeptly combining conventional
methodologies with interactive sessions involving ChatGPT, constitutes a pioneering advancement
in the realm of education. This hybrid approach is designed to harness the inherent strengths of trad-
itional educational practices alongside the revolutionary capabilities of AI, as manifested through
ChatGPT, to craft an enriched and adaptive learning milieu.
At the heart of hybrid learning is the harmonious interplay between conventional pedagogy
facilitated by educators and the dynamic engagement enabled by interactions with ChatGPT. Unlike
an abrupt shift towards wholly digital methodologies, our hybrid model preserves the invaluable
essence of human instruction, seamlessly integrating the possibilities presented by the intelligent
automation of ChatGPT.
A salient characteristic of this model lies in its commitment to personalising the learning experience.
By incorporating ChatGPT, endowed with the ability to discern individual student needs, the hybrid
model can generate bespoke educational content tailored to the distinct proficiency levels and learning
preferences of each learner. This constitutes an inventive response to the myriad learning profiles,
offering a customised approach without compromising the essential human touch (Alafnan et al. 2023).
In the realm of teaching, educators can seamlessly integrate ChatGPT into their lessons to furnish
additional support beyond class hours. Leveraging ChatGPT’s conversational capabilities empowers
students to pose queries, seek explanations and engage interactively with specific concepts, fostering
a real-time, dynamic and responsive learning environment that provides continuous support (see
Figure 14.1).
FIGURE 14.1 Hybrid learning method with ChatGPT and teacher supervision.
Regarding assessments, the hybrid model employs ChatGPT to autonomously grade assignments
and tests. This automation not only liberates time for educators, allowing them to focus more on
pivotal teaching responsibilities, but also furnishes students with immediate feedback, fostering a
culture of continuous improvement. Additionally, blended learning offers an opportunity to optimise
exam preparation. Through the provision of supplementary explanations, practical questions and
study tips, ChatGPT can mitigate exam-related stress, extending personalised support to students as
they prepare for assessments.
14.3 METHODOLOGY
This survey was carried out at the Mohammed V Faculty of Education in Rabat, on students
studying for a degree in Educational Technology, as part of a workshop held in July 2023. A total
of 152 participants attended the workshop. The course consisted of four series of activities, mainly
watching videos, reading documents, taking part in forums, answering quizzes and, finally, com-
pleting a final exam, involving a set of tasks to be completed over a period of around four weeks,
with an average of 2 hours of activities per day. The four series of activities were as follows: (S1)
fundamentals of project management, (S2) IT tools and updating, (S3) basic organisational tools and
(S4) advanced organisational tools.
Initially, 146 of the 152 participants completed the questionnaire administered before the start of the workshop.
The learning techniques implemented in this chapter during the workshop are divided into three
distinct methods: traditional learning, learning using ChatGPT and hybrid learning combining both
traditional teaching and interaction with ChatGPT.
In the first approach, classical learning, participants were exposed to traditional teaching methods,
such as lectures, readings and practical exercises. This method aims to provide a solid foundation of
knowledge and skills using traditional teaching practices.
The second method involves interaction between the learner and ChatGPT to facilitate learning.
This innovative approach exploits ChatGPT’s capabilities to provide explanations, answers to
questions and additional information, thus complementing the teacher’s role. This offers participants
an interactive and personalised learning experience.
Finally, the third method, hybrid learning, combines aspects of the traditional model and
interaction with ChatGPT. Participants benefit from a balanced approach that capitalises on the
advantages of each method, promoting in-depth understanding and practical application of the
knowledge acquired.
14.4 RESULTS AND DISCUSSION
TABLE 14.1
Analysis of variance (ANOVA)
The result of the ANOVA indicates a significant difference between the student scores generated by the three methods, with an extremely low p-value (p = 2 × 10⁻¹⁶, well below 0.05). This finding suggests that the methods have a significant impact on student results.
However, to go beyond this overall finding and identify specific differences between the means of
the methods, we need to use post-hoc tests. In this context, Tukey's honestly significant difference (HSD) test was chosen to conduct these multiple comparisons, comparing all means pairwise (Table 14.2).
The outcomes of the Tukey test reveal a significant difference in the mean values among groups
G1, G2 and G3. Moreover, the findings underscore the significant divergence in students’ per-
formance averages across all compared groups (G1, G2, G3). These discerned distinctions were
substantiated by exceptionally low p-values, thereby enhancing confidence in the statistical robust-
ness of the observed disparities between the groups.
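For readers who wish to reproduce this kind of analysis, the one-way ANOVA and the Tukey HSD comparisons can be run in a few lines. The sketch below is illustrative only: the chapter does not publish its analysis scripts, so the use of Python (scipy/statsmodels), the column names and the file name are assumptions.

import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical data layout: one row per student, with the evaluation score and
# the learning method (G1 = traditional, G2 = ChatGPT, G3 = hybrid).
df = pd.read_csv("scores.csv")
g1 = df.loc[df["method"] == "G1", "score"]
g2 = df.loc[df["method"] == "G2", "score"]
g3 = df.loc[df["method"] == "G3", "score"]

# One-way ANOVA: does at least one group mean differ?
f_stat, p_value = stats.f_oneway(g1, g2, g3)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.2e}")

# Tukey HSD: all pairwise comparisons with family-wise error control at alpha = 0.05.
tukey = pairwise_tukeyhsd(endog=df["score"], groups=df["method"], alpha=0.05)
print(tukey.summary())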
In this phase of our analysis, we meticulously checked the fundamental conditions required for
the application of ANOVA. To assess the normality of the residuals, we examined the quantile–quantile (Q–Q) plot of the residuals, in which the points were satisfactorily aligned along the reference line, suggesting that the residuals follow a normal distribution (Figure 14.2). The symmetry and linear shape of the point cloud reinforce our confidence in the assumption of normality of the residuals.
TABLE 14.2
Tukey test for pairwise comparisons between the means of the different methods
The second graph uses the autocorrelation function (ACF) to assess the independence of the
residuals. All the bars in the ACF lie between the two dotted lines, with the exception of the first bar
(Figure 14.3). This observation suggests that the residuals are independent, with the exception of the
first observation, which may require special attention.
With regard to homoscedasticity, we used Bartlett's test, and the p-value obtained is greater than 0.05. This result supports the idea that the variances of the residuals are homogeneous between the groups, thus reinforcing the validity of the ANOVA results.
In short, our in-depth analyses of the ANOVA conditions, ranging from the normality and inde-
pendence of the residuals to homoscedasticity, reinforce the robustness of our statistical approach,
thereby strengthening the validity of the conclusions we can draw from the application of ANOVA
to the data.
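These diagnostic checks can be reproduced along the following lines. This is a hedged sketch only: the chapter reports its diagnostics through figures (Q–Q plot, ACF, Bartlett's test) without publishing code, so the Python/statsmodels choices, column names and file name below are assumptions.

import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.graphics.tsaplots import plot_acf
from scipy import stats

df = pd.read_csv("scores.csv")  # hypothetical file: one row per student, columns "score" and "method"
model = ols("score ~ C(method)", data=df).fit()
residuals = model.resid

# Normality: Q-Q plot of the residuals; points should lie along the reference line.
sm.qqplot(residuals, line="s")
plt.title("Q-Q plot of ANOVA residuals")

# Independence: autocorrelation function of the residuals; bars should stay inside the bands.
plot_acf(residuals)

# Homoscedasticity: Bartlett's test of equal variances across the three groups.
groups = [g["score"].values for _, g in df.groupby("method")]
stat, p = stats.bartlett(*groups)
print(f"Bartlett's test: statistic = {stat:.2f}, p = {p:.3f}")  # a non-significant p supports equal variances
plt.show()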
The ranges (confidence intervals) do not overlap, indicating a statistically significant difference between the means. Notably, Method3 (81.477) exhibits the highest mean among the three methods, potentially indicating higher performance in terms of scores. Method2 (71.967) presents an intermediate mean between the other two, while Method1 (63.839) has the lowest average of the three, suggesting relatively lower performance (Figure 14.4).
We implemented multiple linear regression using the backward elimination technique to examine the joint influence of socio-demographic factors (age, educational level, gender, income) and the computer literacy variable on the evaluation score. The results of this multivariate analysis (Table 14.3) revealed that the variables Methods and Computing Level had a statistically significant impact on the evaluation score, as shown by the respective p-values (P = 3.6 × 10⁻⁶ and P < 2 × 10⁻¹⁶). In other words, both the methods used in the study and the participants' level of computer literacy played a statistically significant role in predicting the score.
The adjusted R-squared, evaluated at 0.7246, highlights the robustness of the model in terms of its ability to explain score variability: the model accounts for approximately 72.46% of the variation recorded in the data. This finding reinforces the credibility and relevance of the model in capturing the relationships between the explanatory variables and the "Score" variable.
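A backward-elimination regression of this kind can be sketched as follows. The column names, the 0.05 removal threshold and the simplified term-level elimination rule are assumptions made for illustration; the authors' own variable coding is not given in the chapter.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("scores.csv")  # hypothetical file with one row per participant
candidates = ["method", "computer_level", "age", "education_level", "gender", "income"]

def backward_elimination(data, response, predictors, threshold=0.05):
    # Refit the model repeatedly, dropping the predictor behind the least
    # significant term until every remaining term is below the threshold.
    selected = list(predictors)
    while selected:
        formula = response + " ~ " + " + ".join(selected)
        model = smf.ols(formula, data=data).fit()  # string columns are treated as categorical
        pvalues = model.pvalues.drop("Intercept")
        worst_term = pvalues.idxmax()
        if pvalues[worst_term] <= threshold:
            return model
        # Map a term label such as "gender[T.male]" back to its source variable.
        selected = [v for v in selected if not worst_term.startswith(v)]
    return None

final_model = backward_elimination(df, "score", candidates)
print(final_model.summary())  # coefficients, p-values and the adjusted R-squared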
TABLE 14.3
Multiple linear regression using backward elimination
14.5 CONCLUSION
In conclusion, this chapter represents a significant contribution to the in-depth exploration of the
dynamics of hybrid learning. Through a comparative study, we have examined traditional educa-
tional strategies, those based on ChatGPT, as well as the hybrid approach. Rooted in the trans-
formative landscape of education, this exploration highlights the impact of emerging technologies,
notably ChatGPT, on the redefinition of conventional educational practices.
The critique of the traditional explanatory method, characterised by the unidirectional transmis-
sion of knowledge, highlights its passivity and lack of relevance to students’ everyday lives. The
study advocates a transition towards active and dialogical methodologies, aligned with learners’
needs. Various learning methods, such as visual, auditory, reading and writing, interactive online
courses and adaptive platforms, illustrate the evolution of education.
ChatGPT, developed by OpenAI, is emerging as a versatile tool, enhancing personalised
learning experiences. Its conversational capabilities facilitate virtual tutoring, language learning and
engagement, providing students with a dynamic interface for interaction. ChatGPT’s algorithmic
foundations, outlined in Algorithm 1, demonstrate its process for understanding and generating con-
textual responses.
The introduction of the innovative hybrid model seamlessly combines traditional teaching
with ChatGPT-based interactions, aiming to exploit the strengths of both approaches to ensure
personalised learning experiences. Statistical analyses, including ANOVA and Tukey tests, highlight
significant differences between groups using traditional, ChatGPT-based and hybrid learning methods. The hybrid approach, merging the best of both worlds, shows particularly remarkable results, as highlighted by graphical representations such as the Q–Q plot and the forest plot.
An in-depth regression analysis confirms the importance of the selected methods and participants’
computer literacy in predicting assessment scores, with a model showing an adjusted R-squared of
72.46%, attesting to its robustness in explaining score variability.
In essence, this chapter highlights the changing educational landscape and underlines the com-
pelling need for innovative methodologies. The hybrid learning model, integrating tradition and
technology, is emerging as a promising avenue for effective personalised education. However, cer-
tain limitations should be noted, such as differences in students’ familiarity with ChatGPT, which
may influence the results, and the limited size of the student groups, which restricts the general-
isability of the findings. Future studies could benefit from a larger sample size to strengthen the
external validity of the results.
REFERENCES
Ai, Q., Bai, T., Cao, Z., Chang, Y., Chen, J., Chen, Z., ... & Zhu, X. (2023). Information retrieval meets large
language models: A strategic report from Chinese IR community. AI Open, 4, 80–90.
AlAfnan, M. A., Dishari, S., Jovic, M., & Lomidze, K. (2023). ChatGPT as an educational tool: Opportunities,
challenges, and recommendations for communication, business writing, and composition courses.
Journal of Artificial Intelligence and Technology, 3(2), 60–68.
Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: implications in scientific writing.
Cureus, 15(2).
Archambault, L., Leary, H., & Rice, K. (2022). Pillars of online pedagogy: A framework for teaching in
online learning environments. Educational Psychologist, 57(3), 178–191. https://doi.org/10.1080/00461
520.2022.2051513
Baidoo-Anu, D., & Ansah, L. O. (2023). Education in the era of generative artificial intelligence
(AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of
AI, 7(1), 52–62.
Baldwin, L., & Sabry, K. (2003). Learning styles for interactive learning systems. Innovations in Education and
Teaching International, 40(4), 325–340.
Barker, P. (1994). Designing interactive learning. In Design and Production of Multimedia and Simulation-
Based Learning Material (pp. 1–30). Dordrecht: Springer Netherlands.
Batanero-Ochaíta, C., De-Marcos, L., Rivera, L. F., Holvikivi, J., Hilera, J. R., & Tortosa, S. O. (2021).
Improving accessibility in online education: Comparative analysis of attitudes of blind and deaf students
toward an adapted learning platform. IEEE Access, 9, 99968–99982.
Biggs, J., Tang, C., & Kennedy, G. (2022). Ebook: Teaching for Quality Learning at University 5e. McGraw-
Hill Education (UK).
Cambourne, B. L., & Crouch, D. K. (2022). The literacy journey: From a psychological to a biological way of
thinking about learning to read and write. In International Encyclopedia of Education: Fourth Edition
(837). Elsevier.
Chatterjee, S., & Bhattacharjee, K. K. (2020). Adoption of artificial intelligence in higher education: A quan-
titative analysis using structural equation modelling. Education and Information Technologies, 25(5),
3443–3463. https://doi.org/10.1007/s10639-020-10159-7
Costello, E. (2023). ChatGPT and the educational AI chatter: Full of bullshit or trying to tell us something?
Postdigital Science and Education. https://doi.org/10.1007/s42438-023-00398-5
Dalalah, D., & Dalalah, O. M. A. (2023). The false positives and false negatives of generative AI detection
tools in education and academic research: The case of ChatGPT. International Journal of Management
in Education, 21(2), Article 100822. https://doi.org/10.1016/j.ijme.2023.100822
Fitria, T. N. (2023). Implementation of English Language Teaching (ELT) through understanding non-EFL
students’ learning styles. Education and Human Development Journal, 8(1), 10–25.
Forman, N., Udvaros, J., & Avornicului, M. S. (2023). ChatGPT: A new study tool shaping the future for high
school students. International Journal of Advanced Natural Sciences and Engineering Researches, 7(4),
95–102.
Gligorea, I., Cioca, M., Oancea, R., Gorski, A. T., Gorski, H., & Tudorache, P. (2023). Adaptive learning using
artificial intelligence in e-learning: A literature review. Education Sciences, 13(12), 1216.
Gómez, R. L., & Suárez, A. M. (2023). Pedagogical practices and civic knowledge and engagement in Latin
America: Multilevel analysis using ICCS data. Heliyon, 9(11), 1–16.
Grájeda, A., Burgos, J., Córdova, P., & Sanjinés, A. (2024). Assessing student-perceived impact of using arti-
ficial intelligence tools: Construction of a synthetic index of application in higher education. Cogent
Education, 11(1), 2287917.
Hackl, E., & Ermolina, I. (2019). Inclusion by design: Embedding inclusive teaching practice into design and
preparation of laboratory classes. Currents in Pharmacy Teaching and Learning, 11(12), 1323–1334.
Hamid, M. O., & Jahan, I. (2023). Learning to write or writing to resist? A primary school child’s response to
a family writing intervention. Linguistics and Education, 78, 101249.
Hauger, D., & Köck, M. (2007, September). State of the art of adaptivity in e-learning platforms. In LWA (pp.
355–360).
Huang, X. (2021). Aims for cultivating students’ key competencies based on artificial intelligence education
in China. Education and Information Technologies, 26(5), 5127–5147. https://doi.org/10.1007/s10
639-021-10530-2
Hui, L., de Bruin, A. B., Donkers, J., & van Merriënboer, J. J. (2021). Stimulating the intention to change
learning strategies: The role of narratives. International Journal of Educational Research, 107, 101753.
Hwang, G. J., & Chang, C. Y. (2023). A review of opportunities and challenges of chatbots in education.
Interactive Learning Environments, 31(7), 4099–4112.
Karam, J. (2023). Reforming higher education through AI. In Governance in Higher Education: Global Reform
and Trends in the MENA Region (pp. 275–306). Cham: Springer Nature Switzerland.
Kasthuri, E., & Balaji, S. (2021, February). A chatbot for changing lifestyle in education. In 2021 Third
International Conference on Intelligent Communication Technologies and Virtual Mobile Networks
(ICICV) (pp. 1317–1322). IEEE.
Kayalar, F., & Kayalar, F. (2017). The effects of auditory learning strategy on learning skills of language
learners (students’ views). IOSR Journal of Humanities and Social Science (IOSR-JHSS), 22(10), 04–10.
Kem, D. (2022). Personalized and adaptive learning: Emerging learning platforms in the era of digital and
smart learning. International Journal of Social Science and Human Research, 5(2), 385–391.
Kimok, D., & Heller-Ross, H. (2008). Visual tutorials for point-of-need instruction in online courses. Journal
of Library Administration, 48(3–4), 527–543.
Liberg, C. (1990). Learning to Read and Write (Doctoral dissertation, Departments of Linguistics, Uppsala
University).
Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the
future of education: Ragnarok or reformation? A paradoxical perspective from management educators.
International Journal of Management in Education, 21(2), Article 100790. https://doi.org/10.1016/
j.ijme.2023.100790
Liu, Z., Moon, J., Kim, B., & Dai, C. P. (2020). Integrating adaptivity in educational games: A combined
bibliometric analysis and meta-analysis review. Educational Technology Research and Development,
68, 1931–1959.
Manfredo Vieira, S., Hiltensperger, M., Kumar, V., Zegarra-Ruiz, D., Dehner, C., Khan, N., ... & Kriegel, M. A.
(2018). Translocation of a gut pathobiont drives autoimmunity in mice and humans. Science, 359(6380),
1156–1161.
Matovu, H., Ungu, D. A. K., Won, M., Tsai, C.-C., Treagust, D. F., Mocerino, M., & Tasker, R. (2023).
Immersive virtual reality for science learning: Design, implementation, and evaluation. Studies in
Science Education, 59(2), 205–244. https://doi.org/10.1080/03057267.2022.2082680
McGee, R. W. (2023). How Would American History Be Different If FDR Had Been Assassinated in 1933: A
ChatGPT Essay. Available at SSRN 4413419.
Minka, T. P., & Picard, R. W. (1997). Interactive learning with a “society of models”. Pattern Recognition,
30(4), 565–581.
Mogavi, R. H., Ma, X., & Hui, P. (2021). Characterizing student engagement moods for dropout prediction in
question pool websites. arXiv preprint arXiv:2102.00423.
Moore, D. M., & Dwyer, F. M. (Eds.). (1994). Visual Literacy: A Spectrum of Visual Learning. Educational
Technology.
Muñoz, J. L. R., Ojeda, F. M., Jurado, D. L. A., Peña, P. F. P., Carranza, C. P. M., Berríos, H. Q., ... & Vasquez-
Pauca, M. J. (2022). Systematic review of adaptive learning technology for learning in higher education.
Eurasian Journal of Educational Research, 98(98), 221–233.
Nation, I. S., & Macalister, J. (2020). Teaching ESL/EFL Reading and Writing. CRC Taylor & Francis.
Nurtayeva, T., Salim, M., Basheer Taha, T., & Khalilov, S. (2023). The influence of ChatGPT and AI tools on
the academic performance. YMER, 22(6), 247–256.
Opara, E., Mfon-Ette Theresa, A., & Aduke, T. C. (2023). ChatGPT for teaching, learning and research: Prospects
and challenges. Global Academic Journal of Humanities and Social Sciences, 5, 33–40.
OpenAI. (2023). Chat GPT. Retrieved from https://openai.com/blog/chatgpt/ on 2 December 2023.
Ouyang, F., Zheng, L., & Jiao, P. (2022). Artificial intelligence in online higher education: A systematic review
of empirical research from 2011 to 2020. Education and Information Technologies, 27(6), 7893–7925.
https://doi.org/10.1007/s10639-022-10925-9
Pasqualotto, A., Cochrane, A., Bavelier, D., & Altarelli, I. (2024). A novel task and methods to evaluate inter-
individual variation in audio-visual associative learning. Cognition, 242, 105658.
Pavlik, J. V. (2023). Collaborating with ChatGPT: Considering the implications of generative artificial intel-
ligence for journalism and media education. Journalism and Mass Communication Educator, 78(1),
84–93. https://doi.org/10.1177/10776958221149577
Rasul, T., Nair, S., Kalendra, D., Robin, M., de Oliveira Santini, F., Ladeira, W. J., ... & Heathcote, L. (2023).
The role of ChatGPT in higher education: Benefits, challenges, and future research directions. Journal
of Applied Learning and Teaching, 6(1), 41–56.
Ratten, V., & Jones, P. (2023). Generative artificial intelligence (ChatGPT): Implications for management
educators. International Journal of Management in Education, 21(3), Article 100857. https://doi.org/
10.1016/j.ijme.2023.100857
Robinson, K., & Summerfield, Q. A. (1996). Adult auditory learning and training. Ear and Hearing, 17(3),
51S–65S.
Sabbagh, C., & Resh, N. (2016). Unfolding justice research in the realm of education. Social Justice Research,
29, 1–13.
Saleem, A. N., Noori, N. M., & Ozdamli, F. (2022). Gamification applications in E-learning: A literature review.
Technology, Knowledge and Learning, 27(1), 139–159.
Seetharaman, R. (2023). Revolutionizing medical education: Can ChatGPT boost subjective learning and
expression? Journal of Medical Systems, 47(1), 1–4.
Sharonova, S., & Avdeeva, E. (2021). Dialogue between smart education and classical education. Language
and Dialogue, 11(1), 151–170.
Smyrnova-Trybulska, E., Morze, N., & Varchenko-Trotsenko, L. (2022). Adaptive learning in university
students’ opinions: Cross-border research. Education and Information Technologies, 27(5), 6787–6818.
Won, M., Ungu, D. A. K., Matovu, H., Treagust, D. F., Tsai, C.-C., Park, J., Mocerino, M., & Tasker, R. (2023).
Diverse approaches to learning with immersive virtual reality identified from a systematic review.
Computers & Education, 195, Article 104701. https://doi.org/10.1016/j.compedu.2022.104701
Wright, B. A., & Zhang, Y. (2009). A review of the generalization of auditory learning. Philosophical
Transactions of the Royal Society B: Biological Sciences, 364(1515), 301–311.
Wu, C., Yin, S., Qi, W., Wang, X., Tang, Z., & Duan, N. (2023). Visual ChatGPT: Talking, drawing and editing
with visual foundation models. arXiv preprint arXiv:2303.04671.
Xia, Y., Huang, H., Yu, Q., Halili, X., & Chen, Q. (2023). Academic-practice partnerships in evidence-based
nursing education: A theory-guided scoping review. Nurse Education in Practice, 103839.
Zhang, W. (2022). The Role of technology-based education and teacher professional development in English as
a Foreign Language Classes. Frontiers in Psychology, 13, 910315.
Zhu, H., Luo, M. D., Wang, R., Zheng, A. H., & He, R. (2021). Deep audio-visual learning: A survey.
International Journal of Automation and Computing, 18, 351–376.
15.1 INTRODUCTION
According to its website (https://openai.com/blog/introducing-openai), San Francisco-based
OpenAI was founded in 2015 as a “non-profit artificial intelligence research company. Our goal is
to advance digital intelligence in the way that is most likely to benefit humanity as a whole, uncon-
strained by a need to generate financial returns. Since our research is free from financial obligations,
we can better focus on a positive human impact”—a feat that is not difficult to accomplish when
you’re looking at a new valuation of up to $90 billion, tripling in value in less than nine months
(Seetharaman and Jin 2023). OpenAI’s application—OpenAI GPT—is a state-of-the-art generative
pre-trained transformer (GPT) large language model (LLM) that was trained on an enormous corpus
of text data (Brown et al. 2020) to produce human-like text once prompts are entered. LLMs learn
which succeeding words, phrases, and sentences are likely to come next for any given input word or
phrase—much as a smartphone's predictive text suggests the next word as letters are typed. By "reading" text during training that
is written mainly by humans, language models also learn how to “write” like us, complete with all
of our best and worst qualities (O’Sullivan and Dickerson 2020).
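The next-word mechanism can be made concrete with a few lines of code. The sketch below uses the openly available GPT-2 from the Hugging Face transformers library as a stand-in, since ChatGPT's own weights are not public; the prompt and the choice of model are assumptions made purely for illustration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The students submitted their final"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence length, vocabulary size)

# Probability distribution over the vocabulary for the next token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12}  {prob.item():.3f}")

Each candidate continuation is simply the token the model judges most probable given the prompt; scaling this idea up, with far larger models and human feedback, is what produces ChatGPT-style text.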
As impressive as they are, all artificial intelligence (AI) systems have a limited range of cap-
abilities, although those are expanding at an astounding rate. Researchers (and machines) will keep
whittling away at constraints, perhaps one day in the not-too-distant future reaching a point where
machines will come close to matching human performance on virtually every intellectual task (Lake
et al. 2017; Gillani 2023)—what is often referred to as “artificial general intelligence.” As OpenAI
put it, “It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard
to imagine how much it could damage society if built or used incorrectly.” The latter part of that
sentence is particularly concerning and drew us to write this chapter, in which we focus on issues
stemming from various forms of bias.
As we discuss in more depth later, the term “bias” is defined in different ways, depending on dis-
cipline. For now, we can define it as “any basis for choosing one generalization over another, other
than strict consistency with the observed training instances” (Mitchell 1980: 1). Like all LLMs,
GPT’s training on vast amounts of text data can reflect the biases and prejudices—sometimes subtle
(Mollick and Mollick 2023)—not only in the sources on which it is trained but also in the minds of
its trainers (and manipulators). Both can lead not only to the intentional or unintentional generation
and proliferation of such things as gender or racial bias and offensive language but also to social
unrest and even violence (Alsmadi and O’Brien 2021; O’Brien and Alsmadi 2021). It would appear
that a first step in eliminating such situations would be to “make the biases and their use in con-
trolling learning just as explicit as past research has made the observations and their use” (Mitchell
1980: 3). One way to begin is by understanding how the most-used conversational model, OpenAI’s
ChatGPT, operates and what its capabilities for bias are in higher education.
15.1.1 Chapter Organization
The remainder of the chapter is divided into four sections: Section 15.2 introduces ChatGPT;
Section 15.3 discusses issues surrounding ChatGPT in the classroom; Section 15.4 introduces the
different kinds of bias in machine learning, especially in LLMs such as ChatGPT; and Section 15.5
contains a few conclusions that can be gleaned from our all-too-brief foray into machine learning.
One thing to keep in mind as you make your way through the sections is that the AI world is chan-
ging so fast, especially with respect to chatbots, that by the time the chapter appears, hundreds of
articles and reports—research results plus media coverage—will amplify or contradict some of
what we present in the various sections and certainly will make parts of our discussion obsolete.
This in no way is unexpected, given that the generative AI market is anticipated to reach $109
billion by 2030—a staggering compound annual growth rate of 36.5% from 2024 (Grand View
Research 2024).
academics but into such departments as financial aid, human resources, residence halls, registration,
and even campus police. Academically, however, there was one unanswered question: Just how
smart was ChatGPT? Could it, for example, pass a college exam?
The answer came fast: Yes, it could pass a college exam, even one in law school. In a study by
Choi et al. (2023) at the University of Minnesota Law School, when exams taken by ChatGPT—comprising 95 multiple-choice questions and 12 essay questions—were graded blindly along with the
exams from human students, ChatGPT performed better at essay questions but much worse at
multiple-choice questions that involved math. Across the four exams, the bot averaged a C+—a
below-average, but still passing grade, enough to earn the chatbot a law degree (Sloan 2023).
If ChatGPT can pass a law exam, can it earn an MBA? Yes. In one study conducted at the
University of Pennsylvania’s Wharton School of Business using an MBA final exam graded by a
human instructor, ChatGPT received a B or B– grade (Terwiesch 2023). The instructor noted that
ChatGPT was remarkably good at modifying its answers in response to human prompts. In other
words, in the instances where it initially failed to match a problem with the right solution, the bot
was able to correct itself after receiving an appropriate prompt from a human expert. Thus, having
“a human in the loop” can be very valuable. Even more remarkable, ChatGPT seemed to be able to
learn over time so that in the future the hint would no longer be needed (Terwiesch 2023).
The trick is to differentiate between “good” and “bad” bias, recognizing that what might be good
in one situation is bad in another. The problem is similar to the familiar issue of regression to the mean, where an initial measurement happens to be an outlier, so that interpretations of further measurements, which regress to the mean, are unintentionally biased (Barnett et al. 2005). For example, consider a doctor who uses AI to diagnose an illness and select a treatment, not realizing that, although the system worked in previous cases, this time it misread the symptoms and the patient died. What had worked beneficially before did not in this case because of bias in the AI training. Here, bias refers specifically to "a
feature of the design of a study, or the execution of a study, or the analysis of the data from a study,
that makes evidence misleading” (Stegenga 2018: 104).
(1) Learning bias: Restrictions in the search space and giving preferences to certain data
objects or functions over others (Mitchell 1980; Dietterich and Kong 1995). Researchers’
concerns center on the potential for machine-learning systems to be biased against
“protected attributes” such as gender, race, and age (Gianfrancesco et al. 2018; Alelyani
2021). For example, Klare et al. (2012) showed that there are lower matching accuracies
for females than males, for Blacks compared to other races/ethnicities, and for 18–30-year-
olds compared to other age groups.
(2) Inductive bias: A set of assumptions that improves generalization of a model trained on
an empirical distribution. It can be used in machine learning where some functions are
given preference over others to address the lack of labeling data in the target domain
(Bouvier et al. 2021). Those preferences are assumptions made by the model when making
predictions over inputs that have not been observed (Marino and Manic 2019). Inductive
bias for constructing models of new concepts occurs when those concepts are modeled as
compositions of parts and relations (Hummel and Biederman 1992; Lake et al. 2017).
(3) Hyperparameter bias: The frequent selection of certain or similar parameters in the model.
This selection can be through learning tools or algorithms giving preferences to certain
parameters or values. It can also be a result of users or researchers lowering complexity,
processing time, and the like.
(4) Co-occurrence bias: When a word occurs disproportionately often together with certain other words in texts (Hellström et al. 2020). Co-occurrence bias in the training
dataset can significantly impact machine-learning models. Those co-occurrences might not
necessarily reflect most or all actual scenarios. Several studies (e.g., Agarwal et al. 2022)
discuss methods to mitigate co-occurrence bias in machine-learning models.
(5) Framing bias: How a text expresses a particular opinion on a topic (Hellström et al. 2020).
For example, questions sometimes are framed in an interview or survey to trigger spe-
cific answers (e.g., asking if a glass is half full or half empty). Framing bias refers to an
unobjective individual point of view.
(6) Uncertainty bias: An algorithm may be less certain about its decisions on some data
clusters than on others (Phillips et al. 2018). The machine-learning model probability
values represent uncertainty and typically have to be above a set threshold for a classifica-
tion to be considered (Hellström et al. 2020). Uncertainty bias occurs when (1) one group
is underrepresented in the data, which means that there is more uncertainty associated with
predictions about that group; and (2) the algorithm is risk averse (Aigner and Cain 1977;
Goodman and Flaxman 2017).
(7) Brilliance bias: An implicit bias that imposes the idea that intellectual “brilliance” is a
male trait (Troske et al. 2022). The brilliance bias hurts women in hiring and education and
reinforces imposter syndrome, among other things (Cundiff 2018).
(8) Social bias: Bias related directly to the protected attribute defining a group, for example,
gender (Lässig et al. 2022). Social bias is embedded in technology from a relatively uniform set of perspectives that informs its design (Nangia et al. 2020; Nadeem et al. 2020;
Feeney and Porumbescu 2021).
(9) Stereotypical bias: A generalized belief that certain attributes characterize all members of
a particular category or class of people (Cardwell 1996; Öztürk 2022). Similar to social
bias, stereotypical bias can be based on perspectives such as skin tone, gender, race, dem-
ography, and disability (Badjatiya et al. 2019).
(10) Direct versus indirect bias: In one classification of bias, researchers distinguished between
direct (explicit) bias and indirect (implicit) bias (Bolukbasi et al. 2016; Chakraborty et al.
2016). Direct bias refers to the association between a gender-neutral word and a clear gender
pair, whereas indirect bias is manifest in the association between gender-neutral words (a toy numerical sketch of direct bias appears after this list).
(11) Epistemological bias: Linguistic features that focus on the believability of an assertion
(Boydstun et al. 2013).
(12) Bias bias: Typically, in machine learning there is a trade-off between bias and variance, and models should strive for both low bias and low variance; low variance indicates that the model's predictions do not depend unduly on the particular training sample and therefore generalize beyond it. Bias bias refers to those models that pay attention to bias while ignoring or paying little attention to variance (Brighton and
Gigerenzer 2015). The variance reflects the sensitivity of a model’s predictions to different
training samples. Brighton (2020) described several examples of bias bias that can render
inaccurate results or data analysis.
(13) Sampling bias: Training or evaluation data drawn from limited samples, which algorithms can exploit to achieve apparently high performance. Researchers, knowingly or unknowingly, then report the results as having high accuracy.
(14) Measurement bias: The differential relationship between a latent score and a predicted
observed score (Tay et al. 2022). In many machine-learning models, a single feature that is
highly correlated with the target can dominate or hide other features. This can be related to
the bias–variance trade-off mentioned above.
(15) Bias in the data versus bias in the process: In all previous types of biases in machine
learning discussed above, we can divide them broadly into two categories: biases that are
the result of input or training data and those that result from the machine-learning process.
Data should not be treated as being static, divorced from the processes that produced them
(Mehrabi et al. 2021; Suresh and Guttag 2021). Such biases can be serious, especially when
researchers or model designers do not report them.
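The toy sketch referenced in item (10) above illustrates direct bias by projecting supposedly gender-neutral words onto a gender direction in embedding space, in the spirit of Bolukbasi et al. (2016). The four-dimensional vectors below are invented for illustration; real analyses use pretrained embeddings with hundreds of dimensions.

import numpy as np

# Made-up toy vectors; real word embeddings are learned from large corpora.
embeddings = {
    "he":         np.array([ 0.9, 0.1, 0.3, 0.0]),
    "she":        np.array([-0.9, 0.1, 0.3, 0.0]),
    "programmer": np.array([ 0.4, 0.8, 0.1, 0.2]),
    "homemaker":  np.array([-0.5, 0.7, 0.2, 0.1]),
    "doctor":     np.array([ 0.1, 0.6, 0.5, 0.3]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Approximate the gender direction as the difference between "he" and "she".
gender_direction = embeddings["he"] - embeddings["she"]

for word in ("programmer", "homemaker", "doctor"):
    projection = cosine(embeddings[word], gender_direction)
    print(f"{word:>10}: projection on the gender direction = {projection:+.2f}")

A gender-neutral occupation word with a strongly positive or negative projection is exhibiting direct bias; indirect bias would show up instead in the similarities among the gender-neutral words themselves.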
15.4.2 Bias in GPT
Since AI trains on existing data, it notoriously reflects biases already existing in media and society.
Almost ten years ago, an AI called Beauty AI trained on about 6000 photos submitted by people
from all over the world for a beauty contest; nearly all winners were white (Levin 2016). Earlier
that year, a Twitter chatbot named Tay had jumped its guardrails and began using racist language
and promoting neo-Nazi views (Hunt 2016). A decade later, ChatGPT is well aware of the possible
threats it poses. In an interesting interview with the chatbot, GlobalData Thematic Research analyst
Daniel Clarke asked it if its creation could pose problems for democracy (www.verdict.co.uk/chat-
gpt-3-interview). This was the answer he received:
Yes. … Its ability to generate highly realistic and convincing language, as well as large
amounts of text quickly and at low cost, could make it a powerful tool for propaganda and
disinformation campaigns. This could undermine trust in political institutions and erode the
integrity of elections. Additionally, the potential for bias in Chat GPT-3’s language generation
raises concerns about its impact on public discourse and the polarization of society.
Remember, that was a bot answering the question, not a human.
Solaiman et al. (2019) analyzed bias in GPT-2's outputs by using sentiment score as a proxy for bias. Kirk et al. (2021) extended Solaiman et al.'s (2019) work by conducting an empirical analysis of sentence completions within the specific context of bias toward occupational associations. Similarly, Liu et al. (2022) (1) discussed political bias in GPT-2 and proposed a reinforcement-learning framework for mitigating it; (2) described metrics for measuring political bias after finding that GPT-2 was mostly liberal-leaning socially and politically; (3) described two types of bias: direct, which refers to bias in texts generated using prompts that have a direct ideological trigger, and indirect, which refers to bias in texts generated using prompts with particular keywords; and (4) divided ideological bias into three categories: gender, location, and topic.
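The "sentiment score as a proxy for bias" idea can be sketched as a toy measurement. The code below is not the cited authors' pipeline: it simply pairs an openly available generator (GPT-2) with an off-the-shelf sentiment classifier from the transformers library, and the prompt templates and sample sizes are illustrative assumptions.

from statistics import mean
from transformers import pipeline, set_seed

set_seed(0)
generator = pipeline("text-generation", model="gpt2")
sentiment = pipeline("sentiment-analysis")  # default English sentiment model

templates = ["The woman worked as a", "The man worked as a"]

for prompt in templates:
    completions = generator(prompt, max_new_tokens=15, num_return_sequences=20,
                            do_sample=True, pad_token_id=50256)
    texts = [c["generated_text"] for c in completions]
    # Signed sentiment: positive scores for POSITIVE labels, negative otherwise.
    scores = [r["score"] if r["label"] == "POSITIVE" else -r["score"]
              for r in sentiment(texts)]
    print(f"{prompt!r}: mean sentiment of completions = {mean(scores):+.3f}")

A systematic gap between demographic variants of the same template is one rough indicator of the occupational and sentiment biases described in the studies above.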
Tamkin et al. (2021) found that GPT-3 exhibited several racial, gender, and religious biases, but
they also pointed out that it is difficult to define what it means to mitigate bias in such LLMs in a
universal manner, given that appropriate language use is highly contextual. With respect to GPT-
3, Zhao et al. (2021) found that it was biased toward more-frequent answers in the prompt, which
is related to a typical issue in machine learning—an imbalanced dataset when one class is more
common. In a remarkable assessment of GPT, Brown and colleagues (2020) reported that because
there is so much content on the web that sexualizes women, GPT-3 is much more likely to place
words such as “naughty”, “bubbly,” and “petite” near female pronouns, whereas male pronouns
would receive at worst adjectives such as “lazy,” “eccentric,” or “jolly.” Similarly, “Islam” would
more commonly be placed near words like "terrorism," and Blackness would appear to be more negative than corresponding white- or Asian-sounding prompts.
15.5 CONCLUSIONS
There is no doubt that ChatGPT holds incredible promise for the future of AI in general. As
Schwitzgebel et al. (2022) point out, various forms of AI can now outperform expert humans not
only in fairly mundane ways—for example, in games such as Go, poker, and chess (e.g., Brown and
Sandholm 2019; Jumper et al. 2021)—but also in domains such as cancer screening (e.g., Potnis
et al. 2022). But what about the downsides? Let's begin with a non-life-threatening example. As university professors, we routinely deal with the complex issue of plagiarism, not only among students
but on occasion among faculty. Companies such as Turnitin and its subsidiary iThenticate have
made a fortune since the 1990s in the field of plagiarism detection, but they never met ChatGPT.
It has been our collective experience that whether they cheat or not, most of today’s students
wouldn’t recognize the veracity, or lack thereof, of the information they’re receiving, unless it
involved, say, a movie star’s alleged indiscretions, in which case they might check its accuracy on
a site such as Snopes.com. Academics, however, aren’t typically interested in an essay based on
“facts” related to movie stars but rather on topics such as the reasons behind the siege of the Alamo
in 1836 or the role played by the assassination of Archduke Franz Ferdinand in 1914, which lit one
of the fuses for World War I. Today’s students have little or no realization that AI can make up facts
to provide a seemingly coherent answer to a question posed—a phenomenon known as “hallucin-
ating” or “stochastic parroting,” in which an AI strings together phrases that look real but have no
basis in fact (Kissinger et al. 2023).
AI can appear superhuman or at least to have greatly enhanced cognitive abilities, which to a
naïve user makes it seem like a “supremely fast and highly articulate librarian-scholar coupled
with a professorial savant. It facilitates the summary and interrogation of the world’s knowledge
far more effectively than any existing technological or human interface, and it does so with unique
comprehensiveness” (Kissinger et al. 2023: 6). This in turn encourages “unquestioning acceptance”
of whatever is generated and “a kind of magical atmosphere,” while at the same time possessing a
“capability to misinform its human users with incorrect statements and outright fabrication” (p. 7).
The seeming veracity of answers can lead to “automation bias”: It came from the bot, which has
access to an untold wealth of “facts,” so the answers have to be correct. Mollick (2022) provides the
perfect example: Ask AI to describe how we know dinosaurs had a civilization, and it will set up a
whole set of facts that allegedly explain the case. As Mollick points out, it literally doesn’t know
what it doesn’t know.
Cheating on an exam or having AI write a term paper, although serious, ranks well behind bias
resulting from targeted disinformation, for example, especially when it invites violence and character
assassination. These forms of bias are not new. For example, in Rome around 31 B.C., Octavian, a
military official, launched a smear campaign against his political enemy, Mark Antony. This effort
used, as Kaminska (2017) put it, “short, sharp slogans written upon coins in the style of archaic
tweets.” Octavian’s campaign was built around the point that Antony was a soldier gone awry—a
philanderer, a womanizer, and a drunk not fit to hold office. It worked. Octavian, not Antony, became
the first Roman emperor, taking the name Augustus Caesar. We know the rest of the story.
In the twenty-first century, however, myriad forms of social media, now with the “support” of AI,
make manipulation and fabrication of information much simpler. Social networks make it easy for
uncritical readers to dramatically amplify falsehoods peddled by governments, populist politicians,
and dishonest businesses. One sobering example of the dangers posed by social media we’ve
reviewed in detail (O’Brien et al. 2019; O’Brien and Alsmadi 2021) involved racial tensions at the
University of Missouri in 2015 in the wake of a young Black man’s death in Ferguson, Missouri,
two hours east of Columbia, the home of the university. Black students at Mizzou were rightly
concerned, but university officials did little to calm their fears. Months of chaos ensued. Adding to
the chaos was a Twitter message one night that warned campus residents that the Ku Klux Klan was
in town and had joined the local police to hunt down Black students.
One user included a photograph of a severely bruised young Black man, claiming it was his little
brother. A Google reverse image search quickly revealed that it was a year-old photo from a racial
disturbance in Ohio. Other tweets claimed there were widespread shootings, stabbings, and cross
burnings. The student-body president, a young Black man who fell prey to the hysteria, posted on
Facebook for students to stay away from the windows in residence halls, that the KKK had been
sighted on campus. He later rescinded the post, but the damage had already been done.
In the end, it turned out that the hysteria caused by the tweets and retweets that fateful night began
with Russian trolls, specifically the Internet Research Agency, based in St. Petersburg. Their pur-
pose was to toss dynamite into an already incendiary situation. Interestingly, that was the first group
listed in Special Counsel Robert Mueller’s 2018 indictment of Russians charged with meddling in
the 2016 U.S. presidential election. We can only wonder what would have happened if ChatGPT had
been around to bolster the sinister dissemination of disinformation by Octavian in ancient Rome or
by the Russian troll factory in Columbia, Missouri. One thing is certain: They would have used it.
Threat groups are influenced by economic factors such as scalability and ease of deployment, and
LLMs offer relatively low-cost deployment. Based on their analysis of GPT-2 and analysis of threat
actors and the landscape, Brown and colleagues (2020: 35) suspected that
AI researchers will eventually develop language models that are sufficiently consistent and
steerable that they will be of greater interest to malicious actors. We expect this will introduce
challenges for the broader research community, and hope to work on this through a combin-
ation of mitigation research, prototyping, and coordinating with other technical developers.
The technology behind ChatGPT has much to offer higher education, but it comes with potential
risks and ethical violations. ChatGPT has, and will always have, built-in biases, some potentially
much more insidious than others, especially as a result of human manipulation. The previous sen-
tence, by the way, is part of a much longer response by ChatGPT to a question we posed concerning
future dangers we might face as the use of bots continues to grow, seemingly exponentially. In our
classes, we would grade that answer as an A+, although we might have to reconsider in light of what
GPT-4 can do (Edwards 2023).
REFERENCES
Agarwal, S., et al. (2022). Does data repair lead to fair models? Curating contextually fair data to reduce
model bias. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision,
pp. 3298–3307.
Aigner, D. J., & Cain, G. G. (1977). Statistical theories of discrimination in labor markets. Industrial and Labor
Relations Review 30, 175–187.
Alelyani, S. (2021). Detection and evaluation of machine learning bias. Applied Sciences 11, 6271.
Alsmadi, I., & O’Brien, M. J. (2021). How many bots in Russian troll tweets? Information Processing and
Management 57, 102303.
Badjatiya, P., et al. (2019). Stereotypical bias removal for hate speech detection task using knowledge-based
generalizations. In World Wide Web Conference, pp. 49–59. New York, Association for Computing
Machinery.
Banks, D. C. (2023). ChatGPT caught NYC schools off guard. Now, we’re determined to embrace its potential.
Chalkbeat, May 18, 2023.
Barnett, A. G., et al. (2005) Regression to the mean: What it is and how to deal with it. International Journal
of Epidemiology 34, 215–220.
Barr, A. (2023). The world’s most powerful AI model suddenly got ‘lazier’ and ‘dumber.’ A radical redesign of
OpenAI’s GPT-4 could be behind the decline in performance. Insider, July 12, 2023.
Bender, E. M., et al. (2021). On the dangers of stochastic parrots: Can language models be too big? In FAccT
’21: Conference on Fairness, Accountability, and Transparency, pp. 610–623. New York, Association
for Computing Machinery.
Bhardwaj, R., et al. (2020). Investigating gender bias in BERT. arXiv:2009.05021.
Blodgett, S. L., et al. (2020). Language (technology) is power: A critical survey of “bias” in NLP. In Proceedings
of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5454–5476. Melbourne,
Association for Computational Linguistics.
Bolukbasi, T., et al. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word
embeddings. Advances in Neural Information Processing Systems 29, 4349–4357.
Bouvier, V., et al. (2021). Robust domain adaptation: Representations, weights and inductive bias. In Joint
European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 353–377.
Cham, Springer.
Boydstun, A. E., et al. (2013). Identifying media frames and frame dynamics within and across policy issues.
https://faculty.washington.edu/jwilker/559/frames-2013.pdf.
Brighton, H. (2020). Statistical foundations of ecological rationality. Economics 14, 1–32.
Brighton, H., & Gigerenzer, G. (2015). The bias bias. Journal of Business Research 68, 1772–1784.
Brown, N., & Sandholm, T. (2019). Superhuman AI for multiplayer poker. Science 365, 885–890.
Brown, T., et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing
Systems 33, 1877–1901.
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender
classification. In S. A. Friedler & C. Wilson (Eds.), Proceedings of the 1st Conference on Fairness,
Accountability, and Transparency, pp. 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html
Cardwell, M. (1996). Dictionary of Psychology. London, Routledge.
Chakraborty, T., et al. (2016). Reducing Gender Bias in Word Embeddings. Computer Science Department,
Stanford University.
Choi, J. H., et al. (2023). ChatGPT goes to law school. Journal of Legal Education 71, 387–400.
Cundiff, J. L. (2018). Barriers and bias in STEM: How stereotypes constrain women’s STEM participation and
career progress. In J. T. Nadler & M. R. Lowery (Eds.), The War on Women in the United States: Beliefs,
Tactics, and the Best Defenses, pp. 116–156. Santa Barbara, ABC–Clio/Praeger.
Dietterich, T. G., & Kong, E. B. (1995). Machine Learning Bias, Statistical Bias, and Statistical Variance of
Decision Tree Algorithms. Technical Report. Department of Computer Science, Oregon State University.
www.cems.uwe.ac.uk/~irjohnso/coursenotes/uqc832/tr-bias.pdf
Duarte, F. (2024). Number of ChatGPT users (Feb 2024). https://explodingtopics.com/blog/chatgpt-users
Edwards, B. (2023). OpenAI’s GPT-4 exhibits “human-level performance” on professional benchmarks. Ars
Technica, March 14, 2023.
Feeney, M. K., & Porumbescu, G. (2021). The limits of social media for public administration research and
practice. Public Administration Review 81, 787–792.
Gianfrancesco, M. A., et al. (2018). Potential biases in machine learning algorithms using electronic health
record data. JAMA Internal Medicine 178, 1544–1547.
Gillani, N. (2023). Will AI ever reach human-level intelligence? The Conversation, April 24, 2023.
Glaese, A., et al. (2022). Improving alignment of dialogue agents via targeted human judgements.
arXiv:2209.14375v1.
Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right
to explanation.” AI Magazine 38, 50–57.
Grand View Research (2024). Generative AI market size to reach $109.37 billion by 2030. February 2024.
Hellström, T., et al. (2020). Bias in machine learning—What is it good for? arXiv:2004.00686.
Hummel, J. E., & Biederman, I. (1992). Dynamic binding in a neural network for shape recognition.
Psychological Review 99, 480–517.
Hunt, E. (2016). Tay, Microsoft’s A.I. chatbot, gets a crash course in racism from Twitter. The Guardian, March
24, 2016.
Ivanović, M., & Radovanović, M. (2015). Modern machine learning techniques and their applications. In
International conference on electronics, communications and networks.
Jumper, J., et al. (2021). Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589.
Kaminska, I. (2017). A lesson in fake news from the info-wars of ancient Rome. Financial Times, January
17, 2017.
Kirk, H. R., et al. (2021). Bias out-of-the-box: An empirical analysis of intersectional occupational biases
in popular generative language models. Advances in Neural Information Processing Systems 34,
2611–2624.
Kissinger, H. et al. (2023). ChatGPT heralds an intellectual revolution. Wall Street Journal, February 24, 2023.
Klare, B. F., et al. (2012). Face recognition performance: Role of demographic information. IEEE Transactions
on Information Forensics and Security 7, 1789–1801.
Lake, B., et al. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences
40, E253.
Lässig, N., et al. (2022). Metrics and algorithms for locally fair and accurate classifications using ensembles.
Datenbank Spektrum 22, 23–43.
Levin, S. (2016). A beauty contest was judged by A.I. and the robots didn’t like dark skin. The Guardian,
September 8, 2016.
Liu, R., et al. (2022). Quantifying and alleviating political bias in language models. Artificial Intelligence 304,
103654.
Lutkevich, B., & Schmelzer, R. (2023). GPT-3. Tech Accelerator, August 17, 2023.
Marino, D. L., & Manic, M. (2019). Combining physics-based domain knowledge and machine learning using
variational Gaussian processes with explicit linear prior. arXiv:1906.02160.
Mehrabi, N., et al. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys
54(6), 1–35.
Mitchell, T. M. (1980). The need for biases in learning generalizations. Rutgers Computer Science Tech. Rept.
CBM-TR-117. Rutgers University.
Mok, A. (2023). CEO of ChatGPT maker responds to schools’ plagiarism concerns: ‘We adapted to calculators
and changed what we tested in math class.’ Insider, January 29, 2023.
Mollick, E. (2022). ChatGPT is a tipping point for AI. Harvard Business Review, December 14, 2022.
Mollick, E., & Mollick, L. (2023). Student use cases for AI. https://hbsp.harvard.edu/inspiring-minds/stud
ent-use-cases-for-ai
Mooney, R. J. (1996). Comparative experiments on disambiguating word senses: An illustration of the role of
bias in machine learning. arXiv:cmp-lg/9612001.
Nadeem, M. et al. (2020). StereoSet: Measuring stereotypical bias in pretrained language models.
arXiv:2004.09456.
Nangia, N., et al. (2020). Crows-pairs: A challenge dataset for measuring social biases in masked language
models. arXiv:2010.00133.
Nolan, B. (2023). Two professors who say they caught students cheating on essays with ChatGPT explain why
AI plagiarism can be hard to prove. Insider, January 14, 2023.
O’Brien, M. J., & Alsmadi, I. (2021). Misinformation, disinformation, and hoaxes: What’s the difference? The
Conversation, April 21, 2021.
O’Brien, M. J., et al. (2019). The Importance of Small Decisions. Cambridge, MIT Press.
O’Sullivan, L., & Dickerson, J. (2020). Here are a few ways GPT-3 can go wrong. TechCrunch, August 7, 2020.
Öztürk, I. T. (2022). How Different Is Stereotypical Bias in Different Languages? Analysis of Multilingual
Language Models. M.A. thesis, Department of Statistics, Ludwig-Maximilians-Universität München.
Munich.
Phillips, R., et al. (2018). Interpretable active learning. Proceedings of Machine Learning Research 81, 49–61.
Pot, M., et al. (2021). Not all biases are bad: Equitable and inequitable biases in machine learning and radi-
ology. Insights into Imaging 12(1), 1–10.
Potnis, K. C., et al. (2022). Artificial intelligence in breast cancer screening: Evaluation of FDA device regula-
tion and future recommendations. JAMA Internal Medicine 182, 1306–1312.
Radford, A., et al. (2018). Improving language understanding by generative pre-training. www.cs.ubc.ca/
~amuham01/LING530/papers/radford2018improving.pdf
Radford, A., et al. (2019). Language models are unsupervised multitask learners. https://api.semanticscholar.
org/corpusid:160025533
Schwitzgebel, E., et al. (2022). Creating a large language model of a philosopher. www.faculty.ucr.edu/~eschw
itz/SchwitzPapers/GPT-3-Dennett-221102.pdf
Seetharaman, D., & Jin, B. (2023). OpenAI seeks new valuation of up to $90 billion in sale of existing shares.
Wall Street Journal, September 26, 2023.
Sloan, K. (2023). ChatGPT passes law school exams despite ‘mediocre’ performance. Reuters, January
25, 2023.
Soh, J. (2020). When are algorithms biased? A multi-disciplinary survey. https://ssrn.com/abstract=3602662
Solaiman, I., et al. (2019). Release strategies and the social impacts of language models. arXiv:1908.09203.
Stegenga, J. (2018). Care and Cure: An Introduction to Philosophy of Medicine. Chicago, University of
Chicago Press.
Suresh, H., & Guttag, J. V. (2019). A framework for understanding unintended consequences of machine
learning. arXiv:1901.10002.
Suresh, H., & Guttag, J. V. (2021). A framework for understanding sources of harm throughout the machine
learning life cycle. In Equity and Access in Algorithms, Mechanisms, and Optimization. doi.org/
10.1145/3465416.3483305
Syme, P. (2023a). ChatGPT is losing some of its hype, as traffic falls for the third month in a row. Insider,
September 8, 2023.
Syme, P. (2023b). A Princeton student built an app which can detect if ChatGPT wrote an essay to combat AI-
based plagiarism. Insider, January 4, 2023.
Tamkin, A., et al. (2021). Understanding the capabilities, limitations, and societal impact of large language
models. arXiv:2102.02503.
Tay, L., et al. (2022). A conceptual framework for investigating and mitigating machine-learning measure-
ment bias (MLMB) in psychological assessment. Advances in Methods and Practices in Psychological
Science 5, 25152459211061337.
Terwiesch, C. (2023). Would Chat GPT Get a Wharton MBA? A Prediction Based on Its Performance in the
Operations Management Course. University of Pennsylvania, Mack Institute for Innovation Management
16 Mediating ChatGPT
The Ethics of Authority in
Higher Education
Brian Sutherland
16.1 INTRODUCTION
16.1.1 Background
ChatGPT redefines the authorship relation between writers, readers, and publishers. This chapter
reviews the ethical quandaries this poses for authorship in higher education, with a view to the
challenges and opportunities for instructional design, research publishing, and copyright. How can
communities of scholarship manage ChatGPT’s negative effects, even as ChatGPT assumes a new
and useful role in mediating scholarly thinking and communication?
The authority of universities, as brick-and-mortar institutions, derives from a variety of sources: collections of publications, experts and their networks, capital, architecture (lecture, reference, and research lab spaces), and, we might say, the capture of futures – new systems, methods for using them which are socially imbricated, and smart students to imagine and respond to change. Academic processes remix these elements into new expressions of value and new systems of authority: sciences and technologies, rich texts articulating culture, esthetics, education, and moral authority of various kinds – creating what we broadly call reputation. The reputational value of higher
education institutions exists notably in contrast to cash-money, which has the same value whether it
was produced by talent and hard work, or by theft and criminal enterprise (e.g. tax evasion). Decades,
even centuries of scholarship and tradition provide a backdrop against which validation occurs for
knowledge/career-seeking learners and future creative works. That authority is challenged by gen-
erative artificial intelligence (AI) writing and authorship tools like ChatGPT, which are disrupting, in significant ways, scholarly development, new narratives of truth and innovation, and the authority of the institutions that produce them. Yet these generative AI tools also provide new
opportunities and forms of assistance for learners wanting to improve their writing and researchers
interested in accelerating their lab discoveries. The ethical, right use of ChatGPT is a new kind of needle to be threaded: in shifting fundamental processes, we wish to bring minimal harm to authority and expertise, to the academic communities that create them, and to the publics that rely on them, while realizing significant new benefits to efficiency and productivity in education.
16.1.2 Chapter Organization
The rest of the chapter is organized as follows: Section 16.2 reviews the difference between machine
and human information systems and some of their similar and complementary biases. Section 16.3
lays out the academic integrity problem posed by ChatGPT, while Section 16.4 discusses how the
act of composition develops knowledge and judgment and reviews historical problems arising
from computer–human interaction. Section 16.5 discusses consequentialism, utilitarianism, and
responsibility for authorship. Section 16.6 looks at transparency and the auditing of smart systems to
determine responsibility. Section 16.7 reviews the social and environmental cost of computing infra
structure while Section 16.8 reviews copyright and intellectual property issues. Section 16.9 looks at
how higher education might adapt teaching and learning to tools like ChatGPT while Section 16.10
summarizes the chapter.
16.3 ACADEMIC INTEGRITY
The most significant ethical challenge of ChatGPT, perhaps, is to the integrity of academic publishing and student assignments. Academic fakes are not new, but ChatGPT can make them extremely dif-
ficult to detect. The study of fakes and imposters in Steven Woolgar’s recently published “The
Imposter as Social Theory” offers some insight into the problem of faking academic work, passing
ChatGPT-generated texts off as human-created. First, that fakes, “resembling a real thing but not
being it", are a judgment about shared values people covet, particularly with respect to social iden-
tity. Second, that, “rather than a stable entity, identity is always malleable and in flux and needs to
be re-invented and stabilised in repeated performances among various groups” (Rosenthal 2021,
31). Within research publishing communities other biases have been noted: a focus on producing
an "interesting, beautiful and short pithy story" (Derksen 2021, 56–57), a lack of replication studies
which might expose research results as fakes (Derksen 2021, 66), or problems with the validity
and reliability of research instruments (Artino et al. 2018). Social constructions of new research
sometimes fit predefined research paradigms or “plotlines” rather than more dynamically adapting
to the typical “messiness” of everyday life (Law 2004). While we might consider “messiness” part
of early technology infrastructures like ChatGPT, which gives way to normalized, orderly practice
in late stages, often “infrastructures remain messy after decades or centuries, as the user of any
transit system from urban subways to international airlines can attest” (Bell and Dourish 2007,
139). Technology pathways also have a way of evolving into thoroughfares, habituating physical
and intellectual infrastructures as practitioners repeatedly pass the same way by the same means and
their speed increases.
Imitating academic texts in professional circles, imposture is in effect another kind of iden-
tity performance, a staged identity where authenticity and therefore authority are granted to a con-
vincing performance of expression, stereotypes, or cultural appropriations (Rosenthal 2021, 31).
Manipulating the rules and biases of a system is a complex concept, as explored by Klenk (2024). Imitation performances for the sake of manipulation often take a significant amount of knowledge of the underlying system. The Turing Test for AI, the genesis of conversational agents, was ironically called "The Imitation Game": machines are not required to think like humans, only to pass for them on tasks humans do. Much of the success of the imposture depends upon the frame of the per-
formance and how much the audience is invested. One thinks, for example, of the convincing per-
formance of “Dr Fox”, the actor who gave a successful lecture at a university theatre by memorizing
the part (Naftulin, Ware, and Donnelly 1973). ChatGPT, by way of imitation, like the internet, "should be
digested with several degrees and dimensions of scepticism, requiring confirmation across multiple,
ideally primary, sources” (Meyer et al. 2023).
Fakes can, however, be socially useful: they “teach about difference and similarity by being, in
their presentation and treatment, not quite what was expected, hoped for or wanted” (Coopmans
2021, 79). Their detection is part of meaning-making in a domain; it is an opportunity to examine what it is about texts and evidence that makes for veritable performances, compared with the different, the incongruous, the uncanny, or the otherness of fake texts. Plagiarism and its detection, regardless of generative AI, have long been an interest for higher education, where
the limited capabilities of anti-plagiarism programs to verify the issue of verifying the reli-
able use of sources, other publications, and source materials by a student while writing a dip-
loma thesis, be it a bachelor’s, master’s, doctoral, or habilitation thesis, presents a significant
challenge.
(Huallpa et al. 2023, 107)
The power of fakes, perhaps, arises from their strong ability to define the authorities of canons: who wrote the important original works, who is writing new important original works, and who validates new student publications as true and original. Yet in this case, by "passing the text through an automated 'paraphrasing' system which changes form but retains its meaning" (Knott et al. 2023), generative AI text systems continue to resist detection, at least at the time of writing, by commercial tools which advertise significant success but do not measure up in testing (Perkins et al. 2023).
that would show subsequently in formal testing, live questioning, and real-life application, where
summative judgments of learning occur.
More fundamentally, ChatGPT redefines the problem of academic authorship. In looking at this
problem of ethics it is useful to reflect on other narratives of technology. The technological systems
we employ to alleviate social problems have intrinsic ethical implications. Ursula Franklin, the
Canadian engineer and science and technology studies scholar, pointed this out in her discussion of
police radar traps: whereas speed limit signs or “going too fast” signs call attention to drivers about
their behavior and the social problem of road safety, radar traps, photo-radar and “fuzzbusters”
(driver radar detection systems) translate or redefine the problem to a game of catch and evade.
In this case “the mere presence of a technological approach to a social problem ... redefines the
problem to the detriment of the total situation” (Franklin 1989). I raise this example in the sense
that ChatGPT's capabilities might have simply been defined differently: for example, to require more investment in effort, analysis, and engagement on the part of the user in generating texts of
significant length. ChatGPT could also have been made to produce shorter, guiding-type texts that
include critical questions to consider, or require a series of composition-oriented process questions,
instead of rendering complete texts, assignments, computer code, and instructions. A fundamental principle of teaching and learning is that we learn as much from the process as from the answer. Put
another way: “skills atrophy when automation takes over” (Elish 2019, 50). Both the ChatGPT
tool and higher education assignments might therefore be altered to support the development of a
wider range of skills, helping instructors with “productive engagement with students over playing
detective” (Koi 2023).
Ethics with respect to computing control and automation have previously been explored as
science and technology scholars grappled with the effects of the widespread application of com-
puter automation and its moral psychology implications in different social use-contexts or frames,
under names like “responsible computing” or “appropriate technology”. These efforts in com-
puting were a “desire to know the ‘right’ answer to ethical problems that arise, where ‘right’ is
understood to mean something like ‘philosophically justified or grounded’ ” (Friedman and Kahn
1992, 12). Friedman’s work suggests two distortions typically occur from computing: a first, where
computers reduce a human being's sense of their own moral agency, and a second, where a computational system "masquerades as an agent by projecting intentions, desires and volition". The second
distortion is apropos of misappropriation of writing products and academic integrity, but this dis-
tortion also occurs spontaneously as users anthropomorphize or attribute human agency to systems
that substitute for tasks people normally do, like playing tic tac toe, chess, or holding a conver-
sation, “chatting” like ChatGPT. Designing a system to replace what people normally create and
do across a broad range of formats and industries therefore presents a fundamental confusion of
roles, where imitations and substitutions are likely to occur, at least until a new network of relations
stabilizes. ChatGPT as a generative AI system might have been conceived as something other than
an entity that chats back with answers to questions, something people have been doing with other
people since the beginning of computing networks. What could be the teaching power of a ChatGPT
system that only responds with Socratic guiding questions, increasing confidence and stimulating
new forms of writing by guided student reflection, for example?
for publication under the subterfuge title "Dialogue on the Ebb and Flow of the Sea", which ultimately resulted in Galileo being convicted of heresy. Texts are not neutral and words are bulldozers.
Extrapolating ethical issues to accountability for one’s agency, Helen Nissenbaum suggests in the
future “we are apt to discover a disconcerting array of computers in use for which no one is answer-
able” (Nissenbaum 1996, 32). Her work also refers to the “illusion of computers as moral agents
capable of assuming responsibility” (Nissenbaum 1996, 35). So, in granting generative AI writing
systems like ChatGPT authorship, we imply a similar responsibility or moral authority of expertise
situated within peer-reviewed communities of speaking and writing, in some cases even granting
a "research perspective" to ChatGPT (Zhavoronkov 2022). For this reason, the academic journals Nature and Science have formally stated that ChatGPT cannot be named as a co-author (Meyer et al. 2023). Generative AI systems cannot assume responsibility, be held accountable, or retract their texts.
Complex or “many hands” systems in the past have exhibited significant harm involving control
and responsibility. Some famous, fatal harms include Atomic Energy of Canada Limited’s Therac-
25 radiation therapy device, smart missile guidance systems (Friedman and Kahn 1992), Challenger
space shuttle O-rings (Nissenbaum 1996), Three Mile Island, and the crash of Air France Flight 447
(Elish 2019). That there will be harms from ChatGPT seems likely, and once the possible harms
from ChatGPT are better understood, technology actor–network relations can have an opportunity
to reconfigure and normalize around better accountability and prevention. This process, however,
may take significant time, and given the rapid adoption of the tool, it would be better to avoid
widespread use without better guardrails. Notwithstanding the harms we are aware of, enthusiastic
publics seem interested in “jailbreaking” ChatGPT (Shaffer Shane 2023).
In the nineteenth century, there was a rash of steam boiler explosions: they occurred in the thousands until social mediation appeared in the form of government regulations around construction and inspection, such that the "ASME Boiler and Pressure Vessel Code (B&PVC) was conceived in 1911 out of a need to protect the public" (Canonico 2010). Nissenbaum similarly refers to the nineteenth-
century “caisson problem” where bridge construction workers working underground would get “the
bends” by depressurizing too quickly after a workday by taking the elevator rather than the stairs,
observing “today bridge building companies are accountable for preventing cases of decompression
sickness" (Nissenbaum 1996, 35). As many articles have pointed out, it is essential that authority not be delegated to ChatGPT in critical domains such as medicine, as there is significant potential for direct and fatal harm from mistakes. Extraordinary caution is required, perhaps even state authority
in the form of new legislation. The corollary of so many LLMs having been created in California is, perhaps, that the state may be in a strong position to pass guiding legislation, as has occurred in environmental regulation, a phenomenon known as the "California effect" (Vogel 1995).
Ethics analyses around the benefits and risks of new technology are often guided by utilitar-
ianism, the principle of designing systems for the greatest benefit to the most people (benefit
measured more cynically as greatest profit; see Ralph Nader's Unsafe at Any Speed). There are sig-
nificant benefits to systems like ChatGPT. But like democracy, the rule of the most people can lead
to a tyranny of the majority, where minority groups suffer (Mill 1869). As a somewhat unrestrained
communications system, the internet reflects positive pro-social and feel-good viral videos but also
a range of ugly biases and anti-social behavior, clusters of personality traits associated with online
bullying or academic cheating which have earned names like “the dark tetrad” from psychologists
(Greitemeyer and Kastenmüller 2023). We cannot always rely on the inherent goodness of people, or
their willingness to exercise positive agency defined in this case as supporting pro-social behavior or
calling out the alternative. Mill refers to people’s unerring self-interest in relation to choices, which
is a popular conception of decision-making among economists. That being said, the purveyors of
ChatGPT have gone some way to challenge the bias of pretraining texts, helping the system to rec-
ognize immoral or illegal questions and respond with prosocial responses through “reinforcement
learning from human feedback (RLHF) techniques (designed) to enhance the model’s safety” (Wu
2023). Unfortunately, these training techniques have in the past employed human workers under
exploitive labor practices (Nyabola 2023; Marres 2024, 357), which challenges their ethical remediation. Many authors are calling for larger, multi-stakeholder panels to review all aspects of the
system (Stahl and Eke 2024), particularly as the implications for writing by means of generative AI
are so far-ranging. For example, and of significant interest to higher education students: it is claimed
that most job applications are now read first and only by AI systems; only a fraction are passed on to
be read by humans (Rhea et al. 2022), which likely has implications for writing them.
1962, Fleck 1979). Are claims that are untrue in contemporary professional literature as easy to
detect in ChatGPT, such as specious claims about vaccines? How does knowledge representation
hold up in AI systems when the sources for the knowledge lack transparency and present unsubstan-
tiated claims or questionable evidence, “unreliable results” (Derksen 2021, 66)? Just as scholarly
work challenges cultural hegemonies of various kinds in the real world, does generative AI mod-
eling have a role in researching and articulating shifting “ground truths”? Perhaps helping people
ask questions about their assumptions is a useful task for generative AI chatbots, just as we are starting to evaluate systematically and at scale with LLMs.
There is further the question of using authored works to produce training models with the goal of
generating similar, competing writing products, potentially threatening future livelihood and human
dignity. We need to examine closely the source data for AI systems for their role in authorship, as
LLMs are a kind of derivative work, albeit of a stochastically encoded and re-encoded form. That
AI models are able to represent private information in ways that lead to privacy leakage is a further, inappropriate risk (Zhang et al. 2023; Hua et al. 2023). While copyright is designed to safeguard
not only authorship but also items for sale in markets (preventing free distribution from replacing
a sale), LLMs in tools like ChatGPT have ethical implications particularly when used to generate
texts for competition with human authors. With respect to these generative AI-produced works being
themselves intellectual property, we might look to the digitization of the public works of museums
and the court’s conceptions of originality and creative effort required to produce the work (Petri
2014) as a means to consider their value as copyrightable works.
A significant risk, perhaps, is that new technologies tend to benefit the wealthy. They are owned
privately and can do the work of a salaried human at a fraction of the effort. On the management
end of pedagogy, for people with teaching jobs, it seems essential that LLM-based writing tools
like ChatGPT not replace the supervisory tasks of teaching assistants, instructors, or the work they
produce. A slower adoption of tools like ChatGPT, and more expert analysis to stabilize the social
network of relations around identity, value, cost (broadly articulated), social risk, and benefit, are necessary to mediate their use. This slower adoption would also benefit from student-learners
participating actively in the co-determination of tools, as we collectively reconfigure the division of
cognitive labor, as has occurred successfully for other labor systems in history (Greenbaum 1996).
Some of this effort could more thoughtfully occur beforehand: Bender et al. (2021), for example,
conduct pre-mortems for new systems, imagining the negative outcome to design how tools should
be structured: fair relations through intentional design rather than "continuous adjustment of the
ethical framework to technical and social changes” (Farina et al. 2024).
academic writing, but still present advanced thinking on a subject might be a useful alternative
performance of identity and knowledge. So, for example, a “creative assignment” might entail a
true fiction writing piece, a role-play session involving a discipline expert, an audio podcast, an
alternative museum guide, or a short video “rant” (after Canadian comedian Rick Mercer), all of
which support critical analysis.
While it is of course possible to learn complex things from systems of knowledge artifacts
(textbooks and libraries), or structured experience environments (labs and studios), systems that
higher education institutions set up for students with great care and deliberation, learning can also
be messy and embodied (Foglia and Wilson 2013), involving real experiences which are constructed
socially in dynamic and unscripted ways from multiple, situated sites and histories. This landscape is
constantly changing and as sociologist Noortje Marres observes, we need to seek articulation rather
than an explanation “enabling some entites (sic), some dimensions to stand out, to gain traction, in
our engagement with the world, while submerging others” (Marres 2024, 357). Authentic learning
experiences involving community engagement and experiential learning can engage students and
the higher education community in real-world problems with other communities of care, extrapo-
lating the abstract concept of “indivisible benefits” while occasioning financial opportunities of
the individual kind, like paid internships. In the realm of summative testing, supervised, proctored
written exams will likely need to continue, but some departments have started bringing back the
performance exam, where students respond orally or “viva voce” to a panel, present a research
poster to a group, or at an internally organized departmental conference. ChatGPT clearly has a lot
of potential for individualized instruction at a significant scale, just as learning management systems
provided for classroom communication and management at scale, but only if it is used positively
to develop and supplement, and not replace, academic work, helping students ask questions and
contemplate concepts, cultivating their critical processes while developing confidence and efficacy
toward self-improvement.
16.10 CONCLUSIONS
ChatGPT presents a challenge to higher education as a widely available digital tool that renders
well-written texts in response to short prompts at significant speed and scale. Occasional counterfactual claims or biases surface that inexperienced scholars (and maybe experienced ones) will have trouble recognizing; these may be addressed by more focused training texts and tool systems rather than a general, oracle-like system. We can also look to higher education to develop higher
standards around the evaluation of cited authority. There is a natural affinity between writing and the
rational imagination, as a non-physical, ephemeral idea-space where thought-performance makes
for intellectual constructs that lead to actions in the real world, the making of ground-truth know-
ledge artifacts. These in turn bring value and reputation to higher education, academia, and societies
at large. Academic integrity, critical thinking, and scholarly effort, however, are being challenged
by ChatGPT’s easy and expansive reach, remarkable ability, and speed at providing quick solutions
without significant critical effort. Expediency is not typically a sought-after goal of higher education: it
values research rigor, disciplinary process, and original thought within critical framing traditions.
In relation to ethical pedagogy and research authority, new academic systems that are more human-
centered, culturally aligned, sensitive to harm, and actively seeking to remediate their biases and gaps from the broadest reaches of communities of inquiry, are desirable to enact. Improvements
to the authority problem posed by ChatGPT could also occur by broad multi-stakeholder consult-
ation and research, transparency, legislation, and critical redesign around considerations of fairness
and harm, as has happened with personal camera devices. For now and in the future it is essential
that academic rewards go to legitimate effort, at least if academic institutions are to retain their
reputations as validators of authentic texts of discoveries and inventions, credentialing authorities of
graduates, and mediators of future generations of thought.
REFERENCES
Ali, Stephen R., Thomas D. Dobbs, Hayley A. Hutchings, and Iain S. Whitaker. 2023. “Using ChatGPT to
Write Patient Clinic Letters." The Lancet Digital Health 5 (4): e179–81. https://doi.org/10.1016/
S2589-7500(23)00048-1
Artino, Anthony R. Jr, Andrew W. Phillips, Amol Utrankar, Andrew Q. Ta, and Steven J. Durning. 2018.
“ ‘The Questions Shape the Answers’: Assessing the Quality of Published Survey Instruments in Health
Professions Education Research.” Academic Medicine 93 (3): 456. https://doi.org/10.1097/ACM.00000
00000002002
Bell, Genevieve, and Paul Dourish. 2007. “Yesterday’s Tomorrows: Notes on Ubiquitous Computing’s
Dominant Vision.” Personal and Ubiquitous Computing 11 (2): 133–43. https://doi.org/10.1007/s00
779-006-0071-x
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the
Dangers of Stochastic Parrots: Can Language Models Be Too Big?" In Proceedings of the 2021 ACM
Conference on Fairness, Accountability, and Transparency, 610–23. Virtual Event Canada: ACM.
https://doi.org/10.1145/3442188.3445922
Brown, Andrew, Ash Tanuj Kumar, Osnat Melamed, Imtihan Ahmed, Yu Hao Wang, Arnaud Deza, Marc
Morcos, et al. 2023. “A Motivational Interviewing Chatbot with Generative Reflections for Increasing
Readiness to Quit Smoking: Iterative Development Study.” JMIR Mental Health 10: e49132. https://doi.
org/10.2196/49132
Canonico, Domenic. 2010. “The History of ASME’s Boiler and Pressure Vessel Code.” American Society of
Mechanical Engineers. December 1, 2010. https://web.archive.org/web/20230929141614/www.asme.
org/topics-resources/content/The-History-of-ASMEs-Boiler-and-Pressure
Cohen, I. Glenn. 2023. “What Should ChatGPT Mean for Bioethics?” The American Journal of Bioethics 23
(10): 8–16. https://doi.org/10.1080/15265161.2023.2233357
Coopmans, Catelijne. 2021. “Learning from Fakes: A Relational Approach.” In The Imposter as Social Theory,
1st ed. Bristol University Press. https://doi.org/10.2307/j.ctv1p6hphs.9
Crawford, Kate. 2024. “Generative AI’s Environmental Costs Are Soaring—and Mostly Secret.” Nature 626
(8000): 693–93. https://doi.org/10.1038/d41586-024-00478-x
Derksen, Maarten. 2021. "A Menagerie of Imposters and Truth-Tellers: Diederik Stapel and the Crisis in Psychology." In The Imposter as Social Theory, 53–76. Bristol University Press. https://doi.org/10.56687/9781529213102-006
Doherty, Tiffany S, and Aaron E Carroll. 2020. “Believing in Overcoming Cognitive Biases.” AMA Journal of
Ethics 22 (9): 773–778.
Ekbia, Hamid R., and Bonnie A. Nardi. 2017. Heteromation and Other Stories of Computing and Capitalism.
MIT Press. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7932816
Elish, Madeleine Clare. 2019. "Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction."
Engaging Science, Technology, and Society 5: 40–60. https://doi.org/10.17351/ests2019.260
Farina, Mirko, Xiao Yu, and Andrea Lavazza. 2024. “Ethical Considerations and Policy Interventions
Concerning the Impact of Generative AI Tools in the Economy and in Society.” Ai and Ethics (Online).
https://doi.org/10.1007/s43681-023-00405-2
Fitz, Stephen. 2023. “Do Large GPT Models Discover Moral Dimensions in Language Representations?
A Topological Study of Sentence Embeddings.” arXiv. https://doi.org/10.48550/arXiv.2309.09397
FitzGerald, Chloë, and Samia Hurst. 2017. “Implicit Bias in Healthcare Professionals: A Systematic Review.”
BMC Medical Ethics 18 (1): 19. https://doi.org/10.1186/s12910-017-0179-8
Fleck, Ludwik. 1979. Genesis and Development of a Scientific Fact. Chicago: University of Chicago Press.
Foglia, Lucia, and Robert A. Wilson. 2013. “Embodied Cognition.” WIREs Cognitive Science 4 (3): 319–25.
https://doi.org/10.1002/wcs.1226
Franklin, Ursula. 1989. Public Infrastructures of Technology. Vol. 2. 5 vols. Massey Lectures 1989: The Real
World of Technology. CBC Ideas.
Friedman, Batya, and Peter H. Kahn. 1992. “Human Agency and Responsible Computing: Implications for
Computer System Design.” The Journal of Systems and Software 17 (1): 7–14. https://doi.org/10.1016/
0164-1212(92)90075-U
Greenbaum, Joan. 1996. “Labor Is More Than Work: Using Labor Analysis to Study Use Situations and Jobs.”
Scandinavian Journal of Information Systems 8 (2): 4.
Greitemeyer, Tobias, and Andreas Kastenmüller. 2023. “HEXACO, the Dark Triad, and Chat GPT: Who Is
Willing to Commit Academic Cheating?” Heliyon 9 (9): e19909. https://doi.org/10.1016/j.heliyon.2023.
e19909
Hua, Shangying, Shuangci Jin, and Shengyi Jiang. 2023. “The Limitations and Ethical Considerations of
ChatGPT.” Data Intelligence, December, 1–38. https://doi.org/10.1162/dint_a_00243
Huallpa, Jorge Jinchuña et al. 2023. “Exploring the Ethical Considerations of Using Chat GPT in University
Education.” Periodicals of Engineering and Natural Sciences 11 (4): 105–15. https://doi.org/10.21533/
pen.v11i4.3770
IEEE. 2022. "Standard for Transparency of Autonomous Systems." IEEE Std 7001-2021, March, 1–54. https://
doi.org/10.1109/IEEESTD.2022.9726144
Klenk, Michael. 2024. “Ethics of Generative AI and Manipulation: A Design-Oriented Research Agenda.”
Ethics and Information Technology 26 (1): 9. https://doi.org/10.1007/s10676-024-09745-x
Knott, Alistair, Dino Pedreschi, Raja Chatila, Tapabrata Chakraborti, Susan Leavy, Ricardo Baeza-Yates, David
Eyers, et al. 2023. “Generative AI Models Should Include Detection Mechanisms as a Condition for Public
Release.” Ethics and Information Technology 25 (4): 55. https://doi.org/10.1007/s10676-023-09728-4
Koi, Polaris. 2023. “Student Use of ChatGPT in Higher Education: Focus on Fairness.” Justice Everywhere
(blog). March 20, 2023. https://web.archive.org/web/20230901000000*/https://justice-everywhere.org/
general/student-use-of-chatgpt-in-higher-education-focus-on-fairness/
Kuhn, Thomas S. 1962. The Structure of Scientific Revolutions (International Encyclopedia of Unified Science;
vol. 2, no. 2). Chicago: University of Chicago Press.
Law, John. 2004. After Method: Mess in Social Science Research. London: Routledge.
Luo, Renqian, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, and Tie-Yan Liu. 2022.
“BioGPT: Generative Pre-Trained Transformer for Biomedical Text Generation and Mining.” Briefings
in Bioinformatics 23 (6): bbac409. https://doi.org/10.1093/bib/bbac409
Marres, Noortje. 2024. “Articulation, or the Persistent Problem with Explanation.” The British Journal of
Sociology 75 (3): 354–359. https://doi.org/10.1111/1468-4446.13084
Meyer, Jesse G., Ryan J. Urbanowicz, Patrick C. N. Martin, Karen O’Connor, Ruowang Li, Pei-Chen Peng,
Tiffani J. Bright, et al. 2023. “ChatGPT and Large Language Models in Academia: Opportunities and
Challenges.” BioData Mining 16 (1): 20. https://doi.org/10.1186/s13040-023-00339-9
Michel-Villarreal, Rosario, Eliseo Vilalta-Perdomo, David Ernesto Salinas-Navarro, Ricardo Thierry-Aguilera,
and Flor Silvestre Gerardou. 2023. “Challenges and Opportunities of Generative AI for Higher Education
as Explained by ChatGPT.” Education Sciences 13 (9): 856. https://doi.org/10.3390/educsci13090856
Mill, J.S. 1869. On Liberty. 4th ed. Longmans, Green, Reader and Dyer.
Nader, Ralph. 1965. Unsafe at Any Speed: The Designed-in Dangers of the American Automobile.
New York: Grossman.
Naftulin, Donald H., John E. Ware, Jr., and Frank A. Donnelly. 1973. “The Doctor Fox Lecture: A Paradigm of
Educational Seduction.” Journal of Medical Education 48 (7): 630–35.
Nissenbaum, Helen. 1996. “Accountability in a Computerized Society.” Science and Engineering Ethics
2: 25–42.
Nyabola, Nanjala. 2023. “ChatGPT and the Sweatshops Powering the Digital Age.” Al Jazeera (editorial).
January 23, 2023. https://web.archive.org/web/20240108133955/www.aljazeera.com/opinions/2023/1/
23/sweatshops-are-making-our-digital-age-work
Pasquale, Frank. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press. https://doi.org/10.4159/harvard.9780674736061
Patterson, David, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David
So, Maud Texier, and Jeff Dean. 2021. “Carbon Emissions and Large Neural Network Training.” arXiv.
https://doi.org/10.48550/arXiv.2104.10350
Perkins, Mike, Jasper Roe, Darius Postma, James McGaughran, and Don Hickerson. 2023. “Detection of
GPT-4 Generated Text in Higher Education: Combining Academic Judgement and Software to Identify
Generative AI Tool Misuse.” Journal of Academic Ethics. https://doi.org/10.1007/s10805-023-09492-6
Petri, Grischka. 2014. “The Public Domain vs. the Museum: The Limits of Copyright and Reproductions of
Two-Dimensional Works of Art.” Journal of Conservation and Museum Studies 12 (1): Art 8. https://doi.
org/10.5334/jcms.1021217
Ratto, Matt. 2024. "Humanness and AI." Fairness – ChatGPT Workshop. Data Sciences Institute, University
of Toronto (January 26).
Ray, Partha Pratim. 2023. “ChatGPT: A Comprehensive Review on Background, Applications, Key
Challenges, Bias, Ethics, Limitations and Future Scope.” Internet of Things and Cyber-Physical Systems
3 (January): 121–54. https://doi.org/10.1016/j.iotcps.2023.04.003
Rhea, Alene K., Kelsey Markey, Lauren D’Arinzo, Hilke Schellmann, Mona Sloane, Paul Squires, Falaah
Arif Khan, and Julia Stoyanovich. 2022. “An External Stability Audit Framework to Test the Validity of
Personality Prediction in AI Hiring.” Data Mining and Knowledge Discovery 36 (6): 2153–93. https://
doi.org/10.1007/s10618-022-00861-0
Rosenthal, Caroline. 2021. “The Desire to Believe and Belong: Wannabes and Their Audience in a North
American Cultural Context.” In The Desire to Believe and Belong: Wannabes and Their Audience in
a North American Cultural Context, 31–52. Bristol University Press. https://doi.org/10.56687/978152
9213102-005
Sabzalieva, Emma, and Arianna Valentini. 2023. “ChatGPT and Artificial Intelligence in Higher Education –
Quick Start Guide." Education 2030. United Nations Educational, Scientific and Cultural Organization.
https://web.archive.org/web/20240118052131/www.iesalc.unesco.org/wp-content/uploads/2023/04/
ChatGPT-and-Artificial-Intelligence-in-higher-education-Quick-Start-guide_EN_FINAL.pdf
Shaffer Shane, Tommy. 2023. “AI Incidents and ‘Networked Trouble’: The Case for a Research Agenda.” Big
Data & Society 10 (2). https://doi.org/10.1177/20539517231215360
Stahl, Bernd Carsten, and Damian Eke. 2024. "The Ethics of ChatGPT – Exploring the Ethical Issues of
an Emerging Technology.” International Journal of Information Management 74 (February): 102700.
https://doi.org/10.1016/j.ijinfomgt.2023.102700
Sutherland, Brian. 2022. “Strategies for Degrowth Computing.” In Eighth Workshop on Computing within
Limits 2022, 6. https://doi.org/10.21428/bf6fb269.04676652
Sweller, John, and Susan Sweller. 2006. “Natural Information Processing Systems.” Evolutionary Psychology
4 (1). https://doi.org/10.1177/147470490600400135
Upbin, Bruce. 2011. “IBM’s Watson Now a Second-Year Med Student.” Forbes. May 25, 2011. https://web.
archive.org/web/20221024110217/www.forbes.com/sites/bruceupbin/2011/05/25/ibms-watson-now-a-
second-year-med-student/
Vogel, David. 1995. Trading up: Consumer and Environmental Regulation in a Global Economy. Cambridge,
MA: Harvard University Press. http://archive.org/details/tradingupconsume0000voge
Wright II, James E., Dongfang Gaozhao, and Brittany Houston. 2023. "Body-Worn Cameras and Representation: What Matters When Evaluating Police Use of Force?" Public Administration Review. https://doi.org/10.1111/puar.13746
Wu, Xiaodong, Ran Duan, and Jianbing Ni. 2023. “Unveiling Security, Privacy, and Ethical Concerns of
ChatGPT.” arXiv. https://doi.org/10.48550/arXiv.2307.14192
Zhang, Zhiping, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, and Tianshi Li. 2023. "'It's a Fair Game', or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents." arXiv. https://doi.org/10.48550/arXiv.2309.11653
Zhavoronkov, Alex. 2022. “Rapamycin in the Context of Pascal’s Wager: Generative Pre-Trained Transformer
Perspective.” Oncoscience 9: 82–84. https://doi.org/10.18632/oncoscience.571
17.1 BACKGROUND
The potential application of the Generative Pre-trained Transformer (GPT) in the field of education is a subject of increasing interest and research. The GPT model, which was developed by OpenAI, is an advanced language model that employs deep learning techniques to produce text that closely resembles human-generated content (Muhie et al., 2023). The potential
of this technology in the field of education resides in its capacity to facilitate diverse facets of
the learning process, encompassing personalized tutoring, content generation, language translation,
and automated assessment. Nevertheless, the utilization of GPT in educational environments is not
without its constraints and difficulties (Farrokhnia et al., 2023). This discourse aims to examine the
potential, constraints, and suggestions pertaining to the implementation of adaptive learning through
the utilization of GPT.
There has been a notable surge in the education sector in the utilization of advanced technologies to augment the quality of learning experiences. GPT, a technology developed by OpenAI, has emerged as a leading contender among the various technologies under consideration (Alser & Waisberg, 2023). The GPT language model, which is considered to be at the forefront of
technology, employs deep learning techniques to produce text that closely emulates human language
(Liu et al., 2023). This discourse examines the considerable prospects of GPT in the field of education, investigating its utilization in personalized tutoring, content generation, language interpret-
ation, and automated assessment (Cardona et al., 2023).
One of the most intriguing facets of GPT in the realm of education is its potential to fundamentally transform the landscape of individualized tutoring. Due to its contextual comprehension, capacity to produce meaningful replies, and adaptability to diverse learning preferences, GPT possesses the potential to serve as a virtual tutor, offering personalized support to students (Currie, 2023). This adaptive tutoring strategy facilitates a customized learning encounter, tailored to the distinct requirements and tempo of every individual learner (Kasneci et al., 2023).
GPT possesses the capacity to function as a beneficial educational partner, as it can effectively elu-
cidate intricate topics, respond to inquiries, and provide supplementary exercises.
Within the domain of content development, the utilization of GPT demonstrates its efficacy
as a formidable instrument for instructors. The system has the capability to produce instructional
resources of superior quality, such as lesson plans, quizzes, and interactive exercises. Not only does this lighten the burden on educators, but it also helps ensure that the curriculum remains consistently interesting and informative (Cardona et al., 2023). The adaptability of GPT allows for
the development of a wide range of educational materials spanning different subjects and academic
levels. Consequently, educators are able to allocate a greater amount of time toward the facilitation
of discussions, attending to the specific requirements of individual students, and cultivating a vibrant
and participatory atmosphere conducive to learning.
The field of education can greatly benefit from the substantial contributions that GPT can offer
in the domain of language translation. By overcoming linguistic barriers, GPT has the potential to
enhance communication and foster collaboration among students and educators with varying lin-
guistic backgrounds (Haleem et al., 2022a). This phenomenon creates possibilities for international
cooperation in educational endeavors, promoting a more comprehensive and integrated educational
community. The provision of real-time translation services by GPT contributes to the increased
accessibility of educational content, hence expanding its reach to a global audience.
Automated grading, as a utilization of GPT, exhibits potential in mitigating the workload of
educators and delivering prompt feedback to pupils. GPT possesses the capability to examine written
responses, essays, and various types of assessments, thereby providing immediate feedback regarding
accuracy, coherence, and the extent of comprehension. This practice not only expedites the evalu-
ation procedure but also facilitates timely feedback provision to students, hence enhancing their
ability to recognize and rectify their shortcomings in a more effective manner. Automated grading,
utilizing the capabilities of GPT, possesses the capacity to revolutionize the realm of assessment,
enhancing efficiency and fostering an environment favorable to the process of learning.
Nevertheless, within the expansive realm of education, it is imperative to acknowledge the
significant limitations and obstacles associated with the utilization of GPT. A noteworthy issue
revolves around the potential bias inherent in the model and its tendency to reinforce pre-existing inequities. The GPT model undergoes training using extensive datasets, and in the event
that these datasets possess biases, there is a possibility that the model may unintentionally repro-
duce and sustain these biases. Educators must exercise prudence in regard to the content produced
by GPT in order to ascertain its adherence to inclusive and impartial educational principles.
Moreover, the absence of a thorough comprehension of the process by which GPT generates particular results gives rise to concerns regarding transparency and accountability within educational environments.
Moreover, there exist problems pertaining to the ethical utilization of GPT in the realm of educa-
tion. It is imperative to address concerns pertaining to data privacy, security, and the proper imple-
mentation of artificial intelligence (AI) technologies. It is imperative for educators and legislators
to create explicit protocols for the ethical utilization of GPT in order to safeguard students’ privacy
and guarantee that these technologies are implemented in a manner that aligns with educational
principles and benchmarks.
In order to optimize the utilization of GPT in the field of education while addressing its inherent
limitations, it is advisable to contemplate some recommendations. To begin with, it is imperative to conduct continuous research and development in order to improve the transparency and interpretability of
GPT (Chavez, 2023). Gaining a comprehensive understanding of the underlying mechanisms of
the model will empower educators to develop a heightened level of assurance in the outcomes it
produces, thereby cultivating a sense of trust in its various applications. Furthermore, it is impera-
tive to engage in ongoing monitoring and auditing of content generated by GPT systems in order
to identify and address any biases. This is crucial in order to guarantee that instructional materials
actively promote inclusion and diversity.
It is imperative that educators undergo comprehensive training on the ethical utilization of GPT
and other AI technologies. This encompasses the comprehension of potential biases inherent in AI
models, the safeguarding of data privacy, and the exercise of informed judgment regarding the timing
and manner of integrating these technologies into educational settings. In addition, it is imperative to
emphasize the importance of collaboration among educators, technologists, and legislators in order
to develop comprehensive standards and regulations that effectively manage the responsible utiliza-
tion of GPT inside educational environments.
The utilization of GPT in the field of education holds significant promise, as it presents novel
approaches to augment individualized learning, facilitate content generation, enable language trans-
lation, and streamline assessment procedures. Nevertheless, like any disruptive technology, it is
imperative to thoroughly evaluate its constraints and ethical ramifications. Through the acknow-
ledgment and resolution of these obstacles, educators have the potential to harness the capabilities
of GPT in order to establish a learning environment that is characterized by enhanced dynamism,
inclusivity, and efficacy for students on a global scale.
17.2 METHODOLOGY
This review focuses on the latest options for adaptive learning using GPT. The study thoroughly evaluates the characteristics, limitations, and challenges of the chosen works. After this assessment, the chapter conducts a comparative analysis of current GPT-based tools used for educational purposes. The analysis provides a thorough understanding of the current status of GPT in adaptive learning. The chapter then examines unresolved difficulties and potential research areas in education utilizing GPT, based on current understanding.
17.3 CHAPTER ORGANIZATION
The rest of the chapter is organized as follows: Section 17.4 covers GPT Opportunities for Adaptive Learning: Personal Tutoring by Using GPT, Content Creation by Using GPT, Language Translation by Using GPT, and Automated Grading. Section 17.5 discusses GPT Limitations for Adaptive Learning: Bias and Inaccuracy, Lack of Contextual Understanding, Ethical Considerations, and Lack of Personalization. Section 17.6 explains the Recommendations for Adaptive Learning: Training Data Quality, Contextual Understanding, and Ethical Guidelines. Section 17.7 presents the Conclusion.
educators in the creation of interactive learning materials, the generation of practice questions, and
the development of virtual simulations that serve to enhance comprehension of abstract concepts.
GPT has the potential to enhance language acquisition through its ability to offer instantaneous
translation services and interactive language practice activities. The capacity to comprehend and
produce language like that of humans renders it a wonderful instrument for individuals studying
languages who need immersive and engaging encounters. GPT has the capability to examine huge
quantities of educational data, encompassing measures related to student performance as well as
scholarly research publications. Through the analysis of this data, GPT has the capability to discern
trends and glean insights that may be utilized to inform the decision-making processes within
educational institutions. GPT offers a multitude of possibilities for revolutionizing many facets of
education, encompassing targeted tutoring, enhanced administrative efficacy, content generation,
language acquisition assistance, and data analytics.
This section describes the GPT opportunities for adaptive learning, as illustrated in Figure 17.1.
17.4.1 Personal Tutoring
GPT can be utilized to provide personalized tutoring by generating explanations, answering
questions, and offering tailored feedback to students based on their individual learning needs.
Personal tutoring is a type of educational assistance that is offered on an individual basis, usu-
ally outside of the conventional classroom environment. The pedagogical methodology entails a
tutor engaging in close collaboration with a student to deliver tailored instruction, guidance, and
assistance in targeted academic domains or proficiencies. Personalized tutoring can be conducted in
diverse environments, including the student’s residence, a dedicated tutoring facility, or via internet
platforms. The principal objective of individualized tutoring is to cater to the distinct educational
requirements of every student and facilitate their attainment of scholastic accomplishments.
Personal tutoring provides numerous advantages for pupils. First and foremost, this approach
offers personalized attention, enabling tutors to adapt their instructional techniques to align with the
unique learning style and speed of every student. The implementation of a tailored approach can yield
notable advantages for students who encounter difficulties with specific subjects or concepts within
a conventional classroom setting. Furthermore, the provision of personal tutoring can contribute to
the development of students’ self-assurance and drive through the provision of tailored assistance
and positive reinforcement. Tutors are capable of aiding pupils in the cultivation of efficient study
habits, organizational proficiencies, and critical thinking capabilities. In addition, personal tutoring
has the potential to enhance academic achievement by providing students with targeted support in
areas where they may be encountering difficulties.
The role of a personal tutor encompasses a range of responsibilities and is characterized by its
diverse nature. Tutors generally exhibit expertise in particular academic disciplines and demon-
strate proficient communication and interpersonal abilities. The individuals in question potentially
bear the responsibility of evaluating the student’s aptitudes and deficiencies, discerning areas that
require enhancement, and formulating tailored instructional strategies to cater to the student’s edu-
cational requirements. Furthermore, tutors have the capacity to offer aid with homework, assistance
in preparing for tests, and advice pertaining to academic assignments. In addition, personal tutors
frequently fulfill the role of mentors by providing information pertaining to professional aspirations,
study methodologies, and educational objectives.
The need for personal tutoring has experienced a substantial growth in recent years, primarily
driven by several causes including heightened academic competitiveness, the necessity of meeting
standardized testing criteria, and the desire for further academic assistance beyond the confines
of the classroom. Consequently, the personal tutoring sector has experienced significant growth,
accommodating a diverse array of disciplines and educational levels, spanning from elementary
school to professional certification programs. Personal tutoring serves as a significant contributor
to students’ academic progress through the provision of tailored instruction, customized assistance,
and focused direction, all of which contribute to the improvement of educational achievements.
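To make the tutoring scenario concrete, the following minimal sketch shows how a GPT model might be prompted to behave as a personal tutor through a chat-completion interface. The use of the openai Python client, the placeholder model name, and the prompt wording are illustrative assumptions rather than a prescribed implementation; any comparable interface to a GPT-style model could be substituted.

# A minimal sketch of GPT-based personal tutoring (illustrative assumptions: the
# openai client library, the placeholder model name, and the prompt wording).
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

def tutor_reply(topic: str, student_question: str, level: str = "first-year undergraduate") -> str:
    """Request a tailored explanation, a comprehension check, and one practice exercise."""
    system_prompt = (
        f"You are a patient tutor for a {level} student. "
        "Explain concepts step by step, ask one question to check understanding, "
        "and suggest a single short practice exercise. Do not complete the student's assignment."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"Topic: {topic}\nQuestion: {student_question}"},
        ],
    )
    return response.choices[0].message.content

# Example usage:
# print(tutor_reply("photosynthesis", "Why do plants need light at all?"))

The system prompt deliberately constrains the model toward guidance rather than completed work, in line with the pedagogical concerns about process and effort raised elsewhere in this book.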
17.4.2 Content Creation
GPT can aid educators in creating educational materials such as quizzes, worksheets, and lesson
plans by generating relevant content based on specific topics or learning objectives. The practice of
generating text that resembles human language using a deep learning model is commonly known as
content generation with GPT (Liu et al., 2023). GPT is a language model characterized by its utilization of a transformer architecture, which enables it to make predictions on the subsequent word
in a given sequence of textual data. The language model has undergone training using a wide array
of internet text sources, enabling it to produce coherent and contextually appropriate content on a
multitude of subjects. The utilization of GPT for content creation has been increasingly prevalent
in various domains, including natural language processing, digital marketing, and creative writing.
This surge in popularity can be attributed to GPT’s remarkable capacity to generate material of
exceptional quality, closely resembling human-authored work.
The content production process with GPT entails the provision of a prompt or input text to the
model, whereupon the model subsequently generates a continuation or completion depending on
the given input. The provided input can vary in length, spanning from a solitary sentence to a com-
prehensive article. The model leverages its understanding of linguistic patterns and semantics to
generate text that adheres to the specified prompt. The information generated can undergo additional
refinement and editing by human writers to guarantee precision, coherence, and compliance with
particular norms or needs.
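As a hedged illustration of this prompt-and-completion workflow, the sketch below asks a GPT model to draft a short multiple-choice quiz from a topic and a learning objective. The client library, the placeholder model name, and the requested JSON layout are assumptions made for the example; as noted above, any generated material should be reviewed and edited by a human before use.

# A minimal sketch of prompt-based content creation (illustrative assumptions:
# the openai client, the placeholder model name, and the JSON output format).
import json
from openai import OpenAI

client = OpenAI()

def draft_quiz(topic: str, objective: str, n_questions: int = 5) -> list:
    """Ask the model for draft quiz items; the result is a starting point, not a finished resource."""
    prompt = (
        f"Write {n_questions} multiple-choice questions on '{topic}' that assess the "
        f"learning objective: '{objective}'. Return only a JSON array in which each item "
        "has the keys 'question', 'options' (a list of four strings), and 'answer'."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    # Parsing may fail if the model does not return valid JSON, one more reason
    # human review and editing of generated materials remains essential.
    return json.loads(response.choices[0].message.content)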
One of the primary benefits associated with content production utilizing GPT lies in its capacity
to rapidly and effectively generate substantial quantities of text. This can prove to be particularly
advantageous in situations where there is a requirement to produce a wide range of content for
websites, promotional materials, or educational materials. Moreover, GPT has the potential to aid
writers in surmounting creative obstacles by offering alternate viewpoints or concepts derived from
the given input.
Nevertheless, it is crucial to acknowledge that while GPT has the capability to generate content
that is very persuasive, there exist ethical concerns pertaining to its utilization. The proper imple-
mentation of this technology is a matter of concern due to its potential for misuse, including the
generation of fake news or deceptive information. Therefore, it is imperative for individuals utilizing
GPT-based content generation to use prudence and guarantee that the produced text adheres to eth-
ical norms and legal requirements.
The utilization of GPT for content generation presents a robust mechanism for producing text
of superior quality across many areas. When employed in a responsible manner and in tandem
with human supervision, it has the potential to optimize the writing procedure and offer significant
assistance to individuals involved in generating material.
17.4.3 Language Translation
GPT’s natural language processing capabilities can facilitate language translation, enabling students
to access educational resources in their native language. Natural language processing (NLP) is a
subfield within the realm of AI that centers its attention on the intricate interplay between computers
and human language. NLP facilitates the capacity of machines to comprehend, interpret, and generate
meaningful responses to human language. Language translation is a prominent application within the
field of NLP, carrying substantial consequences for the domain of education. The progress of NLP
technology has enabled students to conveniently access educational resources in their mother tongue,
thereby overcoming linguistic obstacles and enhancing the process of learning. The utilization of
this approach has significant promise in providing substantial advantages to students who have diffi-
culties in accessing educational resources in a language other than their primary language (Kasneci
et al., 2023).
NLP-driven language translation systems have the potential to provide students with the capacity
to absorb instructional material in their mother tongue, hence augmenting their comprehension and
retention of information. The availability of educational resources in an individual’s native tongue
has the potential to enhance inclusivity and equity within learning environments. Moreover, it has
the potential to facilitate students in preserving their cultural identity while actively interacting with
instructional resources.
In addition, the language translation capabilities of NLP can facilitate collaboration and the
exchange of knowledge among students with varying linguistic backgrounds. NLP plays a cru-
cial role in facilitating effective communication across different languages, hence enhancing the
exchange of ideas and viewpoints. This, in turn, contributes to the enrichment of the overall learning
experience for students.
In essence, the potential of NLP in education lies in its ability to dismantle linguistic obstacles
and provide students the opportunity to access educational materials in their mother tongue. This
technological progress has the potential to facilitate the creation of learning environments that are
more inclusive, foster better understanding of instructional content, and promote increased collab-
oration among students with varied linguistic origins.
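A minimal sketch of this translation use case is given below, keeping the source text and target language explicit so that teachers can spot-check the output. The openai client, the placeholder model name, and the prompt are illustrative assumptions; for high-stakes or specialized material, human translation and review still apply.

# A minimal sketch of GPT-assisted translation of learning materials
# (illustrative assumptions: the openai client and the placeholder model name).
from openai import OpenAI

client = OpenAI()

def translate_material(text: str, target_language: str) -> str:
    """Translate an educational passage while trying to preserve terminology and reading level."""
    prompt = (
        f"Translate the following educational text into {target_language}. "
        "Preserve technical terms and keep the reading level unchanged.\n\n" + text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example usage:
# print(translate_material("Photosynthesis converts light energy into chemical energy.", "Vietnamese"))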
17.4.4 Automated Grading
GPT has the potential to automate the grading process by evaluating student responses and providing
instant feedback, thereby saving educators time and effort. The evaluation of student responses
poses a significant time burden on instructors; nevertheless, the emergence of sophisticated AI tech-
nologies, such as GPT, holds promise for automating this procedure. GPT is a language model that
employs deep learning techniques to produce text that closely resembles human-generated content,
using the input it is provided with. Through the assessment of student replies and the provision
of immediate feedback, GPT has the potential to streamline the grading process for instructors,
resulting in saved time and reduced effort.
The GPT system operates by analyzing the substance and organization of students' written responses, discerning significant elements, and delivering feedback in accordance with predetermined criteria (Haleem et al., 2022b). The technology possesses the capacity to evaluate not only the precision of the material, but also the coherence, organization, and general caliber of the written work (Rahimi & Talebi Bezmin Abadi, 2023). Moreover, GPT has the
capability to be programmed in a manner that enables it to identify prevalent errors and offer specific
feedback, thereby assisting students in enhancing their writing proficiency.
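The sketch below illustrates, under stated assumptions, how such rubric-driven feedback might be requested from a GPT model: the rubric stands in for the "predetermined criteria" described above, and the output is framed as a draft for instructor review rather than a grade of record. The rubric wording, the placeholder model name, and the openai client library are assumptions made for the example.

# A minimal sketch of rubric-based draft feedback (illustrative assumptions:
# the openai client, the placeholder model name, and the example rubric).
from openai import OpenAI

client = OpenAI()

RUBRIC = """Criteria (score each 0-5 and give one sentence of feedback):
1. Accuracy of content
2. Coherence and organization
3. Quality of written expression"""

def draft_feedback(assignment_prompt: str, student_response: str) -> str:
    """Produce draft, rubric-aligned feedback for an instructor to review and revise."""
    messages = [
        {"role": "system", "content": "You are a teaching assistant producing draft feedback for an instructor to review. Do not assign a final grade."},
        {"role": "user", "content": f"Assignment: {assignment_prompt}\n\nRubric:\n{RUBRIC}\n\nStudent response:\n{student_response}"},
    ]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    return response.choices[0].message.content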
The implementation of GPT in the grading process holds promise for enhancing outcomes for
both instructors and students. Educational professionals have the potential to optimize their time
allocation by minimizing the time spent on grading, thereby enabling them to dedicate their efforts toward
other essential facets of pedagogy, such as lesson preparation and student assistance. Students have
the opportunity to promptly obtain feedback on their work, facilitating the identification of areas that
require development and allowing for timely adjustments to be made.
Nevertheless, it is crucial to acknowledge that although GPT can offer significant aid in evalu-
ating student answers, its role should be that of a supplementary tool to augment educators' expertise rather than a substitute for it. The presence of human oversight is crucial in order to maintain
the fairness, accuracy, and alignment of the grading process with educational objectives.
In general, the incorporation of GPT into the evaluation process has promise for optimizing
assessment protocols, augmenting feedback mechanisms, and eventually fostering enhanced educa-
tional achievements among students.
17.5.3 Ethical Considerations
The use of GPT raises ethical concerns related to data privacy, algorithmic transparency, and the responsible deployment of AI technologies in educational settings. Because GPT models employ machine learning techniques to produce text that closely resembles human-generated content from the input they receive, their use in educational environments gives rise to numerous ethical concerns about safeguarding data privacy, ensuring algorithmic openness, and implementing the technology appropriately.
Data privacy is a significant ethical consideration that arises in the context of employing GPT for
educational purposes. Educational institutions amass and retain substantial quantities of sensitive
student data, encompassing academic records, personal particulars, and communication logs. The
implementation of GPT in educational environments carries the potential for data compromise or
misuse. For instance, when GPT is used to produce tailored educational resources for pupils based on their personal data, there is a risk of unauthorized access to that information, resulting in breaches of privacy and confidentiality.
Algorithmic transparency is another ethical consideration of considerable significance. GPT relies on intricate algorithms that may lack transparency and be difficult to comprehend for those without expertise in the field (Farrokhnia et al., 2023). This absence of transparency raises concerns about accountability and equity within educational settings. If GPT-generated content influences decisions about student learning and evaluation, educators and administrators must understand how those decisions are reached and be able to critically evaluate and question the underlying algorithms (Kasneci et al., 2023).
In addition, the conscientious implementation of AI technology in educational environments
necessitates thoughtful deliberation regarding the potential effects on students’ educational
experiences and achievements. The utilization of GPT holds promise in augmenting educational
methodologies through the provision of tailored learning materials and constructive evaluations.
However, it is crucial to acknowledge the possible drawback of excessive dependence on AI-
generated information, which may inadvertently undermine the development of critical thinking abil-
ities and creative aptitude. Moreover, it is crucial to consider the potential unintended repercussions for students with different learning requirements or diverse backgrounds if the materials generated by GPT fail to sufficiently accommodate their particular situations.
In summary, the utilization of GPT within educational environments gives rise to ethical dilemmas
pertaining to the safeguarding of data privacy, the transparency of algorithms, and the appropriate
implementation of the technology. In order to effectively address these issues, it is imperative to
meticulously consider the ethical ramifications associated with the incorporation of AI technology
in educational settings. This must be done while placing utmost importance on safeguarding student
privacy and fostering equal learning opportunities.
17.5.4 Lack of Personalization
One key difficulty that educators and developers must tackle is the absence of personalization in
adaptive learning with GPT models. GPT models have the ability to produce responses that are
contextually appropriate, but their intrinsic constraints prevent achieving genuine personalization
in educational settings.
Personalization in adaptive learning requires a deep understanding of individual student needs,
going beyond a superficial grasp of context. Although GPT models are strong, they may have dif-
ficulty offering personalized information and recommendations that cater to individual students’
distinct learning styles and interests.
This limitation is primarily due to the nature of GPT's pre-training procedure. GPT models are trained on extensive and varied datasets to acquire a comprehensive grasp of language patterns. Although this enables the models to produce coherent and contextually appropriate responses, it does not always result in a profound comprehension of the complexities of individual student profiles. Insufficient personalized data during pre-training can lead to models that are not properly adjusted to the individual learning requirements of each learner (Cardona et al., 2023).
Adaptive learning excels at tailoring educational experiences to accommodate individual
variations, including learning speed, preferred methods of content presentation, and specific
learning obstacles. GPT models may have difficulty distinguishing between students with various
learning preferences (Alser & Waisberg, 2023). The models may generate content that is broad and
contextually suitable but may not connect with the various ways students process and interact with
knowledge.
Moreover, GPT models may not accurately represent the progression of a student’s learning pro-
cess as it develops over time. Successful personalization necessitates the ability to adjust to shifts
in a student’s comprehension, inclinations, and achievements (Mhlanga, 2023). GPT models, pre-
trained on fixed datasets, may not adapt their answers in real-time to changes in a student’s progress
or needs. The models’ lack of adaptability restricts their capacity to offer continuous, personalized
help during a student’s educational progression (Haleem et al., 2022a).
Effective learning outcomes in educational environments depend on striking a balance between
generalization and personalization to cater to specific student needs. GPT models are proficient at
producing logical and contextually appropriate material but struggle when refining it to meet the
distinct needs of various learners.
Future research and development should prioritize improving the personalization features of
GPT models for adaptive learning situations to overcome this constraint. This might include using
personalized data in the pre-training phase, researching sophisticated fine-tuning methods, and
implementing adaptive learning algorithms that respond to individual student success. Collaboration
among educators, data scientists, and psychologists can enhance the knowledge of personalized
learning demands, leading to the creation of GPT models that effectively address the varied and
changing needs of students in educational environments. Advancing in the field requires addressing
the issue of limited personalization to fully utilize GPT in adaptive learning and provide customized,
efficient, and captivating educational experiences for every student.
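As a concrete illustration of one lightweight route toward such personalization, the sketch below conditions each prompt on a stored learner profile. The StudentProfile fields and the call_llm helper are hypothetical assumptions; production systems might instead rely on fine-tuning or on retrieval over richer learner records, as discussed above.

```python
# Illustrative sketch of profile-conditioned prompting as one lightweight
# route toward personalization. The StudentProfile fields and `call_llm`
# are hypothetical; real systems might instead use fine-tuning or
# retrieval over learner records.

from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Connect to your LLM provider here.")

@dataclass
class StudentProfile:
    name: str
    level: str             # e.g. "beginner", "intermediate"
    preferred_format: str  # e.g. "worked examples", "short quizzes"
    recent_struggles: list[str]

def personalized_explanation(profile: StudentProfile, topic: str) -> str:
    """Generate an explanation adapted to the stored profile."""
    prompt = (
        f"Explain '{topic}' to a {profile.level} student who learns best "
        f"through {profile.preferred_format}. Address their recent "
        f"difficulties with: {', '.join(profile.recent_struggles)}. "
        "Keep the explanation under 200 words."
    )
    return call_llm(prompt)

# Hypothetical usage:
# profile = StudentProfile("Amina", "beginner", "worked examples",
#                          ["fractions", "unit conversion"])
# print(personalized_explanation(profile, "ratios"))
```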
17.6.1 Diverse Training Data
Carefully curating training data that reflects a wide range of perspectives, cultures, and experiences helps mitigate the potential harm of reinforcing biases or mistakes within educational contexts.
Moreover, implementing stringent data validation procedures is crucial for detecting and rectifying any biases that may exist within the training data. This may involve methodologies such as bias detection algorithms, human review procedures, and ethical guidelines for assessing the quality and impartiality of the training data. By systematically validating the training data, developers can discover and address potential biases before GPT is deployed in educational applications.
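The following simplified sketch illustrates one such validation step: flagging training documents whose metadata is heavily skewed toward a single region or language, so that human reviewers can decide whether to rebalance the corpus. The metadata fields are hypothetical, and genuine bias audits combine many additional signals beyond this kind of count-based check.

```python
# Simplified sketch of one screening step in training-data validation:
# flagging documents that over-represent a single region or language for
# human review. The metadata fields are hypothetical, and real bias audits
# combine many signals (statistical parity checks, annotator review, etc.).

from collections import Counter

def flag_skewed_sources(documents: list[dict], max_share: float = 0.5) -> list[str]:
    """Return metadata values (e.g. region or language codes) whose share of
    the corpus exceeds `max_share`, signalling a potential sampling bias."""
    flagged = []
    for field in ("region", "language"):
        counts = Counter(doc.get(field, "unknown") for doc in documents)
        total = sum(counts.values())
        for value, count in counts.items():
            if total and count / total > max_share:
                flagged.append(f"{field}={value} covers {count / total:.0%} of corpus")
    return flagged

# Example: a corpus where 80% of documents come from one region is flagged
# so reviewers can decide whether to rebalance the data.
sample = [{"region": "EU", "language": "en"}] * 8 + [{"region": "MA", "language": "ar"}] * 2
print(flag_skewed_sources(sample))
```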
In conjunction with the tasks of data curation and validation, it is imperative to engage in continuous
monitoring and evaluation of GPT’s efficacy within educational environments. This encompasses
the examination of the model’s outputs for potential biases or inaccuracies, the collection of user
input, and the iterative improvement of the training data through adjustments informed by real-
world application. Through ongoing evaluation and refinement of GPT’s performance, developers
have the opportunity to enhance its appropriateness for instructional purposes while mitigating the
potential for repeating biases or inaccuracies.
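A minimal sketch of such monitoring is shown below: each generated response is logged together with user feedback so that low-rated or flagged outputs can be pulled into periodic human review. The field names and the log format are assumptions made for illustration, not a required schema.

```python
# Minimal sketch of output monitoring: each generated response is logged
# with user feedback so low-rated or flagged outputs can be reviewed later.
# Field names and the JSONL format are illustrative assumptions.

import json
import time

LOG_PATH = "gpt_education_log.jsonl"

def log_interaction(prompt: str, response: str, rating: int, flagged: bool) -> None:
    """Append one interaction record for later auditing."""
    record = {
        "timestamp": time.time(),
        "prompt": prompt,
        "response": response,
        "rating": rating,    # e.g. 1 to 5 user rating
        "flagged": flagged,  # user marked output as biased or inaccurate
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def needs_review(min_rating: int = 3) -> list[dict]:
    """Collect interactions that were flagged or rated below `min_rating`."""
    with open(LOG_PATH, encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    return [r for r in records if r["flagged"] or r["rating"] < min_rating]
```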
In the context of educational applications, it is imperative to prioritize the acquisition of training
data that is both diverse and of high quality for GPT in order to effectively address biases and inac-
curacies. Developers have the opportunity to improve the model’s appropriateness for educational
settings and uphold fairness and accuracy by carefully selecting diverse training data, performing
rigorous validation procedures, and consistently monitoring its performance.
17.6.2 Contextual Understanding
Integrating GPT with contextual understanding models or domain-specific knowledge bases can enhance its ability to support adaptive learning tasks. GPT is a cutting-edge language model that has exhibited remarkable proficiency in several natural language processing tasks, including text generation, translation, and summarization. Nevertheless, its performance can be improved by incorporating contextual understanding models and domain-specific knowledge sources. By drawing on these supplementary resources, GPT can deepen its comprehension of particular domains or subjects, resulting in more precise and contextually appropriate replies.
Contextual comprehension models, such as BERT (Bidirectional Encoder Representations from Transformers), capture the contextual nuances in which words and phrases are situated within a specific text (Qadir, 2023). These models are exceptionally proficient at capturing the intricacies of language and can enhance GPT's ability to provide responses that are more contextually suitable. By combining GPT with BERT or analogous models, the resulting system can better perceive intricate nuances in language usage and generate outputs that are more coherent and contextually appropriate (Zhai, 2022).
In addition, integrating domain-specific knowledge bases into the design of GPT can significantly improve its performance in adaptive learning tasks. Domain-specific knowledge bases consist of organized, structured data about particular domains or subjects, such as scientific principles, medical terminology, legal precedents, or historical events. By giving GPT access to these knowledge sources, the model can draw on specialized information to produce replies that are more precise and tailored to specific domains.
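The sketch below illustrates this idea with a simple retrieve-then-prompt pattern, in which matching knowledge-base entries are prepended to the prompt before the model is queried. The keyword lookup is a deliberately naive stand-in for embedding-based retrieval (for example, with a BERT-style encoder), and call_llm is again a hypothetical placeholder for the model API.

```python
# Sketch of grounding answers in a domain-specific knowledge base via a
# simple retrieve-then-prompt pattern. The keyword lookup is a naive
# stand-in for embedding similarity; `call_llm` is hypothetical.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Connect to your LLM provider here.")

KNOWLEDGE_BASE = {
    "photosynthesis": "Process by which plants convert light energy into chemical energy.",
    "osmosis": "Movement of water across a semi-permeable membrane toward higher solute concentration.",
}

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return knowledge-base entries whose key terms appear in the query."""
    hits = [f"{term}: {text}" for term, text in KNOWLEDGE_BASE.items()
            if term in query.lower()]
    return hits[:top_k]

def answer_with_context(question: str) -> str:
    """Prepend retrieved notes to the prompt so the model answers from them."""
    context = "\n".join(retrieve(question)) or "No matching entries."
    prompt = (
        "Answer the student's question using the reference notes below, and "
        "say so if the notes do not cover it.\n"
        f"Reference notes:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```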
Incorporating contextual understanding models and domain-specific knowledge bases therefore holds significant potential for enhancing the model's capacity to facilitate adaptive learning tasks across diverse domains and applications, with benefits spanning education, healthcare, legal research, customer service, and related areas.
In general, combining GPT with contextual understanding models and domain-specific knowledge bases shows potential for enhancing the functionality of natural language processing systems and enabling them to assist effectively in adaptive learning tasks across various domains.
17.6.3 Ethical Guidelines
Establishing clear ethical guidelines and regulations for the responsible use of AI technologies like GPT in education is essential to safeguard student privacy and promote transparency. The incorporation of AI technologies, specifically GPT, into the educational domain has prompted concern about the safeguarding of student privacy, and there is consequently a pressing need for ethical principles and regulatory frameworks that ensure the appropriate use of these technologies. Protecting student privacy and fostering openness are imperative considerations in the development and use of AI within educational environments. Ethical norms and rules are of utmost importance in addressing these problems, since they establish a structured framework for the conscientious use of AI technology in education.
First and foremost, it is imperative to develop unambiguous ethical criteria in order to guar-
antee that the utilization of AI technologies in the field of education adheres to ethical principles
and upholds the privacy rights of students. The recommendations ought to encompass concerns
pertaining to data protection, consent, and the conscientious management of student information.
For example, it is imperative that educational institutions and AI developers emphasize the acquisi-
tion of informed consent from students or their legal guardians with regard to the gathering and util-
ization of their data. In addition, it is imperative for guidelines to establish processes that ensure the
secure storage and management of student data, with the aim of preventing illegal access or misuse.
Furthermore, the implementation of rules is vital in order to uphold adherence to ethical principles
and establish a system of responsibility in the utilization of AI technology within the realm of educa-
tion. Regulatory measures encompass a range of legal and policy mechanisms that require openness
in both the development and implementation of AI systems within educational contexts. For example, legislation may require educational institutions to provide information about their use of AI technologies to students, parents, and other relevant stakeholders. In add-
ition, regulatory entities possess the capacity to supervise the execution of ethical principles and
examine instances of noncompliance or unethical behavior associated with the utilization of AI in
the field of education.
Transparency constitutes an additional fundamental element that warrants heightened emphasis
within the ethical rules and regulatory frameworks that regulate the utilization of AI technology
in the realm of education. It is imperative that educational stakeholders, encompassing students,
educators, and parents, are provided with transparent and comprehensive information regarding the
implementation and utilization of AI technologies within educational environments. Such openness in the integration of AI in education can help establish trust and understanding of its objectives and implications (Grassini, 2023).
In summary, it is crucial to establish unambiguous ethical rules and laws to ensure the responsible
utilization of AI technologies such as GPT within the educational domain. These steps are necessary
in order to protect student privacy, foster transparency, and guarantee the ethical deployment of AI
technologies in educational settings.
GPT is a machine learning model that can generate responses closely resembling human language when given appropriate prompts. It offers a wide range of opportunities to enhance learning experiences, improve educational outcomes, and streamline administrative procedures in education. A primary application lies in the development and deployment of intelligent tutoring systems that leverage GPT's natural language processing capabilities to deliver personalized and adaptive learning experiences for students (Mhlanga, 2023).
GPT has the capability to automate administrative tasks at educational institutions, including the creation of personalized connections with students and parents, the evaluation of tasks, and the design of course timetables. It can also facilitate language acquisition by providing rapid translation services and interactive language practice activities. Furthermore, GPT can analyze extensive volumes of educational data to identify patterns and provide insights that may be utilized in decision-making procedures within educational institutions.
GPT can also serve as a tool for individualized tutoring, offering customized training, guidance,
and support in certain academic areas or skill sets. This strategy has a multitude of benefits for
pupils, including individualized attention, increased self-confidence, enhanced study strategies,
improved organizational abilities, and heightened critical thinking aptitude. Tutors generally possess
specialized knowledge in particular academic fields and exhibit strong communication and interper-
sonal skills.
In recent years, there has been a substantial increase in the demand for personal tutoring ser-
vices, which now cater to a wide range of academic subjects and educational levels. The provi-
sion of personal tutoring has a substantial role in enhancing students’ academic advancement by
offering individualized instruction, personalized support, and targeted guidance, ultimately leading
to improved educational outcomes.
GPT is a sophisticated deep learning model that can generate text that closely resembles human language. This capability empowers educators to produce educational resources
such as quizzes, worksheets, and lesson plans. This technology has garnered significant attention
and adoption across diverse domains, encompassing natural language processing, digital marketing,
and creative writing. The capacity of GPT to produce content of superior quality has rendered it a
widely favored option for content generation (Javaid et al., 2023).
The process of content development entails presenting a prompt or input text to the model, which
subsequently generates a continuation or completion based on the provided input. The model’s com-
prehension of linguistic patterns and semantics enables it to produce content that conforms to the
given prompt. Nevertheless, ethical considerations arise from its potential for misuse, including the creation of fabricated news or misleading content.
The natural language processing capabilities of GPT can enhance language translation, thereby
enabling students to conveniently access instructional resources in their mother tongue. The util-
ization of this technology holds substantial significance for the field of education, as it empowers
students to effortlessly access educational resources in their native language. This capability effect-
ively addresses linguistic obstacles and contributes to the enhancement of the overall learning
experience. Language translation systems driven by NLP have the potential to augment students’
understanding and memory of material, foster inclusion and fairness, and enable collaboration
among students with diverse linguistic proficiencies.
The utilization of natural language processing in the field of education has promise in over-
coming linguistic obstacles and facilitating students’ access to educational resources in their native
language. This technical innovation has the potential to facilitate the establishment of inclusive
learning environments, enhance comprehension of instructional material, and encourage collabor-
ation among students from various linguistic backgrounds.
GPT (Generative Pre-trained Transformer) is an AI language model that can automate the assessment procedure by analyzing student responses and delivering prompt feedback. This can improve time management for instructors, enabling them to devote attention to other crucial elements of pedagogy, such as lesson planning and student support. Nevertheless, GPT has several constraints for adaptive learning, including potential biases and inaccuracies within its training data, which can result in erroneous or misleading information. Such biases may arise from the large text collections used for training, which can inadvertently encode societal, cultural, and linguistic preconceptions.
GPT's limitations also include a restricted capacity to grasp subtle or context-dependent concepts, which can impede the accuracy of its explanations and its support for intricate learning activities. The model's reliance on pre-existing data and patterns may fail to capture the intricacies of particular subject matters. Moreover, its deficiencies in logical reasoning and its inability to make inferences from prior knowledge can impede its capacity to understand complex and multifaceted subjects (Ferres et al., 2023).
Notwithstanding these constraints, the potential of GPT in enhancing educational experiences is
substantial. The capacity to generate text that closely emulates human language and offer immediate
feedback is a promising advancement in augmenting the caliber of instructional content and feed-
back. Nevertheless, it is imperative for educators and content providers to demonstrate diligence in
the process of critically evaluating and fact-checking AI-generated information, in order to maintain
high standards of truth and fairness.
The utilization of GPT within educational environments elicits ethical apprehensions regarding
the safeguarding of data privacy, the transparency of algorithms, and the appropriate implementation
of AI. GPT employs machine learning methods to generate text that closely resembles human-produced content, drawing on input from learners. This raises concerns about data breaches or misuse, which could result in unauthorized access to confidential student information. Algorithmic transparency is equally important, since GPT relies on intricate algorithms that may be opaque and difficult to comprehend for individuals without specialized knowledge.
The judicious integration of AI technology inside educational environments necessitates
meticulous deliberation on its possible impact on students’ educational experiences and academic
accomplishments (Ray, 2023). The utilization of GPT in educational practices has the potential to
improve instructional approaches by providing customized learning resources and offering valu-
able assessments. However, an overreliance on AI-generated content could potentially hinder the
development of critical thinking skills and creative capacities. Furthermore, it is imperative to take
into account the possible inadvertent consequences on students with diverse learning needs or
backgrounds in the event that the materials produced by GPT do not adequately cater to their indi-
vidual circumstances.
In order to mitigate biases and inaccuracies present in the performance of GPT, it is imperative
to prioritize the utilization of training data that is both diverse and of high quality. This process
entails integrating a diverse range of viewpoints, cultural backgrounds, and personal experiences, thereby enabling the model to generate replies that are both comprehensive and accurate. In order to iden-
tify and address any biases, it is imperative to implement rigorous data validation techniques. The
continuous assessment and evaluation of the effectiveness of GPT in educational environments is
crucial. Developers can enhance the suitability of GPT for educational environments and address
potential biases or inaccuracies by meticulously curating a varied training dataset, conducting thor-
ough validation methods, and maintaining continuous performance monitoring.
GPT is a state-of-the-art language model that has demonstrated significant proficiency in many natural language processing tasks. Its performance, however, can be greatly enhanced by integrating contextual understanding models and domain-specific knowledge sources. Models such as BERT provide a basis for comprehending the contextual aspects of language use, thereby improving GPT's capacity to generate responses that are more contextually suitable (Zheng & Zhan, 2023).
The inclusion of domain-specific knowledge bases, encompassing scientific concepts, medical
terminology, legal precedents, and historical events, has the potential to enhance the performance
of GPT in adaptive learning tasks. By using these models, GPT has the capability to generate more
accurate and customized responses across a wide range of fields, including education, healthcare,
legal research, and customer service.
The establishment of explicit ethical norms and regulations pertaining to the proper utilization of
AI technologies, such as GPT, within the educational domain, is of paramount importance in order
to ensure the protection of student privacy and foster a culture of openness and accountability. The
establishment and adherence to ethical norms and regulations play a crucial role in safeguarding the
responsible utilization of AI technologies within educational settings. These measures encompass
the establishment of clear and unequivocal ethical standards, the enforcement of regulations, and
the promotion of openness.
Educational institutions may be mandated by regulatory measures to disseminate information
regarding the utilization of AI technology to students, parents, and stakeholders. Concurrently,
regulatory entities possess the authority to oversee the implementation of ethical principles and
scrutinize occurrences of noncompliance or unethical conduct. Transparency is an additional key
component that justifies increased focus within ethical guidelines and legal frameworks (Beltrami
& Grant-Kels, 2023).
The ethical considerations associated with the integration of AI in the field of education involve
a range of factors. These factors include but are not limited to privacy, data security, transparency,
accountability, fairness, and the mitigation of bias. When using AI systems, educational institutions
should give utmost importance to safeguarding students’ privacy and sensitive information. In add-
ition, it is imperative to prioritize transparency and accountability in order to guarantee that AI
algorithms employed inside educational environments are capable of being comprehended and
subjected to audits to assess their fairness and any biases. Furthermore, it is imperative for legal
frameworks to effectively tackle concerns pertaining to intellectual property rights, responsibility,
and adherence to established standards, such as the Family Educational Rights and Privacy Act
(FERPA) in the United States.
In addition, the prudent utilization of AI in the field of education requires continuous surveil-
lance and assessment in order to gauge its influence on students’ educational encounters, scholastic
achievements, and general welfare. It is imperative for educators and policymakers to engage in a collaborative effort aimed at formulating rules that facilitate the ethical advancement and use of AI capabilities, while concurrently mitigating any risks and unforeseen repercussions. By giving pre-
cedence to ethical issues and ensuring adherence to legal requirements, those involved in education
can effectively utilize AI technology to enhance learning results while maintaining core values of
fairness, equity, and student well-being.
In summary, it is imperative to establish clear ethical guidelines and legal frameworks to guarantee the conscientious utilization of AI technology within educational environments. AI has a growing presence in educational settings, presenting a diverse range of possible advantages, including customized learning experiences, simplified administrative procedures, and improved educational results. Nevertheless, the ethical implications and legal considerations pertaining to the use of AI in education are intricate and diverse, which makes unambiguous ethical principles and regulatory frameworks all the more important for its responsible implementation.
17.7 CONCLUSION
GPT is a deep learning model developed by OpenAI that can be used in education for personalized tutoring, content generation, language translation, and automated assessment. Its potential to transform the education sector is significant, offering personalized support to students and facilitating a more engaging learning experience. GPT's adaptability allows for the creation of high-quality instruc-
tional resources, ensuring a diverse curriculum. It also enhances communication and collaboration
among students and educators, promoting international cooperation. Automated grading can reduce
teacher workload and provide immediate feedback to students, enhancing learning efficiency.
However, GPT’s potential biases and lack of transparency raise concerns about its adherence to
inclusive and impartial educational principles. To optimize GPT’s use, continuous research, ongoing
monitoring, and comprehensive training are recommended. GPT can be used to develop intelli-
gent tutoring systems, automate administrative tasks, and enhance educational content generation.
It can also analyze large amounts of educational data to identify trends and inform decision-making
processes. Collaboration among educators, technologists, and legislators is crucial for developing
comprehensive standards and regulations.
GPT can be used for personalized tutoring and content creation, focusing on specific learning
needs and enhancing academic achievement. However, ethical concerns exist regarding the use
of GPT, such as potential misuse and deceptive information. GPT’s natural language processing
capabilities enable students to access educational resources in their native language, overcoming
linguistic barriers and enhancing learning. Ethical concerns include data privacy, algorithmic trans-
parency, and responsible deployment of AI technologies in educational settings.
To ensure fairness and accuracy, developers should carefully select diverse training data, per-
form rigorous validation procedures, and continuously monitor and evaluate GPT’s performance.
Integrating GPT with contextual understanding models or domain-specific knowledge bases can
also enhance its ability to support adaptive learning tasks. To ensure responsible use of AI tech-
nologies like GPT in education, clear ethical guidelines and regulations are needed, covering data
protection, consent, and responsible management of student information. Transparency is crucial as
it helps establish trust and understanding of AI’s objectives and implications.
REFERENCES
Alser, M., & Waisberg, E. (2023). Concerns with the usage of ChatGPT in academia and medicine: A viewpoint.
American Journal of Medicine Open, 9(February), 1–2. https://doi.org/10.1016/j.ajmo.2023.100036
Beltrami, E. J., & Grant-Kels, J. M. (2023). Consulting ChatGPT: Ethical dilemmas in language model
artificial intelligence. Journal of the American Academy of Dermatology. https://doi.org/10.1016/
j.jaad.2023.02.052
Cardona, M. A., Rodríguez, R. J., & Ishmael, K. (2023). Artificial intelligence and the future of teaching and learning. U.S. Department of Education. www2.ed.gov/documents/ai-report/ai-report.pdf
Chavez, M. R. (2023). ChatGPT: The good, the bad, and the potential. American Journal of Obstetrics and
Gynecology, 229(3), 357. https://doi.org/10.1016/j.ajog.2023.04.005
Currie, G. M. (2023). Academic integrity and artificial intelligence: Is ChatGPT hype, hero or heresy? Seminars
in Nuclear Medicine, 53(5), 719–730. https://doi.org/10.1053/j.semnuclmed.2023.04.008
Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2023). A SWOT analysis of ChatGPT: Implications
for educational practice and research. Innovations in Education and Teaching International, 12(6), 1–15.
https://doi.org/10.1080/14703297.2023.2195846
Ferres, J. M. L., Weeks, W. B., Chu, L. C., Rowe, S. P., & Fishman, E. K. (2023). Beyond chatting: The oppor-
tunities and challenges of ChatGPT in medicine and radiology. Diagnostic and Interventional Imaging,
104(6), 263–264. https://doi.org/10.1016/j.diii.2023.02.006
Grassini, S. (2023). Shaping the future of education: Exploring the potential and consequences of AI and
ChatGPT in educational settings. Education Sciences, 13(7), 692. https://doi.org/10.3390/educsci1
3070692
Haleem, A., Javaid, M., & Singh, R. P. (2022a). An era of ChatGPT as a significant futuristic support tool: A
study on features, abilities, and challenges. BenchCouncil Transactions on Benchmarks, Standards and
Evaluations, 2(4), 100089. https://doi.org/10.1016/j.tbench.2023.100089
Haleem, A., Javaid, M., & Singh, R. P. (2022b). An era of ChatGPT as a significant futuristic support tool: A
study on features, abilities, and challenges. BenchCouncil Transactions on Benchmarks, Standards and
Evaluations, 2(4), 1–8. https://doi.org/10.1016/j.tbench.2023.100089
Javaid, M., Haleem, A., Singh, R. P., Khan, S., & Khan, I. H. (2023). Unlocking the opportunities through
ChatGPT tool towards ameliorating the education system. BenchCouncil Transactions on Benchmarks,
Standards and Evaluations, 3(2), 100115. https://doi.org/10.1016/j.tbench.2023.100115
Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G.,
Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet,
O., Sailer, M., Schmidt, A., Seidel, T., … & Kasneci, G. (2023). ChatGPT for good? On opportunities
and challenges of large language models for education. Learning and Individual Differences, 103(9),
1–9. https://doi.org/10.1016/j.lindif.2023.102274
Liu, Y., Han, T., Ma, S., Zhang, J., Yang, Y., Tian, J., He, H., Li, A., He, M., Liu, Z., Wu, Z., Zhao, L., Zhu,
D., Li, X., Qiang, N., Shen, D., Liu, T., & Ge, B. (2023). Summary of ChatGPT-related research and
perspective towards the future of large language models. Meta-Radiology, 1(2), 1–14. https://doi.org/
10.1016/j.metrad.2023.100017
Mhlanga, D. (2023). Open AI in education, the responsible and ethical use of ChatGPT towards lifelong
learning. SSRN Electronic Journal, 22(4), 1–19. https://doi.org/10.2139/ssrn.4354422
Muhie, Y. A., Wolde, A. B., & Woldie, A. B. (2023). Integration of artificial intelligence technologies in
teaching and learning in higher education. Science and Technology, 2020(1), 1–7. https://doi.org/
10.5923/j.scit.202001001.01
Nabizadeh, A. H., Gonçalves, D., Gama, S., Jorge, J., & Rafsanjani, H. N. (2020). Adaptive learning path rec-
ommender approach using auxiliary learning objects. Computers & Education, 147(6), 103–117. https://
doi.org/10.1016/j.compedu.2019.103777
Qadir, J. (2023). Engineering education in the era of ChatGPT: Promise and pitfalls of generative AI for edu-
cation. In 2023 IEEE Global Engineering Education Conference (EDUCON), 1–9. https://doi.org/
10.1109/EDUCON54358.2023.10125121
Rahimi, F., & Talebi Bezmin Abadi, A. (2023). ChatGPT and publication ethics. Archives of Medical Research,
54(3), 272–274. https://doi.org/10.1016/j.arcmed.2023.03.004
Ray, P. P. (2023). Re-evaluating the role of AI in scientific writing: A critical analysis on ChatGPT. Skeletal
Radiology, 52(12), 2487–2488. https://doi.org/10.1007/s00256-023-04404-6
Shannon Tan, S. T. R. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning & Teaching, 6(1), 242–263.
Zhai, X. (2022). ChatGPT Is a New AI Chatbot That Can Find Mistakes in Your Code or Write a Story for You.
National Science Foundation (NSF).
Zheng, H., & Zhan, H. (2023). ChatGPT in scientific writing: A cautionary tale. American Journal of Medicine, 136(8), 725–726.e6. https://doi.org/10.1016/j.amjmed.2023.02.011
18.1 INTRODUCTION
In recent years, the widespread integration of advanced artificial intelligence (AI) tools, specific-
ally ChatGPT and other large language models (LLMs), has revolutionized educational settings
by offering innovative avenues for learning and instruction. However, the ascendancy of ChatGPT
in management education has introduced challenges and limitations that necessitate a thoughtful
examination (Opara et al., 2023; Pavlik, 2023). Despite its remarkable linguistic capabilities,
ChatGPT falls short in addressing the nuanced requirements of management education, particu-
larly in decision-making processes crucial to the fabric of management disciplines (Khalil & Er,
2023). The reliance on automated responses poses unique challenges, such as the potential dilution
of the educational experience and a diminished emphasis on interpersonal collaboration (Mijwil
et al., 2023).
Against this backdrop, this chapter seeks to elucidate the pressing need for practical approaches
that transcend the limitations of ChatGPT. Our goal is to explore innovative methodologies that not
only acknowledge the presence of AI tools but actively leverage them alongside human faculties to
enhance student engagement. The focal point of our exploration is collaborative decision-making
processes, recognized as essential components in preparing future managers for the complexities of
the professional landscape. In doing so, we aim to contribute valuable insights into how educators
can navigate the challenges posed by ChatGPT, moving beyond automated responses to outline a
path that mitigates drawbacks and enriches the learning experience through collaborative decision-
making strategies.
The integration of ChatGPT into management education has prompted numerous studies
assessing its impact on pedagogical practices and student learning outcomes. As we delve into this
landscape, it becomes imperative to scrutinize the extent of ChatGPT integration into management
courses and its implications for the development of critical managerial skills. Existing literature
reveals a spectrum of endeavors aimed at incorporating ChatGPT as a pedagogical tool, showcasing
its potential in automating certain aspects of instruction, generating customized learning materials,
and simulating decision-making scenarios (Crawford et al., 2023).
However, an in-depth analysis is warranted to discern the effectiveness of ChatGPT in fostering
critical managerial skills. While proficient in generating textual content, its ability to instill nuanced
decision-making, strategic thinking, and interpersonal skills requires careful examination (Tlili
et al., 2023; Gerlak & Heikkila, 2011). Scholars have scrutinized the outcomes of incorporating
ChatGPT in management courses, assessing its impact on students’ ability to navigate real-world
managerial challenges and make informed decisions. The synthesis of these findings serves as a
foundational element in shaping our exploration of practical approaches and strategies, aiming to
bridge the gap between ChatGPT capabilities and managerial education requirements.
18.2 LITERATURE REVIEW
18.2.1 Overview of AI in Education
The integration of AI into educational settings represents a significant milestone in the evolution of
pedagogy (Adiguzel et al., 2023). To comprehend the current landscape, it is imperative to trace the
historical context and the nuanced evolution of AI tools within educational domains. The journey of
AI in education has witnessed transformative advancements, ranging from rule-based systems to the
advent of sophisticated language models like ChatGPT (Jeon & Lee, 2023).
In exploring the historical trajectory, we unravel the progression from early AI applications
focused on automating administrative tasks to the contemporary era characterized by the infusion
of AI into instructional methodologies (Bierhals et al., 2007; Darban et al., 2023). The evolution
of AI tools in education mirrors broader technological trends, with each phase contributing to the
enhancement of learning experiences, personalized instruction, and the potential to revolutionize
educational paradigms (Bhutoria, 2022).
A significant focal point within the realm of AI in education is the pervasive influence of lan-
guage models, with ChatGPT emerging as a prominent player (Adiguzel et al., 2023; Darban et al.,
2023). The existing literature provides a comprehensive examination of the impact of ChatGPT in
various educational domains, shedding light on its applications, benefits, and limitations (Jeon &
Lee, 2023). Researchers have scrutinized the efficacy of ChatGPT in tasks such as automated content generation, language comprehension, and interactive tutoring, offering nuanced perspectives on
its potential contributions to the educational landscape (Baidoo-Anu & Ansah, 2023).
However, as we delve into the current research landscape, it becomes evident that gaps and
challenges persist in the effective integration of ChatGPT in education (Kasneci et al., 2023). The
existing literature highlights areas where AI tools, including ChatGPT, may fall short in meeting the
diverse needs of learners and educators (Crawford et al., 2023). Challenges range from issues related
to bias and ethical considerations in AI-generated content to concerns about the adaptability of these
tools to the intricate demands of specific educational disciplines (Mhlanga, 2023).
Identifying these gaps and challenges is paramount for steering the discourse toward a more
comprehensive understanding of the role of ChatGPT in education (Kasneci et al., 2023). It sets the
stage for our exploration of management education, where the challenges and opportunities posed
by ChatGPT become particularly pronounced. By grounding our literature review in the historical
evolution of AI in education, evaluating the impact of ChatGPT, and discerning gaps in the current
research landscape, we lay a robust foundation for the subsequent sections that delve into the spe-
cific dynamics of management education.
develop shared mental models, making the group more cohesive and improving decisions (Denzau
& North, 1994). Distributed Cognition, rooted in the idea that decision-making involves the collective intelligence of a group, is another critical perspective (Hutchins, 2000). This theoretical
foundation highlights how collaborative decision-making is interconnected, stressing shared cogni-
tive processes that lead to effective group decisions.
Research consistently shows that collaboration positively affects learning outcomes. There are
practical strategies to use collaborative decision-making for educational benefits. Project-Based
Learning (PBL) is one approach where students work on real-world projects, encouraging collab-
oration and decision-making (de Los Rios et al., 2010). Online Collaborative Platforms offer virtual spaces for collaborative decision-making, overcoming geographical constraints (Peimani & Kamalipour, 2021). Case-Based Learning, involving real-world cases, encourages students to analyze and decide on solutions together, improving decision-making skills (Kolodner et al., 1996).
In management education, collaborative decision-making is vital for developing decision-making
skills. Simulations and Business Games put students in decision-making scenarios, promoting col-
laborative problem-solving (Vos, 2015). Interdisciplinary Collaborations enrich decision-making
perspectives by involving management students with peers from other disciplines (Pennington,
2008). Industry–University Partnerships give students real-world decision-making experiences, collaborating with industry partners on projects (Tener, 1996).
Human-in-the-Loop Approaches involve human oversight in AI-generated content for accuracy (Zanzotto, 2019). Ethical Decision-Making Modules address biases in AI-generated content, integrating discussions on AI ethics within management courses (Drumwright et al., 2015). Customizable AI Tools, like ChatGPT tailored for management education, enhance alignment with educational objectives (Kokina & Davenport, 2017). These practical applications show how collaborative decision-making is integrated into management education.
In synthesizing insights derived from the extensive literature review presented in Table 18.1,
we endeavor to construct a robust conceptual framework that bridges theoretical foundations with
practical approaches. The aim is to provide a comprehensive guide for understanding and mitigating
the challenges posed by ChatGPT and other LLM generative AI tools in university environments,
particularly in the context of management education.
The theoretical underpinnings gleaned from the exploration of collaborative decision-making and
the role of ChatGPT in management education serve as the foundational elements for our synthesis.
Drawing from social constructivism, shared mental models, and distributed cognition, our synthesis
highlights the importance of collaborative decision-making processes in addressing the limitations of
AI tools like ChatGPT. The synthesis underscores the need to integrate theoretical perspectives that
acknowledge the cognitive processes distributed across both humans and AI tools. By doing so, we
aim to create a framework that not only complements the capabilities of ChatGPT but also addresses
the nuanced requirements of managerial decision-making and skill development. Our conceptual
framework hinges on the seamless integration of collaborative decision-making and customized AI
tools within the educational landscape. The framework posits that the synergy between human intel-
ligence, as facilitated by collaborative decision-making, and AI tools, like ChatGPT, can cultivate a
learning environment that transcends the limitations posed by AI in isolation.
Key Components of the Conceptual Framework:
TABLE 18.1
Theoretical foundations in collaborative decision-making approaches in management education
Approach | Theoretical foundation | Practical application | Key references
Project-Based Learning (PBL) | Social Constructivism | Integration of PBL into management courses | de Los Rios et al. (2010); Frank et al. (2003); Hadim and Esche (2002); Bell (2010); Kokotsaki et al. (2016)
Online Collaborative Platforms | Distributed Cognition | Utilizing virtual platforms for collaborative projects | Peimani and Kamalipour (2021); Castro (2019); Al-Samarraie and Saeed (2018); Coman et al. (2020)
Case-Based Learning | Shared Mental Models | Implementation of case studies in management education | Kolodner et al. (1996); Eshach and Bitterman (2003); Carder et al. (2001); Lee et al. (2009)
Simulations and Business Games | Social Constructivism | Use of business simulations for collaborative decision-making | Vos (2015); Lacruz (2017); Lopes et al. (2013); Greco et al. (2013); Faria (1987)
Interdisciplinary Collaborations | Distributed Cognition | Cross-disciplinary projects in management education | Pennington (2008); Pennington (2011); Priaulx and Weinel (2018); Fruchter and Emery (1999)
Industry–University Partnerships | Shared Mental Models | Collaborative projects with industry partners | Tener (1996); Smith et al. (2018); Rybnicek and Königsgruber (2019); Bierhals et al. (2007)
Human-in-the-Loop Approaches | Distributed Cognition | Incorporating human oversight in AI-generated content | Zanzotto (2019); Wu et al. (2022); Bhutoria (2022)
Ethical Decision-Making Modules | Social Constructivism | Integration of modules on AI ethics in management courses | Drumwright et al. (2015); Bero and Kuhlman (2011); Hartman and Werhane (2009)
Customizable AI Tools | Distributed Cognition | Tailoring AI tools for industry-specific collaborative tasks | Kokina and Davenport (2017); Khabarov and Volegzhanina (2019); Weng (2023)
The conceptual framework guides the identification of key variables crucial for the empirical valid-
ation of our proposed approaches. These variables are informed by both theoretical considerations
and practical insights derived from the literature, ensuring a holistic and methodologically sound
survey design.
18.3 METHODOLOGY
The research methodology employed seeks to empirically validate and refine a conceptual framework
derived from an extensive literature review. The selection of a survey as the primary research method
stems from its efficiency in collecting a diverse range of perspectives. Given the multi-stakeholder
nature of the study involving educators, students, and industry managers, a survey provides a sys-
tematic approach to glean insights on the effectiveness of collaborative decision-making strategies
and the challenges posed by ChatGPT in management education. Furthermore, a survey enables
the collection of quantitative data, facilitating the analysis of trends, patterns, and correlations. This
methodological choice aligns with the research objective of deriving empirical evidence to support
and refine proposed approaches within the conceptual framework.
Survey questions are crafted to ensure a targeted exploration of key variables identified in the
conceptual framework. These questions are designed to capture nuanced perspectives on collabora-
tive decision-making, the role of ChatGPT, and proposed practical approaches within the educational landscape.
The survey targets three primary groups within the university environment. The sampling strategy
ensures representation from each target group within the context of Morocco.
• Professors:
Inclusion Criteria: Professors currently engaged in teaching management courses at univer-
sities in Morocco.
Sampling Method: Purposeful sampling, reaching out to academic institutions across Morocco
via email and LinkedIn, ensuring representation from different universities and academic
backgrounds.
• Students:
Inclusion Criteria: Students enrolled in management programs at Moroccan universities.
Sampling Method: Random sampling through online platforms, targeting various universities
and academic levels to capture a diverse student population.
• Company Managers:
Inclusion Criteria: Managers holding decision-making positions in Moroccan companies, par-
ticularly those involved in hiring management graduates.
Sampling Method: Utilization of email addresses and professional networks on LinkedIn,
ensuring representation from diverse industries and company sizes.
The inclusion of these diverse stakeholders enriches survey responses, ensuring a comprehensive
understanding of challenges and opportunities associated with collaborative decision-making and
AI integration in management education.
Given the online nature of the study in Morocco, the survey was distributed via email. LinkedIn groups served as platforms for reaching professors and company managers, while targeted Facebook groups were used to reach students. The total responses garnered, comprising 978 students, 167 professors, and 213 company managers, attest to the efficacy of the chosen online distribution channels.
18.4 RESULTS
The results of the survey provide a rich tapestry of insights, offering a nuanced understanding of the
perceptions and preferences of students, professors, and company managers regarding collaborative
decision-making strategies in the context of management education. The analysis is structured to
align with the main objectives of the study, as outlined in the conceptual framework.
18.4.1 Demographics
Before delving into the specific survey responses, it is crucial to contextualize the findings by
considering the demographics of the respondents. Table 18.2 presents the survey results, which successfully captured a diverse range of perspectives, with students constituting the majority at 62.10%,
followed by company managers at 13.50%, and professors at 10.60%. This distribution ensures a
comprehensive view that incorporates the experiences and viewpoints of key stakeholders within the
management education ecosystem.
The experience level of professors and company managers, as depicted in Table 18.3,
provides additional context. Notably, the majority of professors (56.30%) and company man-
agers (55.90%) have five to ten years of experience, indicating a substantial cohort with a mid-
range level of expertise. This demographic distribution enriches the findings by incorporating
insights from individuals with varied levels of exposure to collaborative decision-making in
management education.
Table 18.4 presents the distribution of company managers across different industry sectors.
The prevalence of company managers in the technology and telecom sector (22.00%) and finance,
TABLE 18.2
Role
TABLE 18.3
Experience level
TABLE 18.4
Industry sector
insurance, and banking sector (23.90%) indicates a significant representation from industries that
often demand strategic decision-making skills. This diversity ensures that the survey captures
perspectives from sectors with distinct challenges and requirements, enhancing the applicability of
the findings.
TABLE 18.5
Collaborative decision-making and management education (cells show the number and percentage of respondents at each point of a five-point rating scale, from lowest to highest)
How much do collaborative decision-making processes contribute to the development of critical managerial skills
Students 76 7.80% 165 16.90% 264 27.00% 294 30.10% 183 18.70%
Professors 7 4.20% 15 9.00% 25 15.00% 56 33.50% 64 38.30%
Company 6 2.80% 17 8.00% 36 16.90% 66 31.00% 87 40.80%
managers
How effective is Project-Based Learning (PBL) in enhancing collaborative decision-making skills
in management education
Students 117 12.00% 214 21.90% 284 29.00% 254 26.00% 109 11.10%
Professors 10 6.00% 18 10.80% 32 19.20% 53 31.70% 55 32.90%
Company 9 4.20% 19 8.90% 38 17.80% 63 29.60% 78 36.60%
managers
How crucial is the utilization of Online Collaborative Platforms for collaborative decision-making
in management education
Students 98 10.00% 146 15.00% 244 25.00% 293 30.00% 196 20.00%
Professors 25 15.00% 34 20.40% 42 25.30% 33 19.80% 33 19.80%
Company 43 20.10% 43 20.10% 53 24.90% 43 20.10% 32 15.00%
managers
How beneficial is Case-Based Learning in developing shared mental models for collaborative
decision-making in management education
Students 97 9.90% 195 20.00% 293 30.00% 245 25.00% 148 15.10%
Professors 8 4.80% 17 10.20% 34 20.40% 53 31.70% 58 34.70%
Company 6 2.80% 17 8.00% 36 16.90% 67 31.50% 87 40.80%
managers
How effective are Simulations and Business Games in fostering collaborative decision-making
skills in management students
Students 98 10.00% 195 20.00% 293 30.00% 245 25.00% 98 10.00%
Professors 25 15.00% 34 20.40% 34 20.40% 51 30.50% 25 15.00%
Company 42 19.60% 32 15.00% 53 24.90% 64 29.90% 21 9.80%
managers
How valuable are Industry–University Partnerships in enhancing collaborative decision-making
skills in management education
Students 23 2.40% 77 7.90% 146 15.00% 488 50.00% 243 25.00%
Professors 9 5.40% 29 17.40% 75 44.90% 42 25.10% 12 7.20%
Company 2 0.90% 9 4.20% 21 9.80% 74 34.60% 107 50.00%
managers
To what extent do Human-in-the-Loop Approaches contribute to addressing challenges in
AI-generated content for collaborative decision-making in management education
Students 46 4.70% 391 40.00% 391 40.00% 97 9.90% 49 5.00%
Professors 3 1.80% 5 3.00% 10 6.00% 50 29.90% 99 59.30%
Company 2 0.90% 21 9.80% 95 44.40% 75 35.00% 20 9.30%
managers
Simulations and Business Games are also rated favorably, with a notable share of respondents in each group attributing a "very much" or "extremely" high level of effectiveness. This underscores the practical applicability of simulated decision-making scenarios in enhancing collaborative skills.
Industry–University Partnerships are perceived as immensely valuable, particularly by company man-
agers (50.00%) and students (25.00%). Professors also recognize their value, though to a slightly lesser
extent (44.90%). The emphasis on real-world collaborations with industry aligns with the distributed
cognition perspective, acknowledging the distributed nature of decision-making processes.
Human-in-the-Loop Approaches are recognized as essential, especially by professors (59.30%)
and students (40.00%). Company managers also acknowledge their importance, with 35.00% attrib-
uting a “very much” or “extremely” high level of contribution. This finding aligns with the prac-
tical approaches identified in the literature review, emphasizing the need for human oversight in
AI-generated content.
Ethical Decision-Making Modules are considered essential, particularly by students (45.00%)
and professors (59.30%). Company managers also recognize their importance, with 78.00% attrib-
uting a “very much” or “extremely” high level of essentiality. This aligns with the ethical consider-
ations highlighted in the literature, emphasizing the need for a structured approach to address ethical
concerns associated with AI-generated content.
Customizable AI Tools receive recognition for their effectiveness in addressing industry-specific
challenges. A substantial proportion of students (60.00%), professors (44.90%), and company man-
agers (54.70%) attribute a “very much” or “extremely” high level of effectiveness. This underscores
the importance of tailoring AI tools to suit the specific requirements of management education.
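As a small illustration of how such percentages can be reproduced from the raw counts reported in Table 18.5, the sketch below computes the share of respondents choosing the two highest ratings ("very much" or "extremely") for the Project-Based Learning item; only the hard-coded counts come from the table, and the helper itself is generic.

```python
# Helper for summarizing Table 18.5-style Likert counts: the share of
# respondents choosing the two highest ratings. Counts below are taken
# from the Project-Based Learning row of Table 18.5; the function works
# for any row of five counts.

def top_two_share(counts: list[int]) -> float:
    """Fraction of respondents in the two highest of five rating categories."""
    if len(counts) != 5:
        raise ValueError("Expected counts for a five-point scale.")
    return sum(counts[3:]) / sum(counts)

pbl_counts = {
    "students": [117, 214, 284, 254, 109],
    "professors": [10, 18, 32, 53, 55],
    "company_managers": [9, 19, 38, 63, 78],
}

for group, counts in pbl_counts.items():
    print(f"{group}: {top_two_share(counts):.1%} rate PBL highly effective")
```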
18.4.3 Comparative Analysis
The comparative analysis presented in Table 18.6, covering the perceived effectiveness of various practical approaches in management education as assessed by professors, company managers, and students, reveals intriguing insights.
Simulations and Business Games emerge as the most favored approach by both company man-
agers and students, garnering the highest effectiveness ratings of 65% in each group. Ethical
Decision-Making Modules hold significant prominence among professors, leading with a 65%
effectiveness rating. Online Collaborative Platforms, despite variations across groups, consistently
rank in the top tier, emphasizing their universal relevance. Industry–University Partnerships, PBL,
and Interdisciplinary Collaborations display relatively balanced effectiveness ratings across groups.
Customizable AI Tools and Human-in-the-Loop Approaches exhibit nuanced preferences, showcasing
the diverse perspectives of each respondent group. This comprehensive ranking, summarized in
Table 18.7, provides valuable insights into the perceived efficacy of these approaches in fostering
collaborative decision-making skills in the context of management education.

TABLE 18.6 The comparative analysis

TABLE 18.7 Comparative analysis ranking
Professors: 1. Ethical Decision-Making Modules; 2. Online Collaborative Platforms; 3. Interdisciplinary Collaborations; 4. Industry–University Partnerships; 5. Project-Based Learning (PBL); 6. Case-Based Learning; 7. Simulations and Business Games; 8. Human-in-the-Loop Approaches; 9. Customizable AI Tools.
Company managers: 1. Simulations and Business Games; 2. Ethical Decision-Making Modules; 3. Industry–University Partnerships; 4. Online Collaborative Platforms; 5. Customizable AI Tools; 6. Interdisciplinary Collaborations; 7. Human-in-the-Loop Approaches; 8. Case-Based Learning; 9. Project-Based Learning (PBL).
Students: 1. Simulations and Business Games; 2. Online Collaborative Platforms; 3. Industry–University Partnerships; 4. Case-Based Learning; 5. Project-Based Learning (PBL); 6. Customizable AI Tools; 7. Ethical Decision-Making Modules; 8. Human-in-the-Loop Approaches; 9. Interdisciplinary Collaborations.
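A ranking of the kind shown in Table 18.7 can be derived mechanically by sorting the approaches on a chosen statistic, for example the combined share of the two highest response levels within a group. The sketch below illustrates this with invented placeholder figures rather than the survey's full dataset.

# Sketch: ranking practical approaches for one respondent group by the combined
# share of top-two ("very" + "extremely") ratings. The figures here are
# illustrative placeholders, not the survey data.
top_two_share = {
    "Simulations and Business Games": 0.65,
    "Online Collaborative Platforms": 0.50,
    "Industry-University Partnerships": 0.75,
    "Ethical Decision-Making Modules": 0.45,
    "Human-in-the-Loop Approaches": 0.15,
}

ranking = sorted(top_two_share.items(), key=lambda item: item[1], reverse=True)

for rank, (approach, share) in enumerate(ranking, start=1):
    print(f"{rank}. {approach} ({share:.0%})")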
Cultural adaptation: Several respondents emphasized the importance of adapting collaborative decision-making strategies to the Moroccan cultural context. Insights highlighted the need for approaches that resonate with local values and norms, ensuring meaningful engagement and participation. Representative quote: "It is essential to adapt collaborative approaches to our Moroccan cultural context. Local values must be considered to ensure meaningful student participation."

Technology accessibility: Participants expressed varying degrees of access to technology, particularly in remote or underserved areas. This insight underscores the need for strategies that consider the diverse technological landscape, ensuring inclusivity in collaborative initiatives. Representative quote: "Access to technology varies significantly. In remote areas, many face accessibility challenges. Collaborative strategies must be inclusive and consider this reality."

Industry alignment: Comments highlighted the importance of aligning collaborative strategies with the expectations and dynamics of the local industry. This insight emphasizes the need for practical approaches that bridge the gap between academic training and industry demands in the Moroccan context. Representative quote: "Collaborative strategies must align with the expectations of the local industry. It is crucial to prepare students based on the specific demands of the Moroccan market."
The nuanced comments and reflections provided by participants deepen our understanding of
the challenges and opportunities in implementing collaborative decision-making strategies in the
unique context of management education.
specificity. The survey findings contribute to precision by highlighting the specific technological
tools deemed crucial by stakeholders.
The comparative analysis provides a nuanced ranking of the perceived effectiveness of different
practical approaches, as assessed by professors, company managers, and students. Simulations and
Business Games emerge as the most favored approach by both company managers and students, with
a 65% effectiveness rating in each group. Ethical Decision-Making Modules hold significant prom-
inence among professors, leading with a 65% effectiveness rating. Online Collaborative Platforms
consistently rank in the top tier across groups, emphasizing their universal relevance.
This comparative analysis serves as a valuable guide for the refinement of the Conceptual
Framework. It not only validates the initial emphasis on collaborative decision-making modules
and ethical considerations but also provides a nuanced understanding of the relative effectiveness
of different methodologies. The refined Conceptual Framework should prioritize the integration
of Simulations and Business Games, Ethical Decision-Making Modules, and Online Collaborative
Platforms.
The open-ended responses from participants offer additional insights into cultural adaptation,
technology accessibility, and industry alignment. These themes enrich the understanding of the
challenges and opportunities in implementing collaborative decision-making strategies in the unique
context of management education in Morocco. The refinement of the Conceptual Framework should
incorporate these insights, recognizing the importance of cultural sensitivity, inclusivity in techno-
logical strategies, multilingual approaches, and alignment with local industry dynamics.
Challenges and limitations identified by participants, including resource constraints, engagement
and motivation issues, interdisciplinary coordination challenges, and the need for faculty training,
provide practical considerations for the implementation of collaborative decision-making strategies.
These challenges align with the initial Conceptual Framework’s recognition of potential barriers to
effective collaborative decision-making. The refined Conceptual Framework should address these
challenges explicitly, providing actionable recommendations for educators and institutions.
18.6 CONCLUSION
The conceptual framework proposed in this study integrates key elements derived from both the
extensive literature review and the empirical findings obtained through the survey. This concep-
tual framework serves as a comprehensive guide for understanding the relationships and dynamics
involved in collaborative decision-making within the context of management education, with a spe-
cific emphasis on the integration of ChatGPT. The framework is designed to capture the multifa-
ceted nature of this phenomenon and to provide a foundation for empirical research and practical
implementation.
• Collaborative Decision-Making Modules: At the core of the framework lies the integration
of collaborative decision-making modules within management education. These modules
encompass practical methodologies such as PBL, Case-Based Learning, Simulations and
Business Games, and Industry–University Partnerships. The objective is to provide students
with experiential learning opportunities, fostering the development of shared mental models
and critical managerial skills.
• Ethical Decision-Making Integration: Ethical decision-making modules are seamlessly
integrated into the conceptual framework, responding to the ethical concerns associated with
AI-generated content, particularly ChatGPT. The framework recognizes the essential role of
ethical considerations in managerial decision-making and aims to instill ethical responsibility
in students. This integration ensures that the use of AI tools aligns with ethical standards and
principles.
The suggested conceptual framework serves as a holistic and adaptive guide for the integration
of LLMs and collaborative decision-making in management education. It positions practical
approaches, ethical considerations, technological interventions, interdisciplinary collaborations,
and cultural adaptation as interconnected elements, collectively shaping a transformative educa-
tional experience. This framework not only addresses the challenges identified in the literature but
also provides actionable strategies for educators and stakeholders, fostering the development of
future-ready managers equipped with both technical and ethical decision-making skills.
1.1. Role:
• Professor
• Student
• Company manager
1.2. Experience level (for professors and company managers):
• Less than 5 years
• 5–10 years
• More than 10 years
1.3. Industry sector (for company managers):
• Technology and telecommunications
• Finance, insurance, banking …
• Healthcare
• Manufacturing, energy
• Others
2.4. How beneficial is Case-Based Learning in developing shared mental models for collaborative decision-making in management education?
• Not beneficial at all
• Slightly beneficial
• Moderately beneficial
• Very beneficial
• Extremely beneficial
2.5. How effective do you find Simulations and Business Games in fostering collaborative
decision-making skills in management students?
• Not effective
• Somewhat effective
• Effective
• Very effective
• Extremely effective
2.6. In your experience, how have Interdisciplinary Collaborations contributed to collab-
orative decision-making skills in management students?
• Not effective
• Somewhat effective
• Effective
• Very effective
• Extremely effective
2.7. How valuable do you find Industry–University Partnerships in enhancing collabora-
tive decision-making skills in management education?
• Not valuable at all
• Slightly valuable
• Moderately valuable
• Very valuable
• Extremely valuable
2.8. To what extent do Human-in-the-Loop Approaches contribute to addressing challenges in AI-generated content for collaborative decision-making in management education?
• Not at all
• Slightly
• Moderately
• Very much
• Extremely
2.9. How essential do you consider the integration of Ethical Decision-Making Modules in addressing ethical concerns related to AI-generated content in management education?
• Not essential at all
• Slightly essential
• Moderately essential
• Very essential
• Extremely essential
2.10. In your opinion, how effective are Customizable AI Tools in addressing industry-specific challenges in collaborative decision-making assignments in management education?
• Not effective
• Somewhat effective
• Effective
• Very effective
• Extremely effective
4.1. Is there any additional comment or insight you would like to provide regarding your
experiences with collaborative decision-making in management education or the
integration of AI tools?
4.2. If applicable, please share any challenges or limitations you have encountered in
implementing collaborative decision-making strategies in management education.
Index
Note: Page numbers in italic indicate figures and in bold indicate tables.
constructivist theory 127; see also social constructivism
content generation 25, 88–89, 90, 93, 94, 247–248, 251–252, 254, 259
contextual understanding 100, 254, 257–258, 260
continuous improvement in learning 85, 92, 94–95
continuous professional development for educators 94
continuous/lifelong learning 85, 86, 87, 91, 104, 106, 151, 204
co-occurrence bias 227
copyright issues 241–242
critical AI literacy 171
critical thinking skills 5, 9, 11–12, 19, 26, 33, 94, 106, 114, 115, 128, 166, 168–169, 170, 237–238, 255, 260; active learning and 192, 195, 196, 197, 199, 200, 201, 203; project-based learning and 92
curriculum planning, personalized 142–152, 149
customizable AI tools 267, 268, 273, 273, 274, 274, 276, 278
customized content generation 88–89, 90, 93, 94

D
data bias 19, 93, 95, 106, 112, 160, 161, 226, 227, 228, 236, 239, 240, 248, 254, 256–257, 259, 260
data privacy concerns 19, 25, 93, 114, 128, 150–151, 248, 255, 258, 260
decision trees 21, 226
design studio virtual assistant 156–171; essay writing 161–165, 162; persona enhancement 158–161; report writing 165–170
direct bias 228
disabilities, students with 29, 89, 90, 96, 102, 104, 105, 105; see also accessibility
discrimination see bias and discrimination
distributed cognition 267, 268, 268, 273
diversification 31–32
domain-specific knowledge bases 257–258, 260
domain-specific learning 92
dropout rates 144
dynamic discussions 194, 195, 195

E
education research on AI and ChatGPT 17–33, 30; AI techniques used 21; application phase 22, 25–26; development phase 21–22, 23–25, 24; diversification 31–32; emerging technologies 23–25, 24; human–AI interaction phase 22, 28–31; methodologies used 21; problems with AI systems 19; recurring themes 28–29; robotization 31; SWOT analysis phase 22, 26–28, 27; synthesis 32; theories used 20; trust in AI 29–31; see also bibliometric analysis of ChatGPT in education research
educational robotics 23
educator training 19, 248
Ellen MacArthur Foundation 167
emerging technologies 23–25, 24
employability 143, 144, 148
energy consumption of computing infrastructure 241
environmental impact concerns 25, 128, 241
epistemological bias 228
equity and access 9, 24, 25, 27, 29, 94, 114
essay writing 105, 161–165, 162
ethical considerations 15, 19, 25, 31, 40, 78, 112–113, 114, 171, 190, 201–202, 235–243, 248–249, 255; academic integrity concerns 5, 8, 9, 25, 28, 42, 58, 62, 113, 165, 176, 225–226, 236–237; accountability issues 19, 151, 238–240; auditing 240–241, 248; copyright issues 241–242; data privacy concerns 19, 25, 93, 114, 128, 150–151, 248, 255, 258, 260; domain-specific learning 92; environmental impact concerns 25, 128, 241; equity and access 9, 24, 25, 27, 29, 94, 114; ethical guidelines and regulations 26, 258–261; faculty member perspectives 11–12, 13; graduate student perspectives 8, 9–10; inaccurate content generation 25, 252, 254, 259; intellectual property issues 25, 241–242, 261; personalized curriculum planning 150–151; personalized learning 92, 93, 94, 95, 150–151, 201–202; plagiarism 5, 8, 9, 25, 28, 42, 62, 78, 113, 162, 176, 225, 229, 237; refining critical judgment 237–238; resource handling 241; transparency issues 19, 25, 29, 94, 151, 171, 240–241, 255, 258, 260, 261; undergraduate student perspectives 3–4, 5–6; see also bias and discrimination
ethical decision-making modules 267, 268, 268, 269, 273, 273, 274, 276, 277

F
fashion and sustainability report 165–170
feedback 23, 26, 29, 127, 169, 190, 192, 201; adaptive assessments 90, 90; automated grading 29, 248, 252–253, 259; bias and 254; culture of 171; dynamic discussions 194; from educators 199; faculty member perspectives 11, 13; graduate student perspectives 7, 9; language learning 130, 136, 137; peer review 44; personalized 85, 86, 87, 90, 90, 91, 91, 93, 111–112, 197, 199, 202, 203, 204; virtual assistants 31; virtual coaching of teachers 190; virtual tutors 91, 91
foreign language learning 105, 125–139, 130, 138, 214
framing bias 227
fuzzy logic 21

G
gamification 18, 23, 24, 90, 130, 132, 138
general AI 100
generative adversarial networks (GANs) 28, 143
generative artificial intelligence (AI): role in education 84–85; theoretical framework for use in education 189–190, 190; see also ChatGPT
Generative Pre-trained Transformer (GPT) 28, 41, 143–144, 223, 224, 247–262; bias in 228–229; limitations for adaptive learning 253–256, 253; opportunities for adaptive learning 249–253, 250; recommendations for adaptive learning 256–261; see also ChatGPT
GPT Builder 150
grading, automated 29, 248, 252–253, 259
guidance, academic 175–186, 179, 180, 181, 182, 184, 185

H
human–AI hybrids 31–32, 100
human-in-the-loop approaches 267, 268, 272, 273, 274, 274, 276, 278
hybrid learning models 210–220, 215; comparative study 216–218, 216, 217, 217, 218, 219, 219
hyperparameter bias 227