International Journal of Information Management 71 (2023) 102642
☆ This editorial opinion paper provides a subjective viewpoint on the potential impact of generative AI technologies such as ChatGPT in the domains of education, business, and society. Its objective is to offer initial guidance on the opportunities, challenges, and implications associated with these technologies. It is worth noting that, given its nature as an editorial opinion piece, this submission has not undergone a formal double-blind review process but has been reviewed informally by appropriate experts.
* Corresponding author at: Digital Futures for Sustainable Business & Society Research Group, School of Management, Swansea University, Bay Campus, Fabian Way, Swansea, Wales, UK.
E-mail address: [email protected] (Y.K. Dwivedi).
https://doi.org/10.1016/j.ijinfomgt.2023.102642
Received 1 March 2023; Accepted 1 March 2023
Available online 11 March 2023
0268-4012/© 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
q Montpellier Business School (MBS), Montpellier, France
r FORE School of Management, New Delhi, India
s Indian Institute of Management Ahmedabad, Vastrapur, Ahmedabad 380015, India
t Information School, University of Sheffield, UK
u Bournemouth University Business School, Poole, UK
v School of Information Systems and Technology Management, University of New South Wales, Sydney, Australia
w Information, Operations and Management Sciences Department, TBS Business School, 1 Place Alphonse Jourdain, 31068 Toulouse, France
x Department of Education & Childhood Studies, Swansea University, Swansea, United Kingdom
y Faculty of Humanities and Social Sciences, University of Strathclyde, Glasgow G1 1XQ, United Kingdom
z School of Management, Swansea University, Swansea, UK
aa Department of Information Systems, City University of Hong Kong, Hong Kong Special Administrative Region
ab Indian Institute of Management Bangalore, India
ac Business and Management Research Institute, University of Bedfordshire, UK
ad Montpellier Business School, Montpellier, France
ae Liverpool Business School, Liverpool John Moores University, UK
af Prin. L. N. Welingkar Institute of Management Development and Research, Mumbai, India
ag Operations & Information Management Department, Aston Business School, UK
ah Department of Marketing and Marketing Management, Faculty of Economics and Business, University of Zaragoza, Zaragoza, Spain
ai Otago Business School, Co-Director, Centre for Health Systems and Technology, University of Otago, Dunedin, New Zealand
aj Distinguished Professor and George & Boyce Billingsley Endowed Chair of Information Systems, IS Doctoral Program, Walton College of Business, University of Arkansas, Room 216, Fayetteville, AR 72703, USA
ak Institute of Technology Management, National Tsing Hua University, Hsinchu 300, Taiwan
al Faculty of Technology, Policy and Management, Delft University of Technology, the Netherlands
am School of Management, Swansea University, United Kingdom
an College of Charleston, School of Business, USA
ao Bournemouth University Business School, Bournemouth University, UK
ap Free University of Bozen-Bolzano, Italy & University of Johannesburg, South Africa
aq Leeds School of Business, University of Colorado Boulder, Boulder, USA
ar Sheffield University Management School, The University of Sheffield, UK
as Schöller-Endowed Chair of Information Systems, Institute of Information Systems Nürnberg, School of Business, Economics and Society, Friedrich-Alexander University Erlangen-Nuremberg, Germany
at School of Management, Swansea University Bay Campus, Swansea, SA1 8EN Wales, UK
au Business School, Worcester Polytechnic Institute, Worcester, MA 01609-2280, USA
av Henley Business School, University of Reading, Henley-on-Thames, Oxfordshire, UK
aw Department of Management, University of Bologna, Bologna, Italy
ax School of Information Systems and Management, University of South Florida, Tampa, FL, USA
ay Greenwich Business School, University of Greenwich, London SE10 9LS, UK
az Management Science and Information Systems, Spears School of Business, Oklahoma State University, Stillwater, OK 74078, USA
ba Division of Nursing, Midwifery and Social Work, School of Health Sciences, The University of Manchester, Manchester, United Kingdom
bb Rosen College of Hospitality Management, University of Central Florida, 9907 Universal Boulevard, Orlando, FL 32819, USA
bc Department of Business, WSB University, Wrocław, Poland
bd SKEMA Research Center for Artificial Intelligence, SKEMA Business School, 5 quai Marcel Dassault, Suresnes, France
be National Institute of Industrial Engineering (NITIE), Mumbai, India
bf Newcastle University Business School, Newcastle upon Tyne, United Kingdom
bg Department of Information Systems, University of Agder, Norway
bh Department of Computer Science, Norwegian University of Science and Technology, Norway
bi Microsoft AI MVP and Microsoft Regional Director; Global Lead, Innovation and Architecture, DXC Technologies, India
bj Department of People and Technology, Roskilde University, Denmark
bk Symbiosis Institute of Business Management, Pune & Symbiosis International (Deemed University), Pune, India
bl Department of Management and Marketing, College of Business and Economics, Qatar University, P.O. Box 2713, Doha, Qatar
bm HuManiS Research Center (Humans and Management in Society), UR 7308, Université de Strasbourg, EM Strasbourg Business School, France
bn ESIC University, Madrid, Spain and University of Economics and Human Sciences, Warsaw, Poland
bo Wellington School of Business and Government, Rutherford House, 23 Lambton Quay, Wellington, New Zealand
bp Nantes University, LEMNA, and SKEMA Business School, France
bq Rolls-Royce Commonwealth Commerce, McIntire School of Commerce, University of Virginia, USA
br School of Computer Science, The University of Nottingham, UK
bs Process and Data Science, RWTH Aachen University, Ahornstraße 55, Aachen 52074, North Rhine-Westphalia, Germany
bt Eminent Scholar and Verizon Chair, Director of Executive PhD in Business, Pamplin College of Business, Virginia Tech, Blacksburg, Virginia, USA
bu University of Portsmouth, Department of Strategy, Marketing and Innovation, Richmond Building, Portsmouth, United Kingdom
bv Department of Economics and Political Science, University of Aosta Valley, Aosta, Italy
bw Digital Transformation, Global Center for Digital Business Transformation, IMD Business School, Lausanne, Switzerland
bx Capgemini UK Ltd, United Kingdom
by Department of Marketing, National University of Singapore, Singapore
ARTICLE INFO

Keywords:
Conversational agent
Generative artificial intelligence
Generative AI
ChatGPT
Large language models

ABSTRACT

Transformative artificially intelligent tools, such as ChatGPT, designed to generate sophisticated text indistinguishable from that produced by a human, are applicable across a wide range of contexts. The technology presents opportunities as well as, often ethical and legal, challenges, and has the potential for both positive and negative impacts for organisations, society, and individuals. Offering multi-disciplinary insight into some of these, this article brings together 43 contributions from experts in fields such as computer science, marketing, information systems, education, policy, hospitality and tourism, management, publishing, and nursing. The contributors acknowledge ChatGPT's capabilities to enhance productivity and suggest that it is likely to offer significant gains in the banking, hospitality and tourism, and information technology industries, and enhance business activities, such as management and marketing. Nevertheless, they also consider its limitations, disruptions to practices, threats to privacy and security, and consequences of biases, misuse, and misinformation. However, opinion is split on whether ChatGPT's use should be restricted or legislated. Drawing on these contributions, the article identifies questions requiring further research across three thematic areas: knowledge, transparency, and ethics; digital transformation of organisations and societies; and teaching, learning, and scholarly research. The avenues for further research include: identifying skills, resources, and capabilities needed to handle generative AI; examining biases of generative AI attributable to training datasets and processes; exploring business and societal contexts best suited for generative AI implementation; determining optimal combinations of human and generative AI for various tasks; identifying ways to assess accuracy of text produced by generative AI; and uncovering the ethical and legal issues in using generative AI across different contexts.

1 Equal contributions
1. Introduction

Artificial Intelligence (AI) has significantly impacted organisations, societies, and individuals. It offers systematic capabilities of reasoning based on inputs and learning via the differences of expected outcomes as it predicts and adapts to changes in its ecosystems and the stimulus that the system receives from its external environment. When AI was in its formative years, the focus of algorithms was generally restricted to supervised and unsupervised learning, whereby it borrowed inspiration from biological organisms and physical properties of nature and established these properties computationally to solve data intensive problems (Kar, 2016). However, traditional AI algorithms needed structured data for both model building and information processing. These older and more established AI algorithms, such as neural networks, genetic algorithms, decision trees, random forests, support vector machines, k-means clustering and many more (Duan et al., 2019), were somewhat restricted in their capabilities due to these limitations.

Newer AI algorithms have evolved over time and can now process data in their natural form; hence mining unstructured data such as raw text and images is feasible. AI algorithms such as deep learning and reinforcement learning have now evolved to the point where specific algorithms such as convolutional neural networks and recurrent neural networks have gained prominence for being able to analyse images, audio, and even video (LeCun et al., 2015). Furthermore, industrial needs involving text mining and natural language processing (NLP) have grown in demand, triggering the development and growth of algorithms which could run on unstructured data. Algorithms such as Bidirectional Encoder Representations from Transformers (BERT), Long Short-Term Memory (LSTM) networks and language models have gained prominence for industrial automation (Guan et al., 2019; Kushwaha and Kar, 2021). Many of these algorithms were heavily dependent on two resources: (1) abundant data for the algorithms to train and operate upon, and (2) very elaborate computational resources to deploy and run the algorithms. However, real-world applications did not have access to high computational resources, especially when deployed. Over time, newer models of AI such as federated learning and tiny machine learning (ML) algorithms were developed and adopted in industrial applications (Li et al., 2020a, 2020b). Many of these applications created scenarios where data were not available to initially train these algorithms, called the cold start problem. If data were not available, how would these applications learn patterns and predict future trends? This started developments in reinforcement learning, and industrial applications of reinforcement learning gained prominence in both marketing and financial management applications (Singh et al., 2022a, 2022b).

In parallel to the ongoing development of AI algorithms, the literature has seen an increased level of chatbot related studies (Lokman and Ameedeen, 2018). Chatbots traditionally used NLP to respond to queries raised by the user, while mapping them to the best possible response sets available in the system. In order to provide real time feedback to customers, chatbots have adopted language models along with deep learning while addressing NLP problems (Bellegarda, 2004; Melis et al., 2017; Kushwaha and Kar, 2021). The recent launch of OpenAI's ChatGPT significantly extends the capabilities of chatbots via the integration of deep learning and language models based on the Generative Pre-training Transformer (GPT) architecture (Radford et al., 2018). Language models attempt to predict the likelihood of a sequence of words a typical human interaction is likely to create through generative and discriminative algorithms, typically through the application of deep learning and transformer architectures of neural networks (Bengio et al., 2000; Bellegarda, 2004; Vaswani et al., 2017). ChatGPT uses a combination of unsupervised pre-training and supervised fine-tuning to generate human-like responses to queries and provide responses to topics that resemble those of a human expert. The GPT-3 model is the latest extension, built on a language model with 175 billion parameters, trained on a diverse dataset of naturally used text obtained from different internet sources such as web pages, books, research articles and social chatter. While current language models generally utilise deep learning with a focus on supervised learning, future evolutionary models may be built more on reinforcement learning (Uc-Cetina et al., 2022).
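To make the next-token prediction underlying such language models concrete, the short Python sketch below queries GPT-2, a small, openly available predecessor of the models behind ChatGPT, via the Hugging Face transformers library. It is a minimal illustration only: GPT-2 is a stand-in because ChatGPT's own model is not publicly downloadable, the prompt and sampling settings are arbitrary, and the instruction-following behaviour of ChatGPT additionally relies on supervised and reinforcement-learning-based fine-tuning on top of this basic mechanism.

```python
# Minimal sketch of next-token prediction with a small GPT-style model.
# Assumes: pip install torch transformers. GPT-2 is a stand-in for ChatGPT's
# much larger, non-public model; prompt and settings are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Generative AI tools such as ChatGPT will change how organisations"
inputs = tokenizer(prompt, return_tensors="pt")

# 1) The language model assigns a probability to every candidate next token.
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (batch, sequence, vocabulary)
probs = torch.softmax(logits[0, -1], dim=-1)   # distribution over the next token
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p:.3f}")

# 2) Text generation simply repeats this step, feeding each sampled token
#    back into the model (autoregressive decoding).
output = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```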
The recent widespread global adoption of ChatGPT has demonstrated the tremendous range of use cases for the technology, including software development and testing, poetry, essays, business letters, and contracts (Metz, 2022; Reed, 2022; Tung, 2023). However, it has also raised a number of concerns related to the difficulty in differentiating human versus AI authorship within academic and education communities, and renewed debate on the role of traditional human endeavours (Else, 2023; Stokel-Walker, 2023). These challenges arise because ChatGPT can be extensively used for NLP tasks such as text generation, language translation, and generating answers to a plethora of questions, engendering both positive and adverse impacts.

With this background, in this article we seek to answer the two following research questions:

RQ1) What are the opportunities, challenges, and implications related to generative AI technologies such as ChatGPT in the context of education, business, and society?

RQ2) What are the most important research questions to investigate related to generative AI technologies such as ChatGPT in the context of education, business, and society?

The remainder of this article is structured as follows. In the next section, we list the individual contributions of AI experts from various disciplines including information systems (IS), computer science, data science, marketing, management, industrial engineering, nursing, education, policy, hospitality and tourism, and publishing. Further, based on the expert contributions that indicate a clear need for scientific research on various issues related to ChatGPT, we provide a synthesised narrative of the nature of generative AI technologies, issues underlying their adoption and use, and directions for future research.

2. Perspectives from leading experts

In accordance with Foerster's (2003) original proposal and multiple subsequent studies that have followed an expert-based viewpoint on a diverse range of IS related topics (Dwivedi et al., 2015, 2020, 2022a, 2022b–d), we examine the critical perspectives on the impact and core […]
Table 1 (continued)

Contribution Title – Author(s)

(continued) … our epistemic values in teaching and research and our humanity
Contribution 29: Recommendations for dealing with ChatGPT, Bard & Co. in academic publishing – Paul Jones and Sascha Kraus

2.4. Ethical issues
Contribution 30: ChatGPT and Ethics – 'ChatGPT Doesn't Matter'? – Laurence Brooks
Contribution 31: Good Bot or Bad Bot? On the Ethics of ChatGPT – Bernd C Stahl

2.5. Challenges, opportunities, and research directions
Contribution 32: Towards a research agenda for generative AI in education, industry and research – John S. Edwards and Yanqing Duan
Contribution 33: Use the SACE and ADROIT Framework to Assess Challenges, Opportunities, and Research Agenda Related to ChatGPT – Sunil Mithas
Contribution 34: ChatGPT: Challenges and Opportunities – Margherita Pagani
Contribution 35: Moving from Prediction to Creativity: Implications of Generative AI on Measurement of Success – Kai R. Larsen
Contribution 36: ChatGPT: Challenges, Opportunities, Impact and Research Agenda – Neeraj Pandey, Manoj Tiwari, Fevzi Okumus and F. Tegwen Malik
Contribution 37: ChatGPT: Challenges, Opportunities, Impact and Research Agenda – Paul Latreille
Contribution 38: ChatGPT: challenges, opportunities, impact and research agenda – Robin Gauld
Contribution 39: Challenges, opportunities, and impact of ChatGPT in the IT industry – Nishith Pathak
Contribution 40: Challenges of ChatGPT – Indranil Bose
Contribution 41: On the Challenges and Opportunities of ChatGPT – Iris Junglas and Sven-V. Rehm
Contribution 42: ChatGPT3: Technology Development, Impact and Challenges – Scott W. Cunningham and Mei-Chih Hu
Contribution 43: ChatGPT: A Research Agenda – Manju Ahuja

Note: The remainder of this section showcases the 43 contributions, presented mostly in their original and unabridged form. This approach may result in some unevenness and overlapping narrative but preserves the individual perspectives of each expert as they highlight important aspects of the impact and challenges stemming from ChatGPT and generative AI (Dwivedi et al., 2022a, 2022b, 2022c, 2022d).

2.1.1.2.3. Direction #3. Underlying our research domains are assumptions. One illustration of a framework of assumptions is Alvesson and Sandberg (2011). Assumptions in various domains, many of which often remain unarticulated, will need to be articulated and validated (see Schuetz and Venkatesh, 2020).

2.1.1.2.4. Direction #4. Not mutually exclusive from direction #3 above but equally important is that many domains may now require a paradigm shift in Kuhnian terms. This presents an exceptional opportunity for scholars to get away from existing views and theories: ones that perhaps do not give as much agency to technology should be reconsidered so as to provide new theories rooted in the new paradigm.

In closing, I am neither alarmed nor not alarmed. It's a technology. It's disruptive. It's a great opportunity for science. It's a great opportunity to grow as a society. And, when the two meet, science can help society grow.

2.1.2. Contribution 2

2.1.2.1. AI platforms, ChatGPT, and Innovation - Marcello Mariani. While information management (IM) and IS scholars started discussing opportunities and challenges pertaining to AI platforms as soon as AI technologies and systems emerged and advanced, innovation management researchers have been lagging behind. They focused on the relevance of AI technologies for innovation recently, mostly after the AI hype across (mass) media (Davenport and Ronanki, 2018). However, innovation management researchers have rapidly understood the potential relevance of AI for innovation management, with the field at the intersection of AI and innovation evolving rapidly in response to calls for more studies on the multifaceted relationship between AI and innovation (see Cockburn et al., 2019).

As is clear from a recent systematic literature review of the scientific work produced on AI in innovation research (Mariani et al., 2022), so far innovation management researchers have focused on three types of drivers of AI adoption for innovation (economic, technological, and social) and several types of outcomes of AI adoption (economic, competitive, organisational, and innovation). Among the innovation outcomes, the development of new technology as well as product, process and business model innovation have been found to be relevant potential application areas for AI.

So far, not all forms of AI have been analysed in innovation management studies (Mariani et al., 2023), and generative AI has been severely underexplored. However, the launch of multiple generative AI platforms over the last five years (e.g., GPT3 in 2020, ChatGPT in 2022), and more generally Large Language Models, has attracted the attention of the media, organisations, and users. Indeed, one of the most striking features of these generative AI platforms is that they have been adopted massively in a short period of time: launched on November 30, 2022, ChatGPT had been used by 1 million users within the first 5 days after its launch.

The distinctive feature of ChatGPT is precisely its capability to generate textual content. In just 3 months after its release, ChatGPT has been deployed by many software developers, creative writers, scholars/teachers, and songwriters to generate computer software and apps, text, academic essays, and song lyrics. Platform users have expressed mixed feelings. I tried ChatGPT myself in early December, asking the platform to develop an essay on the relationships between "big data" and "innovation management". The result was rather disappointing as the text seemed rather weak in terms of logical flow, inaccurate in terms of factuality and truth, not critical in terms of elaboration of data, and not novel. When asked to provide references and sources to back the arguments of the essay, the software simply pulled together a list of 5 references with wrong titles associated wrongly to authors, journals and publication years. Very much like GPT3, ChatGPT can potentially be used by journalists and software developers to generate articles/software, but the outcome will need to be carefully double checked as the software seems to generate inaccurate content, based on inaccurately reported sources of ideas.

The absence of originality of the output generated by ChatGPT is even more acute when it comes to products that are related to the creative industries such as song lyrics. For instance, songwriter Nick Cave recently received from one of his fans a song's lyrics that had been generated by ChatGPT based on a specific prompt: "in the style of Nick Cave". Disappointed by the outcome, Cave wrote in response to his fan: "Writing a good song is not mimicry, or replication, or pastiche, it is the opposite. It is an act of self-murder that destroys all one has strived to produce in the past. It is those dangerous, heart-stopping departures that catapult the artist beyond the limits of what he or she recognises as their known self." (Cain, 2023). The absence of originality, detected during my own essay generation experience with ChatGPT and expressed also by Cave, is perhaps an unimportant aspect for some routinised software generation tasks, but it becomes clearly of paramount importance in creative writing.

We might argue that inaccuracy, weakness in logical flow, issues of factuality/truth, lack of critical elaboration, and non-originality of the generated content could be the outcomes of a technology that is still being tested. Indeed, the AI platform's underlying deep learning models were perhaps trained on an outdated and low-quality training set.
On the other hand, we might expect that if the training model and the data quality improve, the AI platform might enhance its performance. However, it is not clear if better technical performance will lead to more meaningful innovation outcomes. Accordingly, one of the key questions for innovation management scholars then becomes: "Are AI platforms (such as ChatGPT) capable of leading independently to meaningful product, process, or business model innovation?".

The impression is that there is no ready answer, and there is a long way to go before AI platforms such as ChatGPT can become an independent agent in innovation processes and innovation management. Recent uses of generative AI in text, movie, and music generation suggest that these AI platforms at best can become a partner in product innovation and value creation (Vargo et al., 2008).

In complex activities that involve several areas of the human brain, such as creative activities and innovation activities, a good prompt appears to be not enough to allow a generative AI system to generate independently a distinctively different and original new product. We expect this to be even more relevant for products that involve some form of emotional intelligence (Jena & Goyal, 2022).

To summarise, there is a long way to go before AI platforms such as ChatGPT could be capable of leading independently to meaningful product, process, or business model innovation. At best they can help enhance human intelligence for innovation by augmenting human intelligence. As AI platforms and the underlying technology evolve, future research will need to investigate if and to what extent the role played by generative AI will be increasingly relevant in triggering innovation outcomes.

2.1.3. Contribution 3

2.1.3.1. Human and generative AI collaboration

2.1.3.1.1. Lessons from utilitarianism - Lemuria Carter and Soumyadeb Chowdhury. ChatGPT is a cutting-edge AI language model that leverages generative AI techniques to provide algorithm-generated conversational responses to question prompts (van Dis et al., 2023). The outputs from generative AI models are almost indistinguishable from human-generated content, as they are trained using nearly everything available on the web (e.g., around 45 terabytes of text data in the case of ChatGPT). The model can be trained to perform specific tasks, such as preparing slides in a specific style, writing marketing campaigns for a specific demographic, online gaming commentary and generating high resolution images (Chui et al., 2022a, 2022b).

While the benefits of this new AI tool for businesses have been widely discussed by various media outlets, it is essential to understand the limitations of generative AI models, which may lead to reputation and legal risks, use of offensive or copyrighted content, loss of privacy, fraudulent transactions and the spread of false information. In this commentary, we explore the intersection of risk management and ethics in the context of generative AI to propose four themes for future research.

2.1.3.1.2. Exploring the ethics of responsible AI: lessons from utilitarianism. Threats posed by ChatGPT and similar AI bots include black-box algorithms, discrimination and biases, vulgarity, copyright infringement, plagiarism, fabricated unauthentic textual content, and fake media. Hence, it is essential for organisations to understand, manage and mitigate risks resulting from AI adoption. Ethical reviews and bias screening should complement periodic risk assessments because the algorithm is evolutionary in nature, i.e., the voluminous data used to train the algorithmic models possess high velocity, heterogeneity and variability characteristics. The Artificial Intelligence Risk Management Framework ("AI RMF 1.0") developed and released by the National Institute of Standards and Technology ("NIST") can guide organisations developing, adopting, and implementing AI solutions to systematically assess, understand, and manage risks, and promote responsible usage, development, and evolution of AI tools (National Institute of Standards and Technology, 2023). The working version of the risk management playbook released by NIST, grounded in research, suggests that context is a significant factor to identify, assess, prioritise, and mitigate risks (NIST, 2023). Therefore, risks related to ChatGPT adoption in any business sector should be mapped to its context of use and application. With regards to ChatGPT, the risk stems from the quality, authenticity and reliability of the data used to train the model and how the model is being rectified by the human designers and developers.

Once the risks are identified, ethical models can be used to help determine a path forward (Ashok et al., 2022). Utilitarianism is one of the most common approaches to making ethical decisions; it favours the action that does the least harm (or the most good) to individuals, society and the environment, weighing both the positive and negative impact of the action (Böhm et al., 2022). From the AI risk management perspective, the theory provides an approach for resolving conflicts through a flexible, result-oriented lens for formulating and testing policies at each stage of the risk management cycle. For example, the risks of adopting ChatGPT in a specific context can be assessed by the RMF, whereas the impact and consequences of the risks on each stakeholder can be prioritised using the utilitarian perspective, i.e., making a decision on whether the adoption will promote welfare compared to any other alternative. Similarly, the contextual importance of AI adoption (in each sector for a specific application) can enable AI developers, organisations planning to deploy AI and even policy makers to make realistic and workable moral decisions, understanding and weighing both the opportunities and negative implications. Therefore, AI risk management frameworks and ethical theory perspectives should be consolidated to make socially responsible judgements, which will help ensure a purposeful, cautious, reasoned, and ethical way of leveraging generative AI models such as ChatGPT. According to utilitarianism, the most optimal decisions and actions related to ChatGPT design, development, adoption, deployment, maintenance and evolution should provide the most good or do the least harm for society. This also calls for responsible AI toolkits and frameworks to embed ethical perspectives to enable a balanced view of what is right and wrong.
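To give a rough sense of how context-specific risk mapping and a utilitarian weighing of stakeholder impacts could be combined in practice, the sketch below scores a hypothetical ChatGPT adoption decision. All categories, stakeholders, scores and weights are invented for illustration only; they are not part of the NIST AI RMF or of any validated instrument, and the point is merely that such a welfare calculation can be made explicit and auditable.

```python
# Hypothetical, illustrative utilitarian scoring of a ChatGPT adoption decision.
# Impacts are rated from -5 (severe harm) to +5 (strong benefit) per stakeholder;
# weights reflect how strongly each stakeholder is affected in this context of use.
from dataclasses import dataclass

@dataclass
class Impact:
    stakeholder: str
    description: str
    score: int      # -5 .. +5
    weight: float   # relative importance in this context

def net_welfare(impacts: list[Impact]) -> float:
    """Weighted sum of impacts: a positive total suggests adoption promotes welfare."""
    return sum(i.score * i.weight for i in impacts)

customer_support_bot = [
    Impact("customers", "faster, always-on responses", +3, 1.0),
    Impact("customers", "risk of confidently wrong answers", -4, 0.8),
    Impact("employees", "routine queries automated", +2, 0.6),
    Impact("employees", "job displacement risk", -3, 0.6),
    Impact("organisation", "reputational and legal exposure from errors", -2, 1.0),
]

score = net_welfare(customer_support_bot)
print(f"Net welfare score: {score:+.1f}")
print("Adopt" if score > 0 else "Do not adopt, or mitigate the dominant harms first")
```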
2.1.3.1.3. Future research agenda. We present four future research themes for generative AI models such as ChatGPT, considering four different perspectives, viz., tool, proxy, ensemble and skills (Kim et al., 2021).

• Theme 1
From a tool view, it is necessary to develop techniques to enhance the transparency of generative and self-adaptive AI models that will facilitate explainability of the outcome responses. For instance, an important question to explore is: how can transparency and explainability either enhance or diminish the competitiveness and productivity of organisations adopting ChatGPT?

• Theme 2
From a proxy view, there are several interesting questions. For instance, how can responsible and ethical policies, practices and regulations help in the diffusion of generative AI applications across organisations? What is the impact of consolidating risk management frameworks and ethics theoretical perspectives on ChatGPT adoption within organisations?

• Theme 3
From the ensemble view, where, when, and under what organisational contexts is it best to implement ChatGPT? How do societal contexts shape the meaning and the outcomes of adopting ChatGPT, from an ethics and moral judgement perspective? What are the conceptualisations of responsible ChatGPT according to customers, employees, stakeholders, managers, the community, and government policy makers? In light of the United Nations' sustainable development goals 4, 5, 8, 9 and 10, how can responsible development, deployment, and evolution of ChatGPT promote wellbeing among humans and in society?
• Theme 4
From a skills view, it will be interesting to explore which skills, resources and capabilities (both technical and non-technical) are necessary within organisations to better understand the limitations of ChatGPT within different contexts of use and to promote ethical decision-making. What is the role of government policies, training providers, higher education and technology developers in helping to develop these skills among the human workforce (whose jobs are likely to be made redundant as a result of robotic process automation)?

2.1.3.1.4. Conclusion. Given that ChatGPT has been coined "the industry's next big disrupter" due to its analytical and computational capabilities, efficiency in producing human-like responses and round the clock availability, it is important for researchers and practitioners to examine the ethical concerns surrounding black-box algorithms and how multi-disciplinary research can help alleviate these concerns.

2.1.4. Contribution 4

2.1.4.1. ChatGPT's biggest disruption: knowledge work productivity - Michael Wade. New digital services and technologies are released all the time, and every so often one comes along that hits the viral jackpot. Bitcoin did this in 2017, Google's audio chatbot Duplex made a big splash in 2018, and the metaverse and Web 3.0 caught fire in 2022. Some, like Bitcoin, endure; most, however, remain on the sidelines and slowly fade away. Remember how Clubhouse was going to disrupt social media, or how we were going to be 3D printing everything?

How will ChatGPT fare in the disruption sweepstakes?

Much has been said about how ChatGPT will disrupt the education sector, and as someone who operates from within this sector, I can only hope that this is the case. But my interest, at least from a research perspective, lies elsewhere. In fact, I believe that the biggest potential source of disruption will be ChatGPT's effect on the productivity of knowledge work.

ChatGPT is generative, meaning that it can create new data, not just analyse existing data. This capability, specifically for text, is what sets ChatGPT apart from the earlier avatars of ML. ML has been good at pattern recognition for many years now, whether it is distinguishing cats from dogs, or scanning for cancerous tumours. ChatGPT takes this one step further. It not only recognises patterns, it uses them to create new data based on those patterns, which makes it generative.

From a research perspective, we can test how effective ChatGPT is at generating content that can enhance knowledge worker productivity. Among the many ways that ChatGPT can do this, I predict that the biggest impact will be associated with its ability to create a first draft.

Compared to physical labour, knowledge work is notoriously hard to study due to the difficulty of measuring inputs versus outputs (Drucker, 1999). Nevertheless, research suggests that 41% of a knowledge worker's time is spent on discretionary activities that offer little personal satisfaction and could be handled competently by others (Birkinshaw & Cohen, 2013). 'Others' has typically been thought of as another person, but could equally be a technology solution, like ChatGPT.

Knowledge workers know that it takes a lot of time and effort to write a first draft of anything: an email, a report, a blog, a business plan, a proposal, an article, or an employee review. Interestingly, I could not find any research that compares the time it takes to create a first draft of something versus the time it takes to transform that draft into a finished product. Most of us would agree that, generally speaking, having something to work with is a lot easier than starting something from scratch. Ultimately, however, this is an empirical question.

An experimental research program can be designed to test the difference in productivity between two conditions. Condition one would be where a research subject completes a task from start to finish, while in the second condition, a subject uses ChatGPT to create a first draft, and then completes the task using the draft as a foundation. Experiments could be conducted employing both within-subject and between-subject designs. Productivity could be measured in two ways: time to complete the task (efficiency) and quality of the output (effectiveness).

The experiments could test different types of tasks, ranging from simple to more complex. Simple tasks might include writing a thank you email, producing a written summary from a point-form list of items, or creating a simple report. More complex tasks could include writing a performance evaluation, responding to a sensitive argument in an email, or writing a complex report.
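A minimal analysis sketch for the between-subjects version of such an experiment is given below. The data are simulated purely to illustrate how the efficiency and effectiveness measures could be compared across conditions; the sample size, effect sizes and measures are placeholders, not findings.

```python
# Sketch of analysing a two-condition, between-subjects productivity experiment:
# condition A = task completed from scratch, condition B = ChatGPT first draft.
# Data are simulated only to illustrate the analysis; no empirical claim is made.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 40  # participants per condition

# Efficiency: minutes to complete the task (lower is better).
time_scratch = rng.normal(loc=55, scale=12, size=n)
time_draft = rng.normal(loc=40, scale=12, size=n)

# Effectiveness: quality rating by blind assessors on a 1-10 scale.
quality_scratch = rng.normal(loc=7.0, scale=1.2, size=n)
quality_draft = rng.normal(loc=7.2, scale=1.2, size=n)

t_time, p_time = stats.ttest_ind(time_scratch, time_draft)
t_qual, p_qual = stats.ttest_ind(quality_scratch, quality_draft)

print(f"Completion time: t = {t_time:.2f}, p = {p_time:.4f}")
print(f"Output quality:  t = {t_qual:.2f}, p = {p_qual:.4f}")
```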
There are many promising research programs linked to ChatGPT and other text-based generative AI tools, as my colleagues have outlined. However, the lack of significant improvement in the productivity of knowledge work (and workers) has long been a source of frustration for organisational leaders and policy makers (Shujahat et al., 2019). ChatGPT has the potential to enhance the productivity of knowledge work through various mechanisms, such as simplifying the information search process, but I predict that its most significant impact will be to provide a competent first draft for our most common written knowledge tasks.

2.1.5. Contribution 5: ChatGPT as a member of hybrid teams – Alexander Richter

2.1.5.1. The emergence of hybrid teams. Movies2 such as 'The Terminator', 'The Matrix', and 'I, Robot' depict AI as powerful, autonomous entities that ultimately turn against humanity. This portrayal of AI as a threat has contributed to the common perception that AI and human actors compete with one another, at least for jobs, rather than being able to work together in a mutually beneficial way. However, the idea of humans and AI working together is as old as the field of AI itself. The pioneers of AI research recognised the potential benefits of AI and humans working together and envisioned a symbiotic relationship between the two. They anticipated AI would augment human intelligence to solve complex problems and make better decisions and ultimately help humans achieve their goals more efficiently (e.g., Licklider, 1960).

Based on the significant advancements of AI in the last decade (specifically in deep learning and built on large amounts of data and computing power), it seems reasonable to assume we will increasingly see hybrid teams consisting of humans and AI. In order to understand the potential of AI in hybrid teams and to effectively navigate their complexities, it helps to explore what types of activities AI can take over in these team settings.

In this contribution, I will illustrate some of ChatGPT's possible roles in a team consisting of humans and AI, including coach, innovator and software developer. I aim to use these examples to show how human actors and AI are becoming hybrid teams. I will conclude this contribution with an initial set of research questions about the future of hybrid work from this brief analysis.

2 Thanks to Markus Luczak-Roesch, Shahper Richter, Kai Riemer, Gerhard Schwabe as well as David Wagner for comments that helped me to improve this contribution, and to the anonymous student who shared their insightful experience with ChatGPT with me and allowed me to use it in this article.

3 The more parameters (or tunable values) a model has, the more complex patterns it can learn and the better it can perform on a variety of tasks. However, having more parameters also requires more data and (computational) resources to train. In the case of ChatGPT, this raises open questions including 'How ethical is it to pay human workers $2 an hour to improve the quality of the training data' (Perrigo, 2023)?
2.1.5.2. ChatGPT as a partner. ChatGPT is based on GPT-3 (Generative Pre-trained Transformer 3), which was introduced in June 2020. GPT-3 has 175 billion parameters3 and is one of the most powerful language models available to date. Whereas ChatGPT uses the same dataset and parameters as GPT-3, its conversational focus and ease of use (or readiness-to-use) made it a huge success (which even surprised many information technology (IT) experts) when it was made available in November 2022. Globally, practitioners, academics and students started experimenting with ChatGPT and shared their "eureka" moments with colleagues, friends and family. Despite its high degree of accuracy, there were also justified warnings about the reliability of the produced texts.4 Like many other deep learning-based tools, ChatGPT can only be as good as its training data and is probabilistic and stochastic (e.g., Bender et al., 2021).5

4 See (Pearl, 2022) for examples. Shah and Bender (2022) caution against the opaqueness of search systems using ML, i.e., it is difficult to retrace what information they present. The use of large language models has the potential to further decrease transparency and accountability. Shah and Bender suggest measures to counteract these potential problems.

5 In other words: ChatGPT is a tool generating text based on the input it receives and the parameters it has been trained on. I am not implying that ChatGPT has consciousness, autonomy or the ability to act with intent.

A typical social media post (out of many by the end of 2022) read like this: "I asked ChatGPT to write me a [text/abstract/soliloquy/…] about [topic]. It did a [good/great/amazing] job, (certainly) (way) better than I would have done." Whereas many of those initially circulated anecdotes relate to relatively straightforward roles of content production such as text producer, language editor and translator, there are examples of how ChatGPT can also contribute to hybrid teams in more sophisticated roles.

2.1.5.3. Roles of ChatGPT. One of my EMBA students shared their experience with ChatGPT: "In my interactions with ChatGPT, it felt as if I'm interacting with a team member via a chat function. […] I received an immediate response and I felt I could trust the information that was given to me. […] I felt emotionally supported, I was so worried that I'm falling behind schedule and in using ChatGPT I actually felt a sense of relief. In a way, it is scary, isn't it? Could this type of interactions replace group work?".

In this case, ChatGPT was initially thought of as a tool for (re-)search support, i.e., the student used it to make sense of a new topic. But it also became a team member that provided emotional support to the EMBA student. This example is in line with studies that showed that AI can act as a coach, i.e., supporting humans in achieving their goals, e.g., health-related ambitions such as weight loss (Stein and Brooks, 2017), physical activity (Wlasak et al., 2023) or when it comes to skills development (Terblanche and Cilliers, 2020). Generally, it has been observed that AI acting as a coach has the potential to collaborate with individuals in planning, monitoring, and control of thinking, feeling, and actions (Cranefield et al., 2022). AI-based coaches can help individuals to develop new practices and skills, e.g., when it comes to tracking their well-being or allocating their time more mindfully in order to be able to focus better (e.g., Winikoff et al., 2021).

A study by Bouschery et al. (2023) explores how GPT-3 can become a member of a hybrid innovation team by acting as an innovator in the new product development process. They found that it allowed for larger problem and solution spaces and ultimately led to higher innovation performance. Further studies have shown that the generative nature of ChatGPT enables it to contribute new ideas and concepts (Stevenson et al., 2022), for example, by completing sentences, paragraphs or whole texts based on a given context and problem definition.6 Whereas it may still deliver less original and valuable ideas than humans, ChatGPT can help human team members better understand their problem and solution space.

6 It has yet to be determined what that means from an intellectual property perspective, since the generated output is based on other people's patterns that are usually given as training data without consent and as free labour. So, there is the question whether AI can only ever plagiarise.

Another potential role of AI in a hybrid team is that of a software developer. Research on so-called AI-based development is still in its infancy. Still, many software developers have shared over the last couple of months how they have not only tested ChatGPT but already use it operatively. Examples entail using AI to assist with code writing, automating simple tasks (including testing) and error management (in development and post-deployment phases).7

7 There are also other AI-based software developers. For instance, GitHub announced its "Copilot" tool in June 2021. It can assist users of several integrated development environments (IDEs), e.g., Visual Studio, by autocompleting code.

2.1.5.4. A new perspective on "hybrid work" and a preliminary research agenda on hybrid teams. This contribution identifies a couple of roles AI can play in hybrid teams:

• Rather simple roles, such as text producer, language editor and research assistant
• More sophisticated roles, such as coach, innovator and software developer

The list of these roles has an illustrative character and is incomplete. As a next step, it seems reasonable to use group role or task frameworks to identify the strengths and weaknesses of ChatGPT. For instance, McGrath (1984) identifies the following tasks in his group task circumplex: generate (planning tasks, creativity tasks), choose (intellective tasks, decision-making tasks), negotiate (cognitive conflict tasks, mixed-motive tasks) and execute (contests, battles, performances). It appears ChatGPT could contribute valuable input to many of these tasks, but research will need to show which tasks ChatGPT (or other AI) is best suited to and what challenges its use will bring.

Knowing that AI can assist with various tasks in a team can also enrich the discussion about hybrid work. So far, the term hybrid work has been mostly limited to the traditional binary of presence work vs virtual work. The addition of AI to collaborative teams warrants reconsidering the term and concept of hybrid work. Hybrid work is no longer limited to the continuum of presence and virtual but also encompasses the duality of human/human vs human/AI (cf. Fig. 1). The various implementations of the Metaverse concept will take these considerations further.

Fig. 1. Towards a new conceptual understanding of hybrid work (own illustration).

Humans will need to develop new approaches and strategies for hybrid teamwork, which opens the space for manifold research questions, for example:

1. How to allocate tasks between humans and AI? Who delegates these tasks, and who coordinates them? How do humans delegate tasks, and how does AI delegate tasks?
2. Will human actors feel the same sense of achievement when a lot of the creative work is done by AI? Will human-human interaction change in hybrid teams?
3. What are the implications of over-reliance on AI for human actors' creativity, problem-solving abilities, and decision-making skills, and how can we mitigate these effects?
4. Who assesses the quality of the work results? If AI assesses them, does accountability for the work results only entail the team or potentially also the developer of the AI-based team member?

These questions add to other previously raised AI-related research questions that our community is still working on resolving, e.g., who is accountable for AI-based actions (Martin, 2019)? How can we ensure that AI systems are transparent, explainable/contestable, and trustworthy?

It will be interesting to observe how ChatGPT and the many other AI tools already available will contribute to the changing nature of work in the near and distant future.

2.1.6. Contribution 6: ChatGPT as the new calculator - Wil van der Aalst

When asking ChatGPT, "What is the effect of calculators in teaching?" the response is, "The use of calculators in teaching mathematics can have both positive and negative effects on student learning. On the positive side, calculators can help students perform complex calculations quickly and accurately, allowing them to focus more on understanding mathematical concepts and problem-solving. Additionally, calculators can provide immediate feedback on their answers, allowing students to check and correct their work. On the other hand, over-reliance on calculators can lead to students lacking basic arithmetic skills, as well as the ability to perform mental calculations. It is important for teachers to use calculators as a tool to support learning, rather than as a replacement for traditional mathematical instruction." When asking ChatGPT, "What is the effect of ChatGPT in teaching?" a similar answer is given with the closing remark, "it is important to note that ChatGPT should be used as a supplement to, not a replacement for, traditional instruction and human teacher interaction."

It is interesting to see that ChatGPT is, seemingly, able to reflect on itself, which is something a calculator cannot do. As its acronym indicates, ChatGPT is a Generative Pre-training Transformer; hence it uses a mixture of unsupervised (self-supervised) learning to predict continuations combined with supervised learning to fine-tune the result. The development of ChatGPT has been so spectacular that there are no representative empirical studies on the use of this new technology. Earlier versions of GPT were more primitive and not representative of its current capabilities. However, the use of calculators in education has been widely researched (Ellington, 2003; Mao et al., 2017). In the 1970s, the use of calculators in education was also a controversial topic, just like ChatGPT today. Hence, it is interesting to compare both. Most studies confirm that the effect of calculators is twofold. On the one hand, students get more time to focus on problem-solving and understanding. On the other hand, to effectively use a calculator, it is important to understand how it works, i.e., students should be able to do the same task manually. This is where ChatGPT is different.

Most people using ChatGPT have no idea how it works and cannot reproduce the results manually. However, it makes no sense to ban the technology for students that will live in a world where this technology will play a major role (King, 2023). Therefore, it is essential to create two types of tasks: (1) tasks with no access to tools like ChatGPT (like a math exam without calculators allowed) and (2) tasks where tools like ChatGPT are explicitly integrated (e.g., homework). For the first type, there is no need to change much. However, it can only be tested in a controlled environment. It is not feasible to use AI to check for the use of AI, despite attempts to develop such checkers. For the second type, significant changes are needed. Open-ended homework assignments where students need to write text cannot be used anymore. Instead, one should think of tasks that require a combination of both or that involve an oral examination afterward. Interestingly, ChatGPT is not limited to natural language. ChatGPT also speaks several programming and querying languages, e.g., Python, Java, C++, C#, JavaScript, SQL, PHP, SPARQL, and XQuery. When correctly used, it will speed up programming tasks dramatically. Therefore, assignments need to be more challenging.
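For the second type of task, where tools like ChatGPT are explicitly integrated, an assignment could even ask students to drive the model programmatically. The sketch below shows one way to request code from the model via the OpenAI Python library (the pre-1.0 interface that was available when ChatGPT was released); the model name, the prompt, and the need for a personal API key are assumptions of this illustration, and any generated code still has to be read, tested, and corrected by the student.

```python
# Hypothetical example of integrating ChatGPT into a programming assignment.
# Assumes: pip install "openai<1.0" and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "Write a Python function that returns the n most frequent words in a text "
    "file, ignoring case and punctuation. Include a short docstring."
)
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
generated_code = response["choices"][0]["message"]["content"]
print(generated_code)  # The student must still review, test, and correct this output.
```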
The biggest challenge is that ChatGPT may produce correct-sounding but logically incorrect results. This potentially puts a significant burden on the teacher, who needs to correct the results. Moreover, this will also have dramatic consequences for research. The number of published papers is growing exponentially. For example, the number of scientific papers doubled between 2000 and 2014 (Fire & Guestrin, 2019). The number of papers per author has increased dramatically, and it seems unrealistic that all papers are read extensively. An urban legend tells us that 50% of articles are only read by the authors and the reviewers. This is probably not true, but it illustrates the problem that too many low-quality papers are published. There is an increasing number of papers that are not very original and have errors that remain unnoticed. ChatGPT will make this situation worse and produce papers that are convincing, but often wrong. This will distort scientific facts, stimulate plagiarism, and spread misinformation. Review processes cannot keep up with this. Therefore, ChatGPT and the like will change how research results are disseminated and verified. This will have a much bigger impact than the introduction of the calculator.

2.1.7. Contribution 7

2.1.7.1. ChatGPT for human augmentation - Ilias O. Pappas. Technology and the value it brings drives society forward and will often be ahead of society itself. AI is being integrated in multiple ways in our lives, and a major challenge remains on how to design and develop inclusive AI for all. Chatbots have been extensively implemented to deal with communication tasks that used to be performed by humans. Depending on the situation, some tasks are perfectly handled by chatbots based on their current capabilities, while others still require human competencies. Humans started using chatbots for automation purposes, which quickly evolved into human task augmentation and the creation of hybrid teams, where humans collaborate closely with machines to perform relatively simple tasks (Vassilakopoulou et al., 2023). The question is: what is next? OpenAI ChatGPT enables functionalities that were not possible before with the current generations of chatbots.

OpenAI ChatGPT technology, now open to everyone, has made a critical, possibly historical, transition. So far, many advanced AI applications have been employed to deal with challenges and problems in which failure comes at a high cost (e.g., aviation, medicine). On the other hand, implementation has been lagging when it comes to dealing with tasks where failure is expected, acceptable, and comes with a low cost (e.g., education) (Kabudi et al., 2021). The latter makes the use of AI applications perfect for learning, and OpenAI ChatGPT is enabling this. Learning happens everywhere; it doesn't happen only "in class". Besides traditional settings (e.g., university), learning includes training of employees and qualification of leaders. The latter can be either planned (e.g., organisations training their employees) or occur naturally as employees are using new technologies or the existing ones in different ways, such as in Vassilakopoulou et al. (2023).

The educational sector is trying to catch up with the impact of ChatGPT, especially in exams. Teachers are debating its use and whether it should be allowed, while students are asking for advice or permission on if and how to use it. At the same time, AI tools to detect AI-generated text are being developed as a counter measure. Considering that these tools are constantly learning, it can be expected that a tool can be asked to generate text that is not detectable by AI.
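One common heuristic behind such detectors is statistical: text sampled from a language model tends to look "unsurprising" (low perplexity) to a similar model, whereas human writing is typically burstier. The sketch below scores a passage's perplexity under GPT-2 using the Hugging Face transformers library; the choice of scoring model and the decision threshold are illustrative assumptions, and, as noted above, such checks are easy to evade and unreliable on their own.

```python
# Sketch of a perplexity-based heuristic for flagging possibly AI-generated text.
# Assumes: pip install torch transformers. GPT-2 is a stand-in scoring model;
# the threshold below is illustrative, not a validated cut-off.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the inputs as labels makes the model return the mean
        # cross-entropy of predicting each next token.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

passage = "ChatGPT is a large language model developed by OpenAI that ..."
ppl = perplexity(passage)
print(f"Perplexity: {ppl:.1f}")
print("Possibly machine-generated" if ppl < 30 else "More likely human-written")
```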
As educators, we remain responsible for training the students to develop critical thinking. This objective has not changed over the years. Applications like ChatGPT can be used either as a companion or tutor, to support for example self-regulated learning, or as a way to pass exams without any effort, and thus minimal, if any, learning. As with most things, ChatGPT sometimes works great or just fine and sometimes it fails.

Depending on the task and question, it can give useful, funny, or just wrong and misleading answers.
While the latter is expected to be reduced over time, considering that the training datasets will continue growing, the responsibility remains with the human to evaluate and assess these responses. Hence, we need to rethink how we design learning in order for learners to develop soft skills, such as critical thinking, complex problem solving, creativity and decision-making, in a way that takes into account various teaching and learning spaces (e.g., classrooms, meeting rooms, the industry) using new digital resources (Pappas & Giannakos, 2021). Finally, using technologies like ChatGPT can be fun as it changes the way we interact with the computer, raising the need for new studies on how both cognitive and affective factors shape our behaviours towards more advanced chatbots and "truly" conversational agents.

Research is ongoing on what inclusive AI entails, with inclusive AI tools requiring inclusivity to be considered both when they are developed and when they are used. Design for inclusive AI starts by providing access to a diverse group of users, then ensuring their engaged participation, which will eventually lead to empowered success. ChatGPT, now open to everyone, has the opportunity to allow this. Making AI applications available to all can help overcome several existing challenges due to a lack of data-driven culture or technological culture, which, coupled with the need for innovative and trustworthy AI, can lead to solutions that are understood by non-expert users as well. Recent news reports indicate that companies will be able to create their own custom versions of ChatGPT, thus allowing them to launch their own chatbots using OpenAI ChatGPT technology, possibly without using other branding, except their own. At the same time, considering the potential of low/no code platforms, a new generation of chatbots can be developed to help deal with multiple aspects of societal impact, both short and long term.

ChatGPT, while rather unique today, is only the first of similar chatbots that will become available over time. Businesses and organisations that understand and embrace this change will develop competitive advantages. Already, soon after ChatGPT became open to all, Google announced the launch of LaMDA, its own ChatGPT competitor. While these tools are free to use today, they require a lot of resources, thus it is only a matter of time before they become paid services, especially the ones leading the market. Digital divide and digital inequity will widen.

[…] understand the implications of using such tools. Developing critical thinking is key, and ChatGPT might help in this direction if used appropriately. Would we accept a statement from another person without some form of critical reflection? Of course, the answer is that it depends on so many variables.

A similar approach can be followed when assessing and evaluating responses from ChatGPT and all the advanced conversational agents that will follow in the future. The ongoing interaction with ChatGPT allowed me to reflect on how I respond to questions and how accurate these responses are. The recent trend of posting ChatGPT chats on social media shows that I am not the only one reflecting on this matter. How do we respond when we are not completely sure about our answer? Depending on the context and how expensive or cheap it is to fail, we will either do some fact checking or simply state that we are not fully sure (e.g., use the term probably). After all, how easy is it to improve and succeed without failing first?

2.1.8. Contribution 8: overcoming resistance to change with AI

2.1.8.1. Can ChatGPT help you? - Jan Pries-Heje. This short paper explores an emerging AI-based tool named "Chat Generative Pre-trained Transformer" (hereafter, ChatGPT) and its usefulness in overcoming resistance to change when implementing new digital innovations.

The way I go about it is to identify relevant questions in relation to well-known literature on resistance to change and then discuss whether a project manager responsible for implementing a given new innovation would find the AI-based ChatGPT answers useful. In the following I have shown only a few of the answers, and not in full (that would take up several more pages). I have chosen to show enough so that you as a reader can follow my line of argumentation.

When adopting and diffusing new IS, the recipients often show signs of resistance when they are required to change their behaviour or adopt new behaviour. This was identified in academic literature 75 years ago by Coch & French Jr (1948), who carried out studies of changes to production on the factory floor. Thus, the first question we could ask
For example, digital equity in education means ensuring that every ChatGPT to help us with may ask would be: Why is there resistance to
learner, regardless of socioeconomic status, language, race, geography, change? The answer to that was:
physical restrictions, cultural background, gender, or other attributes
historically associated with inequities, has equitable access to advanced 1. Fear of the unknown: People may be unsure of what the new change
technologies, communication and information resources, and the will bring and fear the unknown consequences.
learning experiences they provide. While the use of advanced chatbots 2. Loss of comfort or control…
like ChatGPT is still at its infancy, it is expected to experience a break 3. Lack of trust … [and three more bullets - left out here]
through in the following years and holds the potential to become a new
paradigm in business. The following research questions can inspire re This answer is pretty good and covers most of the knowledge we have
searchers from a wider range of disciplines to further research on the today. What is missing is for example that the person showing resistance
next generation of conversational agents and their impact on all the has a vested interest e.g., because they were part of or responsible for the
actors and stakeholders in our societies. “old” thing that is being taken over by the new thing.
The “Nestor” in diffusion and adoption Everett M. Rogers (2010)
RQ1: How to design inclusive AI applications for enabling human identified five categories (groups) of adopters called innovators, early
augmentation? adopters, early and late majority, and laggards. The early majority ex
RQ2: How do social and institutional context influence the design pects and waits for certain members of the group of early adopters called
and deployment of inclusive AI applications? “opinion leaders” to demonstrate and communicate the advantages.
RQ3: How can AI applications enhance collective intelligence and Furthermore, the late majority not only wants the advantages demon
enable new forms of community action for bridging the digital divide strated; they are also waiting for the right opportunity. Hence, the late
and digital inequity? majority group can be seen as having resistance to change but are in fact
RQ4: How can the key actors collaborate and take proactive action just waiting for the right opportunity.
towards the integration of technologies like ChatGPT for cham Let us try use ChatGPT to answer: “How do we identify opinion
pioning practical applications of inclusive AI to increase successful leader?” ChatGPT answers:
human/AI partnerships?
1. Observe behaviour: Look for individuals who are vocal and actively
Overall, I view ChatGPT as a training arena that I can throw ideas to involved in shaping the opinions and attitudes of their peers.
it and help me think. I often do this with colleagues and friends. 2. Survey employees …
Sometimes it might be about trivial tasks, pointing to automation of 3. Analyse social networks …
tasks, and sometimes it can be about more meaningful tasks, pointing to 4. Assess communication skills … [and four more bullets with
task augmentation. Instead of focusing on how to mitigate the use of explanations]
such tools we should train our students, the future employees, to
This is an excellent answer. Better than the one Rogers gives. If I was a project manager, that answer would be really helpful.

Another important source about resistance to change is Oreg (2003), who identified 17 factors to measure the expected level of resistance at the individual level. He came up with a 5-point scale, from "I generally consider change to be a negative thing" to "I sometimes find myself avoiding changes that I know will be good for me". Oreg used the scale to show, in a teaching environment at a university, that the higher the score, the less likely teachers were to be using a given innovation (new technology) for their teaching (Oreg, 2003, p. 688). Thus, the next question we may ask is "How can I overcome resistance to change when introducing a new system?". The answer to that by ChatGPT was:

1. Communicate effectively …
2. Involve employees …
3. Offer training …
4. Lead by example … [and three more bullets with explanations]

Again, a good answer but not as good as the other answers above, because this answer leaves out a lot of things that could be done. Some answers that Kotter (Kotter, 2007) would give could be "Build a guiding coalition", and thereby focus on the team responsible for change, and "Create short term wins". Thus, to use ChatGPT you need help to judge the quality of the answers. And you need to be knowledgeable about the theory in order to formulate questions. With the observation that I was missing some points from Kotter, I reformulated my question to "How can I overcome resistance to change when introducing a new system? Please include Kotter's 8 phases in the answer". And then I got an answer that was just Kotter's 8 phases, no more, no less. That is even less useful than the answer I got above.

As project managers we are interested in the phases that should be in a plan towards the change. Therefore, we ask "What phases do people go through when they change?". The answer we get from ChatGPT is:

(1) Awareness … (2) Desire … (3) Knowledge … (4) Ability … [and two more bullets]

This is not a good answer. It is just a repetition of the ADKAR model but without a useful reference such as (Hiatt, 2006). There are many other phased models for organisational change with IT. Rogers (2010) has an individual change model with phases such as "Prior Conditions – Knowledge – Persuasion – Decision – Implementation – Confirmation". And Cooper and Zmud (1990) had another phase model based on a number of empirical studies.

Hence, ChatGPT in this case gives an answer that may be OK – using the ADKAR model as inspiration is not a bad idea – but it leaves out important and probably better alternatives. However, to realise that there are alternatives you need to know that the answer given is based on the ADKAR model. But ChatGPT never tells you any references or where the knowledge on which it based the answer was found. That is a feature of the technology used.

Finally, we could ask: "Why do people fear change?". The answer to that was:

People often fear change because it brings uncertainty and the unknown. Change can challenge one's comfort zone and disrupt familiar routines, causing stress and anxiety … [continues three more lines].

Again, a good answer but not a very insightful one. To get deeper we need to understand psychological theories such as the theory of planned behaviour (Ajzen, 2011).

So, to conclude, ChatGPT is a valuable tool that can be very useful for you as the person responsible for a project leading to organisational change, e.g., a digital transformation. ChatGPT may help you at the level of a good collection of textbooks. However, if you want to get deeper and get 'behind' the surface, ChatGPT may not be the tool. The main deficiency is that you have no idea where the knowledge in the answers comes from. Thus, you cannot judge or evaluate the quality of the answers. The other important concluding point is that the more pre-existing knowledge you have, the better you can use ChatGPT and make sure that you get quality information out of the use.

2.2. Organisational and societal Impacts

2.2.1. Contribution 9

2.2.1.1. Societal Impact of ChatGPT – Rahul Dé. Since its launch in November 2022, barely three months from the time of this writing, ChatGPT has drawn considerable media and academic attention (Mishra, 2023). It is variously described as a generative AI, a large language model, and a chatbot.

At its core ChatGPT is a generative pre-trained transformer (GPT), a neural network that is trained on a large corpus of publicly available data. While training, the inputs to the network are sentences and paragraphs, and the desired or target outputs for the network are the subsequent phrases and sentences. The GPT network is thus trained to predict blocks of text that will logically and grammatically follow the input phrases and sentences. If these output phrases are then given as inputs, the network can predict further text.

ChatGPT is more advanced than previous versions of GPT, as it is also trained with a reinforcement learner that grades the many responses it produces and outputs the most suitable one. Human trainers provide the rewards for the reinforcement learning. ChatGPT is also able to retain the state of the input-output responses and is thus able to participate in a sustained conversation.
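To make the training objective described above concrete, the toy sketch below (assuming PyTorch; the token ids and the randomly generated logits merely stand in for a real tokeniser and a real transformer, so this is an illustration of the idea rather than OpenAI's actual pipeline) shows that the target sequence is simply the input sequence shifted by one position, with the network penalised for assigning low probability to the true next token:

    import torch
    import torch.nn.functional as F

    vocab_size = 50_000
    # A tokenised sentence; the ids are illustrative, not from a real tokeniser.
    tokens = torch.tensor([101, 7592, 2088, 2003, 102])

    # Next-token prediction: the model sees tokens up to position t and must
    # predict the token at position t + 1, so targets are the inputs shifted by one.
    inputs, targets = tokens[:-1], tokens[1:]

    # Placeholder for the logits an autoregressive transformer would produce,
    # one row of vocabulary scores per input position.
    logits = torch.randn(len(inputs), vocab_size, requires_grad=True)

    loss = F.cross_entropy(logits, targets)  # the language-modelling loss
    loss.backward()                          # training would update the network from this gradient

The reinforcement learning step described above is layered on top of this basic objective rather than replacing it.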
ChatGPT is now used for many applications - to generate stories and reports on various topics; for generating or correcting computer code, though this facility does not provide perfect results as yet; and to summarise articles or chapters (Kundalia, 2023). It is being used as an advanced search engine where it can provide answers to queries in a narrative form. It can also function as a chatbot, engaging in an extended conversation, where it remembers the questions it responded to and the answers it generated, thus resembling the programme envisaged by Turing in the famous Imitation Game.
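As an illustration of the code-generation use just mentioned, the minimal sketch below (assuming the pre-1.0 openai Python package and an API key supplied by the reader; the model name, prompt and table names are invented for this example) asks a ChatGPT-style model to draft an SQL query. As noted above, this facility does not provide perfect results as yet, so whatever comes back still needs to be reviewed and tested:

    # Illustrative only: assumes `pip install openai` (pre-1.0 interface) and a valid key.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own credentials

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a careful programming assistant."},
            {"role": "user", "content": (
                "Write an SQL query returning the ten students with the highest "
                "average score, using tables student(id, name) and "
                "grade(student_id, course, score)."
            )},
        ],
    )

    print(response["choices"][0]["message"]["content"])  # the drafted SQL, to be checked by a human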
ChatGPT raises some prominent issues and questions. An immediate set of questions that have arisen in the media are about students using it to cheat on assignments and examinations, about replacing jobs of certain types (Aleksander, 2017; Mitchell, 2023), about the originality and creativity of responses (Chakravarti, 2023), and about integrating ChatGPT in various tasks in organisations. IS research questions have been about use of digital assistance for learning, shopping, and advertising (Chen et al., 2022; Kuswaha and Kar, 2021; Pizzi et al., 2021), about use of chatbots in different tasks (Lin and Wu, 2023; Van den Broek et al., 2019), the communicative agency of chatbots (Araujo, 2018) and their success in handling conversations (Hill et al., 2015; Konya-Bombach et al., 2023). Issues of bias and discrimination in data and algorithms are prominent in AI and chatbot research (Akter et al., 2021; Conboy et al., 2022). Another set of questions have arisen that are focused at the societal level, raising larger issues of policy and regulation.

2.2.1.1.1. Societal questions. One goal of AI is to produce Artificial General Intelligence (AGI), or programmes that are capable of a wide variety of intelligent tasks, rivaling or exceeding human capabilities (Goertzel and Pennachin, 2007; Kissinger et al., 2021). This goal is in contrast to the current AI systems that have superior capabilities, much beyond that of the best humans, but in narrow domains, where these are referred to as Artificial Narrow Intelligence (ANI). It is in this context of the difference between AGI and ANI that ChatGPT has a strong influence. ChatGPT is one of the first set of programmes that are tending towards AGI, along with programmes such as Siri, Alexa, LaMDA, as they have a wide range of seemingly intelligent capabilities that may not exceed expert human levels at individual tasks, but are overwhelming owing to their scale, speed, and scope.
As newer and more capable versions of ChatGPT are produced, a few relating how students in Australia had been found cheating in an online
things are certain: 1) these programmes will require massive in exam by outsourcing (to an AI chatbot) the creation of their answers to
vestments and computing resources, and hence only the largest private exam questions. The article did not reveal the quality of the submitted
firms or governments will be able to develop them (Kissinger et al., answers nor the fate of the students who had perpetuated this action, but
2021); and 2) these programmes will be made available through the nevertheless the message was clear. If one can train an AI chatbot to
cloud, either for free or at a very low cost, to generate network effects generate an answer to an exam question, one can presumably train it to
and to onboard a large number of users (who will then participate in do many other things, including writing a complete journal article, if
further refining and training them). that same chatbot has access to the relevant literature, data, interpre
Across many societies and regions there is a distinct possibility that tation, etc. I imagine that some types of article may be easier for a
people will begin to depend on programmes like ChatGPT and delegate chatbot to write, e.g., those that are more formulaic in character. Arti
(Baird & Maruping, 2021) important tasks to them, initially with small cles that require extensive citation of references, as well as author in
scope, and then with growing import. Particularly in developing coun terpretations, may be harder, at least for the time being. AI chatbots can
tries, where there may be a shortage of experts on specific topics, like also analyse data, or fish for whatever relationships might exist in data,
high technology, the entire understanding and discourse will be deter so as to concoct a plausible contribution to knowledge. Indeed, an AI
mined by ChatGPT - as it will be a ready source of answers and will also chatbot will likely perform these activities much more effectively and
be always available. From school children to senior policy analysts, all efficiently than a human, discerning patterns that elude the human
will either use ChatGPT as an assistant or rely entirely on its answers to brain, and so conceivably creating a better quality or more useful
develop their own thinking, intuition, and understanding of all matters contribution to knowledge than could the human author. Whether the AI
of which they have weak knowledge. There may arise a need, then, for chatbot will then suggest a theory, which can be retrospectively tested
educational institutions, policy writers, and governments to understand with this fished data, or to be inductively created from the fished data, is
the extent to which opinions and thinking are being influenced by the another matter. I imagine that it’s possible. Furthermore, the AI chatbot
answers produced by ChatGPT. There will be a need in various devel may well ‘improve’ the linguistic fidelity of the final article, leveling the
oping countries to understand the basis of training of publicly available playing field for non-native speaker authors. However, such chatbots are
ChatGPT programmes, and how their own values are represented and also accused of producing ‘cogent waffle’ (Vincent, 2022), i.e., gram
explicated. matically accurate combinations of chunks of text from the training
As tools like ChatGPT are integrated into human-AI hybrid solutions materials that doesn’t really say anything useful and may also contain
(Rai et al., 2019), researchers will have to address societal questions of much misinformation. Cogent waffle is by no means a new phenomenon
access asymmetries. In the developing country context, there already in scholarly publishing, but the creativity of the chatbot is circumscribed
exists an issue of the digital divide (Venkatesh and Sykes, 2013), where by the breadth and depth of the training materials.
disparities in use of technology can lead to disparities in incomes and My perspective here is primarily that of the Editor of two scholarly
well-being. Though tools like ChatGPT are easy to learn and adopt, and journals (the Information Systems Journal and the Electronic Journal of
their designers will ensure they are even more integrated with Information Systems in Developing Countries). I must emphasize that
everyday-use apps, they can amplify inequality of resource access. One this is an opinion with some possible implications. My investigation of
example is of users who cannot use English (and other languages ChatGPT has not gone far beyond Wikipedia9 and a few websites,10,11
ChatGPT can work in), as they will suffer from their inability to use the though I have also benefitted from conversations with a few colleagues
AGI tool, while their peers will be able to. Further, inequality of access (see acknowledgements). There is no formal literature review: actually, I
will lead to asymmetry in the data used to train and refine these algo am not sure that there is a literature that can be reviewed since no peer-
rithms, where marginalised groups will not have their data represented reviewed articles on ChatGPT have yet been published, though I did find
(Chen and Wellman, 2004; Weissglass, 2022). an editorial (O’Connor, 2023). Moreover, this is not an empirical piece: I
Along with abilities and skill, the agency of human-AI systems will have no data to analyse or interpret. My general reaction is a strange mix
raise questions of responsibility. Research shows that there are grada of awe and fear, disgust and transformation: awe that this is possible,
tions in degrees of agency, with humans and AI systems displaying fear of what the consequences may be, disgust that people would be so
varying values on different dimensions of agency (Dattathrani and De’, lazy to resort to such chicanery, and transformation of what we find
2022). With increased use of ChatGPT, for tasks such as coding, or ethically acceptable.
planning, there will be increased focus on responsibility when things go Perhaps I should not be surprised that AI has moved to this level.
wrong. Along with blame and accountability, there will be the problems Received wisdom2 is now that we have gone well beyond the Turing test,
of allocating legal and financial liability. Future research will have to since we (humans) cannot distinguish between a human crafted piece of
identify the human vs AI agentic responsibilities for various domains writing and a machine crafted piece of writing. Perhaps the AI chatbots
and tasks. can distinguish, a point I come to later. I am in awe to the creators of the
2.2.1.1.2. Conclusion. ChatGPT presents immense possibilities of a chatbot and their NLP models. No matter the myriad constructive and
powerful AGI, one of the original goals of the field of AI. Though these positive opportunities to make the world a better place with such AI,
tools are likely to have a strong impact on many aspects of business, there is also an equally huge potential both to overturn centuries of
society and government, in terms of both creating innovation possibil scholarly tradition and to overwhelm the peer-review process with a
ities and destroying old ways of doing things, the need for research to veritable deluge of artificially generated papers. Even if those same
address the issues of discourse, divided access, and agency identified in papers are not of sufficient quality to justify publication, having them all
this note are urgent. reviewed (or even just desk rejected) will consume considerable time
and resources.
2.2.2. Contribution 10 In ante-diluvian times, we fondly imagined that contributing authors
would also be willing to engage as peer reviewers and thus to review
2.2.2.1. The Potential of ChatGPT: Awe, Fear, Disgust and Transformation
– Robert M. Davison. I8 recently chanced across a newspaper article
8 I gratefully acknowledge Roger Clarke, Samantha Moore, Gerhard Schwabe, Kirstin Krauss, Monideepa Tarafdar, Marco Marabelli, Angsana Techatassanasoontorn and Sven Laumer for insightful comments on this short piece.
9 https://en.wikipedia.org/wiki/ChatGPT
10 https://scholarlykitchen.sspnet.org/2023/01/11/chatgpt-thoughts-on-ais-impact-on-scholarly-communications/?informz=1&nbd=db8af4e3-980f-47cd-93ab-4039cea17bea&nbd_source=informz
11 https://www.ce-strategy.com/the-brief/chat/
papers written by others. Since each paper needs 2–3 reviewers, we acknowledge these AI tools? There seems to be a continuum from a
could reasonably expect each submitting author team to review (be paper that is entirely written by humans without any non-human
tween them) 2–3 papers submitted by other author teams. Indeed, many intervention, through papers that are improved with spell checkers,
journals automate this such that in order to submit a paper you need to grammar checkers, reference checkers and the like, through to papers
have an account in a submission system where you must provide some that draw on data that have been analysed with AI tools, and then papers
keywords that reflect your research interests; sooner or later you will be that are to a lesser or greater extent written by a chatbot. Where on the
‘invited’ through the same system to review another paper. But if the continuum do we draw the line of ethicality? Alternatively, is it more a
creation-agents of papers (should I call them authors?) are essentially matter of transparency, that any use of technology is acceptable so long
robots who/that generate papers for people who then submit them (I as it is openly acknowledged? These questions are not intended to be
hope that the intricacies, vagaries and inconsistencies of the submission rhetorical: they are practical: as scholars we need to decide what is
systems make it impossible to automate this too) in uncountable acceptable/ethical and what is not. When we have decided we can enact
numbers, I hardly imagine that these same authors will be enthusiastic and enforce policies. But first we need to have the conversation.
about reviewing the same number of papers as they submit! Perhaps In closing, I see this as simply the latest incarnation of disruption:
they will outsource the reviewing task to an AI chatbot?!. ChatGPT is the forerunner of a new wave of AI chatbots and other tools
That’s neither a fanciful nor a flippant remark: there seems to be no that has the significant potential to disrupt current processes. We have
reason why reviewers should not do this. Whether the AI chatbot can already seen disruption in a variety of industries; indeed, we use such
detect that the paper it is reviewing was written by (an instantiation of) disruptive examples in our teaching and if we teach digital trans
itself I don’t know and perhaps it doesn’t matter. But it opens up several formation then we will be still more familiar with the idea. Why
more cans of worms of prodigious girth. Will the AI chatbot be able to shouldn’t the academic production of knowledge also be disrupted? If
write a critical yet constructive review of a paper that it itself wrote? Or we want to have any control over this disruption, we need digital
will it find that its original creation was ‘perfect’ and thus return the leadership and a conversation about acceptable and ethical behaviour.
review comment of ‘Accept without Modification’?! What will the SE or But be sure, we can’t stop digital transformation. We may at most hope
AE think about that? I can’t speak for other journals, but in my experi to guide it.
ence as Editor of the Information Systems Journal, for instance, accepted
papers are, on average, revised 3.5 times. That implies several rounds of 2.2.3. Contribution 11
‘revise and resubmit’, not instant acceptance.
Clearly something is missing in this equation: if the AI chatbot both 2.2.3.1. ChatGPT: the digital transformation challenge for organisations
writes the papers and reviews them, what’s the point of having a human- just became harder - Paul Walton. Many organisations are struggling to
based peer review process at all? Indeed, perhaps the robots will also scale up the implementation of AI and realise the benefits it offers
publish the papers and we can dispense with the publishers as well? (Fountaine, McCarthy, Saleh, 2021). The introduction of ChatGPT (used
More practically, I suspect that if the AI chatbot proves to be as accurate as shorthand in this article for ChatGPT itself and similar products)
or useful as the current media hype suggests, then the journal publishers amplifies the difficulties of AI as a “consummate bullshitter” (Mollick,
will defend themselves by implementing detection programmes (pre 2022). ChatGPT is impressive and fluent but not always right. The issue
sumably using AI12) so as to detect the likelihood that a paper was AI- can’t be ignored—the technology industry’s commitment (Tiku, De
authored. Indeed, ChatGPT-detection programmes already exist.13 Vinck, Oremus, 2023) means that ChatGPT will be available more
Meanwhile, my own publishing contacts assure me that this is already widely and will be included in other products.
happening with both new policies related to machine generated papers So how can organisations take advantage of its strengths and miti
and checks on the provenance of papers in the works. I note with some gate the risks? ChatGPT can help people be more creative and produc
amusement that the International Conference on Machine Learning has tive but only if organisations can transform to address the issues of trust
banned the use of AI tools in writing the papers to be submitted to the and the commoditisation of AI in a way that manages the psychological
same conference, though it’s acceptable to use those same AI tools to risks.
edit and improve text (Vincent, 2023). The line between writing and ChatGPT has emerged just as AI has already created a headache for
editing seems quite thin to me. Is there any real difference between organisations caused by four factors:
editing, improving and writing?
Finally, disgust, that our esteemed colleagues might resort to such 1. Growth. The implementation of AI in business processes is growing
unethical means to outsource paper crafting to an AI program and pass it fast (De Cremer, Kasparov, 2021)— the rapid introduction of new
off as their own work. In my view it amounts to plagiarism. But then I technologies (like ChatGPT) means that competitive pressures for
have to temper my disgust with the sober realization that to be uneth organisations are mounting.
ical, a behaviour has to be proscribed in a code of ethics: if we didn’t yet 2. Risks. Because AI is fundamentally different from previous technol
proscribe that behaviour, i.e., there is no code regulating its enactment, ogies, there are new and different risks, many based on ethical
then it’s hardly fair to label the behaviour as unethical. The rise of concerns (Blackman, 2020), that are easily triggered (perhaps
ChatGPT thus creates an opportunity for us to discuss ethical issues through “artificial stupidity or other causes” (Bossman, 2016)). The
when using these and similar tools. This will potentially be a trans risks are large, including the potential of reputational, compliance or
forming conversation. If chatbots do a better job than humans, should financial damage.
we not use them? If AI tools can analyse data and produce findings, can 3. Urgency. The problem cannot be deferred. An ethical approach to AI
they also present the findings to an audience? Should we openly needs to be designed in and cannot be retrofitted (Floridi et al.,
2018).
4. Transformation difficulty. Because of the complexity of the transformation requirements in implementing AI at scale, many organisations find it hard to progress beyond proof-of-concept activities (Fountaine et al., 2021).

12 https://www.zdnet.com/article/can-ai-detectors-save-us-from-chatgpt-i-tried-3-online-tools-to-find-out/?ftag=TREc64629f&utm_email=536ccc3ce686324e956fda1e31b2c675839fb26b05bf781412a35aa499bcabf2&utm_campaign_id=5931351&utm_email_id=20d3cd8d51774a2658232222a4c9a255797b5bc5bd45d2444786acc9f4c2b7d2&utm_newsletter_id=92316&medium=email&source=iterable
13 https://www.npr.org/sections/money/2023/01/17/1149206188/this-22-year-old-is-trying-to-save-us-from-chatgpt-before-it-changes-writing-for

ChatGPT makes these trends more immediate and amplifies the challenges, starting with trust. As Sandel says (Pazzanese, 2020): "Part of the appeal of algorithmic decision-making is that it seems to offer an
objective way of overcoming human subjectivity, bias, and prejudice,… ChatGPT can also be applied to a long-standing challenge—that of
but many of the algorithms … replicate and embed the biases that knowledge management. AI has long offered the promise of improving
already exist in our society”. Because ML uses data that reflects the access to organisational knowledge through the use of chatbots, auto
biases of society, “bias is machine learning’s original sin” (Weinberger, mated document and image analysis, providing explanations (Bundy
2019). Pinker (2022) frames it elegantly: “Knowledge is acquired by et al., 2019) or through improved human collaboration. Echoing Lew
formulating explanations and testing them against reality, not by Platt’s anguished cry (“if only HP knew what HP knows” (Sieloff,
running an algorithm faster and faster. Devouring the information on 1999)), organisations have struggled to make their knowledge readily
the Internet will not confer omniscience either: Big Data is still finite available. This challenge falls naturally into the domain that ChatGPT
data, and the universe of knowledge is infinite.” inhabits, although organising organisational knowledge in a form that it
The scope of ethical issues raised by AI is wide-ranging (West, 2018). can access will remain difficult.
Numerous sets of principles for ethical AI have been documented (over a ChatGPT can help people be more creative and productive and
hundred and fifty sets of guidelines are documented in the global in improve organisational knowledge management. But it amplifies many
ventory (Algorithmwatch.org, 2020)) but with common ground, for AI challenges that organisations are already struggling with.
example in the EU principles (EU High-Level Expert Group on AI, 2019).
In its current invocation, ChatGPT violates several of these princi 2.2.4. Contribution 12
ples. It is based on ungoverned information that isn’t always reliable and
it doesn’t communicate the reliability of that information. So, it is easy 2.2.4.1. Services marketing and management implications of ChatGPT
to see that there may well be difficulties with the following principles: -Jochen Wirtz. Smart service technologies, both hard and software, in
combination with intelligent automation (IA) rapidly become more
• privacy and data governance—including, for example, GDPR regu powerful, cheaper, easier to implement and use (Bock et al., 2020;
lations and the “right to forget”; Bornet et al., 2021). They have the potential to bring unprecedented
• transparency—including the need for explainability (Bundy et al., improvements for the customer service, service quality, and productivity
2019) which would impact its use in governance, compliance (e.g., all at the same time (Huang and Rust, 2018: Wirtz et al., 2018; Wirtz and
demonstrating adherence to regulations), legal questions (e.g., about Zeithaml, 2018).
intellectual property) or customer interaction; These technologies enable specific service processes (e.g., an address
• accountability—how can an organisation be accountable for infor change) and entire service products (e.g., a digital banking service) to be
mation derived from ChatGPT used by staff under these circum streamlined, simplified, and scaled. Information processing-type ser
stances? How can it avoid fraud or criminal activities? vices (think of any service that can be delivered on an information
counter, over the phone, via email, or on an app) increasingly will be
ChatGPT makes organisational governance more difficult because it end-to-end (E2E) automated with no frontline employees involved. For
extends the use of AI outside knowledgeable data science teams. It is example, most Google customers (including its revenue-generating
available to developers through programming languages (e.g., Python) advertising clients) hardly, if at all, interact with human service pro
or cloud services (e.g., Amazon Web Services, Microsoft Azure, Google). viders. This allows Google to offer high-value services such as Google
In addition, so-called “no code” or “low code” technologies enable AI to Maps, GMail and Google Scholar for free. That is, costs are so low that
be used, in a similar way to Excel and Access, directly by “business these services can be advertising funded or offered advertising free for a
technologists” (including “citizen developers” and “citizen data scien small fee. In future, we can expect such E2E automated services become
tists”) who are not professional developers. Because it is so easy to use by the norm for many information-processing type services such as finan
anyone, ChatGPT takes this trend even further. cial services and bookings, but also healthcare and education as mar
This means that the problem of managing AI risks in organisations ginal costs will be negligible in many cases (e.g., Wirtz, Kunz, Hartley, &
has unprecedented scale. It has expanded from a (relatively) small team Tarbit, 2023; Wirtz, Lin, & Das, 2022).
of knowledgeable professionals to orders of magnitude more people Even for physical service processes, intelligent automation will
without any experience of the risks or the governance required. dramatically change the customer journey. For example, future hair
How can AI at this scale be controlled? Following existing models for salons will be able to build innovative solutions that use smart mirrors
digital development (UK Government Digital Service, 2019) and in line and AI to analyze a customer’s hair and recommend different hair styles.
with the EU principles, the need for high levels of human control has Hair cutting service robots can then execute the chosen cut.
been argued by Shneiderman (2020), amongst others. He argues for ChatGPT uses an umbrella technology called generative AI which
processes and interfaces that exhibit the “high levels of human control gets closer to achieving artificial general intelligence that allows the
AND high levels of automation” that can also increase human autonomy technology to understand and learn any intellectual tasks just as humans
(Bernstein et al., 2022). But the application of this approach at scale is can. Already today, generative AI is able to generate novel content such
daunting—how can controls catch the wide range of risks in time to as text, images, program code, and even poetry. In contrast, today’s
mitigate them? chatbots are primitive in comparison. Their development is typically
In addition, ChatGPT amplifies the psychological risks associated based on pre-existing frequently asked questions (FAQs) cum training
with AI (and with information processing more generally). As Kahne data, all powered by still relatively limited technology which tends to
man (2011) points out, thinking rigorously requires effort and instead make the bot’s responses pre-programmed. One can expect that gener
people often rely on instinctive thinking that ignores the quality of in ative AI will be introduced into many existing customer interfaces of
formation. ChatGPT plays directly to this tendency—it cuts out much of other providers in addition to ChatGPT (think of Siri or Alexa in future
the hard work and produces seemingly impressive output. Why bother ‘incarnations’). That is, ChatGPT and other general AI systems allow
rigorously assessing the detail when the result looks good? opening-up of chatbots and service robots to move closer to ‘general
This is just part of a wider question: how should AI and people work intelligence’ and provide service closer to the level of understanding and
together to take advantage of the best that each can offer (Bernstein flexibility as today’s human frontline employees.
et al., 2022). AI offers the potential for automating some work and Moving closer to general intelligence is an important next big step in
letting people focus on higher value, more creative activities. This is a E2E automation of customer service as is likely to be a gamechanger for
key opportunity for ChatGPT—it can easily give people access to ideas many services and these new technologies will bring the digital frontline
and knowledge (Agrawal et al., 2022) just as it already being used to to new levels, not just in their AI engine, but also in design of the
enhance the productivity of software developers, writers and other customer interface. For example, already today, Soul Machines, a
professions.
developer of what they call 'digital people', allows service firms to tailor their digital workforce to their desired brand image, positioning, and customer preferences (e.g., age, ethnic group, gender). Furthermore, for customers it will be largely indistinguishable whether they are served by a digital service provider or a human employee, and they are likely not to care very much. One can even expect that the instant availability (i.e., no waiting for a customer contact agent to become available), 24/7 service, and service in one's preferred language will be preferred by most customers over having to dial into a traditional call centre. Furthermore, the cost implications are significant, and given that we live largely in competitive market economies, these cost savings are likely to be competed away, leading to increasing standards of living.

As we are at the beginning of this technological revolution of customer service, research is needed to understand better the key value drivers of ChatGPT and other generative AI solutions for the service sector. Important research questions include: (1) how to design, operate and continuously improve frictionless, generative-AI-governed customer journeys; (2) how this technology can guide customers effectively through their journeys and how it can be designed to be customer-error tolerant; and (3) how to master automated service recovery when needed.

Finally, ChatGPT and other advanced digital front line technologies (whether virtual on video or physical as in a service robot) carry serious ethical, fairness, and privacy risks (Belk, 2021; Breidbach & Maglio, 2020; Wirtz et al., 2023). For example, the vast possibilities of capturing data and of observing and analysing customers and gaining insights into their lives and psychology are well beyond George Orwell's terrifying scenario described in his classic book "Nineteen Eighty-Four: A Novel" (Wirtz et al., 2023). It is disconcerting that these technologies can result in customers being assessed, predicted and nudged, often without their consent and awareness (Gawer, 2021). We will need research to understand, manage, and mitigate the risks of generative AI such as ChatGPT and other technologies that get ever closer to artificial general intelligence when used in customer service.

2.2.5. Contribution 13: marketing in the world of GPT-3.0 and ChatGPT: Futuristic reflections - Janarthanan Balakrishnan, Rohita Dwivedi, Samuel Ribeiro-Navarrete and Adil S. Al-Busaidi

2.2.5.1. ChatGPT and marketing. Technology has played a crucial role in marketing during the last two decades, and AI has contributed extensively to marketing during the previous five years. From automation to analytics, AI has accelerated marketing performance to a great extent (Dwivedi et al., 2021b). The chatbot is one such application, and it has gained broad attention from marketing practitioners as well as academicians (Balakrishnan et al., 2022). Chatbots employ AI architecture to respond to customer queries in a simulated environment (Balakrishnan & Dwivedi, 2021). Building on the chatbot, OpenAI has introduced ChatGPT, a sophisticated AI platform that uses language processing models to answer human queries (OpenAI, 2022). The functions of ChatGPT are huge, even incomparable to other modern AI technologies. ChatGPT uses the Generative Pre-trained Transformer (GPT) architecture, which OpenAI also developed. Academicians and industry professionals have been perplexed by the growth of, and speculation around, ChatGPT during the last two months. The recent version of GPT-3 is a sophisticated AI model that provides ample scope for language translation, modelling, and generative texts (Hughes, 2023). OpenAI made its mark with DALL.E2, a new AI system that can create realistic art and images based on an expected language parameter (OpenAI, 2023). Much speculation now surrounds OpenAI, DALL.E2, GPT-3, and ChatGPT. The future of ChatGPT mostly remains unknown, but challenges and opportunities lie ahead for businesses. Especially for marketers, ChatGPT can offer various opportunities at different stages of the customer purchase journey.

Table 2
ChatGPT in marketing strategies.

Building marketing campaigns. Indicative solution: ChatGPT might be able to provide suggestions for developing a successful campaign, for example: "Suggest me an effective marketing campaign on Instagram". Reflective tactic: ChatGPT is a generative tool which requires precise queries, such as "Suggest me an effective marketing campaign on Instagram for a bouquet start-up."

Content marketing. Indicative solution: ChatGPT can assist in providing appropriate and accurate content for a given query related to a campaign, product, sales page, email, or blog post. Reflective tactic: the query should be precise in order to obtain creative results.

Content designing. Indicative solution: ChatGPT can offer ideas for designing and other improvements. Reflective tactic: DALL.E2 can be used to design the product creatively, as instructed in the query.

Chatbot based services. Indicative solution: GPT-3 is the next-generation language generative AI, which can be used in chatbots for effective query handling. Reflective tactic: the chatbot should be able to integrate GPT-3 and DALL.E2 to generate appropriate avatar conversations.

Customer experience. Indicative solution: AI-based experience is well recognised in the academic literature; ChatGPT can provide a more enriching experience to customers. Reflective tactic: the role of ChatGPT in other immersive technologies will decide how much better an experience it can provide compared to the present state of AI experience.

Keyword suggestions. Indicative solution: ChatGPT can provide keyword suggestions which can assist sponsored campaigns; apart from regular keywords, ChatGPT allows marketers to test alternative keywords in the campaign. Reflective tactic: the search for content and keywords should be narrowed based on the personalised campaign requirements.

Marketing research. Indicative solution: ChatGPT can assist marketers in testing content performance (A/B testing), market statistics, and demographic targeting information. Reflective tactic: the research outputs may reflect data from the last two to three years, so a recency effect will apply to marketing research; while ChatGPT might not be able to perform A/B testing in its current form, it can provide some general testing guidelines, benchmarking content, and statistics.

Brand comparison. Indicative solution: ChatGPT can assist marketers in understanding brand position against rivals to enhance the existing brand. Reflective tactic: ChatGPT could help collect data about other brands (e.g., "compare iPhone and Samsung"; "analyse 7 Ps of Pepsi marketing"); the collected data could be used for different purposes, such as new product development (NPD).

2.2.5.2. Opportunities for marketers. Customer information search and query handling are inevitably present across the stages of the customer purchase journey. Can ChatGPT serve a similar or the same purpose for the customers? The question requires more extensive investigation in the coming days, considering how ChatGPT describes its business model. Table 2 offers perspective insights on how ChatGPT can benefit marketers in various ways. However, harvesting the exclusive benefits of ChatGPT rests upon its future developments. Table 2 shows eight strategies that can use ChatGPT: building marketing campaigns, content marketing development, content picture and art designing, services marketing, customer experience, keyword suggestions, marketing research and brand comparison. The table also shows indicative solutions and reflective tactics for every strategy. Indicative solutions explain the operationalization of ChatGPT in the respective marketing strategy, and reflective tactics explain the tactical moves marketers should follow to leverage ChatGPT and GPT-3.0 to their fullest potential. ChatGPT has similar reflections of search engine optimisation and content marketing flavour. However, ChatGPT, to our knowledge, does
not work the same as Google. It has a trained dataset with a 570 GB explore the effects of AI-generated marketing-related content on society.
equivalent size (Theconversation, 2023). Moreover, the learning For example, the interplay between AI-generated content and AI safety
competence of GPT is expected to grow in the coming days. Unlike other might be investigated to find the impact of misaligned AI on users
search engines or content curation platforms, ChatGPT can precisely consuming AI-generated content. Overall, the use and challenges of
answer queries. Of all the benefits that ChatGPT can provide to mar ChatGPT mostly remain unexplored. Presently researchers perceive the
keters, it is paramount that company chatbots and content marketing future of ChatGPT based on the existing knowledge of reflective AI. With
teams should leverage the fullest potential of GPT-3 and its associated ChatGPT yet to evidence a series of developments, researchers should
tools. GPT-3 can curate content using sophisticated learning algorithms, enhance the available knowledge in the area of IS.
which can help marketers arrive at better research results and get more
optimised content. Algolia is a content-building platform that uses 2.2.6. Contribution 14
GPT-3 to better serve customers with search results (Algolia, 2023) and
optimise long tail results (Adams, 2023). Marketers could potentially 2.2.6.1. ChatGPT in banking services – Emmanuel Mogaji, Mousa Ahmad
integrate the AI interface (GPT-3) into the chatbot structure to enrich Albashrawi Sriparna Basu and Sangeeta Khorana. ChatGPT is an
AI-based customer conversation. In the present scenario, most chatbots emerging technology set to change the landscape of conversation agents
operate in a simulated environment; when integrated with ChatGPT, (Thorp, 2023); ChatGPT is said to build on the prospects of the chatbot,
marketers may find fruitful customer engagement with a multilevel of making conversation more human, answering questions, generating
data points acquired from the customers. Future ChatGPT models, when ideas, and suggesting solutions. It has been considered to shape jour
followed with proper precision, can alter their avatar positions based on nalism, law and marketing with its content-creating strategies (Ng et al.,
the conversation mood and thus provide customer engagement and 2022; Kelly, 2023). It is not surprising to see businesses evaluating how
experience. Marketers will explore the opportunities of framing brand they can integrate this technology into their operations. This piece,
placements in similar technologies in a sponsored way. Nevertheless, it however, aims to explore the prospect of ChatGPT in financial services
depends on how OpenAI decides its business model and whether it fol (Lucey & Dowling, 2023). Banks have always been known to accom
lows a subscription, premium or freemium model. modate technology and transform their operations (Abdulquadri et al.,
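As a sketch of the kind of chatbot integration discussed above (again assuming the pre-1.0 openai Python package with an API key configured elsewhere; the florist scenario and system prompt are invented for illustration), a company chatbot can keep the running conversation as context so that a ChatGPT-style model handles multi-turn customer queries rather than isolated FAQ lookups:

    import openai

    # The running conversation; the system prompt frames the bot's role.
    history = [{"role": "system",
                "content": "You are the customer-service assistant of an online florist."}]

    def reply(user_message: str) -> str:
        history.append({"role": "user", "content": user_message})
        response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
        answer = response["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": answer})  # retain state across turns
        return answer

    print(reply("Which bouquet would you recommend for a wedding anniversary?"))

Keeping the full message history is what lets the bot refer back to earlier turns; moderation, logging and hand-over to a human agent would sit around this loop.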
2021); from the time of online banking to open banking and even
2.2.5.3. Challenges for marketers. While ChatGPT and GPT-3.0 look chatbots and Robo advisors and investors, banks have always adopted
promising for marketers, there are some challenges marketers have to technology (Soetan et al., 2021; Mogaji & Nguyen, 2022) and not sur
face. ChatGPT can be an effective content curation tool similar to Goo prising to see many banks closing their physical branches and either
gle. Still, GPT − 3 uses robust database models with AI-based learning adopting financial technology (fintech) to streamline operations (Bon
algorithms, which indicates that keyword-based content or search en fim et al., 2021; Nguyen & Mogaji, 2022) or converting into digital
gine marketing may not be effective in ChatGPT architecture. Given that banks. It would not be surprising to see how ChatGPT can be integrated.
the AI pre-trained algorithms are more powerful in the ChatGPT, mar However, it is imperative to recognise that banks operate in a highly
keting strategies require a well-defined collaboration with OpenAI. regulated sector and technology adoption is often strategically explored
Some companies collaborate with OpenAI to use GPT-3.0 for their tools, (Vives, 2019; Czarnecka & Mogaji, 2020). In light of this background,
such as Algolia, Copy.ai, Replier.ai, Jasper AI, and Quickchat (Bhatta this piece contextualised retail financial services across three strands –
charyya, 2023). Though the adoption of GPT-3.0 will increase in the financial services marketing, provision and requisition as a spectrum of
coming days, the financial and knowledge capital required for its interaction between the consumers and the bank. Each strand is subse
implementation can be huge. The power of ChatGPT and GPT-3.0 may quently discussed while highlighting the prospects and implications of
require fewer human efforts compared to the current technology-based ChatGPT.
marketing environment. This scenario subsequently will question and
may conflict with the strategical orientation of marketers and may 3. Prospects and implications of ChatGPT in banking services
threaten various traditional marketing principles. Notably, the genera
tive responses provided by ChatGPT are subject to the query raised by 3.1. Financial services marketing
users. So an unspecific query may lead to erroneous results, thus
extending to brand value degradation. So it will be a challenging task for One of the bank’s primary responsibilities is to introduce financial
marketers to build an eco-system where customer queries are answered services to customers, either their existing consumers or their prospec
precisely. tive customers (Soetan et al., 2021). Banks can process a lot of infor
mation based on their engagement with customers and then be able to
2.2.5.4. Future research directions. After ChatGPT, IS and marketing offer them relevant products and services. AI has been tremendously
research may take a new turn on how researchers will look at AI-based deployed in this area of banking to understand consumers’ needs and be
performance tools. Future research which involves GPT-3 should focus able to target them with necessary products effectively (Mogaji et al.,
on investigating the architecture of how organisations can employ GPT- 2020a; 2020b). Here the bank is asking consumers – come bank with us.
3 and ChatGPT models in the existing state of their business. Primarily, There are tremendous opportunities for ChatGPT at this point, as banks
studies should explore how generative AI technologies can benefit can use it for their back-end operations, data analysis and marketing
marketing in customer services, customer engagement and experience, financial services without direct engagement with the customers
content curation and development, marketing research, lead generation, (Northey et al., 2022; Sheth et al., 2022). There are possible applications
commerce, and promotional activities. Research should also focus on for developing and implementing marketing communication strategies,
knowledge of how customers may welcome this powerful tool in the understanding consumers’ needs and creating personalised customer
version of chatbot, or can it replace entirely humans in customer ser offers (Mogaji et al., 2020a; 2020b; Omoge, Gala, & Horky, 2022). This
vices? Researchers should focus on developing frameworks to integrate technology can be used for content creation, integrating emotional ap
ChatGPT with their chatbots to enrich its presence. ChatGPT may not peals through human conversation feel in marketing campaigns.
always provide positive sentiment for every product and brand. So Considering the high regulations in the sector of financial service, there
beyond looking at the benefits of ChatGPT, researchers should explore are significant implications for the claims being made by ChatGPT.
how marketers may perceive ChatGPT as a threat. Besides concentrating These limitations suggest that marketing communications may not
on the text queries, future research should also focus on the role of solely rely on ChatGPT; human involvement would still be needed to
pictures and arts in generative AI frameworks. Future studies may also verify the trustworthiness of the insights and personalised offers. There
is also a possible need to declare the source of information – has this campaign been auto-generated by an AI machine? Marketing decisions should therefore be taken with that knowledge in mind. Banks would be expected to invest in the infrastructure, exploring their technical capabilities and human resources to integrate it into existing digital transformation strategies (Abdulquadri et al., 2021; Diener & Špaček, 2021).

3.2. Financial services provision

Consumers will always need the banks to provide financial services (Soetan et al., 2021); there will always be reasons to engage, and with these engagements, data are being generated which are beneficial to both parties (Mogaji et al., 2020b). There lies an opportunity for ChatGPT to explore these datasets to inform some banking decisions. At this stage of the engagement, banks are inviting consumers to 'come bank with them'; banks need to show some form of understanding, commitment, and awareness about the needs of their customers (Sheth et al., 2022; Omoge et al., 2022). Banks can use ChatGPT technology for their front-end operations to enhance business operations and directly engage with customers. This conversational agent builds on the chatbot's success, which has been well integrated into banking operations; the chatbot is used to enhance customers' experiences, provide answers to frequently asked questions, and streamline the banking relationship (Abdulquadri et al., 2021; Balakrishnan et al., 2021). Banks can gain insight into their service provision quality, improve their algorithm design and improve customer services (Sheth et al., 2022; Soetan et al., 2021). However, trust in service provision will be paramount (van Esterik-Plasmeijer & van Raaij, 2017) as ChatGPT finds its way into banking operations. It would not be surprising to see consumers questioning their trust in the services and information provided. Could customers make a better decision if and when served by a human? Are customers willing to accept offers made by these technologies? What are the implications for service consumers with vulnerabilities, or the impact of the dark side of this technology on consumers' well-being? Perhaps technology can't be wrong. There are also implications on convenience, promptness of services, and accurate decisions (Mogaji et al., 2022; Ghazwani et al., 2022). Banks would be expected to invest in training staff and educating customers about what's available and what can be done. Since many customers still need to become conversant with the chatbot, banks need to know their customers' needs and how far to push the technology drive (Abdulquadri et al., 2021).

3.3. Financial service requisition

With the growing power of financial technology and the disruption in the sector, many consumers are looking beyond their banks for financial advice and investment options (Northey et al., 2022). Many FinTech companies are not banks but are offering innovative financial products, and ChatGPT provides an opportunity for these companies to meet the growing needs of consumers. At this stage, consumers are looking beyond their bank and asking – who should I bank with? They are looking for platforms that align with their values and interests. Customers can use various AI-driven platforms provided by fintech companies to get advice from varied sources without directly engaging with any bank (Ghazwani et al., 2022; Mogaji et al., 2022). They do not have to rely on their bank for advice; they can look at other providers as they seek information about where to invest, money management, and different financial needs. With ChatGPT, consumers would be asking, 'where should I invest?', 'which ethical funds should I invest in?' or 'when do I invest?' Answers to these questions, as provided by ChatGPT, may influence their decisions. It is, however, imperative to recognise that the data informing the decisions and output of ChatGPT would be important. As consumers rely on this Robo Advisor (Zhang et al., 2021), the data upon which these systems are trained would be essential to ensure they get unbiased information (Mogaji et al., 2020a; 2020b). Likewise, there would be implications on how banks can positively and ethically influence decisions and advice provided by the chatbot (Northey et al., 2022). Implications on individual investment preferences would also be pertinent – religious, political, and environmental interests could influence investment decisions. How would consumers get information about investment options that align with their ethical, religious or political beliefs (Riedel et al., 2022; Bukhari, et al., 2019)? Who will update and train the chatbot with this information? Where will it be sourced? There are significant implications for financial services providers to ensure that the correct information is provided and made available to the consumers to allow them to make an informed decision (Mogaji et al., 2020a; 2020b). While emerging fintech companies are developing these financial services management tools, allowing customers to ask questions and get financial advice, it is essential to raise awareness as customers need to know what's available (Abdulquadri et al., 2021).

4. Conclusion

The huge benefits of ChatGPT have been recognised, and with the amount of investment and interest in this technology, its impact is bound to grow (Dowling & Lucey, 2023; Ng, Haller, & Murray, 2022). Brands must start exploring its prospects. This article has focused on retail banking, providing a reflective insight into the adoption of ChatGPT across financial services marketing, provision and requisition. There are opportunities for managers, practitioners, and policymakers to have a holistic view of this technology; however, it is essential to acknowledge that this is an emerging area for research, and there are opportunities for future research to establish these positions empirically.

4.1. Contribution 15: using ChatGPT in tourism, travel, transport and hospitality industries – Dimitrios Buhalis

Technology and smart tourism diffuse innovations across tourism service ecosystems, disrupting legacy operational practices and processes (Buhalis, 2020). AI and ML are emerging rapidly as new technological tools that will further revolutionise the use of technology in the Tourism, Travel, Transport and Hospitality Industries (Buhalis et al., 2019).

Until the launch of the Chat Generative Pretrained Transformer (ChatGPT), a language model developed by OpenAI designed for generating text-based responses in a conversational manner, AI was only used by technology experts rather than the general public. In early 2023 ChatGPT gave the general public access to AI and ML by answering unstructured questions in natural language text. ChatGPT supercharges the generation of coherent and contextually appropriate responses to natural language inputs. This technology will revolutionise search and will transform the nature of access to products and services across industries. Naturally, Google as a service was challenged, not only because search is changing by nature, but also because its own AI service underperformed whilst Bing resurfaced with a stronger AI proposition. Google effectively searches for web pages that include key phrases used for search. Google's search algorithm is designed to use keywords and phrases entered by the user to return a list of web pages, images, and other content that match the query. ChatGPT uses a language generation model and the Transformer architecture developed by OpenAI to search across a massive corpus of text data and to amalgamate comprehensive human-like text answers based on the input it receives. It is designed to respond to natural language queries in a conversational manner and can answer questions, summarise information, and generate a comprehensive text. Both technologies can be useful for answering questions although they are designed for different purposes.
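The contrast between keyword retrieval and generative answering can be made concrete. The sketch below is illustrative only and is not the tooling used in this article: it assumes the openai Python package as it existed in early 2023 (the ChatCompletion endpoint), a placeholder API key, and a toy page index standing in for a web search engine.

```python
import openai  # assumes the openai package as of early 2023 (ChatCompletion endpoint); illustrative only

openai.api_key = "YOUR_API_KEY"  # placeholder

question = "What can tourists with kids do in Kalamata, Greece?"

# Keyword retrieval (schematic stand-in for a search engine): return pages containing the key terms.
pages = {
    "https://example.org/kalamata-guide": "Kalamata guide: beaches, castle, railway park, activities for kids ...",
    "https://example.org/athens-museums": "Athens museums and galleries ...",
}
terms = ["kalamata", "kids"]
hits = [url for url, text in pages.items() if all(t in text.lower() for t in terms)]
print("keyword hits:", hits)

# Generative answering: one conversational request that synthesises a prose answer.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": question}],
)
print(response["choices"][0]["message"]["content"])
```

The keyword path returns documents for the traveller to read; the generative path returns a synthesised answer, which is exactly the difference illustrated in Figs. 2 and 3.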
Naturally, travellers (demand) and Tourism, Travel, Transport and Hospitality organisations and destinations (supply) are excited about the incredible potential the AI, ML and ChatGPT tools bring into the ecosystem, as illustrated in Table 3.

On the demand side, traveller information is critical for their
satisfaction. They search for information, construct itineraries, select suitable timetables, and evaluate alternative products and services. Tourism by definition is a hostile industry, as people/customers travel to unfamiliar destinations to cocreate experiences (Buhalis et al., 2019). They consume products and services that meet their requirements in the present context (Buhalis & Sinarta, 2019). Therefore, they need extensive information provided by several organisations and summarised in a meaningful and comprehensive format. They have relied on search engines, such as Google, since the early 2000s to find information, build itineraries, search for specialised services, eliminate choice and develop dynamic itineraries (Fig. 2). They had to go through lists of websites in order to then co-create their experience by combining different resources (Fig. 2).

Table 3
ChatGPT use in the Tourism, Travel, Transport and Hospitality ecosystem.

DEMAND/Tourists                       SUPPLY/Tourism organisations and destinations
Information finding                   Concierge services for consumers
Building itineraries                  Marketing content text/pictures generation
Searching for specialised services    Menu engineering and recipe development
Eliminating choice                    Fact finding
Dynamic itineraries                   Identification of resources
Content sharing                       Social media and organisations' own web pages

ChatGPT offers the opportunity to combine a range of different resources into a text that provides a comprehensive answer to travellers' enquiries (Fig. 3). By using ChatGPT, tourists can receive quick and accurate information in natural language to help them plan their trips and make the most of their travel experiences. The high use of innovative digital marketing will have a higher online influence and loyalty, particularly for young people (Buhalis et al., 2020). When ChatGPT is combined with voice assistants and AI is combined with contextual real-time services, it will offer revolutionary smart customer service (Buhalis & Moldavska, 2022).

On the supply side, ChatGPT can assist Tourism, Travel, Transport and Hospitality organisations with both customer-facing and back-office functions. ChatGPT has a real chance to make an impact in the travel industry through trip planning. By allowing travellers to use AI to create a trip itinerary and find top hotels, the time-consuming process of sifting through numerous options can be reduced to just minutes. Providing concierge services, ChatGPT can assist in answering traveller enquiries by generating information and answers about destinations and attractions. Providing travel recommendations for events, activities, hotels, restaurants, and other travel services, as well as advice on local customs, visa requirements, and travel tips, enhances value-added services. It can assist with booking and reservation enquiries, provide guests with information about availability and pricing, and assist with the booking and reservation process. ChatGPT can personalise recommendations for guests based on their interests, abilities, disabilities and preferences. Fig. 4 illustrates some suggestions for a blind traveller to Paris, France. Eventually, it can empower customer-facing bots (Tércio Pereira et al., 2022).

ChatGPT can also create alternative itineraries, routes and transportation options, evaluating alternatives for travel planners, corporate travel companies and agencies and tour operators (Fig. 5). By providing multilingual support, ChatGPT can communicate with travellers in multiple languages, helping to improve the overall experience and increase satisfaction.

ChatGPT can also help Tourism, Travel, Transport and Hospitality organisations to generate marketing content and detailed descriptions for attractions, hotel rooms, facilities, and services as well as develop destination content for use on websites and social media. The generated marketing text, descriptions, website and social media content can be widely distributed to attract more visitors. A very wide range of back-office functions can also benefit from fact-finding enquiries and the identification of resources. Menu engineering and recipe development, for example, may benefit restaurants and hotels when ChatGPT assists in the development of innovative gastronomy offerings. Fig. 6 illustrates suggestions for Greek gluten-free recipes. Overall, ChatGPT can help Tourism, Travel, Transport and Hospitality organisations to revolutionise customer communication, improve service and enhance the travel experience, streamline operations and have access to knowledge databanks.

The virality of ChatGPT instantly illustrated the tremendous potential of AI and ML. However, it also brought to light that language models like ChatGPT are complex and continuously evolving systems, so there is always room for improvement and further development. A range of issues related to intellectual property and ownership of data was also exposed. There are challenges ahead to improve the system by using more diverse training data, including text from different languages and cultures; understanding a more diverse set of users; providing clearer fallback responses; developing the depth and breadth of information provided; and avoiding confusing or misleading answers. Incorporating more external knowledge sources, such as databases or APIs, user-generated content, and user feedback into ChatGPT's response generation process can help it provide more accurate, in-depth and up-to-date information. Analysing big data faster and incorporating real-time data will assist language models to learn faster and have a bigger impact on the industry (Buhalis & Volchek, 2021; Stylos et al., 2021). Nevertheless, it is evident that this revolutionary technology will be assisting humanity to interact faster, cheaper and more efficiently, cocreating value in the future. But it should also be noted that ChatGPT can also be used to generate fake reviews of hospitality and tourism organisations.

4.2. Impact on the academic sector

4.2.1. Contribution 16: generative AI in teaching and research: some preliminary thoughts - Ryan Wright & Suprateek Sarker
The launch of ChatGPT in November of 2022 has caught the attention of all scholars, regardless of discipline. The popular press has also engaged in discussions around the implications of ChatGPT and, more broadly, of generative AI, highlighting the many potential promises and pitfalls of these systems. As a background, generative AI started gaining traction in 2014 when generative adversarial networks (GANs) were developed and were widely used to create useful outputs such as facial images and noise maps (Creswell et al., 2018). Now, aided by a web interface provided by the company OpenAI, the generative AI tools respond to queries in English. Further, these tools have produced answers to complex questions indistinguishable from expert human responses.

As IS scholars who view emergent IT phenomena through a sociotechnical perspective (Sarker et al., 2019), we believe that offering the world an understanding of generative AI adoption and use, while considering the unintended consequences, is not only consistent with our expertise but is also our responsibility. Consequently, we expect that the IS journals will likely receive many theoretical and empirical manuscripts on generative AI in the near future.

It is our perspective that, ultimately, the application of generative AI in our own work practices will significantly shape the research agenda within IS and beyond. Our goal here is thus to provide some preliminary thoughts on how IS scholars can start incorporating generative AI into their professional lives, particularly in teaching and research. We will also take a long view of the implications within each of these categories of work. We begin with what most popular press and university guidance has centred on, which is the implications for our teaching.
Fig. 2. Google Output: what can tourists with kids do in Kalamata, Greece.
Fig. 3. ChatGPT output: what can tourists with kids do in Kalamata, Greece.
4.2.1.1. Teaching in the era of generative AI. The release of ChatGPT and the commentaries that have followed have caused a great deal of anxiety to many academics. This anxiety is driven partly by the need to maintain academic integrity in university classrooms (Cotton et al., 2023). Noam Chomsky has been quoted as saying that these systems enable "basically high-tech plagiarism" and offer "a way of avoiding learning" (https://www.openculture.com/2023/02/noam-chomsky-on-chatgpt.html). It is clear that academic integrity issues do exist (Rosenblatt, 2023), even with tools that can be used to detect generative AI text (see: https://platform.openai.com/ai-text-classifier). However, we argue that closing off the possibilities of using such technologies by pointing to academic integrity issues is a mistake. This technology has the potential to fundamentally change the classroom experience and the knowledge and skills outcomes of our students. Because of the incredible potential offered by generative AI, we offer suggestions that higher education instructors can use to engage deeply with this disruptive technology.
First, it is critical that the mindset used by instructors is that of experience and experimentation. In the IS literature, a helpful framing for this exploration is IT Mindfulness. IT Mindfulness, defined by Thatcher and colleagues (2018), includes four elements: 1) alertness to distinction, 2) awareness of multiple perspectives, 3) openness to novelty, and 4) orientation in the present. Instructors can use this theoretically derived construct to help engage students in exploring technology tools. The use of IT Mindfulness-based training has been shown to improve resistance to job burnout (Pflügner et al., 2021), help identify fake news (Kim & Dennis, 2019), and create more secure online behaviour (Jensen et al., 2017). It will be important to invite students into the conversation and allow them to also apply IT Mindfulness to generative AI in their work practices.

Professors and students should explore both the applications and the boundaries together, thus allowing the use of this technology in ways that were previously unimaginable. There have been several papers published on SSRN, arXiv and other fast-to-publish outlets that provide high-quality in-class exercises and assignments allowing the student to experiment under the guidance of a professor. For example, an instructor can ask students to use ChatGPT to write an essay on a topic related to the class. Then the instructor would ask students to provide a critique of this essay. An examination of the strengths and weaknesses of the essay should help aid in the exploration process (Mollick & Mollick, 2022).

Second, and related, it is important to provide guidance to students on not only the ethical implications of using the technology but also the boundaries with respect to its capabilities. Research has shown that "…trust in a technology is based on what the technology can do for the person, such as giving help and being reliable." (Lankton et al., 2016, p. 210). Assuming this holds, students will trust the technology because of its capability to aid in multiple work practices. It is the professor's role to offer a critical view that allows students to explore and use critical thinking when using technology. Researchers have already provided compelling evidence that generative AI in general and ChatGPT specifically provide biased output (e.g., Chen et al., 2023; Hartman et al., 2023).

We acknowledge that generative AI is one in a line of several technologies that have disrupted the classroom experience for students in higher education. Past disruptive technologies include calculators, email, Google search, statistical packages, etc. Next, we will take this same assumption and apply it to the research process. Specifically, we will provide guidance for researchers on incorporating generative AI into their research work practices using a simple example.

4.2.1.2. Research in the era of generative AI. As noted, ChatGPT is not the first AI tool to change research practices. Grammarly (www.grammarly.com) is an example of a popular AI tool used to improve academic writing. rTutor.ai (www.rtutor.ai) is an AI chatbot that can generate R code for statistical analysis. Also, Research Rabbit (www.researchrabbit.ai) is an AI tool used to produce literature reviews. Similar to the concerns regarding classroom integrity, researchers have also raised significant concerns regarding content created by ChatGPT (Else, 2023). Our colleagues in finance have provided evidence that ChatGPT can provide significant help in generating high-quality manuscripts (Lucey & Dowling, 2023). Questions have surfaced on attribution and ownership of the text generated by ChatGPT within research manuscripts (Thorp, 2023). At this time, our perspective is that generative AI can offer an acceleration to research similar to computer-based statistics packages, and even Internet search engines. Computer-based packages
allowed researchers to run analyses on data that would have been difficult or even impossible by hand. The Internet has allowed researchers to access material in real time. Modern research practices have benefited greatly in the speed and quality of research using these tools (Schucany et al., 1972).

The practice of developing research has been well documented by our discipline. We rely on the guidance provided by Watson and Webster (2020) and Webster and Watson (2002) to examine how generative AI may be utilised to accelerate the development of a literature review and research question. Developing a literature review is a good candidate for evaluation as it is typically the preliminary step in the research process.

"We contend that a literature review also requires a higher-level synthesis. It typically needs to integrate concepts across domains into a holistic treatment of a subject. The author has to identify what ideas matter and then find a way to coherently link them into a stream that has a clear and relevant expository flow for the intended reader." (Watson and Webster, 2020, pg. 2).

Watson and Webster contend that the literature review process starts with element mapping, which surfaces the core relationships between concepts and processes. The element mapping culminates in the coding of each relevant publication. This task is well-suited for generative AI, which can develop an element map similar to the ontological map offered by TheoryOn (Li et al., 2020a, 2020b). The generative aspect differs from previous tools as it allows the researcher to reconceptualise the element maps based on their expertise and the insights garnered during the process. Further, generative AI can offer new mappings that are created because it can examine the corpus of the literature, unlike manual database searches.

Watson and Webster state that the goal of the literature review is to provide a meta-analysis and that "the fundamental problem is that knowledge is not encoded, and scholars must rely on the methods of their forebears (reading or scanning many papers) to take a step forward." (Watson and Webster, 2020, pg. 9). Knowledge is now being encoded at a scale never seen before. Because of this process, generative AI can now identify related concepts iteratively with the researcher, thus accelerating the research process.

Developing a literature review is one of many steps along the research process that will benefit greatly from generative AI. Analysis of the data, including surfacing second-order patterns, will certainly also benefit from this technology. The formulation of the discussion and conclusion is also ripe for AI utilisation.
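The element-mapping step described above lends itself to a simple illustration. The sketch below is a hypothetical helper, not the authors' procedure or TheoryOn's implementation: it assumes the openai Python package as of early 2023 and asks the model to code one publication's abstract into construct-relationship triples, which a researcher would then verify and reconceptualise.

```python
import json

import openai  # assumes the openai package as of early 2023; set openai.api_key before calling

INSTRUCTION = (
    "Extract the core constructs and the relationships between them from the abstract below. "
    "Reply only with a JSON list of objects with the keys 'from', 'relation' and 'to'."
)

def element_map(abstract: str):
    """Hypothetical helper: ask the model for construct-relationship triples for one publication."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[{"role": "user", "content": INSTRUCTION + "\n\nAbstract: " + abstract}],
    )
    text = response["choices"][0]["message"]["content"]
    return json.loads(text)  # models do not always return valid JSON; real use needs validation

abstract = ("Perceived usefulness and perceived ease of use jointly shape behavioural "
            "intention to use a system, and intention predicts actual use.")
print(element_map(abstract))
```

Iterating this over a corpus of abstracts and merging the triples is the kind of researcher-in-the-loop acceleration the passage envisages; the output is a starting map to be checked against the papers, not a finished synthesis.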
4.2.1.3. What's next? Because the tools are evolving quickly, funded by well-resourced companies such as Google and Microsoft, their capabilities will continue to grow. The ethical guidelines for universities
and journals also need to evolve along with the application of generative AI. Some journals have already provided guidance to their authors. For example, in response to authors listing ChatGPT as a contributing author (Kung et al., 2022), Nature has developed guiding principles to help authors with the use and attribution of generative AI text (Nature, 2023). In sum, generative AI does not qualify for authorship, but the use of the technology should be documented in the methods section. Other than guidance about attribution, most journals have remained silent. One could imagine in the short term that all journals and conferences will provide guidance to authors on the correct (and incorrect) use of generative AI tools. This is inevitable. Using these tools will allow academics to focus more on the intellectual pursuits that need higher-order skills.

To conclude, we contend that academics must prepare for this new reality using novel classroom experiences and research experimentation, and IS scholars have an important role. Our work in this regard is critical because it can provide leadership for education and research across academic disciplines.

4.2.2. Contribution 17: ChatGPT: the new wave of AI tools in higher education – Siobhan O'Connor
The advent of AI has led to the rapid development of automated software tools such as chatbots. Although these originated in the 1960s with computer programs such as ELIZA (Weizenbaum, 1966), modern-day chatbots leverage NLP and ML techniques to analyse and understand text and audio data, enabling them to interact with and respond more quickly and accurately to virtual conversations with human users (Brachten et al., 2021). AI chatbots can appear in many forms such as pop-up virtual assistants on websites, integrated into mobile applications via SMS, or as standalone audio-based devices that can be used at home. They are becoming popular in a range of sectors including higher education, where they are being developed and deployed to answer queries about the availability and accessibility of educational programmes and university services, to help students navigate e-learning resources, and to increase engagement with curricula and provide instant feedback, with a view to improving learning outcomes and the student experience (Okonkwo & Ade-Ibijola, 2021). If well designed and integrated with existing computer systems, AI chatbots may also reduce the workload of educators, administrators, and university management by supporting students, which would enable them to focus more on pedagogical research, curriculum development, and innovation in higher education. Examples of commercially available chatbots used in some universities in the United States and United Kingdom include IBM's Watson (IBM, 2023) and Amazon's QnABot (Strahan & Gleason,
2019), while others are under development (EDUBOTS, 2022).

A new AI chatbot called ChatGPT, based on a large language model, was launched by a commercial company, OpenAI, in November 2022 as a follow-up to a previously developed chatbot (OpenAI, 2023). It utilises reinforcement learning algorithms to optimise the language model with human feedback, having been originally trained using human AI trainers who generated a range of dialogue that was fine-tuned using supervised learning and ranked to develop a reward model. Reports indicate that the chatbot was trained on a large dataset (40 GB in size) from text on the Internet, equivalent to 133 million pages of text which would take a human being approximately 2200 years to read (Welsh, 2023). Although the inner workings of ChatGPT are not fully transparent, OpenAI do acknowledge some of the limitations of the chatbot, and others are actively interacting and experimenting with it to identify more (Else, 2023). The implications of ChatGPT and other types of open AI platforms in higher education are hotly debated. Some argue they could be used to assimilate knowledge quickly, which might benefit learning, but these platforms could also be used by students to write assessments and dissertations, which raises plagiarism and academic integrity issues (Stokel-Walker, 2022). There could also be long-term repercussions if students choose to by-pass reading about a topic in depth and critically analysing different facets of it, using ChatGPT for rapid and potentially superficial learning. This could stifle critical thinking, problem solving, and creativity, which are key skills to develop as they are needed in many professional careers (O'Connor, 2023).
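The "ranked to develop a reward model" step mentioned above can be illustrated with a deliberately small toy. The sketch below is not OpenAI's pipeline: it assumes only numpy, represents each candidate response by a synthetic feature vector rather than a neural network, and fits a linear reward function to simulated human rankings via the standard Bradley-Terry preference likelihood.

```python
import numpy as np

# Toy reward-model fit from ranked pairs (illustrative only; real RLHF trains a neural reward model).
rng = np.random.default_rng(0)
dim = 4
true_w = np.array([1.5, -0.5, 1.0, 0.2])      # hidden weights used only to simulate labeller preferences

pairs = []                                     # list of (preferred, rejected) response feature vectors
for _ in range(200):
    a, b = rng.normal(size=dim), rng.normal(size=dim)
    p_a_preferred = 1 / (1 + np.exp(-(true_w @ a - true_w @ b)))
    pairs.append((a, b) if rng.random() < p_a_preferred else (b, a))

w = np.zeros(dim)                              # learned reward weights
for _ in range(500):
    grad = np.zeros(dim)
    for preferred, rejected in pairs:
        p = 1 / (1 + np.exp(-(w @ preferred - w @ rejected)))
        grad += (1 - p) * (preferred - rejected)   # gradient of the Bradley-Terry log-likelihood
    w += 0.1 * grad / len(pairs)

print("learned reward direction:", np.round(w / np.linalg.norm(w), 2))
print("true reward direction:   ", np.round(true_w / np.linalg.norm(true_w), 2))
```

In the full pipeline this learned reward signal is then used to fine-tune the dialogue model with reinforcement learning, which is the part the toy leaves out.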
Similarly, challenges with ChatGPT exist in relation to the conduct and reporting of pedagogical research. These types of AI tools could be utilised throughout the research process to pose hypotheses, design experiments, write manuscripts, and understand scientific results (Thorp, 2023), which could be beneficial in resource-limited settings where funding and human expertise may be limited. Although this could accelerate educational research by letting automated tools perform many routine research tasks, the responses generated by this chatbot, while well written, are not guaranteed to be accurate, with some who have used the platform identifying problems with incomplete citations or citations to research studies that do not exist (Kendrick, 2023), highlighting some of the shortcomings of the large language model. While these may be solved in time as the AI model learns more about the nuances of human language and the sources of information on the Internet, the use of chatbots like ChatGPT in educational research poses other ethical issues. Trust and transparency could be challenging if chatbots are used to write scientific research studies (Lucey & Dowling, 2023), privacy and security may be another issue depending on the types of data entered into an open AI platform, as can the design and "persona" of the virtual conversational agent itself, as it could unwittingly encourage inappropriate behaviour when interacting with researchers (Okonkwo & Ade-Ibijola, 2021). An example of this can be seen in Microsoft's Tay bot, which began to post offensive content on social media when released on Twitter in 2016 (Reese, 2016).

Despite the complexities that ChatGPT presents in higher education, chatbots and other AI tools are becoming more widespread and established, requiring university educators, researchers, and management to adapt to this rapidly changing digital environment. Some solutions are already emerging, with global education software providers such as Turnitin (2023) and Cadmus (2023) developing AI detection capabilities in their existing electronic platforms to help identify content generated by AI tools. However, academic integrity has always been an issue in higher education. Therefore, educating students about the benefits, limitations, and risks of chatbots and the value of developing a range of knowledge and skills is of utmost importance. More diverse forms of assessment may also be necessary to consider, which could include the appropriate use of chatbots, as these tools are likely to be utilised in the future workplace, which could enhance students' employability prospects (Centre for Teaching and Learning, 2023). As for pedagogical research, human creativity is needed now more than ever to explore how teaching, learning, and assessment can be enhanced by using ChatGPT and how best to employ AI tools in all aspects of the research process. A human-AI collaboration in higher education is now a foreseeable future. Hence, more funding along with policy changes are also needed to ensure we can develop, test, and apply these sophisticated computational tools to further student learning.
4.2.3. Contribution 18: ChatGPT: the digital transformation of academia is underway - Sven Laumer
At the International Conference on Information Systems in Copenhagen I started to engage in discussions with colleagues about the capabilities of ChatGPT and its impact on academia. Shortly before, I had already utilised it for various purposes, such as composing a poem for my digital transformation students and referencing it in my knowledge management class. Upon returning home from the conference, I took advantage of the holiday season to experiment with the tool. As a result, I was able to use ChatGPT to draft a brief paper on the responsibilities of IT professionals in promoting sustainability. Based on that experience and my discussions, I wanted to write my comment for this editorial. However, I always had to postpone it because some new examples of the power of ChatGPT emerged or new posts on why ChatGPT is the end of humankind, or at least academia, were published.

From an academic viewpoint, it has been demonstrated, among others, that ChatGPT can pass MBA exams at a business school and write or debug code, indicating the end of traditional programming methods. It is also expected to revolutionise the way we search for information and produce academic texts that were previously written by students or academics (Else, 2023).

With ChatGPT having demonstrated its ability to write text, making it a useful tool for students and academics, a debate has begun in academia about whether AI, specifically ChatGPT, should be banned from academic use. Some universities have implemented policies prohibiting AI in college essays and requiring students to declare that they have not used AI in their theses. Others have emphasised the opportunities for schools and universities. To gain a deeper understanding of this debate, I collaborated with one of my Ph.D. students, Sebastian Schötteler, to conduct a sentiment and topic analysis of Twitter tweets about ChatGPT in academia (search query: '("chatgpt" AND "academia") OR ("chatgpt" AND "publications") OR ("chatgpt" AND "research") OR ("chatgpt" AND "plagiarism") since:2022-01-19 until:2023-01-19'; replies and retweets were excluded). Our analysis revealed 1490 tweets with a negative sentiment, indicating a high level of user resistance. The topics covered included plagiarism by students and academics, fake text and studies, fake content, cheating, and legal implications.

The current debate around ChatGPT in academia is reminiscent of the reactions that have been studied for years under the term "user resistance". This type of research has been conducted to better understand why individuals are hesitant towards technology and to help organisations promote its adoption. It has been concluded that user resistance is largely driven by the changes that new IT introduces to individuals' work systems (Laumer et al., 2016). The debate surrounding ChatGPT in academia follows a similar pattern, as it is triggering a transformation in academic work systems, much like other technologies have done in other fields that have been studied for user resistance (e.g., taxi drivers' resistance to UBER).

Despite the current resistance to ChatGPT in academia, it is crucial that we acknowledge its increasing availability and use by individuals. The digital transformation of academia has already begun, and as history has shown us, resistance to change is common in any industry undergoing a technological shift. However, I believe it is essential for us to embrace this change and use ChatGPT to stimulate a discussion about the fundamental principles of academia. Our community has previously advised and researched other industries during their digital transformations, emphasising the importance of staying competitive and focusing on core competencies. Now, it is our turn to do the same. By taking advantage of ChatGPT, we can reevaluate our own practices and their impacts on society.

My experience with ChatGPT while writing a short paper has taught me a valuable lesson. The tool demonstrated better English writing skills than I possess as a non-native speaker. It effectively brought together arguments to discuss the role of IT professionals in promoting sustainability. However, it was only able to write the text based on text it was trained on and did not have the capability to conduct research or perform a literature analysis. Nevertheless, I would expect that this will be one of the first capabilities of similar tools in the future.

This experience highlights several important aspects. Firstly, it requires human intelligence to formulate interesting questions. Secondly, ChatGPT can only provide answers based on previously documented knowledge used for its training. Thirdly, it requires human intelligence to generate the knowledge that can be used to train ChatGPT. And finally, conducting research still requires human intelligence, as ChatGPT is not capable of doing so yet.

This shift in focus from text writing to doing research highlights the evolution of academic work. It's important to note that writing text may no longer be a crucial component of scholarly work, as this task can be supported more efficiently by tools like ChatGPT. Text serves as a means to communicate the results of our research in a clear and accessible manner. Many scholars, including myself, have already used tools like Grammarly, spell checks, and professional proofreading to enhance their writing as English is not their native language. With the advent of ChatGPT, research and its dissemination can now transcend language barriers. Superior writing skills in English are no longer the determining factor in reaching a broad audience with our research findings.

ChatGPT allows us to focus on what truly matters in academia - asking thought-provoking questions and conducting research to find answers. The emphasis shifts from the writing that summarises our findings to the findings themselves. This shift represents a disruptive transformation in academia, as text has been a significant medium for scholarly work for many decades.

This triggers my concerns regarding the debate surrounding AI ban policies in academic institutions and journals, particularly with regard to college essays and essay writing in general. I do not comment on the consequences when considering AI as a research method (e.g., pattern recognition in data). I hold a similar viewpoint as I have stated previously regarding the use of AI in research. Writing text may not be the most essential skill that we should prioritise in evaluating our students' intellectual abilities. The use of spell and grammar correction tools in word processing software, such as Microsoft Word, is already widely accepted. Banning AI in essay writing would also disallow tools that help people improve their writing, such as Grammarly or DeepL. Interestingly, the ACM SIGCHI has included a Grammarly license in its membership benefits, encouraging researchers to utilise AI in their writing. This highlights the inconsistent approach academic institutions take towards the use of AI. When it comes to college essays, it's more crucial that we teach our students to ask important questions and find ways to answer them. This is the intellectual core that will benefit both the students and society. Therefore, we should place a greater emphasis on teaching critical thinking skills and how to add value beyond AI. It's also important to educate our students on the impact of tools like ChatGPT and similar algorithms on business and society, so they are prepared to live and work in a world where algorithms are increasingly present. This shift requires a significant transformation in our teaching methods as well.

In conclusion, over the past two months, I have devoted a significant amount of time to studying and reflecting on the impact of ChatGPT on academia. The resistance to this technology that I've observed aligns with the user resistance to IT that has been documented for decades, and it is a clear indication that a digital transformation of academia is underway. This transformation will bring a shift in the significance of text as a medium in research and education, putting the focus back on the core elements of academia: the ability to ask meaningful questions and find answers in a world dominated by algorithms.

PS: A first draft of this text was written by myself; I used Sebastian's input to revise it and finally ChatGPT to optimise the English itself. The thoughts discussed are based on human intelligence, but the text itself is co-authored by ChatGPT.
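The contribution above does not report the tooling behind the tweet collection and sentiment scoring. The sketch below is one plausible way to approximate it, assuming the community snscrape package (Twitter scraping interface as of early 2023) and a default Hugging Face sentiment-analysis pipeline; the reply/retweet exclusion and the topic analysis the authors mention are not reproduced here.

```python
from itertools import islice

import snscrape.modules.twitter as sntwitter   # community scraper; interface as of early 2023
from transformers import pipeline              # default English sentiment classifier

# Search query reproduced from the contribution (dates written with plain hyphens).
QUERY = ('("chatgpt" AND "academia") OR ("chatgpt" AND "publications") OR '
         '("chatgpt" AND "research") OR ("chatgpt" AND "plagiarism") '
         'since:2022-01-19 until:2023-01-19')

sentiment = pipeline("sentiment-analysis")

# Cap the sample for the sketch; the authors analysed the full result set.
tweets = [t.content for t in islice(sntwitter.TwitterSearchScraper(QUERY).get_items(), 500)]
labels = [result["label"] for result in sentiment(tweets, truncation=True)]

print("negative:", labels.count("NEGATIVE"), "positive:", labels.count("POSITIVE"))
```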
4.2.4. Contribution 19: how to enhance critical thinking of students, educators and researchers in the ChatGPT era - Giampaolo Viglia
In my career, I have always encouraged critical thinking at university. In fact, memorization is often short-lived and less useful in real life (Aldhafri et al., 2015). For this very reason, the presence of essays allows students to express their ideas and form their opinions in a thorough way.

The advent of ChatGPT - if used in a compulsive way - poses a threat both for students and for teachers. For students, who are already suffering from a lower attention span (Trinidad, 2020) and a significant reduction in book reading intake (O'Connor, 2021), the risk is going into a lethargic mode. For teachers, the ability to think critically is a prerequisite for teaching critical thinking. Only by being very prepared on the topic, with the right training, might teachers be able to disentangle the work of a student from the work of an AI bot.

There is a plethora of work showing how students cheat (for a review, see Zhao et al., 2022). However, in my work as an Editor, I have noticed that authors – who also serve as instructors at their own institution – often show similar malicious behaviour, i.e., plagiarism or milder forms, such as paraphrasing entire paragraphs. Additionally, despite being strict with students when it comes to deadlines, they often come up themselves with unreasonable requests for extensions when it comes to submitting a revised manuscript.

For the reasons above, students, educators and researchers should realise that we are in this quantum-leap technological change together. It is not time to increase rules and enforcements. It is time to use this advancement to facilitate learning and knowledge, thus stressing the value of independent thinking. I venture to suggest that independent thinking is what makes us better humans. The race is long, and the ultimate goal is not coming up with a better grade in an essay but improving our own wellbeing as people. If ChatGPT does everything or many things for students and professors, it may also kill creativity and critical thinking.

4.2.5. Contribution 20: ChatGPT and education – is a symbiotic relationship possible? - Ramakrishnan Raman, Gareth H. Davies and Abbas Mardani
ChatGPT (Generative Pre-trained Transformer) is trained on a massive amount of data and can be fine-tuned for various NLP tasks, including language translation, question answering, and text summarization. Deep learning methods are used by OpenAI's ChatGPT, a large language model that produces text that resembles human speech. The Transformer architecture, which was introduced in the paper "Attention Is All You Need" by Vaswani et al. (2017), serves as the model's foundation. The model is able to comprehend the context and meaning of the text better thanks to this architecture's use of self-attention processes to weigh the significance of various words in a phrase. The model can be fine-tuned for various NLP tasks after being pre-trained on a vast amount of text data. The pre-training enables the model to pick up on common language characteristics and patterns, enhancing its ability to handle new tasks with less fine-tuning.
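The self-attention operation described above can be shown in a few lines. The sketch below is a minimal, single-head illustration using numpy with random toy embeddings; ChatGPT's actual implementation stacks many multi-head attention layers with learned parameters at a vastly larger scale.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head scaled dot-product attention (Vaswani et al., 2017), for illustration only."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # how strongly each word attends to every word
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over positions in the phrase
    return weights @ V, weights

rng = np.random.default_rng(42)
X = rng.normal(size=(4, 8))                                    # a toy 4-word "phrase", 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))       # toy projection matrices
context, attention = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)

print(np.round(attention, 2))   # each row sums to 1: the relative significance assigned to each word
```

Each row of the printed matrix is the weighting one word places on all the words in the phrase, which is precisely the weighing of the significance of various words that the paragraph refers to.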
There are several studies and papers that have investigated the use of AI models in providing personalised support to students, automating administrative tasks, and supporting language learners. One area where AI has been applied in education is in providing personalised feedback for writing assignments. For example, a study by Chen et al. (2018) used a neural network model to analyse student essays and provide feedback on grammar and organisation. In terms of automating administrative tasks, AI models have been used to grade assessments, such as multiple-choice tests. For example, a study by Attali and McNamara (2006) used AI to grade essays and found that the model performed as well as human graders. Another study by Burstein (2003) used a ML algorithm to grade short answers and found that it had a high level of agreement with human graders. AI models have also been used to support language learners, particularly in the context of machine translation and NLP. For example, a study by Huang et al. (2018) used neural machine translation to support English language learners in a classroom setting. Another study by Kim et al. (2019) used ML algorithms to provide personalised feedback on grammar and vocabulary for second language learners. In general, these studies and others like them suggest that AI models have the potential to provide personalised support, automate administrative tasks and support language learners effectively. It is important to note that the effectiveness of these models may depend on the specific task, the dataset and the quality of the training data.

ChatGPT has been used for a variety of NLP tasks, including language translation, question answering, and text summarization. It has also been used in the field of education to support and enhance the learning experience for students. ChatGPT can be used to provide personalised support to students, automate administrative tasks, support language learners, and enhance online education.

However, it is important to remember that the use of ChatGPT and other AI models in education should be done with caution. The ethical and societal implications of the use of such automated tools must be considered before accepting the tools as an assistant for the student in their learning process. There is no transparency in how the model functions to generate its results. Hence it has to be considered as a black-box AI tool which gives some information for questions that are posed to it. There is no assurance of the correctness of the information given by ChatGPT. Hence, allowing students to use it for learning without a proper statutory warning might cause more harm than good.

4.2.5.1. Opportunities. ChatGPT can be utilised in the educational setting to help and improve students' learning opportunities. There are some potential opportunities for using ChatGPT in education. Some of these include:

Basic educational material: ChatGPT can be used to provide basic educational materials, which would otherwise be created by searching the internet.

Personalised feedback: ChatGPT can be used to provide personalised feedback on writing assignments, such as essays and research papers. The model can analyse student writing and provide feedback on grammar, organisation, and content.

Automating administrative tasks: ChatGPT can be used to automate administrative tasks such as grading assessments and answering frequently asked basic questions. It can help to free up teachers' time to focus on other aspects of teaching and research.

Language learning support: ChatGPT can be used to support language learners by providing personalised feedback on grammar and vocabulary, and by assisting with language translation in a classroom setting. It can support language learners by giving them extra practice and feedback on their language abilities.

Enhancing online education: ChatGPT can be used to enhance online education. It can be used to improve online learning by giving students more tools and resources, as well as by making the learning experiences more interesting and participatory.

Individualised support: ChatGPT can be used to provide one-on-one tutoring for students, by answering questions and providing explanations on various subjects. It may determine the student's comprehension level and offer explanations and tasks that are suitable for them.

The use of AI and NLP models like ChatGPT in education is still a growing field, and there might be other opportunities to be discovered as the technology and the understanding of its capabilities evolve.

4.2.5.2. Challenges. Despite the potential benefits, there are concerns and challenges that need to be addressed when using ChatGPT in education. One of the main challenges is ensuring that the model is providing accurate and appropriate information to students. Additionally, there are concerns about the ethical and societal implications of
using AI in education, such as the impact on jobs, privacy, and bias. The challenges for ChatGPT in education include:

Data quality and bias: ChatGPT is trained on a large dataset of text, and any biases or inaccuracies in the data can be reflected in the model's output. For the model to provide accurate and unbiased feedback, it is imperative to use a high-quality dataset that is representative of the questions being posed to it.

Interpreting and understanding the model's output: ChatGPT's output is generated based on patterns in the data it was trained on, and it is a challenge to understand how the model arrived at its output. There is no transparency in the mechanism involved in the generation of results. This can make it challenging to understand the model's feedback and decide if it is accurate and relevant.

Privacy and security: ChatGPT has access to sensitive student data, such as writing assignments, and it is important to ensure that this data is protected and used in compliance with applicable privacy laws. Another issue is that ChatGPT could be used to cheat on exams or assignments, either by providing answers to test questions or by generating plagiarised content. Additionally, the model could be used to impersonate students or teachers, or to spread misinformation or disinformation.

Limited explanation capability: ChatGPT can generate human-like text, but it does not have the understanding or the reasoning behind its output; this can lead to difficulties in understanding the reasoning behind the generated text and the level of confidence of the model in its output.

Human-computer interaction: ChatGPT is not a human; it may not be able to understand the nuances or the context of certain situations, which can lead to difficulties in understanding the student's problems and providing relevant solutions.

Ethical concerns: The use of AI models in education raises ethical concerns, such as the possibility of replacing human teachers and the potential impact on marginalised groups. It is important to consider these concerns and to develop appropriate guidelines and policies for the use of ChatGPT in education.

Overall, ChatGPT has the potential to support and enhance education in various ways, but it is important to consider the ethical and societal implications of using AI in education and to ensure that the model is providing accurate and appropriate information to students.

4.2.5.3. Research questions.

1. How can ChatGPT be used to improve student engagement in online and offline learning environments?
2. What are the potential benefits and drawbacks of using ChatGPT for automated grading?
3. How can ChatGPT be used to support students with diverse learning needs, such as specific language learners or students with disabilities?
4. How can the personalised learning support and individualised feedback given to students by ChatGPT be evaluated?
5. How does the performance of ChatGPT compare to human teachers in providing explanations and answering questions on educational content?
6. What are the ethical considerations and potential biases in using ChatGPT in education?
7. How can ChatGPT be used to create interactive educational games and simulations?
8. How can ChatGPT be used to support collaborative learning and peer-to-peer interactions among students and teachers?
9. What are the implications of using ChatGPT for teaching and assessment in higher education?
10. How can the performance and effectiveness of ChatGPT be evaluated and improved in educational settings?
11. What policy measures have to be included in higher education to ensure the ethical use of ChatGPT?
12. How can ChatGPT be integrated into existing curricula to improve student engagement and motivation?
13. Can ChatGPT be used to personalise learning for students with different learning styles?
14. How does the use of ChatGPT in education compare to traditional teaching methods in terms of effectiveness and efficiency?
15. Can ChatGPT be used to assist students in self-directed learning and research?

4.2.6. Contribution 21

4.2.6.1. ChatGPT in the context of education: how to deal with its disruptive effects? - Yves Barlette. OpenAI's ChatGPT was released in November 2022 and caused a lot of buzz for its ground-breaking approach to AI-generated content, managing to produce complex original texts according to the user's question. Concerns were immediately raised about the impact on education. Our students were already used to relying on AI tools to help them in their research or even to cheat on their assessments, homework, essays, master's theses, etc. Now, with a much more powerful tool, we can expect increased use of AI-generated texts. ChatGPT has received intense media attention, and even if some students are still unfamiliar with how to adopt ChatGPT, Microsoft's plans to integrate it into Word, PowerPoint, and Outlook will certainly boost its use. In addition, other competing solutions exist, and others are being created, such as Google Sparrow, illustrating the strategic importance of such tools for software companies. As a result, faculties are currently facing a tidal wave and are unprepared to deal with its impacts, both in terms of pedagogy and internal policies. Considering this context, we ask: what are the current impacts of tools like ChatGPT in the context of education, and what can be done to address the use of such tools? Also, should we fight ChatGPT, or should we capitalise on it?

To answer these questions, we first examine the possibilities and weaknesses associated with ChatGPT. Then we determine the possibilities of detecting and monitoring its use to prevent fraudulent authorship. Finally, we examine a few possibilities to capitalise on the use of ChatGPT and identify avenues for future research.

4.2.6.2. Possibilities and weaknesses of ChatGPT. ChatGPT can create essays, arguments, and outlines based on variables defined by the user (e.g., text length, specific topics or scenarios, etc.). For example, ChatGPT can write college admission essays in less than 10 min (Whitford, 2022), and Terwiesch (2023) showed it could even achieve the required coursework for being awarded a Wharton MBA. It can also summarise concepts, historical events, and pieces of text. Moreover, it can add its own details and embellishments, add transitional phrases, fix grammatical errors, and propose higher-level vocabulary and quotes or facts to back up claims. In terms of pedagogy, ChatGPT can provide personalised assistance or tutoring for students who are struggling with some concepts. It can also help teachers create questions for
assessments, and based on objectives and other parameters, it can assist with building lesson plans or developing curriculum.

However, we must keep in mind that ChatGPT is "not an entity at all, but rather a complex algorithm generating meaningful sentences" (Mollick, 2022, p.5). As a result, this tool suffers from several limitations; the first one is that ChatGPT has learned everything that is in its database before 2021 and cannot, for the time being, search the internet to fill in information gaps. While ChatGPT's writing is clean, grammatically correct and well structured, it is often too vague and flat, and texts lack heart and soul (Whitford, 2022). In addition, several students with the same inquiries may obtain very similar results. More worryingly, AI is very good at creating "convincing-sounding nonsense, devoid of truth" (Mollick, 2022, p.5). For example, ChatGPT struggles with confusingly worded questions, which can lead to incorrect answers. This kind of conversational agent (CA) offers minimum transparency and often operates as a "black box" because it is not able to reveal its underlying decision-making processes (Chandra et al., 2022). Hence, ChatGPT does not explain what it does or how it does it, making the final results inexplicable, and does not list its sources (Bass, 2022) or provide references that support its assertions. For example, language models such as ChatGPT have learnt that humans often support claims with a quote, and the software mimics this behaviour but lacks the benefit of human understanding of ethics and attribution (Bass, 2022). As a result, quotes may be assigned to the wrong author or may even be created from scratch by the AI itself. Therefore, we can question the reliability and credibility of the produced text.

4.2.6.3. Detecting and monitoring the use of ChatGPT and punishing fraudulent use. Being able to identify a work based on or fully created by ChatGPT is important for several reasons. The first one is that such usage can be detrimental to students. It can endanger students' willingness to develop skills like writing and researching, and, above all, a blind usage of ChatGPT does not build critical thinking and problem-solving skills, which are essential for academic and lifelong success.

While ChatGPT can help a student create texts, it can become a fraud when a text is primarily created by ChatGPT and presented as the student's work. However, institutions are not prepared (1) to detect and (2) to punish fraud with ChatGPT. There are no legally valid detection solutions available yet, and for some institutions, internal rules do not take into account fraud based on AI-generated content. Plagiarism-detection tools are becoming obsolete, as AI-generated content is original and therefore cannot be detected as plagiarism. Plagiarism-detection companies are struggling to address this shortcoming, which is a source of inequity between students who are penalised for plagiarism and those who remain undetected. Other actors, such as Microsoft or Google, and even OpenAI, the creator of ChatGPT, are working on ways to identify texts generated by the bot. Recently, a student at Princeton University built an app, GPTZero, to detect such content, based essentially on the degree of randomness of the text and its burstiness, to gauge whether the writing is complex and varied, as human writing can be (Svrluga, 2023). However, there is a risk of false negatives and false positives; for example, a student may have a particular writing style that resembles AI-generated text. It is therefore important to find legally acceptable solutions, especially when it comes to punishing or even expelling students who cheat. In addition, internal rules should emphasise that any form of cheating related to content generated by AIs or chatbots is unethical and specify the situations leading to punishment. Some institutions have already reacted; for example, New York City schools banned access to ChatGPT (Rosenzweig-Ziff, 2023).
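The statistical signals that tools such as GPTZero rely on — the predictability ("randomness") of a text and its burstiness — can be made concrete with a small experiment. The sketch below is illustrative only: it assumes the Hugging Face transformers and torch packages and uses the small public gpt2 model as a stand-in scorer (GPTZero's actual models and thresholds are not public). It computes a perplexity score (how predictable a text is to the model) and a simple burstiness measure (how much that predictability varies across sentences).

```python
# Minimal sketch of perplexity- and burstiness-style scoring, assuming the
# `transformers` and `torch` packages and the public GPT-2 model as a
# stand-in scorer; GPTZero's own models and thresholds are not public.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Lower perplexity = the text is more predictable to the model,
    # which is a weak signal (not proof) of machine-generated prose.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

def burstiness(sentences: list[str]) -> float:
    # Spread of per-sentence perplexity. Human writing tends to mix short,
    # simple sentences with long, complex ones; AI text is often more uniform.
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
```

Because both scores depend heavily on genre, language, and the scoring model, any institutional use of such signals would need calibrated thresholds and human review, which is exactly why the false-positive risk noted above remains.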
4.2.6.4. Prohibit or take advantage of ChatGPT?. The first precaution to protect education from ChatGPT could be to ban it and favour handwritten work instead, which is important for students to acquire writing skills. It is also possible to have students work on computers that are not connected to the Internet, even through a smartphone. However, it seems very difficult to ban the use of ChatGPT, since students can use devices both inside and outside the classrooms. Some experts argue that restricting the technology is short-sighted because students will find ways to use the bot regardless of whether it continues to grow in popularity (Rosenzweig-Ziff, 2023). Moreover, chatbots and CAs such as ChatGPT have the potential to create inclusive learning environments for impaired students (such as those facing a lack of resources, disabilities, or learning disorders), for those with diverse living environments (commuters, family responsibilities, or non-native), and for those with diverse learning styles (visual, kinaesthetic, or auditory learning) (Gupta & Chen, 2022). It therefore seems more reasonable to adopt the technology, albeit with some adaptations: "If the things that we used to put so much effort into in teaching can be automated, then maybe we should rethink what the actual goals and experiences are that we should work toward in the classroom" (Rosenzweig-Ziff, 2023).

Here are some ideas on how to tackle cheating and capitalise on ChatGPT. First, teachers could ask students to write a traditional essay, then have a brief in-class oral interview with each student asking them to explain the content of the essay, without them seeing it. However, even if oral exam answers cannot be generated with AI, oral exams can be very time consuming when teachers are managing classes that contain large numbers of students. Hence, teachers could refine their essay questions to require more complex thought, draw on local and current information that would not be widely available (Svrluga, 2023), require students to make connections between the knowledge and their own experiences, or apply a concept to a unique situation in their writing. Another possibility is to add a self-assessment step to the writing process where students reflect on and evaluate their own performance. In addition to content, alternative formats encourage a more authentic assessment: podcast production, peer feedback, debates and interviews are effective alternatives to traditional essays. Asking students to rate and comment on ChatGPT's answer to a question, or to compare a ChatGPT-generated research paper with the original version, is also a good way to develop critical thinking skills. ChatGPT can be considered an excellent source of ideas and inspiration to give students a place to start. It can also make information available at students' fingertips, encouraging them to conduct research and double-check their facts. Finally, ChatGPT offers the possibility of human-machine hybrid work. Instead of prompting an AI and hoping for a good result, humans can now guide AIs and correct mistakes. Hence, new kinds of collaboration become possible (Mollick, 2022). To conclude with the advantages of using ChatGPT, teaching students to work with AI and chatbots will prepare them for their future. Teachers can show them appropriate use and ways to leverage the tools for better outcomes.

4.2.6.5. Research agenda. At the school level, faculty members could openly update their curricula to accommodate such technologies in the classroom and use ChatGPT as a co-teacher. However, many schools and curricula are structured so that teachers do not know how to introduce these advanced and innovative technologies; hence future studies could find ways or even processes to facilitate this introduction. To address one of the limitations discussed at the beginning of our work, Chandra et al. (2022) highlight the need to build conversational agents (CAs) with not only an "artificial brain" but also an "artificial heart". People perceive a chatbot with higher conversational skills to be both more humanlike and more engaging than a less skilled one (Schuetzler et al., 2020). However, Seeger et al. (2021) showed that designing more anthropomorphic CAs is not trivial. Diederich et al. (2022) conducted a literature review to analyse CAs and propose a research agenda to move CA research forward regarding user interaction, context, agent design, as well as CA perceptions and outcomes.

4.2.6.6. Conclusion. The development of AI and automation will threaten an increasing number of jobs (Agrawal et al., 2022; Dwivedi et al., 2021b) and rather than fighting them, the question is rather how
we will adapt to these disruptive changes: "The technology is developing very rapidly, possibly exponentially. But people are linear. When linear people are faced with exponential change, they're not going to be able to adapt to that very easily", Kahneman says (as cited in Adams, 2021). We have examined the great impacts of tools like ChatGPT in the context of education and identified some ways to address these implications and capitalise on these tools. Most importantly, we need to teach students how to work with AI and chatbots and prepare them for their future.

4.2.7. Contribution 22

4.2.7.1. The use of ChatGPT in educational institutions: challenges, opportunities and future scope – Nripendra P Rana, Jeretta Horn Nord, Hanaa Albanna and Carlos Flavian. OpenAI's ChatGPT (Chat Generative Pre-Trained Transformer) has gained an impressive amount of attention and wide discussion across both academia and industry since its launch on November 30, 2022 (Guo et al., 2023). It is a transformative technology as it is trained to learn what humans mean when they pose a question to it. Many users are awed by its astounding human-like ability to chat, answer questions, produce content, compose essays, create AI art prompts, explain art in great detail, script code and debug, take tests, manipulate data, and explain and instruct. This disruptive technology may very well change the way individuals interact with computers as well as their method for information retrieval (Montti, 2022; Ortiz, 2023a, 2023b). In other words, it is a revolutionary AI chatbot technology that makes use of ML and NLP techniques to allow users to chat with a virtual assistant. It can provide a response for almost all tasks using written text. The usage of this application is currently open to the public for free as it is under the research and feedback-gathering stage (Ortiz, 2023a, 2023b). ChatGPT is a large language model. These models are trained with a very large dataset of conversational text such as chat logs, forums and social media posts to precisely forecast what word comes next in a given sentence, which allows it to engage with users in more realistic natural dialogues (Montti, 2022; Qadir, 2022). The popularity of this technology can be gauged from the tweet by OpenAI's chief Sam Altman, who noted that ChatGPT attracted more than one million users in the first five days after its launch.

Despite its impressive conversational abilities, this technology has some limitations as well. For example, it is unable to answer questions that are worded in a particular way, requiring the user to reword the question in order for ChatGPT to understand it. An even bigger concern is the lack of quality in the responses it provides to questions, which seem plausible-sounding but hardly make any realistic sense. Finally, it responds to ambiguous questions just by taking a guess rather than asking for any further clarification, which results in undesirable responses to such questions (Ortiz, 2023a, 2023b). The key reason for ChatGPT to generate prejudiced or even incorrect results is largely the lack of training data and the biases in it, which can reflect negatively on the model results (Yang, 2022).

The increasing development of ChatGPT is significantly reshaping the education market and hence raising concerns about what and how to teach the next generations (Zhai, 2022). People are concerned that ChatGPT might replace human intelligence. For example, this technology can be used to efficiently write an article or essay within a few seconds, potentially abolishing the need for human intervention (Ortiz, 2023a, 2023b). Others in education fear whether students will learn to write effectively as they will bank on ChatGPT to do this without putting in any effort. But experts feel that the essays generated by ChatGPT can be told apart from those written by human intelligence due to a few basic flaws of such articles produced by the bot. A report on essays produced by ChatGPT found that the composed articles were superficial and lacked proper referencing. Other reports indicated that such articles included erroneous information and were unable to provide a convincing perspective for the readers. The authors tested ChatGPT's ability by giving it a complicated essay question that was asked to Honors Psychology students and found that the bot produced no more than a grade of B- or C+. ChatGPT could not distinguish the 'classic' article in the field that is cited by every other article and also kept referring to the same sources over and over again (Hirsh-Pasek and Blinkoff, 2023). This clearly indicates that ChatGPT can demonstrate its intelligence to a certain extent to write the essays and manage to get a passing grade, but the writing style and presented content may easily be differentiated from human-written content. Hence, there is a need to train instructors to catch such caveats and assess the work accordingly.

Rather than presenting the entire work generated by ChatGPT in response to assignments given by educational institutions, it can be used as a tool for intellectual intensification rather than as a tool of plagiarism. Considering the fear of this technology in education, the New York City Department of Education banned the use of ChatGPT on January 3, 2023 across all department devices and networks (Hirsh-Pasek and Blinkoff, 2023). Banning such technology is not the solution for any academic institution, as students can still use it on their own networks and devices. Rather than considering this as a hindrance to the students' intellectual development, instructors should use it as a tool of assistance. The application of ChatGPT and its constructive use to help students complete their assignments should be taught at educational institutions so that the culture of learning from an advanced AI tool such as this can be used in the right way by students, rather than making its use a medium of complete dependence and plagiarism. These tools can provide answers to questions quickly, but they should never be considered a substitute for critical thinking and problem-solving skills.

The support provided by this technology can rather be considered an opportunity for students to build further, which would be essential for their academic and life-long success. University instructors can use this tool to teach students how ChatGPT can help them generate concepts and structure for their writing assignments, practice their language skills (e.g., translating the native language into the target language), identify appropriate sources, and generate a list of potential research topics (Qadir, 2022). In essence, teaching students how to use this tool to supplement their research and writing efforts in university education is an opportunity they should be given. Students should also understand that ChatGPT's generated text or ideas presented as their own can lead to plagiarism, and hence they should only be considered informational and as part of the research process when creating papers or completing other required assignments.
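To make this classroom use concrete, the sketch below shows one way an instructor might demonstrate "idea and structure generation" programmatically rather than through the chat interface. It is illustrative only: it assumes the openai Python package (version 1 or later) with an OPENAI_API_KEY set in the environment, and the topic, prompt and model name are hypothetical examples rather than part of the contribution above.

```python
# Illustrative sketch only: asks a ChatGPT-style model for an outline and
# research questions as a *starting point* for a student's own work.
# Assumes `pip install openai` (v1+) and an OPENAI_API_KEY environment
# variable; the model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()

topic = "The impact of generative AI on assessment in higher education"  # hypothetical topic
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": (
            f"Propose an essay outline on '{topic}': a working thesis, "
            "four or five section headings with one-line summaries, and "
            "three research questions a student could investigate further."
        ),
    }],
)
print(response.choices[0].message.content)  # raw material, not a finished essay
```

Used this way, the output is raw material to be checked against credible sources and reworked in the student's own words, in line with the cautions that follow.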
Moreover, as this bot is not connected to the Internet and is heavily reliant on its data and algorithms, the output of this tool may be misleading or incorrect if the data and algorithms are inaccurate. So students should always verify the answer that they receive from the bot and check other sources before considering it a valid response to their questions or assignments. Educational institutions should also have a clear policy on the extent of use of these tools and ensure that student data is properly protected from the huge cybersecurity risk (Ortiz, 2023a, 2023b). Academic institutions should make a careful evaluation of required educational tasks and reform them to cultivate students' unique skills that are beyond the capabilities of ChatGPT. As ChatGPT still needs to be trained further to accurately answer subject-domain-specific questions, integrating this technology into subject learning tasks is critical and warrants further research (Zhai, 2022). Educators need to consider adjustments to their educational learning outcomes and evaluation criteria to clearly understand the role of the technology and the cognitive skills students learn by integrating their knowledge with the effective use of this tool to solve real-world problems. There is arguably considerable agreement that a student in higher education should be openly encouraged to use ChatGPT to create the first draft of their answer or assignment in consultation with their instructor. At this point, their abilities could be assessed to the extent that they would be able to apply critical thinking
and problem-solving skills to improve the essay to meet the exact expectations of the set criteria for that assessment (Hirsh-Pasek and Blinkoff, 2023).

Based on the above discussion, it is clear that ChatGPT and other AI tools provide enormous opportunities for students and instructors in higher education. The immense potential of AI tools has opened up massive opportunities for research in this area in order to fully understand the potential of this technology as a method of enhancing the efficiency of teaching and learning while shaping the future of the current and upcoming generations. If we want our students to learn how to solve real-time problems, we need to come out of the traditional teaching model of simply delivering one-way theoretical knowledge to students and go beyond that to make tools like ChatGPT a friend in the classroom ecosystem, not something to fear. Such technology should rather be encouraged as a medium for transforming practical education. Further, it could be of great help to students as they acquire life-long learning skills and use them in their future careers to solve actual problems at their workplaces.

Considering the massive potential of this chatbot, it can be effectively used as a topic in higher education research in general and business education in particular. At a basic level, future researchers could explore where this technology falls short in providing accurate information and results. Future researchers can perform literature exploration and a review of information relating to the performance of ChatGPT and compose guidance papers on how to effectively use this technology and where to take caution, avoiding over-reliance, lack of originality and plagiarism. Future research should also work on developing models and validating them with relevant data on how to assess this technology's effectiveness, performance, and intentions for use by relevant stakeholders. Future researchers could further explore both the positive side (e.g., engagement, attention, attitude, playfulness, hedonic and utilitarian motivation, excitement, willingness to use, etc.) and the dark side (e.g., threat, risk, invasion of security and privacy, escapism, distraction, response delay, irrelevant information, functional failure, etc.) of the use of ChatGPT. In addition, this technology's capability to automatically assess student grades while providing relevant feedback and helping instructors periodically monitor students' progress should be explored. Moreover, ChatGPT's ability to generate text in multiple languages can also help researchers undertake research on assessing students' learning effectiveness and performance in subjects in their own native language.

4.2.8. Contribution 23

4.2.8.1. Disrupting higher education and human communication by Language models - Marijn Janssen. Natural language models, like ChatGPT, are being used more and more due to their impressive performance. ChatGPT stands for Generative Pre-Trained Transformer; developed by OpenAI, it is a large, pre-trained language model that has gained immense popularity and might be disruptive (Haque et al., 2022). Building language models requires significant resources for curating and documenting large training datasets, with high costs and considerable estimated CO2 emissions. LMs have been increasing in size in terms of the number of parameters and the size of the training data (Bender et al., 2021). Such tools can be used for a variety of applications, such as searching with higher precision, writing software code, answering customer inquiries, and creating legal documents. ChatGPT might disrupt the search engine market and reshape the higher education landscape.

ChatGPT can be helpful in several ways, but it comes with substantial disadvantages, like most new technologies. ChatGPT outcomes are probabilistic. Language model refers to systems that are pre-trained on huge amounts of text to predict the likelihood of a token (which can be a character, word or string) given either the previous context or its surrounding context (Bender et al., 2021; Bender & Koller, 2020). ChatGPT can be viewed as a "Stochastic Parrot" (Bender et al., 2021). Whereas mathematics results in deterministic outcomes, this type of AI generates the most likely outcome, and this can therefore differ from one run to the next. It is not based on logic or reasoning but is calculated based on the probability of the outcomes. Furthermore, this can result in framing bias by telling users what it 'thinks' they want to hear. The mix of human biases and seemingly coherent language heightens the potential for automation bias, deliberate misuse, and amplification of a hegemonic worldview (Bender et al., 2021).
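The "most likely outcome" point can be illustrated directly. The sketch below is a toy example, not how ChatGPT itself is served: it assumes the Hugging Face transformers and torch packages and uses the small public gpt2 model to show that what a language model actually produces is a probability distribution over the next token, from which output is then sampled.

```python
# Toy illustration of next-token prediction with a small public model
# (GPT-2 via the `transformers` package); ChatGPT is a much larger,
# further-tuned model, but the probabilistic principle is the same.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The student submitted the essay before the"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, sequence_length, vocab_size)

# Probability distribution over the *next* token given the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}: p = {prob.item():.3f}")
```

Because generation samples from distributions like this one, the same prompt can yield different, equally "plausible" continuations, which is exactly why the outcomes differ per run and why factual reliability is not guaranteed.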
ChatGPT is just the beginning and will likely be followed by other specialised and more advanced tools. We should neither try to ban ChatGPT nor embrace the technology indiscriminately. Like any technological progress, critical appropriation is needed, along with experimentation to find ways to take advantage of the new technology without suffering its disadvantages. These kinds of AI tools will likely transform our way of working and, after a while, become common and institutionalised in our daily life. The question is how the technology will evolve and how humans and society will adapt (Agrawal et al., 2022).

4.2.8.1.1. Use of ChatGPT in education. ChatGPT and other language model tools can be used to increase productivity and free up scarce code developers and report writers. ChatGPT can be used by students to create a report for their course or even to provide answers to an exam. Language models can be used to create AI-assisted work, which raises questions about whether students did their work solely on their own and how ChatGPT should be acknowledged in their work. ChatGPT has officially made its scientific debut with at least four co-author credits (Stokel-Walker, 2023). However, including such tools as authors is debatable, as authors take on legal responsibility for their work and need to give their consent. Only people can be held accountable. Instead, the use of such tools should be explained as part of the research methods, just as we refer to Wikipedia and mention the use of tools like NVivo or Atlas.ti.

Language models will change our way of education. On the one hand, the risk is that students' independent thinking and language expression skills might deteriorate and not be practiced when using ChatGPT. The first question is whether these skills are still needed, and if the answer is yes, new ways of developing and practicing these skills need to be devised. On the other hand, students need to develop skills to scrutinise the generated text and code and be critical towards the outcomes. They need to develop skills to investigate whether the information is factual and develop a critical attitude toward the generated text. The use of language models should be explained in their research method to ensure accountability by the authors. Furthermore, the outcomes should be validated and triangulated with other sources of evidence. Although students have been fast in adopting ChatGPT, lecturers have embraced it too. Some courses already provide AI-generated outcomes and ask their students to evaluate and reflect on them. Other courses encourage the use of ChatGPT or other language models in their assignments and ask the students to explain how they made use of them.

Nevertheless, traditional education remains. Even teachers are not able to recognise reports generated by ChatGPT (O'Connor, 2022). As language model tools are not able to understand what they actually do and whether the outcomes make sense, there is a close relationship with misinformation. Language model tools have no real understanding of what they are generating, and they state both facts and falsehoods with the same high level of confidence. The way of providing answers, by offering easy-to-read stories rather than links to resources on the web, might reinforce the filter bubble. The latter refers to the risk of showing only certain information, putting users in a "bubble" without showing other and diverse viewpoints (Pariser, 2011). That makes it easier for people, because they do not have to gather information from different sites. But how do you know the information is correct? Not biased? Not reflecting a certain worldview? There will still be issues of correctness, bias, criticism and norms. Cultures have different norms, and people can have different views on a situation that might not be expressed. Research
suggests that people tend to take the information presented to them by ChatGPT for granted (Krügel et al., 2023). Users might not be aware that other answers might be possible and that the answers are not factual. Who is accountable for providing the correct answer? Such language model tools can be used to direct the behaviour of the public. The risk is that such tools will be misused for the sake of manipulating citizens. Also, there is the risk that those in control of language models can manipulate the answers given. No government, company, or person should have sole control of digital filters (Janssen & van den Hoven, 2015) and the resulting information provided.

Language models threaten to take over jobs: people living from checking grammar and English might become less necessary, and the tools can be used by villains for writing hacking code or by students for cheating on report writing. Yet, language models will present many opportunities as well and will likely create new jobs and even new kinds of business models. Language models can make processes more efficient and effective and reduce human labour. Language models can support dyslexic students in this way, making it easier for them to work in certain environments. There might be a battle between students and teachers in using and detecting language model-generated text. Alternatively, we can embrace ChatGPT for better learning. For this, we should not start with the how question, but with the why question. Similar to the introduction of the calculator or Wikipedia, the tools can be integrated into our education. This raises the question of why we teach certain knowledge and skills to our students, and what skills are needed by students. What is essential is that they learn to be critical towards the automatically generated answers and be able to analyse them. Also, we need to educate students on how to take advantage of these kinds of tools without falling into the trap of their disadvantages.

Humans spend too much time on mundane, day-to-day tasks. By automating them, humans can spend more time on creative and strategic work. People will get to work in more fulfilling roles. Teachers can focus on what matters and on those students who need more support. Chatbots have been employed to offer free 24-hour psychological assistance to both medical staff and the public (Liu et al., 2020). A study of chatbots reveals that personalization, enjoyment, learning, and condition are positively related to user experience and user satisfaction (Zhu et al., 2021). AI-based chatbots have been widely adopted to support and strengthen the quality of services in business industries, such as tourism and insurance (Lokot & Diakopoulos, 2016; Muthugala & Jayasekara, 2019; Androutsopoulou et al., 2019), and can now be used to strengthen education. A sound and easy-to-use user interface is key for effective use. Also, support for interpreting the results can help avoid results being taken for granted without looking for evidence. Research is needed on how language models can be used to create value, under what conditions, and on what they cannot do or what might have negative effects.

Although ML can help improve productivity and provide other advantages, its use also affects our public values. Ethical implications should be considered to ensure that language models are being used in a way that benefits everyone. Not all people might be able to use these kinds of tools or have access to the newest tools. Openness, accessibility, and inclusion are societal values that are affected. Research is needed to evaluate the effect on public values and to ensure that these values can be met.

4.2.8.1.2. Next steps. ChatGPT provides a glance into the future, and further developments will likely follow. Like any technology, language models can be used for good or bad. In education, the learning objectives and the way of learning should be reconsidered in a similar way as when the calculator or Wikipedia was introduced. ChatGPT should be viewed as a tool that will be integrated and institutionalised in our daily practices. ChatGPT is not able to validate the accuracy or veracity of the data, but it is useful as a support. People should develop new skills, and the use of language model tools should be acknowledged in research methods. Experimentation is needed to understand the full consequences. ChatGPT should not result in the next filter bubble, and domination by a single party should be avoided. Among others, the public values of openness, accessibility, and inclusiveness will be affected, and measures need to be taken to ensure that people have access and are able to critically assess the outcomes. ChatGPT cannot make a model or draw a business process; however, there are other OpenAI tools, like DALL-E, that can make drawings. This will likely move forward, and these fields will also be supported by AI tools. ChatGPT will likely be combined with more logical models, like Wolfram Alpha, to understand relationships. The focus in the future will be on logic and language models. Their skills will also improve considerably in the coming years.

4.2.9. Contribution 24

4.2.9.1. ChatGPT and education policy and practice - Tom Crick. The rapid advances in AI technologies have opened new possibilities in various fields and domains, including education (Luckin et al., 2016). The application of AI to learning can be traced back to the pioneers of the field, including John McCarthy (McCarthy, 2007). It has thus been used in various educational contexts and settings for some time (Goldstein & Papert, 1977; Woolf, 1992), primarily for personalised learning and the assessment of student progress (for example, through learning analytics), with varying success (Woolf, 2015; Zhai et al., 2021). But the widespread use and application of AI in education has been increasingly critically evaluated due to concerns about its cost and effectiveness, alongside emerging ethical and privacy considerations (Williamson et al., 2020a; Williamson & Eynon, 2020; Selwyn, 2022). However, the impact of the COVID-19 global pandemic has further accelerated the interest in and potential application of digital technologies — and especially AI — in education (Crick, 2021), especially in the context of the digital disruption to learners, practitioners and education institutions and structures (Williamson et al., 2020b; Watermeyer et al., 2021). These wider narratives can also be seen in emerging national and international policy contexts (UNESCO, 2021; UK Government, 2021; Council of Europe, 2022), alongside a renewed focus globally on digital skills, computer science education, and associated national curriculum and qualifications reforms (Brown et al., 2014).

The recent advancements in NLP have enabled the creation of accessible large language models, such as the open-source GPT (Generative Pre-trained Transformer) family created by OpenAI,16 which has further reinvigorated interest in the use of AI tools and technologies in education. The widespread use and application of ChatGPT17 (Chat Generative Pre-trained Transformer), which has been built on top of OpenAI's GPT-3 family of large language models and has been fine-tuned for transfer learning using both supervised and reinforcement learning techniques (OpenAI, 2022), has the potential to transform the way we learn, teach, assess and access information. With ChatGPT gaining more than 1 million users in its first five days since launching in November 2022,18 and nearly 600 million monthly visits and its total user count exceeding 100 million by January 2023, the impact on education is likely significant. However, the uncritical use of ChatGPT raises several concerns and issues that need to be considered, both at a legal, regulatory and policy level, as well as what this means for individual learners (and indeed, citizens). While the peer-reviewed published academic literature on the use of ChatGPT in education is still emerging at the time of writing, in this section we will briefly explore its potential uses, as well as some of the key concerns and challenges associated with its deployment and adoption in various settings and contexts.

16 https://openai.com/
17 https://chat.openai.com/
18 https://twitter.com/gdb/status/1599683104142430208?lang=en

While ChatGPT has raised visceral fears of learners cheating on homework, various commentators are assessing how its potential as an educational tool outweighs its risks (New York Times, 2023). Moreover,
education has been incorporating and reimagining the threats and possibilities of technology for decades (Huang, 2019); AI will likely be no different (Luckin et al., 2016), but will require not only a technological shift but also a mindset and cultural shift (Breen, 2023). One of the most frequently highlighted applications of ChatGPT in education is its potential to provide learners with personalised, on-demand learning experiences. It is feasible for ChatGPT to be easily integrated into educational platforms and systems (for example, virtual learning environments, learning management systems, resource repositories, etc.) to provide students with instant feedback and guidance, 24/7 access to relevant course and programme information, and engaging context-specific conversational interactions. This is particularly relevant for learners who are still unable to attend physical classes due to the COVID pandemic, and supports an increasing number of learners who are engaging through remote or hybrid learning. By creating ad hoc interactive and engaging learning materials, ChatGPT-powered conversational agents could simulate real-world scenarios, providing learners with hands-on learning experience and authentic assessment, and fostering critical thinking and problem-solving skills. It could then be used as a virtual tutor, providing students with personalised interactions, feedback and support in real time. This could help learners better understand complex concepts and overcome learning obstacles. Another potential use case of ChatGPT in education is how it could support language learning: it can be programmed to converse in different languages and can help students practice their language skills in real time. For example, students can engage in conversations with ChatGPT to practice speaking and listening skills, as well as to learn new vocabulary and grammar. ChatGPT can also provide immediate feedback and correction, which could help students to improve their language skills faster and more effectively.

Moreover, ChatGPT can also be used to help practitioners in their learning environments (Ferlazzo, 2023). By trivially providing them with instant access to information, ChatGPT can assist them in answering questions and providing students with additional resources and materials. Additionally, it can be used to create and distribute educational content, such as ad hoc quizzes, games, and interactive lessons, making the process of creating and delivering content much faster and easier. Finally, ChatGPT can automate repetitive administrative tasks, such as grading assignments and providing structured feedback, freeing up practitioner time for more important tasks such as curriculum design, close-to-practice research, evaluation and student engagement.
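As a hedged illustration of what "grading with structured feedback" could look like, the sketch below drafts rubric-based feedback with a ChatGPT-style model. It is an assumption-laden example, not a recommended workflow: it presumes the openai Python package (version 1 or later), an OPENAI_API_KEY in the environment, an illustrative model name and rubric, and, crucially, that any marks it drafts are reviewed by the practitioner before reaching students.

```python
# Illustrative sketch of drafting rubric-based feedback with an LLM.
# Assumes `pip install openai` (v1+) and an OPENAI_API_KEY environment
# variable; the rubric, model name and JSON format are hypothetical.
import json
from openai import OpenAI

client = OpenAI()

RUBRIC = ("Score 0-5 for each criterion: argument quality, use of evidence, "
          "structure, clarity. Add two sentences of formative feedback per criterion.")

def draft_feedback(essay_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        temperature=0,
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": f"You are a teaching assistant. Apply this rubric and reply in JSON: {RUBRIC}"},
            {"role": "user", "content": essay_text},
        ],
    )
    # Draft only: the practitioner checks and adjusts scores before release.
    return json.loads(response.choices[0].message.content)
```

The accuracy, bias and privacy concerns discussed next apply with particular force here, since student work would be sent to a third-party service.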
However, while the use of ChatGPT in education has the potential to realise some of the benefits outlined above, there are widespread concerns about its use and impact. Again, while the peer-reviewed published academic literature on the critical use of ChatGPT in education is still emerging at the time of writing, there are key themes emerging across research, policy and practice. In general, AI, like any other tool, offers many opportunities (Luckin et al., 2016) but also carries with it many threats (Williamson & Eynon, 2020), which make it necessary to take human rights principles into account in the early design of its application. Educators (and learners themselves) must be aware of the strengths and weaknesses of AI in learning, so as to be empowered — not overpowered — by technology in their educational practices (Council of Europe, 2022).

One of the main ethical concerns is the accuracy and reliability of the information provided by ChatGPT, as well as the potential for bias and discrimination. As an AI language model, ChatGPT is trained on large corpora of textual data freely available on the internet, but it is not always possible to ensure that the information provided is accurate and up-to-date, and there is a risk that the model may perpetuate existing biases, stereotypes and discrimination in society. There have been extensive examples of racist, sexist, homophobic, ableist and other discriminatory language making its way into the model and then being generated as output. Large language models can reinforce and exacerbate existing power dynamics; the power structures reflected in the datasets become encoded in the models, meaning that any output reinforces those structures. Because much of the data in the training set might be produced from a predominantly white, male, Western, English-speaking perspective, the data would likely be heavily skewed to reflect those structures. This raises the question of how ChatGPT can be trusted to provide learners with accurate information from diverse and authoritative sources, and what measures need to be taken to ensure that the information provided is reliable. With recent research in the UK highlighting how Instagram, TikTok and YouTube are the top three news sources for teenagers, data and media literacy are increasingly key features of school education so as to counter misinformation and disinformation online (Ofcom, 2022).

ChatGPT can essentially be viewed as an increasingly sophisticated "bullshit generator"; if a generated sentence makes sense to you, the reader, it means the mathematical model has made a sufficiently good guess to pass your sense-making filter. This becomes increasingly problematic from an algorithmic governance and transparency perspective if it is able to produce plausible articles of any length or style, or is able to generate working computer code; it means what has been automatically generated is now becoming harder to spot. This directly links back to concerns regarding "trustworthy AI" (Floridi, 2019) and how it is or is not "citizen-centred" (UNESCO, 2021). Thus, developments in the AI field can deeply impact interactions between educators and learners and among citizens at large, which may undermine the very core of education; that is, the fostering of free will and independent and critical thinking via learning opportunities. Educational professionals need to be made aware of AI and the ethical challenges it poses in the context of education and learning (Council of Europe, 2022), increasing demands for AI-literate learners and practitioners.

In addition to concerns regarding accuracy and bias, there are also more general concerns about the ethics and privacy implications of using ChatGPT in educational contexts, and what policies and regulations need to be in place to ensure that the technology is used securely, ethically and responsibly (Irons & Crick, 2022). As mentioned previously, these types of AI models are reliant on huge amounts of data; some of this data is personal or private, and has been scraped indiscriminately along with other freely available data. Earlier models were shown to output private and identifying information about people in their datasets. Additionally, as models become more and more powerful there is an imperative to supply them with increasing volumes of data; this creates an economic imperative for the large language model creators to collect data in ways which are sometimes unethical or invade people's privacy. Therefore, as learners and practitioners interact with ChatGPT, they will be sharing personal information and data with the AI model, and it is important to ensure that this information is kept confidential and secure, and to be clear how this information will be used (and potentially associated with their account and identity). The use of ChatGPT thus raises important questions about the collection, storage, and usage of learner information, as well as the potential for data breaches and unauthorised access to personally identifying data.

In summary, it is clear at the time of writing that ChatGPT and associated tools and technologies will continue to impact on and disrupt education (Zhai, 2022). While early responses included futile calls to ban them (New York Times, 2023), it is clear that, as with the COVID pandemic, the continued use and adoption requires not only a technological shift, but a mindset and cultural one too (Breen, 2023). We have already seen significant disruption and permanent changes to learning, teaching and assessment over the past few years (Crick, 2021) as we embrace a "new (ab)normal" (Watermeyer et al., 2022); why would we not try to embrace these tools and technologies and use ChatGPT et al. to support innovative practices, assessing "humanness" rather than tests that can be increasingly trivially gamed and automatically solved? But it is clear that the use of AI in education across emerging research, policy and practice requires further critical analysis and evaluation, as well as increased openness, transparency and likely further legal and regulatory
grounding (Williamson & Eynon, 2020), ensuring that they are explicitly learner- (or indeed, citizen-) centred (Floridi, 2019).

4.2.10. Contribution 25

4.2.10.1. How does ChatGPT benefit or harm academic research? - Varun Grover

4.2.10.1.1. Human contributor. A number of years ago, in my doctoral seminar for 1st year students, I distributed printouts of seven short papers co-authored with me and each of the seven students in the class. I indicated that I had painstakingly worked on these papers to give them a head start on the publication process. I asked them to take 30 min to review their paper and let me know what they thought. The papers superficially looked credible, but were garbage, generated by SCIgen, "a program that generates random Computer Science research papers, including graphs, figures, and citations. It uses a hand-written context-free grammar to form all elements of the papers."1 After review, only 3 of the 7 students identified the nonsensical nature of the papers, 2 were unsure (perhaps because they did not want to challenge the instructor), and 2 indicated that they liked the papers and thanked me.

The technology is far better today, and ChatGPT, due to its widespread accessibility, is causing some widespread concern. Some journals and conferences have already set up policies that prohibit the use of ChatGPT in the research product. For instance, the International Conference on Machine Learning indicates that "Papers that include text generated from a large-scale language model such as ChatGPT are prohibited unless the produced text is presented as a part of the paper's experimental analysis."

Is this an overreaction? Certainly, the "ability to discern" AI-generated prose from human-generated prose increases the diligence needed from our editors and reviewers. Most studies have shown that humans have a difficult time discriminating between AI- and human-generated text. Machines (i.e., bot-detection AI), however, perform better at discriminating. AI-generated writing tends to be less specific, less creative, over-generalises specific instances, and has a different writing style (e.g., uses more predicted words) than human writing. AI tools (like GPTZero) have been pretty successful at probabilistically identifying AI-generated writing.

However, while there may be legitimate reasons for reacting to this tool, there are just as many reasons to embrace it proactively. ChatGPT is just that, a tool, that can be embraced like other tools (e.g., Grammarly) to improve the quality of writing. For instance, often the review process ends with the tedium of shortening the paper to meet length requirements. Think of alleviating the difficulty in deciding what to cut by using the tool. Or consider the value to authors in feeding a complete paper to the AI tool and having it write the abstract. Similarly, complex papers could be made more accessible to different constituencies by simplifying the communication of complex ideas. This could facilitate better communication of our work to practice – something often discussed, but rarely done because it takes "extra" effort when the goal of journal publication is met. Non-native-speaking researchers could greatly benefit from improving the quality of writing through this tool. The AI could also scrape websites or papers and organise the material at a general level that might facilitate data collection (from websites) or a literature review (from papers).

The challenges are also substantial. If our systems (particularly less scrutinous conferences) are not able to discriminate, then it is possible that the mass production of AI-bot-generated research papers could tax the review system and challenge the integrity of research. False information is just as much of a potential problem in research as it is in journalism and news. This is because how the AI takes information (based on its training set) and weights certain information could lead to misleading conclusions. The problem may be compounded when it is difficult to untangle the sources of the information and the attribution of credit. Where is the intellectual ownership? Is it with the training set used or with the algorithms, the latter of which are usually a black box behind a wall of corporate control? The lack of transparency can make the governance of the tool very messy.

So, where are we going with this – and what are the solutions? While it would be foolhardy to speculate with high specificity on direction, there are a few general tenets that I feel comfortable in predicting.

➣ The battle between bots (use of ChatGPT vs. bot-detection AI) is only a small part of the solution. While we can train models on human text vs. AI-generated text, there will always be a degree of mismatch, as the training sets for the two need to constantly change as the AI evolves.
➣ The AI will always get better (through reinforcement learning, bigger and better training sets, access to the Internet) and so fighting this trend will fail – policies need to be set around transparency.
➣ For academic research, the line is between using the chatbot to improve writing (communication goal) vs. to generate new ideas (innovation goal). Where that line is drawn between communication and innovation and how policies are articulated is an important professional conversation.
➣ ChatGPT can never partake in co-authorship arrangements due to its lack of accountability.
➣ There needs to be a high reflection of ethics in the AI to prevent the automation of misinformation and the spread of false research.

I suspect that, similar to most AI, there might be a combination of human and AI complementarity that will produce the best product. ChatGPT as a tool can greatly facilitate research writing and other creative pursuits (like filmmaking, book writing, etc.) - but the open question is how good can it get? The perfect human-AI complementarity may be an elusive ideal that requires ongoing navigation through some delicate ethical boundaries.

I asked ChatGPT to address exactly the same topic by posing the following question to it: "Write a 1000-word editorial on how ChatGPT benefits or harms academic research." The results follow. Compare and enjoy!

How Does ChatGPT Benefit or Harm Academic Research by ChatGPT (AI Bot).

ChatGPT (AI Bot): ChatGPT is a state-of-the-art language model developed by OpenAI, capable of generating human-like text based on the input it receives. With its impressive performance, ChatGPT has been applied in various fields, including academic research. In this editorial, we will examine how ChatGPT has impacted academic research and explore both the benefits and harms it has brought to the field.

On the one hand, ChatGPT has brought numerous benefits to academic research. Firstly, ChatGPT has significantly improved the efficiency of language-related tasks in academic research. For example, researchers can use ChatGPT to automatically generate high-quality abstracts or summaries of research papers, saving time and reducing the workload of researchers. In addition, ChatGPT can also be used to generate reports, conference papers, and other forms of research output, freeing up researchers' time to focus on more creative and challenging aspects of their work.

Secondly, ChatGPT has also contributed to the democratization of knowledge by making research more accessible to the public. With its ability to generate human-like text, ChatGPT can help to bridge the gap between academic research and the wider public, allowing non-experts to understand complex research concepts and findings in a more accessible and understandable manner. This can lead to increased public engagement with academic research and a more informed and educated society.

Finally, ChatGPT has also opened up new avenues for interdisciplinary collaboration in academic research. By automating language-related tasks, ChatGPT allows researchers from different fields to collaborate more effectively and efficiently. This can lead to the development of new research ideas and methods, as well as increased innovation and breakthroughs in academic research.

However, there are also some harms that ChatGPT has brought to
academic research. Firstly, there is the issue of the authenticity and reliability of the generated text. Although ChatGPT is highly advanced, it is still an AI model that operates based on patterns and associations it has learned from its training data. This means that the generated text may contain inaccuracies, biases, and other forms of misinformation that can harm the credibility of academic research.

Secondly, there is also the issue of accountability and authorship in academic research. With ChatGPT's ability to automatically generate research output, it is possible for researchers to pass off the work of the model as their own, leading to unethical behaviour and a lack of trust in academic research. This can undermine the integrity of the research community and erode public trust in the scientific enterprise.

4.2.11. Contribution 26

4.2.11.1. ChatGPT, scholarly writing and publishing - Rameshwar Dubey and Denis Dennehy. In this article, we provide critical reflection on ChatGPT in the context of conducting and publishing rigorous, high-quality research. Such reflections are grounded in our training and experience as IS and operations management scholars committed to high-quality scholarly research.

Since its release in November 2022, ChatGPT, built on the foundational model GPT-3.5, has received mixed responses and reactions from the scientific community. Yet, it seems that we are again playing 'catch-up' with this latest technology. On the one hand, this can be justified, especially if one followed the Gartner (2022) Hype Cycle for emerging technology, which predicted that such foundational models would take 5–10 years before mainstream adoption would take off. On the other hand, ChatGPT has ignited a much-needed public discourse to re-affirm that scholarly writing (and research) is a 'craft' that is developed over time and cannot be replaced with the latest technological 'fashion waves' (Baskerville & Myers, 2009). We do not advocate following the approach taken by the New York City Department of Education, blocking access to ChatGPT on its networks and devices over fears it will inhibit a student's ability to build critical-thinking and problem-solving skills. In contrast, we acknowledge that ChatGPT holds much promise in terms of being integrated into academic teaching and learning (e.g., helping students write in a programming language).

In short, ChatGPT is an AI-based, pre-trained language model developed by OpenAI that operates in a conversational way to generate text, just like a human being (Haque et al., 2022; Lucey & Dowling, 2023). ChatGPT, unlike any other AI-based tool, has advanced features that make it a cutting-edge tool over other language tools that already exist (Susnjak, 2022). For example, we know that members of the academic community use AI-powered language tools (e.g., Grammarly, Shortly AI, and Jarvis), as they can be useful for writers whose first language is not English, or who may have a learning disability such as dyslexia. They essentially use these AI-powered language tools to improve the readability and language of their manuscript. We do not, however, cite such tools as a co-author. Scholarly writing is more than just producing a polished manuscript; it is about originality and building on a tradition of accumulative building of knowledge (Metcalfe, 2004). Scholarly writing is not the same as writing for a tabloid newspaper that publishes routine stories (e.g., weather reports, traffic updates), whereby ChatGPT may help journalists to write short news stories (Financial Times, 2023).

Recent studies (e.g., Gao et al., 2022; Bouschery et al., 2023) suggest that ChatGPT can be a useful tool for writing an abstract or even an introduction section. However, in most cases the text is poorly paraphrased, leading to the distortion of the actual message or content, and in some cases a considerable amount of plagiarism has been noted. In addition, there is a noticeable absence of scientific evidence to support any claims that AI-based language tools can perform rigorous research (e.g., hypotheses development, data analyses, discussion of the results), apart from assisting researchers to improve or quickly generate text for a manuscript. For instance, in a manuscript that was recently published in a reputable outlet, the authors acknowledged the contribution of ChatGPT to drafting the abstract with minimum intervention by the authors (see Bouschery et al., 2023). Moreover, a minority of academics claim to have used ChatGPT to pass examinations (see Bommarito II & Katz, 2022). While the latter may not be directly linked to scholarly writing and publication, it does raise a concern that a minority of academics may use ChatGPT to write content without acknowledging it to publishers, journal editors, or conference organisers.

Indeed, several academic papers have recently been published with ChatGPT as a co-author. While the integrity of such studies is not being questioned, from our perspective, adding ChatGPT as a co-author raises several issues such as copyright, privacy, misuse, bias, and lack of transparency. While these issues are being addressed by publishers who are updating their publication policies, the use of ChatGPT as a co-author raises two fundamental concerns about the legitimacy of scholarly research and writing that impact all disciplines.

First, as a discipline that adopts and adapts theories and theoretical concepts from other disciplines (e.g., sociology, psychology, management), relying on a chatbot raises the issue of fragmented adhocracy (Banville and Landry, 1989; Hirschheim et al., 1996), which previously overshadowed the IS discipline in its formative years. Further, we rely on the virtues of 'good theory' (Popper, 1957), which include uniqueness, parsimony, conservatism, generalizability, fecundity, internal consistency, empirical riskiness, and abstraction (Wacker, 1998), to make theoretical contributions. Theory is the 'theoretical glue' (Whetten, 1989) that binds the elements of a complete theory and enables us to advance knowledge, which has implications for research, practice, and society. As theorizing and analysing data are key components of IS and operations management research, ChatGPT may not be useful for a myriad of reasons, as mentioned throughout this manuscript.

Second is the 'identity' of the IS discipline (Benbasat and Zmud, 2003; Gill and Bhattacherjee, 2009; Baskerville & Myers, 2009), which includes the fickle nature of IS researchers who follow research fads. The continued success and development of the IS discipline depends on its ability to maintain a strong identity (Benbasat and Zmud, 2003; Sidorova et al., 2008), which we believe includes scholarly writing.

4.2.11.2. Changes to publisher and conference policies. While publishers scramble to update their policies (see excerpts in Table 4 below), the onus remains with all human authors, and a breach of these policies will constitute scientific misconduct no different from plagiarism of existing research.

We make a call to the associations of our disciplines (e.g., the Association for Information Systems) to update their policies for conferences and the use of ChatGPT in scholarly research and writing.

To conclude, publishing policies will continue to evolve and new variations of ChatGPT will emerge in the future, but the activity of publishing scholarly research is a 'human-centric' process, not a 'robot-centric' one. This implies that authors are responsible for the integrity and validity of their scholarly work, adhering to contractual agreements with publishers, and upholding the ethical principles of their academic community (i.e., the Association for Information Systems). Further, as the rigorous review process of manuscripts is conducted by humans, not robots, publishers, editors, and conference committees have a responsibility to ensure reviewers are provided with relevant training to help mitigate the threat of technologies (when used unethically) that have the potential to undermine the craft of scholarly writing and the integrity of our disciplines.

4.2.12. Contribution 27: ChatGPT and its potential impact on research and publishing - Savvas Papagiannidis

It is undeniable that the attention ChatGPT has attracted is testament to its advanced capabilities and the insightful responses it can generate,
When it comes to the second part of the ChatGPT answer, if the author has taken full responsibility for the content, accuracy and verification and made the necessary adjustments, then ChatGPT claims that the author owns the work. In such a case, how is this different from authors having to use sources appropriately? AI effectively packages knowledge and content we would have needed significant time to deal with. Not to mention that having humans review content produced by AI is not necessarily going to address all potential issues. It is not just AI that can make mistakes and has limitations; humans do too. If nothing else, limitations in AI can be addressed with future releases.

For the sake of argument, let us assume that future releases of ChatGPT and other AI systems will be capable of producing similar quality to what a human can produce (e.g., ChatGPT already writes believable scientific abstracts, though with completely generated data, that can fool human reviewers (Gao et al., 2022)), albeit much faster. Why would it be a problem if AI were to write the abstract of a paper? It would leave more time for humans to focus on what matters and invest more time, energy and focus on the main body of the work. After all, we are supposedly not bound by the limits imposed by the training data and we can put our creative flair to use where it matters. This is not to say that writing the abstract is not important. Quite the contrary! Still, despite the abstract being an important part of an article that needs the same attention, there is no guarantee that humans will write a good one. Why are we concerned with productivity gains? Or, to put it differently, why were we not concerned with other technologies that made us more efficient? For instance, why were we not concerned with online databases moving away from paper journals and in-person visits to libraries? They made undertaking literature review searches much easier. Not to mention that we happily apply ML, among other data science techniques, to cluster papers and identify their underlying themes.
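To make the point concrete, clustering of this kind is routine with standard tooling. The following is a minimal, illustrative sketch only; the example abstracts, the TF-IDF representation, the number of clusters and the choice of scikit-learn are assumptions for the illustration, not part of the original argument.

# Illustrative sketch: grouping paper abstracts into themes with TF-IDF and k-means.
# The abstracts, the number of clusters and the library choice are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "ChatGPT and the future of scholarly publishing ...",
    "Generative language models for literature reviews ...",
    "Ethical concerns in the adoption of conversational AI ...",
    "Chatbots and assessment practices in higher education ...",
]

# Represent each abstract as a TF-IDF vector, ignoring common English stop words.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)

# Partition the abstracts into two candidate themes.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, abstract in zip(labels, abstracts):
    print(label, abstract)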
If the argument is that AI can help us write better papers, providing a competitive advantage to the author, then any gains are likely to be short-lived. Authors relying on AI to produce better research and increase their chances of publishing their work are likely to quickly realise that this is not a feasible or sustainable strategy. If we all have access to the same systems and ask for the same task to be undertaken, e.g., to write a literature review on a given topic, chances are we are all going to get the same, or at least a similar, outcome, limiting its perceived value and not resulting in the significant contribution one may have aspired to.

Last but certainly not least, one fully appreciates the ethical concerns that come with using AI. It goes without saying that research integrity should not be compromised. Still, this is a human issue, not a technology one. Banning any practice usually has the very opposite result and will most certainly result in a lack of transparency. AI is no exception: just like any other aspect of the research process, there has to be transparency in methods, and integrity and truth from authors (Springer-Nature, 2023).

Perhaps, instead of trying to fit AI into our existing practices of undertaking research and publishing it, we may want to review how we go about research more holistically. Fundamental changes to practices that have been established for decades or even longer are not going to be easy to bring about. Neither will it be easy to agree on the principles that will guide such changes. AI can support not only research but also the peer-review process. For example, beyond assisting with the writing-up process, it could improve the language used, making a manuscript easier to digest. It could also improve the reliability of plagiarism checks. It could become the de facto Reviewer 1 (one suspects that we do not want AI to be Reviewer 2), providing a fast recommendation to editors when it comes to screening papers. Such an automated review could help improve manuscripts before they are even submitted. AI can also summarise and rewrite manuscripts in non-specialist language for practitioners and the general public, making research more accessible.

Such AI applications could soon be a reality, and this time we need to be more prepared. Our abrupt awakening to AI may have done us a favour, not just in helping us better grasp the enormity of the task ahead, but also in accelerating the process of coming up with a new set of rules. Questions like those posed above are not theoretical questions that one has the luxury of debating, but rather pressing practical questions of immense significance.

4.2.13. Contribution 28

4.2.13.1. Magic or fast-food writing? When transformers challenge our epistemic values in teaching and research and our humanity - Frantz Rowe (I gratefully acknowledge Ojelanki Ngwenyama, Bertrand Rowe and Raphaël Suire for insightful comments on this short philosophical argument about the labels we use for distinguishing human and ChatGPT capabilities and their evolution if we use the latter). In 2016, AlphaGo had already triggered some emotions from the general public and alerted us that algorithms that recognize patterns, what we now abusively refer to as AI (Rowe, 2018), could be very impressive and have consequences. Until now, despite being often assessed in our international conferences and journals, most conversational chatbots were not so impressive or were not open to the public for consideration. Chat Generative Pre-Trained Transformer (ChatGPT), its parents like GPT-3, and its transformer cousins open, apparently, a new era because, for numerous problems we may think about, including complex ones where answers are not simply dichotomous, ChatGPT often produces very plausible and rather satisfactory answers in the form of human-like text. Such problems are at a level requiring high competency – that of a university professor – in risk management, mathematics, culture and so on. What is striking is the fluidity of the answers, even if they can look a bit repetitive, may need some corrections, and GPT-3 does not pass important tests (Floridi & Chiriatti, 2020). Notwithstanding the fact that in many industries and occupations text production will be automated (ibidem), at first impression the current generation of "transformers" seems to be largely usable for teaching and research activities and their stakeholders.

Under such dialogue conditions (e.g., responding in a fairly complete and stunning manner to some IS scientific questions we deem complex, or triggering positive emotions when dialoguing with a deceased relative (Henrickson, 2023)), playing with ChatGPT, and possibly its siblings, tends to make us believe that technology has magical power in terms of understanding and writing. However, as Dr Faust's tragedy reminds us, belief in magic is tempting but dangerous. Such technology comes with numerous challenges and pitfalls. Let us consider some of them as teachers and as researchers.

As teachers, it already challenges our grading of assignments and exams and, more importantly, it will affect the students' learning process. First, we see students succumb to the temptation of submitting homework using ChatGPT with or without being transparent about its use. The first issue will then be whether we can detect it. If we have a good knowledge of what each student can produce, this might be straightforward. But in some contexts (e.g., with freshman students), it might be difficult to have an idea about the integrity and capability of each student. Should we systematically defer that detection to anti-plagiarism software? What would be the cost for our institutions and humanity if we decide to do this? And, relatedly, will this lead to an ever-greater dependency on this technology, or on its anti-plagiarism winners, or both, as we continue training them by our own use? All these issues are big ethical issues that are not new, but simply more acute and visible now. Consider that anti-plagiarism software does not necessarily detect a fraud but simply the reproduction of some text (Introna & Hayes, 2011). The legitimacy of the reproduction may differ depending on educational culture, such that what is admitted as fine in Greece or Egypt is not in the U.K. (Introna & Hayes, 2011). This means that what we would qualify as plagiarism requires a human interpretation that we cannot entirely delegate to a machine. Second, in terms of learning, we have observed in the past that the introduction of certain
digital tools (e.g., calculators, text editing, GPS systems) has had a deep impact on our cognition. When we still practice with older tools, we retain our past capabilities, but when we don't and use the new tool, our cognition adapts and works differently. With ChatGPT, students will probably lose certain skills, like expressing themselves, but will also gain others. Is there a net benefit? What the impact will be if we use it more and more and let it be embedded in our office systems, we don't know. What we generally know, however, is that the old law of deskilling by automation applies to much of our digital world. Typically, those who use their GPS more have more difficulty reading a printed map or orienting themselves in nature without using a digital device. Certain types of competencies will be lost.

As researchers, we are already experienced in a given topic and can identify relevant questions. Then we can easily ask the AI to provide elements of discourse, part by part, to construct, at least partially, what can look like a decent research paper in IS, sociology or economics. For the moment, specialists themselves cannot distinguish abstracts produced by AI from those produced by researchers (Else, 2023). When under pressure to publish or perish, researchers, especially those who are not native English speakers, may be tempted to use it at least for some parts before submitting to journals publishing in English. ChatGPT is certainly best trained in this language. The problem is again ethical. It is formally different, but is it contesting existing ideas? Is it transparent? At some point, submitting papers with such a support system raises issues that go against traditional values we have long defended to distinguish original research from simple opinion or journalism. Those values revolve around the idea that research publications should propose something innovative and yet relevant, i.e., by definition not made out of what already exists by some non-transparent recombination; originality being about empirical facts (not known before) or analysis (interpretation or critique). To some extent, it should also be transparent about how ideas have been derived and articulated from the literature and how the methods have been designed and used in the particular instance of this research. This notably requires that researchers cite their sources of inspiration, both in order to demonstrate integrity and to facilitate further research through possible contestation. Something that is currently missing with ChatGPT. Complementary work will have to be performed humanly to make the paper look completely analogous to scientific discourse. While each of its outputs is possibly unique, this fake product will also defeat originality by definition since it cannot reason. It is nothing more than a "stochastic parrot" (Bender et al., 2022) that builds its sentences and discourse from data traces. Some unforeseen combination of text may be innovative and valuable, but the likelihood that it will be selected is very low and depends on the quality of data sources and training.

It is important to realise that this parrot does not offer a conversation. It does not understand, does not communicate, and does not produce knowledge. Comments about its (un)consciousness are just science fiction, and all of this is either marketing or poor understanding. It just produces text from digital traces. These traces by themselves do not constitute 'meaningful', scientifically valid and legitimated knowledge. They require a knowing subject to interpret and validate them (Haack, 1979). ChatGPT combines expressions that are currently already circulating relative to a domain and left as digital traces in the babel tower that it searches. It is functionally similar to the algorithm in Searle's Chinese room thought experiment (Searle, 1980) and does not compute meanings (Rowe, 2018). ChatGPT text production resembles what could at best be a kind of disembodied declarative knowledge. In sum, this generation of "transformers" is impressive but not intelligent, more harmful, and probably less useful for education and research than current search engines that give us elements to help us discern what may be relevant to our questions.

If we lose the ability to identify the literature background, we lose the capacity to assess the value of the contribution. If the paper is not sufficiently transparent about methods, we lose the ability to assess the authenticity of research and can be completely misled into wrong directions. Until now, for some genres of articles, like issues and opinion pieces, we could insist on interestingness and, to a lesser extent, on transparency. ChatGPT could nevertheless provide elements to fabricate "issues and opinion" papers or background sub-sections that simply synthesise what has already been published, and thus feed the scientific publishing industry and the editors of lower-ranked journals and conferences. If we want to make sure that authenticity is preserved, what ChatGPT and the digital transformation of research processes may bring about is to force us to be more and more demanding on transparency, at the risk of eliminating the most provocative and interesting research and/or publications because we may no longer have time for them. If, conversely, we admit that scientific discourse can be a hybrid with declared parts being co-produced by the researchers, who will still have to ask precise and significant questions of an AI agent, we will also adjust, simply more cooperatively, to the new magic tools for our fast-food publication productivity. What we will gain from them in the long run is quite uncertain. Many disciplines may become totally devalued and publications just considered, independently of their genre (Te'eni et al., 2015), an opinion not regarded as more valuable than another layman opinion. Researchers and the community will have to defend themselves against the potential dangers of such technology by asking themselves ethical questions related to the framing of the social and ecological dynamics of the problem of introducing magic into our educational, research and production systems. Beyond quality, integrity and reputational issues, a more fundamental issue is that of the impact on human agency and our free will (Kreps & Rowe, 2021). Even if we think it is smart to delegate our writing, since now we have an able slave, and think we still control our slave, we will soon be unable to write and therefore to think properly, because any competency is the product of a habitus. This does not necessarily mean that the slave will become our Master; we may only be more dependent on it and have lost some cognitive skills as humans.

Learning is fundamentally acquiring knowing-how and related competency. From a philosophical perspective, in this Human-Computer Interaction oriented towards learning, neither agent learns. The way transformers give answers deprives the knowing subject of knowing how and knowing why they may learn something if they rely on these tools (Ryle, 1945). Knowing-that is missing from the transformers' pseudo-declarative knowledge, because transformers cannot themselves reflect on knowing-how (ibidem). However, from a pragmatic viewpoint, the more we use transformers like ChatGPT, the more they are likely to get close to providing a correct answer. Conversely, the more users may lose their ability to reflect and discern alternatives, and to write in an original way. These tools will improve, while we will regress if we don't cultivate our traditional competencies. Using them, and what we cultivate, are political, societal and anthropological choices. Beyond differences in weight, research genres are based on epistemic values (e.g., contestation/interestingness, transparency, originality, authenticity). The epistemic values of teaching and research, which are so deeply embedded in western and eastern societies, are at risk if we don't reflect more deeply about the ethical risks introduced by the transformers, even if they are not intelligent. Beyond our occupations, as academics, we see that we cannot escape the fundamental question: what does it mean to be human, if AI is going to automate our writing, and more insidiously our thinking, while biasing our decision making?

4.2.14. Contribution 29

4.2.14.1. Recommendations for dealing with ChatGPT, Bard & Co. in academic publishing - Paul Jones and Sascha Kraus. The emergence of OpenAI's artificial-intelligence (AI) chatbot "ChatGPT" (Generative Pre-trained Transformer) and Google's recent reaction to it, called "Bard", provide a seminal moment for both academic publishing and higher education more generally (Alshater, 2022a, 2022b). For the academic community, it offers a significant challenge to proving the veracity and authorship of an output. ChatGPT threatens the very foundations of the
current publishing hierarchy, with its reliance on journal quality and reputation.

Although the chatbot has current limitations in constructing independent, critical, scientific, empirical manuscripts and has been guilty of producing some substandard content and even inventing sources/references, the emerging literature suggests that ChatGPT has been used for creating literature reviews and full papers in various disciplines (Adesso, 2022; Aydın & Karaarslan, 2022). Moreover, ChatGPT will undoubtedly evolve significantly in the coming months, because that is the nature of AI: collecting data, learning from it, and improving as a result. Basically, this is nothing other than the scientific process that we as scholars go through anew with every new research project – only "by hand".

As the news portal of the scientific journal "Nature" has recently reported, four scientific studies had already listed ChatGPT as a co-author (van Dis et al., 2023). However, editors of academic journals and other academics widely agree that AI systems do not meet the criteria for authorship because, unlike humans, they cannot take responsibility for content. The fact that an AI was involved in the development of a scientific article could, however, be mentioned elsewhere, some editors think. For example, a previous version (GPT-3) has already been used to assist the process of a literature review in a management journal, but notably only to sharpen the research question and to check the completeness of the results, not to write independent text (Burger et al., 2023) – and the use was explicitly noted in the article. This is a concern, and one that requires a seismic and immediate change in publishing protocols that affects all academic disciplines (Biswas, 2023). Furthermore, there are emerging instances of copied manuscripts appearing in journals by plagiarising authors. Whilst plagiarism has always been a threat, the opportunity to cheat has significantly increased with the emergence of ChatGPT and its equivalents. However, ChatGPT output currently remains undetectable by plagiarism software. Whilst this situation might change, it is likely that ChatGPT and its equivalent competitors will continue to evolve. Unquestionably, there will be an ongoing development war between the chatbot developers and the plagiarism detection software in coming months and years. In the meantime, what can be done to verify the authenticity of academic publishing? As journal editors, we would recommend the following practices be adopted:

1. The academic publishers must unite to condemn the practice of ChatGPT usage and identify a code of practice for the academic community. This code of practice should note that use of ChatGPT or equivalent software is forbidden in any context when in the process of producing an academic output for publishing. All publishers must produce website content to educate potential authors regarding best practice for proving their authorship of an article. Journal publishers that continue to use ChatGPT, or equivalent, must be blacklisted as unethical.
2. The manuscript management systems of the major publishers today almost all already have an integrated automatic plagiarism check. It is imperative that these be supplemented with AI output detectors, which also already exist or are in development.
3. The protocols surrounding journal paper submission must be further refined to enhance the reliability and proof of authorship. For example, authors must sign a licence statement noting that ChatGPT, or equivalent, has not been used in the development of the manuscript. Secondly, the authors must provide a copy of the study data with their submission. This would be a key source of evidence. This protocol has already been introduced by several publishers, although it is rarely fully enforced. Academic journals must enforce this practice with consistency. Data used within each paper must be verified and safely stored by the publisher. The challenge in this regard is where authors seek to publish multiple papers from the same dataset. This is perfectly reasonable but will require an audit trail to ensure originality.
4. Authors who have previously been found guilty of plagiarism could have this noted on their publishing record. This is potentially possible by using unique author identifiers like ORCID, for example. This might be regarded as an extreme measure, so it could include time-limited offences which drop off the record after a set period. It would act as an effective red flag for publishers and deter potential plagiarism.
5. Journals that publish systematic and bibliographic literature reviews might be challenged to identify plagiarised copies produced via ChatGPT, given the capability of the software to significantly alter original content. The key might be the data metrics, e.g., the number of papers evaluated and their sources, which would be a key to identifying plagiarism in a copy.

In conclusion, journal editorial teams might in the future require greater diligence with manuscript submissions to check their originality, drawing on a range of evidence including plagiarism checks, data verification and author track record. Journal publishers must support plagiarism detection software providers and demonstrate increased support to evaluate inappropriate practice. Unquestionably, this technology is here to stay. Moreover, it will continue to evolve, and will not only challenge but even overturn much of what we assume to be known. Thus, it can certainly be considered "disruptive". For journal editors, authors, students and academics, the following years will undoubtedly bring significant changes in the use of AI. This is neither condemnable nor dangerous. On the contrary, opportunities will gradually develop that may make our lives as scientists easier and better. However, as in real life, all this must take place within certain guard rails, so that there are as few accidents and violations of rules as possible, and the latter are also punished accordingly. Our contribution is nothing other than a call for the development of such protocols.

4.3. Ethical issues

4.3.1. Contribution 30

4.3.1.1. ChatGPT and Ethics – 'ChatGPT Doesn't Matter'? - Laurence Brooks. "The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which." Stephen Hawking (2016; https://www.cam.ac.uk/research/news/the-best-or-worst-thing-to-happen-to-humanity-stephen-hawking-launches-centre-for-the-future-of).

This often-quoted extract from Stephen Hawking's speech at the opening of the Cambridge Centre for the Future of Intelligence is a simple but powerful insight into our hopes and fears for AI. For a long time this has been the territory of more academic researchers, but with the OpenAI release of ChatGPT into mainstream use, it has become a lot more real for a lot more people.

In academic circles, ChatGPT has sparked a huge shift in the ongoing debate about students' work and plagiarism. The issue of students cheating the system by 'buying' their pieces of work for assessment submission has shifted to feeding the assessment information into ChatGPT, adding a few tweaks to what emerges, and then using that as their submission. The responses to this range from outright horror, to calls to make more individual and complex assessment briefs, to a 'well, if we can't beat them, let's join them' approach which embraces ChatGPT and makes it, and the use of it, part of the education process.

Similar issues are being seen in a number of other fields, from politicians' (York, 2023: https://www.france24.com/en/technology/20230203-chatgpt-use-of-ai-chatbot-in-congress-and-court-rooms-raises-ethical-questions) or business leaders' speeches to song lyrics (in a specific style), art, music or writing adverts. While there are a number of ways to understand ChatGPT, one of the key issues with this, as with any
AI system, is the ethical viewpoint. A number of studies have now looked at the ethical aspect of ChatGPT from the direction of how ethically it behaves when asked certain questions (Hasselbalch, 2022: https://dataethics.eu/testing-chatgpts-ethical-readiness/; Dantas, 2023: https://medium.com/@adilmarcoelhodantas/ethics-in-chatgpt-and-other-ais-ee31ce8e9f09). On the whole it does seem to be quite good, from an ethical stance, but it somewhat depends on how the questions are framed. However, others have pointed out that the output from ChatGPT is typically presented in the voice of a middle-class white male (Bjork, 2023). Not exactly a good representation of society or of more inclusive EDI (equality, diversity and inclusion).

There is also the question of how they have tried to make the system more ethical (less 'toxic'), through the use of cheap labour in an African country to manually tag unacceptable content for the system to learn from (Perrigo, 2023: https://time.com/6247678/openai-chatgpt-kenya-workers/). Again, not a very ethical approach to increasing the ethical stance of the technology.

We could also look at the ethical aspects of ChatGPT from an ethical principles and values perspective, and the questions they generate. These include:

• Irreversibility – how does the inclusion of ChatGPT into society change the relationship between people, and between people and technology? How does a technology such as ChatGPT blur the physical and virtual worlds when it becomes so difficult to tell where a piece of writing has originated from, human or non-human?
• Novelty – the anxiety of missing out. What does it mean for society when so many people turn to ChatGPT to gather their information for them, or equally when so many people are in fear of this novel technology (such as can be seen in the educational context)?
• Structures of power – given this powerful and potentially influential technology is being produced by a particular organisation (and there are now equivalent systems being proposed by organisations such as Google, or being bid for by Microsoft) (Walsh, 2023), one has to ask how this affects the power of the actor behind it, as well as who 'wins' and who 'loses' in society.
• Security – a traditional concern about how secure a system such as ChatGPT is and what is to stop some form of hacking which would intentionally bias some outcomes, for a whole range of reasons, including marketing and political ones (the 'soft war' approach, Walzer, 2017).
• Fake news and informational inflation – where media content is created without human supervision, the ability to produce 'fake news' at scale becomes significantly easier and faster. We are already aware of 'echo chambers', online and especially in social media, where specific views are reflected and amplified through exposure to only one side of a multi-faceted argument. With the new ability to produce reams of unverified one-sided content, a false sense of majority opinion could easily be produced.

So where does this leave us? Inevitably, with a flawed technology which operates within a flawed society. As the saying goes, 'you cannot please all the people all the time'. However, this is not good enough to excuse some of the flaws within technologies, including ChatGPT. The first thing is to make people aware of these failings, transparency being one of the key elements in the arsenal to tackle unethical technologies. However, I would also refer to one of the classic papers in the academic digital world, Carr's 'IT Doesn't Matter' (Carr, 2003). Essentially this says that once all organisations have IT, then it won't matter because the advantage has gone. Of course, the response to this (and there have been many) is that, rather like water or electricity (which all organisations also have), it is not having them that is important, it is what you do with them that counts. The same may well be true of ChatGPT and its 'me-too' versions which are about to be released. Since these are openly accessible systems, and we all have access to them, it will not be what we use, or don't use, them for, but how we choose to use them. For that, we do need to have our ethical lens very clearly up front, as this will be one way to try and choose a better path.

4.3.2. Contribution 31

4.3.2.1. Good bot or bad bot? On the ethics of ChatGPT - Bernd C. Stahl. ChatGPT is taking the world by storm. There is broad agreement that it has the capacity to change many aspects of private, organisational, and public life. There is currently much discussion of its potential and likely applications and the resulting social impact. This fuels the discussion surrounding the ethical evaluation of ChatGPT. In short, the question is: is ChatGPT a good bot or a bad bot?

This question is quite obviously a rhetorical device and cannot be answered using the binary distinction between good and bad. It is nevertheless an interesting question because it helps us focus our attention on the difficult topic of ethical concerns raised by novel and emerging technologies and innovations such as ChatGPT. Asking whether ChatGPT is good or bad should thus be understood as a way of triggering, structuring, and organising our thinking about the ethics of this technology. This, in turn, will not only inform the debate on the evaluation and governance of ChatGPT itself, but offer an example that can serve to inform our collective reaction to the next innovation arising from the rapidly progressing field of AI.

The eventual uses of ChatGPT and related conversational AI are difficult to predict. This is almost always the case when new technologies are emerging. Research on the ethics of such emerging technologies (Moor, 2008; Sollie, 2009) has therefore spawned a number of methodological approaches aimed at dealing with the specific challenges of this topic area (Floridi and Strait, 2020).

This brief editorial statement only offers the space to outline some aspects of such a prospective ethical analysis. An analysis would have to start with a more detailed understanding of the nature and capabilities of the technology. ChatGPT is a chatbot that utilizes OpenAI's Generative Pretrained Transformer 3 (GPT-3) language model to generate text based on user input. Chatbots are not a new technology. However, ChatGPT seems to surpass previous technology, and it has reached a level of sophistication that allows it to provide plausible answers to many questions. It offers a simple interface that allows users to interact with it seamlessly, and it produces output that is accessible, relevant, and, for the most part, correct. Its output is of high quality with regard to the language used and the content delivered. This makes it difficult to distinguish ChatGPT-generated text from human outputs. Human-like quality of text generation then promises many applications across all sorts of fields that require textual input and output. The examples currently most prominently discussed tend to refer to text written in academic environments, be they student essays (Stokel-Walker, 2022) or research papers (Van Noorden, 2022), as well as specific types of text-like outputs like computer programs (Castelvecchi, 2022).

Language is an important part of what makes us human. Changes to the way language is generated and perceived can have profound social and ethical repercussions. In addition, written language constitutes much of our society, our rules, norms, expectations, and routines. Again, changes to this will have results worth considering in detail.

Much of the current discussion of the ethics of ChatGPT focuses on predictable consequences of easy-to-foresee applications. To take the example of the use of ChatGPT for the creation of student essays, there are (ethical) benefits for those students who struggle with essay writing, who may use ChatGPT to construct better arguments or to polish their language to present good essays. Individuals struggling with traditional university essays are often those who did not have the opportunity to practice them, for example because they are from disadvantaged backgrounds or because they are non-native speakers. Helping them would
clearly be ethically desirable. This benefit needs to be weighed against the ethical problems arising from the possibility of plagiarism or unacknowledged use of automatically generated text and the threat to traditional university assessment methods. This example is widely discussed, and policies are already being drawn up and implemented to address these problems, for example in the form of university assessment procedures or journal publication policies.

In this editorial statement I want to argue that we can be more ambitious in engaging with the ethical issues of ChatGPT. The literature on the ethics of emerging technology has provided many insights into ethical concerns that are likely to arise. To draw on the example of an ethics foresight project that I was involved in (Stahl et al., 2017), past experience suggests that digital technologies are likely to have impacts on individuals, such as privacy, autonomy, treatment of humans, sense of identity, or security. They normally have consequences for society, for example by creating digital divides, affecting collective human identity and the shared view of the good life, changing ownership, data control, and intellectual property, with impacts on responsibility, possibilities of surveillance, or effects on cultural differences. Further likely ethical issues can be derived from the debate on the ethics of AI, such as biases, discrimination and resulting questions of justice and fairness (Stahl, 2021).

The current discussion of the ethics of ChatGPT has focused on a small sub-set of these concerns, notably those related to authorship and attribution of intellectual property. My plea in this editorial statement is to broaden this debate and undertake a more systematic and comprehensive analysis. Such an analysis would need to cover benefits and downsides. Clearly, a prospective analysis has limitations. We cannot foresee all possible applications of a new technology, nor all positive or negative consequences. However, past research has shown that there is a significant number of recurrent themes in the ethics of emerging digital technology. I am therefore convinced that a rigorous prospective review of the ethics of ChatGPT will help prepare us for predictable outcomes, thereby strengthening morally desirable opportunities and avoiding undesirable consequences.

4.4. Challenges, opportunities, and research directions

4.4.1. Contribution 32: towards a research agenda for generative AI in education, industry and research - John S. Edwards and Yanqing Duan
This contribution looks at the challenges, opportunities and impact of generative AI in education, industry and research and proposes a research agenda towards addressing the key challenges from a human-centred AI perspective. We will concentrate in this contribution on text- and diagram-based generative AI. Uses in the creative arts are beyond our scope here (although worthy of note), as are further improvements in automated translation between languages, where for example automated dubbing of TV shows and films is on the horizon.

Generative AI can be thought of in two ways. One is as the latest evolution of question-answering systems, which have been a part of AI for at least 30 years. The other is as a new development that enables computer systems to "write" plausibly in a way that was not possible before. Both of these views are valid separately, but in combination they offer more – a step, perhaps a major one, towards a creativity support system. The opportunities and impact offered by generative AI are mainly new, but many of the challenges are not. The principal challenge for generative AI, as with any computer-based support system, is whether or not the user can rely on it for the intended purpose.

We will look at the three sectors of industry, education and research in turn. We take industry first, as the challenges in that sector also apply to most others.

To help scope out the challenges posed by generative AI, we carried out a small experiment using the ChatGPT generative AI software that is freely available on the web (https://chat.openai.com). One of the authors asked ChatGPT the question "What are the disadvantages of generative AI?" on 9 January 2023 and again on 20 January 2023, as part of the same sequence of three prompts. As shown in Table 5, both responses included six specific disadvantages, but they were not the same: for example, legal issues were a heading in the first response but were not even mentioned in the second one. We will return to the significance of these differences later.

Table 5
ChatGPT's responses to "What are the disadvantages of generative AI?".

Response of 9 January 2023:
There are several potential disadvantages of generative AI, including:
1. Quality: Generated content may not be of the same quality as content created by humans. This is particularly true for tasks that require a high level of creativity or nuance, such as writing or art.
2. Bias: Generative AI systems can be biased if they are trained on biased data. This can result in generated content that is offensive or inappropriate.
3. Legal issues: There may be legal issues around ownership and copyright of content generated by AI. It is unclear how the law would treat content created by a machine, and this could create disputes over who has the right to use and profit from generated content.
4. Loss of jobs: In some cases, the use of generative AI may lead to the replacement of human workers, leading to job loss and economic disruption.
5. Dependence on technology: If an organisation becomes too reliant on generative AI, it may struggle to function if the technology fails or becomes unavailable.
6. Lack of accountability: It may be difficult to hold AI systems accountable for any errors or problems with generated content, as they do not have the same level of awareness or consciousness as humans.

Response of 20 January 2023:
Generative AI has several disadvantages, including:
1. Complexity: Generative AI models can be computationally expensive and require large amounts of data and computational resources to train.
2. Difficulty in assessing quality: It can be challenging to measure the quality of the generated data and to determine whether it is truly "realistic" or not.
3. Lack of control: Generative AI models can sometimes produce unexpected or undesirable results, such as offensive or biased content, due to the limitations of the data and algorithms used to train them.
4. Risk of misuse: Generative AI can be used for malicious purposes such as deepfakes, creating synthetic images, audio or videos that can be used to spread misinformation or propaganda.
5. Ethical issues: Generative AI raises ethical concerns about issues like privacy, autonomy, and decision-making, as well as potential biases in the data used to train the models.
6. Limited to specific task: Generative AI models are typically specialised for a specific task or type of data, and may not be easily adapted to other tasks or types of data.
Overall, while Generative AI has many potential applications, it is important to be aware of its limitations and to use it responsibly.
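The responses in Table 5 were collected through the public ChatGPT web interface. Purely as an illustration of how such a repeat-prompt comparison could be scripted, the sketch below uses the OpenAI Python client as it existed at the time of writing; the model name, the placeholder API key and the comparison step are assumptions for the example and do not describe how the authors ran their experiment.

# Illustrative sketch only: posing the same question twice via the API and
# checking whether the answers match. Assumes the legacy openai package (v0.x).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

PROMPT = "What are the disadvantages of generative AI?"

def ask(prompt: str) -> str:
    # gpt-3.5-turbo is assumed here as the model family behind ChatGPT.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

first = ask(PROMPT)
second = ask(PROMPT)

# Because the model samples its output, the two answers will usually differ,
# mirroring the variation between the two responses shown in Table 5.
print("Identical answers:", first == second)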
Taking this list as inspiration, we offer the following list of the principal general challenges/challenges for industry.

4.4.1.1. Ethical issues. Rapid advances in AI have already raised serious ethical concerns (Duan et al., 2019). These concerns are exacerbated by the rise of generative AI such as ChatGPT. "As a ML model, ChatGPT does not have the ability to understand or consider ethical and legal issues. It is not able to make judgments or decisions based on ethical or legal principles." (OpenAI ChatGPT).

4.4.1.2. Risk of misuse. A consequence of this is the risk of misuse, in the production of deepfakes, fake news or other types of disinformation. This also includes misinformation (unintentional misuse), for example
where the generative AI's outputs are based on incorrect or inappropriate data.

4.4.1.3. Transparency and explainability. As with many AI systems, generative AI systems function as a "black box". The consequent lack of transparency makes it hard to understand why the system's output is what it is (Elliot, 2022).

4.4.1.4. Bias. "Garbage in, garbage out" still rules. The generative AI's outputs will only be as good as the training data on which its learning was based. As with misinformation, simply using existing data may lead to unintentional bias, reproducing past mistakes and human biases.

4.4.1.5. Legal issues (1). There are few specific guidelines for the development of AI systems, and none specifically for the use of generative AI. Nevertheless, automated decision-making is regulated in some jurisdictions, and any use of generative AI in decision-making will have to comply with those regulations.

4.4.1.6. Legal issues (2). Generative AI has to be trained on very large amounts of data in order to be effective. Typically, this has so far used data "found" via the internet. It is not clear if industrial use of generative systems trained in this way for profit-making purposes breaches copyright; nor is it clear who owns the intellectual property rights in the output produced by a generative AI system.

4.4.1.7. Loss of jobs. If generative AI is used to produce text and presentations, which it can do very much quicker than humans and 24x7, what happens to the humans who used to do that work?

4.4.1.8. Dependence on technology. If generative AI applications become entrenched in an organisation's operations, there are two risks. First, what happens if the technology goes down? Second, although the generative AI systems may continue to learn "on the job", the human on-the-job learning that would have taken place may no longer be happening, thus reducing the chances for personal development.

4.4.1.9. Lack of originality. Generative AI is, by definition, limited to combining existing information, so any originality will be incremental at best.

4.4.1.10. Effect of culture and personal values. Some cultures, some organisations and some people may find it easier to accept the use of generative AI than others. This could lead to a new form of digital divide.

4.4.1.11. Design of prompts. The outputs from a generative AI system depend on the prompts it is given. Some people will find it much easier to devise effective prompts than others. There may be a need for widespread training in this new skill, which might even become as important as numeracy.

Moving on to the opportunities, in the context of industry, the opportunities afforded by generative AI are at present gaining more attention than the challenges. There are immediate opportunities for both customer-facing and internal use. All of them will require the AI's training database to include internal company information as well as generally available information. How this process will be managed is one more specific challenge for industry users and generative AI system providers: SMEs, for example, clearly could not build and run a generative AI system themselves with current technology. At present, those offering freely available generative AI software such as ChatGPT strongly warn against sharing sensitive information.

Generative AI offers a considerable improvement on current chatbots, being more conversational and better able to refer back to previous statements in a "chat" with a user. This is a clear benefit for both customer-facing contact centres and internal processes such as ICT support. Generative AI also offers opportunities to improve several aspects of customer interactions, from producing marketing copy and the early stages of the sales process, through detailed interactions customised to each user, into the aftersales area and potential longer-term relationships.

Creating computer code is an area where generative AI has already shown its potential.

Inter-departmental presentations are another potential opportunity. Here generative AI may be of specific assistance to people who are good at coming up with ideas but poor at explaining them to others. The ability of generative AI to produce arguments that persuade reluctant recipients has yet to be fully tested, though.

One of the authors has discovered the benefits of using ChatGPT for planning trips. It saves a huge amount of time that would otherwise have been spent searching the internet. ChatGPT can filter the content, digest it, and summarise the most relevant and useful information. It helps make decisions when selecting hotels, etc.

The impact of generative AI in industry will be considerable, and at present we have not even scratched the surface.

Next, we look at education, where one challenge has leapt to the top of the agenda at both school and university level in the space of a few weeks since the beginning of 2023, and for the moment overrides all others. This challenge is the effect of generative AI on assessments and examinations. This could be seen as either an ethical issue, a legal issue, or a type of misuse. In any form of written assessment except those under invigilated examination conditions, it is always a challenge to ensure that what a student submits is actually their own work. Copying someone else's work or paying someone else to write the submission have been problems for as long as written assessments have existed. The advent of the internet has made it much easier for "essay mills" to operate profitably. With software such as ChatGPT freely available, it is perfectly feasible for a student to use it to write some or all of their assessment. Current anti-plagiarism software such as Turnitin will not detect the use of generative AI, at least in the short term. As may be seen in Table 5, the responses that the AI system gives vary, even when the same prompts are used. Indeed, the two responses in Table 5 appear to be somewhat more different than might be expected from two students who had taken the same class. Eliot (2023) supports this reasoning. Software is already appearing that claims to be able to detect whether a piece of text has been written by a human author or an AI system, such as gptzero.me, but its performance in informal tests by one of the authors or colleagues was unconvincing.
ganisations and some people may find it easier to accept the use of Academics have already begun to experiment with asking generative
generative AI than others. This could lead to a new form of digital AI to answer examination papers or assessments at university level. The
divide. conclusions are somewhat worrying. Tabarrok26 reports the generative
AI software Claude as gaining a marginal pass on a university law and
4.4.1.11. Design of prompts. the outputs from a generative AI system economics exam. Kung et al. (2022) found that ChatGPT performed at or
depend on the prompts it is given. Some people will find it much easier near the passing threshold for all three of the examinations that make up
to devise effective prompts than others. There may be a need for wide the US Medical Licensing Exam. Terwiesch (2023) gives a detailed
spread training in this new skill, that might even become as important as investigation of the performance of ChatGPT on the final examination in
numeracy. Operations Management from the Wharton MBA, concluding it would
Moving on to the opportunities, in the context of industry, the op have achieved a passing grade of B or B-.
portunities afforded by generative AI are at present gaining more Apart from abandoning any form of “takeaway” assessment, the
attention than the challenges. There are immediate opportunities for remedies to prevent students using generative AI seem to lie in the
both customer-facing and internal use. All of them will require the AI’s realms of making assessments more personal to the student’s own
training database to include internal company information as well as experience. One of the authors regularly taught a final year class where
generally available information. How this process will be managed is all the students had done an industrial placement (paid internship) the
one more specific challenge for industry users and generative AI system previous year, and the main assessment was related to the role they had
providers: SMEs, for example, clearly could not build and run a gener and the organisation they worked for. This is in accordance with the
ative AI system themselves with current technology. At present, those
offering freely available generative AI software such as ChatGPT
strongly warn against sharing sensitive information. 26
https://marginalrevolution.com/marginalrevolution/2023/01/ai-passes-
Generative AI offers a considerable improvement on current chat law-and-economics-exam.html
bots, being more conversational and better able to refer back to previous
statements in a “chat” with a user. This is a clear benefit for both
approach suggested more generally by Cooray and Duus, who advocate assessments that showcase a student's specific abilities (20 Dec 2022 blog: https://charteredabs.org/dare-to-evolve-re-assessing-assessments-in-busi). As mentioned above, the freely available generative AI software strongly warns against sharing sensitive information, which would make it impossible to use for this type of assessment.

Similar issues apply to education at other levels, though less has been written about them so far.

There is nevertheless an opportunity for students to use generative AI in their learning, and for staff to use it to support their teaching. A generative AI system could, for example, act as a very patient tutor, explaining a concept in a number of different ways. An immediate risk is that, as with textbooks, some students know how to benefit from them and use them to learn, while others can only copy from them verbatim. The impact on assessment processes and regulations is likely to be considerable. The use of ChatGPT has already been banned in schools in New York City.

Turning to research, at least four academic articles have so far been published crediting a generative AI system as a co-author, though the editor of one of the journals concerned has said this should not have been permitted. Taking the least technical of the four (O'Connor & ChatGPT, 2023), ChatGPT "wrote" the first five scene-setting paragraphs of this editorial article on how generative AI might affect nurse education. All the literature references are in the part written by the human author. ChatGPT is capable of including academic references in its responses, but only when a prompt specifically asks it to do so. GPT-3, its predecessor, was able to include references when specifically asked in the prompts, as in (GPT, Thunström, & Steingrimsson, 2022), but did not include them in responses to further related prompts.

As well as this limitation, the generative AI available at present does not seem to have the capabilities to discover new theories from data or to suggest how to test theories, except at the level of standard methodological advice. It can certainly give useful introductions to topics, guiding the human researcher as to which ones to follow up.

The main challenges that apply to using generative AI in research are as follows.

4.4.1.11.1. Ethical issues. These stem from generative AI's lack of judgement or "moral compass". Generative AI is not responsible or accountable for the content it generates, which poses significant challenges as to how to safeguard academic standards and integrity. Is it ethical to use generative AI's content in academic publications without acknowledging the sources of data? Is it ethically and legally acceptable to directly use the text generated by a generative AI system in conducting and publishing research?

4.4.1.11.2. Legal issues. Generative AI poses a high risk for intellectual property (IP) and copyright protection. Taking ChatGPT as an example, OpenAI declares that it does not take any responsibility for any infringement of intellectual property rights that may occur as a result of its use. This has serious implications for how researchers can effectively avoid the potential IP infringement caused by the use of ChatGPT.

As described above, there have been publications with a generative AI as a co-author, but these are all clearly experimental. Can a generative AI system legitimately be credited as a co-author of academic publications if it is only a tool, not an independent researcher? Is the use of generative AI different from using software for statistical analysis or structural equation modelling? If so, how?

4.4.1.11.3. Transparency and explainability. Quality control of research outputs is of paramount importance in academia. Generative AI functions as a black box, meaning that its outputs cannot be fully scrutinised. This links to the next two issues.

4.4.1.11.4. Dependence on technology. Without understanding how the system generates its output, decisions, or predictions, less experienced researchers cannot learn from the process.

4.4.1.11.5. Loss of jobs. With a generative AI system to trawl through the literature, will as many human research assistants be needed in the future?

4.4.1.11.6. Lack of originality. The main criterion for publication in an academic journal is novelty. Generative AI cannot (yet?) match human originality.

The main opportunity for generative AI in the short term, as some of the experimental papers have demonstrated, is as a research assistant for desk research. Generative AI systems are capable of reviewing, analysing, and summarising a huge amount of literature instantly (instantly from the point of view of the user: there will have been a long training time before the system was "launched"). They therefore offer enormous potential for improving the effectiveness and efficiency of desk research.

Journal/conference editors and reviewers may also be able to use generative AI to screen submitted manuscripts in terms of the quality of writing, formatting and referencing, contradictory or inappropriate statements, and so on.

Further down the line, generative AI can be used to analyse qualitative data, subject to dealing with several of the issues indicated above, and with that of the security of responses that are often confidential. Meta-analyses and replication studies are two types of research project where generative AI's lack of originality, transparency and traceability may be less problematic.

To conclude this contribution, we consider how to address some of the challenges across all sectors, which may serve as a partial research agenda. We believe that the opportunities should be addressed with caution, and that the greatest potential for generative AI lies in human-AI hybrid performance (Duan et al., 2019).

4.4.1.12. How to make generative AI more responsible and ethical? There has been a growing effort, mainly academic, on developing and implementing responsible and ethical AI for the benefit of humanity. However, it seems there has been very slow progress in making notable practical breakthroughs so far. The rising ethical concern about generative AI is another wake-up call for advancing research in this area. For example, AI faces the challenge of overcoming bias "that lurks in the undercurrents of the internet and society more broadly" (McKinsey & Company, 2023). This is particularly acute for generative AI.

We feel the most important point is that ethical AI is not just the responsibility of the AI developers. Multiple stakeholders can, and must, make positive contributions towards designing and deploying more responsible and ethical AI. Politicians, managers, users, customers and even the general public all have inputs to make from their different viewpoints. Crucially, this requires developers to be far more open (and perhaps more truthful – the lack of openness makes it hard to judge) about what they are doing than some of the industry leaders have been so far. Researching this requires access.

4.4.1.13. How to minimise the risks and maximise the benefit of generative AI through a human-centred AI approach? The potential misuse or abuse of generative AI provides another strong motivation for an insistence on human-centred AI development and deployment (Duan, Edwards, & Dwivedi, 2019). The call for "human in control" or at least "human in the loop" is becoming ever stronger and more convincing. In the context of generative AI, researchers should explore how best humans and AI
ness-schools/ ative AI, as well as minimising negative impact and risks.
28
See https://www.theguardian.com/us-news/2023/jan/06/new-york-city-
schools-ban-ai-chatbot-chatgpt#:~:text=New%20York%20City%20schools% 4.4.1.14. How can generative AI be used to augment human capability?.
20have,in%20New%20York’s%20public%20schools ChatGPT has certainly impressed educators in terms of its potential for
29
See https://www.nature.com/articles/d41586–023–00107-z
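To make the human-in-the-loop idea above, and the earlier manuscript-screening opportunity, more concrete, the sketch below shows one way an editor might ask a chat-style generative AI to flag possible issues in a submission while keeping the accept/reject judgement entirely with a human. It is a minimal illustration only: it assumes the OpenAI Python client as it existed in early 2023 and an API key, and the prompt wording, model name and screening criteria are our own illustrative choices rather than a recommended procedure.

```python
# Minimal, illustrative sketch of "human in the loop" manuscript screening.
# Assumes: openai Python package (v0.27-style API, early 2023) and an API key.
# Note that manuscripts are confidential, so the data-sharing concerns raised
# above apply to any real use of an external service like this.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

SCREENING_PROMPT = (
    "You are assisting a journal editor. List possible problems in the "
    "manuscript excerpt below under these headings only: writing quality, "
    "referencing, contradictory statements. Do not make an accept/reject "
    "recommendation."
)

def screen_manuscript(excerpt: str) -> str:
    """Return the model's notes; the editor, not the model, decides what to do."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SCREENING_PROMPT},
            {"role": "user", "content": excerpt},
        ],
        temperature=0,  # favour consistent, conservative output
    )
    return response["choices"][0]["message"]["content"]

# Usage: notes = screen_manuscript(open("submission.txt").read())
# The notes are advisory input to a human editor, never an automatic decision.
```

The point of the sketch is the division of labour: the system drafts notes quickly, but responsibility and accountability remain with the human, as argued above.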
4.4.1.14. How can generative AI be used to augment human capability? ChatGPT has certainly impressed educators in terms of its potential for enhancing academic and students’ performance (Alshater, 2022a, 2022b). It can be powerful in combination with humans, augmenting their capabilities and enabling them to get work done faster and better (Chui, Roberts, & Yee). How generative AI can be developed and utilised to augment human capacity deserves further investigation across all sectors.

As far as we are aware, there are as yet no generative AI systems specifically intended to make or support decisions. This may require the integration of the generative AI system with other IS, and this area has seen very little research so far.

4.4.1.15. How to enhance generative AI’s benefit through personalisation? Currently openly accessible generative AI systems such as ChatGPT serve as a common content generating tool. Personalisation is an essential criterion for the successful adoption of AI tools (Duan et al., 2012). Many issues related to the challenges and limitations of generative AI can be more effectively addressed by customising the systems at individual and organisational levels. However, personalisation requires a huge amount of time, effort and resources from the end users and the organisations concerned, because they need to provide reliable data and fine-tune the model to align with their needs, values, ethical principles, and relevant regulations, and do so securely. This is a potentially huge research area.

4.4.1.16. How to deal with generative AI’s technical limitations? We use ChatGPT as an example of generative AI’s technical limitations, as it is the one we have been able to use for ourselves. These limitations offer research opportunities.

• Real-time automatic data update - One of the major weaknesses of ChatGPT is its inability to consider real-time events and information when generating output. Real-time content creation is still a challenging task that requires more advanced technologies.
• Dealing with imperfect information - How AI can deal with imperfect information is recognised as a significant challenge (Dwivedi et al., 2020). This is particularly evident in the context of ChatGPT.
• Transparency and explainability - It is imperative that generative AI should be designed and developed in a way that can be transparent, explainable, and accountable.
• Integration of generative AI with other systems to augment its overall system performance and capability.
• Development of domain-specific generative AI systems that can minimise ethical concerns, discrimination, plagiarism, copyright infringement, etc.

This contribution has offered a general overview on the use of generative AI and a partial research agenda. Can generative AI help in this research? That will be a research topic in itself. ChatGPT gave us some assistance in deciding how to begin this contribution, but except for the content of Table 5, all the writing and thinking is ours.

Recent breakthroughs in generative AI, like the rise of ChatGPT, have demonstrated exceptional performance, surpassing human beings in information processing and human-like content creation. Consequently, there has been a sudden explosion of interest and publications on generative AI. Despite the extensive reports and discussions about ChatGPT, there are numerous contradictory claims, unrealistic expectations, and superficial understanding, especially in terms of its capabilities, benefits, threats, and impact. Some people view generative AI as just a powerful tool, but others believe that it may be the beginning of an era of the creation of disruptive AI that will eventually destroy humanity (discussion on BBC Radio 4, 26 Jan 2023).

Overall, there are more questions than answers regarding the rise of generative AI. Researchers will have to embrace the evolving challenges as well as opportunities presented to maximise the benefit and mitigate the potentially negative impact of generative AI on society, industry, education and research. Addressing the emerging challenges and impact also provides the opportunity to better prepare ourselves for the emergence of other powerful AI tools in the future.

4.4.2. Contribution 33

4.4.2.1. Use the SACE and ADROIT framework to assess challenges, opportunities, and research agenda related to ChatGPT - Sunil Mithas. When Prof Dwivedi (EIC, IJIM) invited me to write a “two page multi perspective editorial” on ChatGPT, I initially assigned the task to ChatGPT and the results were far below my own expectations given the hype that I had seen on social media. Much of what ChatGPT wrote was common knowledge or verbosity without a point of view or original thinking. It pointed to “large computational requirements” as a challenge to deploy for “smaller organisations or individuals” and for “real-time applications such as chatbots.” In addition, it acknowledged a “tendency to generate biased or offensive text” which can “have negative impacts on society” such as “fake news or deepfake videos, which can be used to manipulate public opinion” and “the displacement of human translators.” Yet, it pointed to opportunities such as “create content such as articles, stories, and scripts. This can be particularly useful for content creators, such as writers and journalists…[to] improve language translation, which can facilitate communication and understanding between different cultures.” And then it went on to suggest further research opportunities in the above areas “to understand the potential impacts of ChatGPT on society and develop strategies to mitigate any negative effects.”

I draw here on my prior research and writings for assessing ChatGPT or technologies similar to it and argue that a sobering and more useful view of ChatGPT will require consideration of the functionalities that it may enable: (1) the ability to sense the environment (Sense); (2) the ability to analyse information (Analyse); (3) the ability to collaborate with others within or across firms (Collaborate); and (4) the ability to automatically complete tasks (Execute) – sense, analyse, collaborate, and execute (Mithas et al., 2022). The SACE acronym captures these four functionalities and is a useful starting point for a critical assessment of emerging technologies such as ChatGPT and the Metaverse. Applying the SACE framework to ChatGPT, one may ask, will ChatGPT by itself help to “sense” the environment beyond what is possible currently? Can ChatGPT help to “analyse” data in more insightful ways given the dismal record of similarly hyped prescriptive and cognitive analytics as IBM discovered? Will ChatGPT enable new possibilities to “collaborate” across firms and value-chains by itself? Finally, will ChatGPT enable “execute” capabilities to open up new possibilities for delivering new efficiencies in manufacturing, and particularly service sectors such as healthcare and education that some argue are subject to Baumol’s disease (Baumol, 2012)?

Table 6
The ADROIT framework for assessing the economic value of AI technologies such as ChatGPT.

Framework component: Examples
Add revenues: AI systems can enable firms to grow revenues by scaling up their volumes, making better pricing decisions, or through customization.
Differentiate: AI may enable firms to offer personalised products and services for differentiation.
Reduce costs: AI may reduce costs through automation, and even elimination, of tasks.
Optimise risk: AI can help to manage risks, but it also creates new risks that are not yet fully understood. Managing such risks will require careful thinking by policymakers as well as managers.
Innovate: AI can enable innovation and rapid new product development.
Transform: AI can help to transform society and lives by empowering marginalised groups and achieving sustainable development goals if managed and regulated thoughtfully (Tang, 2022).
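To show how the SACE questions and the ADROIT dimensions of Table 6 might be used together in a structured assessment exercise, the sketch below encodes them as a simple checklist an analyst could fill in for a given technology. The idea of recording free-text evidence per dimension and the example entry are our own illustrative assumptions, not part of the original frameworks.

```python
# Illustrative only: the SACE questions and the ADROIT dimensions of Table 6
# captured as a structured checklist for assessing an emerging technology.
from dataclasses import dataclass, field

SACE_QUESTIONS = {
    "Sense": "Does the technology by itself help to sense the environment?",
    "Analyse": "Does it help to analyse data in more insightful ways?",
    "Collaborate": "Does it enable collaboration across firms and value chains?",
    "Execute": "Does it enable tasks to be completed automatically?",
}

ADROIT_DIMENSIONS = [
    "Add revenues", "Differentiate", "Reduce costs",
    "Optimise risk", "Innovate", "Transform",
]

@dataclass
class TechnologyAssessment:
    name: str
    sace_notes: dict = field(default_factory=dict)    # functionality -> evidence
    adroit_notes: dict = field(default_factory=dict)  # dimension -> evidence

    def open_questions(self):
        """List functionalities and dimensions still lacking recorded evidence."""
        missing = [q for q in SACE_QUESTIONS if q not in self.sace_notes]
        missing += [d for d in ADROIT_DIMENSIONS if d not in self.adroit_notes]
        return missing

chatgpt = TechnologyAssessment(name="ChatGPT")
chatgpt.adroit_notes["Reduce costs"] = "May automate parts of customer support."  # illustrative entry
print(chatgpt.open_questions())  # everything not yet assessed
```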
I avoid the temptation to call ChatGPT a "disruptive innovation" given my scepticism about the key premises of such labels (King & Baatartogtokh, 2015; Tellis, 2006). Instead, a theory of disruptive debottlenecking can be helpful here by asking if a new technology helps to avoid or overcome tradeoffs that prior generations of technologies took for granted (Mithas et al., 2022). In the end, the business or even societal value of ChatGPT must be assessed in terms of its influence on adding revenues, differentiating, reducing costs, optimizing risks, innovating, and transforming business models and processes from a business and societal perspective, as I have argued elsewhere and as shown in Table 6 (Mithas, 2016; Mithas et al., 2022, 2020).

The hype about AI is not new and goes back to Herbert Simon’s predictions about AI in 1960 (Mithas et al., 2022; Simon, 1960). Continuing advances in AI can bring changes to how firms compete, yet such changes often take decades to materialise and not all changes are positive. Firms must develop their AI strategy as part of their overall IT strategy and competitive strategy, with attention to governance and execution issues (Mithas & McFarlan, 2017). It is time to reimagine digital and business strategies at the corporate, competitive and functional levels, leveraging the opportunities that AI presents, to rethink how firms create value using AI for the business and its customers, and how they appropriate some reasonable part of that value among their key stakeholders (Mithas et al., 2020). Although there are good reasons to have a healthy scepticism for hype created by self-serving technology entrepreneurs or uninformed commentators, some aspects of AI do raise fundamental philosophical questions beyond business processes to how such technologies shape human identity. Simon exhorted us to carefully assess the role of technology in rethinking "man’s conception of his own identity as a species…a new way of describing his place in the universe" (Simon, 1960, p. 55) and we must bear that in mind as we engage in our roles as educators and researchers.

4.4.3. Contribution 34: ChatGPT: challenges and opportunities - Margherita Pagani

The advent of ChatGPT (Generative Pre-trained Transformer), a highly advanced AI chatbot classified as a large language model, has caused a stir in the technology world. This cutting-edge tool, capable of generating convincing and intelligent-sounding text in real time, raises concerns about the authenticity of the generated text and the potential for the perpetuation of biases in its training data. Academic stakeholders including journal editors, researchers, and publishers are engaged in discussions regarding the appropriate utilization of AI tools in the publication process and the acceptability of citing AI systems as authors (Stokel-Walker, 2023). Despite these challenges, ChatGPT can potentially transform how we approach education, business, and research. The primary focus of this editorial is to emphasise the significant challenges and opportunities in these three key areas.

In the realm of education, conversational AI platforms like ChatGPT and Amelia have the potential to revolutionise the traditional faculty-centred approach by providing 24/7 access to virtual instructors with globally validated knowledge, available in any language and format, and on any device (Wind, 2023). This can transform the role of the faculty into facilitators of learning and implementation. This shift towards a student-centred educational paradigm can prove to be highly beneficial.

In businesses, conversational AI can enhance creativity by providing fast, effective, and human-like customer interactions. This can lead to new and innovative ways of customer engagement, automate routine tasks, and provide valuable insights into customer behaviour and preferences (Kietzmann et al., 2018; Ma & Sun, 2020). By freeing up time and resources, businesses can focus on more creative and strategic initiatives such as product development and marketing campaigns (Pagani and Champion, forthcoming). Companies such as H&M, Airbnb, Capital One, and Oracle are also using conversational AI for enhancing customer service, providing personalised recommendations, assisting guests, offering personalised banking services, and providing enterprise solutions. Bank of America uses conversational AI in its customer service operations to automate routine tasks and provide fast and effective interactions with users (Bank of America, 2022). This has helped the bank reduce wait times and improve customer satisfaction. Google uses conversational AI in its Google Assistant platform to provide users with a natural and intuitive interface for accessing information and performing tasks, including data analysis, providing insights into user behaviour and preferences (Google, 2023). The use of conversational AI offers a wide range of potential applications allowing companies to improve customer experience, automate routine tasks, gather customer insights, and drive innovation in their industries.

Despite the numerous benefits in various applications, its usage also entails certain drawbacks that should not be overlooked:

1. One of the major challenges posed by ChatGPT is the presence of deep fake text, as demonstrated by a recent study (Else, 2023) that proved the ability of the AI system to write convincing fake research-paper abstracts. To address this issue, there is a need for ongoing research to develop methods for detecting deep fake text. Moreover, addressing the bias present in the model’s training data is also crucial. For example, if the model is trained on a dataset that contains gender stereotypes, it may generate text reinforcing those biases.
2. Another significant drawback of ChatGPT is the propagation of misinformation. This can occur when the model is fed false or misleading information during its training, leading it to produce inaccurate or unreliable responses. This can have significant consequences, especially in applications where the information provided by ChatGPT is relied upon, such as in the context of decision-making or information dissemination.
3. Another issue is the lack of awareness of local rules and regulations. ChatGPT operates on a global scale, and its responses may not be aligned with local laws and regulations. This can result in the dissemination of information that is legally or culturally inappropriate.
4. Finally, the training of ChatGPT on the vast amount of information available on the Internet, which encompasses both good and bad aspects of human behaviour, can result in the propagation of inaccuracies. This highlights the importance of monitoring and verifying the information used to train language models, as well as the outputs they produce.

By focusing on ongoing research to address the potential risks, we can ensure that this technology’s benefits are realized and positively impact society. The key areas of focus should include: (i) developing methods for detecting and mitigating bias, (ii) improving deep fake text detection (one simple detection heuristic is sketched below), (iii) investigating applications in education and accessibility, (iv) exploring the use of ChatGPT in industry, and (v) advancing our understanding of language and human communication.

In conclusion, as the use of generative AI systems like ChatGPT becomes increasingly prevalent in various industries, it is imperative that employees and managers of the future (MoF) acquire a range of human skills to work effectively and responsibly with these systems. This implies challenges not only to the educational paradigms but also to the curricula that business schools must provide. From critical thinking and digital literacy to creative problem-solving and ethical decision-making, the skills needed to thrive in the AI-powered future are diverse and multi-faceted. Understanding how to manage data, apply interdisciplinary knowledge, and communicate effectively with technical and non-technical audiences are essential skills in this new era. The acquisition of these skills will ensure that individuals are well-equipped to navigate the rapidly-evolving landscape of AI, and can make informed and ethical decisions as they work with these powerful systems.
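One strand of the research agenda above, improving deep fake text detection (focus area ii), can be made concrete with a small sketch. A heuristic used by several public detectors is that machine-generated text tends to be unusually predictable to a language model; the code below scores a passage by its perplexity under GPT-2. This is only an illustration of the idea: the model choice, the threshold and the binary flag are our assumptions, and such heuristics are known to be unreliable on their own.

```python
# Illustrative perplexity-based heuristic for flagging possibly machine-written text.
# Assumes the Hugging Face transformers and torch packages; GPT-2 is used only as
# a convenient open model, and the threshold below is an arbitrary example value.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average predictability of the text under GPT-2 (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(enc["input_ids"], labels=enc["input_ids"])
    return float(torch.exp(out.loss))

def looks_machine_written(text: str, threshold: float = 40.0) -> bool:
    # Very low perplexity suggests text a language model finds "easy" to predict,
    # which is one (weak) signal of machine generation.
    return perplexity(text) < threshold

# Usage: looks_machine_written(open("essay.txt").read())
# Any such flag should trigger human review, not an automatic accusation.
```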
4.4.5. Contribution 36
like comments on certain personalities, religious matters, and other delicate issues need to be addressed, as each region may have its sensitivities.

Unemployment and Reskilling: Highly populated countries like China, India, and Brazil have sizeable employable populations. ChatGPT is likely to disrupt many industries and automate many routine and repetitive jobs (Agarwal et al., 2022). This would lead to mass unemployment and reskilling requirements.

Cybercrime: Cybercriminals, even with limited coding and IT skills, may become more aware of hacking tools through ChatGPT. Challenges regarding creating appropriate controls on ChatGPT’s dissemination of information about creating phishing and malware software need to be addressed.

Integrity issues: Ways to identify the actual human scriptwriter, as compared to text created by ChatGPT, have been elusive till now. The black box phenomenon, where we sometimes do not understand the logic behind a ChatGPT answer, is intriguing. The accountability for certain answers on ChatGPT, as in healthcare or mentoring services, remains a challenge that needs to be addressed.

4.4.5.1.3. Research agenda. Generative AI has gained the interest of many investors and organisations. However, these AI-related advancements, including deep learning chatbots like ChatGPT, will never replace humans but would aid in making us more productive and help in achieving higher-order needs. The best results would emerge with appropriate augmentation of technological developments like ChatGPT, metaverse, 3D printing, blockchain, and internet of things (IoT) with human intelligence.

Thus, the scholars working in this domain may explore the following research questions (RQs):

RQ1: What should be the globally accepted regulation standards for platforms like ChatGPT and MedPaLM?
RQ2: What should be the "fair price" for ChatGPT services to individuals and corporations?
RQ3: How can ChatGPT reduce the widening digital divide between persons having and those not having internet access?
RQ4: What are the key markers to identify whether the text is generated by ChatGPT or written by a living person?

4.4.6. Contribution 37

4.4.6.1. ChatGPT: challenges, opportunities, impact and research agenda - Paul Latreille. ChatGPT is an advanced AI chatbot developed by OpenAI. Built on large language models, it launched on 30 November 2022 and has attracted unprecedented interest due to its ability to provide detailed, ‘conversational’ responses to text prompts on a wide range of topics and in a variety of styles, “to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests” (https://openai.com/blog/chatgpt/). Analysis by Similarweb reported in the Guardian suggests ChatGPT reached 100 million users in the first two months. Both Microsoft and Google are rapidly developing competitor platforms. Others in this volume are more qualified to elaborate on ChatGPT’s wider capabilities and limitations; the focus here is on its potential implications for educational practice.

Public interest in ChatGPT is similarly both recent and viral. Google Trends, for example (https://trends.google.com/trends/explore?q=chatgpt), documents insufficient data to register searches for the term “ChatGPT” prior to the week 20–26 November 2022, rising to a maximum value of 100 in the most recent week with full data at the time of writing (i.e., 29 January-4 February 2023). Focusing only on the rather nebulous ‘Jobs & Education’ category (https://trends.google.com/trends/explore?cat=958&q=chatgpt) is revealing: two of the three top ‘Related Topics’ are “Plagiarism” and “Content similarity detection”, while the second and third most frequent ‘Related queries’ are “ChatGPT plagiarism” and “plagiarism” respectively (with “ChatGPT checker” fifth; these queries are easy to reproduce programmatically, as sketched below). This reflects a (arguably the) major concern among educators, namely the downside of ChatGPT’s potential for students to ‘outsource’ tasks, for example for-credit essays or personal statements.

A number of implications follow. First, reflecting an existing theme in the literature on academic misconduct/integrity, educators will need to wrestle further with assessment design. At least some existing strategies to ‘design out plagiarism’ (Carroll, 2007), including the use of “academic custom writing services, essay banks or essay mills, private tutors or freelance workers” (Rafalin, undated: 1) are likely to be less effective against ChatGPT and its counterparts. Some educators/institutions may retreat to the comfort of traditional time-limited, in-person examinations; others may instead “consider innovative formats of assessments, particularly those… [requiring] creativity and… critical thinking” (Zhai, 2022: 10). Given the potential for ChatGPT to change the nature of office-based jobs in the future, part of the change to assessment is also likely to involve an embrace of such technologies (Jisc, 2023), building them into novel forms of ‘authentic assessment’ and facilitating a new generation of ‘digital literacy’. In the short term however, like COVID, ChatGPT appears to have largely caught institutions off guard and scrambling to adapt to a shifting reality.

One hitherto unmentioned corollary of ChatGPT’s ability to author essays is its impact on parts of the assessment ecosystem. For example, it seems likely to prove highly disruptive to the business models of contract assessment-writing services. Currently free, even a paid-for ChatGPT is likely to prove cheaper, offer faster turnaround times and (as it learns) deliver similar quality to existing human services.30 A second is the impact on plagiarism detection services such as Turnitin. Such services are already gearing up to tackle the threat to academic integrity of ChatGPT (turnitin, 2023). At the time of writing, a student-developed app – GPTZero – can do so with a high degree of accuracy, subject to access to the underlying models (Wilkins, 2023). That may not continue to be the case, at least for AI providers dependent on commercial funding streams. Either way, an ‘arms race’ seems almost inevitable, at least in the absence of assessment adaptation of the type described earlier.

30 An interesting question is the extent to which identical (or very similar) prompts by a cohort of students deliver distinguishably different responses.

Much of the discussion of ChatGPT focuses on its negative potential in terms of academic misconduct. However, models of the type represented by ChatGPT also offer a number of affordances that could substantially enhance educational practice and student learning. Firat (2023), for example, points to the ability of AI to provide a personalised and interactive experience in relation to progress, support, feedback and guidance, thereby enhancing learner autonomy, experiences and engagement. Indeed, one could easily envisage a future in which marking of student assignments is primarily undertaken by AI-based systems, providing more detailed and faster feedback on both formative and summative assessments than academic staff. Scalability is a key feature, given the massification of higher education globally (Tight, 2019), and where delivering on promised turnaround times for marking in a drive for higher student satisfaction metrics has exacerbated work intensification. How the academic time freed up is redeployed is, of course, open to conjecture.
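The Google Trends observations reported above can be reproduced programmatically, which may be useful for tracking how educator concerns evolve over time. A minimal sketch follows; it assumes the unofficial pytrends package, and the category code and date range simply mirror the queries described in the text.

```python
# Sketch: reproduce the Google Trends queries discussed above using the
# unofficial pytrends package (pip install pytrends). Category 958 is the
# 'Jobs & Education' category referred to in the text.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)

# Worldwide interest over time for the bare term.
pytrends.build_payload(["ChatGPT"], timeframe="2022-11-01 2023-02-04")
interest = pytrends.interest_over_time()
print(interest.tail())

# Restrict to the 'Jobs & Education' category and inspect related queries,
# where plagiarism-related searches were reported to dominate.
pytrends.build_payload(["ChatGPT"], cat=958, timeframe="2022-11-01 2023-02-04")
related = pytrends.related_queries()
print(related["ChatGPT"]["top"].head())
```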
Other benefits can easily be identified. In a fascinating article written in conjunction with ChatGPT, Zhai (2022) looks, inter alia, at AI’s implications for the future of education. Along with personalisation of learning, tutoring/mentoring and support, and automation of various routine administrative functions, such systems are predicted to permit the development of AI-powered educational games and simulations.

Ultimately, the novelty of ChatGPT renders prediction a risky game, other than that such technologies are here to stay; AI is a genie that cannot simply be put back in the bottle. For educators, two aspects are vital. The first is the need “to reflect on our learning goals, outcomes, and assessments… We should not let our existing approaches remain simply because this is the way we have always done them” (CMS|W, 2023). The second is that such a task will be challenging given pedagogy is likely to be playing catch-up for some time; (further) educational research in this domain is imperative!

4.4.7. Contribution 38

4.4.7.1. ChatGPT: challenges, opportunities, impact and research agenda – Robin Gauld. For readers of IJIM, ChatGPT will have been a topic of considerable discussion, as well as consuming a lot of time experimenting with and thinking through its implications and applications. This certainly describes my experience. As with others, I have used ChatGPT for everything from developing a new course I will teach later this year, through to solving an obscure question relating to referencing software.

It did a very good job indeed of providing me with a solid foundation for the course. In moments it spun out a course overview and set of objectives as well as an outline of what would be covered through 12 weeks of classes. It generated a very good essay question and, when asked, wrote a pretty good 1500-word response. It gave me material to use in slides and also a good reading list for the course. All of this was done in a matter of minutes, most of which was me reviewing responses and deciding what to ask next.

Fortunately, the course is one most business schools globally offer, in business ethics, ironically. ChatGPT would have struggled to generate a course on a completely new topic that had never been offered anywhere before. Journalists I know who have been worrying about what it means for their jobs have been relieved to learn from ChatGPT that ‘as an AI language model I don’t have access to real-time news updates. My training data only goes up until 2021, and I don’t have the ability to access current events’. The database for ChatGPT is, therefore, not current. It is not presently likely that the information gathering and analysis involved in many roles will be surpassed by generative AI. Nor is it likely that the work done by humans in using the generated information will be subsumed.

I know school teachers who, at least in my native New Zealand, have taken to ChatGPT like a drug – apparently it has gone viral. Teachers are marvelling and grateful for the way it is able to produce basic lesson plans and cut through other administrative and compliance work – paperwork that has to be submitted to school administrators and government agencies – that takes considerable time to collate. This is time teachers do not have in their busy teaching schedules, and is usually done in evenings and weekends. ChatGPT is freeing them up to focus on what matters – working with our children and their parents.

For basic tasks that an internet search is not able to easily help with, ChatGPT seems to fill a gap. My referencing software question, a genuine issue I was facing where a style I required was actually different from the named style in EndNote, was nicely solved by ChatGPT. I Googled away getting nowhere, then thought I would try ChatGPT. To my relief, it solved my problem, telling me the name of the style I needed (a rarely used variation, and plug-in). This is the power and opportunity it offers.

With ChatGPT, OpenAI has upped the stakes. Investments in it have grown significantly, and the other big tech companies are working concertedly on their own offerings. As indicated, the opportunities and impact of generative AI are tremendous. For those of us involved in leading and teaching in universities, an acute awareness and embracing of the technologies will be important. Our students will undoubtedly embrace ChatGPT. Through incorporating it in the class and beyond, we must equip them for the workforce of the future. The worst students, of course, will use the technology to cheat. Unsurprisingly, there is now an industry of ChatGPT detection tools emerging, of variable reliability.

The best students, along with their teachers, will find creative ways of working with the technology. We need to think carefully about how to design classes where students actively use ChatGPT in a process of co-curricular and learning design. Should we, as many are suggesting, incorporate aspects of oral examination into classes, or are there other ways of ensuring students are using the technology honestly and productively? There is a potential revolution in terms of class design and pedagogy on our doorstep. We have limited time to think through our responses and must learn from one another in the process.

The research agenda is wide ranging, encompassing every academic discipline, with significant potential to make major gains. Areas such as healthcare and education are obvious ones where research is needed into how the technology will be most effectively used, what the risks are and what it means for training, work and service delivery. The same applies to areas such as law, where there is considerable scope for assisting with processes of researching and preparing cases. There is going to be a huge demand for research into implications for information gathering, decision-making and services in government and policy making, as well as other areas with public impact including for businesses and shareholders. The research community must engage with how ChatGPT is used for research, what key questions need probing, and what the longer-term implications are. This includes a significant new risk to all who use the technology to assist their work – that of cyber trust.

In sum, ChatGPT is a game-changer. The impacts may not be as profound as some predict, but it is clear that as it evolves and we adapt as users, there will be some significant shifts resulting.

4.4.8. Contribution 39

4.4.8.1. Challenges, opportunities, and impact of ChatGPT in the IT industry - Nishith Pathak. OpenAI’s GPT-3 (Generative Pre-trained Transformer 3) is a ground-breaking AI language model that has captured the attention of the AI community and beyond. ChatGPT can safely be described as a viral sensation that could completely change the way we do things currently. It is probably one of the fastest-growing technologies the world has ever adopted. It took 3.5 years for Netflix to reach 1 million users; Facebook took just 10 months, Spotify around 5 months, and the iPhone 74 days. You would be amazed to hear that it took ChatGPT just 5 days to reach 1 million users. For some, it is just a writing assistant tool that can provide suggestions or even write entire paragraphs based on what you have told it. For others, it is one of the largest neural networks, trained with over 175 billion parameters, making it one of the most advanced language models in the field of NLP. Nevertheless, given this adoption rate, it is safe to say that for the first time the world has seen a technology that is going to narrow the gap between humans and machines and revolutionise the way we interact with computers, making it easier and more natural to communicate with them. Probably, sooner or later, one of the versions of ChatGPT will pass the Turing Test.

4.4.8.1.1. Challenges to the IT industry. With its ability to perform a wide range of language tasks, from text generation to question answering, with human-like proficiency, GPT-3 represents a major advance in the field of AI. ChatGPT is already creating a lot of buzz in the IT industry, and it would not be absurd to say that it is poised to have a significant impact on, and potentially revolutionise, the way organisations do business. However, as with any new technology, ChatGPT also presents several challenges that must be addressed before it can be fully adopted and utilised in the IT industry.

First and foremost, the biggest challenge for adopting ChatGPT concerns threats to privacy and security. The utilization of ML algorithms in ChatGPT results in the processing of a massive amount of data, making it a susceptible target for cyberattacks. The sensitive information processed by ChatGPT can become vulnerable to theft or abuse, posing significant privacy and security threats.

The accuracy of ChatGPT algorithms is dependent on the data they are trained on, which can result in the introduction of biases and discrimination. This can be particularly concerning when ChatGPT is utilised in sensitive domains such as healthcare and civil services. As I write this
article, I also keep on hearing about ChatGPT generating incorrect results. This would certainly require rigorous checks to ensure the correctness of its output.

Despite the swift advancement of ChatGPT, a considerable number of individuals still lack an understanding of how it operates and are wary of relying on it. This reluctance can restrict the potential impact of ChatGPT and impede its widespread adoption.

The implementation and upkeep of ChatGPT technology can be financially taxing, requiring specialised hardware and software as well as trained personnel. This can pose a challenge for smaller organisations or those with limited budgets.

4.4.8.1.2. Opportunities in the IT industry. ChatGPT has the potential to transform the way organisations operate and drive innovation. As every company is an IT company nowadays, ChatGPT is poised to transform each and every domain. As I write, experts and researchers are exploring its potential to streamline processes, improve customer engagement and drive innovation. At a high level, I will try to summarise some of the quick-win opportunities for the IT industry.

One of the first opportunities for the IT industry is to quickly automate repetitive tasks and improve customer experience. ChatGPT can certainly be an efficient tool to free up human resources and push them to handle more complex queries. Many IT companies are currently going through attrition of 20% or more; one of the foremost reasons is employees not getting the right work. Automating repetitive tasks would ensure employees focus on strategic initiatives, raising their morale and thereby increasing productivity and retention.

What makes ChatGPT so unique and popular is that it gives users one direct answer. This also enables ChatGPT to help organisations make better decisions by providing data-driven, actionable insights. The current limitation of ChatGPT is that it has been trained only on data up to 2021, so getting real-time insight would be challenging, but this does not stop organisations from using ChatGPT to derive actionable insights.

ChatGPT works as a companion tool for people working in the IT industry. For example, a developer can use ChatGPT as a companion tool to help generate test cases and test data, debug and troubleshoot code, and even, to some extent, write code to do a particular task. Similarly, ChatGPT can work as a companion tool for people working in marketing, learning and development, HR, finance and many more areas. ChatGPT certainly has immense potential to unearth new business potential, generate new revenue streams and, above all, be a driver for innovation.

4.4.8.1.3. Impact of ChatGPT in the IT industry. With the advent of ChatGPT, the IT industry has a lot of challenges and opportunities to explore, but it is also going to be strongly affected in the way it currently operates. For example, ChatGPT would certainly disrupt and automate lots of tasks and, in doing so, displace lots of jobs. Currently, many IT companies generate revenue from monotonous business, charging clients year on year for customer support delivered by hundreds of staff. ChatGPT would revolutionise the entire spectrum of customer experience. This means creating many new jobs and tasks but also redefining existing ones, especially in customer experience to get started. Slowly and gradually, ChatGPT has the potential to affect other industries and verticals.

I frequently emphasise that those individuals and organisations that adopt technology and AI will eventually surpass those that do not. Therefore, companies and organisations need to incorporate ChatGPT in innovative ways in their offerings, providing cutting-edge benefits to organisations.

As an author, I acknowledge that ChatGPT presents several challenges, but its potential benefits and impact on the IT industry are significant. From improving customer service and decision-making to enhancing productivity and generating new revenue, ChatGPT holds the promise of revolutionizing the way we live and work in the digital era.

4.4.9. Contribution 40

4.4.9.1. Challenges of ChatGPT - Indranil Bose. It is well-known that ML-based AI systems that are used in knowledge work suffer from several challenges (Zhang et al., 2020). ChatGPT, as a form of conversational AI which has caught the attention of millions around the world, suffers from its own share of challenges, which can be discussed under the following four headings.

4.4.9.1.1. Security challenges. While the power of ChatGPT to construct prose that is like that of humans has received the admiration of many, it is possible that this same ability can turn out to be extremely dangerous if it falls into the wrong hands. ChatGPT has been credited as being able to create very authentic phishing emails that can lure unsuspecting users to click on malicious links that can infect computers with malware (Kulesh, 2022). The powerful ML algorithms powering ChatGPT have the advantage of searching through numerous phishing emails that are available on the Internet and understanding the key features of these emails that make them suitable for social engineering attacks. In the same manner, ChatGPT is also able to write malicious code with minimal instructions from hackers who are not even technically savvy enough to produce such code themselves. It is even able to translate natural language to harmful code. Another security concern for ChatGPT is that it could be used for malicious purposes such as spreading rumours, propaganda as well as misinformation on the Internet by generating believable text. Such information spread on platforms such as social media can result in devastating impacts (Deng and Lin, 2022).

4.4.9.1.2. Technological challenges. While ChatGPT showcases generative AI at its best and is able to mimic human writing, it is still not able to fully replicate the depth of knowledge that is seen in humans. In fact, some have remarked that the software is so impressive not because of its technical prowess but because of its ability to persuade a reader with the use of words or phrases that appear to be authentic (Hendrik, 2022). Educators worry about the uncertain future of written exams, and some, like the New York City Department of Education, have even blocked access to ChatGPT on their networks. However, ChatGPT is still not able to match the intellectual capability of a human exam taker. In fact, in a recent trial run at the University of Minnesota Law School to see how the software fared in an exam for MBA students, it was only able to pass with a C+ grade (Ortiz, 2023a, 2023b). It is well-known that this software often “hallucinates” and is not able to reliably provide an accurate answer to a question (Chui et al., 2022a, 2022b). This makes it rather unsuitable for the repeatable tasks that any AI tool is supposed to excel in. Moreover, since ChatGPT works on the basis of a large corpus of training data, if that data itself is biased it is unable to recognize the errors in the data. Hence, it is believed that while this tool can gather a large number of ideas in a short period of time, it may not be able to identify which of those ideas are the most innovative. For a task like that one would need to depend on human intelligence (Bouschery, 2023). Moreover, like any AI tool, ChatGPT also acts like a black box and does not provide explanations about how it came up with a specific solution (Sun & Medaglia, 2019). For example, it is unable to explain why it reports 3.2 to the power of 3.3 as approximately equal to 21.73, whereas the actual answer is 46.45 (Azaria, 2022).
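The arithmetic in that example is easy to check directly, which underlines the point that a conventional calculation is both explainable and reliable where the language model is neither:

```python
# Verify the power calculation referred to above.
import math

print(3.2 ** 3.3)                      # approximately 46.45
print(math.pow(3.2, 3.3))              # same result via the math library
print(math.exp(3.3 * math.log(3.2)))   # the definition: e^(3.3 * ln 3.2)
```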
4.4.9.1.3. Economic challenges. Although ChatGPT is currently made available as a freely accessible tool by OpenAI, there is no reason to think that it comes for free. Since a response from ChatGPT is usually made up of 30 or more words, it can be estimated that to make ChatGPT produce a response OpenAI is spending at least 1 cent. It is conjectured that OpenAI may be burning a tremendous amount of cash, to the tune of US$ 100,000 per day, as operating expenses (Indian Express, 2023). However, with the US$ 10 billion investment by Microsoft in ChatGPT and the fact that this software runs on Microsoft Azure, it is unclear what the true cost of running ChatGPT for OpenAI is at this time. Although ChatGPT started as a freely available service, OpenAI has changed its course of action and launched a subscription-based model for accessing ChatGPT that can “chat with you, answer follow-up questions, and challenge incorrect assumptions” at the subscription cost of US$ 20 per month (OpenAI, 2023). Also, taking a cue from OpenAI, several scammers and fraudsters have put up fake advertisements on the Internet for subscription to ChatGPT at a monthly service fee of US$ 50 (Agomuoh, 2023). It is reported that numerous people have lost money by falling prey to this extortion scheme and paying for the free basic service that is offered by OpenAI.

4.4.9.1.4. Ethical challenges. The output that is generated by ChatGPT is often biased due to the inaccurate training data that has been used to enrich its knowledge. It is reported that when asked who could be a good scientist based on race and gender, ChatGPT has favoured people who are white and male as scientists, showcasing a bias against women and non-whites. Moreover, several questions have been raised about the unethical practices followed by OpenAI in gathering training data using low-paid workers in Kenya. Many of these workers have reported traumatic experiences when entrusted with the task of labelling offensive content on the Internet (Agomuoh, 2023). Another ethical challenge related to ChatGPT is whether it could be considered as a co-author in scholarly work, when it has contributed a significant part of the prose of the paper with the use of its training data. A report by Nature has confirmed that since the tool “cannot take responsibility for the content and integrity of scientific papers” it cannot be considered to have made a scholarly contribution (Stokel-Walker, 2023). Finally, ChatGPT poses an ethical challenge of potential job loss for certain professionals such as content writers as well as educators, for no fault of their own (Zhai, 2022).

4.4.10. Contribution 41

4.4.10.1. On the challenges and opportunities of ChatGPT - Iris Junglas and Sven-V. Rehm. What is the challenge? Answer: Understanding what ChatGPT truly is.

The most pressing challenge for researchers and practitioners alike in early 2023 has been to understand the novelty of ChatGPT. As with every new technology that enters society, this exercise is a crucial one. After all, only if we truly understand what is new can we unearth the potential value added for organisations and our own lives.

ChatGPT describes itself as “a cutting-edge language model…, designed to generate human-like text based on a vast amount of diverse data.” Put differently, it has the ability to generate paragraphs that are eloquently worded based on a short text passage—be it a question or a statement.

For that to happen, at least three things must be in place. First, the system has to be trained, through a process of iterative reconfigurations, to identify words and to recognise how and when words co-occur. Second, it must be trained to propose a set of words that should be used in the output based on the input received. And last, the system has to be trained to arrange those suggested words into grammatically correct sentences. Or, as ChatGPT tells us: “When given a prompt or input text, ChatGPT uses this knowledge to generate a continuation of the text that is coherent and contextually appropriate.” At its core, ChatGPT is thus a generative textual engine, trained to rely on words and phrases used previously to describe a phenomenon or event. And unlike our brains, it does not rely on logic, semantic or epistemic models to develop self-reflective positions.

The word “generative” is crucial. As a power generator must have fuel to generate a current, ChatGPT is fuelled by the documents it was trained on. The training process, supervised by human operators who restrict contents and reinforce preferred output patterns, produces the parametrization of the neural network that defines how “the fuel” transforms into actual textual output. In that sense, ChatGPT—as with any other generative pre-trained transformer (or GPT in short)—does not “know” anything. Instead, it eloquently ‘regurgenerates’—a crossover between ‘regurgitate’ and ‘generate’—what has already been written before. It usually does so impressively!

The fuel analogy also illustrates the limits of GPTs. Although armed with impressive linguistic capabilities, ChatGPT’s ability to structure and restructure responses, use and create syllogisms, or argue along different viewpoints is limited by its design and the volume and choice of training materials.

What then is the opportunity? Answer: To experiment with those aspects that are truly novel.

GPTs will come with the option to enhance–or “fine-tune”–specific layers of the linguistically pre-trained neural network. OpenAI has made this option available on ChatGPT through its API, thus providing the possibility to customise any GPT model for a specific domain, such as the legal profession that is drowning in contractual documents. Fine-tuned GPTs can then become products themselves and can be commercialised or integrated with existing services.

Typical use cases include those that entail comparing texts and completing sentences. Summarizing big volumes of text is another, particularly as a first draft, as is answering questions in an elaborated and prolonged format, or pattern matching instructions, such as repair, maintenance or scheduling procedures, along with code excerpts. GPTs might be the intern we were always looking for, taking off the majority of repetitive and grinding work and leaving us with the intellectual bit that is more enjoyable. Instead of combing through Internet search engine results, for example, ChatGPT’s successors will offer an easily digestible summary. Provided with text fragments, they can generate alternative formulations, summaries, abstracts, or nifty LinkedIn posts and bios that are a tad more glorifying than we might compose. Used as coding bots, they can help with drafting basic programming modules. As personal trainers, they can design workout plans, and as storytellers, they can concoct a new bedtime story every night.

Further opportunities lie in combining GPTs with analytic modules, for example, for refined statistical analyses, such as topic modelling (a minimal pairing of this kind is sketched below). In such cases, more complex analyses can be prepared and then offered through the transformer interface in a manner that is accessible to a diverse set of users. Thus, the ability of IT architectures to link GPTs with real-time data will become increasingly important.

How does GPT impact our discipline? Answer: On many, many dimensions.

From an IM and IS perspective, GPTs offer a novel interface to make extensive amounts of information intuitively approachable. While some faculty view ChatGPT as a threat, it is an even greater opportunity. Encouraging students to use ChatGPT for summarizing specific topic areas enables them to stir up discussions more quickly, which in turn could help them build up argumentative and critical-thinking competencies.

Students must learn how GPTs are different from other interfaces, such as search engines, and how different techniques of training and fine-tuning can help to create case-specific solutions. This certainly will happen in industry, and our students must be prepared.

Researchers must look into biases introduced by the training data set and the training process. If the dataset is incomplete or biased, so will be its output. Look at the allegations that question the ethics of OpenAI’s training process. Software engineers, and those in charge of training GPT’s linguistic capabilities, are the ethical arbiters of our digital tools. As ChatGPT warns us: “While ChatGPT is a powerful tool, it should be used in conjunction with human oversight and judgement to ensure accuracy and ethics.” Developing normative design principles for conversational agents is therefore of the essence. Shifting curricula towards evaluating the goodness of data and process, along with putting standards in place, will be crucial. The same notions will also have to apply to platform providers, who must obey established guidelines before their service offerings can be integrated into existing systems.
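As a small illustration of the opportunity noted above of combining GPTs with analytic modules such as topic modelling, the sketch below extracts candidate topics from a set of documents with a conventional statistical model; the resulting keyword lists could then be handed to a GPT to turn into readable summaries for non-expert users. The corpus, parameter values and the division of labour between the two components are illustrative assumptions only.

```python
# Sketch: conventional topic modelling (LDA) as the "analytic module"; a GPT
# could later be prompted to narrate the keyword lists for non-expert users.
# Assumes scikit-learn; the documents and parameters are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "chatbots in customer service and automation of routine tasks",
    "assessment design plagiarism and academic integrity in education",
    "fine tuning language models for the legal profession",
    # ... a real corpus would go here
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(documents)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top_terms = [terms[j] for j in weights.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top_terms)}")
    # Each keyword list could be passed to a GPT with a prompt such as
    # "Describe this topic in one sentence for a non-specialist reader."
```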
Of course, the generative ability of GPT will also find its way into our research studies–writing abstracts, summarizing literature, generating drafts for discussions, or as part of the methods applied. Researchers will have to be explicit about how GPT factored into their studies, and what mechanisms were in place to ensure accuracy and ethics while avoiding bias. Journals, such as this one, will have to re-work their policies accordingly and take a stance on what is acceptable and what is not.

It will not be easy. But as with everything in life that is not easy, it often starts a positive momentum and leads to novel insights.

4.4.11. Contribution 42

4.4.11.1. ChatGPT3: technology development, impact and challenges - Scott W. Cunningham & Mei-Chih Hu

4.4.11.1.1. Technology development. As shown in Fig. 8 below, ChatGPT is an incremental innovation based on ML (or so-called AI). The history of this technology has been very much one of technology push – a technology which until recently had not found its market or application. Its technology adoption trajectory started from the scientific theory and logical reasoning proposed by the UK scientist Alan Turing in 1950 (Turing, 1950), in an idea and conceptualisation stage (from 1950 to 1969). Fig. 8 is compiled from a variety of different sources (Oppy & Dowe, 2021; ChatGPT, 2023; Oppy et al., 2021; Turing, 1950; Turing Test, 2023).

Driven by public R&D funding, ML technology went through experimental stages, facing both upward and downward developmental trends, for more than four decades (from 1970 to 2013). The stage of applications development was not kicked off until 2014, when the Turing test was claimed to have been passed for the first time in history, and AlphaGo defeated the world Go champion in 2016, attracting substantial investments from both private and public sectors. The OpenAI laboratory was established in 2015, aiming at training a model, called ChatGPT, in a dialogue format.

It evolved from single text-mining technology as GPT-1 in 2018, GPT-2 in 2019, when Microsoft invested US$1 billion in OpenAI, and GPT-3 in 2020, when Microsoft announced active collaboration with OpenAI. We see GPT-3.5 as the milestone that crossed the commercial threshold: as soon as GPT-3.5 was launched in November 2022, it immediately attracted one million users and received enthusiastic responses about potential commercialization opportunities across a range of services such as cloud services, content creation, advertising and promotion, data services, and consulting.

There are a number of features which distinguish ChatGPT3 from the models which came before. These notable features are the volume, parameters, architecture and training regime of the model. These breakthroughs seem to have unleashed a new era of NLP, one which harnesses the sheer recombinatorial power of vast sources of internet data. The first feature noted above is simply the sheer volume of raw data which is used in the ChatGPT3 training. Previous researchers have described the "unreasonable effectiveness of big data" – and this certainly applies here. Unexpectedly, models such as GPT-3 continue to grow in effectiveness the more data they are given. Another factor distinguishing GPT is the sheer volume of parameters which are used in the learning of the model. Deep learning algorithms are no longer new, but they allow learning to occur layer by layer and over iterations, thereby reducing the computational cost of training large models. This means that we have awaited the sheer processing power of modern computers before these large language models could be fully realised.

Human language is filled with specific cases, specific audiences and specific domains. Learning one model to fit all these cases has proven fruitless. While there may be an underlying logic or grammar to human speech, the specific logics being used can be ambiguous even to human speakers. An additional factor which distinguishes GPT-3 from previous models is the specific architecture used in training the model. GPT models are context-specific – they are able to allocate specific parts of the network given the context of the problem. The fourth and final
50
Y.K. Dwivedi et al. International Journal of Information Management 71 (2023) 102642
feature that distinguishes ChatGPT3 is the supervised learning used in its these will help us by creating a range of ubiquitous personal assistants.
creation. ChatGPT3 has been trained by human raters to respond in a Those who can afford widespread access to computing resources, and
clear, interpretable and structured fashion to the queries which it has the know-how to deploy these agents on their behalf will certainly
been provided. This has proven to be an effective co-teaming of human benefit. The technology itself may enable fifth or higher generation
speakers with ML models, resulting in new products and services which computer languages where a range of programming functionality can be
neither human nor machine could have produced alone. described and then auto-filled on command. The programming and skill
4.4.11.1.2. Technology impacts. One core concept behind informa required to make effective agents seems to have been dramatically
tion economics – perhaps the single core concept – is to consider: Who reduced, making at least special purpose AI much more readily available
knows what when? And further how does any given IT change this for a mass market.
knowledge across major spheres of economic activity? Information 4.4.11.1.3. Management challenges. Having now entered the
technologies span a bewildering array of technologies and architectures. commercialization stage, urgent problems need to be addressed for
Their impacts are far-reaching. But the underlying causes of all these ChatGPT3 at all levels. Questions at the micro-level include managing
impacts is rooted in the capacity to control and access information. the computational costs of training, but also avoiding fake information
Therefore, this can be a useful and unifying framework. and other distortionary sources of information which can corrupt the
As we will demonstrate below the concept behind the strategic use of training. These micro-level issues also require us to ask questions about
information is highly nuanced, and is not easily characterised with a the boundaries between human and AI. Should we allow ChatGPT3 bots
single measure. Information economics presents both a useful frame to masquerade as human, or should ChatGPT3 continue to demonstrate
work for gathering impacts, as well as an opportunity to advance and clear tells for unwary users that a ML technology is behind the screen?
extend a research agenda. Seminal references to core ideas in informa There are questions at the meso-level as well. AIs like ChatGPT3, but
tion economics include those of Eisenhardt (1989), and Laffont and also transformer models which are generating music, videos and graphs
Martimort (2002). There are in fact at least four different measures of are expected to be a source of creative destruction. ChatGPT3 recom
the strategic use of information. One author describes these as measures bines diverse sources resulting in questions about intellectual property,
of information quality (Rasmusen, 2014). When the quality of infor and the appropriate recompense for creative workers. We may increas
mation is degraded, we all lose out. But those strategic actors with access ingly require detection tools to identify and root-out ChatGPT3 pro
to better quality are net beneficiaries. duced information and media, resulting in an arms-race of competing
The first measure is certainty. Can we be sure that the quality of our technology.
decision-making has improved after we have accessed this new source of Some commentators are concerned that a range of white-collar jobs –
information? Unfortunately, ChatGPT3 has probably reduced our including copywriters, customer service agents, transcriptionists, jour
overall certainty about the world. ChatGPT3 provides notoriously nalists, secondary school educators, and executive assistants – will find
confident – and yet incorrect – answers to questions. The net benefi themselves out of a job with the advent of this new technology. It is
ciaries of ChatGPT3 – at least on this measure – are those able to confirm perhaps of little comfort to those newly unemployed that these jobs have
and warrant the varied sources of information on which we all rely. This been increasingly hollowed out over the past one or two generations.
includes individuals, including members of the professional class known Confronting the impacts of ChatGPT3 may require that we reconsider
as "symbolic analysts" (Reich, 1992). But this also includes groups, such how sources of creative or emotional labour are currently being
as non-government organisations, academic and professional organisa recompensed in the economy. It may also require that we train a range of
tions, and national bureaucracies tasked for managing a range of func new professionals better able to co-team with these new technologies.
tions central to modern life. This is perhaps much more of an institutional matter than solely a
The second measure is completeness. This question asks us whether technological one.
we can be really certain about who is supplying our information, and the There are also macro-level issues. These macro-level questions are
purposes, incentives or intents behind its production. Here again largely one of ethics. Questions of ethics require research and develop
ChatGPT3 has probably diminished the overall quality of information ment staff, including engineers and societies, but they also require ex
available in the world. The piece in the local newspaper may have been perts from the social sciences and humanities. Scholars such as
produced by ChatGPT3; that friendly respondent at the end of an philosophers, linguists, policy analysts, and lawyers will increasingly
internet link may ultimately be a bot; that scientific article ultimately need to be involved. These scholars can help address questions such as
written by machine. The net beneficiaries of this may be application how to create new institutions to better govern these technologies,
developers who can foresee and deploy new applications of these tech thereby ensuring they do not increase inequality and injustice. We must
nologies. And of course, ChatGPT3 and perhaps emerging competitors, ask whether these technologies should be limited in application and
who can license the underlying model. deployment, and if so how we can verify their absence from important
The third measure of information quality is symmetry. What do you societal arenas.
know that I do not? Does ChatGPT3 put us on an equal footing?
ChatGPT3 is a great leveller. We may all have access to a range of new 4.4.12. Contribution 43
capabilities, including what may be the most elite capability of all –
writing a cogent and well-formed essay. Much of this levelling capability 4.4.12.1. ChatGPT: a research agenda - Manju Ahuja. As ChatGPT has
is dependent upon the future institutional structure of access to taken the world by storm in the last few months, the discussions related
ChatGPT3. ChatGPT3 is also a powerful device for signal jamming – it to its implications seem ubiquitous. However, much of the discussion in
can take any particular perspective and multiply and replicate it across the academic circles has been centred around the educational implica
channels both narrow and broad. acknowledges these risks. In this sense tions of ChatGPT. This, of course, is a major challenge. At the same time,
there are costs to be born for the widespread availability of ChatGPT3 I believe that we need to consider broader challenges and implications of
technologies; but they are higher for established gatekeepers of news, generative AIs such as ChatGPT, both positive and negative.
media and opinion. ChatGPT3 outputs are pruned and censored to not be At the root of these challenges is the age-old technological deter
hateful or objectionable. Nonetheless the net beneficiaries are those minism question – does technology shape social change and determine
such as OpenAI who control the underlying algorithms of ChatGPT3. our future? After all, who could have predicted how mobile technologies
The fourth and final measure of information quality is known as have shaped human behaviour and attention? Not many people can
perfection. This quality asks whether we know where we are within any truly say that they control their mobile device usage and behaviour; for
given process or transaction. Ultimately technologies like ChatGPT3 most, the devices control their behaviour and attention instead. It is
reduce the costs of transacting within a modern economy. Tools such as
51
Y.K. Dwivedi et al. International Journal of Information Management 71 (2023) 102642
clear that we are at the precipice of another, perhaps even more sig Unfortunately, while incorporating ethics in educational programs is
nificant social change phenomenon. An alternate perspective to tech viewed as an essential remedy, most educational programs tend to pay
nological determinism suggests that humans have the power to control mere lip service to this content. Further, classroom coverage of issues
and steer the effects of new technologies. Perhaps both perspectives can may not be a complete solution as ethics are entangled with human
be in play to some extent. Here, I would like to reflect on a few questions, experiences and motivations, leaving room for violations. We must
challenges, and considerations for IS researchers that may inform how explore mitigating measures and solutions to address moral and ethical
we might examine, manage, and govern this phenomenon. issues involved in generative AI.
4.4.12.1.1. The experience of writing and reading. Given that the 4.4.12.1.5. AI for social good. Finally, scholars and practitioners
most common form of ChatGPT output is in the form of written text, we must explore ways of using generative AI for potential socially beneficial
need to consider what kinds of writing tasks it might be suitable for and purposes. Research related to other breakthrough technologies,
which of these tasks are acceptable for its use? For instance, is it including AI, has already begun to do this. Utilizing existing research
appropriate to use it for copyediting, drafting an initial cut, or gener themes and extending them to generative AI could help us arrive at
ating a finished product? On the positive side, some believe that finding solutions to social ills using generative AI.
ChatGPT may become another acceptable writing tool (along the lines of The opportunities and challenges related to the use of ethical AI to
Grammarly and Spellcheck). benefit humanity lie at the intersection of technology, social structure,
If ChatGPT should become an acceptable tool for composition and and human behaviour. This calls for more interdisciplinary research and
writing, bigger questions must be addressed. What role would human has implications for educational programs that are inter- and multi-
creativity have in written word? It can be argued that the more disciplinary in nature.
compelling pieces of writing are those that draw on human experience
and thought. By autogenerating content, are we in danger of losing this 5. Discussion and implications
essential element of compelling writing, both fictional and non-
fictional? Further, what kind of writing is it most appropriately used 5.1. Synthesis and identification of common themes
for? The kind we are least motivated to do? Or might it be used to create
novel forms beyond human ability? Further, questions of propriety, In this section, we identify the main themes that have emerged from
acceptable use norms, and copyright questions must be raised, dis the 43 contributions.
cussed, and addressed.
4.4.12.1.2. Accuracy and verification. ChatGPT is a generative AI 5.1.1. ChatGPT as a productivity enhancing tool
tool that utilises language models that combines pieces of information ChatGPT is expected to have a potentially positive effect on pro
across multiple sources and integrate it into readable written output. ductivity (e.g., Contributions 20, 32, 36). It can enable the automation of
Essentially, it is a text predictor; it learns relationships between pieces of mundane or repetitive work and allow people to focus on creative and
text and uses it to predict what should come next. It then paraphrases non-repetitive activities (Contributions 11, 39). People can quickly ac
this content so it sounds like a new piece of writing. This process results cess information and knowledge through a simplified natural language-
in written pieces that can seem credible, but they are not necessarily based information search process using ChatGPT (Contributions 4, 11).
based on facts. AI technology is not yet ready for generating reliable and Within the banking, financial services, and insurance sectors,
accurate results. For instance, a recent report suggested that ChatGPT ChatGPT can help increase the accuracy in the audit and advisory ser
fabricated a citation for an author that it predicted should be cited, but vices, which means that banking executives can focus on more impor
for an article did not in fact exist. tant tasks (Contribution 36). It can also be used in text-mining of legal
In the current state of technology, this indicates that verification databases (Contribution 36), which is of particular benefit to financial
would need to become an essential next step for any generated written institutions required to handle complex regulatory systems (Kshetri,
output. This could apply not only to student assignments but also to 2023a). In the mid-2010 s, London-based think tank JWG estimated that
research articles. We need to consider if journals need to take on the task over 300 million pages of documents related to financial regulations
of checking and verifying submitted articles. Should they, instead, ask would be published worldwide by 2020 (Alhelal, 2021) and ChatGPT
for attestation and extent of reliance on ChatGPT for the article content? may help extract related information quickly.
More importantly, we must consider where the line for acceptable use In education, ChatGPT can have a positive impact on the produc
versus plagiarism would be and who would determine this line. tivity of students as generative AI can make the process of learning more
4.4.12.1.3. Responses to ChatGPT. The approaches to respond to the efficient (Contributions 20, 32). Moreover, academic authors may
generative AI phenomenon can range from resistance and neutraliza develop background sections or first drafts of their papers by gathering
tion, to integration. Taking the example of classroom instruction and information through ChatGPT (Contributions 4, 28).
assignments, the first approach might be to forbid the use of ChatGPT. In many cases, better results could be achieved with AI augmentation
The second approach could entail detection and correction. A third or augmented intelligence, which according to Gartner is “a human-
emergent approach is to integrate the tool in instruction by asking stu centred partnership model of people and AI working together to
dents to generate content using ChatGPT and then asking them to enhance cognitive performance” (Gartner.com, 2019). This is exactly
critique and improve this text. Each approach has its own merits and what Contribution 32 has discussed in the context of generative AI.
risks. We must examine and discuss different approaches that make
sense for different contexts. 5.1.2. Academic sector likely to experience some of the most disruptive
4.4.12.1.4. Moral and ethical questions. The ethical and responsible effects
use of AI is already a major issue that researchers have been addressing. Teaching, learning, and academic research are likely to experience
AI has been known to produce biased object recognition. As an AI system some of the most transformative impacts of ChatGPT and, perhaps un
learns from historical data, which may itself reflect human biases, it can surprisingly, 14 of the 43 contributions in this paper exclusively focused
further crystalise biased outputs. As the use of ChatGPT becomes on these (Section 2.3, Contributions 16 through 29).
widespread, these questions become even more salient. We need to ChatGPT can be used to improve learning opportunities such as by
examine which of the issues emanating from predictive AI might also be providing personalised feedback to students (Contribution 20). Since
relevant to generative AI? Which new issues arise with large language ChatGPT can provide students with basic reading material on the dis
modelling AI? It is established that bias correlates with income and cussion topic, the in-class student-teacher interactions can be used for
geography – could AI, therefore, further increase the social inequities? higher-level analytical and behavioural learning (Contribution 36).
Of course, we must also explore strategies for mitigating bias. Despite the limitations of ChatGPT, such as incomplete synthesis,
52
Y.K. Dwivedi et al. International Journal of Information Management 71 (2023) 102642
missing information, or reference errors, students may be able to quickly and it is difficult to understand the reasoning behind the generated text.
gain basic knowledge without much effort (Contributions 20, 32). Another limitation that must be highlighted is that ChatGPT cannot
Several major challenges in the use of ChatGPT may need to be answer questions unless they are worded in a particular way (e.g.,
overcome. A significant issue concerns the effects of generative AI such Contribution 22).
as ChatGPT on assessments and examinations (Contribution 32). A further limitation that was mentioned is related to the lack of
Another daunting challenge lies in controlling students’ engagement in updated information. Contributions 32 and 38 have noted that ChatGPT
plagiarism and cheating on assignments, theses and dissertations since does not incorporate real time data automatically. As stated in Contri
ChatGPT is a much more powerful tool compared to the existing tools bution 38, ChatGPT’s response to questions that require updated in
used for similar tasks (e.g., Contributions 1, 17, 21, 30, 37). For instance, formation was that the chatbot’s “training data only goes up until 2021”
teachers may not be able to recognise assignments that are generated by and thus it does not “have the ability to access current events”. There are
ChatGPT, which is particularly challenging (Contribution 23). ChatGPT also various types of biases embedded into generative AI such as
may have a negative impact on students’ motivation to learn to write ChatGPT (Contribution 40).
(Contribution 22) and on their independent thinking and language
expression skills (Contributions 23, 28). 5.1.6. The lack of regulatory templates
The impacts of ChatGPT go beyond teaching and learning in The generative AI industry and market are undergoing major tech
academia. Academic articles have already been published using nological upheaval. In situations such as this, the institutional context
ChatGPT as a co-author (e.g., van Dis et al., 2023). In this regard, a often fails to provide organising templates and models for action
concern that has gained prominence is that AI cannot take responsibility (Greenwood and Hinings, 1993). Such changes may also lead to
for the content and thus may not meet the criteria for authorship confusion and uncertainty and produce an environment that lacks
(Contribution 29), and lead to devalued research publications (Contri norms, templates, and models about appropriate strategies and struc
bution 28). tures (Newman, 2000). To put things in context, a lack of legislative and
regulatory template to adequately deal with issues such as privacy, se
5.1.3. Concerns about job losses curity, accountability, copyright violations, disinformation, misinfor
Researchers have long discussed the possibility that machines and mation and other forms of abuses and misuses has been a theme of
robots may replace some of the functions that human workers perform contributions in this paper (Contributions 11, 21, 32, 36). When
(Chui et al., 2016; Coombs et al., 2021; Dwivedi et al., 2021b). More ChatGPT was asked to list “the disadvantages of generative AI” by the
recently, researchers have expressed concerns that chatbots can authors of Contribution 32, ChatGPT’s response included “legal issues
increasingly replace human workers (Bates, 2019). Such a concern in the around ownership and copyright of content generated by AI”. Contri
context of generative AI has also been raised (e.g., Contributions 9, 20, bution 36 has suggested that governments have not yet figured out how
21, 32, 36, 42). This issue is even more salient since ChatGPT is more to regulate the use of tools such as ChatGPT.
powerful and sophisticated than an average chatbot. Indeed, when
ChatGPT was asked to list “the disadvantages of generative AI”, the list 5.2. Future research directions
in ChatGPT’s response included “loss of jobs” with the replacement of
human workers (Contribution 32). Throughout this paper, the contributors have raised many important
ChatGPT might increase the automation potential of some of the jobs questions regarding the opportunities, challenges, and implications of
that were thought to be less likely to be automated until a few years ago. generative AI such as ChatGPT. Consolidating these together with the
For instance, a 2016 McKinsey Quarterly article asserted that two cat discussion in the previous section, we summarise research questions
egories of jobs in particular had low automation potential: a) jobs requiring exploration into three thematic areas: knowledge, trans
related to managing and developing people, and b) jobs involving parency, and ethics; digital transformation of organisations and soci
decision-making, planning, or creative work (Chui et al., 2016). Activ eties; and teaching, learning, and scholarly research (Table 8).
ities such as writing software code and promotional materials of creating Research is required to explore how generative AI will affect
menus were viewed as some of the hardest activities to automate (Chui knowledge acquisition and transfer for individuals, teams, and organi
et al., 2016). This assertion has been challenged by ChatGPT3. Wide sations. Investigating ways to enhance transparency and reduce bias of
spread concerns have been raised about the possibility that ChatGPT technologies such as ChatGPT is also critical. As noted above, regula
might replace a range of white-collar jobs such as copywriters, customer tions, moral guidelines, and ethical codes have not yet (fully) developed
service agents, transcriptionists, journalists, and executive assistants around generative AI such as ChatGPT (e.g., Contributions 32, 43). The
(Contribution 42). present work opens new areas of research in terms of how such in
stitutions evolve over time. Scott (2001) proposed three institutional
5.1.4. The potential of misuse and abuse pillars: (i) regulative; (ii) normative and (iii) cultural-cognitive, which
Another common theme is that the use of tools such as ChatGPT by relate to “legally sanctioned”, “morally governed” and “recognizable,
some bad actors could lead to undesirable consequences such as misuse taken-for-granted” behaviours respectively (Scott et al., 2000, p. 238).
and abuse of these tools (Contributions 10, 19, 20, 32). For instance, Prior research has also suggested that building a regulative system (e.g.,
ChatGPT can be used by nefarious actors to create deepfakes and fake regulations to govern ChatGPT) is the first stage of institution formation,
news or engage in other types of misinformation and disinformation followed by formation of normative institutions and then cognitive in
campaigns (Contribution 32). stitutions (Hoffman, 1999). In future empirical work, scholars also need
Several contributions have also expressed concerns about the po to compare and contrast ChatGPT and other major innovations in terms
tential misuse of this tool by students (Contributions 10, 19, 20). For of the pattern of the evolution of various types of institutions.
instance, some students were reported to be engaged in cheating in Another stream of much needed research is to explore the effects of
exams by using AI chatbots to create answers to exam questions. digital transformation of organisations and societies. ChatGPT is likely
to transform several corporate functions. In marketing, for instance,
5.1.5. Major limitations of generative AI tools generative AI can help improve customer interactions (Contribution 32),
The contributions have stressed that in order to benefit from gener content marketing (Contribution 13), marketing communications
ative AI such as ChatGPT, their limitations need to be understood (Contribution 14), and marketing campaigns (Contributions 3, 34). In
(Contributions 3, 20, 22, 27, 29, 31, 32, 38, 40). As noted in Contribu finance, ChatGPT can help financial services providers to streamline and
tions 20, 32 and 38, major concerns of generative AI are related to simplify financial service processes (Contribution 12) and provide better
transparency and explainability since such tools function as a black box financial services (Contribution 14). We anticipate that the impact of
53
Y.K. Dwivedi et al. International Journal of Information Management 71 (2023) 102642
54
Y.K. Dwivedi et al. International Journal of Information Management 71 (2023) 102642
significant disruptions is the domain of knowledge work. Estimates that data and training models influence ChatGPT performance. While
suggest that 41% of a knowledge worker’s time is spent on activities that there are significant challenges in verifying information produced by
can be performed by others (Birkinshaw & Cohen, 2013). The ad ChatGPT since it can generate different answers for the same question
vancements in AI make it possible for tools such as ChatGPT to handle asked at different times, it would be helpful to find ways to empirically
such tasks (Contribution 4). For instance, ChatGPT can be used to test ChatGPT performance when the data and training models vary.
perform structured and repetitive tasks required by knowledge workers Therefore,
including software developers and report writers (Contribution 23).
P4. Data and training models underlying generative AI tools such as
Generative AI tools such as ChatGPT also help provide human-like
ChatGPT may influence its overall performance.
customer interactions both efficiently and effectively (Contribution
34), which are especially important in improving the quality of customer
5.3.5. ChatGPT’s misuse and abuse when formal and informal rules are
service (Contribution 12). Therefore,
lacking
P1. : Generative AI tools such as ChatGPT can be used to replace some The social and institutional contexts associated with generative AI
of the tasks performed by knowledge workers. such as ChatGPT could be a significant area for future research
(Contribution 7). The nascent stage of formal and informal institutions
5.3.2. ChatGPT’s use in augmenting the capabilities of knowledge workers surrounding ChatGPT has been a key theme in this paper. ChatGPT’s
The augmentation of human intelligence is a key mechanism enabled own response to “What are the disadvantages of generative AI?”
by generative AI tools such as ChatGPT (Contributions 5, 7, 32, 38). included the lack of legal clarity related to ownership and copyright of
Future research can investigate the roles of generative AI tools in content (Contribution 32). This would create disputes regarding the
enabling human augmentation (e.g., Contributions 7, 9, 32). Since right to use and profit from generated content. Similarly, the lack of
generative AI tools can enable humans to solve complex problems by informal institutions such as ethical guidelines could also foster misuse
augmenting their intelligence and capabilities, they can complete work and abuse (Contribution 32). The lack of well-developed formal and
faster and achieve their goals more efficiently (Contributions 5, 32; informal institutions around generative AI such as ChatGPT could mean
Licklider, 1960). For instance, in the context of healthcare in remote that violators are less likely to face sanctions and legally acceptable
areas, generative AI tools such as ChatGPT can be used to provide solutions may not exist to penalise students who engage in cheating or
valuable input to medical workers that can enhance their performance plagiarism (Contribution 21). Publishers and universities have made
(Contribution 36). Overall, while chatbots have been used for human forays into outlining expected behaviours around ChatGPT for re
task augmentation for some time, ChatGPT takes this to the next level by searchers and individuals (e.g., Contribution 26). Therefore,
enabling functionalities that were not possible with previous genera
P5. The lack of formal and informal rules can increase the possibility of
tions of chatbots (Contribution 7). Therefore,
abuse and misuse of generative AI tools such as ChatGPT.
P2. Generative AI tools such as ChatGPT can boost productivity by aug
menting the capability of knowledge workers. 5.3.6. ChatGPT as a source of ethical dilemmas
Due to its text generative capabilities, ChatGPT can be preferred and
5.3.3. ChatGPT as a powerful manipulation, misinformation, and used by stakeholders in different industries. In the education sector,
disinformation tool students have reportedly used ChatGPT to generate answers to exam
Experts have emphasised the importance of research that identifies questions whereas instructors have used ChatGPT to develop course
the best way to minimise the negative impacts of generative AI (e.g., content (e.g., Contributions 7, 18, 38). The need to identify the
Contribution 32). In order to achieve this, we first need to understand boundaries to guard against ChatGPT misuse and abuse becomes
the potential negative effects of generative AI tools such as ChatGPT. In important as ChatGPT finds greater acceptance in different spheres. For
this regard, a concern that has received a great deal of attention is the instance, there may not be an effective way to reliably detect if an essay
potential misuse or abuse of generative AI. The worry is that negative submitted by a student or a paper submitted by a researcher was
effects of deepfakes, fake news, or other types of disinformation and generated using ChatGPT and whether it should be considered plagia
misinformation are amplified by such generative AI tools (Contributions rism (Contributions 21, 28). In such cases, it may be useful to determine
32, 34). Generative AI tools such as ChatGPT can make deception and who serves as gatekeepers or enforcers of ChatGPT use and help resolve
manipulation more powerful and dangerous compared to existing tools ethical issues. Therefore,
(Kshetri, 2023b). For instance, when researchers asked ChatGPT to
P6. Compared to prior technologies, generative AI tools such as ChatGPT
create disinformation about a range of topics such as vaccines,
are likely to pose greater ethical dilemmas for several stakeholders.
COVID-19, the January 2021 insurrection at the U.S. Capitol, immi
gration, and China’s treatment of its Uyghur minority, the tools did so
5.3.7. ChatGPT’s superior, subjective, and deceptive intelligence
effectively (Klepper, 2023). Such misuse of generative AI tools can be
Unlike traditional tools that may rely on pattern matching and in
used to manipulate citizens (Contribution 23). Therefore,
formation retrieval algorithms, generative AI tools such as ChatGPT are
P3. Compared to previous AI tools, ChatGPT is likely to emerge as a more driven by learning algorithms that build intelligence. Due to its access to
effective manipulation, misinformation and disinformation tool. a vast data trove, ChatGPT has the capability to grow in intelligence
without cognitive limits that burden humans although it is dependent on
5.3.4. ChatGPT’s performance and the underlying data and training models human supervision to some extent. Without such interventions,
The performance of ChatGPT in generating text may be considerably ChatGPT can be equally good at generating both accurate and erroneous
influenced by the data and the training models used (e.g., Contribution text (e.g., Contributions 23, 42, 43) with no easy way to assess it. Left
2). ChatGPT’s effective performance when asked to create disinforma unchecked, ChatGPT may learn and build intelligence that may not be
tion related to COVID-19 (Klepper, 2023) could be attributed in part to necessarily objective or accurate. For instance, ChatGPT references non-
the data repositories on which ChatGPT was trained. It is possible that existent scientific work when generating text on particular topics, but it
ChatGPT could have struggled with generating disinformation had its is not possible to inform ChatGPT of such errors. Therefore,
operations been based on datasets that did not include disinformation.
P7. Compared to prior technologies, generative AI tools such as ChatGPT
Since ChatGPT was based on GPT-3 that relied on 175 billion documents
may possess superior, subjective, and deceptive intelligence.
of various types in the public domain (Perrigo, 2023), which also
contain disinformation or reports of disinformation, it can be conceded
55
Y.K. Dwivedi et al. International Journal of Information Management 71 (2023) 102642
5.3.8. ChatGPT’s potential for multiple roles in business and society services company UBS, ChatGPT is the fastest-growing consumer
Differing from traditional tools that typically helped make sense of application in history (Hu, 2023). The rapid diffusion of generative AI
existing data, generative AI tools such as ChatGPT generate new data. tools such as ChatGPT has important implications and consequences for
Combined with its capability to understand and generate natural lan practice and policy.
guage of humans, ChatGPT may mimic humans and play significant In order to drive competitive advantage, it is imperative for orga
roles in business and society (e.g., Contributions 2, 5, 13, 22, 23). The nisations to use ChatGPT as a productivity-enhancing tool. In many
extent to which ChatGPT outperforms humans in creative thought cases, organisational changes are needed to realise such benefits. For
would be an empirical question, but it is clear that it can synthesise instance, in the context of the education sector, emphasis should be
disparate data, summarise overall directions, and produce convincing placed on updating the curricula in schools and universities to accom
descriptions (cf. 3.3.4 on good data and training models). Even if modate generative AI tools such as ChatGPT in the classroom (Contri
ChatGPT may not be accorded the status of a decision-maker within bution 21, Shrivastava, 2022). In light of the many benefits of this tool
business and society, it seems entirely possible that it can trigger crea noted above, academic institutions should revisit the complete ban on
tive thoughts among humans due its ability to present synthesised the use of ChatGPT. It is important for instructors to engage deeply in
summaries from different perspectives that humans may not have ChatGPT so that the tool can be incorporated into instructional activities
considered. Perhaps, ChatGPT can play different roles such as champion to benefit students (Contribution 16).
or devil’s advocate in ideation and creation processes. Therefore, It is also crucial to develop criteria to evaluate outputs of generative
AI so that such tools can be employed to benefit organisations (Contri
P8. Compared to prior technologies, generative AI tools such as ChatGPT
bution 35). In addition, organisations may be required to combat
with its natural language capabilities may play significant roles in business
resistance to change from employees (Contributions 8, 43). At the same
and society.
time, it is also important to take measures to minimise the potential
negative impacts of such tools. For instance, cybercriminals can use
5.3.9. ChatGPT’s niche as superhuman or specialised agent
ChatGPT as a hacking tool (Contribution 36), which can victimise
Since a significant goal of AI is to mimic human intelligence, the
companies. They should thus develop new capabilities to fight new types
question of ChatGPT’s niche in society needs serious consideration. AI
of cybercrime generated and facilitated by such tools.
models can possess general or specialised intelligence (e.g., Contribution
In order to optimise the benefits of generative AI such as ChatGPT, it
9), both of which are common in our societies. Humans possess general
is important to take into account the various limitations of this tool such
intelligence on a wide array of topics as well as specialised intelligence
as the lack of originality (Contribution 2) and vagueness of the output
in narrow domains or disciplines. In terms of implementation, the dif
(Contribution 21). A further limitation, as mentioned earlier, is that it
ference between general and specialised AI systems could be the data
was trained on a database before 2021, which means that it cannot
and training methods used (cf. 3.3.4). However, what should be the
search the internet to provide up-to-date information about a topic. In
niche for ChatGPT? Is it good to have a superhuman know-it-all system
many cases, convincing answers have been frustratingly difficult to find
or one that is more restricted with specialised knowledge? Humans tend
with ChatGPT (Contribution 22). It is important to educate students and
to attach some importance to specialty (e.g., we do not ask for medical
other users about these and other limitations of ChatGPT so that they
advice from insurance agents) as do organizations that have invested in
can benefit from this transformative innovation. It is also crucial to make
specialisation (e.g., the various functional departments). Even in speci
sure that students have a clear understanding of how generative AI tools
alised niches, training ChatGPT based on data may not be complete since
differ from other tools such as search engines (Contribution 41).
training databases typically include outcomes and steps but not the lived
Legal systems to govern AI are not well developed in most countries.
experiences of specialised humans. Therefore,
In general, in nascent and formative sectors such as generative AI, there
P9. Generative AI tools such as ChatGPT have the potential to be super is a lack of developed networks of regulatory agencies (Kshetri & Dho
human or specialised agents as dictated by data and training methods. lakia, 2009). As a consequence, there is no stipulated template devel
oped for organising institutional actors’ behaviours (Greenwood &
5.3.10. ChatGPT’s capabilities and responsible use Hinings, 1996). Industry bodies and trade and professional associations
Technologies have traditionally addressed problems and provided can provide some degree of institutional co-ordination in order to
solutions; generative AI tools such as ChatGPT are no exception. How overcome the existing regulatory gaps (Kshetri, 2018). These entities
ever, technologies have been subjected to both intended and unintended can help create norms, informal rules, and codes of behaviour and
use (e.g., social media helps connections but also facilitates cyberbul penalise noncompliance with social and economic sanctions (North,
lying) and resulted in both intended and unintended consequences (e.g., 1990). The importance of developing a code of practice for the academic
nurturing friendships but also envy and bitterness). Technologies have community that offers specific guidelines for using ChatGPT in academic
often disrupted existing ways (e.g., iPod disrupted how we consumed publishing is crucial (Contribution 29).
music) and resulted in new ways (e.g., artists can release music for sale At the national level, it is the responsibility of governments to enact
online). Likewise, ChatGPT promises conversational natural language regulations that balance the needs of protecting users from abuse and
dialogue based on supervised pre-training on a closed albeit large misuse of generative AI while ensuring that technology companies are
dataset. As has been already discussed, ChatGPT has been applied for not discouraged from investing in this transformative innovation. For
both positive (e.g., help solve a problem) and negative (e.g., cheat on an instance, it is reported that the EU’s AI Act has targeted “high risk”
exam) purposes. There is a clear need to engage in responsible use of applications of AI, which means that AI must comply with the strictest
ChatGPT since it possesses unique capabilities that can be exploited or requirements. A new proposal categorises AI systems generating com
misused. Therefore, plex text such as chatbots as “high risk”. Some analysts are concerned
that such regulations may discourage the generation and use of such
P10. Similar to various tools in history, generative AI tools such as
systems (Grady, 2023). It is critical to assess the societal value of
ChatGPT promise unique capabilities but require responsible use.
ChatGPT, which is especially important for enacting new regulations to
govern such tools (Contribution 33). Moreover, given the global nature
5.4. Implications for practice and policy of tools such as ChatGPT (Contribution 34), it is important for different
jurisdictions to work together to develop more globally accepted regu
ChatGPT was estimated to have 100 million monthly active users in lations (Contribution 36).
January 2023, less than two months after the app was launched. Ac AI practitioners need to engage in serious introspection regarding the
cording to the Swiss multinational investment bank and financial capability and reliability of generative AI tools such as ChatGPT. Since
56
Y.K. Dwivedi et al. International Journal of Information Management 71 (2023) 102642
ChatGPT relies on several moving parts including data, training models, Agrawal, A., Gans, J., & Goldfarb, A. (2022). ChatGPT and How AI Disrupts Industries.
Harvard Business Review. December 12, 1- 6. Retrieved from 〈https://hbr.org
and human input, there are opportunities to inject bias into the system.
/2022/12/chatgpt-and-how-ai-disrupts-industries〉.
As the system matures over time through reinforcement learning, text Ajzen, I. (2011). The theory of planned behaviour: Reactions and reflections. Psychology
generation may also be biased or favour specific perspectives (e.g., & Health, 26(9), 1113–1127.
Getahun, 2023; Hamilton, 2022; Heikkilä, 2023). Different ways to Akter, S., McCarthy, G., Sajib, S., Michael, K., Dwivedi, Y. K., D’Ambra, J., & Shen, K. N.
(2021). Algorithmic bias in data-driven innovation in the age of AI. International
safeguard against such biases should be determined and applied by AI Journal of Information Management, 60, Article 102387.
practitioners; failing this, AI tools such as ChatGPT may be considered Aldhafri, S., Alkharusi, H., & Al Ismaili, A. (2015). Predicting English test anxiety: how
suspect. For instance, AI trainers providing human feedback during the memorization and critical thinking function? Theory and Practice in Language Studies,
5(6), 1159.
training phases need to take neutral and objectively true positions such Aleksander, I. (2017). Partners of Humans: A Realistic Assessment of the Role of Robots
that the reinforcement learning are not unnecessarily biased. OpenAI’s in the Foreseeable Future. In Journal of Information Technology, 32 pp. 1–9). SAGE
own description of ChatGPT acknowledges certain types of biases in Publications Ltd,.
Algolia, 2023, "Index your world, put it in motion with our powerful search API",
training data such as trainers preferring longer answers that look more Available at: 〈https://www.algolia.com/products/search-and-discovery/hosted-
comprehensive (OpenAI, 2023a). Further, OpenAI claims to use search-api/〉.
moderation policies around disallowed use of ChatGPT that could also Algorithmwatch.org, 2020, April, AI Ethics Guidelines Global Inventory. Retrieved from
be biased even after conceding restrictions to curb activities that are
〈https://algorithmwatch.org/en/ai-ethics-guidelines-global-inventory/〉.
Alhelal, H. (2021) Rome Wasn’t Built In A Day. January 22, 〈https://www.sigmaratings.
illegal, high-risk, or abusive (OpenAI, 2022, 2023b). AI practitioners com/knowledge-center/rome-wasnt-built-in-a-day〉.
need to devise methods to evaluate the extent to which new text Alshater, M., 2022a, Exploring the role of artificial intelligence in enhancing academic
performance: A case study of ChatGPT (December 26, 2022). Avaiable at SSRN.
generated by ChatGPT is biased.
Retrieved from 〈https://ssrn.com/abstract=4312358〉 or https://doi.org/10.2139/
Users in any profession (e.g., students, software developers) have the ssrn.4312358.
opportunity to apply ChatGPT in both positive and negative ways. It is Alshater, M., 2022b, Exploring the Role of Artificial Intelligence in Enhancing Academic
helpful to recognise that ChatGPT may not always be accurate or know Performance: A Case Study of ChatGPT (December 26, 2022). Available at SSRN:
〈https://ssrn.com/abstract=4312358〉 or https://doi.org/10.2139/ssrn.4312358.
all the relevant answers needed by users. OpenAI’s own description of Alvesson, M., & Sandberg, J. (2011). Generating research questions through
the tool acknowledges that ChatGPT may “guess what the user intended” problematization. Academy of Management Review, 36(2), 247–271.
rather than asking clarifying questions to clear ambiguities and also Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic
design cues and communicative agency framing on conversational agent and
generate “plausible-sounding but incorrect or nonsensical answers” - company perceptions. Computers in Human Behavior, 85, 183–189.
both of which raise questions about credibility (OpenAI, 2023a). More Ashok, M., Madan, R., Joha, A., & Sivarajah, U. (2022). Ethical framework for Artificial
importantly, OpenAI agrees that there is “currently no source of truth” Intelligence and Digital technologies. International Journal of Information
Management, 62, Article 102433.
for the ChatGPT models, which is a significant self-indictment of the Attali, Y., & McNamara, D. S. (2006). Automated essay scoring with e-rater V.2. Journal
tool’s capabilities (OpenAI, 2023a). Would that stop users from of Technology, Learning, and Assessment, 4(9), 1–17.
accepting ChatGPT generated text as the final answer? Would that Aydın, Ö., & Karaarslan, E. (2022). OpenAI ChatGPT generated literature review: Digital
twin in healthcare. In Ö. Aydın (Ed.), Emerging Computer Technologies, 2 pp. 22–31).
reduce users’ reliance on ChatGPT for information and knowledge? İzmir Akademi Dernegi.
Would that be enough for users to discontinue use of ChatGPT? The Azaria, A. 2022. ChatGPT usage and limitations. HAL Open Science hal-03913837.
potential for manipulation, misinformation, and disinformation are high Baird, A., & Maruping, L. M. (2021). The Next Generation of Research on IS Use: A
Theoretical Framework of Delegation to and from Agentic IS Artifacts. MIS Quarterly,
with new ChatGPT text not based on responsible use. To some extent, the
45(1).
burden of appropriate ChatGPT use resides with the users as with any Balakrishnan, J., & Dwivedi, Y. K. (2021). Role of cognitive absorption in building user
other technology. trust and experience. Psychology & Marketing, 38(4), 643–668.
Balakrishnan, J., Abed, S. S., & Jones, P. (2022). The role of meta-UTAUT factors,
perceived anthropomorphism, perceived intelligence, and social self-efficacy in
6. Concluding thoughts chatbot-based services? Technological Forecasting and Social Change, 180, Article
121692.
Balakrishnan, J., Nwoba, A., & Nguyen, N. (2021). Emerging-market consumers’
ChatGPT undoubtedly is among the most transformative AI tools interactions with banking chatbots. Telematics and Informatics, 65, Article 101711.
developed in recent years. It presents significant opportunities as well as Bank of America (2022) Bank of America’s Erica Tops 1 Billion Client Interactions, Now
challenges for organisations, societies, and individuals. Generative AI Nearly 1.5 Million Per Day retrieved on the web at: 〈https://newsroom.
bankofamerica.com/content/newsroom/press-releases/2022/10/bank-of-america-s-
can offer a major boost to productivity in various settings, but such tools erica-tops-1-billion-client-interactions–now-.html%20October%2012〉 October 12.
also present a number of practical, ethical, moral, and policy challenges. Baskerville, R., & Myers, M. (2009). Fashion waves in information systems research and
For instance, the challenges with ChatGPT in the education sector are practice. MIS Quarterly, 33, 647–662.
Bass, D. (2022). OpenAI Chatbot So Good It Can Fool Humans, Even When It’s Wrong.
well recognised due to the lack of well-developed guidelines and ethical
Bloomberg com. 〈https://www.bloomberg.com/news/articles/2022-12-07/openai-c
codes around generative AI. A critical practical challenge that regulators hatbot-so-good-it-can-fool-humans-even-when-it-s-wrong〉.
face is that they cannot penalise perpetrators with legal sanctions when Bates, M. (2019). Health care chatbots are here to help. IEEE Pulse (Volume: 10)(Issue:
such tools are deliberately misused or abused. Thus, it is imperative to 3). May-June Page(s): 12 - 14.
Baumol, W. J. (2012). The Cost Disease: Why Computers Get Cheaper and Health Care
enact new laws to govern these tools. Their global nature means that Doesn’t. Yale University Press,.
international coordination is also needed to maximise the benefits of Belk, R. (2021). Ethical issues in service robotics and artificial intelligence. The Service
tools such as ChatGPT. Industries Journal, 41(13–14), 860–876.
Bellegarda, J. R. (2004). Statistical language model adaptation: review and perspectives.
Speech Communication, 42(1), 93–108.
References Bender, E.M., & Koller, A., 2020, Climbing towards NLU: On meaning, form, and
understanding in the age of data. Paper presented at the Proceedings of the 58th
annual meeting of the association for computational linguistics.
Abdulquadri, A., Mogaji, E., Kieu, T., & Nguyen, P. (2021). Digital transformation in
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of
financial services provision: a Nigerian perspective to the adoption of chatbot.
Stochastic Parrots: Can Language Models Be Too Big?. Proceedings of the 2021 ACM
Journal of Enterprising Communities: People and Places in the Global Economy, 15(2),
Conference on Fairness, Accountability, and Transparency, Canada (pp. 610–623). New
258–281.
York, NY: ACM,.
Adams, M. (2023), "How AI search unlocks long tail results", Available at: 〈https://www.
Bengio, Y., Ducharme, R., & Vincent, P. (2000). A neural probabilistic language model.
algolia.com/blog/ai/how-ai-search-unlocks-long-tail-results/〉.
Advances in Neural Information Processing Systems, 13, 1–7 (available at) 〈https://p
Adams, T. (2021). Daniel Kahneman: ‘Clearly AI is going to win. How people are going to
roceedings.neurips.cc/paper/2000/file/728f206c2a01bf572b5940d7d9a8fa4c-Pape
adjust is a fascinating problem’. The Guardian. 〈https://www.theguardian.com/boo
r.pdf〉.
ks/2021/may/16/daniel-kahneman-clearly-ai-is-going-to-win-how-people-are-goin
Bernstein, E., Lee. M.Y., Minnaar, J. (2022, December 12) How Companies Are Using
g-to-adjust-is-a-fascinating-problem-thinking-fast-and-slow〉.
Tech to Give Employees More Autonomy. Retrieved from 〈https://hbr.org/2022/01/
Adesso, G. (2022). GPT4: The Ultimate Brain. Authorea Preprints.
how-companies-are-using-tech-to-give-employees-more-autonomy〉.
Agomuoh, F. (2023). The 6 biggest problems with ChatGPT right now. Digitaltrends
(Available online) 〈https://www.digitaltrends.com/computing/how-to-use-open
ai-chatgpt-text-generation-chatbot/〉.
57
Y.K. Dwivedi et al. International Journal of Information Management 71 (2023) 102642
Bhattacharyya, S. (2023), "Commercial applications of GPT-3 that are already live", from 〈https://www.indiatoday.in/technology/features/story/even-chatgpt-believes
Available at: 〈https://analyticsindiamag.com/commercial-applications-of-gpt-3- -it-can-never-replace-human-writers-as-it-lacks-creativity-emotions-and-sensitivit
that-are-already-live/〉. y-2324174-2023-01-20〉.
Birkinshaw, J., & Cohen, J. (2013). Make Time for Work that Matters. Harvard Business Chandra, S., Shirish, A., & Srivastava, S. C. (2022). To Be or Not to Be …Human?
Review, 91(9), 115–120. Theorizing the Role of Human-Like Competencies in Conversational Artificial
Biswas, S. (2023). Chat GPT and the future of medical writing. Radiology (Available at) Intelligence Agents. Journal of Management Information Systems, 39(4), 969–1005.
〈https://pubs.rsna.org/doi/full/10.1148/radiol.223312〉. ChatGPT, 2023, January 29, In Wikipedia. 〈https://en.wikipedia.org/wiki/ChatGPT〉.
Bjork, C. (2023). ChatGPT threatens language diversity. More needs to be done to protect Chen, W., & Wellman, B. (2004). The global digital divide–within and between countries.
our differences in the age of AI, available at 〈https://theconversation.com/chatgpt- IT & Society, 1(7), 39–45.
threatens-language-diversity-more-needs-to-be-done-to-protect-our-differences-in- Chen, Y., Jensen, S., Albert, L. J., Gupta, S., & Lee, T. (2022). Artificial Intelligence (AI)
the-age-of-ai-198878〉. Student Assistants in the Classroom: Designing Chatbots to Support Student Success.
Blackman, R. (2020, October 15). A Practical Guide to Building Ethical AI. Retrieved Retrieved January 31, 2023, from Information Systems Frontiers. https://doi.org/
from 〈https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai〉. 10.1007/s10796-022-10291-4.
Bock, D. E., Wolter, J. S., & Ferrell, O. C. (2020). Artificial intelligence: disrupting what Chui, M., Hall, B., Mayhew, H., & Singla, A. (2022a, December 6). The state of AI in
we know about services. Journal of Services Marketing, 34(3), 317–334. 2022—And a half decade in review. Retrieved February 8, 2023, from 〈https://
Böhm, S., Carrington, M., Cornelius, N., de Bruin, B., Greenwood, M., Hassan, L., & tinyurl.com/33j62ssd〉.
Shaw, D. (2022). Ethics at the centre of global and local challenges: Thoughts on the Chui, M., Roberts, R., & Yee, L. Generative AI is here: How tools like ChatGPT could
future of business ethics. Journal of Business Ethics, 180(3), 835–861. change your business. Retrieved from 〈https://www.mckinsey.com/capabilities/
Bommarito II, M., & Katz, D.M. (2022). GPT Takes the Bar Exam. arXiv preprint arXiv: quantumblack/our-insights/generative-ai-is-here-how-tools-like-chatgpt-could-
2212.14402. change-your-business?stcr=738AC24D41C14C898CCA8FCF18A6796F&cid=other-
Bonfim, D., Nogueira, G., & Ongena, S. (2021). “Sorry, We’re Closed” Bank Branch eml-alt-mip-mck&hlkid=e76a5e48b167490195bf9af698176a19&
Closures, Loan Pricing, and Information Asymmetries. Review of Finance, 25(4), hctky=10062436&hdpid=3eaef69d-9337–4776-9522-e13af67ae1a5#/〉.
1211–1259. Chui, M., Roberts, R., & Yee, L. (2022b). Generative AI is here: How tools like ChatGPT
Bornet, P., Barkin, I., & Wirtz, J. (2021). Intelligent Automation: Welcome to the World of could change your business. Quantum Black AI by McKinsey (Available online) 〈https
Hyperautomation. World Scientific Books,. ://www.mckinsey.com/capabilities/quantumblac/our-insights/generative-ai-is-he
Bossman, J. (2016, October 21). Top 9 Issues in Artificial Intelligence. Retrieved from re-how-tools-like-chatgpt-could-change-your-business〉.
〈https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial- Chui, M., Manyika, J., & Miremadi, M. (2016). Where. Machines Could Replace
intelligence/〉. Humans—And Where They Can’t (Yet). McKinsey Quarterly. July McKinsey com.
Bouschery, S., Blazevic, V., Piller, F.P. (2023). Augmenting Human Innovation Teams Coch, L., & French Jr, J. R. (1948). Overcoming resistance to change. Human Relations, 1
with Artificial Intelligence: Exploring Transformer-Based Language Models. https:// (4), 512–532.
doi.org/10.1111/jpim.12656. Cockburn, I.M., Henderson, R., & Stern, S. (2019). The Impact of Artificial Intelligence
Brachten, F., Kissmer, T., & Stieglitz, S. (2021). The acceptance of chatbots in an on Innovation: An Exploratory Analysis. Chap. 4 in The Economics of Artificial
enterprise context – A survey study. International Journal of Information Management, Intelligence, edited by AK Agrawal, J. Gans and A. Goldfarb.
60, Article 102375. Coombs, C., Stacey, P., Kawalek, P., Simeonova, B., Becker, J., Bergener, K., &
Breen, P. (2023). Don’t fear ChatGPT: education will always trump technology. Available Trautmann, H. (2021). What is it about humanity that we can’t give away to
at 〈https://www.timeshighereducation.com/blog/dont-fear-chatgpt-education-will- intelligent machines? A European perspective. International Journal of Information
always-trump-technology〉 Accessed 10.03.23. Management, 58, Article 102311.
Breidbach, C. F., & Maglio, P. (2020). Accountable algorithms? The ethical implications Cooper, R. B., & Zmud, R. W. (1990). Information technology implementation research: a
of data-driven business models. Journal of Service Management, 31(2), 163–185. technological diffusion approach. Management Science, 36(2), 123–139.
Brown, N. C., Sentance, S., Crick, T., & Humphreys, S. (2014). Restart: The resurgence of Cotton, D.R., Cotton, P.A., & Shipway, J.R. (2023). Chatting and Cheating. Ensuring
computer science in UK schools. ACM Transactions on Computing Education, 14(2), academic integrity in the era of ChatGPT. edarix.org.
1–22. Council of Europe (2022). Artificial intelligence and education: A critical view through
Buhalis, D. (2020). Technology in tourism-from information communication the lens of human rights, democracy and the rule of law. Available at 〈https://rm.
technologies to eTourism and smart tourism towards ambient intelligence tourism: a coe.int/artificial-intelligence-and-education-a-critical-view-through-the-lens/
perspective article. Tourism Review, 75(1), 267–272. 1680a886bd〉 Accessed 10.03.23.
Buhalis, D., & Moldavska, I. (2022). Voice assistants in hospitality: using artificial Cranefield, J., Winikoff, M., Chiu, Y. T., Li, Y., Doyle, C., & Richter, A. (2022). Partnering
intelligence for customer service. Journal of Hospitality and Tourism Technology, 13 with AI: The case of digital productivity assistants. Journal of the Royal Society of New
(3), 386–403. Zealand. https://doi.org/10.1080/03036758.2022.2114507
Buhalis, D., & Sinarta, Y. (2019). Real-time co-creation and nowness service: lessons from Creswell, A., White, T., Dumoulin, V., Arulkumaran, K., Sengupta, B., & Bharath, A. A.
tourism and hospitality. Journal of Travel & Tourism Marketing, 36(5), 563–582. (2018). Generative adversarial networks: An overview. IEEE Signal Processing
Buhalis, D., & Volchek, K. (2021). Bridging marketing theory and big data analytics: The Magazine, 35(1), 53–65.
taxonomy of marketing attribution. International Journal of Information Management, Crick, T. (2021). COVID-19 and digital education: A catalyst for change? ITNOW, 63(1),
56, Article 102253. 16–17.
Buhalis, D., Harwood, T., Bogicevic, V., Viglia, G., Beldona, S., & Hofacker, C. (2019). Czarnecka, B., & Mogaji, E. (2020). How are we tempted into debt? Emotional appeals in
Technological disruptions in services: lessons from tourism and hospitality. Journal loan advertisements in UK newspapers. International Journal of Bank Marketing, 38
of Service Management, 30(4), 484–506. (3), 756–776.
Buhalis, D., Parra-López, E., & Martinez-Gonzalez, J. A. (2020). Influence of young Dattathrani, S., & De’, R. (2022). The Concept of Agency in the era of Artificial
consumers’ external and internal variables on their eloyalty to tourism sites. Journal Intelligence: dimensions and degrees. Information Systems Frontiers, 1–26.
of Destination Marketing & Management, 15. https://doi.org/10.1016/j. Davenport, T. H., & Ronanki, R. (2018). Artificial intelligence for the real world. Harvard
jdmm.2020.100409 Business Review, 96(1), 108–116.
Bundy, A. et al. (2019, November 28). Explainable AI, Retrieved from 〈https:// De Cremer, D., Kasparov, G. (2021, March 18). AI Should Augment Human Intelligence,
royalsociety.org/topics-policy/projects/explainable-ai/〉. Not Replace It. Retrieved from 〈https://hbr.org/2021/03/ai-should-augment-
Burger, B., Kanbach, D. K., & Kraus, S. (2023). The role of narcissism in entrepreneurial human-intelligence-not-replace-it〉.
activity: a systematic literature review. Journal of Enterprising Communities: People Deng, J., & Lin, Y. (2022). The benefits and challenges of ChatGPT: An overview.
and Places in the Global Economy. https://doi.org/10.1108/JEC-10-2022-0157 Frontiers in Computing and Intelligent Systems, 2(2), 81–83.
Burstein, J. (2003). The E-rater Scoring Engine: Automated Essay Scoring With Natural Diederich, S., Brendel, A. B., Morana, S., & Kolbe, L. (2022). On the Design of and
Language Processing. In M. D. Shermis, & J. Burstein (Eds.), Automated Essay Scoring: Interaction with Conversational Agents: An Organizing and Assessing Review of
A Cross-Disciplinary Perspective (pp. 113–121). Hillsdale, NJ: Lawrence Erlbaum Human-Computer Interaction Research. Journal of the Association for Information
Associates, Inc. Systems, 23(1), 96–138.
Cadmus (2023). Identifying and Mitigating Risks of AI in Authentic Assessment Practices. Diener, F., & Špaček, M. (2021). Digital transformation in banking: A managerial
Available at: 〈https://www.cadmus.io/blog/identifying-and-mitigating-risks-of-ai- perspective on barriers to change. Sustainability, 3(4), 2032–2042.
in-authentic-assessment-practices〉. Dowling, M., & Lucey, B. (2023). ChatGPT for (finance) research: The Bananarama
Cain, S. (2023). “’This song sucks’: Nick Cave responds to ChatGPT song written in style conjecture. Finance Research Letters, Article 103662 (In press. Available at) 〈htt
of Nick Cave”. The Guardian. January 17, 2023 〈https://www.theguardian.com/m ps://www.sciencedirect.com/science/article/pii/S1544612323000363〉.
usic/2023/jan/17/this-song-sucks-nick-cave-responds-to-chatgpt-song-written-in Drucker, P. F. (1999). Knowledge-worker productivity: The biggest challenge. California
-style-of-nick-cave〉. management review, 41(2), 79–94.
Carr, N. G. (2003). IT Doesn’t Matter. Harvard Business Review, 81(5), 41–49. Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial intelligence for decision
Carroll, J. (2007). A Handbook for Deterring Plagiarism in Higher Education (2/e). Oxford: making in the era of Big Data – evolution, challenges and research agenda.
Oxford Centre for Staff and Learning Development, Oxford Brookes University,. International Journal of Information Management, 48, 63–71.
Castelvecchi, D. (2022). Are ChatGPT and AlphaCode going to replace programmers? Duan, Y., Ong, V. K., Xu, M., & Mathews, B. (2012). Supporting decision making process
Nature. https://doi.org/10.1038/d41586-022-04383-z with “ideal” software agents–What do business executives want? Expert Systems with
Centre for Teaching and Learning. (2023). Four lessons from ChatGPT: Challenges and Applications, 39(5), 5534–5547.
opportunities for educators. University of Oxford, (Available) 〈https://www.ctl.ox.ac. Dwivedi, Y. K., Hughes, L., Baabdullah, A. M., Ribeiro-Navarrete, S., Giannakis, M., Al-
uk/article/four-lessons-from-chatgpt-challenges-and-opportunities-for-educators〉. Debei, M. M., & Wamba, S. F. (2022a). Metaverse beyond the hype: Multidisciplinary
Chakravarti, A. (2023). Even ChatGPT believes it can never replace human writers as it perspectives on emerging challenges, opportunities, and agenda for research,
lacks creativity, emotions and sensitivity. India Today. Retrieved January 31, 2023,
58
Y.K. Dwivedi et al. International Journal of Information Management 71 (2023) 102642
practice and policy. International Journal of Information Management, 66, Article Ghazwani, S., van Esch, P., Cui, Y., & Gala, P. (2022). Artificial intelligence, financial
102542. https://doi.org/10.1016/j.ijinfomgt.2022.102542 anxiety and cashier-less checkouts: a Saudi Arabian perspective. International Journal
Dwivedi, Y. K., Hughes, L., Kar, A. K., Baabdullah, A. M., Grover, P., Abbas, R., & of Bank Marketing, 40(6), 1200–1216.
Wade, M. (2022b). Climate change and COP26: Are digital technologies and Gill, G., & Bhattacherjee, A. (2009). Whom are we informing? Issues and
information management part of the problem or the solution? An editorial reflection recommendations for MIS research from an informing sciences perspective. MIS
and call to action. International Journal of Information Management, 63, Article Quarterly, 33, 217–235.
102456. Goertzel, B., & Pennachin, C. (2007). Artificial General Intelligence. Berlin: Springer-
Dwivedi, Y. K., Hughes, L., Cheung, C. M., Conboy, K., Duan, Y., Dubey, R., & Viglia, G. Verlag,.
(2022c). How to develop a quality research article and avoid a journal desk Goldstein, I., & Papert, S. (1977). Artificial intelligence, language, and the study of
rejection. International Journal of Information Management, 62, Article 102426. knowledge. Cognitive Science, 1(1), 84–123.
Dwivedi, Y. K., Hughes, L., Wang, Y., Alalwan, A. A., Ahn, S. J., Balakrishnan, J., & Google (2023) Google Assistant, retrieved on the web at 〈https://newsroom.
Wirtz, J. (2022d). Metaverse marketing: How the metaverse will shape the future of bankofamerica.com/content/newsroom/press-releases/2022/10/bank-of-america-s-
consumer research and practice. Psychology & Marketing. https://doi.org/10.1016/j. erica-tops-1-billion-client-interactions–now-.html%20October%2012〉.
jbusres.2022.113420 GPT, Thunström, A.O., & Steingrimsson, S. (2022). Can GPT-3 write an academic paper
Dwivedi, Y. K., Ismagilova, E., Hughes, D. L., Carlson, J., Filieri, R., Jacobson, J., & on itself, with minimal human input? Retrieved from Online: 〈https://hal.science/
Wang, Y. (2021a). Setting the future of digital and social media marketing research: hal-03701250v1〉.
Perspectives and research propositions. International Journal of Information Grady, P. (2023). ChatGPT Amendment Shows the EU is Regulating by Outrage,
Management, 59, Article 102168. February 13, 〈https://datainnovation.org/2023/02/chatgpt-amendment-shows-the-
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., & eu-is-regulating-by-outrage/〉.
Williams, M. D. (2021b). Artificial Intelligence (AI): Multidisciplinary perspectives Greenwood, R., & Hinings, C. R. (1993). “Understanding strategic change: The
on emerging challenges, opportunities, and agenda for research, practice and policy. contribution of archetypes”. Academy of Management Journal, 36, 1052–1081.
International Journal of Information Management, 57, Article 101994. Greenwood, R., & Hinings, C. R. (1996). Understanding radical organizational change:
Dwivedi, Y. K., Hughes, D. L., Coombs, C., Constantiou, I., Duan, Y., Edwards, J. S., & bringing together the old and the new institutionalism. Academy of Management, 21,
Upadhyay, N. (2020). Impact of COVID-19 pandemic on information management 1022–1054.
research and practice: Transforming education, work and life. International Journal of Greenwood, R., Suddaby, R., & Hinings, C. R. (2002). Theorizing change: the role of
Information Management, 55, Article 102211. professional associations in the transformation of institutionalized fields. Academy of
Dwivedi, Y. K., Wastell, D., Laumer, S., Henriksen, H. Z., Myers, M. D., Bunker, D., & Management, 45(1), 58–80.
Srivastava, S. C. (2015). Research on information systems failures and successes: Guan, C. , Wang, X. , Zhang, Q. , Chen, R. , He, D. , & Xie, X., 2019, May, Towards a deep
Status update and future directions. Information Systems Frontiers, 17(1), 143–157. and unified understanding of deep neural models in nlp. In International conference
Eisenhardt, K. M. (1989). Agency theory: An assessment and review. Academy of on machine learning (pp. 2454–2463). PMLR, available at 〈http://proceedings.mlr.
Management Review, 14, 57–74. press/v97/guan19a/guan19a.pdf〉.
Eliot, L. (2023, January 22 2023). Generative AI ChatGPT Is Going To Be Everywhere Guo, B., Zhang, X., Wang, Z., Jiang, M., Nie, J., Ding, Y.,. & Wu, Y. (2023). How Close is
Once The API Portal Gets Soon Opened, Stupefying AI Ethics And AI Law. Retrieved ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection. arXiv
from 〈https://www.forbes.com/sites/lanceeliot/2023/01/22/generative-ai-chatgpt- preprint arXiv:2301.07597.
is-going-to-be-everywhere-once-the-api-portal-gets-soon-opened-stupefying-ai- Gupta, S., & Chen, Y. (2022). Supporting Inclusive Learning Using Chatbots? A Chatbot-
ethics-and-ai-law/〉. Led Interview Study. Journal of Information Systems Education, 33(1), 98–108.
Ellington, A. J. (2003). A Meta-Analysis of the Effects of Calculators on Students’ Hamilton, I. A. (2022). "Don’t worry about AI becoming sentient. Do worry about it
Achievement and Attitude Levels in Precollege Mathematics Classes. Journal for finding new ways to discriminate against people". Insider. 〈https://www.businessinsi
Research in Mathematics Education, 34(5), 433–463. der.com/ai-discrimination-bias-worse-problem-than-sentience-2022-6〉 Accessed: 24
Elliot, B. (2022). Why is ChatGPT Making Waves in the AI Market? Retrieved from FEB 2023.
〈https://www.gartner.com/en/newsroom/press-releases/2022–12-08-why-is- Haque, M.U., Dharmadasa, I., Sworna, Z.T., Rajapakse, R.N., & Ahmad, H. (2022). I think
chatgpt-making-waves-in-the-ai-market〉. this is the most disruptive technology": Exploring Sentiments of ChatGPT Early
Else, H. (2023). Abstracts written by ChatGPT fool scientists, 423-423 Nature, 613(7944). Adopters using Twitter Data. arXiv preprint arXiv:2212.05856.
https://doi.org/10.1038/d41586-023-00056-7. Heikkilä, M. (2023). How OpenAI is trying to make ChatGPT safer and less biased. Mitosz
Elsevier, 2023, The Use of AI and AI-assisted Technologies in Scientific Writing. 〈https:// Technology Review. 〈https://www.technologyreview.com/2023/02/21/1068893/ho
www.elsevier.com/about/policies/publishing-ethics〉 (accessed 20th Feb, 2023). w-openai-is-trying-to-make-chatgpt-safer-and-less-biased/〉 Accessed: 24 FEB 2023.
EU High-Level Expert Group on AI, 2019, April 08, Ethics guidelines for trustworthy AI. Hendrik, E., 2022, I get your excitement about ChatGPT, but … Available online:
Retrieved from 〈https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines- 〈https://www.hendrik-erz.de/post/i-get-your-excitement-about-chatgpt-but〉.
trustworthy-ai〉. Henrickson L. (2023) Chatting with the dead: the hermeneutics of thanabots, Media,
Ferlazzo, L. (2023). 19 Ways to Use ChatGPT in Your Classroom. Available at 〈https:// Culture & Society 1–18.
www.edweek.org/teaching-learning/opinion-19-ways-to-use-chatgpt-in-your- Hiatt, J. (2006). ADKAR: a model for change in business, government, and our
classroom/2023/01〉 Accessed 10.03.23. community: Prosci.
Financial Times, 2023, Daily Mirror publisher explores using ChatGPT to help write local Hill, J., Ford, W. R., & Farreras, I. G. (2015). Real conversations with artificial
news. 〈https://www.ft.com/content/4fae2380-d7a7–410c-9eed-91fd1411f977〉 intelligence: A comparison between human–human online conversations and
(accessed 20th Feb, 2023). human–chatbot conversations. In Computers in Human Behavior, 49 pp. 245–250).
Firat, M. (2023). How Chat GPT Can Transform Autodidactic Experiences and Open Elsevier,.
Education?.https://doi.org/10.31219/osf.io/9ge8m (last accessed 09/02/2023). Hirschheim, R., Klein, H. K., & Lyytinen, K. (1996). Exploring the intellectual structures
Fire, M., & Guestrin, C. (2019). Over-optimization of academic publishing metrics: of information systems development: A social action theoretic analysis. Accounting,
observing Goodhart’s Law in action. GigaScience, 8(6), 1–20. Management and Information Technologies, 6, 1–64.
Floridi, L. (2019). Establishing the rules for building trustworthy AI. Nature Machine Hoffman, A. J. (1999). Institutional evolution and change: Environmentalism and the US
Intelligence, 1(6), 261–262. chemical industry. Academy of Management Journal, 42(4), 351–371.
Floridi, L. et al. (2018). AI4People’s Ethical Framework for a Good AI Society: Hu, K. (2023) ChatGPT sets record for fastest-growing user base - analyst note,February
Opportunities, Risks, Principles, and Recommendations. Retrieved from https:// 2, 〈https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-
www.eismd.eu/wp-content/uploads/2019/03/AI4People’s-Ethical-Framework-for- base-analyst-note-2023–02-01/〉.
a-Good-AI-Society.pdf. Huang, M. H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service
Floridi, L., & Chiratti, M. (2020). GPT-3: Its nature, scope, limits, and consequences. Research, 21(2), 155–172.
Minds and Machines, 30, 681–694. Huang, P.-S., Wang, C., Huang, S., Zhou, D., & Deng, L. (2018). Towards neural phrase-
Floridi, L., & Strait, A. (2020). Ethical Foresight Analysis: What it is and Why it is based machine translation. In International Conference on Learning Representations.
Needed? Minds & Machines. https://doi.org/10.1007/s11023-020-09521-y Huang, R. (2019). Educational Technology: A Primer for the 21st Century. Springer Nature,.
Fountaine, T., McCarthy, B., & Saleh, T. (2021). Getting AI to Scale. Harvard Business Hughes, A. (2023), "ChatGPT: Everything you need to know about OpenAI’s GPT-3 tool",
Review. May-June 2021. Retrieved from 〈https://hbr.org/2021/05/getting-ai-t Available at: 〈https://www.sciencefocus.com/future-technology/gpt-3/〉.
o-scale〉. https://newsroom.taylorandfrancisgroup.com/taylor-francis-clarifies-the-responsible-
Gao, C.A., Howard, F.M., Markov, N.S., Dyer, E.C., Ramesh, S., Luo, Y., & Pearson, A.T. use-of-ai-tools-in-academic-content-creation/.
(2022). Comparing scientific abstracts generated by ChatGPT to original abstracts IBM (2023). Watson Assistant case studies. Available at: 〈https://www.ibm.com/
using an artificial intelligence output detector, plagiarism detector, and blinded products/watson-assistant/case-studies〉.
human reviewers. bioRxiv, 2022.2012.2023.521610. doi:10.1101/ International Conference on Machine Learning, 2023, ICML 2023 Call For Papers.
2022.12.23.521610. 〈https://icml.cc/Conferences/2023/CallForPapers〉 (accessed 20th Feb, 2023).
Gartner, 2022, Hype Cycle for Artificial Intelligence (AI). 〈https://www.gartner.com/en/ Introna, L. D., & Hayes, N. (2011). On Sociomaterial Imbrications: What Plagiarism
articles/what-s-new-in-artificial-intelligence-from-the-2022-gartner-hype-cycle〉 Detection Systems Reveal and Why It Matters. Information and Organization, 21(2),
(accessed 20th Feb, 2023). 107–122.
Gartner.com (2019). Top Trends on the Gartner Hype Cycle for Artificial Intelligence, Irons, A., & Crick, T. (2022). Cybersecurity in the Digital Classroom: Implications for
2019, 〈https://www.gartner.com/smarterwithgartner/top-trends-on-the-gartner- Emerging Policy, Pedagogy and Practice. Higher Education in a Post-COVID World:
hype-cycle-for-artificial-intelligence-2019/〉. New Approaches and Technologies for Teaching and Learning. Emerald Publishing,.
Getahun, H. (2023). ChatGPT could be used for good, but like many other AI models, it’s Janssen, M., & van den Hoven, J. (2015). Big and Open Linked Data (BOLD) in
rife with racist and discriminatory bias. Insider. 〈https://www.insider.com/chatgpt- government: A challenge to transparency and privacy? Government Information
is-like-many-other-ai-models-rife-with-bias-2023-1〉 Accessed: 24 FEB 2023. Quarterly, 32(4), 363–368.
59
Y.K. Dwivedi et al. International Journal of Information Management 71 (2023) 102642
Jena, L. K., & Goyal, S. (2022). Emotional intelligence and employee innovation: LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.
Sequential mediating effect of person-group fit and adaptive performance. European Li, J., Larsen, K., & Abbasi, A. (2020a). TheoryOn: A design framework and system for
Review of Applied Psychology, 72(1), Article 100729. unlocking behavioral knowledge through ontology learning. MIS Quarterly, 44(4).
Jensen, M. L., Dinger, M., Wright, R. T., & Thatcher, J. B. (2017). Training to mitigate Li, T., Sahu, A. K., Talwalkar, A., & Smith, V. (2020b). Federated learning: Challenges,
phishing attacks using mindfulness techniques. Journal of Management Information methods, and future directions. IEEE Signal Processing Magazine, 37(3), 50–60.
Systems, 34(2), 597–626. Licklider, J. C. (1960). Man-computer symbiosis. IRE Transactions on Human factors in
Jisc (2023). Does ChatGPT Mean the End of the Essay as an Assessment Tool? Available Electronics, (1), 4–11.
at: 〈https://www.jisc.ac.uk/news/does-chatgpt-mean-the-end-of-the-essay-as-an- Lin, J.-S. (Elaine), & Wu, L. (2023). Examining the psychological process of developing
assessment-tool-10-jan-2023〉 (last accessed 10/10/23). consumer-brand relationships through strategic use of social media brand chatbots.
Kabudi, T., Pappas, I., & Olsen, D. H. (2021). AI-enabled adaptive learning systems: A Computers in Human Behavior, 140, Article 107488.
systematic mapping of the literature. Computers and Education: Artificial Intelligence, Liu, S., Yang, L., Zhang, C., Xiang, Y. T., Liu, Z., Hu, S., & Zhang, B. (2020). Online mental
2, Article 100017. health services in China during the COVID-19 outbreak. The Lancet Psychiatry, 7(4),
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux,. e17–e18.
Kar, A. K. (2016). Bio inspired computing–a review of algorithms and scope of Lokman, A. S., & Ameedeen, M. A. (2018). Modern chatbot systems: A technical review.
applications. Expert Systems with Applications, 59, 20–32. Proceedings of the Future Technologies Conference (pp. 1012–1023). Cham: Springer,.
Kelly, C., 2023, ChatGPT might change advertising — but it won’t revolutionize it. https://doi.org/10.1007/978-3-030-02683-7_75 (November).
[Online] Available at: 〈https://www.marketingdive.com/news/chatgpt-AI- Lucey, B., & Dowling, M. (2023). ChatGPT: our study shows AI can produce academic
marketing-advertising-revolutionize/641649/〉 [Accessed 31 1 2023]. papers good enough for journals - just as some ban it. The Conversation (Available at)
Kendrick, C. (2023). The Efficacy of ChatGPT: Is it Time for Librarians to Go Home? 〈https://theconversation.com/chatgpt-our-study-shows-ai-can-produce-academ
Scholarly Kitchen (Available at) 〈https://scholarlykitchen.sspnet.org/2023/01/26/g ic-papers-good-enough-for-journals-just-as-some-ban-it-197762〉.
uest-post-the-efficacy-of-chatgpt-is-it-time-for-the-librarians-to-go-home/〉. Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence Unleashed: An
Kietzmann, J., Paschen, J., & Treen, E. R. (2018). “Artificial Intelligence in Advertising: Argument for Ai in Education. Pearson London,.
How Marketers Can Leverage Artificial Intelligence Along the Consumer Journey. Ma, L., & Sun, B. (2020). Machine learning and AI in marketing–Connecting computing
Journal of Advertising Research, 58(3), 263–267. power to human insights. International Journal of Research in Marketing, 37(3),
Kim, A., & Dennis, A. R. (2019). Says who? The effects of presentation format and source 481–504.
rating on fake news in social media. Mis Quarterly, 43(3), 1025–1039. Maguire, S., Hardy, C., & Lawrence, T. B. (2004). Institutional entrepreneurship in
Kim, H., Shin, D. K., Yang, H., & Lee, J. H. (2019). A study of AI chatbot as an assistant emerging fields: HIV/aids treatment advocacy in Canada. Academy of Management,
tool for school English curriculum. Korean Association For Learner-Centered 47(5), 657–679.
Curriculum And Instruction, 19(1), 89–110. https://doi.org/10.22251/ Mao, Y., White, T., Sadler, P. M., & Sonnert, G. (2017). The association of precollege use
jlcci.2019.19.1.89 of calculators with student performance in college calculus. Educational Studies in
Kim, S., Wang, Y., & Boon, C. (2021). Sixty years of research on technology and human Mathematics, 94(1), 69–83.
resource management: Looking back and looking forward. Human Resource Mariani, M. M., Machado, I., & Nambisan, S. (2023). Types of innovation and artificial
Management, 60(1), 229–247. intelligence: A systematic quantitative literature review and research agenda.
King, A.A., & Baatartogtokh, B. (2015). How useful is the theory of disruptive Journal of Business Research, 155, Article 113364.
innovation? available at 〈http://sloanreview.mit.edu/article/how-useful-is-the- Mariani, M. M., Machado, I., Magrelli, V., & Dwivedi, Y. K. (2022). Artificial intelligence
theory-of-disruptive-innovation/〉. MIT Sloan Management Review, Fall, 77–90. in innovation research: A systematic review, conceptual framework, and future
Kissinger, H. A., Schmidt, E., & Huttenlocher, D. (2021). The Age of AI: and our Human research directions. Technovation. , Article 102623. https://doi.org/10.1016/j.
Future. Hachette UK,. technovation.2022.102623
Klepper, D. (2023), It turns out that ChatGPT is really good at creating online Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of
propaganda: ‘I think what’s clear is that in the wrong hands there’s going to be a lot Business Ethics, 160(4), 835–850.
of trouble. January 24, 〈https://fortune.com/2023/01/24/chatgpt-open-ai-online- McCarthy, J. (2007). What is artificial intelligence. Available from 〈http://jmc.stanford.
propaganda/〉. edu/artificial-intelligence/what-is-ai/index.html〉 Accessed 10.03.23.
Kotter, J. P. (2007). Leading change: Why transformation efforts fail. In Museum McGrath, J. E. (1984). Groups: Interaction and performance (Vol. 14). Englewood Cliffs,
management and marketing (pp. 20–29). Routledge,. NJ: Prentice-Hall,.
Kreps, D., & Rowe, F. (2021). Free Will in Philosophy and in IS research: where do we McKinsey & Company, 2023, What is generative AI? Retrieved from 〈https://www.
stand and where do we want to be? ICIS 2021 Proceedings. 5. mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai#/〉.
Krügel, S., Ostermaier, A., & Uhl, M. (2023). The moral authority of ChatGPT. arXiv Melis, G., Dyer, C., & Blunsom, P. (2017). On the state of the art of evaluation in neural
preprint arXiv:2301.07098. language models. arXiv preprint arXiv:1707.05589.
Kshetri, N., & Ajami, R. (2008). “Institutional Reforms in the Gulf Cooperation Council Merhi, M. I. (2022). An Evaluation of the Critical Success Factors Impacting Artificial
Economies: A Conceptual Framework”. Journal of International Management, 14(3), Intelligence Implementation. International Journal of Information Management. ,
300–318. Article 102545. https://doi.org/10.1016/j.ijinfomgt.2022.102545
Kshetri, N., & Dholakia, N. (2009). Professional and trade associations in a nascent and Metcalfe, M. (2004). Theory: Seeking a plain English explanation. JITTA: Journal of
formative sector of a developing economy: a case study of the NASSCOM effect on Information Technology Theory and Application, 6(2), 13–21.
the Indian offshoring industry. J Int Manag, 15(2), 225–239. Metz, A. (2022). 6 exciting ways to use ChatGPT – from coding to poetry. TechRadar. 〈htt
Kshetri, N. (2018). Informal institutions and Internet-based equity crowdfunding. Journal ps://www.techradar.com/features/6-exciting-ways-to-use-chatgpt-from-coding-to
of International Management, 21(4), 33–51. -poetry〉 Accessed: Feburary 20, 2023.
Kshetri, N. (2023a). "Regulatory technology and supervisory technology: Current status, Mishra, M. (2023). As ChatGPT takes the internet by storm, here’s what its rivals are up
facilitators and barriers”. IEEE Computer, 56(1), 64–75. to. The Economic Times. January 31 〈https://economictimes.indiatimes.com/tech/te
Kshetri, N. (2023b). ChatGPT in developing economies. IEEE IT Professional, 25(2) chnology/as-chatgpt-takes-the-internet-by-storm-heres-what-its-rivals-are-up-to/art
(forthcoming). icleshow/97441789.cms〉.
Kulesh, S. (2022). Why ChatGPT can be dangerous for every Internet user. The Times of Mitchell, A. (2023). ChatGPT could make these jobs obsolete: ‘The wolf is at the door.’.
India (Available online) 〈https://timesofindia.indiatimes.com/toirreporter/author-s New York Post. January 25 〈https://nypost.com/2023/01/25/chat-gpt-could-make
ourabh-kulesh-479259193.cms〉. -these-jobs-obsolete/〉.
Kundalia, N.D. (2023, January 30). ChatGPT and the future of writing. Hindustan Times. Mithas, S. (2016). Digital intelligence: what every smart manager must have for success
Retrieved January 31, 2023, from 〈https://www.hindustantimes.com/books/ in an information age. Portfolio Penguin.
chatgpt-and-the-future-of-writing-101675090609362.html〉. Mithas, S., & McFarlan, F. W. (2017). What is Digital Intelligence? IEEE IT Professional, 19
Kung, T.H., Cheatham, M., & Medenilla, A. (2022). Performance of ChatGPT on USMLE: (4), 3–6. 〈https://www.computer.org/csdl/mags/it/2017/04/mit2017040003.htm
Potential for AI-assisted medical education using large language models. bioRxiv. l〉.
Published online December, 20, 2022–12. Mithas, S., Chen, Z.-L., Saldanha, T., & Silveira, A. D. O. (2022). How Will Artificial
Kushwaha, A. K., & Kar, A. K. (2021). MarkBot – A Language Model-Driven Chatbot for Intelligence and Industry 4.0 Emerging Technologies Transform Operations
Interactive Marketing in Post-Modern World. Retrieved January 31, 2023, from Management? Production and Operations Management, 31(12), 4475–4487. https://
Information Systems Frontiers. https://doi.org/10.1007/s10796-021-10184-y. doi.org/10.1111/poms.13864
Laffont, J.-J., & Martimort, D. (2002). The Theory of Incentives: The Principal-agent Model. Mithas, S., Murugesan, S., & Seetharaman, P. (2020). What is Your Artificial Intelligence
Princeton University Press,. Strategy? IEEE IT Professional, 22(2), 4–9.
Larsen, K. R., Lukyanenko, R., Mueller, R. M., Storey, V. C., VanderMeer, D., Parsons, J., Mogaji, E., & Nguyen, N. P. (2022). Managers’ understanding of artificial intelligence in
& Hovorka, D. S. (2020). Validity in Design Science Research, Designing for Digital relation to marketing financial services: insights from a cross-country study.
Transformation. Co-Creating Services with Citizens and Industry: 15th International International Journal of Bank Marketing, 40(6), 1272–1298.
Conference on Design Science Research in Information Systems and Technology, DESRIST Mogaji, E., Farquhar, J. D., Van Esch, P., Durodié, C., & Perez-Vega, R. (2022). Guest
2020, Kristiansand, Norway, December 2–4, 2020, Proceedings 15 (pp. 272–282). editorial: Artificial intelligence in financial services marketing. International Journal
Springer,. Co-Creating Services with Citizens and Industry: 15th International of Bank Marketing, 40(6), 1097–1101.
Conference on Design Science Research in Information Systems and Technology, DESRIST Mogaji, E., Olaleye, S., & Ukpabi, D. (2020a). Using AI to personalise emotionally
2020, Kristiansand, Norway, December 2–4, 2020, Proceedings 15. appealing advertisement. Digital and Social Media Marketing: Emerging Applications
Laumer, S., Maier, C., Eckhardt, A., & Weitzel, T. (2016). Work routines as an object of and Theoretical Development (pp. 137–150). Cham: Springers,.
resistance during information systems implementations: Theoretical foundation and Mogaji, E., Soetan, T., & Kieu, T. (2020b). The implications of artificial intelligence on
empirical evidence. European Journal of Information Systems, 25(4), 317–343. the digital marketing of financial services to vulnerable customers. Australasian
https://doi.org/10.1057/ejis.2016.1 Marketing Journal, 29(3), 235–242.
60
Y.K. Dwivedi et al. International Journal of Information Management 71 (2023) 102642
Mollick, E. (2022). ChatGPT Is a Tipping Point for AI. Harvard Business Review. December Pflügner, K., Maier, C., & Weitzel, T. (2021). The direct and indirect influence of
14. mindfulness on techno-stressors and job burnout: A quantitative study of white-
Mollick, E.R. & Mollick, L. (2022). New Modes of Learning Enabled by AI Chatbots: collar workers. Computers in Human Behavior, 115, Article 106566.
Three Methods and Assignments. Available at: https://doi.org/10.2139/ Pinker, S. (2022). Tech Prophecy and the Underappreciated Power of Ideas. In
ssrn.4300783Pflügner, J. Brockman (Ed.), Possible Minds: Twenty-Five Ways of Looking at AI. New York:
Montti, R. (2022). What is ChatGPT and how can you use it? Search Engine Journal Penguin Press.
(Accessed from) 〈https://www.searchenginejournal. Pizzi, G., Scarpi, D., & Pantano, E. (2021). Artificial intelligence and the new forms of
com/what-is-chatgpt/473664/#close〉. interaction: Who has the control when interacting with a chatbot? Journal of Business
Moor, J. H. (2008). Why we need better ethics for emerging technologies. In Research, 129, 878–890.
J. V. D. Hoven, & J. Weckert (Eds.), Information Technology and Moral Philosophy (pp. Popper, K.R. (1957). Philosophy of Science: A personal report.
26–39). Cambridge: Cambridge University Press. Qadir, J. (2022). Engineering Education in the Era of ChatGPT: Promise and Pitfalls of
National Institute of Standards and Technology, 2023, January 26, NIST AI Risk Generative AI for Education. TechRxiv. https://doi.org/10.36227/
Management Framework Playbook. NIST. Retrieved February 8, 2023, from techrxiv.21789434.v1.
〈https://pages.nist.gov/AIRMF/〉. Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language
Nature. (2023). Preparing your materials. Nature (Available at) 〈https://www.nature. understanding by generative pre-training. Preprint. 1–12.
com/nbt/submission-guidelines/preparing-your-submission〉. Rafalin, D., undated, Designing Assessment to Minimise the Possibility of Contract
New York Times. (2023). Don’t Ban ChatGPT in Schools. Teach With It. 〈https://www. Cheating. Available at: 〈https://www.city.ac.uk/__data/assets/pdf_file/0019/
nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html〉 Accessed 440245/Designing-Assessment-to-Minimise-the-Possibility-of-Contract-Cheating-18-
10.03.23. RL1.pdf〉 (last accessed 09/02/2023).
Newman, K. L. (2000). “Organizational transformation during institutional upheaval”. Rai, A., Constantinides, P., & Sarker, S. (2019). Next generation digital platforms: toward
The Academy of Management Review, 25(3), 602–619. human-AI hybrids. MIS Quarterly, 43(1), iii–ix.
Ng, J., Haller, E., & Murray, A. (2022). The ethical chatbot: A viable solution to socio- Rasmusen, E. (2014). Perfect, certain, symmetric and complete information. Games and
legal issues. Alternative Law Journal, 47(4), 308–313. information: An introduction to game theory (fourth edition.,). Basil Blackwell,.
Nguyen, N., & Mogaji, E. (2022). #BankFromHome: Using Advertisement Campaigns to Reed, L. (2022). ChatGPT for Automated Testing: From conversation to code. Sauce Labs.
Change Banking Behaviour During the COVID-19 Pandemic in an Emerging 〈https://saucelabs.com/blog/chatgpt-automated-testing-conversation-to-code〉
Economy. Marketing Communications and Brand Development in Emerging Markets Accessed: February 20, 2023.
Volume II: Insights for a Changing World (pp. 83–108). Cham: Springer International Reese, H. (2016). Why Microsoft’s ’Tay’ AI bot went wrong. TechRepublic (Available at)
Publishing,. 〈https://www.techrepublic.com/article/why-microsofts-tay-ai-bot-went-wrong/〉.
North, D. C. (1990). Institutions, Institutional Change and Economic Performance. Reich, R. (1992). The work of nations. Preparing ourselves for 21st century capitalism. Alfred
Cambridge, UK: Cambridge University Press,. Knopf,.
Northey, G., Hunter, V., Mulcahy, R., & Choong, K. (2022). Man vs machine: how Riedel, A., Mulcahy, R., & Northey, G. (2022). Feeling the love? How consumer’s
artificial intelligence in banking influences consumer belief in financial advice. political ideology shapes responses to AI financial service delivery. International
International Journal of Bank Marketing, 40(6), 1182–1199. Journal of Bank Marketing, 40(6), 1102–1132.
O’Connor, S. (2023). Open artificial intelligence platforms in nursing education: Tools Rogers, E. M. (2010). Diffusion of Innovations. Simon and Schuster,.
for academic progress or abuse. Nurse Education in Practice, 66, Article 103537. Rosenblatt, K. (2023). ChatGPT passes MBA exam given by a Wharton professor.
O’Connor, A. (2021). Reading people. The Lancet Child & Adolescent Health, 5(11), 779. Available at: 〈https://www.nbcnews.com/tech/tech-news/chatgpt-passes-mba-
O’Connor, S.J. N.E. i P. (2022). Open artificial intelligence platforms in nursing exam-wharton-professor-rcna67036〉.
education: Tools for academic progress or abuse? 66, 103537–103537. Rosenzweig-Ziff, D. (2023). New York City blocks use of the ChatGPT bot in its schools.
O’Connor, S., & ChatGPT. (2023). Editorial: Open artificial intelligence platforms in The Washington Post. 〈https://www.washingtonpost.com/education/2023/01/05/n
nursing education: Tools for academic progress or abuse? Nurse Education In Practice, yc-schools-ban-chatgpt/〉.
66, Article 103537. https://doi.org/10.1016/j.nepr.2022.103537 Rowe, F. (2018). Being critical is good, but better with philosophy! From digital trans-
Ofcom (2022). Instagram, TikTok and YouTube teenagers’ top three news sources. formation and values to the future of IS research. European Journal of Information
Available at 〈https://www.ofcom.org.uk/news-centre/2022/instagram,-tiktok-and- Systems, 27(3), 380–393.
youtube-teenagers-top-three-news-sources〉 Accessed 10.03.23. Ryle, G. (1945). Knowing how and knowing that: The presidential address. In Aristotelian
O’Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A Society (Vol. 46, pp. 1–16). Wiley, (January).
scoping review. The Internet and Higher Education, 25, 85–95. Sarker, S., Chatterjee, S., Xiao, X., & Elbanna, A. (2019). The Sociotechnical Perspective
Okonkwo, C. W., & Ade-Ibijola, A. (2021). Chatbots applications in education: A as an ‘Axis of Cohesion’ for the IS discipline: Recognizing its Historical Legacy and
systematic review. Computers and Education Artificial Intelligence, 2, Article 100033. Ensuring its Continued Relevance. MIS Quarterly, 43(3), 695–719.
https://doi.org/10.1016/j.caeai.2021.100033 Schucany, W. R., Minton, P. D., & Shannon Jr, B. S. (1972). A survey of statistical
Omoge, A. P., Gala, P., & Horky, A. (2022). Disruptive technology and AI in the banking packages. ACM Computing Surveys (CSUR), 4(2), 65–79.
industry of an emerging market. International Journal of Bank Marketing, 40(6), Schuetz, S. W., & Venkatesh, V. (2020). “The Rise of Human Machines: How Cognitive
1217–1247. Computing Systems Challenge Assumptions of User-System Interaction”. Journal of
OpenAI (2022). New and Improved Content Moderation Tooling. 〈https://openai.co the Association for Information Systems, (21:2), 460–482. https://doi.org/10.17705/
m/blog/new-and-improved-content-moderation-tooling/〉. Accessed: 24 FEB 2023. 1jais.00608
OpenAI, 2023a, ChatGPT: Optimizing Language Models for Dialogue. Available at: Schuetzler, R. M., Grimes, G. M., & Giboney, J. S. (2020). The impact of chatbot
〈https://openai.com/blog/chatgpt/〉. conversational skill on engagement and perceived humanness. Journal of
OpenAI, 2023b, Usage policies. 〈https://platform.openai.com/docs/usage-policies〉. Management Information Systems, 37(3), 875–900.
Accessed: 24 FEB 2023. Science. (2023). ChatGPT is fun, but not an author. 〈https://www.science.org/doi/
OpenAI, 2023, “DALL.E2”, Available at: 〈https://openai.com/dall-e-2/〉. 10.1126/science.adg7879〉 (accessed 20th Feb, 2023).
Oppy, G., & Dowe, D. (2021). The Turing Test. In E. N. Zalta (Ed.), The Stanford Scott, R. (2001). Institutions and Organizations. Thousand Oaks, CA: Sage,.
Encyclopedia of Philosophy. Winter Edition. 〈https://plato.stanford.edu/archives/win Scott, W. R., Ruef, M., Mendel, P. J., & Caronna, C. A. (2000). Institutional change and
2021/entries/turing-test/〉. healthcare organizations: From professional dominance to managed care. Chicago, IL:
Oreg, S. (2003). Resistance to change: Developing an individual differences measure. University of Chicago Press,.
Journal of Applied Psychology, 88(4), 680. Searle, J. R. (1980). Minds, brains, and programs. Behavioral and brain Sciences, 3(3),
Ortiz, S. (2023a). ChatGPT is changing everything. But it still has its limits. ZDNet 417–424.
(Available online) 〈https://www.zdnet.com/article/chatgpt-is-changing-everythin Seeger, A.-M., Pfeiffer, J., & Heinzl, A. (2021). Texting with Humanlike Conversational
g-but-it-still-has-its-limits/〉. Agents: Designing for Anthropomorphism. Journal of the Association for Information
Ortiz, S. (2023b). What is ChatGPT and why does it matter? Here’s everything you need Systems, 22(4), 931–967.
to know. ZD Netto Innovation (Accessed from) 〈https://www.zdnet.com/article Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European
/what-is-chatgpt-and-why-does-it-matter-heres-everything-you-need-to-know/〉. Journal of Education, 57(4), 620–631.
Pagani, M. and Champion, R., forthcoming, Artificial Intelligence for Business Creativity, Shah, C., & Bender, E.M. (2022). Situating Search. In ACM SIGIR Conference on Human
Routledge. Information Interaction and Retrieval, 221–232.
Pappas, I. O., & Giannakos, M. N. (2021). Rethinking Learning Design in IT Education Sheth, J. N., Jain, V., Roy, G., & Chakraborty, A. (2022). AI-driven banking services: the
During a Pandemic. Frontiers in Education, 6, 103. next frontier for a personalised experience in the emerging market. International
Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. London: Viking/ Journal of Bank Marketing, 40(6), 1248–1271.
Penguin Press,. Shneiderman, B. (2020). Human-Centered Artificial Intelligence: Three Fresh Ideas. AIS
Pazzanese, C. (2020). Great Promise but Potential for Peril. The Harvard Gazette. October Transactions on Human-Computer Interaction, 12(3), 109–124.
26 〈https://news.harvard.edu/gazette/story/2020/10/ethical-concerns- Shrivastava, R. (2022). Teachers Fear ChatGPT Will Make Cheating Easier Than Ever.
mount-as-ai-takes-bigger-decision-making-role/〉. Forbes. Retrieved January 31, 2023, from 〈https://www.forbes.com/sites/rashishri
Pearl, M. (2022): The ChatGPT chatbot from OpenAI is amazing, creative, and totally vastava/2022/12/12/teachers-fear-chatgpt-will-make-cheating-easier-than-ever/〉.
wrong. 〈https://mashable.com/article/chatgpt-amazing-wrong〉. Shujahat, M., Sousa, M. J., Hussain, S., Nawaz, F., Wang, M., & Umer, M. (2019).
Perrigo, B. (2023). OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make Translating the impact of knowledge management processes into knowledge-based
ChatGPT Less Toxic. Time. 〈https://time.com/6247678/openai-chatgpt-kenya- innovation: The neglected and mediating role of knowledge-worker productivity.
workers〉. Journal of Business Research, 94, 442–450.
Sieloff, C. G. (1999). ‘If only HP knew what HP knows’: the roots of knowledge
management at Hewlett-Packard. Journal of Knowledge Management, 3(1), 47–53.
61
Y.K. Dwivedi et al. International Journal of Information Management 71 (2023) 102642
Simon, H. A. (1960). The corporation: Will it be managed by machines? In M. L. Anshen, Tiku, M., De Vinck, G., Oremus, W. (2023, February 3). Big Tech was Moving Cautiously
& G. L. Bach (Eds.), Management and the Corporations 1985 (pp. 17–55). McGraw- on AI. Then Came ChatGPT, Washington Post. Retrieved from 〈https://www.
Hill. washingtonpost.com/technology/2023/01/27/chatgpt-google-meta/〉.
Singh, V., Chen, S. S., Singhania, M., Nanavati, B., & Gupta, A. (2022b). How are Trinidad, J. E. (2020). Understanding student-centred learning in higher education:
reinforcement learning and deep learning algorithms used for big data based students’ and teachers’ perceptions, challenges, and cognitive gaps. Journal of
decision making in financial industries–A review and research agenda. International Further and Higher Education, 44(8), 1013–1023.
Journal of Information Management Data Insights, 2(2), Article 100094. Tung, L. (2023). ChatGPT can write code. Now researchers say it’s good at fixing bugs,
Singh, V., Nanavati, B., Kar, A. K., & Gupta, A. (2022a). How to maximize clicks for too. ZDNet. 〈https://www.zdnet.com/article/chatgpt-can-write-code-now-researc
display advertisement in digital marketing? A reinforcement learning approach. hers-say-its-good-at-fixing-bugs-too/〉 Accessed: February 29, 2023.
Information Systems Frontiers, 1–18. https://doi.org/10.1007/s10796-022-10314-0 Turing Test, 2023, January 29, In Wikipedia. 〈https://en.wikipedia.org/wiki/Turing_
Soetan, T. O., Mogaji, E., & Nguyen, N. P. (2021). Financial services experience and test#Google_LaMDA_chatbot〉.
consumption in Nigeria. Journal of Services Marketing, 35(7), 947–961. Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
Sollie, P. (2009). On Uncertainty in Ethics and Technology. In P. Sollie, & M. Düwell Turnitin (2023). Sneak preview of Turnitin’s AI writing and ChatGPT detection
(Eds.), Evaluating New Technologies: Methodological Problems for the Ethical Assessment capability. Available at: 〈https://www.turnitin.com/blog/sneak-preview-of-
of Technology Developments, The International Library of Ethics, Law and Technology turnitins-ai-writing-and-chatgpt-detection-capability〉.
(pp. 141–158). Springer. Uc-Cetina, V., Navarro-Guerrero, N., Martin-Gonzalez, A., Weber, C., & Wermter, S.
Springer-Nature, 2023, Tools such as ChatGPT threaten transparent science; here are our (2022). Survey on reinforcement learning for language processing. Artificial
ground rules for their use. 〈https://www.nature.com/articles/d41586–023- Intelligence Review, 1–33. https://doi.org/10.1007/s10462-022-10205-5
00191–1〉 (accessed 20th Feb, 2023). UK Government (2021). National AI Strategy. Available at 〈https://www.gov.uk/
Stahl, B. C. (2021). Artificial Intelligence for a Better Future: An Ecosystem Perspective on the government/publications/national-ai-strategy〉 Accessed 10.03.23.
Ethics of AI and Emerging Digital Technologies, SpringerBriefs in Research and Innovation UK Government Digital Service, 2019, July, UK government digital service standard.
Governance. Springer International Publishing,. https://doi.org/10.1007/978-3-030- Retrieved from 〈https://www.gov.uk/service-manual/service-standard〉.
69978-9 UNESCO (2021). AI and education: guidance for policy-makers. Available at 〈https://
Stahl, B. C., Flick, C., & Timmermans, J. (2017). Ethics of Emerging Information and unesdoc.unesco.org/ark:/48223/pf0000376709〉 Accessed 10.03.23.
Communication Technologies-On the implementation of RRI. Science and Public van Dis, E. A. M., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. (2023). ChatGPT:
Policy, 44, 369–381. https://doi.org/10.1093/scipol/scw069 five priorities for research. Nature, 614, 224–226. https://doi.org/10.1038/d41586-
Stein, N., & Brooks, K. (2017). A fully automated conversational artificial intelligence for 023-00288-7
weight loss: longitudinal observational study among overweight and obese adults. van Esterik-Plasmeijer, P. W., & Van Raaij, W. F. (2017). Banking system trust, bank
JMIR Diabetes, 2(2). https://doi.org/10.2196/diabetes.8590 trust, and bank loyalty. International Journal of Bank Marketing, 35(1), 97–111.
Stevenson, C., Smal, I., Baas, M., Grasman, R., & van der Maas, H. (2022). Putting GPT- Van Noorden, R. (2022). How language-generation AIs could transform science, 21–21
3’s Creativity to the (Alternative Uses) Test. In International Conference on Nature, 605. https://doi.org/10.1038/d41586-022-01191-3.
Computational Creativity (ICCC) 2022. arXiv preprint arXiv:2206.08932. Vargo, S. L., Maglio, P. P., & Akaka, M. A. (2008). On value and value co-creation: A
Stokel-Walker, C. (2022). AI bot ChatGPT writes smart essays - should professors worry? service systems and service logic perspective. European Management Journal, 26(3),
Nature (London). https://doi.org/10.1038/d41586-022-04397-7 145–152.
Stokel-Walker, C. J. N. (2023). ChatGPT listed as author on research papers: many Vassilakopoulou, P., Haug, A., Salvesen, L. M., & Pappas, I. O. (2023). Developing
scientists disapprove. Nature, 613, 620–621. https://doi.org/10.1038/d41586-023- Human/AI interactions for chat-based-customer-services: lessons learned from the
00107-z Norwegian Government. European Journal of Information Systems, 32(1), 10–22.
Strahan, B., & Gleason, B. (2019). Building a multi-channel Q&A chatbot at Saint Louis Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., &
University using the open source QnABot. Available at: 〈https://aws.amazon.com/ Polosukhin, I. (2017). Attention is all you need. Advances in neural information
blogs/publicsector/building-a-multi-channel-qa-chatbot-at-saint-louis-university- processing systems, 30. 31st Conference on Neural Information Processing Systems. CA,
using-the-open-source-qnabot/〉. USA: Long Beach, (Available at) 〈https://proceedings.neurips.cc/paper/2017/file/
Strang, D., & Meyer, J. (1993). Institutional conditions for diffusion. Theory Soc, 22, 3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf〉.
487–511. Venkatesh, V., & Sykes, T. A. (2013). Digital divide initiative success in developing
Stylos, N., Zwiegelaar, J., & Buhalis, D. (2021). Big data empowered agility for dynamic, volatile, and time-sensitive service industries: the case of tourism sector. International Journal of Contemporary Hospitality Management, 33(3), 1015–1036. https://doi.org/10.1108/IJCHM-07-2020-0644
Sun, T. Q., & Medaglia, R. (2019). Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare. Government Information Quarterly, 36(2), 368–383.
Susnjak, T. (2022). ChatGPT: The End of Online Exam Integrity? arXiv preprint arXiv:2212.09292.
Svrluga, S. (2023). Princeton student builds app to detect essays written by a popular AI bot. The Washington Post. 〈https://www.washingtonpost.com/education/2023/01/12/gptzero-chatgpt-detector-ai/〉.
Tang, C. S. (2022). Innovative technology and operations for alleviating poverty through women’s economic empowerment. Production and Operations Management, 31(1), 32–45.
Te’eni, D., Rowe, F., Agerfalk, P., & Lee, J. S. (2015). Publishing and getting published in EJIS: marshaling contributions for a diversity of genres. European Journal of Information Systems, 24(6), 559–568.
Tellis, G. J. (2006). Disruptive Technology or Visionary Leadership? Journal of Product Innovation Management, 23(1), 34–38.
Terblanche, N., & Cilliers, D. (2020). Factors that influence users’ adoption of being coached by an artificial intelligence coach. Philosophy of Coaching: An International Journal, 5(1), 61–70. https://doi.org/10.22316/poc/05.1.06
Tércio Pereira, P., Limberger, F., Minasi, S. M., & Buhalis, D. (2022). New Insights into Consumers’ Intention to Continue Using Chatbots in the Tourism Context. Journal of Quality Assurance in Hospitality & Tourism. https://doi.org/10.1080/1528008X.2022.2136817
Terwiesch, C. (2023). Would Chat GPT Get a Wharton MBA? A Prediction Based on Its Performance in the Operations Management Course. Mack Institute News, White Paper. Available at: 〈https://mackinstitute.wharton.upenn.edu/2023/would-chat-gpt3-get-a-wharton-mba-new-white-paper-by-christian-terwiesch/〉.
The Conversation (2023). ChatGPT could be a game-changer for marketers, but it won’t replace humans any time soon. Available at: 〈https://theconversation.com/chatgpt-could-be-a-game-changer-for-marketers-but-it-wont-replace-humans-any-time-soon-198053〉.
Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879
Tight, M. (2019). Mass Higher Education and Massification. Higher Education Policy, 32, 93–108.
Venkatesh, V., & Sykes, T. A. (2013). Digital divide initiative success in developing countries: A longitudinal field study in a village in India. Information Systems Research, 24(2), 239–260.
Vincent, J. (2022). OpenAI’s new chatbot can explain code and write sitcom scripts but is still easily tricked. The Verge, December 2nd. 〈https://www.theverge.com/23488017/openai-chatbot-chatgpt-ai-examples-web-demo〉.
Vincent, J. (2023). Top AI conference bans use of ChatGPT and AI language tools to write academic papers. The Verge, January 6th. 〈https://www.theverge.com/2023/1/5/23540291/chatgpt-ai-writing-tool-banned-writing-academic-icml-paper〉.
Vives, X. (2019). Competition and stability in modern banking: A post-crisis perspective. International Journal of Industrial Organization, 64, 55–69.
Wacker, J. G. (1998). A definition of theory: research guidelines for different theory-building research methods in operations management. Journal of Operations Management, 16, 361–385.
Walsh, T. (2023). Bard, Bing and Baidu: how big tech’s AI race will transform search – and all of computing. Available at: 〈https://theconversation.com/bard-bing-and-baidu-how-big-techs-ai-race-will-transform-search-and-all-of-computing-199501〉.
Walzer, M. (2017). In M. L. Gross, & T. Meisels (Eds.), Soft War: The Ethics of Unarmed Conflict. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781316450802
Watermeyer, R., Crick, T., & Knight, C. (2022). Digital disruption in the time of COVID-19: Learning technologists’ accounts of institutional barriers to online learning, teaching and assessment in UK universities. International Journal for Academic Development, 27(2), 148–162.
Watson, R. T., & Webster, J. (2020). Analysing the past to prepare for the future: Writing a literature review – a roadmap for release 2.0. Journal of Decision Systems, 29(3), 129–147.
Webster, J., & Watson, R. T. (2002). Analyzing the past to prepare for the future: Writing a literature review. MIS Quarterly, xiii–xxiii.
Weinberger, D. (2019, November 06). How Machine Learning Pushes us to Define Fairness. Retrieved from 〈https://hbr.org/2019/11/how-machine-learning-pushes-us-to-define-fairness〉.
Weissglass, D. E. (2022). Contextual bias, the democratization of healthcare, and medical artificial intelligence in low- and middle-income countries. Bioethics, 36(2), 201–209.
Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45.
Welsh, S. (2023). ChatGPT and the Future of the Professions. Quillette. Available at: 〈https://quillette.com/2023/01/26/chatgpt-and-the-future-of-the-professions/?ref=quillette-newsletter〉.
West, D. (2018, September 13). The role of corporations in addressing AI’s ethical dilemmas. Retrieved from 〈https://www.brookings.edu/research/how-to-address-ai-ethical-dilemmas/〉.
Whetten, D. A. (1989). What constitutes a theoretical contribution? Academy of Management Review, 14, 490–495.
Whitford, E. (2022). A Computer Can Now Write Your College Essay, Maybe Better Than You Can. Forbes.com. 〈https://www.forbes.com/sites/emmawhitford/2022/12/09/a-computer-can-now-write-your-college-essay—maybe-better-than-you-can/?sh=3f21aba2dd39〉.
Wilkins, A. (2023). ChatGPT Detector Could Help Spot Cheaters Using AI to Write Essays.
Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223–235.
Williamson, B., Bayne, S., & Shay, S. (2020a). The datafication of teaching in Higher Education: critical issues and perspectives. Teaching in Higher Education, 25(4), 351–365.
Williamson, B., Eynon, R., & Potter, J. (2020b). Pandemic politics, pedagogies and practices: digital technologies and distance education during the coronavirus emergency. Learning, Media and Technology, 45(2), 107–114.
Wind, J. (2023). AI-enabled New Educational Paradigm (Seminar). SKEMA Business School.
Winikoff, M., Cranefield, J., Li, J., Doyle, C., & Richter, A. (2021). The Advent of Digital Productivity Assistants: The Case of Microsoft MyAnalytics. In IEEE Annual Hawaii International Conference on System Sciences (HICSS) 2021.
Wirtz, J., & Zeithaml, V. (2018). Cost-effective service excellence. Journal of the Academy of Marketing Science, 46(1), 59–80.
Wirtz, J., Kunz, W. H., Hartley, N., & Tarbit, J. (2023). Corporate digital responsibility in service firms and their ecosystems. Journal of Service Research, published online first. https://doi.org/10.1177/10946705221130467
Wirtz, J., Lin, C., & Das, G. (2022). Cost-effective health care developments and research opportunities in China, India and Singapore. Journal of Services Marketing, 36(4), 461–466.
Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., & Martins, A. (2018). Brave new world: service robots in the frontline. Journal of Service Management, 29(5), 907–931.
Wlasak, W., Zwanenburg, S. P., & Paton, C. (2023). Supporting Autonomous Motivation for Physical Activity With Chatbots During the COVID-19 Pandemic: Factorial Experiment. JMIR Formative Research, 7(1). https://doi.org/10.2196/38500
Woolf, B. P. (1992). AI in Education. In Encyclopedia of Artificial Intelligence (2nd ed.). New York: John Wiley & Sons.
Woolf, B. P. (2015). AI and Education: Celebrating 30 Years of Marriage. In AIED Workshops (Vol. 4, pp. 38–47).
Yang, S. (2022). The Abilities and Limitations of ChatGPT. Anaconda Perspectives. 〈https://www.anaconda.com/blog/the-abilities-and-limitations-of-chatgpt〉. Accessed 25th January 2023.
Zhai, X. (2022). ChatGPT user experience: Implications for education. SSRN 4312418, 1–10.
Zhai, X., Chu, X., Chai, C. S., Jong, M. S. Y., Istenic, A., Spector, M., Liu, J., Yuan, J., & Li, Y. (2021). A Review of Artificial Intelligence (AI) in Education from 2010 to 2020. Complexity, 2021, 1–18.
Zhang, L., Pentina, I., & Fan, Y. (2021). Who do you choose? Comparing perceptions of human vs robo-advisor in the context of financial services. Journal of Services Marketing, 35(5), 634–646.
Zhang, Z., Hummel, J. T., Nandhakumar, J., & Waardenburg, L. (2020). Addressing the key challenges of developing machine learning AI systems for knowledge-intensive work. MIS Quarterly Executive, 19(4), 221–238.
Zhao, L., Mao, H., Compton, B. J., Peng, J., Fu, G., Fang, F., & Lee, K. (2022). Academic dishonesty and its relations to peer cheating and culture: A meta-analysis of the perceived peer cheating effect. Educational Research Review, Article 100455.
Zhu, Y., Janssen, M., Wang, R., & Liu, Y. (2021). It is me, chatbot: working to address the COVID-19 outbreak-related mental health issues in China. User experience, satisfaction, and influencing factors. International Journal of Human–Computer Interaction, 38(12), 1182–1194.