Opinion paper
ARTICLE INFO

Keywords:
ChatGPT
Ethics
Emerging technology
Generative AI systems

ABSTRACT

This article explores ethical issues raised by generative conversational AI systems like ChatGPT. It applies established approaches for analysing the ethics of emerging technologies to undertake a systematic review of possible benefits and concerns. The methodology combines ethical issues identified by Anticipatory Technology Ethics, Ethical Impact Assessment, and Ethical Issues of Emerging ICT Applications with AI-specific issues from the literature. These are applied to analyse ChatGPT's capabilities to produce humanlike text and interact seamlessly. The analysis finds ChatGPT could provide high-level societal and ethical benefits. However, it also raises significant ethical concerns across social justice, individual autonomy, cultural identity, and environmental issues. Key high-impact concerns include responsibility, inclusion, social cohesion, autonomy, safety, bias, accountability, and environmental impacts. While the current discourse focuses narrowly on specific issues such as authorship, this analysis systematically uncovers a broader, more balanced range of ethical issues worthy of attention. Findings are consistent with emerging research and industry priorities on the ethics of generative AI. Implications include the need for diverse stakeholder engagement, considering benefits and risks holistically when developing applications, and multi-level policy interventions to promote positive outcomes. Overall, the analysis demonstrates that applying established ethics of technology methodologies can produce a rigorous, comprehensive foundation to guide discourse and action around impactful emerging technologies like ChatGPT. The paper advocates sustaining this broad, balanced ethics perspective as use cases unfold to realize benefits while addressing ethical downsides.
https://doi.org/10.1016/j.ijinfomgt.2023.102700
Received 23 April 2023; Received in revised form 29 August 2023; Accepted 29 August 2023
Available online 9 September 2023
0268-4012/© 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
B.C. Stahl and D. Eke International Journal of Information Management 74 (2024) 102700
ethics of emerging technologies and the ethics of AI. This background provides the basis for the methodology employed to analyse the ethics of ChatGPT. The subsequent sections then discuss the structured ethical analysis of ChatGPT, the relevance of the findings, and which open questions emerge from them. The article then spells out likely implications for research and practice.

2. ChatGPT and the ethics of emerging digital technologies

This article aims to apply ideas developed in the context of ethics-related research on emerging digital technologies to ChatGPT. It is therefore important to first understand the theoretical background of ethics that underpins ethics-related research. This section then discusses which benefits ChatGPT promises and which concerns it raises (Dwivedi, Kshetri et al., 2023) before introducing approaches to the ethics of emerging technologies.

2.1. Theoretical background

The current discussion of ChatGPT covers numerous technical aspects as well as current and likely applications. One crucial part of the debate relates to ethical concerns that many contributors raise about the use of ChatGPT. This article focuses on such ethical questions. Before we can explore them in more detail, it is important to provide the conceptual and theoretical basis. The concept of ethics as used in the English language has a number of related but non-identical meanings (Stahl, 2012). In general terms it refers to questions involving moral concepts such as right and wrong, or good and bad. In most familiar contexts humans intuitively know what is right and wrong, a knowledge of ethics that is acquired during human socialisation. Sometimes this intuition fails or conflicts with others' intuitions, at which point ethics appears as a set of explicit statements in the form of "you should never / always do X". Where such statements are not accepted and need justification, ethics as a set of philosophical theories is called for. Ethics can be descriptive or normative, abstract or applied.

Ethics as a philosophical discipline that is mostly concerned with examining or justifying prescriptive rules and statements has a written history of several millennia. In the European context the roots of philosophical ethics are typically traced back to Greek Antiquity. There are numerous ethical theories that form the canon of philosophical ethics. One that has its roots in Aristotle (2007) is virtue ethics, which evaluates the ethical quality of an action or situation with reference to the character of the actor. Other high-profile ethical theories include deontology, which judges the quality of an action by examining the intention of the agent undertaking it. This type of ethical reasoning is closely associated with the German philosopher Immanuel Kant (1788, 1797). For Kant an action can only count as ethically good if it is motivated by duty, which means that it must come from a purely rational insight into what should be done, which Kant formulates as the so-called Categorical Imperative. A very different ethical theory is that of teleology, which uses the outcomes of an action to evaluate its ethical quality. This type of reasoning, also known as consequentialism, is associated with thinkers such as Bentham (1789) or Mill (1861). They suggested that a suitable measure of ethics would be utility, which is why this type of thinking is also known as utilitarianism. One measure of utility is happiness, so that a well-known expression of this ethical position is that an action is ethically good if it leads to the greatest happiness for the greatest number of people.

While it is important to realise that there are extensive theoretical underpinnings to ethics as a philosophical discipline, it is worth noting that there are focus areas within ethics that make use of specific aspects of this breadth of theory. Ethics of technology is typically seen as one branch of applied ethics (Nijsingh & Duwell, 2009). Applied ethics aims to understand ethical questions arising in specific fields such as medicine (Freyhofer, 2004), business (Bowie, 1999; De George, 1999), or research (OECD, 2016). It often focuses on specific sub-fields or problems and applications. One field of applied ethics that has developed since the 1970s is computer ethics (Bynum, 2001; Moor, 1985), which has strongly influenced the development of the ethics of AI (Stahl, 2021). While computer ethics is one root of the ethics of AI, there are other streams of ethical considerations that are linked to some of the features of AI. One example of this stream of work can be described using the acronym FATE, which stands for fairness, accountability, transparency and ethics (Memarian & Doleck, 2023) and which mirrors the subject matter of the ACM conference series on Fairness, Accountability and Transparency (FAccT).

For the purposes of this article, which explores ethical concerns arising from ChatGPT, it is important to start with a broad understanding of ethical questions that includes but goes beyond the current focus of FATE or FAccT. ChatGPT as an example of a large language model and thus of AI may well raise questions around fairness, accountability and transparency, but its ethical concerns are likely to go beyond these established topics. The paper is therefore theoretically grounded in the broader computer ethics literature and builds more specifically on the theoretical approach to the ethics of emerging technologies as explained in detail in the methodology section below. This theoretical angle allows for broader insights than the prevailing literature on AI and ethics and therefore offers a theoretical justification for the research question. If ChatGPT turns out to be as impactful as is currently widely predicted, then we need theoretically sound, empirically rigorous and transparent research to proactively engage with its likely ethical and social consequences. The theoretical and methodological approach reported in this article is geared to meet these requirements.

In order to follow this argument, it is important to understand which aspects of ChatGPT are capable of giving rise to ethical questions, what those questions are and how they are currently addressed.

2.2. Literature on ChatGPT and ethics

ChatGPT is an interactive system that allows users to have conversations using natural language. According to OpenAI, the organisation that developed it, the dialogue format employed by ChatGPT allows it to answer follow-up questions, admit mistakes, challenge incorrect premises and reject inappropriate requests (OpenAI, 2022). Technically, ChatGPT is based on OpenAI's Generative Pre-trained Transformer (GPT) language models (particularly GPT-3 and GPT-4), a family of large-scale neural network-based language generation models trained on a wide range of internet-based texts. It was trained using reinforcement learning from human feedback. We refer to ChatGPT as a 'technology' in the sense defined by the Encyclopaedia Britannica (2023) as "the application of scientific knowledge to the practical aims of human life." While the article focuses on ChatGPT, its underlying interest is broader than this particular example of a user-oriented technology. Competition for ChatGPT is growing rapidly, e.g. in the form of Google's Bard (Metz & Grant, 2023), Alibaba's Tongyi Qianwen (McMorrow & Liu, 2023) and many others. We use ChatGPT as the most high-profile example of an interactive large language model because of its prominence in the discourse. Our analysis is based on key features of ChatGPT and is thus applicable to competitor technologies insofar as these share the same characteristics.

Generative AI systems or chatbots like ChatGPT are not a new phenomenon. The idea that it should be possible to interact seamlessly with a computer using natural language goes back to the beginnings of computing and AI research. The most prominent early example of such a chatbot may be ELIZA, a conversational program developed by Joseph Weizenbaum (1977) in the 1960s. Early chatbots like ELIZA had limited functionality and adoption. With the progress of natural language processing, better hardware and connectivity, chatbots have become more prominent. Most of the large tech companies have been trying to integrate them into their service offerings. Examples include Apple's Siri, Amazon's Alexa or Microsoft's Cortana.

The widespread existence, use and adoption of chatbot technologies
raises the question why ChatGPT has managed to gain the level of attention that it has been receiving since its launch in November 2022. The answer appears to lie in the high quality of its outputs. ChatGPT can interact across a broad range of topics, providing answers of high linguistic quality and good accuracy of content. The underlying technology, the large amount of training data from the internet, and the model architecture appear to have allowed ChatGPT to pass a previously invisible threshold. Interaction with ChatGPT gives the impression that one is having a conversation with an educated human who has subject expertise across a large number of subject areas.

The question why ChatGPT has risen to prominence is intimately linked to the question of its social and ethical evaluation. Such an evaluation draws at least partly on real or expected practical consequences of technology use which, in turn, are driven by the technology's capability. We believe that the following characteristics and capabilities of ChatGPT may influence its use and applications and contribute to possible ethical concerns:

• Production of high-quality text in response to human input that is often difficult to identify as the output of an AI (Zhang et al., 2023);
• Ability to engage in a dialogical interaction on a very broad array of topics (Gilson et al., 2023);
• Ability to tailor its output to specific language styles (Short & Short, 2023);
• While currently text based, we assume that it can be integrated into other modalities of communication which would, for example, enable it to engage in voice communication (Ali et al., 2023);
• Ability to learn from interaction, allowing it to further improve the quality and acceptability of its content (Sallam, 2023);
• Based on a large language model trained on large but limited datasets (Radford et al., 2019).

One consequence of these characteristics is that interventions from ChatGPT may be difficult to distinguish from original human interventions. Where prior instantiations of chatbots could easily be identified as such, this appears to be more difficult for ChatGPT. This ability to interact with technology without a simple way of ascertaining that it is a technology may revolutionise the way we interact with technology. We discuss the ethical issues that arise from it below, but we concede that it may lead to an acceleration of research and technical progress which can quickly give rise to new capabilities which can change the ethical evaluation again.

This brings us to the question of which specific ethical benefits or concerns of ChatGPT are currently discussed. These are typically linked to possible applications. At a basic level, ChatGPT produces texts. It can thus potentially be applied across a vast range of applications that involve the production of text. An ethical analysis of any novel technology is typically well advised to look at benefits as well as concerns. At present there is relatively little discussion of the ethical benefits of ChatGPT, such as its potential benefits in promoting teaching and learning (Baidoo-Anu & Owusu Ansah, 2023). There is, however, a widespread perception in the media that it is likely to be disruptive and transformational, which in terms of technology discourses typically means that it will lead to economic benefits, be these in the form of start-up companies, optimisation of existing industries or accrual of further growth in the tech sector. Such economic growth can provide improved wellbeing for many people and thus can count as a moral benefit.

Key to an ethical analysis of ChatGPT is an understanding of likely applications. One expectation is that ChatGPT will lead to fundamental changes in the internet search sector. Google is currently the entrenched market leader in this sector. ChatGPT may change the way people search, leading to more competition in this economically highly lucrative sector (Grant, 2023; Metz & Grant, 2023). Microsoft has already announced the launch of a new and improved search engine Bing that will run on ChatGPT and GPT-3.5 and that it claims to be faster, more accurate and more capable (Mehdi, 2023). In more general terms, the benefits of ChatGPT are sometimes described as providing additional intelligence to improve operations. One way of capturing this is the metaphor of AI interns (Nast, 2023a), suggesting that it will be easier to draw upon the intelligent but non-expert support that human interns can offer to organisations. This sounds rather incremental, but given the potential ubiquity of an effectively unlimited number of AI interns, the cumulative effect may still be transformational. This may be one reason why Sam Altman, the CEO of OpenAI, can claim that he can envisage ChatGPT "breaking capitalism" (Bove, 2023). An ethical evaluation of such a fundamental change of the socio-economic environment would be far from straightforward, but it is easy to see how it would be beneficial for some.

While the moral benefits of ChatGPT thus remain somewhat fuzzy, there are more clearly defined concerns. The most prominent set of those has to do with the fact that ChatGPT texts can be difficult to distinguish from human output, which can lead to problems of attribution of authorship. This is seen as a significant problem for student assessment, where such assessment is based on essays and the fear is that students can gain unfair advantages (Eke, 2023; Stokel-Walker, 2022; Weale, 2023). Much work is undertaken to explore concerns about the role of ChatGPT in research, where it is seen as a threat to transparency in science (Nature editorial, 2023). Some scientists have attempted to pre-empt this issue by adding ChatGPT as an author to publications, but the practice does not seem to find acceptance in the academic community (Stokel-Walker, 2023).

In addition to these concerns about authorship and attribution, there are various other concerns that are discussed. One of these is the possible impact on employment (Frederick, 2023), in particular with regards to jobs that involve producing output that has close affinity to digital technologies, such as computer programming (Castelvecchi, 2022). As the technology becomes more advanced, there is a concern that it could have a significant impact on the jobs human programmers do (Marr, 2023a).

The fact that the origin of a text is difficult to discern may have further morally problematic consequences. One of these is that it may exacerbate the widely discussed problem of misinformation and disinformation (Hsu & Thompson, 2023). Another well-established ethical issue of AI, namely the possibility of using it for unwarranted political intervention, may be exacerbated by ChatGPT, as highlighted by Sanders and Schneier (2023), who develop a plausible scenario of the use of chatbots for political lobbying that may overwhelm existing scrutiny mechanisms.

In addition, there is a rapidly growing number of commentaries that pick up on other ethical concerns related to digital technologies and AI and that trace how these may materialise in the context of ChatGPT. Such concerns include the opacity of the underlying model (van Dis et al., 2023), environmental pollution (Nast, 2023b), the fact that chatbots still have no real understanding of the texts they produce (Hutson, 2021), but also the increasing affinity to the human mind that allows ChatGPT to pass tests designed for humans (Wilkins, 2023). There is also growing evidence that, to make ChatGPT more sensitive to cultural values and sentiments, OpenAI relied on exploited and underpaid data labellers in Africa and other low-income countries (Perrigo, 2023).

This list highlights the key concerns in the current discussion. This article aims to provide a more systematic and comprehensive account of the ethics of ChatGPT. It therefore now introduces the discourse on the ethics of emerging technologies, which will provide the conceptual basis of the analysis and inform the methodology that will be described in the next section.

2.3. ChatGPT and the Ethics of Emerging Technologies

The previous section has shown that there is already significant research on and discussion of ethical aspects of ChatGPT. However, this
discourse is mostly ad hoc and lacks a systematic approach. Much of it is published in news media and focuses on specific high-profile topics. We therefore suggest that a more systematic and rigorous approach to the ethics of ChatGPT is called for, one that can be used to better understand possible issues and serve as a basis for organisational and societal policy development. Such a more systematic approach should be based on peer-reviewed and established knowledge that has proven to be reliable when applied to similar technologies. One body of knowledge and theory that was developed for this very purpose is represented in the literature on the ethics of emerging technologies. This article therefore draws on this literature and applies it to ChatGPT. Before we outline how this can be done in the methodology section, we first need to demonstrate that this body of knowledge can usefully be applied to ChatGPT. This requires us to show that ChatGPT is an emerging technology, which we do by looking at the components of the term: what counts as a 'technology' and when it can be considered to be 'emerging'.

Technology has its etymological root in the Greek tekhnē, which stands for "art, skill, craft in work; method, system, an art, a system or method of making or doing", and the ending '-logy', which refers to discourse, theory or science. These combine to give the definition of technology as the "study of mechanical and industrial arts" (Online Etymology Dictionary, 2022). Technology is pervasive in human societies and its role in shaping humans and our society has long been discussed (Ellul, 1973; Heidegger, 1953; Ihde, 1990; Spengler, 1931). The case of ChatGPT shows that these questions are not just idle speculation but affect the way we engage with current phenomena. One can distinguish between technology as a paradigm and as a device, both of which reflect an important facet of technology (Moor, 2008). An alternative categorisation distinguishes between three levels: the top level of the technology, which can be implemented in the meso level of different artefacts, which can lead to different applications at the most basic level (Brey, 2012). In this schema ChatGPT would be best characterised as an artefact which forms part of the technology of natural language processing, which in turn is part of artificial intelligence, which is part of computing or digital technologies. ChatGPT forms part of the family of emerging digital technologies that call for ethical reflection (Kazim & Koshiyama, 2021). It can furthermore be seen as emerging because it is still developing fast, as indicated by the transition from GPT-3 to GPT-4 (Marr, 2023b). At the same time, the current iteration of ChatGPT is still very open in terms of intended and likely applications.

Dealing with ethical questions of emerging technologies is not straightforward. A key reason for this is the fundamental and unavoidable uncertainty of the future. We simply do not know with certainty what the future holds and how technology will develop (Ellul, 1973). An ethics of emerging technologies that claims full knowledge of future developments and their ethical consequences is thus impossible. However, while the future cannot be comprehensively known, it is not entirely unknowable either. Modern societies are based on strong assumptions about the future (e.g. future tax revenue or demographic development) that allow for planning and policy development and – to some degree – help pre-empt foreseeable future problems. It is this ambiguity of future knowledge that motivates calls for research and technology development to accept partial responsibility for the ethics of emerging technologies (InterAcademy Partnership, 2016). This ambiguity also shapes the claims that can be raised for research on the consequences of emerging technologies. Collingridge (1981) famously pointed out the trade-off that short-term technical developments that we can easily predict are difficult to change, whereas long-term developments that may be easy to steer in desired directions are difficult to predict (Genus & Stirling, 2018). While we thus cannot fully know the future, we can explore possible and likely futures with a view to understanding what they require from us today (Cuhls, 2003). Or, as Guston (2013) puts it, while we may not look into the future, we can look toward it.

With these caveats in mind, it is possible to say more about the ethics of emerging technologies. The spirit motivating most attempts to undertake research on the ethics of emerging technologies is not to provide scientific certainties, but to act on the insight that researchers share some responsibility for future morally relevant technical developments, to undertake an honest attempt to proactively engage with those (Cagnin et al., 2008; Groves, 2009; Swierstra et al., 2009; Walsham, 2012), and to provide orientation with regards to research activities (van der Burg, 2014). Such an endeavour calls for intellectual humility (Jasanoff, 2003), acknowledging its difficult and fallible nature, which is the spirit that should inform the reading of the methodological approaches introduced in the next section.

3. Methodology

We use the term 'methodology' in this paper in the commonly accepted meaning of "a set of methods used in a particular area of study or activity" (Cambridge Dictionary, 2023). A methodology for undertaking research on the ethics of emerging technologies can be understood as an attempt to respond to the problem of the uncertainty of the future (Brooks et al., 2021). This does not mean that such methodologies can overcome this problem, but they can provide a rigorous, transparent and reproducible way of engaging with it, which allows for highlighting uncertainties and calling them into question (Umbrello et al., 2023). Probably the most comprehensive review of ethics in research and innovation (Reijers et al., 2018) identified eight of what it calls "ex ante" methods, which align with the ethics of emerging technologies discussed here. A review and critique of "ethical foresight analysis" (Floridi & Strait, 2020), which is similarly aligned with this article's research objective, identified six existing methods, which significantly overlap with Reijers et al.'s (2018) findings. In the following subsection we provide an overview of relevant methodologies that can be used to study the ethics of emerging technologies before we explain in detail our research protocol, which sets out how we have adapted, developed, and implemented these for the purpose of this article.

3.1. Methodologies for Studying the Ethics of Emerging Technologies

Most of these methodologies aim to bridge the gap between methodologies from future and foresight studies (Sardar, 2010) and the ethics of technology (Royakkers & Poel, 2011). They typically use approaches from future and foresight studies such as Delphi studies (Adler & Ziglio, 1996; Someh et al., 2019) or scenario research (Gray & Hovav, 2007) and put special emphasis on questions of ethics.

This article employs a sub-set of the established methodologies of research on the ethics of emerging technologies, notably the subset that allows for the identification of ethical issues. Much effort across various methodologies is expended on identifying emerging technologies. This work is not needed in this article because it focuses on a defined technology, namely ChatGPT. Furthermore, the article's aim is to broaden awareness of the ethical benefits and challenges raised by ChatGPT, i.e., it aims to enumerate such issues in as comprehensive a manner as possible. It therefore does not need to pay attention to what is typically the next step proposed by several of the methodologies, which is the question of how to address them.

The methodology employed in this article is therefore based on three of the established methodologies of the ethics of emerging technologies that include proposals for identifying ethical issues. This is complemented by steps that undertake a similar approach in the ethics of artificial intelligence (AI). The three methodologies used here are referred to as "anticipatory technology ethics" (ATE; Brey, 2012), the "framework for the ethical impact assessment of information technology" (EIA; Wright, 2011) and "ethical issues of emerging ICT applications" (ETICA; Stahl, 2011; Stahl et al., 2017). These three were all developed in the early 2010s. They are aware of each other and mutually cite one another. This can serve as an indicator that combining them, as is proposed in this article, is a legitimate activity. They also take a similar approach to the identification of issues, which is the core interest in this
article.

All three agree that ethical concerns do not arise in a vacuum and that one can learn about likely future ones by looking at established ethical discussions and by asking which established issues are likely to be relevant in a novel technology or application area and how such issues can be categorised. They each produce a list of issues, and these lists show significant overlap. They differ in how the lists are constructed. ATE's list is derived from the philosophical ethics of technology and has the main categories of harms and risks; rights; (distributive) justice; and well-being and the common good. In each of these categories a number of issues are located. In some cases these issues are then subdivided further. For example, under rights there is the issue of freedom, which is broken down into freedom of movement, speech and assembly. EIA derives its main categories from the principles of biomedical ethics (Beauchamp & Childress, 2009): autonomy, nonmaleficence, beneficence, and justice. These are complemented by the category of privacy and data protection. Again, each of these categories has a number of issues associated with it. For each of the issues (or in some cases values), EIA offers a number of questions that support reflection on the issues in a specific context. The ETICA method, finally, offers a similar list of issues. However, the individual issues were arrived at differently. They were identified on the basis of the analyses of 10 different emerging technologies, and the categories were constructed in a bottom-up manner. The categories listed in Stahl et al. (2017) are: conceptual issues and ethical theories, impact on individuals, consequences for society, uncertainty of outcomes, perceptions of technologies, and the role of humans. In this methodology each of the categories or issues is also linked to guiding questions that are meant to help researchers understand and identify the nature of the potential ethical concern.

The aspect of these three approaches to the ethics of emerging technologies that is of most interest to this article is the list of likely issues they produced. For the purposes of this article the idea was to generate the most comprehensive overview of possible issues and then interrogate this list with a view to identifying issues related to ChatGPT that are most in need of discussion. The lists provided by the three methodologies were therefore merged into one list. For this purpose, the lists were reduced to a one-dimensional list, i.e., the categories and, in the case of ATE, the sub-issues were either included, where they offered additional insights, or excluded, if they were covered by the issues. Additional information, notably the guiding questions from EIA and ETICA, was retained as comments linked to the issues to guide the more detailed subsequent analysis steps. The table in appendix A contains the list of all the issues as identified across the three methodologies.

This list of issues in appendix A provides the basis of the analysis of the ethical issues of ChatGPT undertaken for this article. They represent

certification and many others. There are several attempts to identify and categorise AI-specific ethical issues, e.g. by Müller (2020), who distinguishes between ethical issues of AI systems as objects, which include privacy, manipulation, opacity, bias, human-robot interaction, employment, and the effects of autonomy, and ethical issues of AI systems as subjects, which include machine ethics, artificial moral agency, and finally the issues arising from an AI superintelligence. Vesnic-Alujevic et al. (2020) distinguish between individual issues, which include autonomy, dignity, and privacy and data protection, and societal issues, which include fairness and equity; the good life and diversity; responsibility and accountability; transparency; surveillance and datafication; and governance of the AI.

These examples demonstrate that there is significant overlap between the ethics of emerging technologies and the ethics of AI, which is not surprising, as AI can still be considered an emerging technology. It raises the question, however, of which list to include in our analysis to avoid possible blind spots. We decided to include a list published by the European Parliament (2020) which was compiled by a group of academics with a view to providing broad coverage of issues of interest to the European Parliament and hence, by implication, to European citizens. This list is categorised in terms of AI impact on different aspects of society, where individual issues are then listed as shown in Appendix B below.

3.2. Research protocol

In our research we based the analysis of ChatGPT on the lists of issues that are displayed in Appendices A and B. These formed the theoretically derived starting point for us to explore possible ethical issues. We used these lists of ethical issues to find out what we can learn about the ethical benefits and challenges raised by ChatGPT by applying existing approaches to the ethics of emerging digital technologies. In order to achieve this, we combined the lists of issues into one spreadsheet which also included the guiding questions from EIA and ETICA to help us understand the issues in more detail. We then wanted to explore whether ChatGPT is likely to have an impact on the various issues. For each issue we therefore explored how ChatGPT might impact it, both in positive and negative terms. This was done by providing a short narrative outlining our reasoning and then allocating a likelihood of this impact arising (1 = low, 2 = medium, 3 = high) and a measure of severity (1 = low impact, 2 = medium impact, 3 = high impact). This allowed us to calculate an expected impact measure, separated between positive and negative impacts, which we arrived at by multiplying likelihood and severity, giving us two scores between 1 and 9 (one for benefits, one for damages) for each of the issues. To allow
the ethical issues identified by the three methodologies and need to be for independent scrutiny of our approach, we make the full spreadsheet
read in conjunction with the supporting documentation, notably the including the narrative justifications and the scores for each issue we
definitions of the issues and the guiding questions accompanying them. make the complete spreadsheet available here.1
We believe that they constitute a good starting point for an analysis of A key challenge when using this approach was to identify likely areas
the ethics of emerging technologies. However, we realise that this list of and mechanisms of impact. Our aim was to offer an analysis of ethical
issues dates back to the early 2010 s and was not specifically compiled issues of ChatGPT that cuts across application areas. However, the
with a view to AI. In light of the rapid development of AI and the dis number of possible applications is close to infinite, and we may not be
cussion of the ethics of AI in the intervening years, we decided to cali aware of some that are already being worked on, even less of ones that
brate this list by adding a set of issues that were identified specifically are currently unexplored. We therefore focused on the key ethically
with AI in mind. As ChatGPT is an example of an AI application, we relevant features of ChatGPT, namely its abilities to produce text that is
aimed to ensure that established ethical concerns arising from the ethics difficult to recognise as machine-generated, its ability to learn and
of AI discourse were properly included. interact seamlessly with humans. In order to provide plausibility of our
The inclusion of the ethics of AI raised further methodological evaluations we used the narrative part of the analysis to indicate which
questions. While there is a relatively settled landscape of research on the application examples we had in mind before scoring them. Furthermore,
ethics of emerging technologies as outlined above, the same cannot be in order to provide some rigour to the analysis the two authors co-
said about the ethics of AI (Ashok et al., 2022; Dwivedi et al., 2019). This reviewed each other’s analysis to ensure consistency and plausibility.
discourse continues to mushroom. It has a strong emphasis on ethical As all future and foresight research methods, this methodological
guidelines and principles (Fjeld et al., 2020; Jobin et al., 2019) which approach does not promise scientific exactness. Its purpose is to surface
includes a number of high profile interventions (AI HLEG, 2019) but it
also legislation such as the EU’s AI Act (European Commission, 2021),
various approaches to standardisation (IEEE, 2017; NIST, 2022), 1
https://tinyurl.com/5fawk8yc
5
B.C. Stahl and D. Eke International Journal of Information Management 74 (2024) 102700
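The likelihood-times-severity scoring used in the research protocol (likelihood and severity each rated 1 to 3, multiplied into expected-impact scores between 1 and 9 for benefits and for damages, with a net score obtained by subtracting the negative expected impact from the positive one) can be sketched in a few lines of code. This is an illustrative sketch only; the issue names and ratings below are hypothetical placeholders, not values from the authors' actual spreadsheet.

```python
def expected_impact(likelihood: int, severity: int) -> int:
    """Expected impact = likelihood x severity, yielding a score between 1 and 9."""
    assert likelihood in (1, 2, 3) and severity in (1, 2, 3)
    return likelihood * severity

# issue -> ((benefit likelihood, benefit severity), (damage likelihood, damage severity))
# Hypothetical ratings for illustration only.
ratings = {
    "autonomy":       ((2, 2), (3, 3)),
    "sustainability": ((3, 3), (2, 2)),
    "privacy":        ((1, 2), (3, 3)),
}

for issue, (benefit, damage) in ratings.items():
    pos = expected_impact(*benefit)
    neg = expected_impact(*damage)
    # Net (total) impact: positive expected impact minus negative expected impact.
    print(f"{issue}: benefit={pos}, damage={neg}, net={pos - neg}")
```

Sorting issues by the net score reproduces the kind of comparison used later in the analysis to separate net-positive concepts from those where negative impacts dominate.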
Its purpose is to surface possible issues, in particular those that have not yet been looked at in much detail, and to do so in a systematic and transparent way. We understand that our analysis is somewhat idiosyncratic and driven by our prior knowledge, in particular our understanding of ethics of technology, and that researchers with different backgrounds might highlight other issues and use different justifications for this. From our perspective this is not majorly problematic, and we would indeed welcome alternative voices that come to different conclusions from ours, as this would strengthen the overall aim of the exercise, which is to support a public discourse on the ethical issues of ChatGPT.

As a final remark on the article's methodology, we would like to highlight that we interacted with ChatGPT during the writing of this article and incorporated some of the insights into the narrative. As the use of ChatGPT in academia is currently highly contested and at the centre of public attention, we chose to collect all of our relevant interactions with the system and make them accessible to scrutiny in this document (https://tinyurl.com/yuk4w2yk).

4. Findings and discussion

By following the methodology described above, we aimed to arrive at a comprehensive understanding of the ethics of ChatGPT that is transparent and rigorous. However, when implementing the method, we had to make assumptions and choices that are relevant to the analysis and need to be highlighted. When assessing the likelihood and severity of both positive and negative impacts, we had to make use of our understanding and possible scenarios of use. We only considered immediate consequences based on the core characteristics and capabilities of ChatGPT, i.e., its ability to produce text that is difficult to recognise as machine-generated and its ability to learn and interact seamlessly with humans. For example, we assumed that it can be used for telephone conversations where the respondent is not aware of speaking to a machine. This, in turn, can be used for data collection, which may serve legitimate and illegitimate purposes, such as online or telephone marketing but also social engineering or political persuasion. Our key challenge was to use a consistent boundary of future exploration which is anchored in the actual technical capability of ChatGPT and to avoid general speculation. Where possible and relevant we therefore used the textual description of benefits and risks to explain how we arrived at our evaluation of the various issues.

In order to allow for a general overview and comparability of benefits and risks we used a simple method of quantifying these, providing measures of likelihood and impact. We use these measures for the discussion below, as they allowed us to highlight what we perceive to be key topics worthy of attention. We believe that this is a reasonable approach that mirrors widely used ways of dealing with risks and benefits of emerging technology through risk management or impact assessment processes (Stahl, Antoniou et al., 2023). We fully realise, however, that the numerical scores we allocated are expressions of our interpretations rather than objective truths. Before we return to the reflection on and evaluation of our approach, let us take a look at what the analysis unearthed.

4.1. Benefits and concerns of ChatGPT

One intention behind this article was to arrive at a balanced view of ChatGPT that takes into account both benefits and concerns linked to the technology. We therefore start the discussion of our substantive findings by looking at the concepts that promised benefits that scored highest. We evaluated the following concepts as having a high likelihood of being realised and a high level of social or ethical benefit: collective human identity and the good life; perceptions of technology, the role of humans, beneficence, sustainability, health and bodily harm, the ability to think one's own thoughts and form one's own opinions, animal rights and animal welfare, support of vital social institutions and structures, the labour market, and impact on the financial system. This very eclectic group of benefits is defined by its broad scope and reach. These are not specific issues or topics but are mostly composed of higher-level aggregate concepts. Human identity, the role of humans, beneficence, the labour market, the financial system or health are all high-level topics that depend on the collaboration of many individuals across large parts of society. This implies that they rely heavily on successful communication between different types of stakeholders. Such communication and the translation of meaning across the different vocabularies of different stakeholder groups is something that is core to the capabilities of ChatGPT. Being high-level, these concepts also are open to interpretation and offer scope for the analyst to fill them with life.

We identified 11 concepts that scored the maximum in terms of benefits. The above benefits are highlighted in literature on how ChatGPT and other generative AI systems can be leveraged in the hospitality and tourism industry (Dwivedi, Pandey et al., 2023), in healthcare (Javaid et al., 2023), in transportation (Du et al., 2023), and in agriculture (Ray, 2023). In contrast with the benefits, we identified 30 concepts that scored the maximum (i.e. were deemed to have a high likelihood of materialising and a high impact) in terms of negative impact. These can be grouped as issues related to social justice and rights, individual needs, culture and identity as well as environmental impacts, as shown in Fig. 1. This figure suggests that the key issues that have the potential to raise the most significant concerns can be divided into four main categories. These are social justice and rights, where ChatGPT is seen as having a potentially detrimental effect on the moral underpinnings of society, such as a shared view of justice and fair distribution, as well as specific social concerns such as digital divides or social exclusion. The second group pertains to individual needs, such as safety and autonomy, which are also reflected in informed consent and the avoidance of harm. We included environmental harms as one group of issues and added topics of culture and identity as a fourth group. This categorisation is one of many possible ones. The dividing lines are not as clear-cut as the figure may suggest. Some of the issues could fit into more than one of the boxes. Its main purpose here is to show the breadth of issues and concerns, which cover pretty much any aspect of life.

A further perspective on our analysis that may be helpful with regards to identifying key ethical concerns worthy of further study and intervention is provided by the difference between benefits and downsides. We calculated the total impact by subtracting the total negative impact (probability times damage) from the positive impact (probability times benefit). The total impact score is positive if the positive impact outweighs the negative impact. The higher the score, the more the benefits outweigh the risks, and the reverse is true for negative impacts.

A look at the net positive concepts shows that these are mostly linked to the external world, such as with regards to the rights of animals (which could be promoted) or the promotion of physical health (which could benefit from telemedicine, clinical decision support, disease surveillance etc.). In all of these cases ChatGPT has the potential to improve processes. As a language-based system, ChatGPT can have negative impacts in these areas as well, but these would not be immediate, as they might be in the case of robotic systems, and we felt that on the balance of probability, the positives would dominate the negatives in these cases.

On the other end of the scale, we have those concepts that promise a higher negative than positive impact. Many of these are related to the changes in social relationships that ChatGPT may engender. This includes questions of responsibility and accountability, where there is no doubt a possibility that a better way of understanding complex situations may help and the ability to generate texts may strengthen responsibility and accountability regimes. However, we felt that the much more likely consequence of ChatGPT use will be that strong agents can use the technology to obfuscate and confuse public discourse with a view to evading responsibility and being held accountable.

Further broad issues having to do with social relationships follow a
similar logic. We included social solidarity, inclusion and exclusion, intergenerational justice, biases and digital divides in the same category. In all of these cases it is perfectly plausible that ChatGPT will do good and help those who are disenfranchised to gain a voice. But in practice, and based on past experience of the way emerging digital technologies unfold in western society, we assume that the negative implications will vastly overshadow the positive ones. The divides between those who have access to ChatGPT and are able to leverage it and those who do not will likely follow similar lines as existing digital and social divides, and the ones who will be able to draw most benefit will continue to be the ones who are advantaged already, i.e. the large companies who have the resources to make use of new technologies and the individuals who have the education as well as the financial and intellectual independence to understand the opportunities and realise them.

4.2. Relevance of the findings

The objective of this article is to explore ethical benefits and challenges raised by ChatGPT by applying existing approaches to the ethics of emerging digital technologies. A key question in assessing the value and contribution of the article is the degree to which it corresponds to or contradicts existing scholarship. It is therefore worth comparing our findings with the current discourse on the ethics of ChatGPT.

Due to the novelty of the technology, there is relatively little academic and peer-reviewed work on the topic available, even though the number of articles is growing rapidly. At the time of writing, a search of the Scopus research database using the search string "ethics AND ChatGPT" returns 25 results. At present the academic literature focuses on the set of topics that are prominent in the public domain and media that were discussed earlier. A key topic is that of authorship and the influence of ChatGPT on research and education (Lee, 2023; Salvagno et al., 2023), a topic that our analysis is sensitive to. There are contributions that aim to capture a broader set of issues, such as Dwivedi, Kshetri et al. (2023), including ethical concerns, but they remain relatively superficial.

One way of assessing whether our approach meets the objective of providing a rigorous and comprehensive account of ethical aspects of ChatGPT is to explore whether it reflects and is consistent with recent scholarship. There is currently much work that aims to provide guidance on the use of ChatGPT in the academic publication system. A good example is provided by Shmueli et al. (2023) who explore how authors, reviewers and editors in data science can responsibly leverage generative AI tools like ChatGPT to improve scholarship and address the challenges of using them. The editorial covers authorship, accountability, methodological rigour, bias, fairness, accuracy, gaming the system, privacy, data exposure, interpretability, intellectual property, availability, review reliability, and responsible use as ethical issues arising from the application of generative AI like ChatGPT in data science research and publishing. All of these concerns can be located in our ethical analysis as well. Another recent example of engagement with the ethics of ChatGPT is given by Susarla et al. (2023) who discuss guidelines and challenges for the responsible use of generative AI like ChatGPT in IS research, including concerns around bias, accountability, intellectual property, and depth versus potential benefits for problem formulation, data analysis, and writing. Again, these concerns are well covered by our analysis. As a final indication of whether our analysis is relevant to the current discussion of ChatGPT, it is worth looking at a publication that is not primarily interested in ethical
questions. Dwivedi et al. (2023) explore practices, challenges, and a research agenda for implementing generative AI tools like ChatGPT in the hospitality and tourism industry. While not focused on ethics, the paper acknowledges that ChatGPT raises important ethical issues around privacy, bias, transparency, governance, labour impacts etc. that require thoughtful policies and practices by users. By referring to these three publications we can confirm that our analysis engages with the ethical issues that are currently discussed and that it has a broader scope than the current discourse.

In addition to the current general discussion, it is worth looking more specifically at the way in which the community of scholars working on large language models perceives the issues arising from the class of technologies that includes ChatGPT. The most prominent contribution to this discourse comes from the Google-owned AI company DeepMind (Weidinger et al., 2021). In their report they describe a total of 21 risks of harm that can be expected from language models, categorised in the following six risk areas: I. Discrimination, Exclusion and Toxicity; II. Information Hazards; III. Misinformation Harms; IV. Malicious Uses; V. Human-Computer Interaction Harms; VI. Automation, Access, and Environmental Harms. The report does not provide a detailed methodology of how the risks were identified, which makes a detailed comparison with the ethical issues derived from the emerging technology literature difficult. However, there is significant similarity and overlap between the 21 risks identified by Weidinger et al. (2021) and the issues that we list in the appendix.

Maybe even more interesting than the general issues related to language models is the observation of how OpenAI is currently engaging with the ethical issues of ChatGPT. While we were writing this article, OpenAI launched GPT-4, the next version of the language model that ChatGPT is based on. As part of this launch the company published a technical report (OpenAI, 2023b) and a "system card" document (OpenAI, 2023a). These documents provide some technical background to GPT-4 that allows an insight into the ethical issues that OpenAI has recognised and is attempting to address in the development of GPT-4. Both documents contain an identical figure (OpenAI, 2023a, p. 8) showing example prompts (i.e. input from external users), the responses of an earlier version, and how GPT-4 responded when it was launched. The prompts covered in the table refer to ethically problematic information that ChatGPT could conceivably provide, including advice on immoral activities such as murder, production of weapons, money laundering, self-harm, threats, racism and purchase of weapons. An earlier version of GPT-4 provided responses to these prompts, whereas the launched version refuses to do so. When tested in April 2023, ChatGPT gave similar answers to the launch version of GPT-4, suggesting that these safeguards had already been implemented. All of these issues that OpenAI is addressing in GPT-4 are part of or implied in the list of issues we identified in our methodology, thus confirming that our ethical analysis is valid. We thus think there is evidence that our research objective was met and that the approach we offer in this article is rigorous and comprehensive. Despite this success in moving towards our research objective, open questions remain.

4.3. Open questions

While we believe that our analysis is valid, robust and transparent, we do not claim that it is exclusive. By this we mean that others who would follow the same methodology could plausibly come to different insights. The value of doing the analysis is not so much in the substantive items and concerns that we have looked at but in the deeper understanding of the overall landscape of ChatGPT use that it provides. By undertaking the analysis, we have come to a better understanding of how ethical issues of ChatGPT are conceptualised and framed in public discourse, which raises questions that need to be discussed for reasonable and realistic responses to emerge.

One key question that pervades many of the issues and is highly relevant to the evaluation of ethical issues is that of ownership and control of use. In our analysis we have assumed that ChatGPT will be freely available to all possible users. This is the model that OpenAI has chosen to initially bring to potential users, and it is consistent with access models of other types of general digital technologies, such as popular search engines. However, it is far from obvious whether this will remain the case. If benefits of ChatGPT can be monetised, then this is likely to happen. A paid-for version called ChatGPT Plus already exists, which requires a monthly payment of US$20 and has a more advanced architecture, is based on more training data and likely offers better performance (Salunke, 2023). OpenAI, which started out as a non-profit organisation, created a capped-profit subsidiary, ostensibly with a view to allowing employee share options. The non-profit OpenAI Inc is the controlling shareholder of the for-profit OpenAI LP, which may provide an avenue for continuation of the organisation's aim of democratising AI (OpenAI, 2023c). However, there have been some high-profile links to corporate players, notably a partnership with Microsoft, which is using OpenAI technology for its chat business, which have been critically evaluated by some observers (Xiang, 2023). It is not the purpose of this article to discuss the corporate structures of OpenAI, but it seems likely that these corporate structures will have an influence on the ethical issues related to ChatGPT. As we have seen, many of the key ethical concerns involve broader questions of social cohesion, inclusion and exclusion. Profit-oriented ownership of technology has been described as a key concern for other technologies (Zuboff, 2019) and ChatGPT is likely to be covered by these concerns.

Where profit is the driver of the organisations that develop and make available ChatGPT and related technologies, it is easy to foresee that profitable applications will be privileged over non-profitable ones. This means that many of the benefits that we identified, those that will allow underserved communities to strengthen their position and make their voices heard, are less likely to be supported than those that promise a return on investment.

A further open question that has some link to ownership but also to the broader social context of ChatGPT is that of governance and oversight, in particular with regards to misuse of the technology. This raises difficult questions of what would count as use or misuse, e.g. in fields like the persuasion of voters to vote for a particular party or of customers to buy products. These may well be deemed legitimate uses by some but misuse by others. However, there are more clear-cut cases of misuse, notably the use of ChatGPT for illegal purposes such as scamming, fraud etc. The capabilities of ChatGPT clearly render it a useful tool for such illegal uses, a point that is already well recognised (Europol, 2023; Sweney, 2023). The current technical and organisational model, where ChatGPT is hosted centrally and allows for interactions via the web, may offer mechanisms for tracing illegal uses and countering them. If and when ChatGPT or its successors or competitors become available as stand-alone systems, this may open additional avenues for illegal use.

The ownership model is also likely to have implications for the broader understanding of the technology and thus for its robustness and reliability. In our analysis of the different ethical issues, we assumed that ChatGPT can provide human-like input into conversations with a high level of reliability. However, there is the well-discussed issue that machine learning models can learn biases from their training data and replicate these in interactions with users. This has been a key shortcoming of previous models such as Microsoft's chatbot Tay (Wolf et al., 2017). OpenAI seems to have spent considerable effort on preventing ChatGPT from generating racist or similarly offensive output, but it remains open to what degree this will be successful. A related challenge in terms of the reliability and robustness of the technology, which affects the transparency of its sources and outputs, comes from the technology's propensity to "hallucinate". This term refers to mistakes that ChatGPT makes when generating text that is semantically correct but factually incorrect or even nonsensical (Smith, 2023). Such hallucinations are a general problem for ChatGPT but, depending on the context of their occurrence, could raise significant ethical problems, for example in healthcare settings.
In addition to these questions that are focused on the technology itself and the way people can interact with it, there are further open questions about how long-term use of this type of technology changes our collective view of what it means to be human and to interact with technology. We posit in this article that one of the crucial features of ChatGPT is its ability to produce human-like texts, which implies that it may be difficult to assess whether a contribution to a conversation comes from a human or a machine. This is traditionally seen as problematic, as we assume that interactions between humans and machines and among humans differ. The European General Data Protection Regulation (GDPR, 2016) stipulates in Article 22 that any data subject "shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." This could be read as implying that many interactions with chatbots will have to be flagged as such. What is unclear, however, is whether and how long this position will be upheld. If ChatGPT paves the way for widespread use of chatbots for all sorts of purposes, then it may well be that our moral preferences change and the distinction between interacting with a human and a machine is seen as decreasingly crucial. The opposite development is also conceivable, where more emphasis will be put on the identification of artificial agents in conversations.

In the longer term these developments can also touch on the broader ethical questions of the impact of artificial general intelligence, a highly contested term (Mitchell, 2019) but one that OpenAI openly promotes. This points to larger philosophical questions around the nature of reality and how AI can change our view of it, which are beyond the scope of this article that focuses on immediate capabilities and impacts of ChatGPT but which may well turn out to be the most significant ethical implications of ChatGPT.

At a philosophical level, there is also the open question of whether ChatGPT's ability to produce humanlike responses affirms Jacques Derrida's critique of logocentrism, i.e. the privileging of speech over writing. Derrida's position was that meaning in natural language is determined by the 'play' of differences between words and not by the ideas and intentions of a speaker (Sallis, 1987). There is no reason, therefore, to consider that speech comes before writing. For some, it may be taken that ChatGPT 'writes without speech'. If this is true, it confirms Derrida's position that there is a disconnect between the living voice of the speaker and written words. However, does ChatGPT write without speech? Or is it trained on the living voices of many speakers and subsequently predicts patterns of words? Does ChatGPT 'write' without speech or 'predict' from living voices? This is an open question that requires more thought and research.

5. Implications

The analysis of the ethical issues of ChatGPT has implications for research as well as practice that are spelled out below.

5.1. Implications for research

The article has demonstrated that a rigorous and comprehensive approach to the ethics of ChatGPT is possible and that existing methodologies can be used to gain an overview of the ethical issues. At the same time, the previous section on open questions has given an indication of avenues for further research that can be pursued to further strengthen the approach and go beyond it.

An initial limitation of the present research is that it was undertaken by the authors of the paper which allowed for a controlled analysis of the comparison between the abstract analysis of ethical aspects offered here and the specific work that is already undertaken by the owners and developers of ChatGPT and other large language models. The previous section suggests that these questions are taken seriously by developers but that work on ethical issues is focused on immediate harm and its avoidance, thus excluding many of the broader issues such as fairness, inclusion, access, labour markets etc. Furthermore, research appears to focus heavily on ethical concerns and tends not to be explicit about the difficult process of comparing and balancing benefits and concerns.

Future studies will furthermore be called for to better understand the detailed empirical impacts of ChatGPT. In this paper we focused on the generic characteristics of the technology and inferred possible uses and consequences from those characteristics. A more detailed understanding would be gained by a differentiated analysis looking at specific application areas or industries. Such detailed analysis would also help research keep abreast of the rapidly developing technical landscape and the capability of large language models. This question of technical development is also likely to have implications for the more abstract questions around the potential for the development of AGI or other long-term philosophical questions. It is likely that such questions will not be confined to ChatGPT or large language models but need to include other developments of AI and its enabling technologies. Elsewhere we have suggested that a suitable way of framing such research is to explore the ethics of the socio-technical innovation ecosystem in which AI is realised (Stahl, 2022, 2023). This perspective is important when determining the implications of ChatGPT because it shifts the focus from individuals (e.g. developers or users) or organisations (notably OpenAI) to the broader ecosystem (Chae, 2019; Senyo et al., 2019). The question of who is responsible for specific ethical consequences then transforms into the question of how the ecosystem should be structured to promote beneficial and prevent detrimental effects of the technology.

The key implication for research is thus that further research is called for. While this is a typical and often self-serving position taken in many research publications, its relevance and legitimacy in the case of ChatGPT is hard to deny. The potential impact of the technology combined with its rapidly evolving nature makes a strong argument for ongoing research-based reflection on its ethical implications. Such research is in the public interest but also in the interest of the AI industry and forms part of the implications for practice.

5.2. Implications for practice

The insights developed in this paper and the availability of a more comprehensive overview of ethical concerns related to ChatGPT are relevant to the practice of several stakeholder groups including policymakers, research funders and companies developing or using large language models.

One can observe a rapidly growing recognition among policymakers that the nature of recent AI developments is such that a passive approach to legislation and regulation is difficult to justify. While some regulatory developments such as the EU AI Act precede the arrival of ChatGPT, one can argue that ChatGPT has contributed to the push for policy intervention, notably in the USA. It is now broadly recognised that policy initiatives, potentially including legislation, regulation or the creation of new regulators (Graham & Warren, 2023), are required to ensure responsible development and use of generative AI. Such initiatives may be called for to mitigate impacts, for example where harm arises due to job losses, inequality or social biases, and to address identified harms. These are not fundamentally new insights (Executive Office of the President, 2016), but the arrival of ChatGPT has highlighted the
ethical issues but is limited by the worldviews and experiences of the need for such interventions.
researchers. Undertaking a similar study using the ethics of emerging The insights arising from this paper show that at present there is a
technologies but broadening expected benefits and concerns by lack of balance between the attention paid to ethical benefits and
including more stakeholders will very likely bring to light further in downsides of ChatGPT. Relevant policy should pay attention to both of
sights that would likely change the evaluation of benefits and risks. these aspects as well as to the question who receives the benefits and
Another avenue of further research would be to undertake a direct who is exposed to the downsides. This implies that policies are called for
9
B.C. Stahl and D. Eke International Journal of Information Management 74 (2024) 102700
that promote diversity, equity, and inclusion in access to generative AI those markets that are most profitable for them, thus leading to limits to
to prevent exacerbating divides. This may include subsidies or public international collaboration and exacerbating the risk that the downsides
access programmes. Policy can be used to create incentives through of new technologies will be pushed into areas where they do not have
government procurement practices, grants etc. to steer development their economic focus.
towards social good applications. ChatGPT and related technologies are It would thus be naïve to expect industry to solve these issues, but at
not locally bound with risks and benefits potentially having global the same time, they will not be solved without the contribution of in
reach. This means that any relevant policy intervention needs to move dustry. A public commitment of the tech giants stating that they have
beyond the local and national level and establish reliable policy ethical duties arising from the products they distribute cannot replace
frameworks on the international level (UNESCO, 2022; Wallach & stronger regulatory interventions, but it can serve as a useful reminder in
Marchant, 2018). Some of the ethical issues point to the fundamental controversial debates that at least some companies accept that they have
structures of our societies. ChatGPT arises from the current context of ethical commitments beyond the maximisation of profits and that they
what Zuboff (2019) calls surveillance capitalism. A long-lasting response recognise their social responsibilities.
to the ethics of ChatGPT may well call for fundamental reconsiderations
of the structure of current socio-economic systems. 6. Conclusion
Some of the more immediate implications of our research arise for
research funders. The objective of our research was to come to a more This article explores the ethical issues raised by ChatGPT beyond
comprehensive understanding of the ethical issues of ChatGPT. The current debates on authorship, demonstrating the value of applying
follow-on research activities outlined in the implications for research existing approaches to the ethics of emerging technologies. Our analysis
will need to be funded. This not only includes the broadening of the highlights a broad range of relevant ethical concerns going significantly
research by involving stakeholder perspectives but also research on beyond the current discourse. This is useful for AI ethics scholars, de
possible and likely mitigation measures. These can range from the use of velopers, and anyone interested in suitable ethics methodologies. The
open, transparent and unbiased foundations for generative models that analysis also provides insights into implications and interventions,
can overcome some of the limitations of proprietary systems. Key to the emphasizing the need to consider the whole socio-technical AI
success of engaging with ethics of ChatGPT will be to bring in diverse ecosystem, not just specific issues. Fully addressing the challenges will
perspectives which may call for funding to support diverse participation likely require multi-level societal engagement, from individual software
in generative AI development and preventing homogeneous thinking. engineers to international policies, to ensure the benefits of AI, in
This calls for interdisciplinary research that can inform educational particular large language models such as those underpinning ChatGPT,
initiatives to develop expert and public understanding of responsible are realized while harms are avoided. Overall, the article advocates
development and use. The broadening of the discourse furthermore calls applying a holistic ethics perspective to guide ongoing discourse and
for stronger interdisciplinary collaboration. action around such impactful emerging technologies.
Policy and funding support will only have a chance of promoting
ethically beneficial outcomes of ChatGPT use, if the organisations Responsible innovation ecosystems
driving the development of large language models actively engage with
ethical questions. Implications for industry thus include a need to pri Ethical implications of the application of the ecosystem concept to
oritise ethical considerations early, e.g. through ethical review boards artificial intelligence.
and risk assessment processes. Responsible design principles that are
being developed across the AI landscape like transparency, account Funding source
ability and bias testing should become standard in training generative AI
models. Companies should consider the use of impact assessment This research was funded by the European Union’s Horizon 2020
methods (Stahl et al., 2023) prior to deploying their technologies. It is Research and Innovation Programme Under Grant Agreement No.
likely that a risk-based approach will be applied to AI (NIST, 2022) and 945539 (Human Brain Project SGA3). This work was furthermore sup
companies need to consider how high risk applications of large language ported by the Engineering and Physical Sciences Research Council
models can be governed. But even for lower risk applications, the bal [Horizon Digital Economy Research ‘Trusted Data Driven Products: EP/
ance of benefits and concerns should be considered. This is likely to call T022493/1].
for companies to develop access and pricing models that prevent the
exclusion of typically underserved communities, to allow them to
CRediT authorship contribution statement
benefit from large language models.
In developing appropriate responses, companies need to work with
Bernd Carsten STAHL: Conceptualization, Data collection and
governments, regulators and policymakers. This certainly appears to be
analysis, Methodology design, Writing, Funding acquisition. Damian
happening at present where OpenAI and other large tech companies are
EKE: Data collection and analysis, writing.
working with the US legislature and executive to address concerns
around AI. It is important to keep in mind, however, that the oligopo
listic structure of the tech industry is part of the underlying problem and Declaration of Competing Interest
the leading companies in this space are unlikely to wish to change the
structure that they benefit from. Furthermore, companies will focus on None.
Appendix A. Cumulative list of ethical issues from ATE, EIA and ETICA
10
B.C. Stahl and D. Eke International Journal of Information Management 74 (2024) 102700
ETICA (continued)
Digital divides
Collective human identity and the good life
Ownership, data control, and intellectual property
Responsibility
Surveillance
Cultural differences
Uncertainty of outcomes
Perceptions of technology
Role of humans

EIA
autonomy
dignity
informed consent
safety
Social solidarity, inclusion and exclusion
Isolation and substitution of human contact
Discrimination and social sorting
Beneficence
Universal service
Accessibility
Value sensitive design
Sustainability
Justice
Equality and fairness (social justice)
Collection limitation (data minimisation) and retention
Data quality
Purpose specification
Use limitation
Confidentiality, security and protection of data
Transparency (openness)
Individual participation and access to data
Anonymity
Privacy of personal communications: monitoring and location tracking
Privacy of the person
Privacy of personal behaviour

ATE
Health and bodily harm
Pain and suffering
Psychological harm
Harm to human capabilities
Environmental harm
Harms to society
Freedom
Freedom of movement
Freedom of speech and expression
Freedom of assembly
Autonomy
Ability to think one’s own thoughts and form one’s own opinions
Ability to make one’s own choices
Responsibility and accountability
Informed consent
Human dignity
Privacy
Information privacy
Bodily privacy
Relational privacy
Property
Right to property
Intellectual property rights
Other basic human rights as specified in human rights declarations (e.g., to life, to have a fair trial, to vote, to receive an education, to pursue happiness, to seek asylum, to
engage in peaceful protest, to practice one’s religion, to work for anyone, to have a family etc.)
Animal rights and animal welfare
Just distribution of primary goods, capabilities, risks and hazards
Nondiscrimination and equal treatment relative to age, gender, sexual orientation, social class, race, ethnicity, religion, disability, etc.
North–south justice
Intergenerational justice
Social inclusion
Supportive of happiness, health, knowledge, wisdom, virtue, friendship, trust, achievement, desire-fulfillment, and transcendent meaning
Supportive of vital social institutions and structures
Supportive of democracy and democratic institutions
Supportive of culture and cultural diversity
11
B.C. Stahl and D. Eke International Journal of Information Management 74 (2024) 102700
References

Adler, M., & Ziglio, E. (1996). Gazing into the Oracle: The Delphi Method and its Application to Social Policy and Public Health. Jessica Kingsley.
AI HLEG. (2019). Ethics Guidelines for Trustworthy AI. European Commission - Directorate-General for Communication. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
Ali, S. R., Dobbs, T. D., Hutchings, H. A., & Whitaker, I. S. (2023). Using ChatGPT to write patient clinic letters. The Lancet Digital Health, 5(4), e179–e181. https://doi.org/10.1016/S2589-7500(23)00048-1
Aristotle. (2007). The Nicomachean Ethics. Filiquarian Publishing, LLC.
Ashok, M., Madan, R., Joha, A., & Sivarajah, U. (2022). Ethical framework for artificial intelligence and digital technologies. International Journal of Information Management, 62, Article 102433. https://doi.org/10.1016/j.ijinfomgt.2021.102433
Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning (SSRN Scholarly Paper 4337484). https://doi.org/10.2139/ssrn.4337484
Beauchamp, T. L., & Childress, J. F. (2009). Principles of Biomedical Ethics (6th ed.). OUP USA.
Bentham, J. (1789). An introduction to the principles of morals and legislation. Dover Publications Inc.
Bove, T. (2023, February 3). OpenAI founder Sam Altman says he can imagine ways that ChatGPT "breaks capitalism." Fortune. https://fortune.com/2023/02/03/openai-sam-altman-chatgpt-break-capitalism/.
Bowie, N. E. (1999). Business ethics: A Kantian perspective. Blackwell Publishers.
Brey, P. A. E. (2012). Anticipating ethical issues in emerging IT. Ethics and Information Technology, 14(4), 305–317. https://doi.org/10.1007/s10676-012-9293-y
Brooks, L., Cannizzaro, S., Umbrello, S., Bernstein, M. J., & Richardson, K. (2021). Ethics of climate engineering: Don't forget technology has an ethical aspect too. International Journal of Information Management, Article 102449. https://doi.org/10.1016/j.ijinfomgt.2021.102449
Bynum, T. W. (2001). Computer ethics: Its birth and its future. Ethics and Information Technology, 3(2), 109–112. https://doi.org/10.1023/A:1011893925319
Cagnin, C., Keenan, M., Johnston, R., Scapolo, F., & Barré, R. (2008). Future-oriented technology analysis. Springer. https://doi.org/10.1007/978-3-540-68811-2
Cambridge Dictionary. (2023). Methodology. Cambridge Dictionary. https://dictionary.cambridge.org/dictionary/english/methodology.
Castelvecchi, D. (2022). Are ChatGPT and AlphaCode going to replace programmers? Nature. https://doi.org/10.1038/d41586-022-04383-z
Chae, B. (Kevin) (2019). A general framework for studying the evolution of the digital innovation ecosystem: The case of big data. International Journal of Information Management, 45, 83–94. https://doi.org/10.1016/j.ijinfomgt.2018.10.023
Collingridge, D. (1981). The social control of technology. Palgrave Macmillan.
Cuhls, K. (2003). From forecasting to foresight processes—New participative foresight activities in Germany. Journal of Forecasting, 22(2–3), 93–111. https://doi.org/10.1002/for.848
De George, R. T. (1999). Business Ethics (5th ed.). Prentice Hall College Div.
Du, H., Teng, S., Chen, H., Ma, J., Wang, X., Gou, C., Li, B., Ma, S., Miao, Q., Na, X., Ye, P., Zhang, H., Luo, G., & Wang, F.-Y. (2023). Chat with ChatGPT on intelligent vehicles: An IEEE TIV perspective. IEEE Transactions on Intelligent Vehicles, 8(3), 2020–2026. https://doi.org/10.1109/TIV.2023.3253281
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., Galanos, V., Ilavarasan, P. V., Janssen, M., Jones, P., Kar, A. K., Kizgin, H., Kronemann, B., Lal, B., Lucini, B., & Williams, M. D. (2019). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management. https://doi.org/10.1016/j.ijinfomgt.2019.08.002
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., & Wright, R. (2023). "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, Article 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642
Dwivedi, Y. K., Pandey, N., Currie, W., & Micu, A. (2023). Leveraging ChatGPT and other generative artificial intelligence (AI)-based applications in the hospitality and tourism industry: Practices, challenges and research agenda. International Journal of Contemporary Hospitality Management. https://doi.org/10.1108/IJCHM-05-2023-0686
Eke, D. O. (2023). ChatGPT and the rise of generative AI: Threat to academic integrity. Journal of Responsible Technology, 13, Article 100060. https://doi.org/10.1016/j.jrt.2023.100060
Ellul, J. (1973). The technological society. Random House Inc., USA.
European Commission. (2021). Proposal for a Regulation on a European approach for Artificial Intelligence (COM(2021) 206 final). European Commission. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence.
European Parliament. (2020). The ethics of artificial intelligence: Issues and initiatives (PE 634.452). EPRS | European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf.
Europol. (2023). ChatGPT - the impact of Large Language Models on Law Enforcement [Tech Watch Flash]. Europol. https://www.europol.europa.eu/publications-events/publications/chatgpt-impact-of-large-language-models-law-enforcement.
Executive Office of the President. (2016). Preparing for the Future of Artificial Intelligence. Executive Office of the President, National Science and Technology Council, Committee on Technology. https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf.
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI. https://dash.harvard.edu/handle/1/42160420.
Floridi, L., & Strait, A. (2020). Ethical foresight analysis: What it is and why it is needed. Minds and Machines, 30, 77–97.
Frederick, B. (2023, January 16). Will ChatGPT Take Your Job? Search Engine Journal. https://www.searchenginejournal.com/will-chatgpt-take-your-job/476189/.
Freyhofer, H. H. (2004). The Nuremberg Medical Trial: The Holocaust and the Origin of the Nuremberg Medical Code (2nd Revised ed.). Peter Lang Publishing Inc.
GDPR. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal of the European Union, L119/1.
Genus, A., & Stirling, A. (2018). Collingridge and the dilemma of control: Towards responsible and accountable innovation. Research Policy, 47(1), 61–69. https://doi.org/10.1016/j.respol.2017.09.012
Gilson, A., Safranek, C. W., Huang, T., Socrates, V., Chi, L., Taylor, R. A., & Chartash, D. (2023). How does ChatGPT perform on the United States Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment. JMIR Medical Education, 9(1), Article e45312. https://doi.org/10.2196/45312
Graham, L., & Warren, E. (2023, July 27). Opinion | Lindsey Graham and Elizabeth Warren: When It Comes to Big Tech, Enough Is Enough. The New York Times. https://www.nytimes.com/2023/07/27/opinion/lindsey-graham-elizabeth-warren-big-tech-regulation.html.
Grant, N. (2023, January 20). Google Calls In Help From Larry Page and Sergey Brin for A.I. Fight. The New York Times. https://www.nytimes.com/2023/01/20/technology/google-chatgpt-artificial-intelligence.html.
Gray, P., & Hovav, A. Z. (2007). The IS organization of the future: Four scenarios for 2020. Information Systems Management, 24(2), 113–120.
Groves, C. (2009). Future ethics: Risk, care and non-reciprocal responsibility. Journal of Global Ethics, 5(1), 17–31.
Guston, D. (2013). "Daddy, Can I Have a Puddle Gator?": Creativity, Anticipation and Responsible Innovation. In R. Owen, M. Heintz, & J. Bessant (Eds.), Responsible Innovation (pp. 109–118). Wiley.
Heidegger, M. (1953). Die Frage nach der Technik. http://content.wuala.com/contents/nappan/Documents/Cyberspace/Heidegger,%20Martin%20-%20Die%20Frage%20nach%20der%20Technik.pdf.
Hsu, T., & Thompson, S. A. (2023, February 8). Disinformation Researchers Raise Alarms About A.I. Chatbots. The New York Times. https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html.
Hutson, M. (2021). Robo-writers: The rise and risks of language-generating AI. Nature, 591(7848), 22–25. https://doi.org/10.1038/d41586-021-00530-0
IEEE. (2017). The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. https://standards.ieee.org/develop/indconn/ec/autonomous_systems.html.
Ihde, D. (1990). Technology and the lifeworld: From garden to earth. Indiana University Press.
InterAcademy Partnership. (2016). Doing global science: A guide to responsible conduct in the global research enterprise. Princeton University Press.
Jasanoff, S. (2003). Technologies of humility: Citizen participation in governing science. Minerva, 41(3), 223–244. https://doi.org/10.1023/A:1025557512320
Javaid, M., Haleem, A., & Singh, R. P. (2023). ChatGPT for healthcare services: An emerging stage for an innovative perspective. BenchCouncil Transactions on Benchmarks, Standards and Evaluations, 3(1), Article 100105. https://doi.org/10.1016/j.tbench.2023.100105
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Kant, I. (1788). Kritik der praktischen Vernunft. Reclam, Ditzingen.
Kant, I. (1797). Grundlegung zur Metaphysik der Sitten. Reclam, Ditzingen.
Kazim, E., & Koshiyama, A. S. (2021). A high-level overview of AI ethics. Patterns, 2(9). https://doi.org/10.1016/j.patter.2021.100314
Lee, J. Y. (2023). Can an artificial intelligence chatbot be the author of a scholarly article? Science Editing, 10(1), 7–12. https://doi.org/10.6087/kcse.292
Marr, B. (2023a, January 23). How ChatGPT And Natural Language Technology Might Affect Your Job If You Are A Computer Programmer. Forbes. https://www.forbes.com/sites/bernardmarr/2023/01/23/how-chatgpt-and-natural-language-technology-might-affect-your-job-if-you-are-a-computer-programmer/.
Marr, B. (2023b, February 24). GPT-4 Is Coming – What We Know So Far. Forbes. https://www.forbes.com/sites/bernardmarr/2023/02/24/gpt-4-is-coming–what-we-know-so-far/.
McMorrow, R., & Liu, N. (2023, April 11). China slaps security reviews on AI products as Alibaba unveils ChatGPT challenger. Financial Times. https://www.ft.com/content/755cc5dd-e6ce-4139–9110-0877f2b90072.
Mehdi, Y. (2023, February 7). Reinventing search with a new AI-powered Microsoft Bing and Edge, your copilot for the web. The Official Microsoft Blog. https://blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web/.
Memarian, B., & Doleck, T. (2023). Fairness, accountability, transparency, and ethics (FATE) in artificial intelligence (AI) and higher education: A systematic review. Computers and Education: Artificial Intelligence, 5, Article 100152. https://doi.org/10.1016/j.caeai.2023.100152
Metz, C., & Grant, N. (2023, February 6). Racing to Catch Up With ChatGPT, Google Plans Release of Its Own Chatbot. The New York Times. https://www.nytimes.com/2023/02/06/technology/google-bard-ai-chatbot.html.
Mill, J. S. (1861). Utilitarianism (2nd Revised ed.). Hackett Publishing Co, Inc.
Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans (Illustrated ed.). Farrar, Straus and Giroux.
Moor, J. H. (1985). What is computer ethics? Metaphilosophy, 16(4), 266–275.
Moor, J. H. (2008). Why we need better ethics for emerging technologies. In J. V. D. Hoven & J. Weckert (Eds.), Information technology and moral philosophy (pp. 26–39). Cambridge University Press.
Müller, V. C. (2020). Ethics of artificial intelligence and robotics. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2020). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2020/entries/ethics-ai/
Nast, C. (2023a, January 10). Infinite AI Interns for Everybody. Wired UK. https://www.wired.co.uk/article/ai-labor-interns.
Nast, C. (2023b, February 10). The Generative AI Race Has a Dirty Secret. Wired UK. https://www.wired.co.uk/article/the-generative-ai-search-race-has-a-dirty-secret.
Nature editorial. (2023). Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature, 613(7945), 612. https://doi.org/10.1038/d41586-023-00191-1
Nijsingh, N., & Duwell, M. (2009). Interdisciplinary, applied ethics and social science. In P. Sollie & M. Düwell (Eds.), Evaluating new technologies: Methodological problems for the ethical assessment of technology developments (pp. 79–92). Springer.
NIST. (2022). AI Risk Management Framework: Second Draft. https://www.nist.gov/document/ai-risk-management-framework-2nd-draft.
OECD. (2016). Research Ethics and New Forms of Data for Social and Economic Research [OECD Science, Technology and Industry Policy Papers]. Organisation for Economic Co-operation and Development. http://www.oecd-ilibrary.org/content/workingpaper/5jln7vnpxs32-en.
Online Etymology Dictionary. (2022, September 10). Technology | Etymology, origin and meaning of technology by etymonline. https://www.etymonline.com/word/technology.
OpenAI. (2023a). GPT-4 System Card. https://cdn.openai.com/papers/gpt-4-system-card.pdf.
OpenAI. (2023b). GPT-4 Technical Report (arXiv:2303.08774). arXiv. http://arxiv.org/abs/2303.08774.
OpenAI. (2023c). In Wikipedia. https://en.wikipedia.org/w/index.php?title=OpenAI&oldid=1149269339.
OpenAI. (2022, November 30). ChatGPT: Optimizing Language Models for Dialogue. OpenAI. https://openai.com/blog/chatgpt/.
Perrigo, B. (2023, January 18). Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9.
Ray, P. P. (2023). AI-assisted sustainable farming: Harnessing the power of ChatGPT in modern agricultural sciences and technology. ACS Agricultural Science & Technology, 3(6), 460–462. https://doi.org/10.1021/acsagscitech.3c00145
Reijers, W., Wright, D., Brey, P., Weber, K., Rodrigues, R., O'Sullivan, D., & Gordijn, B. (2018). Methods for practising ethics in research and innovation: A literature review, critical analysis and recommendations. Science and Engineering Ethics, 24(5), 1437–1481. https://doi.org/10.1007/s11948-017-9961-8
Royakkers, L., & Poel, I. van de (2011). Ethics, technology and engineering: An introduction. Wiley-Blackwell.
Sallam, M. (2023). ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare, 11(6), Article 887. https://doi.org/10.3390/healthcare11060887
Sallis, J. (1987). Deconstruction and philosophy: The texts of Jacques Derrida. University of Chicago Press.
Salunke, M. (2023, March 15). ChatGPT vs ChatGPT Plus: A Comparison. Medium. https://medium.com/@ind/chatgpt-vs-chatgpt-plus-a-comparison-e8b233165def.
Salvagno, M., Taccone, F. S., Gerli, A. G., & ChatGPT. (2023). Can artificial intelligence help for scientific writing? Critical Care, 27(1). https://doi.org/10.1186/s13054-023-04380-2
Sanders, N. E., & Schneier, B. (2023, January 15). Opinion | How ChatGPT Hijacks Democracy. The New York Times. https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html.
Sardar, Z. (2010). The namesake: Futures; futures studies; futurology; futuristic; foresight—What's in a name? Futures, 42(3), 177–184. https://doi.org/10.1016/j.futures.2009.11.001
Senyo, P. K., Liu, K., & Effah, J. (2019). Digital business ecosystem: Literature review and a framework for future research. International Journal of Information Management, 47, 52–64. https://doi.org/10.1016/j.ijinfomgt.2019.01.002
Shmueli, G., Maria Colosimo, B., Martens, D., Padman, R., Saar-Tsechansky, M., Liu Sheng, O. R., Street, W. N., & Tsui, K.-L. (2023). How can IJDS authors, reviewers, and editors use (and misuse) generative AI? INFORMS Journal on Data Science, 2(1), 1–9. https://doi.org/10.1287/ijds.2023.0007
Short, C. E., & Short, J. C. (2023). The artificially intelligent entrepreneur: ChatGPT, prompt engineering, and entrepreneurial rhetoric creation. Journal of Business Venturing Insights, 19, Article e00388. https://doi.org/10.1016/j.jbvi.2023.e00388
Smith, C. S. (2023, March 13). Hallucinations Could Blunt ChatGPT's Success. IEEE Spectrum. https://spectrum.ieee.org/ai-hallucination.
Someh, I., Davern, M., Breidbach, C., & Shanks, G. (2019). Ethical issues in big data analytics: A stakeholder perspective. Communications of the Association for Information Systems, 44(1), 718–747. https://doi.org/10.17705/1CAIS.04434
Spengler, O. (1931). Der Mensch und die Technik (2007 reprint). Voltmedia, Paderborn.
Stahl, B. C. (2011). IT for a better future: How to integrate ethics, politics and innovation. Journal of Information, Communication and Ethics in Society, 9(3), 140–156. https://doi.org/10.1108/14779961111167630
Stahl, B. C. (2012). Morality, ethics, and reflection: A categorization of normative IS research. Journal of the Association for Information Systems, 13(8), 636–656. https://doi.org/10.17705/1jais.00304
Stahl, B. C. (2021). From computer ethics and the ethics of AI towards an ethics of digital ecosystems. AI and Ethics, 2. https://doi.org/10.1007/s43681-021-00080-1
Stahl, B. C. (2022). Responsible innovation ecosystems: Ethical implications of the application of the ecosystem concept to artificial intelligence. International Journal of Information Management, 62, Article 102441. https://doi.org/10.1016/j.ijinfomgt.2021.102441
Stahl, B. C. (2023). Embedding responsibility in intelligent systems: From AI ethics to responsible AI ecosystems. Scientific Reports, 13(1). https://doi.org/10.1038/s41598-023-34622-w
Stahl, B. C., Antoniou, J., Bhalla, N., Brooks, L., Jansen, P., Lindqvist, B., Kirichenko, A., Marchal, S., Rodrigues, R., Santiago, N., Warso, Z., & Wright, D. (2023). A systematic review of artificial intelligence impact assessments. Artificial Intelligence Review. https://doi.org/10.1007/s10462-023-10420-8
Stahl, B. C., Flick, C., & Timmermans, J. (2017). Ethics of Emerging Information and Communication Technologies—On the implementation of RRI. Science and Public Policy, 44(3), 369–381. https://doi.org/10.1093/scipol/scw069
Stokel-Walker, C. (2022). AI bot ChatGPT writes smart essays—Should professors worry? Nature. https://doi.org/10.1038/d41586-022-04397-7
Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: Many scientists disapprove. Nature, 613(7945), 620–621. https://doi.org/10.1038/d41586-023-00107-z
B.C. Stahl and D. Eke International Journal of Information Management 74 (2024) 102700
Susarla, A., Gopal, R., Thatcher, J. B., & Sarker, S. (2023). The Janus effect of generative AI: Charting the path for responsible conduct of scholarly activities in information systems. Information Systems Research, 34(2), 399–408. https://doi.org/10.1287/isre.2023.ed.v34.n2
Sweney, M. (2023, March 8). Darktrace warns of rise in AI-enhanced scams since ChatGPT release. The Guardian. https://www.theguardian.com/technology/2023/mar/08/darktrace-warns-of-rise-in-ai-enhanced-scams-since-chatgpt-release
Swierstra, T., Stemerding, D., & Boenink, M. (2009). Exploring techno-moral change: The case of the obesity pill. In P. Sollie, & M. Düwell (Eds.), Evaluating new technologies: Methodological problems for the ethical assessment of technology developments (pp. 119–138). Springer.
Umbrello, S., Bernstein, M. J., Vermaas, P. E., Resseguier, A., Gonzalez, G., Porcari, A., Grinbaum, A., & Adomaitis, L. (2023). From speculation to reality: Enhancing anticipatory ethics for emerging technologies (ATE) in practice. Technology in Society, 74, Article 102325. https://doi.org/10.1016/j.techsoc.2023.102325
UNESCO. (2022). Recommendation on the Ethics of Artificial Intelligence. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000381137
van der Burg, S. (2014). On the hermeneutic need for future anticipation. Journal of Responsible Innovation, 1(1), 99–102. https://doi.org/10.1080/23299460.2014.882556
van Dis, E. A. M., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT: Five priorities for research. Nature, 614(7947), 224–226. https://doi.org/10.1038/d41586-023-00288-7
Vesnic-Alujevic, L., Nascimento, S., & Pólvora, A. (2020). Societal and ethical impacts of artificial intelligence: Critical notes on European policy frameworks. Telecommunications Policy, Article 101961. https://doi.org/10.1016/j.telpol.2020.101961
Wallach, W., & Marchant, G. E. (2018). An agile ethical/legal model for the international and national governance of AI and robotics. In W. Wallach (Ed.), Control and Responsible Innovation in the Development of AI and Robot (pp. 45–59). The Hastings Center. https://www.thehastingscenter.org/wp-content/uploads/Control-and-Responsible-Innovation-FINAL-REPORT.pdf
Walsham, G. (2012). Are we making a better world with ICTs? Reflections on a future agenda for the IS field. Journal of Information Technology, 27(2), 87–93.
Weale, S. (2023, January 13). Lecturers urged to review assessments in UK amid concerns over new AI tool. The Guardian. https://www.theguardian.com/technology/2023/jan/13/end-of-the-essay-uk-lecturers-assessments-chatgpt-concerns-ai
Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.-S., Cheng, M., Glaese, M., Balle, B., Kasirzadeh, A., Kenton, Z., Brown, S., Hawkins, W., Stepleton, T., Biles, C., Birhane, A., Haas, J., Rimell, L., Hendricks, L. A., & Gabriel, I. (2021). Ethical and social risks of harm from language models (arXiv:2112.04359). arXiv. https://doi.org/10.48550/arXiv.2112.04359
Weizenbaum, J. (1977). Computer power and human reason: From judgement to calculation (New edition). W. H. Freeman & Co Ltd.
Wilkins, A. (2023, February 15). ChatGPT AI passes test designed to show theory of mind in children. New Scientist. https://www.newscientist.com/article/2359418-chatgpt-ai-passes-test-designed-to-show-theory-of-mind-in-children/
Wolf, M. J., Miller, K., & Grodzinsky, F. S. (2017). Why we should have seen that coming: Comments on Microsoft's Tay "experiment," and wider implications. ACM SIGCAS Computers and Society, 47(3), 54–64. https://doi.org/10.1145/3144592.3144598
Wright, D. (2011). A framework for the ethical impact assessment of information technology. Ethics and Information Technology, 13(3), 199–226. https://doi.org/10.1007/s10676-010-9242-6
Xiang, C. (2023, February 28). OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit. Vice. https://www.vice.com/en/article/5d3naz/openai-is-now-everything-it-promised-not-to-be-corporate-closed-source-and-for-profit
Zhang, C., Zhang, C., Li, C., Qiao, Y., Zheng, S., Dam, S. K., Zhang, M., Kim, J. U., Kim, S. T., Choi, J., Park, G.-M., Bae, S.-H., Lee, L.-H., Hui, P., Kweon, I. S., & Hong, C. S. (2023). One small step for generative AI, one giant leap for AGI: A complete survey on ChatGPT in AIGC era (arXiv:2304.06488). arXiv. https://doi.org/10.48550/arXiv.2304.06488
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (1st edition). Profile Books.