Concepts of Ethics and Their Application to AI
Ethical issues of AI are hotly debated and sometimes contested. In order to understand
what they are and why they might be considered ethical issues, and to start thinking
about what can or should be done about them, I start with an introduction to ethics,
which is then followed by an empirically based discussion of current ethical issues
of AI.
At its most basic level, ethics has to do with good and bad, with right and wrong.
However, the term “ethics” is much more complex than that and the same word is
used to cover very different aspects of the question of right and wrong. Elsewhere
(Stahl 2012), I have proposed a distinction between four different levels, all of which are
covered by the term “ethics”:
1. Moral intuition, expressed in a statement of the sort: “This is right,” or “This is
wrong.”
2. Explicit morality, expressed in general statements like “One should always/never
do this.”
3. Ethical theory, i.e. the justification of morality drawing on moral philosophy,
expressed in statements like “Doing this is right/wrong because …”
4. Metaethics, i.e. higher-level theorising about ethical theories.
This view of ethics is compatible with other views, notably the frequently
suggested distinction between applied ethics, normative ethics and metaethics. It
also accommodates the typical introduction to ethics that one can find in technology
ethics textbooks (Van de Poel and Royakkers 2011), notably the dominant ethical
theories of deontology and consequentialism.
3.1 Ethical Theories

Ethical theories are attempts to find an answer to the question: what makes an action
ethically better or worse than an alternative action? Prominent examples of ethical
theories include consequentialism and deontology. (I shall return to virtue ethics
later.) Both of these originated during the Enlightenment period (mainly in the 18th
century). They aim to provide clear rules that allow us to determine the ethical quality
of an action. Consequentialist theories focus on the outcomes of the action for this
evaluation. The various approaches to utilitarianism going back to Jeremy Bentham
(1789) and John Stuart Mill (1861) are the most prominent examples. They are based
on the idea that one can, at least in theory, add up the aggregate utility and disutility
resulting from a particular course of action. The option with the highest net utility,
i.e. utility minus disutility, is the ethically optimal one.
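To make this calculus concrete, here is a minimal sketch (in Python; the actions and utility figures are invented purely for illustration, since the theory itself says nothing about how utilities are to be measured) of the net-utility decision rule just described:

```python
# Toy sketch of the utilitarian decision rule described above: score each
# course of action by aggregate utility minus aggregate disutility and
# choose the action with the highest net utility. All names and numbers
# are invented for illustration only.

actions = {
    "deploy the system": {"utility": 10.0, "disutility": 4.0},
    "delay and test":    {"utility": 7.0,  "disutility": 2.0},
    "do not deploy":     {"utility": 2.0,  "disutility": 0.0},
}

def net_utility(scores: dict) -> float:
    """Net utility = aggregate utility minus aggregate disutility."""
    return scores["utility"] - scores["disutility"]

best = max(actions, key=lambda name: net_utility(actions[name]))
print(best)  # "deploy the system": net utility 6.0 beats 5.0 and 2.0
```

The qualifier “at least in theory” in the text above is doing real work here: the arithmetic is trivial, whereas arriving at defensible numbers is the contested part of the approach.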
Deontology, on the other hand, holds that the basis of the ethical evaluation of an
action is the duty of the agent performing it. The most prominent
representative of this position is Immanuel Kant (1788, 1797), who formulated the so-
called categorical imperative. The most often quoted formulation of the categorical
imperative says “Act only on that maxim by which you can at the same time will
that it should become a universal law” (translation, quoted in Bowie 1999: 14). This
categorical imperative stops agents from rationalising exemptions for themselves.
The interesting aspect of this position for our purposes is that it pays no immediate
attention to the consequences of an action, but focuses exclusively on the motivation
for undertaking it.
It is important to underline, however, that deontology and utilitarianism are not
the only ethical theories that can be applied to AI, and to technology more broadly.
In addition to virtue ethics, to which I will return shortly, there are other general
ethical approaches such as the feminist ethics of care (Gilligan 1990) and ethics
based on various religions. Applying ethical theories to particular application areas
has resulted in rich discourses of concepts such as computer ethics (Bynum and
Rogerson 2003, Bynum 2008a, van den Hoven 2010), information ethics (Capurro
2006, Floridi 2006) and technology ethics (Brey 2011) that are relevant to AI.
Entire libraries have been written about philosophical ethics, and I cannot hope
to do justice to the many and rich nuances of ethical thinking. It may nevertheless
be helpful to outline how ethics links to the human condition. This can explain some
of the characteristics of ethics and it can shed light on whether or to what degree
non-human artificial agents can be ethical subjects.
3.2 Ethics and Human Flourishing

In antiquity the term “philosophy” had a much broader meaning than it does
today. For the ancient Greeks, philosophy was not just an intellectual
endeavour but an attempt to find ways to live the “good life”, the answer to the
question: how should I live (Annas 1993)? The major philosophical schools of ancient
Greece agreed that the cosmos had a purpose and that the individual good life,
resulting in happiness (Aristotle 2007), was predicated on people fulfilling their role
in society. This is the basis of virtue ethics, which is most prominently associated
with Aristotle (2007) but whose main tenets are widely shared across philosophical
schools. The focus of this approach to ethics is not so much the evaluation of the
anticipated outcomes of an individual act or the intention behind it, but rather the
provision of guidance to help the individual develop a virtuous character.
I do not want to overly romanticise ancient Greece, whose embrace of slavery
and misogyny is indefensible. However, virtue ethics as an approach to ethics
has significant appeal, probably because it promises guidance not only on
individual problems but on how we should live our lives. This may explain why it
has returned to prominence since the end of the 20th century and seen attempts to
translate it into modern contexts (MacIntyre 2007).
Terry Bynum is one of several scholars who have succeeded in translating the
ancient principles of virtue ethics into a modern technology-saturated context. He
suggests the development of a “flourishing ethics” (Bynum 2006) which draws from
Aristotelian roots. Its key tenets are:
1. Human flourishing is central to ethics.
2. Humans as social animals can only flourish in society.
3. Flourishing requires us to do what we are especially equipped to do.
4. We need to acquire genuine knowledge via theoretical reasoning and then act
autonomously and justly via practical reasoning in order to flourish.
5. The key to excellent practical reasoning and hence to being ethical is the ability
to deliberate about one’s goals and choose a wise course of action.
Bynum (2008b) has shown that these principles of virtue ethics are relevant to
and have informed ethical considerations of information technology since its early
days and can be found in the work of Norbert Wiener (1954), one of the fathers of
digital technology.
Much research has been undertaken to explore how principles of virtue ethics can
be applied to technology and how we can live a virtuous life in a technologically
driven society. An outstanding discussion of virtue ethics in the context of digital
technologies is provided by Vallor (2016), and, given that my approach relies heavily
on her discussion, I will return to it later with reference to human flourishing.
As Bynum points out, people are endowed with different skills and strengths.
Flourishing includes excellence in pursuit of one’s goals, which implies that there
are as many ways of flourishing as there are combinations of skills. Flourishing is
thus not a one-size-fits-all concept but needs to be given concrete content at the
individual level. Before I return to a more detailed discussion of the concept of flourishing, I now
want to discuss the motivations behind and purposes of developing, deploying and
using AI, as these have a direct bearing on the ethical evaluation of AI socio-technical
systems.
3.3 Purposes of AI
Fig. 3.1 Purposes of AI: efficiency, optimisation and profit maximisation; social control; human flourishing
Large amounts of data, combined with AI’s ability to detect patterns and
correlations between variables, allow for new
ways of controlling human behaviour. This can be done in subtle ways, using the
idea of “nudging” based on behavioural economics (Mullainathan and Thaler 2000,
Camerer et al. 2004), or it can be done more forcefully, as for example in the Chinese
social credit scoring system (Creemers 2018, Liu 2019).
The system intends to monitor, rate and regulate the financial, social, moral and, possibly,
political behavior of China’s citizens – and also the country’s companies – via a system of
punishments and rewards. The stated aim is to “provide the trustworthy with benefits and
discipline the untrustworthy.” (Bartsch and Gottske nd)
AI as social control can also breach the limits of legality, as happened in the
Facebook–Cambridge Analytica case, where social media data was used to ille-
gitimately influence the outcome of democratic elections (Isaak and Hanna 2018).
Zuboff (2019) offers a forceful argument that social control is a driving force and
a necessary condition of success of what she calls “surveillance capitalism”. In her
analysis she does not focus on the term AI, but her description of the way in which
new business models have developed and facilitated enormous profits is fully aligned
with the concept of AI as converging socio-technical systems (see Fig. 3.1).
The third purpose of using AI, drawing on the earlier discussion of ethics, is to
employ it for human flourishing. This means that AI is developed and deployed in
ways that promote human flourishing. It can be used as a tool that helps individuals
and groups identify worthwhile goals and supports them in their pursuit of excellence
in achieving these goals. There are a number of suggestions for ensuring that
AI has positive consequences for individuals and societies, all of which form part
of this third purpose: for example, attempts to construct a “good AI society” (Cath
et al. 2016) and the discourse on AI for good, which I discuss in more detail below
in the section on the benefits of AI.
The three different views of the purpose of AI are represented in Fig. 3.1.
These three goals may come into conflict, but they are not necessarily contradic-
tory.
The pursuit of efficiency and the resulting economic benefits can lead to a strong
economy that provides the material substrate for human wellbeing. By generating
wealth an efficient economy opens avenues of human flourishing that would other-
wise be impossible. For instance, a move from coal-based energy production to solar
energy is expensive and is thus more attainable in a wealthy economy. In addition,
the pursuit of efficiency and profit creation can be
a legitimate area of activity for excellence, and people can flourish in this activity.
Social control is often seen as problematic and in conflict with individual liberties.
The use of information and communications technologies (ICTs) has long been
associated with violations of privacy and the growth of surveillance (Lyon 2001).
Traditionally, this concern focused on the state as the source of surveillance. In these days of
corporate giants that control much of the data and technical infrastructure required for
AI, the concern includes the exploitation of individuals in new forms of “surveillance
capitalism” (Zuboff 2019). But, again, there does not have to be a contradiction
between social control and human flourishing. Humans as social beings need to
define ways of collaborating, which includes agreement on moral codes, and these
need to be controlled and enforced in some way. While nudging as a policy instrument
is contentious, it can be and often is used to promote behaviours that are conducive
to flourishing, such as a healthier lifestyle. Used especially in the United
Kingdom, Australia, Germany and the US (Benartzi et al. 2017), nudging involves
government-led campaigns to achieve given targets, for instance higher vaccination
rates. For example, a US campaign involved sending out planning prompts for flu
vaccination to citizens, which increased vaccination rates by 4.2% (ibid).
In the technology domain AI can be used to promote privacy awareness (Acquisti
2009), arguably a condition of flourishing. As I write these sentences, much of the
world is under lockdown due to the COVID-19 pandemic. In the UK there is a
heated debate around apps to be used to support the tracking and tracing of infected
individuals (Klar and Lanzerath 2020). What this shows is that even forced social
control through digital technologies may in some circumstances be conducive to
human flourishing, for example, if it can help save lives and allow society to function.
A Venn-type diagram may therefore be a better representation of the relationship of
the three purposes (Fig. 3.2).
Fig. 3.2 Purposes of AI as overlapping fields: efficiency, optimisation and profit maximisation; social control; human flourishing
I must emphasise that the three purposes of AI listed in Figures 3.1 and 3.2 are
not intrinsically contradictory, but rather describe the main fields of emphasis or
different directions of travel that can guide the development and deployment of AI.
My proposal is that the explicit aim to do the ethically right thing with AI can be
described with reference to human flourishing.
This is not a novel insight. It draws from the ancient Greek philosophers and has
been applied to ICT for decades. It has also been applied to AI. Virginia Dignum
(2019: 119), for example, states: “Responsible Artificial Intelligence is about human
responsibility for the development of intelligent systems along fundamental human
principles and values, to ensure human flourishing and well-being in a sustainable
world.” Mark Coeckelbergh (2019: 33) voices a similar view when he states that we
“need a positive and constructive ethics of AI, which is not only about regulation in
the sense of constraints but which also concerns the question of the good life and
human and societal flourishing”. The principle of this argument is unproblematic and
can also be found in AI policy proposals (ALLEA and Royal Society 2019). Who,
after all, would say that they want to use AI to limit human flourishing? However,
it raises the questions: how can we know whether human flourishing is promoted
or achieved, and how can this be translated into practice? In order to answer these
questions, I will now look at some theoretical positions on technology and its role in
the world.
3.4 Theoretical Perspectives on Human Flourishing

Flourishing ethics is part of the tradition of virtue ethics, with its historical roots
in Aristotelian ethics. In order to answer the question, “How can we understand flour-
ishing in practical terms?” it is helpful to look at other positions that share the aim of
promoting human flourishing. Three positions that have been applied to technology,
or that were developed specifically with research and technology development in
mind, are important in this context: critical theory of technology, capability theory
and responsible research and innovation. Each of these three offers an established
theoretical approach that is consistent with human flourishing, and each has led to a
wealth of insights into how flourishing can be observed and promoted.
Critical theory of technology is my first example of a theoretical approach rele-
vant to AI that encompasses flourishing. Critical theory has a number of different
possible roots. In its European tradition it tends to trace its origins to Marx’s critique
of capitalism. There is a recurrence of Marxist thinking in relation to digital tech-
nologies (Greenhill and Wilson 2006, Fuchs and Mosco 2017). However, much of
critical theory of technology uses later developments of critical theory, notably of
the Frankfurt School (Wiggershaus 1995). Andrew Feenberg’s (1993, 1999) work is
probably the best-known example of the use of critical theory to understand modern
technology. In addition, there has been a long-standing discussion of critical theory
in the field of information systems, which draws on further theoretical traditions,
such as postcolonialism (Mayasandra et al. 2006) and postmodernism (Calás and
Smircich 1999).
Elsewhere I have argued that one central unifying feature of the various views
of critical theory is that they aim to promote emancipation (Stahl 2008). The
emancipatory intention of critical research, i.e. research undertaken in the critical
tradition, means that resulting research cannot be confined to description only, but
attempts to intervene and practically promote emancipation (Cecez-Kecmanovic
2011). Myers and Klein (2011), drawing on Alvesson and Willmott (1992), see
emancipation as facilitating the realisation of human needs and potential, critical self-
reflection and associated self-transformation. The concept of emancipation seems
very close to the principle of human flourishing discussed earlier. My reason for
bringing critical theory into this discussion is that critical theory has developed a set
of tools and a high degree of sensitivity for understanding factors that can impede
emancipation. Because of its roots in Marxist ideology critique, critical theory is
well positioned to point to the factors limiting emancipation and flourishing that
arise from the current socio-economic system, from labour processes and from capi-
talist modes of production. As will be seen later, these constitute probably the largest
set of ethical issues associated with AI.
A second theoretical position worth highlighting in the context of human flour-
ishing is capability theory. Capability theory has roots in philosophy and economics
and is strongly associated with Amartya Sen (2009) and Martha Nussbaum (2011).
The capability approach originated in development economics and the desire to find
better ways of describing human development than purely financial and aggregate
measures such as the gross domestic product. It is also directly linked to and based
on the Aristotelian notion of flourishing (Johnstone 2007), and thus immediately
relevant to a discussion of the ethics of AI and human flourishing.
The reason for highlighting the capability approach is that it has a history of appli-
cation to information technologies (Oosterlaken and van den Hoven 2012), often in
the context of studies of ICT for development and its focus on marginalised and
vulnerable populations (Kleine 2010). It can thus be used as a way of sharpening the
focus on the impact that AI can have on such populations. In addition, the communi-
ties working with the capability approach have developed tools for improving human
functioning and freedoms and for measuring outcomes that have been recognised at
a political level, notably by the United Nations. It is therefore suited to the creation
of metrics that can be used to assess whether AI applications and uses benefit human
flourishing.
The final theoretical position relevant to AI ethics and human flourishing is that
of responsible research and innovation (RRI). RRI is a concept that has gained
prominence in research and innovation governance since around the early 2010s.
It has been defined as the “on-going process of aligning research and innovation
to the values, needs and expectations of society” (European Union 2014). There
are different interpretations of RRI (Owen and Pansera 2019), including that of the
European Commission (2013), which consists of six pillars or keys (engagement,
gender equality, science education, ethics, open access and governance), and that
of the UK’s Engineering and Physical Sciences Research Council (Owen 2014),
represented by the AREA acronym (anticipate, reflect, engage and act), which is
based on Stilgoe et al. (2013).
A much-cited definition of RRI proposed by Von Schomberg (2013: 63) sees RRI
as
a transparent, interactive process by which societal actors and innovators become mutually
responsive to each other with a view to the (ethical) acceptability, sustainability and societal
desirability of the innovation process and its marketable products (in order to allow a proper
embedding of scientific and technological advances in our society).
The reference to RRI is helpful in the context of AI ethics because it puts research
and innovation explicitly into the societal context. The idea that the process and
product of research and innovation should be acceptable, sustainable and societally
desirable can be read as implying that they should be conducive to human flour-
ishing. RRI can thus be understood as a way of promoting and implementing human
flourishing. RRI is important in the context of this book because it is well established
in research and innovation governance.
3.5 Ethical Principles of AI
The main thesis of this book is that flourishing ethics can enlighten AI ethics and
provide guidance in the development of practical interventions. The majority of
currently existing guidelines were not drafted from one theoretical viewpoint but
tend to use a set of ethical principles or values. What are these values?
The most comprehensive review of AI ethics guidelines published so far (Jobin
et al. 2019) lists the following ethical principles: transparency, justice and fairness,
non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust,
sustainability, dignity and solidarity. Each of these principles comprises several components.
Transparency, for example, refers to related concepts such as explainability, expli-
cability, understandability, interpretability, communication and disclosure. The rela-
tionship between these concepts is not normally well defined and they can refer
to different ethical positions. Elsewhere we have tried to clarify their normative
implications (Ryan and Stahl 2020).
Another example, the ethics guidelines for trustworthy AI proposed by the EU’s
High-Level Expert Group on Artificial Intelligence (2019), uses a tiered structure of
principles. The expert group proposes a framework for trustworthy AI that consists
of lawful AI (which they do not cover), ethical AI and robust AI. This framework is
based on four ethical principles: respect for human autonomy, prevention of harm,
fairness and explicability. From these principles they derive seven key requirements
for the realisation of trustworthy AI, namely: human agency and oversight; technical
robustness and safety; privacy and data governance; transparency; diversity, non-
discrimination and fairness; societal and environmental wellbeing; and accountability.
These requirements echo the principle-based approach of biomedical research
ethics, and transferring that approach to AI raises at least two concerns. Firstly, it
is usually taken for granted that biomedical research pursues an ethically commend-
able goal: that of contributing to human health and thus to human wellbeing. Ethical
concerns therefore do not arise from this goal itself but only from ways of achieving
it. In the case of technical research, including AI research, it is not at all obvious
that this implicit premise of biomedical research is applicable. The assumption that
the research itself and its intended consequences are ethically acceptable and desir-
able is in need of much more questioning and debate, casting doubt on whether the
process-oriented, principle-based approach of biomedical research ethics is a suitable
model for AI ethics.
Secondly, biomedical principlism (Beauchamp and Childress 2009) leaves open
the question of how to deal with conflicts between principles. This is a well-
established problem for any ethical approach that is based on a set of non-hierarchical
principles or values. In most cases it is possible to imagine situations where these
come into conflict. Looking at the principles used in AI, it is, for example, easy to
imagine a case where the principle of transparency would come into conflict with the
principle of privacy (a conflict sketched in the example below). In order to successfully guide action or decision-making, the approach
therefore needs to find ways of dealing with such conflicts. In addition, principlism
has been criticised for being overly close to its US origins and not generalisable
across the world (Schroeder et al. 2019).
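The following minimal sketch (in Python; the two design options and their principle scores are invented for illustration and do not come from any real assessment method) makes the transparency–privacy conflict concrete: neither option is at least as good as the other on every principle, so a flat, non-hierarchical set of principles cannot by itself determine a choice.

```python
# Toy illustration of a conflict between non-hierarchical principles.
# Higher scores mean an option satisfies a principle better; the numbers
# are invented purely to show the structure of the problem.

PRINCIPLES = ["transparency", "privacy"]

# e.g. detailed logging and disclosure of model inputs and decisions
option_a = {"transparency": 0.9, "privacy": 0.3}
# e.g. aggressive data minimisation that limits what can be disclosed
option_b = {"transparency": 0.4, "privacy": 0.8}

def dominates(x: dict, y: dict) -> bool:
    """True if x is at least as good as y on every principle and strictly
    better on at least one (Pareto dominance)."""
    at_least_as_good = all(x[p] >= y[p] for p in PRINCIPLES)
    strictly_better = any(x[p] > y[p] for p in PRINCIPLES)
    return at_least_as_good and strictly_better

if not dominates(option_a, option_b) and not dominates(option_b, option_a):
    # Neither option dominates: the principle set alone cannot break the
    # tie. Some additional rule is needed: weights, a priority ordering,
    # or an overarching aim such as human flourishing.
    print("Conflict: the principle set does not determine a choice.")
```

Supplying the missing ordering is exactly what an overarching aim can do, which is the role the next paragraph assigns to human flourishing.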
Framing AI ethics in terms of human flourishing can address both concerns.
By offering an overarching ethical ambition it proposes a point of comparison
that can help address value conflicts. It also aligns more closely with 21st-century
research ethics, which has been moving away from Western principles to global
values (Schroeder et al. 2019). And it furthermore offers a perspective that does not
take for granted that all research and technology innovation is desirable per se, but
clearly posits flourishing as the overarching goal.
References
Acquisti A (2009) Nudging privacy: the behavioral economics of personal information. IEEE Secur
Priv 7:82–85. https://doi.org/10.1109/MSP.2009.163
ALLEA, Royal Society (2019) Flourishing in a data-enabled society. https://royalsociety.org/-/media/policy/Publications/2019/28-06-19-flourishing-in-data-enabled-society.pdf?la=en-GB&hash=D521F71EB21F9369FAC26D7E1313398A. Accessed 23 Sept 2020
Alvesson M, Willmott H (1992) On the idea of emancipation in management and organization
studies. Acad Manage Rev 17:432–464
Annas J (1993) The morality of happiness, New edn. Oxford University Press, New York
Aristotle (2007) The Nicomachean ethics. Filiquarian Publishing, Minneapolis
Bartsch B, Gottske M (nd) China’s social credit system. Bertelsmann Stiftung. https://www.bertelsmann-stiftung.de/fileadmin/files/aam/Asia-Book_A_03_China_Social_Credit_System.pdf. Accessed 25 Sept 2020
Beauchamp TL, Childress JF (2009) Principles of biomedical ethics, 6th edn. Oxford University
Press, New York
Benartzi S, Beshears J, Milkman KL et al (2017) Should governments invest more in nudging?
Psychol Sci 28:1031–1040. https://doi.org/10.1177/0956797617702501
Bentham J (1789) An introduction to the principles of morals and legislation. Dover Publications,
Mineola NY