Ethics of Artificial Intelligence and Robotics
First published Thu Apr 30, 2020
Artificial intelligence (AI) and robotics are digital technologies that will have significant
impact on the development of humanity in the near future. They have raised fundamental
questions about what we should do with these systems, what the systems themselves should
do, what risks they involve, and how we can control them.
After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues
that arise with AI systems as objects, i.e., tools made and used by humans. This includes
issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot
interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7). Then AI systems
as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial
moral agency (§2.9). Finally, the problem of a possible future AI superintelligence leading to
a “singularity” (§2.10). We close with a remark on the vision of AI (§3).
For each section within these themes, we provide a general explanation of the ethical issues,
outline existing positions and arguments, then analyse how these play out with
current technologies, and finally indicate what policy consequences may be drawn.
1. Introduction
o 1.1 Background of the Field
o 1.2 AI & Robotics
o 1.3 A Note on Policy
2. Main Debates
o 2.1 Privacy & Surveillance
o 2.2 Manipulation of Behaviour
o 2.3 Opacity of AI Systems
o 2.4 Bias in Decision Systems
o 2.5 Human-Robot Interaction
o 2.6 Automation and Employment
o 2.7 Autonomous Systems
o 2.8 Machine Ethics
o 2.9 Artificial Moral Agents
o 2.10 Singularity
3. Closing
Bibliography
Academic Tools
Other Internet Resources
o References
o Research Organizations
o Conferences
o Policy Documents
o Other Relevant Pages
Related Entries
1. Introduction
1.1 Background of the Field
The ethics of AI and robotics is often focused on “concerns” of various sorts, which is a
typical response to new technologies. Many such concerns turn out to be rather quaint (trains
are too fast for souls); some are predictably wrong when they suggest that the technology
will fundamentally change humans (telephones will destroy personal communication, writing
will destroy memory, video cassettes will make going out redundant); some are broadly
correct but moderately relevant (digital technology will destroy industries that make
photographic film, cassette tapes, or vinyl records); but some are broadly correct and deeply
relevant (cars will kill children and fundamentally change the landscape). The task of an
article such as this is to analyse the issues and to deflate the non-issues.
Some technologies, like nuclear power, cars, or plastics, have caused ethical and political
discussion and significant policy efforts to control the trajectory of these technologies, usually
only once some damage is done. In addition to such “ethical concerns”, new technologies
challenge current norms and conceptual systems, which is of particular interest to
philosophy. Finally, once we have understood a technology in its context, we need to shape
our societal response, including regulation and law. All these features also exist in the case of
new AI and Robotics technologies—plus the more fundamental fear that they may end the
era of human control on Earth.
The ethics of AI and robotics has seen significant press coverage in recent years, which
supports related research, but also may end up undermining it: the press often talks as if the
issues under discussion were just predictions of what future technology will bring, and as
though we already know what would be most ethical and how to achieve that. Press coverage
thus focuses on risk, security (Brundage et al. 2018, in the Other Internet Resources section
below, hereafter [OIR]), and prediction of impact (e.g., on the job market). The result is a
discussion of essentially technical problems that focus on how to achieve a desired outcome.
Current discussions in policy and industry are also motivated by image and public relations,
where the label “ethical” is really not much more than the new “green”, perhaps used for
“ethics washing”. For a problem to qualify as a problem for AI ethics, it must be one where we
do not readily know what the right thing to do is. In this sense, job loss, theft, or killing with
AI is not in itself a problem in ethics, but whether these are permissible under certain
circumstances is. This article focuses on the genuine problems of ethics where we
do not readily know what the answers are.
A last caveat: The ethics of AI and robotics is a very young field within applied ethics, with
significant dynamics, but few well-established issues and no authoritative overviews—
though there is a promising outline (European Group on Ethics in Science and New
Technologies 2018) and there are beginnings on societal impact (Floridi et al. 2018; Taddeo
and Floridi 2018; S. Taylor et al. 2018; Walsh 2018; Bryson 2019; Gibert 2019; Whittlestone
et al. 2019), and policy recommendations (AI HLEG 2019 [OIR]; IEEE 2019). So this article
cannot merely reproduce what the community has achieved thus far, but must propose an
ordering where little order exists.
2. Main Debates
In this section we outline the ethical issues of human use of AI and robotics systems that can
be more or less autonomous—which means we look at issues that arise with certain uses of
the technologies which would not arise with others. It must be kept in mind, however, that
technologies will always cause some uses to be easier, and thus more frequent, and hinder
other uses. The design of technical artefacts thus has ethical relevance for their use (Houkes
and Vermaas 2010; Verbeek 2011), so beyond “responsible use”, we also need “responsible
design” in this field. The focus on use does not presuppose which ethical approaches are best
suited for tackling these issues; they might well be virtue ethics (Vallor 2017) rather than
consequentialist or value-based (Floridi et al. 2018). This section is also neutral with respect
to the question whether AI systems truly have “intelligence” or other mental properties: It
would apply equally well if AI and robotics are merely seen as the current face of automation
(cf. Müller forthcoming-b).
2.10 Singularity
2.10.1 Singularity and Superintelligence
In some quarters, the aim of current AI is thought to be an “artificial general intelligence”
(AGI), contrasted to a technical or “narrow” AI. AGI is usually distinguished from
traditional notions of AI as a general purpose system, and from Searle’s notion of “strong
AI”:
computers given the right programs can be literally said to understand and have other
cognitive states. (Searle 1980: 417)
The idea of singularity is that if the trajectory of artificial intelligence reaches up to systems
that have a human level of intelligence, then these systems would themselves have the ability
to develop AI systems that surpass the human level of intelligence, i.e., they are
“superintelligent” (see below). Such superintelligent AI systems would quickly self-improve
or develop even more intelligent systems. This sharp turn of events after reaching
superintelligent AI is the “singularity” from which the development of AI is out of human
control and hard to predict (Kurzweil 2005: 487).
The fear that “the robots we created will take over the world” had captured human
imagination even before there were computers (e.g., Butler 1863) and is the central theme in
Čapek’s famous play that introduced the word “robot” (Čapek 1920). This fear was first
formulated as a possible trajectory of existing AI into an “intelligence explosion” by Irving John
Good:
Let an ultraintelligent machine be defined as a machine that can far surpass all the
intellectual activities of any man however clever. Since the design of machines is one of
these intellectual activities, an ultraintelligent machine could design even better machines;
there would then unquestionably be an “intelligence explosion”, and the intelligence of man
would be left far behind. Thus the first ultraintelligent machine is the last invention that man
need ever make, provided that the machine is docile enough to tell us how to keep it under
control. (Good 1965: 33)
The optimistic argument from acceleration to singularity is spelled out by Kurzweil (1999,
2005, 2012) who essentially points out that computing power has been increasing
exponentially, i.e., doubling ca. every 2 years since 1970 in accordance with “Moore’s Law”
on the number of transistors, and will continue to do so for some time in the future. He
predicted in (Kurzweil 1999) that by 2010 supercomputers would reach human computation
capacity, by 2030 “mind uploading” would be possible, and by 2045 the “singularity” would
occur. Kurzweil talks about an increase in computing power that can be purchased at a given
cost—but of course in recent years the funds available to AI companies have also increased
enormously: Amodei and Hernandez (2018 [OIR]) thus estimate that in the years 2012–2018
the actual computing power available to train a particular AI system doubled every 3.4
months, resulting in a 300,000x increase—not the 7x increase that doubling every two years
would have created.
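To make the arithmetic explicit, here is a minimal sketch in Python; the 62-month window is an assumption chosen to roughly match the 2012–2018 period discussed above, and the exact factors depend on the window one picks:

```python
# A minimal sketch of the growth factors discussed above. The 62-month window
# is an assumption (roughly the 2012-2018 period measured by Amodei and
# Hernandez); the exact factors depend on the window chosen.

def growth_factor(months: float, doubling_time_months: float) -> float:
    """Factor by which a quantity grows if it doubles every doubling_time_months."""
    return 2 ** (months / doubling_time_months)

window = 62  # months, an assumed window

print(growth_factor(window, 24))   # doubling every 2 years: roughly 6x
print(growth_factor(window, 3.4))  # doubling every 3.4 months: roughly 300,000x
```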
A common version of this argument (Chalmers 2010) talks about an increase in
“intelligence” of the AI system (rather than raw computing power), but the crucial point of
“singularity” remains the one where further development of AI is taken over by AI systems
and accelerates beyond human level. Bostrom (2014) explains in some detail what would
happen at that point and what the risks for humanity are. The discussion is summarised in
Eden et al. (2012); Armstrong (2014); Shanahan (2015). There are possible paths to
superintelligence other than computing power increase, e.g., the complete emulation of the
human brain on a computer (Kurzweil 2012; Sandberg 2013), biological paths, or networks
and organisations (Bostrom 2014: 22–51).
Despite obvious weaknesses in the identification of “intelligence” with processing power,
Kurzweil seems right that humans tend to underestimate the power of exponential growth.
Mini-test: If you walked in steps in such a way that each step is double the previous, starting
with a step of one metre, how far would you get with 30 steps? (answer: to the Moon.)
Indeed, most progress in AI is readily attributable to the
availability of processors that are faster by orders of magnitude, larger storage, and higher
investment (Müller 2018). The actual acceleration and its speeds are discussed in (Müller
and Bostrom 2016; Bostrom, Dafoe, and Flynn forthcoming); Sandberg (2019) argues that
progress will continue for some time.
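The mini-test can be checked with a few lines of Python; the only external figure is the approximate average Earth–Moon distance of about 384,400 km, and the computation shows that 30 doubling steps take you well past the Moon:

```python
# Mini-test: 30 steps, each double the previous, starting with a 1-metre step.
# Total distance = 1 + 2 + 4 + ... + 2**29 = 2**30 - 1 metres.

MOON_DISTANCE_KM = 384_400  # approximate average Earth-Moon distance

total_m = sum(2 ** k for k in range(30))
total_km = total_m / 1000

print(f"{total_km:,.0f} km")         # ~1,073,742 km
print(total_km >= MOON_DISTANCE_KM)  # True: well past the Moon
```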
The participants in this debate are united by being technophiles in the sense that they expect
technology to develop rapidly and bring broadly welcome changes—but beyond that, they
divide into those who focus on benefits (e.g., Kurzweil) and those who focus on risks (e.g.,
Bostrom). Both camps sympathise with “transhuman” views of survival for humankind in a
different physical form, e.g., uploaded on a computer (Moravec 1990, 1998; Bostrom 2003a,
2003c). They also consider the prospects of “human enhancement” in various respects,
including intelligence—often called “IA” (intelligence augmentation). It may be that future
AI will be used for human enhancement, or will contribute further to the dissolution of the
neatly defined single human person. Robin Hanson provides detailed speculation about what
will happen economically in case human “brain emulation” enables truly intelligent robots or
“ems” (Hanson 2016).
The argument from superintelligence to risk requires the assumption that superintelligence
does not imply benevolence—contrary to Kantian traditions in ethics that have argued higher
levels of rationality or intelligence would go along with a better understanding of what is
moral and better ability to act morally (Gewirth 1978; Chalmers 2010: 36f). Arguments for
risk from superintelligence say that rationality and morality are entirely independent
dimensions—this is sometimes explicitly argued for as an “orthogonality thesis” (Bostrom
2012; Armstrong 2013; Bostrom 2014: 105–109).
Criticism of the singularity narrative has been raised from various angles. Kurzweil and
Bostrom seem to assume that intelligence is a one-dimensional property and that the set of
intelligent agents is totally ordered in the mathematical sense—but neither discusses
intelligence at any length in their books. Generally, it is fair to say that despite some efforts,
the assumptions made in the powerful narrative of superintelligence and singularity have not
been investigated in detail. One question is whether such a singularity will ever occur—it
may be conceptually impossible, practically impossible or may just not happen because of
contingent events, including people actively preventing it. Philosophically, the interesting
question is whether singularity is just a “myth” (Floridi 2016; Ganascia 2017), and not on the
trajectory of actual AI research. This is something that practitioners often assume (e.g.,
Brooks 2017 [OIR]). They may do so because they fear the public relations backlash,
because they overestimate the practical problems, or because they have good reasons to think
that superintelligence is an unlikely outcome of current AI research (Müller forthcoming-a).
This discussion raises the question whether the concern about “singularity” is just a narrative
about fictional AI based on human fears. But even if one does find negative reasons
compelling and the singularity not likely to occur, there is still a significant possibility that
one may turn out to be wrong. Philosophy is not on the “secure path of a science” (Kant
1787: B15), and maybe AI and robotics aren’t either (Müller 2020). So, it appears that
discussing the very high-impact risk of singularity has justification even if one thinks the
probability of such singularity ever occurring is very low.
2.10.2 Existential Risk from Superintelligence
Thinking about superintelligence in the long term raises the question whether
superintelligence may lead to the extinction of the human species, which is called an
“existential risk” (or XRisk): The superintelligent systems may well have preferences that
conflict with the existence of humans on Earth, and may thus decide to end that existence—
and given their superior intelligence, they will have the power to do so (or they may happen
to end it because they do not really care).
Thinking in the long term is the crucial feature of this literature. Whether the singularity (or
another catastrophic event) occurs in 30 or 300 or 3000 years does not really matter (Baum et
al. 2019). Perhaps there is even an astronomical pattern such that an intelligent species is
bound to discover AI at some point, and thus bring about its own demise. Such a “great
filter” would contribute to the explanation of the “Fermi paradox” of why there is no sign of life
in the known universe despite the high probability of it emerging. It would be bad news if we
found out that the “great filter” is ahead of us, rather than an obstacle that Earth has already
passed. These issues are sometimes taken more narrowly to be about human extinction
(Bostrom 2013), or more broadly as concerning any large risk for the species (Rees 2018)—
of which AI is only one (Häggström 2016; Ord 2020). Bostrom also uses the category of
“global catastrophic risk” for risks that are sufficiently high up the two dimensions of
“scope” and “severity” (Bostrom and Ćirković 2011; Bostrom 2013).
These discussions of risk are usually not connected to the general problem of ethics under
risk (e.g., Hansson 2013, 2018). The long-term view has its own methodological challenges
but has produced a wide discussion: Tegmark (2017) focuses on AI and human life “3.0”
after the singularity, while Russell, Dewey, and Tegmark (2015) and Bostrom, Dafoe, and Flynn
(forthcoming) survey longer-term policy issues in ethical AI. Several collections of papers
have investigated the risks of artificial general intelligence (AGI) and the factors that might
make this development more or less risk-laden (Müller 2016b; Callaghan et al. 2017;
Yampolskiy 2018), including the development of non-agent AI (Drexler 2019).
2.10.3 Controlling Superintelligence?
In a narrow sense, the “control problem” is how we humans can remain in control of an AI
system once it is superintelligent (Bostrom 2014: 127ff). In a wider sense, it is the problem
of how we can make sure an AI system will turn out to be positive according to human
perception (Russell 2019); this is sometimes called “value alignment”. How easy or hard it is
to control a superintelligence depends significantly on the speed of “take-off” to a
superintelligent system. This has led to particular attention to systems with self-
improvement, such as AlphaZero (Silver et al. 2018).
One aspect of this problem is that we might decide a certain feature is desirable, but then find
out that it has unforeseen consequences that are so negative that we would not desire that
feature after all. This is the ancient problem of King Midas, who wished that all he touched
would turn into gold. This problem has been discussed with reference to various examples,
such as the “paperclip maximiser” (Bostrom 2003b), or the program to optimise chess
performance (Omohundro 2014).
Discussions about superintelligence include speculation about omniscient beings, the radical
changes on a “latter day”, and the promise of immortality through transcendence of our
current bodily form—so sometimes they have clear religious undertones (Capurro 1993;
Geraci 2008, 2010; O’Connell 2017: 160ff). These issues also pose a well-known problem of
epistemology: Can we know the ways of the omniscient (Danaher 2015)? The usual
opponents have already shown up: A characteristic response of an atheist is
People worry that computers will get too smart and take over the world, but the real problem
is that they’re too stupid and they’ve already taken over the world. (Domingos 2015)
The new nihilists explain that a “techno-hypnosis” through information technologies has now
become our main method of distraction from the loss of meaning (Gertz 2018). Both
opponents would thus say we need an ethics for the “small” problems that occur with actual
AI and robotics (sections 2.1 through 2.9 above), and that there is less need for the “big
ethics” of existential risk from AI (section 2.10).
3. Closing
The singularity thus raises the problem of the concept of AI again. It is remarkable how
imagination or “vision” has played a central role since the very beginning of the discipline at
the “Dartmouth Summer Research Project” (McCarthy et al. 1955 [OIR]; Simon and Newell
1958). And the evaluation of this vision is subject to dramatic change: In a few decades, we
went from the slogans “AI is impossible” (Dreyfus 1972) and “AI is just automation”
(Lighthill 1973) to “AI will solve all problems” (Kurzweil 1999) and “AI may kill us all”
(Bostrom 2014). This created media attention and public relations efforts, but it also raises
the problem of how much of this “philosophy and ethics of AI” is really about AI rather than
about an imagined technology. As we said at the outset, AI and robotics have raised
fundamental questions about what we should do with these systems, what the systems
themselves should do, and what risks they have in the long term. They also challenge the
human view of humanity as the intelligent and dominant species on Earth. We have seen
issues that have been raised and will have to watch technological and social developments
closely to catch the new issues early on, develop a philosophical analysis, and draw lessons for
traditional problems of philosophy.
Bibliography
NOTE: Citations in the main text annotated “[OIR]” may be found in the Other Internet
Resources section below, not in the Bibliography.
Abowd, John M, 2017, “How Will Statistical Agencies Operate When All Data Are
Private?”, Journal of Privacy and Confidentiality, 7(3): 1–15. doi:10.29012/jpc.v7i3.404
Allen, Colin, Iva Smit, and Wendell Wallach, 2005, “Artificial Morality: Top-down, Bottom-
up, and Hybrid Approaches”, Ethics and Information Technology, 7(3): 149–155.
doi:10.1007/s10676-006-0004-4
Allen, Colin, Gary Varner, and Jason Zinser, 2000, “Prolegomena to Any Future Artificial
Moral Agent”, Journal of Experimental & Theoretical Artificial Intelligence, 12(3): 251–
261. doi:10.1080/09528130050111428
Amoroso, Daniele and Guglielmo Tamburrini, 2018, “The Ethical and Legal Case Against
Autonomy in Weapons Systems”, Global Jurist, 18(1): art. 20170012. doi:10.1515/gj-2017-
0012
Anderson, Janna, Lee Rainie, and Alex Luchsinger, 2018, Artificial Intelligence and the
Future of Humans, Washington, DC: Pew Research Center.
Anderson, Michael and Susan Leigh Anderson, 2007, “Machine Ethics: Creating an Ethical
Intelligent Agent”, AI Magazine, 28(4): 15–26.
––– (eds.), 2011, Machine Ethics, Cambridge: Cambridge University Press.
doi:10.1017/CBO9780511978036
Aneesh, A., 2006, Virtual Migration: The Programming of Globalization, Durham, NC and
London: Duke University Press.
Arkin, Ronald C., 2009, Governing Lethal Behavior in Autonomous Robots, Boca Raton, FL:
CRC Press.
Armstrong, Stuart, 2013, “General Purpose Intelligence: Arguing the Orthogonality
Thesis”, Analysis and Metaphysics, 12: 68–84.
–––, 2014, Smarter Than Us, Berkeley, CA: MIRI.
Arnold, Thomas and Matthias Scheutz, 2017, “Beyond Moral Dilemmas: Exploring the
Ethical Landscape in HRI”, in Proceedings of the 2017 ACM/IEEE International Conference
on Human-Robot Interaction—HRI ’17, Vienna, Austria: ACM Press, 445–452.
doi:10.1145/2909824.3020255
Asaro, Peter M., 2019, “AI Ethics in Predictive Policing: From Models of Threat to an Ethics
of Care”, IEEE Technology and Society Magazine, 38(2): 40–53.
doi:10.1109/MTS.2019.2915154
Asimov, Isaac, 1942, “Runaround: A Short Story”, Astounding Science Fiction, March 1942.
Reprinted in “I, Robot”, New York: Gnome Press 1950, 1940ff.
Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim
Shariff, Jean-François Bonnefon, and Iyad Rahwan, 2018, “The Moral Machine
Experiment”, Nature, 563(7729): 59–64. doi:10.1038/s41586-018-0637-6
Baldwin, Richard, 2019, The Globotics Upheaval: Globalisation, Robotics and the Future of
Work, New York: Oxford University Press.
Baum, Seth D., Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson,
Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg,
Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy, 2019, “Long-Term
Trajectories of Human Civilization”, Foresight, 21(1): 53–83. doi:10.1108/FS-04-2018-0037
Bendel, Oliver, 2018, “Sexroboter aus Sicht der Maschinenethik”, in Handbuch Filmtheorie,
Bernhard Groß and Thomas Morsch (eds.), (Springer Reference Geisteswissenschaften),
Wiesbaden: Springer Fachmedien Wiesbaden, 1–19. doi:10.1007/978-3-658-17484-2_22-1
Bennett, Colin J. and Charles Raab, 2006, The Governance of Privacy: Policy Instruments in
Global Perspective, second edition, Cambridge, MA: MIT Press.
Benthall, Sebastian and Bruce D. Haynes, 2019, “Racial Categories in Machine Learning”,
in Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT*
’19, Atlanta, GA, USA: ACM Press, 289–298. doi:10.1145/3287560.3287575
Bentley, Peter J., Miles Brundage, Olle Häggström, and Thomas Metzinger, 2018, “Should
We Fear Artificial Intelligence? In-Depth Analysis”, European Parliamentary Research
Service, Scientific Foresight Unit (STOA), March 2018, PE 614.547, 1–40. [Bentley et al.
2018 available online]
Bertolini, Andrea and Giuseppe Aiello, 2018, “Robot Companions: A Legal and Ethical
Analysis”, The Information Society, 34(3): 130–140. doi:10.1080/01972243.2018.1444249
Binns, Reuben, 2018, “Fairness in Machine Learning: Lessons from Political
Philosophy”, Proceedings of the 1st Conference on Fairness, Accountability and
Transparency, in Proceedings of Machine Learning Research, 81: 149–159.
Bostrom, Nick, 2003a, “Are We Living in a Computer Simulation?”, The Philosophical
Quarterly, 53(211): 243–255. doi:10.1111/1467-9213.00309
–––, 2003b, “Ethical Issues in Advanced Artificial Intelligence”, in Cognitive, Emotive and
Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Volume 2, Iva
Smit, Wendell Wallach, and G.E. Lasker (eds), (IIAS-147-2003), Tecumseh, ON:
International Institute of Advanced Studies in Systems Research and Cybernetics, 12–17.
[Bostrom 2003b revised available online]
–––, 2003c, “Transhumanist Values”, in Ethical Issues for the Twenty-First Century,
Frederick Adams (ed.), Bowling Green, OH: Philosophical Documentation Center Press.
–––, 2012, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced
Artificial Agents”, Minds and Machines, 22(2): 71–85. doi:10.1007/s11023-012-9281-3
–––, 2013, “Existential Risk Prevention as Global Priority”, Global Policy, 4(1): 15–31.
doi:10.1111/1758-5899.12002
–––, 2014, Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press.
Bostrom, Nick and Milan M. Ćirković (eds.), 2011, Global Catastrophic Risks, New York:
Oxford University Press.
Bostrom, Nick, Allan Dafoe, and Carrick Flynn, forthcoming, “Policy Desiderata for
Superintelligent AI: A Vector Field Approach (V. 4.3)”, in Ethics of Artificial Intelligence, S
Matthew Liao (ed.), New York: Oxford University Press. [Bostrom, Dafoe, and Flynn
forthcoming – preprint available online]
Bostrom, Nick and Eliezer Yudkowsky, 2014, “The Ethics of Artificial Intelligence”, in The
Cambridge Handbook of Artificial Intelligence, Keith Frankish and William M. Ramsey
(eds.), Cambridge: Cambridge University Press, 316–334.
doi:10.1017/CBO9781139046855.020 [Bostrom and Yudkowsky 2014 available online]
Bradshaw, Samantha, Lisa-Maria Neudert, and Phil Howard, 2019, “Government Responses
to Malicious Use of Social Media”, Working Paper 2019.2, Oxford: Project on
Computational Propaganda. [Bradshaw, Neudert, and Howard 2019 available online]
Brownsword, Roger, Eloise Scotford, and Karen Yeung (eds.), 2017, The Oxford Handbook
of Law, Regulation and Technology, Oxford: Oxford University Press.
doi:10.1093/oxfordhb/9780199680832.001.0001
Brynjolfsson, Erik and Andrew McAfee, 2016, The Second Machine Age: Work, Progress,
and Prosperity in a Time of Brilliant Technologies, New York: W. W. Norton.
Bryson, Joanna J., 2010, “Robots Should Be Slaves”, in Close Engagements with Artificial
Companions: Key Social, Psychological, Ethical and Design Issues, Yorick Wilks (ed.),
(Natural Language Processing 8), Amsterdam: John Benjamins Publishing Company, 63–74.
doi:10.1075/nlp.8.11bry
–––, 2019, “The Past Decade and Future of AI’s Impact on Society”, in Towards a New
Enlightenment: A Transcendent Decade, Madrid: Turner - BBVA. [Bryson 2019 available
online]
Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant, 2017, “Of, for, and by the
People: The Legal Lacuna of Synthetic Persons”, Artificial Intelligence and Law, 25(3):
273–291. doi:10.1007/s10506-017-9214-9
Burr, Christopher and Nello Cristianini, 2019, “Can Machines Read Our Minds?”, Minds
and Machines, 29(3): 461–494. doi:10.1007/s11023-019-09497-4
Butler, Samuel, 1863, “Darwin among the Machines: Letter to the Editor”, Letter in The
Press (Christchurch), 13 June 1863. [Butler 1863 available online]
Callaghan, Victor, James Miller, Roman Yampolskiy, and Stuart Armstrong (eds.),
2017, The Technological Singularity: Managing the Journey, (The Frontiers Collection),
Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-662-54033-6
Calo, Ryan, 2018, “Artificial Intelligence Policy: A Primer and Roadmap”, University of
Bologna Law Review, 3(2): 180-218. doi:10.6092/ISSN.2531-6133/8670
Calo, Ryan, A. Michael Froomkin, and Ian Kerr (eds.), 2016, Robot Law, Cheltenham:
Edward Elgar.
Čapek, Karel, 1920, R.U.R., Prague: Aventium. Translated by Peter Majer and Cathy Porter,
London: Methuen, 1999.
Capurro, Raphael, 1993, “Ein Grinsen Ohne Katze: Von der Vergleichbarkeit Zwischen
‘Künstlicher Intelligenz’ und ‘Getrennten Intelligenzen’”, Zeitschrift für philosophische
Forschung, 47: 93–102.
Cave, Stephen, 2019, “To Save Us from a Kafkaesque Future, We Must Democratise
AI”, The Guardian, 4 January 2019. [Cave 2019 available online]
Chalmers, David J., 2010, “The Singularity: A Philosophical Analysis”, Journal of
Consciousness Studies, 17(9–10): 7–65. [Chalmers 2010 available online]
Christman, John, 2003 [2018], “Autonomy in Moral and Political Philosophy”, Stanford
Encyclopedia of Philosophy (Spring 2018 edition), Edward N. Zalta (ed.), URL =
<https://plato.stanford.edu/archives/spr2018/entries/autonomy-moral/>
Coeckelbergh, Mark, 2010, “Robot Rights? Towards a Social-Relational Justification of
Moral Consideration”, Ethics and Information Technology, 12(3): 209–221.
doi:10.1007/s10676-010-9235-5
–––, 2012, Growing Moral Relations: Critique of Moral Status Ascription, London:
Palgrave. doi:10.1057/9781137025968
–––, 2016, “Care Robots and the Future of ICT-Mediated Elderly Care: A Response to
Doom Scenarios”, AI & Society, 31(4): 455–462. doi:10.1007/s00146-015-0626-3
–––, 2018, “What Do We Mean by a Relational Ethics? Growing a Relational Approach to
the Moral Standing of Plants, Robots and Other Non-Humans”, in Plant Ethics: Concepts
and Applications, Angela Kallhoff, Marcello Di Paola, and Maria Schörgenhumer (eds.),
London: Routledge, 110–121.
Crawford, Kate and Ryan Calo, 2016, “There Is a Blind Spot in AI Research”, Nature,
538(7625): 311–313. doi:10.1038/538311a
Cristianini, Nello, forthcoming, “Shortcuts to Artificial Intelligence”, in Machines We Trust,
Marcello Pelillo and Teresa Scantamburlo (eds.), Cambridge, MA: MIT Press. [Cristianini
forthcoming – preprint available online]
Danaher, John, 2015, “Why AI Doomsayers Are Like Sceptical Theists and Why It
Matters”, Minds and Machines, 25(3): 231–246. doi:10.1007/s11023-015-9365-y
–––, 2016a, “Robots, Law and the Retribution Gap”, Ethics and Information Technology,
18(4): 299–309. doi:10.1007/s10676-016-9403-3
–––, 2016b, “The Threat of Algocracy: Reality, Resistance and
Accommodation”, Philosophy & Technology, 29(3): 245–268. doi:10.1007/s13347-015-
0211-1
–––, 2019a, Automation and Utopia: Human Flourishing in a World without Work,
Cambridge, MA: Harvard University Press.
–––, 2019b, “The Philosophical Case for Robot Friendship”, Journal of Posthuman Studies,
3(1): 5–24. doi:10.5325/jpoststud.3.1.0005
–––, forthcoming, “Welcoming Robots into the Moral Circle: A Defence of Ethical
Behaviourism”, Science and Engineering Ethics, first online: 20 June 2019.
doi:10.1007/s11948-019-00119-x
Danaher, John and Neil McArthur (eds.), 2017, Robot Sex: Social and Ethical Implications,
Boston, MA: MIT Press.
DARPA, 1983, “Strategic Computing. New-Generation Computing Technology: A Strategic
Plan for Its Development and Application to Critical Problems in Defense”, ADA141982, 28
October 1983. [DARPA 1983 available online]
Dennett, Daniel C, 2017, From Bacteria to Bach and Back: The Evolution of Minds, New
York: W.W. Norton.
Devlin, Kate, 2018, Turned On: Science, Sex and Robots, London: Bloomsbury.
Diakopoulos, Nicholas, 2015, “Algorithmic Accountability: Journalistic Investigation of
Computational Power Structures”, Digital Journalism, 3(3): 398–415.
doi:10.1080/21670811.2014.976411
Dignum, Virginia, 2018, “Ethics in Artificial Intelligence: Introduction to the Special
Issue”, Ethics and Information Technology, 20(1): 1–3. doi:10.1007/s10676-018-9450-z
Domingos, Pedro, 2015, The Master Algorithm: How the Quest for the Ultimate Learning
Machine Will Remake Our World, London: Allen Lane.
Draper, Heather, Tom Sorell, Sandra Bedaf, Dag Sverre Syrdal, Carolina Gutierrez-Ruiz,
Alexandre Duclos, and Farshid Amirabdollahian, 2014, “Ethical Dimensions of Human-
Robot Interactions in the Care of Older People: Insights from 21 Focus Groups Convened in
the UK, France and the Netherlands”, in International Conference on Social Robotics 2014,
Michael Beetz, Benjamin Johnston, and Mary-Anne Williams (eds.), (Lecture Notes in
Artificial Intelligence 8755), Cham: Springer International Publishing, 135–145.
doi:10.1007/978-3-319-11973-1_14
Dressel, Julia and Hany Farid, 2018, “The Accuracy, Fairness, and Limits of Predicting
Recidivism”, Science Advances, 4(1): eaao5580. doi:10.1126/sciadv.aao5580
Drexler, K. Eric, 2019, “Reframing Superintelligence: Comprehensive AI Services as
General Intelligence”, FHI Technical Report, 2019-1, 1-210. [Drexler 2019 available online]
Dreyfus, Hubert L., 1972, What Computers Still Can’t Do: A Critique of Artificial Reason,
second edition, Cambridge, MA: MIT Press 1992.
Dreyfus, Hubert L., Stuart E. Dreyfus, and Tom Athanasiou, 1986, Mind over Machine: The
Power of Human Intuition and Expertise in the Era of the Computer, New York: Free Press.
Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith, 2006, “Calibrating Noise
to Sensitivity in Private Data Analysis”, in Theory of Cryptography (TCC 2006), Shai Halevi
and Tal Rabin (eds.), Berlin, Heidelberg: Springer, 265–284.
Eden, Amnon H., James H. Moor, Johnny H. Søraker, and Eric Steinhart (eds.),
2012, Singularity Hypotheses: A Scientific and Philosophical Assessment, (The Frontiers
Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-32560-1
Eubanks, Virginia, 2018, Automating Inequality: How High-Tech Tools Profile, Police, and
Punish the Poor, London: St. Martin’s Press.
European Commission, 2013, “How Many People Work in Agriculture in the European
Union? An Answer Based on Eurostat Data Sources”, EU Agricultural Economics Briefs, 8
(July 2013). [European Commission 2013 available online]
European Group on Ethics in Science and New Technologies, 2018, “Statement on Artificial
Intelligence, Robotics and ‘Autonomous’ Systems”, 9 March 2018, European Commission,
Directorate-General for Research and Innovation, Unit RTD.01. [European Group 2018
available online]
Ferguson, Andrew Guthrie, 2017, The Rise of Big Data Policing: Surveillance, Race, and the
Future of Law Enforcement, New York: NYU Press.
Floridi, Luciano, 2016, “Should We Be Afraid of AI? Machines Seem to Be Getting Smarter
and Smarter and Much Better at Human Jobs, yet True AI Is Utterly Implausible.
Why?”, Aeon, 9 May 2016. URL = <Floridi 2016 available online>
Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia
Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard
Schafer, Peggy Valcke, and Effy Vayena, 2018, “AI4People—An Ethical Framework for a
Good AI Society: Opportunities, Risks, Principles, and Recommendations”, Minds and
Machines, 28(4): 689–707. doi:10.1007/s11023-018-9482-5
Floridi, Luciano and Jeff W. Sanders, 2004, “On the Morality of Artificial Agents”, Minds
and Machines, 14(3): 349–379. doi:10.1023/B:MIND.0000035461.63578.9d
Floridi, Luciano and Mariarosaria Taddeo, 2016, “What Is Data Ethics?”, Philosophical
Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences,
374(2083): 20160360. doi:10.1098/rsta.2016.0360
Foot, Philippa, 1967, “The Problem of Abortion and the Doctrine of the Double
Effect”, Oxford Review, 5: 5–15.
Fosch-Villaronga, Eduard and Jordi Albo-Canals, 2019, “‘I’ll Take Care of You,’ Said the
Robot”, Paladyn, Journal of Behavioral Robotics, 10(1): 77–93. doi:10.1515/pjbr-2019-0006
Frank, Lily and Sven Nyholm, 2017, “Robot Sex and Consent: Is Consent to Sex between a
Robot and a Human Conceivable, Possible, and Desirable?”, Artificial Intelligence and Law,
25(3): 305–323. doi:10.1007/s10506-017-9212-y
Frankfurt, Harry G., 1971, “Freedom of the Will and the Concept of a Person”, The Journal
of Philosophy, 68(1): 5–20.
Frey, Carl Benedikt, 2019, The Technology Trap: Capital, Labour, and Power in the Age of
Automation, Princeton, NJ: Princeton University Press.
Frey, Carl Benedikt and Michael A. Osborne, 2013, “The Future of Employment: How
Susceptible Are Jobs to Computerisation?”, Oxford Martin School Working Papers, 17
September 2013. [Frey and Osborne 2013 available online]
Ganascia, Jean-Gabriel, 2017, Le Mythe De La Singularité, Paris: Éditions du Seuil.
EU Parliament, 2016, “Draft Report with Recommendations to the Commission on Civil
Law Rules on Robotics (2015/2103(INL))”, Committee on Legal Affairs, 10 November 2016.
https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html
EU Regulation, 2016/679, “General Data Protection Regulation: Regulation (EU) 2016/679
of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural
Persons with Regard to the Processing of Personal Data and on the Free Movement of Such
Data, and Repealing Directive 95/46/Ec”, Official Journal of the European Union, 119 (4
May 2016), 1–88. [Regulation (EU) 2016/679 available online]
Geraci, Robert M., 2008, “Apocalyptic AI: Religion and the Promise of Artificial
Intelligence”, Journal of the American Academy of Religion, 76(1): 138–166.
doi:10.1093/jaarel/lfm101
–––, 2010, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual
Reality, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195393026.001.0001
Gerdes, Anne, 2016, “The Issue of Moral Consideration in Robot Ethics”, ACM SIGCAS
Computers and Society, 45(3): 274–279. doi:10.1145/2874239.2874278
German Federal Ministry of Transport and Digital Infrastructure, 2017, “Report of the Ethics
Commission: Automated and Connected Driving”, June 2017, 1–36. [GFMTDI 2017
available online]
Gertz, Nolen, 2018, Nihilism and Technology, London: Rowman & Littlefield.
Gewirth, Alan, 1978, “The Golden Rule Rationalized”, Midwest Studies in Philosophy, 3(1):
133–147. doi:10.1111/j.1475-4975.1978.tb00353.x
Gibert, Martin, 2019, “Éthique Artificielle (Version Grand Public)”, in L’Encyclopédie
Philosophique, Maxime Kristanek (ed.), accessed: 16 April 2020, URL = <Gibert 2019
available online>
Giubilini, Alberto and Julian Savulescu, 2018, “The Artificial Moral Advisor. The ‘Ideal
Observer’ Meets Artificial Intelligence”, Philosophy & Technology, 31(2): 169–188.
doi:10.1007/s13347-017-0285-z
Good, Irving John, 1965, “Speculations Concerning the First Ultraintelligent Machine”,
in Advances in Computers 6, Franz L. Alt and Morris Rubinoff (eds.), New York & London:
Academic Press, 31–88. doi:10.1016/S0065-2458(08)60418-0
Goodfellow, Ian, Yoshua Bengio, and Aaron Courville, 2016, Deep Learning, Cambridge,
MA: MIT Press.
Goodman, Bryce and Seth Flaxman, 2017, “European Union Regulations on Algorithmic
Decision-Making and a ‘Right to Explanation’”, AI Magazine, 38(3): 50–57.
doi:10.1609/aimag.v38i3.2741
Goos, Maarten, 2018, “The Impact of Technological Progress on Labour Markets: Policy
Challenges”, Oxford Review of Economic Policy, 34(3): 362–375. doi:10.1093/oxrep/gry002
Goos, Maarten, Alan Manning, and Anna Salomons, 2009, “Job Polarization in
Europe”, American Economic Review, 99(2): 58–63. doi:10.1257/aer.99.2.58
Graham, Sandra and Brian S. Lowery, 2004, “Priming Unconscious Racial Stereotypes about
Adolescent Offenders”, Law and Human Behavior, 28(5): 483–504.
doi:10.1023/B:LAHU.0000046430.65485.1f
Gunkel, David J., 2018a, “The Other Question: Can and Should Robots Have
Rights?”, Ethics and Information Technology, 20(2): 87–99. doi:10.1007/s10676-017-9442-4
–––, 2018b, Robot Rights, Boston, MA: MIT Press.
Gunkel, David J. and Joanna J. Bryson (eds.), 2014, Machine Morality: The Machine as
Moral Agent and Patient special issue of Philosophy & Technology, 27(1): 1–142.
Häggström, Olle, 2016, Here Be Dragons: Science, Technology and the Future of Humanity,
Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198723547.001.0001
Hakli, Raul and Pekka Mäkelä, 2019, “Moral Responsibility of Robots and Hybrid
Agents”, The Monist, 102(2): 259–275. doi:10.1093/monist/onz009
Hanson, Robin, 2016, The Age of Em: Work, Love and Life When Robots Rule the Earth,
Oxford: Oxford University Press.
Hansson, Sven Ove, 2013, The Ethics of Risk: Ethical Analysis in an Uncertain World, New
York: Palgrave Macmillan.
–––, 2018, “How to Perform an Ethical Risk Analysis (eRA)”, Risk Analysis, 38(9): 1820–
1829. doi:10.1111/risa.12978
Harari, Yuval Noah, 2016, Homo Deus: A Brief History of Tomorrow, New York: Harper.
Haskel, Jonathan and Stian Westlake, 2017, Capitalism without Capital: The Rise of the
Intangible Economy, Princeton, NJ: Princeton University Press.
Houkes, Wybo and Pieter E. Vermaas, 2010, Technical Functions: On the Use and Design of
Artefacts, (Philosophy of Engineering and Technology 1), Dordrecht: Springer Netherlands.
doi:10.1007/978-90-481-3900-2
IEEE, 2019, Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with
Autonomous and Intelligent Systems (First Version), <IEEE 2019 available online>.
Jasanoff, Sheila, 2016, The Ethics of Invention: Technology and the Human Future, New
York: Norton.
Jecker, Nancy S., forthcoming, Ending Midlife Bias: New Values for Old Age, New York:
Oxford University Press.
Jobin, Anna, Marcello Ienca, and Effy Vayena, 2019, “The Global Landscape of AI Ethics
Guidelines”, Nature Machine Intelligence, 1(9): 389–399. doi:10.1038/s42256-019-0088-2
Johnson, Deborah G. and Mario Verdicchio, 2017, “Reframing AI Discourse”, Minds and
Machines, 27(4): 575–590. doi:10.1007/s11023-017-9417-6
Kahneman, Daniel, 2011, Thinking, Fast and Slow, London: Macmillan.
Kamm, Frances Myrna, 2016, The Trolley Problem Mysteries, Eric Rakowski (ed.), Oxford:
Oxford University Press. doi:10.1093/acprof:oso/9780190247157.001.0001
Kant, Immanuel, 1781/1787, Kritik der reinen Vernunft. Translated as Critique of Pure
Reason, Norman Kemp Smith (trans.), London: Palgrave Macmillan, 1929.
Keeling, Geoff, 2020, “Why Trolley Problems Matter for the Ethics of Automated
Vehicles”, Science and Engineering Ethics, 26(1): 293–307. doi:10.1007/s11948-019-00096-
1
Keynes, John Maynard, 1930, “Economic Possibilities for Our Grandchildren”. Reprinted in
his Essays in Persuasion, New York: Harcourt Brace, 1932, 358–373.
Kissinger, Henry A., 2018, “How the Enlightenment Ends: Philosophically, Intellectually—
in Every Way—Human Society Is Unprepared for the Rise of Artificial Intelligence”, The
Atlantic, June 2018. [Kissinger 2018 available online]
Kurzweil, Ray, 1999, The Age of Spiritual Machines: When Computers Exceed Human
Intelligence, London: Penguin.
–––, 2005, The Singularity Is Near: When Humans Transcend Biology, London: Viking.
–––, 2012, How to Create a Mind: The Secret of Human Thought Revealed, New York:
Viking.
Lee, Minha, Sander Ackermans, Nena van As, Hanwen Chang, Enzo Lucas, and Wijnand
IJsselsteijn, 2019, “Caring for Vincent: A Chatbot for Self-Compassion”, in Proceedings of
the 2019 CHI Conference on Human Factors in Computing Systems—CHI ’19, Glasgow,
Scotland: ACM Press, 1–13. doi:10.1145/3290605.3300932
Levy, David, 2007, Love and Sex with Robots: The Evolution of Human-Robot
Relationships, New York: Harper & Co.
Lighthill, James, 1973, “Artificial Intelligence: A General Survey”, Artificial Intelligence: A
Paper Symposium, London: Science Research Council. [Lighthill 1973 available online]
Lin, Patrick, 2016, “Why Ethics Matters for Autonomous Cars”, in Autonomous Driving,
Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner (eds.), Berlin,
Heidelberg: Springer Berlin Heidelberg, 69–85. doi:10.1007/978-3-662-48847-8_4
Lin, Patrick, Keith Abney, and Ryan Jenkins (eds.), 2017, Robot Ethics 2.0: From
Autonomous Cars to Artificial Intelligence, New York: Oxford University Press.
doi:10.1093/oso/9780190652951.001.0001
Lin, Patrick, George Bekey, and Keith Abney, 2008, “Autonomous Military Robotics: Risk,
Ethics, and Design”, ONR report, California Polytechnic State University, San Luis Obispo,
20 December 2008), 112 pp. [Lin, Bekey, and Abney 2008 available online]
Lomas, Meghann, Robert Chevalier, Ernest Vincent Cross, Robert Christopher Garrett, John
Hoare, and Michael Kopack, 2012, “Explaining Robot Actions”, in Proceedings of the
Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction—HRI
’12, Boston, MA: ACM Press, 187–188. doi:10.1145/2157689.2157748
Macnish, Kevin, 2017, The Ethics of Surveillance: An Introduction, London: Routledge.
Mathur, Arunesh, Gunes Acar, Michael J. Friedman, Elena Lucherini, Jonathan Mayer,
Marshini Chetty, and Arvind Narayanan, 2019, “Dark Patterns at Scale: Findings from a
Crawl of 11K Shopping Websites”, Proceedings of the ACM on Human-Computer
Interaction, 3(CSCW): art. 81. doi:10.1145/3359183
Minsky, Marvin, 1985, The Society of Mind, New York: Simon & Schuster.
Misselhorn, Catrin, 2020, “Artificial Systems with Moral Capacities? A Research Design
and Its Implementation in a Geriatric Care System”, Artificial Intelligence, 278: art. 103179.
doi:10.1016/j.artint.2019.103179
Mittelstadt, Brent Daniel and Luciano Floridi, 2016, “The Ethics of Big Data: Current and
Foreseeable Issues in Biomedical Contexts”, Science and Engineering Ethics, 22(2): 303–
341. doi:10.1007/s11948-015-9652-2
Moor, James H., 2006, “The Nature, Importance, and Difficulty of Machine Ethics”, IEEE
Intelligent Systems, 21(4): 18–21. doi:10.1109/MIS.2006.80
Moravec, Hans, 1990, Mind Children, Cambridge, MA: Harvard University Press.
–––, 1998, Robot: Mere Machine to Transcendent Mind, New York: Oxford University
Press.
Morozov, Evgeny, 2013, To Save Everything, Click Here: The Folly of Technological
Solutionism, New York: Public Affairs.
Müller, Vincent C., 2012, “Autonomous Cognitive Systems in Real-World Environments:
Less Control, More Flexibility and Better Interaction”, Cognitive Computation, 4(3): 212–
215. doi:10.1007/s12559-012-9129-4
–––, 2016a, “Autonomous Killer Robots Are Probably Good News”, In Drones and
Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of
Remotely Controlled Weapons, Ezio Di Nucci and Filippo Santoni de Sio (eds.), London:
Ashgate, 67–81.
––– (ed.), 2016b, Risks of Artificial Intelligence, London: Chapman & Hall - CRC Press.
doi:10.1201/b19187
–––, 2018, “In 30 Schritten zum Mond? Zukünftiger Fortschritt in der
KI”, Medienkorrespondenz, 20: 5–15. [Müller 2018 available online]
–––, 2020, “Measuring Progress in Robotics: Benchmarking and the ‘Measure-Target
Confusion’”, in Metrics of Sensory Motor Coordination and Integration in Robots and
Animals, Fabio Bonsignorio, Elena Messina, Angel P. del Pobil, and John Hallam (eds.),
(Cognitive Systems Monographs 36), Cham: Springer International Publishing, 169–179.
doi:10.1007/978-3-030-14126-4_9
–––, forthcoming-a, Can Machines Think? Fundamental Problems of Artificial Intelligence,
New York: Oxford University Press.
––– (ed.), forthcoming-b, Oxford Handbook of the Philosophy of Artificial Intelligence, New
York: Oxford University Press.
Müller, Vincent C. and Nick Bostrom, 2016, “Future Progress in Artificial Intelligence: A
Survey of Expert Opinion”, in Fundamental Issues of Artificial Intelligence, Vincent C.
Müller (ed.), Cham: Springer International Publishing, 555–572. doi:10.1007/978-3-319-
26485-1_33
Newport, Cal, 2019, Digital Minimalism: On Living Better with Less Technology, London:
Penguin.
Nørskov, Marco (ed.), 2017, Social Robots, London: Routledge.
Nyholm, Sven, 2018a, “Attributing Agency to Automated Systems: Reflections on Human–
Robot Collaborations and Responsibility-Loci”, Science and Engineering Ethics, 24(4):
1201–1219. doi:10.1007/s11948-017-9943-x
–––, 2018b, “The Ethics of Crashes with Self-Driving Cars: A Roadmap, II”, Philosophy
Compass, 13(7): e12506. doi:10.1111/phc3.12506
Nyholm, Sven, and Lily Frank, 2017, “From Sex Robots to Love Robots: Is Mutual Love
with a Robot Possible?”, in Danaher and McArthur 2017: 219–243.
O’Connell, Mark, 2017, To Be a Machine: Adventures among Cyborgs, Utopians, Hackers,
and the Futurists Solving the Modest Problem of Death, London: Granta.
O’Neil, Cathy, 2016, Weapons of Math Destruction: How Big Data Increases Inequality and
Threatens Democracy, New York: Crown.
Omohundro, Steve, 2014, “Autonomous Technology and the Greater Human Good”, Journal
of Experimental & Theoretical Artificial Intelligence, 26(3): 303–315.
doi:10.1080/0952813X.2014.895111
Ord, Toby, 2020, The Precipice: Existential Risk and the Future of Humanity, London:
Bloomsbury.
Powers, Thomas M. and Jean-Gabriel Ganascia, forthcoming, “The Ethics of the Ethics of
AI”, in Oxford Handbook of Ethics of Artificial Intelligence, Markus D. Dubber, Frank
Pasquale, and Sunit Das (eds.), New York: Oxford University Press.
Rawls, John, 1971, A Theory of Justice, Cambridge, MA: Belknap Press.
Rees, Martin, 2018, On the Future: Prospects for Humanity, Princeton: Princeton University
Press.
Richardson, Kathleen, 2016, “Sex Robot Matters: Slavery, the Prostituted, and the Rights of
Machines”, IEEE Technology and Society Magazine, 35(2): 46–53.
doi:10.1109/MTS.2016.2554421
Roessler, Beate, 2017, “Privacy as a Human Right”, Proceedings of the Aristotelian Society,
117(2): 187–206. doi:10.1093/arisoc/aox008
Royakkers, Lambèr and Rinie van Est, 2016, Just Ordinary Robots: Automation from Love
to War, Boca Raton, FL: CRC Press, Taylor & Francis. doi:10.1201/b18899
Russell, Stuart, 2019, Human Compatible: Artificial Intelligence and the Problem of Control,
New York: Viking.
Russell, Stuart, Daniel Dewey, and Max Tegmark, 2015, “Research Priorities for Robust and
Beneficial Artificial Intelligence”, AI Magazine, 36(4): 105–114.
doi:10.1609/aimag.v36i4.2577
SAE International, 2018, “Taxonomy and Definitions for Terms Related to Driving
Automation Systems for on-Road Motor Vehicles”, J3016_201806, 15 June 2018. [SAE
International 2018 available online]
Sandberg, Anders, 2013, “Feasibility of Whole Brain Emulation”, in Philosophy and Theory
of Artificial Intelligence, Vincent C. Müller (ed.), (Studies in Applied Philosophy,
Epistemology and Rational Ethics, 5), Berlin, Heidelberg: Springer Berlin Heidelberg, 251–
264. doi:10.1007/978-3-642-31674-6_19
–––, 2019, “There Is Plenty of Time at the Bottom: The Economics, Risk and Ethics of Time
Compression”, Foresight, 21(1): 84–99. doi:10.1108/FS-04-2018-0044
Santoni de Sio, Filippo and Jeroen van den Hoven, 2018, “Meaningful Human Control over
Autonomous Systems: A Philosophical Account”, Frontiers in Robotics and AI, 5(February):
15. doi:10.3389/frobt.2018.00015
Schneier, Bruce, 2015, Data and Goliath: The Hidden Battles to Collect Your Data and
Control Your World, New York: W. W. Norton.
Searle, John R., 1980, “Minds, Brains, and Programs”, Behavioral and Brain Sciences, 3(3):
417–424. doi:10.1017/S0140525X00005756
Selbst, Andrew D., Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet
Vertesi, 2019, “Fairness and Abstraction in Sociotechnical Systems”, in Proceedings of the
Conference on Fairness, Accountability, and Transparency—FAT* ’19, Atlanta, GA: ACM
Press, 59–68. doi:10.1145/3287560.3287598
Sennett, Richard, 2018, Building and Dwelling: Ethics for the City, London: Allen Lane.
Shanahan, Murray, 2015, The Technological Singularity, Cambridge, MA: MIT Press.
Sharkey, Amanda, 2019, “Autonomous Weapons Systems, Killer Robots and Human
Dignity”, Ethics and Information Technology, 21(2): 75–87. doi:10.1007/s10676-018-9494-0
Sharkey, Amanda and Noel Sharkey, 2011, “The Rights and Wrongs of Robot Care”,
in Robot Ethics: The Ethical and Social Implications of Robotics, Patrick Lin, Keith Abney
and George Bekey (eds.), Cambridge, MA: MIT Press, 267–282.
Shoham, Yoav, Raymond Perrault, Erik Brynjolfsson, Jack Clark, James Manyika, Juan
Carlos Niebles, … Zoe Bauer, 2018, “The AI Index 2018 Annual Report”, 17 December
2018, Stanford, CA: AI Index Steering Committee, Human-Centered AI Initiative, Stanford
University. [Shoham et al. 2018 available online]
Silver, David, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai,
Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy
Lillicrap, Karen Simonyan, and Demis Hassabis, 2018, “A General Reinforcement Learning
Algorithm That Masters Chess, Shogi, and Go through Self-Play”, Science, 362(6419):
1140–1144. doi:10.1126/science.aar6404
Simon, Herbert A. and Allen Newell, 1958, “Heuristic Problem Solving: The Next Advance
in Operations Research”, Operations Research, 6(1): 1–10. doi:10.1287/opre.6.1.1
Simpson, Thomas W. and Vincent C. Müller, 2016, “Just War and Robots’ Killings”, The
Philosophical Quarterly, 66(263): 302–322. doi:10.1093/pq/pqv075
Smolan, Sandy (director), 2016, “The Human Face of Big Data”, PBS Documentary, 24
February 2016, 56 mins.
Sparrow, Robert, 2007, “Killer Robots”, Journal of Applied Philosophy, 24(1): 62–77.
doi:10.1111/j.1468-5930.2007.00346.x
–––, 2016, “Robots in Aged Care: A Dystopian Future?”, AI & Society, 31(4): 445–454.
doi:10.1007/s00146-015-0625-4
Stahl, Bernd Carsten, Job Timmermans, and Brent Daniel Mittelstadt, 2016, “The Ethics of
Computing: A Survey of the Computing-Oriented Literature”, ACM Computing Surveys,
48(4): art. 55. doi:10.1145/2871196
Stahl, Bernd Carsten and David Wright, 2018, “Ethics and Privacy in AI and Big Data:
Implementing Responsible Research and Innovation”, IEEE Security Privacy, 16(3): 26–33.
Stone, Christopher D., 1972, “Should Trees Have Standing - toward Legal Rights for Natural
Objects”, Southern California Law Review, 45: 450–501.
Stone, Peter, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia
Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown,
David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro
Teller, 2016, “Artificial Intelligence and Life in 2030”, One Hundred Year Study on
Artificial Intelligence: Report of the 2015–2016 Study Panel, Stanford University, Stanford,
CA, September 2016. [Stone et al. 2016 available online]
Strawson, Galen, 1998, “Free Will”, in Routledge Encyclopedia of Philosophy, Taylor &
Francis. doi:10.4324/9780415249126-V014-1
Sullins, John P., 2012, “Robots, Love, and Sex: The Ethics of Building a Love
Machine”, IEEE Transactions on Affective Computing, 3(4): 398–409. doi:10.1109/T-
AFFC.2012.31
Susser, Daniel, Beate Roessler, and Helen Nissenbaum, 2019, “Technology, Autonomy, and
Manipulation”, Internet Policy Review, 8(2): 30 June 2019. [Susser, Roessler, and
Nissenbaum 2019 available online]
Taddeo, Mariarosaria and Luciano Floridi, 2018, “How AI Can Be a Force for
Good”, Science, 361(6404): 751–752. doi:10.1126/science.aat5991
Taylor, Linnet and Nadezhda Purtova, 2019, “What Is Responsible and Sustainable Data
Science?”, Big Data & Society, 6(2): art. 205395171985811.
doi:10.1177/2053951719858114
Taylor, Steve, et al., 2018, “Responsible AI – Key Themes, Concerns & Recommendations
for European Research and Innovation: Summary of Consultation with Multidisciplinary
Experts”, June. [Taylor, et al. 2018 available online]
Tegmark, Max, 2017, Life 3.0: Being Human in the Age of Artificial Intelligence, New York:
Knopf.
Thaler, Richard H. and Cass R. Sunstein, 2008, Nudge: Improving Decisions about Health,
Wealth and Happiness, New York: Penguin.
Thompson, Nicholas and Ian Bremmer, 2018, “The AI Cold War That Threatens Us
All”, Wired, 23 November 2018. [Thompson and Bremmer 2018 available online]
Thomson, Judith Jarvis, 1976, “Killing, Letting Die, and the Trolley Problem”, Monist,
59(2): 204–217. doi:10.5840/monist197659224
Torrance, Steve, 2011, “Machine Ethics and the Idea of a More-Than-Human Moral World”,
in Anderson and Anderson 2011: 115–137. doi:10.1017/CBO9780511978036.011
Trump, Donald J, 2019, “Executive Order on Maintaining American Leadership in Artificial
Intelligence”, 11 February 2019. [Trump 2019 available online]
Turner, Jacob, 2019, Robot Rules: Regulating Artificial Intelligence, Berlin: Springer.
doi:10.1007/978-3-319-96235-1
Tzafestas, Spyros G., 2016, Roboethics: A Navigating Overview, (Intelligent Systems,
Control and Automation: Science and Engineering 79), Cham: Springer International
Publishing. doi:10.1007/978-3-319-21714-7
Vallor, Shannon, 2017, Technology and the Virtues: A Philosophical Guide to a Future
Worth Wanting, Oxford: Oxford University Press.
doi:10.1093/acprof:oso/9780190498511.001.0001
Van Lent, Michael, William Fisher, and Michael Mancuso, 2004, “An Explainable Artificial
Intelligence System for Small-Unit Tactical Behavior”, in Proceedings of the 16th
Conference on Innovative Applications of Artifical Intelligence, (IAAI’04), San Jose, CA:
AAAI Press, 900–907.
van Wynsberghe, Aimee, 2016, Healthcare Robots: Ethics, Design and Implementation,
London: Routledge. doi:10.4324/9781315586397
van Wynsberghe, Aimee and Scott Robbins, 2019, “Critiquing the Reasons for Making
Artificial Moral Agents”, Science and Engineering Ethics, 25(3): 719–735.
doi:10.1007/s11948-018-0030-8
Vanderelst, Dieter and Alan Winfield, 2018, “The Dark Side of Ethical Robots”,
in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans,
LA: ACM, 317–322. doi:10.1145/3278721.3278726
Veale, Michael and Reuben Binns, 2017, “Fairer Machine Learning in the Real World:
Mitigating Discrimination without Collecting Sensitive Data”, Big Data & Society, 4(2): art.
205395171774353. doi:10.1177/2053951717743530
Véliz, Carissa, 2019, “Three Things Digital Ethics Can Learn from Medical Ethics”, Nature
Electronics, 2(8): 316–318. doi:10.1038/s41928-019-0294-2
Verbeek, Peter-Paul, 2011, Moralizing Technology: Understanding and Designing the
Morality of Things, Chicago: University of Chicago Press.
Wachter, Sandra and Brent Daniel Mittelstadt, 2019, “A Right to Reasonable Inferences: Re-
Thinking Data Protection Law in the Age of Big Data and AI”, Columbia Business Law
Review, 2019(2): 494–620.
Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi, 2017, “Why a Right to Explanation
of Automated Decision-Making Does Not Exist in the General Data Protection
Regulation”, International Data Privacy Law, 7(2): 76–99. doi:10.1093/idpl/ipx005
Wachter, Sandra, Brent Mittelstadt, and Chris Russell, 2018, “Counterfactual Explanations
Without Opening the Black Box: Automated Decisions and the GDPR”, Harvard Journal of
Law & Technology, 31(2): 842–887. doi:10.2139/ssrn.3063289
Wallach, Wendell and Peter M. Asaro (eds.), 2017, Machine Ethics and Robot Ethics,
London: Routledge.
Walsh, Toby, 2018, Machines That Think: The Future of Artificial Intelligence, Amherst,
MA: Prometheus Books.
Westlake, Stian (ed.), 2014, Our Work Here Is Done: Visions of a Robot Economy, London:
Nesta. [Westlake 2014 available online]
Whittaker, Meredith, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas,
Varoon Mathur, … Jason Schultz, 2018, “AI Now Report 2018”, New York: AI Now
Institute, New York University. [Whittaker et al. 2018 available online]
Whittlestone, Jess, Rune Nyrup, Anna Alexandrova, Kanta Dihal, and Stephen Cave, 2019,
“Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A
Roadmap for Research”, Cambridge: Nuffield Foundation, University of Cambridge.
[Whittlestone 2019 available online]
Winfield, Alan, Katina Michael, Jeremy Pitt, and Vanessa Evers (eds.), 2019, Machine
Ethics: The Design and Governance of Ethical AI and Autonomous Systems, special issue
of Proceedings of the IEEE, 107(3): 501–632.
Woollard, Fiona and Frances Howard-Snyder, 2016, “Doing vs. Allowing Harm”, Stanford
Encyclopedia of Philosophy (Winter 2016 edition), Edward N. Zalta (ed.), URL =
<https://plato.stanford.edu/archives/win2016/entries/doing-allowing/>
Woolley, Samuel C. and Philip N. Howard (eds.), 2017, Computational Propaganda:
Political Parties, Politicians, and Political Manipulation on Social Media, Oxford: Oxford
University Press. doi:10.1093/oso/9780190931407.001.0001
Yampolskiy, Roman V. (ed.), 2018, Artificial Intelligence Safety and Security, Boca Raton,
FL: Chapman and Hall/CRC. doi:10.1201/9781351251389
Yeung, Karen and Martin Lodge (eds.), 2019, Algorithmic Regulation, Oxford: Oxford
University Press. doi:10.1093/oso/9780198838494.001.0001
Zayed, Yago and Philip Loft, 2019, “Agriculture: Historical Statistics”, House of Commons
Briefing Paper, 3339(25 June 2019): 1-19. [Zayed and Loft 2019 available online]
Zerilli, John, Alistair Knott, James Maclaurin, and Colin Gavaghan, 2019, “Transparency in
Algorithmic and Human Decision-Making: Is There a Double Standard?”, Philosophy &
Technology, 32(4): 661–683. doi:10.1007/s13347-018-0330-6
Zuboff, Shoshana, 2019, The Age of Surveillance Capitalism: The Fight for a Human Future
at the New Frontier of Power, New York: Public Affairs.
Academic Tools
How to cite this entry.
Preview the PDF version of this entry at the Friends of the SEP Society.
Look up this entry topic at the Internet Philosophy Ontology Project (InPhO).
Enhanced bibliography for this entry at PhilPapers, with links to its database.
Research Organizations
Turing Institute (UK)
AI Now
Leverhulme Centre for the Future of Intelligence
Future of Humanity Institute
Future of Life Institute
Stanford Center for Internet and Society
Berkman Klein Center
Digital Ethics Lab
Open Roboethics Institute
Conferences
Philosophy & Theory of AI
Ethics and AI 2017
FAT 2018
AIES
We Robot 2018
Robophilosophy
Policy Documents
EUrobotics TG ‘robot ethics’ collection of policy documents
Acknowledgments
Early drafts of this article were discussed with colleagues at the IDEA Centre of the
University of Leeds, some friends, and my PhD students Michael Cannon, Zach Gudmunsen,
Gabriela Arrieagada-Bruneau and Charlotte Stix. Later drafts were made publicly available
on the Internet and publicised via Twitter and e-mail to all (then) cited authors that I could
locate. These later drafts were presented to audiences at the INBOTS Project Meeting
(Reykjavik 2019), the Computer Science Department Colloquium (Leeds 2019), the
European Robotics Forum (Bucharest 2019), the AI Lunch and the Philosophy & Ethics
group (Eindhoven 2019)—many thanks for their comments.
I am grateful for detailed written comments by John Danaher, Martin Gibert, Elizabeth
O’Neill, Sven Nyholm, Etienne B. Roesch, Emma Ruttkamp-Bloem, Tom Powers, Steve
Taylor, and Alan Winfield. I am grateful for further useful comments by Colin Allen, Susan
Anderson, Christof Wolf-Brenner, Rafael Capurro, Mark Coeckelbergh, Yazmin Morlet
Corti, Erez Firt, Vasilis Galanos, Anne Gerdes, Olle Häggström, Geoff Keeling, Karabo
Maiyane, Brent Mittelstadt, Britt Östlund, Steve Petersen, Brian Pickering, Zoë Porter,
Amanda Sharkey, Melissa Terras, Stuart Russell, Jan F Veneman, Jeffrey White, and Xinyi
Wu.
Parts of the work on this article have been supported by the European Commission under the
INBOTS project (H2020 grant no. 780073).