

Chapter 2
Digital Ethics: Its Nature and Scope

Luciano Floridi, Corinne Cath, and Mariarosaria Taddeo

L. Floridi (*) · C. Cath · M. Taddeo
Oxford Internet Institute, Digital Ethics Lab, University of Oxford, Oxford, UK
The Alan Turing Institute, London, UK
e-mail: [email protected]

© Springer Nature Switzerland AG 2019
C. Öhman, D. Watson (eds.), The 2018 Yearbook of the Digital Ethics Lab, Digital Ethics Lab Yearbook, https://doi.org/10.1007/978-3-030-17152-0_2

2.1 Digital Ethics As a Macroethics

The digital revolution provides huge opportunities to improve private and public life, and our environments, from health care to smart cities and global warming. Unfortunately, such opportunities come with significant ethical challenges. In particular, the extensive use of increasingly more data—often personal, if not sensitive (Big Data)—the growing reliance on algorithms to analyse them in order to shape choices and to make decisions (including machine learning, AI, and robotics), and the gradual reduction of human involvement or oversight over many automatic processes pose pressing questions about fairness, responsibility, and respect of human rights.
These ethical challenges can be addressed successfully by fostering the development and application of digital innovations, while ensuring respect for human rights and the values shaping open, pluralistic, and tolerant information societies. Striking such a balance is neither obvious nor simple. On the one hand, overlooking ethical issues may have a negative impact and prompt social rejection. This was the case, for example, with the NHS care.data programme, a failed project in England to extract data from GP surgeries into a central database. On the other hand, overemphasizing the protection of individual or collective rights in the wrong contexts may lead to regulations that are too rigid, which in turn may harm the chances to harness the social value of digital innovation. The LIBE amendments, initially proposed to the European Data Protection Regulation, offer a good example, as they would have made data-based medical research more difficult.¹ Social preferability must be the guiding principle to strike a robust ethical balance for any digital project with an impact on human life.
The demanding task of digital ethics is navigating between social rejection and legal prohibition in order to reach solutions that maximise the ethical value of digital innovation to benefit our societies, all of us, and our environments. To achieve this, digital ethics builds on the foundation provided by Computer and Information Ethics, which has focused, for the past 30 years, on the challenges posed by information and communication technologies (Floridi 2013; Bynum 2015). This valuable legacy grafts digital ethics onto the great tradition of ethics more generally. At the same time, digital ethics refines the approach endorsed in Computer and Information Ethics, by changing the Levels of Abstraction (LoA) of ethical enquiries from an information-centric one (LoAI) to a digital-centric one (LoAD).
The method of abstraction is a common methodology in Computer Science (Hoare 1972) and in the Philosophy and Ethics of Information (Floridi 2008, 2011). It makes explicit the perspective from which a system is analysed, by focusing on specific aspects of it, called observables. The choice of observables depends on the purpose of the analysis and determines the choice of LoA. Any given system can be analysed at different LoAs. For example, an engineer interested in maximising the aerodynamics of a car may focus upon the shape of its parts, their weight, and the materials. A customer interested in the aesthetics of the same car may focus on its colour and the overall look, while disregarding the engineer's observables.
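The idea can be made concrete with a small sketch. The following Python fragment is our own illustration, not part of the formal method: the dictionary, the observable names, and the analyse helper are all hypothetical, chosen to mirror the car example above.

```python
# A minimal sketch of the method of abstraction (illustrative only).
# A system exposes many observables; a Level of Abstraction (LoA) is fixed
# by the subset of observables one chooses to track for a given purpose.

car = {
    "shape": "hatchback",
    "weight_kg": 1200,
    "materials": ["steel", "aluminium", "glass"],
    "colour": "red",
    "overall_look": "sporty",
}

ENGINEER_LOA = {"shape", "weight_kg", "materials"}  # aerodynamics-oriented view
CUSTOMER_LOA = {"colour", "overall_look"}           # aesthetics-oriented view

def analyse(system: dict, loa: set) -> dict:
    """Project a system onto a chosen LoA, hiding every other observable."""
    return {k: v for k, v in system.items() if k in loa}

print(analyse(car, ENGINEER_LOA))  # engineer's view of the car
print(analyse(car, CUSTOMER_LOA))  # customer's view of the same car
```

The point of the sketch is simply that the same system yields different, equally legitimate analyses depending on the purpose that fixes the LoA; ethical analyses, as discussed next, are no different.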
Ethical analyses are developed at a variety of LoAs. The shift from Information (LoAI) to Digital (LoAD) is the latest in a series of changes that characterise the evolution of Computer and Information Ethics. Research in this field first endorsed a human-centric LoA (Parker 1968), which addressed the ethical problems posed by the dissemination of computers in terms of the professional responsibilities of both their designers and users. The LoA then shifted to a computer-centric one (LoAC) in the mid 1980s (Moor 1985), and changed again at the beginning of the second millennium to LoAI (Floridi 2006).
These changes responded to rapid, widespread, and profound technological transformations, and they had important conceptual implications. For example, LoAC highlighted the nature of computers as universal and malleable tools, making it easier to understand the impact that computers could have on shaping social dynamics and on the design of the environment surrounding us (Moor 1985). Later on, LoAI shifted the focus from the technological means (the hardware: computers, mobile phones, etc.) to the content (information) that can be created, recorded, processed, and shared through such means. In doing so, LoAI emphasised the different moral dimensions of information—i.e., information as the source, the result, or the target of moral actions—and led to the design of a macro-ethical approach, able to address the whole cycle of information creation, sharing, storage, protection, usage, and possible destruction (Floridi 2006, 2013).
¹ European Parliament, Committee on Civil Liberties, Justice and Home Affairs. (2012). On the proposal for a regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) (COM(2012)0011 – C7-0025/2012 – 2012/0011(COD)). Amendments 27, 327, 328, and 334–336 proposed in Albrecht's Draft Report. Retrieved from http://www.europarl.europa.eu/meetdocs/2009_2014/documents/libe/pr/922/922387/922387en.pdf
We have come to understand that it is not a specific technology (now including online platforms, cloud computing, the Internet of Things, AI, and so forth), but the whole ecosystem created and manipulated by any digital technology, that must be the new focus of our ethical strategies. The shift from information ethics to digital ethics highlights the need to concentrate not only on what is being handled, as the true invariant of our concerns, but also on the general environment (the infosphere), the technologies and sciences involved, the corresponding practices and structures (e.g. in business and governance), and the overall impact of the digital world broadly construed. It is not the hardware that causes ethical problems; it is what the hardware does with the software, the data, the agents, their behaviours, and the relevant environments that prompts new ethical problems. Thus, labels such as "robo-ethics" or "machine ethics" miss the point. We need a digital ethics that provides a holistic approach to the whole universe of moral issues caused by digital innovation.
LoAD brings into focus the different moral dimensions of the whole spectrum of digital realities. In doing so, it highlights that ethical problems—such as anonymity, privacy, responsibility, transparency, and trust—concern a variety of digital phenomena, and hence are better understood at that level. So, digital ethics is best understood as the branch of ethics that studies and evaluates moral problems related to information and data (including generation, recording, curation, processing, dissemination, sharing, and use), algorithms (including AI, artificial agents, machine learning, and robots), and corresponding practices and infrastructures (including responsible innovation, programming, hacking, professional codes, and standards), in order to formulate and support morally good solutions (e.g., right conduct or right values). This means that the ethical challenges posed by the digital revolution can be mapped within the conceptual space delineated by three axes of research: the ethics of data/information, the ethics of algorithms, and the ethics of practices and infrastructures.
The ethics of data focuses on ethical problems posed by the collection and analysis of large datasets, on issues ranging from the use of Big Data in biomedical research and the social sciences (Mittelstadt and Floridi 2016) to profiling, advertising, data donation, and open data. In this context, key issues concern the possible re-identification of individuals through the mining, linking, merging, and re-use of large datasets, as well as risks for so-called "group privacy", when the identification or profiling of types of individuals, independently of the de-identification of each of them, may lead to serious ethical problems, from group discrimination (e.g. ageism, racism, sexism) to group-targeted forms of violence (Floridi 2014; Taylor et al. 2017). Trust (Taddeo 2010; Taddeo and Floridi 2011) and transparency (Turilli and Floridi 2009) are also crucial topics, in connection with an acknowledged lack of public awareness of the benefits, opportunities, risks, and challenges associated with the digital revolution. For example, transparency is often advocated as one of the measures that may foster trust. However, it is unclear what information should be made transparent and to whom it should be disclosed.

The ethics of algorithms addresses issues posed by the increasing complexity and autonomy of algorithms broadly understood (e.g., including AI and artificial agents such as Internet bots), especially in the case of machine learning applications. Crucial challenges include the moral responsibility and accountability of both designers and data scientists with respect to unforeseen and undesired consequences and missed opportunities (Floridi 2012, 2016); the ethical design and auditing of algorithms; and the assessment of potential undesirable outcomes (e.g., discrimination or the promotion of anti-social content).
Finally, the ethics of practices (including professional ethics and deontology) and infrastructures addresses questions concerning the responsibilities and liabilities of people and organisations in charge of data processes, strategies, and policies, including businesses and data scientists, with the goal of defining an ethical framework to shape professional codes that may ensure ethical practices fostering both the progress of digital innovation and the protection of the rights of individuals and groups (Cath et al. 2017). Here four issues are central: solutions by design, consent, user privacy, and secondary use or repurposing.
While they are distinct lines of research, the ethics of data, algorithms, and practices and infrastructures are obviously intertwined, and this is why it may be preferable to speak in terms of three axes defining a conceptual space within which ethical problems are like points identified by three values. Most of them do not lie on a single axis. For these reasons, digital ethics must address the whole conceptual space, albeit with varying priorities and foci. As such, it needs to be developed from the start as a macroethics, that is, as an overall "geometry" of the ethical space that avoids narrow, ad hoc approaches and instead addresses the diverse set of ethical implications of digital realities within a consistent, holistic, and inclusive framework. An example may help to clarify the value of this holistic approach. Consider the case of protecting users' privacy on social media platforms. In order to understand the problem adequately and to indicate possible solutions, ethical analyses of users' privacy will have to address data collection, access, and sharing. They will also have to focus on users' consent and the responsibilities of online service providers (Taddeo and Floridi 2015, 2017). At the same time, aspects such as the ethical auditing of algorithms and oversight mechanisms for algorithmic decisions will be central to the analyses, as they may be part of the solution.
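The geometrical metaphor can be rendered as a small sketch. The following Python fragment is our own illustration, and the numerical weightings are hypothetical, merely indicating that the privacy case above loads on all three axes at once:

```python
# Illustrative sketch only: an ethical problem as a point in the three-axis
# conceptual space (data, algorithms, practices/infrastructures).
from dataclasses import dataclass

@dataclass
class EthicalProblem:
    name: str
    data: float        # ethics-of-data component (0 = none, 1 = dominant)
    algorithms: float  # ethics-of-algorithms component
    practices: float   # ethics-of-practices-and-infrastructures component

# Hypothetical coordinates for the social-media privacy case discussed above.
privacy = EthicalProblem("users' privacy on social media",
                         data=0.9, algorithms=0.6, practices=0.8)

axes_involved = [axis for axis in ("data", "algorithms", "practices")
                 if getattr(privacy, axis) > 0]
print(f"'{privacy.name}' involves {len(axes_involved)} of the 3 axes")
```

Since most problems have non-zero components on every axis, any analysis confined to a single axis will be systematically incomplete; this is the formal sense in which digital ethics must be a macroethics.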
In order to give a better sense of the specific issues discussed in digital ethics, the following sections focus on two key areas of application: digital infrastructure and cyber conflicts. They are not the only ones, but they should provide a clear sense of the scope and significance of this new area of investigation.

2.2 Digital Infrastructure

Increasingly, digital data infrastructures, like the Internet, are part of what makes our societies prosper (Castells 2007). And as this network-of-networks becomes more important—from managing our critical infrastructures like the electricity grid to managing our private lives—so does the ethics of its technical governance (Cath and Floridi 2017).
The management of the infrastructure of the Internet depends on choice (Lessig 2006) and control (Deibert et al. 2008; Choucri and Clark 2012). It is about how one decides to build the infrastructure through which data travels, and how it does so. The Internet influences who can connect to whom, and how (Denardis 2014). In turn, these choices can have a fundamental impact on the Internet's ability to foster the public interest, especially in terms of social justice, civil liberties, and human rights (Chadwick 2006; Denardis 2014). Control matters too. Internet infrastructure is increasingly becoming 'politics by other means' (Abbate 2000). Understanding the ethics of the practices embedded in the technology underlying the Internet's digital information flows is vital in order to understand, and ultimately improve, societal and political developments.
To understand this specific axis of digital ethics, we need to address the professional responsibilities and deontology (Floridi 2012) of those actors involved in coding, maintaining, and updating the Internet's infrastructure, including its applications and platforms. This requires an in-depth understanding of the inner workings of the Internet. The Internet is not one network but a global network of networks, bound together through standards and protocols and relying on hardware and software for information flows. Its decentralized, complex, and multi-layered character explains why its maintenance takes many actors and organizations. Considering the complexity of this system, a taxonomy that divides the Internet into three distinct layers provides the needed level of abstraction (Benkler 2000): the content layer (the information users interact with), the logical layer (the software and protocols that make the network interoperable), and the physical layer (wires, cables, and other hardware).

Layer      Description
Content    News, social media posts, videos on streaming platforms, and content generated in collaborative tools like Wikipedia or on digital labour platforms
Logical    The technology that makes the Internet interoperable: its digital infrastructure
Physical   The tangible Internet, its physical infrastructure: computers (servers, personal computers, mobile phones, etc.), telecommunications cables, routers, and data centres
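As a minimal sketch of how the taxonomy can be used, the following Python fragment (our own illustration; the component names and their classification are hypothetical examples, not drawn from Benkler) assigns familiar components to layers:

```python
# Illustrative sketch of Benkler's three-layer model as a simple classification.
from enum import Enum

class Layer(Enum):
    CONTENT = "information users interact with"
    LOGICAL = "software and protocols that make the network interoperable"
    PHYSICAL = "cables, routers, servers, and other hardware"

# Hypothetical example components, classified by layer.
COMPONENTS = {
    "news article": Layer.CONTENT,
    "Wikipedia edit": Layer.CONTENT,
    "TCP/IP": Layer.LOGICAL,
    "DNS": Layer.LOGICAL,
    "submarine cable": Layer.PHYSICAL,
    "data centre": Layer.PHYSICAL,
}

def layer_of(component: str) -> Layer:
    """Look up the layer of a component; the ethical questions differ by layer."""
    return COMPONENTS[component]

print(layer_of("DNS").name)  # LOGICAL
```

The classification is deliberately coarse: as the next paragraph notes, many ethical questions cut across layers, which is precisely why they often lack a single institutional home.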

Each layer raises specific ethical questions about data, but not all the associated discussions have dedicated institutional homes, and they often cut across the various layers and organizations (Mathiason 2008). There has been a concerted political effort over the last decade to highlight how human rights frameworks apply to the Internet. Yet more work remains to be done on how the ethics of data, and the associated ethics of practices of the various actors mentioned in the taxonomy, apply in this domain (Cath and Floridi 2017). There is also limited engagement with the ethical questions surrounding the material infrastructures through which data flow. Addressing these questions could alleviate some of the concerns surrounding the private ordering of data flows and information control by the Internet's technical and business community, increase trust in the decisions of these private actors (Taddeo and Floridi 2011), and make the overall technical infrastructure of the network more stable, as it combines moral values with an ethical infrastructure to support the instantiation of moral behaviour (Floridi 2012). Such an analytical manoeuvre is particularly important as the infrastructure of the Internet increasingly enacts its social ordering upon the world (Denardis 2014; Hofmann et al. 2016).

2.3 Cyber Conflicts

Cyber conflicts arise from the use of digital technologies for immediate (tactical) or intermediate (strategic) disruptive or destructive purposes. When compared to conventional warfare, cyber conflicts show fundamental differences: their domain ranges from the virtual to the physical; the nature of their actors and targets involves artificial and virtual entities alongside human beings and physical objects; and their level of violence may range from non-violent to potentially highly violent phenomena.
These differences are redefining our understanding of key concepts like harm, violence, combatants, and weapons. They also pose serious ethical and policy problems concerning risks, rights, and responsibilities (the 3R problems) (Taddeo 2012). Start with the risks. Estimates indicate that the cyber security market will be worth US$170 billion by 2020 (Markets and Markets 2015), posing the risk of a progressive weaponisation of cyberspace, which may spark a cyber arms race and competition for digital supremacy, further increasing the possibility of escalation and conflicts (Taddeo 2017a, b; Taddeo and Floridi 2018). At the same time, cyber threats are pervasive. They can target, but can also be launched through, civilian infrastructures. This may prompt (and in some cases already has prompted) policies imposing higher levels of control, enforced by governments in order to detect and deter possible threats. In these circumstances, individual rights, such as privacy and anonymity, come under devaluing pressure (Arquilla 1999). Ascribing responsibilities is also problematic, because cyber conflicts are increasingly waged through autonomous systems (Cath et al. 2017). Two good examples are the Active Cyber Defence programmes developed in the US and UK, and 'counter-autonomy' systems: autonomous machine-learning systems able to engage in cyber conflicts by adapting and evolving to deploy, and counter, ever-changing attack strategies. In both cases, it is unclear who or what is accountable and morally responsible for the actions performed by these systems.
If left unaddressed, the 3R problems will hinder attempts to regulate cyber conflicts, favour escalation, and jeopardize international stability. Digital ethics offers a valuable framework to address the 3R problems, as they span the conceptual space identified at the beginning of this chapter, namely data, algorithms, and practices.
More specifically, as state-run cyber operations will rely on machine learning and neural network algorithms, focusing on the ethics of algorithms will be crucial to mitigate the risks of escalation. This is because ethical analyses of algorithms foster the design and deployment of verification, validation, and auditing procedures that can ensure transparency and oversight of autonomous systems deployed for threat detection and target identification. The ethics of data offers the conceptual basis to resolve the friction between cyber conflicts and rights, for it sheds light on the moral stance of digital objects, (artificial) agents, and infrastructures involved in cyber conflicts and, in doing so, facilitates the application of key principles of Just War Theory—such as proportionality, self-defence, and discrimination—to cyber conflicts (Taddeo 2014, 2016). The ethics of practices plays a central role in the regulation of cyber conflicts, as it fosters the understanding of the roles and responsibilities of the different stakeholders (private companies, governmental agencies, and citizens) and thus shapes the ethical code that should inform their conduct. These problems need to be addressed now, while still nascent, to ensure fair and effective regulations.

2.4 Conclusion

This chapter provided an overview of digital ethics, a new and fast-developing field of research that is of vital importance if we are to develop our information societies well, improve our interactions among ourselves and with our environments, protect human dignity, and foster human flourishing in the digital age. Much work lies ahead, and progress will require multidisciplinary collaboration among many fields of expertise and a sustained, unflinching focus on the contribution that ethical thinking can and should make to the world and to technological innovation.

References

Abbate, Janet. 2000. Inventing the internet. Cambridge, MA: MIT Press.
Arquilla, John. 1999. Ethics and information warfare. In Strategic appraisal: The changing role of information in warfare, ed. Zalmay Khalilzad and John Patrick White, 379–401. Santa Monica, CA: RAND.
Benkler, Yochai. 2000. From consumers to users: Shifting the deeper structures of regulation towards sustainable commons and user access. Federal Communications Law Journal 52: 3.
Bynum, Terrell. 2015. Computer and information ethics. In The Stanford encyclopedia of philosophy. http://plato.stanford.edu/archives/win2015/entries/ethics-computer/.
Castells, Manuel. 2007. Communication, power and counter-power in the network society. International Journal of Communication 1 (1): 238–266.
Cath, Corinne, and Luciano Floridi. 2017. The design of the internet's architecture by the Internet Engineering Task Force (IETF) and human rights. Science and Engineering Ethics 24 (2): 449–468.
Cath, Corinne, Sandra Wachter, Brent Mittelstadt, Mariarosaria Taddeo, and Luciano Floridi. 2017. Artificial intelligence and the "Good society": The US, EU, and UK approach. Science and Engineering Ethics: 1–24.

Chadwick, Andrew. 2006. Internet politics: States, citizens, and new communication technologies. Oxford: Oxford University Press.
Choucri, Nazli, and David Clark. 2012. Integrating cyberspace and international relations: The co-evolution dilemma. Working Paper No. 2012-29, Political Science Department, Massachusetts Institute of Technology. http://ssrn.com/abstract=2178586.
Deibert, Ronald, John Palfrey, Rafal Rohozinski, and Jonathan Zittrain, eds. 2008. Access denied: The practice and policy of global internet filtering. Cambridge, MA: MIT Press.
Denardis, Laura. 2014. The global war for internet governance. New Haven: Yale University Press.
Floridi, Luciano. 2006. Information ethics, its nature and scope. SIGCAS Computers and Society 36 (3): 21–36.
———. 2008. The method of levels of abstraction. Minds and Machines 18 (3): 303–329.
———. 2011. The philosophy of information. Oxford/New York: Oxford University Press.
———. 2012. Distributed morality in an information society. Science and Engineering Ethics 19 (3): 727–743.
———. 2013. The ethics of information. Oxford: Oxford University Press.
———. 2014. Open data, data protection, and group privacy. Philosophy & Technology 27 (1): 1–3. https://doi.org/10.1007/s13347-014-0157-8.
———. 2016. Faultless responsibility: On the nature and allocation of moral responsibility for distributed moral actions. Philosophical Transactions of the Royal Society A 374 (2083): 20160112.
Hoare, Charles Antony Richard. 1972. Notes on data structuring. In Structured programming, ed. O.J. Dahl, E.W. Dijkstra, and C.A.R. Hoare, 83–174. London: Academic Press. http://dl.acm.org/citation.cfm?id=1243380.1243382.
Hofmann, Jeanette, Christian Katzenbach, and Kirsten Gollatz. 2016. Between coordination and
regulation: Finding the governance in internet governance. New Media & Society 19 (9):
1406–1423.
Lessig, Lawrence. 2006. Code: And other laws of cyberspace, version 2.0. New York: Basic Books.
Markets and Markets. 2015. Cyber security market by solutions & services - 2020. http://www.marketsandmarkets.com/Market-Reports/cyber-security-market-505.html.
Mathiason, John. 2008. Internet governance: The new frontier of global institutions. Routledge.
Mittelstadt, Brent Daniel, and Luciano Floridi, eds. 2016. The ethics of biomedical big data. Law, governance and technology series, vol. 29. Cham: Springer.
Moor, James H. 1985. What is computer ethics? Metaphilosophy 16 (4): 266–275.
Parker, Donn B. 1968. Rules of ethics in information processing. Communications of the ACM 11 (3): 198–201. https://doi.org/10.1145/362929.362987.
Taddeo, Mariarosaria. 2010. Trust in technology: A distinctive and a problematic relation. Knowledge, Technology & Policy 23 (3–4): 283–286. https://doi.org/10.1007/s12130-010-9113-9.
———. 2012. Information warfare: A philosophical perspective. Philosophy & Technology 25 (1): 105–120.
———. 2014. Just information warfare. Topoi 35 (1): 213–224.
———. 2016. On the risks of relying on analogies to understand cyber conflicts. Minds and Machines 26 (4): 317–321.
———., ed. 2017a. The responsibilities of online service providers. New York/Berlin/Heidelberg:
Springer.
———. 2017b. Cyber conflicts and political power in information societies. Minds and Machines
27 (2): 265–268.
Taddeo, Mariarosaria, and Luciano Floridi. 2011. The case for E-trust. Ethics and Information
Technology 13 (1): 1–3.
———. 2015. The debate on the moral responsibilities of online service providers. Science and Engineering Ethics. https://doi.org/10.1007/s11948-015-9734-1.
———, eds. 2017. The responsibilities of online service providers. New York: Springer.
———. 2018. Regulate artificial intelligence to avert cyber arms race. Nature 556 (7701): 296–298. https://doi.org/10.1038/d41586-018-04602-6.
Taylor, Linnet, Luciano Floridi, and Bart van der Sloot, eds. 2017. Group privacy: New challenges of data technologies. Philosophical Studies Series. Cham: Springer.
Turilli, Matteo, and Luciano Floridi. 2009. The ethics of information transparency. Ethics and
Information Technology 11 (2): 105–112.

