AI Ethics

About this ebook

This overview of the ethical issues raised by artificial intelligence moves beyond hype and nightmare scenarios to address concrete questions—offering a compelling, necessary read for our ChatGPT era.
 
Artificial intelligence powers Google’s search engine, enables Facebook to target advertising, and allows Alexa and Siri to do their jobs. AI is also behind self-driving cars, predictive policing, and autonomous weapons that can kill without human intervention. These and other AI applications raise complex ethical issues that are the subject of ongoing debate. This volume in the MIT Press Essential Knowledge series offers an accessible synthesis of these issues. Written by a philosopher of technology, AI Ethics goes beyond the usual hype and nightmare scenarios to address concrete questions.
 
Mark Coeckelbergh describes influential AI narratives, ranging from Frankenstein’s monster to transhumanism and the technological singularity. He surveys relevant philosophical discussions: questions about the fundamental differences between humans and machines and debates over the moral status of AI. He explains the technology of AI, describing different approaches and focusing on machine learning and data science. He offers an overview of important ethical issues, including privacy concerns, responsibility and the delegation of decision making, transparency, and bias as it arises at all stages of data science processes. He also considers the future of work in an AI economy. Finally, he analyzes a range of policy proposals and discusses challenges for policymakers. He argues for ethical practices that embed values in design, translate democratic values into practices and include a vision of the good life and the good society.
Language: English
Publisher: The MIT Press
Release date: February 5, 2020
ISBN: 9780262357074

    Book preview

    AI Ethics

    The MIT Press Essential Knowledge Series

    A complete list of the titles in this series appears at the back of this book.

    AI Ethics

    Mark Coeckelbergh

    The MIT Press | Cambridge, Massachusetts | London, England

    © 2020 The Massachusetts Institute of Technology

    All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

    This book was set in Chaparral Pro by Toppan Best-set Premedia Limited.

    Library of Congress Cataloging-in-Publication Data

    Names: Coeckelbergh, Mark, author.

    Title: AI ethics / Mark Coeckelbergh.

    Description: Cambridge, MA : The MIT Press, [2020] | Series: The MIT Press essential knowledge series | Includes bibliographical references and index.

    Identifiers: LCCN 2019018827 | ISBN 9780262538190 (pbk. : alk. paper)

    Subjects: LCSH: Artificial intelligence—Moral and ethical aspects.

    Classification: LCC Q334.7 .C64 2020 | DDC 170—dc23 LC record available at https://lccn.loc.gov/2019018827

    10 9 8 7 6 5 4 3 2 1

    for Arno

    Contents

    Series Foreword

    Acknowledgments

    1 Mirror, Mirror, on the Wall

    2 Superintelligence, Monsters, and the AI Apocalypse

    3 All about the Human

    4 Just Machines?

    5 The Technology

    6 Don’t Forget the Data (Science)

    7 Privacy and the Other Usual Suspects

    8 A-responsible Machines and Unexplainable Decisions

    9 Bias and the Meaning of Life

    10 Policy Proposals

    11 Challenges for Policymakers

    12 It’s the Climate, Stupid! On Priorities, the Anthropocene, and Elon Musk’s Car in Space

    Glossary

    Notes

    References

    Further Reading

    Index

    Series Foreword

    The MIT Press Essential Knowledge series offers accessible, concise, beautifully produced pocket-size books on topics of current interest. Written by leading thinkers, the books in this series deliver expert overviews of subjects that range from the cultural and the historical to the scientific and the technical.

    In today’s era of instant information gratification, we have ready access to opinions, rationalizations, and superficial descriptions. Much harder to come by is the foundational knowledge that informs a principled understanding of the world. Essential Knowledge books fill that need. Synthesizing specialized subject matter for nonspecialists and engaging critical topics through fundamentals, each of these compact volumes offers readers a point of access to complex ideas.

    Bruce Tidor

    Professor of Biological Engineering and Computer Science

    Massachusetts Institute of Technology

    Acknowledgments

    This book not only draws on my own work on this topic but reflects the knowledge and experience of the entire field of AI ethics. It would be impossible to list all the people I have spoken with and learned from over the past years, but the relevant and fast-growing communities I know include AI researchers such as Joanna Bryson and Luc Steels, fellow philosophers of technology such as Shannon Vallor and Luciano Floridi, academics working on responsible innovation in the Netherlands and the UK such as Bernd Stahl at De Montfort University, people I met in Vienna such as Robert Trappl, Sarah Spiekermann, and Wolfgang (Bill) Price, and my fellow members of the policy-oriented advisory bodies High-Level Expert Group on AI (European Commission) and Austrian Council on Robotics and Artificial Intelligence, for example Raja Chatila, Virginia Dignum, Jeroen van den Hoven, Sabine Köszegi, and Matthias Scheutz—to name just a few. I would also like to warmly thank Zachary Storms for helping with proofreading and formatting, and Lena Starkl and Isabel Walter for support with literature search.

    1

    Mirror, Mirror, on the Wall

    The AI Hype and Fears: Mirror, Mirror, on the Wall, Who Is the Smartest of Us All?

    When the results are announced, Lee Sedol’s eyes well up with tears. AlphaGo, an artificial intelligence (AI) developed by Google’s DeepMind, has just secured a 4–1 victory in the game of Go. It is March 2016. Two decades earlier, chess grandmaster Garry Kasparov lost to the machine Deep Blue; now a computer program has won against eighteen-time world champion Lee Sedol in a complex game that was seen as one that only humans could play, using their intuition and strategic thinking. The computer won not by following rules given to it by programmers but by means of machine learning based on millions of past Go matches and by playing against itself. In such a case, programmers prepare the data sets and create the algorithms, but cannot know which moves the program will come up with. The AI learns by itself. After a number of unusual and surprising moves, Lee had to resign (Borowiec 2016).

    An impressive achievement by the AI. But it also raises concerns. There is admiration for the beauty of the moves, but also sadness, even fear. There is the hope that even smarter AIs could help us to revolutionize health care or find solutions for all kinds of societal problems, but also the worry that machines will take over. Could machines outsmart us and control us? Is AI still a mere tool, or is it slowly but surely becoming our master? These fears remind us of the words of the AI computer HAL in Stanley Kubrick’s science fiction film 2001: A Space Odyssey, who in response to the human command “Open the pod bay doors” answers: “I’m afraid I can’t do that, Dave.” And if not fear, there may be a feeling of sadness or disappointment. Darwin and Freud dethroned our beliefs of exceptionalism, our feelings of superiority, and our fantasies of control; today, artificial intelligence seems to deal yet another blow to humanity’s self-image. If a machine can do this, what is left for us? What are we? Are we just machines? Are we inferior machines, with too many bugs? What is to become of us? Will we become the slaves of machines? Or worse, a mere energy resource, as in the film The Matrix?

    The Real and Pervasive Impact of AI

    But the breakthroughs of artificial intelligence are not limited to games or the realm of science fiction. AI is already happening today and it is pervasive, often invisibly embedded in our day-to-day tools and as part of complex technological systems (Boddington 2017). Given the exponential growth of computer power, the availability of (big) data due to social media and the massive use of billions of smartphones, and fast mobile networks, AI, especially machine learning, has made significant progress. This has enabled algorithms to take over many of our activities, including planning, speech, face recognition, and decision making. AI has applications in many domains, including transport, marketing, health care, finance and insurance, security and the military, science, education, office work and personal assistance (e.g., Google Duplex¹), entertainment, the arts (e.g., music retrieval and composition), agriculture, and of course manufacturing.

    AI is created and used by IT and internet companies. For example, Google has always used AI for its search engine. Facebook uses AI for targeted advertising and photo tagging. Microsoft and Apple use AI to power their digital assistants. But the application of AI is wider than the IT sector defined in a narrow sense. For example, there are many concrete plans for, and experiments with, self-driving cars. This technology is also based on AI. Drones use AI, as do autonomous weapons that can kill without human intervention. And AI has already been used in decision making in courts. In the United States, for example, the COMPAS system has been used to predict who is likely to re-offend. AI also enters domains that we generally consider to be more personal or intimate. For example, machines can now read our faces: not only to identify us, but also to read our emotions and retrieve all kinds of information.

    The Need to Discuss Ethical and Societal Problems

    AI can have many benefits. It can be used to improve public and commercial services. For example, image recognition is good news for medicine: it can help with the diagnosis of diseases such as cancer and Alzheimer’s. But such everyday applications of artificial intelligence also show how the new technologies raise ethical concerns. Let me give some examples of questions in AI ethics.

    Should self-driving cars have built-in ethical constraints, and if so, what kind of constraints, and how should they be determined? For example, if a self-driving car gets into a situation where it must choose between driving into a child or into a wall to save the child’s life but potentially killing its passenger, what should it choose? And should autonomous lethal weapons be allowed at all? How many decisions, and how much of each decision, do we want to delegate to AI? And who is responsible when something goes wrong? In one case, the judges put more faith in the COMPAS algorithm than in agreements reached by the defense and the prosecution.² Will we rely too much on AI?

    The COMPAS algorithm is also highly controversial since research has shown that the algorithm’s false positives (people who were predicted to re-offend but did not) were disproportionately black (Fry 2018). AI can thus reinforce bias and unjust discrimination. Similar problems can arise with algorithms that recommend decisions about mortgage applications and job applications. Or consider so-called predictive policing: algorithms are used to forecast where crimes are likely to occur (e.g., which area of a city) and who might commit them, but the result might be that specific socioeconomic or racial groups will be disproportionately targeted by police surveillance. Predictive policing has already been used in the United States and, as a recent AlgorithmWatch (2019) report shows, also in Europe.³

    And AI-based facial recognition technology is often used for surveillance and can violate people’s privacy. It can also more or less predict sexual preferences. No information from your phone and no biometric data are needed. The machine does its work from a distance. With cameras on the street and in other public spaces, we can be identified and read, including our mood. By means of analysis of our data, our mental and bodily health can be predicted—without us knowing it. Employers can use the technology to monitor our performance. And algorithms that are active on social media can spread hate speech or false information; for example, political bots can appear as real people and post political content. A well-known case is the 2016 Microsoft chatbot named Tay that was designed to have playful conversations on Twitter but, as it learned from other users, started to tweet racist things. Some AI algorithms can even create false video speeches, such as the video that was composed to misleadingly resemble a speech by Barack Obama.⁴
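The bias finding cited above turns on a simple quantity: the false positive rate, computed separately per group. The following sketch illustrates that calculation only; the function name and all data are invented for illustration and have nothing to do with the actual COMPAS system or its data.

```python
# Toy illustration of measuring disparate false positive rates across
# groups, as in the COMPAS debate. All names and data here are invented.

def false_positive_rate(records):
    """records: list of (predicted_reoffend, did_reoffend) boolean pairs.
    FPR = people wrongly flagged / people who did not re-offend."""
    false_positives = sum(1 for pred, actual in records if pred and not actual)
    actual_negatives = sum(1 for _, actual in records if not actual)
    return false_positives / actual_negatives if actual_negatives else 0.0

# Hypothetical predictions for two demographic groups.
by_group = {
    "group_a": [(True, False), (True, False), (True, True), (False, False)],
    "group_b": [(False, False), (True, True), (False, False), (False, False)],
}

for group, records in by_group.items():
    print(f"{group}: FPR = {false_positive_rate(records):.2f}")
```

On this invented data the flagged-but-innocent rate differs sharply between the groups (0.67 versus 0.00) even though each group contains one genuine re-offender, which is the kind of asymmetry the research on COMPAS reported.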

    The intentions are often good. But these ethical problems are usually unintended consequences of the technology: most of these effects, such as bias or hate speech, were not intended by the developers or users of the technology. Moreover, one critical question to be asked is always: Improvement for whom? The government or the citizens? The police or those who are targeted by the police? The retailer or the customer? The judges or the accused? Questions concerning power come into play, for instance when the technology is shaped by only a few mega corporations (Nemitz 2018). Who shapes the future of AI?

    This question points up the social and political significance of AI. AI ethics is about technological change and its impact on individual lives, but also about transformations in society and in the economy. The issues of bias and discrimination already indicate that AI has societal relevance. But it is also changing the economy and therefore perhaps the social structure of our societies. According to Brynjolfsson and McAfee (2014), we have entered a Second Machine Age in which machines are not only complements to humans, as in the Industrial Revolution, but also substitutes. As professions and work of all kinds will be affected by AI, our society has been predicted to change dramatically as technologies once described in science fiction enter the real world (McAfee and Brynjolfsson 2017). What is the future of work? What kind of lives will we have when AIs take over jobs? And who is the we? Who will gain from this transformation, and who will lose?

    This Book

    Based on spectacular breakthroughs, a lot of hype surrounds AI. And AI is already used in a wide range of knowledge domains and human practices. The first has given rise to wild speculations about the technological future and interesting philosophical discussions about what it means to be human. The second has created a sense of urgency on the part of ethicists and policymakers to ensure that this technology benefits us instead of creating insurmountable challenges for individuals and societies. These latter concerns are more practical and immediate.

    This book, written by an academic philosopher who also has experience with advice for policymaking, deals with both aspects: it treats ethics as related to all these questions. It aims to give the reader a good overview of the ethical problems with AI understood broadly, ranging from influential narratives about the future of AI and philosophical questions about the nature and future of the human, to ethical concerns about responsibility
