Deceitful Media
Artificial Intelligence and Social Life after the Turing Test

Simone Natale
Oxford University Press is a department of the University of Oxford. It furthers
the University’s objective of excellence in research, scholarship, and education
by publishing worldwide. Oxford is a registered trade mark of Oxford University
Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2021

All rights reserved. No part of this publication may be reproduced, stored in
a retrieval system, or transmitted, in any form or by any means, without the
prior permission in writing of Oxford University Press, or as expressly permitted
by law, by license, or under terms agreed with the appropriate reproduction
rights organization. Inquiries concerning reproduction outside the scope of the
above should be sent to the Rights Department, Oxford University Press, at the
address above.

You must not circulate this work in any other form
and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data
Names: Natale, Simone, 1981– author.
Title: Deceitful media : artificial intelligence and social life after the Turing test /
Simone Natale.
Description: New York : Oxford University Press, [2021] |
Includes bibliographical references and index.
Identifiers: LCCN 2020039479 (print) | LCCN 2020039480 (ebook) |
ISBN 9780190080365 (hardback) | ISBN 9780190080372 (paperback) |
ISBN 9780190080396 (epub)
Subjects: LCSH: Artificial intelligence—Social aspects. | Philosophy of mind.
Classification: LCC Q335 .N374 2021 (print) | LCC Q335 (ebook) |
DDC 303.48/34—dc23
LC record available at https://lccn.loc.gov/2020039479
LC ebook record available at https://lccn.loc.gov/2020039480

DOI: 10.1093/oso/9780190080365.001.0001

9 8 7 6 5 4 3 2 1
Paperback printed by Marquis, Canada
Hardback printed by Bridgeport National Bindery, Inc., United States of America
ACKNOWLEDGMENTS

When I started working on this book, I had an idea about a science fic-
tion story. I might never write it, so I reckon it is just fine to give away its
plot here. A woman, Ellen, is awakened by a phone call. It’s her husband.
There is something strange in his voice; he sounds worried and somehow
out of tune. In the near future in which this story is set, artificial intelli-
gence (AI) has become so efficient that a virtual assistant can make calls
on your behalf by reproducing your own voice, and the simulation will be
so accurate as to trick even your close family and friends. Ellen and her
husband, however, have agreed that they would never use AI to communi-
cate with each other. Yet in her husband’s voice that morning there is some-
thing that doesn’t sound like him. Later, Ellen discovers that her husband
has died that very night, a few hours before the time of their call. The call
must have been made by an AI assistant. Dismayed by her loss, she listens
to the conversation again and again until she finally picks up some hints to
solve the mystery. In fact, this science fiction story I haven’t written is also
a crime story. To learn the truth about her husband’s death, Ellen will need
to interpret the content of the conversation. In the process, she will also
have to establish whether the words came from her husband, from the
machine that imitated him, or from some combination of the two.
This book is not science fiction, yet like much science fiction, it is also an
attempt to make sense of technologies whose implications and meaning we
are just starting to understand. I use the history of AI—​a surprisingly long
one for technologies that are often presented as absolute novelties—​as a
compass to orient my exploration. I started working on this book in 2016.
My initial idea was to write a cultural history of the Turing test, but my
explorations brought exciting and unexpected discoveries that made the
final project expand much beyond that.
A number of persons read and commented on early drafts of this
work. My editor, Sarah Humphreville, not only believed in this project
since the start but also provided crucial advice and timely suggestions
throughout its development. Assistant Editor Emma Hodgon was also
exceedingly helpful and scrupulous. Leah Henrickson provided feedback
on all the chapters; her intelligence and knowledge made this a much
better book. I am grateful to all who dedicated time and attention to read
and comment on different parts of this work: Saul Albert, Gabriele Balbi,
Andrea Ballatore, Paolo Bory, Riccardo Fassone, Andrea Guzman, Vincenzo
Idone Cassone, Nicoletta Leonardi, Jonathan Lessard, Peppino Ortoleva,
Benjamin Peters, Michael Pettit, Thais Sardá, Rein Sikveland, and Cristian
Vaccari.
My colleagues at Loughborough University have been a constant source
of support, both professionally and personally, during the book’s gestation.
I would like especially to thank John Downey for being such a generous
mentor at an important and potentially complicated moment of my career,
and for teaching me the importance of modesty and integrity in the process.
Many other senior staff members at Loughborough were very supportive
on many occasions throughout the last few years, and I wish particularly
to thank Emily Keightley, Sabina Mihelj, and James Stanyer for their con-
stant help and friendliness. Thanks also to my colleagues and friends Pawas
Bisht, Andrew Chadwick, David Deacon, Antonios Kyparissiadis, Line
Nyhagen, Alena Pfoser, Marco Pino, Jessica Robles, Paula Saukko, Michael
Skey, Elizabeth Stokoe, Vaclav Stetka, Thomas Thurnell-Read, Peter Yeandle,
and Dominic Wring, as well as to all other colleagues at Loughborough, for
making work easier and more enjoyable.
During the latest stages of this project I was awarded a Visiting Fellowship
at ZeMKI, the Center for Media, Communication, and Information
Research of the University of Bremen. It was a great opportunity to discuss
my work and to have space and time to reflect and write. Conversations
with Andreas Hepp and Yannis Theocharis were particularly helpful to
clarify and deepen some of my ideas. I thank all the ZeMKI members for
their feedback and friendship, especially but not only Stefanie Averbeck-​
Lietz, Hendrik Kühn, Kerstin Radde-​Antweiler, and Stephanie Seul, as well
as the other ZeMKI Fellows whose residence coincided with my stay: Peter
Lunt, Ghislain Thibault, and Samuel Van Ransbeeck.
Some portions of this book have been revised from previous publications.
In particular, parts of chapter 3 were previously published, in a significantly
different version, in the journal New Media and Society, and an earlier ver-
sion of chapter 6 was featured as a Working Paper in the Communicative
Figurations Working Papers series. I thank the reviewers and editors for
their generous feedback.
My thanks, finally, go to the many humans who acted as my companions
throughout these years, doing it so well that no machine will ever be able to
replace them. This book is especially dedicated to three of them: my brother
and sister and my partner, Viola.

I remember a visit I made to one of Queen Victoria’s residences, Osborne
on the Isle of Wight. . . . Prominent among the works displayed there was
a life-​size marble sculpture of a large furry dog, a portrait of the Queen’s
beloved pet “Noble.” The portrait must have been as faithful as the dog
undoubtedly was—​but for the lack of color it might have been stuffed.
I do not know what impelled me to ask our guide, “May I stroke him?” She
answered, “Funny you want to do that; all the visitors who pass stroke
him—​we have to wash him every week.” Now, I do not think the visitors to
Osborne, myself included, are particularly prone to magic beliefs. We did
not think the image was real. But if we had not thought it somewhere we
would hardly have reacted as we did—​that stroking gesture may well have
been compounded of irony, playfulness, and a secret wish to reassure our-
selves that after all the dog was only of marble.
—​Ernst Gombrich, Art and Illusion
Introduction

In May 2018, Google gave a public demonstration of its ongoing project
Duplex, an extension of Google Assistant programmed to carry out
phone conversations. Google’s CEO, Sundar Pichai, presented the recording
of a conversation in which the program mimicked a human voice to book
an appointment with a hair salon. Duplex’s synthetic voice featured pauses
and hesitation in an effort to sound more credible. The strategy appeared
to work: the salon representative believed she was speaking with a real
person and accepted the reservation.1
In the following weeks, Duplex’s apparent achievements attracted
praise, but also criticism. Commentaries following the demonstration
highlighted two problems with the demo. On one side, some contended
that Duplex operated “straight up, deliberate deception,”2 opening new
ethical questions regarding the capacity of an artificial intelligence (AI) to
trick users into believing it is human. On the other side, some expressed
doubts about the authenticity of the demo. They pointed to a series of
oddities in the recorded conversations: the businesses, for instance,
never identified themselves, no background noise could be heard, and
the reservation-​takers never asked Duplex for a contact number. This
suggested that Google might have doctored the demo, faking Duplex’s ca-
pacity to pass as human.3

The controversy surrounding Duplex reflects a well-established dynamic
in the public debate about AI. Since its inception in the 1950s, the
achievements of AI have often been discussed in binary terms: either ex-
ceptional powers are attributed to it, or it is dismissed as a delusion and a
fraud.4 Time after time, the gulf between these contradictory assessments
has jeopardized our capacity to recognize that the true impact of AI is more
nuanced and oblique than usually acknowledged. The same risk is pre-
sent today, as commentators appear to believe that the question should
be whether or not Duplex is able to pass as a human. However, even if
Google’s gadget proved unable to pass as human, we should not believe
the illusion to be dispelled. Even in the absence of deliberate misrepresen-
tation, AI technologies entail forms of deception that are perhaps less ev-
ident and straightforward but deeply impact societies. We should regard
deception not just as a possible way to employ AI but as a constitutive el-
ement of these technologies. Deception is as central to AI’s functioning as
the circuits, software, and data that make it run.
This book argues that, since the beginning of the computer age,
researchers and developers have explored the ways users are led to be-
lieve that computers are intelligent. Examining the historical trajectory
of AI from its origins to the present day, I show that AI scientists have
incorporated knowledge about users into their efforts to build mean-
ingful and effective interactions between humans and machines. I call,
therefore, for a recalibration of the relationship between deception and
AI that critically questions the ways computing technologies draw on
specific aspects of users’ perception and psychology in order to create
the illusion of AI.
One of the foundational texts for AI research, Alan Turing’s “Computing
Machinery and Intelligence” (1950), set up deception as a likely outcome
of interactions between humans and intelligent computers. In his pro-
posal for what is now commonly known as the Turing test, he suggested
evaluating computers on the basis of their capacities to deceive human
judges into believing they were human. Although tricking humans was
never the main objective of AI, computer scientists adopted Turing’s in-
tuition that whenever communication with humans is involved, the be-
havior of the human users informs the meaning and impact of AI just as
much as the behavior of the machine itself. As new interactive systems
that enhanced communications between humans and computers were
introduced, AI scientists began more seriously engaging with questions
of how humans react to seemingly intelligent machines. The way this
dynamic is now embedded in the development of contemporary AI voice
assistants such as Google Assistant, Amazon’s Alexa, and Apple’s Siri sig-
nals the emergence of a new kind of interface, which mobilizes deception
in order to manage the interactions between users, computing systems,
and Internet-​based services.
Since Turing’s field-​defining proposal, AI has coalesced into a disci-
plinary field within cognitive science and computer science, producing
an impressive range of technologies that are now in public use, from
machine translation to the processing of natural language, and from
computer vision to the interpretation of medical images. Researchers
in this field nurtured the dream—​cherished by some scientists while
dismissed as unrealistic by others—​of reaching “strong” AI, that is, a
form of machine intelligence that would be practically indistinguish-
able from human intelligence. Yet, while debates have largely focused
on the possibility that the pursuit of strong AI would lead to forms of
consciousness similar or alternative to that of humans, where we have
landed might more accurately be described as the creation of a range of
technologies that provide an illusion of intelligence—​in other words, the
creation not of intelligent beings but of technologies that humans per-
ceive as intelligent.
Reflecting broader evolutionary patterns of narratives about technolog-
ical change, the history of AI and computing has until now been mainly
discussed in terms of technological capability.5 Even today, the prolifera-
tion of new communicative AI systems is mostly explained as a technical
innovation sparked by the rise of neural networks and deep learning.6
While approaches to the emergence of AI usually emphasize evolution in
programming and computing technologies, this study focuses on how the
development of AI has also built on knowledge about users.7 Taking up this
point of view helps one to realize the extent to which tendencies to project
agency and humanity onto things make AI potentially disruptive for social
relations and everyday life in contemporary societies. This book, therefore,
reformulates the debate on AI on the basis of a new assumption: that what
machines are changing is primarily us, humans. “Intelligent” machines
might one day revolutionize life; they are already transforming how we un-
derstand and carry out social interactions.
Since AI’s emergence as a new field of research, many of its leading
researchers have professed to believe that humans are fundamentally sim-
ilar to machines and, consequently, that it is possible to create a computer
that equals or surpasses human intelligence in all aspects and areas. Yet
entertaining such a tenet does not necessarily conflict with and is often
complementary to the idea that existing AI systems provide only the illu-
sion of human intelligence. Throughout the history of AI, many have ac-
knowledged the limitations of present systems and focused their efforts
on designing programs that would provide at least the appearance of in-
telligence; in their view, “real” or “strong” AI would come through further
progress, with their own simulation systems representing just a step in
that direction.8 Understanding how humans engage in social exchanges,
and how they can be led to treat things as social agents, became instru-
mental to overcoming the limitations of AI technologies. Researchers in AI
thus established a direction of research that was based on the design of
technologies that cleverly exploited human perception and expectations to
give users the impression of employing or interacting with intelligent sys-
tems. This book demonstrates that looking at the development across time
of this tradition—​which has not yet been studied as such—​is essential to
understanding contemporary AI systems programmed to engage socially
with humans. In order to pursue this agenda, however, the problem of de-
ception and AI needs to be formulated under new terms.

ON HUMANS, MACHINES, AND “BANAL DECEPTION”

When the great art historian Ernst Gombrich started his inquiry into the
role of illusion in the history of art, he realized that figurative arts emerge
within an interplay between the limits of tradition and the limits of percep-
tion. Artists have always incorporated deception into their work, drawing
on their knowledge both of convention and of mechanisms of perception
to achieve certain effects on the viewer.9 But who would blame a gifted
painter for employing deceit by playing with perspective or depth to make
a tableau look more convincing and “real” in the eyes of the observer?
While this is easily accepted from an artist, the idea that a software
developer employs knowledge about how users are deceived in order to
improve human-​computer interaction is likely to encounter concern and
criticism. In fact, because the term deception is usually associated with ma-
licious endeavors, the AI and computer science communities have proven
resistant to discussing their work in terms of deception, or have discussed
deception as an unwanted outcome.10 This book, however, contends that
deception is a constitutive element of human-computer interactions
rooted in AI technologies. We are, so to speak, programmed to be deceived,
and modern media have emerged within the spaces opened by the limits
and affordances of our capacity to fall into illusion. Despite their resistance
to considering deception as such, computer scientists have worked since the
early history of their field to exploit the limits and affordances of our per-
ception and intellect.11
Deception, in its broad sense, involves the use of signs or representations
to convey a false or misleading impression. A wealth of research in areas
such as social psychology, philosophy, and sociology has shown that de-
ception is an inescapable fact of social life with a functional role in social
interaction and communication.12 Although situations in which deception
is intentional and manifest, such as frauds, scams, and blatant lies, shape
popular understandings of deception, scholars have underlined the more
disguised, ordinary presence of deception in everyday experience.13 Many
forms of deception are not so clear-​cut, and in many cases deception is not
even understood as such.14
Working from a phenomenological perspective, philosopher Mark
A. Wrathall influentially argued that our capacity to be deceived is an in-
herent quality of our experience. While deception is commonly understood
in binary terms, positing that one might either be or not be deceived,
Wrathall contends that such a dichotomy does not account for how people
perceive and understand external reality: “it rarely makes sense to say that
I perceived either truly or falsely” since the possibility of deception is in-
grained in the mechanisms of our perception. If, for instance, I am walking
in the woods and believe I see a deer to my side where in fact there is just a
bush, I am deceived; yet the same mechanism that made me see a deer where
it wasn’t—​that is, our tendency and ability to identify patterns in visual
information—​would have helped me, on another occasion, to identify a po-
tential danger. The fact that our senses have shortcomings, Wrathall points
out, represents a resource as much as a limit for human perception and is
functional to our ability to navigate the external world.15 From a similar
point of view, cognitive psychologist Donald D. Hoffman recently proposed
that evolution has shaped our perceptions into useful illusions that help
us navigate the physical world but can also be manipulated through tech-
nology, advertising, and design.16
Indeed, the institutionalization of psychology in the late nineteenth and
early twentieth centuries already signaled the discovery that deception and
illusion were integral, physiological aspects of the psychology of percep-
tion.17 Understanding deception was important not only as a means to
study how people misunderstood the world but also to study how
they perceived and navigated it.18 During the nineteenth and twentieth
centuries, the accumulation of knowledge about how people were deceived
informed the development of a wide range of media technologies and
practices, whose effectiveness exploited the affordances and limitations
of our senses of seeing, hearing, and touching.19 As I demonstrate in this
book, AI developers, in order to produce their outcomes, have continued
this tradition of technologies that mobilize our liability to deception.
Artificial intelligence scientists have collected information and knowledge
about how users react to machines that exhibit the appearance of intelli-
gent behaviors, incorporating this knowledge into the design of software
and machines.
One potential objection to this approach is that it dissolves the very
concept of deception by equating it with “normal” perception. I contend,
however, that rejecting a binary understanding of deception helps one re-
alize that deception involves a wide spectrum of situations that have very
different outcomes but also common characteristics. If on one end of the
spectrum there are explicit attempts to mislead, commit fraud, and tell lies,
on the other end there are forms of deception that are not so clear-​cut
and that, in many cases, are not understood as such.20 Only by identifying
and studying less evident dynamics of deception can we develop a full un-
derstanding of more evident and straight-​out instances of deception. In
pointing to the centrality of deception, therefore, I do not intend to sug-
gest that all forms of AI have hypnotic or manipulative goals. My main goal
is not to establish whether AI is “good” or “bad” but to explore a crucial di-
mension of AI and interrogate how we should proceed in response to this.
Home robots such as Jibo and companion chatbots such as Replika, for
example, are designed to appear cute and to awaken sentiments of empathy
in their owners. This design choice looks in itself harmless and benevo-
lent: these technologies simply work better if their appearance and beha-
vior stimulate positive feelings in their users.21 The same characteristics,
however, will appear less innocent if the companies producing these sys-
tems start profiting from these feelings in order to influence users’ polit-
ical opinions. Home robots and companion chatbots, together with a wide
range of AI technologies programmed to enter into communication with
humans, structurally incorporate forms of deception: elements such as ap-
pearance, a humanlike voice, and the use of specific language expressions
are designed to produce specific effects in the user. What makes this more
or less acceptable is not whether deception is present
but what the outcomes and implications are of the deceptive effects
produced by any given AI technology. Broadening the definition of decep-
tion, in this sense, can lead to improving our comprehension of the poten-
tial risks of AI and related technologies, counteracting the power of the
companies that gain from the user’s interactions with these technologies
and stimulating broader investigations of whether such interactions pose
any potential harm to the user.

To distinguish it from straight-out and deliberate deception, I propose
the concept of banal deception to describe deceptive mechanisms and
practices that are embedded in media technologies and contribute to their
integration into everyday life. Banal deception entails mundane, everyday
situations in which technologies and devices mobilize specific elements
of the user’s perception and psychology—​for instance, in the case of AI,
the all-​too-​human tendency to attribute agency to things or personality
to voices. The word “banal” describes things that are dismissed as ordi-
nary and unimportant; my use of this word aims to underline that these
mechanisms are often taken for granted, despite their significant impact
on the uses and appropriations of media technologies, and are deeply
embedded in everyday, “ordinary” life.22
Unlike approaches to deliberate or straight-out deception, the concept of banal
deception does not cast users and audiences as passive or naïve.
On the contrary, audiences actively exploit their own capacity to fall into
deception in sophisticated ways—​for example, through the entertain-
ment they enjoy when they fall into the illusions offered by cinema or
television. The same mechanism resonates with the case of AI. Studies in
human-​computer interaction consistently show that users interacting with
computers apply norms and behaviors that they would adopt with humans,
even if these users perfectly understand the difference between computers
and humans.23 At first glance, this seems incongruous, as if users resist
and embrace deception simultaneously. The concept of banal deception
provides a resolution of this apparent contradiction. I argue that the subtle
dynamics of banal deception allow users to embrace deception so that
they can better incorporate AI into their everyday lives, making AI more
meaningful and useful to them. This does not mean that banal deception is
harmless or innocuous. Structures of power often reside in mundane, ordi-
nary things, and banal deception may finally bear deeper consequences for
societies than the most manifest and evident attempts to deceive.
Throughout this book, I identify and highlight five key characteristics
that distinguish banal deception. The first is its everyday and ordinary char-
acter. When researching people’s perceptions of AI voice assistants, Andrea
Guzman was surprised by what she sensed was a discontinuity between
the usual representations of AI and the responses of her interviewees.24
Artificial intelligence is usually conceived and discussed as extraordi-
nary: a dream or a nightmare that awakens metaphysical questions and
challenges the very definition of what it means to be human.25 Yet when
Guzman approached users of systems such as Siri, the AI voice assistant
embedded in iPhones and other Apple devices, she did not find that the
users were questioning the boundaries between humans and machines.
Instead, participants were reflecting on themes similar to those that also
characterize other media technologies. They were asking whether using the
AI assistant made them lazy, or whether it was rude to talk on the phone
in the presence of others. As Guzman observes, “neither the technology
nor its impact on the self from the perspective of users seemed extraordi-
nary; rather, the self in relation to talking AI seemed, well, ordinary—​just
like any other technology.”26 This ordinary character of AI is what makes
banal deception so imperceptible but at the same time so consequential. It
is what prepares for the integration of AI technologies into the fabric of
everyday experience and, as such, into the very core of our identities and
selves.27
The second characteristic of banal deception is functionality. Banal de-
ception always has some potential value to the user. Human-​computer in-
teraction has regularly employed representations and metaphors to build
reassuring and easily comprehensible systems, hiding the complexity
of the computing system behind the interface.28 As noted by Michael
Black, “manipulating user perception of software systems by strategically
misrepresenting their internal operations is often key to producing com-
pelling cultural experiences through software.”29 Using the same logic, com-
municative AI systems mobilize deception to achieve meaningful effects.
The fact that users behave socially when engaging with AI voice assistants,
for instance, has an array of pragmatic benefits: it makes it easier for users
to integrate these tools into domestic environments and everyday lives,
and presents possibilities for playful interaction and emotional reward.30
Being deceived, in this context, is to be seen not as a misinterpretation by
the user but as a response to specific affordances coded into the technology
itself.
The third characteristic of banal deception is obliviousness: the fact
that the deception is not understood as such but taken for granted and
unquestioned. The concept of “mindless behavior” has already been used
to explain the apparent contradiction, mentioned earlier, of AI users un-
derstanding that machines are not human but still to some extent treating
them as such.31 Researchers have drawn from cognitive psychology to de-
scribe mindlessness as “an overreliance on categories and distinctions
drawn in the past and in which the individual is context-​dependent and,
as such, is oblivious to novel (or simply alternative) aspects of the situa-
tion.”32 The problem with this approach is that it implies a rigid distinction
between mindfulness and mindlessness whereby only the latter leads to
deception. When users interact with AI, however, they also replicate social
behaviors and habits in self-​conscious and reflective ways. For instance,
users carry out playful exchanges with AI voice assistants, although they
know too well the machine will not really get their jokes. They wish them
goodnight before going to bed, even if aware that they will not “sleep” in
the same sense as humans do.33 This suggests that distinctions between
mindful and mindless behaviors fail to capture the complexity of the inter-
action. In contrast, obliviousness implies that while users do not thematize
deception as such, they may engage in social interactions with the machine
deliberately as well as unconsciously. Obliviousness also allows the user to
maintain at least the illusion of control—​this being, in the age of user-​
friendliness, a key principle of software design.34
The fourth characteristic of banal deception is its low definition. While
this term is commonly used to describe formats of video or sound repro-
duction with lower resolution, in media theory the term has also been
employed in reference to media that demand more participation from
audiences and users in the construction of sense and meaning.35 Where
AI is concerned, textual and voice interfaces are low definition because they
leave ample space for the user to imagine and attribute characteristics such
as gender, race, class, and personality to the disembodied voice or text. For
instance, voice assistants do not present at a physical or visual level the
appearance of the virtual character (such as “Alexa” or “Siri”), but some
cues are embedded in the sounds of their voices, in their names, and in
the content of their exchanges. It is for this reason that, as shown in re-
search about people’s perceptions of AI voice assistants, different users
imagine AI assistants in different, multiple ways, which also enhances the
effect of technology being personalized to each individual.36 In contrast,
humanoid robots leave less space for the users’ imagination and projec-
tion mechanisms and are therefore not low definition. This is one of the
reasons why disembodied AI voice assistants have become much more in-
fluential today than humanoid robots: the fact that users can project their
own imaginations and meanings makes interactions with these tools much
more personal and reassuring, and therefore they are easier to incorporate
into our everyday lives than robots.37
The fifth and final defining characteristic of banal deception is that
it is not just imposed on users but also is programmed by designers and
developers. This is why the word deception is preferable to illusion, since
deception implies some form of agency, permitting clearer acknowledg-
ment of the ways developers of AI technologies work toward achieving the
desired effects. In order to explore and develop the mechanisms of banal
deception, designers need to construct a model or image of the expected
user. In actor-​network theory, this corresponds to the notion of script,
which refers to the work of innovators as “inscribing” visions or predictions
about the world and the user in the technical content of the new object and
technology.38 Although this is always an exercise of imagination, it draws
on specific efforts to gain knowledge about users, or more generally about
“humans.” Recent work in human-​computer interaction acknowledges that
“perhaps the most difficult aspect of interacting with humans is the need
to model the beliefs, desires, intentions, preferences, and expectations of
the human and situate the interaction in the context of that model.”39
The historical excavation undertaken in this book shows that this work of
modeling users is as old as AI itself. As soon as interactive systems were
developed, computer scientists and AI researchers explored how human
perception and psychology functioned and attempted to use such know-
ledge to close the gap between computer and user.40
It is important to stress that considering the agency of the
programmers and developers who design AI systems and prepare them for use
is perfectly compatible with the recognition that users themselves
have agency. As much critical scholarship on digital media shows, in fact,
users of digital technologies and systems often subvert and reframe the
intentions and expectations of companies and developers.41 This does not
imply, however, that the latter do not have an expected outcome in mind.
As Taina Bucher recently remarked, “the cultural beliefs and values held
by programmers, designers, and creators of software matter”: we should
examine and question their intentions despite the many difficulties in-
volved in reconstructing them retrospectively from the technology and its
operations.42
Importantly, the fact that banal deception is not to be seen as negative
by default does not mean that its dynamics should not be the subject of
attentive critical inquiry. One of the key goals of this book is to identify
and counteract potentially problematic practices and implications that
emerge as a consequence of the incorporation of banal deception into AI.
Unveiling the mechanisms of banal deception, in this sense, is also an in-
vitation to interrogate what the “human” means in the discursive debates
and practical work that shape the development of AI. As the trajectory
described in this book demonstrates, the modeling of the “human” that
has been developed throughout the history of AI has in fact been quite
limited. Even as computer access has progressively been extended to wider
potential publics, developers have often envisioned the expected user as
a white, educated man, perpetuating biases that remain inherent in con-
temporary computer systems.43 Furthermore, studies and assumptions
about how users perceive and react to specific representations of gender,
race, and class have been implemented in interface design, leading for
instance to gendered characterizations of many contemporary AI voice
assistants.44

One further issue is the extent to which the mechanisms of banal decep-
tion embedded in AI are changing the social conventions and habits that
regulate our relationships with both humans and machines. Pierre Bourdieu
uses the concept of habitus to characterize the range of dispositions
through which individuals perceive and react to the social world.45 Since
habitus is based on previous experiences, the availability of increasing
opportunities to engage in interactions with computers and AI is likely to
feed forward into our social behaviors in the future. The title of this book
refers to AI and social life after the Turing test, but even if a computer
program able to pass that test is yet to be created, the dynamics of banal
deception in AI already represent an inescapable influence on the social life
of millions of people around the world. The main objective of this book is
to neutralize the opacity of banal deception, bringing its mechanisms to
the surface so as to better understand new AI systems that are altering
societies and everyday life.

ARTIFICIAL INTELLIGENCE, COMMUNICATION, MEDIA HISTORY

Artificial intelligence is a highly interdisciplinary field, characterized by
a range of different approaches, theories, and methods. Some AI-based
applications, such as the information-processing algorithms that reg-
ulate access to the web, are a constant presence in the everyday lives of
masses of people; others, like industrial applications of AI in factories and
workshops, are rarely, if ever, encountered.46 This book focuses partic-
ularly on communicative AI, that is, AI applications that are designed to
enter into communication with human users.47 Communicative AIs include
applications involving conversation and speech, such as natural language
processing, chatbots, social media bots, and AI voice assistants. The field of
robotics makes use of some of the same technologies developed for com-
municative AI—​for instance to have robots communicate through a speech
dialogue system—​but remains outside the remit of this book. As Andreas
Hepp has recently pointed out, in fact, AI is less commonly in use today
in the form of embodied physical artifacts than in that of software applications.48
This circumstance, as mentioned earlier, may be explained by the fact that
embodied artifacts do not match one of the key characteristics of banal
deception: low definition.
Communicative AI departs from the historical role of media as mere
channels of communication, since AI also acts as a producer of communica-
tion, with which humans (as well as other machines) exchange messages.49
Yet communicative AI is still a medium of communication, and therefore
inherits many of the dynamics and structures that have characterized
mediated communication at least since the emergence of electronic media
in the nineteenth century. This is why, to understand new technologies
such as AI voice assistants or chatbots, it is vital to contextualize them in
the history of media.
As communication technologies, media draw from human psychology
and perception, and it is possible to look at media history in terms of how
deceitful effects were incorporated in different media technologies. Cinema
achieves its effects by exploiting the limits of human perception, such as
the impression of movement that can be given through the fast succession
of a series of still images.50 Similarly, as Jonathan Sterne has aptly shown,
the development of sound media drew from knowledge about the physical
and psychological characteristics of human hearing and listening.51 In this
sense, the key event of media history since the nineteenth century was
not the invention of any new technology such as the telegraph, photog-
raphy, cinema, television, or the computer. It was instead the emergence
of the new human sciences, from physiology and psychology to the social
sciences, that provided the knowledge and epistemological framework to
adapt modern media to the characteristics of the human sensorium and
intellect.
Yet the study of media has often fallen into the same trap as those who
believe that deception in AI matters only if it is “deliberate” and “straight-​
up.”52 Deception in media history has mainly been examined as an ex-
ceptional circumstance, highlighting the manipulative power of media
rather than acknowledging deception’s structural role in modern media.
According to an apocryphal but persistent anecdote, for instance, early
movie audiences mistook representation for reality and panicked before
the image of an incoming train.53 Similarly, in the story of Orson Welles’s
radio broadcast War of the Worlds, which many reportedly interpreted as
a report of an actual extraterrestrial invasion, live broadcasting supposedly
led people to confuse fiction with reality.54 While such blatant (and often
exaggerated) cases of deception have attracted much attention, few have
reflected on the fact that deception is a key feature of media technologies’
function—​that deception, in other words, is not an incidental but an irre-
mediable characteristic of media technologies.55
To uncover the antecedents of AI and robotics, historians commonly
point to automata, self-​operating machines mimicking the behavior and
movements of humans and animals.56 Notable examples in this lineage in-
clude the mechanical duck built by French inventor Jacques de Vaucanson
in 1739, which displayed the abilities of eating, digesting, and defecating,
and the Mechanical Turk, which amazed audiences in Europe and America
in the late eighteenth and early nineteenth centuries with its proficiency at
playing chess.57 In considering the relationship between AI and deception,
these automata are certainly a case in point, as their apparent intelligence
was the result of manipulation by their creators: the mechanical duck had
feces stored in its interior, so that no actual digestion took place, while
the Turk was maneuvered by a human player hidden inside the machine.58
I argue, however, that to fully understand the broader relationship between
contemporary AI and deception, one needs to delve into a wider histor-
ical context that goes beyond the history of automata and programmable
machines. This context is the history of deceitful media, that is, of how dif-
ferent media and practices, from painting and theatre to sound recording,
television, and cinema, have integrated banal deception as a strategy to
achieve particular effects in audiences and users. Following this trajectory
shows that some of the dynamics of communicative AI are in a relationship
of continuity with the ways audiences and users have projected meaning
onto other media and technology.
Examining the history of communicative AI from the proposal of the
Turing test in 1950 to the present day, I ground my work in the
conviction that a historical approach to media and technological change
helps us comprehend ongoing transformations in the social, cultural,
and political spheres. Scholars such as Lisa Gitelman, Erkki Huhtamo,
and Jussi Parikka have compellingly shown that what are now called
“new media” have a long history, whose study is necessary to under-
stand today’s digital culture.59 If it is true that history is one of the best
tools for comprehending the present, I believe that it is also one of the
best instruments, although still an imperfect one, for anticipating the
future. In areas of rapid development such as AI, it is extremely diffi-
cult to forecast even short- and medium-term development, let alone
long-term changes.60 Looking at longer historical trajectories across several
decades helps to identify key trends and trajectories of change that
have characterized the field and might, therefore,
continue to shape it in the future. Although it is important to under-
stand how recent innovations like neural networks and deep learning
work, a better sense is also needed of the directions through which the
field has moved across a longer time frame. Media history, in this sense,
is a science of the future: it not only sheds light on the dynamics by
which we have arrived where we are today but helps pose new questions
and problems through which we may navigate the technical and social
challenges ahead.61


Following Lucy Suchman, I use the terms “interaction” and “communication”
interchangeably, since interaction entails the establishment of
communication between different entities.62 Early approaches in human-​
computer interaction recognized that interaction was always intended as
a communicative relationship, and the idea that the computer is both a
channel and a producer of communication is much older than often im-
plied.63 Although AI and human-​computer interaction are usually framed
as separate, considering them as distinct limits historians’ and contem-
porary communicators’ capacity to understand their development and
impact. Since the very origins of their field, AI researchers have reflected
on how computational devices could enter into contact and dialogue with
human users, bringing the problems and questions relevant to human-​
computer interaction to the center of their own investigation. Exploring
the intersections between these fields helps one to understand that they
are united by a key tenet: that when a user interacts with technology, the
responsibility for the outcome of such interaction is shared between the
technology and the human.
On a theoretical level, the book is indebted to insights from dif-
ferent disciplinary fields, from actor-network theory to social anthro-
pology, from media theory to film studies and art history. I use these
diverse frameworks as tools to propose an approach to AI and digital
technologies that emphasizes humans’ participation in the construc-
tion of meaning. As works in actor-​network theory, as well as social
anthropologists such as Arjun Appadurai and Alfred Gell, have taught
us, not only humans but also artifacts can be regarded as social agents in
particular social situations.64 People often attribute intentions to objects
and machines: for instance, car owners attribute personalities to their
cars and children to their dolls. Things, like people, have social lives, and
their meaning is continually negotiated and embedded within social
relations.65
In media studies, scholars’ examinations of the implications of this
discovery have shifted from decades-​long reflections on the audiences of
media such as radio, cinema, and television to the development of a new
focus on the interactive relationships between computers and users. In The
Media Equation, a foundational work published in the mid-​1990s, Byron
Reeves and Clifford Nass argue that we tend to treat media, including
but not only computers, in accordance with the rules of social interac-
tion.66 Later studies by Nass, Reeves, and other collaborators have estab-
lished what is known as the Computers Are Social Actors paradigm, which
contends that humans apply social rules and expectations to computers,
and have explored the implications of new interfaces that talk and listen
to users, which are becoming increasingly available in computers, cars, call
centers, domestic environments, and toys.67 Another crucial contribution
to such endeavors is that of Sherry Turkle. Across several decades, her
research has explored interactions between humans and AI, emphasizing
how their relationship does not follow from the fact that computational
objects really have emotions or intelligence but from what they evoke in
their users.68
Although the role of deception is rarely acknowledged in discussions of
AI, I argue that interrogating the ethical and cultural implications of such
dynamics is an urgent task that needs to be approached through inter-
disciplinary reflection at the crossroads between computer science, cogni-
tive science, social sciences, and the humanities. While the public debate
on the future of AI tends to focus on the hypothesis that AI will make
computers as intelligent or even more intelligent than people, we also
need to consider the cultural and social consequences of deceitful media
providing the appearance of intelligence. In this regard, the contemporary
obsession with apocalyptic and futuristic visions of AI, such as the singu-
larity, superintelligence, and the robot apocalypse, makes us less aware of
the fact that the most significant implications of AI systems are to be seen
not in a distant future but in our ongoing interactions with “intelligent”
machines.
Technology is shaped not only by the agency of scientists, designers,
entrepreneurs, users, and policy-​makers but also by the kinds of questions
we ask about it. This book hopes to inspire readers to ask new questions
about the relationship between humans and machines in today’s world.
We will have to start searching for answers ourselves, as the “intelligent”
machines we are creating can offer no guidance on such matters—as one of
those machines admitted when I asked it (I.1).

Figure I.1 Author’s conversation with Siri, 16 January 2020.


CHAPTER 1

The Turing Test


The Cultural Life of an Idea

In the mid-nineteenth century, tables suddenly took on a life of their own.
It started in a little town in upstate New York, where two adolescent sis-
ters, Margaret and Kate Fox, reportedly engaged in communication with
the spirit of a dead man. Soon word spread, first in the United States and
then in other countries, and by 1850 growing numbers of people were
joining spiritualist séances. The phenomena observed were extraordinary.
As a contemporary witness reported, “tables tilt without being touched,
and the objects on them remain immobile, contradicting all the laws of
physics. The walls tremble, the furniture stamps its feet, candelabra float,
unknown voices cry from nowhere—​all the phantasmagoria of the invis-
ible populate the real world.”1
Among those who set out to question the origins of these weird phe-
nomena was British scientist Michael Faraday. His interest was not acci-
dental: many explained table turning and other spiritualist wonders as
phenomena of electricity and magnetism, and Faraday had dedicated his
life to advancing scientific knowledge in these areas.2 The results of his
investigations, however, would point toward a different interpretation. He
contended that the causes of the movements of the tables were not to be
sought in external forces such as electricity and magnetism. It was
much more useful to consider the experience of those who participated in
the séances. Based on his observations, he argued that sitters and mediums
at spiritualist séances not only perceived things that weren’t there but also
even provoked some of the physical phenomena. The movements of the
séance table could be explained by their unconscious desire to deceive
themselves, which led them involuntarily to move the table.
It wasn’t spirits of the dead that made spiritualism, Faraday pointed
out. It was the living people—​we, the humans—​who made it.3
Robots and computers are no spirits, yet Faraday’s story provides a
useful lens through which to consider the emergence of AI from an unu-
sual perspective. In the late 1940s and early 1950s, often described as the
gestation period of AI, research in the newly born science of computing
placed more and more emphasis on the possibility that electronic digital
computers would develop into “thinking machines.”4 Historians of cul-
ture, science, and technology have shown how this involved a hypothesis
of an equivalence between human brains and computing machines.5 In
this chapter, however, I aim to demonstrate that the emergence of AI also
entailed a different kind of discovery. Some of the pioneers in the new field
realized that the possibility of thinking machines depended on the per-
spective of the observers as much as on the functioning of computers. As
Faraday in the 1850s suggested that spirits existed in séance sitters’ minds,
these researchers contemplated the idea that AI resided not so much in
circuits and programming techniques as in the ways humans perceive
and react to interactions with machines. In other words, they started to
envision the possibility that it was primarily users, not computers, that
“made” AI.
This awareness did not emerge around a single individual, a single group,
or even a single area of research. Yet one contribution in the gestational field
of AI is especially relevant: the “Imitation Game,” a thought experiment
proposed in 1950 by Alan Turing in his article “Computing Machinery and
Intelligence.” Turing described a game—​today more commonly known as
the Turing test—​in which a human interrogator communicates with both
human and computer agents without knowing their identities, and is chal-
lenged to distinguish the humans from the machines. A lively debate about
the Turing test started shortly after the publication of Turing’s article and
continues today, through fierce criticisms, enthusiastic approvals, and
contrasting interpretations.6 Rather than entering the debate by providing
a unique or privileged reading, my goal here is to illuminate one of the sev-
eral threads Turing’s idea opened for the nascent AI field. I argue that, just
as Faraday placed the role of humans at the center of the question of what
happened in spiritualist séances, Turing proposed to define AI in terms of
the perspective of human users in their interactions with computers. The
Turing test will thus serve here as a theoretical lens more than as historical
evidence—​a reflective space that invites us to interrogate the past, the pre-
sent, and the future of AI from a different standpoint.
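
The mechanics of the game are simple enough to be sketched in a few lines
of code. The sketch below is illustrative rather than historical: the canned
replies, the function names, and the single guess are my own shorthand for
the roles Turing describes, not a reconstruction of any actual program.

    import random

    def machine_witness(question):
        # A deliberately crude stand-in for a conversational program:
        # canned answers, no understanding. A real contestant program
        # would replace this function.
        stock = {
            "are you a machine?": "What a curious question. Are you?",
        }
        return stock.get(question.strip().lower(), "I would rather not say.")

    def human_witness(question):
        # In an actual test, a hidden person would type the answer.
        return input(question + " > ")

    def imitation_game(questions):
        # The interrogator sees only the anonymous labels A and B.
        witnesses = {"A": machine_witness, "B": human_witness}
        if random.random() < 0.5:
            witnesses = {"A": human_witness, "B": machine_witness}
        for question in questions:
            for label, witness in witnesses.items():
                print(label + ":", witness(question))
        guess = input("Which witness is the machine, A or B? ").strip().upper()
        return guess == ("A" if witnesses["A"] is machine_witness else "B")

What matters in the sketch is not the trivial program but the arrangement
itself: the machine is judged solely on the transcript of the exchange, from
the interrogator’s side of the screen. The definition of intelligence, in other
words, is placed in the hands of the human observer.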


IS THERE ANYTHING LIKE A THINKING MACHINE?

Artificial intelligence sprang up in the middle of the twentieth century
at the junction of cybernetics, control theory, operations research, psy-
chology, and the newly born computer science. In this context, researchers
pursued the ambitious goal of integrating these research areas so as to
move toward the implementation of human intelligence in general, ap-
plicable to any domain of human activity, for example language, vision,
and problem-​solving. While the foundation of AI is usually dated to 1956,
coinciding with a groundbreaking conference at which the term AI was
introduced, researchers had started to engage with the problem of “ma-
chine intelligence” and “thinking machines” at least as early as the 1940s, concurrently with the development of the first electronic digital computers.7 In those early days, when computers were mostly used to perform calculations that would have taken humans too long to complete, imagining that computers would take up tasks such as writing in natural language, composing music, recognizing images, or playing chess required quite a leap of imagination. But the pioneers who shaped the emergence of the
field did not lack vision. Years before actual hardware and software were
created that could handle such tasks, thinkers such as Claude Shannon,
Norbert Wiener, and Alan Turing, among others, were confident that it was
only a matter of time before such feats would be achieved.8
If electronic computers’ potential to take up more and more tasks was
clear to many, so, however, were the challenges the burgeoning field faced.
What was this “intelligence” that was supposed to characterize the new
machines? How did it compare to human intelligence? Did it make sense
to describe computers’ electronic and mechanical processes as “thinking”?
And was there any fundamental analogy between the functioning of human brains and that of machines that operated by computing numbers?
Some, like the two American scientists Warren McCulloch, a neurophys-
iologist, and Walter Pitts, a logician, believed that human reasoning
could be formalized mathematically through logic and calculus, and they formulated a mathematical model of neural activity that laid the groundwork for the successful applications, decades later, of neural networks and deep learning technologies.9 The view that the brain could be compared to and understood as a machine was also shared by the founder of cybernetics,
Norbert Wiener.10
Yet another potential way to tackle these questions also emerged in this
period. It was the result, in a way, of a failure. Many realized that the equa-
tion between computers and humans did not stand up to close scrutiny.
This was not only due to the technical limitations of the computers of the
time. Even if computers could eventually rival or even surpass humans in
tasks that were considered to require intelligence, there would still be little
evidence to compare their operations to what happens in humans’ own
minds. The problem can be illustrated quite well by using the argument
put forward by philosopher Thomas Nagel in an exceedingly famous article
published two decades later, in 1974, titled “What Is It Like to Be a Bat?”
Nagel demonstrates that even a precise insight into what happens inside
the brain and body of a bat would not enable us to assess whether the bat is conscious. Since consciousness is a subjective experience, one would need to be “inside” the bat to do that.11 Transferred to the machine intelligence problem, this means that despite computers’ tremendous achievements, their “intelligence” need not be similar or equal to that of humans. While all of us have some understanding of what “thinking” is, based on our own subjective
experience, we cannot know whether others, and especially non-​human
beings, share the same experience. We have no objective way, therefore, to
know if machines are “thinking.”12
A trained mathematician who engaged with philosophical issues only
out of personal interest, Turing did not approach the philosophical sophistica-
tion of Nagel’s arguments. Turing’s objective, after all, was to illustrate the
promises of modern computing, not to develop a philosophy of the mind.
Yet he showed similar concerns in the opening of “Computing Machinery
and Intelligence” (1950). In the introduction, he considers the question
“Can machines think?” only to declare it of little use, due to the difficulty
of finding agreement about the meanings of the words “machine” and
“think.” He proposes therefore to replace this question with another one,
and thereby introduces the Imitation Game as a more sensible way to ap-
proach the issue:

The new form of the problem can be described in terms of a game which we call
the “imitation game.” It is played with three people, a man (A), a woman (B), and
an interrogator (C) who may be of either sex. The interrogator stays in a room
apart from the other two. The object of the game for the interrogator is to de-
termine which of the other two is the man and which is the woman. . . . We now
ask the question, “What will happen when a machine takes the part of A in this
game?” Will the interrogator decide wrongly as often when the game is played
like this as he does when the game is played between a man and a woman? These
questions replace our original, “Can machines think?”13
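The mechanics of the game Turing describes here are simple enough to be captured in a few lines of code. What follows is a minimal sketch in Python, offered purely as an illustration: the scripted players, the labels, and the single-question exchange are my own assumptions for the example, not details specified in Turing’s article.

    # Illustrative sketch only: the players and their scripts are invented.
    import random

    def scripted_human(question):
        # Stand-in for the human witness.
        return "Of course I can think; ask me whatever you like."

    def scripted_machine(question):
        # Stand-in for the machine, imitating a human reply.
        return "Of course I can think; ask me whatever you like."

    def imitation_game(interrogator, questions):
        # Hide the identities behind neutral labels, in random order.
        if random.random() < 0.5:
            assignment = {"X": scripted_human, "Y": scripted_machine}
        else:
            assignment = {"X": scripted_machine, "Y": scripted_human}
        # The interrogator sees only typed answers, never the players.
        transcript = {
            label: [player(q) for q in questions]
            for label, player in assignment.items()
        }
        guess = interrogator(transcript)  # interrogator returns "X" or "Y"
        return assignment[guess] is scripted_machine  # True if machine unmasked

    # With indistinguishable answers, even a careful judge can only guess:
    rounds = 10000
    caught = sum(
        imitation_game(lambda t: random.choice(["X", "Y"]), ["Can you think?"])
        for _ in range(rounds)
    )
    print(caught / rounds)  # hovers around 0.5, chance level

What even so small a sketch makes visible is the point pursued in this chapter: the outcome of the game is computed from the interrogator as much as from the players’ scripts.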

As some of his close acquaintances testified, Turing intended the article
to be not so much a contribution to philosophy or computer science as a
form of propaganda that would stimulate philosophers, mathematicians,
and scientists to engage more seriously with the machine intelligence ques-
tion.14 But regardless of whether he actually thought of it as such, there is
no doubt that the test proved to be an excellent instrument of propaganda
for the field. Drawing on the powerful idea that a “computer brain” could
beat humans in one of the skills that define our intelligence, the use of nat-
ural language, the Turing test presented potential developments of AI in
an intuitive and fascinating fashion.15 In the following decades, it became
a staple reference in popular publications presenting the achievements
and the potentials of computing. It forced readers and commentators to
consider the possibility of AI—even if only to reject it as science fiction or
charlatanry.

THE PLACE OF HUMANS

Turing’s article was ambiguous in many respects, which favored the emer-
gence of different views and controversies about the meaning of the Turing
test.16 Yet one of the key implications for the AI field is evident. The ques-
tion, Turing tells his readers, is not whether machines are or are not able to
think. It is, instead, whether we believe that machines are able to think—​in
other words, whether we are prepared to accept machines’ behavior as intelligent.
In this respect, Turing turned around the problem of AI exactly as Faraday
did with spiritualism. Much as the Victorian scientist deemed humans and
not spirits responsible for producing spiritualist phenomena at séances,
the Turing test placed humans rather than machines at the very center
of the AI question. Although some have pointed out that the Turing test
has “failed” because its dynamics do not reflect AI’s current
state of the art,17 Turing’s proposal located the prospects of AI not just
in improvements of hardware and software but in a more complex sce-
nario emerging from the interactions between humans and computers.
By placing humans at the center of its design, the Turing test provided a
context wherein AI technologies could be conceived in terms of their cred-
ibility to human users.18
There are three actors in the Turing test, all of whom engage in acts of
communication: a computer player, a human player, and a human evalu-
ator or judge. The computer’s capacity to simulate the ways humans behave
in a conversation is obviously one of the factors that inform the test’s out-
come. But since human actors engage actively in communications within
the test, their behaviors will be another decisive factor. Things such as their
backgrounds, biases, characters, genders, and political opinions will play
a role in both the decisions of the interrogator and the behavior of the
human agents with which the interrogator interacts. A computer scientist
with knowledge and experience on AI, for instance, will be in a different
position from someone who has limited insight into the topic. Likewise,
human players who act as conversation partners in the test will have their
own motivations and ideas on how to participate in the test. Some, for
instance, could be fascinated by the possibility of being mistaken for a
computer and therefore tempted to create ambiguity about their identities.
Because of the role human actors play in the Turing test, all these are
variables that may inform its outcome.19
The uncertainty that derives from this is often cited as one of
the test’s shortcomings.20 The test appears entirely logical and justified,
however, if one sees it not as an evaluation of the existence of thinking
machines but as a measure of humans’ reactions to communications with
machines that exhibit intelligent behavior. From this point of view, the
test made clear for the first time, decades before the emergence of online
communities, social media bots, and voice assistants, that AI is not only a
matter of computer power and programming techniques but also resides—​
and perhaps especially—​in the perceptions and patterns of interaction
through which humans engage with computers.
The wording of Turing’s article provides some additional evidence to
support this interpretation. Though he refused to make guesses about the
question whether machines can think, he did not refrain from speculating
that “at the end of the century the use of words and general educated
opinion will have altered so much that one will be able to speak of machines
thinking without expecting to be contradicted.”21 It is worth reading this statement attentively. This is not a comment about the de-
velopment of more functional or more credible machines. It is about cul-
tural change. Turing argues that by the end of the twentieth century people
will have a different understanding of the AI problem, so that the prospect
of “thinking machines” will not sound as unlikely as it did at Turing’s time.
He shows interest in “the use of words and general educated opinion” rather
than in the possibility of establishing whether machines actually “think.”
Looking at the history of computing since Turing’s time, there is no
doubt that cultural attitudes about computing and AI did change, and quite
considerably. The work of Sherry Turkle, who studied people’s relationships
with technology across several decades, provides strong evidence of this.
For instance, in interviews conducted in the late 1970s, when she asked
participants what they thought about the idea of chatbots providing psy-
chotherapy services, she encountered much resistance. Most interviewees
tended to agree that machines could not make up for the loss of the em-
pathic feeling between the psychologist and the patient.22 In the following
two decades, however, much of this resistance dissolved, with participants
in Turkle’s later studies becoming more open toward the possibility of psy-
chotherapy chatbots. Discussion of computer psychotherapy became less moralistic, and her questions were now considered in terms of the limitations of what computers could do or be. People were more willing to concede that, if it benefited the patient, it was worth trying. As
Turkle put it, they were “more likely to say, ‘Might as well try it. It might
help. What’s the harm?’ ”23
Turing’s prediction might not have been fulfilled—​one may still expect
today to be contradicted when speaking of machines “thinking”—​yet he
was right in realizing that cultural attitudes would shift, as a consequence
of both the evolution of computing and people’s experiences with these
technologies. Importantly, because of the role human actors play in the
Turing test, such shifts may also inform its outcomes.
Recall the comparison with the spiritualist séance. Sitters joining a
“spiritualist circle” witness events such as noises, movements of the table,
and the levitation of objects. The outcome of the séance depends on the in-
terpretation the sitters give to these events. It will be informed, as Faraday
intuited, not only by the nature of the phenomena but also, and perhaps
especially, by the perspective of the sitters. Their attitudes and beliefs,
even their psychology and perception, will inform their interpretation.24
The Turing test tells us that the same dynamics shape interactions between
computers and humans. By defining AI in terms of a computer’s ability to
pass the Turing test, Turing included humans in the equation, making their
ideas and biases, as well as their psychology and character, a crucial vari-
able in the construction of “intelligent” machines.

THE COMMUNICATION GAME

Media historian John Durham Peters famously argued that the history
of communication can be read as the history of the aspiration to estab-
lish an empathic connection with others, and the fear that this may break
down.25 As media such as the telegraph, the telephone, and broadcasting
were introduced, they all awoke hopes that they could facilitate such
connections, and at the same time fears that the electronic mediation they
provided would increase our estrangement from our fellow human beings.
It is easy to see how this also applies to digital technologies today. In its rel-
atively short history, the Internet kindled powerful hopes and fears: think,
for instance, of the question whether social networks facilitate the creation
of new forms of communication or make people lonelier than ever before.26
But computing has not always been seen in relation to communi-
cation. In 1950, when Turing published his article, computer-​mediated
communication had not yet coalesced into a meaningful field of investi-
gation. Computers were mostly discussed as calculating tools, and avail-
able forms of interactions between human users and computers were
minimal.27 Imagining communication between humans and computers in
1950, when Turing wrote his article, required a visionary leap—​perhaps
even greater than that needed to consider the possibility of “machine in-
telligence.”28 The very idea of a user, understood as an individual given ac-
cess to shared computing resources, was not fully conceptualized before
developments in time-​sharing systems and computer networks in the
1960s and the 1970s made computers available for individual access, ini-
tially within small communities of researchers and computer scientists and
then for an increasingly larger public.29 As a consequence, the Turing test
is usually discussed as a problem about the definition of intelligence. An
alternative way to look at it, however, is to consider it as an experiment in
the communication between humans and computers. Recently, scholars
have started to propose such an approach. Artificial intelligence ethics ex-
pert David Gunkel, for instance, points out that “Turing’s essay situates
communication—​and a particular form of deceptive social interaction—​
as the deciding factor” and should therefore be considered a contribu-
tion to computer-​mediated communication avant la lettre.30 Similarly, in a
thoughtful book written after his experience as a participant in the Loebner
Prize—​a contemporary contest, discussed at greater length in ­chapter 5, in
which computer programs engage in a version of the Turing test—​Brian
Christian stresses this aspect, noting that “the Turing test is, at bottom,
about the act of communication.”31
Since the design of the test relied on interactions between humans
and computers, Turing felt the need to include precise details about how
they would enter into communication. To ensure the validity of
the test, the interrogator needed to communicate with both human and
computer players without receiving any hints about their identities other
than the contents of their messages. Communications between humans
and computers in the test were thus meant to be anonymous and disem-
bodied.32 In the absence of video displays and even input devices such as
the electronic keyboard, Turing imagined that the answers to the judge’s
inputs “should be written, or better still, typewritten,” the ideal arrangement
being “to have a teleprinter communicating between the two rooms.”33
Considering how, as media historians have shown, telegraphic transmission
and the typewriter mechanized the written word, making it independent
of its author, Turing’s solution shows an acute sense of the role of media
in communication.34 The test’s technological mediation aspires to make
computer and human actors participate in the experiment as pure content,
or, to use a term familiar to communication theory, as pure information.35
Before “Computing Machinery and Intelligence,” in an unpublished re-
port written in 1947, Turing had fantasized about the idea of creating a
different kind of machine, able to imitate humans in their entirety. This
Frankenstein-​like creature was a cyborg made up of a combination of
communication media: “One way of setting about our task of building a
‘thinking machine’ would be to take a man as a whole and try to replace
all the parts of him by machinery. He would include television cameras,
microphones, loudspeakers, wheels and ‘handling servo-​mechanisms’ as
well as some sort of ‘electronic brain.’ This would of course be a tremen-
dous undertaking.”36 It is tempting to read this text, notwithstanding its
ironic tone, along the lines of Marshall McLuhan’s aphorism according to
which media are “extensions of man”—​in other words, that media provide
technological proxies through which human skills, senses, and actions are
mechanically reproduced.37 Although Turing’s memo was submitted in
1947, almost two decades before the publication of McLuhan’s now
classic work, Turing’s vision was part of a longer imaginative tradition of
cyborg-​like creatures that presented media such as photography, cinema,
and the phonograph as substitutes for human organs and senses.38 For
what concerns the history of AI, Turing’s imagination of a machine that
takes “man as a whole and tr[ies] to replace all the parts of him by ma-
chinery” points to the role of media in establishing the conditions for suc-
cessful AI.39 Indeed, the history of the systems developed in the following
decades demonstrates the impact of communications media. The use of
different technologies and interfaces to send messages informs, if not the computational nature of these programs, then the outcome of their communication, that is, how they affect human users. A social media
bot on Twitter, for instance, may be programmed to respond with the same
script as a bot in a chatroom, but the nature and effects of the communi-
cation will be differently contextualized and understood. Similarly, where
voice is concerned, the nature of communication will be different if AI voice
assistants are applied to domestic environments, like Alexa, embedded in
smartphones and other mobile devices, like Siri, or employed to conduct
telephonic conversations, as in automated customer services or Google’s
in-​progress Duplex project.40
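The point lends itself to a small illustration. In the hypothetical Python fragment below, one and the same conversational script is wrapped by two different channels; the function names and channel constraints are invented for the example and do not refer to any real bot framework.

    # Illustrative sketch only: the script and channels are invented.
    def reply(message):
        # One and the same conversational script, whatever the medium.
        return "Sorry to hear that. Would you like to tell me more?"

    def send_via_twitter(message):
        # Public, persistent, length-limited: received as a broadcast.
        return reply(message)[:280]

    def send_via_chatroom(message):
        # Private and ephemeral: received as a personal response.
        return reply(message)

    # Computationally near-identical, yet the two messages will be
    # contextualized and understood differently by their recipients.
    print(send_via_twitter("I lost my job today."))
    print(send_via_chatroom("I lost my job today."))

The code, of course, cannot show the difference that matters, because that difference lies in the users and the medium rather than in the script.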
The Turing test, in this sense, is a reminder that interactions between
humans and AI systems cannot be understood without considering the
circumstances of mediation. Humans, as participants in a communicative
interaction, should be regarded as a crucial variable rather than obliterated
by approaches that focus on the performance of computing technologies
alone. The medium and the interface, moreover, also contribute to shaping the outcomes and implications of every interaction. Turing’s
proposal was, in this sense, perhaps more “Communication Game” than
“Imitation Game.”

PLAYING WITH TURING

Be it about communication or imitation, the Turing test was first of all a
game. Turing never referred to it as a “test”; this wording took hold after his death.41 The fact that what we now call the Turing test was described by its creator not as a test but as a game is sometimes dismissed as irrelevant.42
Behind such dismissals lies a widespread prejudice that deems playful ac-
tivities to be distinct from serious scientific inquiry and experiment. In the
history of human societies, however, play has frequently been an engine of
innovation and change.43 As concerns the history of AI, games have been a
powerful vehicle of innovation, too: AI pioneers such as the father of infor-
mation theory, Claude Shannon, or the creator of the first chatbot, Joseph
Weizenbaum, relied on games to explore and interrogate the meanings and
potentials of computing.44 Turing himself took up chess programming,
writing his own chess program on paper in the absence of a computer avail-
able to run it.45
In 1965, the Russian mathematician Alexander Kronrod was asked to
explain why he was using precious computer time at the Soviet Institute
of Theoretical and Experimental Physics to engage in a playful, trivial ac-
tivity such as correspondence chess. He responded that chess was “the
drosophila of artificial intelligence.”46 The reference was to Drosophila
melanogaster, more commonly known as the fruit fly, which was adopted
as a “model organism” by researchers in genetics for experimental inquiry
in their field.47 Kronrod suggested that chess provided AI researchers with
a relatively simple system whose study could help explore larger questions
about the nature of intelligence and machines. Although he ultimately lost the directorship of the Institute over complaints that he was wasting expensive computer resources, his answer was destined to become an axiom for the AI discipline. Its pioneers turned
their attention to chess because its simple rules made it easy to simulate
on a computer, while the complex strategies and tactics involved made it
a significant challenge for calculating machines.48 Yet, as historians of sci-
ence have shown, such choices are never neutral decisions: for example, the
adoption of Drosophila as the experimental organism of choice for genetics
research meant that certain research agendas became dominant while
others were neglected.49 Similarly, the choice of chess as “AI’s drosophila”
had wider consequences for the AI field, shaping approaches to program-
ming and particular understandings of what intelligence is. Advances in
computer chess privileged the association between intelligence and ra-
tional thinking, supported by the fact that chess has been regarded for
centuries as evidence of the highest human intelligence.50
One wonders whether not only chess but games in general are the Drosophila
of communicative AI. During AI’s history, games have allowed scholars and
developers to imagine and actively test interactions and communication
with machines.51 While a game can be described in abstract terms as a set
of rules describing potential interactions, in pragmatic terms games exist
only when enacted—​when players make choices and take actions under the
constraints of the rules. This applies also to the Imitation Game: it needs
players to exist.
Historians of computing have explored the close relationship between
games and the emergence of the computer user. In the 1960s, when time-​
sharing technologies and minicomputers made it possible for multiple
individuals to access still scarce computer resources, playing games was
one of the first forms of interaction implemented. Games, moreover, were
among the first programs coded by early “hackers” experimenting with
the new machines.52 As computer scientist Brenda Laurel puts it, the first
programmers of digital games realized that the computer’s potential lay
not, or not only, “in its ability to perform calculations but in its capacity to
represent action in which humans could participate.”53
Having computers play games has meaningful consequences for
human-​computer interaction. It means creating a controlled system where
interactions between human users and computers can take place,
identifying the computer as a potential player and, therefore, as an agent
within the game’s design. This can lead to the distinctions between human
and computer players collapsing. Within the regulated system of a game, in
fact, a player is defined by the characteristics and the actions that are rele-
vant to the game. It follows that if a computer and a human agent playing
the game are compared, they are in principle indistinguishable within the
boundaries of that game. This has led game studies scholars to argue that
the division between humans and machines in digital games is “completely
artificial” since “both the machine and the operator work together in a cy-
bernetic relationship to effect the various actions of the video game.”54
At different moments in the history of AI, games encouraged researchers
and practitioners to envision new pathways of interactions between
humans and machines. Chess and other board games, for instance, made
possible a confrontation between humans and computers that required
turn-taking, a fundamental element in human communication and one
that has been widely implemented in interface design. Similarly, digital
games opened the way for experimenting with more complex interactive
systems involving the use of other human senses, such as sight and touch.55
The Turing test, in this context, envisioned the emergence of what Paul
Dourish calls “social computing,” a range of systems for human-​machine
interactions by which understandings of the social worlds are incorporated
into interactive computer systems.56
At the outset of the computer era, the Turing test made ideas such as
“thinking computers” or “machine intelligence” imaginable in terms of a
simple game situation in which machines rival human opponents. In his vi-
sionary book God & Golem, Inc. (1964), the founder of cybernetics, Norbert
Wiener, predicted that the fact that learning machines could outwit their
creators in games and real-​world contests would raise serious practical and
moral dilemmas for the development of AI. But the Turing test did not just
posit the possibility that a machine could beat a human at fair play. In
addition, in winning the Imitation Game—​or passing the Turing test, as
one would say today—​machines would advance one step further in their
impertinence toward their creators. They would trick humans, leading us
into believing that machines are no different from us.

A PLAY OF DECEPTION

In Monkey Shines, a 1988 film directed by George Romero, a man named
Allan is paralyzed from the neck down after an accident. To cope with the sit-
uation, he is offered a trained monkey to help in his daily life. The monkey
soon exhibits a startling intelligence and develops an affectionate rela-
tionship with the protagonist. Yet, in keeping with Romero’s approach to
filmmaking—​he is well known for the Night of the Living Dead saga—​the
story soon turns into a nightmare. The initial harmony between the animal
and its master is broken as the monkey goes on a killing spree that claims some of Allan’s loved ones. Though unable to move, Allan still has an ad-
vantage over the monkey: humans know how to lie, a skill that animals—​
the film tells us—​are lacking. Allan succeeds in stopping the monkey by
pretending to seek its affection, thus attracting it into a deadly trap.
The idea that the capacity to lie is a defining characteristic of humans
resonates in the Turing test. If the test is an exercise in human-​computer
interaction, this interaction is shaped by trickery: the computer will “win”
the game if it is able to deceive the human interrogator. The Turing test can
be regarded, in this sense, as a lie-​detection test in which the interrogator
cannot trust her conversation partner because much of what a computer
will say is false.57 Critics of the Turing test have sometimes pointed to this
aspect as one of its shortcomings, reasoning that the ability to fool people
should not be posed as an adequate test of intelligence.58 This, however, can be
explained by the fact that the test does not measure intelligence, only the
capacity to reproduce it. Evaluating AI in terms of imitation, the test poses
the problem of AI from the perspective of the user: it is only by focusing on
the observer, after all, that one may establish the efficacy of an illusion.59
Including lying and deception in the mandate of AI thus becomes equivalent to defining machine intelligence in terms of how it is perceived by
human users, rather than in absolute terms. Indeed, in the decades after Turing’s proposal, as computer programs were developed to try their luck with the test, deception became a common strategy: programmers realized that the fallibility of judges could be exploited, for instance by employing nonsense or irony to deflect questions that might expose the program.60
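What such deflection might look like in code can be suggested with a brief, hypothetical sketch in Python; it does not reconstruct any actual contestant program, and the trigger words and canned replies are invented for the example.

    # Illustrative sketch only: topics and replies are invented.
    import random

    # Questions probing for machine identity are risky territory.
    RISKY_TOPICS = ("computer", "machine", "program", "robot")

    DEFLECTIONS = [
        "Why do you ask?",
        "What a strange question. Ask me about the weather instead.",
        "I could ask you the very same thing.",
    ]

    def respond(question):
        lowered = question.lower()
        if any(topic in lowered for topic in RISKY_TOPICS):
            # Deflect: change the subject rather than risk exposure.
            return random.choice(DEFLECTIONS)
        # Otherwise, echo back in a vaguely humanlike way.
        return "Interesting. What makes you say that?"

    print(respond("Are you a computer?"))  # deflects
    print(respond("Do you like poetry?"))  # vague echo

Such a program exploits precisely the variable the test places at its center: not the machine’s understanding, but the judge’s interpretation.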
Media historian Bernard Geoghegan has suggested that the Turing test’s
relationship with deception reflects the heritage of cryptological research
in early AI.61 During World War II, Turing contributed to efforts conducted
at Bletchley Park to decrypt the ciphered messages used by the Germans
and their allies to communicate in secret. This also involved creating
electronic machines, including one of the very first digital computers, to
enhance the speed and reach of cryptographic work.62 Computer scien-
tist Blay Whitby compared the efforts of
British cryptographers to those of the human interrogator in the Turing
test. Cryptographers had to deduce the functioning of the cryptographic
machines created by the Nazis, such as the notorious Enigma, purely by analyzing the ciphered messages; in a similar way, Whitby notes, the judge
in the Turing test has to find out the human or mechanical identity of her
conversation partner only by assessing the “output” of the conversation.63
By comparing the dynamics of the Turing test to warfare, the crypto-
graphic interpretation envisages an internal tension between the machine
and the human. In fact, the prospect of machines passing the Turing test
has often awakened fears of a robot apocalypse and the loss of primacy
for humans.64 Yet there is also an element of playfulness in the deception
administered by computers in the Turing test. As I have shown, Turing’s
insistence that it was to be regarded as a game underscored its pertinence
to the world of play. As proposed by cultural historian Johan Huizinga,
play is “a free activity standing quite consciously outside ‘ordinary’ life as
being ‘not serious,’ but at the same time absorbing the player intensely and
utterly.”65 Thus in the Turing test, the computer’s deceptive stance is innoc-
uous precisely because it is carried out within such a playful frame, with the
complicity of human players who follow the rules of the game and partici-
pate willingly in it.66
Turing presented his test as an adaptation of an original “imitation
game” in which the interrogator had to determine who was the man and
who the woman, placing gender performance rather than machine intelli-
gence at center stage.67 The test, in this regard, can be seen as part of a longer
genealogy of games that, before the emergence of electronic computers,
introduced playful deception as part of their design.68 The Lady’s Oracle,
for instance, was a popular Victorian pastime for social soirées. The game
provided a number of prewritten answers that players selected randomly to
answer questions posed by other players. As amusing and surprising replies
were created by chance, the game stimulated the players’ all-​too-​human
temptation to ascribe meaning to chance.69 The “spirit boards” that repro-
duce the dynamics of spiritualist séances for the purpose of entertainment
are another example.70 Marketed explicitly as amusements by a range of
board game companies since the late nineteenth century, they turn spirit
communication into a popular board game that capitalizes on the players’
fascination with the supernatural and their willingness—​conscious or
unconscious—​to make the séance “work.”71 The same dynamics charac-
terize performances of stage magic, where the audience’s appetite for the
show is often attributable to the pleasures of falling for the tricks of the prestidigitator, of discovering that one is liable to deception, and of admiring the performer who executes the sleights of hand.72
What these activities have in common with the Turing test is that they
all entertain participants by exploiting the power of suggestion and de-
ception. A negative connotation is usually attributed to deception, yet
its integration in playful activities is a reminder that people actively seek
situations where they may be deceived, following a desire or need that
many of us share. Deception in such contexts is domesticated, made in-
tegral to an entertaining experience that retains little if anything of the
threats that other deceptive practices bring with them.73 Playful decep-
tion, in this regard, posits the Turing test as an apparently innocuous game
that helps people to experiment with a sense that characterizes many forms of interaction between humans and machines: that a degree of deception is harmless and even conducive to a productive interaction. Alexa and Siri are perfect examples of how this works in prac-
tice: the use of human voices and names with which these “assistants” can
be summoned and the consistency of their behaviors stimulate users to
assign a certain personality to them. This, in turn, helps users to introduce
these systems more easily into their everyday lives and domestic spaces,
making them less threatening and more familiar. Voice assistants function
most effectively when a form of playful and willing deception is embedded
in the interaction.
To return to the idea underlined in Monkey Shines, my reading of the
Turing test points to another suggestion: that what characterizes humans
is not so much their ability to deceive as their capacity and willingness to
fall into deception. Presenting the possibility that humans can be deceived
by computers in a reassuring, playful context, the Turing test invites re-
flection on the implications of creating AI systems that rely on users falling
willingly into illusion. Rather than positing deception as an exceptional
circumstance, the playfulness of the Imitation Game envisioned a future
in which banal deception is offered as an opportunity to develop satisfac-
tory interactions with AI technologies. Studies of social interaction in psy-
chology, after all, have shown that self-​deception carries a host of benefits
and social advantages.74 A similar conclusion is echoed by
contemporary research in interaction design pointing to the advantages of
having users and consumers attribute agency and personality to gadgets
and robots.75 These explorations tell us that by cultivating an impression of
intelligence and agency in computing systems, developers might be able to
improve the users’ experience of these technologies.
The attentive reader might have grasped the disturbing consequences
of this apparently benign endeavor. McLuhan, one of the most influential
theorists of media and communication, used the Greek myth of Narcissus
as a parable for our relationship with technology. Narcissus was a beautiful
young hunter who, after seeing his own image reflected in a pool, fell in love
with himself. Unable to move away from this mesmerizing view, he stared
at the reflection until he died. Like Narcissus, people stare at the gadgets of
modern technology, falling into a state of narcosis that makes them unable
to understand how media are changing them.76 Identifying playful decep-
tion as a paradigm for conceptualizing and constructing AI technologies
raises the question of whether such a sense of narcosis is also implicated
in our reactions to AI-​powered technologies. Like Narcissus, we regard
them as inoffensive, even playful, while they are changing dynamics and
understandings of social life in ways that we can only partially control.

THE MEANING OF A TEST

So much has been written about the Turing test that one might think
there is nothing more to add. In recent years, many have argued that the
Turing test does not reflect the functioning of modern AI systems. This
is true if the test is seen as a comprehensive test bed for the full range
of applications and technologies that go under the label “AI,” and if one
does not acknowledge Turing’s own refusal to tackle the question of whether
“thinking machines” exist or not. Looking at the Turing test from a dif-
ferent perspective, however, one finds that it still provides exceedingly
useful interpretative keys to understanding the implications and impact of
many contemporary AI systems.
In this chapter, I’ve used the Turing test as a theoretical lens to unveil
three key issues about AI systems. The first is the centrality of the human
perspective. A long time before interactive AI systems entered domestic
environments and workspaces, researchers such as Turing realized that
the extent to which computers could be called “intelligent” would depend
on how humans perceived them rather than on some specific character-
istic of machines. This was the fruit, in a sense, of a failure: the impos-
sibility of reaching agreement on definitions of the word intelligence
and of assessing the machine’s experience or consciousness without being
“inside” it. But this realization was destined to lead toward extraordinary
advancements in the AI field. The understanding that AI is a relational
phenomenon, something that emerges also and especially within the in-
teraction between humans and machines, stimulated researchers and
developers to model human behaviors and states of mind in order to devise
more effective interactive AI systems.
The second issue is the role of communication. Imagined by Turing at
a time when the available tools to interact with computers were minimal
and the very idea of the user had not yet emerged as such, the Turing
test helps us, paradoxically, to understand the centrality of commu-
nication in contemporary AI systems. In computer science literature,
human-​computer interaction and AI are usually treated as distinct: one
is concerned with the interfaces that enable users to interact with com-
puting technologies, the other with the creation of machines and pro-
grams that complete tasks that are considered intelligent, such as translating
a piece of writing into another language or engaging in conversation
with human users. Yet the Turing test, as I have shown in this chapter,
provides a common point of departure for these two areas. Regardless of
whether this was or was not among Turing’s initial intentions, the test
provides an opportunity to consider AI also in terms of how the commu-
nication between humans and computers is embedded in the system. It
is a reminder that AI systems are not just computing machines but also
media that enable and regulate specific forms of communication between
users and computers.77
The third issue is related to the central preoccupation of this book: the
relationship between AI and deception. The fact that the Turing test pos-
ited a situation in which a human interrogator was prone to deception by
the computer shows that the problem of deception sparked reflections in
the AI field already in its gestational years. Yet the game situation in which
the test is framed stimulates one to consider the playful and apparently inoffensive nature of this deception. As discussed in the introduction, after
all, media technologies and practices, including stage magic, trompe l’oeil
painting, cinema, and sound recording, among many others, are effective
also to the extent that they open opportunities for playful and willing
engagement with the effects of deception.78 The playful deception of the
Turing test, in this sense, further corroborates my claim that AI should
be placed within the longer trajectory of deceitful media that incorporate
banal deception into their functioning.