Deceitful Media
Artificial Intelligence and Social Life after
the Turing Test
Simone Natale
Oxford University Press is a department of the University of Oxford. It furthers
the University’s objective of excellence in research, scholarship, and education
by publishing worldwide. Oxford is a registered trade mark of Oxford University
Press in the UK and certain other countries.
DOI: 10.1093/oso/9780190080365.001.0001
ACKNOWLEDGMENTS
When I started working on this book, I had an idea about a science fic-
tion story. I might never write it, so I reckon it is just fine to give up its
plot here. A woman, Ellen, is awakened by a phone call. It’s her husband.
There is something strange in his voice; he sounds worried and somehow
out of tune. In the near future in which this story is set, artificial intelli-
gence (AI) has become so efficient that a virtual assistant can make calls
on your behalf by reproducing your own voice, and the simulation will be
so accurate as to trick even your close family and friends. Ellen and her
husband, however, have agreed that they would never use AI to communi-
cate with each other. Yet in the husband's voice that morning there is some-
thing that doesn’t sound like him. Later, Ellen discovers that her husband
has died that very night, a few hours before the time of their call. The call
must have been made by an AI assistant. Dismayed by her loss, she listens
to the conversation again and again until she finally picks up some hints to
solve the mystery. In fact, this science fiction story I haven’t written is also
a crime story. To learn the truth about her husband’s death, Ellen will need
to interpret the content of the conversation. In the process, she will also
have to establish whether the words came from her husband, from the
machine that imitated him, or from some combination of the two.
This book is not science fiction, yet like much science fiction, it is also an
attempt to make sense of technologies whose implications and meaning we
are just starting to understand. I use the history of AI—a surprisingly long
one for technologies that are often presented as absolute novelties—as a
compass to orient my exploration. I started working on this book in 2016.
My initial idea was to write a cultural history of the Turing test, but my
explorations brought exciting and unexpected discoveries that made the
final project expand far beyond that.
A number of people read and commented on early drafts of this
work. My editor, Sarah Humphreville, not only believed in this project
from the start but also provided crucial advice and precise suggestions
throughout its development. Assistant Editor Emma Hodgon was also
exceedingly helpful and scrupulous. Leah Henrickson provided feedback
on all the chapters; her intelligence and knowledge made this a much
better book. I am grateful to all who dedicated time and attention to reading
and commenting on different parts of this work: Saul Albert, Gabriele Balbi,
Andrea Ballatore, Paolo Bory, Riccardo Fassone, Andrea Guzman, Vincenzo
Idone Cassone, Nicoletta Leonardi, Jonathan Lessard, Peppino Ortoleva,
Benjamin Peters, Michael Pettit, Thais Sardá, Rein Sikveland, and Cristian
Vaccari.
My colleagues at Loughborough University have been a constant source
of support, both professionally and personally, during the book’s gestation.
I would like especially to thank John Downey for being such a generous
mentor at an important and potentially complicated moment of my career,
and for teaching me the importance of modesty and integrity in the process.
Many other senior staff members at Loughborough were very supportive
on many occasions throughout the last few years, and I wish particularly
to thank Emily Keightley, Sabina Mihelj, and James Stanyer for their con-
stant help and friendliness. Thanks also to my colleagues and friends Pawas
Bisht, Andrew Chadwick, David Deacon, Antonios Kyparissiadis, Line
Nyhagen, Alena Pfoser, Marco Pino, Jessica Robles, Paula Saukko, Michael
Skey, Elizabeth Stokoe, Vaclav Stetka, Thomas Thurnell-Read, Peter Yeandle,
and Dominic Wring, as well as to all other colleagues at Loughborough, for
making work easier and more enjoyable.
During the final stages of this project I was awarded a Visiting Fellowship
at ZeMKI, the Center for Media, Communication, and Information
Research of the University of Bremen. It was a great opportunity to discuss
my work and to have space and time to reflect and write. Conversations
with Andreas Hepp and Yannis Theocharis were particularly helpful in
clarifying and deepening some of my ideas. I thank all the ZeMKI members for
their feedback and friendship, especially but not only Stefanie Averbeck-
Lietz, Hendrik Kühn, Kerstin Radde-Antweiler, and Stephanie Seul, as well
as the other ZeMKI Fellows whose residence coincided with my stay: Peter
Lunt, Ghislain Thibault, and Samuel Van Ransbeeck.
Some portions of this book have been revised from previous publications.
In particular, parts of chapter 3 were previously published, in a significantly
different version, in the journal New Media & Society, and an earlier ver-
sion of chapter 6 was featured as a Working Paper in the Communicative
Figurations Working Papers series. I thank the reviewers and editors for
their generous feedback.
My thanks, finally, go to the many humans who acted as my companions
throughout these years, doing it so well that no machine will ever be able to
replace them. This book is especially dedicated to three of them: my brother
and sister and my partner, Viola.
INTRODUCTION
entertaining a similar tenet does not necessarily conflict with, and is often
complementary to, the idea that existing AI systems provide only the illu-
sion of human intelligence. Throughout the history of AI, many have ac-
knowledged the limitations of present systems and focused their efforts
on designing programs that would provide at least the appearance of in-
telligence; in their view, “real” or “strong” AI would come through further
progress, with their own simulation systems representing just a step in
that direction.8 Understanding how humans engage in social exchanges,
and how they can be led to treat things as social agents, became instru-
mental to overcoming the limitations of AI technologies. Researchers in AI
thus established a direction of research based on designing
technologies that cleverly exploited human perception and expectations to
give users the impression of employing or interacting with intelligent sys-
tems. This book demonstrates that looking at the development across time
of this tradition—which has not yet been studied as such—is essential to
understanding contemporary AI systems programmed to engage socially
with humans. In order to pursue this agenda, however, the problem of de-
ception and AI needs to be formulated in new terms.
When the great art historian Ernst Gombrich started his inquiry into the
role of illusion in the history of art, he realized that figurative arts emerge
within an interplay between the limits of tradition and the limits of percep-
tion. Artists have always incorporated deception into their work, drawing
on their knowledge both of convention and of mechanisms of perception
to achieve certain effects on the viewer.9 But who would blame a gifted
painter for employing deceit by playing with perspective or depth to make
a tableau look more convincing and “real” in the eyes of the observer?
While this is easily accepted from an artist, the idea that a software
developer employs knowledge about how users are deceived in order to
improve human-computer interaction is likely to encounter concern and
criticism. In fact, because the term deception is usually associated with ma-
licious endeavors, the AI and computer science communities have proven
resistant to discussing their work in terms of deception, or have discussed
deception as an unwanted outcome.10 This book, however, contends that
deception is a constitutive element of human-computer interactions
rooted in AI technologies. We are, so to speak, programmed to be deceived,
and modern media have emerged within the spaces opened by the limits
and affordances of our capacity to fall into illusion. Despite their resistance
One further issue is the extent to which the mechanisms of banal decep-
tion embedded in AI are changing the social conventions and habits that
regulate our relationships with both humans and machines. Pierre Bourdieu
uses the concept of habitus to characterize the range of dispositions
through which individuals perceive and react to the social world.45 Since
habitus is based on previous experiences, the availability of increasing
opportunities to engage in interactions with computers and AI is likely to
feed forward into our social behaviors in the future. The title of this book
refers to AI and social life after the Turing test, but even if a computer
program able to pass that test is yet to be created, the dynamics of banal
deception in AI already represent an inescapable influence on the social life
of millions of people around the world. The main objective of this book is
to neutralize the opacity of banal deception, bringing its mechanisms to
the surface so as to better understand new AI systems that are altering
societies and everyday life.
and the Mechanical Turk, which amazed audiences in Europe and America
in the late eighteenth and early nineteenth centuries with its proficiency at
playing chess.57 In considering the relationship between AI and deception,
these automata are certainly a case in point, as their apparent intelligence
was the result of manipulation by their creators: the mechanical duck had
feces stored in its interior, so that no actual digestion took place, while
the Turk was maneuvered by a human player hidden inside the machine.58
I argue, however, that to fully understand the broader relationship between
contemporary AI and deception, one needs to delve into a wider histor-
ical context that goes beyond the history of automata and programmable
machines. This context is the history of deceitful media, that is, of how dif-
ferent media and practices, from painting and theatre to sound recording,
television, and cinema, have integrated banal deception as a strategy to
achieve particular effects in audiences and users. Following this trajectory
shows that some of the dynamics of communicative AI are in a relationship
of continuity with the ways audiences and users have projected meaning
onto other media and technology.
Examining the history of communicative AI from the proposal of the
Turing test in 1950 to the present day, I ground my work in the con-
viction that a historical approach to media and technological change
helps us comprehend ongoing transformations in the social, cultural,
and political spheres. Scholars such as Lisa Gitelman, Erkki Huhtamo,
and Jussi Parikka have compellingly shown that what are now called
“new media” have a long history, whose study is necessary to under-
stand today’s digital culture.59 If it is true that history is one of the best
tools for comprehending the present, I believe that it is also one of the
best instruments, although still an imperfect one, for anticipating the
future. In areas of rapid development such as AI, it is extremely diffi-
cult to forecast even short- and medium-term development, let alone
long-term changes.60 Looking at longer historical trajectories across sev-
eral decades helps to identify key trends and trajectories of change that
have characterized the field and might, therefore,
continue to shape it in the future. Although it is important to under-
stand how recent innovations like neural networks and deep learning
work, a better sense is also needed of the directions through which the
field has moved across a longer time frame. Media history, in this sense,
is a science of the future: it not only sheds light on the dynamics by
which we have arrived where we are today but helps pose new questions
and problems through which we may navigate the technical and social
challenges ahead.61
to users, which are becoming increasingly available in computers, cars, call
centers, domestic environments, and toys.67 Another crucial contribution
to such endeavors is that of Sherry Turkle. Across several decades, her
research has explored interactions between humans and AI, emphasizing
how their relationship does not follow from the fact that computational
objects really have emotions or intelligence but from what they evoke in
their users.68
Although the role of deception is rarely acknowledged in discussions of
AI, I argue that interrogating the ethical and cultural implications of such
dynamics is an urgent task that needs to be approached through inter-
disciplinary reflection at the crossroads between computer science, cogni-
tive science, social sciences, and the humanities. While the public debate
on the future of AI tends to focus on the hypothesis that AI will make
computers as intelligent or even more intelligent than people, we also
need to consider the cultural and social consequences of deceitful media
providing the appearance of intelligence. In this regard, the contemporary
obsession with apocalyptic and futuristic visions of AI, such as the singu-
larity, superintelligence, and the robot apocalypse, makes us less aware of
the fact that the most significant implications of AI systems are to be seen
not in a distant future but in our ongoing interactions with “intelligent”
machines.
Technology is shaped not only by the agency of scientists, designers,
entrepreneurs, users, and policy-makers but also by the kinds of questions
we ask about it. This book hopes to inspire readers to ask new questions
about the relationship between humans and machines in today’s world.
We will have to start searching for answers ourselves, as the “intelligent”
machines we are creating can offer no guidance on such matters, as one of
those machines admitted when I asked it (I.1).
CHAPTER 1
The Turing Test
séance table could be explained by their unconscious desire to deceive
themselves, which led them involuntarily to move the table.
It wasn’t spirits of the dead that made spiritualism, Faraday pointed
out. It was the living people—we, the humans—who made it.3
Robots and computers are no spirits, yet Faraday’s story provides a
useful lens through which to consider the emergence of AI from an unu-
sual perspective. In the late 1940s and early 1950s, often described as the
gestation period of AI, research in the newly born science of computing
placed more and more emphasis on the possibility that electronic digital
computers would develop into “thinking machines.”4 Historians of cul-
ture, science, and technology have shown how this involved a hypothesis
of an equivalence between human brains and computing machines.5 In
this chapter, however, I aim to demonstrate that the emergence of AI also
entailed a different kind of discovery. Some of the pioneers in the new field
realized that the possibility of thinking machines depended on the per-
spective of the observers as much as on the functioning of computers. As
Faraday in the 1850s suggested that spirits existed in séance sitters’ minds,
these researchers contemplated the idea that AI resided not so much in
circuits and programming techniques as in the ways humans perceive
and react to interactions with machines. In other words, they started to
envision the possibility that it was primarily users, not computers, that
“made” AI.
This awareness did not emerge around a single individual, a single group,
or even a single area of research. Yet one contribution in the gestational field
of AI is especially relevant: the “Imitation Game,” a thought experiment
proposed in 1950 by Alan Turing in his article “Computing Machinery and
Intelligence.” Turing described a game—today more commonly known as
the Turing test—in which a human interrogator communicates with both
human and computer agents without knowing their identities, and is chal-
lenged to distinguish the humans from the machines. A lively debate about
the Turing test started shortly after the publication of Turing’s article and
continues today, through fierce criticisms, enthusiastic approvals, and
contrasting interpretations.6 Rather than entering the debate by providing
a unique or privileged reading, my goal here is to illuminate one of the sev-
eral threads Turing’s idea opened for the nascent AI field. I argue that, just
as Faraday placed the role of humans at the center of the question of what
happened in spiritualist séances, Turing proposed to define AI in terms of
the perspective of human users in their interactions with computers. The
Turing test will thus serve here as a theoretical lens more than as historical
evidence—a reflective space that invites us to interrogate the past, the pre-
sent, and the future of AI from a different standpoint.
time. Even if computers could eventually rival or even surpass humans in
tasks that were considered to require intelligence, there would still be little
evidence to compare their operations to what happens in humans’ own
minds. The problem can be illustrated quite well by using the argument
put forward by philosopher Thomas Nagel in an exceedingly famous article
published two decades later, in 1974, titled “What Is It Like to Be a Bat?”
Nagel demonstrates that even a precise insight into what happens inside
the brain and body of a bat would not make us able to assess whether the
bat is conscious. Since consciousness is a subjective experience, one needs
to be "inside" the bat to do that.11 Transferred to the machine intelligence
problem, this means that despite computing's tremendous achievements, computers'
"intelligence" would not be similar or equal to that of humans. While all of us
have some understanding of what "thinking" is, based on our own subjective
experience, we cannot know whether others, and especially non-human
beings, share the same experience. We have no objective way, therefore, to
know if machines are “thinking.”12
A trained mathematician who engaged with philosophical issues only
as a personal interest, Turing was far from the philosophical sophistica-
tion of Nagel’s arguments. Turing’s objective, after all, was to illustrate the
promises of modern computing, not to develop a philosophy of the mind.
Yet he showed similar concerns in the opening of "Computing Machinery
and Intelligence” (1950). In the introduction, he considers the question
“Can machines think?” only to declare it of little use, due to the difficulty
of finding agreement about the meanings of the words “machine” and
“think.” He proposes therefore to replace this question with another one,
and thereby introduces the Imitation Game as a more sensible way to ap-
proach the issue:
The new form of the problem can be described in terms of a game which we call
the “imitation game.” It is played with three people, a man (A), a woman (B), and
an interrogator (C) who may be of either sex. The interrogator stays in a room
apart from the other two. The object of the game for the interrogator is to de-
termine which of the other two is the man and which is the woman. . . . We now
ask the question, “What will happen when a machine takes the part of A in this
game?” Will the interrogator decide wrongly as often when the game is played
like this as he does when the game is played between a man and a woman? These
questions replace our original, “Can machines think?”13
and scientists to engage more seriously with the machine intelligence ques-
tion.14 But regardless of whether he actually thought of it as such, there is
no doubt that the test proved to be an excellent instrument of propaganda
for the field. Drawing from the powerful idea that a “computer brain” could
beat humans in one of the skills that define our intelligence, the use of nat-
ural language, the Turing test presented potential developments of AI in
an intuitive and fascinating fashion.15 In the following decades, it became
a staple reference in popular publications presenting the achievements
and the potentials of computing. It forced readers and commentators to
consider the possibility of AI—even if just rejecting it as science fiction or
charlatanry.
Turing’s article was ambiguous in many respects, which favored the emer-
gence of different views and controversies about the meaning of the Turing
test.16 Yet one of the key implications for the AI field is evident. The ques-
tion, Turing tells his readers, is not whether machines are or are not able to
think. It is, instead, whether we believe that machines are able to think—in
other words, if we are prepared to accept machines’ behavior as intelligent.
In this respect, Turing turned around the problem of AI exactly as Faraday
did with spiritualism. Much as the Victorian scientist deemed humans and
not spirits responsible for producing spiritualist phenomena at séances,
the Turing test placed humans rather than machines at the very center
of the AI question. Although some have pointed out that the Turing test
has "failed" because its dynamic does not accurately reflect AI's current
state of the art,17 Turing’s proposal located the prospects of AI not just
in improvements of hardware and software but in a more complex sce-
nario emerging from the interactions between humans and computers.
By placing humans at the center of its design, the Turing test provided a
context wherein AI technologies could be conceived in terms of their cred-
ibility to human users.18
There are three actors in the Turing test, all of which engage in acts of
communication: a computer player, a human player, and a human evalu-
ator or judge. The computer’s capacity to simulate the ways humans behave
in a conversation is obviously one of the factors that inform the test’s out-
come. But since human actors engage actively in communications within
the test, their behaviors will be another decisive factor. Things such as their
backgrounds, biases, characters, genders, and political opinions will have
a role in both the decisions of the interrogator and the behavior of the
human agents with which the interrogator interacts. A computer scientist
with knowledge and experience on AI, for instance, will be in a different
position from someone who has limited insight into the topic. Likewise,
human players who act as conversation partners in the test will have their
own motivations and ideas on how to participate in the test. Some, for
instance, could be fascinated by the possibility of being mistaken for a
computer and therefore tempted to create ambiguity about their identities.
Because of the role human actors play in the Turing test, all these are
variables that may inform its outcome.19
The uncertainty that derives from this is often indicated to be one of
the test’s shortcomings.20 The test appears entirely logical and justified,
however, if one sees it not as an evaluation of the existence of thinking
machines but as a measure of humans’ reactions to communications with
machines that exhibit intelligent behavior. From this point of view, the
test made clear for the first time, decades before the emergence of online
communities, social media bots, and voice assistants, that AI not only is a
matter of computer power and programming techniques but also resides—
and perhaps especially—in the perceptions and patterns of interaction
through which humans engage with computers.
The wording of Turing’s article provides some additional evidence to
support this interpretation. Though he refused to make guesses about the
question whether machines can think, he did not refrain from speculating
that “at the end of the century the use of words and general educated
opinion will have altered so much that one will be able to speak of machines
thinking without expecting to be contradicted.”21 It is worth reading the
wording of this statement attentively. This is not a comment about the de-
velopment of more functional or more credible machines. It is about cul-
tural change. Turing argues that by the end of the twentieth century people
will have a different understanding of the AI problem, so that the prospect
of "thinking machines" will not sound as unlikely as it did in Turing's time.
He shows interest in “the use of words and general educated opinion” rather
than in the possibility of establishing whether machines actually “think.”
Looking at the history of computing since Turing’s time, there is no
doubt that cultural attitudes about computing and AI did change, and quite
considerably. The work of Sherry Turkle, who studied people’s relationships
with technology across several decades, provides strong evidence of this.
For instance, in interviews conducted in the late 1970s, when she asked
participants what they thought about the idea of chatbots providing psy-
chotherapy services, she encountered much resistance. Most interviewees
tended to agree that machines could not make up for the loss of the em-
pathic feeling between the psychologist and the patient.22 In the following
Media historian John Durham Peters famously argued that the history
of communication can be read as the history of the aspiration to estab-
lish an empathic connection with others, and the fear that this may break
down.25 As media such as the telegraph, the telephone, and broadcasting
were introduced, they all raised hopes that they could facilitate such
connections, and at the same time fears that the electronic mediation they
provided would increase our estrangement from our fellow human beings.
It is easy to see how this also applies to digital technologies today. In its rel-
atively short history, the Internet kindled powerful hopes and fears: think,
for instance, of the question whether social networks facilitate the creation
of new forms of communication or make people lonelier than ever before.26
But computing has not always been seen in relation to communi-
cation. In 1950, when Turing published his article, computer-mediated
communication had not yet coalesced into a meaningful field of investi-
gation. Computers were mostly discussed as calculating tools, and avail-
able forms of interactions between human users and computers were
minimal.27 Imagining communication between humans and computers
at that time required a visionary leap—perhaps
even greater than that needed to consider the possibility of “machine in-
telligence.”28 The very idea of a user, understood as an individual given ac-
cess to shared computing resources, was not fully conceptualized before
developments in time-sharing systems and computer networks in the
1960s and the 1970s made computers available for individual access, ini-
tially within small communities of researchers and computer scientists and
then for an increasingly larger public.29 As a consequence, the Turing test
is usually discussed as a problem about the definition of intelligence. An
alternative way to look at it, however, is to consider it as an experiment in
the communications between humans and computers. Recently, scholars
have started to propose such an approach. Artificial intelligence ethics ex-
pert David Gunkel, for instance, points out that “Turing’s essay situates
communication—and a particular form of deceptive social interaction—
as the deciding factor” and should therefore be considered a contribu-
tion to computer-mediated communication avant la lettre.30 Similarly, in a
thoughtful book written after his experience as a participant in the Loebner
Prize—a contemporary contest, discussed at greater length in chapter 5, in
which computer programs engage in a version of the Turing test—Brian
Christian stresses this aspect, noting that “the Turing test is, at bottom,
about the act of communication.”31
Since the design of the test relied on interactions between humans
and computers, Turing felt the need to include precise details about how
they would enter into communication. To ensure the validity of
the test, the interrogator needed to communicate with both human and
computer players without receiving any hints about their identities other
than the contents of their messages. Communications between humans
and computers in the test were thus meant to be anonymous and disem-
bodied.32 In the absence of video displays and even input devices such as
the electronic keyboard, Turing imagined that the answers to the judge’s
inputs "should be written, or better still, typewritten," the ideal arrangement
being “to have a teleprinter communicating between the two rooms.”33
Considering how, as media historians have shown, telegraphic transmission
and the typewriter mechanized the written word, making it independent
from its author, Turing’s solution shows an acute sense of the role of media
[ 24 ] Deceitful Media
by approaches that focus on the performance of computing technologies
alone. The medium and the interface, moreover, also contribute to the
shape of the outcomes and the implications of every interaction. Turing’s
proposal was, in this sense, perhaps more “Communication Game” than
“Imitation Game.”
possible a confrontation between humans and computers that required
turn taking, a fundamental element in human communication and one
that has been widely implemented in interface design. Similarly, digital
games opened the way for experimenting with more complex interactive
systems involving the use of other human senses, such as sight and touch.55
The Turing test, in this context, envisioned the emergence of what Paul
Dourish calls “social computing,” a range of systems for human-machine
interactions by which understandings of the social worlds are incorporated
into interactive computer systems.56
At the outset of the computer era, the Turing test made ideas such as
“thinking computers” or “machine intelligence” imaginable in terms of a
simple game situation in which machines rival human opponents. In his vi-
sionary book God & Golem, Inc. (1964), the founder of cybernetics, Norbert
Wiener, predicted that learning machines' ability to outwit their
creators in games and real-world contests would raise serious practical and
moral dilemmas for the development of AI. But the Turing test did not just
posit the possibility that a machine could beat a human in a fair game. In
addition, in winning the Imitation Game—or passing the Turing test, as
one would say today—machines would advance one step further in their
impertinence toward their creators. They would trick humans, leading us
to believe that machines aren't different from us.
A PLAY OF DECEPTION
utterly.”65 Thus in the Turing test, the computer’s deceptive stance is innoc-
uous precisely because it is carried out within such a playful frame, with the
complicity of human players who follow the rules of the game and partici-
pate willingly in it.66
Turing presented his test as an adaptation of an original “imitation
game” in which the interrogator had to determine who was the man and
who the woman, placing gender performance rather than machine intelli-
gence at center stage.67 The test, in this regard, can be seen as part of a longer
genealogy of games that, before the emergence of electronic computers,
introduced playful deception as part of their design.68 The Lady’s Oracle,
for instance, was a popular Victorian pastime for social soirées. The game
provided a number of prewritten answers that players selected randomly to
answer questions posed by other players. As amusing and surprising replies
were created by chance, the game stimulated the players’ all-too-human
temptation to ascribe meaning to chance.69 The “spirit boards” that repro-
duce the dynamics of spiritualist séances for the purpose of entertainment
are another example.70 Marketed explicitly as amusements by a range of
board game companies since the late nineteenth century, they turn spirit
communication into a popular board game that capitalizes on the players’
fascination with the supernatural and their willingness—conscious or
unconscious—to make the séance “work.”71 The same dynamics charac-
terize performances of stage magic, where the audience’s appetite for the
show is often attributable to the pleasures of falling in with the tricks of
the prestidigitator, of the discovery that one is liable to deception, and of
admiration for the performer who executes the sleights of hand.72
What these activities have in common with the Turing test is that they
all entertain participants by exploiting the power of suggestion and de-
ception. A negative connotation is usually attributed to deception, yet
its integration in playful activities is a reminder that people actively seek
situations where they may be deceived, following a desire or need that
many people share. Deception in such contexts is domesticated, made in-
tegral to an entertaining experience that retains little if anything of the
threats that other deceptive practices bring with them.73 Playful decep-
tion, in this regard, frames the Turing test as an apparently innocuous game
that helps people to experiment with the sense, which characterizes many
forms of interactions between humans and machines, that a degree of de-
ception is harmless and even functional to the fulfillment of a productive
interaction. Alexa and Siri are perfect examples of how this works in prac-
tice: the use of human voices and names with which these “assistants” can
be summoned and the consistency of their behaviors stimulate users to
assign a certain personality to them. This, in turn, helps users to introduce
these systems more easily into their everyday lives and domestic spaces,
making them less threatening and more familiar. Voice assistants function
most effectively when a form of playful and willing deception is embedded
in the interaction.
To return to the idea underlined in Monkey Shines, my reading of the
Turing test points to another suggestion: that what characterizes humans
is not so much their ability to deceive as their capacity and willingness to
fall into deception. Presenting the possibility that humans can be deceived
by computers in a reassuring, playful context, the Turing test invites re-
flection on the implications of creating AI systems that rely on users falling
willingly into illusion. Rather than positing deception as an exceptional
circumstance, the playfulness of the Imitation Game envisioned a future
in which banal deception is offered as an opportunity to develop satisfac-
tory interactions with AI technologies. Studies of social interaction in psy-
chology, after all, have shown that self-deception carries a host of benefits
and social advantages.74 A similar conclusion is evoked and mirrored by
contemporary research in interaction design pointing to the advantages of
having users and consumers attribute agency and personality to gadgets
and robots.75 These explorations tell us that by cultivating an impression of
intelligence and agency in computing systems, developers might be able to
improve the users’ experience of these technologies.
The attentive reader might have grasped the disturbing consequences
of this apparently benign endeavor. Marshall McLuhan, one of the most influential
theorists of media and communication, used the Greek myth of Narcissus
as a parable for our relationship with technology. Narcissus was a beautiful
young hunter who, after seeing his own image reflected in a pool, fell in love
with himself. Unable to move away from this mesmerizing view, he stared
at the reflection until he died. Like Narcissus, people stare at the gadgets of
modern technology, falling into a state of narcosis that makes them unable
to understand how media are changing them.76 Identifying playful decep-
tion as a paradigm for conceptualizing and constructing AI technologies
raises the question of whether such a sense of narcosis is also implicated
in our reactions to AI-powered technologies. Like Narcissus, we regard
them as inoffensive, even playful, while they are changing dynamics and
understandings of social life in ways that we can only partially control.
So much has been written about the Turing test that one might think
there is nothing more to add. In recent years, many have argued that the
Turing test does not reflect the functioning of modern AI systems. This
is true if the test is seen as a comprehensive test bed for the full range
of applications and technologies that go under the label “AI,” and if one
does not acknowledge Turing’s own refusal to tackle the question whether
“thinking machines” exist or not. Looking at the Turing test from a dif-
ferent perspective, however, one finds that it still provides exceedingly
useful interpretative keys to understanding the implications and impact of
many contemporary AI systems.
In this chapter, I’ve used the Turing test as a theoretical lens to unveil
three key issues about AI systems. The first is the centrality of the human
perspective. A long time before interactive AI systems entered domestic
environments and workspaces, researchers such as Turing realized that
the extent to which computers could be called “intelligent” would depend
on how humans perceived them rather than on some specific character-
istic of machines. This was the fruit, in a sense, of a failure: the impos-
sibility of reaching agreement on definitions of the word intelligence
and of assessing the machine’s experience or consciousness without being
“inside” it. But this realization was destined to lead toward extraordinary
advancements in the AI field. Understanding that AI is a relational
phenomenon, something that emerges also and especially within the in-
teraction between humans and machines, stimulated researchers and
developers to model human behaviors and states of mind in order to devise
more effective interactive AI systems.
The second issue is the role of communication. Imagined by Turing at
a time when the available tools to interact with computers were minimal
and the very idea of the user had not yet emerged as such, the Turing
test helps us, paradoxically, to understand the centrality of commu-
nication in contemporary AI systems. In computer science literature,
human-computer interaction and AI are usually treated as distinct: one
is concerned with the interfaces that enable users to interact with com-
puting technologies, the other with the creation of machines and pro-
grams that complete tasks considered intelligent, such as translating
a piece of writing into another language or engaging in conversation
with human users. Yet the Turing test, as I have shown in this chapter,
provides a common point of departure for these two areas. Regardless of
whether this was or was not among Turing’s initial intentions, the test
offers an opportunity to consider AI also in terms of how the commu-
nication between humans and computers is embedded in the system. It
is a reminder that AI systems are not just computing machines but also
media that enable and regulate specific forms of communication between
users and computers.77
The third issue is related to the central preoccupation of this book: the
relationship between AI and deception. The fact that the Turing test pos-
ited a situation in which a human interrogator was prone to deception by
the computer shows that the problem of deception sparked reflections in
the AI field already in its gestational years. Yet the game situation in which
the test is framed stimulates one to consider the playful and apparently
inoffensive nature of this deception. As discussed in the introduction, after
all, media technologies and practices, including stage magic, trompe l’oeil
painting, cinema, and sound recording, among many others, are effective
also to the extent that they open opportunities for playful and willing
engagement with the effects of deception.78 The playful deception of the
Turing test, in this sense, further corroborates my claim that AI should
be placed within the longer trajectory of deceitful media that incorporate
banal deception into their functioning.