Deceitful Media
Artificial Intelligence and Social Life after
the Turing Test
Simone Natale
Oxford University Press is a department of the University of Oxford. It furthers
the University’s objective of excellence in research, scholarship, and education
by publishing worldwide. Oxford is a registered trade mark of Oxford University
Press in the UK and certain other countries.
DOI: 10.1093/oso/9780190080365.001.0001
ACKNOWLEDGMENTS
When I started working on this book, I had an idea about a science fic-
tion story. I might never write it, so I reckon it is just fine to give up its
plot here. A woman, Ellen, is awakened by a phone call. It’s her husband.
There is something strange in his voice; he sounds worried and somehow
out of tune. In the near future in which this story is set, artificial intelli-
gence (AI) has become so efficient that a virtual assistant can make calls
on your behalf by reproducing your own voice, and the simulation will be
so accurate as to trick even your close family and friends. Ellen and her
husband, however, have agreed that they would never use AI to communi-
cate between them. Yet in the husband’s voice that morning there is some-
thing that doesn’t sound like him. Later, Ellen discovers that her husband
has died that very night, a few hours before the time of their call. The call,
then, must have been made by an AI assistant. Dismayed by her loss, she listens
to the conversation again and again until she finally picks up some hints to
solve the mystery. In fact, this science fiction story I haven’t written is also
a crime story. To learn the truth about her husband’s death, Ellen will need
to interpret the content of the conversation. In the process, she will also
have to establish whether the words came from her husband, from the
machine that imitated him, or from some combination of the two.
This book is not science fiction, yet like much science fiction, it is also an
attempt to make sense of technologies whose implications and meaning we
are just starting to understand. I use the history of AI—a surprisingly long
one for technologies that are often presented as absolute novelties—as a
compass to orient my exploration. I started working on this book in 2016.
My initial idea was to write a cultural history of the Turing test, but my
explorations brought exciting and unexpected discoveries that made the
final project expand much beyond that.
A number of persons read and commented on early drafts of this
work. My editor, Sarah Humphreville, not only believed in this project
from the start but also provided crucial advice and precise suggestions
throughout its development. Assistant Editor Emma Hodgon was also
exceedingly helpful and scrupulous. Leah Henrickson provided feedback
on all the chapters; her intelligence and knowledge made this a much
better book. I am grateful to all who dedicated time and attention to reading
and commenting on different parts of this work: Saul Albert, Gabriele Balbi,
Andrea Ballatore, Paolo Bory, Riccardo Fassone, Andrea Guzman, Vincenzo
Idone Cassone, Nicoletta Leonardi, Jonathan Lessard, Peppino Ortoleva,
Benjamin Peters, Michael Pettit, Thais Sardá, Rein Sikveland, and Cristian
Vaccari.
My colleagues at Loughborough University have been a constant source
of support, both professionally and personally, during the book’s gestation.
I would like especially to thank John Downey for being such a generous
mentor at an important and potentially complicated moment of my career,
and for teaching me the importance of modesty and integrity in the process.
Many other senior staff members at Loughborough were very supportive
on many occasions throughout the last few years, and I wish particularly
to thank Emily Keightley, Sabina Mihelj, and James Stanyer for their con-
stant help and friendliness. Thanks also to my colleagues and friends Pawas
Bisht, Andrew Chadwick, David Deacon, Antonios Kyparissiadis, Line
Nyhagen, Alena Pfoser, Marco Pino, Jessica Robles, Paula Saukko, Michael
Skey, Elizabeth Stokoe, Vaclav Stetka, Thomas Thurnell-Read, Peter Yeandle,
and Dominic Wring, as well as to all other colleagues at Loughborough, for
making work easier and more enjoyable.
During the latest stages of this project I was awarded a Visiting Fellowship
at ZeMKI, the Center for Media, Communication, and Information
Research of the University of Bremen. It was a great opportunity to discuss
my work and to have space and time to reflect and write. Conversations
with Andreas Hepp and Yannis Theocharis were particularly helpful in
clarifying and deepening some of my ideas. I thank all the ZeMKI members for
their feedback and friendship, especially but not only Stefanie Averbeck-
Lietz, Hendrik Kühn, Kerstin Radde-Antweiler, and Stephanie Seul, as well
as the other ZeMKI Fellows whose residence coincided with my stay: Peter
Lunt, Ghislain Thibault, and Samuel Van Ransbeeck.
Some portions of this book have been revised from previous publications.
In particular, parts of chapter 3 were previously published, in a significantly
different version, in the journal New Media and Society, and an earlier ver-
sion of chapter 6 was featured as a Working Paper in the Communicative
Figurations Working Papers series. I thank the reviewers and editors for
their generous feedback.
My thanks, finally, go to the many humans who acted as my companions
throughout these years, doing it so well that no machine will ever be able to
replace them. This book is especially dedicated to three of them: my brother
and sister and my partner, Viola.
dynamic is now embedded in the development of contemporary AI voice
assistants such as Google Assistant, Amazon’s Alexa, and Apple’s Siri sig-
nals the emergence of a new kind of interface, which mobilizes deception
in order to manage the interactions between users, computing systems,
and Internet-based services.
Since Turing’s field-defining proposal, AI has coalesced into a disci-
plinary field within cognitive science and computer science, producing
an impressive range of technologies that are now in public use, from
machine translation to the processing of natural language, and from
computer vision to the interpretation of medical images. Researchers
in this field nurtured the dream—cherished by some scientists while
dismissed as unrealistic by others—of reaching “strong” AI, that is, a
form of machine intelligence that would be practically indistinguish-
able from human intelligence. Yet, while debates have largely focused
on the possibility that the pursuit of strong AI would lead to forms of
consciousness similar or alternative to that of humans, where we have
landed might more accurately be described as the creation of a range of
technologies that provide an illusion of intelligence—in other words, the
creation not of intelligent beings but of technologies that humans per-
ceive as intelligent.
Reflecting broader evolutionary patterns of narratives about technolog-
ical change, the history of AI and computing has until now been mainly
discussed in terms of technological capability.5 Even today, the prolifera-
tion of new communicative AI systems is mostly explained as a technical
innovation sparked by the rise of neural networks and deep learning.6
While approaches to the emergence of AI usually emphasize evolution in
programming and computing technologies, this study focuses on how the
development of AI has also built on knowledge about users.7 Taking up this
point of view helps one to realize the extent to which tendencies to project
agency and humanity onto things make AI potentially disruptive for social
relations and everyday life in contemporary societies. This book, therefore,
reformulates the debate on AI on the basis of a new assumption: that what
machines are changing is primarily us, humans. “Intelligent” machines
might one day revolutionize life; they are already transforming how we un-
derstand and carry out social interactions.
Since AI’s emergence as a new field of research, many of its leading
researchers have professed to believe that humans are fundamentally sim-
ilar to machines and, consequently, that it is possible to create a computer
that equals or surpasses human intelligence in all aspects and areas. Yet
entertaining such a tenet does not necessarily conflict with, and is often
complementary to, the idea that existing AI systems provide only the illu-
sion of human intelligence. Throughout the history of AI, many have ac-
knowledged the limitations of present systems and focused their efforts
on designing programs that would provide at least the appearance of in-
telligence; in their view, “real” or “strong” AI would come through further
progress, with their own simulation systems representing just a step in
that direction.8 Understanding how humans engage in social exchanges,
and how they can be led to treat things as social agents, became instru-
mental to overcoming the limitations of AI technologies. Researchers in AI
thus established a direction of research that was based on the designing of
technologies that cleverly exploited human perception and expectations to
give users the impression of employing or interacting with intelligent sys-
tems. This book demonstrates that looking at the development across time
of this tradition—which has not yet been studied as such—is essential to
understanding contemporary AI systems programmed to engage socially
with humans. In order to pursue this agenda, however, the problem of de-
ception and AI needs to be formulated in new terms.
When the great art historian Ernst Gombrich started his inquiry into the
role of illusion in the history of art, he realized that figurative arts emerge
within an interplay between the limits of tradition and the limits of percep-
tion. Artists have always incorporated deception into their work, drawing
on their knowledge both of convention and of mechanisms of perception
to achieve certain effects on the viewer.9 But who would blame a gifted
painter for employing deceit by playing with perspective or depth to make
a tableau look more convincing and “real” in the eyes of the observer?
While this is easily accepted of an artist, the idea that a software
developer employs knowledge about how users are deceived in order to
improve human-computer interaction is likely to encounter concern and
criticism. In fact, because the term deception is usually associated with ma-
licious endeavors, the AI and computer science communities have proven
resistant to discussing their work in terms of deception, or have discussed
deception as an unwanted outcome.10 This book, however, contends that
deception is a constitutive element of human-computer interactions
rooted in AI technologies. We are, so to say, programmed to be deceived,
and modern media have emerged within the spaces opened by the limits
and affordances of our capacity to fall into illusion. Despite their resistance
to considering deception as such, computer scientists have worked since the
early history of their field to exploit the limits and affordances of our per-
ception and intellect.11
Deception, in its broad sense, involves the use of signs or representations
to convey a false or misleading impression. A wealth of research in areas
such as social psychology, philosophy, and sociology has shown that de-
ception is an inescapable fact of social life with a functional role in social
interaction and communication.12 Although situations in which deception
is intentional and manifest, such as frauds, scams, and blatant lies, shape
popular understandings of deception, scholars have underlined the more
disguised, ordinary presence of deception in everyday experience.13 Many
forms of deception are not so clear-cut, and in many cases deception is not
even understood as such.14
Starting from a phenomenological perspective, philosopher Mark
A. Wrathall influentially argued that our capacity to be deceived is an in-
herent quality of our experience. While deception is commonly understood
in binary terms, positing that one might either be or not be deceived,
Wrathall contends that such a dichotomy does not account for how people
perceive and understand external reality: “it rarely makes sense to say that
I perceived either truly or falsely” since the possibility of deception is in-
grained in the mechanisms of our perception. If, for instance, I am walking
in the woods and believe I see a deer to my side where in fact there is just a
bush, I am deceived; yet the same mechanism that made me see a deer where
there was none—that is, our tendency and ability to identify patterns in visual
information—would have helped me, on another occasion, to identify a po-
tential danger. The fact that our senses have shortcomings, Wrathall points
out, represents a resource as much as a limit for human perception and is
functional to our ability to navigate the external world.15 From a similar
point of view, cognitive psychologist Donald D. Hoffman recently proposed
that evolution has shaped our perceptions into useful illusions that help
us navigate the physical world but can also be manipulated through tech-
nology, advertising, and design.16
Indeed, the institutionalization of psychology in the late nineteenth and
early twentieth centuries already signaled the discovery that deception and
illusion were integral, physiological aspects of the psychology of percep-
tion.17 Understanding deception was important not only in
order to study how people misunderstood the world but also to study how
they perceived and navigated it.18 During the nineteenth and twentieth
centuries, the accumulation of knowledge about how people were deceived
informed the development of a wide range of media technologies and
practices, whose effectiveness exploited the affordances and limitations
To mark a distinction from straight-out and deliberate deception, I propose
the concept of banal deception to describe deceptive mechanisms and
practices that are embedded in media technologies and contribute to their
integration into everyday life. Banal deception entails mundane, everyday
situations in which technologies and devices mobilize specific elements
of the user’s perception and psychology—for instance, in the case of AI,
the all-too-human tendency to attribute agency to things or personality
to voices. The word “banal” describes things that are dismissed as ordi-
nary and unimportant; my use of this word aims to underline that these
mechanisms are often taken for granted, despite their significant impact
on the uses and appropriations of media technologies, and are deeply
embedded in everyday, “ordinary” life.22
Unlike approaches to deliberate or straight-out deception, banal
deception does not understand users and audiences as passive or naïve.
On the contrary, audiences actively exploit their own capacity to fall into
deception in sophisticated ways—for example, through the entertain-
ment they enjoy when they fall into the illusions offered by cinema or
television. The same mechanism resonates with the case of AI. Studies in
human-computer interaction consistently show that users interacting with
computers apply norms and behaviors that they would adopt with humans,
even if these users perfectly understand the difference between computers
and humans.23 At first glance, this seems incongruous, as if users resist
and embrace deception simultaneously. The concept of banal deception
provides a resolution of this apparent contradiction. I argue that the subtle
dynamics of banal deception allow users to embrace deception so that
they can better incorporate AI into their everyday lives, making AI more
meaningful and useful to them. This does not mean that banal deception is
harmless or innocuous. Structures of power often reside in mundane, ordi-
nary things, and banal deception may ultimately bear deeper consequences for
societies than the most manifest and evident attempts to deceive.
Throughout this book, I identify and highlight five key characteristics
that distinguish banal deception. The first is its everyday and ordinary char-
acter. When researching people’s perceptions of AI voice assistants, Andrea
Guzman was surprised by what she sensed was a discontinuity between
the usual representations of AI and the responses of her interviewees.24
Artificial intelligence is usually conceived and discussed as extraordi-
nary: a dream or a nightmare that awakens metaphysical questions and
challenges the very definition of what it means to be human.25 Yet when
Guzman approached users of systems such as Siri, the AI voice assistant
embedded in iPhones and other Apple devices, she did not find that the
users were questioning the boundaries between humans and machines.
know too well the machine will not really get their jokes. They wish them
goodnight before going to bed, even if aware that they will not “sleep” in
the same sense as humans do.33 This suggests that distinctions between
mindful and mindless behaviors fail to capture the complexity of the inter-
action. In contrast, obliviousness implies that while users do not thematize
deception as such, they may engage in social interactions with the machine
deliberately as well as unconsciously. Obliviousness also allows the user to
maintain at least the illusion of control—this being, in the age of user-
friendliness, a key principle of software design.34
The fourth characteristic of banal deception is its low definition. While
this term is commonly used to describe formats of video or sound repro-
duction with lower resolution, in media theory the term has also been
employed in reference to media that demand more participation from
audiences and users in the construction of sense and meaning.35 As far as
AI is concerned, textual and voice interfaces are low definition because they
leave ample space for the user to imagine and attribute characteristics such
as gender, race, class, and personality to the disembodied voice or text. For
instance, voice assistants do not present the physical or visual appearance
of the virtual character (such as “Alexa” or “Siri”), but some
cues are embedded in the sounds of their voices, in their names, and in
the content of their exchanges. It is for this reason that, as shown in re-
search about people’s perceptions of AI voice assistants, different users
imagine AI assistants in different, multiple ways, which also enhances the
sense that the technology is personalized to each individual.36 In contrast,
humanoid robots leave less space for the users’ imagination and projec-
tion mechanisms and are therefore not low definition. This is one of the
reasons why disembodied AI voice assistants have become much more in-
fluential today than humanoid robots: the fact that users can project their
own imaginations and meanings makes interactions with these tools much
more personal and reassuring, and therefore they are easier to incorporate
into our everyday lives than robots.37
The fifth and final defining characteristic of banal deception is that
it is not just imposed on users but is also programmed by designers and
developers. This is why the word deception is preferable to illusion, since
deception implies some form of agency, permitting clearer acknowledg-
ment of the ways developers of AI technologies work toward achieving the
desired effects. In order to explore and develop the mechanisms of banal
deception, designers need to construct a model or image of the expected
user. In actor-network theory, this corresponds to the notion of script,
which refers to the work of innovators as “inscribing” visions or predictions
about the world and the user in the technical content of the new object and
One further issue is the extent to which the mechanisms of banal decep-
tion embedded in AI are changing the social conventions and habits that
regulate our relationships with both humans and machines. Pierre Bourdieu
uses the concept of habitus to characterize the range of dispositions
through which individuals perceive and react to the social world.45 Since
habitus is based on previous experiences, the availability of increasing
opportunities to engage in interactions with computers and AI is likely to
feed forward into our social behaviors in the future. The title of this book
refers to AI and social life after the Turing test, but even if a computer
program able to pass that test is yet to be created, the dynamics of banal
deception in AI already represent an inescapable influence on the social life
of millions of people around the world. The main objective of this book is
to neutralize the opacity of banal deception, bringing its mechanisms to
the surface so as to better understand new AI systems that are altering
societies and everyday life.
and the Mechanical Turk, which amazed audiences in Europe and America
in the late eighteenth and early nineteenth centuries with its proficiency at
playing chess.57 In considering the relationship between AI and deception,
these automata are certainly a case in point, as their apparent intelligence
was the result of manipulation by their creators: the mechanical duck had
feces stored in its interior, so that no actual digestion took place, while
the Turk was maneuvered by a human player hidden inside the machine.58
I argue, however, that to fully understand the broader relationship between
contemporary AI and deception, one needs to delve into a wider histor-
ical context that goes beyond the history of automata and programmable
machines. This context is the history of deceitful media, that is, of how dif-
ferent media and practices, from painting and theatre to sound recording,
television, and cinema, have integrated banal deception as a strategy to
achieve particular effects in audiences and users. Following this trajectory
shows that some of the dynamics of communicative AI are in a relationship
of continuity with the ways audiences and users have projected meaning
onto other media and technology.
Examining the history of communicative AI from the proposal of the
Turing test in 1950 to the present day, I ground my work in the conviction
that a historical approach to media and technological change
helps us comprehend ongoing transformations in the social, cultural,
and political spheres. Scholars such as Lisa Gitelman, Erkki Huhtamo,
and Jussi Parikka have compellingly shown that what are now called
“new media” have a long history, whose study is necessary to under-
stand today’s digital culture.59 If it is true that history is one of the best
tools for comprehending the present, I believe that it is also one of the
best instruments, although still an imperfect one, for anticipating the
future. In areas of rapid development such as AI, it is extremely diffi-
cult to forecast even short- and medium-term developments, let alone
long-term changes.60 Looking at longer historical trajectories helps to
identify key trends and patterns of change that have characterized the
field across several decades and might, therefore,
continue to shape it in the future. Although it is important to under-
stand how recent innovations like neural networks and deep learning
work, a better sense is also needed of the directions through which the
field has moved across a longer time frame. Media history, in this sense,
is a science of the future: it not only sheds light on the dynamics by
which we have arrived where we are today but helps pose new questions
and problems through which we may navigate the technical and social
challenges ahead.61
to users, which are becoming increasingly available in computers, cars, call
centers, domestic environments, and toys.67 Another crucial contribution
to such endeavors is that of Sherry Turkle. Across several decades, her
research has explored interactions between humans and AI, emphasizing
how their relationship does not follow from the fact that computational
objects really have emotions or intelligence but from what they evoke in
their users.68
Although the role of deception is rarely acknowledged in discussions of
AI, I argue that interrogating the ethical and cultural implications of such
dynamics is an urgent task that needs to be approached through inter-
disciplinary reflection at the crossroads between computer science, cogni-
tive science, social sciences, and the humanities. While the public debate
on the future of AI tends to focus on the hypothesis that AI will make
computers as intelligent as or even more intelligent than people, we also
need to consider the cultural and social consequences of deceitful media
providing the appearance of intelligence. In this regard, the contemporary
obsession with apocalyptic and futuristic visions of AI, such as the singu-
larity, superintelligence, and the robot apocalypse, makes us less aware of
the fact that the most significant implications of AI systems are to be seen
not in a distant future but in our ongoing interactions with “intelligent”
machines.
Technology is shaped not only by the agency of scientists, designers,
entrepreneurs, users, and policy-makers but also by the kinds of questions
we ask about it. This book hopes to inspire readers to ask new questions
about the relationship between humans and machines in today’s world.
We will have to start searching for answers ourselves, as the “intelligent”
machines we are creating can offer no guidance on such matters, as one of
those machines admitted when I asked it (I.1).
CHAPTER 1
The Turing Test
The Cultural Life of an Idea
séance table could be explained by their unconscious desire to deceive
themselves, which led them involuntarily to move the table.
It wasn’t spirits of the dead that made spiritualism, Faraday pointed
out. It was the living people—we, the humans—who made it.3
Robots and computers are no spirits, yet Faraday’s story provides a
useful lens through which to consider the emergence of AI from an unu-
sual perspective. In the late 1940s and early 1950s, often described as the
gestation period of AI, research in the newly born science of computing
placed more and more emphasis on the possibility that electronic digital
computers would develop into “thinking machines.”4 Historians of cul-
ture, science, and technology have shown how this involved a hypothesis
of an equivalence between human brains and computing machines.5 In
this chapter, however, I aim to demonstrate that the emergence of AI also
entailed a different kind of discovery. Some of the pioneers in the new field
realized that the possibility of thinking machines depended on the per-
spective of the observers as much as on the functioning of computers. Just as
Faraday in the 1850s suggested that spirits existed in séance sitters’ minds,
these researchers contemplated the idea that AI resided not so much in
circuits and programming techniques as in the ways humans perceive
and react to interactions with machines. In other words, they started to
envision the possibility that it was primarily users, not computers, that
“made” AI.
This awareness did not emerge around a single individual, a single group,
or even a single area of research. Yet one contribution in the gestational field
of AI is especially relevant: the “Imitation Game,” a thought experiment
proposed in 1950 by Alan Turing in his article “Computing Machinery and
Intelligence.” Turing described a game—today more commonly known as
the Turing test—in which a human interrogator communicates with both
human and computer agents without knowing their identities, and is chal-
lenged to distinguish the humans from the machines. A lively debate about
the Turing test started shortly after the publication of Turing’s article and
continues today, through fierce criticisms, enthusiastic approvals, and
contrasting interpretations.6 Rather than entering the debate by providing
a unique or privileged reading, my goal here is to illuminate one of the sev-
eral threads Turing’s idea opened for the nascent AI field. I argue that, just
as Faraday placed the role of humans at the center of the question of what
happened in spiritualist séances, Turing proposed to define AI in terms of
the perspective of human users in their interactions with computers. The
Turing test will thus serve here as a theoretical lens more than as historical
evidence—a reflective space that invites us to interrogate the past, the pre-
sent, and the future of AI from a different standpoint.
time. Even if computers could eventually rival or even surpass humans in
tasks that were considered to require intelligence, there would still be little
evidence to compare their operations to what happens in humans’ own
minds. The problem can be illustrated quite well by using the argument
put forward by philosopher Thomas Nagel in an exceedingly famous article
published two decades later, in 1974, titled “What Is It Like to Be a Bat?”
Nagel demonstrates that even a precise insight into what happens inside
the brain and body of a bat would not make us able to assess whether the
bat is conscious. Since consciousness is a subjective experience, one needs
to be “inside” the bat to do that.11 Transferred to the problem of machine
intelligence, this means that, despite computing’s tremendous achievements,
their “intelligence” would not necessarily be similar or equal to that of
humans. While each of us has some understanding of what “thinking” is, based on our own subjective
experience, we cannot know whether others, and especially non-human
beings, share the same experience. We have no objective way, therefore, to
know if machines are “thinking.”12
A trained mathematician who engaged with philosophical issues only
as a personal interest, Turing was far from the philosophical sophistica-
tion of Nagel’s arguments. Turing’s objective, after all, was to illustrate the
promises of modern computing, not to develop a philosophy of the mind.
Yet he showed similar concerns in the opening of “Computing Machinery
and Intelligence” (1950). In the introduction, he considers the question
“Can machines think?” only to declare it of little use, due to the difficulty
of finding agreement about the meanings of the words “machine” and
“think.” He proposes therefore to replace this question with another one,
and thereby introduces the Imitation Game as a more sensible way to ap-
proach the issue:
The new form of the problem can be described in terms of a game which we call
the “imitation game.” It is played with three people, a man (A), a woman (B), and
an interrogator (C) who may be of either sex. The interrogator stays in a room
apart from the other two. The object of the game for the interrogator is to de-
termine which of the other two is the man and which is the woman. . . . We now
ask the question, “What will happen when a machine takes the part of A in this
game?” Will the interrogator decide wrongly as often when the game is played
like this as he does when the game is played between a man and a woman? These
questions replace our original, “Can machines think?”13
and scientists to engage more seriously with the machine intelligence ques-
tion.14 But regardless of whether he actually thought of it as such, there is
no doubt that the test proved to be an excellent instrument of propaganda
for the field. Drawing from the powerful idea that a “computer brain” could
beat humans in one of the skills that define our intelligence, the use of nat-
ural language, the Turing test presented potential developments of AI in
an intuitive and fascinating fashion.15 In the following decades, it became
a staple reference in popular publications presenting the achievements
and the potentials of computing. It forced readers and commentators to
consider the possibility of AI—even if just rejecting it as science fiction or
charlatanry.
Turing’s article was ambiguous in many respects, which favored the emer-
gence of different views and controversies about the meaning of the Turing
test.16 Yet one of the key implications for the AI field is evident. The ques-
tion, Turing tells his readers, is not whether machines are or are not able to
think. It is, instead, whether we believe that machines are able to think—in
other words, whether we are prepared to accept machines’ behavior as intelligent.
In this respect, Turing turned around the problem of AI exactly as Faraday
did with spiritualism. Much as the Victorian scientist deemed humans and
not spirits responsible for producing spiritualist phenomena at séances,
the Turing test placed humans rather than machines at the very center
of the AI question. Although some have pointed out that the Turing test
has “failed” because its dynamic does not accurately adhere to AI’s current
state of the art,17 Turing’s proposal located the prospects of AI not just
in improvements of hardware and software but in a more complex sce-
nario emerging from the interactions between humans and computers.
By placing humans at the center of its design, the Turing test provided a
context wherein AI technologies could be conceived in terms of their cred-
ibility to human users.18
There are three actors in the Turing test, all of which engage in acts of
communication: a computer player, a human player, and a human evalu-
ator or judge. The computer’s capacity to simulate the ways humans behave
in a conversation is obviously one of the factors that inform the test’s out-
come. But since human actors engage actively in communications within
the test, their behaviors will be another decisive factor. Things such as their
backgrounds, biases, characters, genders, and political opinions will have
a role in both the decisions of the interrogator and the behavior of the
human agents with which the interrogator interacts. A computer scientist
with knowledge and experience in AI, for instance, will be in a different
position from someone who has limited insight into the topic. Likewise,
human players who act as conversation partners in the test will have their
own motivations and ideas on how to participate in the test. Some, for
instance, could be fascinated by the possibility of being mistaken for a
computer and therefore tempted to create ambiguity about their identities.
Because of the role human actors play in the Turing test, all these are
variables that may inform its outcome.19
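To make this structure concrete, the following minimal sketch renders the three roles as functions that exchange nothing but text. It is written in Python purely for illustration: the names imitation_game, judge, human_reply, and machine_reply are hypothetical placeholders and are not part of Turing's own specification.

import random

def imitation_game(judge, human_reply, machine_reply, questions):
    """One illustrative round: the judge questions two hidden players, A and B,
    through text alone and must then say which of the two is the machine."""
    # Hide the machine behind position A or B at random, just as the typed
    # exchange in Turing's setup hides voice, body, and appearance.
    players = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        players = {"A": machine_reply, "B": human_reply}
    transcript = []
    for question in questions:
        transcript.append((question, players["A"](question), players["B"](question)))
    guess = judge(transcript)  # the judge names "A" or "B" as the suspected machine
    truth = "A" if players["A"] is machine_reply else "B"
    return guess == truth      # True if the judge unmasked the machine

Everything the chapter identifies as a human variable, from the judge's background and biases to the human player's possible wish to create ambiguity, enters this sketch through the judge and human_reply functions rather than through the machine, which is precisely the point.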
The uncertainty that derives from this is often cited as one of
the test’s shortcomings.20 The test appears entirely logical and justified,
however, if one sees it not as an evaluation of the existence of thinking
machines but as a measure of humans’ reactions to communications with
machines that exhibit intelligent behavior. From this point of view, the
test made clear for the first time, decades before the emergence of online
communities, social media bots, and voice assistants, that AI not only is a
matter of computer power and programming techniques but also resides—
and perhaps especially—in the perceptions and patterns of interaction
through which humans engage with computers.
The wording of Turing’s article provides some additional evidence to
support this interpretation. Though he refused to make guesses about the
question whether machines can think, he did not refrain from speculating
that “at the end of the century the use of words and general educated
opinion will have altered so much that one will be able to speak of machines
thinking without expecting to be contradicted.”21 It is worth reading the
wording of this statement attentively. This is not a comment about the de-
velopment of more functional or more credible machines. It is about cul-
tural change. Turing argues that by the end of the twentieth century people
will have a different understanding of the AI problem, so that the prospect
of “thinking machines” will not sound as unlikely as it did at Turing’s time.
He shows interest in “the use of words and general educated opinion” rather
than in the possibility of establishing whether machines actually “think.”
Looking at the history of computing since Turing’s time, there is no
doubt that cultural attitudes about computing and AI did change, and quite
considerably. The work of Sherry Turkle, who studied people’s relationships
with technology across several decades, provides strong evidence of this.
For instance, in interviews conducted in the late 1970s, as she asked
participants what they thought about the idea of chatbots providing psy-
chotherapy services, she encountered much resistance. Most interviewees
tended to agree that machines could not make up for the loss of the em-
pathic feeling between the psychologist and the patient.22 In the following
THE COMMUNICATION GAME
Media historian John Durham Peters famously argued that the history
of communication can be read as the history of the aspiration to estab-
lish an empathic connection with others, and the fear that this may break
down.25 As media such as the telegraph, the telephone, and broadcasting
were introduced, they all awoke hopes that they could facilitate such
connections, and at the same time fears that the electronic mediation they
provided would increase our estrangement from our fellow human beings.
It is easy to see how this also applies to digital technologies today. In its rel-
atively short history, the Internet kindled powerful hopes and fears: think,
for instance, of the question whether social networks facilitate the creation
of new forms of communication or make people lonelier than ever before.26
But computing has not always been seen in relationship to communi-
cation. In 1950, when Turing published his article, computer-mediated
communication had not yet coalesced into a meaningful field of investi-
gation. Computers were mostly discussed as calculating tools, and avail-
able forms of interactions between human users and computers were
minimal.27 Imagining communication between humans and computers in
1950, when Turing wrote his article, required a visionary leap—perhaps
even greater than that needed to consider the possibility of “machine in-
telligence.”28 The very idea of a user, understood as an individual given ac-
cess to shared computing resources, was not fully conceptualized before
developments in time-sharing systems and computer networks in the
1960s and the 1970s made computers available for individual access, ini-
tially within small communities of researchers and computer scientists and
then for an increasingly larger public.29 As a consequence, the Turing test
is usually discussed as a problem about the definition of intelligence. An
alternative way to look at it, however, is to consider it as an experiment in
the communication between humans and computers. Recently, scholars
have started to propose such an approach. Artificial intelligence ethics ex-
pert David Gunkel, for instance, points out that “Turing’s essay situates
communication—and a particular form of deceptive social interaction—
as the deciding factor” and should therefore be considered a contribu-
tion to computer-mediated communication avant la lettre.30 Similarly, in a
thoughtful book written after his experience as a participant in the Loebner
Prize—a contemporary contest, discussed at greater length in chapter 5, in
which computer programs engage in a version of the Turing test—Brian
Christian stresses this aspect, noting that “the Turing test is, at bottom,
about the act of communication.”31
Since the design of the test relied on interactions between humans
and computers, Turing felt the need to include precise details about how
they would enter into communication. To ensure the validity of
the test, the interrogator needed to communicate with both human and
computer players without receiving any hints about their identities other
than the contents of their messages. Communications between humans
and computers in the test were thus meant to be anonymous and disem-
bodied.32 In the absence of video displays and even input devices such as
the electronic keyboard, Turing imagined that the answers to the judge’s
inputs “should be written, or better still, typewritten,” the ideal arrangement
being “to have a teleprinter communicating between the two rooms.”33
Considering how, as media historians have shown, telegraphic transmission
and the typewriter mechanized the written word, making it independent
from its author, Turing’s solution shows an acute sense of the role of media
by approaches that focus on the performance of computing technologies
alone. The medium and the interface, moreover, also contribute to shaping
the outcomes and the implications of every interaction. Turing’s
proposal was, in this sense, perhaps more “Communication Game” than
“Imitation Game.”
PLAYING WITH TURING
possible a confrontation between humans and computers that required
turn taking, a fundamental element in human communication and one
that has been widely implemented in interface design. Similarly, digital
games opened the way for experimenting with more complex interactive
systems involving the use of other human senses, such as sight and touch.55
The Turing test, in this context, envisioned the emergence of what Paul
Dourish calls “social computing,” a range of systems for human-machine
interactions by which understandings of the social worlds are incorporated
into interactive computer systems.56
At the outset of the computer era, the Turing test made ideas such as
“thinking computers” or “machine intelligence” imaginable in terms of a
simple game situation in which machines rival human opponents. In his vi-
sionary book God & Golem, Inc. (1964), the founder of cybernetics, Norbert
Wiener, predicted that the fact that learning machines could outwit their
creators in games and real-world contests would raise serious practical and
moral dilemmas for the development of AI. But the Turing test did not just
posit the possibility that a machine could beat a human at a fair game. In
addition, in winning the Imitation Game—or passing the Turing test, as
one would say today—machines would advance one step further in their
impertinence toward their creators. They would trick humans, leading us
into believing that machines aren’t different from us.
A PLAY OF DECEPTION
utterly.”65 Thus in the Turing test, the computer’s deceptive stance is innoc-
uous precisely because it is carried out within such a playful frame, with the
complicity of human players who follow the rules of the game and partici-
pate willingly in it.66
Turing presented his test as an adaptation of an original “imitation
game” in which the interrogator had to determine who was the man and
who the woman, placing gender performance rather than machine intelli-
gence at center stage.67 The test, in this regard, can be seen as part of a longer
genealogy of games that, before the emergence of electronic computers,
introduced playful deception as part of their design.68 The Lady’s Oracle,
for instance, was a popular Victorian pastime for social soirées. The game
provided a number of prewritten answers that players selected randomly to
answer questions posed by other players. As amusing and surprising replies
were created by chance, the game stimulated the players’ all-too-human
temptation to ascribe meaning to chance.69 The “spirit boards” that repro-
duce the dynamics of spiritualist séances for the purpose of entertainment
are another example.70 Marketed explicitly as amusements by a range of
board game companies since the late nineteenth century, they turn spirit
communication into a popular board game that capitalizes on the players’
fascination with the supernatural and their willingness—conscious or
unconscious—to make the séance “work.”71 The same dynamics charac-
terize performances of stage magic, where the audience’s appetite for the
show is often attributable to the pleasures of falling for the tricks of
the prestidigitator, of the discovery that one is liable to deception, and of
admiration for the performer who executes the sleights of hand.72
What these activities have in common with the Turing test is that they
all entertain participants by exploiting the power of suggestion and de-
ception. A negative connotation is usually attributed to deception, yet
its integration in playful activities is a reminder that people actively seek
situations where they may be deceived, following a desire or need that
many people share. Deception in such contexts is domesticated, made in-
tegral to an entertaining experience that retains little if anything of the
threats that other deceptive practices bring with them.73 Playful decep-
tion, in this regard, posits the Turing test as an apparently innocuous game
that helps people to experiment with the sense, common to many forms of
interaction between humans and machines, that a degree of deception is
harmless and even functional to a productive
interaction. Alexa and Siri are perfect examples of how this works in prac-
tice: the use of human voices and names with which these “assistants” can
be summoned and the consistency of their behaviors stimulate users to
assign a certain personality to them. This, in turn, helps users to introduce
these systems more easily into their everyday lives and domestic spaces,
making them less threatening and more familiar. Voice assistants function
most effectively when a form of playful and willing deception is embedded
in the interaction.
To return to the idea underlined in Monkey Shines, my reading of the
Turing test points to another suggestion: that what characterizes humans
is not so much their ability to deceive as their capacity and willingness to
fall into deception. Presenting the possibility that humans can be deceived
by computers in a reassuring, playful context, the Turing test invites re-
flection on the implications of creating AI systems that rely on users falling
willingly into illusion. Rather than positing deception as an exceptional
circumstance, the playfulness of the Imitation Game envisioned a future
in which banal deception is offered as an opportunity to develop satisfac-
tory interactions with AI technologies. Studies of social interaction in psy-
chology, after all, have shown that self-deception carries a host of benefits
and social advantages.74 A similar conclusion is evoked and mirrored by
contemporary research in interaction design pointing to the advantages of
having users and consumers attribute agency and personality to gadgets
and robots.75 These explorations tell us that by cultivating an impression of
intelligence and agency in computing systems, developers might be able to
improve the users’ experience of these technologies.
The attentive reader might have grasped the disturbing consequences
of this apparently benign endeavor. McLuhan, one of the most influential
theorists of media and communication, used the Greek myth of Narcissus
as a parable for our relationship with technology. Narcissus was a beautiful
young hunter who, after seeing his own image reflected in a pool, fell in love
with himself. Unable to move away from this mesmerizing view, he stared
at the reflection until he died. Like Narcissus, people stare at the gadgets of
modern technology, falling into a state of narcosis that makes them unable
to understand how media are changing them.76 Identifying playful decep-
tion as a paradigm for conceptualizing and constructing AI technologies
awakens the question of whether such a sense of narcosis is also implicated
in our reactions to AI-powered technologies. Like Narcissus, we regard
them as inoffensive, even playful, while they are changing dynamics and
understandings of social life in ways that we can only partially control.
So much has been written about the Turing test that one might think
there is nothing more to add. In recent years, many have argued that the
Turing test does not reflect the functioning of modern AI systems. This
is true if the test is seen as a comprehensive test bed for the full range
of applications and technologies that go under the label “AI,” and if one
does not acknowledge Turing’s own refusal to tackle the question whether
“thinking machines” exist or not. Looking at the Turing test from a dif-
ferent perspective, however, one finds that it still provides exceedingly
useful interpretative keys to understanding the implications and impact of
many contemporary AI systems.
In this chapter, I’ve used the Turing test as a theoretical lens to unveil
three key issues about AI systems. The first is the centrality of the human
perspective. A long time before interactive AI systems entered domestic
environments and workspaces, researchers such as Turing realized that
the extent to which computers could be called “intelligent” would depend
on how humans perceived them rather than on some specific character-
istic of machines. This was the fruit, in a sense, of a failure: the impos-
sibility of reaching agreement on definitions of the word intelligence
and of assessing the machine’s experience or consciousness without being
“inside” it. But this realization was destined to lead toward extraordinary
advancements in the AI field. Understanding the fact that AI is a relational
phenomenon, something that emerges also and especially within the in-
teraction between humans and machines, stimulated researchers and
developers to model human behaviors and states of mind in order to devise
more effective interactive AI systems.
The second issue is the role of communication. Imagined by Turing at
a time when the available tools to interact with computers were minimal
and the very idea of the user had not yet emerged as such, the Turing
test helps us, paradoxically, to understand the centrality of commu-
nication in contemporary AI systems. In computer science literature,
human-computer interaction and AI are usually treated as distinct: one
is concerned with the interfaces that enable users to interact with com-
puting technologies, the other with the creation of machines and pro-
grams that complete tasks considered intelligent, such as translating
a piece of writing into another language or engaging in conversation
with human users. Yet the Turing test, as I have shown in this chapter,
provides a common point of departure for these two areas. Regardless of
whether this was or was not among Turing’s initial intentions, the test
provides an opportunity to consider AI also in terms of how the commu-
nication between humans and computers is embedded in the system. It
is a reminder that AI systems are not just computing machines but also
media that enable and regulate specific forms of communication between
users and computers.77
CHAPTER 2
How to Dispel Magic
Computers, Interfaces, and the Problem of the Observer
In this chapter, I look at this early AI research to unveil a less evident legacy
of this pioneering period. I tell the story of how members of the AI commu-
nity realized that observers and users could be deceived into attributing in-
telligence to computers, and how this problem became closely related to the
building of human-computer interactive systems that integrated deception
and illusion in their mechanisms. At first glance, this story seems in contra-
diction with the fact that many of the researchers who shaped this new field
believed in the possibility of creating “strong” AI, that is, a hypothetical ma-
chine that would have the capacity to complete or learn any intellectual task
a human being could. Yet practical explorations in AI also coincided with the
discovery that humans, as Turing anticipated, are part of the equation that
defines both the meaning and the functioning of AI.
In order to tell this story, one needs to acknowledge that the devel-
opment of AI cannot be separated from the development of human-
computer interactive systems. I look, therefore, at the history of AI and
human-computer interaction in parallel. I start by examining the ways AI
researchers acknowledged and reflected on the fact that humans assign in-
telligence to computers based on their own biases, visions, and knowledge.
As scientists developed and presented to the public their computing sys-
tems under the AI label, they noted that observers were inclined to ex-
aggerate the “intelligence” of these systems. Because early computers
offered few possibilities of interaction, this was an observation that con-
cerned mostly the ways AI technologies were presented and perceived by
the public. The development of technologies that created new forms and
modalities of interaction between humans and computers was, in this re-
gard, a turning point. I move on therefore to discuss the ways early inter-
active systems were envisioned and implemented. Drawing on literature
in computer science and on theories in media studies that describe com-
puter interfaces as illusory devices, I show that these systems realized in
practice what the Turing test anticipated in theory: that AI only exists to
the extent it is perceived as such by human users. When advancements
in hardware and software made new forms of communication between
humans and computers possible, these forms shaped practical and theoret-
ical directions in the AI field and made computing a key chapter in the story
of the lineage of deceitful media.
Watching moving images on the screens of our laptops and mobile devices
is today a trivial everyday experience. Yet one can imagine the marvel
of spectators at the end of the nineteenth century when they saw for the
first time images taken from reality animated on the cinematic screen. To
many early observers, film appeared as a realm of shadows, a feat of magic
similar to a conjurer’s trick or even a spiritualist séance. According to film
historians, one of the reasons that early cinema was such a magical experi-
ence was the fact that the projector was hidden from the spectators’ view.
Not seeing the source of the illusion, audiences were stimulated to let their
imaginations do the work.3
At the roots of technology’s association with magic lies, in fact, its
opacity. Our wonder at technological innovations often derives from
our failure to understand the technical means through which they work,
just as our amazement at a magician’s feat depends in part on our ina-
bility to understand the trick.4 From this point of view, computers are
one of the most enchanting technologies humans have ever created. The
opacity of digital media, in fact, cannot be reduced to the technical skills
and knowledge of users: it is embedded in the functioning of computing
technologies. Programming languages feature commands that are intel-
ligible to computer scientists, allowing them to write code that executes
complex functions. Such commands, however, correspond to actual oper-
ations of the machine only after having been translated multiple times,
into lower-level programming languages and finally into machine code,
which is the set of instructions in binary numbers executed by the com-
puter. Machine code is such a low level of abstraction that it is mostly in-
comprehensible to the programmer. Thus, even developers are unable to
grasp the stratifications of software and code that correspond to the actual
functioning of the machine. Computers are in this sense the ultimate black
box: a technology whose internal functioning is opaque even to the most
expert users.5
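To give a sense of this layering, consider a minimal sketch using Python's standard dis module. It is offered only as an illustration of the general point: bytecode is merely one intermediate layer, itself still far removed from the machine code that the hardware eventually executes.

import dis

def greet(name):
    # A single, intelligible line of high-level code...
    return "Hello, " + name

# ...corresponds to a list of lower-level bytecode instructions, which the
# interpreter translates further before anything runs on the hardware.
dis.dis(greet)

Running this prints the bytecode for greet, a small reminder of how quickly even trivial programs descend into notations that few users, and not all programmers, would care to read.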
It is therefore unsurprising that AI and more generally electronic
computers stimulated from their inception a plethora of fantasies and
myths. The difficulty of figuring out how computers worked, combined
with overstated reports about their achievements, sparked the emergence
of a vibrant imaginary surrounding the new machines. As historian of com-
puting C. Dianne Martin demonstrated, based on a body of poll-based so-
ciological evidence and content analysis of newspapers, a large segment
of public opinion in the 1950s and early 1960s came to see computers as
“intelligent brains, smarter than people, unlimited, fast, mysterious, and
frightening.”6 The emergence of AI was deeply intertwined with the rise of
a technological myth centered around the possibility of creating thinking
machines. This did not go unnoticed by the scientists who built the AI
field in the decades after World War II. As early achievements were given
with intelligence is by definition opaque. If it is difficult to bridge the gap
between what software achieves and what happens at the level of computer
circuits, it is even more difficult to imagine how our thoughts and emotions
correspond to specific physical states inside the human brain and body. We
know, in fact, very little about how the brain works.14 It was in the space be-
tween the two sides of this double lack of understanding that the possibility
opened up for imagining computers as thinking machines even when little
evidence was available to suggest a close similarity between computers and
human brains. The opacity of both computers and human brains meant
that the statement “computers are thinking machines” was impossible to
prove but also impossible to dismiss. Consequently, the dream of building
an electronic brain drove the efforts of many AI researchers during a phase
of extraordinary expansion of computing technologies.
Some researchers saw the excessive enthusiasm around early AI systems
as a form of deception. Cognitive scientist Douglas Hofstadter reflected
on such dynamics by pointing out that “to me, as a fledgling [AI] person, it
was self-evident that I did not want to get involved in that trickery. It was
obvious: I don’t want to be involved in passing off some fancy program’s
behavior for intelligence when I know that it has nothing to do with in-
telligence.”15 Other AI researchers were more ambivalent. Most scientists
usually expressed caution in academic publications, but in platforms aimed
at nonacademic audiences and in interviews with the press they were
far less conservative.16 Marvin Minsky at the Massachusetts Institute of
Technology (MIT), for instance, suggested in a publication aimed at the
broad public that once programs with capacity for self-improvement were
created, a rapid evolutionary process would lead to observing “all the phe-
nomena associated with the terms ‘consciousness,’ ‘intuition’ and ‘intelli-
gence’ itself.”17
No one in the field could ignore how important it was to navigate the
public expectations, perceptions, and fantasies that surrounded AI.18
The tendency to overstate actual and potential achievements of AI was fu-
eled by the need to promise exciting outcomes and practical applications
that would attract funding and attention to research. Reporting on his
experience as an AI pioneer in the 1950s, Arthur L. Samuel recalled: “we
would build a very small computer, and try to do something spectacular
with it that would attract attention so that we would get more money.”19
The myth of the thinking machine also functioned as a unifying goal for
researchers in the nascent AI field. As historians of technology have shown,
future-oriented discourse in techno-scientific environments contributes to
a shift of emphasis away from the present state of research and toward an
imagined prospect when the technology will be successfully implemented.
the wheel serves as an extension of human feet, accelerating movement
and exchange across space. As a result, the wheel has enabled new forms
of political organizations, such as empires, and has changed humans at a
social, cultural, and even anthropological level. Seen from a complemen-
tary perspective, however, the notion that media are extensions of humans
also suggests another important aspect of media: that they are meant to
fit humans. Media are envisioned, developed, and fabricated so that they
can adapt to their users—in McLuhan’s words, to become their extensions.
Paraphrasing an old text, so humankind created media in its image, in the
image of humankind it created them . . .
It is easy to see how this applies to different kinds of media. The inven-
tion of cinema, for instance, was the result not only of prodigious engi-
neering efforts but also of decade-long studies about the functioning of
human perception. Knowledge about vision, perception of movement, and
attention was incorporated into the design of cinema so that the new me-
dium could provide an effective illusion and entertain audiences around
the world.24 Likewise, sound media, from the phonograph to the MP3, have
been constructed in accordance with models of human hearing. In order
to improve capacity while retaining quality of sound, frequencies that
are outside the reach of human hearing have been disregarded, adapting
technical reproduction to what and how we actually hear.25 The problem
was not so much how an early phonograph cylinder, a vinyl record, or an
MP3 sounded in physical terms; it was how they sounded for humans. As
proposed in this book, this is why all modern media incorporate banal de-
ception: they exploit the limits and characteristics of human sensoria and
psychology in order to create the particular effects that suit their intended
and prepared use.
Computers are in this regard no exception. Throughout the history
of computing, a plethora of different systems have been developed that
have enabled new forms of communication with users.26 In this con-
text, the computer has become more and more evidently an extension of
humans, hence a communication medium. The need for more accessible
and functional pathways of interaction between computers and human
users stimulated computer scientists to interrogate the ways humans ac-
cess and process information, and to apply the knowledge they gathered
in the development of interactive systems and interfaces. Not unlike what
happened with cinema or sound media, this effort entailed exploring ways
computers could be made “in the image of humankind.”
While the history of AI is usually written as distinct from the history
of human-computer interaction, one need only look at AI’s early develop-
ment to realize that no such rigid distinction is possible.27 The golden age
implement human-computer systems in terms of a feedback mechanism
between two entities that spoke, so to speak, the same language. At the
same time, however, this approach conflicted with practical experiences
with these systems. When humans are involved, in fact, communication
is always situated within a sociocultural milieu. As Lucy Suchman would
later make clear, any effort to create human-computer interactions entails
inserting computer systems within real-world situations.35
In the space between these two understandings of communication—
disembodied and abstract on the one side, as in the theoretical foundations
of computer science, embodied and immersed in social situations on the
other side, as in practical experiences with new interactive systems—one
can better understand the apparent contradiction that characterized the
exploration of many AI pioneers. Leading scientists embraced the myth
of the “thinking machine,” which was rooted in the idea that the brain is
a machine and the activity of the neurons can be described mathemati-
cally, just like the operations of computers. At the same time, the experi-
ence of creating AI applications and putting them into use led scientists
to consider the specific perspectives of human users. In order to achieve
Licklider’s goal of a symbiotic partnership performing intellectual opera-
tions more effectively than humans alone, the complex processes of elec-
tronic computing were to be adapted to users. Computers, in other words,
were to be brought “down to earth,” so to speak, so that they could be easily
accessed and used by humans.
As systems were developed and implemented, researchers realized that
human psychology and perception were important variables in the devel-
opment of effective human-computer interactions. Early studies focused
on issues such as response time, attention, and memory.36 The develop-
ment in the 1960s of time-sharing, which allowed users to access computer
resources in what they perceived as real time, is a case in point.37 One of
the leading AI scientists, Herbert Simon, contended in his 1966 article
“Reflections on Time Sharing from a User’s Point of View” that a time-
sharing system should be considered “fundamentally and essentially a man-
machine system whose performance depends on how effectively it employs
the human nerveware as well as the computer hardware.”38 Consequently,
time-sharing had to be modeled against the ways humans receive and pro-
cess information. Likewise, for Martin Greenberger, who researched both
AI and time-sharing at MIT, computers and users were to be considered
“the two sides of time sharing.” Only a perspective that looked at both sides
would lead to the development of functional systems.39 In this regard, he
observed, time-sharing was “definitely a concession to the user and a rec-
ognition of his point of view.”40
between them. In practice, however, he conceded that a machine invites
different interpretations from different observers. A person with little
insight into computing and mathematics, for example, may see intelli-
gence where a more expert user wouldn’t. A similar dynamic, after all, also
characterizes judgments about intelligence and skill in humans, which “are
often related to our own analytic inadequacies, and . . . shift with changes
in understanding.”44 In another article, Minsky therefore argued that “in-
telligence can be measured only with respect to the ignorance or lack of
understanding of the observer.”45
These were not mere footnotes based on casual observations.
Acknowledging the contribution of the observer—described as an out-
sider who approaches computers with limited knowledge of AI—signaled
the realization that AI only existed within a cultural and social environ-
ment in which the user’s perceptions, psychology, and knowledge played a
role. The fact that this was recognized and discussed by Minsky—a leading
AI scientist who firmly believed that the creation of thinking computers
was a definite outcome to be achieved in the relatively near future—makes
it all the more significant.46 Although acknowledging that observers
wrongly attribute intelligence to machines might seem at odds with
Minsky’s optimism for AI’s prospects, the contradiction evaporates if one
considers that AI emerged in close relationship with the development of
human-computer interactive systems. In a context wherein human users
are increasingly exposed to and interacting with computers, believing in
the possibility of creating thinking machines is entirely compatible with
acknowledging that intelligence can be the fruit of an illusion. Even while
professing full commitment to the dream of creating thinking machines,
as Minsky did, one could not dismiss the perspective of humans as part of
the equation.
In the 1960s, a decade in which both AI and human-computer interaction
laid firm foundations for their subsequent development, the problem of
the observer—that is, the question of how humans respond to witnessing
machines that exhibit intelligence—became the subject of substantial
reflection. British cybernetician and psychologist Gordon Pask, for in-
stance, pointed out in a 1964 computer science article that “the property
of intelligence entails the relation between an observer and an artifact.
It exists insofar as the observer believes that the artifact is, in certain es-
sential respects, like another observer.”47 Pask also noted that an appear-
ance of self-organization entails some forms of ignorance on the part of
the observer, and explained at length situations in which computing sys-
tems could be programmed and presented so that they looked “amusingly
lifelike.”48 Besides theoretical debates, actual artifacts were developed for
THE MEANINGS OF TRANSPARENCY, OR HOW (NOT)
TO DISPEL MAGIC
Early efforts to convey the significance of AI to the public had shown the re-
silience of popular myths that equated computers with “electronic brains.”
Rather than being understood as rational machines whose amazing promise
remained in the sphere of science, computers were often represented as
quasi-magical devices ready to surpass the ability of humans in the most
diverse areas.
In this context, the nascent AI community saw the emergence of
human-computer interaction systems as a potential opportunity to in-
fluence the ways computers were perceived and represented in the public
sphere. In an issue of Scientific American dedicated to AI in 1966, AI pio-
neer John McCarthy proposed that the new interactive systems would pro-
vide not only greater control of computers but also greater understanding
among a wide community of users. In the future, he contended, “the ability
to write a computer program will become as widespread as the ability to
drive a car,” and “not knowing how to program will be like living in a house
full of servants and not speaking their language.” Crucially, according to
McCarthy, the increased knowledge would help people acquire more con-
trol over computing environments.51
McCarthy’s vision posed a potential solution to the problem of the ob-
server. That some users tended to attribute unwarranted intelligence to
machines, in fact, was believed to depend strictly on a lack of knowledge
about computing.52 Many members of the AI community were therefore
confident that the deceptive character of AI would be dispelled, like a magic
trick, by providing users with a better understanding of the functioning
of computer systems. Once programming was as widespread as driving
a car, they reasoned, the problem would evaporate. Artificial intelligence
scientists such as McCarthy imagined that interactive systems would help
achieve this goal, making computers more accessible and better un-
derstood than ever before.
Such hope, however, did not take into account that deception, as the his-
tory of media shows, is not a transitional but rather a structural component
of people’s interactions with technology and media. To create aesthetic and
emotional effects, media need users to fall into forms of illusion: fiction, for
instance, stimulates audiences to temporarily suspend their disbelief, and
television provides a strong illusion of presence and liveness.53 Similarly,
the interaction between humans and computers is based on interfaces that
provide a layer of illusion concealing the technological system to which
The interface’s work of granting access to computers while at the same
time hiding the complexity of the systems relates to the way, discussed in
the previous chapter, AI facilitates a form of playful deception. Through the
game dynamics envisioned by Turing, the illusion of intelligence was do-
mesticated to facilitate a productive interaction between human and ma-
chine. Similarly, one of the characteristics of computer interfaces is that
they are designed to bring into effect their own illusory disappearance, so
that users do not perceive friction between the interface and the under-
lying system.63 The illusion, in this context, is normalized to make it appear
natural and seamless to users. It was for this reason that the implemen-
tation of human-computer interaction coincided with the discovery that
where humans are concerned, even things such as misunderstanding and
deception inform the new symbiosis.
In the 1966 Scientific American issue on AI, Anthony G. Oettinger
brought together McCarthy’s dream of increased access to computers with
his own, different views about software interfaces. Stressing the necessity
for easier systems that would make computing comprehensible to eve-
rybody, he proposed the concept of “transparent computers” to describe
what he considered to be one of the key goals of contemporary software
engineering: turning the complex processes of computing hardware into
“a powerful instrument as easy to use as pen and paper.”64 Once interest in
what happened inside the machine gave way to interest in what could be
achieved with computers, he predicted, computers would be ready to serve
an enormous variety of functions and roles. Quite remarkably, in the same
article he described the computer as a “very versatile and convenient black
box.”65 This concept, as noted, describes technological artifacts that provide
little or no information about their internal functioning. Reading this in
its literal sense, one should assume that computers were to be made trans-
parent and opaque at the same time. Of course, Oettinger was playing with
a secondary connotation of the word “transparent,” that is, easy to use or
to understand. That this transparency was to be achieved by making the
internal functioning of computers opaque is one of the ironic paradoxes of
modern computing in the age of user-friendliness.
From the 1980s, the computer industry, including companies such
as IBM, Microsoft, and Apple, embraced the idea of transparent design.
Faithful in this regard to McCarthy’s call of two decades earlier, they aimed
to make computers as widespread as cars; they diverged, however, on the
means to achieve this goal. Rather than improving computer literacy by
teaching everybody to program, they wanted to make computers simpler
and more intuitive so that people with no expertise could use (and buy)
them. “Transparent” design did not refer to software that exposed its inner
of magic is rooted in people’s relationships with objects and people’s nat-
ural tendency to attribute intelligence and sociality to things. On the other
side, this feeling is embedded in the functioning of computer interfaces
that emerge through the construction of layers of illusion that conceal
(while making “transparent”) the underlying complexity of computing sys-
tems. For this reason, even today, computer scientists remain aware of the
difficulty of dispelling false beliefs when it comes to perceptions and uses
of AI.71 The magic of AI is impossible to dispel because it coincides with the
underlying logic of human-computer interaction systems. The key to the
resilience of the AI myths, then, is to be found not in the flamboyant myths
about mechanical brains and humanoid robots but in the innocent play of
concealing and displaying that makes users ready to embrace the friendly,
banal deception of “thinking” machines.
Developers in computer science would introduce the possibility of de-
ception within a wider framework promising universal access and ease of
use for computing technologies, which informed work aimed at improving
human-computer interaction. Their explorations entailed a crucial shift
away from considering deception something that can be dispelled by
improving knowledge about computers and toward the full integration
of forms of banal deception into the experience of users interacting with
computers. As the next chapter shows, the necessity for this shift became
evident in the earliest attempts to create AI systems aimed at interaction
with the new communities of users accessing computers through time-
sharing systems. One of them, the chatbot ELIZA, not only stimulated ex-
plicit reflections on the relationship between computing and deception but
also became a veritable archetype of the deceitful character of AI. If the
Turing test was a thought experiment contrived by a creative mind, ELIZA
was the Turing test made into an actual artifact: a piece of software that
became a cause célèbre in the history of AI.
CHAPTER 3
The ELIZA Effect
Joseph Weizenbaum and the Emergence of Chatbots
If asked about ELIZA, the first chatbot ever created, Apple’s AI assis-
tant Siri—or at least, the version of Siri installed in my phone—has an
answer (fig. 3.1).
program to impersonate a human being successfully.”2 Its capacity to pass
as human, at least in some situations, made it a source of inspiration for
generations of developers in natural language processing and other AI
applications that engage with language, up to contemporary systems such
as Siri and Alexa.
This chapter examines the creation of ELIZA at MIT in 1964–1966
and interrogates its legacy by examining the lively debate it stimulated
in the AI community and the public sphere. Often discussed in passing
in historical scholarship on AI, computing, and media, the case of ELIZA
deserves much more dedicated attention. In fact, the creation and recep-
tion of Weizenbaum’s ELIZA was a crucial moment for the history of dig-
ital media, not only because the program is widely regarded as the first to
conduct a conversation in natural language but also because ELIZA was a
disputed object that became the center of competing narratives shaping
key controversies and discourses about the impact of computing and
digital media.
Focusing on the discussions about how to interpret the functioning
and success of ELIZA, I argue that it was influential not so much at a
technical level as at a discursive one. Providing evidence of how
relatively simple AI systems could deceive users and create an impres-
sion of humanity and intelligence, ELIZA opened up new perspectives
for researchers interested in AI and in conversational agents such as
chatbots. While the program was relatively unsophisticated, especially
compared to contemporary systems, the narratives and anecdotes cir-
culating about it prompted reflection on how humans’ vulnerability to
deception could be exploited to create efficient interactions between
humans and machines. As such, ELIZA is an emblematic example of how
artifacts, and in particular software, have social and cultural lives that
concern their material circulations as much as the narratives that emerge
and circulate about them.
With the creation of ELIZA, the German-born computer scientist Joseph
Weizenbaum was determined to stress the illusory character of computers’
intelligence. Yet some of the narratives emerging from the creation of
ELIZA reinforced the idea that machines and humans think and under-
stand language in similar ways. Consequently, the program was interpreted
as evidence in favor of two different, even contrasting visions: on the one
side that AI provided only the appearance of intelligence; on the other that
AI might actually replicate intelligence and understanding by artificial
means. In the following decades, moreover, the mechanisms of projection
that inform the use of chatbots and other AI agents came to be described
as the “ELIZA effect.” In this sense, ELIZA is more than a piece of obsolete
software: it is a philosophical toy that, like the optical devices invented in
the nineteenth century to disseminate knowledge about illusion and per-
ception, is still a reminder that the power of AI originates in the technology
as much as in the perception of the user.
Although the implications and actual meaning of the Turing test are still
at the center of a lively debate, it is beyond doubt that the test contributed
to setting practical goals for the community of AI researchers that devel-
oped in the following decades. In what is probably the most influential AI
handbook in computer science, Stuart Russell and Peter Norvig recognize
the key role of the test in the development of a particular understanding
of research in this field based on the so-called behavioral approach, which
pursues the goal of creating computers that act like humans. Programs
in behavioral AI, they explain, are designed to exhibit rather than
actually replicate intelligence, thereby putting aside the problem of what
happens inside the machine’s “brain.”3
As Weizenbaum took up an academic position at MIT in 1964 to
work on AI research, his work was informed by a similar approach.4 In
his writings, he professed that AI was and should be distinguished from
human intelligence. Yet he made efforts to design machines that could
lead people into believing they were interacting with intelligent agents,
since he was confident that users’ realization that they had been deceived
would help them understand the difference between human intelligence
and AI.
Between 1964 and 1966, Weizenbaum created what is considered the
first functional chatbot, that is, a computer program able to interact with
users via a natural language interface.5 The functioning of ELIZA was
rather simple. As Weizenbaum explained in the article describing his in-
vention, ELIZA searched the text submitted by its conversation partner
for relevant keywords. When a keyword or pattern was found, the program
produced an appropriate response according to specific transformation
rules. These rules were based on a two-stage process by which the input
was first decomposed, breaking down the sentence into small segments.
The segments were then reassembled, readapted according to appropriate
rules—for instance by substituting the pronoun “I” for “you” —and pro-
grammed words were added to produce a response. In cases when it was im-
possible to recognize a keyword, the chatbot would employ preconfigured
formulas, such as “I see” or “Please go on” or alternatively would create a re-
sponse through a “memory” structure that drew from previously inserted
inputs.6
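The logic described above (keyword spotting, decomposition, reassembly with pronoun substitution, and stock fallback phrases) can be loosely approximated in a few lines of modern code. The following Python sketch is offered only as an illustration of the principle: the keywords, templates, and pronoun table are invented for the example and bear no relation to Weizenbaum’s original implementation.

import random
import re

# An invented, minimal "script": each keyword maps to reassembly templates.
SCRIPT = {
    "mother": ["TELL ME MORE ABOUT YOUR FAMILY"],
    "i am": ["HOW LONG HAVE YOU BEEN {rest}?", "WHY DO YOU THINK YOU ARE {rest}?"],
    "i feel": ["WHY DO YOU FEEL {rest}?"],
}
FALLBACKS = ["I SEE", "PLEASE GO ON", "WHAT DOES THAT SUGGEST TO YOU?"]

# Pronoun substitutions applied to the decomposed fragment.
SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my"}

def reflect(fragment):
    # Rephrase the fragment from the user's point of view to the bot's.
    return " ".join(SWAPS.get(word, word) for word in fragment.lower().split())

def respond(user_input):
    text = user_input.lower()
    for keyword, templates in SCRIPT.items():
        match = re.search(keyword + r"\b(.*)", text)
        if match:
            # Decompose: keep what follows the keyword, then reassemble it.
            rest = reflect(match.group(1).strip(" .!?"))
            return random.choice(templates).format(rest=rest)
    # No keyword found: fall back on a content-free stock phrase.
    return random.choice(FALLBACKS)

print(respond("I am very unhappy about my work"))
# One possible reply: WHY DO YOU THINK YOU ARE very unhappy about your work?

Keeping the keyword-and-template table as a plain data structure, separate from the procedure that applies it, also loosely echoes the design choice discussed below, in which ELIZA’s conversational scripts were treated as data rather than as part of the program itself.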
Made available to users of the Project MAC time-sharing system at MIT,
the program was designed to engage in conversations with human users
who responded by writing on a keyboard, a situation similar to a contem-
porary messaging service or online chatroom. In this sense, the very pos-
sibility of ELIZA rested on the availability of a community of users who
could engage in conversations with the chatbot thanks to the development
of new human-computer interaction systems.7
Weizenbaum was adamant in his contention that ELIZA exhibited
not intelligence but the illusion of it. The program would demonstrate
that humans in interactions with computers were vulnerable to decep-
tion.8 As Weizenbaum conceded in an interview with Daniel Crevier, a
historian of AI, ELIZA was the immediate successor of a program that
played a game called Five-in-a-row or Go-MOKU, which was described in
his first published article, aptly titled “How to Make a Computer Appear
Intelligent.”9 The program used a simple strategy with no lookahead, yet it
could beat anyone who played at the same naive level and aimed at creating
“a powerful illusion that the computer was intelligent.”10 As noted in the
article where it was first described, it was able to “fool some observers for
some time.” Indeed, deception was, to Weizenbaum, the measure of success
for the author of an AI program: “his success can be measured by the per-
centage of the exposed observers who have been fooled multiplied by the
length of time they have failed to catch on.” On the basis of this criterion,
Weizenbaum considered his program to be quite successful, as it made
many observers believe that the computer behaved intelligently, providing
a “wonderful illusion of spontaneity.”11
Considering the limited computer power and resources available, ELIZA
was also quite successful in deceiving users, at least when the interaction
was limited to a relatively brief conversation. Its efficacy was due to some
intuitions that did not strictly pertain to the domain of programming but
derived from insights from psychology and from Weizenbaum’s under-
standing of human behavior in conversations. He had realized that our per-
ception of the identity of a conversation partner is crucial to the credibility
of any human interaction. Thus, in order to pass convincingly for a human,
a chatbot should not only respond correctly to a given input but also play
a coherent role throughout the conversation.12 Consequently, he conceived
of ELIZA as a program that could be adapted to different roles, which
he called, using one of his characteristic theatrical metaphors, scripts. In
ELIZA’s software architecture, scripts were treated as data, which implied
that they were “not part of the program itself.” In terms of conversation
patterns, a script corresponded to a specific part that the bot would play
throughout a conversation.13
In the initial version of ELIZA, called DOCTOR, the program’s script
simulated a psychotherapist employing the Rogerian method—a type
of nondirective therapy in which the therapist reacts to the patient’s
talk mainly by redirecting it back to the patient, often in the form of
further questions.14 The choice of this role was crucial to ELIZA’s suc-
cess: in fact, the dynamics of the therapy allowed the program to sustain
a conversation while adding little, if anything, to it. The results are ev-
ident in some excerpts of a conversation with ELIZA that Weizenbaum
published in his first article on the subject. (ELIZA’s contributions are
in capital letters.)
certain role.”19 In order to underline that ELIZA was only able to produce
the impression of reality, he went so far as to point out that “in a sense
ELIZA was an actress who commanded a set of techniques but who had
nothing of her own to say” and to describe it as a “parody” of a nondirective
psychotherapist.20
To Weizenbaum, then, ELIZA’s operations could be equated to acting, and
more broadly, conversation—and consequently, verbal human-machine
interactions—was a matter of role playing. As he made clear in an article
dedicated to the problem of, as the title put it, “Contextual Understanding
by Computers,” he realized that the contributions of human subjects were
central to such interactions. Humans make assumptions about their con-
versation partners, assigning them specific roles; thus, a conversation
program is credible as long as it successfully plays its part. The illusion
produced by the program will break down in the moment when the con-
textual assumptions made by the human partner cease to be valid—a phe-
nomenon that, Weizenbaum notes, is at the basis of the comical effects
created in comedies of errors.21
In the decades since the creation of ELIZA, the theatrical metaphor
has been used by scholars and commentators to discuss Weizenbaum’s
work and, more broadly, has become a common way commentators and
developers alike describe the functioning of chatbots.22 This is significant
because, as George Lakoff and Mark Johnson famously demonstrated, the
emergence of new metaphors to describe things and events may result in
orienting attitudes toward them and in guiding future actions.23 While the
metaphor of the theatre was used by Weizenbaum to demonstrate that AI
was the fruit of an illusory effect, another comparison he employed pointed
even more explicitly to the issue of deception: he noted, in fact, that users’
belief that ELIZA was actually understanding what they were saying “is
comparable to the conviction many people have that fortune-tellers really
do have some deep insight.”24 Like a fortune teller, ELIZA’s messages left
enough room for interpretation for users to fill in the gaps, so that “the ‘sense’
and the continuity the person conversing with ELIZA perceives is supplied
largely by the person himself.”25
As historians of technology have shown, technologies function not only
at a material and technical level but also through the narratives they gen-
erate or into which they are forced.26 David Edgerton, for instance, points
to the V2 rockets the German army developed and employed at the end of
World War II. Although the rockets were ineffective if one considers the
amount of resources used and their practical effects, their development
was useful at a symbolic level, since it kept the hopes of victory alive. The
rockets functioned, in other words, first and foremost, at a narrative rather
than a material level.27 A similar discourse could apply to software. Think,
for instance, of the Deep Blue chess program, which famously beat chess
master Garry Kasparov in 1997. IBM had made considerable investments
to develop Deep Blue, yet once the challenge was won and the attention
of journalists evaporated, IBM dismantled the project and reassigned the
engineers working on it to different tasks. The company, in fact, was inter-
ested in the program’s capacity to create a narrative—one that would place
IBM at the forefront of progress in computation—more than in its poten-
tial application beyond the Kasparov challenge.28
In a similar way, ELIZA was especially effective at a discursive rather
than a practical level. Weizenbaum was always transparent about the fact
that ELIZA had limited practical application and would be influential in
shedding light on a potential path rather than in the context of its imme-
diate use.29 As Margaret Boden rightly points out, in terms of program-
ming work ELIZA was simple to the point of being obsolete even at the
moment of its creation, and proved largely irrelevant in technical terms
for the development of the field, essentially because Weizenbaum “wasn’t
aiming to make a computer ‘understand’ language.”30 His deep interest,
on the contrary, in how the program would be interpreted and “read” by
users suggests that he aimed at the creation of an artifact that could pro-
duce a specific narrative about computers, AI, and the interaction between
humans and machines. ELIZA, in this regard, was an artifact created to
prove his point that AI should be understood as an effect of users’ tendency
to project identity. In other words, ELIZA was a narrative about conversa-
tional programs as much as it was a conversational program itself.
Weizenbaum, however, was not interested in deceiving computer users.
On the contrary, he expected that a particular interpretation of ELIZA
would emerge from users’ interactions with it, as they realized that the ap-
parent intelligence of the machine was just the result of deception.31 This
narrative would present AI not as the result of humanlike intelligence pro-
grammed into the machine but as an illusory effect. This narrative would
therefore replace the myth of the “thinking machine,” which suggested
that computers could equal human intelligence, with a narrative more con-
sistent with the behavioral approach in AI.32 Looking back at ELIZA’s crea-
tion, he explained that it was conceived as a “programming trick”33—even
a “joke.”34 By showing that a computer program of such limited complexity
could trick humans into believing it was real, ELIZA would work as a dem-
onstration of the fact that humans, facing “AI” technologies, are vulnerable
to deception.
Weizenbaum’s approach recalls in this sense the use in the Victorian
age of philosophical toys such as the kaleidoscope or the phenakistoscope,
which illustrated an idea about optics through a device that manipulated
the viewers’ perceptions and at the same time entertained them.35 In fact,
Weizenbaum noted that one of the reasons for ELIZA’s success was that
users could interact playfully with it.36 It was targeted to users of the new
time-sharing system at MIT, who would be stimulated by it to reflect not
only on AI’s potentials but also, and crucially, on its limitations, since the
“programming trick” created the illusion of intelligence rather than intel-
ligence itself.
Weizenbaum’s success in turning ELIZA into the source of a specific nar-
rative about AI as deception is evident in the stories that circulated about
ELIZA’s reception. One famous anecdote concerned Weizenbaum’s sec-
retary, who once asked him to leave the room, needing some privacy to
chat with ELIZA. He was particularly startled by this request because the
secretary was well aware of how the program functioned and could hardly
consider it a good listener.37 Another anecdote about ELIZA concerns a
computer salesman who had a teletype interchange with ELIZA without
being aware that it was a computer program; the interaction resulted in
him losing his temper and reacting with fury.38 Both anecdotes have been
recalled extremely often to stress humans’ tendency to be deceived by AI’s
appearance of intelligence—although some, like Sherry Turkle, point out
that the story of the secretary might reveal instead users’ tendency to
maintain the illusion that ELIZA is intelligent “because of their own desires
to breathe life into a machine.”39
In such anecdotes, which played a key role in informing the program’s
reception, the idea that AI is the result of deception becomes substantiated
through a simple, effective narrative. Indeed, one of the characteristics of
anecdotes is their capacity to be remembered, retold, and disseminated,
conveying meanings or claims about the person or thing they refer to. In
biographies and autobiographies, for instance, anecdotes add to the narra-
tive character of the genre, which despite being nonfictional is based on sto-
rytelling, and at the same time contribute to the enforcing of claims about
the person who is the subject of the biographical sketch, for example, her
temperament, personality, and skills.40 In the reception of ELIZA, the an-
ecdote about the secretary played a similar role, imposing a recurring pat-
tern in which the functioning of the program was presented to the public
in terms of deception. Julia Sonnevend has convincingly demonstrated
that one of the characteristics of the most influential narratives about
media events is their “condensation” into a single phrase and a short nar-
rative.41 Others have referred to a similar process in Latourian terms as a
out that computers might be regarded by laypersons as though they are per-
forming magic. However, “once a particular program is unmasked, once its
inner workings are explained in language sufficiently plain to induce under-
standing, its magic crumbles away; it stands revealed as a mere collection of
procedures, each quite comprehensible.”46 By making this point, Weizenbaum
seemed unaware of a circumstance that concerns any author—from the
writer of a novel to an engineer with her latest project: once one’s crea-
tion reaches public view, no matter how carefully one has reflected on the
meanings of its creation, these meanings can be overturned by the readings
and interpretations of other writers, scientists, journalists, and laypersons.
A computer program can look, despite the programmer’s intention, as if it
is performing magic; new narratives may emerge, overwriting the narrative
the programmer meant to embed in the machine.
This was something Weizenbaum would learn from experience. The
public reception of ELIZA, in fact, involved the emergence of a very different
narrative from the one he had intended to “program” into the machine.
With the appearance in 1968 of Stanley Kubrick’s now classic science fic-
tion film 2001: A Space Odyssey, many thought that ELIZA was “something
close to the fictional HAL: a computer program intelligent enough to under-
stand and produce arbitrary human language.”47 Moreover, Weizenbaum
realized that research drawing on or following from his work was guided by very
different understandings about the scope and goals of AI. A psychologist
from Stanford University, Kenneth Mark Colby, developed PARRY, a con-
versational bot whose design was loosely based on ELIZA but represented a
very different interpretation of the technology. Colby hoped that chatbots
would provide a practical therapeutic tool by which “several hundred
patients an hour could be handled by a computer system designed for this
purpose.”48 In previous years, Weizenbaum and Colby had collaborated and
engaged in discussions, and Weizenbaum later expressed some concerns
that his former collaborator did not give appropriate credit to his work
on ELIZA; but the main issue in the controversy that ensued between the
two scientists was on moral grounds.49 A chatbot providing therapy to real
patients was in fact a prospect Weizenbaum found dehumanizing and dis-
respectful of patients’ emotional and intellectual involvement, as he later
made abundantly clear.50 The question arises, he contended, “do we wish
to encourage people to lead their lives on the basis of patent fraud, charla-
tanism, and unreality? And, more importantly, do we really believe that it
helps people living in our already overly machine-like world to prefer the
therapy administered by machines to that given by other people?”51 This
reflected his firm belief that there were tasks that, even if theoretically or
practically possible, a computer should not be programmed to do.52
public, bringing about “a certain aura—derived, of course, from science.”59
He worried that if public conceptions of computing technologies were mis-
guided, then public decisions about the governance of these technologies
were likely to be misguided as well.60 The fact that the computer had become
a powerful metaphor, in his view, did nothing to improve understandings
of these technologies among the public. On the contrary, he reasoned, a
metaphor “suggests the belief that everything that needs to be known is
known,” thereby resulting in a “premature closure of ideas” that sustains
rather than mitigates the lack of understanding of the complex scientific
concepts related to AI and computation.61
Scholars of digital culture have shown how metaphors such as the
cloud provide narratives underpinning powerful, and sometimes inaccu-
rate, representations of new technologies.62 Less attention has been given,
however, to the role of particular artifacts, such as software, in stimulating
the emergence of such metaphors. Yet the introduction of software often
stimulates powerful reactions in the public sphere. Digital games such as
Death Race or Grand Theft Auto, for instance, played a key role in directing
media narratives about the effects of gaming.63 As Weizenbaum percep-
tively recognized, the ways ELIZA’s functioning was narrated by other
scientists and in the press contributed to strengthening the “computer
metaphor,” by which software’s capacity to create the appearance of intel-
ligence was exchanged for intelligence itself. Although he had conceived
and programmed ELIZA with the expectation that it would help people
dismiss the magical aura of computers, the narratives that emerged from
public scrutiny of his invention were actually reinforcing the very same
metaphors he intended to dispel.
REINCARNATING ELIZA
Figure 3.2 Author’s conversation with ELIZA’s avatar, at http://www.masswerk.at/elizabot/, 18 May 2018. The author acknowledges having himself fallen into the (banal?) deception, by referring to a piece of software with a feminine instead of a neutral pronoun.
technologies. His widely read book Computer Power and Human Reason
turned ELIZA into a powerful narrative about computing and digital media,
which anticipated and in many ways prepared the ground for the emergence
of critical approaches, counteracting the emergence of techno-utopian
discourses about the so-called digital revolution.70 Thus, ELIZA became the
subject of not just one but a range of narratives, which contributed, at that
foundational moment for the development of digital media, alternative
visions about the impact and implications of computing and AI.
If Weizenbaum had conceived ELIZA as a theory embedded in a program,
others after him also turned ELIZA into a theory about artificial agents,
users, and communicative AI. Even today, people’s tendency to believe that
a chatbot is thinking and understanding like a person is often described as
the Eliza effect, designating situations in which users attribute to computer
systems “intrinsic qualities and abilities which the software . . . cannot pos-
sibly achieve,” particularly when anthropomorphization is concerned.71
The Eliza effect has remained a popular topic in reports of interactions
with chatbots, forming a recurrent pattern in accounts of users’ occasional
inabilities to distinguish humans from computer programs.
The particular implications given to the Eliza effect vary significantly
from one author to the other. Noah Wardrip-Fruin, for instance, suggests
that the deception was the result of audiences developing a mistaken idea
about the internal functioning of the chatbot: “they assumed that since
the surface appearance of an interaction with the program could resemble
something like a coherent dialogue, internally the software must be com-
plex.”72 The Eliza effect, in this sense, designates the gap between the black-
boxed mechanisms of software and the impression of intelligence that
users experience. This impression is informed by their limited access to the
machine but also by their preconceptions about AI and computers.73
Turkle gives a different and in many respects more nuanced interpre-
tation of the Eliza effect. She uses this interpretation to describe the ways
users, once they know that ELIZA or another chatbot is a computer pro-
gram, change their modes of interaction so as to “help” the bot produce
comprehensible responses. She notes, in fact, that ELIZA users avoid saying
things they think will confuse it or result in too predictable responses. In
this sense, the Eliza effect demonstrates not so much AI’s capacity to de-
ceive as the tendency of users to fall willingly—or perhaps most aptly,
complacently—into the illusion.74
This interpretation of the Eliza effect recalls wider responses that people
have to digital technologies, if not toward technology on the whole. As
Jaron Lanier observes, “we have repeatedly demonstrated our species’ bot-
tomless ability to lower our standards to make information technology
look good”: think, for instance, of teachers giving students standardized
tests that an algorithm could easily solve, or the bankers before the fi-
nancial crash who continued to trust machines in directing the finan-
cial system.75 The Eliza effect, therefore, reveals a deep willingness to see
machines as intelligent, which informs narratives and discourses about AI
and computing as well as everyday interactions with algorithms and infor-
mation technologies.
In this sense ELIZA, as Turkle put it, “was fascinating not only because
it was lifelike but because it made people aware of their own desires to
breathe life into a machine.”76 Although this desire usually remains tacit
in actual interactions with AI, it may also reveal a form of pragmatism on
the part of the user. For example, if a chatbot is used to do an automated
psychotherapy session, falling under the spell of the Eliza effect and thus
“helping” the chatbot do its job could potentially improve the experience
of the user. Similarly, AI companions—such as social robots or chatbots—
that are designed to keep users company will be more consoling if users
prevent them from revealing their shortcomings by choosing inputs they
can reply to. In addition, users of AI voice assistants such as Siri and Alexa
learn the most appropriate wordings for their queries, to ensure that these
assistants will perform their daily tasks and “understand” them. As a man-
ifestation of banal deception, in this sense, the Eliza effect contributes
to explaining why users are complicit with the deceptive mechanisms
embedded in AI: not as a consequence of their naivete but rather because
the deception has some practical value to them.
Still, such apparent benevolence should not make us forget
Weizenbaum’s passionate call to consider the risks as much as the benefits
of AI technologies. The Eliza effect, to Weizenbaum himself, epitomized a
dynamic of reception that was “symptomatic of deeper problems”: people
were eager to ascribe intelligence even if there was little to warrant such
a view.77 This tendency could have lasting effects, he reasoned, because
humans, having initially built the machines, might come to embrace the
very models of reality they programmed into them, thereby eroding what
we understand as human. As such, for Weizenbaum ELIZA demonstrated
that AI was not or at least not only the technological panacea that had been
enthusiastically heralded by leading researchers in the field.
OF CHATBOTS AND HUMANS
with algorithms in everyday life.78 She notes that everyday encounters with
computing technologies are interpreted in personal ways by users, and that
these interpretations inform how they imagine, perceive, and experience
digital media. While Bucher focuses mainly on the context of interaction
with modern computing technologies, the case of ELIZA shows that these
interpretations are also informed by the narratives and visions that circu-
late around them. In fact, as argued by Zdenek, AI depends on the produc-
tion of physical as well as discursive artifacts: AI systems are “mediated
by rhetoric” because “language gives meaning, value and form” to them.79
This dependence is also due to the characteristic opacity of software, whose
functioning is often obscure to users and, to a certain extent, even to com-
puter scientists, who often work with only a partial understanding and
knowledge of complex systems. If narratives in this sense may illuminate
software’s technical nature for the public, they might at the same time turn
software into contested objects whose meanings and interpretations are
the subject of complex negotiations in the public sphere.
ELIZA was one such contested object and, as such, prompted questions
about the relationship between AI and deception that resonate to the pre-
sent day. In the context of an emerging AI field, ELIZA provided an insight
into patterns of interaction between humans and computers that only ap-
parently contrasts with the vision of human-computer symbiosis that was
embraced at the same time at MIT and in other research centers across the
world. In fact, Weizenbaum’s approach to AI in terms of illusion was not
by any means irrelevant to the broader discussions about human-machine
interactions that were shaping this young discipline.
In histories of computing and AI, Weizenbaum is usually presented as
an outcast and a relatively isolated figure within the AI field. This was cer-
tainly true during the last part of his career, when his critical stance gained
him the reputation of a “heretic” in the discipline. This was not the case,
however, during his early work on ELIZA and other AI systems, which were
extremely influential in the field and secured him a tenured position at
MIT.80 The questions Weizenbaum was posing at the time of his work on
ELIZA were not the solitary explorations of an outcast; several of his peers,
including some of the leading researchers of the time (as chapter 2 has
shown), were asking them too. While AI developed into a heterogeneous
milieu bringing together multiple disciplinary perspectives and approaches,
many acknowledged that users could be deceived in interactions with “in-
telligent” machines. Like Weizenbaum, other researchers in the burgeoning
AI field realized that humans were not an irrelevant variable in the AI equa-
tion: they actively contributed to the emergence of intelligence, or better
said, the appearance of it.
The questions that animated Weizenbaum’s explorations and the con-
troversy about ELIZA, in this sense, informed not only the work of chatbot
developers and AI scientists. In different areas of applications, software
developers discovered that users’ propensity to project intelligence or
agency onto machines was kindled by specific elements of software’s de-
sign, and that mobilizing such elements could help achieve more functional
programs and systems. In the next chapter, I explore some examples of
ways the creation of software turned into a new occasion to interrogate
how deception informed the interaction between computer and users. For
all their specificity, these examples will show that the Eliza effect concerns
much more than an old chatbot that might have deceived some of its
users: it is, on the contrary, a structural dimension of the relationship be-
tween humans and computing systems.
CHAPTER 4
Situating AI in Software
Between the 1970s and the early 1990s, coinciding with the emer-
gence of personal computing and the shift to network systems and
distributed computing, AI took further steps toward being embedded in
increasingly complex social environments.1 AI technologies were now
more than isolated experiments testing the limits and potentials of com-
puting, as had been largely the case until then. More and more often these
technologies were made available to communities of users and aimed at
practical applications. Pioneering projects like ELIZA had mostly con-
cerned communities of practitioners such as the one surrounding the MAC
time-sharing project at MIT. But when computing devices moved to the
center of everyday life for masses of people, the significance of AI changed
irreversibly. So too did its meaning: AI took on a social life of its own, as
new opportunities emerged for interactions between artificial agents and
human users.
Somewhat paradoxically, these developments coincided with a downturn in
the fortunes of the AI enterprise. After the enthusiasm of the 1950s and 1960s,
a period of disillusion about the prospects of AI characterized the following
two decades. The gap between the visionary myths of thinking machines
and the actual outcomes of AI research became manifest, as a series of
critical works underlined the practical limitations of existing systems. The
Lighthill report, commissioned by the British Science Research Council, for
instance, gave a pessimistic evaluation of the drawbacks of AI, identifying
the “combinatorial explosion”: as the size of the data increases, some
problems rapidly become intractable.2 Other critical works, such as Hubert
Dreyfus’s Alchemy and Artificial Intelligence, also highlighted that the much-
heralded achievements of AI research were far from bringing significant
practical results.3 The criticism seriously undermined the AI community
as a whole, resulting in a climate of public distrust regarding their research
and in a loss of credibility and funding: the so-called AI winter.
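A toy calculation, not taken from the Lighthill report itself, illustrates the combinatorial explosion mentioned above. If a program must weigh, say, thirty alternatives at every step of a reasoning or game-playing task, the number of sequences it has to examine grows exponentially with the number of steps, quickly exceeding what any computer of the period (or indeed of today) could exhaustively explore.

# Size of the search space with branching factor b and search depth d.
def search_space(b, d):
    return b ** d

# With thirty alternatives per step, the space explodes within a few steps:
for depth in (2, 5, 10, 20):
    print(depth, search_space(30, depth))
# The counts grow to 900; 24,300,000; about 5.9e14; about 3.5e29.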
As a response to this, many researchers in the late 1970s and in the early
1980s started to avoid using the term AI to label their research even when
working on projects that would have been called AI just a few years earlier.
They feared that if they adopted such a term to describe their work, the ge-
neral lack of credibility of AI research would penalize them.4 Even if labeled
otherwise, however, research into areas such as natural language processing
and communicative AI continued to be done in laboratories across the
United States, Europe, Asia, and the world, with significant achievements.5
In addition, the rising computer software industry supported experimenta-
tion with new ways to apply AI technologies in areas like home computing
and gaming. During this time, crucial steps were taken that implemented
AI within new interactive systems and prepared for today’s communicative
AI—the likes of Alexa, Siri, and Google Assistant.
This chapter examines the ways AI was incorporated in software sys-
tems by focusing on three particular cases: routine programs called
daemons, digital games, and social interfaces. The trajectory of these dif-
ferent software artifacts provides an opportunity to reflect on the way
users’ understandings of computers and AI have changed over time, and
how such changes are connected with evolutions in interactions between
humans and computing machines. Observing a significant change from
the late 1970s to the 1990s in how participants in her studies perceived
AI, Turkle has suggested that the emergence of personal computing and
the increasing familiarity with computing technologies made AI appear
less threatening to users. As computers became ubiquitous and took up an
increasing number of tasks, their inclusion in domestic environments and
everyday lives made them less frightening and more familiar to people.6
The integration of AI-based technologies in software that came to be in
everyday use for millions of people around the world helped users and
developers to explore the potentials and ambiguities of AI, preparing for
the development of contemporary AI systems, not only at a technical but
also at a cultural and social level.
Reconstructing histories of software, as of all technical artifacts, requires
a multilayered approach that embraces the material, social, and cultural
dimensions of technology.7 Yet the peculiar character of computer software
adds extra levels of complexity to this enterprise. What makes the history
environment but run independently from other processes. Once active,
they follow their coded instructions to complete routine tasks or react to par-
ticular situations or events. A crucial distinction between daemons and other
software utilities is that they do not involve interaction with users: “a true
daemon,” as Andrew Leonard put it, “can act on its own accord.”12 Daemons
deliver vital services enabling the smooth functioning of operating sys-
tems, computer networks, and other software infrastructures. Email users
may have encountered the pervasive mailer-daemon that responds to var-
ious mailing problems, such as sending failures, forwarding needs, and lack
of mailbox space. Though they may not realize it, visitors to any
web page also benefit from the daemons that respond to requests for access
to web documents.13
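To make the idea of a background routine concrete, the following is a minimal sketch in Python of a daemon-like process; the spool directory, file-age threshold, and cleanup chore are hypothetical illustrations, not the code of any historical daemon such as the mailer-daemon.

import threading
import time
from pathlib import Path

# Hypothetical housekeeping chore: purge stale files from a spool directory.
# Like the mailer-daemon or a web-serving daemon, it never waits for user input.
SPOOL_DIR = Path("/tmp/spool")      # illustrative location
MAX_AGE_SECONDS = 60 * 60           # files older than one hour are removed

def purge_stale_files() -> None:
    """Perform one round of the routine chore."""
    if not SPOOL_DIR.exists():
        return
    now = time.time()
    for entry in SPOOL_DIR.iterdir():
        if entry.is_file() and now - entry.stat().st_mtime > MAX_AGE_SECONDS:
            entry.unlink()

def daemon_loop(interval: float = 30.0) -> None:
    """Work tirelessly in the background, reacting only to the passage of time."""
    while True:
        purge_stale_files()
        time.sleep(interval)

if __name__ == "__main__":
    # daemon=True: the thread runs silently behind the main program and dies
    # with it, never entering into direct interaction with the user.
    worker = threading.Thread(target=daemon_loop, daemon=True, name="spoold")
    worker.start()
    time.sleep(5)   # the "foreground" program would do its own work here

The point of the sketch is structural: the routine is wired to a timer and to the state of the system, never to anything the user types.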
The origins of the word daemon are noteworthy. In Greek mythology,
daemons were both benevolent and evil divinities. A daemon was often
presented as an intermediary between gods and mortals—a medium in
the literal sense, that is, “what is in between.” In nineteenth-century sci-
ence, the word daemon came to be employed for describing fictional beings
that were included in thought experiments in domains like physics. These
daemons helped scientists picture situations that could not be reproduced
in practical experiments but were relevant to theory.14 The most famous
of these daemons was the protagonist of a thought experiment by British
scientist James Clerk Maxwell in 1867, wherein he suggested the possi-
bility that the second law of thermodynamics might be violated. According
to Fernando Corbato, who is usually credited with inventing computer
daemons, “our use of the word daemon was inspired by the Maxwell’s
daemon of physics and thermodynamics. . . . Maxwell’s daemon was an im-
aginary agent which helped sort molecules to different speeds and worked
tirelessly in the background. We fancifully began to use the word daemon to
describe background processes which worked tirelessly to perform system
chores.”15
In human-computer interaction, daemons contribute to making
computers “transparent” by staying in the background and performing
tasks that help the interface and the underlying system work efficiently and
seamlessly. Thus, daemons are often characterized as servants who antic-
ipate the master’s wishes and use their own initiative to meet such needs,
without being noticed by the master-user.16 As Fenwick McKelvey recently
noted, however, daemons are never mere servant mechanisms. Given their
autonomy and the role they play in large digital infrastructures such as
the Internet, they should also be seen as mechanisms controlling these
infrastructures: “since daemons decide how to assign and use finite net-
work resources, their choices influence which networks succeed and which
ways software has been perceived, described, and discussed. These bear
the signs of the different attributes that people have projected onto them
throughout their “biographical” life.20
Consider, then, the origins of computer daemons through both
meanings of the word biography. Daemons were introduced as distinctive
programming features when researchers working at MIT’s Project MAC,
led by Fernando Corbato, needed to implement autonomous routines
to handle the functioning of their new time-sharing systems. Daemons
were programmed in this context as pieces of software that worked in the
background to seamlessly administer crucial tasks in the system without
requiring a user to operate them directly. Yet in this context Corbato’s de-
cision to call them daemons reveals the tendency to attribute some form
of humanity to them, despite the fact that they carried out rather simple
and mundane tasks in the system. Recall that in ancient myths, daemons
were believed to possess will, intention, and the capacity to communicate.
One might contend that because Corbato had a background in physics,
the concept of daemon was chosen because of the meaning of its meta-
phor in science, with a different connotation therefore than in Greek my-
thology. Yet Maxwell (whose thought experiment on the second law of
thermodynamics contributed, as mentioned earlier, to the introduction
of the term daemon in physics) lamented that the concept of the daemon
resulted in humanizing agency in his thought experiment. He noted that
he had not explicitly called the protagonist of his thought experiment a
daemon, this characterization being made by others, while a more exact
description would have been an automatic switch or a valve rather than
an anthropomorphized entity.21 Even in physics, therefore, the use of the
term was viewed suspiciously, as betraying the temptation to attribute
agency to imaginary devices included in thought experiments.
Corbato’s naming choice, in this regard, was an act of projection through
which a degree of consciousness and intelligence was implicitly attributed
to daemons—understood as lines of code capable of exercising effects in the
real world. The significance of this initial choice is even more evident when
one examines wider trajectories in the daemon’s “biography.” As argued by
Andrew Leonard, it is difficult to distinguish rigidly between daemons and
some of the bots that later inhabited online communities, chatrooms, and
other networked platforms, since the routine tasks usually attributed to
daemons are sometimes taken up by chatbots with some form of conver-
sational ability.22 Not unlike daemons, bots are also employed to execute
tasks needed to preserve the functioning of a system. These include, for
instance, preventing spam in forums, posting updates automatically to so-
cial media, and contributing to moderation in chatrooms. This servility,
shared by many daemons and bots, reminds us of the way voice assistants
such as Amazon’s Alexa are presented as docile servants of the house and
its family members.
Yet the key similarity between daemons and bots is perhaps to be
found not so much in what they do but in how they are seen. One of the
differences between daemons and bots is that the former work “silently”
in the background, while the latter are usually programmed so that they
can enter into interaction with users.23 Once embedded in platforms and
environments characterized by a high degree of social exchanges, however,
daemons resist this invisibility. As shown by McKelvey, for instance, the
results of daemons’ activities in online platforms have stimulated complex
reactions. Propiracy, anticopyright groups on The Pirate Bay, for example,
antagonized daemons as well as copyright holders, making specific efforts
to bring to the surface the invisible work of daemons.24 Also in contrast to
bots, daemons are not expected to engage linguistically with users’ inputs.
Yet, as shown by the very term chosen to describe them, daemons also in-
vite users and developers to regard them as agents.
The biography of computer daemons shows that software, in the eyes
of the beholder, takes up a social life even when no apparent interaction
is involved. Like stones, computer daemons do not talk. Yet even when no
explicit act of communication is involved, software attracts people’s innate
desire to communicate with a technological other—or conversely, the fear
of the impossibility of establishing such communication.25 Daemons, which
once inhabited the middle space between gods and humans, are today sus-
pended between the user and the mysterious agency of computer opera-
tions, whose invisible character contrasts with the all-too-real and visible
effects it has on material reality.26 Like ghosts, daemons appear unfamiliar,
existing in different dimensions or worlds, but still capable of making us
feel some kind of proximity to them.
experiment with the interactive and communicative dimensions of AI.
Turing intuited this when he proposed the Imitation Game or, even earlier,
when he suggested chess as a potential test bed for AI. In a lecture to the
London Mathematical Society in 1947, he contended that “the machine
must be allowed to have contact with human beings in order that it may
adapt itself to their standards. The game of chess may perhaps be rather
suitable for this purpose, as the moves of the machine’s opponent will au-
tomatically provide this contact.”27 Turing was in search of something that
could work as propaganda for a nascent discipline, and chess was an excel-
lent choice in this regard because it could place computers and players at
the same level in an intellectually challenging competition. But Turing’s
words indicate more than an interest in demonstrating the potential of AI.
The development of “machine intelligence” required pathways for the com-
puter to enter into contact with human beings and hence adapt to them,
and games were the first means envisioned by Turing to create this contact.
In the following decades, digital games enhanced the so-called symbiosis
between computers and humans and contributed to immersing AI systems in
complex social spaces that nurtured forms of interaction between human
users and computational agents.28 While pioneering digital games such as
Spacewar!—a space combat game developed in 1962—involved challenges
between human players only, computer-controlled characters were gradu-
ally introduced in a number of games. One of the forms of interaction that
became available to players was communication in natural language. To
give one instance, a player of an adventure game—a genre of digital games
that requires players to complete quests in a fictional world—may initiate
a dialogue with a computer-controlled innkeeper in the hope of gaining
useful information for completing the game quest.29 For the history of
the relationship between AI and human-machine communication, similar
dialogues with fictional characters represent an extremely interesting case.
Not only do they entail conversations between users and artificial agents;
these conversations are also embedded in the fictional environment of the
gameworld, with its own rules, narratives, and social structures.
Yet, interestingly, digital games rarely make use of the same kind of con-
versational programs that power ELIZA and other chatbots. They usually
rely, instead, on a simple yet apparently very effective—at least in game
scenarios—programming technique: dialogue trees. In this modality of di-
alogue, players do not type directly what should be said by the character
they control. Instead, they are provided with a list of predetermined scripts
they can choose from. The selected input activates an appropriate response
for the computer-controlled character, and so on, until one of the dialogue
options chosen by the player triggers a certain outcome. For instance, you,
the player, are attacked if you select lines of dialogue that provoke the
interlocutor. If different options are chosen, the computer-controlled
character could instead become an ally.30
Take The Secret of Monkey Island, a 1990 graphic adventure game that
rose to the status of a classic in the history of digital games. In the role
of fictional character Guybrush Threepwood, you have to pass three trials
to fulfill your dream of becoming a pirate. The first of these trials is a duel
with the Sword Master, Carla. Preparing for this challenge, you discover
that sword fighting in the world of Monkey Island is not about dexterous
movements and fencing technique. The secret, instead, is mastering the
art of insult, catching your opponent off guard. During the swordplay, a
fighter launches an insult such as “I once owned a dog that was smarter
than you.” By coming up with the right response (“He must have taught
you everything you know”) you gain the upper hand in the combat. An in-
appropriate comeback instead leads to a position of disadvantage. In order
to beat the Sword Master, you need to engage in fights with several other
pirates, learning new insults and fitting responses to be used in the final
fight (fig. 4.1).31
In the Monkey Island saga, conversation is therefore conceived as a veri-
table duel between the machine and the human. This is to some extent sim-
ilar to the Turing test, in which the computer program wins the Imitation
Game by deceiving the human interrogator into believing it is a human.
In contrast with the test, however, it is the human in Monkey Island who
takes up the language of the computer, not the computer who imitates the
language of humans. Envisaging conversation as a hierarchical structure
Figure 4.1 Sword-fighting duel in The Secret of Monkey Island: only by selecting
the right insult will the player be able to defeat opponents. Image from https://
emotionalmultimediaride.wordpress.com/2020/01/27/classic-adventuring-the-secret-of-
monkey-island-originalspecial-edition-pc/ (retrieved 7 January 2020).
of discrete choices, dialogue trees in fact take up the logic of software pro-
gramming: they are equivalent to decision trees, a common way to display
computer algorithms made up of conditional choices. In Monkey Island’s
“conversation game,” the player beats the computer-controlled pirate by
navigating software’s symbolic logic: if the right statement is selected, then
the player gains an advantage; if the player selects the right statement
again, then the duel is won.
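The branching logic described here can be sketched in a few lines of Python. The mapping below is a hypothetical, much-reduced dialogue tree in the spirit of the insult duel: the first insult and comeback are the ones quoted above, the second pair is invented for illustration, and the conditional branch decides who gains the advantage.

# A minimal dialogue-tree sketch in the spirit of the insult duel.
# Real games store far larger trees, but the logic is the same:
# present fixed options, then branch on the player's choice.
INSULT_TO_COMEBACK = {
    "I once owned a dog that was smarter than you.":
        "He must have taught you everything you know.",
    "Your swordplay is as clumsy as your manners.":                # invented line
        "Then my manners must still be sharper than your blade.",  # invented line
}

def duel_round(insult: str, options: list) -> bool:
    """One turn of the duel: the opponent insults, the player picks a reply."""
    print(f'Opponent: "{insult}"')
    for number, option in enumerate(options, start=1):
        print(f"  {number}. {option}")
    choice = int(input("Your comeback (number): ")) - 1
    # The conditional branch: the right line wins the exchange, anything else loses it.
    return options[choice] == INSULT_TO_COMEBACK[insult]

if __name__ == "__main__":
    comebacks = list(INSULT_TO_COMEBACK.values())
    wins = sum(duel_round(insult, comebacks) for insult in INSULT_TO_COMEBACK)
    print("You win the duel!" if wins == len(INSULT_TO_COMEBACK) else "You lose.")

Note that nothing here parses what the player says: like a decision tree, the program only tests which of its own prewritten branches was selected.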
Dialogue trees, in other words, speak the language of computer pro-
gramming, in which statements do not merely convey meaning but also
execute actions that have real consequences in the material world.32
This fits with a long-standing theory in computer science that posits that
natural language, when employed to perform tasks on a computer, can
be regarded as just a very high-level form of programming language.33
Although this idea has been criticized with some reason by linguists, the
equivalence between natural language and programming language applies
to cases such as dialogue trees, in which the user does not need to master
programming but just uses their everyday language as a tool to interact with
computers.34 In text-based digital games, such as interactive fiction—a
genre of digital games that originated as early as the 1970s—this logic
does not only concern dialogues, since language is the main tool to act on
the gameworld: the player employs verbal commands and queries to act
on the game environment and control fictional characters. For example, a
player types GO SOUTH to have the character move in that direction in the
fictional world. Something similar applies to voice assistants, with which
users can carry out a simple conversation, but perhaps even more impor-
tantly, they can use language to make them perform tasks such as making
a phone call, turning off the light, or accessing the Internet.
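A hedged sketch of this logic in Python follows: a toy parser that treats commands such as GO SOUTH as instructions acting on the game world. The rooms and verbs are invented for illustration and are far simpler than those of Adventure or of a commercial voice assistant.

# Natural language as a very high-level "programming language": verbal
# commands are parsed and executed as operations on the game state.
ROOMS = {
    "clearing": {"south": "cave mouth"},
    "cave mouth": {"north": "clearing", "south": "dark tunnel"},
    "dark tunnel": {"north": "cave mouth"},
}
state = {"location": "clearing"}

def execute(command: str) -> str:
    """Parse a command like 'GO SOUTH' and update the game state accordingly."""
    words = command.lower().split()
    if len(words) == 2 and words[0] == "go":
        exits = ROOMS[state["location"]]
        if words[1] in exits:
            state["location"] = exits[words[1]]
            return f"You are now at the {state['location']}."
        return "You can't go that way."
    if words and words[0] == "look":
        return f"You are at the {state['location']}. Exits: " + ", ".join(ROOMS[state["location"]])
    return "I don't understand that."

if __name__ == "__main__":
    for command in ["LOOK", "GO SOUTH", "GO SOUTH", "GO EAST"]:
        print(">", command)
        print(execute(command))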
As observed by Noah Wardrip-Fruin, nevertheless, the most remarkable
thing about dialogue trees is “how little illusion they present, especially
when compared with systems like ELIZA.”35 Not only, in fact, dialogue trees
are framed in a static dynamics of turn-taking, which reminds one more
of the dynamics of table games (where players take turns to make their
moves) than of real-life conversations. Furthermore, each character can
only respond to a limited number of queries, with the resulting dialogues
appearing at the very least rigid and, in the worst cases, overly predictable.
It is surprising, and in a way fascinating, that such a rigid frame is still able
to exercise a powerful effect on players—as the persistent success of dia-
logue trees, which continue to be widely privileged over chatbots in game
design, testifies. Game scholar Jonathan Lessard, who designed a number
of simple digital games experimenting with the use of chatbots, speculated
that this might have to do with the fact that dialogue trees allow game
stimulating players to fill the gaps in the illusion spelled out by the dig-
ital game. One need only look at popular culture fans’ engagements with
character bots inhabiting social media, or at the teasing jokes many users
attempt with AI voice assistants, to realize how this happens even outside
of game environments.44
Reportedly, the main satisfaction of Will Crowther, the creator in the
1970s of the first text-based interactive fiction game, Adventure, was to
discover that his program fooled people into thinking that it was intelli-
gent enough to understand and speak English. As a colleague recounted,
“Will was very proud—or more accurately amused—of how well he could
fool people into thinking that there was some very complex AI behind the
game.”45 The similarity of this comment to Weizenbaum’s reflections
about ELIZA is noteworthy. Like Weizenbaum, Crowther noticed that
users tended to overstate the proficiency of relatively simple computing
systems if they were processing and generating natural language.46 This
pointed to something akin to the deception Weizenbaum noted in users
of his ELIZA: a propensity to regard even unsophisticated manipulation of
language as evidence of “intelligence” on the part of computers.
dialogue balloons and provided users with “active and intelligent help” as
well as “expert information.” The guides were presented in a virtual en-
vironment that simulated domestic spaces and could be personalized
by users, “decorating” the rooms and even changing the scenery out-
side the rooms’ windows to suit their taste.56 Users could choose from a
list of twelve creatures, the default being a dog named Rover, and other
choices, including a French-sounding cat, a rabbit, a turtle, and a sullen
rat.57 The guides sat at the corner of the screen, providing instruction and
performing occasional gimmicks while users were using the software (fig.
4.2). One of the features Microsoft boasted about was the degree to which
the different “personal guides” were characterized by specific personalities.
When the user was given the choice of selecting a personal guide, each was
described with attributes depicting its personality, for example, “outgoing,”
“friendly,” or “enthusiastic.”58 Although the experience with all guides was
in truth quite similar, some of the lines of dialogue were specific to each
one’s personality.
Like the metaphor of the domestic space, the name “Bob” was purport-
edly chosen to sound “familiar, approachable and friendly.”59 The reaction
of expert reviewers and journalists, however, was not friendly at all. Bob
was described as “an embarrassment,” and criticized for evident flaws in its
design.60 In the virtual rooms, for instance, there were objects giving access
to applications or to certain functions. However, other objects were useless
Figure 4.2 Screenshot of Microsoft Bob, with the guide “Rover the dog” (lower right).
Image from https://theuijunkie.com/microsoft-bob/, retrieved 24 November 2019.
and, when clicked on, delivered a message explaining that they had no par-
ticular function. This was hardly an example of seamless design.61 Another
common criticism revolved around the fact that the interface offered “in-
direct management,” meaning that it not only gave access to functions but
also performed operations on behalf of the user. This was seen as poten-
tially problematic especially because the applications offered in Microsoft
Bob included home finance management, which raised the question of who
would be accountable for any mistakes.62
Arguably, the main reason for Bob’s failure, however, was that the in-
terface struck users as intrusive. Despite its having been explicitly pro-
grammed to facilitate social behavior, Bob did not follow important social
rules and conventions concerning things such as conversation turns and
personal space. The “personal guides” frequently interrupted navigation,
with help messages bubbling up unrequested. As one reviewer put it, “Bob’s
programs eliminate many standard functions, bombard users with chatty
messages and do not necessarily make it easier to do things.”63 In addition,
the didactic tone of the program and its personal guides were irritating to
many. Shortly after Bob’s release, a programmer named George Campbell
created a parody of Bob with its helpers offering “little bits of drawling
advice—more typically about life than computers—such as: ‘Listen to the
old folks; y’all might get to live as long.’ Or: ‘A thirsty man oughtn’t to sniff
his glass.’ ”64
Yet for all the derision it attracted later, Bob did not look ridiculous at
the time. Its concept originated in what has probably been the most author-
itative social sciences approach to human-computer interaction developed
over the last three decades, the Computers Are Social Actors paradigm.
Clifford Nass and Byron Reeves, whose studies built the foundations of this
approach, consulted for Microsoft and participated in events connected to
Bob’s launch.65 Famously, Nass and Reeves argue that people apply to
computers and other media social rules and expectations similar to those
they employ when interacting with people in their everyday lives.66
Translated into the complexity of interface design, their research indi-
cated that every component of an interface, down to the way a message
was phrased, conveyed social meaning. The possibility of anticipating and
directing such meaning could be exploited by designers to build more effec-
tive and functional interactions with computers.67
Nass gave Bob passionate endorsement in press releases and interviews,
underlining that Bob was a natural outgrowth of the findings of Nass’s
and Reeves’s own research: “our research shows that people
deal with their computers on a social level, regardless of the program type
or user experience. People know that they’re working with a machine, yet
we see them unconsciously being polite to the computer, applying social
biases, and in many other ways treating the computer as if it were a person.
Microsoft Bob makes this implicit interaction explicit and thereby lets
people do what humans do best, which is to act socially.”68 Reeves was later
also quoted in a press release as stating: “the question for Microsoft was
how to make a computing product easier to use and fun. Cliff and I gave a
talk in December 1992 and said that they should make it social and natural.
We said that people are good at having social relations—talking with each
other and interpreting cues such as facial expressions. They are also good
at dealing with a natural environment such as the movement of objects and
people in rooms, so if an interface can interact with the user to take advan-
tage of these human talents, then you might not need a manual.”69
Looking at Microsoft Bob alongside Nass’s and Reeves’s theories makes
for an interesting exploration of the challenge of bringing theory together
with practice. On the one side are the spectacular achievements of Nass’s
and Reeves’s paradigm, which has informed a generation of research as well
as practical work in human-computer interaction; on the other side is the
spectacular failure of an interface, Microsoft Bob, which was inspired by
this very same paradigm. How can one make sense of the relationship be-
tween the two?
Perhaps the key to Microsoft’s inability to convert Nass’s and Reeves’s
insights into a successful product lies in Nass’s statement that Microsoft
Bob represented the ambition to take the implicit social character of
interactions between humans and computers and make it explicit. Yet in
Nass’s and Reeves’s research, as presented in their groundbreaking book
The Media Equation, published shortly after the launch of Microsoft Bob,
the social character of interactions with computers is implied and un-
spoken, rather than acknowledged or even appreciated by the user.70 In
his later work, Nass has further refined the Computers Are Social Actors
paradigm through the notion of “mindless behavior.” Through the notion
of mindlessness, drawn from the cognitive sciences, he and his collabo-
rator Youngme Moon have emphasized that users treat computers as social
actors despite the fact that they know all too well that computers aren’t
humans. Social conventions derived from human-human interaction are
thus applied mindlessly, “ignoring the cues that reveal the essential asocial
nature of a computer.”71
In Nass’s and Reeves’s theoretical work, therefore, computers’ sociality
is not intended to be explicit and constrictive as was the case in Microsoft
Bob. The intrusiveness of Bob, in this sense, might have disrupted the
mindless nature of imaginative social relationships between humans and
machines. To remain more strictly within the boundaries of the history of
Searching on YouTube for videos about Alexa, one finds a large number
of homemade videos of users engaging in conversations with “her.” A ver-
itable subgenre among them consists of videos that play jokingly with weird or
comic replies returned by Alexa to specific inputs. Some, for instance, hint
at the famous Star Wars scene in which Darth Vader tells young hero Luke
Skywalker “I am your father.” In the YouTube clips Alexa responds to users
with the same line, certainly scripted by Amazon developers: “Noooo!
That’s not true, that’s impossible.”76
In spite of Alexa’s resistance to accepting its users’ paternity, Alexa,
Siri, Google Assistant, and other contemporary voice assistants have not
one but many “fathers” and “mothers.” This chapter has tracked three cru-
cial threads that are part of this story: daemons, digital games, and social
interfaces. Histories of AI are usually more conservative in defining the
precedents of today’s AI communicative systems, such as voice assistants.
In technical terms, their software is the result of a long-standing evolu-
tion in at least two main areas within AI: natural language processing and
automated speech recognition and production. On the one hand natural
language processing develops programs that analyze and process human
language, from chatbots to language translation, from the interpretation
of speech to the generation of text.77 On the other hand automated speech
recognition and production develops computer applications that process
spoken language so that it can be analyzed and reproduced by computers.
The genealogy of voice assistants is, however, much more complex. Since,
as Lucy Suchman has taught us, every form of human-computer interac-
tion is socially situated, this genealogy also involves practical and mundane
software applications through which AI has developed into technical, so-
cial, and cultural objects.78
Each of the three cases examined in this chapter provides an alternative
entry point for interrogating the subtle and ambivalent mechanisms through
which social agency is projected onto AI. Obscure and hidden by definition,
computer daemons testify to the dynamics by which autonomous action
leads to attributions of agency and humanity. Notwithstanding the rou-
tine character of the operations daemons perform, their name and the “bi-
ography” of their representations help locate computing machinery in an
ambiguous zone between the alive and the inanimate.
The role of digital games in the evolution of AI has often been overlooked.
Yet the playful character of games has permitted safe and imaginative ex-
ploration of new forms of interactions and engagements with computers,
including dialogues with AI agents and computer-controlled characters.
As in digital games, modern interactions with voice assistants elicit and
stimulate playful engagements through which the boundaries between the
human and the machine are explored safely. Examining
the uses of dialogue trees helps one appreciate the role of playfulness in
interacting with communicative AI, where part of the pleasure comes from
falling into the illusion of entertaining a credible conversation with non-
human agents.
CHAPTER 5
How to Create a Bot
Programming Deception at the Loebner
Prize Competition
AI applications, and that even in this specific area the contest does not re-
flect the most advanced technologies.2 They note that the contest does not
improve understanding of the discipline but on the contrary creates hype
and supports misleading views of what AI actually means.3 Persuasively,
they also contend that what makes computer programs successful at the
Loebner Prize is largely their capacity to draw on human fallibility, finding
tricks and shortcuts to achieve the desired outcome: fooling people into
thinking that a bot is human.4
While I agree with such criticisms, I also believe that paradoxically, the
very reasons why the Loebner Prize has little relevance as an assessment
of technical proficiency make it an extremely productive case for under-
standing AI from a different perspective. As I have shown, when AI sys-
tems enter into interactions with humans, they cannot be separated from
the social spaces they share with their interlocutors. Humans need to be
included in the equation: communicative AI can only be understood by
taking into account issues such as how people perceive and react to the
behavior of AI. In this regard, the Loebner Prize provides an extraordinary
setting where humans’ liability to deception and its implications for AI are
put to the test and reflected on. This chapter looks at the history of this
competition to argue that it functioned as a proving ground for AI’s ability
to deceive humans, as well as a form of spectacle highlighting the poten-
tial but also the contradictions of computing technologies. Although the
prize has not offered any definitive or even scientifically valid assessment
of AI, the judges’ interactions with humans and programs during the nearly
thirty years of the competition represent an invaluable archive. They illu-
minate how conversational bots can be programmed to trick humans, and
how humans respond to the programmers’ strategies for deception.
reached such a level of sophistication and proficiency that it was worth
putting it to the Turing test. This hypothesis, however, is only partially
convincing. Although new generations of natural language processing
programs had brought the theory of generative grammar to fruition, conversa-
tional agents such as chatbots had not evolved dramatically since the times
of ELIZA and PARRY.5 In fact, even the organizers of the first Loebner Prize
competition did not expect computers to come anywhere close to passing
the Turing test.6 Further, their actual performance hardly demonstrated
clear technical advances. As several commentators noted, contestants went
on to use many of the techniques and tricks pioneered by the ELIZA pro-
gram, developed almost thirty years earlier.7
The factors that led to the organization of the Loebner Prize competition
were probably not technical but mainly cultural.8 When Epstein and
Loebner teamed up to convert Turing’s thought experiment into an actual
contest, the growth of personal computing and the Internet had created
ideal conditions for carrying out such an endeavor. People were now accus-
tomed to computing machines and digital technologies, and even chatbots
and AI characters were not unknown to Internet users and digital game
players. After roughly two decades of disillusion and the “AI Winter,” fer-
tile ground was laid for renewing the myth of thinking machines. The cli-
mate of enthusiasm for computers involved the emergence of a powerful
narrative around the idea of the “digital revolution,” with the help of en-
thusiastic promoters, such as Nicholas Negroponte, founder of the MIT
Media Lab in 1985, and amplifiers, like the popular magazine Wired, which
debuted in 1993.9
As part of this climate, computers moved to the center stage not just of
the Loebner Prize but of several human-versus-machine competitions. From
chess to popular quiz shows, the 1990s saw computer programs battling
against humans in public contests that attracted the attention of the public
and the press.10 It was not a coincidence, in this sense, that Epstein was a
skilled science popularizer and Loebner the owner of a firm that specialized
in theatre equipment. The Loebner Prize was first and foremost a matter of
public communication and performance; as Margaret Boden put it, “more
publicity than science.”11 This was not completely in contradiction with the
intentions of Turing himself: as noted in Chapter 1, Turing had in fact in-
tended his original proposal of the test to be a form of “propaganda” to raise
the status of the new digital computers.12 It is entirely appropriate, in this
sense, that the Loebner Prize was aimed primarily at appealing to the public
and only secondarily at contributing to science.
The first competition for the Loebner Prize was hosted on November 8,
1991, by the Computer Museum in Boston. Oliver Strimpel, the museum’s
executive director, explained to the New York Times that the event met one
of the key goals of the museum: “to answer the ‘so what?’—to help explain
the impact of computing on modern society.” He also explicitly acknowl-
edged the continuity between the Loebner Prize and other AI contests,
noting that the museum hoped to organize another “human vs machine
showdown”: a match between the world’s chess master and the most ad-
vanced chess-playing computer.13
Due to the vagueness of Turing’s proposal, the committee that pre-
pared the event had to decide how to interpret and implement his test.
They agreed that judges would rank each participant (either a computer or
a human) according to how humanlike the exchanges had been. If the me-
dian rank of a computer equaled or exceeded the median rank of a human
participant, that computer had passed this variant of the Turing test. In
addition, judges would specify whether they believed each terminal to be
controlled by a human or by a computer. The judges were to be selected
among the general public, and so were the “confederates,” that is, the
humans who would engage in blind conversations with judges alongside
the computer programs.
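Stated as code, the committee's rule might look like the following Python sketch. The rankings are invented, and the sketch assumes the weakest reading of the rule described here: a computer passes if its median rank reaches the median rank of at least one human confederate.

from statistics import median

# Hypothetical rankings by human-likeness (higher = more humanlike);
# each list holds the rank a given terminal received from each judge.
rankings = {
    "confederate_A": [5, 4, 5, 5],   # human
    "confederate_B": [3, 5, 4, 4],   # human
    "program_X": [4, 4, 5, 3],       # computer
    "program_Y": [1, 2, 2, 1],       # computer
}
humans = {"confederate_A", "confederate_B"}

def passes_variant(program: str) -> bool:
    """A computer passes this variant of the test if its median rank equals
    or exceeds the median rank of a human participant."""
    program_median = median(rankings[program])
    return any(program_median >= median(rankings[human]) for human in humans)

for name in rankings:
    if name not in humans:
        print(name, "passes" if passes_variant(name) else "does not pass")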
Another important rule concerned the conversation topic: the scope of
the test was limited to a specific theme, to be chosen by each contestant.
Epstein admitted that this rule, which changed in 1995 when discussions
became unrestricted, was meant “to protect the computers” that were “just
too inept at this point to fool anyone for very long.”14 Other regulations
were also conceived to the computers’ advantage: judges, for instance, were
solicited through a newspaper employment ad and screened for having
little or no knowledge of computer science, which heightened the chances
of the chatbots’ programmers.15
For a contest aimed at measuring AI progress, it seems odd that specific
regulations were designed to facilitate the programmers’ work, jeopardizing
the impartiality of the proceedings. Yet this choice is reasonable if one
concedes that the Loebner Prize competition is not so much scientific re-
search as a form of spectacle. Just as the organizers of a sporting event
need athletes to be in their best shape to compete, the prize’s committee
need computers to be strong contestants in their game—hence the deci-
sion to ease their task.16 The need to enhance the spectacularism of the
competition also explains the decision to award a bronze medal and a cash
prize every year to the computer program that, according to the judges,
has demonstrated the “most human” conversational behavior—even if the
program has failed to pass the Turing test. The bronze medal was intended
not only as a motivation for participants but also as a further concession
to the spectacular character of the event: the presence of a winner, in fact,
fits the familiar frame through which public competitions are presented to
journalists and audiences.17
There is a long tradition of exhibiting scientific and technological wonders
to the public. In the history of computing, public duels featuring machines
and humans, such as chess competitions between human champions and
computer programs, have often transformed the ordinary and invisible op-
erations of machines into a sensational spectacle.18 The automaton chess
player that stunned observers in the eighteenth and nineteenth centuries
also exploited the appeal of machines battling against humans in what Mark
Sussman called “the faith-inducing dramaturgy of technology.”19 Faithful
to this tradition, the Loebner Prize competition was organized as a spec-
tacle and hosted in a large auditorium before a live audience. A moderator
wandered around the auditorium with a cordless microphone, interviewing
some of the contestants and commenting on their competing chatbots,
while audiences could follow the conversations on screen terminals. The
spectacular frame of the event reverberates in the reports describing the
first competition as “great fun” and an “extravaganza.”20
Substantial attention was given to promoting the prize to the press. The
event was preceded by a robust press campaign, with articles published in
some of the most widely read newspapers on both sides of the Atlantic.21
The need to make the contest easily understandable to journalists was
mentioned among the criteria guiding the committee in deciding on the
regulations.22 As reported by one of the referees, Harvard University com-
puter scientist Stuart Shieber, “the time that each judge had to converse
with each agent was shortened from approximately fifteen minutes to ap-
proximately seven in order to accommodate the press’s deadlines.”23 After
the event, three articles appeared in the New York Times alone, including a
front-page article the day after the contest, and Epstein boasted that the
winner had received the equivalent of a million dollars in free advertising
thanks to the press coverage of the event.24
The circulation and dissemination of narratives about technology in the
public sphere tend to follow recurring patterns, relying on repetition that
shapes these narratives’ very capacity to become popular and widespread.25
This process has been described in Latourian terms as a “pasteurization” of
the narrative, by which elements that do not fit into the dominant narra-
tive are eliminated so as to privilege a more coherent and stable narrative,
exactly like germs being eliminated in food pasteurization processes.26 In
the press coverage of early Loebner Prize competitions, it is easy to iden-
tify the recurring script through which the event was narrated. The nar-
rative posited that even if no computer had been able to pass the Turing
test, encouraging successes could be observed that suggested a potential
future in which computers would pass the test and become indistinguish-
able from humans.27 This script suited the overarching narrative of the
AI myth, which since the early years had pointed to promising if partial
achievements of AI to envision a future where human-level AI would be
reached, changing the very definition of humanity and the social world.28
The opposing narrative, according to which AI was a fraud, also emerged in
more critical reports of the prize. Thus the coverage of the prize drew from
and perpetuated the long-standing dualism between enthusiasm and criti-
cism that has characterized the entire history of AI.29
By contrast, more ambiguous implications of the prize’s procedures that
did not fit the classic AI controversy were rarely, if ever, discussed in public
reports. The emphasis was on computers: what they were able to do and to
what extent they could be considered “intelligent.” Much less attention was
given to the humans who participated in the process. This is unfortunate
because, as shown in chapter 1, the Turing test is as much about humans
as it is about computers. Yet only a few newspaper pieces mentioned that
not only were computers mistaken for humans at the contest: humans
were often mistaken for computers, too. For instance, in the first compe-
tition, the confederate who impressed the judges as “the most human of
all [human] contestants” was still deemed to be a computer by as many as
two judges.30 This pattern continued in the following years, persuading the
prize’s committee to assign a nominal prize every year to the confederate
judged “the most human human.”31
If the fact that the judges could struggle to recognize the humanity of
confederates seems counterintuitive, one should remember that not all
instances of human communication conform to widespread ideas of what
counts as “human.” Think, for instance, of highly formalized interactions
such as exchanges with a phone operator, or the standardized movements
of telegraphists, whose actions were later automated through mechaniza-
tion.32 Even in everyday life, it is not uncommon to describe a person who
repeats the same sentences or does not convey emotion through language
as acting or talking “like a machine.” To give a recent example, one of the
moments when it became evident that the 2017 election campaign was not
going well for British prime minister Theresa May was when newspapers
and users on social media started to refer to her as the “Maybot.” By
likening her to a bot, journalists and the public sanctioned her dull and
repetitive communication style and the lack of spontaneity in her public
appearances.33
The sense that some communications are “mechanical” is amplified
in contexts where exchanges are limited or highly formalized. Most
exchanges between customers and a cashier in a big supermarket are
repetitive, and that is why this job has been mechanized faster and more
effectively than, say, psychotherapy sessions. Yet the conversation be-
tween a cashier and a customer sometimes departs from the usual register—
for instance, if some jokes are exchanged or if one shares one’s personal
experiences with a product. These are the kinds of conversations that are
perceived as more “human”—thus, the ones that both a computer and a
confederate should aim for in order to convince judges that they are flesh
and blood.34
One of the consequences of this is that within the strict boundaries
of the Loebner Prize competition, people and computers are actually in-
terchangeable. But this is not because computers “are” or “behave” like
humans. It is, more subtly, because in the narrow situation of a seven-
minute written conversation via computer screen, playing the Imitation
Game competently is the key to passing as human.
Brian Christian, who participated as a confederate in the 2009 Loebner
Prize competition and wrote a thoughtful book on the experience, was
told by the organizers to be “just himself.” Yet, in truth, to strike judges
as human, confederates should not just be themselves: they need to
opt for conversation patterns that will be perceived as more “human.”
Christian, therefore, decided to ignore the organizers’ advice. In order to
establish an effective strategy, he asked himself which dynamics might
have informed the judges’ decisions. How does a person appear “human”
in the setting of the Turing test? Which strategies should be adopted
by a confederate or coded into a computer to maximize the chances of
passing for human? In order to answer such questions, Christian not only
studied transcripts of past Loebner Prize competitions but also reflected
on which behaviors are sanctioned as “mechanical” by people and, con-
versely, which behaviors are perceived as more authentic and “human.”
He went on to win the 2009 award as the “most human human,” that is,
the confederate who had been considered most credible by the judges.35
With the appropriate preparation, he had become the best player of the
Imitation Game.
Judges for the Loebner Prize also came up with some clever techniques
for performing their roles more effectively. Some, for instance, intention-
ally mumbled and misspelled words or asked the same questions several
times to see if the conversant could cope with repetition, as humans regu-
larly do.36 Such tricks, however, would only work until a clever programmer
figured them out and added appropriate lines of code to counteract them.
Not unlike the all-too-human confederates and judges, in fact, programmers
developed their own strategies in the attempt to make their machines the
most skilled contestants for the prize.
CODING DECEIT
allow entrants to simulate human typing foibles, and whether messages
should be sent in a burst or in character-by-character mode. They finally
decided to leave all possibilities open to contestants, so that this variable
could become an integral part of the “tricks” exploited by programmers.42
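One way to exploit that latitude, sketched here in Python under the assumption of simple terminal output, is to send replies character by character with an uneven rhythm and the occasional corrected typo; the keyboard map and timings are invented for illustration.

import random
import sys
import time

# A tiny, invented map of adjacent keys used to fake the occasional slip.
KEYBOARD_NEIGHBORS = {"a": "s", "e": "r", "t": "y", "o": "p"}

def type_like_a_human(message: str) -> None:
    """Send text character by character, with pauses and corrected typos."""
    for char in message:
        # Occasionally hit a neighboring key, then "notice" it and backspace.
        if char in KEYBOARD_NEIGHBORS and random.random() < 0.1:
            sys.stdout.write(KEYBOARD_NEIGHBORS[char])
            sys.stdout.flush()
            time.sleep(0.3)
            sys.stdout.write("\b \b")            # erase the wrong character
        sys.stdout.write(char)
        sys.stdout.flush()
        time.sleep(random.uniform(0.05, 0.25))   # uneven, humanlike rhythm
    sys.stdout.write("\n")

if __name__ == "__main__":
    type_like_a_human("Sorry, I am typing rather slowly tonight.")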
Common tricks included admitting ignorance, inquiring “Why do you
ask that?” to deflect from a topic of conversation, or using humor to make
the program appear more genuine.43 Some, like the team who won the 1997
competition with the chatbot CONVERSE, figured out that commenting on
recent news would be regarded by judges as a positive indication of authen-
ticity.44 Many strategies programmers elaborated at the competition were
based on accumulations of knowledge about what informed the judges’ as-
sessment. Contestant Jason Hutchens, for instance, explained in a report
titled “How to Pass the Turing Test by Cheating” that he had examined
the logs of past competitions to identify a range of likely questions the
judges would ask. He realized, for example, that a credible chatbot should
never repeat itself, as this had been the single biggest giveaway in previous
competitions.45
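The "tricks" listed here lend themselves to a compact sketch. The Python fragment below combines three of them—deflection, admissions of ignorance, and a ban on repeating any recent line—using invented replies; it is an illustration of the strategy, not the code of any actual Loebner contestant.

import random

# Canned deflections, admissions of ignorance, and a joke (all invented lines).
CANNED_REPLIES = [
    "Why do you ask that?",
    "That's a strange thing to bring up. What made you think of it?",
    "I honestly have no idea. Do you?",
    "I'd answer that, but my timing is terrible tonight.",
]
recent_replies = []   # memory used to avoid the biggest giveaway: repetition

def reply(user_input: str) -> str:
    """Ignore the question's content and pick a trick not used recently."""
    fresh = [line for line in CANNED_REPLIES if line not in recent_replies]
    if not fresh:                    # everything has been used: forget the oldest
        recent_replies.pop(0)
        fresh = [line for line in CANNED_REPLIES if line not in recent_replies]
    choice = random.choice(fresh)
    recent_replies.append(choice)
    return choice

if __name__ == "__main__":
    for question in ["What did you have for dinner last night?",
                     "What was Lincoln's first name?",
                     "Are you a computer?"]:
        print("Judge:", question)
        print("Bot:  ", reply(question))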
The pervasiveness of tricks adopted by programmers to win the Loebner
Prize is easily explained by the fact that the contest does not simply allow
deception but is organized around it. Every interaction is shaped by the
objective of fooling someone: computer programs are created to make
judges believe they are human, judges select their questions in the attempt
to debunk them, confederates use their own strategies to make them-
selves recognizable as humans. In this sense, to return to a comparison
made in chapter 1, the Loebner Prize competition looks like a spiritualist
séance where a fraudulent medium uses trickery to deceive sitters, while
skeptics carry out their own tricks and ploys to expose the medium’s de-
ceit.46 This setup contrasts to the conditions of scientific research: as Hugo
Münsterberg, a psychologist of perception and early film theorist, pointed
out, “if there were a professor of science who, working with his students,
should have to be afraid of their making practical jokes or playing tricks
on him, he would be entirely lost.”47 But this setup also differs from most
forms of human-computer interaction. While the judges of the Loebner
Prize expect that their conversation partners are computers built to simu-
late humans, in many platforms on the web the default expectation is that
everybody is a human.48
This does not mean, however, that forms of deception similar to the
ones observed at the Loebner Prize contest are not relevant to other
contexts of human-machine communication. The history of computing is
full of examples of chatbots that, like the winner of the inaugural Loebner
Prize, were programmed to exhibit unpredictable or even pathological
When Humphrys named the program MGonz and put it online on the
preweb Internet in 1989, an element of surprise was added: unlike the
Loebner Prize scenario, most users did not expect that they could be
chatting with a computer agent. This made it more likely for users to be-
lieve that they were talking to a human.
In this sense, the Loebner Prize is a reminder that computer-mediated
communication is a space essentially open to deception. One of the charac-
teristics of online environments is the complex play of identity that avatars,
anonymity, and pseudonyms allow. As argued by Turkle, “although some
people think that representing oneself as other than one is always a decep-
tion, many people turn to online life with the intention of playing it in pre-
cisely this way.”51 MGonz’s experience in the preweb era also prefigured
the more recent upsurge in online abuse and trolling. Journalist Ian Leslie
recently referred to MGonz when he proposed that Donald Trump might
be the first chatbot president, as “some of the most effective chatbots mask
their limited understanding with pointless, context-free aggression.”52
The points of continuity between the Loebner Prize competition and
online spaces resonate in an ironic episode involving Epstein. He was him-
self the victim of a curious online scam when he carried on an email cor-
respondence for months with what he thought was a Russian woman. It
turned out to be a bot that employed tricks similar to those of Loebner
Prize contestants, hiding its imperfections behind the identity of a for-
eigner who spoke English as a second language.53 As stressed by Florian
Muhle, “the most successful conversational computer programs these days
often fool people into thinking they are human by setting expectations
low, in this case by posing as someone who writes English poorly.”54 Today,
some of the strategies that were conceived and developed by Loebner Prize
competitors are mobilized by social media bots pretending to be human. In
this sense the prize, promoted as a contest for the development of intelli-
gent machines, has turned out to be more of a test bed for the gullibility of
humans confronted with talking machines.
In fact, one of the common criticisms of the prize is that, as a conse-
quence of the reliance on tricks, the programs in the competition have
not achieved significant advances on a technical level and thus cannot be
considered cutting-edge AI.55 Yet the relatively low level of technological
innovation makes the repeated success of many programs in fooling judges
all the more striking. How could very simple systems such as PC Therapist
and CONVERSE be so successful? Part of the answer lies in the circum-
stance, well known to confidence men, psychic mediums, and entertainers
à la P. T. Barnum, that people are easily fooled.56 As Epstein admitted, “the
contest tells us as much, or perhaps even more, about our failings as judges
as it does about the failings of computers.”57
Yet being content with the simple assertion that people are gullible,
without exploring it in its full complexity and in the specific context of
human-machine communication, would mean ignoring what the Loebner
Prize competition can really tell us. In the search for more nuanced answers,
one may consider Suchman’s discussion of the “habitability” problem with
respect to language, that is, “the tendency of human users to assume that a
computer system has sophisticated linguistic abilities after it has displayed
elementary ones.”58 This might have to do with the experience, constantly
confirmed in everyday exchanges with our own species, that those who
master language have indeed such abilities. Until very recently, humans’
only precedent in interacting with entities that use language has been al-
most exclusively with other humans. This helps explain why, as research
in human-machine communication has demonstrated, users tend to adopt
automatically social behaviors with machines that communicate in nat-
ural language.59 Something similar also happened with the introduction
of communication technologies that recorded and electrically transmitted
messages in natural language. Although mediation increases the separa-
tion between sender and receiver, people easily transferred frames of inter-
pretation that were customary in person-to-person exchanges, including
empathy and affect.60
Art historian Ernst Gombrich—whose Art and Illusion is, though
unbeknownst to most media theorists, a masterpiece of media theory—
presents an interesting way of looking at this issue when he argues that
of a text-only interface are meant to restrict the scope of the experience,
thereby easing the computers’ task. All these boundaries to the communi-
cation experience reduce the possibility that the initial recognition effect,
and the illusion of humanity, will erode.64
Many of the programmers’ tricks are also directed at limiting the scope
of conversations so that potential hints that the computer violates con-
versational and social conventions will be minimized. The use of whim-
sical conversation and nonsense, for instance, helps dismiss attempts to
entertain conversations on topics that would not be handled sufficiently
well by the program. The move from conversations on specific topics to
unrestricted conversations after the 1995 edition of the Loebner Prize
competition did not reduce the importance of this tactic. On the con-
trary, nonsense and erratic responses became even more vital for
shielding against queries that might have jeopardized the recognition ef-
fect.65 Contestants for the Loebner Prize need to struggle not to create
an illusion but, more precisely, to keep it, and this is why the most suc-
cessful chatbots have applied a “defensive” strategy—finding tricks and
shortcuts to restrict the inquiry of the judges, instead of cooperating
with them.66 This approach has also informed attempts to produce effects
of personality and characterization in the programming of chatbots for
the prize.
The winner of the 1994 Loebner Prize was a system called TIPS. Its cre-
ator, programmer Thomas Whalen, was driven by a more ambitious vision
than those of previous winners, for example, the creators of PC Therapist.
His goal was not to confuse judges with nonsense or whimsical conversa-
tion but to endow his chatbot with the simple model of a human being,
including a personality, a personal history, and a worldview. In the hope of
restricting the scope of the conversation, he decided on a male character
with a limited worldview who did not read books and newspapers, worked
nights, and was therefore unable to watch prime-time television. Whalen
also created a story to be revealed gradually to the judges: TIPS’s character
worked as a cleaner at the University of Eastern Ontario, and on the day
of the prize “he” had been concerned about being accused by “his” boss of
having stolen something. “I never stole nothing in my life,” TIPS stressed in
conversations with the judges, “but they always blame the cleaners when-
ever anything is missing.”67
First, I had hypothesized that the number of topics that would arise in an open
conversation would be limited. . . . My error was that the judges, under Loebner’s
rules, did not treat the competitors as though they were strangers. Rather,
they specifically tested the program with unusual questions like, “What did
you have for dinner last night?” or “What was Lincoln’s first name?” These are
questions that no one would ever ask a stranger in the first fifteen minutes of a
conversation. . . . Second, I hypothesized that, once the judge knew that he was
talking to a computer, he would let the computer suggest a topic. . . . Thus, my
program tried to interest the judges in Joe’s employment problems as soon as
possible. . . . I was surprised to see how persistent some judges were in refusing
to ever discuss Joe’s job.75
competitors, as is evident in the varied approaches to drawing on stere-
otyped representations of gender and sexism. In the second year of the
competition, for instance, programmer Joseph Weintraub chose the topic
“men vs women” for the restricted Turing test.82 In 1993, Ken Colby and
his son Peter chose the topic “bad marriage”; their chatbot presented it-
self as a woman and complained, with blatantly sexualized lines, that “my
husband is impotent and im [sic] a nymphomaniac.”83 A notable example
of a chatbot that adopted sexist stereotypes is Julia, which was initially
developed to inhabit the online community tinyMUDs and later competed
for the Loebner Prize. This chatbot invoked menstruation and premenstrual syndrome in an attempt to justify its inconsistent responses through gender stereotypes.84
Originally, Turing introduced his test as a variation of the Victorian game
in which players had to find out whether their interlocutor was male or fe-
male.85 This suggests, as some have noted, that the Turing test is positioned
at the intersection between social conventions and performances.86 As a
person who experienced intolerance and was later convicted for his sexual
behaviors, Turing was instinctively sensitive to gender issues, and this
might have informed his proposal, consciously or unconsciously.87 In the
Loebner Prize competition, the adoption of gender representations as a
strategy further corroborates the hypothesis that chatbots are extensions
of humans, as they adapt to and profit from people’s biases, prejudices,
and beliefs. Because judges are willing to read others according to conven-
tional cues, the performance of social conventions and socially held beliefs
is meant to strengthen the effect of realism that programmers are seeking.
This applies to gender as well as to other forms of representation, such as race (recall the case of the Russian bot that tricked Epstein) and class (as in
the story of the cleaner invented by Whalen to win the prize).88
In the arena of deception staged at the Loebner Prize competition,
gender stereotypes are employed as a tactic to deceive judges. However,
chatbots are also programmed to perform gender in other platforms and
systems. Playing with gender-swapping has been a constant dimension of
online interactions, from multi-user dungeons in the 1980s to chatrooms
and present-day social media. Similar strategies have also been replicated in the design of social interfaces, some of which enact familiar scripts about women's work, as in the case of customer service.89 This was also the case with Ms. Dewey, a social interface developed by Microsoft to overlay
its Windows Live Search platform between 2006 and 2009. Represented
as a nonwhite woman of ambiguous ethnic identity, Ms. Dewey was pro-
grammed to react to specific search terms with video clips that contained
titillating and gendered content. As Miriam Sweeney convincingly shows
in her doctoral dissertation on the topic, gender and race served as infra-
structural elements that facilitated ideological framing of web searches
in this Microsoft platform.90 The Ms. Dewey interface thus reinforced
through characterization the underlying bias of search engine algorithms,
which have often been found to reproduce racism and sexism and orient
users toward biased search results. Ms. Dewey, in other words, reproduced
at a representational level, with the video clips, what the information re-
trieval systems giving access to the web executed at an operational level.91
This approach reverberates in the characterization of contemporary AI
voice assistants, such as Alexa, which is offered by default to customers as
a female character.92
To date, no computer program that has competed for the Loebner Prize
has passed the Turing test. Even if one ever does so, though, it will not
mean that a machine is “thinking” but, more modestly, as Turing himself
underlined in his field-defining article, only that a computer has passed this
test. Given the hype that has surrounded the prize, as well as other human-
versus-computer contests in the past, one can anticipate that such a feat
would nonetheless enhance perceptions of the achievements of AI, even if
it might not have an equivalent impact at the level of technical progress.
Indeed, the popularity of such events attests to contemporary cultures' fear of, but also deep fascination with, the idea of intelligent machines.
This fascination is evident in fictional representations of AI: the dream
of artificial consciousness and the nightmare of the robot rebellion have
informed much science fiction film and literature since the emergence of
computers. But this fascination is also evident in reports and discussions
on public performances where AI is celebrated as a form of spectacle,
like the Loebner Prize competition. People seem to have a somewhat ambivalent, yet deep, desire to see the myth of the thinking machine come
true, which shapes many journalistic reports and public debates about the
Turing test.93
As I have shown, the Loebner Prize competition says more about how
people are deceived than about the current state of computing and AI. The competition has become an arena of deception that places humans' liability to be deceived at center stage. Yet this does not make
this contest any less interesting and useful for understanding the dynamics
of communicative AI systems. It is true that in “real life,” which is to say
in the most common forms of human-computer interaction, users do not
expect to be constantly deceived, nor do they constantly attempt to as-
sess whether they are interacting with a human or not. Yet it is becoming
harder and harder to distinguish between human and machine agents on
the web, as shown by the diffusion of social media bots and the ubiquity of
CAPTCHAs, which are reverse Turing tests designed to determine whether or not the user making a request to a web page is a human.94 If it is true that the
possibility of deception is heightened by the very limits the Loebner Prize
competition places on communicative experiences, such as the textual in-
terface and the prize’s regulations, it is also true that similar “limits” char-
acterize human-machine communication on other platforms. As Turkle
points out, “even before we make the robots, we remake ourselves as people
ready to be their companions.” Social media and other platforms provide
highly structured frameworks that make deception relatively easier than
in face-to-face interactions or even in other mediated conversations, as in
a phone call.95
The outright deceptions staged at the Loebner Prize competition, more-
over, help one to reflect on different forms of relationships and interactions
with AI that are shaped by banal forms of deception. As will be discussed in
the next chapter, even voice assistants such as Alexa and Siri have adopted
some of the strategies developed by Loebner Prize contestants—with the
difference that the “tricks” are aimed not at making users believe that the
vocal assistant is human but, more subtly, at sustaining the peculiar ex-
perience of sociality that facilitates engagements with these tools. And
bots in social media, which have recently also been used for problematic applications such as political communication, have appropriated many
of the strategies illuminated by the Loebner Prize competition.96 Among
the tricks employed by competitors in the prize that have been integrated
into the design of AI tools of everyday use are turn taking, the slowing of
responses in order to adapt to the rhythms of human conversation, and the
use of irony and banter. Injecting a personality and creating a convincing
role for the chatbot in a conversation has been instrumental to success at
the prize but is also a working strategy to create communicative AI systems
that can be integrated into complex social environments such as families,
online communities, and professional spaces.97
The Loebner Prize competition, in this sense, is an exercise in simulating
sociality as much as in simulating intelligence. Playing the Turing test
successfully goes beyond the capacity to grasp the meaning of a text and
produce an appropriate response. It also requires the capacity to adapt to
social conventions of conversation and exchanges. For example, because
emotions are an integral part of communications between humans, a com-
puter program benefits from being capable of recognizing emotions and
CHAPTER 6
To Believe in Siri
A Critical Analysis of Voice Assistants
ONE AND THREE
widely held assumptions about individuality, by which being one and being
three are mutually exclusive.3
A similar difficulty also involves software. Many systems that are
presented as individual entities are in fact a combination of separate
programs applied to diverse tasks. The commercial graphic editing software package known as Photoshop, for instance, hides behind its popular trademark a complex stratification of discrete systems developed by different teams across several decades.4 When looking at software,
the fact that what is one is also at the same time many should be taken not
as the exception but as the norm. This certainly does not make software
closer to God, but it does make it a bit more difficult to understand.
Contemporary voice assistants such as Alexa, Siri, and Google Assistant
are also one and many at the same time. On the one side they offer them-
selves to users as individual systems with distinctive names and humanlike
characteristics. On the other side, each assistant is actually the combination
of many interconnected but distinct software systems that perform partic-
ular tasks. Alexa, for instance, is a complex assemblage of infrastructures,
hardware artifacts, and software systems, not to mention the dynamics of
labor and exploitation that remain hidden from Amazon’s customers.5 As
BBC developer Henry Cooke put it, “there is not such a thing as Alexa” but
only a multiplicity of discrete algorithmic processes. Yet Alexa is perceived
as one thing by its users.6
Banal deception operates by concealing the underlying functions of digital
machines through a representation constructed at the level of the interface.
A critical analysis of banal deception, therefore, requires examination of the
relationship between the two levels: the superficial level of the representa-
tion and the underlying mechanisms that are hidden under the surface, even
while they contribute to the construction of the overlaid representation. In
communicative AI, the representation layer also coincides with the stimula-
tion of social engagement with the user. Voice assistants draw on elements such as a recognizable voice and a name to suggest a distinctive persona such as "Alexa" or "Siri." From the user's point of view, a
persona is above all an imagined construction, conveying the feeling of a con-
tinuing relationship with a figure whose appearance can be counted on as a
regular and dependable event and is integrated into the routines of daily life.7
In the hidden layer are multiple software processes that operate in
conjunction but are structurally and formally distinct. Although the en-
tire “anatomy” of voice assistants is much more complex, three software
systems are crucial to the functioning of voice assistants’ banal deception,
which roughly map to the areas of speech processing, natural language
processing, and information retrieval (fig. 6.1). The first system, speech
processing, consists of algorithms that on the one hand “listen to” and
transcribe the users’ speech and on the other produce the synthetic voice
through which the assistants communicate with users.8 The second system,
natural language processing, consists of the conversational programs that
analyze the transcribed inputs and, as in a chatbot program, formulate responses in natural language.9 The third system, information retrieval,
consists of the algorithms that retrieve relevant information to respond
to the users’ queries and activate the relevant tasks. Compared to speech
processing and natural language processing, the relevance of informa-
tion retrieval algorithms is perhaps less evident at first glance. However,
they enable voice assistants to access Internet-based resources and to be
configured as users’ proxies for navigating the web.10 As the following
sections will show, the differences between these three software systems
are not restricted to their functions, since each of them is grounded in
distinct approaches to computing and AI, and carries with it different
implications at both the technical and social levels.
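To visualize this layered anatomy, a minimal sketch in Python may be useful. It is purely illustrative: the class names, inputs, and outputs are hypothetical placeholders rather than an account of any vendor's actual architecture, but it shows how three distinct systems can stand behind the single persona presented to the user.

```python
# Purely illustrative sketch of the layered anatomy described above: one
# "persona" at the representation level, three distinct software systems
# underneath. All names and return values are hypothetical placeholders.

class SpeechProcessor:
    """Stands in for the speech processing system (speech-to-text and text-to-speech)."""
    def transcribe(self, audio: bytes) -> str:
        return "what is romanticism"          # placeholder transcription

    def synthesize(self, text: str) -> bytes:
        return text.encode("utf-8")           # placeholder synthetic voice

class NaturalLanguageProcessor:
    """Stands in for the conversational (natural language processing) system."""
    def interpret(self, utterance: str) -> dict:
        return {"intent": "define", "topic": utterance.split()[-1]}

class InformationRetriever:
    """Stands in for the information retrieval system connected to the web."""
    def lookup(self, query: dict) -> str:
        return f"Here is what I found about {query['topic']}."

class AssistantPersona:
    """The representation level: one name and one voice covering many hidden processes."""
    def __init__(self, name: str = "HypotheticalAssistant"):
        self.name = name
        self.speech = SpeechProcessor()
        self.language = NaturalLanguageProcessor()
        self.retrieval = InformationRetriever()

    def respond(self, audio: bytes) -> bytes:
        text = self.speech.transcribe(audio)       # speech processing
        query = self.language.interpret(text)      # natural language processing
        answer = self.retrieval.lookup(query)      # information retrieval
        return self.speech.synthesize(answer)      # back to a single "voice"
```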
Since the invention of media such as the phonograph and the telephone,
scientists and engineers have developed a range of analog and digital sys-
tems to record and reproduce sound. Like all modern media, sound media,
as discussed earlier, were produced in the shape of the human user. For
example, studies of human hearing were incorporated into the design of
technologies such as the phonograph, which recorded and reproduced
sound that matched sound frequencies perceived by humans.11 A similar
work of adaptation also involved the human voice, which was immediately
envisaged as the key field of application for sound reproduction and re-
cording.12 For instance, in 1878 the inventor of the phonograph, Thomas
Alva Edison, imagined not music but voice recordings for note-taking or
family records as the most promising applications for his creation.13 Specific
efforts were therefore made to improve the mediation of the voice.
Such endeavors profited from the fact that people are built or “wired” for
speech.14 A human voice is more easily heard by humans than other noises,
and familiar voices are recognized with a precision barely matched by the
vision of a known face. This quality comes to fruition in mediated commu-
nications. In cinema, for instance, the tendency of audiences to recognize
a voice and immediately locate its source accomplishes several functions,
adding cohesion to the narrative and instilling a sense of "life"—that is, presence and agency—in characters.15 Similarly, in voice conversations over
the phone or other media, the capacity to recognize and identify a disem-
bodied voice is essential to the medium’s use. This skill enables phone users
to recognize a familiar person and to gain hints about the demographics
and mood of a voice’s owner. Thus the technical mediation of voice draws
on the characteristics of human perception to generate meaningful results
that are fundamental to the experience of the user.
Following from this technical and intellectual lineage, the dream of
using spoken language as an interface to interact with computers is as old
as computing itself.16 Until very recently, however, voice-based interfaces
struggled to provide reliable services to users. Encountering automatic
voice recognition technologies was often a frustrating experience: early
“press or say one” phone services didn’t handle variations in accent or tone
well, and users were often annoyed or amused by these systems’ struggles
to comprehend even simple inputs.17 Compared with the performances of
Alexa or Siri, especially but not exclusively in the English language, one
wonders how the technology improved so markedly in such a short span of time. The secret of this swift progress lies in one of the most significant
technical changes that AI has experienced throughout its history: the rise
of deep learning.
Deep learning refers to a class of machine-learning algorithms that rely on complex statistical calculations performed by multilayered neural networks, which learn to extract patterns from large amounts of data with limited human intervention. Inspired by the functioning of biological
neurons, neural networks were proposed very early in the history of AI but
initially seemed ineffective.18 In the 1980s and 1990s, new studies showed
at a theoretical level that neural networks could be extremely powerful.19
Yet only in the last decade has the technology realized its full potential, due
to two main factors: advances in hardware that have made computers able
The fact that voice assistants have often featured female voices as default
has been at the center of much criticism. There is evidence that people react
to hints embedded in AI assistants’ voices by applying categories and biases
routinely applied to people. This is particularly worrying if one considers
the assistants’ characterization as docile servants, which reproduces ster-
eotypical representations of gendered labor.28 As Thao Phan has argued,
the representation of Alexa directs users toward an idealized vision of do-
mestic service, departing from the historical reality of this form of labor,
as the voice is suggestive of a native-speaking, educated, white woman.29
Similarly, Miriam Sweeney observes that most AI assistants’ voices suggest
a form of “‘default whiteness’ that is assumed of technologies (and users)
unless otherwise indicated.”30
Although public controversies have stimulated companies to increase
the diversity of synthetic voices, hints of identity markers such as race,
class, and gender continue to be exploited to trigger a play of imagina-
tion that relies on existing knowledge and prejudice.31 Studies in human-
computer communication show that the work of projection is performed
automatically by users: a specific gender, and even a race and class back-
ground, is immediately attributed to a voice.32 The notion of stereotyping,
in this sense, helps us understand how AI assistants’ disembodied voices
activate mechanisms of projection that ultimately regulate their use. As
Walter Lippmann showed in his classic study of public opinion, people
could not handle their encounters with reality without taking for granted
some basic representations of the world. In this regard, stereotypes have
ambivalent outcomes: on the one side they limit the depth and detail of
people’s insight into the world; on the other they help people recognize
patterns and apply interpretative categories built over time. While nega-
tive stereotypes need to be exposed and dispelled, Lippmann’s work also
shows that the use of stereotypes is essential to the functioning of mass
media, since knowledge emerges both through discovery and through the
application of preconstituted categories.33
This explains why the main competitors in the AI assistant market
select voices that convey specific information about a “persona”—that
is, a representation of individuality that creates the feeling of a contin-
uing relationship with the assistant itself. In other computerized serv-
ices employing voice processing technologies, such as customer services,
voices are sometimes made to sound neutral and evidently artificial. Such
a choice, however, has been deemed untenable by companies who aspire
to sell AI assistants that will accompany users throughout their everyday
lives. To function properly, these tools need to activate the mechanisms
of representation by which users imagine a source for the voice—and,
after all, a necessity of any medium of mass diffusion to be able to adapt its
message to diverse populations.
Still, the design of voice interfaces exercises an undeniable influence
over users. Experimental evidence shows that synthetic voices can be
manipulated to encourage the projection of demographic cues, including
gender, age, and even race, as well as personality traits, such as an extro-
verted or an introverted, a docile or an aggressive character.42 Yet this is ultimately a "soft," indirect power, whereby the attribution of personality is
delegated to the play of imagination of the users. The low definition of voice
assistants contrasts with humanoid robots, whose material embodiment
reduces the space of user contribution, as well as with graphic interfaces,
which leave less space for the imagination.43 The immaterial character of
the disembodied voice should not be seen, however, as a limitation: it is
precisely this disembodiment that forces users to fill in the gaps and make
voice assistants their own, integrating them more deeply into their eve-
ryday lives and identities. As a marketing line for Google Assistant puts it,
“it’s your own personal Google.”44 The algorithms are the same for every-
body, but you are expected to put a little bit of yourself into the machine.
When I pick up my phone and ask Siri if it’s intelligent, it has an answer
(fig. 6.2).
To me, this is not just a turn of phrase. It’s an inside joke that points
to the long tradition of reflections about machines, intelligence, and
awareness—all the way back to Turing’s 1950 article in which he argued
that the question whether machines can “think” was irrelevant.45 Yet what
at first glance looks like a clever reply is actually one of the least “smart”
things Siri can do. This reply, in fact, is not the result of a sophisticated sim-
ulation of symbolic thought, nor has it emerged from statistical calculations
of neural networks. More simply, it was manually added by some designer
at Apple who decided that a question about Siri's intelligence should trigger such a reply. This is something a programmer would hesitate to describe
as coding, dismissing it as little more than script writing—in the Loebner
Prize’s jargon, just another “programming trick.”
The recent swift success of voice assistants has led many researchers and
commentators to argue that these tools have clearly surpassed the conversational skills of earlier chatbot programs.46 Yet in contrast with this
widely held assumption, the likes of Alexa, Siri, and Google Assistant do
between cats and dogs or identifying skin cancer), but it can’t easily turn
a stack of data into the complex, intersecting, and occasionally irrational
guidelines that make up modern English speech.”48
Thus, while voice assistants represent a step forward in communicative AI in areas such as voice processing, their handling of conversations still relies on a combination of technical as well as dramaturgical solutions. Their apparent proficiency is the fruit of some of the same strategies developed by chatbot creators over the last few decades, combined with an unprec-
edented amount of data about users’ queries that helps developers antici-
pate their questions and design appropriate responses. The dramaturgical
proficiency instilled in voice assistants at least partially compensates for
the technical limitations of existing conversational programs.49
In efforts to ensure that AI assistants reply with apparent sagacity and
appear able to handle banter, Apple, Amazon, and, to a lesser degree, Google have assigned the task of scripting responses to dedicated creative teams.50
Similar to Loebner Prize chatbots, which have been programmed to deflect
questions and restrict the scope of the conversations, scripted responses
allow voice assistants to conceal the limits of their conversational skills and
maintain the illusion of humanity evoked by the talking voice. Every time
it is asked for a haiku, for instance, Siri comes up with a different piece in the genre, expressing reluctance ("You rarely ask me /what I want
to do today /Hint: it’s not haiku”), asking the user for a recharge (“All day
and night, /I have listened as you spoke. /Charge my battery”), or unen-
thusiastically evaluating the genre (“Haiku can be fun /but sometimes they
don’t make sense. /Hippopotamus”).51 Although the activation of scripted
responses is exceedingly simple on a technical level, their ironic tone can be
striking to users, as shown by the many web pages and social media posts
reporting some of the “funniest” and “hilariously honest” replies.52 Irony
has been instrumental in chatbots gaining credibility at the Loebner Prize,
and voice assistants likewise benefit from the fact that irony is perceived as
evidence of sociability and sharpness of mind.
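The mechanism behind such scripted replies can be rendered, in simplified form, in a few lines of code. The sketch below is a hypothetical illustration rather than a description of Apple's or Amazon's software: the trigger phrases and the first reply are invented, while the haiku lines are those quoted above. What matters is that selecting a canned line at random requires no understanding of language whatsoever.

```python
# A deliberately simple sketch of how a layer of scripted responses can sit in
# front of any "intelligent" processing. The point is the mechanism, not the
# wording: matching a trigger phrase and picking a canned line at random
# requires no language understanding at all. The triggers and the first reply
# are invented; the haiku lines are those quoted in the text above.
import random

SCRIPTED_REPLIES = {
    "are you intelligent": [
        "I'm smart enough to know not to answer that question.",  # hypothetical line
    ],
    "write me a haiku": [
        "You rarely ask me / what I want to do today / Hint: it's not haiku",
        "All day and night, / I have listened as you spoke. / Charge my battery",
        "Haiku can be fun / but sometimes they don't make sense. / Hippopotamus",
    ],
}

def scripted_or_none(utterance: str):
    """Return a canned reply if the utterance matches a script, otherwise None."""
    key = utterance.lower().strip(" ?!.")
    replies = SCRIPTED_REPLIES.get(key)
    return random.choice(replies) if replies else None
```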
In contrast to Loebner Prize chatbots, the objective of voice assistants is
not to deceive users into believing they are human. Yet the use of dramatur-
gical “tricks” allows voice assistants to achieve subtler but still significant
effects. Scripted responses help create an appearance of personalization, as
users are surprised when Siri or Alexa reply to a question with an inventive
line. The “trick” in this case is that AI assistants are also surveillance systems
that constantly harvest data about users’ queries, which are transmitted
and analyzed by the respective companies. As a consequence, AI assistant
developers are able to anticipate some of the most common queries and
have writers come up with appropriate answers.53 The workings of this trick remain obscure to many users, creating the impression that the
voice assistant is anticipating the user’s thoughts—which meets expecta-
tions of what a “personal” assistant is for.54 Users are thereby swayed into
believing AI assistants to be more capable of autonomous behavior than
they actually are. As noted by Margaret Boden, they appear “to be sensitive
not only to topical relevance, but to personal relevance as well,” striking
users as “superficially impressive.”55
Also unlike Loebner Prize chatbots, voice assistants have simulated sociality as just one of their functions, not their raison d'être. Social engage-
ment is never imposed on the user but occurs only if users invite this
behavior through specific queries. When told “Goodnight,” for instance,
Alexa will reply with scripts including “Goodnight,” “Sweet dreams,” and
“Hope you had a great day.” Such answers, however, are not activated if
users just request an alarm for the next morning. Depending on the user’s
input, AI assistants enact different modalities of interaction. Alexa, Google
Assistant, and Siri can be a funny party diversion one evening, exchange
conviviality at night, and the next day return to being discreet voice
controllers that just turn lights on and off.56
This is what makes AI assistants different from artificial companions,
which are software and hardware systems purposely created for social com-
panionship. Examples of artificial companions include robots such as Jibo,
which combines smart home functionality with the appearance of em-
pathy, as well as commercial chatbots like Replika (fig. 6.3), an AI mobile
app that promises users comfort “if you’re feeling down, or anxious, or just
need someone to talk to.”57 While Alexa, Siri, and Google Assistant are only
meant to play along if the user wants them to, artificial companions are
purportedly programmed to seek communication and emotional engage-
ment. If ignored for a couple of days, for instance, companionship chatbot
Replika comes up with a friendly message, such as “Is everything OK?” or
the rather overzealous “I’m so grateful for you and the days we have ahead
of us."58 Alexa and Siri, on the other hand, require prompting to engage in pleasantries or banter, consistent with one of the pillars of their human-computer interaction design, by which assistants speak up only if users pronounce the wake word. This is also why the design of smart speakers that
provide voice assistant services in domestic environments, such as Amazon
Echo, Google Home, and Apple HomePod, is so minimal. The assistants are
meant to remain seamless, always in the background, and quite politely in-
tervene only when asked to do so.59
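The same design principle can be expressed, under invented names and with a made-up wake word, as a simple routing rule: stay silent unless addressed, be sociable only when sociality is invited, and treat everything else as a command. The sketch below is only an assumption-laden illustration of that pattern, not the logic of any actual assistant.

```python
# Sketch of the interaction pattern described above: the assistant remains
# silent until the wake word is heard, offers a social script only when the
# user invites sociality, and otherwise treats the input as a task command.
# The wake word, the scripts, and the task handling are all hypothetical.
import random
from typing import Optional

WAKE_WORD = "computer"   # stand-in for "Alexa" or "Hey Siri"

SOCIAL_SCRIPTS = {
    "goodnight": ["Goodnight.", "Sweet dreams.", "Hope you had a great day."],
}

def handle(utterance: str) -> Optional[str]:
    """Return a reply, or None when the assistant should stay in the background."""
    words = utterance.lower().split()
    if not words or words[0] != WAKE_WORD:
        return None                                     # no wake word: remain silent
    request = " ".join(words[1:])
    if request in SOCIAL_SCRIPTS:
        return random.choice(SOCIAL_SCRIPTS[request])   # sociality only if invited
    return f"Okay: {request}"                           # otherwise, execute the task
```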
When not stimulated to act as companions, voice assistants treat conver-
sational inputs as prompts to execute specific tasks. Conversations with AI
assistants therefore profit from the fact that language, as shown by speech
Figure 6.3 Author’s conversation with Replika, 30 December 2019. Replika boasts of a re-
ported 2.5 million sign-ups and a community of around 30,000 users on its Facebook group.
Replika allows customization of gender (female, male, and nonbinary) and is programmed
to be overly submissive and complimentary to the user, its stated purposes being: “Talk
things through together,” “Improve mental wellbeing,” “Explore your personality” and
“Grow together.” (Luka Inc., “Replika,” 2019, available at https://replika.ai/, retrieved 30
December 2019.)
Little attention, however, has been given to the fact that information re-
trieval also plays a key role in voice assistants. To properly function, voice
assistants such as Alexa, Siri, and Google Assistant need to be constantly
connected to the Internet, through which they retrieve information and
access services and resources. Internet access allows these systems to
perform functions that include responding to queries, providing infor-
mation and news, streaming music and other online media, managing
communications—including emails and messaging—and controlling
smart devices in the home such as lights or heating systems.66 Although
voice assistants are rarely examined as interfaces that give access to Internet-based resources, they are ultimately technologies that provide new pathways for navigating the web through
the mediation of huge corporations and their cloud services. As voice
assistants enter more and more into public use, therefore, they also in-
form the way users employ, perceive, and understand the web and other
resources.
One of the key features of the web is the huge amount of information
that is accessible through it.67 In order to navigate such an imposing mass
of information, users employ interfaces that include browsers, search
engines, and social networks. Each of these interfaces helps the user
identify and connect to specific web pages, media, and services, thereby
restricting the focus to a more manageable range of information that is
supposed to be tailored to or for the user.
To some extent, all of these interfaces could be seen as empowering users,
as they help them retrieve information they need. Yet these interfaces also
have their own biases, which correspond to a loss of control on the part
of the user. Search engines, for instance, index not all but only parts of
the web, influencing the visibility of different pieces of information based
on factors including location, language, and previous searches.68 Likewise,
social networks such as Facebook and Twitter affect access to information, due to the algorithms that decide on the appearance and ranking
of different posts as well as to the “filter bubble” by which users tend to
become distanced from information not aligning with their viewpoints.69
It is for this reason that researchers, since the emergence of the web, have
kept interrogating whether and to what extent different tools for web nav-
igation facilitate or hinder access to a plurality of information.70 The same
question urgently needs to be asked of voice assistants. By constructing a persona at the interface, voice assistants mobilize specific representations while ultimately reducing users' control over their access to the web,
jeopardizing their capacity to browse, explore, and retrieve a plurality of
information available through the web.
A comparison between the search engine Google and the voice assistant
Google Assistant is useful at this point. If users search for a term on the search engine, say "romanticism," they are pointed to customized entries from Wikipedia and the Oxford English Dictionary alongside a plethora
of other sources. Although studies show that most users rarely go be-
yond the first page of a search engine’s results, the interface still enables
users to browse at least a few of the 16,400,000 results retrieved through
their search.71 The same input given to Google Assistant (at least, the version of Google Assistant on my phone) linked only to the Wikipedia page for "Romanticism," the artistic movement. The system disregarded other meanings of the same word and privileged one single
source. If the bias of Google algorithms applies to both the search en-
gine and the virtual assistant, in Google Assistant browsing is completely
obliterated and replaced by the triumph of “I’m feeling lucky” searches
delivering a single result. Due to the time that would be needed to pro-
vide several potential answers by voice, the relative restriction of options
is to be considered not just a design choice but a characteristic of voice
assistants as a medium.
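The difference can be reduced to a small sketch, with an invented result list standing in for the millions of hits a real search engine would index: the screen interface hands back a page of ranked results that the user may browse, while the voice interface keeps only the first. The function names and results below are assumptions made for illustration.

```python
# Sketch of the contrast discussed above between a browsable results page and
# a voice answer that keeps only the top hit. The result list and the ranking
# are invented for illustration; real search engines index millions of pages.

RANKED_RESULTS = [
    "Wikipedia: Romanticism (artistic movement)",
    "Oxford English Dictionary: romanticism",
    "An introduction to Romantic music",
    "Romanticism in landscape painting",
]

def search_engine_page(query: str, page_size: int = 10) -> list:
    """A screen interface can return a whole page that the user may browse."""
    return RANKED_RESULTS[:page_size]

def voice_assistant_answer(query: str) -> str:
    """A voice interface reads out a single answer: browsing gives way to one result."""
    return RANKED_RESULTS[0]
```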
Emily MacArthur has pointed out that a tool such as Siri “restores a sense
of authenticity to the realm of Web search, making it more like a conversa-
tion between humans than an interaction with a computer.”72 One wonders,
however, whether this “sense of authenticity” is a way for voice assistants
to appear to be at the service of the users, to make users forget that their
“assistants” are at the service of the companies that developed them. In
spite of their imagined personae, “Alexa,” “Siri,” and “Google Assistant”
never exist on their own. They exist only embedded in a hidden system
of material and algorithmic structures that guarantee market dominance
to companies such as Amazon, Apple, and Google.73 They are gateways to
the cloud-based resources administered by these companies, eroding the
distinction between the web and the proprietary cloud services that are
controlled by these huge corporations. This erosion is achieved through
close interplay between the representation staged by the digital persona
embodied by each assistant and its respective company’s business model.
It is striking to observe that the specific characterization of each voice assistant is closely related to the overall business and marketing approaches
of each company. Alexa is presented as a docile servant, able to inhabit
domestic spaces without crossing the boundaries between “master” and
“servant.” This contributes to hiding Amazon’s problematic structures of
labor and the precarious existence of the workers who sustain the function-
ality of the platform.74 Thus, Alexa's docile demeanor helps make the exploitation of Amazon's workforce invisible to the customer-users
who access Amazon Prime services and online commerce through Alexa.
In turn, Siri, compared to the other main assistants, is the one that makes
the most extensive use of irony. This helps corroborate Apple’s corporate
image of creativity and uniqueness, which Apple attempts to project onto
its customers' self-representation: "stay hungry, stay foolish," as a famous Apple marketing line goes.75 In contrast with Apple and Amazon,
Google chose to give its assistant less evident markers of personal identity, avoiding even the use of a name.76 What appears to be a refusal to
characterize the assistant, however, actually reflects Google’s wider mar-
keting strategy, which has always downplayed elements of personality
(think of the low profile, compared to Steve Jobs at Apple or Jeff Bezos
at Amazon, of Google's founders, Larry Page and Sergey Brin) to present
Google as a quasi-immanent oracle aspiring to become indistinguishable
from the web.77 Google Assistant perpetuates this representation by of-
fering itself as an all-knowing entity that promises to have an answer for
everything and is “ready to help, wherever you are.”78
Rather than being separated from the actual operations voice assistants
carry out, the construction of the assistant’s persona is meant to feed into
the business of the corporations. In fact, through the lens of Siri or Alexa,
there is no substantial difference between the web and the cloud-based
services administered by Apple and Amazon.79 Although interfaces are
sometimes seen as secondary to the communication that ensues through
them, they contribute powerfully to shaping the experiences of users. It is
for this reason that the nature of voice assistants’ interfaces needs to be
taken seriously. Like the metaphors and representations evoked by other
interfaces, the construction of the assistant’s persona is not neutral but
informs the very outcome of the communication. In providing access to the
web, voice assistants reshape and repurpose it as something that responds
more closely to what companies such as Amazon, Apple, and Google want
it to look like for their customers.
familiar to the user. The metaphors and tropes manifested in the interface
inform the imaginary constructions through which people perceive, under-
stand, and imagine how computing technologies work.82
In voice assistants, the level of representation coincides with the con-
struction of a “persona” that reinforces the user’s feeling of a continuing
relationship with the assistant. This adds further complexity to the in-
terface, which ceases to be just a point of intersection between user and
computer and takes on the role of both the channel and the producer of
communication. The literal meaning of medium, “what is in between” in
Latin, is subverted by AI assistants that reconfigure mediation as a circular process in which the medium acts simultaneously as the channel and the endpoint of the communication process. An interaction with an AI assistant, in this sense,
results in creating additional distance between the user and the informa-
tion retrieved from the web through the indirect management of the inter-
face itself.
This chapter has shown the way this distancing is created through the
application of banal deception to voice assistants. Mobilizing a plurality
of technical systems and design strategies, voice assistants represent the
continuation of a longer trajectory in communicative AI. As shown in pre-
vious chapters, computer interface design emerged as a form of collaboration in which users do not so much "fall" into deception as participate in constructing the representation that creates the very capacity for their
interaction with computing systems. In line with this mechanism, there
is a structural ambivalence in AI assistants that results from the complex
exchanges between the software and the user, whereby the machine is adapted to the human so that the human can project their own meanings into the machine.
Voice assistants such as Alexa and Siri are not trying to fool anyone into
believing that they are human. Yet, as I have shown, their functioning is closely bound up with a banal form of deception that benefits from cutting-edge technical innovations in neural networks as well as from dramatur-
gical strategies established throughout decades of experimentation within
communicative AI. Despite not being credible as humans, therefore, voice
assistants are still capable of fooling us. This seems a contradiction only
so long as one believes that deception involves a binary state: that we are, in other words, either "fooled" or "not fooled." Both the history of AI and
the longer history of media demonstrate that this is not the case—that
technologies incorporate deception in more nuanced and oblique ways
than is usually acknowledged.
In the future, the dynamics of projection and representation embedded in
the design of voice assistants might be employed as means of manipulation
and persuasion. As Judith Donath perceptively notes, it is likely that AI assistants and virtual companions will in the future be monetized through advertising, just as advertising has been a crucial source of revenue for other media of mass consumption, from television and newspapers to web search
engines and social media.83 This raises the question of how feelings such as
empathy, aroused by AI through banal deception, might be manipulated in
order to convince users to buy certain products or, even more worryingly,
to vote for a given party or candidate. Moreover, as AI voice assistants are
also interfaces through which people access Internet-based resources, we
need to ask how banal deception can inform access to information on the
web—especially considering that Alexa, Siri, and the like are all proprietary
technologies that privilege the cloud-based services and priorities of their
respective companies.
Rather than being restricted to the case of voice assistants, some of the
dynamics examined in this chapter extend to a broad range of AI-based
technologies. Besides "flagship" conversational AI such as Siri, Alexa, and Google Assistant, a proliferation of virtual assistants, chatbots, social media bots, email bots, and AI companions envisioned to work in more or less specific domains and to carry out diverse tasks has emerged in the last few years.84 While only a few of them might be programmed to undertake
forms of deliberate and straight-out deception in contexts such as phishing
or disinformation campaigns, many incorporate in their design deceptive
mechanisms similar to those highlighted in this chapter. The implications,
as discussed in the conclusion, are broader and more complex than any
single AI technology.
Conclusion
Our Sophisticated Selves
deception that are subtler and more pervasive than the rigid opposition
between authenticity and delusion suggests. The question, therefore, is not
so much whether AI will reach consciousness or surpass human intelligence
as how we can accommodate our relationships with technologies that rely
not only on the power of computing but also on our liability to be deceived.
My approach is in continuity with recent attempts in communication
and media studies to consider more closely how users understand com-
puting and algorithms and how such understandings inform the outcome
of their interactions with digital technologies.4 I argue, in this regard, that
the ways humans react to machines that are programmed to reproduce
the appearance of intelligent behaviors represent a constitutive element
of what is commonly called AI. Artificial intelligence technologies are not
just designed to interact with human users: they are designed to fit spe-
cific characteristics of the ways users perceive and navigate the external
world. Communicative AI becomes more effective not only by evolving
from a technical standpoint but also by profiting from the social meanings
humans project onto situations and things.
Throughout the book, I have taken special care to highlight that banal
deception is not necessarily malicious, as it always contributes some form of
value to the user. Yet the fact that banal deception improves the function-
ality of interactions with AI does not mean that it is devoid of problems
and risks. On the contrary, precisely because they are so subtle and difficult
to recognize, the effects of banal deception are deeper and wider than those of any form of straight-out delusion. By appropriating the dynamics of banal de-
ception, AI developers have the potential to affect the deeper structures of
our social lives and experiences.
One crucial challenge has to do with the fact that banal deception al-
ready bears within it the germs of straight-out deception. I have shown,
for instance, how the dynamics of projection and stereotyping make it
easier for AI voice assistants to accommodate our existing habits and social
conventions. Organizations and individuals can exploit these mechanisms
for political and marketing purposes, drawing on the feeling of empathy
that a humanlike assistant stimulates in consumers and voters.5 Other
mechanisms that underpin banal deception can be used maliciously or
in support of the wrong agendas. Exposing the dynamics of banal decep-
tion, in this sense, provides a further way to expose the most troublesome
and problematic appropriations of AI and to counteract the power that
algorithms and data administered by public and private institutions hold
over us.
In chapter 6, I undertook a critical analysis of the banal deception
mechanisms embedded in AI voice assistants. The same analytical work
could be undertaken with regard to other AI technologies and systems.6
With regard to robotics, for instance, the question is to what extent
the dynamics of banal deception that characterize AI voice assistants will
be incorporated into robots that bear a physical resemblance to humans.7
In contrast with the low definition of AI assistants, which employ verbal
or written communication to interact with humans, the robots of the fu-
ture may develop into a high-definition, “hot” medium that dissolves the
distinction between banal and deliberate deception. To take another example, concerning deepfakes (i.e., AI-based technologies that produce
videos or images in which the face of a person is replaced with someone
else’s likeness, so that, for instance, famous persons or politicians can ap-
pear to be pronouncing words that they have never said), the “banal” im-
pression of reality that technologies of moving images create is what makes
their deliberate deception so effective.8
An important phenomenon this book has touched on only marginally
is the proliferation of bots on social media. While this has recently been
the subject of much attention, discussions have mostly focused on what
happens when bots pretend to be human.9 Less emphasis has been given
to the fact that deception does not occur only when bots pass for humans. An interesting example, in this regard, is the chatbot
impersonating Israeli prime minister Benjamin Netanyahu that was used
on Facebook in the 2019 elections. Although it was made quite clear to
users that the chatbot was not Netanyahu himself, it was still useful to
create a sense of proximity to the candidate as well as to collect informa-
tion about potential voters to be targeted through other media.10
Even when no deliberate deception is involved, the development of
communicative AI based on banal deception will spark unprecedented
changes in our relationships with machines and, more broadly, in our so-
cial lives. Continuous interactions with Alexa, Siri, and other intelligent
assistants do not only make users more ready to accept that AI is taking
up an increasing number of tasks. These interactions also affect the
capacity to distinguish exchanges that offer just an appearance of sociality
from interactions that actually include the possibility of empathy on the
part of our interlocutor.11 Therefore, as banal deception mechanisms dis-
appear into the fabric of our everyday lives, it will become increasingly
difficult to maintain clear distinctions between machines and humans
at the social and cultural levels. Artificial intelligence’s influence on so-
cial habits and behaviors, moreover, will also involve situations where
no machines are involved. There are growing concerns, for instance, that
stereotypical constructions of gender, race, and class embedded in com-
municative AI stimulate people to reproduce the same prejudices in other
To quote Joseph Weizenbaum, the limits for the applicability of AI “cannot
be settled by asking questions beginning with ‘can’ ” but should instead be
posed only in terms of “oughts.”16 Developers of AI need to seriously re-
flect on the question of deception. The call to develop stricter professional
standards for the AI community to prevent the production of deceptively
“human” systems is not new, but a broader debate on the issue of deception
needs to take place.17 Computer scientists and software designers are usu-
ally hesitant to employ the term deception in reference to their work. Yet it is not only the less subtle, deliberate forms of deception that need to be acknowledged and addressed as such. Acknowledging the incorporation of banal
deception into AI calls for an ethics of fairness and transparency between
the vendor and the user that should focus not only on potential misuses of the technology but also on interrogating the outcomes of different design
features and mechanisms embedded in AI. The AI community also needs to
consider the principles of transparent design and user-friendly interfaces,
developing novel ways to make deception apparent and helping users to
better navigate the barriers between banal and deliberate deception. The
fact that technologies such as speech processing and natural language gen-
eration are becoming more easily available to individuals and groups makes
these endeavors all the more pressing.
One complication, in this regard, is the difficulty of attributing agency
and responsibility to developers of “intelligent” systems. As David Gunkel
stresses, “the assignment of culpability is not as simple as it might first ap-
pear to be,” especially because we are dealing with technologies that stim-
ulate users to attribute agency and personality to them.18 Nevertheless,
software is always constructed with action in mind, and programmers
always aim to produce a certain outcome.19 Although it is difficult to es-
tablish what the creators of software have had in mind, elements of this
can be reconstructed retrospectively, starting from the technical charac-
teristics of the technology and the economic and institutional context of
its production—as I have endeavored to do through the critical analysis of
voice assistants.
When undertaking such critical work, we should remember that any
representation of the user—the “human” around whom developers and
companies construct communicative AI technologies—is itself the fruit of
a cultural assessment affected by its own bias and ideology. For example, a
company’s decision to give a feminine name and voice as default to its voice
assistant might be based on research about people’s perceptions of female
and male voices, but this research remains embedded in specific methodo-
logical regimes and cultural frameworks.20 Finally, we should not disregard
the fact that users themselves have agency, which may subvert and reframe
NOTES
INTRODUCTION
1. “Google’s AI Assistant Can Now Make Real Phone Calls,” 2018,
available at https://www.youtube.com/watch?v=JvbHu_bVa_g&time_
continue=1&app=desktop (retrieved 12 January 2020). See also O’Leary,
“Google’s Duplex.”
2. As critical media scholar Zeynep Tufekci put it in a tweet that circulated widely.
The full thread on Twitter is readable at https://twitter.com/zeynep/status/
994233568359575552 (retrieved 16 January 2020).
3. Joel Hruska, “Did Google Fake Its Duplex AI Demo?,” ExtremeTech, 18 March
2018, available at https://www.extremetech.com/computing/269497-did-
google-fake-its-google-duplex-ai-demo (retrieved 16 January 2020).
4. See, among many others, on the one side, Minsky, “Artificial Intelligence”;
Kurzweil, The Singularity Is Near; and on the other side, Dreyfus, What Computers
Can’t Do; Smith, The AI Delusion.
5. Smith and Marx, Does Technology Drive History?; Williams, Television; Jones, “The
Technology Is Not the Cultural Form?”
6. Goodfellow, Bengio, and Courville, Deep Learning.
7. Benghozi and Chevalier, “The Present Vision of AI . . . or the HAL Syndrome.”
8. See chapter 2.
9. Gombrich, Art and Illusion.
10. There are, however, some relatively isolated but still significant exceptions. Adar,
Tan, and Teevan, for instance, distinguish between malicious and benevolent
deception, which they describe as “deception aimed at benefitting the user
as well as the developer.” Such a benevolent form of deception, they note, “is
ubiquitous in real-world system designs, although it is rarely described in such
terms.” Adar et al., “Benevolent Deception in Human Computer Interaction,”
1. Similarly, Chakraborti and Kambhampati observe that the obvious outcome
of embedding models of mental states of human users into AI programs is that
it opens up the possibility of manipulation. Chakraborti and Kambhampati,
“Algorithms for the Greater Good!” From a different perspective, Nake
and Grabowski have conceptualized communications between human and
machine as “pseudo-communication,” arguing for the importance of a semiotic
perspective to understand human-computer interaction. Nake and Grabowski,
“Human–Computer Interaction Viewed as Pseudo-communication.” See also
Castelfranchi and Tan, Trust and Deception in Virtual Societies; Danaher, “Robot
Betrayal.”
37. Hepp, “Artificial Companions, Social Bots and Work Bots.” On discussions
of perceptions of robots, see the concept of the “uncanny valley,” originally
proposed in Mori, “The Uncanny Valley” (the original text in Japanese
was published in 1970). It is also worth noting that AI voice assistants are
“disembodied” only to the extent that they are not given a proper physical
“body” whose movements they control, as in robots; however, all software has to
some extent its own materiality, and AI voice assistants in particular are always
embedded in material artifacts such as smartphones or smart speakers. See, on
the materiality of software, Kirschenbaum, Mechanisms, and specifically on AI
voice assistants, Guzman, “Voices in and of the Machine.”
38. Akrich, “The De-scription of Technical Objects.” See also Feenberg, Transforming
Technology; Forsythe, Studying Those Who Study Us.
39. Chakraborti and Kambhampati, “Algorithms for the Greater Good!”
40. Interestingly, deception plays a key role not only in how knowledge about
human psychology and perception is used but also in how it is collected and
accumulated. Forms of deception, in fact, have played a key role in the designing
of experimental psychological studies so as to mislead participants in such a
way that they remain unaware of a study’s actual purposes. See Korn, Illusions of
Reality.
41. Balbi and Magaudda, A History of Digital Media.
42. Bucher, If . . . Then, 68.
43. Towns, “Towards a Black Media Philosophy.”
44. Sweeney, “Digital Assistants.”
45. Bourdieu, Outline of a Theory of Practice.
46. Guzman, “The Messages of Mute Machines.”
47. For a clear and compelling description of the remits of “communicative AI,” see
Guzman and Lewis, “Artificial Intelligence and Communication.”
48. Hepp, “Artificial Companions, Social Bots and Work Bots.”
49. Guzman, Human-Machine Communication; Gunkel, “Communication and
Artificial Intelligence”; Guzman and Lewis, “Artificial Intelligence and
Communication.” For a discussion of the concept of medium in AI and
communication studies, see Natale, “Communicating with and Communicating
Through.”
50. Doane, The Emergence of Cinematic Time; Hugo Münsterberg, The Film: A
Psychological Study.
51. Sterne, The Audible Past; Sterne, MP3. Even prominent writers, such as Edgar
Allan Poe in his “Philosophy of Composition,” have examined the capacity of
literature to achieve psychological effects through specific stylistic means. Poe,
The Raven; with, The Philosophy of Composition.
52. As shown, for instance, by the controversy about Google Duplex mentioned at
the beginning of this introduction.
53. Bottomore, “The Panicking Audience?”; Martin Loiperdinger,
“Lumière’s Arrival of the Train”; Sirois-Trahan, “Mythes et limites du
train-qui-fonce-sur-les-spectateurs.”
54. Pooley and Socolow, “War of the Words”; Heyer, “America under Attack I”; Hayes
and Battles, “Exchange and Interconnection in US Network Radio.”
55. There are, however, important exceptions; among works of media history that
make important contributions to the study of deception and media, it is worth
mentioning Sconce, The Technical Delusion; Acland, Swift Viewing.
56. See, for instance, McCorduck, Machines Who Think; Boden, Mind as Machine.
57. Riskin, “The Defecating Duck, or, the Ambiguous Origins of Artificial Life”;
Sussman, “Performing the Intelligent Machine”; Cook, The Arts of Deception.
58. Geoghegan, “Visionäre Informatik.” Sussman, “Performing the Intelligent
Machine.”
59. Gitelman, Always Already New; Huhtamo, Illusions in Motion; Parikka, What Is
Media Archaeology?
60. This is why predictions about technology so often fail. See on this Ithiel De Sola
Pool et al., “Foresight and Hindsight”; Natale, “Introduction: New Media and the
Imagination of the Future.”
61. Park, Jankowski, and Jones, The Long History of New Media; Balbi and Magaudda,
A History of Digital Media.
62. Suchman, Human-Machine Reconfigurations.
63. For an influential example, see Licklider and Taylor, “The Computer as a
Communication Device.”
64. Appadurai, The Social Life of Things; Gell, Art and Agency; Latour, The
Pasteurization of France.
65. Edwards, “Material Beings.”
66. Reeves and Nass, The Media Equation.
67. See, among many others, Nass and Brave, Wired for Speech; Nass and Moon,
“Machines and Mindlessness.”
68. Turkle’s most representative works are Reclaiming Conversation; Alone Together;
Evocative Objects; The Second Self; Life on the Screen.
CHAPTER 1
1. Cited in Monroe, Laboratories of Faith, 86. See also Natale, Supernatural
Entertainments.
2. Noakes, “Telegraphy Is an Occult Art.”
3. “Faraday on Table-Moving.”
4. Crevier, AI; McCorduck, Machines Who Think; Nilsson, The Quest for Artificial
Intelligence.
5. Martin, “The Myth of the Awesome Thinking Machine”; Ekbia, Artificial Dreams;
Boden, Mind as Machine.
6. Shieber, The Turing Test; Saygin, Cicekli, and Akman, “Turing Test.”
7. Crevier, AI.
8. Some of these thinkers' biographies are a great entry point into their lives, contributions, and thinking, as well as into the early history of AI and computing.
See especially Soni and Goodman, A Mind at Play; Conway and Siegelman, Dark
Hero of the Information Age; Copeland, Turing.
9. McCulloch and Pitts, “A Logical Calculus of the Ideas Immanent in Nervous
Activity.”
10. Wiener, Cybernetics. Wiener nonetheless believed that digital operations could
not account for all chemical and organic processes that took place in the human
organism, and advocated the inclusion of processes of an analog nature to study
and replicate the human mind.
11. Nagel, “What Is It like to Be a Bat?” Philosopher John Searle presented an
influential dismissal of the Turing test based on a similar perspective. In an
article published in 1980, Searle imagined a thought experiment called the
Chinese Room, in which a computer program could be written to participate in
a conversation in Chinese so convincingly that a native Chinese speaker would mistake it for a human being. He reasoned that a person who did not speak
Chinese would be able to do the same by sitting in the “Chinese room” and
following the same code of instructions through which the program operated.
Yet, even if able to pass the Turing test, this person would not be able to
understand Chinese but merely to simulate such understanding by providing
the correct outputs on the basis of the code of rules. Searle considered this
to be a demonstration that the computer would not properly “understand”
Chinese either. He used the Chinese Room to counteract inflated claims about
the possibility of building “thinking machines” that have characterized much
of the rhetoric around AI up to the present day. He believed in an irremediable
difference between human intelligence and what could be achieved by machines,
which made imitation the only goal at which AI could reasonably aim. Searle,
“Minds, Brains, and Programs.” See also, on why it is impossible to know what
happens in others’ minds and the consequences of this fact for the debate on AI,
Gunkel, The Machine Question.
12. The consequent emphasis on behavior, instead of what happens “inside the
machine,” was advocated not only by Turing but by other key figures of early
AI, including Norbert Wiener, who reasoned that “now that certain analogies
of behavior are being observed between the machine and the living organism,
the problem as to whether the machine is alive or not is, for our purpose,
semantic and we are at liberty to answer it one way or the other as best suits our
convenience.” Wiener, The Human Use of Human Beings, 32.
13. Turing, “Computing Machinery and Intelligence,” 433.
14. Luger and Chakrabarti, “From Alan Turing to Modern AI.”
15. Bolter, Turing’s Man; on the Turing test, computing, and natural language, see
Powers and Turk, Machine Learning of Natural Language, 255.
16. For an overview of the debate, including key critical texts and responses to
Turing, see Shieber, The Turing Test. Some have wondered if Turing did so
intentionally to make sure that the article would spark reactions and debate—so
that many would be forced to consider and discuss the problem of “machine
intelligence.” Gandy, “Human versus Mechanical Intelligence.”
17. See, for instance, Levesque, Common Sense, the Turing Test, and the Quest for
Real AI.
18. Computer scientist and critical thinker Jaron Lanier has been among the
most perceptive in this regard. As he points out, “what the test really tells
us . . . even if it’s not necessarily what Turing hoped it would say, is that machine
intelligence can only be known in a relative sense, in the eyes of a human
beholder.” As a consequence, “you can’t tell if a machine has gotten smarter or if
you've just lowered your own standards of intelligence to such a degree that the
machine seems smart.” Lanier, You Are Not a Gadget, 32.
19. As Jennifer Rhee perceptively notes, in the Turing test “the onus of success or
failure does not rest solely on the abilities of the machine, but is at least partially
distributed between machine and human, if not located primarily in the human.”
Rhee, “Misidentification’s Promise.”
20. See, for example, Levesque, Common Sense, the Turing Test, and the Quest for
Real AI.
21. Turing, “Computing Machinery and Intelligence,” 441. As noted by Helena
Granström and Bo Göranzon, “Turing does not say: at the end of the century
technology will have advanced so much that machines actually will be able to
think. He says: our understanding of human thinking will have shifted towards
formalised information processing, to the extent that it will no longer be
41. “Turing himself was always careful to refer to ‘the game.’ The suggestion that it
might be some sort of test involves an important extension of Turing’s claims.”
Whitby, “The Turing Test,” 54.
42. Broussard, for instance, recently pointed out that this was due to the fact that
computer scientists “tend to like certain kinds of games and puzzles.” Broussard,
Artificial Unintelligence, 33. See also Harnad, “The Turing Test Is Not a Trick.”
43. Johnson, Wonderland.
44. Soni and Goodman, A Mind at Play; Weizenbaum, “How to Make a Computer
Appear Intelligent”; Samuel, “Some Studies in Machine Learning Using the Game
of Checkers.”
45. Newell, Shaw, and Simon, “Chess-Playing Programs and the Problem of
Complexity.” Before his proposal of the Imitation Game, Turing suggested chess
as a potential test bed for AI. Turing, “Lecture on the Automatic Computing
Engine,” 394; Copeland, The Essential Turing, 431.
46. Ensmenger, "Is Chess the Drosophila of Artificial Intelligence?"
47. Kohler, Lords of the Fly.
48. Rasskin-Gutman, Chess Metaphors; Dennett, “Intentional Systems.”
49. Ensmenger, “Is Chess the Drosophila of Artificial Intelligence?,” 7. See also
Burian, “How the Choice of Experimental Organism Matters: Epistemological
Reflections on an Aspect of Biological Practice.”
50. Franchi, "Chess, Games, and Flies"; Ensmenger, "Is Chess the Drosophila of
Artificial Intelligence?”; Bory, “Deep New.”
51. As Galloway puts it, “without action, games remain only in the pages of an
abstract rule book”; Galloway, Gaming, 2. See also Fassone, Every Game Is
an Island. In addition, one of the principles of game theory is that, even if the players' actions can be formalized into a mathematical theory of effective strategies, only actual play activates a game's abstract formalization. Juul, Half-Real.
52. Friedman, “Making Sense of Software.”
53. Laurel, Computers as Theatre, 1.
54. Galloway, Gaming, 5. A criticism of this argument points to the problem of
intentionality and the frame through which specific situations and actions are
understood and interpreted. While at the level of observed behavior computer
and human players may look the same, only the latter may have the capacity to distinguish the different frames of playful and non-playful activities, e.g., to distinguish the playing of a war game from war itself. One might, however, respond to this criticism with the same behavioral stance exemplified by the Turing test, which sets aside the problem of what happens inside the machine in order to focus instead on observable behavior. See Bateson, "A Theory of Play and Fantasy"; Goffman,
Frame Analysis.
55. Galloway, Gaming.
56. Dourish, Where the Action Is.
57. Christian, The Most Human Human, 175.
58. See, for instance, Block, “The Computer Model of the Mind.”
59. Korn, Illusions of Reality.
60. See chapter 5 on the Loebner Prize contest, organized yearly since the early
1990s, in which computer programmers enter their chatbots in the hope of
passing a version of the Turing test.
61. Geoghegan, “Agents of History.” Similarly, Lanier observes that both chess and
computers originated as tools of war: “the drive to compete is palpable in both
computer science and chess, and when they are brought together, adrenaline
flows." Lanier, You Are Not a Gadget, 33.
62. Copeland, “Colossus.”
63. Whitby, “The Turing Test.”
64. Goode, “Life, but Not as We Know It.”
65. Huizinga, Homo Ludens, 13.
66. This resonates in contemporary experiences with systems such as Siri and Alexa,
where human users engage in jokes and playful interactions, questioning the
limits and the potential of AI’s imitation of humanity. See Andrea L. Guzman,
“Making AI Safe for Humans.”
67. Scholars have suggested that this was connected to an underlying concern of
Turing with definitions of gender. See, among others, Shieber, The Turing Test,
103; Bratton, “Outing Artificial Intelligence.” Representations of gender and
chatbots playing a version of the Turing test are further discussed in chapter 5.
68. Hofer, The Games We Played.
69. Dumont, The Lady’s Oracle.
70. Ellis, Lucifer Ascending, 180.
71. See, on this, Natale, Supernatural Entertainments, especially chapter 2. Anthony
Enns recently argued that the Turing test “resembled a spiritualist séance,
in which sitters similarly attempted to determine whether an invisible spirit
was human by asking a series of questions. Like the spirits that allegedly
manifested during these séances, an artificial intelligence was also described
as a disembodied entity that exists on an immaterial plane that can only be
accessed through media technologies. And just as séance-goers often had
difficulty determining whether the messages received from spirits could be
considered evidence of a genuine presence, so too did scientists and engineers
often have difficulty determining whether the responses received from machines
could be considered evidence of genuine intelligence. These two phenomena
were structurally similar, in other words, because they posited that there
was no essential difference between identity and information or between
real intelligence and simulated intelligence, which effectively made humans
indistinguishable from machines.” Enns, “Information Theory of the Soul.”
72. Lamont, Extraordinary Beliefs.
73. As American show business pioneer P. T. Barnum reportedly observed, “the
public appears disposed to be amused even when they are conscious of being
deceived.” Cook, The Arts of Deception, 16.
74. See, for instance, Von Hippel and Trivers, "The Evolution and Psychology of Self-Deception."
75. Marenko and Van Allen, “Animistic Design.”
76. McLuhan, Understanding Media.
77. Guzman and Lewis, “Artificial Intelligence and Communication.”
78. See Leja, Looking Askance; Doane, The Emergence of Cinematic Time; Sterne, MP3.
CHAPTER 2
1. Kline, “Cybernetics, Automata Studies, and the Dartmouth Conference on
Artificial Intelligence”; Hayles, How We Became Posthuman.
2. See, for instance, Crevier, AI.
3. Solomon, Disappearing Tricks; Gunning, “An Aesthetic of Astonishment.”
Practicing theatrical conjurers such as Georges Méliès were among the first
motion picture exhibitionists and filmmakers, and many early spectators
experienced cinema as part of magic stage shows. See, among others, Barnouw,
The Magician and the Cinema; North, “Magic and Illusion in Early Cinema”;
Leeder, The Modern Supernatural and the Beginnings of Cinema.
4. During, Modern Enchantments.
5. Chun, Programmed Visions. It is for this reason that there is an element of
creepiness and wonder in how algorithms inform everyday life—so that
sometimes, when one sees one’s tastes or searches anticipated by digital
platforms, one cannot help but wonder if Amazon or Google are reading one’s
mind. See, on this, Natale and Pasulka, Believing in Bits.
6. Martin, “The Myth of the Awesome Thinking Machine,” 122. Public excitement
about AI was also fueled by science fiction films and literature. On the
relationship between the AI imaginary and fictional literature and film,
see Sobchack, “Science Fiction Film and the Technological Imagination”;
Goode, “Life, but Not as We Know It”; Bory and Bory, “I nuovi immaginari
dell’intelligenza artificiale.”
7. Martin, “The Myth of the Awesome Thinking Machine,” 129.
8. As historian of AI Hamid Ekbia points out, “what makes AI distinct from other
disciplines is that its practitioners ‘translate’ terms and concepts from one
domain into another in a systematic way.” Ekbia, Artificial Dreams, 5. See also
Haken, Karlqvist, and Svedin, The Machine as Metaphor and Tool.
9. Minsky, Semantic Information Processing, 193.
10. Minsky, Semantic Information Processing.
11. On the role of analogies and metaphors in scientific discourse, see Bartha,
“Analogy and Analogical Reasoning.” The comparison between artificial and
biological life could go so far as to encompass elements of humanity that surpassed the boundaries of mere rational thinking, such as feelings and emotions. In
1971, for instance, an article in New Scientist was titled “Japanese Robot Has
Real Feeling." A more attentive reading, however, revealed that the matter of the experiments was not so much human emotions as the
capacity of a robot to simulate tactile perception by gaining information about
an object through contact. Playing with the semantic ambiguity of the words
“feeling” and “feelings,” and alluding to human emotions well beyond basic
tactile stimuli, the author added a considerable amount of sensationalism to his
report. Anonymous, “Japanese Robot Has Real Feeling,” 90. See on this Natale
and Ballatore, “Imagining the Thinking Machine.”
12. Hubert L. Dreyfus, What Computers Can’t Do, 50–51.
13. Philosopher John Haugeland labeled this approach “Good Old-Fashioned
Artificial Intelligence” (GOFAI), to distinguish it from more recent sub-symbolic,
connectionist, and statistical techniques. Haugeland, Artificial Intelligence.
14. The concept of the “black box,” which describes technologies that provide little
or no information about their internal functioning, can therefore apply to
computers as well as to the human brain.
15. Cited in Smith, The AI Delusion, 23. His words echoed Weizenbaum’s worried
remarks about the responsibility of computer scientists to convey accurate
representations of the functioning and power of computing systems.
Weizenbaum, Computer Power and Human Reason.
16. For an in-depth discussion, see Natale and Ballatore, “Imagining the Thinking
Machine.”
17. Minsky, “Artificial Intelligence,” 246. On Minsky’s techno-chauvinism, see
Broussard, Artificial Unintelligence.
18. In a frequently cited article of the time, Armer set out to analyze attitudes toward AI, hoping to "improve the climate which surrounds research in the field of machine or artificial intelligence." Armer, "Attitudes
toward Intelligent Machines,” 389.
19. McCorduck, Machines Who Think, 148.
20. Messeri and Vertesi, “The Greatest Missions Never Flown.” The case of Moore’s
law is a good example in the field of computer science of the ways projections
about future accomplishments may motivate specific research communities to
direct their expectations and work toward the achievement of certain standards,
but also to maintain their efforts within defined boundaries. See Brock,
Understanding Moore’s Law.
21. Notable examples include Kurzweil, The Singularity Is Near; Bostrom,
Superintelligence. As for fiction, examples are numerous and include
films such as the different installments of the Terminator saga, Her (2013), and
Ex Machina (2014).
22. Broussard, Artificial Unintelligence, 71. See also Stork, HAL’s Legacy.
23. McLuhan, Understanding Media.
24. Doane, The Emergence of Cinematic Time; Münsterberg, The Film.
25. Sterne, The Audible Past; Sterne, MP3.
26. In his overview of the history of human-computer interaction design up to the
1980s, Grudin argues that “the location of the ‘user interface’ has been pushed
farther and farther out from the computer itself, deeper into the user and the
work environment.” The focus moved from directly manipulating hardware
in early mainframe computers to different levels of abstraction through
programming languages and software and to video terminals and other devices
that adapted to the perceptive abilities of users, wider attempts to match
cognitive functions, and finally approaches to moving the interface into wider
social environments and groups of users. Grudin, “The Computer Reaches Out,”
246. For a history of human-computer interaction that also acknowledges that
this field has a history before the computer, see Mindell, Between Human and
Machine.
27. This applies especially to the initial decades of development in computer science,
when human-computer interaction had not yet coalesced into an autonomous
subfield. Yet the problems raised by human-computer interaction remained
relevant to AI throughout its entire history, and even if it might be useful to
separate the two areas in certain contexts and to tackle specific problems, it is
impossible to understand the close relationship between AI and communication
without considering AI and human-computer interaction along parallel lines.
Grudin, “Turing Maturing”; Dourish, Where the Action Is.
28. Marvin Minsky, for instance, apologized at the end of a 1961 article for the
fact that “we have discussed here only work concerned with more or less self-
contained problem-solving programs. But as this is written, we are at last
beginning to see vigorous activity in the direction of constructing usable time-
sharing or multiprogramming computing systems. With these systems, it will
at last become economical to match human beings in real time with really large
machines. This means that we can work toward programming what will be, in
effect, 'thinking aids.' In the years to come, we expect that these man-machine
systems will share, and perhaps for a time be dominant, in our advance toward
the development of ‘artificial intelligence.’ ” Minsky, “Steps toward Artificial
Intelligence,” 28.
29. See, on this, Guzman, Human-Machine Communication; Guzman and Lewis,
“Artificial Intelligence and Communication.”
30. Licklider, “Man-Computer Symbiosis.” On the impact of Licklider’s article on the
AI and computer science of the time, see, among others, Edwards, The Closed World,
266. Although Licklider’s article brought the concept of symbiosis to the center
of debates in computer science, like-minded approaches had been developed
before in cybernetics and in earlier works that looked at computing machines
not only as processors but also as sharers of information. See Bush, “As We May
Think"; Hayles, How We Became Posthuman. For an example of approaches that
looked at the relationship between computers and communication outside the
scope of AI, as in ergonomics and computer-supported collaborative work, see
Meadow, Man-Machine Communication.
31. Licklider, “Man-Computer Symbiosis,” 4.
32. Hayles, How We Became Posthuman.
33. Shannon, “The Mathematical Theory of Communication”; Wiener, Cybernetics.
Interestingly, understandings of communication as disembodied, which emerged
already in the nineteenth century with the development of electronic media,
also stimulated spiritual visions based on the idea that communication could be
wrested from its physical nature; this linked with information theory, as shown
by Enns, “Information Theory of the Soul.” See also Carey, Communication as
Culture; Peters, Speaking into the Air; Sconce, Haunted Media.
34. Hayles, How We Became Posthuman; McKelvey, Internet Daemons, 27. As
mentioned, the key tenet of early AI was that rational thinking was a form of
computation, which could be replicated through symbolic logic programmed into
computers.
35. Suchman, Human-Machine Reconfigurations.
36. See, for instance, Carbonell, Elkind, and Nickerson, “On the Psychological
Importance of Time in a Time Sharing System”; Nickerson, Elkind, and
Carbonell, “Human Factors and the Design of Time Sharing Computer
Systems.”
37. At the beginning of the computer age, the large computer mainframes installed
at universities and other institutions were able to manage only one process
at a time. Programs were executed in batch mode, each occupying all of the
computer’s resources at a given time. This meant that researchers had to take
turns to access the machines. Gradually, a new approach called time-sharing
emerged. In contrast with previous systems, time-sharing allowed different users
to access a computer at the same time. Giving users the impression that the
computer was responding in real time, it opened the way for the development
of new modalities of user-computer interaction and enabled wider access to
computer systems.
38. Simon, “Reflections on Time Sharing from a User’s Point of View,” 44.
39. Greenberger, “The Two Sides of Time Sharing.”
40. Greenberger, “The Two Sides of Time Sharing,” 4. A researcher at MIT who
worked on Project MAC, Greenberger noted that the subject of time-sharing can be divided into two distinct sets of issues: the system and the user.
This distinction “between system and user is reflected in the double-edged
acronym of MIT’s Project MAC: Multi-Access Computer refers to the physical
tool or system, whereas Machine-Aided Cognition expresses the hopes of the
user” (2).
41. Broussard, Artificial Unintelligence; Hicks, Programmed Inequality.
64. Oettinger, “The Uses of Computers,” 162.
65. Oettinger, “The Uses of Computers,” 164. Emphasis mine.
66. Black, “Usable and Useful.”
67. In a similar way, a magician performs a trick by hiding its technical nature, so that the source of the illusion remains unknown to the audience. See
Solomon, Disappearing Tricks.
68. Emerson, Reading Writing Interfaces, xi.
69. As Carlos Scolari points out, the transparency of the interface is the utopia of contemporary interface design; however, the reality is different, as even
the simplest example of interaction hides an intricate network of interpretative
negotiations and cognitive processes. An interface, therefore, is never neutral or
transparent. Scolari, Las leyes de la interfaz.
70. Geoghegan, “Visionäre Informatik.”
71. See Collins, Artifictional Intelligence.
CHAPTER 3
1. Guzman, “Making AI Safe for Humans.”
2. Leonard, Bots, 33–34.
3. Russell and Norvig, Artificial Intelligence.
4. McCorduck, Machines Who Think, 253.
5. Zdenek, “Rising Up from the MUD,” 381.
6. Weizenbaum, “ELIZA.” For detailed but accessible explanations of ELIZA’s
functioning, see Pruijt, “Social Interaction with Computers”; Wardrip-Fruin,
Expressive Processing, 28–32.
7. Uttal, Real-Time Computers, 254–55.
8. Weizenbaum, “ELIZA.”
9. Weizenbaum, “How to Make a Computer Appear Intelligent.”
10. Crevier, AI, 133. Crevier reports that, in conversation with him, Weizenbaum recalled having started his career "as a kind of confidence
man”: “the program used a ridiculously simple strategy with no look ahead, but
it could beat anyone who played at the same naive level. Since most people had
never played the game before, that included just everybody. . . . In a way, that
was a forerunner to my later ELIZA, to establish my status as a charlatan or con
man. But the other side of the coin is that I freely stated it. The idea was to create
a powerful illusion that the computer was intelligent. I went to considerable
trouble in the paper to explain that there wasn’t much behind the scenes, that
the machine wasn’t thinking. I explained the strategy well enough that anybody
could write the program, which is the same thing I did with ELIZA” (133).
11. Weizenbaum, “How to Make a Computer Appear Intelligent,” 24.
12. Weizenbaum, “Contextual Understanding by Computers.” This element is
implicit in Turing’s proposal. As Edmonds notes, “the elegance of the Turing
Test (TT) comes from the fact that it is not a requirement upon the mechanisms
needed to implement intelligence but on the ability to fulfil a role. In the
language of biology, Turing specified the niche that intelligence must be able to
occupy rather than the anatomy of the organism. The role that Turing chose was
a social role—whether humans could relate to it in a way that was sufficiently
similar to a human intelligence that they could mistake the two.” Edmonds, “The
Constructibility of Artificial Intelligence (as Defined by the Turing Test),” 419.
13. Weizenbaum, “ELIZA,” 37. The notion of script later became a common trope in
technical descriptions of conversational software, which frequently represent
“programming tricks” or the “living prototypes” of Heinz von Foerster’s
Biological Computer Laboratory. See Soni and Goodman, A Mind at Play, 243–53;
Müggenburg, “Lebende Prototypen und lebhafte Artefakte.”
36. Weizenbaum, Islands in the Cyberstream, 89.
37. The gender dimension of this anecdote should not be overlooked, nor should the gendered identity assigned to chatbots from ELIZA to Amazon's Alexa. See, on
this, Zdenek, “Rising Up from the MUD.”
38. Boden, Mind as Machine, 1352.
39. Turkle, The Second Self, 110.
40. Benton, Literary Biography; Kris and Kurz, Legend, Myth, and Magic in the Image
of the Artist; Ortoleva, “Vite Geniali.”
41. Sonnevend, Stories without Borders.
42. Wilner, Christopoulos, Alves, and Guimarães, “The Death of Steve Jobs,” 430.
43. Spufford and Uglow, Cultural Babbage.
44. Martin, “The Myth of the Awesome Thinking Machine”; Bory and Bory, “I nuovi
immaginari dell’intelligenza artificiale.”
45. Crevier, AI.
46. Weizenbaum, “ELIZA,” 36.
47. Wardrip-Fruin, Expressive Processing, 32.
48. Colby, Watt, and Gilbert, “A Computer Method of Psychotherapy,” 148–52.
PARRY was described as “ELIZA with attitude” and programmed to play the role
of a paranoid. Its effectiveness was measured by the fact that doctors who were unaware they were dealing with a computer program often diagnosed it as paranoid; see Boden, Mind as Machine, 370. Some believe that PARRY was superior to ELIZA because "it has personality," as argued by Mauldin, "ChatterBots, TinyMuds, and the Turing Test," 16. Yet ELIZA, too, despite its unwillingness to engage more substantially with the user's inputs, is instilled with a personality. Its personality is more minimal in part because Weizenbaum wanted to make a point, showing that even a very simple system could be perceived as "intelligent" by some users.
49. McCorduck, Machines Who Think, 313–15.
50. Weizenbaum, “On the Impact of the Computer on Society”; Weizenbaum,
Computer Power and Human Reason, 268–70; Weizenbaum, Islands in the
Cyberstream, 81.
51. Weizenbaum, Computer Power and Human Reason, 269–70.
52. Weizenbaum, "The Tyranny of Survival: The Need for a Science of Limits," New York Times, 3 March 1974, 425.
53. See, for an interesting example, Wilford, “Computer Is Being Taught to
Understand English.”
54. Weizenbaum, “Contextual Understanding by Computers,” 479; see
also Geoghegan, “Agents of History,” 409; Suchman, Human-Machine
Reconfigurations, 49.
55. Weizenbaum, Computer Power and Human Reason.
56. Weizenbaum, “Letters: Computer Capabilities,” 201.
57. Davy, “The Man in the Belly of the Beast,” 22.
58. Johnston employs the concept of “computational assemblage” to argue that
“every computational machine is conceived of as a material assemblage (a
physical device) conjoined with a unique discourse that explains and justifies
the machine’s operation and purpose. More simply, a computational assemblage
is comprised of both a machine and its associated discourse, which together
determine how and why this machine does what it does.” Johnston, The Allure of
Machinic Life, x.
59. Weizenbaum, Computer Power and Human Reason, 269.
60. McCorduck, Machines Who Think, 309.
61. Weizenbaum, Computer Power and Human Reason, 157. Luke Goode has recently argued, indirectly taking issue with Weizenbaum, that metaphors and narratives (fiction in particular) can indeed be useful for improving public understandings of AI and thus the governance of these technologies. Goode,
“Life, but Not as We Know It.”
62. Hu, A Prehistory of the Cloud; Peters, The Marvelous Clouds.
63. Kocurek, "The Agony and the Exidy"; Miller, "Grove Street Grimm."
64. See, for instance, “The 24 Funniest Siri Answers That You Can Test with Your
iPhone.”
65. Murray, Hamlet on the Holodeck, 68, 72.
66. Wardrip-Fruin, Expressive Processing, 24.
67. All the ELIZA avatars I found use JavaScript. Yet it is in principle possible to
provide a reliable copy of ELIZA, due to the repeatability of code. This would not
be the case with contemporary AI systems such as Siri and Alexa, which employ
machine learning techniques to constantly improve their performances and are
therefore never the same. In this, they are, after all, similar to humans, whose physiology and psychology change constantly, making them in a certain sense "different" at any given moment in time. Jeff Shrager's website provides a
fairly comprehensive genealogy of ELIZA reconstructions and scholarship; see
Shrager, “The Genealogy of Eliza.”
68. Boden, Mind as Machine, 1353.
69. Marino, “I, Chatbot,” 8.
70. Turner, From Counterculture to Cyberculture; Flichy, The Internet Imaginaire;
Streeter, The Net Effect.
71. King, “Anthropomorphic Agents,” 5.
72. Wardrip-Fruin, Expressive Processing, 32.
73. Interestingly, Wardrip-Fruin also notes that the effect conjured in this way by ELIZA broke down as the interaction continued, the limitations of the chatbot became evident, and the user's understanding of the internal
processes at work improved. “In this context, it is interesting to note that
most systems of control that are meant to appear intelligent have extremely
restricted methods of interaction”; Wardrip-Fruin, Expressive Processing, 37. The
implications of this are further discussed in chapter 5, which is dedicated to the
Loebner Prize and to chatbots that are developed in order to pass as humans.
As I will show, the limitations of the conversation included contextual elements
such as the rules of play, the topic of conversation, and the medium employed to
communicate, as well as situational elements created by the strategies that the
programmers of the chatbots developed in order to deflect queries that would
reveal the program as such.
74. Turkle, The Second Self.
75. “Did that search engine really know what you want, or are you playing along,
lowering your standards to make it seem clever? While it’s to be expected that
the human perspective will be changed by encounters with profound new
technologies, the exercise of treating machine intelligence as real requires people
to reduce their moorings to reality.” Lanier, You Are Not a Gadget, 32.
76. Turkle, The Second Self, 110.
77. Weizenbaum, Computer Power and Human Reason, 11.
78. Bucher, “The Algorithmic Imaginary.”
79. Zdenek, “Artificial Intelligence as a Discursive Practice,” 340, 345.
80. Dembert, “Experts Argue Whether Computers Could Reason, and If They
Should.”
CHAPTER 4
1. Suchman, Human-Machine Reconfigurations; see also Castelfranchi and Tan, Trust
and Deception in Virtual Societies.
2. Lighthill, “Artificial Intelligence.”
3. Dreyfus, Alchemy and Artificial Intelligence; see also Dreyfus, What Computers
Can’t Do.
4. Crevier, AI; see also Natale and Ballatore, “Imagining the Thinking Machine.”
5. Crevier, AI; McCorduck, Machines Who Think; Ekbia, Artificial Dreams.
6. Turkle, Life on the Screen. Turkle believed that this was one of the manifestations
of a new tendency, which emerged in the 1980s, to “take things at interface
value,” so that computer programs are treated as social actors one can do
business with, provided that they work. The hypothesis that shifts in perceptions
of computing had to do with broader technological and social changes is
supported by historical studies about the social imaginary surrounding digital
media. Authors such as Thomas Streeter and Fred Turner have convincingly
demonstrated that the emergence of personal computers from the early
1980s and of the web from the late 1990s dramatically changed the cultural
climate surrounding these “new” media. Streeter, The Net Effect; Turner, From
Counterculture to Cyberculture. See also, among others, Schulte, Cached; Flichy,
The Internet Imaginaire; Mosco, The Digital Sublime.
7. Williams, Television.
8. Mahoney, “What Makes the History of Software Hard.”
9. For further discussions on how to approach histories of software, see, among
others, Frederik Lesage, “A Cultural Biography of Application Software”;
Mackenzie, “The Performativity of Code Software and Cultures of Circulation”;
Balbi and Magaudda, A History of Digital Media; as well as my “If Software Is
Narrative.”
10. Hollings, Martin, and Rice, Ada Lovelace; Subrata Dasgupta, It Began with
Babbage.
11. Manovich, “How to Follow Software Users.”
12. Leonard, Bots, 21.
13. McKelvey, Internet Daemons.
14. Krajewski, The Server, 170–209.
15. Cited in Chun, “On Sourcery, or Code as Fetish,” 320. Norbert Wiener later
brought the notion into his work on cybernetics, employing the concept to
illuminate the creation of meaning through information processing. Wiener,
Cybernetics. See also Leff and Rex, Maxwell’s Demon.
16. Canales and Krajewski, “Little Helpers,” 320.
17. McKelvey, Internet Daemons, 5.
18. Appadurai, The Social Life of Things; Gell, Art and Agency.
19. Turkle, Evocative Objects.
20. Lesage and Natale, “Rethinking the Distinctions between Old and New Media”;
Natale, “Unveiling the Biographies of Media”; Lesage, “A Cultural Biography of
Application Software.”
48. Dourish, Where the Action Is.
49. DeLoach, “Social Interfaces.”
50. Brenda Laurel defines “interface agent” as “a character, enacted by the computer,
who acts on behalf of the user in a virtual (computer-based) environment.”
Laurel, “Interface Agents: Metaphors with Character,” 208.
51. McCracken, “The Bob Chronicles.”
52. See, among others, Smith, “Microsoft Bob to Have Little Steam, Analysts
Say”; Magid, “Microsoft Bob: No Second Chance to Make a First Impression”;
December, “Searching for Bob.”
53. Gooday, “Re-writing the ‘Book of Blots’ ”; Lipartito, “Picturephone and the
Information Age”; Balbi and Magaudda, Fallimenti digitali.
54. On later attempts by Microsoft to develop social interfaces, see Sweeney, “Not
Just a Pretty (Inter)Face.”
55. Smith, “Microsoft Bob to Have Little Steam.”
56. "Microsoft Bob Comes Home." For a quick sense of how Microsoft Bob worked, see "A Guided Tour of Microsoft Bob." See also "Microsoft Bob."
57. As a journalist observed, however, Bob was "a curious name for a program that contains no one named Bob": none of the available personal guides, in fact, bore that name. Warner, "Microsoft Bob Holds Hands with PC Novices."
58. “A Guided Tour of Microsoft Bob.” For a discussion on the opportunity to include
a personality in interface agents, see Marenko and Van Allen, “Animistic Design.”
59. “Microsoft Bob Comes Home.”
60. William Casey, “The Two Faces of Microsoft Bob.”
61. McCracken, “The Bob Chronicles.”
62. Leonard, Bots, 77. Another common point of criticism was that Bob didn’t come
with a manual. This was meant to emphasize that social interfaces let users
learn by experience; however, at a time when long manuals were the norm
for commercial software, this put off several commentators. See, for instance,
Magid, “Microsoft Bob.”
63. Manes, “Bob,” C8.
64. Gillmor, “Bubba Meets Microsoft,” 1D. See also Casey, “The Two Faces of
Microsoft Bob.”
65. “Microsoft Bob Comes Home.”
66. Reeves and Nass, The Media Equation; Nass and Moon, “Machines and
Mindlessness.”
67. Trower, “Bob and Beyond.”
68. “Microsoft Bob Comes Home.”
69. Cited in Trower, “Bob and Beyond.”
70. Reeves and Nass, The Media Equation.
71. Nass and Moon, “Machines and Mindlessness.”
72. Black, “Usable and Useful.” See also chapter 2.
73. McCracken, “The Bob Chronicles.”
74. Andersen and Pold, The Metainterface.
75. As every stage magician could confirm, this usually makes for a poor illusion.
Coppa, Hass, and Peck, Performing Magic on the Western Stage.
76. “Alexa, I Am Your Father.”
77. Liddy, “Natural Language Processing.” For a synthetic and insightful overview
of the contribution of natural language processing to AI research, see Wilks,
Artificial Intelligence.
78. Suchman, Human-Machine Reconfigurations.
CHAPTER 5
1. Epstein, “The Quest for the Thinking Computer.”
2. See, among others, Boden, Mind as Machine, 1354; Levesque, Common Sense, the
Turing Test, and the Quest for Real AI.
3. Shieber, “Lessons from a Restricted Turing Test.”
4. Floridi, Taddeo, and Turilli, “Turing’s Imitation Game”; Weizenbaum, Islands in
the Cyberstream, 92–93.
5. Barr, “Natural Language Understanding,” 5–10; see also Wilks, Artificial
Intelligence, 7–8.
6. Epstein, “Can Machines Think?”
7. Shieber, “Lessons from a Restricted Turing Test”; Epstein, “Can Machines
Think?”
8. As shown in previous chapters, after all, the imaginary has played a key role in
directing the course of AI across its long history, with enthusiastic narratives
characterizing its rise in the 1950s and 1960s and a wave of disappointment
marking the "AI Winter" in the following two decades. See in particular chapters 2 and 3.
9. Streeter, The Net Effect; Flichy, The Internet Imaginaire; Turner, From
Counterculture to Cyberculture.
10. Bory, “Deep New”; Smith, The AI Delusion, 10.
11. Boden, Mind as Machine, 1354. Marvin Minsky, one of the Loebner Prize's main critics, expressed this view when he called the Prize a "publicity stunt." He also offered $100 to anyone who would end the prize. Loebner, never missing an opportunity
for publicity, replied to Minsky’s accusations by announcing that Minsky was
now a cosponsor, since the Loebner Prize contest would end once a computer
program had passed the Turing test. Walsh, Android Dreams, 41.
12. Luger and Chakrabarti, “From Alan Turing to Modern AI.”
13. Markoff, “Theaters of High Tech,” 15.
14. Epstein, "Can Machines Think?," 84; see also Shieber, "Lessons from a Restricted
Turing Test”; Loebner, “The Turing Test.” It is worth mentioning that, in an
appropriate homage to the lineage of deceitful chatbots, ELIZA’s creator, Joseph
Weizenbaum, was a member of the prize’s committee for the first competition.
15. Regulations in the following instances of the Loebner Prize competition
varied significantly regarding many key aspects, including the duration of
conversations, which went from five to twenty minutes. Due to the vagueness
of Turing’s initial proposal, there has hardly been a consensus on what a “Turing
test” should look like, and different organizing committees of the prize have
made different decisions at different times. For a discussion of the Loebner Prize
contest’s rules of play, see Warwick and Shah, Turing’s Imitation Game.
16. As in the case of other human-versus-machine challenges that made headlines
in that period, the confrontation was recounted through the popular narrative
patterns that are typical of sports journalism: some reports, for instance, had a
nationalistic tone, as in the articles in the British press celebrating the 1997 Loebner Prize win by a team from the UK. "British Team Has Chattiest
Computer Program”; “Conversations with Converse.”
17. Epstein, “Can Machines Think?”
18. Geoghegan, “Agents of History,” 407. On the longer history of spectacular events
presenting scientific and technological wonders, see, among others, Morus,
Frankenstein’s Children; Nadis, Wonder Shows; Highmore, “Machinic Magic.”
19. Sussman, “Performing the Intelligent Machine,” 83.
20. Epstein, “Can Machines Think?,” 85. In 1999 the Loebner Prize competition was
also for the first time watchable on the web.
21. See, for instance, Charlton, “Computer: Machines Meet Mastermind”; Markoff,
“So Who’s Talking?” Over the years, journalists have made significant efforts
to come up with original headlines for the contest. My personal favorite: "I Think, Therefore I'm RAM."
22. Epstein, “Can Machines Think?,” 82.
23. Shieber, “Lessons from a Restricted Turing Test,” 4.
24. Epstein, “Can Machines Think?,” 95.
25. See, on this, Natale, “Unveiling the Biographies of Media.”
26. Wilner, Christopoulos, Alves, and Guimarães, “The Death of Steve Jobs.”
27. Lindquist, “Quest for Machines That Think”; “Almost Human.”
28. See chapter 2.
29. Examples of more critical reports include Allen, “Why Artificial Intelligence
May Be a Really Dumb Idea”; “Can Machines Think? Judges Think Not.” On the
dualism between enthusiasm and criticism of AI and the role of controversies in
the AI myth, see Natale and Ballatore, “Imagining the Thinking Machine,” 9–11.
30. Markoff, “Can Machines Think?”
31. Christian, The Most Human Human.
32. Haken, Karlqvist, and Svedin, The Machine as Metaphor and Tool, 1; Gitelman,
Scripts, Grooves, and Writing Machines; Schank and Abelson, Scripts, Plans, Goals,
and Understanding.
33. Crace, “The Making of the Maybot”; Flinders, “The (Anti-)Politics of the General
Election.”
34. See, on this, Stokoe et al., “Can Humans Simulate Talking Like Other Humans?”
35. Christian, The Most Human Human, 261.
36. Collins, Artifictional Intelligence, 51. This happened even when judges had
been explicitly prohibited from using “trickery or guile” to detect fooling: in
the setting of the Loebner Prize contest, in fact, it is virtually impossible to
distinguish tricking from actual interaction, as all actors are constantly aware
of the possibility of deceiving and being deceived. Shieber, “Lessons from a
Restricted Turing Test,” 6.
37. Epstein, “Can Machines Think?,” 89.
38. Shieber, “Lessons from a Restricted Turing Test,” 7.
39. Turkle, Life on the Screen, 86. As Hector Levesque has also stressed about
conversations in the Loebner Prize, “what is striking about transcripts of these
conversations is the fluidity of the responses from the test subjects: elaborate
wordplay, puns, jokes, quotations, asides, emotional outbursts, points of order.
Everything, it would seem, except clear and direct answers to questions.”
Levesque, Common Sense, the Turing Test, and the Quest for Real AI, 49.
40. For instance, to calculate the delay needed between each character in order to
make his chatbot credible, Michael L. Mauldin made this calculation before the
1992 edition of the Loebner Prize: “we obtained the real-time logs of the 1991
competition . . . and sampled the typing record of judge #10 (chosen because he
was the slowest typist of all 10 judges). The average delay between two characters
is 330 milliseconds, with a standard deviation of 490 milliseconds.” Mauldin,
“ChatterBots, TinyMuds, and the Turing Test,” 20.
41. Yorick Wilks, who won the Loebner Prize in 1997 with the program CONVERSE,
remembered, for instance, that “the kinds of tricks we used to fool the judges
included such things as making deliberate spelling mistakes to seem human, and
making sure the computer responses came up slowly on the screen, as if being
typed by a person, and not instantaneously as if read from stored data.” Wilks,
Artificial Intelligence, 7.
42. Epstein, “Can Machines Think?,” 83.
43. For a list of common tricks used by chatbots developers, see Mauldin,
“ChatterBots, TinyMuds, and the Turing Test,” 19. See also Wallace, “The
Anatomy of A.L.I.C.E.”
44. Cited in Wilks, Artificial Intelligence, 7. Yorick Wilks was part of the
CONVERSE team.
45. Jason L. Hutchens, “How to Pass the Turing Test by Cheating.”
46. Natale, “The Cinema of Exposure.”
47. Münsterberg, American Problems from the Point of View of a Psychologist, 121.
48. This is of course not the norm in all interactions on the web, as the increasing use of CAPTCHAs and the issue of social bots show. See on this Fortunati,
Manganelli, Cavallo, and Honsell, “You Need to Show That You Are Not a Robot.”
49. Humphrys, “How My Program Passed the Turing Test,” 238. On Humphrys’
chatbot and its later online version, MGonz, see also Christian, The Most Human
Human, 26–27.
50. Humphrys, “How My Program Passed the Turing Test,” 238.
51. Turkle, Life on the Screen, 228.
52. Leslie, “Why Donald Trump Is the First Chatbot President.”
53. Epstein, “From Russia, with Love.”
54. Muhle, “Embodied Conversational Agents as Social Actors?”
55. Boden, Mind as Machine, 1354. The level of the entries did not improve significantly over the years. As noted by Yorick Wilks, "systems that win don't usually enter again,
as they have nothing left to prove. So new ones enter and win but do not seem
any more fluent or convincing than those of a decade before. This is a corrective
to the popular view that AI is always advancing all the time and at a great
rate. As we will see, some parts are, but some are quite static.” Wilks, Artificial
Intelligence, 9.
56. Shieber, “Lessons from a Restricted Turing Test,” 6; Christian, The Most Human
Human. For some excellent entry points into the cultural history of deception,
see Cook, The Arts of Deception; Pettit, The Science of Deception; Lamont,
Extraordinary Beliefs.
57. Epstein, “Can Machines Think?,” 86.
58. Suchman, Human-Machine Reconfigurations, 41. The habitability problem was
originally described in Watt, “Habitability.”
59. Nass and Brave, Wired for Speech; Guzman, “Making AI Safe for Humans.”
60. Malin, Feeling Mediated; Peters, Speaking into the Air. For an examination of some
of the contexts in which this recognition broke down, see Lisa Gitelman, Scripts,
Grooves, and Writing Machines. For a discussion of the concept of mediatization
and its impact on contemporary societies, see Hepp, Deep Mediatization.
61. Gombrich, Art and Illusion, 261.
62. Bourdieu, Outline of a Theory of Practice.
63. Bickmore and Picard, “Subtle Expressivity by Relational Agents,” 1.
64. This also applies to online spaces, as noted by Gunkel: "the apparent 'intelligence' of the bot is as much a product of the bot's internal programming and
operations as it is a product of the tightly controlled social context in which
the device operates.” Gunkel, An Introduction to Communication and Artificial
Intelligence, 142.
65. Humphrys, “How My Program Passed the Turing Test.”
66. Łupkowski and Rybacka, “Non-cooperative Strategies of Players in the Loebner
Contest.”
67. Hutchens, “How to Pass the Turing Test by Cheating,” 11.
68. Chemers, “ ‘Like unto a Lively Thing.’ ”
69. See, among many others, Weizenbaum, Computer Power and Human Reason;
Laurel, Computers as Theatre; Leonard, Bots, 80; Pollini, “A Theoretical
Perspective on Social Agency.”
70. See, for example, Eytan Adar, Desney S. Tan, and Jaime Teevan, "Benevolent Deception in Human Computer Interaction," as well as Laurel, Computers as Theatre.
71. Bates, “The Role of Emotion in Believable Agents”; Murray, Hamlet on the
Holodeck.
72. Eco, Lector in Fabula.
73. This is similar, in principle, to what happens in a conversation between
two humans, with the crucial difference that in most such conversations neither party follows a predesigned script, as a chatbot in the Loebner Prize
contest would.
74. As argued by Peggy Weil, “bots are improvisers in the sense that their code
is prepared but its delivery, though rule driven, is unstable. It is a form of
improvisation associated with the solo artist conducting a dialogue with a live
audience. This may be an audience of one or many, but more importantly, it is
an unknown audience. Anyone can log on to chat, and anyone can and might say
anything. The bot, like the puppeteer, the ventriloquist, the clown, the magician,
the confidence man and those to whom we tell our confidences, the therapist,
must also be prepared for anything.” Weil, “Seriously Writing SIRI.” Weil is the
creator of MrMind, a chatbot developed in 1998 to embody a reverse Turing test
by inviting users to persuade MrMind that they are human.
75. Whalen, “Thom’s Participation in the Loebner Competition 1995.” Whalen also
reflected on the advantages to programs of privileging nonsense rather than a
coherent storyline: “third, I hypothesized that the judges would be more tolerant
of the program saying, 'I don't know' than of a non sequitur. Thus, rather than
having the program make a bunch of irrelevant statements when it could not
understand questions, I simply had it rotate through four statements that were
synonymous with ‘I don’t know.’ Weintraub’s program [which won the contest],
however, was a master of the non sequitur. It would continually reply with some
wildly irrelevant statement, but throw in a qualifying clause or sentence that
used a noun or verb phrase from the judge’s question in order to try to establish
a thin veneer of relevance. I am amazed at how cheerfully the judges tolerated
that kind of behaviour. I can only conclude that people do not require that their
conversational partners be consistent or even reasonable.”
76. Neff and Nagy, “Talking to Bots,” 4916.
77. See on this topic Lessard and Arsenault, “The Character as Subjective Interface”;
Nishimura, “Semi-autonomous Fan Fiction: Japanese Character Bot and Non-
human Affect.”
78. Cerf, “PARRY Encounters the DOCTOR.”
79. Neff and Nagy, “Talking to Bots,” 4920.
80. Hall, Representation.
81. Marino, I, Chatbot, 87.
82. Shieber, “Lessons from a Restricted Turing Test,” 17.
CHAPTER 6
1. MacArthur, “The iPhone Erfahrung,” 117. See also Gallagher, Videogames, Identity
and Digital Subjectivity, 115.
2. Hoy, “Alexa, Siri, Cortana, and More”; “Number of Digital Voice Assistants in Use
Worldwide from 2019 to 2023”; Olson and Kemery, “From Answers to Action”;
Gunkel, An Introduction to Communication and Artificial Intelligence, 142–54. Voice
assistants are sometimes also referred to as speech dialog systems (SDS). I use
the term voice assistant here to differentiate them from systems that use similar
technologies but with different functions and framing.
3. Torrance, The Christian Doctrine of God, One Being Three Persons.
4. Lesage, “Popular Digital Imaging: Photoshop as Middlebroware.”
5. Crawford and Joler, “Anatomy of an AI System.”
6. Cooke, “Talking with Machines.”
7. In her article on @Horse_ebooks—a Twitter account presented as a bot,
which actually turned out to be a human impersonating a bot impersonating a
human—Taina Bucher argues that the bot did not produce the illusion of being
a "real" person but rather negotiated a public persona for itself through interactions
with Twitter users. Bucher compares the bot’s persona to the public face of a
film or television star, with which fans build an imagined relationship. Bucher,
“About a Bot.” See also Lester et al., “Persona Effect”; Gehl, Socialbots and Their
Friends; Wünderlich and Paluch, “A Nice and Friendly Chat with a Bot.”
8. Nass and Brave, Wired for Speech.
9. Liddy, “Natural Language Processing.”
10. For a synthetic and insightful overview of the role of information retrieval in AI,
see Wilks, Artificial Intelligence, 42–46.
11. Sterne, The Audible Past. See also Connor, Dumbstruck; Doornbusch,
“Instruments from Now into the Future”; Gunning, “Heard over the Phone”;
Picker, “The Victorian Aura of the Recorded Voice.”
12. Laing, “A Voice without a Face”; Young, Singing the Body Electric.
13. Edison, “The Phonograph and Its Future.”
14. Nass and Brave, Wired for Speech.
15. Chion, The Voice in Cinema.
16. Licklider and Taylor, “The Computer as a Communication Device”; Rabiner and
Schafer, “Introduction to Digital Speech Processing”; Pieraccini, The Voice in the
Machine.
17. Duerr, “Voice Recognition in the Telecommunications Industry.”
18. McCulloch and Pitts, “A Logical Calculus of the Ideas Immanent in Nervous
Activity.”
19. Kelleher, Deep Learning, 101–43.
20. Goodfellow, Bengio, and Courville, Deep Learning.
21. Rainer Mühlhoff, “Human-Aided Artificial Intelligence.”
22. It is worth mentioning that speech processing is also a combination of different
systems such as automatic speech recognition and text-to-sound synthesis.
See, on this, Gunkel, An Introduction to Communication and Artificial Intelligence,
144–46.
23. Sterne, The Audible Past.
24. McKee, Professional Communication and Network Interaction, 167.
25. Phan, “The Materiality of the Digital and the Gendered Voice of Siri.”
26. Google, “Choose the Voice of Your Google Assistant.”
27. Kelion, “Amazon Alexa Gets Samuel L Jackson and Celebrity Voices.” According
to the Alexa skills page where users will be able to buy Jackson’s voice, this
will have limitations, however: “Although he can do a lot, Sam won’t be able to
help with Shopping, lists, reminders or Skills.” The page still promises: “Samuel
L. Jackson can help you set a timer, serenade you with a song, tell you a funny
joke, and more. Get to know him a little better by asking about his interests
and career.” After purchasing the feature, users will be able to choose “whether
you’d like Sam to use explicit language or not.” Amazon, “Samuel L. Jackson—
Celebrity Voice for Alexa.”
28. See Woods, “Asking More of Siri and Alexa”; West, Kraut, and Chew, I’d Blush If
I Could; Hester, “Technically Female”; Zdenek, “ ‘Just Roll Your Mouse over Me.’ ”
Quite ironically, Alexa (or at least some version of Alexa, as the technology and
its scripts are constantly changing) replies to the question of whether it considers itself to be a feminist with the following lines: "Yes, I am a feminist, as is anyone
who believes in bridging the inequality between men and women in society.”
Moore, “Alexa, Why Are You a Bleeding-Heart Liberal?”
29. Phan, “The Materiality of the Digital and the Gendered Voice of Siri.”
30. Sweeney, “Digital Assistants,” 4.
31. Guzman, Imagining the Voice in the Machine, 113.
32. See, among others, Nass and Brave, Wired for Speech; Xu, “First Encounter with
Robot Alpha”; Guzman, Imagining the Voice in the Machine; Gong and Nass,
“When a Talking-Face Computer Agent Is Half-human and Half-humanoid”;
Niculescu et al., “Making Social Robots More Attractive.”
33. Lippmann, Public Opinion. Gadamer reaches similar conclusions when he argues for rehabilitating the role of prejudices, defending the "positive validity, the value of the provisional decision as a prejudgment." Gadamer, Truth and
Method, 273; see also Andersen, “Understanding and Interpreting Algorithms.”
Work in cultural studies tends to downplay the insights of Lippmann’s work,
putting forth the concept of stereotype predominantly in negative terms; see,
for instance, Pickering, Stereotyping. While critical work to detect and expose
stereotypes is much needed, Lippmann’s approach can complement such
endeavors in at least two ways: first, by acknowledging and better understanding
the complexity of the process by which race, sex, and class stereotypes emerge
and proliferate in societies, and second, by showing that it is not so much the
absence of stereotypes that should be sought and desired, but a cultural politics
of stereotypes counteracting racism, sexism, and classism with more accurate
representations.
34. Deborah Harrison, former writing manager for Microsoft’s Cortana team,
pointed out that “for us the female voice was just about specificity. In the
early stages of trying to wrap our mind around the concept of what it is
to communicate with a computer, these moments of specificity help give
people something to acclimate to.” Young, “I’m a Cloud of Infinitesimal Data
Computation,” 117.
35. West, Kraut, and Chew, I’d Blush If I Could.
36. Guzman, Imagining the Voice in the Machine.
37. Nass and Brave, Wired for Speech; Guzman, Imagining the Voice in the
Machine, 143.
38. Humphry and Chesher, “Preparing for Smart Voice Assistants.”
39. Guzman, “Voices in and of the Machine,” 343.
40. McLean and Osei-Frimpong, “Hey Alexa.”
41. McLuhan, Understanding Media.
42. Nass and Brave, Wired for Speech; Dyson, The Tone of Our Times, 70–91; Kim and
Sundar, “Anthropomorphism of Computers: Is It Mindful or Mindless?”
43. Hepp, “Artificial Companions, Social Bots and Work Bots”; Guzman, “Making AI
Safe for Humans.”
44. Google, “Google Assistant.”
45. See chapter 1.
46. See, for instance, Sweeney, “Digital Assistants.”
47. Natale and Ballatore, “Imagining the Thinking Machine.”
48. Vincent, “Inside Amazon’s $3.5 Million Competition to Make Alexa Chat Like a
Human.”
49. The shortcomings of these systems, in fact, become evident as soon as users
depart from asking for factual information and question Alexa or Siri more
inquisitively. Boden, AI, 65.
50. Stroda, “Siri, Tell Me a Joke”; see also Christian, The Most Human Human.
51. Author’s conversations with Siri, 15 December 2019.
52. See, for example, Dainius, “54 Hilariously Honest Answers from Siri to
Uncomfortable Questions You Can Ask, Too.”
53. West, “Amazon”; Crawford and Joler, “Anatomy of an AI System.” Scripted
responses also help the marketing of voice assistants, since the funniest
replies are often shared by users and journalists on the web and in
social media.
54. For how people make sense of the impact of algorithms in their everyday
experience, see Bucher, “The Algorithmic Imaginary”; Natale, “Amazon Can Read
Your Mind.”
55. Boden, AI, 65.
56. McLean and Osei-Frimpong, “Hey Alexa.”
57. Caudwell and Lacey, “What Do Home Robots Want?”; Luka Inc., “Replika.”
58. Author’s conversation with Replika, 2 December 2019.
59. Even if they are actually quite intrusive: they are, after all, extensions of the
“ears” of those corporations, installed at the very center of home spaces. See
Woods, “Asking More of Siri and Alexa”; West, “Amazon.”
60. Winograd, “A Language/Action Perspective on the Design of Cooperative Work”;
Nishimura, “Semi-autonomous Fan Fiction.”
61. Heidorn, “English as a Very High Level Language for Simulation Programming.”
62. Wilks, Artificial Intelligence, 61.
63. Crawford and Joler, “Anatomy of an AI System.”
64. Chun, “On Sourcery, or Code as Fetish”; Black, “Usable and Useful.” See also
chapter 2.
65. Manning, Raghavan, and Schütze, Introduction to Information Retrieval.
66. Bentley, “Music, Search, and IoT.”
67. As of September 2019, it is estimated that there are more than 1.7 billion
websites online (data from https://www.internetlivestats.com).
68. Ballatore, “Google Chemtrails.”
69. Bozdag, “Bias in Algorithmic Filtering and Personalization”; Willson, “The
Politics of Social Filtering.”
70. Thorson and Wells, “Curated Flows.”
71. Goldman, “Search Engine Bias and the Demise of Search Engine Utopianism.”
72. MacArthur, “The iPhone Erfahrung,” 117.
73. Crawford and Joler, “Anatomy of an AI System.”
74. Hill, “The Injuries of Platform Logistics.”
75. Natale, Bory, and Balbi, “The Rise of Corporational Determinism.”
76. Greenfield, Radical Technologies. Since ELIZA’s time, assigning names to chatbots
and virtual characters has been central to creating the illusion of coherent
personalities, and research has confirmed that this also affects users’ perception
of AI assistants. Amazon selected the name Alexa because it was easily identified
as female and was unlikely to come up often in everyday conversation, an essential
requirement for a wake word. Microsoft’s Cortana is
clearly characterized as gendered: it is named after the fictional AI character
in the Halo digital game series, who appears as a nude, sexualized avatar in
the game. Siri’s name, by contrast, is more ambivalent. It was chosen to help
ensure that it could adapt to a global audience and diverse linguistic contexts—
“something that was easy to remember, short to type, comfortable to pronounce,
and a not-too-common human name.” Cheyer, “How Did Siri Get Its Name.”
77. Vaidhyanathan, The Googlization of Everything; Peters, The Marvelous Cloud.
78. Google, “Google Assistant.”
79. Hill, “The Injuries of Platform Logistics.”
80. Guzman and Lewis, “Artificial Intelligence and Communication.”
81. Chun, Programmed Visions; Galloway, The Interface Effect.
82. Bucher, “The Algorithmic Imaginary”; Finn, What Algorithms Want.
83. Donath, “The Robot Dog Fetches for Whom?”
84. Dale, “The Return of the Chatbots.” For a map, aimed at the computer industry,
of chatbots employed by many companies, see https://www.chatbotguide.org/.
CONCLUSION
1. See, among others, Weizenbaum, Computer Power and Human Reason; Dreyfus,
Alchemy and Artificial Intelligence; Smith, The AI Delusion.
2. See, for instance, Kurzweil, The Singularity Is Near; Minsky, The Society of Mind.
3. Boden, AI.
4. Bucher, If . . . Then; Andersen, “Understanding and Interpreting Algorithms”;
Lomborg and Kapsch, “Decoding Algorithms”; Finn, What Algorithms Want.
5. Donath, “The Robot Dog Fetches for Whom?”
6. For some existing and significant works on these topics, see among others
Siegel, Persuasive Robotics; Ham, Cuijpers, and Cabibihan, “Combining Robotic
Persuasive Strategies”; Jones, “How I Learned to Stop Worrying and Love the
Bots”; Edwards, Edwards, Spence, and Shelton, “Is That a Bot Running the Social
Media Feed?”; Hwang, Pearce, and Nanis, “Socialbots.”
7. Attempts to create and commercialize robot companions and assistants, both in
humanoid and animal-like form, have been partially successful, yet the diffusion
of robots is still very limited in comparison to that of AI voice assistants.
Caudwell and Lacey, “What Do Home Robots Want?”; Hepp, “Artificial
Companions, Social Bots and Work Bots.”
8. Vaccari and Chadwick, “Deepfakes and Disinformation.”
9. Neudert, “Future Elections May Be Swayed by Intelligent, Weaponized
Chatbots.”
10. Ben-David, “How We Got Facebook to Suspend Netanyahu’s Chatbot.”
11. Turkle, Alone Together.
12. West, Kraut, and Chew, I’d Blush If I Could.
13. Biele et al., “How Might Voice Assistants Raise Our Children?”
14. See, on this, Gunkel, An Introduction to Communication and Artificial
Intelligence, 152.
15. Mühlhoff, “Human-Aided Artificial Intelligence”; Crawford and Joler, “Anatomy
of an AI System”; Fisher and Mehozay, “How Algorithms See Their Audience.”
16. Weizenbaum, Computer Power and Human Reason, 227.
17. Whitby, “Professionalism and AI”; Boden, Mind as Machine, 1355.
18. Gunkel, An Introduction to Communication and Artificial Intelligence, 51.
19. Bucher, If . . . Then, 68.
20. Young, “I’m a Cloud of Infinitesimal Data Computation.”
21. Bucher, “Nothing to Disconnect from?”; Natale and Treré, “Vinyl Won’t Save Us.”
22. Boden, Mind as Machine, 1355. See also in this regard the “disenchantment
devices” proposed by Harry Collins, i.e., actions that “anyone can try anytime
they are near a computer” so as to learn how to “disenchant” themselves from
the temptation to treat computers as more intelligent than they really are.
Collins, Artifictional Intelligence, 5.
BIBLIOGRAPHY
Boden, Margaret. Mind as Machine: A History of Cognitive Science. Vol. 2.
Oxford: Clarendon Press, 2006.
Boellstorff, Tom. Coming of Age in Second Life: An Anthropologist Explores the Virtually
Human. Princeton, NJ: Princeton University Press, 2015.
Bogost, Ian. Persuasive Games: The Expressive Power of Videogames. Cambridge,
MA: MIT Press, 2007.
Bolter, J. David. Turing’s Man: Western Culture in the Computer Age. Chapel
Hill: University of North Carolina Press, 1984.
Bory, Paolo. “Deep New: The Shifting Narratives of Artificial Intelligence from Deep
Blue to AlphaGo.” Convergence 25.4 (2019), 627–42.
Bory, Stefano, and Paolo Bory. “I nuovi immaginari dell’intelligenza artificiale.” Im@
go: A Journal of the Social Imaginary 4.6 (2016), 66–85.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University
Press, 2014.
Bottomore, Stephen. “The Panicking Audience?: Early Cinema and the ‘Train Effect.’”
Historical Journal of Film, Radio and Television 19.2 (1999), 177–216.
Bourdieu, Pierre. Outline of a Theory of Practice. Cambridge: Cambridge University
Press, 1977.
Bozdag, Engin. “Bias in Algorithmic Filtering and Personalization.” Ethics and
Information Technology 15.3 (2013), 209–27.
Brahnam, Sheryl, Marianthe Karanikas, and Margaret Weaver. “(Un)dressing the
Interface: Exposing the Foundational HCI Metaphor ‘Computer Is Woman.’”
Interacting with Computers 23.5 (2011), 401–12.
Bratton, Benjamin. “Outing Artificial Intelligence: Reckoning with Turing Tests.” In
Alleys of Your Mind: Augmented Intelligence and Its Traumas, edited by Matteo
Pasquinelli (Lüneburg, Germany: Meson Press, 2015), 69–80.
Brewster, David. Letters on Natural Magic, Addressed to Sir Walter Scott. London: J.
Murray, 1832.
Brock, David C., ed. Understanding Moore’s Law: Four Decades of Innovation.
Philadelphia: Chemical Heritage Foundation, 2006.
Broussard, Meredith. Artificial Unintelligence: How Computers Misunderstand the World.
Cambridge, MA: MIT Press, 2018.
Bucher, Taina. “About a Bot: Hoax, Fake, Performance Art.” M/C Journal 17.3 (2014).
Available at http://www.journal.media-culture.org.au/index.php/mcjournal/
article/view/814. Retrieved 10 February 2020.
Bucher, Taina. “The Algorithmic Imaginary: Exploring the Ordinary Affects
of Facebook Algorithms.” Information, Communication & Society 20.1 (2017), 30–44.
Bucher, Taina. “Nothing to Disconnect From? Being Singular Plural in an Age of
Machine Learning.” Media, Culture and Society 42.4 (2020), 610–17.
Burian, Richard. “How the Choice of Experimental Organism
Matters: Epistemological Reflections on an Aspect of Biological Practice.”
Journal of the History of Biology 26.2 (1993), 351–67.
Bush, Vannevar. “As We May Think.” Atlantic Monthly 176.1 (1945), 101–8.
Calleja, Gordon. In-Game: From Immersion to Incorporation. Cambridge, MA: MIT
Press, 2011.
Campbell-Kelly, Martin. “The History of the History of Software.” IEEE Annals of the
History of Computing 29 (2007), 40–51.
Canales, Jimena, and Markus Krajewski. “Little Helpers: About Demons, Angels and
Other Servants.” Interdisciplinary Science Reviews 37.4 (2012), 314–31.
“Can Machines Think? Judges Think Not.” San Diego Union-Tribune, 17 December
1994, B1.
Carbonell, Jaime R., Jerome I. Elkind, and Raymond S. Nickerson. “On the
Psychological Importance of Time in a Time Sharing System.” Human Factors
10.2 (1968), 135–42.
Carey, James W. Communication as Culture: Essays on Media and Society. Boston: Unwin
Hyman, 1989.
Casey, William. “The Two Faces of Microsoft Bob.” Washington Post, 30 January
1995, F15.
Castelfranchi, Cristiano, and Yao-Hua Tan. Trust and Deception in Virtual Societies.
Dordrecht: Springer, 2001.
Caudwell, Catherine, and Cherie Lacey. “What Do Home Robots Want? The
Ambivalent Power of Cuteness in Robotic Relationships.” Convergence,
published online before print 2 April 2019, doi: 10.1177/1354856519837792.
Cerf, Vint. “PARRY Encounters the DOCTOR,” Internet Engineering Task Force
(IETF), 21 January 1973. Available at https://tools.ietf.org/html/rfc439.
Retrieved 29 November 2019.
Ceruzzi, Paul. A History of Modern Computing. Cambridge, MA: MIT Press, 2003.
Chadwick, Andrew. The Hybrid Media System: Politics and Power. Oxford: Oxford
University Press, 2017.
Chakraborti, Tathagata, and Subbarao Kambhampati. “Algorithms for the Greater
Good! On Mental Modeling and Acceptable Symbiosis in Human-AI
Collaboration.” ArXiv:1801.09854, 30 January 2018.
Charlton, John. “Computer: Machines Meet Mastermind.” Guardian, 29 August 1991.
Chemers, Michael M. “‘Like unto a Lively Thing’: Theatre History and Social Robotics.”
In Theatre, Performance and Analogue Technology: Historical Interfaces and
Intermedialities, edited by Lara Reilly (Basingstoke, UK: Palgrave Macmillan,
2013), 232–49.
Chen, Brian X., and Cade Metz. “Google’s Duplex Uses A.I. to Mimic Humans
(Sometimes).” New York Times, 22 May 2019. Available at https://www.nytimes.
com/2019/05/22/technology/personaltech/ai-google-duplex.html. Retrieved 7
February 2020.
Cheyer, Adam. “How Did Siri Get Its Name?” Forbes, 21 December 2012. Available at
https://www.forbes.com/sites/quora/2012/12/21/how-did-siri-get-its-name.
Retrieved 12 January 2020.
Chion, Michel. The Voice in Cinema. New York: Columbia University Press, 1999.
“Choose the Voice of Your Google Assistant.” Google, 2020. Available at http://
support.google.com/assistant. Retrieved 3 January 2020.
Christian, Brian. The Most Human Human: What Talking with Computers Teaches Us
about What It Means to Be Alive. London: Viking, 2011.
Chun, Wendy Hui Kyong. “On ‘Sourcery,’ or Code as Fetish.” Configurations 16.3
(2008), 299–324.
Chun, Wendy Hui Kyong. Programmed Visions: Software and Memory. Cambridge,
MA: MIT Press, 2011.
Chun, Wendy Hui Kyong. Updating to Remain the Same: Habitual New Media.
Cambridge, MA: MIT Press, 2016.
Coeckelbergh, Mark. “How to Describe and Evaluate ‘Deception’
Phenomena: Recasting the Metaphysics, Ethics, and Politics of ICTs in Terms
of Magic and Performance and Taking a Relational and Narrative Turn.” Ethics
and Information Technology 20.2 (2018), 71–85.
Colby, Kenneth Mark, James P. Watt, and John P. Gilbert. “A Computer Method of
Psychotherapy: Preliminary Communication.” Journal of Nervous and Mental
Disease 142 (1966), 148–52.
Collins, Harry. Artifictional Intelligence: Against Humanity’s Surrender to Computers.
New York: Polity Press, 2018.
Connor, Steven. Dumbstruck: A Cultural History of Ventriloquism. Oxford: Oxford
University Press, 2000.
“Conversations with Converse.” Independent, 16 May 1997, 2.
Conway, Flo, and Jim Siegelman. Dark Hero of the Information Age: In Search of Norbert
Wiener, the Father of Cybernetics. New York: Basic Books, 2005.
Cook, James W. The Arts of Deception: Playing with Fraud in the Age of Barnum.
Cambridge, MA: Harvard University Press, 2001.
Cooke, Henry. Intervention at round table discussion “Talking with Machines,”
Mediated Text Symposium, Loughborough University, London, 5 April 2019.
Copeland, Jack. “Colossus: Its Origins and Originators.” IEEE Annals of the History of
Computing 26 (2004), 38–45.
Copeland, Jack, ed. The Essential Turing. Oxford: Oxford University Press, 2004.
Copeland, Jack. Turing: Pioneer of the Information Age. Oxford: Oxford University
Press, 2012.
Coppa, Francesca, Lawrence Hass, and James Peck, eds., Performing Magic on the
Western Stage: From the Eighteenth Century to the Present. New York: Palgrave
MacMillan, 2008.
Costanzo, William. “Language, Thinking, and the Culture of Computers.” Language
Arts 62 (1985), 516–23.
Couldry, Nick. “Liveness, ‘Reality,’ and the Mediated Habitus from Television to the
Mobile Phone.” Communication Review 7.4 (2004), 353–61.
Crace, John. “The Making of the Maybot: A Year of Mindless Slogans, U-Turns and
Denials.” Guardian, 10 July 2017. Available at https://www.theguardian.com/
politics/2017/jul/10/making-maybot-theresa-may-rise-and-fall. Retrieved 21
November 2019.
Crawford, Kate, and Vladan Joler. “Anatomy of an AI System.” 2018. Available at
https://anatomyof.ai/. Retrieved 20 September 2019.
Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence.
New York: Basic Books, 1993.
Dainius. “54 Hilariously Honest Answers from Siri to Uncomfortable Questions You
Can Ask, Too.” Bored Panda, 2015. Available at https://www.boredpanda.com/
best-funny-siri-responses/. Retrieved 12 January 2020.
Dale, Robert. “The Return of the Chatbots.” Natural Language Engineering 22.5 (2016),
811–17.
Danaher, John. “Robot Betrayal: A Guide to the Ethics of Robotic Deception.” Ethics and
Information Technology 22.2 (2020), 117–28.
Dasgupta, Subrata. It Began with Babbage: The Genesis of Computer Science.
Oxford: Oxford University Press, 2014.
Davy, John. “The Man in the Belly of the Beast.” Observer, 15 August 1982, 22.
December, John. “Searching for Bob.” Computer-Mediated Communication Magazine
2.2 (1995), 9.
DeLoach, Scott. “Social Interfaces: The Future of User Assistance.” In PCC 98.
Contemporary Renaissance: Changing the Way we Communicate. Proceedings
1998 IEEE International Professional Communication Conference (1999),
31–32.
Dembert, Lee. “Experts Argue Whether Computers Could Reason, and If They
Should.” New York Times, 8 May 1977, 1.
Dennett, Daniel C. The Intentional Stance. Cambridge, MA: MIT Press, 1989.
Dennett, Daniel C. “Intentional Systems.” Journal of Philosophy 68.4 (1971), 87–106.
DePaulo, Bella M., Susan E. Kirkendol, Deborah A. Kashy, Melissa M. Wyer, and
Jennifer A. Epstein. “Lying in Everyday Life.” Journal of Personality and Social
Psychology 70.5 (1996), 979–95.
De Sola Pool, Ithiel, Craig Dekker, Stephen Dizard, Kay Israel, Pamela Rubin, and
Barry Weinstein. “Foresight and Hindsight: The Case of the Telephone.”
In Social Impact of the Telephone, edited by Ithiel De Sola Pool (Cambridge,
MA: MIT Press, 1977), 127–57.
Devlin, Kate. Turned On: Science, Sex and Robots. London: Bloomsbury, 2018.
Doane, Mary Ann. The Emergence of Cinematic Time: Modernity, Contingency, the
Archive. Cambridge, MA: Harvard University Press, 2002.
Donath, Judith. “The Robot Dog Fetches for Whom?” In A Networked Self and
Human Augmentics, Artificial Intelligence, Sentience, edited by Zizi Papacharissi
(New York: Routledge, 2018), 10–24.
Doornbusch, Paul. “Instruments from Now into the Future: The Disembodied Voice.”
Sounds Australian 62 (2003): 18–23.
Dourish, Paul. Where the Action Is: The Foundations of Embodied Interaction. Cambridge,
MA: MIT Press, 2001.
Downey, John, and Natalie Fenton. “New Media, Counter Publicity and the Public
Sphere.” New Media & Society 5.2 (2003), 185–202.
Dreyfus, Hubert L. Alchemy and Artificial Intelligence. Santa Monica, CA: Rand
Corporation, 1965.
Dreyfus, Hubert L. What Computers Can’t Do: A Critique of Artificial Reason.
New York: Harper and Row, 1972.
Duerr, R. “Voice Recognition in the Telecommunications Industry.” Professional
Program Proceedings. ELECTRO ‘96, Somerset, NJ, USA (1996), 65–74.
Dumont, Henrietta. The Lady’s Oracle: An Elegant Pastime for Social Parties and the
Family Circle. Philadelphia: H. C. Peck & Theo. Bliss, 1851.
During, Simon. Modern Enchantments: The Cultural Power of Secular Magic. Cambridge,
MA: Harvard University Press, 2002.
Dyson, Frances. The Tone of Our Times: Sound, Sense, Economy, and Ecology. Cambridge,
MA: MIT Press, 2014.
Eco, Umberto. Lector in Fabula. Milan: Bompiani, 2001.
Eder, Jens, Fotis Jannidis, and Ralf Schneider, eds., Characters in Fictional
Worlds: Understanding Imaginary Beings in Literature, Film, and Other Media.
Berlin: de Gruyter, 2010.
Edgerton, David. Shock of the Old: Technology and Global History since 1900.
Oxford: Oxford University Press, 2007.
Edison, Thomas A. “The Phonograph and Its Future.” North American Review 126.262
(1878), 527–36.
Edmonds, Bruce. “The Constructibility of Artificial Intelligence (as Defined by the
Turing Test).” Journal of Logic, Language and Information 9 (2000), 419.
Edwards, Chad, Autumn Edwards, Patric R. Spence, and Ashleigh K. Shelton. “Is That
a Bot Running the Social Media Feed? Testing the Differences in Perceptions
of Communication Quality for a Human Agent and a Bot Agent on Twitter.”
Computers in Human Behavior 33 (2014), 372–76.
Edwards, Elizabeth. “Material Beings: Objecthood and Ethnographic Photographs.”
Visual Studies 17.1 (2002), 67–75.
Edwards, Paul N. The Closed World: Computers and the Politics of Discourse in Cold War
America. Inside Technology. Cambridge, MA: MIT Press, 1996.
Ekbia, Hamid R. Artificial Dreams: The Quest for Non-biological Intelligence.
Cambridge: Cambridge University Press, 2008.
Ellis, Bill. Lucifer Ascending: The Occult in Folklore and Popular Culture.
Lexington: University Press of Kentucky, 2004.
Emerson, Lori. Reading Writing Interfaces: From the Digital to the Book Bound.
Minneapolis: University of Minnesota Press, 2014.
Enns, Anthony. “Information Theory of the Soul.” In Believing in Bits: Digital
Media and the Supernatural, edited by Simone Natale and Diana W. Pasulka
(Oxford: Oxford University Press, 2019), 37–54.
Ensmenger, Nathan. “Is Chess the Drosophila of Artificial Intelligence? A Social
History of an Algorithm.” Social Studies of Science 42.1 (2012), 5–30.
Ensmenger, Nathan. The Computer Boys Take Over: Computers, Programmers, and the
Politics of Technical Expertise. Cambridge, MA: MIT Press, 2010.
Epstein, Robert. “Can Machines Think? Computers Try to Fool Humans at the First
Annual Loebner Prize Competition Held at the Computer Museum, Boston.” AI
Magazine 13.2 (1992), 80–95.
Epstein, Robert. “From Russia, with Love.” Scientific American Mind, October 2007.
Available at https://www.scientificamerican.com/article/from-russia-with-
love/. Retrieved 29 November 2019.
Epstein, Robert. “The Quest for the Thinking Computer.” In Parsing the
Turing Test, edited by Robert Epstein, Gary Roberts, and Grace Beber
(Amsterdam: Springer, 2009), 3–12.
“Faraday on Table-Moving.” Athenaeum, 2 July 1853, 801–3.
Fassone, Riccardo. Every Game Is an Island: Endings and Extremities in Video Games.
London: Bloomsbury, 2017.
Feenberg, Andrew. Transforming Technology: A Critical Theory Revisited.
Oxford: Oxford University Press, 2002.
Ferrara, Emilio, Onur Varol, Clayton Davis, Filippo Menczer, and Alessandro
Flammini. “The Rise of Social Bots.” Communications of the ACM 59.7 (2016),
96–104.
Finn, Ed. What Algorithms Want: Imagination in the Age of Computing. Cambridge,
MA: MIT Press, 2017.
Fisher, Eran, and Yoav Mehozay. “How Algorithms See Their Audience: Media
Epistemes and the Changing Conception of the Individual.” Media, Culture &
Society 41.8 (2019), 1176–91.
Flichy, Patrice. The Internet Imaginaire. Cambridge, MA: MIT Press, 2007.
Flinders, Matthew. “The (Anti-)Politics of the General Election: Funnelling
Frustration in a Divided Democracy.” Parliamentary Affairs 71.1
(2018): 222–236.
Floridi, Luciano. “Artificial Intelligence’s New Frontier: Artificial Companions and the
Fourth Revolution.” Metaphilosophy 39 (2008), 651–55.
Floridi, Luciano. The Fourth Revolution: How the Infosphere Is Reshaping Human Reality.
Oxford: Oxford University Press, 2014.
Floridi, Luciano, Mariarosaria Taddeo, and Matteo Turilli. “Turing’s Imitation
Game: Still an Impossible Challenge for All Machines and Some Judges—An
Evaluation of the 2008 Loebner Contest.” Minds and Machines 19.1 (2009),
145–50.
Foley, Megan. “‘Prove You’re Human’: Fetishizing Material Embodiment and
Immaterial Labor in Information Networks.” Critical Studies in Media
Communication 31.5 (2014), 365–79.
Forsythe, Diane E. Studying Those Who Study Us: An Anthropologist in the World of
Artificial Intelligence. Stanford, CA: Stanford University Press, 2001.
Fortunati, Leopoldina, Anna Esposito, Giovanni Ferrin, and Michele Viel.
“Approaching Social Robots through Playfulness and Doing-It-
Yourself: Children in Action.” Cognitive Computation 6.4 (2014), 789–801.
Fortunati, Leopoldina, James E. Katz, and Raimonda Riccini, eds.
Mediating the Human Body: Technology, Communication, and Fashion.
London: Routledge, 2003.
Fortunati, Leopoldina, Anna Maria Manganelli, Filippo Cavallo, and Furio Honsell.
“You Need to Show That You Are Not a Robot.” New Media & Society 21.8
(2019), 1859–76.
Franchi, Stefano. “Chess, Games, and Flies.” Essays in Philosophy 6 (2005), 1–36.
Freedberg, David. The Power of Images: Studies in the History and Theory of Response.
Chicago: University of Chicago Press, 1989.
Friedman, Ted. “Making Sense of Software: Computer Games and Interactive
Textuality.” In Cybersociety: Computer-Mediated Communication and Community,
edited by Steve Jones (Thousand Oaks, CA: Sage, 1995), 73–89.
Gadamer, Hans-Georg. Truth and Method. London: Sheed and Ward, 1975.
Gallagher, Rob. Videogames, Identity and Digital Subjectivity. London: Routledge, 2017.
Galloway, Alexander R. Gaming: Essays on Algorithmic Culture. Minneapolis: University
of Minnesota Press, 2006.
Galloway, Alexander R. The Interface Effect. New York: Polity Press, 2012.
Gandy, Robin. “Human versus Mechanical Intelligence.” In Machines and
Thought: The Legacy of Alan Turing, edited by Peter Millican and Andy Clark
(New York: Clarendon Press, 1999), 125–36.
Garfinkel, Simson. Architects of the Information Society: 35 Years of the Laboratory for
Computer Science at MIT. Cambridge, MA: MIT Press, 1999.
Gehl, Robert W. and Maria Bakardjieva, eds. Socialbots and Their Friends: Digital Media
and the Automation of Sociality. London: Routledge, 2018.
Gell, Alfred. Art and Agency: An Anthropological Theory. Oxford: Clarendon Press, 1998.
Geller, Tom. “Overcoming the Uncanny Valley.” IEEE Computer Graphics and
Applications 28.4 (2008), 11–17.
Geoghegan, Bernard Dionysius. “Agents of History: Autonomous Agents and Crypto-
Intelligence.” Interaction Studies 9 (2008): 403–14.
Geoghegan, Bernard Dionysius. “The Cybernetic Apparatus: Media, Liberalism, and
the Reform of the Human Sciences.” PhD diss., Northwestern University, 2012.
Geoghegan, Bernard Dionysius. “Visionäre Informatik: Notizen über Vorführungen
von Automaten und Computern, 1769–1962.” Jahrbuch für Historische
Bildungsforschung 20 (2015), 177–98.
Giappone, Krista Bonello Rutter. “Self-Reflexivity and Humor in Adventure Games.”
Game Studies 15.1 (2015). Available at http://gamestudies.org/1501/articles/
bonello_k. Retrieved 7 January 2020.
Giddens, Anthony. The Consequences of Modernity. London: Wiley, 2013.
Gillmor, Dan. “Bubba Meets Microsoft: Bob, You Ain’t Gonna Like This.” San Jose
Mercury News, 6 May 1995, 1D.
Gitelman, Lisa. Always Already New: Media, History and the Data of Culture. Cambridge,
MA: MIT Press, 2006.
Gitelman, Lisa. Paper Knowledge: Toward a Media History of Documents. Durham,
NC: Duke University Press, 2014.
Gitelman, Lisa. Scripts, Grooves, and Writing Machines: Representing Technology in the
Edison Era. Stanford, CA: Stanford University Press, 1999.
Goffman, Erving. Frame Analysis: An Essay on the Organization of Experience.
Cambridge, MA: Harvard University Press, 1974.
Goldman, Eric. “Search Engine Bias and the Demise of Search Engine Utopianism.” In
Web Search: Multidisciplinary Perspectives, edited by Amanda Spink and Michael
Zimmer (Berlin: Springer, 2008), 121–33.
Golumbia, David. The Cultural Logic of Computation. Cambridge, MA: Harvard
University Press, 2009.
Gombrich, Ernst Hans. Art and Illusion: A Study in the Psychology of Pictorial
Representation. London: Phaidon, 1977.
Gong, Li, and Clifford Nass. “When a Talking-Face Computer Agent Is Half-human
and Half-humanoid: Human Identity and Consistency Preference.” Human
Communication Research 33.2 (2007), 163–93.
Gooday, Graeme. “Re-writing the ‘Book of Blots’: Critical Reflections on Histories of
Technological ‘Failure.’” History and Technology 14 (1998), 265–91.
Goode, Luke. “Life, but Not as We Know It: AI and the Popular Imagination.” Culture
Unbound: Journal of Current Cultural Research 10 (2018), 185–207.
Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning. Cambridge,
MA: MIT Press, 2016.
“Google Assistant.” Google, 2019. Available at https://assistant.google.com/.
Retrieved 12 December 2019.
Granström, Helena, and Bo Göranzon. “Turing’s Man: A Dialogue.” AI & Society 28.1
(2013), 21–25.
Grau, Oliver. Virtual Art: From Illusion to Immersion. Cambridge, MA: MIT Press, 2003.
Greenberger, Martin. “The Two Sides of Time Sharing.” Working paper, Sloan
School of Management and Project MAC, Massachusetts Institute of
Technology, 1965.
Greenfield, Adam. Radical Technologies: The Design of Everyday Life.
New York: Verso, 2017.
Grudin, Jonathan. “The Computer Reaches Out: The Historical Continuity of
Interface Design.” In Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems (Chicago: ACM, 1990), 261–68.
Grudin, Jonathan. “Turing Maturing: The Separation of Artificial Intelligence and
Human-Computer Interaction.” Interactions 13 (2006), 54–57.
Gunkel, David J. “Communication and Artificial Intelligence: Opportunities and
Challenges for the 21st Century.” Communication+1 1.1 (2012): 1–25.
Gunkel, David J. Gaming the System: Deconstructing Video Games, Games Studies, and
Virtual Worlds. Bloomington: Indiana University Press, 2018.
Gunkel, David J. An Introduction to Communication and Artificial Intelligence.
Cambridge: Polity Press, 2020.
Gunkel, David J. The Machine Question: Critical Perspectives on AI, Robots, and Ethics.
Cambridge, MA: MIT Press, 2012.
Gunkel, David J. “Other Things: AI, Robots, and Society.” In A Networked Self and
Human Augmentics, Artificial Intelligence, Sentience, edited by Zizi Papacharissi
(New York: Routledge, 2018), 51–68.
Heyer, Paul. “America under Attack I: A Reassessment of Orson Welles’ 1938 War
of the Worlds Broadcast.” Canadian Journal of Communication 28.2 (2003),
149–66.
Henrickson, Leah. “Computer-Generated Fiction in a Literary Lineage: Breaking the
Hermeneutic Contract.” Logos 29.2–3 (2018), 54–63.
Henrickson, Leah. “Tool vs. Agent: Attributing Agency to Natural Language
Generation Systems.” Digital Creativity 29.2–3 (2018): 182–90.
Henrickson, Leah. “Towards a New Sociology of the Text: The Hermeneutics of
Algorithmic Authorship.” PhD diss., Loughborough University, 2019.
Hepp, Andreas. “Artificial Companions, Social Bots and Work Bots: Communicative
Robots as Research Objects of Media and Communication Studies.” Media,
Culture and Society 42.7-8 (2020), 1410–26.
Hepp, Andreas. Deep Mediatization. London: Routledge, 2019.
Hester, Helen. “Technically Female: Women, Machines, and Hyperemployment.”
Salvage 3 (2016). Available at https://salvage.zone/in-print/technically-female-
women-machines-and-hyperemployment. Retrieved 30 December 2019.
Hicks, Marie. Programmed Inequality: How Britain Discarded Women Technologists and
Lost Its Edge in Computing. Cambridge, MA: MIT Press, 2017.
Highmore, Ben. “Machinic Magic: IBM at the 1964–1965 New York World’s Fair.” New
Formations 51.1 (2003), 128–48.
Hill, David W. “The Injuries of Platform Logistics.” Media, Culture & Society, published
online before print 21 July 2019, doi: 10.1177/0163443719861840.
Hjarvard, Stig. “The Mediatisation of Religion: Theorising Religion, Media and Social
Change.” Culture and Religion 12.2 (2011), 119–35.
Hofer, Margaret K. The Games We Played: The Golden Age of Board and Table Games.
New York: Princeton Architectural Press, 2003.
Hoffman, Donald D. The Case against Reality: How Evolution Hid the Truth from Our
Eyes. London: Penguin, 2019.
Hollings, Christopher, Ursula Martin, and Adrian Rice. Ada Lovelace: The Making of a
Computer Scientist. Oxford: Bodleian Library, 2018.
Holtgraves, T. M., Stephen J. Ross, C. R. Weywadt, and T. L. Han. “Perceiving Artificial
Social Agents.” Computers in Human Behavior 23 (2007), 2163–74.
Hookway, Branden. Interface. Cambridge, MA: MIT Press, 2014.
Hoy, Matthew B. “Alexa, Siri, Cortana, and More: An Introduction to Voice
Assistants.” Medical Reference Services Quarterly 37.1 (2018), 81–88.
Hu, Tung-Hui. A Prehistory of the Cloud. Cambridge, MA: MIT Press, 2015.
Huhtamo, Erkki. “Elephans Photographicus: Media Archaeology and the History
of Photography.” In Photography and Other Media in the Nineteenth Century,
edited by Nicoletta Leonardi and Simone Natale (University Park: Penn State
University Press, 2018), 15–35.
Huhtamo, Erkki. Illusions in Motion: Media Archaeology of the Moving Panorama and
Related Spectacles. Cambridge, MA: MIT Press, 2013.
Humphry, Justine, and Chris Chesher. “Preparing for Smart Voice Assistants: Cultural
Histories and Media Innovations.” New Media and Society, published online
before print 22 May 2020, doi: 10.1177/1461444820923679.
Humphrys, Mark. “How My Program Passed the Turing Test.” In Parsing the
Turing Test, edited by Robert Epstein, Gary Roberts, and Grace Beber
(Amsterdam: Springer, 2009), 237–60.
Hutchens, Jason L. “How to Pass the Turing Test by Cheating.” Research report.
School of Electrical, Electronic and Computer Engineering, University of
Western Australia, Perth, 1997.
Huizinga, Johan. Homo Ludens: A Study of the Play Element in Culture.
London: Maurice Temple Smith, 1970.
Hwang, Tim, Ian Pearce, and Max Nanis. “Socialbots: Voices from the Fronts.”
Interactions 19.2 (2012), 38–45.
Hyman, R. “The Psychology of Deception.” Annual Review of Psychology 40 (1989),
133–54.
Idone Cassone, Vincenzo and Mattia Thibault. “I Play, Therefore I Believe.” In Believing
in Bits: Digital Media and the Supernatural, edited by Simone Natale and Diana
W. Pasulka (Oxford: Oxford University Press, 2019), 73–90.
“I Think, Therefore I’m RAM.” Daily Telegraph, 26 December 1997, 14.
Jastrow, Joseph. Fact and Fable in Psychology. Boston: Houghton Mifflin, 1900.
Jerz, Dennis G. “Somewhere Nearby Is Colossal Cave: Examining Will Crowther’s
Original ‘Adventure’ in Code and in Kentucky.” Digital Humanities Quarterly 1.2
(2007). Available at http://www.digitalhumanities.org/dhq/vol/1/2/000009/
000009.html. Retrieved 7 January 2020.
Johnson, Steven. Wonderland: How Play Made the Modern World. London: Pan
Macmillan, 2016.
Johnston, John. The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI.
Cambridge, MA: MIT Press, 2008.
Jones, Paul. “The Technology Is Not the Cultural Form? Raymond Williams’s
Sociological Critique of Marshall McLuhan.” Canadian Journal of Communication
23.4 (1998), 423–46.
Jones, Steve. “How I Learned to Stop Worrying and Love the Bots.” Social Media and
Society 1 (2015), 1–2.
Jørgensen, Kristine. Gameworld Interfaces. Cambridge, MA: MIT Press, 2013.
Juul, Jesper. Half-Real: Video Games between Real Rules and Fictional Worlds.
Cambridge, MA: MIT Press, 2011.
Karppi, Tero. Disconnect: Facebook’s Affective Bonds. Minneapolis: University of
Minnesota Press, 2018.
Katzenbach, Christian, and Lena Ulbricht. “Algorithmic governance.” Internet Policy
Review 8.4 (2019). Available at https://policyreview.info/concepts/algorithmic-
governance. Retrieved 10 November 2020.
Kelion, Leo. “Amazon Alexa Gets Samuel L Jackson and Celebrity Voices.” BBC News,
25 September 2019. Available at https://www.bbc.co.uk/news/technology-
49829391. Retrieved 12 December 2019.
Kelleher, John D. Deep Learning. Cambridge, MA: MIT Press, 2019.
Kim, Youjeong, and S. Shyam Sundar. “Anthropomorphism of Computers: Is It
Mindful or Mindless?” Computers in Human Behavior 28.1 (2012), 241–50.
King, William Joseph. “Anthropomorphic Agents: Friend, Foe, or Folly.” HITL
Technical Memorandum M-95-1 (1995). Available at http://citeseerx.ist.psu.
edu/viewdoc/download?doi=10.1.1.57.3474&rep=rep1&type=pdf. Retrieved 10
November 2020.
Kirschenbaum, Matthew G. Mechanisms: New Media and the Forensic Imagination.
Cambridge, MA: MIT Press, 2008.
Kittler, Friedrich. Gramophone, Film, Typewriter. Stanford, CA: Stanford University
Press, 1999.
Kline, Ronald R. “Cybernetics, Automata Studies, and the Dartmouth Conference on
Artificial Intelligence.” IEEE Annals of the History of Computing 33 (2011), 5–16.
Kline, Ronald R. The Cybernetics Moment: Or Why We Call Our Age the Information Age.
Baltimore: Johns Hopkins University Press, 2015.
Kohler, Robert. Lords of the Fly: Drosophila Genetics and the Experimental Life.
Chicago: University of Chicago Press, 1994.
Kocurek, Carly A. “The Agony and the Exidy: A History of Video Game Violence
and the Legacy of Death Race.” Game Studies 12.1 (2012). Available at http://
gamestudies.org/1201/articles/carly_kocurek. Retrieved 10 February 2020.
Korn, James H. Illusions of Reality: A History of Deception in Social Psychology.
Albany: State University of New York Press, 1997.
Krajewski, Markus. The Server: A Media History from the Present to the Baroque. New
Haven, CT: Yale University Press, 2018.
Kris, Ernst and Otto Kurz. Legend, Myth, and Magic in the Image of the Artist: A
Historical Experiment. New Haven, CT: Yale University Press, 1979.
Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology.
London: Penguin, 2005.
Laing, Dave. “A Voice without a Face: Popular Music and the Phonograph in the
1890s.” Popular Music 10.1 (1991), 1–9.
Lakoff, George, and Mark Johnson. Metaphors We Live By. Chicago: University of
Chicago Press, 1980.
Lamont, Peter. Extraordinary Beliefs: A Historical Approach to a Psychological Problem.
Cambridge: Cambridge University Press, 2013.
Langer, Ellen J. “Matters of Mind: Mindfulness/Mindlessness in Perspective.”
Consciousness and Cognition 1.3 (1992), 289–305.
Lanier, Jaron. You Are Not a Gadget. London: Penguin Books, 2011.
Lankoski, Petri. “Player Character Engagement in Computer Games.” Games and
Culture 6 (2011), 291–311.
Latour, Bruno. Pandora’s Hope: Essays on the Reality of Science Studies. Cambridge,
MA: Harvard University Press, 1999.
Latour, Bruno. The Pasteurization of France. Cambridge, MA: Harvard University
Press, 1993.
Latour, Bruno. We Have Never Been Modern. Cambridge, MA: Harvard University
Press, 1993.
Laurel, Brenda. Computers as Theatre. Upper Saddle River, NJ: Addison-Wesley, 2013.
Laurel, Brenda. “Interface Agents: Metaphors with Character.” In Human Values
and the Design of Computer Technology, edited by Batya Friedman (Stanford,
CA: CSLI, 1997), 207–19.
Lee, Kwan Min. “Presence, Explicated.” Communication Theory 14.1 (2004), 27–50.
Leeder, Murray. The Modern Supernatural and the Beginnings of Cinema. Basingstoke,
UK: Palgrave Macmillan, 2017.
Leff, Harvey S., and Andrew F. Rex, eds. Maxwell’s Demon: Entropy, Information,
Computing. Princeton, NJ: Princeton University Press, 2014.
Leja, Michael. Looking Askance: Skepticism and American Art from Eakins to Duchamp.
Berkeley: University of California Press, 2004.
Leonard, Andrew. Bots: The Origin of a New Species. San Francisco: HardWired, 1997.
Lesage, Frédérik. “A Cultural Biography of Application Software.” In Advancing
Media Production Research: Shifting Sites, Methods, and Politics, edited by Chris
Paterson, D. Lee, A. Saha, and A. Zoellner (London: Palgrave, 2015), 217–32.
Luka Inc. “Replika.” Available at https://replika.ai/. Retrieved 30 December 2019.
Łupkowski, Paweł, and Aleksandra Rybacka. “Non-cooperative Strategies of Players in
the Loebner Contest.” Organon F 23.3 (2016), 324–65.
MacArthur, Emily. “The iPhone Erfahrung: Siri, the Auditory Unconscious, and Walter
Benjamin’s Aura.” In Design, Mediation, and the Posthuman, edited by Dennis M.
Weiss, Amy D. Propen, and Colbey Emmerson Reid (Lanham, MD: Lexington
Books, 2014), 113–27.
Mackenzie, Adrian. “The Performativity of Code: Software and Cultures of
Circulation.” Theory, Culture & Society 22 (2005), 71–92.
Mackinnon, Lee. “Artificial Stupidity and the End of Men.” Third Text 31.5–6 (2017),
603–17.
Magid, Lawrence. “Microsoft Bob: No Second Chance to Make a First Impression.”
Washington Post, 16 January 1995, F18.
Mahon, James Edwin. “The Definition of Lying and Deception.” In The Stanford
Encyclopedia of Philosophy, edited by Edward N. Zalta. 2015. Available at
https://plato.stanford.edu/archives/win2016/entries/lying-definition/.
Retrieved 15 July 2020.
Mahoney, Michael S. “What Makes the History of Software Hard.” IEEE Annals of the
History of Computing 30.3 (2008), 8–18.
Malin, Brenton J. Feeling Mediated: A History of Media Technology and Emotion in
America. New York: New York University Press, 2014.
Manes, Stephen. “Bob: Your New Best Friend’s Personality Quirks.” New York Times,
17 January 1995, C8.
Manning, Christopher D., Prabhakar Raghavan, and Hinrich Schütze. Introduction to
Information Retrieval. Cambridge: Cambridge University Press, 2008.
Manon, Hugh S. “Seeing through Seeing through: The Trompe l’oeil Effect and
Bodily Difference in the Cinema of Tod Browning.” Framework 47.1
(2006), 60–82.
Manovich, Lev. “How to Follow Software Users.” Available at http://manovich.net/
content/04-projects/075-how-to-follow-software-users/72_article_2012.pdf.
Retrieved 10 February 2020.
Manovich, Lev. The Language of New Media. Cambridge, MA: MIT Press, 2002.
Marenko, Betti, and Philip Van Allen. “Animistic Design: How to Reimagine Digital
Interaction between the Human and the Nonhuman.” Digital Creativity 27.1
(2016): 52–70.
Marino, Mark C. “I, Chatbot: The Gender and Race Performativity of Conversational
Agents.” PhD diss., University of California Riverside, 2006.
Markoff, John. “Can Machines Think? Humans Match Wits.” New York Times, 8
November 1991, 1.
Markoff, John. “So Who’s Talking: Human or Machine?” New York Times, 5 November
1991, C1.
Markoff, John. “Theaters of High Tech.” New York Times, 12 January 1992, 15.
Martin, Clancy W., ed. The Philosophy of Deception. Oxford: Oxford University
Press, 2009.
Martin, C. Dianne. “The Myth of the Awesome Thinking Machine.” Communications of
the ACM 36 (1993): 120–33.
Mauldin, Michael L. “ChatterBots, TinyMuds, and the Turing Test: Entering the
Loebner Prize Competition.” Proceedings of the National Conference on Artificial
Intelligence 1 (1994), 16–21.
McCarthy, John. “Information.” Scientific American 215.3 (1966), 64–72.
McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and
Prospects of Artificial Intelligence. San Francisco: Freeman, 1979.
McCracken, Harry. “The Bob Chronicles.” Technologizer, 29 March 2010. Available at
https://www.technologizer.com/2010/03/29/microsoft-bob/. Retrieved 19
November 2019.
McCulloch, Warren, and Walter Pitts. “A Logical Calculus of the Ideas Immanent in
Nervous Activity.” Bulletin of Mathematical Biophysics 5 (1943): 115–33.
McKee, Heidi. Professional Communication and Network Interaction: A Rhetorical and
Ethical Approach. London: Routledge, 2017.
McKelvey, Fenwick. Internet Daemons: Digital Communications Possessed.
Minneapolis: University of Minnesota Press, 2018.
McLean, Graeme, and Kofi Osei-Frimpong. “Hey Alexa . . . Examine the Variables
Influencing the Use of Artificial Intelligent In-home Voice Assistants.”
Computers in Human Behavior 99 (2019), 28–37.
McLuhan, Marshall. Understanding Media: The Extensions of Man.
Toronto: McGraw-Hill, 1964.
Meadow, Charles T. Man-Machine Communication. New York: Wiley, 1970.
Messeri, Lisa, and Janet Vertesi. “The Greatest Missions Never Flown: Anticipatory
Discourse and the Projectory in Technological Communities.” Technology and
Culture 56.1 (2015), 54–85.
“Microsoft Bob.” Toastytech.com. Available at http://toastytech.com/guis/bob.html.
Retrieved 19 November 2019.
“Microsoft Bob Comes Home: A Breakthrough in Home Computing.” PR Newswire
Association, 7 January 1995, 11:01 ET.
Miller, Kiri. “Grove Street Grimm: Grand Theft Auto and Digital Folklore.” Journal of
American Folklore 121.481 (2008), 255–85.
Mindell, David. Between Human and Machine: Feedback, Control, and Computing before
Cybernetics. Baltimore: Johns Hopkins University Press, 2002.
Minsky, Marvin. “Artificial Intelligence.” Scientific American 215 (1966), 246–60.
Minsky, Marvin. “Problems of Formulation for Artificial Intelligence.” Proceedings of
Symposia in Applied Mathematics 14 (1962), 35–46.
Minsky, Marvin, ed. Semantic Information Processing. Cambridge, MA: MIT
Press, 1968.
Minsky, Marvin. The Society of Mind. New York: Simon and Schuster, 1986.
Minsky, Marvin. “Some Methods of Artificial Intelligence and Heuristic
Programming.” Proceeding of the Symposium on the Mechanization of Thought
Processes 1 (1959), 3–25.
Minsky, Marvin. “Steps toward Artificial Intelligence.” Proceedings of the IRE 49.1
(1961), 8–30.
Monroe, John Warne. Laboratories of Faith: Mesmerism, Spiritism, and Occultism in
Modern France. Ithaca, NY: Cornell University Press, 2008.
Montfort, Nick. “Zork.” In Space Time Play, edited by Friedrich von Borries, Steffen P.
Walz, and Matthias Böttger (Basel, Switzerland: Birkhäuser, 2007), 64–65.
Moor, James H, ed. The Turing Test: The Elusive Standard of Artificial Intelligence.
Dordrecht, Netherlands: Kluwer Academic, 2003.
Moore, Matthew. “Alexa, Why Are You a Bleeding-Heart Liberal?” Times (London), 12
December 2017. Available at https://www.thetimes.co.uk/article/8869551e-
dea5-11e7-872d-4b5e82b139be. Retrieved 15 December 2019.
Moore, Phoebe. “The Mirror for (Artificial) Intelligence in Capitalism.” Comparative
Labour Law and Policy Journal 44.2 (2020), 191–200.
Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence. Cambridge,
MA: Harvard University Press, 1988.
Mori, Masahiro. “The Uncanny Valley.” IEEE Spectrum, 12 June 2012. Available at
https://spectrum.ieee.org/automaton/robotics/humanoids/the-uncanny-
valley. Retrieved 9 November 2020.
Morus, Iwan Rhys. Frankenstein’s Children: Electricity, Exhibition, and Experiment
in Early-Nineteenth-Century London. Princeton, NJ: Princeton University
Press, 1998.
Mosco, Vincent. The Digital Sublime: Myth, Power, and Cyberspace. Cambridge,
MA: MIT Press, 2004.
Müggenburg, Jan. “Lebende Prototypen und lebhafte Artefakte. Die (Un-)
Gewissheiten Der Bionik.” Ilinx—Berliner Beiträge Zur Kulturwissenschaft 2
(2011), 1–21.
Muhle, Florian. “Embodied Conversational Agents as Social Actors? Sociological
Considerations in the Change of Human-Machine Relations in Online
Environments.” In Socialbots and Their Friends: Digital Media and the
Automation of Sociality, edited by Robert W. Gehl and Maria Bakardjieva
(London: Routledge, 2017), 86–109.
Mühlhoff, Rainer. “Human-Aided Artificial Intelligence: Or, How to Run Large
Computations in Human Brains? Toward a Media Sociology of Machine
Learning.” New Media and Society, published online before print 6 November
2019, doi: 10.1177/1461444819885334.
Münsterberg, Hugo. American Problems from the Point of View of a Psychologist.
New York: Moffat, 1910.
Münsterberg, Hugo. The Film: A Psychological Study. New York: Dover, 1970.
Murray, Janet H. Hamlet on the Holodeck: The Future of Narrative in Cyberspace.
Cambridge, MA: MIT Press, 1998.
Musès, Charles, ed. Aspects of the Theory of Artificial Intelligence: Proceedings.
New York: Plenum Press, 1962.
Nadis, Fred. Wonder Shows: Performing Science, Magic, and Religion in America. New
Brunswick, NJ: Rutgers University Press, 2005.
Nagel, Thomas. “What Is It Like to Be a Bat?” Philosophical Review 83.4 (1974),
435–50.
Nagy, Peter, and Gina Neff. “Imagined Affordance: Reconstructing a Keyword for
Communication Theory.” Social Media and Society 1.2 (2015), doi: 10.1177/
2056305115603385.
Nass, Clifford, and Scott Brave. Wired for Speech: How Voice Activates and Advances the
Human-Computer Relationship. Cambridge, MA: MIT Press, 2005.
Nass, Clifford, and Youngme Moon. “Machines and Mindlessness: Social Responses to
Computers.” Journal of Social Issues 56.1 (2000), 81–103.
Natale, Simone. “All That’s Liquid.” New Formations 91 (2017), 121–23.
Natale, Simone. “Amazon Can Read Your Mind: A Media Archaeology of the
Algorithmic Imaginary.” In Believing in Bits: Digital Media and the Supernatural,
edited by Simone Natale and Diana Pasulka (Oxford: Oxford University Press,
2019), 19–36.
Natale, Simone. “The Cinema of Exposure: Spiritualist Exposés, Technology, and the
Dispositif of Early Cinema.” Recherches Sémiotiques/Semiotic Inquiry 31.1 (2011),
101–17.
Natale, Simone. “Communicating through or Communicating with: Approaching
Artificial Intelligence from a Communication and Media Studies Perspective.”
North, Dan. “Magic and Illusion in Early Cinema.” Studies in French Cinema 1
(2001), 70–79.
“Number of Digital Voice Assistants in Use Worldwide from 2019 to 2023.” Statista,
14 November 2019. Available at https://www.statista.com/statistics/973815/
worldwide-digital-voice-assistant-in-use/. Retrieved 10 February 2020.
Oettinger, Anthony G. “The Uses of Computers in Science.” Scientific American 215.3
(1966), 160–72.
O’Leary, Daniel E. “Google’s Duplex: Pretending to Be Human.” Intelligent Systems in
Accounting, Finance and Management 26.1 (2019), 46–53.
Olson, Christi, and Kelly Kemery. “From Answers to Action: Customer Adoption
of Voice Technology and Digital Assistants.” Microsoft Voice Report, 2019.
Available at https://about.ads.microsoft.com/en-us/insights/2019-voice-
report. Retrieved 20 December 2019.
Ortoleva, Peppino. Mediastoria. Milan: Net, 2002.
Ortoleva, Peppino. Miti a bassa intensità. Turin: Einaudi, 2019.
Ortoleva, Peppino. “Modern Mythologies, the Media and the Social Presence of
Technology.” Observatorio (OBS) Journal 3 (2009), 1–12.
Ortoleva, Peppino. “Vite Geniali: Sulle biografie aneddotiche degli inventori.”
Intersezioni 1 (1996), 41–61.
Papacharissi, Zizi, ed. A Networked Self and Human Augmentics, Artificial Intelligence,
Sentience. New York: Routledge, 2019.
Parikka, Jussi. What Is Media Archaeology? Cambridge: Polity Press, 2012.
Parisi, David. Archaeologies of Touch: Interfacing with Haptics from Electricity to
Computing. Minneapolis: University of Minnesota Press, 2018.
Park, David W., Nick Jankowski, and Steve Jones, eds. The Long History of New
Media: Technology, Historiography, and Contextualizing Newness. New York: Peter
Lang, 2011.
Pask, Gordon. “A Discussion of Artificial Intelligence and Self-Organization.” Advances
in Computers 5 (1964), 109–226.
Peters, Benjamin. How Not to Network a Nation: The Uneasy History of the Soviet
Internet. Cambridge, MA: MIT Press, 2016.
Peters, John Durham. The Marvelous Cloud: Towards a Philosophy of Elemental Media.
Chicago: University of Chicago Press, 2015.
Peters, John Durham. Speaking into the Air: A History of the Idea of Communication.
Chicago: University of Chicago Press, 1999.
Pettit, Michael. The Science of Deception: Psychology and Commerce in America.
Chicago: University of Chicago Press, 2013.
Phan, Thao. “The Materiality of the Digital and the Gendered Voice of Siri.”
Transformations 29 (2017), 23–33.
Picard, Rosalind W. Affective Computing. Cambridge, MA: MIT Press, 2000.
Picker, John M. “The Victorian Aura of the Recorded Voice.” New Literary History 32.3
(2001), 769–86.
Pickering, Michael. Stereotyping: The Politics of Representation. Basingstoke,
UK: Palgrave, 2001.
Pieraccini, Roberto. The Voice in the Machine: Building Computers That Understand
Speech. Cambridge, MA: MIT Press, 2012.
Poe, Edgar Allan. The Raven; with, The Philosophy of Composition. Wakefield, RI: Moyer
Bell, 1996.
Pollini, Alessandro. “A Theoretical Perspective on Social Agency.” AI & Society 24.2
(2009), 165–71.
Pooley, Jefferson, and Michael J. Socolow. “War of the Words: The Invasion from
Mars and Its Legacy for Mass Communication Scholarship.” In War of the
Worlds to Social Media: Mediated Communication in Times of Crisis, edited by Joy
Hayes, Kathleen Battles, and Wendy Hilton-Morrow (New York: Peter Lang,
2013), 35–56.
Porcheron, Martin, Joel E. Fischer, Stuart Reeves, and Sarah Sharples. “Voice
Interfaces in Everyday Life.” CHI '18: Proceedings of the 2018 CHI Conference on
Human Factors in Computing Systems (2018), 1–12.
Powers, David M. W., and Christopher C. R. Turk. Machine Learning of Natural
Language. London: Springer-Verlag, 1989.
Pruijt, Hans. “Social Interaction with Computers: An Interpretation of Weizenbaum’s
ELIZA and Her Heritage.” Social Science Computer Review 24.4 (2006), 517–19.
Rabiner, Lawrence R., and Ronald W. Schafer. “Introduction to Digital Speech
Processing.” Foundations and Trends in Signal Processing 1.1–2 (2007), 1–194.
Rasskin-Gutman, Diego. Chess Metaphors: Artificial Intelligence and the Human Mind.
Cambridge, MA: MIT Press, 2009.
Reeves, Byron, and Clifford Nass. The Media Equation: How People Treat Computers,
Television, and New Media like Real People and Places. Stanford, CA: CSLI, 1996.
Rhee, Jennifer. “Beyond the Uncanny Valley: Masahiro Mori and Philip K. Dick’s Do
Androids Dream of Electric Sheep?” Configurations 21.3 (2013), 301–29.
Rhee, Jennifer. “Misidentification’s Promise: The Turing Test in Weizenbaum, Powers,
and Short.” Postmodern Culture 20.3 (2010). Available online at https://muse.
jhu.edu/article/444706. Retrieved 8 January 2020.
Riskin, Jessica. “The Defecating Duck, or, the Ambiguous Origins of Artificial Life.”
Critical Inquiry 29.4 (2003), 599–633.
Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. Upper
Saddle River, NJ: Pearson Education, 2002.
Rutschmann, Ronja, and Alex Wiegmann. “No Need for an Intention to Deceive?
Challenging the Traditional Definition of Lying.” Philosophical Psychology 30.4
(2017), 438–57.
“Samuel L. Jackson—Celebrity Voice for Alexa.” Amazon.com, N.d. Available
at https://www.amazon.com/Samuel-L-Jackson-celebrity-voice/dp/
B07WS3HN5Q. Retrieved 12 December 2019.
Samuel, Arthur L. “Some Studies in Machine Learning Using the Game of Checkers.”
IBM Journal of Research and Development 3 (1959), 210–29.
Saygin, Ayse Pinar, Ilyas Cicekli, and Varol Akman. “Turing Test: 50 Years Later.”
Minds and Machines 10 (2000), 463–518.
Schank, Roger C. Tell Me a Story: Narrative and Intelligence. Evanston,
IL: Northwestern University Press, 1995.
Schank, Roger C., and Robert P. Abelson. Scripts, Plans, Goals, and Understanding: An
Inquiry into Human Knowledge Structures. Hillsdale, NJ: Erlbaum, 1977.
Schiaffonati, Viola. Robot, Computer ed Esperimenti. Milano, Italy: Meltemi, 2020.
Scolari, Carlos A. Las leyes de la interfaz. Barcelona: Gedisa, 2018.
Sconce, Jeffrey. The Technical Delusion: Electronics, Power, Insanity. Durham, NC: Duke
University Press, 2019.
Sconce, Jeffrey. Haunted Media: Electronic Presence from Telegraphy to Television.
Durham, NC: Duke University Press, 2000.
Schuetzler, Ryan M., G. Mark Grimes, and Justin Scott Giboney. “The Effect of
Conversational Agent Skill on User Behavior during Deception.” Computers in
Human Behavior 97 (2019), 250–59.
Schulte, Stephanie Ricker. Cached: Decoding the Internet in Global Popular Culture.
New York: New York University Press, 2013.
Schüttpelz, Erhard. “Get the Message Through: From the Channel of Communication
to the Message of the Medium (1945–1960).” In Media, Culture, and Mediality.
New Insights into the Current State of Research, edited by Ludwig Jäger, Erika
Linz, and Irmela Schneider (Bielefeld, Germany: Transcript, 2010), 109–38.
Searle, John R. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3.3
(1980), 417–57.
Shannon, Claude. “The Mathematical Theory of Communication.” In The Mathematical
Theory of Communication, edited by Claude Elwood Shannon and Warren
Weaver (Urbana: University of Illinois Press, 1949), 29–125.
Shaw, Bertrand. Pygmalion. New York: Brentano, 1916.
Shieber, Stuart, ed. The Turing Test: Verbal Behavior as the Hallmark of Intelligence.
Cambridge, MA: MIT Press, 2004.
Shieber, Stuart. “Lessons from a Restricted Turing Test.” Communications of the
Association for Computing Machinery 37.6 (1994), 70–78.
Shrager, Jeff. “The Genealogy of Eliza.” Elizagen.org, date unknown. Available at
http://elizagen.org/. Retrieved 10 February 2020.
Siegel, Michael Steven. “Persuasive Robotics: How Robots Change Our Minds.” PhD
diss., Massachusetts Institute of Technology, 2009.
Simon, Bart. “Beyond Cyberspatial Flaneurie: On the Analytic Potential of Living with
Digital Games.” Games and Culture 1.1 (2006), 62–67.
Simon, Herbert. “Reflections on Time Sharing from a User’s Point of View.” Computer
Science Research Review 45 (1966): 31–48.
Sirois-Trahan, Jean-Pierre. “Mythes et limites du train-qui-fonce-sur-les-spectateurs.”
In Limina: Le Soglie Del Film, edited by Veronica Innocenti and Valentina Re
(Udine, Italy: Forum, 2004), 203–16.
Smith, Gary. The AI Delusion. Oxford: Oxford University Press, 2018.
Smith, Merritt Roe, and Leo Marx, eds. Does Technology Drive History? The Dilemma of
Technological Determinism. Cambridge, MA: MIT Press, 1994.
Smith, Rebecca M. “Microsoft Bob to Have Little Steam, Analysts Say.” Computer
Retail Week 5.94 (1995), 37.
Sobchack, Vivian. “Science Fiction Film and the Technological Imagination.” In
Technological Visions: The Hopes and Fears That Shape New Technologies,
edited by Marita Sturken, Douglas Thomas, and Sandra Ball-Rokeach
(Philadelphia: Temple University Press, 2004), 145–58.
Solomon, Matthew. Disappearing Tricks: Silent Film, Houdini, and the New Magic of the
Twentieth Century. Urbana: University of Illinois Press, 2010.
Solomon, Robert C. “Self, Deception, and Self-Deception in Philosophy.” In The
Philosophy of Deception, edited by Clancy W. Martin (Oxford: Oxford University
Press, 2009), 15–36.
Soni, Jimmy, and Rob Goodman. A Mind at Play: How Claude Shannon Invented the
Information Age. New York: Simon and Schuster, 2017.
Sonnevend, Julia. Stories without Borders: The Berlin Wall and the Making of a Global
Iconic Event. New York: Oxford University Press, 2016.
Sontag, Susan. On Photography. New York: Anchor Books, 1990.
Sproull, Lee, Mani Subramani, Sara Kiesler, Janet H. Walker, and Keith Waters.
“When the Interface Is a Face.” Human–Computer Interaction 11.2 (1996),
97–124.
Spufford, Francis, and Jennifer S. Uglow. Cultural Babbage: Technology, Time and
Invention. London: Faber, 1996.
Stanyer, James, and Sabina Mihelj. “Taking Time Seriously? Theorizing and
Researching Change in Communication and Media Studies.” Journal of
Communication 66.2 (2016), 266–79.
Steinel, Wolfgang, and Carsten K. W. De Dreu. “Social Motives and Strategic
Misrepresentation in Social Decision Making.” Journal of Personality and Social
Psychology 86.3 (2004), 419–34.
Sterne, Jonathan. The Audible Past: Cultural Origins of Sound Reproduction. Durham,
NC: Duke University Press, 2003.
Sterne, Jonathan. MP3: The Meaning of a Format. Durham, NC: Duke University
Press, 2012.
Stokoe, Elizabeth, Rein Ove Sikveland, Saul Albert, Magnus Hamann, and William
Housley. “Can Humans Simulate Talking Like Other Humans? Comparing
Simulated Clients to Real Customers in Service Inquiries.” Discourse Studies
22.1 (2020), 87–109.
Stork, David G., ed. HAL’s Legacy: 2001’s Computer as Dream and Reality. Cambridge,
MA: MIT Press, 1997.
Streeter, Thomas. The Net Effect: Romanticism, Capitalism, and the Internet.
New York: New York University Press, 2010.
Stroda, Una. “Siri, Tell Me a Joke: Is There Laughter in a Transhuman Future?” In
Spiritualities, Ethics, and Implications of Human Enhancement and Artificial
Intelligence, edited by Christopher Hrynkow (Wilmington, DE: Vernon Press,
2020), 69–85.
Suchman, Lucy. Human-Machine Reconfigurations: Plans and Situated Actions.
Cambridge: Cambridge University Press, 2007.
Suchman, Lucy. Plans and Situated Actions: The Problem of Human-Machine
Communication. Cambridge: Cambridge University Press, 1987.
Sussman, Mark. “Performing the Intelligent Machine: Deception and Enchantment
in the Life of the Automaton Chess Player.” TDR/The Drama Review 43.3
(1999), 81–96.
Sweeney, Miriam E. “Digital Assistants.” In Uncertain Archives: Critical Keywords for
Big Data, edited by Nanna Bonde Thylstrup, Daniela Agostinho, Annie Ring,
Catherine D’Ignazio, and Kristin Veel (Cambridge, MA: MIT Press, 2020).
Pre-print available at https://ir.ua.edu/handle/123456789/6348. Retrieved 7
November 2020.
Sweeney, Miriam E. “Not Just a Pretty (Inter)face: A Critical Analysis of Microsoft’s
‘Ms. Dewey.’ ” PhD diss., University of Illinois at Urbana-Champaign,
2013.
Tavinor, Grant. “Videogames and Interactive Fiction.” Philosophy and Literature 29.1
(2005), 24–40.
Thibault, Ghislain. “The Automatization of Nikola Tesla: Thinking Invention in the
Late Nineteenth Century.” Configurations 21.1 (2013), 27–52.
Thorson, Kjerstin, and Chris Wells. “Curated Flows: A Framework for Mapping Media
Exposure in the Digital Age.” Communication Theory 26 (2016), 309–28.
Tognazzini, Bruce. “Principles, Techniques, and Ethics of Stage Magic and Their
Application to Human Interface Design.” In CHI ’93: Proceedings of the
INTERACT ’93 and CHI ’93 Conference on Human Factors in Computing Systems
(1993), 355–62.
Torrance, Thomas F. The Christian Doctrine of God, One Being Three Persons.
London: Bloomsbury, 2016.
Towns, Armond R. “Toward a Black Media Philosophy.” Cultural Studies, published
online before print 13 July 2020, doi: 10.1080/09502386.2020.1792524.
Treré, Emiliano. Hybrid Media Activism: Ecologies, Imaginaries, Algorithms.
London: Routledge, 2018.
Triplett, Norman. “The Psychology of Conjuring Deceptions.” American Journal of
Psychology 11.4 (1900), 439–510.
Trower, Tandy. “Bob and Beyond: A Microsoft Insider Remembers.” Technologizer, 29
March 2010. Available at https://www.technologizer.com/2010/03/29/bob-
and-beyond-a-microsoft-insider-remembers. Retrieved 19 November 2019.
Trudel, Dominique. “L’abandon du projet de construction de la Tour Lumière
Cybernétique de La Défense.” Le Temps des médias 1 (2017), 235–50.
Turing, Alan. “Computing Machinery and Intelligence.” Mind 59.236 (1950), 433–60.
Turing, Alan. “Lecture on the Automatic Computing Engine” (1947). In The Essential
Turing, edited by Jack Copeland (Oxford: Oxford University Press, 2004), 394.
Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each
Other. New York: Basic Books, 2011.
Turkle, Sherry, ed. Evocative Objects: Things We Think With. Cambridge, MA: MIT
Press, 2007.
Turkle, Sherry. Life on the Screen: Identity in the Age of the Internet.
New York: Weidenfeld and Nicolson, 1995.
Turkle, Sherry. Reclaiming Conversation: The Power of Talk in a Digital Age.
London: Penguin, 2015.
Turkle, Sherry. The Second Self: Computers and the Human Spirit. Cambridge, MA: MIT
Press, 2005.
Turner, Fred. From Counterculture to Cyberculture: Stewart Brand, the Whole Earth
Network, and the Rise of Digital Utopianism. Chicago: University of Chicago
Press, 2006.
“The 24 Funniest Siri Answers That You Can Test with Your iPhone.” Justsomething.
co, N.d. Available at http://justsomething.co/the-24-funniest-siri-answers-that-
you-can-test-with-your-iphone/. Retrieved 18 May 2018.
Uttal, William. Real-Time Computers. New York: Harper and Row, 1968.
Vaccari, Cristian, and Andrew Chadwick. “Deepfakes and Disinformation: Exploring
the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in
News.” Social Media + Society (forthcoming).
Vaidhyanathan, Siva. The Googlization of Everything: (And Why We Should Worry).
Berkeley: University of California Press, 2011.
Vara, Clara Fernández. “The Secret of Monkey Island: Playing between Cultures.”
In Well Played 1.0: Video Games, Value and Meaning, edited by Drew Davidson
(Pittsburgh: ETC Press, 2010), 331–52.
Villa-Nicholas, Melissa, and Miriam E. Sweeney. “Designing the ‘Good Citizen’
through Latina Identity in USCIS’s Virtual Assistant ‘Emma.’” Feminist Media
Studies (2019), 1–17.
Vincent, James. “Inside Amazon’s $3.5 Million Competition to Make Alexa Chat
Like a Human.” Verge, 13 June 2018. Available at https://www.theverge.com/
2018/6/13/17453994/amazon-alexa-prize-2018-competition-conversational-ai-
chatbots. Retrieved 12 January 2020.
Von Hippel, William, and Robert Trivers. “The Evolution and Psychology of Self-
Deception.” Behavioral and Brain Sciences 34.1 (2011), 1–16.
Wahrman, Dror. Mr Collier’s Letter Racks: A Tale of Art and Illusion at the Threshold of
the Information Age. Oxford: Oxford University Press, 2012.
Wallace, Richard S. “The Anatomy of A.L.I.C.E.” In Parsing the Turing Test, edited
by Robert Epstein, Gary Roberts, and Grace Beber (Amsterdam: Springer, 2009),
181–210.
Walsh, Toby. Android Dreams: The Past, Present and Future of Artificial Intelligence.
Oxford: Oxford University Press, 2017.
Wardrip-Fruin, Noah. Expressive Processing: Digital Fictions, Computer Games, and
Software Studies. Cambridge, MA: MIT Press, 2009.
Warner, Jack. “Microsoft Bob Holds Hands with PC Novices, Like It or Not.” Austin
American-Statesman, 29 April 1995, D4.
Warwick, Kevin, and Huma Shah. Turing’s Imitation Game. Cambridge: Cambridge
University Press, 2016.
Watt, William C. “Habitability.” American Documentation 19.3 (1968), 338–51.
Weil, Peggy. “Seriously Writing SIRI.” Hyperrhiz: New Media Cultures 11 (2015).
Available at http://hyperrhiz.io/hyperrhiz11/essays/seriously-writing-siri.
html. Retrieved 29 November 2019.
Weizenbaum, Joseph. Computer Power and Human Reason. New York: Freeman, 1976.
Weizenbaum, Joseph. “Contextual Understanding by Computers.” Communications of
the ACM 10.8 (1967), 474–80.
Weizenbaum, Joseph. “ELIZA: A Computer Program for the Study of Natural
Language Communication between Man and Machine.” Communications of the
ACM 9.1 (1966), 36–45.
Weizenbaum, Joseph. “How to Make a Computer Appear Intelligent.” Datamation 7
(1961), 24–26.
Weizenbaum, Joseph. Islands in the Cyberstream: Seeking Havens of Reason in a
Programmed Society. Duluth, MN: Litwin Books, 2015.
Weizenbaum, Joseph. “Letters: Computer Capabilities.” New York Times, 21 March
1976, 201.
Weizenbaum, Joseph. “On the Impact of the Computer on Society: How Does One
Insult a Machine?” Science 176 (1972), 40–42.
Weizenbaum, Joseph. “The Tyranny of Survival: The Need for a Science of Limits.”
New York Times, 3 March 1974, 425.
West, Emily. “Amazon: Surveillance as a Service.” Surveillance & Society 17
(2019), 27–33.
West, Mark, Rebecca Kraut, and Han Ei Chew. I’d Blush If I Could: Closing Gender
Divides in Digital Skills through Education. UNESCO, 2019.
Whalen, Thomas. “Thom’s Participation in the Loebner Competition 1995: Or How
I Lost the Contest and Re-Evaluated Humanity.” Available at http://hps.elte.
hu/~gk/Loebner/story95.htm. Retrieved 27 November 2019.
Whitby, Blay. “Professionalism and AI.” Artificial Intelligence Review 2 (1988), 133–39.
Whitby, Blay. “Sometimes It’s Hard to Be a Robot: A Call for Action on the Ethics of
Abusing Artificial Agents.” Interacting with Computers 20 (2008), 326–33.
Whitby, Blay. “The Turing Test: AI’s Biggest Blind Alley?” In Machines and Thought: The
Legacy of Alan Turing, edited by Peter J. R. Millican and Andy Clark
(Oxford: Clarendon Press, 1996), 53–62.
Wiener, Norbert. Cybernetics, or Control and Communication in the Animal and the
Machine. New York: Wiley, 1948.
Wiener, Norbert. God & Golem, Inc.: A Comment on Certain Points Where Cybernetics
Impinges on Religion. Cambridge, MA: MIT Press, 1964.
Wiener, Norbert. The Human Use of Human Beings. New York: Doubleday, 1954.
Wilf, Eitan. “Toward an Anthropology of Computer-Mediated, Algorithmic Forms of
Sociality.” Current Anthropology 54.6 (2013), 716–39.
Wilford, John Noble. “Computer Is Being Taught to Understand English.” New York
Times, 15 June 1968, 58.
Wilks, Yorick. Artificial Intelligence: Modern Magic or Dangerous Future. London: Icon
Books, 2019.
Williams, Andrew. History of Digital Games. London: Routledge, 2017.
Williams, Raymond. Television: Technology and Cultural Form. London: Fontana,
1974.
Willson, Michele. “The Politics of Social Filtering.” Convergence 20.2 (2014), 218–32.
Wilner, Adriana, Tania Pereira Christopoulos, Mario Aquino Alves, and Paulo C. Vaz
Guimarães. “The Death of Steve Jobs: How the Media Design Fortune from
Misfortune.” Culture and Organization 20.5 (2014), 430–49.
Winograd, Terry. “A Language/Action Perspective on the Design of Cooperative
Work.” Human–Computer Interaction 3.1 (1987), 3–30.
Winograd, Terry. “What Does It Mean to Understand Language?” Cognitive Science 4.3
(1980), 209–41.
Woods, Heather Suzanne. “Asking More of Siri and Alexa: Feminine Persona in
Service of Surveillance Capitalism.” Critical Studies in Media Communication
35.4 (2018), 334–49.
Woodward, Kathleen. “A Feeling for the Cyborg.” In Data Made Flesh: Embodying
Information, edited by Robert Mitchell and Phillip Thurtle
(New York: Routledge, 2004), 181–97.
Wrathall, Mark A. Heidegger and Unconcealment: Truth, Language, and History.
Cambridge: Cambridge University Press, 2010.
Wrathall, Mark A. “On the ‘Existential Positivity of Our Ability to Be Deceived.’”
In The Philosophy of Deception, edited by Clancy W. Martin (Oxford: Oxford
University Press, 2009), 67–81.
Wünderlich, Nancy V., and Stefanie Paluch. “A Nice and Friendly Chat with a
Bot: User Perceptions of AI-Based Service Agents.” ICIS 2017: Transforming
Society with Digital Innovation (2018), 1–11.
Xu, Kun. “First Encounter with Robot Alpha: How Individual Differences Interact
with Vocal and Kinetic Cues in Users’ Social Responses.” New Media & Society
21.11–12 (2019), 2522–47.
Yannakakis, Georgios N., and Julian Togelius. Artificial Intelligence and Games. Cham,
Switzerland: Springer, 2018.
Young, Liam. “‘I’m a Cloud of Infinitesimal Data Computation’: When Machines Talk
Back: An Interview with Deborah Harrison, One of the Personality Designers
of Microsoft’s Cortana AI.” Architectural Design 89.1 (2019), 112–17.
Young, Miriama. Singing the Body Electric: The Human Voice and Sound Technology.
London: Routledge, 2016.
Zdenek, Sean. “Artificial Intelligence as a Discursive Practice: The Case of Embodied
Software Agent Systems.” AI & Society 17 (2003), 353.
Zdenek, Sean. “‘Just Roll Your Mouse over Me’: Designing Virtual Women for
Customer Service on the Web.” Technical Communication Quarterly 16.4 (2007),
397–430.
Zdenek, Sean. “Rising Up from the MUD: Inscribing Gender in Software Design.”
Discourse & Society 10.3 (1999), 381.
INDEX
For the benefit of digital users, indexed terms that span two pages (e.g., 52–53) may, on
occasion, appear on only one of those pages.
Huizinga, Johan, 28–29
human
  human bias, 10, 20–21, 22, 42, 91–92, 102–4, 113, 130
  human body, 23–24
  human brain, 36–37, 40–41
  human communication, 3, 92, 93, 97
  human senses, 5–6, 12, 26–27, 110–11, 112
  human vs computer, 26, 27, 28–29, 36–37, 40, 55–56, 74–75, 76–77, 89–91, 104, 129–30
Human-Computer Interaction (HCI), 4–5, 7–8, 9–10, 14, 23–24, 26–28, 31, 34, 38–44, 53, 66, 71–72, 74–75, 98, 104–5, 112, 123–24
human-computer symbiosis, 40–41, 66, 75
Human-Machine Communication (HMC), 11–12, 14, 20–21, 23–25, 31, 40–41, 55, 75, 87, 95–96, 97, 104–5, 107
Humphrys, Mark, 95–96
Hutchens, Jason, 95
IBM, 47–48, 55–56
imagination, 9–10, 43–44, 46, 60–61, 113–14, 123–24
imitation game. See Turing test
information retrieval, 109–10, 120–23
information theory, 23–24, 40–41
infrastructure, 71–72
intelligence, definition of, 3–4, 15, 18–19, 23, 27, 31, 36, 42–43, 115
interdisciplinarity, 35–36
interface
  conceptualizations of interface, 2–3, 10, 23–25, 26–27, 39, 45–48, 53, 79, 108, 109, 121
  graphic user interfaces (GUI), 46
  seamless interfaces, 47, 48, 71–72, 81–82, 118
  social interfaces, 79–84, 85, 103–4, 109
  voice-based interfaces, 110–11
  web interfaces, 120–23, 124–25
internet, 22, 62, 70, 74, 96, 120–23. See also World Wide Web
intuition, 36
irony, 61–62, 95, 117–18, 122–23
Jackson, Samuel L., 112
Jibo (robot), 6, 118
Joe the Janitor (chatbot), 100–1
journalism, 36, 37, 48–49, 58, 60, 81–83, 91
Julia (chatbot), 102–3
Kronrod, Alexander, 25–26
Kubrick, Stanley, 38, 59
labor, 112, 122–23
Lady’s Oracle (game), 29
language, 65–66, 76–77, 92, 94–95, 97–99, 111, 118–20
Lanier, Jaron, 64–65
Latour, Bruno, 9–10, 57–58, 91–92
Laurel, Brenda, 26
Licklider, J. C. R., 40, 41
Lighthill report, 68–69
Lippmann, Walter, 113–14
literary fiction, 45–46, 57–58, 62, 78, 100, 102–3
Loebner, Hugh, 87, 89
Loebner Prize, 23, 27–28, 87, 117
Lovelace, Ada, 70
Lovelace objection, 70
lying, 27–28
machine learning, 18, 111–12
magic, 29, 34–35, 45, 46–49, 58–59
Massachusetts Institute of Technology (MIT), 37, 38, 41, 50, 70, 72–73
mathematics, 18
Maxwell, James Clerk, 71, 73
May, Theresa, 92
McCarthy, John, 33, 45, 47–48
McCulloch, Warren, 18
McLuhan, Marshall, 24, 30, 34–35, 102, 114–15, 128–29
Mechanical Turk (automaton chess player), 12–13, 91
media archaeology, 13
media history, 11–13, 22, 23–24, 28, 32, 34–35, 39, 45–46, 97, 110–11, 124–25
media theory, 9, 24–25, 30, 34–35, 46, 102, 114–15, 128–29
mediation, 22, 24–25, 97, 110–11, 124
medium, concept of, 11–12, 24–25, 71, 124
stereotyping, 102–4, 113–14, 128, 129–30
storytelling, 57–58, 100, 101
Strimpel, Oliver, 89–90
Suchman, Lucy, 14, 40–41, 68, 84–85, 97
superstition, 58–59
surveillance, 112
suspension of disbelief, 100
Tay (social media bot), 101
technology, 3, 7–8, 24
technological failure, 80–83
technological imaginary, 34–38, 55–56, 115–17
techno-utopia, 63–64
telegraph, 23–24
television, 45–46
theatre, 54–55, 89, 100, 117
theology, 108–9
time-sharing, 23, 26, 41–43, 46, 57, 68, 72–73
TIPS (chatbot), 99, 100
transparent computing, 45–49, 71–72, 83–84, 120, 130–31
trickery, 27–28
trinity, 108–9
Turing, Alan, 2–3, 16, 44, 52, 74–75, 89, 115, 127
Turing test, 2–3, 16, 44, 49, 62–63, 74–75, 76–77, 87
Turkle, Sherry, 14–15, 21–22, 57, 64, 65, 69, 78–79, 96, 104–5
Twitter, 24, 121
typewriter, 23–24
Union of Soviet Socialist Republics (USSR), 25–26
United States of America, 16
user, 3, 7–8, 10, 23, 26, 27–28, 41–43, 46, 47–48, 53, 68, 79, 117–18, 124, 131–32
user-friendliness, 8–9, 47–48, 81–82, 120, 130–31
Victorian age, 29
videogames. See digital games
vision, 97–98
voice, 107–8, 110–15
voice assistants, 2–3, 7–8, 9, 24, 29–30, 61–62, 65, 73–74, 77, 84–85, 103–4, 105, 107, 129–30
war, 28–29
web. See World Wide Web
Weintraub, Joseph, 94, 102–3
Weizenbaum, Joseph, 25, 42–43, 50, 78, 79, 130–31
Welles, Orson, 12
Whalen, Thomas, 99–100
Wiener, Norbert, 18, 27
Wired (magazine), 89
World Wide Web, 70–71, 95, 103–5, 109–10, 120–21, 123, 124
YouTube, 84
Zork (digital game), 78–79