
Empathy and Dyspathy between Man, Android and Robot
in Do Androids Dream of Electric Sheep? by Philip K. Dick
and I, Robot by Isaac Asimov

Maria Brand
ENGK01
Degree essay in English Literature
Spring semester 2013
Centre for Languages and Literature
Lund University
Supervisor: Ellen Turner
Abstract

We live in an age where science and technology rapidly expand the boundaries of what is
possible. One such area is the branch of technology called robotics, which deals with the
construction and design of robots. However, as robots become more advanced and acquire more humanlike features and capabilities, it is not uncommon to speculate, both in the real world and in fiction, about what may happen if robots become too advanced and humanlike. These speculations are developed in Isaac Asimov's I, Robot and Philip K. Dick's Do Androids
Dream of Electric Sheep?. Not only do the two narratives explore how robots and androids
change the world for the better or worse, but also how the humanlike behaviour and
appearance of the artificial beings evoke empathy or dyspathy amongst the human characters.
This essay will discuss why the human characters start to feel empathy or dyspathy toward
the artificial beings that appear in I, Robot and Do Androids Dream of Electric Sheep?.

Keywords: robot, android, empathy, dyspathy, the uncanny, the ‘Other,’ I, Robot, Do
Androids Dream of Electric Sheep?
Words: 9343
Table of Contents

1 Introduction

2 Empathy and Dyspathy

3 Androids and Robots

4 Anthropomorphic, Uncanny and Flawed

5 Otherisation and Fear

6 More Human than Human

7 Conclusion

Works Cited



1 Introduction

How individuals react towards fictional and nonhuman beings is a provocative and widely discussed subject, and many theories attempt to address the issue. In recent years, there has been increased interest especially in the field of robotics. Scientists and roboticists have started to ask whether robots are or will be capable of acquiring humanlike intelligence and of feeling and expressing emotions, how such robots should be designed, and whether they should be assigned certain humanlike rights. In addition, the question of how and why humans can develop feelings towards synthetic life, such as empathy and love, as well as negative emotions such as fear and hatred, is also part of the debate. These topics are
examined in the books I, Robot by Isaac Asimov (1950) and Do Androids Dream of Electric
Sheep? by Philip K. Dick (1968), both of which explore the empathy and dyspathy that may
develop towards artificial beings and the relationship that is formed between man and
machine.
The stories of the two books take place in hypothetical futures of Earth, each with a different political and social climate. In I, Robot, the reader follows the robopsychologist Susan Calvin of U.S. Robots and Mechanical Men through nine short stories, and through her interviews with a journalist the reader learns about the bizarre events that marked the development of robots in the universe of I, Robot. The short stories were initially published separately in pulp science-fiction magazines in the 1940s and were later brought together in 1950 into one unified book, with some minor edits to make the stories more coherent, such as introducing Susan as the narrator (Warrick 54-57). While
robot morality is the centre of this work, Asimov stated that he also wrote the short stories to
challenge the stereotypical trope of robots rebelling against their masters, which he termed
“the Frankenstein complex” (Asimov on Science Fiction 162; Warrick 55). He wanted to
show the advantages of robots in society, that they could be friends and work for the good of
humanity, instead of destroying it. To achieve this, the robots are programmed to follow the
Three Laws (or Rules) of Robotics:
1. A robot may not injure a human being, or, through inaction, allow a human
being to come to harm. 2. A robot must obey the orders given it by human
beings except where such orders would conflict with the First Law. 3. A robot
must protect its own existence as long as such protection does not conflict
with the First or Second Law. (Perkowitz 31-32)

Apart from protecting humanity from rampaging robots, the Three Laws act as a recurring
plot device and as a trigger for creating complex moral conflicts in the robots, which the
humans consequently have to decipher and avert.
In contrast to Isaac Asimov’s optimistic version of Earth in I, Robot, Philip K. Dick’s
Do Androids Dream of Electric Sheep?1 is much more pessimistic. In the future of a
devastated Earth, the reader follows the bounty hunter Rick Deckard who is tasked with
hunting down six androids, called andys, that have fled from the colonised world of Mars.
These androids are so humanlike in outward appearance that the only thing that separates
them from humans is their (assumed) lack of empathy. Armed with a questionable empathy
test called the Voigt-Kampff test, Rick must uncover who is android and who is human, as the
androids are perceived as dangerous and a threat to humanity. However, this is easier said
than done, as Rick starts to empathise with his targets and to doubt whether the distinction between human and android is as simple as he has made it out to be.
Even though the main focus of the two books is on how technology can change the world for the better or worse, the recurring theme of how the human characters come to empathise with or despise artificial beings is central. This essay will examine why the human characters start to develop empathy or dyspathy toward the androids of Do Androids Dream and the robots of I, Robot. By examining how empathy and dyspathy function
between humans in general, I will argue that there are three conventions that also apply in a
similar manner towards artificial beings. After providing the definitions and the historical
summaries for empathy, dyspathy, and artificial beings, I will first argue that empathy or
dyspathy can be elicited by artificial beings as a result of how realistic their outward
appearance and behaviour are. Secondly, I will argue that dyspathy can be formed towards
artificial beings as a result of humankind’s tendency to classify each other as ‘us’ and ‘them,’
as ‘we’ and the ‘Other.’ Lastly, I will argue that the human tendency to Otherise people,
something which triggers dyspathy, can be overcome and transformed into empathy when
the human characters realise that the ‘Others,’ in this case, androids and robots, are not so
different after all.

1. The primary source Do Androids Dream of Electric Sheep? is from now on referred to as Do Androids Dream.

2 Empathy and Dyspathy

The word empathy, which names an essential feature of human morality, originates from the
German equivalent term Einfühlung, which was coined in the late nineteenth century by the
German aesthetician Theodor Lipps and was translated into English by the psychologist
Edward Titchener in 1909 (Keen 2006 209; Misselhorn 106). Since it was coined, the concept of empathy has fluctuated in popularity in the scientific community. Presently,
empathy has gained a significant amount of interest in multiple fields, such as “psychology,
medicine, neuroscience, and psychoanalysis" (Hollan 385), one of the many reasons for this being the recent discovery of so-called mirror neurons in the human brain2, along with advances in the ability to trace chemicals and hormones in the brain with fMRI scans (Keen "A Theory of Narrative Empathy"; "Sin"). According to these discoveries,
“[s]cience shows we humans are hardwired to have empathy…[that] kindness is in our
physiology” (“Sin”). Empathy has also gained relevance in the field of robotics as researchers
have started to discuss if and how robots should be able to show empathy, as well as the moral, ethical, and epistemological dimensions of this question.
The everyday definition of empathy is “the ability to ‘put oneself into another’s
shoes'" (Misselhorn 105) and feel the same as that person, a response that can be triggered by seeing, hearing or even reading about another's situation (Keen 2006 4-5). The ability to place
oneself in another’s situation, or rather, to project oneself onto another and experience the
same emotions, is formed by multiple cognitive and affective processes. Misselhorn defines
these processes as: “knowing what a person is feeling, feeling what another person is feeling,
and responding compassionately to another person’s distress” (105). As such, empathy can
either, in a narrow definition, be seen as a purely cognitive process (what happens in the brain,
knowing and feeling another person’s emotions) or, more broadly, also involve an emotional
process which results in the person giving a compassionate response or reaction towards
another’s suffering. Additionally, according to Vignemont, humans do not empathise all of
the time and empathy is far from being an automatic process or voluntary action, but instead
relies on several contextual factors to be triggered (184), such as what type of emotion the
target experiences, how familiar the empathiser is with said emotion, and the context of the
situation. This essay will treat empathy in a broad sense which “include[s] all dimensions”
(Misselhorn 106), one which encompasses both the cognitive aspects (what happens in the brain) and the emotional aspect, which together result in an empathetic response to another's situation and in acts of kindness and friendship.

2. The mirror neurons allow us to imitate others' movements and facial expressions and thus to relate to and "recognize the emotions of others" (Vignemont 182).

While empathy is defined as a trait that encourages goodness and enables oneself to
imagine another person’s or being’s feelings, dyspathy is defined as “antipathy, aversion,
dislike" ("dyspathy, n."). Misselhorn uses the term dyspathy in one of her essays to denote the negative feelings that artificial beings elicit, mainly
due to their outward appearance and behaviour, described as “more than just apathy—a lack
of feeling; it is a distinctly negative, aversive feeling towards androids” (103). This essay will
use the term dyspathy in a similar way, but will also use the term to discuss the hostility that
occurs towards androids and robots for other reasons than their appearance and realism in
behaviour.

3 Androids and Robots

According to Asimov, the main unifying characteristics of robots and androids are that “both
[terms] refer to artificial human beings" (Asimov on Science Fiction 71). As a result, they
are often used synonymously and may be confused, even though they are essentially
different. The term ‘android,’ which was first introduced in the novel The Cometeers (1950)
by Jack Williamson, means ‘manlike’ and applies to artificial beings created from organic
substances and which are close to or nearly indistinguishable from a real human (71;
“Androids”). The term ‘robot,’ which was established by the Czech play R.U.R. in 1921,
means 'slave' and is applied to beings created by mechanical means, which can take either a
nonhuman or humanlike appearance (71; “Robot”). In short, androids can be seen as a
subtype of robots, defined as more humanlike in outward appearance, while robots often have
a more visible mechanical nature. Despite this difference between the two, this essay will
treat androids and robots in a similar way following Misselhorn’s definition: as beings that,
while not human, may still manage to trigger empathy and dyspathy in the human characters,
emotions that are "normally only shown towards our conspecifics" (102).
While the words 'robot' and 'android' are rather young, coined in the early and mid-twentieth century, the concept of artificially created beings is not an invention of the
modern era. There have been plenty of such beings depicted in fiction, created for various
reasons and purposes: as protectors, slaves, lovers, weapons, to mention a few. Many of these
tropes can be found in old myths, depicting the common themes, benefits and downsides
central to the concept of artificial life. For example, the Greek myth of the brazen giant Talos,
who guarded the shores of Crete, shows that artificial beings could be created to be protectors and weapons (Asimov, Asimov on Science Fiction 155; Dinello 37). The golden
women created by the Greek god Hephaistos, who helped him in his forge, show that
artificial beings can benefit, serve and be of use to humans (Asimov, Asimov on Science Fiction 145; Perkowitz 18). The legend of Pygmalion, which tells of Pygmalion falling
in love with his own statue, only to have it brought to life by Aphrodite, depicts for some a
desire to bring life to the dead and to create the perfect woman (Dinello 37).
While ancient myths depict plenty of stories about artificial beings, the Alchemists of
the Renaissance were also fascinated by the thought of controlling life and nature. Apart from
wanting to create gold and a longevity potion, they sought “to create a creature of flesh and
blood without female participation” (37), which led to the idea of the Homunculus, miniature
humanlike creatures. In Johann Wolfgang von Goethe's Faust, the main character Doctor
Faust, along with the help of the devil, creates a Homunculus who desires to become fully
human (38). The sixteenth-century myth of the Golem, created out of clay, follows the same
concept as Talos, functioning as a protector and a weapon (38). However, when the Golem
entered literature, its nature transformed “from servant/protector to revengeful monster” (38),
introducing the more complicated relationship between man and machine, that of master and
slave, and the fear of being harmed by one's own creation (45).
The philosophical aspect was introduced into the field of robotics in the seventeenth century when the philosopher Descartes (who was also interested in the robots of his time, called automatons) formulated his famous dictum "I think, therefore I am," and
subsequently claimed that:
[A]nimals and humans are nothing more than machines that operate by
mechanical principles. Humans, however, have a dual nature because they also
have “rational souls” that make them unique among living things; it is why
humans alone can say, “I think, therefore I am.” (Perkowitz 55)
This dictum, along “with his mechanistic view of the physical world…launched the modern
era of automata" (Dinello 35) and has since been a source of inspiration in the continuing
exploration of the mind, body and soul, as well as what makes humans different from other
beings such as robots.
While old myths, legends and concepts have had significant impact on the formation
of artificial beings, the modern creation and perception of androids and robots derive from
two more recent sources: Frankenstein; or, The Modern Prometheus (1818) by Mary Shelley (Asimov, Asimov on Science Fiction 19, 105), and the drama R.U.R. (1921) written by Karel Čapek, which deals with the mass production of living creatures called 'robots,' an old Czech word that means 'forced labour,' or simply, slave (Perkowitz 25-26). These robots
spurred the deep-rooted fear that robots would seek to overcome humanity itself, a fear which has since been a prevailing notion attached to artificial beings.
The many forms and roles which robots have taken throughout the history of fiction
show how the relationship between man and robot has developed, as well as what may be the
primary reasons for empathy and dyspathy being elicited by artificial beings. For example,
Pygmalion's statue, Galatea, suggests that something that looks almost identical to a human,
can trigger feelings of empathy and compassion in human beings to the extent of falling in
love with it. The Golem and the robots of R.U.R. show that the slave/master relationship, alongside the fear of one's creation going out of control, causes dyspathy.
monster showed that even if an artificial being may have a humanlike appearance, it may
evoke dyspathy it is “an abomination who exists in a liminal realm between the living and the
dead simultaneously provoking sympathy and disgust” (Brenton par. 2). In I, Robot and Do
Androids Dream, all of these different aspects of how empathy and dyspathy are elicited by
the artificial beings and triggered in the human characters can be observed, one of them being the anthropomorphism and behaviour of the robots and androids.

4 Anthropomorphic, Uncanny and Flawed

The first aspect that triggers empathy and dyspathy in the human characters in the two novels
is based on how humanlike the robots' and androids' appearance and behaviour are. Humans most readily empathise with those who seem like us (Keen, Empathy and the Novel x), and
“a more human-like physical appearance of a robot can increase the empathy expressed by
people towards it [as i]t is easier to relate to a robot that shares physical similarities with a
human than with one that resembles a machine” (Zlotowski par. III A). Furthermore, it is not
only the degree of anthropomorphisation that affects how humans will empathizes with a
robot or not, but other dimensions as well, “such as movement [63], verbal communication
[64], [65], emotions [66], gestures [67] and intelligence [68], [69]” (par. III A). However,
according to Misselhorn, empathy can also be triggered by objects that have a low degree of
human resemblance. Robots do not have to be entirely humanlike to evoke empathy, but only
need to possess a couple of anthropomorphic characteristics, as computer scientists have
noted that humans easily anthropomorphise all kinds of machines (par. II A). This can be
seen with the nursemaid robot Robbie from the short story “Robbie” in I, Robot.

Robbie, being one of the earlier robot models in the universe of I, Robot, is a rather
stereotypical robot: he is made out of metal, has an anthropomorphic frame (a head, torso, legs, feet and hands), has red glowing eyes, produces clanking sounds and can only communicate through pantomime. His primary purpose is to be the caretaker
of the little girl Gloria and they become very close friends. However, Gloria’s mother, Mrs.
Weston, persuades Mr. Weston to get rid of Robbie because she is afraid of him going out of
control, even though he is unable to do so because of the Three Laws. When Gloria finds out
that Robbie is gone, she stops eating and is constantly unhappy. Not knowing what to do,
Mrs. Weston asks her “Why do you cry Gloria? Robbie was only a machine, just a nasty old
machine. He wasn’t alive at all” (11), to which she replies “He was not no machine... He was
a person just like you and me and he was my friend” (11). Despite his nonhuman appearance,
Gloria still believes that he is alive, a person, whom she deeply cares about and sees as a
close friend. This belief is further reinforced when Gloria encounters ‘the talking robot’ on
her quest to find out where Robbie is. She asks it if it has seen Robbie, describing her friend
as “[a] robot just like you, except he can’t talk, of course, and - looks like a real person”
(18; emphasis added). This attachment to the less-than-realistic Robbie could first and foremost be explained by Gloria being a child, as children in general perceive the world as 'alive' to a higher degree than adults (Kang 23), as "the boundary line between the
animate and the inanimate is not yet set” (38). Gloria is more willing to perceive Robbie as
something that is alive, while her mother, who is an adult, is more reluctant. Susan reinforces
this view, as “it was easier for [Gloria] at the age of fifteen than at eight” (23) to let go of
Robbie, suggesting that age may have played a part.
On the other hand, the empathy that Gloria feels towards Robbie does not only
depend on her being a child, but is also ironically due to his low degree of human
resemblance. This notion can also be illustrated by the famous real-world robot Kismet, who
was not designed to look like a human “because an imperfect simulation of humanity can be
disturbing” (Perkowitz 178), and was instead designed with “exaggerated, clownlike
features—big blue eyes, prominent lips, and conspicuous animal-like ears” (178). According
to Perkowitz, Kismet's simple design engages the people who interact with it, causing them to feel empathy for it (179). In a similar manner to Kismet, Robbie's lack of human realism does not seem to
make him unpleasant, but on the contrary, makes him likeable and elicits empathy in the
human characters, especially Gloria. In addition, robots with little human resemblance that
are also represented as “benign, servile, and silly…[repress] people’s essential fear of them”
(Kang 40), much in a similar manner to how the robots R2D2 and C-3PO in the Star Wars
movies are portrayed (40). Robbie can likewise be ascribed these attributes, as he is
represented as benign (8), he is quick to obey and serve and at times may act silly because of
his slightly comical way of communicating, coming across "in the same way as cute children, unable to properly 'use their words' but lovable regardless" (Corcoran).
While Robbie elicits empathy in Gloria because of her young age and his
anthropomorphic characteristics, the androids in Do Androids Dream seemingly trigger
dyspathy in the human characters despite their extremely humanlike exterior. Thinking
logically, it should be much easier to empathise with them, as humans most readily empathise
with those who seem like us. This inability to feel empathy towards such realistic beings can
be explained through the concept of the ‘uncanny.’
The concept of the 'uncanny' was first used by the psychologist Ernst Jentsch in his essay "On the Psychology of the Uncanny" in 1906 and was later developed further by Freud in his essay "The Uncanny" in 1919 (Misselhorn 104). In short, the uncanny can be summarised as the
strange feeling and “uncertainty as to whether something one faces is an inanimate object or
living being, the insecurity being heightened when a thing not only looks like an animate
creature but also behaves like one” (Kang 22). Roboticist Masahiro Mori formulated a similar
theory, called the "uncanny valley," in 1970, which deals with the same concept but is
specifically applied to robots and robot design. His theory suggests that when a robot
becomes more and more humanlike, it elicits more and more empathy from humans, until
there comes a point when "the mismatch between their form, interactivity, and motion quality
elicits a sense of unease” (Riek, “Real-time Empathy”).
The androids in Do Androids Dream seemingly trigger Mori’s ‘uncanny’ theory
because at times they do not live up to the standards of simulating life and human behaviour,
even though their appearance is indistinguishable from that of humans. As some of the
characters are introduced to the rogue androids that have not yet been killed by Rick Deckard,
they take note of a feeling of coldness. For example, when Rick meets Luba Luft, he notes
“[h]er tone held cold reserve — and that other cold, which he had encountered in so many
androids. Always the same: great intellect, ability to accomplish much, but also this” (79).
The bounty hunter Phil Resch also takes note of this strange feeling of uneasiness, as he
describes the android Polokov as “cold. Extremely cerebral and calculating; detached” (93).
While their outward appearance does not elicit dyspathy, as nothing really separates them
from humans in general, the coldness puts the characters off. Brenton notes that even though a robot may have a high degree of external realism, it can still trigger the feeling of the uncanny, especially if the outward realism does not match up with the behaviour (par. 3). This is further reinforced in Isidore's description of the android Pris Stratton (at the time he did not know that she was an android):
Now that her initial fear had diminished, something else had begun to emerge
from her. Something more strange. And, he thought, deplorable. A coldness.
Like, he thought, a breath from the vacuum between inhabited worlds, in fact
from nowhere: it was not what she did or said but what she did not do and say.
(54; emphasis added)
The more realistic the outward appearance of an artificial being, the higher the expectations on its behavioural and motion realism (par. 4.2). If an android has overly
humanlike attributes, it will “evoke expectations that [it] might not be able to fulfil”
(Bartneck par. 2). In the case of the androids in Do Androids Dream, they elicit dyspathy in
the human characters because their behaviour does not live up to their high degree of outward
realism.
In a similar manner to the androids in Do Androids Dream, the android3 Stephen
Byerley in the short story “Evidence” in I, Robot shows that dyspathy and the uncanny can be
triggered despite his humanlike appearance. However, compared to the androids in Do
Androids Dream, the dyspathy towards Stephen is only triggered in the human characters
after Byerley has been accused of being an android. In contrast to the other robots in the novel,
Stephen Byerley is indistinguishable from a real human, and no one knows that he is a robot.
He is a lawyer who is running for election to become mayor of one of the great regions of
Earth, and it is generally believed that he will win the election. His extremely realistic
appearance and behaviour do not trigger dyspathy like the androids of Do Androids Dream,
as “the humans [he] encounter[ed] h[ad] no reason not to believe [him] to be human”
(Corcoran), and that “[a] highly anthropomorphic and intelligent robot is likely to be
perceived to be more animate and possibly also more likeable [than other robots]” (Bartneck
72). However, this changes when he becomes the subject of a thorough investigation by the
politician Francis Quinn. While talking to Alfred Lanning of U.S. Robots and Mechanical Men, Quinn tells him that his investigation showed that Stephen has never been seen
drinking, eating or sleeping (173), and points out that “the man is quite inhuman” (173),
arguing that he is a robot in disguise. The rumours start to spread, and only then is Byerley
met with dyspathy, hostility and anger by the people in the narrative. This is partially due to
the general attitude towards robots amongst the humans in I, Robot, as robots are banned from Earth for various reasons. However, like the androids in Do Androids Dream, Stephen Byerley also triggers the uncanny because he does not live up to the expectations his outward appearance establishes. In the end, Stephen Byerley manages to convince the voters that he is a human by punching a man, which robots are not allowed to do, and wins the election. By disproving his assumed flaws, such as not being able to eat, sleep and drink, and most importantly by showing that he is able to harm a human, he is again seen as a human. That this has succeeded is evident as he later manages to become World Co-ordinator (in other words, the president of Earth) (197).

3. Stephen Byerley is labelled a robot in the short story, but his realistic appearance would logically classify him as an android.
As can be seen above, the external appearance of the artificial beings can affect to
what degree empathy and dyspathy will be elicited in the human characters; robots with a low
degree of human resemblance are more likely to be likeable as “early humans who interpreted
ambiguous shapes as human [anthropomorphisation] minimized their risks of being killed by
enemies and maximized their chances of making friends” (Zlotowski par. I A). On the other
hand, the more humanlike androids and robots have a more difficult time convincingly simulating human behaviour and characteristics, and thus tend to evoke dyspathy in the
characters. Only when the human simulation is flawless is empathy elicited. However, while
the outward realism and behaviour of robots and androids play a large part in how humans
initially react and feel towards artificial beings, this is just a minor dimension of the many
aspects that trigger empathy and dyspathy, as it is important to take the contextual aspects
into consideration too. One such aspect is how robots and androids interact with and are treated by
humans. In the case of I, Robot and Do Androids Dream, the robots and the androids are met
with prejudice and fear, being stand-ins for the suppressed and shunned, envisioned as taking
the role of the ‘Other’ because they do not fit into the norms and binary categories of society.

5 Otherisation and Fear

As previously stated, it is a part of human physiology to feel empathy thanks to the mirror
neurons, the chemicals and the hormones in the human brain. Despite this, humans still do
bad things to others. In this regard, the androids and robots in I, Robot and Do Androids Dream
are continuously met with prejudice, hate and fear from a majority of the human characters.
In the previous section, this was attributed to the appearance of the androids and robots,
which could both cause empathy or dyspathy to be elicited. However, in both I, Robot and Do
Androids Dream, there is another reason for them being targets of dyspathy from the human
characters – Otherisation.
Otherisation, or more commonly called Othering, is a process that occurs when a
dominant group of people or society excludes another group of people who do not fit into
said society (“Sin”; Embrick 1357). The process of Othering has often “historically been used
to justify the mistreatment and oppression of one group of people by another” (Embrick
1357), and the ‘Other’ are often seen as beasts or subhuman (“Sin”; Embrick 1357).
According to Jines, the idea of the ‘Other’ has close ties to empathy and dyspathy, as
empathy developed "to only [extend] to the tribe, to blood ties" (qtd. in Rifkin 447-48), and those who were outside of this group were "the alien 'Other'" (qtd. in Rifkin 453), regarded "with a range of emotions, but without empathy" (Keen, "A Theory of Narrative
Empathy” 214). In I, Robot and Do Androids Dream the robots and androids are the ones
who are the ‘Other,’ and are met with dyspathy by the human characters.
Both Do Androids Dream and I, Robot share a couple of similar reasons for the
Otherisation of the androids and robots. One of these reasons is that the robots and the
androids are deemed to be dangerous, and are thus not allowed to be on Earth amongst
humans. For example, in Do Androids Dream the humans are afraid of the androids because
they are claimed to be smarter (23) and ruthless and to lack the ability to feel empathy, and are thus
dangerous to other beings. To ensure humanity’s safety, the bounty hunters are tasked to hunt
down and “retire” the androids that flee to Earth. In I, Robot the humans are afraid that the
robots may at any moment go out of control or that they will render humans useless. To prevent robots from going berserk and to keep them as obedient servants, the Three Laws were invented
and programmed into the robots. However, despite these safety measures, there is a lingering
fear amongst the humans throughout the two narratives: either that the Three Laws in I, Robot
may not be as solidly set as they have assumed them to be, or that the androids in Do Androids
Dream are too humanlike to be found out. These fears result in Otherisation of the robots and
the androids.
Another shared reason for the Otherisation of the androids and robots
in I, Robot and Do Androids Dream is that they do not fit into any of the binary categories of
each society. Binary categories are commonly used in communities and societies to “put
together a structure of reality…to make sense of the world by organizing things in a series of
dual oppositions such as day/night…living/dead, man/woman…safe/dangerous” (Kang 29).
Entities encountered that do not fit into these dual categories, such as the androids and robots, being neither alive nor dead, human nor machine, "pose a danger to the community as a whole, as [they threaten] to undermine the foundations of its shared reality, potentially
throwing it into a conceptual chaos” (30). In Do Androids Dream the norm of human society
is to follow the religion of Mercerism, a religion based solely around the concept of empathy. Because of the androids' inability to feel empathy, they are not able to be a part of
Mercerism, and are thus excluded from human society. In I, Robot the robots are also
Otherised as they threaten the way human society works. In the narrative there are two
organisations that have 'anti-robot' attitudes: the Fundamentalists and the 'Society for
Humanity.’ The Fundamentalists are people who “had not adapted themselves to what had
once been called the Atomic Age" (186) and are "Simple-Lifers" (186), and who hate robots simply
because they fear them, while the ‘Society for Humanity’ ensures that robots are not allowed
to work on Earth as they enable "unfair labour competition" (205). The robots have been
banned from Earth presumably because of these two organisations, which suggests that the
Otherisation of the robots is due to humans’ fear of change and being rendered useless.
The process of Otherisation results in dyspathy in the human characters, which is
expressed in alienating behaviour towards the androids and the robots. In I, Robot, this is
apparent in multiple ways, as the robots, according to Paul D. Lee, "tend to highlight very real
racial problems…the robots themselves act as stand-ins for the racial minorities” (31). For
example, the way in which the robots are addressed and treated by the humans, as well as
how the robots refer to the humans, shows the unequal relationship between the two. In the
short story “Runaround,” the robot scientists Gregory Powell and Michael Donovan are
forced to get help from six old robots to retrieve a more advanced robot called Speedy, who
has gone out of control. When Powell activates one of them, “The monster’s head bent
slowly and the eyes fixed themselves on Powell. Then…he grated, ‘Yes, Master!’” (29).
Donovan explains that the robots have a “good, healthy slave complex [programmed] into the
damned machines” (29).
While the robots show their subordination by using words such as master, the humans
continually impose their dominance on the robots by repeatedly calling them derogatory
names and insulting them. For example, in the short story “Little Lost Robot,” the robot
which disappears is treated badly by his superior, being called names “with every verbal
appearance of revulsion, disdain, and disgust” (125). Another example is when the characters
use the word 'boy' when talking to the robots. Gloria calls Robbie 'boy' (2), Powell and
Donovan do so towards Speedy in “Runaround” (35), Susan in “Little Lost Robot” (133, 137-
38) and in "Escape!" (149). Lee states that this carries certain racist undertones, indicating
that the humans impose “their own inherent sense of superiority by calling the machines
'boys'" (qtd. in Scholes and Rabkin 188). These racist undertones are reminiscent of how
black slaves were treated and addressed, as “the white slave master often exercised his
authority over the black male slave, by depriving him of all attributes…treating him as a
child” (Hall 262), and one such way was to call them ‘boys’ or ‘girls.’
In a similar manner to the robots in I, Robot, the androids in Do Androids Dream are
also the subject of Otherisation. Apart from being feared for being dangerous,
they are created to be slaves for humans. As a result of the nuclear World War Terminus, a
majority of mankind has been forced to emigrate to Mars to start a new life. To make life
easier on the new planet, the emigrants are each offered a custom-made robot. During a
commercial, the announcer says that the androids can be seen as slaves, illustrated as
"duplicates [of] the halcyon days of the pre-Civil War Southern states! Either as body
servants or tireless field hands, the custom-tailored humanoid robot” (13). While the robots of
I, Robot are closely supervised and controlled, the androids in Do Androids Dream revolt and
actively seek to become free from humanity, dreaming of a better life on Earth (145).
However, on Earth, they are not welcome. They are shunned and hunted by bounty hunters,
seen as dangerous “murderous illegal aliens” (108), beings that are not able to feel empathy
and are thus a threat to human society. They are out of place, not fitting in with the norms of
human society. According to Hall, when such ‘anomalies’ or ‘matter out of place’ spring up
in a society, this results in attempts "to sweep [the anomaly] up, throw it out, [and] restore the
place to order” (qtd. in Kristeva 1982). This is what the bounty hunters are for, working as “a
barrier which keeps the two distinct” (112), strengthening Otherisation and dyspathy towards
them.
One part of the process of Otherisation is that those who are targeted are dehumanised
and seen as inferiors or beasts. In Do Androids Dream, the androids are, apart from being
seen as dangerous aliens, deemed worth less than animals. Because of the World War
Terminus, which wiped out most of the animal life on Earth, owning an animal is a sign of
status in human society (depending on what animal you own and how many), as well as a
way to show one’s empathetic disposition. Iran, Rick’s wife, explicitly states this: “You know
how people are about not taking care of an animal; they consider it immoral and anti-
empathetic" (9). Because of this widespread opinion, in combination with real-life animals
being so rare and expensive, people often buy and take care of electric animals as if they were
real animals. However, while the humans love both the real and the electric animals, the
androids are not in the humans’ “range of empathetic identification” (112). The androids are
seen as below animals, even electric ones, as exemplified by them being hunted and killed,
while the electric animals are treated with love and care. When Rick meets the android
Garland, he confirms this, saying that “It’s a chance anyway, breaking free and coming here
to Earth where we’re not even considered animals. Where every worm and wood louse is
considered more desirable than all of us put together” (97). This idea that the androids are
inferior to animals, even electric ones, is also mentioned in the conversation between Pris,
Roy and Isidore (127-28).
While dyspathy is formed towards the androids and robots by the human characters as
a result of Otherisation and fear, there are human characters who start to realise that the distinction between human and machine may not be as clear-cut as it is assumed to
be, resulting in empathy being formed instead.

6 More Human than Human

While dyspathy is widespread in the universes of I, Robot and Do Androids Dream as a result
of Otherisation, the androids and robots still manage to elicit empathy amongst a few of the
human characters. The reason for this lies in the realisation that robots and androids, in the
end, are not so different from us humans, or that they do not deserve to be treated in the way
that they are. Otherisation, which suppresses empathy for another group or person, can be
broken ("Sin"). This can be explained in the same way as we explain how soldiers who fight at close range often have problems killing because of "a natural tendency in themselves to view the enemy as equally human" (Moses 136). Kathleen Taylor gives a similar example: an SS man in WWII, ordered to kill some Jewish children, took the hand of a little girl and suddenly found himself unable to kill her and the other children; his empathy broke through. If the cues are sufficiently strong, dyspathy can be reversed ("Sin"). The same
process can be observed in relation to the androids and robots in I, Robot and Do Androids
Dream. Even though the majority of the human characters believe that they are just simple,
dangerous machines, there are moments where the robots and androids show humanlike
qualities: some hope for independence and freedom, some are able to reason, some display
the ability to feel emotions such as love, anger, fear, friendship, and empathy. At times they
may even appear more human than humans themselves.
In I, Robot there are mainly two people who actively empathise with the robots – Susan Calvin and Gloria. At the beginning of the short story "Robbie" the reader gets to know
how Gloria and Robbie interact: they play hide and seek and Gloria tells him fairy tales. The
interaction between the two shows that Robbie is not an unemotional machine. This is
reinforced later on in the story when Gloria is about to be crushed by a tractor. In the nick of
time Robbie saves her; he "wound about the little girl gently and lovingly, and his eyes
glowed a deep, deep red” (22). It could be argued that Robbie only saved Gloria because the
First Law dictates that he must, but the description of Robbie's reaction when he sees Gloria
before she is put in danger (21) shows that he does care deeply for her and would probably
have saved her even if the First Law did not exist. Additionally, as Gloria stops playing with children her own age, her mother becomes worried about her (8). George Weston says, in an
attempt to make his wife less worried, that she could regard Robbie as a dog. He reasons that
the love Gloria feels for Robbie is the same as a child or any person feels for a dog, as he has
“seen hundreds of children who would rather have their dog than their father” (8). Robbie
seemingly suggests that the robots in I, Robot may not be just machines, but really have other hidden sides to themselves.
Susan Calvin is also one of the few human characters in I, Robot who empathises with
the robots. She says that she “like[s] robots…considerably better than…human beings” (196),
and states that robots are better than humans. She also argues that the Three Laws, apart from imposing slave-like behaviour and denying the robots free will, show that the
robots may be morally superior to humans. For example, in the short story “Evidence,” when
Susan speaks about Stephen Byerley, she mentions that he would be the perfect choice for a
leader: "If a robot can be created capable of being a civil executive, I think he'd make the best
one possible. By the Laws of Robotics, he’d be incapable of harming humans, incapable of
tyranny, of corruption, of stupidity, of prejudice” (196). Furthermore, Susan Calvin also
concludes that the Three Laws share "the essential guiding principles of a good many of the
world’s ethical systems” (182), implying that robots and humans are governed by the same
ethical and moral values. For example, the First Law makes sure that a robot will "love others as himself, protect his fellow man, risk his life to save another" (182); the Second Law requires the robots "to obey laws, to follow rules, to conform to custom — even when they interfere
with his comfort or his safety” (182); and the Third Law imposes “the instinct of self-
preservation” (182).
The short stories of I, Robot also show plenty of other instances where robots exhibit
humanlike qualities. At times, they even “demonstrate the human qualities of minds and soul,
reason and emotions” (Majed 2). One example is the robot Cutie in the short story “Reason.”
Cutie, like most of the other robots presented in I, Robot, is different. He has started to doubt that humans built him, being "the first robot who's ever exhibited curiosity as to his
own existence” (47). According to Majed, “thinking is a unique human quality; it is what
makes humans" (1), and Cutie's ability to reason and use his positronic brain suggests that he "share[s] with humanity [the] quality of thought and awareness of [his] existence" (1). As he starts to reason about why he was built and what his purpose is, he confronts the scientists Donovan and Powell, presenting his thoughts and saying "I, myself, exist, because I think" (51), to which Powell replies "Oh, Jupiter, a robot Descartes" (51). According to Descartes's dictum (as summarised by Perkowitz), only humans can utter this statement, as only human beings have 'rational souls.' Even though Cutie is a robot, he may not be only a simple machine but one with certain humanlike attributes, such as the ability to reason.
In a similar manner to the robots of I, Robot, the androids in Do Androids Dream also suggest that humans and artificial beings may not be so different from each other. One such
aspect is the problematic usage of empathy as a way to distinguish between man and machine
in the narrative. For example, for Rick to be able to find out who is android and who is
human, he uses a test called the Voigt-Kampff test. With the Voigt-Kampff apparatus attached to the subject's eye, the tester can capture the subject's empathetic reactions to a series of questions which deal with "a variety of social situations. Mostly to do with
animals” (95). If the subject does not show an adequate empathetic response to these
questions, he or she may be an android. However, these questions are on the verge of
the comical. To give two examples: “You are given a calfskin wallet on your birthday” (38),
or “[y]ou’re sitting watching TV…and suddenly you discover a wasp crawling on your wrist”
(39). At first, these questions come across as not having to do with empathy. For example, if
a person were to answer that he or she would be happy to receive such a nice gift as a calfskin wallet, or that he or she would swat the wasp, both of which would be seen as normal reactions, he or she would test as an android in the universe of Do Androids Dream. These obscure
questions highlight the problem of using empathy as a measurement to define who is human
and who is not, as androids and humans alike, such as the bounty hunter Phil Resch and other
"authentic humans with underdeveloped empathic ability" (43), may lack the ability to feel empathy.
Furthermore, at times the androids seem to be more human than humans (Attaway;
Dinello 65; Hayles 162), especially emotionally. Even Rick confirms that “[m]ost androids
I’ve known have more vitality and desire to live than my wife” (75-76). For example, humans
have begun to rely on machines called ‘mood organs’ which give them the capacity to choose
what emotions they want to feel at a given time, such as depression, happiness, and “The
desire to watch TV, no matter what's on it" (4), to mention a few. While humans "program"
what they want to feel, the androids display a “real” emotional life that the humans do not
demonstrate to the same extent. For example, when Rick kills Roy's wife Irmgard, Roy "let[s] out a cry of anguish" (177), and Rick responds by saying "Okay, you loved her" (177), showing that androids can feel love; and when Pris is taken care of by Isidore, she unexpectedly starts to cry (118). Additionally, when Pris meets her friends Irmgard and Roy, they seemingly display affectionate body language and happiness at seeing each other (121).
This is also apparent when Isidore asks Pris about Mars and what it was like there, and she explains that "all Mars is lonely. Much worse than this," and that "The androids…are lonely, too" (119), showing that androids can feel loneliness. Not only that, but when Isidore meets Pris for the first time, she is distorted by fear (50).
the androids are humanised as they have spontaneous emotions, while the humans are
dehumanised as they control their emotions like programmable machines.
In Do Androids Dream there are a couple of human characters who begin to realise
that the androids are not so different from humans after all, or that humans are not so
different from androids. One of these characters is Isidore the 'chickenhead.' He takes care
of the three last remaining androids Pris, Roy and Irmgard Baty, and in the beginning he does
not suspect that they are androids. When he eventually finds out that they are indeed
androids, he does not care. Like the androids, Isidore is also treated badly by others because
he is a ‘special,’ a term applied to those who have been affected by the fallout and deemed
less intelligent than others (129). As discussed in the empathy section, empathy can be
triggered if the empathiser can relate to the same things that the subject experiences. While he is not allowed to emigrate to Mars, the androids are not allowed to come to Earth. Being treated badly for being different enables him to identify and empathise with how the androids are also mistreated.
Another person who starts to empathise with the androids, which is central to the plot,
is Rick himself. In the beginning, Rick is firm in his belief that the andys must be ‘retired’ (in
other words, killed), and justifies this by reasoning that “[e]mpathy, evidently, existed only
within the human community” (24), and that predators, such as cats and spiders, which
cannot depart from a meat diet, lack this ability to feel empathy. Thus, if androids lack
empathy, “the humanoid robot constituted a solitary predator” (24). In his mind, he constantly
reassures himself that, in accordance with Mercerism, they do not deserve to live, that "You should only kill the killers" (24; emphasis in original).
example given by Moses:

Hired killers in the United States have been known to convince themselves out
loud that their intended victim is evil and does not deserve to live (Arlow,
1973). Thus they “ideologically” eradicate their empathy so as not to interfere
with their task of killing. (135)
This seems to fit well with how Rick manages to kill such humanlike creatures. He
continually suppresses his empathy towards them by reasoning that they are evil and
dangerous. Rick even states that he likes to think of the androids in such a way, as it makes his job easier (24). Despite this, Rick seems to have been bothered by his tasks in the past, as shown
in his conversation with the bounty hunter Phil where they discuss whether an android should
be called ‘it’ instead of he or she. Rick replies that “I did at one time…when my conscience
occasionally bothered me about the work I had to do; I protected myself thinking of them that
way but now I no longer find it necessary" (99). Nevertheless, as the story progresses, Rick
once again starts to doubt his convictions, especially after Phil has killed the android Luba
Luft. Rick finally realises the absurdity of the whole situation: a singer and talent like Luba
Luft cannot possibly be a danger to their society (109). He reasons that “[t]hey can use
androids…She was a wonderful singer. The planet could have used her. This is insane” (108).
His conviction is further swayed when the ruthless and apathetic bounty hunter Phil, who
killed Luba Luft, tests as human and not an android. This shows that humans themselves, such as Phil, can also lack empathy, as Phil seemingly likes to kill (109), and that empathy may not be a human-exclusive emotion but an illusion used to keep humans and androids apart.
As a result, Rick becomes troubled and finds it difficult to continue killing the
remaining androids. This conflict can be explained in the same manner as that of soldiers fighting at close range, who often have a problem with killing because of "a natural tendency in themselves to view the enemy as equally human"; those who cannot go through with killing their targets "end up by being in disharmony with themselves and with their
consciences” (Moses 136). This seems to correlate with what Rick experiences, as he
reluctantly kills the remaining androids. Empathy, in this sense, is “anathema to killing
things, to torture, and to the waging of war. It stands in contrast and in contradiction to the
demonization of the enemy, to scapegoating, to that of polarization of good and bad” (136).

7 Conclusion

In I, Robot and Do Androids Dream, empathy and dyspathy toward artificial beings have multiple causes. As this essay has shown, they can be triggered by the external and behavioural realism of the creatures, as the varying degrees of realism in the appearance of the robots and androids result in different levels of empathy and dyspathy. Dyspathy is also a result of fear and of the fact that the androids and robots do not fit the norms of society. This process results in the
Otherisation of the androids and the robots. However, the process of Otherisation can be
overcome and develop into empathy if the characters realise that humans and artificial beings
are perhaps not as different as they have previously thought. Some of the androids and robots
display human qualities and become more and more humanised, whereas there are humans
that show qualities of dehumanisation, becoming more and more machine-like.
Of course, there are also other possible aspects that can play a part in the formation of
empathy and dyspathy towards artificial beings. For example, Rick Deckard and Phil Resch
first assume that Rick starts to empathise with the androids because the androids are female
and he finds them attractive (76, 114). Additionally, the androids and robots in both works
show that they on occasion may actually pose a threat to humans, as they expose their less
empathetic side. For example, Nestor and Cutie in I, Robot dislike their positions as inferiors,
and in the process of rebelling they pose an actual threat to humans. The androids in
Do Androids Dream show little compassion towards animals, and on one occasion they even torture a spider for amusement. However, the dual nature of the androids and robots, of being either harmless and friendly or cruel and dangerous, suggests that they are much like us humans, as humans are also prone to doing both good and bad things.
After having examined these fictional artificial beings in I, Robot and Do Androids
Dream, it can be concluded that if artificial beings reach a "certain threshold of sentience"
(Perkowitz 118-19), it may be morally unacceptable to suppress them. In Do Androids Dream
Rick Deckard concludes that killing is bad regardless of whether the victim is a human, an animal or a
robot (178), as he realises that all living beings have their respective lives and purposes (191),
even electric ones. In I, Robot, despite the resistance of humanity, the robots become the ones
who steer society and humankind into the future by working behind the scenes to make life
better. However advanced or simple they may be, their task seems to be to act as the caretakers of humanity, and the Three Laws ensure that they always have humankind's best interest at heart.

Works Cited
Primary Sources
Asimov, Isaac. I, Robot. New York: Bantam Books, 2008. Print.
Dick, Philip K. Do Androids Dream of Electric Sheep? London: Millennium, 1999. Print.

Secondary Sources
“Androids.” The Encyclopedia of Science Fiction. 1993. Print.
Asimov, Isaac. Asimov on Science Fiction. London, Toronto, Sydney, New York: Granada
Publishing, 1983. Print.
Attaway, Jennifer. “Cyborg Bodies and Digitized Desires: Posthumanity and Phillip K.
Dick.” n.p. (2006): n. pag. Web. 03 Apr. 2013.
Bartneck, Christoph, et al. “Measurement Instruments for the Anthropomorphism, Animacy,
Likeability, Perceived Intelligence, and Perceived Safety of Robots.” International
Journal of Social Robotics 1.1 (2009): 71-81. Web. 06 Mar. 2013.
Brenton, Harry, et al. “The Uncanny Valley: Does it Exist?” Proceedings of Conference of
Human Computer Interaction, Workshop on Human Animated Character Interaction.
n.p. (2005): n. pag. Web. 08 Apr. 2013.
Corcoran, Erin. “Robots and the Uncanny Valley.” School of Architecture University of
Waterloo. Web. 16 May 2013.
Dinello, Daniel. Technophobia! Science Fiction Visions of Posthuman Technology. 1st ed.
Austin: University of Texas Press, 2005. PDF file. Web. 21 Apr. 2013.
"dyspathy, n." OED Online. June 2013. Oxford University Press. Web. 08 Sept. 2013.
Embrick, David G. “‘Us and Them.’” Encyclopedia of Race, Ethnicity, and Society. Ed.
Richard T. Schaefer. Thousand Oaks, CA: SAGE Publications, 2008. 1358-59.
Web. 12 Aug. 2013.
Hall, Stuart. “The Spectacle of the ‘Other’.” Representation: Cultural Representations and
Signifying Practices. Ed. Stuart Hall. London: SAGE Publications, 1997. Print.
Hayles, N. Katherine. How we Became Posthuman: Virtual bodies in Cybernetics, Literature,
and Informatics. University of Chicago Press, 1999. PDF file. Web. 23 Apr. 2013.
Hollan, Douglas and C. Jason Throop. “Whatever Happened to Empathy?: Introduction.”
Ethos 36.4 (2008): 385-401. Web. 25 Apr. 2013.
Jines, Erin N. “Evolution and Empathy: A Darwinian Approach to the Cultural Other.”
(2011). Web. 23 May 2013.
Kang, Minsoo. Sublime Dreams of Living Machines: The Automaton in the European
Imagination. Cambridge, Massachusetts, London: Harvard University Press, 2011.
Print.
Keen, Suzanne. “A Theory of Narrative Empathy.” Narrative 14.3 (2006): 207-36. Web. 06
Mar. 2013.
---. Empathy and the Novel. Oxford; New York: Oxford University Press, 2007. PDF file.
Lee, Paul David. “Bug-eyed Monsters and the Encounter with the Postcolonial Other: an
Analysis of the Common Postcolonial Themes and Characteristics in Science
Fiction.” Diss. University of Texas, Arlington, 2012. Web. 19 Apr. 2013.
Majed, Al-Lehaibi S. “The New Human: Robot Evolution in Selection from Asimov’s Short
Stories." IRWLE 9.1 (2013): 1-5. Web. 25 May 2013.
Misselhorn, Catrin. “Empathy and Dyspathy with Androids: Philosophical, Fictional, and
(Neuro)Psychological Perspectives." Konturen 2.1 (2009): 101-23. Web. 02 Apr. 2013.
Moses, Rafael. "Empathy and Dis-Empathy in Political Conflict." Political Psychology 6.1 (1985): 135-39. Web. 25 Apr. 2013.
Perkowitz, Sidney. Digital People: From Bionic Humans to Androids. Washington, D.C:
Joseph Henry Press, 2004. Print.
Riek, Laurel D., et al. “Real-time Empathy: Facial Mimicry on a Robot.” Workshop on
Affective Interaction in Natural Environments (AFFINE) at the International ACM
Conference on Multimodal Interfaces (ICMI 08). ACM. (2008): n. pag. Web. 22 Apr.
2013.
“Robot.” The Encyclopedia of Science Fiction. 1993. Print.
"Sin." Sex, Death and the Meaning of Life. SVT2. Stockholm, Malmö. 14 July 2013.
Television.
Vignemont, Frédérique de. "When Do We Empathize?" Novartis Foundation Symposium 278 (2007): 181-90. Web. 10 Aug. 2013.
Warrick, Patricia S. The Cybernetic Imagination in Science Fiction. Cambridge, Massachusetts; London: The MIT Press, 1980. Print.
Zlotowski, J. et al. “More Human Than Human: Does The Uncanny Valley Curve Really
Matter?” Proceedings of the HRI2013 Workshop on Design of Humanlikeness in HRI
from Uncanny Valley to Minimal Design (2013): 7-13. Web. 08 Aug. 2013.
