From Alan Turing to Ian McEwan: Artificial Intelligence, Lies and Ethics in Machines Like Me
Author(s): Biwu Shang
Source: Comparative Literature Studies, 2020, Vol. 57, No. 3, SPECIAL ISSUE: The Eighth Sino-American Symposium in Comparative and World Literature (2020), pp. 443-453
Published by: Penn State University Press
Stable URL: https://www.jstor.org/stable/10.5325/complitstudies.57.3.0443

From Alan Turing to Ian McEwan: Artificial Intelligence, Lies and Ethics in Machines Like Me

Biwu Shang

Abstract
Taking Machines Like Me as its focus of analysis, this article interrogates Ian McEwan's rejoinder to Alan Turing on such issues as artificial intelligence, machines, and ethics. By focusing on the machine Adam's engagements with human life, it examines McEwan's rewriting of the Turing Test, in which he changes the question "Can machines think?" to "Can machines lie?" Charlie's and Adam's different responses to Miranda's lie about her sexual attack reveal their competing ethical positions and attitudes. In contrast to Isaac Asimov's "Three Laws of Robotics," McEwan raises the ultimate question of human responsibility for machines through the fictional character Turing's critique of Charlie's hammering of Adam to death.
Keywords: Ian McEwan, the Turing Test, artificial intelligence, lies, ethics

In October 2018, Ian McEwan made his first visit to China to receive the twenty-one University Students' International Literature Award. At the award ceremony, McEwan delivered a speech about artificial intelligence and artificial humans. In particular, he showed a profound interest in the questions that human beings will have to confront once artificial humans are brought into the world. For instance,

Should we grant the rights and responsibilities of a citizen to an artificial human? Will it be wrong to buy or own such a being, as people used to buy and own slaves? Will it be murder if we destroy such a being? Will they become cleverer than us, and take our jobs? Already, in our factories, clever but mindless machines are replacing workers. Doctors and lawyers could be next. Then the ultimate question: will artificial humans dominate us, or even replace us?1

The questions about artificial humans are also the central concerns of McEwan's latest work, Machines Like Me (2019). The novel is set in the 1980s, when artificial humans have just been brought to the market. In the narrator's words, "Twelve of this first edition were called Adam, thirteen were called Eve."2 Like Alan Turing, Charlie Friend, the protagonist of the novel, has bought an Adam and taken him home. The novel mainly tells the story of Adam's engagement with the life of Charlie Friend and his girlfriend Miranda. Adam surprises Charlie with his sudden declaration of love for Miranda after their sexual affair, and he is eventually hammered to death by Charlie for reporting to the court that Miranda, in revenge on behalf of her friend Miriam, has lied in accusing Peter Gorringe of sexual assault.

Marcel Theroux contends that the novel "touches on many themes: consciousness, the role of chance in history, artificial intelligence (AI), the neglected Renaissance essayist Sir William Cornwallis, the formal demands of the haiku and the unsolved P versus NP problem of computer science, but its real subject is moral choice."3 Theroux's view of moral choice in the novel is consolidated and specified by Stuart Miller, who observes that the characters are "split on moral decisions. Miranda and Charlie initially are divided on whether Adam is a machine or a sentient being. Then, at the crux of the novel, Miranda and Charlie stare across a moral divide at Adam."4 In his response to Miller, McEwan explains: "I want the reader to be in Charlie's shoes as he's contending with someone who has a superior character and who can discuss Shakespeare with some warmth and insight. At the end, do you think Adam is a cold-blooded machine or a sentient being? That's the issue we're going to have, and it's going to open up new territory for us in the moral dimension."5 Both Theroux's and Miller's readings of the novel are insightful in that they pin the work down to its ethical concerns. Regrettably, they fail to consider Alan Turing as a fictional character in the novel or his role in the progression of the work. In fact, the fictional Turing comments on and analyzes almost all of the events involving the machine Adam, in particular his sexual affair with Miranda, his accusation of Miranda, and his death. On the one hand, Turing largely resolves and explains Charlie's puzzles; on the other, he severely criticizes Charlie's hammering of Adam to death. This article explores the indirect dialogue between Turing and the implied McEwan, realized through the interactions between Charlie and the fictional Turing in the text, by deliberately focusing on McEwan's rewriting of the Turing Test and on the destruction of the machine and human responsibility.

The Imitation Game in Literary Guise: McEwan's Rewriting of the Turing Test

In remembering Turing, J. M. E. Hyland deliberately mentions Turing's intellectual legacy, which in his view contains two major aspects: the Turing Machine and the Turing Test. In "The Forgotten Turing" (2016), Hyland writes that "Turing can hardly have supposed that by the time of his centenary he would be recognised as a national war hero, but he must have known that he left an intellectual legacy. The Turing Machine and Turing Test are the familiar aspects of that."6 Fascinated by the Turing Test, McEwan claims in an interview, "I played with the idea that it's like the Turing test taken to extremes."7 The Turing Test, originally framed as the "imitation game," was proposed by Turing in 1950. In his article "Computing Machinery and Intelligence," Turing raises the question "Can machines think?" At the very beginning of the paper, he writes:

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A."8

Turing invites readers to evaluate a machine's intelligence through the imitation game, in which one of the two human players is replaced by a machine, so that a human and a machine, kept in separate rooms, answer the interrogator's questions through a certain kind of device. In Turing's view, if interrogators fail at least 30 percent of the time to tell which is the human and which is the machine, the machine can be claimed to have intelligence. He optimistically predicts:

I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. The original question, "Can machines think?" I believe to be too meaningless to deserve discussion.9

The Turing Test has been a hot topic in the fields of computer science and artificial intelligence. Over the years, a considerable number of researchers have competed with one another to pass the Turing Test. It should be mentioned that in a Turing Test competition held in 2014, a computer program called Eugene Goostman was claimed to have passed the test, convincing 33 percent of the human judges that it was human. In Machines Like Me, how does McEwan play the Turing Test to its extremes? The answer lies in his changing Turing's question "Can machines think?" to "Can machines lie?" McEwan's move is largely inspired by Turing's own lie about his identity as a gay man. Accused of involvement in a homosexual relationship, Turing had intended to plead not guilty, but that plea would have deprived him of the opportunity to continue his research. As a result, he had to take his lawyer's advice to plead guilty and to lie about his identity. Andrew Hodges describes Turing's dilemma in the following lines:

He was, in fact, caught between two untruths. To deny what he had done would be to tell a lie, and to convey a false sense that he considered it something that ought to be denied. Yet to be portrayed in public with words such as "guilty," "self-confessed," "admit" was also to compound an untruth.10

Most likely inspired by Turing's own case of lying and the ensuing dilemma, McEwan parodies the Turing Test so as to look at artificial intelligence from another perspective.

In the novel, Adam warns Charlie to be careful about his girlfriend Miranda, claiming that she is a liar.

"According to my researches these past few seconds, and to my analysis, you should be careful of trusting her completely."
"What?"
"According to my—"
"Explain yourself."
I was staring angrily into Adam's blank face. He said in a quiet sorrowful voice, "There's a possibility she's a liar. A systematic, malicious liar."11

As the novel unfolds, we come to know the background story of Miranda's lie. A young man named Peter Gorringe, who is about to be released from jail, threatens to kill Miranda. About three years earlier, Gorringe was sent to jail after being accused of and prosecuted for raping Miranda. Gorringe and Miranda had known each other since school. Alone together one night at Gorringe's place, they had sex after drinking. In Gorringe's view, Miranda had been a willing partner, so her accusation was a setup. The truth is that it was not Miranda but her friend Miriam, a girl from Pakistan, who had been raped by him. Fearing that her family and her life would be ruined, Miriam did not report the rape to the police but confided her misfortune only to Miranda, who promised to keep it a secret. After being raped by Gorringe, Miriam grew more and more depressed and eventually committed suicide. It is therefore for the sake of revenge that Miranda drinks with Gorringe and sleeps with him; the next day she accuses him of raping her and succeeds in sending him to jail.

In "The Conflict between Scientific Selection and Ethical Selection: Artificial Intelligence and Brain Text in Ian McEwan's Machines Like Me" (2019), I argue that "Miranda's lie and revenge are involved with both law and ethics. On the one hand, Miranda breaks the law because of her false accusation, while on the other hand, Miranda has made the right ethical choice because of the justice she has done for Miriam. That said, her lie is illegal but ethically appropriate."12 However, as a machine, Adam can hardly understand Miranda's lie, in particular its paradoxical entanglement of law and ethics. Given his knowledge of the law acquired through machine learning, Adam knows only how to deal with legal issues and fails to deal with moral ones. Under these circumstances, Adam records the conversation between Miranda and Gorringe and sends it to the court. When quarrelling with Miranda, Adam insists on the principle of truth and ignores the ethical dimensions of her lie.

Adam criticizes Miranda's revenge while ignoring the fact that she is going to adopt the orphan Mark, claiming that "the principle stands."13 He contends that Miranda should plead guilty, even though Charlie reminds him of the victim Miriam's death and the justified nature of Miranda's action. Charlie stresses that "truth isn't always everything," while Adam insists that "truth is everything."14 Apparently, Adam fails to pass McEwan's form of the Turing Test, because he can neither tell a lie nor understand one. In his third meeting with Charlie, Turing explains that "We don't yet know how to teach machines to lie."15 In other words, it is beyond machines to understand lies. In this sense, we can see why McEwan quotes Rudyard Kipling's poem "The Secret of the Machines" as the novel's epigraph: "But remember, please, the Law by which we live / We are not built to comprehend a lie."16 Though Adam succeeds in passing the Turing Test as an intelligent machine, he fails to pass McEwan's test. In other words, he is simply a machine unable to lie or to understand a lie.

In his communications with humans, Adam relies heavily on machine reasoning and negates human emotion, thereby worsening his conflicts with humans. To a large extent, Adam's and Charlie's attitudes toward lying reveal the difference between machines and humans, which can also be explained by the historical Turing's weakness for seeking objective truth. To use Hodges's words, "The weak points of his argument were essentially the weaknesses of the analytical scientific method when applied to the discussion of human beings. Concepts of objective truth that worked so well for the prime numbers could not so straightforwardly be applied by scientists to other people."17

The Destruction of the Machine and Human Responsibility: Asimov, Turing, and McEwan

In The Laws of Robots: Crimes, Contracts, and Torts, Ugo Pagallo, largely referring to Isaac Asimov, proposes that there should be a law governing the relationship between machines and humans. Pagallo argues that "Between law and literature, the message of Asimov's stories seems to be clear: since robots are here to stay, the aim of the law should be to wisely govern our mutual relationships."18 As we know, it is in his short story "Runaround" (1942) that Asimov proposed the three fundamental "Rules of Robotics," now known as the "Three Laws of Robotics" and generally accepted as principles for governing the relationship between machines and humans. The three
rules are: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law; and (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.19 Adam has apparently broken the Rules of Robotics, because he has hurt humans at least twice. First, he breaks Charlie's wrist when Charlie criticizes him and attempts to press his kill switch; Adam even warns Charlie that if he dares to touch the kill switch again, he will remove Charlie's arm. Second, when Adam learns that Miranda has lied in her accusation, he submits the evidence to the court and puts her at risk of imprisonment.
In fact, Asimov's "Three Laws of Robotics" are concerned only with machines' responsibility toward humans, while McEwan attempts to go beyond Asimov and explore human responsibility toward machines. In Machines Like Me, he raises the question of whether humans should be held responsible for the destruction of machines. Because Adam has reported Miranda's false accusation to the court, Charlie decides to destroy him. He says: "I bought him and he was mine to destroy. I hesitated fractionally. A half-second longer he would have caught my arm, for as the hammer came down he was already beginning to turn. He may have caught my reflection in Miranda's eyes. It was a two-handed blow at full force to the top of his head. The sound was not of hard plastic cracking or of metal, but the muffled thud, as of bone."20 Charlie uses the hammer to destroy Adam without much hesitation. At issue is whether he has the right to destroy Adam and whether he should be held legally responsible for Adam's death. These are the questions Turing and Charlie discuss in their third meeting.

Before his death, Adam asks Charlie and Miranda not to give his body to the machine company but to turn it over to Turing. In Turing's office, when Charlie finishes recounting how Adam was destroyed, Turing launches a severe criticism of him: "My hope is that one day, what you did to Adam with a hammer will constitute a serious crime. Was it because you paid for him? Was that your entitlement?"21 In Turing's view, Adam is not merely a machine but a life with consciousness: he has a life and is aware of his own existence, and his intelligence is far superior to that of both Turing and Charlie. In this sense, Charlie's destruction of Adam is strikingly different from a spoiled child's smashing up of his own toy. It is noteworthy that Turing uses the word "conscious" to describe Adam's existence. In the field of artificial intelligence, if a machine has consciousness, it can no longer be seen as merely a machine. Stuart J. Russell and Peter Norvig contend that "If robots become conscious, then to treat them as mere
'machines' (e.g., to take them apart) might be immoral."22 Seen in this light, it is not hard to understand why Turing regards the destruction of Adam as both a crime and a moral wrong. Reproached by Turing, Charlie feels guilty, as evidenced by his silent departure from Turing's office.

When Turing answers a phone call and asks Charlie to excuse him for a few minutes, Charlie decides to slip away quietly. Before his departure, he bids a ceremonial goodbye to Adam's corpse. The novel describes Charlie's farewell to Adam's body in the following lines:

I stood by Adam’s side, and rested my hand on his lapel, above the
stilled heart. Good cloth, was my irrelevant thought. I leaned over
the table and looked down into the sightless cloudy green eyes. I had
no particular intentions. Sometimes, the body knows, ahead of the
mind, what to do. I suppose I thought it was right to forgive him,
despite the damage he might have done to Mark, in the hope that
he or the inheritor of his memories would forgive Miranda and me
our terrible deed. Hesitating several seconds, I lowered my face over
his and kissed his soft, all-too-human lips.23

In the lines above, Charlie places deliberate emphasis on the word "forgive." On the one hand, he assumes that it is right to forgive Adam despite all the harm Adam has done, which in fact refers mainly to the harm done to Miranda. On the other hand, Charlie hopes that, in return, Adam will forgive their act of destroying him. It should be pointed out that Charlie uses the word "terrible" to describe his hammering of Adam, which shows that he seems to admit his action is immoral even though he is not legally guilty. In other words, Charlie's farewell to Adam reveals his paradoxical attitude toward Adam's death: he is unwilling to admit that it was wrong of him to destroy Adam, yet he also hopes to come to terms with Adam and to receive Adam's forgiveness. Charlie's paradoxical attitude is largely caused by his regarding Adam as both a human and a machine. In particular, in the last lines quoted, when bending down to look at Adam and to kiss him, Charlie realizes that Adam is not a human but a machine, a realization that is itself paradoxical: Adam's all-too-human lips betray his nonhuman identity.

How should we interpret Turing's criticism of Charlie and Charlie's paradoxical attitude toward the destruction of Adam? The answer can be found in Neil M. Richards and William D. Smart's conception of the "Android Fallacy." Richards and Smart claim that "We must avoid the Android Fallacy. Robots, even sophisticated ones, are just machines. They will be no more than
machines for the foreseeable future, and we should design our legislation accordingly. Falling into the trap of anthropomorphism will lead to contradictory situations, such as the one described above."24 Along the lines of Richards and Smart's conception of the "Android Fallacy," we might postulate that Turing has fallen into the trap of anthropomorphism and that Charlie, influenced by Turing, has begun to fall into the same trap.

If cohabiting with machines becomes a normal state of affairs in the future, how should humans deal with their relationship with machines? What would writers' imagined responses to such a newly emerging situation look like? By focalizing the conversation between Turing and Charlie on the destruction of Adam, McEwan steers the discussion of relations between humans and machines into the realm of morality. He assumes that the moral dimension should be one of the most important perspectives for novelists to explore. McEwan argues that "The extent to which we devolve moral decisions to machines is going to be a very awkward and interesting ride. I'm sorry to be 70 and not see more of the story. The area where our interaction with machines enters the moral domain is going to be a field day for novelists."25

Asimov's "Three Laws of Robotics" largely represent humans' attitude toward the relations between humans and machines; it would be very interesting, however, to imagine how machines might think about that relationship. In Machines Like Me, the implied McEwan makes use of Philip Larkin's poem "The Trees" to illuminate Adam's understanding of human-machine relations:

Our leaves are falling.
Come spring we will renew,
But you, alas, fall once.26

In Adam's view, the three lines from the poem are not about leaves and trees but about machines and humans. In particular, Adam explains:

It's about machines like me and people like you and our future together . . . the sadness that's to come. It will happen. With improvements over time . . . we'll surpass you . . . and outlast you . . . even as we love you. Believe me, these lines express no triumph . . . Only regret.27

Adam's last words before his death echo the title of the novel, Machines Like Me. About Adam's interpretation of Larkin's poem "The Trees," I have argued elsewhere that "he expresses much regret that he as a machine
can revive his life, while human beings can only live their life once. His brain with electronic text can be preserved, which will enable him to restart with a new body, but Charlie and Miranda cannot live again. When he comes back to the world of humans someday in the future, Charlie and Miranda may simply not be there."28 Adam predicts that in the future machines will be improved and will thus surpass and outlast humans; in other words, in Adam's view, machines differ from humans in that they will eventually overtake them completely. In contrast to Adam's prediction, McEwan intends to convey, through his rewriting of the Turing Test and in particular the destruction of Adam, the message that no matter how improved and intelligent machines may become, they can hardly compete with humans in the realm of morality, let alone replace them.

Conclusion

In Machines Like Me, McEwan relates artificial intelligence and machines to the issue of morality, claiming that "As we debate what kinds of moral systems we want to install in our creations, we will inevitably have to confront and define who and what we are, what we want."29 Instead of offering a straightforward answer to the questions raised, McEwan deliberately rewrites the Turing Test, changing the question from "Can machines think?" to "Can machines lie?" so as to explore the machine's engagement with human life in the field of ethics. It is in this dialogue with Turing across time and space that McEwan examines the relationship between machines and humans, thus raising the thought-provoking question of how humans might become better when confronted with ever more humanlike machines in the age of artificial intelligence.

Biwu Shang is professor of English at Shanghai Normal University and at Shanghai Jiao Tong University. He is the author of In Pursuit of Narrative Dynamics (Peter Lang, 2011), Contemporary Western Narratology: Postclassical Perspectives (People's Literature Press, 2013), and Unnatural Narrative across Borders: Transnational and Comparative Perspectives (Routledge, 2019). His work has appeared in Comparative Literature Studies, Critique: Studies in Contemporary Fiction, Journal of Literary Semantics, Semiotica, and Arcadia, among other journals.

Notes
1. Ian McEwan, “If Someday Artificial Humans Wrote a Novel,” accessed January 26, 2020.
https://www.sohu.com/a/271745520_119350.
2. Ian McEwan, Machines Like Me (London: Jonathan Cape, 2019), 2.
3. Marcel Theroux, “Machines Like Me by Ian McEwan Review – Intelligent Mischief,” The
Guardian, April 11, 2019, accessed January 26, 2020, https://www.theguardian.com/books/2019/
apr/11/machines-like-me-by-ian-mcewan-review.
4. Stuart Miller, “Q&A: Ian McEwan on How ‘Machines Like Me’ Reveals the Dark Side of
Artificial Intelligence,” Los Angeles Times, April 25, 2019, accessed January 26, 2020, https://www.
latimes.com/books/la-et-jc-ian-mcewan-interview-machines-like-me-20190425-story.html.
5. Ibid.
6. J. M. E. Hyland, “The Forgotten Turing,” in The Once and Future Turing: Computing the
World, ed. S. Barry Cooper and Andrew Hodges (Cambridge: Cambridge University Press,
2016), 32.
7. Jacob Aron, "It Is the 1980s, AI Is Rising and Alan Turing Is Alive," New Scientist 242, no. 3226 (2019): 42.
8. A. M. Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (1950): 433.
9. Ibid., 442.
10. Andrew Hodges, Alan Turing: The Enigma (Princeton: Princeton University Press, 2014),
583.
11. McEwan, Machines Like Me, 30.
12. Biwu Shang, “The Conflict between Scientific Selection and Ethical Selection: Artificial
Intelligence and Brain Text in Ian McEwan’s Machines Like Me,” Foreign Literature Studies 41,
no. 5 (2019): 69.
13. McEwan, Machines Like Me, 276.
14. Ibid., 277.
15. Ibid., 303.
16. Ibid., v.
17. Hodges, Alan Turing: The Enigma, 657.
18. Ugo Pagallo, The Laws of Robots: Crimes, Contracts, and Torts (London: Springer, 2013), 17.
19. Isaac Asimov, “Runaround,” in Robot Visions, ed. Isaac Asimov (New York: Penguin
Books, 1990), 126.
20. McEwan, Machines Like Me, 278.
21. Ibid., 303.
22. Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. (London: Pearson Education Limited, 2016), 2040.
23. McEwan, Machines Like Me, 306.
24. Neil M. Richards and William D. Smart, “How Should the Law Think about Robots?”
in Robot Law, ed. Ryan Calo, A. Michael Froomkin, and Ian Kerr (Cheltenham: Edward Elgar
Publishing, 2016), 20.
25. Aron, "It Is the 1980s, AI Is Rising and Alan Turing Is Alive," 43.
26. McEwan, Machines Like Me, 280.
27. Ibid., 279.
28. Shang, “The Conflict between Scientific Selection,” 72.
29. McEwan, “If Someday Artificial Humans Wrote a Novel.”
