
Chinese room

The Chinese room argument holds that a computer executing a program cannot have a mind,
understanding, or consciousness,[a] regardless of how intelligently or human-like the program may make
the computer behave. The argument was presented in a 1980 paper by the philosopher John Searle
entitled "Minds, Brains, and Programs" and published in the journal Behavioral and Brain Sciences.[1]
Before Searle, similar arguments had been presented by figures including Gottfried Wilhelm Leibniz
(1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has
been widely discussed in the years since.[2] The centerpiece of Searle's argument is a thought experiment
known as the Chinese room.[3]

The thought experiment starts by placing a computer that can perfectly converse in Chinese in one room,
and a human who knows only English in another, with a door separating them. Chinese characters are
written on paper and slipped underneath the door, and the computer replies fluently in Chinese, slipping
its reply back underneath the door. The human is then given English instructions that replicate the
instructions and function of the computer program for conversing in Chinese. The human follows the
instructions and, like the computer, can carry on a perfect conversation in Chinese, but the human still
does not actually understand the characters; he is merely following instructions. Searle states that the
computer and the human are performing identical tasks, following instructions without truly understanding
or "thinking".

The argument is directed against the philosophical positions of functionalism and computationalism,[4]
which hold that the mind may be viewed as an information-processing system operating on formal
symbols, and that simulation of a given mental state is sufficient for its presence. Specifically, the
argument is intended to refute a position Searle calls the strong AI hypothesis:[b] "The appropriately
programmed computer with the right inputs and outputs would thereby have a mind in exactly the same
sense human beings have minds."[c]

Although its proponents originally presented the argument in reaction to statements of artificial
intelligence (AI) researchers, it is not an argument against the goals of mainstream AI research because it
does not show a limit in the amount of intelligent behavior a machine can display.[5] The argument
applies only to digital computers running programs and does not apply to machines in general.[6] While
widely discussed, the argument has been subject to significant criticism and remains controversial among
philosophers of mind and AI researchers.[7][8]

Searle's thought experiment


Suppose that artificial intelligence research has succeeded in programming a computer to behave as if it
understands Chinese. The machine accepts Chinese characters as input, carries out each instruction of the
program step by step, and then produces Chinese characters as output. The machine does this so perfectly
that no one can tell that they are communicating with a machine and not a hidden Chinese speaker.
The questions at issue are these: does the machine actually understand the conversation, or is it just
simulating the ability to understand the conversation? Does the machine have a mind in exactly the same
sense that people do, or is it just acting as if it has a mind?

Now suppose that Searle is in a room with an English version of the program, along with sufficient
pencils, paper, erasers and filing cabinets. Chinese characters are slipped in under the door, he follows the
program step-by-step, which eventually instructs him to slide other Chinese characters back out under the
door. If the computer had passed the Turing test this way, it follows that Searle would do so as well,
simply by running the program by hand.
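
A minimal sketch in Python, purely for illustration (the rulebook entries below are invented placeholders; a program that could actually pass the Turing test would need enormously more rules and internal state), makes the purely syntactic character of the clerk's task concrete:

    # Illustrative sketch only, not Searle's own formulation: the "rulebook" is a lookup
    # from input character strings to output character strings, and the clerk applies it
    # by matching shapes alone, never interpreting either side.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",
        "你会说中文吗？": "会，当然。",
    }

    def clerk(input_symbols: str) -> str:
        """Mechanically follow the rulebook; emit a fixed fallback string if no rule matches."""
        return RULEBOOK.get(input_symbols, "对不起，我不明白。")

    print(clerk("你好吗？"))  # a fluent-looking reply is produced without any understanding

Whether the rules are applied by silicon or by a person with pencil and paper changes nothing about the procedure itself, which is the point Searle presses next.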

Searle asserts that there is no essential difference between the roles of the computer and himself in the
experiment. Each simply follows a program, step-by-step, producing behavior that makes them appear to
understand. However, Searle would not be able to understand the conversation. Therefore, he argues, it
follows that the computer would not be able to understand the conversation either.

Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is
doing as "thinking" and, since it does not think, it does not have a "mind" in the normal sense of the
word. Therefore, he concludes that the strong AI hypothesis is false: a computer running a program that
simulates a mind would not have a mind in the same sense that human beings have a mind.

History
Gottfried Leibniz made a similar argument in 1714 against mechanism (the idea that everything that
makes up a human being could, in principle, be explained in mechanical terms; in other words, that a
person, including their mind, is merely a very complex machine). Leibniz used the thought experiment of
expanding the brain until it was the size of a mill.[9] Leibniz found it difficult to imagine that a "mind"
capable of "perception" could be constructed using only mechanical processes.[d]

Peter Winch made the same point in his book The Idea of a Social Science and its Relation to Philosophy
(1958), where he provides an argument to show that "a man who understands Chinese is not a man who
has a firm grasp of the statistical probabilities for the occurrence of the various words in the Chinese
language" (p. 108).

Soviet cyberneticist Anatoly Dneprov made an essentially identical argument in 1961, in the form of the
short story "The Game". In it, a stadium of people act as switches and memory cells implementing a
program to translate a sentence of Portuguese, a language that none of them know.[10] The game was
organized by a "Professor Zarubin" to answer the question "Can mathematical machines think?" Speaking
through Zarubin, Dneprov writes "the only way to prove that machines can think is to turn yourself into a
machine and examine your thinking process" and he concludes, as Searle does, "We've proven that even
the most perfect simulation of machine thinking is not the thinking process itself."

In 1974, Lawrence H. Davis imagined duplicating the brain using telephone lines and offices staffed by
people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain
simulation. This thought experiment is called the China brain, also the "Chinese Nation" or the "Chinese
Gym".[11]
Searle's version appeared in his 1980 paper "Minds, Brains, and
Programs", published in Behavioral and Brain Sciences.[1] It
eventually became the journal's "most influential target article",[2]
generating an enormous number of commentaries and responses in
the ensuing decades, and Searle has continued to defend and refine
the argument in many papers, popular articles and books. David
Cole writes that "the Chinese Room argument has probably been
the most widely discussed philosophical argument in cognitive
science to appear in the past 25 years".[12]

Most of the discussion consists of attempts to refute it. "The overwhelming majority", notes Behavioral
and Brain Sciences editor Stevan Harnad,[e] "still think that the Chinese Room Argument is dead
wrong".[13] The sheer volume of the literature that has grown up around it inspired Pat Hayes to comment
that the field of cognitive science ought to be redefined as "the ongoing research program of showing
Searle's Chinese Room Argument to be false".[14]

(Photo caption: John Searle in December 2005)

Searle's argument has become "something of a classic in cognitive science", according to Harnad.[13]
Varol Akman agrees, and has described the original paper as "an exemplar of philosophical clarity and
purity".[15]

Philosophy
Although the Chinese Room argument was originally presented in reaction to the statements of artificial
intelligence researchers, philosophers have come to consider it as an important part of the philosophy of
mind. It is a challenge to functionalism and the computational theory of mind,[f] and is related to such
questions as the mind–body problem, the problem of other minds, the symbol grounding problem, and the
hard problem of consciousness.[a]

Strong AI
Searle identified a philosophical position he calls "strong AI":

The appropriately programmed computer with the right inputs and outputs would thereby have
a mind in exactly the same sense human beings have minds.[c]

The definition depends on the distinction between simulating a mind and actually having one. Searle
writes that "according to Strong AI, the correct simulation really is a mind. According to Weak AI, the
correct simulation is a model of the mind."[22]

The claim is implicit in some of the statements of early AI researchers and analysts. For example, in
1955, AI founder Herbert A. Simon declared that "there are now in the world machines that think, that
learn and create".[23] Simon, together with Allen Newell and Cliff Shaw, after having completed the first
program that could do formal reasoning (the Logic Theorist), claimed that they had "solved the venerable
mind–body problem, explaining how a system composed of matter can have the properties of mind."[24]
John Haugeland wrote that "AI wants only the genuine article: machines with minds, in the full and literal
sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is
daring: namely, we are, at root, computers ourselves."[25]

Searle also ascribes the following claims to advocates of strong AI:

AI systems can be used to explain the mind;[20]


The study of the brain is irrelevant to the study of the mind;[g] and
The Turing test is adequate for establishing the existence of mental states.[h]

Strong AI as computationalism or functionalism


In more recent presentations of the Chinese room argument, Searle has identified "strong AI" as
"computer functionalism" (a term he attributes to Daniel Dennett).[4][30] Functionalism is a position in
modern philosophy of mind that holds that we can define mental phenomena (such as beliefs, desires, and
perceptions) by describing their functions in relation to each other and to the outside world. Because a
computer program can accurately represent functional relationships as relationships between symbols, a
computer can have mental phenomena if it runs the right program, according to functionalism.

Stevan Harnad argues that Searle's depictions of strong AI can be reformulated as "recognizable tenets of
computationalism, a position (unlike "strong AI") that is actually held by many thinkers, and hence one
worth refuting."[31] Computationalism[i] is the position in the philosophy of mind which argues that the
mind can be accurately described as an information-processing system.

Each of the following, according to Harnad, is a "tenet" of computationalism:[34]

Mental states are computational states (which is why computers can have mental states and
help to explain the mind);
Computational states are implementation-independent—in other words, it is the software
that determines the computational state, not the hardware (which is why the brain, being
hardware, is irrelevant); and that
Since implementation is unimportant, the only empirical data that matters is how the system
functions; hence the Turing test is definitive.
Recent philosophical discussions have revisited the implications of computationalism for artificial
intelligence. Goldstein and Levinstein explore whether large language models (LLMs) like ChatGPT can
possess minds, focusing on their ability to exhibit folk psychology, including beliefs, desires, and
intentions. The authors argue that LLMs satisfy several philosophical theories of mental representation,
such as informational, causal, and structural theories, by demonstrating robust internal representations of
the world. However, they highlight that the evidence for LLMs having action dispositions necessary for
belief-desire psychology remains inconclusive. Additionally, they refute common skeptical challenges,
such as the "stochastic parrots" argument and concerns over memorization, asserting that LLMs exhibit
structured internal representations that align with these philosophical criteria.[35]

Building on this discourse, Kristina Šekrst highlights how AI hallucinations offer a unique perspective on
computationalism. While functionalism defines mental states through their functional relationships, the
emergence of hallucinations in AI systems reveals the limitations of such states when divorced from
intrinsic understanding. These hallucinations, though arising from accurate functional representations,
underscore the gap between computational reliability and the ontological complexity of human mental
states. By doing so, they challenge the adequacy of functional accuracy in attributing mental phenomena
to AI systems within a computationalist framework.[36]

David Chalmers suggests that while current LLMs lack features like recurrent processing and unified
agency, advancements in AI could address these limitations within the next decade, potentially enabling
systems to achieve consciousness. This perspective challenges Searle's original claim that purely
"syntactic" processing cannot yield understanding or consciousness, arguing instead that such systems
could have authentic mental states.[37]

Strong AI vs. biological naturalism


Searle holds a philosophical position he calls "biological naturalism": that consciousness[a] and
understanding require specific biological machinery that is found in brains. He writes "brains cause
minds"[38] and that "actual human mental phenomena [are] dependent on actual physical–chemical
properties of actual human brains".[38] Searle argues that this machinery (known in neuroscience as the
"neural correlates of consciousness") must have some causal powers that permit the human experience of
consciousness.[39] Searle's belief in the existence of these powers has been criticized.

Searle does not disagree with the notion that machines can have consciousness and understanding,
because, as he writes, "we are precisely such machines".[6] Searle holds that the brain is, in fact, a
machine, but that the brain gives rise to consciousness and understanding using specific machinery. If
neuroscience is able to isolate the mechanical process that gives rise to consciousness, then Searle grants
that it may be possible to create machines that have consciousness and understanding. However, without
the specific machinery required, Searle does not believe that consciousness can occur.

Biological naturalism implies that one cannot determine if the experience of consciousness is occurring
merely by examining how a system functions, because the specific machinery of the brain is essential.
Thus, biological naturalism is directly opposed to both behaviorism and functionalism (including
"computer functionalism" or "strong AI").[40] Biological naturalism is similar to identity theory (the
position that mental states are "identical to" or "composed of" neurological events); however, Searle has
specific technical objections to identity theory.[41][j] Searle's biological naturalism and strong AI are both
opposed to Cartesian dualism,[40] the classical idea that the brain and mind are made of different
"substances". Indeed, Searle accuses strong AI of dualism, writing that "strong AI only makes sense given
the dualistic assumption that, where the mind is concerned, the brain doesn't matter".[26]

Consciousness
Searle's original presentation emphasized understanding—that is, mental states with intentionality—and
did not directly address other closely related ideas such as "consciousness". However, in more recent
presentations, Searle has included consciousness as the real target of the argument.[4]

Computational models of consciousness are not sufficient by themselves for consciousness. The
computational model for consciousness stands to consciousness in the same way the
computational model of anything stands to the domain being modelled. Nobody supposes that
the computational model of rainstorms in London will leave us all wet. But they make the
mistake of supposing that the computational model of consciousness is somehow conscious. It
is the same mistake in both cases.[42]

— John R. Searle, Consciousness and Language, p. 16

David Chalmers writes, "it is fairly clear that consciousness is at the root of the matter" of the Chinese
room.[43]

Colin McGinn argues that the Chinese room provides strong evidence that the hard problem of
consciousness is fundamentally insoluble. The argument, to be clear, is not about whether a machine can
be conscious, but about whether it (or anything else for that matter) can be shown to be conscious. It is
plain that any other method of probing the occupant of a Chinese room has the same difficulties in
principle as exchanging questions and answers in Chinese. It is simply not possible to divine whether a
conscious agency or some clever simulation inhabits the room.[44]

Searle argues that this is only true for an observer outside of the room. The whole point of the thought
experiment is to put someone inside the room, where they can directly observe the operations of
consciousness. Searle claims that from his vantage point within the room there is nothing he can see that
could imaginably give rise to consciousness, other than himself, and clearly he does not have a mind that
can speak Chinese. In Searle's words, "the computer has nothing more than I have in the case where I
understand nothing".[45]

Applied ethics
Patrick Hew used the Chinese Room argument to deduce requirements that military command and control
systems must meet if they are to preserve a commander's moral agency. He drew an analogy between a
commander in their command center and the person in the Chinese Room, and analyzed it under a reading of
Aristotle's notions of "compulsory" and "ignorance". Information could be "down converted" from meaning
to symbols, and manipulated symbolically, but moral agency could be undermined if there was inadequate
"up conversion" into meaning. Hew cited examples from the USS Vincennes incident.[46]

(Image caption: Sitting in the combat information center aboard a warship, proposed as a real-life analog
to the Chinese room)
Computer science
The Chinese room argument is primarily an argument in the philosophy of mind, and both major
computer scientists and artificial intelligence researchers consider it irrelevant to their fields.[5] However,
several concepts developed by computer scientists are essential to understanding the argument, including
symbol processing, Turing machines, Turing completeness, and the Turing test.

Strong AI vs. AI research


Searle's arguments are not usually considered an issue for AI research. The primary mission of artificial
intelligence research is only to create useful systems that act intelligently and it does not matter if the
intelligence is "merely" a simulation. AI researchers Stuart J. Russell and Peter Norvig wrote in 2021:
"We are interested in programs that behave intelligently. Individual aspects of consciousness—awareness,
self-awareness, attention—can be programmed and can be part of an intelligent machine. The additional
project of making a machine conscious in exactly the way humans are is not one that we are equipped to
take on."[5]

Searle does not disagree that AI research can create machines that are capable of highly intelligent
behavior. The Chinese room argument leaves open the possibility that a digital machine could be built
that acts more intelligently than a person, but does not have a mind or intentionality in the same way that
brains do.

Searle's "strong AI hypothesis" should not be confused with "strong AI" as defined by Ray Kurzweil and
other futurists,[47][21] who use the term to describe machine intelligence that rivals or exceeds human
intelligence—that is, artificial general intelligence, human level AI or superintelligence. Kurzweil is
referring primarily to the amount of intelligence displayed by the machine, whereas Searle's argument
sets no limit on this. Searle argues that a superintelligent machine would not necessarily have a mind and
consciousness.

Turing test
The Chinese room implements a version of the Turing test.[49]
Alan Turing introduced the test in 1950 to help answer the
question "can machines think?" In the standard version, a human
judge engages in a natural language conversation with a human
and a machine designed to generate performance indistinguishable
from that of a human being. All participants are separated from
one another. If the judge cannot reliably tell the machine from the
human, the machine is said to have passed the test.

Turing then considered each possible objection to the proposal "machines can think", and found that there
are simple, obvious answers if the question is de-mystified in this way. He did not, however, intend for
the test to measure for the presence of "consciousness" or "understanding". He did not believe this was
relevant to the issues that he was addressing. He wrote:

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for
instance, something of a paradox connected with any attempt to localise it. But I do not think these
mysteries necessarily need to be solved before we can answer the question with which we are concerned in
this paper.[49]

(Image caption: The "standard interpretation" of the Turing Test, in which player C, the interrogator, is
given the task of trying to determine which player—A or B—is a computer and which is a human. The
interrogator is limited to using the responses to written questions to make the determination. Image
adapted from Saygin, et al. 2000.[48])
To Searle, as a philosopher investigating the nature of mind and consciousness, these are the relevant
mysteries. The Chinese room is designed to show that the Turing test is insufficient to detect the presence
of consciousness, even if the room can behave or function as a conscious mind would.

Symbol processing
Computers manipulate physical objects in order to carry out calculations and do simulations. AI
researchers Allen Newell and Herbert A. Simon called this kind of machine a physical symbol system. It
is also equivalent to the formal systems used in the field of mathematical logic.

Searle emphasizes the fact that this kind of symbol manipulation is syntactic (borrowing a term from the
study of grammar). The computer manipulates the symbols using a form of syntax, without any
knowledge of the symbols' semantics (that is, their meaning).

Newell and Simon had conjectured that a physical symbol system (such as a digital computer) had all the
necessary machinery for "general intelligent action", or, as it is known today, artificial general
intelligence. They framed this as a philosophical position, the physical symbol system hypothesis: "A
physical symbol system has the necessary and sufficient means for general intelligent action."[50][51] The
Chinese room argument does not refute this, because it is framed in terms of "intelligent action", i.e. the
external behavior of the machine, rather than the presence or absence of understanding, consciousness
and mind.

Twenty-first century AI programs (such as "deep learning") do mathematical operations on huge matrices
of unidentified numbers and bear little resemblance to the symbolic processing used by AI programs at
the time Searle wrote his critique in 1980. Nils Nilsson describes systems like these as "dynamic" rather
than "symbolic". Nilsson notes that these are essentially digitized representations of dynamic systems—
the individual numbers do not have a specific semantics, but are instead samples or data points from a
dynamic signal, and it is the signal being approximated which would have semantics. Nilsson argues it is
not reasonable to consider these signals as "symbol processing" in the same sense as the physical symbol
systems hypothesis.[52]

Chinese room and Turing completeness


The Chinese room has a design analogous to that of a modern computer. It has a Von Neumann
architecture, which consists of a program (the book of instructions), some memory (the papers and file
cabinets), a machine that follows the instructions (the man), and a means to write symbols in memory
(the pencil and eraser). A machine with this design is known in theoretical computer science as "Turing
complete", because it has the necessary machinery to carry out any computation that a Turing machine
can do, and therefore it is capable of doing a step-by-step simulation of any other digital machine, given
enough memory and time. Turing writes, "all digital computers are in a sense equivalent."[53] The widely
accepted Church–Turing thesis holds that any function computable by an effective procedure is
computable by a Turing machine.

The Turing completeness of the Chinese room implies that it can do whatever any other digital computer
can do (albeit much, much more slowly). Thus, if the Chinese room does not or cannot contain a
Chinese-speaking mind, then no other digital computer can contain a mind. Some replies to Searle begin
by arguing that the room, as described, cannot have a Chinese-speaking mind. Arguments of this form,
according to Stevan Harnad, are "no refutation (but rather an affirmation)"[54] of the Chinese room
argument, because these arguments actually imply that no digital computers can have a mind.[28]

There are some critics, such as Hanoch Ben-Yami, who argue that the Chinese room cannot simulate all
the abilities of a digital computer, such as being able to determine the current time.[55]

Complete argument
Searle has produced a more formal version of the argument of which the Chinese Room forms a part. He
presented the first version in 1984. The version given below is from 1990.[56][k] The Chinese room
thought experiment is intended to prove point A3.[l]

He begins with three axioms:

(A1) "Programs are formal (syntactic)."

A program uses syntax to manipulate symbols and pays no attention to the
semantics of the symbols. It knows where to put the symbols and how to move them
around, but it does not know what they stand for or what they mean. For the
program, the symbols are just physical objects like any others.

(A2) "Minds have mental contents (semantics)."

Unlike the symbols used by a program, our thoughts have meaning: they represent
things and we know what it is they represent.

(A3) "Syntax by itself is neither constitutive of nor sufficient for semantics."

This is what the Chinese room thought experiment is intended to prove: the Chinese
room has syntax (because there is a man in there moving symbols around). The
Chinese room has no semantics (because, according to Searle, there is no one or
nothing in the room that understands what the symbols mean). Therefore, having
syntax is not enough to generate semantics.

Searle posits that these lead directly to this conclusion:

(C1) Programs are neither constitutive of nor sufficient for minds.

This should follow without controversy from the first three: Programs don't have
semantics. Programs have only syntax, and syntax is insufficient for semantics.
Every mind has semantics. Therefore no programs are minds.

This much of the argument is intended to show that artificial intelligence can never produce a machine
with a mind by writing programs that manipulate symbols. The remainder of the argument addresses a
different issue. Is the human brain running a program? In other words, is the computational theory of
mind correct?[f] He begins with an axiom that is intended to express the basic modern scientific
consensus about brains and minds:

(A4) Brains cause minds.

Searle claims that we can derive "immediately" and "trivially"[57] that:


(C2) Any other system capable of causing minds would have to have causal powers (at
least) equivalent to those of brains.

Brains must have something that causes a mind to exist. Science has yet to
determine exactly what it is, but it must exist, because minds exist. Searle calls it
"causal powers". "Causal powers" is whatever the brain uses to create a mind. If
anything else can cause a mind to exist, it must have "equivalent causal powers".
"Equivalent causal powers" is whatever else that could be used to make a mind.

And from this he derives the further conclusions:

(C3) Any artifact that produced mental phenomena, any artificial brain, would have to be
able to duplicate the specific causal powers of brains, and it could not do that just by
running a formal program.

This follows from C1 and C2: Since no program can produce a mind, and
"equivalent causal powers" produce minds, it follows that programs do not have
"equivalent causal powers."

(C4) The way that human brains actually produce mental phenomena cannot be solely by
virtue of running a computer program.

Since programs do not have "equivalent causal powers", "equivalent causal powers"
produce minds, and brains produce minds, it follows that brains do not use programs
to produce minds.

Refutations of Searle's argument take many different forms (see below). Computationalists and
functionalists reject A3, arguing that "syntax" (as Searle describes it) can have "semantics" if the syntax
has the right functional structure. Eliminative materialists reject A2, arguing that minds don't actually
have "semantics"—that thoughts and other mental phenomena are inherently meaningless but
nevertheless function as if they had meaning.

Replies
Replies to Searle's argument may be classified according to what they claim to show:[m]

Those which identify who speaks Chinese


Those which demonstrate how meaningless symbols can become meaningful
Those which suggest that the Chinese room should be redesigned in some way
Those which contend that Searle's argument is misleading
Those which argue that the argument makes false assumptions about subjective conscious
experience and therefore proves nothing
Some of the arguments (robot and brain simulation, for example) fall into multiple categories.

Systems and virtual mind replies: finding the mind


These replies attempt to answer the question: since the man in the room does not speak Chinese, where is
the mind that does? These replies address the key ontological issues of mind versus body and simulation
vs. reality. All of the replies that identify the mind in the room are versions of "the system reply".

System reply
The basic version of the system reply argues that it is the "whole system" that understands Chinese.[62][n]
While the man understands only English, when he is combined with the program, scratch paper, pencils
and file cabinets, they form a system that can understand Chinese. "Here, understanding is not being
ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part"
Searle explains.[29]

Searle notes that (in this simple version of the reply) the "system" is nothing more than a collection of
ordinary physical objects; it grants the power of understanding and consciousness to "the conjunction of
that person and bits of paper"[29] without making any effort to explain how this pile of objects has
become a conscious, thinking being. Searle argues that no reasonable person should be satisfied with the
reply, unless they are "under the grip of an ideology;"[29] In order for this reply to be remotely plausible,
one must take it for granted that consciousness can be the product of an information processing "system",
and does not require anything resembling the actual biology of the brain.

Searle then responds by simplifying this list of physical objects: he asks what happens if the man
memorizes the rules and keeps track of everything in his head? Then the whole system consists of just
one object: the man himself. Searle argues that if the man does not understand Chinese then the system
does not understand Chinese either because now "the system" and "the man" both describe exactly the
same object.[29]

Critics of Searle's response argue that the program has allowed the man to have two minds in one head. If
we assume a "mind" is a form of information processing, then the theory of computation can account for
two computations occurring at once, namely (1) the computation for universal programmability (which is
the function instantiated by the person and note-taking materials independently from any particular
program contents) and (2) the computation of the Turing machine that is described by the program (which
is instantiated by everything including the specific program).[64] The theory of computation thus formally
explains the open possibility that the second computation in the Chinese Room could entail a human-
equivalent semantic understanding of the Chinese inputs. The focus belongs on the program's Turing
machine rather than on the person's.[65] However, from Searle's perspective, this argument is circular. The
question at issue is whether consciousness is a form of information processing, and this reply requires
that we make that assumption.

More sophisticated versions of the systems reply try to identify more precisely what "the system" is and
they differ in exactly how they describe it. According to these replies, the "mind that speaks Chinese"
could be such things as: the "software", a "program", a "running program", a simulation of the "neural
correlates of consciousness", the "functional system", a "simulated mind", an "emergent property", or "a
virtual mind".

Virtual mind reply


Marvin Minsky suggested a version of the system reply known as the "virtual mind reply".[o] The term
"virtual" is used in computer science to describe an object that appears to exist "in" a computer (or
computer network) only because software makes it appear to exist. The objects "inside" computers
(including files, folders, and so on) are all "virtual", except for the computer's electronic components.
Similarly, Minsky argues that a computer may contain a "mind" that is virtual in the same sense as virtual
machines, virtual communities and virtual reality.

To clarify the distinction between the simple systems reply given above and virtual mind reply, David
Cole notes that two simulations could be running on one system at the same time: one speaking Chinese
and one speaking Korean. While there is only one system, there can be multiple "virtual minds," thus the
"system" cannot be the "mind".[69]

Searle responds that such a mind is at best a simulation, and writes: "No one supposes that computer
simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a
rainstorm will leave us all drenched."[70] Nicholas Fearn responds that, for some things, simulation is as
good as the real thing. "When we call up the pocket calculator function on a desktop computer, the image
of a pocket calculator appears on the screen. We don't complain that it isn't really a calculator, because the
physical attributes of the device do not matter."[71] The question is, is the human mind like the pocket
calculator, essentially composed of information, where a perfect simulation of the thing just is the thing?
Or is the mind like the rainstorm, a thing in the world that is more than just its simulation, and not
realizable in full by a computer simulation? For decades, this question of simulation has led AI
researchers and philosophers to consider whether the term "synthetic intelligence" is more appropriate
than the common description of such intelligences as "artificial."

These replies provide an explanation of exactly who it is that understands Chinese. If there is something
besides the man in the room that can understand Chinese, Searle cannot argue that (1) the man does not
understand Chinese, therefore (2) nothing in the room understands Chinese. This, according to those who
make this reply, shows that Searle's argument fails to prove that "strong AI" is false.[p]

These replies, by themselves, do not provide any evidence that strong AI is true, however. They do not
show that the system (or the virtual mind) understands Chinese, other than the hypothetical premise that it
passes the Turing test. Searle argues that, if we are to consider Strong AI remotely plausible, the Chinese
Room is an example that requires explanation, and it is difficult or impossible to explain how
consciousness might "emerge" from the room or how the system would have consciousness. As Searle
writes "the systems reply simply begs the question by insisting that the system must understand
Chinese"[29] and thus is dodging the question or hopelessly circular.

Robot and semantics replies: finding the meaning


As far as the person in the room is concerned, the symbols are just meaningless "squiggles." But if the
Chinese room really "understands" what it is saying, then the symbols must get their meaning from
somewhere. These arguments attempt to connect the symbols to the things they symbolize. These replies
address Searle's concerns about intentionality, symbol grounding and syntax vs. semantics.

Robot reply
Suppose that instead of a room, the program was placed into a robot that could wander around and
interact with its environment. This would allow a "causal connection" between the symbols and things
they represent.[73][q] Hans Moravec comments: "If we could graft a robot to a reasoning program, we
wouldn't need a person to provide the meaning anymore: it would come from the physical world."[75][r]

Searle's reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs
came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the
arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does
not know what the symbols mean. Searle writes "he doesn't see what comes into the robot's eyes."[77]

Derived meaning
Some respond that the room, as Searle describes it, is connected to the world: through the Chinese
speakers that it is "talking" to and through the programmers who designed the knowledge base in his file
cabinet. The symbols Searle manipulates are already meaningful, they are just not meaningful to
him.[78][s]

Searle says that the symbols only have a "derived" meaning, like the meaning of words in books. The
meaning of the symbols depends on the conscious understanding of the Chinese speakers and the
programmers outside the room. The room, like a book, has no understanding of its own.[t]

Contextualist reply
Some have argued that the meanings of the symbols would come from a vast "background" of
commonsense knowledge encoded in the program and the filing cabinets. This would provide a "context"
that would give the symbols their meaning.[76][u]

Searle agrees that this background exists, but he does not agree that it can be built into programs. Hubert
Dreyfus has also criticized the idea that the "background" can be represented symbolically.[81]

To each of these suggestions, Searle's response is the same: no matter how much knowledge is written
into the program and no matter how the program is connected to the world, he is still in the room
manipulating symbols according to rules. His actions are syntactic and this can never explain to him what
the symbols stand for. Searle writes "syntax is insufficient for semantics."[82][v]

However, for those who accept that Searle's actions simulate a mind, separate from his own, the important
question is not what the symbols mean to Searle; what is important is what they mean to the virtual mind.
While Searle is trapped in the room, the virtual mind is not: it is connected to the outside world through
the Chinese speakers it speaks to, through the programmers who gave it world knowledge, and through
the cameras and other sensors that roboticists can supply.

Brain simulation and connectionist replies: redesigning the room


These arguments are all versions of the systems reply that identify a particular kind of system as being
important; they identify some special technology that would create conscious understanding in a machine.
(The "robot" and "commonsense knowledge" replies above also specify a certain kind of system as being
important.)

Brain simulator reply


Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese
speaker.[84][w] This strengthens the intuition that there would be no significant difference between the
operation of the program and the operation of a live human brain.

Searle replies that such a simulation does not reproduce the important features of the brain—its causal
and intentional states. He is adamant that "human mental phenomena [are] dependent on actual physical–
chemical properties of actual human brains."[26] Moreover, he argues:

[I]magine that instead of a monolingual man in a room shuffling symbols we have the man
operate an elaborate set of water pipes with valves connecting them. When the man receives the
Chinese symbols, he looks up in the program, written in English, which valves he has to turn on
and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole
system is rigged up so that after doing all the right firings, that is after turning on all the right
faucets, the Chinese answers pop out at the output end of the series of pipes. Now where is the
understanding in this system? It takes Chinese as input, it simulates the formal structure of the
synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn't
understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think
is the absurd view that somehow the conjunction of man and water pipes understands,
remember that in principle the man can internalize the formal structure of the water pipes and
do all the "neuron firings" in his imagination.[86]

China brain
What if we ask each citizen of China to simulate one neuron, using the telephone system to simulate the
connections between axons and dendrites? In this version, it seems obvious that no individual would have
any understanding of what the brain might be saying.[87][x] It is also obvious that this system would be
functionally equivalent to a brain, so if consciousness is a function, this system would be conscious.

Brain replacement scenario


In this, we are asked to imagine that engineers have invented a tiny computer that simulates the action of
an individual neuron. What would happen if we replaced one neuron at a time? Replacing one would
clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer
that simulates a brain. If Searle is right, then conscious awareness must disappear during the procedure
(either gradually or all at once). Searle's critics argue that there would be no point during the procedure
when he can claim that conscious awareness ends and mindless simulation begins.[89][y][z] (See Ship of
Theseus for a similar thought experiment.)

Connectionist replies

Closely related to the brain simulator reply, this claims that a massively parallel
connectionist architecture would be capable of understanding.[aa] Modern deep learning is
massively parallel and has successfully displayed intelligent behavior in many domains.
Nils Nilsson argues that modern AI is using digitized "dynamic signals" rather than
symbols of the kind used by AI in 1980.[52] Here it is the sampled signal which would have
the semantics, not the individual numbers manipulated by the program. This is a different
kind of machine than the one that Searle visualized.

Combination reply

This response combines the robot reply with the brain simulation reply, arguing that a
brain simulation connected to the world through a robot body could have a mind.[94]

Many mansions / wait till next year reply

Better technology in the future will allow computers to understand.[27][ab] Searle agrees
that this is possible, but considers the point irrelevant: he allows that there may be
hardware other than brains that has conscious understanding.

These arguments (and the robot or common-sense knowledge replies) identify some special technology
that would help create conscious understanding in a machine. They may be interpreted in two ways:
either they claim (1) this technology is required for consciousness, the Chinese room does not or cannot
implement this technology, and therefore the Chinese room cannot pass the Turing test or (even if it did)
it would not have conscious understanding. Or they may be claiming that (2) it is easier to see that the
Chinese room has a mind if we visualize this technology as being used to create it.

In the first case, where features like a robot body or a connectionist architecture are required, Searle
claims that strong AI (as he understands it) has been abandoned.[ac] The Chinese room has all the
elements of a Turing complete machine, and thus is capable of simulating any digital computation
whatsoever. If Searle's room cannot pass the Turing test then there is no other digital technology that
could pass the Turing test. If Searle's room could pass the Turing test, but still does not have a mind, then
the Turing test is not sufficient to determine if the room has a "mind". Either way, it denies one or the
other of the positions Searle thinks of as "strong AI", proving his argument.

The brain arguments in particular deny strong AI if they assume that there is no simpler way to describe
the mind than to create a program that is just as mysterious as the brain was. He writes "I thought the
whole idea of strong AI was that we don't need to know how the brain works to know how the mind
works."[27] If computation does not provide an explanation of the human mind, then strong AI has failed,
according to Searle.
Other critics hold that the room as Searle described it does, in fact, have a mind; however, they argue that
it is difficult to see: Searle's description is correct, but misleading. By redesigning the room more
realistically they hope to make this more obvious. In this case, these arguments are being used as appeals
to intuition (see next section).

In fact, the room can just as easily be redesigned to weaken our intuitions. Ned Block's Blockhead
argument[95] suggests that the program could, in theory, be rewritten into a simple lookup table of rules of
the form "if the user writes S, reply with P and goto X". At least in principle, any program can be
rewritten (or "refactored") into this form, even a brain simulation.[ad] In the blockhead scenario, the entire
mental state is hidden in the letter X, which represents a memory address—a number associated with the
next rule. It is hard to visualize that an instant of one's conscious experience can be captured in a single
large number, yet this is exactly what "strong AI" claims. On the other hand, such a lookup table would
be ridiculously large (to the point of being physically impossible), and the states could therefore be overly
specific.
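
A minimal sketch of such a lookup-table machine, with invented entries, illustrates the idea; the only thing carried from one exchange to the next is the state number, here playing the role of Block's X:

    # Illustrative sketch of a Blockhead-style lookup table (the entries are invented):
    # each rule says "if the user writes S while the machine is in state X,
    # reply with P and go to state X'". The machine's entire "mental state" is X.
    TABLE = {
        (0, "hello"):        ("hi there", 1),
        (1, "how are you?"): ("fine, thanks", 2),
    }

    def respond(state, user_input):
        """Pure table lookup: nothing happens beyond finding the matching row."""
        return TABLE.get((state, user_input), ("...", state))

    state = 0
    reply, state = respond(state, "hello")         # reply == "hi there", state == 1
    reply, state = respond(state, "how are you?")  # reply == "fine, thanks", state == 2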

Searle argues that however the program is written or however the machine is connected to the world, the
mind is being simulated by a simple step-by-step digital machine (or machines). These machines are
always just like the man in the room: they understand nothing and do not speak Chinese. They are merely
manipulating symbols without knowing what they mean. Searle writes: "I can have any formal program
you like, but I still understand nothing."[96]

Speed and complexity: appeals to intuition


The following arguments (and the intuitive interpretations of the arguments above) do not directly explain
how a Chinese speaking mind could exist in Searle's room, or how the symbols he manipulates could
become meaningful. However, by raising doubts about Searle's intuitions they support other positions,
such as the system and robot replies. These arguments, if accepted, prevent Searle from claiming that his
conclusion is obvious by undermining the intuitions that his certainty requires.

Several critics believe that Searle's argument relies entirely on intuitions. Block writes "Searle's argument
depends for its force on intuitions that certain entities do not think."[97] Daniel Dennett describes the
Chinese room argument as a misleading "intuition pump"[98] and writes "Searle's thought experiment
depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the obvious
conclusion from it."[98]

Some of the arguments above also function as appeals to intuition, especially those that are intended to
make it seem more plausible that the Chinese room contains a mind, which can include the robot,
commonsense knowledge, brain simulation and connectionist replies. Several of the replies above also
address the specific issue of complexity. The connectionist reply emphasizes that a working artificial
intelligence system would have to be as complex and as interconnected as the human brain. The
commonsense knowledge reply emphasizes that any program that passed a Turing test would have to be
"an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world knowledge' and
meta-knowledge and meta-meta-knowledge", as Daniel Dennett explains.[80]

Speed and complexity replies


Many of these critiques emphasize speed and complexity of the human brain,[ae] which processes
information at 100 billion operations per second (by some estimates).[100] Several critics point out that
the man in the room would probably take millions of years to respond to a simple question, and would
require "filing cabinets" of astronomical proportions.[101] This brings the clarity of Searle's intuition into
doubt.
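
As a rough illustration only, under assumed figures (the operation count cited above, and a hand-simulation rate of one rule per second, which is an assumption rather than anything Searle specifies), the gap works out as follows:

    # Back-of-the-envelope arithmetic; both figures below are assumptions, not measurements.
    BRAIN_OPS_PER_SECOND = 1e11   # the "100 billion operations per second" estimate cited above
    MAN_RULES_PER_SECOND = 1.0    # suppose the man applies one rule of the program per second

    slowdown = BRAIN_OPS_PER_SECOND / MAN_RULES_PER_SECOND    # a factor of 10**11
    seconds_per_year = 60 * 60 * 24 * 365
    years_per_simulated_second = slowdown / seconds_per_year  # roughly 3,000 years

    print(f"about {years_per_simulated_second:,.0f} years of hand-work per second of brain activity")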

An especially vivid version of the speed and complexity reply is from Paul and Patricia Churchland. They
propose this analogous thought experiment: "Consider a dark room containing a man holding a bar
magnet or charged object. If the man pumps the magnet up and down, then, according to Maxwell's
theory of artificial luminance (AL), it will initiate a spreading circle of electromagnetic waves and will
thus be luminous. But as all of us who have toyed with magnets or charged balls well know, their forces
(or any other forces for that matter), even when set in motion produce no luminance at all. It is
inconceivable that you might constitute real luminance just by moving forces around!"[88] The Churchlands'
point is that the problem is that the man would have to wave the magnet up and down something like 450
trillion times per second in order to see anything.[102]

Stevan Harnad is critical of speed and complexity replies when they stray beyond addressing our
intuitions. He writes "Some have made a cult of speed and timing, holding that, when accelerated to the
right speed, the computational may make a phase transition into the mental. It should be clear that is not a
counterargument but merely an ad hoc speculation (as is the view that it is all just a matter of ratcheting
up to the right degree of 'complexity.')"[103][af]

Searle argues that his critics are also relying on intuitions; however, his opponents' intuitions have no
empirical basis. He writes that, in order to consider the "system reply" as remotely plausible, a person
must be "under the grip of an ideology".[29] The system reply only makes sense (to Searle) if one assumes
that any "system" can have consciousness, just by virtue of being a system with the right behavior and
functional parts. This assumption, he argues, is not tenable given our experience of consciousness.

Other minds and zombies: meaninglessness


Several replies argue that Searle's argument is irrelevant because his assumptions about the mind and
consciousness are faulty. Searle believes that human beings directly experience their consciousness,
intentionality and the nature of the mind every day, and that this experience of consciousness is not open
to question. He writes that we must "presuppose the reality and knowability of the mental."[106] The
replies below question whether Searle is justified in using his own experience of consciousness to
determine that it is more than mechanical symbol processing. In particular, the other minds reply argues
that we cannot use our experience of consciousness to answer questions about other minds (even the mind
of a computer), the epiphenomena replies question whether we can make any argument at all about
something like consciousness which cannot, by definition, be detected by any experiment, and the
eliminative materialist reply argues that Searle's own personal consciousness does not "exist" in the sense
that Searle thinks it does.

Other minds reply


The "Other Minds Reply" points out that Searle's argument is a version of the problem of other minds,
applied to machines. There is no way we can determine if other people's subjective experience is the same
as our own. We can only study their behavior (i.e., by giving them our own Turing test). Critics of Searle
argue that he is holding the Chinese room to a higher standard than we would hold an ordinary
person.[107][ag]

Nils Nilsson writes "If a program behaves as if it were multiplying, most of us would say that it is, in fact,
multiplying. For all I know, Searle may only be behaving as if he were thinking deeply about these
matters. But, even though I disagree with him, his simulation is pretty good, so I'm willing to credit him
with real thought."[109]

Turing anticipated Searle's line of argument (which he called "The Argument from Consciousness") in
1950 and makes the other minds reply.[110] He noted that people never consider the problem of other
minds when dealing with each other. He writes that "instead of arguing continually over this point it is
usual to have the polite convention that everyone thinks."[111] The Turing test simply extends this "polite
convention" to machines. He does not intend to solve the problem of other minds (for machines or
people) and he does not think we need to.[ah]

Replies considering that Searle's "consciousness" is undetectable


If we accept Searle's description of intentionality, consciousness, and the mind, we are forced to accept
that consciousness is epiphenomenal: that it "casts no shadow", i.e., is undetectable in the outside world.
Searle's "causal properties" cannot be detected by anyone outside the mind, otherwise the Chinese Room
could not pass the Turing test—the people outside would be able to tell there was not a Chinese speaker
in the room by detecting their causal properties. Since they cannot detect causal properties, they cannot
detect the existence of the mental. Thus, Searle's "causal properties" and consciousness itself are
undetectable, and anything that cannot be detected either does not exist or does not matter.

Mike Alder calls this the "Newton's Flaming Laser Sword Reply". He argues that the entire argument is
frivolous, because it is non-verificationist: not only is the distinction between simulating a mind and
having a mind ill-defined, but it is also irrelevant because no experiments were, or even can be, proposed
to distinguish between the two.[113]

Daniel Dennett provides this illustration: suppose that, by some mutation, a human being is born that does
not have Searle's "causal properties" but nevertheless acts exactly like a human being. This is a
philosophical zombie, as formulated in the philosophy of mind. This new animal would reproduce just like
any other human, and eventually there would be more of these zombies. Natural selection would favor the
zombies, since their design is (we could suppose) a bit simpler. Eventually the humans would die out.
Therefore, if Searle is right, it is most likely that human beings (as we see them today) are actually
"zombies", who nevertheless insist they are conscious. It is impossible to know whether we are all
zombies or not. Even if we are all zombies, we would still believe that we are not.[114]

Eliminative materialist reply


Several philosophers argue that consciousness, as Searle describes it, does not exist. Daniel Dennett
describes consciousness as a "user illusion".[115]

This position is sometimes referred to as eliminative materialism: the view that consciousness is not a
concept that can "enjoy reduction" to a strictly mechanical description, but rather one that will simply be
eliminated once the workings of the material brain are fully understood, in just the same way that the
concept of a demon has been eliminated from science rather than reduced to a strictly mechanical
description. Other mental properties, such as original intentionality (also called "meaning", "content",
and "semantic character"), are also commonly regarded as special properties related to beliefs and other
propositional attitudes. Eliminative materialism maintains that propositional attitudes such as beliefs and
desires, among other intentional mental states that have content, do not exist. If eliminative materialism
is the correct scientific account of human cognition, then the assumption of the Chinese room argument
that "minds have mental contents (semantics)" must be rejected.[116]

Searle disagrees with this analysis and argues that "the study of the mind starts with such facts as that
humans have beliefs, while thermostats, telephones, and adding machines don't ... what we wanted to
know is what distinguishes the mind from thermostats and livers."[77] He takes it as obvious that we can
detect the presence of consciousness and dismisses these replies as being off the point.

Other replies
Margaret Boden argued in her paper "Escaping from the Chinese Room" that even if the person in the
room does not understand Chinese, it does not mean there is no understanding in the room: the person in
the room at least understands the rule book used to produce the output responses.[117]

Carbon chauvinism
Searle's conclusion that "human mental phenomena [are] dependent on actual physical–chemical properties
of actual human brains"[26] has sometimes been described as a form of "carbon chauvinism".[118]
Steven Pinker suggested that a response to that conclusion would be a counter thought experiment to the
Chinese Room in which the incredulity runs the other way.[119] As an example he cites the short story
They're Made Out of Meat, which depicts an alien race of electronic beings who, upon discovering Earth,
express disbelief that the meat brains of humans can experience consciousness and thought.[120]

However, Searle himself denied being a "carbon chauvinist".[121] He said, "I have not tried to show that
only biological based systems like our brains can think. [...] I regard this issue as up for grabs".[122] He
said that even silicon machines could theoretically have human-like consciousness and thought, if the
actual physical–chemical properties of silicon could be used in a way that can produce consciousness and
thought, but "until we know how the brain does it we are not in a position to try to do it artificially".[123]

See also
Computational models of language acquisition
Emergence
I Am a Strange Loop
Synthetic intelligence
Leibniz's gap

Notes
a. See § Consciousness, which discusses the relationship between the Chinese room
argument and consciousness.
b. Not to be confused with artificial general intelligence, which is also sometimes referred to as
"strong AI". See § Strong AI vs. AI research.
c. This version is from Searle's Mind, Language and Society[18] and is also quoted in Daniel
Dennett's Consciousness Explained.[19] Searle's original formulation was "The appropriately
programmed computer really is a mind, in the sense that computers given the right
programs can be literally said to understand and have other cognitive states."[20] Strong AI
is defined similarly by Stuart J. Russell and Peter Norvig: "weak AI—the idea that machines
could act as if they were intelligent—and strong AI—the assertion that machines that do so are actually
consciously thinking (not just simulating thinking)."[21]
d. Note that Leibniz was objecting to a "mechanical" theory of the mind (the philosophical
position known as mechanism). Searle is objecting to an "information processing" view of
the mind (the philosophical position known as "computationalism"). Searle accepts
mechanism and rejects computationalism.
e. Harnad edited the journal during the years which saw the introduction and popularisation of
the Chinese Room argument.
f. Harnad holds that Searle's argument is against the thesis that "has since come to be
called 'computationalism,' according to which cognition is just computation, hence mental
states are just computational states".[16] David Cole agrees that "the argument also has
broad implications for functionalist and computational theories of meaning and of mind".[17]
g. Searle believes that "strong AI only makes sense given the dualistic assumption that, where
the mind is concerned, the brain doesn't matter." [26] He writes elsewhere, "I thought the
whole idea of strong AI was that we don't need to know how the brain works to know how
the mind works." [27] This position owes its phrasing to Stevan Harnad.[28]
h. "One of the points at issue," writes Searle, "is the adequacy of the Turing test."[29]
i. Computationalism is associated with Jerry Fodor and Hilary Putnam,[32] and is held by Allen
Newell,[28] Zenon Pylyshyn[28] and Steven Pinker,[33] among others.
j. Larry Hauser writes that "biological naturalism is either confused (waffling between identity
theory and dualism) or else it just is identity theory or dualism."[40]
k. The wording of each axiom and conclusion is from Searle's presentation in Scientific
American.[57][58] (A1-3) and (C1) are described as 1, 2, 3 and 4 in David Cole.[59]
l. Paul and Patricia Churchland write that the Chinese room thought experiment is intended to
"shore up axiom 3".[60]
m. David Cole combines the second and third categories, as well as the fourth and fifth.[61]
n. Versions of the system reply are held by Ned Block, Jack Copeland, Daniel Dennett, Jerry
Fodor, John Haugeland, Ray Kurzweil, and Georges Rey, among others.[63]
o. The virtual mind reply is held by Minsky, [66][67] Tim Maudlin, David Chalmers and David
Cole.[68]
p. David Cole writes "From the intuition that in the CR thought experiment he would not
understand Chinese by running a program, Searle infers that there is no understanding
created by running a program. Clearly, whether that inference is valid or not turns on a
metaphysical question about the identity of persons and minds. If the person understanding
is not identical with the room operator, then the inference is unsound."[72]
q. This position is held by Margaret Boden, Tim Crane, Daniel Dennett, Jerry Fodor, Stevan
Harnad, Hans Moravec, and Georges Rey, among others.[74]
r. David Cole calls this the "externalist" account of meaning.[76]
s. The derived meaning reply is associated with Daniel Dennett and others.
t. Searle distinguishes between "intrinsic" intentionality and "derived" intentionality. "Intrinsic"
intentionality is the kind that involves "conscious understanding" like you would have in a
human mind. Daniel Dennett doesn't agree that there is a distinction. David Cole writes
"derived intentionality is all there is, according to Dennett."[79]
u. David Cole describes this as the "internalist" approach to meaning.[76] Proponents of this
position include Roger Schank, Doug Lenat, Marvin Minsky and (with reservations) Daniel
Dennett, who writes "The fact is that any program [that passed a Turing test] would have to
be an extraordinarily supple, sophisticated, and multilayered system, brimming with 'world
knowledge' and meta-knowledge and meta-meta-knowledge." [80]
v. Searle also writes "Formal symbols by themselves can never be enough for mental
contents, because the symbols, by definition, have no meaning (or interpretation, or
semantics) except insofar as someone outside the system gives it to them."[83]
w. The brain simulation reply has been made by Paul Churchland, Patricia Churchland and
Ray Kurzweil.[85]
x. Early versions of this argument were put forward in 1974 by Lawrence Davis and in 1978 by
Ned Block. Block's version used walkie-talkies and was called the "Chinese Gym". Paul and
Patricia Churchland described this scenario as well.[88]
y. An early version of the brain replacement scenario was put forward by Clark Glymour in the
mid-70s and was touched on by Zenon Pylyshyn in 1980. Hans Moravec presented a vivid
version of it,[90] and it is now associated with Ray Kurzweil's version of transhumanism.
z. Searle does not consider the brain replacement scenario as an argument against the CRA;
however, in another context, Searle examines several possible solutions, including the
possibility that "you find, to your total amazement, that you are indeed losing control of your
external behavior. You find, for example, that when doctors test your vision, you hear them
say 'We are holding up a red object in front of you; please tell us what you see.' You want to
cry out 'I can't see anything. I'm going totally blind.' But you hear your voice saying in a way
that is completely outside of your control, 'I see a red object in front of me.' [...] [Y]our
conscious experience slowly shrinks to nothing, while your externally observable behavior
remains the same."[91]
aa. The connectionist reply is made by Andy Clark and Ray Kurzweil,[92] as well as Paul and
Patricia Churchland.[93]
ab. Searle (2009) uses the name "Wait 'Til Next Year Reply".
ac. Searle writes that the robot reply "tacitly concedes that cognition is not solely a matter of
formal symbol manipulation." [77] Stevan Harnad makes the same point, writing: "Now just
as it is no refutation (but rather an affirmation) of the CRA to deny that [the Turing test] is a
strong enough test, or to deny that a computer could ever pass it, it is merely special
pleading to try to save computationalism by stipulating ad hoc (in the face of the CRA) that
implementational details do matter after all, and that the computer's is the 'right' kind of
implementation, whereas Searle's is the 'wrong' kind."[54]
ad. That is, any program running on a machine with a finite amount of memory.
ae. Speed and complexity replies are made by Daniel Dennett, Tim Maudlin, David Chalmers,
Steven Pinker, Paul Churchland, Patricia Churchland and others.[99] Daniel Dennett points
out the complexity of world knowledge.[80]
af. Critics of the "phase transition" form of this argument include Stevan Harnad, Tim Maudlin,
Daniel Dennett and David Cole.[99] This "phase transition" idea is a version of strong
emergentism (what Dennett derides as "Woo woo West Coast emergence"[104]). Harnad
accuses Paul Churchland and Patricia Churchland of espousing strong emergentism. Ray
Kurzweil also holds a form of strong emergentism.[105]
ag. The "other minds" reply has been offered by Dennett, Kurzweil and Hans Moravec, among
others.[108]
ah. One of Turing's motivations for devising the Turing test is to avoid precisely the kind of
philosophical problems that Searle is interested in. He writes "I do not wish to give the
impression that I think there is no mystery ... [but] I do not think these mysteries necessarily
need to be solved before we can answer the question with which we are concerned in this
paper."[112]

Citations
1. Searle 1980.
2. Harnad 2001, p. 1.
3. Roberts 2016.
4. Searle 1992, p. 44.
5. Russell & Norvig 2021, p. 986.
6. Searle 1980, p. 11.
7. Russell & Norvig 2021, section "Biological naturalism and the Chinese Room".
8. "The Chinese Room Argument" (https://plato.stanford.edu/entries/chinese-room). Stanford Encyclopedia of Philosophy. 2024.
9. Cole 2004, 2.1; Leibniz 1714, section 17.
10. "A Russian Chinese Room story antedating Searle's 1980 discussion" (http://www.hardproblem.ru/en/posts/Events/a-russian-chinese-room-story-antedating-searle-s-1980-discussion/), Center for Consciousness Studies, June 15, 2018, archived (https://web.archive.org/web/20210516024117/http://www.hardproblem.ru/en/posts/Events/a-russian-chinese-room-story-antedating-searle-s-1980-discussion/) from the original on 2021-05-16, retrieved 2019-01-09.
11. Cole 2004, 2.3.
12. Cole 2004, p. 2; Preston & Bishop 2002.
13. Harnad 2001, p. 2.
14. Harnad 2001, p. 1; Cole 2004, p. 2.
15. Akman 1998.
16. Harnad 2005, p. 1.
17. Cole 2004, p. 1.
18. Searle 1999, p. .
19. Dennett 1991, p. 435.
20. Searle 1980, p. 1.
21. Russell & Norvig 2021, p. 981.
22. Searle 2009, p. 1.
23. Quoted in McCorduck 2004, p. 138.
24. Quoted in Crevier 1993, p. 46.
25. Haugeland 1985, p. 2 (italics his).
26. Searle 1980, p. 13.
27. Searle 1980, p. 8.
28. Harnad 2001.
29. Searle 1980, p. 6.
30. Searle 2004, p. 45.
31. Harnad 2001, p. 3 (italics his).
32. Horst 2005, p. 1.
33. Pinker 1997.
34. Harnad 2001, pp. 3–5.
35. Goldstein & Levinstein 2024.
36. Šekrst 2024.
37. Chalmers 2023.
38. Searle 1990a, p. 29.
39. Searle 1990b.
40. Hauser 2006, p. 8.
41. Searle 1992, chpt. 5.
42. Searle 2002.
43. Chalmers 1996, p. 322.
44. McGinn 2000.
45. Searle 1980, p. 418.
46. Hew 2016.
47. Kurzweil 2005, p. 260.
48. Saygin, Cicekli & Akman 2000.
49. Turing 1950.
50. Newell & Simon 1976, p. 116.
51. Russell & Norvig 2021, p. 19.
52. Nilsson 2007.
53. Turing 1950, p. 442.
54. Harnad 2001, p. 14.
55. Ben-Yami 1993.
56. Searle 1984; Searle 1990a.
57. Searle 1990a.
58. Hauser 2006, p. 5.
59. Cole 2004, p. 5.
60. Churchland & Churchland 1990, p. 34.
61. Cole 2004, pp. 5–6.
62. Searle 1980, pp. 5–6; Cole 2004, pp. 6–7; Hauser 2006, pp. 2–3; Dennett 1991, p. 439; Fearn 2007, p. 44; Crevier 1993, p. 269.
63. Cole 2004, p. 6.
64. Yee 1993, p. 44, footnote 2.
65. Yee 1993, pp. 42–47.
66. Minsky 1980, p. 440.
67. Cole 2004, p. 7.
68. Cole 2004, pp. 7–9.
69. Cole 2004, p. 8.
70. Searle 1980, p. 12.
71. Fearn 2007, p. 47.
72. Cole 2004, p. 21.
73. Searle 1980, p. 7; Cole 2004, pp. 9–11; Hauser 2006, p. 3; Fearn 2007, p. 44.
74. Cole 2004, p. 9.
75. Quoted in Crevier 1993, p. 272.
76. Cole 2004, p. 18.
77. Searle 1980, p. 7.
78. Hauser 2006, p. 11; Cole 2004, p. 19.
79. Cole 2004, p. 19.
80. Dennett 1991, p. 438.
81. Dreyfus 1979, "The epistemological assumption".
82. Searle 1984.
83. Motzkin & Searle 1989, p. 45.
84. Searle 1980, pp. 7–8; Cole 2004, pp. 12–13; Hauser 2006, pp. 3–4; Churchland & Churchland 1990.
85. Cole 2004, p. 12.
86. Searle 1980, p. .
87. Cole 2004, p. 4; Hauser 2006, p. 11.
88. Churchland & Churchland 1990.
89. Cole 2004, p. 20; Moravec 1988; Kurzweil 2005, p. 262; Crevier 1993, pp. 271 and 279.
90. Moravec 1988.
91. Searle 1992.
92. Cole 2004, pp. 12 & 17.
93. Hauser 2006, p. 7.
94. Searle 1980, pp. 8–9; Hauser 2006, p. 11.
95. Block 1981.
96. Searle 1980, p. 3.
97. Quoted in Cole 2004, p. 13.
98. Dennett 1991, pp. 437–440.
99. Cole 2004, p. 14.
100. Crevier 1993, p. 269.
101. Cole 2004, pp. 14–15; Crevier 1993, pp. 269–270; Pinker 1997, p. 95.
102. Churchland & Churchland 1990; Cole 2004, p. 12; Crevier 1993, p. 270; Fearn 2007, pp. 45–46; Pinker 1997, p. 94.
103. Harnad 2001, p. 7.
104. Crevier 1993, p. 275.
105. Kurzweil 2005.
106. Searle 1980, p. 10.
107. Searle 1980, p. 9; Cole 2004, p. 13; Hauser 2006, pp. 4–5; Nilsson 1984.
108. Cole 2004, pp. 12–13.
109. Nilsson 1984.
110. Turing 1950, pp. 11–12.
111. Turing 1950, p. 11.
112. Turing 1950, p. 12.
113. Alder 2004.
114. Cole 2004, p. 22; Crevier 1993, p. 271; Harnad 2005, p. 4.
115. Dennett 1991, .
116. Ramsey 2022.
117. Boden, Margaret A. (1988), "Escaping from the Chinese Room", in Heil, John (ed.), Computer Models of Mind, Cambridge University Press, ISBN 978-0-521-24868-6.
118. Graham 2017, p. 168.
119. Pinker 1997, pp. 94–96.
120. Bisson, Terry (1990). "They're Made Out of Meat" (https://web.archive.org/web/20190501130711/http://www.terrybisson.com/theyre-made-out-of-meat-2/). Archived from the original (http://www.terrybisson.com/theyre-made-out-of-meat-2/) on 2019-05-01. Retrieved 2024-11-07.
121. Vicari, Giuseppe (2008). Beyond Conceptual Dualism: Ontology of Consciousness, Mental Causation, and Holism in John R. Searle's Philosophy of Mind (https://books.google.com/books?id=NA6e6LhEnAMC&dq=was+searle+a+carbon+chauvinist&pg=PA49). Rodopi. p. 49. ISBN 978-90-420-2466-3.
122. Fellows, Roger (1995). Philosophy and Technology (https://books.google.com/books?id=CixGDHrR-uEC&pg=PA86). Cambridge University Press. p. 86. ISBN 978-0-521-55816-7.
123. Preston & Bishop 2002, p. 351.
References
Akman, Varol (1998), Book Review — John Haugeland (editor), Mind Design II: Philosophy,
Psychology, and Artificial Intelligence (http://cogprints.org/539/index.html), archived (https://
web.archive.org/web/20190520104843/http://cogprints.org/539/index.html) from the original
on 2019-05-20, retrieved 2018-10-02 – via Cogprints
Alder, Mike (2004), "Newton's Flaming Laser Sword" (http://www.philosophynow.org/issues/
46/Newtons_Flaming_Laser_Sword), Philosophy Now, vol. 46, pp. 29–33, archived (https://
web.archive.org/web/20180326114344/http://www.philosophynow.org/issues/46/Newtons_Fl
aming_Laser_Sword) from the original on 2018-03-26 Also available at Newton's Flaming
Laser Sword (https://web.archive.org/web/20111114041242/http://school.maths.uwa.edu.a
u/~mike/Newtons%20Flaming%20Laser%20Sword.pdf) (PDF), archived from the original (ht
tp://school.maths.uwa.edu.au/~mike/Newtons%20Flaming%20Laser%20Sword.pdf) (PDF)
on 2011-11-14
Ben-Yami, Hanoch (1993), "A Note on the Chinese Room", Synthese, 95 (2): 169–72,
doi:10.1007/bf01064586 (https://doi.org/10.1007%2Fbf01064586), S2CID 46968094 (http
s://api.semanticscholar.org/CorpusID:46968094)
Block, Ned (1981), "Psychologism and Behaviourism" (http://www.nyu.edu/gsas/dept/philo/f
aculty/block/papers/Psychologism.htm), The Philosophical Review, 90 (1): 5–43,
CiteSeerX 10.1.1.4.5828 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.582
8), doi:10.2307/2184371 (https://doi.org/10.2307%2F2184371), JSTOR 2184371 (https://ww
w.jstor.org/stable/2184371), archived (https://web.archive.org/web/20200806145801/http://w
ww.nyu.edu/gsas/dept/philo/faculty/block/papers/Psychologism.htm) from the original on
2020-08-06, retrieved 2008-10-27
Chalmers, David (March 30, 1996), The Conscious Mind: In Search of a Fundamental
Theory (https://books.google.com/books?id=XtgiH-feUyIC), Oxford University Press,
ISBN 978-0-19-983935-3
Chalmers, David (2023), Could a Large Language Model Be Conscious? (https://arxiv.org/a
bs/2303.07103)
Churchland, Paul; Churchland, Patricia (January 1990), "Could a machine think?", Scientific
American, 262 (1): 32–39, Bibcode:1990SciAm.262a..32C (https://ui.adsabs.harvard.edu/ab
s/1990SciAm.262a..32C), doi:10.1038/scientificamerican0190-32 (https://doi.org/10.1038%2
Fscientificamerican0190-32), PMID 2294584 (https://pubmed.ncbi.nlm.nih.gov/2294584)
Cole, David (Fall 2004), "The Chinese Room Argument", in Zalta, Edward N. (ed.), The
Stanford Encyclopedia of Philosophy (http://plato.stanford.edu/archives/fall2004/entries/chin
ese-room/), archived (https://web.archive.org/web/20210227111707/http://plato.stanford.ed
u/archives/fall2004/entries/chinese-room/) from the original on 2021-02-27, retrieved
2007-10-26 Page numbers refer to the PDF of the article.
Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY:
BasicBooks. ISBN 0-465-02997-3.
Dennett, Daniel (1991), Consciousness Explained, Penguin, ISBN 978-0-7139-9037-9
Dreyfus, Hubert (1979), What Computers Still Can't Do, New York: MIT Press, ISBN 978-0-
262-04134-8
Fearn, Nicholas (2007), The Latest Answers to the Oldest Questions: A Philosophical
Adventure with the World's Greatest Thinkers, New York: Grove Press
Goldstein, S.; Levinstein, B. (2024), Does ChatGPT have a mind? (https://arxiv.org/abs/240
7.11015)
Graham, Alan (2017), Artificial Intelligence: An Introduction (https://books.google.com/book
s?id=qFI8DwAAQBAJ&pg=PT168), Routledge, ISBN 978-1-351-33786-1
Harnad, Stevan (2001), "What's Wrong and Right About Searle's Chinese Room Argument",
in Bishop, M.; Preston, J. (eds.), Views into the Chinese Room: New Essays on Searle and Artificial
Intelligence (http://cogprints.org/4023/), Oxford University Press, archived (https://web.archi
ve.org/web/20110806120215/http://cogprints.org/4023/) from the original on 2011-08-06,
retrieved 2005-12-20 Page numbers refer to the PDF of the article.
Harnad, Stevan (2005), "Searle's Chinese Room Argument", Encyclopedia of Philosophy (ht
tps://web.archive.org/web/20070116025618/http://eprints.ecs.soton.ac.uk/10424/01/chinese
room.html), Macmillan, archived from the original (http://eprints.ecs.soton.ac.uk/10424/01/ch
ineseroom.html) on 2007-01-16, retrieved 2006-04-06 Page numbers refer to the PDF of the
article.
Haugeland, John (1985), Artificial Intelligence: The Very Idea, Cambridge, MA: MIT Press,
ISBN 978-0-262-08153-5
Haugeland, John (1981), Mind Design (https://archive.org/details/minddesignphilos00haug),
Cambridge, MA: MIT Press, ISBN 978-0-262-08110-8
Hauser, Larry (2006), "Searle's Chinese Room", Internet Encyclopedia of Philosophy (http://
www.iep.utm.edu/c/chineser.htm), archived (https://web.archive.org/web/20090413030634/h
ttp://www.iep.utm.edu/c/chineser.htm) from the original on 2009-04-13, retrieved 2007-11-09
Page numbers refer to the PDF of the article.
Hew, Patrick Chisan (September 2016), "Preserving a combat commander's moral agency:
The Vincennes Incident as a Chinese Room" (https://philpapers.org/rec/HEWPAC), Ethics
and Information Technology, 18 (3): 227–235, doi:10.1007/s10676-016-9408-y (https://doi.or
g/10.1007%2Fs10676-016-9408-y), S2CID 15333272 (https://api.semanticscholar.org/Corp
usID:15333272), archived (https://web.archive.org/web/20201029191205/https://philpapers.
org/rec/HEWPAC) from the original on 2020-10-29, retrieved 2019-12-09
Kurzweil, Ray (2005), The Singularity is Near, Viking
Horst, Steven (Fall 2005), "The Computational Theory of Mind", in Zalta, Edward N. (ed.),
The Stanford Encyclopedia of Philosophy (http://plato.stanford.edu/archives/fall2005/entries/
computational-mind/), archived (https://web.archive.org/web/20210304113123/https://plato.s
tanford.edu/archives/fall2005/entries/computational-mind/) from the original on 2021-03-04,
retrieved 2012-03-22
Leibniz, Gottfried (1714), Monadology (https://web.archive.org/web/20110703072430/http://
www.philosophy.leeds.ac.uk/GMR/moneth/monadology.html), translated by Ross, George
MacDonald, archived from the original (http://www.philosophy.leeds.ac.uk/GMR/moneth/mon
adology.html) on 2011-07-03
Moravec, Hans (1988), Mind Children: The Future of Robot and Human Intelligence (https://
books.google.com/books?id=56mb7XuSx3QC), Harvard University Press
Minsky, Marvin (1980), "Decentralized Minds", Behavioral and Brain Sciences, 3 (3): 439–
40, doi:10.1017/S0140525X00005914 (https://doi.org/10.1017%2FS0140525X00005914),
S2CID 246243634 (https://api.semanticscholar.org/CorpusID:246243634)
McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters,
ISBN 1-56881-205-1
McGinn, Colin (2000), The Mysterious Flame: Conscious Minds In A Material World (https://
books.google.com/books?id=qB0lg0u3BEkC&pg=PA194), Basic, p. 194, ISBN 978-
0786725168
Moor, James, ed. (2003), The Turing Test: The Elusive Standard of Artificial Intelligence,
Dordrecht: Kluwer, ISBN 978-1-4020-1205-1
Motzkin, Elhanan; Searle, John (February 16, 1989), "Artificial Intelligence and the Chinese
Room: An Exchange" (http://www.nybooks.com/articles/archives/1989/feb/16/artificial-intellig
ence-and-the-chinese-room-an-ex/?pagination=false), New York Review of Books, 36 (2),
archived (https://web.archive.org/web/20151025205605/http://www.nybooks.com/articles/arc
hives/1989/feb/16/artificial-intelligence-and-the-chinese-room-an-ex/?pagination=false) from
the original on 2015-10-25, retrieved 2012-03-21
Newell, Allen; Simon, H. A. (1976), "Computer Science as Empirical Inquiry: Symbols and
Search", Communications of the ACM, vol. 19, no. 3, pp. 113–126,
doi:10.1145/360018.360022 (https://doi.org/10.1145%2F360018.360022)
Nikolić, Danko (2015), "Practopoiesis: Or how life fosters a mind", Journal of Theoretical
Biology, 373: 40–61, arXiv:1402.5332 (https://arxiv.org/abs/1402.5332),
Bibcode:2015JThBi.373...40N (https://ui.adsabs.harvard.edu/abs/2015JThBi.373...40N),
doi:10.1016/j.jtbi.2015.03.003 (https://doi.org/10.1016%2Fj.jtbi.2015.03.003),
PMID 25791287 (https://pubmed.ncbi.nlm.nih.gov/25791287), S2CID 12680941 (https://api.
semanticscholar.org/CorpusID:12680941)
Nilsson, Nils (1984), A Short Rebuttal to Searle (https://ai.stanford.edu/%7Enilsson/OnlineP
ubs-Nils/General%20Essays/OtherEssays-Nils/searle.pdf) (PDF), archived (https://web.arch
ive.org/web/20200116221624/https://ai.stanford.edu/%7Enilsson/OnlinePubs-Nils/General%
20Essays/OtherEssays-Nils/searle.pdf) (PDF) from the original on 2020-01-16, retrieved
2019-08-25
Nilsson, Nils (2007), Lungarella, M. (ed.), "The Physical Symbol System Hypothesis: Status
and Prospects" (https://ai.stanford.edu/%7Enilsson/OnlinePubs-Nils/PublishedPapers/pssh.
pdf) (PDF), 50 Years of AI, Festschrift, LNAI 4850, Springer, pp. 9–17, archived (https://web.
archive.org/web/20210211221754/http://ai.stanford.edu/~nilsson/OnlinePubs-Nils/Published
Papers/pssh.pdf) (PDF) from the original on 2021-02-11, retrieved 2023-07-23
Pinker, Steven (1997), How the Mind Works, New York: W. W. Norton, ISBN 978-0-393-
31848-7
Preston, John; Bishop, Mark, eds. (2002), Views into the Chinese Room: New Essays on
Searle and Artificial Intelligence (https://books.google.com/books?id=N7msQgAACAAJ),
Oxford University Press, ISBN 978-0-19-825057-9
Ramsey, William (Spring 2022), "Eliminative Materialism", in Edward N. Zalta (ed.), The
Stanford Encyclopedia of Philosophy (https://plato.stanford.edu/archives/spr2022/entries/ma
terialism-eliminative/), Metaphysics Research Lab, Stanford University, archived (https://we
b.archive.org/web/20221104071425/https://plato.stanford.edu/archives/spr2022/entries/mat
erialism-eliminative/) from the original on 2022-11-04, retrieved 2023-07-23
Roberts, Jacob (2016), "Thinking Machines: The Search for Artificial Intelligence" (https://we
b.archive.org/web/20180819152455/https://www.sciencehistory.org/distillations/magazine/thi
nking-machines-the-search-for-artificial-intelligence), Distillations, vol. 2, no. 2, pp. 14–23,
archived from the original (https://www.sciencehistory.org/distillations/magazine/thinking-ma
chines-the-search-for-artificial-intelligence) on 2018-08-19, retrieved 2018-03-22
Russell, Stuart J.; Norvig, Peter (2021), Artificial Intelligence: A Modern Approach (4th ed.),
Hoboken, NJ: Pearson, ISBN 978-0-13-461099-3
Saygin, A. P.; Cicekli, I.; Akman, V. (2000), "Turing Test: 50 Years Later" (https://web.archive.
org/web/20110409073501/http://crl.ucsd.edu/~saygin/papers/MMTT.pdf) (PDF), Minds and
Machines, 10 (4): 463–518, doi:10.1023/A:1011288000451 (https://doi.org/10.1023%2FA%3
A1011288000451), hdl:11693/24987 (https://hdl.handle.net/11693%2F24987),
S2CID 990084 (https://api.semanticscholar.org/CorpusID:990084), archived from the
original (http://crl.ucsd.edu/~saygin/papers/MMTT.pdf) (PDF) on 2011-04-09, retrieved
2015-06-05. Reprinted in Moor (2003, pp. 23–78).
Searle, John (1980), "Minds, Brains and Programs" (https://web.archive.org/web/200712100
43312/http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html), Behavioral and
Brain Sciences, 3 (3): 417–457, doi:10.1017/S0140525X00005756 (https://doi.org/10.101
7%2FS0140525X00005756), archived from the original (http://members.aol.com/NeoNoetic
s/MindsBrainsPrograms.html) on 2007-12-10, retrieved 2009-05-13 Page numbers refer to
the PDF of the article. See also Searle's original draft (https://web.archive.org/web/2001022
1025515/http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html).
Searle, John (1983), "Can Computers Think?", in Chalmers, David (ed.), Philosophy of
Mind: Classical and Contemporary Readings, Oxford University Press, pp. 669–675,
ISBN 978-0-19-514581-6
Searle, John (1984), Minds, Brains and Science: The 1984 Reith Lectures (https://archive.or
g/details/mindsbrainsscien0000sear), Harvard University Press, ISBN 978-0-674-57631-5
Searle, John (January 1990), "Is the Brain's Mind a Computer Program?", Scientific
American, 262 (1): 26–31, Bibcode:1990SciAm.262a..26S (https://ui.adsabs.harvard.edu/ab
s/1990SciAm.262a..26S), doi:10.1038/scientificamerican0190-26 (https://doi.org/10.1038%2
Fscientificamerican0190-26), PMID 2294583 (https://pubmed.ncbi.nlm.nih.gov/2294583)
Searle, John (November 1990), "Is the Brain a Digital Computer?" (https://web.archive.org/w
eb/20121114093932/https://mywebspace.wisc.edu/lshapiro/web/Phil554_files/SEARLE-BD
C.HTM), Proceedings and Addresses of the American Philosophical Association, 64 (3): 21–
37, doi:10.2307/3130074 (https://doi.org/10.2307%2F3130074), JSTOR 3130074 (https://w
ww.jstor.org/stable/3130074), archived from the original (https://mywebspace.wisc.edu/lshap
iro/web/Phil554_files/SEARLE-BDC.HTM) on 2012-11-14
Searle, John (1992), The Rediscovery of the Mind (https://books.google.com/books?id=eoh
8e52wo_oC), Cambridge, MA: MIT Press, ISBN 978-0-262-26113-5
Searle, John (1999), Mind, language and society (https://archive.org/details/mindlanguages
oci00sear), New York: Basic, ISBN 978-0-465-04521-1, OCLC 231867665 (https://search.w
orldcat.org/oclc/231867665)
Searle, John (November 1, 2004), Mind: a brief introduction (https://books.google.com/book
s?id=oSm8JUHJXqcC), Oxford University Press, ISBN 978-0-19-515733-8
Searle, John (2002), Consciousness and Language (https://books.google.com/books?id=bv
xhV-1Duz8C&pg=PA16), Cambridge University Press, p. 16, ISBN 978-0-521-59744-9
Searle, John (2009), "Chinese room argument", Scholarpedia, 4 (8): 3100,
Bibcode:2009SchpJ...4.3100S (https://ui.adsabs.harvard.edu/abs/2009SchpJ...4.3100S),
doi:10.4249/scholarpedia.3100 (https://doi.org/10.4249%2Fscholarpedia.3100)
Turing, Alan (October 1950). "Computing Machinery and Intelligence" (https://academic.oup.
com/mind/article/LIX/236/433/986238). Mind. 59 (236): 433–460.
doi:10.1093/mind/LIX.236.433 (https://doi.org/10.1093%2Fmind%2FLIX.236.433).
ISSN 1460-2113 (https://search.worldcat.org/issn/1460-2113). JSTOR 2251299 (https://ww
w.jstor.org/stable/2251299). S2CID 14636783 (https://api.semanticscholar.org/CorpusID:146
36783).
Steiner, Philip (October 31, 2022), "Modern Hard SF: Simulating Physics in Virtual Reality in
Cixin Liu's "The Three-Body Problem" " (https://journals.umcs.pl/lsmll/article/view/13674),
Lublin Studies in Modern Languages and Literature, 46 (3): 57–66,
doi:10.17951/lsmll.2022.46.3.57-66 (https://doi.org/10.17951%2Flsmll.2022.46.3.57-66),
S2CID 253353924 (https://api.semanticscholar.org/CorpusID:253353924), archived (https://
web.archive.org/web/20230205104651/https://journals.umcs.pl/lsmll/article/view/13674)
from the original on 2023-02-05, retrieved 2023-02-05 Page numbers refer to the PDF of the
article.
Šekrst, Kristina (2024), "Chinese Chat Room: AI Hallucinations, Epistemology and
Cognition" (https://philpapers.org/rec/EKRCCR), Studies in Logic, Grammar and Rhetoric,
69 (82), doi:10.2478/slgr-2024-0029 (https://doi.org/10.2478%2Fslgr-2024-0029)
Whitmarsh, Patrick (2016), " "Imagine You're a Machine": Narrative Systems in Peter Watts's
Blindsight and Echopraxia", Science Fiction Studies, vol. 43, no. 2, pp. 237–259,
doi:10.5621/sciefictstud.43.2.0237 (https://doi.org/10.5621%2Fsciefictstud.43.2.0237)
Yee, Richard (1993), "Turing Machines And Semantic Symbol Processing: Why Real
Computers Don't Mind Chinese Emperors" (http://lyceumphilosophy.com/Lyceum-5-1.pdf)
(PDF), Lyceum, vol. 5, no. 1, pp. 37–59, archived (https://web.archive.org/web/2021022412
1521/http://lyceumphilosophy.com/Lyceum-5-1.pdf) (PDF) from the original on 2021-02-24,
retrieved 2014-12-18 Page numbers refer to the PDF of the article.

Further reading
Hauser, Larry, "Chinese Room Argument" (https://iep.utm.edu/chinese-room-argument/),
Internet Encyclopedia of Philosophy, ISSN 2161-0002 (https://search.worldcat.org/issn/2161
-0002), retrieved 2024-08-17
Cole, David (2004), "The Chinese Room Argument" (https://plato.stanford.edu/entries/chine
se-room/), in Zalta, Edward N.; Nodelman, Uri (eds.), Stanford Encyclopedia of Philosophy
(Summer 2023 ed.), Metaphysics Research Lab, Stanford University, ISSN 1095-5054 (http
s://search.worldcat.org/issn/1095-5054)

Works involving Searle


Searle, John (2009), "Chinese room argument" (http://www.scholarpedia.org/article/Chinese
_room_argument), Scholarpedia, vol. 4:8, p. 3100, Bibcode:2009SchpJ...4.3100S (https://ui.
adsabs.harvard.edu/abs/2009SchpJ...4.3100S), doi:10.4249/scholarpedia.3100 (https://doi.
org/10.4249%2Fscholarpedia.3100), ISSN 1941-6016 (https://search.worldcat.org/issn/1941
-6016)
——— (October 9, 2014), "What Your Computer Can't Know" (https://www.nybooks.com/arti
cles/2014/10/09/what-your-computer-cant-know/), The New York Review of Books, vol. 61,
no. 15, ISSN 0028-7504 (https://search.worldcat.org/issn/0028-7504)
Reviews Bostrom, Nick (2014), Superintelligence: Paths, Dangers, Strategies, Oxford
University Press, ISBN 978-0-19-967811-2, and Floridi, Luciano (2014), The 4th
Revolution: How the Infosphere Is Reshaping Human Reality, Oxford University Press,
ISBN 978-0-19-960672-6.
The Chinese Room Argument (http://globetrotter.berkeley.edu/people/Searle/searle-con4.ht
ml), part 4 of the September 2, 1999 interview with Searle, Philosophy and the Habits of
Critical Thinking (http://globetrotter.berkeley.edu/people/Searle/searle-con0.html) Archived
(https://web.archive.org/web/20100613070337/http://globetrotter.berkeley.edu/people/Searl
e/searle-con0.html) 2010-06-13 at the Wayback Machine in the Conversations With History
series
