PERSPECTIVES ON ARTIFICIAL INTELLIGENCE IN EUROPE

Early AI in Britain: Turing et al.

Tracing far back along the chain of intellectual precursors to modern AI, one reaches the American philosopher C. S. Peirce. In 1908, Peirce expressed the idea that a machine—“some Babbage’s analytical engine or some logical machine”—is capable of all mathematical reasoning [47, p. 434]. He was skeptical, however, calling the idea “malignant,” and firmly placing it among others he deemed “logical heresies” (ibid.).

Peirce was a powerful influence on Hilbert and his group at Göttingen [27, p. 1]. In 1903, Peirce formulated the decision problem for first-order logic in roughly the form in which Turing later tackled it [46]. At Göttingen, the decision problem was named the Entscheidungsproblem, it seems by Behmann, a young member of Hilbert’s group [34]. As early as 1921, Behmann used the concept of a machine to clarify the nature of the Entscheidungsproblem, saying: “One might, if one wanted to, speak of mechanical or machine-like thinking” and “Perhaps one can one day even let it be carried out by a machine” [34, p. 176].

From the Entscheidungsproblem, the chain of precursors leads on to Turing’s abstract “computing machines” and then, eventually, to his practical work on the hardware and software designs for the ACE and other early British electronic computers [9], [18], [29], [30]. This same trajectory issued in his early work on what he called “intelligent machinery.” Researchers that he influenced termed the new field machine intelligence.

The later term “Artificial Intelligence” appeared at a 1956 conference in the U.S., the Dartmouth Summer Research Project on Artificial Intelligence, organized by McCarthy and Shannon. (The term is usually credited to McCarthy, but he said in an interview: “I won’t swear that I hadn’t seen it before . . . Someone may have used it in a paper or a conversation” [36, p. 96].) It appears that McCarthy knew little or nothing at this time about preexisting British work on machine intelligence, nor about computer developments in Britain in general—even saying in Scientific American in 1966: “it does not seem that the work of . . . Turing . . . played any direct role in the labors of the men who made the computer a reality” [35, p. 68].

This century has certainly seen a radical reappraisal of Turing’s role in the development of computing, and he is now widely regarded as a “founding father” of computer science (although some question the accuracy of this reappraisal, e.g., Vardi [72]). The time may be ripe for the AI community to reappraise Turing’s position vis-à-vis the field of AI and its origins.

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
Digital Object Identifier 10.1109/MAHC.2023.3300660
Date of publication 1 August 2023; date of current version 1 September 2023.
July/September 2023, Published by the IEEE Computer Society, IEEE Annals of the History of Computing
This article does not itself attempt such a reappraisal, however, the aim being simply to give an overview of Turing’s pioneering work in machine intelligence—which is still not as widely known as it might be. The article therefore traces the early developments involving Turing and researchers he influenced.

Turing’s first significant brush with the Entscheidungsproblem was—so far as we know—in a 1935 lecture by the Cambridge logician Newman, himself soon to become a presence in debates on machine intelligence. Newman said in an interview, “I believe it all started because [Turing] attended a lecture of mine on foundations of mathematics and logic.”1

Turing proved that the Entscheidungsproblem is unsolvable, by means of an ingenious “diagonal” argument.2 A step along the way was his invention of what he called the “universal computing machine”—now simply the universal Turing machine and widely regarded as a bare-bones logical model of almost every modern electronic digital computer.3 This invention in turn led to Turing’s pioneering investigations into machine intelligence.

In his classic 1936 paper on the Entscheidungsproblem [61], there was no mention of machine intelligence per se, but central to the paper’s argument was a crucial thesis, now named “Turing’s thesis” (and also the “Church-Turing thesis”):

    The “computable” numbers include all numbers which would naturally be regarded as computable [61, p. 74].

TURING’S THESIS AND THE MECHANIZATION OF INTELLIGENCE

This thesis, for which Turing argued strenuously, states that a Turing machine can in principle do any of the mathematics that human clerks—human computers—are capable of doing. In Turing’s pithy formulation, the “computable” numbers are the numbers produced by the Turing machine, and the numbers “which would naturally be regarded as computable” are numbers producible by human computers.

The work of human computers—there were large numbers of them in Turing’s day—would normally be said to have required a form of intelligence, even though the work needed no ingenuity on the part of the human clerk nor any mathematical “intuition.” A glance at Skan’s classic Handbook for Computers [57] shows the degree of intelligence that the work of human computers demanded. Junior computers recruited by the National Physical Laboratory (where Turing worked in the immediate postwar years) were typically selected from among school leavers with mathematical qualifications. The training of these young computers included an emphasis on such skills as error-management: “At school,” they were told, “if you made an error you were punished by loss of marks. Here, errors will be made all the time. They must not leave the building” [73, p. 266]. A computer’s basic proficiencies included finding roots, numerical differentiation and integration of functions, numerical accuracy checks, and the solution of multivariable equations by a wide range of methods [57]. Advanced computational methods would require computers selected from among university graduates [73, p. 265]. Turing’s thesis maintained that all work done by human computers can also be done by his machines.

If—as Minsky stated in the 1960s, in his well-known characterization of AI [41, p. v]—“Artificial Intelligence is the science of making machines do things that would require intelligence if done by men,” then Turing’s thesis entails the in-principle achievability of at least a certain level of AI. The thesis implies that suitably programmed Turing machines are able to do work that is naturally regarded as requiring intelligence when done by humans (assuming one accepts that the work of human computers is normally taken to require a certain degree of intelligence).

By 1938, Turing was arguing, further, that what he called “the exercise of ingenuity in mathematics” is reducible to processes that a Turing machine can carry out: “ingenuity is replaced by patience,” he argued [62, pp. 192–193]. Intuition, though, which he distinguished from ingenuity, cannot be wholly replaced by patience, he maintained (ibid.). He argued that “finding a formal logic which wholly eliminates the necessity of using intuition” is an “impossibility” [62, p. 193].

There was, to be sure, a deflationary aspect to Turing’s later discussions of intelligence:

    The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration. If we are able to explain or predict its behaviour . . . we have little temptation to imagine intelligence. With the same object therefore it is possible that one man would consider it as intelligent and another would not; the second man would have found out the rules of its behaviour [66, p. 431].

1. Newman interviewed by Evans (circa 1977), “The Pioneers of Computing: An Oral History of Computing,” London, U.K.: Science Museum; also relevant are Smithies’ notes taken in the course the previous year (F. Smithies, “Foundations of Mathematics. Mr. Newman,” 1934, St John’s College Library, Cambridge, GB 275 Smithies/H/H57).
2. For further information on the Entscheidungsproblem and Turing’s attack on it, as well as Church’s independent contributions, see [14].
3. What Turing called “computing machines” were dubbed “Turing machines” by Church in his review of Turing’s paper [11, p. 43].
Nevertheless, looking back at Turing’s two classic papers of the 1930s [61], [62], we can fairly say that, first, his work led to the anchoring of discussions of thinking machines (previously lacking in detail, or even fanciful) to a definite and powerful machine architecture, the Turing machine; and that, second, he established (by means of a bouquet of abstract arguments) a thesis implying that Turing machines are able to perform tasks commonly said to require a form of intelligence when done by humans.

These achievements alone might be sufficient ground to say that Turing’s work occupies a foundational position in the history of the mechanization of intelligence, but he in fact contributed much more. He argued forcefully that foreseeable electronic computers are capable of “showing intelligence”—especially if the computer has the ability to learn, in which case it would, he said, ultimately “be like a pupil who had learnt much from his master, but had added much more by his own work” [65, p. 393]. “What we want is a machine that can learn from experience,” he said, and “the machine must be allowed to have contact with human beings in order that it may adapt itself to their standards” [65, pp. 393–394].

These farsighted statements were made in an address he gave in 1947 to the London Mathematical Society—seemingly the first time the idea of computer intelligence was aired in public in a lecture by an actual practitioner in the emerging field. For by that time, Turing was indeed a practitioner. Following the publication of his two classic papers of the 1930s, he had begun to explore the potential for the practical development of machines showing intelligence, ingenuity, originality, the ability to learn, and more.

BLETCHLEY PARK: THE POWER OF SEARCH

In the 1950s, two leading pioneers of AI in the U.S., Newell and Simon, emphasized the importance of guided search. They used the term “heuristic search,” adding the noun “heuristic”4 to the vocabulary of computer science in an influential 1957 paper:

    A process that may solve a given problem, but offers no guarantees of doing so, is called a heuristic [43, p. 220].

A heuristic is a mechanical process that—in terms of achieving its goal—is “fallible but ‘fairly reliable’” [25, pp. 83–84]. A good heuristic works often enough to be useful.

In their joint Turing Award lecture in 1975, Newell and Simon summarized the approach their AI research had followed since the mid 1950s. The fundamental idea was that a (symbol-processing) “system exercises its intelligence in problem solving by search”; and they defined search as “generating potential solutions and testing them” [44, pp. 120, 126].5 A physical system, they said, “must use heuristic search to solve problems because such systems have limited processing resources” [44, p. 120]. Hence, their Heuristic Search Hypothesis:

    [S]ystems solve problems by using the processes of heuristic search. (ibid.)

They described this as a “law of qualitative structure for AI” [44, p. 126].

In their Turing Award lecture, Newell and Simon spoke from time to time of Turing machines but—ironically—there was no mention of Turing’s work in machine intelligence. Newell and Simon of course knew nothing about what Turing and his fellow codebreakers had done at Bletchley Park during the Second World War, since the relevant documents were not declassified until 20 or more years after their lecture. Heuristic search (although not by that name) was in fact a principal weapon in the codebreakers’ armory.

Moving to Bletchley Park the day after Prime Minister Chamberlain declared war on Hitler, Turing worked on Enigma and (with his colleague Welchman) designed the Bombe, an electromechanical behemoth for attacking encrypted German messages. The relay-based Bombes (each containing around one million soldered connections [33, p. 291]) turned Bletchley Park into a codebreaking factory: By late 1943, these engines of search were achieving a total average throughput of 84,000 broken messages each month—two messages every minute, 24/7 [28, p. 29].

The Bombe trawled at superhuman speed through the Enigma machine’s possible settings, looking for the ones the machine’s operator had used to encrypt the message. Because of the astronomical number of potential settings, an exhaustive search was out of the question; searches had to be guided into promising regions of the solution space.

4. For some context, see [49].
5. The systems they considered were what they called “symbol systems,” exemplified by Turing machines.
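Newell and Simon’s definition of search, “generating potential solutions and testing them” under the guidance of a fallible scoring rule, can be sketched in a few lines of modern code. The sketch below is an illustrative reconstruction, not period code; the toy state space and the scoring rule are invented for the example:

```python
import heapq

def heuristic_search(start, goal, neighbors, score):
    """Best-first generate-and-test: candidates are generated and
    expanded in order of a heuristic score. The score offers no
    guarantees; it merely steers the search toward promising
    regions of the solution space."""
    frontier = [(score(start), start)]
    seen = {start}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:                       # the "test"
            return state
        for nxt in neighbors(state):            # the "generate"
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (score(nxt), nxt))
    return None

# Invented toy problem: reach 97 from 1 by incrementing or doubling,
# preferring states whose value lies closest to the goal.
found = heuristic_search(
    start=1,
    goal=97,
    neighbors=lambda n: [n + 1, 2 * n],
    score=lambda n: abs(97 - n),
)
```

An exhaustive search would try every reachable state; the score function concentrates effort near the goal, in the spirit of a guided trawl through a vast space of settings.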
One way of guiding the search involved the use of what Turing called “multiple encipherments” [63, p. 317]. These were properties of a crib, a word or phrase (such as WETTER FUER DIE NACHT) that the codebreaker thought might be part of the concealed German message. Each letter of a crib is enciphered by the Enigma machine as some other letter (X as Y, say). A multiple encipherment, or loop, occurs if there is then, further on in the coded version of the message, another occurrence of Y, and this deciphers to X in the crib. Such loops do not happen very often, because it is much more likely that the second Y would decipher to some other letter. However, a good crib needs to contain a longer loop, such as A enciphering to B and then a later occurrence of B deciphering to C and then, further on still, C enciphering to A, and a crib could contain a loop involving four or more letters. These multiple encipherments were the basic tool for guiding the Bombe’s search.

A search guided by a multiple encipherment might fail. In Newell and Simon’s phrase there were “no guarantees,” but the heuristics used with the Bombe worked often enough. Another of the various examples of heuristics that Turing described (in his 1940 write-up of the Bombe) was named Herivelismus, after its inventor Herivel. Herivelismus relied on the fact that, when German Enigma operators came on duty and enciphered their first message, a proportion tended not to make too thorough a job of turning the Enigma machine’s three code-wheels away from the wheels’ “base” position (picture the wheels as resembling the wheels of a combination lock). When the Bombe was searching for the positions of the code-wheels at the start of what was believed to be the first message of a shift, only positions in the base position’s neighborhood were considered [63, p. 335]. Herivel’s insight was reduced to practice in the form of a mechanical process for finding the wheel-positions that was fallible yet fairly reliable. This heuristic broke many messages. Before heuristic search even had a name, it played a dramatic role in the mechanization of thought processes.

Not long after the war ended (when the still highly secret Bombes were mothballed) Turing began extolling the use of search for creating what he called “intelligent machinery.” He advanced what can be termed Turing’s Search Principle:

    [I]ntellectual activity consists mainly of various kinds of search [66, p. 431].

An awareness of the connection between intellectual activity and mechanical search was already present in his prewar work, where he linked the activity of logico-mathematical proof with search. (An essential component of this idea, the concept of an enumeration of the provable formulae, is present in Hilbert’s 1904 classic [26].) Turing expressed his proof-as-search idea like this:

    We are always able to obtain from the rules of a formal logic a method of enumerating the propositions proved by its means. We then imagine that all proofs take the form of a search through this enumeration for the theorem for which a proof is desired [62, p. 193].

It is in that manner, Turing said, that ingenuity is replaceable by patience. (He actually used the word “heuristic” on this page of his discussion, although not it seems in the Newell-Simon sense (ibid.).) It was not until the Bombes were in operation, however, that Turing saw the ample successes of the practical, large-scale use of fallible mechanical searches for replacing—to an extent—the endeavors of human codebreakers.

As the Newell–Simon approach eventually demonstrated, Turing was quite right when—almost a decade before Newell and Simon presented their first heuristic program at Shannon and McCarthy’s Dartmouth conference6—he predicted that “research into intelligence of machinery will probably be very greatly concerned with ‘searches’” [66, p. 430].

At Bletchley Park, Turing even composed a typescript about machine intelligence (see the next section). Now lost, this was undoubtedly the earliest paper in the field.

CHESS AND MECHANIZED LEARNING

Michie, a leading figure in 20th-century British AI, was one of Turing’s younger colleagues at Bletchley Park. When I interviewed him at his home near Palm Springs in 1998, he put on record important glimpses of Turing’s wartime thinking. (Of course, recollections of what happened more than half-a-century previously must always be handled with care, especially when the recollector played a key role in later relevant events—with the attendant risk of anachronism.) Some evenings Turing and Michie would retreat to a pub to discuss their favorite topic:

DONALD MICHIE: “What the codebreaker does is very much a set of intellectual operations and thought processes, and so we were thoroughly familiar with the idea of automating thought processes—both of us were up to our elbows in automation of one kind and another.”

6. There are some interesting first-hand accounts of this presentation in [36, pp. 104–108].
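The hunt for multiple encipherments in a crib is itself a mechanical procedure. The following sketch is purely illustrative (the crib/ciphertext alignment is invented, and this is not how the Bombe was engineered): each aligned position links a crib letter to the cipher letter standing over it, and a loop is a cycle in the resulting graph.

```python
def find_loops(crib, cipher):
    """Return the position-sets of 'multiple encipherments': cycles in
    the graph whose edges join crib[i] to cipher[i] for each aligned
    position i. (Enigma's substitutions are reciprocal at each
    position, so the edges are undirected.)"""
    edges = {}
    for i, (p, c) in enumerate(zip(crib, cipher)):
        edges.setdefault(p, []).append((c, i))
        edges.setdefault(c, []).append((p, i))

    loops = []

    def walk(start, letter, used, path):
        for nxt, pos in edges.get(letter, []):
            if pos in used:
                continue
            if nxt == start and path:            # cycle closed
                loops.append(path + [pos])
            elif nxt != start:
                walk(start, nxt, used | {pos}, path + [pos])

    for letter in sorted(edges):
        walk(letter, letter, frozenset(), [])

    # Deduplicate: each cycle is found from every starting letter
    # and in both directions.
    return sorted(sorted(l) for l in {frozenset(l) for l in loops})

# Invented alignment: A->B at position 0, B->C at 3, and C->A at 5
# form the three-letter loop from the text; the X/Y pairs form
# two-letter loops.
loops = find_loops("AXXBXC", "BYYCYA")
```

On this alignment the three-letter loop appears as the position set [0, 3, 5]; it was exactly such loops that the codebreakers used to prune the Bombe’s search.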
Chess, which Michie later described as “the Drosophila melanogaster of machine intelligence” [39, pp. 78–79], provided a convenient framework for their theorizing. Good, another codebreaker, would sometimes join in these ongoing wartime discussions, usually during Sunday morning walks with Turing and Michie. He told me of an even earlier conversation with Turing, in 1941, before Michie’s time at Bletchley Park, when they had “talked about the possibility of mechanizing chess.”7 I asked Michie what he recalled of his historic discussions with Turing.

MICHIE: “There were three headings. One was: methods of mechanizing the game of chess and games of similar structure. Another was: the possibility of machine algorithms and systems which could learn from experience; and the third area was a little more general, to do with the possibility of instructing machines with more general statements than purely ground-level factual statements—which would involve, in some sense, the machine understanding and drawing inferences from what it was told.”

COPELAND: “Did you and Turing often play chess together at this time?”

MICHIE: “Being one of the few people in the Bletchley environment bad enough to give him a reasonably even game, I became his regular sparring partner. Our discussions on machine intelligence started from the moment that we began to play chess together.”

COPELAND: “What specific proposals did Turing make concerning chess programming at this time?”

MICHIE: “We talked about putting numerical values on the pieces. And we talked about the mechanization of conditionals of the form: ‘If I do that, he might do that, or alternatively he might do that, in which case I could do this’—what today we would call look-ahead. We certainly discussed priority of ordering according to some measurable plausibility of the move.”

COPELAND: “What else?”

MICHIE: “Our discussions of chess programming also included the idea of punishing the machine in some sense for obvious blunders, and the question of how on earth it might be possible to make this useful, beyond the pure rote-learning effects of punishment—deleting a move from the repertoire. Which is only applicable in the early opening.”

COPELAND: “Did Turing have any specific ideas about how to do that?”

MICHIE: “I don’t remember any in the context of chess. I do remember him later circulating a typescript in which he had specific ideas about learning and more general varieties of learning.”

COPELAND: “Was this at Bletchley?”

MICHIE: “At Bletchley, yes. When I say ‘circulating,’ I know he showed a copy to me and to Jack Good—and I suppose to one or two other associates—for our comments.”

None of this Bletchley-era pioneering work on computer chess was published. Like Turing and Michie, Good “made the mistake of thinking it was not worth publishing,” he said.8

When peace descended, Turing had more time to think about machine intelligence. In 1945, the National Physical Laboratory hired him to design an electronic computer, the Automatic Computing Engine or ACE—a universal Turing machine in hardware. (Womersley, who coined the name “Automatic Computing Engine,” had visited Aiken’s computer at Harvard in 1945, calling it “Turing in hardware” [76].) The ACE project was Turing’s opportunity to consider the possibilities of machine intelligence in the exciting new context of electronic, general-purpose, stored-program hardware. He said that his aim was to “make a brain,”9 and a letter he wrote from the National Physical Laboratory contained some remarkable statements:

    In working on the ACE I am more interested in the possibility of producing models of the action of the brain than in the practical applications to computing . . . The ACE will be used . . . in the first instance in an entirely disciplined manner . . . It will also be necessarily devoid of anything that could be called originality. There is, however, no reason why the machine should always be used in such a manner: there is nothing in its construction which obliges us to do so. It would be quite possible for the machine to try out variations of behaviour and accept or reject them . . . and I have been hoping to make the machine do this.10

Turing even mentioned computer intelligence in his otherwise austere report setting out the design of the ACE:

    Given a position in chess the machine could be made to list all the “winning combinations” to a depth of about three moves on either side. This . . . raises the question “Can the machine play chess?” It could fairly easily be made to play a rather bad game . . . There are indications however that it is possible to make the machine display intelligence at the risk of its making occasional serious mistakes. By following up this aspect the machine could probably be made to play very good chess [64, p. 389].

7. I benefitted from interviewing Good in 2004.
8. Good quoted in [3, p. 14].
9. Letter from Bayley to Copeland, 6 May 1998.
10. Letter from Turing to Ross Ashby, undated, circa 1946, http://www.alanturing.net/turing_ashby.
In the summer of 1948, Turing created a chess-player in the form of what he called a “paper machine,” a system of machine-rules in loose pseudocode. The rules were acted out using paper and pencil to do the computations [66, p. 431]. The system that Turing and his collaborator Champernowne called “Turochamp” proved capable of beating a human player, described as a “beginner at chess” [10]. Champernowne summed up:

    Our general conclusion was that a computer should be fairly easy to programme to play a game of chess against a beginner and stand a fair chance of winning or at least reaching a winning position [10].

In 1948, Turing left the National Physical Laboratory for the University of Manchester’s new Computing Machine Laboratory, joining Newman.11 There he continued his work on machine intelligence, telling a newspaper reporter that he saw no reason why electronic computers should not “enter any one of the fields normally covered by the human intellect, and eventually compete on equal terms.”12

At the Manchester lab, Turing delved further into computer chess, describing the successor to Turochamp in a typescript [70] completed in 195113 (it was published in 1953, with modifications, in [5]). Turing had mentioned in the course of his 1947 lecture to the London Mathematical Society that he and Shannon discussed computer chess [65, p. 393]. Their individual ground-breaking papers on the subject—Shannon’s published in 1950 but written in 1948, a few months after Turing’s Turochamp beat a human player [55]—set out the state of the art in the new field. Using modern terminology (italicized) to label concepts from that earlier time, Turing’s typescript (and the published version of it in [5]) covered:

› evaluation rules that assign numerical values, indicative of strength or weakness, to board configurations;
› variable look-ahead: instead of the consequences of every possible move being followed equally far, the “more profitable moves” are “considered in greater detail than the less”;
› heuristics guiding the search through the tree of possible moves and countermoves;
› the minimax principle;
› quiescence search: the search along a particular branch in the tree is discontinued when a “dead” position is found—a position with no captures or other developments in the offing.

Part of computer chess’s appeal was the scope it offered for investigating the idea of a program learning to improve its performance. Newman described one pioneering learning technique during a discussion on intelligent machines with Turing and others (broadcast on BBC radio). In response to the question “Can machines learn to do better with practice?,” Newman said:

    Yes . . . a programme could be composed that would cause the machine to do this: a 2-move chess problem is recorded into the machine in some suitable coding, and . . . a white move is chosen at random . . . All the consequences of this move are now analysed, and if it does not lead to forced mate in two moves, the machine prints, say, “P-Q3, wrong move,” and stops. But . . . when the right move is chosen the machine not only prints, say, “B-Q5, solution,” but it changes the instruction calling for a random choice to one that says “Try B-Q5.” . . . Such a routine could certainly be made now, and I think this can fairly be called learning [71, p. 496].

Turing wanted to use what we would now call a genetic algorithm, or GA, to achieve learning. He hinted at the idea in a quotation given previously (“try out variations of behaviour and accept or reject them”), and the idea also appeared, briefly, in a report he wrote in 1948 for the National Physical Laboratory, where he spoke of “genetical or evolutionary search,” with “the criterion being survival value” [66, p. 431]. Turing fleshed out the idea in his typescript on chess:

    [A]s to the ability of a chess-machine to profit from experience, one can see that it would be quite possible to programme the machine to try out variations in its method of play (e.g. variations in piece value) and adopt the one giving the most satisfactory results. This could certainly be described as “learning,” though it is not quite representative of learning as we know it [70, p. 575].

As far as can be ascertained from surviving records, Turing’s chess experiments seem to have been conducted entirely with paper machines.

11. For details of this move see [12, pp. 395–401].
12. The Times, 11 June 1949.
13. Letter from Bowden to Turing, 23 November 1951, John Rylands Library, University of Manchester, collection GB 133 TUR/ADD/53.
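Turing’s proposal, “try out variations in its method of play (e.g. variations in piece value) and adopt the one giving the most satisfactory results,” amounts to what would now be called an evolutionary hill-climb. The sketch below is purely illustrative: the starting table holds the conventional piece values, and an invented fixed reference table stands in for “the most satisfactory results,” since playing actual games of chess is beside the point here.

```python
import random

# Conventional starting piece values (pawn, knight, bishop, rook, queen).
values = {"P": 1.0, "N": 3.0, "B": 3.0, "R": 5.0, "Q": 9.0}

def fitness(vals):
    """Stand-in for a series of games: closeness to an invented
    reference table plays the role of match results. In a real
    system, as in Samuel's checkers experiment, this would be a
    score obtained from self-play."""
    reference = {"P": 1.0, "N": 3.2, "B": 3.3, "R": 5.1, "Q": 9.7}
    return -sum((vals[k] - reference[k]) ** 2 for k in vals)

random.seed(0)
for _ in range(200):
    variant = dict(values)                     # copy one "player"
    piece = random.choice(sorted(variant))     # small random alteration
    variant[piece] += random.uniform(-0.2, 0.2)
    if fitness(variant) > fitness(values):     # "the criterion being
        values = variant                       #  survival value"
```

Only improvements survive, so the adopted table can never score worse than the conventional one; over enough generations it drifts toward whatever the games (here, the reference table) happen to reward.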
implementation of a GA on an electronic computer was interconnected Boolean neurons, and B-types had in
by Samuel, in 1955, in connection with a checkers-play- addition inputs that enabled training.
ing program [54]. Samuel set up two copies of his check- Using these inputs, an external agent would orga-
ers-player on an IBM 704 and programmed the nize the initially randomly connected network by
computer to try out variations in the method of play. It selectively disabling and enabling connections within
made small random alterations to the move generator it, an arrangement that is functionally equivalent to
of one copy, leaving the other copy unchanged; and, one in which the stored information takes the form of
after a series of games, alterations that led to improved new connections within the network.
performance were retained. In this way, the program Turing said that a B-type can be trained—by means of
learned to outplay Samuel in “8 or 10 hours of machine- applying “appropriate interference, mimicking educa-
playing time” [54, p. 211]. tion”—to “do any required job, given sufficient time and
provided the number of units is sufficient” [66, p. 422].
He foresaw using an electronic computer to simu-
ARTIFICIAL “NEURONS” late the network, and also envisaged programming an
In his 1948 report [66] for the National Physical Labo- automatic training algorithm—although he himself
ratory, Turing discussed a raft of concepts that would carried out his neural-network experiments using
later become central to AI, generally after reinvention paper machines, a short time before the first function-
by others. This report, titled simply “Intelligent Machin- ing electronic computers came alive. He said:
ery,” was in effect a manifesto.
I feel that more should be done on these lines.
It circulated in limited numbers, but never received
I would like to investigate other types of
the exposure it warranted. As well as Turing’s Search Prin-
unorganised machine . . . When some electronic
ciple, and his all-too-brief mention of genetical search, he
machines are in actual operation I hope that they
discussed the theorem-proving approach to AI, the con-
will make this more feasible. It should be easy to
cept of multiagent “cultural” searches, the role of ran-
make a model of any particular machine that one
domness in learning and in computation more broadly, intelligent robots, the idea of intelligence as an "emotional" or response-dependent concept [51], the argument from Gödel's theorem against computer intelligence (later elaborated in [31] and [48]), and much more besides, including a cameo presentation of the Turing test (see below). Moreover, in the report's most detailed sections he set out a bottom-up brain-inspired approach to machine intelligence that foreshadowed aspects of connectionist AI [17], [60].

Turing suggested (in [66]) that practical computing systems be constructed out of simple, initially randomly connected neuron-like elements, and then trained to perform specific tasks. McCulloch and Pitts had already published an account of their model neurons (McCulloch later remarking, "What we thought we were doing (and I think we succeeded fairly well) was treating the brain as a Turing machine"), but there was no suggestion in their work of using artificial neurons to construct neural network-like computing systems [37], [38]. Moreover, their discussion of neuron-level learning was perfunctory. Turing's idea that an initially unorganized artificial neural network could be organized by what he called "interfering training" was new.

His "unorganized machines"—"A-types" and "B-types"—are species of neural networks. He said A-types are "about the simplest model of a nervous system" [66, p. 418]. A-types and B-types both consisted of

  wishes to work on within such a U.P.C.M. [universal practical computing machine] instead of having to work with a paper machine as at present. If also one decided on quite definite "teaching policies" these could also be programmed into the machine. One would then allow the whole system to run for an appreciable period, and then break in as a kind of "inspector of schools" and see what progress had been made [66, p. 428].

ROBOTS AND "CHILD MACHINES"
Turing advocated a range of different approaches to developing AI—and, as the field progressed, all of his suggestions turned out to be fecund. One approach was his bottom-up neural network strategy, another was the higher level approach now called "Symbolic AI," involving his ideas about search, heuristics, theorem-proving, and high-level learning. He also contrasted the approach of programming disembodied systems—systems for carrying out some "abstract activity, like the playing of chess"—with an approach involving embodiment, saying "I think both approaches should be tried" [67, p. 463]. He thought researchers pursuing an embodiment approach could aim, for example, "to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak" [67, p. 463].
Turing sketched the quest, now pursued in research labs round the globe, to build a humanoid robot:

  One way of setting about our task of building a "thinking machine" would be to take a man as a whole and try to replace all the parts of him by machinery. He would include television cameras, microphones, loudspeakers, wheels and "handling servo-mechanisms" as well as some sort of "electronic brain" [66, p. 420].

By interacting with the environment, the robot would be "finding things out for itself," he said (ibid.). The "brain," moreover, might be "stationary and control[ling] the body from a distance" (ibid.).

Turing himself steered away from an embodiment approach, saying it would be "a tremendous undertaking" and could "depend rather too much on sense organs and locomotion to be feasible" [66, pp. 420–421]. While this type of approach was certainly not suited to those early years of AI, advances in engineering made it feasible within two decades. Brooks wrote in a review of the embodiment approach's history that Turing "carefully considered the question of embodiment"; Brooks summarized the approach: "robots have bodies and experience the world directly—their actions are part of a dynamic with the world" [7, pp. 3, 4]. Brooks' Turingesque research program at MIT produced some of AI's best-known early robots, including Herbert, Cog, and Kismet [6], [7], [8].

Another of Turing's emphases was his child-machine concept:

  Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? [67, p. 460].

The child-machine is to be endowed by its makers with what it needs in order to learn as a human child would. "Presumably the child-brain is something like a note-book as one buys it from the stationers," Turing said: "Rather little mechanism, and lots of blank sheets" (ibid.).

  Our hope is that there is so little mechanism in the child-brain that something like it can be easily programmed. (ibid.)

The child-machine might be "more or less without a body," having "at most organs of sight, speech and hearing," Turing said [66, p. 420]. Once the child machine had been "subjected to an appropriate course of education one would obtain the adult brain" [67, p. 460]. The teaching process "could follow the normal teaching of a child" [67, p. 463].

Michie was a leading advocate of Turing's child-machine concept in AI's "classical" period (which began when the rugged pioneering era of experimental computers gave way to the mainframe world of the 1960s and 1970s). In Michie's hands, the concept led to Edinburgh University's Freddy robots [1].

The Mark two Freddy was a stationary robot with a single camera-eye and a pincer-like gripper. Michie and his colleagues taught Freddy to recognize common objects—a hammer, cup, and ball, for example—and to assemble simple objects like toy cars from a pile of parts. Although the assembly operations themselves had to be interactively programmed, the first stages of the teaching process did somewhat resemble the teaching of a child. The experimenters would spend a few hours showing the robot unfamiliar parts and demonstrating how to lay the pile of parts out ready for assembly.

Turing's far-seeing ideas on machine intelligence were being carried forward and implemented by those he influenced.

THE NEXT GENERATION
For Michie, his wartime discussions on machine intelligence with Turing were pivotal. "By the end of war I wanted to spend my life in that field," he said, "but I also knew that I would have to find something else to do while waiting until the magic moment arrived, when there were machines on which one could do suitable experimentations." Michie bided his time until the 1960s, when he set up his Experimental Programming Unit at Edinburgh.

Others, influenced by Turing in the immediate postwar years and bitten by the machine intelligence bug, did not wait so long to embark on programming. They were prepared to make do with the rough-and-ready facilities of the vanguard computers that Turing and other trailblazing designers—such as Williams, Kilburn, Wilkes, and the team that Turing left behind at the National Physical Laboratory—had brought into existence.

Two of those experimental machines were located in Manchester and Cambridge, in Newman's Computing Machine Laboratory and Wilkes' Mathematical Laboratory. Early machine-intelligence programs came to life on those two room-sized computers, and the programmers responsible for these preliminary steps in AI were very much in Turing's orbit.

Dietrich Prinz, a refugee German physicist, took on the challenge of chess programming. He had learned how to program the Manchester computer at seminars that Turing gave, and an article by Turing's assistant Davies at the National Physical Laboratory inspired him to work on chess [16]. "To programme one of the electronic machines for the analysis of Chess would not be
difficult,” Davies had written, echoing Turing in his design Strachey said his experience with his draughts pro-
document for the ACE [19, p. 62]. gram convinced him that a “great deal of what is usu-
Prinz’s chess program was for solving mate-in-two ally known as thinking can in fact be reduced to a
problems with various simplifications imposed, such as relatively simple set of rules of the type which can be
no double-moves by pawns, and no distinction between incorporated into a program” [59, p. 26].
mate and stalemate [50]. With so small a search space Strachey’s 1951 letter to Turing16 also described his
there was no need for Turing’s heuristic approach, and paper-machine experiments with his NIM-player, a
Prinz wrote a brute-force program. Its first successful run learning program (it remembered and tried to reach-
was in November 1951, but Turing seemed to take little ieve any winning position it reached). He did not
interest—perhaps because he knew there was no future include learning in his draughts program, however.
in brute-force alone. This was done later by Samuel (see above); after Stra-
Strachey, another emerging code-hacker at chey publicized the program at a Canadian confer-
Manchester—later responsible, with Scott, for denota- ence in 1952, Samuel coded a version for the IBM 701,
tional semantics—shared Turing’s passion for mecha- saying “The basic program used in these experiments
nized learning. In 1951, he listened to one of Turing’s is quite similar to the program described by Strachey
BBC radio broadcasts on thinking machines [69] and in 1952” [54, p. 212]. Samuel’s program ran at IBM in
wrote to him: late 1952, an example of pre-Dartmouth AI in the U.S.17
In 1951, Wilkes’ computer in Cambridge presented
[Y]our remark . . . that the programme for
a none-too-interested world with two programs incor-
making a machine think would probably have
porating learning. Both were written by an American
great similarities with the process of teaching
visitor to the Mathematical Laboratory, Oettinger,
. . . seems to me absolutely fundamental . . .
who was fueled by Turing’s ideas about mechanized
First . . . it would obviously be necessary to get
learning [45, p. 1243]—and he also cited thoughts on
the machine to learn in the way a child learns,
the topic by other scientists in Britain, including Ashby
with the aid of a teacher.14
and Grey Walter as well as Wilkes [2], [24], [75].18
With Turing’s programming manual for the Manches- Oettinger’s “response-learning” program operated
ter computer [68] at his elbow, Strachey coded a heuristic “at a level roughly corresponding to that of condi-
draughts (checkers) player. This made use of the com- tioned reflexes,” Oettinger said [45, p. 1257]. He trained
puter’s CRT monitor to display a virtual board. When the it to respond appropriately to given stimuli by means
program was ready for the human player to key in his or of what he described as “approval or disapproval.”
her next move, it emitted a peremptory pip-pip sound.15 Turing had suggested in his 1948 manifesto that the
By the summer of 1952, Strachey had developed the pro- “training of the human child depends largely on a sys-
gram to the point where it could “play a complete game tem of rewards and punishments, and this suggests
of Draughts at a reasonable speed” [58, p. 47]. that it ought to be possible to carry through the organ-
Also interested in language processing, he included ising with only two interfering inputs”—and had him-
some primitive capabilities in that direction. Like ELIZA’s self described some experiments along those lines
canned responses at MIT in the 1960s, the program’s con- involving a paper machine [66, pp. 425–429]. Oettinger
versational output would appear at the printer. Upon the summed up the results of his experiments with the
toss of a coin to decide who should move first, the pro- response-learning program as follows (his words might
gram would print either “HEADS” or “TAILS” at random, call to mind Turing’s remark that “we have little temp-
and then demand “HAVE I WON?”16 If the human hesi- tation to imagine intelligence” when “we are able to
tated too long over a move, the program might say “YOU explain or predict [the] behaviour”):
MUST PLAY AT ONCE OR RESIGN”; and ineptitude from
The behaviour pattern of the response-learning
its opponent, or blatant rule-breaking, might elicit
. . . machine is sufficiently complex to provide a
I REFUSE TO WASTE ANY MORE TIME. GO AND difficult task for an observer required to discover
PLAY WITH A HUMAN BEING. the mechanism by which the behaviour of the . . .
machine is determined [45, p. 1257].
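The scheme Oettinger describes—a stimulus, a trial response, and a trainer signaling approval or disapproval—can be caricatured in a few lines. The table-based learner below is a hypothetical sketch, not Oettinger's EDSAC program: the function names, the episode loop, and the habit table are my assumptions, made only to illustrate two-signal training of a conditioned reflex.

```python
import random

def train_responses(stimuli, responses, approve, episodes=500, seed=0):
    """Toy conditioned-reflex learner: approved stimulus-response pairs
    become fixed habits; disapproval breaks the habit. `approve(s, r)`
    plays the trainer. Illustrative only, not Oettinger's program."""
    rng = random.Random(seed)
    habit = {}  # stimulus -> response fixed by approval
    for _ in range(episodes):
        s = rng.choice(stimuli)
        r = habit.get(s) or rng.choice(responses)  # use a habit, else guess
        if approve(s, r):
            habit[s] = r        # approval: the response is reinforced
        else:
            habit.pop(s, None)  # disapproval: the habit is broken
    return habit

# the trainer approves a response that matches the stimulus upper-cased
habit = train_responses(["a", "b"], ["A", "B"],
                        lambda s, r: r == s.upper())
print(habit)  # the learned stimulus -> response habits
```

As in Turing's two-interfering-inputs suggestion, the only channel between trainer and machine is the approval/disapproval signal; the appropriate behavior is organized, not programmed in.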
14 Letter from Strachey to Turing, 15 May 1951, King's College Archive, Cambridge.
15 Turing's and Strachey's pioneering work on computer music is described in [15].
16 The program's remains are in the Strachey Papers, Bodleian Library, Oxford.
17 I benefitted from my correspondence with Samuel in 1988.
18 I benefitted from interviewing Oettinger in 2000 and from information in a letter he wrote to me dated 19 June 2000.
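Prinz's brute-force method mentioned earlier—trying every first move, every reply, and every counter-reply, with no heuristics—can be sketched abstractly. In the toy below a take-1-or-2-counters game stands in for chess, so the game interface and all names are stand-ins of mine, not Prinz's encoding for the Manchester machine:

```python
def forced_win_in_two(pos, moves, just_won):
    """Brute-force search in the spirit of Prinz's mate-in-two solver:
    a position is solved if some first move wins outright, or leaves a
    winning reply to every opponent response. `moves(p)` lists successor
    positions; `just_won(p)` says the player who just moved has won."""
    for p1 in moves(pos):
        if just_won(p1):                 # immediate win also counts
            return True
        replies = moves(p1)
        if replies and all(
                any(just_won(p3) for p3 in moves(p2))  # our winning reply
                for p2 in replies):
            return True
    return False

# Toy game: players alternately take 1 or 2 counters; taking the last wins.
def take_moves(n):
    return [n - t for t in (1, 2) if n >= t]

def took_last(n):
    return n == 0

print(forced_win_in_two(4, take_moves, took_last))  # True: take 1, then win
print(forced_win_in_two(3, take_moves, took_last))  # False: 3 is a loss
```

Even this tiny example shows why Prinz's restricted setting was tractable and why, as the search space grows, exhaustive enumeration gives way to the heuristic approach Turing favored.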
Oettinger described his second exhibit, his "shopping programme," as a "child machine" [45, p. 1247]. This learned in a way reminiscent of "a small child sent on a shopping tour" [45, p. 1247]. The program's world consisted of eight shops and the user would instruct it to find a specified item. While searching, initially at random, the program memorized a few of the items stocked in each shop it visited. If sent out again for the same item, or for some other item whose location was learned during the previous searches, the program went directly to the appropriate shop.

Oettinger observed that Turing's "imitation game can be played with the shopping . . . machine" [45, p. 1250], and was, it seems, the first programmer to claim an implemented program capable of passing a (very) restricted Turing test—where the interrogator's questions are "restricted to shopping orders of the form 'in what shop may article j be found?' coded as vectors":

  Under these conditions the interrogator . . . would find it difficult to make the correct identification [45, p. 1250].

Oettinger regarded this as indicating that "machine intelligence . . . exists, although in a very limited form," and he noted (in a phrase reminiscent of Minsky's later characterization of AI) that his program was "capable of performing functions which, in living organisms, are considered to be the result of intelligent behaviour" [45, pp. 1250–51].

THE TURING TEST
Turing's test, his "imitation game," needs no introduction. Its first appearance was in Turing's 1948 manifesto, in a restricted form:

  It is not difficult to devise a paper machine which will play a not very bad game of chess. Now get three men as subjects for the experiment, A, B, and C. A and C are to be rather poor chess players. B is the operator who works the paper machine . . . Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine. C may find it quite difficult to tell which he is playing [66, p. 431].

Turing added, "This is a rather idealised form of an experiment I have actually done" (ibid.).

His description of the unrestricted form of the test was published two years later [67, pp. 441–442]. B is now an electronic computer. C, the "interrogator," must decide based on question-and-answer (conducted via, e.g., a teleprinter), which of A and B is the computer. A should "help the interrogator." The computer is "permitted all sorts of tricks so as to appear more man-like" (as Turing explained in the script of his radio discussion with Newman [71]), and C—"who should not be expert about machines"—is permitted to ask "anything" [71, p. 495].

Turing's claims for this "question-and-answer method" are that (a) it "seems to be suitable for introducing almost any one of the fields of human endeavour that we wish to include," and (b) it "has the advantage of drawing a fairly sharp line between the physical and the intellectual capacities of a man" [67, p. 442]. He re-emphasized that point in his radio script: the "important thing is to try to draw a line between the properties of a brain, or of a man, that we want to discuss, and those that we don't" [71, p. 494].

It is worth noting that Turing has frequently been misinterpreted as intending his test as a definition, in particular a behaviorist or "operational" definition. This misunderstanding was introduced by early commentators; and, since then, the leitmotif that Turing attempted a definition has permeated very widely through academic and popular literature (see, e.g., [56, pp. v–vi], [4, p. 248], [29, p. 415], [21], [22], [32], [42, p. 158]). Yet Turing stated very clearly in his radio script:

  I don't want to give a definition of thinking, but if I had to I should probably be unable to say anything more about it than that it was a sort of buzzing that went on inside my head. But I don't really see that we need to agree on a definition at all [71, p. 494].

Turing also made it completely clear that his test was intended to provide a criterion (his term [67, p. 442]) but that this sufficient condition was not also necessary. He said:

  May not machines carry out something which ought to be described as thinking but which is very different from what a man does? . . . [A]t least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection [67, p. 442].

The usefulness of Turing's 70-year-old test as a serious benchmark for AI in the 21st century is certainly something that can be questioned, especially given reports of not-so-intelligent chatbots (such as Eugene Goostman19) passing the unrestricted Turing test.

19 "Turing Test success marks milestone in computing history," University of Reading, 8 June 2014. https://archive.reading.ac.uk/news-events/2014/June/pr583836.html
However, it is a common feature of modern presentations of the Turing test that Turing's specification of what counts as passing the test is omitted [67, p. 441]. In the early days, though, this aspect of the test was well understood. Oettinger gave the following explanation of what is required for a computer to be said to pass:

  He [Turing] postulates a game played by a man A, a woman B, and an interrogator C . . . [T]he object of the game is for C to make the correct identification . . . If, when A is replaced by a machine, C is wrong in his identifications as often as when A was a man, the man and the machine become indistinguishable to C [45, p. 1250].

Turing himself was very clear: the question that replaces "our original, 'Can machines think?'" is:

  Will the interrogator decide wrongly as often [in the computer versus human game] as he does when the game is played between a man and a woman? [67, p. 441].

Curiously—and despite the number of Turing test competitions that have been held around the globe—this man–woman pretest, necessary for properly scoring the Turing test, seems not to have been conducted to date. Yet the pretest would not be overly challenging to carry out, and in its absence there is no protocol for properly determining whether a computer has passed Turing's test.

What did Turing say about when his test might be passed? He thought that "in about fifty years' time" the state of the art would have progressed sufficiently to enable a program to "play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning" [67, p. 449]. Events proved him right about that. He placed passing the test much further in the future, however. Discussing this in his 1952 radio script, he responded to Newman's question whether success would "be a long time from now, if the machine is to stand any chance with no questions barred?":

  Oh yes, at least 100 years, I should say [71, p. 495].

So perhaps reports that the unrestricted Turing test has already been passed are premature?

The Eugene Goostman chatbot was subjected to a careful series of unrestricted Turing tests, held at the Royal Society of London in 2014. The outcome: Turing was correct in his statement of what would be achieved "in about fifty years"—a third of the 30 interrogators made the wrong identification after 5 minutes of questioning.19 The organizers announced, though, that the chatbot had therefore passed the Turing test—because they thought that according to Turing, "If a computer is mistaken for a human more than 30% of the time during a series of five minute keyboard conversations it passes the test."19 There are others who have taken this prediction of Turing's as his intended threshold for passing the test (e.g., [53, p. 152]), but the difficulty with this interpretation is that it drives Turing into contradicting his own words. The 30% failure rate for judges would, he said, be achieved "in about fifty years," but the test would not actually be passed for "at least 100 years."

Turing's benchmark test is much harder than it might appear, and his prediction of final success in the test has at least three more decades to run. Time will tell. But as well as posing a significant challenge, the Turing test is an enduring emblem of the cocktail of ideas that Turing and company served up in those early days of the quest to build intelligent machinery.20

20 Turing's test has also sparked a large literature in the philosophy journals, too vast to be surveyed here. The following references give the flavor of some current debates: [51], [23], [20], [40], and [74], with [52] responding to views in some of these papers.

CONCLUSION
From the early 1940s, Turing contributed significantly and influentially to the theory of what we now call AI. In the postwar years, he implemented AI programs in the form of what he termed "paper machines," these including chess programs, simulations of artificial neural networks, and learning programs. His work inspired other early programmers, such as Prinz, Strachey, and Oettinger, who ran chess, checkers, and learning programs on vanguard electronic computers in 1951 and 1952. Turing's ideas on robotics and "child machines" provided a basic theoretical framework for some early robots developed in the 1960s and 1970s. His "imitation game" remains a hard challenge and even a guiding principle for aspects of AI today.21

21 I am grateful to two anonymous reviewers for helpful comments.

BIBLIOGRAPHY
[1] A. P. Ambler, H. G. Barrow, C. M. Brown, R. M. Burstall, and R. J. Popplestone, "A versatile computer-controlled assembly system," in Proc. 3rd Int. Joint Conf. AI, Stanford, CA, USA, 1973, pp. 298–307.
[2] W. R. Ashby, "Design for a brain," Electron. Eng., vol. 20, pp. 379–383, 1948.
[3] A. G. Bell, The Machine Plays Chess. Oxford, U.K.: Pergamon, 1978.
[4] N. Block, "The computer model of the mind," in An Invitation to Cognitive Science, vol. 3, D. N. Osherson and H. Lasnik, Eds. Cambridge, MA, USA: MIT Press, pp. 247–289, 1990.
[5] B. V. Bowden, Faster Than Thought. London, U.K.: Pitman, 1953.
[6] C. Breazeal and B. Scassellati, "Infant-like social interactions between a robot and a human caretaker," Adaptive Behav., vol. 8, pp. 49–74, 2000.
[7] R. A. Brooks, "Intelligence without reason," Memo No. 1293, MIT AI Laboratory, Apr. 1991. [Online]. Available: https://people.csail.mit.edu/brooks/papers/AIM-1293.pdf
[8] R. A. Brooks and L. A. Stein, "Building brains for bodies," Auton. Robots, vol. 1, pp. 7–25, 1994.
[9] B. E. Carpenter and R. W. Doran, "The other Turing machine," Comput. J., vol. 20, pp. 269–279, 1977.
[10] D. Champernowne, letter, Comput. Chess, vol. 4, pp. 80–81, 1980.
[11] A. Church, review of [61], J. Symbolic Log., vol. 2, pp. 42–43, 1937.
[12] B. J. Copeland, The Essential Turing. Oxford, U.K.: Oxford Univ. Press, 2004.
[13] B. J. Copeland, Alan Turing's Automatic Computing Engine. Oxford, U.K.: Oxford Univ. Press, 2005.
[14] B. J. Copeland, "The Church-Turing thesis," in Stanford Encyclopedia of Philosophy, E. Zalta, Ed., 2023. [Online]. Available: https://plato.stanford.edu/entries/church-turing/
[15] B. J. Copeland and J. Long, "Alan Turing: How his universal machine became a musical instrument," IEEE Spectrum, Oct. 2017. [Online]. Available: https://spectrum.ieee.org/alan-turing-how-his-universal-machine-became-a-musical-instrument
[16] B. J. Copeland and D. Prinz Jr., "Computer chess—The first moments," in The Turing Guide, B. J. Copeland et al., Oxford, U.K.: Oxford Univ. Press, 2017, pp. 327–346.
[17] B. J. Copeland and D. Proudfoot, "On Alan Turing's anticipation of connectionism," Synthese, vol. 108, pp. 361–377, 1996.
[18] M. Croarken, Early Scientific Computing in Britain. Oxford, U.K.: Oxford Univ. Press, 1990.
[19] D. W. Davies, "A theory of chess and noughts and crosses," Sci. News, vol. 16, pp. 40–64, 1950.
[20] S. Danziger, "Intelligence as a social concept: A socio-technological interpretation of the Turing Test," Philosophy Technol., vol. 35, pp. 1–26, 2022.
[21] R. French, "The Turing test: The first 50 years," Trends Cogn. Sci., vol. 4, pp. 115–122, 2000.
[22] A. T. Greenhill and B. R. Edmunds, "A primer of artificial intelligence in medicine," Techn. Innovations Gastrointestinal Endoscopy, vol. 22, pp. 85–89, 2020.
[23] B. Gonçalves, "The Turing test is a thought experiment," Minds Machines, vol. 33, pp. 1–31, 2023.
[24] W. Grey Walter, "Possible features of brain function and their imitation," in Proc. Symp. Inf. Theory, 1950, pp. 134–136.
[25] J. Haugeland, Artificial Intelligence: The Very Idea. Cambridge, MA, USA: MIT Press, 1985.
[26] D. Hilbert, "Über die Grundlagen der Logik und der Arithmetik," in Verhandlungen des 3. Internationalen Mathematiker-Kongresses in Heidelberg vom 8. bis 13. August 1904, A. Krazer, Ed., Leipzig, Germany: Teubner, pp. 174–185, 1905.
[27] D. Hilbert and W. Ackermann, Grundzüge der Theoretischen Logik. Berlin, Germany: Springer, 1928.
[28] H. Hinsley, British Intelligence in the Second World War, vol. 2. London, U.K.: HMSO, 1981.
[29] A. Hodges, Alan Turing: The Enigma. London, U.K.: Vintage, 1992.
[30] S. Lavington, Ed., Alan Turing and His Contemporaries. Swindon, U.K.: Brit. Inform. Soc., 2012.
[31] J. R. Lucas, "Minds, machines and Gödel," Philosophy, vol. 36, pp. 112–127, 1961.
[32] J. Lotman, "The phenomenon of culture," in Juri Lotman—Culture, Memory and History, M. Tamm, Ed., Cham, Switzerland: Palgrave Macmillan, pp. 33–48, 2019.
[33] P. Mahon, "History of Hut 8," Bletchley Park, 1945, in [12], pp. 267–312.
[34] P. Mancosu and R. Zach, "Heinrich Behmann's 1921 lecture on the decision problem and the algebra of logic," Bull. Symbolic Log., vol. 21, pp. 164–187, 2015.
[35] J. McCarthy, "Information," Sci. Amer., vol. 215, pp. 64–73, 1966.
[36] P. McCorduck, Machines Who Think. New York, NY, USA: Freeman, 1979.
[37] W. McCulloch, discussion, in John Von Neumann: Collected Works, vol. 5, A. H. Taub, Ed., London, U.K.: Pergamon Press, pp. 319–328, 1961.
[38] W. S. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," Bull. Math. Biophys., vol. 5, pp. 115–133, 1943.
[39] D. Michie, On Machine Intelligence, 2nd ed. Chichester, U.K.: Ellis Horwood, 1986.
[40] P. Millican, "Alan Turing and human-like intelligence," in Human-Like Machine Intelligence, S. Muggleton and N. Chater, Eds. Oxford, U.K.: Oxford Univ. Press, pp. 28–51, 2021.
[41] M. Minsky, Ed., Semantic Information Processing. Cambridge, MA, USA: MIT Press, 1968.
[42] L. Moody and W. K. Bickel, "Substance use and addictions," in Computer-Assisted and Web-Based Innovations in Psychology, Special Education, and Health, J. K. Luiselli and A. J. Fischer, Eds. Amsterdam, Holland: Elsevier, pp. 157–183, 2016.
[43] A. Newell, J. C. Shaw, and H. A. Simon, "Empirical explorations with the logic theory machine: A case study in heuristics," in Proc. Western Joint Comput. Conf., 1957, vol. 15, pp. 218–230.
[44] A. Newell and H. A. Simon, "Computer science as empirical inquiry: Symbols and search," Commun. Assoc. Comput. Mach., vol. 19, pp. 113–126, 1976.
[45] A. G. Oettinger, "Programming a digital computer to learn," Philos. Mag., vol. 43, pp. 1243–1263, 1952.
[46] C. S. Peirce, "The 1903 Lowell Institute lectures I–V," in Charles S. Peirce (part 2), vol. 2, A.-V. Pietarinen, Ed., Berlin, Germany: de Gruyter, 2021.
[47] C. S. Peirce, "Some amazing mazes [conclusion]," Monist, vol. 18, pp. 416–464, 1908.
[48] R. Penrose, Shadows of the Mind: A Search For the Missing Science of Consciousness. Oxford, U.K.: Oxford Univ. Press, 1994.
[49] G. Polya, How To Solve It. Princeton, NJ, USA: Princeton Univ. Press, 1945.
[50] D. G. Prinz, "Robot chess," Research, vol. 5, pp. 261–266, 1952.
[51] D. Proudfoot, "Rethinking Turing's test," J. Philosophy, vol. 110, pp. 391–411, 2013.
[52] D. Proudfoot, "An analysis of Turing's criterion for 'thinking'," Philosophies, vol. 7, pp. 1–15, 2022.
[53] W. Rapaport, "Turing test," in Encyclopedia of Language and Linguistics, 2nd ed., K. Brown, Ed., Boston, MA, USA: Elsevier, pp. 151–159, 2006.
[54] A. L. Samuel, "Some studies in machine learning using the game of checkers," IBM J., vol. 3, pp. 211–229, 1959.
[55] C. E. Shannon, "Programming a computer for playing chess," Philos. Mag., vol. 41, pp. 256–275, 1950.
[56] C. E. Shannon and J. McCarthy, Eds., Automata Studies. Princeton, NJ, USA: Princeton Univ. Press, 1956.
[57] S. W. Skan, Handbook For Computers (2 vols). London, U.K.: Dept. Sci. Ind. Res., 1954.
[58] C. S. Strachey, "Logical or non-mathematical programmes," in Proc. Assoc. Comput. Machinery, 1952, pp. 46–49.
[59] C. S. Strachey, "The thinking machine," Encounter, vol. 3, pp. 25–31, 1954.
[60] C. Teuscher, Turing's Connectionism. London, U.K.: Springer, 2001.
[61] A. M. Turing, "On computable numbers, with an application to the Entscheidungsproblem," Proc. London Math. Soc., series 2, 1936–37, vol. 42, pp. 230–265. Reprinted in [12], pp. 58–90 (page references in the text to this edition).
[62] A. M. Turing, "Systems of logic based on ordinals," Proc. London Math. Soc., series 2, 1939, vol. 45, pp. 161–228. Reprinted in [12], pp. 146–204 (page references in the text to this edition).
[63] A. M. Turing, "The steckered Enigma. Bombe and Spider," Bletchley Park, 1940, in [12], pp. 314–335.
[64] A. M. Turing, "Proposed electronic calculator," National Physical Laboratory, 1945, in [13], pp. 369–454.
[65] A. M. Turing, "Lecture on the Automatic Computing Engine," delivered to the London Mathematical Society, 1947, in [12], pp. 378–394.
[66] A. M. Turing, "Intelligent machinery," National Physical Laboratory, 1948, in [12], pp. 410–432.
[67] A. M. Turing, "Computing machinery and intelligence," Mind, vol. 59, pp. 433–460, 1950. Reprinted in [12], pp. 441–464 (page references in the text to this edition).
[68] A. M. Turing, Programmers' Handbook for Manchester Electronic Computer Mark II. Manchester, U.K.: Computing Machine Laboratory, Univ. Manchester, circa 1950. [Online]. Available: https://archive.computerhistory.org/resources/text/Knuth_Don_X4100/PDF_index/k-4-pdf/k-4-u2780-Manchester-Mark-I-manual.pdf
[69] A. M. Turing, "Can digital computers think?," BBC Third Programme, 1951, in [12], pp. 482–486.
[70] A. M. Turing, "Chess," typescript, 1951, in [12], pp. 569–575.
[71] A. M. Turing, R. Braithwaite, G. Jefferson, and M. Newman, "Can automatic calculating machines be said to think?," BBC Third Programme, 1952, in [12], pp. 494–506.
[72] M. Y. Vardi, "Who begat computing?," Commun. ACM, vol. 56, p. 5, 2013.
[73] T. Vickers, "Applications of the Pilot ACE and the DEUCE," in [13], pp. 265–279.
[74] M. Wheeler, "Deceptive appearances: The Turing test, response-dependence, and intelligence as an emotional concept," Minds Machines, vol. 30, pp. 513–532, 2020.
[75] M. V. Wilkes, "Can machines think?," Spectator, no. 6424, pp. 177–178, 1951.
[76] J. R. Womersley, "A.C.E. project – Origin and early history," National Physical Laboratory, 1946, in [13], pp. 38–39.

B. JACK COPELAND is distinguished professor in humanities at the University of Canterbury, Christchurch, 8041, New Zealand. He received his doctorate in mathematical logic from Oxford University, and has written and edited 7 books concerning Turing, most recently The Turing Guide (Oxford University Press, 2017). Contact him at [email protected].