Lecture Notes on
Computation Theory
for the Computer Science Tripos, Part IB
© 2009 A. M. Pitts
Contents

Learning Guide  ii
Exercises and Tripos Questions  iii

Introduction: algorithmically undecidable problems  2
    Decision problems. The informal notion of algorithm, or effective procedure. Examples of algorithmically undecidable problems.
Register machines  17
    Definition and examples; graphical notation. Register machine computable functions. Doing arithmetic.
Coding programs as numbers  38
Universal register machine  49
The halting problem  58
    Statement and proof of undecidability. Example of an uncomputable partial function. Decidable sets of numbers.
Turing machines  69
    Informal description. Definition and examples. Turing computable functions. Equivalence of register machine and Turing computability.
Partial recursive functions  101
    Definition and examples. Existence of a recursive, but not primitive recursive function. A partial function is computable if and only if it is partial recursive.
Lambda calculus  123
    Alpha and beta conversion. Normalization. Encoding data. Writing recursive functions in the λ-calculus. [3 lectures]
Learning Guide

These notes are designed to accompany 12 lectures on computation theory for Part IB of the Computer Science Tripos. The aim of this course is to introduce several apparently different formalisations of the informal notion of algorithm; to show that they are equivalent; and to use them to demonstrate that there are uncomputable functions and algorithmically undecidable problems.

At the end of the course you should:

be familiar with the register machine, Turing machine and λ-calculus models of computability;
understand the notion of coding programs as data, and of a universal machine;
be able to use diagonalisation to prove the undecidability of the Halting Problem;
understand the mathematical notion of partial recursive function and its relationship to computability.

The prerequisites for taking this course are the Part IA courses Discrete Mathematics and Regular Languages and Finite Automata. This Computation Theory course contains some material that everyone who calls themselves a computer scientist should know. It is also a prerequisite for the Part IB course on Complexity Theory.

Recommended books

Hopcroft, J.E., Motwani, R. & Ullman, J.D. (2001). Introduction to Automata Theory, Languages and Computation, 2nd edition. Addison-Wesley.
Hindley, J.R. & Seldin, J.P. (2008). Lambda-Calculus and Combinators, an Introduction, 2nd edition. Cambridge University Press.
Cutland, N.J. (1980). Computability: An Introduction to Recursive Function Theory. Cambridge University Press.
Davis, M.D., Sigal, R. & Weyuker, E.J. (1994). Computability, Complexity and Languages, 2nd edition. Academic Press.
Sudkamp, T.A. (1995). Languages and Machines, 2nd edition. Addison-Wesley.
6. For the example Turing machine given on slide 75, give the register machine program implementing (S, T, D) ::= δ(S, T) as described on slide 83. [Tedious! Maybe just do a bit.]

7. Try Tripos question 2001.3.9. [This is the Turing machine version of 2000.3.9.]

8. Try Tripos question 1996.3.9.

9. Show that the following functions are all primitive recursive.
   (a) Exponentiation, exp(x, y) ≜ x^y.
   (b) Truncated subtraction, x ∸ y ≜ x − y if y ≤ x, and 0 if x < y.
   (c) The function mapping (x, y, z) to y if x = 0, and to z if x > 0.
   (d) Bounded summation: if f ∈ ℕⁿ⁺¹ → ℕ is primitive recursive, then so is g ∈ ℕⁿ⁺¹ → ℕ where g(x⃗, x) ≜ 0 if x = 0, f(x⃗, 0) if x = 1, and f(x⃗, 0) + ··· + f(x⃗, x − 1) if x > 1.

10. Recall the definition of Ackermann's function ack from slide 122. Sketch how to build a register machine M that computes ack(x1, x2) in R0 when started with x1 in R1 and x2 in R2 and all other registers zero. [Hint: here's one way; the next question steers you another way to the computability of ack. Call a finite list L = [(x1, y1, z1), (x2, y2, z2), ...] of triples of numbers suitable if it satisfies
    (i) if (0, y, z) ∈ L, then z = y + 1
    (ii) if (x + 1, 0, z) ∈ L, then (x, 1, z) ∈ L
    (iii) if (x + 1, y + 1, z) ∈ L, then there is some u with (x + 1, y, u) ∈ L and (x, u, z) ∈ L.
The idea is that if (x, y, z) ∈ L and L is suitable, then z = ack(x, y) and L contains all the triples (x′, y′, ack(x′, y′)) needed to calculate ack(x, y). Show how to code lists of triples of numbers as numbers in such a way that we can (in principle, no need to do it explicitly!) build a register machine that recognizes whether or not a number is the code for a suitable list of triples. Show how to use that machine to build a machine computing ack(x, y) by searching for the code of a suitable list containing a triple with x and y in its first two components.]

11. If you are not already fed up with Ackermann's function, try Tripos question 2001.4.8.

12. If you are still not fed up with Ackermann's function ack ∈ ℕ² → ℕ, show that the λ-term ack ≜ λx. x (λf y. y f (f 1)) Succ represents ack (where Succ is as on slide 152).

13. Let I be the λ-term λx. x. Show that n I = I holds for every Church numeral n. Now consider B ≜ λf g x. g x I (f (g x)). Assuming the fact about normal order reduction mentioned on slide 145, show that if partial functions f, g ∈ ℕ ⇀ ℕ are represented by closed λ-terms F and G respectively, then their composition (f ∘ g)(x) ≜ f(g(x)) is represented by B F G. Now try Tripos question 2005.5.12.
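For exercise 10 it helps to have Ackermann's function itself to hand. A direct Python transcription of its defining equations (a sketch for checking small values only; the recursion depth grows explosively, so do not try large arguments):

```python
import sys
sys.setrecursionlimit(100_000)

def ack(x, y):
    # ack(0, y)     = y + 1
    # ack(x+1, 0)   = ack(x, 1)
    # ack(x+1, y+1) = ack(x, ack(x+1, y))
    if x == 0:
        return y + 1
    if y == 0:
        return ack(x - 1, 1)
    return ack(x - 1, ack(x, y - 1))

print(ack(2, 3), ack(3, 3))  # 9 61
```

Tabulating small values this way is a useful sanity check for any register machine or suitable-list construction you sketch.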
Introduction
Computation Theory , L 1
2/171
Hilbert's Entscheidungsproblem

Is there an algorithm which when fed any statement in the formal language of first-order arithmetic, determines in a finite number of steps whether or not the statement is provable from Peano's axioms for arithmetic, using the usual rules of first-order logic?
Such an algorithm would be useful! For example, by running it on famous open statements of first-order arithmetic, one could settle them once and for all.
Posed by Hilbert at the 1928 International Congress of Mathematicians. The problem was actually stated in a more ambitious form, with a more powerful formal system in place of first-order logic. In 1928, Hilbert believed that such an algorithm could be found. A few years later he was proved wrong by the work of Church and Turing in 1935/36, as we will see.
Decision problems

Entscheidungsproblem means "decision problem". Given

a set S whose elements are finite data structures of some kind
(e.g. formulas of first-order arithmetic)

a property P of elements of S
(e.g. the property of a formula that it has a proof)

the associated decision problem is: find an algorithm which terminates with result 0 or 1 when fed an element s ∈ S, and yields result 1 when fed s if and only if s has property P.
Algorithms, informally
No precise definition of "algorithm" at the time Hilbert posed the Entscheidungsproblem, just examples, such as:

procedure for multiplying numbers in decimal place notation
procedure for extracting square roots to any desired accuracy
Euclid's algorithm for finding highest common factors.
Algorithms, informally

No precise definition of "algorithm" at the time Hilbert posed the Entscheidungsproblem, just examples. Common features of the examples:

finite description of the procedure in terms of elementary operations
deterministic (next step uniquely determined, if there is one)
the procedure may not terminate on some input data, but we can recognize when it does terminate and what the result is.
Algorithms, informally
No precise definition of "algorithm" at the time Hilbert posed the Entscheidungsproblem, just examples. In 1935/36 Turing in Cambridge and Church in Princeton independently gave negative solutions to Hilbert's Entscheidungsproblem. First step: give a precise, mathematical definition of "algorithm".
(Turing: Turing Machines; Church: lambda-calculus.)
Then one can regard algorithms as data on which algorithms can act and reduce the problem to. . .
Turing and Church's work shows that the Halting Problem is undecidable; that is, there is no algorithm H such that, for all pairs (A, D) of an algorithm A and an input datum D,

    H(A, D) = 1 if A(D)↓ (the computation of A applied to D eventually halts)
    H(A, D) = 0 otherwise.
Informal proof, by contradiction. If there were such an H, let C be the algorithm: "input A; compute H(A, A); if H(A, A) = 0 then return 1, else loop forever". So for all A, C(A)↓ ↔ H(A, A) = 0 (since H is total), and for all A, H(A, A) = 0 ↔ ¬A(A)↓ (definition of H). So for all A, C(A)↓ ↔ ¬A(A)↓. Taking A to be C, we get C(C)↓ ↔ ¬C(C)↓, contradiction!
From HP to Entscheidungsproblem
Final step in the Turing/Church proof of undecidability of the Entscheidungsproblem: they constructed an algorithm encoding instances (A, D) of the Halting Problem as arithmetic statements Φ_{A,D} with the property

    Φ_{A,D} is provable ↔ A(D)↓

Thus any algorithm deciding provability of arithmetic statements could be used to decide the Halting Problem; since that is impossible, no such algorithm exists.
With hindsight, a positive solution to the Entscheidungsproblem would be too good to be true. However, the algorithmic unsolvability of some decision problems is much more surprising. A famous example of this is. . .
Diophantine equations

A Diophantine equation has the form

    p(x1, ..., xn) = q(x1, ..., xn)

where p and q are polynomials in unknowns x1, ..., xn with coefficients from ℕ = {0, 1, 2, ...}.

Named after Diophantus of Alexandria (c. 250AD). Example: "find three whole numbers x1, x2 and x3 such that the product of any two added to the third is a square" [Diophantus' Arithmetica, Book III, Problem 7]. In modern notation: find x1, x2, x3 ∈ ℕ for which there exist x, y, z ∈ ℕ with

    x1 x2 + x3 = x²,  x2 x3 + x1 = y²,  x3 x1 + x2 = z².

[One solution: (x1, x2, x3) = (1, 4, 12), with (x, y, z) = (4, 7, 4).]
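The claimed solution is easy to verify mechanically; a quick Python check (a sketch, not part of the notes):

```python
from math import isqrt

def is_square(n):
    # n is a perfect square iff its integer square root squares back to n
    return isqrt(n) ** 2 == n

# Diophantus, Book III, Problem 7: the product of any two of the three
# numbers, added to the third, should be a perfect square.
x1, x2, x3 = 1, 4, 12
checks = [x1 * x2 + x3, x2 * x3 + x1, x3 * x1 + x2]
print(checks)                              # [16, 49, 16]
print(all(is_square(n) for n in checks))   # True
```

The same loop structure, run over all triples up to a bound, gives a brute-force search for solutions; what is undecidable in general (Hilbert's 10th problem) is whether any solution exists at all.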
Register Machines
Definition. A register machine is specified by:

finitely many registers R0, R1, ..., Rn (each capable of storing a natural number);

a program consisting of a finite list of instructions of the form label : body, where for i = 0, 1, 2, ..., the (i + 1)th instruction has label Li. An instruction body takes one of three forms:

    R⁺ → L        add 1 to the contents of register R and jump to the instruction labelled L
    R⁻ → L, L′    if the contents of R is > 0, then subtract 1 from it and jump to L, else jump to L′
    HALT          stop executing instructions
Example

registers: R0, R1, R2
program:
    L0 : R1⁻ → L1, L2
    L1 : R0⁺ → L0
    L2 : R2⁻ → L3, L4
    L3 : R0⁺ → L2
    L4 : HALT

example computation (rows are configurations (Li, R0, R1, R2)):

    Li  R0  R1  R2
     0   0   1   2
     1   0   0   2
     0   1   0   2
     2   1   0   2
     3   1   0   1
     2   2   0   1
     3   2   0   0
     2   3   0   0
     4   3   0   0
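The trace above is mechanical enough to reproduce with a few lines of code. Here is a minimal register-machine interpreter in Python (a sketch, not part of the notes; the tuple encoding of instruction bodies is my own):

```python
# Instruction bodies, one per label Li:
#   ("inc", r, l)       Rr+ -> Ll
#   ("dec", r, l, l2)   Rr- -> Ll, Ll2
#   ("halt",)
def run(prog, regs):
    regs = dict(regs)   # registers not mentioned default to 0 via .get
    pc = 0
    trace = []
    while True:
        trace.append((pc, regs.get(0, 0), regs.get(1, 0), regs.get(2, 0)))
        if pc >= len(prog):          # erroneous halt: label has no instruction
            return regs, trace
        instr = prog[pc]
        if instr[0] == "halt":       # proper halt
            return regs, trace
        if instr[0] == "inc":
            _, r, l = instr
            regs[r] = regs.get(r, 0) + 1
            pc = l
        else:
            _, r, l, l2 = instr
            if regs.get(r, 0) > 0:
                regs[r] -= 1
                pc = l
            else:
                pc = l2

# The example program: L0: R1- -> L1,L2; L1: R0+ -> L0;
#                      L2: R2- -> L3,L4; L3: R0+ -> L2; L4: HALT
add_prog = [("dec", 1, 1, 2), ("inc", 0, 0),
            ("dec", 2, 3, 4), ("inc", 0, 2), ("halt",)]
regs, trace = run(add_prog, {0: 0, 1: 1, 2: 2})
print(regs[0])              # 3
print(trace[0], trace[-1])  # (0, 0, 1, 2) (4, 3, 0, 0)
```

Running it prints exactly the first and last rows of the table above.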
Halting

For a finite computation c0, c1, ..., cm, the last configuration cm = (ℓ, r, ...) is a halting configuration: the instruction labelled Lℓ is either

HALT (a "proper halt"), or

R⁺ → L, or R⁻ → L, L′ with R > 0, or R⁻ → L′, L with R = 0, where there is no instruction labelled L in the program (an "erroneous halt").

E.g.
    L0 : R0⁺ → L2
    L1 : HALT
halts erroneously.
Halting

For a finite computation c0, c1, ..., cm, the last configuration cm = (ℓ, r, ...) is a halting configuration. Note that computations may never halt. For example,

    L0 : R0⁺ → L0
    L1 : HALT

only has infinite computation sequences.
Graphical representation

one node in the graph for each instruction
arcs represent jumps between instructions
we lose the sequential ordering of instructions, so we need to indicate the initial instruction with START.

    instruction      graphical representation
    R⁺ → L           a node labelled R⁺ with one out-arc, to the node for L
    R⁻ → L, L′       a node labelled R⁻ with two out-arcs: one to the node for L (decrement succeeded) and one, drawn with a double arrowhead, to the node for L′ (register was zero)
    HALT             a node labelled HALT
    L0               START, with an arc to the node for the instruction labelled L0
Example

registers: R0, R1, R2
program:
    L0 : R1⁻ → L1, L2
    L1 : R0⁺ → L0
    L2 : R2⁻ → L3, L4
    L3 : R0⁺ → L2
    L4 : HALT

[Graphical representation: START into the R1⁻ node, which loops through R0⁺ while R1 > 0, then passes to the R2⁻ node, which loops through R0⁺ while R2 > 0, then HALT.]

Claim: starting from initial configuration (0, 0, x, y), this machine's computation halts with configuration (4, x + y, 0, 0).
Partial functions

Register machine computation is deterministic: in any non-halting configuration, the next configuration is uniquely determined by the program. So the relation between initial and final register contents defined by a register machine program is a partial function...

Definition. A partial function from a set X to a set Y is specified by any subset f ⊆ X × Y satisfying

    (x, y) ∈ f ∧ (x, y′) ∈ f → y = y′

for all x ∈ X and y, y′ ∈ Y.
Partial functions

Note that f ⊆ X × Y means f is a set of ordered pairs, a subset of {(x, y) | x ∈ X ∧ y ∈ Y}; the condition in the definition says precisely that for all x ∈ X there is at most one y ∈ Y with (x, y) ∈ f.
Partial functions

Notation:

"f(x) = y" means (x, y) ∈ f
"f(x)↓" means ∃y ∈ Y (f(x) = y)
"f(x)↑" means ¬∃y ∈ Y (f(x) = y)
"X ⇀ Y" = set of all partial functions from X to Y
"X → Y" = set of all (total) functions from X to Y

Definition. A partial function from a set X to a set Y is total if it satisfies f(x)↓ for all x ∈ X.
Computable functions

Definition. f ∈ ℕⁿ ⇀ ℕ is (register machine) computable if there is a register machine M with at least n + 1 registers R0, R1, ..., Rn (and maybe more) such that for all (x1, ..., xn) ∈ ℕⁿ and all y ∈ ℕ, the computation of M starting with R0 = 0, R1 = x1, ..., Rn = xn and all other registers set to 0, halts with R0 = y if and only if f(x1, ..., xn) = y.

Note the [somewhat arbitrary] I/O convention: in the initial configuration registers R1, ..., Rn store the function's arguments (with all others zeroed); and in the halting configuration register R0 stores its value (if any).
Example (continued)

Starting from initial configuration (0, 0, x, y), the adder machine above halts with configuration (4, x + y, 0, 0). So f(x, y) ≜ x + y is computable.
Computable functions

Recall the definition: f ∈ ℕⁿ ⇀ ℕ is (register machine) computable if there is a register machine M such that for all (x1, ..., xn) ∈ ℕⁿ and all y ∈ ℕ, the computation of M starting with R0 = 0, R1 = x1, ..., Rn = xn and all other registers set to 0, halts with R0 = y if and only if f(x1, ..., xn) = y.
Multiplication f(x, y) ≜ xy is computable

[Graphical representation of a register machine computing multiplication: an outer loop decrements R1 once per pass; each pass empties R2 into R3 while incrementing R0, then empties R3 back into R2; when R1 reaches 0 the machine halts.]

If the machine is started with (R0, R1, R2, R3) = (0, x, y, 0), it halts with (R0, R1, R2, R3) = (xy, 0, y, 0).
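The halting claim for the multiplication machine can be checked by simulation. Below is a self-contained sketch (not from the notes): a compact interpreter plus one plausible labelling of the multiplication machine matching the description above; the exact label numbering is my own reconstruction.

```python
def run(prog, regs):
    # prog: list of ("inc", r, l) / ("dec", r, l_nonzero, l_zero) / ("halt",)
    pc = 0
    while 0 <= pc < len(prog) and prog[pc][0] != "halt":
        op = prog[pc]
        if op[0] == "inc":
            regs[op[1]] += 1
            pc = op[2]
        else:
            if regs[op[1]] > 0:
                regs[op[1]] -= 1
                pc = op[2]
            else:
                pc = op[3]
    return regs

mult = [
    ("dec", 1, 1, 6),  # L0: outer loop, one pass per unit of R1
    ("dec", 2, 2, 4),  # L1: inner loop 1: empty R2 ...
    ("inc", 3, 3),     # L2: ... into R3 ...
    ("inc", 0, 1),     # L3: ... incrementing R0 each time
    ("dec", 3, 5, 0),  # L4: inner loop 2: restore R2 from R3
    ("inc", 2, 4),     # L5
    ("halt",),         # L6
]
print(run(mult, [0, 5, 7, 0]))  # [35, 0, 7, 0]
```

The final register contents match the claim: (R0, R1, R2, R3) = (xy, 0, y, 0).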
Further examples

The following arithmetic functions are all computable. (Proof: left as an exercise!)

projection: p(x, y) ≜ x
constant: c(x) ≜ n
truncated subtraction: x ∸ y ≜ x − y if y ≤ x, and 0 if y > x
Further examples

The following arithmetic functions are all computable. (Proof: left as an exercise!)

integer division: x div y ≜ integer part of x/y if y > 0, and 0 if y = 0
integer remainder: x mod y ≜ x ∸ y(x div y)
exponentiation base 2: e(x) ≜ 2^x
Coding programs as numbers

The Turing/Church solution of the Entscheidungsproblem uses the idea that (formal descriptions of) algorithms can be the data on which algorithms act. To realize this idea with register machines we have to be able to code RM programs as numbers. (In general, such codings are often called Gödel numberings.) To do that, first we have to code pairs of numbers and lists of numbers as numbers. There are many ways to do that. We fix upon one...
Numerical coding of pairs

For x, y ∈ ℕ, define

    ⟨⟨x, y⟩⟩ ≜ 2^x (2y + 1)
    ⟨x, y⟩ ≜ 2^x (2y + 1) − 1

In binary:

    ⟨⟨x, y⟩⟩ = 0b y 1 0⋯0   (y in binary, then a 1, then x zeros)
    ⟨x, y⟩  = 0b y 0 1⋯1   (y in binary, then a 0, then x ones)

(0bx means x in binary.)

E.g. 27 = 0b11011 = ⟨⟨0, 13⟩⟩ = ⟨2, 3⟩.
Numerical coding of pairs

    ⟨⟨x, y⟩⟩ ≜ 2^x (2y + 1)
    ⟨x, y⟩ ≜ 2^x (2y + 1) − 1

⟨−, −⟩ gives a bijection (one-one correspondence) between ℕ × ℕ and ℕ.
⟨⟨−, −⟩⟩ gives a bijection between ℕ × ℕ and {n ∈ ℕ | n ≠ 0}.
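Both pairing functions, and the inverse of ⟨⟨−, −⟩⟩ obtained by stripping trailing binary zeros, are a few lines of Python (a sketch; the function names are mine):

```python
def pair2(x, y):
    # <<x, y>> = 2^x * (2y + 1): bijection N x N -> N \ {0}
    return (2 ** x) * (2 * y + 1)

def pair1(x, y):
    # <x, y> = 2^x * (2y + 1) - 1: bijection N x N -> N
    return pair2(x, y) - 1

def unpair2(n):
    # inverse of pair2, defined for n > 0: count trailing binary zeros
    x = 0
    while n % 2 == 0:
        n //= 2
        x += 1
    return x, (n - 1) // 2

print(pair2(0, 13), pair1(2, 3))  # 27 27
print(unpair2(27))                # (0, 13)
```

This reproduces the example on the slide: 27 = ⟨⟨0, 13⟩⟩ = ⟨2, 3⟩.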
Numerical coding of lists

Let list ℕ denote the set of all finite lists of natural numbers, written with ML notation: [] is the empty list, x :: ℓ is the list with head x ∈ ℕ and tail ℓ ∈ list ℕ, and

    [x1, x2, ..., xn] ≜ x1 :: (x2 :: (··· (xn :: []) ···))

For ℓ ∈ list ℕ, define ⌜ℓ⌝ ∈ ℕ by induction on the length of ℓ:

    ⌜[]⌝ ≜ 0
    ⌜x :: ℓ⌝ ≜ ⟨⟨x, ⌜ℓ⌝⟩⟩ = 2^x (2·⌜ℓ⌝ + 1)
The binary representation of ⌜[x1, x2, ..., xn]⌝ is

    1 0⋯0 1 0⋯0 ··· 1 0⋯0

reading left to right: a 1 followed by xn zeros, then a 1 followed by x_{n−1} zeros, ..., ending with a 1 followed by x1 zeros. Hence, for example:

    ⌜[3]⌝ = ⟨⟨3, ⌜[]⌝⟩⟩ = ⟨⟨3, 0⟩⟩ = 0b1000 = 8
    ⌜[1, 3]⌝ = ⟨⟨1, ⌜[3]⌝⟩⟩ = 0b100010 = 34
    ⌜[2, 1, 3]⌝ = ⟨⟨2, ⌜[1, 3]⌝⟩⟩ = 0b100010100 = 276
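The list coding and its inverse follow directly from the pair coding; a Python sketch (function names are mine):

```python
def code_list(xs):
    # code([]) = 0;  code(x :: l) = 2^x * (2*code(l) + 1).
    # Build the code by folding from the right end of the list.
    code = 0
    for x in reversed(xs):
        code = (2 ** x) * (2 * code + 1)
    return code

def decode_list(code):
    xs = []
    while code != 0:
        x = 0
        while code % 2 == 0:    # trailing binary zeros give the head x
            code //= 2
            x += 1
        code = (code - 1) // 2  # the rest codes the tail
        xs.append(x)
    return xs

print(code_list([3]), code_list([1, 3]), code_list([2, 1, 3]))  # 8 34 276
print(decode_list(276))  # [2, 1, 3]
```

The round trip decode_list(code_list(xs)) == xs confirms that ⌜ℓ⌝ uniquely determines ℓ.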
Numerical coding of programs

If P is the RM program with instructions L0 : body0, ..., Ln : bodyn, then its numerical code is

    ⌜P⌝ ≜ ⌜[⌜body0⌝, ..., ⌜bodyn⌝]⌝

where the numerical code ⌜body⌝ of an instruction body is defined by:

    ⌜Ri⁺ → Lj⌝ ≜ ⟨⟨2i, j⟩⟩
    ⌜Ri⁻ → Lj, Lk⌝ ≜ ⟨⟨2i + 1, ⟨j, k⟩⟩⟩
    ⌜HALT⌝ ≜ 0
Any x ∈ ℕ decodes to a unique instruction body(x):

if x = 0 then body(x) is HALT;
else (x > 0 and) let x = ⟨⟨y, z⟩⟩ in:
    if y = 2i is even, then body(x) is Ri⁺ → Lz;
    else y = 2i + 1 is odd, and letting z = ⟨j, k⟩, body(x) is Ri⁻ → Lj, Lk.

So any e ∈ ℕ decodes to a unique program prog(e), called the register machine program with index e:

    prog(e) ≜ L0 : body(x0)
              ...
              Ln : body(xn)

where e = ⌜[x0, ..., xn]⌝.
Example

Consider e = 786432 = 2^19 + 2^18 = ⟨⟨18, ⌜[0]⌝⟩⟩ = ⌜[18, 0]⌝, where

    18 = 0b10010 = ⟨⟨1, 4⟩⟩ = ⟨⟨1, ⟨0, 2⟩⟩⟩ = ⌜R0⁻ → L0, L2⌝
    0 = ⌜HALT⌝

So prog(786432) = L0 : R0⁻ → L0, L2
                  L1 : HALT
N.B. In case e = 0 we have 0 = [] , so prog (0) is the program with an empty list of instructions, which by convention we regard as a RM that does nothing (i.e. that halts immediately).
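The whole decoding pipeline, from an index e to a readable program, fits in a short Python sketch (not from the notes; the printed instruction syntax is mine):

```python
def unpair2(n):
    # inverse of <<x, y>> = 2^x(2y + 1), for n > 0
    x = 0
    while n % 2 == 0:
        n //= 2
        x += 1
    return x, (n - 1) // 2

def decode_list(code):
    xs = []
    while code != 0:
        x, code = unpair2(code)
        xs.append(x)
    return xs

def body(x):
    # decode one instruction body, following the case split above
    if x == 0:
        return "HALT"
    y, z = unpair2(x)
    if y % 2 == 0:
        return f"R{y // 2}+ -> L{z}"
    j, k = unpair2(z + 1)   # <j, k> = <<j, k>> - 1
    return f"R{y // 2}- -> L{j}, L{k}"

def prog(e):
    return [f"L{i}: {body(x)}" for i, x in enumerate(decode_list(e))]

print(prog(786432))  # ['L0: R0- -> L0, L2', 'L1: HALT']
```

As the N.B. says, prog(0) decodes to the empty list of instructions, and indeed decode_list(0) returns [].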
Universal Register Machine
High-level specification

The universal register machine U carries out the following computation, starting with R0 = 0, R1 = e (the code of a program), R2 = a (the code of a list of arguments) and all other registers zeroed:

decode e as a RM program P
decode a as a list of register values a1, ..., an
carry out the computation of the RM program P starting with R0 = 0, R1 = a1, ..., Rn = an (and any other registers occurring in P set to 0).
Mnemonics for the registers of U and the role they play in its program:

R1 ≡ P: code of the RM to be simulated
R2 ≡ A: code of the current register contents of the simulated RM
R3 ≡ PC: program counter, the number of the current instruction (counting from 0)
R4 ≡ N: code of the current instruction body
R5 ≡ C: type of the current instruction body
R6 ≡ R: current value of the register to be incremented or decremented by the current instruction (if not HALT)
R7 ≡ S, R8 ≡ T and R9 ≡ Z: auxiliary registers
R0: result of the simulated RM computation (if any).
The program of U repeatedly cycles through fetch/decode/execute steps, the last two of which are:

3. copy the ith item of the list in A to R; goto 4
4. execute the current instruction on R; update PC to the next label; restore the register values to A; goto 1

To implement this, we need RMs for manipulating (codes of) lists of numbers...
The gadget "S ::= R" (copy the contents of R to S) can be implemented by emptying S, then repeatedly decrementing R while incrementing both S and an auxiliary register Z, and finally moving the contents of Z back into R.

The gadget "push X to L", carrying out the assignment (X, L) ::= (0, X :: L), i.e. replacing the contents ℓ of L by ⟨⟨X, ℓ⟩⟩ = 2^X (2ℓ + 1) and zeroing X, can be implemented with an auxiliary register Z.

The gadget "pop L to X", carrying out "if L = 0 then (X ::= 0; goto EXIT) else let L = ⟨⟨x, ℓ⟩⟩ in (X ::= x; L ::= ℓ; goto HALT)", can likewise be implemented with an auxiliary register Z. [The flowcharts for these three gadgets are omitted.]
[Flowchart of the universal register machine U, assembled from the gadgets above: a fetch/decode/execute loop using the operations "pop N to C", "pop A to R", "push R to S", "pop S to R", "push R to A", "pop N to PC", "PC ::= N", "N⁺" and "R⁺", with a HALT exit once the simulated program halts.]
The Halting Problem
Definition. A register machine H decides the Halting Problem if for all e, a1, ..., an ∈ ℕ, starting H with

    R0 = 0, R1 = e, R2 = ⌜[a1, ..., an]⌝

and all other registers zeroed, the computation of H always halts with R0 containing 0 or 1; moreover when the computation halts, R0 = 1 if and only if the register machine program with index e eventually halts when started with R0 = 0, R1 = a1, ..., Rn = an and all other registers zeroed.

Theorem: no such register machine H can exist.

Proof: assume such an H exists and derive a contradiction. Let C be obtained from H by replacing each HALT (and each erroneous halt) by code that tests R0: if R0 > 0 it enters an infinite loop (a self-jumping R0⁺ instruction), and if R0 = 0 it halts. Let c ∈ ℕ be the index of C's program. One then shows, just as in the informal argument of Lecture 1, that running C on its own code halts if and only if it does not, a contradiction.
Computable functions

Recall the definition of register machine computability from Lecture 2. Note that the same RM M could be used to compute a unary function (n = 1), or a binary function (n = 2), etc. From now on we will concentrate on the unary case...
Enumerating computable functions

For each e ∈ ℕ, let φe ∈ ℕ ⇀ ℕ be the unary partial function computed by the RM with program prog(e): for all x, y ∈ ℕ, φe(x) = y holds iff the computation of prog(e) started with R0 = 0, R1 = x and all other registers zeroed halts with R0 = y. Thus e ↦ φe defines an onto function from ℕ to the collection of all computable partial functions from ℕ to ℕ.
An uncomputable function

Let f ∈ ℕ ⇀ ℕ be the partial function with graph {(x, 0) | φx(x)↑}. Thus

    f(x) = 0 if φx(x)↑, and f(x)↑ if φx(x)↓.

f is not computable, because if it were, then f = φe for some e ∈ ℕ, and hence:

if φe(e)↑, then f(e) = 0 (by definition of f); so φe(e) = 0 (since f = φe), i.e. φe(e)↓;
if φe(e)↓, then f(e)↑ (by definition of f); so φe(e)↑ (since f = φe);

contradiction! So f cannot be computable.
Undecidable sets of numbers

(Recall that a set S ⊆ ℕ is decidable iff its characteristic function, mapping x to 1 if x ∈ S and to 0 otherwise, is computable.)

Claim: S0 ≜ {e ∈ ℕ | φe(0)↓} is undecidable.
Proof (sketch): Suppose M0 is a RM computing the characteristic function of S0. From M0's program (using the same techniques as for constructing a universal RM) we can construct a RM H to carry out:

    let e = R1 and ⌜[a1, ..., an]⌝ = R2 in
    R1 ::= ⌜(R1 ::= a1) ; ... ; (Rn ::= an) ; prog(e)⌝ ;
    R2 ::= 0 ;
    run M0

Then by assumption on M0, H decides the Halting Problem, contradiction. So no such M0 exists, i.e. the characteristic function of S0 is uncomputable, i.e. S0 is undecidable.
Claim: S1 ≜ {e ∈ ℕ | φe is a total function} is undecidable.

Proof (sketch): Suppose M1 is a RM computing the characteristic function of S1. From M1's program we can construct a RM M0 to carry out:

    let e = R1 in
    R1 ::= ⌜R1 ::= 0 ; prog(e)⌝ ;
    run M1

Then by assumption on M1, M0 decides membership of S0 from the previous example (i.e. it computes the characteristic function of S0), contradiction. So no such M1 exists, i.e. the characteristic function of S1 is uncomputable, i.e. S1 is undecidable.
Turing Machines
Register machine computation abstracts away from any particular, concrete representation of numbers (e.g. as bit strings) and the associated elementary operations of increment/decrement/zero-test. Turing's original model of computation (now called a Turing machine) is more concrete: even numbers have to be represented in terms of a fixed finite alphabet of symbols, and increment/decrement/zero-test have to be programmed in terms of more elementary symbol-manipulating operations.
Turing machine tape

[Picture of a tape: ▷ 1 0 1 ␣ ␣ ···, with the head scanning one cell.]

▷ is a special left endmarker symbol and ␣ is a special blank symbol. The tape is linear, unbounded to the right, and divided into cells, each containing a symbol from a finite alphabet of tape symbols. Only finitely many cells contain non-blank symbols.
The machine starts with the tape head pointing to the special left endmarker ▷. The machine computes in discrete steps, each of which depends only on the current state (q) and the symbol being scanned by the tape head. The action at each step is to overwrite the current tape cell with a symbol, move left or right one cell or stay stationary, and change state.
Turing Machines

A Turing machine is specified by:

Q, a finite set of machine states
Σ, a finite set of tape symbols (disjoint from Q) containing distinguished symbols ▷ (left endmarker) and ␣ (blank)
s ∈ Q, an initial state
δ ∈ (Q × Σ) → (Q ∪ {acc, rej}) × Σ × {L, R, S}, a transition function, satisfying: for all q ∈ Q, there exists q′ ∈ Q ∪ {acc, rej} with δ(q, ▷) = (q′, ▷, R)

(i.e. the left endmarker is never overwritten and the machine always moves to the right when scanning it).
A Turing machine configuration is a triple (q, w, u), where q is the current state, w is the (non-empty) string of tape symbols up to and including the scanned cell, and u is the string of symbols to its right. We write

    (q, w, u) →M (q′, w′, u′)

to mean q ≠ acc, rej, w = va (for some v, a), and either

δ(q, a) = (q′, a′, L), w′ = v, and u′ = a′u; or
δ(q, a) = (q′, a′, S), w′ = va′ and u′ = u; or
δ(q, a) = (q′, a′, R), u = a″u″ is non-empty, w′ = va′a″ and u′ = u″; or
δ(q, a) = (q′, a′, R), u = ε is empty, w′ = va′␣ and u′ = ε.
Claim: the computation of M starting from configuration (s, ▷, 1ⁿ0) halts in configuration (acc, ▷, 1ⁿ⁺¹0).

[Trace: the machine steps right through configurations (s, ▷1, 1ⁿ⁻¹0), ..., to (▷1ⁿ, 0) and (▷1ⁿ0, ε), rewrites the tape so that it contains 1ⁿ⁺¹0, then steps back left to (▷, 1ⁿ⁺¹0) and accepts in (acc, ▷, 1ⁿ⁺¹0).]
Theorem. The computation of a Turing machine M can be implemented by a register machine.

Proof (sketch).
Step 1: fix a numerical encoding of M's states, tape symbols, tape contents and configurations.
Step 2: implement M's transition function (a finite table) using RM instructions on codes.
Step 3: implement a RM program to repeatedly carry out →M.
Step 1

Identify states and tape symbols with particular numbers; tape contents and configurations can then be coded as (lists of) numbers.

Step 2

Using registers

Q = current state
A = current tape symbol
D = current direction of tape head (with L = 0, R = 1 and S = 2, say)

one can turn the finite table of (argument, result)-pairs specifying δ into a RM program (Q, A, D) ::= δ(Q, A), so that starting the program with Q = q, A = a, D = d (and all other registers zeroed), it halts with Q = q′, A = a′, D = d′, where (q′, a′, d′) = δ(q, a).
Step 3

The next slide specifies a RM to carry out M's computation. It uses registers:

C = code of the current configuration
W = code of the tape symbols at and left of the tape head (reading right-to-left)
U = code of the tape symbols right of the tape head (reading left-to-right)

Starting with C containing the code of an initial configuration (and all other registers zeroed), the RM program halts if and only if M halts; and in that case C holds the code of the final configuration.
[Flowchart of the RM carrying out M's computation: START; [Q, W, U] ::= C; then loop: if Q < 2 (i.e. Q codes acc or rej) then C ::= [Q, W, U] and HALT; else pop W to A; (Q, A, D) ::= δ(Q, A); then, according to D, either push A to W (stay), or push A to U (move left), or push A to W, pop U to B and push B to W (move right), so that W and U again code the symbols at/left and right of the head; repeat.]
We've seen that a Turing machine's computation can be implemented by a register machine. The converse also holds: the computation of a register machine can be implemented by a Turing machine. To make sense of this, we first have to fix a tape representation of RM configurations, and hence of numbers and lists of numbers...
Theorem. A partial function is Turing computable if and only if it is register machine computable.
Proof (sketch). We've seen how to implement any TM by a RM; hence f TM computable implies f RM computable. For the converse, one has to implement the computation of a RM in terms of a TM operating on a tape coding RM configurations. To do this, one has to show how to carry out the action of each type of RM instruction on the tape. It should be reasonably clear that this is possible in principle, even if the details (omitted) are tedious.
Notions of computability

Church (1936): λ-calculus [see later]
Turing (1936): Turing machines

Turing showed that the two very different approaches determine the same class of computable functions. Hence:

Church-Turing Thesis. Every algorithm [in the intuitive sense of Lecture 1] can be realized as a Turing machine.
Further evidence for the thesis:

Gödel and Kleene (1936): partial recursive functions
Post (1943) and Markov (1951): canonical systems for generating the theorems of a formal system
Lambek (1961) and Minsky (1961): register machines

Variations on all of the above (e.g. multiple tapes, non-determinism, parallel execution, ...) have all turned out to determine the same collection of computable functions.
In the rest of the course we'll look at:

Gödel and Kleene (1936): partial recursive functions (→ the branch of mathematics called recursion theory)
Church (1936): λ-calculus (→ the branch of CS called functional programming)
Aim

A more abstract, machine-independent description of the collection of computable partial functions than that provided by register/Turing machines: they form the smallest collection of partial functions containing some basic functions and closed under some fundamental operations for forming new functions from old, namely composition, primitive recursion and minimization.

The characterization is due to Kleene (1936), building on work of Gödel and Herbrand.
Basic functions

Projection functions, projⁿᵢ ∈ ℕⁿ → ℕ:
    projⁿᵢ(x1, ..., xn) ≜ xi

Constant functions with value 0, zeroⁿ ∈ ℕⁿ → ℕ:
    zeroⁿ(x1, ..., xn) ≜ 0

Successor function, succ ∈ ℕ → ℕ:
    succ(x) ≜ x + 1
Basic functions

are all RM computable:

Projection projⁿᵢ is computed by: START → (R0 ::= Ri) → HALT
Constant zeroⁿ is computed by: START → HALT
Successor succ is computed by: START → R1⁺ → (R0 ::= R1) → HALT
Composition

The composition of f ∈ ℕⁿ ⇀ ℕ with g1, ..., gn ∈ ℕᵐ ⇀ ℕ is the partial function f ∘ [g1, ..., gn] ∈ ℕᵐ ⇀ ℕ satisfying

    f ∘ [g1, ..., gn](x1, ..., xm) ≡ f(g1(x1, ..., xm), ..., gn(x1, ..., xm))

where ≡ is "Kleene equivalence" of possibly-undefined expressions: LHS ≡ RHS means either both LHS and RHS are undefined, or they are both defined and are equal.
Composition

So f ∘ [g1, ..., gn](x1, ..., xm) = z iff there exist y1, ..., yn with gi(x1, ..., xm) = yi (for i = 1..n) and f(y1, ..., yn) = z.

N.B. in case n = 1, we write f ∘ g1 for f ∘ [g1].
Composition

Theorem: f ∘ [g1, ..., gn] is computable if f and g1, ..., gn are.

Proof. Suppose we are given a RM program F computing f(y1, ..., yn) in R0, starting with R1, ..., Rn set to y1, ..., yn, and RM programs Gi computing gi(x1, ..., xm) in R0, starting with R1, ..., Rm set to x1, ..., xm. Then the next slide specifies a RM program computing f ∘ [g1, ..., gn](x1, ..., xm) in R0, starting with R1, ..., Rm set to x1, ..., xm.

(Hygiene [caused by the lack of local names for registers in the RM model of computation]: we assume the programs F, G1, ..., Gn only mention registers up to R_N (where N ≥ max{n, m}) and that X1, ..., Xm, Y1, ..., Yn are some registers Ri with i > N.)
START → (X1, ..., Xm) ::= (R1, ..., Rm)
      → G1 → Y1 ::= R0 → (R1, ..., Rm) ::= (X1, ..., Xm)
      → G2 → Y2 ::= R0 → (R1, ..., Rm) ::= (X1, ..., Xm)
      → ⋯
      → Gn → Yn ::= R0
      → (R1, ..., Rn) ::= (Y1, ..., Yn)
      → F → HALT
Computation Theory, Lecture 8
Aim

A more abstract, machine-independent description of the collection of computable partial functions than provided by register/Turing machines: they form the smallest collection of partial functions containing some basic functions and closed under some fundamental operations for forming new functions from old: composition, primitive recursion and minimization.

The characterization is due to Kleene (1936), building on work of Gödel and Herbrand.
Examples of recursive definitions

    f1(0)     ≡ 0
    f1(x + 1) ≡ f1(x) + (x + 1)          f1(x) = sum of 0, 1, 2, ..., x

    f2(0)     ≡ 0
    f2(1)     ≡ 1
    f2(x + 2) ≡ f2(x) + f2(x + 1)        f2(x) = xth Fibonacci number

    f3(0)     ≡ 0
    f3(x + 1) ≡ f3(x + 2) + 1            f3(x) undefined except when x = 0

    f4(x) ≡ if x > 100 then x − 10 else f4(f4(x + 11))
                                          f4 is McCarthy's "91 function", which maps x
                                          to 91 if x ≤ 100 and to x − 10 otherwise
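Under this reconstruction of the slide, the examples that terminate can be transcribed directly into Python (f3 is omitted, since it diverges on every x > 0):

```python
# The example recursive definitions as Python functions
def f1(x):                 # sum of 0, 1, 2, ..., x
    return 0 if x == 0 else f1(x - 1) + x

def f2(x):                 # xth Fibonacci number
    return x if x < 2 else f2(x - 2) + f2(x - 1)

def f4(x):                 # McCarthy's "91 function"
    return x - 10 if x > 100 else f4(f4(x + 11))

assert f1(10) == 55
assert [f2(i) for i in range(7)] == [0, 1, 1, 2, 3, 5, 8]
assert f4(99) == 91 and f4(200) == 190
```

Running f4 confirms the surprising fact quoted above: every argument up to 100 yields 91, despite the nested recursion.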
Primitive recursion

Theorem. Given f : N^n ⇀ N and g : N^(n+2) ⇀ N, there is a unique h : N^(n+1) ⇀ N satisfying

    h(x1, ..., xn, 0)     ≡ f(x1, ..., xn)
    h(x1, ..., xn, x + 1) ≡ g(x1, ..., xn, x, h(x1, ..., xn, x))

for all (x1, ..., xn) ∈ N^n and x ∈ N. We write ρ^n(f, g) for h and call it the partial function defined by primitive recursion from f and g.
Example: addition

Addition add : N^2 → N satisfies:

    add(x1, 0)     ≡ x1
    add(x1, x + 1) ≡ add(x1, x) + 1

So add = ρ^1(f, g) where

    f(x1)         ≜ x1
    g(x1, x2, x3) ≜ x3 + 1

Note that f = proj_1^1 and g = succ∘proj_3^3; so add can be built up from basic functions using composition and primitive recursion: add = ρ^1(proj_1^1, succ∘proj_3^3).

Computation Theory, Lecture 8
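For total functions, ρ^n(f, g) unwinds into a simple loop. A sketch (the names `rho`, `proj_1_1` and the restriction to total functions are ours):

```python
# The primitive recursion operator rho^n(f, g) as a Python combinator:
#   h(xs, 0)     = f(xs)
#   h(xs, x + 1) = g(xs, x, h(xs, x))
def rho(f, g):
    def h(*args):
        *xs, x = args
        acc = f(*xs)               # h(xs, 0)
        for i in range(x):
            acc = g(*xs, i, acc)   # h(xs, i + 1) from h(xs, i)
        return acc
    return h

# add = rho^1(proj_1^1, succ o proj_3^3), as on the slide
proj_1_1 = lambda x1: x1
g = lambda x1, x2, x3: x3 + 1      # succ o proj_3^3
add = rho(proj_1_1, g)
assert add(3, 4) == 7

# mult = rho^1(zero^1, add o (proj_3^3, proj_1^3)), anticipating the slide below
mult = rho(lambda x1: 0, lambda x1, x2, x3: add(x3, x1))
assert mult(3, 4) == 12
```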
Example: predecessor

Predecessor pred : N → N satisfies:

    pred(0)     ≡ 0
    pred(x + 1) ≡ x

So pred = ρ^0(f, g) where

    f()        ≜ 0
    g(x1, x2)  ≜ x1

Thus pred can be built up from basic functions using primitive recursion: pred = ρ^0(zero^0, proj_1^2).
Example: multiplication

Multiplication mult : N^2 → N satisfies:

    mult(x1, 0)     ≡ 0
    mult(x1, x + 1) ≡ mult(x1, x) + x1

and thus mult = ρ^1(zero^1, add∘(proj_3^3, proj_1^3)). So mult can be built up from basic functions using composition and primitive recursion (since add can be).
Definition. A [partial] function f is primitive recursive (f ∈ PRIM) if it can be built up in finitely many steps from the basic functions by use of the operations of composition and primitive recursion.

In other words, the set PRIM of primitive recursive functions is the smallest set (with respect to subset inclusion) of partial functions containing the basic functions and closed under the operations of composition and primitive recursion.
Every f ∈ PRIM is a total function, because:

- all the basic functions are total
- if f, g1, ..., gn are total, then so is f∘(g1, ..., gn)   [why?]
- if f and g are total, then so is ρ^n(f, g)   [why?]
Theorem. Every f ∈ PRIM is computable.

Proof. Already proved: basic functions are computable, and composition preserves computability. So we just have to show: ρ^n(f, g) : N^(n+1) ⇀ N is computable if f : N^n ⇀ N and g : N^(n+2) ⇀ N are.

Suppose f and g are computed by RM programs F and G (with our usual I/O conventions). Then the RM specified on the next slide computes ρ^n(f, g). (We assume X1, ..., Xn+1, C are some registers not mentioned in F and G, and that the latter only use registers R0, ..., RN, where N ≥ n + 2.)
START → (X1, ..., Xn+1) ::= (R1, ..., Rn+1)
      → F
      → test: C = Xn+1?
          yes → HALT
          no  → (R1, ..., Rn+2) ::= (X1, ..., Xn, C, R0) → G → C+ → back to the test
Computation Theory, Lecture 9
Minimization

Given a partial function f : N^(n+1) ⇀ N, define μ^n f : N^n ⇀ N by

    μ^n f(x1, ..., xn) ≜ the least x such that f(x1, ..., xn, x) = 0 and, for each
                         i = 0, ..., x − 1, f(x1, ..., xn, i) is defined and > 0
                         (undefined if there is no such x)

In other words

    μ^n f = {(x1, ..., xn, x) ∈ N^(n+1) | ∃ y0, ..., yx .
             (f(x1, ..., xn, i) = yi for i = 0, ..., x) ∧ (yi > 0 for i = 0, ..., x − 1) ∧ yx = 0}
Example of minimization

    integer part of x1/x2 ≡ least x3 such that x1 < x2(x3 + 1)
                            (undefined if x2 = 0)

i.e. integer division = μ^2 f, where f : N^3 → N is

    f(x1, x2, x3) ≜ 1 if x1 ≥ x2(x3 + 1)
                    0 if x1 < x2(x3 + 1)
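Minimization is an unbounded search, and that is exactly how it can be sketched in Python (the name `mu` and the `None`-for-undefined encoding are ours; when no suitable x exists the search loops forever, matching the mathematical undefinedness):

```python
# Minimization as a Python combinator: None encodes "f undefined here".
def mu(f):
    def mf(*xs):
        x = 0
        while True:                  # diverges if no suitable x exists
            y = f(*xs, x)
            if y is None:
                return None          # f undefined at (xs, x) before a zero: mu f undefined
            if y == 0:
                return x
            x += 1
    return mf

# The slide's example: integer division as mu^2 f
def f(x1, x2, x3):
    return 1 if x1 >= x2 * (x3 + 1) else 0

div = mu(f)
assert div(7, 2) == 3                # 7 // 2
assert div(9, 3) == 3                # 9 // 3
```

Calling `div(x1, 0)` never returns, which is the expected behaviour: integer division by zero is undefined, and μ turns that into non-termination.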
Definition. A partial function f is partial recursive (f ∈ PR) if it can be built up in finitely many steps from the basic functions by use of the operations of composition, primitive recursion and minimization.

In other words, the set PR of partial recursive functions is the smallest set (with respect to subset inclusion) of partial functions containing the basic functions and closed under the operations of composition, primitive recursion and minimization.
Theorem. Every f ∈ PR is computable.

Proof. We just have to show: μ^n f : N^n ⇀ N is computable if f : N^(n+1) ⇀ N is. Suppose f is computed by RM program F (with our usual I/O conventions). Then the RM specified on the next slide computes μ^n f. (We assume X1, ..., Xn, C are some registers not mentioned in F, and that the latter only uses registers R0, ..., RN, where N ≥ n + 1.)
START → (X1, ..., Xn) ::= (R1, ..., Rn)
      → loop: (R1, ..., Rn+1) ::= (X1, ..., Xn, C) → F → R0− ?
          empty (R0 = 0) → R0 ::= C → HALT
          otherwise      → C+ → back to the start of the loop
Proof sketch, cont. Let config_M(x1, ..., xn, t) be the code of M's configuration after t steps, starting with initial register values x1, ..., xn. It is in PRIM because:

    config_M(x1, ..., xn, 0)     ≡ ⟨0, x1, ..., xn⟩
    config_M(x1, ..., xn, t + 1) ≡ next_M(config_M(x1, ..., xn, t))

where next_M ∈ PRIM computes the code of M's next configuration from the code of its current one.

We can assume M has a single HALT as its last instruction, the Ith say (and no erroneous halts). Let halt_M(x1, ..., xn) be the number of steps M takes to halt when started with initial register values x1, ..., xn (undefined if M does not halt). It satisfies

    halt_M(x1, ..., xn) ≡ least t such that I − lab(config_M(x1, ..., xn, t)) = 0

and hence is in PR (because lab, config_M and I − (−) are all in PRIM). So f ∈ PR, because f(x1, ..., xn) ≡ val0(config_M(x1, ..., xn, halt_M(x1, ..., xn))).
Definition. A partial function f is partial recursive (f ∈ PR) if it can be built up in finitely many steps from the basic functions by use of the operations of composition, primitive recursion and minimization.

The members of PR that are total are called recursive functions.

Fact: there are recursive functions that are not primitive recursive. For example...
Ackermann's function

There is a (unique) function ack : N^2 → N satisfying

    ack(0, x2)          = x2 + 1
    ack(x1 + 1, 0)      = ack(x1, 1)
    ack(x1 + 1, x2 + 1) = ack(x1, ack(x1 + 1, x2))

ack is computable, hence recursive [proof: exercise].

Fact: ack grows faster than any primitive recursive function f : N^2 → N: ∃ N_f . ∀ x1, x2 > N_f . f(x1, x2) < ack(x1, x2). Hence ack is not primitive recursive.
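The defining equations transcribe directly into Python. Even for tiny arguments the recursion is deep, so this sketch memoizes with `functools.lru_cache` and raises the interpreter's stack limit (both are implementation conveniences of ours, not part of the definition):

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)       # ack recurses deeply even on small inputs

@lru_cache(maxsize=None)
def ack(x1, x2):
    if x1 == 0:
        return x2 + 1
    if x2 == 0:
        return ack(x1 - 1, 1)
    return ack(x1 - 1, ack(x1, x2 - 1))

assert ack(1, 2) == 4                # ack(1, n) = n + 2
assert ack(2, 3) == 9                # ack(2, n) = 2n + 3
assert ack(3, 3) == 61               # ack(3, n) = 2^(n+3) - 3
```

Already at the first argument 4 the values dwarf anything computable in practice (ack(4, 2) has 19,729 decimal digits), a concrete hint at why ack outgrows every primitive recursive function.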
Lambda-Calculus

Computation Theory, Lecture 10
Notions of computability

Church (1936): λ-calculus
Turing (1936): Turing machines

Turing showed that the two very different approaches determine the same class of computable functions. Hence:

Church-Turing Thesis. Every algorithm [in the intuitive sense of Lecture 1] can be realized as a Turing machine.
λ-Terms, M

are built up from a given, countable collection of variables x, y, z, ... by two operations for forming λ-terms:

- λ-abstraction: (λx.M) (where x is a variable and M is a λ-term)
- application: (M M′) (where M and M′ are λ-terms).

Some random examples of λ-terms: x, (λx.x), ((λy.(x y)) x).
λ-Terms, M

Notational conventions:

- (λx1 x2 ... xn.M) means (λx1.(λx2 ... (λxn.M) ...))
- (M1 M2 ... Mn) means (... (M1 M2) ... Mn) (i.e. application is left-associative)
- drop outermost parentheses and those enclosing the body of a λ-abstraction: e.g. write (λx.(x (λy.(y x)))) as λx.x(λy.y x)
- x # M means that the variable x does not occur anywhere in the λ-term M.
α-Equivalence M =α M′

λx.M is intended to represent the function f such that f(x) = M for all x. So the name of the bound variable is immaterial: if M′ = M{x′/x} is the result of taking M and changing all occurrences of x to some variable x′ # M, then λx.M and λx′.M′ both represent the same function. For example, λx.x and λy.y represent the same function (the identity function).
α-Equivalence M =α M′

=α is the binary relation inductively generated by the rules:

    ─────────
     x =α x

    z # (M N)    M{z/x} =α N{z/y}
    ──────────────────────────────
           λx.M =α λy.N

    M =α M′    N =α N′
    ───────────────────
      M N =α M′ N′

where M{z/x} is M with all occurrences of x replaced by z.
α-Equivalence M =α M′

For example:

    λx.(λx x′.x) x′ =α λy.(λx x′.x) x′
      because (λz x′.z) x′ =α (λx x′.x) x′
      because λz x′.z =α λx x′.x and x′ =α x′
      because λx′.u =α λx′.u and x′ =α x′
      because u =α u and x′ =α x′.
α-Equivalence M =α M′

Fact: =α is an equivalence relation (reflexive, symmetric and transitive).

We do not care about the particular names of bound variables, just about the distinctions between them. So α-equivalence classes of λ-terms are more important than λ-terms themselves. Textbooks (and these lectures) suppress any notation for α-equivalence classes and refer to an equivalence class via a representative λ-term (look for phrases like "we identify terms up to α-equivalence" or "we work up to α-equivalence"). For implementations and computer-assisted reasoning, there are various devices for picking canonical representatives of α-equivalence classes (e.g. de Bruijn indexes, graphical representations, ...).
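The de Bruijn device mentioned above is easy to prototype: replace each bound variable by the number of binders between it and its binder, so that α-equivalent terms become literally equal. A sketch, using a term representation of our own (`('var', x)`, `('lam', x, body)`, `('app', f, a)`):

```python
# Convert a named lambda-term to de Bruijn indexes, making alpha-equivalence
# into plain structural equality.
def to_de_bruijn(t, env=()):
    tag = t[0]
    if tag == 'var':
        # bound variable: index = position of its binder (innermost = 0);
        # free variables keep their names
        return ('bvar', env.index(t[1])) if t[1] in env else ('fvar', t[1])
    if tag == 'lam':
        return ('lam', to_de_bruijn(t[2], (t[1],) + env))
    return ('app', to_de_bruijn(t[1], env), to_de_bruijn(t[2], env))

def alpha_eq(s, t):
    return to_de_bruijn(s) == to_de_bruijn(t)

# λx.x =α λy.y, but λx.λy.x is not =α λx.λy.y
assert alpha_eq(('lam', 'x', ('var', 'x')), ('lam', 'y', ('var', 'y')))
assert not alpha_eq(('lam', 'x', ('lam', 'y', ('var', 'x'))),
                    ('lam', 'x', ('lam', 'y', ('var', 'y'))))
```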
Substitution N[M/x]

    x[M/x]       = M
    y[M/x]       = y                    (if y ≠ x)
    (λy.N)[M/x]  = λy.(N[M/x])          (if y # (M x))
    (N1 N2)[M/x] = N1[M/x] N2[M/x]

The side-condition y # (M x) (y does not occur in M and y ≠ x) makes substitution "capture-avoiding": working up to α-equivalence, we can always rename the bound variable so that the side-condition holds. E.g. if x ≠ y, then (λy.x)[y/x] =α (λz.x)[y/x] = λz.y (for fresh z), which is not α-equivalent to λy.y.
β-Reduction

Recall that λx.M is intended to represent the function f such that f(x) = M for all x. We can regard λx.M as a function on λ-terms via substitution: map each N to M[N/x]. So the natural notion of computation for λ-terms is given by stepping from a β-redex (λx.M) N to the corresponding β-reduct M[N/x].
β-Reduction

One-step β-reduction, M → M′:

    ─────────────────────
    (λx.M) N → M[N/x]

    M → M′              M → M′            M → M′
    ──────────────      ────────────      ────────────
    λx.M → λx.M′        M N → M′ N        N M → N M′

    N =α M    M → M′    M′ =α N′
    ─────────────────────────────
             N → N′
β-Reduction

E.g.

    (λx.x y)((λy z.z) u) → (λx.x y)(λz.z) → (λz.z) y → y
    (λx.x y)((λy z.z) u) → ((λy z.z) u) y → (λz.z) y → y
Many-step β-reduction, M ↠ M′:

    M =α M′        M → M′        M ↠ M′    M′ → M″
    ────────       ────────      ─────────────────
    M ↠ M′         M ↠ M′            M ↠ M″
    (no steps)     (1 step)      (1 more step)

E.g. (λx.x y)((λy z.z) u) ↠ y.
Lambda-Definable Functions

Computation Theory, Lecture 11
β-Conversion M =β N

Informally: M =β N holds if N can be obtained from M by performing zero or more steps of α-equivalence, β-reduction, or β-expansion (= inverse of a β-reduction).

E.g. u((λx y. v x) y) =β (λx. u x)(λx. v y), because (λx. u x)(λx. v y) → u(λx. v y), and so we have

    u((λx y. v x) y) → u(λy′. v y) =α u(λx. v y) ↞ (λx. u x)(λx. v y)
β-Conversion M =β N

=β is the binary relation inductively generated by the rules:

    M =α M′       M → M′        M =β M′       M =β M′    M′ =β M″
    ────────      ────────      ────────      ───────────────────
    M =β M′       M =β M′       M′ =β M            M =β M″

    M =β M′                M =β M′    N =β N′
    ───────────────        ───────────────────
    λx.M =β λx.M′            M N =β M′ N′
Church-Rosser Theorem

Theorem. ↠ is confluent; that is, if M ↠ M1 and M ↠ M2, then there exists M′ such that M1 ↠ M′ and M2 ↠ M′.

Corollary. M1 =β M2 iff ∃M (M1 ↠ M ↞ M2).

Proof. =β satisfies the rules generating ↠, so M ↠ M′ implies M =β M′. Thus if M1 ↠ M ↞ M2, then M1 =β M =β M2 and so M1 =β M2.

Conversely, the relation {(M1, M2) | ∃M (M1 ↠ M ↞ M2)} satisfies the rules generating =β: the only difficult case is closure of the relation under transitivity, and for this we use the Church-Rosser theorem. Hence M1 =β M2 implies ∃M (M1 ↠ M ↞ M2).
β-Normal Forms

Definition. A λ-term N is in β-normal form (β-nf) if it contains no β-redexes (no sub-terms of the form (λx.M) M′). M has β-nf N if M =β N with N a β-nf.

Note that if N is a β-nf and N ↠ N′, then it must be that N =α N′ (why?).

Hence if N1 =β N2 with N1 and N2 both β-nfs, then N1 =α N2. (For if N1 =β N2, then N1 ↠ M ↞ N2 for some M by Church-Rosser; hence N1 =α M =α N2.)
Non-termination

Some terms have no β-nf. E.g. Ω ≜ (λx.x x)(λx.x x) satisfies

    Ω → (x x)[(λx.x x)/x] = Ω → ⋯

so Ω only reduces to itself, and hence has no β-nf.

A term can possess both a β-nf and infinite chains of reduction from it. E.g. (λx.y) Ω → y, but also (λx.y) Ω → (λx.y) Ω → ⋯.
Normal-order reduction is a deterministic strategy for reducing λ-terms: reduce the left-most, outer-most redex first.

- left-most: reduce M before N in M N, and then
- outer-most: reduce (λx.M) N rather than either of M or N (cf. call-by-name evaluation).

Fact: normal-order reduction of M always reaches the β-nf of M if it possesses one.
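The strategy can be prototyped in a few dozen lines. This sketch uses a term representation of our own (`('var', x)`, `('lam', x, body)`, `('app', f, a)`) and a capture-avoiding substitution that renames bound variables with a global fresh-name counter:

```python
from itertools import count

_fresh = count()

def fresh(x):
    return f"{x}_{next(_fresh)}"     # a variable name used nowhere else

def free_vars(t):
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, x, m):
    # capture-avoiding substitution t[m/x]
    tag = t[0]
    if tag == 'var':
        return m if t[1] == x else t
    if tag == 'app':
        return ('app', subst(t[1], x, m), subst(t[2], x, m))
    y, body = t[1], t[2]
    if y == x:
        return t                     # x is shadowed: nothing to substitute
    if y in free_vars(m):            # rename bound variable to avoid capture
        z = fresh(y)
        body = subst(body, y, ('var', z))
        y = z
    return ('lam', y, subst(body, x, m))

def normal_order_step(t):
    # one left-most outer-most step, or None if t is in beta-normal form
    tag = t[0]
    if tag == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':            # outer-most redex: contract it first
            return subst(f[2], f[1], a)
        r = normal_order_step(f)     # left-most: reduce the operator first
        if r is not None:
            return ('app', r, a)
        r = normal_order_step(a)
        return None if r is None else ('app', f, r)
    if tag == 'lam':
        r = normal_order_step(t[2])
        return None if r is None else ('lam', t[1], r)
    return None

def normalize(t, max_steps=1000):
    for _ in range(max_steps):
        r = normal_order_step(t)
        if r is None:
            return t
        t = r
    raise RuntimeError("no beta-nf found within step limit")

# (λx.y) Ω reaches its beta-nf y, because the outer redex is contracted
# before the diverging argument is touched
W = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
OMEGA = ('app', W, W)
assert normalize(('app', ('lam', 'x', ('var', 'y')), OMEGA)) == ('var', 'y')
```

A call-by-value strategy would loop forever on the same term, which is exactly the point of the Fact above.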
Church's numerals

    0 ≜ λf x. x
    1 ≜ λf x. f x
    2 ≜ λf x. f (f x)
    ...
    n ≜ λf x. f (⋯ (f x) ⋯)        (n occurrences of f)

Notation:

    M^0 N     ≜ N
    M^1 N     ≜ M N
    M^(n+1) N ≜ M (M^n N)

so n =α λf x. f^n x.
λ-Definable functions

Definition. f : N^n ⇀ N is λ-definable if there is a closed λ-term F that represents it: for all (x1, ..., xn) ∈ N^n and y ∈ N

- if f(x1, ..., xn) = y, then F x1 ⋯ xn =β y
- if f(x1, ..., xn) is undefined, then F x1 ⋯ xn has no β-nf

(where a number n applied to F stands for its Church numeral).

For example, addition is λ-definable because it is represented by P ≜ λx1 x2. λf x. x1 f (x2 f x):

    P m n =β λf x. m f (n f x)
          =β λf x. m f (f^n x)
          =β λf x. f^m (f^n x)
          =  λf x. f^(m+n) x
          =  m+n
Computable = λ-definable

Theorem. A partial function is computable if and only if it is λ-definable.

We already know that Register Machine computable = Turing computable = partial recursive. Using this, we break the theorem into two parts:

- every partial recursive function is λ-definable
- λ-definable functions are RM computable
The "no β-nf when undefined" condition can make it quite tricky to find a λ-term representing a non-total function. For now, we concentrate on total functions. First, let us see why the elements of PRIM (the primitive recursive functions) are λ-definable.
λ-Definable functions: basic functions

Projection functions proj_i^n : N^n → N:
    proj_i^n(x1, ..., xn) ≜ xi
Constant functions with value 0, zero^n : N^n → N:
    zero^n(x1, ..., xn) ≜ 0
Successor function succ : N → N:
    succ(x) ≜ x + 1

The projections are represented by λx1 ... xn. xi and the constant-0 functions by λx1 ... xn. 0.
The successor function is represented by

    Succ ≜ λx1 f x. f (x1 f x)

since

    Succ n =β λf x. f (n f x)
           =β λf x. f (f^n x)
           =  λf x. f^(n+1) x
           =  n+1
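Church numerals run as-is in any language with first-class functions. A Python sketch (helper names `church`, `to_int`, `SUCC`, `ADD` are ours; `ADD` is the term P from the slide above):

```python
# Church numerals as Python lambdas: n is the function that iterates f n times
def church(n):
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

to_int = lambda c: c(lambda k: k + 1)(0)     # decode by iterating +1 on 0

SUCC = lambda n: lambda f: lambda x: f(n(f)(x))               # λx1 f x. f (x1 f x)
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # λx1 x2 f x. x1 f (x2 f x)

assert to_int(church(6)) == 6
assert to_int(SUCC(church(4))) == 5
assert to_int(ADD(church(3))(church(4))) == 7
```

Python evaluates these eagerly rather than by β-reduction, but on total arithmetic like this the results agree with the =β calculations in the notes.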
Representing composition

If the total function f : N^n → N is represented by F and the total functions g1, ..., gn : N^m → N are represented by G1, ..., Gn, then their composition f∘(g1, ..., gn) : N^m → N is represented simply by

    λx1 ... xm. F (G1 x1 ... xm) ... (Gn x1 ... xm)

because

    (λx1 ... xm. F (G1 x1 ... xm) ... (Gn x1 ... xm)) a1 ⋯ am
        =β F (G1 a1 ⋯ am) ... (Gn a1 ⋯ am)
        =β F g1(a1, ..., am) ⋯ gn(a1, ..., am)
        =β f(g1(a1, ..., am), ..., gn(a1, ..., am))
        =  f∘(g1, ..., gn)(a1, ..., am)
This does not necessarily work for partial functions. E.g. the totally undefined function u : N ⇀ N is represented by U ≜ λx1.Ω (why?) and zero^1 : N → N is represented by Z ≜ λx1.0; but zero^1∘u is not represented by λx1. Z(U x1), because (zero^1∘u)(n) is undefined, whereas (λx1. Z(U x1)) n =β Z(U n) =β 0. (What is zero^1∘u represented by?)
Computation Theory, Lecture 12

Representing primitive recursion

Recall: given f : N^n ⇀ N and g : N^(n+2) ⇀ N, h = ρ^n(f, g) is the unique partial function satisfying

    h(x1, ..., xn, 0)     ≡ f(x1, ..., xn)
    h(x1, ..., xn, x + 1) ≡ g(x1, ..., xn, x, h(x1, ..., xn, x))

Equivalently, h satisfies the single fixed-point equation

    h(x1, ..., xn, x) ≡ if x = 0 then f(x1, ..., xn)
                        else g(x1, ..., xn, x − 1, h(x1, ..., xn, x − 1))
Representing booleans

    True  ≜ λx y. x
    False ≜ λx y. y
    If    ≜ λf x y. f x y

satisfy

    If True M N  =β True M N  =β M
    If False M N =β False M N =β N
Representing test-for-zero

    Eq0 ≜ λx. x (λy. False) True

satisfies

    Eq0 0   =β 0 (λy. False) True =β True
    Eq0 n+1 =β n+1 (λy. False) True
            =β (λy. False)^(n+1) True
            =  (λy. False)((λy. False)^n True)
            =β False

Representing ordered pairs

    Pair ≜ λx y f. f x y
    Fst  ≜ λf. f True
    Snd  ≜ λf. f False

satisfy

    Fst(Pair M N) =β Pair M N True  =β True M N  =β M
    Snd(Pair M N) =β Pair M N False =β False M N =β N
Representing predecessor

We want a λ-term Pred satisfying

    Pred n+1 =β n
    Pred 0   =β 0

We have to show how to reduce the (n+1)-iterator n+1 to the n-iterator n. Idea: given f, iterating the function g_f : (x, y) ↦ (f(x), x) n+1 times starting from (x, x) gives the pair (f^(n+1)(x), f^n(x)). So we can get f^n(x) from f^(n+1)(x) parametrically in f and x, by building g_f from f, iterating n+1 times from (x, x) and then taking the second component. Hence:

    Pred ≜ λy f x. Snd(y (G f)(Pair x x))    where G ≜ λf p. Pair(f(Fst p))(Fst p)
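The pairing trick behind Pred can be checked by running the encodings as Python lambdas (the uppercase names and the `church`/`to_int` helpers are our own conventions):

```python
# Church booleans and pairs as Python lambdas
TRUE  = lambda x: lambda y: x
FALSE = lambda x: lambda y: y
PAIR  = lambda x: lambda y: lambda f: f(x)(y)
FST   = lambda p: p(TRUE)
SND   = lambda p: p(FALSE)

def church(n):                       # Church numeral: iterate f n times
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

to_int = lambda c: c(lambda k: k + 1)(0)

# Pred ≜ λy f x. Snd (y (G f) (Pair x x))  where  G ≜ λf p. Pair (f (Fst p)) (Fst p)
G    = lambda f: lambda p: PAIR(f(FST(p)))(FST(p))
PRED = lambda y: lambda f: lambda x: SND(y(G(f))(PAIR(x)(x)))

assert to_int(PRED(church(0))) == 0   # zero case: Snd (Pair x x) = x
assert to_int(PRED(church(7))) == 6   # iterating G f 7 times leaves f^6(x) in Snd
```

Tracing `PRED(church(7))` with f = (+1) and x = 0: the pair evolves (0,0) → (1,0) → (2,1) → ... → (7,6), and taking the second component gives 6.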
Curry's fixed point combinator

    Y ≜ λf. (λx. f(x x))(λx. f(x x))

satisfies Y M → (λx. M(x x))(λx. M(x x)) → M((λx. M(x x))(λx. M(x x))), and hence Y M =β M(Y M): every λ-term M has a fixed point up to β-conversion.
We now know that h = ρ^n(f, g) can be represented by

    Y (λz x1 ... xn x. If (Eq0 x)(F x1 ... xn)(G x1 ... xn (Pred x)(z x1 ... xn (Pred x))))

where F and G are λ-terms representing f and g.
Minimization

Recall: given a partial function f : N^(n+1) ⇀ N, μ^n f(x1, ..., xn) is the least x such that f(x1, ..., xn, x) = 0 and, for each i = 0, ..., x − 1, f(x1, ..., xn, i) is defined and > 0 (undefined if there is no such x).

We can express μ^n f in terms of a fixed point equation:

    μ^n f(x1, ..., xn) ≡ g(x1, ..., xn, 0)

where g satisfies g = Φ_f(g), with Φ_f : (N^(n+1) ⇀ N) → (N^(n+1) ⇀ N) defined by

    Φ_f(g)(x1, ..., xn, x) ≡ if f(x1, ..., xn, x) = 0 then x else g(x1, ..., xn, x + 1)
Representing minimization

Suppose f : N^(n+1) → N (a totally defined function) satisfies ∀ a1, ..., an ∃ a (f(a1, ..., an, a) = 0), so that μ^n f : N^n → N is totally defined. Thus for all a1, ..., an ∈ N, μ^n f(a1, ..., an) = g(a1, ..., an, 0) with g = Φ_f(g) and Φ_f(g)(a1, ..., an, a) given by if f(a1, ..., an, a) = 0 then a else g(a1, ..., an, a + 1).

So if f is represented by a λ-term F, then μ^n f is represented by

    λx1 ... xn. Y (λz x1 ... xn x. If (Eq0 (F x1 ... xn x)) x (z x1 ... xn (Succ x))) x1 ... xn 0
Hence every (total) recursive function is λ-definable. More generally, every partial recursive function is λ-definable, but matching up with the "no β-nf when undefined" condition makes the representations more complicated than for total functions: see Hindley, J.R. & Seldin, J.P. (CUP, 2008), Chapter 4.
Computable = λ-definable

Theorem. A partial function is computable if and only if it is λ-definable.

We already know that computable = partial recursive ⊆ λ-definable. So it just remains to see that λ-definable functions are RM computable. To show this one can:

- code λ-terms as numbers (ensuring that the operations for constructing and deconstructing terms are given by RM computable functions on codes)
- write an RM interpreter for (normal order) β-reduction.

The details are straightforward, if tedious.