
The Church-Turing Thesis

Mahesh Viswanathan

The default Turing machine model we will consider in the first few weeks of this class is the deterministic, one-tape Turing machine model, and we will often refer to it simply as "the Turing machine" model. This model is shown in Figure 1. It has a finite set of (control) states and a semi-infinite tape whose leftmost cell contains a left endmarker ⊢ (never written over) and which extends infinitely far to the right.

[Figure 1: Single-tape Turing machine. A finite-state control drives a head scanning a semi-infinite tape with contents such as ⊢ 0 1 1 0 1 ⊔ ⋯]

The machine has a head that can move left or right one cell in each step; in each step it reads the symbol it is currently scanning and overwrites it before moving on. Initially, the tape contains the input string, which is finite, on cells numbered 1 onwards (cell 0 contains ⊢); the rest of the tape holds a special symbol called the blank symbol ⊔. The machine starts in state s with its head scanning the leftmost cell. In each step it changes its state based on its current state and the symbol under its head, overwrites this cell, and moves its head left or right before taking the next step. The Turing machine is assumed to have two special halting states: the accept state t and the reject state r. Like the 2-way DFA, the Turing machine may never halt if it does not reach either of these two halting states. The formal definition is as follows.
Definition 1. A deterministic, one-tape Turing machine is M = (Q, Σ, Γ, ⊢, ⊔, δ, s, t, r) where

• Q is a finite set of states,

• Σ is a finite input alphabet, used to encode the input string,

• Γ is a finite tape alphabet consisting of the symbols written to and read from the tape; Σ ⊊ Γ,

• ⊢ ∈ Γ \ Σ is the left endmarker,

• ⊔ ∈ Γ \ Σ is the blank symbol,

• s ∈ Q is the start state,

• t ∈ Q is the unique accepting state,

• r ∈ Q (r ≠ t) is the unique rejecting state,

• δ : (Q \ {t, r}) × Γ → Q × Γ × {L, R} is the transition function that describes the next state, the symbol to be written, and the direction to move the tape head, given the current state and the symbol read; no transitions are enabled from either t or r.

We assume that the left endmarker ⊢ is never overwritten and the head never falls off the left end, i.e., for any p ∈ Q \ {t, r}, δ(p, ⊢) = (q, ⊢, R) for some q ∈ Q.
To describe computations of a Turing machine, we need to define the notion of a configuration, which is a snapshot of all the relevant information about a Turing machine at a given step. For a Turing machine, this means the contents of the tape, the current state, and the current head position.
At any given point, the contents of the machine's tape will be of the form y⊔^ω, where y ∈ Γ* and ⊔^ω is the infinite sequence ⊔⊔⊔···. In other words, all but finitely many symbols on the tape are ⊔; this can be argued inductively, as it is certainly true when the machine starts, and any step of the machine increases the number of non-blank cells by at most 1.
A configuration α of TM M is an element of Q × {y⊔^ω | y ∈ Γ*} × ℕ. A configuration α = (p, z, n) represents the fact that the TM is in state p, its tape contains z, and its head is scanning cell n. The start configuration of M on input x ∈ Σ* is (s, ⊢x⊔^ω, 0).
Just like for 2-way DFAs, computations/runs are sequences of configurations related by the next-configuration relation →^1_M, which we define next. Before doing so, we introduce a convenient notation: for a tape z = y⊔^ω (y ∈ Γ*), s^n_b(z) is the string obtained from z by substituting b for the contents z_n of the nth cell. The relation →^1_M is defined as

  δ(p, z_i) = (q, b, L)  ⟹  (p, z, i) →^1_M (q, s^i_b(z), i − 1),
  δ(p, z_i) = (q, b, R)  ⟹  (p, z, i) →^1_M (q, s^i_b(z), i + 1).

As for 2-way DFAs, we can define the n-fold composition →^n_M of →^1_M inductively, as follows:

  α →^0_M α for every configuration α;
  if α →^n_M β and β →^1_M γ, then α →^(n+1)_M γ.

Finally, the reflexive, transitive closure →^*_M of →^1_M is defined by: α →^*_M β if α →^n_M β for some n ∈ ℕ.

Definition 2. The machine M = (Q, Σ, Γ, ⊢, ⊔, δ, s, t, r) is said to accept input x ∈ Σ* if (s, ⊢x⊔^ω, 0) →^*_M (t, y, n) for some y and n. M is said to reject x if (s, ⊢x⊔^ω, 0) →^*_M (r, y, n) for some y and n. M halts on x if it either accepts or rejects x; a TM may not halt on some inputs.
The language of M is given by L(M) = {x ∈ Σ* | M accepts x}.
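Definitions 1 and 2 translate almost line for line into code. Below is a minimal sketch in Python; ">" stands in for the left endmarker ⊢ and "_" for the blank ⊔, and the example machine (which accepts exactly the even-length inputs) is our own illustration, not part of the notes.

```python
# A minimal sketch of Definitions 1 and 2: a configuration is (state, tape,
# head), and one step applies delta and substitutes the written symbol (the
# s^n_b operation), then moves the head.

BLANK, LEFT_END = "_", ">"   # stand-ins for the blank symbol and left endmarker

def run_tm(delta, s, t, r, x, max_steps=10_000):
    """Run a deterministic 1-tape TM; return 'accept', 'reject', or 'no halt yet'."""
    tape = [LEFT_END] + list(x)          # cell 0 holds the endmarker
    state, head = s, 0                   # the start configuration
    for _ in range(max_steps):
        if state == t:
            return "accept"
        if state == r:
            return "reject"
        while head >= len(tape):         # the tape is blank beyond the input
            tape.append(BLANK)
        q, b, d = delta[(state, tape[head])]
        tape[head] = b                   # overwrite the scanned cell
        head += 1 if d == "R" else -1    # move the head one cell
        state = q
    return "no halt yet"                 # a TM need not halt

# Example machine: accept iff the input length is even (states track parity).
delta = {("even", LEFT_END): ("even", LEFT_END, "R")}
for a in "01":
    delta[("even", a)] = ("odd", a, "R")
    delta[("odd", a)] = ("even", a, "R")
delta[("even", BLANK)] = ("acc", BLANK, "R")
delta[("odd", BLANK)] = ("rej", BLANK, "R")

print(run_tm(delta, "even", "acc", "rej", "0110"))  # accept
print(run_tm(delta, "even", "acc", "rej", "011"))   # reject
```

Note how the `max_steps` budget is only a concession to the fact that a program cannot wait forever: the machine itself has three possible behaviors (accept, reject, run forever), exactly as discussed below.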
The execution of a TM M on an input x has one of three possible outcomes: M may accept x, it may reject x, or it may not halt on x. The language accepted/recognized by M consists only of the inputs that M accepts; inputs on which M either rejects or does not halt do not belong to L(M). One important class of Turing machines is the class of total machines. A TM M is total if it halts on every input; that is, on any input, a total TM either accepts or rejects.
Definition 3. A language/decision problem L is

• recursively enumerable (RE) if L = L(M) for some TM M;

• recursive (REC) if L = L(M) for some total TM M; and

• co-r.e. (co-RE) if the complement of L is RE.

A number of different computational models are equivalent to the single tape Turing machine model that
we have introduced. We give details of some of these variants and how they can be simulated on the Turing
machine model.

[Figure 2: Multi-tape Turing machine. The finite-state control is attached to several semi-infinite tapes, each beginning with ⊢, and each tape has its own independent head.]

1 Multi-tape Turing Machine


A multi-tape Turing machine is like the Turing machine model, except that the machine has multiple tapes that it can read from and write to. Each tape has its own head that moves independently. Such a machine is shown schematically in Figure 2. Intuitively, this machine works as follows.
• The input is written on tape 1.
• The leftmost cell of every tape holds a left endmarker ⊢ that is never overwritten.
• Initially, all heads scan cell 0 (the leftmost cell), and tapes 2 to k are blank (except for the left endmarker).
• In one step, the machine reads the symbols under each of the k heads and, depending on the current control state, writes new symbols on each of the tapes, moves each tape head independently (possibly in different directions), and changes its control state.
Such a machine can be formally defined as follows.
Definition 4. A k-tape Turing Machine is M = (Q, Σ, Γ, ⊢, ⊔, δ, s, t, r) where
• Q is a finite set of control states,
• Σ is a finite set of input symbols,
• Γ ⊇ Σ is a finite set of tape symbols, with {⊢, ⊔} ⊆ Γ \ Σ,
• s ∈ Q is the initial state,
• t ∈ Q is the accept state,
• r ∈ Q is the reject state; the states t and r are assumed to be distinct,
• δ : (Q \ {t, r}) × Γ^k → Q × (Γ × {L, R})^k is the transition function; given the current state and the symbols read by each of the k heads, the transition function determines the next state, what is written on each of the tapes, and the direction in which each tape head moves.
As always, we assume that no head moves left while reading ⊢; we skip the formal restriction on δ that ensures this.
As for (single-tape) Turing machines, in order to define computations, acceptance, and the language recognized, we need to identify what a configuration of such a machine is. A configuration of a k-tape Turing machine is a tuple identifying the current state, the contents of each of the k tapes, and the position of each of the tape heads. Thus, a configuration is α ∈ Q × ({y⊔^ω | y ∈ Γ*})^k × ℕ^k. For such a machine, the initial configuration on input x is (s, ⊢x⊔^ω, ⊢⊔^ω, ..., ⊢⊔^ω, 0, 0, ..., 0). The formal definition of a single step based on the transition function is skipped, but it is similar to the definition of a single step of a single-tape Turing machine. The machine accepts input w if it reaches state t, and rejects w if it reaches state r. If the machine neither accepts nor rejects an input, it is said to not halt. Finally, the language recognized/accepted by M is given as L(M) = {w ∈ Σ* | M accepts w}.
The ability of a machine to write and read from additional tapes does not increase its computational
power. In other words, any decision problem solved on a k-tape Turing machine can also be solved on a
single tape Turing machine.

Theorem 5. For any k-tape Turing Machine M , there is a single tape TM single(M ) such that L(single(M )) =
L(M ).
Proof. In order to simulate the k-tape machine, the single-tape machine needs to keep track of the configuration of M. That means keeping track of the state of M, storing the contents of M's tapes, and keeping track of the position of each head. How do we do this? This requires us to address the following challenges.
• How do we store k tapes in one?
• How do we simulate the movement of k independent heads?
We address these challenges in order.

[Figure 3: a multi-tape TM M. Figure 4: its 1-tape equivalent single(M), whose cells hold tuples such as (⊢, ·, ⊢, ∗) pairing each simulated tape's symbol with a head marker ∗ or ·.]

The idea behind storing multiple tapes on a single tape is to store the contents of the ith cell of each tape in the (i + 1)th cell of the single-tape machine; we need to shift right by one cell because the 0th cell of the single tape contains the left endmarker, and the first cell contains an encoding of the endmarkers of each of the k tapes. This is achieved by enlarging the tape alphabet of the single-tape machine to a k-fold Cartesian product built from the tape alphabet of the k-tape machine. This is illustrated in Figures 3 and 4. The other piece of information that the single-tape machine needs to keep track of is the set of positions of the k heads. It does this by "marking" the cell on which a particular tape's head is positioned, indicated by the presence of "∗" as shown in Figure 4.
Having outlined how the single-tape TM stores the k-tape contents and the k head positions, it is clear that the single-tape TM can store the configuration of the k-tape machine, keeping M's state in its control state. We now need to address how the single-tape TM can simulate a single step of the k-tape machine. To simulate a single step, we need to find what is being read by each of the k heads. This can be accomplished by scanning the entire tape from left to right and storing in the state, as we encounter them, the contents of the cells that are marked (to indicate the presence of a tape head). After this scan, the 1-tape TM knows what the next step of the k-tape machine will be. But in order to finish the simulation, we need to update the tape to reflect the changed contents of each of the k tapes, and determine the new head positions. This is accomplished by a couple of scans that update the tape contents and move the markers to the new head positions.
Putting these ideas together, we can see that the high-level algorithm for the 1-tape TM is as follows. On input x:
1. First the machine rewrites the input x into the "new" format that stores the contents of all k tapes in each cell.
2. To simulate one step:

• the machine reads from left to right, remembering the symbol read on each tape, and moves all the way back to the leftmost position;
• next, it reads from left to right, changing symbols and moving those heads that need to be moved right;
• then, it scans back from right to left, moving the heads that need to be moved left.
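The per-step scans above can be sketched in Python. This is a simplified model, not the formal transition function of the construction: the combined tape is a list of cells, each cell a list of (symbol, marked) pairs, one per simulated tape, with marked=True playing the role of "∗" and "_" standing in for the blank.

```python
# Simplified model of simulating one step of a k-tape machine on one tape.

def read_scan(tape, k):
    """First scan: left to right, collecting the symbol under each head."""
    scanned = [None] * k
    for cell in tape:
        for i in range(k):
            sym, marked = cell[i]
            if marked:
                scanned[i] = sym
    return scanned

def fix_scan(tape, k, writes, moves):
    """Write the new symbols at the old head positions, then re-mark each
    head one cell to the left or right (extending the tape if needed)."""
    heads = [None] * k
    for pos, cell in enumerate(tape):
        for i in range(k):
            sym, marked = cell[i]
            if marked:
                heads[i] = pos
                cell[i] = (writes[i], False)   # write new symbol, unmark
    for i in range(k):
        new = heads[i] + (1 if moves[i] == "R" else -1)
        if new == len(tape):                   # moved past the right end
            tape.append([("_", False) for _ in range(k)])
        sym, _ = tape[new][i]
        tape[new][i] = (sym, True)             # mark the new head position

def simulate_step(tape, state, delta, k):
    """One step of the k-tape machine M, given its transition function."""
    scanned = read_scan(tape, k)
    state, writes, moves = delta[(state, tuple(scanned))]
    fix_scan(tape, k, writes, moves)
    return state
```

The actual construction splits `fix_scan` into two passes, handling right-moving heads on the outward scan and left-moving heads on the return scan, so that each pass of the 1-tape machine moves in only one direction.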

The above high-level description can be translated into a concrete set of transitions in a straightforward (but painful) manner. To illustrate how this can be accomplished, we give a precise formal definition of this construction.
Formally, let M = (Q, Σ, Γ, ⊢, ⊔, δ, s, t, r). To define the machine single(M), it is useful to identify the modes that single(M) can be in while simulating M. The modes are

  modes = {init, back-init, read, back-read, fix-right, fix-left}

where
• init means the machine is rewriting the input in the new format;

• back-init means the machine is going back all the way to the leftmost cell after "initializing" the tape;
• read means the machine is scanning from left to right to read all the symbols being read by the k-tape machine;
• back-read means the machine is going back to the leftmost cell after the "read" phase;

• fix-right means the machine is scanning from left to right, making all tape changes and moving those heads that need to be moved right;
• fix-left means the machine is scanning from right to left, moving all heads that need to be moved left.
Now single(M) = (Q′, Σ′, Γ′, ⊢, ⊔, δ′, s′, t′, r′) where

• Recall, based on the high-level description, single(M) needs to remember a few things in its state: the current "mode"; M's state; during the read phase, the symbols being scanned by each head of M; and, at the end of the read phase, the new symbols to be written and the directions to move the heads. Thus,

  Q′ = {s′, t′, r′} ∪ (modes × Q × (Γ × {L, R, ∗})^k)

where s′, t′, r′ are new initial, accept, and reject states, respectively. "∗" is a new special symbol that we will use when placing new head positions; it can be ignored for now. Intuitively, when the mode is "read" the directions don't mean anything, and the symbols in Γ are the symbols that M is scanning. During the "fix" phases, the directions are the directions in which each head needs to be moved, and the symbols are the new symbols to be written.
• Σ′ = Σ; the input alphabet does not change.
• On the tape, we write the contents of one cell of each of the k tapes, together with a mark recording whether the corresponding head scans that position. Thus, Γ′ = {⊢, ⊔} ∪ (Γ × {·, ∗})^k, where ⊢ is the left endmarker and ⊔ is the blank symbol of the machine.
• The initial, accept, and reject states are the new states s′, t′, and r′.
We will now formally define the transition function δ′. We describe δ′ for the various cases below; for any case not covered, we assume that the machine single(M) goes to the reject state r′ and moves its head right.

Initial State In the first step, single(M) moves to the "initialization phase", which rewrites the tape in the correct format for the rest of the computation. Thus, from the initial state s′ we go to a state whose "mode" is init. Since we are going to insert a symbol containing the left endmarker of each of the k tapes, we need to "shift" all symbols of the input one cell to the right, which is accomplished by remembering the next symbol to be written in the state. So δ′(s′, ⊢) = (s′, ⊢, R) and δ′(s′, a) = (⟨init, s, a, L, ⊔, L, ⊔, L, ..., ⊔, L⟩, (⊢, ∗, ⊢, ∗, ..., ⊢, ∗), R) for a ≠ ⊢; the symbol L here doesn't mean anything (and so can be changed to whatever you please).
Initialization In the initialization phase, we just read a symbol and write it in the "new format", which means writing blank symbols for all the other tape cells, and moving right. When we have scanned the entire input, we go back left, i.e., change the mode to back-init. There is one caveat to this: we are shifting the symbols of the input one position to the right, so we actually write what we remembered in our state, and remember what we just read in the state.

  δ′(⟨init, s, a, L, ⊔, L, ..., ⊔, L⟩, b) = (⟨init, s, b, L, ⊔, L, ..., ⊔, L⟩, (a, ·, ⊔, ·, ..., ⊔, ·), R)
  δ′(⟨init, s, a, L, ⊔, L, ..., ⊔, L⟩, ⊔) = (⟨back-init, s, ⊔, L, ..., ⊔, L⟩, (a, ·, ⊔, ·, ..., ⊔, ·), L)

Ending Initialization After we have rewritten the tape, we move the head all the way back, and move to the next phase, which is "reading".

  δ′(⟨back-init, s, ⊔, L, ..., ⊔, L⟩, b) = (⟨back-init, s, ⊔, L, ..., ⊔, L⟩, b, L)
  δ′(⟨back-init, s, ⊔, L, ..., ⊔, L⟩, ⊢) = (⟨read, s, ⊔, L, ..., ⊔, L⟩, ⊢, R)

where b ≠ ⊢.
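The net effect of the initialization phases can be pictured directly. A sketch in Python of the tape contents they produce (our own illustration: ">" stands for ⊢, "_" for ⊔, and True marks a head position):

```python
def initialized_tape(x, k):
    """Tape of single(M) after initialization, as a list of cells.

    Cell 0 is single(M)'s own endmarker; cell 1 packs the k simulated
    endmarkers, each marked because every head starts on cell 0 of its
    tape; cell i+1 packs cell i of tape 1 (holding the input) with
    blanks for tapes 2..k."""
    lend, blank = ">", "_"
    cells = [lend, tuple([(lend, True)] * k)]
    for a in x:
        cells.append(tuple([(a, False)] + [(blank, False)] * (k - 1)))
    return cells

print(initialized_tape("01", 2))
```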
Reading Here we scan the tape to the right, and whenever we encounter a position where there is a tape head (i.e., a ∗ in the appropriate component), we remember that symbol in the state. When we reach the right end (i.e., read a ⊔), we know all the information needed to determine the next step of M. We remember the new symbols to write and the directions to move the heads in the state, and move to the next phase, back-read, where we just go back all the way to the left end. These two cases are formally given as follows.
• Suppose the current state is P = ⟨read, q, a1, d1, ..., ai, di, ..., ak, dk⟩, and we read a symbol X = (b1, h1, ..., bi, hi, ..., bk, hk), where hi = ∗ means that the ith tape head is reading this position, and hi = · means that it is not. Then

  δ′(P, X) = (⟨read, q, a′1, d1, ..., a′i, di, ..., a′k, dk⟩, X, R)

where a′i = ai if hi = · and a′i = bi if hi = ∗.
• Suppose the current state is P = ⟨read, q, a1, d1, ..., ai, di, ..., ak, dk⟩, and we read ⊔. This means we have finished scanning, all the symbols ai are the symbols being read by M, and its state is q. So suppose M's transition function δ says

  δ(q, a1, a2, ..., ak) = (q′, b1, d′1, ..., bk, d′k).

That is, it says "replace symbol ai on tape i by bi and move its head in direction d′i". Then

  δ′(P, ⊔) = (⟨back-read, q′, b1, d′1, ..., bk, d′k⟩, ⊔, L).

So the new state of M, the symbols to be written, and the directions of the heads are stored in the state, and we go back.
Ending Reading After reading and determining the next step, we move the head all the way back, and move to the next phase, which fixes the tape to reflect the new situation.

  δ′(⟨back-read, q, a1, d1, ..., ak, dk⟩, b) = (⟨back-read, q, a1, d1, ..., ak, dk⟩, b, L)
  δ′(⟨back-read, q, a1, d1, ..., ak, dk⟩, ⊢) = (⟨fix-right, q, a1, d1, ..., ak, dk⟩, ⊢, R)

where b ≠ ⊢.

Right Scan of Fixing In this phase we move all the way to the right. Along the way, we change the symbols to the new symbols wherever the old heads were, and move all heads that need to be moved right. To move a head right, we use "∗" in the state to remember that the old head position of that tape is the current cell, and that the head needs to be moved to the next cell. Finally, there is the boundary case of moving the head on some tape to the right of the rightmost non-blank symbol on any tape. We capture these cases formally.
• Let the current state of single(M) be P = ⟨fix-right, q, a1, d1, ..., ak, dk⟩ and let the symbol being read be X = (b1, h1, ..., bk, hk). In this situation, single(M) moves to state P′ = ⟨fix-right, q, a1, d′1, ..., ak, d′k⟩, moves R, and writes X′ = (b′1, h′1, ..., b′k, h′k). P′ and X′ are determined as follows. If hi = ∗ (that is, tape i's head was here) and di = R, then b′i = ai (write the new symbol), h′i = · (the new head is not here), and d′i = ∗ (remember to put the head in the next cell). If hi = ∗ and di = L, then b′i = ai (write the new symbol), h′i = hi (defer the head movement to the next phase), and d′i = di. If hi = · and di = ∗ (i.e., we remember that the new head position is here), then b′i = bi (don't change the symbol), h′i = ∗ (the new head is here), and d′i = R (this was the original direction). If hi = · and di ≠ ∗, then b′i = bi, h′i = hi, and d′i = di.
• Consider the case when the state is P = ⟨fix-right, q, a1, d1, ..., ak, dk⟩, and ⊔ is on the tape. There are two possibilities. If di ≠ ∗ for every i, then we move to state P′ = ⟨fix-left, q, a1, d1, ..., ak, dk⟩, write ⊔, and move L. On the other hand, suppose there is some i such that di = ∗. Then we move to state P′ = ⟨fix-right, q, a1, d′1, ..., ak, d′k⟩, move R, and write X′ = (b′1, h′1, ..., b′k, h′k), where X′ and P′ are given as follows. First, b′i = ⊔ for all i. Next, if di = ∗ then h′i = ∗ and d′i = R. On the other hand, if di ≠ ∗ then h′i = · and d′i = di.
Left Scan of Fixing In this phase we move all the way to the left, and along the way we move all the head positions that need to be moved left. These changes are similar to the case of moving heads to the right, except for the boundary case of moving a head left from the leftmost position.

• Let the current state of single(M) be P = ⟨fix-left, q, a1, d1, ..., ak, dk⟩ and let the symbol being read be X = (b1, h1, ..., bk, hk). In this situation, single(M) moves to state P′ = ⟨fix-left, q, a1, d′1, ..., ak, d′k⟩, moves L, and writes X′ = (b1, h′1, ..., bk, h′k). P′ and X′ are determined as follows. If hi = ∗ and di = L (that is, this tape's head needs to be moved left), then h′i = · (the new head is not here) and d′i = ∗ (remember to put the head in the next cell). If hi = ∗ and di = R, then h′i = hi and d′i = di (don't do anything, since we already handled the right moves). If hi = · and di = ∗ (i.e., we remember that the new head position is here), then h′i = ∗ (the new head is here) and d′i = L (this was the original direction). If hi = · and di ≠ ∗, then h′i = hi and d′i = di.
• Now consider the case when we finish the left scan, i.e., the symbol being read is ⊢. Let the state be P = ⟨fix-left, q, a1, d1, ..., ak, dk⟩. Since we assume that no tape head moves to the left of the left endmarker, there should be no pending left moves, i.e., di ≠ ∗ for all i. In this case, we have fixed all the moves correctly, so we move to state P′ = ⟨read, q, a1, d1, ..., ak, dk⟩, write ⊢, move R, and start simulating the next step.
Acceptance/Rejection If M accepts or rejects (i.e., its state is t or r), then single(M) moves to its own accept or reject state.

  δ′(⟨read, t, a1, d1, ..., ak, dk⟩, b) = (t′, b, L)
  δ′(⟨read, r, a1, d1, ..., ak, dk⟩, b) = (r′, b, L)

2 Nondeterministic Turing Machines


The Turing machines we have considered thus far (whether single-tape or multi-tape) are deterministic machines: at each step, there is at most one possible next state, set of symbols to be written, and direction to move the head, or else the TM halts. In contrast, we can consider nondeterministic machines, where in each step the future is not determined; that is, at each step, there are finitely many possibilities.
Definition 6. Formally, a nondeterministic Turing machine (NTM) is N = (Q, Σ, Γ, ⊢, ⊔, ∆, s, t, r), where
• Q, Σ, Γ, ⊢, ⊔, s, t, r are as before for the 1-tape deterministic machine, and

• ∆ : (Q \ {t, r}) × Γ → 2^(Q×Γ×{L,R}) is the transition function, which now returns a set of possible moves.

Once again we need to define configurations and computations of an NTM. A configuration of a nondeterministic TM is exactly the same as that of a 1-tape TM, and so are the notions of starting configuration and accepting/rejecting/halting configurations. A single step →^1_N is defined similarly: (p, z, i) →^1_N (q, s^i_b(z), i + d) if (q, b, d) ∈ ∆(p, z_i); here "i + d" is i + 1 if d = R and i − 1 if d = L. The input x is accepted by N if (s, ⊢x⊔^ω, 0) →^*_N (t, z, n) for some z, n. Finally, L(N) = {x ∈ Σ* | N accepts x}.
Surprisingly, like for NFAs, nondeterminism does not provide any additional computational power.
Theorem 7. For any nondeterministic Turing Machine N , there is a (deterministic) TM det(N ) such that
L(det(N )) = L(N ).
Proof. When we translated an NFA to a DFA, the DFA kept track of all possible active threads of the NFA; this idea does not generalize easily here. Instead, det(N) will simulate N on the input by simulating N on each possible sequence of nondeterministic choices that N may make.
In order to understand this approach, it is useful to view nondeterministic computation on an input as a tree, where each path corresponds to one of the possible "active threads". This is shown in Figure 5. If r = max_{q,X} |∆(q, X)|, then the runs of N can be organized as an r-branching tree: α_{i1 i2 ··· in} is the configuration of N after n steps, where choice i1 is taken in step 1, i2 in step 2, and so on. The input x is accepted iff there is an accepting configuration somewhere in this tree.

[Figure 5: Computation of a nondeterministic machine. An r-branching tree of configurations rooted at α = (s, ⊢x⊔^ω, 0); each path in the tree corresponds to the sequence of steps of one of the active threads of the machine.]

The idea behind constructing the deterministic machine is that det(N) will search for an accepting configuration in the computation tree by exploring the tree along each path. The configuration at any vertex can be obtained by simulating N on the appropriate sequence of nondeterministic choices. Since the tree may be infinite, det(N) may not terminate if x is not accepted.
We now give more details about the machine det(N). det(N) uses 3 tapes to simulate N (recall that multi-tape TMs are equivalent to 1-tape TMs):
• Tape 1, called the input tape, will always hold the input x.

• Tape 2, called the simulation tape, will be used as N's (only) tape when simulating N on a sequence of nondeterministic choices.
• Tape 3, called the choice tape, will store the current sequence of nondeterministic choices.
det(N) will carry out the following steps.
1. Initially: the input tape contains x; the simulation tape and choice tape are blank.
2. Copy the contents of the input tape onto the simulation tape.
3. Simulate N using the simulation tape as its (only) tape.
(a) At the next step of N, if the state is q, the simulation tape head reads X, and the choice tape head reads i, then simulate the ith possibility in ∆(q, X); if i is not a valid choice, then go to step 4.
(b) After changing the state, the simulation tape contents, and the head position on the simulation tape, move the choice tape's head to the right. If the choice tape is now scanning ⊔, then go to step 4.
(c) If N accepts, then accept and halt; else go to step 3(a) to simulate the next step of N.
4. Write the lexicographically next choice sequence on the choice tape, erase everything on the simulation tape, and go to step 2.
To summarize, the machine det(N) works as follows.
• det(N) simulates N over and over again, for different choice sequences and for different numbers of steps.
• If N accepts x, then there is a sequence of choices that leads to acceptance. det(N) will eventually have this sequence on its choice tape, and its simulation of N will then accept.
• If N does not accept x, then no sequence of choices leads to acceptance. det(N) will therefore never halt!
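The search performed by det(N) can be sketched abstractly in Python. Here `step` and `is_accepting` are assumed interfaces standing in for one move of N under a given choice and for the acceptance test; configurations are opaque values.

```python
from itertools import count, product

def accepts(start, step, is_accepting, r):
    """Search N's computation tree by lexicographic choice sequences.

    r is the maximum branching degree; step(config, i) returns the next
    configuration under choice i, or None if i is not a valid choice.
    Mirrors det(N): if no sequence leads to acceptance, this loops forever."""
    for n in count(0):                                  # sequence length 0, 1, 2, ...
        for choices in product(range(r), repeat=n):     # "next" choice sequence
            config = start                              # erase the simulation tape
            valid = True
            for i in choices:                           # follow one thread of the tree
                config = step(config, i)
                if config is None:                      # invalid choice: try next sequence
                    valid = False
                    break
            if valid and is_accepting(config):
                return True

# Toy example: configurations are numbers, choice i maps c to 2c + i,
# and 5 is the only accepting configuration (reached from 1 via choices 0, 1).
step = lambda c, i: 2 * c + i if c < 8 else None
print(accepts(1, step, lambda c: c == 5, r=2))  # True
```

Note that this is a breadth-first search in disguise: every sequence of length n is tried before any sequence of length n + 1, so a finite accepting run is always found eventually.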

3 Random Access Machine


Random Access Machines are an idealized model of modern computers. They have a finite number of “reg-
isters”, an infinite number of available memory locations, and store a sequence of instructions or “program”
in memory.
• Initially, the program instructions are stored in a contiguous block of memory locations starting at
location 1. All registers and memory locations, other than those storing the program, are set to 0.
We will assume that the program consists of the following instruction set.
• add X, Y: Add the contents of registers X and Y and store the result in X.
• loadc X, I: Place the constant I in register X.
• load X, M: Load the contents of memory location M into register X.
• loadI X, M: Load the contents of the location “pointed to” by the contents of M into register X.
• store X, M: Store the contents of register X in memory location M .
• jmp M: The next instruction to be executed is in location M .
• jmz X, M: If register X is 0, then jump to instruction M .
• halt: Halt execution.
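The instruction set above is small enough to interpret directly. Below is a sketch in Python; the program layout is a simplification of the model (instructions live in a separate dict keyed by location rather than in memory itself, and registers and memory default to 0 as the notes specify).

```python
from collections import defaultdict

def run_ram(program, max_steps=10_000):
    """Interpret the RAM instruction set; return (registers, memory) on halt."""
    reg = defaultdict(int)                   # registers, default 0
    mem = defaultdict(int)                   # memory locations, default 0
    pc = 1                                   # execution starts at location 1
    for _ in range(max_steps):
        op, *args = program[pc]
        pc += 1                              # default: fall through to next location
        if op == "add":     reg[args[0]] += reg[args[1]]
        elif op == "loadc": reg[args[0]] = args[1]
        elif op == "load":  reg[args[0]] = mem[args[1]]
        elif op == "loadI": reg[args[0]] = mem[mem[args[1]]]   # indirect load
        elif op == "store": mem[args[1]] = reg[args[0]]
        elif op == "jmp":   pc = args[0]
        elif op == "jmz":   pc = args[1] if reg[args[0]] == 0 else pc
        elif op == "halt":  return reg, mem
    raise RuntimeError("step budget exhausted")

# Example: compute 2 + 3 into register 1 and store the result at location 100.
prog = {1: ("loadc", 1, 2), 2: ("loadc", 2, 3), 3: ("add", 1, 2),
        4: ("store", 1, 100), 5: ("halt",)}
reg, mem = run_ram(prog)
print(mem[100])  # 5
```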
Once again such machines do not provide any additional computational power over the model of a 1-tape
TM.

Theorem 8. Anything computed on a RAM can be computed on a Turing machine.
Proof. We outline a proof sketch. The RAM will be simulated by a multi-tape TM. In order to simulate the RAM, the TM stores the contents of the registers, memory, etc., on different tapes as follows.
• Instruction Counter Tape: stores the memory location where the next instruction is stored; initially it is 1.
• Memory Address Tape: stores the address of the memory location where a load/store operation is to be performed.
• Register Tape: stores the contents of each of the registers.
– It holds each register's index followed by that register's contents, in blocks of the form #⟨RegisterNumber⟩∗⟨RegisterValue⟩#···. For example, if register 1 holds 11, register 2 holds 100, and register 3 holds 11011 (all in binary), then the tape contains #1∗11#10∗100#11∗11011#···
• Memory Tape: like the register tape, stores blocks of the form #⟨Address⟩∗⟨Contents⟩#.
– To store an instruction, the contents consist of an opcode followed by its ⟨arguments⟩.
• Work Tapes: 3 additional work tapes are used to simulate the steps of the RAM.
The TM will simulate the RAM as follows.
• The TM starts with the program stored in memory, and the instruction counter tape initialized to 1.
• Each step of the RAM is simulated using many TM steps:
– read the instruction counter tape;
– search for the relevant instruction in memory;
– store the opcode of the instruction and the register address (of its argument) in the finite control, and store the memory address (of its argument) on the memory address tape;
– retrieve the values from the register tape and/or memory tape and store them on a work tape;
– perform the operation using the work tapes;
– update the instruction counter tape, register tape, and memory tape.
To illustrate the above sequence of steps, consider the example of simulating an add instruction.
• Suppose the instruction counter tape holds 101.
• The TM searches the memory tape for the pattern #101∗.
• Suppose the memory tape contains ···#101∗⟨add⟩, 11, 110#···
• The TM stores "add", 11, and 110 in its finite control. In other words, it moves to a state q_{add,11,110} whose job is to add the contents of registers 11 and 110 and put the result in register 11.
• Search the register tape for the pattern #11∗. Suppose the register tape contains ···#11∗10110#···; in other words, the contents of register 11 is 10110. Copy 10110 to one of the work tapes.
• Search the register tape for the pattern #110∗, and copy the contents of register 110 onto work tape 2.
• Compute the sum of the contents of the two work tapes.
• Search the register tape for #11∗ and replace the string 10110 by the answer computed on the work tape. This may involve shifting the contents of the register tape to the right (or left).
• Add 1 to the instruction counter tape.

4 Church-Turing Thesis
Over the past several decades, various efforts to capture the power of mechanical computation have resulted in formal models that all have the same expressive power.
• Non-Turing Machine models: random access machines, λ-calculus, type 0 grammars, first-order rea-
soning, π-calculus, . . .
• Enhanced Turing Machine models: TM with 2-way infinite tape, multi-tape TM, nondeterministic
TM, probabilistic Turing Machines, quantum Turing Machines . . .
• Restricted Turing Machine models: queue machines, 2-stack machines, 2-counter machines, . . .
This has led to what is called the Church-Turing thesis, which states:

“Anything solvable via a mechanical procedure can be solved on a Turing Machine.”
The Church-Turing thesis is not a mathematical statement that can be proved or disproved! Our belief
in it is based on the fact that many attempts to define computation yield the same expressive power as
Turing machines. As a consequence, in this course, we will use informal pseudo-code to argue that a
problem/language can be solved on Turing machines.

5 Universal Simulation
A crucial observation about Turing machines is that there are Turing machines, called universal Turing
machines, that can simulate other Turing machines when given their encodings as inputs. A universal Turing
machine U is such that, given the encoding of a Turing machine M and an input x, U can simulate M on x,
accepting if M accepts and rejecting if M rejects.
Before describing how such a machine might work, we need to describe how an arbitrary Turing machine
M can be encoded as a string. Since U is a fixed machine, its tape alphabet is also fixed. However, it needs
to be able to simulate TMs with arbitrary tape alphabets. This can be achieved by describing an encoding
of any such machine using a fixed alphabet.

5.1 Encoding Turing Machines


Consider an arbitrary Turing machine M = (Q, Σ, Γ, `, t, δ, s, t, r). We will encode M using the fixed
alphabet {0, 1, [, ], ·, |}. The encoding will be built from encodings of symbols a ∈ Γ and states q ∈ Q. A
symbol a ∈ Γ will be encoded as a binary string enc(a) of length ⌈log |Γ|⌉. We assume that enc(`) = 0^⌈log |Γ|⌉
and enc(t) = 0^(⌈log |Γ|⌉−1) 1. Similarly, each state q ∈ Q will be encoded as a binary string enc(q) of length
⌈log |Q|⌉. We will assume that enc(s) = 0^⌈log |Q|⌉, enc(t) = 0^(⌈log |Q|⌉−1) 1, and enc(r) = 0^(⌈log |Q|⌉−2) 10.
Finally, the directions L and R will be encoded as 0 and 1, respectively.
Turing machines will be encoded by encoding all their transitions as a string. A transition δ(p, a) = (q, b, d)
is encoded as [enc(p)·enc(a)|enc(q)·enc(b)·enc(d)]. The code of the machine itself is then just the encodings
of all its transitions concatenated together, i.e.,

enc(M) = [trans 1][trans 2] · · · [last trans]

where each [trans i] is the encoding of one transition as above.

An input string x = a1 a2 · · · an ∈ Γ∗ will be encoded as [enc(a1)·enc(a2) · · · enc(an)].
The above scheme outlines one possible way in which a Turing machine and its input string may be
encoded. The precise choice of the alphabet and the exact encoding scheme is not important; it has been
outlined only to give a concrete example of one such scheme, so it is important not to place too much
emphasis on its details. For what follows, we could use any encoding scheme that is easy to interpret and is
capable of encoding all Turing machines up to isomorphism.
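As a concrete illustration, the encoding scheme just described can be sketched in Python. This is one possible realization, with '.' standing in for the separator symbol · and the binary codes determined by the order in which states and symbols are listed (so the start state and the end marker must be listed first to get the all-zeros codes).

```python
from math import ceil, log2

def make_enc(items):
    """Assign each item a fixed-width binary code, in the spirit of enc()."""
    width = max(1, ceil(log2(len(items))))
    return {x: format(i, "0" + str(width) + "b") for i, x in enumerate(items)}

def encode_tm(states, tape_alphabet, delta):
    """Encode every transition delta[(p, a)] = (q, b, d) and concatenate the codes."""
    enc_q = make_enc(states)         # start state listed first gets code 00...0
    enc_g = make_enc(tape_alphabet)  # end marker listed first gets code 00...0
    enc_d = {"L": "0", "R": "1"}
    code = ""
    for (p, a), (q, b, d) in delta.items():
        code += ("[" + enc_q[p] + "." + enc_g[a] + "|" +
                 enc_q[q] + "." + enc_g[b] + "." + enc_d[d] + "]")
    return code
```

For instance, a machine with states s, t, r, tape alphabet {end marker, blank, 0, 1}, and the single transition δ(s, 0) = (t, 1, R) is encoded as the string [00.10|01.11.1].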

We will now describe a universal Turing machine U that will recognize the following language.

L(U ) = {enc(M )#enc(x) | x ∈ L(M )}

In other words, given the encoding of a Turing machine M and an encoding of an input x, the machine U
determines if x is accepted by M . We will often abuse notation and instead write the language of U as

L(U ) = {M #x | x ∈ L(M )}.

It is important not to get confused by such a notation.

[Figure: a finite-state control attached to four tapes, labeled Input, Simulation Tape, State of M, and Scratch.]

Figure 6: A universal Turing machine

The machine U is shown schematically in Figure 6. It has 4 tapes. The input tape stores the encoding
M #x. The simulation tape will be used as the tape of M when U “runs” M , and the state of M will be
stored on the state tape. Thus the configuration of M during the simulation is stored on the simulation and
state tapes. It is important to realize that the state of M cannot be stored in U 's finite control. This is
because U has a fixed number of states, whereas M could be any machine (not known at the time of
designing U ), so its number of states cannot be bounded. The scratch tape will be used by U to store
auxiliary information as it simulates M .
The machine U will work as follows.

1. Check to see if the code for M is a valid TM code; if not, reject the input. For our example code,
this involves checking that the string has exactly one “|” between each [ and ], etc.
2. If the code is valid, then copy x onto tape 2.
3. Write 0 · · · 0, the start state of M , on the third tape, and scan the “first cell” of tape 2.

4. To simulate a move of M , search for a transition [enc(p)·enc(a)|enc(q)·enc(b)·enc(d)] on tape 1, where
enc(p) is the current state on tape 3 and enc(a) is read from tape 2 from the current “cell”, i.e., the
block of symbols at the current head position. Then, write enc(q) on tape 3 (after erasing
its current contents), write enc(b) on tape 2 in place of enc(a), and finally move the head on tape 2 to
the appropriate “cell”.

5. If the state on tape 3 is 0 · · · 01 (the accept state of M ), then accept; if it is 0 · · · 010 (the reject state), then reject.
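The simulation loop of steps 4 and 5 can be sketched as follows. This Python sketch simplifies in two ways: it assumes the transition table has already been decoded into a dictionary (whereas the real U searches the encoded string on tape 1 for each move), and it uses '<' for the left end marker and '_' for the blank.

```python
def universal_sim(delta, start, accept, reject, x, max_steps=10_000):
    """Run the machine given by delta on input x; return True iff it accepts."""
    tape = ["<"] + list(x)     # tape 2: the simulation tape, holding M's tape
    state = start              # tape 3: the current state of M
    head = 1                   # head position on the simulation tape
    for _ in range(max_steps):            # U itself would loop forever if M
        if state == accept:               # does not halt; the bound only keeps
            return True                   # this sketch terminating
        if state == reject:
            return False
        if head == len(tape):
            tape.append("_")              # extend the semi-infinite tape
        # Step 4: look up the transition for (current state, scanned symbol).
        state, symbol, direction = delta[(state, tape[head])]
        tape[head] = symbol               # overwrite the scanned cell
        head += 1 if direction == "R" else -1
    raise RuntimeError("step bound exceeded (M may not halt)")
```

For example, the two-transition machine {(s, 1) → (t, 1, R), (s, 0) → (r, 0, R)} accepts exactly the inputs whose first symbol is 1, and the simulator reports this.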

