Ch03 Handout


CISC 4090: Theory of Computation

Chapter 3
The Church-Turing Thesis
Section 3.1: Turing Machines
Courtesy of Arthur G. Werschulz

Fordham University
Department of Computer and Information Sciences

Spring, 2014


Turing machines: Context

- Models of computation:
  - Finite automata: model devices with little memory.
  - Pushdown automata: model devices with unlimited memory, accessible only in LIFO order.
  - Turing machines: model general-purpose computers.

Turing machines overview

- Introduced by Alan Turing in 1936.
- Unlimited memory: an infinite tape that can be moved left/right and written.
- Much less restrictive than the stack of a PDA.
- A Turing machine can do everything a real computer can do (even though it is a simple model).
- However, a Turing machine cannot solve all problems!
What is a Turing machine?
Informal description:

- Contains an infinite tape.
- The tape initially contains the input string, with blanks everywhere else.
- The machine can read and write the tape, and move left or right after each action.
- The machine continues until it enters an accept state or a reject state, at which point it immediately stops and outputs accept or reject.

Very different from FSAs and PDAs:

- The machine can loop forever.
- Why can't an FSA or PDA loop forever?
- Answer: an FSA/PDA terminates when the input string is fully processed, taking only one "action" for each input symbol processed.

Turing machine

[Figure: a finite control whose read/write head sits over an infinite tape containing "a b a b ..." followed by blanks.]

Designing a Turing machine: Example #1

- Design M1 to recognize the language B = { w#w : w ∈ {0,1}* }.
- We will focus on informal descriptions, even more than we did with PDAs.
- Imagine that you are standing on an infinite tape with symbols on it, and you want to check whether the string belongs to B.
- What procedure would you use, given that you can read/write and move the tape in each direction?
- You have a finite control, so you cannot remember much. Hence you must rely on the information on the tape.
- Try it!
- What are the possible outcomes?
  - Accept or reject.
  - Looping is not possible: M1 is guaranteed to terminate/halt, since each iteration makes progress.

Turing machine: Example #1

- M1 loops. In each iteration, it matches one symbol on each side of the #.
- It works on the leftmost symbol remaining, so it scans forward and backward.
- It crosses off the symbol it's working on. We can assume the TM replaces it with some special symbol x.
- When scanning forward, it scans to the #, then to the first symbol not crossed off.
- When scanning backward, it scans past the # and then to the crossed-off symbol.
- If the TM discovers a mismatch, it rejects.
- If all symbols are crossed off, it accepts.
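M1's zig-zag procedure translates naturally into code. Below is a sketch in Python, simulating the crossing-off on a list standing in for the tape (the function name and the list representation are illustrative choices, not part of the TM formalism):

```python
def accepts_w_hash_w(s: str) -> bool:
    """Zig-zag check that s has the form w#w, with w over {0,1}."""
    if s.count("#") != 1:
        return False               # M1 rejects without exactly one '#'
    tape = list(s)
    sep = tape.index("#")
    i = 0                          # leftmost not-yet-crossed-off symbol
    while i < sep:
        symbol = tape[i]
        tape[i] = "x"              # cross off on the left side
        # scan forward past '#' to the first symbol not crossed off
        j = sep + 1
        while j < len(tape) and tape[j] == "x":
            j += 1
        if j == len(tape) or tape[j] != symbol:
            return False           # mismatch (or right side too short): reject
        tape[j] = "x"              # cross off the matching symbol
        i += 1
    # left side exhausted: accept iff the right side is fully crossed off
    return all(c == "x" for c in tape[sep + 1:])
```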
Sample execution

For the string 011000#011000. Successive tape contents (in the original slides the tape head is shown at the red symbol):

    0 1 1 0 0 0 # 0 1 1 0 0 0 ...
    x 1 1 0 0 0 # 0 1 1 0 0 0 ...
    ...
    x 1 1 0 0 0 # x 1 1 0 0 0 ...
    x 1 1 0 0 0 # x 1 1 0 0 0 ...
    x x 1 0 0 0 # x 1 1 0 0 0 ...
    ...
    x x x x x x # x x x x x x ...

Formal definition of a Turing machine

- The key is the transition function

      δ : Q × Γ → Q × Γ × {L, R}

- Suppose the TM is in state q, with the head over tape symbol a, and δ(q, a) = (r, b, L) (or (r, b, R)). After the TM executes this step, it is in the new state r, the new symbol b replaces a on the tape, and the tape head has moved left (or right).

Formal definition of a Turing machine (cont'd)

- A Turing machine is a 7-tuple (Q, Σ, Γ, δ, q0, qaccept, qreject), where:
  - Q is a finite set of states.
  - Σ is the input alphabet, not containing the special blank symbol ⊔.
  - Γ is the tape alphabet, where ⊔ ∈ Γ and Σ ⊂ Γ.
  - δ : Q × Γ → Q × Γ × {L, R} is the transition function.
  - q0, qaccept, and qreject are the start, accept, and reject states.
- Do we need more than one accept or reject state?
  - No: once we enter such a state, we can terminate.

TM computation

- A TM's configuration consists of:
  - its current state,
  - the current tape contents, and
  - the current head location.
- As a TM computes, its configuration changes.
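The 7-tuple definition maps almost directly onto a small simulator. The sketch below is one possible encoding: δ is a dict keyed by (state, symbol), the tape is a sparse dict, and a step budget stands in for the possibility that the machine never halts (all of these are implementation choices, not part of the definition):

```python
def run_tm(delta, q0, q_accept, q_reject, input_str, blank="_", max_steps=10_000):
    """Simulate a deterministic single-tape TM.

    delta maps (state, symbol) -> (new_state, written_symbol, 'L' or 'R').
    Returns 'accept', 'reject', or 'no halt' (step budget exhausted)."""
    tape = dict(enumerate(input_str))   # sparse tape; absent cells are blank
    state, head = q0, 0
    for _ in range(max_steps):
        if state == q_accept:
            return "accept"
        if state == q_reject:
            return "reject"
        symbol = tape.get(head, blank)
        state, tape[head], move = delta[(state, symbol)]
        head += 1 if move == "R" else -1
        head = max(head, 0)             # moving left off the left end stays put
    return "no halt"

# A two-state TM accepting strings of 0s of even length:
delta = {
    ("q_even", "0"): ("q_odd", "0", "R"),
    ("q_odd", "0"): ("q_even", "0", "R"),
    ("q_even", "_"): ("acc", "_", "R"),
    ("q_odd", "_"): ("rej", "_", "R"),
}
```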
Turing recognizable and decidable languages

- The set of strings that a Turing machine M accepts is the language of M, or the language recognized by M, denoted L(M).
- Definitions:
  - A language is Turing-recognizable (sometimes called recursively enumerable) if some Turing machine recognizes it.
  - A Turing machine that halts on all inputs is a decider. A decider that recognizes a language decides it.
  - A language is (Turing-)decidable (sometimes called recursive) if it is decided by some Turing machine.
- Notes:
  - A language is decidable iff it is recognized by a TM that always halts, i.e., there is a decider for that language.
  - Every decidable language is Turing-recognizable.
  - It is possible for a TM to halt only on those strings it accepts.

Turing machine: Example #2

- Design a TM M2 that decides A = { 0^(2^n) : n ≥ 0 }, the language of all strings of 0s whose length is a power of two.
- Without designing it, do you think (intuitively) that this can be done? Why?
  - We could write a (C++, Java, Perl, ...) program to do this.
  - Since a TM can do anything a computer can do, this can be done by a TM.
- Solution? Basic idea: divide by 2 each time and see whether the remainder is 1:

      while true do
          sweep left to right across the tape, crossing off every other 0;
          if the tape has exactly one 0 then accept;
          if the tape has an odd number of 0s (greater than one) then reject;
          return the head to the left-hand end of the tape;
      end
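M2's divide-by-two strategy can be sketched directly; the sketch below counts the remaining 0s instead of literally sweeping a tape, but it follows the same accept/reject logic as the pseudocode above:

```python
def decides_power_of_two_zeros(s: str) -> bool:
    """M2's strategy: repeatedly cross off every other 0.

    Each sweep halves the number of remaining 0s; the length is a power
    of two iff every sweep before the last leaves an even count."""
    if not s or set(s) != {"0"}:
        return False            # input must be a nonempty string of 0s
    count = len(s)              # number of 0s still on the tape
    while True:
        if count == 1:
            return True         # exactly one 0 left: accept
        if count % 2 == 1:
            return False        # odd (and > 1): not a power of two, reject
        count //= 2             # crossing off every other 0 halves the count
```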

Sample execution of TM M2

On input 0000:

    0 0 0 0    number is 4 = 2^2
    x 0 0 0
    x 0 x 0    now we have 2 = 2^1
    x 0 x 0
    x 0 x 0
    x x x 0
    x x x 0    now we have 1 = 2^0
    x x x 0    seek back to start
    x x x 0    scan right; one 0, so accept

Turing machine: Example #3

- Design TM M3 to decide the language

      C = { a^i b^j c^k : i × j = k and i, j, k ≥ 1 }

- What does this tell us about the capability of a TM?
  - It can do (or at least check) multiplication.
- As we have seen before, we often use unary notation.
- How would you approach this? (Imagine that we were trying 2 × 3 = 6.)
Turing machine: Example #3 (cont'd)

1. First, scan the string from left to right to verify that it is of the form a^i b^j c^k. If it is, scan to the start of the tape,¹ and if not, reject. (Easy to do with finite control/a finite automaton.)
2. Cross off the first a and scan until the first b occurs. Shuttle between the b's and c's, crossing off one of each until all b's are gone. If all c's have been crossed off and some b's remain, reject.
3. Restore² the crossed-off b's and repeat step 2 if any a's remain. If all a's are gone, check whether all c's are crossed off: if so, accept; if not, reject.

¹ Some subtleties here; see the text. We can either use a special symbol, or back up until we realize the tape is stuck and hasn't actually moved left.
² How to restore? Have a special cross-off symbol that incorporates the original symbol; just put an X through that symbol.

Transducers

- We keep talking about recognizing a language, rather than generating a language. This is common in language theory.
- But now that we are talking about computation, this may seem strange and limiting.
- Computers typically transform input into output. For example, we are more likely to have a computer perform a multiplication than check that an equation is correct.
- TMs can also generate, or transduce.
- How would you compute c^(i×j), given a^i and b^j?
- Similar to TM M3: for every a, scan through the b's; for each b, go to the end of the string and add a c. Thus, by zig-zagging i times, you can generate the appropriate number of c's.
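M3's crossing-off strategy can be sketched as follows. The regular-expression check plays the role of step 1's finite-control scan, and the nested loops mirror the shuttling of steps 2-3; counting crossed-off c's instead of marking tape cells is an illustrative shortcut:

```python
import re

def decides_mult(s: str) -> bool:
    """M3's strategy for C = { a^i b^j c^k : i*j = k and i,j,k >= 1 }."""
    m = re.fullmatch(r"(a+)(b+)(c+)", s)   # step 1: verify the form a^i b^j c^k
    if not m:
        return False
    i, j, k = (len(g) for g in m.groups())
    remaining = k                # c's not yet crossed off
    for _ in range(i):           # step 3: repeat step 2 once per a
        for _ in range(j):       # step 2: shuttle between b's and c's
            if remaining == 0:
                return False     # ran out of c's: reject
            remaining -= 1       # cross off one c per (a, b) pair
    return remaining == 0        # all c's crossed off exactly: accept
```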

Turing machine: Example #4

- Solve the element uniqueness problem:
  - Given a list of strings over {0,1}, separated by #s, accept if all the strings are different.
- The language is

      E = { #x1#x2# ... #xl : each xi ∈ {0,1}* and xi ≠ xj whenever i ≠ j }

- How would you do this?

Turing machine: Example #4 (cont'd)

1. Place a mark on top of the leftmost symbol. If it was a blank, accept. If it was a #, continue. Otherwise, reject.
2. Scan right to the next # and place a mark on it. If no # is encountered, we only had x1, so accept.
3. By zig-zagging, compare the two strings to the right of the two marked #s. If they are equal, reject.
4. Move the rightmost of the two marks to the next # symbol to the right. If no # symbol is encountered before a blank, move the leftmost mark to the next # to its right and the rightmost mark to the # after that. This time, if no # is available for the rightmost mark, all the strings have been compared, so accept.
5. Go back to step 3.
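The five-step marking procedure compares every pair of strings; here is a sketch that mirrors it at the level of pairs (the split-based parsing is an illustrative shortcut for walking the two marks along the tape):

```python
def decides_unique(s: str) -> bool:
    """Element-uniqueness check for E = { #x1#x2#...#xl : all xi distinct }."""
    if s == "":
        return True             # leftmost symbol is a blank: accept
    if not s.startswith("#"):
        return False            # leftmost symbol is neither blank nor #
    xs = s.split("#")[1:]       # the strings between the # separators
    # the two marked #s of steps 2-5 visit every pair (xi, xj), i < j
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return False    # an equal pair: reject
    return True                 # every pair differs: accept
```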
Decidability

- All these examples have been decidable.
- Showing that a language is Turing-recognizable but not decidable is more difficult. (Covered in Chapter 4.)
- How do we know that these examples are decidable?
  - You can tell that each iteration makes progress towards the ultimate goal, and so you must reach that goal.
  - This is clear from simply examining the "algorithm".
  - It is not hard to prove formally. For example, suppose there are n symbols at the start. If we erase a symbol in each and every iteration, we will be done in n iterations.

Section 3.2: Variants of Turing machines

Variants of Turing machines

- We saw only a few variants of FAs and PDAs: deterministic and nondeterministic.
- There are several variants of Turing machines.
- They are all equivalent. (No surprise here: a TM can compute anything that's "computable.")
- So, choose the most convenient variant for the task at hand.

TM variant #1: Let the head stay put

- In the current TM model, the tape head must move either left or right after each step.
- It is sometimes convenient to allow the tape head to stay put.
- This new variant is equivalent to the original model. Prove it!
  - To show that two models are equivalent, we need only show that each can simulate the other.
  - Two machines are equivalent if they recognize the same language.
- Proof: we can convert any TM with "stay put" transitions into one without them by replacing each "stay put" transition with two additional transitions: move right, then move left.
TM variant #2: Multi-tape TMs

- A multitape TM is like an ordinary TM, but with several tapes.
  - Each tape has its own read/write head.
  - Initially, tape 1 holds the input string; the remaining tapes are blank.
- The transition function allows reading, writing, and moving the heads on some or all of the tapes simultaneously.
- A multitape TM may be more convenient (think of the extra tapes as "scratch paper"), but it doesn't add more power.

Proof of equivalence of variant #2

- We show how to convert a k-tape TM M into an equivalent 1-tape TM S.
- S simulates the k tapes of M using a single tape, with # as a delimiter separating the contents of the k tapes.
- S marks the locations of the k heads by putting a dot above the appropriate symbols.

Proof of equivalence of variant #2 (cont'd)

- On input w = w1 w2 ... wn, machine S's tape will look like

      # ẇ1 w2 ... wn # ⊔̇ # ⊔̇ # ... #

  (one segment per simulated tape, with a dotted symbol marking each virtual head).
- To simulate a single move, S scans its tape from the first # to the (k+1)st # to determine the symbols under the virtual heads. Then S makes a second pass to update the heads and contents, based on M's transition function.
- If at any point S moves one of the virtual heads rightward onto a #, this means that M has moved the corresponding head onto a previously-unread blank portion of its tape. So S writes a blank symbol onto this cell, and shifts everything on the tape from this cell onward one unit to the right.

TM variant #3: Nondeterministic TM

- We can add nondeterminism to Turing machines.
- This is similar to adding nondeterminism to the other models, e.g., moving from DFAs to NFAs: allow the transition function to choose among a set of possibilities.
- The transition function for a deterministic TM is δ : Q × Γ → Q × Γ × {L, R}.
- A nondeterministic Turing machine has transition function δ : Q × Γ → P(Q × Γ × {L, R}).
- Computation of an NDTM: a tree whose branches correspond to the different possibilities for the machine.
- If some branch of the computation leads to the accept state, the NDTM accepts its input.
TM variant #3: Nondeterministic TM (cont'd)

- Proof sketch of equivalence: simulate any nondeterministic TM N with a deterministic TM D.
- D will try all possible branches.
- We can view the branches as forming a tree, which we can explore.
- Using depth-first search is a bad idea: it fully explores one branch before going on to the next. If that branch loops forever, D will never even try most of the branches.
- Use breadth-first search instead. This method guarantees that all branches will be explored to any finite depth, and hence D will accept if any branch accepts.
- Hence, the DTM accepts iff the NTM does.
- The text goes on to show that this can be done using 3 tapes (input; handling the current branch; tracking the position in the computation tree).

Enumerators

- An enumerator E is a TM with a printer attached.
- E can send strings to be output to the printer.
- Its input tape is initially blank.
- The language enumerated by E is the set of strings it prints out.
- E may not halt, and may print infinitely many strings.
- Theorem: a language is Turing-recognizable iff some enumerator enumerates it.
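The breadth-first simulation can be sketched over abstract configurations. Here `successors` stands in for the nondeterministic transition function (returning the set of next configurations), and a depth budget stands in for "any finite depth"; both names are illustrative assumptions, not part of the construction in the text:

```python
from collections import deque

def nd_accepts(successors, start_config, is_accepting, max_depth=12):
    """Breadth-first exploration of a nondeterministic computation tree.

    successors(config) returns the set of next configurations;
    is_accepting(config) tests for the accept state. Unlike DFS,
    BFS cannot get stuck on a single infinite branch."""
    frontier = deque([(start_config, 0)])
    seen = {start_config}
    while frontier:
        config, depth = frontier.popleft()
        if is_accepting(config):
            return True          # some branch reached the accept state
        if depth < max_depth:
            for nxt in successors(config):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return False                 # no accepting branch within the depth budget

# Toy nondeterministic process: from n, branch to n+2 or n*3.
# 9 is reachable from 1 (1 -> 3 -> 9); 12 is not (every reachable value is odd).
```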

Proof of enumerator equivalence

- First direction: if enumerator E enumerates language A, then some TM M recognizes A.
  - On input w, the TM M runs E (which is itself a TM) and checks whether each output matches w. If a match ever appears, M accepts.
- Other direction: if a TM M recognizes a language A, we construct an enumerator E for A as follows:
  - Let s1, s2, s3, ... be a list of all possible strings in Σ*.
  - For i = 1, 2, ...:
    - Run M for i steps on each input s1, s2, ..., si.
    - If any computation accepts, print the corresponding sj.
  - The loop over i implements a breadth-first search, which eventually generates everything without getting stuck.

Equivalence with other models

- Many TM variants have been proposed, some of which may appear very different.
- All have unlimited access to unlimited memory.
- All models with this feature turn out to be equivalent, under reasonable assumptions (e.g., that one can only do a finite amount of work in one step).
- Thus TMs are a universal model of computation: the class of algorithms is the same, independent of the specific model of computation.
- To get some insight, note that all programming languages are equivalent. For example, one can write a compiler for any language in any other language (assuming basic constructs are available).
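The second direction can be sketched as follows; `accepts_within(s, steps)` is a hypothetical stand-in for running M for a bounded number of steps on s:

```python
from itertools import count, product

def all_strings(alphabet=("0", "1")):
    """s1, s2, s3, ...: every string over the alphabet, in length order."""
    for n in count(0):
        for tup in product(alphabet, repeat=n):
            yield "".join(tup)

def enumerate_language(accepts_within, max_i=20):
    """Enumerator for L(M): at stage i, run M for i steps on s1, ..., si.

    accepts_within(s, steps) stands in for a step-bounded run of M.
    Each accepted string is "printed" (collected) at most once."""
    printed = []
    for i in range(1, max_i + 1):
        gen = all_strings()
        for s in (next(gen) for _ in range(i)):
            if accepts_within(s, i) and s not in printed:
                printed.append(s)   # send s to the printer
    return printed

# Toy step-bounded recognizer: accepts even-length strings,
# taking len(s) steps to do so.
evens = enumerate_language(lambda s, steps: len(s) % 2 == 0 and len(s) <= steps)
```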
Section 3.3: Definition of an Algorithm

What is an algorithm?

- How would you describe an algorithm?
- An algorithm is a collection of simple instructions for carrying out some task.
- A procedure or recipe.
- Algorithms abound in mathematics.
- Ancient algorithms (thousands of years old):
  - Finding prime numbers (Eratosthenes).
  - Computing the greatest common divisor (Euclid).
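Euclid's algorithm, the second of the ancient examples above, takes only a few lines:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a
```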

Hilbert's Problems

- In 1900, David Hilbert proposed 23 mathematical problems for the next century.
- Hilbert's Tenth Problem: devise an algorithm for determining whether a multivariate integer polynomial has an integral root, i.e., whether there exist x1, x2, ..., xn ∈ Z such that

      p(x1, x2, ..., xn) = 0.

- Instead of "algorithm", Hilbert said "a process by which it can be determined by a finite number of operations".
- For example, the polynomial 6x³yz² + 3xy² − x³ − 10 has the integral root (x, y, z) = (5, 3, 0).
- Hilbert assumed that such a method exists.
- He was wrong.

Church-Turing Thesis

- We can't prove nonexistence results for algorithms without a precise definition of "algorithm".
- Definitions were provided in 1936:
  - the λ-calculus (A. Church), and
  - TMs (A. Turing).
  - These were shown to be equivalent.
- Church-Turing thesis: "computable" means "TM-computable". This is the connection between the intuitive and formal notions of algorithm.
- In 1970, Yuri Matiyasevich (in his PhD dissertation) showed that no algorithm exists for testing whether an arbitrary integer polynomial has integral roots. (This is sometimes called the "DPRM theorem".)
  - Of course, there's always an answer ("yes" or "no"); but there's no algorithm that gives us the answer.
More on Hilbert's Tenth Problem

- Hilbert essentially asked whether

      D = { multivariate integer polynomials p : p has an integral root }

  is decidable (not just Turing-recognizable).
- Can you come up with a procedure to answer this question?
  - Univariate case: try all possible integers 0, ±1, ±2, .... If one of them works, the answer is "yes".
  - Multivariate case: lots of combinations.
- The difficulty here?
  - The search may not terminate in a "reasonable" amount of time.
  - You don't know whether that's because it actually doesn't terminate, or because it simply hasn't run long enough.
  - This shows that the problem is Turing-recognizable, but it can't be used to determine whether the problem is Turing-decidable.

More on Hilbert's Tenth Problem (cont'd)

- For the univariate case, there is actually an upper bound on the roots of the polynomial. So in this case, the previous algorithm always terminates, and the univariate problem is solvable.
- Think about the significance of the fact that you can prove that something cannot be computed.
  - It does not mean that you're not smart enough to compute it!
- One might have hoped that something else would work; DPRM dashes that hope.
- More in Chapter 4.
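The univariate decision procedure, with an explicit bound that makes it a decider. The Cauchy bound |x| ≤ 1 + max|ci|/|cn| is one standard choice of upper bound (the coefficient-list encoding is an illustrative assumption):

```python
def has_integer_root(coeffs):
    """Decide whether c0 + c1*x + ... + cn*x^n has an integer root.

    coeffs lists c0, ..., cn with cn != 0. Any root x satisfies the
    Cauchy bound |x| <= 1 + max|ci|/|cn| (i < n), so trying
    0, +-1, +-2, ... up to that bound is a terminating procedure."""
    def p(x):
        return sum(c * x ** k for k, c in enumerate(coeffs))
    if len(coeffs) == 1:
        return coeffs[0] == 0        # constant polynomial
    bound = 1 + max(abs(c) for c in coeffs[:-1]) // abs(coeffs[-1])
    return any(p(x) == 0 for x in range(-bound, bound + 1))
```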

Ways to describe Turing machines

- As we have seen before, we can specify the design of a machine (FA, PDA) formally or informally. Ditto for TMs.
  - An informal description still describes the implementation of the machine, just less formally.
- With a TM, we can actually go up an additional level of informality:
  - We don't need to describe the machine in terms of tape heads and the like.
  - We can describe it algorithmically.
- We will describe TMs ...
  - informally at the implementation level, or
  - algorithmically.

Turing machine terminology

For the algorithmic level:

- The input to a TM is a string.
- Other objects (graphs, lists, etc.) must be encoded as strings.
- We let ⟨O⟩ denote the encoding of object O.
- We implicitly assume that the TM checks its input to make sure it follows the proper encoding, rejecting the input if the encoding is not proper.
Example: Algorithmic level

- Let A be the language of all strings representing undirected graphs that are connected, i.e., in which any node can be reached from any other node:

      A = { ⟨G⟩ : G is a connected undirected graph }

- Give a high-level description of the TM M for deciding A.

Example: Algorithmic level (cont'd)

Algorithmic description of M:

    Function connected(⟨G⟩)
    parameter: the encoding ⟨G⟩ of an undirected graph G = (V, E)
    return   : accept if G is connected, reject otherwise
    select and mark the first node in V;
    repeat
        foreach node v ∈ V do
            if ∃ edge (v, w) with previously-marked w ∈ V then mark v;
        end
    until no new nodes are marked;
    foreach v ∈ V do
        if v is not marked then return(reject);
    end
    return(accept);
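The algorithmic description above, rendered directly in Python (the vertex-list/edge-set representation is an illustrative stand-in for the encoding ⟨G⟩):

```python
def connected(vertices, edges):
    """Mark the first node, repeatedly mark any node adjacent to a
    marked node, and accept iff every node ends up marked."""
    if not vertices:
        return True                 # the empty graph is trivially connected
    marked = {vertices[0]}          # select and mark the first node
    while True:
        newly = {v for v in vertices if v not in marked
                 and any((v, w) in edges or (w, v) in edges for w in marked)}
        if not newly:
            break                   # no new nodes were marked
        marked |= newly
    return marked == set(vertices)  # any unmarked node remaining means reject
```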
