
3IA Theory of Computation (CS)

1. Obtain the following grammar in CNF:

S→ABC
A→aC | D
B→bB | ε| A
C→Ac | ε | Cc
D→aa

Solution:

The given grammar is converted into Chomsky Normal Form (CNF) using the following
steps:

Step 1: Eliminate ε-productions:

The nullable variables (those that can derive ε) are B and C, because of the productions:

B→ε
C→ε

To eliminate them, for every production whose right-hand side contains B or C we add copies with every possible combination of the nullable occurrences removed. From S→ABC we add S→AC, S→AB and S→A; from A→aC we add A→a; from B→bB we add B→b; and from C→Cc we add C→c.

This gives us:

S→ABC | AC | AB | A
A→aC | a | D
B→bB | b | A
C→Ac | Cc | c
D→aa

Step 2: Eliminate unit productions:

The unit productions (a single non-terminal deriving a single non-terminal) are now:

S→A
A→D
B→A

Each unit production is replaced by the non-unit productions of the variable on its right-hand side, following chains such as B→A→D. Since D→aa, the non-unit productions of A become aC | a | aa, and S and B inherit them as well. This gives us:

S→ABC | AC | AB | aC | a | aa
A→aC | a | aa
B→bB | b | aC | a | aa
C→Ac | Cc | c
D→aa

(D is no longer reachable from S and may be dropped.)
Step 3: Convert remaining productions to CNF:

The productions that are still not in CNF are those whose right-hand sides are longer than two symbols or mix terminals with non-terminals. We introduce new variables so that every right-hand side is either a single terminal or a pair of non-terminals:

1. For the terminals, introduce X2→a, X3→b and X4→c.
2. For S→ABC, introduce X1→BC, so that S→AX1.
3. Replace aC by X2C, bB by X3B, Ac by AX4, Cc by CX4, and aa by X2X2.

The grammar in CNF is then:

S→AX1 | AC | AB | X2C | a | X2X2
X1→BC
A→X2C | a | X2X2
B→X3B | b | X2C | a | X2X2
C→AX4 | CX4 | c
X2→a
X3→b
X4→c
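
As a quick cross-check on Step 1, the set of nullable variables can be computed by a simple fixed-point iteration. The following Python sketch is illustrative only and not part of the prescribed solution; the dictionary encoding of the grammar is an assumption made here for convenience.

# Illustrative sketch: compute the nullable (ε-deriving) variables of a CFG.
# Assumed encoding: {variable: [right-hand sides as strings]}, "" denotes ε,
# terminals are lowercase letters.
def nullable_variables(grammar):
    nullable = set()
    changed = True
    while changed:
        changed = False
        for var, rhss in grammar.items():
            if var in nullable:
                continue
            for rhs in rhss:
                # A variable is nullable if some right-hand side is ε or
                # consists only of variables already known to be nullable.
                if all(sym in nullable for sym in rhs):
                    nullable.add(var)
                    changed = True
                    break
    return nullable

g1 = {  # the grammar of Question 1
    "S": ["ABC"],
    "A": ["aC", "D"],
    "B": ["bB", "", "A"],
    "C": ["Ac", "", "Cc"],
    "D": ["aa"],
}
print(nullable_variables(g1))   # {'B', 'C'}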

2. Eliminate ε, unit and useless production from the following grammar and
put the resulting grammar into CNF
S→aA | aBB
A→aAA | ε
B→bB | bbC
C→B

Solution:

To eliminate ε, unit, and useless productions and put the grammar into CNF, we follow these
steps:

Step 1: Eliminate ε-productions:

The only production that derives the empty string (ε) is:

A→ε

so A is the only nullable variable. For every production with A on the right-hand side we add copies with A removed: from S→aA we add S→a, and from A→aAA we add A→aA (one occurrence removed) and A→a (both removed).

This results in:

S→aA | aBB | a
A→aAA | aA | a
B→bB | bbC
C→B

Step 2: Eliminate unit productions:

The unit production in the grammar is:

C→B

We eliminate it by giving C the productions of B, which yields C→bB | bbC. This gives us:

S→aA | aBB | a
A→aAA | aA | a
B→bB | bbC
C→bB | bbC

Step 3: Eliminate useless productions:

A production is useless if its left-hand side non-terminal cannot be reached from the start symbol or cannot derive any string of terminals.

Here S and A are generating (S→a and A→a), but B and C are not: every production of B and of C contains B or C again, so neither can ever derive a terminal string. (The generating and reachable variables can be computed mechanically; a short sketch is given at the end of this answer.) We therefore remove every production that mentions B or C, including S→aBB, after which all remaining symbols are reachable:

S→aA | a
A→aAA | aA | a

Step 4: Convert remaining productions to CNF:

Finally, we convert the remaining productions into CNF by introducing a variable for the terminal and splitting the long right-hand side:

1. Introduce X1→a, so S→aA becomes S→X1A and A→aA becomes A→X1A.
2. For A→aAA, introduce X2→AA, so that A→X1X2.

The resulting grammar in CNF is:

S→X1A | a
A→X1X2 | X1A | a
X1→a
X2→AA
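
Step 3 above turns on which variables are generating (can derive some terminal string) and which are reachable from S. Both sets can be computed mechanically; the sketch below is illustrative Python under the same assumed dictionary encoding as in the previous sketch, not part of the prescribed solution.

# Illustrative sketch: generating and reachable variables of a CFG.
# Assumed encoding: {variable: [right-hand sides as strings]}; terminals are lowercase.
def generating(grammar):
    gen = set()
    changed = True
    while changed:
        changed = False
        for var, rhss in grammar.items():
            if var in gen:
                continue
            for rhs in rhss:
                # every symbol must be a terminal or an already-generating variable
                if all(sym.islower() or sym in gen for sym in rhs):
                    gen.add(var)
                    changed = True
                    break
    return gen

def reachable(grammar, start="S"):
    reach, frontier = {start}, [start]
    while frontier:
        for rhs in grammar.get(frontier.pop(), []):
            for sym in rhs:
                if sym.isupper() and sym not in reach:
                    reach.add(sym)
                    frontier.append(sym)
    return reach

# Question 2's grammar after the ε- and unit-production steps.
g2 = {"S": ["aA", "aBB", "a"], "A": ["aAA", "aA", "a"],
      "B": ["bB", "bbC"], "C": ["bB", "bbC"]}
print(generating(g2))   # {'S', 'A'}: B and C can never terminate
print(reachable(g2))    # {'S', 'A', 'B', 'C'} before the non-generating symbols are removed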

3. Convert the following CFG to CNF.


S→aACa
A→B | a
B→C | c
C→Cc | ε

Solution:

To convert this grammar to CNF, we follow these steps:

Step 1: Eliminate ε-productions:

The only ε-production is:

C→ε

but B and A are also nullable, because B→C and A→B. For every production containing a nullable variable on the right-hand side we add copies with the nullable occurrences removed: from S→aACa we add S→aCa, S→aAa and S→aa, and from C→Cc we add C→c.

This results in:

S→aACa | aCa | aAa | aa
A→B | a
B→C | c
C→Cc | c

Step 2: Eliminate unit productions:

The unit productions are:

A→B
B→C

We eliminate B→C by giving B the non-unit productions of C (Cc | c), and then A→B by giving A the productions just obtained for B, keeping A→a. This gives us:

S→aACa | aCa | aAa | aa
A→Cc | c | a
B→Cc | c
C→Cc | c

(B is now unreachable from S and could be dropped, although the question only asks for CNF.)

Step 3: Convert remaining productions to CNF:


Now we convert the remaining productions into CNF:

1. Introduce X2→a and X3→c for the terminals.
2. For S→aACa, introduce Y1→AY2 and Y2→CX2, so that S→X2Y1.
3. For S→aCa reuse Y2, giving S→X2Y2; for S→aAa introduce Y3→AX2, giving S→X2Y3; and S→aa becomes S→X2X2.
4. Replace Cc by CX3 in the productions of A, B and C.

The resulting grammar in CNF is:

S→X2Y1 | X2Y2 | X2Y3 | X2X2
Y1→AY2
Y2→CX2
Y3→AX2
A→CX3 | c | a
B→CX3 | c
C→CX3 | c
X2→a
X3→c

4. Eliminate ε, unit and useless production from the following grammar and
put the resulting grammar into CNF
S→ABC|BaB
A→aA|BaC|aaa
B→bBb|a|D
C→CA|AC
D→ ε

Solution:

Step 1: Eliminate ε-productions:

The ε-production is:

D→ε

so D is nullable, and so is B because of B→D. For every production containing B on the right-hand side we add copies with B removed: from S→ABC we add S→AC; from S→BaB we add S→aB, S→Ba and S→a; from A→BaC we add A→aC; and from B→bBb we add B→bb. The production B→D is kept for the moment:

S→ABC | AC | BaB | aB | Ba | a
A→aA | BaC | aC | aaa
B→bBb | bb | a | D
C→CA | AC

Step 2: Eliminate unit productions:

The only unit production is B→D. Since D has no productions left after Step 1, substituting for it yields nothing, so B→D is simply removed (D becomes useless):

S→ABC | AC | BaB | aB | Ba | a
A→aA | BaC | aC | aaa
B→bBb | bb | a
C→CA | AC

Step 3: Eliminate useless productions:


D is not generating (it has no productions), and neither is C: both of its productions, CA and AC, contain C itself, so C can never derive a string of terminals. We therefore remove every production involving C or D:

S→BaB | aB | Ba | a
A→aA | aaa
B→bBb | bb | a

A can now also be eliminated, as it cannot be reached from the start symbol:

S→BaB | aB | Ba | a
B→bBb | bb | a

Step 4: Convert remaining productions to CNF:

We convert the remaining productions into CNF by introducing new variables:

1. Introduce X1→a and X2→b for the terminals.
2. For S→BaB, introduce Y1→X1B, so that S→BY1; S→aB becomes S→X1B and S→Ba becomes S→BX1.
3. For B→bBb, introduce Y2→BX2, so that B→X2Y2; B→bb becomes B→X2X2.

The resulting grammar in CNF is:

S→BY1 | X1B | BX1 | a
Y1→X1B
B→X2Y2 | X2X2 | a
Y2→BX2
X1→a
X2→b

5. Prove that context-free languages are closed under:

• Union
• Concatenation

Proof:

Union:

Let L1 and L2 be two context-free languages. There exist CFGs G1 = (V1, T1, P1, S1) and
G2 = (V2, T2, P2, S2) such that L1 = L(G1) and L2 = L(G2). We can assume without loss of
generality that the sets of variables V1 and V2 are disjoint (we can always rename variables
to achieve this).

Construct a new grammar G = (V, T, P, S) as follows:

• V = V1 ∪ V2 ∪ {S} where S is a new start symbol


• T = T1 ∪ T2
• P = P1 ∪ P2 ∪ {S → S1 | S2}
The new grammar G essentially combines the rules of G1 and G2 and adds a new start
symbol that can derive either S1 or S2. This ensures that any string derivable from either G1
or G2 is also derivable from G.

Therefore, L(G) = L1 ∪ L2, proving that context-free languages are closed under union.

Concatenation:

Using the same definitions for L1, L2, G1, and G2 as above, we can construct a new
grammar G = (V, T, P, S) for the concatenation of L1 and L2 as follows:

• V = V1 ∪ V2 ∪ {S} where S is a new start symbol


• T = T1 ∪ T2
• P = P1 ∪ P2 ∪ {S → S1S2}

This grammar generates strings by first deriving S1 (generating a string from L1), followed
by S2 (generating a string from L2). This ensures that any string formed by concatenating
strings from L1 and L2 can be generated by G.

Therefore, L(G) = L1.L2, proving that context-free languages are closed under
concatenation.
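
The two constructions above are purely syntactic, so they are easy to state as code. The sketch below is illustrative Python, not a required part of the proof; the tuple-and-dictionary encoding of a CFG and the use of single-character variable names are assumptions made here for brevity.

# Illustrative sketch of the union and concatenation constructions.
# Assumed CFG encoding: (variables, terminals, productions, start), where
# productions is {variable: [right-hand sides as strings]} and every variable
# is a single character (G1 and G2 are assumed to have disjoint variables).
def union_cfg(g1, g2, new_start="U"):
    v1, t1, p1, s1 = g1
    v2, t2, p2, s2 = g2
    prods = {**p1, **p2, new_start: [s1, s2]}      # U -> S1 | S2
    return (v1 | v2 | {new_start}, t1 | t2, prods, new_start)

def concat_cfg(g1, g2, new_start="U"):
    v1, t1, p1, s1 = g1
    v2, t2, p2, s2 = g2
    prods = {**p1, **p2, new_start: [s1 + s2]}     # U -> S1 S2
    return (v1 | v2 | {new_start}, t1 | t2, prods, new_start)

# Example: G1 generates {a^n b^n | n >= 0}, G2 generates {c^m | m >= 0}.
G1 = ({"X"}, {"a", "b"}, {"X": ["aXb", ""]}, "X")
G2 = ({"Y"}, {"c"}, {"Y": ["cY", ""]}, "Y")
print(union_cfg(G1, G2)[2]["U"])    # ['X', 'Y']: the new start derives either start symbol
print(concat_cfg(G1, G2)[2]["U"])   # ['XY']: the new start derives S1 followed by S2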

6. Prove that CFL are closed under complementation, Intersection, Union,


Concatenation and Star Closure

This statement is only partly correct. CFLs are closed under union, concatenation and star closure, but they are not closed under intersection or complementation. For example, L1 = {a^n b^n c^m} and L2 = {a^m b^n c^n} are both context-free, yet L1 ∩ L2 = {a^n b^n c^n} is not context-free; non-closure under complementation then follows from closure under union together with De Morgan's law.

Proof for closure under union and concatenation:

This proof is already provided in the answer to Question 5.

Star Closure:

Let L be a context-free language, and G = (V, T, P, S) be a CFG such that L = L(G). We can
construct a new grammar G' = (V', T, P', S') for the star closure of L as follows:

• V' = V ∪ {S'} where S' is a new start symbol


• P' = P ∪ {S' → ε | SS'}

This new grammar generates any number of repetitions of strings from L. It can derive the
empty string or recursively derive strings formed by concatenating strings from L.

Therefore, L(G') = L*, proving that context-free languages are closed under star closure.
7. Define a Turing machine. With a neat diagram, Explain the working of a
basic Turing machine

Definition:

A Turing Machine (TM) is a theoretical model of computation that manipulates symbols on


an infinite tape according to a set of rules. It can be thought of as a finite-state machine with
an infinite memory tape.

Components of a Turing Machine:

• Tape: An infinite tape divided into cells, each capable of holding a single symbol
from a finite tape alphabet.
• Tape Head: A read/write head that can scan one cell of the tape at a time. It can
read the symbol, write a new symbol, and move left or right.
• Finite Control: A finite set of states that dictates the machine's behaviour. The
current state, the symbol under the tape head, and the transition function
determine the next action of the machine.
• Transition Function: A set of rules that defines the machine's behaviour based
on the current state and the symbol under the tape head. A transition rule
specifies the next state, the symbol to write on the tape, and the direction to
move the tape head (left or right).

Diagram:
[Image of a Turing Machine with labeled components]

Working of a Basic Turing Machine:

1. The machine starts in an initial state with the input string written on the tape. The
tape head is positioned at the beginning of the input string.
2. The machine reads the symbol under the tape head and, based on its current
state and the read symbol, consults the transition function to determine its next
action.
3. The transition function specifies:
o The next state to enter.
o The symbol to write on the current tape cell (it can overwrite the existing
symbol).
o The direction to move the tape head (left or right).
4. The machine repeats steps 2 and 3 until it reaches a halting state or enters an
infinite loop.

If the machine halts in a final state, the input is considered accepted. Otherwise, it's
rejected.
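
The step cycle described above can be made concrete with a few lines of code. The following Python simulator is an illustrative sketch only (it is not part of any formal definition): the transition function is assumed to be a dictionary keyed by (state, symbol), the tape is a dictionary from cell index to symbol, and 'B' plays the role of the blank.

# Minimal single-tape Turing machine simulator (illustrative sketch).
# delta maps (state, symbol) -> (next_state, symbol_to_write, move),
# where move is "L" or "R"; a missing entry means the machine halts.
def run_tm(delta, start, accepting, tape_input, blank="B", max_steps=100_000):
    tape = dict(enumerate(tape_input))        # cell index -> symbol
    state, head = start, 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in delta:
            return state in accepting         # halted: accept iff in a final state
        state, write, move = delta[(state, symbol)]
        tape[head] = write                    # overwrite the scanned cell
        head += 1 if move == "R" else -1      # move the tape head
    raise RuntimeError("step limit reached: the machine may be looping forever")

The transition tables of the following questions can be plugged into a loop of essentially this shape.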

8. Obtain the TM to accept the language L = {w | |w| is even}, where Σ = {a, b}.

Solution:

Here is a Turing Machine (TM) that accepts the language L = {w | |w| is even} over the
alphabet {a, b}:

States:

• q0: Start state


• q1: Even state (an even number of input symbols has been read so far)
• q2: Odd state (an odd number of input symbols has been read so far)
• q3: Accepting state

Tape Alphabet:

• a, b: Input symbols
• B: Blank symbol

Transition Function:

State Symbol Next State Write Move


q0 a q2 a R
q0 b q2 b R
q0 B q3 B R
q1 a q2 a R
q1 b q2 b R
q2 a q1 a R
q2 b q1 b R
q1 B q3 B R
q2 B - - -
q3 - - - -

Explanation:

1. The TM starts in state q0. If it immediately reads a blank, the input is the empty string, whose length 0 is even, so it moves to the accepting state q3.
2. If it reads an 'a' or 'b', it transitions to state q2 (odd state), writes the same symbol back on the tape, and moves right.
3. From state q1 (even) or q2 (odd), reading another 'a' or 'b' moves the machine to the opposite state; it writes the same symbol and moves right.
4. If the TM is in state q1 (even) and reads a blank (B), it transitions to the accepting state q3.
5. If the TM is in state q2 (odd) and reads a blank, it has no defined transition, so it halts and rejects the input.

Transition Diagram:

[Image of the TM transition diagram, similar to the ones in the source]
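
As an illustrative cross-check (not part of the required answer), the table above can be executed directly and compared against the even-length condition; the dictionary form of δ below is an assumed encoding of that table.

# Cross-check of the even-length TM against len(w) % 2 == 0 (illustrative sketch).
delta = {
    ("q0", "a"): ("q2", "a", "R"), ("q0", "b"): ("q2", "b", "R"),
    ("q0", "B"): ("q3", "B", "R"),                      # empty input: length 0 is even
    ("q1", "a"): ("q2", "a", "R"), ("q1", "b"): ("q2", "b", "R"),
    ("q2", "a"): ("q1", "a", "R"), ("q2", "b"): ("q1", "b", "R"),
    ("q1", "B"): ("q3", "B", "R"),
}

def accepts(w, blank="B"):
    state, head, tape = "q0", 0, dict(enumerate(w))
    while (state, tape.get(head, blank)) in delta:
        state, write, move = delta[(state, tape.get(head, blank))]
        tape[head] = write
        head += 1 if move == "R" else -1
    return state == "q3"                                # q3 is the accepting state

for w in ["", "a", "ab", "aba", "abab", "babab", "bbbbbb"]:
    assert accepts(w) == (len(w) % 2 == 0)
print("table agrees with the even-length condition on the samples")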

9. Obtain a Turing machine to accept the language L = {0^n 1^n | n >= 1}. Draw the
transition diagram and transition table. Show the moves made by this TM for the
string 000111.

Solution:

Here is the Turing Machine (TM) for L = {0^n 1^n | n >= 1}:

States:

• q0: Start state; mark the leftmost unmarked '0' with 'X'
• q1: Scan right for the first unmarked '1'
• q2: Scan left back to the rightmost 'X'
• q3: Check that only 'Y's remain
• q4: Accepting state

Tape Alphabet:

• 0, 1: Input symbols
• X, Y: Marked symbols (X marks a matched '0', Y marks a matched '1')
• B: Blank symbol

Transition Function:

State Symbol Next State Write Move


q0 0 q1 X R
q0 Y q3 Y R
q1 0 q1 0 R
q1 Y q1 Y R
q1 1 q2 Y L
q2 0 q2 0 L
q2 Y q2 Y L
q2 X q0 X R
q3 Y q3 Y R
q3 B q4 B R
q4 - - - -

Explanation:

1. q0 (Start): The TM expects to see a '0'. It replaces the '0' with 'X' (to mark it), moves right, and enters state q1. If instead it sees a 'Y', all '0's have already been matched, so it enters the checking state q3.
2. q1 (Scan right for '1'): The TM moves right over '0's and 'Y's looking for the first unmarked '1'. When it finds one, it replaces it with 'Y' and moves left into state q2. If it reaches a blank first, there is a '0' with no matching '1' and the input is rejected.
3. q2 (Scan left): The TM moves left over '0's and 'Y's until it reaches the rightmost 'X', then moves right and re-enters q0 to mark the next '0'.
4. q3 (Check): The TM scans right over the remaining 'Y's. If it reaches a blank, every '0' has been matched with exactly one '1', so it enters the accepting state q4. Any leftover '0' or '1' leaves the machine with no move, so the input is rejected.

Transition Diagram:

[Image of the TM transition diagram, similar to the ones in the source]

Moves for string 000111:

Step ID Explanation
1 q0000111 Initial ID
2 Xq100111 Replace first '0' with 'X', move right
3 X0q10111 Scan right in q1
4 X00q1111 Scan right in q1
5 X0q20Y11 Replace '1' with 'Y', move left
6 Xq200Y11 Scan left in q2
7 q2X00Y11 Scan left in q2
8 Xq000Y11 Found 'X', move right into q0
9 XXq10Y11 Replace '0' with 'X', move right
10 XX0q1Y11 Scan right in q1
11 XX0Yq111 Scan right in q1
12 XX0q2YY1 Replace '1' with 'Y', move left
13 XXq20YY1 Scan left in q2
14 Xq2X0YY1 Scan left in q2
15 XXq00YY1 Found 'X', move right into q0
16 XXXq1YY1 Replace '0' with 'X', move right
17 XXXYq1Y1 Scan right in q1
18 XXXYYq11 Scan right in q1
19 XXXYq2YY Replace '1' with 'Y', move left
20 XXXq2YYY Scan left in q2
21 XXq2XYYY Scan left in q2
22 XXXq0YYY Found 'X', move right into q0
23 XXXYq3YY 'Y' seen in q0: all '0's marked, start the check phase
24 XXXYYq3Y Scan right in q3
25 XXXYYYq3B Scan right in q3, reach blank
26 XXXYYYBq4 Input accepted
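
The trace above can be reproduced mechanically. The short Python sketch below is illustrative only; it prints the same sequence of IDs by writing the current state immediately to the left of the scanned cell.

# Illustrative sketch: print the IDs of the 0^n 1^n machine on input 000111.
delta = {
    ("q0", "0"): ("q1", "X", "R"), ("q0", "Y"): ("q3", "Y", "R"),
    ("q1", "0"): ("q1", "0", "R"), ("q1", "Y"): ("q1", "Y", "R"),
    ("q1", "1"): ("q2", "Y", "L"),
    ("q2", "0"): ("q2", "0", "L"), ("q2", "Y"): ("q2", "Y", "L"),
    ("q2", "X"): ("q0", "X", "R"),
    ("q3", "Y"): ("q3", "Y", "R"), ("q3", "B"): ("q4", "B", "R"),
}

def trace(w, start="q0", accept="q4", blank="B"):
    tape, state, head = list(w), start, 0
    while True:
        if head == len(tape):
            tape.append(blank)                 # extend the tape with a blank
        print("".join(tape[:head]) + state + "".join(tape[head:]))
        if state == accept:
            return True
        key = (state, tape[head])
        if key not in delta:
            return False                       # no move defined: reject
        state, write, move = delta[key]
        tape[head] = write
        head += 1 if move == "R" else -1       # the head never moves left of cell 0 here

trace("000111")   # prints the 26 IDs listed above, ending in q4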
10. Obtain the TM to accept the language containing a string of 0’s &1’s
ending with 011.

Solution:

Here's the TM that accepts strings of 0's and 1's ending in "011":

States:

• q0: Start state


• q1: Found '0'
• q2: Found '01'
• q3: Found '011' (the input read so far ends in 011)
• q4: Accepting (halt) state

Tape Alphabet:

• 0, 1: Input symbols
• B: Blank symbol

Transition Function:

State Symbol Next State Write Move


q0 0 q1 0 R
q0 1 q0 1 R
q1 0 q1 0 R
q1 1 q2 1 R
q2 0 q1 0 R
q2 1 q3 1 R
q3 0 q1 0 R
q3 1 q0 1 R
q3 B q4 B R
q4 - - - -

Explanation:

1. q0 (Start): Scans right, looking for a '0'. If a '1' is encountered, it stays in q0.
2. q1 (Found '0'): When a '0' is found, the TM moves to q1. It scans right, staying in q1 if another '0' is read, or moving to q2 if a '1' is read.
3. q2 (Found '01'): Having read '01', the TM expects a '1'. If it finds a '1', it moves to q3; if it finds a '0' instead, the only useful suffix is that '0', so it goes back to q1.
4. q3 (Found '011'): The input read so far ends in '011'. The TM keeps scanning: another '0' sends it back to q1, and a '1' sends it back to q0. If it reads a blank in this state, the whole input ends in '011', so it moves to the accepting state q4 and halts. Reading a blank in q0, q1 or q2 leaves no move defined, so such inputs are rejected.

Transition Diagram:
[Image of the TM transition diagram, similar to the ones in the source]
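
Since this machine only ever moves right, it behaves like a DFA for the suffix '011'; an illustrative cross-check against Python's endswith is sketched below (the dictionary δ is an assumed encoding of the table above).

# Illustrative cross-check: the TM accepts exactly the strings ending in "011".
delta = {
    ("q0", "0"): ("q1", "0", "R"), ("q0", "1"): ("q0", "1", "R"),
    ("q1", "0"): ("q1", "0", "R"), ("q1", "1"): ("q2", "1", "R"),
    ("q2", "0"): ("q1", "0", "R"), ("q2", "1"): ("q3", "1", "R"),
    ("q3", "0"): ("q1", "0", "R"), ("q3", "1"): ("q0", "1", "R"),
    ("q3", "B"): ("q4", "B", "R"),
}

def accepts(w, blank="B"):
    state, head, tape = "q0", 0, dict(enumerate(w))
    while (state, tape.get(head, blank)) in delta:
        state, _, _ = delta[(state, tape.get(head, blank))]
        head += 1                  # every move of this machine rewrites the same symbol and goes right
    return state == "q4"

for w in ["011", "0011", "1011", "010", "0110", "11", "", "000111011"]:
    assert accepts(w) == w.endswith("011")
print("table agrees with endswith('011') on the samples")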

11. Design a Turing machine to accept the language L = {a^n b^n c^n | n >= 1}. Draw the
transition diagram and transition table. Show the moves made by this TM for the
string aabbcc.

Solution:

States:

• q0: Start state; mark the leftmost unmarked 'a' with 'X'
• q1: Scan right for the first unmarked 'b'
• q2: Scan right for the first unmarked 'c'
• q3: Return left to the rightmost 'X'
• q4: Check that only 'Y's and 'Z's remain
• q5: Accepting state

Tape Alphabet:

• a, b, c: Input symbols
• X, Y, Z: Marked symbols
• B: Blank symbol

Transition Function:

State Symbol Next State Write Move


q0 a q1 X R
q0 Y q4 Y R
q1 a q1 a R
q1 Y q1 Y R
q1 b q2 Y R
q2 b q2 b R
q2 Z q2 Z R
q2 c q3 Z L
q3 a q3 a L
q3 b q3 b L
q3 Y q3 Y L
q3 Z q3 Z L
q3 X q0 X R
q4 Y q4 Y R
q4 Z q4 Z R
q4 B q5 B R
q5 - - - -

Explanation:

1. q0 (Start): The TM looks for an unmarked 'a', replaces it with 'X', moves right, and enters q1. If it sees a 'Y' instead, every 'a' has already been matched, so it enters the checking state q4. Any other symbol leaves no move defined and the input is rejected.
2. q1 (Scan for 'b'): The TM moves right over 'a's and 'Y's until it finds an unmarked 'b', replaces it with 'Y', and enters q2. If it meets a 'c' or a blank first, the input is rejected.
3. q2 (Scan for 'c'): The TM moves right over 'b's and 'Z's until it finds an unmarked 'c', replaces it with 'Z', and moves left into q3. If it meets a blank first, the input is rejected.
4. q3 (Return): The TM moves left over 'a', 'b', 'Y' and 'Z' symbols until it reaches the rightmost 'X', then moves right and re-enters q0 to match the next 'a'.
5. q4 (Check): The TM scans right over the remaining 'Y's and 'Z's. If it reaches a blank, every 'a' was matched with exactly one 'b' and one 'c', so it enters the accepting state q5. Any leftover 'a', 'b' or 'c' causes a reject.

Transition Diagram:

[Image of the TM transition diagram, similar to the ones in the source]

Moves for string aabbcc:

Step ID Explanation
1 q0aabbcc Initial ID
2 Xq1abbcc Replace 'a' with 'X', move right
3 Xaq1bbcc Scan right in q1
4 XaYq2bcc Replace 'b' with 'Y', move right
5 XaYbq2cc Scan right in q2
6 XaYq3bZc Replace 'c' with 'Z', move left
7 Xaq3YbZc Scan left in q3
8 Xq3aYbZc Scan left in q3
9 q3XaYbZc Scan left in q3
10 Xq0aYbZc Found 'X', move right into q0
11 XXq1YbZc Replace 'a' with 'X', move right
12 XXYq1bZc Scan right in q1
13 XXYYq2Zc Replace 'b' with 'Y', move right
14 XXYYZq2c Scan right in q2
15 XXYYq3ZZ Replace 'c' with 'Z', move left
16 XXYq3YZZ Scan left in q3
17 XXq3YYZZ Scan left in q3
18 Xq3XYYZZ Scan left in q3
19 XXq0YYZZ Found 'X', move right into q0
20 XXYq4YZZ 'Y' seen in q0: all 'a's marked, start the check phase
21 XXYYq4ZZ Scan right in q4
22 XXYYZq4Z Scan right in q4
23 XXYYZZq4B Scan right in q4, reach blank
24 XXYYZZBq5 Input accepted
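
As a final illustrative check (not part of the required answer), the table can be run over all short strings over {a, b, c} and compared with the defining condition of L; the dictionary encoding below is an assumption of the sketch.

# Illustrative cross-check of the a^n b^n c^n machine against the definition of L.
from itertools import product

delta = {
    ("q0", "a"): ("q1", "X", "R"), ("q0", "Y"): ("q4", "Y", "R"),
    ("q1", "a"): ("q1", "a", "R"), ("q1", "Y"): ("q1", "Y", "R"),
    ("q1", "b"): ("q2", "Y", "R"),
    ("q2", "b"): ("q2", "b", "R"), ("q2", "Z"): ("q2", "Z", "R"),
    ("q2", "c"): ("q3", "Z", "L"),
    ("q3", "a"): ("q3", "a", "L"), ("q3", "b"): ("q3", "b", "L"),
    ("q3", "Y"): ("q3", "Y", "L"), ("q3", "Z"): ("q3", "Z", "L"),
    ("q3", "X"): ("q0", "X", "R"),
    ("q4", "Y"): ("q4", "Y", "R"), ("q4", "Z"): ("q4", "Z", "R"),
    ("q4", "B"): ("q5", "B", "R"),
}

def accepts(w, blank="B"):
    state, head, tape = "q0", 0, dict(enumerate(w))
    while (state, tape.get(head, blank)) in delta:
        state, write, move = delta[(state, tape.get(head, blank))]
        tape[head] = write
        head += 1 if move == "R" else -1
    return state == "q5"

def in_L(w):                       # w = a^n b^n c^n for some n >= 1
    n = w.count("a")
    return n >= 1 and w == "a" * n + "b" * n + "c" * n

for length in range(7):
    for letters in product("abc", repeat=length):
        w = "".join(letters)
        assert accepts(w) == in_L(w)
print("table agrees with the definition of L on all strings up to length 6")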

12. Prove HALTTM = {(M, w) |The Turing machine M halts on input w} is


undecidable.

Proof by Contradiction:

Assume that HALTTM is decidable. Then, there exists a Turing Machine H that decides HALTTM.
H would take as input the encoding of a TM M and an input string w, and it would halt and:

• Accept if M halts on w.
• Reject if M does not halt on w.

Now, let's construct a new Turing Machine D using H as a subroutine:

Turing Machine D:

1. Input: Encoding of a TM M.
2. Simulate H on input (M, M): This means running H with the encoding of M as both
the TM and the input string.
3. If H accepts, loop forever.
4. If H rejects, halt and accept.

Analysis:

• If M halts on input M, then H accepts (M, M). This causes D to loop forever.
• If M does not halt on input M, then H rejects (M, M). This causes D to halt and
accept.

Now, consider running D on input D itself:

• If D halts on input D, then according to its construction, it must have entered


an infinite loop (step 3). This is a contradiction.
• If D does not halt on input D, then according to its construction, it must have
halted and accepted (step 4). This is also a contradiction.

We have arrived at a contradiction in both cases. This contradiction arose from the
assumption that HALTTM is decidable. Therefore, HALTTM must be undecidable.
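
The self-referential construction at the heart of this proof is often easier to see in program form. The sketch below is purely illustrative: the function halts cannot actually be implemented (that impossibility is exactly what the proof establishes), and treating a program's source text as its "encoding" is an assumption of the illustration.

# Illustrative only: suppose, for contradiction, that a decider for HALT existed.
def halts(machine_description: str, input_string: str) -> bool:
    """Hypothetical decider H: True iff the described machine halts on the input.
    No such function can exist; it is assumed here only to expose the contradiction."""
    raise NotImplementedError

def D(machine_description: str):
    # The machine D of the proof: ask H about the machine run on its own description.
    if halts(machine_description, machine_description):
        while True:        # H says "halts"  ->  D deliberately loops forever
            pass
    return "accept"        # H says "does not halt"  ->  D halts and accepts

# Feeding D its own description yields the contradiction:
#   D halts on <D>  =>  halts(<D>, <D>) is True   =>  D loops on <D>
#   D loops on <D>  =>  halts(<D>, <D>) is False  =>  D halts on <D>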
13. If L and its complement L̄ are both recursively enumerable, show that L and L̄ are
recursive.

Proof:

Let L be a language and L̄ its complement. If both L and L̄ are recursively enumerable
(RE), there exist Turing Machines M1 and M2 such that:

• M1 accepts L (halts and accepts if the input is in L, and may loop forever otherwise).
• M2 accepts L̄ (halts and accepts if the input is in L̄, and may loop forever otherwise).

To prove that L and L̄ are recursive, we need to show that there exist Turing Machines that
decide them (always halt and either accept or reject). We can construct a Turing Machine M
that decides L as follows:

Turing Machine M:

1. Input: A string w.
2. Run M1 and M2 in parallel: This means simulating both M1 and M2 on input w
simultaneously, using techniques like multi-tape Turing Machines or interleaving
their steps.
3. If M1 halts and accepts, halt and accept.
4. If M2 halts and accepts, halt and reject.

Analysis:

• Since either w ∈ L or w ∈ L̄ (but not both), one of the two machines, M1 or M2, is guaranteed to halt and accept.
• M halts and accepts if and only if M1 halts and accepts, meaning w ∈ L.
• M halts and rejects if and only if M2 halts and accepts, meaning w ∈ L̄.
• Therefore, M decides L, meaning L is recursive.

Since the complement of a recursive language is also recursive, L̄ is recursive as well.
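
The phrase "run M1 and M2 in parallel" means dovetailing: alternate one step of each machine until one of them accepts. The sketch below illustrates this with Python generators standing in for step-by-step simulations; the generator interface (yield None while running, yield True on acceptance) is an assumption of the illustration, not a standard construction.

# Illustrative sketch of dovetailing two semi-deciders.
# Each machine is modelled as a generator that yields None after every step
# while it is still running and yields True once it accepts.
def decide(w, m1_steps, m2_steps):
    run1, run2 = m1_steps(w), m2_steps(w)
    while True:
        if next(run1) is True:     # M1 accepts  =>  w is in L
            return True
        if next(run2) is True:     # M2 accepts  =>  w is in the complement of L
            return False

# Toy example: L = strings of even length (both L and its complement are RE).
def m1_steps(w):                   # "accepts" even-length strings after |w| steps
    for _ in w:
        yield None
    while True:
        yield True if len(w) % 2 == 0 else None

def m2_steps(w):                   # "accepts" odd-length strings after |w| steps
    for _ in w:
        yield None
    while True:
        yield True if len(w) % 2 == 1 else None

print(decide("abab", m1_steps, m2_steps))   # True  (accepted: even length)
print(decide("aba", m1_steps, m2_steps))    # False (rejected: odd length)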

Question 14: Prove that Ld is not Recursively Enumerable?

Ld, the diagonalisation language, is not recursively enumerable. This means that there is
no Turing Machine that can list out all the strings in Ld. Here's why:

• Definition of Ld: Ld is defined as the set of strings wi such that wi is not in the
language accepted by the i-th Turing Machine Mi. In essence, Ld is constructed by
diagonalising over all Turing Machines.
• Proof by Contradiction: Assume Ld is recursively enumerable. This means there
exists a Turing Machine Md that enumerates all strings in Ld.
• Since Md is a Turing Machine, it has an index, say j, in the enumeration of all Turing
Machines. So, Md is equivalent to Mj.
• Now, consider the string wj (the j-th string in the enumeration). We have two
possibilities:
o Case 1: wj ∈ L(Mj): If wj is in the language accepted by Mj (which is the
same as Md), then by the definition of Ld, wj should not be in Ld. But this
contradicts the assumption that Md enumerates all strings in Ld.
o Case 2: wj ∉ L(Mj): If wj is not in the language accepted by Mj, then it
should be in Ld. But this again contradicts the fact that Md is supposed to
enumerate all strings in Ld.
• Conclusion: Both cases lead to a contradiction, meaning our initial assumption that
Ld is recursively enumerable must be false. Therefore, Ld is not recursively
enumerable.

Question 15: What is a multitape Turing machine? Also prove that every
language accepted by a multitape Turing Machine is recursively
enumerable.

A multitape Turing Machine is a variant of the standard Turing Machine that has
multiple tapes, each with its own independent read/write head. This allows for more
complex computations by providing additional storage and computational space.

Proof that languages accepted by multitape Turing Machines are recursively


enumerable:

1. Simulation: A multitape Turing Machine can be simulated by a standard Turing


Machine with multiple tracks. This simulation is done by interleaving the contents of
the multiple tapes onto different tracks of a single tape.
2. Track Representation: The simulation uses 2k tracks to represent k tapes. Even-
numbered tracks hold the content of the corresponding tape, and odd-numbered tracks
mark the position of the head for that tape.
3. Head Markers: The simulating Turing Machine keeps track of the positions of the
multiple heads by using special symbols or markers on the odd-numbered tracks.
4. Simulation Steps: To simulate a single move of the multitape machine, the
simulating machine needs to:
o Scan all the tracks to find the head markers.
o Based on the current state and the symbols under the head markers,
determine the next state and the actions to perform on each tape.
o Update the contents of the tracks accordingly and move the head
markers.
5. Recursive Enumeration: Since any multitape Turing Machine can be simulated by a
standard Turing Machine and all languages accepted by standard Turing Machines are
recursively enumerable, it follows that any language accepted by a multitape Turing
Machine is also recursively enumerable.
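
The 2k-track encoding used in step 2 can be visualised with a small sketch. The code below only shows the data representation, one tuple of 2k track symbols per cell of the single simulating tape, and is an illustrative assumption rather than a formal construction.

# Illustrative sketch of the 2k-track encoding of k tapes on one tape.
# Track 2i holds the contents of tape i; track 2i+1 holds a head marker:
# "*" if tape i's head is on this cell, "-" otherwise.
def encode_multitape(tapes, heads, blank="B"):
    width = max(len(t) for t in tapes)
    single_tape = []
    for cell in range(width):
        tracks = []
        for i, tape in enumerate(tapes):
            tracks.append(tape[cell] if cell < len(tape) else blank)  # content track
            tracks.append("*" if heads[i] == cell else "-")           # head-marker track
        single_tape.append(tuple(tracks))
    return single_tape

# Two tapes: "01" with its head on cell 0, and "ab" with its head on cell 1.
for cell in encode_multitape(["01", "ab"], [0, 1]):
    print(cell)
# ('0', '*', 'a', '-')
# ('1', '-', 'b', '*')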
Question 16: Explain the concepts of decidability and undecidability.

Decidability:

• A problem is decidable if there exists an algorithm that can always determine


whether the answer to the problem is "yes" or "no" for any given input.
• This algorithm is guaranteed to halt in a finite amount of time and provide a
definite answer.
• Decidable problems are sometimes referred to as solvable problems.
• Example: Determining if a given number is prime is a decidable problem
because there are algorithms that can definitively determine primality.

Undecidability:

• A problem is undecidable if there is no algorithm that can always determine the


answer ("yes" or "no") for all possible inputs.
• For some inputs, the algorithm may run forever without providing an answer.
• Undecidable problems are inherently unsolvable, meaning there is no general
procedure that can solve them for all cases.
• Example: The Halting Problem is a classic example of an undecidable problem.
The Halting Problem asks whether a given Turing Machine will halt (stop) or run
forever for a given input. It has been proven that there is no algorithm that can
solve the Halting Problem for all possible Turing Machines and inputs.

Key Differences:

• Decidable problems have algorithms that always halt and provide a


definitive answer.
• Undecidable problems have no such algorithm that works for all inputs.
Question 17: Prove that a multitape Turing Machine is equivalent to the
basic Turing Machine?

A multitape Turing Machine is equivalent to a basic Turing Machine in terms of


computational power. This means that for every multitape Turing Machine, there exists a
basic Turing Machine that can simulate it, and vice versa.

Proof:

1. Multitape to Basic: A multitape Turing Machine can be simulated by a basic


Turing Machine with multiple tracks, as explained in the answer to Question 15.
2. Basic to Multitape: A basic Turing Machine can be seen as a special case of a
multitape Turing Machine with only one tape.

Therefore, multitape and basic Turing Machines can simulate each other, making them
computationally equivalent. This means they can decide the same set of problems and
compute the same functions.

Question 18: Explain about the codes of a Turing Machine with an example?

Every Turing Machine can be represented by a unique code, enabling us to treat Turing
Machines as data that can be input to other Turing Machines. This concept is
fundamental for constructing universal Turing Machines and proving the undecidability of
certain problems.

Encoding Scheme:

1. States: Assign a unique binary code to each state in the Turing Machine.
2. Tape Symbols: Assign a unique binary code to each tape symbol.
3. Transitions: Encode each transition as a sequence of binary codes representing:
o The current state.
o The input symbol.
o The next state.
o The output symbol.
o The direction of head movement (Left or Right).
4. Concatenation: Concatenate the codes for all transitions using a separator
symbol (e.g., "111").

Example:

Consider the following Turing Machine:

M = ({q1, q2, q3}, {0, 1}, {0, 1, B}, δ, q1, B, {q2})

where the transition function δ is:

δ(q1, 0) = (q3, 0, R)
δ(q1, 1) = (q2, 0, L)
δ(q3, 0) = (q1, 1, R)
δ(q3, 1) = (q2, 0, R)
δ(q3, B) = (q3, 1, L)

Let's encode this Turing Machine:

• States:
o q1: 01
o q2: 00
o q3: 10
• Tape Symbols:
o 0: 000
o 1: 001
o B: 010
• Directions:
o L: 011
o R: 100
• Transitions (current state, read symbol, next state, written symbol, direction):
o δ(q1, 0) = (q3, 0, R): 0100010000100
o δ(q1, 1) = (q2, 0, L): 0100100000011
o δ(q3, 0) = (q1, 1, R): 1000001001100
o δ(q3, 1) = (q2, 0, R): 1000100000100
o δ(q3, B) = (q3, 1, L): 1001010001011
• Complete Code: The code for the entire Turing Machine M is the concatenation of the five transition codes, separated by "111". Since every transition code is exactly 13 bits long, the string can be parsed back unambiguously (spaces are shown here only for readability):
• 0100010000100 111 0100100000011 111 1000001001100 111 1000100000100 111 1001010001011

This code uniquely represents the Turing Machine M.
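
Because the scheme is entirely mechanical, it is easy to express as code. The sketch below simply reproduces the codes chosen in this example; they are one possible convention, not a universal standard.

# Illustrative sketch of the encoding scheme used in this example.
STATE  = {"q1": "01", "q2": "00", "q3": "10"}
SYMBOL = {"0": "000", "1": "001", "B": "010"}
MOVE   = {"L": "011", "R": "100"}

def encode_transition(cur, read, nxt, write, move):
    # fixed-width fields: 2 + 3 + 2 + 3 + 3 = 13 bits per transition
    return STATE[cur] + SYMBOL[read] + STATE[nxt] + SYMBOL[write] + MOVE[move]

def encode_machine(transitions, separator="111"):
    return separator.join(encode_transition(*t) for t in transitions)

delta = [
    ("q1", "0", "q3", "0", "R"),
    ("q1", "1", "q2", "0", "L"),
    ("q3", "0", "q1", "1", "R"),
    ("q3", "1", "q2", "0", "R"),
    ("q3", "B", "q3", "1", "L"),
]
print(encode_transition(*delta[0]))   # 0100010000100
print(encode_machine(delta))          # the complete 77-bit code of M (without the readability spaces)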

Question 19: If MN is a non-deterministic Turing Machine, then there is a


deterministic Turing Machine MD such that L(MN) = L(MD)

This statement is true. Non-deterministic Turing Machines (NTMs) and deterministic


Turing Machines (DTMs) are equivalent in terms of the languages they can accept.

Simulation of NTM by DTM:

A DTM can simulate an NTM by systematically exploring all possible computational paths
of the NTM.

1. Breadth-First Search: The DTM can use a breadth-first search approach to


explore the NTM's computation tree. Each node in the tree represents a
configuration of the NTM.
2. Queue of IDs: The DTM maintains a queue of instantaneous descriptions (IDs)
representing the configurations of the NTM. It starts with the initial ID of the NTM
and iteratively dequeues an ID and explores all possible next configurations
based on the NTM's transition function.
3. Acceptance: If any of the explored configurations lead to an accepting state of
the NTM, the DTM accepts the input.
4. Rejection: If the DTM exhausts all possible paths without finding an accepting
configuration, it rejects the input.

Therefore, for any NTM, there exists a DTM that simulates it and accepts the same
language.

Question 20: Write about the simulation of an NTM by a DTM

The simulation of a non-deterministic Turing Machine (NTM) by a deterministic Turing


Machine (DTM) is a fundamental concept in computability theory. It demonstrates that non-
determinism does not add computational power in terms of the languages that can be
accepted.

Simulation Process:

1. Representation of NTM Configurations: The DTM needs a way to represent the


multiple possible configurations the NTM can be in. This is typically done using a
queue data structure that stores instantaneous descriptions (IDs) of the NTM. An ID
represents the current state, the tape contents, and the position of the head(s).
2. Initialisation: The DTM initialises the queue with the starting ID of the NTM.
3. Iterative Exploration: The DTM repeatedly performs the following steps:
o Dequeue: Remove an ID from the front of the queue.
o Expansion: For the dequeued ID, generate all possible next IDs based on
the NTM's transition function. Since an NTM can have multiple transitions
for a given state and input symbol, this step expands the computation
tree.
o Enqueue: Add the newly generated IDs to the rear of the queue.
4. Acceptance and Rejection:
o Acceptance: If the DTM encounters an ID that represents an accepting
configuration of the NTM (i.e., the NTM is in an accepting state), then the
DTM halts and accepts the input.
o Rejection: If the queue becomes empty, meaning the DTM has explored
all possible paths without finding an accepting configuration, the DTM
halts and rejects the input.

Essentially, the DTM systematically explores all possible branches of the NTM's
computation tree in a breadth-first manner. While an NTM "guesses" the correct path, the
DTM deterministically simulates all paths until an accepting one is found, or all paths are
exhausted.
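
The queue-driven search described above can be sketched directly. The following Python is illustrative only: the NTM is assumed to be given as a dictionary mapping (state, symbol) to a list of possible moves, and the cap on the number of explored IDs is an artefact of the sketch (a faithful simulation may run forever on inputs the NTM does not accept).

# Illustrative sketch: breadth-first simulation of an NTM by a deterministic program.
from collections import deque

def ntm_accepts(delta, start, accepting, w, blank="B", max_ids=100_000):
    start_id = (start, tuple(w) if w else (blank,), 0)   # ID = (state, tape, head)
    queue, seen = deque([start_id]), {start_id}
    while queue and len(seen) < max_ids:
        state, tape, head = queue.popleft()              # dequeue an ID
        if state in accepting:
            return True                                  # some branch accepts
        for nxt, write, move in delta.get((state, tape[head]), []):
            new_tape = list(tape)
            new_tape[head] = write
            new_head = head + (1 if move == "R" else -1)
            if new_head < 0:                             # grow the tape with blanks
                new_tape.insert(0, blank)
                new_head = 0
            elif new_head == len(new_tape):
                new_tape.append(blank)
            new_id = (nxt, tuple(new_tape), new_head)
            if new_id not in seen:                       # enqueue unseen successor IDs
                seen.add(new_id)
                queue.append(new_id)
    return False                     # every explored branch halted without accepting

# Toy NTM: nondeterministically decides whether to accept on reading an 'a'.
delta = {
    ("p", "a"): [("p", "a", "R"), ("yes", "a", "R")],
    ("p", "b"): [("p", "b", "R")],
}
print(ntm_accepts(delta, "p", {"yes"}, "bba"))   # True
print(ntm_accepts(delta, "p", {"yes"}, "bbb"))   # False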

Question 21: Write about the organisation of a universal Turing Machine

A universal Turing Machine (UTM) is a Turing Machine that can simulate any other
Turing Machine given its description as input. It's a powerful concept that demonstrates the
existence of a single machine capable of executing any algorithm.

Organisation of a UTM:
1. Input: The UTM takes as input:
o Encoded Turing Machine: A description of the Turing Machine to be
simulated, typically encoded as a string using a predetermined scheme.
o Input for the Simulated Machine: The input string that the simulated
Turing Machine should process.
2. Tapes: A UTM typically utilises multiple tapes to manage the simulation:
o Input Tape: Holds the encoded description of the Turing Machine to be
simulated.
o Simulation Tape: Represents the tape of the simulated Turing Machine.
o State Tape: Stores the current state of the simulated Turing Machine.
3. Finite Control: The finite control of the UTM is responsible for:
o Decoding: Reading the encoded description of the simulated Turing
Machine from the input tape.
o Simulation: Fetching the current state and the symbol under the
simulated head from the state tape and the simulation tape, respectively.
o Lookup: Using the decoded transition function of the simulated machine,
looking up the appropriate action to take.
o Execution: Updating the simulation tape, the state tape, and the head
positions based on the decoded transition.
4. Operation: The UTM operates in a cycle:
o Read the current state and input symbol from the state and simulation
tapes.
o Decode the next action from the encoded Turing Machine description.
o Execute the action by updating the tapes and head positions.
o Repeat until the simulated Turing Machine halts (either accepting or
rejecting).
5. Output: The UTM outputs the result of the simulated Turing Machine's computation
(accept or reject).

Significance: The UTM shows that a single Turing Machine can be programmed to execute
any algorithm, making it a foundational concept in theoretical computer science. It's a
powerful tool for understanding the limits of computation.

Question 22: What is a multitape Turing Machine? Also prove that every
language accepted by a multitape Turing Machine is recursively
enumerable.

This question is a repeat of Question 15. Please refer to the answer provided for Question 15.
Question 23: Obtain a Turing Machine for the given language L = {WW^R | W ∈ (a+b)*},
i.e. to accept the even-length palindromes over {a, b}.

This Turing Machine (TM) will determine whether the string on its tape is an even-length
palindrome (a string of the form WW^R) over the alphabet {a, b}.

Informal Description:

The TM operates by comparing the first and last symbols of the input string, then moving
inwards and comparing the next pair, and so on.

1. Remember the First Symbol: Read the leftmost unmatched symbol, remember it in the finite control, and erase it by writing the marker '_'.
2. Scan to the End: Move the head right until a blank (B) or a '_' is reached, then step back left onto the last unmatched symbol.
3. Compare Symbols: If that symbol matches the remembered one, erase it as well (write '_'); if it does not match, reject the input.
4. Move Inwards: Move the head left over the unmatched symbols until the erased region at the beginning is reached, then step right onto the next unmatched symbol.
5. Repeat Comparison: Repeat steps 1-4 until no unmatched symbols remain.
6. Accept: If all symbols have been matched in pairs, accept. If a single unmatched symbol is left in the middle (odd length), the string is not of the form WW^R, so reject.

Formal Definition:

Let the TM be M = (Q, Σ, Γ, δ, q0, B, F), where:

• Q = {q0, q1, q2, q3, q4, q5, q_accept, q_reject}
• Σ = {a, b}
• Γ = {a, b, B, _}, where '_' marks a cell whose symbol has already been matched
• q0 is the start state and B is the blank symbol
• F = {q_accept}

The transition function δ is defined as follows:

1. Read and remember the leftmost unmatched symbol (q1 remembers 'a', q2 remembers 'b'):
o δ(q0, a) = (q1, _, R)
o δ(q0, b) = (q2, _, R)
o δ(q0, B) = (q_accept, B, R) // no unmatched symbols remain: accept
o δ(q0, _) = (q_accept, _, R) // no unmatched symbols remain: accept
2. Scan right to the end of the unmatched region:
o δ(q1, a) = (q1, a, R), δ(q1, b) = (q1, b, R)
o δ(q2, a) = (q2, a, R), δ(q2, b) = (q2, b, R)
o δ(q1, B) = (q3, B, L), δ(q1, _) = (q3, _, L)
o δ(q2, B) = (q4, B, L), δ(q2, _) = (q4, _, L)
3. Compare the rightmost unmatched symbol with the remembered one:
o δ(q3, a) = (q5, _, L) // a pair of 'a's matched
o δ(q4, b) = (q5, _, L) // a pair of 'b's matched
o δ(q3, b) = (q_reject, b, R), δ(q4, a) = (q_reject, a, R) // mismatch
o δ(q3, _) = (q_reject, _, R), δ(q4, _) = (q_reject, _, R) // odd length: the middle symbol has no partner
4. Return to the left end:
o δ(q5, a) = (q5, a, L), δ(q5, b) = (q5, b, L)
o δ(q5, _) = (q0, _, R), δ(q5, B) = (q0, B, R)

This Turing Machine accepts exactly the strings of the form WW^R (the even-length
palindromes) over the alphabet {a, b} and rejects every other string.
