
CS6503 THEORY OF COMPUTATION

UNIT – III
PUSH DOWN AUTOMATA
PART - A
1. Define Pushdown Automata. (Apr/May 10, May/June 16)
A pushdown automaton M is a system (Q, Σ, Γ, δ, q0, Z0, F) where
Q is a finite set of states.
Σ is an alphabet called the input alphabet.
Γ is an alphabet called the stack alphabet.
q0 in Q is the initial state.
Z0 in Γ is the start symbol of the stack.
F ⊆ Q is the set of final states.
δ is a mapping from Q × (Σ ∪ {ε}) × Γ to finite subsets of Q × Γ*.
2. Specify the two types of moves in PDA. (Apr/ May 12)
The move dependent on the input symbol ‘a’ scanned is:
δ(q, a, Z) = {(p1, γ1), (p2, γ2), …, (pm, γm)}, where q and the pi are states, a is in Σ, Z is
a stack symbol and each γi is in Γ*. The PDA in state q, with input symbol a and Z the top
symbol on the stack, enters state pi and replaces the symbol Z by the string γi.
The move independent of the input symbol (ε-move) is:
δ(q, ε, Z) = {(p1, γ1), (p2, γ2), …, (pm, γm)}, i.e. the PDA in state q,
independent of the input symbol being scanned and with Z the top symbol on the stack, enters
a state pi and replaces Z by γi.
3. What are the different types of language acceptances by a PDA and define them?
For a PDA M = (Q, Σ, Γ, δ, q0, Z0, F) we define:
(i) Language accepted by final state, L(M), as:
{ w | (q0, w, Z0) ├* (p, ε, γ) for some p in F and γ in Γ* }.
(ii) Language accepted by empty (null) stack, N(M), as:
{ w | (q0, w, Z0) ├* (p, ε, ε) for some p in Q }.
4. Is it true that the languages accepted by a PDA by empty stack and by final state are different
languages?
No, because the languages accepted by PDA’s by final state are exactly the languages
accepted by PDA’s by empty stack.
5. Define Deterministic PDA. [Nov/ Dec 16]
A PDA M = (Q, Σ, Γ, δ, q0, Z0, F) is deterministic if:
(i) For each q in Q and Z in Γ, whenever δ(q, ε, Z) is nonempty then δ(q, a, Z) is empty
for all a in Σ; and (ii) for no q in Q, Z in Γ, and a in Σ ∪ {ε} does δ(q, a, Z) contain more than one
element. (Eg): The PDA accepting {wcwR | w in (0+1)*}.
6. Define Instantaneous description (ID) in PDA.
An ID describes the configuration of a PDA at a given instant. An ID is a triple (q, w, γ),
where q is a state, w is a string of input symbols and γ is a string of stack symbols. If
M = (Q, Σ, Γ, δ, q0, Z0, F) is a PDA, we say that (q, aw, Zα) ├ (p, w, βα) if δ(q, a, Z)
contains (p, β); here ‘a’ may be ε or an input symbol. Example: (q1, BG) is in δ(q1, 0, G),
so that (q1, 011, GGR) ├ (q1, 11, BGGR).

7. What is the significance of PDA?


Finite automata can model regular languages but cannot be used to represent non-regular
languages. Thus, to model a context free language, a pushdown automaton is used.


8. When is a string accepted by a PDA?


The input string is accepted by the PDA if either:
o The final state is reached (acceptance by final state), or
o The stack is empty after the input is consumed (acceptance by empty stack).
9. Give examples of languages handled by PDA.(Apr/ May 10)
(1) L = {a^n b^n | n >= 0}: here n is unbounded, hence counting cannot be done with finite
memory. So we require a PDA, a machine that can count without limit.
(2) L = {wwR | w ∈ {a, b}*}: to handle this language we need unlimited matching
capability, which the stack provides.
10. Is NPDA (Nondeterministic PDA) and DPDA (Deterministic PDA) equivalent?
The languages accepted by NPDA and DPDA are not equivalent.
For example: wwR is accepted by NPDA and not by any DPDA.
11. State the equivalence of acceptance by final state and empty stack.
• If L = L(M2) for some PDA M2, then L = N(M1) for some PDA M1.
• If L = N(M1) for some PDA M1, then L = L(M2) for some PDA M2,
where L(M) = language accepted by the PDA by reaching a final state, and
N(M) = language accepted by the PDA by empty stack.
12. Construct a PDA that accepts the language generated by the grammar (Nov/ Dec12)
S -> aSbb / abb
Solution: The PDA A = ({q}, {a, b}, {S, a, b}, δ, q, S, ф), where δ is:
i) δ(q, ε, S) = {(q, aSbb), (q, abb)}
ii) δ(q, a, a) = {(q, ε)}
iii) δ(q, b, b) = {(q, ε)}
13. Construct a PDA that accepts the language generated by the grammar
S-> aABB, A-> aB / a, B ->bA / b
Solution: The PDA A = ({q}, {a, b}, {S, A, B, a, b}, δ, q, S, ф), where δ is:
i) δ(q, ε, S) = {(q, aABB)}
ii) δ(q, ε, A) = {(q, aB), (q, a)}
iii) δ(q, ε, B) = {(q, bA), (q, b)}
iv) δ(q, a, a) = {(q, ε)}
v) δ(q, b, b) = {(q, ε)}
14. Is it true that NPDA is more powerful than DPDA? Justify your answer.
Yes, NPDA is more powerful than DPDA. Every DPDA can be simulated by an NPDA, but there are
languages, for example {wwR | w in (0+1)*}, that are accepted by an NPDA and by no DPDA.
15. State the pumping lemma for CFLs.(Apr/ May 12,14)
Let L be any CFL. Then there is a constant n, depending only on L, such that if z is in
L and |z| >= n, then z can be written as z = uvwxy such that:
(i) |vx| >= 1,
(ii) |vwx| <= n, and
(iii) for all i >= 0, u v^i w x^i y is in L.
16. What is the main application of pumping lemma in CFLs?
The pumping lemma can be used to prove that a variety of languages are not context free.
Some examples are: L1 = {a^i b^i c^i | i >= 1} is not a CFL; L2 = {a^i b^j c^i d^j | i >= 1 and j >= 1} is not a
CFL.

PART - B

1. Explain about Pushdown Automata [Apr /May 10, 12, 13 May/June 16]
A pushdown automaton (PDA) is essentially a finite automaton with control of both an
input tape and a stack on which it can store a string of stack symbols. With the help of the stack
a pushdown automaton can remember an unbounded amount of information.

Model of PDA
• The PDA consists of a finite set of states, a finite set of input symbols and a finite set of
pushdown symbols.
• The finite control has control of both the input tape and the pushdown store.
• In one transition of the PDA,
o The control head reads an input symbol and moves to a new state, and
o replaces the symbol at the top of the stack by any string.
Definition of PDA:
A PDA is defined by a seven-tuple
P = (Q, Σ, Γ, δ, q0, Z0, F)
Where, Q – A finite set of states.
Σ – A finite set of input symbols.
Γ – A finite set of stack symbols.
δ – The transition function. Formally, δ takes as argument a triple (q, a, X),
Where ‘q’ is a state in Q,
‘a’ is either an input symbol in Σ or a = ε, and
‘X’ is a stack symbol that is a member of Γ.
q0 – The start state.
Z0 – The start symbol of the stack.
F – The set of accepting states or final states.
Ex: Mathematical model of a PDA for the language L = {wwR | w is in (0+1)*}: the PDA for
L can be described as P = ({q0, q1, q2}, {0, 1}, {0, 1, Z0}, δ, q0, Z0, {q2}), where δ is defined by
the following rules; δ(q0, 0, Z0) = {(q0, 0Z0)} and δ(q0, 1, Z0) = {(q0, 1Z0)}. One of these rules
applies initially, when we are in state q0 and we see the start symbol Z0 at the top of the stack.
We read the first input, and push it onto the stack, leaving Z0 below to mark the bottom.
1. δ(q0, 0, 0) = {(q0, 00)}, δ(q0, 0, 1) = {(q0, 01)}, δ(q0, 1, 0) = {(q0, 10)} and δ(q0, 1, 1) =
{(q0, 11)}. These four similar rules allow us to stay in state q0 and read inputs, pushing
each onto the top of the stack and leaving the previous top stack symbol alone.
2. δ (q0, ε, Z0) = {(q1, Z0)}, δ(q0, ε, 0) = {(q1, 0)}, and δ(q0, ε, 1) = {(q1, 1)}. These three
rules allow P to go from state q0 to state q1 spontaneously (on ε input), leaving intact
whatever symbol is at the top of the stack.

3. δ(q1, 0, 0) = {(q1, ε)}, and δ(q1, 1, 1) = {(q1, ε)}. Now in state q1 we can match input
symbols against the top symbols on the stack, and pop when the symbols match.
4. δ (q1, ε, Z0) = {(q2, Z0)}. Finally, if we expose the bottom-of-stack marker Z0 and we are
in state q1, then we have found an input of the form wwR. We go to state q2 and accept.
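To make the nondeterministic move relation concrete, the following sketch (added for illustration, not part of the original notes) simulates this PDA in Python. The dictionary encoding of δ and the breadth-first search over instantaneous descriptions are assumptions about one reasonable implementation; acceptance is by final state with the input exhausted.

from collections import deque

# Assumed encoding: delta[(state, input_or_None, stack_top)] is a set of
# (new_state, pushed_symbols); None marks an epsilon-move, and the pushed tuple
# replaces the popped top, its first element becoming the new top of the stack.
delta = {}
def add(q, a, X, p, push):
    delta.setdefault((q, a, X), set()).add((p, tuple(push)))

for a in "01":
    for X in ("0", "1", "Z0"):
        add("q0", a, X, "q0", [a, X])   # initial rules and rule 1: push the input symbol
        add("q0", None, X, "q1", [X])   # rule 2: guess the middle of wwR
    add("q1", a, a, "q1", [])           # rule 3: match against the stack top and pop
add("q1", None, "Z0", "q2", ["Z0"])     # rule 4: bottom marker exposed, accept

def accepts(w, start="q0", bottom="Z0", finals=("q2",)):
    """Breadth-first search over instantaneous descriptions (state, unread input, stack)."""
    queue, seen = deque([(start, w, (bottom,))]), set()
    while queue:
        q, rest, stack = queue.popleft()
        if (q, rest, stack) in seen:
            continue
        seen.add((q, rest, stack))
        if rest == "" and q in finals:
            return True                 # accepted by final state
        if not stack:
            continue
        top, below = stack[0], stack[1:]
        moves = [(None, m) for m in delta.get((q, None, top), ())]
        if rest:
            moves += [(rest[0], m) for m in delta.get((q, rest[0], top), ())]
        for a, (p, push) in moves:
            queue.append((p, rest if a is None else rest[1:], push + below))
    return False

print(accepts("0110"))   # True:  0110 = wwR with w = 01
print(accepts("0011"))   # False: 0011 is not of the form wwR

Because every pushing move consumes one input symbol, the set of reachable instantaneous descriptions is finite, so the search terminates.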
A Graphical Notation for PDAs:
Sometimes a diagram, generalizing the transition diagram of a finite automaton, will make
aspects of the behavior of a given PDA clearer. In a transition diagram for a PDA,
(a) The nodes correspond to the states of the PDA.
(b) An arrow labeled Start indicates the start state, and doubly circled states are accepting, as
for finite automata.
(c) An arc labeled a, X/α from state ‘q’ to state ‘p’ means that δ(q, a, X) contains the pair
(p, α).

Instantaneous Descriptions(ID) of a PDA:


The ID is defined as a triple (q, w, γ), where,
q – Current state
w – String of input symbols
γ – String of stack symbols
Let P = (Q, Σ, Γ, δ, q0, Z0, F) be a PDA. Suppose δ(q, a, X) contains (p, α). Then for all
strings ‘w’ in Σ* and β in Γ*: (q, aw, Xβ) ├ (p, w, αβ).
Ex.1: Check whether the following PDA accepts the input strings 001010c010100 and 001010c011100.
The PDA can be described as P = ({q1, q2}, {0, 1, c}, {R, B, G}, δ, q1, R, {q2}), where δ is,
δ(q1, 0, R) = (q1, BR) δ(q1, c, R) = (q2, R)
δ(q1, 1, R) = (q1, GR) δ(q1, c, B) = (q2, B)
δ(q1, 0, B) = (q1, BB) δ(q1, c, G) = (q2, G)
δ(q1, 1, B) = (q1, GB) δ(q2, 0, B) = (q2, ε)
δ(q1, 0, G) = (q1, BG) δ(q2, 1, G) = (q2, ε)
δ(q1, 1, G) = (q1, GG) δ(q2, ε, R) = (q2, ε)
Check whether the string is accepted or not?
Soln: w1 = 001010c010100
(q1, 001010c010100, R) ├ (q1, 01010c010100, BR)
├ (q1, 1010c010100, BBR)
├ (q1, 010c010100, GBBR)
├ (q1, 10c010100, BGBBR)
├ (q1, 0c010100, GBGBBR)
├ (q1, c010100, BGBGBBR)
├ (q2, 010100, BGBGBBR)
├ (q2, 10100, GBGBBR)

├ (q2, 0100, BGBBR)
├ (q2, 100, GBBR)
├ (q2, 00, BBR)
├ (q2, 0, BR)
├ (q2, ε, R)
├ (q2, ε, ε)
⇒ The string is accepted.
w2 = 001010c011100
(q1, 001010c011100, R) ├ (q1, 01010c011100, BR)
├ (q1, 1010c011100, BBR)
├ (q1, 010c011100, GBBR)
├ (q1, 10c011100, BGBBR)
├ (q1, 0c011100, GBGBBR)
├ (q1, c011100, BGBGBBR)
├ (q2, 011100, BGBGBBR)
├ (q2, 11100, GBGBBR)
├ (q2, 1100, BGBBR)
⇒ There is no transition for (q2, 1, B), so the string is not accepted.
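The hand computation above can be reproduced mechanically. The sketch below is an added illustration (not from the notes): it encodes δ as a Python dictionary (an assumed representation, with "eps" standing for the ε-move on the bottom marker R) and prints each instantaneous description of this deterministic machine.

# delta maps (state, input symbol, stack top) to (new state, string pushed in
# place of the popped top); "eps" marks the epsilon-move on R.
delta = {
    ("q1", "0", "R"): ("q1", "BR"), ("q1", "1", "R"): ("q1", "GR"),
    ("q1", "0", "B"): ("q1", "BB"), ("q1", "1", "B"): ("q1", "GB"),
    ("q1", "0", "G"): ("q1", "BG"), ("q1", "1", "G"): ("q1", "GG"),
    ("q1", "c", "R"): ("q2", "R"),  ("q1", "c", "B"): ("q2", "B"),
    ("q1", "c", "G"): ("q2", "G"),
    ("q2", "0", "B"): ("q2", ""),   ("q2", "1", "G"): ("q2", ""),
    ("q2", "eps", "R"): ("q2", ""),
}

def trace(w, state="q1", stack="R"):
    """Print the sequence of IDs and report whether the input is accepted."""
    print(f"({state}, {w}, {stack})")
    while True:
        if stack == "":
            return w == ""                      # accepted only if the input is also exhausted
        key = (state, w[0], stack[0]) if w else (state, "eps", stack[0])
        if key not in delta:
            key = (state, "eps", stack[0])      # fall back to the epsilon-move
        if key not in delta:
            return False                        # no applicable move: rejected
        state, push = delta[key]
        if key[1] != "eps":
            w = w[1:]                           # one input symbol is consumed
        stack = push + stack[1:]                # the popped top is replaced by 'push'
        print(f"|- ({state}, {w or 'eps'}, {stack or 'eps'})")

print(trace("001010c010100"))   # reproduces the computation for w1; prints True
print(trace("001010c011100"))   # stops at (q2, 1100, BGBBR); prints False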
LANGUAGES OF A PUSH DOWN AUTOMATA [May/June 16]
There are two ways for a PDA to accept a string,
(a) Accept by final state that is, reach the final state from the start state.
(b) Accept by an empty stack that is, after consuming input, the stack is empty and current
state could be a final state or non-final state.
Both methods are equivalent. One method can be converted to another method and vice
versa.
Acceptance by final state:
Let M = (Q, Σ, Γ, δ, q0, Z0, F) be a PDA. The language accepted by final state is
defined as, L(M) = {w | (q0, w, Z0) ├* (q, ε, α), where q ∈ F and α ∈ Γ*}.
It means that, starting from the start state q0 and after scanning the input string ‘w’, the PDA
enters a final state ‘q’ with the input exhausted. Here the contents of the stack are irrelevant.
Acceptance by Empty Stack:
For each PDA P = (Q, Σ, Γ, δ, q0, Z0, F), the language accepted by empty stack can be
defined as, N(P) = {w | (q0, w, Z0) ├* (q, ε, ε), where q ∈ Q}.
It means that when the string ‘w’ is accepted by empty stack, the final state is
irrelevant; the input should be exhausted and the stack should also be empty.
2. From Empty Stack to Final State: [Nov / Dec 2014]
Theorem:
If L = N (PN) for some PDA PN = (Q, Σ, Γ, δ, q0, Z0), then there is a PDA PF such that L = L (PF).
Proof:
Initially, change the stack content from Z0 to Z0X0. So consider a new stack start symbol
X0 for the PDA PF. We also need a new start state P0, which is the initial state of PF; its job is to push Z0,
the start symbol of PN, onto the top of the stack and enter state q0. Finally, we need another new
state Pf, which is the accepting state of PF.
The specification of PF is as follows:
PF = (Q ∪ {P0, Pf}, Σ, Γ ∪ {X0}, δF, P0, X0, {Pf})
where δF is defined by,
1. δF(P0, ε, X0) = {(q0, Z0X0)}. In its start state, PF makes a spontaneous transition to the
start state of PN, pushing its start symbol Z0 onto the stack.
2. For all states ‘q’ in Q, inputs ‘a’ in Σ or a = ε, and stack symbols Y in Γ, δF (q, a, Y) = δN
(q, a, Y).
3. δF(q, ε, X0) = {(Pf, ε)} for every state ‘q’ in Q.
We must show that ‘w’ is in L (PF) if and only if ‘w’ is in N (PN). The moves of the PDA
PF to accept a string ‘w’ can be written as,
(P0, w, X0) ├ (q0, w, Z0X0) ├* (q, ε, X0) ├ (Pf, ε, ε), where all the moves shown are moves of PF.
Thus PF accepts ‘w’ by final state.
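The construction above is mechanical, and the following Python sketch (an added illustration; the dictionary representation of a PDA with keys 'states', 'delta', 'start', 'bottom' and 'finals', where delta maps (state, input-or-None, stack symbol) to a set of (state, pushed-tuple) pairs, is an assumption, and p0, pf, X0 are the fresh names of the proof) builds PF from PN:

def empty_stack_to_final_state(PN, p0="p0", pf="pf", X0="X0"):
    """Build PF with L(PF) = N(PN); a sketch of the construction above."""
    deltaF = {key: set(moves) for key, moves in PN["delta"].items()}      # rule 2: keep PN's moves
    deltaF[(p0, None, X0)] = {(PN["start"], (PN["bottom"], X0))}          # rule 1: push Z0 above X0
    for q in PN["states"]:
        deltaF.setdefault((q, None, X0), set()).add((pf, ()))             # rule 3: X0 exposed -> accept
    return {"states": PN["states"] | {p0, pf}, "delta": deltaF,
            "start": p0, "bottom": X0, "finals": {pf}}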
3. From Final State to Empty Stack:
Theorem:
Let L be L (PF) for some PDA, PF = (Q, Σ, Γ, δ, q0, Z0, F). Then there is a PDA PN such
that L = N (PN).
Proof:
Initially, change the stack content from Z0 to Z0X0. So we also need a new start state P0, which is
the start state of PN, and a new state P in which PN pops the remaining stack contents.
The specification of PN is as follows:
PN = (Q ∪ {P0, P}, Σ, Γ ∪ {X0}, δN, P0, X0)
Where δN is defined by,
1. δN (P0, ε, X0) = {(q0, Z0X0)} to change the stack content initially.
2. δN (q, a, Y) = δF(q, a, Y), for all states ‘q’ in Q, inputs ‘a’ in Σ or a = ε, and stack symbols
Y in Γ.
3. δN(q, ε, Y) contains (P, ε), for all accepting states ‘q’ in F and all stack symbols Y in Γ ∪ {X0}.
4. δN(P, ε, Y) = {(P, ε)}, for all stack symbols Y in Γ ∪ {X0}, to pop the remaining stack
contents.
Suppose (q0, w, Z0) ├* (q, ε, α) by moves of PF, for some accepting state ‘q’ and stack string α. Then
PN can do the following:
(P0, w, X0) ├ (q0, w, Z0X0) ├* (q, ε, αX0) ├* (P, ε, ε), where all the moves shown are moves of PN.
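The reverse construction can be sketched the same way (added illustration, same assumed PDA representation; p0, p and X0 are the new names of the proof):

def final_state_to_empty_stack(PF, p0="p0", p="p", X0="X0"):
    """Build PN with N(PN) = L(PF); a sketch of the construction above."""
    stack_syms = {Y for (_, _, Y) in PF["delta"]} | {PF["bottom"], X0}
    deltaN = {key: set(moves) for key, moves in PF["delta"].items()}      # rule 2: keep PF's moves
    deltaN[(p0, None, X0)] = {(PF["start"], (PF["bottom"], X0))}          # rule 1: push Z0 above X0
    for Y in stack_syms:
        for q in PF["finals"]:
            deltaN.setdefault((q, None, Y), set()).add((p, ()))           # rule 3: enter the draining state
        deltaN.setdefault((p, None, Y), set()).add((p, ()))               # rule 4: pop everything left
    return {"states": PF["states"] | {p0, p}, "delta": deltaN,
            "start": p0, "bottom": X0, "finals": set()}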
Ex.1: Construct a PDA that accepts the given language, L = {x^m y^n | n < m}.
Soln: Strings accepted in L include {x, xxy, xxxy, xxxyy, xxxxyy, …}.
First find a grammar for the language. A grammar for the language can be,
S → xSy | xS | x
The corresponding PDA for the above grammar is,
P = (Q, Σ, Γ, δ, q0, Z0, F)
Where, Q = {q}
Σ = {x, y}
Γ = {S, x, y}
q0 = {q}
Z0 = {S}
F =ф
and δ is defined as,
δ (q, ε, S) = {(q, xSy), (q, xS), (q, x)}
δ (q, x, x) = (q, ε)
δ (q, y, y) = (q, ε)
To prove the string xxxyy is accepted by PDA,
(q, xxxyy, S) ├ (q, xxxyy, xSy) ├ (q, xxyy, Sy) ├ (q, xxyy, xSyy) ├ (q, xyy, Syy)
├ (q, xyy, xyy) ├ (q, yy, yy) ├ (q, y, y) ├ (q, ε, ε)
Hence the string is accepted.

Ex.2: Construct a PDA that accepts the given language, L = {0^n 1^n | n ≥ 1}.
Soln: Strings accepted in L are {01, 0011, 000111, 00001111, …}.
First find the grammar for that language. The grammar for the language can be,
S → 0S1 | 0A1
A → 01 | ε
The corresponding PDA for the above grammar is,
P = (Q, Σ, Γ, δ, q0, Z0, F)
Where, Q = {q}
Σ = {0, 1}
Γ = {S, A, 0, 1}
q0 = {q}
Z0 = {S}
F =ф
and δ is defined as,
δ (q, ε, S) = {(q, 0S1), (q, 0A1)}
δ (q, ε, A) = {(q, 01), (q, ε)}
δ (q, 0, 0) = (q, ε)
δ (q, 1, 1) = (q, ε)
To prove the string 000111 is accepted by PDA,
(q, 000111, S) ├ (q, 000111, 0S1) ├ (q, 00111, S1) ├ (q, 00111, 0S11) ├ (q, 0111, S11)
├ (q, 0111, 0A111) ├ (q, 111, A111) ├ (q, 111, 111) ├ (q, 11, 11) ├ (q, 1, 1)
├ (q, ε, ε)
Hence the string is accepted.
4. Explain about Equivalence of PDA and CFG [Nov/ Dec 2014]
From Grammars to Pushdown Automata:
It is possible to convert a CFG to PDA and vice versa.
Input: Context Free Grammar ‘G’.
Output: PDA – P that simulates the leftmost derivations of G. Stack contains all the symbols
(variables as well as terminals) of CFG.
Let G = (V, T, P, S) be a CFG. The PDA which accepts L(G) is given by,
P = ({q}, T, V ∪ T, δ, q, S, ф), where δ is defined by,
1. For each variable ‘A’, δ(q, ε, A) contains (q, β) for every production A → β
of P.
2. For each terminal ‘a’, include the transition δ(q, a, a) = {(q, ε)}.
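Because the two rules are mechanical, the construction can be written as a short function. The sketch below is an added illustration; the grammar is assumed to be a dict mapping each variable to a list of right-hand sides (tuples of symbols), and the PDA uses the same dictionary representation as the earlier sketches:

def cfg_to_pda(grammar, terminals, start):
    """Single-state PDA accepting L(G) by empty stack (rules 1 and 2 above)."""
    delta = {}
    for A, bodies in grammar.items():
        delta[("q", None, A)] = {("q", tuple(body)) for body in bodies}   # rule 1: expand a variable
    for a in terminals:
        delta[("q", a, a)] = {("q", ())}                                  # rule 2: match a terminal
    return {"states": {"q"}, "delta": delta,
            "start": "q", "bottom": start, "finals": set()}

# For the grammar of Ex.1 below, S -> aSbb | abb:
P = cfg_to_pda({"S": [("a", "S", "b", "b"), ("a", "b", "b")]}, {"a", "b"}, "S")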
Ex.1: Construct a PDA that accepts the language generated by the grammar,
S → aSbb | abb
Soln: PDA – P is defined as follows:
P = (Q, Σ, Γ, δ, q0, Z0, F)
Where, Q = {q}
Σ = {a, b}
Γ = {S, a, b}
q0 = {q}
Z0 = {S}
F =ф
and δ is defined as,
δ (q, ε, S) = {(q, aSbb), (q, abb)}
δ (q, a, a) = (q, ε) δ (q, b, b) = (q, ε)


Ex.2: Construct a PDA equivalent to the CFG,


S → aABB | aAA
A → aBB | a
B → bBB |A
Soln:
PDA – P is defined as follows:
P = (Q, Σ, Γ, δ, q0, Z0, F)
Where, Q = {q}
Σ = {a, b} Γ = {S, A, B, a, b} q0 = {q} Z0 = {S} F =ф

and δ is defined as,


δ(q, ε, S) = {(q, aABB), (q, aAA)}
δ(q, ε, A) = {(q, aBB), (q, a)}
δ(q, ε, B) = {(q, bBB), (q, A)}
δ(q, a, a) = (q, ε)
δ(q, b, b) = (q, ε)

Ex.3: Construct a PDA equivalent to the CFG,


E → I | E+E | E*E | (E)
I → a | b | Ia | Ib | I0 | I1
Soln: PDA – P is defined as follows:
P = (Q, Σ, Γ, δ, q0, Z0, F)
Where, Q = {q}
Σ = {a, b, 0, 1, +, *, (, )}
Γ = {E, I, a, b, 0, 1, +, *, (, )}
q0 = {q}
Z0 = {E}
F =ф
and δ is defined as,
δ (q, ε, E) = {(q, I), (q, E+E), (q, E*E), (q, (E))}
δ (q, ε, I) = {(q, a), (q, b), (q, Ia), (q, Ib), (q, I0), (q, I1)}
δ (q, a, a) = (q, ε)
δ (q, b, b) = (q, ε)
δ (q, 0, 0) = (q, ε)
δ (q, 1, 1) = (q, ε)
δ (q, +, +) = (q, ε)
δ (q, *, *) = (q, ε)
δ (q, (, () = (q, ε)
δ (q, ), )) = (q, ε)

From PDA’s to Grammars:
Theorem:
If L is N(M) for some PDA M, then L is a context free language.
Construction:
Let M = (Q, Σ, Γ, δ, q0, Z0, F) be the PDA. Let G = (V, T, P, S) be a CFG.
Where, - V is the set of objects of the form [q, A, p], ‘q’ and ‘p’ in Q and A in Γ.
- New symbol S.
- P is the set of productions.
The productions are,
(1) S → [q0, Z0, q], for each q in Q.
(2) If δ(q, a, A) contains (q1, B1B2 … Bm) then,
[q, A, qm+1] → a[q1, B1, q2][q2, B2, q3] … [qm, Bm, qm+1], for each ‘a’ in Σ
∪ {ε}, for A, B1, B2, …, Bm in Γ, and for every choice of states q2, q3, …, qm+1 in Q.
Proof:
If m = 0; δ (q, a, A) = (q1, ε)
[q, A, q1] → a
Let ‘x’ be the input string. We show that [q, A, p] ⇒* x iff (q, x, A) ├* (p, ε, ε).
We show by induction on ‘i’ that if (q, x, A) ├^i (p, ε, ε) then [q, A, p] ⇒* x.
Basis: when i = 1,
δ(q, x, A) contains (p, ε).
Here ‘x’ is a single input symbol or ε. Thus [q, A, p] → x is a production of G.
Induction: when i > 1, let x = ay.
(q, ay, A) ├ (q1, y, B1B2 … Bn)
The string ‘y’ can be written as y = y1y2 …..yn, where yj has the effect of popping Bj from
the stack possibly after a long sequence of moves.
Let y1 be the prefix of ‘y’ at the end of which the stack first becomes as short as n−1
symbols. Let y2 be the symbols of ‘y’ following y1 such that at the end of y2 the stack is as short
as n−2 symbols, and so on. That is,
(q1, y1y2…yn, B1B2…Bn) ├* (q2, y2y3…yn, B2B3…Bn) ├* (q3, y3y4…yn, B3B4…Bn) ├* … ,
so there exist states q2, q3, …, qn+1, where qn+1 = p, such that
(q1, y1, B1) ├* (q2, ε, ε)
(q2, y2, B2) ├* (q3, ε, ε)
…
(qj, yj, Bj) ├* (qj+1, ε, ε)


By the inductive hypothesis, in the CFG, [qj, Bj, qj+1] ⇒* yj.
The original move,
(q, ay, A) ├ (q1, y, B1B2…Bn), i.e.,
(q, ay1y2…yn, A) ├ (q1, y1y2…yn, B1B2…Bn),
corresponds in the CFG to the production
[q, A, p] → a[q1, B1, q2] [q2, B2, q3] … [qn, Bn, qn+1],
so that
[q, A, p] ⇒* ay1y2…yn, i.e.,
[q, A, p] ⇒* ay.
Hence [q, A, p] ⇒* x iff (q, x, A) ├* (p, ε, ε), where qn+1 = p.
Algorithm for getting production rules of CFG:
1. The start symbol productions are, S → [q0, Z0, q],
where q ranges over the states of the PDA,
q0 is the start state,
Z0 is the start stack symbol,
and q, q0 ∈ Q.
2. If there exists a move of the PDA, δ(q, a, Z) = {(q’, ε)}, then the production rule can be
written as, [q, Z, q’] → a.
3. If there exists a move of the PDA, δ(q, a, Z) = {(q1, Z1Z2…Zn)}, then for every choice of
states q2, q3, …, qn+1 the production rules can be written as,
[q, Z, qn+1] → a[q1, Z1, q2] [q2, Z2, q3] [q3, Z3, q4] … [qn, Zn, qn+1].
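The triple construction can likewise be generated by a short program. The sketch below is an added illustration (not part of the notes): it reuses the assumed PDA representation, writes each grammar variable [q, Z, p] as the Python tuple (q, Z, p), and does not eliminate the useless productions afterwards.

from itertools import product

def pda_to_cfg(P):
    """List the productions of the triple construction as (head, body) pairs; unreduced."""
    states, prods = sorted(P["states"]), []
    for q in states:
        prods.append(("S", ((P["start"], P["bottom"], q),)))              # rule 1
    for (q, a, Z), moves in P["delta"].items():
        prefix = (a,) if a else ()                                        # a is None for epsilon-moves
        for (r, pushed) in moves:
            if not pushed:
                prods.append(((q, Z, r), prefix))                         # rule 2: a popping move
                continue
            for mids in product(states, repeat=len(pushed)):              # rule 3: every choice of states
                chain = [r] + list(mids)
                body = tuple((chain[i], pushed[i], chain[i + 1]) for i in range(len(pushed)))
                prods.append(((q, Z, chain[-1]), prefix + body))
    return prods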
Ex.1: Construct a CFG for the PDA, P = ({q0, q1}, {0, 1}, {S, A}, δ, q0, S, {q1}), where δ is,
δ (q0, 1, S) = {(q0, AS)} δ(q0, 0, A) = {(q1, A)}
δ (q0, ε, S) = {(q0, ε)} δ(q1, 1, A) = {(q1, ε)}
δ (q0, 1, A) = {(q0, AA)} δ(q1, 0, S) = {(q0, S)} [Nov/Dec 16]
Soln: CFG, G is defined as, G = (V, T, P, S)
Where,
V = {[q0, S, q0], [q0, S, q1], [q1, S, q0], [q1, S, q1], [q0, A, q0], [q0, A, q1], [q1, A, q0], [q1, A, q1]}
T = {0, 1}
S = {S} [Start stack symbol]
To find production, P;
(1) Production for S,
S → [q0, S, q0]
S → [q0, S, q1] [q0 – Start state, S – Initial stack symbol]
(2) δ(q0, 1, S) = {(q0, AS)} we get,
For q0, [q0, S, q0] → 1[q0, A, q0] [q0, S, q0]
[q0, S, q0] → 1[q0, A, q1] [q1, S, q0]
For q1, [q0, S, q1] → 1[q0, A, q0] [q0, S, q1]
[q0, S, q1] → 1[q0, A, q1] [q1, S, q1]
(3) δ(q0, ε, S) = {(q0, ε)}
[q0, S, q0] → ε
(4) δ(q0, 1, A) = {(q0, AA)}
For q0, [q0, A, q0] → 1[q0, A, q0] [q0, A, q0]
[q0, A, q0] → 1[q0, A, q1] [q1, A, q0]
For q1, [q0, A, q1] → 1[q0, A, q0] [q0, A, q1]
[q0, A, q1] → 1[q0, A, q1] [q1, A, q1]
(5) δ(q0, 0, A) = {(q1, A)}
For q0, [q0, A, q0] → 0[q1, A, q0]
For q1, [q0, A, q1] → 0[q1, A, q1]
(6) δ(q1, 1, A) = {(q1, ε)}
[q1, A, q1] → 1
(7) δ(q1, 0, S) = {(q0, S)}
For q0, [q1, S, q0] → 0[q0, S, q0]
For q1, [q1, S, q1] → 0[q0, S, q1]
Since [q1, A, q0] does not have any productions, it and every production using it are useless; the variables
[q0, A, q0], [q0, S, q1] and [q1, S, q1] then also become useless, so we can leave all of these out.
After eliminating the unwanted productions,
S → [q0, S, q0]
[q0, S, q0] → 1[q0, A, q1] [q1, S, q0]
[q0, S, q0] → ε
[q0, A, q1] → 1[q0, A, q1] [q1, A, q1]
[q0, A, q1] → 0[q1, A, q1]
[q1, A, q1] → 1
[q1, S, q0] → 0[q0, S, q0]
Finally P is given by,
S → [q0, S, q0]
[q0, S, q0] → 1[q0, A, q1] [q1, S, q0] | ε
[q0, A, q1] → 1[q0, A, q1] [q1, A, q1] | 0[q1, A, q1]
[q1, A, q1] → 1
[q1, S, q0] → 0[q0, S, q0]
Ex.2: Construct a CFG for the PDA, P = ({q0, q1}, {0, 1}, {X, Z0}, δ, q0, Z0, {q1}), where δ is,
δ(q0, 0, Z0) = {(q0, XZ0)} δ(q1, 1, X) = {(q1, ε)}
δ(q0, 0, X) = {(q0, XX)} δ(q1, ε, X) = {(q1, ε)}
δ(q0, 1, X) = {(q1, ε)} δ(q1, ε, Z0) = {(q1, ε)}
Soln: CFG, G is defined as, G = (V, T, P, S)
Where, V = {[q0, X, q0], [q0, X, q1], [q1, X, q0], [q1, X, q1], [q0, Z0, q0], [q0, Z0, q1],
[q1, Z0, q0], [q1, Z0, q1]}
T = {0, 1} S = {S} [Start stack symbol]
To find production, P;
(1) Production for S,
S → [q0, Z0, q0]
S → [q0, Z0, q1] [q0 – Start state, Z0 – Initial stack symbol]
(2) δ(q0, 0, Z0) = {(q0, XZ0)} we get,
For q0, [q0, Z0, q0] → 0[q0, X, q0] [q0, Z0, q0]
[q0, Z0, q0] → 0[q0, X, q1] [q1, Z0, q0]
For q1, [q0, Z0, q1] → 0[q0, X, q0] [q0, Z0, q1]
[q0, Z0, q1] → 0[q0, X, q1] [q1, Z0, q1]
(3) δ(q0, 0, X) = {(q0, XX)}
For q0, [q0, X, q0] → 0[q0, X, q0] [q0, X, q0]
[q0, X, q0] → 0[q0, X, q1] [q1, X, q0]
For q1, [q0, X, q1] → 0[q0, X, q0] [q0, X, q1]
[q0, X, q1] → 0[q0, X, q1] [q1, X, q1]
(4) δ(q0, 1, X) = {(q1, ε)}
[q0, X, q1] → 1
(5) δ(q1, 1, X) = {(q1, ε)}
[q1, X, q1] → 1
(6) δ(q1, ε, X) = {(q1, ε)}
[q1, X, q1] → ε
(7) δ(q1, ε, Z0) = {(q1, ε)}
[q1, Z0, q1] → ε
After eliminating the unwanted productions, we get;
S → [q0, Z0, q1]
[q0, Z0, q1] → 0[q0, X, q1] [q1, Z0, q1]
[q0, X, q1] → 0[q0, X, q1] [q1, X, q1]
[q0, X, q1] → 1
[q1, X, q1] → 1
[q1, X, q1] → ε
[q1, Z0, q1] → ε
Finally P is given by,
S → [q0, Z0, q1]
[q0, Z0, q1] → 0[q0, X, q1] [q1, Z0, q1]
[q0, X, q1] → 0[q0, X, q1] [q1, X, q1] | 1
[q1, X, q1] → 1 | ε
[q1, Z0, q1] → ε
Ex.3: Construct a CFG for the PDA, P = ({q0, q1}, {a, b}, {Z, Z0}, δ, q0, Z0, {q1}), where δ is,
δ(q0, b, Z0) = {(q0, ZZ0)} δ(q0, ε, Z0) = {(q0, ε)}
δ(q0, b, Z) = {(q0, ZZ)} δ(q0, a, Z) = {(q1, Z)}
δ(q1, b, Z) = {(q1, ε)} δ(q1, a, Z0) = {(q0, Z0)}
Soln: CFG, G is defined as, G = (V, T, P, S) Where,
V = {[q0, Z0, q0], [q0, Z0, q1], [q1, Z0, q0], [q1, Z0, q1], [q0, Z, q0], [q0, Z, q1],
[q1, Z, q0], [q1, Z, q1]}
T = {a, b}
S = {S} [Start stack symbol]
To find production, P;
(1) Production for S,
S → [q0, Z0, q0]
S → [q0, Z0, q1] [q0 – Start state, Z0 – Initial stack symbol]
(2) δ(q0, b, Z0) = {(q0, ZZ0)} we get,

For q0, [q0, Z0, q0] → b [q0, Z, q0] [q0, Z0, q0]
[q0, Z0, q0] → b [q0, Z, q1] [q1, Z0, q0]
For q1, [q0, Z0, q1] → b [q0, Z, q0] [q0, Z0, q1]
[q0, Z0, q1] → b [q0, Z, q1] [q1, Z0, q1]
(3) δ(q0, b, Z) = {(q0, ZZ)}
For q0, [q0, Z, q0] → b[q0, Z, q0] [q0, Z, q0]
[q0, Z, q0] → b [q0, Z, q1] [q1, Z, q0]
For q1, [q0, Z, q1] → b [q0, Z, q0] [q0, Z, q1]
[q0, Z, q1] → b [q0, Z, q1] [q1, Z, q1]
(4) δ(q1, b, Z) = {(q1, ε)}
[q1, Z, q1] → b
(5) δ(q0, ε, Z0) = {(q0, ε)}
[q0, Z0, q0] → ε
(6) δ(q0, a, Z) = {(q1, Z)}
For q0, [q0, Z, q0] → a [q1, Z, q0]
For q1, [q0, Z, q1] → a [q1, Z, q1]
(7) δ(q1, a, Z0) = {(q0, Z0)}
For q0, [q1, Z0, q0] → a [q0, Z0, q0]
For q1, [q1, Z0, q1] → a [q0, Z0, q1]

After eliminating the unwanted productions, we get;
S → [q0, Z0, q0]
[q0, Z0, q0] → b [q0, Z, q1] [q1, Z0, q0]
[q0, Z0, q0] → ε
[q0, Z, q1] → b [q0, Z, q1] [q1, Z, q1]
[q0, Z, q1] → a [q1, Z, q1]
[q1, Z0, q0] → a [q0, Z0, q0]
[q1, Z, q1] → b
5. Explain about Pumping Lemma for CFL [Apr /May 10, 13, 14, Nov/Dec 12, 13, 14 May/June 16]
The pumping lemma for CFLs states that in any sufficiently long string of a CFL it is possible to find
two short substrings, close together, that can be pumped both the same number of times with the result remaining in the language.
Statement of the Pumping Lemma:
The pumping lemma for CFL’s is similar to the pumping lemma for regular languages, but we
break each string ‘z’ in the CFL L into five parts.
Theorem:
Let L be any CFL. Then there is a constant ‘n’, depending only on L, such that if ‘z’ is in L and
|z| ≥ n, then we may write z=uvwxy such that,
(1) |vx| ≥ 1
(2) |vwx| ≤ n and
(3) For all i ≥ 0, u v^i w x^i y is in L.
Proof:
If ‘z’ is in L (G) and ‘z’ is long then any parse tree for ‘z’ must contain a long path. If the parse
tree of a word generated by a Chomsky Normal Form grammar has no path of length greater than ‘i’, then
the word is of length no greater than 2^(i-1).
To prove this,
Basis:
Let i = 1: the tree is a root with a single leaf, so the word is of length 1 = 2^(1-1).
Induction:
Let i > 1: the root has two sons, the roots of subtrees T1 and T2.
If there are no paths of length greater than i−1 in the trees T1 and T2, then each tree generates a word of
length at most 2^(i-2), so the whole word is of length at most 2^(i-1). Now let G have ‘k’ variables and let n = 2^k. If ‘z’ is in L(G)
and |z| ≥ n then, since |z| > 2^(k-1), any parse tree for ‘z’ must have a path of length at least k+1.
But such a path has at least k+2 vertices. Then there must be
some variable that appears twice in the path, since there are only ‘k’ variables.
Let ‘P’ be a path that is as long as any path in the tree. Then there must be two vertices V1 and V2
on the path satisfying the following conditions:
(1) The vertices V1 and V2 both have the same label, say ‘A’.
(2) Vertex V1 is closer to the root than vertex V2.
(3) The portion of the path from V1 to the leaf is of length at most k+1.
The subtree with root V1 represents the derivation of a word of length at most 2^k, since there is no path in it
of length greater than k+1, because ‘P’ was the longest path.
Applications of the Pumping Lemma:
Pumping Lemma can be used to prove a variety of languages not to be context free. To show that
a language L is not context free, we use the following steps;
(i) Assume L is context free. Let ‘n’ be the natural number obtained by using the pumping lemma.
(ii) Choose z in L with |z| ≥ n and write z = uvwxy using the lemma.
(iii) Find a suitable integer ‘i’ such that u v^i w x^i y ∉ L. This is a contradiction, and so L is not context
free.
Ex.1: Show that L = {a^n b^n c^n | n ≥ 1} is not context free.
Soln: Assume L is context free.
L = {abc, aabbcc, aaabbbccc, …}
Take the string z = aabbcc in L and write z = uvwxy.
We pump z to derive a contradiction.
Case 1:
z = aabbcc; n = 6
Now, divide ‘z’ into uvwxy.
Let u = aa, v = b, w = b, x = ε, y = cc [ |vx| ≥ 1, |vwx| ≤ n]
Find u v^i w x^i y. When i = 2,
u v^i w x^i y = aabbbcc, and
aabbbcc ∉ L.
So L is not context free.

Case 2:
z = aabbcc; n = 6
Now, divide ‘z’ into uvwxy.
Let u = a, v = a, w = b, x = ε, y = bcc [ |vx| ≥ 1, |vwx| ≤ n]
Find u v^i w x^i y. When i = 2,
u v^i w x^i y = aaabbcc, and
aaabbcc ∉ L.
So L is not context free.
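The contradiction can also be checked mechanically. The sketch below is an added illustration: it takes the Case 1 split of z = aabbcc, pumps it, and tests membership in L directly.

def in_L(s):
    """Membership test for {a^n b^n c^n | n >= 1}."""
    n = len(s) // 3
    return n >= 1 and s == "a" * n + "b" * n + "c" * n

u, v, w, x, y = "aa", "b", "b", "", "cc"        # the Case 1 split of z = aabbcc
for i in range(4):
    pumped = u + v * i + w + x * i + y
    print(i, pumped, in_L(pumped))              # already False for i = 0 and i = 2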
Ex.2: Show that L = {0^n 1^n 2^n | n ≥ 1} is not context free. [Nov/Dec 2014]
Soln: Assume L is context free.
L = {012, 001122, 000111222, …}
Take the string z = 001122 in L and write z = uvwxy.
We pump z to derive a contradiction.
Case 1:
z = 001122; n = 6
Now, divide ‘z’ into uvwxy.
Let u = 00, v = 1, w = 1, x = ε, y = 22 [ |vx| ≥ 1, |vwx| ≤ n]
Find u v^i w x^i y. When i = 2,
u v^i w x^i y = 0011122, and
0011122 ∉ L.
So L is not context free.
Case 2:
z = 001122; n = 6
Now, divide ‘z’ into uvwxy.
Let u = 0, v = 0, w = 1, x = ε, y = 122 [ |vx| ≥ 1, |vwx| ≤ n]
Find u v^i w x^i y. When i = 2,
u v^i w x^i y = 0001122, and
0001122 ∉ L.
So L is not context free.

6. Explain about Closure Properties of CFL [Nov/Dec 13, Apr/May 12, 13]
We now consider some operations that preserve context free languages. The operations are
useful not only in constructing or proving that certain languages are context free but also in proving
certain languages not to be context free.
Closure under Union:
Theorem: Context Free Languages are closed under union.
Proof: Let ‘L1’ and ‘L2’ be CFL’s generated by CFG’s.
G1 = {V1, T1, P1, S1}, G2 = {V2, T2, P2, S2}
For L1 ∪ L2, construct grammar G3,
G3 = {V1 ∪ V2 ∪ {S3}, T1 ∪ T2, P3, S3}
Where, P3 is P1 ∪ P2 plus the production S3 → S1 | S2.
If a string ‘w’ is in L1 then the derivation S3 ⇒ S1 ⇒* w is a derivation in G3, as every production of G1
is a production of G3.
Thus L1 ⊆ L(G3).
Similarly, for a string w1 in L2, S3 ⇒ S2 ⇒* w1 is a derivation in G3, as every production of G2 is a production of G3.
Thus L2 ⊆ L(G3). Hence L(G3) = L1 ∪ L2.
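As an added illustration, the union construction can be sketched with the same dict-of-productions representation used in the CFG-to-PDA sketch (each variable maps to a list of bodies); the variable sets of G1 and G2 are assumed disjoint, renaming first if necessary:

def union_grammar(G1, S1, G2, S2, S3="S3"):
    """G3 with L(G3) = L(G1) U L(G2)."""
    G3 = {**G1, **G2}          # P1 U P2, assuming V1 and V2 are disjoint
    G3[S3] = [(S1,), (S2,)]    # the new production S3 -> S1 | S2
    return G3, S3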
Closure under Concatenation:
Theorem:
Context Free Languages are closed under concatenation.
Proof:
Let ‘L1’ and ‘L2’ be CFL’s generated by CFG’s.
G1 = {V1, T1, P1, S1}, G2 = {V2, T2, P2, S2}
For L1.L2, construct grammar G4,
G4 = {V1 ∪ V2 ∪ {S4}, T1 ∪ T2, P4, S4}
Where, P4 is P1 ∪ P2 plus the production S4 → S1S2.
L (G4) = L (G1). L (G2)
Closure under Kleene Closure:
Theorem: Context Free Languages are closed under kleene closure.
Proof: Let ‘L1’ be a CFL generated by the CFG
G1 = {V1, T1, P1, S1}
For the closure, let G5 = {V1 ∪ {S5}, T1, P5, S5}
Where, P5 is P1 plus the productions S5 → S1S5 | ε.
If a string ‘w’ is in L1 then the derivation S5 ⇒ S1S5 ⇒* wS5 ⇒ w [S5 → ε] is a derivation in G5.
L(G5) = L(G1)*
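The concatenation and Kleene closure constructions follow the same pattern (added sketch, same assumptions as the union sketch above):

def concat_grammar(G1, S1, G2, S2, S4="S4"):
    """G4 with L(G4) = L(G1).L(G2)."""
    G4 = {**G1, **G2}
    G4[S4] = [(S1, S2)]        # S4 -> S1 S2
    return G4, S4

def star_grammar(G1, S1, S5="S5"):
    """G5 with L(G5) = L(G1)*."""
    G5 = dict(G1)
    G5[S5] = [(S1, S5), ()]    # S5 -> S1 S5 | epsilon  (the empty tuple is the empty body)
    return G5, S5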
Closure under Substitution:
Theorem: Context Free Languages are closed under substitution.
Proof: Let L be a CFL, L ⊆ Σ*, and for each ‘a’ in Σ let La be a CFL. Let L be L(G) and, for each ‘a’ in
Σ, let La be L(Ga). Construct a grammar G’ as follows;
• The variables of G’ are all the variables of G and of the Ga’s.
• The terminals of G’ are the terminals of the Ga’s.
• The start symbol of G’ is the start symbol of G.
• The productions of G’ are all the productions of the Ga’s together with those productions formed
by taking a production A → α of G and substituting Sa, the start symbol of Ga, for each instance of
an ‘a’ in Σ appearing in α.
Ex: Let L be the set of words with an equal number of a’s and b’s,
La = {0^n 1^n | n ≥ 1} and Lb = {wwR | w in (0+2)*}
For G we may choose, S→aSbS | bSaS | ε.
For Ga take, Sa→0Sa1 | 01
For Gb take, Sb→0Sb0 | 2Sb2 | ε
If ‘f’ is the substitution f (a) = La and f (b) = Lb, then, f (L) is generated by grammar G’ as,

S → SaSSbS | SbSSaS | ε
Sa → 0Sa1 | 01
Sb → 0Sb0 | 2Sb2 | ε
Closure under Intersection:
Theorem: Context Free Languages are not closed under intersection.
Proof: Let L1 and L2 be CFL’s. Their intersection L1 ∩ L2 must satisfy the defining properties of both
L1 and L2 at once, and this can force a condition that no CFL can express.

Ex: Let L1 = {a^n b^n c^m | n ≥ 1, m ≥ 1} and L2 = {a^m b^n c^n | n ≥ 1, m ≥ 1}.

Both L1 and L2 are CFLs, but L = L1 ∩ L2 = {a^n b^n c^n | n ≥ 1}, which was shown above by the
pumping lemma not to be context free. L1 requires that the number of a’s equals the number of b’s,
and L2 requires that the number of b’s equals the number of c’s, so the intersection forces all three counts to be equal.
So CFL’s are not closed under intersection.
Closure under Homomorphism:
Let L be a CFL over the alphabet Σ and ‘h’ is a homomorphism on Σ. Let ‘S’ be the substitution
that replaces each symbol ‘a’ in Σ by the language consisting of one string h(a).
S (a) = {h (a)} for all ‘a’ in Σ.
Thus h (L) = S (L).
Closure under Inverse Homomorphism:
If ‘h’ is a homomorphism and ‘L’ is any language then h-1(L) is the set of strings ‘w’ such that
h(w) is in L. Thus CFL’s are closed under inverse homomorphism. The construction uses a PDA
with a buffer, which simulates the PDA for L as described below.

After reading an input symbol ‘a’, h(a) is placed in a buffer. The symbols of h(a) are used one at a time
and fed to the PDA being simulated. The simulating PDA reads a new input symbol and applies the
homomorphism only when the buffer is empty.
