ATC Module 3
Context-Free Grammars (CFG) and Pushdown Automata (PDA)
Course Outcomes-(CO)
At the end of the course student will be able to:
i. Explain core concepts in Automata and Theory of Computation.
ii. Identify different Formal language Classes and their Relationships.
iii. Design Grammars and Recognizers for different formal languages.
iv. Prove or disprove theorems in automata theory using their properties.
v. Determine the decidability and intractability of Computational problems.
Syllabus of Module 3
vi. Context-Free Grammars(CFG): Introduction to Rewrite Systems and Grammars
vii. CFGs and languages, designing CFGs,
viii. Simplifying CFGs,
ix. Proving that a Grammar is correct,
x. Derivation and Parse trees, Ambiguity,
xi. Normal Forms.
xii. Pushdown Automata (PDA): Definition of non-deterministic PDA,
xiii. Deterministic and Non-deterministic PDAs,
xiv. Non-determinism and Halting, Alternative equivalent definitions of a PDA,
xv. Alternatives that are not equivalent to PDAs.
Text Books:
1. Elaine Rich, Automata, Computability and Complexity, 1st Edition, Pearson
Education, 2012/2013. Text Book 1: Ch 11, 12: 11.1 to 11.8, 12.1 to 12.6 excluding
12.3.
2. K L P Mishra, N Chandrasekaran , 3rd Edition, Theory of Computer Science, PHI,
2012
Reference Books:
1. John E Hopcroft, Rajeev Motwani, Jeffery D Ullman, Introduction to Automata
Theory, Languages, and Computation, 3rd Edition, Pearson Education, 2013
2. Michael Sipser : Introduction to the Theory of Computation, 3rd edition, Cengage
learning,2013
3. John C Martin, Introduction to Languages and The Theory of
Computation, 3rd Edition,Tata McGraw –Hill Publishing Company Limited, 2013
4. Peter Linz, “An Introduction to Formal Languages and Automata”, 3rd Edition,
Narosa Publishers, 1998
5. Basavaraj S. Anami, Karibasappa K G, Formal Languages and Automata theory,
WileyIndia, 2012
Learning Outcomes:
At the end of the module student should be able to:
Sl.No TLO’s
1. Define context free grammars and languages
2. Design the grammar for the given context free languages.
3. Apply the simplification algorithm to simplify the given grammar
4. Prove the correctness of the grammar
5. Define leftmost derivation and rightmost derivation
6. Draw the parse tree for a string of the given grammar.
7. Define ambiguous and inherently ambiguous grammars.
8. Prove whether the given grammar is ambiguous grammar or not.
9. Define Chomsky normal form. Apply the normalization algorithm to
convert the grammar to Chomsky normal form.
10. Define Push down automata (NPDA). Design a NPDA for the given
CFG.
11. Design a DPDA for the given language.
12. Define alternative equivalent definitions of a PDA.
Using a Grammar to Derive a String
Simple-rewrite(G, S) will generate the strings in L(G). The symbol ⇒ indicates a step in a derivation.
Given: S → aSb ---- rule 1
S → ε ---- rule 2
A derivation could begin with:
S ⇒ aSb ⇒ aaSbb ⇒ aabb
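The simple-rewrite process above can be sketched in code. A minimal sketch (the function name `derive` and the rule representation are ours, not from the text): it applies a caller-chosen sequence of rules, always replacing the leftmost occurrence of the chosen rule's left-hand side.

```python
# A minimal sketch of simple-rewrite: apply a chosen sequence of rules,
# always expanding the leftmost occurrence of the rule's left-hand side.
# Nonterminals are single uppercase letters; rules are (lhs, rhs) pairs.

def derive(start, rules, choices):
    """Apply rules (indexed by `choices`) to the working string, leftmost-first."""
    working = start
    steps = [working]
    for i in choices:
        lhs, rhs = rules[i]
        pos = working.find(lhs)          # leftmost occurrence of the LHS
        if pos == -1:
            break                        # blocked: the chosen rule does not match
        working = working[:pos] + rhs + working[pos + 1:]
        steps.append(working)
    return steps

# Rules for AnBn: S -> aSb (rule 0), S -> epsilon (rule 1)
rules = [("S", "aSb"), ("S", "")]
print(" => ".join(derive("S", rules, [0, 0, 1])))
# S => aSb => aaSbb => aabb
```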
Generating Many Strings
The left-hand sides of multiple rules may match the working string.
Given: S → aSb ----- rule 1
S → bSa ----- rule 2
S → ε ----- rule 3
Derivation so far: S ⇒ aSb ⇒ aaSbb ⇒
There are three choices at the next step:
S ⇒ aSb ⇒ aaSbb ⇒ aaaSbbb (using rule 1),
S ⇒ aSb ⇒ aaSbb ⇒ aabSabb (using rule 2),
S ⇒ aSb ⇒ aaSbb ⇒ aabb (using rule 3).
The working string may also contain several nonterminals, any of which may be expanded next. With rules S → aTTb and T → bTa, starting from S ⇒ aTTb there are two choices at the next step:
S ⇒ aTTb ⇒ abTaTb ⇒ … (expanding the first T),
S ⇒ aTTb ⇒ aTbTab ⇒ … (expanding the second T).
When to Stop
Case 1: The working string no longer contains any nonterminal symbols (including when it is ε). In this case, we say that the working string is generated by the grammar.
Case 2: There are nonterminal symbols in the working string but none of them appears on the left-hand side of any rule in the grammar. In this case, we have a blocked or non-terminated derivation but no generated string.
Given: S → aSb ----- rule 1
S → bTa ----- rule 2
The derivation S ⇒ aSb ⇒ abTab is blocked, since no rule has T on its left-hand side.
Recall that a regular grammar has a left-hand side that is a single nonterminal and a right-hand side that is ε, a single terminal, or a single terminal followed by a single nonterminal.
X → Y
(NT) (ε or T or T NT)
Definition of a Context-Free Grammar
A context-free grammar G is a quadruple (V, Σ, R, S), where:
• V is the rule alphabet, which contains nonterminals and terminals,
• Σ (the set of terminals) is a subset of V,
• R (the set of rules) is a finite subset of (V - Σ) × V*,
• S (the start symbol) is an element of V - Σ.
A derivation halts whenever no rule's left-hand side matches the working string. At every step, any rule that matches may be chosen.
Recursive Grammar Rules
A grammar is recursive iff it contains at least one recursive rule. A rule is recursive iff it is of the form X → w1Yw2, where Y ⇒* w3Xw4 for some w1, w2, w3, and w4 in V*. Expanding a nonterminal according to such rules can eventually lead to a string that includes the same nonterminal again.
Example1: L = AnBn = {anbn : n ≥ 0}. Let G = ({S, a, b}, {a, b}, {S → aSb, S → ε}, S).
Some example derivations in G:
S ⇒ ε
S ⇒ aSb ⇒ ab
S ⇒ aSb ⇒ aaSbb ⇒ aaaSbbb ⇒ aaabbb
Self-Embedding Grammar Rules
A grammar is self-embedding iff it contains at least one self-embedding rule. A rule in a grammar G is self-embedding iff it is of the form X → w1Yw2, where Y ⇒* w3Xw4 and both w1w3 and w4w2 are in Σ+. No regular grammar can impose such a requirement on its strings.
Example: S → aSa is self-embedding;
S → aS is recursive but not self-embedding;
S → aT
T → Sa is self-embedding.
Example: PalEven = {wwR : w ∈ {a, b}*} = the language of even-length palindromes of a's and b's.
L = {ε, aa, bb, aaaa, abba, baab, bbbb, …}
G = ({S, a, b}, {a, b}, R, S), where:
R = { S → aSa ----- rule 1
S → bSb ----- rule 2
S → ε ----- rule 3 }.
Example derivation in G:
S ⇒ aSa ⇒ abSba ⇒ abba
Where Context-Free Grammars Get Their Power
If a grammar G is not self-embedding then L(G) is regular. If a language L has the property
that every grammar that defines it is self-embedding, then L is not regular.
Example2: A CFG for C++ compound statements:
<compound stmt> → { <stmt list> }
<stmt list> → <stmt> <stmt list> | ε
<stmt> → <compound stmt>
<stmt> → if ( <expr> ) <stmt>
<stmt> → if ( <expr> ) <stmt> else <stmt>
<stmt> → while ( <expr> ) <stmt>
<stmt> → do <stmt> while ( <expr> ) ;
<stmt> → for ( <stmt> <expr> ; <expr> ) <stmt>
<stmt> → case <expr> : <stmt>
<stmt> → switch ( <expr> ) <stmt>
<stmt> → break ; | continue ;
<stmt> → return <expr> ; | goto <id> ;
Example3: A fragment of an English grammar:
S → NP VP
NP → the Nominal | a Nominal | Nominal | ProperNoun | NP PP
Nominal → N | Adjs N
N → cat | dogs | bear | girl | chocolate | rifle
ProperNoun → Chris | Fluffy
Adjs → Adj Adjs | Adj
Adj → young | older | smart
VP → V | V NP | VP PP
V → like | likes | thinks | shoots | smells
PP → Prep NP
Prep → with
3. Designing Context-Free Grammars
If L has a property that every string in it has two regions & those regions must bear some
relationship to each other, then the two regions must be generated in tandem. Otherwise,
there is no way to enforce the necessary constraint.
Example 1: L = {anbncm : n, m ≥ 0} = {ε, ab, c, abc, abcc, aabbc, …}
The cm portion of any string in L is completely independent of the anbn portion, so we should generate the two portions separately and concatenate them together.
G = ({S, A, C, a, b, c}, {a, b, c}, R, S), where:
R = { S → AC /* generate the two independent portions
A → aAb | ε /* generate the anbn portion, from the outside in
C → cC | ε } /* generate the cm portion
Example 5: Equal numbers of a's and b's. L = {w ∈ {a, b}* : #a(w) = #b(w)}.
L = {ε, ab, ba, abba, aabb, baba, bbaa, …}
G = ({S, a, b}, {a, b}, R, S), where:
R = { S → aSb ----- rule 1
S → bSa ----- rule 2
S → SS ----- rule 3
S → ε ----- rule 4 }.
Example 6: L = {aibj : 2i = 3j + 1} = {a2b1, a5b3, a8b5, …}
Checking 2i = 3j + 1 for each aibj:
a2b1: 2·2 = 3·1 + 1 = 4
a5b3: 2·5 = 3·3 + 1 = 10
a8b5: 2·8 = 3·5 + 1 = 16
G = ({S, a, b}, {a, b}, R, S), where:
R = { S → aaaSbb | aab }
Example derivation in G for the string aaaaabbb:
S ⇒ aaaSbb ⇒ aaaaabbb
4. Simplifying Context-Free Grammars
a. Removing Unproductive Nonterminals
removeunproductive(G: CFG) =
1. G' = G.
2. Mark every nonterminal symbol in G' as unproductive.
3. Mark every terminal symbol in G' as productive.
4. Until one entire pass has been made without any new symbol being marked do:
For each rule X → α in R do:
If every symbol in α has been marked as productive and X has not yet been marked as productive then:
Mark X as productive.
5. Remove from G' every unproductive symbol.
6. Remove from G' every rule that contains an unproductive symbol.
7. Return G'.
Example: G = ({S, A, B, C, D, a, b}, {a, b}, R, S), where
R = { S → AB | AC
A → aAb | ε
B → bA
C → bCa
D → AB }
1) The terminal symbols a and b are productive.
2) A is productive (because of A → ε).
3) B is productive (because of B → bA).
4) S and D are productive (because of S → AB and D → AB).
5) C is unproductive.
On eliminating C from both LHS and RHS, the rule set R obtained is
R = { S → AB, A → aAb | ε, B → bA, D → AB }
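The marking algorithm can be sketched directly in Python (a sketch under our own conventions: nonterminals are single uppercase letters, rules are (lhs, rhs) pairs, and the function name is ours):

```python
# Sketch of removeunproductive: terminals are productive by definition; then
# repeatedly mark any nonterminal with a rule whose body is entirely productive.
# Nonterminals are uppercase letters; everything else is a terminal.

def remove_unproductive(rules):
    productive = set()
    changed = True
    while changed:                       # a pass with no new marks => stop
        changed = False
        for lhs, rhs in rules:
            if lhs not in productive and all(
                    c in productive or not c.isupper() for c in rhs):
                productive.add(lhs)
                changed = True
    # Keep only rules whose symbols are all productive.
    return [(l, r) for l, r in rules
            if l in productive and all(c in productive or not c.isupper() for c in r)]

rules = [("S", "AB"), ("S", "AC"), ("A", "aAb"), ("A", ""),
         ("B", "bA"), ("C", "bCa"), ("D", "AB")]
print(remove_unproductive(rules))
# [('S', 'AB'), ('A', 'aAb'), ('A', ''), ('B', 'bA'), ('D', 'AB')]
```

On the example grammar this drops C and the two rules that mention it, matching the result above.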
b. Removing Unreachable Nonterminals
removeunreachable(G: CFG) =
1. G' = G.
2. Mark S as reachable.
3. Mark every other nonterminal symbol as unreachable.
4. Until one entire pass has been made without any new symbol being marked do:
For each rule X → αAβ (where A ∈ V - Σ) in R do:
If X has been marked as reachable and A has not then:
Mark A as reachable.
5. Remove from G' every unreachable symbol.
6. Remove from G' every rule with an unreachable symbol on the left-hand side.
7. Return G'.
Example: G = ({S, A, B, C, D, a, b}, {a, b}, R, S), where
R = { S → AB
A → aAb | ε
B → bA
D → AB }
S, A, and B are reachable but D is not. On eliminating D from both LHS and RHS, the rule set R is
R = { S → AB
A → aAb | ε
B → bA }
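This marking algorithm, too, can be sketched in a few lines (same conventions as before; the function name is ours):

```python
# Sketch of removeunreachable: mark S reachable, then repeatedly mark every
# nonterminal appearing on the right-hand side of a rule with a reachable LHS.

def remove_unreachable(rules, start="S"):
    reachable = {start}
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in reachable:
                for c in rhs:
                    if c.isupper() and c not in reachable:
                        reachable.add(c)
                        changed = True
    # Drop rules whose left-hand side is unreachable.
    return [(l, r) for l, r in rules if l in reachable]

rules = [("S", "AB"), ("A", "aAb"), ("A", ""), ("B", "bA"), ("D", "AB")]
print(remove_unreachable(rules))
# [('S', 'AB'), ('A', 'aAb'), ('A', ''), ('B', 'bA')]
```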
5. Proving the Correctness of a Grammar
Given some language L and a grammar G, can we prove that G is correct (i.e., that it generates exactly the strings in L)?
To do so, we need to prove two things:
1. Prove that G generates only strings in L.
2. Prove that G generates all the strings in L.
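A bounded version of both proof obligations can at least be checked mechanically: enumerate every string the grammar derives up to a length bound and compare with the target language up to that bound. A sketch for the equal-numbers-of-a's-and-b's grammar of Example 5 (helper names and the small length slack are ours; this checks the claim only up to the bound, it is not a proof):

```python
# Bounded correctness check for G: S -> aSb | bSa | SS | eps, claimed to
# generate {w in {a,b}* : #a(w) = #b(w)}. We enumerate all leftmost
# sentential forms of length <= max_len + 1 and collect terminal strings.
from itertools import product

RULES = [("S", "aSb"), ("S", "bSa"), ("S", "SS"), ("S", "")]

def generated(max_len):
    seen = {"S"}
    frontier = ["S"]
    terminal = set()
    while frontier:
        w = frontier.pop()
        pos = w.find("S")
        if pos == -1:                    # no nonterminals: a generated string
            if len(w) <= max_len:
                terminal.add(w)
            continue
        for _, rhs in RULES:
            new = w[:pos] + rhs + w[pos + 1:]
            # small slack over max_len lets forms shrink back via S -> eps
            if len(new) <= max_len + 1 and new not in seen:
                seen.add(new)
                frontier.append(new)
    return terminal

balanced = {"".join(p) for n in range(5) for p in product("ab", repeat=n)
            if "".join(p).count("a") == "".join(p).count("b")}
print(generated(4) == balanced)
```

If the two sets differ, the grammar either generates a string outside L (violating obligation 1) or misses a string of L (violating obligation 2) within the bound.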
6. Derivations
Algorithms used for generation and recognition must be systematic. The expansion order is important for algorithms that work with CFGs. To make it easier to describe such algorithms, we define two useful families of derivations.
a. A leftmost derivation is one in which, at each step, the leftmost nonterminal in the
working string is chosen for expansion.
b. A rightmost derivation is one in which, at each step, the rightmost nonterminal in the
working string is chosen for expansion.
Example 1: S → AB | aaB, A → a | Aa, B → b
Left-most derivation for the string aab: S ⇒ AB ⇒ AaB ⇒ aaB ⇒ aab
Right-most derivation for the string aab: S ⇒ AB ⇒ Ab ⇒ Aab ⇒ aab
Example 2: S → iCtS | iCtSeS | x, C → y
Left-most derivation for the string iytiytxex:
S ⇒ iCtS ⇒ iytS ⇒ iytiCtSeS ⇒ iytiytSeS ⇒ iytiytxeS ⇒ iytiytxex
Right-most derivation for the string iytiytxex:
S ⇒ iCtSeS ⇒ iCtSex ⇒ iCtiCtSex ⇒ iCtiCtxex ⇒ iCtiytxex ⇒ iytiytxex
11
Left-most derivation for the string "the smart cat smells chocolate":
S ⇒ NP VP
⇒ the Nominal VP
⇒ the Adjs N VP
⇒ the Adj N VP
⇒ the smart N VP
⇒ the smart cat VP
⇒ the smart cat V NP
⇒ the smart cat smells NP
⇒ the smart cat smells Nominal
⇒ the smart cat smells N
⇒ the smart cat smells chocolate
Right-most derivation for the string "the smart cat smells chocolate":
S ⇒ NP VP
⇒ NP V NP
⇒ NP V Nominal
⇒ NP V N
⇒ NP V chocolate
⇒ NP smells chocolate
⇒ the Nominal smells chocolate
⇒ the Adjs N smells chocolate
⇒ the Adjs cat smells chocolate
⇒ the Adj cat smells chocolate
⇒ the smart cat smells chocolate
Parse Trees
Regular grammar: in most applications, we just want to describe the set of strings in a
language. Context-free grammar: we also want to assign meanings to the strings in a
language, for which we care about internal structure of the strings. Parse trees capture the
essential grammatical structure of a string. A program that produces such trees is called a
parser. A parse tree is an (ordered, rooted) tree that represents the syntactic structure of a
string according to some formal grammar. In a parse tree, the interior nodes are labeled by nonterminals of the grammar, while the leaf nodes are labeled by terminals of the grammar or ε.
A parse tree, derived by a grammar G = (V, Σ, R, S), is a rooted, ordered tree in which:
1. Every leaf node is labeled with an element of Σ ∪ {ε},
2. The root node is labeled S,
3. Every other node is labeled with some element of V - Σ, and
4. If m is a nonleaf node labeled X and the children of m are labeled x1, x2, …, xn, then R contains the rule X → x1x2…xn.
Example 1: S → AB | aaB, A → a | Aa, B → b
Left-most derivation for the string aab: S ⇒ AB ⇒ AaB ⇒ aaB ⇒ aab
Example 2: S → iCtS | iCtSeS | x, C → y
Left-most derivation for the string iytiytxex: S ⇒ iCtS ⇒ iytS ⇒ iytiCtSeS ⇒ iytiytSeS ⇒ iytiytxeS ⇒ iytiytxex
Example 3: Parse Tree -Structure in English for the string “the smart cat smells
chocolate”. It is clear from the tree that the sentence is not about cat smells or smart cat
smells.
A parse tree may correspond to multiple derivations. From the parse tree, we cannot tell which of the following is used in the derivation:
S ⇒ NP VP ⇒ the Nominal VP ⇒ …
S ⇒ NP VP ⇒ NP V NP ⇒ …
Parse trees capture the important structural facts about a derivation but throw away the details
of the nonterminal expansion order. The order has no bearing on the structure we wish to
assign to a string.
Generative Capacity
Because parse trees matter, it makes sense, given a grammar G, to distinguish between:
1. G’s weak generative capacity, defined to be the set of strings, L(G), that G generates,
and
2. G’s strong generative capacity, defined to be the set of parse trees that G generates.
When we design grammars, it will be important to consider both their weak and their strong generative capacities.
7. Ambiguity
Sometimes a grammar may produce more than one parse tree for some (or all ) of the strings
it generates. When this happens we say that the grammar is ambiguous. A grammar is
ambiguous iff there is at least one string in L(G) for which G produces more than one parse
tree.
Example 1: Bal = {w ∈ {), (}* : the parentheses are balanced}.
G = ({S, ), (}, {), (}, R, S) where R = { S → ε, S → SS, S → (S) }
Left-most derivation 1 for the string (())():
S ⇒ SS ⇒ (S)S ⇒ ((S))S ⇒ (())S ⇒ (())(S) ⇒ (())()
Left-most derivation 2 for the string (())():
S ⇒ SS ⇒ SSS ⇒ SS ⇒ (S)S ⇒ ((S))S ⇒ (())S ⇒ (())(S) ⇒ (())()
Since the two parse trees obtained for the same string (())() are different, the grammar is ambiguous.
Example 2: S → iCtS | iCtSeS | x, C → y
Left-most derivation for the string iytiytxex:
S ⇒ iCtS ⇒ iytS ⇒ iytiCtSeS ⇒ iytiytSeS ⇒ iytiytxeS ⇒ iytiytxex
Right-most derivation for the string iytiytxex:
S ⇒ iCtSeS ⇒ iCtSex ⇒ iCtiCtSex ⇒ iCtiCtxex ⇒ iCtiytxex ⇒ iytiytxex
Since the two parse trees obtained for the same string iytiytxex are different, the grammar is ambiguous.
Example 3: S → AB | aaB, A → a | Aa, B → b
The string aab has two derivations: S ⇒ AB ⇒ AaB ⇒ aaB ⇒ aab and S ⇒ aaB ⇒ aab. Since the two parse trees obtained for the same string aab are different, the grammar is ambiguous.
Why Is Ambiguity a Problem?
With regular languages, for most applications, we do not care about assigning internal
structure to strings.
With context-free languages, we usually do care about internal structure because, given a string w, we want to assign meaning to w. It is generally difficult, if not impossible, to assign a unique meaning without a unique parse tree. So an ambiguous grammar, which fails to produce a unique parse tree, is a problem.
Consider, for example, the expression 2 + 3 * 5 under a grammar with the symmetric rules E → E + E and E → E * E. Should the evaluation of this expression return 17 or 25? Designers of practical languages must be careful that they create languages for which they can write unambiguous grammars.
Techniques for Reducing Ambiguity
No general-purpose algorithm exists to test for ambiguity in a grammar or to remove it when it is found. But we can reduce ambiguity by eliminating:
a. rules like S → ε,
b. rules with symmetric right-hand sides, such as S → SS or E → E + E (a grammar with such rules is both left and right recursive; the fix is to remove the right recursion),
c. rule sets that lead to ambiguous attachment of optional postfixes.
a. Eliminating ε-Rules
Let G = (V, Σ, R, S) be a CFG. The following algorithm constructs a G' such that L(G') = L(G) - {ε} and G' contains no ε-rules:
removeEps(G: CFG) =
1. Let G' = G.
2. Find the set N of nullable variables in G'.
3. Repeat until G' contains no modifiable rules that haven't been processed:
Given the rule P → αQβ, where Q ∈ N, add the rule P → αβ if it is not already present, and if αβ ≠ ε, and if P ≠ αβ.
4. Delete from G' all rules of the form X → ε.
5. Return G'.
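The algorithm can be sketched as follows (a sketch using the standard nullable-set construction, equivalent to the steps above; rules are (lhs, rhs) pairs, nonterminals are single uppercase letters, and the function name is ours):

```python
# Sketch of removeEps: find the nullable nonterminals, add every variant of
# each rule with some nullable occurrences dropped, then delete X -> eps rules.
from itertools import product

def remove_eps(rules):
    nullable = set()                      # N = the set of nullable variables
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs not in nullable and all(c in nullable for c in rhs):
                nullable.add(lhs)
                changed = True
    new_rules = set()
    for lhs, rhs in rules:
        spots = [i for i, c in enumerate(rhs) if c in nullable]
        for keep in product([True, False], repeat=len(spots)):
            drop = {i for i, k in zip(spots, keep) if not k}
            body = "".join(c for i, c in enumerate(rhs) if i not in drop)
            if body:                      # never add X -> eps back
                new_rules.add((lhs, body))
    return new_rules

# The grammar of Example 3 below: S -> ABC, A -> aC | D, B -> bB | eps | A, ...
rules = [("S", "ABC"), ("A", "aC"), ("A", "D"), ("B", "bB"), ("B", ""),
         ("B", "A"), ("C", "Ac"), ("C", ""), ("C", "Cc"), ("D", "aa")]
expected = {("S", "ABC"), ("S", "AB"), ("S", "AC"), ("S", "A"),
            ("A", "aC"), ("A", "D"), ("A", "a"),
            ("B", "bB"), ("B", "A"), ("B", "b"),
            ("C", "Ac"), ("C", "Cc"), ("C", "c"), ("D", "aa")}
print(remove_eps(rules) == expected)
```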
The new grammar built by removeEps is better than the original one: the string (())() now has only one parse tree. But it is still ambiguous, as the string ()()() has two parse trees. Eliminating the symmetric rule S → SS, we get:
S* → ε | S
S → SS1 /* force branching only to the left
S → S1 /* add rule
S1 → (S) | ()
Unambiguous grammar for Bal = {w ∈ {), (}* : the parentheses are balanced}:
G = ({S*, S, S1, ), (}, {), (}, R, S*) where R = {
S* → ε | S
S → SS1 | S1
S1 → (S) | () }
The parse tree obtained for the string ()()() is now unique.
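The two claims, that the original Bal grammar gives two parse trees for ()()() while this grammar gives exactly one, can be checked by counting parse trees with a small chart-style counter (a sketch; the rule format and function names are ours, ε-rules are left out so the counts are finite, and S1 is written T):

```python
# Count parse trees of a string under an eps-free CFG. count(X, s) sums,
# over rules X -> alpha, the number of ways alpha can be split across s.
from functools import lru_cache

def make_counter(rules):
    @lru_cache(maxsize=None)
    def ways(alpha, s):
        if not alpha:
            return 1 if s == "" else 0
        head, rest = alpha[0], alpha[1:]
        if head in "()":                         # terminal symbol
            return ways(rest, s[1:]) if s[:1] == head else 0
        total = 0
        hi = len(s) - len(rest)                  # eps-free: each symbol eats >= 1 char
        for i in range(1, hi + 1):
            left = count(head, s[:i])
            if left:
                total += left * ways(rest, s[i:])
        return total

    @lru_cache(maxsize=None)
    def count(nt, s):
        return sum(ways(rhs, s) for lhs, rhs in rules if lhs == nt)

    return count

ambiguous = (("S", ("S", "S")), ("S", ("(", "S", ")")), ("S", ("(", ")")))
count_a = make_counter(ambiguous)
print(count_a("S", "(())()"), count_a("S", "()()()"))   # 1 2

unambiguous = (("S", ("S", "T")), ("S", ("T",)),
               ("T", ("(", "S", ")")), ("T", ("(", ")")))
count_u = make_counter(unambiguous)
print(count_u("S", "(())()"), count_u("S", "()()()"))   # 1 1
```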
Proving that the Grammar is Unambiguous
A grammar is unambiguous iff for all strings w, at every point in a leftmost derivation or rightmost derivation of w, only one rule in G can be applied.
S* → ε ---(1)
S* → S ---(2)
S → SS1 ---(3)
S → S1 ---(4)
S1 → (S) ---(5)
S1 → () ---(6)
Inherent Ambiguity
In many cases, for an ambiguous grammar G, it is possible to construct a new grammar G' that generates L(G) with less or no ambiguity. However, not always. Some languages have the property that every grammar for them is ambiguous. We call such languages inherently ambiguous.
Example: L = {aibjck : i, j, k ≥ 0, i = j or j = k}.
Every string in L has either (or both) the same number of a's and b's or the same number of b's and c's. L = {anbncm : n, m ≥ 0} ∪ {anbmcm : n, m ≥ 0}.
One grammar for L has the rules:
S → S1 | S2
S1 → S1c | A /* Generate all strings in {anbncm}.
A → aAb | ε
S2 → aS2 | B /* Generate all strings in {anbmcm}.
B → bBc | ε
Consider the string a2b2c2: it has two distinct parse trees, one via S1 and one via S2.
8. Normal Forms
We have seen that grammars in which a right-hand side is ε are harder to use. Let us see what happens if we carry the idea of getting rid of ε-productions a few steps farther. To make our tasks easier, we define normal forms.
Normal Forms - When the grammar rules in G satisfy certain restrictions, then G is said to be
in Normal Form.
• Normal Forms for queries & data can simplify database processing.
• Normal Forms for logical formulas can simplify automated reasoning in AI systems
and in program verification system.
• It might be easier to build a parser if we could make some assumptions about the form
of the grammar rules that the parser will use.
Normal Forms for Grammar
Among several normal forms, two of them are:
• Chomsky Normal Form(CNF)
• Greibach Normal Form(GNF)
20
Converting to Chomsky Normal Form
Apply transformations to G such that the language generated by G is unchanged.
1. Rule substitution.
Example: given X → aYc, Y → b, Y → ZZ, the equivalent grammar constructed is X → abc | aZZc.
There exists a 4-step algorithm to convert a CFG G into a new grammar Gc such that L(G) = L(Gc) - {ε}:
convertChomsky(G: CFG) =
1. G' = removeEps(G) /* remove ε-rules of the form S → ε
2. G'' = removeUnits(G') /* remove unit rules of the form A → B
3. G''' = removeMixed(G'') /* remove mixed rules of the form A → aB
4. G'''' = removeLong(G''') /* remove long rules of the form S → ABCD
5. Return Gc = G''''.
Now, applying removeLong returns G'''':
S → TaS1 | TaS3 | TaS2 | TaTa
S1 → AS2
S2 → CTa
S3 → ATa
A → TcC | a | c
B → TcC | c
C → TcC | c
Ta → a
Tc → c
removeLong(G''') returns G'''':
S → TaS1 | TbS2 | TbTb
S1 → STa
S2 → TbC
B → TbS2 | TbTb
C → TcC | c
Ta → a
Tb → b
Tc → c
Example 3: Apply the normalization algorithm to convert the grammar to CNF
G: S → ABC
A → aC | D
B → bB | ε | A
C → Ac | ε | Cc
D → aa
removeEps(G) returns G':
S → ABC | AC | AB | A
A → aC | D | a
B → bB | A | b
C → Ac | Cc | c
D → aa
removeUnits(G') returns G'':
S → ABC | AC | AB | aC | aa | a
A → aC | aa | a
B → bB | aC | aa | a | b
C → Ac | Cc | c
D → aa
removeMixed(G'') returns G''':
S → ABC | AC | AB | Ta C | Ta Ta | a
A → Ta C | Ta Ta | a
B → Tb B | Ta C | Ta Ta | a | b
C → A Tc | C Tc | c
D → Ta Ta
Ta → a
Tb → b
Tc → c
Finally, removeLong(G''') returns the grammar in Chomsky normal form: the only remaining long rule, S → ABC, is replaced by S → A S1 and S1 → BC; all other rules are unchanged.
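The last two steps of the conversion can be sketched as follows (a sketch; rules are (lhs, list-of-symbols) pairs, the fresh names Ta/Tb/Tc mirror the example, the names X1, X2, … for removeLong are ours, and so are the function names):

```python
# Sketch of removeMixed and removeLong. Rules are (lhs, [symbols]) pairs;
# nonterminals are uppercase, single lowercase letters are terminals.

def remove_mixed(rules):
    """Replace terminals in rules of length >= 2 by fresh Tx nonterminals."""
    out, fresh = [], {}
    for lhs, rhs in rules:
        if len(rhs) >= 2:
            rhs = [fresh.setdefault(c, "T" + c) if c.islower() else c for c in rhs]
        out.append((lhs, list(rhs)))
    out += [(t, [c]) for c, t in fresh.items()]      # add Ta -> a, Tb -> b, ...
    return out

def remove_long(rules):
    """Binarize rules of length >= 3 by introducing fresh nonterminals."""
    out, n = [], 0
    for lhs, rhs in rules:
        rhs = list(rhs)
        while len(rhs) > 2:
            n += 1
            new = f"X{n}"
            out.append((lhs, [rhs[0], new]))         # peel off the first symbol
            lhs, rhs = new, rhs[1:]
        out.append((lhs, rhs))
    return out

# G'' from the example above (after removeEps and removeUnits):
g2 = [("S", list(r)) for r in ["ABC", "AC", "AB", "aC", "aa", "a"]] + \
     [("A", list(r)) for r in ["aC", "aa", "a"]] + \
     [("B", list(r)) for r in ["bB", "aC", "aa", "a", "b"]] + \
     [("C", list(r)) for r in ["Ac", "Cc", "c"]] + [("D", ["a", "a"])]
g3 = remove_mixed(g2)       # S -> Ta C | Ta Ta | a, ..., plus Ta -> a, Tb -> b, Tc -> c
g4 = remove_long(g3)        # S -> A X1, X1 -> B C; every rule now has length <= 2
```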
9. Pushdown Automata
A pushdown automaton, or PDA, is a finite state machine that has been augmented by a single stack. There is a PDA acceptor for every context-free language.
Definition of a Nondeterministic PDA (NPDA)
M = (K, Σ, Γ, Δ, s, A), where:
• K is a finite set of states,
• Σ is the input alphabet,
• Γ is the stack alphabet,
• s ∈ K is the initial state,
• A ⊆ K is the set of accepting states, and
• Δ is the transition relation, a finite subset of (K × (Σ ∪ {ε}) × Γ*) × (K × Γ*).
Configuration
A configuration of PDA M is an element of K × Σ* × Γ*: the current state, the input that is still left to read, and the contents of its stack.
The initial configuration of a PDA M, on input w, is (s, w, ε).
If the stack contains cba (with c on top) and s1s2…sn is pushed onto it, the value after the push is s1s2…sncba.
Yields-in-one-step
Yields-in-one-step, written |-M, relates configuration1 to configuration2 iff M can move from configuration1 to configuration2 in one step. Let c be any element of Σ ∪ {ε}, let γ1, γ2 and γ be any elements of Γ*, and let w be any element of Σ*. Then:
(q1, cw, γ1γ) |-M (q2, w, γ2γ) iff ((q1, c, γ1), (q2, γ2)) ∈ Δ.
The relation yields, written |-M*, is the reflexive, transitive closure of |-M.
Computation
A computation by M is a finite sequence of configurations C0, C1, C2, …, Cn for some n ≥ 0 such that:
• C0 is an initial configuration,
• Cn is of the form (q, ε, γ), for some state q ∈ K and string γ ∈ Γ*, and
• C0 |-M C1 |-M C2 |-M … |-M Cn.
Accepting
A computation C of M is an accepting computation iff C = (s, w, ε) |-M* (q, ε, ε), for some q ∈ A. A computation accepts only if it runs out of input when it is in an accepting state and the stack is empty.
A computation C of M is a rejecting computation iff C = (s, w, ε) |-M* (q, w', α), where C is not an accepting computation and where M has no moves that it can make from (q, w', α). A computation can reject only if the criteria for accepting have not been met and there are no further moves that can be taken.
Let w be a string that is an element of Σ*. Then:
• M accepts w iff at least one of its computations accepts.
• M rejects w iff all of its computations reject.
The language accepted by M, denoted L(M), is the set of all strings accepted by M. M rejects
a string w iff all paths reject it.
It is possible that, on input w, M neither accepts nor rejects. In that case, no path accepts and
some path does not reject.
Transition
A transition ((q1, c, γ1), (q2, γ2)) says that if c matches the input and γ1 matches the current top of the stack, the transition from q1 to q2 can be taken. Then c will be removed from the input, γ1 will be popped from the stack, and γ2 will be pushed onto it. M cannot peek at the top of the stack without popping it.
• If c = ε, the transition can be taken without consuming any input.
• If γ1 = ε, the transition can be taken without checking the stack or popping anything.
• If γ2 = ε, nothing is pushed when the transition is taken.
Example1: A PDA for Balanced Parentheses. Bal = {w ∈ {), (}* : the parentheses are balanced}.
M = (K, Σ, Γ, Δ, s, A),
where:
K = {s} the states
Σ = {(, )} the input alphabet
Γ = {(} the stack alphabet
A = {s} the accepting states
Δ = { ((s, (, ε), (s, ( )) ----- (1)
((s, ), ( ), (s, ε)) ----- (2) }
An Example of Accepting -- Input string = (())()
(s, (())(), ε) |- (s, ())(), ( ) |- (s, ))(), (( ) |- (s, )(), ( ) |- (s, (), ε) |- (s, ), ( ) |- (s, ε, ε)
The computation accepts the string (())() as it runs out of input in the accepting state s with an empty stack.
Transition State Unread input Stack
          s     (())()       ε
1         s     ())()        (
1         s     ))()         ((
2         s     )()          (
2         s     ()           ε
1         s     )            (
2         s     ε            ε
Example1 of Rejecting -- Input string = (()))
(s, (())), ε) |- (s, ())), ( ) |- (s, ))), (( ) |- (s, )), ( ) |- (s, ), ε)
The computation has reached the final state s and the stack is empty, but the string is still rejected because the input is not empty: no transition can be taken from (s, ), ε).
Example2 of Rejecting -- Input string = ((())
Transition State Unread input Stack
S ((())
1 S (()) (
1 S ()) ((
1 S )) (((
2 S ) ((
2 S (
(s, ((()), ε) |- (s, (()), ( ) |- (s, ()), (( ) |- (s, )), ((( ) |- (s, ), (( ) |- (s, ε, ( )
The computation has reached the final state s and runs out of input, but the string is still rejected because the stack is not empty.
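The accepting and rejecting computations above can be replayed with a small simulator (a sketch; the transition format ((q, c, pop), (q2, push)) mirrors the definition, the stack is kept as a string with its top on the left, ε is the empty string "", and acceptance requires consumed input, an accepting state, and an empty stack; the function names are ours):

```python
# Sketch of a PDA simulator: explore all reachable (state, stack) pairs,
# consuming the input left to right; handles nondeterminism and eps-moves.

def eps_closure(delta, configs):
    todo, seen = list(configs), set(configs)
    while todo:
        q, stack = todo.pop()
        for (q1, c, pop), (q2, push) in delta:
            if q1 == q and c == "" and stack.startswith(pop):
                cfg = (q2, push + stack[len(pop):])
                if cfg not in seen:
                    seen.add(cfg)
                    todo.append(cfg)
    return seen

def pda_accepts(delta, start, accepting, w):
    configs = eps_closure(delta, {(start, "")})      # top of stack on the left
    for ch in w:
        nxt = set()
        for q, stack in configs:
            for (q1, c, pop), (q2, push) in delta:
                if q1 == q and c == ch and stack.startswith(pop):
                    nxt.add((q2, push + stack[len(pop):]))
        configs = eps_closure(delta, nxt)
    return any(q in accepting and stack == "" for q, stack in configs)

# The balanced-parentheses PDA: push on "(", pop on ")".
bal = [(("s", "(", ""), ("s", "(")),
       (("s", ")", "("), ("s", ""))]
print(pda_accepts(bal, "s", {"s"}, "(())()"))   # True
print(pda_accepts(bal, "s", {"s"}, "((())"))    # False
```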
Example2: A PDA for AnBn = {anbn : n ≥ 0}.
M = (K, Σ, Γ, Δ, s, A), where:
K = {s, f} the states
Σ = {a, b} the input alphabet
Γ = {a} the stack alphabet
A = {s, f} the accepting states
Δ = { ((s, a, ε), (s, a)) -----(1)
((s, b, a), (f, ε)) -----(2)
((f, b, a), (f, ε)) } -----(3)
An Example of Accepting -- Input string = aabb:
(s, aabb, ε) |- (s, abb, a) |- (s, bb, aa) |- (f, b, a) |- (f, ε, ε)
The computation has reached the final state f, the input string is consumed and the stack is empty. Hence the string aabb is accepted.
Example3: A PDA for WcWR = {wcwR : w ∈ {a, b}*}.
M = (K, Σ, Γ, Δ, s, A), where:
K = {s, f} the states
Σ = {a, b, c} the input alphabet
Γ = {a, b} the stack alphabet
A = {f} the accepting states
Δ = { ((s, a, ε), (s, a)) -----(1)
((s, b, ε), (s, b)) -----(2)
((s, c, ε), (f, ε)) -----(3)
((f, a, a), (f, ε)) -----(4)
((f, b, b), (f, ε)) } -----(5)
An Example of Accepting -- Input string = abcba:
(s, abcba, ε) |- (s, bcba, a) |- (s, cba, ba) |- (f, ba, ba) |- (f, a, a) |- (f, ε, ε)
The computation has reached the final state f, the input string is consumed and the stack is empty. Hence the string abcba is accepted.
Example4: A PDA for {anb2n : n ≥ 0}.
M = (K, Σ, Γ, Δ, s, A), where:
K = {s, f} the states
Σ = {a, b} the input alphabet
Γ = {a} the stack alphabet
A = {s, f} the accepting states
Δ = { ((s, a, ε), (s, aa)) -----(1)
((s, b, a), (f, ε)) -----(2)
((f, b, a), (f, ε)) } -----(3)
An Example of Accepting -- Input string = aabbbb:
(s, aabbbb, ε) |- (s, abbbb, aa) |- (s, bbbb, aaaa) |- (f, bbb, aaa) |- (f, bb, aa) |- (f, b, a) |- (f, ε, ε)
10. Deterministic and Nondeterministic PDAs
Exploiting Nondeterminism
Previous examples are DPDA, where each machine followed only a single computational
path. But many useful PDAs are not deterministic, where from a single configuration there
exist multiple competing moves. As in FSMs, easiest way to envision the operation of a
NDPDA M is as a tree.
Each node in the tree corresponds to a configuration of M and each path from the root to a
leaf node may correspond to one computation that M might perform. The state, the stack and
the remaining input can be different along different paths. As a result, it will not be possible
to simulate all paths in parallel, the way we did for NDFSMs.
Example 1: A PDA for PalEven = {wwR : w ∈ {a, b}*}.
M = (K, Σ, Γ, Δ, s, A), where:
K = {s, f} the states
Σ = {a, b} the input alphabet
Γ = {a, b} the stack alphabet
A = {f} the accepting states
Δ = { ((s, a, ε), (s, a)) -----(1)
((s, b, ε), (s, b)) -----(2)
((s, ε, ε), (f, ε)) -----(3) /* nondeterministically guess the middle
((f, a, a), (f, ε)) -----(4)
((f, b, b), (f, ε)) } -----(5)
Example 2: A PDA for {w ∈ {a, b}* : #a(w) = #b(w)} = equal numbers of a's and b's.
L = {ε, ab, ba, abba, aabb, baba, bbaa, …}
M = (K, Σ, Γ, Δ, s, A),
where:
K = {s} the states
Σ = {a, b} the input alphabet
Γ = {a, b} the stack alphabet
A = {s} the accepting states
Δ = { ((s, a, ε), (s, a)) -----(1)
((s, b, ε), (s, b)) -----(2)
((s, a, b), (s, ε)) -----(3)
((s, b, a), (s, ε)) } -----(4)
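This machine is genuinely nondeterministic: with a b on top of the stack and an a in the input, rules (1) and (3) both apply. A compact breadth-first sketch of its tree of configurations (names and representation are ours; the state is always s, so only the stack varies):

```python
# Explore all configurations of the equal-numbers-of-a's-and-b's PDA.
# The stack is a string with its top on the left.

DELTA = [("a", "", "a"),   # rule 1: push a
         ("b", "", "b"),   # rule 2: push b
         ("a", "b", ""),   # rule 3: an a cancels a stacked b
         ("b", "a", "")]   # rule 4: a b cancels a stacked a

def accepts(w):
    stacks = {""}
    for ch in w:
        nxt = set()
        for stack in stacks:
            for c, pop, push in DELTA:
                if c == ch and stack.startswith(pop):
                    nxt.add(push + stack[len(pop):])
        stacks = nxt
    return "" in stacks            # accepting state s with an empty stack

print([w for w in ["", "ab", "ba", "abba", "aabb", "aab", "abab"] if accepts(w)])
# ['', 'ab', 'ba', 'abba', 'aabb', 'abab']
```

Different paths keep different stacks, which is why the whole set of stacks must be tracked rather than a single one.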
Example 3: The a Region and the b Region are Different. L = {ambn : m ≠ n; m, n > 0}
It is hard to build a machine that looks for something negative, like ≠. But we can break L
into two sublanguages: {ambn : 0 < n < m} and {ambn : 0 < m < n}
• If both the input and the stack are empty, halt and reject (m = n).
• If the input is empty but the stack is not (m > n), accept.
• If the stack is empty but the input is not (m < n), accept.
M = (K, Σ, Γ, Δ, 1, A), where:
K = {1, 2, 3, 4} the states
Σ = {a, b} the input alphabet
Γ = {a} the stack alphabet
A = {3, 4} the accepting states
Δ = { ((1, a, ε), (1, a)) -----(1)
((1, b, a), (2, ε)) -----(2)
((2, b, a), (2, ε)) -----(3)
((2, ε, a), (3, ε)) -----(4)
((3, ε, a), (3, ε)) -----(5)
((2, b, ε), (4, ε)) -----(6)
((4, b, ε), (4, ε)) } -----(7)
Two problems with this M:
1. We have no way to specify that a move can be taken only if the stack is empty.
2. We have no way to specify that the input stream is empty.
As a result, in most of its moves in state 2, M will have a choice of three paths to take.
Case1: A transition that should be taken only if the stack is empty competes against one or
more moves that require a match of some string on the stack.
Problem: Nondeterminism could be eliminated if it were possible to check for an empty
stack.
Solution: Using a special bottom-of-stack marker ( # )
Before doing anything, push a special character onto the stack. The stack is then logically
empty iff that special character (#) is at the top of the stack. Before M accepts a string, its
stack must be completely empty, so the special character must be popped whenever M
reaches an accepting state.
Now the transition back to state 2 no longer competes with the transition to state 4, which can
only be taken when the # is the only symbol on the stack. The machine is still
nondeterministic because the transition back to state 2 competes with the transition to state 3.
Case2: A transition that should be taken only if the input stream is empty competes against
one or more moves that require a match against a specific input character.
Problem: Nondeterminism could be eliminated if it were possible to check for an empty input
stream.
Solution: using a special end-of-string marker ($ )
Adding an end-of-string marker to the language to be accepted is a powerful tool for reducing
nondeterminism. Instead of building a machine to accept a language L, build one to accept
L$, where $ is a special end-of-string marker.
Now the transition back to state 2 no longer competes with the transition to state 3, since the latter can only be taken when the $ is read. The $ must be read on all the paths, not just the one where we need it.
But the facts about CFGs and PDAs are different from the facts about RLs and FSMs.
1. There are context-free languages for which no deterministic PDA exists.
2. It is possible that a PDA may
● not halt,
● not ever finish reading its input.
However, for an arbitrary PDA M, there exists an equivalent PDA M' that halts, with L(M') = L(M).
There exists no algorithm to minimize a PDA. It is undecidable whether a PDA is minimal.
Problem 2: Let M be a PDA that accepts some language L. Then, on input w, if w ∈ L then M will halt and accept. But if w ∉ L then, while M will not accept w, it is possible that it will not reject it either.
Example1: Let Σ = {a} and consider M = (figure omitted), with L(M) = {a}. The computation (1, a, ε) |- (2, a, a) |- (3, ε, ε) causes M to accept a.
Example2: Consider M = (figure omitted).
There are two alternatives to this definition of acceptance:
1. PDA by Final state: Accept if, when the input has been consumed, M lands in an
accepting state, regardless of the contents of the stack.
2. PDA by Empty stack: Accept if, when the input has been consumed, the stack is
empty, regardless of the state M is in.
All of these definitions are equivalent in the sense that, if some language L is accepted by a PDA using one definition, it can be accepted by some PDA using each of the other definitions. For example, if some language L is accepted by a PDA by final state alone, then it can be accepted by a PDA by final state and empty stack, and vice versa. We can prove this by giving algorithms that transform a PDA of one sort into an equivalent PDA of the other sort.
Equivalence
1. Given a PDA M that accepts by accepting state and empty stack, construct a new PDA M' that accepts by accepting state alone, where L(M') = L(M).
2. Given a PDA M that accepts by accepting state alone, construct a new PDA M' that accepts by accepting state and empty stack, where L(M') = L(M).
Hence we can prove that M and M' accept the same strings.
1. Accepting by Final State Alone
Here the new machine M' accepts when the input has been consumed and M' lands in an accepting state, regardless of the contents of the stack: M' accepts if (s, w, ε) |-M'* (q, ε, γ), for some q ∈ A.
M' will have a single accepting state qa. The only way for M' to get to qa will be to land in an accepting state of M when the stack is logically empty. Since there is no way to check that the stack is empty, M' will begin by pushing a bottom-of-stack marker # onto the stack. Whenever # is the top symbol of the stack, the stack is logically empty.
The construction proceeds as follows:
1. Initially, let M' = M.
2. Create a new start state s'.
Add the transition ((s', ε, ε), (s, #)).
3. For each accepting state a in M do:
Add the transition ((a, ε, #), (qa, ε)).
4. Make qa the only accepting state in M'.
Example: (figure omitted)
It is easy to see that M' lands in its accepting state qa iff M lands in some accepting state with a logically empty stack. Thus M and M' accept the same strings.
2. Accepting by Final State and Empty Stack
The construction proceeds as follows:
1. Initially, let M' = M.
2. Create a new accepting state F.
3. For each accepting state a in M do:
Add the transition ((a, ε, ε), (F, ε)).
4. Make F the only accepting state in M'.
5. For every element g of Γ:
Add to M' the transition ((F, ε, g), (F, ε)).
In other words, if M accepts, go to the only accepting state of M' and clear the stack. Thus M' will accept by accepting state and empty stack iff M accepts by accepting state.
Example: (figure omitted)
Sl.No Sample Questions
1. Define context free grammars and languages.
6. Define leftmost derivation and rightmost derivation. Given the following CFG.
E → E + T | T
T → T * F | F
F → (E) | a | b | c
Draw parse trees for the following sentences and also derive the leftmost and rightmost derivations:
i) (a + b) * c ii) (a) + b * c
7. Consider the following grammar G: S → 0S1 | SS | 10
Show a parse tree produced by G for each of the following strings:
a) 010110
b) 00101101
8. Define ambiguous and explain inherently ambiguous grammars.
11. Define Chomsky normal form. Apply the normalization algorithm to convert the grammar to
Chomsky normal form.
a. S → aSa S → B B → bbC B
→ bb C → ε C → cC
b. S → ABCA → aC | D B → bB | ε | A C
→ Ac | ε | Cc D → aa
12. Define Push down automata (NPDA). Design a NPDA for the CFG given in Question (2).
13. Design a PDA for the language L$, where L = {w ∈ {a, b}* : #a(w) = #b(w)}.
14. Design a PDA for the language L = {aibjck : i + j = k; i, j ≥ 0}.
16. Design a PDA for the language L = {aibjck : i + k = j; i, k ≥ 0}.
17. Design a PDA for the language L = {aibjck : k + j = i; k, j ≥ 0}.