
CS450: DESIGN OF LANGUAGE PROCESSOR

Unit-3

Parsing Methods
Outline
• Top Down Parsing: Recursive-Descent Parsing, FIRST and FOLLOW, LL(1) grammar
• Non-recursive Predictive Parsing
• Construction of Non-recursive Predictive Parsing Table
• Error Recovery in Predictive Parsing
• Bottom-up Parsing: Shift-Reduce Parsing, Conflicts during Shift-Reduce Parsing
• Introduction to LR Parsing, LR Parsing Algorithm, Viable Prefixes
• Simple LR Parser (SLR), Construction of Simple LR Parsing Table
• Canonical LR(1), Construction of LR(1) Parsing Table
• Look Ahead LR (LALR), Construction of LALR Parsing Table
• Parser Generator – Yacc



Parsing Methods
Context free grammar
🡪 A context free grammar G = (V, T, P, S) consists of a set of nonterminal symbols V, a set of terminal symbols T, a set of productions P, and a start symbol S.

🡪 Nonterminal symbol:
🡪 The name of a syntax category of a language, e.g., noun, verb, etc.
🡪 It is written as a single capital letter, or as a name enclosed between < … >, e.g., A or
<Noun>
<Noun Phrase> → <Article><Noun>
<Article> → a | an | the
<Noun> → boy | apple



Context free grammar

🡪 Terminal symbol:
🡪 A symbol in the alphabet.
🡪 It is denoted by lower case letters and the punctuation marks used in the language.

<Noun Phrase> → <Article><Noun>


<Article> → a | an | the
<Noun> → boy | apple



Context free grammar

🡪 Start symbol:
🡪 The first nonterminal symbol of the grammar is called the start symbol.

<Noun Phrase> → <Article><Noun>


<Article> → a | an | the
<Noun> → boy | apple



Context free grammar

🡪 Production:
🡪 A production, also called a rewriting rule, is a rule of the grammar. It has the form
Nonterminal symbol → String of terminal and nonterminal symbols

<Noun Phrase> → <Article><Noun>


<Article> → a | an | the
<Noun> → boy | apple



Example: Context Free Grammar
Write non terminals, terminals, start symbol, and productions for following grammar.
E 🡪 E O E | (E) | id
O🡪+|-|*|/ |↑

Non terminals: E, O
Terminals: id + - * / ↑ ( )
Start symbol: E
Productions: E 🡪 E O E | (E) | id
O🡪+|-|*|/ |↑



Ambiguous grammar
Ambiguity
🡪 Ambiguity means a word, phrase, or statement that has more than one meaning.

Example: the word “chip” can mean a long thin piece of potato or a small piece of silicon.





Ambiguous grammar
🡪 An ambiguous grammar is one that produces more than one leftmost or more than one rightmost
derivation for the same sentence.
🡪 Grammar: S🡪S+S | S*S | (S) | a Output string: a+a*a

Leftmost derivation 1:  S 🡪 S*S 🡪 S+S*S 🡪 a+S*S 🡪 a+a*S 🡪 a+a*a
Leftmost derivation 2:  S 🡪 S+S 🡪 a+S 🡪 a+S*S 🡪 a+a*S 🡪 a+a*a
(The two derivations correspond to two different parse trees.)
🡪 Here, two leftmost derivations are possible for the string a+a*a; hence the above grammar is ambiguous.



Parsing
🡪 Parsing is a technique that takes an input string and produces either a parse tree, if the string is a
valid sentence of the grammar, or an error message indicating that the string is not valid.
Types of Parsing

Top down parsing: the parser builds the parse tree from the root (top) down to the leaves.
Bottom up parsing: the parser starts from the leaves and works up to the root.
Grammar: S🡪aABe, A🡪Abc | b, B🡪d        String: abbcde
(The slide shows the corresponding top down and bottom up parse trees for abbcde.)
Classification of Parsing
Classification of parsing
🡪 Top down parsing
▪ Back tracking
▪ Parsing without backtracking (predictive parsing)
• LL(1)
• Recursive descent
🡪 Bottom up parsing (Shift reduce)
▪ Operator precedence
▪ LR parsing
• SLR
• CLR
• LALR
Recursive descent parsing
🡪 A top down parser that executes a set of recursive procedures to process the input without
backtracking is called a recursive descent parser.
🡪 There is a procedure for each non terminal in the grammar.
🡪 Consider RHS of any production rule as definition of the procedure.
🡪 As it reads expected input symbol, it advances input pointer to next position.



Example: Recursive descent parsing
Grammar:  E 🡪 num T
          T 🡪 * num T | 𝜖

Procedure E()
{
    If lookahead = num
    {
        Match(num);
        T();
    }
    Else
        Error();
    If lookahead = $
        Declare success;
    Else
        Error();
}

Procedure T()
{
    If lookahead = ‘*’
    {
        Match(‘*’);
        If lookahead = num
        {
            Match(num);
            T();
        }
        Else
            Error();
    }
    Else
        NULL;                      // T 🡪 𝜖
}

Procedure Match(token t)
{
    If lookahead = t
        lookahead = next_token;
    Else
        Error();
}

Procedure Error()
{
    Print(“Error”);
}

Input: 3 * 4 $   →   Success


Example: Recursive descent parsing
Tracing the same procedures on two inputs:
3 * 4 $   →   Success
3 4 * $   →   Error (after matching 3, T derives 𝜖 because the lookahead is 4, and E then reports an error since the remaining input does not start with $)
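The pseudocode above translates directly into a small program. Below is a minimal C sketch (our own illustration, not part of the original slides) of the same recursive descent parser for E 🡪 num T, T 🡪 * num T | 𝜖; it treats a single digit as num and '$' as the end-of-input marker.

/* Minimal recursive descent parser for  E -> num T,  T -> * num T | epsilon.
   Sketch for illustration; token handling and helper names are our own choices. */
#include <stdio.h>
#include <ctype.h>
#include <stdlib.h>

static const char *input;            /* string being parsed, '$' terminated   */
static char lookahead;

static void error(void) { printf("Error\n"); exit(1); }

static void next_token(void) {       /* tiny "lexer": one character at a time */
    lookahead = *input ? *input++ : '$';
    if (lookahead == ' ') next_token();          /* skip blanks */
}

static void match(char t) {          /* Procedure Match(token t)              */
    if (lookahead == t) next_token();
    else error();
}

static void T(void) {                /* T -> * num T | epsilon                */
    if (lookahead == '*') {
        match('*');
        if (isdigit((unsigned char)lookahead)) { match(lookahead); T(); }
        else error();
    }                                /* else: T -> epsilon, do nothing        */
}

static void E(void) {                /* E -> num T, then expect end marker $  */
    if (isdigit((unsigned char)lookahead)) { match(lookahead); T(); }
    else error();
    if (lookahead == '$') printf("Success\n");
    else error();
}

int main(void) {
    input = "3 * 4 $";               /* try "3 4 * $" to see the Error case   */
    next_token();                    /* load the first lookahead              */
    E();
    return 0;
}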
Backtracking
Backtracking
🡪 In backtracking, while expanding a nonterminal symbol we choose one alternative; if a mismatch
occurs, we backtrack and try another alternative.
🡪 Grammar: S🡪 cAd Input string: cad
A🡪 ab | a

For the input cad the parser first expands A to ab; a matches, but b does not match the remaining input d, so the parser backtracks, predicts A 🡪 a instead, and parsing is done.



First & Follow
Rules to compute FIRST of a non terminal
🡪 Rule 1: If X is a terminal, then FIRST(X) = { X }.
🡪 Rule 2: If X 🡪 𝜖 is a production, then add 𝜖 to FIRST(X).
🡪 Rule 3: If X 🡪 Y1 Y2 … Yk is a production, then add FIRST(Y1) to FIRST(X); if Y1 can derive 𝜖, also add FIRST(Y2), and so on. If all of Y1 … Yk can derive 𝜖, add 𝜖 to FIRST(X).




Rules to compute FOLLOW of a non terminal
🡪 Rule 1: Place $ in FOLLOW(S), where S is the start symbol.
🡪 Rule 2: If there is a production A 🡪 αBβ, then everything in FIRST(β) except 𝜖 is placed in FOLLOW(B).
🡪 Rule 3: If there is a production A 🡪 αB, or a production A 🡪 αBβ where FIRST(β) contains 𝜖, then everything in FOLLOW(A) is placed in FOLLOW(B).


Example-1: First & Follow
Compute FIRST E🡪TE’
First(E) E’🡪+TE’ | ϵ
E 🡪 T E’ Rule 3 T🡪FT’
E🡪TE’ A 🡪 Y1 Y2 First(A)=First(Y1) T’🡪*FT’ | ϵ
F🡪(E) | id
FIRST(E)=FIRST(T) = {(, id }

NT First
First(T) E { (,id }
T 🡪 F T’ Rule 3 E’
T🡪FT’
A 🡪 Y1 Y2 First(A)=First(Y1)
T { (,id }
FIRST(T)=FIRST(F)= {(, id } T’
First(F) F { (,id }
F🡪(E) F🡪id F id
F 🡪 ( E ) 🡪
A 🡪 A 🡪

FIRST(F)={ ( , id }
Example-1: First & Follow
Compute FIRST E🡪TE’
First(E’) E’🡪+TE’ | ϵ
T🡪FT’
E’🡪+TE’ T’🡪*FT’ | ϵ
F🡪(E) | id
E’ 🡪 + T E’
A 🡪 NT First
E { (,id }

E’🡪𝜖 E’ { +, 𝜖 }
T { (,id }
T’
E’ 🡪
F { (,id }
A 🡪

FIRST(E’)={ + , 𝜖 }
Example-1: First & Follow
Compute FIRST E🡪TE’
First(T’) E’🡪+TE’ | ϵ
T🡪FT’
T’🡪*FT’ T’🡪*FT’ | ϵ
F🡪(E) | id
T’ 🡪 * F T’
A 🡪 NT First
E { (,id }

T’🡪𝜖 E’ { +, 𝜖 }
T { (,id }
T’ { *, 𝜖 }
T’ 🡪
F { (,id }
A 🡪

FIRST(T’)={ * , 𝜖 }
Example-1: First & Follow
Compute FOLLOW E🡪TE’
FOLLOW(E) E’🡪+TE’ | ϵ
T🡪FT’
Rule 1: Place $ in FOLLOW(E) T’🡪*FT’ | ϵ
F🡪(E) | id
F🡪(E)
NT First Follow
E { (,id } { $,) }
E’ { +, 𝜖 }
F 🡪 ( E ) Rule 2 T { (,id }
A 🡪 B
T’ { *, 𝜖 }
F { (,id }

FOLLOW(E)={ $, ) }



Example-1: First & Follow
E🡪TE’
Compute FOLLOW E’🡪+TE’ | ϵ
FOLLOW(E’) T🡪FT’
T’🡪*FT’ | ϵ
E🡪TE’ F🡪(E) | id
NT First Follow
E 🡪 T E’ Rule 3 E { (,id } { $,) }
A 🡪 B
E’ { +, 𝜖 } { $,) }
T { (,id }
E’🡪+TE’ T’ { *, 𝜖 }
F { (,id }
E’ 🡪 +T E’ Rule 3
A 🡪 B

FOLLOW(E’)={ $,) }



Example-1: First & Follow
Compute FOLLOW E🡪TE’
FOLLOW(T) E’🡪+TE’ | ϵ
T🡪FT’
E🡪TE’ T’🡪*FT’ | ϵ
F🡪(E) | id
NT First Follow
E 🡪 T E’ Rule 2 E { (,id } { $,) }
A 🡪 B
E’ { +, 𝜖 } { $,) }
T { (,id }
T’ { *, 𝜖 }
F { (,id }
E 🡪 T E’ Rule 3
A 🡪 B

FOLLOW(T)={ +, $, ) }
Example-1: First & Follow
Compute FOLLOW E🡪TE’
FOLLOW(T) E’🡪+TE’ | ϵ
T🡪FT’
E’🡪+TE’ T’🡪*FT’ | ϵ
F🡪(E) | id
NT First Follow
E’ 🡪 + T E’ Rule 2 E { (,id } { $,) }
A 🡪 B
E’ { +, 𝜖 } { $,) }
T { (,id } { +,$,) }
T’ { *, 𝜖 }
F { (,id }
E’ 🡪 + T E’ Rule 3
A 🡪 B

FOLLOW(T)={ +, $, ) }
Example-1: First & Follow
Compute FOLLOW E🡪TE’
FOLLOW(T’) E’🡪+TE’ | ϵ
T🡪FT’
T🡪FT’ T’🡪*FT’ | ϵ
F🡪(E) | id
NT First Follow
T 🡪 F T’ Rule 3 E { (,id } { $,) }
A 🡪 B
E’ { +, 𝜖 } { $,) }
T’🡪*FT’ T { (,id } { +,$,) }
T’ { *, 𝜖 } { +,$,) }
F { (,id }
T’ 🡪 *F T’ Rule 3
A 🡪 B

FOLLOW(T’)={ +, $, ) }
Example-1: First & Follow
Compute FOLLOW E🡪TE’
FOLLOW(F) E’🡪+TE’ | ϵ
T🡪FT’
T🡪FT’ T’🡪*FT’ | ϵ
F🡪(E) | id
NT First Follow
T 🡪 F T’ Rule 2 E { (,id } { $,) }
A 🡪 B
E’ { +, 𝜖 } { $,) }
T { (,id } { +,$,) }
T’ { *, 𝜖 } { +,$,) }
F { (,id }
T 🡪 F T’ Rule 3
A 🡪 B

FOLLOW(F)={ *, +, $, ) }
Example-1: First & Follow
Compute FOLLOW E🡪TE’
FOLLOW(F) E’🡪+TE’ | ϵ
T🡪FT’
T’🡪*FT’ T’🡪*FT’ | ϵ
F🡪(E) | id
NT First Follow
T’ 🡪 * F T’ Rule 2 E { (,id } { $,) }
A 🡪 B
E’ { +, 𝜖 } { $,) }
T { (,id } { +,$,) }
T’ { *, 𝜖 } { +,$,) }
F { (,id } {*,+,$,)}
T’ 🡪 * F T’ Rule 3
A 🡪 B

FOLLOW(F)={ *,+, $, ) }
Example-2: First & Follow
S🡪ABCDE
A🡪 a | 𝜖
B🡪 b | 𝜖
C🡪 c NT First Follow
D🡪 d | 𝜖 S {a,b,c} {$}
E🡪 e | 𝜖 A {a, 𝜖} {b, c}
B {b, 𝜖} {c}
C {c} {d, e, $}
D {d, 𝜖} {e, $}
E {e, 𝜖} {$}

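These sets can also be computed mechanically by repeated passes over the productions until nothing changes. Below is a minimal C sketch (our own illustration, not from the slides) for the Example-2 grammar, assuming uppercase letters denote non terminals, lowercase letters denote terminals and '#' stands for 𝜖.

/* Iterative FIRST/FOLLOW computation for  S->ABCDE, A->a|#, B->b|#, C->c,
   D->d|#, E->e|#   ('#' = epsilon). Illustrative sketch only. */
#include <stdio.h>
#include <string.h>
#include <ctype.h>

static const char *nts = "SABCDE";
static const char *prods[] = { "S:ABCDE", "A:a", "A:#", "B:b", "B:#",
                               "C:c", "D:d", "D:#", "E:e", "E:#" };
#define NP (int)(sizeof(prods)/sizeof(prods[0]))
#define NT 6

static char first[NT][16], follow[NT][16];        /* sets stored as strings */

static int idx(char c) { return (int)(strchr(nts, c) - nts); }
static int add(char *set, char c) {               /* returns 1 if c was new */
    if (strchr(set, c)) return 0;
    set[strlen(set)] = c; return 1;
}

int main(void) {
    int changed, i, j, k, m;
    do {                                          /* FIRST sets */
        changed = 0;
        for (i = 0; i < NP; i++) {
            int A = idx(prods[i][0]); const char *rhs = prods[i] + 2;
            int nullable = 1;
            for (k = 0; rhs[k] && nullable; k++) {
                char X = rhs[k]; nullable = 0;
                if (X == '#') nullable = 1;
                else if (islower((unsigned char)X)) changed |= add(first[A], X);
                else for (j = 0; first[idx(X)][j]; j++) {
                    if (first[idx(X)][j] == '#') nullable = 1;
                    else changed |= add(first[A], first[idx(X)][j]);
                }
            }
            if (nullable) changed |= add(first[A], '#');
        }
    } while (changed);

    add(follow[idx('S')], '$');                   /* Rule 1 for FOLLOW */
    do {                                          /* FOLLOW sets */
        changed = 0;
        for (i = 0; i < NP; i++) {
            int A = idx(prods[i][0]); const char *rhs = prods[i] + 2;
            for (k = 0; rhs[k]; k++) {
                char B = rhs[k]; int nullable = 1;
                if (!isupper((unsigned char)B)) continue;
                for (m = k + 1; rhs[m] && nullable; m++) {   /* FIRST of what follows B */
                    char X = rhs[m]; nullable = 0;
                    if (islower((unsigned char)X)) changed |= add(follow[idx(B)], X);
                    else for (j = 0; first[idx(X)][j]; j++) {
                        if (first[idx(X)][j] == '#') nullable = 1;
                        else changed |= add(follow[idx(B)], first[idx(X)][j]);
                    }
                }
                if (nullable)                                 /* Rule 3: add FOLLOW(A) */
                    for (j = 0; follow[A][j]; j++) changed |= add(follow[idx(B)], follow[A][j]);
            }
        }
    } while (changed);

    for (i = 0; i < NT; i++)
        printf("%c: FIRST={%s}  FOLLOW={%s}\n", nts[i], first[i], follow[i]);
    return 0;
}

Running it reproduces the table above, e.g. C: FIRST={c} FOLLOW={de$}.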


LL(1) parser (Predictive parser or Non recursive descent parser)
🡪 LL(1) is a non recursive top down parser.
1. The first L indicates that the input is scanned from left to right.
2. The second L means it uses leftmost derivation for the input string.
3. 1 means it uses only one input symbol of look ahead to predict the parsing action.

(Model of an LL(1) parser: an input buffer holding a + b $, a stack containing X Y Z $, the predictive
parsing program, the parsing table M, and the output.)
Model of LL(1) Parser


LL(1) parsing (predictive parsing)
Steps to construct LL(1) parser
1. Remove left recursion / Perform left factoring (if any).
2. Compute FIRST and FOLLOW of non terminals.
3. Construct predictive parsing table.
4. Parse the input string using parsing table.



Rules to construct predictive parsing table
🡪 For each production A 🡪 α of the grammar:
🡪 Rule 1: For each terminal a in FIRST(α), add A 🡪 α to M[A, a].
🡪 Rule 2: If 𝜖 is in FIRST(α), add A 🡪 α to M[A, b] for each terminal b in FOLLOW(A); if 𝜖 is in FIRST(α) and $ is in FOLLOW(A), add A 🡪 α to M[A, $].
🡪 Make each undefined entry of M an error.


Example-1: LL(1) parsing
S🡪aBa
B🡪bB |
NT First
ϵ
Step 1: Not required S {a}

Step 2: Compute FIRST B {b,𝜖}

First(S) S 🡪 a B a
S🡪aBa A 🡪
FIRST(S)={ a }

First(B)
B🡪bB B🡪𝜖

B 🡪 b B B 🡪 𝜖
A 🡪 A 🡪

FIRST(B)={ b , 𝜖 }
Example-1: LL(1) parsing
S🡪aBa
B🡪bB | NT First Follow
ϵ S {a} {$}
Step 2: Compute FOLLOW B {b,𝜖} {a}
Follow(S)
Rule 1: Place $ in FOLLOW(S)
Follow(S)={ $ }

Follow(B)
S🡪aBa B🡪bB

S 🡪 a B a B 🡪 b B Rule 3
A 🡪 B A 🡪 B Follow(A)=follow(B)

Follow(B)={ a }
Example-1: LL(1) parsing
S🡪aBa
B🡪bB | NT First Follow
ϵ S {a} {$}
Step 3: Prepare predictive parsing table B {b,𝜖} {a}

NT Input Symbol
a b $
S S🡪aBa
B

S🡪aBa
a=FIRST(aBa)={ a }
M[S,a]=S🡪aBa



Example-1: LL(1) parsing
S🡪aBa
B🡪bB | NT First Follow
ϵ S {a} {$}
Step 3: Prepare predictive parsing table B {b,𝜖} {a}

NT Input Symbol
a b $
S S🡪aBa
B B🡪bB

B🡪bB
a=FIRST(bB)={ b }
M[B,b]=B🡪bB



Example-1: LL(1) parsing
S🡪aBa
B🡪bB | NT First Follow
ϵ S {a} {$}
Step 3: Prepare predictive parsing table B {b,𝜖} {a}

NT Input Symbol
a b $
S S🡪aBa Error Error
B B🡪ϵ B🡪bB Error

B🡪ϵ
b=FOLLOW(B)={ a }
M[B,a]=B🡪𝜖

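As an illustration (this trace is not on the original slide), the completed table drives a parse of the string aba as follows:

STACK     INPUT    OUTPUT
S$        aba$
aBa$      aba$     S🡪aBa
Ba$       ba$      (match a)
bBa$      ba$      B🡪bB
Ba$       a$       (match b)
a$        a$       B🡪𝜖
$         $        (match a) – string accepted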


Example-2: LL(1) parsing
S🡪aB | ϵ
B🡪bC | ϵ
C🡪cS | ϵ
Step 1: Not required
NT First
Step 2: Compute FIRST S { a, 𝜖 }
First(S) B {b,𝜖}
S🡪aB S🡪𝜖 C {c,𝜖}

S 🡪 a B S 🡪 𝜖
A 🡪 A 🡪

FIRST(S)={ a , 𝜖 }



Example-2: LL(1) parsing
S🡪aB | ϵ
B🡪bC | ϵ
C🡪cS | ϵ
Step 1: Not required
NT First
Step 2: Compute FIRST S { a, 𝜖 }
First(B) B {b,𝜖}
B🡪bC B🡪𝜖 C {c,𝜖}

B 🡪 b C B 🡪 𝜖
A 🡪 A 🡪

FIRST(B)={ b , 𝜖 }



Example-2: LL(1) parsing
S🡪aB | ϵ
B🡪bC | ϵ
C🡪cS | ϵ
Step 1: Not required
NT First
Step 2: Compute FIRST S { a, 𝜖 }
First(C) B {b,𝜖}
C🡪cS C🡪𝜖 C {c,𝜖}

C 🡪 c S C 🡪 𝜖
A 🡪 A 🡪

FIRST(C)={ c , 𝜖 }


Example-2: LL(1) parsing
Step 2: Compute FOLLOW
Follow(S) Rule 1: Place $ in FOLLOW(S)
Follow(S)={ $ }
C🡪cS S🡪aB | ϵ
B🡪bC | ϵ
C 🡪 c S Rule 3 C🡪cS | ϵ
A 🡪 B Follow(A)=follow(B)
Follow(S)=Follow(C) ={$}
NT First Follow
S {a,𝜖} {$}
B🡪bC S🡪aB B {b,𝜖} {$}
C {c,𝜖} {$}
B 🡪 b C S 🡪 a B Rule 3
Rule 3
A 🡪 B Follow(A)=follow(B) A 🡪 B Follow(A)=follow(B)
Follow(C)=Follow(B) ={$} Follow(B)=Follow(S) ={$}

Example-2: LL(1) parsing
S🡪aB | ϵ
NT First Follow
B🡪bC | ϵ
S {a,𝜖} {$}
C🡪cS | ϵ
B {b,𝜖} {$}
Step 3: Prepare predictive parsing table C {c,𝜖} {$}

N Input Symbol
T a b c $
S S🡪aB
B
C

S🡪aB
a=FIRST(aB)={ a }
M[S,a]=S🡪aB
Example-2: LL(1) parsing
S🡪aB | ϵ
NT First Follow
B🡪bC | ϵ
S {a,𝜖} {$}
C🡪cS | ϵ
B {b,𝜖} {$}
Step 3: Prepare predictive parsing table C {c,𝜖} {$}

N Input Symbol
T a b c $
S S🡪aB S🡪 𝜖
B
C

S🡪𝜖
b=FOLLOW(S)={ $ }
M[S,$]=S🡪𝜖
Example-2: LL(1) parsing
S🡪aB | ϵ
NT First Follow
B🡪bC | ϵ
S {a,𝜖} {$}
C🡪cS | ϵ
B {b,𝜖} {$}
Step 3: Prepare predictive parsing table C {c,𝜖} {$}

N Input Symbol
T a b c $
S S🡪aB S🡪 𝜖
B B🡪bC
C

B🡪bC
a=FIRST(bC)={ b }
M[B,b]=B🡪bC
Example-2: LL(1) parsing
S🡪aB | ϵ
NT First Follow
B🡪bC | ϵ
S {a,𝜖} {$}
C🡪cS | ϵ
B {b,𝜖} {$}
Step 3: Prepare predictive parsing table C {c,𝜖} {$}

N Input Symbol
T a b c $
S S🡪aB S🡪 𝜖
B B🡪bC B🡪𝜖
C

B🡪𝜖
b=FOLLOW(B)={ $ }
M[B,$]=B🡪𝜖
Example-2: LL(1) parsing
S🡪aB | ϵ
NT First Follow
B🡪bC | ϵ
S {a,𝜖} {$}
C🡪cS | ϵ
B {b,𝜖} {$}
Step 3: Prepare predictive parsing table C {c,𝜖} {$}

N Input Symbol
T a b c $
S S🡪aB S🡪 𝜖
B B🡪bC B🡪𝜖
C C🡪cS

C🡪cS
a=FIRST(cS)={ c }
M[C,c]=C🡪cS
Example-2: LL(1) parsing
S🡪aB | ϵ
NT First Follow
B🡪bC | ϵ
S {a,𝜖} {$}
C🡪cS | ϵ
B {b,𝜖} {$}
Step 3: Prepare predictive parsing table C {c,𝜖} {$}

N Input Symbol
T a b c $
S S🡪aB Error Error S🡪 𝜖
B Error B🡪bC Error B🡪𝜖
C Error Error C🡪cS C🡪𝜖

C🡪𝜖
b=FOLLOW(C)={ $ }
M[C,$]=C🡪𝜖
Example-3: LL(1) parsing
E🡪E+T | T
T🡪T*F | F
F🡪(E) | id
Step 1: Remove left recursion
E🡪TE’
E’🡪+TE’ | ϵ
T🡪FT’
T’🡪*FT’ | ϵ
F🡪(E) | id



Example-3: LL(1) parsing
Step 2: Compute FIRST E🡪TE’
First(E) E’🡪+TE’ | ϵ
E 🡪 T E’ Rule 3 T🡪FT’
E🡪TE’ A 🡪 Y1 Y2 First(A)=First(Y1) T’🡪*FT’ | ϵ
F🡪(E) | id
FIRST(E)=FIRST(T) = {(, id }

NT First
First(T) E { (,id }
T 🡪 F T’ Rule 3 E’
T🡪FT’
A 🡪 Y1 Y2 First(A)=First(Y1)
T { (,id }
FIRST(T)=FIRST(F)= {(, id } T’
First(F) F { (,id }
F🡪(E) F🡪id F id
F 🡪 ( E ) 🡪
A 🡪 A 🡪

FIRST(F)={ ( , id }
Example-3: LL(1) parsing
Step 2: Compute FIRST E🡪TE’
First(E’) E’🡪+TE’ | ϵ
T🡪FT’
E’🡪+TE’ T’🡪*FT’ | ϵ
F🡪(E) | id
E’ 🡪 + T E’
A 🡪 NT First
E { (,id }

E’🡪𝜖 E’ { +, 𝜖 }
T { (,id }
T’
E’ 🡪
F { (,id }
A 🡪

FIRST(E’)={ + , 𝜖 }
Example-3: LL(1) parsing
Step 2: Compute FIRST E🡪TE’
First(T’) E’🡪+TE’ | ϵ
T🡪FT’
T’🡪*FT’ T’🡪*FT’ | ϵ
F🡪(E) | id
T’ 🡪 * F T’
A 🡪 NT First
E { (,id }

T’🡪𝜖 E’ { +, 𝜖 }
T { (,id }
T’ { *, 𝜖 }
T’ 🡪
F { (,id }
A 🡪

FIRST(T’)={ * , 𝜖 }
Example-3: LL(1) parsing
Step 2: Compute FOLLOW E🡪TE’
FOLLOW(E) E’🡪+TE’ | ϵ
T🡪FT’
Rule 1: Place $ in FOLLOW(E) T’🡪*FT’ | ϵ
F🡪(E) | id
F🡪(E)
NT First Follow
E { (,id } { $,) }
E’ { +, 𝜖 }
F 🡪 ( E ) Rule 2 T { (,id }
A 🡪 B
T’ { *, 𝜖 }
F { (,id }

FOLLOW(E)={ $, ) }



Example-3: LL(1) parsing
E🡪TE’
Step 2: Compute FOLLOW E’🡪+TE’ | ϵ
FOLLOW(E’) T🡪FT’
T’🡪*FT’ | ϵ
E🡪TE’ F🡪(E) | id
NT First Follow
E 🡪 T E’ Rule 3 E { (,id } { $,) }
A 🡪 B
E’ { +, 𝜖 } { $,) }
T { (,id }
E’🡪+TE’ T’ { *, 𝜖 }
F { (,id }
E’ 🡪 +T E’ Rule 3
A 🡪 B

FOLLOW(E’)={ $,) }



Example-3: LL(1) parsing
Step 2: Compute FOLLOW E🡪TE’
FOLLOW(T) E’🡪+TE’ | ϵ
T🡪FT’
E🡪TE’ T’🡪*FT’ | ϵ
F🡪(E) | id
NT First Follow
E 🡪 T E’ Rule 2 E { (,id } { $,) }
A 🡪 B
E’ { +, 𝜖 } { $,) }
T { (,id }
T’ { *, 𝜖 }
F { (,id }
E 🡪 T E’ Rule 3
A 🡪 B

FOLLOW(T)={ +, $, ) }
Example-3: LL(1) parsing
Step 2: Compute FOLLOW E🡪TE’
FOLLOW(T) E’🡪+TE’ | ϵ
T🡪FT’
E’🡪+TE’ T’🡪*FT’ | ϵ
F🡪(E) | id
NT First Follow
E’ 🡪 + T E’ Rule 2 E { (,id } { $,) }
A 🡪 B
E’ { +, 𝜖 } { $,) }
T { (,id } { +,$,) }
T’ { *, 𝜖 }
F { (,id }
E’ 🡪 + T E’ Rule 3
A 🡪 B

FOLLOW(T)={ +, $, ) }
Example-3: LL(1) parsing
Step 2: Compute FOLLOW E🡪TE’
FOLLOW(T’) E’🡪+TE’ | ϵ
T🡪FT’
T🡪FT’ T’🡪*FT’ | ϵ
F🡪(E) | id
NT First Follow
T 🡪 F T’ Rule 3 E { (,id } { $,) }
A 🡪 B
E’ { +, 𝜖 } { $,) }
T’🡪*FT’ T { (,id } { +,$,) }
T’ { *, 𝜖 } { +,$,) }
F { (,id }
T’ 🡪 *F T’ Rule 3
A 🡪 B

FOLLOW(T’)={ +, $, ) }
Example-3: LL(1) parsing
Step 2: Compute FOLLOW E🡪TE’
FOLLOW(F) E’🡪+TE’ | ϵ
T🡪FT’
T🡪FT’ T’🡪*FT’ | ϵ
F🡪(E) | id
NT First Follow
T 🡪 F T’ Rule 2 E { (,id } { $,) }
A 🡪 B
E’ { +, 𝜖 } { $,) }
T { (,id } { +,$,) }
T’ { *, 𝜖 } { +,$,) }
F { (,id }
T 🡪 F T’ Rule 3
A 🡪 B

FOLLOW(F)={ *, +, $, ) }
Example-3: LL(1) parsing
Step 2: Compute FOLLOW E🡪TE’
FOLLOW(F) E’🡪+TE’ | ϵ
T🡪FT’
T’🡪*FT’ T’🡪*FT’ | ϵ
F🡪(E) | id
NT First Follow
T’ 🡪 * F T’ Rule 2 E { (,id } { $,) }
A 🡪 B
E’ { +, 𝜖 } { $,) }
T { (,id } { +,$,) }
T’ { *, 𝜖 } { +,$,) }
F { (,id } {*,+,$,)}
T’ 🡪 * F T’ Rule 3
A 🡪 B

FOLLOW(F)={ *,+, $, ) }
Example-3: LL(1) parsing
Step 3: Construct predictive parsing table E🡪TE’
E’🡪+TE’ | ϵ
T🡪FT’
NT Input Symbol
T’🡪*FT’ | ϵ
id + * ( ) $ F🡪(E) | id
E E🡪TE’ E🡪TE’
E’ NT First Follow
T E { (,id } { $,) }
T’ E’ { +, 𝜖 } { $,) }
F T { (,id } { +,$,) }
T’ { *, 𝜖 } { +,$,) }
E🡪TE’ F { (,id } {*,+,$,)}
a=FIRST(TE’)={ (,id }
M[E,(]=E🡪TE’
M[E,id]=E🡪TE’
Example-3: LL(1) parsing
Step 3: Construct predictive parsing table E🡪TE’
E’🡪+TE’ | ϵ
T🡪FT’
NT Input Symbol
T’🡪*FT’ | ϵ
id + * ( ) $ F🡪(E) | id
E E🡪TE’ E🡪TE’
E’ E’🡪+TE’ NT First Follow
T E { (,id } { $,) }
T’ E’ { +, 𝜖 } { $,) }
F T { (,id } { +,$,) }
T’ { *, 𝜖 } { +,$,) }
E’🡪+TE’ F { (,id } {*,+,$,)}
a=FIRST(+TE’)={ + }
M[E’,+]=E’🡪+TE’



Example-3: LL(1) parsing
Step 3: Construct predictive parsing table E🡪TE’
E’🡪+TE’ | ϵ
T🡪FT’
NT Input Symbol
T’🡪*FT’ | ϵ
id + * ( ) $ F🡪(E) | id
E E🡪TE’ E🡪TE’
E’ E’🡪+TE’ E’🡪𝜖 E’🡪𝜖 NT First Follow
T E { (,id } { $,) }
T’ E’ { +, 𝜖 } { $,) }
F T { (,id } { +,$,) }
T’ { *, 𝜖 } { +,$,) }
E’🡪𝜖 F { (,id } {*,+,$,)}
b=FOLLOW(E’)={ $,) }
M[E’,$]=E’🡪𝜖
M[E’,)]=E’🡪𝜖
Example-3: LL(1) parsing
Step 3: Construct predictive parsing table E🡪TE’
E’🡪+TE’ | ϵ
T🡪FT’
NT Input Symbol
T’🡪*FT’ | ϵ
id + * ( ) $ F🡪(E) | id
E E🡪TE’ E🡪TE’
E’ E’🡪+TE’ E’🡪𝜖 E’🡪𝜖 NT First Follow
T T🡪FT’ T🡪FT’ E { (,id } { $,) }
T’ E’ { +, 𝜖 } { $,) }
F T { (,id } { +,$,) }
T’ { *, 𝜖 } { +,$,) }
T🡪FT’ F { (,id } {*,+,$,)}
a=FIRST(FT’)={ (,id }
M[T,(]=T🡪FT’
M[T,id]=T🡪FT’
Example-3: LL(1) parsing
Step 3: Construct predictive parsing table E🡪TE’
E’🡪+TE’ | ϵ
T🡪FT’
NT Input Symbol
T’🡪*FT’ | ϵ
id + * ( ) $ F🡪(E) | id
E E🡪TE’ E🡪TE’
E’ E’🡪+TE’ E’🡪𝜖 E’🡪𝜖 NT First Follow
T T🡪FT’ T🡪FT’ E { (,id } { $,) }
T’ T’🡪*FT’ E’ { +, 𝜖 } { $,) }
F T { (,id } { +,$,) }
T’ { *, 𝜖 } { +,$,) }
T’🡪*FT’ F { (,id } {*,+,$,)}
a=FIRST(*FT’)={ * }
M[T’,*]=T’🡪*FT’



Example-3: LL(1) parsing
Step 3: Construct predictive parsing table E🡪TE’
E’🡪+TE’ | ϵ
NT Input Symbol T🡪FT’
id + * ( ) $ T’🡪*FT’ | ϵ
E E🡪TE’ E🡪TE’ F🡪(E) | id

E’ E’🡪+TE’ E’🡪𝜖 E’🡪𝜖


NT First Follow
T T🡪FT’ T🡪FT’
E { (,id } { $,) }
T’ T’🡪𝜖 T’🡪*FT’ T’🡪𝜖 T’🡪𝜖
E’ { +, 𝜖 } { $,) }
F
T { (,id } { +,$,) }
T’🡪𝜖 T’ { *, 𝜖 } { +,$,) }
b=FOLLOW(T’)={ +,$,) } F { (,id } {*,+,$,)}
M[T’,+]=T’🡪𝜖
M[T’,$]=T’🡪𝜖
M[T’,)]=T’🡪𝜖
Example-3: LL(1) parsing
Step 3: Construct predictive parsing table E🡪TE’
E’🡪+TE’ | ϵ
T🡪FT’
NT Input Symbol
T’🡪*FT’ | ϵ
id + * ( ) $ F🡪(E) | id
E E🡪TE’ E🡪TE’
E’ E’🡪+TE’ E’🡪𝜖 E’🡪𝜖 NT First Follow
T T🡪FT’ T🡪FT’ E { (,id } { $,) }
T’ T’🡪𝜖 T’🡪*FT’ T’🡪𝜖 T’🡪𝜖 E’ { +, 𝜖 } { $,) }
F F🡪(E) T { (,id } { +,$,) }
T’ { *, 𝜖 } { +,$,) }
F { (,id } {*,+,$,)}
F🡪(E)
a=FIRST((E))={ ( }
M[F,(]=F🡪(E)
Example-3: LL(1) parsing
Step 3: Construct predictive parsing table E🡪TE’
E’🡪+TE’ | ϵ
T🡪FT’
NT Input Symbol
T’🡪*FT’ | ϵ
id + * ( ) $ F🡪(E) | id
E E🡪TE’ E🡪TE’
E’ E’🡪+TE’ E’🡪𝜖 E’🡪𝜖 NT First Follow
T T🡪FT’ T🡪FT’ E { (,id } { $,) }
T’ T’🡪𝜖 T’🡪*FT’ T’🡪𝜖 T’🡪𝜖 E’ { +, 𝜖 } { $,) }
F F🡪id F🡪(E) T { (,id } { +,$,) }
T’ { *, 𝜖 } { +,$,) }
F { (,id } {*,+,$,)}
F🡪id
a=FIRST(id)={ id }
M[F,id]=F🡪id
Example-3: LL(1) parsing
🡪 Step 4: Make each undefined entry of table be Error
NT Input Symbol
id + * ( ) $
E E🡪TE’ Error Error E🡪TE’ Error Error
E’ Error E’🡪+TE’ Error Error E’🡪𝜖 E’🡪𝜖
T T🡪FT’ Error Error T🡪FT’ Error Error
T’ Error T’🡪𝜖 T’🡪*FT’ Error T’🡪𝜖 T’🡪𝜖
F F🡪id Error Error F🡪(E) Error Error



Example-3: LL(1) parsing
Step 4: Parse the string id + id * id $ using the parsing table above.

STACK        INPUT          OUTPUT
E$           id+id*id$
TE’$         id+id*id$      E🡪TE’
FT’E’$       id+id*id$      T🡪FT’
idT’E’$      id+id*id$      F🡪id
T’E’$        +id*id$
E’$          +id*id$        T’🡪𝜖
+TE’$        +id*id$        E’🡪+TE’
TE’$         id*id$
FT’E’$       id*id$         T🡪FT’
idT’E’$      id*id$         F🡪id
T’E’$        *id$
*FT’E’$      *id$           T’🡪*FT’
FT’E’$       id$
idT’E’$      id$            F🡪id
T’E’$        $
E’$          $              T’🡪𝜖
$            $              E’🡪𝜖
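The trace above is exactly what a table-driven parser does. Below is a minimal C sketch (our own illustration, not from the slides) of the predictive parsing loop for this table; as an encoding assumption, E’ is written internally as e, T’ as t and id as i, so that every grammar symbol fits in one character.

/* Table-driven LL(1) parser for the expression grammar (sketch only). */
#include <stdio.h>
#include <string.h>

static const char *NTS = "EeTtF";     /* nonterminals: E, E'(e), T, T'(t), F */
static const char *TS  = "i+*()$";    /* terminals: id(i), +, *, (, ), $     */

/* M[nonterminal][terminal] = right-hand side, "" = epsilon, NULL = error */
static const char *M[5][6] = {
 /*            id      +       *       (       )      $   */
 /* E  */ { "Te",   NULL,   NULL,   "Te",   NULL,  NULL },
 /* E' */ { NULL,   "+Te",  NULL,   NULL,   "",    ""   },
 /* T  */ { "Ft",   NULL,   NULL,   "Ft",   NULL,  NULL },
 /* T' */ { NULL,   "",     "*Ft",  NULL,   "",    ""   },
 /* F  */ { "i",    NULL,   NULL,   "(E)",  NULL,  NULL },
};

int main(void) {
    const char *input = "i+i*i$";     /* id + id * id $ (symbols assumed valid) */
    char stack[64] = "$E";            /* $ at the bottom, start symbol on top   */
    int top = 1, ip = 0;

    while (1) {
        char X = stack[top], a = input[ip];
        if (X == '$' && a == '$') { printf("Accept\n"); return 0; }
        if (strchr(TS, X)) {                      /* terminal on top of stack   */
            if (X == a) { top--; ip++; }          /* match: pop and advance     */
            else { printf("Error: expected %c\n", X); return 1; }
        } else {                                  /* nonterminal: consult table */
            int r = (int)(strchr(NTS, X) - NTS), c = (int)(strchr(TS, a) - TS);
            const char *rhs = M[r][c];
            if (!rhs) { printf("Error at %c\n", a); return 1; }
            printf("%c -> %s\n", X, *rhs ? rhs : "epsilon");
            top--;                                /* pop X ...                  */
            for (int k = (int)strlen(rhs) - 1; k >= 0; k--)
                stack[++top] = rhs[k];            /* ... push RHS in reverse    */
        }
    }
}

Running it prints the same sequence of productions as the trace and ends with Accept.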
Non- Recursive Predictive Parsing
🡪 Non-recursive predictive parsing uses a parsing table that shows which production rule to
select from the several alternatives available for expanding a given non-terminal, based on the
next input symbol. The parsing table has a row for each non-terminal and a column for each
terminal symbol, including $, the end marker for the input string.
🡪 Each entry M[A, a] in a table is either a production rule or an error.
🡪 It uses a stack containing a sequence of grammar symbols with the $.
🡪 A $ symbol is placed at the bottom to indicate the bottom of the stack. Initially, the start symbol
resides on top of it. The stack is used to keep track of all the non-terminals for which no prediction
has been made yet.
🡪 The parser also uses an input buffer and an output stream.
🡪 The string to be parsed is stored in the input buffer.
🡪 The end of the buffer uses a $ symbol to indicate the end of the input string.



Non- Recursive Predictive Parsing

Fig. Non-recursive predictive parser model (input buffer, stack, parsing program, parsing table M, output)



Non- Recursive Predictive Parsing
❖ Steps to construct Predictive parsing.
🡪 Remove left recursion / perform left factoring (if any).
🡪 Compute FIRST and FOLLOW of nonterminals.
🡪 Construct predictive parsing table.
🡪 Parse the input string with the help of parsing table.
🡪 Example: E🡪E+T | T
T🡪T*F | F
F🡪(E) | id
🡪 Step1: Remove left recursion
E🡪TE’
E’🡪+TE’ | ϵ
T🡪FT’
T’🡪*FT’ | ϵ
F🡪 (E) | id
Non- Recursive Predictive Parsing

Step2: Compute FIRST & FOLLOW


FIRST FOLLOW
E {(,id} {$,)}
E’ {+,ϵ} {$,)}
T {(,id} {+,$,)}
T’ {*,ϵ} {+,$,)}
F {(,id} {*,+,$,)}

Step3: Predictive Parsing Table


     id        +          *          (        )        $
E    E🡪TE’                           E🡪TE’
E’              E’🡪+TE’                       E’🡪ϵ     E’🡪ϵ
T    T🡪FT’                           T🡪FT’
T’              T’🡪ϵ      T’🡪*FT’             T’🡪ϵ     T’🡪ϵ
F    F🡪id                            F🡪(E)



Non- Recursive Predictive Parsing
Step4: Parse the string
Stack       Input        Action
$E          id+id*id$
$E’T        id+id*id$    E🡪TE’
$E’T’F      id+id*id$    T🡪FT’
$E’T’id     id+id*id$    F🡪id
$E’T’       +id*id$
$E’         +id*id$      T’🡪ϵ
$E’T+       +id*id$      E’🡪+TE’
$E’T        id*id$
$E’T’F      id*id$       T🡪FT’
$E’T’id     id*id$       F🡪id
$E’T’       *id$
$E’T’F*     *id$         T’🡪*FT’
$E’T’F      id$
$E’T’id     id$          F🡪id
$E’T’       $
$E’         $            T’🡪ϵ
$           $            E’🡪ϵ



Error Recovery in Predictive Parsing
🡪 Panic mode error recovery is based on the idea of skipping symbols on the input until a
token in a selected set of synchronizing token appears.
🡪 Its effectiveness depends on the choice of synchronizing set.
🡪 Some heuristics are as follows:
▪ Place ‘synch’ in the entries M[A, a] for every a in FOLLOW(A), for all non terminals A. ‘synch’ indicates
resume the parsing: if the entry is “synch”, the non terminal on the top of the stack is popped in an
attempt to resume parsing.
▪ If we add the symbols in FIRST(A) to the synchronizing set for a non terminal A, then it may
be possible to resume parsing when a symbol in FIRST(A) appears in the input.
▪ If a non terminal can generate the empty string, then the production deriving ε can
be used as a default.
▪ If the parser looks up entry M[A,a] and finds that it is blank, then the input symbol a is skipped.
▪ If a token on top of the stack does not match the input symbol, then we pop the token from the
stack.



Error Recovery in Predictive Parsing

▪ Consider the grammar given below:


E🡪TE’
E’🡪+TE’ | ϵ
T🡪FT’
T’🡪*FT’ | ϵ
F🡪 (E) | id
▪ Insert ‘synch’ at the FOLLOW symbols of all non terminals.
FOLLOW
E {$,)}
E’ {$,)}
T {+,$,)}
T’ {+,$,)}
F {+,*,$,)}

Follow set of non terminals



Error Recovery in Predictive Parsing

NT    id         +          *           (          )         $
E     E🡪TE’                             E🡪TE’      synch     synch
E’               E’🡪+TE’                           E’🡪ε      E’🡪ε
T     T🡪FT’     synch                   T🡪FT’      synch     synch
T’               T’🡪ε      T’🡪*FT’                 T’🡪ε      T’🡪ε
F     F🡪id      synch      synch        F🡪(E)      synch     synch

Synchronizing tokens added to parsing table



Error Recovery in Predictive Parsing
Stack Input Remarks
$E )id*+id$ Error, skip )
$E id*+id$
$E’ T id*+id$
$E’ T’ F id*+id$
$E’ T’ id id*+id$
$E’ T’ *+id$
$E’ T’ F* *+id$
$E’ T’ F +id$ Error, M[F,+]=synch
$E’ T’ +id$ F has been popped.
$E’ +id$
$E’ T+ +id$
$E’ T id$
$E’ T’ F id$
$E’ T’ id id$
$E’ T’ $
$E’ $
$ $
Table: Parsing and error recovery moves made by predictive parser



Handle & Handle pruning
🡪 Handle: A “handle” of a string is a substring of the string that matches the right side of a
production, and whose reduction to the non terminal of the production is one step along the
reverse of rightmost derivation.
🡪 Handle pruning: The process of discovering a handle and reducing it to the appropriate left hand
side non terminal is known as handle pruning.
Right sentential form      Handle      Reducing production
id1+id2*id3                id1         E🡪id
E+id2*id3                  id2         E🡪id
E+E*id3                    id3         E🡪id
E+E*E                      E*E         E🡪E*E
E+E                        E+E         E🡪E+E

Table: Handles



Bottom-Up Parsing
🡪 Construct parse tree for the input string starting at the leaves (bottom) and working up towards
the root (top) (reduction)
🡪 Grammar ( G ) : E 🡪 E + E | E * E | - E | ( E ) | id
🡪 String : id + id + id
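One possible sequence of reductions for this (ambiguous) grammar, corresponding to the left-associative parse, is shown below as an illustration:

id+id+id  ⟹  E+id+id   (reduce id to E by E🡪id)
          ⟹  E+E+id    (reduce id to E by E🡪id)
          ⟹  E+id      (reduce E+E to E by E🡪E+E)
          ⟹  E+E       (reduce id to E by E🡪id)
          ⟹  E         (reduce E+E to E by E🡪E+E)

Read from the last line back to the first, this sequence is a rightmost derivation in reverse.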



Bottom-Up Parsing
🡪 Handle Pruning
🡪 A rightmost derivation in reverse can be obtained by handle pruning.



Derivation
Derivation
🡪 A derivation is a sequence of production rule applications used to obtain the input string.
🡪 To decide which non-terminal to replace at each step, we have two options:
1. Leftmost derivation
2. Rightmost derivation



Leftmost derivation
🡪 A derivation in which only the leftmost non terminal in any sentential form is replaced at each step is
called a leftmost derivation.
(Leftmost derivation and its parse tree; the parse tree represents the structure of the derivation.)



Rightmost derivation
🡪 A derivation in which the rightmost non terminal in any sentential form is replaced at each step is
called a rightmost derivation.
(Rightmost derivation and its parse tree.)
Left Recursion
Left recursion
🡪 A grammar is left recursive if it has a non terminal A such that there is a derivation A ⇒ Aα for some
string α, i.e., a production of the form A 🡪 Aα.
🡪 Top down parsing methods cannot handle left recursive grammars, so the left recursion must be
eliminated.


Left recursion elimination
🡪 A left recursive pair of productions A 🡪 Aα | β can be replaced by the non left recursive productions
A 🡪 βA’
A’ 🡪 αA’ | ε


Examples: Left recursion elimination
E🡪E+T | T
E🡪TE’
E’🡪+TE’ | ε
T🡪T*F | F
T🡪FT’
T’🡪*FT’ | ε
X🡪X%Y | Z
X🡪ZX’
X’🡪%YX’ | ε



Left Factoring
Left factoring
🡪 Left factoring is a grammar transformation that is useful for producing a grammar suitable for
predictive parsing. When two alternatives for a non terminal begin with the same prefix, we factor
out the common prefix.


Left factoring
🡪 The productions A 🡪 αβ | αδ are replaced by
A 🡪 αA’
A’ 🡪 β | δ


Example: Left factoring
S🡪aAB | aCD
S🡪aS’
S’🡪AB | CD
A🡪 xByA | xByAzA | a

A🡪 xByAA’ | a
A’🡪 Є | zA



Operator precedence parsing
Operator precedence parsing
🡪 Operator Grammar: A grammar in which no production right side contains Є and no production right
side has two adjacent non terminals is called an operator grammar.
🡪 Example: E🡪 EAE | (E) | id
A🡪 + | * | -
🡪 The above grammar is not an operator grammar because the right side EAE has two consecutive non terminals.
🡪 In operator precedence parsing we define following disjoint relations:

Relation Meaning
a<.b a “yields precedence to” b
a=b a “has the same precedence as” b
a.>b a “takes precedence over” b
Precedence & associativity of operators

Operator      Precedence      Associativity
↑             1               right
*, /          2               left
+, -          3               left
Steps of operator precedence parsing
1. Find Leading and trailing of non terminal
2. Establish relation
3. Creation of table
4. Parse the string
Leading & Trailing
Leading: the first terminal or operator in the productions of that non terminal.
Trailing: the last terminal or operator in the productions of that non terminal.
Example: E🡪E+T | T
T🡪T*F | F
F🡪id

Non terminal Leading Trailing


E {+,*,id} {+,*,id}
T {*,id} {*,id}
F {id} {id}
Rules to establish a relation
🡪 If a terminal a immediately precedes a non terminal B in some right side (A 🡪 …aB…), then a <. every terminal in Leading(B).
🡪 If a non terminal B immediately precedes a terminal a in some right side (A 🡪 …Ba…), then every terminal in Trailing(B) .> a.
🡪 $ <. every terminal in Leading(S) and every terminal in Trailing(S) .> $, where S is the start symbol.
Example: Operator precedence parsing
🡪 Step 1: Find Leading & Trailing of the non terminals
E🡪 E+T | T
T🡪 T*F | F
F🡪 id

Non terminal   Leading      Trailing
E              {+,*,id}     {+,*,id}
T              {*,id}       {*,id}
F              {id}         {id}

🡪 Step 2: Establish relations      Step 3: Create the table

        +      *      id     $
+       .>     <.     <.     .>
*       .>     .>     <.     .>
id      .>     .>            .>
$       <.     <.     <.
Example: Operator precedence parsing
Step 4: Parse the string using the precedence table
String: id+id*id
Insert precedence relations between the terminals, scanning left to right:
$ id+id*id $
$ <. id +id*id$
$ <. id .> + id*id$
$ <. id .> + <. id *id$
$ <. id .> + <. id .> * id$
$ <. id .> + <. id .> * <. id $
$ <. id .> + <. id .> * <. id .> $
Example: Operator precedence parsing
Step 4: Parse the string using the precedence table        E🡪E+T | T
1. Scan the input string until the first .> is encountered.  T🡪T*F | F
2. Scan backward until <. is encountered.                    F🡪id
3. The handle is the string between <. and .>

$ <. id .> + <. id .> * <. id .> $   Handle id is obtained between <. and .>; reduce by F🡪id
$ F + <. id .> * <. id .> $          Handle id is obtained between <. and .>; reduce by F🡪id
$ F + F * <. id .> $                 Handle id is obtained between <. and .>; reduce by F🡪id
$ F + F * F $                        Perform appropriate reductions of all non terminals.
$ E + T * F $                        Remove all non terminals.
$ + * $                              Place relations between the operators.
$ <. + <. * .> $                     The * operator is surrounded by <. and .>; * becomes the handle, so reduce by T🡪T*F.
$ <. + .> $                          + becomes the handle; hence reduce by E🡪E+T.
$ $                                  Parsing done
Operator precedence function
Operator precedence function
🡪 A precedence table can be encoded by two precedence functions f and g that map terminal symbols to integers such that f(a) < g(b) whenever a <. b, f(a) = g(b) whenever a = b, and f(a) > g(b) whenever a .> b.
Operator precedence function
🡪 E🡪 E+T | T, T🡪 T*F | F, F🡪 id
1. Create the function symbols fa and ga for each terminal a: f+, f*, fid, f$ and g+, g*, gid, g$.
Operator precedence function
2. Partition the symbols into as many groups as possible, in such a way that fa and gb are in the same
group if a = b. (For this grammar no two terminals are related by =, so each of f+, g+, f*, g*, fid, gid,
f$, g$ forms its own group.)
Operator precedence function
3. If a <. b, place an edge from the group of gb to the group of fa;
   if a .> b, place an edge from the group of fa to the group of gb.
For the precedence table above this gives the edges:
f+ 🡪 g+ (+ .> +),   f* 🡪 g+ (* .> +),   fid 🡪 g+ (id .> +),   g+ 🡪 f$ ($ <. +)
f* 🡪 g* (* .> *),   fid 🡪 g* (id .> *),  g* 🡪 f+ (+ <. *),    g* 🡪 f$ ($ <. *)
gid 🡪 f+ (+ <. id),  gid 🡪 f* (* <. id),  gid 🡪 f$ ($ <. id)
f+ 🡪 g$ (+ .> $),   f* 🡪 g$ (* .> $),   fid 🡪 g$ (id .> $)
Operator precedence function
4. If the constructed graph has a cycle, then no precedence functions exist. When there are no cycles,
let f(a) be the length of the longest path beginning at the group of fa, and let g(a) be the length of
the longest path beginning at the group of ga.
For the graph above:

      +    *    id   $
f     2    4    4    0
g     1    3    5    0
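With the precedence functions in hand, the parser only has to compare integers. Below is a minimal C sketch (our own illustration, not from the slides) that drives a parse of id+id*id with the f and g values computed above; as assumptions of the sketch, id is written as i and non terminals are not kept on the stack, since they do not affect precedence decisions.

/* Operator-precedence driver using precedence functions f and g (sketch). */
#include <stdio.h>

static int f(char c) { switch (c) { case '+': return 2; case '*': return 4;
                                    case 'i': return 4; default:  return 0; } }
static int g(char c) { switch (c) { case '+': return 1; case '*': return 3;
                                    case 'i': return 5; default:  return 0; } }

int main(void) {
    const char *input = "i+i*i$";      /* id + id * id $, with id written as i */
    char stack[64] = "$";              /* terminals only; '$' marks the bottom */
    int top = 0, ip = 0;

    while (!(stack[top] == '$' && input[ip] == '$')) {
        char a = stack[top], b = input[ip];
        if (f(a) <= g(b)) {            /* a <. b or a = b : shift b             */
            stack[++top] = b; ip++;
            printf("shift  %c\n", b);
        } else {                       /* a .> b : the terminal on top is the   */
            printf("reduce %c\n", stack[top]);   /* handle's operator; pop it   */
            top--;
        }
    }
    printf("Accept\n");
    return 0;
}

The printed reduce steps (id, id, id, *, +) occur in the same order as the reductions F🡪id, F🡪id, F🡪id, T🡪T*F, E🡪E+T in the worked parse above.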
Introduction to LR Parser
LR parser
🡪 LR parsing is most efficient method of bottom up parsing which can be used to parse large
class of context free grammar.
🡪 The technique is called LR(k) parsing:
1. The “L” is for left to right scanning of input symbol,
2. The “R” for constructing right most derivation in reverse,
3. The “k” for the number of input symbols of look ahead that are used in making parsing
decisions.
(Model of an LR parser: an input buffer holding a + b $, a stack containing X Y Z $, the LR parsing
program, and a parsing table with Action and Goto parts; the parser produces the output.)
Closure & goto function
Computation of the closure & goto functions
Grammar: S🡪AS | b, A🡪SA | a, augmented with S’🡪S

Closure(I) for I = { S’🡪.S }:
I = { S’🡪.S, S🡪.AS, S🡪.b, A🡪.SA, A🡪.a }

Goto(I,S) = { S’🡪S., A🡪S.A, A🡪.SA, A🡪.a, S🡪.AS, S🡪.b }
Goto(I,A) = { S🡪A.S, S🡪.AS, S🡪.b, A🡪.SA, A🡪.a }
Goto(I,a) = { A🡪a. }
Goto(I,b) = { S🡪b. }
SLR Parser
Shift reduce parser
🡪 The shift reduce parser performs following basic operations:
1. Shift: Moving of the symbols from input buffer onto the stack, this action is called shift.
2. Reduce: If handle appears on the top of the stack then reduction of it by appropriate rule is
done. This action is called reduce action.
3. Accept: If stack contains start symbol only and input buffer is empty at the same time then
that action is called accept.
4. Error: A situation in which parser cannot either shift or reduce the symbols, it cannot even
perform accept action then it is called error action.



Viable Prefix
🡪 The set of prefixes of right sentential forms that can appear on the stack of a shift-reduce
parser are called viable prefixes.



Example: SLR(1) – simple LR
Grammar: S 🡪 AA, A 🡪 aA | b
Augmented grammar: S’ 🡪 S, S 🡪 AA, A 🡪 aA | b

LR(0) item sets:
I0: S’🡪 .S, S🡪 .AA, A🡪 .aA, A🡪 .b
I1 = Goto(I0,S): S’🡪 S.
I2 = Goto(I0,A): S🡪 A.A, A🡪 .aA, A🡪 .b
I3 = Goto(I0,a) = Goto(I2,a) = Goto(I3,a): A🡪 a.A, A🡪 .aA, A🡪 .b
I4 = Goto(I0,b) = Goto(I2,b) = Goto(I3,b): A🡪 b.
I5 = Goto(I2,A): S🡪 AA.
I6 = Goto(I3,A): A🡪 aA.
Rules to construct SLR parsing table
🡪 1. Construct the collection of LR(0) item sets { I0, I1, …, In } for the augmented grammar.
🡪 2. If [A🡪α.aβ] is in Ii and Goto(Ii,a) = Ij for terminal a, set Action[i,a] = shift j.
🡪 3. If [A🡪α.] is in Ii (A ≠ S’), set Action[i,a] = reduce A🡪α for every a in FOLLOW(A).
🡪 4. If [S’🡪S.] is in Ii, set Action[i,$] = accept.
🡪 5. If Goto(Ii,A) = Ij for non terminal A, set Goto[i,A] = j.
🡪 All remaining entries are error.
Example: SLR(1) – simple LR
Grammar: S 🡪 AA, A 🡪 aA | b   (productions numbered 1: S🡪AA, 2: A🡪aA, 3: A🡪b)
FOLLOW(S) = { $ },  FOLLOW(A) = { a, b, $ }

Item set      Action                    Go to
              a        b        $       S      A
0             S3       S4               1      2
1                               Accept
2             S3       S4                      5
3             S3       S4                      6
4             R3       R3       R3
5                               R1
6             R2       R2       R2
CLR Parser
How to calculate look ahead?
🡪 For an LR(1) item [A 🡪 α.Bβ, a], Closure adds an item [B 🡪 .γ, b] for every production B 🡪 γ and every
terminal b in FIRST(βa).
Example: S🡪CC, C🡪cC | d
Closure(I) for I = { S’🡪 .S, $ }:
S’🡪 .S, $
S🡪 .CC, $
C🡪 .cC, c|d
C🡪 .d, c|d
Example: CLR(1) – canonical LR
Grammar: S 🡪 AA, A 🡪 aA | b
Augmented grammar: S’ 🡪 S

LR(1) item sets:
I0: S’🡪 .S,$   S🡪 .AA,$   A🡪 .aA,a|b   A🡪 .b,a|b
I1 = Goto(I0,S): S’🡪 S.,$
I2 = Goto(I0,A): S🡪 A.A,$   A🡪 .aA,$   A🡪 .b,$
I3 = Goto(I0,a): A🡪 a.A,a|b   A🡪 .aA,a|b   A🡪 .b,a|b
I4 = Goto(I0,b): A🡪 b.,a|b
I5 = Goto(I2,A): S🡪 AA.,$
I6 = Goto(I2,a): A🡪 a.A,$   A🡪 .aA,$   A🡪 .b,$
I7 = Goto(I2,b): A🡪 b.,$
I8 = Goto(I3,A): A🡪 aA.,a|b
I9 = Goto(I6,A): A🡪 aA.,$
Example: CLR(1) – canonical LR
CLR parsing table (productions numbered 1: S🡪AA, 2: A🡪aA, 3: A🡪b):

Item set      Action                    Go to
              a        b        $       S      A
0             S3       S4               1      2
1                               Accept
2             S6       S7                      5
3             S3       S4                      8
4             R3       R3
5                               R1
6             S6       S7                      9
7                               R3
8             R2       R2
9                               R2
LALR Parser
Example: LALR(1) – look ahead LR
🡪 LALR(1) is obtained from the CLR(1) item sets by merging the sets that have the same LR(0) items and
differ only in their look aheads: I3 and I6 merge into I36 (A🡪 a.A, A🡪 .aA, A🡪 .b with look aheads a|b|$),
I4 and I7 merge into I47 (A🡪 b., a|b|$), and I8 and I9 merge into I89 (A🡪 aA., a|b|$).
🡪 Replacing the merged states in the CLR parsing table above gives the LALR parsing table:

LALR Parsing Table
Item set      Action                    Go to
              a        b        $       S      A
0             S36      S47              1      2
1                               Accept
2             S36      S47                     5
36            S36      S47                     89
47            R3       R3       R3
5                               R1
89            R2       R2       R2
Parser Generator-YACC
YACC tool or YACC Parser Generator
🡪 YACC (Yet Another Compiler-Compiler) is a tool which generates an LALR parser.
🡪 The generated parser takes input from the lexical analyzer (tokens) and produces a parse tree as an output.

Yacc specification (translate.y)  →  Yacc compiler  →  y.tab.c
y.tab.c                           →  C compiler     →  a.out
Input                             →  a.out          →  output

Structure of Yacc Program
🡪 Any Yacc program contains mainly three sections:
1. Declarations
2. Translation rules
3. Supporting C-routines

Structure of the program:
Declarations            (used to declare variables, constants & header files)
%%
Translation rules
%%
Supporting C routines   (all the functions needed are specified here)

Example declarations section:
%{
int x,y;
const int digit=50;
#include <ctype.h>
%}

A production <left side> 🡪 <alt 1> | <alt 2> | …… | <alt n> is written as the translation rule:
<left side> : <alt 1>   {semantic action 1}
            | <alt 2>   {semantic action 2}
            ……
            | <alt n>   {semantic action n}
            ;
Example: Yacc Program
🡪 Program: Write a Yacc program for a simple desk calculator (grammar: E🡪E+T | T, T🡪T*F | F, F🡪(E) | id)

/* Declarations */
%{
#include <ctype.h>
%}
%token DIGIT
%%
/* Translation rules */
line   : expr ‘\n’         { printf(“%d\n”, $1); }
       ;
expr   : expr ‘+’ term     { $$ = $1 + $3; }
       | term
       ;
term   : term ‘*’ factor   { $$ = $1 * $3; }
       | factor
       ;
factor : ‘(’ expr ‘)’      { $$ = $2; }
       | DIGIT
       ;
%%
/* Supporting C routines */
yylex()
{
    int c;
    c = getchar();
    if (isdigit(c))
    {
        yylval = c - ’0’;
        return DIGIT;
    }
    return c;
}
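On a typical Unix system the specification can be tried as follows (assuming it is saved as translate.y; the Yacc library -ly supplies the default main() and yyerror() routines):

yacc translate.y
cc y.tab.c -ly
./a.out

Typing a single-digit expression such as 2+3*4 followed by a newline should print 14.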
References
Books:
1. Compilers Principles, Techniques and Tools, PEARSON Education (Second Edition)
Authors: Alfred V. Aho, Monica S. Lam, Ravi Sethi, Jeffrey D. Ullman
2. Compiler Design, PEARSON (for Gujarat Technological University)
Authors: Alfred V. Aho, Ravi Sethi, Jeffrey D. Ullman
