Compiler Design

Lecture 13

Predictive Parsing Algorithms

Topics Covered
 Predictive Parser
 Left Recursive Grammars
 Constructing the Parsing Table
 LR Parsing
 SLR Parsing

Predictive Parser:
Generalities
 In many cases, by carefully writing a
grammar—eliminating left recursion
from it and left factoring the resulting
grammar—we can obtain a grammar
that can be parsed by a recursive-
descent parser that needs no
backtracking.
 Such parsers are called predictive
parsers.

Left-Recursive Grammars I
 A grammar is left recursive if it has a
nonterminal A such that there is a derivation
A ⇒+ Aα, for some string α
 Top-down parsers can loop forever when
facing a left-recursive rule. Therefore, such
rules need to be eliminated.
 A left-recursive rule such as A → Aα | β
can be eliminated by replacing it with:
◦ A → βR, where R is a new non-terminal
◦ R → αR | є, where є is the empty string
 The new grammar is right-recursive

Left-Recursive Grammars II
 The general procedure for removing direct left
recursion (recursion that occurs in one rule) is the
following:
◦ Group the A-rules as
A → Aα1 | … | Aαm | β1 | β2 | … | βn
where none of the β’s begins with A
◦ Replace the original A-rules with
 A → β1A’ | β2A’ | … | βnA’
 A’ → α1A’ | α2A’ | … | αmA’ | є
(a Python sketch of this transformation follows)
 This procedure will not eliminate indirect left recursion
of the kind:
◦ A → BaA
◦ B → Ab
[Another procedure exists that is not given here.]
 Direct or indirect left recursion is problematic for all
top-down parsers. However, it is not a problem for
bottom-up parsing algorithms.
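
The transformation above is mechanical enough to sketch in code. Below is a minimal Python sketch of direct-left-recursion removal under some simplifying assumptions: the grammar is a dict mapping a nonterminal to a list of alternatives (each a list of symbols), "eps" marks the empty string, and the function name is an illustrative choice rather than anything standard.

def eliminate_direct_left_recursion(grammar, A):
    """Replace A -> A a1 | ... | A am | b1 | ... | bn
    with   A  -> b1 A' | ... | bn A'
           A' -> a1 A' | ... | am A' | eps"""
    alphas, betas = [], []
    for alt in grammar[A]:
        if alt and alt[0] == A:          # A-recursive alternative: A alpha
            alphas.append(alt[1:])
        else:                            # non-recursive alternative: beta
            betas.append(alt)
    if not alphas:                       # nothing to do
        return grammar
    A_prime = A + "'"
    grammar[A] = [beta + [A_prime] for beta in betas]
    grammar[A_prime] = [alpha + [A_prime] for alpha in alphas] + [["eps"]]
    return grammar

# Example: E -> E + T | T  becomes  E -> T E',  E' -> + T E' | eps
g = {"E": [["E", "+", "T"], ["T"]]}
print(eliminate_direct_left_recursion(g, "E"))
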
Left-Recursive Grammars III
 Here is an example of a (directly) left-
recursive grammar:
E → E + T | T
T → T * F | F
F → ( E ) | id
 This grammar can be re-written as the
following non-left-recursive grammar:

E → T E’     E’ → + T E’ | є
T → F T’     T’ → * F T’ | є
F → ( E ) | id

Left-Factoring a Grammar I
 Left recursion is not the only trait that
makes a grammar unsuitable for top-down
parsing.
 The parser must also be able to choose the
correct right-hand side on the basis of the
next token of input alone, i.e., the first token
generated by the leftmost nonterminal in the
current derivation.
 To ensure that this is possible, we need to
left-factor the non-left-recursive grammar
generated in the previous step.

Left-Factoring a Grammar II
 Here is the procedure used to left-factor a
grammar:
◦ For each non-terminal A, find the longest prefix α
common to two or more of its alternatives.
◦ Replace all the A-productions
A → αβ1 | αβ2 | … | αβn | γ
(where γ represents all alternatives that do not
begin with α)
◦ by:
A → αA’ | γ
A’ → β1 | β2 | … | βn
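
As with left-recursion removal, one left-factoring step can be sketched directly from the procedure above. The representation (alternatives as symbol lists, "eps" for the empty string) and all helper names are illustrative assumptions, and only a single factoring pass is shown; a full tool would repeat it until nothing changes.

def common_prefix(a, b):
    """Longest common prefix of two symbol lists."""
    out = []
    for x, y in zip(a, b):
        if x != y:
            break
        out.append(x)
    return out

def left_factor_once(grammar, A):
    alts = grammar[A]
    # Find the longest prefix alpha shared by at least two alternatives.
    alpha = []
    for i in range(len(alts)):
        for j in range(i + 1, len(alts)):
            p = common_prefix(alts[i], alts[j])
            if len(p) > len(alpha):
                alpha = p
    if not alpha:
        return grammar                      # already left-factored
    A_prime = A + "'"
    with_alpha = [a for a in alts if a[:len(alpha)] == alpha]
    others = [a for a in alts if a[:len(alpha)] != alpha]
    # A -> alpha A' | gamma...,   A' -> beta1 | beta2 | ...
    grammar[A] = [alpha + [A_prime]] + others
    grammar[A_prime] = [a[len(alpha):] or ["eps"] for a in with_alpha]
    return grammar

# The dangling-else grammar from the next slide:
g = {"S": [["i", "E", "t", "S"], ["i", "E", "t", "S", "e", "S"], ["a"]],
     "E": [["b"]]}
print(left_factor_once(g, "S"))
# S -> i E t S S' | a,   S' -> eps | e S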

Left-Factoring a Grammar III
 Here is an example of a common grammar that
needs left factoring:

S → iEtS | iEtSeS | a
E → b
( i stands for “if”; t stands for “then”; and e stands for “else”)

 Left factored, this grammar becomes:

S → iEtSS’ | a
S’ → eS | є
E → b

Predictive Parser: Details
 The key problem during predictive parsing is that
of determining the production to be applied for a
non-terminal.

 This is done by using a parsing table.

 A parsing table is a two-dimensional array M[A, a],
where A is a non-terminal and a is a terminal or
the symbol $, meaning “end of input string”.

 The other inputs of a predictive parser are:
◦ The input buffer, which contains the string to be
parsed followed by $.
◦ The stack, which contains a sequence of grammar
symbols; initially it holds $S (end-of-input marker
and start symbol).

Predictive Parser: Informal
Procedure
 The predictive parser considers X, the
symbol on top of the stack, and a, the
current input symbol. It uses M, the parsing
table, as sketched below.
◦ If X = a = $: halt and return success
◦ If X = a ≠ $: pop X off the stack and advance the
input pointer to the next symbol
◦ If X is a non-terminal: check M[X, a]
 If the entry is a production rule, then replace X on the
stack by the right-hand side of the production
 If the entry is blank, then halt and return failure

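A minimal Python sketch of this loop, assuming tokens are already split (no lexer) and using a hand-transcribed copy of the parsing table that appears on the next slide; the names TABLE, NONTERMINALS and predictive_parse are my own. Run on id+id*id it prints the same sequence of productions as the trace on that slide.

TABLE = {
    ("E",  "id"): ["T", "E'"], ("E",  "("): ["T", "E'"],
    ("E'", "+"):  ["+", "T", "E'"], ("E'", ")"): [], ("E'", "$"): [],
    ("T",  "id"): ["F", "T'"], ("T",  "("): ["F", "T'"],
    ("T'", "+"):  [], ("T'", "*"): ["*", "F", "T'"],
    ("T'", ")"):  [], ("T'", "$"): [],
    ("F",  "id"): ["id"], ("F",  "("): ["(", "E", ")"],
}
NONTERMINALS = {"E", "E'", "T", "T'", "F"}

def predictive_parse(tokens, start="E"):
    tokens = tokens + ["$"]
    stack = ["$", start]                          # bottom-to-top
    i = 0
    while True:
        X, a = stack[-1], tokens[i]
        if X == a == "$":
            return True                           # success
        if X == a:                                # match a terminal
            stack.pop(); i += 1
        elif X in NONTERMINALS and (X, a) in TABLE:
            stack.pop()
            stack.extend(reversed(TABLE[X, a]))   # push RHS, leftmost symbol on top
            print(f"{X} -> {' '.join(TABLE[X, a]) or 'eps'}")
        else:
            return False                          # blank entry: report an error

print(predictive_parse(["id", "+", "id", "*", "id"]))
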
Predictive Parser: An Example

Parsing Table

     id        +           *           (         )         $
E    E → TE’                           E → TE’
E’             E’ → +TE’                         E’ → є    E’ → є
T    T → FT’                           T → FT’
T’             T’ → є      T’ → *FT’             T’ → є    T’ → є
F    F → id                            F → (E)

Algorithm Trace

Stack       Input       Output
$E          id+id*id$
$E’T        id+id*id$   E → TE’
$E’T’F      id+id*id$   T → FT’
$E’T’id     id+id*id$   F → id
$E’T’       +id*id$
$E’         +id*id$     T’ → є
$E’T+       +id*id$     E’ → +TE’
$E’T        id*id$
$E’T’F      id*id$      T → FT’
$E’T’id     id*id$      F → id
$E’T’       *id$
$E’T’F*     *id$        T’ → *FT’
$E’T’F      id$
$E’T’id     id$         F → id
$E’T’       $
$E’         $           T’ → є
$           $           E’ → є

Constructing the Parsing Table I:
First and Follow
 First(α) is the set of terminals that begin the
strings derived from α. Follow(A) is the set
of terminals a that can appear immediately to
the right of A in some sentential form. First and
Follow are used in the construction of the
parsing table.
 Computing First:
◦ If X is a terminal, then First(X) is {X}
◦ If X → є is a production, then add є to First(X)
◦ If X is a non-terminal and X → Y1 Y2 … Yk is a
production, then place a in First(X) if, for some i, a
is in First(Yi) and є is in all of First(Y1) … First(Yi-1).
Finally, if є is in First(Yj) for all j = 1 … k, then add
є to First(X).

Constructing the Parsing Table II:
First and Follow
 Computing Follow:
◦ Place $ in Follow(S), where S is the start symbol and $
is the input right endmarker.
◦ If there is a production A → αBβ, then everything in
First(β) except for є is placed in Follow(B).
◦ If there is a production A → αB, or a production A →
αBβ where First(β) contains є, then everything in
Follow(A) is in Follow(B).
Example:  E → TE’     E’ → +TE’ | є
          T → FT’     T’ → *FT’ | є
          F → (E) | id

First(E) = First(T) = First(F) = {(, id}
First(E’) = {+, є}
First(T’) = {*, є}
Follow(E) = Follow(E’) = {), $}
Follow(T) = Follow(T’) = {+, ), $}
Follow(F) = {+, *, ), $}
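
These sets can be checked mechanically. The sketch below computes First and Follow for the example grammar by iterating the rules above to a fixed point; the dict-based grammar encoding and the "eps" marker are illustrative choices, not part of the lecture.

GRAMMAR = {
    "E":  [["T", "E'"]],
    "E'": [["+", "T", "E'"], ["eps"]],
    "T":  [["F", "T'"]],
    "T'": [["*", "F", "T'"], ["eps"]],
    "F":  [["(", "E", ")"], ["id"]],
}

def first_of_string(symbols, first):
    """First set of a string of grammar symbols Y1 Y2 ... Yk."""
    out = set()
    for Y in symbols:
        f = first[Y] if Y in first else {Y}   # for a terminal a, First(a) = {a}
        out |= f - {"eps"}
        if "eps" not in f:
            return out
    out.add("eps")                            # every Yi can derive eps
    return out

def compute_first_follow(grammar, start):
    first = {A: set() for A in grammar}
    follow = {A: set() for A in grammar}
    follow[start].add("$")                    # Follow rule 1
    changed = True
    while changed:                            # iterate to a fixed point
        changed = False
        for A, alts in grammar.items():
            for alt in alts:
                body = [s for s in alt if s != "eps"]
                f = first_of_string(body, first)
                if not f <= first[A]:
                    first[A] |= f; changed = True
                for i, B in enumerate(body):  # Follow rules 2 and 3
                    if B not in grammar:
                        continue
                    rest = first_of_string(body[i + 1:], first)
                    add = (rest - {"eps"}) | (follow[A] if "eps" in rest else set())
                    if not add <= follow[B]:
                        follow[B] |= add; changed = True
    return first, follow

first, follow = compute_first_follow(GRAMMAR, "E")
print(first)    # First(E) = First(T) = First(F) = {(, id}, etc.
print(follow)   # Follow(E) = Follow(E') = {), $}, etc.
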
Constructing the Parsing
Table III
 Algorithm for constructing a predictive
parsing table:
1. For each production A → α of the grammar, do
steps 2 and 3.
2. For each terminal a in First(α), add A → α to
M[A, a].
3. If є is in First(α), add A → α to M[A, b] for each
terminal b in Follow(A). If є is in First(α) and $
is in Follow(A), add A → α to M[A, $].
4. Make each undefined entry of M an error.
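
A short sketch of this algorithm, assuming the GRAMMAR, first_of_string and compute_first_follow definitions from the First/Follow sketch above are in scope; a missing key in the resulting dict plays the role of an "error" entry.

def build_ll1_table(grammar, start):
    first, follow = compute_first_follow(grammar, start)
    table = {}                                   # M[A, a]
    for A, alts in grammar.items():
        for alt in alts:
            body = [s for s in alt if s != "eps"]
            f = first_of_string(body, first)
            # step 2: terminals in First(alpha); step 3: Follow(A) if eps in First(alpha)
            targets = (f - {"eps"}) | (follow[A] if "eps" in f else set())
            for a in targets:
                if (A, a) in table and table[A, a] != alt:
                    raise ValueError(f"not LL(1): M[{A}, {a}] is multiply defined")
                table[A, a] = alt
    return table

M = build_ll1_table(GRAMMAR, "E")
print(M["E", "id"], M["E'", "$"])   # ['T', "E'"] ['eps']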

LL(1) Grammars
 A grammar whose parsing table has no multiply-
defined entries is said to be LL(1).
 No ambiguous or left-recursive grammar can be
LL(1).
 A grammar G is LL(1) iff whenever A → α | β are
two distinct productions of G, the following
conditions hold:
◦ For no terminal a do both α and β derive strings
beginning with a
◦ At most one of α and β can derive the empty string
◦ If β can (directly or indirectly) derive є, then α does not
derive any string beginning with a terminal in Follow(A).

Part II
Bottom-Up Parsing
 There are different approaches to bottom-up
parsing. One of them is called shift-reduce
parsing, which in turn has a number of different
instantiations.

 Operator-precedence parsing is one such
method, as is LR parsing, which is much more
general.

 In this course, we will be focusing on LR parsing.
LR parsing itself takes three forms: Simple LR
parsing (SLR), a simple but limited version of LR
parsing; canonical LR parsing, the most powerful
but most expensive version; and LALR, which is
intermediate in cost and power. Our focus in this
lecture will be on SLR parsing.

LR Parsing: Advantages
 LR parsers can be constructed to recognize
virtually all programming-language constructs
for which context-free grammars can be written.

 LR parsing is the most general non-backtracking
shift-reduce method known, yet it is as efficient
as other shift-reduce approaches.

 The class of grammars that can be parsed by
an LR parser is a proper superset of the class
that can be parsed by a predictive parser.

 An LR parser can detect a syntactic error as
soon as it is possible to do so on a left-to-right
scan of the input.

LR Parsing: Drawback/Solution
 The main drawback of LR parsing is that it is
too much work to construct an LR parser by
hand for a typical programming language
grammar.

 Fortunately, specialized tools to construct LR
parsers automatically have been designed.

 With such tools, a user can write a context-free
grammar and have a parser generator
automatically produce a parser for that
grammar.

 An example of such a tool is Yacc, “Yet Another
Compiler-Compiler”.

LR Parsing Algorithms:
Details I
 An LR parser consists of an input, an output, a
stack, a driver program, and a parsing table
that has two parts: action and goto.

 The driver program is the same for all LR
parsers. Only the parsing table changes
from one parser to another.

 The program uses the stack to store a
string of the form s0X1s1X2…Xmsm, where sm
is the top of the stack. The si’s are state
symbols while the Xi’s are grammar
symbols. Together, state and grammar
symbols determine a shift-reduce parsing
decision.

LR Parsing Algorithms:
Details II
 The parsing table consists of two parts: a
parsing action function and a goto function.

 The LR parsing program determines sm,
the state on top of the stack, and ai, the
current input symbol. It then consults
action[sm, ai], which can take one of four
values:
 Shift
 Reduce
 Accept
 Error

LR Parsing Algorithms:
Details III
 If action[sm, ai] = Shift s, where s is a state,
then the parser pushes ai and s on the stack.

 If action[sm, ai] = Reduce A → β, then the
parser pops 2|β| symbols off the stack (the
symbols of β together with their states). If s
is the state now on top of the stack, then
goto[s, A] is consulted and A and the state it
stores are pushed onto the stack.

 If action[sm, ai] = Accept, parsing is completed.

 If action[sm, ai] = Error, the parser has
discovered an error.
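
A compact sketch of this driver loop. The ACTION, GOTO and RULES tables below are hand-transcribed from the grammar and parsing table on the next slides, and the tuple encoding of actions ("s" for shift, "r" for reduce, "acc" for accept) is my own choice.

ACTION = {
    (0, "id"): ("s", 5), (0, "("): ("s", 4),
    (1, "+"): ("s", 6), (1, "$"): ("acc",),
    (2, "+"): ("r", 2), (2, "*"): ("s", 7), (2, ")"): ("r", 2), (2, "$"): ("r", 2),
    (3, "+"): ("r", 4), (3, "*"): ("r", 4), (3, ")"): ("r", 4), (3, "$"): ("r", 4),
    (4, "id"): ("s", 5), (4, "("): ("s", 4),
    (5, "+"): ("r", 6), (5, "*"): ("r", 6), (5, ")"): ("r", 6), (5, "$"): ("r", 6),
    (6, "id"): ("s", 5), (6, "("): ("s", 4),
    (7, "id"): ("s", 5), (7, "("): ("s", 4),
    (8, "+"): ("s", 6), (8, ")"): ("s", 11),
    (9, "+"): ("r", 1), (9, "*"): ("s", 7), (9, ")"): ("r", 1), (9, "$"): ("r", 1),
    (10, "+"): ("r", 3), (10, "*"): ("r", 3), (10, ")"): ("r", 3), (10, "$"): ("r", 3),
    (11, "+"): ("r", 5), (11, "*"): ("r", 5), (11, ")"): ("r", 5), (11, "$"): ("r", 5),
}
GOTO = {(0, "E"): 1, (0, "T"): 2, (0, "F"): 3, (4, "E"): 8, (4, "T"): 2,
        (4, "F"): 3, (6, "T"): 9, (6, "F"): 3, (7, "F"): 10}
RULES = {1: ("E", 3), 2: ("E", 1), 3: ("T", 3),     # production number ->
         4: ("T", 1), 5: ("F", 3), 6: ("F", 1)}     # (head, body length)

def lr_parse(tokens):
    tokens = tokens + ["$"]
    stack, i = [0], 0                         # the stack holds s0 X1 s1 ... Xm sm
    while True:
        s, a = stack[-1], tokens[i]
        act = ACTION.get((s, a))
        if act is None:                       # blank entry: syntax error
            return False
        if act[0] == "s":                     # shift: push a and the new state
            stack += [a, act[1]]; i += 1
        elif act[0] == "r":                   # reduce by production act[1]
            head, n = RULES[act[1]]
            del stack[len(stack) - 2 * n:]    # pop 2*|beta| stack entries
            stack += [head, GOTO[stack[-1], head]]
            print("reduce by production", act[1])
        else:                                 # ("acc",): input accepted
            return True

print(lr_parse(["id", "*", "id", "+", "id"]))   # same reductions as the trace slide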

LR Parsing Example: The
Grammar
1. E → E + T
2. E → T
3. T → T * F
4. T → F
5. F → (E)
6. F → id

LR-Parser Example: The Parsing Table

                     Action                          Goto
State    id     +      *      (      )      $      E    T    F
0        s5                   s4                   1    2    3
1               s6                          acc
2               r2     s7            r2     r2
3               r4     r4            r4     r4
4        s5                   s4                   8    2    3
5               r6     r6            r6     r6
6        s5                   s4                        9    3
7        s5                   s4                             10
8               s6                   s11
9               r1     s7            r1     r1
10              r3     r3            r3     r3
11              r5     r5            r5     r5

LR-Parser Example: Parsing Trace

      Stack                Input           Action
(1)   0                    id * id + id $  Shift
(2)   0 id 5               * id + id $     Reduce by F → id
(3)   0 F 3                * id + id $     Reduce by T → F
(4)   0 T 2                * id + id $     Shift
(5)   0 T 2 * 7            id + id $       Shift
(6)   0 T 2 * 7 id 5       + id $          Reduce by F → id
(7)   0 T 2 * 7 F 10       + id $          Reduce by T → T * F
(8)   0 T 2                + id $          Reduce by E → T
(9)   0 E 1                + id $          Shift
(10)  0 E 1 + 6            id $            Shift
(11)  0 E 1 + 6 id 5       $               Reduce by F → id
(12)  0 E 1 + 6 F 3        $               Reduce by T → F
(13)  0 E 1 + 6 T 9        $               Reduce by E → E + T
(14)  0 E 1                $               Accept

SLR Parsing
 Definition: An LR(0) item of a grammar G is a
production of G with a dot at some position of the
right side.
 Example: A → XYZ yields the four following
items:
◦ A → .XYZ
◦ A → X.YZ
◦ A → XY.Z
◦ A → XYZ.
 The production A → є generates only one item,
A → .
 Intuitively, an item indicates how much of a
production we have seen at a given point in the
parsing process.

SLR Parsing
 To create an SLR parsing table, we define
three new elements:

◦ An augmented grammar for G, the initial
grammar: if S is the start symbol of G, we
add a new start symbol S’ and the
production S’ → S. The purpose of this
new starting production is to indicate to
the parser when it should stop parsing
and accept the input.
◦ The closure operation
◦ The goto function

SLR Parsing:
The Closure Operation
 If I is a set of items for a grammar G,
then closure(I) is the set of items
constructed from I by the two rules:
1. Initially, every item in I is added to
closure(I).
2. If A → α.Bβ is in closure(I) and B → γ
is a production, then add the item B → .γ
to closure(I), if it is not already there. We
apply this rule until no more new items can
be added to closure(I).
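
A small sketch of this closure computation over the augmented expression grammar used later in the lecture, with items represented as (production index, dot position) pairs; the names PRODUCTIONS, NONTERMS and closure are my own.

PRODUCTIONS = [                 # production 0 is the added start production
    ("E'", ["E"]),
    ("E",  ["E", "+", "T"]), ("E", ["T"]),
    ("T",  ["T", "*", "F"]), ("T", ["F"]),
    ("F",  ["(", "E", ")"]), ("F", ["id"]),
]
NONTERMS = {"E'", "E", "T", "F"}

def closure(items):
    """Items are (production index, dot position) pairs."""
    result = set(items)
    while True:
        new = set()
        for (p, dot) in result:
            body = PRODUCTIONS[p][1]
            if dot < len(body) and body[dot] in NONTERMS:   # dot before a nonterminal B
                B = body[dot]
                new |= {(q, 0) for q, (head, _) in enumerate(PRODUCTIONS) if head == B}
        if new <= result:                                   # nothing left to add
            return frozenset(result)
        result |= new

# Closure({[E' -> .E]}) yields the seven items of the example on the next slide.
print(sorted(closure({(0, 0)})))
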
SLR Parsing:
The Closure Operation – Example
Original grammar          Augmented grammar
E → E + T                 0. E’ → E
E → T                     1. E → E + T
T → T * F                 2. E → T
T → F                     3. T → T * F
F → (E)                   4. T → F
F → id                    5. F → (E)
                          6. F → id

Let I = {[E’ → .E]}. Then Closure(I) =
{ [E’ → .E], [E → .E + T], [E → .T], [T → .T * F],
  [T → .F], [F → .(E)], [F → .id] }

SLR Parsing:
The Goto Operation
 Goto(I, X), where I is a set of items and X is
a grammar symbol, is defined as the
closure of the set of all items [A → αX.β]
such that [A → α.Xβ] is in I.
 Example: If I is the set of two items
{[E’ → E.], [E → E.+T]}, then goto(I, +) consists of:
E → E + .T
T → .T * F
T → .F
F → .(E)
F → .id
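
The goto operation is a one-liner on the same representation, assuming the PRODUCTIONS, NONTERMS and closure definitions from the closure sketch above are in scope.

def goto(items, X):
    """Move the dot over X in every item of I that has X right after the dot."""
    moved = {(p, dot + 1) for (p, dot) in items
             if dot < len(PRODUCTIONS[p][1]) and PRODUCTIONS[p][1][dot] == X}
    return closure(moved)

# The example above: I = {[E' -> E.], [E -> E.+T]} is {(0, 1), (1, 1)}.
print(sorted(goto({(0, 1), (1, 1)}, "+")))
# -> the items E -> E + .T, T -> .T * F, T -> .F, F -> .(E), F -> .id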

SLR Parsing:
Sets-of-Items Construction
Procedure items(G’)
  C = {Closure({[S’ → .S]})}
  Repeat
    For each set of items I in C and each
      grammar symbol X such that goto(I, X)
      is not empty and not in C do
        add goto(I, X) to C
  Until no more sets of items can be added
  to C
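
A direct transcription of this procedure into Python, again assuming the PRODUCTIONS, closure and goto helpers from the earlier sketches are in scope.

def items():
    """Canonical collection of sets of LR(0) items, per the procedure above."""
    symbols = {s for _, body in PRODUCTIONS for s in body}
    C = [closure({(0, 0)})]                  # start from Closure({[E' -> .E]})
    while True:
        added = False
        for I in list(C):
            for X in symbols:
                J = goto(I, X)
                if J and J not in C:
                    C.append(J)
                    added = True
        if not added:                        # no more sets of items can be added
            return C

C = items()
print(len(C))    # 12 sets, corresponding to I0 ... I11 on the next slide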

Example: The Canonical LR(0)
collection for grammar G
I0:  E’ → .E,  E → .E + T,  E → .T,  T → .T * F,  T → .F,  F → .(E),  F → .id
I1:  E’ → E.,  E → E.+ T
I2:  E → T.,  T → T.* F
I3:  T → F.
I4:  F → (.E),  E → .E + T,  E → .T,  T → .T * F,  T → .F,  F → .(E),  F → .id
I5:  F → id.
I6:  E → E + .T,  T → .T * F,  T → .F,  F → .(E),  F → .id
I7:  T → T * .F,  F → .(E),  F → .id
I8:  F → (E.),  E → E.+ T
I9:  E → E + T.,  T → T.* F
I10: T → T * F.
I11: F → (E).

Constructing an SLR Parsing
Table
1. Construct C = {I0, I1, …, In}, the collection of
sets of LR(0) items for G’.
2. State i is constructed from Ii. The parsing
actions for state i are determined as
follows:
a. If [A → α.aβ] is in Ii and goto(Ii, a) = Ij, then set
action[i, a] to “shift j”. Here, a must be a
terminal.
b. If [A → α.] is in Ii, then set action[i, a] to
“reduce A → α” for all a in Follow(A); here A
may not be S’.
c. If [S’ → S.] is in Ii, then set action[i, $] to
“accept”.
If any conflicting actions are generated by the
above rules, we say that the grammar is not
SLR(1). The algorithm then fails to produce a
parser.

Constructing an SLR Parsing
Table (cont’d)
3. The goto transitions for state i are
constructed for all nonterminals A using
the rule: If goto(Ii, A) = Ij, then goto[i, A] = j.
4. All entries not defined by rules (2) and (3)
are made “error”.
5. The initial state of the parser is the one
constructed from the set of items
containing [S’ → .S].

See example in class
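
For completeness, here is a sketch of rules (2) and (3) for the example grammar, assuming the PRODUCTIONS, NONTERMS, closure, goto and items definitions from the earlier sketches; the Follow sets are copied from the First/Follow slide and the action encoding matches the LR-driver sketch.

FOLLOW = {"E": {"+", ")", "$"}, "T": {"+", "*", ")", "$"},
          "F": {"+", "*", ")", "$"}}

def build_slr_table():
    C = items()
    action, goto_table = {}, {}
    for i, I in enumerate(C):
        for (p, dot) in I:
            head, body = PRODUCTIONS[p]
            if dot < len(body):                       # rule 2a: shift on a terminal
                a = body[dot]
                if a not in NONTERMS:
                    entry = ("s", C.index(goto(I, a)))
                    if action.setdefault((i, a), entry) != entry:
                        raise ValueError("conflicting actions: not SLR(1)")
            elif head == "E'":                        # rule 2c: accept on [E' -> E.]
                action[i, "$"] = ("acc",)
            else:                                     # rule 2b: reduce on Follow(head)
                for a in FOLLOW[head]:
                    entry = ("r", p)
                    if action.setdefault((i, a), entry) != entry:
                        raise ValueError("conflicting actions: not SLR(1)")
        for A in NONTERMS:                            # rule 3: goto transitions
            J = goto(I, A)
            if J:
                goto_table[i, A] = C.index(J)
    return action, goto_table

action, goto_table = build_slr_table()
# State numbers depend on set-iteration order, so they may be a permutation of
# the slide's 0..11, but the grammar is SLR(1): 36 action and 9 goto entries.
print(len(action), len(goto_table))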
