LR Parsing. Parser Generators. Lecture 7-8

This document discusses bottom-up, or LR, parsing. It begins by introducing bottom-up parsing and noting that it is more general than top-down parsing. It then gives an example grammar, parses the string "int + (int) + (int)" by tracing a rightmost derivation in reverse, and describes how LR parsers use shift and reduce actions, chosen from the contents of a stack and the next input symbol, to build the parse tree from the bottom up.

LR Parsing. Parser Generators.

Lecture 7-8



Bottom-Up Parsing

• Bottom-up parsing is more general than top-down parsing
  – And just as efficient
  – Builds on ideas in top-down parsing
  – Preferred method in practice

• Also called LR parsing
  – L means that tokens are read left to right
  – R means that it constructs a rightmost derivation!


An Introductory Example

• LR parsers don't need left-factored grammars, and can also handle left-recursive grammars

• Consider the following grammar:

    E → E + ( E ) | int

  – Why is this not LL(1)?

• Consider the string: int + ( int ) + ( int )
The Idea

• LR parsing reduces a string to the start symbol by inverting productions:

    str ← input string of terminals
    repeat
      – Identify β in str such that A → β is a production (i.e., str = α β γ)
      – Replace β by A in str (i.e., str becomes α A γ)
    until str = S
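
As a hedged illustration, here is a minimal Python sketch of this loop for the example grammar E → E + ( E ) | int. The handle-finding strategy (scan left to right and take the first matching production) is an assumption that happens to reproduce the reductions shown on the following slides; a real LR parser decides where to reduce with a DFA, as described later.

    # Naive "reduce to the start symbol" loop for E -> E + ( E ) | int.
    PRODUCTIONS = [
        ("E", ["E", "+", "(", "E", ")"]),
        ("E", ["int"]),
    ]

    def reduce_to_start(tokens):
        symbols = list(tokens)
        while symbols != ["E"]:
            for i in range(len(symbols)):
                # Find a production rhs starting at position i (assumption:
                # the leftmost match is the handle, which holds for this grammar).
                hit = next((p for p in PRODUCTIONS
                            if symbols[i:i + len(p[1])] == p[1]), None)
                if hit is not None:
                    lhs, rhs = hit
                    symbols[i:i + len(rhs)] = [lhs]   # replace beta by A
                    break
            else:
                raise ValueError("no handle found in: " + " ".join(symbols))
        return symbols

    print(reduce_to_start("int + ( int ) + ( int )".split()))   # -> ['E']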
A Bottom-up Parse in Detail

The parse tree is built bottom-up over the sentence int + ( int ) + ( int ); the string is rewritten step by step:

    int + (int) + (int)
    E + (int) + (int)        (reduce by E → int)
    E + (E) + (int)          (reduce by E → int)
    E + (int)                (reduce by E → E + (E))
    E + (E)                  (reduce by E → int)
    E                        (reduce by E → E + (E))

A rightmost derivation in reverse.
Important Fact #1

Important Fact #1 about bottom-up parsing:

An LR parser traces a rightmost derivation in reverse


Where Do Reductions Happen

Important Fact #1 has an interesting consequence:
  – Let α β ω be a step of a bottom-up parse
  – Assume the next reduction is by A → β
  – Then ω is a string of terminals!

Why? Because α A ω → α β ω is a step in a rightmost derivation


Notation

• Idea: Split the string into two substrings
  – The right substring (a string of terminals) is as yet unexamined by the parser
  – The left substring has terminals and non-terminals

• The dividing point is marked by a |
  – The | is not part of the string

• Initially, all input is unexamined: | x1 x2 . . . xn
Shift-Reduce Parsing

• Bottom-up parsing uses only two kinds of actions:

    Shift

    Reduce


Shift

Shift: Move | one place to the right
  – Shifts a terminal to the left string

    E + ( | int )   ⇒   E + ( int | )


Reduce

Reduce: Apply an inverse production at the right end of the left string
  – If E → E + ( E ) is a production, then

    E + ( E + ( E ) | )   ⇒   E + ( E | )


Shift-Reduce Example

The complete shift-reduce trace for int + (int) + (int) (the parse tree is built up alongside, bottom-up over the input):

    | int + (int) + (int)$       shift
    int | + (int) + (int)$       reduce by E → int
    E | + (int) + (int)$         shift 3 times
    E + (int | ) + (int)$        reduce by E → int
    E + (E | ) + (int)$          shift
    E + (E) | + (int)$           reduce by E → E + (E)
    E | + (int)$                 shift 3 times
    E + (int | )$                reduce by E → int
    E + (E | )$                  shift
    E + (E) | $                  reduce by E → E + (E)
    E | $                        accept
The Stack

• The left string can be implemented by a stack
  – The top of the stack is the |

• Shift pushes a terminal on the stack

• Reduce pops 0 or more symbols off of the stack (the production rhs) and pushes a non-terminal on the stack (the production lhs), as sketched below
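
A minimal sketch of this stack discipline in Python (the grammar and example are the ones above; this is not yet the full table-driven parser):

    # The left string as a stack; the top of the stack is the dividing mark |.
    stack = []

    def shift(terminal):
        stack.append(terminal)             # move | one place to the right

    def reduce(lhs, rhs_len):
        del stack[len(stack) - rhs_len:]   # pop the production rhs (0 or more symbols)
        stack.append(lhs)                  # push the production lhs

    # First few steps of the running example "int + (int) + (int)":
    shift("int")                           # stack: ['int']
    reduce("E", 1)                         # E -> int, stack: ['E']
    shift("+"); shift("("); shift("int")   # stack: ['E', '+', '(', 'int']
    reduce("E", 1)                         # stack: ['E', '+', '(', 'E']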



Key Issue: When to Shift or Reduce?

• Decide based on the left string (the stack)

• Idea: use a finite automaton (DFA) to decide when to shift or reduce
  – The DFA input is the stack
  – The language consists of terminals and non-terminals

• We run the DFA on the stack and we examine the resulting state X and the token tok after |
  – If X has a transition labeled tok then shift
  – If X is labeled with "A → β on tok" then reduce (see the sketch after this list)
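
A small sketch of this decision step in Python; the table names (dfa_trans, reduce_on) and their encodings are illustrative assumptions, not something defined on the slides:

    # dfa_trans: dict mapping (state, symbol) -> next state
    # reduce_on: dict mapping (state, token)  -> production to reduce by

    def decide(stack, next_token, dfa_trans, reduce_on, start_state=0):
        state = start_state
        for symbol in stack:                   # the DFA input is the stack
            state = dfa_trans[(state, symbol)]
        if (state, next_token) in reduce_on:   # state labeled "A -> beta on tok"
            return ("reduce", reduce_on[(state, next_token)])
        if (state, next_token) in dfa_trans:   # state has a transition on tok
            return ("shift",)
        return ("error",)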
LR(1) Parsing. An Example

[DFA diagram with states 0–11; reduce states are labeled, e.g. "E → int on $, +" (state 1), "E → int on ), +" (state 5), "E → E + (E) on $, +" (state 7), "E → E + (E) on ), +" (state 11); state 2 accepts on $.]

    | int + (int) + (int)$       shift
    int | + (int) + (int)$       E → int
    E | + (int) + (int)$         shift (x3)
    E + (int | ) + (int)$        E → int
    E + (E | ) + (int)$          shift
    E + (E) | + (int)$           E → E+(E)
    E | + (int)$                 shift (x3)
    E + (int | )$                E → int
    E + (E | )$                  shift
    E + (E) | $                  E → E+(E)
    E | $                        accept
Representing the DFA

• Parsers represent the DFA as a 2D table
  – Recall table-driven lexical analysis

• Lines correspond to DFA states

• Columns correspond to terminals and non-terminals

• Typically the columns are split into:
  – Those for terminals: the action table
  – Those for non-terminals: the goto table


Representing the DFA. Example

• The table for a fragment of our DFA (states 3–7). "sN" means shift and go to state N, "gN" means go to state N, and "r …" means reduce:

    State |  int  |      +       |  (   |      )      |      $       |  E
      3   |       |              |  s4  |             |              |
      4   |  s5   |              |      |             |              |  g6
      5   |       |  r E → int   |      |  r E → int  |              |
      6   |       |  s8          |      |  s7         |              |
      7   |       |  r E → E+(E) |      |             |  r E → E+(E) |
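
As a sketch, this fragment could be stored as two Python dictionaries; the state numbers follow the fragment above, while the tuple encoding of the entries is just one possible choice:

    # Action table: indexed by (state, terminal).
    action = {
        (3, "("):   ("shift", 4),
        (4, "int"): ("shift", 5),
        (5, "+"):   ("reduce", "E", ("int",)),
        (5, ")"):   ("reduce", "E", ("int",)),
        (6, "+"):   ("shift", 8),
        (6, ")"):   ("shift", 7),
        (7, "+"):   ("reduce", "E", ("E", "+", "(", "E", ")")),
        (7, "$"):   ("reduce", "E", ("E", "+", "(", "E", ")")),
    }

    # Goto table: indexed by (state, non-terminal).
    goto = {
        (4, "E"): 6,
    }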
The LR Parsing Algorithm

• After a shift or reduce action we rerun the DFA on the entire stack
  – This is wasteful, since most of the work is repeated

• Instead, remember for each stack element the state into which it brings the DFA

• The LR parser maintains a stack
    ⟨ sym1, state1 ⟩ . . . ⟨ symn, staten ⟩
  where statek is the final state of the DFA on sym1 … symk
The LR Parsing Algorithm

Let I = w$ be the initial input
Let j = 0
Let DFA state 0 be the start state
Let stack = ⟨ dummy, 0 ⟩
repeat
  case action[top_state(stack), I[j]] of
    shift k:  push ⟨ I[j++], k ⟩
    reduce X → α:
      pop |α| pairs,
      push ⟨ X, Goto[top_state(stack), X] ⟩
    accept:  halt normally
    error:  halt and report error
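
A runnable Python sketch of this driver, assuming action and goto dictionaries in the format of the earlier fragment (extended to cover the whole DFA, including an ("accept",) entry, which is an assumption here):

    def lr_parse(tokens, action, goto, start_state=0):
        # The stack holds <symbol, state> pairs, so the DFA never has to be
        # rerun over the whole stack.
        input_syms = list(tokens) + ["$"]
        j = 0
        stack = [("dummy", start_state)]
        while True:
            state = stack[-1][1]
            entry = action.get((state, input_syms[j]), ("error",))
            if entry[0] == "shift":                  # shift k
                stack.append((input_syms[j], entry[1]))
                j += 1
            elif entry[0] == "reduce":               # reduce X -> alpha
                _, lhs, rhs = entry
                del stack[len(stack) - len(rhs):]    # pop |alpha| pairs
                stack.append((lhs, goto[(stack[-1][1], lhs)]))
            elif entry[0] == "accept":
                return True                          # halt normally
            else:
                raise SyntaxError("parse error at token %d: %r"
                                  % (j, input_syms[j]))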
LR Parsing Notes

• Can be used to parse more grammars than LL

• Most programming language grammars are LR

• Can be described as a simple table

• There are tools for building the table

• How is the table constructed?


Outline

• Review of bottom-up parsing

• Computing the parsing DFA

• Using parser generators



Bottom-up Parsing (Review)

• A bottom-up parser rewrites the input string to the start symbol

• The state of the parser is described as
    α | γ
  – α is a stack of terminals and non-terminals
  – γ is the string of terminals not yet examined

• Initially: | x1 x2 . . . xn


The Shift and Reduce Actions (Review)

• Recall the CFG: E → int | E + (E)

• A bottom-up parser uses two kinds of actions:

• Shift pushes a terminal from the input onto the stack

    E + ( | int )   ⇒   E + ( int | )

• Reduce pops 0 or more symbols off of the stack (the production rhs) and pushes a non-terminal on the stack (the production lhs)

    E + ( E + ( E ) | )   ⇒   E + ( E | )
Key Issue: When to Shift or Reduce?

• Idea: use a finite automaton (DFA) to decide when to shift or reduce
  – The input is the stack
  – The language consists of terminals and non-terminals

• We run the DFA on the stack and we examine the resulting state X and the token tok after |
  – If X has a transition labeled tok then shift
  – If X is labeled with "A → β on tok" then reduce


LR(1) Parsing. An Example

[Review: the same parsing DFA (states 0–11) and shift-reduce trace shown earlier in "LR(1) Parsing. An Example".]
End of review



Key Issue: How is the DFA Constructed?

• The stack describes the context of the parse
  – What non-terminal we are looking for
  – What production rhs we are looking for
  – What we have seen so far from the rhs

• Each DFA state describes several such contexts
  – E.g., when we are looking for non-terminal E, we might be looking either for an int or an E + (E) rhs


LR(1) Items

• An LR(1) item is a pair:
    X → α•β, a
  – X → αβ is a production
  – a is a terminal (the lookahead terminal)
  – LR(1) means 1 lookahead terminal

• [X → α•β, a] describes a context of the parser
  – We are trying to find an X followed by an a, and
  – We have α already on top of the stack
  – Thus we need to see next a prefix derived from βa
Note

• The symbol | was used before to separate the stack from the rest of the input
  – α | γ, where α is the stack and γ is the remaining string of terminals

• In items, • is used to mark a prefix of a production rhs:
    X → α•β, a
  – Here β might contain non-terminals as well

• In both cases the stack is on the left


Convention

• We add to our grammar a fresh new start symbol S and a production S → E
  – Where E is the old start symbol

• The initial parsing context contains:
    S → •E, $
  – Trying to find an S as a string derived from E$
  – The stack is empty


LR(1) Items (Cont.)

• In a context containing
    E → E + • ( E ), +
  – If ( follows, then we can perform a shift to a context containing
    E → E + ( • E ), +

• In a context containing
    E → E + ( E ) •, +
  – We can perform a reduction with E → E + ( E )
  – But only if a + follows
LR(1) Items (Cont.)

• Consider the item
    E → E + ( • E ), +

• We expect a string derived from E ) +

• There are two productions for E
    E → int   and   E → E + ( E )

• We describe this by extending the context with two more items:
    E → • int, )
    E → • E + ( E ), )
The Closure Operation

• The operation of extending the context with items is called the closure operation

    Closure(Items) =
      repeat
        for each [X → α•Yβ, a] in Items
          for each production Y → γ
            for each b ∈ First(βa)
              add [Y → •γ, b] to Items
      until Items is unchanged
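
A Python sketch of Closure, assuming items are tuples (lhs, rhs, dot, lookahead) with rhs a tuple of grammar symbols, and assuming helpers is_nonterminal and first_of (FIRST of a symbol string) are available; neither helper is defined on these slides:

    def closure(items, productions, first_of, is_nonterminal):
        # items: set of (lhs, rhs, dot, lookahead); productions: list of (lhs, rhs)
        items = set(items)
        changed = True
        while changed:
            changed = False
            for (lhs, rhs, dot, a) in list(items):
                if dot < len(rhs) and is_nonterminal(rhs[dot]):
                    y = rhs[dot]
                    beta_a = rhs[dot + 1:] + (a,)            # the string "beta a"
                    for (py, gamma) in productions:
                        if py == y:
                            for b in first_of(beta_a):
                                new_item = (y, gamma, 0, b)  # [Y -> . gamma, b]
                                if new_item not in items:
                                    items.add(new_item)
                                    changed = True
        return frozenset(items)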
Constructing the Parsing DFA (1)

• Construct the start context: Closure({S → •E, $})

    S → •E, $
    E → •E+(E), $
    E → •int, $
    E → •E+(E), +
    E → •int, +

• We abbreviate this as:
    S → •E, $
    E → •E+(E), $/+
    E → •int, $/+
Constructing the Parsing DFA (2)

• A DFA state is a closed set of LR(1) items

• The start state contains [S → •E, $]

• A state that contains [X → α•, b] is labeled with "reduce with X → α on b"

• And now the transitions …
The DFA Transitions

• A state "State" that contains [X → α•yβ, b] has a transition labeled y to a state that contains the items "Transition(State, y)"
  – y can be a terminal or a non-terminal

    Transition(State, y)
      Items ← ∅
      for each [X → α•yβ, b] ∈ State
        add [X → αy•β, b] to Items
      return Closure(Items)
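
A matching sketch of Transition(State, y), reusing the item representation and the closure sketch from above:

    def transition(state_items, y, productions, first_of, is_nonterminal):
        # Advance the dot over y in every item [X -> alpha . y beta, b] ...
        moved = set()
        for (lhs, rhs, dot, b) in state_items:
            if dot < len(rhs) and rhs[dot] == y:
                moved.add((lhs, rhs, dot + 1, b))
        # ... then close the resulting set of items.
        return closure(moved, productions, first_of, is_nonterminal)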
Constructing the Parsing DFA. Example.

    State 0:                        State 1 (from 0 on int):
      S → •E, $                       E → int•, $/+        [E → int on $, +]
      E → •E+(E), $/+
      E → •int, $/+

    State 2 (from 0 on E):          State 3 (from 2 on +):
      S → E•, $      [accept on $]    E → E+•(E), $/+
      E → E•+(E), $/+

    State 4 (from 3 on "("):        State 5 (from 4 on int):
      E → E+(•E), $/+                 E → int•, )/+        [E → int on ), +]
      E → •E+(E), )/+
      E → •int, )/+

    State 6 (from 4 on E):
      E → E+(E•), $/+
      E → E•+(E), )/+

    and so on…
LR Parsing Tables. Notes

• Parsing tables (i.e., the DFA) can be constructed automatically for a CFG

• But we still need to understand the construction to work with parser generators
  – E.g., they report errors in terms of sets of items

• What kind of errors can we expect?


Shift/Reduce Conflicts

• If a DFA state contains both
    [X → α•aβ, b]   and   [Y → γ•, a]

• Then on input "a" we could either
  – Shift into state [X → αa•β, b], or
  – Reduce with Y → γ

• This is called a shift-reduce conflict


Shift/Reduce Conflicts

• Typically due to ambiguities in the grammar

• Classic example: the dangling else
    S → if E then S | if E then S else S | OTHER

• We will have a DFA state containing
    [S → if E then S•, else]
    [S → if E then S• else S, x]

• If else follows, then we can either shift or reduce

• The default (bison, CUP, etc.) is to shift
  – The default behavior is what is needed in this case
More Shift/Reduce Conflicts

• Consider the ambiguous grammar
    E → E + E | E * E | int

• We will have states containing
    [E → E * • E, +]                 [E → E * E•, +]
    [E → • E + E, +]    --E-->       [E → E • + E, +]
    …                                …

• Again we have a shift/reduce conflict on input +
  – We need to reduce (* binds more tightly than +)
  – Recall the solution: declare the precedence of * and +


More Shift/Reduce Conflicts

• In bison, declare precedence and associativity:
    %left +
    %left *

• Precedence of a rule = that of its last terminal
  – See the bison manual for ways to override this default

• Resolve a shift/reduce conflict with a shift if (see the sketch after this list):
  – no precedence is declared for either the rule or the terminal
  – the input terminal has higher precedence than the rule
  – the precedences are the same and the associativity is right
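
A hedged sketch of these rules as code; encoding precedence as an integer (higher binds tighter) and associativity as a string is an assumption for illustration, not bison's actual internals:

    def resolve_shift_reduce(rule_prec, term_prec, term_assoc):
        # Returns True to shift, False to reduce.
        if rule_prec is None or term_prec is None:
            return True                    # no precedence declared: shift
        if term_prec > rule_prec:
            return True                    # terminal binds tighter: shift
        if term_prec == rule_prec and term_assoc == "right":
            return True                    # same precedence, right-associative: shift
        return False                       # otherwise reduce

    # E -> E * E vs. lookahead "+": the rule's precedence is higher, so reduce.
    print(resolve_shift_reduce(rule_prec=2, term_prec=1, term_assoc="left"))  # False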
Using Precedence to Solve S/R Conflicts

• Back to our example:
    [E → E * • E, +]                 [E → E * E•, +]
    [E → • E + E, +]    --E-->       [E → E • + E, +]
    …                                …

• We will choose to reduce, because the precedence of the rule E → E * E is higher than that of the terminal +


Using Precedence to Solve S/R Conflicts

• Same grammar as before
    E → E + E | E * E | int

• We will also have the states
    [E → E + • E, +]                 [E → E + E•, +]
    [E → • E + E, +]    --E-->       [E → E • + E, +]
    …                                …

• Now we also have a shift/reduce conflict on input +
  – We choose to reduce because E → E + E and + have the same precedence and + is left-associative


Using Precedence to Solve S/R Conflicts

• Back to our dangling else example
    [S → if E then S•, else]
    [S → if E then S• else S, x]

• We can eliminate the conflict by declaring else with higher precedence than then
  – Or just rely on the default shift action

• But this starts to look like "hacking the parser"

• Best to avoid overuse of precedence declarations, or you'll end up with unexpected parse trees


Reduce/Reduce Conflicts

• If a DFA state contains both
    [X → α•, a]   and   [Y → β•, a]
  – Then on input "a" we don't know which production to reduce with

• This is called a reduce/reduce conflict


Reduce/Reduce Conflicts

• Usually due to gross ambiguity in the grammar

• Example: a sequence of identifiers
    S → ε | id | id S

• There are two parse trees for the string id
    S → id
    S → id S → id

• How does this confuse the parser?


More on Reduce/Reduce Conflicts

• Consider the states

    [S' → • S, $]                    [S → id •, $]
    [S  → •, $]        --id-->       [S → id • S, $]
    [S  → • id, $]                   [S → •, $]
    [S  → • id S, $]                 [S → • id, $]
                                     [S → • id S, $]

• Reduce/reduce conflict on input $
    S' → S → id
    S' → S → id S → id

• Better to rewrite the grammar: S → ε | id S


Using Parser Generators

• Parser generators construct the parsing DFA given a CFG
  – They use precedence declarations and default conventions to resolve conflicts
  – The parser algorithm is the same for all grammars (and is provided as a library function)

• But most parser generators do not construct the DFA as described before
  – Because the LR(1) parsing DFA has thousands of states even for a simple language
LR(1) Parsing Tables are Big

• But many states are similar, e.g.

    State 1:  E → int•, $/+   [E → int on $, +]     and     State 5:  E → int•, )/+   [E → int on ), +]

• Idea: merge the DFA states whose items differ only in the lookahead tokens
  – We say that such states have the same core

• We obtain state 1':
    E → int•, $/+/)   [E → int on $, +, )]
The Core of a Set of LR Items

• Definition: The core of a set of LR items is the set of first components
  – Without the lookahead terminals

• Example: the core of
    { [X → α•β, b], [Y → γ•δ, d] }
  is
    { X → α•β, Y → γ•δ }


LALR States

• Consider for example the LR(1) states
    { [X → α•, a], [Y → β•, c] }
    { [X → α•, b], [Y → β•, d] }

• They have the same core and can be merged

• The merged state contains:
    { [X → α•, a/b], [Y → β•, c/d] }

• These are called LALR(1) states
  – LALR stands for LookAhead LR
  – There are typically 10 times fewer LALR(1) states than LR(1) states
A LALR(1) DFA

• Repeat until all states have distinct cores (a code sketch follows below)
  – Choose two distinct states with the same core
  – Merge the states by creating a new one with the union of all the items
  – Point edges from predecessors to the new state
  – The new state points to all the previous successors

    Before: A → B → C  and  D → E → F
    After merging B and E into BE: A → BE → C  and  D → BE → F
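
A sketch of the merging step in Python, reusing the (lhs, rhs, dot, lookahead) item tuples from the closure sketch; recomputing the DFA edges after merging is omitted:

    def core(state):
        # The core: the items without their lookahead terminals.
        return frozenset((lhs, rhs, dot) for (lhs, rhs, dot, _la) in state)

    def merge_by_core(lr1_states):
        # Group LR(1) states by core and take the union of their items.
        merged = {}
        for state in lr1_states:
            c = core(state)
            merged[c] = merged.get(c, frozenset()) | frozenset(state)
        return list(merged.values())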



Conversion LR(1) to LALR(1). Example.

[Diagram: the 12-state LR(1) DFA from before (states 0–11) is converted to a LALR(1) DFA by merging states with the same core: 1 with 5, 3 with 8, 4 with 9, 6 with 10, and 7 with 11. In the merged DFA the state 1,5 is labeled "E → int on $, +, )", the state 7,11 is labeled "E → E + (E) on $, +, )", and state 2 still accepts on $.]
The LALR Parser Can Have Conflicts

• Consider for example the LR(1) states
    { [X → α•, a], [Y → β•, b] }
    { [X → α•, b], [Y → β•, a] }

• The merged LALR(1) state
    { [X → α•, a/b], [Y → β•, a/b] }
  has a new reduce/reduce conflict

• In practice such cases are rare


LALR vs. LR Parsing

• LALR languages are not natural
  – They are an efficiency hack on LR languages

• Any reasonable programming language has an LALR(1) grammar

• LALR(1) has become a standard for programming languages and for parser generators
A Hierarchy of Grammar Classes

[Figure: a hierarchy of grammar classes, from Andrew Appel, "Modern Compiler Implementation in Java".]


Notes on Parsing

• Parsing
– A solid foundation: context-free grammars
– A simple parser: LL(1)
– A more powerful parser: LR(1)
– An efficiency hack: LALR(1)
– LALR(1) parser generators

• Now we move on to semantic analysis



Supplement to LR Parsing

Strange Reduce/Reduce Conflicts Due to LALR Conversion
(from the bison manual)


Strange Reduce/Reduce Conflicts

• Consider the grammar
    S  → P R ,
    NL → N | N , NL
    P  → T | NL : T
    R  → T | N : T
    N  → id
    T  → id

  – P: parameters specification
  – R: result specification
  – N: a parameter or result name
  – T: a type name
  – NL: a list of names
Strange Reduce/Reduce Conflicts

• In P an id is a
– N when followed by , or :
– T when followed by id
• In R an id is a
– N when followed by :
– T when followed by ,
• This is an LR(1) grammar.
• But it is not LALR(1). Why?
– For obscure reasons



A Few LR(1) States

    State 1:                       State 3 (from state 1 on id):
      [P  → • T,        id]          [T → id •,  id]
      [P  → • NL : T,   id]          [N → id •,  :]
      [NL → • N,        :]           [N → id •,  ,]
      [NL → • N , NL,   :]
      [N  → • id,       :]
      [N  → • id,       ,]
      [T  → • id,       id]

    State 2:                       State 4 (from state 2 on id):
      [R → • T,      ,]              [T → id •,  ,]
      [R → • N : T,  ,]              [N → id •,  :]
      [T → • id,     ,]
      [N → • id,     :]

    LALR merge of states 3 and 4:
      [T → id •,  id/,]
      [N → id •,  :/,]
    LALR reduce/reduce conflict on ","
What Happened?

• Two distinct states were confused because they have the same core

• Fix: add dummy productions to distinguish the two confused states

• E.g., add
    R → id bogus
  – bogus is a terminal not used by the lexer
  – This production will never be used during parsing
  – But it distinguishes R from P


A Few LR(1) States After Fix

    State 1:                       State 3 (from state 1 on id):
      [P  → • T,        id]          [T → id •,  id]
      [P  → • NL : T,   id]          [N → id •,  :]
      [NL → • N,        :]           [N → id •,  ,]
      [NL → • N , NL,   :]
      [N  → • id,       :]
      [N  → • id,       ,]
      [T  → • id,       id]

    State 2:                       State 4 (from state 2 on id):
      [R → • T,         ,]           [T → id •,        ,]
      [R → • N : T,     ,]           [N → id •,        :]
      [R → • id bogus,  ,]           [R → id • bogus,  ,]
      [T → • id,        ,]
      [N → • id,        :]

    Different cores ⇒ no LALR merging
