ATCD-Unit4
TRANSLATOR
A translator is a program that takes as input a program written in one language and
produces as output a program in another language. Besides program translation, the
translator performs another very important role: error detection. Any violation of the HLL
specification is detected and reported to the programmer. The important roles of a
translator are:
1. Translating the HLL program input into an equivalent machine language program.
2. Providing diagnostic messages wherever the programmer violates the specification of
the HLL.
TYPES OF TRANSLATORS:-
a. Compiler
b. Interpreter
c. Preprocessor
Compiler
(Figure: a compiler takes a source program as input and produces a target program, together with error messages, as output.)
Executing a program written in a HLL programming language basically consists of two parts: the
source program must first be compiled, i.e., translated into an object program; then the
resulting object program is loaded into memory and executed.
Interpreter
Languages such as BASIC, SNOBOL and LISP can be translated using interpreters. Java also
uses an interpreter. The process of interpretation can be carried out in the following phases:
1. Lexical analysis
2. Syntax analysis
3. Semantic analysis
4. Direct Execution
Advantages:
Modification of the user program can easily be made and implemented as execution
proceeds.
The type of object that a variable denotes may change dynamically.
Debugging a program and finding errors is a simplified task for a program used
for interpretation.
The interpreter for the language makes it machine independent.
Disadvantages:
The execution of the program is slower.
Memory consumption is more.
Preprocessor
A preprocessor produces input to compilers. It may perform the following functions:
1. Macro processing: A preprocessor may allow a user to define macros that are
short hands for longer constructs.
2. File inclusion: A preprocessor may include header files into the program text.
3. Rational preprocessor: these preprocessors augment older languages with more
modern flow-of-control and data structuring facilities.
4. Language extensions: These preprocessors attempt to add capabilities to the
language by what amounts to built-in macros.
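As a hedged illustration of the first two functions, here is a minimal sketch in Python (the macro table, file-name handling and function names are illustrative assumptions, not any standard tool):

import re

MACROS = {"PI": "3.14159"}  # user-defined macro: a shorthand for a longer construct

def preprocess(lines):
    # A tiny preprocessor: expands macros and splices in included files.
    out = []
    for line in lines:
        m = re.match(r'#include\s+"(.+)"', line)
        if m:
            with open(m.group(1)) as header:        # file inclusion
                out.extend(preprocess(header.read().splitlines()))
            continue
        for name, body in MACROS.items():           # macro processing
            line = re.sub(r'\b%s\b' % name, body, line)
        out.append(line)
    return out

print("\n".join(preprocess(['area = PI * r * r'])))  # -> area = 3.14159 * r * r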
“A loader is a program that places programs into memory and prepares them for
execution.” It would be more efficient if subroutines could be translated into object form
which the loader could “relocate” directly behind the user's program. The task of adjusting
programs so they may be placed in arbitrary core locations is called relocation.
STRUCTURE OF A COMPILER
Lexical Analysis:-
The LA or scanner reads the source program one character at a time, carving the
source program into a sequence of atomic units called tokens.
Syntax Analysis:-
The second stage of translation is called Syntax analysis or parsing. In this
phase expressions, statements, declarations etc… are identified by using the results of lexical
analysis. Syntax analysis is aided by using techniques based on formal grammar of the
programming language.
Intermediate Code Generations:-
An intermediate representation of the final machine language code is produced.
This phase bridges the analysis and synthesis phases of translation.
Code Optimization :-
This is an optional phase designed to improve the intermediate code so that the
output runs faster and takes less space.
Code Generation:-
The last phase of translation is code generation. A number of optimizations to
reduce the length of machine language program are carried out during this phase. The
output of the code generator is the machine language program of the specified computer.
Table Management (or) Book-keeping:-
This is the portion that keeps the names used by the program and records
essential information about each. The data structure used to record this information is called a
‘Symbol Table’.
Error Handlers:-
It is invoked whenever a flaw (error) in the source program is detected.
The output of LA is a stream of tokens, which is passed to the next phase, the
syntax analyzer or parser. The SA groups the tokens together into syntactic structures called
expressions. Expressions may further be combined to form statements. The syntactic
structure can be regarded as a tree whose leaves are the tokens; such trees are called parse trees.
The parser has two functions. It checks if the tokens from lexical analyzer,
occur in pattern that are permitted by the specification for the source language. It also
imposes on tokens a tree-like structure that is used by the sub-sequent phases of the compiler.
For example, if a program contains the expression A+/B, then after lexical analysis this
expression might appear to the syntax analyzer as the token sequence id+/id. On seeing the /,
the syntax analyzer should detect an error situation, because the presence of these two
adjacent binary operators violates the formation rules of an expression.
Code Optimization
This is an optional phase designed to improve the intermediate code so that the
output runs faster and takes less space. Its output is another intermediate code program that
does the same job as the original, but in a way that saves time and/or space.
1. Local Optimization:-
There are local transformations that can be applied to a program
to make an improvement. For example,
If A > B goto L2
Goto L3
L2 :
This can be replaced by the single statement
If A <= B goto L3
Another local transformation takes advantage of common sub-expressions. For example, the pair of statements
A := B + C + D
E := B + C + F
might be evaluated as
T1 := B + C
A := T1 + D
E := T1 + F
taking advantage of the common sub-expression B + C.
2. Loop Optimization:-
Another important source of optimization concerns increasing
the speed of loops. A typical loop improvement is to move a
computation that produces the same result each time around the loop
to a point in the program just before the loop is entered.
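The effect of this transformation can be sketched in Python; the example below is illustrative (not compiler output) and simply hoists an invariant computation out of a loop:

def before(a, b, n):
    total = 0
    for i in range(n):
        t = a + b          # produces the same result each time around the loop
        total += t * i
    return total

def after(a, b, n):
    t = a + b              # moved to a point just before the loop is entered
    total = 0
    for i in range(n):
        total += t * i
    return total

assert before(2, 3, 10) == after(2, 3, 10)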
Code Generator :-
Code Generator produces the object code by deciding on the memory locations
for data, selecting code to access each datum and selecting the registers in which each
computation is to be done. Many computers have only a few high speed registers in which
computations can be performed quickly. A good code generator would attempt to utilize
registers as efficiently as possible.
Table Management OR Book-keeping :-
A compiler needs to collect information about all the data objects that appear
in the source program. The information about data objects is collected by the early phases of
the compiler-lexical and syntactic analyzers. The data structure used to record this
information is called as Symbol Table.
Error Handling :-
One of the most important functions of a compiler is the detection and
reporting of errors in the source program. The error message should allow the programmer to
determine exactly where the errors have occurred. Errors may occur in any of the phases of a
compiler.
Whenever a phase of the compiler discovers an error, it must report the error to
the error handler, which issues an appropriate diagnostic message. Both the table-management
and error-handling routines interact with all phases of the compiler.
LEXICAL ANALYSIS
Upon receiving a get-next-token command from the parser, the lexical analyzer
reads input characters until it can identify the next token. The LA returns to the parser a
representation for the token it has found. The representation will be an integer code if the
token is a simple construct such as a parenthesis, comma or colon.
The LA may also perform certain secondary tasks at the user interface. One such task is
stripping out from the source program comments and white space in the form of blank,
tab and newline characters. Another is correlating error messages from the compiler with the
source program.
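A minimal sketch of this get-next-token behaviour in Python (the token names, integer codes and input string are illustrative assumptions):

import re

SIMPLE = {'(': 1, ')': 2, ',': 3, ':': 4}   # simple constructs get integer codes

def get_next_token(text, pos):
    # Returns (token, lexeme, next_pos); strips white space first.
    while pos < len(text) and text[pos] in ' \t\n':
        pos += 1
    if pos == len(text):
        return ('EOF', '', pos)
    ch = text[pos]
    if ch in SIMPLE:
        return (SIMPLE[ch], ch, pos + 1)
    m = re.match(r'[A-Za-z][A-Za-z0-9]*', text[pos:])
    if m:                                    # identifier token
        return ('id', m.group(), pos + m.end())
    raise SyntaxError('unexpected character %r' % ch)

pos, tok = 0, None
while tok != 'EOF':
    tok, lexeme, pos = get_next_token('count ( pi , rate )', pos)
    print(tok, lexeme)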
The lexical analyzer (the "lexer") parses individual symbols from the source code file into
tokens. From there, the "parser" proper turns those whole tokens into sentences of your
grammar.
A parser does not give the nodes any meaning beyond structural cohesion. The next thing to
do is extract meaning from this structure (sometimes called contextual analysis).
INPUT BUFFERING
The LA scans the characters of the source program one at a time to discover tokens.
Because a large amount of time can be consumed scanning characters, specialized buffering
techniques have been developed to reduce the amount of overhead required to process an input
character.
Buffering techniques:
1. Buffer pairs
2. Sentinels
The lexical analyzer scans the characters of the source program one at a time to discover
tokens. Often, however, many characters beyond the next token may have to be examined
before the next token itself can be determined. For this and other reasons, it is desirable for
the lexical analyzer to read its input from an input buffer. The figure shows a buffer divided into
two halves of, say, 100 characters each. One pointer marks the beginning of the token being
discovered. A lookahead pointer scans ahead of the beginning point, until the token is
discovered. We view the position of each pointer as being between the character last read and
the character next to be read. In practice each buffering scheme adopts one convention: either
a pointer is at the symbol last read or at the symbol it is ready to read.
(Figure: input buffer in two halves, with token-beginning and lookahead pointers.)
The distance which the lookahead pointer may have to travel past the actual token may be
large. For example, in a PL/I program we may see
DECLARE (ARG1, ARG2… ARG n)
without knowing whether DECLARE is a keyword or an array name until we see the character
that follows the right parenthesis. In either case, the token itself ends at the second E. If the
lookahead pointer travels beyond the buffer half in which it began, the other half must be
loaded with the next characters from the source file.
Since the buffer shown in the figure is of limited size, there is an implied constraint on how
much lookahead can be used before the next token is discovered. In the above example, if the
lookahead traveled to the left half and all the way through the left half to the middle, we could
not reload the right half, because we would lose characters that had not yet been grouped into
tokens. While we can make the buffer larger if we choose, or use another buffering scheme, we
cannot ignore the fact that lookahead is limited.
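A sketch of the two-half buffer in Python, assuming a sentinel character marks the end of each half (buffer size, class name and reload policy are illustrative assumptions):

import io

EOF = '\0'           # sentinel stored at the end of each buffer half
HALF = 100           # size of each half

class DoubleBuffer:
    def __init__(self, source):
        self.source = source                  # a file-like object
        self.buf = [EOF] * (2 * HALF + 2)     # two halves, each with a sentinel slot
        self.forward = 0
        self._load(0)

    def _load(self, start):
        data = self.source.read(HALF)
        for i, ch in enumerate(data):
            self.buf[start + i] = ch
        self.buf[start + len(data)] = EOF     # sentinel marks end of this half

    def next_char(self):
        ch = self.buf[self.forward]
        if ch == EOF:
            if self.forward == HALF:          # end of first half: reload second
                self._load(HALF + 1)
                self.forward = HALF + 1
                return self.next_char()
            if self.forward == 2 * HALF + 1:  # end of second half: reload first
                self._load(0)
                self.forward = 0
                return self.next_char()
            return None                       # sentinel inside a half: real end of input
        self.forward += 1
        return ch

buf = DoubleBuffer(io.StringIO('DECLARE(ARG1, ARG2)'))
ch = buf.next_char()
while ch is not None:
    print(ch, end='')
    ch = buf.next_char()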
Token: Token is a sequence of characters that can be treated as a single logical entity.
Typical tokens are,
1) Identifiers 2) keywords 3) operators 4) special symbols 5) constants
Pattern: A set of strings in the input for which the same token is produced as output. This set
of strings is described by a rule called a pattern associated with the token.
Lexeme: A lexeme is a sequence of characters in the source program that is matched by
the pattern for a token.
Example:

Token    Sample lexemes           Informal description of pattern
if       if                       if
relop    <, <=, =, <>, >=, >      < or <= or = or <> or >= or >
id       pi                       letter followed by letters and digits
num      3.14                     any numeric constant
Recognition of tokens:
We learn how to express patterns using regular expressions. Now, we must study how to take
the patterns for all the needed tokens and build a piece of code that examines the input string
and finds a prefix that is a lexeme matching one of the patterns.
stmt --> if expr then stmt
       | if expr then stmt else stmt
       | ε
expr --> term relop term
       | term
term --> id
For relop, we use the comparison operators of languages like Pascal or SQL, where = is
“equals” and <> is “not equals”, because it presents an interesting structure of lexemes. The
terminals of the grammar, which are if, then, else, relop, id and number, are the names of tokens
as far as the lexical analyzer is concerned. The patterns for these tokens are described using
regular definitions.
digit --> [0-9]
digits --> digit+
number --> digits (. digits)? (E [+-]? digits)?
letter --> [A-Za-z]
id --> letter (letter | digit)*
if --> if
then --> then
else --> else
relop --> < | > | <= | >= | = | <>
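These regular definitions carry over almost directly to a sketch using Python's re module, including the white-space token ws treated in the next paragraph (the priority ordering, with keywords tried before id, is an assumption of this sketch):

import re

TOKEN_SPEC = [
    ('ws',     r'[ \t\n]+'),                       # recognized but never returned
    ('if',     r'if\b'), ('then', r'then\b'), ('else', r'else\b'),
    ('relop',  r'<=|>=|<>|<|>|='),                 # longest operators first
    ('id',     r'[A-Za-z][A-Za-z0-9]*'),
    ('number', r'[0-9]+(\.[0-9]+)?(E[+-]?[0-9]+)?'),
]

def tokens(text):
    pos = 0
    while pos < len(text):
        for name, pattern in TOKEN_SPEC:
            m = re.match(pattern, text[pos:])
            if m:
                if name != 'ws':                   # ws restarts scanning instead
                    yield (name, m.group())
                pos += m.end()
                break
        else:
            raise SyntaxError('illegal character at %d' % pos)

print(list(tokens('if x <= 3.14E+2 then y')))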
In addition, we assign the lexical analyzer the job of stripping out white space, by recognizing
the “token” ws defined by:
ws --> (blank | tab | newline)+
Here, blank, tab and newline are abstract symbols that we use to express the ASCII
characters of the same names. Token ws is different from the other tokens in that, when we
recognize it, we do not return it to the parser, but rather restart the lexical analysis from the
character that follows the white space. It is the following token that gets returned to the
parser.
For example, the lexemes for some relational operators, with their token name and attribute value, are:

Lexeme   Token   Attribute value
<=       relop   LE
=        relop   EQ
<>       relop   NE
TRANSITION DIAGRAM:
Transition Diagram has a collection of nodes or circles, called states. Each state
represents a condition that could occur during the process of scanning the input looking for a
lexeme that matches one of several patterns .
Edges are directed from one state of the transition diagram to another. Each edge is labeled
by a symbol or set of symbols.
If we are in some state s, and the next input symbol is a, we look for an edge out of state s
labeled by a. If we find such an edge, we advance the forward pointer and enter the
state of the transition diagram to which that edge leads.
Some important conventions about transition diagrams are
1. Certain states are said to be accepting, or final. These states indicate that a lexeme has
been found, although the actual lexeme may not consist of all positions between the lexemeBegin
and forward pointers. We always indicate an accepting state by a double circle.
2. In addition, if it is necessary to retract the forward pointer one position, then we shall
additionally place a * near that accepting state.
3. One state is designated the start state, or initial state; it is indicated by an edge labeled “start”
entering from nowhere. The transition diagram always begins in the start state before any input
symbols have been used.
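Following these conventions, the transition diagram for relop can be simulated with a table, as in this Python sketch (state numbers are illustrative; states 4 and 8 are the *-marked accepting states that do not consume the current character):

EDGES  = {0: {'<': 1, '=': 5, '>': 6}, 1: {'=': 2, '>': 3}, 6: {'=': 7}}
OTHER  = {1: 4, 6: 8}                         # the "any other character" edges
ACCEPT = {2: 'LE', 3: 'NE', 4: 'LT', 5: 'EQ', 7: 'GE', 8: 'GT'}

def relop(text, forward):
    state = 0
    while state not in ACCEPT:
        ch = text[forward] if forward < len(text) else ''
        if state in EDGES and ch in EDGES[state]:
            state = EDGES[state][ch]
            forward += 1                      # advance the forward pointer
        elif state in OTHER:
            state = OTHER[state]              # take the other-edge; ch stays unread
        else:
            raise SyntaxError('not a relop')
    return ACCEPT[state], forward

print(relop('<=', 0))   # ('LE', 2)
print(relop('<3', 0))   # ('LT', 1): accepted in state 4 without consuming '3'
print(relop('>x', 0))   # ('GT', 1)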
Lex specifications:
declarations
%%
translation rules
%%
auxiliary procedures
1. The declarations section includes declarations of variables, manifest constants, and
regular definitions.
2. The translation rules each have the form: pattern { action }, where the pattern is a regular
expression and the action is a program fragment describing what the lexical analyzer should
do when the pattern matches a lexeme.
3. The third section holds whatever auxiliary procedures are needed by the actions.
Alternatively, these procedures can be compiled separately and loaded with the
lexical analyzer.
Note: You can refer to a sample lex program given in page no. 109 of chapter 3 of the book:
Compilers: Principles, Techniques, and Tools by Aho, Sethi & Ullman for more clarity.
UNIT -2
SYNTAX ANALYSIS
Parser obtains a string of tokens from the lexical analyzer and verifies that it can be generated
by the language for the source program. The parser should report any syntax errors in an
intelligible fashion. The two types of parsers employed are:
1. Top down parser: builds parse trees from the top (root) to the bottom (leaves).
2. Bottom up parser: builds parse trees from the leaves and works up to the root.
Therefore there are two types of parsing methods– top-down parsing and bottom-up parsing
Given a context-free grammar, a parse tree according to the grammar is a tree with the following
properties:
1. The root is labeled by the start symbol.
2. Each leaf is labeled by a terminal or by ε.
3. Each interior node is labeled by a nonterminal.
If A is the nonterminal labeling some interior node and X1, X2, . . . , Xn are the labels of the children
of that node from left to right, then there must be a production A → X1X2 . . . Xn. Here, X1,
X2, . . . , Xn each stand for a symbol that is either a terminal or a nonterminal. As a special case,
if A → ε is a production, then a node labeled A may have a single child labeled ε.
Ambiguity
A grammar can have more than one parse tree generating a given string of terminals. Such a grammar is
said to be ambiguous. To show that a grammar is ambiguous, all we need to do is find a terminal string
that is the yield of more than one parse tree. Since a string with more than one parse tree usually has
more than one meaning, we need to design unambiguous grammars for compiling applications, or to use
ambiguous grammars with additional rules to resolve the ambiguities.
Example :: Suppose we used a single nonterminal string and did not distinguish between digits and
lists,
Fig. shows that an expression like 9-5+2 has more than one parse tree with this grammar. The two trees
for 9-5+2 correspond to the two ways of parenthesizing the expression: (9-5) +2 and 9- (5+2) . This
second parenthesization gives the expression the unexpected value 2 rather than the customary value 6.
Example
If there is a grammar
G: N = {S, A, B} T = {a, b} P = {S → AB, A → a, B → b}
Here S produces AB, and we can replace A by a, and B by b. Here, the only accepted string is ab, i.e.,
L(G) = {ab}
Writing a grammar
A grammar consists of a number of productions. Each production has an abstract symbol
called a nonterminal as its left-hand side, and a sequence of one or more nonterminal
and terminal symbols as its right-hand side. For each grammar, the terminal symbols are drawn from a
specified alphabet.
Starting from a sentence consisting of a single distinguished nonterminal, called
the goal symbol, a given context-free grammar specifies a language, namely, the set of
possible sequences of terminal symbols that can result from repeatedly replacing any nonterminal in
the sequence with a right-hand side of a production for which the nonterminal is the left-hand side.
There are four categories in writing a grammar :
1. Lexical Vs Syntax Analysis
2. Eliminating ambiguity.
3. Eliminating left-recursion
4. Left-factoring.
Each parsing method can handle grammars only of a certain form; hence, the initial grammar may have
to be rewritten to make it parsable. Regarding lexical versus syntax analysis, the reasons for
using regular expressions to define the lexical syntax of a language are:
a) Regular expressions provide a more concise and easier to understand notation for tokens than
grammars.
b) The lexical rules of a language are simple and, to describe them, we do not need a notation as
powerful as grammars.
c) Efficient lexical analyzers can be constructed automatically from RE than from grammars.
d) Separating the syntactic structure of a language into lexical and nonlexical parts provides a
convenient way of modularizing the front end into two manageable-sized components.
A grammar that produces more than one parse tree (for a leftmost or rightmost derivation) is
ambiguous; such ambiguity can be eliminated by re-writing the grammar.
Consider this example,
G: stmt→if expr then stmt
|if expr then stmt else stmt
|other
This grammar is ambiguous since the string if E1 then if E2 then S1 else S2 has the following two
parse trees for leftmost derivation
The general rule is: “Match each else with the closest unmatched then.” This disambiguating rule can be
incorporated directly into the grammar.
3. Eliminating left-recursion
Because we try to generate a leftmost derivation by scanning the input from left to right, grammars of the
form A → Aα may cause endless recursion. Such grammars are called left-recursive and they must be
transformed if we want to use a top-down parser.
A grammar is left recursive if for a non-terminal A, there is a derivation A ⇒+ Aα.
To eliminate direct left recursion replace
1) A → Aα | β with A → βA’, A’ → αA’ | ε
2) A → Aα1 | Aα2 | ... | Aαm | β1 | β2 | ... | βn
with
A → β1B | β2B | ... | βnB
B → α1B | α2B | ... | αmB | ε
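A sketch of this transformation in Python (the grammar representation, with each body as a list of symbols, is my own):

def eliminate_left_recursion(A, bodies):
    # A -> A a1 | ... | A am | b1 | ... | bn   becomes
    # A -> b1 B | ... | bn B   and   B -> a1 B | ... | am B | ε
    B = A + "'"
    alphas = [b[1:] for b in bodies if b and b[0] == A]
    betas  = [b for b in bodies if not b or b[0] != A]
    return {A: [beta + [B] for beta in betas],
            B: [alpha + [B] for alpha in alphas] + [[]]}   # [] stands for ε

# E -> E + T | T   becomes   E -> T E'  and  E' -> + T E' | ε
for head, bs in eliminate_left_recursion('E', [['E', '+', 'T'], ['T']]).items():
    print(head, '->', ' | '.join(' '.join(b) if b else 'ε' for b in bs))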
4. Left-factoring
Left factoring is a grammar transformation that is useful for producing a grammar suitable for predictive
parsing. When it is not clear which of two alternative productions to use to expand a non-terminal A, we
can rewrite the A-productions to defer the decision until we have seen enough of the input to make the
right choice.
Consider S → if E then S else S | if E then S
Which of the two productions should we use to expand non-terminal S when the next
token is if?
We can solve this problem by factoring out the common part in these rules:
S → if E then S S’
S’ → else S | ε
PARSING
It is the process of analyzing a continuous stream of input in order to determine its grammatical
structure with respect to a given formal grammar.
Types of parsing:
Top-down parsing : A parser can start with the start symbol and try to transform it to the
input string. Example : LL Parsers.
Bottom-up parsing : A parser can start with input and attempt to rewrite it into the start symbol.
Example : LR Parsers.
TOP-DOWN PARSING
Consider the grammar S → cAd, A → ab | a, and the input string w = cad.
Step 1: Initially, create a tree with a single node labeled S. An input pointer points to ‘c’, the
first symbol of w. Expand the tree with the production S → cAd.
Step 2:
The leftmost leaf ‘c’ matches the first symbol of w, so advance the input pointer to the second symbol
of w ‘a’ and consider the next leaf ‘A’. Expand A using the first alternative.
Step 3:
The second symbol ‘a’ of w also matches the second leaf of the tree. So advance the input pointer to the
third symbol of w, ‘d’. But the third leaf of the tree is b, which does not match the input symbol d. Hence
discard the chosen production, reset the input pointer to the second symbol of w, and backtrack.
Step4: Now try the second alternative for A.
Predictive parsing
It is possible to build a nonrecursive predictive parser by maintaining a stack explicitly, rather than
implicitly via recursive calls. The key problem during predictive parsing is that of determining the
production to be applied for a nonterminal. The nonrecursive parser in the figure looks up the production to
be applied in the parsing table. In what follows, we shall see how the table can be constructed directly from
certain grammars.
A table-driven predictive parser has an input buffer, a stack, a parsing table, and an output
stream. The input buffer contains the string to be parsed, followed by $, a symbol used as a right
endmarker to indicate the end of the input string. The stack contains a sequence of grammar symbols
with $ on the bottom, indicating the bottom of the stack. Initially, the stack contains the start symbol of
the grammar on top of $. The parsing table is a two dimensional array M[A,a] where A is a nonterminal,
and a is a terminal or the symbol $. The parser is controlled by a program that behaves as follows. The
program considers X, the symbol on the top of the stack, and a, the current input symbol. These two
symbols determine the action of the parser. There are three possibilities.
Method. Initially, the parser is in a configuration in which it has $S on the stack with S, the start symbol
of G on top, and w$ in the input buffer. The program that utilizes the predictive parsing table M to
produce a parse for the input is shown in Fig.
repeat
    let X be the top stack symbol and a the symbol pointed to by ip;
    if X is a terminal or $ then
        if X = a then
            pop X from the stack and advance ip
        else error()
    else /* X is a nonterminal */
        if M[X, a] = X → Y1Y2...Yk then begin
            pop X from the stack;
            push Yk, Yk-1, ..., Y1 onto the stack, with Y1 on top;
            output the production X → Y1Y2...Yk
        end
        else error()
until X = $ /* stack is empty */
To compute FIRST(X) for all grammar symbols X, apply the following rules until no more
terminals or ε can be added to any FIRST set:
1. If X is a terminal, then FIRST(X) = {X}.
2. If X → ε is a production, then add ε to FIRST(X).
3. If X is a nonterminal and X → Y1Y2...Yk is a production, then place a in FIRST(X) if for
some i, a is in FIRST(Yi) and ε is in all of FIRST(Y1), ..., FIRST(Yi-1); if ε is in FIRST(Yj)
for all j = 1, ..., k, then add ε to FIRST(X).
Algorithm for the construction of a predictive parsing table:
Input : Grammar G
Output : Parsing table M
Method :
1. For each production A → α of the grammar, do steps 2 and 3.
2. For each terminal a in FIRST(α), add A → α to M[A, a].
3. If ε is in FIRST(α), add A → α to M[A, b] for each terminal b in FOLLOW(A). If ε is in
FIRST(α) and $ is in FOLLOW(A) , add A → α to M[A, $].
4. Make each undefined entry of M be error.
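A sketch in Python of the FIRST/FOLLOW computation feeding this algorithm, applied to the grammar of the example below (the representation, with [] denoting an ε-body, and the helper names are my own):

GRAMMAR = {
    'E':  [['T', "E'"]],
    "E'": [['+', 'T', "E'"], []],
    'T':  [['F', "T'"]],
    "T'": [['*', 'F', "T'"], []],
    'F':  [['(', 'E', ')'], ['id']],
}
NT, EPS = set(GRAMMAR), 'ε'

def first_of(seq, FIRST):
    out = set()
    for X in seq:
        fx = FIRST[X] if X in NT else {X}
        out |= fx - {EPS}
        if EPS not in fx:
            return out
    return out | {EPS}                     # the whole sequence can derive ε

FIRST = {A: set() for A in NT}
changed = True
while changed:                             # iterate until no FIRST set grows
    changed = False
    for A, bodies in GRAMMAR.items():
        for body in bodies:
            f = first_of(body, FIRST)
            if not f <= FIRST[A]:
                FIRST[A] |= f; changed = True

FOLLOW = {A: set() for A in NT}
FOLLOW['E'].add('$')                       # $ is in FOLLOW of the start symbol
changed = True
while changed:
    changed = False
    for A, bodies in GRAMMAR.items():
        for body in bodies:
            for i, B in enumerate(body):
                if B in NT:
                    f = first_of(body[i + 1:], FIRST)
                    add = (f - {EPS}) | (FOLLOW[A] if EPS in f else set())
                    if not add <= FOLLOW[B]:
                        FOLLOW[B] |= add; changed = True

print({A: sorted(FIRST[A]) for A in sorted(NT)})
print({A: sorted(FOLLOW[A]) for A in sorted(NT)})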
Example:
Consider the following grammar :
E→E+T|T
T→T*F|F
F→(E)|id
After eliminating left-recursion the grammar is
E →TE’
E’ → +TE’ | ε
T →FT’
T’ → *FT’ | ε
F → (E)|id
First( ) :
FIRST(E) = { ( , id}
FIRST(E’) ={+ , ε }
FIRST(T) = { ( , id}
FIRST(T’) = {*, ε }
FIRST(F) = { ( , id }
Follow( ):
FOLLOW(E) = { $, ) }
FOLLOW(E’) = { $, ) }
FOLLOW(T) = { +, $, ) }
FOLLOW(T’) = { +, $, ) }
FOLLOW(F) = {+, * , $ , ) }
Stack Implementation
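A minimal sketch of the stack implementation in Python; the table M holds the entries produced by the algorithm above for this grammar (the encoding and trace format are my own):

M = {
    ('E', 'id'): ['T', "E'"],  ('E', '('): ['T', "E'"],
    ("E'", '+'): ['+', 'T', "E'"], ("E'", ')'): [], ("E'", '$'): [],
    ('T', 'id'): ['F', "T'"],  ('T', '('): ['F', "T'"],
    ("T'", '+'): [], ("T'", '*'): ['*', 'F', "T'"], ("T'", ')'): [], ("T'", '$'): [],
    ('F', 'id'): ['id'],       ('F', '('): ['(', 'E', ')'],
}
NT = {'E', "E'", 'T', "T'", 'F'}

def parse(tokens):
    stack, ip = ['$', 'E'], 0          # start symbol on top of $
    tokens = tokens + ['$']
    while stack[-1] != '$':
        X, a = stack[-1], tokens[ip]
        print('stack:', ' '.join(stack), '  input:', ' '.join(tokens[ip:]))
        if X not in NT:                # X is a terminal
            if X != a:
                raise SyntaxError('expected ' + X)
            stack.pop(); ip += 1       # match: pop X and advance ip
        elif (X, a) in M:
            stack.pop()
            stack.extend(reversed(M[(X, a)]))      # push Yk ... Y1, Y1 on top
            print('output:', X, '->', ' '.join(M[(X, a)]) or 'ε')
        else:
            raise SyntaxError('no entry M[%s, %s]' % (X, a))
    print('accepted' if tokens[ip] == '$' else 'error')

parse(['id', '+', 'id', '*', 'id'])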
LL(1) GRAMMAR
The parsing table algorithm can be applied to any grammar G to produce a parsing table M.
For some Grammars, for example if G is left recursive or ambiguous, then M will have at
least one multiply-defined entry. A grammar whose parsing table has no multiply defined
entries is said to be LL(1). It can be shown that the above algorithm can be used to produce
for every LL(1) grammar G, a parsing table M that parses all and only the sentences of G.
LL(1) grammars have several distinctive properties. No ambiguous or left recursive grammar
can be LL(1). There remains a question of what should be done in case of multiply defined
entries. One easy solution is to eliminate all left recursion and left factoring, hoping to
produce a grammar which will produce no multiply defined entries in the parse tables.
Unfortunately, there are some grammars which will never yield an LL(1) grammar after any kind of
alteration. In general, there are no universal rules to convert multiply defined entries into
single valued entries without affecting the language recognized by the parser.
The main difficulty in using predictive parsing is in writing a grammar for the source
language such that a predictive parser can be constructed from the grammar. Although left
recursion elimination and left factoring are easy to do, they make the resulting grammar hard
to read and difficult to use for translation purposes. To alleviate some of this difficulty, a
common organization for a parser in a compiler is to use a predictive parser for control
constructs and to use operator precedence for expressions. However, if an LR parser generator
is available, one can get all the benefits of predictive parsing and operator precedence
automatically.
Consider error recovery in predictive parsing using the following two methods:
Panic-Mode recovery
Phrase Level recovery
Panic-mode error recovery is based on the idea of skipping symbols on the input until a
token in a selected set of synchronizing tokens appears. Its effectiveness depends on the
choice of synchronizing set. The sets should be chosen so that the parser recovers quickly
from errors that are likely to occur in practice.
As a starting point, we can place all symbols in FOLLOW(A) into the synchronizing set for
nonterminal A. If we skip tokens until an element of FOLLOW(A) is seen and pop A from
the stack, it is likely that parsing can continue.
It is not enough to use FOLLOW(A) as the synchronizing set for A. For example, if
semicolons terminate statements, as in C, then keywords that begin statements may not
appear in the FOLLOW set of the nonterminal generating expressions. A missing semicolon
after an assignment may therefore result in the keyword beginning the next statement being
skipped. Often, there is a hierarchical structure on constructs in a language; e.g., expressions
appear within statements, which appear within blocks, and so on. We can add to the
synchronizing set of a lower construct the symbols that begin higher constructs. For
example, we might add keywords that begin statements to the synchronizing sets for the
nonterminals generating expressions.
If a nonterminal can generate the empty string, then the production deriving ε can be
used as a default. Doing so may postpone some error detection, but cannot cause an error
to be missed. This approach reduces the number of nonterminals that have to be considered
during error recovery.
If a terminal on top of the stack cannot be matched, a simple idea is to pop the
terminal, issue a message saying that the terminal was inserted, and continue parsing. In
effect, this approach takes the synchronizing set of a token to consist of all other tokens.
Phrase-level recovery involves defining the blank entries in the table with pointers to error
routines which may
change, delete or insert symbols in the input, or
may also pop symbols from the stack.
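A small Python sketch of the panic-mode idea, using FOLLOW sets (from the earlier example grammar) as the synchronizing sets; the function name and trace are illustrative:

FOLLOW = {'E': {')', '$'}, "E'": {')', '$'},
          'T': {'+', ')', '$'}, "T'": {'+', ')', '$'},
          'F': {'+', '*', ')', '$'}}

def recover(X, tokens, ip):
    # Skip input symbols until one in the synchronizing set of X appears,
    # then pop X from the stack (here simply reported) and resume parsing.
    while tokens[ip] not in FOLLOW[X]:
        print('skipping', tokens[ip])
        ip += 1                        # tokens always ends with '$', so this stops
    print('popping', X, 'and resuming at', tokens[ip])
    return ip

recover('E', ['id', 'id', ')', '$'], 0)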
BOTTOM-UP PARSING
Constructing a parse tree for an input string beginning at the leaves and going towards the root is
called bottom-up parsing. A general type of bottom-up parser is a shift-reduce parser.
Handles: A handle of a string is a substring that matches the right side of a production, and
whose reduction to the non-terminal on the left side of the production represents one step
along the reverse of a rightmost derivation.
Example:
Consider the grammar:
E→E+E
E→E*E
E→(E)
E→id
Handle pruning:
A rightmost derivation in reverse can be obtained by “handle pruning”. (i.e.) if w is a sentence
or string of the grammar at hand, then w = γn, where γn is the nth right sentential form of
some rightmost derivation.
Actions in shift-reduce parser:
• shift - The next input symbol is shifted onto the top of the stack.
• reduce - The parser replaces the handle within a stack with a non-terminal.
• accept - The parser announces successful completion of parsing.
• error - The parser discovers that a syntax error has occurred and calls an error recovery routine.
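These actions can be replayed in a short Python sketch for the grammar E → E+E | E*E | id of the next example; the sequence of moves is supplied by hand, since choosing between them is exactly what a parsing table is for:

PRODUCTIONS = [('E', ('E', '+', 'E')), ('E', ('E', '*', 'E')), ('E', ('id',))]

def replay(tokens, moves):
    stack, ip = ['$'], 0
    for move in moves:
        print('stack:', ' '.join(stack), '  input:', ' '.join(tokens[ip:]))
        if move == 'shift':
            stack.append(tokens[ip]); ip += 1
        else:                                        # reduce by production number
            head, body = PRODUCTIONS[move]
            assert tuple(stack[-len(body):]) == body, 'not a handle'
            del stack[-len(body):]                   # pop the handle
            stack.append(head)                       # push the nonterminal
    print('stack:', ' '.join(stack), '  input:', ' '.join(tokens[ip:]), ' accept')

replay(['id', '+', 'id', '*', 'id', '$'],
       ['shift', 2, 'shift', 'shift', 2, 'shift', 'shift', 2, 1, 0])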
1. Shift-reduce conflict:
Example:
Consider the grammar:
E→E+E | E*E | id and input id+id*id
2. Reduce-reduce conflict:
Consider the grammar: M→R+R|R+c|R
R→c
input c+c
INTRODUCTION TO LR PARSERS
An efficient bottom-up syntax analysis technique that can be used to parse a large class of CFGs is
called LR(k) parsing. The ‘L’ is for left-to-right scanning of the input, the ‘R’ for constructing a
rightmost derivation in reverse, and the ‘k’ for the number of input symbols of lookahead. When ‘k’
is omitted, it is assumed to be 1.
Advantages of LR parsing:
1. It recognizes virtually all programming language constructs for which CFG can be written.
2. It is an efficient non-backtracking shift-reduce parsing method.
3. The class of grammars that can be parsed using the LR method is a proper superset of the
class of grammars that can be parsed with predictive parsers.
4. It detects a syntactic error as soon as possible.
Drawbacks of LR method:
It is too much work to construct an LR parser by hand for a programming language grammar.
A specialized tool, called an LR parser generator, is needed. Example: YACC.
Types of LR parsing method:
1. SLR- Simple LR
Easiest to implement, least powerful.
2. CLR- Canonical LR
Most powerful, most expensive.
3. LALR- Look-Ahead LR
Intermediate in size and cost between the other two methods.
It consists of an input, an output, a stack, a driver program, and a parsing table that has two parts (action and goto).
Action : The parsing program determines sm, the state currently on top of stack, and ai, the current
input symbol. It then consults action[sm,ai] in the action table which can have one of four values:
1. shift s, where s is a state,
2. reduce by a grammar production A → β,
3. accept,
4. Error.
Goto : The function goto takes a state and grammar symbol as arguments and produces a state.
LR Parsing algorithm:
Input: An input string w and an LR parsing table with functions action and goto for grammar G.
Output: If w is in L(G), a bottom-up-parse for w; otherwise, an error indication.
Method: Initially, the parser has s0 on its stack, where s0 is the initial state, and w$ in the input buffer.
set ip to point to the first input symbol of w$;
repeat forever begin
    let s be the state on top of the stack and a the symbol pointed to by ip;
    if action[s, a] = shift s’ then begin
        push a then s’ on top of the stack;
        advance ip to the next input symbol
    end
    else if action[s, a] = reduce A → β then begin
        pop 2*|β| symbols off the stack;
        let s’ be the state now on top of the stack;
        push A then goto[s’, A] on top of the stack;
        output the production A → β
    end
    else if action[s, a] = accept then
        return
    else error()
end
An LR(0) item of a grammar G is a production of G with a dot at some position of the right side. For
example, production A → XYZ yields the four items :
A→.XYZ
A → X . YZ
A → XY . Z
A → XYZ .
Closure operation:
If I is a set of items for a grammar G, then closure(I) is the set of items constructed from I by
the two rules:
1. Initially, every item in I is added to closure(I).
2. If A → α . Bβ is in closure(I) and B → γ is a production, then add the item B → . γ to
closure(I), if it is not already there. We apply this rule until no more new items can be added to closure(I).
Goto operation:
Goto(I, X) is defined to be the closure of the set of all items [A→ αX . β] such that [A→ α . Xβ] is in I.
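A sketch of both operations in Python for the augmented grammar used in the construction below (the item representation (head, body, dot) is my own):

GRAMMAR = [("S`", ('E',)), ('E', ('E', '+', 'T')), ('E', ('T',)),
           ('T', ('T', '*', 'F')), ('T', ('F',)), ('F', ('id',))]

def closure(items):
    items = set(items)
    changed = True
    while changed:
        changed = False
        for head, body, dot in list(items):
            if dot < len(body):                     # B is the symbol after the dot
                B = body[dot]
                for h, b in GRAMMAR:
                    if h == B and (h, b, 0) not in items:
                        items.add((h, b, 0))        # add B -> .γ
                        changed = True
    return frozenset(items)

def goto(items, X):
    moved = {(h, b, d + 1) for h, b, d in items if d < len(b) and b[d] == X}
    return closure(moved)

def show(items):
    for head, body, dot in sorted(items):
        print(head, '->', ' '.join(body[:dot]) + ' . ' + ' '.join(body[dot:]))

show(closure({("S`", ('E',), 0)}))          # this is the I0 state built below
print('--- goto(I0, E) ---')
show(goto(closure({("S`", ('E',), 0)}), 'E'))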
Steps to construct SLR parsing table for grammar G are:
1. Augment G and produce G’
2. Construct the canonical collection of set of items C for G’
3. Construct the parsing action function action and goto using the following algorithm that requires
FOLLOW(A) for each non-terminal of grammar.
Add Augment Production and insert '•' symbol at the first position for every production in G
S` → •E
E → •E + T
E → •T
T → •T * F
T → •F
F → •id
I0 State:
Add all productions starting with E into the I0 State because “•” is followed by a non-terminal. So, the I0
State becomes
I0 = S` → •E
E → •E + T
E → •T
Add all productions starting with T and F into the modified I0 State because “•” is followed by a
non-terminal. So, the I0 State becomes.
I0= S` → •E
E → •E + T
E → •T
T → •T * F
T → •F
F → •id
Add all productions starting with T and F into the I5 State because “•” is followed by a non-terminal. So, the
I5 State becomes
I5 = E → E +•T
T → •T * F
T → •F
F → •id
Add all productions starting with F into the I6 State because “•” is followed by a non-terminal. So, the I6
State becomes
I6 = T → T * •F
F → •id
Drawing DFA
Explanation:
1) I1 contains the final item S` → E•, and FOLLOW(S`) = {$}, so action[I1, $] = Accept
2) I2 contains the final item E → T•, and FOLLOW(E) = {+, $}, so action[I2, +] = R2,
action[I2, $] = R2
3) I3 contains the final item T → F•, and FOLLOW(T) = {+, *, $}, so action[I3, +] =
R4, action[I3, *] = R4, action[I3, $] = R4
4) I4 contains the final item F → id•, and FOLLOW(F) = {+, *, $}, so action[I4, +] =
R5, action[I4, *] = R5, action[I4, $] = R5
5) I7 contains the final item E → E + T•, and FOLLOW(E) = {+, $}, so action[I7, +] =
R1, action[I7, $] = R1
6) I8 contains the final item T → T * F•, and FOLLOW(T) = {+, *, $}, so action[I8, +]
= R3, action[I8, *] = R3, action[I8, $] = R3.
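A sketch in Python of the LR driver given earlier, loaded with the ACTION/GOTO entries read off this DFA (states numbered I0–I8 as above; the table encoding and function name are my own):

PROD = {1: ('E', 3), 2: ('E', 1), 3: ('T', 3), 4: ('T', 1), 5: ('F', 1)}  # head, |β|
ACTION = {
    (0, 'id'): 's4',
    (1, '+'): 's5', (1, '$'): 'acc',
    (2, '+'): 'r2', (2, '*'): 's6', (2, '$'): 'r2',
    (3, '+'): 'r4', (3, '*'): 'r4', (3, '$'): 'r4',
    (4, '+'): 'r5', (4, '*'): 'r5', (4, '$'): 'r5',
    (5, 'id'): 's4', (6, 'id'): 's4',
    (7, '+'): 'r1', (7, '*'): 's6', (7, '$'): 'r1',
    (8, '+'): 'r3', (8, '*'): 'r3', (8, '$'): 'r3',
}
GOTO = {(0, 'E'): 1, (0, 'T'): 2, (0, 'F'): 3,
        (5, 'T'): 7, (5, 'F'): 3, (6, 'F'): 8}

def lr_parse(tokens):
    stack, ip = [0], 0                     # s0 on the stack, w$ in the input
    while True:
        s, a = stack[-1], tokens[ip]
        act = ACTION.get((s, a))
        if act is None:
            raise SyntaxError('error at %r' % a)
        if act == 'acc':
            print('accepted'); return
        if act[0] == 's':                  # shift: push a then s'
            stack += [a, int(act[1:])]; ip += 1
        else:                              # reduce by A -> β
            A, n = PROD[int(act[1:])]
            del stack[-2 * n:]             # pop 2*|β| symbols
            stack += [A, GOTO[(stack[-1], A)]]
            print('reduce by R%s' % act[1:])

lr_parse(['id', '+', 'id', '*', 'id', '$'])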
CANONICAL LR PARSING
CLR refers to canonical LR. CLR parsing uses the canonical collection of LR(1) items to build
the CLR(1) parsing table. The CLR(1) parsing table produces more states than the SLR(1)
parsing table.
In CLR(1), we place a reduce entry only in the columns of the lookahead symbols.
In the SLR method we were working with LR(0) items. In CLR parsing we will be using LR(1)
items. An LR(k) item is defined to be an item using lookaheads of length k. So, the LR(1) item is
comprised of two parts: the LR(0) item and the lookahead associated with the item. The lookahead
is used to determine where we place the reduce actions for final items. The lookahead for the
augmented production is always $.
LR(1) parsers are more powerful than SLR parsers.
For LR(1) items we modify the closure and goto functions.
Closure Operation
Closure(I)
    repeat
        for (each item [A → α.Bβ, a] in I)
            for (each production B → γ in G’)
                for (each terminal b in FIRST(βa))
                    add [B → .γ, b] to set I;
    until no more items are added to I;
    return I;
Goto Operation
Goto(I, X)
    initialise J to be the empty set;
    for (each item [A → α.Xβ, a] in I)
        add item [A → αX.β, a] to set J;  /* move the dot one step */
    return Closure(J);  /* apply closure to the set */
LR(1) items
void items(G’)
    initialise C to { Closure({ [S’ → .S, $] }) };
    repeat
        for (each set of items I in C)
            for (each grammar symbol X)
                if (Goto(I, X) is not empty and not in C)
                    add Goto(I, X) to C;
    until no new sets of items are added to C;
The initial state of the parser is the one constructed from the set of items
containing [S’ → .S, $].
Example,
Consider the following grammar,
S' -> S
S -> CC
C -> cC
C -> d
Sets of LR(1) items
I0: S' -> .S, $
    S -> .CC, $
    C -> .cC, c/d
    C -> .d, c/d
I1: S' -> S., $
I2: S -> C.C, $
    C -> .cC, $
    C -> .d, $
I3: C -> c.C, c/d
    C -> .cC, c/d
    C -> .d, c/d
I4: C -> d., c/d
I5: S -> CC., $
I6: C -> c.C, $
    C -> .cC, $
    C -> .d, $
I7: C -> d., $
I8: C -> cC., c/d
I9: C -> cC., $
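A sketch in Python of the LR(1) closure, reproducing the set I0 above (FIRST is precomputed by hand since the grammar is tiny, and the c/d lookaheads appear as two separate items in this representation; names are my own):

GRAMMAR = [("S'", ('S',)), ('S', ('C', 'C')), ('C', ('c', 'C')), ('C', ('d',))]
NT = {"S'", 'S', 'C'}
FIRST = {'S': {'c', 'd'}, 'C': {'c', 'd'}, 'c': {'c'}, 'd': {'d'}, '$': {'$'}}

def closure(items):
    items = set(items)
    changed = True
    while changed:
        changed = False
        for head, body, dot, a in list(items):
            if dot < len(body) and body[dot] in NT:
                B, beta = body[dot], body[dot + 1:]
                for h, b in GRAMMAR:
                    if h == B:
                        for la in FIRST[(beta + (a,))[0]]:   # FIRST(βa); no symbol here derives ε
                            if (h, b, 0, la) not in items:
                                items.add((h, b, 0, la))
                                changed = True
    return items

for head, body, dot, a in sorted(closure({("S'", ('S',), 0, '$')})):
    print(head, '->', ' '.join(body[:dot]) + ' . ' + ' '.join(body[dot:]), ',', a)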