Short Notes on Compiler Design
B. Tech V Semester
Dr. K. Rajendra Prasad, Professor
Ms. B. Ramyasree, Assistant Professor
Ms. K. Saranya, Assistant Professor
UNIT -I
OVERVIEW OF COMPILATION
Preprocessor
A preprocessor produces input to compilers. It may perform the following functions.
1. Macro processing: A preprocessor may allow a user to define macros that are shorthands for longer constructs.
2. File inclusion: A preprocessor may include header files into the program text.
3. Rational preprocessing: these preprocessors augment older languages with more modern flow-of-control and data-structuring facilities.
4. Language extensions: these preprocessors attempt to add capabilities to the language by means of built-in macros.
1.2 Compiler
A compiler is a translator program that takes a program written in a high-level language (HLL), the source program, and translates it into an equivalent program in a machine-level language (MLL), the target program. An important part of a compiler's job is reporting errors to the programmer.
Executing a program written in an HLL programming language basically involves two parts: the source program must first be compiled (translated) into an object program, and then the resulting object program is loaded into memory and executed.
1.3 Assembler: Programmers found it difficult to write or read programs in machine language. They began to use mnemonics (symbols) for each machine instruction, which they would subsequently translate into machine language. Such a mnemonic machine language is now called an assembly language. Programs known as assemblers were written to automate the translation of assembly language into machine language. The input to an assembler program is called the source program; the output is a machine-language translation (object program).
1.4 Interpreter: Languages such as BASIC, SNOBOL and LISP can be translated using interpreters. Java also uses an interpreter. The process of interpretation can be carried out in the following phases.
1. Lexical analysis
2. Syntax analysis
3. Semantic analysis
4. Direct Execution
Advantages:
Disadvantages:
1.5 Loader:
Once the assembler produces an object program, that program must be placed into memory and executed. The assembler could place the object program directly in memory and transfer control to it, thereby causing the machine-language program to be executed. However, this would waste core by leaving the assembler in memory while the user's program was being executed. Also, the programmer would have to retranslate his program with each execution, thus wasting translation time. To overcome these problems of wasted translation time and memory, system programmers developed another component called a loader.
"A loader is a program that places programs into memory and prepares them for execution." It would be more efficient if subroutines could be translated into an object form which the loader could "relocate" directly behind the user's program. The task of adjusting programs so that they may be placed in arbitrary core locations is called relocation. Relocating loaders perform four functions.
1.6 Translator
A translator is a program that takes as input a program written in one language and produces as output a program in another language. Besides program translation, the translator performs another very important role: error detection. Any violation of the HLL specification is detected and reported to the programmer. The important roles of a translator are:
Type Of Translators:-
1. Interpreter
2. Compiler
3. preprocessor
List Of Compilers
4. Phases Of A Compiler:
A compiler operates in phases. A phase is a logically interrelated operation that takes the source program in one representation and produces output in another representation. The phases of a compiler are shown below.
There are two phases of compilation.
a. Analysis (Machine Independent/Language Dependent)
b. Synthesis (Machine Dependent/Language independent)
Lexical Analysis:-
The lexical analyzer, or scanner, reads the source program one character at a time, carving the source program into a sequence of atomic units called tokens.
Syntax Analysis:-
The second stage of translation is called syntax analysis or parsing. In this phase
expressions, statements, declarations etc… are identified by using the results of lexical
analysis. Syntax analysis is aided by using techniques based on formal grammar of the
programming language.
Intermediate Code Generations:-
An intermediate representation of the final machine language code is produced. This phase
bridges the analysis and synthesis phases of translation.
Code Optimization:-
This is an optional phase designed to improve the intermediate code so that the output runs faster and takes less space.
Code Generation:-
The last phase of translation is code generation. A number of optimizations to Reduce the
length of machine language program are carried out during this phase. The output of the
code generator is the machine language program of the specified computer.
Error Handlers:-
It is invoked when a flaw or error in the source program is detected.
The output of the LA is a stream of tokens, which is passed to the next phase, the syntax analyzer or parser. The SA groups the tokens together into syntactic structures such as expressions. Expressions may further be combined to form statements. The syntactic structure can be regarded as a tree whose leaves are the tokens; such trees are called parse trees.
The parser has two functions. It checks if the tokens from lexical analyzer, occur in
pattern that are permitted by the specification for the source language. It also imposes
on tokens a tree-like structure that is used by the sub-sequent phases of the compiler.
Example, if a program contains the expression A+/B after lexical analysis this expression
might appear to the syntax analyzer as the token sequence id+/id. On seeing the /, the syntax
analyzer should detect an error situation, because the presence of these two adjacent
binary operators violates the formulations rule of an expression.
Syntax analysis is to make explicit the hierarchical structure of the incoming token stream
by identifying which parts of the token stream should be grouped.
One common style uses instruction with one operator and a small number of
operands.The output of the syntax analyzer is some representation of a parse tree. The
intermediate code generation phase transforms this parse tree into an intermediate language
representation of the source program.
Code Optimization:-
This is an optional phase designed to improve the intermediate code so that the output runs faster and takes less space. Its output is another intermediate-code program that does the same job as the original, but in a way that saves time and/or space.
1. Local Optimization:-
There are local transformations that can be applied to a program to make an improvement. For example,
If A > B goto L2
Goto L3
L2:
can be replaced by the single statement
If A <= B goto L3
Another important local optimization is the elimination of common sub-expressions:
A := B + C + D
E := B + C + F
might be evaluated as
T1 := B + C
A := T1 + D
E := T1 + F
Loop Optimization:-
Another important source of optimization concerns increasing the speed of loops. A typical loop improvement is to move a computation that produces the same result each time around the loop to a point in the program just before the loop is entered.
Code Generator:-
The code generator produces the object code by deciding on the memory locations for data, selecting code to access each datum, and selecting the registers in which each computation is to be done. Many computers have only a few high-speed registers in which computations can be performed quickly. A good code generator attempts to utilize registers as efficiently as possible.
Error Handling:-
One of the most important functions of a compiler is the detection and reporting of errors in the source program. The error messages should allow the programmer to determine exactly where the errors have occurred. Errors may occur in any of the phases of a compiler.
Whenever a phase of the compiler discovers an error, it must report the error to the error handler, which issues an appropriate diagnostic message. Both the table-management and error-handling routines interact with all phases of the compiler.
Example:
position := initial + rate * 60
[Figure: the statement after each phase. The lexical analyzer produces the token stream id1 := id2 + id3 * 60; the syntax analyzer builds a tree with := at the root and id2 + (id3 * 60) on the right; the semantic analyzer inserts an int-to-real conversion around 60; the code optimizer simplifies the intermediate code; and the code generator finally emits target code such as:]
MOVF id3, R2
MULF #60.0, R2
MOVF id2, R1
ADDF R2, R1
MOVF R1, id1
Upon receiving a 'get next token' command from the parser, the lexical analyzer reads input characters until it can identify the next token. The LA returns to the parser a representation for the token it has found. The representation will be an integer code if the token is a simple construct such as a parenthesis, comma or colon.
The LA may also perform certain secondary tasks at the user interface. One such task is stripping out from the source program comments and white space in the form of blank, tab and newline characters. Another is correlating error messages from the compiler with the source program.
Lexical Analysis Vs Parsing:
Token: Token is a sequence of characters that can be treated as a single logical entity.
Typical tokens are,
1) Identifiers 2) keywords 3) operators 4) special symbols 5) constants
Pattern: A set of strings in the input for which the same token is produced as output. This set
of strings is described by a rule called a pattern associated with the token.
Lexeme: A lexeme is a sequence of characters in the source program that is matched by the
pattern for a token.
Example:
Token     Sample lexemes           Informal description of the pattern
if        if                       if
relop     <, <=, =, <>, >=, >      < or <= or = or <> or >= or >
id        pi                       letter followed by letters and digits
num       -                        any numeric constant
A pattern is a rule describing the set of lexemes that can represent a particular
token in source program.
Lexical Errors:
Lexical errors are the errors thrown by the lexer when it is unable to continue; that is, there is no way to recognise a lexeme as a valid token for the lexer. Syntax errors, on the other hand, are thrown by the parser when a given sequence of already recognized valid tokens does not match any of the right sides of the grammar rules. A simple panic-mode error handling system requires that we return to a high-level parsing function when a parsing or lexical error is detected.
4. Regular Expressions:
In language theory, the terms "sentence" and "word" are often used as synonyms for "string."
The length of a string s, usually written |s|, is the number of occurrences of symbols in s. For
example, banana is a string of length six. The empty string, denoted ε, is the string of length
zero.
Operations on strings
The following string-related terms are commonly used:
1. A prefix of string s is any string obtained by removing zero or more symbols from the
end of strings.
For example, ban is a prefix of banana.
2. A suffix of string s is any string obtained by removing zero or more symbols from
the beginning of s.
For example, nana is a suffix of banana.
4. The proper prefixes, suffixes, and substrings of a string s are those prefixes, suffixes,
and substrings, respectively of s that are not ε or not equal to s itself.
Operations on languages:
The following are the operations that can be applied to languages:
1. Union
2. Concatenation
3. Kleene closure
4.Positive closure
Here are the rules that define the regular expressions over some alphabet Σ and the languages
that those expressions denote:
1. ε is a regular expression, and L(ε) is { ε }, that is, the language whose sole member is the
empty string.
2. If„a‟is a symbol in Σ, then „a‟is a regular expression, and L(a) = {a}, that is, the
language with one string, of length one, with „a‟in its one position.
3. Suppose r and s are regular expressions denoting the languages L(r) and L(s). Then,
Shorthand‟s
Certain constructs occur so frequently in regular expressions that it is convenient to introduce
notational shorthands for them.
o If r is a regular expression that denotes the language L(r), then ( r )+ is a regular
expression that denotes the language (L (r ))+
o Thus the regular expression a+ denotes the set of all strings of one or more a‟s.
o The operator + has the same precedence and associativity as the operator *.
3. Character Classes:
- The notation [abc] where a, b and c are alphabet symbols denotes the regular expression
a | b | c.
- Character class such as [a – z] denotes the regular expression a | b | c | d | ….|z.
- We can describe identifiers as being strings generated by the regular expression,
[A–Za–z][A–Za–z0–9]*
Non-regular Set
A language which cannot be described by any regular expression is a non-regular set.
Example: The set of all strings of balanced parentheses and repeating strings cannot be
described by a regular expression. This set can be specified by a context-free grammar.
Recognition Of Tokens:
Consider the following grammar fragment:
stmt → if expr then stmt
|if expr then stmt else stmt |ε
expr → term relop term |term
term → id |num
where the terminals if , then, else, relop, id and num generate sets of strings given by
the following regular definitions:
if → if
then → then
else → else
relop → < | <= | = | <> | > | >=
id → letter (letter | digit)*
num → digit+ (.digit+)? (E(+|-)? digit+)?
For this language fragment the lexical analyzer will recognize the keywords if, then, else, as
well as the lexemes denoted by relop, id, and num. To simplify matters, we assume keywords
are reserved; that is, they cannot be used as identifiers.
Lexeme Token Name Attribute Value
Any ws _ _
if if _
then then _
else else _
Any id id pointer to table entry
Any number number pointer to table
entry
< relop LT
<= relop LE
= relop EQ
<> relop NE
Transition Diagram:
Transition Diagram has a collection of nodes or circles, called states. Each
state represents a condition that could occur during the process of scanning the input
looking for a lexeme that matches one of several patterns .Edges are directed from
one state of the transition diagram to another. each edge is labeled by a symbol or
set of symbols.If we are in one state s, and the next input symbol is a, we look
for an edge out of state s labeled by a. if we find such an edge ,we advance the
forward pointer and enter the state of the transition diagram to which that edge
leads.
Some important conventions about transition diagrams are:
1. Certain states are said to be accepting, or final. These states indicate that a lexeme has been found, although the actual lexeme may not consist of all positions between the lexemeBegin and forward pointers. We always indicate an accepting state by a double circle.
2. In addition, if it is necessary to retract the forward pointer one position, then we shall additionally place a * near that accepting state.
3. One state is designated the start state, or initial state; it is indicated by an edge labeled "start" entering from nowhere. The transition diagram always begins in the start state before any input symbols have been used.
As an intermediate step in the construction of a LA, we first produce a
stylized flowchart, called a transition diagram. Position in a transition diagram,
are drawn as circles and are called as states.
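As a concrete illustration, the sketch below hand-codes the relop transition diagram in C. It is only a minimal sketch of the idea, not a prescribed implementation: the state handling via if-statements, the retract() helper and the token codes LT, LE, EQ, NE, GT, GE are assumptions made for this example.

#include <stdio.h>

/* Hypothetical token codes for the relational operators. */
enum { LT, LE, EQ, NE, GT, GE, ERROR };

static const char *input;   /* the source text                           */
static int pos;             /* current position of the forward pointer   */

static int next_char(void) { return input[pos++]; }
static void retract(void)  { pos--; }   /* for the starred accepting states */

/* Simulate the relop transition diagram: each branch is one state. */
int relop(void)
{
    int c = next_char();
    if (c == '<') {                  /* state reached after reading '<'   */
        c = next_char();
        if (c == '=') return LE;     /* <=                                */
        if (c == '>') return NE;     /* <>                                */
        retract();  return LT;       /* plain <, give back the extra char */
    } else if (c == '=') {
        return EQ;
    } else if (c == '>') {
        c = next_char();
        if (c == '=') return GE;     /* >=                                */
        retract();  return GT;       /* plain >                           */
    }
    retract();
    return ERROR;                    /* no relop starts here              */
}

int main(void)
{
    input = "<=";  pos = 0;
    printf("token code for \"<=\" is %d (LE = %d)\n", relop(), LE);
    return 0;
}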
Automata:
An automaton is a system in which information is transmitted and used to perform some function without the direct participation of man.
1. An automaton in which the output depends only on the input is called an automaton without memory.
2. An automaton in which the output depends on the input and on the state is called an automaton with memory.
3. An automaton in which the output depends only on the state of the machine is called a Moore machine.
4. An automaton in which the output depends on the state and the input at any instant of time is called a Mealy machine.
Description of Automata
1. An automaton has a mechanism to read input from an input tape.
2. Any language is recognized by some automaton; hence these automata are basically language 'acceptors' or 'language recognizers'.
Types of Finite Automata
Deterministic Automata
Non-Deterministic Automata.
Deterministic Automata:
A deterministic finite automaton has at most one transition from each state on any input.
A DFA is a special case of an NFA in which:
1. it has no transitions on input ε, and
2. for each state s and input symbol a, there is at most one edge labeled a leaving s.
A DFA is formally defined by the 5-tuple notation M = (Q, ∑, δ, q0, F), where
Q is a finite, non-empty set of states,
∑ is the input alphabet (the set of input symbols),
q0 is the initial state, with q0 ∈ Q,
F ⊆ Q is the set of final states, and
δ is the transition function (mapping function); using this function the next state can be determined.
The regular expression is converted into a minimized DFA by the following procedure:
regular expression → NFA → DFA → minimized DFA
[Figure: a DFA fragment with start state S0.]
From state S0, for input 'a' there is only one path, going to S2; similarly, from S0 there is only one path for the other input symbol, going to S1.
Nondeterministic Automata:
An NFA is a mathematical model that consists of a set of states S, a set of input symbols ∑, a transition function that maps state-symbol pairs to sets of states, a start state s0, and a set of accepting (final) states F.
Its transition graph looks like a transition diagram, but the same character can label two or more transitions out of one state, and edges can be labeled by the special symbol ε as well as by input symbols.
The transition graph for an NFA that recognizes the language (a|b)*abb is shown
5. Bootstrapping:
When a computer is first turned on or restarted, a special type of absolute loader, called as
bootstrap loader is executed. This bootstrap loads the first program to be run by the
computer, usually an operating system. The bootstrap itself begins at address 0 in the
memory of the machine. It loads the operating system (or some other program) starting at
address 80. After all of the object code from device has been loaded, the bootstrap program
jumps to address 80, which begins the execution of the program that was loaded.
Such loaders can be used to run stand-alone programs independent of the operating system or
the system loader. They can also be used to load the operating system or the loader itself into
memory.
[Figure: two ways of combining object programs with library routines. A linking loader performs linking and loading in one step, placing the linked program directly in memory; a linkage editor produces a linked program that is later brought into memory by a relocating loader.]
Phases: (Phases are collected into a front end and back end)
Frontend:
The front end consists of those phases, or parts of phase, that depends primarily on the
source language and is largely independent of the target machine. These normally include
lexical and syntactic analysis, the creation of the symbol table, semantic analysis, and the
generation of intermediate code.
A certain amount of code optimization can be done by front end as well. the front end
also includes the error handling tha goes along with each of these phases.
Back end:
The back end includes those portions of the compiler that depend on the target machine and
generally, these portions do not depend on the source language .
Lex Specification
{ definitions }
%%
{ rules }
%%
{ user subroutines }
o Definitions include declarations of variables, constants, and regular definitions
o Rules are statements of the form p1 {action1}, p2 {action2}, ..., pn {actionn},
o where each pi is a regular expression and actioni describes what action the lexical analyzer should take when pattern pi matches a lexeme. Actions are written in C code.
o User subroutines are auxiliary procedures needed by the actions. These can be
compiled separately and loaded with the lexical analyzer.
8. Input Buffering
The LA scans the characters of the source program one at a time to discover
tokens. Because of large amount of time can be consumed scanning characters,
specialized buffering techniques have been developed to reduce the amount of
overhead required to process an input character.
Buffering techniques:
1. Buffer pairs
2. Sentinels
The lexical analyzer scans the characters of the source program one a t a time to discover
tokens. Often, however, many characters beyond the next token many have to be examined
before the next token itself can be determined. For this and other reasons, it is desirable for
the lexical analyzer to read its input from an input buffer. Figure shows a buffer divided into
two halves of, say 100 characters each. One pointer marks the beginning of the token being
discovered. A look ahead pointer scans ahead of the beginning point, until the token is
discovered .we view the position of each pointer as being between the character last read and
the character next to be read. In practice each buffering scheme adopts one convention either
a pointer is at the symbol last read or the symbol it is ready to read.
The distance that the lookahead pointer may have to travel past the actual token may be large. For example, in a PL/I program we may see
DECLARE (ARG1, ARG2, ..., ARGn)
without knowing whether DECLARE is a keyword or an array name until we see the character that follows the right parenthesis.
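The sketch below shows the usual way a sentinel is combined with a buffer half, as described above. It is only an illustrative sketch: the buffer size, the fill_half() helper and the use of EOF as the sentinel character are assumptions made for the example, and the source text is copied from a string instead of being read from a file.

#include <stdio.h>
#include <string.h>

#define HALF 100                       /* each buffer half holds 100 characters */

/* Two halves plus one extra slot per half for the sentinel (EOF). */
static char  buf[2 * (HALF + 1)];
static char *lexeme_begin;             /* marks the start of the current token  */
static char *forward;                  /* lookahead pointer                      */

/* Refill one half and terminate it with the sentinel. */
static void fill_half(char *half, const char *src)
{
    size_t n = strlen(src) < HALF ? strlen(src) : HALF;
    memcpy(half, src, n);
    half[n] = EOF;                     /* sentinel marks the end of this half   */
}

int main(void)
{
    fill_half(buf, "count := count + 1");
    lexeme_begin = forward = buf;

    /* Advance 'forward' one character at a time; the single sentinel test      */
    /* replaces a separate end-of-buffer test on every character.               */
    while (*forward != (char)EOF) {
        /* ... token recognition would happen here ... */
        forward++;
    }
    printf("scanned %ld characters before hitting the sentinel\n",
           (long)(forward - lexeme_begin));
    return 0;
}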
TOP-DOWN PARSING
Context-free Grammars: Definition:
Example of CFG:
E → EAE | (E) | -E | id
A → + | - | * | /
where E, A are the non-terminals and id, +, -, *, /, (, ) are the terminals.
Syntax analysis:
In syntax analysis phase the source program is analyzed to check whether if conforms to the
source language‟s syntax, and to determine its phase structure. This phase is often separated
into two phases:
Parsing:
Parsing is the activity of checking whether a string of symbols is in the language of some
grammar, where this string is usually the stream of tokens produced by the lexical analyzer.
If the string is in the grammar, we want a parse tree, and if it is not, we hope for some kind of
error message explaining why not.
There are two main kinds of parsers in use, named for the way they build the parse trees:
Top-down: A top-down parser attempts to construct a tree from the root, applying
productions forward to expand non-terminals into strings of symbols.
Bottom-up: A Bottom-up parser builds the tree starting with the leaves, using productions
in reverse to identify strings of symbols that can be grouped together.
In both cases the construction of derivation is directed by scanning the input sequence from
left to right, one symbol at a time.
Parse Tree:
[Figure: position of the parser in the compiler model. The lexical analyzer feeds tokens to the parser, the parser passes its output to the rest of the front end, and both interact with the symbol table.]
A parse tree is the graphical representation of the structure of a sentence according to its
grammar.
Example:
Let the productions P be:
E → T | E + T
T → F | T * F
F → V | (E)
V → a | b | c | d
The parse tree may be viewed as a representation for a derivation that filters out the choice
regarding the order of replacement.
Parse tree for a + b * c is:
[Figure: parse tree for a + b * c]
A parse tree can be presented in a simplified form, with only the relevant structural information, by:
- leaving out chains of derivations (whose sole purpose is to give operators different precedence), and
- labeling the nodes with the operators in question rather than with a non-terminal.
The simplified parse tree is sometimes called a structural tree or syntax tree.
a * b + c a + b * c (a + b) * (c + d)
[Figure: syntax trees for the three expressions above.]
If a compiler had to process only correct programs, its design & implementation would be
greatly simplified. But programmers frequently write incorrect programs, and a good
compiler should assist the programmer in identifying and locating errors.The programs
contain errors at many different levels.
For example, errors can be:
Much of error detection and recovery in a compiler is centered around the syntax analysis
phase.
The goals of error handler in a parser are:
It should report the presence of errors clearly and accurately.
It should recover from each error quickly enough to be able to detect subsequent errors.
It should not significantly slow down the processing of correct programs.
2.3 Ambiguity:
Several derivations will generate the same sentence, perhaps by applying the same
productions in a different order. This alone is fine, but a problem arises if the same sentence
has two distinct parse trees. A grammar is ambiguous if there is any sentence with more than
one parse tree.
Any parser for an ambiguous grammar has to choose somehow which tree to return. There
are a number of solutions to this; the parser could pick one arbitrarily, or we can provide
some hints about which to choose. Best of all is to rewrite the grammar so that it is not
ambiguous.
There is no general method for removing ambiguity. Ambiguity is acceptable in spoken
languages. Ambiguous programming languages are useless unless the ambiguity can be
resolved.
Any sentence with more than two variables, such as (arg, arg, arg) will have multiple parse
trees.
A → Aα1 | Aα2 | ... | Aαm | β1 | β2 | ... | βn
where
A is the left-recursive non-terminal,
each αi is a string of terminals and non-terminals, and
each βi is a string of terminals and non-terminals that does not begin with A.
The left recursion is removed by rewriting the productions as:
A → β1 A' | β2 A' | ... | βn A'
A' → α1 A' | α2 A' | ... | αm A' | ε
Left recursion can lead to non-termination in a top-down parser:
- for a top-down parser, any recursion must be right recursion;
- we would therefore like to convert left recursion into right recursion.
Example 1:
Remove the left recursion from the production: A → Aα | β
Applying the transformation yields:
A → βA'
A' → αA' | ε
Example 2:
Remove the left recursion from the productions:
E → E + T | T
T → T * F | F
Applying the transformation yields:
E → T E'          T → F T'
E' → + T E' | ε   T' → * F T' | ε
Example 3:
Remove the left recursion from the productions:
E → E + T | E - T | T
T → T * F | T / F | F
Applying the transformation yields:
E → T E'                    T → F T'
E' → + T E' | - T E' | ε    T' → * F T' | / F T' | ε
Example 4:
Remove the left recursion from the productions:
S → Aa | b
A → Ac | Sd | ε
1. The non-terminal S is left recursive because S ⇒ Aa ⇒ Sda, but it is not immediately left recursive.
2. Substitute the S-productions in A → Sd to obtain:
A → Ac | Aad | bd | ε
3. Eliminating the immediate left recursion:
S → Aa | b
A → bdA' | A'
A' → cA' | adA' | ε
Example 5:
Consider the following grammar and eliminate left recursion.
S → Aa | b
A → Sc | d
The non-terminal S is left recursive in two steps:
S ⇒ Aa ⇒ Sca ⇒ Aaca ⇒ Scaca ⇒ ...
Left recursion causes the parser to loop like this, so remove it:
replace A → Sc | d by A → Aac | bc | d
and then, by using the transformation rules:
A → bcA' | dA'
A' → acA' | ε
Algorithm:
For each non-terminal A, find the longest prefix α that occurs in two or more right-hand sides of A.
If α ≠ ε, then replace all of the A-productions
A → αβ1 | αβ2 | ... | αβn | γ
with
A → αA' | γ
A' → β1 | β2 | ... | βn
where A' is a new non-terminal.
Repeat until no common prefixes remain.
It is easy to remove common prefixes by left factoring, creating a new non-terminal.
For example consider:
V → αβ | αγ
Change to:
V → αV'
V' → β | γ
Example 1:
Eliminate Left factoring in the grammar:
S → V := int
V → alpha '[' int ']' | alpha
Becomes:
S → V := int
V → alpha V'
V' → '[' int ']' | ε
The input token string is: if id then while true do print else print.
[Figure: trees 1 through 7 built by the top-down parser for this input, expanding S step by step: first S to "if E then S else S", then E to id, then the inner S to "while E do S", then E to true, S to print, and finally the else-branch S to print.]
Input: print.
Action: print matches; input exhausted; done.
[Figure: steps (a), (b) and (c) of the backtracking top-down parse of w = cad with the grammar S → cAd, A → ab | a.]
The leftmost leaf, labeled c, matches the first symbol of w, so we advance the input pointer to a, the second symbol of w, and consider the next leaf, labeled A. We can then expand A using its first alternative to obtain the tree in Fig (b). We now have a match for the second input symbol, so we advance the input pointer to d, the third input symbol, and compare d against the next leaf, labeled b. Since b does not match d, we report failure and go back to A to see whether there is another alternative for A that we have not tried but that might produce a match.
In going back to A, we must reset the input pointer to position 2. We now try the second alternative for A to obtain the tree of Fig (c). The leaf a matches the second symbol of w and the leaf d matches the third symbol.
A left-recursive grammar can cause a recursive-descent parser, even one with backtracking, to go into an infinite loop. That is, when we try to expand A, we may eventually find ourselves again trying to expand A without having consumed any input.
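The backtracking behaviour just described can be sketched directly in C for this small grammar. The sketch below is only an illustration under the stated assumptions: the grammar S → cAd, A → ab | a is hard-coded, the input is the global string w, and the simple save/restore of the cursor stands in for full backtracking (in particular, once S has committed to a choice of A it does not retry it, which is enough for the input cad).

#include <stdio.h>

static const char *w;   /* input, e.g. "cad" */
static int cursor;      /* current input position */

static int match(char c)
{
    if (w[cursor] == c) { cursor++; return 1; }
    return 0;
}

static int A(void)                        /* A -> a b | a            */
{
    int save = cursor;
    if (match('a') && match('b')) return 1;   /* try A -> a b first  */
    cursor = save;                            /* backtrack            */
    if (match('a')) return 1;                 /* then try A -> a      */
    cursor = save;
    return 0;
}

static int S(void)                        /* S -> c A d              */
{
    int save = cursor;
    if (match('c') && A() && match('d')) return 1;
    cursor = save;
    return 0;
}

int main(void)
{
    w = "cad"; cursor = 0;
    puts(S() && w[cursor] == '\0' ? "accepted" : "rejected");
    return 0;
}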
Predictive Parsing:
Predictive parsing is top-down parsing without backtracking. For many languages we can make perfect guesses (and so avoid backtracking) by using one symbol of lookahead; i.e., if
A → α1 | α2 | ... | αn,
choose the correct αi by looking at the first symbol it can derive. If ε is one of the alternatives, choose it last.
This approach is also called as predictive parsing. There must be at most one production in
order to avoid backtracking. If there is no such production then no parse tree exists and an
error is returned.
The crucial property is that, the grammar must not be left-recursive.
Predictive parsing works well on those fragments of programming languages in which
keywords occurs frequently.
For example:
stmt → if expr then stmt else stmt
     | while expr do stmt
     | begin stmt-list end
then the keywords if, while and begin tell us which alternative is the only one that could possibly succeed if we are to find a statement.
The model of predictive parser is as follows:
A predictive parser has:
Stack
Input
Parsing Table
Output
The input buffer consists the string to be parsed, followed by $, a symbol used as a right end
marker to indicate the end of the input string.
The stack consists of a sequence of grammar symbols with $ on the bottom, indicating the
bottom of the stack. Initially the stack consists of the start symbol of the grammar on the top
of $.
Recursive descent and LL parsers are often called predictive parsers, because they operate by
predicting the next step in a derivation.
Define FOLLOW (A), for nonterminals A, to be the set of terminals a that can appear
immediately to the right of A in some sentential form, that is, the set of terminals a such that
there exist a derivation of the form S=>αAaβ for some α and β. If A can be the rightmost
symbol in some sentential form, then $ is in FOLLOW(A).
A → BC | EFGH | H
B → b
C → c | ε
E → e | ε
F → CE
G → g
H → h | ε
Solution:
Finding first () set:
1. FIRST(H) = FIRST(h) ∪ FIRST(ε) = {h, ε}
2. FIRST(G) = FIRST(g) = {g}
3. FIRST(C) = FIRST(c) ∪ FIRST(ε) = {c, ε}
4. FIRST(E) = FIRST(e) ∪ FIRST(ε) = {e, ε}
5. FIRST(F) = FIRST(CE) = (FIRST(C) - {ε}) ∪ FIRST(E)
            = {c} ∪ {e, ε} = {c, e, ε}
6. FIRST(B) = FIRST(b) = {b}
7. FIRST(A) = FIRST(BC) ∪ FIRST(EFGH) ∪ FIRST(H)
            = FIRST(B) ∪ ((FIRST(E) - {ε}) ∪ (FIRST(F) - {ε}) ∪ FIRST(G)) ∪ {h, ε}
            = {b} ∪ ({e} ∪ {c, e} ∪ {g}) ∪ {h, ε}
            = {b, c, e, g, h, ε}
6. FOLLOW(E) = (FIRST(FGH) - {ε}) ∪ FOLLOW(F)
             = ((FIRST(F) - {ε}) ∪ FIRST(G)) ∪ FOLLOW(F)
             = {c, e} ∪ {g} ∪ {g} = {c, e, g}
7. FOLLOW(C) = FOLLOW(A) ∪ (FIRST(E) - {ε}) ∪ FOLLOW(F)
             = {$} ∪ {e} ∪ {g} = {e, g, $}
Example 1:
Construct a predictive parsing table for the given grammar or Check whether the given
grammar is LL(1) or not.
E → E + T | T
T → T * F | F
F → (E) | id
Step 1:
If the given grammar is left recursive, first convert it into a non-left-recursive grammar (otherwise the top-down parser would go into an infinite loop):
E → T E'
E' → + T E' | ε
T → F T'
T' → * F T' | ε
F → (E) | id
Step 2:
Find the FIRST(X) and FOLLOW(X) for all the variables.
FIRST(E) = FIRST(T) = FIRST(F) = {(, id}
FIRST(E') = {+, ε}          FIRST(T') = {*, ε}
FOLLOW(E) = FOLLOW(E') = {$, )}
FOLLOW(T) = FOLLOW(T') = (FIRST(E') - {ε}) ∪ FOLLOW(E) = {+, ), $}
FOLLOW(F) = (FIRST(T') - {ε}) ∪ FOLLOW(T) = {*, +, ), $}
Step 3:
Construction of parsing table:
Variables      +             *            (           )          id          $
E                                         E → TE'                E → TE'
E'             E' → +TE'                              E' → ε                 E' → ε
T                                         T → FT'                T → FT'
T'             T' → ε        T' → *FT'                T' → ε                 T' → ε
F                                         F → (E)                F → id
Table 3.1: Parsing table
Fill the table with the production A → α on the basis of FIRST(α). If ε is in FIRST(α), then for every symbol in FOLLOW(A) fill in A → ε.
Let us start with the non-terminal E. FIRST(E) = {(, id}, so place the production E → TE' at ( and id.
For the non-terminal E', FIRST(E') = {+, ε}. So place the production E' → +TE' at +; and since there is an ε in FIRST(E'), see FOLLOW(E') = {$, )} and write the production E' → ε at $ and ).
Similarly:
For the non-terminal T, FIRST(T) = {(, id}, so place the production T → FT' at ( and id.
For the non-terminal T', FIRST(T') = {*, ε}. So place the production T' → *FT' at *; and since there is an ε in FIRST(T'), see FOLLOW(T') = {+, $, )} and write the production T' → ε at +, $ and ).
For the non-terminal F, FIRST(F) = {(, id}. So place the production F → id at id and F → (E) at (, as F has two productions.
Finally, make all undefined entries errors.
As there were no multiple entries in the table, the given grammar is LL(1).
Step 4:
Moves made by predictive parser on the input id + id * id is:
STACK        INPUT            REMARKS
$E           id + id * id $   E and id do not match; see E on id in the parse table; the production is E → TE'; pop E, push E' and T (i.e., push the RHS in reverse order).
$E'T         id + id * id $   See T on id; the production is T → FT'; pop T, push T' and F; proceed until the stack top and the input symbol are identical.
$E'T'F       id + id * id $   F → id
$E'T'id      id + id * id $   Identical; pop id and remove id from the input.
$E'T'        + id * id $      See T' on +; T' → ε, so pop T'.
$E'          + id * id $      See E' on +; E' → +TE'; push E', T and +.
$E'T+        + id * id $      Identical; pop + and remove + from the input.
$E'T         id * id $
$E'T'F       id * id $        T → FT'
$E'T'id      id * id $        F → id
$E'T'        * id $
$E'T'F*      * id $           T' → *FT'
$E'T'F       id $
$E'T'id      id $             F → id
$E'T'        $                T' → ε
$E'          $                E' → ε
$            $                Accept.
Table 3.2: Moves made by the parser on input id + id * id
The predictive parser accepts the given input string. We can see that both the input and the stack contain only $, i.e., both are exhausted, hence the string is accepted.
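The moves in Table 3.2 can be driven by a short loop. The C sketch below is a minimal illustration of that table-driven loop under simplifying assumptions: the parsing table of Table 3.1 is encoded in the M() function, E' and T' are written as the single characters 'e' and 't', id is the single character 'i', and the input is assumed to be already tokenized.

#include <stdio.h>
#include <string.h>

/* Non-terminals: E, e (E'), T, t (T'), F.  Terminals: + * ( ) i, with $ as end marker. */
/* M[A][a] holds the RHS to push ("" = epsilon, NULL = error).                          */
static const char *M(char A, char a)
{
    switch (A) {
    case 'E': return (a=='(' || a=='i') ? "Te"  : NULL;
    case 'e': return (a=='+') ? "+Te" : (a==')' || a=='$') ? "" : NULL;
    case 'T': return (a=='(' || a=='i') ? "Ft"  : NULL;
    case 't': return (a=='*') ? "*Ft" : (a=='+'||a==')'||a=='$') ? "" : NULL;
    case 'F': return (a=='i') ? "i" : (a=='(') ? "(E)" : NULL;
    }
    return NULL;
}

int main(void)
{
    const char *input = "i+i*i$";          /* id + id * id, already tokenized */
    char stack[100] = "$E";                /* $ at the bottom, start symbol on top */
    int top = 1, ip = 0;

    while (stack[top] != '$' || input[ip] != '$') {
        char X = stack[top], a = input[ip];
        if (X == a) { top--; ip++; continue; }          /* match a terminal    */
        if (strchr("+*()i", X)) { puts("error"); return 1; }
        const char *rhs = M(X, a);
        if (!rhs) { puts("error"); return 1; }
        top--;                                          /* pop the non-terminal */
        for (int k = (int)strlen(rhs) - 1; k >= 0; k--) /* push RHS in reverse  */
            stack[++top] = rhs[k];
    }
    puts("accepted");
    return 0;
}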
LL (1) Grammar:
The first L stands for "left-to-right scan of the input". The second L stands for "left-most derivation". The '1' stands for "1 token of lookahead".
No LL(1) grammar can be ambiguous or left recursive.
If there were no multiple entries in the predictive parsing table, the given grammar is LL(1).
If the grammar G is ambiguous or left recursive, then the table will have at least one multiply defined entry.
The weakness of LL(1) (top-down, predictive) parsing is that it must predict which production to use.
For error recovery, fill the constructed table with synch for the remaining input symbols that are in the FOLLOW set of each non-terminal, and then fill the rest of the entries with error.
Variables      +             *             (            )           id          $
E              error         error         E → TE'      synch       E → TE'     synch
E'             E' → +TE'     error         error        E' → ε      error       E' → ε
T              synch         error         T → FT'      synch       T → FT'     synch
T'             T' → ε        T' → *FT'     error        T' → ε      error       T' → ε
F              synch         synch         F → (E)      synch       F → id      synch
Table 3.3: Synchronizing tokens added to the parsing table of Table 3.1
If the parser looks up an entry in the table and finds synch, then the non-terminal on top of the stack is popped in an attempt to resume parsing. If the token on top of the stack does not match the input symbol, then the token is popped from the stack.
The moves of the parser and error recovery on the erroneous input ) id * + id are as follows.
Example 2:
S → iEtSS' | a
S' → eS | ε
E → b
Solution:
Computation of First () set:
The parsing table for this grammar is:
            a          b          e                 i               t        $
S           S → a                                   S → iEtSS'
S'                                S' → eS
                                  S' → ε                                     S' → ε
E                      E → b
As the table has a multiply defined entry (at S' on e), the given grammar is not LL(1).
Example 3:
Construct the FIRST and FOLLOW and predictive parse table for the grammar:
S → AC$
C → c | ε
A → aBCd | BQ | ε
B → bB | d
Q → q
Solution:
Finding the first () sets:
First(Q) = {q}
First(B) = {b, d}
First(C) = {c, ε}
First(A) = First(aBCd) ∪ First(BQ) ∪ First(ε)
         = {a} ∪ First(B) ∪ {ε}
         = {a} ∪ {b, d} ∪ {ε}
         = {a, b, d, ε}
First(S) = First(AC$)
         = (First(A) - {ε}) ∪ (First(C) - {ε}) ∪ First($)
         = {a, b, d} ∪ {c} ∪ {$}
         = {a, b, c, d, $}
Finding the Follow() sets:
Follow(S) = {#}
Follow(A) = (First(C) - {ε}) ∪ First($) = {c} ∪ {$} = {c, $}
Follow(B) = (First(C) - {ε}) ∪ First(d) ∪ First(Q)
          = {c} ∪ {d} ∪ {q} = {c, d, q}
Follow(C) = First($) ∪ First(d) = {d, $}
Follow(Q) = Follow(A) = {c, $}
UNIT- II
BOTTOM UP PARSING
Bottom Up Parsing:
A bottom-up parser builds a derivation by working from the input sentence back towards the start symbol S. In bottom-up parsing a rightmost derivation is traced out in reverse order:
S = γ0 ⇒ γ1 ⇒ γ2 ⇒ ... ⇒ γn-1 ⇒ γn = sentence
Assuming a production A → β, to reduce γi to γi-1 we match the RHS β against a substring of γi and then replace β by the corresponding LHS, A.
Example – 1:
S → if E then S else S | while E do S | print
E → true | false | id
Input: if id then while true do print else print
Parse tree:
Basic idea: given the input string, "reduce" it to the goal (start) symbol by looking for substrings that match production right-hand sides.
[Figure: parse tree for the input string, with S at the root, the else-branch expanding to while E do S, and E and the statements expanding to id, true and print.]
Top down Vs Bottom-up parsing:
Top-down                                       Bottom-up
1. Constructs the tree from root to leaves     1. Constructs the tree from leaves to root
2. "Guesses" which RHS to substitute for a     2. "Guesses" which rule to "reduce" by
   non-terminal
3. Produces a left-most derivation             3. Produces a reverse right-most derivation
4. Recursive descent, LL parsers               4. Shift-reduce, LR, LALR, etc.
5. Easy for humans                             5. "Harder" for humans
Both work for most (but not all) features of most computer languages.
Example – 2:
Grammar:                 Right-most derivation (input: abbcde)
S → aAcBe                S ⇒ aAcBe
A → Ab | b                 ⇒ aAcde
B → d                      ⇒ aAbcde
                           ⇒ abbcde
Bottom-up approach
Right sentential form      Reduction used
abbcde
aAbcde                     A → b
aAcde                      A → Ab
aAcBe                      B → d
S                          S → aAcBe
Parsing using Bottom-up approach:
Input                      Production used
abbcde
aAbcde                     A → b
aAcde                      A → Ab
aAcBe                      B → d
S                          S → aAcBe
Handles:
Always making progress by replacing a substring with the LHS of a matching production will not necessarily lead to the goal/start symbol. For example:
abbcde
aAbcde        A → b
aAAcde        A → b
stuck
Informally, A Handle of a string is a substring that matches the right side of a production,
and whose reduction to the non-terminal on the left side of the production represents one step
along the reverse of a right most derivation.
If the grammar is unambiguous, every right sentential form has exactly one handle.
More formally, a handle is a production A → β and a position in the current right-sentential form such that:
S ⇒* αAw ⇒ αβw
For example, in the right-sentential form aAbcde the handle is A → Ab at the marked position. Note that the string w to the right of the handle contains only terminal symbols.
Handle Pruning:
Keep removing handles, replacing them with corresponding LHS of production, until we
reach S.
Example:
E → E+E | E*E | (E) | id
Right sentential form      Handle      Reducing production
a + b * c                  a           E → id
E + b * c                  b           E → id
E + E * c                  c           E → id
The grammar is ambiguous, so there are actually two handles at next-to-last step.
We can use parser-generators that compute the handles for us.
Possible Conflicts:
Ambiguous grammars lead to parsing conflicts.
1. Shift-reduce: Both a shift action and a reduce action are possible in the same state
(should we shift or reduce)
Example: dangling-else problem
2. Reduce-reduce: Two or more distinct reduce actions are possible in the same state.
(Which production should we reduce with?)
Example:
stmt → id (param-list)              (a(i) is a procedure call)
param → id
expr → id (expr) | id               (a(i) is an array subscript)
Stack            Input buffer       Action
$ ... a ( i      ) ... $            Reduce by ?
Should we reduce to param or to expr? Need to know the type of a: is it an array or a
function. This information must flow from declaration of a to this use, typically via a symbol
table.
Shift – reduce parsing example: (Stack implementation)
Grammar: EE+E/E*E/(E)/id
Input: id1+id2+id3
One Scheme to implement a handle-pruning, bottom-up parser is called a shift-reduce parser.
Shift reduce parsers use stack and an input buffer.
The sequence of steps is as follows:
1. Initialize the stack with $.
2. Repeat until the top of the stack is the goal symbol and the input token is the end-of-file marker:
Example 2:
Goal → Expr
[Table: shift-reduce parse of id - num * id, starting from a stack containing only $ and ending with $ Goal and the action Accept.]
1. shift until the top of the stack is the right end of a handle
2. Find the left end of the handle & reduce.
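The C sketch below illustrates this shift-then-reduce loop for the grammar of Example 2 (S → aAcBe, A → Ab | b, B → d) and the input abbcde. It is a deliberately naive sketch: it reduces whenever the top of the stack matches a right-hand side, trying longer right-hand sides first, which happens to find the correct handles for this particular grammar and input; a real shift-reduce parser uses an LR table to decide when to shift and when to reduce.

#include <stdio.h>
#include <string.h>

/* RHS patterns are tried longest first so that the handle at the top of the */
/* stack, not some shorter matching substring, is reduced.                    */
static const char *rhs[] = { "aAcBe", "Ab", "b", "d" };
static const char  lhs[] = {  'S',    'A',  'A', 'B' };

int main(void)
{
    const char *input = "abbcde";
    char stack[32]; int top = 0;            /* stack of grammar symbols       */

    for (int ip = 0; input[ip]; ) {
        stack[top++] = input[ip++];         /* shift one symbol               */
        for (int changed = 1; changed; ) {  /* reduce while a RHS matches     */
            changed = 0;
            for (int r = 0; r < 4; r++) {
                int n = (int)strlen(rhs[r]);
                if (top >= n && strncmp(stack + top - n, rhs[r], n) == 0) {
                    top -= n;
                    stack[top++] = lhs[r];  /* replace the handle by its LHS  */
                    printf("reduce by %c -> %s\n", lhs[r], rhs[r]);
                    changed = 1;
                    break;
                }
            }
        }
    }
    stack[top] = '\0';
    puts(top == 1 && stack[0] == 'S' ? "accepted" : "rejected");
    return 0;
}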
Procedure:
stmt → if expr then stmt | if expr then stmt else stmt | other
The example string if E1 then if E2 then S1 else S2 has two parse trees (ambiguity), and so this grammar is not of LR(k) type for any k.
Operator precedence parsing has a number of disadvantages:
1. It is hard to handle tokens like the minus sign, which has two different precedences.
2. Only a small class of grammars can be parsed.
3. The relationship between a grammar for the language being parsed and the operator-
precedence parser itself is tenuous, one cannot always be sure the parser accepts exactly
the desired language.
Disadvantages:
1. L(G) may differ from L(parser)
2. error detection
3. usage is limited
4. They are easy to analyse manually
Example:
Grammar: E → EAE | (E) | -E | id
         A → + | - | * | /
Input string: id+id*id
The operator-precedence relations are:
         id        +         *         $
id                 .>        .>        .>
+        <.        .>        <.        .>
*        <.        .>        .>        .>
$        <.        <.        <.
Solution: This is not operator grammar, so first reduce it to operator grammar form, by
eliminating adjacent non-terminals.
Operator grammar is:
E → E+E | E-E | E*E | E/E | (E) | -E | id
The input string with the precedence relations inserted is:
$ <. id .> + <. id .> * <. id .> $
Scan the string from the left end until the first .> is encountered.
This occurs between the first id and +.
Scan backwards (to the left) over any =· until a <· is encountered. We scan backwards to $.
$ <. id .> + <. id .> * <. id .> $
Everything to the left of the first .> and to the right of <. is called the handle. Here, the handle is the first id.
Then reduce id to E. At this point we have:
E+id*id
Repeating the process for the remaining ids gives:
E+E*E
Now, the input string after deleting the non-terminals is:
$+*$
$+*$
Inserting the precedence relations, we get:
$<.+<.*.>$
The left end of the handle lies between + and * and the right end between * and $. It
indicates that, in the right sentential form E+E*E, the handle is E*E.
Reducing by E → E*E, we get:
E+E
Now the input string is:
$<.+$
Again inserting the precedence relations, we get:
$<.+.>$
Reducing by E → E+E, we get:
$$
and finally we are left with:
E
Hence accepted.
Input string     Precedence relations inserted            Action
id+id*id         $ <. id .> + <. id .> * <. id .> $
E+id*id          $ + <. id .> * <. id .> $                E → id
E+E*id           $ + * <. id .> $                         E → id
E+E*E            $ + * $
E+E*E            $ <. + <. * .> $                         E → E*E
E+E              $ <. + $
E+E              $ <. + .> $                              E → E+E
E                $ $                                      Accepted
LR Parsing Introduction:
The "L" is for left-to-right scanning of the input and the "R" is for constructing a
rightmost derivation in reverse.
Why LR Parsing:
1. LR parsers can be constructed to recognize virtually all programming-
language constructs for which context-free grammars can be written.
2. The LR parsing method is the most general non-backtracking shift-reduce
parsing method known, yet it can be implemented as efficiently as
other shift-reduce methods.
3. The class of grammars that can be parsed using LR methods is a proper superset
of the class of grammars that can be parsed with predictive parsers.
4. An LR parser can detect a syntactic error as soon as it is possible to do so on a
left-to-right scan of the input.
The disadvantage is that it takes too much work to construct an LR parser by
hand for a typical programming-language grammar. But there are lots of LR
parser generators available to make this task easy.
LR Parsers:
LR(k) parsers are the most general non-backtracking shift-reduce parsers. Two cases of interest are k=0 and k=1. LR(1) is of practical relevance.
'L' stands for "left-to-right" scan of the input.
'R' stands for "rightmost derivation (in reverse)".
'k' stands for the number of input symbols of lookahead that are used in making parsing decisions. When k is omitted, it is assumed to be 1.
LR(1) parsers are table-driven, shift-reduce parsers that use a limited right context (1 token) for handle recognition.
LR(1) parsers recognize languages that have an LR(1) grammar. A grammar is LR(1) if, given a rightmost derivation
S = γ0 ⇒ γ1 ⇒ γ2 ⇒ ... ⇒ γn-1 ⇒ γn = sentence,
we can isolate the handle of each right-sentential form γi and determine the production by which to reduce, by scanning γi from left to right, going at most 1 symbol beyond the right end of the handle of γi.
Parser accepts input when stack contains only the start symbol and no remaining input
symbol are left.
LR(0) item: (no lookahead)
Grammar rule combined with a dot that indicates a position in its RHS.
Ex-1:  S' → .S$
       S → .x
       S → .(L)
Ex-2:  A → XYZ generates four LR(0) items:
       A → .XYZ
       A → X.YZ
       A → XY.Z
       A → XYZ.
The „.‟ Indicates how much of an item we have seen at a given state in the parse.
A.XYZ indicates that the parser is looking for a string that can be derived from XYZ.
AXY.Z indicates that the parser has seen a string derived from XY and is looking for one
derivable from Z.
LR(0) items play a key role in the SLR(1) table construction algorithm.
LR(1) items play a key role in the LR(1) and LALR(1) table construction algorithms.
LR parsers have more information available than LL parsers when choosing a production:
* LR knows everything derived from RHS plus „K‟ lookahead symbols.
* LL just knows „K‟ lookahead symbols into what‟s derived from RHS.
Deterministic context free languages:
It consists of an input, an output, a stack, a driver program, and a parsing table that has two
parts:
action and goto.
The LR parser program determines Sm, the current state on the top of the stack, and ai, the
current input symbol. It then consults action [Sm, ai], which can have one of four values:
1. Shift S, where S is a state.
2. reduce by a grammar production A
3. accept and
4. error
The goto function takes a state and grammar symbol as arguments and produces a state.
The goto function of a parsing table constructed from a grammar G using the SLR, canonical
LR or LALR method is the transition function of DFA that recognizes the viable prefixes of
G. (Viable prefixes of G are those prefixes of right-sentential forms that can appear on the
stack of a shift-reduce parser, because they do not extend past the right-most handle).
Augmented Grammar:
If G is a grammar with start symbol S, then G', the augmented grammar for G, is G with a new start symbol S' and the production S' → S.
The purpose of this new starting production is to indicate to the parser when it should stop parsing and announce acceptance of the input; i.e., acceptance occurs when and only when the parser is about to reduce by S' → S.
E' → E
E → E+T
E → T
T → T*F
T → F
F → (E)
F → id
The canonical collection of LR(0) items starts from I0 = closure({E' → .E}):
I0: E' → .E
    E → .E+T
    E → .T
    T → .T*F
    T → .F
    F → .(E)
    F → .id
Goto (I0,E):
I1: E' → E.
    E → E.+T
Goto (I0,T):
I2: E → T.        - reduced item (RI)
    T → T.*F
Goto (I0,F):
I3: T → F.        - reduced item (RI)
Goto (I0,( ):
I4: F → (.E)
    E → .E+T
    E → .T
    T → .T*F
    T → .F
    F → .(E)
    F → .id
If '.' precedes a non-terminal, start writing that non-terminal's productions (the closure). Here first the E-productions, then T, and after that the F-productions.
Goto (I0,id):
I5: F → id.       - reduced item
The E-successor of I0 (state I1) contains two items derived from I0, and the closure operation adds no more, since in neither item does the marker precede a non-terminal.
Goto (I1,+):
I6: E → E+.T      (start writing the T-productions)
    T → .T*F
    T → .F
    F → .(E)
    F → .id
Goto (I2,*):
I7: T → T*.F      (start writing the F-productions)
    F → .(E)
    F → .id
Goto (I4,E):
I8: F → (E.)
    E → E.+T
Goto (I4,T):
I2: E → T.        (same as I2 above)
    T → T.*F
Goto (I4,( ):
I4: F → (.E)
    E → .E+T
    E → .T
    T → .T*F
    T → .F
    F → .(E)
    F → .id
Goto (I4,id):
I5: F → id.       - reduced item
Goto (I6,T):
I9: E → E+T.      - reduced item
    T → T.*F
Goto (I6,F):
I3: T → F.        - reduced item
Goto (I6,( ):
I4: F → (.E)
    E → .E+T
    E → .T
    T → .T*F
    T → .F
    F → .(E)
    F → .id
Goto (I6,id):
I5: F → id.       - reduced item
Goto (I7,F):
I10: T → T*F.     - reduced item
Goto (I7,( ):
I4: F → (.E)
    E → .E+T
    E → .T
    T → .T*F
    T → .F
    F → .(E)
    F → .id
Goto (I7,id):
I5: F → id.       - reduced item
Goto (I8,) ):
I11: F → (E).     - reduced item
Goto (I8,+):
I6: E → E+.T
    T → .T*F
    T → .F
    F → .(E)
    F → .id
Goto (I9,*):
I7: T → T*.F
    F → .(E)
    F → .id
s denotes shift actions and r denotes reduce actions.
Initially E' → E. is in I1, so I = 1.
Set action[I, $] to accept, i.e., action[1, $] = acc.
                          Action                                    Goto
State     id      +       *       (       )       $        E       T       F
0         s5                      s4                        1       2       3
1                 s6                              acc
2                 r2      s7              r2      r2
3                 r4      r4              r4      r4
4         s5                      s4                        8       2       3
5                 r6      r6              r6      r6
6         s5                      s4                                9       3
7         s5                      s4                                        10
8                 s6                      s11
9                 r1      s7              r1      r1
10                r3      r3              r3      r3
11                r5      r5              r5      r5
goto (I1,+)=I6, then action [1,+] = shift 6.
3. Consider I2:
4. Consider I3:
T → F. is the reduced item, so take FOLLOW(T).
In forming item sets a closure operation must be performed to ensure that whenever the
marker in an item of a set precedes a non-terminal, say E, then initial items must be included
in the set for all productions with E on the left hand side.
The first item set is formed by taking initial item for the start state and then performing the
closure operation, giving the item set;
We construct the action and goto as follows:
1. If there is a transition from state i to state j under the terminal symbol k, then set action[i, k] to sj.
2. If there is a transition under a non-terminal symbol A, say from state i to state j, set goto[i, A] to j.
3. If state i contains a transition under $, set action[i, $] to accept.
4. If there is a reduce transition #p from state i, set action[i, k] to reduce #p for all terminals k belonging to FOLLOW(A), where A is the subject (left-hand side) of production #p.
If any entry is multiply defined then the grammar is not SLR(1). Blank entries are represented by a dash (-).
5. Consider I4 items:
The item F → .id gives rise to goto[I4, id] = I5, so action[4, id] = shift 5.
The item F → .(E) gives rise to goto[I4, (] = I4, so action[4, (] = shift 4.
The transition goto(I4, F) = I3, so goto[4, F] = 3.
The transition goto(I4, T) = I2, so goto[4, T] = 2.
The transition goto(I4, E) = I8, so goto[4, E] = 8.
6. Consider I5 items:
F → id. is the reduced item, so take FOLLOW(F).
7. Consider I6 items:
goto(I6, T) = I9, so goto[6, T] = 9
goto(I6, F) = I3, so goto[6, F] = 3
goto(I6, () = I4, so action[6, (] = shift 4
goto(I6, id) = I5, so action[6, id] = shift 5
8. Consider I7 items:
goto(I7, F) = I10, so goto[7, F] = 10
goto(I7, () = I4, so action[7, (] = shift 4
goto(I7, id) = I5, so action[7, id] = shift 5
9. Consider I8 items:
goto (I8,)) = I11, then action [8,)] = shift 11
goto (I8,+) = I6, then action [8,+] = shift 6
Action [11,*] = reduce 5
Action [11,)] = reduce 5
Action [11,$] = reduce 5
Procedure for Step-V
The parsing algorithm used for all LR methods uses a stack that contains alternatively state
numbers and symbols from the grammar and a list of input terminal symbols terminated by $.
For example:
aAbBcCdDeEf / uvwxyz$
where a ... f are state numbers,
A ... E are grammar symbols (either terminals or non-terminals), and
u ... z are the terminal symbols of the text still to be parsed.
The parsing algorithm starts in state I0 with the configuration –
0 / whole program upto $.
Repeatedly apply the following rules until either a syntactic error is found or the parse is
complete.
(i) If action[f, u] = si then transform
aAbBcCdDeEf / uvwxyz$
to
aAbBcCdDeEfui / vwxyz$
This is called a SHIFT transition.
(ii) If action[f, u] = #p and production #p is of length 3, say, then it will be of the form P → CDE, where CDE exactly matches the top three symbols on the stack and P is some non-terminal. Then, assuming goto[c, P] = g,
aAbBcCdDeEf / uvwxyz$
will transform to
aAbBcPg / uvwxyz$
The symbols in the stack corresponding to the right-hand side of the production have been replaced by the subject of the production and a new state chosen using the goto table. This is called a REDUCE transition.
(iii) If action[f, u] = accept, parsing is complete.
(iv) If action[f, u] = - then the text parsed is syntactically incorrect.
Canonical LR(O) collection for a grammar can be constructed by augmented grammar and
two functions, closure and goto.
The closure operation:
If I is a set of items for a grammar G, then closure(I) is the set of items constructed from I by two rules:
1. Initially, every item in I is added to closure(I).
2. If A → α.Bβ is in closure(I) and B → γ is a production, then add the item B → .γ to closure(I), if it is not already there. Apply this rule until no more new items can be added to closure(I).
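The worklist nature of the closure operation is easy to see in code. The sketch below computes closure({E' → .E}) for the expression grammar used in the SLR example above; it is only an illustration with simplifying assumptions: items are stored as short strings, Z stands for the augmented start symbol E', and i stands for id.

#include <stdio.h>
#include <string.h>

struct prod { char lhs; const char *rhs; };
static const struct prod G[] = {
    {'Z', "E"},  {'E', "E+T"}, {'E', "T"},
    {'T', "T*F"},{'T', "F"},   {'F', "(E)"}, {'F', "i"}
};
#define NPROD (int)(sizeof G / sizeof G[0])

static char items[64][16];
static int  nitems;

static int have(const char *it)
{
    for (int i = 0; i < nitems; i++)
        if (strcmp(items[i], it) == 0) return 1;
    return 0;
}
static void add(const char *it)
{
    if (!have(it)) strcpy(items[nitems++], it);
}

/* closure: if A -> alpha . B beta is in the set and B -> gamma is a          */
/* production, add B -> . gamma; newly added items are processed in turn.     */
static void closure(void)
{
    for (int i = 0; i < nitems; i++) {
        const char *dot = strchr(items[i], '.');
        char B = dot[1];
        if (B >= 'A' && B <= 'Z')            /* dot precedes a non-terminal   */
            for (int p = 0; p < NPROD; p++)
                if (G[p].lhs == B) {
                    char it[16];
                    sprintf(it, "%c->.%s", B, G[p].rhs);
                    add(it);
                }
    }
}

int main(void)
{
    add("Z->.E");                            /* kernel item of state I0       */
    closure();
    for (int i = 0; i < nitems; i++) printf("%s\n", items[i]);
    return 0;
}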
5. Canonical LR Parsing:
Example:
S → CC
C → cC | d
1. Number the grammar productions:
1. S → CC
2. C → cC
3. C → d
2. The augmented grammar is:
S' → S
S → CC
C → cC
C → d
The initial item is [S' → .S, $]. Matching it against the generic item [A → α.Bβ, a] gives:
A = S'
α = ε
B = S
β = ε
a = $
Function closure tells us to add [B → .γ, b] for each production B → γ and each terminal b in FIRST(βa). Here B = S, so γ must be CC, and since β is ε and a is $, b may only be $. Thus we add:
S → .CC, $
We continue to compute the closure by adding all items [C → .γ, b] for b in FIRST(C$); i.e., matching [S → .CC, $] against [A → α.Bβ, a] we have A = S, α = ε, B = C, β = C and a = $. FIRST(C$) = FIRST(C) = {c, d}.
We add the items:
C → .cC, c
C → .cC, d
C → .d, c
C → .d, d
None of the new items have a non-terminal immediately to the right of the dot, so we have
completed our first set of LR(1) items. The initial I0 items are:
I0: S' → .S, $
    S → .CC, $
    C → .cC, c/d
    C → .d, c/d
Now we start computing goto (I0, X) for the various grammar symbols X.
Goto (I0,S):
I1: S' → S., $
Goto (I0,C):
I2: S → C.C, $
    C → .cC, $
    C → .d, $
Goto (I0,c):
I3: C → c.C, c/d
    C → .cC, c/d
    C → .d, c/d
Goto (I0,d):
I4: C → d., c/d        - reduced item
Goto (I2,C):
I5: S → CC., $         - reduced item
Goto (I2,c):
I6: C → c.C, $
    C → .cC, $
    C → .d, $
Goto (I2,d):
I7: C → d., $          - reduced item
Goto (I3,C):
I8: C → cC., c/d       - reduced item
Goto (I3,c):
I3: C → c.C, c/d
    C → .cC, c/d
    C → .d, c/d
Goto (I3,d):
I4: C → d., c/d        - reduced item
Goto (I6,C):
I9: C → cC., $         - reduced item
Goto (I6,c):
I6: C → c.C, $
    C → .cC, $
    C → .d, $
Goto (I6,d):
I7: C → d., $          - reduced item
All the states are now complete, so we construct the canonical LR(1) parsing table. Here there is no need to find the FOLLOW() sets, as we have already carried a lookahead with each item while constructing the states.
Constructing LR(1) Parsing table:
                 Action                      Goto
State     c        d        $         S        C
0         s3       s4                 1        2
1                           acc
2         s6       s7                          5
3         s3       s4                          8
4         r3       r3
5                           r1
6         s6       s7                          9
7                           r3
8         r2       r2
9                           r2
1. Consider I0 items: goto(I0, c) = I3 and goto(I0, d) = I4 give action[0, c] = shift 3 and action[0, d] = shift 4; goto[0, S] = 1 and goto[0, C] = 2.
2. Consider I1 items: [S' → S., $] is in I1, so action[1, $] = accept.
3. Consider I2 items:
The item [S → C.C, $] gives rise to goto[I2, C] = I5, so goto[2, C] = 5.
The item [C → .cC, $] gives rise to goto[I2, c] = I6, so action[2, c] = shift 6.
The item [C → .d, $] gives rise to goto[I2, d] = I7, so action[2, d] = shift 7.
4. Consider I3 items:
The item [C → c.C, c/d] gives rise to goto[I3, C] = I8, so goto[3, C] = 8.
The item [C → .cC, c/d] gives rise to goto[I3, c] = I3, so action[3, c] = shift 3.
The item [C → .d, c/d] gives rise to goto[I3, d] = I4, so action[3, d] = shift 4.
5. Consider I4 items:
The item [C → d., c/d] is a reduced item; it is in I4, so set action[4, c/d] to reduce C → d (production rule no. 3).
6. Consider I5 items:
The item [S → CC., $] is a reduced item; it is in I5, so set action[5, $] to reduce S → CC (production rule no. 1).
7. Consider I6 items:
The item [C → c.C, $] gives rise to goto[I6, C] = I9, so goto[6, C] = 9.
The item [C → .cC, $] gives rise to goto[I6, c] = I6, so action[6, c] = shift 6.
The item [C → .d, $] gives rise to goto[I6, d] = I7, so action[6, d] = shift 7.
8. Consider I7 items:
The item [C → d., $] is a reduced item; it is in I7, so set action[7, $] to reduce C → d (production rule no. 3).
9. Consider I8 items:
The item [C → cC., c/d] is a reduced item; it is in I8, so set action[8, c/d] to reduce C → cC (production rule no. 2).
10. Consider I9 items:
The item [C → cC., $] is a reduced item; it is in I9, so set action[9, $] to reduce C → cC (production rule no. 2).
If the parsing action table has no multiply defined entries, then the given grammar is called an LR(1) grammar.
LALR Parsing:
Example:
For each core present among the sets of LR(1) items, find all sets having that core and replace these sets by their union (i.e., cluster them into a single item set).
I0: same as previous
I1: same as previous
I2: same as previous
I36: C → c.C, c/d/$
     C → .cC, c/d/$
     C → .d, c/d/$
I5: same as previous
I47: C → d., c/d/$
I89: C → cC., c/d/$
LALR Parsing table construction:
                 Action                      Goto
State     c        d        $         S        C
I0        s36      s47                1        2
1                           acc
2         s36      s47                         5
36        s36      s47                         89
47        r3       r3      r3
5                           r1
89        r2       r2      r2
UNIT- III
SEMANTIC ANALYSIS
Intermediate code forms:
An intermediate code form of a source program is an internal form of the program created by the compiler while translating the program from a high-level language to assembly code (or object/machine code). An intermediate source form represents a more attractive form of target code than does assembly. An optimizing compiler performs optimizations on the intermediate source form and produces an object module.
Analysis + synthesis = translation
We assume that the source program has already been parsed and statically checked. The various intermediate code forms are described below.
Postfix notation:
The ordinary (infix) way of writing the sum of a and b is with the operator in the middle: a + b. The postfix (or reverse Polish) notation for the same expression places the operator at the right end, as ab+.
In general, if e1 and e2 are any postfix expressions and Ø is a binary operator, then the result of applying Ø to the values denoted by e1 and e2 is indicated in postfix notation by e1e2Ø. No parentheses are needed in postfix notation, because the position and arity (number of arguments) of the operators permit only one way to decode a postfix expression.
Example:
(a + b) * c is written in postfix notation as ab+c*.
Postfix notation can be generalized to k-ary operators for any k >= 1. If a k-ary operator Ø is applied to postfix expressions e1, e2, ..., ek, then the result is denoted by e1e2...ekØ. If we know the arity of each operator, then we can uniquely decipher any postfix expression by scanning it from either end.
Example:
Consider ab+c*. The right-hand * says that there are two arguments to its left. Since the next-to-rightmost symbol is c, a simple operand, we know c must be the second operand of *. Continuing to the left, we encounter the operator +; the subexpression ending in + makes up the first operand of *. Continuing in this way, we deduce that ab+c* is "parsed" as (((a,b)+),c)*.
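The same left-to-right scan, with a stack of operands, also evaluates a postfix expression. The C sketch below is a small illustration of this, restricted for simplicity to one-digit operands and the four binary operators.

#include <stdio.h>
#include <ctype.h>

/* Evaluate a postfix expression over one-digit operands: operands are       */
/* pushed, and each operator pops its two arguments and pushes the result.   */
int eval_postfix(const char *s)
{
    int stack[64], top = 0;
    for (; *s; s++) {
        if (isdigit((unsigned char)*s)) {
            stack[top++] = *s - '0';
        } else {                        /* binary operator: + - * /           */
            int b = stack[--top];
            int a = stack[--top];
            switch (*s) {
            case '+': stack[top++] = a + b; break;
            case '-': stack[top++] = a - b; break;
            case '*': stack[top++] = a * b; break;
            case '/': stack[top++] = a / b; break;
            }
        }
    }
    return stack[0];
}

int main(void)
{
    /* (7 + 3) * 2 in postfix is 7 3 + 2 *  */
    printf("73+2* = %d\n", eval_postfix("73+2*"));
    return 0;
}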
a. syntax tree:
The parse tree itself is a useful intermediate-language representation for a source program,
especially in optimizing compilers where the intermediate code needs to extensively
restructure.
A parse tree, however, often contains redundant information which can be eliminated, Thus
producing a more economical representation of the source program. One such variant of a
parse tree is what is called an (abstract) syntax tree, a tree in which each leaf represents an
operand and each interior node an operator.
Examples:
Three-Address Code:
• In three-address code, there is at most one operator on the right side of aninstruction; that is,
no built-up arithmetic expressions are permitted.
x + y * z is translated as
t1 = y * z
t2 = x + t1
• Example
Problems:
Write the 3-address code for the following expressions:
1. if(x + y * z > x * y +z)
a=0;
2. (2 + a * (b – c / d)) / e
3. A :=b * -c + b * -c
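For instance, problem 3 is commonly translated as follows (using a unary-minus operator; the temporary names are, of course, arbitrary):
t1 := - c
t2 := b * t1
t3 := - c
t4 := b * t3
t5 := t2 + t4
a  := t5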
The multiplication i * 8 is appropriate for an array of elements that each take 8 units of space.
C. quadruples:
• Three-address instructions can be implemented as objects or as records with fields for the operator and the operands.
• Three such representations
– Quadruple, triples, and indirect triples
• A quadruple (or quad) has four fields: op, arg1, arg2, and result.
Example
D. Triples
• A triple has only three fields: op, arg1, and arg2
• Using triples, we refer to the result of an operation x op y by its position, rather than by an explicit temporary name.
Example
Fig: Representations of a = b * - c + b * - c
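Since the figure itself is not reproduced here, one common layout of these representations can be sketched for the three-address code t1:=-c; t2:=b*t1; t3:=-c; t4:=b*t3; t5:=t2+t4; a:=t5 (field names follow the usual quadruple/triple conventions):

Quadruples:
      op       arg1     arg2     result
(0)   uminus   c                 t1
(1)   *        b        t1       t2
(2)   uminus   c                 t3
(3)   *        b        t3       t4
(4)   +        t2       t4       t5
(5)   =        t5                a

Triples (results referred to by position):
      op       arg1     arg2
(0)   uminus   c
(1)   *        b        (0)
(2)   uminus   c
(3)   *        b        (2)
(4)   +        (1)      (3)
(5)   =        a        (4)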
Type Checking:
•A compiler has to do semantic checks in addition to syntactic checks.
•Semantic checks: static checks are performed at compile time (type checks, flow-of-control checks, uniqueness checks); dynamic checks are performed at run time.
•A type system is a collection of rules for assigning type expressions to the parts of a
program.
•A sound type system eliminates run-time type checking for type errors.
•A programming language is strongly-typed, if every program its compiler accepts will
execute without type errors.
In practice, some type checking operations are done at run-time (so most programming languages are not strongly typed).
–Ex: int x[100]; … x[i] – most compilers cannot guarantee that i will be between 0 and 99.
Type Expression:
•The type of a language construct is denoted by a type expression.
–A basic type is a type expression (e.g., int, real, char, boolean).
•void: no type (and type-error signals an error during type checking).
–A type name is a type expression.
–A type constructor applied to type expressions is a type expression:
•arrays: If T is a type expression, then array (I,T)is a type expression where I denotes index
range. Ex: array (0..99,int)
•products: If T1and T2 are type expressions, then their Cartesian product T1 x T2 is a type
expression. Ex: int x int
•pointers: If T is a type expression, then pointer (T) is a type expression. Ex: pointer (int)
•functions: If T1 and T2 are type expressions, then T1 → T2 is a type expression. Ex: f: double x char → int
Type Checking of Statements and Expressions:
(Each semantic rule sets S.type or E.type to the proper type when the constituent parts type-check, and to type-error otherwise.)
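A typical set of such rules, sketched in the style of the standard textbook treatment (the grammar fragment and attribute names here are assumptions, not taken from this document), looks like:

S → id := E      { S.type := if id.type = E.type then void else type-error }
S → if E then S1 { S.type := if E.type = boolean then S1.type else type-error }
S → while E do S1{ S.type := if E.type = boolean then S1.type else type-error }
S → S1 ; S2      { S.type := if S1.type = void and S2.type = void then void else type-error }
E → E1 mod E2    { E.type := if E1.type = integer and E2.type = integer then integer else type-error }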
•As long as type expressions are built from basic types (no type names), we may use
structural equivalence between two type expressions
•In some programming languages, we give a name to a type expression, and we use that
name as a type expression afterwards.
–Get the equivalent type expression for a type name (then use structural equivalence), or treat two type names as equivalent only when they are identical (name equivalence).
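Structural equivalence can be tested by a simple recursive comparison of the two type expressions. A minimal C sketch follows (the encoding of type expressions as tagged nodes is an assumption made purely for illustration):

#include <stdbool.h>

/* Tagged representation of a type expression (illustrative only). */
enum kind { BASIC, ARRAY, PRODUCT, POINTER, FUNCTION };

struct type {
    enum kind k;
    int basic;                  /* which basic type, when k == BASIC     */
    int size;                   /* index range, when k == ARRAY          */
    struct type *left, *right;  /* sub-expressions for the constructors  */
};

/* Two type expressions are structurally equivalent if they are the same
   basic type, or are built by applying the same constructor to
   structurally equivalent sub-expressions. */
bool sequiv(const struct type *s, const struct type *t) {
    if (s->k != t->k) return false;
    switch (s->k) {
    case BASIC:    return s->basic == t->basic;
    case ARRAY:    return s->size == t->size && sequiv(s->left, t->left);
    case POINTER:  return sequiv(s->left, t->left);
    case PRODUCT:
    case FUNCTION: return sequiv(s->left, t->left) && sequiv(s->right, t->right);
    }
    return false;
}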
A syntax-directed definition (SDD) is a generalization of a CFG in which each grammar production X → α has associated with it a set of semantic rules of the form
a := f(b1, b2, ..., bk)
where a is an attribute and b1, ..., bk are attributes of the grammar symbols of the production.
– This set of attributes for a grammar symbol is partitioned into two subsets called
synthesized and inherited attributes of that grammar symbol.
• Evaluation of a semantic rule defines the value of an attribute. But a semantic rule may also
have some side effects such as printing a value.
An attribute is said to be synthesized attribute if its value at a parse tree node is determined
from attribute values at the children of the node
An inherited attribute is one whose value at parse tree node is determined in terms of
attributes at the parent and | or siblings of that node.
An attribute can be a string, a number, a type, a memory location, or anything else.
The parse tree showing the value of the attributes at each node is called an annotated parse tree.
The process of computing the attribute values at the nodes is called annotating or decorating the parse tree. Terminals can have synthesized attributes, but not inherited attributes.
• Of course, the order of these computations depends on the dependency graph induced by
the semantic rules.
Ex1: Synthesized Attributes:
Ex: Consider the CFG :
S→ EN
E→ E+T
E→E-T
E→ T
T→ T*F
T→T/F
T→F
F→ (E)
F→digit
N→;
Solution: The syntax directed definition can be written for the above grammar by using
semantic actions for each production.
S →EN S.val=E.val
E →E1+T E.val =E1.val + T.val
E →E1-T E.val = E1.val – T.val
E →T E.val =T.val
T →T1*F T.val = T1.val * F.val
T →T1/F T.val = T1.val / F.val
F → (E) F.val =E.val
T →F T.val =F.val
F →digit F.val =digit.lexval
N →; needs no semantic rule; the semicolon is only the terminating symbol of the expression.
For the non-terminals E, T and F the values are obtained using the attribute “val”.
In S→EN, the symbol S is the start symbol; this rule delivers the final value of the expression.
1. Write the SDD using the appropriate semantic actions for corresponding production rule of
the given Grammar.
2. The annotated parse tree is generated and attribute values are computed. The Computation
is done in bottom up manner.
PROBLEM 1:
Consider the string 5*6+7; Construct Syntax tree, parse tree and annotated tree.
Solution:
The corresponding annotated parse tree is shown below for the string 5*6+7;
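Working bottom-up over that tree, the attribute values are computed as follows:
F.val = 5 (from digit.lexval), so T.val = 5
F.val = 6, so T.val = T1.val * F.val = 5 * 6 = 30, and E.val = 30
F.val = 7, so T.val = 7, and E.val = E1.val + T.val = 30 + 7 = 37
Finally S.val = E.val = 37, the value delivered for 5*6+7;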
Syntax tree:
Advantages: SDDs are more readable and hence useful for specifications
Ex2:
PROBLEM : Consider the grammar that is used for Simple desk calculator. Obtain
the Semantic action and also the annotated parse tree for the string
3*5+4n.
L→En
E→E1+T
E→T
T→T1*F
T→F
F→ (E)
F→digit
Solution :
L→En L.val=E.val
E→E1+T E.val=E1.val+T.val
E→T E.val=T.val
T→T1*F T.val=T1.val*F.val
T→F T.val=F.val
F→(E) F.val=E.val
F→digit F.val=digit.lexval
The corresponding annotated parse tree is shown below for the string 3*5+4n.
Dependency Graphs:
A dependency graph depicts the flow of information among the attribute instances in a particular parse tree: an edge from one attribute instance to another means that the value of the first is needed to compute the second, so any evaluation order must respect these edges.
Postfix Translation Schemes
The postfix SDT implements the desk calculator SDD with one change: the action for the first production prints the value. Since the grammar is LR and the SDD is S-attributed, the actions can be performed along with the reduction steps of the parser.
L →E n {print(E.val);}
E → E1 + T { E.val = E1.val + T.val }
E → E1 - T { E.val = E1.val - T.val }
E → T { E.val = T.val }
T → T1 * F { T.val = T1.val * F.val }
T → F { T.val = F.val }
F → ( E ) { F.val = E.val }
F → digit { F.val = digit.lexval }
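Because the SDT is postfix (all actions sit at the right ends of productions), it can be executed during bottom-up parsing by keeping a value stack in parallel with the parser's state stack: on a shift of digit, push digit.lexval; on a reduction, pop the values of the body symbols, compute the head's value, and push it. A small illustrative C fragment (the names and stack layout are assumptions, not part of any particular parser generator):

#define STACK_SIZE 128

double val[STACK_SIZE];   /* value stack, kept parallel to the state stack */
int    top = -1;          /* index of the topmost entry on both stacks     */

/* Executed when the parser reduces by E -> E1 + T.
   E1.val sits at val[top-2], the entry for '+' at val[top-1],
   and T.val at val[top]; the three are replaced by E.val. */
static void reduce_E_plus_T(void) {
    val[top - 2] = val[top - 2] + val[top];  /* E.val = E1.val + T.val */
    top -= 2;                                /* pop 3 entries, push E  */
}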
Symbol Tables
A symbol table is a data structure used by the compiler to record the names used in the source program together with their attributes (type, scope, storage location, etc.); it must support efficient insertion and lookup of names.
Basic Implementation Techniques:
Unordered List
Simplest to implement
Linked list can grow dynamically – alleviates problem of a fixed size array
Insertion is fast O(1), but lookup is slow for large tables – O(n) on average
Ordered List: entries are kept sorted by name, so lookup can use binary search – O(log n) – but insertion is slower because the list must stay sorted. (A minimal unordered version is sketched below.)
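A minimal unordered (linked-list) symbol table, sketched in C purely for illustration (the single stored attribute here is a type code; a real table would carry more fields):

#include <stdlib.h>
#include <string.h>

struct entry {
    char *name;
    int   type;                /* example attribute */
    struct entry *next;
};

static struct entry *table = NULL;   /* head of the unordered list */

/* O(1): new entries are pushed onto the front of the list. */
void insert_sym(const char *name, int type) {
    struct entry *e = malloc(sizeof *e);
    e->name = malloc(strlen(name) + 1);
    strcpy(e->name, name);
    e->type = type;
    e->next = table;
    table = e;
}

/* O(n) on average: walk the list until the name matches. */
struct entry *lookup_sym(const char *name) {
    for (struct entry *e = table; e != NULL; e = e->next)
        if (strcmp(e->name, name) == 0)
            return e;
    return NULL;
}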
The compiler must do the storage allocation and provide access to variables and data. This involves:
Memory management
Stack allocation
Heap management
Garbage collection
Storage Organization:
The compiler obtains a block of memory from the operating system for the compiled program to run in; the operating system will later map it to physical addresses, decide how to use cache memory, etc. This run-time storage is typically subdivided into:
Program code
Other static data storage, including global constants and compiler-generated data
Stack to support the call/return policy for procedures
Heap to store data that can outlive a call to a procedure
Example:
Consider the quick sort program
Activation tree representing calls during an execution of quicksort:
Activation records
Procedure calls and returns are usually managed by a run-time stack called the control
stack.
Each live activation has an activation record (sometimes called a frame)
The root of activation tree is at the bottom of the stack
The current execution path specifies the contents of the stack, with the record of the most recent activation at the top of the stack.
A General Activation Record
Activation Record
Temporary values
Local data
A saved machine status
An “access link”
A control link
Space for the return value of the called function
The actual parameters used by the calling procedure
Elements in the activation record:
Temporary values that could not fit into registers.
Local variables of the procedure.
Saved machine status for the point at which this procedure was called. It includes the return address and the contents of registers to be restored.
Access link to activation record of previous block or procedure in lexical scope chain.
Control link pointing to the activation record of the caller.
Space for the return value of the function, if any.
actual parameters (or they may be placed in registers, if possible)
Values communicated between caller and callee are generally placed at the beginning of
callee‟s activation record
Fixed-length items: are generally placed at the middle
Items whose size may not be known early enough: are placed at the end of activation
record
We must locate the top-of-stack pointer judiciously: a common approach is to have it
point to the end of fixed length fields
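The general layout can be pictured as a C struct, purely as an illustration of the fields listed above (real frames are laid out by the compiler and the sizes below are assumptions, not declarations in any source language):

/* Illustrative layout of one activation record (frame). */
struct activation_record {
    /* values communicated between caller and callee, at the beginning */
    long  actual_params[4];     /* incoming arguments (size assumed)   */
    long  return_value;

    /* fixed-length bookkeeping fields in the middle */
    void *control_link;         /* frame of the caller                 */
    void *access_link;          /* frame of the lexically enclosing scope */
    long  saved_registers[8];   /* saved machine status, return address */

    /* local data and temporaries; variable-sized data goes at the end */
    long  locals[8];
    long  temporaries[8];
};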
ML:
ML is a functional language
Variables are defined, and have their unchangeable values initialized, by a statement of the form:
val (name) = (expression)
Functions are defined using the syntax:
fun (name) ( (arguments) ) = (body)
For function bodies we shall use let-statements of the form:
let (list of definitions) in (statements) end
A version of quick sort, in ML style, using nested functions:
Sketch of ML program that uses function-parameters:
Memory Manager:
Two basic functions:
Allocation
Deallocation
Properties of memory managers:
Space efficiency
Program efficiency
Low overhead
Locality in Programs:
The conventional wisdom is that programs spend 90% of their time executing 10% of the
code:
Programs often contain many instructions that are never executed.
Only a small fraction of the code that could be invoked is actually executed in a typical run of the program.
The typical program spends most of its time executing innermost loops and tight
recursive cycles in a program.
UNIT- IV
CODE OPTIMIZATION
Introduction
The code produced by straightforward compiling algorithms can often be made to
run faster or take less space, or both. This improvement is achieved by program
transformations that are traditionally called optimizations. Compilers that apply code-
improving transformations are called optimizing compilers.
The transformation must preserve the meaning of programs. That is, the optimization
must not change the output produced by a program for a given input, or cause an error
such as division by zero, that was not present in the original source program. At all times
we take the “safe” approach of missing an opportunity to apply a transformation rather
than risk changing what the program does.
The transformation must be worth the effort. It does not make sense for a compiler writer
to expend the intellectual effort to implement a code improving transformation and to
have the compiler expend the additional time compiling source programs if this effort is
not repaid when the target programs are executed. “Peephole” transformations of this
kind are simple enough and beneficial enough to be included in any compiler.
Flow analysis is a fundamental prerequisite for many important types of code
improvement.
Generally control flow analysis precedes data flow analysis.
Control flow analysis (CFA) represents the flow of control, usually in the form of graphs; CFA constructs control-flow graphs and call graphs.
Data flow analysis (DFA) is the process of ascertaining and collecting information, prior to program execution, about the possible modification, preservation, and use of certain entities (such as values or attributes of variables) in a computer program.
Many transformations can be performed at both the local and global levels. Local
transformations are usually performed first.
Function-Preserving Transformations
There are a number of ways in which a compiler can improve a program without
changing the function it computes.
The transformations
o Common sub expression elimination,
o Copy propagation,
o Dead-code elimination, and
o Constant folding, are common examples of such function-preserving transformations.
The other transformations come up primarily when global optimizations are
performed.
Frequently, a program will include several calculations of the same value, such as an
offset in an array. Some of the duplicate calculations cannot be avoided by the
programmer because they lie below the level of detail accessible within the source
language.
For example, consider the fragment
t1 := 4*i
t2 := a[t1]
t3 := 4*j
t4 := 4*i
t5 := n
t6 := b[t4] + t5
The above code can be optimized using common sub-expression elimination as
t1 := 4*i
t2 := a[t1]
t3 := 4*j
t5 := n
t6 := b[t1] + t5
The common sub-expression t4 := 4*i is eliminated, as its computation is already available in t1 and the value of i has not changed between that definition and this use.
Copy Propagation:
Assignments of the form f : = g called copy statements, or copies for short. The idea behind
the copy-propagation transformation is to use g for f, whenever possible after the copy
statement f: = g. Copy propagation means use of one variable instead of another. This may
not appear to be an improvement, but as we shall see it gives us an opportunity to eliminate
x.
For example:
x = Pi;
……
A = x * r * r;
After copy propagation the last statement becomes
A = Pi * r * r;
Dead-Code Eliminations:
A variable is live at a point in a program if its value can be used subsequently; otherwise, it
is dead at that point. A related idea is dead or useless code, statements that compute values
that never get used. While the programmer is unlikely to introduce any dead code
intentionally, it may appear as the result of previous transformations. An optimization can
be done by eliminating dead code.
Example:
i=0;
if(i==1)
{
a=b+5;
}
Here, the body of the „if‟ statement is dead code because the condition i==1 can never be satisfied (i is always 0 at this point).
Constant folding:
o We can eliminate both the test and printing from the object code. More generally,
deducing at compile time that the value of an expression is a constant and using the
constant instead is known as constant folding.
o One advantage of copy propagation is that it often turns the copy statement into dead
code.
For example,
a = 3.14157/2 can be replaced by
a = 1.570785, thereby eliminating a division operation at run time.
Loop Optimizations:
o We now give a brief introduction to a very important place for optimizations, namely
loops, especially the inner loops where programs tend to spend the bulk of their time.
The running time of a program may be improved if we decrease the number of
instructions in an inner loop, even if we increase the amount of code outside that loop.
Code Motion:
An important modification that decreases the amount of code in a loop is code motion.
This transformation takes an expression that yields the same result independent of the
number of times a loop is executed ( a loop-invariant computation) and places the
expression before the loop. Note that the notion “before the loop” assumes the existence
of an entry for the loop. For example, evaluation of limit-2 is a loop-invariant
computation in the following while-statement:
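The classic illustration (the variable names are the usual textbook ones) is:

while (i <= limit - 2)   /* limit - 2 is computed on every iteration */

which code motion turns into

t = limit - 2;
while (i <= t)           /* limit - 2 is now computed once, before the loop */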
Induction Variables :
Loops are usually processed inside out. For example consider the loop around B3.
Note that the values of j and t4 remain in lock-step; every time the value of j decreases by
1, that of t4 decreases by 4 because 4*j is assigned to t4. Such identifiers are called
induction variables.
When there are two or more induction variables in a loop, it may be possible to get rid of
all but one, by the process of induction-variable elimination. For the inner loop around
B3 in Fig. we cannot get rid of either j or t4 completely; t4 is used in B3 and j in B4.
However, we can illustrate reduction in strength and illustrate a part of the process of
induction-variable elimination. Eventually j will be eliminated when the outer loop of
B2 - B5 is considered.
Example:
As the relationship t4 := 4*j surely holds after such an assignment to t4, and t4 is not changed elsewhere in the inner loop around B3, it follows that just after the statement j := j-1 the relationship t4 = 4*j-4 must hold. We may therefore replace the assignment t4 := 4*j by t4 := t4-4. The only problem is that t4 does not have a value when we enter block B3 for the first time. Since we must maintain the relationship t4 = 4*j on entry to block B3, we place an initialization of t4 at the end of the block where j itself is initialized, shown by the dashed addition to block B1 in the second figure.
Reduction in Strength:
Reduction in strength replaces expensive operations by equivalent cheaper ones on the
target machine. Certain machine instructions are considerably cheaper than others and
can often be used as special cases of more expensive operators.
For example, x² is invariably cheaper to implement as x*x than as a call to an
exponentiation routine. Fixed-point multiplication or division by a power of two is
cheaper to implement as a shift. Floating-point division by a constant can be implemented
as multiplication by a constant, which may be cheaper.
Optimization of Basic Blocks
Example:
a := b + c
b := a - d
c := b + c
d := a - d
The 2nd and 4th statements compute the same expression, a - d, so the block can be transformed into the equivalent normal-form block
a := b + c
b := a - d
c := b + c
d := b
(The 1st and 3rd statements appear to compute b + c, but b is redefined in between, so they do not compute the same value.)
Two adjacent statements such as
t1 := b + c
t2 := x + y
can be interchanged or reordered in the basic block when the value of t1 does not affect the value of t2.
Algebraic Transformations:
Algebraic identities represent another important class of optimizations on basic blocks.
This includes simplifying expressions or replacing expensive operation by cheaper ones
i.e. reduction in strength.
Associative laws may also be applied to expose common sub expressions. For example,
if the source code has the assignments
a := b + c
e := c + d + b
the intermediate code may be generated as
a := b + c
t := c + d
e := t + b
Example:
x:=x+0 can be removed
Dominators:
In a flow graph, a node d dominates node n, if every path from initial node of the flow graph
to n goes through d. This will be denoted by d dom n. The initial node dominates all the remaining nodes in the flow graph, and the entry of a loop dominates all nodes in the loop. Similarly, every node dominates itself.
Example:
In the flow graph below,
*The initial node, node 1, dominates every node.
*Node 2 dominates only itself.
*Node 3 dominates all but 1 and 2.
*Node 4 dominates all but 1, 2 and 3.
*Nodes 5 and 6 dominate only themselves, since flow of control can skip around either by going through the other.
*Node 7 dominates 7, 8, 9 and 10.
*Node 8 dominates 8, 9 and 10.
The way of presenting dominator information is in a tree, called the dominator tree in
which the initial node is the root.
In terms of the dom relation, the immediate dominator m of a node n has the property: if d ≠ n and d dom n, then d dom m.
D(1)={1}
D(2)={1,2}
D(3)={1,3}
D(4)={1,3,4}
D(5)={1,3,4,5}
D(6)={1,3,4,6}
D(7)={1,3,4,7}
D(8)={1,3,4,7,8}
D(9)={1,3,4,7,8,9}
D(10)={1,3,4,7,8,10}
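These sets can be computed by a simple iterative data-flow calculation. With n0 the initial node:

D(n0) = { n0 }
D(n)  = { n } ∪ ⋂ D(p)    for n ≠ n0, the intersection taken over all predecessors p of n

Starting every D(n), n ≠ n0, at the set of all nodes and re-evaluating the equation until nothing changes yields the sets listed above.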
Natural Loop:
One application of dominator information is in determining the loops of a flow graph
suitable for improvement.
The properties of loops are
o A loop must have a single entry point, called the header. This entry point dominates all nodes in the loop, or it would not be the sole entry to the loop.
o There must be at least one way to iterate the loop, i.e., at least one path back to the header.
One way to find all the loops in a flow graph is to search for edges in the flow graph
whose heads dominate their tails. If a→b is an edge, b is the head and a is the tail. These
types of edges are called as back edges.
Example:
In the above graph, the back edges (whose heads dominate their tails) are:
7 → 4 (4 DOM 7)
10 → 7 (7 DOM 10)
4 → 3 (3 DOM 4)
8 → 3 (3 DOM 8)
9 → 1 (1 DOM 9)
These edges identify the loops in the flow graph.
Given a back edge n → d, we define the natural loop of the edge to be d plus the set of
nodes that can reach n without going through d. Node d is the header of the loop.
The following procedure computes the natural loop of a back edge n → d:
procedure insert(m);
  if m is not in loop then begin
    loop := loop ∪ {m};
    push m onto stack
  end;
/* main */
stack := empty;
loop := {d};
insert(n);
while stack is not empty do begin
  pop m, the first element of stack, off stack;
  for each predecessor p of m do insert(p)
end;
Inner Loops:
If we use the natural loops as “the loops”, then we have the useful property that unless two loops have the same header, they are either disjoint or one is entirely contained in the other. Thus, neglecting loops with the same header for the moment, we have a natural notion of inner loop: one that contains no other loop.
When two natural loops have the same header, but neither is nested within the other, they
are combined and treated as a single loop.
Pre-Headers:
Several transformations require us to move statements “before the header”. Therefore we begin the treatment of a loop L by creating a new block, called the pre-header.
The pre-header has only the header as successor, and all edges which formerly entered the header of L from outside L instead enter the pre-header.
Edges from inside loop L to the header are not changed.
Initially the pre-header is empty, but transformations on L may place statements in it.
Definition:
A flow graph G is reducible if and only if we can partition the edges into two disjoint
groups, forward edges and back edges, with the following properties.
The forward edges form an acyclic graph in which every node can be reached from the initial node of G.
The back edges consist only of edges where heads dominate theirs tails.
If we know the relation DOM for a flow graph, we can find and remove all the back
edges.
In the above example remove the five back edges 4→3, 7→4, 8→3, 9→1 and 10→7
whose heads dominate their tails, the remaining graph is acyclic.
The key property of reducible flow graphs for loop analysis is that in such flow graphs
every set of nodes that we would informally regard as a loop must contain a back edge.
Peephole Optimization
A statement-by-statement code-generation strategy often produces target code that contains redundant instructions and suboptimal constructs. The quality of such target code can be improved by applying “optimizing” transformations to the target program.
A simple but effective technique for improving the target code is peephole optimization, a method for trying to improve the performance of the target program by examining a short sequence of target instructions (called the peephole) and replacing these instructions by a shorter or faster sequence, whenever possible.
The peephole is a small, moving window on the target program. The code in the peephole need not be contiguous, although some implementations do require this. It is characteristic of peephole optimization that each improvement may spawn opportunities for additional improvements.
We shall give the following examples of program transformations that are characteristic
of peephole optimizations:
Redundant-instructions elimination
Flow-of-control optimizations
Algebraic simplifications
Use of machine idioms
Unreachable Code
Unreachable Code:
Another opportunity for peephole optimizations is the removal of unreachable
instructions. An unlabeled instruction immediately following an unconditional jump may
be removed. This operation can be repeated to eliminate a sequence of instructions. For
example, for debugging purposes, a large program may have within it certain segments
that are executed only if a variable debug is 1. In C, the source code might look like:
#define debug 0
….
if ( debug ) {
print debugging information
}
In the intermediate representation the if-statement may be translated as:
if debug = 1 goto L1
goto L2
L1: print debugging information
L2:…………………………(a)
One obvious peephole optimization is to eliminate jumps over jumps. Thus, no matter what the value of debug, (a) can be replaced by:
if debug ≠ 1 goto L2
print debugging information
L2:……………………………(b)
As debug is set to 0 at the beginning of the program, constant propagation turns (b) into:
if 0 ≠ 1 goto L2
print debugging information
L2: ……………………………(c)
As the argument of the first statement of (c) evaluates to a constant true, it can be replaced by goto L2. Then all the statements that print debugging aids are manifestly unreachable and can be eliminated one at a time.
Flow-of-Control Optimizations:
The unnecessary jumps can be eliminated in either the intermediate code or the target
code by the following types of peephole optimizations. We can replace the jump
sequence
goto L1
….
L1: gotoL2
by the sequence
goto L2
….
L1: goto L2
If there are now no jumps to L1, then it may be possible to eliminate the statement
L1:goto L2 provided it is preceded by an unconditional jump .Similarly, the sequence
if a < b goto L1
….
L1: goto L2
can be replaced by
if a < b goto L2
….
L1: goto L2
Finally, suppose there is only one jump to L1 and L1 is preceded by an unconditional goto. Then the sequence
goto L1
……..
L1: if a < b goto L2
L3:…………………………………..(1)
may be replaced by
if a < b goto L2
goto L3
…….
L3:………………………………….(2)
While the number of instructions in (1) and (2) is the same, we sometimes skip the unconditional jump in (2), but never in (1). Thus (2) is superior to (1) in execution time.
Algebraic Simplification:
There is no end to the amount of algebraic simplification that can be attempted through peephole optimization. Only a few algebraic identities occur frequently enough that it is worth considering implementing them. For example, statements such as
x := x + 0
or
x := x * 1
can be eliminated from the peephole without changing the value computed.
Reduction in Strength:
Reduction in strength replaces expensive operations by equivalent cheaper ones on the target machine. Certain machine instructions are considerably cheaper than others and can often be used as special cases of more expensive operators, for example
X² → X*X
Use of Machine Idioms:
The target machine may have hardware instructions that implement certain specific operations efficiently; detecting situations that permit their use can reduce execution time significantly. For example, some machines have auto-increment and auto-decrement addressing modes. The use of these modes greatly improves the quality of code when pushing or popping a stack, as in parameter passing. These modes can also be used in code for statements like i := i+1.
i := i+1 → i++
i := i-1 → i--
Global transformations are not substitute for local transformations; both must be
performed.
The available expressions data-flow problem discussed in the last section allows us to
determine if an expression at point p in a flow graph is a common sub-expression. The
following algorithm formalizes the intuitive ideas presented for eliminating common sub-
expressions.
To discover the evaluations of y+z that reach s‟s block, we follow flow graph edges,
searching backward from s‟s block. However, we do not go through any block that
evaluates y+z. Thelast evaluation of y+z in each block encountered is an evaluation of
y+z that reaches s.
Create a new variable u, and replace each statement w := y + z found in step (1) by
u := y + z
w := u
Finally, replace the statement s by x := u.
The search in step(1) of the algorithm for the evaluations of y+z that reach statement s
can also be formulated as a data-flow analysis problem. However, it does not make sense
to solve it for all expressions y+z and all statements or blocks because too much
irrelevant information is gathered.
Not all changes made by the algorithm are improvements. We might wish to limit the number of different evaluations reaching s found in step (1), probably to one.
The algorithm will also miss the fact that a*z and c*z must have the same value in
a := x + y          versus          c := x + y
b := a * z                          d := c * z
because this simple approach to common sub-expressions considers only the literal expressions themselves, rather than the values computed by expressions.
Copy propagation:
Various algorithms introduce copy statements such as x := y; copies may also be generated directly by the intermediate code generator, although most of these involve temporaries local to one block and can be removed by the DAG construction. We may substitute y for x wherever this definition of x is used, provided the following conditions are met by every such use u of x:
(1) Statement s must be the only definition of x reaching u.
(2) On every path from s to u, including paths that go through u several times, there are no assignments to y.
Condition (1) can be checked using ud-chaining information. We shall set up a new data-flow analysis problem in which in[B] is the set of copies s: x := y such that every path from the initial node to the beginning of B contains the statement s, and subsequent to the last occurrence of s, there are no assignments to y.
Input: A flow graph G, with ud-chains giving the definitions reaching block B, and
with c_in[B] representing the solution to equations that is the set of copies x:=y that reach
block B along every path, with no assignment to x or y following the last occurrence of
x:=y on the path. We also need ud-chains giving the uses of each definition.
Output: A revised flow graph.
Method: For each copy s: x := y do the following:
(1) Determine those uses of x that are reached by this definition of x, namely s: x := y.
(2) Determine whether for every use of x found in (1), s is in c_in[B], where B is the block of this particular use, and moreover, no definitions of x or y occur prior to this use of x within B. Recall that if s is in c_in[B] then s is the only definition of x that reaches B.
(3) If s meets the conditions of (2), then remove s and replace all uses of x found in (1) by y.
Detection of loop-invariant computations:
Ud-chains can be used to detect those computations in a loop that are loop-invariant, that is, whose value does not change as long as control stays within the loop. A loop is a region consisting of a set of blocks with a header that dominates all the other blocks, so the only way to enter the loop is through the header.
(1) Mark “invariant” those statements whose operands are all either constant or have all their reaching definitions outside L.
(2) Repeat step (3) until at some repetition no new statements are marked “invariant”.
(3) Mark “invariant” all those statements not previously so marked all of whose operands either are constant, have all their reaching definitions outside L, or have exactly one reaching definition, and that definition is a statement in L marked invariant.
The block containing s dominates all exit nodes of the loop, where an exit of a loop is a
node with a successor not in the loop.
(1'). The block containing s either dominates all exits of the loop, or x is not used outside the loop. For example, if x is a temporary variable, we can be sure that the value will be used only in its own block.
If code motion algorithm is modified to use condition (1‟), occasionally the running
time will increase, but we can expect to do reasonably well on the average. The
modified algorithm may move to pre-header certain computations that may not be
executed in the loop. Not only does this risk slowing down the program significantly,
it may also cause an error in certain circumstances.
Even if none of the conditions of (2i), (2ii), (2iii) of code motion algorithm are met by
an assignment x: =y+z, we can still take the computation y+z outside a loop. Create a
new temporary t, and set t: =y+z in the pre-header. Then replace x: =y+z by x: =t in the
loop. In many cases we can propagate out the copy statement x: = t.
Definitions of variables used by s are either outside L, in which case they reach the pre-
header, or they are inside L, in which case by step (3) they were moved to pre-header
ahead of s.
We keep a pointer ps to s on each ud-chain containing s. Then, no matter where we move s, we have only to change ps, regardless of how many ud-chains s is on.
The dominator information is changed slightly by code motion. The pre-header is now
the immediate dominator of the header, and the immediate dominator of the pre-header
is the node that formerly was the immediate dominator of the header. That is, the pre-
header is inserted into the dominator tree as the parent of the header.
However, our methods deal with variables that are incremented or decremented zero,
one, two, or more times as we go around a loop. The number of changes to an induction
variable may even differ at different iterations.
We shall look for basic induction variables, which are those variables i whose only
assignments within loop L are of the form i := i+c or i-c, where c is a constant.
UNIT-V
CODE GENERATION
While the details are dependent on the target language and the operating system, issues such as memory management, instruction selection, register allocation, and evaluation order are inherent in almost all code generation problems.
The input to the code generator consists of the intermediate representation of the source
program produced by the front end, together with information in the symbol table that is used
to determine the run time addresses of the data objects denoted by the names in the
intermediate representation.
There are several choices for the intermediate language, including linear representations such as postfix notation, three-address representations such as quadruples, and graphical representations such as syntax trees and dags.
We assume that prior to code generation the front end has scanned, parsed, and translated the
source program into a reasonably detailed intermediate representation, so the values of names
appearing in the intermediate language can be represented by quantities that the target
machine can directly manipulate (bits, integers, reals, pointers, etc.). We also assume that the
necessary type checking has taken place, so type conversion operators have been inserted
wherever necessary and obvious semantic errors (e.g., attempting to index an array by a
floating point number) have already been detected. The code generation phase can therefore
proceed on the assumption that its input is free of errors. In some compilers, this kind of
semantic checking is done together with code generation.
Target Programs
The output of the code generator is the target program. The output may take on a variety of
forms: absolute machine language, relocatable machine language, or assembly language.
Producing an absolute machine language program as output has the advantage that it can be
placed in a location in memory and immediately executed. A small program can be compiled
and executed quickly. A number of “student-job” compilers, such as WATFIV and PL/C,
produce absolute code.
Producing an assembly language program as output makes the process of code generation
somewhat easier .We can generate symbolic instructions and use the macro facilities of the
assembler to help generate code .The price paid is the assembly step after code generation.
Because producing assembly code does not duplicate the entire task of the assembler, this
choice is another reasonable alternative, especially for a machine with a small memory,
where a compiler must use several passes.
Memory Management
Mapping names in the source program to addresses of data objects in run time memory is
done cooperatively by the front end and the code generator. We assume that a name in a
three-address statement refers to a symbol table entry for the name.
If machine code is being generated, labels in three address statements have to be converted to
addresses of instructions. This process is analogous to the “back patching”. Suppose that
labels refer to quadruple numbers in a quadruple array. As we scan each quadruple in turn we
can deduce the location of the first machine instruction generated for that quadruple, simply
by maintaining a count of the number of words used for the instructions generated so far.
This count can be kept in the quadruple array (in an extra field), so if a reference such as j:
goto i is encountered, and i is less than j, the current quadruple number, we may simply
generate a jump instruction with the target address equal to the machine location of the first
instruction in the code for quadruple i. If, however, the jump is forward, so i exceeds j, we must store on a list for quadruple i the location of the first machine instruction generated for quadruple j. Then, when we process quadruple i, we fill in the proper machine location for all instructions that are forward jumps to i.
Instruction Selection
The nature of the instruction set of the target machine determines the difficulty of instruction
selection. The uniformity and completeness of the instruction set are important factors. If the
target machine does not support each data type in a uniform manner, then each exception to
the general rule requires special handling.
Instruction speeds and machine idioms are other important factors. If we do not care about
the efficiency of the target program, instruction selection is straightforward. For each type of
three- address statement we can design a code skeleton that outlines the target code to be
generated for that construct.
For example, every three-address statement of the form x := y + z, where x, y, and z are statically allocated, can be translated into the code sequence
MOV y, R0 /* load y into R0 */
ADD z, R0 /* add z to R0 */
MOV R0, x /* store R0 into x */
Unfortunately, this kind of statement – by - statement code generation often produces poor
code. For example, the sequence of statements
a := b + c
d := a + e
would be translated into
MOV b, R0
ADD c, R0
MOV R0, a
MOV a, R0
ADD e, R0
MOV R0, d
Here the fourth statement is redundant, and so is the third if „a‟ is not subsequently used.
The quality of the generated code is determined by its speed and size.
A target machine with a rich instruction set may provide several ways of implementing a
given operation. Since the cost differences between different implementations may be
significant, a naive translation of the intermediate code may lead to correct, but unacceptably
inefficient target code. For example if the target machine has an “increment” instruction
(INC), then the three address statement a := a+1 may be implemented more efficiently by the
single instruction INC a, rather than by a more obvious sequence that loads a into a register,
add one to the register, and then stores the result back into a.
MOV a, R0
ADD #1,R0
MOV R0, a
Instruction speeds are needed to design good code sequence but unfortunately, accurate
timing information is often difficult to obtain. Deciding which machine code sequence is best
for a given three address construct may also require knowledge about the context in which
that construct appears.
Register Allocation
Instructions involving register operands are usually shorter and faster than those involving
operands in memory. Therefore, efficient utilization of register is particularly important in
generating good code. The use of registers is often subdivided into two sub problems:
1. During register allocation, we select the set of variables that will reside in registers at a
point in the program.
2. During a subsequent register assignment phase, we pick the specific register that a
variable will reside in.
Finding an optimal assignment of registers to variables is difficult, even with single register values. Mathematically, the problem is NP-complete. The problem is further complicated because the hardware and/or the operating system of the target machine may require that certain register-usage conventions be observed.
Certain machines require register pairs (an even and next odd-numbered register) for some operands and results. For example, in the IBM System/370 machines integer multiplication and integer division involve register pairs. The multiplication instruction is of the form
M x, y
where x, the multiplicand, names the even register of an even/odd register pair; the multiplicand value is actually taken from the odd register of the pair. The multiplier y is a single register. The product occupies the entire even/odd register pair. The division instruction is of the form
D x, y
where the 64-bit dividend occupies an even/odd register pair whose even register is x; y
represents the divisor. After division, the even register holds the remainder and the odd
register the quotient.
Now consider the two three address code sequences (a) and (b) in which the only difference
is the operator in the second statement. The shortest assembly sequence for (a) and (b) are
given in(c).
Ri stands for register i. L, ST and A stand for load, store and add respectively. The optimal
choice for the register into which „a‟ is to be loaded depends on what will ultimately happen
to e.
(a)                 (b)
t := a + b          t := a + b
t := t * c          t := t + c
t := t / d          t := t / d

(c) shortest assembly sequences:
L  R1, a            L    R0, a
A  R1, b            A    R0, b
M  R0, c            A    R0, c
D  R0, d            SRDA R0, 32
ST R1, t            D    R0, d
                    ST   R1, t
Basic Blocks And Flow Graphs
A graph representation of three-address statements, called a flow graph, is useful for
understanding code-generation algorithms, even if the graph is not explicitly constructed by a
code-generation algorithm. Nodes in the flow graph represent computations, and the edges
represent the flow of control. Flow graph of a program can be used as a vehicle to collect
information about the intermediate program. Some register-assignment algorithms use flow
graphs to find the inner loops where a program is expected to spend most of its time.
Basic Blocks
A basic block is a sequence of consecutive statements in which flow of control enters at the
beginning and leaves at the end without halt or possibility of branching except at the end.
The following sequence of three-address statements forms a basic block:
t1 := a*a
t2 := a*b
t3 := 2*t2
t4 := t1+t3
t5 := b*b
t6 := t4+t5
A three-address statement x := y+z is said to define x and to use y or z. A name in a basic
block is said to live at a given point if its value is used after that point in the program,
perhaps in another basic block.
The following algorithm can be used to partition a sequence of three-address statements into basic blocks.
1. We first determine the set of leaders, the first statements of basic blocks. The rules we use are:
I) The first statement is a leader.
II) Any statement that is the target of a conditional or unconditional goto is a leader.
III) Any statement that immediately follows a goto or conditional goto statement is a leader.
2. For each leader, its basic block consists of the leader and all statements up to but not
including the next leader or the end of the program.
Example: Consider the fragment of source code shown in fig. 7; it computes the dot product
of two vectors a and b of length 20. A list of three-address statements performing this
computation on our target machine is shown in fig. 8.
begin
prod := 0;
i := 1;
do begin
prod := prod + a[i] * b[i];
i := i+1;
end
while i<= 20
end
Let us apply Algorithm 1 to the three-address code in fig 8 to determine its basic blocks.
Statement (1) is a leader by rule (I) and statement (3) is a leader by rule (II), since the last
statement can jump to it. By rule (III) the statement following (12) is a leader. Therefore,
statements (1) and (2) form a basic block. The remainder of the program beginning with
statement (3) forms a second
basic block.
(1) prod := 0
(2) i := 1
(3) t1 := 4*i
(4) t2 := a [ t1 ]
(5) t3 := 4*i
(6) t4 :=b [ t3 ]
(7) t5 := t2*t4
(8) t6 := prod +t5
(9) prod := t6
(10) t7 := i+1
(11) i := t7
(12) if i<=20 goto (3)
fig 8. Three-address code computing dot product
(The resulting flow graph, not reproduced here, has block B1 containing prod := 0 and i := 1, and block B2 containing statements (3)–(12), with edges B1 → B2 and B2 → B2.)
Structure-preserving transformations
The primary structure-preserving transformations on basic blocks are:
1. Common sub-expression elimination
2. dead-code elimination
3. Renaming of temporary variables
4. Interchange of two independent adjacent statements
We assume basic blocks have no arrays, pointers, or procedure calls.
1. Common sub-expression elimination: Consider the block
a := b + c
b := a - d
c := b + c
d := a - d
The 2nd and 4th statements compute the same expression, a - d, so the 4th statement may be replaced by
d := b
Although the 1st and 3rd statements in both cases appear to have the same expression on the
right, the second statement redefines b. Therefore, the value of b in the 3rd statement is
different from the value of b in the 1st, and the 1st and 3rd statements do not compute the same
expression.
2. Dead-code elimination
Suppose x is dead, that is, never subsequently used, at the point where the statement x:= y+z
appears in a basic block. Then this statement may be safely removed without changing the
value of the basic block.
3. Renaming of temporary variables: if t := b + c is a statement with t a temporary, we can change it to u := b + c, where u is a new temporary, and change all uses of this instance of t to u, without changing the value of the block.
4. Interchange of statements: suppose a block has the two adjacent statements
t1 := b + c
t2 := x + y
Then we can interchange the two statements without affecting the value of the block if and
only if neither x nor y is t1 and neither b nor c is t2. A normal-form basic block permits all
statement interchanges that are possible.
The target machine characteristics are
Byte-addressable, 4 bytes/word, n registers
Two operand instructions of the form
Op source, destination
Example opcodes: MOV, ADD, SUB, MULT
Several addressing modes
An instruction has an associated cost
Cost corresponds to length of instruction
Addressing Modes & Extra Costs:
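A typical cost table for such a two-address machine (the usual textbook model; the exact set of modes and costs is an assumption about the exercise below, not a fixed standard) is:

Mode                Form     Address                    Added cost
absolute            M        M                          1
register            R        R                          0
indexed             c(R)     c + contents(R)            1
indirect register   *R       contents(R)                0
indirect indexed    *c(R)    contents(c + contents(R))  1
literal             #c       the constant c             1

So an instruction costs 1 plus the added cost of each of its operands; for example, MOV a, R0 costs 2 while ADD R1, R0 costs 1.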
1) Generate target code for the source language statement
“(a-b) + (a-c) + (a-c);”
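One possible sequence, assuming the result is to be stored into a variable, say x, and using the cost model above (1 for the instruction plus 1 for each memory operand), which is where the total of 12 comes from:

MOV a, R0   /* cost 2                              */
SUB b, R0   /* cost 2   R0 = a - b                 */
MOV a, R1   /* cost 2                              */
SUB c, R1   /* cost 2   R1 = a - c                 */
ADD R1, R0  /* cost 1   R0 = (a-b) + (a-c)         */
ADD R1, R0  /* cost 1   R0 = (a-b) + (a-c) + (a-c) */
MOV R0, x   /* cost 2                              */

2 + 2 + 2 + 2 + 1 + 1 + 2 = 12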
Total cost=12