
COMPILER DESIGN (KCS-502)

UNIT-1 (Lecture-1)

Compiler Design: Overview


Computers are a balanced mix of software and hardware. Hardware is just a mechanical
device whose functions are controlled by compatible software. Hardware understands
instructions in the form of electronic charge, which is the counterpart of binary language in
software programming. Binary language has only two symbols, 0 and 1. To instruct the
hardware, code must be written in binary format, which is simply a series of 1s and 0s. Writing
such code directly would be a difficult and cumbersome task for computer programmers, which is
why we have compilers to translate human-readable programs into such code.

Language Processing System


We have learnt that any computer system is made of hardware and software. The hardware
understands a language, which humans cannot understand. So we write programs in high-level
language, which is easier for us to understand and remember. These programs are then fed into
a series of tools and OS components to get the desired code that can be used by the machine.
This is known as Language Processing System.


The high-level language is converted into binary language in various phases. A compiler is a
program that converts high-level language to assembly language. Similarly, an assembler is a
program that converts the assembly language to machine-level language.
Let us first understand how a program written in C (a high-level language) is executed on a host machine.
• The user writes a program in C.
• The C compiler compiles the program and translates it into an assembly program (low-level
language).
• An assembler then translates the assembly program into machine code (an object file).
• A linker links all parts of the program together for execution
(executable machine code).
• A loader loads the executable into memory, and then the program is executed.
Before diving straight into the concepts of compilers, we should understand a few other tools
that work closely with compilers.

Preprocessor

A preprocessor, generally considered a part of the compiler, is a tool that produces input for
the compiler. It deals with macro-processing, augmentation, file inclusion, language extension, etc.

Interpreter

An interpreter, like a compiler, translates high-level language into low-level machine language.
The difference lies in the way they read the source code or input. A compiler reads the whole
source code at once, creates tokens, checks semantics, generates intermediate code, translates the
whole program, and may involve many passes. In contrast, an interpreter reads a statement from
the input, converts it to an intermediate code, executes it, then takes the next statement in
sequence. If an error occurs, an interpreter stops execution and reports it, whereas a compiler
processes the whole program even if it encounters several errors.
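As a loose illustration of this difference, the snippet below runs the same toy statements both ways using Python's built-in compile and exec (the statement list is hypothetical):

# Three toy statements standing in for a source program.
stmts = ["x = 2", "y = x * 3", "print(y)"]

# Interpreter style: translate and execute one statement at a time.
for s in stmts:
    exec(s)

# Compiler style: translate the whole program first, then execute it.
program = compile("\n".join(stmts), "<toy>", "exec")
exec(program)

In the first loop, an error in a later statement would surface only when execution reaches it, which mirrors how an interpreter stops at the first faulty statement.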

Assembler

An assembler translates assembly language programs into machine code. The output of an
assembler is called an object file, which contains a combination of machine instructions as well
as the data required to place these instructions in memory.

Linker

A linker is a computer program that links and merges various object files together in order to
make an executable file. These files may have been produced by separate assemblers. The
major task of a linker is to search for and locate referenced modules/routines in a program and to
determine the memory locations where this code will be loaded, so that the program
instructions have absolute references.

Loader

The loader is a part of the operating system and is responsible for loading executable files into memory
and executing them. It calculates the size of a program (instructions and data) and creates memory
space for it. It also initializes various registers to initiate execution.

Bootstrapping

Bootstrapping is a method used to create a self-hosting compiler, i.e., a compiler that can
compile its own source code.

A compiler can be characterized by three languages:

1. Source language
2. Target language
3. Implementation language

Bootstrapping is the process by which a compiler written in a simple language is used to translate a more
complicated one.

Cross-compiler

A compiler that runs on platform (A) and is capable of generating executable code for platform
(B) is called a cross-compiler.
Example: a compiler that runs on Windows 7 and produces code that runs on Android.
Q. We have a new language L, which we want to make available on machines A and B.
Solution (sketch): first write a compiler for L, in L itself, that targets machine A, and bootstrap it on A with the help of an existing compiler. Then modify its back end so that it generates code for B; compiling this modified compiler on A yields a cross-compiler for B, which can finally compile itself to produce a native compiler running on B.


Source-to-source Compiler

A compiler that takes the source code of one programming language and translates it into the
source code of another programming language is called a source-to-source compiler.


UNIT-1 (Lecture-2)
A compiler can broadly be divided into two phases based on the way they compile.

Analysis Phase

Known as the front end of the compiler, the analysis phase reads the source
program, divides it into core parts, and then checks for lexical, grammar, and syntax errors. The
analysis phase generates an intermediate representation of the source program and a symbol table,
which are fed to the synthesis phase as input.

Synthesis Phase

Known as the back end of the compiler, the synthesis phase generates the target program with
the help of the intermediate source code representation and the symbol table.

Passes & Phases of Compiler


A compiler can have many phases and passes.
• Pass : A pass refers to one traversal of the compiler through the entire program.
• Phase : A phase of a compiler is a distinguishable stage, which takes input from the
previous stage, processes it, and yields output that can be used as input for the next stage.
A pass can have more than one phase.
The compilation process is a sequence of various phases. Each phase takes input from its
previous stage, has its own representation of source program, and feeds its output to the next
phase of the compiler. Let us understand the phases of a compiler.


• Lexical Analysis
The first phase, also known as the scanner, works as a text scanner. This phase scans the source code as a
stream of characters and converts it into meaningful lexemes. The lexical analyzer represents
these lexemes in the form of tokens, as:
<token-name, attribute-value>

• Syntax Analysis
The next phase is called syntax analysis or parsing. It takes the tokens produced by
lexical analysis as input and generates a parse tree (or syntax tree). In this phase, token
arrangements are checked against the source code grammar, i.e., the parser checks whether the
expression made by the tokens is syntactically correct.
• Semantic Analysis
Semantic analysis checks whether the parse tree constructed follows the rules of the
language; for example, that values are assigned between compatible data types, and that
a string is not being added to an integer. The semantic analyzer also keeps track of identifiers, their
types, and expressions, and checks whether identifiers are declared before use. The
semantic analyzer produces an annotated syntax tree as its output.
• Intermediate Code Generation
After semantic analysis, the compiler generates an intermediate code of the source program
for the target machine. It represents a program for some abstract machine and sits in
between the high-level language and the machine language. This intermediate code
should be generated in such a way that it is easy to translate into the target
machine code.
• Code Optimization
The next phase performs code optimization on the intermediate code. Optimization can be
thought of as removing unnecessary code lines and arranging the sequence of
statements in order to speed up program execution without wasting resources (CPU,
memory).
• Code Generation
In this phase, the code generator takes the optimized representation of the intermediate
code and maps it to the target machine language. The code generator translates the
intermediate code into a sequence of (generally) relocatable machine code. This sequence of
machine instructions performs the same task as the intermediate code would.
• Symbol Table
It is a data structure maintained throughout all the phases of a compiler. All
identifiers' names, along with their types, are stored here. The symbol table makes it easier
for the compiler to quickly search for an identifier's record and retrieve it. The symbol table
is also used for scope management.


UNIT-1 (Lecture-3)
Example: processing a code segment through the phases of a compiler:
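As a concrete illustration, consider the classic statement position = initial + rate * 60, with the variables declared as floats (this is the standard textbook example; the original figure may differ in details):

Lexical analysis:      <id,position> <=> <id,initial> <+> <id,rate> <*> <60>
Syntax analysis:       a syntax tree with = at the root; its left child is id(position) and its
                       right child is the subtree +(id(initial), *(id(rate), 60))
Semantic analysis:     the integer 60 is converted to match the float context:
                       *(id(rate), inttofloat(60))
Intermediate code:     t1 = inttofloat(60)
                       t2 = id(rate) * t1
                       t3 = id(initial) + t2
                       id(position) = t3
Code optimization:     t1 = id(rate) * 60.0
                       id(position) = id(initial) + t1
Code generation:       LDF  R2, rate
                       MULF R2, R2, #60.0
                       LDF  R1, initial
                       ADDF R1, R1, R2
                       STF  position, R1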


Compiler Design - Passes


We already know about the phases of compiler design; now let us look at compiler passes.
A compiler pass refers to one traversal of the compiler through the entire program. Compiler
passes are of two types: single-pass compilers, and two-pass (or multi-pass) compilers.

1. Single Pass: if we combine or group all the phases of compiler design into a single module,
the result is known as a single-pass compiler.

In the diagram above, all six phases are grouped into a single module. Some points about a single-
pass compiler:
1. A one-pass/single-pass compiler is a type of compiler that passes through each part of each
compilation unit exactly once.
2. A single-pass compiler is faster and smaller than a multi-pass compiler.
3. Its disadvantage is that it is less efficient in comparison with a
multi-pass compiler.
4. A single-pass compiler processes the input exactly once, going directly from
lexical analysis to code generation, and then going back for the next read.

Note: single-pass compilation is almost never done today; early Pascal compilers were a notable example.

Problems with a single-pass compiler:


1. We cannot optimize very well, because the context available around an expression is limited.
2. Since we cannot back up and process the text again, the grammar must be limited or simplified.


2. Two Pass or Multi-Pass: a two-pass/multi-pass compiler is a type of compiler that
processes the source code or abstract syntax tree of a program multiple times. In a multi-pass
compiler we divide the phases into two passes:

1. First Pass: also referred to as


(a) the front end
(b) the analytic part
(c) platform independent
The first pass comprises the lexical analyzer, syntax analyzer, semantic analyzer, and
intermediate code generator. These phases form the front end (the analytic part): they
analyze the high-level language and convert it into three-address code. The first pass
is platform independent because its output, three-address code, is useful for every system;
only the code optimization and code generation phases, which belong to the second pass,
need to change per platform.
2. Second Pass: also referred to as
(a) the back end
(b) the synthesis part
(c) platform dependent
The second pass comprises the code optimization and code generation phases. These form
the back end (the synthesis part): they take three-address code as input and convert
it into low-level/assembly language. The second
pass is platform dependent because this final stage of a typical compiler converts the
intermediate representation of the program into an executable set of instructions, which
depends on the target system.


A multi-pass compiler lets us solve two basic problems:


1. If we want to design compilers for different programming languages for the same machine,
we build a separate front end (first pass) for each language and share a single
back end (second pass).

2. If we want to design compilers for the same programming language for different


machines/systems, we build a different back end for each machine/system
and share a single front end for the language.

Differences between single-pass and multi-pass compilers:

PARAMETER      SINGLE PASS    MULTI-PASS
Speed          Fast           Slow
Memory         More           Less
Time           Less           More
Portability    No             Yes


UNIT-1 (Lecture-4)

Compiler Design - Lexical Analysis


Lexical analysis is the first phase of a compiler. It takes the modified source code produced by
language preprocessors, written as a stream of characters. The lexical analyzer breaks
this text into a series of tokens, removing any whitespace and comments in the source
code.
If the lexical analyzer finds an invalid token, it generates an error. The lexical analyzer works
closely with the syntax analyzer: it reads character streams from the source code, checks for
legal tokens, and passes the data to the syntax analyzer on demand.

Tokens
A lexeme is a sequence of characters in the source program that forms a token. There are some
predefined rules for every lexeme to be identified as a valid token. These rules are defined by
grammar rules, by means of a pattern. A pattern explains what can be a token, and these patterns
are defined by means of regular expressions.
In a programming language, keywords, constants, identifiers, strings, numbers, operators and
punctuation symbols can be considered tokens.
For example, in the C language, the variable declaration line
int value = 100;
contains the tokens:
int (keyword), value (identifier), = (operator), 100 (constant) and ; (symbol).
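A minimal sketch of such a lexer in Python (the token names and patterns here are illustrative, not a full C lexer):

import re

TOKEN_SPEC = [
    ("KEYWORD",    r"\bint\b"),
    ("IDENTIFIER", r"[A-Za-z_][A-Za-z0-9_]*"),
    ("CONSTANT",   r"[0-9]+"),
    ("OPERATOR",   r"="),
    ("SYMBOL",     r";"),
    ("SKIP",       r"\s+"),          # whitespace is discarded, not tokenized
]
PATTERN = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(code):
    for m in PATTERN.finditer(code):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())   # <token-name, attribute-value> pairs

print(list(tokenize("int value = 100;")))
# [('KEYWORD', 'int'), ('IDENTIFIER', 'value'), ('OPERATOR', '='),
#  ('CONSTANT', '100'), ('SYMBOL', ';')]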


Specifications of Tokens
Let us understand how the language theory undertakes the following terms:

Alphabets
An alphabet is any finite set of symbols: {0,1} is the binary alphabet, {0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F} is
the hexadecimal alphabet, and {a-z, A-Z} is the alphabet of English letters.

Strings
Any finite sequence of symbols from an alphabet is called a string. The length of a string is the total number of
symbols in it, e.g., the length of the string Galgotia is 8, denoted by |Galgotia| =
8. A string having no symbols, i.e., a string of zero length, is known as the empty string and is
denoted by ε (epsilon).

Special Symbols
A typical high-level language contains the following symbols:

Arithmetic symbols:   Addition (+), Subtraction (-), Modulo (%), Multiplication (*), Division (/)
Punctuation:          Comma (,), Semicolon (;), Dot (.), Arrow (->)
Assignment:           =
Special assignment:   +=, /=, *=, -=
Comparison:           ==, !=, <, <=, >, >=
Preprocessor:         #
Location specifier:   &
Logical:              &, &&, |, ||, !
Shift operators:      >>, >>>, <<, <<<


Language

A language is considered a finite set of strings over some finite alphabet. Computer
languages are considered finite sets, and mathematical set operations can be performed on
them. Finite languages can be described by means of regular expressions.

Longest Match Rule

When the lexical analyzer reads the source code, it scans the code letter by letter; when it
encounters a whitespace, operator symbol, or special symbol, it decides that a word is
complete.
For example:
int intvalue;
While scanning the lexeme up to 'int', the lexical analyzer cannot determine whether it is the
keyword int or the prefix of the identifier intvalue.
The Longest Match Rule states that the lexeme scanned should be determined based on the
longest match among all the tokens available.
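A sketch of the rule in Python, assuming just two token kinds (keywords are tried first, so a tie in length goes to the keyword):

import re

KEYWORD = re.compile(r"int")
IDENT   = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def next_token(code, pos):
    best = None
    for name, pat in (("KEYWORD", KEYWORD), ("IDENT", IDENT)):
        m = pat.match(code, pos)
        # keep whichever token kind matches the longest lexeme at pos
        if m and (best is None or len(m.group()) > len(best[1])):
            best = (name, m.group())
    return best

print(next_token("int intvalue;", 0))   # ('KEYWORD', 'int')
print(next_token("int intvalue;", 4))   # ('IDENT', 'intvalue'), the longer match wins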

Regular Expressions
The lexical analyzer needs to scan and identify only a finite set of valid strings/tokens/lexemes that
belong to the language at hand. It searches for the pattern defined by the language rules.
Regular expressions have the capability to express finite languages by defining a pattern for
finite strings of symbols. The grammar defined by regular expressions is known as regular
grammar. The language defined by regular grammar is known as regular language.
Regular expression is an important notation for specifying patterns. Each pattern matches a set
of strings, so regular expressions serve as names for a set of strings. Programming language
tokens can be described by regular languages. The specification of regular expressions is an
example of a recursive definition. Regular languages are easy to understand and have efficient
implementation.
There are a number of algebraic laws that are obeyed by regular expressions, which can be used
to manipulate regular expressions into equivalent forms.

Operations
The various operations on languages are:
• Union of two languages L and M is written as
L U M = {s | s is in L or s is in M}
• Concatenation of two languages L and M is written as
LM = {st | s is in L and t is in M}


• The Kleene Closure of a language L is written as
L* = zero or more occurrences of language L.

Notations
If r and s are regular expressions denoting the languages L(r) and L(s), then
• Union : (r)|(s) is a regular expression denoting L(r) U L(s)
• Concatenation : (r)(s) is a regular expression denoting L(r)L(s)
• Kleene closure : (r)* is a regular expression denoting (L(r))*
Precedence and Associativity
• *, concatenation (.), and | (pipe sign) are left associative
• * has the highest precedence
• Concatenation (.) has the second highest precedence.
• | (pipe sign) has the lowest precedence of all.

Representing valid tokens of a language in regular expression


If x is a regular expression, then:
• x* means zero or more occurrences of x,
i.e., it can generate { ε, x, xx, xxx, xxxx, … }
• x+ means one or more occurrences of x,
i.e., it can generate { x, xx, xxx, xxxx, … }, or x.x*
• x? means at most one occurrence of x,
i.e., it can generate either {x} or {ε}.
[a-z] is all lower-case letters of the English language.
[A-Z] is all upper-case letters of the English language.
[0-9] is all natural digits used in mathematics.

Representing occurrence of symbols using regular expressions


letter = [a – z] or [A – Z]
digit = 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 or [0-9]
sign = [ + | - ]
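These operators can be exercised directly with Python's re module (the patterns below are illustrative):

import re

print(re.fullmatch(r"x*", ""))             # matches: zero occurrences are allowed
print(re.fullmatch(r"x+", ""))             # None: x+ needs at least one x
print(re.fullmatch(r"x?", "xx"))           # None: x? allows at most one x
print(re.fullmatch(r"[a-zA-Z]", "G"))      # letter = [a-z] or [A-Z]
print(re.fullmatch(r"[+-]?[0-9]+", "-42")) # sign? digit+, an optionally signed number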

Q: Write the regular expression for the language:


L = {abⁿw : n ≥ 3, w ∈ (a,b)⁺}

Sol: Every string of L starts with "a", followed by at least three b's, followed by a non-empty string over {a, b}. Hence:

r = abbb b* (a|b)(a|b)*


UNIT-1 (Lecture-5)

Compiler Design: Finite Automata


Finite state machine
o A finite state machine is used to recognize patterns.
o A finite automaton takes a string of symbols as input and changes its state
accordingly. When a desired symbol is found in the input, the transition occurs.
o During a transition, the automaton can either move to the next state or stay in the same state.
o An FA has two kinds of states: accept states and reject states. When the input string has been
processed successfully and the automaton has reached a final state, the string is accepted.

A finite automaton consists of the following:

Q: finite set of states
∑: finite set of input symbols
q0: initial state
F: set of final states
δ: transition function

The transition function can be defined as:

δ: Q × ∑ → Q

FA is classified in two ways:

1. DFA (deterministic finite automata)


2. NDFA (non-deterministic finite automata)

DFA
DFA stands for Deterministic Finite Automaton. Deterministic refers to the uniqueness of the
computation: in a DFA, each input character takes the machine to exactly one state. A DFA does not
accept the null (ε) move, which means it cannot change state without consuming an input character.

A DFA has five tuples {Q, ∑, q0, F, δ}:

Q: set of all states
∑: finite set of input symbols
q0: initial state
F: set of final states
δ: transition function, δ: Q × ∑ → Q

Example
An example of a deterministic finite automaton:
1. Q = {q0, q1, q2}
2. ∑ = {0, 1}
3. q0 = q0
4. F = {q2}
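A sketch of simulating such a DFA in Python. The transition table below is an assumed example (a machine over {0, 1} that accepts strings ending in "01"); any concrete DFA can be plugged in the same way:

DELTA = {
    ("q0", "0"): "q1", ("q0", "1"): "q0",
    ("q1", "0"): "q1", ("q1", "1"): "q2",
    ("q2", "0"): "q1", ("q2", "1"): "q0",
}
FINAL = {"q2"}

def accepts(w):
    state = "q0"
    for ch in w:
        state = DELTA[(state, ch)]   # deterministic: exactly one next state
    return state in FINAL

print(accepts("1101"))   # True  (ends in 01)
print(accepts("0110"))   # False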

NDFA
NDFA refers to Non-Deterministic Finite Automata. For a particular input, an NDFA can transition
to any number of states. An NDFA also accepts the NULL (ε) move, which means it can change state
without reading a symbol.

An NDFA has the same five tuples as a DFA, but a different transition function, defined as:

δ: Q × ∑ → 2^Q
Example
An example of a non-deterministic finite automaton:
1. Q = {q0, q1, q2}
2. ∑ = {0, 1}
3. q0 = q0
4. F = {q2}


Optimization of DFA
To optimize (minimize) a DFA, follow these steps:

Step 1: Remove all states that are unreachable from the initial state via any sequence of
transitions of the DFA.

Step 2: Draw the transition table for the remaining states.

Step 3: Split the transition table into two tables, T1 and T2. T1 contains the rows for final states and
T2 the rows for non-final states.

Step 4: Find similar rows in T1, i.e., states q and r such that:

1. δ (q, a) = p
2. δ (r, a) = p

for every input symbol. That means: find two states which transition to the same states on the same inputs, and remove one of them.

Step 5: Repeat step 4 until no similar rows remain in table T1.

Step 6: Repeat steps 4 and 5 for table T2 as well.

Step 7: Now combine the reduced T1 and T2 tables. The combined transition table is the
transition table of the minimized DFA.
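A compact sketch of this row-merging procedure in Python, assuming the DFA is given as a dictionary mapping state -> {symbol: next_state} (a simplification of the steps above, not a full Hopcroft-style minimizer):

def minimize(delta, start, finals):
    """Minimize a DFA given as {state: {symbol: next_state}} (sketch only)."""
    # Step 1: drop states unreachable from the start state.
    reachable, stack = {start}, [start]
    while stack:
        for nxt in delta[stack.pop()].values():
            if nxt not in reachable:
                reachable.add(nxt)
                stack.append(nxt)
    delta = {s: dict(row) for s, row in delta.items() if s in reachable}

    # Steps 3-6: within the final and the non-final group, repeatedly merge
    # states whose transition-table rows are identical.
    changed = True
    while changed:
        changed = False
        for is_final in (True, False):
            rows = {}
            group = [s for s in delta if (s in finals) == is_final]
            # visit the start state first so it is never merged away
            for s in sorted(group, key=lambda q: (q != start, q)):
                row = tuple(sorted(delta[s].items()))
                keep = rows.setdefault(row, s)
                if keep != s:                     # Step 4: identical row found
                    del delta[s]                  # remove one of the two states
                    for r in delta.values():      # redirect transitions into it
                        for a, t in r.items():
                            if t == s:
                                r[a] = keep
                    changed = True
    # Step 7: what remains is the transition table of the minimized DFA.
    return delta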

Example

Solution:

Step 1: In the given DFA, q2 and q4 are unreachable states, so remove them.

Step 2: Draw the transition table for the rest of the states.


Step 3:

Now divide the rows of the transition table into two sets:

1. One set contains the rows that start from non-final states.

2. The other set contains the rows that start from final states.

Step 4: Set 1 has no similar rows, so it remains the same.


Step 5: In set 2, row 1 and row 2 are similar, since q3 and q5 transition to the same states on 0 and 1. So
remove q5, and replace q5 by q3 wherever it appears.

Step 6: Now combine set 1 and set 2; the combined table is the transition table of the minimized DFA.


Step 7: Draw the transition diagram of the minimized DFA.

Fig: Minimized DFA


UNIT-1 (Lecture-6)
Compiler Design: LEX, FG, BNF, YACC
Formal grammar
o Formal grammar is a set of rules used to identify correct or incorrect strings of
tokens in a language. The formal grammar is represented as G.
o Formal grammar is used to generate all possible strings over the alphabet that are
syntactically correct in the language.
o Formal grammar is used mostly in the syntactic analysis phase (parsing), particularly
during compilation.

A formal grammar G is written as follows:


G = <V, N, P, S>
Where:
N describes a finite set of non-terminal symbols.
V describes a finite set of terminal symbols.
P describes a set of production rules.
S is the start symbol.

Example:
V = {a, b}, N = {S, R, B}
Production rules:
S → bR
R → aR
R → aB
B → b
Through these productions we can produce strings like bab, baab, baaab, etc.
These productions describe strings of the form baⁿb (n ≥ 1).



BNF Notation
BNF stands for Backus-Naur Form. It is used to write a formal representation of a context-free
grammar. It is also used to describe the syntax of a programming language.

BNF notation is basically just a variant of a context-free grammar.

In BNF, productions have the form:

Left side → definition

Where leftside ∈ (Vn ∪ Vt)+ and definition ∈ (Vn ∪ Vt)*. In BNF, the leftside contains a single non-
terminal.

We can define several productions with the same leftside. All the productions are separated
by a vertical bar symbol "|".

For example, given the productions:

S → aSa
S → bSb
S→c

In BNF, we can represent above grammar as follows:

S → aSa| bSb| c

COMPILER DESIGN (KCS-502)

UNIT-1 (Lecture-7)

Compiler Design – Context Free Grammar

Context free grammar


Context free grammar is a formal grammar which is used to generate all possible strings in a
given formal language. Context free grammar G can be defined by four tuples as:

G= (V, T, P, S)

Where,
G describes the grammar
T describes a finite set of terminal symbols.
V describes a finite set of non-terminal symbols
P describes a set of production rules
S is the start symbol.

In a CFG, the start symbol is used to derive the string. You can derive a string by repeatedly
replacing a non-terminal by the right-hand side of a production, until all non-terminals have
been replaced by terminal symbols.

Example:

L = {wcwᴿ | w ∈ (a, b)*}

Production rules:

S → aSa
S → bSb
S → c

Now check that the string abbcbba can be derived from the given CFG:

S ⇒ aSa
  ⇒ abSba
  ⇒ abbSbba
  ⇒ abbcbba

By applying the productions S → aSa and S → bSb recursively, and finally applying the production S
→ c, we get the string abbcbba.
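A tiny sketch of this membership check in Python, mirroring how the productions peel matching symbols off both ends of the string:

def in_L(s):
    # Is s in L = { w c w^R : w in {a,b}* } ?
    if s == "c":                     # the production S -> c
        return True
    # S -> aSa or S -> bSb: the first and last symbols must be the same a or b
    return len(s) >= 3 and s[0] in "ab" and s[0] == s[-1] and in_L(s[1:-1])

print(in_L("abbcbba"))   # True
print(in_L("abbcbab"))   # False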


Capabilities of CFG
There are various capabilities of CFG:

o Context free grammar is useful for describing most programming languages.


o If the grammar is properly designed, an efficient parser can be constructed from it
automatically.
o Using associativity and precedence information, suitable grammars for
expressions can be constructed.
o Context free grammar is capable of describing nested structures such as balanced
parentheses, matching begin-end pairs, corresponding if-then-else's, and so on.


UNIT-1 (Lecture-8)

Compiler Design - Syntax Analysis


Syntax Analyzers
A syntax analyzer or parser takes the input from a lexical analyzer in the form of token streams.
The parser analyzes the source code (token stream) against the production rules to detect any
errors in the code. The output of this phase is a parse tree.

This way, the parser accomplishes two tasks, i.e., parsing the code, looking for errors and
generating a parse tree as the output of the phase.
Parsers are expected to parse the whole code even if some errors exist in the program. Parsers
use error-recovery strategies, which we will learn later in this chapter.

Derivation
A derivation is basically a sequence of production rule applications used to get the input string. During
parsing, we take two decisions for some sentential form of the input:

• deciding which non-terminal is to be replaced;


• deciding the production rule by which that non-terminal will be replaced.
To decide which non-terminal to replace at each step, we have two options.

Left-most Derivation

If the sentential form of an input is scanned and replaced from left to right, it is called left-most
derivation. The sentential form derived by the left-most derivation is called the left-sentential
form.


Right-most Derivation

If we scan and replace the input with production rules, from right to left, it is known as right-
most derivation. The sentential form derived from the right-most derivation is called the right-
sentential form.
Example
Production rules:
E → E + E
E → E * E
E → id
Input string: id + id * id
The left-most derivation is:
E ⇒ E * E
  ⇒ E + E * E
  ⇒ id + E * E
  ⇒ id + id * E
  ⇒ id + id * id
Notice that the left-most non-terminal is always processed first.
The right-most derivation is:
E ⇒ E + E
  ⇒ E + E * E
  ⇒ E + E * id
  ⇒ E + id * id
  ⇒ id + id * id

Parse Tree
A parse tree is a graphical depiction of a derivation. It is convenient to see how strings are
derived from the start symbol; the start symbol of the derivation becomes the root of the parse
tree. Let us see this with the example from the last topic.
We take the left-most derivation of id + id * id:
E ⇒ E * E
  ⇒ E + E * E
  ⇒ id + E * E
  ⇒ id + id * E
  ⇒ id + id * id


Step 1:

E ⇒ E * E

Step 2:

E ⇒ E + E * E

Step 3:

E ⇒ id + E * E

Step 4:

E ⇒ id + id * E

Step 5:

E ⇒ id + id * id

In a parse tree:

• All leaf nodes are terminals.


• All interior nodes are non-terminals.
• In-order traversal gives original input string.
A parse tree depicts the associativity and precedence of operators. The deepest sub-tree is traversed
first; therefore, the operator in that sub-tree gets precedence over the operator in its
parent nodes.
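A small sketch in Python of the final tree and its in-order traversal (the tree shape follows the left-most derivation above; the class and names are illustrative):

class Node:
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)

    def inorder(self):
        if not self.children:          # leaf nodes are terminals
            return [self.label]
        out = []                       # interior nodes are non-terminals (E)
        for child in self.children:
            out += child.inorder()
        return out

plus = Node("E", [Node("id"), Node("+"), Node("id")])   # the deeper sub-tree
root = Node("E", [plus, Node("*"), Node("id")])
print(" ".join(root.inorder()))                         # id + id * id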


Ambiguity
A grammar G is said to be ambiguous if it has more than one parse tree (equivalently, more than one
left-most or right-most derivation) for at least one string.
Example
E→E+E
E→E–E
E → id
For the string id + id – id, the above grammar generates two parse trees: one grouping the string as
(id + id) – id and the other as id + (id – id).

A language is said to be inherently ambiguous if every grammar that generates it is ambiguous.


Ambiguity in a grammar is not good for compiler construction. No method can detect and
remove ambiguity automatically, but it can be removed either by re-writing the whole grammar
without ambiguity, or by setting and following associativity and precedence constraints.
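One way to see the ambiguity mechanically is to enumerate all parse trees for the token string under the grammar above; the helper below is a hypothetical sketch:

OPS = {"+", "-"}

def trees(toks):
    # All parse trees for an alternating id/operator token list
    # under E -> E + E | E - E | id.
    if list(toks) == ["id"]:
        return ["id"]
    found = []
    for i, t in enumerate(toks):
        if t in OPS:                          # try this operator as the root
            for left in trees(toks[:i]):
                for right in trees(toks[i + 1:]):
                    found.append((t, left, right))
    return found

for tree in trees(["id", "+", "id", "-", "id"]):
    print(tree)
# ('+', 'id', ('-', 'id', 'id'))   i.e. id + (id - id)
# ('-', ('+', 'id', 'id'), 'id')   i.e. (id + id) - id

Two distinct trees are printed, confirming that the grammar is ambiguous for this string.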
