Compiler Design Unit 1

The document provides an overview of compiler design, explaining the roles of compilers and interpreters, the phases of compilation, and the differences between native and cross compilers. It also discusses finite automata, regular expressions, and grammars, detailing their definitions and classifications. Additionally, it covers concepts such as bootstrapping, symbol tables, and the distinctions between single-pass and multi-pass compilers.

COMPILER DESIGN (BCS-602)


INTRODUCTION TO COMPILER
A compiler is a program that reads a program written in one language (the SOURCE language) and translates it into an equivalent program in another language (the TARGET language), reporting any errors it detects in the source program along the way.

    Source Language → [ COMPILER ] → Target Language
                           |
                           v
                     Error Messages
INTRODUCTION TO INTERPRETER
An interpreter is another common kind of language processor. It reads the source program as input and executes it line by line, producing the program's results directly instead of a separate target program.

    Source Program → [ INTERPRETER ] → Output
LANGUAGE PROCESSING SYSTEM

    Source program
         ↓
    PREPROCESSOR
         ↓
    Modified source program
         ↓
    COMPILER
         ↓
    Target assembly language
         ↓
    ASSEMBLER
         ↓
    Relocatable machine code
         ↓
    LINKER/LOADER  ←  library files, relocatable object files
         ↓
    Target machine code
DIFFERENCE BETWEEN
COMPILER AND INTERPRETER
Some terminologies:
• A token is the smallest individual element of a program that is meaningful to the compiler; it cannot be broken down further. Identifiers, strings, and keywords are examples of tokens.
• A lexeme is a sequence of characters in the source program that is grouped together as a single unit. When a compiler or interpreter reads the source code, it breaks it down into lexemes, which help it analyze and process the program efficiently.
• A pattern describes what character sequences can form a token; patterns are defined by means of regular expressions.
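The relationship between patterns, lexemes, and tokens can be sketched with a small tokenizer. This is a minimal illustration rather than a real lexical analyzer; the token names and the pattern set are assumptions chosen for the example.

```python
import re

# Each token class is defined by a pattern (a regular expression).
# The matched character sequence is the lexeme; the (name, lexeme)
# pair is the token handed on to later phases.
TOKEN_PATTERNS = [
    ("KEYWORD",    r"\b(if|while|do)\b"),
    ("IDENTIFIER", r"[A-Za-z_][A-Za-z0-9_]*"),
    ("NUMBER",     r"\d+"),
    ("OPERATOR",   r"[+\-*/=<>]"),
    ("SKIP",       r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_PATTERNS))

def tokenize(source):
    tokens = []
    for m in MASTER.finditer(source):
        if m.lastgroup != "SKIP":          # discard whitespace
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(tokenize("if x1 = 42"))
# [('KEYWORD', 'if'), ('IDENTIFIER', 'x1'), ('OPERATOR', '='), ('NUMBER', '42')]
```

Note that the pattern `if|while|do` is listed before the identifier pattern, so keywords win over identifiers, exactly as a real scanner's rule ordering would arrange.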
PHASES OF COMPILER
The compilation process is grouped into two parts:
• Analysis of the Source Program
• Synthesis of the Object Program
PHASES OF COMPILER (Cont…)
Lexical Analysis:
The lexical analyzer phase is the first phase of the compilation process. It takes source code as input, reads the source program one character at a time, and converts it into meaningful lexemes, which it represents in the form of tokens.
Syntax Analysis
Syntax analysis is the second phase of the compilation process. It takes tokens as input and generates a parse tree as output. In this phase, the parser checks whether the expression formed by the tokens is syntactically correct.
Semantic Analysis
Semantic analysis is the third phase of the compilation process. It checks whether the parse tree follows the rules of the language. The semantic analyzer keeps track of identifiers, their types, and expressions. The output of this phase is the annotated syntax tree.
PHASES OF COMPILER (Cont…)
Intermediate Code Generation
In intermediate code generation, the compiler translates the source code into an intermediate code, which sits between the high-level language and the machine language. The intermediate code should be generated in such a way that it can easily be translated into target machine code.
Code Optimization
Code optimization is an optional phase. It improves the intermediate code so that the resulting program runs faster and takes less space. It removes unnecessary lines of code and rearranges the sequence of statements to speed up program execution.
Code Generation
Code generation is the final stage of the compilation process. It takes the optimized intermediate code as input and maps it to the target machine language: the code generator translates the intermediate code into the machine code of the specified computer.
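A common teaching form of intermediate code is three-address code. The sketch below generates three-address code for a simple arithmetic expression; the temporary-name scheme (t1, t2, …) is an assumption for illustration, not a fixed standard.

```python
import ast

def three_address(expr):
    """Translate a simple arithmetic expression into three-address code."""
    code, counter = [], [0]
    OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}

    def emit(node):
        if isinstance(node, ast.BinOp):
            left = emit(node.left)
            right = emit(node.right)
            counter[0] += 1
            temp = f"t{counter[0]}"          # fresh temporary name
            code.append(f"{temp} = {left} {OPS[type(node.op)]} {right}")
            return temp
        if isinstance(node, ast.Name):
            return node.id
        if isinstance(node, ast.Constant):
            return str(node.value)
        raise ValueError("unsupported expression node")

    emit(ast.parse(expr, mode="eval").body)
    return code

for line in three_address("a + b * c"):
    print(line)
# t1 = b * c
# t2 = a + t1
```

Each emitted instruction has at most one operator on the right-hand side, which is what makes the subsequent optimization and code-generation phases easy to express.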
PHASES OF COMPILER (Cont…)
Symbol Table
• Every compiler uses a symbol table to track all variables,
functions, and identifiers in a program.
• It stores information such as the name, type, scope, and
memory location of each identifier.
• Built during the early stages of compilation, the symbol table
supports error checking, scope management, and code
optimization for runtime efficiency.
• The symbol table acts as a bridge between the analysis and
synthesis phases of the compiler.
• It collects information during the analysis phases and utilizes
it during the synthesis phases to generate efficient code,
ultimately enhancing compile-time performance.
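A minimal sketch of a scoped symbol table, assuming a simple chain-of-dictionaries design (real compilers use hash tables plus many more attributes per identifier):

```python
class SymbolTable:
    """Chain of scopes; each scope maps an identifier to its attributes."""
    def __init__(self):
        self.scopes = [{}]                  # start with the global scope

    def enter_scope(self):
        self.scopes.append({})

    def exit_scope(self):
        self.scopes.pop()

    def declare(self, name, **attrs):       # e.g. type, memory location
        self.scopes[-1][name] = attrs

    def lookup(self, name):
        # Search from the innermost scope outward, as scope rules require.
        for scope in reversed(self.scopes):
            if name in scope:
                return scope[name]
        return None                          # undeclared identifier

st = SymbolTable()
st.declare("x", type="int")
st.enter_scope()
st.declare("x", type="float")                # shadows the outer x
print(st.lookup("x"))                        # {'type': 'float'}
st.exit_scope()
print(st.lookup("x"))                        # {'type': 'int'}
```

The `lookup` returning `None` is where a semantic analyzer would report an "undeclared identifier" error.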
CATEGORIES OF COMPILERS
1. Native Compiler:
A native compiler is a compiler that generates code for the same platform on which it runs. It converts a high-level language into the computer's native machine language. For example, Turbo C or GCC.

2. Cross Compiler:
A cross compiler is a compiler that generates executable code for a platform other than the one on which it is running. For example, a compiler running on a Linux/x86 box building a program that will run on a separate Arduino/ARM board.
CATEGORIES OF COMPILERS
Difference between Native and Cross Compiler

NATIVE COMPILER                                   CROSS COMPILER
Translates a program for the same                 Translates a program for a
hardware/platform/machine on which                hardware/platform/machine other than
it is running.                                    the one on which it is running.
It is dependent on the system/machine             It is independent of the system/machine
and OS.                                           and OS.
It can generate an executable file,               It can generate raw code, e.g. .hex.
e.g. .exe.
Turbo C or GCC is a native compiler.              Keil is a cross compiler.
PASSES
A pass is one complete traversal of the source program during which some or all of the phases of the compiler are performed. Compilers fall into two categories:
• Single-pass compiler (e.g., Pascal)
• Two-pass/multi-pass compiler (e.g., Java)
• Pass 1 is also known as: Front End, Analysis Part, Platform Independent
• Pass 2 is also known as: Back End, Synthesis Part, Platform Dependent
SINGLE-PASS COMPILER
In a single (one) pass compiler, all the phases perform their functions in a single pass over the source program.
Advantage:
• It takes less time to execute.
Disadvantages:
• Compilation proceeds strictly in sequence; the compiler cannot go back to handle errors.
• It occupies more memory space.
TWO/MULTI-PASS COMPILER
A two-pass/multi-pass compiler is a type of compiler that processes the source code or abstract syntax tree of a program multiple times. In a multi-pass compiler, the phases are divided between passes as follows:
Advantages:
• It occupies less memory space.
• Errors can be removed in each pass, producing error-free code.

Disadvantage:
• It takes more time to convert source code into target code.
TWO/MULTI-PASS COMPILER
It helps to solve two main problems:
1. Designing compilers for different programming languages for the same machine.
2. Designing compilers for the same programming language for different machines.
DIFFERENCE BETWEEN SINGLE-PASS AND MULTI-PASS

PARAMETER      SINGLE PASS    MULTI-PASS
SPEED          FAST           SLOW
MEMORY         MORE           LESS
TIME           LESS           MORE
PORTABILITY    NO             YES
BOOTSTRAPPING
Bootstrapping is widely used in compiler development.
• It is used to produce a self-hosting compiler.
• A self-hosting compiler is a compiler that can compile its own source code.
• It is used to compile the compiler, and this compiled compiler can then compile everything else, as well as future versions of itself.
For bootstrapping purposes, a compiler is characterized by three languages:
• the source language S that the compiler compiles,
• the target language T for which it generates code, and
• the implementation language I in which the compiler is written.
Such a compiler is depicted by a T-diagram, with S and T on the arms and I at the base.
BOOTSTRAPPING
[T-diagram figures omitted: a compiler with source S, target T, and implementation I is drawn as a "T" with S and T on the arms and I at the base; chaining such diagrams shows how a small compiler (Compiler 1) is used to compile the next one (Compiler 2).]
FINITE AUTOMATON
An automaton with a finite number of states is called a Finite
Automaton (FA) or Finite State Machine (FSM).
An automaton can be represented by a 5-tuple (Q, ∑, δ, q0, F), where

Q is a finite set of states.
∑ is a finite set of symbols, called the alphabet of the automaton.
δ is the transition function.
q0 is the initial state from where any input is processed (q0 ∈ Q).
F is a set of final state/states of Q.
FINITE AUTOMATA
(cont….)
Related Terminologies

• Alphabet
Definition − An alphabet is any finite set of symbols.
Example − ∑ = {a, b, c, d} is an alphabet set where ‘a’, ‘b’, ‘c’, and ‘d’ are symbols.

• String
Definition − A string is a finite sequence of symbols taken from ∑.
Example − ‘cabcad’ is a valid string on the alphabet set ∑ = {a, b, c, d}

• Length of a String
Definition − It is the number of symbols present in a string. (Denoted by |S|).
Examples −
If S = ‘cabcad’, |S|= 6
If |S|= 0, it is called an empty string (Denoted by λ or ε)
FINITE AUTOMATA
(cont….)
Related Terminologies

• Kleene Star
Definition − The Kleene star, ∑*, is a unary operator on a set of symbols or strings, ∑, that gives the infinite set of all possible strings of all possible lengths over ∑, including λ.
Representation − ∑* = ∑^0 ∪ ∑^1 ∪ ∑^2 ∪ ……. where ∑^p is the set of all possible strings of length p.
Example − If ∑ = {a, b}, ∑* = {λ, a, b, aa, ab, ba, bb, ………..}

• Kleene Plus (Positive Closure)
Definition − The set ∑+ is the infinite set of all possible strings of all possible lengths over ∑, excluding λ.
Representation − ∑+ = ∑^1 ∪ ∑^2 ∪ ∑^3 ∪ …….
∑+ = ∑* − { λ }
Example − If ∑ = { a, b }, ∑+ = { a, b, aa, ab, ba, bb, ………..}

• Language
Definition − A language is a subset of ∑* for some alphabet ∑. It can be finite or infinite.
Example − If the language takes all possible strings of length 2 over ∑ = {a, b}, then L = {aa, ab, ba, bb}
FINITE AUTOMATA
(cont….)
Finite Automaton can be classified into two types −
•Deterministic Finite Automaton (DFA)
•Non-deterministic Finite Automaton (NDFA / NFA)

Deterministic Finite Automaton (DFA)


In DFA, for each input symbol, one can determine the state to which the machine
will move. Hence, it is called Deterministic Automaton. As it has a finite number of
states, the machine is called Deterministic Finite Machine or Deterministic Finite
Automaton.
Formal Definition of a DFA
A DFA can be represented by a 5-tuple (Q, ∑, δ, q0, F) where −
Q is a finite set of states.
∑ is a finite set of symbols called the alphabet.
δ is the transition function where δ: Q × ∑ → Q
q0 is the initial state from where any input is processed (q0 ∈ Q).
F is a set of final state/states of Q (F ⊆ Q).
FINITE AUTOMATA
(cont….)
Example
Let a deterministic finite automaton be →
Q = {a, b, c},
∑ = {0, 1},
q0 = {a},
F = {c}, and
Transition function δ as shown by the following table −
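Because δ: Q × ∑ → Q returns exactly one state, simulating a DFA is a simple loop. The transition table below is an assumed example (the original table is given only as a figure); any table of that type works the same way.

```python
# DFA over the 5-tuple (Q, Σ, δ, q0, F); the table is an assumed example.
DELTA = {
    ("a", "0"): "a", ("a", "1"): "b",
    ("b", "0"): "c", ("b", "1"): "a",
    ("c", "0"): "b", ("c", "1"): "c",
}
Q0 = "a"
F = {"c"}

def dfa_accepts(string):
    state = Q0
    for symbol in string:
        state = DELTA[(state, symbol)]   # exactly one next state: deterministic
    return state in F

print(dfa_accepts("10"))    # '1': a -> b, then '0': b -> c, so accepted
print(dfa_accepts("1"))     # a -> b, not a final state, so rejected
```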
FINITE AUTOMATA
(cont….)
Non-Deterministic Finite Automaton (NDFA/NFA)
In an NDFA, for a particular input symbol, the machine can move to any combination of states. In other words, the exact state to which the machine moves cannot be determined. Hence, it is called a Non-deterministic Automaton. As it has a finite number of states, the machine is called a Non-deterministic Finite Machine or Non-deterministic Finite Automaton.

Formal Definition of an NDFA

An NDFA can be represented by a 5-tuple (Q, ∑, δ, q0, F) where −

Q is a finite set of states.
∑ is a finite set of symbols called the alphabet.
δ is the transition function where δ: Q × ∑ → 2^Q
(Here the power set of Q (2^Q) is used because, in an NDFA, a transition from a state can go to any combination of states in Q.)
q0 is the initial state from where any input is processed (q0 ∈ Q).
F is a set of final state/states of Q (F ⊆ Q).
FINITE AUTOMATA
(cont….)
Example
Let a non-deterministic finite automaton be →
Q = {a, b, c}
∑ = {0, 1}
q0 = {a}
F = {c}
The transition function δ as shown below −
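Since δ maps into the power set 2^Q, an NFA is simulated by tracking a set of current states instead of a single state. The transition table below is again an assumed example, as the original is given only as a figure:

```python
# NFA simulation: δ: Q × Σ → 2^Q, so the simulator tracks a SET of states.
DELTA = {
    ("a", "0"): {"a", "b"}, ("a", "1"): {"b"},
    ("b", "0"): {"c"},      ("b", "1"): {"a", "c"},
    ("c", "0"): set(),      ("c", "1"): {"c"},
}
Q0 = "a"
F = {"c"}

def nfa_accepts(string):
    current = {Q0}
    for symbol in string:
        nxt = set()
        for state in current:
            nxt |= DELTA.get((state, symbol), set())
        current = nxt
    # Accept if ANY reachable state is final.
    return bool(current & F)

print(nfa_accepts("00"))   # {a} -0-> {a,b} -0-> {a,b,c}: accepted
print(nfa_accepts("1"))    # {a} -1-> {b}: rejected
```

The set-of-states idea is exactly the subset construction used to convert an NFA into an equivalent DFA.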
REGULAR EXPRESSION
Regular expression is an important notation for specifying patterns. Each pattern
matches a set of strings, so regular expressions serve as names for a set of strings.
Programming language tokens can be described by regular languages.

A Regular Expression can be recursively defined as follows −
• ε is a Regular Expression that denotes the language containing only the empty string. (L(ε) = {ε})
• φ is a Regular Expression denoting the empty language. (L(φ) = { })
• x is a Regular Expression where L(x) = {x}
If X is a Regular Expression denoting the language L(X) and Y is a Regular Expression denoting the language L(Y), then
• X + Y is a Regular Expression corresponding to the language L(X+Y) = L(X) ∪ L(Y)
• X . Y is a Regular Expression corresponding to the language L(X.Y) = L(X) . L(Y)
• R* is a Regular Expression corresponding to the language L(R*) = (L(R))*
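The three operations (union, concatenation, star) can be checked on small finite languages. The star here is truncated to strings of bounded length, since (L(R))* is infinite:

```python
def union(L1, L2):
    return L1 | L2                               # L(X+Y) = L(X) ∪ L(Y)

def concat(L1, L2):
    return {x + y for x in L1 for y in L2}       # L(X.Y) = L(X) . L(Y)

def star(L, max_len):
    # Truncated (L)*: start from {λ} and keep concatenating words of L
    # until every new string would exceed max_len.
    result, frontier = {""}, {""}
    while frontier:
        frontier = {s + w for s in frontier for w in L
                    if len(s + w) <= max_len} - result
        result |= frontier
    return result

LX, LY = {"a"}, {"b"}
print(sorted(union(LX, LY)))        # ['a', 'b']
print(sorted(concat(LX, LY)))       # ['ab']
print(sorted(star({"a"}, 3)))       # ['', 'a', 'aa', 'aaa']
```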
RE TO NFA CONVERSION
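The standard RE-to-NFA algorithm is Thompson's construction: each operator (single symbol, concatenation, union, star) yields a small NFA fragment with one start and one accepting state, glued together with ε-transitions. A compact sketch follows; the regex (a|b)*ab is an assumed example, since the original conversion diagram is given only as a figure.

```python
# Thompson's construction: every RE operator yields an NFA fragment
# (start state, accepting state) connected by ε-transitions.
class State:
    def __init__(self):
        self.edges = {}   # symbol -> list of successor states
        self.eps = []     # ε-transitions

def lit(c):                       # fragment for a single symbol c
    s, f = State(), State()
    s.edges[c] = [f]
    return s, f

def concat(f1, f2):               # fragment for X . Y
    (s1, e1), (s2, e2) = f1, f2
    e1.eps.append(s2)
    return s1, e2

def union(f1, f2):                # fragment for X + Y (written X|Y)
    (s1, e1), (s2, e2) = f1, f2
    s, f = State(), State()
    s.eps += [s1, s2]
    e1.eps.append(f)
    e2.eps.append(f)
    return s, f

def star(frag):                   # fragment for R*
    s1, e1 = frag
    s, f = State(), State()
    s.eps += [s1, f]
    e1.eps += [s1, f]
    return s, f

def eps_closure(states):
    stack, seen = list(states), set(states)
    while stack:
        for nxt in stack.pop().eps:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def matches(frag, text):
    start, accept = frag
    current = eps_closure({start})
    for ch in text:
        step = set()
        for st in current:
            step.update(st.edges.get(ch, []))
        current = eps_closure(step)
    return accept in current

# NFA for the assumed example regex (a|b)*ab
frag = concat(star(union(lit("a"), lit("b"))), concat(lit("a"), lit("b")))
print(matches(frag, "bbab"))   # True
print(matches(frag, "ba"))     # False
```

Each constructor adds at most two states, so a regex of length n yields an NFA with O(n) states, which is why this construction is the usual first step in scanner generators.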
GRAMMAR
Grammars are used to describe the syntax of a programming
language. It specifies the structure of expression and statements.
Definition
A context free grammar G is defined by four tuples as,
G=(V,T,P,S)
where,
G - Grammar
V - Set of variables
T - Set of Terminals
P - Set of productions
S - Start symbol
Terminals are represented by lowercase letters.
Variables (non-terminals) are represented by uppercase letters.
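The four components of G = (V, T, P, S) map directly onto a small data structure. A sketch, with an assumed toy expression grammar:

```python
# G = (V, T, P, S) for a toy expression grammar.
grammar = {
    "V": {"E"},                       # variables (non-terminals)
    "T": {"id", "+", "-"},            # terminals
    "P": {                            # productions, keyed by LHS variable
        "E": [["E", "+", "E"], ["E", "-", "E"], ["id"]],
    },
    "S": "E",                         # start symbol
}

def is_sentence(symbols, grammar):
    """A sentential form is a sentence when it contains terminals only."""
    return all(s in grammar["T"] for s in symbols)

print(is_sentence(["id", "-", "id"], grammar))   # True: terminals only
print(is_sentence(["E", "-", "id"], grammar))    # False: E is a variable
```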
GRAMMAR
According to the Chomsky hierarchy, grammars are divided into 4 types:
TYPE of Grammar
Type 0: Unrestricted Grammar:
• Type-0 grammars include all formal grammars. Type-0 grammar languages are recognized by the Turing machine. These languages are also known as the Recursively Enumerable languages.
• Grammar productions are of the form α → β, where α and β are arbitrary strings of terminals and variables and α is non-empty.
TYPE of Grammar
• Type 1: (Context Sensitive Grammar)
Type-1 grammars generate the context-sensitive languages. The
language generated by the grammar are recognized by the Linear
Bound Automata.
• In Type 1
I. First of all Type 1 grammar should be Type 0.
II. Grammar Production in the form of
TYPE of Grammar
• Type 2: Context Free Grammar:
Type-2 grammars generate the context-free languages. The language generated by such a grammar is recognized by a pushdown automaton.
In Type 2:
1. First of all, it should be Type 1.
2. The left-hand side of a production can have only one variable, i.e., productions are of the form A → γ.
TYPE of Grammar
• Type 3: Regular Grammar:
Type-3 grammars generate the regular languages. These languages are exactly the languages that can be accepted by a finite state automaton.
• Type 3 is the most restricted form of grammar.
• Type 3 productions must be of the form A → a or A → aB (right-linear), where A and B are variables and a is a terminal.
DERIVATION
A derivation is basically a sequence of production rule applications used to obtain the input string. During parsing, we take two decisions for some sentential form of the input:
• Deciding which non-terminal is to be replaced.
• Deciding the production rule by which the non-terminal will be replaced.
DERIVATION (Cont…)
To decide which non-terminal is to be replaced with a production rule, we have two options:
• Left-most Derivation
If the sentential form of an input is scanned and
replaced from left to right, it is called left-most
derivation. The sentential form derived by the left-
most derivation is called the left-sentential form.
• Right-most Derivation
If we scan and replace the input with production rules,
from right to left, it is known as right-most derivation.
The sentential form derived from the right-most
derivation is called the right-sentential form.
DERIVATION Example
Production rules:
E → E + E
E → E - E
E → id
Input string: id - id + id

The left-most derivation is:
E → E + E
E → E - E + E
E → id - E + E
E → id - id + E
E → id - id + id

The right-most derivation is:
E → E - E
E → E - E + E
E → E - E + id
E → E - id + id
E → id - id + id
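A left-most derivation can be reproduced mechanically: at each step, replace the left-most non-terminal using a chosen production. A sketch, with the production choices for id - id + id supplied by hand:

```python
def leftmost_step(form, lhs, rhs, nonterminals):
    """Replace the left-most non-terminal (which must equal lhs) with rhs."""
    for i, sym in enumerate(form):
        if sym in nonterminals:
            assert sym == lhs, "left-most derivation must expand the left-most variable"
            return form[:i] + rhs + form[i + 1:]
    return form                      # no non-terminal left: a sentence

NT = {"E"}
form = ["E"]
steps = [                            # production applied at each step
    ("E", ["E", "+", "E"]),
    ("E", ["E", "-", "E"]),
    ("E", ["id"]),
    ("E", ["id"]),
    ("E", ["id"]),
]
for lhs, rhs in steps:
    form = leftmost_step(form, lhs, rhs, NT)
    print(" ".join(form))
# E + E
# E - E + E
# id - E + E
# id - id + E
# id - id + id
```

A parser makes exactly these two choices (which variable, which production) at every step; the `steps` list here plays the role of the parser's decisions.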
AMBIGUITY
A grammar G is said to be ambiguous if some input string has more than one parse tree (equivalently, more than one left-most or right-most derivation).

Example
Production rules:
E → E + E
E → E - E
E → id
Input string: id - id + id

The left-most derivation is:
E → E + E
E → E - E + E
E → id - E + E
E → id - id + E
E → id - id + id

The right-most derivation is:
E → E - E
E → E - E + E
E → E - E + id
E → E - id + id
E → id - id + id

Since the two derivations begin with different productions, they correspond to different parse trees for the same string, so the grammar is ambiguous.
LEX and YACC
LEX is also known as the Lexical Analyzer, or Scanner.
It uses patterns that match strings in the input and converts the strings to tokens.
Tokens are numerical representations of strings, and they simplify processing.

YACC (Yet Another Compiler Compiler) is also known as the Syntax Analyzer, or Parser.
Yacc uses grammar rules that allow it to analyze tokens from lex and create a syntax tree.
A syntax tree imposes a hierarchical structure on tokens.
LEX and YACC
Working of LEX
The working of lex in compiler design as a lexical analyzer takes place in multiple steps. First, we create a file that describes the generation of the lex analyzer. This file is written in the Lex language and has a .l extension. The lex compiler converts this program into a C file called lex.yy.c. The C compiler then compiles this C file into an a.out executable. This a.out file is our working lexical analyzer, which produces a stream of tokens from the input text.
LEX and YACC
Any lex file consists of the following three parts-
• Definitions: This section of the lex file contains declarations of constants and variables, and regular definitions.

• Rules: This section of the lex file defines the rules in the form of regular expressions and corresponding actions that the lexer should take to identify tokens in the input stream. Each rule consists of a regular expression followed by a code block that specifies the action to take when the regular expression matches. The rules have the form:
p1 {action1}
p2 {action2}
...
pn {actionn}
where pi is the i-th regular expression and actioni is the action that must be taken when that regular expression matches.

• User Subroutines: This section contains the user-defined functions that can be used by the action code blocks in the rules section of the lex file.
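The pattern/action pairing of the rules section can be mimicked in Python: each rule is a regular expression plus a callable action, tried in order at each position, with the matched lexeme passed to the action. The patterns and actions here are assumptions chosen for illustration:

```python
import re

# Lex-style rules: (pattern pi, action_i), tried in order at each position.
rules = [
    (re.compile(r"[0-9]+"),                 lambda lex: ("NUMBER", int(lex))),
    (re.compile(r"[A-Za-z_][A-Za-z0-9_]*"), lambda lex: ("ID", lex)),
    (re.compile(r"\s+"),                    lambda lex: None),   # skip whitespace
]

def scan(text):
    pos, tokens = 0, []
    while pos < len(text):
        for pattern, action in rules:
            m = pattern.match(text, pos)
            if m:
                token = action(m.group())   # the lexeme goes to the action
                if token is not None:
                    tokens.append(token)
                pos = m.end()
                break
        else:
            raise SyntaxError(f"illegal character {text[pos]!r}")
    return tokens

print(scan("count 42"))   # [('ID', 'count'), ('NUMBER', 42)]
```

Returning `None` from an action, as for whitespace, corresponds to a lex rule whose action emits no token.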
LEX and YACC
Input to Lex is divided into three sections, with %% dividing the sections:

    definitions              (optional)
    %%
    pattern   { action code }        ← rules: regular expression + action
    %%
    user subroutines         (optional)
LEX and YACC
Input to YACC is likewise divided into three sections, with %% dividing the sections:

    declarations             (optional)
    %%
    production rule  { action code }   ← grammar rules: CFG production + action
    %%
    user subroutines         (optional)
LEX and YACC
To run:

% lex bas.l
% gcc lex.yy.c -ll
% a.out
... (program output) ...
%
COMPILER CONSTRUCTION TOOLS
• Scanner generators: These automatically generate lexical analyzers, normally from a specification based on regular expressions.
• Parser generators: These produce syntax analyzers, normally from input that is based on a context-free grammar.
• Syntax-directed translation engines: These produce collections of routines that walk the parse tree.
• Code-generator generators: Such a tool takes a collection of rules that define the translation of each operation of the intermediate language into the machine language for the target machine.
• Dataflow analysis engines: Much of the information needed to perform good code optimization involves "data flow analysis", the gathering of information about how values are transmitted from one part of a program to another.
Q1. Generate the tokens and parse tree for the following:
If (MAX==5) GOTO 100

Q2. Generate the tokens and parse tree for the following:
While A>B do
A=A+B

Q3. Generate the tokens and parse tree for the following:
While A>=B & A=2*5 do
A=A*B
