Module 3 Notes

This document contains lecture notes on compiler design from a course. It discusses the different phases and components of a compiler, including lexical analysis, syntax analysis, and intermediate code generation. It also compares compilers to interpreters, describing the key differences in how each approaches translating source code. Finally, it provides examples of popular programming language compilers.


COMPILER DESIGN NOTES (17CS63)

COMPILER DESIGN (17CS63)

LECTURE NOTES

Prof. ANOOP, DIVYA, CHANDANA, Dept. of CSE, EWIT



MODULE 3
Lexical analysis:

• Language processors;
• The structure of a Compiler;
• The evolution of programming languages;
• The science of building a Compiler;
• Applications of compiler technology;

• Lexical analysis:
• The Role of Lexical Analyzer;
• Input Buffering;
• Specifications of Tokens;
• Recognition of Tokens
• The Lexical-Analyzer Generator Lex




Language processors

The programs which translate the program written in a programming language by the user into an
executable program is known as language processors. The program translated by language processor is
understood by the hardware of the computer. Examples of language processor are assemblers, compilers
and interpreters.

Preprocessor:
A preprocessor produces input to compilers. It may perform the following functions:
1. Macro processing: A preprocessor may allow a user to define macros that are shorthands for longer constructs.
2. File inclusion: A preprocessor may include header files into the program text.
3. Rational preprocessor: These preprocessors augment older languages with more modern flow-of-control and data-structuring facilities.
4. Language extensions: These preprocessors attempt to add capabilities to the language by means of built-in macros.

Compilers:
A compiler is a program which translates a program written in a high level language into machine
language program that can be understood by the computer. Generally, compilers are large programs that
occupy a large memory space.

Assemblers:
A program that translates an assembly language program into a machine language program is called an assembler.
Programmers found it difficult to write or read programs in machine language. They began to use mnemonics (symbols) for each machine instruction, which they would subsequently translate into machine language. Such a mnemonic machine language is now called an assembly language. Programs known as assemblers were written to automate the translation of assembly language into machine language. The input to an assembler is called the source program; the output is a machine language translation (object program).

Interpreter:
An interpreter is a program which translates the statements of a high-level language into machine codes. It translates one statement at a time. It seems to be similar to a compiler, but a compiler reads the entire program first and then translates it into machine codes. Hence, assemblers, compilers and interpreters are all computer programs.

Loader and Link-editor:


Once the assembler produces an object program, that program must be placed into memory and executed. The assembler could place the object program directly in memory and transfer control to it, thereby causing the machine language program to be executed. This would waste core by leaving the assembler in memory while the user's program was being executed. Also, the programmer would have to retranslate the program with each execution, thus wasting translation time. To overcome these problems of wasted translation time and memory, system programmers developed another component called a loader.
A loader is a program that places programs into memory and prepares them for execution. It would be more efficient if subroutines could be translated into object form that the loader could "relocate" directly behind the user's program. The task of adjusting programs so they may be placed in arbitrary core locations is called relocation. Relocating loaders perform four functions.

DIFFERENCE BETWEEN COMPILER AND INTERPRETER


• A compiler converts the high-level instructions into machine language, while an interpreter converts the high-level instructions into an intermediate form.
• Before execution, the entire program is translated by the compiler, whereas an interpreter translates the first line, then executes it, and so on.
• A list of errors is produced by the compiler after the compilation process, while an interpreter stops translating at the first error.
• An independent executable file is created by the compiler, whereas the interpreter is required by an interpreted program each time it runs.
• The compiler produces object code, whereas an interpreter does not. In the process of compilation the program is analyzed only once and then the code is generated, whereas the source program is interpreted (and re-analyzed) every time it is executed; hence an interpreter is less efficient than a compiler.
• Example of interpreter: the UPS Debugger is basically a graphical source-level debugger, but it contains a built-in C interpreter which can handle multiple source files.


• Example of compiler: the Borland C compiler or Turbo C compiler compiles programs written in C or C++.

Languages such as BASIC, SNOBOL and LISP can be translated using interpreters. JAVA also uses an interpreter. The process of interpretation can be carried out in the following phases:
1. Lexical analysis
2. Syntax analysis
3. Semantic analysis
4. Direct execution
Advantages:

a. Modification of the user program can easily be made and implemented as execution proceeds.
b. The type of object that a variable denotes may change dynamically.
c. Debugging a program and finding errors is a simplified task for a program used for interpretation.
d. The interpreter for the language makes it machine independent.

TRANSLATOR

A translator is a program that takes as input a program written in one language and produces as output a program in another language. Besides program translation, the translator performs another very important role: error detection. Any violation of the HLL specification is detected and reported to the programmer. The important roles of a translator are:

1. Translating the HLL program input into an equivalent ML program.
2. Providing diagnostic messages wherever the programmer violates the specification of the HLL.

TYPES OF TRANSLATORS:

INTERPRETER
COMPILER
PREPROCESSOR

Examples of compilers

1. Ada compilers
2. ALGOL compilers
3. BASIC compilers
4. C# compilers
5. C compilers
6. C++ compilers
7. COBOL compilers
8. D compilers
9. Common Lisp compilers
10. ECMAScript interpreters
11. Eiffel compilers
12. Felix compilers
13. Fortran compilers
14. Haskell compilers
15. Java compilers
16. Pascal compilers
17. PL/I compilers
18. Python compilers
19. Scheme compilers
20. Smalltalk compilers
21. CIL compilers

Structure of Compilers:

Phases of a compiler: A compiler operates in phases. A phase is a logically interrelated operation that takes the source program in one representation and produces output in another representation. The phases of a compiler are shown below.
There are two parts of compilation:
a. Analysis (machine independent / language dependent)
b. Synthesis (machine dependent / language independent)
The compilation process is partitioned into a number of sub-processes called 'phases'.


Lexical Analysis:-
The LA, or scanner, reads the source program one character at a time, carving the source program into a sequence of atomic units called tokens. The output of the LA is a stream of tokens, which is passed to the next phase, the syntax analyzer or parser.

Syntax Analysis:-
The second stage of translation is called syntax analysis or parsing. In this phase expressions, statements, declarations etc. are identified by using the results of lexical analysis. Syntax analysis is aided by using techniques based on the formal grammar of the programming language. The SA groups the tokens together into a syntactic structure called an expression. Expressions may further be combined to form statements. The syntactic structure can be regarded as a tree whose leaves are the tokens; such trees are called parse trees.

The parser has two functions. It checks whether the tokens from the lexical analyzer occur in patterns that are permitted by the specification for the source language. It also imposes on the tokens a tree-like structure that is used by the subsequent phases of the compiler.

The goal of syntax analysis is to make explicit the hierarchical structure of the incoming token stream by identifying which parts of the token stream should be grouped.

Intermediate Code Generation:-

An intermediate representation of the final machine language code is produced. This phase bridges the analysis and synthesis phases of translation. The intermediate code generator uses the structure produced by the syntax analyzer to create a stream of simple instructions. Many styles of intermediate code are possible. One common style uses instructions with one operator and a small number of operands.
The output of the syntax analyzer is some representation of a parse tree. The intermediate code generation phase transforms this parse tree into an intermediate-language representation of the source program.
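The one-operator-per-instruction style described above can be sketched as a post-order walk over a parse tree. This is an illustrative sketch, not the notes' algorithm; the tuple tree shape and the t1, t2 temporary-name scheme are assumptions.

```python
# A minimal sketch of emitting three-address intermediate code from a
# parse tree for the expression a + b * c. Leaves are names; interior
# nodes are (operator, left, right) tuples.
temp_count = 0

def new_temp():
    """Invent a fresh temporary name t1, t2, ..."""
    global temp_count
    temp_count += 1
    return f"t{temp_count}"

def gen(node, code):
    """Post-order walk: translate children first, then emit one
    instruction with a single operator and a small number of operands."""
    if isinstance(node, str):          # a leaf is just a name
        return node
    op, left, right = node
    l = gen(left, code)
    r = gen(right, code)
    t = new_temp()
    code.append(f"{t} = {l} {op} {r}")
    return t

code = []
gen(("+", "a", ("*", "b", "c")), code)
# code now holds the instruction sequence for a + b * c
```

Because the walk is post-order, the multiplication is emitted before the addition that consumes its result, mirroring how the syntax tree's structure drives instruction order.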

Code Optimization:-
This is an optional phase intended to improve the intermediate code so that the output runs faster and takes less space. Its output is another intermediate-code program that does the same job as the original, but in a way that saves time and/or space.

Code Generation:-
The last phase of translation is code generation. A number of optimizations to reduce the length of the machine language program are carried out during this phase. The output of the code generator is the machine language program for the specified computer. The code generator produces the object code by deciding on the memory locations for data, selecting code to access each datum, and selecting the registers in which each computation is to be done. Many computers have only a few high-speed registers in which computations can be performed quickly. A good code generator would attempt to utilize registers as efficiently as possible.

Table Management (or) Book-keeping:-

This is the portion that keeps the names used by the program and records essential information about each. The data structure used to record this information is called a 'Symbol Table'.

Error Handlers:-
It is invoked when a flaw or error in the source program is detected. One of the most important functions of a compiler is the detection and reporting of errors in the source program. The error message should allow the programmer to determine exactly where the errors have occurred. Errors may occur in any of the phases of a compiler.
Whenever a phase of the compiler discovers an error, it must report the error to the error handler, which issues an appropriate diagnostic message. Both the table-management and error-handling routines interact with all phases of the compiler.

The structure of the compiler is best understood by tracing an example statement through every phase, observing each phase's input and output.
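As a stand-in for the figure, here is a hedged sketch, in plain Python data, of the classic statement position = initial + rate * 60 passing through the phases; the token names, tree shape and temporary names are illustrative assumptions, not taken from the notes.

```python
# Each phase's output for "position = initial + rate * 60", written as data.
source = "position = initial + rate * 60"

# 1. Lexical analysis: a stream of (token-name, lexeme) pairs
tokens = [("id", "position"), ("assign", "="), ("id", "initial"),
          ("plus", "+"), ("id", "rate"), ("times", "*"), ("num", "60")]

# 2. Syntax analysis: a parse tree as nested tuples
tree = ("=", "position", ("+", "initial", ("*", "rate", "60")))

# 3. Intermediate code generation: one operator per instruction
intermediate = ["t1 = rate * 60", "t2 = initial + t1", "position = t2"]

# 4. Code optimization: the copy through t2 can be eliminated
optimized = ["t1 = rate * 60", "position = initial + t1"]

assert len(tokens) == 7 and tree[0] == "="
```

The analysis part (phases 1 and 2) depends only on the source language; the synthesis part (intermediate code onward) moves toward the target machine.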


➢ The evolution of programming languages

• The move to higher-level languages
The first step towards more people-friendly programming languages was the development of mnemonic assembly languages in the early 1950s. Initially, the instructions in an assembly language were just mnemonic representations of machine instructions. Later, macro instructions were added to assembly languages so that a programmer could define parameterized shorthands for frequently used sequences of machine instructions.

• Impacts on Compilers
Compilers can help promote the use of high-level languages by minimizing the execution overhead of the programs written in these languages. Compilers are also critical in making high-performance computer architectures effective on users' applications. In fact, the performance of a computer system is so dependent on compiler technology that compilers are used as a tool in evaluating architectural concepts before a computer is built.
➢ The science of building a Compiler
Compiler design deals with complicated real world problems.
• First the problem is taken.
• A mathematical abstraction is formulated.
• Solve using mathematical techniques.
Modeling in compiler design and implementation
• Right mathematical model and right algorithm.
• Fundamental models – finite state machine, regular expression, context free grammar.
The science of code optimization
Optimization: It is an attempt made by the compiler to produce code that is more efficient than the obvious one.
Compiler optimizations-Design objectives
• Must improve performance of many programs.
• Optimization must be correct.
• Compilation time must be kept reasonable.
• Engineering effort required must be manageable

➢ Applications of compiler technology

1. IMPLEMENTATION OF HIGH-LEVEL PROGRAMMING LANGUAGES

• The programmer expresses an algorithm using the language, and the compiler must translate that program to the target language.
• Generally, high-level programming languages are easier to program in, but are less efficient, i.e., the target program runs more slowly.
• Programmers using a low-level programming language have more control over a computation and can produce more efficient code.
• Unfortunately, low-level programs are harder to write and, still worse, less portable, more prone to errors and harder to maintain.
• Optimizing compilers include techniques to improve the performance of general code, thus offsetting the inefficiency introduced by high-level abstractions.
2. OPTIMIZATIONS FOR COMPUTER ARCHITECTURES
• The rapid evolution of computer architecture has also led to an insatiable demand for new compiler technology.
• Almost all high-performance systems take advantage of the same two basic techniques: parallelism and memory hierarchies.
• Parallelism can be found at several levels: at the instruction level, where multiple operations are executed simultaneously, and at the processor level, where different threads of the same application are run on different processors.
• Memory hierarchies are a response to the basic limitation that we can build very fast storage or very large storage, but not storage that is both fast and large.


➢ PARALLELISM
All modern microprocessors exploit instruction-level parallelism. This can be hidden from the programmer.
• The hardware scheduler dynamically checks for dependencies in the sequential instruction stream and issues instructions in parallel when possible.
• Whether the hardware reorders the instructions or not, compilers can rearrange the instructions to make instruction-level parallelism more effective.

➢ MEMORY HIERARCHIES

• A memory hierarchy consists of several levels of storage with different speeds and sizes.
• A processor usually has a small number of registers holding hundreds of bytes, several levels of caches containing kilobytes to megabytes, and finally secondary storage that contains gigabytes and beyond.
• Correspondingly, the speed of accesses between adjacent levels of the hierarchy can differ by two or three orders of magnitude.
• The performance of a system is often limited not by the speed of the processor but by the performance of the memory subsystem.
• While compilers traditionally focus on optimizing processor execution, more emphasis is now placed on making the memory hierarchy more effective.

3. DESIGN OF NEW COMPUTER ARCHITECTURES

• In modern computer architecture development, compilers are developed in the processor design stage, and compiled code, running on simulators, is used to evaluate the proposed architectural design.
• One of the best-known examples of how compilers influenced the design of computer architecture was the invention of the RISC (reduced instruction-set computer) architecture.
• Over the last three decades, many architectural concepts have been proposed. They include data-flow machines, vector machines, VLIW (very long instruction word) machines, and multiprocessors with shared memory and with distributed memory.
• The development of each of these architectural concepts was accompanied by the research and development of corresponding compiler technology.
• Compiler technology is not only needed to support programming of these architectures, but also to evaluate the proposed architectural designs.

4. PROGRAM TRANSLATIONS
Normally we think of compiling as translation from a high-level language to machine-level language, but the same technology can be applied to translate between different kinds of languages.
The following are some of the important applications of program-translation techniques:

➢ BINARY TRANSLATION
• Compiler technology can be used to translate the binary code for one machine to that of another, allowing a machine to run programs originally compiled for another instruction set.

• This technology has been used by various computer companies to increase the availability of software for their machines.
➢ HARDWARE SYNTHESIS
• Not only is most software written in high-level languages; even hardware designs are mostly described in high-level hardware description languages like Verilog and VHDL (very high speed integrated circuit hardware description language).
• Hardware designs are typically described at the register transfer level (RTL).
• Hardware synthesis tools translate RTL descriptions automatically into gates, which are then mapped to transistors and eventually to a physical layout. Unlike compilers for programming languages, this process takes long hours to optimize the circuits.
➢ DATABASE QUERY INTERPRETERS
• Query languages like SQL are used to search databases.
• These database queries consist of relational and Boolean operators.
• They can be compiled into commands to search a database for records satisfying the needs.
5. SOFTWARE PRODUCTIVITY TOOLS
There are several ways in which program-analysis techniques, originally developed to optimize code in compilers, have improved software productivity:
TYPE CHECKING
BOUNDS CHECKING
MEMORY MANAGEMENT TOOLS

Lexical analysis:

To identify the tokens we need some method of describing the possible tokens that can appear in the input stream. For this purpose we introduce regular expressions, a notation that can be used to describe essentially all the tokens of a programming language.
Secondly, having decided what the tokens are, we need some mechanism to recognize them in the input stream. This is done by token recognizers, which are designed using transition diagrams and finite automata.
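One common way to build such a recognizer is sketched below using Python's re module; the particular token classes (num, id, relop, ws) are assumptions modeled on this chapter's examples, not a complete lexer.

```python
import re

# Each token class gets a pattern; the scanner repeatedly matches a
# prefix of the remaining input. num is listed before id and the longer
# relops before the shorter ones so the longest lexeme wins.
TOKEN_SPEC = [
    ("num",   r"\d+(?:\.\d+)?"),
    ("id",    r"[A-Za-z][A-Za-z0-9]*"),
    ("relop", r"<=|>=|<>|<|>|="),
    ("ws",    r"[ \t\n]+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    """Return (token-name, lexeme) pairs; ws is stripped, not returned.
    Characters matching no pattern are silently skipped in this sketch."""
    tokens = []
    for m in MASTER.finditer(text):
        if m.lastgroup != "ws":
            tokens.append((m.lastgroup, m.group()))
    return tokens
```

A call such as tokenize("count <= 10") yields an id, a relop and a num, which is exactly the token stream the parser would consume.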

ROLE OF LEXICAL ANALYZER

The LA is the first phase of a compiler. Its main task is to read the input characters and produce as output a sequence of tokens that the parser uses for syntax analysis.

Upon receiving a 'get next token' command from the parser, the lexical analyzer reads the input characters until it can identify the next token. The LA returns to the parser a representation for the token it has found. The representation will be an integer code if the token is a simple construct such as a parenthesis, comma or colon.

The LA may also perform certain secondary tasks at the user interface. One such task is stripping out from the source program comments and white space in the form of blank, tab and newline characters. Another is correlating error messages from the compiler with the source program.

LEXICAL ANALYSIS VS PARSING:

Lexical analysis: A scanner simply turns an input string (say a file) into a list of tokens. These tokens represent things like identifiers, parentheses, operators etc. The lexical analyzer (the "lexer") parses individual symbols from the source code file into tokens.

Parsing: A parser converts this list of tokens into a tree-like object that represents how the tokens fit together to form a cohesive whole (sometimes referred to as a sentence). A parser does not give the nodes any meaning beyond structural cohesion; the next thing to do is extract meaning from this structure (sometimes called contextual analysis).

TOKEN
The LA reads the source program one character at a time, carving the source program into a sequence of atomic units called 'tokens'. A token has:
1. Type of the token.
2. Value of the token.
Type: variable, operator, keyword, constant.
Value: name of the variable, current value, (or) pointer to the symbol table.
If the symbols are given in the standard format, the LA accepts them and produces tokens as output. Each token is a substring of the program that is to be treated as a single unit. Tokens are of two types:
1. Specific strings such as IF (or) semicolon.
2. Classes of strings such as identifiers, labels, constants.

TOKEN, LEXEME, PATTERN:

Token:
Token is a sequence of characters that can be treated as a single logical entity. Typical tokens are,
1) Identifiers 2) keywords 3) operators 4) special symbols 5)constants

Prof. ANOOP, DIVYA, CHANDANA, Dept. Of CSE, EWIT 14


COMPILER DESIGN NOTES (17CS63)

Pattern:
A set of strings in the input for which the same token is produced as output. This set of strings is described by a rule, called a pattern, associated with the token. A pattern is a rule describing the set of lexemes that can represent a particular token in the source program.

Lexeme:
A lexeme is a sequence of characters in the source program that is matched by the pattern for a token.

Example:

Description of tokens

Token      Lexeme    Pattern
const      const     const
if         if        if
relation   <=        < or <= or = or <> or >= or >
id         pi        letter followed by letters and digits
num        3.14      any numeric constant
literal    "core"    any characters between " and " except "

LEXICAL ERRORS:

Lexical errors are the errors thrown by the lexer when it is unable to continue, which means that there is no way to recognise a lexeme as a valid token for the lexer. Syntax errors, on the other hand, are thrown by the parser when a given set of already recognised valid tokens doesn't match any of the right-hand sides of the grammar rules. A simple panic-mode error handling system requires that we return to a high-level parsing function when a parsing or lexical error is detected.

Error-recovery actions are:

i. Delete one character from the remaining input.
ii. Insert a missing character into the remaining input.
iii. Replace a character by another character.
iv. Transpose two adjacent characters.
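The four recovery actions above can be sketched as single-character edits on the remaining input. This is a toy illustration of the edits themselves, not a full recovery scheme; a real lexer would apply them to its input buffer and retry the match.

```python
# Each function performs one of the four recovery edits at position i.
def delete_char(s, i):
    return s[:i] + s[i+1:]

def insert_char(s, i, c):
    return s[:i] + c + s[i:]

def replace_char(s, i, c):
    return s[:i] + c + s[i+1:]

def transpose(s, i):
    """Swap the characters at positions i and i+1."""
    return s[:i] + s[i+1] + s[i] + s[i+2:]

# e.g. a transposition repairs the misspelling "whlie" into "while":
assert transpose("whlie", 2) == "while"
```

Panic mode instead discards input until a synchronizing token; these single edits are the cheaper, local alternative tried first.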

Prof. ANOOP, DIVYA, CHANDANA, Dept. Of CSE, EWIT 15


COMPILER DESIGN NOTES (17CS63)

REGULAR EXPRESSIONS

A regular expression is a formula that describes a possible set of strings.

Components of a regular expression:

x        the character x
.        any character, usually except a newline
[xyz]    any of the characters x, y, z, ...
R?       an R or nothing (= optionally an R)
R*       zero or more occurrences of R
R+       one or more occurrences of R
R1R2     an R1 followed by an R2
R1|R2    either an R1 or an R2

• A token is either a single string or one of a collection of strings of a certain type. If we view the set of strings in each token class as a language, we can use the regular-expression notation to describe tokens.
• Consider an identifier, which is defined to be a letter followed by zero or more letters or digits. In regular-expression notation we would write:

Identifier = letter (letter | digit)*


Here are the rules that define the regular expressions over an alphabet ∑:

o ε is a regular expression denoting { ε }, that is, the language containing only the empty string.
o For each 'a' in ∑, a is a regular expression denoting { a }, the language with only one string, consisting of the single symbol 'a'.
o If R and S are regular expressions, then

(R) | (S) denotes Lr ∪ Ls
R.S denotes Lr.Ls
R* denotes Lr*
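The three constructions can be checked directly with Python's re module. This is a sketch: re's syntax is richer than the formal notation, but union, concatenation and star line up one-to-one.

```python
import re

def in_lang(pattern, s):
    """True iff the whole string s is in the language of the pattern."""
    return re.fullmatch(pattern, s) is not None

# Union: L(R) ∪ L(S) contains the strings of either language.
assert in_lang("(ab)|(cd)", "ab")
assert not in_lang("(ab)|(cd)", "abcd")

# Concatenation: L(R)L(S) contains an R-string followed by an S-string.
assert in_lang("(ab)(cd)", "abcd")

# Star: L(R)* contains zero or more repetitions, including the empty string.
assert in_lang("(ab)*", "")
assert in_lang("(ab)*", "ababab")
```

fullmatch is used rather than match so that the pattern must describe the entire string, matching the formal definition of language membership.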

REGULAR DEFINITIONS

• For notational convenience, we may wish to give names to regular expressions and to define regular expressions using these names as if they were symbols.
• Identifiers are the set of strings of letters and digits beginning with a letter. The following regular definition provides a precise specification for this class of string:

letter → A | B | ... | Z | a | b | ... | z
digit  → 0 | 1 | 2 | ... | 9
id     → letter (letter | digit)*

Example
ab*|cd? is equivalent to (a(b*)) | (c(d?))


Recognition of tokens:
We have learned how to express patterns using regular expressions. Now, we must study how to take the patterns for all the needed tokens and build a piece of code that examines the input string and finds a prefix that is a lexeme matching one of the patterns.

stmt → if expr then stmt
     | if expr then stmt else stmt
     | ε
expr → term relop term
     | term
term → id
     | number

For relop, we use the comparison operators of languages like Pascal or SQL, where = is "equals" and <> is "not equals", because it presents an interesting structure of lexemes. The terminals of the grammar, which are if, then, else, relop, id and number, are the names of tokens as far as the lexical analyzer is concerned; the patterns for the tokens are described using regular definitions:

digit  → [0-9]
digits → digit+
number → digits(.digits)?(E[+-]?digits)?
letter → [A-Za-z]
id     → letter(letter|digit)*
if     → if
then   → then
else   → else
relop  → < | > | <= | >= | = | <>

In addition, we assign the lexical analyzer the job of stripping out white space, by recognizing the "token" ws defined by:


ws → (blank | tab | newline)+

Here, blank, tab and newline are abstract symbols that we use to express the ASCII characters of the same names. Token ws is different from the other tokens in that, when we recognize it, we do not return it to the parser, but rather restart the lexical analysis from the character that follows the white space. It is the following token that gets returned to the parser.

Lexeme        Token Name    Attribute Value
Any ws        -             -
if            if            -
then          then          -
else          else          -
Any id        id            pointer to table entry
Any number    number        pointer to table entry
<             relop         LT
<=            relop         LE
=             relop         EQ
<>            relop         NE
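As a sketch, the table's fixed lexemes can be a direct lookup, while ids and numbers install into a symbol table and carry a pointer (here, an index) as their attribute. The function name token_for and the remaining attribute names (GT, GE) are illustrative assumptions following the usual convention.

```python
# Fixed lexemes map straight to (token-name, attribute) pairs.
RELOP_ATTR = {"<": "LT", "<=": "LE", "=": "EQ",
              "<>": "NE", ">": "GT", ">=": "GE"}
KEYWORDS = {"if", "then", "else"}
symtab = []   # grows as new identifiers are installed

def token_for(lexeme):
    """Return the (token-name, attribute) pair the table above describes."""
    if lexeme in KEYWORDS:
        return (lexeme, None)             # keywords carry no attribute
    if lexeme in RELOP_ATTR:
        return ("relop", RELOP_ATTR[lexeme])
    symtab.append(lexeme)                 # install; attribute is the entry
    return ("id", len(symtab) - 1)

assert token_for("<=") == ("relop", "LE")
assert token_for("if") == ("if", None)
assert token_for("x") == ("id", 0)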

TRANSITION DIAGRAM:
A transition diagram has a collection of nodes or circles, called states. Each state represents a condition that could occur during the process of scanning the input looking for a lexeme that matches one of several patterns.
Edges are directed from one state of the transition diagram to another. Each edge is labelled by a symbol or set of symbols. If we are in some state s, and the next input symbol is a, we look for an edge out of state s labelled by a. If we find such an edge, we advance the forward pointer and enter the state of the transition diagram to which the edge leads.

Some important conventions about transition diagrams are:

• Certain states are said to be accepting, or final. These states indicate that a lexeme has been found, although the actual lexeme may not consist of all positions between the lexemeBegin and forward pointers. We always indicate an accepting state by a double circle.
• In addition, if it is necessary to retract the forward pointer one position, then we shall additionally place a * near that accepting state.
• One state is designated the start state, or initial state; it is indicated by an edge labelled "start" entering from nowhere. The transition diagram always begins in the start state before any input symbols have been used.


As an intermediate step in the construction of a LA, we first produce a stylized flowchart, called a transition diagram. Positions in a transition diagram are drawn as circles and are called states.

TOKEN getRelop()                        // TOKEN has two components
{
    TOKEN retToken = new(RELOP);        // First component set here
    while (true) {
        switch (state) {
        case 0: c = nextChar();
                if (c == '<') state = 1;
                else if (c == '=') state = 5;
                else if (c == '>') state = 6;
                else fail();
                break;
        case 1: ...
        ...
        case 8: retract();              // an accepting state with a star
                retToken.attribute = GT;    // second component
                return(retToken);
        }
    }
}

Implementation of relop transition diagram
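The same diagram can be sketched in Python, with retraction modeled by simply not consuming the lookahead character. The attribute names follow the token table (LT, LE, EQ, NE, GT, GE); returning a position instead of using a global forward pointer is an assumption of this sketch.

```python
def get_relop(text, pos):
    """Follow the relop transition diagram starting at text[pos].
    Return (token-name, attribute, next-position), or None if the
    input at pos is not a relational operator."""
    c = text[pos] if pos < len(text) else ""
    nxt = text[pos + 1] if pos + 1 < len(text) else ""
    if c == "<":
        if nxt == "=":
            return ("relop", "LE", pos + 2)
        if nxt == ">":
            return ("relop", "NE", pos + 2)
        return ("relop", "LT", pos + 1)    # retract: '<' stands alone
    if c == "=":
        return ("relop", "EQ", pos + 1)
    if c == ">":
        if nxt == "=":
            return ("relop", "GE", pos + 2)
        return ("relop", "GT", pos + 1)    # retract, as in starred state 8
    return None

assert get_relop("a<=b", 1) == ("relop", "LE", 3)
assert get_relop("a>b", 1) == ("relop", "GT", 2)
```

The two "retract" branches correspond to the starred accepting states: the diagram read one character too far to decide, so that character is left for the next call.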


The Transition Diagram for unsigned numbers:

Recognition of Reserved Words and Identifiers


The next transition diagram corresponds to the regular definition given previously.
Note again the star affixed to the final state.
Two questions remain.
1. How do we distinguish between identifiers and keywords such as then, which also match the pattern
in the transition diagram?
2. What is (gettoken(), installID())?
We will continue to assume that the keywords are reserved, i.e., may not be used as identifiers. (What if this is not the case, as in PL/I, which had no reserved words? Then the lexer does not distinguish between keywords and identifiers, and the parser must.)
We will use the method mentioned last chapter and have the keywords installed into the identifier table
prior to any invocation of the lexer. The table entry will indicate that the entry is a keyword.

The above TD is for an identifier, defined to be a letter followed by any number of letters or digits. A sequence of transition diagrams can be converted into a program to look for the tokens specified by the diagrams. Each state gets a segment of code.

installID() checks if the lexeme is already in the table. If it is not present, the lexeme is installed as an id
token. In either case a pointer to the entry is returned. gettoken() examines the lexeme and returns the
token name, either id or a name corresponding to a reserved keyword.
The text also gives another method to distinguish between identifiers and
keywords.

if → if
then → then
else → else
relop → < | <= | = | <> | > | >=
id → letter (letter | digit)*
num → digit+
AUTOMATA

• An automaton is defined as a system where information is transmitted and used for performing some functions without the direct participation of man.
• An automaton in which the output depends only on the input is called an automaton without memory.
• An automaton in which the output depends on the input and the state is called an automaton with memory.
• An automaton in which the output depends only on the state of the machine is called a Moore machine.
• An automaton in which the output depends on the state and the input at any instant of time is called a Mealy machine.

DESCRIPTION OF AUTOMATA

• An automaton has a mechanism to read input from an input tape.
• Any language is recognized by some automaton; hence automata are basically
language ‘acceptors’ or ‘language recognizers’.

Types of Finite Automata

Deterministic Automata
Non-Deterministic Automata.

DETERMINISTIC AUTOMATA
A deterministic finite automaton has at most one transition from each state on any input. A
DFA is a special case of an NFA in which:

• it has no transitions on input ε, and

• each input symbol has at most one transition from any state.

A DFA is formally defined by the 5-tuple notation M = (Q, ∑, δ, q0, F), where

Q is a finite, non-empty set of states;
∑ is the input alphabet;
δ is the transition function δ: Q × ∑ → Q, which determines the next state;
q0 ∈ Q is the initial state;
F ⊆ Q is the set of final states.
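Under this definition, a DFA can be simulated directly by encoding δ as a table. The following C sketch encodes a DFA for (a|b)*abb; the state numbering and table layout are choices made for the example:

```c
#include <assert.h>

/* A DFA M = (Q, Sigma, delta, q0, F) for the language (a|b)*abb,
   with Q = {0,1,2,3}, Sigma = {a,b}, q0 = 0, F = {3}.
   delta is a table: rows are states, columns are inputs a=0, b=1. */
static const int delta[4][2] = {
    /*  a  b */
    {   1, 0 },   /* state 0 */
    {   1, 2 },   /* state 1 */
    {   1, 3 },   /* state 2 */
    {   1, 0 },   /* state 3 */
};

/* Run the DFA on the string and report whether it is accepted. */
int dfa_accepts(const char *s) {
    int q = 0;                                 /* start in q0 */
    for (; *s; s++) {
        if (*s != 'a' && *s != 'b') return 0;  /* symbol not in Sigma */
        q = delta[q][*s - 'a'];                /* apply delta */
    }
    return q == 3;                             /* accept iff q is in F */
}
```

Because δ yields exactly one next state for every state-symbol pair, the simulation is a single pass over the input with one table lookup per character.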

The regular expression is converted into minimized DFA by the following procedure:


Regular expression → NFA → DFA → Minimized DFA

The finite automaton is called a DFA if there is only one path for a specific input from the
current state to the next state. For example, if state S0 has an edge labeled a to state S2 and
an edge labeled b to state S1, then from S0 for input a there is only one path, going to S2;
similarly, from S0 there is only one path for input b, going to S1.
NONDETERMINISTIC AUTOMATA

An NFA is a mathematical model that consists of

⚫ a set of states S,
⚫ a set of input symbols ∑,
⚫ a transition function that maps state-symbol pairs to sets of states,
⚫ a state s0 distinguished as the start (or initial) state,
⚫ a set of states F distinguished as accepting (or final) states.

Unlike a DFA, an NFA may have more than one transition from a state on a single symbol.

An NFA can be represented diagrammatically by a labeled directed graph, called a
transition graph, in which the nodes are the states and the labeled edges represent
the transition function.

This graph looks like a transition diagram, but the same character can label two or
more transitions out of one state, and edges can be labeled by the special symbol ε
as well as by input symbols.

The transition graph for an NFA that recognizes the language ( a | b ) * abb is
shown
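The same NFA can be simulated in code by tracking the set of states the machine could be in, following all possible transitions at once. A C sketch, with the states of the usual (a|b)*abb NFA numbered 0 to 3 (the numbering is an assumption made for the example):

```c
#include <assert.h>

/* NFA for (a|b)*abb: state 0 loops to itself on a and b, and also
   moves to 1 on a (the nondeterministic choice); 1 -> 2 on b;
   2 -> 3 on b; state 3 is accepting.  The current set of possible
   states is kept as a bitmask, one bit per state. */
int nfa_accepts(const char *s) {
    unsigned cur = 1u << 0;                   /* start set: {0} */
    for (; *s; s++) {
        unsigned next = 0;
        if (*s == 'a') {
            if (cur & (1u << 0)) next |= (1u << 0) | (1u << 1);
        } else if (*s == 'b') {
            if (cur & (1u << 0)) next |= 1u << 0;
            if (cur & (1u << 1)) next |= 1u << 2;
            if (cur & (1u << 2)) next |= 1u << 3;
        } else {
            return 0;                         /* symbol not in alphabet */
        }
        cur = next;                           /* all reachable states */
    }
    return (cur & (1u << 3)) != 0;            /* some path reached F? */
}
```

Tracking a set of states in this way is exactly the idea behind the subset construction used to convert an NFA into an equivalent DFA.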

DEFINITION OF CFG


A context-free grammar (CFG) involves four quantities: terminals, nonterminals, a start
symbol, and productions.

Terminals are the basic symbols from which strings are formed.
Nonterminals are syntactic variables that denote sets of strings.
In a grammar, one nonterminal is distinguished as the start symbol, and the set of
strings it denotes is the language defined by the grammar.
The productions of the grammar specify the manner in which the terminals and
nonterminals can be combined to form strings.
Each production consists of a nonterminal, followed by an arrow, followed by a string
of nonterminals and terminals.
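For instance, a small grammar using these conventions (the standard expression grammar, given here as an illustration):

```
expr   → expr + term | term
term   → term * factor | factor
factor → ( expr ) | id
```

Here expr, term, and factor are nonterminals (expr is the start symbol); the symbols +, *, (, ), and id are terminals; and each line lists the productions for one nonterminal.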
DEFINITION OF SYMBOL TABLE

A symbol table is an extensible array of records; for each identifier, the associated record
contains the information collected about that identifier.

A typical interface is:

FUNCTION identify(identifier name)
RETURNING a pointer to the identifier's information, which contains:
- the actual string
- a macro definition
- a keyword definition
- a list of type, variable, and function definitions
- a list of structure and union name definitions
- a list of structure and union field selector definitions

Creating a lexical analyzer with Lex

Lex specifications:

A Lex program (the .l file ) consists of three parts:


declarations
%%
translation rules
%%
auxiliary procedures

1. The declarations section includes declarations of variables, manifest constants (a manifest
constant is an identifier declared to represent a constant, e.g. #define PI 3.14),
and regular definitions.
2. The translation rules of a Lex program are statements of the form :
p1 {action 1}
p2 {action 2}
p3 {action 3}
… …
… …
where each p is a regular expression and each action is a program fragment describing
what action the lexical analyzer should take when a pattern p matches a lexeme. In Lex
the actions are written in C.

3. The third section holds whatever auxiliary procedures are needed by the
actions. Alternatively, these procedures can be compiled separately and loaded with the
lexical analyzer.
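A minimal sketch of this three-part layout (the patterns and token codes here are illustrative, not taken from the notes):

```lex
%{
/* declarations: token codes invented for the example */
#define NUMBER 300
#define ID     301
%}
digit    [0-9]
letter   [A-Za-z]
%%
{digit}+                     { return NUMBER; }
{letter}({letter}|{digit})*  { return ID; }
[ \t\n]+                     { /* skip whitespace */ }
%%
int yywrap(void) { return 1; }
```

Running this through Lex produces a C function yylex() that returns the token code of the next lexeme each time it is called, with the matched text available in yytext.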
INPUT BUFFERING

The lexical analyzer scans the characters of the source program one at a time to discover
tokens. Because a large amount of time can be consumed scanning characters, specialized
buffering techniques have been developed to reduce the amount of overhead required to
process an input character.
Buffering techniques:
1. Buffer pairs
2. Sentinels

• The lexical analyzer scans the characters of the source program one at a time to
discover tokens. Often, however, many characters beyond the next token may have
to be examined before the next token itself can be determined.
• For this and other reasons, it is desirable for the lexical analyzer to read its input from
an input buffer. The figure shows a buffer divided into two halves of, say, 100 characters
each.
• One pointer marks the beginning of the token being discovered. A lookahead pointer
scans ahead of the beginning point until the token is discovered.
• We view the position of each pointer as being between the character last read and the
character next to be read. In practice, each buffering scheme adopts one convention:
either a pointer is at the symbol last read, or at the symbol it is ready to read.
The distance the lookahead pointer may have to travel past the actual token can be large. If
the lookahead pointer travels beyond the buffer half in which it began, the other half must be
loaded with the next characters from the source file. Since the buffer shown in the figure
above is of limited size, there is an implied constraint on how much lookahead can be used
before the next token is discovered. In the example above, if the lookahead traveled to the
left half and all the way through the left half to the middle, we could not reload the right
half, because we would lose characters that had not yet been grouped into tokens. While we
can make the buffer larger if we choose, or use another buffering scheme, we cannot ignore
the fact that the amount of lookahead is limited.
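The buffer-pair and sentinel ideas can be sketched in C as follows. Each half ends in a '\0' sentinel, so the common case needs only one test per character; the "source file" is simulated here by a string, and the half size, names, and refill logic are all choices made for the sketch rather than a definitive implementation:

```c
#include <assert.h>
#include <string.h>

#define N 16                 /* size of each buffer half (small for the sketch) */

/* Buffer pair: buf[0..N-1] and buf[N+1..2N] hold characters;
   buf[N] and buf[2N+1] are sentinel slots holding '\0'. */
static char buf[2 * N + 2];
static const char *src;      /* stands in for the source file */
static char *forward;        /* the lookahead (forward) pointer */

/* Load up to N characters into one half and terminate it with a
   sentinel; a sentinel before the end of the half means real EOF. */
static void fill(char *half) {
    size_t n = strlen(src);
    if (n > N) n = N;
    memcpy(half, src, n);
    src += n;
    half[n] = '\0';
}

void init_buffers(const char *source) {
    src = source;
    fill(buf);               /* load the first half */
    buf[2 * N + 1] = '\0';   /* sentinel at the end of the second half */
    forward = buf;
}

/* Return the next character, or 0 at end of input.  Only when a
   sentinel is hit do we pay for the extra checks and a refill. */
int next_char(void) {
    char c = *forward++;
    if (c != '\0') return c;                 /* the one test per character */
    if (forward == buf + N + 1) {            /* sentinel ending first half */
        fill(buf + N + 1);                   /* reload second half */
        return next_char();
    }
    if (forward == buf + 2 * N + 2) {        /* sentinel ending second half */
        fill(buf);                           /* wrap: reload first half */
        forward = buf;
        return next_char();
    }
    return 0;                                /* sentinel mid-half: real EOF */
}
```

Note that reloading a half overwrites its old contents, which is exactly the lookahead constraint described above: a token may not span more than one reload, or its beginning is lost.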
