LECTURE NOTES
MODULE 3
Lexical analysis:
• Language processors;
• The structure of a Compiler;
• The evolution of programming languages;
• The science of building a Compiler;
• Applications of compiler technology;
• Lexical analysis:
• The Role of Lexical Analyzer;
• Input Buffering;
• Specifications of Tokens;
• Recognition of Tokens
• The Lexical-Analyzer Generator Lex
Language processors
Programs that translate a program written in a programming language into an executable program are known as language processors. The program produced by a language processor can be understood by the hardware of the computer. Examples of language processors are assemblers, compilers and interpreters.
Preprocessor:
A preprocessor produces input to compilers. It may perform the following functions.
1. Macro processing: A preprocessor may allow a user to define macros that are shorthands for longer constructs.
2. File inclusion: A preprocessor may include header files into the program text (a short C sketch of these two functions follows this list).
3. Rational preprocessors: These preprocessors augment older languages with more modern flow-of-control and data-structuring facilities.
4. Language extensions: These preprocessors attempt to add capabilities to the language by means of built-in macros.
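As a small illustration of the first two functions, here is a hedged C sketch (the macro names and computed value are only examples): the #include line is file inclusion and the #define lines are macro processing; both are expanded by the preprocessor before the compiler proper sees the text.

#include <stdio.h>                  /* file inclusion: the contents of stdio.h are pasted in here */

#define PI 3.14159                  /* macro processing: PI is a shorthand for a longer constant  */
#define AREA(r) (PI * (r) * (r))    /* a parameterized macro                                      */

int main(void) {
    /* After preprocessing, the next line reads:
       printf("%f\n", (3.14159 * (2.0) * (2.0))); */
    printf("%f\n", AREA(2.0));
    return 0;
}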
Compilers:
A compiler is a program which translates a program written in a high-level language into a machine language program that can be understood by the computer. Generally, compilers are large programs that occupy a large amount of memory.
Assemblers:
A program that translates an assembly language program into a machine language program is called an assembler.
Programmers found it difficult to write or read programs in machine language. They began to use a mnemonic (symbol) for each machine instruction, which they would subsequently translate into machine language. Such a mnemonic machine language is now called an assembly language. Programs known as assemblers were written to automate the translation of assembly language into machine language. The input to an assembler program is called the source program; the output is a machine language translation (object program).
Interpreter:
An interpreter is a program which translates the statements of a high-level language into machine code. It translates one statement at a time. It appears similar to a compiler, but a compiler reads the entire program first and then translates it into machine code. Assemblers, compilers and interpreters are thus all computer programs.
This arrangement wasted core by leaving the assembler in memory while the user's program was being executed. The programmer would also have to retranslate the program with each execution, thus wasting translation time. To overcome these problems of wasted translation time and memory, system programmers developed another component called a loader.
A loader is a program that places programs into memory and prepares them for execution. It would be more efficient if subroutines could be translated into object form which the loader could "relocate" directly behind the user's program. The task of adjusting programs so they may be placed in arbitrary core locations is called relocation. Relocating loaders perform four functions.
Languages such as BASIC, SNOBOL and LISP can be translated using interpreters. Java also uses an interpreter. The process of interpretation can be carried out in the following phases.
1. Lexical analysis
2. Syntax analysis
3. Semantic analysis
4. Direct Execution
TRANSLATOR
A translator is a program that takes as input a program written in one language and produces as output a program in another language. Besides program translation, the translator performs another very important role: error detection. Any violation of the HLL specification is detected and reported to the programmer. The important roles of a translator are therefore translation and error detection.
TYPES OF TRANSLATORS:-
INTERPRETER
COMPILER
PREPROCESSOR
Examples of compilers
1. Ada compilers
2. ALGOL compilers
3. BASIC compilers
4. C# compilers
5. C compilers
6. C++ compilers
7. COBOL compilers
8. D compilers
Structure Of Compilers:
Lexical Analysis:-
The LA or scanner reads the source program one character at a time, carving the source program
into a sequence of atomic units called tokens. The output of LA is a stream of tokens, which is
passed to the next phase, the syntax analyzer or parser.
Syntax Analysis:-
The second stage of translation is called Syntax analysis or parsing. In this phase expressions,
statements, declarations etc… are identified by using the results of lexical analysis. Syntax
analysis is aided by using techniques based on formal grammar of the programming language.
The SA groups the tokens together into syntactic structures such as expressions. Expressions may further be combined to form statements. The syntactic structure can be regarded as a tree whose leaves are tokens; such trees are called parse trees.
The parser has two functions. It checks whether the tokens from the lexical analyzer occur in patterns that are permitted by the specification for the source language. It also imposes on the tokens a tree-like structure that is used by the subsequent phases of the compiler.
The purpose of syntax analysis is to make explicit the hierarchical structure of the incoming token stream by identifying which parts of the token stream should be grouped together.
Intermediate Code Generation:-
This phase uses the structure produced by the syntax analyzer to create a stream of simple instructions. Many styles of intermediate code are possible. One common style uses instructions with one operator and a small number of operands.
The output of the syntax analyzer is some representation of a parse tree. The intermediate code generation phase transforms this parse tree into an intermediate language representation of the source program.
Code Optimization :-
This is an optional phase designed to improve the intermediate code so that the output runs faster and takes less space. Its output is another intermediate code program that does the same job as the original, but in a way that saves time and/or space.
Code Generation:-
The last phase of translation is code generation. A number of optimizations to reduce the length of the machine language program are carried out during this phase. The output of the code generator is the machine language program for the specified computer. The code generator produces the object code by deciding on the memory locations for data, selecting code to access each datum, and selecting the registers in which each computation is to be done. Many computers have only a few high-speed registers in which computations can be performed quickly. A good code generator would attempt to utilize registers as efficiently as possible.
Error Handlers:-
It is invoked when a flaw in the source program is detected. One of the most important functions of a compiler is the detection and reporting of errors in the source program. The error message should allow the programmer to determine exactly where the errors have occurred. Errors may occur in any of the phases of a compiler.
Whenever a phase of the compiler discovers an error, it must report the error to the error handler, which issues an appropriate diagnostic message. Both the table-management and error-handling routines interact with all phases of the compiler.
The structure of the compiler is illustrated with one example below; for every phase of the compiler we can see its input and output.
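As an illustrative sketch (using the classic assignment position = initial + rate * 60, and assuming position, initial and rate are floating-point variables), the phases roughly produce the following:
• Lexical analysis: the token stream id(position), =, id(initial), +, id(rate), *, 60.
• Syntax analysis: a syntax tree with = at the root, id(position) as its left child, and + as its right child, whose children are id(initial) and *, whose children in turn are id(rate) and 60.
• Semantic analysis: a conversion of the integer 60 to a floating-point value is inserted.
• Intermediate code generation: three-address code such as
t1 = inttofloat(60)
t2 = rate * t1
t3 = initial + t2
position = t3
• Code optimization: the conversion is folded away, leaving t1 = rate * 60.0 and position = initial + t1.
• Code generation: machine instructions that load rate and initial into registers, multiply, add, and store the result into position.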
• Impacts on Compilers
Compilers can help promote the use of high-level
languages by minimizing the execution overhead of the programs written in these languages.
Compilers are also critical in making high-performance computer architectures effective on users' applications. In fact, the performance of a computer system is so dependent on compiler
technology that compilers are used as a tool in evaluating architectural concepts before a
computer is built.
➢ The science of building a Compiler
Compiler design deals with complicated real world problems.
• First, the problem is taken.
• A mathematical abstraction of the problem is formulated.
• The abstraction is solved using mathematical techniques.
Modeling in compiler design and implementation
• Right mathematical model and right algorithm.
• Fundamental models – finite state machine, regular expression, context free grammar.
The science of code optimization
Optimization: It is an attempt made by the compiler to produce code that is more efficient than the obvious one.
Compiler optimizations-Design objectives
• Must improve performance of many programs.
• Optimization must be correct.
• Compilation time must be kept reasonable.
• Engineering effort required must be manageable
➢ PARALLELISM
All modern microprocessors exploit instruction-level parallelism. This can be hidden from the programmer.
• The hardware scheduler dynamically checks for dependencies in the sequential instruction stream and
issues them in parallel when possible.
• Whether the hardware reorders the instruction or not, compilers can rearrange the instruction to make
instruction-level parallelism more effective.
➢ MEMORY HIERARCHIES
• A memory hierarchy consists of several levels of storage with different speeds and sizes.
• A processor usually has a small number of registers holding hundreds of bytes, several levels of caches containing kilobytes to megabytes, and finally secondary storage that contains gigabytes and beyond.
• Correspondingly, the speed of accesses between adjacent levels of the hierarchy can differ by two or
three orders of magnitude.
• The performance of a system is often limited not by the speed of the processor but by the performance
of the memory subsystem.
• While compilers traditionally focus on optimizing processor execution, more emphasis is now placed on making the memory hierarchy more effective.
4. PROGRAM TRANSLATIONS
Normally we think of compiling as a translation from a high-level language to machine language, but the same technology can be applied to translate between different kinds of languages.
The following are some of the important applications of program translation techniques:
➢ BINARY TRANSLATION
• Compiler technology can be used to translate the binary code for one machine to that of another,
allowing a machine to run programs originally compiled for another instruction set.
• This technology has been used by various computer companies to increase the availability of software for their machines.
➢ HARDWARE SYNTHESIS
• Not only is most software written in high-level languages; even hardware designs are mostly described in high-level hardware description languages like Verilog and VHDL (VHSIC Hardware Description Language).
• Hardware designs are typically described at the register transfer level (RTL).
• Hardware synthesis tools translate RTL descriptions automatically into gates, which are then mapped to transistors and eventually to a physical layout. Unlike compilers for programming languages, this process can take many hours to optimize the circuits.
➢ DATABASE QUERY INTERPRETERS
• Query languages like SQL are used to search databases.
• These database queries consist of relational and Boolean operators.
• They can be compiled into commands to search a database for records satisfying the needs.
5. SOFTWARE PRODUCTIVITY TOOLS
Program analysis techniques, originally developed to optimize code in compilers, have improved software productivity in several ways, including the following (a small illustration follows the list):
TYPE CHECKING
BOUNDS CHECKING
MEMORY MANAGEMENT TOOLS.
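As a hedged illustration of the kind of defect these analyses target, the following C fragment contains an off-by-one error that a bounds-checking tool (or a compile-time array-bounds analysis) would flag; the array name and sizes are arbitrary.

#include <stdio.h>

int main(void) {
    int a[10];
    /* Bug: when i == 10 the loop writes a[10], one element past the end of
       the array. A bounds checker reports this out-of-range access; a
       memory-management tool would similarly flag reads of uninitialized
       or freed storage. */
    for (int i = 0; i <= 10; i++) {
        a[i] = i;
    }
    printf("%d\n", a[0]);
    return 0;
}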
Lexical analysis:
To identify the tokens we need some method of describing the possible tokens that can appear in the input stream. For this purpose we introduce regular expressions, a notation that can be used to describe essentially all the tokens of a programming language.
Secondly, having decided what the tokens are, we need some mechanism to recognize them in the input stream. This is done by token recognizers, which are designed using transition diagrams and finite automata.
Upon receiving a 'get next token' command from the parser, the lexical analyzer reads input characters until it can identify the next token. The LA returns to the parser a representation of the token it has found. The representation will be an integer code if the token is a simple construct such as a parenthesis, comma or colon.
The LA may also perform certain secondary tasks at the user interface. One such task is stripping out from the source program comments and white space in the form of blank, tab and newline characters. Another is correlating error messages from the compiler with the source program.
The lexical analyzer (the "lexer") parses individual symbols from the source code file into tokens. From there, the "parser" proper turns those whole tokens into sentences of your grammar. A parser does not give the nodes any meaning beyond structural cohesion; the next thing to do is extract meaning from this structure (sometimes called contextual analysis).
TOKEN
The LA reads the source program one character at a time, carving the source program into a sequence of atomic units called 'tokens'. Each token has two parts:
1. Type of the token.
2. Value of the token.
Type: variable, operator, keyword, constant.
Value: name of the variable, value of the constant, or a pointer to the symbol table entry.
If the symbols are given in the standard format, the LA accepts them and produces tokens as output. Each token is a substring of the program that is to be treated as a single unit. Tokens are of two types:
1. Specific strings such as if or a semicolon.
2. Classes of strings such as identifiers, labels and constants.
Token:
Token is a sequence of characters that can be treated as a single logical entity. Typical tokens are,
1) Identifiers 2) Keywords 3) Operators 4) Special symbols 5) Constants
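Inside a compiler a token is often represented as a (type, attribute-value) pair. The following C sketch is only illustrative; the enum members and field names are assumptions, not a fixed interface.

/* Illustrative token representation: a token type plus an attribute value. */
enum TokenType { TOK_ID, TOK_KEYWORD, TOK_OPERATOR, TOK_SPECIAL, TOK_CONSTANT };

struct Token {
    enum TokenType type;        /* which class of token this is                  */
    union {
        int  symtab_index;      /* for identifiers: index into the symbol table  */
        long int_value;         /* for numeric constants: the converted value    */
        char op;                /* for single-character operators                */
    } value;
};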
Pattern:
A set of strings in the input for which the same token is produced as output. This set of
strings is described by a rule called a pattern associated with the token. A pattern is a rule
describing the set of lexemes that can represent a particular token in source program.
Lexeme:
A lexeme is a sequence of characters in the source program that is matched by the pattern
for a token.
Example (description of the token if):
Token: if    Lexeme: if    Pattern: if
LEXICAL ERRORS:
Lexical errors are the errors thrown by your lexer when it is unable to continue, which means that there is no way to recognise a lexeme as a valid token for your lexer. Syntax errors, on the other hand, are thrown by your parser when a given set of already recognised valid tokens does not match any of the right-hand sides of your grammar rules. A simple panic-mode error handling scheme requires that we return to a high-level parsing function when a parsing or lexical error is detected.
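A minimal sketch of panic-mode recovery in a lexer, assuming a toy setup that reads from standard input; report_error and panic_recover are hypothetical helper names. On a character that cannot start a lexeme, the lexer reports it and discards input until a likely synchronizing character (here a semicolon or newline), then resumes scanning.

#include <stdio.h>
#include <ctype.h>

static void report_error(int c) {
    fprintf(stderr, "lexical error: cannot start a lexeme with '%c'\n", c);
}

/* Panic-mode recovery: discard input until a likely synchronizing character. */
static void panic_recover(FILE *in) {
    int c;
    while ((c = fgetc(in)) != EOF && c != ';' && c != '\n')
        ;  /* skip characters until scanning can plausibly resume */
}

int main(void) {
    int c;
    while ((c = fgetc(stdin)) != EOF) {
        if (isalnum(c) || isspace(c) || c == ';') {
            putchar(c);            /* characters this toy lexer understands   */
        } else {
            report_error(c);       /* unknown character: report it ...        */
            panic_recover(stdin);  /* ... and skip to the next ';' or newline */
        }
    }
    return 0;
}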
REGULAR EXPRESSIONS
x          the character x
.          any character, usually except a newline
[x y z]    any of the characters x, y, z, …
R?         an R or nothing (i.e., R is optional)
R*         zero or more occurrences of R
R+         one or more occurrences of R
R1 R2      an R1 followed by an R2
R1 | R2    either an R1 or an R2
o ε is a regular expression denoting { ε }, that is, the language containing only the empty string.
o For each 'a' in ∑, a is a regular expression denoting { a }, the language with only one string consisting of the single symbol 'a'.
o If R and S are regular expressions, then (R) | (S), (R)(S) and (R)* are regular expressions denoting the languages L(R) ∪ L(S), L(R)L(S) and (L(R))* respectively.
REGULAR DEFINITIONS
• For notational convenience, we may wish to give names to regular expressions and to
define regular expressions using these names as if they were symbols.
• Identifiers are the set of strings of letters and digits beginning with a letter. The following regular definition provides a precise specification for this class of strings:
letter --> [A-Za-z]
digit --> [0-9]
id --> letter (letter | digit)*
Example
ab*|cd? is equivalent to (a(b*)) | (c(d?))
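As a quick check of the equivalence above, the POSIX extended-regular-expression functions in <regex.h> can try the pattern against a few strings; the test strings below are arbitrary examples.

#include <regex.h>
#include <stdio.h>

int main(void) {
    regex_t re;
    /* Anchored form of ab*|cd?, with parentheses making the precedence explicit. */
    if (regcomp(&re, "^((a(b*))|(c(d?)))$", REG_EXTENDED | REG_NOSUB) != 0)
        return 1;

    const char *tests[] = { "a", "abbb", "c", "cd", "ad", "cdd" };
    for (int i = 0; i < 6; i++) {
        int match = (regexec(&re, tests[i], 0, NULL, 0) == 0);
        printf("%-4s : %s\n", tests[i], match ? "matches" : "does not match");
    }
    regfree(&re);
    return 0;
}

Here a, abbb, c and cd match, while ad and cdd do not, which is exactly what the grouping (a(b*)) | (c(d?)) predicts.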
Recognition of tokens:
We have learned how to express patterns using regular expressions. Now we must study how to take the patterns for all the needed tokens and build a piece of code that examines the input string and finds a prefix that is a lexeme matching one of the patterns.
stmt --> if expr then stmt
       | if expr then stmt else stmt
       | ε
expr --> term relop term
       | term
term --> id
       | number
For relop, we use the comparison operators of languages like Pascal or SQL, where = is "equals" and <> is "not equals", because it presents an interesting structure of lexemes. The terminals of the grammar, which are if, then, else, relop, id and number, are the names of tokens as far as the lexical analyzer is concerned. The patterns for these tokens are described using regular definitions:
digit --> [0-9]
digits --> digit+
number --> digits (. digits)? (E [+-]? digits)?
letter --> [A-Za-z]
id --> letter (letter | digit)*
if --> if
then --> then
else --> else
relop --> < | > | <= | >= | = | <>
In addition, we assign the lexical analyzer the job of stripping out white space, by recognizing the "token" ws defined by:
ws --> (blank | tab | newline)+
Here, blank, tab and newline are abstract symbols that we use to express the ASCII characters of the same names. Token ws is different from the other tokens in that, when we recognize it, we do not return it to the parser, but rather restart the lexical analysis from the character that follows the white space. It is the following token that gets returned to the parser.
Lexeme   Token   Attribute value
<=       relop   LE
=        relop   EQ
<>       relop   NE
<        relop   LT
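A hedged C sketch of the transition diagram for relop: the function follows the edges for <, <=, <>, =, > and >=, using a one-character lookahead with retract, and returns the token together with the attribute values from the table above. The next_char/retract helpers and the string-based input are simplifications for illustration.

#include <stdio.h>

enum { RELOP, ERROR };                 /* token names (illustrative)          */
enum { LT, LE, EQ, NE, GT, GE };       /* attribute values for relop          */

static const char *input;              /* forward pointer into the input      */

static int  next_char(void) { int c = *input; if (c != '\0') input++; return c; }
static void retract(int c)  { if (c != '\0') input--; }  /* undo one lookahead */

/* Transition-diagram recognizer for relop; the attribute is returned in *attr. */
static int relop(int *attr) {
    int c = next_char();
    switch (c) {
    case '<':
        c = next_char();
        if (c == '=') { *attr = LE; return RELOP; }
        if (c == '>') { *attr = NE; return RELOP; }
        retract(c);   *attr = LT; return RELOP;
    case '=':         *attr = EQ; return RELOP;
    case '>':
        c = next_char();
        if (c == '=') { *attr = GE; return RELOP; }
        retract(c);   *attr = GT; return RELOP;
    default:
        return ERROR;                  /* not a relational operator           */
    }
}

int main(void) {
    static const char *names[] = { "LT", "LE", "EQ", "NE", "GT", "GE" };
    int attr;
    input = "<=";                      /* try other lexemes such as "<>" here */
    if (relop(&attr) == RELOP)
        printf("relop, attribute %s\n", names[attr]);
    return 0;
}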
TRANSITION DIAGRAM:
Transition Diagram has a collection of nodes or circles, called states. Each state represents a
condition that could occur during the process of scanning the input looking for a lexeme that
matches one of several patterns .
Edges are directed from one state of the transition diagram to another.
Each edge is labelled by a symbol or set of symbols. If we are in some state s and the next input symbol is a, we look for an edge out of state s labelled by a. If we find such an edge, we advance the forward pointer and enter the state of the transition diagram to which the edge leads.
installID() checks whether the lexeme is already in the table. If it is not present, the lexeme is installed as an id token. In either case a pointer to the entry is returned. gettoken() examines the lexeme and returns the token name, either id or the name corresponding to a reserved keyword.
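A hedged sketch of this scheme: the reserved words are installed in the symbol table before any input is read, installID() adds a new identifier only if its lexeme is not already present, and getToken() reports either the keyword's own token code or id. The names, token codes and fixed-size table are assumptions made only for illustration.

#include <stdio.h>
#include <string.h>

enum { TOK_ID, TOK_IF, TOK_THEN, TOK_ELSE };   /* illustrative token codes */

struct Entry { const char *lexeme; int token; };

static struct Entry table[256];
static int n_entries = 0;

/* installID: return the index of the lexeme, inserting it as an id if absent. */
static int installID(const char *lexeme) {
    for (int i = 0; i < n_entries; i++)
        if (strcmp(table[i].lexeme, lexeme) == 0)
            return i;                          /* already present              */
    table[n_entries].lexeme = strdup(lexeme);  /* copy (POSIX) and install as an ordinary id */
    table[n_entries].token  = TOK_ID;
    return n_entries++;
}

/* getToken: the token name stored for the entry (a keyword code or TOK_ID). */
static int getToken(int index) { return table[index].token; }

int main(void) {
    /* Reserve the keywords by installing them before any identifiers appear. */
    table[n_entries++] = (struct Entry){ "if",   TOK_IF   };
    table[n_entries++] = (struct Entry){ "then", TOK_THEN };
    table[n_entries++] = (struct Entry){ "else", TOK_ELSE };

    printf("%d\n", getToken(installID("if")));     /* prints 1: the keyword if */
    printf("%d\n", getToken(installID("count")));  /* prints 0: an ordinary id */
    return 0;
}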
The text also gives another method to distinguish between identifiers and
keywords.
if --> if
then --> then
else --> else
relop --> < | <= | = | <> | > | >=
id --> letter (letter | digit)*
num --> digit+
AUTOMATA
DESCRIPTION OF AUTOMATA
Deterministic Automata
Non-Deterministic Automata.
DETERMINISTIC AUTOMATA
A deterministic finite automaton has at most one transition from each state on any input. A DFA is a special case of an NFA in which:
1. it has no transitions on input ε, and
2. for each state s and input symbol a, there is at most one edge labelled a leaving s.
The regular expression is converted into a minimized DFA by the following procedure: the regular expression is first converted into an NFA, the NFA is converted into a DFA by the subset construction, and the resulting DFA is then minimized.
A finite automaton is called a DFA if there is only one path for a specific input from the current state to the next state.
(Transition diagram: states S0, S1 and S2, with a single edge labelled a from S0 to S2 and a single edge from S0 to S1 on the other input.)
From state S0, for input a there is only one path, going to S2. Similarly, from S0 there is only one path for the other input, going to S1.
NONDETERMINISTIC AUTOMATA
This graph looks like a transition diagram, but the same character can label two or more transitions out of one state, and edges can be labelled by the special symbol ε as well as by input symbols.
The transition graph for an NFA that recognizes the language (a | b)*abb has a start state with self-loops on a and b, followed by a chain of edges labelled a, b, b that leads to the single accepting state.
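The equivalent DFA for (a | b)*abb has four states, with state 3 accepting; a hedged C sketch that simulates it on a few strings is given below (the table encoding and test strings are only illustrative).

#include <stdio.h>

/* DFA for (a|b)*abb: start state 0, accepting state 3.                         */
/* delta[s][0] is the move from state s on 'a'; delta[s][1] is the move on 'b'. */
static const int delta[4][2] = {
    { 1, 0 },   /* state 0 */
    { 1, 2 },   /* state 1 */
    { 1, 3 },   /* state 2 */
    { 1, 0 },   /* state 3 */
};

static int accepts(const char *s) {
    int state = 0;
    for (; *s; s++) {
        if (*s != 'a' && *s != 'b')
            return 0;                     /* symbol outside the alphabet     */
        state = delta[state][*s == 'b'];  /* follow the single outgoing edge */
    }
    return state == 3;
}

int main(void) {
    const char *tests[] = { "abb", "aabb", "babb", "ab", "abba" };
    for (int i = 0; i < 5; i++)
        printf("%-5s : %s\n", tests[i], accepts(tests[i]) ? "accepted" : "rejected");
    return 0;
}

Running this accepts abb, aabb and babb and rejects ab and abba, as expected for strings ending in abb.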
Lex specifications:
declarations
%%
translation rules
%%
auxiliary procedures
1. The declarations section includes declarations of variables, manifest constants, and regular definitions.
2. The translation rules each have the form pattern { action }, where the pattern is a regular expression and the action is the code to be executed when a lexeme matches that pattern.
3. The third section holds whatever auxiliary procedures are needed by the actions. Alternatively, these procedures can be compiled separately and loaded with the lexical analyzer.
INPUT BUFFERING
The LA scans the characters of the source program one at a time to discover tokens. Because a large amount of time can be consumed in scanning characters, specialized buffering techniques have been developed to reduce the amount of overhead required to process an input character.
Buffering techniques:
1. Buffer pairs
2. Sentinels
• The lexical analyzer scans the characters of the source program one at a time to discover tokens. Often, however, many characters beyond the next token may have to be examined before the next token itself can be determined.
• For this and other reasons, it is desirable for the lexical analyzer to read its input from an input buffer. The figure shows a buffer divided into two halves of, say, 100 characters each.
• One pointer marks the beginning of the token being discovered. A lookahead pointer scans ahead of the beginning point until the token is discovered.
• We view the position of each pointer as being between the character last read and the character next to be read. In practice, each buffering scheme adopts one convention: either a pointer is at the symbol last read or at the symbol it is ready to read.
(Figure: input buffer with token-beginning and lookahead pointers.)
The distance which the lookahead pointer may have to travel past the actual token may be large. If the lookahead pointer travels beyond the buffer half in which it began, the other half must be loaded with the next characters from the source file. Since the buffer shown in the figure above is of limited size, there is an implied constraint on how much lookahead can be used before the next token is discovered. In the above example, if the lookahead travelled to the left half and all the way through the left half to the middle, we could not reload the right half, because we would lose characters that had not yet been grouped into tokens. While we can make the buffer larger if we choose, or use another buffering scheme, we cannot ignore the fact that such lookahead is limited.
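A hedged C sketch of the buffer-pair scheme with sentinels: each half of the buffer ends with a sentinel byte, so the common case needs only one test per character, and a half is refilled only when the forward pointer crosses its sentinel. N, load_half and next_char are illustrative names, and the sketch assumes the sentinel byte itself does not occur in the source text.

#include <stdio.h>

#define N 4096                      /* size of each buffer half (illustrative) */

static char buf[2 * N + 2];         /* two halves, each followed by a sentinel */
static char *lexemeBegin;           /* start of the current lexeme             */
static char *forward;               /* lookahead pointer                       */
static FILE *src;

/* Refill one half from the source and terminate it with the sentinel.         */
static void load_half(char *half) {
    size_t n = fread(half, 1, N, src);
    half[n] = (char)EOF;            /* sentinel marks the end of valid input   */
}

/* Return the next character, reloading a half when a sentinel is crossed.     */
static int next_char(void) {
    for (;;) {
        char c = *forward++;
        if (c != (char)EOF)
            return (unsigned char)c;             /* the common, cheap case      */
        if (forward == buf + N + 1) {            /* sentinel ending first half  */
            load_half(buf + N + 1);              /* reload the second half      */
            forward = buf + N + 1;
        } else if (forward == buf + 2 * N + 2) { /* sentinel ending second half */
            load_half(buf);                      /* reload the first half       */
            forward = buf;
        } else {
            return EOF;                          /* a real end of input         */
        }
    }
}

int main(void) {
    src = stdin;
    load_half(buf);                              /* prime the first half        */
    forward = lexemeBegin = buf;

    long count = 0;
    while (next_char() != EOF)
        count++;                                 /* e.g. count input characters */
    printf("%ld characters read\n", count);
    return 0;
}

A real scanner would also advance lexemeBegin as tokens are recognized; the constraint described above still applies, since a half cannot be reloaded while the current lexeme spans it.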