UNIT – I
LEXICAL ANALYSIS
Translator:
It is a program that translates a program written in one language into another language.
COMPILER:
A compiler is a translator program that translates a program written in a high-level language (HLL), the source program, into an equivalent program in machine-level language (MLL), the target program. An important part of the compiler's job is reporting errors to the programmer.
ASSEMBLER:
Programmers found it difficult to write or read programs in machine language. They began to use
a mnemonic (symbol) for each machine instruction, which they would subsequently translate into
machine language. Such a mnemonic machine language is now called an assembly language.
Programs known as assemblers were written to automate the translation of assembly language into
machine language. The input to an assembler is called the source program; the output is a machine
language translation (object program).
Assembly Language → Assembler → Machine Language
INTERPRETER:
It is a translator that translates a high-level language into a low-level language, processing the program one statement at a time.

High Level Language → Interpreter → Low Level Language
Compiler                                           Interpreter
Program need not be compiled every time.           Every time, the higher level program is
                                                   converted into a lower level program.
Errors are displayed after the entire              Errors are displayed for every
program is checked.                                instruction interpreted (if any).
Examples of Compilers:
1. Ada compilers
2. ALGOL compilers
3. BASIC compilers
4. C# compilers
5. C compilers
6. C++ compilers
7. COBOL compilers
8. Common Lisp compilers
9. Fortran compilers
10. Java compilers
11. Pascal compilers
12. PL/I compilers
13. Python compilers
(Figure: syntax tree for an assignment such as a = b + c * 2, produced by the syntax analysis phase.)
Semantic Analysis:-
It is the third phase of the compiler.
It gets the parse tree from the syntax analysis phase as input and checks whether the tree is
semantically meaningful, i.e., whether syntactically correct constructs also make sense (for
example, by type checking).
It performs type conversions where required, such as converting an integer constant to a real
value in a mixed-type expression.
Code Optimization:-
It is the fifth phase of the compiler.
It gets the intermediate code as input and produces optimized intermediate code
as output.
This phase reduces the redundant code and attempts to improve the intermediate code
so that faster-running machine code will result.
During the code optimization, the result of the program is not affected.
To improve the code generation, the optimization involves the following (a small before/after sketch follows this list):
- detection and removal of dead code (unreachable code).
- calculation of constants in expressions and terms (constant folding).
- collapsing of repeated (common) subexpressions into a temporary variable.
- loop unrolling.
- moving loop-invariant code outside the loop.
- removal of unwanted temporary variables.
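As a small illustration (the function and variable names here are made up for this sketch, not taken from the text), constant folding and dead-code removal might transform a C fragment as follows:

/* Before optimization */
int compute(int a) {
    int t = 2 * 3;        /* constant expression: can be folded to 6 */
    int unused = a + 1;   /* dead code: the value is never used */
    return a * t;
}

/* After optimization (equivalent observable behavior) */
int compute_optimized(int a) {
    return a * 6;
}

The observable result of the function is unchanged, which is exactly the constraint stated above: optimization must not affect the result of the program.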
Code Generation:-
It is the final phase of the compiler.
It gets input from code optimization phase and produces the target code or object code
as result.
Intermediate instructions are translated into a sequence of machine instructions
that perform the same task.
The code generation involves the following (an illustrative target-code sequence is shown after this list):
- allocation of registers and memory.
- generation of correct references.
- generation of correct data types.
- generation of missing code.
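For instance, for the statement position := initial + rate * 60 (used as the running example below), a typical target-code sequence might look as follows. The register names and floating-point instructions here are illustrative, not a specific machine's instruction set:

MOVF  id3, R2        ; load rate into register R2
MULF  #60.0, R2      ; multiply by the converted constant 60.0
MOVF  id2, R1        ; load initial into R1
ADDF  R2, R1         ; add rate * 60
MOVF  R1, id1        ; store the result in position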
Symbol Table Management:-
It is a data structure containing a record for each identifier, with fields for the attributes
of the identifier.
It allows the compiler to find the record for each identifier quickly and to store or retrieve
data from that record.
Whenever an identifier is detected in any of the phases, it is stored in the symbol table.
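A minimal sketch of such a table in C (a linear list; the field names and sizes are assumptions for illustration, and production compilers typically use hash tables instead):

#include <string.h>

#define MAX_SYMBOLS 256

struct symbol {
    char name[64];      /* the identifier's lexeme */
    char type[16];      /* attribute: e.g., "int" or "real" */
};

static struct symbol table[MAX_SYMBOLS];
static int sym_count = 0;

/* Return the index of an identifier's record, or -1 if absent. */
int lookup(const char *name) {
    for (int i = 0; i < sym_count; i++)
        if (strcmp(table[i].name, name) == 0)
            return i;
    return -1;
}

/* Insert an identifier if not already present; return its index. */
int insert(const char *name, const char *type) {
    int i = lookup(name);
    if (i >= 0)
        return i;
    if (sym_count >= MAX_SYMBOLS)
        return -1;   /* table full; no error handling in this sketch */
    strncpy(table[sym_count].name, name, sizeof table[sym_count].name - 1);
    strncpy(table[sym_count].type, type, sizeof table[sym_count].type - 1);
    return sym_count++;
}

For example, when the lexical analyzer sees the identifier position, it might call insert("position", "real"), and later phases retrieve or update that record via lookup.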
Error Handlers:-
Each phase can encounter errors. After detecting an error, a phase must handle the
error so that compilation can proceed.
In lexical analysis, errors occur in separation of tokens.
In syntax analysis, errors occur during construction of syntax tree.
In semantic analysis, errors occur when the compiler detects constructs with
right syntactic structure but no meaning and during type conversion.
In code optimization, errors occur when the result is affected by the optimization.
In code generation, it shows error when code is missing etc.
Example: To illustrate the translation of source code through each phase, consider the statement
position := initial + rate * 60. The figure shows the representation of this statement after each
phase.
Compiler passes
The phases may be performed in a single traversal of the source program (single pass) or in multiple traversals (multi pass).
Single pass: usually requires everything to be defined before being used in
source program.
Multi pass: compiler may have to keep entire program representation in memory.
Several phases can be grouped into one single pass and the activities of these phases are
interleaved during the pass. For example, lexical analysis, syntax analysis, semantic analysis and
intermediate code generation might be grouped into one pass.
LEXICAL ANALYSIS
Lexical analysis is the process of converting a sequence of characters into a sequence of
tokens. A program or function which performs lexical analysis is called a lexical analyzer or
scanner. A lexer often exists as a single function which is called by a parser or another function.
1. **Tokenization**:
- The lexical analyzer reads the input characters of the source program and groups them into **lexemes**.
- It produces a sequence of **tokens** for each lexeme.
- Tokens represent meaningful units such as identifiers, keywords, operators, constants, and special symbols.
2. **Token Classification**:
- Each token has a type (e.g., keyword, identifier) and an optional attribute value.
- Lexemes are classified into different token categories based on their role in the program.
3. **Token Validation**:
- The lexical analyzer checks whether each token is valid according to the rules of the programming language.
- It ensures that tokens adhere to the language's syntax.
4. **Output Generation**:
- The final stage involves generating a list of tokens.
- These tokens serve as input for subsequent phases of the compiler.
In summary, the lexical analyzer acts as an interface between the source code and the other compiler phases, breaking the source program into a stream of tokens for the parser.
Upon receiving a “get next token” command from the parser, the lexical analyzer reads
input characters until it can identify the next token. For example, the statement sum = 3 + 2 ;
is read as the following lexemes and tokens:

sum   Identifier
=     Assignment operator
3     Number
+     Addition operator
2     Number
;     End of statement
LEXEME:
A lexeme is a sequence of characters in the source program that matches the pattern for a token.
Pattern:
A pattern is a description of the form that the lexemes of a token may take.
In the case of a keyword as a token, the pattern is just the sequence of characters that form
the keyword. For identifiers and some other tokens, the pattern is a more complex structure that is
matched by many strings.
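For example (the sample lexemes are illustrative):

Token    Sample lexemes    Pattern
if       if                the characters i, f
relop    <, <=, >=         < or <= or = or <> or > or >=
id       sum, position     letter followed by letters and digits
num      3, 2, 60          any numeric constant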
INPUT BUFFERING
We often have to look one or more characters beyond the next lexeme before we can be
sure we have the right lexeme. As characters are read from left to right, each character is stored in
the buffer to form a meaningful token as shown below:
Forward pointer
↓
A : = : B : + : C
We introduce a two-buffer scheme that handles large look ahead safely. We then consider an
improvement involving "sentinels" that saves time checking for the ends of buffers.
Buffer Pairs
A buffer is divided into two N-character halves, as shown below
E : = : M : * : C : * : * : 2 : eof
↑                           ↑
lexeme_beginning            forward
Each buffer is of the same size N, and N is usually the number of characters on one
disk block. E.g., 1024 or 4096 bytes.
Using one system read command we can read N characters into a buffer.
If fewer than N characters remain in the input file, then a special character,
represented by eof, marks the end of the source file.
Two pointers to the input are maintained:
1. Pointer lexeme_beginning, marks the beginning of the current
lexeme, whose extent we are attempting to determine.
2. Pointer forward scans ahead until a pattern match is found.
Once the next lexeme is determined, forward is set to the character at its
right end.
The string of characters between the two pointers is the current lexeme.
After the lexeme is recorded as an attribute value of a token returned to the parser,
lexeme_beginning is set to the character immediately after the lexeme just found.
Sentinels
For each character read, we make two tests: one for the end of the buffer, and one to
determine what character is read. We can combine the buffer-end test with the test for the
current character if we extend each buffer to hold a sentinel character at the end.
The sentinel is a special character that cannot be part of the source program, and a natural
choice is the character eof. The sentinel arrangement is as shown below:
E : = : M : * : eof : C : * : * : 2 : eof : : : eof
↑                     ↑
lexeme_beginning      forward
Note that eof retains its use as a marker for the end of the entire input. Any eof
that appears other than at the end of a buffer means that the input is at an end.
Code to advance forward pointer:
forward := forward + 1;
if forward↑ = eof then begin
    if forward at end of first half then begin
        reload second half;
        forward := forward + 1
    end
    else if forward at end of second half then begin
        reload first half;
        move forward to beginning of first half
    end
    else /* eof within a buffer signifying end of input */
        terminate lexical analysis
end
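The same scheme can be sketched in C as follows. The buffer size, file handling, and function names are assumptions for illustration, and the sentinel is '\0' here on the assumption that it cannot occur in the source text:

#include <stdio.h>

#define N 4096                    /* characters per half, e.g., one disk block */
#define SENTINEL '\0'             /* assumed absent from source programs */

static char buf[2 * N + 2];       /* two halves, each followed by a sentinel slot */
static char *forward = buf;
static FILE *src;                 /* opened elsewhere on the source file */

/* Read up to N characters into one half and plant the sentinel after them.
   Call reload(buf) once before scanning begins. */
static void reload(char *half) {
    size_t n = fread(half, 1, N, src);
    half[n] = SENTINEL;
}

/* Advance forward and return the next character, or EOF at end of input.
   Only one test (against the sentinel) is needed per ordinary character. */
static int advance(void) {
    char c = *forward++;
    if (c == SENTINEL) {
        if (forward == buf + N + 1) {            /* end of first half */
            reload(buf + N + 1);                 /* reload second half */
            c = *forward++;
        } else if (forward == buf + 2 * N + 2) { /* end of second half */
            reload(buf);                         /* reload first half */
            forward = buf;
            c = *forward++;
        } else {
            return EOF;       /* eof within a buffer: end of input */
        }
        if (c == SENTINEL)
            return EOF;       /* the freshly loaded half was empty */
    }
    return (unsigned char)c;
}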
SPECIFICATION OF TOKENS
There are 3 specifications of tokens. We specify tokens with the help of the following:
1) Strings
2) Languages
3) Regular expressions
Operations on strings
The following string-related terms are commonly used:
1. A prefix of string s is any string obtained by removing zero or more symbols from the end
of string s. For example, ban is a prefix of banana.
2. A suffix of string s is any string obtained by removing zero or more symbols from the
beginning of s.
For example, nana is a suffix of banana.
3. A substring of s is obtained by deleting any prefix and any suffix from s.
For example, nan is a substring of banana.
4. The proper prefixes, suffixes, and substrings of a string s are those prefixes, suffixes, and
substrings, respectively, of s that are neither ε nor equal to s itself.
5. A subsequence of s is any string formed by deleting zero or more not necessarily consecutive
positions of s.
For example, baan is a subsequence of banana.
Operations on languages:
The following are the operations that can be applied to languages:
1.Union
2.Concatenation
3.Kleene closure
4.Positive closure
The following example shows these operations on languages:
Let L={0,1} and S={a,b,c}
1. Union : L U S = {0, 1, a, b, c}
2. Concatenation : L.S = {0a, 1a, 0b, 1b, 0c, 1c}
3. Kleene closure : L* = { ε, 0, 1, 00, ... }
4. Positive closure : L+ = { 0, 1, 00, ... }
Regular Expressions
Each regular expression r denotes a language L(r).
Here are the rules that define the regular expressions over some alphabet Σ and the languages
that those expressions denote:
1. ε is a regular expression, and L(ε) is { ε }, that is, the language whose sole member is the
empty string.
2. If ‘a’ is a symbol in Σ, then ‘a’ is a regular expression, and L(a) = {a}, that is, the language
with one string, of length one, with ‘a’ in its one position.
3. Suppose r and s are regular expressions denoting the languages L(r) and L(s). Then,
a) (r) | (s) is a regular expression denoting L(r) U L(s).
b) (r)(s) is a regular expression denoting L(r)L(s).
c) (r)* is a regular expression denoting (L(r))*.
d) (r) is a regular expression denoting L(r).
Regular set
A language that can be defined by a regular expression is called a regular set.
If two regular expressions r and s denote the same regular set, we say they are equivalent and
write r = s.
There are a number of algebraic laws for regular expressions that can be used to
manipulate them into equivalent forms.
For instance, r|s = s|r is commutative; r|(s|t)=(r|s)|t is associative.
Regular Definitions
Giving names to regular expressions is referred to as a Regular definition. If Σ is an
alphabet of basic symbols, then a regular definition is a sequence of definitions of the form
d1 → r1
d2 → r2
………
dn → rn
where each di is a distinct name and each ri is a regular expression over the symbols of Σ
together with the previously defined names d1, ..., di-1.
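For example, identifiers can be given by the following regular definition (this matches the identifier description used later in this unit):

letter → A | B | ... | Z | a | b | ... | z
digit  → 0 | 1 | ... | 9
id     → letter ( letter | digit )*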
Shorthands
Certain constructs occur so frequently in regular expressions that it is convenient to
introduce notational shorthands for them.
1. One or more instances (+):
- The unary postfix operator + means “one or more instances of”.
- Thus the regular expression a+ denotes the set of all strings of one or more a’s.
- The operator + has the same precedence and associativity as the operator *.
2. Zero or one instance (?):
- The unary postfix operator ? means “zero or one instance of”.
- The notation r? is a shorthand for r | ε.
- If ‘r’ is a regular expression, then ( r )? is a regular expression that denotes the
language L( r ) U { ε }.
3. Character Classes:
- The notation [abc], where a, b and c are alphabet symbols, denotes the regular
expression a | b | c.
- A character class such as [a-z] denotes the regular expression a | b | c | d | ... | z.
- We can describe identifiers as strings generated by the regular
expression [A-Za-z][A-Za-z0-9]*.
Non-regular Set
A language which cannot be described by any regular expression is a non-regular set.
Example: The set of all strings of balanced parentheses and repeating strings cannot be described
by a regular expression. This set can be specified by a context-free grammar.
RECOGNITION OF TOKENS
Consider the following grammar fragment:
stmt → if expr then stmt
     | if expr then stmt else stmt
     | ε
expr → term relop term
     | term
term → id
     | num
where the terminals if, then, else, relop, id and num generate sets of strings given by
the following regular definitions:
if → if
then → then
else → else
relop → < | <= | = | <> | > | >=
id → letter ( letter | digit )*
num → digit+ ( . digit+ )? ( E ( + | - )? digit+ )?
Finite automata, specifically deterministic finite automata (DFA), are used to recognize tokens
based on their patterns or regular expressions. The automaton processes input symbols and
transitions between states; each state in the DFA corresponds to a specific part of the token
being recognized. The lexer scans the input source code character by character, maintaining a
state machine (DFA) that guides the recognition process. When a valid token is encountered, the
lexer reaches an accepting state and returns the corresponding token to the parser.
For this language fragment the lexical analyzer will recognize the keywords if, then, else,
as well as the lexemes denoted by relop, id, and num. To simplify matters, we assume keywords
are reserved; that is, they cannot be used as identifiers.
Transition diagrams
It is a diagrammatic representation to depict the action that will take place when a lexical
analyzer is called by the parser to get the next token. It is used to keep track of information about
the characters that are seen as the forward pointer scans the input.
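A minimal C sketch of a transition-diagram-style recognizer for the id pattern letter(letter|digit)* (the state numbering, token codes, and function name are assumptions for illustration):

#include <ctype.h>

enum token { TOK_ID, TOK_NONE };   /* illustrative token codes */

/* Try to recognize an identifier starting at s.
   On success, store the lexeme length in *len and return TOK_ID. */
enum token recognize_id(const char *s, int *len) {
    int state = 0;      /* state 0: start; state 1: inside an identifier */
    int i = 0;
    for (;;) {
        unsigned char c = (unsigned char)s[i];
        switch (state) {
        case 0:                      /* expect a letter to begin */
            if (isalpha(c)) { state = 1; i++; }
            else return TOK_NONE;
            break;
        case 1:                      /* consume letters and digits */
            if (isalnum(c)) { i++; }
            else {                   /* accepting state: retract one character */
                *len = i;
                return TOK_ID;
            }
            break;
        }
    }
}

Each case of the switch plays the role of one state in the transition diagram; returning from state 1 on a non-alphanumeric character corresponds to the accepting state with a one-character retraction.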
LEX
Lex is a computer program that generates lexical analyzers. Lex is commonly used
with the yacc parser generator.
Creating a lexical analyzer
First, a specification of a lexical analyzer is prepared by creating a program lex.l in the Lex
language. Then, lex.l is run through the Lex compiler to produce a C program lex.yy.c.
Finally, lex.yy.c is run through the C compiler to produce an object program a.out, which is
the lexical analyzer that transforms an input stream into a sequence of tokens.
lex.l → Lex compiler → lex.yy.c
lex.yy.c → C compiler → a.out
input stream → a.out → sequence of tokens
Lex Specification
A Lex program consists of three parts:
{ definitions }
%%
{ rules }
%%
{ user subroutines }
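A minimal sketch of such a lex.l specification (the token codes and patterns are illustrative assumptions, not a required layout):

%{
/* definitions section: C declarations shared with the rules below */
#define ID  1
#define NUM 2
%}
letter  [A-Za-z]
digit   [0-9]
%%
{letter}({letter}|{digit})*   { return ID;  /* identifier */ }
{digit}+                      { return NUM; /* number */ }
[ \t\n]+                      ;  /* skip whitespace */
%%
/* user subroutines section */
int yywrap(void) { return 1; }   /* no further input files */

Running this file through Lex and then the C compiler, as described above, yields a scanner whose yylex() returns ID or NUM for each lexeme it recognizes.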
FINITE AUTOMATA
A finite automaton is a mathematical model that consists of a number of states and
edges (transitions). It is a transition diagram that recognizes a regular expression or grammar.
The following steps are involved in the construction of a DFA from a regular expression:
i) Convert the RE to an NFA using Thompson’s construction rules
ii) Convert the NFA to a DFA (subset construction)
iii) Construct the minimized DFA