Compiler Lecture 3
CSE 359 Compiler Design - Masud Ibn Afjal, CSE, HSTU


Lecture 3: Introduction to Lexical Analysis

Source code → Front-End → IR → Back-End → Object code

(Lexical Analysis is the first phase of the Front-End.)

(from last lecture) Lexical Analysis:


• reads characters and produces sequences of tokens.

Today’s lecture:
Towards automated Lexical Analysis.
The Big Picture
First step in any translation: determine whether the text to be translated
is well constructed in terms of the input language. Syntax is specified
with parts of speech; syntax checking matches parts of speech against a grammar.
In natural languages, mapping words to parts of speech is idiosyncratic.
In formal languages, mapping words to parts of speech is syntactic:
• it is based on denotation
• this makes it a matter of syntax
• reserved keywords are important

What does lexical analysis do?


Recognises the language’s parts of speech.
Lexical analyzer and the parser

Lexical analysis is sometimes divided into a cascade of two processes:
- Scanning: the simple processing that does not require tokenisation of the input, such as deleting comments and compacting consecutive whitespace characters into one.
- Lexical analysis proper: the more complex part, which produces tokens from the output of the scanner.
Tokens, Patterns, and Lexemes
• A token is a pair consisting of a token name and an optional attribute value.
• A pattern is a description of the form that the lexemes of a token may take. In the case of a keyword as a token, the pattern is just the sequence of characters that form the keyword.
• A lexeme is a sequence of characters in the source program that matches the pattern for a token and is identified by the lexical analyzer as an instance of that token.

Example: printf("Total = %d\n", score);


• both printf and score are lexemes matching the pattern for token id
• "Total = %d\n" is a lexeme matching the pattern for token literal

Token Example..
1. One token for each keyword.
2. Tokens for the operators, either individually or in classes such as the
token comparison mentioned in Fig. 3.2.
3. One token representing all identifiers.
4. One or more tokens representing constants, such as numbers and
literal strings.
5. Tokens for each punctuation symbol, such as left and right
parentheses, comma, and semicolon.
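As a hedged sketch (the constant names are invented for illustration), these token classes might be declared in C as:

/* One enum constant per token class, following the list above.
   Identifiers and constants would also carry an attribute (lexeme or value). */
typedef enum {
    /* 1. one token for each keyword */
    TOK_IF, TOK_THEN, TOK_ELSE, TOK_WHILE,
    /* 2. tokens for operators, individually or in classes */
    TOK_ASSIGN, TOK_COMPARISON,
    /* 3. one token representing all identifiers */
    TOK_ID,
    /* 4. tokens for constants: numbers and literal strings */
    TOK_NUMBER, TOK_LITERAL,
    /* 5. tokens for punctuation symbols */
    TOK_LPAREN, TOK_RPAREN, TOK_COMMA, TOK_SEMICOLON
} TokenClass;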

Some Definitions
• A vocabulary (alphabet) is a finite set of symbols.
• A string is any finite sequence of symbols from a vocabulary.
• A language is any set of strings over a fixed vocabulary.
• A grammar is a finite way of describing a language.
• A context-free grammar, G, is a 4-tuple, G=(S,N,T,P), where:
S: starting symbol
N: set of non-terminal symbols
T: set of terminal symbols
P: set of production rules
• The language of G is the set of all strings of terminal symbols that can be derived from S.
• Example:
S=CatWord; N={CatWord}; T={miau};
P = {CatWord → CatWord miau | miau}
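For example, the string miau miau miau is derived by
CatWord ⇒ CatWord miau ⇒ CatWord miau miau ⇒ miau miau miau,
so the language of this grammar is the set of all non-empty sequences of miau.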

Terms for Parts of Strings
[Table of string terminology: prefix, suffix, substring, proper prefix/suffix/substring, and subsequence of a string; see Aho2, Section 3.3]
Example
(A simplified version from Lecture2, Slide 6):
S=E; N={E,T,F}; T={+,*,(,),x}
P = {E → T | E+T,  T → F | T*F,  F → (E) | x}
By repeated substitution we derive sentential forms:
E ⇒ E+T ⇒ T+T ⇒ F+T ⇒ x+T ⇒ x+T*F ⇒ x+F*F ⇒ x+x*F ⇒ x+x*x
This is an example of a leftmost derivation (at each step the
leftmost non-terminal is expanded).
To recognise a valid sentence we reverse this process.
• Exercise: what language is generated by the (non-context-free) grammar:
S=S; N={A,B,S}; T={a,b,c};
P = {S → abc | aAbc,  Ab → bA,  Ac → Bbcc,  bB → Bb,  aB → aa | aaA}
(for the curious: read about Chomsky’s Hierarchy)
Why all this?
• Why study lexical analysis?
– To avoid writing lexical analysers (scanners) by hand.
– To simplify specification and implementation.
– To understand the underlying techniques and technologies.
• We want to specify lexical patterns (to derive tokens):
– Some parts are easy:
• WhiteSpace → blank | tab | WhiteSpace blank | WhiteSpace tab
• Keywords and operators (if, then, =, +)
• Comments (/* followed by */ in C, // in C++, % in LaTeX, ...)
– Some parts are more complex:
• Identifiers (letter followed by - up to n - alphanumerics…)
• Numbers
We need a notation that could lead to an implementation!
Regular Expressions
Patterns form a regular language. A regular expression is a way of
specifying a regular language. It is a formula that describes a possibly
infinite set of strings.
Example:
identifier → letter_ ( letter_ | digit )*
Regular Expression (RE) (over a vocabulary V):
• ε is a RE denoting the set {ε} (containing only the empty string).
• If a ∈ V then a is a RE denoting {a}.
• If r1, r2 are REs then:
– r1* denotes zero or more occurrences of r1;
– r1r2 denotes concatenation;
– r1 | r2 denotes either r1 or r2;
• Shorthands: [a-d] for a | b | c | d; r+ for rr*; r? for r | ε
Describe the languages denoted by the following REs:
a; a | b; a*; (a | b)*; (a | b)(a | b); (a*b*)*; (a | b)*baa;
Read more: Example 3.3, 3.4 Figure 3.7 Exercise 3.3.2
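As a hedged sketch (not part of the slides), the identifier pattern letter_ ( letter_ | digit )* can be checked directly in C:

#include <ctype.h>
#include <stdio.h>

/* Returns 1 if s matches identifier -> letter_ ( letter_ | digit )*,
   where letter_ is a letter or underscore. A sketch, not a full scanner. */
static int is_identifier(const char *s) {
    if (!(isalpha((unsigned char)*s) || *s == '_'))
        return 0;                       /* must start with letter_ */
    for (s++; *s; s++)                  /* zero or more letter_ | digit */
        if (!(isalnum((unsigned char)*s) || *s == '_'))
            return 0;
    return 1;
}

int main(void) {
    printf("%d %d %d\n",
           is_identifier("x_1"),    /* 1 */
           is_identifier("_tmp"),   /* 1 */
           is_identifier("1abc"));  /* 0 */
    return 0;
}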
Examples
• integer → ( + | - | ε ) (0 | 1 | 2 | … | 9)+
• integer → ( + | - | ε ) ( 0 | (1 | 2 | … | 9) (0 | 1 | 2 | … | 9)* )
• decimal → integer . (0 | 1 | 2 | … | 9)*
• identifier → [a-zA-Z] [a-zA-Z0-9]*

• Real-life application (perl regular expressions):
– [+-]?(\d+\.\d+|\d+\.|\.\d+)
– [+-]?(\d+\.\d+|\d+\.|\.\d+|\d+)([eE][+-]?\d+)?
(for more information read: % man perlre)
(Not all languages can be described by regular expressions.
But, we don’t care for now).
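A hedged C sketch of the same numeric pattern using the POSIX regex API (POSIX ERE has no \d, so [0-9] is used instead; the test strings are invented for illustration):

#include <regex.h>
#include <stdio.h>

int main(void) {
    /* Numbers with an optional sign and optional exponent,
       written in POSIX extended syntax ([0-9] instead of perl's \d). */
    const char *pattern =
        "^[+-]?([0-9]+\\.[0-9]+|[0-9]+\\.|\\.[0-9]+|[0-9]+)([eE][+-]?[0-9]+)?$";
    const char *tests[] = { "3.14", "-.5", "42", "+1.e10", "abc" };

    regex_t re;
    if (regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB) != 0)
        return 1;
    for (int i = 0; i < 5; i++)
        printf("%-7s %s\n", tests[i],
               regexec(&re, tests[i], 0, NULL, 0) == 0 ? "match" : "no match");
    regfree(&re);
    return 0;
}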
Building a Lexical Analyser by hand
Based on the specifications of tokens through regular expressions we
can write a lexical analyser. One approach is to check case by case
and split into smaller problems that can be solved ad hoc. Example:
void get_next_token() {
  c = input_char();
  if (is_eof(c)) { token ← (EOF, "eof"); return; }
  if (is_letter(c)) { recognise_id(); }
  else if (is_digit(c)) { recognise_number(); }
  else if (is_operator(c) || is_separator(c))
    { token ← (c, c); }   // single char assumed
  else { token ← (ERROR, c); }
  return;
}
...
do {
  get_next_token();
  print(token.class, token.attribute);
} while (token.class != EOF);

Can be efficient; but requires a lot of work and may be difficult to modify!
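The helper recognise_id() is not shown on the slide; a hedged sketch of what it might look like (the cursor-and-buffer interface is an assumption made for this illustration) is:

#include <ctype.h>
#include <stdio.h>

/* Hypothetical helper: collects letter/digit/underscore characters
   starting at *cursor into lexeme and advances the cursor.
   A real scanner would also look the lexeme up in a keyword table. */
static void recognise_id(const char **cursor, char *lexeme, size_t max) {
    size_t n = 0;
    while ((isalnum((unsigned char)**cursor) || **cursor == '_') && n + 1 < max)
        lexeme[n++] = *(*cursor)++;
    lexeme[n] = '\0';
}

int main(void) {
    const char *input = "score = 42";
    char lexeme[64];
    recognise_id(&input, lexeme, sizeof lexeme);
    printf("(id, \"%s\")\n", lexeme);   /* prints (id, "score") */
    return 0;
}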
Building Lexical Analysers “automatically”
Idea: try the regular expressions one by one and find the longest match:
set (token.class, token.length) ← (NULL, 0)
// first
find max_length such that input matches T1.RE1
if max_length > token.length
  set (token.class, token.length) ← (T1, max_length)
// second
find max_length such that input matches T2.RE2
if max_length > token.length
  set (token.class, token.length) ← (T2, max_length)
...
// n-th
find max_length such that input matches Tn.REn
if max_length > token.length
  set (token.class, token.length) ← (Tn, max_length)
// error
if (token.class == NULL) { handle no_match }

Disadvantage: linearly dependent on the number of token classes, and
requires restarting the search for each regular expression.
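A hedged C sketch of this try-each-class, longest-match (maximal munch) loop; the token classes and their matchers (match_id, match_number) are invented for illustration:

#include <ctype.h>
#include <stdio.h>

/* Each "token class" provides a function returning the length of the
   longest prefix of s it matches (0 if none). */
static size_t match_id(const char *s) {          /* letter (letter|digit)* */
    size_t n = 0;
    if (isalpha((unsigned char)s[0]))
        for (n = 1; isalnum((unsigned char)s[n]); n++) ;
    return n;
}
static size_t match_number(const char *s) {      /* digit+ */
    size_t n = 0;
    while (isdigit((unsigned char)s[n])) n++;
    return n;
}

int main(void) {
    const char *input = "x42+7";
    struct { const char *name; size_t (*match)(const char *); } classes[] = {
        { "id",     match_id     },
        { "number", match_number },
    };

    /* Try every token class and keep the longest match. */
    const char *best_class = NULL;
    size_t best_len = 0;
    for (size_t i = 0; i < 2; i++) {
        size_t len = classes[i].match(input);
        if (len > best_len) { best_class = classes[i].name; best_len = len; }
    }
    if (best_class)
        printf("longest match: (%s, \"%.*s\")\n", best_class, (int)best_len, input);
    else
        printf("no match\n");
    return 0;
}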
We study REs to automate scanner construction!
Consider the problem of recognising register names starting with r and
requiring at least one digit:
Register → r (0|1|2|…|9) (0|1|2|…|9)*   (or, Register → r Digit Digit*)
The RE corresponds to a transition diagram:

[Transition diagram: start → S0; S0 → S1 on r; S1 → S2 on digit; S2 → S2 on digit; S2 is the final state]

Depicts the actions that take place in the scanner.


• A circle represents a state; S0: start state; S2: final state (double circle)
• An arrow represents a transition; the label specifies the cause of the transition.
A string is accepted if, after following the transitions, the automaton ends in a final state
(for example, r345, r0, r29, as opposed to a, r, rab).
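A hedged sketch (not from the slides) of a direct-coded recogniser for this diagram, with one case per state:

#include <ctype.h>
#include <stdio.h>

/* Direct-coded recogniser for Register -> r Digit Digit*.
   Returns 1 if the whole string s is accepted. */
static int accept_register(const char *s) {
    int state = 0;                                               /* S0: start state */
    for (; *s; s++) {
        switch (state) {
        case 0: state = (*s == 'r') ? 1 : -1; break;             /* S0 --r--> S1 */
        case 1: state = isdigit((unsigned char)*s) ? 2 : -1; break; /* S1 --digit--> S2 */
        case 2: state = isdigit((unsigned char)*s) ? 2 : -1; break; /* S2 --digit--> S2 */
        }
        if (state < 0) return 0;                                 /* no transition: reject */
    }
    return state == 2;                                           /* S2 is the only final state */
}

int main(void) {
    printf("%d %d %d %d\n",
           accept_register("r345"),  /* 1 */
           accept_register("r0"),    /* 1 */
           accept_register("r"),     /* 0 */
           accept_register("rab"));  /* 0 */
    return 0;
}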
Towards Automation (finally!)
An easy (computerised) implementation of a transition diagram
is a transition table: a column for each input symbol and a
row for each state. An entry is a set of states that can be
reached from a state on some input symbol. E.g.:
state        'r'    digit
0            1      -
1            -      2
2 (final)    -      2
If we know the transition table and the final state(s) we can
build directly a recogniser that detects acceptance:
char = input_char();
state = 0;  // starting state
while (char != EOF) {
  state ← table(state, char);
  if (state == '-') return failure;
  word = word + char;
  char = input_char();
}
if (state == FINAL) return acceptance; else return failure;
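A hedged, concrete C version of this table-driven recogniser, encoding the table above as a 2-D array (ERR stands for the '-' entries; the names are invented for illustration):

#include <ctype.h>
#include <stdio.h>

enum { ERR = -1 };

/* Transition table for Register -> r Digit Digit*:
   rows are states 0..2, columns are input classes: 0 = 'r', 1 = digit. */
static const int table[3][2] = {
    /* 'r'  digit */
    {  1,   ERR },   /* state 0 (start) */
    { ERR,  2   },   /* state 1         */
    { ERR,  2   },   /* state 2 (final) */
};

static int input_class(char c) {
    if (c == 'r') return 0;
    if (isdigit((unsigned char)c)) return 1;
    return ERR;
}

/* Returns 1 (acceptance) if s is accepted by the table-driven recogniser. */
static int recognise(const char *s) {
    int state = 0;                              /* starting state */
    for (; *s; s++) {
        int cls = input_class(*s);
        state = (cls == ERR) ? ERR : table[state][cls];
        if (state == ERR) return 0;             /* '-' entry: failure */
    }
    return state == 2;                          /* 2 is the final state */
}

int main(void) {
    printf("%d %d %d\n", recognise("r29"), recognise("r"), recognise("rx1"));  /* 1 0 0 */
    return 0;
}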
The Full Story!
The generalised transition diagram is a finite automaton. It can be:
• Deterministic, DFA; as in the example
• Non-Deterministic, NFA; more than one transition out of a state may
be possible on the same input symbol (think about (a | b)*abb)
Every regular expression can be converted to a DFA!
Summary: an introduction to lexical analysis was given.
Next time: More on finite automata and conversions.
Exercise: Produce the DFA for the RE (Q: what is it for?):
Register → r ( (0|1|2) (Digit | ε) | (4|5|6|7|8|9) | (3|30|31) )
Reading: Aho2, Sections 2.2, 3.1-3.4.

