
Chapter Two

Lexical Analysis and the Scanner


The role of the lexical analyzer and its scanning process
• The lexical analyzer is the first phase of a compiler.

• the main task of the lexical analyzer is to read the input characters of the source
program, group them into lexemes, and produce as output a sequence of tokens.

• The stream of tokens is sent to the parser for syntax analysis.

• When the lexical analyzer discovers a lexeme constituting an identifier, it needs
to enter that lexeme into the symbol table.

• In some cases, information regarding the kind of identifier may be read from the
symbol table by the lexical analyzer to assist it in determining the proper token
it must pass to the parser.
The interaction between the lexical analyzer and the parser

Commonly, the interaction is implemented by having the parser call the lexical
analyzer. The call, suggested by the getNextToken command, causes the lexical
analyzer to read characters from its input until it can identify the next
lexeme and produce for it the next token, which it returns to the parser.
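This interaction can be sketched as a loop in which the parser repeatedly asks the lexical analyzer for the next token. The sketch below is a minimal illustration, not a production scanner; the function names `get_next_token` and `all_tokens` and the three token kinds are hypothetical choices for this example.

```python
# A minimal sketch of the parser/lexer interaction: the parser repeatedly
# calls get_next_token, which skips white space, groups characters into a
# lexeme, and returns the corresponding token.

def get_next_token(source, pos):
    """Return (token, lexeme, new_pos), or None at end of input."""
    while pos < len(source) and source[pos].isspace():
        pos += 1                              # skip white space
    if pos == len(source):
        return None
    ch = source[pos]
    if ch.isalpha():                          # identifier: letter(letter|digit)*
        start = pos
        while pos < len(source) and source[pos].isalnum():
            pos += 1
        return ("ID", source[start:pos], pos)
    if ch.isdigit():                          # number: digit+
        start = pos
        while pos < len(source) and source[pos].isdigit():
            pos += 1
        return ("NUM", source[start:pos], pos)
    return ("PUNCT", ch, pos + 1)             # single-character token

def all_tokens(source):
    """Stand-in for the parser: drain the lexer until input is exhausted."""
    tokens, pos = [], 0
    while (result := get_next_token(source, pos)) is not None:
        token, lexeme, pos = result
        tokens.append((token, lexeme))
    return tokens

print(all_tokens("sum = 3 + 2 ;"))
```

A real parser would call `get_next_token` on demand, one token per call, instead of collecting them all up front.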
Lexical analyzers are divided into a cascade of two processes:

A. Scanning: It consists of simple processes that do not require the tokenization of the
input such as deletion of comments, and compaction of consecutive white space
characters into one.

B. Lexical analysis: This is the more complex portion, where the analyzer
produces the sequence of tokens as output from the output of the scanner.

There are three reasons for separating lexical analysis from syntax analysis:

• To make the design simpler.

• To improve the efficiency of the compiler.

• To enhance the compiler portability.


Tokens, Patterns, and Lexemes

• A token is a string of characters, categorized according to the rules as a symbol


(e.g., IDENTIFIER, NUMBER, COMMA). The process of forming tokens
from an input stream of characters is called tokenization.

• A token can be anything that is useful for processing an input text stream or
text file.

• Consider this expression in a typical programming language: sum=3+2;

LEXEME:

• A lexeme is the sequence of characters in the source program that matches the
pattern for a token.


Lexeme Token type
sum Identifier
= Assignment operator
3 Number
+ Addition operator
2 Number
; End of statement
PATTERN: A pattern is a rule describing the form that the lexemes of a token can take.
It is specified using a regular expression.
E.g. letter ( letter | digit )*
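The lexeme/token table above can be reproduced mechanically by matching each token's pattern against the input. This is a sketch using Python's `re` module; the token names mirror the table, and the combined-pattern technique (one alternative per token, tried in order) is one common way to drive a scanner.

```python
# Tokenize sum=3+2; by combining one regular expression per token type into
# a single master pattern with named groups g0..g4.
import re

TOKEN_SPEC = [
    ("Identifier",          r"[A-Za-z][A-Za-z0-9]*"),
    ("Number",              r"[0-9]+"),
    ("Assignment operator", r"="),
    ("Addition operator",   r"\+"),
    ("End of statement",    r";"),
]
MASTER = re.compile("|".join(f"(?P<g{i}>{p})" for i, (_, p) in enumerate(TOKEN_SPEC)))

def tokenize(text):
    out = []
    for m in MASTER.finditer(text):
        index = int(m.lastgroup[1:])          # which alternative matched
        out.append((m.group(), TOKEN_SPEC[index][0]))
    return out

for lexeme, kind in tokenize("sum=3+2;"):
    print(f"{lexeme:4} {kind}")
```

The printed rows match the lexeme/token table for sum=3+2; shown above.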
Attributes for Tokens
• Some tokens have attributes that can be passed back to the parser.

• The lexical analyzer collects information about tokens into their associated
attributes. The attributes influence the translation of tokens.

i) Constant: the value of the constant

ii) Identifiers: pointer to the corresponding symbol table entry.


Error recovery strategies in lexical analysis:

• The following are the error-recovery actions in lexical analysis:

1. Deleting an unimportant character.

2. Inserting a missing character.

3. Replacing an incorrect character by a correct Character.

4. Transposing two adjacent characters.

5. Panic mode recovery: Deleting successive characters from the remaining input
until the lexical analyzer can again find a well-formed token.
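Panic-mode recovery (strategy 5) can be sketched as follows, assuming the small token set used in this chapter; offending characters are deleted one at a time until a valid lexeme starts again.

```python
# A sketch of panic-mode error recovery: on an illegal character, delete it
# and try to match a token at the next position.
import re

# identifiers, numbers, and the operators used in sum=3+2;
TOKEN = re.compile(r"[A-Za-z][A-Za-z0-9]*|[0-9]+|[=+;]")

def tokenize_with_recovery(text):
    tokens, errors, pos = [], [], 0
    while pos < len(text):
        m = TOKEN.match(text, pos)
        if m:                      # a valid lexeme starts here
            tokens.append(m.group())
            pos = m.end()
        else:                      # panic mode: delete the offending character
            errors.append(text[pos])
            pos += 1
    return tokens, errors

print(tokenize_with_recovery("sum=3@#+2;"))
```

The stray `@` and `#` are deleted and reported, and scanning resumes at `+`.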
SPECIFICATION OF TOKENS
There are 3 notions used to specify tokens:
1. Strings
2. Languages
3. Regular expressions
Strings and Languages
An alphabet or character class is a finite set of symbols.
A string over an alphabet is a finite sequence of symbols drawn from that
alphabet.
A language is any countable set of strings over some fixed alphabet.
Operations on strings
The following string-related terms are commonly used:
1. A prefix of string S is any string obtained by removing zero or more
symbols from the end of S. For example, ban is a prefix of banana.
2. A suffix of string S is any string obtained by removing zero or more
symbols from the beginning of S. For example, nana is a suffix of
banana.
3. A substring of S is obtained by deleting any prefix and any suffix
from S. For example, nan is a substring of banana.
4. The proper prefixes, suffixes, and substrings of a string S are those
prefixes, suffixes, and substrings of S that are not ε and not equal to S itself.
Cont…
5. A subsequence of S is any string formed by deleting zero or more symbols from
not necessarily consecutive positions of S.
For example, baan is a subsequence of banana.
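The five string operations above can be checked directly in code. This is a minimal sketch; the predicate names are hypothetical, and the subsequence check walks the string once, consuming symbols in order.

```python
# The string-operation definitions, checked on "banana".

def is_prefix(p, s):
    """p is obtained by removing zero or more symbols from the end of s."""
    return s.startswith(p)

def is_suffix(p, s):
    """p is obtained by removing zero or more symbols from the beginning of s."""
    return s.endswith(p)

def is_substring(p, s):
    """p is obtained by deleting a prefix and a suffix of s."""
    return p in s

def is_subsequence(p, s):
    """p is formed by deleting symbols from not necessarily consecutive positions."""
    it = iter(s)
    return all(ch in it for ch in p)   # each symbol of p found in s, in order

assert is_prefix("ban", "banana")
assert is_suffix("nana", "banana")
assert is_substring("nan", "banana")
assert is_subsequence("baan", "banana")
assert not is_subsequence("naab", "banana")   # order matters
print("all examples from the text check out")
```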
Operations on languages:
The following are the operations that can be applied to languages:
1. Union
2. Concatenation
3. Kleene closure
4. Positive closure
The following example shows the operations on languages:
Let L={0,1} and S={a, b, c}
1. Union : L U S={0,1, a, b, c}
2. Concatenation : L.S={0a,1a,0b,1b,0c,1c}
3. Kleene closure : L*={ ε, 0, 1, 00, 01, 10, 11, …. }
4. Positive closure: L+={ 0, 1, 00, 01, 10, 11, …. }
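The four operations above can be sketched for finite languages represented as Python sets of strings, with ε as the empty string. Since L* and L+ are infinite, the closure here is truncated to a bounded number of concatenations; the `up_to` parameter is an assumption of this sketch.

```python
# Operations on languages, with a language as a set of strings ("" is ε).

def union(l1, l2):
    return l1 | l2

def concat(l1, l2):
    """L1.L2 = every string of L1 followed by every string of L2."""
    return {x + y for x in l1 for y in l2}

def closure(l, up_to):
    """Kleene closure L*, truncated to at most `up_to` concatenated pieces."""
    result, layer = {""}, {""}
    for _ in range(up_to):
        layer = concat(layer, l)
        result |= layer
    return result

def positive_closure(l, up_to):
    """L+ = L* without ε (unless ε is already in L)."""
    return closure(l, up_to) - ({""} - l)

L, S = {"0", "1"}, {"a", "b", "c"}
print(sorted(union(L, S)))     # the set {0, 1, a, b, c}
print(sorted(concat(L, S)))    # the set {0a, 1a, 0b, 1b, 0c, 1c}
print(sorted(closure(L, 2)))   # ε plus all strings over {0,1} of length <= 2
```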
Regular Expressions
• Regular expressions are used for algebraically representing certain sets of strings.

• Regular expressions are an important notation for specifying lexeme patterns.

• Each regular expression r denotes a language L(r).

• Here are the rules that define the regular expressions over some alphabet Σ and the languages that
those expressions denote:

1. ε is a regular expression, and L(ε) is { ε }, that is, the language whose sole member is the empty
string.

2. If ‘a’ is a symbol in Σ, then ‘a’ is a regular expression, and L(a) = {a}, that is, the language with one
string, of length one.
3. Suppose r and s are regular expressions denoting the languages L(r) and
L(s). Then,

a) (r)|(s) is a regular expression denoting the language L(r) U L(s).

b) (r)(s) is a regular expression denoting the language L(r)L(s).

c) (r)* is a regular expression denoting (L(r))*.

d) (r) is a regular expression denoting L(r).


Regular set

• A language that can be defined by a regular expression is called a regular set.

• If two regular expressions r and s denote the same regular set, we say they are
equivalent and write r = s.

• There are several algebraic laws for regular expressions that can be used to
manipulate regular expressions into equivalent forms.

• For instance, r|s = s|r (| is commutative) and r|(s|t) = (r|s)|t (| is associative).


Regular Definitions
• Giving names to regular expressions is referred to as a Regular definition. If Σ is an alphabet of

basic symbols, then a regular definition is a sequence of definitions of the form

d1 → r1
d2 → r2
………
dn → rn

1. Each di is a distinct name.

2. Each ri is a regular expression over the alphabet Σ U {d1, d2, . . . , di-1}.


• Example: Identifiers are a set of strings of letters and digits beginning with a letter. Regular

definition for this set:

letter → A | B | …. | Z | a | b | …. | z

digit → 0 | 1 | …. | 9

id → letter ( letter | digit ) *
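The regular definition for identifiers translates almost directly into the notation of Python's `re` module; this is a sketch in which each named definition becomes a Python string.

```python
# The regular definition letter -> [A-Za-z], digit -> [0-9],
# id -> letter(letter|digit)* written with Python's re module.
import re

letter = "[A-Za-z]"
digit  = "[0-9]"
ident  = re.compile(f"{letter}({letter}|{digit})*")

def is_identifier(s):
    """True if the whole string matches the id definition."""
    return ident.fullmatch(s) is not None

print(is_identifier("x25"))   # True: begins with a letter
print(is_identifier("25x"))   # False: begins with a digit
print(is_identifier(""))      # False: id requires at least one letter
```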


Extensions of regular expressions

1. One or more instances of (+):

-The unary postfix operator + means “ one or more instances of”.

- If r is a regular expression that denotes the language L(r), then ( r )+ is a
regular expression that denotes the language (L (r ))+

- Thus the regular expression a+ denotes the set of all strings of one or more a’s.

- The operator + has the same precedence and associativity as the operator *.
2. Zero or one instance ( ?):

- The unary postfix operator ? means “zero or one instance of”.


- The notation r? is a shorthand for r | ε.

- If ‘r’ is a regular expression, then ( r )? is a regular expression that
denotes the language L( r ) U { ε }.
3. Character Classes:

- The notation [abc] where a, b and c are alphabet symbols denotes the regular
expression a | b | c.

- Character class such as [a – z] denotes the regular expression a | b | c | d | ….|z.

-We can describe identifiers as being strings generated by the regular expression,
[A–Za–z][A–Za–z0–9]*
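All three extensions (+, ?, and character classes) are supported directly by Python's `re` syntax. As a sketch, here is a numeric-constant pattern that uses each of them; the pattern itself is an illustrative assumption of this example.

```python
# digit+ (. digit+)? (E (+|-)? digit+)?  written with the three extended
# operators: [0-9] is a character class, + means one-or-more, ? zero-or-one.
import re

num = re.compile(r"[0-9]+(\.[0-9]+)?(E[+-]?[0-9]+)?")

for s in ["12", "12.5", "12.5E-3", "12."]:
    print(s, bool(num.fullmatch(s)))
```

`12.` fails because the optional fraction part, once its `.` is present, demands at least one digit after it.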
RECOGNITION OF TOKENS
• Consider the following grammar fragment:

stmt → if expr then stmt
     | if expr then stmt else stmt
     | ε

expr → term relop term
     | term

term → id | num
• where the terminals if , then, else, relop, id and num generate sets of strings given
by the following regular definitions:
• if → if
• then → then
• else → else
• relop → < | <= | = | <> | > | >=
• id → letter ( letter | digit )*
• num → digit+ ( . digit+ )? ( E ( + | - )? digit+ )?

• For this language fragment the lexical analyzer will recognize the keywords if,
then, else, as well as the lexemes denoted by relop, id, and num. To simplify
matters, we assume keywords are reserved; that is, they cannot be used as
identifiers.
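A lexical analyzer for this language fragment can be sketched with one regular expression per definition. Keywords are handled the standard way for reserved words: an identifier lexeme is first matched by the id pattern and then checked against the keyword table.

```python
# A sketch of a scanner for the if/then/else fragment: num is tried before
# id, relop lists two-character operators before their one-character prefixes.
import re

KEYWORDS = {"if", "then", "else"}
SPEC = [
    ("num",   r"[0-9]+(\.[0-9]+)?(E[+-]?[0-9]+)?"),
    ("id",    r"[A-Za-z][A-Za-z0-9]*"),
    ("relop", r"<=|<>|>=|<|=|>"),
    ("ws",    r"[ \t\n]+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in SPEC))

def tokens(text):
    out = []
    for m in MASTER.finditer(text):
        kind, lexeme = m.lastgroup, m.group()
        if kind == "ws":
            continue                 # white space is stripped, not returned
        if kind == "id" and lexeme in KEYWORDS:
            kind = lexeme            # reserved words become their own tokens
        out.append((kind, lexeme))
    return out

print(tokens("if x1 <= 4.5E2 then y"))
```

Note the ordering inside relop: `<=` and `<>` must be tried before `<`, or the scanner would stop after the shorter match.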
Transition diagrams

• It is a diagrammatic representation to depict the action that will take
place when a lexical analyzer is called by the parser to get the next
token. It is used to keep track of information about the characters that
are seen as the forward pointer scans the input.

• Transition diagrams have a collection of nodes or circles, called states,
connected by directed edges labeled with input symbols.


Finite automata

• Finite Automata is one of the mathematical models that consist of a number of
states and edges. It is a transition diagram that recognizes a regular
expression or grammar.

Types of Finite Automata

• There are two types of Finite Automata :

Non-deterministic Finite Automata (NFA)

Deterministic Finite Automata (DFA)


Non-deterministic Finite Automata

NFA is a mathematical model defined by a 5-tuple

M = (Qn, Ʃ, δ, q0, Fn)

Qn – finite set of states

Ʃ – finite set of input symbols

δ – transition function that maps state–symbol pairs to sets of states

q0 – starting state

Fn – set of final states
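An NFA can be simulated by tracking the *set* of states it could be in. The sketch below assumes the transition function is a Python dict from (state, symbol) to a set of states, with `""` standing for ε; the example NFA for (a|b)*abb is a classic illustration, not from this text.

```python
# Simulating an NFA M = (Q, Σ, δ, q0, F): keep the set of reachable states.

def eps_closure(states, delta):
    """All states reachable from `states` using ε-transitions alone."""
    stack, seen = list(states), set(states)
    while stack:
        for r in delta.get((stack.pop(), ""), set()):
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def nfa_accepts(delta, q0, finals, word):
    current = eps_closure({q0}, delta)
    for ch in word:
        # move on ch from every current state, then close under ε again
        moved = set().union(*(delta.get((q, ch), set()) for q in current))
        current = eps_closure(moved, delta)
    return bool(current & finals)   # accept if any final state is reachable

# Example NFA (no ε-moves) for (a|b)*abb: state 0 guesses where abb starts.
delta = {
    (0, "a"): {0, 1}, (0, "b"): {0},
    (1, "b"): {2},
    (2, "b"): {3},
}
print(nfa_accepts(delta, 0, {3}, "aabb"))   # True
print(nfa_accepts(delta, 0, {3}, "abab"))   # False
```

The nondeterminism shows up as the set `{0, 1}` after reading `a`: the automaton is in both states at once.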
Deterministic Finite Automata
DFA is a special case of an NFA in which
i) no state has an ε-transition, and
ii) there is at most one transition from each state on any input symbol.
A DFA is defined by a 5-tuple
M = (Qd, Ʃ, δ, q0, Fd)
Qd – finite set of states
Ʃ – finite set of input symbols
δ – transition function that maps each state–symbol pair to at most one state
q0 – starting state
Fd – set of final states
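Because δ now yields at most one state per (state, symbol) pair, DFA simulation needs no sets, just a single current state. The DFA below for (a|b)*abb is a standard textbook example used here as an assumption.

```python
# Simulating a DFA: one current state, one transition per input symbol.

def dfa_accepts(delta, q0, finals, word):
    state = q0
    for ch in word:
        state = delta.get((state, ch))   # at most one transition per pair
        if state is None:
            return False                 # no transition: reject immediately
    return state in finals

# DFA for (a|b)*abb; state i = "the last i characters were a prefix of abb".
delta = {
    (0, "a"): 1, (0, "b"): 0,
    (1, "a"): 1, (1, "b"): 2,
    (2, "a"): 1, (2, "b"): 3,
    (3, "a"): 1, (3, "b"): 0,
}
print(dfa_accepts(delta, 0, {3}, "aabb"))   # True
print(dfa_accepts(delta, 0, {3}, "abba"))   # False
```

Unlike the NFA version, each input symbol costs constant work, which is why scanners are driven by DFAs.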
Construction of DFA from regular expression
The following steps are involved in the construction of DFA from regular
expression:

i) Convert the RE to an NFA using Thompson’s construction

ii) Convert NFA to DFA

iii) Construct minimized DFA
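Step (ii), the subset construction, can be sketched compactly: each DFA state is a frozen set of NFA states, starting from the ε-closure of the NFA start state. The tiny ε-NFA below (accepting a*b) is a hypothetical example for illustration.

```python
# NFA-to-DFA subset construction; "" marks an ε-transition.

def eps_closure(states, delta):
    """All NFA states reachable from `states` via ε-transitions alone."""
    stack, seen = list(states), set(states)
    while stack:
        for r in delta.get((stack.pop(), ""), set()):
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return frozenset(seen)

def subset_construction(delta, q0, finals, alphabet):
    start = eps_closure({q0}, delta)
    dfa_delta, todo, seen = {}, [start], {start}
    while todo:
        S = todo.pop()
        for ch in alphabet:
            # union of the NFA moves on ch out of every state in S
            moved = set().union(*(delta.get((q, ch), set()) for q in S))
            T = eps_closure(moved, delta)
            dfa_delta[(S, ch)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    # a DFA state is final if it contains any NFA final state
    dfa_finals = {S for S in seen if S & finals}
    return dfa_delta, start, dfa_finals

# NFA with an ε-move, accepting a*b: 0 -a-> 0, 0 -ε-> 1, 1 -b-> 2.
nfa = {(0, ""): {1}, (0, "a"): {0}, (1, "b"): {2}}
dfa, start, finals = subset_construction(nfa, 0, {2}, "ab")
print(len({S for (S, _) in dfa}), "DFA states explored")
```

Step (iii), DFA minimization, would then merge indistinguishable states; it is omitted here.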


Thank you
