Unit 2
II. PARSER
Role of Parser - Context-free Grammar - Derivations and Parse Tree - Types of Parser -
Bottom-Up: Shift-Reduce Parsing - Operator Precedence Parsing, SLR Parser - Top-Down:
Recursive Descent Parser - Non-Recursive Descent Parser - Error Handling and Recovery in
Syntax Analyzer - YACC.
SYNTAX ANALYSIS:
Every programming language has rules that prescribe the syntactic structure of well-formed
programs. In Pascal, for example, a program is made out of blocks, a block out of statements,
a statement out of expressions, an expression out of tokens, and so on. The syntax of
programming language constructs can be described by context-free grammars or BNF
(Backus-Naur Form) notation. Grammars offer significant advantages to both language
designers and compiler writers.
• A grammar gives a precise, yet easy-to-understand, syntactic specification of a
programming language.
• From certain classes of grammars we can automatically construct an efficient parser that
determines if a source program is syntactically well formed. As an additional benefit, the
parser construction process can reveal syntactic ambiguities and other difficult-to-parse
constructs that might otherwise go undetected in the initial design phase of a language and
its compiler.
• A properly designed grammar imparts a structure to a programming language that is
useful for the translation of source programs into correct object code and for the detection
of errors. Tools are available for converting grammar-based descriptions of translations
into working programs.
Languages evolve over a period of time, acquiring new constructs and performing additional
tasks. These new constructs can be added to a language more easily when there is an existing
implementation based on a grammatical description of the language.
ROLE OF THE PARSER:
A parser for a grammar is a program that takes as input a string w (a stream of tokens
obtained from the lexical analyzer) and produces as output either a parse tree for w, if w is a
valid sentence of the grammar, or an error message indicating that w is not a valid sentence
of the given grammar.
The goal of the parser is to determine the syntactic validity of a source string. If the string is
valid, a tree is built for use by the subsequent phases of the compiler. The tree reflects the
sequence of derivations or reductions used during parsing; hence, it is called a parse tree. If
the string is invalid, the parser has to issue diagnostic messages identifying the nature and
cause of the errors in the string. Every elementary subtree in the parse tree corresponds to a
production of the grammar.
There are two ways of identifying an elementary subtree:
1. By deriving a string from a non-terminal or
2. By reducing a string of symbol to a non-terminal.
Here A is a non-terminal and α is a string of terminals and non-terminals (including the
empty string). S is the start symbol (one of the non-terminal symbols).
L(G) is the language of G (the language generated by G), which is a set of sentences.
iii. Lower-case italic names such as expr or stmt.
3. Upper-case letters late in the alphabet, such as X,Y,Z, represent grammar symbols,
that is either terminals or non-terminals.
4. Greek letters α , β , γ represent strings of grammar symbols.
e.g a generic production could be written as A → α.
5. If A → α1 , A → α2 , . . . . , A → αn are all productions with A , then we can write A
→ α1 | α2 |. . . . | αn , (alternatives for A).
6. Unless otherwise stated, the left side of the first production is the start symbol.
Using the shorthand, the grammar can be written as:
E → E A E | ( E ) | - E | id
A → + | - | * | / | ^
Derivations:
A derivation of a string for a grammar is a sequence of grammar rule applications that
transform the start symbol into the string. A derivation proves that the string belongs to the
grammar's language.
1. Leftmost Derivation (LMD):
• If we always replace the leftmost non-terminal at each step, the derivation is known as the
left-most derivation.
• The sentential form derived by the left-most derivation is called the left-sentential
form.
2. Rightmost Derivation (RMD):
• If we always replace the rightmost non-terminal at each step, the derivation is known as the
right-most derivation.
• The sentential form derived by the right-most derivation is called the right-sentential
form.
Example:
Consider the G,
E → E + E | E * E | (E ) | - E | id
Derive the string id + id * id using leftmost derivation and rightmost derivation.
(a) Leftmost derivation: E ⇒ E + E ⇒ id + E ⇒ id + E * E ⇒ id + id * E ⇒ id + id * id
(b) Rightmost derivation: E ⇒ E + E ⇒ E + E * E ⇒ E + E * id ⇒ E + id * id ⇒ id + id * id
Fig 2.2 a) Leftmost derivation b) Rightmost derivation
Strings that appear in leftmost derivation are called left sentential forms. Strings that appear
in rightmost derivation are called right sentential forms.
Sentential Forms:
Given a grammar G with start symbol S, if S ⇒* α (S derives α in zero or more steps), where
α may contain non-terminals or terminals, then α is called a sentential form of G.
Parse Tree:
A parse tree is a graphical representation of a derivation sequence of a sentential form.
In a parse tree:
Inner nodes of a parse tree are non-terminal symbols.
The leaves of a parse tree are terminal symbols.
A parse tree can be seen as a graphical representation of a derivation.
A parse tree depicts the associativity and precedence of operators. The deepest sub-tree is
traversed first; therefore the operator in that sub-tree gets precedence over the operator in
the parent node.
Fig 2.3 Parse tree for the string -(id+id) built from the derivation
Yield or frontier of tree:
Each interior node of a parse tree is a non-terminal, and its children may be terminals or
non-terminals. Reading the leaves of the tree from left to right gives a sentential form,
which is called the yield or frontier of the tree.
Ambiguity:
A grammar that produces more than one parse tree for some sentence is said to be an
ambiguous grammar, i.e., an ambiguous grammar is one that produces more than one leftmost
or more than one rightmost derivation for the same sentence.
Example : Given grammar G : E → E+E | E*E | ( E ) | - E | id
The sentence id+id*id has the following two distinct leftmost derivations:
E ⇒ E + E ⇒ id + E ⇒ id + E * E ⇒ id + id * E ⇒ id + id * id
E ⇒ E * E ⇒ E + E * E ⇒ id + E * E ⇒ id + id * E ⇒ id + id * id
Consider another example,
stmt → if expr then stmt | if expr then stmt else stmt | other
This grammar is ambiguous since the string if E1 then if E2 then S1 else S2 has two distinct
parse trees, one for each leftmost derivation (the dangling-else ambiguity).
Table 2.1 Ambiguous grammar vs. Unambiguous grammar
The resultant unambiguous grammar is:
E→E+T|E–T|T
T→T*F|T/F|F
F → (E) | id
Trying to derive the string id+id*id using the above grammar will yield one unique
derivation.
Equivalent grammar is given by:
A0 → a A0 | b A0 | a A1
A1 → b A2
A2 → b A3
A3 → ε
Types of Parser:
• The disadvantage is that it takes too much work to construct an LR parser by hand for
a typical programming-language grammar. But there are lots of LR parser generators
available to make this task easy.
Bottom-Up Parsing:
Constructing a parse tree for an input string beginning at the leaves and going towards the
root is called bottom-up parsing. A general type of bottom-up parser is a shift-reduce parser.
Shift-Reduce Parsing:
Shift-reduce parsing is a type of bottom -up parsing that attempts to construct a parse tree for
an input string beginning at the leaves (the bottom) and working up towards the root (the
top).
Example:
Consider the grammar:
S → aABe
A → Abc | b
B→d
The string to be recognized is abbcde. We want to reduce the string to S.
Steps of reduction:
abbcde (reduce the leftmost b to A using A → b)
aAbcde (reduce Abc to A using A → Abc)
aAde (reduce d to B using B → d)
aABe (reduce aABe to S using S → aABe)
S
Each replacement of the right side of a production by the left side in the above example is
called reduction, which is equivalent to rightmost derivation in reverse.
Handle:
A handle is a substring that matches the right side of a production, and whose replacement
by the left side of that production leads eventually, by the reverse of a rightmost
derivation, to a reduction to the start symbol.
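To make the reduction sequence above concrete, here is a minimal C sketch that hard-codes the example grammar and performs the same reductions by plain string replacement. The function name reduce and the fixed reduction order are ours, chosen purely for illustration:

#include <stdio.h>
#include <string.h>

/* Replace the first occurrence of rhs in s with the single symbol lhs. */
int reduce(char *s, const char *rhs, char lhs) {
    char *p = strstr(s, rhs);
    if (!p) return 0;
    *p = lhs;
    memmove(p + 1, p + strlen(rhs), strlen(p + strlen(rhs)) + 1);
    return 1;
}

int main(void) {
    char s[32] = "abbcde";
    printf("%s\n", s);
    reduce(s, "b", 'A');    /* abbcde -> aAbcde  (A -> b)    */
    printf("%s\n", s);
    reduce(s, "Abc", 'A');  /* aAbcde -> aAde    (A -> Abc)  */
    printf("%s\n", s);
    reduce(s, "d", 'B');    /* aAde   -> aABe    (B -> d)    */
    printf("%s\n", s);
    reduce(s, "aABe", 'S'); /* aABe   -> S       (S -> aABe) */
    printf("%s\n", s);
    return 0;
}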
Stack Implementation of Shift-Reduce Parsing:
There are two problems that must be solved if we are to parse by handle pruning. The first is
to locate the substring to be reduced in a right-sentential form, and the second is to determine
what production to choose in case there is more than one production with that substring on
the right side.
A convenient way to implement a shift-reduce parser is to use a stack to hold grammar
symbols and an input buffer to hold the string w to be parsed. We use $ to mark the bottom of
the stack and also the right end of the input. Initially, the stack is empty, and the string w is
on the input, as follows:
STACK INPUT
$ w$
The parser operates by shifting zero or more input symbols onto the stack until a handle is on
top of the stack. The parser repeats this cycle until it has detected an error or until the stack
contains the start symbol and the input is empty:
STACK INPUT
$S $
Example: The actions of a shift-reduce parser in parsing the input string id1+id2*id3,
according to the ambiguous grammar for arithmetic expressions, are shown in Fig 2.10.
Fig 2.10 Reductions made by Shift Reduce Parser
While the primary operations of the parser are shift and reduce, there are actually four
possible actions a shift-reduce parser can make:
(1) shift, (2) reduce,(3) accept, and (4) error.
In a shift action, the next input symbol is shifted onto the top of the stack.
In a reduce action, the parser knows the right end of the handle is at the top of the
stack. It must then locate the left end of the handle within the stack and decide with
what non-terminal to replace the handle.
In an accept action, the parser announces successful completion of parsing.
In an error action, the parser discovers that a syntax error has occurred and calls an
error recovery routine.
Figure 2.11 represents the stack implementation of shift reduce parser using unambiguous
grammar.
Operator-Precedence Parsing:
An operator grammar is a grammar in which no production right side is ε or has two
adjacent non-terminals. This property enables the implementation of efficient operator-
precedence parsers.
Example: The following grammar for expressions:
E→E A E | (E) | -E | id
A→ + | - | * | / | ^
This is not an operator grammar, because the right side EAE has two consecutive non-
terminals. However, if we substitute for A each of its alternates, we obtain the following
operator grammar:
E→E + E |E – E |E * E | E / E | ( E ) | E ^ E | - E | id
In operator-precedence parsing, we define three disjoint precedence relations between pairs
of terminals. The parser relies on the following three precedence relations:
a <· b means a yields precedence to b
a =· b means a has the same precedence as b
a ·> b means a takes precedence over b
Defining Precedence Relations:
The precedence relations are defined using the following rules:
Rule-01:
• If precedence of b is higher than precedence of a, then we define a <· b
• If precedence of b is same as precedence of a, then we define a =· b
• If precedence of b is lower than precedence of a, then we define a ·> b
Rule-02:
• An identifier is always given a higher precedence than any other symbol.
• $ symbol is always given the lowest precedence.
Rule-03:
• If two operators have the same precedence, then we go by checking their
associativity.
Fig 2.15 Stack Implementation
Implementation of Operator-Precedence Parser:
• An operator-precedence parser is a simple shift-reduce parser that is capable of
parsing a subset of LR(1) grammars.
• More precisely, the operator-precedence parser can parse all LR(1) grammars where
two consecutive non-terminals and epsilon never appear in the right-hand side of any
rule.
Computation of LEADING:
• LEADING is defined for every non-terminal: the terminals that can be the first terminal
in a string derived from that non-terminal.
• LEADING(A) = { a | A ⇒+ γaδ }, where γ is ε or a single non-terminal, and ⇒+ indicates
derivation in one or more steps.
Algorithm for LEADING(A):
{
1. ‘a’ is in LEADING(A) if A → γaδ, where γ is ε or a single non-terminal.
2. If ‘a’ is in LEADING(B) and A → B, then ‘a’ is in LEADING(A).
}
Computation of TRAILING:
• Trailing is defined for every non-terminal.
• Terminals that can be the last terminal in a string derived from that non-terminal.
• TRAILING(A) = { a | A ⇒+ γaδ }, where δ is ε or a single non-terminal, and ⇒+ indicates
derivation in one or more steps.
Algorithm for TRAILING(A):
{
1. ‘a’ is in TRAILING(A) if A → γaδ, where δ is ε or a single non-terminal.
2. If ‘a’ is in TRAILING(B) and A → B, then ‘a’ is in TRAILING(A).
}
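Both sets can be computed by iterating the two rules until nothing changes. Below is a small C sketch of this fixpoint computation for the LEADING sets of the Example 1 grammar that follows; the encoding of productions as strings (with i abbreviating id) and the helper names are ours. TRAILING is computed symmetrically by scanning right sides from the right:

#include <stdio.h>
#include <string.h>

/* Productions written as "LHS:RHS"; 'i' abbreviates id. */
const char *prods[] = { "E:E+T", "E:T", "T:T*F", "T:F", "F:(E)", "F:i" };
#define NP 6

char leading[3][16];                     /* LEADING sets for E, T, F */

int nt_index(char c) { return c == 'E' ? 0 : c == 'T' ? 1 : 2; }
int is_nt(char c)    { return c == 'E' || c == 'T' || c == 'F'; }

int add(char *set, char a) {             /* returns 1 if the set grew */
    if (strchr(set, a)) return 0;
    set[strlen(set)] = a;
    return 1;
}

int main(void) {
    int changed = 1;
    while (changed) {                    /* iterate to a fixpoint */
        changed = 0;
        for (int p = 0; p < NP; p++) {
            int A = nt_index(prods[p][0]);
            const char *rhs = prods[p] + 2;
            /* Rule 1: first terminal, preceded by at most one non-terminal */
            if (!is_nt(rhs[0]))
                changed |= add(leading[A], rhs[0]);
            else if (rhs[1] && !is_nt(rhs[1]))
                changed |= add(leading[A], rhs[1]);
            /* Rule 2: rhs begins with non-terminal B, so
               everything in LEADING(B) is in LEADING(A) */
            if (is_nt(rhs[0])) {
                char *lb = leading[nt_index(rhs[0])];
                for (int k = 0; lb[k]; k++)
                    changed |= add(leading[A], lb[k]);
            }
        }
    }
    printf("LEADING(E) = {%s}\n", leading[0]);
    printf("LEADING(T) = {%s}\n", leading[1]);
    printf("LEADING(F) = {%s}\n", leading[2]);
    return 0;
}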
Example 1: Consider the unambiguous grammar,
E→E + T
E→T
T→T * F
T→F
F→(E)
F→id
Step 1: Compute LEADING and TRAILING:
LEADING(E)= { +,LEADING(T)} ={+ , * , ( , id}
LEADING(T)= { *,LEADING(F)} ={* , ( , id}
LEADING(F)= { ( , id}
TRAILING(E)= { +, TRAILING(T)} ={+ , * , ) , id}
TRAILING(T)= { *, TRAILING(F)} ={* , ) , id}
TRAILING(F)= { ) , id}
Step 2: After computing LEADING and TRAILING, a precedence relation table is constructed
over all the terminals in the grammar, including the ‘$’ symbol.
Fig 2.16 Algorithm for constructing Precedence Relation Table
      +     *     id    (     )     $
+     ·>    <·    <·    <·    ·>    ·>
*     ·>    ·>    <·    <·    ·>    ·>
id    ·>    ·>    e     e     ·>    ·>
(     <·    <·    <·    <·    =·    e
)     ·>    ·>    e     e     ·>    ·>
$     <·    <·    <·    <·    e     accept
Fig 2.17 Precedence Relation Table * All undefined entries are error (e).
Rough work:
LEADING(E) = {+, *, (, id}     TRAILING(E) = {+, *, ), id}
LEADING(T) = {*, (, id}        TRAILING(T) = {*, ), id}
LEADING(F) = {(, id}           TRAILING(F) = {), id}
Terminal followed by Non-Terminal (the terminal is <· everything in LEADING of the non-terminal):
+ T  =>  + <· every terminal in LEADING(T)
* F  =>  * <· every terminal in LEADING(F)
( E  =>  ( <· every terminal in LEADING(E)
Non-Terminal followed by Terminal (everything in TRAILING of the non-terminal is ·> the terminal):
E +  =>  every terminal in TRAILING(E) ·> +
T *  =>  every terminal in TRAILING(T) ·> *
E )  =>  every terminal in TRAILING(E) ·> )
Step 3: Parse the given input string (id+id)*id$
STACK     RELATION     INPUT          ACTION
$         $ <· (       (id+id)*id$    Shift (
$(        ( <· id      id+id)*id$     Shift id
$(id      id ·> +      +id)*id$       Pop id
$(        ( <· +       +id)*id$       Shift +
$(+       + <· id      id)*id$        Shift id
$(+id     id ·> )      )*id$          Pop id
$(+       + ·> )       )*id$          Pop +
$(        ( =· )       )*id$          Shift )
$()       ) ·> *       *id$           Pop ( )
$         $ <· *       *id$           Shift *
$*        * <· id      id$            Shift id
$*id      id ·> $      $              Pop id
$*        * ·> $       $              Pop *
$         $             $             Accept
Fig 2.19 Parse the input string (id+id)*id$
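The parsing loop itself can be sketched in a few lines of C. The version below hard-codes the relation table of Fig 2.17 (with id abbreviated to i) and, as a common simplification, keeps only terminals on the stack; the encoding characters '<', '>', '=', 'e' and 'A' are our own choices:

#include <stdio.h>
#include <string.h>

const char *terms = "+*i()$";           /* terminals in table order */

/* rel[a][b]: '<' for <. , '>' for .> , '=' for =. , 'A' accept, 'e' error */
const char rel[6][6] = {
    /*        +    *    i    (    )    $   */
    /* + */ {'>', '<', '<', '<', '>', '>'},
    /* * */ {'>', '>', '<', '<', '>', '>'},
    /* i */ {'>', '>', 'e', 'e', '>', '>'},
    /* ( */ {'<', '<', '<', '<', '=', 'e'},
    /* ) */ {'>', '>', 'e', 'e', '>', '>'},
    /* $ */ {'<', '<', '<', '<', 'e', 'A'},
};

int idx(char c) { return (int)(strchr(terms, c) - terms); }

int main(void) {
    const char *ip = "(i+i)*i$";        /* the input string of Fig 2.19 */
    char stack[64] = "$";               /* $ marks the bottom           */
    int top = 0;
    for (;;) {
        char r = rel[idx(stack[top])][idx(*ip)];
        if (r == 'A') { printf("accept\n"); return 0; }
        if (r == 'e') { printf("error\n"); return 1; }
        if (r == '<' || r == '=') {     /* shift the input symbol       */
            stack[++top] = *ip++;
        } else {                        /* '>' : pop the handle         */
            char popped;
            do {
                popped = stack[top];
                stack[top--] = '\0';
            } while (rel[idx(stack[top])][idx(popped)] != '<');
        }
        printf("stack: %-8s input: %s\n", stack, ip);
    }
}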
Precedence Functions:
Compilers using operator-precedence parsers need not store the table of precedence relations.
In most cases, the table can be encoded by two precedence functions f and g that map
terminal symbols to integers. We attempt to select f and g so that, for all symbols a and b:
1. f(a) < g(b) whenever a <· b,
2. f(a) = g(b) whenever a =· b, and
3. f(a) > g(b) whenever a ·> b.
Algorithm for Constructing Precedence Functions:
1. Create function symbols fa and ga for each grammar terminal a and for the end-of-string
symbol $.
2. Partition the symbols into groups so that fa and gb are in the same group if a =· b (there
can be symbols in the same group even if they are not connected by this relation).
3. Create a directed graph whose nodes are the groups; then for each pair of symbols a and b
do: place an edge from the group of gb to the group of fa if a <· b; otherwise, if a ·> b,
place an edge from the group of fa to that of gb.
4. If the constructed graph has a cycle, then no precedence functions exist. When there are
no cycles, let f(a) and g(a) be the lengths of the longest paths beginning at the groups of
fa and ga respectively.
Fig 2.20 Precedence Graph
There are no cycles, so precedence functions exist. As f$ and g$ have no out-edges,
f($) = g($) = 0. The longest path from g+ has length 1, so g(+) = 1. There is a path from gid
to f* to g* to f+ to g+ to f$, so g(id) = 5. The resulting precedence functions are:
       +    *    id   $
f      2    4    4    0
g      1    3    5    0
Example 2:
Consider the following grammar, construct the operator precedence parsing table, and check
whether the input strings (i) *id=id and (ii) id*id=id are successfully parsed or not:
S→L=R
S→R
L→*R
L→id
R→L
Solution:
1. Computation of LEADING:
LEADING(S) = {=, * , id}
LEADING(L) = {* , id}
LEADING(R) = {* , id}
2. Computation of TRAILING:
TRAILING(S) = {= , * , id}
TRAILING(L)= {* , id}
TRAILING(R)= {* , id}
3. Precedence Table:
      =     *     id    $
=     e     <·    <·    ·>
*     ·>    <·    <·    ·>
id    ·>    e     e     ·>
$     <·    <·    <·    accept
* All undefined entries are error (e).
2. id*id=id
STACK       INPUT STRING      ACTION
$           id*id=id$         $ <· id, push id
Example 3: Check whether the following grammar is an operator precedence grammar or not.
E→E+E
E→E*E
E→id
Solution:
1. Computation of LEADING:
LEADING(E) = {+, * , id}
2. Computation of TRAILING:
TRAILING(E) = {+, * , id}
3. Precedence Table:
         +         *         id       $
+        <· ·>     <· ·>     <·       ·>
*        <· ·>     <· ·>     <·       ·>
id       ·>        ·>        e        ·>
$        <·        <·        <·       accept
All undefined entries are error (e). Since the precedence table has multiple relations
defined between the same pair of terminals (for example, both + <· + and + ·> + hold), the
grammar is not an operator precedence grammar.
LR PARSERS:
An efficient bottom-up syntax analysis technique that can be used to parse a large class of
CFGs is called LR(k) parsing. The “L” is for left-to-right scanning of the input, the
“R” for constructing a rightmost derivation in reverse, and the “k” for the number of
input symbols of lookahead that are used in making parsing decisions. When (k) is omitted,
it is assumed to be 1. Table 2.2 shows the comparison between LL and LR parsers.
Table 2.2 LL vs. LR
Fig 2.25 Model of an LR Parser
The parsing table consists of two parts: the action and goto functions.
Action: The parsing program determines sm, the state currently on top of the stack, and ai,
the current input symbol. It then consults action[sm, ai] in the action table, which can have
one of four values:
1. shift s, where s is a state,
2. reduce by a grammar production A → β,
3. accept, and
4. error.
Goto: The function goto takes a state and a grammar symbol as arguments and produces a
state.
CONSTRUCTING SLR PARSING TABLE:
To perform SLR parsing, take the grammar as input and do the following:
1. Find the LR(0) items.
2. Compute the closure of the item sets.
3. Compute goto(I,X), where I is a set of items and X is a grammar symbol.
LR(0) items:
An LR(0) item of a grammar G is a production of G with a dot at some position of the right
side. For example, production A → XYZ yields the four items :
A → •XYZ
A → X•YZ
A → XY•Z
A → XYZ•
Closure operation:
If I is a set of items for a grammar G, then closure(I) is the set of items constructed from I by
the two rules:
1. Initially, every item in I is added to closure(I).
2. If A → α•Bβ is in closure(I) and B → γ is a production, then add the item B → •γ
to closure(I), if it is not already there. We apply this rule until no more new items can be
added to closure(I).
Goto operation:
Goto(I, X) is defined to be the closure of the set of all items [A → αX•β] such that
[A → α•Xβ] is in I.
Steps to construct the SLR parsing table for grammar G are:
1. Augment G and produce G'.
2. Construct the canonical collection of sets of items C for G'.
3. Construct the parsing action function action and the goto function using the following
algorithm, which requires FOLLOW(A) for each non-terminal of the grammar.
SLR Parsing algorithm:
Input: An input string w and an LR parsing table with functions action and goto for grammar
G.
Output: If w is in L(G), a bottom-up parse for w; otherwise, an error indication.
Method: Initially, the parser has s0 on its stack, where s0 is the initial state, and w$ in the
input buffer. The parser then executes the following program:
begin
   set ip to point to the first symbol of w$;
   repeat forever begin
      let s be the state on top of the stack and a the symbol pointed to by ip;
      if action[s, a] = shift s' then begin
         push a, then push s' on top of the stack;
         advance ip to the next input symbol
      end
      else if action[s, a] = reduce A → β then begin
         pop 2*|β| symbols off the stack;
         let s' be the state now on top of the stack;
         push A, then push goto[s', A] on top of the stack;
         output the production A → β
      end
      else if action[s, a] = accept then return
      else error()
   end
end
Example: Consider the following grammar:
1. E → E + T
2. E → T
3. T → T * F
4. T → F
5. F → (E)
6. F → id
Augmented grammar:
E' → E
E → E + T
E → T
T → T * F
T → F
F → (E)
F → id
Steps 1 and 2: Construct the canonical collection of sets of LR(0) items for the augmented
grammar.
Step 3: Construction of the parsing table.
1. Computation of FOLLOW is required to fill the reduction actions in the ACTION part
of the table.
FOLLOW(E) = { +, ), $ }
FOLLOW(T) = { +, *, ), $ }
FOLLOW(F) = { +, *, ), $ }
Step 4: Parse the given input. Fig 2.29 shows the parsing of the string id*id+id using the
stack implementation.
Fig 2.29 Moves of LR parser on id*id+id
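The moves of Fig 2.29 can be reproduced with a short table-driven driver. The C sketch below hard-codes the SLR(1) ACTION and GOTO tables for this grammar, with productions numbered as above; the numeric encoding of shift, reduce and accept entries is our own convention:

#include <stdio.h>

enum { ID, PLUS, STAR, LP, RP, END };   /* terminal indices     */
enum { NT_E, NT_T, NT_F };              /* non-terminal indices */

#define S(x) ((x) + 1)                  /* shift, go to state x */
#define R(x) (-(x))                     /* reduce by prod. x    */
#define ACC  100                        /* accept               */

const int action[12][6] = {
/*         id      +      *      (      )      $    */
/* 0*/ { S(5),    0,     0,   S(4),    0,     0   },
/* 1*/ {   0,   S(6),    0,     0,     0,    ACC  },
/* 2*/ {   0,   R(2),  S(7),    0,   R(2),  R(2)  },
/* 3*/ {   0,   R(4),  R(4),    0,   R(4),  R(4)  },
/* 4*/ { S(5),    0,     0,   S(4),    0,     0   },
/* 5*/ {   0,   R(6),  R(6),    0,   R(6),  R(6)  },
/* 6*/ { S(5),    0,     0,   S(4),    0,     0   },
/* 7*/ { S(5),    0,     0,   S(4),    0,     0   },
/* 8*/ {   0,   S(6),    0,     0,   S(11),   0   },
/* 9*/ {   0,   R(1),  S(7),    0,   R(1),  R(1)  },
/*10*/ {   0,   R(3),  R(3),    0,   R(3),  R(3)  },
/*11*/ {   0,   R(5),  R(5),    0,   R(5),  R(5)  },
};

const int go[12][3] = {                 /* GOTO[state][non-terminal] */
    { 1, 2, 3 }, {-1,-1,-1}, {-1,-1,-1}, {-1,-1,-1},
    { 8, 2, 3 }, {-1,-1,-1}, {-1, 9, 3 }, {-1,-1,10 },
    {-1,-1,-1}, {-1,-1,-1}, {-1,-1,-1}, {-1,-1,-1},
};

const int rhs_len[7] = { 0, 3, 1, 3, 1, 3, 1 };              /* |rhs| of prod. 1..6 */
const int lhs[7]     = { 0, NT_E, NT_E, NT_T, NT_T, NT_F, NT_F };

int main(void) {
    int input[] = { ID, STAR, ID, PLUS, ID, END };  /* id * id + id $ */
    int stack[64] = { 0 };                          /* state stack    */
    int top = 0, ip = 0;
    for (;;) {
        int a = action[stack[top]][input[ip]];
        if (a == ACC)   { printf("accept\n"); return 0; }
        else if (a > 0) { stack[++top] = a - 1; ip++; }      /* shift  */
        else if (a < 0) {                                    /* reduce */
            int p = -a;
            top -= rhs_len[p];
            stack[top + 1] = go[stack[top]][lhs[p]];
            top++;
            printf("reduce by production %d\n", p);
        }
        else { printf("error\n"); return 1; }
    }
}

Running it on id*id+id prints the reduction sequence 6, 4, 6, 3, 2, 6, 4, 1 followed by accept, matching the moves of Fig 2.29.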
TOP-DOWN PARSING:
Recursive Descent Parsing (with backtracking):
Consider the grammar
S → cAd
A → ab | a
and the input string w = cad. Construction of the parse tree is shown in Fig 2.21.
The leftmost leaf, labeled c, matches the first symbol of w; hence we advance the input
pointer to a, the second symbol of w. Fig 2.21(b) and (c) show the backtracking required to
match the input string.
Predictive Parser:
A recursive descent parser that needs no backtracking can be built for a grammar after
eliminating left recursion and performing left factoring; such a parser is called a predictive
parser. Let us understand how to eliminate left recursion and how to left-factor a grammar.
Eliminating Left Recursion:
A grammar is said to be left recursive if it has a non-terminal A such that there is a derivation
A=>Aα for some string α. Top-down parsing methods cannot handle left-recursive grammars.
Hence, left recursion can be eliminated as follows:
If there is a production A → Aα | β, it can be replaced with a sequence of two productions
A → βA'
A' → αA' | ε
without changing the set of strings derivable from A.
Example : Consider the following grammar for arithmetic expressions:
E → E+T | T
T → T*F | F
F → (E) | id
First eliminate the left recursion for E as
E → TE'
E' → +TE' | ε
Then eliminate for T as
T → FT'
T' → *FT' | ε
Thus the obtained grammar after eliminating left recursion is
E → TE'
E' → +TE' | ε
T → FT'
T' → *FT' | ε
F → (E) | id
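This transformed grammar is exactly the form a recursive descent parser without backtracking needs: each non-terminal becomes one function, and the lookahead symbol alone decides which alternative to use. A minimal C sketch, with id abbreviated to the single character i (the function names and this encoding are ours):

#include <stdio.h>
#include <stdlib.h>

const char *ip;                         /* input pointer */

void error(void) { printf("syntax error\n"); exit(1); }
void match(char c) { if (*ip == c) ip++; else error(); }

void E(void); void Ep(void); void T(void); void Tp(void); void F(void);

void E(void)  { T(); Ep(); }                                  /* E  -> T E'        */
void Ep(void) { if (*ip == '+') { match('+'); T(); Ep(); } }  /* E' -> +TE' | eps  */
void T(void)  { F(); Tp(); }                                  /* T  -> F T'        */
void Tp(void) { if (*ip == '*') { match('*'); F(); Tp(); } }  /* T' -> *FT' | eps  */
void F(void)  {                                               /* F  -> (E) | id    */
    if (*ip == '(') { match('('); E(); match(')'); }
    else if (*ip == 'i') match('i');
    else error();
}

int main(void) {
    ip = "i+i*i";                       /* id + id * id */
    E();
    if (*ip == '\0') printf("accepted\n"); else error();
    return 0;
}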
Algorithm to eliminate left recursion:
1. Arrange the non-terminals in some order A1, A2, . . . , An.
2. for i := 1 to n do begin
       for j := 1 to i-1 do begin
           replace each production of the form Ai → Aj γ
           by the productions Ai → δ1γ | δ2γ | . . . | δkγ,
           where Aj → δ1 | δ2 | . . . | δk are all the current Aj-productions;
       end
       eliminate the immediate left recursion among the Ai-productions
   end
Left factoring:
Left factoring is a grammar transformation that is useful for producing a grammar suitable for
predictive parsing. When it is not clear which of two alternative productions to use to expand
a non-terminal A, we can rewrite the A-productions to defer the decision until we have seen
enough of the input to make the right choice.
If there is any production A → αβ1 | αβ2, it can be rewritten as
A → αA'
A' → β1 | β2
Consider the grammar,
S → iEtS | iEtSeS | a
E → b
Here i, t, e stand for if, then, and else; E and S stand for “expression” and “statement”.
After Left factored, the grammar becomes
S → iEtSS' | a
S' → eS | ε
E→b
The program considers X, the symbol on top of the stack, and a, the current input symbol.
These two symbols determine the action of the parser. There are three possibilities.
1. If X = a = $, the parser halts and announces successful completion of parsing.
2. If X = a ≠ $, the parser pops X off the stack and advances the input pointer to the next
input symbol.
3. If X is a nonterminal, the program consults entry M[X,a] of the parsing table M. This
entry will be either an X-production of the grammar or an error entry. If, for example,
M[X,a] = {X→UVW}, the parser replaces X on top of the stack by WVU (with U on
top). If M[X, a] = error, the parser calls an error recovery routine.
Method: Initially, the parser is in a configuration with $S on the stack (S, the start symbol
of G, on top) and w$ in the input buffer. The program uses the predictive parsing table M to
produce a parse for the input.
Fig 2.24 Moves made by predictive parser on input id+id*id
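The driver just described fits in a short C sketch. Below, the parsing table for the grammar E → TE', E' → +TE' | ε, T → FT', T' → *FT' | ε, F → (E) | id is hard-coded as a function; we encode E' as e, T' as t, and id as i, and these single-character conventions are ours:

#include <stdio.h>
#include <string.h>

/* Return the production body to expand X with on lookahead a,
   or NULL on error. "" represents an epsilon production. */
const char *table(char X, char a) {
    switch (X) {
    case 'E': if (a=='i' || a=='(') return "Te";  break;
    case 'e': if (a=='+') return "+Te";
              if (a==')' || a=='$') return "";    break;
    case 'T': if (a=='i' || a=='(') return "Ft";  break;
    case 't': if (a=='*') return "*Ft";
              if (a=='+' || a==')' || a=='$') return ""; break;
    case 'F': if (a=='i') return "i";
              if (a=='(') return "(E)";           break;
    }
    return NULL;
}

int is_nonterminal(char X) { return strchr("EeTtF", X) != NULL; }

int main(void) {
    const char *ip = "i+i*i$";          /* id + id * id $ */
    char stack[64] = "$E";              /* $ at bottom, start symbol on top */
    int top = 1;
    while (top >= 0) {
        char X = stack[top];
        if (!is_nonterminal(X)) {       /* terminal or $ on top */
            if (X != *ip) { printf("error\n"); return 1; }
            if (X == '$') { printf("accept\n"); return 0; }
            top--; ip++;                /* match: pop and advance */
        } else {
            const char *rhs = table(X, *ip);
            if (!rhs) { printf("error\n"); return 1; }
            printf("%c -> %s\n", X, *rhs ? rhs : "epsilon");
            top--;                      /* pop X ...              */
            for (int k = (int)strlen(rhs) - 1; k >= 0; k--)
                stack[++top] = rhs[k];  /* ... push rhs reversed  */
        }
    }
    return 0;
}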
LL(1) Grammars:
For some grammars the parsing table may have some entries that are multiply defined. For
example, if G is left recursive or ambiguous, then the table will have at least one multiply
defined entry. A grammar whose parsing table has no multiply defined entries is said to be an
LL(1) grammar.
Example: Consider the following grammar:
S→ iEtS | iEtSeS | a
E→b
After left factoring, we have
S → iEtSS' | a
S' → eS | ε
E → b
To construct a parsing table, we need FIRST() and FOLLOW() for all the non-terminals.
FIRST(S) = { i, a }
FIRST(S') = { e, ε }
FIRST(E) = { b }
FOLLOW(S) = { $, e }
FOLLOW(S') = { $, e }
FOLLOW(E) = {t}
Parsing Table for the grammar:
Since the entry M[S', e] contains more than one production (both S' → eS and S' → ε), the
grammar is not an LL(1) grammar.
ERROR HANDLING AND RECOVERY IN SYNTAX ANALYZER:
In this phase of compilation, all possible errors made by the user are detected and reported
to the user in the form of error messages. This process of locating errors and reporting them
to users is called the error handling process. It involves:
Detection
Reporting
Recovery
Classification of Errors
Compile-time errors:
Compile-time errors are of three types:
1. Lexical errors
These errors are detected during the lexical analysis phase.
2. Syntactic errors
These errors are detected during the syntax analysis phase. Typical syntax errors are:
Errors in structure
Missing operator
Misspelled keywords
Unbalanced parenthesis
Error Recovery Strategies:
1. Panic mode recovery
In this method, successive characters from the input are removed one at a time until a
designated set of synchronizing tokens is found. Synchronizing tokens are delimiters
such as ; or }.
The advantage is that it is easy to implement and guarantees not to go into an infinite
loop; a small C sketch of this strategy appears after this list.
The disadvantage is that a considerable amount of input is skipped without checking it
for additional errors.
3. Error production
If a user has knowledge of common errors that can be encountered then, these errors
can be incorporated by augmenting the grammar with error productions that generate
erroneous constructs.
If this is used then, during parsing appropriate error messages can be generated and
parsing can be continued.
The disadvantage is that it’s difficult to maintain.
4. Global Correction
The parser examines the whole program and tries to find the closest error-free match for it.
The closest match is the program that requires the fewest insertions, deletions, and changes
of tokens to be derived from the erroneous input.
Due to high time and space complexity, this method is not implemented practically.
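As referenced in the panic-mode item above, here is a minimal C sketch of the skipping step; the function name and the choice of ; and } as synchronizing tokens are illustrative:

#include <stdio.h>

/* Discard input characters until a synchronizing token is found. */
void panic_mode_recover(FILE *in) {
    int c;
    while ((c = fgetc(in)) != EOF)
        if (c == ';' || c == '}')
            return;               /* parsing can resume after the delimiter */
}

int main(void) {
    panic_mode_recover(stdin);    /* demo: skip everything up to ; or } */
    printf("resynchronized\n");
    return 0;
}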
3. Semantic errors
These errors are detected during the semantic analysis phase. Typical semantic errors are:
Incompatible types of operands
Undeclared variables
Mismatch of actual arguments with formal ones
YACC:
Fig 2.26 Compilation Sequence
The patterns in the above diagram are in a file you create with a text editor. Lex will read
your patterns and generate C code for a lexical analyzer or scanner. The lexical analyzer
matches strings in the input, based on your patterns, and converts the strings to tokens.
Tokens are numerical representations of strings, and simplify processing.
When the lexical analyzer finds identifiers in the input stream it enters them in a symbol
table. The symbol table may also contain other information such as data type (integer or real)
and location of each variable in memory. All subsequent references to identifiers refer to the
appropriate symbol table index.
The grammar in the above diagram is a text file you create with a text editor. Yacc will read
your grammar and generate C code for a syntax analyzer or parser. The syntax analyzer uses
grammar rules that allow it to analyze tokens from the lexical analyzer and create a syntax
tree. The syntax tree imposes a hierarchical structure on the tokens. For example, operator
precedence and associativity are apparent in the syntax tree. The next step, code generation,
does a depth-first walk of the syntax tree to generate code. Some compilers produce machine
code, while others, as shown above, output assembly language.
Fig. 2.27 Building a Compiler with Lex/Yacc
Yacc reads the grammar descriptions in bas.y and generates a syntax analyzer (parser) that
includes function yyparse, in file y.tab.c. Included in file bas.y are token declarations. The –d
option causes yacc to generate definitions for tokens and place them in file y.tab.h.
Lex reads the pattern descriptions in bas.l, includes file y.tab.h, and generates a lexical
analyzer, that includes function yylex, in file lex.yy.c.
Finally, the lexer and parser are compiled and linked together to create executable bas.exe.
From main we call yyparse to run the compiler. Function yyparse automatically calls yylex to
obtain each token.
Input File:
YACC input file is divided into three parts.
Definition Part:
The definition part includes information about the tokens used in the syntax definition.
The definition part can include C code external to the definition of the parser and variable
declarations, within %{ and %} in the first column.
Rules Part:
The rules part contains grammar definition in a modified BNF form.
It can also contain the main() function definition if the parser is going to be run as a
program.
Auxiliary Routines Part:
The auxiliary routines part contains C code for supporting functions, such as main() and
yyerror().
Example Program:
Evaluation of an Arithmetic Expression using an Unambiguous Grammar (Using Lex and Yacc Tools)
E → E + T | E - T | T
T → T * F | T / F | F
F → (E) | id
%option noyywrap
%{
#include <stdio.h>
#include "y.tab.h"
void yyerror(char *s);
extern int yylval;
%}
digit [0-9]
%%
{digit}+   { yylval = atoi(yytext); return NUM; }
[-+*/\n]   { return *yytext; }
\(         { return *yytext; }
\)         { return *yytext; }
.          { yyerror("syntax error"); }
%%
Fig 2.28 Lex Program

%{
#include <stdio.h>
void yyerror(char *);
extern int yylex(void);
%}
%token NUM
%%
S : S E '\n'      { printf("%d\n", $2); }
  |
  ;
E : E '+' T       { $$ = $1 + $3; }
  | E '-' T       { $$ = $1 - $3; }
  | T             { $$ = $1; }
  ;
T : T '*' F       { $$ = $1 * $3; }
  | T '/' F       { $$ = $1 / $3; }
  | F             { $$ = $1; }
  ;
F : '(' E ')'     { $$ = $2; }
  | NUM           { $$ = $1; }
  ;
%%
void yyerror(char *s)
{
    printf("%s", s);
}
int main()
{
    return yyparse();
}
Fig 2.29 YACC Program
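To build and run the two programs, a typical command sequence (assuming the file names bas.l and bas.y from Fig 2.27) is:

yacc -d bas.y        # generates y.tab.c and y.tab.h
lex bas.l            # generates lex.yy.c
cc y.tab.c lex.yy.c -o bas
echo "2+3*4" | ./bas # prints 14, since * binds tighter than +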