For each n ≥ 0 we define
Σ^n = { w ∈ Σ* | |w| = n }.
We define
Σ+ = ⋃_{n≥1} Σ^n.
(Thus Σ* = Σ+ ∪ {ε}.)
For a symbol or word x, x^n denotes x concatenated with itself n times, with the convention
that x^0 denotes ε.
A language over Σ is a set L ⊆ Σ*.
Two languages L_1 and L_2 over a common alphabet Σ are equal if they are equal as sets.
Thus L_1 = L_2 if, and only if, L_1 ⊆ L_2 and L_2 ⊆ L_1.
2.2 Decidability
Given a language L over some alphabet Σ, a basic question is: For each possible word w ∈ Σ*,
can we effectively decide if w is a member of L or not? We call this the decision problem for L.
Note the use of the word effectively: this implies the mechanism by which we decide on
membership (or non-membership) must be a finitistic, deterministic and mechanical procedure
that can be carried out by some form of computing agent. Also note the decision problem asks
if a given word is a member of L or not; that is, it is not sufficient to be only able to decide
when words are members of L.
More precisely then, a language L ⊆ Σ* is decidable if there is such a procedure that, given any
word w ∈ Σ*, terminates with the answer Yes when w ∈ L and with the answer No when w ∉ L.
Example. Consider the language over {0, 1} given by
L = { w | w = 0^n 1 for some n }.
Does the program below solve the decision problem for L?
read( char );
if char = END_OF_STRING then
print( "No" )
else /* char must be 0 or 1 */
while char = 0 do
read( char )
od; /* char must be 1 or END_OF_STRING */
if char = 1 then
print( "Yes" )
else
print( "No" )
fi
fi
Answer: ....
2.3 Basic Facts
(1) Every finite language is decidable. (Hence every undecidable language is infinite.)
(2) Not every infinite language is undecidable.
(3) Programming languages are (usually) infinite but (always) decidable. (Why?)
2.4 Applications to Compilation
Languages may be classified by the means by which they are defined. Of interest to us are
regular languages and context-free languages.
Regular Languages. The significant aspects of regular languages are:
they are defined by patterns called regular expressions;
every regular language is decidable;
the decision problem for any regular language is solved by a deterministic finite state
automaton (DFA); and
programming languages' lexical patterns are specified using regular expressions, and lex-
ical analysers are (essentially) DFAs.
Regular languages and their relationship to lexical analysis are the subjects of the next section.
Context-Free Languages. The significant aspects of context-free languages are:
they are defined by rules called context-free grammars;
every context-free language is decidable;
the decision problem for any context-free language of interest to us is solved by a deter-
ministic push-down automaton (DPDA);
programming language syntax is specified using context-free grammars, and (most) parsers
are (essentially) DPDAs.
Context-free languages and their relationship to syntax analysis are the subjects of sections 4
and 5.
3 Lexical Analysis
In this section we study some theoretical concepts with respect to the class of regular languages
and apply these concepts to the practical problem of lexical analysis. Firstly, in Section 3.1, we
define the notion of a regular expression and show how regular expressions determine regular
languages. We then, in Section 3.2, introduce deterministic finite automata (DFAs), the class
of algorithms that solve the decision problems for regular languages. We show how regular
expressions and DFAs can be used to specify and implement lexical analysers in Section 3.3,
and in Section 3.4 we take a brief look at Lex, a popular lexical analyser generator built upon
the theory of regular expressions and DFAs.
Note. This section contains mainly theoretical definitions; the lectures will cover examples and
diagrams illustrating the theory.
3.1 Regular Expressions
Recall from the Introduction that a lexical analyser uses pattern matching with respect to
rules associated with the source language's tokens. For example, the token then is associated
with the pattern "t, h, e, n", and the token id might be associated with the pattern "an
alphabetic character followed by any number of alphanumeric characters". The notation of
regular expressions is a mathematical formalism ideal for expressing patterns such as these,
and thus ideal for expressing the lexical structure of programming languages.
3.1.1 Definition
Regular expressions represent patterns of strings of symbols. A regular expression r matches a
set of strings over an alphabet. This set is denoted L(r) and is called the language determined
or generated by r.
Let Σ be an alphabet. We define the set RE(Σ) of regular expressions over Σ, the strings they
match and thus the languages they determine, as follows:
∅ ∈ RE(Σ) matches no strings. The language determined is L(∅) = ∅.
ε ∈ RE(Σ) matches only the empty string. Therefore L(ε) = {ε}.
If a ∈ Σ then a ∈ RE(Σ) matches the string a. Therefore L(a) = {a}.
If r and s are in RE(Σ) and determine the languages L(r) and L(s) respectively, then:
r|s ∈ RE(Σ) matches all strings matched either by r or by s. Therefore, L(r|s) =
L(r) ∪ L(s).
rs ∈ RE(Σ) matches any string that is the concatenation of two strings, the first
matching r and the second matching s. Therefore, the language determined is
L(rs) = L(r)L(s) = { uv | u ∈ L(r) and v ∈ L(s) }.
r* ∈ RE(Σ) matches all finite concatenations of strings which all match r. The
language denoted is thus
L(r*) = (L(r))* = ⋃_{i≥0} (L(r))^i = {ε} ∪ L(r) ∪ L(r)L(r) ∪ . . .
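For example (the kind of small illustration covered in lectures): over Σ = {a, b}, the expression a(a|b)* determines the language of all strings that begin with an a, and (a|b)*ab determines the language of all strings that end in ab.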
3.1.2 Regular Languages
Let L be a language over Σ. L is said to be a regular language if L = L(r) for some r ∈ RE(Σ).
3.1.3 Notation
We need to use parentheses to override the convention concerning the precedence of the
operators. The normal convention is: * is higher than concatenation, which is higher
than |. Thus, for example, a|bc* is a|(b(c*)).
We write r+ for rr*.
We write r? for ε|r.
We write r^n as an abbreviation for r . . . r (n times r), with r^0 denoting ε.
3.1.4 Lemma
Writing r = s to mean L(r) = L(s) for two regular expressions r, s ∈ RE(Σ), the following
identities hold for all r, s, t ∈ RE(Σ):
r|s = s|r                (| is commutative)
(r|s)|t = r|(s|t)        (| is associative)
(rs)t = r(st)            (concatenation is associative)
r(s|t) = rs|rt           (concatenation
(r|s)t = rt|st           distributes over |)
r∅ = ∅r = ∅
∅|r = r
∅* = ε
r?* = r*
(r*)* = r*
(r*s*)* = (r|s)*
εr = rε = r.
3.1.5 Regular definitions
It is often useful to give names to complex regular expressions, and to use these names in place
of the expressions they represent. Given an alphabet Σ comprising all ASCII characters,
letter = A|B| . . . |Z|a|b| . . . |z
digit = 0|1| . . . |9
ident = letter(letter|digit)*
For every regular expression r ∈ RE(Σ) there is a machine (an algorithm) M such that, for each
word w ∈ Σ*, when input to M:
(1) if w ∈ L(r) then M terminates with output Yes, and
(2) if w ∉ L(r) then M terminates with output No.
Thus, every regular language is decidable.
The machines in question are Deterministic Finite State Automata.
3.2 Deterministic Finite State Automata
In this section we define the notion of a DFA without reference to its application in lexical
analysis. Here we are interested purely in solving the decision problem for regular languages;
that is, defining machines that say "yes" or "no" to a given input string, depending on its
membership of a particular language. In Section 3.3 we use DFAs as the basis for lexical
analysers: pattern matching algorithms that output sequences of tokens.
3.2.1 Definition
A deterministic finite state automaton (or DFA) M is a 5-tuple
M = (Q, Σ, δ, q_0, F)
where
Q is a finite non-empty set of states,
Σ is an alphabet,
δ : Q × Σ → Q is the transition or next-state function,
q_0 ∈ Q is the initial state, and
F ⊆ Q is the set of accepting or final states.
The idea behind a DFA M is that it is an abstract machine that defines a language L(M) ⊆ Σ*,
as follows. First, we extend the transition function δ to a function δ̂ : Q × Σ* → Q by
δ̂(q, ε) = q
for each q ∈ Q, and
δ̂(q, aw) = δ̂(δ(q, a), w)
for each q ∈ Q, a ∈ Σ and w ∈ Σ*.
We define the language of M by
L(M) = { w ∈ Σ* | δ̂(q_0, w) ∈ F }.
3.2.2 Transition Diagrams
DFAs are best understood by depicting them as transition diagrams; these are directed graphs
with nodes representing states and labelled arcs between states representing transitions.
A transition diagram for a DFA is drawn as follows:
(1) Draw a node labelled q for each state q ∈ Q.
(2) For every q ∈ Q and every a ∈ Σ draw an arc labelled a from node q to node δ(q, a);
(3) Draw an unlabelled arc from outside the DFA to the node representing the initial
state q_0;
(4) Indicate each final state by drawing a concentric circle around its node to form a
double circle.
3.2.3 Examples
Let M_1 = (Q, Σ, δ, q_0, F) where Q = {1, 2, 3, 4}, Σ = {a, b}, q_0 = 1, F = {4} and where δ is
given by:
δ(1, a) = 2    δ(1, b) = 3
δ(2, a) = 3    δ(2, b) = 4
δ(3, a) = 3    δ(3, b) = 3
δ(4, a) = 3    δ(4, b) = 4.
From the transition diagram for M_1 it is clear that:
L(M_1) = { w ∈ {a, b}* | δ̂(1, w) ∈ F }
       = { w ∈ {a, b}* | δ̂(1, w) = 4 }
       = { ab, abb, abbb, . . . , ab^n, . . . }
       = L(ab+).
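As a small worked check of the extended transition function (using the table for δ above):
δ̂(1, abb) = δ̂(δ(1, a), bb) = δ̂(2, bb) = δ̂(4, b) = δ̂(4, ε) = 4 ∈ F,
so abb ∈ L(M_1), whereas δ̂(1, aa) = δ̂(2, a) = δ̂(3, ε) = 3 ∉ F, so aa ∉ L(M_1).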
Let M_2 be obtained from M_1 by adding states 1 and 2 to F. Then
L(M_2) = L(ε|ab*).
Let M_3 be obtained from M_1 by changing F to {3}. Then
L(M_3) = L((b|aa|abb*a)(a|b)*).
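A DFA such as M_1 is straightforward to implement as a table-driven program. The following Pascal fragment is our own illustrative sketch (it is not part of the original notes, it uses Turbo-Pascal-style typed constants, and it assumes the input word contains only the characters a and b):

type
  dfa_state = 1..4;

const
  { The transition function delta of M_1, indexed by state and input symbol }
  delta : array[1..4, 'a'..'b'] of dfa_state =
    ((2, 3),     { delta(1,a) = 2,  delta(1,b) = 3 }
     (3, 4),     { delta(2,a) = 3,  delta(2,b) = 4 }
     (3, 3),     { delta(3,a) = 3,  delta(3,b) = 3 }
     (3, 4));    { delta(4,a) = 3,  delta(4,b) = 4 }

function accepts(w : string) : boolean;
{ Simulate M_1 on w and report whether the state reached is in F = {4} }
var
  q : dfa_state;
  i : integer;
begin
  q := 1;                    { the initial state q_0 = 1 }
  for i := 1 to length(w) do
    q := delta[q, w[i]];     { one transition per input character }
  accepts := (q = 4)
end;

With these declarations, accepts('abb') returns TRUE and accepts('aa') returns FALSE, matching the calculation above.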
Simplifications to transition diagrams.
It is often the case that a DFA has an error state, that is, a non-accepting state from
which there are no transitions other than back to the error state. In such a case it is
convenient to apply the convention that any apparently missing transitions are transitions
to the error state.
It is also common for there to be a large number of transitions between two given states
in a DFA, which results in a cluttered transition diagram. For example, in an identifier
recognition DFA, there may be 52 arcs labelled with each of the lower- and upper-case
letters from the start state to a state representing that a single letter has been recognised.
It is convenient in such cases to define a set comprising the labels of each of these arcs,
for example,
letter = {a, b, c, . . . , z, A, B, C, . . . , Z}
and to replace the arcs by a single arc labelled by the name of this set, e.g. letter.
It is acceptable practice to use these conventions provided it is made clear that they are being
used.
3.2.4 Equivalence Theorem
(1) For every r ∈ RE(Σ) there exists a DFA M with alphabet Σ such that L(M) = L(r).
(2) For every DFA M with alphabet Σ there exists an r ∈ RE(Σ) such that L(r) =
L(M).
Proof. See J.E. Hopcroft and J.D. Ullman, Introduction to Automata Theory, Languages, and
Computation (Addison Wesley, 1979).
Applications. The significance of the Equivalence Theorem is that its proof is constructive;
there is an algorithm that, given a regular expression r, builds a DFA M such that L(M) =
L(r). Thus, if we can write a fragment of a programming language's syntax in terms of regular
expressions, then by part (1) of the Theorem we can automatically construct a lexical analyser
for that fragment.
Part (2) of the Equivalence Theorem is a useful tool for showing that a language is regular,
since if we cannot find a regular expression directly, part (2) states that it is sufficient to find
a DFA that recognises the language.
The standard algorithm for constructing a DFA from a given regular expression is not difficult,
but would require that we also take a look at nondeterministic finite state automata (NFAs).
NFAs are equivalent in power to DFAs but are slightly harder to understand (see the course
text for details). Given a regular expression, the RE-DFA algorithm first constructs an NFA
equivalent to the RE (by a method known as Thompson's Construction), and then transforms
the NFA into an equivalent DFA (by a method known as the Subset Construction).
3.3 DFAs for Lexical Analysis
Let's suppose we wish to construct a lexical analyser based on a DFA. We have seen that it
is easy to construct a DFA that recognises lexemes for a given programming language token
(e.g. for individual keywords, for identifiers, and for numbers). However, a lexical analyser
has to deal with all of a programming language's lexical patterns, and has to repeatedly match
sequences of characters against these patterns and output corresponding tokens. We illustrate
how lexical analysers may be constructed using DFAs by means of an example.
3.3.1 An example DFA lexical analyser
Consider first writing a DFA for recognising tokens for a (minimal!) language with identifiers
and the symbols +, ; and := (we'll add keywords later). A transition diagram for a DFA that
recognises these symbols is given by:
[Transition diagram omitted. The DFA has states start, in_ident, in_assign and done. From
start: whitespace loops back to start; a letter moves to in_ident; : moves to in_assign; +, ;
and end_of_input move to done, giving the tokens plus, semi_colon and end_of_input; any
other character moves to done, giving error. From in_ident: letters and digits loop back to
in_ident; any other character (re-read) moves to done, giving identifier (or keyword). From
in_assign: = moves to done, giving assign; any other character moves to done, giving error.]
The lexical analyser code (see next section) consists of a procedure get_next_token which out-
puts the next token, which can be either identifier (for identifiers), plus (for +), semi_colon
(for ;), assign (for :=), error (if an error occurs, for example if an invalid character such as
( is read or if a : is not followed by a =) and end_of_input (for when the complete input file
has been read).
The lexical analyser (based on the DFA) begins in state start, and returns a token when it
enters state done; the token returned depends on the final transition it takes to enter the done
state, and is shown on the right hand side of the diagram.
For a state with an output arc labelled other, the intuition is that this transition is made
on reading any character except those labelled on the state's other arcs; re-read denotes
that the read character should not be consumed: it is re-read as the first character when
get_next_token is next called.
Notice that adding keyword recognition to the above DFA would be tricky to do by hand and
would lead to a complex DFA (why?). However, we can recognise keywords as identifiers using
the above DFA, and when the accepting state for identifiers is entered the lexeme stored in the
buffer can be checked against a table of keywords. If there is a match, then the appropriate
keyword token is output, else the identifier token is output.
Further, when we have recognised an identifier we will also wish to output its string value as
well as the identifier token, so that this can be used by the next phase of compilation.
3.3.2 Code for the example DFA lexical analyser
Let's consider how code may be written based on the above DFA. Let's add the keywords if,
then, else and fi to the language to make it slightly more realistic.
Firstly, we define enumerated types for the sets of states and tokens:
state = (start, in_identifier, in_assign, done);
token = (k_if, k_then, k_else, k_fi, plus, identifier, assign,
semi_colon, error, end_of_input);
Next, we define some variables that are shared by the lexical analyser and the syntax analyser.
The job of the procedure get_next_token is to set the value of current_token to the next
token, and if this token is identifier it also sets the value of current_identifier to the
current lexeme. The value of the Boolean variable reread_character determines whether the
last character read during the previous execution of get_next_token should be re-read at the
beginning of its next execution. The current_character variable holds the value of the last
character read by get_next_token.
current_token : token;
current_identifier : string[100];
reread_character : boolean;
current_character : char;
We also need the following auxiliary functions (their implementations are omitted here) with
the obvious interpretations:
function is_alpha(c : char) : boolean;
function is_digit(c : char) : boolean;
function is_white_space(c : char) : boolean;
Finally, we define two constant arrays:
{ Constants used to recognise keyword matches }
NUM_KEYWORDS = 4;
token_tab : array[1..NUM_KEYWORDS] of token = (k_if, k_then, k_else, k_fi);
keyword_tab : array[1..NUM_KEYWORDS] of string = ('if', 'then', 'else', 'fi');
that store keyword tokens and keywords (with associated keywords and tokens stored at the
same location in each array) and a function that searches the keyword array for a string and
returns the token associated with a matched keyword, or the token identifier if there is no
match. Notice that the arrays and function are easily modified for any number of keywords
appearing in our source language.
function keyword_lookup(s : string) : token;
{If s is a keyword, return this keyword's token; else
return the identifier token}
var
index : integer;
found : boolean;
begin
keyword_lookup := identifier;
found := FALSE;
index := 1;
while (index <= NUM_KEYWORDS) and (not found) do begin
if keyword_tab[index] = s then begin
keyword_lookup := token_tab[index];
found := TRUE
end;
index := index + 1
end
end;
The get_next_token procedure is implemented as follows. Notice that within the main loop
a case statement is used to deal with transitions from the current state. After the loop exits
(when the done state is entered), if an identifier has been recognised, keyword_lookup is
used to check whether or not a keyword has been matched.
procedure get_next_token;
{Sets the value of current_token by matching input characters. Also,
sets the values current_identifier and reread_character if
appropriate}
var
current_state : state;
no_more_input : boolean;
begin
current_state := start;
current_identifier := '';
while not (current_state = done) do begin
no_more_input := eof; {Check whether at end of file}
if not (reread_character or no_more_input) then
read(current_character);
reread_character := FALSE;
case current_state of
start:
if no_more_input then begin
current_token := end_of_input;
current_state := done
end else if is_white_space(current_character) then
current_state := start
else if is_alpha(current_character) then begin
current_identifier := current_identifier + current_character;
current_state := in_identifier
end else case current_character of
';' : begin
current_token := semi_colon;
current_state := done
end;
'+' : begin
current_token := plus;
current_state := done
end;
':' :
current_state := in_assign
else begin
current_token := error;
current_state := done;
end
end; {case}
in_identifier:
if (no_more_input or not(is_alpha(current_character)
or is_digit(current_character))) then begin
current_token := identifier;
current_state := done;
reread_character := true
end else
current_identifier := current_identifier + current_character;
in_assign:
if no_more_input or (current_character <> '=') then begin
current_token := error;
current_state := done
end else begin
current_token := assign;
current_state := done
end
end; {case}
end; {while}
if (current_token = identifier) then
current_token := keyword_lookup(current_identifier);
end;
Test code (in the absence of a syntax analyser) might be the following. This just repeatedly
calls get_next_token until the end of the input file has been reached, and prints out the value
of the token read.
{Request tokens from lexical analyser, outputting their
values, until end_of_input}
begin
reread_character := false;
repeat
get_next_token;
writeln('Current Token is ', token_to_text(current_token));
if (current_token = identifier) then
writeln('Identifier is ', current_identifier);
until (current_token = end_of_input)
end.
where
function token_to_text(t : token) : string;
converts token values to text.
3.4 Lex
Lex is a widely available lexical analyser generator.
3.4.1 Overview
Given a Lex source file comprising regular expressions for various tokens, Lex generates a lexical
analyser (based on a DFA), written in C, that groups characters matching the expressions into
lexemes, and can return their corresponding tokens.
In essence, a Lex file comprises a number of lines typically of the form:
pattern action
where pattern is a regular expression and action is a piece of C code.
When run on a Lex file, Lex produces a C file called lex.yy.c (a lexical analyser).
When compiled, lex.yy.c takes a stream of characters as input and whenever a sequence of
characters matches a given regular expression the corresponding action is executed. Characters
not matching any regular expressions are simply copied to the output stream.
Example. Consider the Lex fragment:
a { printf( "read a\n" ); }
b { printf( "read b\n" ); }
After compiling (see below on how to do this) we obtain a binary executable which when
executed on the input:
sdfghjklaghjbfghjkbbdfghjk
dfghjkaghjklaghjk
produces
sdfghjklread a
ghjread b
fghjkread b
read b
dfghjk
dfghjkread a
ghjklread a
ghjk
Example. Consider the Lex program:
%{
int abc_count, xyz_count;
%}
%%
ab[cC] {abc_count++; }
xyz {xyz_count++; }
\n { ; }
. { ; }
%%
main()
{
abc_count = xyz_count = 0;
yylex();
printf( "%d occurrences of abc or abC\n", abc_count );
printf( "%d occurrences of xyz\n", xyz_count );
}
This file first declares two global variables for counting the number of occurrences of abc
or abC and xyz.
Next come the regular expressions for these lexemes and actions to increment the relevant
counters.
Finally, there is a main routine to initialise the counters and call yylex().
When executed on input:
akhabfabcdbcaxyzXyzabChsdk
dfhslkdxyzabcabCdkkjxyzkdf
the lexical analyser produces:
4 occurrences of abc or abC
3 occurrences of xyz
Some features of Lex illustrated by this example are:
31
(1) The notation [ ] for |; for example, [cC] matches either c or C.
(2) The regular expression \n which matches a newline.
(3) The regular expression . which matches any character except a newline.
(4) The action ; which does nothing except to suppress printing.
3.4.2 Format of Lex Files
The format of a Lex file is:
definitions
analyser specification
auxiliary functions
Lex Definitions. The (optional) definitions section comprises macros (see below) and global
declarations of types, variables and functions to be used in the actions of the lexical analyser
and the auxiliary functions (if present). All such global declaration code is written in C and
surrounded by %{ and %}.
Macros are abbreviations for regular expressions to be used in the analyser specification. For
example, the token identifier could be defined by:
IDENTIFIER [a-zA-Z][a-zA-Z0-9]*
The shorthand character range construction [x-y] matches any of the characters between
(and including) x and y. For example, [a-c] means the same as a|b|c, and [a-cA-C] means
the same as a|b|c|A|B|C.
Definitions may use other definitions (enclosed in braces) as illustrated in:
ALPHA [a-zA-Z]
ALPHANUM [a-zA-Z0-9]
IDENTIFIER {ALPHA}{ALPHANUM}*
and:
ALPHA [a-zA-Z]
NUM [0-9]
ALPHANUM ({ALPHA}|{NUM})
IDENTIFIER {ALPHA}{ALPHANUM}*
Notice the use of parentheses in the definition of ALPHANUM. What would happen without them?
Lex Analyser Specifications. These have the form:
r_1        action_1
r_2        action_2
. . .      . . .
r_n        action_n
where r_1, r_2, . . . , r_n are regular expressions (possibly involving macros enclosed in braces) and
action_1, action_2, . . . , action_n are sequences of C statements.
Lex translates the specification into a function yylex() which, when called, causes the
following to happen:
The current input character(s) are scanned to look for a match with the regular expres-
sions.
If there is no match, the current character is printed out, and the scanning process resumes
with the next character.
If the next m characters match r_i then
(a) the matching characters are assigned to the string variable yytext,
(b) the integer variable yyleng is assigned the value m,
(c) the next m characters are skipped, and
(d) action_i is executed. If the last instruction of action_i is return n; (where n
is an integer expression) then the call to yylex() terminates and the value of
n is returned as the function's value; otherwise yylex() resumes the scanning
process.
If end-of-file is read at any stage, then the call to yylex() terminates returning the value
0.
If there is a match against two or more regular expressions, then the expression giving
the longest lexeme is chosen; if all lexemes are of the same length then the first matching
expression is chosen.
Lex Auxiliary Functions. This optional section has the form:
fun_1
fun_2
. . .
fun_n
where each fun_i is a complete C function.
We can also compile lex.yy.c with the lex library using the command:
gcc lex.yy.c -ll
This has the effect of automatically including a standard main() function, equivalent to:
main()
{
yylex();
return;
}
Thus in the absence of any return statements in the analyser's actions, this one call to yylex()
consumes all the input up to and including end-of-file.
3.4.3 Lexical Analyser Example
The lex program below illustrates how a lexical analyser for a Pascal-type language is defined.
Notice that the regular expression for identifiers is placed at the end of the list (why?).
We assume that the syntax analyser requests tokens by repeatedly calling the function yylex().
The global variable yylval (of type integer in this example) is generally used to pass tokens'
attributes from the lexical analyser to the syntax analyser and is shared by both phases of the
compiler. Here it is being used to pass integer values and identifiers' symbol table positions
to the syntax analyser.
%{
definitions (as integers) of IF, THEN, ELSE, ID, INTEGER, ...
%}
delim [ \t\n]
ws {delim}+
letter [A-Za-z]
digit [0-9]
id {letter}({letter}|{digit})*
integer [+\-]?{digit}+
%%
{ws} { ; }
if { return(IF); }
then { return(THEN); }
else { return(ELSE); }
... ...
{integer} { yylval = atoi(yytext); return(INTEGER); }
{id} { yylval = InstallInTable(); return(ID); }
%%
int InstallInTable()
{
put yytext in symbol table and return
the position it has been inserted.
}
4 Syntax Analysis
In this section we will look at the second phase of compilation: syntax analysis, or parsing.
Since parsing is a central concept in compilation and because (unlike lexical analysis) there are
many approaches to parsing, this section makes up most of the remainder of the course.
In Section 4.1 we discuss the class of context-free languages and its relationship to the syntac-
tic structure of programming languages and the compilation process. Parsing algorithms for
context-free languages fall into two main categories: top-down and bottom-up parsers (the names
refer to the process of parse tree construction). Different types of top-down and bottom-up
parsing algorithms will be discussed in Sections 4.2 and 4.3 respectively.
4.1 Context-Free Languages
Regular languages are inadequate for specifying all but the simplest aspects of programming
language syntax. To specify more complex languages such as
L = { w ∈ {a, b}* | w = a^n b^n for some n }, or
L = { w ∈ {(, )}* | w is a balanced string of brackets },
we need context-free grammars. A context-free grammar G is a 4-tuple
G = (T, N, S, P)
where
T is a finite set of terminal symbols,
N is a finite set of nonterminal symbols (with T and N disjoint),
S ∈ N is the start symbol, and
P is a finite set of productions, each of the form A → α for A ∈ N and α ∈ (T ∪ N)*.
Notation. In what follows we use:
a, b, c, . . . for members of T,
A, B, C, . . . for members of N,
. . . , X, Y, Z for members of T ∪ N,
u, v, w, . . . for members of T*, and
α, β, γ, . . . for members of (T ∪ N)*.
Examples.
(1) G_1 = (T, N, S, P) where
T = {a, b},
N = {S} and
P = {S → ab, S → aSb}.
(2) G_2 = (T, N, S, P) where
T = {a, b},
N = {S, X} and
P = {S → X, S → aa, S → bb, S → aSa, S → bSb, X → a, X → b}.
Notation. It is customary to define a context free grammar by simply listing its productions
and assuming:
The terminals and nonterminals of the grammar are exactly those symbols appearing in
the productions. (It is usually clear from the context whether a symbol is a terminal or
nonterminal.)
The start symbol is the nonterminal on the left-hand side of the first production.
Right-hand sides separated by | indicate alternatives.
For example, G_2 above can be written as
S → X | aa | bb | aSa | bSb
X → a | b
BNF Notation. The definition of a grammar as given so far is fine for simple theoretical
grammars. For grammars of real programming languages, a popular notation for context-free
grammars is Backus-Naur Form. Here, non-terminal symbols are given a descriptive name and
placed in angled brackets, and | is used, as above, to indicate alternatives. The alphabet
will also commonly include keywords and language symbols such as <=. For example, BNF for
simple arithmetic expressions might be
< exp > → < exp > + < exp > | < exp > - < exp > |
          < exp > * < exp > | < exp > / < exp > |
          ( < exp > ) | < number >
< number > → < digit > | < digit > < number >
< digit > → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9.
and a typical rule for while loops would be:
< while loop > → while < exp > do < command list > od
< command list > → < assignment > | < conditional > | . . .
Note that the symbol ::= is often used in place of →.
Derivation and Languages. A (context-free) grammar G defines a language L(G) in the
following way:
We say αAβ immediately derives αγβ if A → γ is a production of G. We write αAβ ⇒ αγβ.
We say α derives β in zero or more steps, in symbols α ⇒* β, if
(i) α = β, or
(ii) for some γ we have α ⇒* γ and γ ⇒ β.
If S ⇒* α then we say α is a sentential form.
We say α derives β in one or more steps, in symbols α ⇒+ β, if α ⇒* β and α ≠ β.
We define L(G) by
L(G) = { w ∈ T* | S ⇒+ w }.
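For example, in G_1 above we have the derivation
S ⇒ aSb ⇒ aaSbb ⇒ aaabbb,
so aaabbb ∈ L(G_1); indeed L(G_1) = { a^n b^n | n ≥ 1 }.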
Let L ⊆ T*. L is said to be a context-free language if L = L(G) for some context-free grammar G.
For any α ∈ (T ∪ N)*, there exists a parse tree in grammar G with yield α if, and only if, S ⇒* α.
Examples.
1. Consider the parse trees for the derivations (a) and (c) of the above examples. Notice that
the derivations are different and so are the parse trees, although both derivations are leftmost.
2. Let G be a grammar with productions
< stmt > → if < boolexp > then < stmt > |
           if < boolexp > then < stmt > else < stmt > |
           < other >
< boolexp > → . . .
< other > → . . .
A classical problem in parsing is that of the dangling else: Should
if < boolexp > then if < boolexp > then < other > else < other >
be parsed as
if < boolexp > then ( if < boolexp > then < other > else < other > )
or
if < boolexp > then ( if < boolexp > then < other > ) else < other >
Theorem. Let G = (T, N, S, P) be a context-free grammar and let w ∈ L(G). For every parse
tree of w there is precisely one leftmost derivation of w and precisely one rightmost derivation
of w.
4.1.6 Ambiguity
A grammar G for which there is some w ∈ L(G) for which there are two (or more) parse trees
is said to be ambiguous. In such a case, w is known as an ambiguous sentence. Ambiguity is
usually an undesirable property since it may lead to different interpretations of the same string.
Equivalently (by the above theorem), a grammar G is ambiguous if there is some w ∈ L(G) for
which there exists more than one leftmost derivation (or rightmost derivation).
Unfortunately:
Theorem. The decision problem: Is G an unambiguous grammar? is undecidable.
However, for a particular grammar we can sometimes resolve ambiguity via disambiguating
rules that force troublesome strings to be parsed in a particular way. For example, let G be
the grammar with productions
< stmt > → < matched > | < unmatched >
< matched > → if < boolexp > then < matched > else < matched > |
              < other >
< unmatched > → if < boolexp > then < stmt > |
                if < boolexp > then < matched > else < unmatched >
< boolexp > → . . .
< other > → . . .
(< matched > is intended to denote an if-statement with a matching else or some other (non-
conditional) statement, whereas < unmatched > denotes an if-statement with an unmatched
if.)
41
Now, according to this G,
if < boolexp > then if < boolexp > then < other > else < other >
has a unique parse tree. (exercise: convince yourself that this parse tree really is unique.)
4.1.7 Inherent Ambiguity
We have seen that it is possible to remove ambiguity from a grammar by changing the (non-
terminals and) productions to give an equivalent (in terms of the language defined) but unam-
biguous grammar. However, there are some context-free languages for which every grammar is
ambiguous; such languages are called inherently ambiguous.
Formally,
Theorem. There exists a context-free language L such that for every context-free grammar G,
if L = L(G) then G is ambiguous.
Furthermore,
Theorem. The decision problem: Is L an inherently ambiguous language? is undecidable.
4.1.8 Limits of Context-Free Grammars
It is known that not every language is context-free. For example,
L = { w ∈ {a, b, c}* | w = a^n b^n c^n for some n ≥ 1 }
is not context-free, nor is
L = { w ∈ {a, b, c}* | w = a^m b^n c^p for m ≤ n ≤ p },
although proof of these facts is beyond the scope of this course.
More significantly, there are some aspects of programming language syntax that cannot be
captured with a context-free grammar. Perhaps the most well-known example of this is the
rule that variables must be declared before they are used. Such properties are often called
context-sensitive since they can be specified by grammars that have productions of the form
αAβ → αγβ
which, informally, says that A can be replaced by γ, but only when it occurs in the context
α . . . β.
In practice we still use context-free grammars; problems are resolved by allowing strings (pro-
grams) that parse but are technically invalid, and then by adding further code that checks for
invalid (context-sensitive) properties.
In this section we describe a number of techniques for constructing a parser for a given context-
free language from a context-free grammar for that language. Parsers fall into two broad
categories: top-down parsers and bottom-up parsers, as described in the next two sections.
4.2 Top-Down Parsers
Top-down parsing can be thought of as the process of creating a parse tree for a given input
string by starting at the top with the start symbol and then working depth-first downwards to
the leaves. Equivalently, it can be viewed as creating a leftmost derivation of the input string.
We begin, in Section 4.2.1, by showing how a simple parser may be coded in an ad-hoc way using
a standard programming language such as Pascal. In Section 4.2.2 we define those grammars
for which such a simple parser may be written. In Section 4.2.3 we show how this simple
approach to parsing can be made more structured by using the same algorithm for all grammars,
where the algorithm uses a different look-up table for each particular grammar. Finally, in
Section 4.2.4 we show how to add syntax-tree construction and (very limited) error recovery
code to these parsing algorithms.
4.2.1 Recursive Descent Parsing
The simplest idea for creating a parser for a grammar is to construct a recursive descent parser
(r.d.p.). This is a top-down parser, and needs backtracking in general; i.e. the parser needs to
try out different production rules; if one production rule fails it needs to backtrack in the input
string (and the parse tree) in order to try further rules. Since backtracking can be costly in
terms of parsing time, and because other parsing methods (e.g. bottom-up parsers) do not
require it, it is rare to write backtracking parsers in practice.
We will show here how to write a non-backtracking r.d.p. for a given grammar. Such parsers
are commonly called predictive parsers since they attempt to predict which production rule
to use at each step in order to avoid the need to backtrack; if, when attempting to apply a
production rule, unexpected input is read, then this input must be an error and the parser
reports this fact. As we will see, it is not possible to write such a simple parser for all grammars:
for many programming languages' grammars such a parser cannot be written. For these we'll
need more powerful parsing methods which we will look at later.
We will not concern ourselves here with the construction of a parse tree or syntax tree; we'll
just attempt to write code that produces (implicitly, via procedure calls) a derivation of an
input string (if the string is in the language given by the grammar) or reports an error (for
an incorrect string). This code will form the basis of a complete syntax analyser: syntax tree
construction and error recovery code will just be inserted into it at the correct places, as we'll
see in Section 4.2.4.
We will write a r.d.p. for the grammar G with productions:
A → a | BCA
B → be | cA
C → d
We will assume the existence of a procedure get_next_token which reads terminal symbols (or
tokens) from the input, and places them into the variable current_token. The tokens for the
above grammar are a, b, c, d and e. We'll also assume the tokens end_of_input (for end of
string) and error for a non-valid input.
We'll also assume the following auxiliary procedure which tests whether the current token is
what is expected and, if so, reads the next token; otherwise it calls an error routine. Later
on we'll see more about error recovery, but for now assume that the procedure error simply
outputs a message and causes the program to terminate.
procedure match(expected : token);
begin
if current_token = expected then
get_next_token
else
error
end;
The parser is constructed from three procedures, one for each non-terminal symbol. The aim
of each procedure is to try to match any of the right-hand-sides of production rules for that
non-terminal symbol against the next few symbols on the input string; if it fails to do this then
there must be an error in the input string, and the procedure reports this fact.
Consider the non-terminal symbol A and the A-productions A → a and A → BCA. The
procedure parse_A attempts to match either an a or BCA. It first needs to decide which
production rule to choose: it makes its choice based on the current input token. If this is a, it
obviously uses the production rule A → a and is bound to succeed since it already knows it is
going to find the a(!).
Notice that any string derivable from BCA begins with either a b or a c, because of the right
hand sides of the two B-productions. Therefore, if either b or c is the current token, then
parse_A will attempt to parse the next few input symbols using the rule A → BCA. In order
to attempt to match BCA it calls procedures parse_B, parse_C and parse_A in turn, each of
which may result in a call to error if the input string is not in the language. If the current
token is neither a, b nor c the input cannot be derived from A and therefore error is called.
procedure parse_A;
begin
case current_token of
a : match(a);
b, c : begin
parse_B; parse_C; parse_A
end
else
error
end
end;
The procedures parse_B and parse_C are constructed in a similar fashion.
procedure parse_B;
begin
case current_token of
b : begin
match(b);
match(e)
end;
c : begin
match(c);
parse_A
end
else
error
end
end;
procedure parse_C;
begin
match(d)
end;
To execute the parser, we need to (i) make sure that the first token of the input string has
been read into current_token, (ii) call parse_A since this is the start symbol, and if an error
is not generated by this call (iii) check that the end of the input string has been reached. This
is accomplished by
get_next_token;
parse_A;
if current_token = end_of_input then
writeln('Success!')
else
error
By executing the parser on a range of inputs that drive the algorithm through all possible
branches (try the algorithm on beda, bed and cbedada for examples), it is easy to convince
oneself that the algorithm is a neat and correct solution to the parsing problem for this particular
grammar.
Notice how the algorithm
i. determines a sequence of productions that are equivalent to the construction of a parse
tree that starts from the root and builds down to the leaves, and
ii. determines a leftmost derivation.
Although it is possible to construct compact, efficient recursive descent parsers by hand for
some very large grammars, the method is somewhat ad hoc and more useful general-purpose
methods are preferred.
We note that the method of constructing a r.d.p. will fail to produce a correct parser if the
grammar is left recursive; that is, if A ⇒+ Aα for some nonterminal A (why?).
Also notice that for each nonterminal of the above grammar it is possible to predict which
production (if any) is to be matched against, purely on the basis of the current input character.
This is clearly the case for the grammar G above, because as we have seen:
To recognise an A: if the current symbol is a, we try the production rule A → a; if the
symbol is b or c, we try A → BCA; otherwise we report an error.
To recognise a B: if the current symbol is b, we try B → be; if the symbol is c, we try
B → cA; otherwise we report an error.
To recognise a C: if the current symbol is d, we try C → d; otherwise we report an error.
The choice is determined by the so-called First sets of the right-hand sides of the production
rules.
4.2.2 LL(1) Grammars
In general, First(α), for any string α ∈ (T ∪ N)*, is the set of terminal symbols that can appear
as the first symbol of a string derived from α, together with ε if α ⇒* ε.
To compute First(X) for a grammar symbol X, apply the following rules until nothing more
can be added to any First set:
If X is a terminal then First(X) = {X}.
If X → ε is a production then add ε to First(X).
If X → Y_1 Y_2 . . . Y_n is a production then add to First(X) all non-ε members of First(Y_1). If Y_1 ⇒* ε
then also add all non-ε members of First(Y_2) to First(X). If Y_1 ⇒* ε and Y_2 ⇒* ε then
also add all non-ε members of First(Y_3) to First(X), etc. If all of Y_1, . . . , Y_n ⇒* ε, then
add ε to First(X).
To compute First(X_1 . . . X_m) for any string X_1 . . . X_m, first add to First(X_1 . . . X_m) all non-ε
members of First(X_1). If ε ∈ First(X_1) then add all non-ε members of First(X_2) to
First(X_1 . . . X_m), etc. Finally, if every set First(X_1), . . . , First(X_m) contains ε then we also
add ε to First(X_1 . . . X_m).
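As a quick illustration, consider again the grammar of Section 4.2.1 (A → a | BCA, B → be | cA, C → d). Here First(C) = {d}, First(B) = {b, c} and First(A) = {a, b, c}, since First(BCA) = First(B) = {b, c} (B does not derive ε). These are exactly the sets that guided the choice of production in parse_A, parse_B and parse_C.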
From the definition of First, we can give a first condition for a grammar G = (T, N, S, P) to
be LL(1), as follows. For each non-terminal A ∈ N, suppose that
A → α_1 | α_2 | . . . | α_n
are all the A-productions in P. For G to be LL(1) it is necessary, but not sufficient, for
First(α_i) ∩ First(α_j) = ∅
to hold for each 1 ≤ i, j ≤ n where i ≠ j.
We let Follow(A) be the set of terminal symbols that can appear immediately to the right of
the non-terminal symbol A in a sentential form. If A appears as the rightmost non-terminal in
a sentential form, then Follow(A) also contains the symbol $.
To compute Follow(A) for all non-terminals A, apply the following rules until nothing can be
added to any Follow set.
Place $ in Follow(S).
If there is a production A → αBβ, then place all non-ε members of First(β) in Follow(B).
If there is a production A → αB, or a production A → αBβ where β ⇒* ε, then place
everything in Follow(A) into Follow(B).
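As a worked illustration with the grammar of Section 4.2.1 (A → a | BCA, B → be | cA, C → d, start symbol A): $ ∈ Follow(A) since A is the start symbol; from A → BCA we get First(CA) = {d} ⊆ Follow(B) and First(A) = {a, b, c} ⊆ Follow(C); and from B → cA we get Follow(B) ⊆ Follow(A). Hence Follow(A) = {$, d}, Follow(B) = {d} and Follow(C) = {a, b, c}.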
Now, suppose a grammar G meets the first condition above. G is an LL(1) grammar if it also
meets the following condition. For each A ∈ N, let
A → α_1 | α_2 | . . . | α_n
be all the A-productions. If ε ∈ First(α_k) for some 1 ≤ k ≤ n (and notice, if the first condition
holds, there will be at most one such value of k) then it must be the case that
Follow(A) ∩ First(α_i) = ∅
for each 1 ≤ i ≤ n, i ≠ k.
Example. Let G have the following productions:
S → BAaC
A → D | E
B → b
C → c
D → aF
E → ε
F → f
First and Follow sets for this grammar are:
First(BAaC) = {b}    Follow(S) = {$}
First(D) = {a}       Follow(A) = {a}
First(E) = {ε}       Follow(B) = {a}
First(b) = {b}       Follow(C) = {$}
First(c) = {c}       Follow(D) = {a}
First(aF) = {a}      Follow(E) = {a}
First(f) = {f}       Follow(F) = {a}.
It is clear that G meets the first condition above, as
First(D) ∩ First(E) = {a} ∩ {ε} = ∅.
However, G does not meet the second condition because
Follow(A) ∩ First(D) = {a} ∩ {a} ≠ ∅.
Many non-LL(1) grammars can be transformed into LL(1) grammars (by removing left recursion
and left factoring; see Section 4.2.5) to enable a predictive parser to be written for them.
However, although left factoring and elimination of left recursion are easy to implement, they
often make the grammar hard to read. Moreover, not every grammar can be transformed into
an LL(1) grammar, and for such grammars we need other methods of parsing.
4.2.3 Non-recursive Predictive Parsing
A significant aspect of LL(1) grammars is that it is possible to construct non-recursive predictive
parsers from an LL(1) grammar. Such parsers explicitly maintain a stack, rather than implicitly
via recursive calls. They also use a parsing table, which is a two-dimensional array M of elements
M[A, a] where A is a non-terminal symbol and a is either a terminal symbol or the end-of-string
symbol $. Each element M[A, a] of M is either an A-production or an error entry. Essentially,
M[A, a] says which production to apply when we need to match an A (in this case, A will be
on the top of the stack) and the next input symbol is a.
Constructing an LL(1) parsing table. To construct the parsing table M, we complete the
following steps for each production A → α of the grammar G:
For each terminal a in First(α), let M[A, a] be A → α.
If ε ∈ First(α), let M[A, b] be A → α for all terminals b in Follow(A). If ε ∈ First(α)
and $ ∈ Follow(A), let M[A, $] be A → α.
Let all other elements of M denote an error.
What happens if we try to construct such a table for a non-LL(1) grammar?
The LL(1) parsing algorithm. The non-recursive predictive parsing algorithm (common to
all LL(1) grammars) is shown below. Note the use of $ to mark the bottom of the stack and
the end of the input string.
set stack to $S (with S on top);
set input pointer to point to the first symbol of the input w$;
repeat the following step
(letting X be top of stack and a be current input symbol):
if X = a = $ then we have parsed the input successfully;
else if X = a (a token) then pop X off the stack and advance input pointer;
else if X is a token or $ but not equal to a then report an error;
else (X is a non-terminal) consult M[X,a]:
if this entry is a production X -> Y1...Yn then pop X from the stack
and replace it with Yn...Y1 (with Y1 on top);
output the production X -> Y1...Yn
else report an error
Example. Consider the grammar G with productions:
C → cD | Ed
D → EFC | e
E → ε | f
F → g
First we define useful First and Follow sets for this grammar:
First(cD) = {c}        First(f) = {f}          Follow(C) = {$}
First(Ed) = {d, f}     First(g) = {g}          Follow(D) = {$}
First(EFC) = {f, g}    First(F) = {g}          Follow(E) = {d, g}
First(ε) = {ε}         First(C) = {c, d, f}    Follow(F) = {c, d, f}
This gives the following LL(1) parsing table:
        c         d         e        f         g         $
C    C → cD    C → Ed              C → Ed
D                        D → e   D → EFC   D → EFC
E              E → ε              E → f     E → ε
F                                           F → g
which, when used with the LL(1) parsing algorithm on input string cgfd, gives:
Stack Input Action
$C cgfd$ C cD
$Dc cgfd$ match
$D gfd$ D EFC
$CFE gfd$ E
$CF gfd$ F g
$Cg gfd$ match
$C fd$ C Ed
$dE fd$ E f
$df fd$ match
$d d$ match
$ $ accept.
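As an illustration of how the algorithm and table might be realised in code, here is a small Pascal sketch of our own (it is not part of the original notes). Grammar symbols are represented by single characters, the stack is a string whose last character is the top, the function table returns the right-hand side to push ('' for an ε-production, '#' for an error entry), and the input string is assumed to end with $:

function is_nonterminal(x : char) : boolean;
begin
  is_nonterminal := x in ['C', 'D', 'E', 'F']
end;

function table(x, a : char) : string;
{ The LL(1) parsing table above: M[x, a] given as the right-hand side to push }
begin
  table := '#';                              { error entry by default }
  case x of
    'C' : if a = 'c' then table := 'cD'
          else if (a = 'd') or (a = 'f') then table := 'Ed';
    'D' : if a = 'e' then table := 'e'
          else if (a = 'f') or (a = 'g') then table := 'EFC';
    'E' : if (a = 'd') or (a = 'g') then table := ''
          else if a = 'f' then table := 'f';
    'F' : if a = 'g' then table := 'g'
  end
end;

function parse(w : string) : boolean;
{ Non-recursive predictive parsing of w (which must end with '$') }
var
  stack, rhs : string;
  x, a : char;
  i, j : integer;
begin
  parse := false;
  stack := '$C';                             { bottom marker, then the start symbol }
  i := 1;                                    { input pointer }
  repeat
    x := stack[length(stack)];               { X = top of stack }
    a := w[i];                               { a = current input symbol }
    if (x = '$') and (a = '$') then begin
      parse := true;                         { parsed the input successfully }
      exit
    end else if x = a then begin
      delete(stack, length(stack), 1);       { pop the matched token }
      i := i + 1                             { and advance the input pointer }
    end else if not is_nonterminal(x) then
      exit                                   { token or $ on the stack, but no match: error }
    else begin
      rhs := table(x, a);
      if rhs = '#' then exit;                { error entry in the table }
      delete(stack, length(stack), 1);       { pop X ... }
      for j := length(rhs) downto 1 do
        stack := stack + rhs[j]              { ... and push its right-hand side reversed }
    end
  until false
end;

On the input cgfd$ the call parse('cgfd$') goes through exactly the stack configurations shown in the trace above and returns TRUE.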
4.2.4 Syntax-tree Construction for Recursive-Descent Parsers
In this section we extend the recursive descent parser writing technique of Section 4.2.1 in order
to build syntax trees. We consider the construction of a r.d.p. for the following simple grammar
which only allows assignments and if-statements.
< command list > ::= < command > < rest commands >
< rest commands > ::= ; < command list > | null
< command > ::= < if command > | < assignment >
< if command > ::= if < expression > then < command list > else < command list > fi
< assignment > ::= identifier := < expression >
< expression > ::= < atom > < rest expression >
< rest expression > ::= + < atom > < rest expression > | null
< atom > ::= identifier
In order to construct a predictive parser, we first list the important First and Follow sets.
These are (grouped for each of the eight non-terminals):
First(< command > < rest commands >) = {if, identifier}
First(; < command list >) = {;}
First(null) = {null}
Follow(< rest commands >) = {else, fi, $}
First(< if command >) = {if}
First(< assignment >) = {identifier}
First(if < expression > then < command list > else < command list > fi) = {if}
First(identifier := < expression >) = {identifier}
First(< atom > < rest expression >) = {identifier}
First(+ < atom > < rest expression >) = {+}
First(null) = {null}
Follow(< rest expression >) = {;, else, fi, then, $}
First(identifier) = {identifier}
We will construct syntax trees rather than parse trees, since the former are much smaller and
simpler yet contain all the information required to enable straightforward code generation.
Notice that:
command sequences are better constructed as lists rather than as trees as would be natural
given the grammar;
expressions are better constructed as syntax trees that better illustrate the expression
(with proper rules of precedence and associativity), rather than directly as given by the
grammar;
the grammar fails to represent the fact that + should be left associative (i.e. x+y+z
is parsed as x+(y+z) rather than the correct (x+y)+z). This is not really a problem for
+, but for - it would be, since 1-1-1 would be evaluated as 1-(1-1) = 1 rather than the
conventional (1-1)-1 = -1. In fact, it is impossible to write an LL(1) grammar that achieves
this task. Therefore, we will get the parser to construct the correct (left associative) syntax
tree using a clever technique; see the code.
We first write some type definitions to represent arbitrary nodes in a syntax tree. Assume that
MAXC is a constant with value 3, which represents the maximum number of children a node can
have (if-commands need three children, for the condition, the then clause and the else clause).
{ There are command nodes and expression nodes.
Command nodes can be if or assign nodes; expression
nodes can be identifiers or operations }
node_kind = (n_command, n_expr);
command_kind = (c_if, c_assign);
expr_kind = (e_ident, e_operation);
tree_ptr = ^tree_node;
tree_node = record
children : array[1..MAXC] of tree_ptr;
kind : node_kind; { is it a command or expression node? }
ckind : command_kind; { what kind of command is it? }
ident : string[20]; { for assignment lhs and simple expressions }
sibling : tree_ptr; { the next command in the list }
ekind : expr_kind; { what kind of expression is it? }
operator : token { expression operator }
end;
Note that some of the fields of this record type will be redundant for certain types of node. For
example, only command nodes will have siblings and only expression nodes will have operators.
We could use variant records (or unions in C) but this would complicate the discussion slightly,
so we choose not to. Also note that we do not need a field for operators, since we only have +
in the grammar; however, using this field we can easily add extra operations to the grammar.
Recall from Section 4.2.1 that we write a procedure parse_A for each non-terminal A in the
grammar, the aim of which is to recognise strings derivable from A. To write a parser that
constructs syntax trees, we replace the procedure parse_A by a function whose additional role
it is to construct and return a syntax tree for the recognised string. The code for the above
grammar is as follows.
We assume that the lexical analyser takes the form of a procedure get_next_token that updates
the variables current_token and current_identifier, and that we have a match procedure
that checks tokens. The tokens for the grammar are k_if, k_then, k_else, k_fi, assign,
semi_colon, identifier, end_of_input and error.
We first define two auxiliary functions that the other functions will call in order to create nodes
for expressions or commands, respectively.
function new_expr_node(t : expr_kind) : tree_ptr;
{Construct a new expression node of kind t}
var
i : integer; temp : tree_ptr;
begin
new(temp);
for i := 1 to MAXC do temp^.children[i] := nil;
temp^.kind := n_expr;
temp^.ekind := t;
new_expr_node := temp
end;
function new_command_node(t : command_kind) : tree_ptr;
{Construct a new command node of kind t}
var
i : integer; temp : tree_ptr;
begin
new(temp);
for i := 1 to MAXC do temp^.children[i] := nil;
temp^.kind := n_command;
temp^.ckind := t;
temp^.sibling := nil;
new_command_node := temp
end;
Next, we define match and syntax_error procedures:
procedure match(expected : token);
{Check the current token is as expected; if so,
read the next token in}
begin
if current_token = expected then
get_next_token
else
syntax_error('Unexpected token')
end;
procedure syntax_error(s : string);
{Output error message}
begin
writeln(s);
writeln('Exiting');
halt
end;
For the first three rules in the BNF (those for lists of commands) we have the following functions.
A linked list of commands is built using tree nodes' sibling fields.
Note that in parse_command_list, we do not bother looking at the current token to see if it is
valid, since we know parse_command will do this. Notice also how the parse_rest_commands
function uses the set Follow(< rest commands >) = {else, fi, $} to determine when to use the
ε-production.
function parse_command_list : tree_ptr;
var
temp : tree_ptr;
begin
temp := parse_command;
temp^.sibling := parse_rest_commands;
parse_command_list := temp
end;
function parse_rest_commands : tree_ptr;
var
temp : tree_ptr;
begin
temp := nil;
if current_token = semi_colon then begin
match(semi_colon);
temp := parse_command_list
end else if not (current_token in [k_else,
k_fi, end_of_input]) then
syntax_error('Incomplete command');
parse_rest_commands := temp
end;
function parse_command : tree_ptr;
begin
case current_token of
k_if : parse_command := parse_if_command;
identifier : parse_command := parse_assignment;
else begin
syntax_error('Expected if or variable');
parse_command := nil
end
end
end;
The two types of command (if commands and assignments) are parsed using the following
functions.
function parse_assignment : tree_ptr;
var
temp : tree_ptr;
begin
temp := new_command_node(c_assign);
temp^.ident := current_identifier;
match(identifier);
match(assign);
temp^.children[1] := parse_expression;
parse_assignment := temp
end;
function parse_if_command : tree_ptr;
var
temp : tree_ptr;
begin
temp := new_command_node(c_if);
match(k_if);
temp^.children[1] := parse_expression;
match(k_then);
temp^.children[2] := parse_command_list;
match(k_else);
temp^.children[3] := parse_command_list;
match(k_fi);
parse_if_command := temp
end;
The final three functions handle expressions. Notice that the function parse_rest_expression
takes an argument comprising the (already parsed) left hand side expression, and also that it
uses the set Follow(< rest expression >) = {;, then, else, fi, $} to determine whether to use the
ε-production.
Follow through the functions for the case x+y+z to convince yourself that they return the
correct interpretation (x+y)+z.
function parse_expression : tree_ptr;
{Parse an expression, and force it to be left associative}
var
left : tree_ptr;
begin
left := parse_atom;
parse_expression := parse_rest_expression(left);
end;
function parse_rest_expression(left : tree_ptr) : tree_ptr;
{If there is an operator on the input, construct an expression
with lhs left (already parsed) by parsing rhs. Otherwise, return
left as the expression}
var
new_e : tree_ptr;
begin
if (current_token = plus) then begin
new_e := new_expr_node(e_operation);
new_e^.operator := current_token;
new_e^.children[1] := left;
get_next_token;
new_e^.children[2] := parse_atom;
parse_rest_expression := parse_rest_expression(new_e)
end else if (current_token in [semi_colon, end_of_input,
k_else, k_fi, k_then]) then
parse_rest_expression := left
else begin
syntax_error('Badly formed expression');
parse_rest_expression := nil
end
end;
function parse_atom : tree_ptr;
var
temp : tree_ptr;
begin
temp := nil;
if current_token = identifier then begin
temp := new_expr_node(e_ident);
temp^.ident := current_identifier;
match(identifier);
end else
syntax_error('Badly formed expression');
parse_atom := temp
end;
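Tracing the case x+y+z suggested above: parse_expression parses the atom x into left and calls parse_rest_expression(x); this sees +, builds an operation node with children x and y, and passes that node as the new left to a recursive call; the recursive call sees the second +, builds a node with children (x+y) and z, and the final recursive call, finding a token from the Follow set, simply returns it. The tree returned therefore represents (x+y)+z.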
4.2.5 Operations on context-free grammars for top-down parsing
As we have seen, many grammars are not suitable for use in top-down parsing. In particular,
for predictive parsing only LL(1) grammars may be used. Many grammars can be transformed
into more suitable forms in order to make construction of top-down parsers possible. Two
useful techniques are left-factoring and removal of left recursion.
Left factoring. Many simple parsing algorithms (e.g. those we have studied so far) cannot
cope with grammars with productions such as
< if stmt > → if < bool > then < stmt list > fi |
              if < bool > then < stmt list > else < stmt list > fi
where two or more productions for a given non-terminal share a common prefix. This is because
they have to make a decision on which production rule to use based only on the first symbol
of the rule.
We will not look at the general algorithm for left-factoring a grammar. For the above example,
however, we simply factor out the common prefix to obtain the equivalent grammar rules:
< if stmt > → if < bool > then < stmt list > < rest if >
< rest if > → else < stmt list > fi | fi
Removing Immediate Left Recursion. A grammar is said to be immediately left recursive
if there is a production of the form A → Aα. Grammars for programming languages are often
immediately left recursive, or have general left recursion (see next section). However, a large
class of parsing algorithms (specifically top-down parsers) cannot cope with such grammars.
Here we shall show how immediate left recursion can be removed from a context-free grammar,
and in the next section we will see how general left recursion can be removed.
Consider a grammar G with productions A → Aα | β where β does not begin with an A.
We can make a new grammar G' without left recursion by replacing these productions with
A → βA'
A' → αA' | ε
More generally, consider a grammar G with productions
A → Aα_1 | Aα_2 | . . . | Aα_n | β_1 | β_2 | . . . | β_m
where none of β_1, . . . , β_m begin with an A. We can make a new grammar G' without left
recursion by replacing the above productions with
A → β_1A' | β_2A' | . . . | β_mA'
A' → α_1A' | α_2A' | . . . | α_nA' | ε
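For instance (a standard illustration with symbols of our own choosing, not drawn from the notes): the left-recursive productions E → E + T | T become
E → TE'
E' → + TE' | ε
which define the same language but contain no left recursion.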
Removing Left Recursion. A grammar is said to be left recursive if A ⇒+ Aα for some
nonterminal A. Let G be a grammar that has no ε-productions and no cycles. (A grammar G
has a cycle if A ⇒+ A for some nonterminal A.) The following is an algorithm that removes
left recursion from G.
Arrange the nonterminals in some order A_1, . . . , A_n;
for i := 1 to n do
  for j := 1 to i-1 do
    replace each production of the form A_i → A_jγ
    with A_i → δ_1γ | . . . | δ_kγ
    where A_j → δ_1 | . . . | δ_k are all the current A_j-productions;
  od
  remove any immediate left recursion from all current A_i-productions
od
In order to understand how the algorithm works, we begin by considering how G might be left
recursive. First, we arrange the nonterminals into some order A_1, . . . , A_n. In order for G to be
left recursive, it must be that A_i ⇒+ A_iα for some i and some α. Now consider an arbitrary
A_i-production A_i → α. Clearly, there are two possibilities for α:
(1) α is of the form α = aβ for some terminal a, or
(2) α is of the form α = A_jβ for some nonterminal A_j.
Notice that if every A_i-production is of the form (1) then any derivation from A_i will yield a
string that starts with a terminal symbol. Since A_i is a nonterminal, it is impossible to have
A_i ⇒+ A_iα for any α, contradicting the assumption of G's left recursiveness. Thus, it must
be that at least one A_i-production is of the form A_i → A_jβ. Suppose that j > i for every such
production (for all non-terminals A_i): then it must be that whenever A_i derives a string γ and
the leftmost symbol of γ is a nonterminal A_k, then k > i. Since this means k ≠ i, G cannot be
left recursive.
The way the algorithm works is to transform the productions of G in such a way that the new
productions are either of the form A_i → aα or of the form A_i → A_jα where j > i, and thus
the new grammar cannot be left recursive.
In more detail, the algorithm considers the A
i
- productions for i = 1, 2, . . . , n. The case i = 1
is a special case since if A
i
A
j
then j i. If j > i then the production is in the right form,
and if j = i then we have an instance of immediate left recursion that can be removed as in
the previous section.
For i = 2, . . . , n, it is possible to have Ai-productions of the form Ai → Aj γ with j < i. For
each of these productions the algorithm substitutes for Aj each possible right hand side of an
Aj-production. That is, if Aj → δ1 | ... | δp are all the Aj-productions, then the algorithm
replaces Ai → Aj γ with Ai → δ1 γ | ... | δp γ. Now, since the algorithm has already processed
the Aj-productions (think inductively here!), it must be the case that if the leftmost symbol of
δr (r = 1, . . . , p) is a nonterminal Ak say, then k > j (because δr is the right hand side of an
Aj-production). Thus the lowest index of any nonterminal occurring as the leftmost symbol of
the right hand side of any Ai-production has been increased from j to k.
61
We require this lowest index to be greater than (or equal to) i. If k is already greater than (or
equal to) i then the production Ai → δr γ is of the right form (we can remove the immediate left
recursion if k = i). Otherwise, j < k < i. Now, k - j iterations of the inner loop later, j will
be equal to k, and so the algorithm will then consider Ai-productions of the form Ai → Ak γ′.
As before, since k < i the algorithm has already considered the Ak-productions, and so all the
Ak-productions whose right hand sides start with a nonterminal Al will have l > k. Thus, on
substituting these right hand sides for Ak in Ai → Ak γ′, we obtain productions whose right
hand sides begin with a terminal symbol or a nonterminal Al for l > k. Thus the lowest index of
any nonterminal occurring as the leftmost symbol of the right hand side of any Ai-production
has now been increased from j to k to l.
Continuing in this way, we see that by the time that we have dealt with the case j = i - 1 we
must arrive at a situation where the lowest index has been increased to i or greater.
Example. Let us remove the left recursion from the following productions:
A1 → a | A2 b
A2 → c | A3 d
A3 → e | A4 f
A4 → g | A2 h
Consider what happens when we execute the above algorithm on these productions. First, the
nonterminals are already ordered, so we proceed with the outer loop. For each value of i the
algorithm modifies the set of Ai-productions; this set may grow as the algorithm proceeds.
Consider the Ai-productions for i = 1, 2, 3. Notice each is of the form Ai → β where the
leftmost symbol of β is either a terminal, or is Aj for j > i. Thus the algorithm does nothing
for i = 1, 2, 3.
At i = 4, there are no Ai-productions of the form A4 → A1 γ and so nothing happens for j = 1.
At j = 2, A4 → A2 γ is a production for γ = h. The current A2-productions are A2 → c | A3 d,
and so we replace A4 → A2 h with A4 → ch | A3 dh. This ends the case j = 2, and the
A4-productions are currently A4 → g | ch | A3 dh.
At j = 3, we see A4 → A3 γ is a current A4-production for γ = dh. The current A3-productions
are A3 → e | A4 f, and so A4 → A3 dh is replaced with A4 → edh | A4 fdh. This ends the case
j = 3 and the A4-productions are now A4 → g | ch | edh | A4 fdh.
This ends the inner loop, and so we remove the immediate left recursion from A4's productions,
resulting in:
A4  → gA4′ | chA4′ | edhA4′
A4′ → fdhA4′ | ε.
62
4.3 Bottom-up Parsers
Bottom-up parsing can be thought of as the process of creating a parse tree for a given input
string by working upwards from the leaves towards the root. The most well-known form of
bottom-up parsing is that of shift-reduce parsing to which we will restrict our attention; this is
a non-recursive, non-backtracking method. Shift-reduce parsing works by matching substrings
of the input string against productions of a grammar G and then replacing such substrings
with the appropriate nonterminal, so that the input string (provided it is in L(G)) is eventually
reduced to the start symbol S. In contrast to top-down parsing that traces out a leftmost
derivation, shift-reduce parsing aims to yield a rightmost derivation (in reverse).
Example. Consider a grammar G with the following productions:
A → ab | BAc
B → CA
C → d | Ba | abc.
Now consider the input string w = dababc. The d matches C → d so w can be reduced to
w1 = Cababc. Next, the first ab matches A → ab so w1 can be reduced to w2 = CAabc. Next,
CA matches B → CA so w2 can be reduced to w3 = Babc. Now ab matches A → ab so w3 can
be reduced to w4 = BAc. Finally, w4 can be reduced to A (the start symbol) by the production
A → BAc. We have reduced w to the start symbol, and reversing this process does indeed
yield a rightmost derivation of w from A:
A ⇒ BAc ⇒ Babc ⇒ CAabc ⇒ Cababc ⇒ dababc.
There are other sequences of reductions that reduce w to A but they need not yield rightmost
derivations. For example, we could have chosen to reduce w2 to CAAc (and then to BAc), but
this yields:
A ⇒ BAc ⇒ CAAc ⇒ CAabc ⇒ Cababc ⇒ dababc.
Further, notice that not all sequences of reductions lead to the start symbol. For example, w2
can be reduced to CAC, which can only be reduced to BC.
General idea of shift-reduce parsing. A shift-reduce parser uses a stack in a similar way
to LL(1) parsers. The stack begins empty, and a parse is complete when S appears as the only
symbol on the stack. The parser has two possible actions (besides accept and error) at each
step:
To shift the current input symbol onto the top of the stack.
To reduce a string β at the top of the stack to the non-terminal A, given
the production A → β.
Consider the above grammar and the input string dababc. The actions that a shift-reduce
parser might make are:
63
Stack      Input      Action
           dababc$    shift
d          ababc$     reduce C → d
C          ababc$     shift
Ca         babc$      shift
Cab        abc$       reduce A → ab
CA         abc$       reduce B → CA
B          abc$       shift
Ba         bc$        shift
Bab        c$         reduce A → ab
BA         c$         shift
BAc        $          reduce A → BAc
A          $          accept.
It is important to note that this is only an illustration of what a shift-reduce parser might do; it
does not show how the parser makes decisions on what actions to take. The main problem with
shift-reduce parsing is deciding when to shift and when to reduce (notice that in the example,
we do not reduce the top of the stack every time it is possible; e.g. we did not reduce the Ba to
C when the stack contained Bab as this would have given an incorrect parse). A shift-reduce
parser looks at the contents of the stack and at (some of) the input to decide what action to
take. Different types of shift-reduce parsers use the available stack/input in different ways, as
we will see in the next section.
Handles. The difficulty in writing a shift-reduce parser begins with deciding when reducing a
substring of an input string will result in a (reverse) step in a rightmost derivation.
Informally, a handle of a string is a substring that matches the right hand side of a production,
and whose reduction represents one (reverse) step in a rightmost derivation from the start
symbol.
Formally, a handle of a right-sentential form γ is a pair h = (A → β, p) comprising a production
A → β and a position p indicating where the substring β may be found and replaced by A
to produce the previous right-sentential form in a rightmost derivation of γ. That is, if
S ⇒* αAw ⇒ αβw = γ is a rightmost derivation, then (A → β, p) is a handle of αβw when
p marks the position immediately after α.
For example, with reference to the example above, (A → ab, 2) is a handle of Cababc since
Cababc is a right-sentential form, and replacing ab by A at position 2 yields CAabc, which is the
predecessor of Cababc in its rightmost derivation. In contrast, (A → ab, 4) is not a handle of
Cababc since although CabAc ⇒ Cababc is a rightmost derivation step, it is not possible to derive
CabAc from the start symbol with a rightmost derivation.
An important fact underlying the shift/reduce method of parsing is that handles will always
appear on the top of the stack. In other words, if there is a handle to reduce at all, then the last
symbol of the handle must be on top of the stack; it is then a matter of tracing down through
the stack to find the start of the handle.
Viable Prefixes. The second technical notion that we require in order to understand how a
shift/reduce parser can be made to make choices that lead to a rightmost derivation is that of
a viable prefix.
A viable prefix is a prefix of a right-sentential form that does not continue past the right hand
end of the rightmost handle of that sentential form. For the example, the viable prefixes of
Cababc are ε, C, Ca and Cab, since the rightmost handle of Cababc is the first occurrence of ab.
The significance of viable prefixes is that provided the input seen up to a given point can be
reduced to a viable prefix (as is always possible initially since ε is always a viable prefix), it is
always possible to add more symbols to yield a handle at the top of the stack.
4.3.1 LR(k) Parsing
An LR(k) grammar is one for which we can construct a shift/reduce parser that reads input
left-to-right and builds a rightmost derivation in reverse, using at most k symbols of lookahead.
There are situations where a shift/reduce parser cannot resolve conflicts even knowing the entire
contents of the stack and the next k inputs: such grammars, which include all ambiguous
grammars, are called non-LR.
However, shift/reduce parsing can be adapted to cope with ambiguous grammars, and moreover
LR parsing (i.e. LR(k) parsing for some k ≥ 0) has a number of advantages. Most importantly,
LR parsers can be constructed to recognise virtually all programming language constructs for
which context-free grammars can be devised (including all LL(k) grammars). In fact, it is rarely
the case that more than one lookahead character is required. Also, the LR parsing method can
be implemented efficiently, and LR parsers can detect syntax errors as soon as theoretically
possible.
The process of constructing an LR parser is however relatively involved, and in general, too
difficult to do by hand. For this reason parser generators have been devised to automate the
process of constructing an LR parser where this is possible. One well-known such tool is the
program yacc. The remainder of this section is devoted to explaining the LR(k) parsing method
in order to provide a thorough understanding of the ideas behind this tool.
4.3.2 LR, SLR and LALR parsing
The LR parsing algorithm is a stack-based shift/reduce parsing algorithm involving a parsing
table that dictates an appropriate action for a given state of the parse. We will look at four
different kinds of LR parser that use at most one symbol of lookahead. Each kind is an instance
of the general LR parsing algorithm but is distinguished by the kind of parsing table employed.
65
The four types are:
LR(0) parsers. As implied by the 0, the actions taken by LR(0) parsers are not influenced
by lookahead characters.
SLR(1) parsers (S meaning simple).
LALR(1) parsers (LA meaning lookahead).
LR(1) parsers.
The grammars for which each of the types of parsers may be constructed are described by the
hierarchy:
LR(0) ⊂ SLR(1) ⊂ LALR(1) ⊂ LR(1)
such that every LR(0) grammar is also an SLR(1) grammar, etc. Roughly speaking, the more
general the parser type, the more difficult the (manual) construction of the parsing table (i.e.
LR(0) parsing tables are the easiest to construct, LR(1) tables are the most difficult). Finally,
although the parsing tables for LR(0), SLR(1) and LALR(1) grammars are essentially the
same size, LR(1) parsing tables can be extremely large. Because many programming language
constructs cannot be described by SLR(1) grammars (and hence neither by LR(0) grammars),
but almost all can be described (with effort) by LALR(1) grammars, it is the LALR(1) class
of parser that has proven the most popular for bottom-up parsing; yacc, for example, is an
LALR(1) parser generator.
We will first look at the LR parsing algorithm (or LRP, for "LR parser", for short) and then
look at how to construct the parsing table in the case of each of the others.
4.3.3 The LR Parsing Algorithm
The LRP is essentially a shift/reduce parser that is augmented with states. The stack holds
strings of the form
s0 X1 s1 X2 s2 · · · Xm sm
(with sm on the top) where X1, . . . , Xm are grammar symbols and s0, . . . , sm are states. The
idea behind a state in the stack is that it summarises the information in the stack below it:
to be more precise, a state represents a viable prefix that has occurred before that state was
reached. One of these states represents "nothing has yet been read" (note that the empty string
ε is always a viable prefix) and is designated the initial state. In fact, this initial state is the
initial state of a DFA that recognises viable prefixes, as we shall see below.
The states are used by the LRP's goto function: this is a function that takes a state and a
grammar symbol as arguments and produces a new state; the idea is that if goto(s, X) = s′ then
s′ represents the viable prefix obtainable by appending an X to the viable prefix represented by
s. (In other words, the goto function is the transition function of the DFA mentioned above.)
66
The behaviour of the LRP is best described in terms of configurations; that is, a pair
(s0 X1 s1 X2 s2 · · · Xm sm, ai . . . an)
comprising the contents of the stack and the input yet to be read. The idea behind such
a configuration is that it represents the LRP having up to this point constructed the right-
sentential form
X1 X2 . . . Xm ai . . . an
and about to read ai.
The LRP starts with the initial state on its stack and the input pointer at the start of the
input. It then repeatedly looks at the current (topmost) state and input symbol and takes the
appropriate action before considering the next state and input, until processing is complete.
For a given configuration such as the above (with current state sm and input symbol ai) the
action action(sm, ai) taken by the LRP is one of the following:
action(sm, ai) = shift s. Here the parser puts ai and s onto the stack (in that order)
and advances the input pointer. Thus from the configuration above, the LRP enters the
configuration:
(s0 X1 s1 X2 s2 · · · Xm sm ai s, ai+1 . . . an).
action(sm, ai) = reduce by A → β. This action occurs when the top r (say) grammar
symbols constitute β; that is, when Xm-r+1 . . . Xm = β. When this does occur, the
topmost 2r symbols are replaced with A and s = goto(sm-r, A) (in that order); thus, in
this case the configuration becomes:
(s0 X1 s1 X2 s2 · · · Xm-r sm-r A s, ai . . . an).
action(sm, ai) = accept. Here parsing is completed and the LRP terminates.
action(sm, ai) = error. Here a syntax error has been detected and the LRP calls an error
recovery routine.
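As an illustration, the loop below is a minimal Java sketch of such a driver. The table representation (the ActionType and Action types and the action and gotoTable maps) is hypothetical scaffolding introduced only for the sketch, but the four cases follow the description above.
import java.util.*;

class LRDriverSketch {
    enum ActionType { SHIFT, REDUCE, ACCEPT, ERROR }
    // For SHIFT, target is the new state; for REDUCE, head/bodyLength describe A -> beta.
    record Action(ActionType type, int target, String head, int bodyLength) {}

    // action maps (state, terminal) to an Action; gotoTable maps (state, nonterminal) to a state.
    static boolean parse(List<String> input,
                         Map<Integer, Map<String, Action>> action,
                         Map<Integer, Map<String, Integer>> gotoTable,
                         int initialState) {
        Deque<Object> stack = new ArrayDeque<>();    // alternates states and grammar symbols
        stack.push(initialState);
        int pos = 0;
        while (true) {
            int state = (Integer) stack.peek();
            String a = input.get(pos);               // next input symbol, with "$" at the end
            Action act = action.getOrDefault(state, Map.of())
                               .getOrDefault(a, new Action(ActionType.ERROR, 0, null, 0));
            switch (act.type()) {
                case SHIFT -> {                      // push a and the new state, advance input
                    stack.push(a);
                    stack.push(act.target());
                    pos++;
                }
                case REDUCE -> {                     // pop 2r symbols, push A and goto(s, A)
                    for (int i = 0; i < 2 * act.bodyLength(); i++) stack.pop();
                    int exposed = (Integer) stack.peek();
                    stack.push(act.head());
                    stack.push(gotoTable.get(exposed).get(act.head()));
                }
                case ACCEPT -> { return true; }
                case ERROR  -> { return false; }     // a real parser would attempt recovery here
            }
        }
    }
}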
4.3.4 LR(0) State Sets and Parsing
We begin our discussion of the different types of LR parser by looking at LR(0) parsers. As
implied by the 0, an LR(0) parser makes decisions on whether to shift or reduce based purely
on the contents of the stack, not on lookahead. Although few useful grammars are LR(0), the
construction of LR(0) parsing tables illustrates most of the concepts that we will need to build
SLR(1), LR(1) and LALR(1) parsing tables.
As mentioned previously, the idea behind the states on the stack of an LR parser is that a
state represents a viable prefix that has occurred on the stack up to a given point in the parse.
67
We begin to make this idea more precise by defining an LR(0) item (or item for short) to
be a string of the form A → α·β where A → αβ is a production. (If A → αβ is A → ε, that is,
αβ = ε, then A → · is the only item for this production.) Thus, for example, the production A → XY
yields the three items: A → ·XY, A → X·Y and A → XY·.
In general, the dot (·) is a marker denoting the current position in parsing a string; that is, an
item of the form A → α·β represents a parser having parsed α and about to (try and) parse
β.
We begin the process of constructing LR(0) parsing tables by augmenting the grammar G with
a new production S′ → S (where S′ is a new start symbol, S′ ≠ S).
We construct all other states from this initial state by using the goto and closure operations
above, until no new states can be generated. The following algorithm accomplishes this task:
C := closure({S′ → ·S})
repeat
    for each state I ∈ C
        for each item A → α·Xβ in I
            let J be goto(I, X)
            add J to C
until C does not change
return C
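A Java sketch of this construction is given below. The Item and Production types are hypothetical, and closure implements the standard operation this algorithm relies on (for every item A → α·Bβ in the set, add B → ·γ for each production B → γ); goto advances the dot over a symbol and takes the closure of the result.
import java.util.*;

class LR0StateSetSketch {
    record Production(String head, List<String> body) {}
    // An item A -> alpha . beta is a production plus the position of the dot.
    record Item(Production prod, int dot) {
        Optional<String> symbolAfterDot() {
            return dot < prod.body().size() ? Optional.of(prod.body().get(dot)) : Optional.empty();
        }
    }

    final Map<String, List<Production>> productions;   // all productions, keyed by head
    LR0StateSetSketch(Map<String, List<Production>> productions) { this.productions = productions; }

    // Standard closure: repeatedly add B -> .gamma whenever some item has the dot before B.
    Set<Item> closure(Set<Item> items) {
        Deque<Item> work = new ArrayDeque<>(items);
        Set<Item> result = new HashSet<>(items);
        while (!work.isEmpty()) {
            Item it = work.pop();
            it.symbolAfterDot().ifPresent(x -> {
                for (Production p : productions.getOrDefault(x, List.of())) {
                    Item added = new Item(p, 0);
                    if (result.add(added)) work.push(added);
                }
            });
        }
        return result;
    }

    // goto(I, X): advance the dot over X in every item of I that allows it, then take the closure.
    Set<Item> gotoState(Set<Item> state, String x) {
        Set<Item> moved = new HashSet<>();
        for (Item it : state)
            if (it.symbolAfterDot().filter(x::equals).isPresent())
                moved.add(new Item(it.prod(), it.dot() + 1));
        return closure(moved);
    }

    // The algorithm above: start from closure({S' -> .S}) and keep applying goto.
    Set<Set<Item>> states(Production augmented) {
        Set<Set<Item>> c = new LinkedHashSet<>();
        Deque<Set<Item>> work = new ArrayDeque<>();
        Set<Item> start = closure(Set.of(new Item(augmented, 0)));
        c.add(start);
        work.push(start);
        while (!work.isEmpty()) {
            Set<Item> state = work.pop();
            for (Item it : state) {
                it.symbolAfterDot().ifPresent(x -> {
                    Set<Item> next = gotoState(state, x);
                    if (!next.isEmpty() && c.add(next)) work.push(next);
                });
            }
        }
        return c;
    }
}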
Example. Consider the LR(0) grammar
S → (S) | a
which we augment with the production S′ → S. The initial state is then
closure({S′ → ·S}) = {S′ → ·S, S → ·(S), S → ·a}
which we will refer to as I0. So C = {I0} after the initial step of the algorithm. In the next
step of the algorithm we compute goto(I0, S), goto(I0, a) and goto(I0, ():
goto(I0, S) = closure({S′ → S·})
            = {S′ → S·}
            = I1 (say)
goto(I0, a) = closure({S → a·})
            = {S → a·}
            = I2 (say)
goto(I0, () = closure({S → (·S)})
            = {S → (·S), S → ·(S), S → ·a}
            = I3 (say)
69
and thus C = {I0, I1, I2, I3} after the first iteration of the repeat-forever loop.
On the next iteration, (noticing that neither I1 nor I2 contain items of the form A → α·Xβ)
we consider I3 and compute goto(I3, S), goto(I3, a) and goto(I3, ():
goto(I3, S) = closure({S → (S·)})
            = {S → (S·)}
            = I4 (say)
goto(I3, a) = closure({S → a·})
            = I2
goto(I3, () = closure({S → (·S)})
            = I3
and now C = {I0, I1, I2, I3, I4}. On the next iteration, we need only compute goto(I4, )):
goto(I4, )) = closure({S → (S)·})
            = {S → (S)·}
            = I5 (say)
and now C = {I0, I1, I2, I3, I4, I5}. It is clear that a further iteration of the loop will not yield
any further states, so we have finished.
The states and goto functions can be depicted as a DFA....
Constructing LR(0) Parsing tables. We construct an LR(0) parsing table as follows:
(1) Construct the set {I0, . . . , In} of states for the augmented grammar G′.
(2) The parsing actions for state Ii are determined as follows:
(a) If A → α·aβ ∈ Ii and goto(Ii, a) = Ij then action(i, a) = shift j.
(b) If A → α· ∈ Ii then action(i, a) = reduce A → α, for each a ∈ T ∪ {$}.
(c) If S′ → S· ∈ Ii then action(i, $) = accept.
(3) The goto actions for state i are given directly by the goto function; that is,
if goto(Ii, A) = Ij then let goto(i, A) = j.
(4) All entries not defined by the above rules are made error.
(5) The initial state is the one constructed from S′ → ·S.
Rule (a) above says that, if we have parsed α and we can see the terminal symbol a then we
should shift the symbol onto the stack and go into state j.
Rule (b) says that, if we have finished parsing a particular production rule then we should
reduce by that production rule for all terminal symbols.
70
Rule (c) says that, if we have successfully parsed the start symbol then we have finished.
This method of creating a parser will fail if there are any conflicts in the above rules, and
grammars leading to such conflicts are not LR(0).
Example. For the example grammar we obtain the following parsing table, where sj means
shift j, rj means reduce using production rule j (where rule 1 is S → (S) and rule 2 is S → a),
and ok means accept.
                  action                goto
State      a      (      )      $        S
  0       s2     s3                      1
  1                            ok
  2       r2     r2     r2     r2
  3       s2     s3                      4
  4                     s5
  5       r1     r1     r1     r1
A trace of the parse of the string ((a)) is as follows:
Stack         Input      Action
0             ((a))$     shift 3
0(3           (a))$      shift 3
0(3(3         a))$       shift 2
0(3(3a2       ))$        reduce S → a
0(3(3S4       ))$        shift 5
0(3(3S4)5     )$         reduce S → (S)
0(3S4         )$         shift 5
0(3S4)5       $          reduce S → (S)
0S1           $          accept
Notice that from the rules for constructing an LR(0) parsing table, for a grammar to be LR(0),
if a state contains an item of the form A → α· (signifying the fact that α should be reduced
to A) then it cannot also contain another item of the form:
(a) B → β·, or
(b) B → β·aγ.
In the case of (a), the parser cannot decide whether to reduce using rule A → α or rule B → β;
this is known as a reduce-reduce conflict. In the case of (b), the parser cannot decide whether
to reduce using rule A → α or to shift (by the item B → β·aγ); this is known as a shift-reduce
conflict. A grammar is LR(0) if, and only if, all states contain either no items of the form
A → α·, or contain exactly one item.
71
4.3.5 SLR(1) Parsing
SLR(1) parsing is more powerful than LR(0) as it consults (by the way an SLR(1) parsing table
is constructed) the next input token to direct its actions. The construction of an SLR(1) table
begins with the LR(0) state set construction as with LR(0) parsers.
Let's consider an example. We first construct the LR(0) state set. We then show that the
grammar is not LR(0) due to shift-reduce conflicts. We then show how to construct general
SLR(1) parsing tables and apply this to our example.
Example. Consider the following productions for a grammar for arithmetic expressions.
(1) E → E + T
(2) E → T
(3) T → T * F
(4) T → F
(5) F → (E)
(6) F → a
augmented with the production E′ → E.
The state set construction proceeds as follows. We begin by constructing the initial state:
I0 = closure({E′ → ·E})
   = {E′ → ·E, E → ·E + T, E → ·T, T → ·T * F, T → ·F, F → ·(E), F → ·a}.
Thus C = {I0} after the initial step of the algorithm.
Next, we see that we need to compute goto(I0, X) for X = a, (, E, T, F:
goto(I0, a) = closure({F → a·})
            = {F → a·}
            = I1 (say)
goto(I0, () = closure({F → (·E)})
            = {F → (·E), E → ·E + T, E → ·T, T → ·T * F,
               T → ·F, F → ·(E), F → ·a}
            = I2 (say)
goto(I0, E) = closure({E′ → E·, E → E· + T})
            = {E′ → E·, E → E· + T}
            = I3 (say)
goto(I0, T) = closure({E → T·, T → T· * F})
            = {E → T·, T → T· * F}
            = I4 (say)
goto(I0, F) = closure({T → F·})
            = {T → F·}
            = I5 (say).
Thus now C = {I0, I1, I2, I3, I4, I5}.
On the next iteration of the loop we need to consider I1, I2, I3, I4, I5. First, notice that there
are no items in I1 that have any symbols following the marker, and hence goto(I1, X) = ∅ for
each possible symbol X. We now compute goto(I2, X) for X = a, (, E, T, F as follows:
goto(I2, a) = closure({F → a·})
            = I1
goto(I2, () = closure({F → (·E)})
            = I2
goto(I2, E) = closure({F → (E·), E → E· + T})
            = {F → (E·), E → E· + T}
            = I6 (say)
goto(I2, T) = closure({E → T·, T → T· * F})
            = I4
goto(I2, F) = closure({T → F·})
            = I5.
For I3 we only need to compute goto(I3, +):
goto(I3, +) = closure({E → E + ·T})
            = {E → E + ·T, T → ·T * F, T → ·F, F → ·(E), F → ·a}
            = I7 (say).
For I4 we need only compute goto(I4, *):
goto(I4, *) = closure({T → T * ·F})
            = {T → T * ·F, F → ·(E), F → ·a}
            = I8 (say).
We also see that goto(I5, X) = ∅ for all X, as in the case of I1. Thus, after the second iteration
of the loop we have C = {I0, I1, I2, I3, I4, I5, I6, I7, I8}.
In the next iteration we need to consider I6, I7, I8. For I6, we see that we need to compute
goto(I6, +) and goto(I6, )):
goto(I6, +) = closure({E → E + ·T})
            = I7
goto(I6, )) = closure({F → (E)·})
            = {F → (E)·}
            = I9 (say).
For I7, we see that we need to compute goto(I7, X) for X = a, (, T, F:
goto(I7, a) = closure({F → a·})
            = I1
goto(I7, () = closure({F → (·E)})
            = I2
goto(I7, T) = closure({E → E + T·, T → T· * F})
            = {E → E + T·, T → T· * F}
            = I10 (say)
goto(I7, F) = closure({T → F·})
            = I5.
For I8, we see that we need to compute goto(I8, X) for X = a, (, F:
goto(I8, a) = closure({F → a·})
            = I1
goto(I8, () = closure({F → (·E)})
            = I2
goto(I8, F) = closure({T → T * F·})
            = {T → T * F·}
            = I11 (say).
Thus, after the third iteration of the loop we have
C = {I0, I1, I2, I3, I4, I5, I6, I7, I8, I9, I10, I11}.
Next, we need to consider I9, I10, I11. For I9 and I11, we see that goto(I9, X) = goto(I11, X) = ∅
for all X. For I10 we only need to compute goto(I10, *):
goto(I10, *) = closure({T → T * ·F})
             = I8.
Thus, C did not change during the fourth iteration of the loop and we have finished(!).
Now, consider an attempt at constructing an LR(0) parsing table from this state set. Specif-
ically, consider the state I4 (we could also have chosen I10). This state contains the items
E → T· and T → T· * F, and goto(I4, *) = I8. This is a shift-reduce conflict (we would
attempt to place both s8 and r2 at location (4, *) in the table).
Constructing SLR(1) Parsing Tables.
In LR(0) parsing a reduce action is performed whenever the parser enters a state i (i.e. Ii) whose
(only) item is of the form A → α·, regardless of the next input symbol. In SLR(1) parsing, if a
state contains an item of this form then the reduction of α to A is only performed if the next
token is in Follow(A) (the set of all terminal symbols that can follow A in a sentential form),
else it performs a different action (e.g. a shift, reduce, or error) based on the other items of
state Ii.
The rules for constructing SLR(1) parsing tables are the same as for LR(0) tables given in the
previous section, except that we replace rule 2(b):
If A → α· ∈ Ii then action(i, a) = reduce A → α, for each a ∈ T ∪ {$}
by
If A → α· ∈ Ii then action(i, a) = reduce A → α, for each a ∈ Follow(A).
Let's consider the example grammar. The Follow sets are
Follow(E) = {$, +, )}
Follow(T) = {$, *, +, )}
Follow(F) = {$, *, +, )}.
Notice first that the shift-reduce conflict mentioned above has been resolved. Recall the items
E → T· and T → T· * F of I4, where goto(I4, *) = I8. As * ∉ Follow(E), we do not choose to
reduce in this state (i.e. we place only s8 in the parsing table at position (4, *)).
The complete SLR(1) parsing table for this grammar is as shown below.
                          action                               goto
State      a      +      *      (      )      $         E      T      F
  0       s1                   s2                        3      4      5
  1              r6     r6            r6     r6
  2       s1                   s2                        6      4      5
  3              s7                          ok
  4              r2     s8            r2     r2
  5              r4     r4            r4     r4
  6              s7                   s9
  7       s1                   s2                              10      5
  8       s1                   s2                                     11
  9              r5     r5            r5     r5
 10              r1     s8            r1     r1
 11              r3     r3            r3     r3
A parse of the string a + a * a is:
Step   Stack             Input          Action
  1    0                 a + a * a $    shift 1
  2    0a1               + a * a $      reduce by F → a
  3    0F5               + a * a $      reduce by T → F
  4    0T4               + a * a $      reduce by E → T
  5    0E3               + a * a $      shift 7
  6    0E3+7             a * a $        shift 1
  7    0E3+7a1           * a $          reduce by F → a
  8    0E3+7F5           * a $          reduce by T → F
  9    0E3+7T10          * a $          shift 8
 10    0E3+7T10*8        a $            shift 1
 11    0E3+7T10*8a1      $              reduce by F → a
 12    0E3+7T10*8F11     $              reduce by T → T * F
 13    0E3+7T10          $              reduce by E → E + T
 14    0E3               $              accept
It can be seen from the SLR(1) table construction rules that a grammar is SLR(1) if, and only
if, the following conditions are satisfied for each state i:
(a) For any item A → α·aβ, there is no other item B → γ· with a ∈ Follow(B);
(b) For any two items A → α· and B → β·, Follow(A) ∩ Follow(B) = ∅.
Violation of (a) will lead to a shift-reduce conflict: if the parser is in state i and a is the next
token, the parser cannot decide whether to shift the a (and goto(i, a)) onto the stack or to
reduce the top of the stack using the rule B → γ. Violation of (b) will lead to a reduce-reduce
conflict: if the parser is in state i and the next token a is in Follow(A) and Follow(B), the
parser cannot decide whether to reduce using rule A → α or rule B → β. For non-SLR(1)
grammars that are still LR(1), we need to look at LR(1) and LALR(1) parsers.
76
5 Javacc
Javacc is a compiler compiler which takes a grammar specification and writes a parser for that
language in Java. It comes with a useful tool called Jjtree which in turn produces a Javacc
file with extra code added to construct a tree from a parse. By running the output from Jjtree
through Javacc and then compiling the resulting Java files we get a full top-down parser for
our grammar. The process is shown in figure 1.
Figure 1: The JavaTree Compilation Process (the language specification myFile.jjt is run through jjtree to produce myFile.jj; javacc turns this into the parser's .java files, which javac compiles into the finished parser)
Because the parser is top-down we can only use Javacc on LL(k) grammars, meaning that we
have to remove any left-recursion from the grammar before writing the parser.
5.1 Grammar and Coursework
The grammar we will be compiling, both in these notes and in your coursework, is defined here:
program → decList begin statementList | statementList
decList → dec decList | dec
dec → type id
type → float | int
statementList → statement ; statementList | statement
statement → ifStatement | repeatStatement | assignStatement | readStatement | writeStatement
ifStatement → ifPart end | ifPart elsePart end
ifPart → if exp then statementList
elsePart → else statementList
repeatStatement → repeat statementList until exp
assignStatement → id := exp
readStatement → read id
writeStatement → write exp
exp → simpleExp boolOp simpleExp | simpleExp
boolOp → < | =
simpleExp → term addOp simpleExp | term
addOp → + | -
term → factor mulOp factor | factor
mulOp → * | /
factor → id | float | int | ( exp )
77
So we have an optional list of variable declarations, followed by a series of statements. There
are 5 statements: a conditional, a repeat (iteration) statement, an assignment, and a read and
a write statement. All variables must be declared before use and each variable can be a floating
point number or an integer. We also have various expressions. Note how the grammar forces all
arithmetic expressions to have the correct operator precedence. Other ways of achieving this will be
discussed in the lectures. There are no boolean variables but two boolean operators are available
for conditional and repeat statements, less than, and equals. The terminals in this grammar
are precisely those symbols that do not appear on the left hand side of any rule above.
Your coursework comes in seven parts with a deadline at midnight on the Friday of weeks 2 -
6, 8 and 11. The coursework is designed to take 30 hours to complete in total, so you should
spend about 3 hours each week. The early parts shouldn't take that long so you are encouraged
to work ahead. I hope to put a model answer on Blackboard by Monday evening each week,
together with your grade, so if you achieved a disappointing grade for one assignment you can
use the model answer for the next. Each week the coursework will build upon the previous
work. Because I am giving model answers each week there will be no extensions. If you have
a reasonable excuse for not submitting on time you will be exempted from that coursework.
You are recommended to spend each Friday afternoon working on this assignment. Although
there will not be formal lab classes you are strongly recommended to use Friday afternoons
as such. You plan each week should be to read the relevant section of this document, try out
the examples, ensure that you understand fully what is going on, and complete the coursework
for that week. If this takes less than three hours you should start work on the next one. The
first five courseworks are worth 10% each, the last two are worth 25% each. The coursework in
total is worth 30% of the marks for the module.
78
5.2 Recognising Tokens
Our first job is to build a recogniser for the terminal symbols in the grammar. Let's look at a
minimal javacc file:
options{
STATIC = true;
}
PARSER_BEGIN(LexicalAnalyser)
class LexicalAnalyser{
public static void main(String[] args){
LexicalAnalyser lexan = new LexicalAnalyser(System.in);
try{
lexan.start();
}//end try
catch(ParseException e){
System.out.println(e.getMessage());
}//end catch
System.out.println("Finished Lexical Analysis");
}//end main
}//end class
PARSER_END(LexicalAnalyser)
//Ignore all whitespace
SKIP:{" "|"\t"|"\n"|"\r"}
//Declare the tokens
TOKEN:{<INT: (["0"-"9"])+>}
//Now the operators
TOKEN:{<PLUS: "+">}
TOKEN:{<MINUS: "-">}
TOKEN:{<MULT: "*">}
TOKEN:{<DIV: "/">}
void start():
{}
{
<PLUS> {System.out.println("Plus");} |
<MINUS> {System.out.println("Minus");} |
<DIV> {System.out.println("Divide");} |
<MULT> {System.out.println("Multiply");}|
<INT> {System.out.println("Integer");}
}
79
First we have a section for options. Most of the options available default to acceptable values
for this example but we want this program to be static because we are going to write a main
method here rather than in another file. Therefore we set this option to true explicitly.
Next we have a block of code delimited by PARSER_BEGIN(LexicalAnalyser) and
PARSER_END(LexicalAnalyser). The argument is the name which Javacc will give the finished
parser. Within that there is a class declaration. All code within the PARSER_BEGIN and
PARSER_END delimiters will be output verbatim by Javacc. Now we write the main method,
which constructs a new parser and calls its start() method. This has to be placed inside a try-
catch block because Javacc automatically generates its own internally defined ParseExceptions
and throws them if something goes wrong with the parse. Finally we print a message to the
screen announcing that we have finished the parse successfully.
We then have some SKIPped items. In this case we are simply telling the final parser to ignore
all whitespace, spaces, tabs, newlines and carriage returns. This is normal as we usually want
to encourage users of our language to use whitespace to indent their programs to make them
easier to read. There are languages in which whitespace has meaning, e.g. Haskell, but these
tend to be rare.
We then have to declare the tokens or terminal symbols of the grammar. The first declaration
is simply a regular expression for integers. We can put in square brackets a range of values,
in this case 0 - 9, separated by a hyphen. This means anything between these two values
inclusive. Then wrapping the range expression in parentheses and adding a + symbol we
say that an INT (our name for integers) is one or more instances of a digit. Note that literals
are always included in quote marks.
We then include definitions for the arithmetic operators. For such a simple example this is not
necessary since we could use literals instead, but we will be making this more complex later so
it's as well to start off the right way. Besides, it sometimes makes the code difficult to read if
we mix literals and references freely. A TOKEN declaration consists of:
The keyword TOKEN.
A colon.
An open brace.
An open angle bracket.
The name by which we will be referring to this token.
A colon.
A regular expression defining the pattern for recognising this token.
The close angle bracket.
The close brace.
We can have multiple token (or skip) declarations in a single line by including the | (or) symbol,
as in the skip declarations in the example above.
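For instance, identifier and floating-point tokens for the coursework grammar might be declared along the following lines. This is only a sketch of the shape such declarations can take (the names ID, FLOAT and ASSIGN and the exact patterns are up to you); the second line also shows two tokens combined in one declaration with |.
TOKEN:{<ID: ["a"-"z","A"-"Z"] (["a"-"z","A"-"Z","0"-"9"])*>}
TOKEN:{<FLOAT: (["0"-"9"])+ "." (["0"-"9"])+> | <ASSIGN: ":=">}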
We finally define the start() method in which all the work is done. A method declaration in
Javacc consists of the return type (in this case void), the name of the method (in this case
start) and any arguments required, followed by a colon and two (!) sets of braces. The first
one contains variables local to this method (in this case none). The second one contains the
grammar's production rules. In the above example we are saying that a legal program consists
of a single instance of one of the grammar terminals.
Each statement inside the start() method follows a format whereby a previously defined token
is followed by a set of one or more (one in this example) statements inside braces. This states
that when the respective token is identified by the parser the code inside the braces (which
must be legal Java code) is executed. Thus, if the parser recognises an int it will print "Integer"
to the terminal.
We first run the code through Javacc: javacc lexan.jj. (The file-type .jj is compulsory.) This
gives us a series of messages. Most of these relate to the files being produced by Javacc. We
then compile the resulting java files: javac *.java. This gives us the .class files which can be
run from the command line with java LexicalAnalyser.
N.B. You are strongly recommended to type in the above code and compile it. You
should do this with all the examples given in this chapter. Test your parser with
both legal and illegal input to see what happens.
At the moment we can only read a single token. The first improvement is to continue to read
tokens until either we reach the end of the input or we read illegal input. We can do this by
simply creating another regular expression inside the start() method.
void start():
{}
{
(<PLUS> |
<MINUS> |
<DIV> |
<MULT> |
<INT>)+
}
Also we can redirect the input from the console to an input file:
81
java LexicalAnalyser < input.txt
This file will now, when compiled, produce a parser which recognises any number of our oper-
ators and integers, printing to the screen what it has found. It will continue to do this until it
reaches the end of input or finds an error.
It is time to do assignment 1. The deadline for this is Friday midnight of week
two. You must write a Javacc file which recognises all legal terminal symbols in
the grammar defined on page 77.
5.3 JjTree
So far all we have done is create a lexical analyser which merely decides whether each input
is a token (terminal symbol) or not. The first time it finds an error it exits. The above
program demonstrates how each token is specified and given a name. Then when each token
is recognised we insert a piece of java code to execute. For a complete compiler we need to
construct a tree. Having done that we can traverse the tree generating machine-code for our
target machine. Luckily Javacc comes with a tree builder which inserts all the code necessary
for the construction of such a tree. We will now build a complete parser using this tool Jjtree.
First of all it should be noted that Jjtree is a pre-processor for Javacc, i.e. it takes our input
file and outputs Javacc code with tree building code built in. The process is shown in figure 1.
The input file to Jjtree needs a file-type of jjt so let's just change the file we have been using
to Lexan.jjt (from Lexan.jj) and add a few lines of code:
options{
STATIC = true;
}
PARSER_BEGIN(LexicalAnalyser)
class LexicalAnalyser{
public static void main(String[] args)throws ParseException{
LexicalAnalyser lexan = new LexicalAnalyser(System.in);
SimpleNode root = lexan.start();
System.out.println("Finished Lexical Analysis");
root.dump("");
}
}
PARSER_END(LexicalAnalyser)
//Ignore all whitespace
SKIP:{" "|"\t"|"\n"|"\r"}
//Declare the tokens
TOKEN:{<INT: (["0"-"9"])+>}
//Now the operators
TOKEN:{<PLUS: "+">}
TOKEN:{<MINUS: "-">}
TOKEN:{<MULT: "*">}
TOKEN:{<DIV: "/">}
SimpleNode start():
{}
{
(<PLUS> {System.out.println("Plus");} |
<MINUS> {System.out.println("Minus");} |
<DIV> {System.out.println("Divide");} |
<MULT> {System.out.println("Multiply");} |
<INT> {System.out.println("Integer");}
)+
{return jjtThis;}
}
We have stated that calling lexan.start() will return an object of type SimpleNode and as-
signed this to a variable called root. This will be the root of our tree. SimpleNode is the
default type of node returned by JavaTree methods. We will be seeing later how we can change
this behaviour. The final line of the main method now calls root.dump(), passing it an
empty string. This is a method defined within the files produced automatically by JavaTree
and its behaviour can be changed if we wish. At the moment it prints to the output a text
description of the tree which has been built by the parse if successful. The start() method now
has a return value of type SimpleNode and the final line of the method returns jjtThis. jjtThis
is the node currently under construction when it is executed. Running this code (Lexan.jjt)
through jjtree produces a file called Lexan.jj. Run this through Javacc and we get the usual se-
ries of .java files. Finally invoking javac *.java produces our parser, this time with tree building
capabilities built in. Running the compiled code with the following input file...
123
+
- * /
...produces the following output:
83
Integer
Plus
Minus
Multiply
Divide
Finished Lexical Analysis
start
It is the same as before except that we now see the result of printing out the tree (dump()).
It produces the single line start. This is in fact the name of the root of the tree. We have
constructed a tree with a single node called start. Let's add a few production rules to get a
clearer idea of what is happening.
SimpleNode start():
{}
{
(multExp())* {return jjtThis;}
}
void multExp():
{}
{
intExp() <MULT> intExp()
}
void intExp():
{}
{
<INT>
}
What we are doing here is stating that all internal nodes in our finished tree take the form of
a method, while all terminal nodes are tokens which we have defined.
The code is the same until the definition of the start() method. Then we have a series of methods,
each of which contains a production rule for that non-terminal symbol. The start() rule expects
0 or more multExps. A multExp is an intExp followed by a MULT terminal followed by another
intExp. An intExp is simply an INT terminal. The grammar in BNF form is as follows:
start → multExp | multExp start | ε
multExp → intExp MULT intExp
intExp → INT
84
where INT and MULT are terminal symbols.
The output from this after being run on the input:
1*12
3 *14
45 * 6
is more interesting:
start
multExp
intExp
intExp
multExp
intExp
intExp
multExp
intExp
intExp
Pictorially the tree we have built from the parse of the input is shown in figure 2.
Figure 2: The tree built from the parse (a Start node with three MultExp children, each of which has two IntExp children)
We have a start node containing three multExps, each of which contains two children, both
of which are intExps. It is time for assignment 2. All you have to do is change the
file you produced for assignment 1 into a jjtree file which contains the complete
specification for the module grammar given on page 77.
Notice how the nodes take the same name as the method name in which they are defined? Let's
change this behaviour:
SimpleNode start() #Root:{}{...}
void multExp() #MultiplicationExpression:{}{...}
void intExp() #IntegerExpression:{}{...}
85
All we have done is precede the method body with a local name (indicated by the # sign).
This gives us the following, more easily understandable output:
Root
MultiplicationExpression
IntegerExpression
IntegerExpression
MultiplicationExpression
IntegerExpression
IntegerExpression
MultiplicationExpression
IntegerExpression
IntegerExpression
To see the full effect of this change we can add two options to the first section of the file:
MULTI = true;
NODE_PREFIX = "";
The first tells the compiler builder to create different types of node, rather than just SimpleNode.
If we don't add the second line the files thus generated will all have a default prefix. We will
be using these different names later.
It is time for assignment 3. Change all the methods you have written so far so
that they produce node names and types which are other than the names of the
methods.
5.4 Information within nodes
With the addition of the above method of changing node names we have a great deal of infor-
mation about interior nodes. What about terminal nodes? We know that we have, for example,
an identifier or perhaps a floating point number but what is the name of that identifier,
or the actual value of the float? We will need to know that information in order to be able to
use it when, for example, generating code.
Looking at the list of files that Javacc generates for us we can see lots of nodes. Examining the
code we can see that all of them extend SimpleNode. Let's look at the superclass:
/* Generated By:JJTree: Do not edit this line. SimpleNode.java */
public class SimpleNode implements Node {
protected Node parent;
protected Node[] children;
protected int id;
protected LexicalAnalyser parser;
public SimpleNode(int i) {
id = i;
}
public SimpleNode(LexicalAnalyser p, int i) {
this(i);
parser = p;
}
public void jjtOpen() {
}
public void jjtClose() {
}
public void jjtSetParent(Node n) { parent = n; }
public Node jjtGetParent() { return parent; }
public void jjtAddChild(Node n, int i) {
if (children == null) {
children = new Node[i + 1];
} else if (i >= children.length) {
Node c[] = new Node[i + 1];
System.arraycopy(children, 0, c, 0, children.length);
children = c;
}
children[i] = n;
}
public Node jjtGetChild(int i) {
return children[i];
}
public int jjtGetNumChildren() {
return (children == null) ? 0 : children.length;
}
/* You can override these two methods in subclasses of SimpleNode to
customize the way the node appears when the tree is dumped. If
your output uses more than one line you should override
toString(String), otherwise overriding toString() is probably all
you need to do. */
public String toString() { return LexicalAnalyserTreeConstants.jjtNodeName[id]; }
public String toString(String prefix) { return prefix + toString(); }
/* Override this method if you want to customize how the node dumps
out its children. */
public void dump(String prefix) {
System.out.println(toString(prefix));
if (children != null) {
for (int i = 0; i < children.length; ++i) {
SimpleNode n = (SimpleNode)children[i];
if (n != null) {
n.dump(prefix + " ");
}
}
}
}
}
The child nodes for this node are kept in an array. jjtGetChild(i) returns the ith child node,
jjtGetNumChildren() returns the number of children this node has, and dump() determines how
the nodes are printed to the screen when we call it on the root node. Notice that dump() in particular
recursively invokes dump() on all its children after printing itself, giving a pre-order printout
of the tree. This can easily be changed to in-order or post-order.
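For example, a post-order variant could be added to SimpleNode alongside dump(); the sketch below simply moves the printing of the current node after the recursion over its children.
// A post-order version of dump(): visit the children first, then print this node.
public void dumpPostOrder(String prefix) {
    if (children != null) {
        for (int i = 0; i < children.length; ++i) {
            SimpleNode n = (SimpleNode) children[i];
            if (n != null) {
                n.dumpPostOrder(prefix + " ");
            }
        }
    }
    System.out.println(toString(prefix));
}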
We can add functionality to the node classes by adding it to SimpleNode, since all other nodes
inherit from this. Add the following to the SimpleNode file:
protected String type = "Interior node";
protected int intValue;
public void setType(String type){this.type = type;}
public void setInt(int value){intValue = value;}
public String getType(){return type;}
public int getInt(){return intValue;}
By adding these members to the SimpleNode class all subclasses of SimpleNode will inherit
them. Now change the code in the factor() method (of the coursework grammar) as follows:
88
void factor() #Factor:
{Token t;}
{
<ID> |
<FLOAT> |
t = <INT> {jjtThis.setType("int"); jjtThis.setInt(Integer.parseInt(t.image));} |
<OPENPAR> exp() <CLOSEPAR>
}
Each time javacc recognises a predefined token it produces an object of type Token. The class
Token contains some useful fields, two of which are kind and image (image is used here). This is
exactly the information we need to add to our nodes.
What we are saying here is that if the method factor() is called then it will be an ID, an
int, a float or a parenthesised expression. If it is an int we add to the factor node the type
and the value of this particular int.
Now, when we have a factor node we know, if it is an integer, what value it has. We can also,
by adding to this code, know exactly which type of factor we are dealing with. Look at the
dump() method:
public void dump(String prefix) {
System.out.println(toString(prefix));
if(toString().equals("Factor")){
if(getType().equals("int")){
System.out.println("I am an integer and my value is " + getInt());}
}
if (children != null) {
for (int i = 0; i < children.length; ++i) {
SimpleNode n = (SimpleNode)children[i];
if (n != null) {
n.dump(prefix + " ");
}
}
}
}
Now when we call the dump() method of our root node we will get all the information we had
previously, plus, for each terminal factor node, if it is an integer we will know its value as well.
Time for assignment 4. Alter your code (in the factor method and the SimpleNode
class) so that variables, integers and floats will all print out what they are, plus
their value or name.
89
5.5 Conditional statements in jjtree
Many of the methods we have defined so far for the internal nodes have an optional part. This
means that, for example, a statement list is a single statement followed by an optional semi-
colon and statement list. This use of recursion allows us to have as many statements in our
grammar as we want. Unfortunately it means that when the tree is constructed it will not be
possible to tell what exactly we have when we are examining a statement list node without
counting to see how many child nodes there are. This unnecessarily complicates our code so
let's see how to alter it:
void multExp() #MultiplicationExpression(>1):
{}
{
intExp() (<MULT> intExp())?
}
The production rule now states that a multExp is a single intExp followed by at most one
further <MULT> intExp (indicated by the question mark). The (>1) expression says to return a
MultiplicationExpression node only if there is more than one child. If the expression consists of
just a single intExp the method returns that, rather than a MultiplicationExpression.
This very much simplifies the output of dump(), and also makes our job much easier when
walking the tree to, for example, type check assignment statements, or generate machine code.
Now when we visit an expression or statement we know exactly what sort of node we are
dealing with.
5.6 Symbol Tables
Because our terminal nodes are treated differently from interior nodes we need to maintain a
symbol table to keep track of them. We will use this during all phases of our finished compiler
so we should look at it now. Of course we will implement a symbol table as a hash table, so
that we can access our variables in constant time.
This would also be a convenient time to add some demands to our language, such as the re-
quirement to declare variables before using them, and forbidding, or at least detecting, multiple
declarations.
Our Variables have a name, a type and a value. This is a trivial class to implement:
public class Variable{
private String type;
private String name;
private int intValue = 0;
private double floatValue = 0.0;
public Variable(int value, String name){
this.name = name;
type = "integer";
intValue = value;
}
public Variable(double value, String name){
this.name = name;
type = "float";
floatValue = value;
}
public String getType(){return type;}
public String getName(){return name;}
public int getInt(){return intValue;}
public double getDouble(){return floatValue;}
}
Now that we have Variables we need to declare a HashMap in which to put them. Once
we have done that we add to our LexicalAnalyser class a method for adding variables to our
symbol table:
public static void addToSymbolTable(Variable v){
if(symbolTable.containsKey(v.getName())){
System.out.println("Variable " + v.getName() + " multiply declared");
}
else{
symbolTable.put(v.getName(), v);
}
}
This quite simply interrogates the symbol table to see if the Variable we have just read exists
or not. If it does we print out an error message, otherwise we add the new variable to the table.
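The symbol table itself can simply be a static field of the LexicalAnalyser class; a minimal sketch (the field name symbolTable is an assumption, chosen to match the one used in addToSymbolTable above):
import java.util.HashMap;

// Inside the PARSER_BEGIN ... PARSER_END block of LexicalAnalyser:
static HashMap<String, Variable> symbolTable = new HashMap<String, Variable>();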
All that is left to do is invoke our function when we read a variable declaration:
void dec() #Declaration:
{String tokType;
Token tokName;
Variable v;}
{
tokType = type() tokName = <ID>{
if(tokType.equals("integer"))v = new Variable(0, tokName.image);
else v = new Variable(0.0, tokName.image);
addToSymbolTable(v);
}
}
String type() #Type:
{Token t;}
{
t = <FPN> {return "float";} |
t = <INTEGER> {return "integer";}
}
Read the above carefully, type it in and make sure you understand exactly what is going on.
Then start this week's assignment. Assignment 5. Alter the code you have written so
far so that methods with alternative numbers of children return the appropriate
type of node. Then add the symbol table code given above. (Ideally you should
declare the main method in a class of its own to avoid having all the extra methods
and fields declared as static.) Then add code which detects if the writer is trying
to use a variable which has not been declared, printing an error message if (s)he
does so.
5.7 Visiting the Tree
In altering the dump() method in the previous example we merely altered the toString()
method. But let's think about what we are trying to do, especially with the abilities you have
all demonstrated by progressing thus far in your studies.
Let us consider what we have here. We have a data structure (a tree) which we want to
traverse, operating upon each node (for the time being we will merely print out details of the
node). Those of you who took CS-211 - Programming with Objects and Threads will recognise
immediately the Visitor pattern. (See, you always knew that module would come in handy.)
Fortunately the writers of Javacc also took that module so they have already thought of this.
If we add a VISITOR switch to the options section:
options{
MULTI = true;
NODE_PREFIX = "";
STATIC = false;
VISITOR = true;
}
...jjtree writes a Visitor interface for us to implement. It defines an accept method for each
of the nodes allowing us to pass our implementation of the Visitor to each node. This is the
Visitor interface for our current example:
/* Generated By:JJTree: Do not edit this line. ./LexicalAnalyserVisitor.java */
public interface LexicalAnalyserVisitor
{
public Object visit(SimpleNode node, Object data);
public Object visit(Root node, Object data);
public Object visit(DeclarationList node, Object data);
public Object visit(Declaration node, Object data);
public Object visit(Type node, Object data);
public Object visit(StatementList node, Object data);
public Object visit(Statement node, Object data);
public Object visit(IfStatement node, Object data);
public Object visit(IfPart node, Object data);
public Object visit(ElsePart node, Object data);
public Object visit(RepeatStatement node, Object data);
public Object visit(AssignmentStatement node, Object data);
public Object visit(ReadStatement node, Object data);
public Object visit(WriteStatement node, Object data);
public Object visit(Expression node, Object data);
public Object visit(BooleanOperator node, Object data);
public Object visit(SimpleExpression node, Object data);
public Object visit(AdditionOperator node, Object data);
public Object visit(Term node, Object data);
public Object visit(MultiplicationOperator node, Object data);
public Object visit(Factor node, Object data);
}
Warning 1. Jjtree does not consistently rewrite all of its classes. This means that
if you have been writing these classes and experimenting with them as you have
read this (highly recommended) you should delete all files except the .jjt file, add
the new options and then recompile with Jjtree.
Warning 2. We will be writing an implementation of the Visitor interface. This is
our own file and is not therefore rewritten by Jjtree. Therefore from now on if we
alter the .jjt file and then compile byte code with javac *.java we may get errors if
we have not checked our implementation of the Visitor first.
93
Now instead of dump()ing our tree we visit it with a visitor. By implementing the visit()
method of each particular type of node in a different way we can do anything we want, without
altering the node classes themselves, which is exactly what the Visitor pattern was designed
to do.
public class MyVisitor implements LexicalAnalyserVisitor{
    public Object visit(Root node, Object data){
        System.out.println("Hello. I am a Root node");
        if(node.jjtGetNumChildren() == 0){
            System.out.println("I have no children");
        }
        else{
            for(int i = 0; i < node.jjtGetNumChildren(); i++){
                node.children[i].jjtAccept(this, data);
            }
        }
        return data;
    }
    //...and similarly one visit method for each of the other node types.
}
Simple isn't it! Assignment 6. Write a Visitor class, implementing LexicalAnalyserVisitor,
which visits each node in turn, printing out the type of node it is and
how many children it has.
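To actually run such a visitor we start it from the root node returned by the parser; a sketch of how main might change (jjtAccept is the accept method jjtree generates on every node when the VISITOR option is set):
LexicalAnalyser lexan = new LexicalAnalyser(System.in);
SimpleNode root = lexan.start();
root.jjtAccept(new MyVisitor(), null);   // walk the whole tree with our visitor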
5.8 Semantic Analyses
Now that we have our symbol table and a Visitor interface it is easy to implement a Visitor
object which can do anything we wish with the data held in the tree which our parser builds.
In the last assignment all we did was print out details of the type of node. In a real compiler we
would want eventually to generate machine code for our target machine. We will not be doing
that as an assignment, though if you are enjoying this module I hope you might want to do this
out of interest. For your final assignment you are going to be doing some semantic analysis,
to ensure that you do not attempt to assign incorrect values to variables, e.g. a floating point
number to an integer variable.
You have three weeks to do this assignment so there will be plenty of help given during lectures
and you have plenty of time to come and see me, or ask Program Advisory for help. It should
be clear that we only need to write code for assignment statements. Let's look at this:
void assignStatement() #AssignmentStatement:
{}
{
<ID> <ASSIGN> exp()
}
When we read the ID token we can check its type by looking it up in the symbol table as
we have done before. All we need to do is decide what the type of the exp is. This is your
assignment. Start from the bottom nodes of your tree and work your way up to the expression.
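To give an idea of the overall shape (and only that: the getName method on the node, the way the symbol table is reached and the convention that visiting an expression returns its type as a String are all assumptions of this sketch, not a prescribed solution), a type-checking visit method for assignment statements might look roughly like this:
public Object visit(AssignmentStatement node, Object data){
    // Assumption: the AssignmentStatement node has stored the name of the ID on its left hand side.
    Variable v = LexicalAnalyser.symbolTable.get(node.getName());
    // Assumption: visiting the expression child returns its type ("integer" or "float"),
    // computed bottom-up starting from the Factor nodes.
    String expType = (String) node.jjtGetChild(0).jjtAccept(this, data);
    if (v != null && v.getType().equals("integer") && expType.equals("float")) {
        System.out.println("Type error: cannot assign a float to integer variable " + v.getName());
    }
    return expType;
}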
95
6 Tiny
For your coursework you will be writing a compiler which compiles a while language, similar to
the one you met in Theory of Programming Languages, into an assembly language called tiny.
A tiny machine simulator can be downloaded from the course web-site to test your compiler.
This section describes the tiny language.
6.1 Basic Architecture
tiny consists of a read-only instruction memory, a data memory, and a set of eight general
purpose registers. These all use nonnegative integer addresses, beginning at 0. Register 7 is
the program counter and is the only special register, as described below.
The following C declarations will be used in the descriptions which follow:
#define IADDR_SIZE...
/* size of instruction memory */
#define DADDR_SIZE...
/* size of data memory */
#define NO_REGS 8
/* number of registers */
#define PC_REG 7
Instruction iMem[IADDR_SIZE];
int dMem[DADDR_SIZE];
int reg[NO_REGS];
tiny performs a conventional fetch-execute cycle:
do
/* fetch */
currentInstruction = iMem[reg[pcRegNo]++];
/* execute current instruction */
...
...
while (!(halt|error));
At startup the tiny machine sets all registers and data memory to 0, then loads the value
of the highest legal address (namely DADDR_SIZE - 1) into dMem[0]. This allows memory
to easily be added to the machine since programs can find out during execution how much
96
RO Instructions
Format: opcode r,s,t

Opcode   Effect
HALT     stop execution (operands ignored)
IN       reg[r] <- integer value read from the standard input (s and t ignored)
OUT      reg[r] -> the standard output (s and t ignored)
ADD      reg[r] = reg[s] + reg[t]
SUB      reg[r] = reg[s] - reg[t]
MUL      reg[r] = reg[s] * reg[t]
DIV      reg[r] = reg[s] / reg[t] (may generate ZERO_DIV)

RM Instructions
Format: opcode r,d(s)    where a = d + reg[s]; any reference to dMem[a] generates
DMEM_ERR if a < 0 or a >= DADDR_SIZE

Opcode   Effect
LD       reg[r] = dMem[a] (load r with memory value at a)
LDA      reg[r] = a (load address a directly into r)
LDC      reg[r] = d (load constant d directly into r; s is ignored)
ST       dMem[a] = reg[r] (store value in r to memory location a)
JLT      if (reg[r] < 0) reg[PC_REG] = a
         (jump to instruction a if r is negative; similarly for the following)
JLE      if (reg[r] <= 0) reg[PC_REG] = a
JGE      if (reg[r] >= 0) reg[PC_REG] = a
JGT      if (reg[r] > 0) reg[PC_REG] = a
JEQ      if (reg[r] == 0) reg[PC_REG] = a
JNE      if (reg[r] != 0) reg[PC_REG] = a

Table 1: The tiny instruction set
The machine then starts to execute instructions beginning at iMem[0]. The machine stops
when a HALT instruction is executed. The possible error conditions include IMEM_ERR, which
occurs if reg[PC_REG] < 0 or reg[PC_REG] >= IADDR_SIZE in the fetch step, and the two
conditions DMEM_ERR and ZERO_DIV, which occur during instruction execution as described
below.
The instruction set of the machine is given in Table 1, together with a brief description of the
effect of each instruction.
There are two basic instruction formats: register-only, or RO, instructions and register-
memory, or RM, instructions. A register-only instruction has the format
opcode r,s,t
where the operands r, s, t are legal registers (checked at load time). Thus such instructions
are three-address, and all three addresses must be registers. All arithmetic instructions are
limited to this format, as are the two primitive input/output instructions.
A register-memory instruction has the format
opcode r,d(s)
Here r and s must be legal registers (checked at load time), and d is a positive or
negative integer representing an offset. This is a two-address instruction, where the
first address is always a register and the second is a memory address a given by a =
d + reg[s], where a must be a legal address (0 <= a < DADDR_SIZE). If a is outside this
range, then DMEM_ERR is generated during execution.
RM instructions include three different load instructions corresponding to the three addressing
modes: load constant (LDC), load address (LDA), and load memory (LD). In addition
there is one store instruction and six conditional jump instructions.
In both RO and RM instructions, all three operands must be present, even though some of
them may be ignored. This is due to the simple nature of the loader, which only distinguishes
between the two classes of instruction (RO and RM) and does not allow different formats within
each class. (This actually makes code generation easier, since only two different routines will be
needed.)
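As a sketch of what those two routines might look like in the Java compiler you are writing
(the class and method names CodeEmitter, emitRO and emitRM are inventions for the example
and are not prescribed by these notes), a code generator could provide one emit method per
instruction class:

  // Hypothetical code-generation helpers: one output routine per instruction class.
  public class CodeEmitter {
      private int emitLoc = 0;                      // location of the next instruction to emit
      private final java.io.PrintWriter out;

      public CodeEmitter(java.io.PrintWriter out) { this.out = out; }

      /* register-only instruction: opcode r,s,t */
      public void emitRO(String op, int r, int s, int t, String comment) {
          out.printf("%d: %s %d,%d,%d %s%n", emitLoc++, op, r, s, t, comment);
      }

      /* register-memory instruction: opcode r,d(s) */
      public void emitRM(String op, int r, int d, int s, String comment) {
          out.printf("%d: %s %d,%d(%d) %s%n", emitLoc++, op, r, d, s, comment);
      }
  }

Since the simulator treats any text after an instruction as a comment, the comment argument
can simply be written onto the same line.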
Table 1 and the discussion of the machine up to this point represent the complete tiny
architecture. In particular there is no hardware stack or other facilities of any kind, and no
register except the PC is special in any way: there is no sp or fp. A compiler for the machine
must therefore maintain any runtime environment organisation entirely manually. Although this
may be unrealistic, it has the advantage that all operations must be generated explicitly as they
are needed.
Since the instruction set is minimal, we should give some idea of how it can be used to achieve
some more complicated programming language operations. (Indeed, the machine is adequate,
if not comfortable, for even very sophisticated languages.)
i. The target register in the arithmetic, IN, and load operations comes first, and the source
register(s) come second, similar to the 80x86 and unlike the Sun SparcStation. There is
no restriction on the use of registers for sources and targets; in particular, the source and
target registers may be the same.
ii. All arithmetic operations are restricted to registers. No operations (except the load and store
operations) act directly on memory. In this the machine resembles RISC machines such
as the Sun SparcStation. On the other hand, the machine has only 8 registers, while most
RISC processors have many more.
iii. There are no floating point operations or floating point registers. (But the language you
are compiling has no floating point variables either.)
iv. There is no restriction on the use of the PC in any of the instructions. Indeed, since there
is no unconditional jump, it must be simulated by using the PC as the target register in
an LDA instruction:
LDA 7, d(s)
This instruction has the effect of jumping to location d + reg[s].
v. There is also no indirect jump instruction, but it too can be imitated if necessary, by
using an LD instruction. For example,
LD 7,0(1)
jumps to the instruction whose address is stored in memory at the location pointed to by
register 1.
vi. The conditional jump instructions (JLT, etc.) can be made relative to the current position
in the program by using the PC as the second register. For example,
JEQ 0,4(7)
causes the machine to jump five instructions forward in the code if register 0 is 0. An
unconditional jump can also be made relative to the PC by using the PC twice in an
LDA instruction. Thus
LDA 7,-4(7)
performs an unconditional jump three instructions backwards.
vii. There are no procedure calls or JSUB instruction. Instead we must write
LD 7,d(s)
which has the effect of jumping to the procedure whose entry address is dMem[d +
reg[s]]. Of course we need to remember to save the return address first by executing
something like
LDA 0,1(7)
which places the current PC value plus one into reg[0]. (A sketch of how a compiler
might emit this call sequence follows the list.)
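Pulling these idioms together, a compiler might emit a call as the pair of instructions described
in items (iv) and (vii). The helper below reuses the hypothetical CodeEmitter sketched earlier
and assumes, purely as a convention for this example, that register 0 holds the return address:

  // Sketch of emitting a procedure call: save the return address, then jump
  // indirectly through memory. The register conventions here are assumptions.
  void emitCall(CodeEmitter gen, int d, int s) {
      gen.emitRM("LDA", 0, 1, 7, "r0 = address of the instruction after the call");
      gen.emitRM("LD",  7, d, s, "jump to procedure whose entry address is dMem[d + reg[s]]");
  }

Under this convention the called procedure can return by copying reg[0] back into the PC, for
example with LDA 7,0(0).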
6.2 The machine simulator
The machine accepts text files containing tiny machine (TM) instructions as described above,
with the following conventions:
An entirely blank line is ignored.
A line beginning with an asterisk is considered to be a comment and is ignored.
Any other line must contain an integer instruction location, followed by a colon, followed
by a legal instruction. Any text after the instruction is considered to be a comment and
is ignored.
The machine contains no other features; in particular, there are no symbolic labels and no macro
facilities.
We now have a look at a TM program which computes the factorial of a number input by the
user.
* This program inputs an integer, computes
* its factorial if it is positive,
* and prints the result
0: IN 0,0,0 r0 = read
1: JLE 0,6(7) if 0 < r0 then
2: LDC 1,1,0 r1 = 1
3: LDC 2,1,0 r2 = 1
* repeat
4: MUL 1,1,0 r1 = r1 * r0
5: SUB 0,0,2 r0 = r0 - r2
6: JNE 0,-3(7) until r0 = 0
7: OUT 1,0,0 write r1
8: HALT 0,0,0 halt
* end of program
Strictly speaking there is no need for the HALT instruction at the end of the code, since the
machine sets all instruction locations to HALT before loading the program; however, it is useful
to put it in as a reminder, and also as a target for jumps which wish to end the program.
If this program were saved in the file fact.tm then it can be loaded and executed as in
the following sample session. (The simulator assumes the file extension .tm if one is not given.)

tm fact
TM simulation (enter h for help) ...
Enter command: g
Enter value for IN instruction: 7
OUT instruction prints: 5040
HALT: 0,0,0
Halted
Enter command: q
Simulation done.
The g command stands for go, meaning that the program is executed starting at the current
contents of the PC (which is 0 after loading), until a HALT instruction is executed. The complete
list of simulator commands can be obtained with the command h.