Compiler Construction
Niklaus Wirth
This is a slightly revised version of the book published by Addison-Wesley in 1996
ISBN 0-201-40353-6
Zürich, February 2014
Preface
This book has emerged from my lecture notes for an introductory course in compiler design at ETH
Zürich. Several times I have been asked to justify this course, since compiler design is considered a
somewhat esoteric subject, practised only in a few highly specialized software houses. Because
nowadays everything which does not yield immediate profits has to be justified, I shall try to explain
why I consider this subject as important and relevant to computer science students in general.
It is the essence of any academic education that not only knowledge, and, in the case of an
engineering education, know-how is transmitted, but also understanding and insight. In particular,
knowledge about system surfaces alone is insufficient in computer science; what is needed is an
understanding of contents. Every academically educated computer scientist must know how a
computer functions, and must understand the ways and methods in which programs are
represented and interpreted. Compilers convert program texts into internal code. Hence they
constitute the bridge between software and hardware.
Now, one may interject that knowledge about the method of translation is unnecessary for an
understanding of the relationship between source program and object code, and even much less
relevant is knowing how to actually construct a compiler. However, from my experience as a
teacher, genuine understanding of a subject is best acquired from an in-depth involvement with
both concepts and details. In this case, this involvement is nothing less than the construction of an
actual compiler.
Of course we must concentrate on the essentials. After all, this book is an introduction, and not a
reference book for experts. Our first restriction to the essentials concerns the source language. It
would be beside the point to present the design of a compiler for a large language. The language
should be small, but nevertheless it must contain all the truly fundamental elements of programming
languages. We have chosen a subset of the language Oberon for our purposes. The second
restriction concerns the target computer. It must feature a regular structure and a simple instruction
set. Most important is the practicality of the concepts taught. Oberon is a general-purpose, flexible
and powerful language, and our target computer reflects the successful RISC-architecture in an
ideal way. And finally, the third restriction lies in renouncing sophisticated techniques for code
optimization. With these premisses, it is possible to explain a whole compiler in detail, and even to
construct it within the limited time of a course.
Chapters 2 and 3 deal with the basics of language and syntax. Chapter 4 is concerned with syntax
analysis, that is the method of parsing sentences and programs. We concentrate on the simple but
surprisingly powerful method of recursive descent, which is used in our exemplary compiler. We
consider syntax analysis as a means to an end, but not as the ultimate goal. In Chapter 5, the
transition from a parser to a compiler is prepared. The method depends on the use of attributes for
syntactic constructs.
After the presentation of the language Oberon-0, Chapter 7 shows the development of its parser
according to the method of recursive descent. For practical reasons, the handling of syntactically
erroneous sentences is also discussed. In Chapter 8 we explain why languages which contain
declarations, and which therefore introduce dependence on context, can nevertheless be treated as
syntactically context free.
Up to this point no consideration of the target computer and its instruction set has been necessary.
Since the subsequent chapters are devoted to the subject of code generation, the specification of a
target becomes unavoidable (Chapter 9). It is a RISC architecture with a small instruction set and a
set of registers. The central theme of compiler design, the generation of instruction sequences, is
thereafter distributed over three chapters: code for expressions and assignments to variables
(Chapter 10), for conditional and repeated statements (Chapter 11) and for procedure declarations
and calls (Chapter 12). Together they cover all the constructs of Oberon-0.
The subsequent chapters are devoted to several additional, important constructs of general-purpose
programming languages. Their treatment is more cursory in nature and less concerned
with details, but they are referenced by several suggested exercises at the end of the respective
chapters. These topics are further elementary data types (Chapter 13), and the constructs of open
arrays, of dynamic data structures, and of procedure types called methods in object-oriented
terminology (Chapter 14).
Chapter 15 is concerned with the module construct and the principle of information hiding. This
leads to the topic of software development in teams, based on the definition of interfaces and the
subsequent, independent implementation of the parts (modules). The technical basis is the
separate compilation of modules with complete checks of the compatibility of the types of all
interface components. This technique is of paramount importance for software engineering in
general, and for modern programming languages in particular.
Finally, Chapter 16 gives a brief overview of problems of code optimization. It is necessary because
of the semantic gap between source languages and computer architectures on the one hand, and
our desire to use the available resources as well as possible on the other.
Acknowledgements
I express my sincere thanks to all who contributed with their suggestions and criticism to this book
which matured over the many years in which I have taught the compiler design course at ETH
Zürich. In particular, I am indebted to Hanspeter Mössenböck and Michael Franz who carefully read
the manuscript and subjected it to their scrutiny. Furthermore, I thank Stephan Gehring, Stefan
Ludwig and Josef Templ for their valuable comments and cooperation in teaching the course.
N. W. December 1995
field programmable gate array (FPGA). The target computer is now specified as a text in the
language Verilog. From this text the circuit is automatically compiled and then loaded into the
FPGA's configuration memory. The RISC thereby gains in actuality and reality, in particular
because a low-cost development board containing the FPGA chip is available. Therefore,
the presented system becomes attractive for courses, in which hardware-software codesign is
taught, where a complete understanding of hardware and software is the goal.
May this text be instructive not only for future compiler designers, but for all who wish to gain insight
into the detailed functioning of hardware together with software.
Niklaus Wirth, Zürich, February 2014
http://www.inf.ethz.ch/personal/wirth/Oberon/Oberon07.Report.pdf
http://www.inf.ethz.ch/personal/wirth/FPGA-relatedWork/RISC.pdf
http://www.digilentinc.com/Products/Detail.cfm?Prod=S3BOARD
http://www.xilinx.com/products/silicon-devices/fpga/spartan-3.html
Contents
Preface
1. Introduction
2. Language and Syntax
2.1. Exercises
3. Regular Languages
4. Analysis of Context-free Languages
4.1. The method of recursive descent
4.2. Table-driven top-down parsing
4.3. Bottom-up parsing
4.4. Exercises
5. Attributed Grammars and Semantics
5.1. Type rules
5.2. Evaluation rules
5.3. Translation rules
5.4. Exercises
6. The Programming Language Oberon-0
7. A Parser for Oberon-0
7.1. The scanner
7.2. The parser
7.3. Coping with syntactic errors
7.4. Exercises
8. Consideration of Context Specified by Declarations
8.1. Declarations
8.2. Entries for data types
8.3. Data representation at run-time
8.4. Exercises
9. A RISC Architecture as Target
9.1. Registers and resources
9.2. Register instructions
9.3. Memory instructions
9.4. Branch instructions
9.5. An emulator
10. Expressions and Assignments
10.1. Straight code generation according to the stack principle
10.2. Delayed code generation
10.3. Indexed variables and record fields
10.4. Exercises
11. Conditional and Repeated Statements and Boolean Expressions
11.1. Comparisons and jumps
11.2. Conditional and repeated statements
11.3. Boolean operations
11.4. Assignments to Boolean variables
11.5. Exercises
12. Procedures and the Concept of Locality
12.1. Run-time organization of the store
12.2. Addressing of variables
12.3. Parameters
12.4. Procedure declarations and calls
1. Introduction
Computer programs are formulated in a programming language and specify classes of
computing processes. Computers, however, interpret sequences of particular instructions, but
not program texts. Therefore, the program text must be translated into a suitable instruction
sequence before it can be processed by a computer. This translation can be automated,
which implies that it can be formulated as a program itself. The translation program is called a
compiler, and the text to be translated is called source text (or sometimes source code).
It is not difficult to see that this translation process from source text to instruction sequence
requires considerable effort and follows complex rules. The construction of the first compiler
for the language Fortran (formula translator) around 1956 was a daring enterprise, whose
success was not at all assured. It involved about 18 man years of effort, and therefore figured
among the largest programming projects of the time.
The intricacy and complexity of the translation process could be reduced only by choosing a
clearly defined, well structured source language. This occurred for the first time in 1960 with
the advent of the language Algol 60, which established the technical foundations of compiler
design that still are valid today. For the first time, a formal notation was also used for the
definition of the language's structure (Naur, 1960).
The translation process is now guided by the structure of the analysed text. The text is
decomposed, parsed into its components according to the given syntax. For the most
elementary components, their semantics is recognized, and the meaning (semantics) of the
composite parts is the result of the semantics of their components. Naturally, the meaning of
the source text must be preserved by the translation.
The translation process essentially consists of the following parts:
1. The sequence of characters of a source text is translated into a corresponding sequence of
symbols of the vocabulary of the language. For instance, identifiers consisting of letters and
digits, numbers consisting of digits, delimiters and operators consisting of special characters
are recognized in this phase, which is called lexical analysis.
2. The sequence of symbols is transformed into a representation that directly mirrors the
syntactic structure of the source text and lets this structure easily be recognized. This phase
is called syntax analysis (parsing).
3. High-level languages are characterized by the fact that objects of programs, for example
variables and functions, are classified according to their type. Therefore, in addition to
syntactic rules, compatibility rules among types of operators and operands define the
language. Hence, verification of whether these compatibility rules are observed by a
program is an additional duty of a compiler. This verification is called type checking.
4. On the basis of the representation resulting from step 2, a sequence of instructions taken
from the instruction set of the target computer is generated. This phase is called code
generation. In general it is the most involved part, not least because the instruction sets of
many computers lack the desirable regularity. Often, the code generation part is therefore
subdivided further.
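These four phases can be made concrete in a small sketch. The following Python program (an illustration, not from the book; all names are invented) carries a tiny expression language through lexical analysis, parsing, type checking, and code generation for an imagined stack machine:

```python
import re

def lex(src):
    # Phase 1, lexical analysis: group characters into symbols
    return re.findall(r"[A-Za-z][A-Za-z0-9]*|[0-9]+|[+*()]", src)

class Parser:
    # Phase 2, syntax analysis, for the (hypothetical) grammar
    #   expr = term {"+" term}.  term = factor {"*" factor}.
    #   factor = identifier | number | "(" expr ")".
    def __init__(self, tokens):
        self.toks, self.pos = tokens, 0
    def peek(self):
        return self.toks[self.pos] if self.pos < len(self.toks) else None
    def take(self):
        t = self.toks[self.pos]; self.pos += 1; return t
    def expr(self):
        node = self.term()
        while self.peek() == "+":
            self.take(); node = ("+", node, self.term())
        return node
    def term(self):
        node = self.factor()
        while self.peek() == "*":
            self.take(); node = ("*", node, self.factor())
        return node
    def factor(self):
        t = self.take()
        if t == "(":
            node = self.expr(); self.take(); return node
        return ("num", int(t)) if t.isdigit() else ("var", t)

def typecheck(tree, declared):
    # Phase 3, type checking: here reduced to "variables must be declared"
    if tree[0] == "var" and tree[1] not in declared:
        raise TypeError("undeclared variable " + tree[1])
    if tree[0] in ("+", "*"):
        typecheck(tree[1], declared); typecheck(tree[2], declared)

def gen(tree, code=None):
    # Phase 4, code generation: postorder emission of stack instructions
    code = [] if code is None else code
    if tree[0] == "num":
        code.append(("PUSH", tree[1]))
    elif tree[0] == "var":
        code.append(("LOAD", tree[1]))
    else:
        gen(tree[1], code); gen(tree[2], code)
        code.append(("ADD",) if tree[0] == "+" else ("MUL",))
    return code

tree = Parser(lex("x + 2 * y")).expr()
typecheck(tree, {"x", "y"})
print(gen(tree))
# [('LOAD', 'x'), ('PUSH', 2), ('LOAD', 'y'), ('MUL',), ('ADD',)]
```

The phases here run strictly in sequence; in the single-pass compiler developed later in the book they are interleaved.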
A partitioning of the compilation process into as many parts as possible was the predominant
technique until about 1980, because until then the available store was too small to
accommodate the entire compiler. Only individual compiler parts would fit, and they could be
loaded one after the other in sequence. The parts were called passes, and the whole was
called a multipass compiler. The number of passes was typically 4 - 6, but reached 70 in a
particular case (for PL/I) known to the author. Typically, the output of pass k served as input of
pass k+1, and the disk served as intermediate storage (Figure 1.1). The very frequent access
to disk storage resulted in long compilation times.
Figure 1.1. Multipass compilation: lexical analysis, syntax analysis, code generation.
Figure 1.2. Front ends (Modula, Oberon) and back ends (MIPS, RISC, ARM) coupled by a syntax tree.
Increasingly, high-level languages are also employed for the programming of microcontrollers
used in embedded applications. Such systems are primarily used for data acquisition and
automatic control of machinery. In these cases, the store is typically small and is insufficient to
carry a compiler. Instead, software is generated with the aid of other computers capable of
compiling. A compiler which generates code for a computer different from the one executing
the compiler is called a cross compiler. The generated code is then transferred - downloaded - via a data transmission line.
In the following chapters we shall concentrate on the theoretical foundations of compiler
design, and thereafter on the development of an actual single-pass compiler.
Mary eats
Mary talks
where the symbol | is to be pronounced as or. We call these formulas syntax rules, productions, or
simply syntactic equations. Subject and predicate are syntactic classes. A shorter notation for the
above omits meaningful identifiers:
S = AB.
A = "a" | "b".
B = "c" | "d".
We will use this shorthand notation in the subsequent, short examples. The set L of sentences
which can be generated in this way, that is, by repeated substitution of the left-hand sides by the
right-hand sides of the equations, is called the language.
The example above evidently defines a language consisting of only four sentences. Typically,
however, a language contains infinitely many sentences. The following example shows that an
infinite set may very well be defined with a finite number of equations. The symbol ∅ stands for the
empty sequence.
S = A.
A = "a" A | ∅.
It is clear that arbitrarily deep nestings (here of As) can be expressed, a property particularly
important in the definition of structured languages.
Our fourth and last example exhibits the structure of expressions. The symbols E, T, F, and V stand
for expression, term, factor, and variable.
E = T | E "+" T.
T = F | T "*" F.
F = V | "(" E ")".
V = "a" | "b" | "c" | "d".
From this example it is evident that a syntax does not only define the set of sentences of a
language, but also provides them with a structure. The syntax decomposes sentences in their
constituents as shown in the example of Figure 2.1. The graphical representations are called
structural trees or syntax trees.
Figure 2.1. Structure of the expressions a*b+c, a+b*c and (a+b)*(c+d).
syntax     = ∅ | syntax production.
production = identifier "=" expression "." .
expression = term | expression "|" term.
term       = factor | term factor.
factor     = identifier | string.
identifier = letter | identifier letter | identifier digit.
string     = stringhead """.
stringhead = """ | stringhead character.
letter     = "A" | ... | "Z".
digit      = "0" | ... | "9".
This notation was introduced in 1960 by J. Backus and P. Naur in almost identical form for the
formal description of the syntax of the language Algol 60. It is therefore called Backus Naur Form
(BNF) (Naur, 1960). As our example shows, using recursion to express simple repetitions is rather
detrimental to readability. Therefore, we extend this notation by two constructs to express repetition
and optionality. Furthermore, we allow expressions to be enclosed within parentheses. Thereby an
extension of BNF called EBNF (Wirth, 1977) is postulated, which again we immediately use for its
own, precise definition:
syntax     = {production}.
production = identifier "=" expression "." .
expression = term {"|" term}.
term       = factor {factor}.
factor     = identifier | string | "(" expression ")" | "[" expression "]" | "{" expression "}".
identifier = letter {letter | digit}.
string     = """ {character} """.
letter     = "A" | ... | "Z".
digit      = "0" | ... | "9".
A factor of the form {x} is equivalent to an arbitrarily long sequence of x, including the empty
sequence. A production of the form
A = AB | ∅.
is now formulated more briefly as A = {B}. A factor of the form [x] is equivalent to "x or nothing",
that is, it expresses optionality. Hence, the need for the special symbol ∅ for the empty sequence
vanishes.
The idea of defining languages and their grammar with mathematical precision goes back to N.
Chomsky. It became clear, however, that the presented, simple scheme of substitution rules was
insufficient to represent the complexity of spoken languages. This remained true even after the
formalisms were considerably expanded. In contrast, this work proved extremely fruitful for the
theory of programming languages and mathematical formalisms. With it, Algol 60 became the first
programming language to be defined formally and precisely. In passing, we emphasize that this
rigour applied to the syntax only, not to the semantics.
The use of the Chomsky formalism is also responsible for the term programming language,
because programming languages seemed to exhibit a structure similar to spoken languages. We
believe that this term is rather unfortunate on the whole, because a programming language is not
spoken, and therefore is not a language in the true sense of the word. Formalism or formal notation
would have been more appropriate terms.
One wonders why an exact definition of the sentences belonging to a language should be of any
great importance. In fact, it is not really. However, it is important to know whether or not a sentence
is well formed. But even here one may ask for a justification. Ultimately, the structure of a (well
formed) sentence is relevant, because it is instrumental in establishing the sentence's meaning.
Owing to the syntactic structure, the individual parts of the sentence and their meaning can be
recognized independently, and together they yield the meaning of the whole.
Let us illustrate this point using the following, trivial example of an expression with the addition
symbol. Let E stand for expression, and N for number:
E = N | E "+" E.
N = "1" | "2" | "3" | "4" .
Evidently, "4 + 2 + 1" is a well-formed expression. It may even be derived in several ways, each
corresponding to a different structure, as shown in Figure 2.2.
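The ambiguity can also be demonstrated mechanically. The following Python sketch (illustrative only; the function name is invented) enumerates every derivation of the symbol sequence under E = N | E "+" E by trying each "+" as the topmost operator, and finds exactly two structures:

```python
def parses(toks):
    # Enumerate all syntax trees for a sequence N "+" N "+" ... "+" N
    if len(toks) == 1:
        return [toks[0]]
    out = []
    for i in range(1, len(toks), 2):    # candidate positions of the top "+"
        if toks[i] == "+":
            for left in parses(toks[:i]):
                for right in parses(toks[i + 1:]):
                    out.append((left, "+", right))
    return out

print(parses(["4", "+", "2", "+", "1"]))
# [('4', '+', ('2', '+', '1')), (('4', '+', '2'), '+', '1')]
```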
2.1. Exercises
2.1. The Algol 60 Report contains the following syntax (translated into EBNF):
primary = unsignedNumber | variable | "(" arithmeticExpression ")" | ... .
factor = primary | factor "↑" primary.
term = factor | term ("×" | "/" | "÷") factor.
simpleArithmeticExpression = term | ("+" | "-") term | simpleArithmeticExpression ("+" | "-") term.
arithmeticExpression = simpleArithmeticExpression |
"IF" BooleanExpression "THEN" simpleArithmeticExpression "ELSE" arithmeticExpression.
relationalOperator = "=" | "≠" | "≤" | "<" | "≥" | ">" .
relation = arithmeticExpression relationalOperator arithmeticExpression.
BooleanPrimary = logicalValue | variable | relation | "(" BooleanExpression ")" | ... .
BooleanSecondary = BooleanPrimary | "¬" BooleanPrimary.
BooleanFactor = BooleanSecondary | BooleanFactor "∧" BooleanSecondary.
BooleanTerm = BooleanFactor | BooleanTerm "∨" BooleanFactor.
implication = BooleanTerm | implication "⊃" BooleanTerm.
simpleBoolean = implication | simpleBoolean "≡" implication.
BooleanExpression = simpleBoolean |
"IF" BooleanExpression "THEN" simpleBoolean "ELSE" BooleanExpression.
Determine the syntax trees of the following expressions, in which letters are to be taken as
variables:
x+y+z
x × y + z
x + y × z
(x - y) × (x + y)
-x ÷ y
a+b<c+d
a+b<cdefg>hij=klm-n+pq
2.2. The following productions also are part of the original definition of Algol 60. They contain
ambiguities which were eliminated in the Revised Report.
forListElement = arithmeticExpression |
arithmeticExpression "STEP" arithmeticExpression "UNTIL" arithmeticExpression |
arithmeticExpression "WHILE" BooleanExpression.
forList = forListElement | forList "," forListElement.
forClause = "FOR" variable ":=" forList "DO" .
forStatement = forClause statement.
compoundTail = statement "END" | statement ";" compoundTail.
compoundStatement = "BEGIN" compoundTail.
unconditionalStatement = basicStatement | forStatement | compoundStatement | ... .
ifStatement = "IF" BooleanExpression "THEN" unconditionalStatement.
conditionalStatement = ifStatement | ifStatement "ELSE" statement.
statement = unconditionalStatement | conditionalStatement.
Find at least two different structures for the following expressions and statements. Let A and B
stand for "basic statements".
IF a THEN b ELSE c = d
IF a THEN IF b THEN A ELSE B
IF a THEN FOR ... DO IF b THEN A ELSE B
Propose an alternative syntax which is unambiguous.
2.3. Consider the following constructs and find out which ones are correct in Algol, and which ones
in Oberon:
a+b=c+d
a * -b
a<b&c<d
Evaluate the following expressions:
5 * 13 DIV 4 =
13 DIV 5*4 =
3. Regular Languages
Syntactic equations of the form defined in EBNF generate context-free languages. The term
"context-free" is due to Chomsky and stems from the fact that substitution of the symbol left of
= by a sequence derived from the expression to the right of = is always permitted, regardless
of the context in which the symbol is embedded within the sentence. It has turned out that this
restriction to context freedom (in the sense of Chomsky) is quite acceptable for programming
languages, and that it is even desirable. Context dependence in another sense, however, is
indispensable. We will return to this topic in Chapter 8.
Here we wish to investigate a subclass rather than a generalization of context-free languages.
This subclass, known as regular languages, plays a significant role in the realm of
programming languages. In essence, they are the context-free languages whose syntax
contains no recursion except for the specification of repetition. Since in EBNF repetition is
specified directly and without the use of recursion, the following, simple definition can be
given:
A language is regular, if its syntax can be expressed by a single EBNF expression.
The requirement that a single equation suffices also implies that only terminal symbols occur
in the expression. Such an expression is called a regular expression.
Two brief examples of regular languages may suffice. The first defines identifiers as they are
common in most languages; and the second defines integers in decimal notation. We use the
nonterminal symbols letter and digit for the sake of brevity. They can be eliminated by
substitution, whereby a regular expression results for both identifier and integer.
identifier = letter {letter | digit}.
integer = digit {digit}.
letter = "A" | "B" | ... | "Z".
digit = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9".
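For comparison, the same two regular languages written in the regular-expression notation of Python's re module (a transliteration offered purely for illustration):

```python
import re

# letter {letter | digit}  and  digit {digit}, transliterated directly
identifier = re.compile(r"[A-Z][A-Z0-9]*")
integer    = re.compile(r"[0-9][0-9]*")

assert identifier.fullmatch("X25") and not identifier.fullmatch("25X")
assert integer.fullmatch("2014") and not integer.fullmatch("")
```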
The reason for our interest in regular languages lies in the fact that programs for the
recognition of regular sentences are particularly simple and efficient. By "recognition" we
mean the determination of the structure of the sentence, and thereby naturally the
determination of whether the sentence is well formed, that is, it belongs to the language.
Sentence recognition is called syntax analysis.
For the recognition of regular sentences a finite automaton, also called a state machine, is
necessary and sufficient. In each step the state machine reads the next symbol and changes
state. The resulting state is solely determined by the previous state and the symbol read. If the
resulting state is unique, the state machine is deterministic, otherwise nondeterministic. If the
state machine is formulated as a program, the state is represented by the current point of
program execution.
The analysing program can be derived directly from the defining syntax in EBNF. For each
EBNF construct K there exists a translation rule which yields a program fragment Pr(K). The
translation rules from EBNF to program text are shown below. Therein sym denotes a global
variable always representing the symbol last read from the source text by a call to procedure
next. Procedure error terminates program execution, signalling that the symbol sequence read
so far does not belong to the language.
K                            Pr(K)
"x"                          IF sym = "x" THEN next ELSE error END
(exp)                        Pr(exp)
[exp]                        IF sym IN first(exp) THEN Pr(exp) END
{exp}                        WHILE sym IN first(exp) DO Pr(exp) END
fac0 fac1 ... facn           Pr(fac0); Pr(fac1); ... ; Pr(facn)
term0 | term1 | ... | termn  CASE sym OF
                               first(term0): Pr(term0)
                             | first(term1): Pr(term1)
                               ...
                             | first(termn): Pr(termn)
                             END

The conditions that must be satisfied are:

K                            Cond(K)
term0 | term1                The terms must not share any start symbols:
                             first(term0) ∩ first(term1) = ∅
fac0 fac1                    If fac0 may generate the empty sequence, then fac0 and fac1
                             must not share any start symbols:
                             first(fac0) ∩ first(fac1) = ∅
[exp] or {exp}               The start symbols of exp must differ from every symbol
                             that may follow K: first(exp) ∩ follow(K) = ∅
These conditions are satisfied trivially in the examples of identifiers and integers, and
therefore we obtain the following programs for their recognition:
IF sym IN letters THEN next ELSE error END ;
WHILE sym IN letters + digits DO
CASE sym OF
"A" .. "Z": next
| "0" .. "9": next
END
END
IF sym IN digits THEN next ELSE error END ;
WHILE sym IN digits DO next END
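The same recognizers, transcribed freely into Python for illustration (the helper structure is an assumption; returning False plays the role of a call to error):

```python
def recognize_identifier(s):
    # IF sym IN letters THEN next ELSE error END
    if not s or not s[0].isupper():
        return False
    # WHILE sym IN letters + digits DO next END
    for ch in s[1:]:
        if not (ch.isupper() or ch.isdigit()):
            return False
    return True

def recognize_integer(s):
    # IF sym IN digits THEN next ELSE error END ;
    # WHILE sym IN digits DO next END
    if not s or not s[0].isdigit():
        return False
    return all(ch.isdigit() for ch in s)

assert recognize_identifier("X25") and not recognize_identifier("2X")
assert recognize_integer("100") and not recognize_integer("12a")
```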
Frequently, the program obtained by applying the translation rules can be simplified by
eliminating conditions which are evidently established by preceding conditions. The conditions
sym IN letters and sym IN digits are typically formulated as follows:
("A" <= sym) & (sym <= "Z")
The significance of regular languages in connection with programming languages stems from
the fact that the latter are typically defined in two stages. First, their syntax is defined in terms
of a vocabulary of abstract terminal symbols. Second, these abstract symbols are defined in
terms of sequences of concrete terminal symbols, such as ASCII characters. This second
definition typically has a regular syntax. The separation into two stages offers the advantage
that the definition of the abstract symbols, and thereby of the language, is independent of any
concrete representation in terms of any particular character sets used by any particular
equipment.
This separation also has consequences on the structure of a compiler. The process of syntax
analysis is based on a procedure to obtain the next symbol. This procedure in turn is based on
the definition of symbols in terms of sequences of one or more characters. This latter
procedure is called a scanner, and syntax analysis on this second, lower level, lexical
analysis. The definition of symbols in terms of characters is typically given in terms of a
regular language, and therefore the scanner is typically a state machine.
               Lexical analysis    Syntax analysis
Input element  Character           Symbol
Algorithm      Scanner             Parser
Syntax         Regular             Context free
As an example we show a scanner for a parser of EBNF. Its terminal symbols and their
definition in terms of characters are
symbol     = {blank} (identifier | string | "(" | ")" | "[" | "]" | "{" | "}" | "|" | "=" | ".") .
identifier = letter {letter | digit}.
string     = """ {character} """.
From this we derive the procedure GetSym which, upon each call, assigns a numeric value
representing the next symbol read to the global variable sym. If the symbol is an identifier or a
string, the actual character sequence is assigned to the further global variable id. It must be
noted that typically a scanner also takes into account rules about blanks and ends of lines.
Mostly these rules say: blanks and ends of lines separate consecutive symbols, but otherwise
are of no significance. Procedure GetSym, formulated in Oberon, makes use of the following
declarations.
CONST IdLen = 32;
ident = 0; literal = 2; lparen = 3; lbrak = 4; lbrace = 5; bar = 6; eql = 7;
rparen = 8; rbrak = 9; rbrace = 10; period = 11; other = 12;
TYPE Identifier = ARRAY IdLen OF CHAR;
VAR ch: CHAR;
sym: INTEGER;
id: Identifier;
R: Texts.Reader;
Note that the abstract reading operation is now represented by the concrete call
Texts.Read(R, ch). R is a globally declared Reader specifying the source text. Also note that
variable ch must be global, because at the end of GetSym it may contain the first character
belonging to the next symbol. This must be taken into account upon the subsequent call of
GetSym.
PROCEDURE GetSym;
VAR i: INTEGER;
BEGIN
WHILE ~R.eot & (ch <= " ") DO Texts.Read(R, ch) END ; (*skip blanks*)
CASE ch OF
"A" .. "Z", "a" .. "z": sym := ident; i := 0;
REPEAT id[i] := ch; INC(i); Texts.Read(R, ch)
UNTIL (CAP(ch) < "A") OR (CAP(ch) > "Z");
id[i] := 0X
| 22X: (*quote*)
Texts.Read(R, ch); sym := literal; i := 0;
WHILE (ch # 22X) & (ch > " ") DO
id[i] := ch; INC(i); Texts.Read(R, ch)
END ;
IF ch <= " " THEN error(1) END ;
id[i] := 0X; Texts.Read(R, ch)
| "=" : sym := eql; Texts.Read(R, ch)
| "(" : sym := lparen; Texts.Read(R, ch)
| ")" : sym := rparen; Texts.Read(R, ch)
| "[" : sym := lbrak; Texts.Read(R, ch)
| "]" : sym := rbrak; Texts.Read(R, ch)
| "{" : sym := lbrace; Texts.Read(R, ch)
| "}" : sym := rbrace; Texts.Read(R, ch)
| "." : sym := period; Texts.Read(R, ch)
ELSE sym := other; Texts.Read(R, ch)
END
END GetSym;
3.1. Exercise
Sentences of regular languages can be recognized by finite state machines. They are usually
described by transition diagrams. Each node represents a state, and each edge a state
transition. The edge is labelled by the symbol that is read by the transition. Consider the
following diagrams and describe the syntax of the corresponding languages in EBNF.
[Transition diagrams not reproduced]
A = A "a" | "b".
we recognize that the requirement is violated, simply because b is a start symbol of A (b IN
first(A)), and because therefore first(A"a") and first("b") are not disjoint. "b" is the common
element.
The simple consequence is: left recursion can and must be replaced by repetition. In the
example above A = A "a" | "b" is replaced by A = "b" {"a"}.
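The repaired grammar is directly implementable. A small Python recognizer (illustrative, not from the book) for A = "b" {"a"} shows why the repetition form works: the symbol "b" is consumed before any looping takes place, whereas the left-recursive form would call itself before reading a single symbol and never terminate.

```python
def A(s):
    # A = "b" {"a"}: every sentence begins with "b" ...
    if not s or s[0] != "b":
        return False
    i = 1
    while i < len(s) and s[i] == "a":
        i += 1                # ... followed by any number of a's
    return i == len(s)

assert A("b") and A("baaa")
assert not A("ab") and not A("")
```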
Another way to look at our step from the state machine to its generalization is to regard the
latter as a set of state machines which call upon each other and upon themselves. In principle,
the only new condition is that the state of the calling machine is resumed after termination of
the called state machine. The state must therefore be preserved. Since state machines are
nested, a stack is the appropriate form of store. Our extension of the state machine is
therefore called a pushdown automaton. Theoretically relevant is the fact that the stack
(pushdown store) must be arbitrarily deep. This is the essential difference between the finite
state machine and the infinite pushdown automaton.
The general principle which is suggested here is the following: consider the recognition of the
sentential construct which begins with the start symbol of the underlying syntax as the
uppermost goal. If during the pursuit of this goal, that is, while the production is being parsed,
a nonterminal symbol is encountered, then the recognition of a construct corresponding to this
symbol is considered as a subordinate goal to be pursued first, while the higher goal is
temporarily suspended. This strategy is therefore also called goal-oriented parsing. If we look
at the structural tree of the parsed sentence we recognize that goals (symbols) higher in the
tree are tackled first, lower goals (symbols) thereafter. The method is therefore called top-down parsing (Knuth, 1971; Aho and Ullman, 1977). Moreover, the presented implementation
of this strategy based on recursive procedures is known as recursive descent parsing.
Finally, we recall that decisions about the steps to be taken are always made on the basis of
the single, next input symbol only. The parser looks ahead by one symbol. A lookahead of
several symbols would complicate the decision process considerably, and thereby also slow it
down. For this reason we will restrict our attention to languages which can be parsed with a
lookahead of a single symbol.
As a further example to demonstrate the technique of recursive descent parsing, let us
consider a parser for EBNF, whose syntax is summarized here once again:
syntax     = {production}.
production = identifier "=" expression "." .
expression = term {"|" term}.
term       = factor {factor}.
factor     = identifier | string | "(" expression ")" | "[" expression "]" | "{" expression "}".
By application of the given translation rules and subsequent simplification the following parser
results. It is formulated as an Oberon module:
MODULE EBNF;
IMPORT Viewers, Texts, TextFrames, Oberon;
CONST IdLen = 32;
ident = 0; literal = 2; lparen = 3; lbrak = 4; lbrace = 5; bar = 6; eql = 7;
rparen = 8; rbrak = 9; rbrace = 10; period = 11; other = 12;
TYPE Identifier = ARRAY IdLen OF CHAR;
VAR ch: CHAR;
sym: INTEGER;
lastpos: LONGINT;
id: Identifier;
R: Texts.Reader;
W: Texts.Writer;
PROCEDURE error(n: INTEGER);
VAR pos: LONGINT;
BEGIN pos := Texts.Pos(R);
IF pos > lastpos+4 THEN (*avoid spurious error messages*)
Texts.WriteString(W, " pos"); Texts.WriteInt(W, pos, 6);
Texts.WriteString(W, " err"); Texts.WriteInt(W, n, 4); lastpos := pos;
Texts.WriteString(W, " sym "); Texts.WriteInt(W, sym, 4);
Texts.WriteLn(W); Texts.Append(Oberon.Log, W.buf)
END
END error;
PROCEDURE GetSym;
BEGIN ... (*see Chapter 3*)
END GetSym;
PROCEDURE record(id: Identifier; class: INTEGER);
BEGIN (*enter id in appropriate list of identifiers*)
END record;
PROCEDURE expression;
PROCEDURE term;
PROCEDURE factor;
BEGIN
IF sym = ident THEN record(id, 1); GetSym
ELSIF sym = literal THEN record(id, 0); GetSym
ELSIF sym = lparen THEN
GetSym; expression;
IF sym = rparen THEN GetSym ELSE error(2) END
ELSIF sym = lbrak THEN
GetSym; expression;
IF sym = rbrak THEN GetSym ELSE error(3) END
ELSIF sym = lbrace THEN
GetSym; expression;
IF sym = rbrace THEN GetSym ELSE error(4) END
ELSE error(5)
END
END factor;
BEGIN (*term*) factor;
WHILE sym < bar DO factor END
END term;
BEGIN (*expression*) term;
WHILE sym = bar DO GetSym; term END
END expression;
PROCEDURE production;
BEGIN (*sym = ident*) record(id, 2); GetSym;
IF sym = eql THEN GetSym ELSE error(7) END ;
expression;
IF sym = period THEN GetSym ELSE error(8) END
END production;
PROCEDURE syntax;
BEGIN
WHILE sym = ident DO production END
END syntax;
Symbol =
SymDesc =
POINTER TO SymDesc;
RECORD alt, next: Symbol END
Then formulate this abstract data type for terminal and nonterminal symbols by using
Oberon's type extension feature (Reiser and Wirth, 1992). Records denoting terminal symbols
specify the symbol by the additional attribute sym:
Terminal =
TSDesc =
POINTER TO TSDesc;
RECORD (SymDesc) sym: INTEGER END
Nonterminal =
NTSDesc =
POINTER TO NTSDesc;
RECORD (SymDesc) this: Header END
Header =
HDesc =
POINTER TO HDesc;
RECORD sym: Symbol; name: ARRAY n OF CHAR END
As an example we choose the following syntax for simple expressions. Figure 4.1 displays the
corresponding data structure as a graph. Horizontal edges are next pointers, vertical edges
are alt pointers.
expression = term {"+" term}.
term = factor {"*" factor}.
factor = id | "(" expression ")".
Now we are in a position to formulate the general parsing algorithm in the form of a concrete
procedure:
Figure 4.1. Syntax as data structure
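The general parsing algorithm driven by such a graph can be sketched in Python for illustration. The class names, the encoding of terminals, and the grammar construction below are choices of this sketch, modeled on the alt/next declarations above; they are not the book's Oberon text.

```python
EMPTY = "empty"  # the special terminal representing the empty sequence

class Node:
    def __init__(self):
        self.alt = None   # alternative (vertical edge in Figure 4.1)
        self.next = None  # successor (horizontal edge)

class T(Node):            # terminal symbol
    def __init__(self, ch):
        super().__init__()
        self.ch = ch      # a character, "id" (any letter), or EMPTY

class N(Node):            # nonterminal symbol
    def __init__(self):
        super().__init__()
        self.head = None  # first node of the construct's graph (filled in later)

class Parser:
    def __init__(self, text):
        self.text = text
        self.pos = 0

    def parsed(self, head):
        """Pursue the goal whose graph starts at head; one-symbol lookahead."""
        x = head
        while True:
            if isinstance(x, T):               # terminal: compare with input
                if x.ch == EMPTY:
                    match = True               # matches without consuming input
                elif x.ch == "id":
                    match = self.pos < len(self.text) and self.text[self.pos].isalpha()
                    if match: self.pos += 1
                else:
                    match = self.pos < len(self.text) and self.text[self.pos] == x.ch
                    if match: self.pos += 1
            else:                              # nonterminal: subordinate goal
                match = self.parsed(x.head)
            x = x.next if match else x.alt
            if x is None:
                return match

def build():
    """Encode expression = term {"+" term}, term = factor {"*" factor},
    factor = id | "(" expression ")" as a graph of alt/next pointers.
    Repetitions loop back to the choice node, whose alternative is EMPTY."""
    fid = T("id"); flp = T("("); fe = N(); frp = T(")")
    fid.alt = flp; flp.next = fe; fe.next = frp                        # factor
    tf = N(); tmul = T("*"); tf2 = N(); tskip = T(EMPTY)
    tf.next = tmul; tmul.alt = tskip; tmul.next = tf2; tf2.next = tmul # term
    et = N(); eadd = T("+"); et2 = N(); eskip = T(EMPTY)
    et.next = eadd; eadd.alt = eskip; eadd.next = et2; et2.next = eadd # expression
    tf.head = fid; tf2.head = fid
    et.head = tf; et2.head = tf; fe.head = et
    return et  # head of the graph for expression
```

Note that the sketch decides among alternatives on the basis of the single next symbol only, exactly as postulated in the text; no backtracking takes place.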
The following remarks must be kept in mind:
1. We tacitly assume that terms always are of the form
T = f0 | f1 | ... | fn
where all factors except the last start with a distinct, terminal symbol. Only the last factor
may start with either a terminal or a nonterminal symbol. Under this condition it is possible to traverse the list of alternatives and in each step to make only a single comparison.
2. The data structure can be derived from the syntax (in EBNF) automatically, that is, by a
program which compiles the syntax.
3. In the procedure above the name of each nonterminal symbol to be recognized is output.
The header element serves precisely this purpose.
4. Empty is a special terminal symbol and element representing the empty sequence. It is
needed to mark the exit of repetitions (loops).
The bottom-up parse of the input x * (y + z) proceeds as shown below, with the stack to the left, the remaining input in the middle, and the kind of step to the right (S = shift, R = reduce):

	x * (y + z)	S
x	* (y + z)	R
F	* (y + z)	R
T	* (y + z)	S
T*	(y + z)	S
T*(	y + z)	S
T*(y	+ z)	R
T*(F	+ z)	R
T*(T	+ z)	R
T*(E	+ z)	S
T*(E +	z)	S
T*(E + z	)	R
T*(E + F	)	R
T*(E + T	)	R
T*(E	)	S
T*(E)		R
T*F		R
T		R
E
At the end, the initial source text is reduced to the start symbol E, which here would better be
called the stop symbol. As mentioned earlier, the intermediate store to the left is a stack.
In analogy to this representation, the process of parsing the same input according to the top-down principle is shown below. The two kinds of steps are denoted by M (match) and P (produce, expand). The start symbol is E.
P	E	x * (y + z)
P	T	x * (y + z)
P	T * F	x * (y + z)
P	F * F	x * (y + z)
M	id * F	x * (y + z)
M	* F	* (y + z)
P	F	(y + z)
M	(E)	(y + z)
P	E)	y + z)
P	E + T)	y + z)
P	T + T)	y + z)
P	F + T)	y + z)
M	id + T)	y + z)
M	+ T)	+ z)
P	T)	z)
P	F)	z)
M	id)	z)
M	)	)
Evidently, in the bottom-up method the sequence of symbols read is always reduced at its
right end, whereas in the top-down method it is always the leftmost nonterminal symbol which
is expanded. According to Knuth the bottom-up method is therefore called LR-parsing, and the
top-down method LL-parsing. The first L expresses the fact that the text is being read from left
to right. Usually, this denotation is given a parameter k (LL(k), LR(k)). It indicates the extent of
the lookahead being used. We will always implicitly assume k = 1.
Let us briefly return to the bottom-up principle. The concrete problem lies in determining which
kind of step is to be taken next, and, in the case of a reduce step, how many symbols on the
stack are to be involved in the step. This question is not easily answered. We merely state
that in order to guarantee an efficient parsing process, the information on which the decision is
to be based must be present in an appropriately compiled way. Bottom-up parsers always use
tables, that is, data structured in an analogous manner to the table-driven top-down parser
presented above. In addition to the representation of the syntax as a data structure, further
tables are required to allow us to determine the next step in an efficient manner. Bottom-up
parsing is therefore in general more intricate and complex than top-down parsing.
There exist various LR parsing algorithms. They impose different boundary conditions on the
syntax to be processed. The more lenient these conditions are, the more complex the parsing
process. We mention here the SLR (DeRemer, 1971) and LALR (LaLonde et al., 1971)
methods without explaining them in any further detail.
4.4. Exercises
4.1. Algol 60 contains a multiple assignment of the form v1 := v2 := ... vn := e. It is defined by
the following syntax:
	Attribute rule	Context condition
exp(T0) = term(T1) |	T0 := T1
exp(T1) "+" term(T2) |	T0 := T1	T1 = T2
exp(T1) "-" term(T2).	T0 := T1	T1 = T2
If operands of the types INTEGER and REAL are to be admissible in mixed expressions, the
rules become more relaxed, but also more complicated:
T0 := if (T1 = INTEGER) & (T2 = INTEGER) then INTEGER else REAL,
T1 = INTEGER or T1 = REAL
T2 = INTEGER or T2 = REAL
Rules about type compatibility are indeed also static in the sense that they can be verified
without execution of the program. Hence, their separation from purely syntactic rules appears
quite arbitrary, and their integration into the syntax in the form of attribute rules is entirely
appropriate. However, we note that attributed grammars obtain a new dimension, if the
possible attribute values (here, types) and their number are not known a priori.
If a syntactic equation contains a repetition, then it is appropriate with regard to attribute rules
to express it with the aid of recursion. In the case of an option, it is best to express the two
cases separately. This is shown by the following example where the two expressions
exp(T0) = term(T1) {"+" term(T2)}.
exp(T0) = term(T1) | "-" term(T1).
The type rules associated with a production come into effect whenever a construct
corresponding to the production is recognized. This association is simple to implement in the
case of a recursive descent parser: program statements implementing the attribute rules are
simply interspersed within the parsing statements, and the attributes occur as parameters to
the parser procedures standing for the syntactic constructs (nonterminal symbols). The
procedure for recognizing expressions may serve as a first example to demonstrate this
extension process, where the original parsing procedure serves as the scaffolding:
PROCEDURE expression;
BEGIN term;
WHILE (sym = "+") OR (sym = "-") DO
GetSym; term
END
END expression
is extended to implement its attribute (type) rules:
PROCEDURE expression(VAR typ0: Type);
VAR typ1, typ2: Type;
BEGIN term(typ1);
WHILE (sym = "+") OR (sym = "-") DO
GetSym; term(typ2);
typ1 := ResType(typ1, typ2)
END ;
typ0 := typ1
END expression
exp(v0) =	term(v1) |	v0 := v1
	exp(v1) "+" term(v2) |	v0 := v1 + v2
	exp(v1) "-" term(v2).	v0 := v1 - v2
term(v0) =	factor(v1) |	v0 := v1
	term(v1) "*" factor(v2) |	v0 := v1 * v2
	term(v1) "/" factor(v2).	v0 := v1 / v2
factor(v0) =	number(v1) |	v0 := v1
	"(" exp(v1) ")".	v0 := v1
Here, the attribute is the computed, numeric value of the recognized construct. The necessary
extension of the corresponding parsing procedure leads to the following procedure for
expressions:
PROCEDURE expression(VAR val0: INTEGER);
VAR val1, val2: INTEGER; op: CHAR;
BEGIN term(val1);
WHILE (sym = "+") OR (sym = "-") DO
op := sym; GetSym; term(val2);
IF op = "+" THEN val1 := val1 + val2 ELSE val1 := val1 - val2 END
END ;
val0 := val1
END expression
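For illustration, the same attributed parsing scheme can be transcribed to Python; the class and method names are this sketch's own, and the scanner is reduced to single-character inspection.

```python
class Calc:
    """Recursive descent evaluator: the attribute of each construct
    is its computed integer value."""

    def __init__(self, s):
        self.s = s.replace(" ", "")
        self.i = 0

    def sym(self):
        """The single lookahead symbol, or None at the end of the text."""
        return self.s[self.i] if self.i < len(self.s) else None

    def factor(self):
        if self.sym() == "(":
            self.i += 1
            v = self.expression()
            assert self.sym() == ")", "missing )"
            self.i += 1
            return v                       # v0 := v1
        j = self.i
        while self.sym() is not None and self.sym().isdigit():
            self.i += 1
        return int(self.s[j:self.i])       # v0 := number

    def term(self):
        v = self.factor()
        while self.sym() in ("*", "/"):
            op = self.sym(); self.i += 1
            w = self.factor()
            v = v * w if op == "*" else v // w   # v0 := v1 * v2  /  v0 := v1 / v2
        return v

    def expression(self):
        v = self.term()
        while self.sym() in ("+", "-"):
            op = self.sym(); self.i += 1
            w = self.term()
            v = v + w if op == "+" else v - w    # v0 := v1 + v2  /  v0 := v1 - v2
        return v
```

The attribute rules appear, exactly as in the Oberon procedure, interspersed within the parsing statements.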
The output is a sequence of instructions. In this example, the instructions are replaced by abstract symbols, and their output is specified by the operator put.
Syntax	Output rule
exp =	term |
	exp "+" term |	put("+")
	exp "-" term.	put("-")
term =	factor |
	term "*" factor |	put("*")
	term "/" factor.	put("/")
factor =	number |	put(number)
	"(" exp ")".	-
As can easily be verified, the sequence of output symbols corresponds to the parsed
expression in postfix notation. The parser has been extended into a translator.
Infix notation	Postfix notation
2 + 3	2 3 +
2 * 3 + 4	2 3 * 4 +
2 + 3 * 4	2 3 4 * +
(5 - 4) * (3 + 2)	5 4 - 3 2 + *
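The put-based translation scheme can likewise be sketched in Python (names are choices of this sketch): each output rule appends its symbol after the corresponding operands have been parsed, which yields postfix form directly.

```python
class ToPostfix:
    """Recursive descent translator from infix to postfix notation."""

    def __init__(self, s):
        self.s = s.replace(" ", "")
        self.i = 0
        self.out = []          # the put operator appends to this sequence

    def sym(self):
        return self.s[self.i] if self.i < len(self.s) else None

    def factor(self):
        if self.sym() == "(":
            self.i += 1
            self.exp()
            assert self.sym() == ")", "missing )"
            self.i += 1        # parentheses produce no output ("-" in the table)
        else:
            j = self.i
            while self.sym() is not None and self.sym().isdigit():
                self.i += 1
            self.out.append(self.s[j:self.i])    # put(number)

    def term(self):
        self.factor()
        while self.sym() in ("*", "/"):
            op = self.sym(); self.i += 1
            self.factor()
            self.out.append(op)                  # put("*") / put("/")

    def exp(self):
        self.term()
        while self.sym() in ("+", "-"):
            op = self.sym(); self.i += 1
            self.term()
            self.out.append(op)                  # put("+") / put("-")

def to_postfix(s):
    t = ToPostfix(s)
    t.exp()
    return " ".join(t.out)
```

Running the translator over the table's infix examples reproduces the listed postfix forms.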
Figure 5.1. Schema of a general, parametrized compiler (inputs: syntax, type rules, semantics and program; output: result).
Ultimately, the basic idea behind every language is that it should serve as a means for
communication. This means that partners must use and understand the same language.
Promoting the ease by which a language can be modified and extended may therefore be
rather counterproductive. Nevertheless, it has become customary to build compilers using
table-driven parsers, and to derive these tables from the syntax automatically with the help of
tools. The semantics are expressed by procedures whose calls are also integrated
automatically into the parser. Compilers thereby not only become bulkier and less efficient
than is warranted, but also much less transparent. The latter property remains one of our
principal concerns, and therefore we shall not pursue this course any further.
5.4. Exercise
5.1. Extend the program for syntactic analysis of EBNF texts in such a way that it generates
(1) a list of terminal symbols, (2) a list of nonterminal symbols, and (3) for each nonterminal
symbol the sets of its start and follow symbols. Based on these sets, the program is then to
determine whether the given syntax can be parsed top-down with a lookahead of a single
symbol. If this is not so, the program displays the conflicting productions in a suitable way.
Hint: Use Warshall's algorithm (R. W. Floyd, Algorithm 96, Comm. ACM, June 1962).
TYPE matrix = ARRAY [1..n],[1..n] OF BOOLEAN;
PROCEDURE ancestor(VAR m: matrix; n: INTEGER);
(* Initially m[i,j] is TRUE, if individual i is a parent of individual j.
At completion, m[i,j] is TRUE, if i is an ancestor of j *)
VAR i, j, k: INTEGER;
BEGIN
FOR i := 1 TO n DO
FOR j := 1 TO n DO
IF m[j, i] THEN
FOR k := 1 TO n DO
IF m[i, k] THEN m[j, k] := TRUE END
END
END
END
END
END ancestor
It may be assumed that the numbers of terminal and nonterminal symbols of the analysed
languages do not exceed a given limit (for example, 32).
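The hinted algorithm, transcribed to Python for experimentation (the matrix representation as a list of Boolean lists is this sketch's choice):

```python
def ancestor(m):
    """Transitive closure in place, following Warshall's scheme:
    initially m[i][j] is True if individual i is a parent of individual j;
    at completion, m[i][j] is True if i is an ancestor of j."""
    n = len(m)
    for i in range(n):
        for j in range(n):
            if m[j][i]:                 # j reaches i ...
                for k in range(n):
                    if m[i][k]:         # ... and i reaches k
                        m[j][k] = True  # hence j reaches k
    return m
```

Applied to the "is a start symbol of" relation, the closure yields the complete start sets required by the exercise.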
The following example of a module may help the reader to appreciate the character of the
language. The module contains various, well-known sample procedures. It also contains calls to
specific, predefined procedures ReadInt, WriteInt, WriteLn, and eot() whose purpose is evident.
MODULE Samples;
VAR n: INTEGER;
PROCEDURE Multiply;
VAR x, y, z: INTEGER;
BEGIN ReadInt(x); ReadInt(y); z := 0;
WHILE x > 0 DO
IF x MOD 2 = 1 THEN z := z + y END ;
y := 2*y; x := x DIV 2
END ;
WriteInt(x); WriteInt(y); WriteInt(z); WriteLn
END Multiply;
PROCEDURE Divide;
VAR x, y, r, q, w: INTEGER;
BEGIN ReadInt(x); ReadInt(y); r := x; q := 0; w := y;
WHILE w <= r DO w := 2*w END ;
WHILE w > y DO
q := 2*q; w := w DIV 2;
IF w <= r THEN r := r - w; q := q + 1 END
END ;
WriteInt(x); WriteInt(y); WriteInt(q); WriteInt(r); WriteLn
END Divide;
PROCEDURE BinSearch;
VAR i, j, k, n, x: INTEGER;
a: ARRAY 32 OF INTEGER;
BEGIN ReadInt(x); k := 0;
WHILE ~eot() DO ReadInt(a[k]); k := k + 1 END ;
i := 0; j := k;
WHILE i < j DO
k := (i+j) DIV 2;
IF x < a[k] THEN j := k ELSE i := k+1 END
END ;
WriteInt(i); WriteInt(j); WriteInt(a[j]); WriteLn
END BinSearch;
BEGIN ReadInt(n);
IF n = 0 THEN Multiply ELSIF n = 1 THEN Divide ELSE BinSearch END
END Samples.
6.1. Exercise
6.1. Determine the code for the computer defined in Chapter 9, generated from the program listed at the end of this chapter.
The words written in upper-case letters represent single, terminal symbols, and they are called
reserved words. They must be recognized by the scanner, and therefore cannot be used as
identifiers. In addition to the symbols listed, identifiers and numbers are also treated as terminal
symbols. Therefore the scanner is also responsible for recognizing identifiers and numbers.
It is appropriate to formulate the scanner as a module. In fact, scanners are a classic example of
the use of the module concept. It allows certain details to be hidden from the client, the parser, and
to make accessible (to export) only those features which are relevant to the client. The exported
facilities are summarized in terms of the module's interface definition:
DEFINITION OSS; (*Oberon Subset Scanner*)
IMPORT Texts;
CONST IdLen = 16;
(*symbols*) null = 0; times = 1; div = 3; mod = 4;
and = 5; plus = 6; minus = 7; or = 8; eql = 9;
neq = 10; lss = 11; leq = 12; gtr = 13; geq = 14;
period = 18; int = 21; false = 23; true = 24;
not = 27; lparen = 28; lbrak = 29;
ident = 31; if = 32; while = 34;
repeat = 35;
comma = 40; colon = 41; becomes = 42; rparen = 44;
rbrak = 45; then = 47; of = 48; do = 49;
semicolon = 52; end = 53;
else = 55; elsif = 56; until = 57;
array = 60; record = 61; const = 63; type = 64;
var = 65; procedure = 66; begin = 67; module = 69;
eof = 70;
TYPE Ident = ARRAY IdLen OF CHAR;
VAR val: INTEGER;
id: Ident;
error: BOOLEAN;
PROCEDURE Mark(msg: ARRAY OF CHAR);
PROCEDURE Get(VAR sym: INTEGER);
PROCEDURE Init(T: Texts.Text; pos: LONGINT);
END OSS.
The symbols are mapped onto integers. The mapping is defined by a set of constant definitions.
Procedure Mark serves to output diagnostics about errors discovered in the source text. Typically, a
short explanation is written into a log text together with the position of the discovered error.
Procedure Get represents the actual scanner. It delivers for each call the next symbol recognized.
The procedure performs the following tasks:
1. Blanks and line ends are skipped.
2. Reserved words, such as BEGIN and END, are recognized.
3. Sequences of letters and digits starting with a letter, which are not reserved words, are
recognized as identifiers. The parameter sym is given the value ident, and the character
sequence itself is assigned to the global variable id.
4. Sequences of digits are recognized as numbers. The parameter sym is given the value number,
and the number itself is assigned to the global variable val.
5. Combinations of special characters, such as := and <=, are recognized as a symbol.
6. Comments, represented by sequences of arbitrary characters beginning with (* and ending with *)
are skipped.
7. The symbol null is returned if the scanner reads an illegal character (such as $ or %). The symbol eof is returned if the end of the text is reached. Neither of these symbols occurs in a well-formed program text.
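The listed tasks can be sketched in Python as a toy scanner. The keyword set, the (kind, value) token representation, and the set of special characters below are illustrative choices of this sketch, not the OSS module's interface.

```python
KEYWORDS = {"MODULE", "BEGIN", "END", "IF", "THEN", "WHILE", "DO"}

def scan(text):
    """Deliver the recognized symbols of text as a list of (kind, value) pairs."""
    toks, i, n = [], 0, len(text)
    while i < n:
        ch = text[i]
        if ch.isspace():                          # 1. skip blanks and line ends
            i += 1
        elif text.startswith("(*", i):            # 6. skip comments (* ... *)
            j = text.find("*)", i + 2)
            i = n if j < 0 else j + 2
        elif ch.isalpha():                        # 2./3. reserved words, identifiers
            j = i
            while j < n and text[j].isalnum():
                j += 1
            word = text[i:j]
            toks.append((word if word in KEYWORDS else "ident", word))
            i = j
        elif ch.isdigit():                        # 4. numbers
            j = i
            while j < n and text[j].isdigit():
                j += 1
            toks.append(("number", int(text[i:j])))
            i = j
        elif text[i:i+2] in (":=", "<=", ">="):   # 5. two-character symbols
            toks.append((text[i:i+2], None))
            i += 2
        elif ch in "+-*=#<>;:,.()":               # single special characters
            toks.append((ch, None))
            i += 1
        else:                                     # 7. illegal character
            toks.append(("null", ch))
            i += 1
    toks.append(("eof", None))                    # 7. end of text
    return toks
```

Note that the comment test must precede the test for the single left parenthesis, and the two-character symbols must be tested before the single characters; the order of the tests encodes the longest-match rule.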
S	First(S)
selector	. [
factor	( ~ integer ident
term	( ~ integer ident
SimpleExpression	+ - ( ~ integer ident
expression	+ - ( ~ integer ident
assignment	ident
ProcedureCall	ident
statement	ident IF WHILE REPEAT
StatementSequence	ident IF WHILE REPEAT
FieldList	ident
type	ident ARRAY RECORD
FPSection	ident VAR
FormalParameters	(
ProcedureHeading	PROCEDURE
ProcedureBody	END CONST TYPE VAR PROCEDURE BEGIN
ProcedureDeclaration	PROCEDURE
declarations	CONST TYPE VAR PROCEDURE
module	MODULE
S	Follow(S)
selector	* DIV MOD & + - OR = # < <= > >= , ) ] := ( OF THEN DO ; END ELSE ELSIF UNTIL
factor	* DIV MOD & + - OR = # < <= > >= , ) ] OF THEN DO ; END ELSE ELSIF UNTIL
term	+ - OR = # < <= > >= , ) ] OF THEN DO ; END ELSE ELSIF UNTIL
SimpleExpression	= # < <= > >= , ) ] OF THEN DO ; END ELSE ELSIF UNTIL
expression	, ) ] OF THEN DO ; END ELSE ELSIF UNTIL
assignment	; END ELSE ELSIF UNTIL
ProcedureCall	; END ELSE ELSIF UNTIL
statement	; END ELSE ELSIF UNTIL
StatementSequence	END ELSE ELSIF UNTIL
FieldList	; END
type	) ; END
FPSection	) ;
FormalParameters	;
ProcedureHeading	;
ProcedureBody	;
ProcedureDeclaration	;
declarations	END BEGIN
The subsequent checks of the rules for determinism show that this syntax of Oberon-0 may indeed
be handled by the method of recursive descent using a lookahead of one symbol. A procedure is
constructed corresponding to each nonterminal symbol. Before the procedures are formulated, it is
useful to investigate how they depend on each other. For this purpose we design a dependence
graph (Figure 7.1). Every procedure is represented as a node, and an edge is drawn to all nodes on
which the procedure depends, that is, calls directly or indirectly. Note that some nonterminal
symbols do not occur in this graph, because they are included in other symbols in a trivial way. For
example, ArrayType and RecordType are contained in type only and are therefore not explicitly
drawn. Furthermore we recall that the symbols ident and integer occur as terminal symbols,
because they are treated as such by the scanner.
(The diagram's nodes are module, declarations, ProcedureDeclaration, IdentList, type, FPSection, StatSequence, expression, SimpleExpression, term, factor and selector.)
Figure 7.1. Dependence diagram of parsing procedures
Every loop in the diagram corresponds to a recursion. It is evident that the parser must be
formulated in a language that allows recursive procedures. Furthermore, the diagram reveals how
procedures may possibly be nested. The only procedure which is not called by another procedure is
Module. The structure of the program mirrors this diagram. The parser, like the scanner, is also
formulated as a module.
program. Without an at least partially correct hypothesis, continuation of the parsing process is
futile (Graham and Rhodes, 1975; Rechenberg and Mössenböck, 1985).
The technique of choosing good hypotheses is complicated. It ultimately rests upon heuristics, as
the problem has so far eluded formal treatment. The principal reason for this is that the formal
syntax ignores factors which are essential for the human recognition of a sentence. For instance, a missing punctuation symbol is a frequent mistake in program texts, whereas an operator symbol is seldom omitted in an arithmetic expression. To a parser, however, both kinds of symbols are
syntactic symbols without distinction, whereas to the programmer the semicolon appears as almost
redundant, and a plus symbol as the essence of the expression. This kind of difference must be
taken into account if errors are to be treated sensibly. To summarize, we postulate the following
quality criteria for error handling:
1. As many errors as possible must be detected in a single scan through the text.
2. As few additional assumptions as possible about the language are to be made.
3. Error handling features should not slow down the parser appreciably.
4. The parser program should not grow in size significantly.
We can conclude that error handling strongly depends on a concrete case, and that it can be
described by general rules only with limited success. Nevertheless, there are a few heuristic rules
which seem to have relevance beyond our specific language, Oberon. Notably, they concern the
design of a language just as much as the technique of error treatment. Without doubt, a simple
language structure significantly simplifies error diagnostics, or, in other words, a complicated syntax
complicates error handling unnecessarily.
Let us differentiate between two cases of incorrect text. The first case is where symbols are
missing. This is relatively easy to handle. The parser, recognizing the situation, proceeds by
omitting one or several calls to the scanner. An example is the statement at the end of factor, where
a closing parenthesis is expected. If it is missing, parsing is resumed after emitting an error
message:
IF sym = rparen THEN Get(sym) ELSE Mark(" ) missing") END
Virtually without exception, only weak symbols are omitted, symbols which are primarily of a
syntactic nature, such as the comma, semicolon and closing symbols. A case of wrong usage is an
equality sign instead of an assignment operator, which is also easily handled.
The second case is where wrong symbols are present. Here it is unavoidable to skip them and to
resume parsing at a later point in the text. In order to facilitate resumption, Oberon features certain
constructs beginning with distinguished symbols which, by their nature, are rarely misused. For
example, a declaration sequence always begins with the symbol CONST, TYPE, VAR, or
PROCEDURE, and a structured statement always begins with IF, WHILE, REPEAT, CASE, and so
on. Such strong symbols are therefore never skipped. They serve as synchronization points in the
text, where parsing can be resumed with a high probability of success. In Oberon's syntax, we
establish four synchronization points, namely in factor, statement, declarations and type. At the
beginning of the corresponding parser procedures symbols are being skipped. The process is
resumed when either a correct start symbol or a strong symbol is read.
PROCEDURE factor;
BEGIN (*sync*)
IF (sym < int) OR (sym > ident) THEN Mark("ident ?");
REPEAT Get(sym) UNTIL (sym >= int) & (sym <= ident)
END ;
...
END factor;
PROCEDURE StatSequence;
BEGIN (*sync*)
IF ~((sym = OSS.ident) OR (sym >= OSS.if) & (sym <= OSS.repeat)) THEN Mark("statement?");
REPEAT Get(sym) UNTIL (sym = OSS.ident) OR (sym >= OSS.if)
END ;
...
END StatSequence;
Evidently, a certain ordering among symbols is assumed at this point. This ordering had been
chosen such that the symbols are grouped to allow simple and efficient range tests. Strong symbols
not to be skipped are assigned a high ranking (ordinal number) as shown in the definition of the
scanner's interface.
In general, the rule holds that the parser program is derived from the syntax according to the
recursive descent method and the explained translation rules. If a read symbol does not meet
expectations, an error is indicated by a call of procedure Mark, and analysis is resumed at the next
synchronization point. Frequently, follow-up errors are diagnosed, whose indication may be omitted,
because they are merely consequences of a formerly indicated error. The statement which results
for every synchronization point can be formulated generally as follows:
IF ~(sym IN follow(SYNC)) THEN Mark(msg);
REPEAT Get(sym) UNTIL sym IN follow(SYNC)
END
where follow(SYNC) denotes the set of symbols which may correctly occur at this point.
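The general synchronization statement can be sketched in Python; Stream, synchronize, and the symbol spellings below are illustrative names of this sketch.

```python
class Stream:
    """Minimal symbol stream: sym always holds the current symbol."""
    def __init__(self, syms):
        self._it = iter(syms)
        self.next()

    def next(self):
        self.sym = next(self._it, "eof")   # end of text yields the eof symbol

def synchronize(stream, follow, mark):
    """The general statement at a synchronization point SYNC:
    if the current symbol is not in follow(SYNC), report the error once,
    then skip symbols until one in follow(SYNC) appears."""
    if stream.sym not in follow:
        mark("unexpected symbol")
        while stream.sym not in follow:
            stream.next()
```

Including the symbol eof in every follow set guarantees termination: in the worst case, skipping stops at the end of the source text, exactly as postulated for the parser as a whole.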
In certain cases it is advantageous to depart from the statement derived by this method. An example is the construct of the statement sequence. Instead of
Statement;
WHILE sym = semicolon DO Get(sym); Statement END
we use the formulation
REPEAT Statement;
IF sym = semicolon THEN Get(sym)
ELSIF ~(sym IN follow(StatSequence)) THEN Mark("; ?")
END
UNTIL sym IN follow(StatSequence)
This replaces the two calls of Statement by a single call, whereby this call may be replaced by the
procedure body itself, making it unnecessary to declare an explicit procedure. The two tests after
Statement correspond to the legal cases where, after reading the semicolon, either the next
statement is analysed or the sequence terminates. Instead of the condition sym IN
follow(StatSequence) we use a Boolean expression which again makes use of the specifically
chosen ordering of symbols:
(sym >= semicolon) & (sym < if) OR (sym >= array)
The construct above is an example of the general case where a sequence of identical
subconstructs which may be empty (here, statements) are separated by a weak symbol (here,
semicolon). A second, similar case is manifest in the parameter list of procedure calls. The
statement
IF sym = lparen THEN
Get(sym); expression;
WHILE sym = comma DO Get(sym); expression END ;
IF sym = rparen THEN Get(sym) ELSE Mark(") ?") END
END
is being replaced by
IF sym = lparen THEN Get(sym);
REPEAT expression;
IF sym = comma THEN Get(sym)
ELSIF (sym = rparen) OR (sym >= semicolon) THEN Mark(") or , ?")
END
UNTIL (sym = rparen) OR (sym >= semicolon)
END
The reason for deviating from the previously given method is that declarations in a wrong order (for
example variables before constants) must provoke an error message, but at the same time can be
parsed individually without difficulty. A further, similar case can be found in Type. In all these cases,
it is absolutely mandatory to ensure that the parser can never get caught in the loop. The easiest
way to achieve this is to make sure that in each repetition at least one symbol is being read, that is,
that each path contains at least one call of Get. Thereby, in the worst case, the parser reaches the
end of the source text and stops.
It should now have become clear that there is no such thing as a perfect strategy of error handling
which would translate all correct sentences with great efficiency and also sensibly diagnose all
errors in ill-formed texts. Every strategy will handle certain abstruse sentences in a way that
appears unexpected to its author. The essential characteristics of a good compiler, regardless of
details, are that (1) no sequence of symbols leads to its crash, and (2) frequently encountered
errors are correctly diagnosed and subsequently generate no, or few additional, spurious error
messages. The strategy presented here operates satisfactorily, albeit with possibilities for
improvement. The strategy is remarkable in the sense that the error handling parser is derived
according to a few, simple rules from the straight parser. The rules are augmented by the judicious
choice of a few parameters which are determined by ample experience in the use of the language.
7.4. Exercises
7.1. The scanner uses a linear search of array KeyTab to determine whether or not a sequence of
letters is a key word. As this search occurs very frequently, an improved search method would
certainly result in increased efficiency. Replace the linear search in the array by
1. A binary search in an ordered array.
2. A search in a binary tree.
3. A search of a hash table. Choose the hash function so that at most two comparisons are
necessary to find out whether or not the letter sequence is a key word.
Determine the overall gain in compilation speed for the three solutions.
7.2. Where is the Oberon syntax not LL(1), that is, where is a lookahead of more than one symbol
necessary? Change the syntax in such a way that it satisfies the LL(1) property.
7.3. Extend the scanner in such a way that it accepts real numbers as specified by the Oberon
syntax.
The following declarations are, for example, represented by the list shown in Figure 8.1.
CONST N = 10;
TYPE T = ARRAY N OF INTEGER;
VAR x, y: T
(The list rooted in topScope contains four entries with the fields name, class, type, val and next: N with class Const, type Int and value 10; T with class Type; x and y, each with class Var and type T; the last next pointer is NIL.)
Figure 8.1. Symbol table representing objects with names and attributes.
For the generation of new entries we introduce the procedure NewObj with the explicit parameter
class, the implied parameter id and the result obj. The procedure checks whether the new identifier
(id) is already present in the list. This would signify a multiple definition and constitute a
programming error. The new entry is appended at the end of the list, so that the list mirrors the
order of the declarations in the source text.
PROCEDURE NewObj(VAR obj: Object; class: INTEGER);
VAR new, x: Object;
BEGIN x := topScope;
WHILE (x.next # NIL) & (x.next.name # id) DO x := x.next END ;
IF x.next = NIL THEN
NEW(new); new.name := id; new.class := class; new.next := NIL;
x.next := new; obj := new
ELSE obj := x.next; Mark("multiple declaration")
END
END NewObj;
In order to speed up the search process, the list is often replaced by a tree structure. Its advantage
becomes noticeable only with a fairly large number of entries. For structured languages with local
scopes, that is, ranges of visibility of identifiers, the symbol table must be structured accordingly,
and the number of entries in each scope becomes relatively small. Experience shows that as a
result the tree structure yields no substantial benefit over the list, although it requires a more
complicated search process and the presence of three successor pointers per entry instead of one.
Note that the linear ordering of entries must also be recorded, because it is significant in the case of
procedure parameters.
A procedure find serves to access the object with name id. It represents a simple linear search,
proceeding through the list of scopes, and in each scope through the list of objects.
PROCEDURE find(VAR obj: OSG.Object);
VAR s, x: Object;
BEGIN s := topScope;
REPEAT x := s.next;
WHILE (x # NIL) & (x.name # id) DO x := x.next END ;
s := s.dsc
UNTIL (x # NIL) OR (s = NIL);
IF x = NIL THEN x := dummy; OSS.Mark("undef") END ;
obj := x
END find;
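The scheme of NewObj and find together with the list of scopes can be sketched in Python; the class names, attribute names and error texts are this sketch's own.

```python
class Obj:
    """A symbol table entry with name and class attribute."""
    def __init__(self, name, klass):
        self.name, self.klass, self.next = name, klass, None

class Scope:
    """List header: next points to the first object of the scope,
    dsc to the header of the enclosing scope."""
    def __init__(self, dsc):
        self.next, self.dsc = None, dsc

class SymTab:
    def __init__(self):
        self.top = Scope(None)       # the outermost scope
        self.errors = []

    def open_scope(self):
        self.top = Scope(self.top)

    def close_scope(self):
        self.top = self.top.dsc

    def new_obj(self, name, klass):
        """Append a new entry at the end of the list, so that the list
        mirrors the order of declarations; report multiple definitions."""
        x = self.top
        while x.next is not None and x.next.name != name:
            x = x.next
        if x.next is None:
            x.next = Obj(name, klass)
            return x.next
        self.errors.append("multiple declaration: " + name)
        return x.next

    def find(self, name):
        """Linear search through the list of scopes, and in each scope
        through the list of objects."""
        s = self.top
        while s is not None:
            x = s.next
            while x is not None and x.name != name:
                x = x.next
            if x is not None:
                return x
            s = s.dsc
        self.errors.append("undef: " + name)
        return None
```

Closing a scope merely restores the enclosing header; the local entries thereby become invisible at once.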
The type of variable a has no name. An easy solution to the problem is to introduce a proper data
type in the compiler to represent types as such. Named types then are represented in the symbol
table by an entry of type Object, which in turn refers to an element of type Type.
Type = POINTER TO TypDesc;
TypDesc = RECORD
form, len: INTEGER;
fields: Object;
base: Type
END
The attribute form differentiates between elementary types (INTEGER, BOOLEAN) and structured
types (arrays, records). Further attributes are added according to the individual forms.
Characteristic for arrays are their length (number of elements) and the element type (base). For
records, a list representing the fields must be provided. Its elements are of the class Field. As an
example, Figure 8.2 shows the symbol table resulting from the following declarations:
TYPE R = RECORD f, g: INTEGER END ;
VAR x: INTEGER;
a: ARRAY 10 OF INTEGER;
r, s: R;
(Figure 8.2 contains the object entries R with class Typ, and x, a, r and s with class Var, ending in NIL, together with two type entries: a record with form Rec and the fields f and g of class Field and type Int, and an array with form Array, len 10 and base Int.)
computer used here, the former occupies 4 bytes, the latter a single byte. However, in general
every type has a size, and every variable has an address.
These attributes, type.size and obj.adr, are determined when the compiler processes declarations.
The sizes of the elementary types are given by the machine architecture, and corresponding entries
are generated when the compiler is loaded and initialized. For structured, declared types, their size
has to be computed.
The size of an array is its element size multiplied by the number of its elements. The address of an
element is the sum of the array's address and the element's index multiplied by the element size.
Let the following general declarations be given:
TYPE T = ARRAY n OF T0
VAR a: T
Then type size and element address are obtained by the following equations:
size(T) = n * size(T0)
adr(a[x]) = adr(a) + x * size(T0)
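The two equations can be stated as executable definitions; the 4-byte integer size used in the check below is the one mentioned for the computer at hand, and the function names are illustrative.

```python
def size_of_array(n, size_t0):
    """size(T) for TYPE T = ARRAY n OF T0."""
    return n * size_t0

def adr_of_element(adr_a, x, size_t0):
    """adr(a[x]) for VAR a: T: the array's address plus the
    element's index multiplied by the element size."""
    return adr_a + x * size_t0
```

For a: ARRAY 10 OF INTEGER placed at address 100, element a[3] is thus found at address 112.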
For multi-dimensional arrays, the corresponding formulas (see Figure 8.3) are:
TYPE T = ARRAY nk-1, ... , n1, n0 OF T0
size(T) = nk-1 * ... * n1 * n0 * size(T0)
Here, procedure IdentList is used to process an identifier list, and the recursive procedure Type1
serves to compile a type declaration.
PROCEDURE IdentList(class: INTEGER; VAR first: Object);
VAR obj: Object;
BEGIN
IF sym = ident THEN
NewObj(first, class); Get(sym);
WHILE sym = comma DO
Get(sym);
IF sym = ident THEN NewObj(obj, class); Get(sym) ELSE Mark("ident?") END
END;
IF sym = colon THEN Get(sym) ELSE Mark(": ?") END
END
END IdentList;
The auxiliary procedures OpenScope and CloseScope ensure that the list of record fields is not
intermixed with the list of variables. Every record declaration establishes a new scope of visibility of
field identifiers, as required by the definition of the language Oberon. Note that the list into which
new entries are inserted is rooted in the global variable topScope.
8.4. Exercises
8.1. The scope of identifiers is defined to extend from the place of declaration to the end of the
procedure in which the declaration occurs. What would be necessary to let this range extend from
the beginning to the end of the procedure?
8.2. Consider pointer declarations as defined in Oberon. They specify a type to which the declared
pointer is bound, and this type may occur later in the text. What is necessary to accommodate this
relaxation of the rule that all referenced entities must be declared prior to their use?