Static Program Analysis

Anders Møller and Michael I. Schwartzbach

September 9, 2020

Copyright © 2008–2020 Anders Møller and Michael I. Schwartzbach

Contents

Preface
1 Introduction
  1.1 Applications of Static Program Analysis
  1.2 Approximative Answers
  1.3 Undecidability of Program Correctness
3 Type Analysis
  3.1 Types
  3.2 Type Constraints
  3.3 Solving Constraints with Unification
  3.4 Record Types
  3.5 Limitations of the Type Analysis
4 Lattice Theory
  4.1 Motivating Example: Sign Analysis
  4.2 Lattices
  4.3 Constructing Lattices
  4.4 Equations, Monotonicity, and Fixed-Points
6 Widening
  6.1 Interval Analysis
  6.2 Widening and Narrowing
8 Interprocedural Analysis
  8.1 Interprocedural Control Flow Graphs
  8.2 Context Sensitivity
  8.3 Context Sensitivity with Call Strings
  8.4 Context Sensitivity with the Functional Approach
Bibliography
Preface
Static program analysis is the art of reasoning about the behavior of computer
programs without actually running them. This is useful not only in optimizing
compilers for producing efficient code but also for automatic error detection
and other tools that can help programmers. A static program analyzer is a pro-
gram that reasons about the behavior of other programs. For anyone interested
in programming, what can be more fun than writing programs that analyze
programs?
As known from Turing and Rice, all nontrivial properties of the behavior
of programs written in common programming languages are mathematically
undecidable. This means that automated reasoning about software generally must
involve approximation. It is also well known that testing, i.e. concretely running
programs and inspecting the output, may reveal errors but generally cannot
show their absence. In contrast, static program analysis can – with the right kind
of approximations – check all possible executions of the programs and provide
guarantees about their properties. One of the key challenges when developing such analyses is to ensure sufficient precision and efficiency for them to be practically useful. For example, nobody will use an analysis designed for bug finding if it reports many false positives or if it is too slow to fit into real-world software development processes.
These notes present principles and applications of static analysis of pro-
grams. We cover basic type analysis, lattice theory, control flow graphs, dataflow
analysis, fixed-point algorithms, widening and narrowing, path sensitivity, rela-
tional analysis, interprocedural analysis, context sensitivity, control-flow ana-
lysis, several flavors of pointer analysis, and key concepts of semantics-based
abstract interpretation. A tiny imperative programming language with point-
ers and first-class functions is subjected to numerous different static analyses
illustrating the techniques that are presented.
We take a constraint-based approach to static analysis where suitable constraint
systems conceptually divide the analysis task into a front-end that generates
constraints from program code and a back-end that solves the constraints to
produce the analysis results. This approach enables separating the analysis
specification, which determines its precision, from the algorithmic aspects that
are important for its performance. In practice when implementing analyses, we
often solve the constraints on-the-fly, as they are generated, without representing
them explicitly.
We focus on analyses that are fully automatic (i.e., not involving program-
mer guidance, for example in the form of loop invariants or type annotations)
and conservative (sound but incomplete), and we only consider Turing com-
plete languages (like most programming languages used in ordinary software
development).
The analyses that we cover are expressed using different kinds of constraint
systems, each with their own constraint solvers:
• term unification constraints, with an almost-linear union-find algorithm,

• conditional subset constraints, with a cubic-time algorithm, and

• general monotone constraints over lattices, with fixed-point algorithms.
Chapter 1

Introduction
Static program analysis aims to automatically answer questions about the possi-
ble behaviors of programs. In this chapter, we explain why this can be useful
and interesting, and we discuss the basic characteristics of analysis tools.
1.1 Applications of Static Program Analysis

Analysis for program correctness The most successful analysis tools that have
been designed to detect errors (or verify absence of errors) target generic cor-
rectness properties that apply to most or all programs written in specific pro-
gramming languages. In unsafe languages like C, such errors sometimes lead to
critical security vulnerabilities. In safer languages like Java, such errors are
typically less severe, but they can still cause program crashes. Examples of such
properties are:
• Does there exist an input that leads to a null pointer dereference, division-
by-zero, or arithmetic overflow?
• Are all variables initialized before they are read?
• Are arrays always accessed within their bounds?
• Can there be dangling references, i.e., use of pointers to memory that has
been freed?
• Does the program terminate on every input? Even in reactive systems such
as operating systems, the individual software components, for example
device driver routines, are expected to always terminate.
Other correctness properties depend on specifications provided by the program-
mer for the individual programs (or libraries), for example:
• Are all assertions guaranteed to succeed? Assertions express program
specific correctness properties that are supposed to hold in all executions.
• Is function hasNext always called before function next, and is open always
called before read? Many libraries have such so-called typestate correctness
properties.
• Does the program throw an ActivityNotFoundException or a
SQLiteException for some input?
With web and mobile software, information flow correctness properties have
become extremely important:
• Can input values from untrusted users flow unchecked to file system
operations? This would be a violation of integrity.
• Can secret information become publicly observable? Such situations are
violations of confidentiality.
The increased use of concurrency (parallel or distributed computing) and event-
driven execution models gives rise to more questions about program behavior:
• Are data races possible? Many errors in multi-threaded programs are caused
by two threads using a shared resource without proper synchronization.
• Can the program (or parts of the program) deadlock? This is often a
concern for multi-threaded programs that use locks for synchronization.
1.2 Approximative Answers

Even seemingly simple questions about program behavior can be remarkably hard to answer. As an example, does the following program terminate for every initial value of the integer variable n?
while (n > 1) {
if (n % 2 == 0) // if n is even, divide it by two
n = n / 2;
else // if n is odd, multiply by three and add one
n = 3 * n + 1;
}
In 1937, Collatz conjectured that the answer is “yes”. As of 2017, the conjecture has been checked for all inputs up to 87 · 2⁶⁰, but nobody has been able to prove it for all inputs [Roo19].
Even straight-line programs can be difficult to reason about. Does the following program output true for some integer inputs?

x = input; y = input; z = input;
output x*x*x + y*y*y + z*z*z == 42;

This had been an open problem since 1954, until 2019 when the answer was found after over a million hours of computing [BS19].
Rice’s theorem [Ric53] is a general result from 1953 which informally states that all interesting questions about the behavior of programs (written in Turing-complete programming languages¹) are undecidable. This is easily seen for any special case. Assume for example the existence of an analyzer that decides if a variable in a program has a constant value in any execution. In other words, the analyzer is a program A that takes as input a program T, one of T's variables x, and some value k, and decides whether or not x's value is always equal to k whenever T is executed.

¹From this point on, we only consider Turing-complete languages.
[Diagram: the analyzer A takes (T, x, k) as input and answers “yes” or “no” to the question: is the value of variable x always equal to k when T is executed?]
We could then exploit this analyzer to also decide the halting problem by
using as input the following program where TM(j) simulates the j’th Turing
machine on empty input:
x = 17; if (TM(j)) x = 18;
Here x has a constant value 17 if and only if the j’th Turing machine does not
halt on empty input. If the hypothetical constant-value analyzer A exists, then
we have a decision procedure for the halting problem, which is known to be
impossible [Tur37].
At first this seems like a discouraging result; however, this theoretical result
does not prevent approximative answers. While it is impossible to build an
analysis that would correctly decide a property for any analyzed program, it is
often possible to build analysis tools that give useful answers for most realistic
programs. As the ideal analyzer does not exist, there is always room for building
more precise approximations (which is colloquially called the full employment
theorem for static program analysis designers).
Approximative answers may be useful for finding bugs in programs, which
may be viewed as a weak form of program verification. As a case in point,
consider programming with pointers in the C language. This is fraught with
dangers such as null dereferences, dangling pointers, leaking memory, and
unintended aliases. Ordinary compilers offer little protection from pointer errors.
Consider the following small program, which may exhibit every kind of error:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>

int main(int argc, char *argv[]) {
  if (argc == 42) {
    char *p,*q;
    p = NULL;
    printf("%s",p);
    q = (char *)malloc(100);
    p = q;
    free(q);
    *p = 'x';
    free(p);
    p = (char *)malloc(100);
    p = (char *)malloc(100);
    q = p;
    strcat(p,q);
    assert(argc > 87);
  }
}
Standard compiler tools such as gcc -Wall detect no errors in this program.
Testing might not reveal the errors either: for this program, no errors are encountered unless we happen to have a test case that runs the program with exactly 42 arguments. However, if we had even approximative answers
to questions about null values, pointer targets, and branch conditions then
many of the above errors could be caught statically, without actually running
the program.
Exercise 1.1: Describe all the pointer-related errors in the above program.
Ideally, the approximations we use are conservative (or safe), meaning that all
errors lean to the same side, which is determined by our intended application.
As an example, approximating the memory usage of programs is conservative if
the estimates are never lower than what is actually possible when the programs
are executed. Conservative approximations are closely related to the concept
of soundness of program analyzers. We say that a program analyzer is sound if
it never gives incorrect results (but it may answer maybe). Thus, the notion of
soundness depends on the intended application of the analysis output, which
may cause some confusion. For example, a verification tool is typically called
sound if it never misses any errors of the kinds it has been designed to detect, but
it is allowed to produce spurious warnings (also called false positives), whereas
an automated testing tool is called sound if all reported errors are genuine, but
it may miss errors.
Program analyses that are used for optimizations typically require soundness.
If given false information, the optimization may change the semantics of the
program. Conversely, if given trivial information, then the optimization fails to
do anything.
Consider again the problem of determining if a variable has a constant value.
If our intended application is to perform constant propagation optimization,
then the analysis may only answer yes if the variable really is a constant and
must answer maybe if the variable may or may not be a constant. The trivial
solution is of course to answer maybe all the time, so we are facing the engineering
challenge of answering yes as often as possible while obtaining reasonable analysis performance.
[Diagram: a sound analyzer A takes (T, x, k) as input and answers either “yes, definitely!” or “maybe, don't know” to the question: is the value of variable x always equal to k when T is executed?]
1.3 Undecidability of Program Correctness

Assume that P is a program analyzer that takes as input the encoding e(T) of a Turing machine T that always halts, and that P halts in its accept state if the fail state is unreachable in T, i.e. the given Turing machine is correct, and it halts in the reject state otherwise. Our goal is to show that P cannot exist.
If P exists, then we can also build another Turing machine, let us call it M, that takes as input the encoding e(T) of a Turing machine T and then builds the encoding e(ST) of yet another Turing machine ST, which behaves as follows: ST is essentially a universal Turing machine that is specialized to simulate T on input e(T). Let w denote the input to ST. Now ST is constructed such that it simulates T on input e(T) for at most |w| moves. If the simulation ends in T's accept state, then ST goes to its fail state. It is obviously possible to create ST in such a way that this is the only way it can reach its fail state. If the simulation does not end in T's accept state (that is, |w| moves have been made, or the simulation reaches T's reject or fail state), then ST goes to its accept state or its reject state (which one we choose does not matter). This completes the explanation of how ST works relative to T and w. Note that ST never diverges, and it reaches its fail state if and only if T accepts input e(T) after at most |w| moves. After building e(ST), M passes it to our hypothetical program analyzer P. Assuming that P works as promised, it ends in accept if ST is correct, in which case we also let M halt in its accept state, and in reject otherwise, in which case M similarly halts in its reject state.
[Diagram: M takes e(T) as input, constructs e(ST) from e(T), and passes e(ST) to P; M halts in its accept state if P accepts and in its reject state if P rejects.]
We now ask: Does M accept input e(M)? That is, what happens if we run M with T = M? If M does accept input e(M), it must be the case that P accepts input e(ST), which in turn means that ST is correct, so its fail state is unreachable. In other words, for any input w, no matter its length, ST does not reach its fail state. This in turn means that T does not accept input e(T). However, we have T = M, so this contradicts our assumption that M accepts input e(M). Conversely, if M rejects input e(M), then P rejects input e(ST), so the fail state of ST is reachable for some input v. This means that there must exist some w such that the fail state of ST is reached in |w| steps on input v, so T must accept input e(T), and again we have a contradiction. By construction M halts in either accept or reject on any input, but neither is possible for input e(M). In conclusion, the ideal program correctness analyzer P cannot exist.
Exercise 1.2: In the above proof, the hypothetical program analyzer P is only
required to correctly analyze programs that always halt. Show how the proof
can be simplified if we want to prove the following weaker property: There
exists no Turing machine P that can decide whether or not the fail state is
reachable in a given Turing machine. (Note that the given Turing machine is
now not assumed to be total.)
Chapter 2

A Tiny Imperative Programming Language
2.1 The Syntax of TIP

Basic Expressions

The basic expressions all denote integer values:

I → 0 | 1 | -1 | 2 | -2 | . . .
X → x | y | z | . . .
E → I
  | X
  | E + E | E - E | E * E | E / E | E > E | E == E
  | ( E )
  | input
Statements

The simple statements S are familiar:

S → X = E;
  | output E;
  | S S
  | if (E) { S } [ else { S } ]?
  | while (E) { S }

We use the notation [ . . . ]? to indicate optional parts. In the conditions we interpret 0 as false and all other values as true. The output statement writes an integer value to the output stream.
Functions

A function declaration F contains a function name, a list of parameters, local variable declarations, a body statement, and a return expression:

F → X ( X,. . . ,X ) { [ var X,. . . ,X; ]? S return E; }

Function names and parameters are identifiers, like variables. The var block declares a collection of uninitialized local variables. Function calls are an extra kind of expression:

E → X ( E,. . . ,E )
Records
A record is a collection of fields, each having a name and a value. The syntax for
creating records and for reading field values looks as follows:
E → { X:E,. . . , X:E }
| E.X
Pointers
To be able to build data structures and dynamically allocate memory, we intro-
duce pointers:
E → alloc E
| &X
| *E
| null
The first expression allocates a new cell in the heap initialized with the value of
the given expression and results in a pointer to the cell. The second expression
creates a pointer to a program variable, and the third expression dereferences a
pointer value. In order to assign values through pointers we allow another form
of assignment:
S → *X = E;
In such an assignment, if the variable on the left-hand-side holds a pointer to a cell, then the value of the right-hand-side expression is stored in that cell.
Pointers and integers are distinct values, and pointer arithmetic is not possible.
Functions as Values
We also allow functions as first-class values. The name of a function can be used
as a kind of variable that refers to the function, and such function values can be
assigned to ordinary variables, passed as arguments to functions, and returned
from functions.
We add a generalized form of function calls (sometimes called computed or
indirect function calls, in contrast to the simple direct calls described earlier):
E → E ( E,. . . ,E )
Unlike simple function calls, the function being called is now an expression
that evaluates to a function value. Function values allow us to illustrate the
main challenges that arise with methods in object-oriented languages and with
higher-order functions in functional languages.
Programs
A complete program is just a collection of functions:
P → F ...F
(We sometimes also refer to individual functions or statements as programs.) For
a complete program, the function named main is the one that initiates execution.
Its arguments are supplied in sequence from the beginning of the input stream,
and the value that it returns is appended to the output stream.
To keep the presentation short, we deliberately have not specified all details
of the TIP language, neither the syntax nor the semantics.
Exercise 2.1: Identify some of the under-specified parts of the TIP language,
and propose meaningful choices to make it more well-defined.
2.2 Example Programs

The following complicated version of the factorial function, which uses pointers and functions as values, serves as a running example in Chapter 3:

foo(p,x) {
    var f,q;
    if (*p==0) { f=1; }
    else {
        q = alloc 0;
        *q = (*p)-1;
        f = (*p)*(x(q,x));
    }
    return f;
}

main() {
    var n;
    n = input;
    return foo(&n,foo);
}
2.3 Normalization
A rich and flexible syntax is useful when writing programs, but when describing
and implementing static analyses, it is often convenient to work with a syntacti-
cally simpler language. For this reason we sometimes normalize programs by
transforming them into equivalent but syntactically simpler ones. A particularly
useful normalization is to flatten nested pointer expressions, such that pointer
dereferences are always of the form *X rather than the more general *E, and sim-
ilarly, function calls are always of the form X(X,. . . ,X) rather than E(E,. . . ,E).
It may also be useful to flatten arithmetic expressions, arguments to direct calls,
branch conditions, and return expressions.
As an example,
x = f(y+3)*5;
can be normalized to
t1 = y+3;
t2 = f(t1);
x = t2*5;
where t1 and t2 are fresh variables, whereby each statement performs only one
operation.
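As a sketch of how such a flattening could be implemented, consider the following Scala fragment. The toy AST (Num, Var, Bin, Call) and the Normalizer object are our own illustration for this sketch, not the actual TIP implementation, and unlike the example above it also introduces a temporary for the final operation:

// A toy AST for illustration (not the actual TIP implementation).
sealed trait Exp
case class Num(n: Int) extends Exp
case class Var(x: String) extends Exp
case class Bin(op: String, left: Exp, right: Exp) extends Exp
case class Call(f: String, arg: Exp) extends Exp

object Normalizer {
  private var counter = 0
  private def fresh(): String = { counter += 1; s"t$counter" }

  // Emits one-operation assignments into `out` and returns the variable
  // or constant that holds the value of `e`.
  def flatten(e: Exp, out: collection.mutable.ListBuffer[String]): String = e match {
    case Num(n) => n.toString
    case Var(x) => x
    case Bin(op, l, r) =>
      val lv = flatten(l, out)
      val rv = flatten(r, out)
      val t = fresh(); out += s"$t = $lv $op $rv;"; t
    case Call(f, a) =>
      val av = flatten(a, out)
      val t = fresh(); out += s"$t = $f($av);"; t
  }

  def main(args: Array[String]): Unit = {
    val out = collection.mutable.ListBuffer[String]()
    // Normalize x = f(y+3)*5;
    val r = flatten(Bin("*", Call("f", Bin("+", Var("y"), Num(3))), Num(5)), out)
    out += s"x = $r;"
    out.foreach(println)  // t1 = y + 3;  t2 = f(t1);  t3 = t2 * 5;  x = t3;
  }
}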
Exercise 2.2: Argue that any TIP program can be normalized so that all
expressions, with the exception of right-hand side expressions of assignments,
are variables. (This is sometimes called A-normal form [FSDF93].)
Exercise 2.4: In the current syntax for TIP, heap assignments are restricted to
the form *X = E. Languages like C allow the more general *E1 = E2 where E1
is an expression that evaluates to a (non-function) pointer. Explain how the
statement **x=**y; can be normalized to fit the current TIP syntax.
TIP uses lexical scoping; however, we make the notationally simplifying assumption that all declared variable and function names are unique in a program, i.e. that no identifier is declared more than once.
Exercise 2.5: Argue that any program can be normalized so that all declared
identifiers are unique.
2.4 Abstract Syntax Trees

Abstract syntax trees (ASTs) represent the syntactic structure of programs, abstracting away from details that are irrelevant for analysis. As an example, the AST of the iterative factorial function can be illustrated as follows:

[Diagram: AST of the ite function, with sub-trees for the parameter n, the declaration var f, the assignment f = 1, the while loop with condition n > 0 and body containing f = f * n and n = n - 1, and the return of f.]
With this representation, it is easy to extract the set of statements and their
structure for each function in the program.
2.5 Control Flow Graphs

A control flow graph (CFG) is a directed graph in which nodes correspond to operations and edges represent possible flow of control. A CFG always has a single point of entry and a single point of exit. If v is a node in a CFG then pred(v) denotes the set of predecessor nodes and succ(v) the set of successor nodes.
For programs that are fully normalized (cf. Section 2.3), each node corre-
sponds to only one operation.
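As a sketch of how CFGs with pred and succ might be represented in Scala (our own illustration, not the actual implementation accompanying these notes):

// Nodes carry an id and a label; edges are stored in both directions so that
// pred(v) and succ(v) are constant-time lookups.
case class CfgNode(id: Int, label: String)

class Cfg {
  private val preds = collection.mutable.Map[CfgNode, Set[CfgNode]]().withDefaultValue(Set())
  private val succs = collection.mutable.Map[CfgNode, Set[CfgNode]]().withDefaultValue(Set())

  def addEdge(from: CfgNode, to: CfgNode): Unit = {
    succs(from) = succs(from) + to
    preds(to) = preds(to) + from
  }

  def pred(v: CfgNode): Set[CfgNode] = preds(v)
  def succ(v: CfgNode): Set[CfgNode] = succs(v)
}

For example, the CFG of the iterative factorial function shown below can be built by creating one node per operation and calling addEdge for each control-flow edge, including the back edge from n = n - 1 to the loop condition.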
For now, we only consider simple statements, for which CFGs may be constructed in an inductive manner. The CFGs for assignments, output, return statements, and declarations each consist of a single node labeled with the statement itself.
For the sequence S1 S2, we eliminate the exit node of S1 and the entry node of S2 and glue the statements together:

[Diagram: the CFG of S1 placed above the CFG of S2, joined at the glued node.]
Similarly, the other control structures are modeled by inductive graph constructions (sometimes with branch edges labeled with true and false):

[Diagram: CFGs for if (E) { S }, if (E) { S1 } else { S2 }, and while (E) { S }, with the condition node E branching on edges labeled true and false.]
Using this systematic approach, the iterative factorial function results in the following CFG:

[Diagram: CFG with nodes var f, f = 1, and the loop condition n > 0, whose true branch leads through f = f * n and n = n - 1 back to the condition, and whose false branch leads to return f.]
Exercise 2.6: Draw the AST and the CFG for the rec program from Section 2.2.
Chapter 3

Type Analysis
The TIP programming language does not have explicit type declarations, but of course the various operations are intended to be applied only to certain kinds of values: for example, arithmetic operations and comparisons should only be applied to integers, dereferences only to pointers, and function calls only to functions, with matching numbers of arguments. We assume that violations of these requirements result in runtime errors. Thus, for a given program we would like to know that these requirements hold during execution. Since this is a nontrivial question, we immediately know (Section 1.3) that it is undecidable.
We resort to a conservative approximation: typability. A program is typable if
it satisfies a collection of type constraints that is systematically derived, typically
from the program AST. The type constraints are constructed in such a way
that the above requirements are guaranteed to hold during execution, but the
converse is not true. Thus, our type analysis will be conservative and reject some
programs that in fact will not violate any requirements during execution.
In most mainstream programming languages with static type checking, the
programmer must provide type annotations for all declared variables and func-
tions. Type annotations serve as useful documentation, and they also make it
easier to design and implement type systems. TIP does not have type annota-
tions, so our type analysis must infer all the types, based on how the variables
and functions are being used in the program.
Exercise 3.1: Type checking in mainstream languages like Java may also reject programs that cannot encounter runtime type errors. Give an example of such a program. To make the exercise nontrivial, every instruction in your program should be reachable by some input.
Exercise 3.2: Even popular programming languages may have static type
systems that are unsound. Inform yourself about Java’s covariant typing of
arrays. Construct an example Java program that passes all of javac’s type
checks but generates a runtime error due to this covariant typing. (Note that,
because you do receive runtime errors, Java’s dynamic type system is sound,
which is important to avert malicious attacks, e.g. through type confusion or
memory corruption.)
3.1 Types
We first define a language of types that will describe possible values:
τ → int
| &τ
| (τ ,. . . ,τ ) → τ
For example, the foo function from Section 2.2, whose second parameter has the same type as foo itself, would have this infinite type:

(&int,(&int,(&int,(&int,...)→int)→int)→int)→int
To express such recursive types concisely, we add the µ operator and type variables α to the language of types:

τ → µα.τ
  | α

α → α1 | α2 | . . .

A type of the form µα.τ is considered identical to the type τ[µα.τ/α].¹ With this extra notation, the type of the foo function can be expressed like this:

µα1.(&int,α1)→int
Exercise 3.3: Explain how regular types can be represented by finite automata
so that two types are equal if their automata accept the same language. Show
an automaton that represents the type µα1 .(&int,α1 )→int.
We allow free type variables (i.e., type variables that are not bound by an
enclosing µ). Such type variables are implicitly universally quantified, meaning
that they represent any type. Consider for example the following function:
store(a,b) {
*b = a;
return 0;
}
It has type (α1 ,&α1 )→int where α1 is a free type variable meaning that it can be
any type, which corresponds to the polymorphic behavior of the function. Note
that such type variables are not necessarily entirely unconstrained: the type of a
may be anything, but it must match the type of whatever b points to. The more
restricted type (int,&int)→int is also a valid type for the store function, but we
are usually interested in the most general solutions.
Exercise 3.4: What are the types of rec, f, and n in the recursive factorial
program from Section 2.2?
Exercise 3.5: Write a TIP program that contains a function with type
((int)→int)→(int,int)→int.
Type variables are not only useful for expressing recursive types; we also use
them in the following section to express systems of type constraints.
¹Think of a term µα.τ as a quantifier that binds the type variable α in the sub-term τ. An occurrence of α in a term τ is free if it is not bound by an enclosing µα. The notation τ1[τ2/α] denotes a copy of τ1 where all free occurrences of α have been substituted by τ2.
3.2 Type Constraints

For a given program we generate a collection of type constraints. For each expression, program variable, and function name we have a type variable, written [[·]], and the constraints are generated from the program constructs as follows:

I: [[I]] = int
E1 op E2: [[E1]] = [[E2]] = [[E1 op E2]] = int
E1==E2: [[E1]] = [[E2]] ∧ [[E1==E2]] = int
input: [[input]] = int
X = E: [[X]] = [[E]]
output E: [[E]] = int
if (E) S: [[E]] = int
if (E) S1 else S2: [[E]] = int
while (E) S: [[E]] = int
X(X1,. . . ,Xn){ . . . return E; }: [[X]] = ([[X1]],. . . ,[[Xn]])→[[E]]
E(E1,. . . ,En): [[E]] = ([[E1]],. . . ,[[En]])→[[E(E1,. . . ,En)]]
alloc E: [[alloc E]] = &[[E]]
&X: [[&X]] = &[[X]]
null: [[null]] = &α
*E: [[E]] = &[[*E]]
*X = E: [[X]] = &[[E]]
The notation ‘op’ here represents any of the binary operators, except == which has its own rule. In the rule for null, α denotes a fresh type variable. (The purpose of this analysis is not to detect potential null pointer errors, so this simple model of null suffices.) Note that program variables and var blocks do not yield any constraints and that parenthesized expressions are not present in the abstract syntax.
For the program

short() {
    var x, y, z;
    x = input;
    y = alloc x;
    *y = x;
    z = *y;
    return z;
}

we generate the following constraints:

[[short]] = ()→[[z]]
[[input]] = int
[[x]] = [[input]]
[[alloc x]] = &[[x]]
[[y]] = [[alloc x]]
[[y]] = &[[x]]
[[z]] = [[*y]]
[[y]] = &[[*y]]
Most of the constraint rules are straightforward. For example, for any syntac-
tic occurrence of E1 ==E2 in the program being analyzed, the two sub-expressions
E1 and E2 must have the same type, and the result is always of type integer.
Exercise 3.6: Explain each of the above type constraint rules, most importantly
those involving functions and pointers.
For a complete program, we add constraints to ensure that the types of the parameters and the return value of the main function are int:

main(X1,. . . ,Xn){ . . . return E; }: [[X1]] = · · · = [[Xn]] = [[E]] = int

The constraints are equalities between terms built from type constructors and type variables. Two terms are equal only if they are built from the same constructor applied to equal sub-terms:

c(t1,. . . ,tn) = c′(t′1,. . . ,t′n) =⇒ c = c′ ∧ t1 = t′1 ∧ . . . ∧ tn = t′n

where c and c′ are term constructors and each ti and t′i is a sub-term. In the previous example two of the constraints are [[y]] = &[[x]] and [[y]] = &[[*y]], so by the term equality axiom we also have [[x]] = [[*y]].

Furthermore, as one would expect for an equality relation, we have reflexivity, symmetry, and transitivity:

t1 = t1
t1 = t2 =⇒ t2 = t1
t1 = t2 ∧ t2 = t3 =⇒ t1 = t3

for all terms t1, t2, and t3.
A solution assigns a type to each type variable, such that all equality constraints are satisfied.² The correctness claim for the type analysis is that the existence of a solution implies that the specified runtime errors cannot occur during execution. A solution for the identifiers in the short program is the following:

[[short]] = ()→int
[[x]] = int
[[y]] = &int
[[z]] = int
Exercise 3.8: Give a reasonable definition of what it means for one solution
to be “more general” than another. (See page 19 for an example of two types
where one is more general than the other.)
Exercise 3.9: This exercise demonstrates the importance of the term equality
axiom. First explain what the following TIP code does when it is executed:
var x,y;
x = alloc 1;
y = alloc (alloc 2);
x = y;
Then generate the type constraints for the code, and apply the unification
algorithm (by hand).
Exercise 3.10: Extend TIP with procedures, which, unlike functions, do not
return anything. Show how to extend the language of types and the type
constraint rules accordingly.
²More formally, a substitution is a mapping σ from type variables to types. Applying a substitution σ to a type τ, denoted τσ, means replacing each free type variable α in τ by σ(α). A solution to a set of type constraints is a substitution σ where τ1σ is identical to τ2σ for each of the type constraints τ1 = τ2.
3.3 Solving Constraints with Unification

The type constraints can be solved using the classical unification algorithm, which relies on the union-find data structure for maintaining equivalence classes of terms.
In pseudo-code:

procedure MakeSet(x)
    x.parent := x
end procedure

procedure Find(x)
    if x.parent ≠ x then
        x.parent := Find(x.parent)
    end if
    return x.parent
end procedure

procedure Union(x, y)
    xr := Find(x)
    yr := Find(y)
    if xr ≠ yr then
        xr.parent := yr
    end if
end procedure
(This version of Find performs path compression; for the full version with almost-linear worst-case time complexity, see a textbook on data structures.)
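The pseudocode translates directly to Scala; the following sketch (our own illustration) represents each element as a mutable object with a parent pointer:

// Each element points to its parent; roots point to themselves
// (so the constructor plays the role of MakeSet).
class Node { var parent: Node = this }

object UnionFind {
  def find(x: Node): Node = {
    if (x.parent ne x) x.parent = find(x.parent)  // path compression
    x.parent
  }

  def union(x: Node, y: Node): Unit = {
    val xr = find(x)
    val yr = find(y)
    if (xr ne yr) xr.parent = yr
  }
}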
Reading the solution after all constraints have been processed is then easy.
For each program variable or expression that has an associated type variable,
simply invoke Find to find the canonical representative of its equivalence class.
If the canonical representative has sub-terms (for example, in the term &τ we say
that τ is a sub-term), find the solution recursively for each sub-term. The only
complication arises if this recursion through the sub-terms leads to an infinite
type, in which case we introduce a µ term accordingly.
Exercise 3.11: Argue that the unification algorithm works correctly, in the
sense that it finds a solution to the given constraints if one exists. Additionally,
argue that if multiple solutions exist, the algorithm finds the uniquely most
general one (cf. Exercise 3.8).
(The most general solution, when one exists, for a program expression is also
called the principal type of the expression.)
The unification solver only needs to process each constraint once. This means
that although we conceptually first generate the constraints and then solve them,
in an implementation we might as well interleave the two phases and solve the
constraints on-the-fly, as they are being generated.
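The following Scala sketch shows how such a solver can process term equality constraints one at a time on top of union-find; the term representation and the Unifier object are our own illustration, not the actual implementation accompanying these notes. Note that there is no occurs check, so unification can produce the recursive (regular) types described in Section 3.1:

sealed trait Term { var parent: Term = this }       // union-find parent pointer
case class TypeVar(name: String) extends Term
case class Cons(ctor: String, args: List[Term]) extends Term
// e.g. Cons("int", Nil), Cons("&", List(t)), Cons("fun", params :+ result)

object Unifier {
  def find(t: Term): Term = {
    if (t.parent ne t) t.parent = find(t.parent)
    t.parent
  }

  // Processes one constraint a = b, per the term equality axiom.
  def unify(a: Term, b: Term): Unit = {
    val ra = find(a)
    val rb = find(b)
    if (ra eq rb) return
    (ra, rb) match {
      case (v: TypeVar, t) => v.parent = t   // a type variable can be any type
      case (t, v: TypeVar) => v.parent = t
      case (Cons(c1, as1), Cons(c2, as2)) =>
        if (c1 != c2 || as1.length != as2.length)
          sys.error(s"cannot unify $c1 with $c2: program is not typable")
        ra.parent = rb
        as1.zip(as2).foreach { case (s, t) => unify(s, t) }
    }
  }
}

Processing the constraints [[y]] = &[[x]] and [[y]] = &[[*y]] from the short program, for example, automatically unifies [[x]] with [[*y]], as the term equality axiom prescribes.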
The complicated factorial program from Section 2.2 generates a larger collection of constraints, which also have a solution. Most variables are assigned int, except these:
[[p]] = &int
[[q]] = &int
[[alloc 0]] = &int
[[x]] = µα1.(&int,α1)→int
[[foo]] = µα1.(&int,α1)→int
[[&n]] = &int
[[main]] = ()→int
As mentioned in Section 3.1, recursive types are needed for the foo function and
the x parameter. Since a solution exists, we conclude that our program is type
correct.
Exercise 3.12: Check (by hand or using the Scala implementation) that the
constraints and the solution shown above are correct for the complicated
factorial program.
Exercise 3.13: Consider this fragment of the example program shown earlier:
x = input;
*y = x;
z = *y;
Explain step-by-step how the unification algorithm finds the solution, includ-
ing how the union-find data structure looks in each step.
Recursive types are also required when analyzing TIP programs that manip-
ulate data structures. The example program
var p;
p = alloc null;
*p = p;
creates these constraints:
[[null]] = &α1
[[alloc null]] = &[[null]]
[[p]] = &[[alloc null]]
[[p]] = &[[p]]

which for [[p]] has the solution [[p]] = µα1.&α1 that can be unfolded to [[p]] = &&&. . .
Exercise 3.14: Show what the union-find data structure looks like for the
above example program.
Exercise 3.15: Generate and solve the constraints for the iterate example
program from Section 2.2.
Exercise 3.16: Generate and solve the type constraints for this program:
map(l,f,z) {
var r;
if (l==null) r=z;
else r=f(map(*l,f,z));
return r;
}
foo(i) {
return i+1;
}
main() {
var h,t,n;
t = null;
n = 42;
while (n>0) {
n = n-1;
h = alloc null;
*h = t;
t = h;
}
return map(h,foo,0);
}
What is the output from running the program?
(Try to find the solutions manually; you can then use the Scala implementation
to check that they are correct.)
3.4 Record Types

To analyze programs that use records, we extend the language of types with record types that track the type of each field:

τ → { X:τ,. . . ,X:τ }

(Since the type analysis is not designed to detect null pointer errors, it is also not a goal to check at every field lookup that the field necessarily exists in the record; we leave that to more advanced analyses.)

As a first attempt, the type constraints for record construction and field lookup can be expressed as follows, inspired by our treatment of pointers and dereferences:
{ X1:E1,. . . ,Xn:En }: [[{ X1:E1,. . . ,Xn:En }]] = {X1:[[E1]],. . . ,Xn:[[En]]}
E.X: [[E]] = { . . . ,X:[[E.X]], . . . }
Intuitively, the constraint rule for field lookup says that the type of E must be
a record type that contains a field named X with the same type as E.X. The
right-hand-side of this constraint rule is, however, not directly expressible in our
language of types. One way to remedy this, without requiring any modifications
of our unification algorithm, is to require that every record type contains all
record fields that exist in the program. Let F = {f1 , f2 , . . . , fm } be the set of all
field names. We then use the following two constraint rules instead of the ones
above.
Exercise 3.17: Assume we extend the TIP language with array operations.
Array values are constructed using a new form of expressions (not to be
confused with the syntax for records):
E → { E, . . . ,E }
and individual elements are read and written as follows:
E → E[E]
S → E[E] = E
For example, the following statements construct an array containing two
integers, then overwrites the first one, and finally reads both entries:
a = { 17, 87 };
a[0] = 42;
x = a[0] + a[1]; // x is now 129
Arrays are constructed in the heap and passed by reference, so in the first line,
the contents of the array are not copied, and a is like a pointer to the array
containing the two integers.
The type system is extended accordingly with an array type constructor:
τ → τ []
As an example, the type int[][] denotes arrays of arrays of integers.
Give appropriate type constraints for array operations. Then use the type
analysis to check that the following program is typable and infer the type of
each program variable:
var x,y,z,t;
x = {2,4,8,16,32,64};
y = x[x[3]];
z = {{},x};
t = z[1];
t[2] = y;
Exercise 3.19: Discuss how TIP could be extended with strings and operations
on strings, and how the type analysis could be extended accordingly to check,
for example, that the string operations are only applied to strings and not to
other types of values.
Exercise 3.20: Generate and solve the type constraints for the following
program:
var a,b,c,d;
a = {f:3, g:17};
b = a.f;
c = {f:alloc 5, h:15};
d = c.f;
What happens if you change c.f to c.g in the last line?
3.5 Limitations of the Type Analysis

The type analysis is of course only approximate and will reject some programs that cannot actually go wrong at runtime. Consider this example, where f dereferences its argument:

f(x) {
    return *x;
}

main() {
    return f(alloc 1) + *(f(alloc(alloc 2)));
}
It never causes an error at runtime but is not typable since it, among other things, generates constraints equivalent to

&int = [[x]] = &&int

which are clearly unsolvable. For this program, we could analyze the f function first, resulting in this polymorphic type:

[[f]] = (&α1)→α1
When analyzing the main function, at each call to f we could then instantiate the
polymorphic type according to the type of the argument: At the first call, the
argument has type &int so in this case we treat f as having the type (&int)→int,
and at the second call, the argument has type &&int so here we treat f as having
the type (&&int)→&int. The key property of the program that enables this
technique is the observation that the polymorphic function is not recursive.
This idea is called let-polymorphism (and this is essentially how Damas-Hindley-
Milner-style type analysis actually works in ML and related languages). In
Section 8.2 we shall see a closely related program analysis mechanism called
context sensitivity. The price of the increased precision of let-polymorphism in
type analysis is that the worst-case complexity increases from almost-linear to
exponential [KTU90, Mai90].
Even with let-polymorphism, infinitely many other examples will inevitably
remain rejected. An example:
polyrec(g,x) {
var r;
if (x==0) { r=g; } else { r=polyrec(2,0); }
return r+1;
}
main() {
return polyrec(null,1);
}
With functions that are both polymorphic and recursive, type analysis becomes
undecidable in the general case [Hen93, KTU93].
Exercise 3.21: Explain the runtime behavior of the polyrec program, and
why it is unfairly rejected by our type analysis, and why let-polymorphism
does not help.
Yet another limitation of the type system presented in this chapter is that it
ignores many other kinds of runtime errors, such as dereference of null pointers,
reading of uninitialized variables, division by zero, and the more subtle escaping
stack cell demonstrated by this program:
baz() {
var x;
return &x;
}
main() {
var p;
p = baz();
*p = 1;
return *p;
}
The problem in this program is that *p denotes a stack cell that has “escaped”
from the baz function. As we shall see in the following chapters, such problems
can instead be handled by other kinds of static analysis.
Chapter 4
Lattice Theory
The technique for static analysis that we will study next is based on the mathe-
matical theory of lattices, which we briefly review in this chapter. The connection
between lattices and program analysis was established in the seminal work by
Kildall, Kam and Ullman [Kil73, KU77].
4.1 Motivating Example: Sign Analysis

As a motivating example, assume that we wish to design an analysis that can find out the possible signs of the values of variables and expressions in a given program. Consider this program:

var a,b,c;
a = 42;
b = 87;
if (input) {
    c = a + b;
} else {
    c = a - b;
}
Here, the analysis could conclude that a and b are positive numbers in all possible executions at the end of the program. The sign of c is either positive or negative depending on the concrete execution, so the analysis must report ⊤ for that variable.

For this analysis we have an abstract domain consisting of the five abstract values {+, -, 0, ⊤, ⊥}, which we can organize as follows with the least precise information at the top and the most precise information at the bottom:

[Diagram: the Sign lattice, with ⊤ at the top, the incomparable elements +, 0, and - in the middle, and ⊥ at the bottom.]
The ordering reflects the fact that ⊥ represents the empty set of integer values and ⊤ represents the set of all integer values. Note that ⊤ may arise for different reasons: (1) In the example above, there exist executions where c is positive and executions where c is negative, so, for this choice of abstract domain, ⊤ is the only sound option. (2) Due to undecidability, imperfect precision is inevitable, so no matter how we design the analysis there will be programs where, for example, some variable can only have a positive value in any execution but the analysis is not able to show that it could not also have a negative value (recall the TM(j) example from Chapter 1).
The five-element abstract domain shown above is an example of a so-called
lattice. We continue the development of the sign analysis in Section 5.1, but we
first need the mathematical foundation in place.
4.2 Lattices
A partial order is a set S equipped with a binary relation ⊑ where the following conditions are satisfied:

• reflexivity: ∀x ∈ S : x ⊑ x
• transitivity: ∀x, y, z ∈ S : x ⊑ y ∧ y ⊑ z =⇒ x ⊑ z
• anti-symmetry: ∀x, y ∈ S : x ⊑ y ∧ y ⊑ x =⇒ x = y
Let X ⊆ S. We say that y ∈ S is an upper bound for X, written X ⊑ y, if ∀x ∈ X : x ⊑ y. A least upper bound of X, written ⨆X, is then defined by:

X ⊑ ⨆X ∧ ∀y ∈ S : X ⊑ y =⇒ ⨆X ⊑ y

Lower bounds and greatest lower bounds, written ⨅X, are defined dually. For pairs of elements, we sometimes use the infix notation x ⊔ y instead of ⨆{x, y} and x ⊓ y instead of ⨅{x, y}. We also sometimes use the subscript notation, for example writing ⨆a∈A f(a) instead of ⨆{f(a) | a ∈ A}.
The least upper bound operation plays an important role in program analysis.
As we shall see in Chapter 5, we use least upper bound when combining abstract
information from multiple sources, for example when control flow merges after
the branches of if statements.
Exercise 4.1: Let X ⊆ S. Prove that if ⨆X exists, then it must be unique.
A lattice is a partially ordered set in which ⨆X and ⨅X exist for all X ⊆ S.³

Exercise 4.3: Argue that the abstract domain presented in Section 4.1 is indeed a lattice.

Any finite partial order may be illustrated by a Hasse diagram in which the elements are nodes and the order relation is the transitive closure of edges leading from lower to higher nodes. With this notation, all of the following partial orders are also lattices:

[Diagram: Hasse diagrams of example partial orders that are lattices.]

³Technically, this structure is usually called a complete lattice, but since we always require this property, we choose to use the shorter name. Many lattices are finite. For those the lattice requirements reduce to observing that ⊥ and ⊤ exist and that every pair of elements x and y have a least upper bound x ⊔ y and a greatest lower bound x ⊓ y.
Exercise 4.5: Prove that if L is a partially ordered set, then every subset of L has a least upper bound if and only if every subset of L has a greatest lower bound. (Hint: ⨅Y = ⨆{x ∈ L | ∀y ∈ Y : x ⊑ y}.)
Every lattice has a unique largest element denoted ⊤ (pronounced top) and a unique smallest element denoted ⊥ (pronounced bottom).

Exercise 4.6: Prove that ⨆S and ⨅S are the unique largest element and the unique smallest element, respectively, in S. In other words, we have ⊤ = ⨆S and ⊥ = ⨅S.

Exercise 4.7: Prove that ⨆S = ⨅∅ and that ⨅S = ⨆∅. (Together with Exercise 4.6 we then have ⊤ = ⨅∅ and ⊥ = ⨆∅.)
The height of a lattice is defined to be the length of the longest path from ⊥
to >. As an example, the height of the sign analysis lattice from Section 4.1 is 2.
For some lattices the height is infinite (see Section 6.1).
Every finite set A gives rise to the powerset lattice (2^A, ⊆) consisting of all subsets of A ordered by inclusion, illustrated here for A = {0,1,2,3}:

[Diagram: Hasse diagram of the powerset lattice of {0,1,2,3}.]

The above powerset lattice has height 4. In general, the lattice (2^A, ⊆) has height |A|. We use powerset lattices in Chapter 5 to represent sets of variables or expressions.
The reverse powerset lattice for a finite set A is the lattice (2^A, ⊇).
Exercise 4.8: Draw the Hasse diagram of the reverse powerset lattice for the
set {foo, bar, baz}.
If A = {a1, a2, . . . , an} is a set, then the flat lattice flat(A), illustrated by the following diagram, is a lattice with height 2:

[Diagram: the flat lattice flat(A), with ⊤ at the top, the incomparable elements a1, a2, . . . , an in the middle, and ⊥ at the bottom.]

As an example, the set Sign = {+, -, 0, ⊤, ⊥} with the ordering described in Section 4.1 forms a lattice that can also be expressed as flat({+, 0, -}).

If L1, L2, . . . , Ln are lattices, then so is the product:

L1 × L2 × . . . × Ln = {(x1, x2, . . . , xn) | xi ∈ Li}

where ⊑ is defined pointwise, i.e. (x1, . . . , xn) ⊑ (x′1, . . . , x′n) if and only if xi ⊑ x′i for all i.
Exercise 4.9: Show that the ⊔ and ⊓ operators for a product lattice L1 × L2 × . . . × Ln can be computed pointwise (i.e. in terms of the ⊔ and ⊓ operators from L1, L2, . . . , Ln).
If A is a set and L is a lattice, then the map lattice A → L consists of all functions from A to L, ordered pointwise.

Exercise 4.12: Show that if A is finite and L has finite height then the height of the map lattice A → L is height(A → L) = |A| · height(L).
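The lattice constructions in this section can also be expressed programmatically. The following Scala sketch (our own illustration, not the implementation accompanying these notes) shows a minimal Lattice interface together with the Sign lattice and the map-lattice construction:

trait Lattice[A] {
  def bottom: A
  def lub(x: A, y: A): A  // least upper bound, x ⊔ y
}

// The Sign lattice, i.e. flat({+, 0, -}).
sealed trait Sign
case object Bot extends Sign; case object Neg extends Sign
case object Zer extends Sign; case object Pos extends Sign
case object Top extends Sign

object SignLattice extends Lattice[Sign] {
  def bottom: Sign = Bot
  def lub(x: Sign, y: Sign): Sign = (x, y) match {
    case (Bot, _)    => y
    case (_, Bot)    => x
    case _ if x == y => x
    case _           => Top  // two distinct non-bottom elements
  }
}

// The map lattice A -> L, ordered pointwise; missing keys denote bottom.
class MapLattice[K, A](sub: Lattice[A]) extends Lattice[Map[K, A]] {
  def bottom: Map[K, A] = Map.empty
  def lub(x: Map[K, A], y: Map[K, A]): Map[K, A] =
    (x.keySet ++ y.keySet).iterator.map { k =>
      k -> sub.lub(x.getOrElse(k, sub.bottom), y.getOrElse(k, sub.bottom))
    }.toMap
}

With this encoding, the abstract states used in Chapter 5 are simply elements of new MapLattice[String, Sign](SignLattice).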
4.4 Equations, Monotonicity, and Fixed-Points

Continuing the sign analysis from Section 4.1, the analysis of a program can be expressed as a system of mathematical constraints. Consider this program:

var a,b; // 1
a = 42; // 2
b = a + input; // 3
a = a - b; // 4

A sign analysis may generate the following constraints, one per program variable per program point:
a1 = ⊤
b1 = ⊤
a2 = +
b2 = b1
a3 = a2
b3 = a2 + ⊤
a4 = a3 - b3
b4 = b3
For example, a2 denotes the abstract value of a at the program point immediately after line 2. The operators + and - here work on abstract values, which we return to in Section 5.1. In this constraint system, the constraint variables⁴ have values from the abstract value lattice Sign defined in Section 4.3. We can alternatively derive the following equivalent constraint system where each constraint variable instead has a value from the abstract state lattice StateSigns from Section 4.3:⁵

⁴We use the term constraint variable to denote variables that appear in mathematical constraint systems, to avoid confusion with program variables that appear in TIP programs.
x1 = [a ↦ ⊤, b ↦ ⊤]
x2 = x1[a ↦ +]
x3 = x2[b ↦ x2(a) + ⊤]
x4 = x3[a ↦ x3(a) - x3(b)]
Here, each constraint variable models the abstract state at a program point; for
example, x1 models the abstract state at the program point immediately after line
1. Notice that each equation only depends on preceding ones for this example
program, so in this case the solution can be found by simple substitution. However,
mutually recursive equations may appear, for example for programs that contain
loops (see Section 5.1).
Also notice that it is important for the analysis of this simple program that the
order of statements is taken into account, which is called flow-sensitive analysis.
Specifically, when a is read in line 3, the value comes from the assignment to a
in line 2, not from the one in line 4.
Exercise 4.14: Give a solution to the constraint system above (that is, values
for x1 , . . . , x4 that satisfy the four equations).
Exercise 4.15: Why is the unification solver from Chapter 3 not suitable for these kinds of constraints?
⁵The notation f[a1 ↦ x1, . . . , an ↦ xn] means the function that maps ai to xi, for each i = 1, . . . , n, and for all other inputs gives the same output as the function f.
A function f : L1 → L2 between lattices is monotone when ∀x, y ∈ L1 : x ⊑ y =⇒ f(x) ⊑ f(y).
Exercise 4.21: The operators ⊔ and ⊓ can be viewed as functions. For example, x1 ⊔ x2 where x1, x2 ∈ L returns an element from L. Show that ⊔ and ⊓ are monotone.
Exercise 4.23: Show that set difference, X\Y, as a function with two arguments over a powerset lattice is monotone in the first argument X but not in the second argument Y.
We now consider equation systems⁶ of this form, where x1, . . . , xn are variables ranging over a lattice L and f1, . . . , fn : Lⁿ → L are monotone functions:

x1 = f1(x1, . . . , xn)
x2 = f2(x1, . . . , xn)
⋮
xn = fn(x1, . . . , xn)

Such a system can equivalently be expressed as a single equation over the product lattice Lⁿ:

x = f(x)

where x = (x1, . . . , xn) ∈ Lⁿ and f : Lⁿ → Lⁿ is defined by f(x1, . . . , xn) = (f1(x1, . . . , xn), . . . , fn(x1, . . . , xn)).
As an example, the sign-analysis equation system shown earlier in this section,

x1 = [a ↦ ⊤, b ↦ ⊤]
x2 = x1[a ↦ +]
x3 = x2[b ↦ x2(a) + ⊤]
x4 = x3[a ↦ x3(a) - x3(b)]

can be viewed as a single function f : StateSigns⁴ → StateSigns⁴ composed of four constraint functions f1, . . . , f4, one per equation.
Exercise 4.26: Show that the four constraint functions f1 , . . . , f4 are monotone.
(Hint: see Exercise 4.24.)
⁶We also use the broader concept of constraint systems. An equation system is a constraint system where all constraints are equalities. On page 45 we discuss other forms of constraints.
Exercise 4.27: Argue that your solution from Exercise 4.14 is the least fixed-point of the function f defined by

f(x1, . . . , x4) = (f1(x1, . . . , x4), . . . , f4(x1, . . . , x4)).
The fixed-point theorem⁷ states that in a lattice L with finite height, every monotone function f : L → L has a unique least fixed-point, denoted fix(f), given by fix(f) = ⨆i≥0 fⁱ(⊥). (Note that when applying this theorem to the specific equation system shown above, f is a function over the product lattice Lⁿ.)

⁷There are many fixed-point theorems in the literature; the one we use here is a variant of Kleene's fixed-point theorem.
The proof of this theorem is quite simple. Observe that ⊥ ⊑ f(⊥) since ⊥ is the least element. Since f is monotone, it follows that f(⊥) ⊑ f²(⊥) and by induction that fⁱ(⊥) ⊑ fⁱ⁺¹(⊥) for any i. Thus, we have an increasing chain:

⊥ ⊑ f(⊥) ⊑ f²(⊥) ⊑ . . .

Since L is assumed to have finite height, we must for some k have that fᵏ(⊥) = fᵏ⁺¹(⊥), i.e. fᵏ(⊥) is a fixed-point for f. By Exercise 4.2, fᵏ(⊥) must be the least upper bound of all elements in the chain, so fix(f) = fᵏ(⊥). Assume now that x is another fixed-point. Since ⊥ ⊑ x it follows that f(⊥) ⊑ f(x) = x, since f is monotone, and by induction we get that fix(f) = fᵏ(⊥) ⊑ x. Hence, fix(f) is a least fixed-point, and by anti-symmetry of ⊑ it is also unique.
The theorem is a powerful result: It tells us not only that equation systems
over lattices always have solutions, provided that the lattices have finite height
and the constraint functions are monotone, but also that uniquely most precise
solutions always exist. Furthermore, the careful reader may have noticed that
the theorem provides an algorithm for computing the least fixed-point: simply
compute the increasing chain ⊥ ⊑ f(⊥) ⊑ f²(⊥) ⊑ . . . until the fixed-point
is reached. In pseudo-code, this so-called naive fixed-point algorithm looks as
follows.
procedure NaiveFixedPointAlgorithm(f)
    x := ⊥
    while x ≠ f(x) do
        x := f(x)
    end while
    return x
end procedure
(Instead of computing f(x) both in the loop condition and in the loop body, a trivial improvement is to just compute it once in each iteration and see if the result changes.) The computation of a fixed-point can be illustrated as a walk up the lattice starting at ⊥.
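In Scala, the naive algorithm with the improvement mentioned above might look as follows (a sketch, not the implementation accompanying these notes); termination relies on f being monotone over a lattice of finite height:

object FixedPoints {
  def naiveFixedPoint[A](f: A => A, bottom: A): A = {
    var x = bottom
    var fx = f(x)
    while (x != fx) {  // compute f(x) only once per iteration
      x = fx
      fx = f(x)
    }
    x
  }

  def main(args: Array[String]): Unit = {
    // A monotone function over the powerset lattice of integers:
    val f = (s: Set[Int]) => s.map(_ + 1).filter(_ < 3) + 0
    println(naiveFixedPoint(f, Set.empty[Int]))
    // walks Set() -> Set(0) -> Set(0,1) -> Set(0,1,2), the least fixed-point
  }
}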
This algorithm is called “naive” because it does not exploit the special structures
that are common in analysis lattices. We shall see various less naive fixed-point
algorithms in Section 5.3.
The least fixed-point is the most precise possible solution to the equation system, but the equation system is (for a sound analysis) merely a conservative approximation of the actual program behavior (again, recall the TM(j) example from Chapter 1). This means that the semantically most precise possible (while still correct) answer is generally below the least fixed-point in the lattice. We shall see examples of this in Chapter 5.
Exercise 4.28: Explain step-by-step how the naive fixed-point algorithm com-
putes the solution to the equation system from Exercise 4.14.
The worst-case time complexity of the naive fixed-point algorithm depends on two factors:

• the height of the lattice, since this provides a bound for the number of iterations of the algorithm, and

• the cost of computing f(x) and testing equality, which are performed in each iteration.
Exercise 4.30: Does the fixed-point theorem also hold without the assumption
that the lattice has finite height? If yes, give a proof; if no, give a counterex-
ample.
Chapter 5

Dataflow Analysis with Monotone Frameworks

Classical dataflow analysis starts with a CFG and a lattice with finite height.
The lattice describes abstract information we wish to infer for the different CFG
nodes. It may be fixed for all programs, or it may be parameterized based on the
given program. To every node v in the CFG, we assign a constraint variable¹ [[v]]
ranging over the elements of the lattice. For each node we then define a dataflow
constraint that relates the value of the variable of the node to those of other nodes
(typically the neighbors), depending on what construction in the programming
language the node represents. If all the constraints for the given program happen
to be equations or inequations with monotone right-hand sides, then we can use
the fixed-point algorithm from Section 4.4 to compute the analysis result as the
unique least solution.
The combination of a lattice and a space of monotone functions is called a
monotone framework [KU77]. For a given program to be analyzed, a monotone
framework can be instantiated by specifying the CFG and the rules for assigning
dataflow constraints to its nodes.
An analysis is sound if all solutions to the constraints correspond to correct
information about the program. The solutions may be more or less imprecise, but
computing the least solution will give the highest degree of precision possible.
We return to the topic of analysis correctness and precision in Chapter 11.
Throughout this chapter we use the subset of TIP without function calls,
pointers, and records; those language features are studied in Chapters 9 and 10
and in Exercise 5.10.
¹As for type analysis, we will ambiguously use the notation [[S]] for [[v]] if S is the syntax associated with node v. The meaning will always be clear from the context.
5.1 Sign Analysis, Revisited
Continuing the example from Section 4.1, our goal is to determine the sign
(positive, zero, negative) of all expressions in a given program. We start with the tiny lattice Sign for describing abstract values:

[Diagram: the Sign lattice, with ⊤ at the top, the elements +, 0, and - in the middle, and ⊥ at the bottom.]
We want an abstract value for each program variable, so we define the map lattice

States = Vars → Sign

where Vars is the set of variables occurring in the given program. Each element
of this lattice can be thought of as an abstract state, hence its name. For each CFG
node v we assign a constraint variable [[v]] denoting an abstract state that gives
the sign values for all variables at the program point immediately after v. The
lattice Statesⁿ, where n is the number of CFG nodes, then models information
for all the CFG nodes.
The dataflow constraints model the effects of program execution on the
abstract states. For simplicity, we here focus on a subset of TIP that does not
contain pointers or records, so integers are the only type of values we need to
consider.
First, we define an auxiliary function JOIN(v) that combines the abstract states from the predecessors of a node v:

JOIN(v) = ⨆w∈pred(v) [[w]]
Note that JOIN(v) is a function of all the constraint variables [[v1]], . . . , [[vn]] for the program. For example, with the following CFG, we have JOIN(v) = [[c=b]] ⊔ [[c=-5]] for the node v containing the assignment a=c+2.
[Diagram: CFG with a branch on the condition b > 5, whose true edge leads to c=b and false edge to c=-5, both flowing into a=c+2.]
The most interesting constraint rule for this analysis is the one for assignment statements, that is, nodes v of the form X = E:

X = E: [[v]] = JOIN(v)[X ↦ eval(JOIN(v), E)]
This constraint rule models the fact that the abstract state after an assignment
X = E is equal to the abstract state immediately before the assignment, except
that the abstract value of X is the result of abstractly evaluating the expression
E. The eval function performs an abstract evaluation of expression E relative to an abstract state σ:

eval(σ, X) = σ(X)
eval(σ, I) = sign(I)
eval(σ, input) = ⊤
eval(σ, E1 op E2) = ôp(eval(σ, E1), eval(σ, E2))

The function sign gives the sign of an integer constant, and ôp is an abstract version of the given operator,² defined by the following tables (the row is the left operand, the column the right operand):
+̂  | ⊥  0  -  +  ⊤        -̂  | ⊥  0  -  +  ⊤
⊥  | ⊥  ⊥  ⊥  ⊥  ⊥        ⊥  | ⊥  ⊥  ⊥  ⊥  ⊥
0  | ⊥  0  -  +  ⊤        0  | ⊥  0  +  -  ⊤
-  | ⊥  -  -  ⊤  ⊤        -  | ⊥  -  ⊤  -  ⊤
+  | ⊥  +  ⊤  +  ⊤        +  | ⊥  +  +  ⊤  ⊤
⊤  | ⊥  ⊤  ⊤  ⊤  ⊤        ⊤  | ⊥  ⊤  ⊤  ⊤  ⊤

*̂  | ⊥  0  -  +  ⊤        /̂  | ⊥  0  -  +  ⊤
⊥  | ⊥  ⊥  ⊥  ⊥  ⊥        ⊥  | ⊥  ⊥  ⊥  ⊥  ⊥
0  | ⊥  0  0  0  0        0  | ⊥  ⊥  0  0  ⊤
-  | ⊥  0  +  -  ⊤        -  | ⊥  ⊥  ⊤  ⊤  ⊤
+  | ⊥  0  -  +  ⊤        +  | ⊥  ⊥  ⊤  ⊤  ⊤
⊤  | ⊥  0  ⊤  ⊤  ⊤        ⊤  | ⊥  ⊥  ⊤  ⊤  ⊤

>̂  | ⊥  0  -  +  ⊤        ==̂ | ⊥  0  -  +  ⊤
⊥  | ⊥  ⊥  ⊥  ⊥  ⊥        ⊥  | ⊥  ⊥  ⊥  ⊥  ⊥
0  | ⊥  0  +  0  ⊤        0  | ⊥  +  0  0  ⊤
-  | ⊥  0  ⊤  0  ⊤        -  | ⊥  0  ⊤  0  ⊤
+  | ⊥  +  +  ⊤  ⊤        +  | ⊥  0  0  ⊤  ⊤
⊤  | ⊥  ⊤  ⊤  ⊤  ⊤        ⊤  | ⊥  ⊤  ⊤  ⊤  ⊤

²Unlike in Section 4.4, to avoid confusion we now distinguish between concrete operators and their abstract counterparts, writing the latter with a hat.
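As a sketch in Scala of how eval and the abstract operators can be implemented (our own illustration, with a hypothetical minimal expression AST and repeating the Sign datatype from the earlier sketch; only +̂ is shown, the other tables are encoded analogously):

sealed trait Sign
case object Bot extends Sign; case object Neg extends Sign
case object Zer extends Sign; case object Pos extends Sign
case object Top extends Sign

object SignAnalysis {
  // Encodes the table for abstract addition shown above.
  def absPlus(x: Sign, y: Sign): Sign = (x, y) match {
    case (Bot, _) | (_, Bot) => Bot
    case (Zer, s)            => s
    case (s, Zer)            => s
    case (Neg, Neg)          => Neg
    case (Pos, Pos)          => Pos
    case _                   => Top  // Neg/Pos mix, or anything involving Top
  }

  def sign(i: Int): Sign = if (i == 0) Zer else if (i > 0) Pos else Neg

  sealed trait Exp
  case class Num(n: Int) extends Exp
  case class Id(x: String) extends Exp
  case object Input extends Exp
  case class Plus(l: Exp, r: Exp) extends Exp

  def eval(sigma: Map[String, Sign], e: Exp): Sign = e match {
    case Num(n)     => sign(n)
    case Id(x)      => sigma(x)
    case Input      => Top
    case Plus(l, r) => absPlus(eval(sigma, l), eval(sigma, r))
  }
}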
Exercise 5.1: In the CFGs we consider in this chapter (for TIP without function
calls), entry nodes have no predecessors.
(a) Argue that the constraint rule [[v]] = JOIN (v) for such nodes is equiva-
lent to defining [[v]] = ⊥.
(b) Argue that removing all equations of the form [[v]] = ⊥ from an equation
system does not change its least solution.
Exercise 5.3: Check that the above tables indeed define monotone operators
on the Sign lattice.
Exercise 5.4: Argue that these tables are the most precise possible for the
Sign lattice, given that soundness must be preserved. (An informal argument
suffices for now; we shall see a more formal approach to stating and proving
this property in Section 11.4.)
Using the fixed-point algorithm from Section 4.4, we can now obtain the analysis result for the given program by computing fix(af) where af(x1, . . . , xn) = (af1(x1, . . . , xn), . . . , afn(x1, . . . , xn)).
Recall the example program from Section 4.1:
var a,b,c;
a = 42;
b = 87;
if (input) {
c = a + b;
} else {
c = a - b;
}
[Diagram: CFG with entry node v1, followed by var a,b,c (v2), a = 42 (v3), b = 87 (v4), and the branch input (v5); the true edge leads to c=a+b (v6) and the false edge to c=a-b (v7), which merge at the exit node v8.]
Exercise 5.6: Generate the equation system for this example program. Then
solve the equations using the fixed-point algorithm from Section 4.4.
(Notice that the least upper bound operation is exactly what we need to model
the merging of information at v8 !)
Exercise 5.7: Write a small TIP program where the sign analysis leads to an
equation system with mutually recursive constraints. Then explain step-by-
step how the fixed-point algorithm from Section 4.4 computes the solution.
We lose some information in the above analysis, since for example the expressions (2>0)==1 and x-x are analyzed as ⊤, which seems unnecessarily coarse. (These are examples where the least fixed-point of the analysis equation system is not identical to the semantically best possible answer.) Also, + divided by + results in ⊤ rather than + since e.g. 1/2 is rounded down to zero. To handle some of these situations more precisely, we could enrich the sign lattice with elements 1 (the constant 1), +0 (positive or zero), and -0 (negative or zero) to keep track of more precise abstract values:
[Hasse diagram of the extended Sign lattice: ⊤ at the top; +0 and -0 below it; +, 0, and - below those, with 0 under both +0 and -0; the element 1 below +; ⊥ at the bottom]
Exercise 5.8: Define the six operators on the extended Sign lattice (shown
above) by means of 8 × 8 tables. Check that they are monotone. Does this
new lattice improve precision for the expressions (2>0)==1, x-x, and 1/2?
Exercise 5.9: Show how the eval function could be improved to make the sign
analysis able to show that the final value of z cannot be a negative number in
the following program:
var x,y,z;
x = input;
y = x*x;
z = (x-x+1)*y;
Exercise 5.10: Explain how to extend the sign analysis to handle TIP programs
that use records (see Chapter 2).
One approach, called field insensitive analysis, simply mixes together the
different fields of each record. Another approach, field sensitive analysis,
instead uses a more elaborate lattice that keeps different abstract values for
the different field names.
[Hasse diagram of the constant propagation lattice: ⊤ at the top, the integers ..., -3, -2, -1, 0, 1, 2, 3, ... as incomparable elements in the middle, and ⊥ at the bottom]

With this lattice, constant propagation analysis can, for example, transform the program
var x,y,z;
x = 27;
y = input;
z = 2*x+y;
if (x < 0) { y = z-3; } else { y = 12; }
output y;
into
var x,y,z;
x = 27;
y = input;
z = 54+y;
if (0) { y = z-3; } else { y = 12; }
output y;
which, following a reaching definitions analysis and dead code elimination (see
Section 5.7), can be reduced to this shorter and more efficient program:
var y;
y = input;
output 12;
This kind of optimization was among the first uses of static program ana-
lysis [Kil73].
procedure RoundRobin(f1 , . . . , fn )
(x1 , . . . , xn ) := (⊥, . . . , ⊥)
while (x1, ..., xn) ≠ f(x1, ..., xn) do
for i := 1 . . . n do
xi := fi (x1 , . . . , xn )
end for
end while
return (x1 , . . . , xn )
end procedure
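A Python sketch of the same idea (assuming fs is the list of constraint functions f1, ..., fn, each taking the full tuple of current values); testing whether a whole sweep changes nothing is equivalent to the loop condition above, since an unchanged sweep means xi = fi(x1, ..., xn) for all i:

def round_robin(fs, bottom):
    # apply f1, ..., fn in order until a full sweep changes nothing
    x = [bottom for _ in fs]
    while True:
        old = list(x)
        for i, fi in enumerate(fs):
            x[i] = fi(tuple(x))  # later functions see already-updated values
        if x == old:
            return tuple(x)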
Exercise 5.13: Prove that the round-robin algorithm computes the least fixed-
point of f . (Hint: see the proof of the fixed-point theorem, and consider
the ascending chain that arises from the sequence of xi := fi (x1 , . . . , xn )
operations.)
Exercise 5.14: Continuing Exercise 4.28, how many iterations are required by
the naive fixed-point algorithm and the round-robin algorithm, respectively,
to reach the fixed-point?
procedure ChaoticIteration(f1 , . . . , fn )
(x1 , . . . , xn ) := (⊥, . . . , ⊥)
while (x1, ..., xn) ≠ f(x1, ..., xn) do
choose i nondeterministically from {1, . . . , n}
xi := fi (x1 , . . . , xn )
end while
return (x1 , . . . , xn )
end procedure
This is not a practical algorithm, because its efficiency and termination depend on how i is chosen in each iteration. Additionally, naively computing the loop condition may now be more expensive than executing the loop body. However, if the algorithm terminates, it produces the right result.
Exercise 5.15: Prove that the chaotic-iteration algorithm computes the least
fixed-point of f , if it terminates. (Hint: see your solution to Exercise 5.13.)
In the general case, every constraint variable [[vi ]] may depend on all other
variables. Most often, however, an actual instance of fi will only read the values
of a few other variables, as in the examples from Exercise 4.26 and Exercise 5.6.
We represent this information as a map
dep: Nodes → 2^Nodes
which for each node v tells us the subset of other nodes for which [[v]] occurs in
a nontrivial manner on the right-hand side of their dataflow equations. That is,
dep(v) is the set of nodes whose information may depend on the information of
v. We also define its inverse: dep⁻¹(v) = {w | v ∈ dep(w)}.
For the example from Exercise 5.6, we have, in particular, dep(v5 ) = {v6 , v7 }.
This means that whenever [[v5 ]] changes its value during the fixed-point compu-
tation, only f6 and f7 may need to be recomputed.
Armed with this information, we can present a simple work-list algorithm:
procedure SimpleWorkListAlgorithm(f1 , . . . , fn )
(x1 , . . . , xn ) := (⊥, . . . , ⊥)
W := {v1 , . . . , vn }
while W ≠ ∅ do
vi := W.removeNext()
y := fi (x1 , . . . , xn )
if y ≠ xi then
xi := y
for each vj ∈ dep(vi ) do
W.add(vj )
end for
end if
end while
return (x1 , . . . , xn )
end procedure
The set W is here called the work-list with operations ‘add’ and ‘removeNext’
for adding and (nondeterministically) removing an item. The work-list initially
contains all nodes, so each fi is applied at least once. It is easy to see that the
work-list algorithm terminates on any input: In each iteration, we either move
up in the L^n lattice, or the size of the work-list decreases. As usual, we can
only move up in the lattice finitely many times as it has finite height, and the
while-loop terminates when the work-list is empty. Correctness follows from
observing that each iteration of the algorithm has the same effect on (x1 , . . . , xn )
as one iteration of the chaotic-iteration algorithm for some nondeterministic
choice of i.
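For illustration, the algorithm can be sketched in Python as follows, with dep[i] precomputed as described above (the representation and names are ours):

def simple_work_list(fs, dep, bottom):
    # fs[i] is the constraint function of node i; dep[i] lists the nodes
    # whose constraints read the value of node i
    n = len(fs)
    x = [bottom] * n
    worklist = set(range(n))          # initially all nodes
    while worklist:
        i = worklist.pop()            # nondeterministic removeNext
        y = fs[i](tuple(x))
        if y != x[i]:
            x[i] = y
            worklist.update(dep[i])   # reconsider only the dependents
    return tuple(x)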
Exercise 5.16: Argue that a sound, but probably not very useful choice for the
dep map is one that always returns the set of all CFG nodes.
Exercise 5.17: As stated above, we can choose dep(v5 ) = {v6 , v7 } for the
example equation system from Exercise 5.6. Argue that a good strategy for the
sign analysis is to define dep = succ. (We return to this topic in Section 5.8.)
Exercise 5.18: Explain step-by-step how the work-list algorithm computes the
solution to the equation system from Exercise 5.6. (Since the ‘removeNext’
operation is nondeterministic, there are many correct answers!)
Assuming that |dep(v)| and |dep⁻¹(v)| are bounded by a constant for all
nodes v, the worst-case time complexity of the simple work-list algorithm can
be expressed as
O(n · h · k)
where n is the number of CFG nodes in the program being analyzed, h is the
height of the lattice L for abstract states, and k is the worst-case time required to
compute a constraint function fi (x1 , . . . , xn ).
Exercise 5.20: Prove the above statement about the worst-case time complexity
of the simple work-list algorithm. (It is reasonable to assume that the work-list
operations ‘add’ and ‘removeNext’ take constant time.)
Exercise 5.22: Estimate the worst-case time complexity of the sign analysis
with the simple work-list algorithm, using the formula above. (As this for-
mula applies to any dataflow analysis implemented with the simple work-list
algorithm, the actual worst-case complexity of this specific analysis may be
asymptotically better!)
A variable is live at a program point if there exists an execution where its value
is read later in the execution without it being written to in between. Clearly
undecidable, this property can be approximated by a static analysis called live
variables analysis (or liveness analysis). The typical use of live variables analysis
is optimization: there is no need to store the value of a variable that is not live.
For this reason, we want the analysis to be conservative in the direction where
the answer “not live” can be trusted and “live” is the safe but useless answer.
We use a powerset lattice where the elements are the variables occurring in
the given program. This is an example of a parameterized lattice, that is, one that
depends on the specific program being analyzed. For the example program
var x,y,z;
x = input;
while (x>1) {
y = x/2;
if (y>3) x = x-y;
z = x-4;
if (z>0) x = x/2;
z = z-1;
}
output x;
the lattice modeling abstract states is

States = (2^{x,y,z}, ⊆)⁴
4 A word of caution: For historical reasons, some textbooks and research papers describe dataflow
analyses using the lattices “upside down”. This makes no difference whatsoever to the analysis
results (because of the lattice dualities discussed in Chapter 4), but it can be confusing.
[CFG for the example program, from x = input through the while-loop body to output x]
For every CFG node v we introduce a constraint variable [[v]] denoting the subset
of program variables that are live at the program point before that node. The
analysis will be conservative, since the computed sets may be too large. We use the auxiliary definition

JOIN(v) = ⋃_{w∈succ(v)} [[w]]

Unlike the JOIN function from sign analysis, this one combines abstract states from the successors instead of the predecessors. We have defined the order relation as ⊑ = ⊆, so ⊔ = ∪.
As in sign analysis, the most interesting constraint rule is the one for assignments:

X = E: [[v]] = (JOIN(v) \ {X}) ∪ vars(E)

where vars(E) denotes the set of variables occurring in E. This rule models the fact that the set of live variables before the assignment is the same as the set after the assignment, except for the variable being written to and the variables that are needed to evaluate the right-hand-side expression.

Exercise 5.23: Explain why the constraint rule for assignments, as defined above, is sound.

For variable declarations and exit nodes:

var X1, ..., Xn: [[v]] = JOIN(v) \ {X1, ..., Xn}
[[exit]] = ∅

For branch conditions and output statements, the variables read by the expression are added:

if (E), while (E), output E: [[v]] = JOIN(v) ∪ vars(E)

For all other nodes:

[[v]] = JOIN(v)
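As a tiny illustration, the assignment rule is just a set computation; a Python sketch (with invented names):

def live_assign(join, x, vars_e):
    # [[v]] for an assignment X = E, given JOIN(v), X, and vars(E)
    return (join - {x}) | vars_e

print(live_assign({"x", "y"}, "y", {"x"}))  # {'x'}, as for the node y = x/2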
Exercise 5.24: Argue that the right-hand sides of the constraints define mono-
tone functions.
For our example program, the least solution of the generated constraints is:
[[entry]] = ∅
[[var x,y,z]] = ∅
[[x=input]] = ∅
[[x>1]] = {x}
[[y=x/2]] = {x}
[[y>3]] = {x, y}
[[x=x-y]] = {x, y}
[[z=x-4]] = {x}
[[z>0]] = {x, z}
[[x=x/2]] = {x, z}
[[z=z-1]] = {x, z}
[[output x]] = {x}
[[exit]] = ∅
From this information a clever compiler could deduce that y and z are never live
at the same time, and that the value written in the assignment z=z-1 is never
read. Thus, the program may safely be optimized into the following one, which
saves the cost of one assignment and could result in better register allocation:
var x,yz;
x = input;
while (x>1) {
yz = x/2;
if (yz>3) x = x-yz;
yz = x-4;
if (yz>0) x = x/2;
}
output x;
Exercise 5.25: Show for each program point the set of live variables, as computed by our live variables analysis. (Do not forget the entry and exit points.)
Exercise 5.26: A variable is strongly live if it is read in a statement other than an assignment, or if it is read in the right-hand side of an assignment to a variable that is itself strongly live.

(a) Show how the live variables analysis can be modified to compute strongly live variables.

(b) Show for each program point in the program from Exercise 5.25 the set of strongly live variables, as computed by your new analysis.
We can estimate the worst-case time complexity of the live variables analysis,
with for example the naive fixed-point algorithm from Section 4.4. We first
observe that if the program has n CFG nodes and b variables, then the lattice
(2^Vars)^n has height b · n, which bounds the number of iterations we can perform.
Each lattice element can be represented as a bitvector of length b · n. Using the
observation from Exercise 5.19 we can ensure that |succ(v)| < 2 for any node v.
For each iteration we therefore have to perform O(n) intersection, difference, or
equality operations on sets of size b, which can be done in time O(b · n). Thus,
we reach a time complexity of O(b2 · n2 ).
Exercise 5.28: Can you obtain an asymptotically better bound on the worst-case time complexity of live variables analysis with the naive fixed-point algorithm, by exploiting properties of the structure of TIP CFGs and of the analysis constraints?
Exercise 5.29: Recall from Section 5.3 that the work-list algorithm relies on
a function dep(v) for avoiding recomputation of constraint functions that
are guaranteed not to change outputs. What would be a good strategy for
defining dep(v) in general for live variables analysis of any given program?
Exercise 5.30: Estimate the worst-case time complexity of the live variables
analysis with the simple work-list algorithm, by using the formula from
page 58.
The expressions available at a program point are those whose current value has already been computed earlier in the execution. In the example program

var x,y,z,a,b;
z = a+b;
y = a*b;
while (y > a+b) {
a = a+1;
x = a+b;
}
we have four different nontrivial expressions, so our lattice for abstract states is

States = (2^{a+b, a*b, y>a+b, a+1}, ⊇)

Note that the order is reverse subset inclusion, so the bottom element is the full set {a+b, a*b, y>a+b, a+1}, and the top element of our lattice is ∅, which corresponds to the trivial information that no expressions are known to be available. The CFG for the above program looks as follows:
[CFG: var x,y,z,a,b → z = a+b → y = a*b → loop condition y > a+b; on true: a = a+1 → x = a+b → back to the condition; on false: exit]
For assignments, the constraint is

X = E: [[v]] = (JOIN(v) ∪ exps(E)) ↓ X

and for branch conditions and output statements:

if (E), while (E), output E: [[v]] = JOIN(v) ∪ exps(E)

Here, the function ↓X removes all expressions that contain the variable X, and exps collects all nontrivial expressions:

exps(X) = ∅
exps(I) = ∅
exps(input) = ∅
exps(E1 op E2) = {E1 op E2} ∪ exps(E1) ∪ exps(E2)

Since this is a must analysis with order ⊇, the JOIN function uses intersection:

JOIN(v) = ⋂_{w∈pred(v)} [[w]]

For the entry node we have

[[entry]] = ∅

and for all other kinds of nodes, the collected sets of expressions are simply propagated from the predecessors:

[[v]] = JOIN(v)
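For illustration, exps can be sketched in Python, assuming expressions are represented as nested tuples ("op", E1, E2) for binary operators and as strings or integers otherwise (the representation is ours):

def exps(e):
    if isinstance(e, tuple):
        op, e1, e2 = e
        return {e} | exps(e1) | exps(e2)
    return set()  # variables, constants, and input yield no nontrivial expressions

aplusb = ("+", "a", "b")
print(exps(("*", aplusb, 2)))  # {('+', 'a', 'b'), ('*', ('+', 'a', 'b'), 2)}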
Exercise 5.31: Explain informally why the constraints are monotone and the
analysis is sound.
For the example program, the analysis generates the following constraints:

[[entry]] = ∅
[[var x,y,z,a,b]] = [[entry]]
[[z=a+b]] = ([[var x,y,z,a,b]] ∪ exps(a+b)) ↓ z
[[y=a*b]] = ([[z=a+b]] ∪ exps(a*b)) ↓ y
[[y>a+b]] = ([[y=a*b]] ∩ [[x=a+b]]) ∪ exps(y>a+b)
[[a=a+1]] = ([[y>a+b]] ∪ exps(a+1)) ↓ a
[[x=a+b]] = ([[a=a+1]] ∪ exps(a+b)) ↓ x
[[exit]] = [[y>a+b]]
The least solution is:

[[entry]] = ∅
[[var x,y,z,a,b]] = ∅
[[z=a+b]] = {a+b}
[[y=a*b]] = {a+b, a*b}
[[y>a+b]] = {a+b, y>a+b}
[[a=a+1]] = ∅
[[x=a+b]] = {a+b}
[[exit]] = {a+b, y>a+b}
The expressions available at the program point before a node v can be computed
from this solution as JOIN (v). In particular, the solution confirms our previous
observations about a+b. With this knowledge, an optimizing compiler could
systematically transform the program into a (slightly) more efficient version:
var x,y,z,a,b,aplusb;
aplusb = a+b;
z = aplusb;
y = a*b;
while (y > aplusb) {
a = a+1;
aplusb = a+b;
x = aplusb;
}
An expression is very busy at a program point if it will definitely be evaluated again before its value changes. Very busy expressions analysis is a backward must analysis; for the exit node we have

[[exit]] = ∅

The rules for the remaining nodes, including branch conditions and output statements, are the same as for available expressions analysis.
On the example program:
var x,a,b;
x = input;
a = x-1;
b = x-2;
while (x>0) {
output a*b-x;
x = x-1;
}
output a*b;
the analysis reveals that a*b is very busy inside the loop. The compiler can
perform code hoisting and move the computation to the earliest program point
where it is very busy. This would transform the program into this more efficient
version:
var x,a,b,atimesb;
x = input;
a = x-1;
b = x-2;
atimesb = a*b;
while (x>0) {
output atimesb-x;
x = x-1;
}
output atimesb;
The reaching definitions for a given program point are those assignments that may have defined the current values of variables. Consider again the example program:

var x,y,z;
x = input;
while (x>1) {
y = x/2;
if (y>3) x = x-y;
z = x-4;
if (z>0) x = x/2;
z = z-1;
}
output x;

For this program, the lattice modeling abstract states becomes:
States = (2^{x=input, y=x/2, x=x-y, z=x-4, x=x/2, z=z-1}, ⊆)
For every CFG node v the variable [[v]] denotes the set of assignments that may
define values of variables at the program point after the node. We define
JOIN(v) = ⋃_{w∈pred(v)} [[w]]
[def-use graph for the example program, with an edge from each assignment to every node that reads the assigned variable]
The def-use graph is a further abstraction of the program and is the basis of
widely used optimizations such as dead code elimination and code motion.
Exercise 5.33: Show that the def-use graph is always a subgraph of the transi-
tive closure of the CFG.
The four classical analyses presented in this chapter can be classified as forward or backward, and as may or must:

         Forward                  Backward
May      Reaching Definitions     Live Variables
Must     Available Expressions    Very Busy Expressions
These classifications are mostly botanical, but awareness of them may provide
inspiration for constructing new analyses.
Exercise 5.34: Which among the following analyses are distributive, if any?
(a) Available expressions analysis.
(b) Very busy expressions analysis.
Exercise 5.35: Let us design a flow-sensitive type analysis for TIP. In the simple
version of TIP we focus on in this chapter, we only have integer values at run-
time, but for the analysis we can treat the results of the comparison operators
> and == as a separate type: boolean. The results of the arithmetic operators +,
-, *, / can similarly be treated as type integer. As lattice for abstract states we
choose
States = Vars → 2^{integer,boolean}
such that the analysis can keep track of the possible types for every variable.
(b) After analyzing a given program, how can we check using the com-
puted abstract states whether the branch conditions in if and while
statements are guaranteed to be booleans? Similarly, how can we check
that the arguments to the arithmetic operators +, -, *, / are guaranteed
to be integers? As an example, for the following program two warnings
should be emitted:
main(a,b) {
var x,y;
x = a+b;
if (x) { // warning: using integer as branch condition
output 17;
}
y = a>b;
return y+3; // warning: using boolean in addition
}
Exercise 5.37: What is the JOIN function for initialized variables analysis?
An error detection tool could now check for every use of a variable that it is contained in the computed set of initialized variables, and emit a warning otherwise. A warning would be emitted for this trivial example program:
main() {
var x;
return x;
}
Exercise 5.39: Write a TIP program where such an error detection tool would
emit a spurious warning. That is, in your program there are no reads from
uninitialized variables in any execution but the initialized variables analysis
is too imprecise to show it.
Exercise 5.40: An alternative formulation of the initialized variables analysis could use, for each variable, a lattice with just two elements.

(a) How should we order the two elements? That is, which one is ⊤ and which one is ⊥?
(b) How should the constraint rule for assignments be modified to fit with
this alternative lattice?
In the simple work-list algorithm, JOIN(v) = ⊔_{w∈dep⁻¹(v)} [[w]] is computed in each iteration of the while-loop. However, often [[w]] has not changed since the last time v was processed, so much of that computation may be redundant. (When
time v was processed, so much of that computation may be redundant. (When
we introduce inter-procedural analysis in Chapter 8, we shall see that dep⁻¹(v)
may become large.) We now present another work-list algorithm based on trans-
fer functions that avoids some of that redundancy. With this algorithm, for a
forward analysis each variable xi denotes the abstract state for the program point
before the corresponding CFG node vi , in contrast to the other fixed-point solvers
we have seen previously where xi denotes the abstract state for the program
point after vi (and conversely for a backward analysis).
procedure PropagationWorkListAlgorithm(t1 , . . . , tn )
(x1 , . . . , xn ) := (⊥, . . . , ⊥)
W := {v1 , . . . , vn }
while W ≠ ∅ do
vi := W.removeNext()
y := tvi (xi )
for each vj ∈ dep(vi ) do
z := xj t y
if xj ≠ z then
xj := z
W.add(vj )
end if
end for
end while
return (x1 , . . . , xn )
end procedure
Compared to the simple work-list algorithm, this variant typically avoids many
redundant least-upper-bound computations. In each iteration of the while-loop,
the transfer function of the current node vi is applied, and the resulting abstract
state is propagated (hence the name of the algorithm) to all dependencies. Those
that change are added to the work-list. We thereby have t_v([[v]]) ⊑ [[w]] for all nodes v, w where w ∈ succ(v).
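For illustration, here is a Python sketch of the propagation variant (same invented representation as before); transfer[i] is the transfer function of node i and lub computes the least upper bound of two abstract states:

def propagation_work_list(transfer, dep, bottom, lub):
    n = len(transfer)
    x = [bottom] * n                  # x[i]: abstract state before node i
    worklist = set(range(n))
    while worklist:
        i = worklist.pop()
        y = transfer[i](x[i])         # apply the transfer function once
        for j in dep[i]:
            z = lub(x[j], y)          # propagate to each dependency
            if z != x[j]:
                x[j] = z
                worklist.add(j)
    return x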
Chapter 6

Widening
The interval analysis uses the lattice Intervals = lift({[l, h] | l, h ∈ N ∧ l ≤ h}), where

N = {−∞, ..., −2, −1, 0, 1, 2, ..., ∞}
is the set of integers extended with infinite endpoints and the order on intervals is defined by inclusion:

[l1, h1] ⊑ [l2, h2] ⟺ l2 ≤ l1 ∧ h1 ≤ h2
[Hasse diagram of the Intervals lattice: [−∞,∞] at the top; below it, elements such as [−∞,0], [0,∞], and [−2,2]; further down [−2,1], [−1,2], [−∞,−1], [1,∞], then [−2,0], [−1,1], [0,2], [−∞,−2], [2,∞], then [−2,−1], [−1,0], [0,1], [1,2], and so on]
This lattice does not have finite height, since it contains for example the following infinite chain:

[0,0] ⊑ [0,1] ⊑ [0,2] ⊑ [0,3] ⊑ ⋯
Before we specify the constraint rules, we define a function eval that performs an abstract evaluation of expressions:

eval(σ, X) = σ(X)
eval(σ, I) = [I, I]
eval(σ, input) = [−∞, ∞]
eval(σ, E1 op E2) = ôp(eval(σ, E1), eval(σ, E2))
The abstract arithmetical operators are all defined by:

ôp([l1, h1], [l2, h2]) = [ min {x op y | x ∈ [l1, h1], y ∈ [l2, h2]},  max {x op y | x ∈ [l1, h1], y ∈ [l2, h2]} ]
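As an illustration, for + and * the extreme results occur at the interval end points, so a naive Python sketch can just try the four corner combinations. (This shortcut is not valid in general, e.g. for division when the divisor interval contains 0, and the sketch assumes finite end points.)

from itertools import product
import operator

def abstract_op(op, a, b):
    # interval abstraction of a binary operator via corner points
    (l1, h1), (l2, h2) = a, b
    results = [op(x, y) for x, y in product((l1, h1), (l2, h2))]
    return (min(results), max(results))

print(abstract_op(operator.add, (1, 3), (-2, 5)))  # (-1, 8)
print(abstract_op(operator.mul, (-2, 3), (4, 5)))  # (-10, 15)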
Exercise 6.1: Explain why the definition of eval given above is a conservative
approximation compared to evaluating TIP expressions concretely. Give an
example of how the definition could be modified to make the analysis more
precise (and still sound).
Exercise 6.2: This general definition of ôp looks simple in math, but it is nontrivial to implement it efficiently. Your task is to write pseudo-code for an implementation of the abstract greater-than operator >̂. (To be usable in practice, the execution time of your implementation should be less than linear in the input numbers!) It is acceptable to sacrifice optimal precision, but see how precise you can make it.
The interval analysis lattice has infinite height, so applying the naive fixed-
point algorithms may never terminate: for the lattice L^n, the sequence of approximants

f^i(⊥, ..., ⊥)
need never converge. A powerful technique to address this kind of problem is
introduced in the next section.1
Exercise 6.4: Give an example of a TIP program where none of the fixed-point
algorithms terminate for the interval analysis as presented above.
¹ Recall that the analysis result is defined as the least solution to the analysis constraints. The fixed-point theorem (see page 43) tells us that this is well-defined: a unique least solution always exists for such analyses. However, for the interval analysis defined in this section, that fixed-point theorem does not apply, so the careful reader may wonder: even if we disregard the fact that the fixed-point algorithms cannot compute the solutions for all programs, do the interval analysis constraints actually have a least solution for any given program? The answer is affirmative. A variant of the fixed-point theorem that relies on transfinite iteration sequences holds without the finite-height assumption [CC79a].
Widening is a general technique that applies to any analysis that can be expressed using monotone equation systems, but it is typically used in flow-sensitive analyses with infinite-height lattices.
Let f : L → L denote the function from the fixed-point theorem and the naive
fixed-point algorithm (Section 4.4). A particularly simple form of widening,
which often suffices in practice, introduces a function ω : L → L so that the
sequence
(ω ∘ f)^i(⊥) for i = 0, 1, ...
is guaranteed to converge on a fixed-point that is larger than or equal to each
approximant f i (⊥) of the naive fixed-point algorithm and thus represents sound
information about the program. To ensure this property, it suffices that ω is
monotone and extensive (see Exercise 4.16), and that the image ω(L) = {y ∈
L | ∃x ∈ L : y = ω(x)} has finite height. The fixed-point algorithms can easily
be adapted to use widening by applying ω in each iteration.
The widening function ω will intuitively coarsen the information sufficiently to ensure termination. For our interval analysis, ω is defined pointwise down to single intervals. It operates relative to a set B that consists of a finite set of integers together with −∞ and ∞. Typically, B could be seeded with all the integer constants occurring in the given program, but other heuristics could also be used. For single intervals we define the function ω′: Intervals → Intervals by

ω′(⊥) = ⊥
ω′([l, h]) = [max{i ∈ B | i ≤ l}, min{i ∈ B | h ≤ i}]
Exercise 6.5: Show that interval analysis with widening, using this definition
of ω, always terminates and yields a solution that is a safe approximation of
the ideal result.
If fix denotes the least fixed-point of f and fix ω denotes the result obtained with widening, then we have fix ⊑ fix ω. However, we also have that fix ⊑ f(fix ω) ⊑ fix ω, which means that a subsequent application of f may improve our result and still produce sound information. This technique, called narrowing, may in fact be iterated arbitrarily many times.
Exercise 6.7: Exactly how many narrowing steps are necessary to reach this
solution?
This result is really the best we could hope for, for this program. For that reason, further narrowing has no effect. However, in general, the decreasing sequence

fix ω ⊒ f(fix ω) ⊒ f²(fix ω) ⊒ f³(fix ω) ⊒ ⋯

is not guaranteed to converge, so heuristics must determine how many times to apply narrowing.
Exercise 6.8: Give an example of a TIP program where the narrowing se-
quence diverges for the interval analysis, when using widening followed by
narrowing.
Traditional widening is based on a binary operator

∇: L × L → L

The widening operator ∇ (usually written with infix notation) must satisfy ∀x, y ∈ L: x ⊑ x∇y ∧ y ⊑ x∇y (meaning that it is an upper bound operator), and for any increasing sequence z0 ⊑ z1 ⊑ z2 ⊑ ..., the sequence y0, y1, y2, ... defined by y0 = z0 and y_{i+1} = y_i ∇ z_{i+1} for i = 0, 1, ... must converge after a finite number of steps. With such an operator, we can obtain a safe approximation of
the least fixed-point of f by computing the following sequence:
x0 = ⊥
xi+1 = xi ∇ f (xi )
This sequence eventually converges, that is, for some k we have xk+1 = xk .
Furthermore, the result is a safe approximation of the ordinary fixed-point: fix ⊑ x_k.
This leads us to the following variant of the naive fixed-point algorithm with
(traditional) widening:
procedure NaiveFixedPointAlgorithmWithWidening(f )
x := ⊥
while x ≠ f(x) do
x := x ∇ f (x)
end while
return x
end procedure
The other fixed-point algorithms (Section 5.3) can be extended with this form
of widening in a similar manner.
Note that if we choose as a special case ∇ = ⊔, the computation of x0, x1, ...
proceeds exactly as with the ordinary naive fixed-point algorithm.
Exercise 6.10: Show that ⊔ is a widening operator (albeit perhaps not a very useful one) if L has finite height.
The binary operator ∇ can exploit the fact that its two arguments play different roles (the previous approximant and the new value appear as the left-hand and right-hand argument, respectively), and only coarsen abstract values that are unstable.
For the interval analysis we can for example define ∇ as follows. We first define a widening operator ∇₀: Intervals × Intervals → Intervals on single intervals:

⊥ ∇₀ y = y
x ∇₀ ⊥ = x
[l1, h1] ∇₀ [l2, h2] = [l3, h3]

where

l3 =
    l1                       if l1 ≤ l2
    max{i ∈ B | i ≤ l2}      otherwise

and

h3 =
    h1                       if h2 ≤ h1
    min{i ∈ B | h2 ≤ i}      otherwise
Compared to the definition of ω′ for simple widening (see page 78), we now coarsen the interval end points only if they are unstable compared to the last iteration. Intuitively, an interval that does not become larger during an iteration of the fixed-point computation cannot be responsible for divergence.
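The operator ∇₀ can be sketched in Python in direct correspondence with the definition (same representation assumptions as in the ω′ sketch above); note how stable end points are kept:

def widen0(x, y, B):
    if x is None: return y           # bottom cases
    if y is None: return x
    (l1, h1), (l2, h2) = x, y
    l3 = l1 if l1 <= l2 else max(b for b in B if b <= l2)
    h3 = h1 if h2 <= h1 else min(b for b in B if h2 <= b)
    return (l3, h3)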
Now we can define ∇ based on ∇₀, similarly to how we previously defined ω pointwise in terms of ω′:
(σ1, ..., σn) ∇ (σ1′, ..., σn′) = (σ1″, ..., σn″) where σi″(X) = σi(X) ∇₀ σi′(X) for i = 1, ..., n and X ∈ Vars
Exercise 6.11: Show that this definition of ∇ for the interval analysis satisfies
the requirements for being a widening operator.
With this more advanced form of widening but without using narrowing,
for the small example program from page 79 we obtain the same analysis result
as with the combination of simple widening and narrowing we looked at earlier.
Exercise 6.12: Explain why the “simple” form of widening (using the unary
ω operator) is just a special case of the “traditional” widening mechanism
(using the binary ∇ operator).
With the simple form of widening, the analysis effectively just uses a finite
subset of L. In contrast, the traditional form of widening is fundamentally
more powerful: Although each program being analyzed uses only finitely many
elements of L, no finite-height subset suffices for all programs [CC92].
We can be even more clever by observing that divergence can only appear in the presence of recursive dataflow constraints (see Section 5.1) and apply widening only at, for example, CFG nodes that are loop heads.² In the above definition of ∇, this means changing the definition of σi″ to

σi″(X) =
    σi(X) ∇₀ σi′(X)    if node vi is a loop head
    σi′(X)             otherwise
Exercise 6.13: Argue why applying widening only at CFG loop heads suffices
for guaranteeing convergence of the fixed-point computation.
Then give an example of a program where this improves precision for the
interval analysis, compared to widening at all CFG nodes.
Exercise 6.14: We can define another widening operator for interval analysis that does not require a set B of integer constants. In the definition of ∇₀ and ∇ from page 81, we simply change l3 and h3 as follows:

l3 =
    l1     if l1 ≤ l2
    −∞     otherwise

and

h3 =
    h1     if h2 ≤ h1
    ∞      otherwise

Intuitively, this widening coarsens unstable intervals to ±∞.
(a) Argue that after this change, ∇ still satisfies the requirements for being
a widening operator.
(b) Give an example of a program that is analyzed less precisely after this
change.
² As long as we ignore function calls and only analyze individual functions, the loop heads are the while nodes in CFGs for TIP programs. If we also consider interprocedural analysis (Chapter 8) then recursive function calls must also be taken into account.
Chapter 7

Path Sensitivity and Relational Analysis
Until now, we have ignored the values of branch and loop conditions by simply
treating if- and while-statements as a nondeterministic choice between the two
branches, which is called control insensitive analysis. Such analyses are also path
insensitive, because they do not distinguish different paths that lead to a given
program point. The information about branches and paths can be important for
precision. Consider for example the following program:
x = input;
y = 0;
z = 0;
while (x > 0) {
z = z+x;
if (17 > y) { y = y+1; }
x = x-1;
}
The previous interval analysis (with widening) will conclude that after the
while-loop, the variable x is in the interval [−∞, ∞], y is in the interval [0, ∞],
and z is in the interval [−∞, ∞]. However, in view of the conditionals being
used, this result is too pessimistic.
Exercise 7.1: What would be the ideal (i.e., most precise, yet sound) analysis
result for x, y, and z at the exit program point in the example above, when
using the Intervals lattice to describe abstract values? (Later in this chapter
we shall see an improved interval analysis that obtains that result.)
To enable the analysis to take the branch conditions into account, suppose the program is instrumented with explicit assert statements:

x = input;
y = 0;
z = 0;
while (x > 0) {
assert(x > 0);
z = z+x;
if (17 > y) { assert(17 > y); y = y+1; }
x = x-1;
}
assert(!(x > 0));
(We here also extend TIP with a unary negation operator !.) It is always safe to ignore the assert statements, which amounts to this trivial constraint rule:

assert(E): [[v]] = JOIN(v)

With that constraint rule, no extra precision is gained. It requires insight into the specific static analysis to define nontrivial and sound constraints for assertions.
For the interval analysis, extracting the information carried by general condi-
tions, or predicates, such as E1 > E2 or E1 == E2 relative to the lattice elements
is complicated and in itself an area of considerable study. For simplicity, let us
consider conditions only of the two kinds X > E and E > X. The former kind
of assertion can be handled by the constraint rule
assert(X > E): [[v]] = JOIN(v)[X ↦ gt(JOIN(v)(X), eval(JOIN(v), E))]

where gt models the greater-than operator: gt([l1, h1], [l2, h2]) = [l1, h1] ⊓ [l2, ∞]
Exercise 7.2: Argue that this constraint for assert is sound and monotone.
Negated conditions are handled in a similar fashion, and all other conditions are given the trivial constraint by default.
With this refinement, the interval analysis of the above example will conclude
that after the while-loop the variable x is in the interval [−∞, 0], y is in the
interval [0, 17], and z is in the interval [0, ∞].
Exercise 7.4: Discuss how more conditions may be given nontrivial constraints
for assert to improve analysis precision further.
As the analysis now takes the information in the branch conditions into
account, this kind of analysis is called control sensitive (or branch sensitive). An al-
ternative approach to control sensitivity that does not involve assert statements
is to model each branch node in the CFG using two constraint variables instead
of just one, corresponding to the two different outcomes of the evaluation of the
branch condition. Another approach is to associate dataflow constraints with
CFG edges instead of nodes. The technical details of such approaches will be
different compared to the approach taken here, but the overall idea is the same.
Consider the following program, where a flag variable records which branch was taken:

if (condition) {
open();
flag = 1;
} else {
flag = 0;
}
...
if (flag) {
close();
}
We here assume that open and close are built-in functions for opening and
closing a specific file. (A more realistic setting with multiple files can be handled
using techniques presented in Chapter 10.) The file is initially closed, condition
is some complex expression, and the “. . . ” consists of statements that do not
call open or close or modify flag. We wish to design an analysis that can check
that close is only called if the file is currently open, that open is only called if
the file is currently closed, and that the file is definitely closed at the program
exit. In this example program, the critical property is that the branch containing
the call to close is taken only if the branch containing the call to open was taken
earlier in the execution.
As a starting point, we use this powerset lattice for modeling the open/closed status of the file:

L = 2^{open,closed}

For the calls to open and close we have the constraints:

[[open()]] = {open}
[[close()]] = {closed}
For the entry node, we define:
[[entry]] = {closed}
and for every other node, which does not modify the file status, the constraint is
simply
[[v]] = JOIN (v)
where JOIN is defined as usual for a forward, may analysis:
JOIN(v) = ⋃_{w∈pred(v)} [[w]]
In the example program, the close function is clearly called if and only if
open is called, but the current analysis fails to discover this.
Exercise 7.5: Write the constraints being produced for the example program
and show that the solution for [[flag]] (the node for the last if condition) is
{open, closed}.
Arguing that the program has the desired property obviously involves the
flag variable, which the lattice above ignores. So, we can try with a slightly
more sophisticated lattice – a product lattice of two powerset lattices that keeps
track of both the status of the file and the value of the flag:
L′ = 2^{open,closed} × 2^{flag=0, flag≠0}

(The lattice order is implicitly defined as the pointwise subset ordering of the two powersets.) For example, the lattice element {flag ≠ 0} in the right-most sub-lattice means that flag is definitely not 0, and {flag = 0, flag ≠ 0} means that the value of flag is unknown. Additionally, we insert assert statements to model the conditionals:
model the conditionals:
if (condition) {
assert(condition);
open();
flag = 1;
} else {
assert(!condition);
flag = 0;
}
...
if (flag) {
assert(flag);
close();
} else {
assert(!flag);
}
This is still insufficient, though. At the program point after the first if-else
statement, the analysis only knows that open may have been called and flag
may be 0.
Exercise 7.6: Specify the constraints that fit with the L′ lattice. Then show that the analysis produces the lattice element ({open, closed}, {flag = 0, flag ≠ 0}) at the program point after the first if-else statement.
What we need instead is a lattice that keeps the dataflow information for different paths apart:

L″ = Paths → L

where Paths is a finite set of path contexts. A path context is typically a predicate over the program state.¹ (For instance, a condition expression in TIP defines such a predicate.) In general, each statement is then analyzed in |Paths| different path contexts, each describing a set of paths that lead to the statement, which is why this kind of analysis is called path sensitive. For the example above, we can use Paths = {flag = 0, flag ≠ 0}.
The constraints for open, close, and entry can now be defined as follows.2
[[open()]] = λp.{open}
[[close()]] = λp.{closed}
[[entry]] = λp.{closed}
The constraints for assignments make sure that flag gets special treatment:
¹ Another way to select Paths is to use sequences of branch nodes.
² We here use the lambda abstraction notation to denote a function: if f = λx.e then f(x) = e. Thus, λp.{open} is the function that returns {open} for any input p.
flag = 0: [[v]] = [flag = 0 ↦ ⋃_{p∈Paths} JOIN(v)(p), flag ≠ 0 ↦ ∅]

flag = I: [[v]] = [flag ≠ 0 ↦ ⋃_{p∈Paths} JOIN(v)(p), flag = 0 ↦ ∅]

flag = E: [[v]] = λq. ⋃_{p∈Paths} JOIN(v)(p)
Here, I is an integer constant other than 0 and E is a non-integer-constant
expression. The definition of JOIN follows from the lattice structure and from
the analysis being forward:
JOIN(v)(p) = ⋃_{w∈pred(v)} [[w]](p)
The constraint for the case flag = 0 models the fact that flag is definitely 0 after
the statement, so the open/closed information is obtained from the predecessors,
independent of whether flag was 0 or not before the statement. Also, the
open/closed information is set to the bottom element ∅ for flag ≠ 0 because that
path context is infeasible at the program point after flag = 0. The constraint for
flag = I is dual, and the last constraint covers the cases where flag is assigned
an unknown value.
For assert statements, we also give special treatment to flag:
assert(flag): [[v]] = [flag ≠ 0 ↦ JOIN(v)(flag ≠ 0), flag = 0 ↦ ∅]
Notice the small but important difference compared to the constraint for
flag = 1 statements. As before, the case for negated expressions is similar.
Finally, for any other node v, including other assert statements, the constraint keeps the dataflow information for different path contexts apart but otherwise simply propagates the information from the predecessors in the CFG:

[[v]] = λp. JOIN(v)(p)
Although this is sound, we could make more precise constraints for assert
nodes by recognizing other patterns that fit into the abstraction given by the
lattice.
For our example program, the following constraints are generated:

[[entry]] = λp.{closed}
[[condition]] = [[entry]]
[[assert(condition)]] = [[condition]]
[[open()]] = λp.{open}
[[flag = 1]] = [flag ≠ 0 ↦ ⋃_{p∈Paths} [[open()]](p), flag = 0 ↦ ∅]
[[assert(!condition)]] = [[condition]]
[[flag = 0]] = [flag = 0 ↦ ⋃_{p∈Paths} [[assert(!condition)]](p), flag ≠ 0 ↦ ∅]
[[...]] = λp. [[flag = 1]](p) ∪ [[flag = 0]](p)
[[flag]] = [[...]]
[[assert(flag)]] = [flag ≠ 0 ↦ [[flag]](flag ≠ 0), flag = 0 ↦ ∅]
[[close()]] = λp.{closed}
[[assert(!flag)]] = [flag = 0 ↦ [[flag]](flag = 0), flag ≠ 0 ↦ ∅]
[[exit]] = λp. [[close()]](p) ∪ [[assert(!flag)]](p)
The minimal solution is, for each [[v]](p):

                         flag = 0     flag ≠ 0
[[entry]]                {closed}     {closed}
[[condition]]            {closed}     {closed}
[[assert(condition)]]    {closed}     {closed}
[[open()]]               {open}       {open}
[[flag = 1]]             ∅            {open}
[[assert(!condition)]]   {closed}     {closed}
[[flag = 0]]             {closed}     ∅
[[...]]                  {closed}     {open}
[[flag]]                 {closed}     {open}
[[assert(flag)]]         ∅            {open}
[[close()]]              {closed}     {closed}
[[assert(!flag)]]        {closed}     ∅
[[exit]]                 {closed}     {closed}
The analysis produces the lattice element [flag = 0 ↦ {closed}, flag ≠ 0 ↦ {open}] for the program point after the first if-else statement. The constraint
for the assert(flag) statement will eliminate the possibility that the file is
closed at that point. This ensures that close is only called if the file is open, as
desired.
Exercise 7.8: For the present example, the basic lattice L is defined as a powerset of a finite set A = {open, closed}.

(a) Show that Paths → 2^A is isomorphic to 2^{Paths×A} for any finite set A. (This explains why such analyses are called relational: each element of 2^{Paths×A} is a (binary) relation between Paths and A.)

(b) Reformulate the analysis using the lattice 2^{{flag=0, flag≠0}×{open,closed}} instead of L″ (without affecting the analysis precision).
Exercise 7.9: Describe a variant of the example program above where the
present analysis would be improved if combining it with constant propaga-
tion.
In general, the program analysis designer is left with the choice of Paths.
Often, Paths consists of combinations of predicates that appear in conditionals
in the program. This quickly results in an exponential blow-up: for k predicates, each program point may in the worst case have to be analyzed in 2^k different path contexts.
Exercise 7.10: Assume that we change the rule for open from
[[open()]] = λp.{open}
to
[[open()]] = λp. if JOIN (v)(p) = ∅ then ∅ else {open}
Argue that this is sound and for some programs more precise than the original
rule.
Exercise 7.12: Construct yet another variant of the open/close example pro-
gram where the desired property can only be established with a choice of
Paths that includes a predicate that does not occur as a conditional expression
in the program source. (Such a program may be challenging to handle with
iterative refinement techniques.)
Exercise 7.13: The following TIP code computes the absolute value of x:
if (x < 0) {
sgn = -1;
} else {
sgn = 1;
}
y = x * sgn;
Design an analysis (i.e., define a lattice and describe the relevant constraint
rules) that is able to show that y is always positive or zero after the last
assignment in this program.
Chapter 8
Interprocedural Analysis
So far, we have only analyzed the body of individual functions, which is called
intraprocedural analysis. We now consider interprocedural analysis of whole pro-
grams containing multiple functions and function calls.
We use the subset of the TIP language containing functions, but still ignore
pointers and functions as values. As we shall see, the CFG for an entire program
is then quite simple to obtain. It becomes more complicated when adding
function values, which we discuss in Chapter 9.
First we construct the CFGs for all individual function bodies as usual. All
that remains is then to glue them together to reflect function calls properly.
We need to take care of parameter passing, return values, and values of local
variables across calls. For simplicity we assume that all function calls are
performed in connection with assignments:
X = f(E1, ..., En);
Exercise 8.1: Show how any program can be normalized (cf. Section 2.3) to
have this form.
In the CFG, we represent each function call statement using two nodes: a
call node representing the connection from the caller to the entry of f, and an
after-call node where execution resumes after returning from the exit of f:
[figure: a function call X = f(E1,...,En) is represented by a call node labeled "= f(E1,...,En)" and an after-call node labeled "X ="; the callee ends with an assignment result = E before its exit node]
As discussed in Section 2.5, CFGs can be constructed such that there is always a
unique entry node and a unique exit node for each function.
We can now glue together the caller and the callee as follows:
[figure: the call node is connected to the entry node of f(b1,...,bn), and the exit node of f (after result = E) is connected to the after-call node, where X is assigned the value of result]
The connection between the call node and its after-call node is represented by
a special edge (not in succ and pred ), which we need for propagating abstract
values for local variables of the caller.
With this interprocedural CFG in place, we can apply the monotone frame-
work. Examples are given in the following sections.
Exercise 8.2: How many edges may the interprocedural CFG contain in a
program with n CFG nodes?
Recall the intraprocedural sign analysis from Sections 4.1 and 5.1. That
analysis models values with the lattice Sign:
[Hasse diagram of the Sign lattice: ⊤ at the top, above the incomparable elements +, 0, and -, which are above ⊥]
and abstract states are represented by the map lattice States = Vars → Sign. For
any program point, the abstract state only provides information about variables
that are in scope; all other variables can be set to ⊥.
To make the sign analysis interprocedural, we define constraints for function
entries and exits. For an entry node v of a function f (b1 , . . . ,bn ) we consider
the abstract states for all callers pred (v ) and model the passing of parameters:
[[v]] = ⊔_{w∈pred(v)} s_w

where¹

s_w = ⊥[b1 ↦ eval([[w]], E1^w), ..., bn ↦ eval([[w]], En^w)]

and Ei^w is the i'th argument at the call node w. As discussed in Section 4.4,
constraints can be expressed using inequations instead of equations. The con-
straint rule above can be reformulated as follows, where v is a function entry node and w ∈ pred(v) is a caller:

[[v]] ⊒ s_w

Intuitively, this shows how information flows from the call node (the right-hand side of the inequation) to the function entry node (the left-hand side).
Exercise 8.3: Explain why these two formulations of the constraint rule for
function entry nodes are equivalent.
For the entry node v of the main function with parameters b1 , . . . , bn we have
this special rule that models the fact that main is implicitly called with unknown
arguments:
[[v]] = ⊥[b1 ↦ ⊤, ..., bn ↦ ⊤]
¹ In this expression, ⊥ denotes the bottom element of the lattice Vars → Sign, that is, it maps every variable to ⊥.
For an after-call node v that stores the return value in the variable X and where v′ is the accompanying call node and w ∈ pred(v) is the function exit node, the dataflow can be modeled by the following constraint:

[[v]] = [[v′]][X ↦ [[w]](result)]
The constraint obtains the abstract values of the local variables from the call node v′ and the abstract value of result from w.
With this approach, no constraints are needed for call nodes and exit nodes.
In a backward analysis, one would consider the call nodes and the function
exit nodes rather than the function entry nodes and the after-call nodes. Also
notice that we exploit the fact that the variant of the TIP language we use in this
chapter does not have global variables, a heap, nested functions, or higher-order
functions.
Exercise 8.4: Write and solve the constraints that are generated by the inter-
procedural sign analysis for the following program:
inc(a) {
return a+1;
}
main() {
var x,y;
x = inc(17);
y = inc(87);
return x+y;
}
Exercise 8.5: Assume we extend TIP with global variables. Such variables are
declared before all functions and their scope covers all functions. Write a TIP
program with global variables that is analyzed incorrectly (that is, unsoundly)
with the current analysis. Then show how the constraint rules above should
be modified to accommodate this language feature.
Function entry nodes may have many predecessors, and similarly, function exit nodes may have many successors. For this reason, algorithms like PropagationWorkListAlgorithm (Section 5.10) are often preferred for interprocedural dataflow analysis.
Exercise 8.6: For the interprocedural sign analysis, how can we choose dep(v)
when v is a call node, an after-call node, a function entry node, or a function
exit node?
Consider this example program:

f(z) {
return z*42;
}

main() {
var x,y;
x = f(0); // call 1
y = f(87); // call 2
return x + y;
}
Due to the first call to f the parameter z may be 0, and due to the second call it
may be a positive number, so in the abstract state at the entry of f, the abstract
value of z is >. That value propagates through the body of f and back to the
callers, so both x and y also become >. This is an example of dataflow along
interprocedurally invalid paths: according to the analysis constraints, dataflow
from one call node propagates through the function body and returns not only
at the matching after-call node but at all after-call nodes. Although the analysis
is still sound, the resulting loss of precision may be unacceptable.
A naive solution to this problem is to use function cloning. In this specific
example we could clone f and let the two calls invoke different but identical func-
tions. A similar effect would be obtained by inlining the function body at each
call. More generally this may, however, increase the program size significantly,
and in case of (mutually) recursive functions it would result in infinitely large
programs. As we shall see next, we can instead encode the relevant information
to distinguish the different calls by the use of more expressive lattices, much
like the path-sensitivity approach in Chapter 7.
As discussed in the previous section, a basic context-insensitive dataflow ana-
lysis can be expressed using a lattice States^n where States is the lattice describing
abstract states and n = |Nodes| (or equivalently, using a lattice Nodes → States).
Context-sensitive analysis instead uses a lattice of the form

(Contexts → lift(States))^n

where Contexts is a set of call contexts. The JOIN function is modified accordingly,

JOIN(v, c) = ⊔_{w∈pred(v)} [[w]](c)
to match the new lattice with context sensitivity. Note that information for
different call contexts is kept apart, and that the reachability information is
propagated along. How to model the dataflow at call nodes, after-call nodes,
function entry nodes, and function exit nodes depends on the context sensitivity
strategy, as described in the following sections.
or more formally: A^{≤k} = ⋃_{i=0,...,k} A^i. The symbol ε denotes the empty tuple.
which has different abstract values for z depending on the caller. Notice that the information for the ε context is unreachable, since f is not the main function but is always executed from c1 or c2.
The constraint rule for an entry node v of a function f(b1, ..., bn) models parameter passing in the same way as in context-insensitive analysis, but it now takes the call context c at the function entry and the call context c′ at each call node into account:

[[v]](c) = ⊔ { s_w^{c′} | w ∈ pred(v) ∧ c = w ∧ c′ ∈ Contexts }

where s_w^{c′} denotes the abstract state created from the call at node w in context c′:

s_w^{c′} =
    unreachable                                                       if [[w]](c′) = unreachable
    ⊥[b1 ↦ eval([[w]](c′), E1^w), ..., bn ↦ eval([[w]](c′), En^w)]    otherwise
Exercise 8.8: Verify that this constraint rule for function entry nodes indeed
leads to the lattice element shown above for the example program.
Expressed using inequations instead, the constraint rule for a function entry node v, where w ∈ pred(v) is a caller and c′ ∈ Contexts is a call context, can be written as follows, which may be more intuitively clear:

[[v]](w) ⊒ s_w^{c′}

Informally, for any call context c′ at the call node w, an abstract state s_w^{c′} is built by evaluating the function arguments and propagated to call context w at the function entry node v.
Exercise 8.9: Give a constraint rule for the entry node of the special function main. (Remember that main is always reachable in context ε and that the values of its parameters can be any integers.)
Assume v is an after-call node that stores the return value in the variable X, and that v′ is the associated call node and w ∈ pred(v) is the function exit node. The constraint rule for v merges the abstract state from v′ and the return value from w, now taking the call contexts and reachability into account:
[[v]](c) =
    unreachable                          if [[v′]](c) = unreachable ∨ [[w]](v′) = unreachable
    [[v′]](c)[X ↦ [[w]](v′)(result)]     otherwise
Notice that with this kind of context sensitivity, v′ is both a call node and a call context, and the abstract value of result is obtained from the exit node w in call context v′.
Exercise 8.10: Write and solve the constraints that are generated by the inter-
procedural sign analysis for the program from Exercise 8.4, this time with
context sensitivity using the call string approach with k = 1. (Even though
this program does not need context sensitivity to be analyzed precisely, it
illustrates the mechanism behind the call string approach.)
Exercise 8.12: Write a TIP program that needs the call string bound k = 2 or
higher to be analyzed with optimal precision using the sign analysis. That is,
some variable in the program is assigned the abstract value > by the analysis
if and only if k < 2.
Exercise 8.13: Generalize the constraint rules shown above to work with any
k ≥ 1, not just k = 1.
In summary, the call string approach distinguishes calls to the same function
based on the call sites that appear in the call stack. In practice, k = 1 sometimes
gives inadequate precision, and k ≥ 2 is generally too expensive. For this reason,
a useful strategy is to select k individually for each call site, based on heuristics.
f(z) {
return z*42;
}
main() {
var x,y;
x = f(42); // call 1
y = f(87); // call 2
return x + y;
}
The call string approach with k ≥ 1 will analyze the f function twice, which
is unnecessary because the abstract value of the argument is + at both calls.
Rather than distinguishing calls based on information about control flow from
the call stack, the functional approach to context sensitivity distinguishes calls
based on the data from the abstract states at the calls. In the most general form,
the functional approach uses
Contexts = States
although a subset often suffices. With this set of call contexts, the analysis lattice becomes

(States → lift(States))^n
which clearly leads to a significant increase of the theoretical worst-case com-
plexity compared to context insensitive analysis.
The idea is that a lattice element for a CFG node v is a map m : States →
lift(States) such that m(s) approximates the possible states at v given that the
current function containing v was entered in a state that matches s. The situation
m(s) = unreachable means that there is no execution of the program where the
function is entered in a state that matches s and v is reached. If v is the exit node
of a function f , the map m is a summary of f , mapping abstract entry states to
abstract exit states, much like a transfer function (see Section 5.10) models the
effect of executing a single instruction but now for an entire function.
Returning to the example program from Section 8.2 (page 97), we will now
define the analysis constraints such that, in particular, we obtain the following
lattice element at the exit of the function f:³

[⊥[z ↦ 0] ↦ ⊥[z ↦ 0, result ↦ 0],
 ⊥[z ↦ +] ↦ ⊥[z ↦ +, result ↦ +],
 all other contexts ↦ unreachable]
This information shows that the exit of f is unreachable unless z is 0 or + at the
entry of the function, and that the sign of result at the exit is the same as the
sign of z at the input. In particular, the context where z is - maps to unreachable
because f is never called with negative inputs in the program.
The constraint rule for an entry node v of a function f(b1, ..., bn) is the same as in the call strings approach, except for the condition on c:

[[v]](c) = ⊔ { s_w^{c′} | w ∈ pred(v) ∧ c = s_w^{c′} ∧ c′ ∈ Contexts }
³ We here use the map update notation described on page 40 and the fact that the bottom element of a map lattice maps all inputs to the bottom element of the codomain, so ⊥[z ↦ 0] denotes the function that maps all variables to ⊥, except z, which is mapped to 0.
(The abstract state s_w^{c′} is defined as in Section 8.3.) In this constraint rule, the abstract state computed for the call context c at the entry node v only includes information from the calls that produce an entry state s_w^{c′} where the context c is identical to that entry state, for any context c′ at the call node w.
Exercise 8.15: Verify that this constraint rule for function entry nodes indeed
leads to the lattice element shown above for the example program.
Exercise 8.16: Give a constraint rule for the entry node of the special function
main. (Remember that main is always reachable and that the values of its
parameters can be any integers.)
Assume v is an after-call node that stores the return value in the variable X, and that v′ is the associated call node and w ∈ pred(v) is the function exit node. The constraint rule for v merges the abstract state from v′ and the return value from w, while taking the call contexts and reachability into account:
[[v]](c) =
    unreachable                               if [[v′]](c) = unreachable ∨ [[w]](s_{v′}^c) = unreachable
    [[v′]](c)[X ↦ [[w]](s_{v′}^c)(result)]    otherwise
To find the relevant context for the function exit node, this rule builds the
same abstract state as the one built at the call node.
Exercise 8.17: Assume we have analyzed a program P using context sensitive
interprocedural sign analysis with the functional approach, and the analysis
result contains the following lattice element for the exit node of a function
named foo:
[ [x ↦ -, y ↦ -, result ↦ ⊥] ↦ [x ↦ +, y ↦ +, result ↦ +],
  [x ↦ +, y ↦ +, result ↦ ⊥] ↦ [x ↦ -, y ↦ -, result ↦ -],
  all other contexts ↦ unreachable ]
Explain informally what this tells us about the program P . What could foo
look like?
Exercise 8.18: Write and solve the constraints that are generated by the inter-
procedural sign analysis for the program from Exercise 8.4, this time with
context sensitivity using the functional approach.
Exercise 8.19: Show that this claim about the precision of the functional
approach is correct.
Chapter 9

Control Flow Analysis

Consider a minimal λ-calculus with this abstract syntax:

E → λX.E
  | X
  | E E
(In Section 9.3 we demonstrate this analysis technique on the TIP language.)
For simplicity we assume that all λ-bound variables are distinct. To construct a
CFG for a term in this calculus, we need to approximate for every expression
E the set of closures to which it may evaluate. A closure can be modeled by a
symbol of the form λX that identifies a concrete λ-abstraction. This problem,
called closure analysis, can be solved using the techniques from Chapters 4 and 5.
However, since the intraprocedural control flow is trivial in this language, we
might as well perform the analysis directly on the AST.
The lattice we use is the powerset of closures occurring in the given term
ordered by subset inclusion. For every AST node v we introduce a constraint
variable [[v]] denoting the set of resulting closures. For an abstraction λX.E we
have the constraint
λX ∈ [[λX.E]]
For an application E1 E2 we have the conditional constraint

λX ∈ [[E1]] ⟹ ([[E2]] ⊆ [[X]] ∧ [[E]] ⊆ [[E1 E2]])

for every closure λX.E, which models that the actual argument may flow into the formal argument and that the value of the function body is among the possible results of the function call.
Exercise 9.1: Show how the resulting constraints can be expressed as mono-
tone constraints and solved by a fixed-point computation, with an appropriate
choice of lattice.
Exercise 9.2: Show that a unique minimal solution exists, since solutions are
closed under intersection.
The algorithm is based on a simple data structure. Each variable is mapped to a
node in a directed acyclic graph (DAG). Each node has an associated bitvector
belonging to {0, 1}^k, initially defined to be all 0's. Each bit has an associated list
of pairs of variables, which is used to model conditional constraints. The edges
in the DAG reflect inclusion constraints. An example graph may look like:
[figure: a DAG with nodes for x1, x2, x3, x4, a bitvector attached to each node, inclusion edges between the nodes, and a pair (x2, x4) stored in the list of one of the bits, representing a pending conditional constraint]
Constraints are added one at a time, and the bitvectors will at all times directly
represent the minimal solution of the constraints seen so far.
A constraint of the form t ∈ x is handled by looking up the node associated
with x and setting the corresponding bit to 1. If its list of pairs was not empty,
then an edge between the nodes corresponding to y and z is added for every
pair (y, z) and the list is emptied. A constraint of the form t ∈ x =⇒ y ⊆ z is
handled by first testing if the bit corresponding to t in the node corresponding
to x has value 1. If this is so, then an edge between the nodes corresponding to
y and z is added. Otherwise, the pair (y, z) is added to the list for that bit.
If a newly added edge forms a cycle, then all nodes on that cycle can be
merged into a single node, which implies that their bitvectors are unioned
together and their pair lists are concatenated. The map from variables to nodes
is updated accordingly. In any case, to reestablish all inclusion relations we
must propagate the values of each newly set bit along all edges in the graph.
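To make the bookkeeping concrete, here is a minimal Python sketch of the data structure (all names are illustrative, and cycle merging is omitted for brevity, so this simplified version does not achieve the bound derived below):

class CubicSolver:
    def __init__(self, variables):
        self.tokens = {x: set() for x in variables}     # bitvector per node
        self.pending = {x: {} for x in variables}       # token -> list of (y, z) pairs
        self.edges = {x: set() for x in variables}      # inclusion edges

    def add_token(self, t, x):                          # constraint: t in x
        if t not in self.tokens[x]:
            self.tokens[x].add(t)
            for (y, z) in self.pending[x].pop(t, []):   # fire waiting pairs
                self.add_edge(y, z)
            for y in set(self.edges[x]):                # propagate along edges
                self.add_token(t, y)

    def add_conditional(self, t, x, y, z):              # constraint: t in x ==> y subseteq z
        if t in self.tokens[x]:
            self.add_edge(y, z)
        else:
            self.pending[x].setdefault(t, []).append((y, z))

    def add_edge(self, y, z):                           # constraint: y subseteq z
        if z not in self.edges[y]:
            self.edges[y].add(z)
            for t in set(self.tokens[y]):
                self.add_token(t, z)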
To analyze the time complexity of this algorithm, we assume that the numbers of tokens and variables are both O(n). This is clearly the case in closure analysis of a program of size n.
Merging DAG nodes on cycles can be done at most O(n) times. Each merger involves at most O(n) nodes, and the union of their bitvectors is computed in time at most O(n²). The total for this part is O(n³).
New edges are inserted at most O(n²) times. Constant sets are included at most O(n²) times, once for each t ∈ x constraint.
Finally, to limit the cost of propagating bits along edges, we imagine that each pair of corresponding bits along an edge is connected by a tiny bitwire. Whenever the source bit is set to 1, that value is propagated along the bitwire, which then is broken:

[Figure: two bitvectors connected bit-by-bit by bitwires; the wires whose source bits have been set to 1 have fired and are broken.]
Since we have at most n³ bitwires, the total cost for propagation is O(n³). Adding up, the total cost for the algorithm is also O(n³). The fact that this seems to be a lower bound as well is referred to as the cubic time bottleneck.
The kinds of constraints covered by this algorithm are a simple case of the more general set constraints, which allow richer constraints on sets of finite terms. General set constraints are also solvable, but in time O(2^(2^n)).
9.3 TIP with First-Class Functions

Closure analysis carries over directly to TIP programs with first-class functions. For a definition of a function f we have the constraint
f ∈ [[f]]
for assignments X = E we have the constraint
[[E]] ⊆ [[X]]
and, finally, for computed function calls E(E1,. . . ,En) we have for every definition of a function f with arguments a1^f, . . . , an^f and return expression E'_f this constraint:
f ∈ [[E]] =⇒ ([[E1]] ⊆ [[a1^f]] ∧ . . . ∧ [[En]] ⊆ [[an^f]] ∧ [[E'_f]] ⊆ [[E(E1,. . . ,En)]])
As an example, consider this program:
inc(i) { return i+1; }
dec(j) { return j-1; }
ide(k) { return k; }

foo(n,f) {
var r;
if (n==0) { f=ide; }
r = f(n);
return r;
}
main() {
var x,y;
x = input;
if (x>0) { y = foo(x,inc); } else { y = foo(x,dec); }
return y;
}
The control flow analysis generates the following constraints:
inc ∈ [[inc]]
dec ∈ [[dec]]
ide ∈ [[ide]]
[[ide]] ⊆ [[f]]
[[f(n)]] ⊆ [[r]]
inc ∈ [[f]] =⇒ [[n]] ⊆ [[i]] ∧ [[i+1]] ⊆ [[f(n)]]
dec ∈ [[f]] =⇒ [[n]] ⊆ [[j]] ∧ [[j-1]] ⊆ [[f(n)]]
ide ∈ [[f]] =⇒ [[n]] ⊆ [[k]] ∧ [[k]] ⊆ [[f(n)]]
[[input]] ⊆ [[x]]
[[foo(x,inc)]] ⊆ [[y]]
[[foo(x,dec)]] ⊆ [[y]]
foo ∈ [[foo]]
foo ∈ [[foo]] =⇒ [[x]] ⊆ [[n]] ∧ [[inc]] ⊆ [[f]] ∧ [[r]] ⊆ [[foo(x,inc)]]
foo ∈ [[foo]] =⇒ [[x]] ⊆ [[n]] ∧ [[dec]] ⊆ [[f]] ∧ [[r]] ⊆ [[foo(x,dec)]]
The least solution is:
[[inc]] = {inc}
[[dec]] = {dec}
[[ide]] = {ide}
[[f]] = {inc, dec, ide}
[[foo]] = {foo}
On this basis the interprocedural CFG for the program can be constructed, with call and return edges connecting the call f(n) to each of inc, dec, and ide, and the two calls in main to foo. This CFG can then be used as the basis for subsequent interprocedural dataflow analyses.
Exercise 9.3: Consider the following program:
(a) Write the constraints that are produced by the control-flow analysis for this program.
(b) Compute the least solution to the constraints.
inc(x) {
return x+1;
}
dec(y) {
return y-1;
}
main(a) {
var t;
t = inc;
a = t(a);
t = dec;
a = t(a);
return a;
}
Pointer Analysis
The final extension of the TIP language introduces pointers and dynamic mem-
ory allocation. For simplicity, we ignore records in this chapter.
To illustrate the problem with pointers, assume we wish to perform a sign analysis of TIP code like this:
...
*x = 42;
*y = -87;
z = *x;
Here, the value of z depends on whether or not x and y are aliases, meaning that
they point to the same cell. Without knowledge of such aliasing information,
it quickly becomes impossible to produce useful dataflow and control-flow
analysis results.
The first analyses that we shall study are flow-insensitive. The end result of such an analysis is a function pt : Vars → 2^Cells that for each pointer variable X returns the set pt(X) of cells it may point to. We wish to perform a conservative analysis that computes sets that may be too large but never too small.
Given such points-to information, many other facts can be approximated. If
we wish to know whether pointer variables x and y may be aliases, then a safe
answer is obtained by checking whether pt(x) ∩ pt(y) is nonempty.
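As a small illustration (the function name and representation are hypothetical), the alias check is a simple set intersection:

def may_alias(pt, x, y):
    # pt maps each variable to its set of abstract cells
    return len(pt[x] & pt[y]) > 0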
The initial values of local variables are undefined in TIP programs; however, for these flow-insensitive points-to analyses we assume that all the variables are initialized before they are used. (In other words, these analyses are sound only for programs that never read from uninitialized variables.)
An almost-trivial analysis, called address taken, is to simply return all possible
abstract cells, except that X is only included if the expression &X occurs in
the given program. This only suffices for very simple applications, so more
ambitious approaches are usually preferred. If we restrict ourselves to typable
programs, then any points-to analysis could be improved by removing those
cells whose types do not match the type of the pointer variable.
10.2 Andersen's Algorithm

For this analysis we assume that the program has been normalized so that every pointer manipulation is of one of these six kinds:
• X = alloc P
• X1 = &X2
• X1 = X2
• X1 = *X2
• *X1 = X2
• X = null
For each of these operations, Andersen's analysis generates inclusion constraints over the points-to sets. Consider this example program:
p = alloc null;
x = y;
x = z;
*p = z;
p = q;
q = &y;
x = *p;
p = &z;
alloc-1 ∈ [[p]]
[[y]] ⊆ [[x]]
[[z]] ⊆ [[x]]
c ∈ [[p]] =⇒ [[z]] ⊆ [[c]] for each c ∈ Cells
[[q]] ⊆ [[p]]
y ∈ [[q]]
c ∈ [[p]] =⇒ [[c]] ⊆ [[x]] for each c ∈ Cells
z ∈ [[p]]
where Cells = {p, q, x, y, z, alloc-1}. The least solution is quite precise in this
case (here showing only the nonempty values):
pt(p) = {alloc-1, y, z}
pt(q) = {y}
Note that although this algorithm is flow insensitive, the directionality of the
constraints implies that the dataflow is still modeled with some accuracy.
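As an illustration, here is a naive Python sketch of a solver for these constraints (the tuple-based statement encoding is invented for this example; a practical implementation would use the cubic algorithm from Section 9.2 instead of blind re-iteration):

def andersen(stmts, cells):
    # pt maps every abstract cell (variable or alloc site) to a set of cells
    pt = {c: set() for c in cells}
    changed = True
    while changed:
        changed = False

        def add(dst, new):
            nonlocal changed
            if not new <= pt[dst]:
                pt[dst] |= new
                changed = True

        for s in stmts:
            if s[0] == "alloc":            # X = alloc P, alloc site s[2]
                add(s[1], {s[2]})
            elif s[0] == "addr":           # X1 = &X2
                add(s[1], {s[2]})
            elif s[0] == "copy":           # X1 = X2
                add(s[1], pt[s[2]])
            elif s[0] == "load":           # X1 = *X2
                for c in set(pt[s[2]]):
                    add(s[1], pt[c])
            elif s[0] == "store":          # *X1 = X2
                for c in set(pt[s[1]]):
                    add(c, pt[s[2]])
    return pt

stmts = [("alloc", "p", "alloc-1"), ("copy", "x", "y"), ("copy", "x", "z"),
         ("store", "p", "z"), ("copy", "p", "q"), ("addr", "q", "y"),
         ("load", "x", "p"), ("addr", "p", "z")]
pt = andersen(stmts, {"p", "q", "x", "y", "z", "alloc-1"})
# pt["p"] == {"alloc-1", "y", "z"} and pt["q"] == {"y"}, matching the least
# solution shown above.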
Exercise 10.3: Use Andersen’s algorithm to compute the points-to sets for the
variables in the following program fragment:
a = &d;
b = &e;
a = b;
*a = alloc null;
Exercise 10.4: Use Andersen’s algorithm to compute the points-to sets for the
variables in the following program fragment:
z = &x;
w = &a;
a = 42;
if (a > b) {
*z = &a;
y = &b;
} else {
x = &b;
y = w;
}
10.3 Steensgaard's Algorithm

Steensgaard's analysis is a coarser but almost-linear-time alternative to Andersen's algorithm. It performs the analysis using term unification (as in Chapter 3): each statement generates equality constraints instead of inclusion constraints, and unifying two terms with &-constructors also unifies their arguments:
&α1 = &α2 =⇒ α1 = α2
For the example program from Section 10.2, Steensgaard's algorithm generates the following constraints:
[[p]] = &[[alloc-1]]
[[x]] = [[y]]
[[x]] = [[z]]
[[p]] = &α1 ∧ [[z]] = α1
[[p]] = [[q]]
[[q]] = &[[y]]
[[p]] = &α2 ∧ [[x]] = α2
[[p]] = &[[z]]
The resulting points-to sets are less precise than those of Andersen's algorithm, but they are computed by a much faster algorithm.
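The following Python sketch shows how the unification solver can be implemented with union-find (illustrative only; a robust implementation would add union by rank and handle all statement kinds explicitly):

class Steensgaard:
    def __init__(self):
        self.parent = {}            # union-find parents
        self.target = {}            # class representative -> class it points to

    def find(self, c):
        self.parent.setdefault(c, c)
        while self.parent[c] != c:
            self.parent[c] = self.parent[self.parent[c]]   # path halving
            c = self.parent[c]
        return c

    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a == b:
            return a
        self.parent[b] = a
        ta, tb = self.target.pop(a, None), self.target.pop(b, None)
        if ta and tb:
            self.target[a] = self.union(ta, tb)   # unify what they point to
        elif ta or tb:
            self.target[a] = self.find(ta or tb)
        return a

    def points_to(self, a, b):      # e.g. for a = &b or a = alloc
        a, b = self.find(a), self.find(b)
        if a in self.target:
            self.union(self.target[a], b)
        else:
            self.target[a] = b

# x = y unifies the two classes: s.union("x", "y"); x = &y records an edge:
# s.points_to("x", "y"); loads and stores similarly unify with the target class.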
Exercise 10.5: Use Steensgaard’s algorithm to compute the points-to sets for
the two programs from Exercise 10.3 and Exercise 10.4.
Exercise 10.6: In Steensgaard's analysis, can the constraint for load statements X1 = *X2 be changed to
[[X2]] = &[[X1]]
without affecting the analysis results?
Exercise 10.7: Similarly, can the constraint for store statements *X1 = X2 be changed to
[[X1]] = &[[X2]]
without affecting the analysis results?
10.4 Interprocedural Points-To Analysis

In languages with first-class functions, points-to analysis and control flow analysis become mutually dependent, since a call may go through a pointer, as in
(*x)(x);
As in Section 9.3, we assume that all function calls are normalized to the form
X = X'(X1, . . . , Xn);
so that the involved expressions are all variables. Similarly, all return expressions are assumed to be just variables.
Exercise 10.9: Continuing Exercise 10.8, can we still use the cubic algorithm
(Section 9.2) to solve the analysis constraints? If so, is the analysis time still
O(n3 ) where n is the size of the program being analyzed?
10.5 Null Pointer Analysis

We now show how to use points-to information to detect null pointer dereferences, i.e. situations where *X is evaluated while X may contain null. For every cell we track whether its value may be null, using this simple lattice, called Null:

⊤
NN

where the bottom element NN means definitely not null and the top element ⊤ represents values that may be null. We then form the following map lattice for abstract states:
States = Cells → Null
For every CFG node v we introduce a constraint variable [[v]] denoting an element
from the map lattice. We shall use each constraint variable to describe an abstract
state for the program point immediately after the node.
For all nodes that do not involve pointer operations we have the constraint
[[v]] = JOIN(v)
where
JOIN(v) = ⊔_{w∈pred(v)} [[w]]
For a heap load operation X1 = *X2 we must take into account every cell that X2 may point to, so we join their abstract values:
X1 = *X2: [[v]] = JOIN(v)[X1 ↦ ⊔_{α∈pt(X2)} JOIN(v)(α)]
Similar reasoning gives constraints for the other operations that affect pointer variables:
X = alloc P: [[v]] = JOIN(v)[X ↦ NN, alloc-i ↦ ⊤]
X1 = &X2: [[v]] = JOIN(v)[X1 ↦ NN]
X1 = X2: [[v]] = JOIN(v)[X1 ↦ JOIN(v)(X2)]
X = null: [[v]] = JOIN(v)[X ↦ ⊤]
Exercise 10.10: Explain why the above four constraints are monotone and
sound.
For a heap store operation *X 1 = X 2 we need to model the change of what-
ever X 1 points to. That may be multiple abstract cells, namely pt(X1 ). Moreover,
each abstract heap cell alloc-i may describe multiple concrete cells. In the
constraint for heap store operations, we must therefore join the new abstract
value into the existing one for each affected cell in pt(X1 ):
*X1 = X2: [[v]] = store(JOIN(v), X1, X2)
where
store(σ, X1, X2) = σ[α ↦ σ(α) ⊔ σ(X2) for each α ∈ pt(X1)]
The situation we here see at heap store operations where we model an as-
signment by joining the new abstract value into the existing one is called a weak
update. In contrast, in a strong update the new abstract value overwrites the
existing one, which we see in the null pointer analysis at all operations that
modify program variables. Strong updates are obviously more precise than
weak updates in general, but it may require more elaborate analysis abstractions
to detect situations where strong update can be applied soundly.
After performing the null pointer analysis of a given program, a pointer
dereference *X at a program point v is guaranteed to be safe if
JOIN (v)(X) = NN
The precision of this analysis depends of course on the quality of the underlying
points-to analysis.
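To make these constraints concrete, here is a small Python sketch of the corresponding transfer functions, reusing the tuple-based statement encoding from the Andersen sketch above and representing Null by the two values NN < TOP (all names are illustrative):

NN, TOP = 0, 1        # NN: definitely not null; TOP: may be null

def join(s1, s2):
    return {c: max(s1[c], s2[c]) for c in s1}

def transfer(stmt, s, pt):
    s = dict(s)                      # strong updates overwrite a copy
    if stmt[0] == "alloc":           # X = alloc P
        s[stmt[1]] = NN
        s[stmt[2]] = TOP             # the fresh cell's content may be null
    elif stmt[0] == "addr":          # X1 = &X2
        s[stmt[1]] = NN
    elif stmt[0] == "copy":          # X1 = X2
        s[stmt[1]] = s[stmt[2]]
    elif stmt[0] == "null":          # X = null
        s[stmt[1]] = TOP
    elif stmt[0] == "load":          # X1 = *X2: join over all cells X2 may point to
        s[stmt[1]] = max((s[c] for c in pt[stmt[2]]), default=NN)
    elif stmt[0] == "store":         # *X1 = X2: weak update of each possible target
        for c in pt[stmt[1]]:
            s[c] = max(s[c], s[stmt[2]])
    return s

A dereference *X at a node v would then be reported safe only if the computed state before v maps X to NN.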
Consider the following buggy example program:
p = alloc null;
q = &p;
n = null;
*q = n;
*p = n;
Andersen’s algorithm computes the following points-to sets:
pt(p) = {alloc-1}
pt(q) = {p}
pt(n) = ∅
Based on this information, the null pointer analysis generates the following
constraints:
[[p = alloc null]] = ⊥[p ↦ NN, alloc-1 ↦ ⊤]
[[q = &p]] = [[p = alloc null]][q ↦ NN]
[[n = null]] = [[q = &p]][n ↦ ⊤]
[[*q = n]] = [[n = null]][p ↦ [[n = null]](p) ⊔ [[n = null]](n)]
[[*p = n]] = [[*q = n]][alloc-1 ↦ [[*q = n]](alloc-1) ⊔ [[*q = n]](n)]
The least solution is:
[[p = alloc null]] = [p ↦ NN, q ↦ NN, n ↦ NN, alloc-1 ↦ ⊤]
[[q = &p]] = [p ↦ NN, q ↦ NN, n ↦ NN, alloc-1 ↦ ⊤]
[[n = null]] = [p ↦ NN, q ↦ NN, n ↦ ⊤, alloc-1 ↦ ⊤]
[[*q = n]] = [p ↦ ⊤, q ↦ NN, n ↦ ⊤, alloc-1 ↦ ⊤]
[[*p = n]] = [p ↦ ⊤, q ↦ NN, n ↦ ⊤, alloc-1 ↦ ⊤]
By inspecting this information, an analysis could statically detect that when *q =
n is evaluated, which is immediately after n = null, the variable q is definitely
non-null. On the other hand, when *p = n is evaluated, we cannot rule out the
possibility that p may contain null.
Exercise 10.11: Show an alternative constraint for heap load operations using
weak update, together with an example program where the modified analysis
then gives a result that is less precise than the analysis presented above.
Exercise 10.12: Show an (unsound) alternative constraint for heap store op-
erations using strong update, together with an example program where the
modified analysis then gives a wrong result.
10.6 Flow-Sensitive Points-To Analysis

Consider a program that builds heap structures reachable from program variables x, y, and z, such that x and y are not disjoint whereas y and z are. We will seek to answer questions about disjointness of the structures contained in program variables. Such information may be
useful, for example, to automatically parallelize execution in an optimizing com-
piler. For such analysis, flow-insensitive reasoning is sometimes too imprecise.
However, we can create a flow-sensitive variant of Andersen’s analysis.
We use a lattice of points-to graphs, which are directed graphs in which the
nodes are the abstract cells for the given program and the edges correspond
to possible pointers. Points-to graphs are ordered by inclusion of their sets of
edges. Thus, ⊥ is the graph without edges and ⊤ is the completely connected graph. Formally, our lattice for abstract states is then
States = 2^(Cells×Cells)
ordered by the usual subset inclusion. For every CFG node v we introduce
a constraint variable [[v]] denoting a points-to graph that describes all possible
stores at that program point. For the nodes corresponding to the various pointer
manipulations we have these constraints:
X = alloc P : [[v]] = JOIN (v) ↓ X ∪ {(X, alloc-i)}
X 1 = &X 2 : [[v]] = JOIN (v) ↓ X 1 ∪ {(X 1 , X 2 )}
X1 = X2: [[v]] = assign(JOIN (v), X 1 , X 2 )
X 1 = *X 2 : [[v]] = load (JOIN (v), X 1 , X 2 )
*X 1 = X 2 : [[v]] = store(JOIN (v), X 1 , X 2 )
X = null: [[v]] = JOIN (v) ↓ X
and for all other nodes:
[[v]] = JOIN (v)
where
JOIN(v) = ⋃_{w∈pred(v)} [[w]]
the functions assign, load, and store model the effect of the respective operations on points-to graphs, and σ ↓ x removes all edges originating from x:
σ ↓ x = {(s, t) ∈ σ | s ≠ x}
[Figure: the points-to graph computed for the final program point of an example program, in which p and q point to alloc-3 and alloc-4, while x and y point to alloc-1 and alloc-2, with no cells reachable from both x and y.]
From this result we can safely conclude that x and y will always be disjoint.
Note that this analysis also computes a flow-sensitive points-to map that for each program point v is defined by:
pt(p) = {t | (p, t) ∈ [[v]]}
This analysis is more precise than Andersen’s algorithm, but clearly also more
expensive to perform. As an example, consider the program:
x = &y;
x = &z;
After these statements, Andersen’s algorithm would predict that pt(x) = {y, z}
whereas the flow-sensitive analysis computes pt(x) = {z} for the final program
point.
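Here is a Python sketch of such transfer functions on points-to graphs represented as edge sets, shown on the two-statement example (the helper names mirror assign, load, and store, but the code is only an illustration):

def kill(sigma, x):                  # sigma "down-arrow" x
    return {(s, t) for (s, t) in sigma if s != x}

def assign(sigma, x1, x2):           # X1 = X2
    return kill(sigma, x1) | {(x1, t) for (s, t) in sigma if s == x2}

def load(sigma, x1, x2):             # X1 = *X2
    mid = {t for (s, t) in sigma if s == x2}
    return kill(sigma, x1) | {(x1, t) for (s, t) in sigma if s in mid}

def store(sigma, x1, x2):            # *X1 = X2 (weak update: old edges kept)
    tgts = {t for (s, t) in sigma if s == x2}
    return sigma | {(m, t) for (s, m) in sigma if s == x1 for t in tgts}

sigma = kill(set(), "x") | {("x", "y")}    # after x = &y
sigma = kill(sigma, "x") | {("x", "z")}    # after x = &z
# sigma == {("x", "z")}, so pt(x) = {z} at the final program point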
10.7 Escape Analysis

Escape analysis asks whether values created inside a function may escape the scope in which they are valid, for example by returning a pointer to a local variable:
baz() {
var x;
return &x;
}
main() {
var p;
p=baz(); *p=1;
return *p;
}
Here baz returns a pointer to its local variable x, but that variable no longer exists once baz has returned, so the dereferences of p in main are erroneous. A simple escape analysis can detect such situations by checking that the cells possibly pointed to by a function's return expression cannot reach the function's own local variables or parameters in the points-to graph.
Abstract Interpretation
In the preceding chapters we have used the term soundness of an analysis only
informally: if an analysis is sound, the properties it infers for a given program
hold in all actual executions of the program. The theory of abstract interpreta-
tion provides a solid mathematical foundation for what it means for an analysis
to be sound, by relating the analysis specification to the formal semantics of
the programming language. Another use of abstract interpretation is for un-
derstanding whether an analysis design, or a part of an analysis design, is as
precise as possible relative to a choice of analysis lattice and where imprecision
may arise. The fundamental ideas of abstract interpretation were introduced by
Cousot and Cousot in the 1970s [CC76, CC77, CC79b].
(In the abstract interpretation literature, the semantics of the programming language is referred to as the "concrete semantics", and the analysis specification is called the "abstract semantics".)
11.1 A Collecting Semantics for TIP

For a subset of TIP (ignoring functions, pointers, and records), we model a concrete state as a partial map from program variables to integers:2
ConcreteStates = Vars ↪ Z
For every CFG node v we have a constraint variable that ranges over sets of
concrete states:
{[v]} ⊆ ConcreteStates
The idea is that {[v]} shall denote the set of concrete states that are possible at
the program point immediately after the instruction represented by v, in some
execution of the program. This is called a collecting semantics, because it “collects”
the possible states. In the exercises at the end of Section 11.6 we shall study other
kinds of collecting semantics that collect relevant information suitable for other analyses, such as live variables analysis. We choose to focus on the program
point immediately after the instruction of the CFG node, instead of the program
point before, to align with our sign analysis and the other forward analyses from
Chapter 5.
Consider this simple program as an example:
var x;
x = 0;
while (input) {
x = x + 2;
}
Its CFG looks as follows, where the bullets represent the program points that
have associated constraint variables.
[Figure: the CFG, with nodes entry, var x, x = 0, input (with true and false branches), x = x + 2, and exit; bullets after the nodes mark the program points with associated constraint variables.]
The solution we are interested in maps the constraint variable {[x = 0]} to the
single state where x is zero, {[x = x + 2]} is mapped to the set of all states where
x is a positive even number, and similarly for the other program points.
As a first step, we define some useful auxiliary functions, ceval , csucc, and
CJOIN , that have a close connection to the auxiliary functions used in the
2 We use the notation A ↪ B to denote the set of partial functions from A to B.
specification of the sign analysis, but now for concrete executions instead of
abstract executions.
The function ceval : ConcreteStates × E → 2^Z gives the semantics of evaluating an expression E relative to a concrete state ρ ∈ ConcreteStates, which results in a set of possible integer values, defined inductively as follows:3
ceval (ρ, X) = {ρ(X)}
ceval (ρ, I) = {I}
ceval (ρ, input) = Z
ceval (ρ, E1 + E2 ) = {z1 + z2 | z1 ∈ ceval (ρ, E1 ) ∧ z2 ∈ ceval (ρ, E2 )}
ceval(ρ, E1 / E2) = {z1 / z2 | z1 ∈ ceval(ρ, E1) ∧ z2 ∈ ceval(ρ, E2) ∧ z2 ≠ 0}
Evaluation of the other binary operators is defined similarly. In this simple
subset of TIP we consider here, evaluating an expression cannot affect the values
of the program variables. Also note that division by zero simply results in the
empty set of values.
We overload ceval such that it also works on sets of concrete states, ceval : 2^ConcreteStates × E → 2^Z:
ceval(R, E) = ⋃_{ρ∈R} ceval(ρ, E)
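As a small illustration, here is a direct Python rendering of ceval for this expression subset (the tuple-based expression encoding is invented for the example, and the infinite set Z is replaced by a finite sample, so the input case only approximates the definition; // floors, glossing over the exact rounding of TIP division):

SOME_INTS = set(range(-2, 3))        # finite stand-in for Z

def ceval(rho, e):
    if e[0] == "var":                # X
        return {rho[e[1]]}
    if e[0] == "int":                # I
        return {e[1]}
    if e[0] == "input":              # input
        return SOME_INTS
    if e[0] == "plus":               # E1 + E2
        return {z1 + z2 for z1 in ceval(rho, e[1]) for z2 in ceval(rho, e[2])}
    if e[0] == "div":                # E1 / E2; division by zero yields no value
        return {z1 // z2 for z1 in ceval(rho, e[1])
                         for z2 in ceval(rho, e[2]) if z2 != 0}
    raise ValueError(e[0])

# ceval({"x": 3}, ("plus", ("var", "x"), ("int", 1))) == {4}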
For a CFG node v, CJOIN (v) denotes the set of states at the program point
immediately before the instruction represented by v, relative to the states at the
program points after the relevant other nodes according to the csucc function:4
CJOIN (v) = {ρ ∈ ConcreteStates | ∃w ∈ Nodes : ρ ∈ {[w]} ∧ v ∈ csucc(ρ, w)}
3 We slightly abuse notation by using E both as an arbitrary expression and as the set of all
expressions, and similarly for the other syntactic categories. We also let I denote both an arbitrary
syntactic numeral and the mathematical integer it describes, and for simplicity we do not restrict
the numeric computations to, for example, 64 bit signed integers.
4 Note that CJOIN(v) is a function of all the constraint variables {[v1]}, . . . , {[vn]} for the entire program, just like JOIN(v) is a function of all the constraint variables [[v1]], . . . , [[vn]].
Exercise 11.1: Convince yourself that this definition of CJOIN makes sense,
especially for the cases where v is a node with multiple incoming edges (like
input in the example on page 126) or it is the first node of a branch (like x =
x + 2 in the example).
If v is an assignment node X = E, we use this constraint rule:
{[v]} = {ρ[X ↦ z] | ρ ∈ CJOIN(v) ∧ z ∈ ceval(ρ, E)}
This rule formalizes the runtime behavior of assignments: for every state ρ
that may appear immediately before executing the assignment, the state after
the assignment is obtained by overwriting the value of X with the result of
evaluating E.
If v is a variable declaration, var X1 , . . . ,Xn , we use this rule:
{[v]} = {ρ[X1 ↦ z1, . . . , Xn ↦ zn] | ρ ∈ CJOIN(v) ∧ z1, . . . , zn ∈ Z}
The only possible initial state at entry nodes is the partial map that is undefined
for all program variables, denoted []:
{[entry]} = {[]}
For all other kinds of nodes, we have this trivial constraint rule:
{[v]} = CJOIN(v)
Notice the resemblance with the analysis constraints from Section 5.1.
Exercise 11.2: Define a suitable constraint rule that expresses the semantics
of assert statements (see Section 7.1) in our collecting semantics. (For this
exercise, think of assert statements as written explicitly by the programmers
anywhere in the programs, not just for use in control sensitive analysis.)
Naturally, for this particular program we want a solution where x in the loop
and at the exit point can only be a nonnegative even integer, not an arbitrary
integer.
Exercise 11.4:
(a) Check that both of the above solutions are indeed solutions to the con-
straints for this particular program.
(b) Give an example of yet another solution.
(c) Argue that solution 1 is the least of all solutions.
The fixed-point theorem from Section 4.4 can be generalized to infinite-height lattices provided that f is continuous, meaning that f(⊔_{i≥0} xi) = ⊔_{i≥0} f(xi) for every increasing chain x0 ⊑ x1 ⊑ · · · . This tells us that a unique least solution always exists – even though the solution, of course, generally cannot be computed using the naive fixed-point algorithm.5
Exercise 11.5: Prove that if L is a lattice (not necessarily with finite height) and the function f : L → L is continuous, then fix(f) = ⊔_{i≥0} f^i(⊥) is a unique least fixed-point for f.
Exercise 11.6: Show that the constraints defined by the rules above are indeed continuous. (Note that for each of the n constraint variables, the associated constraint is a function cf_v : (2^ConcreteStates)^n → 2^ConcreteStates.) Then show that the combined constraint function cf is also continuous.
Exercise 11.7: Show that the lattice (2^ConcreteStates)^n has infinite height.
var a,b,c;
a = 42;
b = 87;
if (input) {
c = a + b;
} else {
c = a - b;
}
For this program, at the program points immediately after the assignment
b = 87, immediately after the assignment c = a - b (at the end of the else
branch), and the exit, the following sets of concrete states are possible according
to the collecting semantics:
{[b = 87]} = {[a ↦ 42, b ↦ 87, c ↦ z] | z ∈ Z}
{[c = a - b]} = {[a ↦ 42, b ↦ 87, c ↦ −45]}
{[exit]} = {[a ↦ 42, b ↦ 87, c ↦ 129], [a ↦ 42, b ↦ 87, c ↦ −45]}
Exercise 11.8: Check that the least fixed point of the semantic constraints for
the program is indeed these sets, for the three program points.
In comparison, the sign analysis specified in Section 5.1 computes the following
abstract states for the same program points:
[[b = 87]] = [a ↦ +, b ↦ +, c ↦ ⊤]
[[c = a - b]] = [a ↦ +, b ↦ +, c ↦ ⊤]
[[exit]] = [a ↦ +, b ↦ +, c ↦ ⊤]
In this specific case the analysis result is almost the best we could hope for,
with that choice of analysis lattice. Notice that the abstract value of c at the
program point after c = a - b is >, although the only possible value in concrete
executions is −45. This is an example of a conservative analysis result.
For use later in this chapter, let us introduce the notation {[P ]} = fix (cf ) and
[[P ]] = fix (af ) where cf is the semantic constraint function and af is the analysis
constraint function for a given program P . In other words, {[P ]} denotes the
semantics of P , and [[P ]] denotes the analysis result for P .
11.2 Abstraction and Concretization

To relate the semantic domain and the analysis domain of the sign analysis example, we use three abstraction functions:
αa : 2^Z → Sign
αb : 2^ConcreteStates → States
αc : (2^ConcreteStates)^n → States^n
As before, 2^Z is the powerset lattice defined over the set of integers ordered by subset, Sign is the sign lattice from Section 4.1, we define ConcreteStates = Vars ↪ Z and States = Vars → Sign as in Sections 11.1 and 4.1, respectively, and n is the number of CFG nodes. The functions are defined as follows, to precisely capture the informal descriptions given earlier:
αa(D) =
  ⊥ if D is empty
  + if D is nonempty and contains only positive integers
  - if D is nonempty and contains only negative integers
  0 if D is nonempty and contains only the integer 0
  ⊤ otherwise
for any D ∈ 2^Z.
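Since Sign is finite, αa is directly computable on finite sets of integers, as in this small sketch (the string encoding of the lattice elements is arbitrary):

def alpha_a(d):
    if not d:
        return "bot"
    if all(z > 0 for z in d):
        return "+"
    if all(z < 0 for z in d):
        return "-"
    if d == {0}:
        return "0"
    return "top"

# alpha_a({1, 2}) == "+", alpha_a({0}) == "0", alpha_a({-1, 1}) == "top"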
Dually we may define concretization functions that express the meaning of the
analysis lattice elements in terms of the concrete values, states, and n-tuples of
states:
γa : Sign → 2^Z
γb : States → 2^ConcreteStates
γc : States^n → (2^ConcreteStates)^n
defined by
γa(s) =
  ∅ if s = ⊥
  {1, 2, 3, . . . } if s = +
  {−1, −2, −3, . . . } if s = -
  {0} if s = 0
  Z if s = ⊤
for any s ∈ Sign.
Exercise 11.11: Argue that the three functions γa , γb , and γc from the sign
analysis example are monotone.
Exercise 11.12: Show that all three pairs of abstraction and concretization
functions (αa , γa ), (αb , γb ), and (αc , γc ) from the sign analysis example are
Galois connections.
Exercise 11.13: Show that αa ◦γa , αb ◦γb , and αc ◦γc are all equal to the identity
function, for the three pairs of abstraction and concretization functions from
the sign analysis.
Exercise 11.14: Argue that γ ◦ α is typically not the identity function, when
α : L1 → L2 is an abstraction function and γ : L2 → L1 is the associated
concretization function for some analysis. (Hint: consider αa and γa from the
sign analysis example.)
Exercise 11.16: Prove that α and γ form a Galois connection if and only if
∀x ∈ L1, y ∈ L2 : α(x) ⊑ y ⟺ x ⊑ γ(y)
This theorem is useful for some of the later exercises in this section.
If α and γ form a Galois connection, then α satisfies α(⊔A) = ⊔_{a∈A} α(a) for every A ⊆ L1. We shall use this result in the soundness argument in Section 11.3. Not surprisingly, the dual property also holds: γ satisfies γ(⊓B) = ⊓_{b∈B} γ(b) for every B ⊆ L2 when α and γ form a Galois connection.
Exercise 11.18: Show that if α and γ form a Galois connection, then α(⊥) = ⊥
and γ(>) = >.
We have argued that the Galois connection property is natural for any reasonable pair of an abstraction function and a concretization function, including
those that appear in our sign analysis example. The following exercise tells
us that it always suffices to specify either α or γ, then the other is uniquely
determined if requiring that they together form a Galois connection.
Exercise 11.19: Show that if α and γ form a Galois connection, then γ uniquely determines α by
α(x) = ⊓{y ∈ L2 | x ⊑ γ(y)}
for all x ∈ L1, and that, dually, α uniquely determines γ.
The result from Exercise 11.19 means that once the analysis designer has
specified the collecting semantics and the analysis lattice and constraint rules,
then the relation between the semantic domain and the analysis domain may
be specified using an abstraction function α (resp. a concretization function γ),
and then the associated concretization function γ (resp. abstraction function α)
is uniquely determined – provided that one exists such that the two functions
form a Galois connection.
Exercise 11.20: Show that if α : L1 → L2 satisfies α(⊔A) = ⊔_{a∈A} α(a) for every A ⊆ L1, then there exists a function γ : L2 → L1 such that α and γ form a Galois connection.
The dual property also holds: if γ satisfies γ(⊓B) = ⊓_{b∈B} γ(b) for every B ⊆ L2 then there exists a function α : L1 → L2 such that α and γ form a Galois connection.
The following exercise demonstrates that the Galois connection property can
be used as a “sanity check” when designing analysis lattices.
Exercise 11.21: Instead of using the usual Sign lattice from Section 5.1, assume
we chose to design our sign analysis based on this lattice:
⊤
0-   0+
⊥
At first, this may seem like a reasonable lattice for an analysis, but there is something strange about it: How should we define eval(σ, 0)? Or equivalently, how should we define αa({0})? We could somewhat arbitrarily choose either 0+ or 0-.
Show that, with this choice of lattice, there does not exist an abstraction
function αa such that αa and γa form a Galois connection.
Exercise 11.22: Continuing Exercise 11.21, let us add a lattice element 0, below
0- and 0+ and above ⊥, with γa (0) = {0}. Show that with this modification,
an abstraction function αa exists such that αa and γa form a Galois connection.
Exercise 11.23: In your solution to Exercise 5.36, you may have chosen the
following lattice for abstract values:
bigint
int
byte char
bool
where the lattice elements model the different sets of integers, for example,
γ(char) = {0, 1, . . . , 65535}. When defining the analysis constraints, you prob-
ably encountered some design choices and made some more or less arbitrary
choices. For example, the abstract addition of two bool values could be mod-
eled as either byte or char.
Does there exist an abstraction function α such that α and γ form a Galois
connection for the above lattice?
11.3 Soundness

We can now define soundness of an analysis: for every program P we require
α({[P]}) ⊑ [[P]]
In other words, soundness means that the analysis result over-approximates the abstraction of the semantics of the program. For the sign analysis, the property amounts to αc({[P]}) ⊑ [[P]], where {[P]} ∈ (2^ConcreteStates)^n and [[P]] ∈ States^n.
For the simple TIP example program considered in Section 11.1, the sound-
ness property is indeed satisfied (here showing the information just for the
program point immediately after the c = a - b statement):
αc(. . . , {[c = a - b]}, . . . ) =
αc(. . . , {[a ↦ 42, b ↦ 87, c ↦ −45]}, . . . ) ⊑
(. . . , [a ↦ +, b ↦ +, c ↦ ⊤], . . . ) =
(. . . , [[c = a - b]], . . . )
If we specify the relation between the two domains using concretization functions instead of using abstraction functions, we may dually define soundness as the property that the concretization of the analysis result over-approximates the semantics of the program:
{[P]} ⊑ γ([[P]])
For the sign analysis, which uses the concretization function γc : States^n → (2^ConcreteStates)^n, this reads {[P]} ⊑ γc([[P]]).
Exercise 11.24: Show that if α and γ form a Galois connection, then the two
definitions of soundness stated above are equivalent.
(Hint: see Exercise 11.16.)
Exercise 11.25: Prove that eval is a sound abstraction of ceval, in the sense defined above.
Hint: Use induction in the structure of the TIP expression. As part of the proof, you need to show that each abstract operator is a sound abstraction of the corresponding concrete operator, for example for the addition operator:
αa({z1 + z2 | z1 ∈ D1 ∧ z2 ∈ D2}) ⊑ αa(D1) +̂ αa(D2)
for all sets D1, D2 ⊆ Z.
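For instance, with D1 = {1} and D2 = {−1} the left-hand side is αa({1 + (−1)}) = αa({0}) = 0, while the right-hand side is + +̂ - = ⊤, so the inequality can be strict.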
Exercise 11.26: Prove that succ is a sound abstraction of csucc and that JOIN
is a sound abstraction of CJOIN , in the sense defined above.
In general, for functions cg : L1 → L1′ and ag : L2 → L2′, where L1, L1′ belong to the semantics side and L2, L2′ to the analysis side with abstraction functions α : L1 → L2 and α′ : L1′ → L2′, we say that ag is a sound abstraction of cg if α′ ∘ cg ⊑ ag ∘ α. In particular, af is a sound abstraction of cf if αc ∘ cf ⊑ af ∘ αc.
Exercise 11.27: Prove that for each kind of CFG node v, the sign analysis constraint af_v is a sound abstraction of the semantic constraint cf_v. (The most interesting case is the one where v is an assignment node.) Then use that result to prove that af is a sound abstraction of cf.
As an example for our sign analysis, eval being a sound abstraction of ceval means that αa ∘ ceval ⊑ eval ∘ αb (for a fixed expression), relating 2^ConcreteStates to Sign along both paths.
Intuitively, when starting from a set of concrete states, if we first abstract the
states and then evaluate abstractly with eval we get an abstract value that over-
approximates the one we get if we first evaluate concretely with ceval and then
abstract the values.
With the result of Exercise 11.27, it follows from the fixed-point theorems
(see pages 43 and 130) and the general properties of Galois connections shown
in Section 11.2 that the sign analysis is sound with respect to the semantics and
the abstraction functions, as shown next.
Recall that the analysis result for a given program P is computed as [[P]] = fix(af) = ⊔_{i≥0} af^i(⊥). By the fixed-point theorem from Exercise 11.5, the semantics of P is similarly given by {[P]} = fix(cf) = ⊔_{i≥0} cf^i(⊥). According to the definition of soundness, we thus need to show that α(fix(cf)) ⊑ fix(af).
The central result we need is the following soundness theorem: if α ∘ cf ⊑ af ∘ α, then α(fix(cf)) ⊑ fix(af).
Exercise 11.29: Prove that the interval analysis (Section 6.1) with widening
(using the definition of ∇ from page 81) is sound with respect to the collecting
semantics from Section 11.1.
11.4 Optimality
Assume we are developing a new analysis, and that we have chosen an analysis
lattice and the rules for generating analysis constraints for the various program-
ming language constructs. To enable formal reasoning about the soundness and
precision of the analysis, we have also provided a suitable collecting semantics for
the programming language (as in Section 11.1) and abstraction/concretization
functions that define the meaning of the analysis lattice elements (as in Sec-
tion 11.2). Furthermore, assume we have proven that the analysis is sound using
the approach from Section 11.3. We may now ask: Are our analysis constraints as
precise as possible, relative to the chosen analysis lattice?
As in the previous section, let α : L1 → L2 be an abstraction function where
L1 is the lattice for a collecting semantics and L2 is the lattice for an analysis, such
that α and γ form a Galois connection, and consider two functions cf : L1 → L1
and af : L2 → L2 that represent, respectively, the semantic constraints and
the analysis constraints for a given program. We say that af is the optimal6
abstraction of cf if
af = α ◦ cf ◦ γ
(which can also be written: af (b) = α(cf (γ(b))) for all b ∈ L2 ). Using the lattices
and abstraction/concretization functions from the sign analysis example, this
property reads af = αc ∘ cf ∘ γc.
(Compare this with the illustration of soundness from page 140.) To see that α ∘ cf ∘ γ is indeed the most precise monotone function that is a sound abstraction of cf, note that every sound abstraction af satisfies α ∘ cf ⊑ af ∘ α, and therefore α ∘ cf ∘ γ ⊑ af ∘ α ∘ γ ⊑ af, because α ∘ γ ⊑ id holds for any Galois connection.
6 In the literature on abstract interpretation, the term "best" is sometimes used instead of "optimal".

Exercise 11.30: Show that the abstract multiplication operator of the sign analysis is the optimal abstraction of concrete multiplication, i.e.
s1 ·̂ s2 = αa(γa(s1) · γa(s2))
for any s1, s2 ∈ Sign, where we overload the · operator to work on sets of integers: D1 · D2 = {z1 · z2 | z1 ∈ D1 ∧ z2 ∈ D2} for any D1, D2 ⊆ Z.
Despite the result of the previous exercise, the eval function from Section 5.1
is not the optimal abstraction of ceval . Here is a simple counterexample: Let
σ ∈ States such that σ(x) = > and consider the TIP expression x - x. We then
have
eval(σ, x - x) = ⊤
while
αa(ceval(γb(σ), x - x)) = 0
(This is essentially the same observation as the one in Exercise 5.9, but this time
stated more formally.) Interestingly, defining the eval function inductively and
compositionally in terms of optimal abstractions does not make the function
itself optimal.
Exercise 11.31: Assume we only work with normalized TIP programs (as
in Exercise 2.2). Give an alternative computable definition of eval for sign
analysis (i.e., an algorithm for computing eval (σ, E) for any abstract state σ
and normalized TIP expression E), such that eval is the optimal abstraction
of ceval .
Exercise 11.33: Which of the abstractions used in interval analysis (Section 6.1)
are optimal?
To be able to reason about optimality of the abstractions used in, for example,
live variables analysis or reaching definitions analysis, we first need a style of
collecting semantics that is suitable for those analyses, which we return to in
Section 11.6.
11.5 Completeness
As usual in logics, the dual of soundness is completeness. In Section 11.3 we defined soundness of an analysis for a program P as the property α({[P]}) ⊑ [[P]].
Consequently, it is natural to define that an analysis is complete for P if
[[P]] ⊑ α({[P]})
If an analysis is both sound and complete for P, then α({[P]}) = [[P]]. As an example, the sign analysis is sound but not complete for the following program:
x = input;
y = x;
z = x - y;
7 The literature on abstract interpretation often uses the term "complete" for what we here call "sound and complete".
Let σ denote the abstract state after the statement y = x such that σ(x) = σ(y) = ⊤. Any sound abstraction of the semantics of the single statement z = x - y will result in an abstract state that maps z to ⊤, but the answer 0 would be more precise and still sound in the analysis result for the final program point.
Intuitively, the analysis does not know about the correlation between x and y.
For this specific example program, we could in principle improve analysis
precision by changing the constraint generation rules to recognize the special
pattern consisting of the statement y = x followed by z = x - y. Instead of
such an ad hoc approach to gain precision, relational analysis (see Chapter 7) is
usually a more viable solution.
In Section 11.3 we observed (Exercise 11.24) that analysis soundness could
equivalently be defined as the property {[P]} ⊑ γ([[P]]). However, a similar
equivalence does not hold for completeness, as shown by the following two
exercises.
Exercise 11.34: We know from Exercise 11.16 that if α and γ form a Galois connection then α(x) ⊑ y ⟺ x ⊑ γ(y) for all x, y. Prove (by showing a counterexample) that the dual property α(x) ⊒ y ⟺ x ⊒ γ(y) does not hold in general.
(Hint: consider αa and γa from the sign analysis.)
Exercise 11.35: Give an example of a program P such that [[P]] ⊑ α({[P]}) and γ([[P]]) ⋢ {[P]} for sign analysis.
Thus, {[P ]} = γ([[P ]]) is a (much) stronger property than α({[P ]}) = [[P ]].
If {[P ]} = γ([[P ]]) is satisfied, the analysis captures exactly the semantics of
P without any approximation; we say that the analysis is exact for P . Every
nontrivial abstraction loses information and therefore no interesting analysis is
exact for all programs.8 (Still, the property may hold for some programs.)
Having established a notion of analysis completeness (and a less interesting
notion of analysis exactness), we proceed by defining a notion of completeness of
the individual abstractions used in an analysis, to understand where imprecision
may arise.
As in the preceding sections, assume L1 and L1′ are lattices used in a collecting semantics and L2 and L2′ are lattices used for an analysis. Let α : L1 → L2 and α′ : L1′ → L2′ be abstraction functions, and let γ : L2 → L1 and γ′ : L2′ → L1′ be concretization functions such that α, γ, α′, and γ′ form two Galois connections. Consider two functions cg : L1 → L1′ and ag : L2 → L2′. We say that ag is a complete abstraction of cg if ag ∘ α ⊑ α′ ∘ cg. (Compare this with the definition of soundness of abstractions from page 140.)
Again, let us consider sign analysis as example. In Section 11.4 we saw that abstract multiplication is the optimal abstraction of concrete multiplication; it is moreover a complete abstraction:
αa(D1) ·̂ αa(D2) ⊑ αa(D1 · D2)
for any D1, D2 ⊆ Z. This tells us that the analysis, perhaps surprisingly, never loses any precision at multiplications.
8 These observations show that we could in principle have chosen to define the concept of completeness using the concretization function γ instead of using the abstraction function α, but that would have been much less useful.
For abstract addition, the situation is different, as shown in the next exercise.
Exercise 11.37: For which of the operators -, /, >, and == is sign analysis
complete? What about input and output E?
Exercise 11.38: In sign analysis, is the analysis constraint function for assign-
ments af X=E a complete abstraction of the corresponding semantic constraint
function cf X=E , given that E is an expression for which eval is complete?
For abstractions that are sound, completeness implies optimality (but not
vice versa, cf. exercises 11.30 and 11.36):
Exercise 11.39: Prove that if ag is sound and complete with respect to cg, it is
also optimal.
We have seen in Section 11.4 that for any Galois connection, there exists an
optimal (albeit often non-computable) abstraction of every concrete operation.
The same does not hold for soundness and completeness, as shown in the
following exercise.
Exercise 11.40: Prove that there exists a sound and complete abstraction ag
of a given concrete operation cg if and only if the optimal abstraction of cg is
sound and complete.
(We have seen examples of abstractions that are optimal but not sound and
complete, so this result implies that sound and complete abstractions do not
always exist.)
Exercise 11.41: Prove that if af is sound and complete with respect to cf then
α({[P ]}) = [[P ]], where cf is the semantic constraint function and af is the
analysis constraint function for a given program P , and α is the abstraction
function.
Exercise 11.42: Which of the abstractions used in interval analysis (Section 6.1)
are complete? In particular, is abstract addition complete?
There are not many programs for which our simple sign analysis is complete
and gives a nontrivial analysis result, so to be able to demonstrate how these
observations may be useful, let us modify the analysis to use the following
slightly different lattice instead of the usual Sign lattice from Section 5.1.
⊤
0-   0+
0
⊥
The meaning of the elements is expressed by this concretization function γa:
γa(s) =
  ∅ if s = ⊥
  {0} if s = 0
  {0, 1, 2, 3, . . . } if s = 0+
  {0, −1, −2, −3, . . . } if s = 0-
  Z if s = ⊤
From Exercise 11.22 we know that an abstraction function αa exists such that αa and γa form a Galois connection.
Exercise 11.44: Give a definition of eval that is optimal for expressions of the
form t * t where t is any program variable (recall Exercise 11.31).
The remaining abstract operators can be defined similarly, and the rest of
the eval function and other analysis constraints can be reused from the ordinary
sign analysis.
The modified sign analysis concludes that output is 0+ for the following
small program:
x1 = input;
x2 = input;
y1 = x1 * x1;
y2 = x2 * x2;
output y1 + y2;
Exercise 11.45: Explain why the modified sign analysis is sound and complete
for this program.
Assume the analysis is built to raise an alarm if the output of the analyzed
program is a negative value for some input. In this case, it will not raise an
alarm for this program, and because we know the analysis is sound (over-
approximating all possible behaviors of the program), this must be the correct
result. Now assume instead that the analysis is built to raise an alarm if the
output of the analyzed program is a positive value for some input. In this case,
it does raise an alarm, and because we know the analysis is complete for this
program (though not for all programs), this is again the correct result – there
must exist an execution of the program that outputs a positive value; we can
trust that the alarm is not a false positive.
Exercise 11.46: Design a relational sign analysis that is sound and complete
for the three-line program from page 145.
Exercises 11.42 and 11.46 might suggest that increasing analysis precision
generally makes an analysis complete for more programs, but that is not the
case: The trivial analysis that uses a one-element analysis lattice is sound and
complete for all programs, but it is obviously useless because its abstraction
discards all information about any analyzed program.
For further discussion about the notion of completeness, see Giacobazzi et
al. [GRS00, GLR15].
11.6 Trace Semantics

The reachable states collecting semantics from Section 11.1 describes, for every program point, the set of states that are possible when program execution reaches that point for some input.
have seen, this precisely captures the meaning of TIP programs in a way that
allows us to prove soundness of, for example, sign analysis. For other analyses,
however, the reachable states collecting semantics is insufficient because it does
not capture all the information about how TIP programs execute. As a trivial
example, for the program
main(x) {
return x;
}
the reachable states collecting semantics will only tell us that the set of states at
the function entry and the set of states at the function exit are both
{[x ↦ z] | z ∈ Z}, but it does not tell us that the return value is always the
same as the input value. In other words, the reachable states collecting semantics does not provide information about how one state at a program point is
related to states at other program points. To capture such aspects of TIP program
execution, we can instead use a trace semantics that expresses the meaning of a
TIP program as the set of traces that can appear when the program runs. A trace
is a finite sequence of pairs of program points (represented as CFG nodes) and
states:9
Traces = (Nodes × ConcreteStates)∗
We first define the semantics of single CFG nodes as functions from concrete states to sets of concrete states (as concrete counterparts to the transfer functions from Section 5.10): ct_v : ConcreteStates → 2^ConcreteStates.
For assignment nodes X = E, ct_v can be defined as follows:
ct_v(ρ) = {ρ[X ↦ z] | z ∈ ceval(ρ, E)}
The semantics of variable declaration nodes can be defined similarly. All other kinds of nodes do not change the state:
ct_v(ρ) = {ρ}
For the one-line program shown above, every trace binds x to the same value at all nodes, which contains the information that the value of x is the same at the entry and the exit, in any execution.
Exercise 11.47: What is the trace semantics of the program from page 126?
The trace semantics is related to the reachable states collecting semantics by an abstraction function
αt : 2^Traces → (2^ConcreteStates)^n
defined by
αt(T) = (R1, . . . , Rn)
where Ri = {ρ | · · · (vi, ρ) · · · ∈ T} for each i = 1, . . . , n.
Intuitively, given a set of traces, αt simply extracts the set of reachable states for
each CFG node.
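A Python sketch of this extraction, with traces represented as sequences of (node, state) pairs and states frozen so that they can be stored in sets:

def alpha_t(traces, nodes):
    reachable = {v: set() for v in nodes}
    for trace in traces:
        for (v, rho) in trace:
            reachable[v].add(frozenset(rho.items()))
    return reachable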
The set 2^Traces forms a powerset lattice, and αt is continuous, so by Exercise 11.20 we know that a concretization function γt exists such that αt and γt form a Galois connection.
Exercise 11.48: Show that αt (as defined above) is indeed continuous.
The existence of this Galois connection shows that the domain of the reach-
able states collecting semantics is in some sense an abstraction of the domain of
the trace semantics.
The following exercise shows that composition of Galois connections leads
to new Galois connections.
Exercise 11.49: Let α1 : L1 → L2 , γ1 : L2 → L1 , α2 : L2 → L3 , and γ2 : L3 →
L2 . Assume both (α1 , γ1 ) and (α2 , γ2 ) are Galois connections. Prove that
(α2 ◦ α1 , γ1 ◦ γ2 ) is then also a Galois connection.
In particular, composing (αt, γt) with (αc, γc) yields a Galois connection (αc ∘ αt, γt ∘ γc) that relates the trace semantics domain directly to the analysis domain.
Exercise 11.50: Prove that the reachable states collecting semantics is sound
with respect to the trace semantics. (Even though the collecting semantics is
not a computable analysis, we can still apply the notion of soundness and the
proof techniques from Section 11.3.)
Hint: You need a variant of the soundness theorem from page 141 that works
for infinite-height lattices.
Exercise 11.51: Use the approach from Section 11.3 to prove that the reaching
definitions analysis from Section 5.7 is sound. As part of this, you need to
specify an appropriate collecting semantics that formally captures what we
mean by an assignment being a “reaching definition” at a given program
point (see the informal definition in Section 5.7).
Exercise 11.52: Use the approach from Section 11.3 to prove that the available
expressions analysis from Section 5.5 is sound. (This is more tricky than
Exercise 11.51, because available expressions analysis is a “must” analysis!)
Exercise 11.53: Use the approach from Section 11.3 to prove that the live
variables analysis from Section 5.4 is sound. As part of this, you need to
specify an appropriate collecting semantics that formally captures what it
means for a variable to be live (see the informal definition in Section 5.4).
(This is more tricky than Exercise 11.51, because live variables analysis is a
“backward” analysis!)
Exercise 11.54: Investigate for some of the abstractions used in analyses pre-
sented in the preceding chapters (for example, live variables analysis or reach-
ing definitions analysis) whether or not they are optimal and/or complete.
Bibliography
[And94] Lars Ole Andersen. Program analysis and specialization for the C programming language. PhD thesis, University of Copenhagen, 1994.
[BR02] Thomas Ball and Sriram K. Rajamani. The SLAM project: debugging system software via static analysis. In Conference Record of POPL 2002: The 29th SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Portland, OR, USA, January 16-18, 2002, pages 1–3. ACM, 2002.
[Wan87] Mitchell Wand. A simple algorithm and proof for type inference.
Fundamenta Informaticae, 10(2):115–121, 1987.