
A Formal Basis for the Specification of Concurrent Systems

Notes for the NATO Advanced Study Institute, Izmir, Turkey
Leslie Lamport
June 26, 2000
Contents

1 Describing Complete Systems
  1.1 Systems as Sets of Behaviors
    1.1.1 Behaviors
    1.1.2 Behavioral Semantics
    1.1.3 Specifying Behaviors with Axioms: A Simple Approach
    1.1.4 Specifying Behaviors with Axioms: The Right Way
    1.1.5 Concurrent Programs
    1.1.6 Programs as Axioms
  1.2 Correctness of an Implementation
  1.3 The Formal Description of Systems
    1.3.1 Specifying States and Actions
    1.3.2 Specifying Behaviors
    1.3.3 Completeness of the Method
    1.3.4 Programs as Axioms
  1.4 Implementing One System with Another
    1.4.1 The Formal Definition
    1.4.2 The Definition in Terms of Axioms

2 Specification
  2.1 The Axioms
  2.2 The Interface
  2.3 State Functions
    2.3.1 The Module's State Functions
    2.3.2 Interface and Internal State Functions
    2.3.3 Aliasing and Orthogonality
  2.4 Axioms
    2.4.1 Concepts
    2.4.2 Notation
    2.4.3 Formal Interpretation
  2.5 The Composition of Modules
  2.6 The Correctness of an Implementation

1 Describing Complete Systems
1.1 Systems as Sets of Behaviors
We are ultimately interested in specifying systems, usually systems that are to be implemented as concurrent programs. We cannot claim to have written a formal specification of such a system unless we can formally state what it means for a program to satisfy the specification. Such a statement requires a formal semantics for programs, that is, the assignment to every program of some mathematical object that denotes the meaning of the program. We therefore begin with an informal sketch of a formal semantics for concurrent programs.

We will ignore many interesting aspects of program semantics, including the specification of most language constructs and the issue of compositionality. These questions are addressed in [6].
1.1.1 Behaviors
While sequential programs can often be described in terms of their input and output, concurrent programs must be described in terms of their behavior. A behavior, also called an execution sequence, is a sequence (finite or infinite)

    s₀ -α₁-> s₁ -α₂-> s₂ -α₃-> s₃ . . .

where the sᵢ are states and the αᵢ are atomic operations. We claim, but will not attempt to justify this claim here, that the behavior of every discrete system, be it hardware or software, can be formally represented as such a sequence.

A triple s -α-> t is called an α transition; s is called the initial state and t is called the final state of the transition. The above behavior can be viewed as a sequence of transitions sᵢ₋₁ -αᵢ-> sᵢ such that the final state of each transition is the initial state of the next transition.
As an example, we consider the simple program of Figure 1, where x and y are assumed to be integer-valued variables. The angle brackets indicate that each assignment statement is a single atomic operation. The statements are labeled α and β, and the control point following statement β is labeled γ.

    α: ⟨ y := 7 ⟩;
    β: ⟨ x := x² + y ⟩
    γ:

    Figure 1: A simple program.

The state of the program consists of an assignment of values to the variables x and y and an assignment of one of the three values α, β, or γ to the program counter pc that defines the current locus of control. Let (x = 4, y = 15, pc = α) denote the state in which the value of x is 4, the value of y is 15, and control is right at the beginning of statement α. One possible behavior of this program is

    (x = 2, y = 128, pc = α) -α-> (x = 2, y = 7, pc = β) -β-> (x = 11, y = 7, pc = γ)

In fact, the set of all sequences of the form

    (x = x₀, y = y₀, pc = α) -α-> (x = x₀, y = 7, pc = β) -β-> (x = x₀² + 7, y = 7, pc = γ)

for arbitrary integers x₀ and y₀, are possible execution sequences of the program. (The value of an integer variable is assumed to be a mathematical integer, which can be arbitrarily large.) One might expect these to be the only possible behaviors of this program, but it is convenient to allow certain others that are described below.
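Read constructively, Figure 1 is a recipe for producing exactly these sequences. The following minimal sketch (Python, which is not the notation of these notes; the dictionary encoding of states and the function name are assumptions made only for illustration) builds the nonstuttering behavior generated from given initial values x0 and y0.

    # A state is a dictionary giving values to x, y, and pc; a behavior is a list
    # s0, a1, s1, a2, s2, ... alternating states and atomic-operation labels.
    def behavior_of_figure1(x0, y0):
        """The nonstuttering behavior of Figure 1 started with x = x0 and y = y0."""
        s0 = {"x": x0, "y": y0, "pc": "alpha"}
        s1 = {"x": x0, "y": 7, "pc": "beta"}              # alpha: y := 7
        s2 = {"x": x0 ** 2 + 7, "y": 7, "pc": "gamma"}    # beta:  x := x**2 + y
        return [s0, "alpha", s1, "beta", s2]

    print(behavior_of_figure1(2, 128))   # the example behavior shown above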
In this example, the atomic-operation labels on the arrows in the execution sequence are redundant: just looking at the sequence of states allows us to fill in the arrow labels. In fact, this is true for any reasonable sequential or concurrent program. Labeling the transition with the action makes it easy to formalize the notion of who is performing an action. For now, these labels can be regarded as a harmless bit of redundancy.

No significance should be attached to the use of the same letters (α and β) to denote both atomic operations (arrow labels) and control point values (values of pc). We simply find it more convenient to overload these symbols than to introduce an extra set of program labels.
1.1.2 Behavioral Semantics
We define a semantics in which the meaning of a program is a set of behaviors: the set of all possible behaviors that the program is allowed to exhibit.

There are usually considered to be two general approaches to describing a set of behaviors: constructive (or operational) and axiomatic. In the constructive approach, one gives rules for generating sequences. One can view the program of Figure 1 as a constructive specification by considering it to be a method for generating a behavior given initial values for x and y. The semantics of the program is the set of all sequences generated by this method starting from arbitrary integer values for x and y. In the axiomatic approach, one describes the set of behaviors by a collection of axioms. The meaning of the program is the set of all sequences that satisfy the axioms.

Formal mathematics is ultimately reducible to axiomatic reasoning, and a constructive method rests upon axioms. The distinction between constructive and axiomatic specifications is therefore illusory. When one formally describes a constructive method, it becomes axiomatic. However, there are axiomatic methods that are nonconstructive, that is, ones for which there is no clearly operational way to describe the set of behaviors that they define. Thus, constructive methods are really a special class of axiomatic ones.

Classifying constructive methods as a special case of axiomatic ones may seem like a dubious bit of reductionism, obscuring an essential distinction. We hope that the semantics described below will serve to counter that objection. It is certainly axiomatic, since every specification can be written as a formula in a formal logical system. However, most of the axioms will be written in a distinctly operational way.
1.1.3 Specifying Behaviors with Axioms: A Simple Approach
To provide an axiomatic specification of the above set of behaviors for the program of Figure 1, we first observe that this set of behaviors is determined by the following four separate rules:

The initial state has pc = α.

When pc = α, the next transition is an α transition that sets y to 7 and sets pc to β.

When pc = β, the next transition is a β transition that sets x to x² + y and sets pc to γ.

When pc = γ, no more transitions can occur.

The four rules are informal axioms that specify the set of behaviors for the program. To turn them into a true axiomatic specification, they must be expressed in some formal system. Temporal logic is an excellent formal system for this purpose. We will not give a formal description of temporal logic here, and will write axioms using somewhat stilted English. All the properties in our specifications that are expressed informally in English can be translated into a temporal logic formula by anyone well versed in the temporal logic described in the appendix of [7].
The above four rules can be rewritten more precisely as the following temporal assertions.

Initial Axiom: In the starting state, pc = α and x and y have integer values.

α Transition Axiom: It is always the case that if pc = α and x = x₀, then the next action is labeled α and, in the next state, pc = β, x = x₀, and y = 7.

β Transition Axiom: It is always the case that if pc = β, x = x₀, and y = y₀, then the next action is labeled β and, in the next state, pc = γ, x = x₀² + y₀, and y = y₀.

Termination Axiom: It is always the case that if pc = γ, then there is no next state.

If we add the assumption that the state is specified by the values of x, y, and pc, then the set of execution sequences that satisfy these four axioms is precisely the set of sequences described above.
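Viewed as a filter, these axioms can be checked mechanically against any finite behavior. The sketch below (Python again; the list representation of a behavior is an assumption carried over from the previous sketch) accepts exactly the sequences just described.

    def satisfies_simple_axioms(behavior):
        """Check the Initial, alpha, beta, and Termination axioms on a finite
        behavior [s0, a1, s1, a2, s2, ...]."""
        states, actions = behavior[0::2], behavior[1::2]
        s0 = states[0]
        if not (s0["pc"] == "alpha" and isinstance(s0["x"], int) and isinstance(s0["y"], int)):
            return False                                   # Initial Axiom
        for s, a, t in zip(states, actions, states[1:]):
            if s["pc"] == "alpha":                         # alpha Transition Axiom
                if not (a == "alpha" and t == {"x": s["x"], "y": 7, "pc": "beta"}):
                    return False
            elif s["pc"] == "beta":                        # beta Transition Axiom
                if not (a == "beta" and
                        t == {"x": s["x"] ** 2 + s["y"], "y": s["y"], "pc": "gamma"}):
                    return False
            else:                                          # Termination Axiom
                return False                               # no transition after pc = gamma
        return True

    b = [{"x": 2, "y": 128, "pc": "alpha"}, "alpha",
         {"x": 2, "y": 7, "pc": "beta"}, "beta",
         {"x": 11, "y": 7, "pc": "gamma"}]
    print(satisfies_simple_axioms(b))   # True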
This is the obvious method of writing a temporal logic specification of the set of execution sequences described above. However, it turns out that this is the wrong way to do it. The use of the "in the next state" temporal operator raises serious difficulties in defining what it means to implement the program of Figure 1 by a lower-level program. Instead, we now describe a less obvious method that does not use this temporal operator.

Note that the "next action" operator causes no problem and will be used. The temporal logic of [7] can express the concept of the next action but not the concept of the next state.
1.1.4 Specifying Behaviors with Axioms: The Right Way
In the axiomatic approach, one usually specifies a set of behaviors by writing a list of properties, which specifies the set of all behaviors that satisfy all the properties. It is convenient to distinguish two types of properties: safety and liveness. Intuitively, a safety property asserts that something (presumably bad) will not happen, while a liveness property asserts that something (presumably good) will eventually happen. (A formal characterization of these properties can be found in [1].)

In specifying the set of behaviors for the program of Figure 1, safety properties assert that the program does not perform an incorrect action; for example, a safety axiom would rule out execution sequences whose first atomic action sets the value of y to 13, or changes the value of x. Liveness properties assert that the program does eventually make progress, unless it has reached its halting state. For example, a liveness axiom would assert that if control is at α, then eventually control will be at β.

In place of the "in the next state" temporal operator, we use the "until" operator, where A until B means that A remains true at least until the next time that B becomes true.
In our next attempt at specifying the program of Figure 1, we use the same initial and termination axiom as before, but, for each atomic operation, we have a pair of axioms, one for safety and one for liveness.

α Transition Axiom: (Safety) It is always the case that if pc = α, x = x₀, and y = y₀, then the next action is labeled α and pc = α, x = x₀, and y = y₀ remain true until pc = β, x = x₀, and y = 7.

(Liveness) It is always the case that if pc = α then eventually pc ≠ α.

β Transition Axiom: (Safety) It is always the case that if pc = β, x = x₀, and y = y₀, then the next action is labeled β and pc = β, x = x₀, and y = y₀ remain true until pc = γ, x = x₀² + y₀, and y = y₀.

(Liveness) It is always the case that if pc = β then eventually pc ≠ β.
Note the general pattern: instead of saying that if A holds then B holds in the next state, we say that A holds until B does (a safety property) and that if A holds then eventually A will cease to hold (a liveness property).

All the execution sequences described above for the program of Figure 1 satisfy these axioms. However, there are additional sequences that also satisfy the axioms. For example, the axiom for statement α states that, starting in a state (x = x₀, y = y₀, pc = α), the next arrow must be labeled α, and the only way that the state can change is for the next state to become (x = x₀, y = 7, pc = β). However, it does not rule out stuttering actions labeled α that leave the state unchanged.
Assuming that the state is completely determined by the values of x, y, and pc, the above axioms specify the set of all execution sequences starting in state (x = x₀, y = y₀, pc = α), for arbitrary integers x₀ and y₀, followed by a finite number (possibly zero) of actions -α-> (x = x₀, y = y₀, pc = α), followed by an action -α-> (x = x₀, y = 7, pc = β), followed by a finite number of actions -β-> (x = x₀, y = 7, pc = β), followed by an action -β-> (x = x₀² + 7, y = 7, pc = γ).
The extra stuttering actions allowed by this specification may seem burdensome. We shall see later that, on the contrary, they are the key to the proper definition of what it means to implement a program with a lower-level program. In fact, the "in the next state" temporal operator is bad precisely because it allows one to specify that there is no stuttering.

Although they may seem strange, there is no reason not to allow stuttering actions. The state includes all visible information about what the program is doing. An atomic action that does not change the state has no visible effect, and an action with no visible effect must be harmless.

Since stuttering actions are harmless, there is no reason not to allow behaviors of the program of Figure 1 to have extra stuttering actions labeled λ at the end. In fact, it turns out to be useful to assume that, instead of the execution sequences being finite, they all end with an infinite sequence of λ actions that do not change the state. Again, this creates no problems because there is no way to distinguish a program that has halted from one that is continually doing nothing. (Indeed, a halt instruction does not turn off a computer; it causes the computer to cycle endlessly, doing nothing.)
We now rewrite the above specification to require that execution sequences end with an infinite sequence of stuttering actions. This requires adding an axiom for the λ action. We also rewrite the axioms for the α and β actions in an even more baroque form that leads to the proper generalization for concurrent programs. To make the axioms easier to understand, we express them in a form that mentions the next state, although explicitly allowing stuttering actions. These axioms can be expressed in terms of the "until" operator without the "next state" operator, but doing so results in rather convoluted assertions. (In fact, the major problem with the "until" operator is that it leads to formulas that are hard to understand.) The same initialization axiom is used, but the Termination Axiom is replaced by the Completion Axiom.

α Transition Axiom: (Safety) It is always the case that, if the next action is labeled α, then pc = α (in the current state) and the next state is either unchanged or else has pc set equal to β, y set equal to 7, and all other variables unchanged.

(Liveness) It is always the case that, if there are infinitely many α actions, then eventually pc ≠ α.

β Transition Axiom: (Safety) It is always the case that, if x = x₀, y = y₀ (in the current state) and the next action is labeled β, then pc = β and the next state is either unchanged or else has pc changed to γ, x set to x₀² + y₀, and all other variables unchanged.

(Liveness) It is always the case that, if there are infinitely many β actions, then eventually pc ≠ β.

λ Transition Axiom: (Safety) It is always the case that, if the next action is labeled λ, then pc = γ in the current state and the next state is unchanged.

Completion Axiom: It is always the case that the next action is labeled either α, β, or λ.

Note that there is no liveness axiom for the λ action, and the Completion Axiom expresses a safety property.
Most readers will not find it immediately obvious that this set of axioms specifies the same set of execution sequences described above; we now show that it does. The initial axiom implies that the first state has pc = α. The Completion Axiom implies that the first action must be labeled α, β, or λ. However, the other axioms imply that only an α action can occur when pc = α. Hence, the first action must be an α action, which can either leave the state unchanged or else set y to 7. The liveness axiom for α implies that there cannot be an infinite sequence of α actions that leave the state unchanged, because that would mean that there would be an infinite number of α actions and pc would remain forever equal to α, contrary to the axiom. Hence, there can be some finite number of α actions that do not change the state, but they must be followed by an α action that sets y to 7 and sets pc to β. Similar reasoning shows that there must then be a finite number (possibly zero) of β actions that leave the state unchanged followed by one that sets x to x₀² + 7 and sets pc to γ. Finally, the Completion Axiom implies that there must be an infinite number of actions (since there is always a next one), and the only action possible when pc = γ is a λ action, so the sequence must end with an infinite sequence of λ actions that leave the state unchanged.
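The safety and completion parts of these axioms are again mechanically checkable on finite prefixes; only the liveness parts, which constrain what must happen in the infinite limit, are not. A sketch under the same representational assumptions as before:

    def satisfies_safety(prefix):
        """Check the safety and Completion axioms of Figure 1 on a finite prefix
        [s0, a1, s1, ...]; stuttering steps are permitted."""
        states, actions = prefix[0::2], prefix[1::2]
        if states[0]["pc"] != "alpha":
            return False                                  # control part of the Initial Axiom
        for s, a, t in zip(states, actions, states[1:]):
            if a == "alpha":
                ok = s["pc"] == "alpha" and (t == s or t == {"x": s["x"], "y": 7, "pc": "beta"})
            elif a == "beta":
                ok = s["pc"] == "beta" and (
                    t == s or t == {"x": s["x"] ** 2 + s["y"], "y": s["y"], "pc": "gamma"})
            elif a == "lambda":
                ok = s["pc"] == "gamma" and t == s        # the halting (lambda) axiom
            else:
                ok = False                                # Completion Axiom
            if not ok:
                return False
        return True

    stuttering = [{"x": 2, "y": 128, "pc": "alpha"}, "alpha",   # a stuttering alpha step
                  {"x": 2, "y": 128, "pc": "alpha"}, "alpha",
                  {"x": 2, "y": 7, "pc": "beta"}]
    print(satisfies_safety(stuttering))   # True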
    cobegin
       α: ⟨ y := 7 ⟩;
       β: ⟨ x := x² + y ⟩
       γ:

       while δ: ⟨ x ≠ 7 ⟩
          do ε: ⟨ x := x + z ⟩ od
       ζ:
    coend
    λ:

    Figure 2: A simple concurrent program.
1.1.5 Concurrent Programs
Thus far, we have considered only a sequential program. Let us now turn to the simple concurrent program of Figure 2. In it, the program of Figure 1 is one process in a two-process program. A state of this program is an assignment of values to the integer variables x, y, and z, and to the program counter pc, which denotes the control state of the two processes, or else equals λ when both processes have halted. Let (x = 7, y = 2, z = 4, pc = (β, δ)) denote the state that assigns the value 7 to x, 2 to y, 4 to z, and in which control in the first process is at β and control in the second process is at δ. Let pc₁ and pc₂ denote the two components of pc, so, in this state, pc₁ = β and pc₂ = δ. For notational convenience, let (γ, ζ) = λ, so if pc = λ then pc₁ = γ and pc₂ = ζ.
An execution sequence of this program consists of an interleaving of atomic operations from the two processes. One such execution begins

    (x = 0, y = 1, z = 5, pc = (α, δ))
    -δ-> (x = 0, y = 1, z = 5, pc = (α, ε))
    -α-> (x = 0, y = 1, z = 5, pc = (α, ε))
    -α-> (x = 0, y = 7, z = 5, pc = (β, ε))
    -β-> (x = 7, y = 7, z = 5, pc = (γ, ε))
    -ε-> (x = 12, y = 7, z = 5, pc = (γ, δ))
    -δ-> (x = 12, y = 7, z = 5, pc = (γ, ε))
    -ε-> (x = 12, y = 7, z = 5, pc = (γ, ε))
    -ε-> (x = 17, y = 7, z = 5, pc = (γ, δ))
    -δ-> . . .

and continues with the second process cycling forever through δ and ε actions (some of which may be stuttering actions).

This program also permits halting executions, which occur if statement δ happens to be executed when x has the value 7. Such execution sequences end with an infinite string of stuttering λ actions.
We specify the set of behaviors of this program with the following axioms. For convenience, pc₁ and pc₂ are considered to be variables when asserting that no other variables are changed. Note that the axioms for α and β are almost the same as before, the only change being the substitution of pc₁ for pc.
Initial Axiom: In the starting state, pc = (α, δ) and x, y, and z have integer values.

α Transition Axiom: (Safety) It is always the case that, if the next action is labeled α, then pc₁ = α (in the current state) and the next state is either unchanged or else has pc₁ set equal to β, y set equal to 7, and all other variables unchanged.

(Liveness) It is always the case that, if there are infinitely many α actions, then eventually pc₁ ≠ α.

β Transition Axiom: (Safety) It is always the case that, if x = x₀, y = y₀ (in the current state) and the next action is labeled β, then pc₁ = β and the next state is either unchanged or else has pc₁ set to γ, x set to x₀² + y₀, and all other variables unchanged.

(Liveness) It is always the case that, if there are infinitely many β actions, then eventually pc₁ ≠ β.

δ Transition Axiom: (Safety) It is always the case that, if the next action is labeled δ, then pc₂ = δ (in the current state) and the next state is either unchanged or else only the value of pc₂ is changed, and its value in the next state equals ε if x ≠ 7 in the current state and equals ζ if x = 7 in the current state.

(Liveness) It is always the case that, if there are infinitely many δ actions, then eventually pc₂ ≠ δ.

ε Transition Axiom: (Safety) It is always the case that, if x = x₀ and z = z₀ (in the current state) and the next action is labeled ε, then pc₂ = ε and the next state is either unchanged or else has x set to x₀ + z₀, pc₂ changed to δ, and all other variables unchanged.

(Liveness) It is always the case that, if there are infinitely many ε actions, then eventually pc₂ ≠ ε.

λ Transition Axiom: (Safety) It is always the case that, if the next action is labeled λ, then pc = λ in the current state and the next state is unchanged.

Completion Axiom: It is always the case that the next action is labeled either α, β, δ, ε, or λ.

Note that there are no γ or ζ actions; when one process has terminated, all the actions are generated by the other process until it halts, at which time only stuttering λ actions occur.
The reader can check that these axioms permit all the execution sequences we expect. They also guarantee that nonstuttering actions keep happening unless the program halts, that is, until pc equals λ. However, they do not rule out the possibility that the second process loops forever and the first process never terminates. A cobegin statement whose semantics allows one process to be starved in this way is called an unfair cobegin. An alternative semantics for cobegin rules out that possibility, defining a fair cobegin. To specify the set of behaviors allowed by a fair cobegin, we must add another axiom. Let us say that control is in the first process if pc₁ equals α or β, and that control is in the second process if pc₂ equals δ or ε. Let us also say that α and β are the atomic operations of the first process, and that δ and ε are the atomic operations of the second process. Fairness is expressed by the following axiom:
Fairness Axiom: For each process of the cobegin, if it is always the
case that control is in the process then eventually there will be an
action labeled with some atomic operation of the process.
The Fairness Axiom ensures that the first process must terminate in any execution sequence of the program of Figure 2. To see this, observe that initially pc₁ = α, and the Completion Axiom together with the safety axioms for all the actions imply that pc₁ must then remain equal to α unless an α action changes it. We show by contradiction that pc₁ cannot remain forever equal to α. Assume the contrary. Then the Fairness Axiom implies that it is always the case that there will eventually be an action of the first process, which means that there must be infinitely many actions of the first process. Since pc₁ always equals α, the transition axioms imply that the only actions of the first process that can occur are α actions, so there must be infinitely many α actions. The liveness axiom for α then implies that pc₁ must eventually become unequal to α, which is the required contradiction. Thus, we have proved that there must eventually be an α action that changes the value of pc₁. By the safety part of the α Transition Axiom, pc₁ can be changed only to β. Hence, we must eventually have pc₁ = β. A similar argument then shows that pc₁ must eventually equal γ.
This reasoning may seem rather convoluted, but one expects proving properties of a program directly from an axiomatic semantics to be a long-winded affair. We could have written the axioms in a somewhat more straightforward way that would have simplified this proof. However, writing the axioms the way we did should make it clear how one writes similar axioms for any program built from simple sequential constructs and unfair cobegins. There is an axiom describing the initial state, a safety and a liveness axiom for each atomic operation, a safety axiom for the stuttering action that represents termination (not necessary if termination is impossible), and a completion axiom saying that the next action is always one of the program's atomic actions.

When a program contains fair cobegins, an additional fairness axiom is needed for each fair cobegin. If all cobegins are assumed to be fair, then the fairness axiom given above can be used. If a fair cobegin can be nested inside an unfair one, then the following axiom defines the most natural semantics.

Fairness Axiom: If infinitely many actions are labeled with the atomic operations of the cobegin, then, for each process of the cobegin, it is always the case that if control is in the process then eventually there will be an action labeled with some atomic operation of the process.

This axiom asserts that no process in the cobegin is starved unless the entire cobegin is starved.
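The operational difference between the unfair and the fair cobegin is easy to see by simulation. In the sketch below (Python; the step function, state encoding, and scheduler streams are illustrative assumptions, not part of the formalism), a scheduler that only ever runs the second process starves the first, which is exactly the situation the Fairness Axiom forbids, while a round-robin scheduler lets the first process terminate.

    import itertools

    def step(state, process):
        """Execute one enabled atomic operation of the given process of Figure 2."""
        s = dict(state)
        if process == 1:
            if s["pc1"] == "alpha":
                s["y"], s["pc1"] = 7, "beta"
            elif s["pc1"] == "beta":
                s["x"], s["pc1"] = s["x"] ** 2 + s["y"], "gamma"
        else:
            if s["pc2"] == "delta":
                s["pc2"] = "zeta" if s["x"] == 7 else "epsilon"
            elif s["pc2"] == "epsilon":
                s["x"], s["pc2"] = s["x"] + s["z"], "delta"
        return s

    def run(scheduler, steps=12):
        s = {"x": 0, "y": 1, "z": 5, "pc1": "alpha", "pc2": "delta"}
        for p in itertools.islice(scheduler, steps):
            s = step(s, p)
        return s

    unfair = itertools.repeat(2)       # always schedule the second process
    fair = itertools.cycle([1, 2])     # round-robin over both processes
    print(run(unfair)["pc1"])          # 'alpha': the first process is starved
    print(run(fair)["pc1"])            # 'gamma': the first process has terminated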
1.1.6 Programs as Axioms
One normally thinks of the program of Figure 2 as a machine for generating execution sequences; in other words, we base our understanding of the program upon an intuitive constructive semantics. The above discussion showed that we can represent our intuitive understanding (modified to allow stuttering actions) by a set of axioms. Thus, the program can equally well be thought of as an axiomatic description of a set of behaviors.

We ask the reader to change his way of thinking, and to regard the program in just this way: as an axiomatic specification of a set of execution sequences, not as a machine for generating behaviors. Pretend that someone has built a machine for generating execution sequences. We will use the program of Figure 2 to determine if this machine is correct, where correctness means that every execution sequence that it generates is in the set specified by this program. Instead of thinking of the program as a mechanism for generating behaviors, think of it as a filter for outlawing incorrect behaviors.

Of course, this way of thinking has no formal significance; formally, all that we have is a correspondence between the program text and the set of behaviors. However, thinking of a program as an axiomatic specification of its set of possible behaviors is a crucial step in understanding hierarchical specification.
1.2 Correctness of an Implementation
Let us now consider what it means for one program to correctly implement another. For our example, we return to the simple sequential program of Figure 1 and consider its implementation with an assembly language program. For simplicity, assume that the assembly language implements infinitely large integers.

The program of Figure 3 is an implementation that might be produced by a very stupid compiler for a computer with a single accumulator. (There are two unnecessary instructions.) The assembly language variables X and Y implement the variables x and y of the higher-level program of Figure 1, statement α is implemented by the instructions at locations a and a + 1, and statement β is implemented by the instructions at locations b through b + 5. (We assume that each instruction occupies a single memory location.)

    a: ⟨ load 7 ⟩
       ⟨ store Y ⟩
    b: ⟨ load X ⟩
       ⟨ multiply by X ⟩
       ⟨ store X ⟩
       ⟨ load Y ⟩
       ⟨ add X ⟩
       ⟨ store X ⟩
    g:

    Figure 3: Implementation of the program of Figure 1.

Formally, the programs of Figures 1 and 3 specify sets of execution sequences. We will not bother writing the axioms that define the set of behaviors of the assembly language program. Suffice it to say that the state consists of the values of the variables X and Y, the value of the accumulator, and the value of the program counter PC.
We must define what it means for one set of execution sequences to implement another. The basic idea is that every possible execution sequence of the assembly language program should, when viewed at the higher level, be a possible execution of the higher-level program. What does viewing at the higher level mean? We start with what is perhaps the most obvious approach, and show that it does not work.

First we must interpret the state of the lower-level program in terms of the state components of the higher-level one. The values of the assembly-language variables X and Y represent the values of the higher-level variables x and y. If control in the lower-level program is at a, b, or g, then control in the higher-level one is at α, β, or γ, respectively. But what happens when control is at some other point in the assembly language program, for example, at location b + 3? At that point, X contains an intermediate value of the computation, and its value does not correspond to a valid value of x.

An execution of the higher-level program contains three distinct states, while an execution of the lower-level program contains nine. Three of those nine states, the ones with control at statements a, b, and g, correspond to the three correct states of the higher-level program. The other six states of the lower-level program are intermediate states that do not correspond to any states of the higher-level program. We define an implementation to be correct if we can partition the states of the lower-level program into valid and intermediate states, and define a mapping from valid states to states of the higher-level program such that throwing away the intermediate states and applying the mapping to the remaining sequence of valid states produces a sequence of higher-level states that is a possible behavior of the higher-level program.

This approach is quite natural, and captures the way most people think about implementations. Unfortunately, while it is adequate for sequential programs, it does not work for concurrent ones. In a concurrent program, it is possible for the entire program never to be in a valid state except at the beginning; there could always be some process in an intermediate state. Thus, throwing away the intermediate states leaves us with nothing. Therefore, we must eschew the obvious approach.
The proper approach is more subtle. We cannot throw away any intermediate states because, in a concurrent program, almost all the states could be intermediate ones. If we don't throw away any states, how can an execution of the lower-level program with eight different nonstuttering actions correspond to an execution of the higher-level program that has only three different nonstuttering actions? The answer is clear: five of the nonstuttering lower-level actions must correspond to higher-level stuttering actions. How is this possible when the value of X assumes an intermediate value that is never assumed by the higher-level variable x? The answer to this question is more subtle: the value of x is not defined to simply equal X, but is a more complex function of the lower-level program state.

We will define the variables x and y and the control state pc of the higher-level program as functions of the lower-level program state such that: the execution of statement a corresponds to a nonstuttering high-level action α, the execution of statement a + 1 corresponds to a stuttering action β, the execution of statement b corresponds to a high-level nonstuttering action β, and execution of the remaining statements in the implementation corresponds to stuttering actions λ. This is done as follows:

pc is defined to equal: (i) α if PC = a; (ii) β if PC = a + 1 or PC = b; or (iii) γ if PC > b.

x is defined to equal: (i) X if PC equals a, a + 1, b, or g; (ii) X² + Y if PC equals b + 1 or b + 2; or (iii) X + Y if PC equals b + 3, b + 4, or b + 5.

y is defined to equal Y unless PC = a + 1, in which case it equals 7.

The reader should check that, with these definitions, execution of the lower-level program statements has the higher-level interpretation indicated above; for example, executing the store instruction at b + 2 does not change the value of x, y, or pc, so it is a stuttering action.
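These definitions can be written down executably. The sketch below (Python; the dictionary record layout and the offset-style encoding of PC are assumptions of this note) computes the higher-level state from a lower-level one and confirms that the store at b + 2 is a stuttering step.

    def f_st(low):
        """Map a state of the Figure 3 program, with fields X, Y, ACC, and PC
        (PC encoded as 'a', 'a+1', 'b', 'b+1', ..., 'b+5', 'g'), to a state of Figure 1."""
        X, Y, PC = low["X"], low["Y"], low["PC"]
        pc = "alpha" if PC == "a" else ("beta" if PC in ("a+1", "b") else "gamma")
        if PC in ("a", "a+1", "b", "g"):
            x = X
        elif PC in ("b+1", "b+2"):
            x = X ** 2 + Y
        else:                              # b+3, b+4, b+5
            x = X + Y
        y = 7 if PC == "a+1" else Y
        return {"x": x, "y": y, "pc": pc}

    before = {"X": 2, "Y": 7, "ACC": 4, "PC": "b+2"}   # about to execute <store X>
    after  = {"X": 4, "Y": 7, "ACC": 4, "PC": "b+3"}   # the state it produces
    print(f_st(before) == f_st(after))                 # True: a higher-level stuttering step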
Although this procedure for proving that an assembly language program implements a higher-level program works in this example, it is not obvious that it works in general. In fact, the method works only if every variable of the higher-level program can be represented as a function of the lower-level variables (including the variable PC). This is not always the case. For example, an optimizing compiler could discover that a variable is never used and decide not to implement it, making it impossible to represent that variable's value as a function of the assembly language program's state. In such a case, one must add "dummy variables" to the assembly-language program variables and extra statements that are not actually implemented (and take up no memory), but which provide the additional state information needed to represent the higher-level variables. This is seldom necessary in practice and will not be explained in any more detail.
Let us examine more formally what we have done. We expressed each of the state components x, y, and pc of the higher-level program as a function of the state components X, Y, and PC of the assembly language program, and we expressed each of the actions α, β, and λ of the higher-level program as a set of actions of the lower-level one. This defines several mappings. First, there is a mapping F_st from states of the lower-level program to states of the higher-level one and a mapping F_ac from actions of the lower-level program to actions of the higher-level one. For example, if s is the state (X = 2, Y = 5, PC = b + 1) of the assembly language program, then F_st(s) is the state (x = 9, y = 5, pc = γ) of the higher-level program, and F_ac(a + 1) = β; that is, F_ac maps the action a + 1 of the assembly language program (the action corresponding to the store Y atomic operation) to the action β of the higher-level program. The mappings F_st and F_ac define a mapping F on execution sequences, where F maps an execution sequence

    s₀ -α₁-> s₁ -α₂-> s₂ -α₃-> s₃ . . .

of the assembly language program to the execution sequence

    F_st(s₀) -F_ac(α₁)-> F_st(s₁) -F_ac(α₂)-> F_st(s₂) -F_ac(α₃)-> F_st(s₃) . . .

of the higher-level program. The implementation is correct if, for every execution sequence σ that satisfies the axioms for the assembly language program, the execution F(σ) satisfies the axioms for the higher-level program.
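In the Python sketch used earlier, F is just the element-wise application of F_st to states and F_ac to action labels. The table used for F_ac below is an assumption that matches the correspondence described in this section (a and b are the only lower-level actions mapped to nonstuttering higher-level steps).

    # Assumed F_ac for Figure 3: a maps to alpha; a+1 and b map to beta;
    # the remaining actions map to the stuttering action lambda.
    F_AC = {"a": "alpha", "a+1": "beta", "b": "beta", "b+1": "lambda", "b+2": "lambda",
            "b+3": "lambda", "b+4": "lambda", "b+5": "lambda", "g": "lambda"}

    def F(behavior, f_st, f_ac=F_AC):
        """Map a lower-level behavior [s0, a1, s1, a2, ...] to the higher-level one."""
        return [f_st(item) if i % 2 == 0 else f_ac[item]
                for i, item in enumerate(behavior)]

    # Correctness of the implementation: for every behavior sigma satisfying the
    # assembly-language axioms, F(sigma, f_st) must satisfy the axioms of Figure 1.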
Expressing the state components and actions of the higher-level program as functions of the lower-level program state and actions defines a mapping F* that maps assertions about the higher-level program into assertions about the lower-level program. For example, if A is the assertion (about the state of the higher-level program) that y = y₀, then F*(A) is the assertion (about the state of the assembly language program) obtained by substituting for y its expression as a function of PC and Y, namely, the assertion that if PC = a + 1 then 7 = y₀, else Y = y₀. If B is the assertion that the next action (in an execution of the higher-level program) is labeled β, then F*(B) is the assertion that the next action (in an execution of the lower-level program) is labeled either a + 1 or b.

The mappings F and F* are related as follows. If A is an assertion about execution sequences of the higher-level program, and σ is an execution sequence of the assembly language program, then the assertion A is true of the execution sequence F(σ) if and only if the assertion F*(A) is true of σ.
1.3 The Formal Description of Systems
Let us now abstract the basic method underlying the above example programs. A system is a triple (B, S, A), where S is a set of states, A is a set of actions, and B is a set of behaviors of the form

    s₀ -α₁-> s₁ -α₂-> s₂ -α₃-> s₃ . . .     (1)

with each sᵢ an element of S and each αᵢ an element of A. The set B must also be invariant under stuttering, which means that given any behavior (1) in B, the behavior obtained by replacing sᵢ₋₁ -αᵢ-> sᵢ with sᵢ₋₁ -αᵢ-> sᵢ₋₁ -αᵢ-> sᵢ is also in B.

We now explain how a system is formally described. This involves the formal specification of the sets S and A and of the set of behaviors B.
1.3.1 Specifying States and Actions
Specification of the set A of actions involves simply naming all the actions. In other words, the set A is specified simply by enumerating its elements. This is easy for a finite set. Infinite sets of actions are also possible, and are usually enumerated in parametrized form, e.g., by including a set of actions αᵢ for every positive integer i.

The set S of states is described in terms of state functions, where a state function is a mapping from S to some set of values called its range. In our specification of the program of Figure 1, we used the three state functions x, y, and pc; the range of x and of y was the set of integers, the range of pc was the set {α, β, γ}. In general, the set S is defined by giving a complete collection of state functions f₁, . . . , fₙ. An element s of S is uniquely determined by the n-tuple of values (v₁, . . . , vₙ) such that f₁(s) = v₁, . . . , fₙ(s) = vₙ.

One can further restrict the set S by defining a constraint that limits the possible sets of n-tuples (f₁(s), . . . , fₙ(s)). For example, consider a sequential program with an integer variable u whose scope does not include the entire program. We could define the range of u to consist of the integers together with the special element ⊥ denoting an undefined value, and then require that the value of u be an integer for certain values of pc and that it equal ⊥ for the remaining values of pc.
To express this formally, let Rᵢ be the range of fᵢ. A constraint is a subset C of R₁ × · · · × Rₙ. Given the functions fᵢ and the set C, S is effectively defined by the requirements: (i) for every element s of S, there is a unique element (v₁, . . . , vₙ) in C such that fᵢ(s) = vᵢ; and (ii) for distinct elements s and t of S, the n-tuples (f₁(s), . . . , fₙ(s)) and (f₁(t), . . . , fₙ(t)) are unequal.

A subset C of R₁ × · · · × Rₙ is the same as a boolean-valued function on that set, where the function C is defined by letting C(v₁, . . . , vₙ) equal true if and only if (v₁, . . . , vₙ) is in the set C. A constraint C is usually expressed as the relation C(f₁, . . . , fₙ) among the state functions fᵢ. For example, suppose there are three state functions f₁, f₂, and f₃, where the ranges R₁ and R₂ are the set of integers and R₃ is the set {α, β, γ}. We write [f₁ < f₂] ∨ [(f₃ = γ) ∧ (f₁ = 0)] to mean the subset

    { (v₁, v₂, v₃) : v₁ < v₂ ∨ (v₃ = γ ∧ v₁ = 0) }

of R₁ × R₂ × R₃.
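Viewed computationally, a constraint is just a characteristic function of the allowed tuples. A minimal sketch of the example above (Python; the encoding of the range R₃ as strings is an assumption):

    R3 = ("alpha", "beta", "gamma")   # the range of f3; f1 and f2 range over the integers

    def C(v1, v2, v3):
        """The constraint [f1 < f2] or [(f3 = gamma) and (f1 = 0)] as a boolean function."""
        return v1 < v2 or (v3 == "gamma" and v1 == 0)

    print(C(1, 5, "alpha"))   # True:  f1 < f2
    print(C(0, 0, "gamma"))   # True:  f3 = gamma and f1 = 0
    print(C(3, 1, "beta"))    # False: not in the constraint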
In the three programs considered above, the state functions consisted of the program variables and the program counter. With more complicated language constructs, other state functions may be needed to describe the program state. A program with subroutines requires a state function to record the current value of the stack. A concurrent program that uses message-sending primitives may need state functions that record the contents of message buffers. In general, the state functions must completely describe the current state of the system, specifying everything that is necessary to continue its execution. For a deterministic system, such as the program of Figure 1, the current state completely determines its future behavior. For a nondeterministic system, such as the concurrent program of Figure 2, the current state determines all possible future behavior.

Observe that in giving the state functions and constraint, we are describing the essential properties of the set S, but we are not specifying any particular representation of S. In mathematical terms, we are defining S only up to isomorphism. This little mathematical detail is the formal reason why our specification of the program of Figure 1 does not state whether the variable x is stored in a binary or decimal representation. It doesn't matter whether the elements of S are strings of bits or decimal digits, or even sequences of voltages on flip-flop wires. All that the specification mentions is the value of the state function x, not the structure of the states.
New state functions can be created as combinations of the state functions fᵢ. For example, if f₁ and f₂ are integer-valued state functions, then we can define a new boolean-valued state function f by letting f(s) = (f₁(s) < f₂(s) + 3), for any s in S. There are two ways to view the state function f. We can think of the fᵢ as elementary state functions and f as a derived state function, or we can consider f to have the same status as the fᵢ by adding the condition f = (f₁ < f₂ + 3) to the constraint. Formally, the two views are equivalent. In practice, the first view seems more convenient and will be adopted.
1.3.2 Specifying Behaviors
Having specified the sets S and A, we must now specify the set B of behaviors. The set of behaviors is described formally by a collection of axioms. Five kinds of axioms are used: initial axioms, transition axioms, liveness axioms, halting axioms, and completion axioms.
Initial Axioms  A state predicate is a boolean-valued state function (either derived or elementary). We say that a predicate P is true for a state s if P(s) equals true.

An initial axiom is a state predicate. It is true for the behavior (1) if and only if it is true for the initial state s₀. Initial axioms are used to specify the starting state of the system.
Transition Axioms  A relation R on the set S consists of a set of ordered pairs of elements of S. We write s R t to denote that (s, t) is in the relation R. We say that the relation R is enabled in a state s if there exists a state t such that s R t. The relation R is said to be self-disabling if, for any states s and t such that s R t, R is not enabled in t.

A transition axiom is a pair (α, R) where α is an action in A and R is a self-disabling relation on S. We write this pair as α: R instead of (α, R). The transition axiom α: R asserts the following for a behavior of the form (1):

(Safety) For each i: if αᵢ = α, then R is enabled in state sᵢ₋₁, and either sᵢ₋₁ R sᵢ or else sᵢ₋₁ = sᵢ.

(Liveness) If there exist infinitely many values of i such that αᵢ = α, then, for any i, there exists a j > i such that R is not enabled in sⱼ.

This is the formal description of the kind of transition axioms we wrote for the programs of Figures 1 and 2. Each atomic operation of the programs was described by a separate transition axiom.
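Packaged as data, a transition axiom is an action name paired with a relation, and its safety part can be checked on any finite prefix of a behavior. A minimal sketch (Python; the explicit finite state space passed to enabled is a simplification assumed only for this illustration):

    def enabled(R, s, states):
        """R is enabled in s if s R t holds for some state t."""
        return any(R(s, t) for t in states)

    def check_safety(axiom, prefix, states):
        """Safety part of a transition axiom (action, R) on a prefix [s0, a1, s1, ...]:
        every step labeled with the action starts in a state where R is enabled and
        is either an R step or a stuttering step."""
        action, R = axiom
        seq, labels = prefix[0::2], prefix[1::2]
        return all(enabled(R, s, states) and (R(s, t) or s == t)
                   for s, a, t in zip(seq, labels, seq[1:]) if a == action)

The liveness part quantifies over infinitely many positions and therefore cannot be decided from a finite prefix; it is the formal counterpart of the "if there are infinitely many α actions" clauses used earlier.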
The behavior (1) can be thought of as an infinite sequence of transitions sᵢ₋₁ -αᵢ-> sᵢ such that the final state of each transition equals the initial state of the next transition. A transition axiom describes the transitions that can appear in a behavior. The safety part of a transition axiom α: R asserts how an α transition can change the state. The conjunction of these assertions for all actions describes all possible ways that the state can change. However, it does not assert that any change must occur. Asserting that something must change is a liveness property. Interesting liveness properties are asserted by special liveness axioms, described below. Being a liveness axiom, the liveness part of a transition axiom is more logically included with the other liveness axioms. However, it expresses a very weak liveness property, namely, that an infinite number of stuttering transitions cannot occur with R continuously enabled. (Remember that R is self-disabling, so a nonstuttering action must disable the transition.) We know of no cases in which one does not want at least this liveness property to hold, so it is easiest to include it as part of the transition axiom.

The requirement that R be self-disabling avoids certain formal difficulties, such as the ones pointed out in [3]. In specifying systems, it seems to be a bad idea to allow actions that could repeat themselves infinitely often with no intervening actions, except for a trivial halting action that denotes termination. Thus, this requirement is not a significant restriction.
A relation R on S is described as a relation on state functions subscripted "new" or "old". For example, if f and g are state functions, then f_new < g_old describes the relation R such that s R t is true if and only if f(t) < g(s). In a practical specification language, one needs a convenient notation for expressing relations on state functions. One method is to write the relations using the new and old subscripts. Another method is to use ordinary programming language constructs. The assignment statement x := x + y describes a relation such that x_new = x_old + y_old and the new and old values of all other variables are the same. The use of new and old subscripts is more general, since a relation such as x_new² + y_old² = x_old cannot be written conveniently as an assignment statement. On the other hand, sometimes the programming notation is more convenient. A specification language should probably allow both notations.
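Concretely, both notations denote the same kind of object: a boolean-valued function of an old and a new state. A minimal sketch (Python; the dictionary encoding of states is an assumption):

    def assign_x_plus_y(old, new):
        """The relation for 'x := x + y': x_new = x_old + y_old, all other variables unchanged."""
        return (new["x"] == old["x"] + old["y"]
                and all(new[v] == old[v] for v in old if v != "x"))

    def quadratic_relation(old, new):
        """x_new**2 + y_old**2 = x_old: expressible with new/old subscripts,
        but not conveniently as an assignment statement."""
        return new["x"] ** 2 + old["y"] ** 2 == old["x"]

    print(assign_x_plus_y({"x": 1, "y": 2}, {"x": 3, "y": 2}))   # True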
Liveness Axioms  Liveness axioms are expressed with temporal logic. The time has come to describe this logic more formally. Fortunately, we need only a very restricted form of temporal logic, a form that is known in the trade as linear-time temporal logic with unary operators. In particular, we do not need the binary "until" operator, which can make formulas hard to understand. Of course, we do not include an "in the next state" operator.
A temporal logic formula represents a boolean function of behaviors. We write σ ⊨ U to denote that the formula U is true on the behavior σ. Recall that a state predicate is a boolean-valued function on the set S of states. An action predicate is a boolean-valued function on the set A of actions. We identify an action α of A with the action predicate that is true on an action β of A if and only if β = α. The generalization of state predicates and action predicates is a general predicate, which is a boolean-valued function on the set (S × A) of state, action pairs.

A formula of temporal logic is made up of the following building blocks: general predicates; the ordinary logical operators ∧ (conjunction), ∨ (disjunction), ⇒ (implication), and ¬ (negation); and the unary temporal operator □. To define the meaning of any formula, we define inductively what it means for such a formula to be true for a behavior of the form (1).

A general predicate G is interpreted as a temporal logic formula by defining σ ⊨ G to be true if and only if G(s₀, α₁) is true. Thus, a state predicate is true of a behavior if and only if it is true for the first state, and an action predicate is true of a behavior if and only if it is true for the first action.
The meaning of the ordinary boolean operators is defined in the obvious way; for example, σ ⊨ U ∧ V is true if and only if both (σ ⊨ U) and (σ ⊨ V) are true, and σ ⊨ ¬U is true if and only if σ ⊨ U is false.

The operator □, read "always" or "henceforth", is defined as follows. If σ is the behavior (1), then let σ⁺ⁿ be the behavior

    sₙ -αₙ₊₁-> sₙ₊₁ -αₙ₊₂-> . . .

for n ≥ 0. For any formula U, σ ⊨ □U is defined to be true if and only if σ⁺ⁿ ⊨ U is true for all n ≥ 0. For example, if P is a state predicate, then □P is true for σ if and only if P is true for every state sₙ in σ.

The derived operator ◇, read "eventually", is defined by letting ◇U equal ¬□¬U for any formula U. Thus, σ ⊨ ◇U is true if and only if σ⁺ⁿ ⊨ U is true for some n ≥ 0. In particular, if P is a state predicate, then ◇P is true for σ if and only if P is true on some state sₙ in σ.

The derived operator ⇝, read "leads to", is defined by letting U ⇝ V equal □(U ⇒ ◇V). Intuitively, U ⇝ V means that whenever U is true, V must be true then or at some later time. Thus, if P and Q are state predicates and σ is the behavior (1), then P ⇝ Q is true for σ if and only if, for every n: if P is true on state sₙ then Q is true on sₘ for some m ≥ n.

You should convince yourself that, for a state predicate P, the formula □◇P (read "infinitely often P") is true for the behavior (1) if and only if P is true on infinitely many states sₙ, and ◇□P is true if and only if there is some n such that P is true on all states sₘ for m ≥ n. With a little practice, it is easy to understand the type of temporal logic formulas one writes to specify liveness properties.
Most liveness properties are expressed with the ⇝ operator. A typical property asserts that if a transition becomes enabled then it will eventually fire. This property is expressed by a formula of the form P ⇝ ¬P, where P is the state predicate asserting that the transition is enabled. The liveness part of a transition axiom α: R is □◇A ⇒ □◇¬P, where A is the action predicate that is true for action α and false for all other actions, and P is the predicate that is true for a state s if and only if R is enabled on s.
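These operators quantify over an infinite behavior, but for an eventually periodic behavior, a finite prefix followed by a cycle repeated forever, they reduce to finite checks on the prefix and the cycle. A sketch under that lasso assumption (Python; the representation is mine, not part of the formalism):

    def always(P, prefix, cycle):
        """[]P on the behavior prefix + cycle + cycle + ...: P holds in every state."""
        return all(P(s) for s in prefix + cycle)

    def eventually(P, prefix, cycle):
        """<>P: P holds in some state of the behavior."""
        return any(P(s) for s in prefix + cycle)

    def infinitely_often(P, prefix, cycle):
        """[]<>P: P holds in infinitely many states, i.e. somewhere in the cycle."""
        return any(P(s) for s in cycle)

    def leads_to(P, Q, prefix, cycle):
        """P ~> Q: whenever P holds, Q holds then or at some later time."""
        states = prefix + cycle
        later_q = [any(Q(t) for t in states[i:]) or any(Q(t) for t in cycle)
                   for i in range(len(states))]
        return all(later_q[i] for i, s in enumerate(states) if P(s))

    # Example: in the halted program of Figure 1, pc = gamma holds from some point on.
    prefix = [{"pc": "alpha"}, {"pc": "beta"}]
    cycle = [{"pc": "gamma"}]                      # the infinite stuttering at the end
    print(infinitely_often(lambda s: s["pc"] == "gamma", prefix, cycle))   # True
    print(leads_to(lambda s: s["pc"] == "alpha",
                   lambda s: s["pc"] == "gamma", prefix, cycle))           # True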
Halting and Completion Axioms  A halting axiom consists of a pair λ: P where λ is an action in A and P is a state predicate. The halting axiom λ: P is true for the behavior (1) if the following condition is satisfied:

For every i: if αᵢ = λ, then P is true in state sᵢ₋₁ and sᵢ = sᵢ₋₁.

The λ Transition Axiom for the program of Figure 1 and the λ Transition Axiom for the program of Figure 2 are examples of halting axioms.

A halting axiom is needed to allow the possibility of halting, the absence of any more nonstuttering transitions. It turns out that, when writing descriptions of individual modules rather than of complete programs, one usually does not need a halting axiom.

The completion axiom defines the set A of actions, asserting what the possible actions αᵢ are in a behavior of the form (1). In writing specifications, a slightly different form of completion axiom will be used that asserts which actions are performed by which modules.
1.3.3 Completeness of the Method
Implicit in the work of Alpern and Schneider [2] is a proof that any system that can be described using a very powerful formal system for writing temporal axioms (much more powerful than the simple temporal logic defined above) can also be described by initial axioms, transition axioms, simple liveness axioms of the form P ⇝ Q with P and Q state predicates, a halting axiom, and a completion axiom. Our method for describing systems can therefore be used to provide a formal description of any system that we would expect it to.

Of course, theoretically possible does not necessarily mean practical and convenient. The utility of our formal method as the basis for a practical method for specifying and reasoning about concurrent systems must be demonstrated with examples.
1.3.4 Programs as Axioms
It is customary to adopt the point of view that the transition axiom describes the semantics of the atomic operation. From now on, it will be useful to reverse this way of thinking and instead to think of the atomic operation as a convenient way to write the transition axiom. A program then becomes an easy way to write a collection of axioms.

When writing specifications, programs seem to be convenient for expressing the transition axioms but not so convenient for expressing other liveness properties.
1.4 Implementing One System with Another
1.4.1 The Formal Definition
Let (B, S, A) and (B′, S′, A′) be two systems. We now formally define what it means for the first to implement the second. We call (B, S, A) the lower-level system and its state functions, behaviors, etc. are called lower-level objects; (B′, S′, A′) is said to be the higher-level system and its state functions, etc. are called higher-level objects.

Recall the approach used in Section 1.2, where the lower-level system was an assembly language program and the higher-level one was a program written in a higher-level language. We defined a mapping F from lower-level behaviors to higher-level behaviors. More precisely, if σ is a behavior with lower-level states and actions, then F(σ) is a behavior with higher-level states and actions. Correctness of the implementation meant that for any behavior σ of the lower-level system, F(σ) is a behavior of the higher-level one. The mapping F was derived from mappings F_st from lower-level states to higher-level states and F_ac from lower-level actions to higher-level actions.
We generalize this definition slightly by allowing F_ac to be a function of both the action and the state of the lower-level system rather than just of the action. This generalization permits the same lower-level action to implement several different higher-level actions. For example, suppose the compiler translates (higher-level) exponentiation operations into calls of a single (assembly language) exponentiation subroutine. An atomic operation performed by the exponentiation subroutine can correspond to an execution of one of many different atomic operations of the higher-level program, which one depending upon the value of the register that contains the return address of the subroutine call.

The formal definition states that (B, S, A) implements (B′, S′, A′) if there exist mappings F_st : S → S′ and F_ac : A × S → A′ such that F(B) ⊆ B′, where the mapping F is defined by letting F(σ), for σ the behavior given by (1), equal

    F_st(s₀) -F_ac(α₁, s₀)-> F_st(s₁) -F_ac(α₂, s₁)-> F_st(s₂) -F_ac(α₃, s₂)-> . . .     (2)
1.4.2 The Definition in Terms of Axioms
The Mappings The set of states is specied by elementary state functions
and a constraint. Let f
1
, . . . , f
n
be the elementary state functions and C
the constraint that dene the set S of lower-level states, and let f

1
, . . . , f

m
,
C

be the elementary state functions and constraint dening the set S

of
higher-level states. To dene the mapping F
st
, we must express each higher-
level state function f

j
in terms of the lower-level ones f
i
. That is, we must
choose mappings F
j
: R
1
R
n
R

j
, where R
i
is the range of f
i
and
R

j
is the range of f

j
, and dene f

j
to equal F
j
(f
1
, . . . , f
n
). The F
j
must be
constraint-preservingthat is, for any v in R
1
R
n
, if C(v) = true
then C

(F
1
(v), . . . , F
m
(v)) = true.
The mappings F
j
dene the mapping F
st
: S S

as follows. If s is the
(unique) element of S such that f
i
(s) = v
i
, for i = 1, . . . , n, then F
st
(s) is
the element of S

such that f

j
(F
st
(s)) = F
j
(v
1
, . . . , v
n
), for j = 1, . . . , m.
For any higher-level state function f, let F

st
(f) be the lower-level state
function such that F

st
(f)(s) is dened to equal f(F(s)). It is a simple
exercise in unraveling the notation to verify that, for the elementary higher-
level state function f

j
, the lower-level state function F

st
(f

j
) is the function
F
j
(f
1
, . . . , f
n
). Thus, for the elementary higher-level state function f

j
, the
lower-level state function F

st
(f

j
) is just the denition of f

j
as a function
of the lower-level state functions. For a derived higher-level state function
f, one can compute F

st
(f) in terms of the functions F

st
(f

j
). For example,
F

st
(3f

1
+f

2
) equals F

st
(3f

1
) +F

st
(f

2
).
24
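As a purely illustrative rendering of these definitions, the sketch below represents a lower-level state as an assignment of values to its elementary state functions and defines each higher-level elementary state function by an ordinary function of those values; F*_st is then just composition with F_st. The particular state functions g1, g2 and their definitions are invented for the example.

    # Illustrative sketch of F_st and F*_st.  A lower-level state assigns values
    # to the elementary state functions f1, f2, f3 (here simply a dict).

    # Definitions of the higher-level elementary state functions in terms of the
    # lower-level ones (the mappings F_j of the text):
    F_defs = {
        "g1": lambda s: s["f1"] + s["f2"],   # g1 is defined to be f1 + f2
        "g2": lambda s: s["f3"] % 2,         # g2 is defined to be f3 mod 2
    }

    def F_st(s):
        """Map a lower-level state to the corresponding higher-level state."""
        return {g: defn(s) for g, defn in F_defs.items()}

    def F_star_st(f_prime):
        """Pull a higher-level state function back to a lower-level one:
        F*_st(f')(s) = f'(F_st(s))."""
        return lambda s: f_prime(F_st(s))

    s = {"f1": 2, "f2": 3, "f3": 7}
    derived = lambda hi: 3 * hi["g1"] + hi["g2"]   # a derived higher-level state function
    print(F_st(s))                  # {'g1': 5, 'g2': 1}
    print(F_star_st(derived)(s))    # 16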
These definitions are basically quite simple. Unfortunately, simple ideas can appear complicated when they are expressed in the abstract formalism of n-tuples and mappings. The mapping F_st from lower-level states to higher-level ones and the mapping F*_st from higher-level state functions to lower-level ones are the basis for our method of verifying the correctness of an implementation. To fully understand this method, the reader should develop an intuitive understanding of these mappings, and of the relation between them. This is best done by expressing the mappings from the example in Section 1.2 in terms of this formalism.

Our specifications talk about state functions rather than states. Hence, it is the mapping F*_st from higher-level state functions to lower-level ones that we must use rather than the mapping F_st from lower-level states to higher-level ones.

For any action α, let A(α) be the action predicate such that A(α)(β) is true if and only if β = α. Action predicates of the form A(α) are called elementary action predicates; they play the same role for the set of actions that the elementary state functions play for the set of states. Any action predicate can be expressed as a function of elementary action predicates.

Just as the mapping F_st is defined by expressing the higher-level elementary state functions in terms of the lower-level ones, the mapping F_ac from A × S to A' is defined by expressing each higher-level elementary action predicate as a function of the lower-level elementary action predicates and state functions. This defines a mapping F*_ac from higher-level action predicates to lower-level general predicates such that F*_ac(A')(α, s) = A'(F_ac(α, s)) for any higher-level action predicate A'. (Recall that a lower-level general predicate is a boolean function on S × A.) The formal definitions are analogous to the ones for F_st and F*_st, and we won't bother with the details.

The mappings F*_st and F*_ac induce a mapping F* from higher-level general predicates to lower-level ones, where, for any higher-level general predicate G, F*(G) is the predicate whose value on (s, α) equals G(F_st(s), F_ac(α, s)). For a higher-level state predicate P, F*(P) is the same as F*_st(P), and, for a higher-level action predicate A, F*(A) is the same as F*_ac(A).[1] Any general predicate is represented as a function of elementary state and action predicates, so F* can be computed from the F*_st(f'_j) and the F*_ac(A(α')) for the higher-level elementary state functions f'_j and action predicates A(α'). For example, F*(A(α') ∧ (f'_1 < f'_2)) is the lower-level general predicate F*_ac(A(α')) ∧ (F*_st(f'_1) < F*_st(f'_2)).

[1] Formally, this requires identifying the state predicate P with the general predicate P such that P(s, α) = P(s), and similarly for the action predicate A.
The mapping F* is extended to arbitrary temporal formulas in the natural way; for example, for general predicates G and H,

    F*(□(G ⇒ H)) = □(F*(G) ⇒ F*(H))

Thus, F* maps higher-level temporal logic formulas into lower-level ones. It is a simple exercise in untangling the notation to verify that, for any lower-level behavior σ and any higher-level temporal logic formula U: F(σ) ⊨ U is true if and only if σ ⊨ F*(U) is.

For later use, we note that F_st also induces a mapping F* from higher-level relations on S' to lower-level relations on S, where s F*(R) t is defined to equal F_st(s) R F_st(t). The mapping F* on relations is easily computed from the mapping F*_st on state functions. For example, if R is the higher-level relation defined by f_new < g_old for higher-level state functions f and g, then F*(R) is the lower-level relation defined by F*_st(f)_new < F*_st(g)_old.
Mapping the Axioms   Suppose that the sets of behaviors B and B' are specified by sets of axioms 𝒰 and 𝒰', respectively. Thus, a behavior σ is in B if and only if σ ⊨ U is true for every axiom U in 𝒰, and similarly for the behaviors in B'. Recall that the lower-level system implements the higher-level one if and only if, for every behavior σ in B, F(σ) is in B'. The behavior F(σ) is in B' if and only if F(σ) ⊨ U is true for every axiom U in 𝒰', which is true if and only if σ ⊨ F*(U) is true. Hence, to show that σ ∈ B implies F(σ) ∈ B', it is necessary and sufficient to show, for all U in 𝒰', that (∀ V ∈ 𝒰 : σ ⊨ V) implies σ ⊨ F*(U), which is the same as showing σ ⊨ (⋀_{V∈𝒰} V) ⇒ F*(U). Proving that σ ⊨ (⋀_{V∈𝒰} V) ⇒ F*(U) for all σ means showing that the axioms of 𝒰 imply F*(U).

Thus, to show that the lower-level system implements the higher-level one, we must show that for every higher-level axiom U, the lower-level temporal logic formula F*(U) is provable from the lower-level axioms. We now consider what this means for the different kinds of axioms that constitute the formal description of a system.

An initial axiom is simply a state predicate. For each higher-level initial axiom P, we must prove that the state predicate F*_st(P) follows from the lower-level initial axioms.

A higher-level liveness axiom is a temporal logic formula, and for each such axiom U we must prove that F*(U) is a logical consequence of the lower-level liveness axioms. The liveness part of a transition axiom is also considered to be a liveness axiom, and is handled in this way.
If U is the higher-level completion axiom, then F*(U) follows immediately from the lower-level completion axiom and the fact that the range of values assumed by F_ac is contained in the set A'.

Finally, we consider the conjunction of the halting axioms and the safety part of the transition axioms as a single higher-level axiom U. The axiom U asserts that for any higher-level transition s' --α'--> t':

• If there is a transition axiom α' : R' then R' is enabled in state s' and either s' R' t' or s' = t'.

• If there is a halting axiom α' : P' then P' is true for state s' and s' = t'.

The formula F*(U) asserts that for any lower-level transition s --α--> t:

• If there is a transition axiom F_ac(α, s) : R' then R' is enabled in state F_st(s) and either F_st(s) R' F_st(t) or F_st(s) = F_st(t).

• If there is a halting axiom F_ac(α, s) : P' then P' is true for state F_st(s) and F_st(s) = F_st(t).

The lower-level transition axioms determine the possible lower-level transitions s --α--> t. Assume that for each action there is exactly one transition or halting axiom.[2] Then a lower-level transition s --α--> t must satisfy either a transition axiom α : R or a halting axiom α : P. It follows from the above characterization of the formula F*(U) that the lower-level transition and halting axioms imply F*(U) if and only if the following conditions hold:

• For every lower-level transition axiom α : R, if s R t then:

  – If there is a transition axiom F_ac(α, s) : R' then R' is enabled in state F_st(s) and either F_st(s) R' F_st(t) or F_st(s) = F_st(t).

  – If there is a halting axiom F_ac(α, s) : P' then P' is true for state F_st(s) and F_st(s) = F_st(t).

• For every lower-level halting axiom α : P, if P is true on a state s then there is a halting axiom F_ac(α, s) : P' and P' is true on state F_st(s).

[2] It makes no sense to have both a transition and a halting axiom for the same transition, and the conjunction of two transition axioms α : R and α : R' is equivalent to the single transition axiom α : R ∧ R'.
These conditions imply that for every transition s --α--> t satisfying the lower-level transition or halting axiom for α, the transition F_st(s) --F_ac(α,s)--> F_st(t) satisfies the higher-level transition or halting axiom for F_ac(α, s).

The above conditions can be written more compactly in the common case when F_ac(α, s) depends only upon the action α. In that case, the conditions can be expressed as follows, where = denotes the equality relation (the set of pairs (s, s)) and enabled(R) is the predicate that is true for state s if and only if R is enabled in s. (Recall that F*(R') is the relation such that s F*(R') t if and only if F_st(s) R' F_st(t).)

• For every lower-level transition axiom α : R:

  – If there is a transition axiom F_ac(α) : R' then enabled(R) ⇒ F*(enabled(R')) and R ⊆ F*(R' ∪ =).

  – If there is a halting axiom F_ac(α) : P' then enabled(R) ⇒ F*(P') and R ⊆ F*(=).

• For every lower-level halting axiom α : P, there is a halting axiom F_ac(α) : P' such that P ⇒ F*(P').
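For small finite-state systems these conditions can be checked by brute force. The sketch below is an illustration only; the two-level counter system, the axiom tables, and the mappings are all hypothetical, and a relation is represented simply as a set of (s, t) pairs.

    # Illustrative finite-state check of the conditions above, for the case in
    # which F_ac depends only on the action.  All names below are hypothetical.

    S_lo = [0, 1, 2]                      # lower-level states: a counter
    F_st = lambda s: s % 2                # higher-level state: the counter's parity
    F_ac = {"tick": "flip", "stop": "halt"}

    lo_trans = {"tick": {(0, 1), (1, 2)}}          # lower-level transition axioms  a : R
    lo_halt  = {"stop": lambda s: s == 2}          # lower-level halting axioms     a : P
    hi_trans = {"flip": {(0, 1), (1, 0)}}          # higher-level transition axioms a': R'
    hi_halt  = {"halt": lambda s: True}            # higher-level halting axioms    a': P'

    def enabled(R, s):
        return any(u == s for (u, v) in R)

    def check():
        for a, R in lo_trans.items():
            for (s, t) in R:
                a2 = F_ac[a]
                if a2 in hi_trans:
                    R2 = hi_trans[a2]
                    assert enabled(R2, F_st(s))
                    assert (F_st(s), F_st(t)) in R2 or F_st(s) == F_st(t)
                if a2 in hi_halt:
                    assert hi_halt[a2](F_st(s)) and F_st(s) == F_st(t)
        for a, P in lo_halt.items():
            for s in S_lo:
                if P(s):
                    a2 = F_ac[a]
                    assert a2 in hi_halt and hi_halt[a2](F_st(s))
        return "conditions hold"

    print(check())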
2 Specification

Thus far, we have been discussing the formal description of a complete system. A prerequisite for a specification is the splitting of a system into two parts: the part to be specified, which we call a module, and the rest of the system, which we will call the environment. A Modula-2 [11] module is an example of something that could qualify as a module, but it is not the only example. A piece of hardware, such as a RAM chip, could also be a module. The purpose of a specification is to describe how the module interacts with the environment, so that (i) the environment can use the module with no further knowledge about how it is implemented, and (ii) the module can be implemented with no further knowledge of how it will be used. (A Modula-2 definition module describes the syntax of this interaction; a complete specification must describe the semantics.)

One can regard a Modula-2 module as a module, with the rest of the program as the environment. A specification describes the effects of calling the module's procedures, where these effects can include the setting of exported variables and var arguments, calls to other procedures that are part of the environment, and the eventual setting of the program counter to the location immediately following the procedure call. The specification describes only the procedure's interaction with the environment, not how this interaction is implemented. For example, it should not rule out the possibility of an implementation that invokes machine-language subroutines, or even special-purpose hardware. Any other requirements, for example, that the procedure be implemented in ASCII standard Pascal, or that it be delivered on 1600 bpi magnetic tape, or that it be written on parchment in green ink, are not part of the specifications that we will write. This omission is not meant to imply that these other requirements are unimportant. Any formal method must restrict itself to some aspect of a system, and we choose to consider only the specification of the interface.

It is reasonably clear what is meant by specifying a Modula-2 module because the boundary between the module and the environment is evident. On the other hand, we have no idea what it would mean to specify the solar system, because we do not know what is the module to be specified and what is its environment. One can give a formal description of some aspect of the solar system, such as the ones developed by Ptolemy, Copernicus, and Kepler. The distinction we make between a specification and a formal description appears not to be universally accepted, since a workshop on specifying concurrent systems [4] was devoted to ten specification problems, only three of which had moderately clear boundaries between the module to be specified and its environment.
2.1 The Axioms

As we have seen, the complete system is described formally by a triple (B, S, A), where B is a set of behaviors. A behavior is a sequence of transitions s --α--> t, where α is an action in A and s and t are states in S. When specifying a module, we don't know what the complete set of states S is, nor what the complete set of actions A is. All we can know about are the part of the state accessed by the module and the subset of actions that are relevant to the module's activities, including those actions performed by the module itself.

A complete system (B, S, A) is specified by a collection of axioms. This collection of axioms should be partitioned into two subcollections: ones that specify the module and ones that specify the rest of the system. Our task is to write the axioms that specify the module without making any unnecessary restrictions on the behavior of the rest of the system.

A major purpose of labeling the arc in a transition s --α--> t is that it allows us to identify whether the transition is performed by the module or the environment. The operation of the module is specified by axioms about transitions performed by the module, usually describing how they change the state. The specification of the module must also include axioms about transitions performed by the environment, since no module can work properly in the face of completely arbitrary behavior by the environment. (Imagine trying to write a procedure that works correctly even though concurrently operating processes are randomly modifying the procedure's local variables.) Axioms about the environment's transitions usually specify what the environment cannot do, for example, that it cannot change parts of the state that are local to the module.

We will partition a module's specification into axioms that constrain the module's behavior and ones that constrain the environment. This will be done by talking about transitions: axioms that describe the module's transitions constrain the module's behavior, and ones that describe the environment's transitions constrain the environment's behavior. However, this is not as easy as it sounds. The implications of axioms are not always obvious, and axioms that appear to specify the module may actually constrain the behavior of the environment. For example, consider a specification of a Modula-2 procedure that returns the square root of its argument, the result and argument being of type real. Neglecting problems of round-off error, we might specify this procedure by requiring that, if it is called with an argument x, then it will eventually return a value y such that y² = x. Such a specification contains only axioms specifying the module, and no axioms specifying the environment. However, observe that the specification implies that if the procedure is called with an argument of −4, then it must return a real value y such that y² = −4. This is impossible. The axioms specifying the module therefore constrain the environment never to call the procedure with a negative argument.
In general, for an axiom that specifies a safety property, it is possible to determine if the axiom constrains the module, the environment, or both. However, it appears to be impossible to do this for a liveness axiom. In practice, one specifies safety properties by transition axioms, and it is easy to see if a transition axiom constrains the module or the environment: a transition axiom for a module action constrains the module and one for an environment action constrains the environment. Liveness properties are more subtle. A liveness property is specified by an axiom asserting that, under certain conditions, a particular transition must eventually occur. For example, when the subroutine has been called, a return action must eventually occur. We view such an axiom as constraining the module in question if the transition is performed by the module, and as constraining the environment if it is performed by the environment. However, we must realize that a liveness axiom can have nonobvious implications. In the above example, a simple liveness property of the module (that it must return an answer) implies a safety property of the environment (that it may not call the module with a negative argument).
2.2 The Interface

The mechanism by which the module and the environment communicate is called the interface. The specification should describe everything that the environment needs to know in order to use the module, which implies that the interface must be specified at the implementation level. A procedure to compute a square root will not function properly if it expects its argument to be represented as a double-word binary floating point number and it is called with an argument represented as a string of ASCII characters.

The need to specify the interface at the implementation level is not restricted to the relatively minor problems of data representation. See [8] for an example indicating how the interface's implementation details can influence the specification of fundamental properties of concurrent systems.

In practice, specifying the interface at the implementation level is not a problem. When writing a specification, one generally knows if the implementation is going to be in Modula-2, Ada, or CMOS. One can then specify the interface as, for example, a collection of procedure calls with arguments of a certain type. For a Modula-2 module, the definition module will usually provide the interface specification. (Unfortunately, it is unlikely that the semantics of any existing concurrent programming language are specified precisely enough to insure that specifications are always independent of the particular compiler.)

We shall see that the specification can be decomposed into two parts: the interface specification and the internal specification. The interface specification is implementation dependent. In principle, the internal specification is independent of the implementation. However, details of the interface are likely to manifest themselves in the internal specification as well. For example, what a procedure does when called with incorrect input may depend upon whether or not the language provides an exception-handling mechanism.

The need to specify the interface at the implementation level was recognized by Guttag and Horning in the design of Larch [5]. What they call the language-dependent part of the specification corresponds to the interface specification. The language-independent part of a Larch specification includes some aspects of our internal specification. However, to handle concurrency, we need to describe the behavior of the module during a procedure call, a concept not present in a Larch specification, which describes only input/output relations.
2.3 State Functions

2.3.1 The Module's State Functions

To describe the set S of states of the complete program, we give a collection of state functions f_1, . . . , f_n and a constraint C, and assert that S is determined by the values of these functions: for every n-tuple (v_1, . . . , v_n) of values that satisfies the constraint C, there is a unique element s of S such that, for all i, f_i(s) = v_i.

When specifying a module, we do not know the complete state because we know very little about the environment. We can only know about the part of S that is relevant to the module. Fortunately, this causes no difficulty. To specify a module, we specify n state functions f_i and a constraint C that describe the relevant part of the state, and we drop the requirement that the n-tuple of values f_i(s) uniquely determines the state s. There can be many states s that have the same values f_i(s), but have different values of g(s) for some state function g that is relevant only to the environment. For example, if our module is a Modula-2 module, g could represent the value of a variable local to some separate module.
2.3.2 Interface and Internal State Functions

The decomposition of the system into environment and module implies that there are two different types of state functions: interface state functions and internal state functions. Interface state functions are part of the interface. They are externally visible, and are specified at the implementation level. To explain why they are needed, we briefly discuss the nature of communication.

Synchronous processes can communicate through transient phenomena; if you are listening to me, waiting for me to say something, I can communicate by sound waves, which are a transient disturbance in the atmosphere. However, if we are not synchronized in this way, and I don't know whether or not you are listening to me, we cannot communicate in this way. In the asynchronous case, I have to make some nontransient state change, for example, writing a message on your blackboard or magnetizing the surface of the tape in your answering machine. You can receive my communication by examining the state of your blackboard or answering machine. Communication is effected with a nontransient change to a communication medium. In computer systems, we sometimes pretend that asynchronous processes communicate by transient events such as sending a message. However, a closer examination reveals that the transient event actually institutes a nontransient state change to a communication medium, for example, by putting a message in a buffer. Communication is achieved through the use of this medium. We specify the communication between the environment and the module in terms of the state of the communication medium.

The interface state functions represent the communication medium by which the environment and the module communicate. In the specification of a hardware component, the interface state functions might include the voltage levels on certain wires. In the specification of a procedure, the interface state functions might include parameter-passing mechanisms. For example, immediately after the environment executes a call to this procedure, there must be some state component that records the fact that the procedure has just been called and the argument with which it was called. The state functions that provide this information are part of the interface specification. For a Modula-2 module, the interface functions are implicitly specified by the definition module. Fortunately, there is usually no need to describe those interface state functions explicitly in detail.

Interface state functions can be directly observed or modified by the environment; the environment can read the voltage on the wires leading to the hardware device, or set the value of the state function that describes the argument with which a procedure is called. Internal state functions are not directly observable by the environment. Their values can only be inferred indirectly, by observing the external behavior of the module as indicated by the values of its interface state functions. For example, consider the specification of a Modula-2 module that implements a queue, having a procedure that adds an element to the end of the queue and one that removes an element from the head of the queue. Its specification will include, as an internal state function, the value of the queue, that is, the sequence of elements comprising the current contents of the queue. This state function is probably not directly observable; the queue is probably implemented by variables that are local to the module and not visible externally. One can only infer the contents of the queue by the module's response to a sequence of procedure calls.
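As an illustration (not taken from the notes), the toy class below renders such a queue module in executable form: the attributes last_call and last_result stand in for interface state functions, while the hidden list _contents plays the role of the internal state function.

    # Illustrative toy queue module.  The interface state functions are the
    # externally visible records `last_call` and `last_result`; the internal
    # state function is the hidden sequence `_contents`, which the environment
    # can only infer from the module's responses.

    class QueueModule:
        def __init__(self):
            self._contents = []          # internal state function: the queue's value
            self.last_call = None        # interface state function: pending call record
            self.last_result = None      # interface state function: most recent result

        def put(self, x):                # environment action: a call of put(x)
            self.last_call = ("put", x)
            self._contents.append(x)     # module action: update the internal state
            self.last_result = None      # put returns no value

        def get(self):                   # environment action: a call of get()
            self.last_call = ("get",)
            self.last_result = self._contents.pop(0)   # module action: dequeue
            return self.last_result

    q = QueueModule()
    q.put(1); q.put(2)
    print(q.get(), q.get())   # 1 2  -- the only evidence of the internal queue's contents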
2.3.3 Aliasing and Orthogonality

In a program, we usually assume that distinct variables represent disjoint data objects; that is, we assume the absence of aliasing. Given a complete program, without pointers and dereferencing operations, aliasing can be handled by explicitly determining which variable is aliased to what. However, with the introduction of pointers or, equivalently, of procedures with var parameters, aliasing is no longer such a simple matter. Two procedure parameters with different names may, in a particular call of that procedure, represent the identical data object.

In programming languages, aliasing manifests itself most clearly in an assignment statement: x and y are aliased if assigning a value to x can change the value of y. The usual case is the absence of aliasing, which we call orthogonality. In an ordinary programming language, two variables x and y are said to be orthogonal to one another if assigning to one of them does not change the value of the other. The concepts of aliasing and orthogonality must be precisely defined in any specification language. Any method for specifying a transition must describe both what state functions change and what state functions do not change. Specifying what state functions change, and how they change, is conceptually simple. For example, we can write the assignment statement x := x + y to specify that the value of x changes such that x_new = x_old + y_old. However, describing what state functions don't change is more difficult. Implicit in the assignment statement x := x + y is the assumption that state functions orthogonal to x are not changed. However, a precise definition of orthogonality is difficult, especially when the state includes pointers and transient objects. A complete discussion of the problem of aliasing is beyond the scope of this paper. See [9] for an introduction to aliasing and orthogonality in sequential programs.

Just as with program variables, different state functions are usually orthogonal. The constraint determines any aliasing relations that may exist between state functions in the same module. A constraint such as f > g may be regarded as a general form of aliasing, since changing the value of f might necessitate a change to g to maintain this constraint. Aliasing relations between state functions from different modules are what makes intermodule communication possible. If two modules communicate through the value of a voltage on a wire, then that value is an interface state function in each of the modules, those two state functions being aliases of one another. As another example, suppose a procedure in a module A calls a procedure in another module B. The interface state function of module A that represents the argument with which A calls B's procedure is aliased to the state function of module B that represents the value of the argument parameter.

Internal state functions of one module are assumed to be orthogonal to internal state functions of any other module and to any interface state functions, including the ones of the same module. By their nature, internal state functions are not directly accessible from the environment, so they cannot be aliased to state functions belonging to or accessible from the environment.
2.4 Axioms

2.4.1 Concepts

Recall that, to specify a complete system, we had the following classes of axioms: initial axioms, transition axioms, liveness axioms, halting axioms and completion axioms. In order to specify a module, which is only part of the complete system, some modifications to this approach are needed.

No change is needed to the way we write initial axioms and liveness axioms. Of course, when specifying liveness properties, we must remember that the module is not executing in isolation. The discussion of the fairness axioms in Section 1.1.5 indicates the type of considerations that this involves.

A halting axiom does not seem to be necessary. The module halts by performing no further transitions; halting transitions can be provided by the environment.

Recall that, for a complete system, the completion axiom specified the set of all actions. In specifying a module, we obviously do not know what the set of all actions is; we can specify only what the module's actions are. Thus, the completion axiom asserts that every action of the module is an element of some set A_m of actions.

Fundamental to splitting the system into module and environment is the ability to distinguish the module's actions from the environment's actions. For example, when presented with a machine-language implementation of a program containing a Modula-2 module, we must be able to determine which machine-language statement executions belong to the module, and which to its environment (the rest of the program). To understand why this is important, consider a specification of a queue, where the interface contains two procedures: put to insert an element at the end of the queue and get to fetch the element at the front of the queue. An important part of the specification that is often overlooked is the requirement that the put and get procedures be called by the environment, not by the module. Without this requirement, a "correct" implementation could arbitrarily insert and delete elements from the queue by calling the put and get procedures itself.

The specification must specify actions performed by the environment as well as those performed by the module. The environment actions that must be specified are the ones by which the environment changes interface functions, for example, the action of calling a procedure in a Modula-2 module. An external action in the specification of a module will be a module action in the specification of some other module.

Environment actions should not change internal state functions. Indeed, allowing the environment to change an internal state function would effectively make that state function part of the interface.
2.4.2 Notation

A specification language requires some convenient notation for writing axioms. As we have seen, the subtle issue is the specification of the relation R of a transition axiom (α, R). For a complete system, we could write R as a simple relation on n-tuples of state function values. This doesn't work when specifying a module in a larger system because we don't know what all the state functions are. Therefore, we must write the relation R in two parts: one specifying what state functions it can change, and the other specifying how it can change those state functions. The first part is specified by simply listing the state functions; the second is specified by writing a relation between the old and new values of state functions. One could write a transition axiom in the following fashion:

    module transition α changes only f, g:
        (f_old = 1) ∧ (f_new = g_old) ∧ (g_new = h + g_old)

This axiom specifies that an α transition, which is a transition performed by the module rather than the environment, is enabled for a state s if and only if f(s) = 1, and that a nonstuttering α transition sets the new value of f to the old value of g, sets the new value of g to its old value plus the value of h, and changes no other state functions. Since α does not change h, the new and old values of h are the same, so no subscript is needed. As before, this axiom also includes a liveness part that asserts that there cannot be an infinite number of α transitions without the α transition becoming disabled (f assuming a value different from 1).

We could also adopt a more programming-language style of notation and write this transition axiom as follows:

    α :  ⟨ f = 1 → f := g;
                   g := h + g ⟩

This assumes a convention that only state functions appearing on the left-hand side of an assignment statement may be changed.

To be more precise, the above transition axioms do not say that an α transition changes only f and g; it could also change the value of state functions aliased to f or g. What it actually asserts is that any state function orthogonal to both f and g is left unchanged by an α transition.
It is often convenient to use parametrized transition axioms, such as:

    module transition α(x : integer) changes only f, g:
        (f_old = x) ∧ (f_new = g_old) ∧ (g_new = h + g_old)

Formally, this specifies a set of transition axioms, one for each (integer) value of x. It is simply a way of saving us from having to write an infinite number of separate axioms. Thus, α(5) and α(7) are two completely different actions; they are not invocations of any single entity α.
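One way to read such an axiom operationally is as an enabling condition together with a relation between the old and new values of the listed state functions, with everything orthogonal to them left unchanged. The sketch below is an illustrative encoding of the parametrized axiom α(x) above; the representation of a state as a dictionary of state-function values is an assumption of the example.

    # Illustrative encoding of the parametrized transition axiom alpha(x).
    # A state assigns values to the state functions f, g, h (a dict here).
    # "changes only f, g" is rendered by copying the state and reassigning
    # only f and g; every other state function keeps its old value.

    def alpha_enabled(x, s):
        return s["f"] == x                       # (f_old = x)

    def alpha_step(x, s):
        assert alpha_enabled(x, s)
        t = dict(s)                              # state functions other than f, g unchanged
        t["f"] = s["g"]                          # f_new = g_old
        t["g"] = s["h"] + s["g"]                 # g_new = h + g_old
        return t

    s = {"f": 5, "g": 2, "h": 10}
    print(alpha_step(5, s))     # {'f': 2, 'g': 12, 'h': 10}  -- an alpha(5) transition
    print(alpha_enabled(7, s))  # False: alpha(7) is not enabled in state s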
For any state function f, the transition axioms for the complete system describe what transitions can change f. If f is an internal state function, then we know that only the module's transitions can change its value. However, an interface state function can be changed by actions of the environment. We need some way to constrain how the environment can change an interface state function. We could use transition axioms to specify all possible ways that the environment can modify an interface state function. However, it is more convenient to write the following kind of axiom:

    f changed only by α_1, . . . , α_m while P

where f is a state function, the α_i are transitions, and P is a state predicate. This asserts that if a behavior includes a transition s --α--> t for which f(s) ≠ f(t), then either P(s) is false or else α is one of the α_i. Of course, one can find syntactic sugarings of this type of axiom, such as omitting the while clause when P is identically true, thereby asserting that f can be changed only by the indicated transitions and by no others.

A changed only by axiom is needed for every interface state function; without one, the environment would be allowed to change the function at any time in any way. Since internal state functions can be changed only by the module's transitions, they do not need a changed only by axiom; the module's transition axioms explicitly state what state functions they can change. However, it would probably be a good idea to include one for redundancy.
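Such an axiom is easy to check on a finite behavior. The following sketch is illustrative only; the state functions buf and idle and the action names are hypothetical.

    # Illustrative check of "f changed only by a1, ..., am while P" on a finite
    # behavior, represented as (s0, [(action, s1), (action, s2), ...]).

    def changed_only_by(behavior, f, allowed, P):
        s0, steps = behavior
        s = s0
        for (a, t) in steps:
            if f(s) != f(t) and P(s) and a not in allowed:
                return False              # f changed while P held, but a was not permitted
            s = t
        return True

    # Hypothetical example: the state function "buf" may be changed only by the
    # environment's "call" action while the module is idle.
    f = lambda s: s["buf"]
    P = lambda s: s["idle"]
    beh = ({"buf": None, "idle": True},
           [("call",    {"buf": 4,    "idle": False}),
            ("compute", {"buf": 4,    "idle": False}),
            ("return",  {"buf": None, "idle": True})])
    print(changed_only_by(beh, f, {"call"}, P))   # True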
2.4.3 Formal Interpretation

In Section 1.3, we defined a system to be a triple (B, S, A), where S is a set of states, A is a set of actions, and B is a set of behaviors. The set of states was specified by giving the ranges of the state functions f_1, . . . , f_n and a constraint C that they satisfy. For simplicity, we will drop the constraint C from here on, assuming that it is the trivial constraint that is satisfied by all n-tuples. (It is a simple matter to add the constraint to the formalism.) By requiring that the n values f_1(s), . . . , f_n(s) uniquely determine s among all the elements of S, we determined S up to isomorphism. Here, we drop that requirement, so specifying the ranges of the state functions f_i still leaves a great deal of freedom in the choice of the set S.

Instead of a single collection of state functions, we now have two kinds of state functions: the internal state functions, which we denote f_1, . . . , f_n, and the interface state functions, which we denote h_1, . . . , h_p. Similarly, there are two types of actions: internal actions α_1, . . . , α_s and interface actions β_1, . . . , β_t.[3] The specification consists of initial axioms, transition axioms, etc., which can all be expressed as temporal logic formulas. Let 𝒰 denote the temporal logic formula that is the conjunction of all these axioms. Then the free variables of this formula are the f_i, h_i, α_i, and β_i. Formally, the specification is the formula[4]

    ∃ f_1 . . . ∃ f_n ∃ α_1 . . . ∃ α_s : 𝒰

Thus, the interface state functions h_i and actions β_i are the free variables of the specification, while the internal state functions and actions are quantified existentially. As we shall see, this is the mathematical expression of the fact that one is free to implement the internal state functions and actions as one chooses, but the interface state functions and actions are given.

[3] For notational convenience, we are assuming that there are a finite number of state functions and actions; however, there could be infinitely many of them.

[4] The formula ∃ x_1 . . . x_q : X is an abbreviation for ∃ x_1 : ∃ x_2 : . . . ∃ x_q : X.
[Figure 4: Two hardware modules, A (with wires f1, f2, f3) and B (with wires g1, g2, g3), and their composition, in which only f1 and g3 remain external.]
What does it mean for a system (B, S, A) to satisfy this formula? Since 𝒰 has the h_i and β_i as free variables, these state functions and action predicates must be defined on S and A, respectively. If that is the case, then it makes sense to ask if the temporal logic formula 𝒰 is true for a behavior in B. We therefore say that (B, S, A) satisfies this formula if and only if the state functions h_i are defined on S, the β_i are elements of A, and the formula is true for every behavior in B.

There is one problem with this definition: we have not yet defined the semantics of the temporal logic formula ∃ x : U when x is a state function or action predicate. We give the definition for x a state function; the definition for action predicates is similar. Let σ be the behavior s_0 --α_1--> s_1 --α_2--> · · · and let σ' be the behavior s'_0 --α_1--> s'_1 --α_2--> · · ·. We say that σ and σ' are equivalent except for x if, for every i and every state function f that is orthogonal to x, f(s_i) = f(s'_i). We define a stuttering behavior of σ to be a behavior obtained from σ by replacing each transition s_{i−1} --α_i--> s_i by a nonempty finite sequence of transitions of the form s_{i−1} --α_i--> s_{i−1} --α_i--> · · · --α_i--> s_{i−1} --α_i--> s_i. We then define σ ⊨ ∃ x : U to be true if and only if there exists a behavior σ' that is equivalent except for x to some stuttering behavior of σ such that σ' ⊨ U is true. In other words, ∃ x : U is true on a behavior if and only if we can make U true by adding stuttering actions and arbitrarily changing the value of x on each state.[5]

[5] I wish to thank Amir Pnueli for pointing out this definition to me.
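The two ingredients of this definition, stuttering and equivalence except for a hidden state function, can be spelled out for finite behaviors as in the sketch below. It is an illustration only: states are dictionaries of state-function values, and orthogonality is simplified to "a different dictionary key".

    # Illustrative helpers for the semantics of "exists x : U" on finite behaviors.
    # A behavior is (s0, [(a1, s1), (a2, s2), ...]); states are dicts of
    # state-function values, and the hidden state function x is a dict key.

    def stutter(behavior, repeats):
        """Repeat the i-th transition repeats[i] >= 1 times (same action, same
        pre-state), producing a stuttering behavior of the original."""
        s0, steps = behavior
        new_steps, s = [], s0
        for (a, t), k in zip(steps, repeats):
            new_steps += [(a, s)] * (k - 1) + [(a, t)]
            s = t
        return (s0, new_steps)

    def equiv_except(b1, b2, x):
        """True if the two behaviors agree on every state function other than x."""
        strip = lambda s: {k: v for k, v in s.items() if k != x}
        (s0, st1), (t0, st2) = b1, b2
        return (strip(s0) == strip(t0) and len(st1) == len(st2) and
                all(a1 == a2 and strip(u) == strip(v)
                    for (a1, u), (a2, v) in zip(st1, st2)))

    b  = ({"h": 0, "x": 0}, [("beta", {"h": 1, "x": 0})])
    b2 = ({"h": 0, "x": 9}, [("beta", {"h": 0, "x": 9}), ("beta", {"h": 1, "x": 7})])
    print(equiv_except(stutter(b, [2]), b2, "x"))   # True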
2.5 The Composition of Modules

One builds a system by combining (composing) modules. As an example, consider two hardware modules, A and B, and their composition, illustrated in Figure 4. The specification of module A has three interface state functions, f_1, f_2, and f_3, whose values represent the voltage levels on the indicated wires. Similarly, module B's specification has the interface state functions g_1, g_2, and g_3.
Connecting the two modules as shown in the figure means identifying the state functions f_2 and f_3 of A's specification with the state functions g_1 and g_2 of B's specification; that is, declaring f_2 ≡ g_1 and f_3 ≡ g_2. Suppose the specification of A includes a single transition α of A that can change f_2 and f_3. The specification of B might include a transition axiom for a transition β of B's environment describing how g_1 and g_2 are allowed to change. (The fact that A changes f_2 and f_3 while the environment changes g_1 and g_2 means that f_2 and f_3 are outputs of A while g_1 and g_2 are inputs to B.)

The formal specification of the composition of the two modules is the conjunction of their specifications, that is, the conjunction of the axioms that make up their specifications, conjoined with the aliasing relations f_2 ≡ g_1 and f_3 ≡ g_2.
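A minimal sketch of this composition step, assuming specifications are modeled as predicates on finite behaviors and the hypothetical axioms shown in the comments, is the following.

    # Illustrative composition of two module specifications.  Each specification
    # is modeled as a predicate on finite behaviors; composing A and B conjoins
    # their predicates together with the aliasing relations f2 == g1, f3 == g2,
    # here expressed as a predicate required to hold in every state.

    def states_of(behavior):
        s0, steps = behavior
        return [s0] + [t for (_, t) in steps]

    def compose(spec_A, spec_B, aliasing):
        """Conjoin two module specifications with the aliasing identifications."""
        return lambda b: (spec_A(b) and spec_B(b) and
                          all(aliasing(s) for s in states_of(b)))

    # Hypothetical axioms: A asserts f2 never decreases; B asserts g3 = g1 + 1.
    spec_A = lambda b: all(u["f2"] <= v["f2"]
                           for u, v in zip(states_of(b), states_of(b)[1:]))
    spec_B = lambda b: all(s["g3"] == s["g1"] + 1 for s in states_of(b))
    aliasing = lambda s: s["f2"] == s["g1"] and s["f3"] == s["g2"]

    beh = ({"f1": 0, "f2": 1, "f3": 0, "g1": 1, "g2": 0, "g3": 2},
           [("step", {"f1": 0, "f2": 2, "f3": 1, "g1": 2, "g2": 1, "g3": 3})])
    print(compose(spec_A, spec_B, aliasing)(beh))   # True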
2.6 The Correctness of an Implementation

In Section 1.4 we defined what it meant for one complete system (B, S, A) to implement another complete system (B', S', A'). The definition was based upon mappings F_st : S → S' and F_ac : A × S → A'. From these mappings, we defined mappings F*_st from state functions on S' to state functions on S and F*_ac from action predicates on A' to functions on A × S.

If we examine how all these mappings are actually defined for a real example, we discover that it is the mappings F*_st and F*_ac that are really being defined. This is because we don't know what the actual states are, just the state functions. Let f_1, . . . , f_n be the state functions of the first system and f'_1, . . . , f'_m be the state functions of the second system. To define the mapping F_st, we must express the value of each f'_j in terms of the f_i, which means defining the state function F*_st(f'_j).

The mappings F*_st and F*_ac in turn define a mapping F* from temporal logic formulas about behaviors in B' to temporal logic formulas about behaviors in B. If the sets of behaviors B and B' are defined by the axioms 𝒰 and 𝒰', respectively, then (B, S, A) correctly implements (B', S', A') if and only if 𝒰 implies F*(𝒰').

Now let us turn to the question of what it means for a specification of one or more modules to implement a specification of another module. As we have seen, the formal specification of a module or collection of modules is a formula of the form

    ∃ f_1 . . . ∃ f_n ∃ α_1 . . . ∃ α_s : 𝒰        (3)
where the f_i and α_i are the internal state functions and actions, and 𝒰 is a temporal logic formula that depends upon the f_i and α_i as well as on the interface state functions h_i and actions β_i.

Let ℳ be the formula (3) and let ℳ' be the similar formula

    ∃ f'_1 . . . ∃ f'_m ∃ α'_1 . . . ∃ α'_r : 𝒰'        (4)

so ℳ' specifies a module with internal state functions f'_j and internal actions α'_j. There are two characterizations of what it means for the module specified by ℳ to implement the module specified by ℳ':

1. Every system (B, S, A) that satisfies the formula ℳ also satisfies the formula ℳ'.

2. ℳ implies ℳ'.

Since one temporal logic formula implies another if and only if every behavior that satisfies the first also satisfies the second, it is easy to see that these two characterizations are equivalent.

To prove that the module specified by ℳ implements the module specified by ℳ', we must prove that ℳ implies ℳ'. Recall that these two formulas are given by (3) and (4). To prove that ℳ implies ℳ', it suffices to construct mappings F*_st and F*_ac, which define the mapping F* as above, so that 𝒰 implies F*(𝒰'). This is exactly the same procedure used in Section 1.4 to prove that one system implements another. The only difference is that, in addition to the internal state functions and actions f_i, α_i, f'_i, α'_i, the axioms also involve the interface state functions and actions. (Another way of viewing this is to say that F*_st and F*_ac are defined so that they map each interface state function h_i and interface action β_i into itself.) Thus, the method of actually verifying that one module implements another is the same one used to show that one concurrent program implements another.
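For a tiny finite-state example, the implication can be checked by brute force: enumerate the behaviors allowed by the lower-level specification and verify that the image of each one under F satisfies the higher-level specification. Everything in the sketch below (the six-state counter, the mappings, the two specifications) is invented for the illustration.

    # Illustrative brute-force check that a lower-level spec implies the image of
    # a higher-level spec: every lower-level behavior of bounded length that
    # satisfies spec_lo must map, under F, to a behavior satisfying spec_hi.
    from itertools import product

    F_st = lambda s: s % 3                      # hypothetical state mapping
    F_ac = lambda a, s: "inc" if a == "add1" else "skip"

    def F(behavior):
        s0, steps = behavior
        return (F_st(s0), [(F_ac(a, u), F_st(t)) for (a, u, t) in steps])

    def behaviors(length):                      # all labeled behaviors of a given length
        for s0 in range(6):
            for actions in product(["add1", "idle"], repeat=length):
                steps, s = [], s0
                for a in actions:
                    t = (s + 1) % 6 if a == "add1" else s
                    steps.append((a, s, t)); s = t
                yield (s0, steps)

    # spec_lo: "add1" increments mod 6 and "idle" changes nothing (true by construction).
    spec_lo = lambda b: True
    # spec_hi: every "inc" increments the higher-level counter mod 3; "skip" leaves it fixed.
    def spec_hi(b):
        s, ok = b[0], True
        for (a, t) in b[1]:
            ok = ok and (t == (s + 1) % 3 if a == "inc" else t == s)
            s = t
        return ok

    print(all(spec_hi(F(b)) for b in behaviors(3) if spec_lo(b)))   # True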
As we observed earlier, it is not always possible to define the f'_j as functions of the f_i. In this case, it is necessary to add dummy internal state functions g_1, . . . , g_d to the system specified by ℳ, and to define the f'_j as functions of the f_i and the g_i. For notational convenience, suppose that one introduces a single dummy state function g. Let ℳ_d be the temporal logic formula that represents the new specification. We want to prove that the new specification ℳ_d correctly implements ℳ' and infer from that that ℳ correctly implements ℳ'. This means that we must prove that ℳ implies ℳ_d. If ℳ is the formula (3), then ℳ_d is the formula

    ∃ g : ∃ f_1 . . . ∃ f_n ∃ α_1 . . . ∃ α_s : 𝒰_d

Assume that the dummy state function g is orthogonal to all other state functions, both internal and external. (The specification language will have some notation, analogous to variable declarations in programming languages, for introducing a new state function that is orthogonal to all other state functions.) To prove that ℳ implies ℳ_d, we must show that 𝒰_d has the following property: if σ is any behavior that satisfies 𝒰, then there is a behavior σ' that is equivalent except for g to some stuttering behavior of σ such that σ' satisfies 𝒰_d.

The condition for the specification with the dummy state function to be equivalent to the original specification is stated in terms of the semantics of the temporal logic (whether or not a behavior satisfies an axiom) rather than within the logic itself. One wants syntactic rules for adding dummy state functions to the specification that ensure that this condition is satisfied. These rules will depend upon the particular specification language; they will correspond to the rules given by Owicki [10] for a simple programming language.
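The following sketch illustrates the role a dummy state function can play. The lower-level system below records only the parity of the number of "tick" actions, so a higher-level "count" state function cannot be expressed from the existing state functions; adding a dummy history function g that counts ticks makes it definable as g. The representation and names are assumptions of the example.

    # Illustrative use of a dummy (history) state function.  Each state of the
    # behavior is augmented with a component g that counts "tick" actions and
    # is changed by nothing else; g is orthogonal to the original state functions.

    def add_dummy(behavior):
        """Augment each state with a dummy state function g counting ticks."""
        s0, steps = behavior
        g = 0
        new_steps = []
        for (a, t) in steps:
            if a == "tick":
                g += 1
            new_steps.append((a, dict(t, g=g)))
        return (dict(s0, g=0), new_steps)

    beh = ({"parity": 0}, [("tick", {"parity": 1}), ("tick", {"parity": 0})])
    print(add_dummy(beh))
    # ({'parity': 0, 'g': 0},
    #  [('tick', {'parity': 1, 'g': 1}), ('tick', {'parity': 0, 'g': 2})])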
References

[1] Bowen Alpern and Fred B. Schneider. Defining liveness. Information Processing Letters, 21:181–185, October 1985.

[2] Bowen Alpern and Fred B. Schneider. Verifying Temporal Properties without using Temporal Logic. Technical Report TR85-723, Department of Computer Science, Cornell University, December 1985.

[3] Howard Barringer, Ruurd Kuiper, and Amir Pnueli. A really abstract concurrent model and its temporal logic. In Thirteenth Annual ACM Symposium on Principles of Programming Languages, pages 173–183, ACM, January 1986.

[4] B. T. Denvir, W. T. Harwood, M. I. Jackson, and M. J. Wray, editors. The Analysis of Concurrent Systems. Volume 207 of Lecture Notes in Computer Science, Springer-Verlag, Berlin, 1985.

[5] J. V. Guttag, J. J. Horning, and J. M. Wing. Larch in Five Easy Pieces. Technical Report 5, Digital Equipment Corporation Systems Research Center, July 1985.

[6] Leslie Lamport. An Axiomatic Semantics of Concurrent Programming Languages, pages 77–122. Springer-Verlag, Berlin, 1985.

[7] Leslie Lamport. Specifying concurrent program modules. ACM Transactions on Programming Languages and Systems, 5(2):190–222, April 1983.

[8] Leslie Lamport. What it means for a concurrent program to satisfy a specification: why no one has specified priority. In Proceedings of the Twelfth ACM Symposium on Principles of Programming Languages, pages 78–83, ACM SIGACT-SIGPLAN, New Orleans, January 1985.

[9] Leslie Lamport and Fred B. Schneider. Constraints: a uniform approach to aliasing and typing. In Proceedings of the Twelfth ACM Symposium on Principles of Programming Languages, ACM SIGACT-SIGPLAN, New Orleans, January 1985.

[10] S. Owicki. Axiomatic Proof Techniques for Parallel Programs. PhD thesis, Cornell University, August 1975.

[11] Niklaus Wirth. Programming in Modula-2. Springer-Verlag, third edition, 1985.