Algorithmic Logic for Programmers
for Software Construction and Analysis
24 May 2009
Contents

Preface
1 Introduction
  1.1 Logic? What for?
  1.2 Which logic?
2 Data structures
  2.1 On algebraic structures
  2.2 Algebraic systems for the representation of finite sets
  2.3 Formal language & semantics
  2.4 Expressibility problems in the first-order language
3 Deterministic Iterative Programs
  3.1 Programs
  3.2 Semantics
4 Algorithmic Logic
  4.1 Axiomatization
  4.2 On the completeness of algorithmic logic
1 Introduction
Let us remark that the amount and the importance of software grow steadily. The market turnover in software now exceeds that in hardware. One can gain a lot by producing quality programs quickly. On the other hand, the price of software is high, and this is caused by diverse factors. We have not yet reached the ability to produce software in an industrial way. Many believe that an appropriate theory is needed. At the foundations of every technology one finds various physical and mathematical theories. For the development of an industrial technology of software production, a theory of programming is necessary. The price we pay for using programs with errors in them (i.e. bugs) is very high. Compare it with the price of a bridge that collapsed because the engineer disregarded the laws of mechanics. The cost of using faulty software is multiplied by the diversity and mass scale of the applications of computer technology in today's world, and also because we introduce innovations in a hurry. The wish to base the production of programs on a formalized mathematical calculus, in order to eliminate (dangerous and costly) errors, should in our opinion be the minimal goal.
There is a lot of discussion on the necessity of accelerating software production, and on its automatization. We cannot judge at this moment whether the work of programmers can be automated. Let us remark, however, how few tools exist to help in the labor of programming. It is, sui generis, a paradox that there exist many programs which help in the work of designers of machines or electronic circuits, architects, physicians, composers, creators of almost all professions, and yet there are no tools addressed to the profession of programmer.
The quick design and testing of effective algorithms does not cover the whole problem. It is not the solitary work of an ingenious programmer that decides the results. Work on software is more and more a collective, or social, activity. Apart from the author(s) of a program, and obviously the computer, we meet the users, the people maintaining the software, et al. The process of using software requires a language to communicate. This language is neither a programming language nor the ethnic language of a certain human group. The production of software leads to the creation of texts. These texts have to meet certain requirements (specifications). Hence we have a problem in which at least three elements can be distinguished.
Fig.1.1
It is the arguments, their strength, correctness and quality, which decide whether the programmer will convince the user or commissioner that his program meets the requirements. The language in which the process of software production takes place must assure the possibility of formulating the requirements a program is to meet, the possibility of writing programs, and the
Remark 1.1.1 We are not going to force the people involved in the process of software use and production to formalize all their work. We are appealing to the imagination of the reader: the work on software should be formalizable. (Very much like mathematics: the papers published by mathematicians are not formal proofs; they are communiqués about a possible formalization of proofs.) On the other hand, we would like to remark on the more and more widespread computer systems which help lawyers in their work. If the work of lawyers can be supported by formalized systems (for every computer system is formal), why can programmers not profit from the tools they are working with?
Our beliefs, listed below, are connected with the above considerations:
- Before passing an algorithm or a programming system to the user(s), one should make an analysis of the quality of the software product offered. To neglect this is hazardous in economic terms, and sometimes it can be hazardous to human life.
- How to solve this dilemma? By convincing the buyer of our products that the program has been constructed in accordance with the state of the art and the present state of knowledge. This is obviously a minimal program. Our proposal goes beyond this minimum and offers the possibility of a deeper analysis of software (cf. chapter 5).
- An algorithm is only an expression; it is the interpretation of the signs which appear in it that decides the meaning of the expression (cf. 3.2.4).
- The interpretation can be precisely characterized by assuming certain axioms (cf. chapter 5).
- An analysis of programs can be done on a formal basis: proofs of semantical properties of programs can be given (cf. chapter 4).
What are the questions which appear during the work on software? We think that the problems encountered can be classified into one of three groups:
(1) Is the problem solvable at all? It is economic nonsense to spend money on squaring the circle or constructing a perpetuum mobile. Frequently we hear promises of that type, e.g. when a company promised to sell a program which was going to answer whether any other program, given as data, is correct or not. In other words: avoid inconsistent specifications.
(2) Is the program we constructed or obtained (bought, stolen...) computing the correct solution of the problem? Again, an answer to a question of this kind has economic consequences. In other words: should we pay? Is it worth its price?
(3) Is the given program the best solution to the problem? In other words: shall we pay again for a better product?
It has been pointed out several times that answers to questions (2) and (3) can be obtained only by proving. A proof is also required for a negative answer to question (1). It is obvious that in case (1) an appropriate construction, i.e. an appropriate experiment, is of importance. Neither theoretical considerations nor experiments can pretend to be the one and only method of working on software. The role of experiment stems from the fact that our programs are to work, to be applied in someone's job. The importance of theory follows from the mass application of software and from the fact that no single experiment can decide that the program in question is error-free and will work reliably in all possible situations. An experiment can help in the detection of an error, but it can never be used as an argument that a program is correct. Coming back to our example of the construction of a bridge, we think that one should proceed similarly with programs: during design and construction one should use all the accessible theoretical knowledge and prove that the program has the specified properties, and later one should experiment with it. For it is experiment which can help us learn the complexity of an algorithm, e.g. to estimate the coefficients of a polynomial describing the time needed to compute the solution (if it is a polynomial).
data structures and modules of programs that correspond to them. This ap-
plication of algorithmic logic seems more important than proving properties
of algorithms.
Algorithmic logic, its axioms and inference rules are helpful in defining
the semantics of a programming language.
We claim that for work with programs the tools offered by first-order logic are not satisfactory. First-order logic is not able to express properties of programs. It turns out that certain properties of algorithms are equivalent to mathematical properties known to be inexpressible in the language of first-order logic. The list of these properties is long, and on it one finds properties basic for mathematics, e.g. "to be a natural number", "to be an Archimedean ring", etc. In the sequel we present these properties and show that they are in fact algorithmic properties. Mathematical logic has developed many non-classical variants, e.g. infinitary logic, which admits enumerable disjunctions, logics of higher order, etc. These logics are strong enough to express all the properties not expressible in first-order logic. The majority of interesting properties of programs can be studied in these logics. For example, the property "the program while g do K od has no infinite computations" can be expressed as an infinite disjunction of formulas, the n-th formula (they grow in length!) saying that the computation will have n iterations of the instruction K. Similarly, one can express this property of programs using quantifiers that operate on finite sets (this is weak second-order logic). It seems to us that the application of such formalisms is not especially attractive to programmers. For it would imply the necessity, first, of translating software problems into the language of, say, second-order logic, next, of conducting the analysis of the problem, and finally, of translating the results back into the programmer's language. Even if all the steps of the described procedure were fully mechanizable, it would be something unnatural and artificial for a programmer.
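The infinite disjunction just mentioned has a direct computational reading: its n-th disjunct says that the loop stops within n iterations of the body. The following sketch (our own illustration, with guards and loop bodies modelled as functions on states) checks exactly this bounded property:

```python
def halts_within(guard, body, state, n):
    """Does `while guard do body od`, started in `state`, stop
    after at most n executions of the body?  (Illustrative helper.)"""
    for _ in range(n + 1):
        if not guard(state):
            return True          # the guard is false: the loop terminates here
        state = body(state)
    return False                 # the guard was still true after n iterations

# while x > 0 do x := x - 1 od, started with x = 3,
# needs exactly three iterations of the body.
```

The halting property itself is the infinite disjunction of such checks over all n; no single first-order formula captures it.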
Therefore we propose algorithmic logic. Algorithmic logic plays for computer science the role which mathematical logic plays for mathematics. The expressions of algorithmic logic are constructed from programs and logical formulas. There is no need to use infinitary disjunctions or second-order quantifiers. The formulas of algorithmic logic have the power to express all the semantical properties of importance in a programmer's practice. Algorithmic logic AL supplies the tools needed to perform deductive reasoning. This makes it possible to analyze properties of programs a priori, before any computational experiment.
The present authors think that the importance of the proposed tools lies more in the area of documentation (specification), of discourse between the programmer and the user or commissioner of software, than in the area of proving programs. In chapter 5 we give a detailed example of such an application
structure. One program can have many different meanings, depending on the interpretation associated with the signs occurring in the program. If an interpretation is fixed, we can speak of the data structure in which the program computes. The remark that one program can be interpreted in various data structures is important not only to those who wish to learn about the nature of things. It also has a practical aspect: it enables us, namely, to use one algorithm in various contexts to achieve quite different goals. In this way we make a liaison with the advanced methodology of programming, with abstract data types, with object-oriented programming, etc.
How to describe the meaning of a program? We are convinced that the majority of us connect their own intuitions with the concept of a computational process. Here we use a definition of computation determined by a program, a data structure (i.e. an appropriate algebraic system) and a valuation of variables. A valuation of variables is a formal counterpart of the notion of a memory state. A data structure is a formal counterpart of the ALU (arithmetic-logical unit) of a computer. This notion is more general, more abstract, and therefore we can analyze the behavior of programs written in modern programming languages as well as assembler programs. A computation can be finite or infinite, successful or not (i.e. aborted), correct or not. These and other semantical properties can be enjoyed by computations. The semantical properties are of most interest to a user of a program. We have already mentioned that reducing the analysis of algorithms to experiments only is erroneous from the methodological point of view. Experiments, often called debugging, can help to detect certain facts. They cannot be evidence for a conclusion of the type "all computations of the program will be correct". How to solve the dilemma? We propose to express semantical properties of programs by formulas. Then, by studying the validity of formulas, we will be able to analyze semantics. We shall say that a semantical property SP is expressible by a formula α if the formula is valid if and only if the semantical property SP takes place. In the sequel we shall construct a language in which one can find programs as well as formulas expressing semantical properties of programs. In a natural way we shall come to the next problem: finding a deductive system which enables us to prove the validity of formulas. The logical calculus we are going to build will be determined by two sets: a set of axioms and a set of inference rules. These sets must be chosen in a way which guarantees consistency, that is, no false statement can be proved in the system. Moreover, for every valid formula there should exist a proof. This will allow us to replace the job of analyzing semantical properties of programs by the job of analyzing whether a formula is true or not, which in turn is replaced by the job of proving formulas.
The constructed calculus finds diverse applications: in the analysis of
2 Data structures
− a = 1 iff a = 0.
Elements of the set B0 are usually interpreted as the values truth (1) and
false (0). The operations ∪, ∩, − are known, respectively, as disjunction,
conjunction and negation.
Example 2.1.5 Let suc denote the successor function on the set of natural numbers N. Let 0 be a constant, i.e. a zero-argument operation on N, and = the equality relation on N. The system N = < N, suc, 0; = > is the well-known algebraic structure of type < 1, 0; 2 >. We call this structure the standard structure of the arithmetic of natural numbers.
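As a programmer's illustration (the dictionary and its key names below are ours), the structure of example 2.1.5 can be written down as a collection of operations and relations:

```python
# The standard structure of arithmetic of example 2.1.5, rendered as a
# dictionary of its operations and relations (the key names are ours).
N_struct = {
    "suc":  lambda n: n + 1,     # the one-argument successor operation
    "zero": 0,                   # the zero-argument operation, i.e. a constant
    "eq":   lambda m, n: m == n  # the two-argument equality relation
}
# The type <1, 0; 2> of the structure records exactly these arities.
```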
Definition 2.1.6 Two algebraic structures A and B will be called similar iff
their types are the same, i.e. structures
A =< A, f1 , ..., fk ; r1 , ..., rl > and B =< B, f1′ , ..., fs′ ; r1′ , ..., rt′ >
are similar if and only if
(a) k = s, l = t,
(b) fi′ is an ni -argument operation on B iff fi is an ni -argument operation
on A, for each 1 ≤ i ≤ k , and
(c) rj′ is an mj -argument relation in B iff rj is an mj -argument relation on
A , for each 1 ≤ j ≤ l.
If algebraic structures A and B, defined as above, are similar, then fi and fi′ ,
for 1 ≤ i ≤ k, are corresponding operations, and rj and rj′ , for 1 ≤ j ≤ l,
are corresponding relations in A and in B, respectively .
Example 2.1.9 Let us consider the algebraic structure < A∗, ◦ ; ≺ > (cf. example 2.1.4) and the system < N, +; ≤ > with the natural interpretation of + and ≤. The function h : A∗ −→ N such that h(ε) = 0, where ε denotes the empty word,
h(x) = 1 for x ∈ A,
h(w ◦ x) = h(w) + h(x) for any w ∈ A∗ and x ∈ A,
is a homomorphism which maps A∗ into N.
Conditions (1) and (2) of the definition are satisfied in the obvious way. For the proof of property (3) let us take any w1, w2 ∈ A∗. We then have
w1 ≺ w2 =⇒ (∃w3) w2 = w1 ◦ w3 =⇒ (∃w3) h(w2) = h(w1) + h(w3) =⇒ h(w1) ≤ h(w2),
which ends the proof.
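The homomorphism h of example 2.1.9 is simply the word-length function. The following sketch (our own, with words modelled as Python strings) checks the homomorphism conditions on a few sample words:

```python
def h(word):
    # the homomorphism of example 2.1.9: a word is mapped to its length
    return len(word)

def is_prefix(w1, w2):
    # w1 ≺ w2 iff w2 = w1 ∘ w3 for some word w3
    return w2.startswith(w1)

for w1 in ["", "a", "ab", "abc", "ba"]:
    for w2 in ["", "a", "ab", "abc", "ba"]:
        assert h(w1 + w2) == h(w1) + h(w2)   # h respects concatenation
        if is_prefix(w1, w2):
            assert h(w1) <= h(w2)            # h respects the relation ≺
```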
relation rj in A,
(a1, ..., amj) ∈ rj iff (h(a1), ..., h(amj)) ∈ sj,
then h is called an isomorphism.
If there exists an isomorphism from A onto the structure B, then A and B are called isomorphic structures, abbreviated A ≊ B.
Σ_{i≤n} 2^i · ((a_i + b_i + p_{i−1}) − 2·((a_i + b_i + p_{i−1}) div 2)) =
Σ_{i≤n} 2^i · a_i + Σ_{i≤n} 2^i · b_i + Σ_{i≤n} 2^i · (p_{i−1} − 2·((a_i + b_i + p_{i−1}) div 2))

(a_n + b_n + p_{n−1}) div 2 = 1 if (Σ_{i≤n} 2^i · a_i + Σ_{i≤n} 2^i · b_i) ≥ m, and 0 otherwise.

Finally,
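The equations above describe the addition of binary representations digit by digit: the i-th digit of the sum is s_i = (a_i + b_i + p_{i−1}) − 2·((a_i + b_i + p_{i−1}) div 2), i.e. (a_i + b_i + p_{i−1}) mod 2, and the carry is p_i = (a_i + b_i + p_{i−1}) div 2. A sketch of this computation (the representation and helper names are ours):

```python
# Binary addition with carries: numbers are lists of bits,
# least significant bit first (an illustrative representation).
def add_bits(a, b):
    p = 0                       # p_{-1} = 0: no incoming carry
    s = []
    for ai, bi in zip(a, b):
        t = ai + bi + p
        s.append(t % 2)         # the digit: t - 2*(t div 2)
        p = t // 2              # the carry: t div 2
    return s, p                 # the digits and the final carry

bits = lambda n, w: [(n >> i) & 1 for i in range(w)]   # n as w bits
num  = lambda s: sum(d << i for i, d in enumerate(s))  # bits back to a number
```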
Example 2.1.13 Consider the algebraic structure < N, +, ∗, 0 > and for
some fixed number k a binary relation ≈k on N such that
m ≈k n iff m mod k = n mod k, for all n, m ∈ N .
The relation ≈k is reflexive, symmetric, transitive and therefore is an equiv-
alence relation on the set N . The very simple verification of this fact is left
to the reader. The relation ≈k is a congruence on the system < N, +, ∗, 0 >.
Let us briefly check this. Suppose that m1 ≈k n1 and m2 ≈k n2; then there are natural numbers i1, i2, j1, j2, r1, r2 such that
m1 = i1 ∗ k + r1,  m2 = i2 ∗ k + r2,
n1 = j1 ∗ k + r1,  n2 = j2 ∗ k + r2.
Thus m1 + m2 = (i1 + i2) ∗ k + r1 + r2,
n1 + n2 = (j1 + j2) ∗ k + r1 + r2.
The last two equations imply that (m1 + m2) mod k = (n1 + n2) mod k.
Analogously, for the multiplication ∗,
m1 ∗ m2 = (i1 ∗ i2 ∗ k + r1 ∗ i2 + r2 ∗ i1) ∗ k + r1 ∗ r2,
n1 ∗ n2 = (j1 ∗ j2 ∗ k + r1 ∗ j2 + r2 ∗ j1) ∗ k + r1 ∗ r2.
Dividing m1 ∗ m2 by k we obtain the same remainder as dividing r1 ∗ r2 by k, which is equal, according to the above remarks, to the remainder obtained from dividing n1 ∗ n2 by k. Hence (m1 ∗ m2) mod k = (n1 ∗ n2) mod k.
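The congruence property of example 2.1.13 can also be checked mechanically. The sketch below (the modulus and the sample values are ours) tests that ≈k is preserved by + and ∗:

```python
K = 5  # a fixed modulus (our choice for the illustration)

def cong(m, n):
    # m ≈_k n iff m mod k = n mod k
    return m % K == n % K

# ≈_k is a congruence of <N, +, *, 0>: it is preserved by + and *.
for m1, n1, m2, n2 in [(2, 7, 3, 13), (1, 6, 4, 9), (0, 10, 8, 3)]:
    assert cong(m1, n1) and cong(m2, n2)   # the sample pairs are related
    assert cong(m1 + m2, n1 + n2)          # ... and so are their sums
    assert cong(m1 * m2, n1 * n2)          # ... and their products
```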
Example 2.1.14 Consider the system < A∗, ◦ ; ≺ > and the homomorphism h : A∗ −→ N defined as in example 2.1.9. Let ≈ be the equivalence relation defined as follows: for any u, w ∈ A∗, u ≈ w iff h(u) = h(w). For u ≈ u′ and w ≈ w′ we have h(u ◦ w) = h(u) + h(w) = h(u′) + h(w′) = h(u′ ◦ w′), hence u ◦ w ≈ u′ ◦ w′. Nevertheless, ≈ is not a congruence relation when A contains more than one element. Indeed, if x, y are different elements of A, then although x ≈ x, xx ≈ yy and x ≺ xx, it is not the case that x ≺ yy.
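The counterexample can be replayed directly (with words as strings; the helper definitions are ours):

```python
x, y = "x", "y"                             # two different letters of A
same_len = lambda u, w: len(u) == len(w)    # u ≈ w iff h(u) = h(w)
prefix   = lambda u, w: w.startswith(u)     # u ≺ w

assert same_len(x, x) and same_len(x + x, y + y)
assert prefix(x, x + x)                     # x ≺ xx holds ...
assert not prefix(x, y + y)                 # ... but x ≺ yy does not
```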
In the sequel, when the names of relations and functions are not important, we shall write for short A = < A, ΩF ; ΩR > to denote an algebraic structure with a sequence of functions ΩF and a sequence of relations ΩR in A.
Most applications deal with algebraic structures which are many-sorted. A structure is many-sorted if its universe is the union of disjoint subsets called sorts. Different arguments of a function or of a relation in such a system may belong to different sorts. As an example, consider a vector space, in which we have two kinds of sets: Vectors and Scalars (components of vectors). The operation of scalar multiplication is then defined on the cartesian product Vectors × Scalars.
When specifying functions in a many-sorted structure with elements of different sorts, we must declare not only the arity of an operation but also the sorts of its arguments. This will be called the type of the operation (or relation). In our example the type of scalar multiplication is (Vectors × Scalars −→ Vectors).
Definition 2.1.22 The system A/ ∼=< B, Ω∗F ; Ω∗R >, where B = B1 ∪...∪Bt ,
is called a many-sorted quotient structure.
h(f (a1 , ..., an )) = f ∗ (h(a1 ), ... , h(an )) = f ∗ ([a1 ], ..., [an ]) = [f (a1 , ..., an )]
insert(e, d) = d ∪ {e},
delete(e, d) = d \ {e}, when d ≠ ∅,
d ∈ empty iff d = ∅,
(e, d) ∈ member iff e ∈ d.
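These four operations translate almost verbatim into code. A sketch using Python sets (the partiality of delete on the empty set is mirrored here by raising an error):

```python
# The finite-set operations listed above, as a direct sketch.
def insert(e, d):
    return d | {e}

def delete(e, d):
    if not d:
        # delete is defined only when d is non-empty
        raise ValueError("delete is undefined on the empty set")
    return d - {e}

def empty(d):
    return d == set()

def member(e, d):
    return e in d
```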
Queues are data structures often used in simulation programs where the
performance of real systems is modelled and analyzed. In operating systems
a queue of processes is created to solve access conflicts to the same port, e.g.
to a line printer.
Below we present two kinds of tree data structures which can be used, for example, in the evaluation of the value of an expression, or for data management.
Definition 2.2.5 By the standard binary tree data structure we shall mean a two-sorted algebraic structure
< At ∪ Tree, cons, left, right; atom, empty >,
such that
cons is an operation of type (Tree × Tree −→ Tree),
left and right are operations of type (Tree −→ Tree),
empty and atom are relations of type (Tree), i.e. subsets of the set Tree,
and, for t ∈ Tree and t1, t2 ∈ Tree \ {nil},
cons(t1, t2) = (t1 ◦ t2), Dom(cons) = (Tree \ {nil}) × (Tree \ {nil}),
left((t1 ◦ t2)) = t1, Dom(left) = Tree \ {nil}
Below we present another type of binary tree which finds wide application
in sorting.
Let Et be a set linearly ordered by some relation ≺. Elements of the set Et will be called labels. Let Tr be the smallest set which contains the expression ( ) and such that, if e ∈ Et and t1, t2 ∈ Tr, then (t1 e t2) ∈ Tr. Elements of the set Tr are called labeled binary trees (trees with information at their nodes). If a tree t has the form (t1 e t2), then t1 is called its left subtree and t2 its right subtree. The element e is called the label of t.
Let BST be the set of all binary search trees over the set Et. Each element t of this set is a finite binary tree, each vertex of which is labeled by an element of the set Et. For every subtree t′ of the tree t, the label associated with the root of t′ is greater than any label associated with a node of its left subtree. Similarly for right subtrees: any label associated with a node of the right subtree is greater than the label of the root of the subtree.
Example 2.2.7 Figure 2.2 represents a binary search tree labeled by natural
numbers. The binary tree which is illustrated in Figure 2.3 is not a binary
search tree.
Fig.2.2
Fig.2.3
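The binary-search property just described can be stated as a short recursive check. In the sketch below (the representation is ours) a labeled tree (t1 e t2) is a triple (t1, e, t2) and the empty tree ( ) is the empty tuple:

```python
def is_bst(t, lo=None, hi=None):
    """Is t a binary search tree?  lo/hi are the bounds imposed
    by the labels of the ancestors (None means no bound)."""
    if t == ():
        return True
    t1, e, t2 = t
    if lo is not None and e <= lo:     # e must exceed every label on the left path
        return False
    if hi is not None and e >= hi:     # e must stay below every label on the right path
        return False
    return is_bst(t1, lo, e) and is_bst(t2, e, hi)

good = (((), 1, ()), 2, ((), 3, ()))   # as in a tree like Figure 2.2
bad  = (((), 3, ()), 2, ((), 1, ()))   # the ordering is violated
```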
Definition 2.2.8 By the standard binary search tree data structure (or, for short, BST structure) we shall mean the two-sorted algebraic structure
< Et ∪ BST, val, left, right, new, upl, upr; isnone, ≺, = >,
where
val is a one-argument operation of type (BST −→ Et),
val(t1 e t2) = e, Dom(val) = BST \ {( )},
left, right are operations of type (BST −→ BST),
left(t1 e t2) = t1,

upr(t, t1 e t2) = (t1 e t) if (t1 e t) ∈ BST, and is undefined otherwise,
In what follows we shall assume that the set of individual variables consists of the non-empty disjoint sets Vi, 1 ≤ i ≤ tt, for some natural number tt. If x ∈ Vi, then we shall say that x is a variable of type i. Each predicate ρ ∈ P has a defined type t(ρ) of the form (i1 × ... × in → 0), where n is the number of arguments (the arity of the predicate), and i1, ..., in are the types of the arguments of the predicate ρ, 1 ≤ i1, ..., in ≤ tt. Each functor φ ∈ Ψ has a defined type t(φ) of the form (i1 × ... × in → in+1), where n is its arity, and i1, ..., in are the types of its arguments, 1 ≤ i1, ..., in, in+1 ≤ tt. We shall assume, moreover, that the alphabet is (at most) a countable set.
Definition 2.3.2 The triple < tt, {t(φ)}φ∈Ψ , {t(ρ)}ρ∈P > will be called the
type of the language.
The set of all proper expressions of a first order language consists of the set
of terms and the set of formulas. Using terms, we can define new functions.
Similarly, formulas can be considered as definitions of new predicates.
Definition 2.3.3 The set of terms T over the alphabet Alph is the smallest
set which contains the set of individual variables V and such that, if φ is a
functor of type (i1 × ... × in → in+1 ) and τ1 , ..., τn are terms of types i1 , ..., in
respectively, then φ(τ1 , ..., τn ), is a term of type in+1 .
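Definition 2.3.3 suggests a recursive well-formedness check. In the sketch below (the signature and the variable typing are a toy example of our own) a term is either a variable or a tuple consisting of a functor and its argument terms:

```python
# A toy signature: functor -> (tuple of argument types, result type),
# and a typing of variables (both are our own illustrative choices).
SIG = {"+": ((1, 1), 1), "suc": ((1,), 1)}
VAR_TYPE = {"x": 1, "y": 1}

def type_of(term):
    """Return the type of a well-formed term, or None if it is ill-formed."""
    if isinstance(term, str):                 # a variable
        return VAR_TYPE.get(term)
    f, *args = term                           # a functor applied to terms
    arg_types, result = SIG[f]
    if tuple(type_of(a) for a in args) != arg_types:
        return None                           # wrong number or types of arguments
    return result
```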
Definition 2.3.4 The set of formulas F over the alphabet A is the smallest
set of expressions which contains all propositional variables and all expres-
sions of the form ρ(τ1 , ..., τm ), (called elementary formulas) where ρ is a
predicate of type (i1 × ... × im → 0), and τ1 , ..., τm are arbitrary terms of types
i1 ,...,im , respectively, and such that
Example 2.3.7 Consider the first-order language over the alphabet described in example 2.3.1 and the formula α of the form (β ∨ δ), where
β = (∀x)(∃y)((x + y) ∗ z > y), δ = (x > y ∗ (x + y)).
The variable z is free in β and the variables x, y are bound in β. All variables that occur in the formula δ are free. In the formula α there are three free variables x, y, z and two bound variables x, y. Notice that the same variable may occur in one formula both free and bound:
((∀x)(∃y)((x + y) ∗ z > y) ∨ (x > y ∗ (x + y)))
where the occurrences of x and y in the first disjunct are bound and those in the second are free.
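The free variables of a formula can be computed by recursion on its construction, removing the quantified variable in the quantifier case. A sketch replaying the example (the tuple representation of formulas is ours):

```python
# Formulas as tuples: ("pred", name, variables...), ("or", a, b),
# ("forall", x, a), ("exists", x, a) -- an illustrative encoding.
def free_vars(phi):
    op = phi[0]
    if op == "pred":
        return set(phi[2:])                       # all arguments are variables here
    if op in ("or", "and", "implies"):
        return free_vars(phi[1]) | free_vars(phi[2])
    if op in ("forall", "exists"):
        return free_vars(phi[2]) - {phi[1]}       # the quantifier binds phi[1]
    raise ValueError(op)

beta  = ("forall", "x", ("exists", "y", ("pred", ">", "x", "y", "z")))
delta = ("pred", ">", "x", "y")
alpha = ("or", beta, delta)
```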
Remark 2.3.9 Different alphabets determine different sets of terms and for-
mulas, and hence, different first-order languages. The above definition, thus,
describes the class of first-order languages.
We use the following denotations: true for the formula (p ∨ ¬p) and
false for the formula (p ∧ ¬p), where p is a propositional variable of the
φ(τ1, ..., τn)A(v) = φA(τ1A(v), ..., τnA(v)) if τ1A(v), ..., τnA(v) and φA(τ1A(v), ..., τnA(v)) are defined, and is undefined otherwise.
Let us review:
- the value of a simple term of the form x is given by its valuation in the structure A,
- the value of a term τ = φ(τ1, ..., τn) is the element φA(a1, ..., an), if the functions τ1A, ..., τnA have defined values at the valuation v, equal respectively to a1, ..., an, and (a1, ..., an) belongs to the domain of the partial function φA.
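The two clauses above give a recursive evaluator for terms. In the sketch below (the operations and the representation are ours) partiality is modelled by returning None for an undefined value:

```python
# A toy interpretation: "/" is a partial operation (undefined for divisor 0).
OPS = {"+": lambda a, b: a + b,
       "/": lambda a, b: a // b if b != 0 else None}

def value(term, v):
    """The value of a term at the valuation v, or None when undefined."""
    if isinstance(term, str):              # a simple term: a variable
        return v.get(term)
    f, *args = term
    vals = [value(a, v) for a in args]
    if any(x is None for x in vals):       # an argument is undefined
        return None
    return OPS[f](*vals)                   # may itself be None (partiality)
```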
Similarly we can determine the meaning of formulas. The formal defini-
tion is preceded by an example.
Example 2.3.12 Let L and A be the language and data structure defined in
example 2.3.10. Assume moreover, that we have a two-argument predicate ρ
interpreted in A as <. Consider the formula
(ρ(ψ(x, x), ψ(y, y)) =⇒ ρ(x, y)).
Interpreting it in the structure A, we obtain the expression (x ∗ x < y ∗ y =⇒
x < y), which can be understood as a definition
- of a binary relation which holds for real numbers a, b iff a < b or |a| ≥ |b| ,
or
- of a two-argument function f (x, y), which takes on the value 1 ∈ B0 iff
x < y or |x| ≥ |y| .
Remark 2.3.14 (1) Every formula is a finite expression. Thus its value
depends only on the finite number of values of variables which occur in the
formula.
(2) If we consider a first-order language L= (i.e. the binary predicate = be-
longs to our language) then for any valuation v in any data structure for L=
, the following holds
A, v |= τ = τ iff v ∈ Dom(τ ).
In other words, τ = τ is valid in A only for those valuations for which the
value of the term τ in the structure A is defined.
symbol ≤.
Example 2.3.19 Let α(x) denote a formula in the language L= with one
free variable x. Let α(x/τ ) be the formula obtained from α by simultaneous
replacement of all occurrences of x by τ . We shall prove that each data
structure for L= is a model for all formulas of the form
Example 2.4.5 (a) Let α denote the following formula in a certain language L= with equality:
(∃x1)...(∃xn)(∀x)(x = x1 ∨ ... ∨ x = xn).
The formula α has the following property: for every data structure A, A |= α iff the universe of the data structure A contains at most n elements.
Indeed, consider a data structure A such that its universe has cardinality greater than n. Let v be an arbitrary valuation in A and let a be an element of A such that v(xi) ≠ a for 1 ≤ i ≤ n. Then
A, v_a^x |= (x ≠ x1 ∧ ... ∧ x ≠ xn)
and consequently
non A, v |= (∀x)(x = x1 ∨ ... ∨ x = xn).
The valuation v was arbitrary, hence
non A, v |= (∃x1)...(∃xn)(∀x)(x = x1 ∨ ... ∨ x = xn), i.e. non A |= α.
Conversely, if non A |= α, then (since α is a closed formula) for every valuation v, non A, v |= α. Therefore non A, v |= (∀x)(x = x1 ∨ ... ∨ x = xn) for every v. In accordance with the definition of the quantifiers, for every valuation v there exists an element a such that non A, v_a^x |= (x = x1 ∨ ... ∨ x = xn). Finally, for every sequence of values v(x1), ..., v(xn) there exists an element a such that a ≠ v(xi), for i ≤ n, i.e. card(A) > n.
(b) Consider the formula β of the form
(∃x1)...(∃xn)(x1 ≠ x2 ∧ x1 ≠ x3 ∧ ... ∧ x1 ≠ xn ∧ x2 ≠ x3 ∧ ... ∧ x2 ≠ xn ∧ ... ∧ xn−1 ≠ xn).
The formula β is valid in those and only those data structures in which the universe has at least n elements. Hence the formula β expresses the property "the universe of the system has at least n elements". The proof of this fact is left to the reader.
Putting together the two formulas we can define the class of data struc-
tures that have exactly n elements since the conjunction (α ∧ β) expresses
this property.
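In a finite structure the formula of part (a) can be checked by brute force: it holds iff some choice of values for x1, ..., xn covers the whole universe. A sketch (the function name is ours):

```python
from itertools import product

def at_most_n(universe, n):
    """Does (∃x1)...(∃xn)(∀x)(x = x1 ∨ ... ∨ x = xn) hold in a finite
    structure with this universe?  True iff the universe has <= n elements."""
    return any(all(x in xs for x in universe)       # (∀x) x is one of the xi
               for xs in product(universe, repeat=n))  # (∃x1)...(∃xn)
```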
Theorem 2.4.6 Let A and B be data structures for the language L. Let h be an isomorphism mapping structure A onto structure B, h : A → B. Consider a term τ and a valuation v in A; then τA is defined at the valuation v iff τB is defined at the valuation h(v), and when τA(v), τB(h(v)) are defined, we have
h(τA(v)) = τB(h(v)) (2.1)
Moreover, for every formula α and for every valuation v,
A, v |= α ⇔ B, h(v) |= α (2.2)
3 Deterministic Iterative Programs
In this chapter we discuss the simplest class of programs which is rich enough to define every computable function. We call it the class of iterative programs, or the class of deterministic while-programs, and denote it by P. For this class we shall present some logic-oriented methods which allow us to analyze programs and data structures.
Our considerations are based on a language of first-order logic FOL with equality. This language is, however, too weak to describe the more complicated properties of programs; e.g. the halting property is not expressible in the language. Consequently, we shall extend the language by allowing expressions which contain programs as well as classical terms and formulas. These expressions, called algorithmic formulas, allow us to express all important properties of programs, as well as properties of data structures which are not expressible in a language of first-order logic.
3.1 Programs
From the very beginning, the development of computers was accompanied by the rapid development of different programming language concepts. Some of them are of universal character, whereas others serve as tools for highly specialized problems. They often differ not only from a formal point of view but also, more essentially, in the tools (instructions) offered to programmers. Moreover, this process is continuing and will continue: new programming languages keep appearing, with new programming constructs whose aim is to make the process of program construction easier, e.g. processes, coroutines, signals, etc. Within this large population of programming languages
z := (x + y) − z and q := (x ≤ y =⇒ x ≥ (y − z))
Fig. 3.3
Analogously, if the diagrams of programs K and M are as in figure 3.2, then the graphs given in Fig. 3.4 and Fig. 3.5 are illustrations of the programs
"if g then K else M fi" and "while g do K od",
respectively.
Fig. 3.4
Fig. 3.5
To end the section, we observe that the set of programs P forms an algebra with the two-argument operations of composition ◦ and branching ⊻γ, and the one-argument operation of iteration ∗γ, for γ ∈ Fo, defined as follows:
K ◦ M =df begin K; M end
K ⊻γ M =df if γ then K else M fi
∗γ M =df while γ do M od
The set of all assignment instructions is the set of generators of this algebra. By this we mean that every program from the set P can be constructed from assignments by means of the program operations ◦, ⊻γ, ∗γ.
The algebraic character of the set P stresses the modular character of the class of programs.
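The three program operations and the assignment generators can be rendered as constructors of program terms (a sketch; the tuple representation and the constructor names are ours):

```python
# Every while-program is built from assignments by these three operations.
def compose(K, M):            # K ∘ M  =df  begin K; M end
    return ("seq", K, M)

def branch(gamma, K, M):      # K ⊻γ M  =df  if γ then K else M fi
    return ("if", gamma, K, M)

def iterate(gamma, M):        # ∗γ M  =df  while γ do M od
    return ("while", gamma, M)

def assign(x, expr):          # the generators: assignment instructions
    return ("assign", x, expr)
```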
Henceforth we will use the following notations and abbreviations:
- instead of i iterations of the program K,
begin K; K; ...; K end   (i times)
we shall write K^i,
- for an arbitrary variable x, instead of the instruction
if g then K else x := x fi
we shall write for short if g then K fi.
3.2 Semantics
In order to define the semantics of a programming language one should first
fix an interpretation of the language which is used to define programs. This
means that the relational system or data structure corresponding to the first
order language FOL must be given. The interpretation of terms and formulas
which occur in programs is then defined as in section 2.3. The second impor-
tant part of the semantics is the interpretation of programming constructs.
Using the algebraic structure of the class P we can define the interpretation of the more complicated instructions by building it up from the interpretations of their simpler parts.
Although there is no common consensus on what the basic programming language constructs are and what their meanings are, there is a common conception which consists in associating with a program a binary relation which describes the connection between the input and output states of the computer memory. We shall call such a relation the input-output relation. Since, in our approach, the memory state is just a valuation of variables (cf. def. 2.3.6), this relation is defined on the set of all valuations. The state of the computer memory before the execution of a program is called the initial valuation, and the state of the computer memory after execution of a program the output valuation.
We shall define the input-output relation with respect to the structure of
the program. The method presented here is called operational semantics of
programming languages.
Let M be a program and let A be a data structure of the language L. If valuations v, v′ are in the input-output relation determined by the program M in the data structure A, then we write this as
v --M_A--> v′
or without the index A when the structure is fixed.
The inductive definition of the input-output relation is given below:

v --(x := w)_A--> v1   iff   v ∈ Dom(w_A), v1(x) = w_A(v) and v1(z) = v(z) for z ≠ x,

v --(begin K; M end)_A--> v1   iff   (∃v′) v --K_A--> v′ and v′ --M_A--> v1,

v --(if γ then K else M fi)_A--> v1   iff   A, v |= γ and v --K_A--> v1, or A, v |= ¬γ and v --M_A--> v1,

v --(while γ do M od)_A--> v1   iff   either A, v |= ¬γ and v1 = v, or A, v |= γ and (∃v′) v --M_A--> v′ and v′ --(while γ do M od)_A--> v1,

for all valuations v, v1 in the data structure A.
The way in which we have determined the meaning of the programs en-
ables us to observe the process by which the initial data are transformed
into the result. We shall call this process the computation of a program. In
the sequel, we define formally the notion of computation and two auxiliary notions: configuration and the direct successorship relation.
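These notions can be sketched concretely (our own rendering, not the book's formal definitions): a configuration pairs a valuation with the list of instructions still to be executed, step computes the direct successor of a configuration (unique, in accordance with Remark 3.2.3), and a computation iterates step until no successor exists. Programs are encoded as nested tuples; all names here are ours.

```python
# Configurations <v, K1;...;Kn> as (valuation, list of program terms).
# Program terms: ('assign', x, w), ('seq', K, M), ('if', g, K, M), ('while', g, K).

def step(conf):
    """Return the unique direct successor configuration, or None if none exists."""
    v, prog = conf
    if not prog:
        return None                       # final configuration: no successor
    head, rest = prog[0], prog[1:]
    kind = head[0]
    if kind == 'assign':                  # <v, x := w; K>  ->  <v', K>
        _, x, w = head
        v2 = dict(v)
        v2[x] = w(v)
        return (v2, rest)
    if kind == 'seq':                     # begin K; M end unfolds to K; M
        _, K, M = head
        return (v, [K, M] + rest)
    if kind == 'if':                      # evaluate the test, pick a branch
        _, g, K, M = head
        return (v, [K if g(v) else M] + rest)
    if kind == 'while':                   # unfold one iteration, or exit
        _, g, K = head
        return (v, [K, head] + rest) if g(v) else (v, rest)

def run(v, program):
    """The computation: iterate direct successors until a final configuration."""
    conf = (v, [program])
    while conf[1]:
        conf = step(conf)
    return conf[0]                        # the output valuation
```

Determinism of the class of programs shows up directly: step returns at most one successor, so a computation is a uniquely determined sequence of configurations.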
Remark 3.2.3 For a given configuration c = ⟨v, K1; ...; Kn⟩ there exists at most one configuration c′ such that c --A--> c′.
Remark 3.2.6 For any data structure A, for any valuation v and for any program M,
(v, v′) ∈ M_A iff v --M_A--> v′.
Remark 3.2.8 Each partial recursive function (cf. Grzegorczyk [21]) is pro-
grammable in a data structure of the natural numbers.
Example 3.2.9 Let us consider the function div(x, y) in the data structure ⟨N, 0, s; ≤, =⟩. The following program M defines this function:
begin
r := x; i := 0;
while y ≤ r
do
u := y; j := 0;
while r ≠ u
do
u := s(u); j := s(j)
od;
r := j; i := s(i)
od
end
In fact, for any valuation v, if v(x) < v(y), then the value of the variable i
after execution of the program is 0; otherwise, i is the greatest natural num-
ber n such that n ∗ v(y) ≤ v(x).
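A direct simulation of the program M (function name ours; Python integers stand for the elements of N, and + 1 for the successor s):

```python
def div_program(x, y):
    """Simulate Example 3.2.9: only the successor and the test <= are used."""
    r, i = x, 0
    while y <= r:
        u, j = y, 0
        while r != u:              # count successor steps from y up to r,
            u, j = u + 1, j + 1    # so that j = r - y on exit
        r, i = j, i + 1            # r := r - y, without a subtraction functor
    return i, r                    # i = x div y, and r is the remainder
```

As with the program itself, the simulation does not halt for y = 0, so only positive divisors are meaningful inputs.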
Definition 3.2.10 We say that a relation r(x1, ..., xm) of type (i1 × ... × im) on the data structure A is programmable (in the class P) iff there exists a program M ∈ P which uses at least m variables x1, ..., xm and a propositional variable q, and is such that, for arbitrary aj ∈ A of type ij (for 1 ≤ j ≤ m) and any valuation v such that v(xi) = ai for i ≤ m, (a1, ..., am) ∈ r iff there exists a finite successful computation of M at the initial valuation v and v′(q) = 1 for v′ = M_A(v).
EXAMPLE 3.2.3
begin
u := x;
w := y;
q := true;
while ((q ∧ u ̸= y) ∨ (¬q ∧ x ̸= w))
do
q := ¬ q;
if q then u := s(u) else w := s(w) fi
od
end
Observe that the program M halts for arbitrary initial data in the data structure N (i.e. for all initial values of x and y).
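Simulating this program (function name ours) confirms both claims: it halts for all natural x, y, and the final value of q is true exactly when x ≤ y, so the alternation trick makes the relation ≤ programmable using only the successor and equality:

```python
def leq_by_alternation(x, y):
    """Simulate Example 3.2.3: q toggles, and u, w climb alternately."""
    u, w, q = x, y, True
    while (q and u != y) or (not q and x != w):
        q = not q
        if q:
            u = u + 1              # u := s(u), checked against y when q is true
        else:
            w = w + 1              # w := s(w), checked against x when q is false
    return q
```

If x < y the loop ends when u reaches y (with q true); if x > y it ends when w reaches x (with q false); if x = y it never starts.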
Let Vout(M) denote the set of variables which occur on the left-hand side of assignment instructions of the program M. By the definition of a computation,
M_A(v) = v off Vout(M).
It is a simple observation that the result does not depend on the values of the variables which occur only on the left-hand side of assignment instructions. They play the role of auxiliary variables. By analogy with bound variables in formulas, we can call them the bound variables of the program.
Let Vin(M) denote the variables in M which occur in tests or on the right-
hand side of assignment instructions. Changes to the values of these variables
may change the result and, moreover,
v = v′ off (V − Vin(M)) implies M_A(v) = M_A(v′).
Variables from the set Vin(M) are similar to free variables in formulas and
we call them free variables of the program. Note that the sets Vin(M) and
Vout(M) are not necessarily disjoint.
begin
z := x;
u := y;
while z ̸= 0 ∧ u ̸= 0
do
if z > u then z := z − u else u := u − z fi
od;
if z=0 then z := u fi
end.
All variables that occur in the program are of the same type, > is a two-
argument predicate, − is a two-argument functor, and 0 is a constant.
(1) Let
A = ⟨Z, 0, −; >, =⟩
be the structure of the integers with the constant 0, one two-argument function
− (minus) and two binary relations > (greater than) and = (the equality
relation). If v defines the initial data which satisfy v(x) ̸= 0 and v(y) ̸= 0,
then, after execution of M, the value of z is the greatest common divisor of v(x) and v(y).
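A simulation of the subtraction-based program above (function name ours). As stated, for initial data with v(x) ≠ 0 and v(y) ≠ 0 — the test below uses positive integers — the computation is finite and z ends up as the greatest common divisor:

```python
def gcd_program(x, y):
    """Simulate the gcd program: repeated subtraction of the smaller value."""
    z, u = x, y
    while z != 0 and u != 0:
        if z > u:
            z = z - u
        else:
            u = u - z
    if z == 0:                     # exactly one of z, u is zero here
        z = u
    return z
```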
program M does not loop for v(x) ≥ 0. Additionally, the condition controlling the loop in the program M, i.e. |y_{i+1} − y_i| < δ, guarantees that the approximation error is small enough: as a consequence we have |y − √x| < ε.
begin
z := 1;
u := x; w := y;
while w > 0
do
if even(w)
then
u := u ∗ u; w := w/2
else
z := z ∗ u; w := w − 1
fi
od
end
Consider the structure R of the real numbers with the usual interpretation of
the functors and predicates appearing in the program. Program M is partially
correct with respect to the precondition true and the postcondition (z = xy )
and is correct with respect to the precondition “y is a natural number“ and
the postcondition (z = xy ).
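The binary-exponentiation program can be simulated directly (function name ours; Python integer arithmetic stands in for the reals, and w // 2 for w/2 on an even w). The invariant z · u^w = x^y is maintained by both branches of the conditional:

```python
def power_program(x, y):
    """Simulate the program: z accumulates the result while w halves or decrements."""
    z, u, w = 1, x, y
    while w > 0:
        if w % 2 == 0:             # even(w)
            u, w = u * u, w // 2   # square the base, halve the exponent
        else:
            z, w = z * u, w - 1    # fold one factor into the result
    return z                       # z = x ** y for natural y
```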
3.3. SEMANTIC PROPERTIES OF PROGRAMS 49
in the structure of the real numbers R with the usual interpretation of the
symbols +, −, 0, 1, 2, =, ≥ .
The variable y takes on the values of the consecutive odd numbers. As a result, after execution of the i-th iteration, the values of the variables z, y are, respectively,
v(x) − 1 − 3 − 5 − ... − (2i − 1) and (2i + 1).
Since Σ_{1≤j≤i} (2j − 1) = i², the value of the variable z after the i-th iteration is v(x) − i². The “while“ instruction is executed until the difference v(x) − i² − (2i + 1) becomes less than zero, i.e. until we find a natural number n such that
(n + 1)² > v(x) and n² ≤ v(x).
The value of the variable z is equal to v(x) − ⌊√v(x)⌋², and the value of y is 2⌊√v(x)⌋ + 1. Briefly, if the condition v(x) > 0 holds for the initial data v, then the program M has a finite, successful computation, and the value of the variable z is the distance of v(x) from the greatest square of a natural number not exceeding v(x).
The formula
β ≡ (z = x − ⌊√x⌋²) ∧ (y = 2⌊√x⌋ + 1) ∧ x > 0
is the strongest postcondition of the formula α ≡ (x > 0) with respect to the program M in the data structure considered.
Indeed, if R, v |= x > 0, then, after execution of the program M , the valuation
MR (v) satisfies the formula β from the above analysis. The condition (1) of
the definition is thus satisfied.
Let δ be a formula such that, for any valuation v,
R, v |= α and v ∈ Dom(M_R) implies R, M_R(v) |= δ.
Consider any valuation v′ and let R, v′ |= β. Then
v′(y) = 2⌊√v′(x)⌋ + 1,
v′(z) = v′(x) − ⌊√v′(x)⌋²,
v′(x) > 0.
Hence R, v ′ |= α, and therefore the program M has a finite computation at
the initial valuation v ′ , i.e. v ′ ∈ Dom(MR ). Using our assumption, we then
have R, MR (v ′ ) |= (δ ∧ β). Since the behaviour of the program M depends
only on the value of the variable x, which, in fact, does not change during
the computation, the resulting valuation MR (v ′ ) may differ from the initial
valuation only on variables y or z. According to the above analysis, the values
of variables z, y after execution of the program M satisfy condition β. As a
consequence, MR (v ′ ) = v ′ and R, v ′ |= δ and finally R, v ′ |= (β =⇒ δ) for all
valuations v ′ , i.e. R |= (β =⇒ δ).
(1) for all initial data in A which satisfy the condition α, the results of M satisfy the condition β (i.e. α is a precondition of the formula β with respect to the program M),
(2) for any formula δ, if δ is a precondition of the formula β with respect to the program M, then, in the structure A, δ implies α.
3.3.5 Invariants
In many tasks we are interested in what goes on inside the computation
process. We are interested more in what properties are unchanged during
computation than how the initial valuation differs from the result. In this
class of problems we find, for example, simulation programs and operating
systems.
The above naive example shows that the criterion of equivalence of pro-
grams is too strong. This suggests that we must restrict our consideration
to some important variables, from the point of view of the task to be solved.
This leads us to the following example of a criterion which goes in this di-
rection:
Lemma 3.4.1 The halting property is not expressible in the first-order lan-
guage.
Proof. Assume the contrary, that for any program M there exists a
formula αM such that for every data structure
(3.1) αM ≡ program M halts on all input data
Consider the program M of the form
begin x := 0; while ¬y = x do x := succ(x) od end
and the class of structures Nat similar to the standard model of the natural numbers N = ⟨N, 0, +1; =⟩. We define Nat to be the class of models of the following set of classical first-order formulas:
(∀x) succ(x) = succ(x)
(∀x) ¬(succ(x) = 0)
(∀x)(∀y) (succ(x) = succ(y) =⇒ x = y).
Remark 3.4.2 If A ∈ N at and the program M halts for all initial data in
the structure A, then A is isomorphic to N.
Example 3.4.5 Let us determine the value of the following algorithmic formula:
(q := γ)α ≡ (if γ then q := true else q := false fi)α
in an arbitrarily chosen data structure A at an arbitrary valuation v.
A, v |= (q := γ)α iff A, (q := γ)_A(v) |= α iff
A, v′ |= α, where v′(q) = γ_A(v), v′(z) = v(z) off q, iff
[A, v′ |= α and v′(q) = true, v′(z) = v(z) off q and A, v |= γ, or
A, v′ |= α and v′(q) = false, v′(z) = v(z) off q and A, v |= ¬γ] iff
[A, v′ |= α and v′ = (q := true)_A(v) and A, v |= γ, or
A, v′ |= α and v′ = (q := false)_A(v) and A, v |= ¬γ] iff
3.4. ALGORITHMIC LANGUAGE 57
Lemma For any data structure A for the language L(Π), any programs K, M, any open formula γ and any formula α,
(3.4) A |= ((if γ then K else M fi) α ≡ ((γ ∧ Kα) ∨ (¬γ ∧ M α))).
Proof. Let A be an arbitrary but fixed data structure for L(Π), and let v be an arbitrary valuation in A such that
A, v |= if γ then K else M fi α.
By the definition of the semantics of formulas there exists a finite successful
computation of the program if γ then K else M fi, whose result v ′ satisfies
α. By lemma 3.2.1, we have
v′ = (if γ then K else M fi)_A(v) iff
A, v |= γ and v′ = K_A(v), or A, v |= ¬γ and v′ = M_A(v).
Putting together the above facts we have
A, v |= ((γ ∧ Kα) ∨ (¬γ ∧ M α)).
Since all steps of our reasoning are equivalent statements,
A, v |= (if γ then K else M fi) α ≡ ((γ ∧ Kα) ∨ (¬γ ∧ M α)).
We have thus proved that the formula (3.4) is satisfied for any valuation in
the structure A; it is therefore valid in A.
Example 3.4.10 Let S be the data structure of stacks. Consider the formula
of the form
M (empty(x) ∧ empty(y) ∧ bool),
where bool is a propositional variable, and M is the following program :
begin
bool:= true;
while (¬empty(x) ∧ ¬empty(y) ∧ bool)
do
bool:= bool ∧ top(x)=top(y);
x:=pop(x);
y:=pop(y)
od
end
For any valuation v in the structure S, we have
S, v |= M (empty(x) ∧ empty(y) ∧ bool)
if and only if the stack x has the same contents as the stack y.
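Simulated with Python lists as stacks (top = last element; function name ours), the value of the formula agrees with equality of the two stacks:

```python
def stacks_equal(x, y):
    """Simulate Example 3.4.10 and the formula M(empty(x) & empty(y) & bool)."""
    x, y = list(x), list(y)          # work on copies of the two stacks
    ok = True                        # the propositional variable bool
    while x and y and ok:
        ok = ok and x[-1] == y[-1]   # bool := bool and top(x) = top(y)
        x.pop()                      # x := pop(x)
        y.pop()                      # y := pop(y)
    return (not x) and (not y) and ok
```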
Let K denote the program occurring in the above example between do and od, and let γ denote the formula (¬empty(x) ∧ ¬empty(y) ∧ bool). Then
S |= ⋃ if γ then K fi ¬γ
for every valuation v in S. In fact, if v is an arbitrary valuation and i is
equal to the minimum of lengths of the stacks v(x) and v(y), then after the
i-th deletion of elements, one of the stacks will be empty. Hence
S, v |= (if γ then K fi)^i ¬γ.
If we know that the value of the variable x at the valuation v is a stack with n elements, then
S, v |= (⋂ if γ then K fi true ≡ (if γ then K fi)^n ¬γ).
Algorithmic formulas of the form M α are natural and useful tools which
allow us to formulate properties of algorithms. We continue the discussion
of this problem in the next section. In some applications it is convenient to use (cf. Chapter 5) a version of the algorithmic language which allows us to use not only algorithmic formulas but also algorithmic terms.
Definition 3.4.11 The set of algorithmic terms is the smallest set which
contains all individual variables and such that
(1) If τ1, ..., τn are algorithmic terms of type t1, ..., tn and f is an n-argument functor of type t1 × ... × tn −→ t, then the expression f(τ1, ..., τn) is an algorithmic term of type t,
(2) If τ is an algorithmic term of type t and M is an arbitrary program, then
the expression M τ is an algorithmic term of type t.
Remark 3.4.14 For arbitrary A and v, if the values of all the terms which
occur in all the expressions below are defined, then
A, v |= M f (τ1 , ..., τn ) = f ((M τ1 ), ..., (M τn ))
A, v |= M ′ (M τ ) = (begin M ′ ; M end)τ.
Lemma 3.4.15 For any algorithmic term τ , there exists a program M and
a variable x such that for every A and v
(1) v ∈ Dom(MA ) iff v ∈ Dom(τA ) and
(2) if (M x)A (v), τA (v) are defined then (M x)A (v) = τA (v).
3.5. EXPRESSIVENESS OF THE AL LANGUAGE 61
Lemma 3.5.1 A |= M true iff all computations of M in the structure A are finite and successful.
Proof. If the result of a computation of M is defined, then it obviously satisfies the formula true. Conversely, if for some valuation v, A, v |= M true, then, according to the assumed meaning of algorithmic formulas, there exists a successful, finite computation of the program M.
♢
Lemma For any data structure A and any valuation v in A, A, v |= loop(M) iff the program M has an infinite computation at the initial valuation v.
Proof.
The proof proceeds by induction with respect to the structure of the
program M . If program M is an assignment instruction, then M has no
infinite computation no matter what the data structure and valuation are.
Assume that the theorem is valid for the program K and that M is of the form while γ do K od. If A, v |= loop(M), then by the definition of loop,
A, v |= γ ∧ ⋃ if γ then K fi (γ ∧ loop(K)) or A, v |= ¬Kγ.
In the first case A, v |= γ and there exists a natural number n such that
A, v |= (if γ then K fi)^n (γ ∧ loop(K)).
Lemma 3.5.5 For any valuation v in A, A, v|=f ail(M ) iff M has an un-
successful computation at the initial valuation v in A.
♢
Lemma The total correctness of a program M with respect to formulas α and β is expressible by the algorithmic formula (α =⇒ M β).
Proof.
Suppose that A, v |= (α =⇒ M β) for some valuation v. If A, v |= α, then by this assumption A, v |= M β. According to the semantics of formulas, there exists a finite computation of M at the initial valuation v and the result v′ = M_A(v) of this computation satisfies β, i.e. A, v′ |= β. Hence the postcondition is valid.
Conversely, the validity of the precondition α guarantees the existence of a finite successful computation (cf. def. 3.3.2) and guarantees that the result satisfies β. Thus, if for some valuation v, A, v |= α, then A, v |= M β. Therefore
A, v |= (α =⇒ M β). ♢
The partial correctness property of a program M with respect to formulas
α and β is expressible by the algorithmic formula ((α∧M true) =⇒ M β).
Validity of this formula in the data structure A means that whenever the
initial data satisfies precondition α and the program M has a finite successful
computation at v in A, then the result of the program M satisfies formula β.
Lemma The formula M α is the weakest precondition of the program M with respect to the formula α.
Proof.
Let A, v |=(M α) for some valuation v in A. According to the semantics
of algorithmic formulas, the result v ′ of program M is defined and A, v ′ |= α.
Thus M α is a precondition of M with respect to the formula α.
Consider any formula δ such that the property A, v |= δ implies the existence of a valuation v′ satisfying v′ = M_A(v) and A, v′ |= α. Hence A, v |= δ implies A, v |= M α, and therefore, for any valuation v, A, v |= (δ =⇒ M α).
This proves that (M α) is the weakest precondition of M with respect to the formula α.
♢
In what follows we use the following denotations and definitions.
An element of a vector x will be denoted by x, the i-th element is denoted
as usual by xi . Two vectors of variables x, y will be called corresponding iff
their lengths are equal, say equal to n, and, for i = 1, ..., n, the type of xi
is the same as the type of yi . If x, y are corresponding vectors and x ∈ x,
then the corresponding element of the vector y is denoted by y. Let x be the
sequence of all variables that occur in the formula M α and let y be a sequence
of different variables corresponding to x such that x ∩ y = ∅. Let α(x/y)
denote the formula and M (x/y) the program, obtained, respectively from a
formula α and a program M , by simultaneous replacement of all occurrences
of variables from the sequence x by the corresponding variables from the
sequence y. By (∃y) we denote the sequence of quantifiers (∃y1 )...(∃yn ).
Lemma The formula β ≡ (∃y)(α(x/y) ∧ M(x/y)(x = y)) is the strongest postcondition of the program M with respect to the formula α.
Proof.
Consider a valuation v in A such that
(3.7) A, v |= (α ∧ M true).
There exists a valuation v ′ such that
v ′ = MA (v) and A, v|= α.
Denote by v′′ the valuation obtained from v′ by the following changes of the values of the variables in y:
v′′(y) = v(x) for y ∈ y,   v′′(x) = v′(x) for x ∈ x.
By our assumption, we have
A, v ′′ |= α(x/y) and A, M (x/y)A (v ′′ ) |= x = y.
Hence and from the definition of the existential quantifier,
A, v ′ |= (∃y)(α(x/y) ∧ M (x/y)(x = y)).
Since v ′ = MA (v), and using (3.7), we have finally A, v|=((α ∧ M true) =⇒ M β).
The above reasoning is valid for any valuation and we have therefore proved
that the program M is partially correct with respect to the formulas α and
β,
A |=((α ∧ M true) =⇒ M β).
Now, assume now that δ is any formula such that
A |=((α ∧ M true) =⇒ M δ).
We will prove that A |= (β =⇒ δ). Assume on the contrary, that for some
valuation v,
(3.8) A, v |= β and non A, v |= δ.
The formula β forces the existence of an initial valuation v ′ satisfying α
and for which the program M has a finite computation.
A, v|=β iff (∃v ′ )A, v ′ |= α and v = MA (v ′ ).
LEMMA 3.5.7
Proof.
It suffices to show, that, for any valuation v in the data structure A, the
following property holds:
(3.11) A, v|= ⊥α M iff there exists a state in the computation of M, from
the initial valuation v, which satisfies α.
Assume that the formula α is not satisfied by the valuation v, since oth-
erwise the lemma is trivial. The proof proceeds by structural induction.
If M is an assignment instruction of the form x := ω, then the valuation
[x := ω]A (v) is defined and satisfies the formula α. Obviously the theorem
holds in this case.
Assume property (3.11) for all programs which are simpler than M . If
M is of the form if γ then M1 else M2 fi then the computation of M is
Lemma 3.5.13 Programs K and M are equivalent with respect to the set of variables z in the structure A, z ⊆ V(K) ∪ V(M) (cf. Definition 3.3.8), iff
A |= (y := x) K(M(x/y) ∧ ⋀_{x∈z∩Vi} (x = y)) ∧ ⋀_{q∈z∩V0} (Kq ≡ M q) ∧ (Ktrue ≡ M true).
□
Algorithmic formulas allow us to express some properties of the data structures in which the computations are performed. For example, we can mention two properties: “to be a natural number“ and “to be a finite stack“. The formula
formula
(x := 0)(while ¬x = y do x := x + 1 od true)
is satisfied in the data structure of real numbers by a valuation v if and only
if v(y) is a natural number.
The formula
while ¬empty(x) do x := pop(x) od true
is satisfied in the structure of stacks by a valuation v if and only if v(x) is a
finite stack. We shall discuss other properties of data structures in Chapter
5.
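The first of these formulas can be illustrated by simulation (a sketch, with names of our choosing; since halting can only be observed, not decided, by running the program, the simulation gives up after a bound of our choosing):

```python
def is_natural_by_halting(y, bound=10**6):
    """Run  x := 0; while not x = y do x := x + 1 od  for at most `bound` steps."""
    x, steps = 0, 0
    while x != y:
        x = x + 1
        steps += 1
        if steps > bound:
            return False           # no halt observed within the bound
    return True                    # the program halted: y is a natural number
```

The loop halts on y exactly when y is a natural number; for any other real value of y the counter x passes it by (or never approaches it) and the loop runs forever, which the bound cuts off.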
Chapter 4
Algorithmic Logic
This chapter presents the formal system of algorithmic logic. The logic was
designed to allow formal proofs of semantic properties of programs and data
structures. Henceforth, this system will be denoted by AL(Π); in this way we indicate that the system is relative to the class of deterministic iterative programs. In later chapters, we discuss algorithmic logics of other classes of programs.
4.1 Axiomatization
The goal of axiomatization is achieved when a set of formulas, called axioms,
and a set of inference rules is given, such that all formulas valid in the as-
sumed semantics, and only these formulas, are provable from the axioms. For
this reason, the axioms cannot be an arbitrary set of formulas. They must
themselves be valid formulas. Moreover, the inference rules must preserve
the validity of formulas, i.e. they must lead to valid conclusions if the given
premises are valid. The process of deduction of a formula from a given set
of axioms with the help of given inference rules is called a formal proof.
Studying the validity of a formula using semantic methods only is, in general, a complicated and tedious activity. The axiomatic method of inferring formulas from a set of admitted premises (i.e. axioms) sometimes reduces and simplifies this process.
Definition 4.1.1
70 CHAPTER 4. ALGORITHMIC LOGIC
Axioms
Inference rules IR:

       α, (α =⇒ β)
R1:  ───────────────
            β

         (α =⇒ β)
R2:  ───────────────
       (Kα =⇒ Kβ)

       {s (if γ then K fi)^i (¬γ ∧ α) =⇒ β}_{i∈N}
R3:  ──────────────────────────────────────────────
          (s (while γ do K od) α =⇒ β)

       {(K^i α =⇒ β)}_{i∈N}
R4:  ────────────────────────
        ((⋃ K α) =⇒ β)

       {(α =⇒ K^i β)}_{i∈N}
R5:  ────────────────────────
        (α =⇒ (⋂ K β))

       (α(x) =⇒ β)
R6:  ──────────────────────
      ((∃x)α(x) =⇒ β)

       (β =⇒ α(x))
R7:  ──────────────────────
      (β =⇒ (∀x)α(x))
form the n-th level of the tree. Two nodes c and c′ are connected by an edge
if and only if c is a node on level n, of the form (i1, ..., in), and c′ is a node
on level n + 1 and c′ = (i1, ..., in, j) for certain i1, ..., in, j ∈ N . The node c′
is called the successor, or the son, of the node c. The nodes which have no
son are called leaves of the tree D. The node which is not a son of any node
is called the root of D. The tree D, labeled by formulas in accordance with
the above definition, is called the proof tree of the formula associated with
the root of D.
Example 4.1.2
Fig.4.1
Example 4.1.3
((α ∨ γ) =⇒ (β ∨ γ))
Fig.4.2
If the rules which we apply in a proof have finitely many premises, then the tree is finite: it contains only finitely many nodes. Hence in classical logic every proof is finite. As an immediate corollary we obtain the following property: if a formula α has a proof in classical logic from a set Z of formulas, then it has a proof from a certain finite subset Z0 ⊂ Z. The set Z0 contains precisely those formulas of Z which appear in a certain formal proof of α. Thus every theorem of classical logic is a consequence of a finite set of premises.
Unfortunately, in algorithmic logic the situation is much more compli-
cated. A formal proof is not necessarily finite. If a rule with infinitely many
premises is used at least once then the width of the tree is infinite. It may
also happen that all branches of a tree are finite but that no upper bound
on their lengths exists: this means that the tree has infinitely many levels.
Example 4.1.4
A formal proof of the formula α from the set Z is given in Figure 4.3. Note that all branches of the proof are finite but, for every natural number n, there exists in the tree a branch of length n.
Fig.4.3
To check whether a given finite tree labeled by formulas is a proof is a rather simple task. Both the set of axioms and the set of inference rules contain only finitely many schemes. Definition 4.1.3 leads us to a straightforward algorithm to answer this question in the case of a finite tree. To construct a proof of a certain formula α is, however, a completely different problem. In general, we cannot guess whether the formula has a proof or not; if it has one, there is no general way to guess its form. Obviously, one formula can have many proofs. We cannot say a priori which rules will be used or which assumptions are necessary. The process of building a proof is a creative process requiring a certain ingenuity.
The system presented in this section, a so-called Hilbert-like system, is not appropriate for the goal of automation. There exist other formal systems of algorithmic logic which permit mechanized proofs, to a certain extent. One such system is a Gentzen-like system, which will be presented later in this book.
definition 4.1.4
(2) Z ⊆ C(Z),
(3) C(C(Z)) = C(Z).
(4) For every formula α, Z ⊢ α iff α ∈ C(Z).
Definition 4.1.10 Let Z be a certain set of formulas. If, from the fact that for every i ∈ I, αi ∈ Z, it follows that the formula β ∈ C(Z), then the scheme
       {αi}_{i∈I}
     ──────────────
           β
is called a secondary inference rule. An example is the rule
        (α =⇒ β)
     ──────────────────────────
      ((α ∨ δ) =⇒ (β ∨ δ))
(cf. Example 4.1.3).
Below we present other examples of secondary inference rules. These rules
are helpful for they permit the simplification of the proofs.
Example 4.1.11 For every i ∈ N , for every program M , for any open for-
mula γ and any formula α, the following formula is a theorem of algorithmic
logic AL(Π).
Proof. The proof proceeds by induction with respect to the number i. For i = 0 we have
By axiom Ax20,
⊢ (if γ then M fi)^{k+1} (¬γ ∧ α) =⇒ ((γ ∧ M (if γ then M fi)^k (¬γ ∧ α)) ∨ (¬γ ∧ (if γ then M fi)^k (¬γ ∧ α)))
and
⊢ ((¬γ ∧ (if γ then M fi)^k β) =⇒ (¬γ ∧ α)).
Hence, by axioms Ax1, Ax5−Ax9 (cf. Example 4.1.3),
By the rule of mathematical induction we have proved that the formula (4.1) has a proof for any natural number i.
Example 4.1.12 For every set Z of formulas, the set C(Z) is closed with respect to the following secondary inference rule, called the invariant rule:
          (α =⇒ M α)
     ──────────────────────────────────────────────────
      ((α ∧ while γ do M od true) =⇒ while γ do M od α)
where α is an arbitrary formula, M any program and γ any open formula (i.e. a formula without quantifiers or programs).
Proof. We will prove that if the premise of the rule has a proof in AL, then the conclusion also has a proof. The rule is a helpful tool in proving properties of iterative programs. Suppose that
(4.2) Z ⊢ (α ⇒ M α),
i.e. that the formula (α ⇒ M α) has a formal proof from the set Z in algorithmic logic. Denote by IF the program if γ then M fi and by WH the program while γ do M od. It is fairly obvious, by axioms Ax2, Ax21 and by modus ponens, that
Putting together the last formula, axiom Ax16 , the inductive hypothesis and
formula 4.1 proved in the previous example, we obtain
and hence formula 4.3 has been proved for every natural number i. Applying
axiom Ax8 we can transform formula 4.3 to the form required in ω-rule R3 .
The rule leads to the conclusion
Proof. We will give a proof that, for every set Z of formulas, for all formulas α, β and every program K, if the premise has a proof from Z, then the conclusion of the rule has a proof from Z.
Z ⊢ (α ⇒ β) {assumption}
Z ⊢ (¬β ⇒ ¬α) {propositional calculus}
Z ⊢ (¬β ⇒ while α do K od true) {Ax2 and Ax21 }
Assume (inductive hypothesis) that, for some i ∈ N,
Z ⊢ ((if β then K fi)^i true ⇒ while α do K od true).
Let IF =df if β then K fi and WH =df while α do K od.
⊢ (if β then K fi)^{i+1} true ⇒ (¬β ∧ IF^i true ∨ β ∧ K(IF^i true))
⊢ (¬β ∧ IF^i true ∨ β ∧ K(IF^i true)) ⇒ (¬β ∨ β ∧ K(IF^i true))
{The following line comes from the inductive hypothesis, by R2 and the secondary rule of ...}
Z ⊢ (¬β ∨ β ∧ K(IF^i true)) ⇒ (¬α ∨ β ∧ K(while α do K od true))
But
⊢ ¬α ∨ β ∧ K(WH true) ≡ ¬α ∨ ¬α ∧ β ∧ K(WH true) ∨ α ∧ β ∧ K(WH true),
and hence
Z ⊢ (¬β ∨ β ∧ K(IF^i true)) ⇒ (¬α ∨ α ∧ K(while α do K od true))
Z ⊢ (¬α ∨ α ∧ K(while α do K od true)) ⇒ while α do K od true
From this sequence of statements, by axiom Ax1 and by modus ponens, we obtain
Z ⊢ ((if β then K fi)^{i+1} true ⇒ while α do K od true).
By the principle of induction, we can assert that, for every natural number i,
Z ⊢ ((if β then K fi)^i true ⇒ while α do K od true).
We can now apply the rule R3 and conclude the proof:
Z ⊢ (while β do K od true ⇒ while α do K od true).
A, v |= s (while γ do M od α) and non A, v |= β.
A, s_A(v) |= (if γ then M fi)^i (¬γ ∧ α) and A, v |= s (if γ then M fi)^i (¬γ ∧ α).
non A |= (β =⇒ K^n α)
see that A, v_x^a |= β. From this it follows that non A, v_x^a |= (β =⇒ α(x)), which contradicts our assumption that the premise of the rule is valid at every valuation.
The proofs in the remaining cases are analogous to the cases already consid-
ered.
The theorem which we will now prove is called the adequacy of axioma-
tization theorem and summarizes the above discussion.
There is no formula α such that both α and its negation ¬α have formal proofs
in the system. If such a formula existed, then, by the above theorem, both
α and ¬α would be valid in any data structure. This is clearly impossible.
Theorem 4.2.3 can be expressed in the following way: if a formula α is
a syntactical consequence of a set Z of formulas, then it is also a semantic
consequence of the set Z.
The system AL(Π) does not admit a proof of a falsifiable (i.e. not valid)
formula. This raises another question: does the system have the property
that every valid formula has a proof? The following theorem, called com-
pleteness theorem, answers this question.
Theorem 4.2.5 For every set Z of formulas and for every formula α
α ∈ C(Z) iff α ∈ Cn(Z)
The detailed and difficult proof of this theorem is given elsewhere. Those
readers interested in the method for proving the completeness theorem are
advised to consult [40].
The following example shows how to apply the completeness theorem in
order to show the existence of a proof. We could produce a formal proof, but
the reasoning presented below is much shorter and easier to understand.
Example 4.2.6 Suppose that Vout(K) ∩ V(α) = ∅ for a formula α and a program K. Then the formula
Kα ⇔ (α ∧ Ktrue)
is a theorem of algorithmic logic.
Since, by assumption, the variables whose values can be changed as a result of performing the program K do not occur in the formula α, we have
α_A(K_A(v)) = α_A(v).
As a consequence, for any data structure A we have A |= Kα ⇔ (α∧Ktrue).
By the completeness theorem we obtain ⊢ Kα ⇔ (α ∧ Ktrue). This result
can also be presented as an inference rule
      Ktrue, α
     ──────────     where Vout(K) ∩ V(α) = ∅,
         Kα
which is a convenient tool in proofs of properties of programs.
□
Example 4.2.7 We shall prove that, for arbitrary programs K, M and for every formula α such that V(M α) ∩ V(K) = ∅,
⊢ M(Kα) ⇔ (M α ∧ Ktrue)
and ⊢ K(M α) ⇔ M(Kα).
Lemma 4.2.8 The ω-rules cannot be avoided and replaced by any (finite or infinite) set of finitary rules.
Proof. Let X be such a system. The following postulates are valid (by
assumption):
Proof. Let Z be any set of formulas. If Z ⊢ K2 true, then by the rule (4.4)
we have
Z ⊢ (γ ⇐⇒ K2 γ).
Applying rule R2 we have
Z ⊢ K1 (M γ) ⇐⇒ K1 (M (K2 γ)).
Similarly, if Z ⊢ K1 true, then by rule (4.4)
Z ⊢ M γ ⇐⇒ K1 (M γ).
The above two formulas imply by axiom Ax1 that
Z ⊢ M γ ⇔ begin K1; M; K2 end γ.
4.2. ON THE COMPLETENESS OF ALGORITHMIC LOGIC 85
Example 4.2.11 The following schemes are simple secondary rules whose proofs do not require ω-rules:
            (α =⇒ ¬γ)
     ─────────────────────────────────────
      ((α ∧ if γ then M fi β) ⇔ (α ∧ β))
            (α =⇒ ¬γ)
     ─────────────────────────────────────
      ((α ∧ while γ do M od β) ⇔ (α ∧ β))
where γ is any open formula, α, β are arbitrary formulas and M is a program.
Proof. If formula (α ⇒ ¬γ) has a proof, then by axioms Ax2 and Ax7 , there
are proofs of the following formulas
((α ∧ γ) ⇔ (¬γ ∧ γ)),
((α ∧ β) ⇔ (¬γ ∧ α ∧ β)) .
Thus
⊢ (α ∧ (γ ∧ M δ ∨ ¬γ ∧ β)) ⇔ (α ∧ β)
for any formula δ.
Now we shall apply this result twice. Setting first δ = β, we obtain
⊢ (α ∧ if γ then M fi β) ⇔ (α ∧ β).
Setting, the second time, δ = (while γ do M od β), we have, by axioms Ax20 and Ax21,
⊢ (α ∧ while γ do M od β) ⇔ (α ∧ β).
Theorem 4.2.12 For all formulas α, β and every set Z of formulas, if β ∈ C(Z ∪ {α}), then Z ⊢ ((∀x)α(x) ⇒ β).
Proof.
Let A be a model for the set Z, and suppose that for some valuation v0
A, v0 |= (∀x)α(x).
Since the value of this formula does not depend on the values of the variables x, we have, for any valuation v,
A, v |= (∀x)α(x).
This implies that A is a model for the set Z ∪ {α}.
Assume now that β ∈ C(Z ∪ {α}); then, by the above reasoning and by the completeness theorem 4.2.4, we obtain
A |= β .
In particular, A, v0 |= β .
Hence, we have proved that, for an arbitrary valuation v, A, v |= ((∀x)α(x) ⇒
β).
Consequently, every model for the set Z is also a model for the formula
((∀x)α(x) ⇒ β), i.e. Z |= ((∀x)α(x) ⇒ β).
By the completeness theorem, we have Z ⊢ (∀x)α(x) ⇒ β .
As an immediate corollary of the above, we obtain the following theorem on
deduction.
Theorem 4.2.13 For any formula β , any closed formula α and any set Z
of formulas,
Z ⊢ (α ⇒ β) iff Z ∪ {α} ⊢ β.
□
Chapter 5

Other Logics of Programs
[Figure: flowchart of the division program below, with formulas associated to its edges: x ≥ 0 ∧ y > 0 on the input edge; the invariant x = q*y + r ∧ r ≥ 0 on the edge entering the loop test; x = q*y + r ∧ r ≥ y on the edge entering the loop body; and the invariant x = q*y + r ∧ r ≥ 0 again after q := q + 1.]
q := 0;
r := x;
while r ≥ y
do
r := r − y;
q := q + 1
done
One can remark that during every execution of the program the following property holds: whenever the computation passes along an edge, the actual state of the memory satisfies the formula associated with that edge. In particular, when the program ends its computation, the formula associated with the outgoing edge is satisfied by the results of the program. The exit formula of our example expresses the following property: the integer q is the quotient of the integers x and y, and the number r is the remainder.
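Floyd's observation can be mimicked at run time by turning each edge formula into an assertion. The following Python sketch does this for the division program above (a run-time illustration only; the method itself works with proofs, not with executions):

```python
# The annotated division program: each edge formula of the flowchart
# becomes a runtime assertion that must hold whenever that edge is passed.

def divide(x, y):
    assert x >= 0 and y > 0              # input edge: x >= 0, y > 0
    q = 0
    r = x
    assert x == q * y + r and r >= 0     # invariant on entering the loop test
    while r >= y:
        assert x == q * y + r and r >= y     # edge entering the loop body
        r = r - y
        q = q + 1
        assert x == q * y + r and r >= 0     # invariant re-established
    assert x == q * y + r and 0 <= r < y     # exit edge: quotient & remainder
    return q, r

assert divide(17, 5) == (3, 2)
assert divide(4, 5) == (0, 4)
```

Every run that completes without an `AssertionError` is a witness that, along its particular path, each edge formula was satisfied, exactly the property described in the text.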
b) the expression {α} if γ then M1′ else M2′ fi {β} is the annotated version of the program if γ then M1 else M2 fi,

d) the expression {α} begin M1′; M2′ end {β} is the annotated version of the program begin M1; M2 end.
The annotated programs of more complicated structure are built as shown in Fig. 5.3.

[Fig. 5.3: flowcharts of annotated programs: an assignment x := …, a branching if γ then M1′ else M2′ fi, a composition of M1′ and M2′, and an iteration while γ do M1′ done.]
Let M1′ and M2′ be annotated versions of the programs M1 and M2, let αi be the pre-condition and βi the post-condition of the annotated program Mi′ (i = 1, 2).
Example 5.1.7 The verification condition for the above given annotated program is the following formula:

VC(M) = (α1 ⇒ α2) ∧ (α7 ⇒ α8) ∧ (α13 ⇒ α14) ∧ (α5 ⇒ [y := y + 2]α6) ∧ (α3 ⇒ [i := i + 1]α4) ∧ (α4 ⇒ [z := z − y]α5) ∧ (((α2 ∨ α6) ∧ z − y > 0) ⇒ α3) ∧ (((α2 ∨ α6) ∧ ¬(z − y > 0)) ⇒ α7) ∧ (α9 ⇒ [y := 0]α10) ∧ (α11 ⇒ [y := z]α12) ∧ ((α8 ∧ z = y) ⇒ α9) ∧ ((α8 ∧ z ≠ y) ⇒ α11) ∧ ((α10 ∨ α12) ⇒ α13).
The above lemma is, in fact, a rather simple corollary of the completeness theorem (cf. 4.x.y). The lemma has, however, deeper consequences, for it enables verification of partial correctness of a program through a proof of a number of implications, each of which is easy to check.
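To see how mechanical such checks can be, here is a brute-force check, in Python, of the two implications behind the invariant of the division program of the previous section; the finite grid of states is an assumption of this sketch, not of the method, which calls for proofs.

```python
# Each conjunct of a verification condition is an implication between
# simple formulas.  Below we check, over a finite grid of states, the two
# implications that make x = q*y + r (with r >= 0) a loop invariant of the
# division program:  (inv ∧ r >= y) ⇒ inv after the body, and
# (inv ∧ r < y) ⇒ the exit assertion.

from itertools import product

def implies(p, q):
    return (not p) or q

ok = True
for x, y, q, r in product(range(12), range(1, 6), range(12), range(12)):
    inv = (x == q * y + r and r >= 0)
    # loop test true: the body r := r - y; q := q + 1 re-establishes inv
    ok &= implies(inv and r >= y,
                  x == (q + 1) * y + (r - y) and r - y >= 0)
    # loop test false: the exit formula follows
    ok &= implies(inv and r < y, x == q * y + r and 0 <= r < y)

assert ok
```

Of course, a finite check is only a disproving device (a single failing state is a counter-example); establishing the implications for all states is exactly what the proofs of the implications of VC(M) provide.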
A verification condition is a conjunction of a certain number of implications. Every implication has a simple structure; hence most of them are easily provable or disprovable (by means of a counter-example). This feature earned Floyd's method a certain popularity. Two years later, C.A.R. Hoare [27] presented an axiomatic system for proving partial correctness of programs.
We are now going to show that the axioms and rules H1–H7 can be deduced with the help of algorithmic logic. Hence, every reasoning which makes use of Hoare's rules is also a correct reasoning in algorithmic logic.
Warning
Following M. O'Donnell, we would like to call the reader's attention to the fact that it is rather easy to turn this system into an inconsistent one.
Pr4 ) M (α ∨ β) ⇔ (M α ∨ M β)
Pr5) In the book [18] this property is formulated as follows: for any program M and for every infinite sequence of formulas α0, α1, α2, . . ., such that for all states and for every number r ≥ 0 the implications

    (αr ⇒ αr+1)

hold, then for every state the following equivalence holds:

    M(⋁r≥0 αr) ⇔ (⋁s≥0 M αs).

We can remark, moreover, that the index r need not occur in any of the formulas of the sequence; it is just the number of a formula. In the language of first-order logic a correct formula of similar form is (∃r≥0 α(r)), but it has a completely different meaning and no connection with the sequence of formulas we were talking about in the premise of the property Pr5.
This leads to the conclusion that the property Pr5 is a rule of inference with infinitely many premises. It should read as follows:

    (αr ⇒ αr+1), r ∈ N
    ─────────────────────────────
    M(⋁r≥0 αr) ⇔ (⋁s≥0 M αs)
Remark 5.3.1 The subformula H0(α) in the above definition 5.2 is redundant and may be omitted:

    Hk+1(α) ⇔df (if γ then M fi) Hk(α).
Let us remark, first of all, that each of these formulas has a different structure and that the formulas Hk do not contain the variable k at all. The quantifier notation (∃k≥0 Hk(α)) should be treated as an informal denotation of the infinite disjunction of the formulas Hk, each of which, as noted, has a different structure. Let us recall that in the first-order predicate calculus an expression of the form (∃x)α(x) is a formula of the language whenever α(x) is a formula. Can we apply this simple grammatical rule to the expression (∃k≥0 Hk(α))? No, it is not a formula: here formulas and their denotations are mixed, for the expression Hk belongs to the metalanguage. Putting our remark in other words: the formula (∃x)α(x) has a value equal to the least upper bound of the values of the formulas α(x/τ), where τ is any term (or, if you prefer, an arithmetic expression). Each of these formulas has the same logical structure, with the same number of logical connectives and quantifiers; they can differ in length, however, since the terms τ can be of different structure. In the case of the formulas Hk, their logical structure changes with the growth of k. Therefore the quantifier notation is not applicable in this case. And how could it be, when the variable k does not occur in the formulas Hk at all?
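The role of the formulas Hk as approximations of the iteration can nevertheless be illustrated on a finite example (the state space, γ, M and α below are illustrative assumptions of this sketch): the states satisfying while γ do M done α are exactly those satisfying some Hk(α).

```python
# Finite illustration of the approximations H_k:
#   H_0(alpha) = (not gamma) and alpha
#   H_{k+1}(alpha) = (if gamma then M fi) H_k(alpha)
# States are the integers 0..7, gamma: x > 0, M: x := x - 1, alpha: x == 0.

STATES = range(8)

def gamma(x):
    return x > 0

def M(x):
    return x - 1

def alpha(x):
    return x == 0

def if_then(x):
    # the program "if gamma then M fi" as a state transformer
    return M(x) if gamma(x) else x

def H(k, x):
    if k == 0:
        return (not gamma(x)) and alpha(x)
    return H(k - 1, if_then(x))

def while_sem(x, fuel=100):
    # direct semantics of "while gamma do M done alpha" (bounded fuel)
    while gamma(x) and fuel > 0:
        x, fuel = M(x), fuel - 1
    return alpha(x)

lhs = {x for x in STATES if while_sem(x)}
rhs = {x for x in STATES if any(H(k, x) for k in range(10))}
assert lhs == rhs
```

In this finite example the infinite disjunction of the Hk collapses to a finite one; in general no fixed finite bound on k suffices, which is precisely why an ω-rule, not quantifier notation, is needed.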
We would like to call the reader's attention to the fact that the semantic meaning of Dijkstra's axiom A6 is faithfully expressed by the following formula:

    while γ do K done α ⇔ ⋃ if γ then K fi (α ∧ ¬γ).    (5.3)
Validity of this formula has been shown in lemma 3.4.2. By the completeness
theorem we know that the formula has a proof from the axioms of algorith-
mic logic.
Proof. In the proof of (5.3) we use the axiom Ax21 and the rules R3 and R4 of algorithmic logic. It follows from the axiom Ax21 that, for every natural number i,

    (if γ then K fi)^i (¬γ ∧ α) ⇒ while γ do K done α.

A proof of this fact was given in example 4.1.5. Applying the rule R4 we derive the implication

    ⋃ if γ then K fi (α ∧ ¬γ) ⇒ while γ do K done α.
The reverse implication is also provable. For every natural number i the
Our next remarks concern the property Pr1, called by Dijkstra the law of excluded miracle. Since the implication (false ⇒ M false) is a tautology, what remains to be proved is (M false ⇒ false). Let us recall that (α ∧ ¬α) ⇔ false. Note that, for every program M and every formula α, the following holds:

    M(α ∧ ¬α) ⇔ (Mα ∧ M¬α)    (axiom Ax14).

By another axiom, M¬α ⇒ ¬Mα. Hence

    M(α ∧ ¬α) ⇒ (Mα ∧ ¬Mα).

Applying the axiom ((α ∧ ¬α) ⇒ β) we derive the implication (M false ⇒ false), which finishes the formal proof of the law of excluded miracle.
Let us return to the properties Pr2 and Pr5; they are inference rules. The property Pr5, of continuity, has infinitely many premises. We have shown that the premises are not necessary for the distributivity of a program over an infinite disjunction. However, the infinite disjunction itself must be characterised in some way, and it is not difficult to guess that for this one needs a rule with infinitely many premises. In an extended algorithmic logic one can derive the
equivalence A6 from axiom Ax21 and rule R3. We do not think, however, that such an extension is needed; it seems redundant. In the light of our previous considerations, we see that the semantic properties of programs can be expressed and studied in a language of finite expressions.
Another question arises: is Dijkstra's claim, that properties Pr1–Pr5 and axioms A1–A6 define the semantics, well argued? D. Harel [24] has studied the question and came to the conclusion that, among the many possible strategies of visiting trees of non-deterministic computations, only one method of visiting satisfies the axioms of Dijkstra; therefore, concludes Harel, the axioms of Dijkstra define the semantics of non-deterministic computations. For us the problem was not definitively solved, for where do the trees of computations come from? This was a motivation for the research whose results are presented in section 4.4. The results of 4.4 present the stronger consequences of admitting algorithmic axioms: the meaning of the programming connectives (composition, branching, iteration) and also of the atomic instructions (assignments) is uniquely determined by the requirement that a realisation of the language satisfies the axioms of AL. At present we do not know whether a similar result can be proved for non-deterministic computations.