(James H. Andrews) Logic Programming Operational
JAMES H. ANDREWS
Simon Fraser University
CAMBRIDGE
UNIVERSITY PRESS
PUBLISHED BY THE PRESS SYNDICATE OF THE UNIVERSITY OF CAMBRIDGE
The Pitt Building, Trumpington Street, Cambridge, United Kingdom
http://www.cambridge.org
A catalogue record for this book is available from the British Library
Abstract ix
Acknowledgements xi
1 Introduction 1
1. Programming Languages and Semantics 1
2. Logic Programming 2
2.1. Declarative Semantics 3
2.2. Operational Semantics 3
2.3. Declarative Semantics Revisited 4
3. The Approach and Scope of This Thesis 4
4. Definitional Preliminaries 6
2 Operational Semantics 9
1. Control Disciplines and their Uses 10
2. Stack-of-Stacks Operational Semantics: SOS 11
2.1. Definitions and Rules 11
2.2. Properties of SOS 13
2.3. Discussion 16
3. Sequential Variants of SOS 17
3.1. Definitions and Rules 17
3.2. Completeness Properties of Sequential Computations 18
4. Compositional Operational Semantics 21
4.1. One-Stack Operational Semantics OS: Parallel Or 21
4.2. One-Stack Operational Semantics OSso: Sequential Or 23
4.3. One-Formula Operational Semantics Csa 24
5. Some Properties of Existential Quantification 26
5.1. Failure 26
5.2. Success 27
5.3. Solution Completeness 28
6. Summary and Classification of Queries 29
5 Approaches to Incompleteness 71
1. Incompleteness 71
1.1. The Halting Problem and Divergence 72
1.2. Gödel Incompleteness 74
2. Infinitary Methods 75
2.1. An Infinitary Rule for Free Variables 76
2.2. Model Checking for Divergence 77
3. Induction 77
3.1. Subterm Induction 78
3.2. Disadvantages of Subterm Induction 79
3.3. Well-Founded Induction 80
1.6. Constraints 87
1.7. Multiple Control Disciplines 88
2. Practical Program Proving 88
Bibliography 89
A Examples 95
1. Conventions 95
2. List Membership Examples 96
2.1. Computations 96
2.2. Derivations 96
3. Infinite Loop 96
3.1. Derivations 99
4. Subterm Induction 99
5. Well-Founded Induction 99
Logic programming systems which use parallel strategies for computing "and" and "or"
are theoretically elegant, but systems which use sequential strategies are far more widely
used and do not fit well into the traditional theory of logic programming. This thesis
presents operational and proof-theoretic characterisations for systems having each of the
possible combinations of parallel or sequential "and" and parallel or sequential "or".
The operational semantics are in the form of an abstract machine. The four control
strategies emerge as simple variants of this machine with varying degrees of determinism;
some of these variants have equivalent, compositional operational semantics, which are
given.
The proof-theoretic characterisations consist of a single central sequent calculus, LKE
(similar to Gentzen's sequent calculus for classical first order logic), and sets of axioms
which capture the success or failure of queries in the four control strategies in a highly
compositional, logical way. These proof-theoretic characterisations can be seen as logical
semantics of the logic programming languages.
The proof systems can also be used in practice to prove more general properties of logic
programs, although it is shown that they are unavoidably incomplete for this purpose.
One aspect of this incompleteness is that it is not possible to derive all valid sequents
having free variables; however, induction rules are given which can help to prove many
useful sequents of this kind.
Acknowledgements
Thank you:
To the British Science and Engineering Research Council, Edinburgh University, the
Edward Boyle Memorial Trust, and Bell-Northern Research Inc., for their generous fi-
nancial assistance;
To my advisors, Don Sannella and Stuart Anderson, for many fruitful meetings;
To Inge-Maria Bethke, Ruth Davis, Paul Gilmore, Lars Hallnas, James Harland, Bob
Harper, Gordon Plotkin, David Pym, Peter Schroeder-Heister, and David Walker, for
helpful comments and suggestions;
To Paul Voda, for his vision and for setting me on the course of research that led to
this thesis;
To my examiners, Alan Smaill and Dov Gabbay, for very helpful corrections and sug-
gestions;
And to Julian Bradfield, Leslie Henderson, Jane Hillston, Craig McDevitt, James Mc-
Kinna, and the Salisbury Centre Men's Group, for much-appreciated friendship and
support.
This thesis is dedicated to the memory of my father, Stewart James Andrews.
Chapter 1
Introduction
The quest for more readable and expressive programming languages has led to
many developments, one of which is the logic programming
paradigm. In theory, logic programming languages are more readable and expressive
because they borrow some of the expressive power of the language of mathematical logic
- a language which was developed specifically in order to model some of the deductive
processes of the human mind.
This theoretical goal has been achieved to only a limited extent in practice, because the
implementations of logic programming languages differ from the ideal theoretical model
in many ways. One of the most basic and profound of the differences is that the theory
concerns languages which can be implemented completely only by parallel (breadth-first)
interpreters, while most practical implementations use incomplete, sequential (depth-
first) strategies.
This incompleteness in itself would not necessarily be a problem; but unfortunately,
the exact set of terminating sequential logic programs is hard to characterise in a logical
way. Sequentiality also affects reasoning about programs, disrupting the hope that the
identification of program with logical formula would make this straightforward. These
problems tend to weaken claims that practical and truly logical programming is possible.
This thesis is intended as a step towards mending this rift between theory and prac-
tice, between parallel and sequential systems. In the thesis, I present a homogeneous
operational characterisation of the parallel and sequential versions of a basic logic pro-
gramming language; I then use proof systems to characterise, in a logical manner, the
sets of queries which terminate in the various parallel, sequential, and mixed control
disciplines. I also show how these proof systems can be used to prove more general
properties of logic programs.
By way of introduction, I will present some discussion about the general principles and
historical development of programming languages and semantics. I then will focus on
logic programming, addressing in particular the various approaches to its declarative and
operational semantics, and the associated problems. Finally, I will delineate the approach
and scope of this thesis, and end this introduction with some definitional preliminaries.
they expressed directly what the machine was to do. To write a program, programmers
had to find out how to express the problem to be solved as a sequence of instructions.
Programmers soon came to realise that certain constructs corresponding to higher-level
concepts were being used again and again. Compilers and interpreters were introduced
in order to allow programmers to express these higher-level concepts more directly, with
the compiler or interpreter handling the automatic translation into the standard con-
structs: languages thus became more "human-oriented". For instance, in FORTRAN,
programmers were for the first time able to write arithmetic expressions directly, and
expect the compiler to generate the appropriate sequence of loads, stores and arithmetic
operations.
The concepts of procedures and functions, structured programming, functional, logic,
and object-oriented programming all arose out of similar desires to make high-level con-
cepts clearer. Languages can now be grouped into various so-called "paradigms", accord-
ing to how a program is viewed in the language. In imperative languages, a program is
a sequence of instructions. In functional languages, it is a set of formal declarations of
functions. In logic programming languages, it is a set of logical expressions acting as a
"knowledge base". And in object-oriented languages, it is a description of communica-
tions between agents.
To some extent, semantics of programming languages reflect the desire to view pro-
grams at a high level. One important thing that a formal semantics can give us is an
abstract mathematical account of the exact sense in which a programming language en-
codes higher-level concepts. Thus, functional programming language semantics associate
function definitions with actual mathematical functions; logic programming language se-
mantics associate programs with models, which in turn are characterisations of "possible
states of knowledge". A good semantics describes rigorously how we expect programs to
behave at a high level, and does so in terms of the intended paradigm.
However, in addition to a high-level view, we still need descriptions of programming
languages which capture what is going on at the basic level within the machine. We
need such descriptions in order to tell how much time and space our programs are going
to take up and how we can improve our programs' efficiency, and in order to be able to
follow the execution of our programs for debugging and testing purposes.
These computational considerations are somewhat in conflict with the other purposes of
semantics. The way in which we usually resolve this conflict is to give one "operational"
semantics which meets our computational criteria, and one or more other semantics which
give a higher-level, "declarative" view of the meanings of programs. We then formally
prove the equivalence of the two semantic descriptions, to allow us to get the benefits of
both.
2. Logic Programming
In logic programming, in particular, the tradition since the earliest papers in seman-
tics [74] has been to give an operational (or "procedural") semantics (usually based on
SLD-resolution [45]), and one or more logical (or "declarative") semantics which give
intuitive descriptions of the meanings of programs. The operational semantics, how-
ever, is usually implemented only incompletely, so if we want to keep the correspondence
between operational and declarative semantics we must find new declarative semantics
which correspond to the incomplete implementations.
I concentrate on studying the behaviour of the logical connectives in various logic pro-
gramming systems, and in particular the behaviour of "and" and "or". I am specifically
not concerned with other issues of incompleteness or unsoundness in logic programming
implementations, such as the occurs check. I deal with a very simple language equivalent
to Horn clauses - one which has no negation or cut, for instance - although I do suggest
ways in which the language might be extended.
In the first technical chapter, Chapter 2, I give a definition of a very general operational
semantics for this logic programming language. The operational semantics, SOS, is in
the form of a formal tree-rewriting system or formal abstract machine. The basic opera-
tional semantics is one for a parallel-and, parallel-or system; a simple restriction on the
application of the transition rules leads to a sequential-and system, and another, similar
restriction leads to a sequential-or system. (Both restrictions together give the com-
mon "sequential Prolog" system.) I use SOS rather than the traditional SLD-resolution
because it allows us to describe these various control strategies, including the failure-
and-backtrack behaviour of sequential Prolog, within the formal system.
In Chapter 2, I also give several more compositional variants of SOS, which allow us to
see more clearly some of the higher-level properties of computation in SOS. The chapter
ends with a classification of the queries which succeed and fail with the four major control
strategies (sequential or parallel "and", sequential or parallel "or").
In Chapters 3 and 4, I present the declarative semantics, which follow the proof-
theoretic tradition mentioned above. The semantics take the form of sequent calculi.
The elements of sequents in the calculus are assertions, which are expressions built up
from signed formulae by the connectives of classical logic. Signed formulae, in turn, are
essentially logic programming goal formulae enclosed by the sign S (for success) or F
(for failure). The sequent calculi therefore retain a strong logical flavour.
The calculi in the two chapters share a set of rules called LKE. LKE is basically
a Gentzen-style sequent calculus for classical first order logic with equality as syntactic
identity. In Chapter 3, I concentrate on the problem of characterising queries which fail in
parallel-and systems, and those which succeed in parallel-or systems; LKE is augmented
by a set of simple logical axioms which describe the success and failure behaviour of
queries under these assumptions. I prove the soundness of all the rules, and various
completeness results about useful classes of sequents. One such completeness result is
that if a query A succeeds (resp. fails), the sequent [→ S(∃[A])] (resp. [→ F(∃[A])])
is derivable, where ∃[A] is the existential closure of A. This amounts to a precise and
logical characterisation of the sets of successful and failing queries.
In Chapter 4, on the other hand, I concentrate on characterising queries which succeed
or fail in sequential-and, sequential-or systems. There is a similar set of axioms which
correspond to this assumption; these axioms are made simple and compositional by the
introduction of the notion of disjunctive unfolding of a formula. I prove similar soundness
and completeness results about this calculus.
In addition to being able to prove assertions about the success and failure of individual
queries, the sequent calculi are able to prove much more general properties of programs
- such as the disjunctions, negations, and universal generalisations of such assertions.
They therefore have important practical applications in software engineering: they can
act as a basis for practical systems for proving properties of logic programs, such as
proving that a program meets its specification.
There are limitations to how complete a finitary sequent calculus can be for this purpose.
4. Definitional Preliminaries
Definition 4.1 A first-order language L consists of a set X(L) of variable names, a finite
set F(L) of function symbols f_i, each with an associated arity n_i ≥ 0, and a set P(L) of
predicate names P_j, each with an associated arity m_j ≥ 0.
The terms of a language L are inductively defined as follows: every variable of L
is a term, and every expression f(t₁, …, tₙ), where f is a function symbol of arity n
and t₁, …, tₙ are terms, is a term; nothing else is a term. Nullary function symbol
applications f() are often written as simply f.
Following Miller and Nadathur [55], I will define the class of "goal formulae" as a
restricted class of first order formulae built up from predicate applications using only the
connectives "and", "or", and "there exists".
Definition 4.2 A goal formula in a first-order language L with a binary equality pred-
icate = is an expression A which meets the following BNF syntax:

A ::= (t = t) | P(t, …, t) | (A & A) | (A ∨ A) | (∃x A)

where t ranges over terms of L, x over variables of L, and P over predicate names of L.
The class of goal formulae of C is thus a subclass of the class of formulae of C. I will
treat the word "query" as a synonym for "goal formula"; but I will use the former when
we want to refer to a formula, possibly having free variables, for which we ask a logic
programming system to find a satisfying substitution.
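For concreteness, goal formulae can be given a small abstract syntax. The following Python sketch (the nested-tuple encoding and the is_goal helper are assumptions of this sketch, not notation from the thesis) mirrors the grammar: equations, predicate applications, conjunctions, disjunctions, and existential quantifications.

```python
def is_goal(g):
    """Check that g is a well-formed goal formula in the tuple encoding.

    Encoding (an assumption of this sketch):
      ('eq', s, t)        equation s = t
      ('pred', P, t, ...) predicate application P(t1, ..., tn)
      ('and', a, b)       conjunction
      ('or', a, b)        disjunction
      ('exists', x, a)    existential quantification
    """
    tag = g[0]
    if tag == 'eq':
        return len(g) == 3
    if tag == 'pred':
        return len(g) >= 2 and isinstance(g[1], str)
    if tag in ('and', 'or'):
        return len(g) == 3 and is_goal(g[1]) and is_goal(g[2])
    if tag == 'exists':
        return len(g) == 3 and isinstance(g[1], str) and is_goal(g[2])
    return False
```

Note that negation and cut are deliberately absent, matching the Horn-clause-equivalent language studied here.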
In the traditional approach to logic programming, used to describe Prolog and similar
languages, programs are defined as sets of Horn clauses. Because I wish to compare logic
programming systems directly with proof systems, I adopt a form of predicate definition
which looks more like the completion of a predicate [21].
Definition 4.4 A program in L is a finite set of predicate definitions in which all names
of predicates being defined are distinct.
This form neither gains nor loses power relative to the clausal form, but it makes connectives
explicit and allows us to examine their effect and processing directly. To get a program of
this form from a Horn clause program, we need only take the completion of the program
[21]. Example: in a language with a binary function symbol [_|_] of list formation, the
standard "member" predicate Mem might be defined as follows:
Definition 4.5 The existential closure of a formula A, in symbols ∃[A], is the formula
∃x₁ … ∃xₙ A, where x₁, …, xₙ are all the free variables of A.
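Definition 4.5 can be sketched computationally. In the following Python sketch (the tuple encoding of goals and terms is an assumption of the sketch, with variables written as uppercase-initial strings and compound terms as ('f', t1, ..., tn) tuples), the free variables are collected and then bound by 'exists' nodes:

```python
def term_vars(t, bound=frozenset()):
    # Variables are uppercase-initial strings; ('f', t1, ..., tn) is a compound term.
    if isinstance(t, str):
        return set() if (t in bound or not t[0].isupper()) else {t}
    vs = set()
    for a in t[1:]:
        vs |= term_vars(a, bound)
    return vs

def free_vars(g, bound=frozenset()):
    """Free variables of a goal formula in the tuple encoding."""
    tag = g[0]
    if tag == 'eq':
        return term_vars(g[1], bound) | term_vars(g[2], bound)
    if tag == 'pred':
        vs = set()
        for t in g[2:]:
            vs |= term_vars(t, bound)
        return vs
    if tag == 'exists':                    # the quantified variable is no longer free
        return free_vars(g[2], bound | {g[1]})
    return free_vars(g[1], bound) | free_vars(g[2], bound)   # 'and' / 'or'

def exists_closure(g):
    """The existential closure of g: bind every free variable with an 'exists'."""
    a = g
    for x in sorted(free_vars(g), reverse=True):
        a = ('exists', x, a)
    return a
```

As expected, the existential closure of any goal formula has no free variables at all.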
In the sequel we will assume the existence of some fixed first-order language L with
equality = as the language of all programs. We will further assume that L generates at
least two closed terms. (This is not really a restriction for most practical applications.)
We will write these closed terms as 0 and 1, and define the formula true as an abbreviation
for 0 = 0, and false as an abbreviation for 0 = 1.
In most of what follows, we will also assume the existence of at least one non-nullary
operation symbol; that is, we will assume that the language has an infinite number of
closed terms. This is a reasonable assumption for most logic programming applications,
but not so for the area of logic databases, where there may be only a finite number of
data elements in the domain of discourse. I will point out the use of the assumption of
an infinite domain whenever it is used, and discuss the implications of this.
Other notation is as follows. A, B, C, D are metavariables standing for goal formulae;
P, Q, and R stand for predicate names; r, s, and t stand for terms; and x, y, and z
stand for variables.
θ and ρ stand for substitutions. Aθ stands for the application of θ to A (where all
substitutions of terms for variables take place simultaneously, and may involve renaming
to avoid capture of free variables); similarly tθ stands for θ applied to the term t. [x := t]
is the substitution which maps only x to t.
I use the notation A(s) and then later A(t) to mean A[x := s] and then later A[x := t],
for some given formula A with some given variable x free. (A(x) should not be confused
with P(x), which is an application of predicate P to variable x.) Similarly I use r[s := t],
r(s), r(t).
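The simultaneity of substitution application can be illustrated with a short Python sketch (the representation is an assumption of the sketch: variables as uppercase-initial strings, compound terms as ('f', t1, ..., tn) tuples; renaming to avoid capture does not arise because only terms are handled):

```python
def apply_subst(t, theta):
    """Apply substitution theta to term t, replacing all variables simultaneously."""
    if isinstance(t, str):
        # Look up in the original theta, never in an intermediate result.
        return theta.get(t, t)
    return (t[0],) + tuple(apply_subst(a, theta) for a in t[1:])
```

Simultaneity matters: applying [X := Y, Y := X] to f(X, Y) gives f(Y, X), not the f(X, X) that a sequential, one-variable-at-a-time replacement would produce.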
Chapter 2
Operational Semantics
In this chapter, I first discuss the various control disciplines and their uses, and then give
the parallel-and, parallel-or operational semantics SOS ("stack of stacks"), which will act
as a reference point throughout the rest of the thesis. Sequential-and and sequential-or
systems are simple variants of SOS, and have certain completeness properties with respect
to SOS; these I describe next. I also give some more compositional variants of SOS, which
will be useful in later chapters for proving theorems. I give some somewhat technical
lemmas about the behaviour of existential quantifiers, which will be used frequently in
later chapters. The chapter ends with a summary of the classification of all queries
according to whether they succeed, fail, or diverge in the various control disciplines described
in this chapter.
Since parallel-or systems are complete for successful queries (as we shall see below), one use
of such systems might be as execution engines for executing Horn clause specifications,
because such specifications are intended to be very high-level and free from operational
considerations.
Parallel "and" systems are useful for problems which can benefit from the more tra-
ditional parallelisation of computation, done to make independent computations more
efficient. They are also useful for doing programming in which predicate calls joined by
a conjunction represent communicating agents, such as sequential processes, coroutines,
or objects [68, 67].
In some systems with parallel connectives [76], the sequential versions of these con-
nectives are also available (usually distinguished by a different syntax). Here, I will be
simplifying the situation and considering only systems in which there is one kind of each
connective; there is one conjunction, which is either always parallel or always sequential,
and similarly one disjunction. I do this mainly to make the operational semantics simpler
and thus more amenable to analysis.
1. &:
β₁; (θ : α₁, (B & C), α₂); β₂ ⇒ β₁; (θ : α₁, B, C, α₂); β₂
2. ∨:
β₁; (θ : α₁, (B ∨ C), α₂); β₂ ⇒ β₁; (θ : α₁, B, α₂); (θ : α₁, C, α₂); β₂
3. ∃:
β₁; (θ : α₁, (∃x B), α₂); β₂ ⇒ β₁; (θ : α₁, B[x := x′], α₂); β₂
where x′ is some variable not appearing free to the left of the arrow
4. Defined predicates:
β₁; (θ : α₁, P(t₁, …, tₙ), α₂); β₂ ⇒ β₁; (θ : α₁, A(t₁, …, tₙ), α₂); β₂
where Π contains the definition (P(x₁, …, xₙ) ↔ A(x₁, …, xₙ))
5. Unification, success:
β₁; (θ : α₁, (s = t), α₂); β₂ ⇒ β₁; (θθ′ : α₁, α₂); β₂
where θ′ is a most general unifier of sθ and tθ
for which we are trying to find a satisfying substitution. Thus the elements of a goal
stack represent subgoals, all of which are to be solved in the context of the substitution
in their closure; and the elements of a backtrack stack represent different alternatives,
any one of which may yield a solution.
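The machine just described can be sketched in a few lines of Python. The sketch below is a deliberately simplified assumption-laden illustration, not the thesis's SOS: it hard-codes the sequential-and, sequential-or discipline (always working on the leftmost subgoal of the first closure), omits existentials and defined predicates, and uses a unification routine without the occurs check. Goals are nested tuples ('eq', s, t), ('and', a, b), ('or', a, b); variables are uppercase-initial strings.

```python
def walk(t, s):
    # Chase variable bindings in substitution s.
    while isinstance(t, str) and t[0].isupper() and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Standard unification, no occurs check; returns an extended subst or None.
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a[0].isupper():
        return {**s, a: b}
    if isinstance(b, str) and b[0].isupper():
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def run(goal):
    # Backtrack stack of closures (substitution, goal stack); sequential variant.
    bstack = [({}, [goal])]
    while bstack:                          # empty backtrack stack = failure state
        subst, gstack = bstack.pop(0)
        if not gstack:
            return subst                   # closure with empty goal stack = success
        g, rest = gstack[0], gstack[1:]
        if g[0] == 'and':                  # & rule: both conjuncts on one goal stack
            bstack.insert(0, (subst, [g[1], g[2]] + rest))
        elif g[0] == 'or':                 # v rule: two closures on the backtrack stack
            bstack.insert(0, (subst, [g[2]] + rest))
            bstack.insert(0, (subst, [g[1]] + rest))
        else:                              # 'eq': unify, or drop the closure on failure
            s2 = unify(g[1], g[2], subst)
            if s2 is not None:
                bstack.insert(0, (s2, rest))
    return None
```

For example, on the query (X = 0 ∨ X = 1) & X = 1 the first disjunct is tried and fails, the machine falls back to the second closure left on the backtrack stack, and the computation succeeds with X bound to 1.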
There will be one operational semantics, called SOS_Π, corresponding to each program
Π. (Since the particular program being considered will usually not be important, we will
usually drop the subscript.) SOS is a rewriting system which consists of rules for a binary
relation ⇒ between backtrack stacks, and a definition of which backtrack stacks are
to be considered "success states" and "failure states".
The rewriting rules of SOS are in Figure 2.1. The success states of SOS are all backtrack
stacks of the form
β₁; (θ : ε); β₂
that is, all backtrack stacks containing a closure with an empty goal stack. The single
failure state of SOS is ε, the empty backtrack stack.
To execute a particular goal formula A in this interpreter, we form the backtrack stack
consisting of the single closure (() : A), where () is the empty substitution. We then
repeatedly apply appropriate transitions. If we reach a failure state, we conclude that
the query is unsatisfiable; if we reach a success state β₁; (θ : ε); β₂, we conclude that θ is
a satisfying substitution for A. Of course, for a given goal we might never reach a success
or a failure state, due to repeated diverging applications of the defined predicate rule.
Definition 2.2 The relation ⇒* is the reflexive and transitive closure of the ⇒
relation.
We say that a backtrack stack β succeeds if β ⇒* β′, where β′ is a success state.
We say that β fails if β ⇒* ε.
We say that β diverges if it neither succeeds nor fails.
We say that a query formula A succeeds (fails, diverges) if the backtrack stack (() : A)
succeeds (fails, diverges), where () is the empty substitution.
We say that SOS describes a "parallel and" discipline because each conjunction in a
backtrack stack is processed by splitting it into its subformulae, after which point the
computational rules can operate on either subformula. We say that it describes a "parallel
or" discipline for similar reasons.
Proof : (1.) By case analysis. If the two computations select the same subgoal in the
same closure, then β₁ and β₂ are already renaming-permutation-equivalent. Otherwise,
a different subgoal, possibly in different closures, has been selected for processing. The
cases are on the form of the subgoals selected.
If both subgoals are existential formulae, then the computations can converge in one
step. If both computations selected the same new variable, then the convergence is only
up to a renaming of the new variables.
Otherwise, if each subgoal is in a different closure, then the computations can converge
identically in one step, because nothing that goes on in one closure can affect others.
Otherwise, both subgoals are in the same closure, but are not both existentials. If
both are disjunctions, then the computations can converge in two steps, by each branch
performing a Disjunction step on the two copies of the other disjunction. The convergence
here is only up to a permutation of the closures: if the two disjunctions were A ∨ B and
C ∨ D, then β₁ will contain closures containing (in order) A … C, A … D, B … C, and
B … D, whereas β₂ will contain closures containing (in order) A … C, B … C, A … D,
and B … D.
Otherwise, if both subgoals are equalities, then we can consider without loss of gen-
erality the case where the first computation is (θ : s₁ = t₁, s₂ = t₂) ⇒ (θθ₁ : s₂ = t₂)
and the second computation is (θ : s₁ = t₁, s₂ = t₂) ⇒ (θθ₂ : s₁ = t₁). There are two
subcases.
• There is no θ₁′ such that s₂θθ₁θ₁′ = t₂θθ₁θ₁′. In this case the first computation
ends in failure. Now, if there were a θ₂′ such that s₁θθ₂θ₂′ = t₁θθ₂θ₂′, then (by the
properties of mgu on θ₁, the mgu of s₁θ and t₁θ) there would be a θ₁′ such that
θθ₂θ₂′ = θθ₁θ₁′. But since s₂θθ₂θ₂′ = t₂θθ₂θ₂′, we have that s₂θθ₁θ₁′ = t₂θθ₁θ₁′; so
θ₁′ would have the property that we started out by assuming that no substitution
could have. There is therefore no such θ₂′, and the second computation fails as well.
• There is a θ₁′ such that s₂θθ₁θ₁′ = t₂θθ₁θ₁′. In this case (by the properties of mgu
on θ₂), there is also a θ₂′ such that θθ₁θ₁′ = θθ₂θ₂′; therefore the second computation
performs a Unification step, resulting in an identical goal stack (up to renaming of
variables, since mgu's are unique up to renaming of variables).
Otherwise, both subgoals are in the same closure, but are not both existentials, dis-
junctions, or equalities. If one of the subgoals is a disjunction, then the computations
can converge in two steps on the disjunction branch (similar steps on the two copies of
the other subgoal), and one step on the other branch.
In all other cases (one or both subgoals are conjunctions, or one is an equality and the
other is an existential), the computation of each subgoal does not interfere with that of
the other, so the computations can converge in one step.
(2.) Let the ordinal measure m(C) of a closure C be j · 2^k, where j is the number of
connectives and equality formulae in C, and k is the number of disjunctions in C. Let the
measure m(β) of a backtrack stack β be the sum of the measures of its closures. Then
every transition in fSOS lowers the measure of the backtrack stack, since every transition
eliminates at least one connective or equality formula except the ∨ rule, which changes
the measure from j · 2^k to 2((j − 1) · 2^(k−1)) = (j − 1) · 2^k.
Thus, by induction, no infinite computation sequence is possible. The only backtrack
stacks in which no transitions are possible are ones containing only predicate calls (ones
with measure 0). We say that these backtrack stacks are in normal form. Since fSOS
computation steps can be performed on any backtrack stack not in normal form, every
computation can be extended to reach a backtrack stack in normal form.
(3.) (A variant of Newman's Lemma [11].) From (1) and (2). Call a backtrack stack
from which two distinct normal forms are computable ambiguous. We will prove by
reductio ad absurdum that there are no ambiguous backtrack stacks.
Assume that there is an ambiguous backtrack stack β, of measure m; in other words,
that β ⇒* β₁, β ⇒* β₂, and β₁ and β₂ are in normal form and not renaming-permuta-
tion equivalent. These computations must consist of at least one step each. If the first
step in the two computations is the same, then there is an ambiguous backtrack stack
with measure less than m (namely, the backtrack stack arrived at after this first step).
Otherwise, the two computations make a first step to β₁′ and β₂′, respectively; by the
Diamond property, there are computations leading from both β₁′ and β₂′ to some β′. Now
say that there is a computation leading from β′ to some normal form β″. Either β″ is
equivalent to β₁, in which case it is not equivalent to β₂, in which case β₂′ is ambiguous;
or β″ is equivalent to β₂, in which case it is not equivalent to β₁, in which case β₁′ is
ambiguous; or β″ is equivalent to neither, in which case both β₁′ and β₂′ are ambiguous. In
any case, there is an ambiguous backtrack stack of lower measure than β. By induction,
therefore, there is an ambiguous backtrack stack of measure 0; but this is impossible,
since backtrack stacks of measure 0 are in normal form.
Thus, the normal form of a backtrack stack β is unique; every computation proceeding
from β can be extended to this unique normal form; and thus if β ⇒* β₁ and β ⇒* β₂,
then there is a β′ (namely, the unique normal form) such that β₁ ⇒* β′ and β₂ ⇒* β′.
•
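The measure from part (2) can be checked mechanically on small goals. In this Python sketch (the tuple encoding of goals is an assumption of the sketch, and only equalities, conjunctions, and disjunctions are counted), m computes j · 2^k and a helper confirms that splitting a disjunction into its two closures strictly lowers the total:

```python
def counts(g):
    """Return (j, k): connectives-plus-equalities and disjunctions in goal g."""
    if g[0] == 'eq':
        return 1, 0
    ja, ka = counts(g[1])
    jb, kb = counts(g[2])
    return ja + jb + 1, ka + kb + (1 if g[0] == 'or' else 0)

def measure(g):
    j, k = counts(g)
    return j * 2 ** k                      # m = j * 2^k

def split_lowers(a, b):
    # The v rule replaces one closure holding (a v b) with two closures, a and b.
    return measure(('or', a, b)) > measure(a) + measure(b)
```

For A and B both single equalities, the disjunction A ∨ B has measure 3 · 2¹ = 6, while the two resulting closures total only 1 + 1 = 2, matching the strict decrease claimed in the proof.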
To obtain the analogous result about SOS, we need to separate the steps of an SOS
computation into the predicate "unfoldings" and the connective steps. The following
notion, which will also be necessary in later proof-theoretic characterisations, will help
to capture this.
Proof : First of all, note that there are predicate unfoldings β* of β, β₁* of β₁, and
β₂* of β₂, such that β* ⇒_fSOS* β₁* and β* ⇒_fSOS* β₂*: we can identify in β the predicate
application subformula which gets unfolded first in the computation of β₁ and perform
a predicate 1-unfolding on β for that, and so on for the other predicate steps in the
computations of β₁ and β₂. The fSOS computations will then be just the steps in the
SOS computations other than the predicate steps; the unfoldings will not interfere with
these steps. β₁* might not be identical to β₁ because it will have some predicates unfolded
to account for the steps toward β₂, or some unfolded subformulae duplicated due to the
∨ rule.
Now, from the Strong Normalisation and Church-Rosser properties of fSOS, we know
that there is a β′ in normal form such that β₁* ⇒* β′ and β₂* ⇒* β′. β′ is thus
accessible from β₁ and β₂ by a series of predicate unfoldings followed by a series of fSOS
steps. However, we can form an SOS-computation of β′ from β₁ by inserting Defined
Predicate steps corresponding to each predicate unfolding, at the points where each
instance of the predicate call in question becomes a top-level element of a backtrack
stack. We can do the same to derive an SOS-computation of β′ from β₂. •
This theorem has some interesting consequences. The most important of these are the
following.
Corollary 2.7 If any computation of β reaches the failure state, then no computation
of β can reach a success state.
Proof : By the theorem, if this were possible, then the two computations could be ex-
tended to meet (up to renaming-permutation equivalence). But there is no computation
proceeding from the failure state; and no computation proceeding from a success state
(which has one closure to which no rules apply) can reach a failure state (which has no
closures). •
This means that a query cannot both succeed and fail; the set of succeeding backtrack
stacks and the set of failing backtrack stacks are disjoint. (Below, we will study further
some of the structure of these two sets.)
Corollary 2.8 If a computation of β reaches a success state β′ whose successful closure
is (θ : ε), then every computation of β can reach such a success state (up to renaming-
permutation equivalence).
Proof : Every backtrack stack which is a descendant of β′ will contain (θ : ε) as a clo-
sure, because no rules will eliminate it. By the Church-Rosser property, any computation
can reach a descendant of β′. •
This means that if one solution is found, this does not preclude the finding of other
solutions. Thus we can characterise the behaviour of a goal stack by giving the set of
substitutions which can appear as solutions in successful computations (failing stacks
taking the empty set).
2.3. Discussion
The major difference between SLD-resolution and SOS is that disjunctive information
is made explicit. In SLD-resolution, each resolution step chooses one clause from the
program; however, information about which clauses have been tried and which have yet
to be tried is not explicit in the formal system. This means that information about how a
particular system tries clauses must be represented outside the formal system (typically
by a "search rule" in the search tree of candidate clauses [50]). In SOS, clause information
corresponds to the placement of the halves of a disjunction on the backtrack stack. As
we will see, this facilitates the definition of sequential computations as variants of the
single formal system.
The disadvantages of this "multi-step" style of operational semantics, in general, are
that it is not very compositional, and that (except for the sequential variant) it is non-
deterministic and therefore not a detailed description of an implementation.
I say that it is not very compositional because transitions depend not on the structure
of high-level elements of the formal system (backtrack stacks) but on the structure of
fairly low-level ones (individual formulae within goal stacks). Each transition involves
a change to what may be only a small part of the elements of the system. This makes
SOS a fairly clumsy system for proof purposes, since a lot of manipulation of structure is
required in proofs involving it. Later, we will see other operational semantics which are
more compositional; these will help to make clear some of the higher-level properties of
SOS computations, and we will sometimes use these systems in the later soundness and
completeness proofs.
The nondeterminism of SOS (which it shares with SLD-resolution) is a practical prob-
lem, because although the nondeterminism is used to model parallelism, it is not clear how
this parallelism is to be implemented. However, there are so many ways of implementing
parallelism (explicit breadth-first search or dovetailing on a rule-by-rule basis, process
creation on a timesharing machine, actual parallel computation by parallel processors,
etc.) that perhaps this is better left up to the implementor's choice.
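One of the options just mentioned, dovetailing on a rule-by-rule basis, can be sketched in a few lines. The sketch below is my own illustration, not from the thesis: each computation is modelled as a Python iterator that yields None while still running and finally yields its outcome, and the scheduler advances the live computations round-robin, one step each.

```python
def dovetail(computations):
    """Advance several step-by-step computations fairly, one step each per
    round; return the value of the first to succeed, or None if all fail."""
    live = list(computations)
    while live:
        for comp in list(live):
            outcome = next(comp, ('failure',))
            if outcome is None:
                continue                 # still running; visit the next one
            live.remove(comp)            # this computation has finished
            if outcome[0] == 'success':
                return outcome[1]
    return None

def diverge():
    """A computation that never finishes (e.g. an infinite loop)."""
    while True:
        yield None

def succeed_after(k, value):
    """A computation that runs for k steps and then succeeds."""
    for _ in range(k):
        yield None
    yield ('success', value)
```

Dovetailing a divergent disjunct with one that eventually succeeds still finds the solution, which is the behaviour the parallel-or discipline demands: dovetail([diverge(), succeed_after(3, 'yes')]) returns 'yes'.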
We will call SOS restricted to sequential-or computations SOS/so, and SOS restricted
to sequential-and computations SOS/sa. We will call SOS restricted to sequential-both
computations SOS/sao, or more simply SP (Sequential Prolog).
There is only one sequential-both computation sequence of a given backtrack stack; that
is, the operational semantics SP is monogenic. (We ignore renaming of variables here,
assuming that our system has a deterministic algorithm for selecting variable names in
the ∃ rule and for selecting an mgu in the unification algorithm.) In the case of sequential-
both computations, the stacks really do behave as stacks, with the top of the stack to
the left: nothing is relevant to the computation except the leftmost subgoal in the goal
stack of the leftmost closure. Because only one choice of subgoal is possible, SP does not
need to be implemented by more than one processor.
Sequential-both computations are the most practically important class of computations,
because many logic programming interpreters use only the sequential strategy. Because
we will want to study sequential-both computations in great detail later, it will be useful
to set down the rules of SP explicitly.
The rules of SP are as in Figure 2.2. The success states of SP are the backtrack stacks
of the form (θ : ε); β. The single failure state of SP is the empty backtrack stack, ε.

Figure 2.2. The rules of SP.
1. &:
   (θ : B&C, α); β ⟹ (θ : B, C, α); β
2. ∨:
   (θ : B ∨ C, α); β ⟹ (θ : B, α); (θ : C, α); β
3. ∃:
   (θ : ∃x B, α); β ⟹ (θ : B[x := x′], α); β
   where x′ is some variable not appearing to the left of the arrow
4. Defined predicates:
   (θ : P(t₁, …, tₙ), α); β ⟹ (θ : A(t₁, …, tₙ), α); β
   where Π contains the definition (P(x₁, …, xₙ) ↔ A(x₁, …, xₙ))
5. Unification, success:
   (θ : s = t, α); β ⟹ (θθ′ : α); β
   where θ′ is the mgu of sθ and tθ
6. Unification, failure:
   (θ : s = t, α); β ⟹ β
   where sθ and tθ do not unify
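A minimal interpreter makes the determinism of SP concrete. The following Python sketch is my own illustration, not from the thesis (the tuple encoding of formulae and the restriction to closed terms are assumptions for brevity, so rules 5 and 6 degenerate to an equality test and the ∃ rule is omitted); it applies the rules of Figure 2.2 to the leftmost subgoal of the leftmost closure until a success or failure state is reached, or a step budget runs out.

```python
# Formulae: ('=', s, t), ('&', B, C), ('or', B, C), ('call', P).
# A closure is a tuple of formulae (its goal stack, leftmost first);
# a backtrack stack is a list of closures, leftmost first.

def sp_step(stack, program):
    """Apply one SP rule to the leftmost subgoal of the leftmost closure."""
    first = stack[0]
    A, rest = first[0], first[1:]
    tag = A[0]
    if tag == '&':                       # rule 1: push both conjuncts
        return [(A[1], A[2]) + rest] + stack[1:]
    if tag == 'or':                      # rule 2: two closures, left branch first
        return [(A[1],) + rest, (A[2],) + rest] + stack[1:]
    if tag == 'call':                    # rule 4: unfold the defined predicate
        return [(program[A[1]],) + rest] + stack[1:]
    if tag == '=':                       # rules 5 and 6, for closed terms
        return [rest] + stack[1:] if A[1] == A[2] else stack[1:]
    raise ValueError(A)

def sp_run(goal, program, budget=1000):
    """Run SP; report 'success', 'failure', or 'timeout' (apparent divergence)."""
    stack = [(goal,)]
    for _ in range(budget):
        if not stack:
            return 'failure'             # the empty backtrack stack
        if not stack[0]:
            return 'success'             # leftmost closure has the empty goal stack
        stack = sp_step(stack, program)
    return 'timeout'
```

With false encoded as ('=', '0', '1') and Loop defined to call itself, sp_run(('or', ('=', '0', '1'), ('=', '0', '0')), {}) backtracks into the right disjunct and succeeds, while sp_run(('&', ('call', 'Loop'), ('=', '0', '1')), {'Loop': ('call', 'Loop')}) never reaches a verdict, matching the divergence of Loop()&false under the sequential disciplines.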
to the leftmost closure, because the leftmost must fail.) We can form a new computation
by taking β, performing the central step which replaces the leftmost closure C by a
stack β′; appending to that the initial segment, with C replaced by β′ everywhere; and
appending to that the final segment. The computation consisting of the altered initial
segment and the final segment has, by the induction hypothesis, a failing sequential-or
computation; so the original stack β has a failing sequential-or computation. •
Example. Consider the following computation:
(() : 0 = 1); (() : ∃x (x = 2 & x = 3)) ⟹ (() : ∃x (x = 2 & x = 3))   (Unif, fail)
⟹ (() : x′ = 2 & x′ = 3)   (∃)
The query Loop()&false fails in SOS but diverges for SOS/sa. (Recall that false is
an abbreviation for 0 = 1, where 0 and 1 are distinct closed terms.) However, there
is an analogous relationship between sequential-and and sequential-both computations.
This is not a straightforward consequence of the last theorem, because the set of failing
computations of SOS/sa is a subset of that of SOS, and that of SP is a subset of that of
SOS/so. However, the proof is very similar.
Proof : (1, →) By induction on the length of the computation. If β is a success state
itself, then the result holds trivially. Otherwise, let β ⟹ β′ be the first step. By the
induction hypothesis, one of the closures in β′ succeeds. If this closure appears in β, then
the result holds. Otherwise, it is the product of a computation step applied to a formula
in a particular closure of β; therefore, that closure in β also succeeds.
(1, ←) Say β = β₁; C; β₂, and C succeeds. Then we can get a successful computation
of β by taking the successful computation of C and appending β₁ and β₂ on either side
of each step.
(2) As in the proof for (1). •
We mention the number of steps in this lemma, and at various points from here on,
because of the requirements of later proofs by induction. They are not really essential
for a high-level understanding of the theorems.
Theorem 3.5 (SOS-Success Completeness of SOS/sa) If (θ : α) has a successful
SOS-computation, then it has a successful sequential-and computation of smaller or equal
length.
• The first step is on some disjunction B ∨ C; that is, the computation starts
One of these closures (by Lemma 3.4) succeeds after at most n − 1 steps. From this
point, we can proceed as in the proof of the last subcase; that is, we can take the
sequential-and version of the tail of this computation, interchange the first step of
it with the disjunction step, and take the sequential-and version of this new tail.
[Figure 2.3. The rules of the one-stack operational semantics OS: success rules (=,1; &; ∨,1; ∨,2; ∃; and defined predicates), each deriving a judgement of the form (θ : α) ⟹ ρ, and failure rules (=,2; &; ∨; and defined predicates), each deriving a judgement of the form (θ : α) ⟹ fail. In each rule the active formula may occur at any position (θ : α₁, B, α₂) in the goal stack. Side-condition (*c): the definition P(x₁, …, xₙ) ↔ A(x₁, …, xₙ) appears in the program Π.]
system. The rules for OS are in Figure 2.3. The rules define a relation ⟹ between a
closure and either a substitution ρ or the expression fail. The production relation thus
"jumps" from initial to final state in a single step; it is in the natural-deduction-style
rules for the parts of the formulae that the computation takes place.
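Because OS is compositional, it can be read directly as a recursive evaluator. The sketch below is my own illustration, for closed, predicate-free formulae only (so ∃ and defined predicates are omitted, every computation terminates, and the substitution component is trivial); it always selects the leftmost formula, which is one of the selections the rules permit, and returns True for a derivation of (θ : α) ⟹ ρ and False for a derivation of (θ : α) ⟹ fail.

```python
def os_eval(alpha):
    """Evaluate a goal stack (a tuple of closed formulae) in OS style."""
    if not alpha:
        return True                      # the empty goal stack succeeds
    A, rest = alpha[0], alpha[1:]
    tag = A[0]
    if tag == '=':
        # success rule =,1 versus failure rule =,2 (closed terms only)
        return A[1] == A[2] and os_eval(rest)
    if tag == '&':
        # the & rules flatten the conjunction into the goal stack
        return os_eval((A[1], A[2]) + rest)
    if tag == 'or':
        # parallel or: success needs one branch, failure needs both
        return os_eval((A[1],) + rest) or os_eval((A[2],) + rest)
    raise ValueError(A)
```

For this terminating fragment, True corresponds to a successful computation of the closure and False to a failing one, in the spirit of the relationships proved below between OS and SOS.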
Theorem 4.1 β has a successful computation in SOS iff some closure in it has a suc-
cessful computation in OS.
Proof: (→) By Lemma 3.4, some closure in β has a successful computation in SOS; we
can do induction on the length of this SOS-computation. Each case follows fairly directly
from the induction hypothesis. The only case that needs some care is the Unification,
Failure case; there, we note that the failing closure cannot contribute to the success of
β, so it must be one of the other closures which succeeds.
(←) We can build an SOS-computation from the OS-computation by reading out clo-
sures from bottom to top, sometimes adding the extraneous branches of disjunctions to
the closure. •
Theorem 4.2 β has a failing computation in SOS iff all of its closures have failing
computations in OS.
Proof : (→) By induction on the length of the SOS-computation. Each case follows
directly from the induction hypothesis.
(←) By induction on the total number of steps in all the OS-computations; the cases
are on the lowest step in the OS-computation of the first closure in β. Each case follows
directly from the induction hypothesis. •
We can form a sequential-and variant of OS, called OS/sa, by restricting α₁ in all rules
to be the empty goal stack. This variant has the same completeness properties with
respect to SOS/sa as OS has to SOS; we can see this by noting that the SOS computations
which the above theorems construct from OS/sa computations are sequential-and, and
similarly in the other direction.
The same kind of relationship exists between OSso and SOS/so as between OS and
SOS. The failure-completeness theorem is exactly the same as with OS and SOS:
Theorem 4.3 β has a failing computation in SOS/so iff all of its closures have failing
computations in OSso.
Proof : As above. •
The success-completeness theorem has a different form, but is proven in essentially the
same way. It depends on the failure-completeness theorem.
Theorem 4.4 The backtrack stack C₁; C₂; …; Cₙ has a successful computation in
SOS/so iff there is some i, 1 ≤ i ≤ n, such that Cⱼ has a failing computation in OSso for
all 1 ≤ j < i, and Cᵢ has a successful computation in OSso.
Proof : As above, except that in the case of a successful "or" computation we note that
the SOS/so computation can and must fail on the left-hand branch before succeeding on
the right-hand branch. •
There is a sequential-and variant of OSso as well, OSso/sa, in which α₁ must be empty
for all rules. OSso/sa is sound and complete with respect to SP. Again, we can prove this
by noting that the constructions that the above theorems make give SP computations
from OSso/sa computations, and vice versa.
Theorem 4.5 (θ : A₁, …, Aₙ) ⟹ ρ in OS/sa iff there are substitutions θ₁, …, θₙ₋₁ such that
(θ : A₁ ⟹ θ₁), (θ₁ : A₂ ⟹ θ₂), …, (θₙ₋₂ : Aₙ₋₁ ⟹ θₙ₋₁), (θₙ₋₁ : Aₙ ⟹ ρ) in Csa.
Proof : (→) By induction on the size of the OS/sa-computation. Cases are on the
bottommost rule application.
=, 1 and 2: The first step finds the most general unifier θ′. The result follows imme-
diately from the induction hypothesis.
&: Let A₁ = B&C. By the induction hypothesis, we have that θ : B ⟹ θ′ and
θ′ : C ⟹ θ₁; the result follows immediately.
∨ (1 and 2), ∃, P: the result follows directly from the induction hypothesis.
(←) By induction on the structure of the Csa-computations. Cases are on the form of
A₁, and are straightforward. •
The more significant corollary says that the set of successful queries is the same for the
two operational semantics.
[Figure: the rules of the one-formula operational semantics Csa: success rules (=: θ : s = t ⟹ θθ′ (*a); &; ∨,1 and ∨,2; ∃; and defined predicates), each deriving a judgement of the form θ : A ⟹ ρ, and failure rules, each deriving a judgement of the form θ : A ⟹ fail, together with their side-conditions; (*a) requires θ′ to be the mgu of sθ and tθ.]
Theorem 4.7 (θ₀ : A₁, A₂, …, Aₙ) ⟹ fail iff for some i, 1 ≤ i ≤ n, we have that
(θ₀ : A₁ ⟹ θ₁), (θ₁ : A₂ ⟹ θ₂), …, (θᵢ₋₂ : Aᵢ₋₁ ⟹ θᵢ₋₁), (θᵢ₋₁ : Aᵢ ⟹ fail).
5.1. Failure
Some properties of failure are shared by all variants of the operational semantics. One
such property is that if a computation fails, it also fails under any more specific substitution.
Lemma 5.1 If (θ : α) fails in SOS, then for all substitutions ρ, (θρ : α) fails in SOS with
a smaller or equal number of steps; and if the original computation was sequential-and
(and/or sequential-or), so is the new computation.
5.2. Success
A set of results complementary to those in the previous section deals
with success: if a query succeeds, then it succeeds with all more specific substitutions
that are still more general than the solution substitution.
Definition 5.3 We say that θ′ is more specific than θ, or that θ is more general than θ′,
if there is a ρ such that θ′ = θρ.
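The "more general" ordering is just composition of substitutions. As a small illustration (my own encoding, not from the thesis: substitutions are Python dicts over flat terms, where a term is either a variable or a constant), θρ applies ρ to the range of θ and then adds ρ's bindings for variables θ does not mention; θ is then more general than θρ by construction.

```python
def apply_sub(sub, term):
    """Apply a substitution to a flat term (a variable or a constant)."""
    return sub.get(term, term)

def compose(theta, rho):
    """The composition theta-then-rho: x(theta.rho) = (x theta) rho."""
    out = {x: apply_sub(rho, t) for x, t in theta.items()}
    for x, t in rho.items():
        out.setdefault(x, t)             # rho also binds variables theta misses
    return out
```

For instance, with θ = {'x': 'y'} and ρ = {'y': 'A'}, the composite θρ is {'x': 'A', 'y': 'A'}; by Definition 5.3, that composite is more specific than θ.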
Lemma 5.4 If (θ : α) succeeds in SOS with the solution θf, then, for any θ′ more
specific than θ but more general than θf, (θ′ : α) succeeds in SOS with the solution
θf with a smaller or equal number of steps. Moreover, if the original computation was
sequential-and (and/or sequential-or), so is the new computation.
Proof : Let the first step in the computation replace x by x′, and let t be any closed
instance of x′θf. By a simple induction we can prove that (θ : ∃x (B & x = t)) succeeds
with solution θf[x′ := t]. But then by the lemma, (θ[x′ := t] : ∃x (B & x = t)) succeeds,
after making two steps to (θ[x′ := t] : B[x := x′], x′ = t); this closure will behave
essentially identically to (θ : B[x := t]), which must therefore succeed. •
Thus, if the existential closure of a query succeeds, then there is some instance of it
which succeeds.
Lemma 5.6 If θ′ is more specific than θ, and (θ′ : α) ⟹ θf, then there is a θ′f such that
(θ : α) ⟹ θ′f. Moreover, if the original computation was sequential-and, then so is the
new computation.
[Figure 2.6 consists of two Venn diagrams, one classifying queries by failure and one by success; each diagram relates the failure (respectively success) sets of SOS (=OS), SOS/so (=OSso), SOS/sa (=OS/sa=Csa), and SP (=OSso/sa), with the example formulae A, B, and C placed in the regions they inhabit.]
Figure 2.6. Classification of queries as to their behaviour in the variants of SOS. A, B, and
C are example formulae within these classes; A = Loop()&false; B = (Loop()&false) ∨
true; C = Loop() ∨ true.
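The placement of A, B, and C in Figure 2.6 can be reproduced with a toy evaluator. The sketch below is my own coarse approximation, not from the thesis (tuple-encoded formulae; "parallel" explores both operands with a fuel budget, while the sequential strategy must finish the left operand first; 'D' conflates genuine divergence with budget exhaustion), yet it reproduces the classification of the three example queries.

```python
PROGRAM = {'Loop': ('call', 'Loop')}     # Loop() simply calls itself
FALSE = ('=', '0', '1')                  # 0 and 1 are distinct closed terms
TRUE = ('=', '0', '0')

def evaluate(A, parallel, fuel=50):
    """Return 'S' (succeeds), 'F' (fails), or 'D' (no verdict within fuel)."""
    tag = A[0]
    if tag == '=':
        return 'S' if A[1] == A[2] else 'F'
    if tag == 'call':
        return 'D' if fuel == 0 else evaluate(PROGRAM[A[1]], parallel, fuel - 1)
    b = evaluate(A[1], parallel, fuel)
    if parallel:                         # examine both operands regardless
        c = evaluate(A[2], parallel, fuel)
        if tag == '&':
            return 'F' if 'F' in (b, c) else ('S' if b == c == 'S' else 'D')
        return 'S' if 'S' in (b, c) else ('F' if b == c == 'F' else 'D')
    if b == 'D':                         # sequential: stuck behind the left operand
        return 'D'
    if tag == '&':
        return evaluate(A[2], parallel, fuel) if b == 'S' else 'F'
    return 'S' if b == 'S' else evaluate(A[2], parallel, fuel)

A_ = ('&', ('call', 'Loop'), FALSE)      # Loop()&false
B_ = ('or', A_, TRUE)                    # (Loop()&false) v true
C_ = ('or', ('call', 'Loop'), TRUE)      # Loop() v true
```

Under the fully parallel strategy (SOS-like) all three queries get a verdict: A_ fails while B_ and C_ succeed. Under the fully sequential strategy (SP-like) all three diverge, mirroring the outer and inner regions of the figure.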
Chapter 3. Characterising Parallel Systems
In this chapter, I present a sequent calculus which characterises the outer two circles of
the Venn diagram in Figure 2.6. In this calculus, we can state and prove assertions about
the success and failure of queries in parallel logic programming systems. Assertions can
be signed formulae, which are essentially expressions of the form S(A) ("A succeeds")
or F(A) ("A fails"); assertions can also be expressions built up from signed formulae
with the usual connectives of first order logic.
The sequent calculus is in two distinct parts. LKE is a very typical classical sequent
calculus with equality, and deals with connectives at the top level of assertions. PAR is a
set of simple axioms which describe how the goal-formula connectives distribute over the
signs S and F; for instance, S(B&C) ↔ S(B)&S(C). LKE will also be used in the next
chapter, where we shall deal with sequential systems; but there, PAR will be replaced by
a set of analogous axioms, SEQ.
Associated with assertions and sequents is a notion of validity. For reasons which will be
discussed in Chapter 5, we cannot have a finitary sequent calculus which is both sound (all
derivable sequents are valid) and complete (all valid sequents are derivable); however, the
calculus in this chapter is sound, and has various useful completeness properties weaker
than full completeness.
This sequent calculus can therefore be seen as a semantic characterisation of parallel
Prolog; or, conversely, as a logic with respect to which parallel Prolog enjoys soundness
and completeness properties. It can also be used to prove properties of programs. These
issues will be discussed in greater detail in the final section of this chapter.
to make more complex assertions about success or failure: "For all x there is a y such
that P(x,y) succeeds," or "For all /, if List(l) succeeds then 3n Length(l,n) succeeds,"
for instance. We can therefore define assertions as a class of pseudo-formulae built up
from signed formulae and equality formulae by the classical connectives. In assertions, as
we will see, the S and F signs act much like modalities; but we cannot define the syntax
of assertions as we would define formulae in modal logic, because only goal formulae can
appear within a sign.
Finally, we need to decide what style of proof system to use: natural deduction, sequent
calculus, tableaux, or some other style. I have chosen to use the sequent calculus style
because it seems to provide more uniformity and expressive power than the natural
deduction style, and because its organisation of rules is more natural for our purposes
than the tableau style.
We therefore have the following definitions.
Definition 1.1 A signed formula is an expression of one of the forms S(A), FY(A), or
FN(A), where A is a goal formula. The informal meaning of these expressions is intended
to be, respectively, "A succeeds", "A fails, possibly performing predicate calls", and "A
fails without performing predicate calls". We will sometimes use the notation F(A) to
mean either FY(A) or FN(A), when either it does not matter which we mean or it is
temporarily being used to mean consistently one thing. Similarly, we use the notation
σ(A) to mean either S(A), FY(A), or FN(A).
An assertion is an expression of the following BNF syntax:
A ::= A₁&A₂ | ¬A | ∃x A | s = t | σ(G)
where G is a goal formula. We will generally use A, B, C, D as metavariables ranging
over assertions as well as goal formulae; their use will be unambiguous.
We will define B ∨ C as ¬(¬B&¬C), B ⊃ C as ¬(B&¬C), and ∀x B as ¬(∃x ¬B).
Notions of free and bound variables for signed formulae and assertions will be a straight-
forward extension of those for formulae.
A sequent is an expression of the form
A₁, …, Aₙ → D₁, …, Dₘ
where n, m ≥ 0 and each of the Aᵢ's and Dⱼ's is an assertion. We will treat each side
of the sequent as a finite set of assertions. Γ and Δ will range over sequences/sets of
assertions: we will write Γ, Δ for the union of the two sets Γ and Δ; Γ, A for the union
of Γ with {A}; and so on.
We will generally refer to the sequence of formulae on the left-hand side of the arrow
as the antecedent, and that on the right-hand side as the consequent of the sequent.
We now need to have a notion of validity which expresses our intended interpretation
of the truth of assertions. In keeping with the approach of basing the logic on the
operational semantics, I will give an inductive definition of validity of a closed assertion,
at the base of which are simple notions of validity of closed equality formulae and closed
signed goal formulae.
In this chapter, we will be concerned with validity with respect to the parallel operational
semantics SOS. In the next chapter, the sequential operational semantics SP will
concern us, but the definition of validity will be essentially the same, so I will express
this notion in more general terms.
Definition 1.2 A closed assertion is valid with respect to a particular operational se-
mantics O (chosen from the variants of SOS) and program Π just in the following cases.
• S(A) is valid iff the backtrack stack (() : A) succeeds in the operational
semantics O.
• FY(A) is valid iff the backtrack stack (() : A) fails in the operational se-
mantics O.
• FN(A) is valid iff the backtrack stack (() : A) can fail in the operational
semantics O without performing any Defined Predicate steps.
Note that the negation sign has nothing to do with negation as failure in this context;
¬S(A) means that A does not succeed, which might mean either that it fails or that it
diverges.
We will also speak of the validity of a sequent, which is based on the validity of formulae
and the notion of substitution.
Definition 1.3 A sequent A₁, …, Aₙ → D₁, …, Dₘ is valid iff for every θ which makes
all the assertions in the sequent closed, either some Aᵢθ is not valid or else some Dⱼθ is
valid.
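For closed assertions these definitions are directly executable. The following sketch is my own encoding, not from the thesis (assertions as tuples, with success and failure supplied as oracle functions standing in for the chosen operational semantics; the quantifier over closing substitutions is omitted, so only ground sequents are handled):

```python
def assertion_valid(A, succeeds, fails):
    """Validity of a closed assertion (Definition 1.2), relative to
    oracles for success and failure of closed goal formulae."""
    tag = A[0]
    if tag == 'S':
        return succeeds(A[1])
    if tag == 'F':                       # stands for either FY or FN
        return fails(A[1])
    if tag == 'not':
        return not assertion_valid(A[1], succeeds, fails)
    if tag == '&':
        return (assertion_valid(A[1], succeeds, fails)
                and assertion_valid(A[2], succeeds, fails))
    if tag == '=':
        return A[1] == A[2]
    raise ValueError(A)

def sequent_valid(antecedent, consequent, succeeds, fails):
    """A ground sequent is valid iff some antecedent assertion is not
    valid or else some consequent assertion is valid (Definition 1.3)."""
    return (any(not assertion_valid(A, succeeds, fails) for A in antecedent)
            or any(assertion_valid(D, succeeds, fails) for D in consequent))
```

Note that the empty sequent comes out invalid, and that ('not', ('S', g)) captures ¬S(g), "g does not succeed", rather than failure.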
[Figure: the axioms and rules of LKE.]
1. Equality.
   Eq:   Γ → t = t, Δ
   Ineq: Γ, s₁ = t₁, …, sₙ = tₙ → Δ   (for equalities which cannot simultaneously hold)
   Comp, Sub: rules for decomposing equalities between compound terms and for
   substituting equals for equals
   Occ:  Γ, s = t → Δ   (*c)
2. Connectives.
   &,l: from Γ, B, C → Δ infer Γ, B&C → Δ
   &,r: from Γ → B, Δ and Γ → C, Δ infer Γ → B&C, Δ
   ∃,l: from Γ, B[x := y] → Δ infer Γ, ∃x B → Δ   (*b)
   ∃,r: from Γ → B[x := t], Δ infer Γ → ∃x B, Δ
   ¬,l: from Γ → B, Δ infer Γ, ¬B → Δ
   ¬,r: from Γ, B → Δ infer Γ → ¬B, Δ
   Ax:  Γ, A → A, Δ
   Cut: from Γ → A, Δ and Γ, A → Δ infer Γ → Δ
Side-conditions:
   (*b) y does not occur free in the lower sequent
   (*c) s is a proper subterm of t or vice versa
Derived rules for the defined connectives:
   ∨,l: from Γ, B → Δ and Γ, C → Δ infer Γ, B ∨ C → Δ
   ∨,r: from Γ → B, C, Δ infer Γ → B ∨ C, Δ
   ⊃,l: from Γ → B, Δ and Γ, C → Δ infer Γ, B ⊃ C → Δ
   ⊃,r: from Γ, B → C, Δ infer Γ → B ⊃ C, Δ
   ∀,l: from Γ, B[x := t] → Δ infer Γ, ∀x B → Δ
   ∀,r: from Γ → B[x := y], Δ infer Γ → ∀x B, Δ   (*b)
prove something somewhat stronger than that: that whenever we add new rules to the
system (as we will do twice later), the LKE rules remain sound, and that if we change
the operational semantics at the base of the definition of validity, the LKE rules remain
sound. The following theorem proves this.
Proof : We can analyse each case separately. I will give only representative cases here.
Γ, B&¬C → Δ
Other cases are similar. •
I will leave the detailed historical and philosophical discussion of these rules to the end
of the chapter, and go on with the technical material.
Success axioms:
   S(...)  left                          right
   =:      S(s = t) → s = t              s = t → S(s = t)
   &:      S(B&C) → S(B)&S(C)            S(B)&S(C) → S(B&C)
   ∨:      S(B ∨ C) → S(B) ∨ S(C)        S(B) ∨ S(C) → S(B ∨ C)
   ∃:      S(∃x B) → ∃x S(B)             ∃x S(B) → S(∃x B)
Failure axioms:
   F(...)  left                          right
   =:      F(s = t) → ¬s = t             ¬s = t → F(s = t)
   &:      F(B&C) → F(B) ∨ F(C)          F(B) ∨ F(C) → F(B&C)
   ∨:      F(B ∨ C) → F(B)&F(C)          F(B)&F(C) → F(B ∨ C)
   ∃:      F(∃x B) → ∀x F(B)   (*a)      ∀x FN(B) → F(∃x B)
Figure 3.4. PAR axioms characterising parallel connectives. F means either FY or FN,
its use being consistent throughout each axiom.
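For closed, predicate-free goal formulae, success and failure are complementary and computable, so the S and F axioms can be checked mechanically. The sketch below is my own illustration, not from the thesis (tuple-encoded formulae again); it verifies the & and ∨ axioms, including the De Morgan-style duality of the failure axioms, over a few small formulae.

```python
from itertools import product

def succeeds(A):
    """Operational success of a closed, predicate-free goal formula."""
    tag = A[0]
    if tag == '=':
        return A[1] == A[2]
    if tag == '&':
        return succeeds(A[1]) and succeeds(A[2])
    if tag == 'or':
        return succeeds(A[1]) or succeeds(A[2])
    raise ValueError(A)

def fails(A):
    """Every such computation terminates, so failure is non-success."""
    return not succeeds(A)

atoms = [('=', '0', '0'), ('=', '0', '1')]
for B, C in product(atoms, repeat=2):
    assert succeeds(('&', B, C)) == (succeeds(B) and succeeds(C))  # S(&) axioms
    assert succeeds(('or', B, C)) == (succeeds(B) or succeeds(C))  # S(v) axioms
    assert fails(('&', B, C)) == (fails(B) or fails(C))            # F(&) axioms
    assert fails(('or', B, C)) == (fails(B) and fails(C))          # F(v) axioms
```

The FN/FY distinction is invisible here precisely because the fragment is predicate-free: all failures are flat.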
In the rules which follow, we will allow the replacement of a formula or assertion by its
predicate unfolding and vice versa. The following theorem will help to justify this.
Theorem 3.2 (Operational Equivalence of Predicate Unfoldings) If A′ is a pred-
icate unfolding of A, then any backtrack stack β containing A succeeds (fails) in any of
the SOS variants iff β with A replaced by A′ succeeds (fails).
As with the LKE rules, we must prove these rules sound; that is, that sequents
derivable from valid sequents are also valid. (For brevity, "validity" in this chapter will
consistently mean "validity with respect to SOS".)
Theorem 3.3 (Validity of PAR axioms) Each instance of the axiom schemata in
PAR is a valid sequent with respect to the operational semantics SOS.
no predicate calls), so it must fail. Now, the new true (i.e., 0 = 0) subformulae in B′
cannot have any effect on the failure of this backtrack stack, since they have no effect
on the substitution in any closure; therefore there is a failing computation which does
not perform Unification steps on the true subformulae; therefore if these subformulae are
replaced by anything, we still get a failing backtrack stack. In particular, the backtrack
stack (() : (∃x B)θ) must fail flatly.
FN/FY: trivial.
Unf: By Theorem 3.2.
FN(P): The formula P(t₁, …, tₙ) clearly cannot fail without performing Defined Pred-
icate steps, so the sequent as a whole is valid. •
Proof : By induction on the structure of the derivation. Cases are from the Validity
theorems for LKE and PAR. •
Proof : By induction on the measure j·ω + k, where j is the number of free variables in
the sequent and k is the number of occurrences of function symbols. When the measure
is zero, the sequent is empty and thus invalid. When it is non-zero, we have cases on the
form of S.
If there is a formula of the form f(…) = g(…) in S, then let S′ be S without this
formula. By the induction hypothesis, S′ is invalid; that is, there is a substitution θ
under which all the equalities in S′ are invalid. But since f(…) = g(…) is invalid under
any substitution, all the equalities in S are invalid under θ as well.
Otherwise, if there is a formula of the form f(s₁, …, sₙ) = f(t₁, …, tₙ), then since the
sequent is invalid, the two terms in the equality must not be identical; so there must be
an i such that sᵢ ≠ tᵢ. Let S′ be the sequent with this formula replaced by sᵢ = tᵢ. By
the induction hypothesis, there is a θ under which all the formulae in S′ are invalid; but
Proof : If we take A(x) to be x = s, we can use the Sub,r rule of LKE to derive the
sequent s = t → t = s: the rule passes from s = t → A(s), here the Eq axiom
s = t → s = s, to s = t → A(t), here s = t → t = s.
•
Theorem 4.3 (Completeness for Equality) All valid sequents S containing only
equality formulae are derivable in LKE.
Lemma 4.5 If a backtrack stack β fails, then some predicate unfolding of β can fail
without performing any Defined Predicate steps.
Proof : By induction on the number of Defined Predicate steps in the failing computa-
tion of β. If this is zero, we already have the result. If it is more than zero, then consider
the first Defined Predicate step. The indicated predicate call must be an occurrence of
a predicate call which appears in β; let β′ be β with that occurrence unfolded. β′ has
a computation which is identical to that of β, except for one subformula being different
and at least one Defined Predicate step being missing (the first one, and possibly later
ones resulting from copies of the predicate call being generated by the ∨ rule). The result
follows from the induction hypothesis. •
Theorem 4.6 (Closed Completeness, stage 2a) All valid sequents S of the form
[→ FY(A)], where A is a closed formula, are derivable in LKE+PAR.
Proof: A is a closed formula with a failing computation, so the backtrack stack (() : A)
fails. By the Lemma, there is therefore some unfolding A′ of A such that (() : A′) fails
without Defined Predicate steps. The sequent [→ FN(A′)] is therefore valid, and (by the
last stage) therefore derivable; [→ FY(A)] is derivable from it by one application of Cut
with FN/FY of PAR and zero or more applications of Unf, right of PAR. •
Note that this last proof would not have worked if A had any free variables, since
the lemma does not guarantee that there will be a single unfolding which fails without
predicate calls for all instances of A.
Theorem 4.7 (Closed Completeness, stage 2b) All valid sequents S of the form
[→ S(A)], where A is a closed formula, are derivable in LKE+PAR.
Proof: First note that A succeeds (fails) iff ∃[A], its existential closure, succeeds (fails);
this is because the backtrack stack (() : ∃[A]) must be computed by first doing one ∃
step for each quantified variable. The rest of the theorem follows from the Soundness
theorem for LKE+PAR and the Closed Completeness, stage 3 theorem. •
What this means is that we have succeeded in logically characterising the two outer
circles of the Venn diagram, Figure 2.6. A query A is in the outer failure set iff
[→ FY(∃[A])] is derivable, and it is in the outer success set iff [→ S(∃[A])] is derivable.
Because of the great expressive power of sequents, sequents can express many more things
than just the success or failure of individual queries; the incompleteness of LKE+PAR
is only in these areas.
LKE+PAR has enabled us to achieve one part of our main goal: the logical character-
isation of the sets of queries which succeed or fail in the parallel operational semantics
SOS. Because of the completeness properties of SOS/so and SOS/sa, we have also char-
acterised the queries which fail in SOS/so and those which succeed in SOS/sa.
Theorem 5.2 (Flat Completeness, stage 2) All valid sequents S containing no pred-
icate calls are derivable in LKE+PAR.
Proof: By induction on j·ω + k, where j is the total number of connectives within signed
formulae, and k is the total number of connectives anywhere in assertions containing
signed formulae. Case 0 is the previous stage (no assertions contain signed formulae). If
there are assertions containing signed formulae, we have cases on the first such formula
in the sequent.
If this formula is not itself a signed formula, we can eliminate its top-level connective in
the manner of the last stage, decreasing k by one. (In the rest of the cases, we will push
one connective outside the sign of a signed formula, leaving k the same but decreasing j
by one.)
Otherwise, it is a signed formula. If it is of the form S(s = t), then let S′ be S with
this assertion replaced by s = t. By the validity of the PAR axiom S(=),l, S′ is valid; we
can derive S from it by an application of Cut with S(=),r. The same reasoning holds for
the other S(A) assertions, F(=), F(&), F(∨), and FN(∃). It does not hold for FY(∃)
because there is a different rule for each direction.
If the signed formula is of the form FY(∃x B) and is in the antecedent, then let S′
be S with this assertion replaced by ∀x FY(B). S can be derived from S′ by Cut with
F(∃),l; we now need only to show that S′ is valid. Let S″ be S with FY(∃x B) replaced
by ∀x FN(B). S″ is derivable from S (by Cut with F(∃),r), and so must be valid; but
since B contains no predicate calls, FY(B) implies FN(B), and S′ is also valid.
If the signed formula is of the form FY(∃x B) and is in the consequent, then let S′ be S
with this assertion replaced by ∀x FN(B). S can be derived from S′ by Cut with F(∃),r
and FN/FY from PAR; but S′ can be derived from S by Cut with F(∃),l, and so must
be valid. •
This last stage says that as long as we restrict our attention to only predicate-free
sequents, we can prove any valid sequent. This is not very useful for characterising logic
programming, since there we are mostly concerned with proving properties of recursive
predicates. Its utility is mainly in casting light on the question of why there is no complete
proof system.
One way of strengthening this result is to allow predicate calls, but to disallow recursion
in the base program. This kind of result suggests that it is not exactly predicate calls,
but calls to recursive predicates, that cause the problem.
Definition 5.3 A hierarchical program Π is one in which each predicate P in the lan-
guage L can be assigned a natural number nP ≥ 0 such that if P calls Q as defined in
Π, nQ < nP.
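Whether a program is hierarchical is a property of its call graph: ranks nP exist exactly when the graph is acyclic. The small sketch below is my own illustration, not from the thesis; it abstracts the program to a mapping from each predicate to the set of predicates its definition calls, and assigns ranks by depth-first search.

```python
def is_hierarchical(calls):
    """Decide whether ranks n_P with n_Q < n_P (whenever P calls Q)
    can be assigned, i.e. whether the call graph is acyclic."""
    rank = {}
    def assign(p, active=()):
        if p in rank:
            return rank[p]
        if p in active:
            raise ValueError(p)          # P (directly or indirectly) calls itself
        rank[p] = 1 + max((assign(q, active + (p,))
                           for q in calls.get(p, ())), default=0)
        return rank[p]
    try:
        for p in calls:
            assign(p)
    except ValueError:
        return False
    return True
```

A non-recursive program such as {'Length': {'Eq'}, 'Eq': set()} is hierarchical; a program containing Loop, which calls itself, is not.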
Theorem 5.4 If Π is hierarchical, then all valid sequents S are derivable in LKE+PAR.
Proof : By the techniques of this section, except that signed formulae containing pred-
icate calls are expanded by a finite number of applications of Cut with Unf from PAR.
•
There are various ways in which we could combine the results of the last section with
those of this section, to describe wider and wider classes of sequents or programs for
which LKE+PAR is complete. Unfortunately, these classes are somewhat unnatural and
difficult to describe. Here are some brief examples.
Theorem 5.5 All valid sequents in which no predicate calls appear in a negative context,
and no S or FY formula in a positive context contains both predicate calls and free
variables, are derivable in LKE+PAR.
6. Discussion
Here I present some notes of a more philosophical and historical nature on the contents
of this chapter, which would have seemed out of place in the technical discussions earlier.
6.1. LKE
Equality has been examined in the context of first order logic for decades. Commentators
such as Fitch [32] and Church [20] give axiomatisations for simple equality between first
order terms, and note that the axiom ∀x (x = x) and some form of substitution axiom
schemata are sufficient to deduce the transitivity and symmetry of equality. Takeuti [72]
casts this into sequent calculus form by augmenting Gentzen's LK [36] to a calculus LKe;
this calculus has the property that a sequent is derivable in it iff the sequent with the
equality axioms added to the antecedent is derivable.
The notion of validity associated with these formalisations of equality leaves open the
possibility that two non-identical closed terms might be equal. For the purposes of logic
programming, we want to exclude this possibility. We thus want an axiomatisation
equivalent to Clark's equality theory [22]; actually, any sequent calculus which has the
completeness property for equality between closed terms as syntactic identity, will do for
this purpose. LKE is simply Gentzen's LK with such additional axioms and rules.
Readers might be curious about the classical nature of LKE. Recent results [54] show
that logic programming has a strong connection with intuitionistic logic, but LKE is
clearly a classical sequent calculus. However, these results are about the internal logic
within logic programming, and in LKE+PAR we are concerned with notions of proving
properties of success and failure of queries in a program logic.
Constructivist principles should not bar us from working classically in this setting,
because given our definition of validity, the law of excluded middle clearly holds for all
assertions. Note, for instance, that no predicates other than equality can appear at the
top level of sequents, and that all other predicate applications are enclosed within signs.
6.2. PAR
The classicality of the assertion connectives is exploited in the PAR axioms, which present
in a clear and concise manner the relationship between goal formula connectives within
signed formulae and assertion connectives outside signed formulae. The S axioms state
that success of a query is essentially the same as its provability in LKE (given the
expansion of predicates with the Unfolding rule). The F axioms, like De Morgan's
laws, state how failure can be "pushed down through" the goal formula connectives,
converting conjunctions into disjunctions, disjunctions into conjunctions, and existentials
into universals.
The only elements which mar the compositionality of the PAR axioms are the FN/FY
dichotomy and the associated unfolding of predicate calls within goal formulae. This
device is necessitated by the inherent asymmetry in logic programming discussed in
Chapter 2: essentially, we terminate if only one solution is found, but rather than terminating
if only one counterexample is found, we keep discarding counterexamples until we are
satisfied that every element of the domain is a counterexample.
Chapter 4
Characterising Sequential Systems
In this chapter, I will give a characterisation of the two inner circles of the Venn diagram
in Figure 2.6 in the same way as I characterised the two outer circles. That is, I will
give a proof-theoretic characterisation of sequential logic programming (in particular, the
operational semantics SP) in the form of a sequent calculus.
For this sequent calculus, we can use the rules LKE from the last chapter unchanged; we
need only give a new group of axioms, SEQ, corresponding to PAR from the last chapter.
These axioms, however, are more complex than those in PAR, have more side-conditions,
and in particular involve the concept of disjunctive unfoldings of formulae.
Nevertheless, we can prove the same things about SEQ that we can about PAR: the laws
are sound, and the proof system LKE+SEQ characterises sequential logic programming
in several useful ways.
I will also give a characterisation of the last circle in Figure 2.6, namely the middle
success circle. This set contains all queries which succeed in SOS/so, and can be charac-
terised by a set of axioms, PASO, which combines axioms from PAR and from SEQ in a
simple and intuitively clear way.
1. Approaches to Semantics
I begin by going into more detail about why we want a semantics for sequential logic
programming, and what approaches have been taken so far to giving one.
The assumptions made about search strategies in most research on foundations of logic
programming (for instance, SLD-resolution with a fair search rule) are not satisfied by
sequential logic programming. Sequential Prolog systems may diverge in cases where fair
SLD-resolution can succeed, or in cases where parallel Prologs can fail.
However, it seems clear that sequential Prolog is a useful language - and thus needs
a mathematical semantics which will allow us to do such things as proving termination
and correctness properties of programs. Various approaches have been taken to describ-
ing termination and correctness, including analyses of the operational semantics, and
denotational descriptions that implicitly take termination into account.
branches to the left of the first solution, then the program terminates. Francez et al. also
give a proof system in which proofs of properties of programs can be made.
This is an adequate method of characterising termination. However, the operational
semantics of a logic programming language is clearly secondary to the declarative seman-
tics, which is where the whole purpose of the language comes from. A characterisation
of termination in terms of the underlying logic of the language would be preferable to
this purely operational description. Their proof system approach, while having a logical
structure, reifies such concepts as answer substitutions and unification, which belong
more properly to the operational semantics than to the abstract logical structure of
Prolog programs. We therefore achieve very little abstract mathematical insight from
this technique.
2. Disjunctive Unfoldings
The notion of the disjunctive unfolding of a formula is one of the main novelties of
this thesis, and the mechanism which allows us to isolate the non-compositionality of
sequential Prolog. The disjunctive unfolding of a formula A is a formula A' which is
classically equivalent to A, but has the property that its satisfiability depends only on
the satisfiability of its subformulae.
This requires some motivation. Once we have set out to develop a proof system char-
acterising sequential Prolog, there is one fairly natural way to proceed (which has been
followed independently, for example, by Girard [38]). However, the resulting proof sys-
tem still has soundness problems; as with full first order logic, we can still prove things
which have no corresponding computation.
Unfoldings of formulae are exactly what we need to solve these soundness problems.
This section will present the idea of unfoldings by giving an outline of the initial at-
tempt at a proof system, describing that system's problems, defining the predicate and
disjunctive unfoldings of a formula, and proving some essential properties of unfoldings.
S(B & C) ↔ S(B) & S(C)        F(B & C) ↔ F(B) ∨ (S(B) & F(C))
The axioms for success and failure of disjunctions would presumably be the duals of
these:
S(B ∨ C) ↔ S(B) ∨ (F(B) & S(C))        F(B ∨ C) ↔ F(B) & F(C)
However, although the → directions of these axioms are sound, the ← direction of the
F(&) axiom - the direction we need to prove sequents of the form [→ F(B&C)] - is not
sound. Consider the query (true ∨ Loop()) & false. This query diverges according to the
operational semantics SP; the transitions are

(() : (true ∨ Loop()) & false) ⇒ (() : true ∨ Loop(), false) ⇒
(() : true, false); (() : Loop(), false) ⇒ (() : false); (() : Loop(), false) ⇒
(() : Loop(), false) ⇒ (() : Loop(), false) ⇒ ...
However, with the rules given above, we can "prove" that it fails.
→ 0 = 0, F(true) & S(Loop())
→ S(true), F(true) & S(Loop())          0 = 1 →
→ S(true) ∨ (F(true) & S(Loop()))       → ¬(0 = 1)
→ S(true ∨ Loop())                      → F(false)
→ S(true ∨ Loop()) & F(false)
→ F(true ∨ Loop()), S(true ∨ Loop()) & F(false)
→ F(true ∨ Loop()) ∨ (S(true ∨ Loop()) & F(false))
→ F((true ∨ Loop()) & false)
But now consider the query (true & false) ∨ (Loop() & false). This is classically equiv-
alent to the previous query, and is handled in much the same way by the operational
semantics SP; in fact, the computation is exactly the same after one step. However, we
cannot get a derivation of [→ F((true & false) ∨ (Loop() & false))]; in other words, the
proof system behaves correctly in regard to this query.
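The divergent behaviour of both queries can be checked mechanically. The following Python sketch simulates an SP-style backtrack stack on variable-free goals only; the tuple representation and the step bound standing in for divergence are assumptions of this illustration, not part of the thesis's formal semantics:

```python
# Toy simulation of the sequential operational semantics SP, restricted to
# propositional goals: "true", "false", "loop" (a predicate Loop() defined
# to call itself), ("and", B, C), and ("or", B, C).

def step(stack):
    """One SP transition on a backtrack stack (a list of goal tuples)."""
    if not stack:
        return "fail", None
    goals = stack[0]
    if not goals:                      # first closure has no goals left
        return "succeed", None
    g, rest = goals[0], goals[1:]
    if g == "true":
        return "run", [rest] + stack[1:]
    if g == "false":                   # closure fails; backtrack
        return "run", stack[1:]
    if g == "loop":                    # Loop() unfolds to itself forever
        return "run", [("loop",) + rest] + stack[1:]
    if g[0] == "and":                  # sequential and: B then C
        return "run", [(g[1], g[2]) + rest] + stack[1:]
    if g[0] == "or":                   # or: push an alternative closure
        return "run", [(g[1],) + rest, (g[2],) + rest] + stack[1:]
    raise ValueError(g)

def run(goal, bound=50):
    stack = [(goal,)]
    for _ in range(bound):
        status, stack = step(stack)
        if status != "run":
            return status
    return "diverge"                   # step bound exhausted

q1 = ("and", ("or", "true", "loop"), "false")                   # (true V Loop()) & false
q2 = ("or", ("and", "true", "false"), ("and", "loop", "false"))
print(run(q1), run(q2))                # both diverge under SP
```

Tracing q1 reproduces the transition sequence displayed above: the true branch is consumed, false fails, and the computation backtracks into the Loop() closure permanently.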
Basically, if a query has all its disjunctions outside all its conjunctions and existential
quantifiers, then it will be handled correctly by the simple proof rules above. Many
queries will not have this property. However, a query can be transformed in this direction
Figure 4.1. B is the key subformula of formula A. Informally, the shaded region consists
only of =, &, and ∃ formulae.
Definition 2.1 The key subformula of a formula is the leftmost disjunction or predicate
application subformula which is not a proper subformula of another disjunction. (See
Figure 4.1.) Not every formula has a key subformula.
The disjunctive unfolding of a formula A with key subformula B ∨ C is the formula
A^B ∨ A^C, where A^B (resp. A^C) means A with its key subformula replaced by B (resp.
C). (See Figure 4.2.) Formulae which do not have a disjunctive key subformula have no
disjunctive unfolding; we write A ⇒d A′ if A′ is the disjunctive unfolding of A.
Examples. The formulae (B ∨ C) & s = t, ∃x(s = t & (B ∨ C)), and B ∨ C all have B ∨ C
as their key subformulae. The disjunctive unfoldings of these formulae are, respectively,
(B & s = t) ∨ (C & s = t); ∃x(s = t & B) ∨ ∃x(s = t & C); and B ∨ C itself. The formula
P(x) & (B ∨ C) has P(x) as its key subformula; the formula ∃x(x = 0 & x = 1) has no key
subformula.
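For concreteness, the key-subformula and disjunctive-unfolding operations of Definition 2.1 can be sketched over a small formula AST; the tuple representation (and the omission of bound-variable names in "exists" nodes) is an assumption of this sketch:

```python
# Formula nodes: ("eq",), ("pred", name), ("and", B, C), ("or", B, C),
# ("exists", B). A path is a tuple of child indices.

def key_path(f):
    """Path to the key subformula: the leftmost disjunction or predicate
    application not a proper subformula of another disjunction;
    None if the formula has no key subformula."""
    if f[0] in ("or", "pred"):         # stop here: never descend into a disjunction
        return ()
    if f[0] == "and":
        for i in (1, 2):
            p = key_path(f[i])
            if p is not None:
                return (i,) + p
        return None
    if f[0] == "exists":
        p = key_path(f[1])
        return None if p is None else (1,) + p
    return None                        # equations have no key subformula

def replace(f, path, g):
    """f with the subformula at path replaced by g."""
    if path == ():
        return g
    i = path[0]
    return f[:i] + (replace(f[i], path[1:], g),) + f[i+1:]

def disjunctive_unfolding(f):
    """A^B ∨ A^C when the key subformula of A is B ∨ C; None otherwise."""
    p = key_path(f)
    if p is None:
        return None
    key = f
    for i in p:
        key = key[i]
    if key[0] != "or":                 # key subformula is a predicate call
        return None
    return ("or", replace(f, p, key[1]), replace(f, p, key[2]))
```

Running it on the examples above: (B ∨ C) & s = t unfolds to (B & s = t) ∨ (C & s = t), B ∨ C unfolds to itself, and P(x) & (B ∨ C) and ∃x(x = 0 & x = 1) have no disjunctive unfolding.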
As we will see in future sections, we can make the simple proof rules given in Sec-
tion 2.1 work properly if we first restrict certain formulae in the rules to be formulae
without predicate application subformulae, and then add a rule allowing a formula to be
transformed by a disjunctive unfolding as well as a predicate unfolding.
Proof : (→) By induction on the depth of the key subformula (which is a disjunction)
in the tree of the formula A. In what follows, I use the notation β ⇒* S/F to denote
that β succeeds or fails, its meaning being consistent through the analysis of each case.
Case depth = 0: A is itself a disjunction, so its disjunctive unfolding is itself. The
result holds trivially.
Case depth > 0: A can be either a conjunction or an existential formula.
But then
(θ : ∃x B^B1 ∨ ∃x B^B2, α); β ⇒ (θ : ∃x B^B1, α); (θ : ∃x B^B2, α); β ⇒* S/F
This completes the proof of the forward direction of the theorem statement. We now
move on to the proof of the converse.
(•—) By induction on the depth of the key subformula (which is a disjunction) in the
tree of the formula A.
Case depth = 0: A is itself a disjunction, so its disjunctive unfolding is itself. The
result holds trivially.
Case depth > 0: A can be either a conjunction or an existential formula.
(θ : B, C, α); β ⇒* S/F
So, adding one step onto the front of the computation,
(θ : B&C, α); β ⇒* S/F
Theorem 3.1 (Validity of SEQ axioms) Each instance of the axiom schemata in
SEQ is a valid sequent.
Proof : One case for each arrow direction of each axiom. We will assume that θ is a
substitution which makes S closed.
S(=): As in the case for PAR. For θ which makes the sequent closed, sθ = tθ succeeds
iff s and t are identical under θ, iff sθ = tθ is valid.
S(&): As in the case for PAR. Since B and C are closed under θ, their conjunction
succeeds iff each succeeds independently.
S(∨), right: If S(B) ∨ (F(B) & S(C)) is valid under θ, then either Bθ succeeds or else
Bθ fails and Cθ succeeds. But since (() : (B ∨ C)θ) ⇒ (() : Bθ); (() : Cθ), (B ∨ C)θ also
succeeds.
S(∨), left: (() : (B ∨ C)θ) ⇒ (() : Bθ); (() : Cθ) ⇒* (θ′ : ε); β. So either (() : Bθ) must
succeed, or else (() : Bθ) must fail and (() : Cθ) succeed.
S(∃), right: Assume that θ makes the sequent closed, and that the antecedent is valid;
that is, that there is a t such that B[x := t]θ succeeds. Since B contains no predicate
calls, (∃x B)θ must either succeed or fail. If it were to fail, then (by Theorem 5.2) every
B[x := s]θ would fail; so it must also succeed.
S(∃), left: By the "succeeds-one-succeeds" theorem (Theorem 5.5), if (∃x B)θ succeeds,
then there must be some closed t such that B[x := t]θ succeeds.
For the F rules, we must prove validity for both FY and FN.
F(=): As in the case for PAR. For θ which makes the sequent closed, sθ = tθ fails iff
s and t are non-identical closed terms, iff ¬(sθ = tθ) is valid.
Success axioms:
S(...)   left                                   right
=:       S(s = t) → s = t                       s = t → S(s = t)
&:       S(B&C) → S(B) & S(C)                   S(B) & S(C) → S(B&C)
∨:       S(B ∨ C) → S(B) ∨ (FY(B) & S(C))       S(B) ∨ (FY(B) & S(C)) → S(B ∨ C)
∃:       S(∃x B) → ∃x S(B)                      ∃x S(B) → S(∃x B)   (*a)

Failure axioms:
F(...)   left                                   right
=:       F(s = t) → ¬ s = t                     ¬ s = t → F(s = t)
&:       F(B&C) → F(B) ∨ (S(B) & F(C))          (1) F(B) → F(B&C)
                                                (2) F(B) ∨ F(C) → F(B&C)   (*q)
∨:       F(B ∨ C) → F(B) & F(C)                 F(B) & F(C) → F(B ∨ C)
∃:       F(∃x B) → ∀x F(B)                      ∀x FN(B) → F(∃x B)
F(&), left: (() : (B&C)θ) ⇒ (() : Bθ, Cθ) ⇒* ε. So either Bθ fails without even
reaching Cθ, or else Bθ succeeds but Cθ fails. If (B&C)θ fails flatly, then so do the
subsidiary computations. (This is clearly not all the information we can extract from the
fact that (B&C)θ fails; see the discussion on the inversion principle at the end of this
chapter.)
F(&), right, 1: If (() : Bθ) ⇒* ε, we can graft the formula Cθ onto every closure in that
computation to get a failing computation of (() : Bθ, Cθ). But (() : (B&C)θ) ⇒ (() :
Bθ, Cθ), so (B&C)θ fails too. If Bθ fails flatly, then so does the new computation.
F(&), right, 2: If the antecedent is valid under θ, then either Bθ fails or Cθ fails. If Bθ
fails, then (() : Bθ, Cθ) must fail too; so (B&C)θ fails. If Bθ does not fail, then (since
it contains no predicate calls) it must succeed, and Cθ must fail for the antecedent to be
valid; so again (() : Bθ, Cθ) must fail.
F(∨): As in the case for PAR. (() : Bθ); (() : Cθ) fails iff both closures fail indepen-
dently; and fails flatly iff both closures fail flatly.
F(∃), left: As in the case for PAR. By the "fails-all-fail" theorem (Theorem 5.2), if
(∃x B)θ fails, then every instance of Bθ fails, and the restriction to flat failure holds as
well.
F(∃), right: As in the case for PAR. If every instance of Bθ fails flatly, we can (by the
techniques of this case in the PAR validity proof) prove that (∃x B)θ fails.
FN/FY: If A fails flatly, then it clearly fails.
Disj, Unf: by Theorems 3.2 and 2.2.
FN(P): Clearly, no predicate call can fail without performing at least one Defined
Predicate step. •
The reason for the restriction on the F(&), right, 2 rule should be clear from the
example given in the last section. The corresponding example justifying the restriction
on the S(∃), right rule is the following query:
∃x((x = 0 & Loop()) ∨ x = 1)
Obviously, the query (1 = 0 & Loop()) ∨ 1 = 1 succeeds, so the assertion ∃x S((x =
0 & Loop()) ∨ x = 1) is valid; but the query itself does not succeed. As in the case for F(&),
though, the disjunctive unfolding of the query (in this case ∃x(x = 0 & Loop()) ∨ ∃x(x = 1))
behaves properly.
Finally, we have the comprehensive Soundness result for LKE+SEQ.
Proof : By induction on the structure of the derivation. Cases are from the Validity
theorems for LKE and SEQ. •
Theorem 4.1 (Closed Completeness, stage 1) All valid sequents S which have only
equalities in the antecedent, and only equalities and FN formulae in the consequent, are
derivable in LKE+SEQ.
Proof : As in the proof of the corresponding theorem for PAR, by induction on the
number of connectives and equality formulae within FN signs. If this number is 0, then
by the Completeness Theorem for Equality, Theorem 4.3, the result holds.
Otherwise, let S be Γ → Δ, FN(D1), . . . , FN(Dm), where the Δ formulae are all equal-
ities. Cases are on D1. We will derive sequents which must also be valid, and which have
fewer connectives and equality formulae within FN signs.
Lemma 4.2 If a backtrack stack β fails, then some predicate unfolding of β can fail
flatly.
Theorem 4.3 (Closed Completeness, stage 2a) All valid sequents S of the form
[→ FY(A)], where A is a closed formula, are derivable in LKE+SEQ.
Theorem 4.4 (Closed Completeness, stage 2b) All valid sequents S of the form
[→ S(A)], where A is a closed formula, are derivable in LKE+SEQ.
Proof : As in the last chapter. We can either thin out all but a valid S or FY formula,
or thin out all the S and FY formulae, and by stages 1, 2a, and 2b, we will be able to
derive the resulting sequent. •
Theorem 4.6 (Closed Completeness, stage 4) All valid sequents S in which no free
variable appears in an S or FY subassertion, and no signed formula appears in a negative
context, are derivable in LKE+SEQ.
Proof : As in the last chapter: by induction on the total number of connectives outside
signed formulae. •
Finally, we have the important results about the characterisation of successful and
failing queries corresponding to that in the last chapter.
Theorem 4.7 (Characterisation of SP) A goal formula A succeeds in SP iff the se-
quent [→ S(∃[A])] is derivable in LKE+SEQ; it fails in SP iff the sequent [→ FY(∃[A])]
is derivable in LKE+SEQ.
Proof : As in the last chapter: from the Soundness and Completeness theorems. •
So just as LKE+PAR characterised the two outer circles from the Venn diagram (Figure
2.6), LKE+SEQ has succeeded in characterising the two innermost circles from that
diagram. A query A is in the innermost failure set iff [→ FY(∃[A])] is derivable; a query
A is in the innermost success set iff [→ S(∃[A])] is derivable. Because of the completeness
property of SOS/sa, we have also characterised the queries which fail in SOS/sa.
Proof : If S is valid with respect to SP, then it is also valid with respect to SOS; thus,
by the Flat Completeness, stage 1 theorem in the last chapter, it is derivable. •
Theorem 5.2 (Flat Completeness, stage 2) All valid sequents S containing no pred-
icate calls are derivable in LKE+SEQ.
Proof : As in the last chapter. The more restrictive side-conditions do not apply, since
none of the signed formulae contain predicate calls; the different forms of some of the
rules do not seriously affect the proof. •
The results about hierarchical programs extend to SEQ as well.
Theorem 5.3 If II is hierarchical, then all valid sequents S are derivable in LKE+SEQ.
Proof : As in the last chapter: we can unfold predicate calls until none are left in the
sequent. •
Success axioms:
S(...)   left                                   right
=:       S(s = t) → s = t                       s = t → S(s = t)
&:       S(B&C) → S(B) & S(C)                   S(B) & S(C) → S(B&C)
∨:       S(B ∨ C) → S(B) ∨ (FY(B) & S(C))       S(B) ∨ (FY(B) & S(C)) → S(B ∨ C)
∃:       S(∃x B) → ∃x S(B)                      ∃x S(B) → S(∃x B)   (*a)

Failure axioms:
F(...)   left                                   right
=:       F(s = t) → ¬ s = t                     ¬ s = t → F(s = t)
&:       F(B&C) → F(B) ∨ F(C)                   F(B) ∨ F(C) → F(B&C)
∨:       F(B ∨ C) → F(B) & F(C)                 F(B) & F(C) → F(B ∨ C)
∃:       F(∃x B) → ∀x F(B)                      ∀x FN(B) → F(∃x B)
Figure 4.4. PASO axioms characterising SOS/so. F means either FY or FN, its use
being consistent throughout each axiom.
As we saw in Chapter 2, the set of successful queries for SOS/so is smaller than the set
of successful queries for SOS because the sequential "or" misses some solutions; but it is
also bigger than the set of successful queries for SP, because the parallel "and" causes
more queries to fail, and this has the knock-on effect of allowing some more queries to
succeed.
This effect of the expansion of the set of failing queries is illustrated well in the set of
axioms PASO ("Parallel And, Sequential Or"; Fig. 4.4). The Success and Miscellaneous
axioms of PASO are exactly those from SEQ, but the Failure axioms are exactly those
from PAR. We are therefore able to prove more things to be failing than in SEQ (for
example, Loop() & false); and because the S(∨) axioms depend on failure, we are therefore
able to prove more things successful than in SEQ (for example, (Loop() & false) ∨ true,
the example from Chapter 2). See the Appendix for a derivation of the success of this
query in LKE+PASO.
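This knock-on effect can also be seen computationally. The following Python sketch evaluates closed, variable-free goals by reading the connectives the PASO way (parallel "and", sequential "or"); the three-valued S/F/D representation is an assumption of this illustration:

```python
def sos_so(g):
    """Return "S" (succeeds), "F" (fails), or "D" (diverges) for a closed
    propositional goal, with & read as parallel and, V as sequential or.
    Goals: "true", "false", "loop" (Loop() <- Loop(), which always
    diverges), ("and", B, C), ("or", B, C)."""
    if g == "true":
        return "S"
    if g == "false":
        return "F"
    if g == "loop":
        return "D"
    op, b, c = g
    rb, rc = sos_so(b), sos_so(c)
    if op == "and":                    # parallel and: fails if either fails,
        if "F" in (rb, rc):            # even when the other diverges
            return "F"
        return "D" if "D" in (rb, rc) else "S"
    if op == "or":                     # sequential or: try b first
        if rb == "S":
            return "S"
        return "D" if rb == "D" else rc
    raise ValueError(op)

q = ("or", ("and", "loop", "false"), "true")   # (Loop() & false) V true
print(sos_so(q))                               # succeeds in SOS/so
```

Note that no such compositional evaluator can be faithful to SP: under SP the same query diverges, because the sequential "and" evaluates Loop() before false. That non-compositionality is precisely what the disjunctive unfoldings of SEQ are designed to isolate.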
We must prove these axioms to be sound, and prove the equivalents of the Closed
Completeness results, of course. However, many of the results we need to prove follow
directly from those in this chapter and the previous one, and others have very similar
proofs. I will briefly outline the results, noting only where they differ from those for PAR
and SEQ.
Proof : This is much the same as in the case for SEQ; I will sketch it here. The proof
is by induction on the depth of the key subformula, a disjunction, in A.
Case depth = 0: A is itself a disjunction, so its disjunctive unfolding is itself. The
result holds trivially.
Case depth > 0: A can be either a conjunction or an existential formula.
• A = B&C, and the key subformula, B1 ∨ B2, is in B. If one of the following goal
stacks succeeds (resp. fails) in SOS/so, all the rest succeed (resp. fail) in SOS/so:
(θ : B&C, α)
(θ : B, C, α)
(θ : B^B1 ∨ B^B2, C, α)   *
(θ : B^B1, C, α); (θ : B^B2, C, α)
(θ : B^B1 & C, α); (θ : B^B2 & C, α)
(θ : (B^B1 & C) ∨ (B^B2 & C), α)
where the step * is from the induction hypothesis. Since A′ = (B^B1 & C) ∨ (B^B2 & C),
the result holds.
• A = B&C, and the key subformula, C1 ∨ C2, is in C. If one of the following goal
stacks succeeds (resp. fails) in SOS/so, all the rest succeed (resp. fail) in SOS/so:
(θ : B&C, α)
(θ : B, C, α)
(θ : B, C^C1 ∨ C^C2, α)   *
(θ : B, C^C1, α); (θ : B, C^C2, α)
(θ : B & C^C1, α); (θ : B & C^C2, α)
(θ : (B & C^C1) ∨ (B & C^C2), α)
• A = ∃x B. If one of the following goal stacks succeeds (resp. fails) in SOS/so, all
the rest succeed (resp. fail) in SOS/so:
(θ : ∃x B, α)
(θ : B[x := x′], α)
(θ : B^B1[x := x′] ∨ B^B2[x := x′], α)   *
(θ : B^B1[x := x′], α); (θ : B^B2[x := x′], α)
Theorem 6.2 (Validity of PASO Axioms) Each instance of the axiom schemata in
PASO is valid with respect to SOS/so.
Proof : The Failure group is identical to the one found in PAR. The only signed
formulae they contain are F-signed formulae, which are valid wrt SOS/so iff they are
valid wrt SOS (by the completeness results in Chapter 2); so, since we have proven them
valid wrt SOS in the validity proof for PAR, they are also valid wrt SOS/so. The same
holds for the Miscellaneous axioms that have only F-signed formulae.
The proofs for the other axioms are much as in the case for SEQ. The "and" is now
parallel, but the logic remains the same: if (Bθ & Cθ) is closed, then it succeeds iff both
Bθ and Cθ succeed independently. The validity of the Disj axioms follows from the
operational equivalence of disjunctive unfolding for SOS/so (Theorem 6.1). All other
axioms follow the pattern of SEQ. •
In the completeness results, the only stage that needs to be re-proved is Closed Com-
pleteness, stage 2b:
Theorem 6.3 (Closed Completeness, stage 2b) All sequents S valid wrt SOS/so
and of the form [→ S(A)], where A is a closed formula, are derivable in LKE+PASO.
Proof: Almost identical to the case for SEQ. The only case that is significantly different
is S(∃). There, we must prove that if ∃x B is closed and succeeds, then [→ S(∃x B)] is
derivable, given the induction hypothesis. There are three subcases.
In the case that B contains no predicate calls, we can apply the S(∃), right axiom
straightforwardly, because the "succeeds-one-succeeds" theorem (Theorem 5.5) guaran-
tees that there is a witness.
If B contains predicate calls and its key subformula is a predicate call, then the proof
is not quite so straightforward as in the case for SEQ. Because we have parallel "and",
the closure may be split up into several due to some disjunction to the right of the key
subformula.
However, it is still the case that since the query succeeds, the key subformula must
eventually be computed in at least one of the resultant closures. Thus, the predicate 1-
unfolding of B in which the key subformula is unfolded must have an SOS/so-computation
of fewer steps. The induction hypothesis applies, and we can therefore derive [→ S(∃x B)]
from the derivable [→ S(∃x B′)], where B′ is this predicate 1-unfolding of B.
If B contains predicate calls and its key subformula is a disjunction, then (by the
operational equivalence of disjunctive unfoldings, Theorem 6.1) the disjunctive unfolding
of ∃x B, namely ∃x B^B1 ∨ ∃x B^B2, must also succeed. But then, as in the case for SEQ,
we can use the S(∨), right axiom to derive [→ S(∃x B^B1 ∨ ∃x B^B2)] from either
[→ S(∃x B^B1)] or from [→ F(∃x B^B1)] and [→ S(∃x B^B2)]. •
Stages 1 and 2a of Closed Completeness are about failing formulae, and thus follow from
the corresponding stages from PAR. Stages 3 and 4 follow the proofs for PAR and SEQ
exactly. We therefore have the following important characterisation result corresponding
to those for PAR and SEQ:
Proof : As in the case for PAR and SEQ: from the Soundness and Completeness
theorems. •
Finally, as with SEQ, the Flat Completeness results go through unchanged because all
sequents containing no predicate calls have the property that they are either valid with
respect to all variants of SOS, or invalid with respect to all variants.
These soundness and completeness results complete our logical characterisation of the
queries succeeding and failing in the various control disciplines. The characterisation
is especially pleasing because the three characterising proof systems share the common
rules LKE, and because the PASO rules integrate the PAR and SEQ rules in a simple
and intuitively clear way.
7. Discussion
Some comments from the Discussion section of Chapter 3 apply to the material in this
chapter as well. These include the discussion of more practical failure rules. Here,
I concentrate on issues surrounding SEQ in particular, and its correspondence to SP;
these comments apply equally to PASO and its correspondence to SOS/so.
In the case of S(∃), for instance, we can conclude from the information that ∃x B
succeeds that there is a t such that B[x := t] succeeds. However, this information is
not sufficient to prove that ∃x B succeeds. This is one of the effects of having the
non-compositional rules of predicate and disjunctive unfolding: the completeness of the
consequent rules depends on being able to transform a formula into one for which certain
side-conditions are met. In this case, we can regain the inversion property by making
the same restriction on the formula as holds in the introduction rule: that B contain
no predicate calls. If B does contain predicate calls, then we must use the antecedent
unfolding rule, just as we have to use the consequent unfolding rule when the formula
appears in the consequent.
The case of F(&) is more problematic. We cannot make the restriction that B contain
no predicate calls, because it may be that B contains predicate calls but fails before it
makes any. One possibility for regaining an inversion property is to split the left-hand
axiom into two:
F(B&C) → FN(B) ∨ F(A′)
where B contains predicate calls, and A′ is a predicate unfolding of (B&C); and
F(B&C) → F(B) ∨ (S(B) & F(C))
where B contains no predicate calls. These rules are slightly clumsy, but do extract as
much information out of antecedent signed formulae as possible.
In short, we can modify some of the rules in SEQ to regain an inversion property.
Since this involves making changes and restrictions to the rules inessential to the proofs of
soundness and completeness, I chose not to make these restrictions in SEQ for simplicity's
sake.
70 Chapter 4- Characterising Sequential Systems
Chapter 5
Approaches to Incompleteness
Although we have proven some useful completeness theorems about the proof systems in
the last two chapters, we have not been able to prove absolute completeness: that every
valid sequent is derivable. Because of some formal incompleteness results, we will never
be able to prove such a completeness theorem for any finitary proof system; but there are
several ways in which we can, at least partially, escape the effect of these incompleteness
results. In this chapter, I present the incompleteness theorems and some of the partial
solutions.
There are two main incompleteness results, as discussed in the first section below.
The first says that we will never be able to derive all valid closed sequents which have
signed formulae in negative contexts, and follows from the non-existence of a solution to
the Halting Problem. (We can deal with many of the important cases of this result by
adding extra rules which I will describe.) The second result says that we will never be
able to derive all valid sequents with free variables, even if they have no signed formulae
in negative contexts, and is a version of Gödel's Incompleteness Theorem.
The "mathematical" solution to these problems is to bring the proof theory closer to
a kind of model theory, by allowing infinitary elements into the proof systems. Though
these are not adequate solutions for practical theorem proving, they are useful in that
they shed light on the extent to which the proof systems in question are complete. I
discuss these methods in the second section of this chapter.
The more practical solution to some of the incompleteness problems is to add some
form of induction. Many of the useful sequents with free variables which we cannot prove
in the finitary systems in the last two chapters, can be proven if we add induction rules
to the systems. Some of the variations on this approach are described in the last section.
1. Incompleteness
In the Closed Completeness results in the last two chapters, there were two restrictions
on the class of valid sequents being proven derivable. The first was that signed formulae
were barred from appearing in negative contexts; the second was that free variables were
barred from appearing within signed formulae.
This is not to say that no valid sequent not appearing in the classes mentioned is
derivable; many are, as for instance the Flat Completeness results show. So we can
relax these restrictions to some extent, and show that wider classes of valid sequents
are derivable. But we cannot remove either of them completely, even while maintaining
the other. If we remove the first restriction, we will not be able to find a finitary proof
system which is complete for that class of sequents, due to the unsolvability of the Halting
Problem. This will also happen if we remove the second restriction, due this time to a
version of Gödel's Incompleteness Theorem.
This section proves these incompleteness results, and points out a small relaxation of
the first condition which will allow us to prove the derivability of another fairly useful
class of valid sequents.
Guaranteed Termination
The Halting Problem Incompleteness theorem seems a little "unfair" in some ways. Some
sequents are valid for the trivial reason that some instance of some signed formula in a
negative context diverges. It seems that many of the cases in which we will be unable to
prove a valid sequent happen when the sequent is of this form. This is unfair because in
practice, we will seldom want to prove such sequents; we are more interested in proving
properties of programs given known success or failure properties of predicates.
One important class of such sequents has assumptions of the form S(A) or F(A), in
which each instance of A either succeeds or fails. For instance, we may want to prove
the sequent
S(Even(x)) → S(N(x))
where N tests whether its argument is a Peano integer, and Even tests whether it is
an even Peano integer. Every closed instance of this sequent is derivable in LKE+PAR
or LKE+SEQ, even though the Closed Completeness theorem does not include it. In
general, if we want to prove properties of predicates based on this kind of "type" or
"structural" properties of their arguments, we will get sequents of this form. We would
therefore like to prove a completeness result which takes in this class of sequents as well.
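A concrete reading of the example sequent: with N and Even modelled as Python functions over Peano terms, every closed instance of S(Even(x)) → S(N(x)) can be spot-checked. The encodings below are assumptions of this sketch, not the thesis's programs:

```python
# Peano terms: "0" or ("s", t). Both predicates below are "guaranteed
# terminating" on closed terms: every closed instance succeeds or fails.

def N(t):
    """Succeeds iff t is a Peano integer: N(0); N(s(x)) <- N(x)."""
    while t != "0":
        if not (isinstance(t, tuple) and t[0] == "s"):
            return False
        t = t[1]
    return True

def Even(t):
    """Succeeds iff t is an even Peano integer:
    Even(0); Even(s(s(x))) <- Even(x)."""
    while t != "0":
        if not (isinstance(t, tuple) and t[0] == "s"
                and isinstance(t[1], tuple) and t[1][0] == "s"):
            return False
        t = t[1][1]
    return True

def peano(n):
    """The numeral s^n(0)."""
    return "0" if n == 0 else ("s", peano(n - 1))

# Whenever Even(t) succeeds, N(t) succeeds: the sequent holds instance-wise.
assert all(N(peano(n)) for n in range(20) if Even(peano(n)))
```

The point of the GT rules introduced below is precisely to let such instance-wise facts about guaranteed-terminating predicates be assembled into a derivation of the open sequent.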
Definition 1.2 A goal formula A is guaranteed terminating (in a variant of SOS) if
every instance of A either succeeds or fails. We similarly say that a signed formula is
guaranteed terminating if its formula is guaranteed terminating.
To facilitate the proof of completeness for guaranteed termination, I will introduce
two simple rules which are sound with respect to any operational semantics, but were
not needed for proving soundness or completeness results so far. The completeness of
guaranteed termination could be proven without them, but not without repeating many
of the cases in the other completeness proofs. Let the set of rules GT be the following:
GT,S:   Γ → Δ, FY(A)          GT,F:   Γ → Δ, S(A)
        --------------                --------------
        Γ, S(A) → Δ                   Γ, F(A) → Δ

where F in GT,F means either FY or FN.
Theorem 1.3 The GT rules are sound with respect to any variant of SOS; that is, if
the premisses of an application of one of the GT rules are valid wrt the variant, then the
conclusion is valid wrt the variant.
Proof: Consider GT,S. The premiss says that if all the Γ assertions are valid under
θ, then either Aθ fails (in which case S(A)θ is invalid and the conclusion is valid) or else
one of the Δ assertions is valid under θ (in which case, again, the conclusion is valid).
The proof for GT,F is similar; the reasoning is the same for both the FY and FN cases.
•
The kind of completeness that these rules give us is the following consequence of Closed
Completeness, stage 3.
Theorem 1.4 (Completeness for Guaranteed Termination) All sequents S valid
wrt SOS (resp. SP, SOS/so) in which all assertions in the antecedent are equality formulae
or guaranteed-terminating signed formulae, all assertions in the consequent are equality
formulae or signed formulae, and no free variable appears within an S or FY sign, are
derivable in LKE+PAR+GT (resp. LKE+SEQ+GT, LKE+PASO+GT).
Proof: By induction on the number of signed formulae in the antecedent. When this
is not zero, consider the first such formula, σ(A). A either succeeds or fails; therefore
(by Closed Completeness, stage 3) we can prove the sequent [→ S(A), FY(A)].
If σ is S, let S' be S with S(A) removed from the antecedent and FY(A) added to the
consequent. S' is derivable from S using Cut and [→ S(A), FY(A)], so it is valid; by the
induction hypothesis, it must be derivable; and S is derivable from it by an application
of the GT,S rule.
The cases where σ is FY or FN are similar. •
There is a further result corresponding to stage 4 of Closed Completeness.
74 Chapter 5. Approaches to Incompleteness
Theorem 1.5 All sequents S valid wrt SOS (resp. SP, SOS/so) in which no free variable
appears in an S or FY subassertion, and no signed formula appears in a negative context
except guaranteed-terminating signed formulae, are derivable in LKE+PAR+GT (resp.
LKE+SEQ+GT, LKE+PASO+GT).
Proof: Assume that, for every L, there is such a proof system; then prove a contradiction.
Choose L so that we can represent variables, terms, formulae, signed formulae, finite
programs, judgements, and derivations for such a proof system, all as closed terms. As
Gödel showed, any language with at least one nullary function symbol and at least one
non-nullary function symbol suffices. Applying the assumption, we have a sound and
complete finitary proof system S for judgements in this language. We can decide on a
representation within the language of each expression (variable, term, etc.); let us write
⌜X⌝ for the representation of the expression X.
The readers can convince themselves that we can write a program Π_G containing
predicates Subst and Deriv with the following operational properties:
• The closed query Subst(r, s₁, s₂, t) succeeds iff r is some ⌜σ(A)⌝, and we can obtain
t from r by substituting all occurrences of ⌜z₁⌝ by s₁ and all occurrences of ⌜z₂⌝ by
s₂. (Note that z₁, z₂ are fixed variable names.) Moreover, if r, s₁, s₂ are all closed
and t is some variable x, then the query does not fail, and the resultant substitution
substitutes a closed term for x.
• The closed query Deriv(r, s, t) succeeds iff s is some ⌜Π⌝, t is some ⌜σ(A)⌝, and r
is the representation of a derivation, in the proof system S, of (Π, σ(A)). Moreover,
if all the arguments are closed terms, then the query terminates (either succeeds
or fails).
(Note that we can write these predicates so that they behave properly under any variant
of SOS.)
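The functional direction of Subst (closed r, s₁, s₂ in, t out) can be sketched over a hypothetical tuple encoding of formula representations; the encoding and the name subst are illustrative assumptions, not the thesis's construction:

```python
# Hypothetical sketch of the functional direction of the Subst predicate:
# formula representations are nested tuples, and substitution replaces every
# occurrence of the fixed variable names "z1" and "z2".

def subst(r, s1, s2):
    """Return r with all occurrences of "z1" replaced by s1, and "z2" by s2."""
    if r == "z1":
        return s1
    if r == "z2":
        return s2
    if isinstance(r, tuple):
        return tuple(subst(arg, s1, s2) for arg in r)
    return r  # any other atom is left unchanged
```

When r, s1 and s2 are closed representations, the result is again closed, mirroring the property claimed above for the resultant substitution.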
Now, let U be the signed formula
F(∃y(Subst(z₂, z₁, z₂, y) & Deriv(x, z₁, y)))
Let G be the signed formula U[z₁ := ⌜Π_G⌝, z₂ := ⌜U⌝]; that is, let G be
F(∃y(Subst(⌜U⌝, ⌜Π_G⌝, ⌜U⌝, y) & Deriv(x, ⌜Π_G⌝, y)))
Now assume that (Π_G, G) is derivable in S. Then, by the soundness of S, G is valid;
that is, for every closed term t, the query
(*) ∃y(Subst(⌜U⌝, ⌜Π_G⌝, ⌜U⌝, y) & Deriv(t, ⌜Π_G⌝, y))
fails. However, the call to Subst cannot fail, and must in fact produce a substitution
mapping y to some closed term s.
But what is this closed term s? It is in fact ⌜G⌝. From this we must conclude that
the query Deriv(t, ⌜Π_G⌝, ⌜G⌝) fails, for every t. But what this means is that there is
no derivation for (Π_G, G), even though that was what we just assumed. Thus, (Π_G, G)
must not be derivable in S after all.
However, it is still the case that the call to Subst cannot fail, and that the call to Deriv
cannot diverge; so the query (*) must fail, for every choice of t. This means that (Π_G, G)
is in fact valid but underivable.
This contradicts our first assumption that S was complete; so for this choice of L, there
can be no sound, complete, and finitary proof system for program-formula judgements.
•
The proof systems we have been studying in the other chapters are parameterised
by the (implicit) program: that is, there is a different proof system for each
program. This parameterisation was done only for convenience, however, and we could
instead give calculi whose judgements are sequents with appended programs. It should
be clear that the incompleteness proof will still apply to the parameterised proof systems,
as long as they are still finitary.
2. Infinitary Methods
The incompleteness results of the last section emphasised the restriction to finitary proof
systems: proof systems with judgements and derivations which can be represented finitely.
Although these are really the only kinds of proof systems which can be used in practice,
it is possible to obtain much stronger completeness results by adding infinitary elements
to the system.
Of course, if our only concern had been to build a finitary proof system, and we now had
decided to remove that restriction, we could simply code the entire operational semantics
and definition of validity into a complete "proof system". We must take care to adhere
to our original goals of producing as logical a proof system as possible, and not to add
elements which are so strong as to make the proof system trivially complete. Infinitary
methods meeting these criteria are the addition of an infinitary rule (rule with an infinite
number of premisses) to handle free variables in sequents, and the addition of elements
to handle divergence by model checking methods.
Inf:  Γ[x := t₁] → Δ[x := t₁]    Γ[x := t₂] → Δ[x := t₂]    ...
      ─────────────────────────────────────────────────────────
                              Γ → Δ

where t₁, t₂, ... is an enumeration of all the closed terms in the language.
If the language L has an infinite number of closed terms, then this rule has an infinite
number of premisses. However, the addition of this rule allows us to remove the restriction
to closed sequents in the various completeness proofs. The soundness of the infinitary
rule follows immediately from the definition of validity, and we can prove (for instance)
the following analogue of the Closed Completeness, stage 4 theorem for PAR:
Theorem 2.2 Every instance of a goal formula A succeeds in SOS iff the sequent
[→ S(A)] is derivable in LKE+PAR+Inf; every instance of it fails in SOS iff the
sequent [→ FY(A)] is derivable in LKE+PAR+Inf.
Proof: From the Soundness theorem for LKE+PAR+Inf and the Completeness
theorem above. •
The analogous results also hold for LKE+SEQ+Inf, LKE+PASO+Inf, and any of these
systems augmented by the Guaranteed Termination rules of the last section.
The value of this infinitary rule lies in its simplicity, combined with the scope of the
resulting completeness. The importance of the infinitary rule is that it says something
about the other rules in the given proof systems: it says that they are complete, "except
that they cannot handle free variables." This is useful, since it assures us that the rules
are not incomplete for some other reason, as for instance they would be if one of the
connective axioms were missing. This complements the knowledge that we have from
the Closed Completeness results that the finitary systems are complete for a wide class
of closed sequents.
If we had the infinitary rule from the beginning, some of the completeness results
would be easier to prove, since we could assume right from the start that all sequents
were closed and introduce free variables only at the last stage. Some of the equality rules
are redundant in the system with the infinitary rule. In fact, the two groups of equality
rules in LKE represent those which would be needed even with unrestricted use of the
infinitary rule (Eq, Ineq, and Comp), and those which serve to replace the infinitary rule
to some extent in some of the completeness theorems for the finitary systems (Occ, Sub,l,
and Sub,r).
Finally, the infinitary rule is very useful in smoothing out a difficulty with the
completeness results. Note that the completeness theorem for equality (4.3) followed directly
from the assumption that there was an infinite number of closed terms in the base
language. Without this assumption, the theorem would not be true: there would be a valid
sequent satisfying the restrictions in the theorems but underivable in LKE. For example,
consider the language with only two terms, the constants a and b; then the sequent
[→ x = a, x = b] would be valid, since under either of the two minimal substitutions which
make it closed ([x := a] and [x := b]), one of its formulae is valid. All the other
completeness theorems are based on this equality completeness theorem, so all these would
fail as well.
However, the case in which there are only a finite number of terms is exactly the case
in which the "infinitary" rule is no longer infinitary! Thus in this case we can prove the
first stages of the completeness results in a trivial manner by using the "infinitary" rule,
and the strong completeness theorem of this section can be proved without infinitary
constructs.
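In the two-constant language above, for instance, the "infinitary" rule is just the following two-premiss rule (written out here for illustration):

```latex
\frac{\Gamma[x := a] \to \Delta[x := a]
      \qquad
      \Gamma[x := b] \to \Delta[x := b]}
     {\Gamma \to \Delta}
```

and the valid sequent [→ x = a, x = b] follows from its two closed instances [→ a = a, a = b] and [→ b = a, b = b].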
3. Induction
The finitary proof systems of the last chapters are complete for all closed sequents with
empty antecedents, and all sequents involving non-recursive predicates. However, there
are many practically important non-closed assertions involving recursive predicates which
cannot be proven by them; and needless to say, the infinitary techniques of the last section
are not useful in practice.
The main finitary method of handling such assertions in practice is induction. This is
a generalisation of the usual practice of proving a statement about the natural numbers
using "mathematical induction": if the statement holds for 0, and whenever it holds for
n it holds for n + 1, then it holds for all natural numbers.
In the context of proof systems, we can add proof rules which formalise this method
of reasoning. The two major types of induction rules that we might add are the simpler,
but less general, subterm induction rule, and the more complex but much more general
and useful well-founded induction rule. Neither of these rules gives us a complete proof
system, because of Gödel incompleteness; but they do enable us to prove a useful subset
of the non-closed sequents.
Definition 3.1 Let S be of the form Γ → Δ, σ(A(x)), where A(x) is the only formula
in S to have x free. Let f be an n-ary function symbol. Then IH(S, f, x) is the sequent

Γ, σ(A(y₁)), ..., σ(A(yₙ)) → Δ, σ(A(f(y₁, ..., yₙ)))

where y₁, ..., yₙ are distinct variables appearing nowhere in S.
The induction rule (or more properly rule schema) for the sequent calculus now takes
the form

IH(S, f₁, x)   ...   IH(S, fₘ, x)
─────────────────────────────────
                S

where S is any sequent of the form Γ → Δ, σ(A(x)), A(x) is the only formula in S to
have x free, and f₁ ... fₘ are all the function symbols in the language L. This rule, which
I will call SUBTERM, is sound in the same sense that the other rules we have discussed
are sound, as stated by the following theorem.
Theorem 3.2 If each premiss of an application of the induction rule is valid with respect
to some variant of SOS, then the conclusion is valid with respect to it.
Proof: If Γ → Δ is valid, then the result holds trivially. Otherwise, there is some
substitution θ[x := t] under which all of the Γ signed formulae are valid, and we must
prove that σ(A) is valid under it. We can do this by induction on the structure of t;
each subcase corresponds to one of the premisses of the rule application. •
See Section 4. for an extended example of the use of subterm induction - a derivation of
the sequent [→ S(Add(x, 0, x))]. It is possible to derive this sequent using the infinitary
rule mentioned in the last section, but not with just the rules in the finitary proof systems
of Chapters 3 and 4. Subterm induction allows us to derive it using only finitary methods.
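For illustration, assuming a language whose only function symbols are the constant 0 and the unary s, and reading IH(S, f, x) as the usual structural induction hypothesis, the application of SUBTERM concluding [→ S(Add(x, 0, x))] would have exactly two premisses:

```latex
[\,\to S(Add(0, 0, 0))\,]
\qquad
[\,S(Add(y, 0, y)) \to S(Add(s(y), 0, s(y)))\,]
```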
The sequent [→ S(N(x)) ⊃ S(Add(x, 0, x))] we could also prove using the induction rule.
However, if the language contained a further constant a, the premiss for a would be
[→ S(N(a)) ⊃ S(Add(a, 0, a))], which would be valid only by virtue of N(a) failing.
In general, as we add function symbols, we add premisses to the induction rule which
we must prove in many trivial cases. This is a problem in general with logic programs,
because we often want to prove properties about predicates while assuming some type
information about the parameters to the predicate (e.g., that all parameters are lists),
but we generally have many function symbols.
Another shortcoming of subterm induction is that not all recursions found in programs
are on the subterm structure of the parameters to a predicate. Some predicates ter-
minate in general because of some other decreasing measure. For assertions involving
these predicates, we could define a natural-number measure on the parameters and use
subterm induction on natural numbers to derive the relevant sequents. This may turn
out to be rather complicated, however; a more general solution is discussed in the next
section.
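A standard illustration of such a measure (a hypothetical example, not one drawn from the thesis): Euclid's algorithm recurses on arguments that are not subterms of the originals, yet terminates because a natural-number measure strictly decreases.

```python
# Hypothetical example: the recursive call's arguments (b, a % b) are not
# subterms of (a, b); termination follows because the natural-number measure
# given by the second argument strictly decreases toward 0 on each call.

def gcd(a, b):
    """Greatest common divisor of non-negative integers a and b."""
    if b == 0:
        return a
    return gcd(b, a % b)
```

Here the well-founded measure is the second argument; subterm induction cannot follow this recursion directly.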
If we were working with a strongly typed logic, in which all formulae, terms and
variables have an associated type, some of these problems would vanish. In typed logic
programming languages (and strongly typed programming languages in general [18]),
the definition of a predicate includes a specification of the types of its arguments; calls
to predicates with arguments of anything other than the correct types are syntactically
ill-formed. The number of cases to consider in the induction rule would therefore not
increase with bigger languages; for instance, one would specify that the arguments to Add
were exclusively of natural number type, and only two premisses would ever be needed
for the induction rule.
The addition of types to our language would take us outside the scope of this thesis,
although some form of typing is clearly desirable, for this and other reasons. However,
even in a typed language we are left with the problem of recursions which do not act on
subterm structure. This problem must still be handled by a generalisation of induction
to induction on any well-founded measure.
• There is no infinite sequence of closed terms t₁, t₂, t₃, ... such that R(tᵢ₊₁, tᵢ)
succeeds for all i ≥ 1 ("there is no infinite descending sequence of closed
terms").
Using well-ordering predicates, we can define an induction rule, WF, which is in some
ways simpler and more useful than that of the last section:

WF:  Γ, ∀x(S(R(x, y)) ⊃ A(x)) → Δ, A(y)
     ───────────────────────────────────
              Γ → Δ, A(y)

where R is a well-ordering and y does not appear free in Γ or Δ. (Note that the proof
that R is a well-ordering cannot be done within the proof system, and must be done
metatheoretically.)
The definition of well-ordering could be made more straightforward by our insisting
that, for instance, R(t,t) fail. However, the weaker conditions above suffice to ensure
the soundness of the well-founded induction rule.
Theorem 3.4 If the premiss of an application of the well-founded induction rule is valid
(with respect to some variant of SOS), then the conclusion is valid (with respect to that
variant of SOS).
Proof: Assume the premiss is valid but the conclusion is invalid. (This is the reductio
assumption.) This means that there is a θ[y := t₁] which makes the conclusion closed, such that
all its antecedent assertions are valid under θ[y := t₁] but none of its consequent
assertions (including A(y)) is valid under θ[y := t₁]. (In other words, t₁ is a counterexample
for A(y)θ.) However, by assumption, the premiss is valid; so since none of its consequent
assertions are valid under θ[y := t₁], one of its antecedent assertions must be invalid under
θ[y := t₁]; and the only one that can be is ∀x(S(R(x, y)) ⊃ A(x)). Since this is invalid
under θ[y := t₁], there must be a t₂ such that (S(R(x, y)) ⊃ A(x))[x := t₂]θ[y := t₁] is
invalid; that is, such that R(t₂, t₁) succeeds but A(t₂)θ is invalid.
But this would mean that t₂, like t₁, is also a counterexample for A(y)θ, and since R is
a well-ordering, t₂ must be distinct from t₁. So we can follow the same line of reasoning
to get another counterexample t₃ such that R(t₃, t₂) succeeds, and so on. But then we
will have the infinite descending sequence of closed terms which we denied we had when
we assumed R was a well-ordering. Contradiction; so the conclusion of the application
of the rule must be valid after all. •
With well-founded induction, we can prove all of the things we could with subterm
induction (since the subterm inclusion predicate is a well-ordering), but in some situations
the derivations are simpler. Consider the sequent [→ S(N(x)) ⊃ S(Add(x, 0, x))], which
was easy to prove with only two function symbols in the language, but became more
cumbersome as the number of symbols increased. With well-founded induction, the
complexity of the derivation is large, but the derivation is independent of the number
of function symbols in the language. One possibility for a well-ordering predicate is a
predicate R defined directly in the program.

Summary and Future Directions
This thesis has taken as its object of study the control-discipline variants of a simple logic
programming language equivalent to Horn clause logic programming. It has classified
and logically characterised the set of successful and failing queries of these variants of
the language.
I have given an operational semantics, SOS, of which variants correspond to the par-
allel "and" and "or", sequential "and", sequential "or", and sequential "and" and "or"
control disciplines. This operational semantics homogenises the treatment of the control
disciplines by incorporating control information (such as the failure-backtrack mecha-
nism of seqeuntial systems) into the operational semantics. (Some of the variants of
SOS have equivalent compositional operational semantics, which I have given.) I have
also classified the queries into those succeeding and those failing in each of the control
disciplines, and have proven the equivalence of some of these classes.
I have then used a sequent calculus framework, in which the elements of sequents are
assertions about the success or failure of queries, to give a logical analysis of these classes
of queries. Three calculi are given; they share a common set LKE of rules for classical
logic with equality as syntactic identity, and differ in the set of axioms which characterise
the behaviour of queries.
• LKE+PAR characterises the queries which succeed in parallel-or systems, and those
which fail in parallel-and systems;
• LKE+SEQ characterises the queries which succeed in the sequential-and, sequential-
or system, and those which fail in sequential-and systems;
• LKE+PASO characterises the queries which succeed in the parallel-and, sequential-
or system.
The precise sense in which these calculi "characterise" the classes of queries is that if
a query succeeds or fails in a particular control discipline, the corresponding assertion
of its success or failure is derivable in the appropriate calculus. The value of these
characterisations is that they give a precise, logical account of which queries fail or
succeed.
These calculi can also be used for proving more general properties of logic programs,
including general termination and correctness properties. This is important, as it addresses
issues of program correctness and specification which arise in practical settings. The
sequent calculi can therefore act as a basis for practical tools for proving such properties
of programs. See below for a more detailed discussion of this possibility.
The sequents of the calculi are sufficiently expressive that the calculi cannot be complete
with respect to the natural notion of validity; but I have shown several results that give
a wide range of valid, derivable sequents. I have also analysed the senses in which the
systems in question must be incomplete, and have given extensions, such as induction
rules, which can be used to prove a wide class of useful properties of programs.
1. Language Extensions
One of the main deficiencies of this thesis, from a practical point of view, is that the
programming language it considers is not very expressive or powerful. Logic programs as
I have defined them are a variant of the strict Horn clause programs originally described
by Kowalski [49]. Since then, the state of the art has moved on considerably. It would be
desirable to incorporate more recent developments into the language being characterised,
to see whether they can be treated in the proof-theoretic framework given in this thesis.
1.2. Types
Types are important in any programming language. Their main practical use is to allow
programmers to specify what kinds of arguments they expect predicates (or functions
or procedures) to take, so that the compiler or interpreter can inform programmers of
inconsistencies in their use of predicates. In large programs, this can be very helpful, as this
kind of type inconsistency is a common source of errors and can be checked relatively
cheaply.
Types would also be important in facilitating inductive proofs of properties of logic
programs. As suggested in Section 3., it may be possible to give simpler rules for subterm
induction over given types, and the presence of types can considerably simplify
the statements of theorems to be proven (as all type assumptions are implicit in the
predicate applications being made). To do induction over the structure of a given free
variable, one would choose the appropriate rule for that variable, eliminating the need
for vacuous premisses corresponding to incorrectly-typed terms and reducing the need
for well-ordering predicates for well-founded induction.
The question of how we should add types to logic programming languages is an area
of current research [64, 65]. An important question would be which of the currently-
proposed type systems would be best suited to proof-theoretic analysis.
formulae be of "ground" mode, and use negation as failure for the allowed occurrences of
negation. In this way, we can capture a large class of uses of negation without sacrificing
logicality; as Jaffar et al. have shown [47], negation as failure is a complete strategy for
ground negated queries.
Some form of if-then-else construct would also be possible if we used modes in this way.
The expression if A then B else C could be defined as being equivalent to (A & B) ∨
(¬A & C), if we maintain the syntactic restriction of A to having only input-mode free
variables.
In an if-then-else formula, the condition A need be computed only once. It is therefore
better to have an explicit if-then-else construct than to just write the equivalent formula,
since it not only clarifies the underlying meaning of the formula, but also signals to the
compiler or interpreter that this optimisation can be done. A bonus of such an if-then-
else construct is that it can be used to replace many uses of the non-logical Prolog "cut",
which is commonly used to avoid having to compute the negation of a positive condition
in another clause.
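For a ground condition the claimed equivalence is easy to check over truth values; the sketch below models only Boolean outcomes, not the mode machinery itself:

```python
# Sketch: for a ground (fully determined) condition A, the construct
# "if A then B else C" agrees with the formula (A & B) v (~A & C).
# Computing the disjunction naively evaluates A twice; the explicit construct
# evaluates it once, which is the optimisation noted above.

from itertools import product

def if_then_else(a, b, c):
    return b if a else c

def as_disjunction(a, b, c):
    return (a and b) or ((not a) and c)

# The two forms agree on all eight Boolean valuations.
assert all(
    if_then_else(a, b, c) == as_disjunction(a, b, c)
    for a, b, c in product([True, False], repeat=3)
)
```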
1.6. Constraints
The area of constraints seems promising as a possible extension to the framework in this
thesis. This is because constraint logic programming languages generally extend basic
logic programming in ways which are independent of the various control disciplines.
One description of a subset of the constraint logic programming languages is as follows.
We are given a first order language L, with some predicate names in L being identified
as constraint predicates. We are also given a semantics for these constraint predicates -
a characterisation of which applications of the constraint predicates are considered to be
true and which false. (How many of the useful CLP languages described by Jaffar and
Lassez [46] this description captures is not clear.)
In conventional logic programming (which is subsumed by the constraint paradigm),
the only constraint predicate is equality, which is interpreted as syntactic identity. But
other systems are possible: for instance, the addition of ≠ as a constraint predicate
[1], the interpretation of = as identity between infinite rational trees [25], Presburger
arithmetic, and linear arithmetic [46].
In the sequent-calculus framework of this thesis, none of these languages would seem
to require anything more than an axiomatisation of the semantics of the constraint pred-
icates, along the lines of the axiomatisation of equality in LKE. On the operational
semantics side, we need, for each constraint language, a unification (or "subsumption")
algorithm which allows us to completely decide whether a given finite set of constraint
predicate applications is satisfiable or not.
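For the base case in which the only constraint predicate is equality interpreted as syntactic identity, that decision procedure is ordinary syntactic unification. A minimal sketch (the tuple encoding of terms and the function names are assumptions for illustration):

```python
# Sketch for the equality-only constraint language: decide whether a finite set
# of equations over first-order terms is satisfiable, by syntactic unification.
# Terms: variables are strings; compound terms are tuples (functor, arg1, ...).

def walk(t, sub):
    """Chase variable bindings in the substitution."""
    while isinstance(t, str) and t in sub:
        t = sub[t]
    return t

def occurs(v, t, sub):
    """Occurs check: does variable v appear in term t under sub?"""
    t = walk(t, sub)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, sub) for a in t[1:])

def unify(s, t, sub):
    """Extend sub to unify s and t, or return None if impossible."""
    s, t = walk(s, sub), walk(t, sub)
    if s == t:
        return sub
    if isinstance(s, str):
        return None if occurs(s, t, sub) else {**sub, s: t}
    if isinstance(t, str):
        return unify(t, s, sub)
    if s[0] != t[0] or len(s) != len(t):
        return None  # clash of functors or arities
    for a, b in zip(s[1:], t[1:]):
        sub = unify(a, b, sub)
        if sub is None:
            return None
    return sub

def satisfiable(equations):
    """True iff the finite set of equations has a unifier."""
    sub = {}
    for lhs, rhs in equations:
        sub = unify(lhs, rhs, sub)
        if sub is None:
            return False
    return True
```

For richer constraint languages (rational trees, linear arithmetic), this unification step would be replaced by the corresponding satisfiability test.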
Because of the requirement that a subsumption algorithm exist, one would expect
that the entire analysis of successful and failing queries for both parallel and sequential
systems would go through as in this thesis. In particular, there would be analogues of
the Equality Completeness theorem (4.3) which would form the basis of the rest of the
completeness theorems; the remainder of the theorems are mainly about the connectives
and defined predicates, and so would go through as before.
Bibliography
[1] James H. Andrews. An environment theory with precomplete negation over pairs.
Technical Report 86-23, University of British Columbia, Department of Computer
Science, Vancouver, B.C., 1986.
[2] James H. Andrews. Trilogy Users' Manual. Complete Logic Systems, Inc., Vancou-
ver, B.C., Canada, 1987.
[6] James H. Andrews. The logical structure of sequential Prolog. Technical Report
LFCS-90-110, Laboratory for the Foundations of Computer Science, University of
Edinburgh, Edinburgh, Scotland, April 1990.
[7] James H. Andrews. The logical structure of sequential Prolog. In Proceedings of the
1990 North American Conference on Logic Programming, pages 585-602, Austin,
Texas, October-November 1990. MIT Press.
[8] Krzysztof R. Apt, Roland N. Bol, and Jan Willem Klop. On the safe termination
of Prolog programs. In Proceedings of the Sixth International Conference on Logic
Programming, pages 353-368, Lisbon, 1989.
[9] Bijan Arbab and Daniel M. Berry. Operational and denotational semantics of Prolog.
Journal of Logic Programming, 4:309-329, 1987.
[10] Edward Babb. An incremental pure logic language with constraints and classical
negation. In Tony Dodd, Richard Owens, and Steve Torrance, editors, Logic Pro-
gramming: Expanding the Horizons, pages 14-62, Oxford, 1991. Intellect.
[11] Henk Barendregt. The Lambda Calculus: Its Syntax and Semantics, volume 103
of Studies in Logic and the Foundations of Mathematics. North-Holland, Amsterdam,
1984.
[13] Egon Börger. A logical operational semantics of full Prolog. Technical Report IWBS
Report 111, IBM Wissenschaftliches Zentrum, Institut für Wissensbasierte Systeme,
Heidelberg, Germany, March 1990.
[14] A. Bossi and N. Cocco. Verifying correctness of logic programs. In Theory and
Practice of Software Engineering, volume 352 of Lecture Notes in Computer Science,
pages 96-110, Barcelona, Spain, 1989. Springer-Verlag.
[15] Julian Bradfield and Colin Stirling. Local model checking for infinite state spaces.
Technical Report LFCS-90-115, Laboratory for the Foundations of Computer Sci-
ence, University of Edinburgh, Edinburgh, 1990.
[18] Luca Cardelli and Peter Wegner. On understanding types, data abstraction, and
polymorphism. ACM Computing Surveys, 17(4):471-522, December 1985.
[19] Weidong Chen, Michael Kifer, and David S. Warren. HiLog: A first-order semantics
of higher-order logic programming constructs. In Proceedings of the North American
Conference on Logic Programming, Cleveland, Ohio, October 1989.
[21] K. L. Clark. Negation as failure. In Logic and Data Bases, pages 293-322, New
York, 1978. Plenum Press.
[24] K. L. Clark and F. McCabe. The control facilities of IC-Prolog. In D. Michie, editor,
Expert Systems in the Micro-Electronic Age, pages 122-149. Edinburgh University
Press, 1983.
[25] Alain Colmerauer, Henry Kanoui, and Michel van Caneghem. Prolog, theoretical
principles and current trends. Technology and Science of Information, 2(4):255-292,
1983.
[26] A. de Bruin and E. P. de Vink. Continuation semantics for Prolog with cut. In Theory
and Practice of Software Engineering, volume 351 of Lecture Notes in Computer
Science, pages 178-192, Barcelona, Spain, 1989. Springer-Verlag.
[27] Saumya Debray and Prateek Mishra. Denotational and operational semantics of
Prolog. Journal of Logic Programming, 5:61-91, 1988.
[28] Saumya K. Debray and David S. Warren. Automatic mode inference for Prolog
programs. In Proceedings of the 1986 Symposium on Logic Programming, pages 78-88,
Salt Lake City, Utah, September 1986.
[29] P. Deransart and G. Ferrand. An operational formal definition of Prolog. Technical
Report RR763, INRIA, 1987.
[30] Pierre Deransart. Proofs of declarative properties of logic programs. In Theory and
Practice of Software Engineering, volume 351 of Lecture Notes in Computer Science,
pages 207-226, Barcelona, Spain, 1989. Springer-Verlag.
[31] Yves Deville. Logic Programming: Systematic Program Development. Addison-
Wesley, Wokingham, England, 1990.
[32] Frederic Brenton Fitch. Symbolic Logic: An Introduction. Ronald Press, New York,
1952.
[33] Melvin Fitting. A Kripke-Kleene semantics for logic programs. Journal of Logic
Programming, 4:295-312, 1985.
[34] N. Francez, O. Grumberg, S. Katz, and A. Pnueli. Proving termination of Prolog
programs. In Rohit Parikh, editor, Logics of Programs, volume 193 of Lecture Notes
in Computer Science, pages 89-105, Berlin, July 1985. Springer-Verlag.
[35] D. M. Gabbay and U. Reyle. N-Prolog: An extension of Prolog with hypothetical
implications, I. Journal of Logic Programming, 1:319-355, 1984.
[36] Gerhard Gentzen. The Collected Papers of Gerhard Gentzen. North-Holland, Am-
sterdam, 1969. Ed. M. E. Szabo.
[37] Paul C. Gilmore. Natural deduction based set theories: A new resolution of the old
paradoxes. Journal of Symbolic Logic, 51(2):393-411, June 1986.
[38] Jean-Yves Girard. Towards a geometry of interaction. In Proceedings of the AMS
Conference on Categories, Logic, and Computer Science, Boulder, Colorado, June
1987. To appear.
[39] Kurt Gödel. On Formally Undecidable Propositions of Principia Mathematica and
Related Systems. Oliver and Boyd, Edinburgh, 1962. Translation by B. Meltzer
of "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter
Systeme I", Monatshefte für Mathematik und Physik, 38:173-198, Leipzig, 1931.
[40] Robert Goldblatt. Axiomatising the Logic of Computer Programming, volume 130
of Lecture Notes in Computer Science. Springer-Verlag, Berlin, 1982.
[41] Masami Hagiya and Takafumi Sakurai. Foundation of logic programming based on
inductive definition. New Generation Computing, 2:59-77, 1984.
[42] Lars Hallnäs and Peter Schroeder-Heister. A proof-theoretic approach to logic pro-
gramming. Technical Report R88005, Swedish Institute of Computer Science, 1988.
[43] Seif Haridi and Sverker Janson. Kernel Andorra Prolog and its computation model.
In Proceedings of the International Conference on Logic Programming, pages 31-46,
Jerusalem, 1990.
[45] Robert Hill. LUSH-resolution and its completeness. Technical Report DCL Memo
78, Department of Artificial Intelligence, University of Edinburgh, Edinburgh, 1974.
[46] Joxan Jaffar and Jean-Louis Lassez. Constraint logic programming. Technical re-
port, Department of Computer Science, Monash University, June 1986.
[47] Joxan Jaffar, Jean-Louis Lassez, and John Lloyd. Completeness of the negation
as failure rule. In Proceedings of the International Joint Conference on Artificial
Intelligence, pages 500-506, Karlsruhe, 1983.
[48] Neil D. Jones and Alan Mycroft. Stepwise development of operational and denota-
tional semantics for Prolog. In Proceedings of the 1984 International Symposium on
Logic Programming, February 1984.
[51] Zohar Manna and Richard Waldinger. The logic of computer programming. IEEE
Transactions on Software Engineering, SE-4:199-229, 1978.
[52] Peter McBrien. Implementing logic languages by graph rewriting. In Tony Dodd,
Richard Owens, and Steve Torrance, editors, Logic Programming: Expanding the
Horizons, pages 164-188, Oxford, 1991. Intellect.
[53] Paola Mello and Antonio Natali. Logic programming in a software engineering
perspective. In Proceedings of the 1989 North American Conference on Logic Pro-
gramming, pages 441-458, Cleveland, Ohio, 1989.
[54] Dale Miller, Gopalan Nadathur, and Andre Scedrov. Hereditary Harrop formulas and
uniform proof systems. Technical Report MS-CIS-87-24, Department of Computer
and Information Science, University of Pennsylvania, Philadelphia, March 1987.
[55] Dale A. Miller and Gopalan Nadathur. Higher-order logic programming. In Pro-
ceedings of the Third International Logic Programming Conference, Imperial College,
London, July 1986.
[56] Lee Naish. MU-Prolog 3.1db Reference Manual. University of Melbourne, 1984.
[57] Tim Nicholson and Norman Foo. A denotational semantics for Prolog. ACM Trans-
actions on Programming Languages and Systems, 11:650-665, October 1989.
[59] Lutz Plümer. Termination Proofs for Logic Programs, volume 446 of Lecture Notes
in Artificial Intelligence. Springer-Verlag, Berlin, 1990.
[60] Lutz Plümer. Termination proofs for logic programs based on predicate inequalities.
In Proceedings of the 1990 International Conference on Logic Programming, pages
634-648, Jerusalem, July 1990.
[61] David Poole and Randy Goebel. On eliminating loops in Prolog. SIGPLAN Notices,
20(8):38-40, 1985.
[63] Dag Prawitz. Natural Deduction: A Proof-Theoretical Study, volume 3 of Acta Uni-
versitatis Stockholmiensis, Stockholm Studies in Philosophy. Almqvist and Wiksell,
Uppsala, 1965.
[64] Changwoo Pyo and Uday S. Reddy. Inference of polymorphic types for logic pro-
grams. In Proceedings of the 1989 North American Conference on Logic Program-
ming, pages 1115-1132, Cleveland, Ohio, 1989.
[66] J. Alan Robinson. A machine-oriented logic based on the resolution principle. Jour-
nal of the Association for Computing Machinery, 12:23-41, 1965.
[67] Ehud Shapiro. The family of concurrent logic programming languages. ACM Com-
puting Surveys, 21(3):412-510, September 1989.
[68] Ehud Shapiro and Akikazu Takeuchi. Object oriented programming in Concurrent
Prolog. New Generation Computing, 1(1), 1983.
Examples
1. Conventions
Computations in this appendix of examples will appear in the following format: with
the starting backtrack stack on a line by itself, and then each step on a separate line or
lines consisting of the number of the computation rule from the relevant SOS variant, the
production arrow, and the resultant backtrack stack. For example, in the computation
(() : (Loop() & false) ∨ true)
(2) →SOS/so (() : Loop() & false); (() : true)
(1) →SOS/so (() : Loop(), false); (() : true)
the second backtrack stack is derived from the first by an application of rule 2 (∨) from
SOS/so; the third is derived from the second by an application of rule 1 (&) from SOS/so;
and so on.
Due to formatting difficulties, the example derivations in this appendix will not be of
the tree format in which the rules are given. They will be instead in the following style:
each sequent will be given on a separate line, with the premisses in the derivation of each
sequent being above it and indented. Dots will be placed on each line to help the reader
see the indentation.
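This indented rendering can be sketched as a recursive function over derivation trees. The following Python sketch is illustrative only; the tree encoding (a pair of a conclusion and a list of premiss trees) is mine, not part of the thesis.

```python
def render(tree, depth=0):
    """Render a derivation tree in the indented style: the premisses of
    each sequent appear above it, one level deeper, with leading dots
    marking the depth.  A tree is (conclusion, [premiss trees])."""
    conclusion, premisses = tree
    lines = []
    for p in premisses:
        lines += render(p, depth + 1)
    lines.append('. ' * depth + conclusion)
    return lines

# A one-rule tree with premisses A, B, C, D and conclusion E:
example = ('E', [('A', []), ('B', []), ('C', []), ('D', [])])
print('\n'.join(render(example)))
```

Running this prints the premisses `. A` through `. D`, each with one dot of indentation, above the undotted conclusion `E`.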
Thus the derivation which is written in tree style as

   A   B   C   D
   -------------
         E

will appear in the following style:

. A
. B
. C
. D
E

2. List Membership Examples
The query we will be concerned with is Mem(x, [a|[b|[]]]). I assume that [] is the
nullary "empty list" constant and [ | ] is the binary list formation function symbol, and
that a and b are constants. I will give the SP (sequential) computation of this query,
then an alternate SOS computation which makes use of the parallel "or"; then I will give
two derivations in LKE+PAR and two in LKE+SEQ for the query.
2.1. Computations
The SP computation for the query is in Figure A.1. The computation ends with the
solution a for x.
One possible SOS-computation starts off the same, but then proceeds differently after
the second-last step by expanding the predicate call in the second closure, ending with
the solution b for x. This computation is given in Figure A.2.
Note the renaming that goes on in the second and third steps: the first renaming (of t
to t1) avoids variable capture arising from the definition of substitution, and the second
(from h to h1) arises from the ∃ rule.
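The SP steps used in these computations can be mimicked by a small stack-of-closures interpreter. The following Python sketch is illustrative only: the term and goal encodings are mine, not the thesis's formal definitions, and the ∃ steps are folded into on-the-fly renaming of the bound variables when a call to Mem is unfolded.

```python
import itertools

fresh = itertools.count(1)          # counter for renaming bound variables

def var(n): return ('var', n)

def walk(t, s):
    # Chase a variable through the substitution.
    while isinstance(t, tuple) and t[0] == 'var' and t[1] in s:
        t = s[t[1]]
    return t

def unify(t1, t2, s):
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if isinstance(t1, tuple) and t1[0] == 'var':
        return {**s, t1[1]: t2}
    if isinstance(t2, tuple) and t2[0] == 'var':
        return {**s, t2[1]: t1}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == 'cons' and t2[0] == 'cons'):
        s = unify(t1[1], t2[1], s)
        return None if s is None else unify(t1[2], t2[2], s)
    return None

def mem_body(x, l):
    # Mem(x,l) unfolds to Eh Et (l = [h|t] & (x = h v Mem(x,t))),
    # with h and t freshly renamed.
    i = next(fresh)
    h, t = var('h%d' % i), var('t%d' % i)
    return ('and', ('eq', l, ('cons', h, t)),
                   ('or', ('eq', x, h), ('mem', x, t)))

def step(stack):
    # One SP step: rewrite the first goal of the first closure.
    (s, goals), rest = stack[0], stack[1:]
    g, more = goals[0], goals[1:]
    if g[0] == 'and':      # rule (1): split the conjunction
        return [(s, [g[1], g[2]] + more)] + rest
    if g[0] == 'or':       # rule (2): fork the closure
        return [(s, [g[1]] + more), (s, [g[2]] + more)] + rest
    if g[0] == 'eq':       # rule (5): unify, dropping the closure on clash
        s2 = unify(g[1], g[2], s)
        return ([(s2, more)] if s2 is not None else []) + rest
    if g[0] == 'mem':      # rule (4): unfold the predicate call
        return [(s, [mem_body(g[1], g[2])] + more)] + rest
    raise ValueError(g)

def run(stack):
    while stack and stack[0][1]:   # stop when a closure's goal list empties
        stack = step(stack)
    return stack[0][0] if stack else None

ab = ('cons', 'a', ('cons', 'b', 'nil'))
answer = run([({}, [('mem', var('x'), ab)])])
print(walk(var('x'), answer))      # the first SP solution for x
```

As in Figure A.1, the leftmost-first strategy reaches the empty goal list with the solution a for x; the untried second closure corresponds to the solution b found by the alternate SOS computation.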
2.2. Derivations
The sequent corresponding to the statement that the query succeeds is the following one:

→ S(∃x Mem(x, [a|[b|[]]]))
In PAR, one possible derivation is given in Figure A.3. Only the essential steps are
given; in particular, the major premisses of the numerous Cut rules are omitted. They
can be easily guessed by inspection of the PAR axioms.
Another derivation, which corresponds to the second computation given above, is given
in Figure A.4. The difference arises here only from a different choice of witness for x.
The more complex SEQ derivation is given in Figure A.5. Note that we must do a
predicate and then a disjunctive unfolding to handle the predicate call in the query.
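SEQ's reading of disjunction, in which S(A ∨ B) is derived from S(A) ∨ (F(A) & S(B)), can be mimicked for ground, terminating formulas by a pair of mutually recursive evaluators. This Python sketch is my own encoding, not the thesis's sequent system; it covers only ground equations and the connectives needed here.

```python
def S(g):
    """Does the ground goal g succeed?"""
    if g[0] == 'eq':  return g[1] == g[2]
    if g[0] == 'and': return S(g[1]) and S(g[2])
    # sequential or: succeed on the left, or fail there and succeed on the right
    if g[0] == 'or':  return S(g[1]) or (F(g[1]) and S(g[2]))
    raise ValueError(g)

def F(g):
    """Does the ground goal g (finitely) fail?"""
    if g[0] == 'eq':  return g[1] != g[2]
    if g[0] == 'and': return F(g[1]) or (S(g[1]) and F(g[2]))
    if g[0] == 'or':  return F(g[1]) and F(g[2])
    raise ValueError(g)

true, false = ('eq', '0', '0'), ('eq', '0', '1')
# Succeeds via the F(left disjunct) & S(true) branch:
print(S(('or', ('and', true, false), true)))
```

The point of the mutual recursion is the same as in the SEQ derivations: establishing success of a disjunction may require establishing failure of its first disjunct.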
3. Infinite Loop
Here, we treat the slightly paradoxical example of the query which succeeds in all
operational semantics except SP, but diverges in SP. The example is the query

(Loop() & false) ∨ true

where false is the formula 0 = 1, true is the formula 0 = 0, and Loop is defined with the
predicate definition Loop() ⇔ Loop().
(() : Mem(x, [a|[b|[]]]))
(4) →SP (() : ∃h ∃t ([a|[b|[]]] = [h|t] & (x = h ∨ Mem(x,t))))
(3) →SP (() : ∃t ([a|[b|[]]] = [h|t] & (x = h ∨ Mem(x,t))))
(3) →SP (() : [a|[b|[]]] = [h|t] & (x = h ∨ Mem(x,t)))
(1) →SP (() : [a|[b|[]]] = [h|t], x = h ∨ Mem(x,t))
(5) →SP ([h := a, t := [b|[]]] : x = h ∨ Mem(x,t))
(2) →SP ([h := a, t := [b|[]]] : x = h); ([h := a, t := [b|[]]] : Mem(x,t))
(5) →SP ([h := a, t := [b|[]], x := a] : ε); ([h := a, t := [b|[]]] : Mem(x,t))
Figure A.1. The computation of the query Mem(x, [a|[b|[]]]) in SP.
(2) →SOS ([h := a, t := [b|[]]] : x = h); ([h := a, t := [b|[]]] : Mem(x,t))
(4) →SOS ([h := a, t := [b|[]]] : x = h);
        ([h := a, t := [b|[]]] : ∃h ∃t1 (t = [h|t1] & (x = h ∨ Mem(x,t1))))
(3) →SOS ([h := a, t := [b|[]]] : x = h);
        ([h := a, t := [b|[]]] : ∃t1 (t = [h1|t1] & (x = h1 ∨ Mem(x,t1))))
(3) →SOS ([h := a, t := [b|[]]] : x = h);
        ([h := a, t := [b|[]]] : t = [h1|t1] & (x = h1 ∨ Mem(x,t1)))
(1) →SOS ([h := a, t := [b|[]]] : x = h);
        ([h := a, t := [b|[]]] : t = [h1|t1], x = h1 ∨ Mem(x,t1))
(5) →SOS ([h := a, t := [b|[]]] : x = h);
        ([h := a, t := [b|[]], h1 := b, t1 := []] : x = h1 ∨ Mem(x,t1))
(2) →SOS ([h := a, t := [b|[]]] : x = h);
        ([h := a, t := [b|[]], h1 := b, t1 := []] : x = h1);
        ([h := a, t := [b|[]], h1 := b, t1 := []] : Mem(x,t1))
(5) →SOS ([h := a, t := [b|[]]] : x = h);
        ([h := a, t := [b|[]], h1 := b, t1 := [], x := b] : ε);
        ([h := a, t := [b|[]], h1 := b, t1 := []] : Mem(x,t1))
Figure A.2. An SOS computation of the query Mem(x, [a|[b|[]]]), ending with the
solution b for x.
. . . . → S(a = a)
. . . → S(a = a ∨ Mem(a, [b|[]]))
. . → S(∃h ∃t ([a|[b|[]]] = [h|t] & (a = h ∨ Mem(a,t))))
. → S(Mem(a, [a|[b|[]]]))
→ S(∃x Mem(x, [a|[b|[]]]))
Figure A.3. A derivation of the sequent [→ S(∃x Mem(x, [a|[b|[]]]))] in LKE+PAR. Only
essential steps are given.
. . . . . . → S(b = b)
. . . . . → S(b = b ∨ Mem(b, []))
. . . . → S(∃h ∃t ([b|[]] = [h|t] & (b = h ∨ Mem(b,t))))
. . . → S(Mem(b, [b|[]]))
. . → S(b = a ∨ Mem(b, [b|[]]))
. → S(Mem(b, [a|[b|[]]]))
→ S(∃x Mem(x, [a|[b|[]]]))
Figure A.4. Another derivation of the sequent [→ S(∃x Mem(x, [a|[b|[]]]))] in
LKE+PAR. Only essential steps are given.
. . . . → S([a|[b|[]]] = [a|[b|[]]] & a = a)
. . . → S(∃x ∃h ∃t (B))
. . → S(∃x ∃h ∃t (B)) ∨ (F(∃x ∃h ∃t (B)) & S(∃x ∃h ∃t (C)))
. → S(∃x ∃h ∃t (B) ∨ ∃x ∃h ∃t (C))
→ S(∃x ∃h ∃t ([a|[b|[]]] = [h|t] & (x = h ∨ Mem(x,t))))
where B = ([a|[b|[]]] = [h|t] & x = h) and C = ([a|[b|[]]] = [h|t] & Mem(x,t))
Figure A.5. A derivation of the sequent [→ S(∃x Mem(x, [a|[b|[]]]))] in LKE+SEQ. Only
essential steps are given.
3.1. Derivations
To see that (Loop() & false) ∨ true succeeds in SOS (or SOS/sa, since its set of successful
queries is the same), we need only give the derivation for the sequent

→ S((Loop() & false) ∨ true)
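The contrast between SP and the semantics with a parallel "or" can be illustrated with a bounded stepper. In this Python sketch (the encodings are mine, not the thesis's), SP always rewrites the leftmost closure and so never leaves the Loop() branch, while a strategy that may rewrite any closure reaches the success in the second disjunct.

```python
def step_closure(s, goals):
    # Rewrite the first goal of one closure; return the replacement closures.
    g, more = goals[0], goals[1:]
    if g[0] == 'and':
        return [(s, [g[1], g[2]] + more)]
    if g[0] == 'or':
        return [(s, [g[1]] + more), (s, [g[2]] + more)]
    if g[0] == 'loop':                    # Loop() unfolds to itself
        return [(s, [('loop',)] + more)]
    if g[0] == 'eq':                      # ground equations only, here
        return [(s, more)] if g[1] == g[2] else []
    raise ValueError(g)

def run(stack, pick, bound):
    # pick chooses which closure to rewrite next; SP always picks index 0.
    for _ in range(bound):
        if any(not goals for (_, goals) in stack):
            return 'success'
        if not stack:
            return 'failure'
        i = pick(stack)
        stack = stack[:i] + step_closure(*stack[i]) + stack[i + 1:]
    return 'no success within bound'

true, false = ('eq', '0', '0'), ('eq', '0', '1')
query = ('or', ('and', ('loop',), false), true)
print(run([({}, [query])], pick=lambda st: 0, bound=50))            # SP strategy
print(run([({}, [query])], pick=lambda st: len(st) - 1, bound=50))  # rewrite another closure
```

No finite bound helps the SP strategy, which keeps unfolding Loop(); the other strategy succeeds in a few steps, mirroring the claim that the query succeeds everywhere except SP.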
4. Subterm Induction
The example to be proven here is the sequent

→ S(Add(x,0,x))

- that is, "for all terms t, the query Add(t,0,t) succeeds", where Add is defined as
follows:

Add(x,y,z) ⇔ (x = 0 & y = z) ∨ ∃px ∃pz (x = s(px) & z = s(pz) & Add(px,y,pz))
This is true only if we make some assumptions about the language. We will as-
sume that the language contains only the unary function symbol s, and the constant
(nullary function symbol) 0. See Figure A.10 for the derivation of this sequent in
LKE+PAR+SUBTERM.
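The subterm-induction argument has a natural analogue as structural induction in a proof assistant. The following Lean sketch is my own encoding, not the thesis's LKE+PAR+SUBTERM system; in particular, it assumes a reading of Add as addition defined by recursion on the first argument.

```lean
-- The term language: only the constant 0 and the unary symbol s.
inductive Nat' where
  | zero : Nat'
  | succ : Nat' → Nat'

-- Addition by recursion on the first argument (an assumed reading of Add).
def add : Nat' → Nat' → Nat'
  | Nat'.zero,   y => y
  | Nat'.succ x, y => Nat'.succ (add x y)

-- "for all terms t, the query Add(t,0,t) succeeds" becomes add t zero = t,
-- proved by structural (subterm) induction on t.
theorem add_zero (t : Nat') : add t Nat'.zero = t := by
  induction t with
  | zero => rfl
  | succ pt ih => simp [add, ih]
```

The two cases of the induction correspond to the two disjuncts of the Add definition: the base case closes by reflexivity, and the successor case uses the induction hypothesis for the immediate subterm.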
5. Well-Founded Induction
The example to be proven here is the analogue of the sequent in the last section:

→ S(N(y)) ⊃ S(Add(y,0,y))

- that is, "for all terms t such that N(t) succeeds, the query Add(t,0,t) succeeds".
The definitions of N, the predicate testing for whether its argument is a Peano natural
number, the order predicate R, and its auxiliary predicate Lt, are as follows:
. → S(true)
→ S((Loop() & false) ∨ true)

. . . . . . → ¬(0 = 1)
. . . . . → F(false)
. . . . → F(Loop()) ∨ F(false)
. . . → F(Loop() & false)
. . . . → 0 = 0
. . . → S(true)
. . → F(Loop() & false) & S(true)
. → S(Loop() & false) ∨ (F(Loop() & false) & S(true))
→ S((Loop() & false) ∨ true)
Figure A.10. Derivation of the sequent [→ S(Add(x,0,x))] in LKE+PAR+SUBTERM.
. . . B(y), S(y = s(px) & N(px)) → S(Add(y,0,y))
. . B(y), S(N(y)) → S(Add(y,0,y))
. B(y) → C(y)
→ C(y)

. . . . . . S(N(py)) → S(N(py))
. . . . . . . (straightforward)
. . . . . . S(N(py)) → S(N(s(py)))
. . . . . . . (straightforward)
. . . . . S(N(py)) → S(N(py) & N(s(py)) & Lt(py,s(py)))
. . . . S(N(py)) → S(R(py,s(py)))
. . . . . S(N(py)) → S(N(py)), S(Add(s(py),0,s(py)))
. . . . C(py), S(N(py)) → S(Add(s(py),0,s(py)))
. . . S(R(py,s(py))) ⊃ C(py), S(N(py)) → S(Add(s(py),0,s(py)))
. . S(R(py,y)) ⊃ C(py), S(y = s(py)), S(N(py)) → S(Add(y,0,y))
. S(R(py,y)) ⊃ C(py), S(y = s(py) & N(py)) → S(Add(y,0,y))
B(y), S(y = s(py) & N(py)) → S(Add(y,0,y))
where C(z) = S(N(z)) ⊃ S(Add(z,0,z)) and B(z) = ∀x (S(R(x,z)) ⊃ C(x))
Figure A.12. Derivation of B(y), S(y = s(py) & N(py)) → S(Add(y,0,y)) in
LKE+PAR+WF.
Index of Definitions
Substitution:
More specific 5.3 28
More general 5.3 28
Success 2.2 13
Term 4.1 6
Unfolding:
Disjunctive 2.1 54
Predicate 3.1 37
Validity:
Of a closed assertion 1.2 33
Of a sequent 1.3 33
Well-Ordering 3.3 80