Apt2007 - Constraint Logic Programming Using Eclipse PDF
Krzysztof R. Apt received his PhD in 1974 in mathematical logic from the University
of Warsaw in Poland. He is a senior researcher at Centrum voor Wiskunde en Informatica,
Amsterdam and Professor of Computer Science at the University of Amsterdam. He is the
author of three other books: Verification of Sequential and Concurrent Programs (with E.-R.
Olderog), From Logic Programming to Prolog, and Principles of Constraint Programming,
and has published 50 journal articles and 15 book chapters.
He is the founder and the first editor-in-chief of the ACM Transactions on Computational
Logic, and past president of the Association for Logic Programming. He is a member of
the Academia Europaea (Mathematics and Informatics Section).
After completing a degree at Oxford in Mathematics and Philosophy, Mark Wallace
joined the UK computer company ICL, who funded his PhD at Southampton University,
which was published as a book: Communicating with Databases in Natural Language.
He has been involved in the ECLiPSe constraint programming language since its inception
and has led several industrial research collaborations exploiting the power of constraint
programming with ECLiPSe. Currently he holds a chair in the Faculty of Information
Technology at Monash University, Victoria, Australia and is involved in a major new
constraint programming initiative funded by National ICT Australia (NICTA), and in the
foundation of a Centre for Optimisation in Melbourne. He has published widely, chaired the
annual constraint programming conference, and is an editor for three international journals.
Advance praise for Constraint Logic Programming using ECLiPSe
The strength of Constraint Logic Programming using ECLiPSe is that it simply and
gradually explains the relevant concepts, starting from scratch, up to the realisation of
complex programs. Numerous examples and ECLiPSe programs fully demonstrate the
elegance, simplicity, and usefulness of constraint logic programming and ECLiPSe.
The book is self-contained and may serve as a guide to writing constraint applications in
ECLiPSe, but also in other constraint programming systems. Hence, this is an indispensable
resource for graduate students, practitioners, and researchers interested in problem solving
and modelling.
Eric Monfroy, Université de Nantes
ECLiPSe is a flexible, powerful and highly declarative constraint logic programming
platform that has evolved over the years to comprehensively support constraint programmers
in their quest for the best search and optimisation algorithms. However, the absence of a
book dedicated to ECLiPSe has presented those interested in this approach to programming
with a significant learning hurdle. This book will greatly simplify the ECLiPSe learning
process and will consequently help ECLiPSe reach a much wider community.
Within the covers of this book readers will find all the information they need to start
writing sophisticated programs in ECLiPSe. The authors first introduce ECLiPSe's history,
and then walk the reader through the essentials of Prolog and Constraint Programming,
before going on to present the principal features of the language and its core libraries in a
clear and systematic manner.
Anyone learning to use ECLiPSe or seeking a course book to support teaching constraint
logic programming using the language will undoubtedly benefit from this book.
Hani El-Sakkout, Cisco Systems, Boston, Massachusetts
It has been recognized for some years now within the Operations Research community
that Integer Programming is needed for its powerful algorithms, but that logic is a more
exible modelling tool. The case was made in most detail by John Hooker, in his book Logic-
Based Methods for Optimization: Combining Optimization and Constraint Satisfaction.
The ECLiPSe system is a highly successful embodiment of these ideas. It draws on
ideas coming from logic programming, constraint programming, and the Prolog language. I
strongly recommend this book as a systematic account of these topics. Moreover, it gives a
wealth of examples showing how to deploy the power thus made available via the ECLiPSe
system.
Maarten van Emden, University of Victoria, Canada
This is an impressive introduction to Constraint Logic Programming and the ECLiPSe
system by two pioneers in the theory and practice of CLP. This book represents a state-of-
the-art and comprehensive coverage of the methodology of CLP. It is essential reading for
new students, and an essential reference for practitioners.
Joxan Jaffar, National University of Singapore
CONSTRAINT LOGIC PROGRAMMING
USING ECLiPSe
Cambridge University Press has no responsibility for the persistence or accuracy of URLs
for external or third-party internet websites referred to in this publication, and does not
guarantee that any content on such websites is, or will remain, accurate or appropriate.
Contents
Introduction page ix
3.5 Operators 52
3.6 Summary 55
3.7 Exercises 57
4 Control and meta-programming 59
4.1 More on Prolog syntax 59
4.2 Control 62
4.3 Meta-programming 69
4.4 Summary 74
4.5 Exercises 75
5 Manipulating structures 76
5.1 Structure inspection facilities 76
5.2 Structure comparison facilities 78
5.3 Structure decomposition and construction facilities 80
5.4 Summary 85
5.5 Exercises 85
Logic programming
Logic programming has roots in the influential approach to automated the-
orem proving based on the resolution method due to Alan Robinson. In his
fundamental paper, Robinson [1965], he introduced the resolution principle,
the notion of unification and a unification algorithm. Using his resolution
method one can prove theorems formulated as formulas of first-order logic,
and so obtain a Yes or No answer to a question. What is missing is the
possibility of computing answers to a question.
The appropriate step to overcome this limitation was suggested by Robert
Kowalski. In Kowalski [1974] he proposed a modified version of the resolution
method that deals with a subset of first-order logic but allows one to generate
a substitution that satisfies the original formula. This substitution can then
be interpreted as a result of a computation. This approach became known
as logic programming. A number of other proposals aiming at the
same goal, viz. to compute with first-order logic, appeared around
the same time, but logic programming turned out to be the simplest
and most versatile.
In parallel, Alain Colmerauer with his colleagues worked on a program-
1 In what follows we refer to the final articles discussing the mentioned programming languages.
This explains the small discrepancies in the dateline.
Constraint programming
Let us now turn our attention to constraint programming. The formal
concept of a constraint was used originally in physics and combinatorial
optimisation. It was first adopted in computer science by Ivan Sutherland in
Sutherland [1963] for describing his interactive drawing system Sketchpad.
In the seventies several experimental languages were proposed that used the
notion of constraints and relied on the concept of constraint solving. Also
in the seventies, in the field of artificial intelligence (AI), the concept of a
constraint satisfaction problem was formulated and used to describe
problems in computer vision. Further, starting with Montanari [1974] and
Mackworth [1977], the concept of constraint propagation was identified
as a crucial way of coping with the combinatorial explosion when solving
constraint satisfaction problems using top-down search.
Top-down search is a generic name for a set of search procedures in
which one attempts to construct a solution by systematically trying to ex-
tend a partial solution through the addition of constraints. In the simplest
case, each such constraint assigns a value to another variable. Common
to most top-down search procedures is backtracking, which can be traced
back to the nineteenth century. In turn, the branch and bound search, a
top-down search concerned with optimisation, was first defined in the context
of combinatorial optimisation.
In the eighties the first constraint programming languages of importance
were proposed and implemented. The most significant were the languages
based on the logic programming paradigm. They involve an extension of
logic programming by the notion of constraints. The main reason for the suc-
cess of this approach to constraint programming is that constraints and logic
programming predicates are both, mathematically, relations; backtracking
is automatically available; and the variables are viewed as unknowns in the
sense of algebra. The latter is in contrast to imperative programming, in
which the variables are viewed as changing, but each time known, entities,
as in calculus.
The resulting paradigm is called constraint logic programming . As
mentioned above, Prolog III is an example of a programming language re-
alising this paradigm. The term was coined in the influential paper Jaffar
and Lassez [1987] that introduced the operational model and semantics for
this approach and formed a basis for the CLP(R) language that provided
support for solving constraints on reals, see Jaffar et al. [1992].
Another early constraint logic programming language is CHIP, see Dincbas
et al. [1988] and for a book coverage Van Hentenryck [1989]. CHIP incorpo-
rated the concept of a constraint satisfaction problem into the logic program-
ming paradigm by using constraint variables ranging over user-defined finite
domains. During the computation the values of the constraint variables are
not known, only their current domains. If a variable domain shrinks to
one value, then that is the final value of the variable. CHIP also relied on
top-down search techniques originally introduced in AI.
The language was developed at the European Computer-Industry Re-
search Centre (ECRC) in Munich. This brings us to the next stage in our
historical overview.
ECLiPSe
ECRC was set up in 1984 by three European companies to explore the devel-
opment of advanced reasoning techniques applicable to practical problems.
In particular three programming systems were designed and implemented.
One enabled complex problems to be solved on multiprocessor hardware,
and eventually on a network of machines. The second supported advanced
database techniques for intelligent processing in data-intensive applications.
The third system was CHIP. All three systems were built around a common
foundation of logic programming.
In 1991 the three systems were merged and ECLiPSe was born. The constraint
programming features of ECLiPSe were initially based on the CHIP
system, which was spun out from ECRC at that time in a separate company.
Over the next 15 years the constraint solvers and solver interfaces supported
by ECLiPSe have been continuously extended in response to users' requirements.
The first interface to an external state-of-the-art linear and mixed
integer programming package was released in 1997. The integration of the finite
domain solver and linear programming solver, supporting hybrid algorithms,
came in 2000. In 2001 the ic library was released. It supports constraints on
Booleans, integers and reals and meets the important demands of practical
use: it is sound, scalable, robust and orthogonal. ECLiPSe also includes as
libraries some constraint logic programming languages, for example CLP(R),
that were developed separately. By contrast with the constraint solving
facilities, the parallel programming and database facilities of ECLiPSe have
been much less used, and over the years some functionality has been dropped
from the system.
The ECLiPSe team was involved in a number of European research projects,
especially the Esprit project CHIC (Constraint Handling in Industry
and Commerce, 1991–1994). Since the termination of ECRC's research
activities in 1996, ECLiPSe has actively been further developed at the Centre
for Planning and Resource Control at Imperial College in London (IC-Parc),
with funding from International Computers Ltd (ICL), the UK Engineering
and Physical Sciences Research Council, and the Esprit project CHIC-2
(Creating Hybrid Algorithms for Industry and Commerce, 1996–1999). The
Esprit projects played an important role in focussing ECLiPSe development.
In particular they emphasised the importance of the end user, and the time
and skills needed to learn constraint programming and to develop large scale
efficient and correct programs.
In 1999, the commercial rights to ECLiPSe were transferred to IC-Parc's
spin-off company Parc Technologies, which applied ECLiPSe in its optimisation
products and provided funding for its maintenance and continued
development. In August 2004, Parc Technologies, and with it the ECLiPSe
platform, was acquired by Cisco Systems.
ECLiPSe is in use at hundreds of institutions for teaching and research
all over the world, and continues to be freely available for education and
research purposes. It has been exploited in a variety of applications by
academics around the world, including production planning, transportation
scheduling, bioinformatics, optimisation of contracts, and many others. It
is also being used to develop commercial optimisation software for Cisco.
2 For those wishing to acquire the full knowledge of Prolog we recommend Bratko [2001] and
Sterling and Shapiro [1994].
Acknowledgements
Krzysztof Apt would like to thank his colleagues who during their stay at
CWI used ECLiPSe and helped him to understand it better. These are:
Sebastian Brand, Sandro Etalle, Eric Monfroy and Andrea Schaerf. Also,
he would like to warmly thank three people who have never used ECLiPSe
but without whose support this book would never have been written. These
are: Alma, Daniel and Ruth.
Mark Wallace offers this book as a testament both to the ECRC CHIP
team, who set the ball rolling, to Micha Meier at ECRC, and to the IC-Parc
ECLiPSe team, including Andy Cheadle, Andrew Eremin, Warwick
Harvey, Stefano Novello, Andrew Sadler, Kish Shen, and Helmut Simonis
of both CHIP and ECLiPSe fame who have designed, built and
maintained the functionality described herein. Last but not least, he thanks
Joachim Schimpf, a great colleague, innovative thinker and brilliant software
engineer, who has shaped the whole ECLiPSe system.
Since dragging his reluctant family halfway round the world to Australia,
Mark has buried himself in his study and left them to it. With the comple-
tion of this book he looks forward to sharing some proper family weekends
with Duncan, Tessa and Toby and his long-suffering wonderful wife Ingrid.
The authors acknowledge helpful comments by Andrea Schaerf, Joachim
Schimpf, Maarten van Emden and Peter Zoeteweij. Also, they would like to
thank the School of Computing of the National University of Singapore for
making it possible for them to meet there on three occasions and to work
together on this book. The figures were kindly prepared by Ren Yuan using
the xfig drawing program.
To Alma, Daniel and Ruth and
to Duncan, Tessa, Toby and Ingrid
Part I
1.1 Introduction 3
1.2 Syntax 4
1.3 The meaning of a program 7
1.4 Computing with equations 9
1.5 Prolog: the first steps 15
1.6 Two simple pure Prolog programs 23
1.7 Summary 26
1.8 Exercises 26
1.1 Introduction
any variable can stand for a number or a string, but also a list, a tree, a
record or even a procedure or program,
4 Logic programming and pure Prolog
In this chapter we discuss the logic programming framework and the cor-
responding small subset of Prolog, usually called pure Prolog . This will
allow us to set up a base over which we shall define in the successive chap-
ters a more realistic subset of Prolog supporting in particular arithmetic and
various control features. At a later stage we shall discuss various additions
to Prolog provided by ECLiPSe, including libraries that support constraint
programming.
We structure the chapter by focussing in turn on each of the above three
items. Also we clarify the intended meaning of pure Prolog programs.1
Consequently, we discuss in turn
1.2 Syntax
Syntactic conventions always play an important role in the discussion of
any programming paradigm and logic programming is no exception in this
matter. In this section we discuss the syntax of Prolog.
Full Prolog syntactic conventions are highly original and very powerful.
Their full impact is little known outside of the logic programming commu-
nity. We shall discuss these novel features in Chapters 3 and 4.
By the objects of computation we mean anything that can be denoted
by a Prolog variable. These are not only numbers, lists and so on, but also
compound structures and even other variables or programs.
Formally, the objects of computation are base terms, which consist of:
The arguments of facts may also be variables and compound terms. Con-
sider for example the fact p(a,f(b)). The interpretation of the compound
term f(b) is a logical expression, in which the unary function f is applied
to the logical constant b. Under the interpretation of pure Prolog programs,
the denotations of any two distinct ground terms are themselves distinct.2
Consequently we can think of ground terms as denoting themselves, and so
we interpret the fact p(a,f(b)) as the atomic formula p(a, f (b)).
The next fact has a variable argument: p(a,Y). We view it as a statement
that for all ground terms t the atomic formula p(a, t) is true. So we interpret
it as the universally quantified formula ∀Y. p(a, Y).
With this interpretation there can be no use in writing the procedure
p(a,Y).
p(a,b).
because the second fact is already covered by the first, more general fact.
Finally we should mention that facts with no arguments are also admit-
ted. Accordingly we can assert the fact p. Its logical interpretation is the
proposition p.
In general, we interpret a fact by simply changing the font from teletype
to italic and by preceding it by the universal quantification of all variables
that appear in it.
The interpretation of a rule involves a logical implication. For example
the rule
p :- q.
states that if q is true then p is true.
As another example, consider the ground rule
p(a,b) :- q(a,f(c)), r(d).
Its interpretation is as follows. If q(a, f(c)) and r(d) are both true, then
p(a, b) is also true, i.e., q(a, f(c)) ∧ r(d) → p(a, b).
Rules with variables need a little more thought. The rule
p(X) :- q.
states that if q is true, then p(t) is true for any ground term t. So logically
this rule is interpreted as q → ∀X. p(X). This is equivalent to the formula
∀X. (q → p(X)).
If the variable in the head also appears in the body, the meaning is the
same. The rule
2 This will no longer hold for arithmetic expressions which will be covered in Chapter 3.
p(X) :- q(X).
states that for any ground term t, if q(t) is true, then p(t) is also true.
Therefore logically this rule is interpreted as ∀X. (q(X) → p(X)).
Finally, we consider rules in which variables appear in the body but not
in the head, for example
p(a) :- q(X).
This rule states that if we can find a ground term t for which q(t) is true,
then p(a) is true. Logically this rule is interpreted as ∀X. (q(X) → p(a)),
which is equivalent to the formula (∃X. q(X)) → p(a).
Given an atomic goal A, denote its interpretation by A. Any ground rule H
:- B1, . . ., Bn is interpreted as the implication B1 ∧ . . . ∧ Bn → H. In general,
all rules H :- B1, . . ., Bn have the same, uniform, logical interpretation. If
V is the list of the variables appearing in the rule, its logical interpretation
is ∀V. (B1 ∧ . . . ∧ Bn → H).
This interpretation of ',' (as ∧) and ':-' (as ←) leads to the so-called declarative
interpretation of pure Prolog programs, which focusses, through their
translation to first-order logic, on their semantic meaning.
The computational interpretation of pure Prolog is usually called pro-
cedural interpretation. It will be discussed in the next section. In this
interpretation the comma ',' separating atomic goals in a query or in a body
of a rule is interpreted as the semicolon symbol ';' of imperative programming,
and ':-' as (essentially) the separator between the procedure header
and body.
Consider the program
p(X) :- q(X,a).
q(Y,Y).
and the query q(W,a).
The definition of the predicate q/2 comprises just the single fact q(Y,Y).
Clearly the query can only succeed if W = a.
Inside Prolog, however, this constraint is represented as an equation be-
tween two atomic goals: q(W,a) = q(Y1,Y1). The atomic goal q(W,a) at
the left-hand side of the equation is just the original query. For the fact
q(Y,Y), however, a new variable Y1 has been introduced. This is not im-
portant for the current example, but it is necessary in general because of
possible variable clashes. This complication is solved by using a different
variable each time. Accordingly, our first query succeeds under the con-
straint q(W,a) = q(Y1,Y1).
Now consider the query
p(a).
This time we need to use a rule instead of a fact. Again a new variable is
introduced for each use of the rule, so this first time it becomes:
p(X1) :- q(X1,a).
To answer this query, Prolog first adds the constraint p(a) = p(X1),
which constrains the query to match the definition of p/1. Further, the
query p(a) succeeds only if the body q(X1,a) succeeds, which it does un-
der the additional constraint q(X1,a) = q(Y1,Y1). The complete sequence
of constraints under which the query succeeds is therefore p(a) = p(X1),
q(X1,a) = q(Y1,Y1). Informally we can observe that these constraints hold
if all the variables take the value a.
Consider now the query:
p(b).
Reasoning as before, we find the query would succeed under the constraints:
p(b) = p(X1), q(X1,a) = q(Y1,Y1). In this case, however, there are no
possible values for the variables which would satisfy these constraints. Y1
would have to be equal both to a and to b, which is impossible. Consequently
the query fails.
Next consider a non-atomic query
p(a), q(W,a).
The execution of this query proceeds in two stages. First, as we already saw,
p(a) succeeds under the constraints p(a) = p(X1), q(X1,a) = q(Y1,Y1),
and secondly q(W,a) succeeds under the constraint q(W,a) = q(Y2,Y2).
The complete sequence of constraints is therefore: p(a) = p(X1), q(X1,a)
= q(Y1,Y1), q(W,a) = q(Y2,Y2). Informally these constraints are satis-
fied if all the variables take the value a.
A failing non-atomic query is:
p(W), q(W,b).
Indeed, this would succeed under the constraints p(W) = p(X1), q(X1,a)
= q(Y1,Y1), q(W,b) = q(Y2,Y2). However, for this to be satisfied W would
have to take both the value a (to satisfy the first two equations) and b (to
satisfy the last equation), so the constraints cannot be satisfied and the
query fails.
Operationally, the constraints are added to the sequence during compu-
tation, and tested for consistency immediately. Thus the query
p(b), q(W,b).
fails already during the evaluation of the first atomic query, because already
at this stage the accumulated constraints are inconsistent. Consequently the
second atomic query is not evaluated at all.
Martelli–Montanari Algorithm
Non-deterministically choose from the set of equations an equation of a form
below and perform the associated action.

(1) f(s1, ..., sn) = f(t1, ..., tn): replace it by the equations s1 = t1, ..., sn = tn,
(2) f(s1, ..., sn) = g(t1, ..., tm) where f is different from g: halt with failure,
(3) x = x: delete it,
(4) t = x where t is not a variable: replace it by the equation x = t,
(5) x = t where x does not occur in t and x occurs elsewhere: apply the
substitution {x/t} to all other equations,
(6) x = t where x is different from t and x occurs in t: halt with failure.

The algorithm starts with the original sequence of equations and terminates
when no action can be performed or when failure arises. In case of
success we obtain the desired mgu.
Note that in the borderline case, when n = 0, action (1) includes the case
c = c for every constant c, which leads to deletion of such an equation. In
addition, action (2) includes the case of two different constants and the cases
where a constant is compared with a term that is neither a constant nor a
variable.
Choosing the second equation again action (1) applies and yields
Now, choosing the last equation action (3) applies and yields
Finally, choosing the second equation action (5) applies and yields
Next, choosing the first equation action (1) applies again and yields
Choosing again the first equation action (2) applies and a failure arises. So
the atomic goals p(k(h(a), f(X,b,Z))) and p(k(h(b), f(g(a),Y,Z)))
are not unifiable.
Let us try to repeat the choices made in (i). Applying action (1) twice we
get the set
{Z = h(X), f(X, b, Z) = f(g(Z), Y, Z)}.
Next, choosing the second equation action (1) applies again and yields
{Z = h(X), X = g(Z), b = Y, Z = Z}.
Choosing the third equation action (4) applies and yields
{Z = h(X), X = g(Z), Y = b, Z = Z}.
Now, choosing the fourth equation action (3) applies and yields
{Z = h(X), X = g(Z), Y = b}.
Finally, choosing the second equation action (5) applies and yields
{Z = h(g(Z)), X = g(Z), Y = b}.
But now choosing the first equation action (6) applies and a failure arises.
Hence the atomic goals p(k(Z, f(X,b,Z))) and p(k(h(X),f(g(Z),Y,Z)))
are not unifiable.
on Windows. This will bring up the TkECLiPSe top level shown in Figure
1.1.
Help for TkECLiPSe and its component tools is available from the Help
menu in the TkECLiPSe window.
To write your programs you need an editor. From the editor, save your
program as a plain text file, and then you can compile the program into
ECLiPSe. To compile the program in TkECLiPSe, select the Compile option
from the File menu. This will bring up a file selection dialogue, from which
you select the file you want to compile and click on the Open button. This
will compile the file, and any other it depends on.
If you have edited a file and want to compile it again (together with any
other files that may have changed), you just click on the make button. Now
a query to the program can be submitted, by typing it into the Goal Entry
field.
The idea is that the system evaluates the query with respect to the pro-
gram read-in and reports an answer. There are three possible outcomes.
W = a
No (0.00s cpu)
[eclipse 5]: p(a), q(W,a).
W = a
Yes (0.00s cpu)
[eclipse 6]: p(W), q(W,b).
No (0.00s cpu)
X = X.
Z = h(g(a))
X = g(a)
Y = b
Yes (0.00s cpu)
For the second pair of atomic queries we get the same outcome as in
Subsection 1.4.2, as well:
[eclipse 8]: p(k(h(a), f(X,b,Z))) = p(k(h(b), f(g(a),Y,Z))).
No (0.00s cpu)
However, for the third pair of atomic queries we get a puzzling answer:
[eclipse 9]: p(k(Z, f(X,b,Z))) = p(k(h(X), f(g(Z),Y,Z))).
Z = h(g(h(g(h(g(h(g(h(g(h(g(h(g(h(g(...))))))))))))))))
X = g(h(g(h(g(h(g(h(g(h(g(h(g(h(g(h(...))))))))))))))))
Y = b
Yes (0.00s cpu)
The point is that for efficiency reasons unification in Prolog is implemented
with the test x ∈ Var(t) (i.e., whether x occurs in t) in the actions
(5) and (6) of the Martelli–Montanari algorithm omitted. This test is
called an occur check. If x does occur in t, then instead of a failure a
circular binding is generated.
In ECLiPSe the situation can be restored by setting a specific flag on:
[eclipse 10]: set_flag(occur_check,on).
No (0.00s cpu)
In practice it is very rare that this complication arises so the default is
that this flag is not set.
Let us return now to the program
p(X) :- X = a.
p(X) :- X = b.
discussed in Subsection 1.4.3 and consider the query p(X):
[eclipse 12]: p(X).
X = a
Yes (0.00s cpu, solution 1, maybe more) ? ;
X = b
[eclipse 13]: X = 1, X = 2.
No (0.00s cpu)
The list notation is not very readable and even short lists become difficult
to parse, as the list [a|[b|[c|[]]]] shows.
So the following shorthands are carried out internally in Prolog for m ≥ 1
and n ≥ 0, also within the subterms:
[s0 | [s1, ..., sm | t]] abbreviates to [s0, s1, ..., sm | t],
[s1, ..., sm | [t1, ..., tn]] abbreviates to [s1, ..., sm, t1, ..., tn].
Thus for example, [a|[b|c]] abbreviates to [a,b|c], and the depicted
list [a|[b|[c|[]]]] abbreviates to a more readable form, [a,b,c].
The following interaction with ECLiPSe shows that these simplifications
are also carried out internally.
[eclipse 14]: X = [a | [b | c]].
X = [a, b|c]
Yes (0.00s cpu)
[eclipse 15]: [a,b |c] = [a | [b | c]].
X = [a, b, c, d, e, f]
Yes (0.00s cpu)
book(harry_potter, rowlings).
book(anna_karenina, tolstoy).
book(elements, euclid).
book(histories, herodotus).
book(constraint_logic_programming, apt).
book(constraint_logic_programming, wallace).
genre(harry_potter, fiction).
genre(anna_karenina, fiction).
genre(elements, science).
genre(histories, history).
genre(constraint_logic_programming, science).
No (0.00s cpu)
and to compute one or more solutions, like in the following two queries:
Author = herodotus
Yes (0.00s cpu, solution 1, maybe more) ? ;
No (0.00s cpu)
Author = apt
Yes (0.00s cpu, solution 1, maybe more) ? ;
Author = wallace
Yes (0.00s cpu, solution 2)
the concatenation of the empty list [] and the list ys yields the list ys,
if the concatenation of the lists xs and ys equals zs, then the concatenation
of the lists [x | xs] and ys equals [x | zs].
This translates into the program given in Figure 1.5.3
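Written out in Prolog, these two statements translate clause by clause into the following definition of app/3:

```prolog
% app(Xs, Ys, Zs) :- Zs is the result of concatenating
%                    the lists Xs and Ys.
app([], Ys, Ys).
app([X | Xs], Ys, [X | Zs]) :- app(Xs, Ys, Zs).
```

The first clause covers concatenation with the empty list; the second moves the head X from the first list to the result and recurses on the tails.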
Xs = []
Ys = [mon, wed, fri, sun]
Yes (0.00s cpu, solution 1, maybe more) ? ;
Xs = [mon]
Ys = [wed, fri, sun]
Yes (0.00s cpu, solution 2, maybe more) ? ;
Xs = [mon, wed]
Ys = [fri, sun]
Yes (0.00s cpu, solution 3, maybe more) ? ;
The first call to app/3 generates upon backtracking all possible splits of its
last argument into two lists, while the second call concatenates these lists
in the reverse order. By imposing an additional condition on the output we
can then generate all rotations that satisfy some condition. For example, the
Zs = [2, 2, 3, 1, 2, 1]
Yes (0.00s cpu, solution 1, maybe more) ? ;
Zs = [2, 3, 1, 2, 1, 2]
Yes (0.00s cpu, solution 2, maybe more) ? ;
Zs = [2, 1, 2, 2, 3, 1]
Yes (0.00s cpu, solution 3, maybe more) ? ;
No (0.00s cpu)
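The description above translates into a two-call definition; a sketch consistent with it:

```prolog
% rotate(Xs, Zs) :- Zs is a rotation of the list Xs.
% app(As, Bs, Xs) splits Xs into a prefix As and a suffix Bs;
% app(Bs, As, Zs) concatenates the two parts in the reverse order.
rotate(Xs, Zs) :- app(As, Bs, Xs), app(Bs, As, Zs).
```

On backtracking the first call enumerates all splits of Xs, so rotate/2 generates all rotations, producing the trivial rotation twice (once for the split with As = [] and once for the split with Bs = []).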
1.7 Summary
In this chapter we introduced pure Prolog programs and discussed their
syntax, meaning and the computation process. In pure Prolog programs the
values are assigned to variables by means of the unification algorithm.
To explain the execution model of pure Prolog we discussed computing
with equations, choice points and backtracking. Finally, to clarify some
basic features of Prolog programming we introduced two simple pure Prolog
programs.
1.8 Exercises
Exercise 1.1 What is the ECLiPSe output to the following queries?
1. X = 7, X = 6.
2. X = 7, X = X+1.
3. X = X+1, X = 7.
4. X = 3, Y = X+1.
5. Y = X+1, X = 3.
Exercise 1.2 Consider the program BOOKS from Subsection 1.6.1. Using
X \= Y to test whether X is different from Y, what query do you use to find
the other author of the book by apt?
1. Z = 2, q(X).
2. Z = 2, q(Z).
3. Z = 2, q(X), Y = Z+1.
Exercise 1.5 Reconsider the predicate rotate/2 defined on page 25. Explain
why in the following interaction with ECLiPSe the first solution is
produced twice. (The query asks for all rotations of the list [1,2,2,3,1,2]
that begin with 1.)
Zs = [1, 2, 2, 3, 1, 2]
Yes (0.00s cpu, solution 1, maybe more) ? ;
Zs = [1, 2, 1, 2, 2, 3]
Zs = [1, 2, 2, 3, 1, 2]
Yes (0.00s cpu, solution 3)
Suggest a modification of rotate/2, using \=, for which each solution is
produced once.
2
2.1 Introduction 29
2.2 The programming language L0 29
2.3 Translating pure Prolog into L0 33
2.4 Pure Prolog and declarative programming 35
2.5 L0 and declarative programming 36
2.6 Summary 37
2.7 Exercises 38
2.1 Introduction
30 A reconstruction of pure Prolog
To make these and other aspects of pure Prolog explicit we now define
a translation of pure Prolog into a small programming language L0 that
does include equality as an explicit atomic statement. Additionally, this
language features a number of other programming constructs, including re-
cursive procedures and an alternative choice statement. This interpretation
of the pure Prolog programs within a more conventional programming lan-
guage provides an alternative account of their semantics, a matter to which
we return in Sections 2.4 and 2.5.
To start with, we assume the same set of terms as in the case of pure
Prolog. The L0 programming language comprises:

• four types of atomic statements:
  – the equality statement,
  – the skip statement,
  – the fail statement,
  – the procedure call statement,
• the program composition, written as ;,
• the alternative choice statement, written as begin S1 orelse . . . orelse Sn end,
• the block statement, written as begin new x; S end.

In addition, L0 has procedure declarations of the form

p(x1, . . ., xn) : S,

where n ≥ 0, p is the procedure identifier, x1, . . ., xn are different variables
called formal parameters, and S is the procedure body. If n = 0, we view
this declaration as p : S.
By a program we mean a sequence of procedure declarations followed by
a (main) statement. We now explain the meaning of the programs written
in this small language. All variables used range over the set of possible
values, which are terms. A state is an answer in the sense of Subsection
1.4.2 but now viewed as a finite mapping that assigns to each variable a
value which is a term. Each variable is interpreted in the context of the
current state. The statements transform the states, i.e., the answers.
the variables x1', . . ., xn' that are not present in the current state. This yields
a renaming S' of S. Then S' is executed. If it succeeds, the equations in-
volving the variables x1', . . ., xn' are removed from the state. If it fails, then
backtracking takes place.
Finally, a procedure call p(t1 , . . ., tn ), where p is declared by
p(x1 , . . ., xn ) : S
Example 2.1 Assume that the initial state is the empty answer, i.e.,
initially no variable has a value.
(i) The statement X = f(Y); Y = a successfully terminates in the state X
= f(a), Y = a.
(ii) The statement X = a; X = b fails.
(iii) The statement begin f(X) = g(b) orelse f(X) = f(a) end success-
fully terminates in the state X = a.
(iv) The statement X = a; begin new X; X = b end successfully termi-
nates in the state X = a.
(v) Assuming the procedure declaration
the procedure call p(b,a) successfully terminates in the empty state, while
the procedure call p(a,a) fails.
Consider the following program NUM:

num(0).
num(s(X)) :- num(X).

We first rewrite these two clauses so that they share the same head:

num(Y) :- Y = 0.
num(Y) :- Y = s(X), num(X).
Now both clauses have the same head, num(Y). In general, we shall rewrite
first a pure Prolog program in such a way that, for each predicate p, all
clauses that define p have the same head of the form p(x1 , . . ., xn ), where
x1 , . . ., xn are different variables.
Next, we translate the bodies of the above two clauses to the following two
statements: Y = 0 and begin new X; Y = s(X); num(X) end. The latter
reflects the fact that in the second clause X is a local variable, i.e., it does
not occur in the clause head. These two statements are then put together
by means of the alternative choice statement and the original program NUM
is translated to the following procedure declaration:
num(Y): begin
Y = 0
orelse
begin new X; Y = s(X); num(X) end
end
The reader can now check that given this procedure declaration the proce-
dure call num(s(s(X))) succeeds and assigns 0 to X, while the procedure call
num(s(f(X))) fails. This coincides with the outcomes of the corresponding
Prolog computations.
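As a further illustration consider the APPEND program from Figure 1.5
(this worked example is ours; the variable names are arbitrary). The same
head-normalisation step turns its clauses into

app(Xs, Ys, Zs) :- Xs = [], Zs = Ys.
app(Xs, Ys, Zs) :- Xs = [X | X1s], Zs = [X | Z1s], app(X1s, Ys, Z1s).

Both clauses now have the same head app(Xs, Ys, Zs); in the second clause
X, X1s and Z1s are local variables. Completing the translation to L0 is the
subject of Exercise 2.3.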
Let us define now the translation for an arbitrary pure Prolog program P .
It will be translated into a sequence of procedure declarations. We perform
successively the following steps, where X1 , . . ., Xn , . . . are new variables:
q(X) : fail.
To clarify the boundary cases of the above translation consider the fol-
lowing pure Prolog program:
p.
p :- r.
r :- q.
u.
2.4 Pure Prolog and declarative programming
as the disjunction S1 ∨ . . . ∨ Sn,
the block statement
p(x1 , . . ., xn ) : S
num(0).
num(s(X)) :- num(X).
2.6 Summary
In this chapter we defined the meaning of pure Prolog programs by trans-
lating them into a small programming language L0 . This translation re-
veals the programming features implicitly present in the logic programming
paradigm. In particular, it clarifies the following three points made in the
previous chapter:
• computing takes place over the set of all terms,
• values are assigned to variables by means of the most general unifiers,
• the control is provided by a single mechanism: automatic backtracking.
Additionally, we note that
• the predicates correspond to the procedure identifiers,
• the variables present in the body of a clause but not in its head correspond
  to local variables,
• the definition of a predicate corresponds to a procedure declaration,
• the comma , separating the atomic goals in a query or in a clause body
  corresponds to the program composition ;.
So this translation shows that pure Prolog is a very succinct yet highly
expressive formalism.
Additionally, we clarified what it means that logic programming and pure
Prolog support declarative programming. Finally we showed that the lan-
guage L0 supports declarative programming, as well.
2.7 Exercises
Exercise 2.3 Translate to the language L0 the program APPEND from Figure
1.5 on page 24.
Part II
Elements of Prolog
3
Arithmetic in Prolog
3.1 Introduction 41
3.2 Arithmetic expressions and their evaluation 42
3.3 Arithmetic comparison predicates 47
3.4 Passive versus active constraints 51
3.5 Operators 52
3.6 Summary 55
3.7 Exercises 57
3.1 Introduction
Recall that the integer division is defined as the integer part of the usual
division outcome and, given two integers x and y such that y ≠ 0, x mod
y is defined as x - y*(x//y). The use of the infix and bracketless prefix
form for arithmetic operators leads to well-known ambiguities. For example,
4+3*5 could be interpreted either as (4+3)*5 or 4+(3*5) and -3+4 could
be interpreted either as (-3)+4 or -(3+4). Further, 12//4//3 could be
interpreted either as (12//4)//3 or 12//(4//3), etc.
Such ambiguities are resolved in Prolog by stipulating the following bind-
ing order (where ≺ means 'binds weaker than'):

+, -(binary), -(unary)  ≺  *, //, div, rem, mod  ≺  ^
and assuming that the binary arithmetic operators are left associative. That
is to say, + binds weaker than *, so 4+3*5 is interpreted as 4+(3*5).
In turn, 12//4*3 is interpreted as (12//4)*3, 12//4//3 is interpreted as
(12//4)//3, etc.
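These conventions can be checked directly (the queries and their numbering
below are ours):

[eclipse 1]: X is -3+4.

X = 1
Yes (0.00s cpu)

[eclipse 2]: X is 12//4//3.

X = 1
Yes (0.00s cpu)

So -3+4 is indeed read as (-3)+4, and 12//4//3 as (12//4)//3 (a right-
associative reading would yield 12//(4//3), i.e. 12).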
The arithmetic operators and the above set of numbers uniquely deter-
mine a set of terms. We call terms defined in this language arithmetic
expressions and introduce the abbreviation gae for ground arithmetic
expressions, i.e., arithmetic expressions without variables.
As in every programming language we would like to evaluate the ground
arithmetic expressions. For example, we would like to compute 3+4 in Pro-
log. In customary programming languages this is done by means of the
assignment statement. In Prolog this is done by means of a limited coun-
terpart of the assignment, the built-in is/2, written in the infix form. For
example, we have
[eclipse 1]: X is 3+4.
X = 7
Yes (0.00s cpu)
We call is/2 the arithmetic evaluator . The above example represents
the most common use of is/2. In general, s is t is executed by unifying
s with the value of the ground arithmetic expression t. If t is not a gae, a
run-time error arises. More precisely, the following possibilities arise.
• t is a gae. Let val(t) be the value of t.
  – s is a number identical to val(t).
    Then the arithmetic evaluator succeeds and the empty substitution is
    produced. For example,
No (0.00s cpu)
  – s is a variable.
Then the arithmetic evaluator succeeds and the substitution s = val(t)
is produced. For example,
[eclipse 4]: X is 4+3*5.
X = 19
Yes (0.00s cpu)
  – s is not a number and not a variable.
Then a run-time error arises. For example,
[eclipse 5]: 3+4 is 3+4.
type error in +(3, 4, 3 + 4)
Abort
• t is not a gae.
Then a run-time error arises. For example,
[eclipse 6]: X is Y+1.
instantiation fault in +(Y, 1, X)
Abort
[eclipse 7]: X is X+1.
instantiation fault in +(X, 1, X)
Abort
So the is/2 built-in significantly differs from the assignment statement.
In particular, as the last example shows, we cannot increment a variable
using is/2. Because of this, typical uses of counting in Prolog are rather
subtle and confusing for beginners. Consider for instance the following
trivial problem:
len([], 0).
len([_ | Ts], N) :- len(Ts, M), N is M+1.
X = 3
Yes (0.00s cpu)
X = 0 + 1 + 1 + 1
Yes (0.00s cpu)
The problem is that the gaes can be evaluated only using the is statement.
To compute the value of a gae defined inductively one needs to generate
new local variables (like M in the LENGTH program) to store in them the
intermediate results, and then use the is statement.
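A common alternative (a sketch of ours, not used in the text) threads the
intermediate results through an additional accumulator argument, so that
every use of is involves only gaes:

% len_acc(Ts, N) :- N is the length of the list Ts.
len_acc(Ts, N) :- len_acc(Ts, 0, N).

len_acc([], N, N).
len_acc([_ | Ts], Acc, N) :- Acc1 is Acc+1, len_acc(Ts, Acc1, N).

Here Acc is always a number at the moment Acc1 is Acc+1 is executed, so
no run-time error can arise.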
To save explicitly introducing such extra variables, ECLiPSe supports for
is/2 a functional syntax. Let us suppose p/2 is a predicate whose last
argument is numeric, viz.
p(a, 1).
p(b, 2).
(The length/2 built-in can be used both to compute the length of a list,
and to build a list of different variables of a specified length.)
Then p/1 can be used as a function when it occurs on the right-hand side
of is, for example:
[eclipse 10]: X is p(a)+1.
X = 2
Yes (0.00s cpu)
In particular, both the len/2 predicate from the LENGTH program and the
length/2 built-in can be used as a function, for example:
[eclipse 11]: X is length([a,b,c]).
X = 3
Yes (0.00s cpu)
As another example of the use of an arithmetic evaluator consider a pred-
icate defining all the integer values in a given range. The appropriate defi-
nition is given in Figure 3.2.
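Figure 3.2 is not reproduced here. A definition consistent with the surrounding
text and with the answers below is the following sketch:

% select_val(Min, Max, Val) :-
%    Val is an integer with Min =< Val =< Max.
select_val(Min, Max, Min) :- Min =< Max.
select_val(Min, Max, Val) :-
    Min < Max, Min1 is Min+1, select_val(Min1, Max, Val).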
Note the use of a local variable Min1 in the arithmetic evaluator Min1 is
Min+1 to compute the increment of Min. In this program the appropriate
values are generated only upon demand, by means of backtracking. We
shall see the usefulness of such a process in Section 7.6. We have for example:
[eclipse 12]: select_val(10, 13, Val).
Val = 10
Yes (0.00s cpu, solution 1, maybe more) ? ;
Val = 11
Yes (0.00s cpu, solution 2, maybe more) ? ;
Val = 12
Yes (0.00s cpu, solution 3, maybe more) ? ;
Val = 13
Yes (0.00s cpu, solution 4, maybe more) ? ;
No (0.00s cpu)
No (0.00s cpu)
[eclipse 14]: select_val(10, 13, X), select_val(12, 15, X).
X = 12
Yes (0.00s cpu, solution 1, maybe more) ? ;
X = 13
Yes (0.00s cpu, solution 2, maybe more) ? ;
No (0.01s cpu)
The equality predicate =:=/2 between gaes should not be confused with
the 'is unifiable with' predicate =/2 discussed in Section 1.4.
The comparison predicates work on gaes and produce the expected out-
come. So for instance >/2 compares the values of two gaes and succeeds if
the value of the first argument is larger than the value of the second and
fails otherwise. Thus, for example
[eclipse 15]: 5*2 > 3+4.
Yes (0.00s cpu)
However, when one of the arguments of the comparison predicates is not
a gae, a run-time error arises. In particular, =:=/2 cannot be used to assign
a value to a variable. For example, we have
[eclipse 17]: X =:= 5.
instantiation fault in X =:= 5
Abort
As a simple example of the use of the comparison predicates consider the
program in Figure 3.3 that checks whether a list is an =<-ordered list of
numbers.
ordered([]).
ordered([H | Ts]) :- ordered(H, Ts).
ordered(_, []).
ordered(H, [Y | Ts]) :- H =< Y, ordered(Y, Ts).
We now have
[eclipse 18]: ordered([1,1.5,2,3]).

Yes (0.00s cpu)
Here a run-time error took place because during the computation the com-
parison predicate =</2 was applied to an argument that is not a number.
On the other hand, we also have
which shows that programs that use the arithmetic comparison predicates
can be quite subtle. In this case, the ORDERED program correctly rejects (by
means of a run-time error) some queries, such as ordered([1,X,3]) but
incorrectly accepts some other queries, such as ordered([X]).
A possible remedy is to use the number/1 built-in of Prolog that tests
whether its argument is a number and rewrite the program as follows.
ordered([]).
ordered([H | Ts]) :-
number(H),
ordered(H, Ts).
ordered(_, []).
ordered(H, [Y | Ts]) :-
number(Y),
H =< Y, ordered(Y, Ts).
% qs(Xs, Ys) :-
% Ys is an =<-ordered permutation of the list Xs.
qs([], []).
qs([X | Xs], Ys) :-
part(X, Xs, Littles, Bigs),
qs(Littles, Ls),
qs(Bigs, Bs),
app(Ls, [X | Bs], Ys).
In the part/4 predicate the first two arguments are used as inputs and
the last two as outputs. Note that the list constructors are used both in the
input positions and in the output positions.
We then have
Ys = [1, 5, 7, 8, 9]
More (0.00s cpu) ? ;
No (0.00s cpu)
3.5 Operators
Arithmetic operators are examples of function symbols that are written in
the infix form (for example +) or bracketless prefix form (the unary -).
These forms are equivalent to the standard, prefix, form. For example in
Prolog and ECLiPSe the expressions +(1,2) and 1+2 are indistinguishable:
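For instance, the query +(1,2) = 1+2 (our example) succeeds with the
empty substitution, since both sides denote the same term:

[eclipse 1]: +(1,2) = 1+2.

Yes (0.00s cpu)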
10 before 11.
9 before 10.
X before Z :- X before Y, Y before Z.
and ask the query 9 before X. We then obtain:
X = 10
Yes (0.00s cpu, solution 1, maybe more) ? ;
X = 11
Yes (0.00s cpu, solution 2, maybe more) ?
% Asking for more here would start an infinite loop!
This query can still be written in the standard syntax, as before(9, X).
before(:(10,00), :(11,00)).
To achieve the other form we would have to bracket the original expression
thus:
10:((00 before 11):00).
This second bracketing specifies that ((00 before 11):00) is a single com-
pound term. This term has the function symbol :/2 and two arguments,
(00 before 11) and 00.
If such brackets were always required, however, then it would make arith-
metic expressions quite unreadable. For example, we would have to write
(2*3)-(4//((3*Y)+(2*Z))) instead of 2 * 3 - 4 // (3 * Y + 2 * Z).
In order to guide the parser without the need for extra brackets, we asso-
ciate a priority with each operator. In the declaration
3.6 Summary
In this chapter we discussed Prolog facilities for arithmetic. They consist of
the is/2 built-in that allows us to evaluate ground arithmetic expressions,
This ensures the binding order and the associativity information stated
in Section 3.2.
In turn, the comparison predicates introduced in Section 3.3 are prede-
clared as
3.7 Exercises
Exercise 3.1 What is the ECLiPSe output to the following queries?
1. X is 7, X is 6.
2. X is 7, X is X+1.
3. X is X+1, X is 7.
4. X is 3, Y is X+1.
5. Y is X+1, X is 3.
inc(X) :- Z is X+1.
What is the ECLiPSe output to the following queries?
1. Z is 2, q(X).
2. Z is 2, q(Z).
3. Z is 2, q(X), Y is Z+1.
4. Z = 2, inc(X), X = 2.
5. X = 2, inc(X), Z = 2.
min(X, Y, X) :- X < Y.
min(X, Y, Y) :- X >= Y.
and
% min(X, Y, Z) :- Z is the minimum of the gaes X and Y.
min(X, Y, Z) :- X < Y, Z is X.
min(X, Y, Z) :- X >= Y, Z is Y.
Find a query for which they yield different answers.
Exercise 3.5 Redefine the len/2 predicate from Figure 3.1 on page 45 as
an infix operator has_length/2, using an appropriate operator declaration.
4
Control and meta-programming
This takes care of equations such as f(s) = f(t,u) which are now syntac-
tically allowed. (Here s,t,u are structures.)
In the previous chapter we explained using the example of the before/2
predicate that binary predicates can be turned into operators. Now that we
have also introduced the ambivalent syntax only one small step is needed to
realise meta-programming in Prolog. Namely, it suffices to identify queries
and clauses with the structures. To this end we just view the comma ,
separating the atomic goals of a query as a right-associative, infix operator
and :- as an infix operator. To ensure the correct interpretation of the
resulting structures the following declarations are internally stipulated:
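These are presumably the standard declarations (the priorities below are
the ones stipulated by the ISO Prolog standard):

:- op(1200, xfx, (:-)).
:- op(1000, xfy, ',').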
of the QUICKSORT program. The reader can easily check that it is a structure
with the priority 1200 and interpretation
Now that we have identified queries and clauses with structures we can
naturally view programs as lists of clauses and pass them as arguments.
However, it is much more convenient to be able to access the considered
program directly. To this end Prolog provides a built-in that we shall discuss
in Section 4.3.
4.1.2 Meta-variables
Another unusual syntactic feature of Prolog is that it permits the use of vari-
ables in the positions of atomic goals, both in the queries and in the clause
bodies. Such a use of a variable is called a meta-variable. Computation
in the presence of the meta-variables is defined as for pure Prolog programs
with the exception that the mgus employed now also bind the meta-variables.
So for example, given the QUICKSORT program and the clause
solve(Q) :- Q.
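the query solve(qs([7,9,8,1,5], Ys)) (our example) first instantiates the
meta-variable Q to qs([7,9,8,1,5], Ys), which is then executed as a query:

[eclipse 1]: solve(qs([7,9,8,1,5], Ys)).

Ys = [1, 5, 7, 8, 9]
More (0.00s cpu) ? ;
No (0.00s cpu)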
4.2 Control
So far the only control mechanism we introduced is the automatic back-
tracking. Prolog provides several control statements that allow one to write
more efficient and simpler programs.
X = a
Y = a
Yes (0.00s cpu)
As a more meaningful example consider the problem of modifying the
ORDERED program of Figure 3.3 so that in the case of a wrong input an
appropriate information is printed. To this end we use the number/1 built-
in which tests whether its argument is a number and the write/1 built-in
which writes its argument (which can be a compound term, like
'wrong input':H below). This modification is presented in Figure 4.1.
We get then as desired
[eclipse 2]: ordered([1,a,0]).
wrong input : a
Yes (0.00s cpu)
ordered([]).
ordered([H | Ts]) :-
( number(H) ->
ordered(H, Ts)
;
write('wrong input':H)
).
ordered(_, []).
ordered(H, [Y | Ts]) :-
( number(Y) ->
H =< Y, ordered(Y, Ts)
;
write('wrong input':Y)
).
[eclipse 3]: X = a ; X = b.
X = a
More (0.00s cpu) ? ;
X = b
X = a
Y = c
Yes (0.00s cpu)
No (0.00s cpu)
No (0.00s cpu)
The reason is that the query not X = 1 fails since the query X = 1 succeeds.
However, there exists an instance of X that is different from 1, for example
2. So the use of not in the presence of variables can lead to counterintuitive
answers. From the declarative point of view it is preferable to use it only
as a test, that is, when the query in its scope, like a = b in not a = b, has
no variables at the moment of execution.
We shall discuss the internal definition of not once we introduce the so-
called cut functor.
Once Next, Prolog provides the functor once/1. The query once Q is exe-
cuted the same way as Q is with the exception that once a solution is found,
the other solutions are discarded. So for example
[eclipse 8]: once (X = a ; X = b).
X = a
Yes (0.00s cpu)
No (0.00s cpu)
Again, we shall discuss the internal definition of once after we have in-
troduced the cut functor.
Cut The most controversial control facility of Prolog is the nullary functor
called cut and denoted by !/0. It is used to make the programs more
efficient by pruning the computation tree over which the search for solutions
takes place. Informally, the cut means commit to the current choice and
discard other solutions, though the precise meaning is more complicated.
To explain its effect, we explain first the implementation of two already
introduced built-ins using cut.
The once/1 functor is internally defined by means of the following decla-
ration and the single clause:
:- op(900, fy, once).
once Q :- Q, !.
So once Q is executed by executing Q first. If Q succeeds for the first time
the cut functor ! is executed. This has the effect that the execution
of once Q successfully terminates without any attempt to explore whether
other solutions to Q exist. If Q fails, then once Q fails, as well.
The implementation of the negation functor not/1 is equally short though
much less easy to understand:
:- op(900, fy, not).
not Q :- Q, !, fail.
not _ .
Here fail/0 is a Prolog built-in with the empty definition. This cryptic
two line program employs several discussed features of Prolog. Consider the
call not Q, where Q is a query. If Q succeeds, then the cut ! is performed.
This has the effect that all alternative ways of computing Q are discarded and
also the second clause is discarded. Next, the built-in fail is executed and a
failure arises. Since the only alternative clause was just discarded, the query
not Q fails. If on the other hand the query Q fails, then the backtracking
takes place and the second clause, not _, is selected. It immediately succeeds
with the empty substitution and so the initial query not Q succeeds with
the empty substitution.
After these two examples of the uses of cut let us provide its definition in
full generality. Consider the following definition of a predicate p:
p(s1 ) :- A1 .
...
p(si ) :- B,!,C.
...
p(sk ) :- Ak .
Here the ith clause contains the cut !. Suppose now that during the
execution of a query a call p(t) is encountered and eventually the ith clause
is used and the indicated occurrence of the cut is executed. Then this
occurrence of ! succeeds immediately, but additionally
The cut was introduced in Prolog to improve the efficiency of the pro-
grams. As an example let us return to the QUICKSORT program defined in
Figure 3.4 of Section 3.3 and consider the execution of a typical query such as
qs([7,9,8,1,5], Ys). To see that the resulting computation is inefficient
let us focus on the definition of the part/4 predicate:
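The definition in question is not reproduced above; it can be reconstructed
from the revision below, which adds the cut after the test X > Y and deletes
the test X =< Y:

part(_, [], [], []).
part(X, [Y | Xs], [Y | Ls], Bs) :-
    X > Y,
    part(X, Xs, Ls, Bs).
part(X, [Y | Xs], Ls, [Y | Bs]) :-
    X =< Y,
    part(X, Xs, Ls, Bs).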
Now the execution of the second clause above fails when 7 is compared with
9 and subsequently the last, third, clause is tried. At this moment 7 is again
compared with 9. The same redundancy occurs later when 1 is compared
with 5. To avoid such inefficiencies the definition of part/4 can be rewritten
using the cut as follows:
part(_, [], [], []).
part(X, [Y | Xs], [Y | Ls], Bs) :-
X > Y, !,
part(X, Xs, Ls, Bs).
part(X, [Y | Xs], Ls, [Y | Bs]) :-
part(X, Xs, Ls, Bs).
So the cut is introduced after the test X > Y whereas the test X =< Y is
deleted. The reader can easily check that the cut has the desired effect here.
The cut can also be used to ensure that only one solution is produced.
Consider for example the MEMBER program defined in Figure 4.2.
mem(X, [X | _]).
mem(X, [_ | Xs]) :- mem(X, Xs).
No (0.00s cpu)
X = jan
More (0.00s cpu) ? ;
X = wed
More (0.00s cpu) ? ;
X = fri
More (0.00s cpu) ? ;
No (0.00s cpu)
To ensure that only one solution is produced we modify the MEMBER pro-
gram by adding to the first clause the cut:
mem_check(X, [X | _]) :- !.
mem_check(X, [_ | Xs]) :- mem_check(X, Xs).
X = jan
Yes (0.00s cpu)
Another use of the cut will be illustrated in the next section. In general,
the cut is a powerful mechanism inherent to Prolog. Some of its uses can
be replaced by employing other built-ins that are internally defined using
the cut. For example, instead of the just discussed revision of the MEMBER
program we could use the once/1 built-in and simply run the queries of
the form once(mem(s,t)). Further, notice that the not/1 built-in can be
defined in a much more intuitive way using the if-then-else statement as
follows:
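The intended clause is presumably:

not Q :- ( Q -> fail ; true ).

If Q succeeds, the then branch fail is selected, so not Q fails; if Q fails,
the else branch true is selected, so not Q succeeds.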
The above clause simply formalises the intuitive meaning of negation, namely
that it reverses failure and success. What is happening here is that the cut is
absorbed in the if-then-else statement, which itself can be defined using
the cut as follows:
if_then_else(B, Q, R) :- B,!,Q.
if_then_else(B, Q, R) :- R.
On the other hand, several natural uses of the cut can be modelled by
means of other built-ins only at the cost of, sometimes considerable, rewrit-
ing of the program that can affect the readability. For example, in the case
of the part/4 predicate we can use the if-then-else operator and rewrite
the second clause as
part(X, [Y | Xs], Ls, Bs) :-
( X > Y ->
Ls = [Y | L1s], part(X, Xs, L1s, Bs)
;
Bs = [Y | B1s], part(X, Xs, Ls, B1s)
).
The effect is the same as when using the cut but we had to introduce the
explicit calls to the unification functor =/2.
Non-trivial complications arise during the interaction of the cut with other
Prolog built-ins. In some situations cut is not allowed. For example, no
cuts are allowed in the query B in the context of the if-then-else call
B -> S ; T. Moreover, once the cut is used, the clause order becomes of
crucial importance. For example, if we ignore the efficiency considerations,
in the original QUICKSORT program the order of the clauses defining the
part/4 predicate was immaterial. This is no longer the case in the revised
definition of part/4 that uses the cut. In short, the cut has to be used with
considerable care.
4.3 Meta-programming
4.3.1 Accessing programs directly
We explained in Section 4.1 how Prolog programs can be conveniently passed
as data, in the form of a list of clauses. But it is much more convenient to be
able to access the considered program directly. To this end Prolog provides
the clause/2 built-in.
In its call the first argument has to be a term that is neither a variable nor
a number. This makes it possible to determine the functor to which the call
refers. So for example each of the calls clause(X, Y) and clause(100,
Y) yields a run-time error, while the call clause(p(X), Y) may succeed or
fail.
Given a call clause(head, body), first the term head is unified with
a head of a clause present in the considered program. If no such clause
exists, the call clause(head, body) fails. Otherwise, the first such clause
is picked and the term head :- body is unified with this clause. (It is
assumed that true is the body of a unit clause; true/0 is a Prolog built-in
that immediately succeeds.) Upon backtracking successive choices for head
are considered and the corresponding alternative solutions are generated.
As an example of the use of clause/2 assume that the file with the
QUICKSORT program of Figure 3.4 from page 50 is read in. Then we have
the following interaction:
U = []
V = []
Z = true
Yes (0.00s cpu, solution 1, maybe more) ? ;
U = [X|Xs]
V = V
Z = part(X, Xs, Littles, Bigs), qs(Littles, Ls),
qs(Bigs, Bs), app(Ls, [X|Bs], V)
Yes (0.00s cpu, solution 2)
which shows how easily we can dissect the definition of the qs/2 predicate
into parts that can be further analysed. Note three references to the ,/2
operator in the structure assigned to Z.
To be able to access the definition of a functor p/n by means of the
clause/2 built-in, p/n has to be declared as a dynamic predicate. For
example, the above interaction with the QUICKSORT program requires the
presence of the declaration
:- dynamic qs/2.
4.3.2 Meta-interpreters
Using the clause/2 built-in it is remarkably easy to write meta-interpreters,
that is, programs that execute other programs. As an example consider the prob-
lem of writing in Prolog an interpreter for pure Prolog. The program is
presented in Figure 4.3. It is strikingly concise and intuitive.
The first clause states that the built-in true/0 succeeds immediately. The
second clause states that a query of the form A,B succeeds if both A and B
(In ECLiPSe the values of all variables are displayed. This explains the
superfluous equation V = V. In particular, the empty substitution is displayed
as a sequence of such equalities for each variable present in the query.)
% solve(X) :-
% The query X succeeds for the pure Prolog
% program accessible by clause/2.
solve(true) :- !.
solve((A,B)) :- !, solve(A), solve(B).
solve(H) :- clause(H, Body), solve(Body).
succeed. Here A,B is a structure the functor of which is ,/2. Recall from
page 60 that ,/2 was declared by
Xs = []
Ys = [fri, sun]
Zs = [fri, sun]
More (0.00s cpu) ? ;
Xs = [mon]
Ys = [sun]
Zs = [mon, sun]
More (0.00s cpu) ? ;
Xs = [mon, wed]
Ys = []
Zs = [mon, wed]
More (0.00s cpu) ? ;
No (0.00s cpu)
app(Xs, [_, _|Ys], [mon, wed, fri, sun]), app(Xs, Ys, Zs)
directly.
The META INTERPRETER program forms a basis for several non-trivial meta-
programs written in Prolog, including debuggers, tracers, interpreters and
partial evaluators. As an example of a simple extension of this program con-
sider the problem of enhancing the answers to the queries with the length of
the derivation. The appropriate modification of solve/1 is straightforward
and is presented in Figure 4.4.
% solve(X, N) :-
% The query X succeeds for the pure Prolog
% program accessible by clause/2 in N steps.
solve(true, 0) :- !.
solve((A, B), N) :- !, solve(A, N1), solve(B, N2),
N is N1+N2.
solve(H, N) :- clause(H, Body), solve(Body, M),
N is M+1.
For the same query as above we now get the following answers:
Xs = []
Ys = [fri, sun]
Zs = [fri, sun]
N = 2
More (0.00s cpu) ? ;
Xs = [mon]
Ys = [sun]
Zs = [mon, sun]
N = 4
More (0.00s cpu) ? ;
Xs = [mon, wed]
Ys = []
Zs = [mon, wed]
N = 6
More (0.00s cpu) ? ;
No (0.00s cpu)
solve(true, 0) :- !.
solve(A, 1) :- arithmetic(A), !, A.
solve((A, B), N) :- !, solve(A, N1), solve(B, N2),
N is N1+N2.
solve(H, N) :- clause(H, Body), solve(Body, M),
N is M+1.
In this program the calls of the arithmetic atomic queries are just shifted
to the system level. In other words, these calls are executed directly by the
underlying Prolog system. We now have
Ys = [1, 5, 7, 8, 9]
N = 36
More (0.00s cpu) ? ;
No (0.00s cpu)
So it takes 36 derivation steps to compute the answer to the query
qs([7,9,8,1,5], Ys).
4.4 Summary
In this chapter we discussed in turn:
ambivalent syntax and meta-variables,
various control constructs,
meta-programming.
In the process we introduced the following built-ins:
the if-then-else statement written in Prolog as B -> S ; T,
write/1,
disjunction: ;/2,
negation: not/1,
fail/0,
once/1,
cut: !/0,
clause/2.
In particular we showed how the syntactic features of Prolog (and the
absence of types) make it easy to write various meta-programs. The am-
bivalent syntax goes beyond the first-order logic syntax. It is useful to realise
that there is an alternative syntactic reading of Prolog programs in which
the first-order logic syntax is respected and the ambivalent syntax is not
used. Namely, after interpreting the operators used as the customary prefix
operators we can view each Prolog clause as a construct of the form
s :- t1 , . . ., tn ,
where s is a term that is neither a variable nor a number and each ti is a
term that is not a number.
This syntactic reading respects the first-order logic but has one important
disadvantage. Namely, the declarative interpretation of the programs that
relied on the use of the predicates is then lost. The ambivalent syntax can
4.5 Exercises
Exercise 4.1 Redefine the len/2 predicate from Figure 3.1 on page 45 so
that it can be used both to compute the length of a list, and to build a list
of different variables of a specified length, like the length/2 built-in. Hint.
Use the integer/1 built-in which tests whether its argument is an integer.
Exercise 4.2 Suppose we maintain sets as lists. Consider the following two
ways of defining the result of adding an element to a set:
% add(X, Xs, Ys) :- The set Ys is the result of adding
% the element X to the set Xs.
5
Manipulating structures
X = X
Yes (0.00s cpu)
[eclipse 2]: var(f(X)).
No (0.00s cpu)
X = X
Yes (0.00s cpu)
No (0.00s cpu)
No (0.00s cpu)
No (0.00s cpu)
X = 1
Yes (0.00s cpu)
[eclipse 13]: var(X), X = 1, var(X), X = 1.
No (0.00s cpu)
Also the order of various simple atomic goals now becomes crucial: the
query var(X), X = 1 succeeds but the query X = 1, var(X) fails.
X = X
Yes (0.00s cpu)
[eclipse 15]: f(X) == f(Y).
No (0.00s cpu)
\==/2, which tests whether two structures are not literally identical:
[eclipse 16]: f(X) \== f(X).
No (0.00s cpu)
[eclipse 17]: f(X) \== f(Y).
X = X
Y = Y
So these two built-ins are used as infix operators. In ECLiPSe they are
defined internally as follows:
list([]) :- !.
list([_ | Ts]) :- list(Ts).
X = []
Yes (0.00s cpu)
The obvious correction consists of using in the first clause the ==/2 built-in:
list(X) :- X == [], !.
list([_ | Ts]) :- list(Ts).
Now, however, the query list(X) diverges. The reason is that during its
execution backtracking takes place and then the second clause is repeatedly
used. The right remedy turns out to be to add an appropriate test in the second
clause. The resulting program is given in Figure 5.1.
% list(X) :- X is a list.
list(X) :- X == [], !.
list([_ | Ts]) :- nonvar(Ts), list(Ts).
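With this version the earlier problems disappear; for example (queries sketched by us, not the book's transcript):

```prolog
?- list([a, b]).      % succeeds: a proper list
?- list(X).           % fails instead of diverging: X == [] fails,
                      % and nonvar(Ts) blocks the second clause
?- list([a | T]).     % fails: the tail T is a variable
```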
5.3 Structure decomposition and construction facilities
[eclipse 19]: functor(f(a, b), F, N).
F = f
N = 2
Yes (0.00s cpu)
[eclipse 20]: functor(c, F, N).
F = c
N = 0
Yes (0.00s cpu)
In the presence of the arithmetic operators functor/3 deals correctly
with the infix notation:
[eclipse 21]: functor(3*4+5, F, N).
F = +
N = 2
Yes (0.00s cpu)
Also, functor/3 allows us to reveal the internal representation of lists:
F = []
N = 0
Yes (0.00s cpu)
[eclipse 23]: functor([a | b], F, N).
F = .
N = 2
Yes (0.00s cpu)
Indeed, the lists are represented internally using the functor ./2 and the
constant []. For example we have:
[eclipse 24]: X = .(a, .(b, [])).
X = [a, b]
Yes (0.00s cpu)
t is a variable.
Then f has to be a non-numeric constant (an atom) and n a natural
number. Then the query functor(t, f, n) is executed by instantiating
t to a most general structure of arity n whose leading symbol is f. For
example:
[eclipse 26]: functor(T, f, 0).
T = f
Yes (0.00s cpu)
[eclipse 27]: functor(T, ., 2).
T = [_162|_163]
Yes (0.00s cpu)
X = X
A = g(a, X)
Yes (0.00s cpu)
[eclipse 29]: arg(2, [a,b,c], A).
A = [b, c]
Yes (0.00s cpu)
As an example of the use of these built-ins consider the following problem.
X = X
Xs = Xs
Ys = Ys
Zs = Zs
List = [X, Xs, Ys, X, Zs, Xs, Ys, Zs]
Yes (0.00s cpu)
Of course, by a minor modification of the above program we can compute
in the same way the list of variables of a term, without repetitions (see
Exercise 5.2).
=../2 The =../2 built-in (for historic reasons pronounced univ) either
creates a list which consists of the leading symbol of the structure followed
by its arguments or constructs a structure from a list that starts with a
function symbol and the tail of which is a list of structure arguments.
It is internally defined as an infix operator.
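Its two modes of use can be sketched as follows (standard Prolog behaviour; these are our examples, not the book's transcript):

```prolog
?- f(a, g(b)) =.. List.      % decompose a structure into a list
List = [f, a, g(b)]

?- Term =.. [h, 1, 2].       % construct a structure from a list
Term = h(1, 2)
```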
% vars(Term, List) :-
% List is the list of variables of the term Term.
args(0, _, []) :- !.
args(K, Term, List) :-
K > 0,
K1 is K-1,
args(K1, Term, L1s),
arg(K,Term,A),
vars(A, L2s),
app(L1s, L2s, List).
augmented by the APPEND program of Figure 1.5.
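The vars/2 clauses of Figure 5.2 are not reproduced in this extract; a plausible reconstruction, consistent with the args/3 helper above, is:

```prolog
% vars(Term, List): List is the list of variables of Term,
% with repetitions, collected left to right (our sketch).
vars(Term, [Term]) :- var(Term), !.
vars(Term, []) :- atomic(Term), !.
vars(Term, List) :-
    compound(Term),
    functor(Term, _, K),
    args(K, Term, List).
```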
X = X
List = [f, a, g(X)]
Yes (0.00s cpu)
s is a variable.
Then t has to be a list, say [f, s1 , . . ., sn ] the head of which (so f) is
So using =../2 we can construct terms and pass them as arguments. More
interestingly, we can construct queries and execute them using the meta-
variable facility. This way it is possible to realise higher-order programming
in Prolog in the sense that predicates can be passed as arguments. To
illustrate this point consider the program MAP given in Figure 5.3.
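Figure 5.3 itself is not reproduced in this extract; a minimal sketch of such a MAP program, consistent with the description that follows, is:

```prolog
% map(P, Xs, Ys): Ys is the result of applying the binary
% predicate P to each element of Xs (our reconstruction).
map(_, [], []).
map(P, [X | Xs], [Y | Ys]) :-
    Query =.. [P, X, Y],
    Query,                 % the meta-variable facility
    map(P, Xs, Ys).
```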
In the last clause =../2 is used to construct an atomic query. Note the
use of the meta-variable Query. MAP is Prolog's counterpart of a higher-order
functional program and it behaves in the expected way. For example, given
the clause
increment(X, Y) :- Y is X+1.
we get
Ys = [2, 3, 4, 5]
Yes (0.00s cpu)
5.4 Summary
In this chapter we discussed Prolog built-ins that allow us to manipulate
structures. We discussed built-ins that allow us
to inspect structures:
var/1,
nonvar/1,
ground/1,
compound/1,
number/1,
atom/1,
atomic/1,
to compare structures:
==/2,
\==/2,
to decompose and construct structures:
functor/3,
arg/3,
=../2.
Using these built-ins we can analyse structures and also realise higher-order
programming in Prolog.
5.5 Exercises
Exercise 5.2 Modify the program VARIABLES from Figure 5.2 to compute
the list of variables of a term, without repetitions.
6 Constraint programming: a primer
6.1 Introduction 89
6.2 Basic concepts 90
6.3 Example classes of constraints 91
6.4 Constraint satisfaction problems: examples 94
6.5 Constrained optimisation problems 98
6.6 Solving CSPs and COPs 102
6.7 From CSPs and COPs to constraint programming 107
6.8 Summary 108
6.9 Exercises 108
6.1 Introduction
function and the task is to find an optimal solution w.r.t. the cost function.
In the area of constraint programming various methods and techniques were
developed to solve both kinds of problems, and we shall briefly explain their
basics.
the domains of x and y are disjoint or at least one of them is not a singleton
set.
As an example of a CSP involving these constraints take
⟨x = y, y ≠ z, z ≠ u ; x ∈ {a, b, c}, y ∈ {a, b, d}, z ∈ {a, b}, u ∈ {b}⟩.
It is straightforward to check that {(x, b), (y, b), (z, a), (u, b)} is its unique
solution.
Boolean constraints
Boolean constraints are formed using specific function symbols called con-
nectives. We adopt the unary connective ¬ (negation) and the binary
connectives ∧ (conjunction) and ∨ (disjunction). We also allow two
constants, true and false. The resulting terms are called Boolean ex-
pressions. The binary connectives are written using the infix notation and
¬, when applied to a variable, is written without the brackets. By a Boolean
constraint we mean a formula of the form s = t, where s, t are Boolean ex-
pressions. So the equality symbol = is the only predicate in the language.
Each Boolean expression s can be viewed as a shorthand for the Boolean
constraint s = true.
The variables are interpreted over the set of truth values, identified with
{0, 1}, and are called Boolean variables. In turn, the connectives are
interpreted using the standard truth tables. This interpretation allows us to
interpret each Boolean constraint c on the Boolean variables x1 , . . ., xn as a
subset of {0, 1}n .
For example, the interpretation of the Boolean constraint ¬x ∧ (y ∨ z),
that is ¬x ∧ (y ∨ z) = true, is the set
{(0, 0, 1), (0, 1, 0), (0, 1, 1)},
where we assume the alphabetic ordering of the variables.
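This interpretation can be checked by brute-force enumeration; a plain-Prolog sketch of ours (not an ECLi PSe solver):

```prolog
bool(0).
bool(1).

% sol(X, Y, Z): (X, Y, Z) satisfies the constraint not x /\ (y \/ z).
sol(X, Y, Z) :-
    bool(X), bool(Y), bool(Z),
    X =:= 0,        % the negation: x must be 0
    Y + Z >= 1.     % the disjunction: y or z must be 1
```

Backtracking over sol(X, Y, Z) yields exactly the three triples listed above.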
Linear constraints
Next, we define linear constraints. We shall interpret them either over the
set of integers or over the set of reals, or over a subset of one of these sets
(usually an interval). To start with, we assume a fixed set of numeric
constants representing either integers or reals.
As in the case of Boolean constraints we introduce the constraints in two
steps. By a linear expression we mean a term formed in the alphabet
that contains
We also assume that + and − are left associative and have the same
binding strength.
By a linear constraint we mean a formula of the form
s op t,
where s and t are linear expressions and op is one of the comparison
operators <, ≤, =, ≠, ≥, >. For example,
4·x + 3·y − x ≤ 5·z + 7·y
is a linear constraint.
When linear constraints are interpreted over the set of reals, it is custom-
ary to admit only ≤, = and ≥ as the comparison operators.
Arithmetic constraints
Finally, we introduce arithmetic constraints. The only difference between
them and linear constraints lies in the use of the multiplication symbol. In
the case of linear constraints the multiplication was allowed only if one of
its operands (the first one) was a numeric constant. To fit this restriction
into the syntax of first-order logic we introduced the unary multiplication
symbols r·. Now we dispense with these unary function symbols and simply
allow multiplication as a binary function symbol.
We define the arithmetic expressions and arithmetic constraints
analogously to the linear expressions and linear constraints, but now using
the binary multiplication symbol ·. In particular, each linear constraint is
an arithmetic constraint.
For example 4·x + x·y·x + 7 is an arithmetic expression and
4·x + x·y·x ≤ y·(z + 5) − 3·u
is an arithmetic constraint.
SEND
+ MORE
MONEY
is correct.
The problem can be formulated using linear constraints in a number of
different ways. The variables are S, E, N, D, M, O, R, Y . The domain of
each variable consists of the integer interval [0..9].
D+E = 10 C1 + Y,
C1 + N + R = 10 C2 + E,
C2 + E + O = 10 C3 + N,
C3 + S + M = 10 C4 + O,
C4 = M.
Here, C1 , . . ., C4 are the carry variables.
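These equations translate directly into a program; a sketch using the ic library (the library, the alldifferent and leading-digit constraints, and the #= syntax are our assumptions here; the book introduces such solvers only in later chapters):

```prolog
:- lib(ic).

send(Digits) :-
    Digits = [S, E, N, D, M, O, R, Y],
    Digits :: 0..9,
    alldifferent(Digits),
    S #\= 0, M #\= 0,            % leading digits are nonzero
    [C1, C2, C3, C4] :: 0..1,    % the carry variables
    D + E      #= 10*C1 + Y,
    C1 + N + R #= 10*C2 + E,
    C2 + E + O #= 10*C3 + N,
    C3 + S + M #= 10*C4 + O,
    C4         #= M,
    labeling(Digits).
```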
where d ∈ {0, 1, . . ., 9}. The 0/1 variables is_x,d relate back to the original
variables used in the equality constraint(s) via eight constraints
Σ_{d=0}^{9} d · is_x,d = x,
where x ∈ {S, E, N, D, M, O, R, Y}.
The disadvantage of this approach is of course that we need to introduce
80 new variables.
The problem under consideration has a unique solution depicted by the
following sum:
9567
+ 1085
10652
That is, the assignment
{(S, 9), (E, 5), (N, 6), (D, 7), (M, 1), (O, 0), (R, 8), (Y, 2)}
is the unique solution of the original CSP, and each other discussed
representation of the problem has a unique solution as well.
The condition i ≠ k in the last item ensures that the fields represented by
xi,j and xk,l are different.
The obvious disadvantage of this representation is the large number of
variables it involves.
all_different(x1 , . . ., xn ).
cost : D1 × · · · × Dn → R
to state that the amount of i euro cents can be paid using xi1 coins of 1 euro
cent, xi2 coins of 2 euro cents, xi5 coins of 5 euro cents, xi10 coins of 10 euro
cents, xi20 coins of 20 euro cents and xi50 coins of 50 euro cents.
Additionally, we add the constraints stating that for each i ∈ [1..99] the
amounts xij are bounded by the corresponding xj . More precisely, for each
i ∈ [1..99] and j ∈ {1, 2, 5, 10, 20, 50} we use the following constraint:
xij ≤ xj .
These 594 inequality constraints ensure that each amount of i euro cents
can indeed be paid using the collection represented by the xj variables.
Finally, for each solution to the just formulated CSP the sum
x1 + x2 + x5 + x10 + x20 + x50
represents the total amount of coins that we would like to minimise. So it
is the appropriate cost function.
Further, we assume that each customer may be served by any set of ware-
houses. We want to choose a set of warehouses from which the product will
be provided to the customers so that
the demands of all customers are satisfied,
the total cost, which equals the sum of the setup costs of the selected
warehouses plus the resulting transportation costs, is minimal.
In Figure 6.2 we illustrate the situation for three warehouses and four
customers. With each line connecting a warehouse and a customer a trans-
portation cost is associated.
To formalise this problem as a COP we use two kinds of decision variables.
For each warehouse j there is a Boolean variable Bj reflecting whether the
warehouse j is open (i.e., chosen) or not. Next, for each warehouse j and
customer i there is a continuous variable (i.e., a variable ranging over the
set of reals) Si,j saying how much of the customer i demand is supplied by
the warehouse j.
Then for each warehouse j there is a capacity constraint
capj ≥ Σ_i Si,j ,
[Figure 6.2: three warehouses, each with a setup cost scostj and a capacity capj, connected to four customers, each with a demand demi; every connecting line carries a transportation cost tcosti,j.]
We also state that the supplies from the warehouses that are not selected
equal zero, i.e., for each warehouse j
Bj = 0 → ∀i : Si,j = 0.
Finally, the cost function is the sum of the setup costs plus the trans-
portation costs:
Σ_j (Bj · scostj + Σ_i Si,j · tcosti,j ).
developed in the area of Linear Algebra and when we deal with systems of
linear inequalities over reals, it is preferable to use the methods developed
in the area of Linear Programming.
Still, in many situations the constraints used do not adhere to any simple
syntax restrictions or are a mixture of several types of constraints. Then it
is useful to rely on general methods. Constraint programming developed a
collection of such methods. They allow us to solve problems formulated as
CSPs and COPs. These methods are based on search. In general two forms
of search are used.
but simpler. The terms in italic can be made precise. Constraint prop-
agation is alternated with branching. This yields a tree, called a search
tree, over which a top-down search takes place. The leaves of the tree are
CSPs that are either obviously inconsistent (usually with an empty variable
domain) or solved. An important aspect of the top-down search is that the
search tree is generated on the fly.
An example of a search tree in which we refer explicitly to constraint
propagation and branching is depicted in Figure 6.3.
[Figure 6.3: a search tree in which levels of constraint propagation alternate with levels of branching.]
which remains after choosing that value for that variable. In Section 8.4 we
shall discuss this form of search in more detail and program it in Prolog.
Finally, several types of CSPs can be solved using specialised, domain spe-
cific methods. We already mentioned the examples of linear equations over
reals and linear inequalities over reals. In typical constraint programming
languages and systems these methods are directly available to the user in the
form of constraint solvers that can be readily invoked from a program.
Additionally, these languages and systems integrate such constraint solvers
with the general methods, such as built-in forms of constraint propagation,
so that the programmer can rely automatically on both types of methods.
As a result, generating and solving CSPs and COPs in constraint pro-
gramming languages and systems is considerably simpler than in conven-
tional programming languages. The resulting approach to programming,
i.e. constraint programming, is radically different from the usual one.
In particular, modelling plays an important role, while the programming
process is based on a judicious use of the available built-ins.
In the subsequent chapters we shall explain the ECLi PSe approach to
constraint programming. ECLi PSe , as already mentioned, extends Prolog
and consequently uses variables as unknowns, as in algebra or mathematical
logic. This makes it substantially easier to model CSPs and COPs since they
use the variables in the same sense. Further, ECLi PSe relies on a number
of libraries that facilitate the task of the programmer and simplify the task
of constraint programming in a dramatic way.
6.8 Summary
The aim of this chapter was to explain the basic principles of constraint
programming. To this end we introduced first the concepts of a constraint,
constraint satisfaction problem (CSP) and constrained optimisation prob-
lem (COP) and illustrated them by means of various examples. We also
mentioned the most common classes of constraints considered in practice.
Then we discussed the methods used to solve CSPs and COPs, notably
constraint propagation, backtracking search and branch and bound search.
We explained how these methods are supported in constraint programming
languages. In these languages specialised, domain specific methods are di-
rectly available in the form of constraint solvers.
6.9 Exercises
Exercise 6.1 Consider the following puzzle taken from Gardner [1979].
Ten cells numbered 0, . . ., 9 inscribe a 10-digit number such that each cell, say i,
indicates the total number of occurrences of the digit i in this number. Find this
number.
Formulate this problem as a CSP.
For your information: the answer is 6210001000.
7 Intermezzo: Iteration in ECLi PSe
7.1 Introduction
len([], 0).
len([_ | Ts], N) :- len(Ts, M), N is M+1.
It does seem a rather obscure and complicated way of expressing some-
thing quite simple. The difficulty is, of course, that declarative programs do
not allow the same variable to be reused for different things, so within the
pure declarative paradigm, we cannot just walk down the list incrementing
a counter.
ECLi PSe offers a syntax for encoding such iterations in a reasonably nat-
ural way. In this chapter we discuss these constructs in a systematic way. In
particular we show how they can be used to obtain a non-recursive solution
to the classic n-queens problem.
7.2 Iterating over lists and integer ranges
[eclipse 1]: ( foreach(El,[a,b,c]) do writeln(El) ).
a
b
c
El = El
Yes (0.00s cpu)
The user sees only one variable El, which appears to take at each iteration
the value of a different member of the list. However, internally ECLi PSe
turns this iteration into a recursion, so a different variable (El 1, El 2, etc.)
is introduced for each member of the list.
A similar iterator, count/3 is used for running a counter over a range of
integers:
[eclipse 2]: ( count(I,1,4) do writeln(I) ).
1
2
3
4
I = I
Yes (0.00s cpu)
The ECLi PSe iteration construct allows more than one iterator, so one
can iterate over a list and an integer range simultaneously. Thus we can write
out the elements of a list together with a number showing their position in
the list as follows:
[eclipse 3]: ( foreach(El,[a,b,c]), count(I,1,_)
do
writeln(I-El)
).
1 - a
2 - b
3 - c
El = El
I = I
Yes (0.00s cpu)
When two or more iterators appear on the same level, as in this example,
then they all iterate in step. In effect there is just one iteration, but at
each iteration all the iterators step forward together. In the above example,
therefore, at iteration 1, El takes the value a, whilst I takes the value 1.
At iteration 2, El takes the value b, whilst I takes the value 2. At the last
iteration 3, El takes the value c, whilst I takes the value 3. Because the
iteration stops when the end of the list is reached, we do not need to specify
a maximum for the integer range in the iterator count(I,1,_).
Later we shall encounter iterators at different levels, which is achieved by
writing an iteration construct within the query formed by another iteration
construct.
Coming back to the current example, with iterators at the same level,
we can even use the count iterator to compute the maximum, because it
automatically returns the value of I at the last iteration:
[eclipse 4]: ( foreach(El,[a,b,c]), count(I,1,Max)
do
writeln(I-El)
).
El = El
I = I
Max = 3
Yes (0.00s cpu)
1 - a
2 - b
3 - c
In the above query Max returns the length of the list. Thus we can find
the length of a list by this very construction; we just do not bother to
write out the elements:
[eclipse 5]: ( foreach(_,[a,b,c]), count(_,1,Length)
do
true
).
Length = 3
Yes (0.00s cpu)
The foreach/2 iterator walks down the list, and the count iterator in-
crements its index at each step. When the end of the list is reached, the
iteration stops, and the final value of the counter is returned as the value of
the variable Length. Curiously, the action taken at each iteration is noth-
ing! The atomic query true simply succeeds without doing anything. In
particular, we do not use the elements of the list or the counters that are
now represented by anonymous variables.
7.3 Iteration specifications in general
( IterationSpecs do Query )
where Query is any ECLi PSe query. The iteration specification
IterationSpecs is a comma-separated sequence of iterators, such as
foreach(El,List), count(I,1,Length)
El = El
List = [1, 2, 3, 4]
I = I
Yes (0.00s cpu)
Thus to construct a list of given length we can use the very same code we
used for measuring the length of a list, except that this time we input the
length, and output the list.
Notice that the variables El and I are not instantiated on completion of
the iteration. Their scope is local to the iteration construct, and even local to
each iteration. This means that we can construct a list of different variables
by simply replacing El = I in the above by true. To make clear these
are different variables, it is necessary to set the ECLi PSe output mode
to output the internal variable names.1 With the new output mode the
behaviour is as follows:
[eclipse 7]: ( foreach(El,List), count(I,1,4) do true ).
1 To set the output mode one can use set_flag(output_mode, "VmQP").
% len(List, N) :-
% N is the length of the list List
% or List is a list of variables of length N.
len(List, N) :-
( foreach(_,List), count(_,1,N) do true ).
I = I
Yes (0.00s cpu)
1 2 3 4 5
In = In
Out = Out
OutList = [2, 3, 4, 5]
Yes (0.00s cpu)
The reader should compare it with the MAP program from Figure 5.3 and
query No. 31 from page 84. Just as In and Out in the above query have
scope local to a single iteration, so we can add other local variables in the
body of the query. For example, we can input a list of characters and output
a list of their ASCII successors as follows:
[eclipse 13]: ( foreach(In,[a,b,c]),
foreach(Out,OutList)
do
char_code(In,InCode),
OutCode is InCode+1,
char_code(Out,OutCode)
).
In = In
Out = Out
OutList = [b, c, d]
InCode = InCode
OutCode = OutCode
Yes (0.00s cpu)
(The char_code/2 built-in provides a bi-directional conversion between
the characters and the ASCII codes.) Each variable occurring in a query
inside the iteration has local scope.
It is straightforward to pass a value, such as 5, into an iteration, as follows:
[eclipse 14]: ( foreach(In,[1,2,3]),
foreach(Out,OutList)
do
Out is In+5
).
In = In
Out = Out
OutList = [6,7,8]
Yes (0.00s cpu)
However, we cannot simply use a variable in place of the 5 in the above
query, because ECLi PSe treats any variable in the body of an iteration
construct as local, even if a variable with the same name also occurs outside
the iteration. For example the following behaviour is not really what we
want:
Var = 5
In = In
Out = Out
OutList = [6, 7, 8]
Yes (0.00s cpu)
We can now build a single clause program where the number to be added
to each element is supplied as an argument:
add_const(Increment, InList, OutList) :-
( foreach(In,InList),
foreach(Out,OutList),
param(Increment)
do
Out is In+Increment
).
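For example (our query, not the book's transcript):

```prolog
?- add_const(5, [1, 2, 3], Out).
Out = [6, 7, 8]
```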
Any number of parameters can be passed into an iteration. For example
we can increment positive elements by one amount and decrement negative
Output = [3 : 5, 3 : 6, 3 : 7, 3 : 8]
Yes (0.00s cpu)
We shall see this use of param in Section 8.4. Without param(N) we would
get the following behaviour (and a Singleton variables warning during
compilation):
[eclipse 19]: inc(3, [4,5,6,7], Output).
Output = [N : 5, N : 6, N : 7, N : 8]
Yes (0.00s cpu)
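The inc/3 program itself is missing from this extract; a reconstruction of ours, consistent with both behaviours shown (with and without param(N)), is:

```prolog
% inc(N, InList, OutList): pair N with each incremented element.
inc(N, InList, OutList) :-
    ( foreach(In, InList),
      foreach(N : Out, OutList),
      param(N)       % omitting this line gives the N : ... output above
    do
      Out is In + 1
    ).
```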
I = I
J = J
K = K
Yes (0.00s cpu)
5 6 7 8 9 10 12 14 16 18 15 18 21 24 27
Notice the use of the param(I) to pass the argument I into the inner itera-
tion.
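The nested query producing this output is not reproduced above; a sketch of ours consistent with it:

```prolog
( for(I, 1, 3)
do
  ( for(J, 5, 9),
    param(I)         % pass the outer index into the inner iteration
  do
    K is I*J,
    write(K), write(' ')
  )
).
```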
Because of the need to pass in parameters, embedded iterations can be
complicated to write. Therefore a facility is provided to iterate over multiple
ranges using a single iterator, multifor/3. The same example as above can
be written:
[eclipse 21]: ( multifor([I,J],[1,5],[3,9])
do
K is I*J,
write(K), write(' ')
).
I = I
J = J
K = K
Yes (0.00s cpu)
5 6 7 8 9 10 12 14 16 18 15 18 21 24 27
Arg = Arg
Yes (0.00s cpu)
a
b
c
The efficiency comes from being able to access components directly from
their location in the structure. In the following example we fill a structure
with values, and then access the third component:
do
true
),
X is Array[3].
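The beginning of this query is lost in this extract; a complete query in the same spirit (the details are our assumption) is:

```prolog
dim(Array, [4]),
( foreacharg(El, Array),
  count(I, 1, _)
do
  El = I            % fill the structure with 1, 2, 3, 4
),
X is Array[3].      % direct access: X is bound to 3
```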
% transpose(Matrix, Transpose) :-
% Transpose is a transpose of the matrix Matrix.
transpose(Matrix, Transpose) :-
dim(Matrix,[R,C]),
dim(Transpose,[C,R]),
( foreachelem(El,Matrix,[I,J]),
param(Transpose)
do
subscript(Transpose,[J,I],El)
).
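For example, with a 2 × 3 matrix written in ECLi PSe array notation (our query, not the book's):

```prolog
?- transpose([]([](1, 2, 3), [](4, 5, 6)), T).
T = []([](1, 4), [](2, 5), [](3, 6))
```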
7.5 fromto/4: the most general iterator
[eclipse 34]: ( fromto([a,b,c],[Head | Tail],Tail,[])
do
writeln(Head)
).
a
b
c
Head = Head
Tail = Tail
Yes (0.00s cpu)
The reader may note that this is a possible implementation of the foreach/2
iterator. Indeed, all the other iterators can be implemented using fromto/4.
Our informal description of the behaviour of fromto is highly procedural.
However, the behaviour of the fromto/4 construct is more powerful than a
procedural description might suggest. In fact, this same construct can be
used to construct a value of First backwards from the value of Last. In
the following two examples we show fromto/4 using
its first argument as the full list,
its last argument as the full list.
In both of them fromto/4 and foreach/2 iterate in step.
[eclipse 35]: ( fromto([a,b,c],[Head | Tail],Tail,[]),
foreach(El,List)
do
El = Head
).
Head = Head
Tail = Tail
El = El
List = [a, b, c]
Yes (0.00s cpu)
[eclipse 36]: ( fromto([],Tail,[Head | Tail],List),
foreach(El,[a,b,c])
do
El = Head
).
Tail = Tail
Head = Head
El = El
List = [c, b, a]
Yes (0.00s cpu)
The second query constructs the list [c,b,a] by putting one element at a
time, each a new intermediate variable, on the front of an initially empty
list. The actual values of the list elements are unknown until the
iteration terminates, at which point all the intermediate variables, as well
as the output list List, become fully instantiated.
So using the fromto/4 iterator we can reverse a list in a simple and elegant
way. The program is given in Figure 7.5.
% rev(List, Reverse) :-
% Reverse is the reverse of the list List or
% List is the reverse of the list Reverse.
rev(List, Reverse) :-
( fromto([],Tail,[Head | Tail],Reverse),
foreach(El,List) do El = Head ).
Reverse = [c, b, a]
List = [a, b, c]
Yes (0.00s cpu)
The fromto/4 iterator can also be used to iterate down the lists in cases
where more than one element needs to be visible at each iteration. Let
us use this, first, to write a non-recursive version of the program ORDERED
from Figure 3.3 on page 48 that checks whether a list is a ≤-ordered list
of numbers. The program is given in Figure 7.6.
Here the iterator
fromto(List,[This,Next | Rest],[Next | Rest],[_])
iterates down a non-empty list and at each iteration returns the tail of the
remaining list.
ordered(List) :-
( List = [] ->
true
;
( fromto(List,[This,Next | Rest],[Next | Rest],[_])
do
This =< Next
)
).
The previous iterators have another limitation: all the different iterators
in a single iteration must have the same number of components. Thus
the output list/range/structure must have the same number of components
as the input. This makes it impossible to use the previous iterators to filter
a list, for example, constructing a list of only those elements that pass a
test. To take an input list of integers and output a list of only the positive
integers it is therefore necessary to use the fromto/4 iterator, thus:
El = El
This = This
Next = Next
PosList = [4, 5, 2]
Yes (0.00s cpu)
The output list is in the reverse order from the input list, because each new
positive integer is added in front of the previous ones. To have the output
come in the same order as the input, the following iteration is required:
do
( El >= 0 -> This = [El | Next] ; This = Next )
).
El = El
PosList = [2, 5, 4]
This = This
Next = Next
Yes (0.00s cpu)
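The full query is not reproduced above; with an assumed input list [2,-1,5,4] (the actual input is not shown in this extract) it reads:

```prolog
( fromto(PosList, This, Next, []),
  foreach(El, [2, -1, 5, 4])
do
  ( El >= 0 -> This = [El | Next] ; This = Next )
).
```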
Old = Old
New = New
Name = c
ElVal = ElVal
OldVal = OldVal
Yes (0.00s cpu)
( perfect_square(In) ->
Next = stop
;
Next = go
)
).
In = In
Rest = Rest
OutList = [3, 5, 3, 4]
Next = Next
Yes (0.00s cpu)
In this example the first fromto/4 iterator goes through the input list, while
the second fromto/4 iterator checks the stopping condition at each iteration.
This enables the iterations to stop before the input list has been exhausted.
QueenStruct = [](1, 5, 8, 6, 3, 7, 2, 4)
Yes (0.02s cpu, solution 1, maybe more) ?
This solution corresponds to Figure 7.8.
queens(QueenStruct, Number) :-
dim(QueenStruct,[Number]),
( for(J,1,Number),
param(QueenStruct,Number)
do
select_val(1, Number, Qj),
subscript(QueenStruct,[J],Qj),
( for(I,1,J-1),
param(J,Qj,QueenStruct)
do
QueenStruct[I] =\= Qj,
QueenStruct[I]-Qj =\= I-J,
QueenStruct[I]-Qj =\= J-I
)
).
augmented by the SELECT program of Figure 3.2.
7.7 Summary
Iterators are typically used to iterate over data structures. They can also
be used as an alternative to recursion. In many situations the resulting
programs are simpler.
We introduced the iteration construct
( IterationSpecs do Query )
and the different iterators which can be used within the sequence of the itera-
tor specifiers IterationSpecs. In the process we also introduced the arrays
that are declared in ECLi PSe using the dim/2 built-in. The subscript/3
built-in allows us to assign a value to an array component or to test its value.
We discussed in turn the following iterators:3
For iterating over lists:
foreach(El,List) do Query(El).
Iterate Query(El) over each element El of the list List.
3 We also list here the most general versions of the discussed constructs.
[Figure 7.8: an 8 × 8 chessboard (rows 1–8, columns a–h) depicting the solution 1, 5, 8, 6, 3, 7, 2, 4 to the eight queens problem.]
( Iterator_1,
...,
Iterator_n,
param(Argument1,...,Variable,...)
do
Query(..., Argument1, ..., Variable, ...)
).
multifor(List,MinList,MaxList,IncrementList) do Query(List).
As above, but with specified increments.
For iterating over compound terms, especially arrays:
foreacharg(X,Term) do Query(X).
Iterate Query(X) with X ranging over all the arguments of the structure
Term.
foreacharg(X,Term,Index) do Query(X,Index).
As before, with Index recording the argument position of X in the
term.
foreachelem(X,Array) do Query(X).
Iterate over all the elements of a multi-dimensional array.
foreachelem(X,Array,Index) do Query(X,Index).
As before, but Index is set to the multi-dimensional index of the
element.
foreachelem(Array,Index) do Query(Index).
As before, but only use the index.
The most general iterator:
fromto(First,In,Out,Last) do Query(In,Out).
Iterate Query(In,Out) starting with In = First, until Out = Last.
7.8 Exercises
Exercise 7.3 Write a non-recursive version of the select_val/3 pred-
icate from Figure 3.2 on page 46.
8 Top-down search with passive constraints
8.1 Introduction
for example in the case of Boolean constraints, each time the same principle
could be invoked. In the subsequent chapters we shall gradually relax these
limitations, first by allowing, in the next chapter, passive constraints without
the risk of run-time errors, and second, starting with Chapter 10, by allowing
active constraints.
Note that both trees have eight leaves corresponding to the eight possible
pairs of values for X and Y but have different number of internal nodes. The
search(X,Y) :-
member(X, [1,2]),
member(Y, [1,2,3,4]),
X+Y =:= 6.
We shall see in Chapter 11 that with active constraints the variable order-
ing can have a tremendous influence on the number of search steps needed
to find a solution, or to prove that none exists.
naive_search(List) :-
( foreach(Var-Domain,List) do member(Var, Domain) ).
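For instance (an assumed query, not from the book):

```prolog
?- naive_search([X-[1, 2], Y-[a, b]]).
% first solution: X = 1, Y = a; further solutions on backtracking
```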
The variable order and the value order are built into this search routine.
The following generic labelling procedure enables the programmer to choose
the next variable and the next value freely. As will be shown in Chapter 11,
in ECLi PSe these choices can be supported by information which is gleaned
during search. This generic search procedure can be written as in Figure
8.6.
% search(List) :-
% Assign values from the variable domains
% to all the Var-Domain pairs in List.
search(List) :-
( fromto(List, Vars, Rest, [])
do
choose_var(Vars, Var-Domain, Rest),
choose_val(Domain, Val),
Var = Val
).
An important point to note is that choose_var/3 does not introduce any
choice points, because only one variable ordering is needed to carry out a
complete search. However, choose_val/2 does introduce choice points: in
fact the search tree is completely defined by the set of value choices at each
node of the search tree.
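The text leaves choose_var/3 and choose_val/2 open; the simplest instances, recovering the naive left-to-right, domain-order search, could be (our sketch):

```prolog
% Pick the first variable; no choice point is created.
choose_var([Var-Domain | Rest], Var-Domain, Rest).

% Try the domain values in order; this is where choice points arise.
choose_val(Domain, Val) :-
    member(Val, Domain).
```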
credit to values which seem more likely to be a good choice. This credit
is then available for the search subtree which remains after choosing that
value for that variable. To program this procedure we modify accordingly
the generic search procedure from Figure 8.6. The resulting generic credit
search procedure is shown in Figure 8.9.
The difference between this generic search procedure and the one from
Figure 8.6 is that during the iteration down the variable list the credit is
being maintained. The initial credit equals Credit and in each loop iteration
is reduced from the current credit CurCredit to the new credit NewCredit.
This reduction takes place during the value selection for the variable Var,
in the call choose_val(Domain, Val, CurCredit, NewCredit). The new
credit NewCredit is then available for the remainder of the search concerned
with the list Rest.
This search/2 procedure needs to be completed by choosing the
choose_var/3 procedure and the share_credit/3 procedure that manages
the credit allocation. As an example of the latter let us encode the pro-
gram in Figure 8.8 as a credit allocation procedure. The corresponding
share credit/3 procedure is shown in Figure 8.10.
In this program the iteration is controlled by the amount of credit left.
The initial credit equals N and in each loop iteration is reduced from the
current credit CurCredit to the new credit NewCredit by 1, or to 0 if no
8.4 Incomplete search 141
% search(List, Credit) :-
% Search for solutions with a given Credit.
search(List, Credit) :-
( fromto(List, Vars, Rest, []),
fromto(Credit, CurCredit, NewCredit, _)
do
choose_var(Vars, Var-Domain, Rest),
choose_val(Domain, Val, CurCredit, NewCredit),
Var = Val
).
elements are left. The initial credit N is used to select the first N elements
of the domain Domain (or all the elements if the domain has fewer than N
elements). During the iteration the list DomCredList is built by assigning
to each selected element in turn the same, initial, credit N. The latter is
achieved by using the param(N) iterator.
We have then for example:
[eclipse 1]: share_credit([1,2,3,4,5,6,7,8,9], 5, DomCredList).
DomCredList = [1 - 5, 2 - 5, 3 - 5, 4 - 5, 5 - 5]
Yes (0.00s cpu)
[eclipse 2]: share_credit([1,2,3,4], 5, DomCredList).
DomCredList = [1 - 5, 2 - 5, 3 - 5, 4 - 5]
Yes (0.00s cpu)
Because we keep allocating to the selected elements the same, initial,
credit, in the original search/2 procedure from Figure 8.9 the current credit
CurCredit remains equal to the initial credit Credit, which ensures the
correct operation of each call
share_credit(Domain, CurCredit, DomCredList).
In particular the query
142 Top-down search with passive constraints
% share_credit(Domain, N, DomCredList) :-
% Admit only the first N values.
share_credit(Domain, N, DomCredList) :-
( fromto(N, CurCredit, NewCredit, 0),
fromto(Domain, [Val|Tail], Tail, _),
foreach(Val-N, DomCredList),
param(N)
do
( Tail = [] ->
NewCredit is 0
;
NewCredit is CurCredit - 1
)
).
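The effect of this allocator can be mirrored in a Python sketch (not ECLiPSe): admit at most N values, each carrying the full credit N, and stop early when the domain runs out:

```python
def share_credit(domain, n):
    """Admit only the first n values of the domain, each with credit n
    (a Python mirror of the share_credit/3 procedure in Figure 8.10)."""
    out = []
    credit = n
    for i, val in enumerate(domain):
        if credit == 0:
            break
        out.append((val, n))                     # every kept value gets n
        credit = 0 if i == len(domain) - 1 else credit - 1
    return out
```

The two sample queries shown in the text are reproduced: nine values with credit 5 keep the first five, and a four-element domain is kept whole.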
% share_credit(Domain, N, DomCredList) :-
% Allocate credit N by binary chop.
share_credit(Domain, N, DomCredList) :-
( fromto(N, CurCredit, NewCredit, 0),
fromto(Domain, [Val|Tail], Tail, _),
foreach(Val-Credit, DomCredList)
do
( Tail = [] ->
Credit is CurCredit
;
Credit is fix(ceiling(CurCredit/2))
),
NewCredit is CurCredit - Credit
).
DomCredList = [1 - 3, 2 - 1, 3 - 1]
Yes (0.00s cpu)
These solutions correspond to the explored leaves in the search tree de-
picted in Figure 8.12. The explored part of the tree is put in bold.
The total credit allocated to all the values at a search node is equal to the
available credit N. Hence it follows that the total credit allocated throughout
the search tree is equal to the initial input credit at the top of the search
tree. Consequently, this form of credit sharing gives the programmer precise
control over the maximum number of nodes explored.
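The credit-conservation property just stated is easy to check against a Python mirror of the binary-chop allocator (not ECLiPSe; the helper name is illustrative): each value receives half the remaining credit rounded up, the last value receives whatever is left, so the allocated credits always sum to the input credit:

```python
def share_credit_chop(domain, n):
    """Allocate credit n by binary chop (mirrors the second
    share_credit/3 procedure above): each value gets half the
    remaining credit, rounded up; the last value gets the rest."""
    out = []
    credit = n
    for i, val in enumerate(domain):
        if credit == 0:
            break
        # -(-credit // 2) is ceiling division in plain Python
        c = credit if i == len(domain) - 1 else -(-credit // 2)
        out.append((val, c))
        credit -= c
    return out
```

For a three-element domain and credit 5 this gives the allocation [(1, 3), (2, 1), (3, 1)], and for any input the allocated credits sum to the initial credit — the conservation property that bounds the number of explored nodes.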
Another way to use credit is as a measure of distance from the preferred
left-hand branch of the search tree. This approach has been called limited
discrepancy search. The same amount of credit is allocated to the first
value as was input to the search node. An amount of credit one less than the
input is allocated to the second value; one less again to the third value; and
so on. The total credit allocated to alternative values, in this case, exceeds
the input credit. This share credit/3 procedure is shown in Figure 8.13.
So, as in the case of the share_credit/3 procedure from Figure 8.10 on
page 142, the same elements are selected. However, instead of the same
credit, N, now the diminishing credit, CurCredit, is allocated to these ele-
ments.
For example we now have:
% share_credit(Domain, N, DomCredList) :-
% Allocate credit N by discrepancy.
share_credit(Domain, N, DomCredList) :-
( fromto(N, CurrCredit, NewCredit, -1),
fromto(Domain, [Val|Tail], Tail, _),
foreach(Val-CurrCredit, DomCredList)
do
( Tail = [] ->
NewCredit is -1
;
NewCredit is CurrCredit-1
)
).
DomCredList = [1 - 5, 2 - 4, 3 - 3, 4 - 2, 5 - 1, 6 - 0]
Yes (0.00s cpu)
DomCredList = [1 - 5, 2 - 4, 3 - 3, 4 - 2]
Yes (0.00s cpu)
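A Python mirror of the discrepancy-based allocator (not ECLiPSe) makes the diminishing allocation explicit; iteration stops once the credit would drop below zero, or when the domain is exhausted:

```python
def share_credit_lds(domain, n):
    """Allocate diminishing credit n, n-1, ... to successive values
    (limited discrepancy style, mirroring Figure 8.13); values whose
    credit would be negative are not admitted."""
    out = []
    credit = n
    for i, val in enumerate(domain):
        if credit == -1:
            break
        out.append((val, credit))
        credit = -1 if i == len(domain) - 1 else credit - 1
    return out
```

Both transcripts above are reproduced: with credit 5, a nine-element domain admits six values (credits 5 down to 0) and a four-element domain admits all four.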
Because of this difference the query
search([X-[1,2,3,4,5,6,7,8,9], Y-[1,2,3,4], Z-[1,2,3,4]], 4).
now generates 33 solutions for (X,Y,Z).
To further clarify the limited discrepancy credit allocation assume the
initial credit of 1 and consider the query
search([X-[1,2], Y-[1,2], Z-[1,2], U-[1,2], V-[1,2]], 1).
It produces six solutions for (X,Y,Z,U,V):
(1,1,1,1,1), (1,1,1,1,2), (1,1,1,2,1),
(1,1,2,1,1), (1,2,1,1,1), (2,1,1,1,1).
These solutions correspond to the explored leaves in the search tree depicted
in Figure 8.14. The explored part of the tree is put in bold.
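The whole limited discrepancy search can be sketched in a few lines of Python (not ECLiPSe): the k-th value at a node costs k discrepancies, and a branch is cut as soon as the remaining credit would go negative. With credit 1 over five binary domains this yields exactly the six solutions listed above, in the same order, and the credit-4 query discussed earlier yields the stated 33 solutions:

```python
def lds_search(domains, credit):
    """Enumerate assignments in limited-discrepancy order: taking the
    k-th value of a domain consumes k units of credit, and branches
    whose credit would become negative are pruned."""
    if not domains:
        yield ()
        return
    head, rest = domains[0], domains[1:]
    for k, val in enumerate(head):
        if credit - k < 0:
            break                      # no credit left for more discrepancies
        for tail in lds_search(rest, credit - k):
            yield (val,) + tail
```

Note that the leftmost branch (all first values) is always explored, and each unit of credit buys one deviation from it.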
setval/2,
incval/1,
decval/1,
getval/2.
logical variable (this is the only way the values of non-logical variables can
be accessed) or tests whether a non-logical variable has a given value.
Thus we have for example:
N = 3
M = 4
Yes (0.00s cpu)
No (0.00s cpu)
The usefulness of non-logical variables stems from the fact that their val-
ues persist through backtracking. For example we have:
N = 1
Yes (0.00s cpu)
As a slightly less trivial example let us now count the number of times a
query succeeds. The appropriate program is given in Figure 8.15.
It is useful to see that the execution of the query succeed(Q, N) does
not instantiate any of the variables of the query Q. We have for example:
X = X
N = 3
Yes (0.00s cpu)
% succeed(Q, N) :-
% N is the number of times the query Q succeeds.
succeed(Q, N) :-
( setval(count,0),
Q,
incval(count),
fail
;
true
),
getval(count,N).
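The role of the non-logical variable count is to survive the failure-driven loop. A rough Python analogue (not ECLiPSe; here "backtracking" is simply iteration over a generator of solutions) keeps the counter outside the enumeration, just as count lives outside Prolog's backtracking:

```python
def succeed(query):
    """Count how many times the query succeeds, discarding the
    solutions themselves. `query` is a callable returning an iterator;
    each yielded item plays the role of one success of Q."""
    count = 0              # setval(count, 0): survives the enumeration
    for _ in query():
        count += 1         # incval(count) on each success
    return count           # getval(count, N)
```

As in the ECLiPSe version, the variables of the query are not instantiated by the counting: the solutions are consumed and thrown away.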
one(Q) :-
not(twice(Q)), % The query Q succeeds at most once.
getval(x,1). % The query Q succeeds at least once.
twice(Q) :-
setval(x,0),
Q,
incval(x),
getval(x,2), !.
N = 8
Yes (0.00s cpu)
To avoid variable clashes of this kind, ECLi PSe offers a special global vari-
able called a shelf whose name is automatically generated by the system.
Each shelf has a different name and so clashes are avoided. As our intention
is to focus on the basic concepts of ECLi PSe we do not pursue this topic
further.
X = 1
Y = 2
Yes (0.00s cpu, solution 1, maybe more) ? ;
X = 2
Y = 1
No (0.00s cpu)
[Figure: the search tree for this query — the branches X = 1 and X = 2 lead from the root (node 0) to nodes 1 and 2, below which lie the leaves 3, 4, 5 and 6.]
The number of upward arrows in this figure is six, and this is one measure
of the number of backtracks. The backtracking measure that is most often
used, however, only counts four backtracks in this search behaviour.
The reason is that when a failure lower in the search tree causes back-
tracking to a point several levels higher in the tree, this is usually counted as
a single backtrack. Consequently the upward arrows from node 4 to node 0
only count as a single backtrack. Similarly, the final failure at node 6 which
completes the search is only counted as a single backtrack.
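The two measures can be made concrete with a small Python sketch (not ECLiPSe). The figure's constraints are not reproduced in the text, so we assume the toy problem is X ≠ Y with both domains {1, 2} — consistent with the two solutions shown above. Every move up one edge counts towards the first measure; a maximal run of upward moves counts once towards the second:

```python
def count_backtracks(domains, ok):
    """Exhaustively search assignments, recording two measures:
    'steps' counts every upward edge traversal, while 'backtracks'
    counts each maximal upward run as a single backtrack."""
    state = {"steps": 0, "backtracks": 0, "moving_up": False}
    solutions = []

    def dfs(i, assignment):
        if i == len(domains):                  # leaf: test the constraint
            if ok(assignment):
                solutions.append(tuple(assignment))
            return
        for val in domains[i]:
            state["moving_up"] = False         # about to descend
            dfs(i + 1, assignment + [val])
            state["steps"] += 1                # one edge traversed upwards
            if not state["moving_up"]:         # start of an upward run
                state["backtracks"] += 1
            state["moving_up"] = True
        # returning further up continues the same upward run

    dfs(0, [])
    return solutions, state["steps"], state["backtracks"]
```

On the assumed toy problem this reports six upward steps but only four backtracks, matching the two measures discussed above.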
To count backtracks according to these two measures, we augment our
generic search procedure from Figure 8.6 with three atomic goals: to ini-
tialise the backtrack count, to count backtracks, and to retrieve the final
count. The augmented generic search procedure is shown in Figure 8.18.
The backtrack information is recorded in non-logical variables and is
maintained through backtracking. The implementation of the backtracking
predicates that counts each backtrack step (yielding a total of six back-
tracks in the previous toy search example) is shown in Figure 8.19. In this
case we simply increment the backtracks variable on backtracking. The
predicate count_backtracks is implemented using the auxiliary predicate
on_backtracking/1 that takes a query as an argument and calls it on back-
tracking. Subsequently it fails, so that backtracking continues.
8.6 Counting the number of backtracks 151
% search(List, Backtracks) :-
% Find a solution and count the number of backtracks.
search(List, Backtracks) :-
init_backtracks,
( fromto(List, Vars, Rest,[])
do
choose_var(Vars, Var-Domain, Rest),
choose_val(Domain, Val),
Var = Val,
count_backtracks
),
get_backtracks(Backtracks).
init_backtracks :-
setval(backtracks,0).
get_backtracks(B) :-
getval(backtracks,B).
count_backtracks :-
on_backtracking(incval(backtracks)).
The first implementation is clearer, though less efficient. The second one
implements the counting process directly.
% First implementation
count_backtracks :-
setval(single_step,true).
count_backtracks :-
on_backtracking(clever_count).
clever_count :-
( getval(single_step,true) ->
incval(backtracks)
;
true
),
setval(single_step, false).
% Second implementation
count_backtracks :-
setval(single_step,true).
count_backtracks :-
getval(single_step,true),
incval(backtracks),
setval(single_step,false).
8.7 Summary
In this chapter we discussed how finite CSPs can be solved in Prolog by
means of top-down search. This brought us to a discussion of complete and
incomplete forms of top-down search. For the complete top-down search we
discussed the impact of the variable ordering and the value ordering. For the
incomplete top-down search we focussed on the implementations of various
forms of credit-based search, including the limited discrepancy search.
We also discussed ECLi PSe facilities that allow us to count the number
of backtracks. They consist of non-logical variables and the following four
built-ins with which we can manipulate them:
setval/2,
incval/1,
decval/1,
getval/2.
8.8 Exercises
p(a,Y) :- q(Y).
p(b,Y) :- r(Y).
q(1).
q(2).
r(1).
Exercise 8.3 The naive_search/1 predicate from page 138 can be alterna-
tively defined by
naive_search1(Vars, Vals) :-
( foreach(V,Vars), param(Vals) do member(V,Vals) ).
It searches for all possible assignments of the values to the variables. For
example the query naive_search1([X,Y,Z], [1,2]) returns
X=1, Y=1, Z=1
and then the following on backtracking
X=1, Y=1, Z=2,
X=1, Y=2, Z=1,
X=1, Y=2, Z=2,
etc.
Modify naive_search1/2 to a predicate naive_search2/2 that displays
its search tree using the names v0, v1, v2 and so on for the variables. For
example the query naive_search2([X,Y,Z], [1,2]), fail should pro-
duce the following output:
v0 = 1 v1 = 1 v2 = 1
v2 = 2
v1 = 2 v2 = 1
v2 = 2
v0 = 2 v1 = 1 v2 = 1
v2 = 2
v1 = 2 v2 = 1
v2 = 2
Hint. Use the non-logical variables and the on_backtracking/1 predicate
defined in Figure 8.19 on page 151.
9 The suspend library

9.1 Introduction
solve them in a natural way. How this is done is the topic of the current
chapter.
:- lib(my_library).

solve(List):-
declareDomains(List),
generateConstraints(List),
search(List).
The first line loads the appropriate library, here my_library. Next, List
is a list of the CSP variables. The atomic goal declareDomains(List)
generates the domains of the variables from List and the atomic goal
generateConstraints(List) generates the desired constraints. Finally,
the atomic goal search(List) launches the appropriately chosen search
process that results in solving the generated CSP (or establishing its incon-
sistency).
So we use here a different approach than in Chapter 8 where we discussed
how to solve CSPs using Prolog. Indeed, the constraints are now generated
first and only then the search is launched. This difference is fundamental
and in the presence of appropriate libraries supporting constraint processing
it can lead to substantial gains in computing time.
To solve a COP we also need to generate the cost function. This is often
done together with the generation of constraints. So conceptually we use
then an ECLi PSe program that looks as follows:
:- lib(my_library).
solve(List):-
declareDomains(List),
generateConstraints_and_Cost(List, Cost),
search(List, Cost).
The ECLi PSe system provides a collection of libraries that can be used to
generate and solve CSPs and COPs. Each such library offers the user a set of
built-ins that deal with specific constraints (for example linear constraints)
and specific methods used to solve them (for example branch and bound).
9.3 Introducing the suspend library 157
In ECLi PSe each library, say my_library, can be accessed in three dif-
ferent ways.
Y = 3
Yes (0.00s cpu)
So in the presence of the suspend library a more complex computation
mechanism is used than in Prolog. In Prolog in each query the leftmost
atomic goal is selected. Now in each query the leftmost atomic goal that
is either non-arithmetic (that is, does not involve an arithmetic comparison
predicate) or ground is selected. We can make explicit how the Prolog selec-
tion rule is changed to handle delayed queries by writing a meta-interpreter
that does it.
solve(true) :- !.
solve((A,B)) :- !, solve(A), solve(B).
solve(H) :- rule(H, Body), solve(Body).
rule(A, B) :-
functor(A,F,N),
is_dynamic(F/N),
clause(A, B).
rule(A, true) :- A.
The definition has been changed to use rule/2 instead of clause/2. The
is_dynamic(F/N) built-in tests whether the predicate F/N has been declared
as dynamic (see page 70). In this way we obtain a meta-interpreter for pro-
grams that use built-ins, for example the comparison predicates. As in the
programs SOLVE and SOLVE2 from Figures 4.4 and 4.5 the handling of built-
ins is just shifted to the system level.
The above meta-interpreter always chooses the leftmost atomic goal to
solve next. The modified meta-interpreter is given in Figure 9.1. Naturally,
the suspend library is not really implemented in this way.
To enable an atomic goal to be delayed (in case it is arithmetic and not
ground) we introduce two extra arguments to pass the delayed goals through
each call. Further, the new meta-interpreter has an extra clause that delays,
through a call of the postpone/1 predicate, an atomic goal if it is not ready
to be solved immediately. This is a non-ground atomic goal of the form
suspend:A. Such a goal is added to the current delayed goal.
In the last clause for solve/3, after unifying the atomic goal H with the
head of a program clause, the meta-interpreter tries to solve the current
delayed goal, SuspIn, before solving the body Body. This is done in the
same way as solving the input query, using the call solve(SuspIn, true,
Susp2).
Finally, to handle suspend-qualified atomic goals a new rule is added to
the definition of rule/2. This clause is needed only when the suspend
library is not available or is not loaded.
This meta-interpreter now delays non-ground suspend-qualified goals as
required:
rule(A, B) :-
functor(A,F,N),
is_dynamic(F/N),
clause(A, B).
rule(suspend:A, true) :- !, A.
rule(A, true) :- A.
Fig. 9.1 The META INTERPRETER program for the suspend library
Abort
The programmer can use these constraints freely to model the problem,
and then select which constraint solver(s) to send them to, when experiment-
ing with algorithms for solving the problem. In particular, these constraints
are available in the suspend library.
X = 0
Y = Y
Delayed goals:
suspend : (0 or Y)
Yes (0.00s cpu)
X = 0
Y = 1
Yes (0.00s cpu)
or to a failure:
[eclipse 7]: suspend:(X or Y), X = 0, Y = 0.
No (0.00s cpu)
The same holds for all the core constraints: nothing is done until the query
is completely instantiated (i.e. becomes ground).
To implement Boolean constraints in Prolog we need to follow a different
approach and list the true facts. For example the or constraint would be
defined by three facts:
1 or 1.
1 or 0.
0 or 1.
Then the query X or Y, X = 0, Y = 1 succeeds as well, but, in contrast to
the approach based on the suspend library, the computation now involves
backtracking.
In general, if a core constraint becomes fully instantiated it is either
deleted (when it succeeds) or it causes a failure (when it fails) which can
trigger backtracking, as in the case of customary Prolog computations.
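This wake-on-groundness behaviour can be sketched with a toy constraint store in Python (not ECLiPSe; the Store class and its methods are illustrative, not any real API). A posted goal sleeps until all its variables are bound; at that point it is checked exactly once and either removed (success) or reported as a failure:

```python
class Store:
    """A toy suspend-style store: goals sleep until their variables
    are all bound, then are checked once and deleted."""
    def __init__(self):
        self.bindings = {}
        self.delayed = []                    # (variables, test) pairs

    def post(self, variables, test):
        self.delayed.append((variables, test))
        return self._wake()

    def bind(self, var, val):
        self.bindings[var] = val
        return self._wake()                  # may wake suspended goals

    def _wake(self):
        still = []
        for vs, test in self.delayed:
            if all(v in self.bindings for v in vs):    # fully instantiated?
                if not test(*(self.bindings[v] for v in vs)):
                    return False                       # failure
                # success: goal is simply dropped
            else:
                still.append((vs, test))               # keep sleeping
        self.delayed = still
        return True
```

For the or constraint, posting it and binding X = 0 leaves it delayed; binding Y = 1 then wakes and deletes it, while Y = 0 would wake it into a failure — with no backtracking involved, unlike the fact-based Prolog encoding.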
No (0.00s cpu)
but also for generating constraints in the form of delayed (atomic) goals:
Y = Y
Delayed goals:
suspend : (2 + 4 > Y)
suspend : (Y > 2)
Yes (0.00s cpu)
and for solving them through an instantiation:
[eclipse 11]: suspend:(2+4 > Y), suspend:(Y > 2), Y = 5.
Y = 5
Yes (0.00s cpu)
In particular, thanks to this delay mechanism we can first generate con-
straints and then solve them by a systematic generation of the candidate
values for the relevant variables.
Once the suspend library is loaded, these arithmetic constraints can be
written in a more compact way, using the $ prefix (though the equality and
disequality are written differently). So we have six arithmetic comparison
constraints:
less than, written as $<,
less than or equal, written as $=<,
equality, written as $=,
disequality, written as $\=,
greater than or equal, written as $>=,
greater than, written as $>.
In contrast to the corresponding original arithmetic comparison predi-
cates,
<, =<, =:=, =\=, >=, >,
they can be used in queries with variables, like in
[eclipse 12]: 1 + 2 $= Y.
Y = Y
Delayed goals:
suspend : (1 + 2 =:= Y)
9.4 Core constraints 163
X = 1 + 2
Yes (0.00s cpu)
[eclipse 14]: X $= 1 + 2.
X = X
Delayed goals:
suspend : (X =:= 1 + 2)
Yes (0.00s cpu)
[eclipse 15]: X =:= 1 + 2.
instantiation fault in X =:= 3 in module eclipse
Abort
In particular, the query X $= 1 + 2 is delayed, while its Prolog counterpart
X =:= 1 + 2, as already explained in Section 3.3, yields a run-time error.
A non-trivial example showing that the suspend library allows us to solve
more queries than Prolog is the following one. Let us modify the QUICKSORT
program of Figure 3.4 in Section 3.3 by treating the arithmetic comparison
operators as constraints, so by using the following modified definition of the
part/4 predicate:
part(_, [], [], []).
part(X, [Y|Xs], [Y|Ls], Bs) :- X $> Y, part(X, Xs, Ls, Bs).
part(X, [Y|Xs], Ls, [Y|Bs]) :- X $=< Y, part(X, Xs, Ls, Bs).
Then we can successfully run queries such as:
[eclipse 16]: qs([3.14, Y, 1, 5.5], [T, 2, U, Z]).
Y = 2
T = 1
U = 3.14
Z = 5.5
Yes (0.00s cpu, solution 1, maybe more) ? ;
No (0.00s cpu)
The same behaviour holds for all the considered modifications of QUICKSORT.
X = X
Delayed goals:
suspend : (X :: 1 .. 9)
Yes (0.00s cpu)
This query generates a declaration of the variable X by stipulating that it
ranges over the integer interval [1..9]. The ECLi PSe prompt confirms
that this variable declaration is interpreted as a unary constraint. Once
the suspend library is loaded, we can simply write X :: 1..9 instead of
suspend:(X :: 1..9) and similarly with all other declarations here dis-
cussed. From now on we assume that the suspend library is loaded.
To abbreviate multiple declarations with the same domain a list notation
can be used:
[S,E,N,D,M,O,R,Y] :: 0..9.
The domain bounds can be parameters that will be instantiated at the
run-time, as in the following example:
declare(M, N, List):- List :: M..N.
Then the query declare(0, 9, [S,E,N,D,M,O,R,Y]) achieves the effect of
the above declaration.
When handled by the suspend library the range is only used as a test
whether the variable becomes correctly instantiated:
[eclipse 18]: X :: 1..9, X = 5.
X = 5
Yes (0.00s cpu)
[eclipse 19]: X :: 1..9, X = 0.
No (0.00s cpu)
Finally, by using $:: instead of :: we can declare variables ranging
over a real interval:
[eclipse 20]: X :: 1..9, X = 2.5.
No (0.00s cpu)
[eclipse 21]: X $:: 1..9, X = 2.5.
X = 2.5
Yes (0.00s cpu)
The same can be achieved by using reals as bounds.
The second form of a variable declaration in the suspend library is pro-
vided by the integers/1 and reals/1 constraints. An example is
integers(X)
which declares the variable X ranging over the set of integers. Such decla-
rations can be viewed as a form of type declarations. As before, multiple
declarations can be written using the list notation. The following sample
queries illustrate the use of these two forms of variable declarations:
[eclipse 22]: integers([X,Y]), X = 3, Y = 4.
X = 3
Y = 4
Yes (0.00s cpu)
[eclipse 23]: integers([X,Y]), X = 3, Y = 4.5.
No (0.00s cpu)
[eclipse 24]: reals(X), X = [1,2.1,4].
X = [1, 2.1, 4]
Yes (0.00s cpu)
[eclipse 25]: reals(X), X = [1,2.1,a].
No (0.00s cpu)
X = 2.5
Yes (0.00s cpu)
[eclipse 27]: X #> 2, X = 2.5.
No (0.00s cpu)
No (0.00s cpu)
[eclipse 30]: $>(4, 5, Bool), Bool = 0.
Bool = 0
9.5 User defined suspensions 167
Y = 2
Yes (0.00s cpu)
[eclipse 32]: $::(X, 1..9, 0), X = 10.
X = 10
Yes (0.00s cpu)
[eclipse 33]: or(1,0,B).
B = 1
Yes (0.00s cpu)
In the last example the binary disjunction or/2 is reified to the ternary
one, or/3.
X = X
Delayed goals:
X =:= 10
Yes (0.00s cpu)
X = 10
Yes (0.00s cpu)
susp_xor(X, Y) :-
( nonvar(X) ->
susp_y_xor(X, Y)
;
suspend(susp_y_xor(X,Y), 3, X -> inst)
).
susp_y_xor(X, Y) :-
( nonvar(Y) ->
xor(X, Y)
;
suspend(xor(X,Y), 3, Y -> inst)
).
xor(1, 0).
xor(0, 1).
1 The full set of conditions can be found in the ECLi PSe documentation.
until X is instantiated and then it calls the second predicate, susp_y_xor/2.
(If X is already instantiated, then it calls susp_y_xor/2 immediately.) This
predicate in turn waits, using the call
X = X
Y = Y
Delayed goals:
susp_y_xor(X, Y)
X = 0
Y = Y
Delayed goals:
xor(0, Y)
Yes (0.00s cpu)
[eclipse 38]: susp_xor(X, Y), Y = 1.
X = X
Y = 1
Delayed goals:
susp_y_xor(X, 1)
Yes (0.00s cpu)
[eclipse 39]: susp_xor(X, Y), Y = 1, X = 0.
X = 0
Y = 1
Yes (0.00s cpu)
This should be contrasted with the behaviour of xor/2: the corresponding
queries (so queries 37–39 with susp_xor replaced by xor) all succeed, but
each time backtracking takes place.
Finally, of course there is no need to delay an atomic goal by means of
suspend/3 until all its arguments are instantiated. In fact, a more general
use of suspend/3 allows us to suspend an atomic goal until some of its
arguments are instantiated.
To explain the syntax and its use let us suspend the xor(X, Y) goal until
either X or Y has been instantiated:
[eclipse 40]: suspend(xor(X, Y), 2, [X, Y] -> inst).
X = X
Y = Y
Delayed goals:
xor(X, Y)
Yes (0.00s cpu)
[eclipse 41]: suspend(xor(X, Y), 2, [X, Y] -> inst), X = 1.
X = 1
Y = 0
Yes (0.00s cpu)
[eclipse 42]: suspend(xor(X, Y), 2, [X, Y] -> inst), Y = 1.
X = 0
Y = 1
Yes (0.00s cpu)
We shall discuss the suspend/3 built-in again in Subsection 10.3.1, where
we shall review its behaviour in the presence of constraint propagation.
X = X
Y = Y
Z = Z
Delayed goals:
suspend : ([X, Y, Z] :: 0 .. 1)
suspend : (X #\= Y)
suspend : (Y #\= Z)
suspend : (X #\= Z)
Yes (0.00s cpu)
Indeed, during its execution the arithmetic comparison predicate =\= is ap-
plied to arguments that are not numbers.
So far, all the constraints were explicitly written. A powerful feature of
ECLi PSe is that the constraints and whole CSPs can also be generated. Let
us clarify it by the following example. Suppose we wish to generate the CSP
with an arbitrary list of variables List. Each such query generates an ap-
propriate CSP. For example, to generate the CSP
ordered(List) :-
( List = [] ->
true
;
( fromto(List,[This,Next | Rest],[Next | Rest],[_])
do
This $< Next
)
).
X = X
Y = Y
Z = Z
U = U
V = V
Delayed goals:
suspend : ([X, Y, Z, U, V] :: 1 .. 1000)
suspend : (X < Y)
suspend : (Y < Z)
suspend : (Z < U)
suspend : (U < V)
Yes (0.00s cpu)
Earlier we used a query (query No. 43 on page 171) that given a list
of three variables posted a disequality constraint between each pair of ele-
ments of the list. Now we show how to define a generic version that we call
diff_list/1 that does the same with a list of any number of variables. This
predicate corresponds to the all_different constraint introduced in Sub-
section 6.4.2. It is defined in Figure 9.4 using the fromto/4 and foreach/2
iterators.
% diff_list(List) :-
% List is a list of different variables.
diff_list(List) :-
( fromto(List,[X|Tail],Tail,[])
do
( foreach(Y, Tail),
param(X)
do
X $\= Y
)
).
produces a list List of three anonymous variables, each with the domain
1..100, and subsequently generates the disequality constraints between
these three variables. Recall from Section 7.3 that length/2 is a Prolog
built-in that given a list as the first argument computes its length, or given
a natural number n as the second argument generates a list of anonymous
(and thus distinct) variables of length n.
Here is the outcome of the interaction with ECLi PSe :
Delayed goals:
suspend : ([_268, _330, _242] :: 1 .. 100)
suspend : (_268 =\= _330)
suspend : (_268 =\= _242)
suspend : (_330 =\= _242)
Yes (0.00s cpu)
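The constraint-generation pattern used by diff_list/1 — one disequality per unordered pair, each variable against the later ones — can be mirrored in Python (not ECLiPSe; the function name is illustrative):

```python
def diff_list(names):
    """Return one disequality per unordered pair of variables, in the
    same order as diff_list/1: each variable against all later ones."""
    return [(names[i], names[j])
            for i in range(len(names))
            for j in range(i + 1, len(names))]
```

For n variables this generates n(n-1)/2 constraints — three for three variables, as in the transcript above, and six for four.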
where n >= 1, all the a_i's are given parameters and all the x_i's and v are
variables ranging over an integer interval. Recall that we used this constraint
when discussing the knapsack problem in Subsection 6.5.1. This constraint
can be generated in ECLiPSe as follows:
sigma(As, Xs, V) :-
sum(As, Xs, Out),
eval(Out) #=< V.
X = X
Y = Y
Z = Z
U = U
V = V
Delayed goals:
suspend : ([X, Y, Z, U] :: 1 .. 1000)
suspend : (eval(1 * X + (2 * Y + (3 * Z + (4 * U + 0)))) #=< V)
Yes (0.00s cpu)

9.7 Using the suspend library
neighbour(1, 2).
neighbour(1, 3).
neighbour(1, 4).
neighbour(2, 3).
neighbour(2, 4).
colour(1). % red
colour(2). % yellow
colour(3). % blue
colour_map(Regions) :-
constraints(Regions),
search(Regions).
constraints(Regions) :-
no_of_regions(Count),
dim(Regions,[Count]),
( multifor([I,J],1,Count),
param(Regions)
do
( neighbour(I, J) ->
Regions[I] $\= Regions[J]
;
true
)
).
search(Regions) :-
( foreacharg(R,Regions) do colour(R) ).
Regions = [](1, 2, 3, 3)
Yes (0.00s cpu, solution 1, maybe more) ? ;
List = [9, 5, 6, 7, 1, 0, 8, 2]
Yes (176.60s cpu, solution 1, maybe more) ? ;
No (183.94s cpu)
The reason is that each constraint is activated only when all its variables
become ground. So the fact that these constraints were generated first is
only of a limited use. For example, the combination of values S = 1, E =
1, N = 1 is not generated, since a failure arises after trying S = 1, E = 1.
send(List):-
List = [S,E,N,D,M,O,R,Y],
List :: 0..9,
diff_list(List),
1000*S + 100*E + 10*N + D
+ 1000*M + 100*O + 10*R + E
$= 10000*M + 1000*O + 100*N + 10*E + Y,
S $\= 0,
M $\= 0,
search(List).
search(List) :-
( foreach(Var,List) do select_val(0, 9, Var) ).
augmented by the DIFF LIST program of Figure 9.4 and the SELECT
program of Figure 3.2.
However, all possible value combinations for the variables that are pairwise
different are still generated since the crucial equality constraint is used only
for testing. In the next chapter we shall see that in the presence of constraint
propagation this approach becomes considerably more realistic and efficient.
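Since the equality is used only as a final test, the whole program amounts to generate-and-test against one ground check. That check can be sketched in Python (not ECLiPSe; the function name is illustrative), and verified against the puzzle's well-known unique solution, 9567 + 1085 = 10652:

```python
def send_more_ok(s, e, n, d, m, o, r, y):
    """The SEND + MORE = MONEY constraints as one ground test:
    all digits different, leading digits non-zero, equation holds."""
    digits = [s, e, n, d, m, o, r, y]
    return (len(set(digits)) == len(digits)          # diff_list
            and s != 0 and m != 0
            and (1000*s + 100*e + 10*n + d)
              + (1000*m + 100*o + 10*r + e)
             == 10000*m + 1000*o + 100*n + 10*e + y)
```

Under the suspend library this test fires only once all eight variables are ground, which is exactly why the search above has to enumerate so many candidate tuples before rejecting them.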
queens(QueenStruct, Number) :-
dim(QueenStruct,[Number]),
constraints(QueenStruct, Number),
search(QueenStruct).
constraints(QueenStruct, Number) :-
( for(I,1,Number),
param(QueenStruct,Number)
do
QueenStruct[I] :: 1..Number,
( for(J,1,I-1),
param(I,QueenStruct)
do
QueenStruct[I] $\= QueenStruct[J],
QueenStruct[I]-QueenStruct[J] $\= I-J,
QueenStruct[I]-QueenStruct[J] $\= J-I
)
).
search(QueenStruct) :-
dim(QueenStruct,[N]),
( foreacharg(Col,QueenStruct),
param(N)
do
select_val(1, N, Col)
).
augmented by the SELECT program of Figure 3.2.
The structure of this program is different from that of the QUEENS program
given in Figure 7.7 on page 130. Namely, first the constraints are generated
and only then are the values for the subscripted variables systematically
generated.
Unfortunately, because each constraint is activated only when the variable
QueenStruct becomes ground (i.e., when every queen has been placed) this
program is very inefficient:
QueenStruct = [](1, 5, 8, 6, 3, 7, 2, 4)
Yes (341.84s cpu, solution 1, maybe more) ?
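The three disequalities posted for each pair of queens amount to the following ground test, sketched here in Python (not ECLiPSe; indices play the role of rows). The solution reported above, [](1, 5, 8, 6, 3, 7, 2, 4), satisfies it:

```python
def queens_ok(cols):
    """Check the n-queens constraints from the program above: no two
    queens share a column or either diagonal (rows are the indices)."""
    n = len(cols)
    return all(cols[i] != cols[j]
               and cols[i] - cols[j] != i - j     # same "falling" diagonal
               and cols[i] - cols[j] != j - i     # same "rising" diagonal
               for i in range(n) for j in range(i + 1, n))
```

Placing all queens on one diagonal, by contrast, fails the test immediately.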
9.8 Summary
In this chapter we discussed the suspend library of ECLiPSe. It provides
support for passive constraints through a dynamic reordering of the atomic
goals. In the case of arithmetic constraints this prevents the occurrence of
run-time errors.
We introduced the following core constraints that are available in the
suspend library:
Boolean constraints:
and/2, or/2, neg/1, =>/2,
arithmetic comparison constraints on reals:
$<, $=<, $=, $\=, $>=, $>,
arithmetic comparison constraints on integers:
#<, #=<, #=, #\=, #>=, #>,
variable declarations:
range (::, $::, #::, for example M..N), integers/1 and reals/1.
9.9 Exercises
Exercise 9.1 The META INTERPRETER program for the suspend library
given in Figure 9.1 on page 159 outputs the suspended atomic goals in the
reverse order:
Y = Y
X = X
SuspOut = suspend : (X > 3), suspend : (2 < Y + 1), true
Yes (0.00s cpu)
while we have
Y = Y
X = X
Delayed goals:
suspend : (2 < Y + 1)
suspend : (X > 3)
Yes (0.00s cpu)
Propose a modification that deals with this problem and that reproduces
the output of the suspend library, through a call of a solve/1 predicate.
Exercise 9.2 Write a version of the diff_list/1 predicate from Figure 9.4
on page 173 that uses recursion instead of iteration.
Exercise 9.3 As already mentioned, the QUEENS program from Figure 9.7
on page 179 is very inefficient because each constraint is activated only when
the variable QueenStruct becomes ground. Propose a modification of this
program in which each binary constraint is activated as soon as both of its
variables are ground.
Using the suspend/3 built-in write a procedure test_cl/1 that imple-
ments the test whether a clause is true, by suspending on just one Boolean
variable.
Hint. Use the following filter/3 predicate in your implementation:
filter([], [], unsat).
filter([H | T], [H | Rest], Sat) :-
var(H), !, filter(T, Rest, Sat).
filter([0 | T], Rest, Sat) :- !,
filter(T, Rest, Sat).
filter([1 |_], [], sat).
The call filter(List, Rest, Sat) filters the instantiated variables (1
or 0) out of the clause List, leaving the remaining variables in Rest, and
recording in Sat if the clause is already satisfied.
Exercise 9.5 Modify the ORDERED program from Figure 9.3 on page 172
using the suspend/3 built-in so that it fails as early as possible. For example
the query ordered([W,X,Y,Z]), W = 2, Z = 2 should fail.
Part IV

10 Constraint propagation in ECLiPSe

10.1 Introduction
X = X{[a, d]}
Y = Y{[a, d]}
Z = c
Delayed goals:
X{[a, d]} &= Y{[a, d]}
Yes (0.00s cpu)
10.2 The sd library 187
X = b
Y = b
Z = a
U = b
Yes (0.00s cpu)
So this query actually produces a unique solution to the CSP in question.
Finally, it is important to note that the constraint propagation for the
equality and disequality constraints considers each constraint separately
(and iterates the inferences until no new ones can be made). For exam-
ple, even though it is possible to infer that the CSP
⟨x ≠ y, y ≠ z, x ≠ z ; x ∈ {a, b}, y ∈ {a, b}, z ∈ {a, b}⟩
has no solution, the sd library cannot make this inference:
[eclipse 3]: [X,Y,Z] &:: [a,b],
X &\= Y, Y &\= Z, X &\= Z.
X = X{[a, b]}
Y = Y{[a, b]}
Z = Z{[a, b]}
Delayed goals:
X{[a, b]} &\= Y{[a, b]}
Y{[a, b]} &\= Z{[a, b]}
X{[a, b]} &\= Z{[a, b]}
Yes (0.00s cpu)
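The limitation is easy to see with a sketch of the underlying arc revision for a single disequality (Python, not ECLiPSe; the function name is illustrative): a value can be pruned from one domain only when the other domain is a singleton containing it. Since all three domains here are {a, b}, no single constraint prunes anything, so the joint inconsistency goes undetected:

```python
def revise(dx, dy):
    """Arc revision for x != y: prune a value from dx only when dy is
    a singleton containing that value. Returns the revised dx."""
    if len(dy) == 1:
        return dx - dy          # the single value of y is impossible for x
    return set(dx)              # otherwise nothing can be pruned
```

Revising each of the three disequalities over {a, b} leaves every domain unchanged — per-constraint propagation cannot reproduce the global pigeonhole argument.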
As in the case of the suspend library we can also generate sequences of
constraints by means of ECLi PSe programs. For example, to generate the
CSP
⟨x_1 = x_2, ..., x_{n-1} = x_n ; x_1 ∈ {l,r}, x_2 ∈ {l,r,+}, ..., x_n ∈ {l,r,+}⟩
where n > 1, independently of the value of n, we can use the following
program:
equal([]).
equal([H | Ts]) :- equal(H, Ts).
equal(_, []).
equal(H, [Y | Ts]) :- H &= Y, equal(Y, Ts).
Then for each variable H and a list of variables List the query
H &:: [l,r], List &:: [l,r,+], equal([H | List])
generates an appropriate CSP on the variables in [H | List]. For example,
we have
[eclipse 4]: H &:: [l,r], [X,Y,Z] &:: [l,r,+],
equal([H | [X,Y,Z]]).
H = H{[l, r]}
X = X{[l, r]}
Y = Y{[l, r]}
Z = Z{[l, r]}
Delayed goals:
H{[l, r]} &= X{[l, r]}
X{[l, r]} &= Y{[l, r]}
Y{[l, r]} &= Z{[l, r]}
10.3 The ic library 189
X = 0
Y = 1
Yes (0.00s cpu)
XOR | 0 | 1
----+---+---
 0  | 0 | 1
 1  | 1 | 0
We can view these gates as ternary (so reified) Boolean constraints. There-
fore the circuits built out of these gates can be naturally represented by a
sequence of such ternary constraints. In particular, consider the circuit de-
picted in Figure 10.1.
[Figure 10.1: a circuit with inputs i1, i2, i3 and outputs o1, o2, built from and, or and xor gates.]
The last rule defines xor3/3 by means of the and/3, or/3 and neg/2 con-
straints using the equivalence
I1 = 1
I2 = 1
O1 = 0
Yes (0.00s cpu)
xor3(X, Y, Z) :-
[X,Y,Z]::0..1,
( ground(X) -> xor3g(X,Y,Z) ;
ground(Y) -> xor3g(Y,X,Z) ;
ground(Z) -> xor3g(Z,X,Y) ;
true -> suspend(xor3(X,Y,Z),2, [X,Y,Z] -> inst)
).
xor3g(0, Y, Z) :- Y = Z.
xor3g(1, Y, Z) :- neg(Y, Z).
Note that the xor3g/3 predicate is called each time with the arguments
appropriately swapped. For example, if X and Y are not ground and Z is,
the call xor3g(Z,X,Y) takes place. Then, if Z=0, the constraint X = Y is
generated and if Z=1, the constraint neg(X, Y) is generated.
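To make the data flow concrete, here is a small session sketch of how xor3/3 behaves (our example, not from the book; it assumes neg/2 enforces Boolean negation as described):

```prolog
% Initially no argument of xor3(X, Y, Z) is ground, so the goal
% suspends, waiting for one of X, Y, Z to be instantiated.
% Binding Z wakes it; since Z is now ground, the clause calls
% xor3g(Z, X, Y), i.e. xor3g(1, X, Y), which posts neg(X, Y).
?- xor3(X, Y, Z), Z = 1.
% Subsequently binding X = 0 lets neg(0, Y) infer Y = 1.
```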
X = 4
Y = 2
Yes (0.00s cpu)
That is, this query has only one solution, X = 4 and Y = 2. The solution X
= 3 and Y = 1 is excluded since then X/2 and Y/2 are not integers.
Below we use the $ syntax rather than the # syntax so as to allow us to
freely switch between integer (finite) domains and real (continuous) domains.
If a variable without a domain is encountered in an ic constraint, it is
automatically initialised with an infinite domain -1.0Inf .. 1.0Inf.
Because of the built-in constraint propagation the generated CSP can
differ from the one actually written down in ECLiPSe. Since the constraint
propagation maintains equivalence, the resulting CSP is always equivalent
to the original one (i.e., it has the same set of solutions). As an example
involving linear constraints reconsider the predicate ordered/2 from Figure
9.3 on page 172. To generate the CSP
⟨x < y, y < z, z < u, u < v ; x ∈ [1..1000], y ∈ [1..1000],
z ∈ [1..1000], u ∈ [1..1000], v ∈ [1..1000]⟩
we used there the query
[X,Y,Z,U,V] :: 1..1000, ordered([X,Y,Z,U,V]).
Now, however, a different, though equivalent, CSP is generated:
[eclipse 11]: [X,Y,Z,U,V] :: 1..1000, ordered([X,Y,Z,U,V]).
X = X{1 .. 996}
Y = Y{2 .. 997}
Z = Z{3 .. 998}
U = U{4 .. 999}
V = V{5 .. 1000}
Delayed goals:
ic : (Y{2 .. 997} - X{1 .. 996} > 0)
ic : (Z{3 .. 998} - Y{2 .. 997} > 0)
ic : (U{4 .. 999} - Z{3 .. 998} > 0)
ic : (V{5 .. 1000} - U{4 .. 999} > 0)
Yes (0.00s cpu)
The first five lines report the new domains of the considered variables.
Sometimes this built-in constraint propagation can result in solving the
generated constraints, that is, in producing a solved CSP. In this case the
final domains of its variables are generated. Here is an example:
[eclipse 12]: X :: [5..10], Y :: [3..7], X $< Y, X $\= 6.
X = 5
Y = Y{[6, 7]}
Yes (0.00s cpu)
Here X = 5 is a shorthand for X = X{[5]}, i.e., it stands for the degenerate
variable declaration X :: [5..5]. So the answer corresponds to the
CSP ⟨ ; x ∈ {5}, y ∈ {6, 7}⟩ with no constraints. If in the solved CSP all
the variable domains are singletons, a customary Prolog answer is produced.
For instance, we have
[eclipse 13]: X :: [5..10], Y :: [3..6], X $< Y.
X = 5
Y = 6
Yes (0.00s cpu)
Note that the corresponding ECLiPSe query
X :: [5..10], Y :: [3..6], X < Y
yields a run-time error:
[eclipse 14]: X :: [5..10], Y :: [3..6], X < Y.
instantiation fault in X{5 .. 10} < Y{3 .. 6}
Abort
So the use of the constraint $</2 instead of the arithmetic comparison
predicate </2 is essential here. Further, in some circumstances the constraint
propagation can detect that the generated CSP is inconsistent, as in this
example:
No (0.01s cpu)
Also here the replacement of $> by > leads to a run-time error.
So using only constraints and constraint propagation we can solve more
queries than using Prolog, and also more queries than using the suspend
library.
Finally, let us return to the problem of generating a constraint of the form
Σ_{i=1}^n a_i·x_i ≤ v,
where n ≥ 1, all a_i s are given parameters and all x_i s and v are variables
ranging over an integer interval. We explained in Section 9.6 how to generate
it using the suspend library. Using the ic library and the foreach/2 iterator
this can also be done by means of the following non-recursive program:
sigma(As, Xs, V) :-
( foreach(A,As),
foreach(X,Xs),
foreach(Expr,Exprs)
do
Expr = A*X
),
sum(Exprs) $=< V.
We use here the built-in sum/1. In the context of the ic library the call
sum(List) returns the sum of the members of the list List.
Now, during the execution of the query
[X,Y,Z,U] :: 1..1000, sigma([1,2,3,4], [X,Y,Z,U], V)
the local variable Exprs becomes instantiated to the list [1*X, 2*Y, 3*Z,
4*U] that is subsequently passed to the constraint sum(Exprs) $=< V.
Additionally, some constraint propagation is automatically carried out and
we get the following outcome:
[eclipse 16]: [X,Y,Z,U] :: 1..1000,
sigma([1,2,3,4], [X,Y,Z,U], V).
X = X{1..1000}
Y = Y{1..1000}
Z = Z{1..1000}
U = U{1..1000}
V = V{10.0..1.0Inf}
Delayed goals:
ic : (4*U{1..1000} + 3*Z{1..1000} + 2*Y{1..1000} +
X{1..1000} - V{10.0..1.0Inf} =< 0)
send(List):-
List = [S,E,N,D,M,O,R,Y],
List :: 0..9,
diff_list(List),
1000*S + 100*E + 10*N + D
+ 1000*M + 100*O + 10*R + E
$= 10000*M + 1000*O + 100*N + 10*E + Y,
S $\= 0,
M $\= 0.
Fig. 10.2 Domains and constraints for the SEND MORE MONEY puzzle
L = [9, 5, 6, 7, 1, 0, 8, 2]
Yes (0.00s cpu, solution 1, maybe more) ? ;
No (0.00s cpu)
Recall that with the suspend library loaded instead of ic we got the final
answer after more than three minutes of CPU time.
It is important to realise that this significant domain reduction is achieved
thanks to the integrality constraints on the variables used. If we drop them
by changing the declaration
List :: 0..9
to
List :: 0.0..9.0,
very little can be inferred since there are many non-integer solutions to the
problem. Indeed, the only domain that gets reduced then is that of M which
becomes 0.0..1.102.
Finally, let us mention that the diff_list/1 predicate is available in the
ic library as the alldifferent/1 built-in.
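As an illustration (our sketch, not from the book), the constraint setup of the program from Figure 10.2 could call this built-in directly in place of diff_list/1:

```prolog
% Variant of send/1 from Figure 10.2 in which the call to
% diff_list/1 is replaced by the ic built-in alldifferent/1.
send(List) :-
    List = [S,E,N,D,M,O,R,Y],
    List :: 0..9,
    alldifferent(List),
    1000*S + 100*E + 10*N + D
    + 1000*M + 100*O + 10*R + E
    $= 10000*M + 1000*O + 100*N + 10*E + Y,
    S $\= 0,
    M $\= 0.
```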
QueenStruct = [](1, 5, 8, 6, 3, 7, 2, 4)
Yes (0.01s cpu, solution 1, maybe more) ?
Here, in contrast to the program that solves the SEND + MORE =
MONEY puzzle, the constraint propagation has no effect before the search
X = X{1.0 .. 31.606961258558218}
Y = Y{1.0 .. 31.606961258558218}
Delayed goals:
ic : (Y{1.0 .. 31.606961258558218} =:=
rsqr(_740{1.0 .. 999.0}))
ic : (X{1.0 .. 31.606961258558218} =:=
rsqr(_837{1.0 .. 999.0}))
ic : (_740{1.0 .. 999.0} + _837{1.0 .. 999.0} =:=
1000)
Yes (0.00s cpu)
The delayed goals show how each constraint was internally rewritten by
means of the auxiliary variables.
Integrality and non-linearity can be combined freely. Adding integrality
to the previous example tightens the bounds not just to the next integer
but, in this case, even further because additional constraint propagation
can take place:
Delayed goals:
ic : (Y{10 .. 30} =:= rsqr(_657{100.0 .. 900.0}))
ic : (X{10 .. 30} =:= rsqr(_754{100.0 .. 900.0}))
X = 16
Y = 10
Z = 160
Yes (0.00s cpu)
X = X{1 .. 22}
X = X{1 .. 9}
The second parameter of squash/3 specifies how near the bounds to test
for infeasibility, and the third argument specifies whether to divide up the
domain linearly (lin), or logarithmically (log), which can work better for
large domains. We shall discuss this predicate in more detail in Section 13.5.
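A hypothetical invocation might look as follows (the precision value and the constraint are our guesses, not from the book):

```prolog
% Shave the bounds of X: test near the domain bounds for
% infeasibility with precision 0.01, splitting linearly.
?- X :: 1.0..10.0, X + X*X $>= 8.0, squash([X], 0.01, lin).
```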
X = X{4 .. 6}
Y = Y{5 .. 7}
X = X{[5, 6]}
Y = Y{[6, 7]}
2 The exclude/2 command is hidden in the ECLiPSe module ic_kernel because its behaviour
is non-standard. This explains the use of the ic_kernel: prefix.
X = X{[5, 6]}
Y = Y{5 .. 7}
X = X{[5, 6]}
Y = Y{[6, 7]}
% dist(X, Y, Z) :- |X-Y| $= Z.
dist(X, Y, Z) :- X-Y $= Z.
dist(X, Y, Z) :- Y-X $= Z.
X = 4
Y = 3
Yes (0.00s cpu, solution 1, maybe more) ? ;
X = X{2 .. 4}
Y = Y{3 .. 5}
Delayed goals:
ic : (X{2 .. 4} - Y{3 .. 5} =:= -1)
Yes (0.00s cpu, solution 2)
% dist(X, Y, Z) :- |X-Y| $= Z.
dist(X, Y, Z) :- abs(X-Y) $= Z.
X = X{2 .. 4}
Y = Y{3 .. 5}
Delayed goals:
ic : (_706{-1 .. 1} - X{2 .. 4} + Y{3 .. 5} =:= 0)
ic : (1 =:= abs(_706{-1 .. 1}))
ic : (_706{-1 .. 1} =:= +- 1)
Yes (0.00s cpu)
dist(X, Y, Z) :-
$=(X-Y, Z, B1),
$=(Y-X, Z, B2),
B1+B2 $= 1.
To save introducing these extra arguments we can use the Boolean constraint
or/2. Thus we can simply write:

dist(X, Y, Z) :- (X-Y $= Z) or (Y-X $= Z).

Note that here the arguments of or/2 can be queries. The interaction with
ECLiPSe reveals that this version is actually implemented by translating it
into the previous one using reified constraints:
X = X{1 .. 4}
Y = Y{3 .. 6}
Delayed goals:
=:=(-(Y{3 .. 6}) + X{1 .. 4}, 1, _914{[0, 1]})
=:=(Y{3 .. 6} - X{1 .. 4}, 1, _1003{[0, 1]})
-(_1003{[0, 1]}) - _914{[0, 1]} #=< -1
Yes (0.00s cpu)
We stress that or/2 is different from the Prolog disjunction ;/2. In fact,
the latter creates a choice point and if we used it here instead of or/2, we
would have reintroduced the initial dist/3 constraint.
Just as or/2 above, other Boolean constraints, i.e. neg/1, and/2, and
=>/2, can be used to form logical combinations of arbitrary constraints
without introducing any choice points.
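For instance (our sketch, not from the book), =>/2 can make one constraint conditional on another without creating a choice point:

```prolog
% If X is at least 5 then Y must be strictly larger than X.
% Both arguments of =>/2 are reifiable ic constraints, so no
% choice point is created; the implication is propagated instead.
bound_link(X, Y) :-
    (X $>= 5) => (Y $> X).
```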
The additional Boolean variables used in a reified constraint can be used
after all the constraints have been set up, during search, to make the choices
that were postponed during constraint setup.
Finally, let us mention that reified constraints cannot be further reified.
For example, the following query yields a run-time error:
10.5 Summary
In this chapter we discussed various facilities present in the libraries sd and
ic.
In the case of the sd library we explained how variables ranging over finite
symbolic domains can be introduced and how constraints on these variables
can be generated. In the case of the ic library we explained Boolean
10.6 Exercises
Exercise 10.1 Suppose that the library ic is loaded. What is the ECLiPSe
output to the following queries? (Note that these are the queries from
Exercise 3.1 with is replaced by #=.)
1. X #= 7, X #= 6.
2. X #= 7, X #= X+1.
3. X #= X+1, X #= 7.
4. X #= 3, Y #= X+1.
5. Y #= X+1, X #= 3.
11 Top-down search with active constraints

11.1 Introduction
QueenStruct = [](1, 3, 5, 7, 2, 4, 6)
Yes (0.00s cpu, solution 1, maybe more) ?
In fact, for N equal to 7, after placing the first three queens the constraint
11.2 Backtrack-free search 207
queens(QueenStruct, Number) :-
dim(QueenStruct,[Number]),
constraints(QueenStruct, Number),
backtrack_free(QueenStruct).
constraints(QueenStruct, Number) :-
( for(I,1,Number),
param(QueenStruct,Number)
do
QueenStruct[I] :: 1..Number,
(for(J,1,I-1),
param(I,QueenStruct)
do
QueenStruct[I] $\= QueenStruct[J],
QueenStruct[I]-QueenStruct[J] $\= I-J,
QueenStruct[I]-QueenStruct[J] $\= J-I
)
).
backtrack_free(QueenStruct):-
( foreacharg(Col,QueenStruct) do get_min(Col,Col) ).
propagation already determines the unique positions for the remaining four
queens. This can be seen by omitting the final call to backtrack_free/1
and by executing the following query:
X4 = 7
X5 = 2
X6 = 4
X7 = 6
Yes (0.00s cpu)
backtrack_free(List) :-
( foreach(Var,List) do get_min(Var,Var) ).
List = [1, 2, 3]
X = 1
Y = 2
Z = 3
Yes (0.00s cpu)
X = X{[1, 3, 4]}
Domain = [1, 2, 3, 4]
NewDomain = [1, 3, 4]
Yes (0.00s cpu)
11.3 Shallow backtracking search 209
Once the list Domain is produced, we can enumerate all its members by
means of the member/2 predicate, as in the following query:
X = 1
Domain = [1, 3, 4]
Yes (0.00s cpu, solution 1, maybe more) ? ;
X = 3
Domain = [1, 3, 4]
Yes (0.00s cpu, solution 2, maybe more) ? ;
X = 4
Domain = [1, 3, 4]
Yes (0.00s cpu, solution 3)
shallow_backtrack(List) :-
( foreach(Var,List) do once(indomain(Var)) ).
Recall from Section 4.2 that once(Q) generates only the first solution to the
query Q.
The following two queries illustrate the difference between the backtrack-
free search and shallow backtracking:
No (0.00s cpu)
[eclipse 7]: List = [X,Y,Z], X :: 1..3, [Y,Z] :: 1..2,
alldifferent(List), shallow_backtrack(List).
List = [3, 1, 2]
X = 3
Y = 1
Z = 2
Yes (0.00s cpu)
send(List):-
List = [S,E,N,D,M,O,R,Y],
List :: 0..9,
diff_list(List),
1000*S + 100*E + 10*N + D
+ 1000*M + 100*O + 10*R + E
$= 10000*M + 1000*O + 100*N + 10*E + Y,
S $\= 0,
M $\= 0,
shallow_backtrack(List).
shallow_backtrack(List) :-
( foreach(Var,List) do once(indomain(Var)) ).
augmented by the diff_list/1 procedure of Figure 9.4.
Fig. 11.2 The SEND MORE MONEY program with shallow backtracking
L = [9, 5, 6, 7, 1, 0, 8, 2]
Yes (0.00s cpu)
11.4 Backtracking search 211
search_with_dom(List) :-
    ( fromto(List, Vars, Rest, [])
    do
      choose_var(Vars, Var, Rest),
      indomain(Var)
    ).
List = [4, 1, 2, 3]
W = 4
X = 1
Y = 2
Z = 3
Yes (0.00s cpu, solution 1, maybe more) ? ;
queens(QueenStruct, Number) :-
dim(QueenStruct,[Number]),
constraints(QueenStruct, Number),
backtrack_search(QueenStruct).
constraints(QueenStruct, Number) :-
( for(I,1,Number),
param(QueenStruct,Number)
do
QueenStruct[I] :: 1..Number,
(for(J,1,I-1),
param(I,QueenStruct)
do
QueenStruct[I] $\= QueenStruct[J],
QueenStruct[I]-QueenStruct[J] $\= I-J,
QueenStruct[I]-QueenStruct[J] $\= J-I
)
).
backtrack_search(QueenStruct):-
( foreacharg(Col,QueenStruct) do indomain(Col) ).
Expr = A*X
),
sum(Exprs) $=< V.
Σ_{i=1}^n a_i·x_i ≤ v.
Here n ≥ 1, all a_i s are given parameters and all x_i s and v are variables
ranging over an integer interval.
The appropriate labelling involves here only the variables in the list Xs
and V. Indeed, first of all we wish to call sigma/3 with the first argument
instantiated, so the variables in As do not need to be enumerated. Second, the
local variables A, X, Expr and Exprs are only used to construct the relevant
constraint and should not be enumerated. Finally, if we always call sigma/3
with the last argument V instantiated, its enumeration is not needed either.
11.5 Variable ordering heuristics 215
naive heuristic
This heuristic simply reproduces the behaviour of the built-in labeling/1.
So it labels the variables in the order they appear in the list and tries values
starting with the smallest element in the current domain and trying the rest
in increasing order.
Let us call this heuristic naive and define the first clause of search/2
accordingly:
The naive heuristic reproduces the behaviour of the program from Figure
11.4 on page 214 and finds a solution to the 8-queens problem after only 10
backtracks, and can solve the 16-queens problems with 542 backtracks, but
the number of backtracks grows very quickly with an increasing number of
queens. With a limit of 50 seconds, even the 32-queens problem times out.
constraints(QueenStruct, Number) :-
( for(I,1,Number),
param(QueenStruct,Number)
do
QueenStruct[I] :: 1..Number,
(for(J,1,I-1),
param(I,QueenStruct)
do
QueenStruct[I] $\= QueenStruct[J],
QueenStruct[I]-QueenStruct[J] $\= I-J,
QueenStruct[I]-QueenStruct[J] $\= J-I
)
).
struct_to_list(Struct, List) :-
( foreacharg(Arg,Struct),
foreach(Var,List)
do
Var = Arg
).
:- lib(lists).
middle_out(List, MOutList) :-
halve(List, FirstHalf, LastHalf),
reverse(FirstHalf, RevFirstHalf),
splice(LastHalf, RevFirstHalf, MOutList).
The call halve(List, Front, Back) splits its first argument in the middle
while the call splice(Odds, Evens, List) merges the first two lists by
interleaving their elements.
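To make the reordering concrete, here is a walkthrough of the intended effect (our illustration; it assumes halve/3 places the extra element of an odd-length list in the back half):

```prolog
% middle_out([1,2,3,4,5], MOutList) proceeds as:
%   halve([1,2,3,4,5], [1,2], [3,4,5]),
%   reverse([1,2], [2,1]),
%   splice([3,4,5], [2,1], [3,2,4,1,5])
% so the middle elements of the original list come first.
?- middle_out([1,2,3,4,5], MOutList).
```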
The middle_out heuristic simply reorders the list of queens and labels
them as before:
search(middle_out, List) :-
middle_out(List, MOutList),
labeling(MOutList).
Interestingly, the query queens(middle_out, Queens, 8) succeeds without
any backtracking, and a first solution to the 16-queens problem now
needs only 17 backtracks. However, for 32 queens the search still times out
after 50 seconds.
search(first_fail, List) :-
( fromto(List, Vars, Rest, [])
do
delete(Var, Vars, Rest, 0, first_fail),
indomain(Var)
).
On the small problem instances of the n-queens problem such as 8-queens
the impact of first_fail is unspectacular. Indeed, the search requires 10
backtracks to find the first solution to 8-queens, which is no better than the
naive heuristic. However, on larger instances its impact is dramatic. Indeed,
this heuristic enables a first solution to the 32-queens problem to be found
with only four backtracks. Even the 75-queens problem succumbs after 818
backtracks.
search(moffmo, List) :-
middle_out(List, MOutList),
( fromto(MOutList, Vars, Rest, [])
do
delete(Var, Vars, Rest, 0, first_fail),
indomain(Var,middle)
).
             naive  middle_out  first_fail  moff  moffmo
8-queens        10           0          10     0       3
16-queens      542          17           3     0       3
32-queens        -           -           4     1       7
75-queens        -           -         818   719       0
120-queens       -           -           -     -       0
Table 11.1 Number of backtracks for solving n-queens with different heuristics
(- indicates a timeout)
For scheduling and packing problems it is usually best to label each vari-
able by its smallest possible value, so using indomain(Var,min). (In fact
indomain(Var) uses this same value ordering.)
middle_out(OrigDom, MOutDom).
X = 2
Y = Y
Z = Z
Yes (0.00s cpu, solution 1, maybe more) ? ;
X = X
Y = 2
Z = Z
11.7 Constructing specific search behaviour 221
X = X
Y = Y
Z = 2
Yes (0.00s cpu, solution 3)
Finally, to make the output the same, we rotate the board at the end,
so that if in the solution the ith queen is in square j, then in the output the
jth queen is in square i.
Accordingly, the n-queens program in Figure 11.6 produces exactly the
same results in the same order as the middle_out variable ordering heuristic,
that is the queries
queens(middle_out, Queens, Number).
and
queens(rotate, Queens, Number).
produce the same solutions in the same order. Surprisingly, when we count
the number of backtracks required to find the first solution, we get the results
in Table 11.2.
            middle_out  rotate
8-queens             0       0
16-queens           17       0
24-queens          194      12
27-queens         1161      18
29-queens        25120     136
Table 11.2 Number of backtracks for solving n-queens with middle_out and
rotate heuristics
search(rotate, QList) :-
middle_out_dom(QList, MOutDom),
( foreach(Val,MOutDom),
param(QList)
do
member(Val, QList)
).
rotate(QueenStruct, Queens) :-
dim(QueenStruct,[N]),
dim(RQueens,[N]),
( foreachelem(Q,QueenStruct,[I]),
param(RQueens)
do
subscript(RQueens,[Q],I)
),
struct_to_list(RQueens, Queens).
augmented by the constraints/2 and struct_to_list/2 procedures of
Figure 11.5 and the middle_out/2 procedure from page 216.
model it means that only one (queen) variable remains which still has a
square in that row in its domain. In this case, which corresponds to the
middle_out variable ordering heuristic, constraint propagation does not
instantiate the variable. This phenomenon is illustrated in Figure 11.7. In
row 5 only the field e5 is available. Since in every row exactly one queen
11.8 The search/6 generic search predicate 223
[Figure 11.7: a chessboard (files a-h, ranks 1-8) illustrating a position in
which only square e5 remains available in row 5]
The way to achieve the same amount of constraint propagation and ultimately
the same number of backtracks for the two symmetrical heuristics
is to add another board and link the two, so that if queen i is placed on
square j on one board, then queen j is automatically placed on square i
on the other. This redundant modelling of the problem improves all the
n-queens problem benchmark results, and is a typical way to improve problem
solving efficiency.
% search(Heur, List) :-
% Find a labelling of List using
% the combination Heur of heuristics.
search(naive, List) :-
search(List,0,input_order,indomain,complete,[]).
search(middle_out, List) :-
middle_out(List, MOutList),
search(MOutList,0,input_order,indomain,complete,[]).
search(first_fail, List) :-
search(List,0,first_fail,indomain,complete,[]).
search(moff, List) :-
middle_out(List, MOutList),
search(MOutList,0,first_fail,indomain,complete,[]).
search(moffmo, List) :-
middle_out(List, MOutList),
search(MOutList,0,first_fail,
indomain_middle,complete,[]).
augmented by the middle_out/2 procedure from page 216.
So input_order and first_fail are predefined variable ordering heuristics
corresponding to naive and first_fail, while indomain and
indomain_middle are predefined value ordering heuristics corresponding to
indomain(Var,min) (i.e., indomain(Var)) and indomain(Var,middle).
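As a small self-contained illustration (our example, not from the book), the following query uses search/6 directly with these predefined heuristics:

```prolog
% Label three pairwise-different variables, selecting at each step
% the variable with the smallest domain (first_fail) and trying
% values in increasing order (indomain), with complete search.
?- [X,Y,Z] :: 1..3, alldifferent([X,Y,Z]),
   search([X,Y,Z], 0, first_fail, indomain, complete, []).
```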
In all the clauses the fifth argument, Method, is instantiated to complete
which refers to a complete search. Other, incomplete, forms of search are
search(Queens,0,first_fail,
indomain_middle,lds(2),[backtrack(B)]),
writeln(backtracks - B).
Finally, consider the second argument, Arg, of search/6 when Arg > 0.
As already mentioned in that case the first argument, List, should be a list
of compound terms. The idea is that from an object-oriented viewpoint it
makes sense to admit any list of compound terms (representing objects) as
input, as long as during search we can extract the decision variable from the
current compound term. This is done by instantiating the second argument,
Arg, to the argument of each term that contains the decision variable.
Suppose for example that we keep with each queen variable an integer
recording which column this queen belongs to. Each queen will then be
represented by the term q(Col,Q), where Col is an integer recording the
column and Q a domain variable which will represent the queen's position.
The first argument, List, of search/6 will then be a list of such terms. In
this case the decision variable is the second argument of each term, so the
second argument of search/6 is set to 2.
We mentioned already that the VarChoice and ValChoice parameters of
search/6 can be instantiated to user-defined heuristics. Let us illustrate
now the usefulness of this facility. Suppose that for performance debugging
purposes of a variable ordering heuristic for a given search method
(such as limited discrepancy search) we want to output the column and
the position whenever a queen is labelled, so that we know the order in
which the queens are being selected for labelling. To do this we introduce a
user-defined show_choice value ordering heuristic that displays the choice
currently being made, and set the fourth argument, ValChoice, of search/6
to show_choice. The resulting program is shown in Figure 11.9.
The following example interaction with this program illustrates the
behaviour of the first_fail variable ordering heuristic for lds(2), the limited
discrepancy search with up to two discrepancies.
struct_to_queen_list(Struct, QList) :-
( foreacharg(Var,Struct),
count(Col,1,_),
foreach(Term,QList)
do
Term = q(Col, Var)
).
show_choice(q(I, Var)) :-
    indomain(Var),
    writeln(col(I)-square(Var)).
augmented by the constraints/2 and struct_to_list/2 procedures of
Figure 11.5.
col(1) - square(1)
col(2) - square(3)
col(2) - square(4)
col(1) - square(2)
col(2) - square(4)
col(3) - square(6)
col(4) - square(1)
col(5) - square(3)
col(6) - square(5)
Queens = [2, 4, 6, 1, 3, 5]
Yes (0.00s cpu, solution 1, maybe more) ?
credit to our earlier choices and less to later ones. To interface to such
procedures the fourth argument, ValChoice, of search/6 allows us to thread
an extra argument, for example the initial credit Var, through the procedure.
An example interface is shown in Figure 11.10 where the ValChoice argument
of search/6 is instantiated to our_credit_search(Var, _). The resulting
search/2 procedure also uses the first_fail variable choice heuristic,
to show the orthogonality of variable and value ordering heuristics, however
complex these heuristics may be.
To complete the chapter we present experimental results in Table 11.3 that
show that each time we change the amount of credit, we also get different
backtracking behaviour. This suggests that insufficient credit is eliminating
some correct choices, and consequently suggests that our credit based search
is not a good heuristic for solving the n-queens problem.
8-queens failed 6 10 16
16-queens 7 13 25 45
32-queens 9 9 14 21
75-queens failed failed 7 6
120-queens failed failed 21 3
Table 11.3 Number of backtracks for solving n-queens with different credit limits
11.9 Summary
In this chapter we studied the support provided in the ic library for
top-down search for finite CSPs in the presence of constraint propagation. We
discussed in turn the backtrack-free search, shallow backtracking and
backtracking search. We also defined various variable and value ordering
heuristics and showed how by using them specific forms of search behaviour can
be defined.
Further, we discussed a generic search procedure search/6 that allows
us to customise the top-down search process by using predefined or user-
defined variable ordering and value ordering heuristics, and various, possibly
incomplete, forms of search.
In our presentation we introduced the following ic library built-ins:
get_min/2,
% search(our_credit_search(Var,_), List) :-
% Find a labelling of List using our own credit based
% search procedure with an initial credit of Var.
search(our_credit_search(Var,_), List) :-
search(List, 0, first_fail,
our_credit_search(Var,_), complete, []).
get_max/2,
get_domain_as_list/2,
indomain/1,
labeling/1,
delete/5,
indomain/2,
search/6.
Throughout the chapter we assumed that the domain splitting consists of
an enumeration of all the domain elements. But other natural possibilities
exist. For example, instead of such a complete domain decomposition we
could instead split the domain only into two parts, one consisting of a single
element and the other of the remaining elements. Alternatively, we could
split the domain into two, roughly equal parts.
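A sketch of the first alternative (our code, not from the book): split off the minimum element and otherwise exclude it and recurse:

```prolog
% Enumerate Var by repeatedly splitting its domain into the
% singleton {min} and the rest, instead of enumerating all
% values at once. get_min/2 is the ic built-in used earlier
% in the chapter.
split_min(Var) :-
    ( integer(Var) ->
        true
    ;
        get_min(Var, Min),
        ( Var $= Min          % either Var takes the minimum ...
        ;
          Var $> Min,         % ... or the minimum is excluded
          split_min(Var)
        )
    ).
```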
These alternatives can be programmed in ECLiPSe in a straightforward
way. We shall return to the last one in Chapter 13 on solving continuous
CSPs, as for finite domains this approach can easily be obtained by a minor
modification of the corresponding domain splitting method defined there.
11.10 Exercises
Exercise 11.1 Write a version of the queens/2 predicate from Figure 11.4
on page 214 that uses lists instead of arrays and recursion instead of itera-
tion.
Exercise 11.2 Write an ECLiPSe program that solves the problem from
Exercise 6.1 on page 108.
Exercise 11.3 Write an ECLiPSe program that solves the problem from
Exercise 6.2 on page 109.
Exercise 11.4 Write an ECLiPSe program that generates magic squares of
order 5. (For their definition see Exercise 6.3 on page 109.)
12 Optimisation with active constraints

12.1 The minimize/2 built-in

solveOpt(List):-
   declareDomains(List),
   generateConstraints_and_Cost(List, Cost),
   minimize(search(List), Cost).
The call generateConstraints_and_Cost(List, Cost) generates the constraints
and defines the cost function that is subsequently used to find the optimal
solution. The call
minimize(search(List), Cost)
realises in ECLiPSe the basic branch and bound algorithm discussed in
Section 6.6, augmented with the constraint propagation for constraints on
finite domains. It computes, using the call search(List), a solution to the
generated CSP for which the value of the cost function defined in Cost is
minimal. At the time of the call Cost has to be a variable, i.e., it should
not be instantiated.
To illustrate the internal working of minimize/2 consider the following
query:
[eclipse 1]: minimize(member(X,[5,6,5,3,4,2]), X).
Found a solution with cost 5
Found a solution with cost 3
Found a solution with cost 2
Found no solution with cost -1.0Inf .. 1
X = 2
Yes (0.00s cpu)
So ECLiPSe reports information about each newly found improved value
of the cost function. The member/2 built-in generates upon backtracking
the successive elements of the input list, so only the solutions with the cost
5, 3 and 2 are reported. Upon termination of the search the values of the
variables for the first optimal solution found are printed.
If we need to use a non-variable cost function, we introduce it by means
of a constraint, as in the following example:
[eclipse 2]: Cost #= X+1,
minimize(member(X,[5,6,5,3,4,2]), Cost).
Found a solution with cost 6
Found a solution with cost 4
Found a solution with cost 3
Found no solution with cost -1.0Inf .. 2
Cost = 3
X = 2
Yes (0.00s cpu)
find(X, Y, Z) :-
[X,Y,Z] :: [100..500],
X*X*X + Y*Y #= Z*Z*Z,
Cost #= Z-X-Y,
minimize(labeling([X,Y,Z]), Cost).
X = 110
Y = 388
Z = 114
Yes (12.34s cpu)
Of course, in general the efficiency of the program and the order in which
the values of the cost function are generated depend on the adopted search
strategy.
that models the requirement that the collection fits in the knapsack. The
cost function which we wish to minimise is:

-Σ_{i=1}^n b_i·x_i.

It models the requirement that we seek a solution to (12.1) for which the
sum is maximal.
The appropriate program is given in Figure 12.1. It uses the predicate
sigma/3 twice. Its variant was originally discussed in Subsection 10.3.2,
where we explained how the constraint (12.1) can be generated in the ic
library. The discussion at the end of Section 11.4 explains why the labeling
procedure is applied only to the list Xs.
volume value
52 100
23 60
35 70
15 15
7 15
We get then:
X1 = 0
X2 = 1
X3 = 1
X4 = 0
X5 = 0
Yes (0.01s cpu)
We formalised this problem using six variables x1, x2, x5, x10, x20, x50
ranging over the domain [0..99] that denote the appropriate amounts of the
coins, and for each i ∈ [1..99] six variables x^i_1, x^i_2, x^i_5, x^i_10,
x^i_20, x^i_50 that are used to state that the amount of i cents can be
exactly paid. The appropriate constraints are:

x^i_1 + 2·x^i_2 + 5·x^i_5 + 10·x^i_10 + 20·x^i_20 + 50·x^i_50 = i,
0 ≤ x^i_j,
x^i_j ≤ x_j

for all i ∈ [1..99] and j ∈ {1, 2, 5, 10, 20, 50}, and the cost function is

x1 + x2 + x5 + x10 + x20 + x50.
The program is a direct translation of this representation into ECLiPSe.
So we use a list Coins of six variables corresponding to the variables x1, x2,
x5, x10, x20, x50, each ranging over the domain 0..99.
Then for each price between 1 and 99 we impose the above three constraints
using the predicate price_cons/4. The variables x^i_1, x^i_2, x^i_5,
x^i_10, x^i_20, x^i_50 are kept in the list CoinsforPrice and the complete
list of these lists, with i ∈ [1..99], is kept in the list Pockets. Though we
are not interested in the values of the variables assembled in the list Pockets,
we keep them so as to be able to ensure there really is a feasible labelling
for them after setting the appropriate amounts of the coins to variables in
the list Coins.
Finally, we use sum(Coins) as the cost function. The program is given in
Figure 12.2.
This optimisation problem is easy to solve. The optimal solution of 8 is
found very quickly:
[eclipse 5]: solve(Coins, Min).
Found a solution with cost 8
Found no solution with cost 1.0 .. 7.0
Coins = [1, 2, 1, 1, 2, 1]
Min = 8
Yes (0.08s cpu)
solve(Coins, Min) :-
init_vars(Values, Coins),
coin_cons(Values, Coins, Pockets),
Min #= sum(Coins),
minimize((labeling(Coins), check(Pockets)), Min).
init_vars(Values, Coins) :-
Values = [1,2,5,10,20,50],
length(Coins, 6),
Coins :: 0..99.
check(Pockets) :-
( foreach(CoinsforPrice,Pockets)
do
once(labeling(CoinsforPrice))
).
of these six coins. Can we then design a set of six coins for which we can
solve the above problem with fewer than eight coins? This is the currency
design problem introduced in Subsection 6.5.2. Computationally this is a
much more challenging problem. The reason is that the constraints now
become non-linear. Indeed, recall that each previous constraint

x^i_1 + 2·x^i_2 + 5·x^i_5 + 10·x^i_10 + 20·x^i_20 + 50·x^i_50 = i

now becomes

v1·x^i_1 + v2·x^i_2 + v3·x^i_3 + v4·x^i_4 + v5·x^i_5 + v6·x^i_6 = i,

where v1, ..., v6 are the values of the coins and for each i ∈ [1..99],
x^i_1, ..., x^i_6 are the numbers of these coins that allow one to pay the
amount i.
On the other hand, from the programming point of view the solution is a
minor modification of the previous program. Indeed, to solve this problem
instead of the list of coin values
Values = [1,2,5,10,20,50]
we simply use now the list
Values = [V1, V2, V3, V4, V5, V6]
of six constraint variables such that 0 #< V1 #< ... #< V6 #< 100, that
represent the values of the coins for which an optimal solution is sought.
Unfortunately, even though the resulting program finds a solution with
eight coins relatively quickly, the computation concerned with the proof of
optimality takes an excessive amount of time.
A remedy consists of using implied constraints that make a big difference
to performance. The idea is that if we have coins with values V1 and V2,
where V1 < V2, then it is never necessary to have enough coins of value V1
to equal or exceed the value V2. Indeed, if we had that many V1-valued coins,
we could pay the amount that uses them using a V2-valued coin instead.
The reason is that in any solution there are always enough coins to make
up any amount up to V1 - 1.
Additionally, for the coin with the largest value we can impose the con-
straint that we only need to pay amounts smaller than 100. The resulting
program is presented in Figure 12.3. The additional constraints are gener-
ated using the predicate clever_cons/2. The increasing/1 predicate is a
simplified version of the ordered/1 predicate introduced in Figure 9.3 on
page 172, in which we do not deal with the case of the empty list.
The following interaction with ECLiPSe shows that no solution with seven
coins exists.
238 Optimisation with active constraints
design_currency(Values, Coins) :-
    init_vars(Values, Coins),
    coin_cons(Values, Coins, Pockets),
    clever_cons(Values, Coins),
    Min #= sum(Coins),
    minimize((labeling(Values), labeling(Coins),
              check(Pockets)), Min).

init_vars(Values, Coins) :-
    length(Values, 6),
    Values :: 1..99,
    increasing(Values),
    length(Coins, 6),
    Coins :: 0..99.

increasing(List) :-
    ( fromto(List,[This,Next | Rest],[Next | Rest],[_])
    do
      This #< Next
    ).

clever_cons(Values, Coins) :-
    ( fromto(Values,[V1 | NV],NV,[]),
      fromto(Coins,[N1 | NN],NN,[])
    do
      ( NV = [V2 | _] ->
          N1*V1 #< V2
      ;
          N1*V1 #< 100
      )
    ).
augmented by the procedures coin cons/3 and check/1 of Figure 12.2.
So using the CURRENCY program we showed that no six coin system exists.
The final question is of course whether a seven coin system exists. To
answer it, in view of the above transformation, it suffices to change in the
CURRENCY program the parameter 6 to 7, add the constraint Min #= 7, and
run the query design_currency(Values, [1,1,1,1,1,1,1]).
Note also that in this case the call of clever_cons/2 is unneeded. This
yields after 5 seconds a solution
_ 2 1 _ _ 7 _ _ _
_ _ _ _ _ 8 _ _ _
_ _ _ _ _ _ 9 3 _
_ 8 _ 9 _ _ 6 _ 1
_ _ _ _ 6 _ _ 2 _
_ _ _ _ _ 5 _ _ 3
_ 9 4 _ _ _ _ 5 6
7 _ _ 3 _ _ 8 _ _
_ _ _ _ _ 2 _ _ _
[](
[](_, 2, 1, _, _, 7, _, _, _),
[](_, _, _, _, _, 8, _, _, _),
[](_, _, _, _, _, _, 9, 3, _),
[](_, 8, _, 9, _, _, 6, _, 1),
[](_, _, _, _, 6, _, _, 2, _),
[](_, _, _, _, _, 5, _, _, 3),
[](_, 9, 4, _, _, _, _, 5, 6),
[](7, _, _, 3, _, _, 8, _, _),
[](_, _, _, _, _, 2, _, _, _))
12.5 Generating Sudoku puzzles 241
X = X{[0, 1]}
Y = Y{[0, 1]}
Z = Z{[0, 1]}
No (0.00s cpu)
The program is given in Figure 12.5. It uses two built-ins that were not
discussed so far: flatten/2 and term_variables/2. The former transforms
a list of lists (in general, a list of lists of ... lists) into a list. So the query
assigns to the variable SubSquare the list of elements filling the subsquare
Board[I..I+2,J..J+2] of Board. In turn, the term_variables/2 built-in
computes the list of variables that appear in a term. It is similar to the
vars/2 predicate defined in Figure 5.2 on page 83 (see also Exercise 5.2 on
page 85). Finally, the multifor([I,J],1,9,3) shorthand stands for the
iteration of I and of J from 1 to 9 in increments of 3, i.e., through the
values 1, 4, 7. The solution to the puzzle from Figure 12.4 is easily found:
solve(Board) :-
    sudoku(Board),
    print_board(Board).

sudoku(Board) :-
    constraints(Board),
    search(Board).

constraints(Board) :-
    dim(Board,[9,9]),
    Board[1..9,1..9] :: [1..9],
    ( for(I,1,9), param(Board)
    do
      Row is Board[I,1..9],
      alldifferent(Row),
      Col is Board[1..9,I],
      alldifferent(Col)
    ),
    ( multifor([I,J],1,9,3),
      param(Board)
    do
      S is Board[I..I+2,J..J+2],
      flatten(S, SubSquare),
      alldifferent(SubSquare)
    ).

search(Board) :-
    term_variables(Board, Vars),
    labeling(Vars).

print_board(Board) :-
    ( foreachelem(El,Board,[_,J])
    do
      ( J =:= 1 -> nl ; true ),
      write(' '),
      ( var(El) -> write('_') ; write(El) )
    ).
9 2 1 5 3 7 4 6 8
4 3 6 2 9 8 1 7 5
8 5 7 4 1 6 9 3 2
2 8 5 9 7 3 6 4 1
1 7 3 8 6 4 5 2 9
6 4 9 1 2 5 7 8 3
3 9 4 7 8 1 2 5 6
7 6 2 3 5 9 8 1 4
5 1 8 6 4 2 3 9 7
Yes (0.20s cpu)
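The printed solution is easy to check mechanically. The following Python sketch (not part of the book's program) verifies that every row, column and 3x3 subsquare of the board above is a permutation of 1..9:

```python
# the solved board exactly as printed above
board = [
    [9,2,1,5,3,7,4,6,8],
    [4,3,6,2,9,8,1,7,5],
    [8,5,7,4,1,6,9,3,2],
    [2,8,5,9,7,3,6,4,1],
    [1,7,3,8,6,4,5,2,9],
    [6,4,9,1,2,5,7,8,3],
    [3,9,4,7,8,1,2,5,6],
    [7,6,2,3,5,9,8,1,4],
    [5,1,8,6,4,2,3,9,7],
]

digits = set(range(1, 10))
rows_ok = all(set(row) == digits for row in board)
cols_ok = all({row[j] for row in board} == digits for j in range(9))
boxes_ok = all(
    {board[i + di][j + dj] for di in range(3) for dj in range(3)} == digits
    for i in range(0, 9, 3) for j in range(0, 9, 3)
)
assert rows_ok and cols_ok and boxes_ok
```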
sudoku_random(Board) :-
    constraints(Board),
    term_variables(Board, Vars),
    ( foreach(X,Vars) do indomain(X,random) ).

search(Board, Min) :-
    board_to_list(Board, List),
    drop_until(List, Final, Min),
    list_to_board(Final, FinalBoard),
    print_board(FinalBoard), nl.
until a Sudoku puzzle (i.e., a puzzle with a unique solution) is created. The
generated puzzle is printed. Subsequently, using the branch and bound
search a Sudoku puzzle with the minimum number of filled fields is sought.
The program is given in Figure 12.7.
12.5 Generating Sudoku puzzles 245
generate :-
    constraints(Board),
    minimize(search2(Board,Min), Min).

search2(Board, Min) :-
    board_to_list(Board, List),
    add_until(Board, List, Final, Min),
    list_to_board(Final, FinalBoard),
    print_board(FinalBoard), nl.

ct(Q, N) :-
    ( not twice(Q) -> getval(x,N) ; N = multiple ).

augmented by the procedures constraints/1, search/1 and
print_board/1 of Figure 12.5; board_to_list/2, list_to_board/2
and remove_random/3 given below in Figure 12.8;
and twice/1 of Figure 8.16.
The crucial procedure, add_until/4, is now more involved. Its first
argument, Board, represents the constrained Sudoku board. The second
argument, List, is a list of the remaining unfilled squares in the board. The
third argument, Final, as before represents the contents of the final solution
in a list form, and the fourth argument, Min, is the number of entries in this
final solution.
In contrast to the previous program the search for the minimum uses a
constraint, Min #>= Ct, to prune the search tree, which is needed to curtail
fruitless searches for solutions that have too many entries.
The call remove_random(TFree, [I,J]-Val, NFree) removes an(other)
entry from the board. The test if Val is a variable avoids explicitly selecting
entries whose values are already entailed by constraint propagation from the
previously chosen entries. In turn, the call ct(search(Board), N) is used
to complete the board after instantiating the next value. Since the con-
straints were posted at the start, this program uses the search/1 procedure
instead of the more costly sudoku/1.
Also ct/2 is used instead of one/1, because this enables the program to
distinguish the case where the current board has multiple solutions from the
case where there are none. (Recall that x is the non-logical variable used in
the procedure one/2 defined in Figure 8.16.) Finally, the statement
( N = 1 -> Cont = stop ;
N = multiple -> Cont = ok )
is used to avoid extending a partial assignment that is not a partial solution.
(Note that if N = 0 a failure arises.) If there are no solutions, the program
immediately backtracks and finds a different value for Val.
The program can be easily modified to one in which one first randomly
generates a solution to a Sudoku puzzle and then randomly copies its fields
into the Board array until a Sudoku puzzle is created.
We conclude by listing in Figure 12.8 the supporting predicates needed
for the two previous programs.
remove_nth(1, [H | T], H, T) :- !.
remove_nth(N, [H | T], Val, [H | Tail]) :-
    N1 is N-1,
    remove_nth(N1, T, Val, Tail).

board_to_list(Board, List) :-
    ( foreachelem(Val,Board,[I,J]),
      foreach([I,J]-Val,List)
    do
      true
    ).

list_to_board(List, Board) :-
    dim(Board,[9,9]),
    ( foreach([I,J]-Val,List),
      param(Board)
    do
      subscript(Board,[I,J],Val)
    ).
Benchmark optimisation problems are typically of this kind. For most opti-
misation problems, however, it is necessary to trawl through a sequence of
better and better solutions en route to the optimum. Here is a trace of a
scheduling program:
Cost: 863 in time: 0.110000000000582
... [20 intermediate solutions found in quick succession]
Cost: 784 in time: 2.04300000000148
... [10 more solutions - the time for each is growing]
Cost: 754 in time: 7.65100000000166
Cost: 752 in time: 8.75300000000061
bb_options{delta:3}).
Found a solution with cost 5
Found a solution with cost 2
Found no solution with cost -1.0Inf .. -1
X = 2
Yes (0.00s cpu)
Note that in the search the solution with cost 3 was skipped. Of course, the
branch and bound search with such an option can cease to be complete:
X = 5
Yes (0.00s cpu)
Still we can conclude that the cost of the generated solution is at most delta
from the optimum.
A more important option is factor that governs how much better the
next solution should be than the last. The improvement factor is a number
between 0 and 1, which relates the improvement to the current cost upper
bound and lower bound.
Setting the factor to 1 puts the new upper bound at the last found
best cost; this is the default used by the standard minimize/2 predicate.
Setting the factor to 0.01 sets the new upper bound almost to the cost lower
bound (factor 0.0 is not accepted by bb_min/3): typically it is easy to prove
there is no solution with a cost near this lower bound, and the optimisation
terminates quickly. More interesting is a setting around 0.9, which looks for
a 10% improvement at each step. In the case of our scheduling example this
factor gives the following trace:
This represents a tremendous saving in the time taken to improve, from the
same initial solution as before, 863, to the optimal solution of 677. Notice
that the cost lower bound is 349. So after finding an initial solution of 863,
the new cost bound is set to 349 + 0.9 * (863 - 349) = 811.6. The next
bounds are successively 764.8, 722.5, 679.3, 644.2, the last one being below
the optimum and therefore yielding no solution.
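The arithmetic behind the factor option can be sketched as follows (a Python illustration, not ECLiPSe code): the new cost upper bound lies at the given fraction of the way from the cost lower bound up to the best cost found so far.

```python
def next_bound(lower, best, factor):
    """New cost upper bound after finding a solution with cost `best`."""
    return lower + factor * (best - lower)

# the scheduling example above: lower bound 349, first solution 863,
# factor 0.9 gives the quoted next bound 811.6
b = next_bound(349, 863, 0.9)
assert abs(b - 811.6) < 1e-9
```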
In this example the optimum solution is found, but in fact this search only
proves there is no solution better than 644: had there been further solutions
between 677 and 644 they would not have been found by this procedure.
What we get is a guarantee that the best solution 677 is within 10%, or
more precisely within 0.1 * (677 - 349), of the optimum.
The benefit is that not only is the time taken for the improvement process
much shorter (it is reduced from 58 seconds to around 3 seconds), but so is
the time taken for the proof of optimality: it is reduced from 7 seconds to
around 0.2 seconds. The reason is that it is much easier to prove there is
no solution better than 644.2 than to prove there is no solution better than
677.
As another example let us modify the CURRENCY program from Figure 12.3
on page 238 so that we call for an improvement by a factor of 2/3. To this
end we replace the call
by
Values = [1, 2, 3, 4, 5, 6]
Coins = [1, 1, 0, 1, 0, 16]
Yes (8.78s cpu)
This confirms that, just like the delta option, the factor option can lead
to incompleteness.
Other important options for bb_min/3 control the strategy used in the
branch and bound search and are in this sense more fundamental than delta
and factor. These strategy options are:
strategy:continue,
strategy:restart,
strategy:dichotomic.
The continue strategy is the default used by minimize/2: when a new
optimum is found search continues from the same point as before. The
restart strategy restarts the search whenever a new optimum is found.
The consequence can be that parts of the search tree that have already been
explored previously are explored again. However, because of the dynamic
variable choice heuristics, restarting can be more efficient than continuing,
if the tightened cost focusses the heuristic on the right variables.
As an example consider the following quotation from Hardy [1992]:
I had ridden in taxi cab number 1729 and remarked that the number seemed to
me rather a dull one, and that I hoped it was not an unfavorable omen. No, he
[Srinivasa Ramanujan] replied, it is a very interesting number; it is the smallest
number expressible as the sum of two cubes in two different ways.
hardy([X1,X2,Y1,Y2], Z) :-
    X1 #> 0, X2 #> 0,
    Y1 #> 0, Y2 #> 0,
    X1 #\= Y1, X1 #\= Y2,
    X1^3 + X2^3 #= Z,
    Y1^3 + Y2^3 #= Z,
    bb_min(labeling([X1,X2,Y1,Y2]),
           Z,
           bb_options{strategy:restart}).

X1 = 1
X2 = 12
Y1 = 9
Y2 = 10
Z = 1729
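The anecdote is easy to confirm by brute force. This Python sketch (not part of the book) checks both the minimality of 1729 and the two decompositions found by hardy/2:

```python
from itertools import combinations_with_replacement

# map each sum of two positive cubes (a <= b) to its decompositions
sums = {}
for a, b in combinations_with_replacement(range(1, 20), 2):
    sums.setdefault(a**3 + b**3, []).append((a, b))

# smallest number with at least two different decompositions
taxicab = min(n for n, ways in sums.items() if len(ways) >= 2)
assert taxicab == 1729
assert sorted(sums[1729]) == [(1, 12), (9, 10)]
```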
Other options are available and so are other versions of bb min with more
arguments that give the programmer even more control over the computa-
tions that deal with the optimisation.
Even if m is fixed, say 100, it is not clear what the number of variables of
the corresponding CSP is. In other words, it is not clear how to set up a CSP
for which we are to find an optimal solution. In this case the most natural
solution consists of systematically picking a candidate number n in [2..m]
and of trying to find an appropriate combination of n different cubes. This
boils down to solving a CSP with n variables. If no such combination exists,
the candidate number is incremented and the procedure is repeated.
We can limit the variable domains by noting that

x_1^3 + x_2^3 + ... + x_n^3 = M

implies

x_n <= floor(M^(1/3)) and n <= floor(M^(1/3)),

where the latter holds since n <= x_n. To keep things simple, in the
program presented in Figure 12.10 we actually use floor(sqrt(M)) (as the
expression fix(round(sqrt(M)))) as the bound, since the computation of
floor(M^(1/3)) would have to be programmed explicitly.
Then the following interaction with ECLiPSe shows that a solution exists
for M equal to 1 000 000, in contrast to M equal to 1000:
cubes(M, Qs) :-
    K is fix(round(sqrt(M))),
    N :: [2..K],
    indomain(N),
    length(Qs, N),
    Qs :: [1..K],
    increasing(Qs),
    ( foreach(Q,Qs),
      foreach(Expr,Exprs)
    do
      Expr = Q*Q*Q
    ),
    sum(Exprs) #= M,
    labeling(Qs), !.
augmented by the procedure increasing/1 of Figure 12.3.
No (0.07s cpu)
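The two outcomes above can be double-checked outside ECLiPSe. This Python sketch (illustrative, not the book's program) confirms that 1000 is not a sum of two or more distinct positive cubes, while for 1 000 000 even a greedy descent from the largest cube happens to find a decomposition:

```python
from itertools import combinations

def distinct_cube_sum(m):
    """Return >= 2 distinct positive cubes summing to m, or None."""
    bases = [k for k in range(1, m) if k**3 < m]
    for r in range(2, len(bases) + 1):
        for combo in combinations(bases, r):
            if sum(k**3 for k in combo) == m:
                return combo
    return None

assert distinct_cube_sum(1000) is None  # only 9 candidate cubes to try

# For 1 000 000 exhaustive search is too slow, but greedily taking the
# largest cube that still fits succeeds:
rem, picked = 1_000_000, []
for k in range(99, 0, -1):
    if k**3 <= rem:
        rem -= k**3
        picked.append(k)
assert rem == 0  # 99^3+30^3+13^3+7^3+5^3+3^3+2^3+1^3 = 1 000 000
```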
12.8 Summary
In this chapter we explained how finite COPs can be solved using ECLiPSe.
To this end we studied the facilities provided by the branch and bound
library, notably the minimize/2 built-in. It realises the basic form of the
branch and bound search. We illustrated the use of minimize/2 by con-
sidering the knapsack problem, the coins problem and the currency design
problem originally introduced in Section 6.5. Also we showed how Sudoku
puzzles can be generated in a simple way by combining constraint propaga-
tion with the branch and bound search.
Further, we introduced the bb_min/3 built-in, of which minimize/2 is
a special case, that allows us to program various versions of the branch
and bound search by selecting appropriate options. We also clarified that
some finite domain COPs cannot be solved by means of the branch and
bound search when the number of variables can be unknown. In that case
the search for an optimal solution can be programmed using the customary
backtracking combined with the constraint propagation.
12.9 Exercises
13
Constraints on reals
13.1 Introduction
13.2 Three classes of problems 257
compost_1(W, L, V) :-
    W :: [50, 100, 200],
    L :: 2..5,
    V $>= 2.0,
    V $= (W/100)*(L^2/(4*pi)),
    minimize(labeling([W,L]),V).
Notice that in ECLiPSe pi/0 is a built-in constant which returns the value
of π.
We get then:
W = 200
L = 4
V = 2.5464790894703251__2.546479089470326
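The optimum reported above is a one-line computation to recheck (a Python sketch, not ECLiPSe): W = 200 and L = 4 give V = (W/100)*(L^2/(4*pi)) = 8/pi, which lands inside the bracketed interval printed by ECLiPSe.

```python
import math

# V = (W/100) * (L^2 / (4*pi)) with W = 200, L = 4
v = (200 / 100) * (4**2 / (4 * math.pi))

# the value lies inside ECLiPSe's bracketed answer interval
assert 2.5464790894703251 <= v <= 2.546479089470326
assert v >= 2.0  # the V $>= 2.0 constraint is satisfied
```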
compost_2(W, L) :-
    W :: 50.0..200.0,
    L :: 2.0..5.0,
    V $= 2.0,
    V $= (W/100)*(L^2/(4*pi)).
In this case ECLiPSe returns an interval of possible values for W and L:
W = W{100.53096491487334 .. 200.0}
L = L{3.5449077018110313 .. 5.0}
So, as in the discrete case, ECLiPSe returns a number of delayed goals.
This answer means that there is no width below 100.53096491487334 and no
length below 3.5449077018110313 from which we can build a 2 cubic metre
compost heap. However, it is not possible to list all the pairs of values for
W and L within the above intervals that yield a volume of 2 cubic metres.
Instead, the delayed goals record additional constraints on the variables W
and L that must be satisfied in any solution.
In this example every value for W in the final interval has a compatible
value for L which together make up a solution to the problem. In general, it
is not the case that all values in the final intervals form part of a solution.
A trivial example is:
X = X{-10.0 .. 10.0}
Y = Y{-10.0 .. 10.0}
Clearly the delayed goals on X and Y are unsatisfiable for any value of Y in
the interval -0.1 .. 0.1. We shall see in Section 13.4 that, to elicit
compatible solutions for X and Y, this example requires some additional
search and constraint propagation.
13.3 Constraint propagation 261
incons(X, Y, Diff) :-
    [X,Y] :: 0.0..10.0,
    X $>= Y + Diff,
    Y $>= X + Diff.
If Diff > 0, then the query incons(X, Y, Diff) should fail even without
any information about the domains of X and Y, since X $>= Y + Diff and
Y $>= X + Diff together imply X $>= X + 2*Diff. However, in the ic library
each constraint is considered separately and the inconsistency is achieved
only by the repeated shaving of the domain bounds of X and Y by the amount
Diff. We can try smaller and smaller values of Diff and see how the time
needed to detect the inconsistency grows. (The term 1e-n represents in
ECLiPSe the number 10^(-n).)
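The shaving behaviour described above can be modelled in a few lines (an illustrative Python sketch, not ECLiPSe's actual implementation): propagating X >= Y + Diff raises X's lower bound, Y >= X + Diff then raises Y's, so the bounds creep up by Diff per step until they leave 0.0..10.0, and the number of steps grows like 1/Diff.

```python
def rounds_to_fail(d, lo=0.0, hi=10.0):
    """Count propagation rounds until the bounds become inconsistent."""
    x_lo = y_lo = lo
    rounds = 0
    while x_lo <= hi and y_lo <= hi:
        x_lo = y_lo + d   # propagate X >= Y + d
        y_lo = x_lo + d   # propagate Y >= X + d
        rounds += 1
    return rounds

# halving d roughly doubles the work, matching the observed slowdown
assert rounds_to_fail(1.0) < rounds_to_fail(0.1) < rounds_to_fail(0.01)
```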
X = X{0.0 .. 10.0}
Y = Y{0.0 .. 10.0}
Delayed goals:
ic : (Y{0.0 .. 10.0} - X{0.0 .. 10.0} =< -1e-8)
ic : (-(Y{0.0 .. 10.0}) + X{0.0 .. 10.0} =< -1e-8)
Yes (0.00s cpu)
X = X{0.0 .. 10.0}
Y = Y{0.0 .. 10.0}
Delayed goals:
ic : (Y{0.0 .. 10.0} - X{0.0 .. 10.0} =< -0.0001)
ic : (-(Y{0.0 .. 10.0}) + X{0.0 .. 10.0} =< -0.0001)
Yes (0.00s cpu)
X = X{-1.0000000000000004 .. 2.0000000000000004}
Y = Y{-1.0000000000000004 .. 2.0000000000000004}
X = X{1.8228756488546369 .. 1.8228756603552694}
Y = Y{-0.82287567032498 .. -0.82287564484820042}
No (0.00s cpu)
So the first subdomain, with X >= 1.5, tightens the lower bound on X and
enables the constraint propagation to reduce the domain of X to the interval
1.8228756488546369 .. 1.8228756603552694
and the domain of Y to the interval
-0.82287567032498 .. -0.82287564484820042.
The second subdomain, with X =< 1.5, tightens the upper bound on X
and enables the constraint propagation to detect that there is no solution in
this domain for X. Supplementing this computation by a reasoning that
shows that a unique solution exists, we can conclude that this solution lies
in the rectangle determined by the above two intervals. So even though we
did not compute the precise solution, we could determine its location with
a very high degree of precision.
Domain splitting can also be effective for problems with infinitely many
solutions, as illustrated by the example X = 1/Y introduced in Section 13.2.
In this case we need to split the domain into three subdomains: X =< -eps,
-eps =< X =< eps and X >= eps, for some small eps. With this splitting we
can achieve a precise cover of the solution set:
X = X{-10.0 .. -0.099999999999999992}
Y = Y{-10.0 .. -0.099999999999999992}
X = X{0.099999999999999992 .. 10.0}
Y = Y{0.099999999999999992 .. 10.0}
13.5 Search
For continuous variables a general search method is to associate with each
variable a domain in the form of an interval of possible values, and at each
search node to narrow one interval. Complete search can be maintained by
exploring a different subinterval at each subbranch, but ensuring that the
union of the subintervals covers the input interval at the node.
Because the domains are continuous, this search tree is infinitely deep:
you can go on dividing an interval forever without reducing it to a point.
Consequently a precision has to be specified which is the maximum allowed
width of any interval on completion of the search. During the search, when
the interval associated with a variable is smaller than this precision, this
variable is never selected again. Search stops when all the variables have
intervals smaller than the precision.
If the search fails after exploring all the alternative subintervals at each
node, then (assuming the subintervals cover the input interval at the node)
this represents a proof that the problem has no solution. In other words,
the search failure for continuous variables is sound.
On the other hand if the search stops without failure, this is no guaran-
tee that there really is a solution within the final intervals of the variables
(unless they are all reduced by constraint propagation to a single point). A
similar phenomenon has already been illustrated in Section 13.3 using the
incons/3 predicate, where constraint propagation stops even though there
is no solution satisfying the constraints.
Accordingly, each answer to a problem on reals is in fact only a conditional
solution. It has two components:

- A real interval for each variable. Each interval is smaller than the given
  precision.
- A set of constraints in the form of delayed goals. These constraints are
  neither solved nor unsatisfiable when considered on the final real intervals.
we can represent this interval using the syntax 1e-6 .. 1e6, and so the
problem is set up as follows:
abel(X) :-
    X :: -1e6 .. 1e6,
    2*X^5 - 5*X^4 + 5 $= 0.
X = X{-0.92435801492603964 .. -0.92435801492602809}
X = X{1.1711620684831605 .. 2.4280727927074546}
The delayed goals specify the additional conditions on X under which the
query is satisfied. To separate out all three solutions we use a smaller
precision, such as 0.001:
[eclipse 14]: abel(X), locate([X], 0.001).
X = X{-0.92435801492603964 .. -0.92435801492602809}
X = X{1.1711620684831605 .. 1.1711620684832094}
There are 5 delayed goals.
Yes (0.00s cpu, solution 2, maybe more) ;
X = X{2.4280727927071966 .. 2.4280727927074546}
There are 5 delayed goals.
Yes (0.01s cpu, solution 3)
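The three conditional solutions can be cross-checked numerically (a Python sketch, not part of the book): the polynomial from abel/1 changes sign inside each located interval, so each interval does contain a root. The intervals are widened by a tiny eps to keep the floating point sign test robust.

```python
def p(x):
    return 2 * x**5 - 5 * x**4 + 5

# the three intervals reported by locate/2 above
intervals = [(-0.92435801492603964, -0.92435801492602809),
             (1.1711620684831605, 1.1711620684832094),
             (2.4280727927071966, 2.4280727927074546)]

eps = 1e-9  # slight widening; the roots are separated by much more
assert all(p(lo - eps) * p(hi + eps) < 0 for lo, hi in intervals)
```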
13.7 Shaving
While constraint propagation and domain splitting work fine on many prob-
lems, there are problems where even for very narrow intervals constraint
propagation is unable to recognise infeasibility. The consequence is that too
many alternative conditional solutions are returned.
The following predicate encodes the so-called Reimer's system:

reimer(X, Y, Z) :-
    [X,Y,Z] :: -1.0 .. 1.0,
    X^2 - Y^2 + Z^2 $= 0.5,
    X^3 - Y^3 + Z^3 $= 0.5,
    X^4 - Y^4 + Z^4 $= 0.5.
We will search for the solutions using the ECLiPSe query

reimer(X, Y, Z), locate([X,Y,Z], Prec).

for different levels of precision Prec. The number of conditional solutions
returned for different levels of precision (i.e. different final interval
widths) is presented in Table 13.1.
For the wide intervals it is clear that several solutions are being captured
within a single interval, and decreasing the interval width increases the num-
ber of distinct conditional solutions. At a width of around 1e-10 the intervals
are narrow enough to detect that certain conditional solutions are infeasible,
Precision   Solutions   Time (s)
1.0          6          0.09
0.01         8          2.08
1e-5        13          5.41
1e-8        11          7.23
1e-10       10          7.80
1e-12       11          8.25
1e-14       24          8.61
To demonstrate its effect we will tackle Reimer's problem again, but this
time applying shaving to the intervals returned by the search routine. We
will again try different precisions for locate/2, and for squash/3 we will use
an interval width of 1e-15. For the third parameter we will always choose
the value lin. (Recall that the arithmetic mean of a and b is (a + b)/2,
while the geometric mean of a and b is sqrt(a*b).) The query is thus:
Precision   Solutions   Time (s)
1.0          4          6.34
0.01         4          7.56
1e-5         4          8.58
1e-10        4          8.66
1e-14       16          8.44
Clearly there are at most four different solutions. The shaving procedure
removes infeasible intervals. Note, however, that very narrow intervals very
close to a real solution still cannot be removed by squash/3. In particular,
if we set the precision to 1e-14, then we get 16 conditional solutions.
Shaving can be used as a (polynomial) alternative to the (exponential)
interval splitting method of narrowing intervals. Because it does not in-
troduce extra choice points it applies well to problems of the second class
introduced in Section 13.2, that is problems which have a finite search space
but involve mathematical constraints.
As another example of the use of shaving we shall consider a program that
has been often used to illustrate solving constraints on continuous variables.
The program specifies a financial application, concerning how to pay off a
mortgage, and is given in Figure 13.4.
It expresses the relationship between the four variables:
Loan (the loan),
Payment (the fixed monthly amount paid off),
Interest (fixed, but compounded, monthly interest rate),
Time (the time in months taken to pay off a loan).
Three of the variables are continuous, Loan, Payment and Interest, and
the other, Time, is integer valued.
This program can be used in many ways, for example to compute the
duration of a given loan. To answer this question we need to ask for the
amount of time needed to pay off at least the borrowed amount, as in the
following query:
X = 50019.564804291353__50019.56480429232
T = 126
Yes (0.06s cpu)
Note that the query mortgage(50000, 700, 0.01, T) simply fails, since
the payments of 700 do not add up to 50000 precisely.
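The loan dynamics the program encodes can be sketched directly (a Python illustration; the book's mortgage/4 program itself is not shown here): each month the balance accrues the monthly interest and the fixed payment is deducted.

```python
def months_to_pay(loan, payment, interest):
    """Months until a loan is paid off by fixed monthly payments."""
    t = 0
    while loan > 0:
        loan = loan * (1 + interest) - payment
        t += 1
    return t

# the answer above: a loan of about 50019.56 at 1% monthly interest is
# cleared by exactly 126 payments of 700
assert months_to_pay(50019.5648, 700, 0.01) == 126

# a 50000 loan is also cleared within 126 months, but the final payment is
# smaller than 700, which is why the exact query above fails
assert months_to_pay(50000, 700, 0.01) == 126
```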
It is more difficult to compute the required regular payment from a fixed
loan, interest rate and payoff time. At least we know the payment is greater
than 0 and less than the total loan, so we can provide an initial interval for
it:
No (0.02s cpu)
No (0.01s cpu)
No (3.72s cpu)
No (14.48s cpu)
X = X{3.9941057914745426 .. 4.0587046812659331}
Y = Y{-0.0012435228262296953 .. 0.0044940668232825009}
Min = 1.9412953187340669
solution which is a whole unit (i.e., 1.0) better than the previous one. So we
can only conclude that no minimum below 0.94129531873406691 exists.
For continuous problems it therefore makes sense to override the default
and specify a minimum improvement sought at each step. The smaller
the improvement the nearer the procedure can guarantee to come to the
actual optimum (if it exists). However, smaller improvements can result in
increased solving times.
We conclude this chapter by encoding a modified optimisation procedure
that gives the programmer control over the required improvement at each
step. The appropriate optimisation predicate uses the bb_min/3 built-in:

opt_2(Query, Expr, Improvement, Min) :-
    OptExpr $= eval(Expr),
    bb_min(( Query, get_min(OptExpr,Min) ),
           Min,
           bb_options{delta:Improvement}).
By choosing an improvement of first 0.5 and then 0.1 we can illustrate the
advantage of smaller improvements:3
[eclipse 21]: cons(X, Y),
opt_2(locate([X, Y], 0.01), (6-X), 0.5, Min).
X = X{4.6891966915344225 .. 4.7650376722448371}
Y = Y{-0.0012435228262296953 .. 0.0044940668232825009}
Min = 1.2349623277551629
3 In this problem both queries happen to take a similar time: this is because the branch and
bound is not causing any pruning of the search space. Sometimes a branch and bound search
with more intermediate solutions takes an exponentially longer time.
X = X{4.8421052712298724 .. 4.9204193272676848}
Y = Y{-0.0012435228262296953 .. 0.0044940668232825009}
Min = 1.0795806727323152
13.9 Summary
In this chapter we discussed the problem of solving constraints on reals. To
this end we considered in turn:

- three natural classes of problems involving continuous variables,
- constraint propagation on continuous variables,
- search,
- optimisation over continuous cost functions.

We also indicated the importance of a specialised constraint propagation
technique, called shaving.
In the process we introduced the following built-ins:

- the constant pi, which represents π in ECLiPSe,
- the number and exponent syntax for real numbers, e.g. 1e-8,
- set_threshold/1, used to set the propagation threshold,
- locate/2, to perform search over continuous variables,
- squash/3, to perform shaving.
13.10 Exercises
Exercise 13.1 When optimising using the predicate opt 2/3 defined in
Section 13.8, it is necessary to compromise. A tighter precision means more
Exercise 13.3 A piano with length L and width W has to be moved around
a right-angle corner in a corridor, see Figure 13.5. Coming into the corner
the corridor has width Corr1 and going out it has a possibly different width
Corr2. Given L = 2, W = 1 and Corr1 = 1.5, what is the minimum value
for Corr2?
To make this a little easier here are the underlying mathematical con-
straints.
The envelope of the piano as it moves around the corner can be
expressed in terms of a variable T. The envelope describes two coor-
dinates X and Y as follows:

X = W * T^3 + L * (1 - T^2)^(1/2),
Y = L * T + W * (1 - T^2)^(3/2).

At the critical point X = Corr1 and Y = Corr2.
T can take values in the interval MinT .. MaxT, where

MinT = (1/2 - (9*W^2 - 4*L^2)^(1/2)/(6*W))^(1/2),
MaxT = (1/2 + (9*W^2 - 4*L^2)^(1/2)/(6*W))^(1/2).
The problem is then to write a predicate which finds the minimum possible
value for Corr2, taking parameters W, L and Corr1.
14
Linear constraints over continuous and integer variables

14.1 Introduction

14.2 The eplex library
incons(W, X, Y, Z) :-
    W+X+Y+Z $>= 10,
    W+X+Y $= 5,
    Z $=< 4.
W = W{0 .. 5}
X = X{0 .. 5}
Y = Y{0 .. 5}
Z = Z{0 .. 4}
Delayed goals:
ic : (-(Z{0 .. 4}) - Y{0 .. 5} - X{0 .. 5} - W{0 .. 5}
=< -10)
ic : (Y{0 .. 5} + X{0 .. 5} + W{0 .. 5} =:= 5)
Yes (0.00s cpu)
280 Linear constraints over continuous and integer variables
No (0.00s cpu)
So the eplex solver correctly detects the inconsistency even without any
initial domains for the variables.
A key difference is that, in contrast to ic, the eplex solver only handles
constraints after it has been explicitly initialised by the program. (The
rationale for this facility is that the same program can initialise more than
one linear solver instance through the eplex library, and send different
constraints to different instances.) The first line initialises the eplex solver
by stating whether one searches for a minimum of the cost function or a
maximum.
In general, the argument of the eplex_solver_setup/1 built-in is either
min(Cost) or max(Cost), where the Cost variable is constrained to the
expression defining the cost function. In our example we are not interested
in finding an optimal solution, so we just use 0 as the cost function. The
last line launches the eplex solver using the eplex_solve/1 built-in. In
general its argument returns the optimal value of the expression defining
the cost function and specified at setup time. Here we are not interested in
the optimal value, so use an anonymous variable as the argument.
...
Val = 5.0
Yes (0.00s cpu)
Alternatively, it is possible to extract information about the complete
solver state using the eplex_get/2 built-in (in the output below, for brevity
and clarity, we have set the variable printing options not to display any of
the variable attributes):

[eclipse 5]: eplex_solver_setup(min(0)),
...
Cons = [Y + X + W =:= 5.0, Z + Y + X + W >= 10.0]
Vars = (W, X, Y, Z)
Vals = (5.0, 0.0, 0.0, 5.0)
Yes (0.00s cpu)
return_solution :-
    eplex_get(vars, Vars),
    eplex_get(typed_solution, Vals),
    Vars = Vals.
lptest(W, X) :-
    eplex_solver_setup(min(X)),
    [W,X] :: 0..10,
    2*W+X $= 5,
    eplex_solve(_),
    return_solution.
W = 2.5
X = 0.0
Yes (0.02s cpu)
miptest(W, X) :-
    eplex_solver_setup(min(X)),
    [W,X] :: 0.0..10.0,
    integers([W]),
    2*W+X $= 5,
    eplex_solve(_),
    return_solution.
Now the solver performs a mixed integer programming algorithm and finds
an optimal solution with W integral:
W = 2
X = 1.0
Yes (0.00s cpu)
The internal (branch and bound) search method involving integer vari-
ables is as follows.

(i) Solve the linear constraints in eplex, as if all the variables were
    continuous.
(ii) From the solution select an integer variable, W, whose eplex value
    val is non-integral. If there is no such variable the problem is solved.
(iii) Let val_up be the smallest integer larger than val and val_down the
    largest integer smaller than val. Then val_up = val_down + 1.
    Make a search choice: add either the constraint W >= val_up or
    the constraint W =< val_down. Return to (i). On backtracking try
    the alternative choice.
(iv) To find an optimal solution, use the branch and bound search de-
    scribed in Subsection 6.6.3.
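These steps can be worked through by hand on the miptest example (a Python sketch, not eplex itself): minimise X subject to 2*W + X = 5, 0 <= W, X <= 10 and W integer. The LP relaxation gives W = 2.5, X = 0; branching with W =< 2 and W >= 3 then yields the integer optimum W = 2, X = 1.

```python
def lp_min_x(w_lo, w_hi):
    """Minimise X = 5 - 2*W subject to w_lo <= W <= w_hi and 0 <= X."""
    w = min(w_hi, 2.5)   # X >= 0 forces W <= 2.5; minimal X means maximal W
    if w < w_lo:
        return None      # this branch is infeasible
    return w, 5 - 2 * w

assert lp_min_x(0, 10) == (2.5, 0.0)  # relaxation: W is fractional
assert lp_min_x(0, 2) == (2, 1)       # branch W =< 2: W = 2, X = 1
assert lp_min_x(3, 10) is None        # branch W >= 3: infeasible
```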
sat(X, Y, Z) :-
    eplex_solver_setup(min(0)),
    integers([X,Y,Z]),
    [X,Y,Z] :: 0..1,
    X+Y $>= 1, Y+Z $>= 1, Z+X $>= 1,
    (1-X) + (1-Y) $>= 1, (1-Y) + (1-Z) $>= 1,
    eplex_solve(_),
    return_solution.
X = 1
Y = 0
Z = 1
Yes (0.01s cpu)
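The five inequalities encode a small satisfiability problem: each 0-1
variable stands for a truth value and each inequality for a clause. As a
quick check, written in Python purely for illustration (it is not part of
the ECLiPSe program), the reported assignment satisfies all five clauses:

```python
def sat_clauses(x, y, z):
    """The five 0-1 inequalities of the eplex model, read as the clauses
    (x or y), (y or z), (z or x), (not x or not y), (not y or not z)."""
    return (x + y >= 1 and y + z >= 1 and z + x >= 1
            and (1 - x) + (1 - y) >= 1 and (1 - y) + (1 - z) >= 1)
```

Indeed sat_clauses(1, 0, 1) holds, while the all-ones assignment violates
the clause (1-X) + (1-Y) >= 1.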
SqrtMax is sqrt(Max),
SqrtX $>= SqrtMin,
SqrtX $=< SqrtMax,
SqrtX*SqrtMin $=< X,
SqrtX*SqrtMax $>= X.
where
nonlin(W, X, Cost) :-
Cost :: 0.0..inf, %1
eplex_solver_setup(min(Cost),
Cost,
[],
[bounds, new_constraint]), %2
cons(W, X), %3
add_linear_cons(X, SqrtX), %4
Cost $= W+SqrtX, %5
minimize(nlsearch(X, SqrtX), Cost), %6
return_solution, %7
eplex_cleanup. %8
W = 0
X = 5.0
Cost = 2.23606797749979
Yes (0.03s cpu)
cust_count(4).
warehouse_count(3).
capacities([600,500,600]).
demands([400,200,200,300]).
transport_costs([]([](5,4,1),
[](3,3,2),
[](4,2,6),
[](2,4,4))).
[Figure: the transportation network. Warehouses 1-3 (capacities 600, 500,
600) are linked to customers 1-4 (demands 400, 200, 200, 300); each link is
labelled with its unit transport cost.]
This problem has a decision variable for each customer and warehouse
reflecting the amount of goods transported from the warehouse to the cus-
tomer. To ensure the model is generic we use a matrix to hold all the decision
variables. We solve it using the predicate transport/2 defined by:
transport(Supplies, Cost) :-
eplex_solver_setup(min(Cost)),
init_vars(Supplies),
supply_cons(Supplies),
cost_expr(Supplies, Cost),
eplex_solve(Cost),
return_solution.
This has the usual structure of a constraint program, except the first and
last lines, which are specific to eplex. The arguments contain the decision
variables, as discussed above. The objective is to minimise the variable Cost.
The data structures containing the decision variables are initialised as
follows:
init_vars(Supplies) :-
init_supplies(Supplies).
init_supplies(Supplies) :-
cust_count(CustCt),
warehouse_count(WCt),
dim(Supplies,[CustCt,WCt]),
( foreachelem(S, Supplies)
do
0 $=< S
).
supply_cons(Supplies) :-
capacity_cons(Supplies),
demand_cons(Supplies).
capacity_cons(Supplies) :-
capacities(Capacities),
( count(WHouse, 1, _),
foreach(Cap, Capacities),
param(Supplies)
do
cust_count(CustCt),
Cap $>= sum(Supplies[1..CustCt, WHouse])
).
demand_cons(Supplies) :-
demands(Demands),
( count(Cust, 1, _),
foreach(Demand, Demands),
param(Supplies)
do
warehouse_count(WCt),
sum(Supplies[Cust, 1..WCt]) $>= Demand
).
So we stipulate that the sum of the supplies from each warehouse does
not exceed its capacity and the sum of supplies to each customer meets or
exceeds its demand.
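These two families of inequalities are easy to check against a candidate
shipment plan. The following Python helper (our own illustration, not part
of the ECLiPSe program) performs exactly the capacity and demand checks:

```python
def feasible(supplies, capacities, demands):
    """supplies[i][j] is the amount shipped from warehouse j to customer i.
    Each warehouse column must stay within its capacity; each customer row
    must meet or exceed its demand."""
    within_capacity = all(sum(row[j] for row in supplies) <= cap
                          for j, cap in enumerate(capacities))
    demand_met = all(sum(row) >= demand
                     for row, demand in zip(supplies, demands))
    return within_capacity and demand_met
```

With the data of this section, for instance, a plan shipping 400 and 200
units from warehouse 3 and 200 and 300 units from warehouse 1 passes both
checks.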
Finally, we define the cost expression which is the sum of the resulting
transportation costs:
cost_expr(Supplies, CostExpr ) :-
transport_expr(Supplies, TransportExpr),
CostExpr $= TransportExpr.
transport_expr(Supplies, sum(TExprList)) :-
transport_costs(TransportCosts),
( foreachelem(TCost, TransportCosts),
foreachelem(Qty, Supplies),
foreach(TExpr, TExprList)
do
TExpr = TCost*Qty
).
setup_costs([100,800,400]).
a list OpenWhs of Boolean variables, one for each potential warehouse
location.
The data structures are now initialised by the following modified
init_vars/2 predicate:
init_vars(OpenWhs, Supplies) :-
init_openwhs(OpenWhs),
init_supplies(Supplies).
init_openwhs(OpenWhs) :-
warehouse_count(WCt),
length(OpenWhs, WCt),
OpenWhs :: 0.0..1.0,
integers(OpenWhs).
The key extensions to the transportation problem are in the constraints
on how much goods can be supplied to a customer from a warehouse, and an
extension to the cost function. Consider first the modified supply constraints
with the amended constraints on the supply from each warehouse:
supply_cons(OpenWhs, Supplies) :-
capacity_cons(Supplies, OpenWhs),
demand_cons(Supplies).
capacity_cons(Supplies, OpenWhs) :-
capacities(Capacities),
( count(WHouse, 1, _),
foreach(OpenWh, OpenWhs),
foreach(Cap, Capacities),
param(Supplies)
do
cust_count(CustCt),
Cap*OpenWh $>= sum(Supplies[1..CustCt, WHouse])
).
So if the warehouse WHouse is not chosen, then the corresponding OpenWh
variable is 0. The constraint ensures that in this case the sum of all the
supplies from this warehouse is 0. However, if the warehouse WHouse is
chosen, then OpenWh = 1 and the constraint reduces to the one used in the
previous section.
This way of handling Boolean variables, such as OpenWhs, is standard in
mixed integer programming. The constraint M*Bool $>= Expr is called a
big M constraint. Our big M constraint is:
Cap*OpenWh $>= sum(Supplies[1..CustCt, WHouse]).
This example is especially interesting as, by choosing M equal to the ca-
pacity Cap of the warehouse, we have created a constraint that does two
things at the same time. If the warehouse is not chosen, it precludes any
supply from that warehouse, and if the warehouse is chosen, it restricts its
total supply to be no more than the capacity of the warehouse.
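The effect of choosing M equal to the capacity can be seen in a two-line
Python check (an illustration of ours, not ECLiPSe code):

```python
def capacity_ok(open_wh, supplies_from_wh, cap):
    """The big M constraint Cap*OpenWh >= sum(Supplies) with M = cap:
    a closed warehouse (open_wh = 0) admits no supply at all, while an
    open one (open_wh = 1) is limited to its capacity."""
    return cap * open_wh >= sum(supplies_from_wh)
```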
The second extension of the transportation problem model is an additional
argument in the cost function, which is now defined as follows:
cost_expr(OpenWhs, Supplies, Cost) :-
setup_expr(OpenWhs, SetupExpr),
transport_expr(Supplies, TransportExpr),
Cost $= SetupExpr+TransportExpr.
setup_expr(OpenWhs, sum(SExprList)) :-
setup_costs(SetupCosts),
( foreach(OpenWh, OpenWhs),
foreach(SetupCost, SetupCosts),
foreach(SExpr, SExprList)
do
SExpr = OpenWh*SetupCost
).
(The transport_expr/2 predicate is defined in the previous section.) So
the cost variable Cost is now constrained to be equal to the sum of the setup
costs of the chosen warehouses plus the resulting transportation costs:
Cost $= SetupExpr+TransportExpr.
We now encode the facility location problem similarly to the
transportation problem:
The problem instance defined by the example data items above yields the
following result:
OpenWhs = [1, 0, 1]
Supplies = []([](0.0, 0.0, 400.0),
[](0.0, 0.0, 200.0),
[](200.0, 0.0, 0.0),
[](300.0, 0.0, 0.0))
Cost = 2700.0
Yes (0.00s cpu)
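The reported cost can be recomputed by hand: warehouses 1 and 3 are open,
contributing setup costs 100 + 400 = 500, and the shipped quantities
contribute transport costs 400 + 400 + 800 + 600 = 2200. A short Python
recomputation (our own check against the data items above):

```python
def facility_cost(open_whs, supplies, setup_costs, transport_costs):
    """Setup costs of the open warehouses plus the per-unit transport
    costs; supplies[i][j] and transport_costs[i][j] both refer to
    customer i and warehouse j."""
    setup = sum(o * c for o, c in zip(open_whs, setup_costs))
    transport = sum(t * q
                    for trow, srow in zip(transport_costs, supplies)
                    for t, q in zip(trow, srow))
    return setup + transport
```

With the solution above this yields 500 + 2200 = 2700, in agreement with
Cost = 2700.0.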
needed only for the cost expression. The newly introduced predicate
init_capacities/2 is defined as follows:
init_capacities(Capacities, SqrtCaps) :-
warehouse_count(WCt),
length(Capacities, WCt),
length(SqrtCaps, WCt),
total_demand(CapUPB),
SqrtCapUPB is sqrt(CapUPB),
Capacities :: 0.0..CapUPB,
SqrtCaps :: 0.0..SqrtCapUPB,
( foreach(Cap, Capacities),
foreach(SqrtCap, SqrtCaps)
do
add_linear_cons(Cap, SqrtCap)
).
total_demand(CapUPB) :-
demands(Demands),
CapUPB is sum(Demands).
So we stipulate that the upper bound on the capacity variables is the total
demand from all the customers: this is the largest warehouse capacity that
could ever be needed.
The add_linear_cons/2 predicate, originally described in Section 14.4,
posts a set of linear constraints that approximate the square root relation.
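The four inequalities shown in Section 14.4 sandwich the square root
between linear bounds on the interval [Min, Max], so the exact square root
always satisfies them and no solution is lost. This can be verified in
Python (an illustration of ours, not part of the program):

```python
import math

def sqrt_cuts_ok(x, lo, hi):
    """The four linear cuts relating X and SqrtX on [lo, hi]:
    SqrtX >= sqrt(lo), SqrtX =< sqrt(hi),
    SqrtX*sqrt(lo) =< X and SqrtX*sqrt(hi) >= X.
    The exact square root of any x in [lo, hi] satisfies all four."""
    s, s_lo, s_hi = math.sqrt(x), math.sqrt(lo), math.sqrt(hi)
    return s_lo <= s <= s_hi and s * s_lo <= x and s * s_hi >= x
```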
The next program section posts the problem constraints:
(The demand_cons/2 predicate is defined in Section 14.5.) Since the
capacity Cap is now a variable, the expression Cap*OpenWh is no longer
linear, and therefore it cannot be used in a linear constraint, as in the
capacity_cons/2 predicate in the previous section. Instead, the constraint
Cap $>= sum(Supplies[1..CustCt, WHouse])
stating that there is sufficient capacity in the warehouse to meet the supply
commitment is added separately from the constraint
nlsearch(Capacities, SqrtCaps) :-
( too_different(Capacities, SqrtCaps, Cap, SqrtCap) ->
make_choice(Cap),
add_linear_cons(Cap, SqrtCap),
nlsearch(Capacities, SqrtCaps)
;
return_solution
).
OpenWhs = [1, 0, 1]
Capacities = [500.0, 0.0, 600.0]
Cost = 3903.40269486325
Yes (0.06s cpu)
This program scales up in a reasonable way. For a set of benchmarks
with 10 warehouses and 50 customers the program typically finds a solution
quickly and proves optimality in about 20 seconds.
ic: (X $>= 2)
or
test(X, Y) :-
ic: (X $>= 2),
Y #>= X.
14.8 Summary
In this chapter we introduced the eplex solver that allows us to solve linear
and non-linear constraints on integers and reals. We discussed in turn:
14.9 Exercises
Resource T1 T2
Labour (hours) 9 6
Pumps (units) 1 1
Tubing (m) 12 16
Exercise 14.2 Explore the behaviour of the program solving the non-linear
facility location problem presented in Section 14.7 with different:
(i) linear approximations
Add the following linear approximations one by one to determine
their effect on the search performance:
SqrtX $>= SqrtMin and SqrtX $=< SqrtMax,
SqrtX*SqrtMin $=< X and SqrtX*SqrtMax $>= X,
(SqrtMax-SqrtMin)*(X-Min) $=< (SqrtX-SqrtMin)*(Max-Min).
(ii) setup costs
Try the following setup costs:
SExpr = (OpenWh+SqrtCap*0.3)*SetupCost,
SExpr = (OpenWh+SqrtCap*0.1)*SetupCost,
SExpr = (OpenWh+SqrtCap*0.01)*SetupCost.
(iii) trigger conditions
Add the following trigger conditions one at a time:
new constraint,
bounds,
deviating bounds.
(iv) precision
Try varying the allowed difference between SqrtCapVal and
sqrt(CapVal) using:
10,
1,
1e-5.
Solutions to selected exercises
Exercise 1.1
[eclipse 1]: X = 7, X = 6.
No (0.00s cpu)
[eclipse 2]: X = 7, X = X+1.
No (0.00s cpu)
[eclipse 3]: X = X+1, X = 7.
No (0.00s cpu)
[eclipse 4]: X = 3, Y = X+1.
Y = 3 + 1
X = 3
Yes (0.00s cpu)
[eclipse 5]: Y = X+1, X = 3.
Y = 3 + 1
X = 3
Yes (0.00s cpu)
Exercise 1.2
[eclipse 1]: book(X, apt), book(X, Y), Y \= apt.
X = constraint_logic_programming
Y = wallace
Yes (0.00s cpu)
Exercise 1.3
[eclipse 1]: Z = 2, q(X).
Z = 2
X = X
Yes (0.00s cpu)
Z = 2
Yes (0.00s cpu)
[eclipse 3]: Z = 2, q(X), Y = Z+1.
Z = 2
X = X
Y = 2 + 1
Exercise 1.4
[eclipse 1]: p(Behold), q(Behold).
No (0.00s cpu)
Exercise 1.5
The first solution is obtained twice, by splitting Xs into [] and Xs or into Xs and
[]. In each case, by appending Xs to [] or [] to Xs we get back Xs. The following
modified definition of rotate/2 removes the possibility of such redundancies:
rotate(Xs, Ys) :- app(As, Bs, Xs), Bs \= [], app(Bs, As, Ys).
Exercise 2.1
The result is X=0, Y=0.
Exercise 2.2
The first procedure call fails, while the other two succeed.
Exercise 2.3
app(X1, X2, X3):
begin
begin new Ys;
X1 = []; X2 = Ys; X3 = Ys
end
orelse
begin new X,Xs,Ys,Zs;
X1 = [X | Xs]; X2 = Ys; X3 = [X | Zs];
app(Xs, Ys, Zs)
end
end
Exercise 3.1
[eclipse 1]: X is 7, X is 6.
No (0.00s cpu)
No (0.00s cpu)
[eclipse 3]: X is X+1, X is 7.
instantiation fault in +(X, 1, X)
Abort
[eclipse 4]: X is 3, Y is X+1.
Y = 4
X = 3
Yes (0.00s cpu)
[eclipse 5]: Y is X+1, X is 3.
instantiation fault in +(X, 1, Y)
Abort
Exercise 3.2
Z = 2
X = X
Yes (0.00s cpu)
[eclipse 2]: Z is 2, q(Z).
Z = 2
Yes (0.00s cpu)
[eclipse 3]: Z is 2, q(X), Y is Z+1.
Z = 2
X = X
Y = 3
Yes (0.00s cpu)
[eclipse 4]: Z = 2, inc(X), X = 2.
instantiation fault in +(X, 1, Z) in module eclipse
Abort
[eclipse 5]: X = 2, inc(X), Z = 2.
X = 2
Z = 2
Yes (0.00s cpu)
Exercise 3.3
An example is the query min(1+1, 3, Z). For the first program it yields the
answer Z = 1+1 and for the second program the answer Z = 2.
Exercise 3.4
Assume the declarations
:- op(500, yfx, have).
:- op(400, yfx, and).
Exercise 3.5
Here is a possible declaration that ensures that has_length/2 is properly defined
and queries such as [a,b,c] has_length X+1 are correctly parsed:
:- op(800, xfy, has_length).
Exercise 4.1
Here is one possible solution.
len(X, N) :-
( integer(N) ->
len1(X, N)
;
len2(X, N)
).
len1([], 0) :- !.
len1([_ | Ts], N) :- M is N-1, len1(Ts, M).
len2([], 0).
len2([_ | Ts], N) :- len2(Ts, M), N is M+1.
Exercise 4.2
This exercise illustrates a subtlety in using cut. Consider the query add(a, [a],
[a,a]). For the first program it yields the answer No and for the second program
the answer Yes.
Exercise 4.3
while(B, S) :-
( B ->
S,
while(B, S)
;
true
).
Exercise 5.1
Here is one possible solution.
Exercise 5.3
Here is one possible solution.
subs(Input, _X, _Term, Input) :- atomic(Input), !.
subs(Input, X, Term, Term) :- var(Input), X == Input, !.
subs(Input, _X, _Term, Input) :- var(Input), !.
Exercise 7.1
( foreach(V,List),
fromto(0,This,Next,Count)
do
(nonvar(V) -> Next is This+1 ; Next = This)
).
Exercise 7.2
( foreachelem(El,Array),
fromto(0,This,Next,Count)
do
(nonvar(El) -> Next is This+1 ; Next = This)
).
Exercise 7.3
Here is one possible solution.
select_val(Min, Max, Val) :-
( count(I,Min,Val),
fromto(go,_,Continue,stop),
param(Max)
do
( Continue = stop
;
I < Max, Continue = go
)
).
This is an example where the recursive encoding (provided in the SELECT program
from Figure 3.2) is, perhaps, easier to follow.
Exercise 7.4
( count(I,K,L) do Body ) is equivalent to
L1 is L+1, ( fromto(K,I,M,L1) do M is I+1, Body ).
Exercise 8.1
setval(c,0),
( foreachelem(El,Array)
do
( var(El) -> true ; incval(c) )
),
getval(c,Count).
Exercise 8.2
Here is one possible solution.
all_solutions(Query, _List) :-
setval(sols,[]),
Query,
getval(sols,Old),
append(Old,[Query],New),
setval(sols,New),
fail.
all_solutions(_, List) :-
getval(sols,List).
Exercise 8.3
Here is one possible solution.
naive_search2(Vars,Vals) :-
( foreach(V,Vars),
count(Indent,0,_),
param(Vals)
do
setval(first,true),
member(V,Vals),
( getval(first,false) -> nl_indent(Indent) ; true ),
write(v), write(Indent), write(' = '),
write(V), write('\t'),
on_backtracking(setval(first,false))
).
nl_indent(N) :-
nl,
( for(_I,1,N) do write('\t') ).
on_backtracking(_).
on_backtracking(Q) :-
once(Q),
fail.
(\t stands for the TAB symbol.)
Exercise 9.1
The following solve/1 predicate does the job:
solve(A) :-
solve(A, true, Susp),
write_susp(Susp).
write_susp(true) :- !.
write_susp((A,B)) :-
nl, writeln('Delayed goals:'),
write_susp1((A,B)).
write_susp1((A,B)) :-
( B = true -> true ; write_susp1(B) ),
write('\t'), writeln(A).
We have then for example:
[eclipse 1]: solve((suspend:(2 < Y + 1), suspend:(X > 3))).
Delayed goals:
suspend : (2 < Y + 1)
suspend : (X > 3)
Y = Y
X = X
Yes (0.00s cpu)
Exercise 9.2
In the solution below the predicate diff_list/2 is defined by a double recursion
using the auxiliary predicate out_of/2.
diff_list([]).
diff_list([E1|Tail]):-
out_of(E1, Tail),
diff_list(Tail).
out_of(_, []).
out_of(E1, [E2|Tail]):-
E1 $\= E2,
out_of(E1, Tail).
Exercise 9.3
The following modification of the constraints/2 predicate does the job:
constraints(QueenStruct, Number) :-
( for(I,1,Number),
param(QueenStruct,Number)
do
QueenStruct[I] :: 1..Number,
( for(J,1,I-1),
param(I,QueenStruct)
do
subscript(QueenStruct,[I],Qi),
subscript(QueenStruct,[J],Qj),
Qi $\= Qj,
Qi-Qj $\= I-J,
Qi-Qj $\= J-I
)
).
We then get a substantial speed-up w.r.t. the original solution, for example:
[eclipse 1]: queens(X, 8).
X = [](1, 5, 8, 6, 3, 7, 2, 4)
Yes (0.02s cpu, solution 1, maybe more) ?
Exercise 9.4
Here is one possible solution.
filter([], [], unsat).
filter([H | T], [H | Rest], Sat) :-
var(H), !, filter(T, Rest, Sat).
filter([0 | T], Rest, Sat) :- !,
filter(T, Rest, Sat).
filter([1 |_], [], sat).
test_cl(List) :-
filter(List, Rest, Sat),
( Sat = sat -> true ;
Rest = [Var|_] -> suspend(test_cl(Rest), 2, Var->inst)
).
Exercise 9.5
Here is one possible solution.
ordered(List) :-
( fromto(List,[X|Successors],Successors,[]),
fromto([],Predecessors,[X|Predecessors],_RList)
do
( ground(X) ->
cg(X, Predecessors, Successors)
;
suspend(cg(X,Predecessors,Successors),2,X->inst)
)
).
check_gap(_, _, []) :- !.
check_gap(X, Comp, [Y|Rest]) :-
var(Y), !,
check_gap(X, Comp, Rest).
check_gap(X, Comp, [Y|_]) :-
comp(X, Comp, Y).
No (0.00s cpu)
[eclipse 2]: X #= 7, X #= X+1.
No (0.00s cpu)
[eclipse 3]: X #= X+1, X #= 7.
No (0.00s cpu)
[eclipse 4]: X #= 3, Y #= X+1.
X = 3
Y = 4
Yes (0.00s cpu)
[eclipse 5]: Y #= X+1, X #= 3.
Y = 4
X = 3
Yes (0.00s cpu)
Exercise 11.1
Here is one possible solution.
queens(Queens, Number):-
length(Queens, Number),
Queens :: [1..Number],
constraints(Queens),
labeling(Queens).
constraints(Queens):-
noHorizontalCheck(Queens),
noDiagonalCheck(Queens).
noHorizontalCheck(Queens):-
diff_list(Queens).
noDiagonalCheck([]).
noDiagonalCheck([Q | Queens]):-
noCheck(Q, Queens, 1),
noDiagonalCheck(Queens).
correct(List) :-
( foreach(El,List),
count(I,0,_),
param(List)
do
occ(I, List, El)
).
The predicate occ/3 is actually present in ECLiPSe as the occurrences/3
built-in in the ic_global library. For a solution using this built-in see
www.eclipse-clp.org/examples/magicseq.ecl.txt.
Exercise 11.3
Here is one possible solution.
solve(X) :-
N is 10,
dim(X,[N]),
dim(Y,[N]),
subscript(X,[7],6),
( for(I,1,N),
param(X,Y,N)
do
X[I] :: 1..N,
( for(J,1,I-1),
param(I,X)
do
X[I] $\= X[J]
),
( I >= 2 ->
Y[I] $= X[I]-X[I-1],
Y[I] :: [3,-2]
;
true
)
),
( foreacharg(I,X) do indomain(I) ).
Exercise 12.1
For the first problem the continue strategy is much better than the restart
strategy, while for the second problem the converse holds. Further,
labeling(List) performs well for the first problem, while for the second
problem it is much better
[...]
List = [7, 5, 3, 1, 0, 2, 4, 6, 8]
Max = 530
Yes (6.48s cpu)
[...]
List = [7, 5, 3, 1, 0, 2, 4, 6, 8]
Max = 530
Yes (87.19s cpu)
[...]
List = [14, 12, 10, 8, 6, 0, 1, 2, 3, 4, 5, 7, 9, 11, 13, 15]
Max = 468
Yes (1.57s cpu)
[...]
List = [12, 13, 10, 8, 6, 2, 3, 0, 1, 4, 5, 7, 9, 11, 14, 15]
Max = 468
Yes (104.27s cpu)
[...]
List = [12, 13, 10, 8, 0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 14, 15]
Max = 468
Exercise 12.2
Here is one possible solution.
domination(N,Min) :-
dim(Board,[N,N]),
matrix_to_list(Board, Vars),
Vars :: 0..1,
sum(Vars) $= Min,
( multifor([Row,Col],[1,1],[N,N]),
param(Board)
do
covered(Row, Col, Board)
),
minimize(labeling(Vars),Min),
print_board(Board).
attack(R,C,I,J) :-
R =:= I ; C =:= J ; R-C =:= I-J ; R+C =:= I+J.
matrix_to_list(Board, List) :-
( foreachelem(El,Board),
foreach(Var,List)
do
Var = El
).
print_board(Board) :-
( foreachelem(El,Board,[_,J])
do
( J =:= 1 -> nl ; true ),
write(' '), write(El)
).
The call covered(Row, Col, Board) checks that the list of fields that attack
the square [Row,Col] (constructed in the variable AttackList) contains at
least one queen (using the constraint sum(AttackList) $>= 1).
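The attack relation used here can be stated directly: two squares attack
each other if they share a row, a column or a diagonal. In Python (a
mirror, for illustration only, of the attack/4 predicate above):

```python
def attacks(r, c, i, j):
    """Square (r, c) attacks square (i, j) iff they share a row, a column,
    or one of the two diagonals (constant difference or constant sum)."""
    return r == i or c == j or r - c == i - j or r + c == i + j
```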
Exercise 12.3
Here is one possible solution with a sample input problem.
% problem(Nr, SquareSize, TileSizes)
).
[...]
Bs = [0, 0, 1, 1, 1, 1, 1]
Xs = [1, 1, 1, 1, 6, 9, 1]
Ys = [1, 1, 1, 6, 1, 1, 10]
Min = 45
Yes (6.58s cpu)
Bibliographic remarks
Chapter 1
The unification problem was introduced and solved in Robinson [1965]. The
Martelli-Montanari unification algorithm was introduced in Martelli
and Montanari [1982]. The use of unification for computing is due to Kowalski
[1974]. The concept of pure Prolog was popularised by the first edition
of Sterling and Shapiro [1994].
Chapter 2
The language L0 defined in Section 2.2 and the translation of pure Prolog
into it are based on Apt and Smaus [2001], where it is shown that this
language supports declarative programming. The declarative interpretation of
L0 programs is closely related to the so-called Clark completion of logic
programs introduced in Clark [1978], further discussed for example in Apt
and van Emden [1982].
The equivalence between the declarative interpretations of pure Prolog
programs and L0 programs w.r.t. successful queries follows from the results
of Apt and van Emden [1982] that compare the least Herbrand model of a
logic program with the models of Clark completion.
Chapter 4
O'Keefe [1990] discusses in detail the proper uses of the cut. Sterling and
Shapiro [1994] provides an in-depth discussion of various meta-interpreters
written in Prolog. Apt and Turini [1995] is a collection of articles discussing
meta-programming in Prolog.
Chapter 6
This chapter draws on material from Apt [2003]. The SEND + MORE
= MONEY puzzle is due to H. E. Dudeney and appeared in England, in
the July 1924 issue of Strand Magazine. The representation of the SEND
+ MORE = MONEY puzzle and of the n-queens problem as a CSP is
discussed in Van Hentenryck [1989].
According to Russell and Norvig [2003] the eight queens problem was
originally published anonymously in the German chess magazine Schach in
1848 and its generalisation to the n-queens problem in Netto [1901]. The
coins problem is discussed in Wallace, Novello and Schimpf [1997].
Chapter 7
The iterators in ECLiPSe were designed and implemented by Joachim Schimpf.
In Schimpf [2002] an early version of the iterators and their implementation
in ECLiPSe is described. It is shown in Voronkov [1992] that in logic
programming appropriately powerful iterators have the same power as
recursion. fromto/4 is an example of such an iterator. The QUEENS program
given in Figure 7.8 of Section 7.6 is modelled after an analogous solution in
the Alma-0 language of Apt et al. [1998]. The summary of the iterators is
based on the ECLiPSe user manual, see www.eclipse-clp.org/doc/bips/
kernel/control/do-2.html.
Chapter 8
The credit-based search was introduced in Beldiceanu et al. [1997]. The
limited discrepancy search is due to Harvey and Ginsberg [1995].
Chapter 10
The illustration of the constraint propagation for Boolean constraints by
means of the full adder example is from Frühwirth [1995]. The use of the
SEND + MORE = MONEY puzzle to discuss the effect of constraint prop-
agation is from Van Hentenryck [1989, page 143]. The details of the con-
straint propagation mechanisms implemented in the sd and ic libraries are
discussed in Chapters 6 and 7 of Apt [2003].
Chapter 11
Exercise 11.2 is originally from Van Hentenryck [1989, pages 155-157].
Chapter 12
The use of constraint logic programming to solve Sudoku puzzles can be
traced back to Carlson [1995]. Various approaches to solving them based
on different forms of constraint propagation are analysed in detail in
Simonis [2005]. Exercise 12.2 is from Wikipedia [2006]. Exercise 12.3 is a
modified version of an exercise from
www.eclipse-clp.org/examples/square_tiling.pl.txt.
Chapter 13
The first comprehensive account of solving constraints on reals based on
constraint propagation is Van Hentenryck [1997] where a modelling language
Numerica is introduced. The Reimers system is taken from Granvilliers
[2003]. The MORTGAGE program was originally discussed in Jaffar and Lassez
[1987]. Exercise 13.3 is taken from Monfroy, Rusinowitch and Schott [1996].
Chapter 14
The facility location problem is a standard problem in operations research,
see, e.g., Vazirani [2001].
Bibliography
K. R. Apt
[2003] Principles of Constraint Programming, Cambridge University Press,
Cambridge, England. Cited on page(s) 320, 321.
K. R. Apt, J. Brunekreef, V. Partington and A. Schaerf
[1998] Alma-0: An imperative language that supports declarative programming,
ACM Toplas, 20, pp. 1014-1066. Cited on page(s) 321.
K. R. Apt and J. Smaus
[2001] Rule-based versus procedure-based view of logic programming, Joint
Bulletin of the Novosibirsk Computing Center and Institute of Infor-
matics Systems, Series: Computer Science, 16, pp. 75-97. Available via
http://www.cwi.nl/~apt. Cited on page(s) 320.
K. R. Apt and F. Turini
[1995] eds., Meta-logics and Logic Programming, The MIT Press, Cambridge,
Massachusetts. Cited on page(s) 320.
K. R. Apt and M. van Emden
[1982] Contributions to the theory of logic programming, Journal of the ACM,
29, pp. 841-862. Cited on page(s) 320.
N. Beldiceanu, E. Bourreau, P. Chan and D. Rivreau
[1997] Partial search strategy in CHIP, in: Proceedings of the Second Interna-
tional Conference on Meta-Heuristics. Sophia-Antipolis, France. Cited
on page(s) 321.
I. Bratko
[2001] PROLOG Programming for Artificial Intelligence, International Com-
puter Science Series, Addison-Wesley, Harlow, England, third edn.
Cited on page(s) xiii.
B. Carlson
[1995] Compiling and Executing Finite Domain Constraints, PhD thesis, Upp-
sala University and SICS. Cited on page(s) 321.
K. L. Clark
[1978] Negation as failure, in: Logic and Databases, H. Gallaire and J. Minker,
eds., Plenum Press, New York, pp. 293-322. Cited on page(s) 320.
A. Colmerauer
[1990] An introduction to Prolog III, Communications of the ACM, 33,
pp. 69-90. Cited on page(s) x.
A. Colmerauer and P. Roussel
[1996] The birth of Prolog, in: History of Programming Languages, T. J. Bergin
and R. G. Gibson, eds., ACM Press/Addison-Wesley, Reading,
Massachusetts, pp. 331-367. Cited on page(s) x.
M. Dincbas, P. V. Hentenryck, H. Simonis, A. Aggoun, T. Graf and
F. Berthier
[1988] The constraint logic programming language CHIP, in: Proceedings of
the International Conference on Fifth Generation Computer Systems,
Tokyo, pp. 693-702. Cited on page(s) xi.
T. Frühwirth
[1995] Constraint Handling Rules, in: Constraint Programming: Basics and
Trends, A. Podelski, ed., LNCS 910, Springer-Verlag, Berlin,
pp. 90-107. (Châtillon-sur-Seine Spring School, France, May 1994). Cited on
page(s) 321.
M. Gardner
[1979] Mathematical Circus, Random House, New York, NY. Cited on page(s)
108.
L. Granvilliers
[2003] RealPaver User's Manual, Version 0.3. http://www.sciences.
univ-nantes.fr/info/perso/permanents/granvil/realpaver/src/
realpaver-0.3.pdf. Cited on page(s) 322.
G. H. Hardy
[1992] A Mathematician's Apology, Cambridge University Press, Cambridge,
England, reprint edn. Cited on page(s) 251.
W. D. Harvey and M. L. Ginsberg
[1995] Limited discrepancy search, in: Proceedings of the Fourteenth Inter-
national Joint Conference on Artificial Intelligence (IJCAI-95); Vol.
1, C. S. Mellish, ed., Morgan Kaufmann, Montreal, Quebec, Canada,
pp. 607-615. Cited on page(s) 321.
C. A. R. Hoare
[1962] Quicksort, BCS Computer Journal, 5, pp. 10-15. Cited on page(s) 49.
J. Jaffar and J.-L. Lassez
[1987] Constraint logic programming, in: POPL'87: Proceedings 14th ACM
Symposium on Principles of Programming Languages, ACM,
pp. 111-119. Cited on page(s) xi, 322.
J. Jaffar, S. Michaylov, P. J. Stuckey and R. H. C. Yap
[1992] The CLP(R) language and system, ACM Transactions on Programming
Languages and Systems, 14, pp. 339-395. Cited on page(s) xi.
R. Kowalski
[1974] Predicate logic as a programming language, in: Proceedings IFIP'74,
North-Holland, New York, pp. 569-574. Cited on page(s) ix, 320.
A. Mackworth
[1977] Consistency in networks of relations, Artificial Intelligence, 8,
pp. 99-118. Cited on page(s) x.
A. Martelli and U. Montanari
[1982] An efficient unification algorithm, ACM Transactions on Programming
Languages and Systems, 4, pp. 258-282. Cited on page(s) 320.
E. Monfroy, M. Rusinowitch and R. Schott
[1996] Implementing non-linear constraints with cooperative solvers, in: Pro-
ceedings of the ACM Symposium on Applied Computing (SAC 96), ACM
Press, New York, NY, USA, pp. 63-72. Cited on page(s) 322.
U. Montanari
[1974] Networks of constraints: fundamental properties and applications to pic-
ture processing, Information Science, 7, pp. 95-132. Also Technical
Report, Carnegie Mellon University, 1971. Cited on page(s) x.
E. Netto
[1901] Lehrbuch der Combinatorik, Teubner, Stuttgart. Cited on page(s) 321.
R. O'Keefe
[1990] The Craft of Prolog, The MIT Press, Cambridge, Massachusetts. Cited
on page(s) 320.
C. H. Papadimitriou and K. Steiglitz
[1982] Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall,
Englewood Cliffs, NJ. Cited on page(s) 278.
J. Robinson
[1965] A machine-oriented logic based on the resolution principle, J. ACM, 12,
pp. 23-41. Cited on page(s) ix, 320.
S. Russell and P. Norvig
[2003] Artificial Intelligence: A Modern Approach, Prentice-Hall, Englewood
Cliffs, NJ, second edn. Cited on page(s) 321.
J. Schimpf
[2002] Logical loops, in: Proceedings of the 2002 International Conference on
Logic Programming, P. J. Stuckey, ed., vol. 2401 of Lecture Notes in
Computer Science, Springer-Verlag, pp. 224-238. Cited on page(s) 321.
H. Simonis
[2005] Sudoku as a constraint problem, in: Proceedings of the Fourth Interna-
tional Workshop on Modelling and Reformulating Constraint Satisfac-
tion Problems, B. Hnich, P. Prosser and B. Smith, eds. Sitges, Spain.
Cited on page(s) 321.
L. Sterling and E. Shapiro
[1994] The Art of Prolog, The MIT Press, Cambridge, Massachusetts,
second edn. Cited on page(s) xiii, 320.
I. Sutherland
[1963] Sketchpad, A Man-Machine Graphical Communication System, Garland
Publishing, New York. Cited on page(s) x.
P. Van Hentenryck
[1989] Constraint Satisfaction in Logic Programming, The MIT Press, Cam-
bridge, Massachusetts. Cited on page(s) xi, 321.
[1997] Numerica: A Modeling Language for Global Optimization, The MIT
Press, Cambridge, Massachusetts. Cited on page(s) 322.
V. V. Vazirani
[2001] Approximation Algorithms, Springer-Verlag, Berlin. Cited on page(s)
322.
A. Voronkov
[1992] Logic programming with bounded quantifiers, in: Logic Programming
and Automated Reasoning: Proc. 2nd Russian Conference on Logic
Programming, A. Voronkov, ed., LNCS 592, Springer-Verlag, Berlin,
pp. 486-514. Cited on page(s) 321.
M. Wallace, S. Novello and J. Schimpf
[1997] ECLiPSe: A platform for constraint logic programming, ICL Systems
Journal, 12, pp. 159-200. Cited on page(s) 321.
Wikipedia
[2006] Eight queens puzzle. Entry http://en.wikipedia.org/wiki/Queens_
problem in Wikipedia. Cited on page(s) 321.
Index
ambivalent syntax, 59
answer, 12
arithmetic comparison predicate, 47
arithmetic constraint, see constraint
arithmetic evaluator, 43
array, 121
atom, 5
atomic goal, 5
backtracking, 15, 31
binding order, 43
Boolean constraint, see constraint
bracketless prefix form, 42
choice point, 15
Clark completion, 320
clause, 5, 284
equivalent CSPs, 91
expression
  arithmetic, 43, 93
  Boolean, 92
  linear, 92
fact, 6
functor, 5, 59
gae (ground arithmetic expression), 43
goal, 6
heuristics
  value choice, 105
  variable choice, 104
implication, 8
infix form, 42
query, 6
  atomic, 6