Local Search Techniques for Scheduling Problems
Ph.D. Thesis
Candidate: Luca Di Gaspero
Supervisor: Prof. Andrea Schaerf
Referees: Prof. Marco Cadoli, Prof. Wolfgang Slany
Chair: Prof. Moreno Falaschi
© 2003, Luca Di Gaspero

Abstract
Local Search meta-heuristics are an emerging class of methods for tackling combinatorial search and optimization problems, which have recently been shown to be very effective for a large number of such problems.
Local Search techniques are based on the iterative exploration of a solution space: at each iteration, a Local Search algorithm steps from one solution to one of its "neighbors", i.e., solutions that are (in some sense) close to the current one.
One major drawback of this family of techniques is the lack of robustness across a wide variety of problem instances: in many cases these methods find good results in reasonable running times, whereas in other cases they get trapped in so-called local minima.
Several approaches to this problem have recently appeared in the literature. They range from the exploitation of randomness (e.g., random explorations of the solution space) to the application of learning methods or hybrid techniques.
In this thesis we propose an alternative approach to cope with local minima, which is based
on the principled combination of several neighborhood structures. We introduce a set of neigh-
borhood operators that, given a collection of basic neighborhoods, automatically create a new
compound neighborhood and prescribe the strategies for its exploration. We call this approach
Multi-Neighborhood Search.
Although a broad range of problems can be tackled by means of Local Search, in this work we restrict our attention to scheduling problems, i.e., the problems of assigning one or more resources to activities over a certain time period. This is the application domain of our research.
We present the application of Multi-Neighborhood Search to a selected set of scheduling prob-
lems. Namely, the problems tackled in this thesis belong to the classes of educational timetabling,
workforce assignment and production scheduling problems. In general, we obtain improvements
w.r.t. classical solution techniques used in this field.
An additional research line pursued in the thesis deals with the issues raised by the implemen-
tation of Local Search algorithms. The main problem is related to the difficulty of engineering the
code developed for Local Search applications. In order to overcome this problem we designed and
developed an Object-Oriented framework as a general tool for the implementation of Local Search
algorithms.
The framework, called EasyLocal++, favors the development of well-engineered Local Search
applications, by helping the user to derive a neat conceptual scheme of the application and by
supporting the design and the rapid implementation of new compound techniques. Therefore,
EasyLocal++ allows the user to focus on the most difficult parts of the development process,
i.e., the design and the experimental analysis of the heuristics.
Acknowledgments
These few lines are words of gratitude to all those who helped make this work possible. The warmest and most sincere thanks go to my supervisor, Andrea Schaerf, whose encouragement and willing support guided me in achieving this goal. Without his unique enthusiasm, unending optimism and patience, this PhD would hardly have been possible.
I am grateful to Moreno Falaschi, director of the graduate school in Computer Science, and to the members of the graduate school council, for their advice during these years. Thanks to Maurizio Gabbrielli and Paolo Serafini, who were in charge of evaluating my thesis proposal and the first progress reports.
I also give my warmest thanks to Wolfgang Slany and Marco Cadoli, who were appointed as "official referees", for reviewing the first draft of the thesis. I thank them also for their helpful comments, which made it possible to improve this thesis. Special regards to Marco Cadoli (again), Agostino Dovier and Michela Milano for being part of my thesis defense committee.
My colleagues at the Department of Mathematics and Computer Science were responsible for creating a pleasant working environment during the years spent with them. I am especially indebted to Paolo Coppola, who showed me the way and encouraged me to join this enterprise. Credits to Massimo Franceschet, whose (few) words of advice helped me to make the right choices at several branching points (following his terminology). Thanks to Stefano Mizzaro for his expert directions, and for listening to me during the recent "Soap Operas" I was involved in. Carla Piazza supported me during bad times, hosted me when I was in Amsterdam for the first time and, furthermore, kindly gave me her Dutch bicycle, which is still my official Amsterdamse fiets (if someone has not stolen it in the meantime). Thanks also to the "old colleagues": Stefania Gentili, Daniela Cancila, Alberto Ciaffaglione, Ivan Scagnetto, Gianluca Franco and Roberto Ranon. I am particularly grateful to the new friends who joined the department in the last years: Alicia Villanueva-Garcia, Marco Comini and (Giusep)Pina Barbieri. I want to let them know they gave me a lot in these years. I will miss Alicia, who is going back to Spain. Nothing will replace her big smiles, her easy-goingness and responsiveness.
I owe my gratitude to Krzysztof R. Apt for hosting me at the Centrum voor Wiskunde en
Informatica in Amsterdam. I spent there nine fruitful months in the Constraint and Integer
Programming Group. I wish to thank all the members of that group, in particular: Sebastian
Brand, Rosella Gennari, Frederic Goualard, Willem Jan van Hoeve and Jan-Georg Smaus. I am
honoured to have been part of their team.
I want to acknowledge many other good friends I met in the Netherlands, such as Piero, Luisella, Sebastiano, Jordi, Manuel, Dorina and Simona. Going on, let me thank the Gerbrandy-Brosio family: Elena, Jelle & the newborn Margherita Nynke. It is their fault (actually of Elena & Jelle alone, Margherita was about to arrive) if I was a menace as a boat driver in the canals of Amsterdam and in the Amstel river (believe me, the latter, compared to the canals, is a motorway!). I had a lot of good times with those friends and I will remember them affectionately.
There are also some other good friends I met in Amsterdam who more or less indirectly contributed to this thesis. My future little brother- and sister-in-law Sergio and Elena had a big role in this enterprise by healing my toothache and being my favorite pizza maker, respectively. I am fond of them!
Alessandro and Lucia, Ludovica, Lorenzo, Gianluca and Elisabetta, Davide and Michela, Chiara
and Chiaretta, Luisa and Yves, Wolf and Eleni, Calina and Fetze were also part of my big family
there, which has now partly spread throughout Europe. Thanks to them I learned a valuable lesson
of friendship which is one of the main hidden results of this thesis. I will always have those friends
in my mind and in my heart.
Finally, I want to write a few words for the most special person to acknowledge. She is Marina, my very dear girlfriend. Our ways crossed during our PhDs, came together in Amsterdam, and since that time they have never split. I wish to thank her for being so patient and full of care with me. We traveled together to these milestones, and I am sure we will be good "travel mates" also through the streets of our future.
Contents

Introduction

I General Concepts

1 Introduction to Scheduling Problems
1.1 Combinatorial Problems
1.1.1 Optimization Problems
1.1.2 Decision Problems
1.1.3 Search Problems
1.2 Constraint-based Formulation
1.3 Search Paradigms
1.4 Scheduling Problems
1.5 Timetabling Problems

2 Local Search
2.1 Local Search Basics
2.2 Local Search Algorithms
2.3 Basic Local Search Techniques
2.3.1 Hill Climbing
2.3.2 Simulated Annealing
2.3.3 Tabu Search
2.4 Improvements on the Basic Techniques
2.5 Local Search & Learning
2.6 Composite Local Search
2.7 Hybrid Techniques
2.7.1 Local Search & Constructive methods
2.7.2 Local Search on Partial Solutions

3 Multi-Neighborhood Search
3.1 Multi-Neighborhood Operators
3.1.1 Neighborhood Union
3.1.2 Neighborhood Composition
3.1.3 Total Neighborhood Composition
3.2 Multi-Neighborhood Solving Strategies
3.2.1 Token-Ring Search
3.3 Multi-Neighborhood Kickers
3.4 Discussion

II Applications

4 Course Timetabling: a Case Study in Multi-Neighborhood Search
4.1 Problem Statement
4.2 Search Space, Cost Function and Initial State
4.3 Neighborhood functions
4.4 Runners and Kickers

7 Other Problems
7.1 Local Search for the Job-Shop Scheduling problem
7.1.1 Problem Statement
7.1.2 Search Space
7.1.3 Neighborhood relations
7.1.4 Search strategies
7.1.5 Experimental results
7.2 The Resource-Constrained Scheduling problem
7.2.1 Problem Description
7.2.2 Local Search components
7.2.3 Experimental results

IV Appendix

A Current best results on the Examination Timetabling problems

Conclusions

Bibliography
Introduction
The Calvin and Hobbes strip reported on the first page is a nutshell description of our work. Similarly to Calvin, we deal with high-quality organization of time. However, our real "homework" is to devise the ETM¹ for drawing up the schedules, rather than carrying out the assignments as Calvin will do. For this purpose, we also have our own Hobbes tiger (i.e., a quality measure) that gives us an objective judgment of the schedules we develop.
In our work we look inside several ETMs, arising from different domains. In scientific terminology, these ETMs are the methods used to tackle scheduling problems. This thesis mainly deals with a specific class of these methods.
Scheduling problems can be defined as the task of assigning one or more resources to activities over a certain time period. These problems are of particular interest both in the research
community and in the industrial environment. They commonly arise in business operations, espe-
cially in the areas of supply chain management, airline flight crew scheduling, and scheduling for
manufacturing and assembling. High-quality solutions to instances of these problems can result in
huge financial savings.
Moreover, scheduling problems also arise in other organizations, such as schools (and at home, as Calvin suggests), universities or hospitals. In these cases, aspects other than the financial one are more meaningful. For instance, a good school or university timetable improves students'
satisfaction, while a well-done nurse rostering (i.e., the shift assignment in hospitals) is of critical
importance for assuring an adequate health-care level to patients.
Unlike Calvin, in order to draw up a schedule for these problems we cannot start with
pencil and paper. Generally speaking, scheduling problems belong to the class of combinatorial
optimization problems. Furthermore, in non-trivial cases these problems are NP-hard, and it is extremely unlikely that anyone could find an efficient method (i.e., a polynomial-time algorithm) for solving them exactly.
For this reason our solution methods are based on heuristic algorithms that do not guarantee finding the "best" solution, but which perform fairly well in practice. The algorithms studied in
this thesis belong to the Local Search paradigm described in the following.
Local Search methods are based on the simple idea of navigating a solution space by iteratively
stepping from one solution to one of its neighbors. The neighborhood of a solution is usually given in an intensional fashion, i.e., in terms of atomic local changes that can be applied to it. Furthermore, each solution is assigned a quality measure by means of a problem-dependent cost function, which is exploited to guide the exploration of the solutions.
On this simple setting it is possible to design a wide variety of abstract algorithms or meta-
heuristics such as Hill Climbing, Simulated Annealing, or Tabu Search. These techniques are
non-exhaustive in the sense that they do not guarantee to find a feasible (or optimal) solution, but
they search non-systematically until a specific stop criterion is satisfied.
Individual heuristics do not always perform well on all problem instances, even though a common requirement for approximation algorithms is to be robust across a wide variety of instances. To cope with this issue, a possible direction to pursue is the employment of several heuristics on the same problem, which should reduce the bias introduced by applying a single specific heuristic to a given instance.
Furthermore, this idea opens the way to a line of research which attempts to investigate new
compound Local Search methods obtained by combination of neighborhoods and basic techniques.
¹ ETM is short for "Effective Time Management" (see the first page).
As a by-product of this investigation, we obtained solvers that compete fairly well with state-of-the-art solvers developed ad hoc for the specific problem. Other problems have been taken
into account across our study, but the results on these problems are still preliminary and are only
summarized in this thesis.
The last research line concerns the design and development of an Object-Oriented framework as a general tool for the implementation of Local Search algorithms. Our goal is to obtain a system
that is flexible enough for solving combinatorial problems using a variety of algorithms based on
Local Search.
The framework should help the user derive a neat conceptual scheme of the application, and it should also support the design and rapid implementation of new compound techniques developed along the lines explained above.
This research line has led to the implementation of two versions of the system, called EasyLocal++ [39, 43, 44], which is written in the C++ language². The first, abridged, version of the framework has been made publicly available from the web page http://tabu.dimi.uniud.it/EasyLocal. At the time of publication of this thesis, this version has been downloaded (and hopefully used) by more than 200 users.
The complete version of EasyLocal++, instead, is currently available only on request and it
has been used for the implementation of all the Local Search algorithms presented in this thesis.
² The design and development of EasyLocal++ has been partly supported by the University of Udine.
Thesis Organization
The thesis is subdivided into three main parts, which roughly correspond to the three goals out-
lined above. The first part illustrates the general topics of combinatorial optimization and the
Local Search domain. Furthermore, it contains the description of the Multi-Neighborhood Search
framework, which is one of the main contributions of this research.
Specifically, in Chapter 1 we present the basic concepts of combinatorial optimization and
scheduling problems and we introduce the terminology and the notation used throughout the
thesis. Chapter 2 describes in detail the basic Local Search techniques and some lines of research
that aim at improving the efficacy of Local Search. Chapter 3 concludes the first part of the thesis
and formally introduces the Multi-Neighborhood framework.
The second part of the thesis deals with the application of both basic and novel Local Search
techniques to selected scheduling problems. In Chapter 4 we present a comprehensive case study in
the application of Multi-Neighborhood techniques to the Course Timetabling problem. Chap-
ter 5 contains our research about the solution of the Examination Timetabling problem, while
Chapter 6 presents some results on the min-Shift Design problem. Preliminary results on other
problems, namely Job-Shop Scheduling and Resource-Constrained Scheduling, are pre-
sented in Chapter 7.
The third part of the thesis is devoted to the description of EasyLocal++, an Object-Oriented
framework for Local Search. In Chapter 8 we describe thoroughly the software architecture of the
framework and the classes that make up EasyLocal++. Finally, in Chapter 9 we present a case
study in the development of applications using EasyLocal++.
At the end of the thesis we draw some conclusions about this research. Furthermore, we describe
the lines of research that can be further investigated on the basis of the results presented in this
work.
I General Concepts

1 Introduction to Scheduling Problems
Combinatorial Optimization [106] is the discipline of decision making in the case of discrete alternatives. Specifically, combinatorial problems arise in many areas where computational methods are applied, such as artificial intelligence, bio-informatics or electronic commerce, just to mention a few. Notable examples of this kind of problem include resource allocation, routing, packing, planning, scheduling, hardware design, and machine learning.
The class of scheduling problems is of specific interest for this study, since we apply the techniques we have developed to this kind of problem. Essentially, a scheduling problem can be defined as the problem of assigning resources to activities over a time period.
In this chapter we define the basic framework for dealing with scheduling problems and we
introduce the terminology and the notation used throughout the thesis.
[Figure 1.1: pictorial representation of the min-Graph Coloring problem, showing a graph with vertices a–h, candidate colors 1–5, a coloring c and the objective function value f(c).]
Many combinatorial problems of practical interest are NP-hard. Therefore, unless P = NP, they cannot be solved to optimality in polynomial time. For this reason there is much interest in heuristics and in approximation algorithms that lead to near-optimal solutions in reasonable running times.
In the remaining of this section we formally present the problem classes outlined above, and
we provide an example of a family of combinatorial problems.
An example of a combinatorial optimization problem that will be used throughout the thesis is the so-called min-Graph Coloring problem [57, Prob. GT4, page 191]. A pictorial representation of the problem is provided in Figure 1.1, whereas its statement is as follows: given an undirected graph G = (V, E), find a coloring of the vertices, i.e., a function c : V → {1, . . . , |V|}, that assigns different colors to adjacent vertices and uses the minimum number of colors. The set of solutions is composed of the valid colorings only, i.e., the functions c such that c(v) ≠ c(w) if (v, w) ∈ E. The objective function simply accounts for the overall number of colors used by c, i.e., f(c) = |{c(v) : v ∈ V}|.
The decision variables correspond to the vertices of the graph (i.e., the domain of c), and they can assume values in the set {1, . . . , |V|}.
An example of a decision problem related to the Graph Coloring problem presented above is the following (the k-Graph Coloring problem): given a graph G = (V, E) and a positive integer k, decide whether there exists a valid coloring of G that uses at most k colors.
It is worth noticing that a given optimization problem can be translated into a sequence of
decision problems. The translation strategy employs a binary search for the optimal bound k on
the cost function, and it introduces a small overhead that is logarithmic in the size of the optimal
solution value f (x∗ ).
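To make the translation concrete, the sketch below shows how the optimal cost can be obtained through a logarithmic number of calls to a decision procedure. The helper name existsSolutionWithCost is hypothetical and only stands for "the decision version can be solved for a given bound"; it is not a routine used in the thesis.

```cpp
#include <functional>

// Finds the optimal cost value by binary search over the bound k, given a
// decision procedure that answers whether a solution of cost at most k exists.
// Assumes costs are integers in [lo, hi] and that a solution exists for k = hi.
int optimalCostByBinarySearch(int lo, int hi,
                              const std::function<bool(int)>& existsSolutionWithCost)
{
  while (lo < hi) {
    int mid = lo + (hi - lo) / 2;
    if (existsSolutionWithCost(mid))
      hi = mid;      // a solution of cost <= mid exists: tighten the upper bound
    else
      lo = mid + 1;  // no such solution: the optimum is larger than mid
  }
  return lo;         // lo == hi == optimal value of the cost function
}
```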
Notice that a search problem can also be viewed as an instance of an optimization problem, where the cost function f(x) = c is constant.
The k-Graph Coloring problem can alternatively be viewed as an instance of a search problem, once the set of usable colors is fixed. Its statement is as follows: given a graph G = (V, E) and a set of k colors, find a function c : V → {1, . . . , k} such that c(v) ≠ c(w) for each (v, w) ∈ E, if one exists.
It is easy to recognize that this definition is an instance of Definition 1.3, where the role of variables, domains and constraints is made explicit. In detail, S = D1 × · · · × Dn and F is intensionally defined by the constraint relations, i.e., d̄ = (d1, . . . , dn) ∈ F if and only if, for each i = 1, . . . , n and each constraint cj, di ∈ cj.
In order to represent the combinatorial optimization problems in a constraint satisfaction formu-
lation we must again take into account a cost criterion for evaluating the quality of the assignments.
The formal definition of optimization problems in this formulation is as follows:
Furthermore, for the definition of the decision problem within the constraint satisfaction frame-
work, we add a bound k to the objective function. This definition is straightforward and we do
not provide the details of its statement.
In the following chapters we will use the combinatorial and the constraint satisfaction formulations of the problems interchangeably.
A constructive backtracking algorithm for k-Graph Coloring extends a partial coloring, one vertex at a time, until a feasible coloring has been found. A state of the search, for this algorithm, is a partial function c : V → {0, . . . , k − 1}.
At each level i the algorithm selects a still uncolored vertex vi and tries to assign it a color from the set {0, . . . , k − 1}. If no color can be assigned to vi without violating the constraint² ∀u ∈ adj(vi): c(u) ≠ c(vi), then the assignment made at the previous level is undone and a new assignment for the vertex vi−1 is tried.
Again, if no new feasible assignment at level i − 1 can be found, the algorithm “backtracks” at
level i − 2 and so on. Conversely, once the algorithm reaches a complete assignment the search is
stopped and the solution is returned.
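A minimal sketch of this backtracking scheme, with a fixed rather than dynamic vertex ordering, is the following. The graph representation and the function names are illustrative and not taken from the thesis.

```cpp
#include <vector>

using Graph = std::vector<std::vector<int>>;  // adjacency lists: adj[v] = vertices adjacent to v

// Returns true iff color col can be assigned to vertex v without clashing with
// the colors already assigned to its neighbors (color -1 means "uncolored").
static bool feasible(const Graph& adj, const std::vector<int>& color, int v, int col)
{
  for (int u : adj[v])
    if (color[u] == col) return false;
  return true;
}

// Tries to extend the partial coloring to vertices v, v+1, ..., |V|-1
// using colors {0, ..., k-1}; backtracks when no color fits.
static bool extend(const Graph& adj, std::vector<int>& color, int v, int k)
{
  if (v == static_cast<int>(adj.size())) return true;   // complete assignment reached
  for (int col = 0; col < k; ++col) {
    if (feasible(adj, color, v, col)) {
      color[v] = col;
      if (extend(adj, color, v + 1, k)) return true;
      color[v] = -1;                                     // undo and try the next color
    }
  }
  return false;                                          // triggers backtracking at level v-1
}

// Solves k-Graph Coloring; on success, color[v] holds the color of vertex v.
bool kColoring(const Graph& adj, int k, std::vector<int>& color)
{
  color.assign(adj.size(), -1);
  return extend(adj, color, 0, k);
}
```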
The worst-case time performance of this algorithm is clearly exponential in the size of the graph.
Nevertheless, it is possible to obtain reasonable running times by employing a clever ordering of
the sequence of vertices explored. For example, one of the best constructive algorithms for Graph Coloring is the DSATUR algorithm [12], which is based on the aforementioned scheme and employs a dynamic ordering of the vertices based on their constrainedness.
If we remove the backtracking mechanism from the proposed algorithm we obtain a so-called
heuristic search technique, which, in turn, gives rise to an incomplete method.
The other class of search approaches is that of selective methods. These approaches are based on the exploration of a search space composed of all the possible complete assignments of values to the decision variables, including the infeasible ones. For this reason these methods are also called iterative repair techniques. Among others, the Local Search paradigm,
which is the topic of this thesis, belongs to this family of methods. This paradigm is described in
more detail in Chapter 2. We now provide a sketch of a selective algorithm for k -Graph Coloring
based on Local Search. A complete description of the algorithm is provided in Chapter 9.
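The following is a simplified stand-in for such a selective algorithm, written in a min-conflicts spirit: it is only an illustrative sketch, not the algorithm of Chapter 9, and the function names and stopping criterion are assumptions. It starts from a random complete (possibly infeasible) coloring and repeatedly recolors a conflicting vertex.

```cpp
#include <random>
#include <vector>

using Graph = std::vector<std::vector<int>>;  // adjacency lists

// Number of neighbors of v that currently share its color.
static int conflictsAt(const Graph& adj, const std::vector<int>& color, int v)
{
  int c = 0;
  for (int u : adj[v]) if (color[u] == color[v]) ++c;
  return c;
}

// Min-conflicts style local search for k-Graph Coloring.
bool localSearchColoring(const Graph& adj, int k, std::vector<int>& color,
                         int maxIterations = 100000)
{
  std::mt19937 rng(12345);
  std::uniform_int_distribution<int> randColor(0, k - 1);
  const int n = static_cast<int>(adj.size());
  color.resize(n);
  for (int v = 0; v < n; ++v) color[v] = randColor(rng);  // random complete assignment

  for (int it = 0; it < maxIterations; ++it) {
    // collect the vertices currently involved in at least one violation
    std::vector<int> conflicting;
    for (int v = 0; v < n; ++v)
      if (conflictsAt(adj, color, v) > 0) conflicting.push_back(v);
    if (conflicting.empty()) return true;                 // feasible coloring found

    // pick one of them at random and recolor it with a minimum-conflict color
    std::uniform_int_distribution<int> pick(0, static_cast<int>(conflicting.size()) - 1);
    int v = conflicting[pick(rng)];
    int best = color[v], bestConf = conflictsAt(adj, color, v);
    for (int c = 0; c < k; ++c) {
      color[v] = c;
      int conf = conflictsAt(adj, color, v);
      if (conf < bestConf) { bestConf = conf; best = c; }
    }
    color[v] = best;
  }
  return false;   // stop criterion reached without a feasible coloring
}
```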
As mentioned, in this thesis we deal with incomplete search techniques, most of the time characterized also by some randomized element. For this reason, the performance evaluation of the algorithms is not a simple task, since the well-established worst-case analysis is not applicable to these techniques. Moreover, all the decision versions of the problems taken into account in this work are at least NP-complete; therefore the theoretical worst-case performance of the algorithms is already known, and we rather aim to empirically investigate their behavior on a set of benchmark instances.
The experimental analysis is performed by running the algorithms on a set of instances for a number of trials, recording some performance indicators (such as the running time and the quality of the solutions found). A measure of the algorithms' behavior can then be obtained by means of a statistical analysis of the collected values.
² We use the notation adj(v) to denote the set of vertices adjacent to v, i.e., adj(v) = {u | (u, v) ∈ E}.
Here we have just sketched the general lines of the methodology employed, but the experimental analysis of heuristics is itself a growing research area. Johnson [75] has recently tried to fill the gap between the theoretical analysis and the experimental study of algorithms. On the latter subject, among others, Taillard [125] has recently proposed a precise methodology for the comparison of heuristics based on non-parametric statistical tests.
(1) Respected deadlines: for each task Ti there is a deadline di which indicates that the task must complete its execution within di time units after the beginning of the first task. Without loss of generality, we assume that the first task begins at time slot 0. This way, the absolute deadline of each task can be expressed as σi + τi ≤ di for all i = 1, . . . , n.
(2) Processor compatibility: each task cannot be executed on every processor, but only on a specific subset of them. More formally, there is a binary compatibility matrix Cij that states whether or not task Ti can be executed on processor Pj. This condition can be expressed as: pij = 1 only if Cij = 1.
(3) Mutual exclusion: no pair of tasks (Ti, Ti′) can be executed simultaneously on the same processor Pj. This is usually called a disjunctive constraint and is expressed as pij = pi′j = 1 ⇒ σi + τi ≤ σi′ ∨ σi′ + τi′ ≤ σi.
(4) Resource capacity: there is an integer valued matrix Aik that accounts for the amount of
the resource Rk needed for each task Ti . For each resource Rk , at most bk units are available
at all times. Thus, at each time slot, and for each resource Rk , the amount of Rk allocated to
the tasks currently executed must not exceed the overall bound bk .
(5) Schedule order: there is a precedence relation (T , ≺) on the set of tasks such that Ti ≺ Ti′
means that Ti is a prerequisite for Ti′ . In other words, Ti must complete its execution before
Ti′ starts its own. This corresponds to the condition σi + τi ≤ σi′ .
In addition to the satisfaction of the above constraints, we also require that an objective function is optimized. A natural choice for this function is the overall finishing time (called makespan). In this case the function can be defined as f(σ̄, p̄) = max{σi + τi | i = 1, . . . , n}.
Alternative formulations of the objective function include flow time, lateness, tardiness, earliness, or weighted sums of these criteria to reflect the relative importance of tasks. We do not define these criteria here, but refer again to [110] for their formal definition. The problem then becomes the following: find starting times σ̄ and a processor assignment p̄ that satisfy constraints (1)–(5) and minimize the objective function f.
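In compact form, using the notation introduced above and the makespan objective, the model can be summarized as follows; this is a restatement for the reader's convenience, not a verbatim formulation from the thesis.

```latex
\begin{align*}
\min_{\bar{\sigma},\,\bar{p}} \quad & \max_{i=1,\dots,n} (\sigma_i + \tau_i)
  && \text{(makespan)}\\
\text{s.t.}\quad
 & \sigma_i + \tau_i \le d_i && \text{(respected deadlines)}\\
 & p_{ij} = 1 \Rightarrow C_{ij} = 1 && \text{(processor compatibility)}\\
 & p_{ij} = p_{i'j} = 1 \Rightarrow \sigma_i + \tau_i \le \sigma_{i'} \,\vee\, \sigma_{i'} + \tau_{i'} \le \sigma_i
  && \text{(mutual exclusion)}\\
 & \textstyle\sum_{i \,:\, \sigma_i \le t < \sigma_i + \tau_i} A_{ik} \le b_k \quad \forall t,\ \forall R_k
  && \text{(resource capacity)}\\
 & T_i \prec T_{i'} \Rightarrow \sigma_i + \tau_i \le \sigma_{i'} && \text{(schedule order)}
\end{align*}
```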
The proposed model makes the assumption that the tasks are atomic and there is no possibility
of preemption, i.e., once a task is allocated to a processor, it must be entirely performed and no
interleaving with other tasks is allowed. However, we may relax this requirement and allow the
preemption of tasks.
An important instance of the scheduling model, that conceptually stands between the non-
preemptive and the preemptive scheduling, is the so-called shop scheduling described in the fol-
lowing example.
Figure 1.2: A Gantt chart representation of a schedule for a Shop scheduling problem
The diagram should be read as follows. The X axis represents the time scale, and each time slot is delimited by vertical dashed lines. The Y axis reports the processors' schedules, each on a separate row. On each row there is a set of bars representing the tasks scheduled on that processor. Each bar spans a variable length, equal to the length of the task. The arrangement of the tasks on a processor line is made according to their starting times σ.
As a final remark, we want to emphasize that in this thesis we deal with deterministic schedul-
ing, which means that all the entities of the scheduling problem are known in advance. Another
class of scheduling problems, that is not taken into account in this thesis, is the class of stochastic
scheduling. This class of problems is characterized by the presence of uncertainty in some schedul-
ing element. For example, the processing times in some manufacturing environments cannot be
completely predicted in advance, but are subject to a probability distribution.
(2) there must be sufficient other resources to service all the events at the times they have been
scheduled.
The assignment τ is called a timetable.
The general timetabling problem can easily be modeled within the already presented Graph Coloring framework. The graph encoding is as follows: each event is associated with a vertex of the graph, and there is an edge between two vertices if the corresponding events clash, i.e., they cannot be scheduled in the same period because at least one individual has to attend both of them. Periods are then regarded as colors, and a scheduling of events to periods is simply a coloring of the graph.
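As an illustration of this encoding, the sketch below builds the conflict graph from the attendee lists of the events; the event/attendee representation is an assumption made for the example, not a data structure from the thesis.

```cpp
#include <set>
#include <vector>

// events[e] = set of individuals who must attend event e.
using Attendees = std::set<int>;

// Builds the conflict graph of a timetabling instance: one vertex per event,
// and an edge between two events whenever some individual must attend both.
// Coloring this graph with p colors yields an assignment of events to periods.
std::vector<std::vector<int>> conflictGraph(const std::vector<Attendees>& events)
{
  const int n = static_cast<int>(events.size());
  std::vector<std::vector<int>> adj(n);
  for (int e1 = 0; e1 < n; ++e1)
    for (int e2 = e1 + 1; e2 < n; ++e2) {
      bool clash = false;
      for (int person : events[e1])
        if (events[e2].count(person)) { clash = true; break; }
      if (clash) {              // the two events cannot share a period
        adj[e1].push_back(e2);
        adj[e2].push_back(e1);
      }
    }
  return adj;
}
```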
Considerable attention has been devoted to automated timetabling during the last forty years.
Starting with [67], many papers related to automated timetabling have been published in conference
proceedings and journals. In addition, several applications have been developed and employed with
good success.
2 Local Search

The simplest way to convey the intuition behind Local Search is the following: imagine a climber who is ascending a mountain on a foggy day. She can see the slope of the terrain close to her, but she cannot see where the top of the mountain is. Hence, her decisions about which way to go must rely only upon the slope information. The climber has to choose a strategy to cope with this situation, and a reasonable idea is, for example, to go uphill at every step until she reaches a peak. However, because of the fog, she will never be sure whether the peak she has reached is the real summit of the mountain, or just a mid-level crest.
Interpreting this metaphor within the optimization framework, we can view the mountain as the shape of the objective function. The choice among the possible actions for improving the objective function must be made by looking at nearby, or local, solutions only. Finally, the inherent problem of this kind of search is that, in general, nobody can assure that the best solution found by the Local Search procedure is actually the globally best solution rather than only a so-called local optimum.
Local Search is a family of general-purpose techniques for search and optimization problems,
that are based on several variants of the simple idea presented above. In a way, each technique
prescribes a different strategy for dealing with the foggy situation. The application of Local Search
algorithms to optimization problems dates back to the early 1960s [48]. Since that time the interest in this subject has grown considerably in the fields of Operations Research, Computer Science and
Artificial Intelligence.
Local Search algorithms are non-exhaustive in the sense that they do not guarantee to find a
feasible (or optimal) solution, but they search non-systematically until a specific stop criterion is
satisfied. Nevertheless, these techniques are very appealing because of their effectiveness and their
widespread applicability.
Some authors also classify other optimization paradigms as belonging to the Local Search family,
such as Evolutionary Algorithms and Neural Networks. However, these paradigms are beyond the
scope of this thesis and will not be presented. A complete presentation of these topics and their
relationships with the Local Search framework is available, e.g., in [2].
In the rest of the chapter we will describe more formally the concepts behind this optimization
paradigm, presenting the basic techniques proposed in the literature and some improvements over
the basic strategies. Finally we will outline our attempt to systematize the class of composite
strategies, based on the employment of more than one definition of proximity.
In order to apply Local Search to a given problem, we have to define three entities, namely the search space, the neighborhood relation, and the cost function.
A combinatorial problem upon which these three entities are defined is called a Local Search
problem. A given optimization problem can give rise to different Local Search problems for different
definitions of these entities.
If the previous requirements are fulfilled, we say that we have a valid representation or valid
formulation of the problem. For simplicity, we will write just S for Sπ when the instance π (and
the corresponding problem Π) is clear from the context. Furthermore, we will refer to elements of
S as solutions.
In general, the search space Sπ and the set of solutions S of a problem are equal, but there are
a few cases in which these entities differ. In such case we work on an indirect representation of the
search space, and we require that the representation preserves the information about the optimal
solutions.
For example, for the family of Shop Scheduling problems, a common search space is the set of permutations of the tasks on the different processors. This is an indirect representation of the schedule starting times, under the assumption of dealing with left-justified schedules³. The encoding clearly preserves the optimal solutions when we aim at minimizing the makespan.
For each s, the set N(s) need not be listed explicitly. In general it is implicitly defined by referring to a set of possible moves, which define transitions between solutions. Moves are usually defined in an intensional fashion, as local modifications of some part of s. The "locality" of moves (under a correspondingly appropriate definition of distance between solutions) is one of the key ingredients of Local Search, and it has actually given the name to the whole search paradigm. Nevertheless, the definition above does not imply any notion of "closeness" among neighbors, and indeed complex neighborhood definitions can be used as well.
The cost function is used to drive the search toward good solutions of the search space and is
used to select the move to perform at each step of the search.
For search problems, the cost function F is generally based on the so-called distance to feasi-
bility, which accounts for the number of constraints that are violated. For optimization problems,
instead, F takes into account also the objective function of the problem.
³ Given a sequence of tasks represented as a permutation, the left-justified schedule is the schedule that complies with the sequence and assigns to each task its earliest possible starting time.
In this case, the cost function is typically defined as a weighted sum of the value of the objective
function and the distance to feasibility (which accounts for the constraints). Usually, the highest
weight is assigned to the constraints, in order to give preference to feasibility over optimality.
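In symbols, one common choice (the notation is ours, introduced only for illustration) is

```latex
F(s) \;=\; w \cdot d(s) \;+\; f(s),
```

where d(s) is the distance to feasibility (e.g., the number of violated constraints), f(s) is the objective function, and the weight w is chosen large enough that any reduction of d(s) dominates any improvement of f(s).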
In some optimization problems, the search space can be defined in such a way that it represents
only the feasible solutions. In this case, the cost function generally coincides with the objective
function of the problem.
The aim of the search, in the case of minimization problems, is to "descend" the hilly landscape looking for the lowest valley.
In MCHC, the selection of the move is divided into two phases, which are performed using
two different strategies. Specifically, MCHC first looks randomly for one variable v of the current
solution s that is involved in at least one constraint violation. Subsequently, it selects among
the moves in N (s) that change only the value of v, the one that creates the minimum number of
violations (arbitrarily breaking ties).
Other forms of Hill Climbing, like for example the Fast Local Search technique of Tsang and
Voudouris [129], can be regarded as improvements of the ones already presented. However, a
complete discussion of these techniques falls beyond the scope of this thesis.
Both RHC and MCHC accept the selected move if and only if the cost function value is improved or left unchanged. Therefore, like GSAT, they can navigate plateaus, but they are trapped in strict local minima.
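A minimal sketch of one MCHC step is shown below. The State interface (variablesInConflict, movesChanging, violationsAfter) is an assumption made for the example and does not correspond to the classes used in the thesis; the sketch also assumes that at least one variable is currently in conflict.

```cpp
#include <limits>
#include <random>
#include <vector>

struct Move { int variable; int newValue; };   // illustrative move type

template <typename State>
Move minConflictStep(const State& s, std::mt19937& rng)
{
  // Phase 1: pick uniformly at random a variable involved in some violation.
  std::vector<int> conflicting = s.variablesInConflict();   // assumed non-empty
  std::uniform_int_distribution<std::size_t> pick(0, conflicting.size() - 1);
  int v = conflicting[pick(rng)];

  // Phase 2: among the moves that change only v, choose one that creates the
  // minimum number of violations (ties broken arbitrarily, here: first found).
  Move best{v, 0};
  int bestViolations = std::numeric_limits<int>::max();
  for (const Move& m : s.movesChanging(v)) {
    int viol = s.violationsAfter(m);
    if (viol < bestViolations) { bestViolations = viol; best = m; }
  }
  return best;   // the caller accepts it only if the cost does not increase
}
```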
Different stop criteria have been used for Hill Climbing procedures. The simplest one is based
on the total number of iterations: the search is stopped when a predetermined number of steps has
been performed. An improved version of this criterion is based on the number of iterations elapsed without improving the cost of the best solution found so far. This way, search trials that are exploring promising paths are allowed to run longer than those that are stuck in regions without good solutions.
Other ad hoc early termination procedures are generally used such as stopping the iteration
process if the cost function has crossed a certain value. In principle, Hill Climbing procedures could
also stop when they reach a strict local minimum (i.e., a solution whose neighborhood is made up
of solutions having greater cost). Unfortunately, though, in general they cannot recognize such a
situation.
The way in which the temperature is decreased, i.e., the cooling schedule, can be different from the one mentioned above, which is called geometric and is based on the parameter α.
For example, two other cooling schemes, called polynomial and efficient are described in [1].
We do not discuss them in detail, but we just mention that they introduce a very limited form of
memory. In fact, they are both based on monitoring the quality of the solutions visited at the given
temperature Tn , and choosing the new temperature accordingly. Specifically, in the polynomial
scheme, the new temperature Tn+1 is chosen on the basis of the standard deviation of the cost
function in all solutions at temperature Tn . Similarly, in the efficient scheme, Tn+1 is based on
the number of accepted moves at temperature Tn. Another kind of schedule, called adaptive [49], also allows reheating the system, depending on some statistics of the cost function (mainly the standard deviation of the cost changes).
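For reference, a sketch of the standard acceptance rule combined with the geometric cooling scheme is given below; the parameter values are illustrative defaults, not the settings used in the thesis.

```cpp
#include <cmath>
#include <random>

// Metropolis acceptance criterion: always accept improving (or equal) moves;
// accept a worsening move of size delta with probability exp(-delta / T).
bool accept(double delta, double temperature, std::mt19937& rng)
{
  if (delta <= 0.0) return true;
  std::uniform_real_distribution<double> coin(0.0, 1.0);
  return coin(rng) < std::exp(-delta / temperature);
}

// Geometric cooling: after a fixed number of sampled moves at temperature T_n,
// the new temperature is T_{n+1} = alpha * T_n, with 0 < alpha < 1.
double cool(double temperature, double alpha = 0.95)
{
  return alpha * temperature;
}
```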
One of the key issues of Tabu Search is the tabu tenure mechanism, i.e., the way in which
we fix the number of iterations that a move should be considered as tabu. The basic mechanism
described above, which is based on the fixed-length tabu list, has been refined and improved by
several authors with the purpose of increasing the robustness of the algorithms.
A first improvement, proposed by Taillard [124] and commonly accepted, is the employment of
a tabu list of variable size. Specifically, the size of the tabu list is kept at a given value for a fixed
number of iterations, and then it is changed. For setting the new length, there is a set of candidate
lengths, and they are used circularly.
A further improvement of this idea is the one proposed by Gendreau et al. [59]: each performed move is inserted in the tabu list together with the number of iterations tabu_iter for which it is going to be kept in the list. The number tabu_iter is randomly selected between two given parameters tmin and tmax (with tmin ≤ tmax). Each time a new move is inserted in the list, the value tabu_iter of all the moves in the list is updated (i.e., decremented), and when it reaches 0, the move is removed.
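A sketch of this mechanism is shown below. The Move type is illustrative, and for simplicity the sketch compares moves directly, whereas a real Tabu Search usually stores and checks the inverses of the performed moves.

```cpp
#include <list>
#include <random>
#include <utility>

struct Move { int variable; int newValue; };   // illustrative move type

// Tabu list with randomized tenure in the spirit of Gendreau et al. [59].
class TabuList {
public:
  TabuList(int tmin, int tmax) : tenure_(tmin, tmax) {}

  // Inserts the move just performed, decrementing the residual tenure of the
  // stored moves and dropping those whose tenure has expired.
  void insert(const Move& m, std::mt19937& rng)
  {
    for (auto it = entries_.begin(); it != entries_.end(); ) {
      if (--(it->second) <= 0) it = entries_.erase(it);
      else ++it;
    }
    entries_.emplace_back(m, tenure_(rng));    // random tenure in [tmin, tmax]
  }

  // A move is tabu if it is still stored in the list.
  bool isTabu(const Move& m) const
  {
    for (const auto& e : entries_)
      if (e.first.variable == m.variable && e.first.newValue == m.newValue)
        return true;
    return false;
  }

private:
  std::list<std::pair<Move, int>> entries_;    // (move, residual tenure)
  std::uniform_int_distribution<int> tenure_;
};
```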
More complex prohibition mechanisms are based on some form of long-term memory. For example, in the frequency-based tabu list a table of frequencies of accepted moves is maintained, and a move is regarded as tabu if its frequency is greater than a given threshold. This way, cycles whose length is greater than the tabu list length can be prevented.
Regarding the aspiration function, we have mentioned that several criteria have been proposed
in the literature. The most common one is the following: if a move leads to a solution that is
better than the current best solution found so far, it is accepted even if it is tabu. In other words,
assuming s∗ is the current best solution, the aspiration function A is such that A(F(s)) = F(s∗) − ε
for all s.
In some cases, the aspiration mechanism is used to protect the search from the possibility that
in a given state all moves are tabu. In such cases, the aspiration function is set in such a way that
at least one move fulfills its criterion, and its tabu status is removed.
In other cases, the aspiration mechanism is set in such a way that, if a move with a big impact
on the solution is performed, the tabu status of other lower-influence moves is dropped. The
underlying idea is that after a big change, the effect of moves has changed completely, therefore
there is no reason to keep them tabu. Other types of aspiration functions are defined in [65].
One example of such a strategy is the shifting penalty (see, e.g., [59]), a mechanism that continuously changes the shape of the cost function in an adaptive manner. This way, it causes the Local Search algorithm to visit solutions that have a different structure than the previously visited ones. In detail, the constraints that have been satisfied for a given number of iterations are relaxed, in order to allow the exploration of solutions where those constraints do not hold. Conversely, if some constraint has not been satisfied for a long time, it is tightened, with the aim of driving the search toward its satisfaction.
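One common way to realize this idea, sketched below under assumed parameter names (gamma, K) and not necessarily matching the exact scheme of [59], is to keep an adaptive weight for each constraint and to update it according to how long the constraint has been satisfied or violated.

```cpp
#include <vector>

// Adaptive weights for a shifting-penalty-like mechanism: the cost function is
// a weighted sum of the constraint violations, and the weight of a constraint
// is decreased (relaxed) after K consecutive iterations in which it holds, and
// increased (tightened) after K consecutive iterations in which it is violated.
struct ShiftingPenalty {
  std::vector<double> weight;       // one weight per constraint
  std::vector<int> satisfiedFor;    // consecutive iterations satisfied
  std::vector<int> violatedFor;     // consecutive iterations violated
  double gamma = 2.0;               // adaptation factor (> 1)
  int K = 10;                       // adaptation period

  explicit ShiftingPenalty(std::size_t numConstraints)
    : weight(numConstraints, 1.0),
      satisfiedFor(numConstraints, 0),
      violatedFor(numConstraints, 0) {}

  // To be called once per iteration with the current violation status.
  void update(const std::vector<bool>& violated)
  {
    for (std::size_t c = 0; c < weight.size(); ++c) {
      if (violated[c]) { ++violatedFor[c]; satisfiedFor[c] = 0; }
      else             { ++satisfiedFor[c]; violatedFor[c] = 0; }
      if (satisfiedFor[c] == K) { weight[c] /= gamma; satisfiedFor[c] = 0; }
      if (violatedFor[c] == K)  { weight[c] *= gamma; violatedFor[c] = 0; }
    }
  }
};
```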
Another control knob that can be used specifically for the Tabu Search technique is the length
of the tabu list. The dynamic tabu list approach adaptively modifies the tabu list length in the
following way: if the sequence of the last performed moves is improving the cost function, then the
tabu list length is shortened to intensify the search; otherwise, if a sequence of moves induces a
degradation, the length of the tabu list is extended to escape from that region of the search space.
As already noted, Local Search meta-heuristics sit at an abstract level with respect to the underlying Local Search problem. For this reason, Local Search techniques can be easily composed and hybridized with other methods. In addition, the Local Search paradigm can be regarded as a useful laboratory for the study of learning mechanisms, which can bias the search toward more promising regions of the search space.
In the following we survey some attempts, presented in the literature, to investigate these issues.
⁶ With this term we mean the plot of the cost function over the variable assignments.
Multi-Neighborhood Search
The aforementioned approaches can be neatly classified according to the level of granularity they
belong to. In fact, both the tandem and the WalkSAT approaches aim at combining whole al-
gorithms (either directly based on Local Search or not), whereas the VNS strategy deals with
neighborhood combination at a finer level of granularity.
In our recent work we attempt to systematize these ideas in a common framework. In [42]
we propose a set of operators for combining neighborhoods and algorithms that generalizes the
aforementioned techniques. We name this approach Multi-Neighborhood Local Search and we will
present it in detail in Chapter 3. Moreover, we exemplify its use in the solution of various scheduling
problems throughout this thesis.
The idea behind Multi-Neighborhood Search is to define a set of operators for combining neigh-
borhoods (namely the neighborhood union and the neighborhood composition) and basic techniques
based on different neighborhoods (what we call token-ring search).
Furthermore, with these operators it is possible to define additional Local Search components, which we call kickers, that deal with perturbations in the spirit of the random walks of the WalkSAT algorithm.
For example, Yoshikawa et al. [143] combine a sophisticated greedy algorithm, called Really-
Fully-Lookahead, for finding the initial solution and a Local Search procedure for the optimization
phase. The Local Search algorithm employed is the Min-Conflict Hill-Climbing (MCHC), defined
by Minton et al. [99].
In [122], Solotorevsky et al. employ a propose-and-revise rule-based approach to the Course
Timetabling problem. The solution is built by means of the Assignment Rules. When the
construction reaches a dead-end, the so-called Local Change Rules come into play so as to find a
possible assignment for the unscheduled activity. However, the authors only perform a single step
before restarting the construction, and their aim is only to accommodate the pending activity,
without any look-ahead mechanism.
Other researchers, like Feo and Resende in the GRASP procedure [50], propose an iterative
scheme that uses the propose-and-revise approach as an inner loop. Starting with a constructive
method, a Local Search scheme is later applied. Some kind of adaptation will guide the constructive
phase to a new attempt that again will be followed by a local-search step.
Glover et al. [63] present an adaptive depth-first search procedure that combines elements of
Tabu Search and Branch & Bound for the solution of the min-Graph Coloring problem. They
build an initial solution through a greedy heuristic (called DANGER), and then they try to improve
it using Tabu Search to control the backtracking mechanism. The Tabu Search phase prevents the
algorithm from falling into already visited local optima by imposing additional backtracking steps. In the algorithm, the “degree” of Local Search may be controlled by a parameter that ranges from a pure Branch & Bound algorithm to a pure Tabu Search one.
3 Multi-Neighborhood Search

One of the most critical features of Local Search is the definition of the neighborhood structure. In fact, for most popular problems, many different neighborhoods have been considered and experimented with. For example, at least ten different kinds of neighborhood have appeared in the literature for the Job-Shop Scheduling problem (see [131] for a recent review). Moreover, for most common problems, there is more than one neighborhood structure that is sufficiently natural and intuitive to deserve systematic investigation.
We believe that the exploitation of different neighborhood structures, or even different meta-
heuristics, in a compound Local Search strategy can improve the results with respect to the algo-
rithms that use only a single structure or technique.
Evidence for this conjecture is provided in Section 9.6.2, where we present the results of a compound strategy for the Graph Coloring problem. Even though in that experiment we employ a single neighborhood function, we show that the use of more than one Local Search technique in a compound strategy outperforms the basic algorithms.
The main motivation for considering combination of diverse neighborhoods is related to the
diversification of the search needed to escape from local minima. In fact, a solution that is a local minimum for a given neighborhood definition is not necessarily a local minimum for another one. For this reason, an algorithm that uses both has better chances of moving toward better solutions.
More generally, the use of different neighborhoods may lead to trajectories that make the overall
search more effective. However, so far little effort has been devoted to the combination of more
than one neighborhood function inside a single Local Search algorithm (see Section 2.6).
In our recent work [38, 42] we attempted to fill this gap, and we have formally defined and investigated some ways of combining different neighborhoods and algorithms. We defined a set of operators that automatically compound basic neighborhoods, and a solving strategy that combines several algorithms, possibly based on different neighborhoods. We named this approach Multi-Neighborhood Search.
In this chapter we provide a formal description of the Multi-Neighborhood Search framework
and we describe its use in the solution of an actual scheduling problem, namely the Course
Timetabling problem.
[Figure: a state si and its basic neighborhoods N1(si) and N2(si), together with the compound neighborhood obtained from them.]
Neighborhood total composition: this is a more general case of neighborhood union and composition. As with the composition operator, the atomic moves are chains of moves. However, in this case, the moves in the chain can belong to any of the neighborhoods employed, regardless of the order of composition.
In order to define these operators we have to prescribe how the elements of the compound
neighborhood are generated, starting from the elements of the basic neighborhoods. Additionally,
according to the concepts presented in Section 2.2, for providing a valid definition of the compound
neighborhoods we need also to define the strategies for their exploration (i.e., the criterion for
selecting a random move in the neighborhood and the definition of the move inverse).
Notice that, in this case, the order of the neighborhoods in the combination is not relevant, and it is not worthwhile to repeat the same neighborhood more than once, since set union is an associative, commutative and idempotent operator.
Furthermore, according to this definition, there are several possible choices for the selection of a random move in the neighborhood union. The simplest random distribution for selecting a move in N1 ⊕ · · · ⊕ Nk from a state s first selects uniformly a random i (with 1 ≤ i ≤ k), and then selects a random state s′ ∈ Ni(s) according to the random distribution associated with Ni. In this case, the selection in the neighborhood union is not uniform, because it is not weighted by the cardinality of the sets Ni(s). However, this strategy could be unsatisfactory for some applications. In such cases the random distribution for selecting the neighborhood can be changed by taking into account also the size of each neighborhood, e.g., by selecting Ni with probability proportional to |Ni(s)|.
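One way to realize such a size-aware selection is sketched below, assuming that each basic neighborhood can report its size and draw a uniform random move; the Neighborhood interface is an assumption of the example, not an EasyLocal++ class.

```cpp
#include <random>
#include <vector>

// Selects a random move in the union N1 ⊕ ... ⊕ Nk so that the overall
// distribution is uniform over the union: each neighborhood is chosen with
// probability proportional to its size, then a move is drawn uniformly in it.
template <typename State, typename Neighborhood>
auto randomMoveInUnion(const State& s,
                       const std::vector<Neighborhood*>& basics,
                       std::mt19937& rng)
{
  std::vector<std::size_t> sizes;
  std::size_t total = 0;
  for (const Neighborhood* nb : basics) {
    sizes.push_back(nb->size(s));       // |N_i(s)|
    total += sizes.back();
  }
  std::uniform_int_distribution<std::size_t> pick(0, total - 1);
  std::size_t r = pick(rng);
  for (std::size_t i = 0; i < basics.size(); ++i) {
    if (r < sizes[i]) return basics[i]->randomMove(s);   // uniform within N_i(s)
    r -= sizes[i];
  }
  return basics.back()->randomMove(s);  // not reached when total > 0
}
```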
[Figure: a state si and its neighborhoods N1(si) and N2(si).]
It is worth noticing that, differently from the union operator, the order of the Ni for composition
is relevant, therefore it is meaningful to repeat the same Ni in the composition.
[Figure: a state si, the neighborhoods N1(si) and N2(si), and the compound neighborhood built from them.]
[Figure: a trajectory in the search space from s0 to sn produced by the token-ring alternation of the runners t1, t2, t3.]
Many specific cases of the general idea of the token-ring strategy have been studied in the literature. For example, Hansen and Mladenović [68] explore the case in which each searcher ti adopts a neighborhood larger than that of ti−1.
The effectiveness of token-ring search for two searchers has been stressed by several authors
(see [65]). For example, the alternation of a Tabu Search using a small neighborhood with Hill
Climbing using a larger neighborhood has been used by Schaerf [112] for the high-school timetabling
problem. Specifically, when one of the two searchers, say t2 , is not used with the aim of improving
the cost function, but rather for diversifying the search region, this idea falls under the name of
Iterated Local Search (see, e.g., [123]). In this case the run with t2 is normally called the mutation
operator or the kick move.
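The token-ring strategy itself can be sketched as follows; Runner is an assumed abstract interface standing in for the real components described in Chapter 8, and the stopping rule (a full round of runners without improvement) is one natural choice.

```cpp
#include <limits>
#include <vector>

// Abstract runner interface assumed by the sketch: a runner improves a state
// using its own technique and neighborhood and returns the cost it reached.
template <typename State>
struct Runner {
  virtual ~Runner() = default;
  virtual double run(State& s) = 0;
};

// Token-ring search: the runners t1, ..., tq are invoked circularly, each one
// starting from the state left by the previous one; the search stops when a
// full round of runners yields no improvement of the best cost.
template <typename State>
double tokenRingSearch(State& s, const std::vector<Runner<State>*>& runners)
{
  double bestCost = std::numeric_limits<double>::infinity();
  bool improved = true;
  while (improved) {
    improved = false;
    for (Runner<State>* t : runners) {
      double cost = t->run(s);
      if (cost < bestCost) { bestCost = cost; improved = true; }
    }
  }
  return bestCost;
}
```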
[Figure 3.5: Kicks. A kick leads from state s0 to state s1 through a chain of synergic moves.]
3.4 Discussion
We presented a novel approach for combining different neighborhoods for a given Local Search problem, which generalizes previous ideas presented in the literature. The Multi-Neighborhood Search framework is based on a set of operators for neighborhood combination and on a solving strategy that interleaves basic algorithms, possibly equipped with different neighborhood structures. Furthermore, we define a Local Search component, called kicker, that implements a sort of perturbation mechanism based on neighborhood composition.
The benefits of the proposed approach reside in the complete generality of our neighborhood operators, in the sense that, given the basic neighborhood functions, the synthesis of the proposed algorithms requires only the definition of the synergy constraint, and no further domain knowledge.
Furthermore, as mentioned above, with respect to other Multi-Neighborhood meta-heuristics,
such as Variable Neighborhood Search [68] and Iterated Local Search [93], we have tried to give a
more general picture in which these previous (successful) proposals fit naturally.
Our software tool, presented in Chapter 8, automatically generates the code for the exploration of composite neighborhoods starting from the code for the basic ones. This is very important from a practical point of view, since trying out composite techniques becomes inexpensive not only in terms of design effort, but also in terms of programming effort.
In the remainder of the thesis we will make extensive use of the operators and the strategies presented in this chapter.
II Applications

4 Course Timetabling: a Case Study in Multi-Neighborhood Search
The university Course Timetabling problem consists in the weekly scheduling of a set of lectures for several university courses within a given number of rooms and time periods.
There are various formulations of Course Timetabling (see, e.g., [114]), which differ from each other mostly in the (hard) constraints and in the objectives (or soft constraints) they consider. Constraints mainly concern the overlapping of lectures belonging to the same curriculum (i.e., having students in common) and the simultaneous assignment of more than one lecture to the same room. Objectives are related to the aim of obtaining a compact schedule of the lectures belonging to the same curriculum, and to the conflicting goal of spreading the lectures of the same course over a minimum number of days.
In this chapter we present an investigation of Multi-Neighborhood Search methods (see Chap-
ter 3) in the domain of Course Timetabling. For the sake of generality, in this work we consider
a basic version of the problem.
We consider this study as a first step toward a full understanding of the capabilities of Multi-
Neighborhood techniques.
(2) Room Occupancy (hard): Two distinct lectures cannot take place in the same room in the
same period. Furthermore, each lecture cannot take place in more than one room.
(3) Conflicts (hard): Lectures of courses in the same curriculum must be all scheduled at dif-
ferent times. Similarly, lectures of courses taught by the same teacher must also be scheduled
at different times.
To take these constraints into account, we define a conflict matrix CM of size q × q, such that cmij = 1 if there is a clash between courses ci and cj, and cmij = 0 otherwise.
(4) Availabilities (hard): Teachers might not be available for some periods. We define an availability matrix A of size q × p, such that aik = 1 if lectures of course ci can be scheduled at period k, and aik = 0 otherwise.
(5) Room Capacity (soft): The number of students that attend a course must be less than or
equal to the number of seats of all the rooms that host its lectures.
(6) Minimum working days (soft): The set of periods p is split in w days of p/w periods each
(assuming p divisible by w). Each period therefore belongs to a specific week day. The lectures
of each course ci must be spread into a minimum number of days di (with di ≤ ki and di ≤ w).
(7) Curriculum compactness (soft): The daily schedule of a curriculum should be as compact
as possible, avoiding gaps between courses. A gap is a free period between two lectures that are
scheduled in the same day and belong to the same curriculum.
Overnight gaps, instead, are allowed. That is, we admit free periods between two courses
scheduled in different days.
A simpler version of the Course Timetabling problem, that does not involve room assign-
ment, can be easily shown to be NP-hard through a reduction from the k -Graph Coloring
problem.
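As an illustration of the data underlying constraints (3) and (4), the following C++ sketch builds the conflict matrix CM from curriculum and teacher information. The Course structure and its field names are hypothetical placeholders, not part of the formulation above; the availability matrix A would simply collect the per-period availability flags in the same fashion.

#include <string>
#include <vector>

// Hypothetical plain-data description of a course; only the fields needed
// for constraints (3) and (4) are included.
struct Course {
  std::string teacher;
  std::vector<int> curricula;   // indices of the curricula containing the course
  std::vector<bool> available;  // available[k] == true iff the course can be taught in period k
};

// Build the q x q conflict matrix: cm[i][j] == 1 iff courses i and j share a
// curriculum or a teacher, and therefore cannot be scheduled at the same time.
std::vector<std::vector<int>> BuildConflictMatrix(const std::vector<Course>& courses) {
  const int q = static_cast<int>(courses.size());
  std::vector<std::vector<int>> cm(q, std::vector<int>(q, 0));
  for (int i = 0; i < q; ++i)
    for (int j = i + 1; j < q; ++j) {
      bool clash = (courses[i].teacher == courses[j].teacher);
      for (int c1 : courses[i].curricula)
        for (int c2 : courses[j].curricula)
          if (c1 == c2) clash = true;
      cm[i][j] = cm[j][i] = clash ? 1 : 0;
    }
  return cm;
}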
of moves on different rooms. In our experimentation, we employ these kickers with random kicks
of size h = 10 and h = 20, and best kicks with h = 2 or h = 3 steps.
All the proposed runners and kickers are combined in various token-ring strategies, as described
in the next section.
Table 4.2: Results for the Multi-Neighborhood Hill Climbing and Tabu Search algorithms
From the results, it turns out that the Hill Climbing algorithms are superior to the Tabu
Search ones for three out of four instances. Concerning the comparison of neighborhood operators,
the best results are obtained by the Time⊕Room neighborhood for both Hill Climbing and Tabu
Search.
Notice that the full exploration of Time⊗Room performed by Tabu Search does not give good
results. This highlights the trade-off between the steepness of search and the computational cost.
This result for Tabu Search is somewhat surprising. Indeed, one may intuitively think that
a thorough neighborhood (such as the Time⊗Room one) should have better chances of finding good
solutions. This counterintuitive result, however, suggests that a complete investigation of the
neighborhood is worthwhile also for other problems.
Furthermore, it is possible to see that for Tabu Search the random kick strategy obtains mod-
erate improvements in joint action with T⊕R and T⊗R neighborhoods, favoring a diversification
of the search. Conversely, the behavior of the Hill Climbing algorithms with this kind of kicks is
not uniform, and deserves further investigation.
Concerning the influence of different synergy definitions, it is possible to see that the more
strict one has a positive effect in joint action with Tabu Search, while it seems to have little or
no impact with Hill Climbing. In our opinion this is related to the thoroughness of neighborhood
exploration performed by Tabu Search.
Another effect of the Run & Kick strategy, which is not shown in the tables, is the improvement
of algorithm robustness, measured in terms of the standard deviation of the results. In other words,
the outcomes of the single trials cluster around the average value, whereas they are more scattered
with the plain algorithm alone.
4.8 Discussion
We have presented a thorough analysis on a set of Local Search algorithms that exploit the Multi-
Neighborhood approach. The proposed algorithms are based on the Hill Climbing and Tabu
Search meta-heuristics equipped with several combinations of two complementary neighborhood
definitions.
Furthermore, we defined two kicker components, based on the total composition neighborhood,
in order to improve the search effectiveness. The two kickers differ by the synergy definition
employed.
The results show that the algorithms equipped with the kickers improve the results of the basic
Multi-Neighborhood algorithms and increase the degree of robustness of the solving procedure.
Concerning a comparison with the standard Local Search methods for Course Timetabling,
the typical way to solve the problem is by decomposition [87]. At first, the lectures are scheduled
neglecting the room assignment. Afterwards, the rooms are assigned to each lecture according to
the scheduled period. In our framework, this would correspond to a token-ring A(Time) ⊲ A(Room)³
with one single round, and with the initial solution in which all lectures are in the same room.
Experiments show that this choice gives much worse results than those presented in this chapter.
Finally, we want to remark that, for the Course Timetabling problem, it is natural to
compose the neighborhoods because they are complementary, as they work on different features
of the current state (the search space is not connected under them). However, results with other
problems (e.g. the Examination Timetabling problem, see Sections 5.3.2 and 5.5.3) show that
Multi-Neighborhood search helps also for problems that have completely unrelated neighborhoods,
and therefore could also be solved relying on a single neighborhood function.
3 With A(·) we denote one of the Local Search techniques employed. In this example, A ∈ {TS, HC}
5
Local Search for Examination
Timetabling problems
find $t_{ik}$ $(i = 1, \ldots, n;\ k = 1, \ldots, p)$

s.t. $\sum_{k=1}^{p} t_{ik} = 1 \qquad (i = 1, \ldots, n)$  (5.1)

$\sum_{h=1}^{q} t_{ik}\, t_{jk}\, c_{ih}\, c_{jh} = 0 \qquad (k = 1, \ldots, p;\ i, j = 1, \ldots, n;\ i \neq j)$  (5.2)

$t_{ik} \in \{0, 1\} \qquad (i = 1, \ldots, n;\ k = 1, \ldots, p)$  (5.3)
In the definition above, Constraints (5.1) state that each exam must be assigned to exactly
one time slot, whereas Constraints (5.2) state that no student shall attend two exams scheduled
in the same time slot.
It is easy to recognize that this basic version of the Examination Timetabling problem is
a variant of the well-known NP-complete k-Graph Coloring problem. The precise encoding
between these problems is presented in Section 5.3.
Capacity: On the basis of the rooms availability, we have a capacity array L = (l1 , . . . , lp ), which
represents the number of available seats. For each time slot k, the value lk is an upper bound
of the total number of students that can be examined at period k. The capacity constraints
can be expressed as follows.
$$\sum_{i=1}^{n} \sum_{h=1}^{q} c_{ih}\, t_{ik} \le l_k \qquad (k = 1, \ldots, p) \quad (5.4)$$
Notice that in this constraint we do not take into account the number of rooms, but only the
total number of seats available in that period. This is reasonable under the assumption that
more than one exam can take place in the same room. Alternative formulations that assign
one exam per room are discussed in Section 5.1.3.
5.1.2 Objectives
We now describe the soft constraints, that contribute, with their associated weights, to the objective
function to be minimized.
Second-Order Conflicts: A student should not take two exams in consecutive periods. To this
aim, we include in the objective function a component that counts the number of times a
student has to sit for a pair of exams scheduled at adjacent periods.
Many versions of this constraint type have been considered in the literature, according to the
actual time distance between periods:
All these constraints can be expressed by identifying the binary relations R between pairs of
periods that must be penalized if a conflict is present, and by associating with each of them a weight $\omega_R$.
Then, for each relation R, the objective to be optimized is
$$\min\ \omega_R \sum_{(k_1, k_2) \in R} \sum_{i=1}^{n} \sum_{j=1, j \neq i}^{n} \sum_{h=1}^{q} t_{ik_1}\, t_{jk_2}\, c_{ih}\, c_{jh}$$
In the case of the first version of this constraint, the relation R is given by (k1 , k2 ) ∈ R iff k2 =
k1 + 1, while in the other two cases R identifies precisely the pairs (k1 , k2 ) of overnight or
near-to-lunch periods.
Higher-Order Conflicts: This constraint penalizes also the fact that a student takes two exams
in periods at distance three, four, or five. Specifically, it assigns a proximity cost ωi whenever
a student has to attend two exams scheduled within i time slots. The cost of each conflict is
thus multiplied by the number of students involved in both examinations. The formulation
proposed in [26] employs a set of weights that logarithmically decrease from 16 to 1 as follows:
ω1 = 16, ω2 = 8, ω3 = 4, ω4 = 2, ω5 = 1.
Similarly to the previous constraint, the objective can be expressed as
$$\min \sum_{l=1}^{5} \sum_{i=1}^{n} \sum_{j=1, j \neq i}^{n} \sum_{k=1}^{p-l} \sum_{h=1}^{q} \omega_l\, t_{ik}\, t_{j,k+l}\, c_{ih}\, c_{jh}$$
Preferences: Preferences can be given by teachers and students for scheduling exams in given
periods. This is the soft version of preassignments and unavailabilities.
A possible way of taking this objective into account is the following. We can define a
preference matrix $M_{n \times q \times p}$ that measures the degree of acceptability of a given schedule.
Specifically, each entry $m_{ihk}$ of the matrix is a real number in the range [0, 1] that states to
what extent the assignment of exam $e_i$ to period k is desirable for student $s_h$. The value
0 represents a fully desirable situation, while the value 1 stands for an assignment that should
be avoided.
The objective component for this constraint is
$$\min \sum_{i=1}^{n} \sum_{h=1}^{q} \sum_{k=1}^{p} t_{ik}\, m_{ihk}$$
Room assignment: Some authors (see, e.g., [25]) allow only one exam per room in a given
period. In this case, exams must be assigned not only to periods, but also to rooms.
The assignment must be done on the basis of the number of students taking the exams and
the capacity of each room.
The proposed mathematical model should be changed in order to take into account also the
room assignment problem. Namely, we add to the data a set of d rooms H = {h1 , . . . , hd }, and
we look for an additional assignment matrix $R_{n \times d}$, such that $r_{ij} = 1$ if and only if exam $e_i$ is
scheduled in room $h_j$.
This new component of the problem is often referred to as the roomtabling problem.
Special rooms: Some other authors (see, e.g., [87]) consider also different types of rooms, and
exams that may only be held in certain types of rooms.
Similarly to the case of period preferences, we can define a binary matrix that encodes
whether an exam could be assigned to a given room.
In addition, some exams may be split into two or more rooms, in case the students do not
fit in one single room.
Exams of variable length: Exams may have lengths that do not fit in one single time slot. In
this case, exams must be assigned to consecutive time slots.
Minimize the length of the session: We have assumed that the session has a fixed length.
However, we may also want to minimize the number of periods required to accomplish all
the exams. In that case, the number of periods p becomes part of the objective function.
Other higher-order conflicts: Carter et al. [25] generalize the higher-order constraints and
consider as a penalty the fact that a student is forced to take x exams in y consecutive periods.
Differently from all the other constraints, with this formulation of the higher-order conflicts
it is not possible to encode the problem directly in the Graph Coloring setting. In fact, for
all the other constraints the relationship between exams is binary, and can be easily mapped
on the graph structure. Conversely, in this case the relation is y-ary, and therefore a
hyper-graph structure is needed in order to define it.
We conclude here our presentation of the Examination Timetabling problem. Now we move
to a discussion of the approaches to this problem that have appeared in the literature.
Several constructive heuristics for Examination Timetabling have been proposed since the
mid-1960s by Cole and Broder [13, 31], and were thereafter developed for applications in specific
universities [55, 140]. Among others, Wood, and Welsh and Powell, pointed out the connections between
Examination Timetabling and Graph Coloring [138, 141]. More recently, Mehta applied
a modified DSATUR method [12] (i.e., a dynamic ordering heuristic) to a specific Examination
Timetabling encoding.
Unfortunately, these early approaches are not completely satisfactory, since they are not able
to handle some of the various types of constraints presented in the previous section. For
example, there is no direct translation of complex second-order constraints in the Graph
Coloring framework. Furthermore, in these algorithms the room assignment is usually neglected.
An attempt to overcome the limitations of the early Graph Coloring approaches was pro-
posed by Carter et al. [26]. These authors extensively studied the performance of several Graph
Coloring based heuristics in joint action with backtracking, and reported very competitive com-
putational results on a set of benchmarks. Furthermore, their algorithms deal also with most of
the constraints presented so far.
In recent years, also special-purpose methods have been proposed with the aim of obtaining
more flexible algorithms. For example the approach followed by Laporte and Desroches [87], and
refined by Carter et al. [25], deals also with a form of second-order conflicts, and room allocation.
The algorithm proceeds in three stages:
1. find a feasible solution;
2. improve the solution;
3. allocate the rooms (allowing more than one exam per room).
The first stage iteratively schedules exams, looking at the increase of the objective function caused
by each assignment. When the algorithm reaches a dead-end, one or more already scheduled exams
are rescheduled. The algorithm avoids infeasible solutions by preventing the rescheduling of exams
that could introduce new infeasibilities; the schedule of the exams that cannot be moved is undone
instead. However, in order to prevent cycling, only a maximum number of undo steps is allowed,
and a list of undone moves is kept and maintained in the spirit of a tabu list.
Afterwards, in the second stage, the solution is improved by means of a Steepest Descent Local
Search method. The procedure stops when it reaches a local minimum.
Finally, in stage three, the algorithm assigns the exams to rooms according to the following
strategy. The procedure manages two lists of items: the rooms with their capacity and the exams
with the number of students enrolled. It iteratively assigns the exams with the largest number
of students to the largest room available. If the exam fits perfectly in the room, both of them
are dropped from the list. If the room is too large, then the exam is eliminated from the list and
the room capacity is updated by subtracting the number of students enrolled to the exam just
assigned. Conversely, if the room is not big enough for all the students enrolled to the current
exam, then the room is eliminated and the number of students of the current exam is updated by
subtracting the capacity of the room.
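Stage three is essentially a largest-first greedy matching. The sketch below is our own illustration of the strategy just described, not the authors' code; exams and rooms are reduced to student counts and seat capacities, and an exam whose students do not fit in one room is simply split over several rooms.

#include <algorithm>
#include <tuple>
#include <vector>

// Stage-three greedy room allocation: repeatedly seat the exam with the most
// remaining students in the largest remaining room.
// exams[i] = students still to be seated for exam i, rooms[j] = remaining seats of room j.
// Returns triples (exam, room, students seated).
std::vector<std::tuple<int, int, int>>
AssignRooms(std::vector<int> exams, std::vector<int> rooms) {
  std::vector<std::tuple<int, int, int>> result;
  std::vector<bool> exam_done(exams.size(), false), room_done(rooms.size(), false);
  while (true) {
    int e = -1, r = -1;
    for (int i = 0; i < static_cast<int>(exams.size()); ++i)   // largest pending exam
      if (!exam_done[i] && (e < 0 || exams[i] > exams[e])) e = i;
    for (int j = 0; j < static_cast<int>(rooms.size()); ++j)   // largest available room
      if (!room_done[j] && (r < 0 || rooms[j] > rooms[r])) r = j;
    if (e < 0 || r < 0) break;               // no exams left, or no rooms left
    int seated = std::min(exams[e], rooms[r]);
    result.emplace_back(e, r, seated);
    exams[e] -= seated;                      // exam too large: the rest spills to the next room
    rooms[r] -= seated;                      // room too large: its residual capacity stays available
    if (exams[e] == 0) exam_done[e] = true;  // exam fully seated: drop it from the list
    if (rooms[r] == 0) room_done[r] = true;  // room filled: drop it from the list
  }
  return result;
}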
Since 1995, the interest in this subject shown by the Local Search community has grown,
thanks also to the PATAT series of conferences [14–16, 20]. In the first conference, Thompson
and Dowsland proposed a family of Simulated Annealing algorithms based on different neighbor-
hood structures. Furthermore, they investigated the impact of different cooling schedules on the
performance of the resulting algorithms.
In the subsequent editions of the conference, other notable papers dealing with Local Search
approaches for Examination Timetabling were proposed e.g. by White and Xie and Burke and
Newall.
In [139], White and Xie employed a long-term memory in joint action with a Tabu Search
algorithm. Furthermore, they discussed in detail a method for estimating the appropriate length
of the longer-term tabu list based on a quantitative analysis of the instances.
Burke and Newall [18] presented experimental results on a combined hybrid approach which
integrates a set of Local Search algorithms with the constructive techniques presented by Carter
et al. [26]. In this approach, the Local Search is used with the aim of improving the solution con-
structed by the greedy algorithm. The Local Search algorithms employed in the experimentation
are Hill Climbing, Simulated Annealing, and a novel algorithm called Degraded Ceiling.
3. For each (unordered) pair of distinct vertices {vi1 , vi2 }, we create an edge {vi1 , vi2 } ∈ E if
there exists a student sj such that ci1 j = ci2 j = 1.
4. The weight $w_E$ of an edge $(v_{i_1}, v_{i_2}) \in E$ is given by $w_E(v_{i_1}, v_{i_2}) = \sum_{j=1}^{q} c_{i_1 j}\, c_{i_2 j}$.
6. Constraints (5.2) are translated into the condition that τ must satisfy $(v_{i_1}, v_{i_2}) \in E \Rightarrow \tau(v_{i_1}) \neq \tau(v_{i_2})$.
This way, the basic problem becomes that of assigning a period k to each vertex $v_i$, through the function
τ, in such a way that $\tau(v_{i_1}) \neq \tau(v_{i_2})$ whenever $(v_{i_1}, v_{i_2}) \in E$.
Notice that, in the proposed Graph Coloring formulation, the constraints (5.1) assure that
the function τ is well defined, since in the timetable t only one entry k for each i is assigned value
1.
The use of the weight functions makes it possible to express the capacity constraints and the
second-order conflicts in a compact way. In fact, the constraint on the overall capacity $l_k$ granted
to a period k can be translated into
$$\sum_{v \in V,\ \tau(v) = k} w_V(v) \le l_k$$
Furthermore, the formulation of the simplest version of second-order conflicts can be expressed as
$$\min \sum_{(u,v) \in E,\ |\tau(u) - \tau(v)| = 1} w_E(u, v)$$
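As a concrete reading of the encoding, the following sketch (with data types of our own choosing) computes the edge weights $w_E$ from the enrollment matrix c and evaluates the second-order conflict component for a given coloring τ.

#include <cstdlib>
#include <vector>

// Enrollment matrix: c[i][h] == 1 iff student h takes exam i.
using Enrollment = std::vector<std::vector<int>>;

// Edge weight w_E(i1, i2) = number of students enrolled in both exams.
int EdgeWeight(const Enrollment& c, int i1, int i2) {
  int w = 0;
  for (int h = 0; h < static_cast<int>(c[i1].size()); ++h) w += c[i1][h] * c[i2][h];
  return w;
}

// Second-order conflict component: total weight of the exam pairs that the
// coloring tau assigns to adjacent periods.
int SecondOrderConflicts(const Enrollment& c, const std::vector<int>& tau) {
  int cost = 0;
  for (int i = 0; i < static_cast<int>(tau.size()); ++i)
    for (int j = i + 1; j < static_cast<int>(tau.size()); ++j)
      if (std::abs(tau[i] - tau[j]) == 1) cost += EdgeWeight(c, i, j);
  return cost;
}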
Now, we move to the presentation of our algorithms. To this aim we have first to describe the
Local Search features employed in our research, namely the search space, the cost function and
the neighborhood relations.
• If for K consecutive iterations all constraints of that component are satisfied, then ω is
divided by a factor γ randomly chosen between 1.5 and 2.
• If for H consecutive iterations at least one constraint of that component is violated, then the
corresponding weight is multiplied by a random factor in the same range.
The values H and K are parameters of the algorithm (and their values are usually between 2
and 20).
This mechanism changes continuously the shape of the cost function in an adaptive way, thus
causing Tabu Search to visit solutions that have a different structure than the previously visited
ones.
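A minimal sketch of this shifting-penalty update, applied once per iteration to each cost component, could look as follows (the structure and default thresholds are ours; only the 1.5–2 range is taken from the description above).

#include <random>

// Adaptive weight for one component of the cost function.
struct ShiftingPenalty {
  double weight = 1.0;
  int sat_streak = 0, unsat_streak = 0;
  int K = 10, H = 10;          // thresholds; typical values lie between 2 and 20
  std::mt19937 rng{42};

  // 'satisfied' tells whether all constraints of the component currently hold.
  void Update(bool satisfied) {
    std::uniform_real_distribution<double> gamma(1.5, 2.0);
    if (satisfied) { ++sat_streak; unsat_streak = 0; }
    else           { ++unsat_streak; sat_streak = 0; }
    if (sat_streak == K)   { weight /= gamma(rng); sat_streak = 0; }   // relax the component
    if (unsat_streak == H) { weight *= gamma(rng); unsat_streak = 0; } // penalize it more
  }
};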
Regarding the concept of move inverse, we experimented with several definitions, and the one
that gave the best results considers as inverse of a move ⟨u, k_old, k_new⟩ any move of the form ⟨u, ·, ·⟩.
That is, the tabu mechanism does not allow the period assigned to an exam u to be changed again
to any new one.
In order to identify the most promising moves at each iteration, we maintain a so-called viola-
tions list VL, which contains the exams that are involved in at least one violation (either hard or
soft). A second (possibly shorter) list HVL contains only the exams that are involved in violations
of hard constraints. In different stages of the search (as explained in Section 5.4.1), exams are
selected either from VL or from HVL, whereas exams not in the lists are never analyzed.
For the selection of the move among the exams in the list (either VL or HVL), we experimented
with two different strategies:
In both cases, the selection of the new period for the chosen exam is exhaustive, and the new
period is assigned in such a way that it leads to one of the smallest values of the cost function,
breaking ties arbitrarily.
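The selection step can thus be pictured as follows; the sketch is ours, and delta_cost is a placeholder for the solver's incremental cost evaluation of reassigning an exam to a period.

#include <functional>
#include <random>
#include <vector>

struct Move { int exam, old_period, new_period; };

// Exhaustive scan of the periods for one exam drawn from VL (or HVL): the new
// period with the smallest cost variation is chosen, breaking ties at random.
Move BestMoveForExam(int exam, int old_period, int periods,
                     const std::function<int(int, int)>& delta_cost,
                     std::mt19937& rng) {
  std::vector<int> best_periods;
  int best_delta = 0;
  for (int k = 0; k < periods; ++k) {
    if (k == old_period) continue;
    int delta = delta_cost(exam, k);
    if (best_periods.empty() || delta < best_delta) { best_delta = delta; best_periods = {k}; }
    else if (delta == best_delta) best_periods.push_back(k);
  }
  if (best_periods.empty()) return {exam, old_period, old_period};  // degenerate single-period case
  std::uniform_int_distribution<std::size_t> pick(0, best_periods.size() - 1);
  return {exam, old_period, best_periods[pick(rng)]};
}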
It is worth noticing that this neighborhood does not affect the set of conflicting nodes and,
therefore, cannot be applied alone. The intuition behind this neighborhood is that a move of this
kind only shakes the current solution searching for the best permutation of colors in the given
situation. In fact, in most cases, the value of the objective function depends only on the distance
between the periods.
In such cases, this move contributes to spreading the workload of the students more evenly,
thus reducing the cost value. Moreover, since the Shake move changes several features of the current
solution at once, it provides a good new starting point for the Local Search algorithms based on the
Recolor move.
procedure InitialSolution(τ, G)
begin
  Q := V;                      // Q is the set of exams currently unscheduled
  k := 1;                      // and k is the current period
  while (Q ≠ ∅ and k ≤ p) do
    H := ∅;                    // H will contain an independent set
    Q′ := Q;                   // Q′ is the set of candidate exams to be moved to H
    forall u ∈ Q′ in random order do
      H := H ∪ {u};            // when an exam u is added to the current independent set
      forall v ∈ V such that (u, v) ∈ E do
        Q′ := Q′ \ {v}         // then all the exams adjacent to u should not be added
      end
    end;
    forall u ∈ H do
      τ(u) := k;               // all the exams in H are assigned period k
      Q := Q \ {u}             // and are removed from the set of unscheduled exams
    end;
    k := k + 1                 // move on to the next period
  end;
  forall u ∈ Q do              // exams still unscheduled at the end of the loop
    τ(u) := a random period in {1, . . . , p}
  end
end
The algorithm, at each iteration of the outer while loop, tries to build a new independent set
H and to assign color k to its members. The set H is built by adding elements in random order
from a queue Q of unscheduled exams. Then, all the elements in Q that would cause a conflict if
inserted in H are removed. At the end, the exams still unscheduled are assigned a random period.
The described multi-phase strategy differs from the two-phase approach employed by Thompson
and Dowsland [127] in that we apply it repeatedly until no improvement can be found by any of
the algorithms, whereas in [127] the authors apply the two phases only once. Furthermore, the
idea of employing the Shake move is new and, to the best of our knowledge, it did not appear
previously in the literature.
of students. This way the authors obtained a measure of the number of violations “per student”,
which allowed them to compare results for instances of different size.
If we define the characteristic function of the set of lth-order conflicting examinations, $\chi_l : V \times V \to \mathbb{N}$, as follows

$$\chi_l(u, v) = \begin{cases} 1 & \text{if } |\tau(u) - \tau(v)| = l \\ 0 & \text{otherwise} \end{cases}$$

then the detailed formulation of the cost function is:

$$F_1(\tau) = \frac{1}{|S|} \sum_{(u,v) \in E} \left( \omega_0\, \chi_0(u, v) + \sum_{l=1}^{5} \omega_l\, w_E(u, v)\, \chi_l(u, v) \right) \quad (5.5)$$

where $\omega_0 = 2000$.
with ω0 = 5000, ω1 = 3 and ω2 = 1. The latter component takes into account the excess of room
capacity for each period k.
function. They presented results on a subset of the Toronto dataset and on the Nottingham
instance.
The authors proposed a new version of the memetic algorithm MA1. This version uses a
multistage procedure which decomposes the instances in smaller ones and combines the partial
assignments. The decomposition is performed along the lines proposed by Carter in [22]. For
comparison, they implemented also a constructive method.
Table 5.4 shows the comparison of their best2 results with our Tabu Search solver. In the table,
we name MA2 the memetic algorithm which uses decomposition only at the coarse grain level,
MA2+D the one with a strong use of decomposition (into groups of 50-100 exams), and Con the
constructive method.
The table shows that our solver works better than the pure memetic algorithm and the con-
structive one. However, the algorithm MA2+D, based on decomposition, outperforms the Recolor
Tabu Search.
Decompositions are independent of the technique employed. For this reason we tried to exploit
this idea also in our Tabu Search algorithms. Unfortunately, though, preliminary experiments do
not show any improvement with respect to the results presented so far.
1. Disable the shifting penalty mechanism. Penalties are fixed to their original values throughout
the run. Hard constraints are all assigned the same value ω0 = 1000, which is larger than
the sum of all soft ones.
2. Make the selection of the best neighbor always based on the full violation list VL. In the
regular algorithm the selection is performed on the sole HVL when hard-conflicts are present.
3. Explore the whole set of examinations at each search step, instead of focusing on the con-
flicting examination only.
2 The best combination of heuristic and size of decomposition; the results are averages on 5 runs.
4. Set a fixed value for the tabu list, rather than letting it vary within a given range.
5. Start from a random initial state instead of using the heuristic that searches for p independent
sets.
We performed 5 runs of each version of the algorithm, recording the best, the worst, and the
average cost value, and the computing time. Table 5.5 shows the results of these experiments for
the instance KFU-S-93 (21 periods and 1955 seats per period). We use Burke and Newall's
formulation F3 , and the following parameter setting: tabu list length 10–30, idle iterations 10000.
The results show that the key features of our algorithm are the shifting penalty mechanism
and the management of the conflict set. Removing these features, the quality of the solution
degrades by more than 60% on average. In fact, both features prevent the algorithm from wasting
time on large plateaus instead of making worsening moves that diversify the search toward more
promising regions.
The intuition that the landscape of the cost function is made up of large plateaux is confirmed
by a modified version of the algorithm which explores the whole set of examinations at each step
of the search. This algorithm is not even able to find a feasible solution, and uses all the time at
its disposal exploring such regions.
Regarding the selection of the initial state, the loss of starting from a random state is relatively
small on regular runs. However, the random initial state sometimes leads to extremely poor results,
as shown by the maximum cost obtained. In addition, as previously observed, starting from a good
state saves computation time.
The use of a fixed-length tabu list also affects the performance of the algorithm. Furthermore,
additional experiments show that a fixed length makes the selection of the single value much
more critical. In fact, the value of 20 moves employed in the reported experiment has been chosen
after a long trial-and-error session on the KFU-S-93 instance. Conversely, the variable-
length case is more robust with respect to the specific values in use, and gives good results for a
large variety of values.
Data set    p    R, S & K            TS                 Carter et al.'s   Impr.
                 best      average   best     average
CAR-S-91    35   5.68      5.79      6.2      6.5       7.1–7.9           -8.45%
EAR-F-83    24   39.36     43.92     45.7     46.7      36.4–46.5         -13.87%
HEC-S-92    18   10.91     11.41     12.4     12.6      10.8–15.9         -12.02%
LSE-F-91    18   12.55     12.95     15.5     15.9      10.5–13.1         -19.07%
STA-F-91    13   157.43    157.72    160.8    166.8     161.5–165.7       -2.10%
UTA-S-92    35   4.12      4.31      4.2      4.5       3.5–4.5           -1.90%
YOR-F-83    21   39.68     40.57     41       42.1      41.7–49.9         -3.21%
Table 5.6: Comparison among Recolor, Shake and Kick, Recolor Tabu Search and Carter et al.’s
solvers [26]
Furthermore, concerning the comparison with Carter et al.'s results, we obtain the best results
in three out of seven instances.
5.6 Discussion
We have implemented different Tabu Search algorithms for the Examination Timetabling prob-
lem and we have compared them with the existing literature on the problem.
Our first algorithm is a single-runner solver equipped with the Recolor move. The runner makes
use of a shifting penalty mechanism, a variable-size tabu list, a dynamic neighborhood selection,
and a heuristic initial state. All these features have been shown experimentally to be necessary for
obtaining good results. We tested this algorithm on most of the available problem formulations
defined on the Toronto benchmarks.
The experimental analysis shows that the results of this algorithm are not satisfactory on all
benchmark instances. Nevertheless, we consider these preliminary results quite encouraging, and
in our opinion they provide a good basis for future improvements. To this aim we plan to extend
our application in the following ways:
• Implement and possibly interleave other local search techniques, different from Tabu Search.
• Implement more complex neighborhood relations. In fact, many relations have been pro-
posed within the Graph Coloring community, which could be profitably adapted for our
problem.
The second solver, instead, exploits a multi-neighborhood strategy which uses a token-ring
solving scheme and employs a kicker to obtain further improvements. We compared this algo-
rithm on a subset of the Toronto instances, but we give the results only for one formulation of the
problem.
Since we have not performed a deep analysis of this algorithm, we consider these results only
as preliminary. Nevertheless, the results seem promising, and we plan to extend this work with a
thorough investigation of the proposed strategy on the whole set of benchmark instances and with
respect to different formulations of the problem.
The long-term goal of this research is twofold. On the one hand, we want to assess the effective-
ness of transferring local search techniques for Graph Coloring to Examination Timetabling. On the
other hand, we aim at drawing a comprehensive picture of the structure and the hardness of the
numerous variants of the Examination Timetabling problem. For this purpose, we are going
to consider further versions of the problem, as briefly discussed in Section 5.1.3.
The results presented in this chapter have been refined by many authors since their publication.
In Appendix A we report the current state-of-the-art results on the benchmarks employing the
formulations of the problem presented in this chapter.
6
Local Search for the min-Shift
Design problem
• A set of n consecutive time slots T = {t1 , t2 , . . . , tn }, where ti = [τi , τi+1 ). Each time slot ti
has the same length $h = \tau_{i+1} - \tau_i \in \mathbb{R}$, expressed in minutes. The time point τ1 represents
the start of the planning period, whereas time point τn+1 is its end.
In this work we deal with cyclic schedules, that is, τn+1 = τ1 .
• For each slot ti , the optimal number of employees ri that should be present during that slot.
• A set of days D = {1, . . . , d} that constitutes the planning horizon. Each time slot ti belongs
entirely to a particular day k.
• A set S of possible shifts. Each shift s = [σs , σs + λs ) ∈ S is characterized by the two values
σs and λs that determine, respectively, the starting time and the length of the shift.
• Since we are dealing with discrete time slots, the variables σs can assume only the values τi
defined above, and the variables λs are constrained to be a multiple of the time slot length
h.
• For each shift s, and for each day j ∈ {1, . . . d}, there is a function wj (s) ∈ N that indicates
the number of employees involved in the shift s during the jth day.
• Each shift s belongs to a unique shift type vj . We denote this relation with K(s) = vj .
• For each shift type vj , two quantities min_s(vj) and max_s(vj), which represent the earliest
and the latest starting times of the shift (chosen among the τi values). In addition, for each
shift type vj two further values min_l(vj) and max_l(vj) are given, which represent the
minimum and maximum lengths allowed for the shift.
Given a shift s that belongs to the type vj , we call s a feasible shift if and only if
min_s(vj) ≤ σs ≤ max_s(vj) and min_l(vj) ≤ λs ≤ max_l(vj).
The min-Shift Design problem is the problem of selecting a set of q feasible shifts Q =
{s1 , s2 , . . . , sq } ⊆ S, and the associated daily workforce wj : S → ℕ, such that the following
components are minimized¹:
F1 : Sum of the excesses of workers in each time slot during the planning period.
F2 : Sum of the shortages of workers in each time slot during the planning period.
F3 : Number of shifts selected (i.e., the cardinality q of Q).
In order to formally define the components F1 and F2 we need to define the load li for a time
slot ti = [τi , τi+1 ) as follows.
$$l_i = \sum_{j=1}^{d} \sum_{k=1}^{q} \chi_k(t_i)\, w_j(s_k)$$

where $\chi_k : T \to \{0, 1\}$ is

$$\chi_k(t_i) = \begin{cases} 1 & \text{if } \sigma_k \le \tau_i \wedge \tau_{i+1} \le \sigma_k + \lambda_k \\ 0 & \text{otherwise} \end{cases}$$
¹ With abuse of notation, in the following we indicate the starting time and the length of shift si ∈ Q as σi and λi , respectively.
Within these settings, the total excess F1 and the total shortage F2 of workers (expressed in
minutes) over the whole planning period are defined as

$$F_1 = \sum_{i=1}^{n} \max\{l_i - r_i, 0\}\, h \qquad \text{and} \qquad F_2 = \sum_{i=1}^{n} \max\{r_i - l_i, 0\}\, h$$
The min-Shift Design problem is genuinely a multi objective optimization problem in which
the criteria have different relative importance depending on the situation. The objective function
is a weighted sum of the three Fi components, where the weights depend on the instance at hand.
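The following sketch computes the slot loads and the F1 and F2 components defined above. It is a simplification of our own: the planning period is treated as a single cyclic sequence of n slots and each shift carries one aggregated workforce value, whereas in the full model the workforce varies per day.

#include <algorithm>
#include <utility>
#include <vector>

// Simplified shift: covers slots [start, start + length) with 'workers' employees.
struct Shift { int start, length, workers; };

// Returns (F1, F2) in minutes, given the slot requirements r and the slot length h.
std::pair<int, int> ExcessAndShortage(const std::vector<Shift>& shifts,
                                      const std::vector<int>& r, int h) {
  const int n = static_cast<int>(r.size());
  std::vector<int> load(n, 0);
  for (const Shift& s : shifts)
    for (int i = s.start; i < s.start + s.length; ++i)
      load[i % n] += s.workers;                    // cyclic planning period
  int excess = 0, shortage = 0;
  for (int i = 0; i < n; ++i) {
    excess   += std::max(load[i] - r[i], 0) * h;   // contribution to F1
    shortage += std::max(r[i] - load[i], 0) * h;   // contribution to F2
  }
  return {excess, shortage};
}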
Table 6.2 contains the workforce requirements within a planning horizon of d = 7 days. In the
table, for conciseness, timeslots with same requirements are grouped together.
A solution for the problem from Table 6.2 is given in Table 6.3 and is pictorially represented
in Figure 6.1.
Notice that this solution is far from perfect. In fact, for example, there is a shortage of workers
every day in the time slot 10:00–11:00, represented by the thin white peaks in the figure. Conversely,
on Saturdays there is an excess of one worker in the period 09:00-17:00.
[Figure 6.1: required vs. designed workforce over the planning week, Monday to Sunday.]
The cost values for the various objectives Fi are the following. The excess of workers F1 is
equal to 8, while the shortage of employees F2 is 14. These values are measured in worker time
slots. The total number of shifts used, F3, is 5, and it can be read directly from Table 6.3.
Start Type Length Mon Tue Wed Thu Fri Sat Sun
06:00 M 08:00 2 2 2 6 2 0 0
08:00 M 08:00 3 3 3 3 3 3 3
09:00 D 08:00 2 2 2 4 2 3 2
14:00 A 08:00 5 4 2 2 5 0 0
22:00 N 08:00 5 5 5 5 5 5 5
ResizeShift (RS): The length of the shift is increased or decreased by one time slot, either on the
left-hand side or on the right-hand side.
Attributes: ⟨si , l, p⟩, where si = [σi , σi + λi ) ∈ S, l ∈ {↑, ↓}, and p ∈ {←, →}.
Preconditions: The shift s′i , obtained from si by the application of the move, must be
feasible with respect to the shift type K(si ).
Effects: We denote with δ the size modification to be applied to the shift si . If l = ↑ the
shift si is enlarged by one time slot, i.e., δ = +1. Conversely, if l = ↓ the shift is shrunk
by one time slot, that is, δ = −1.
If p = ← the action identified by l is performed on the left-hand side of si : the starting
time becomes σ′i := σi − δh and the length λ′i := λi + δh, so that the right endpoint of the
shift is unchanged. By contrast, if p = → the move takes place on the right-hand side,
therefore λ′i := λi + δh.
In a previous work, Musliu et al. [102] define many neighborhood relations for this problem
including CS, ES, and a variant of RS. In this work, instead, we restrict to the above three relations
for the following reasons.
First, CS and RS represent the most atomic changes, so that all other move types can be built
as chains of moves of these two types. For example, an ES move can be obtained by a pair of CS
moves that remove one employee from a shift and assign him or her, on the same day, to the other
shift.
Secondly, even though ES is not a basic move type, we employ it because it turned out to be
very effective for the search, especially in joint action with the concept of inactive shift. In fact,
the move that transfers one employee from a shift to a similar one makes a very small change to
the current state, allowing thus for fine grain adjustments that could not be found by the other
move types.
Inactive shifts allow us to insert new shifts and to move staff between shifts in a uniform way.
This approach limits the creation of new shifts to the current inactive ones, rather than
considering all possible shifts belonging to the shift types (which are many more). The possibility
of creating any legal shift is retained if we insert as many (distinct) inactive shifts as are compatible
with the shift type. Experimental results, though, show that there is a trade-off between computational
cost and search quality, which seems to find its best compromise in having two inactive shifts per
type.
subroutine can easily compute the optimal staff assignment with minimum (weighted) deviation
under reasonable assumptions. However, it is worth noticing that the procedure is not able to
simultaneously minimize the number of shifts employed.
The proposed algorithm is based on Tabu Search, which turned out to give the best results
in a preliminary experimental phase. However we have developed and experimented with a set
of solvers based on all the basic meta-heuristics presented in Chapter 2, namely Hill Climbing,
Simulated Annealing and Tabu Search.
Musliu et al. [102] employ Tabu Search as well, but they use a first-descent exploration of a
neighborhood union made up of ten different moves. Differently from these authors, we employ
only the three neighborhood relations defined above. In addition, we use these neighborhoods
selectively in various phases of the search, rather than exploring the overall neighborhood at each
iteration.
In detail, we combine the neighborhood relations CS, ES, and RS, according to the following
scheme made of compositions and interleaving (through the token-ring search strategy). That is,
our algorithm interleaves three different Tabu Search runners using the ES alone, the RS alone,
and the union of the two neighborhoods CS and RS, respectively. Using the notation introduced
in Chapter 3, this corresponds to the solver TS(ES) ⊲ TS(RS) ⊲ TS(CS ⊕ RS).
The token-ring search strategy implemented is the same described in Section 3.2.1. That is, the
runners are invoked sequentially and each one starts from the best state obtained from the previous
one. The overall process stops when a full round of all of them does not find an improvement.
Each single runner stops when it does not improve the current best solution for a given number of
iterations.
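The token-ring composition can be pictured as the loop below. This is only an illustrative sketch: Runner is a placeholder interface (not the EasyLocal++ class of Chapter 8), and the state is reduced to a plain vector.

#include <vector>

// Placeholder runner interface: starting from 'state', Run() performs a full
// Tabu Search run, updates 'state' to the best state it reached, and returns its cost.
struct Runner {
  virtual double Run(std::vector<int>& state) = 0;
  virtual ~Runner() = default;
};

// Token-ring search: runners are invoked in sequence, each starting from the
// best state of the previous one; the process stops when a full round of all
// runners yields no improvement.
double TokenRing(const std::vector<Runner*>& runners, std::vector<int>& state, double cost) {
  bool improved = true;
  while (improved) {
    improved = false;
    for (Runner* r : runners) {
      double new_cost = r->Run(state);
      if (new_cost < cost) { cost = new_cost; improved = true; }
    }
  }
  return cost;
}

For the solver above, the runner list would contain TS(ES), TS(RS), and TS(CS ⊕ RS), in this order.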
The reason for using a subset of the possible neighborhood relations is not related to the saving
of computational time, which could be obtained in other ways (for example by clever ordering of
promising moves, as done in [102]). The main reason, instead, is the introduction of a suitable
degree of diversification in the search. In fact, certain move types would be selected very rarely in
a full-neighborhood exploration strategy, even though they could help to escape from local minima.
For example, we experimentally observed that a runner using all three neighborhood
relations compounded by means of the union operator would almost never perform a CS move that
deteriorates the objective function. The reason for this behavior is that such a runner can always
find an ES move that deteriorates the objectives by a smaller amount, even though the CS move
could lead to a more promising region of the search space. This intuition is confirmed by the
experimental analysis, which shows that our results are much better than those in [102].
This composite solver is further improved by performing a few changes on the final state of
each runner, before handing it over as the initial state of the following runner. In detail, we make
the following two adjustments:
• Identical shifts are merged into one. When the procedure applies RS moves, it is possible
that two shifts become identical. This situation is not detected by the runner at each move,
because it is a costly operation, and is therefore left to this inter-runner step.
• Inactive shifts are recreated. That is, the current inactive shifts are deleted, and new distinct
ones are created at random in the same quantity. This step, again, is meant to improve the
diversification of the search algorithm.
Concerning the prohibition mechanism of Tabu Search, for all three runners, the size of the
tabu list is kept dynamic by assigning to each move a number of tabu iterations randomly selected
within a given range. The ranges vary for the three runners, and were selected experimentally.
The ranges are roughly suggested by the cardinality of the different neighborhoods, in the sense
that a larger neighborhood deserves a longer tabu tenure. According to the standard aspiration
criterion defined in [65], the tabu status of a move is dropped if it leads to a state better than the
current best found.
As already mentioned, each runner stops when it has performed a fixed number of iterations
without any improvement (called idle iterations).
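The prohibition mechanism just described can be summarized by the following sketch (our own; the tenure ranges in actual use are those reported in Table 6.4).

#include <map>
#include <random>

// Dynamic tabu list: each accepted move is made tabu for a number of
// iterations drawn uniformly from [min_tenure, max_tenure].
class TabuList {
 public:
  TabuList(int min_tenure, int max_tenure) : tenure_(min_tenure, max_tenure) {}

  void Insert(int move_id, long iteration, std::mt19937& rng) {
    expiration_[move_id] = iteration + tenure_(rng);
  }

  // Standard aspiration criterion: a tabu move is accepted anyway if it leads
  // to a state better than the best found so far.
  bool Allowed(int move_id, long iteration, double new_cost, double best_cost) const {
    auto it = expiration_.find(move_id);
    bool tabu = (it != expiration_.end() && iteration < it->second);
    return !tabu || new_cost < best_cost;
  }

 private:
  std::map<int, long> expiration_;
  std::uniform_int_distribution<int> tenure_;
};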
Parameter          ES       RS       CS⊕RS
Tabu range         10–20    5–10     20–40 (CS), 5–10 (RS)
Idle iterations    300      300      2000
Tabu lengths and idle iterations have been selected once and for all, and the same values were used
for all instances. This selection turned out to be robust enough for all tested instances. The chosen
parameter values are reported in Table 6.4.
The solver and the compound runners have been implemented in C++ using the EasyLocal++
framework and were compiled with the GNU g++ compiler, version 2.96, on a Linux PC.
Our experiments have been run on different machines. The initial solution generation by means
of the greedy Min-Cost Max-Flow algorithm was performed on a PC running MS Windows NT
and using MS Visual Basic. Conversely, the Local Search algorithms were run on a PC equipped
with a 1.5 GHz AMD Athlon processor with 384 MB of RAM running Linux Red Hat 7.1.
The running times have been normalized according to the DIMACS netflow benchmark² to the
times of the Linux PC employing GNU gcc version 2.96 (calibration timings on that machine for
the above benchmark: t1.wm: user 0.030 sec; t2.wm: user 0.360 sec). Because of the normalization,
the reported running times should be taken as indicative only.
As mentioned above, we experimented with one Local Search solver using two different settings
for the initial solution selection. Namely, the resulting algorithms employed in this study are the
following:
TS The Local Search procedure is repeated several times starting from different random initial
solutions. The procedure stops when the granted time has elapsed or when the best known
solution is reached.
TS∗ The TS solver is combined with the greedy Min-Cost Max-Flow procedure, which provides
a good initial solution for the Local Search algorithm.
Table 6.5: Times to reach the best known solution for Set 1
[Figure 6.2: Aggregated normalized costs for the 10s time limit on data Sets 1 and 2 (curves for TS and TS*).]
the number of shifts grows very slowly and always remains under an acceptable level.
The second time-limited experiment aims at investigating the behavior of the solver when
provided with a very short running time on “unknown” instances³. We performed this experiment
on the third data set and we recorded the cost values found by our solver over 100 trials. Each
trial was granted 1 second of running time.
In Table 6.6 (on page 70) we report the average and the standard deviation of the cost values
found by TS and TS∗ . In the table, the last column contains the percentage of improvement of
TS∗ over the best result found (a negative number indicates that TS∗ performs better than TS).
As in the previous table, the last row aggregates the results summing up the averages and the
standard deviations over the thirty instances.
Instance   TS avg       TS std dev   TS* avg      TS* std dev   ∆avg/min(avg)
1 3,413.85 670.41 2,389.12 14.72 -42.89%
2 8,633.70 410.53 7,686.47 53.10 -12.32%
3 12,418.20 1,286.59 9,596.47 28.55 -29.40%
4 7,813.50 608.10 6,687.06 125.75 -16.85%
5 10,375.20 242.66 10,032.94 140.62 -3.41%
6 2,869.95 442.64 2,075.88 7.40 -38.25%
7 7,660.35 604.16 6,083.53 10.06 -25.92%
8 9,602.40 455.80 8,855.88 68.42 -8.43%
9 6,781.50 481.10 6,032.94 28.62 -12.41%
10 3,796.80 519.87 2,997.06 43.37 -26.68%
11 6,341.10 519.07 5,470.00 88.72 -15.93%
12 5,895.45 866.35 4,172.65 22.79 -41.29%
13 6,027.15 603.08 4,652.65 35.59 -29.54%
14 10,164.60 268.07 9,648.24 46.51 -5.35%
15 12,507.90 372.98 11,445.29 99.93 -9.28%
16 11,297.40 448.13 10,729.41 51.90 -5.29%
17 5,884.50 615.52 4,733.53 43.08 -24.32%
18 7,968.00 452.79 6,696.47 51.37 -18.99%
19 6,201.30 666.00 5,152.06 51.03 -20.37%
20 10,523.50 582.61 9,194.71 62.17 -14.45%
21 7,387.35 714.60 6,047.65 30.99 -22.15%
22 14,325.30 681.01 12,893.53 69.54 -11.10%
23 10,118.40 1,053.70 8,396.76 84.79 -20.50%
24 11,467.20 609.04 10,422.35 75.22 -10.03%
25 14,065.20 495.74 13,238.82 75.73 -6.24%
26 14,442.90 793.24 13,131.18 105.69 -9.99%
27 11,076.00 595.86 10,076.47 36.81 -9.92%
28 11,596.20 510.32 10,617.65 85.66 -9.22%
29 8,993.70 1,522.85 6,721.76 68.78 -33.80%
30 14,930.40 352.93 13,738.82 72.49 -8.67%
Total 274,579.00 18,445.75 239,617.34 1,779.41 -14.59%
Results on this set of instances confirm the trends outlined in the other two experiments. Also
in this case, TS∗ performs better than TS on all instances, and shows a better behavior in terms
of algorithm robustness. In fact, the overall standard deviation of TS∗ is more than an order of
magnitude smaller than the one of TS.
3 We use here the term “unknown” by contrast with the sets of instances constructed around a “best known”
solution
6.5 Discussion
The research described in this chapter is still ongoing, and up to now has produced a Local
Search solver for the min-Shift Design problem. We proposed a solver that employs a set of
neighborhoods compound using a Multi-Neighborhood approach. The solver is based on the Tabu
Search meta-heuristic, and is equipped with two different strategies for the selection of the initial
solution. The algorithm denoted with TS strategy starts from a randomly generated solution,
whereas the TS∗ algorithm starts from a good solution generated by a Min-Cost Max-Flow
algorithm that exploits a Network Flow encoding. The code for obtaining the greedy initial solution
has been provided to us by Nysret Musliu and Wolfgang Slany.
The solver was evaluated both in terms of its ability to reach good solutions and in terms of the
quality reached in short runs. Concerning the first aspect, we found that TS and TS∗ performed better
than a commercial Local Search solver called OPA [102]. Since OPA is also based on a (simpler) Multi-
Neighborhood solving strategy, this result confirms our claim that Multi-Neighborhood approaches
deserve a thorough investigation.
Looking at the comparison between TS and TS∗ only, the results clearly showed that TS∗
outperforms TS both in terms of running times and with respect to the quality of solutions found.
Furthermore, starting from a good initial solution increases the robustness of the Tabu Search
algorithm on all instances.
For this problem, speed is of crucial importance to allow for immediate discussion in working
groups and refinement of requirements. Without quick answers, understanding of requirements
and consensus building would be much more difficult.
In practice, a number of further optimization criteria clutter the problem, e.g., the average
number of working days per week. This number is an extremely good indicator of how difficult it
will be to develop a schedule and of what quality that schedule will have. The average number of
duties thereby becomes the key criterion for working conditions and is sometimes also part of
collective labor agreements.
However, this and most further criteria can easily be handled by straightforward extensions of
the solver described in this work and add nothing to the complexity of min-Shift Design. For
this reason we focus on the three main criteria described in this work.
7
Other Problems
In this chapter we briefly sketch our insights into other scheduling domains, which are not yet
mature enough to be collected in dedicated thesis chapters. We present in detail the results of the
application of a Multi-Neighborhood approach to the Job-Shop Scheduling problem and the
description of the neighborhood structures designed for a variant of the Resource-Constrained
Scheduling problem.
• for each job j ∈ J, an ordered set of kj tasks Tj = {tj1 ≺ tj2 ≺ . . . ≺ tjkj } ⊆ T ;
The Job-Shop Scheduling problem consists in finding a schedule σ : T → ℕ that minimizes the
makespan $f(\sigma) = \max_{t \in T}\,(\sigma(t) + \tau(t))$ and satisfies the following constraints:
[Figure 7.1: Gantt chart of a 3×3 Job-Shop Scheduling instance: the tasks of jobs j1, j2, j3 on processors p1, p2, p3 over time, with the starting times σ(t), the processing times τ(t), the critical path (dashed ellipses), a block of operations (rounded box), and the makespan.]
2. No recirculation: each job visits each processor only once. That is, for each job j there are
at most m tasks.
3. Precedence: the order of tasks in a job is strict and should be reflected by the schedule. In
other words t ≺ t′ ⇒ σ(t) + τ (t) ≤ σ(t′ ).
4. Task disjunction: at each time, a processor can process only one task. That is, p(t) = p(t′ ) ⇒
σ(t) + τ (t) ≤ σ(t′ ) ∨ σ(t′ ) + τ (t′ ) ≤ σ(t).
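As a concrete reading of constraints 3 and 4, the following sketch (with simplified data structures of our own) checks the feasibility of a candidate schedule σ and computes its makespan.

#include <algorithm>
#include <vector>

struct Task { int job, processor, duration; };   // duration plays the role of tau(t)

// Makespan f(sigma) = max over tasks of sigma(t) + tau(t).
int Makespan(const std::vector<Task>& tasks, const std::vector<int>& sigma) {
  int m = 0;
  for (int t = 0; t < static_cast<int>(tasks.size()); ++t)
    m = std::max(m, sigma[t] + tasks[t].duration);
  return m;
}

// Precedence (constraint 3) and task disjunction (constraint 4) for a schedule
// sigma of start times; job_order lists the task ids of each job in their order.
bool Feasible(const std::vector<Task>& tasks, const std::vector<int>& sigma,
              const std::vector<std::vector<int>>& job_order) {
  for (const auto& order : job_order)                    // t precedes t'  =>  end(t) <= start(t')
    for (int k = 1; k < static_cast<int>(order.size()); ++k)
      if (sigma[order[k - 1]] + tasks[order[k - 1]].duration > sigma[order[k]]) return false;
  for (int a = 0; a < static_cast<int>(tasks.size()); ++a)     // tasks on a processor must not overlap
    for (int b = a + 1; b < static_cast<int>(tasks.size()); ++b)
      if (tasks[a].processor == tasks[b].processor) {
        bool a_first = sigma[a] + tasks[a].duration <= sigma[b];
        bool b_first = sigma[b] + tasks[b].duration <= sigma[a];
        if (!a_first && !b_first) return false;
      }
  return true;
}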
The output of the problem is usually represented by means of a Gantt chart. In Figure 7.1 we
present this pictorial representation for a 3×3 instance of the problem. In addition, in the figure we
introduce two related concepts: the critical path and the block of operations, denoted respectively
with dashed ellipses and a rounded box. Even though we do not enter into full detail, the critical
path is the set of critical operations, i.e., the operations for which a processing delay implies a
delay of the overall project. A block is a maximal set of adjacent operations that belong to the
critical path.
Sequence of tasks
p1 t21 t12 t32
p2 t31 t22 t13
p3 t11 t23 t33
Table 7.2: Left-justified schedule for the matrix representation in Table 7.1
1. evaluate the behavior of the single-run algorithms built on the N1-N3 neighborhoods on a
common ground and compare the performance with the existing literature;
2. evaluate the effects of runners and kickers equipped with the N2 neighborhood with respect
to a simple multi-start Tabu Search approach.
1 At the URL http://mscmga.ms.ic.ac.uk/info.html
For the first experiment we performed ten runs for each instance and recorded the best solution
found. In Table 7.3 we report the results obtained by our algorithms on a subset
of instances.
The second experiment was conducted by running the algorithms for a fixed amount of time
(depending on the instance at hand) and recording the best results found. Afterwards, we compared
the average cost found by each algorithm employing a directional Mann-Whitney non-parametric
statistical test (level of significance p < 0.01).
We found significant differences between all pairs of algorithms, which indicates that the
kicker components have a positive effect with respect to the simple multi-start strategy. In addition,
this result points out that the difference in behavior between kickers employing different kick types
is also significant. In Table 7.4 we summarize the outcome of this comparison.
Instance   TSms(N2)   TS(N2)⊲RK10(N2)   TS(N2)⊲RK20(N2)   TS(N2)⊲RK30(N2)
FT10       945.4      943.1             943.9             944.5
LA24       951.9      947.3             950.5             949.1
LA27       1262.1     1262.0            1261.9            1262.9
LA36       1285.5     1283.6            1280.2            1286.8
LA40       1239.9     1234.5            1239.2            1237.2
(a) Evaluation of random kicks of various lengths.

Instance   TSms(N2)   TS(N2)⊲BK2(N2)    TS(N2)⊲BK3(N2)    TS(N2)⊲BK4(N2)
FT10       945.4      944.7             946.4             946.5
LA24       951.9      948.9             949.2             950.9
LA27       1262.1     1252.7            1259.6            1262.5
LA36       1285.5     1281.1            1282.9            1288.2
LA40       1239.9     1237.9            1237.8            1239.5
(b) Evaluation of best kicks of various lengths.
From the experiments it is clear that the token-ring search and the use of kickers improve the
results of the basic algorithms. However, there is no clear winner among the selection strategies
employed by the kickers. Furthermore, the developed algorithms are in the same range as state-of-
the-art methods.
We consider the results of this preliminary work quite encouraging, and we plan to extend
it by further evaluating the algorithms on other benchmark instances. Furthermore, we aim at
experimenting with new neighborhood structures and different composition operators.
time. The dashed line indicates the capacity profile for the processor, whereas the gray shaded
area represents the load profile of the current assignment.
[Figure: sequencing of tasks T1–T9 on processor P1 (top) and the corresponding load profile against the processor capacity over time (bottom).]
Finally, as the cost function F , for this problem we employ the aggregate sum of the two
objectives f1 and f2 with equal weights plus the number of precedence constraints that are violated
multiplied by 1000.
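A sketch of this aggregate cost function, under the assumption that a precedence pair (a, b) requires task a to finish before task b starts (the data layout is our own simplification):

#include <utility>
#include <vector>

// F = f1 + f2 + 1000 * (number of violated precedence constraints).
// f1, f2 are the two objective values of the current assignment; 'start' and
// 'end' give the scheduled start and end times of each task.
double AggregateCost(double f1, double f2,
                     const std::vector<std::pair<int, int>>& precedences,
                     const std::vector<int>& start, const std::vector<int>& end) {
  int violations = 0;
  for (const auto& p : precedences)
    if (end[p.first] > start[p.second]) ++violations;   // p.first must finish before p.second starts
  return f1 + f2 + 1000.0 * violations;
}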
Table 7.6: Results of the Tabu Search solver for Resource-Constrained Scheduling
From the results it is clear that our Tabu Search solver improves over the solutions found by the
greedy solvers in all cases: it is able to reach a cost value that is up to 46% lower than the starting
solution cost. Unfortunately, the solver is not able to wipe out the precedence violations of the
starting state InfBw.
Notice also that, generally, there is a trade-off between the two objectives f1 and f2 and this is
reflected also in the behavior of the algorithm. In fact, in the case of the InfBw starting solution
the improvement of the capacity excess is achieved at the price of deteriorating the tardiness.
The symmetric situation arises in the case of the FinFw starting solution, where the tardiness
improvement is paid by an increased capacity excess.
This remark suggests a direction for further research. In fact, it is important to understand
the relative importance of the quality criteria employed in this problem, in order to
obtain a precise formulation of the cost function the user has in mind. Moreover, we plan to run
experiments with our solver on new instances, and to test it with other neighborhood combinations
inspired by the Multi-Neighborhood Search approach.
III
A Software Tool for Local Search
8
EasyLocal++: an Object-Oriented
Framework for Local Search
Differently from other search paradigms (e.g., branch & bound), no widely accepted software tool is
available for Local Search up to now; only a few research-level prototypes have gained limited
popularity. In our opinion, the reason for this lack is twofold: on the one hand, the apparent
simplicity of Local Search induces users to build their applications from scratch. On the other
hand, the rapid evolution of Local Search techniques (see Chapter 2 for a review) seems to make
the development of general tools impractical.
We believe that the use of object-oriented (O-O) frameworks can help in overcoming these
problems. A framework is a special kind of software library, which consists of a hierarchy of abstract
classes. The user only defines suitable derived classes, which implement the virtual functions of
the abstract classes. Frameworks are characterized by the inverse control mechanism (also known
as the Hollywood Principle: “Don’t call us, we’ll call you”) for the communication with the user
code: the functions of the framework call the user-defined ones and not the other way round. The
framework thus provides the full control structures for the invariant part of the algorithms, and
the user only supplies the problem specific details.
In this chapter we present our attempt to devise a general tool for the development and the
analysis of Local Search algorithms. The system is called EasyLocal++, and is an object-oriented
framework written in the C++ language.
[Figure 8.1: the layered architecture of EasyLocal++. The User Application sits on top; at the problem-independent levels, Solvers (Simple, Token-ring, Comparative, ...) provide the solving strategies, while Runners (Hill Climbing, Tabu Search, ...) and Kickers provide the meta-heuristics, with Testers supporting experimentation; at the problem-specific level, Helpers (State manager, Neighborhood explorer, Prohibition manager, ...) encapsulate the Local Search features.]
Data classes only store attributes, and have no computing capabilities. They are supplied to
the other classes as templates, which need to be instantiated by the user with the corresponding
problem-specific types.
This is a precise design choice we have made for the sake of balancing the trade-off between
the computational overhead and the expressive power of O-O features. Specifically, data classes
are massively employed by all the other classes of the framework; therefore, providing efficient
access to them is a primary concern.
8.1.2 Helpers
The Local Search features are embodied in what we name helpers. These classes perform actions
related to each specific aspect of the search. For example, the Neighborhood Explorer is responsible
for everything concerning the neighborhood: selecting the move, updating the current state by
executing a move, and so on. Different Neighborhood Explorers may be defined in case of composite
search, each one handling a specific neighborhood relation used by the algorithm.
Helpers cooperate among themselves. For example, the Neighborhood Explorer is not responsible
for the computation of the cost function, and delegates this task to the State Manager that handles
the attributes of each state. Helpers do not have their own internal data, but they work on the
internal state of the runners and the kickers, and interact with them through function parameters.
8.1.3 Runners
Runners represent the algorithmic core of the framework. They are responsible for performing
a full run of a Local Search algorithm, starting from an initial state and leading to a final one.
Each runner has many data objects for representing the state of the search (current state, best
state, current move, number of iterations, . . . ), and it maintains links to all the helpers, which are
invoked for performing problem-related tasks on its own data.
Runners can completely abstract from the problem description, and delegate problem-related tasks to the user-supplied classes that comply with a predefined helper interface.
This feature allows us to describe meta-heuristics through incremental specification. For ex-
ample, in EasyLocal++ we directly translated the abstract Local Search algorithm presented
in Figure 2.1 into the C++ code reported in Figure 8.2. In the figure, the components that are left unspecified at the level of abstraction of the algorithm (i.e., the template names and the virtual
methods) are printed in italic.
Then, to specify actual meta-heuristics, it remains to define the strategy for move selection and acceptance (through an actual implementation of the SelectMove() and AcceptableMove() functions, respectively), and the criterion for stopping the search (by means of the StopCriterion() function). We will return to these functions in Section 8.4.3, where we give more details on the meta-heuristics development process.
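To make the incremental specification concrete, the following self-contained sketch is written in the spirit of Figure 8.2; it is not the verbatim EasyLocal++ code, and the data member names (current_state, best_state, current_move, iteration) are only indicative. The Go() function provides the invariant control structure, while the pure virtual functions are the holes filled in by the concrete meta-heuristics.

template <class State, class Move>
class AbstractMoveRunner
{
public:
  virtual ~AbstractMoveRunner() {}
  // full run of Local Search: the invariant part of every meta-heuristic
  void Go()
  {
    iteration = 0;
    while (!StopCriterion() && !LowerBoundReached())
    {
      iteration++;
      SelectMove();               // strategy-specific (random, best, first improving, ...)
      if (AcceptableMove())       // acceptance criterion of the meta-heuristic
      {
        MakeMove();               // apply current_move to current_state
        StoreMove();              // bookkeeping: best state, tabu list, ...
      }
    }
  }
protected:
  virtual void SelectMove() = 0;
  virtual bool AcceptableMove() = 0;
  virtual bool StopCriterion() = 0;
  virtual void MakeMove() = 0;
  virtual void StoreMove() {}                        // tentative (empty) definition
  virtual bool LowerBoundReached() { return false; } // tentative definition
  State current_state, best_state;
  Move current_move;
  unsigned long iteration;
};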
Examples of runners that have been implemented in EasyLocal++ are the basic techniques
presented in Section 2.3, i.e., hill climbing, simulated annealing and tabu search.
8.1.4 Kickers
Kickers represent an alternative to runners and they are used for diversification purposes. A kicker
is an algorithm based on a composite neighborhood, made up of chains of moves belonging to base
neighborhoods (see Chapter 3 for details). The name “kick”, used to refer to perturbations applied to Local Search algorithms (due, to the best of our knowledge, to [94]), comes from the metaphor of a long move as a kick given to the current state in order to perturb it.
Among other capabilities, a kicker allows the user to move away from the current state of the
search by drawing a random kick, or searching for the best kick of a given length.
8.1.5 Solvers
The highest abstraction level in the hierarchy of classes is constituted by the solvers, which represent
the external software layer of EasyLocal++. Solvers control the search by generating the initial
solutions, and deciding how, and in which sequence, runners or kickers have to be activated. A
solver, for instance, implements the token-ring strategy, one of the Multi-Neighborhood Local
Search methods we devised (see Section 3.2.1 for more details). Other solvers implement different
combinations of basic meta-heuristics and/or hybrid methods.
Solvers are linked to (one or more) runners and to the kickers that belong to their solution
strategy. In addition, solvers communicate with the external environment, by getting the input
and delivering the output.
As we are going to see, all the methods of runners, kickers and solvers are completely specified at the framework level, which means that using them requires only the definition of the appropriate derived class (we refer to Chapter 9 for a comprehensive case study in the use of the framework).
New runners and solvers can be added by the user as well. This way, EasyLocal++ supports
also the design of new meta-heuristics and the combination of already available algorithms. In
fact, it is possible to describe new abstract algorithms (in the sense that they are decoupled from
the problem at hand) at the runner level, while, by defining new solvers, it is possible to prescribe
strategies for composing pools of basic techniques.
8.1.6 Testers
In addition to the core classes sketched so far, the framework provides a set of tester classes, which
act as a generic user interface of the program.
They can be used to help the developer in debugging her code, adjusting the techniques, and
tuning the parameters. Furthermore, testers provide some tools for the analysis of the algorithms.
Specifically, the user can employ them to instruct the system to perform massive batch experiments,
and to collect the results in aggregated form.
Batch runs can be instructed using a dedicated language, called ExpSpec, which allows us
to compare different algorithms and parameter settings with very little intervention of the human
operator.
Testers are no longer used once the program is embedded in a larger application, or if the user develops an ad hoc interface for her program. For this reason, we do not consider testers as core components of EasyLocal++, but rather as development/analysis utilities.
This is also reflected by the fact that, in the hierarchy picture, testers wrap the core components
of the framework.
8.2 EasyLocal++ Architecture
The member functions of the framework classes belong to three categories, depending on whether and how the user is expected to (re)define them (a small illustrative sketch follows the list):
MustDef : pure virtual C++ functions that correspond to problem-specific aspects of the algorithm; they must be defined by the user, and they encode some particular problem-related elements.
MayRedef : non-pure virtual C++ functions that come with a tentative definition. These func-
tions may be redefined by the user in case the default version is not satisfactory for the
problem at hand (see examples in the case study of Chapter 9). Thanks to the late binding
mechanism for virtual functions, the program will always execute the user-defined version of
the function.
NoRedef : final (non-virtual) C++ functions that should not be redefined by the user. More
precisely, they can be redefined, but the base class version is executed when invoked through
the framework.
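The following small sketch illustrates how the three categories map onto C++; the class and the values in it are invented for the example and do not belong to the framework.

class ExampleHelper
{
public:
  ExampleHelper() : hard_weight(1000) {}
  virtual ~ExampleHelper() {}
  // NoRedef: non-virtual, fully specified at the framework level
  int CostFunction(int objective, int violations) const
  { return hard_weight * violations + objective; }
  // MayRedef: virtual with a tentative definition, redefinable by the user
  virtual void BuildState()
  { RandomState(); }              // default behavior: fall back to a random state
  // MustDef: pure virtual, problem-specific, must be supplied by the user
  virtual void RandomState() = 0;
protected:
  int hard_weight;                // hypothetical weight for hard violations
};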
In order to use the framework, the user has to define the data classes (i.e., the template
instantiations), the derived classes for the helpers, and at least one runner and one solver. Figure 8.4
shows an example of one step of this process.
[Figure 8.3: EasyLocal++ main classes (S: state, M: move). The diagram shows the four groups of classes: the Helpers (State Manager, Output Producer, Cost Component, Neighborhood Explorer and Prohibition Manager, with decorated variants such as the Shifted Delta Cost Component and the decorated Prohibition Manager), the Runners (MoveRunner and Multimodal MoveRunner, specialized into Hill Climbing, Steepest Descent, Random Non-Ascending, Simulated Annealing and Tabu Search, plus the Runner Observer), the Kickers (Simple and Multimodal) and the Solvers (Simple, Hybrid, Token-Ring, Multi-Runner and Comparative Solver).]
[Figure 8.4: one step of the instantiation process for the Graph Coloring example. The EasyLocal++ class NeighborhoodExplorer (with templates Input, State and Move) provides the MayRedef functions FirstMove(), BestMove() and SampleMove(); in the user application, the subclass ChangeColorExplorer, together with the user data classes and the move class Recolor, supplies the MustDef functions MakeMove(), RandomMove() and NextMove().]
The function names drawn in the box ChangeColorExplorer are MustDef ones, and they are defined in that subclass. Conversely, the box NeighborhoodExplorer reports some MayRedef functions, which need not be redefined. The classes GraphCol, Coloring, and Recolor, defined by the user, instantiate the templates Input, State, and Move, respectively.
Many framework classes have no MustDef functions. As a consequence, the corresponding
user-defined subclasses comprise only the class constructor, which cannot be inherited in the C++
language.
For all user’s classes, EasyLocal++ provides a skeleton version, which is usually suitable
for the user’s application. The skeleton comprises the definition of the classes, the declaration of
constructors, the MustDef functions and all the necessary include directives. The user thus has
only to fill in the empty MustDef functions. Hence, as discussed in Section 9.6, the user is actually
required to write very little code.
8.4 A description of EasyLocal++ classes
8.4.1 Data Classes
The data classes to be supplied by the user instantiate the following templates; we exemplify them on the Graph Coloring problem.
Input : input data of the problem; e.g., an undirected graph G and an upper bound k on the number of colors. We assume that the colors are represented by the integers 0, 1, . . . , k − 1.
These data can be stored in a Graph class that represents the undirected graph (e.g., by
means of an adjacency matrix), and has a data member k that accounts for the number of
colors to be used.
Output : output to be delivered to the user; e.g., an assignment of colors to all the nodes of the
graph. For example, such data can be represented through a Coloring class that handles a
vector whose indices are the nodes of the graph.
State : represents the elements of the search space; e.g., a (possibly partial) function that maps
the nodes of the graph into the set of colors. Again it can be represented by a specialization
of the Coloring class, which maintains also redundant data.
Move : encodes a local move; e.g., a triple ⟨v, c_old, c_new⟩ representing the fact that the color assigned to node v in the map is changing from c_old to c_new. Such moves can be stored in a Recolor class that handles the mentioned move features.
In a few applications State and Output classes may coincide but, in general, the search space
—that is explored by the algorithm— is only an indirect (possibly also not complete) representation
of the output space —that is related to the problem specification. For example, in the Flow-Shop
problem [57, problem SS15, p. 241] the search space can be the set of task permutations, whereas
the output space is the set of schedules with their start and end times for all tasks.
8.4.2 Helpers
The helpers provided by the framework are the following:
StateManager<Input,State> : is responsible for all operations on the state that are independent of the definition of the neighborhood.
OutputManager<Input,Output,State> : is responsible for translating between elements of the search space and output solutions. It also delivers other output information about the search, and stores and retrieves solutions from files. This is the only helper that deals with the Output class. All other helpers work only on the State class, which represents the elements of the search space used by the algorithms.
NeighborhoodExplorer<Input,State,Move> : handles all the features concerning neighborhood exploration.
ProhibitionManager<Move> : is in charge of the management of the prohibition mechanism (e.g., for the tabu search strategy).
CostComponent<Input,State> : handles one component of the cost function; in detail, it computes that element of the cost function on a given state. It is owned by the State Manager, which relies on the available Cost Components for computing the quality of a state. Each component has an associated weight that can be modified at run-time.
DeltaCostComponent<Input,State,Move> : is the “dynamic” companion of the previous class: it computes the difference of the cost function in a given state due to a Move passed as parameter. Delta Cost Components are attached to a suitable Neighborhood Explorer, from which they are invoked. Additional responsibilities can be delegated to a Delta Cost Component by means of the decorator pattern. For example, an augmented Delta Cost Component can implement an adaptive modification of the weights of the cost function, according to the shifting penalty mechanism (see Section 2.4).
Now, we describe in more detail the State Manager, the Neighborhood Explorer and the Output
Producer. In the following, the type fvalue denotes the co-domain of the objective function
(typically int or double).
State Manager
The State Manager is responsible for all the operations on the state that are independent of the neighborhood definition; therefore, no Move definition is supplied to the State Manager. A
State Manager handles two sets of Cost Component objects, which compute the objective function
elements and the number of violations. The State Manager core functions are the following:
MustDef functions:
void RandomState(State &st): makes st become a random state.
MayRedef functions:
void SampleState(State &st, int n): stores in st the best solution among n randomly gen-
erated states.
void BuildState(State &st): generates a state according to some problem-specific algorithm
and stores it in st. Its tentative definition simply calls the function RandomState(st).
NoRedef functions:
void AddObjectiveComponent(CostComponent *cc): adds the given Cost Component passed as
parameter to the current set of objective function components.
void AddViolationsComponent(CostComponent *cc): is the companion function of AddObjectiveComponent(),
dealing with the cost components that compute the number of violations.
fvalue Objective(const State &st): computes the value of the objective function in the state st. The tentative definition simply invokes the Cost() function of the attached Cost Component objects and aggregates the results according to the cost components' weights.
fvalue Violations(const State &st): counts the number of violated constraints in the state st. Again, the tentative definition delegates the computation of the number of violations to the Cost Component objects and aggregates their results.
fvalue CostFunction(const State &st): computes a weighted sum of the values returned by the Objective() and Violations() functions. In detail, a hard weight is assigned to violations, and the definition of the function simply returns the value HARD_WEIGHT * Violations(st) + Objective(st) (a sketch of these tentative definitions is given below).
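A plausible shape of these tentative definitions is sketched below; the types and members (ExampleState, ExampleCostComponent, HARD_WEIGHT) are stand-ins introduced only for the example, not the actual framework code.

#include <vector>

typedef double fvalue;                      // co-domain of the cost function

struct ExampleState {};
struct ExampleCostComponent
{
  fvalue weight;
  ExampleCostComponent(fvalue w) : weight(w) {}
  fvalue Cost(const ExampleState &) const { return 0; }   // dummy cost
};

const fvalue HARD_WEIGHT = 1000.0;          // hypothetical weight assigned to violations

struct ExampleStateManager
{
  std::vector<ExampleCostComponent*> objective_components, violations_components;
  // weighted aggregation of a set of cost components
  fvalue Aggregate(const std::vector<ExampleCostComponent*> &ccs,
                   const ExampleState &st) const
  {
    fvalue total = 0;
    for (unsigned int i = 0; i < ccs.size(); i++)
      total += ccs[i]->weight * ccs[i]->Cost(st);
    return total;
  }
  fvalue Objective(const ExampleState &st) const
  { return Aggregate(objective_components, st); }
  fvalue Violations(const ExampleState &st) const
  { return Aggregate(violations_components, st); }
  // a hard weight is assigned to violations
  fvalue CostFunction(const ExampleState &st) const
  { return HARD_WEIGHT * Violations(st) + Objective(st); }
};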
Neighborhood Explorer
A Neighborhood Explorer encodes a particular neighborhood relation associated with a specific Move class; therefore, if different neighborhood relations are used (e.g., in the multi-neighborhood strategies), different subclasses of NeighborhoodExplorer with different instantiations of the template Move must be defined. The Neighborhood Explorer also manages a set of Delta Cost Component objects, which compute the elements of variation of the cost function due to a given move.
Some of the main functions of the Neighborhood Explorer are the following:
MustDef functions:
void MakeMove(State &st, const Move &mv): updates the state st by applying the move mv
to it.
void RandomMove(const State &st, Move &mv): generates a random move for the state st
and stores it in mv.
void NextMove(const State &st, Move &mv): modifies mv to become the candidate move that
follows mv according to the neighborhood exploration strategy. This is used in algorithms
relying on exhaustive neighborhood exploration.
MayRedef functions:
void FirstMove(const State &st, Move &mv): generates the first move for the state st ac-
cording to the neighborhood exploration strategy, and stores it in mv. Its tentative definition
simply invokes the RandomMove method.
fvalue BestMove(const State &st, Move &mv): looks for the best possible move in the neighborhood of st and stores it in mv.
fvalue SampleMove(const State &st, Move &mv, int n): looks for the best move among n
randomly sampled moves in the neighborhood of st.
NoRedef functions:
void AddDeltaObjectiveComponent(DeltaCostComponent *dcc): inserts the Delta Cost Compo-
nent passed as parameter into the current set of components which compute the variation of
the objective function.
fvalue DeltaObjective(const State &st, const Move &mv): computes the difference in the objective function between the state obtained from st by applying mv and the state st itself. Its definition checks whether some Delta Cost Components for computing the variations of the objective function are attached and, in that case, invokes them. If no Delta Cost Component is available, it resorts to computing f (s ◦ m) and f (s) explicitly, by calling the corresponding methods of the State Manager, and returning the difference.
fvalue DeltaViolations(const State &st, const Move &mv): computes the difference in the violations count between the state obtained from st by applying mv and the state st itself. Its
behavior is the same as the DeltaObjective() function.
fvalue DeltaCostFunction(const State &st, const Move &mv): similarly to the CostFunction()
of the State Manager, it computes a weighted sum of the values returned by DeltaObjective()
and DeltaViolations().
Notice that the computation of the cost function is partly performed by the Neighborhood
Explorer, which computes the variations, and partly by the State Manager, which computes the
static value. This is due to the fact that the variation of the cost function depends on the neighborhood relation, and different Neighborhood Explorers compute the variations differently.
This way, we can add new neighborhood definitions without changing the State Manager.
Note also that the default definition of the DeltaObjective() and DeltaViolations() functions is unacceptably inefficient for almost all applications if no Delta Cost Component is attached to the Neighborhood Explorer. For this reason, the user is encouraged to define suitable Delta Cost Components that take into account only the differences generated by the local changes.
As an example of EasyLocal++ helpers code, we consider the definition of the BestMove() function. In this code, LastMoveDone() is a MayRedef function whose tentative code is the single instruction “return mv == start_move;”, p_pm is a pointer to the Prohibition Manager discussed below, and the function ProhibitedMove(mv, mv_cost) delegates to that helper the decision whether the move mv is prohibited or not.
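A simplified sketch of such an exhaustive exploration, written in terms of the functions introduced above, is reported below; it is not the verbatim framework code (in particular, the aspiration handling and the treatment of the case in which all moves are prohibited are simplified).

template <class Input, class State, class Move>
fvalue NeighborhoodExplorer<Input,State,Move>::BestMove(const State &st, Move &mv)
{
  FirstMove(st, mv);                         // first candidate of the enumeration
  start_move = mv;                           // used by LastMoveDone()
  Move best_move = mv;
  fvalue mv_cost = DeltaCostFunction(st, mv);
  fvalue best_cost = mv_cost;
  bool best_is_admissible = !ProhibitedMove(mv, mv_cost);
  do
  {
    NextMove(st, mv);                        // candidate move that follows mv
    mv_cost = DeltaCostFunction(st, mv);
    if (!ProhibitedMove(mv, mv_cost)
        && (!best_is_admissible || mv_cost < best_cost))
    {
      best_move = mv;
      best_cost = mv_cost;
      best_is_admissible = true;
    }
  }
  while (!LastMoveDone(mv));                 // tentative code: mv == start_move
  mv = best_move;
  return best_cost;
}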
The reader may wonder why we use the decorator pattern instead of straight class derivation and virtual functions. The answer is that simple class derivation would force the user to write the same code twice: once for the basic Neighborhood Explorer (used in conjunction with the algorithms that do not make use of prohibition) and once for the Neighborhood Explorer with prohibition. From our point of view, this is unacceptable, since good Object-Oriented design practice aims at preventing code duplication.
The alternative would be to define the prohibition-enabled Neighborhood Explorer only once and to provide it to all the algorithms. However, this choice is also undesirable, since it equips simple algorithms with heavyweight components and, for this reason, induces unwanted computational overhead.
The decorator pattern, instead, allows us to retain lightweight components and to attach to
them some responsibilities at run-time only when required. In fact, in order to equip a generic Neighborhood Explorer p_nhe with the prohibition mechanism it is enough to write the following two lines of code:
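Assuming, for the sake of the example, a decorator class named ProhibitionDecoratedNeighborhoodExplorer whose constructor receives the basic explorer and the Prohibition Manager (the actual class name and constructor signature in the framework may differ):

// hypothetical decorator class name and constructor signature
NeighborhoodExplorer<Input,State,Move> *p_pnhe =
  new ProhibitionDecoratedNeighborhoodExplorer<Input,State,Move>(p_nhe, p_pm);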
Then, all the features of the augmented Neighborhood Explorer are accessible through the p_pnhe
variable in a way that is completely transparent to the user.
Output Producer
This helper is responsible for translating between elements of the search space and the output
solutions. It also delivers other output information about the search, and stores/retrieves solutions to/from files.
This is the only helper that deals with the Output class. All other helpers work only on the
State class that represents the elements of the search space used by the algorithms.
The main functions of the Output Producer are the following ones. The most important is
OutputState() which delivers the output at the end of the search.
MustDef functions:
void InputState(State &st, const Output &out): gets the state st from the output out.
void OutputState(const State &st, Output &out,...): writes the output object out from
the state st.
MayRedef functions:
void ReadState(State &st, istream &is): reads the state st from the stream is (it uses
InputState()).
void WriteState(State &st, ostream &os): writes the state in the stream os (it uses OutputState()).
Prohibition Manager
This helper deals with move prohibition mechanisms that prevent cycling and allow for diversification. As shown in Figure 8.3, we also have a more specific Prohibition Manager, which maintains
a list of Move elements according to the prohibition mechanisms of tabu search. Its main functions
are the following:
MustDef functions:
bool Inverse(const Move &m1, const Move &m2): checks whether a (candidate) move m1 is
the inverse of a (list member) move m2.
MayRedef functions:
void InsertMove(const Move &mv, ...): inserts the move mv in the list and assigns it a tenure
period; furthermore, it discards all moves whose tenure period is expired.
bool ProhibitedMove(const Move &mv, ...): checks whether a move is prohibited, i.e., it is
the inverse of one of the moves in the list.
Both functions InsertMove() and ProhibitedMove() have other parameters, which are related
to the aspiration mechanism of tabu search that is not described here.
8.4.3 Runners
EasyLocal++ comprises a hierarchy of runners. The base class Runner has only Input and
State templates, and is connected to the solvers, which have no knowledge about the neighborhood
relations.
The class MoveRunner requires also the template Move , and the pointers to the necessary
helpers. It also stores the basic data common to all derived classes: the current state, the current
move, and the number of iterations.
The use of templates allows us to directly define objects of type State , such as current state
and best state, rather than accessing them through pointers. This makes construction and copy
of objects of type State completely transparent to the user, since this operation does not require
any explicit cast operation or dynamic allocation.
The core function of MoveRunner is the Go() function which performs a full run of Local Search.
Although this function has already been presented in Section 8.1.3, we are now able to describe its code in more detail.
Most of the functions invoked by Go() are abstract methods that will be defined in the subclasses
of MoveRunner, which implement the actual meta-heuristics. For example, if we name p_nhe the pointer to the Neighborhood Explorer, the SelectMove() function invokes p_nhe->RandomMove() in the subclass SimulatedAnnealing, while in the subclass TabuSearch it invokes p_nhe->BestMove() on the Neighborhood Explorer that, in turn, has been decorated with the tabu-list prohibition handling mechanism as outlined before.
Two functions that are defined at this level of the hierarchy are the MayRedef functions UpdateIterationCounter() and LowerBoundReached(). Their tentative definitions simply consist in incrementing the iteration counter by one and in checking whether the current state cost is equal to 0, respectively.
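In a sketch (current_state_cost stands for the runner's cached cost of the current state; the name is only indicative):

template <class Input, class State, class Move>
void MoveRunner<Input,State,Move>::UpdateIterationCounter()
{ number_of_iterations++; }                  // tentative definition: one unit per step

template <class Input, class State, class Move>
bool MoveRunner<Input,State,Move>::LowerBoundReached()
{ return current_state_cost == 0; }          // stop as soon as a zero-cost state is reached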
Runners can be equipped with one or more observers (which are not presented in Figure 8.3,
since they are not main components of EasyLocal++). The observers can be used to inspect the
state of the runner, for example for debugging the delta cost function components or for plotting
the data of the execution.
Among the actual runners, TabuSearch is the most complex one. This class has extra data for
the specific features of tabu search. It has various extra members, including:
• a State variable for the best state, which is necessary since the search can go up-hill;
• a decorated Neighborhood Explorer which implements the tabu search move selection strategy;
• a pointer to a Prohibition Manager, which is shared with the decorated Neighborhood Explorer
and is used by the functions SelectMove() and StoreMove();
• two integer variables, iteration_of_best and max_idle_iterations, for implementing the stop criterion (see the sketch below).
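A plausible form of the resulting stop criterion, using the two variables above (number_of_iterations denotes the runner's iteration counter; the actual code may include further conditions), is:

template <class Input, class State, class Move>
bool TabuSearch<Input,State,Move>::StopCriterion()
{ return number_of_iterations - iteration_of_best > max_idle_iterations; }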
We provide also an advanced version of tabu search, which includes the shifting penalty mech-
anism. The corresponding class then works in connection with the decorated versions of the Delta
Cost Component, which implements the chosen adaptive weighting strategy.
The other main subclass of the Runner class, called MultiModalMoveRunner, deals with more
than one neighborhood and is used as the base class for implementing some elements of the Multi-
Neighborhood Local Search strategy described in Chapter 3. In detail, the MultiModalMoveRunners
manage sets of moves belonging to different neighborhood definitions and implement the neighbor-
hood union and the neighborhood composition operators.
The definition of two separate hierarchies for simple and multi-modal MoveRunners is not completely satisfactory. Unfortunately, since in EasyLocal++ moves are supplied through templates, it is quite difficult to define a generic mono/multi-modal runner without static type-checking violations. For this reason, we prefer to keep these hierarchies separate until we have reached a stable version of the multi-neighborhood components. However, we are already looking for ways to overcome this problem.
8.4.4 Kickers
Kickers handle composite neighborhoods made up of chains of moves belonging to different neighborhoods, in the spirit of the total neighborhood composition (see Section 3.3).
In principle, a kicker can generate and evaluate chains of moves of arbitrary length. However,
due to the size of the base neighborhoods, a thorough exploration of the whole neighborhood is
generally computationally infeasible for lengths of 3 or more (in fact its size increases exponentially
with the number of steps).
To reduce the computational cost, the kickers can be programmed to explore only kicks com-
posed of certain combinations of moves. In detail, a kicker searches for a chain of moves that are
synergic (i.e., related) to each other.
The intuitive reason is that kickers are invoked when the search is trapped in a deep local
minimum, and it is quite unlikely that a chain of unrelated moves could be effective in such
situations.
Among others, the main functions of a kicker are the following ones.
MustDef functions:
bool SynergicMoves(const Move_a &m_a, const Move_b &m_b): states whether two moves m_a, of type Move_a, and m_b, of type Move_b, are synergic.
MayRedef functions:
void RandomKick(const State &st): builds a chain of moves of a given length according to the
random kick selection strategy, starting from the state st.
void BestKick(const State &st): builds a chain of moves of a given length according to the
best kick selection strategy, starting from the state st.
fvalue MakeKick(State &st): applies the selected kick upon the state st.
NoRedef functions:
void SetStep(unsigned int s): sets the number of steps of the total composition, i.e., the num-
ber of moves to be looked for.
Similarly to runners, there are two companion subclasses of the class Kicker that handle a single
neighborhood (SimpleKicker) and a set of neighborhoods (MultiModalKicker) respectively. This
split has the same motivations as for the runners.
Actual kickers implement the strategy for selecting the chain of moves to be applied by means of the RandomKick() and BestKick() functions. This is quite straightforward for the SimpleKicker
class, which allows the user to draw a random sequence of moves and to search for the best sequence
of moves of the given type.
Concerning the MultiModalKickers, instead, more than one strategy is possible for selecting
the chain of moves. For example the PatternKicker searches for the random or the best kick of a
given length following a pattern of moves. In detail, a pattern specifies which kind of neighborhoods
to employ at each step for building up the chain of moves that implement a simple composition.
Another possible strategy could search for the random or the best kick of a given length using the
full total-composition (i.e., regardless of move patterns). This is implemented in the TotalKicker
class.
The function SynergicMoves() deals with the notion of move relatedness, which is obviously a problem-dependent element. It is meant for pruning the total composite neighborhood handled by the kicker, and the user is required to write its complete definition for the problem at hand. Actually, for multi-modal kickers it is necessary to define several instances of the SynergicMoves() function, one for each pair of Move types employed.
8.4.5 Solvers
Solvers represent the external layer of EasyLocal++. Their code is almost completely provided
by framework classes; i.e., they have no MustDef functions. Solvers have an internal state and
pointers to one or more runners and kickers. The main functions of a solver are the following ones.
MayRedef functions:
void FindInitialState(): provides the initial state for the search by calling the function SampleState() of the helper State Manager on the internal state of the solver.
void Run(): starts the Local Search process, invoking the Go() function of the runners, or the
suitable function of the kickers, according to the solver strategy.
NoRedef functions:
void Solve(): makes a complete execution of the solver, by invoking the functions FindInitialState(), Run(), and DeliverOutput().
void MultiStartSolve(): makes many runs from different initial states and delivers the best of
all final states as output.
void DeliverOutput(): calls the function OutputState() of the helper Output Producer on the
internal state of the solver.
void AddRunner(Runner *r): for the SimpleSolver, it replaces the current runner with r, while
for MultiRunnerSolver it adds r at the bottom of its list of runners.
void AddKicker(Kicker *k): for the solvers that manage one or more kickers it adds k at the
bottom of the list of kickers.
void Clear(): removes all runners and kickers attached to the solver.
Various solvers differ from each other mainly in the definition of the Run() function. For
example, for TokenRingSolver, which manages a pool of runners, it consists of a circular invocation
of the Go() function for each runner. Similarly, for the ComparativeSolver, the function Go() of
all runners is invoked on the same initial state, and the best outcome becomes the new internal
state of the solver.
The core of the function Run() of TokenRingSolver is given below (the opening of the surrounding loop, which invokes Go() on the current runner and checks whether it has improved upon the solver's internal state, is not shown). The solver variable internal_state is previously set to the initial state by the function FindInitialState().
    {
      internal_state = runners[current_runner]->GetBestState();
      internal_state_cost = runners[current_runner]->BestStateCost();
      if (runners[current_runner]->LowerBoundReached())
      {
        interrupt_search = true;
        break;
      }
      else
        improvement_found = true;
    }
    previous_runner = current_runner;
    current_runner = (current_runner + 1) % runners.size();
    runners[current_runner]
      ->SetCurrentState(runners[previous_runner]->GetBestState());
  }
  while (current_runner != 0);
  if (!interrupt_search)
  {
    if (improvement_found)
      idle_rounds = 0;
    else
      idle_rounds++;
    improvement_found = false;
  }
}
}
Notice that both solvers and runners have their own state variables, and communicate through
the functions GetCurrentState() and SetCurrentState(). These data are used, for instance, by
the comparative solver which makes a run of all runners, and updates its internal state with the
final state of the runner that has given the best result.
8.4.6 Testers
Testers represent a text-based user interface of the program. They support both interactive and
batch runs of the system, collecting data for the analysis of the algorithms.
In the interactive mode, a tester allows the user to perform runs of any of the available runners, and it keeps track of the evolution of the current state. If requested, for debugging purposes, runs
can be fully traced to a log file. At any moment, the user can ask to check the current violations
and objective, and to retrieve/store the current state from/to data files.
A specialized tester class, called MoveTester, is used to perform single moves one at a time.
The user specifies the neighborhood relation to be used and the move strategy (best, random,
from input, ...). Then, the system returns the selected move, together with all corresponding
information about the variation of the cost function. In addition, a MoveTester provides various
auxiliary functions, such as checking the cardinality of the neighborhood.
Finally, there is a specific tester for running experiments in batch mode. This tester accepts
experiment specifications in a language, called ExpSpec, and executes all of them sequentially.
As an example of ExpSpec usage, consider an experiment specification for a solver which handles a Simulated Annealing algorithm for the Graph Coloring problem.
[Figure 8.5: Value of the cost function over time for a set of trials.]
In this example, the tester performs 10 runs of the SAGraphColoring algorithm on the instance DSJC125.1.col with the parameter settings enclosed in brackets. The tester collects data about the solutions found and stores them in the log file specified in the solve options. Statistical data about the runs are written to the results file, while the details of the solving procedure are stored in the plot directory. The latter data can be shown to the user in aggregated graphical form, by interfacing with the GNUPlot system.
In fact, the data produced by the tester has been used to generate the plot reported in Figure 8.5.
These plots show the value of the cost function for either an individual run or a set of trials and
give a qualitative view of the behavior of the algorithm.
The information collected by the tester allows the user to analyze and compare different algo-
rithms and/or different parameter settings on the same instances of the problem, with very little
intervention of the human operator. Furthermore, the batch mode is especially suitable for massive
night or weekend runs, in which the tester can perform all kinds of experiments in a completely
unsupervised mode.
The testers are implemented as concrete classes that can be used directly, with no need to
define derived classes. The ExpSpec interpreter has been written using the ANTLR grammar
generator [107], and it can be easily customized by an expert user if necessary.
8.5 Discussion
The idea of illustrating Local Search techniques by means of generic algorithms has been proposed,
among others, by Vaessens et al. [130] and by Andreatta et al. [4].
Vaessens et al. use a Local Search template to classify existing Local Search techniques. They also suggest new types of search algorithms belonging to the Local Search family. Andreatta and
co-workers describe a conceptual framework for Local Search that differs from Vaessens’ work
because, like EasyLocal++, it relies on Design Patterns. In addition, they discuss in detail a
constructive search phase used for finding the initial state for Local Search.
More interesting for our comparison are the software systems that actively support the design
and the implementation of algorithms.
Among glass-box systems, a few O-O frameworks for Local Search have already been developed and are described in the literature, notably in [36], [51, 54], and [52].
The system HotFrame, by Fink et al. [52], is a C++ framework for Local Search. HotFrame
is heavily based on the use of templates, and in this system inheritance is used only in a secondary
way. In HotFrame the type of neighborhood, the tabu mechanism, and other features are supplied
through template classes and values. This choice results in a very compositional architecture, given
that every specific component can be plugged in by means of a template instantiation. On the
other hand, HotFrame does not exploit the power of virtual functions, which would greatly simplify the development of the system and of the user's modules. In addition, in HotFrame several member
functions are required to be defined for the template instantiation classes. In EasyLocal++,
conversely, such classes are simply data structures, and the “active” role is played exclusively by
the helper classes.
Ferland and co-workers [51, 54] propose an object-oriented implementation of several Local
Search methods. Specifically, in [51], the authors provide a framework developed in Object-
Oriented Turbo Pascal. Differently from our work, their framework is restricted to assignment
type problems only, and therefore they are able to commit to a fixed structure for the data of the
problem.
Specifically, our template class Move corresponds in their work to a pair of integer-valued
parameters (i, j), which refer to the index i of an item and the new resource j to which it is assigned,
similarly to a finite-domain variable in constraint programming. Such a pair is simply passed to
each function in the framework. Similarly, our template class State is directly implemented as
an integer-valued array. The overall structure of the framework is therefore greatly simplified,
and most of the design issues related to the management of problem data do not arise. This
simplification is obviously achieved at the expense of the generality and flexibility of the framework.
de Bruin et al. [36] developed a template-free framework for branch and bound search, which
shows a different system architecture. Specifically, in their framework solver classes are concrete
instead of being base classes for specific solvers. The data for the problem instance is supplied by
a class, say MyProblem, derived from the framework’s abstract class Problem. The reason why we
do not follow this idea is that the class MyProblem should contain not only the input and output
data, but also all the functions necessary for running the solver, like, e.g., ComputeCost() and
SelectMove(). Therefore, the module MyProblem would have less cohesion with respect to our
solution, which uses the modules Input, Output, and the concrete solver class.
A more detailed description of related work, including systems that implement other search
techniques, like ABACUS [80] and KIDS [120], is provided in [115]. The latter paper describes
Local++, the predecessor of EasyLocal++, which is composed of a single hierarchy of classes,
without the distribution of responsibilities between helpers, runners, and solvers.
The Local++ architecture showed several limitations, which led to the development of EasyLocal++. For example, the code that in EasyLocal++ belongs to the helpers had to be duplicated in Local++ for each technique. In addition, Local++ lacked the ability to freely compose the features of the algorithms, which gives rise to a variety of new search strategies. Furthermore, Local++ did not support many other important features of EasyLocal++, including the weight-managing capabilities, the testers, the skeleton code, and the experiment language ExpSpec.
Finally, EasyLocal++ has been made freely available to the community, and it has already been downloaded by many researchers. The continuous exposure to criticism and comments from other researchers has given us additional motivation to extend and improve the system.
8.6 Conclusions
The basic idea behind EasyLocal++ is to capture the essential features of most Local Search
techniques and their possible compositions. The framework provides a principled modularization
for the design of Local Search algorithms and exhibits several advantages with respect to directly
implementing the algorithm from scratch, not only in terms of code reuse but also in methodology
and conceptual clarity. Moreover, EasyLocal++ is fully glass-box and is easily extensible by
means of new class derivations and compositions. The above features mitigate some potential
drawbacks of the framework, such as the computational overhead and the loss of the full control
in the implementation of the algorithms.
The main goal of EasyLocal++, and of similar systems, is to simplify the task of researchers and practitioners who want to implement Local Search algorithms. The idea is to leave only the problem-specific programming details to the user. Unfortunately, though, in many cases it is these problem-specific details that dominate the total implementation time of a Local Search algorithm, so one might at first wonder why bother automating the "easy" part.
The answer to this criticism is twofold. First, recent research has shown that the solution of complex problems goes in the direction of the simultaneous employment of various Local Search techniques and neighborhood relations; therefore, the "easy" part tends to increase in complexity and programming cost. Second, we believe that EasyLocal++ provides the user with an added value not only in terms of quantity of code, but rather in modularization and conceptual clarity. Using EasyLocal++, or other O-O frameworks, the user is forced to place each piece of code in the "right" position.
EasyLocal++ makes a balanced use of the O-O features needed for the design of a framework. On the one hand, data classes are provided through templates, giving better computational efficiency and type-safe compilation. On the other hand, the structure of the algorithms is implemented through virtual functions, allowing incremental specification across hierarchy levels and full inverse-control communication. We believe that, for Local Search, this is a valid alternative to toolkit systems à la ILOG Solver.
One of the main characteristics of EasyLocal++ is its modularity: once the basic data structures and operations are defined and "plugged in", the system provides for free a straightforward implementation of all standard techniques and of a large variety of their combinations.
The system also allows the user to generate and experiment with new combinations of features (e.g., neighborhood structures, initial state strategies, and prohibition mechanisms) within a conceptually clear environment with fast prototyping capabilities.
The current modules have actually been applied to some practical problems, mostly in the
scheduling domain:
• University Examination Timetabling [38, 41]: schedule the exams of a set of courses in
a set of time-slots avoiding the overlapping of exams for students, and satisfying other side
constraints (see Chapter 5).
• University Course Timetabling [40, 42]: schedule a set of university courses in a set of
time-slots . . . (see Chapter 4).
• Workforce Shift Design [101]: design the working shifts and determine the number of em-
ployees needed for each shift, over a certain period of time, subject to constraints about the
possible start times and the length of shifts, and an upper limit for the average number of
duties for each employee. (see Chapter 6).
• Employee Timetabling (or Workforce Scheduling) [30]: assign workers to shifts ensuring the
necessary coverage for all tasks, respecting workload regulations for employees.
• Portfolio Selection : select a portfolio of assets (and their quantity) that provides the investor
a given expected return and minimizes the associated risk. Differently from the problems
presented so far, this problem makes use of both integer and real variables.
Several other modules are under implementation and testing. For example, we are working on
a threading mechanism that would manage the parallelization of the execution (see, e.g., [34, 134]).
In addition, a module that integrates the data collected by the testers with the STAMP software for comparing non-deterministic methods [125] is under development. Future work also comprises an adaptive tool for semi-automated framework instantiation, in the style of the Active CookBooks proposed in [117], in order to help users develop their applications.
Finally, we recall that EasyLocal++ is part of the Local++ project which aims at realizing
a set of object-oriented software tools for Local Search. Further information about the project is available on the web at http://www.diegm.uniud.it/schaerf/projects/local++. From the same address it is possible to freely download a stable and documented version of EasyLocal++, and a set of Local Search solvers based on EasyLocal++.
In the next chapter we will describe a case study in the application of EasyLocal++ for the
solution of the Graph Coloring problem. Furthermore, we refer also to a recent volume [136],
which contains a chapter on the development of Local Search algorithms using EasyLocal++
[43].
9
The development of applications
using EasyLocal++: a Case Study
As an example of the actual use of EasyLocal++, we present here the development of a family
of Local Search algorithms for the k -Graph Coloring problem. The problem has already been
presented in Section 1.1.2, however, we now briefly recall its statement.
Given an undirected graph G = (V, E) and a set of k integer colors C = {0, 1, 2, . . . , k − 1}, the problem is to assign to each vertex v ∈ V a color value c(v) ∈ C such that adjacent vertices are assigned different colors (i.e., ∀(v, w) ∈ E, c(v) ≠ c(w)).
We demonstrate the solution of the k -Graph Coloring problem using EasyLocal++ pro-
ceeding in stages. We start from the data classes, and afterwards we present the helpers, the
runners, and the solvers. At last we test the algorithms on a set of benchmark instances and we
compare their results.
For the sake of simplicity, the classes presented are slightly simplified with respect to the version
used in the actual implementation. For example, the input and output operators (“>>” and “<<”)
and some other auxiliary functions are omitted. Nevertheless, the full software is still correct and
could be run “as is”.
9.1 Data Classes
9.1.1 Input
The input of the problem is a graph together with an upper bound on the number of colors to be
used for coloring its vertices. To the aim of encoding the graph we adopt the standard adjacency
matrix representation:
• the set of vertices V of the graph is arbitrarily ordered, i.e., V = {v1, v2, . . . , vn}, for the purpose of identifying each vertex with its index;
• we define an n × n symmetric matrix A such that aij = aji = 1 if the edge (vi, vj) is present in the graph, and aij = aji = 0 otherwise.
Hence, to instantiate the template Input, we define a class that handles the adjacency matrix
representation. An integer value k has been added to that class, for the purpose of representing
the upper bound on the number of colors. The resulting class declarations is as follows:
class Graph
{
public:
  typedef unsigned int Vertex;        // vertices are represented by their indices
  typedef std::set<Vertex> VertexSet; // we deal also with sets of vertices
  typedef unsigned int Color;         // colors are represented by natural numbers
  Color k;                            // k is the maximum color allowed
  // constructs an empty graph
  Graph()
    : adj_vertices(0)
  {}
  // loads a graph from a DIMACS file
  void Load(const std::string &id);
  // states whether two vertices are connected by an edge
  bool Adjacent(const Vertex &v, const Vertex &w) const
  { return adj_vertices[v][w]; }
  // returns the number of vertices
  unsigned int NumberOfVertices() const
  { return adj_vertices.size(); }
  // returns the number of edges
  unsigned int NumberOfEdges() const
  { return number_of_edges; }
protected:
  std::vector<std::vector<bool> > adj_vertices;
  unsigned int number_of_edges;
};
The method Load() instantiates the adjacency matrix by loading it from a file encoding of the graph. In our actual implementation we decided to comply with the DIMACS file representation of graphs [78]; however, for the sake of brevity, we do not give here the details of this function.
The class declaration makes use of several classes that belong to the Standard Template Library of the C++ language. All of them are identified by the std:: namespace prefix. The discussion of such classes is beyond the scope of this thesis. For a comprehensive reference on the STL we refer to one of the several books on this subject (e.g., [103]).
9.1.2 Output
The output of the problem is a function c : V → C from graph vertices to color values. Since
we represent the vertices of the graph as integers, the function can be simply encoded through an
array, whose indices are the vertices themselves. For this purpose we define the GraphColoring
class which extends the already available STL vector class. The class definition, reported below,
also includes a pointer to the input class that is needed to resize the vector accordingly.
class GraphColoring
  : public std::vector<Graph::Vertex>
{
public:
  // constructs an empty coloring vector
  GraphColoring()
    : p_in(NULL)
  {}
  // constructs a vector that suits the input
  GraphColoring(Graph *g) : p_in(g)
  { this->resize(p_in->NumberOfVertices()); }
  // modifies the vector size according to the new input
  void SetInput(Graph *g)
  { p_in = g; this->resize(p_in->NumberOfVertices()); }
protected:
  Graph *p_in;   // pointer to the input object
};
The class GraphColoring has a constructor that takes as argument a pointer to a Graph object,
which initializes the object based on the information contained in the graph. In addition, it has a
constructor with no arguments which leaves the object uninitialized, and a function SetInput(),
which initializes (or reinitializes) an already existing object according to the provided input.
Such functions, namely the two constructors and SetInput(), are the only mandatory members
for a class that instantiates the Output template, and other EasyLocal++ classes rely on their
presence.
9.1.3 State
For the state class we define a specialization of GraphColoring, called Coloring, which additionally maintains the (redundant) set of conflicting vertices:
class Coloring
  : public GraphColoring
{
public:
  // constructs an empty state class
  Coloring()
  { conflicts.clear(); }
  // constructs a state class that suits the input
  Coloring(Graph *g)
    : GraphColoring(g)
  { conflicts.clear(); }
  // resizes the vector according to the new input and clears the conflict set
  void SetInput(Graph *g)
  { GraphColoring::SetInput(g); conflicts.clear(); }
  Graph::VertexSet conflicts; // the set of conflicting vertices
};
Similarly to the Output class, the default constructor, the constructor that receives a pointer
to the Input class, and the function SetInput() are mandatory also for the State class.
9.1.4 Move
The neighborhood relation we consider is defined by the color change of one conflicting vertex.
Hence, a move can be identified by a triple ⟨v, c_old, c_new⟩ composed of the vertex v, its current color c_old, and the newly assigned color c_new.
For implementing this kind of move, we define a class, called Recolor, as follows:
class Recolor
{
public:
  Graph::Vertex v;
  Graph::Color c_new, c_old;
};
Notice that in order to select and apply a move m from a given state s we only need the vertex v and the new color c_new. Nevertheless, it is necessary to store also the old color for the management of the prohibition mechanisms. In fact, the tabu list stores only the "raw" moves regardless of the states in which they were applied. In addition, the presence of the data member c_old makes the code simpler and slightly improves the efficiency of various functions.
9.2 Helpers
We have to define at least six helpers, namely a State Manager, an Output Producer, a Neighborhood Explorer, and a Prohibition Manager, which encode the problem-specific features associated with the different aspects of the search. Furthermore, we also have to define a Cost Component and a Delta Cost Component, which deal with the computation of the number of constraint violations.
class ColoringManager
  : public StateManager<Graph,Coloring>
{
public:
  // ... (constructor and other inline members omitted)
  // creates a random initial coloring (the only function that must be defined)
  void RandomState(Coloring &col);
};
The only function that needs to be defined is RandomState(), given that the others have already been defined inline.
The function RandomState() creates an initial state for the search by assigning a random color to each vertex and rebuilding the conflict set accordingly.
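A sketch consistent with this description follows; we assume, as for the other helpers, that the input graph is reachable through a member pointer p_in, and we use rand() for brevity in place of the framework's random utilities.

void ColoringManager::RandomState(Coloring &col)
{
  // assign a random color to each vertex
  for (Graph::Vertex v = 0; v < p_in->NumberOfVertices(); v++)
    col[v] = rand() % p_in->k;
  // rebuild the set of conflicting vertices
  col.conflicts.clear();
  for (Graph::Vertex v = 0; v < p_in->NumberOfVertices(); v++)
    for (Graph::Vertex w = v + 1; w < p_in->NumberOfVertices(); w++)
      if (p_in->Adjacent(v, w) && col[v] == col[w])
      {
        col.conflicts.insert(v);
        col.conflicts.insert(w);
      }
}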
class ColorClashes
  : public CostComponent<Graph,Coloring>
{
public:
  // constructs a cost component for color clashes having weight 1.0
  ColorClashes(Graph *g)
    : CostComponent<Graph,Coloring>(g, 1.0)
  {}
  // computes the value of the cost in the given coloring
  fvalue ComputeCost(const Coloring &col) const;
};
Even though it is a good practice to specify the weights of a cost component at run-time, in this
simple implementation we hard-code the weight of the unique cost component in the constructor.
Its value is set to 1.0.
The only member function that remains to be defined is ComputeCost(), which computes the cost value of a given state; in this example, it simply counts the number of edges whose endpoint vertices are assigned the same color.
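A sketch of such a definition (assuming, as above, the member pointer p_in to the input graph) is:

fvalue ColorClashes::ComputeCost(const Coloring &col) const
{
  fvalue clashes = 0;
  for (Graph::Vertex v = 0; v < p_in->NumberOfVertices(); v++)
    for (Graph::Vertex w = v + 1; w < p_in->NumberOfVertices(); w++)
      if (p_in->Adjacent(v, w) && col[v] == col[w])
        clashes++;                           // one clash per conflicting edge
  return clashes;
}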
This Cost Component will be added to the State Manager by means of the AddViolationsComponent() function when the actual objects are created.
class RecolorExplorer
  : public NeighborhoodExplorer<Graph,Coloring,Recolor>
{
public:
  // constructs the neighborhood explorer for the Recolor move
  RecolorExplorer(StateManager<Graph,Coloring> *psm, Graph *g)
    : NeighborhoodExplorer<Graph,Coloring,Recolor>(psm, g)
  {}
  // draws a random move rc in the current state col
  void RandomMove(const Coloring &col, Recolor &rc);
  // applies the move rc to the state col
  void MakeMove(Coloring &col, const Recolor &rc);
protected:
  // generates the next move in the exploration of the neighborhood
  void NextMove(const Coloring &col, Recolor &rc);
};
Among the three functions defined in this class, we first describe the implementation of the most interesting one, namely NextMove(). This function assigns to c_new the successive value (modulo k); if c_new becomes equal to c_old, the exploration for that vertex is finished, and the next vertex in the conflict list is processed.
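A sketch consistent with this description (p_in denotes the member pointer to the input graph) is the following:

void RecolorExplorer::NextMove(const Coloring &col, Recolor &rc)
{
  rc.c_new = (rc.c_new + 1) % p_in->k;       // try the successive color (modulo k)
  if (rc.c_new == rc.c_old)                  // all colors tried for this vertex:
  {                                          // move to the next conflicting vertex
    Graph::VertexSet::const_iterator it = col.conflicts.upper_bound(rc.v);
    if (it == col.conflicts.end())           // wrap around the conflict set
      it = col.conflicts.begin();
    rc.v = *it;
    rc.c_old = col[rc.v];
    rc.c_new = (rc.c_old + 1) % p_in->k;
  }
}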
Notice that there is no possibility of cycling indefinitely, because the situation in which rc.v becomes again the first vertex explored is detected by the MayRedef function LastMoveDone(), which returns true and stops the search.
We now consider the function RandomMove(), which simply picks a random vertex from the conflict set and a new random color for it.
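A sketch of this behavior (rand() is used for brevity in place of the framework's random utilities) is:

void RecolorExplorer::RandomMove(const Coloring &col, Recolor &rc)
{
  // pick a random conflicting vertex
  Graph::VertexSet::const_iterator it = col.conflicts.begin();
  std::advance(it, rand() % col.conflicts.size());
  rc.v = *it;
  rc.c_old = col[rc.v];
  // pick a new color, different from the current one
  do
    rc.c_new = rand() % p_in->k;
  while (rc.c_new == rc.c_old);
}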
Finally, the MakeMove() function updates the color of the vertex rc.v to the new value, and it
recomputes the set of conflicting vertices by inspecting all the vertices that are adjacent to rc.v.
We omit its code for the sake of brevity.
For the randomized Local Search techniques, the strategy of considering only conflicting vertices seems to be unfruitful, since it induces an arbitrary bias in the search. For example, the Simulated Annealing algorithm, which is based on probabilistically accepting and performing non-improving moves, does not work well in conjunction with the proposed neighborhood exploration strategy.
For this reason, we define a less restrictive Neighborhood Explorer, called LooseRecolorExplorer, which directly derives from RecolorExplorer but redefines the RandomMove() and NextMove() functions. The new definitions simply ignore the conflict set when looking for moves, and they fall outside the scope of this case study.
class DeltaColorClashes
  : public DeltaCostComponent<Graph,Coloring,Recolor>
{
public:
  // constructs a delta cost component dealing with color clashes
  DeltaColorClashes(Graph *g, ColorClashes *cc)
    : DeltaCostComponent<Graph,Coloring,Recolor>(g, cc)
  {}
  // computes the difference in the number of clashes
  fvalue ComputeDeltaCost(const Coloring &col,
    const Recolor &rc) const;
};
The function ComputeDeltaCost(), which is the only function that must be defined, computes the difference between the number of vertices adjacent to rc.v colored with c_new and those colored with c_old. This function checks each vertex adjacent to rc.v and detects whether it is involved in a new conflict or whether an old conflict has been removed by the new assignment.
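A sketch consistent with this description is:

fvalue DeltaColorClashes::ComputeDeltaCost(const Coloring &col, const Recolor &rc) const
{
  fvalue delta = 0;
  for (Graph::Vertex w = 0; w < p_in->NumberOfVertices(); w++)
    if (w != rc.v && p_in->Adjacent(rc.v, w))
    {
      if (col[w] == rc.c_new) delta++;       // a new conflict would be created
      if (col[w] == rc.c_old) delta--;       // an existing conflict would be removed
    }
  return delta;
}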
class TabuColorsManager
  : public TabuListManager<Recolor>
{
public:
  // constructs a tabu-list manager for the Recolor move
  TabuColorsManager(unsigned int min_tenure, unsigned int max_tenure)
    : TabuListManager<Recolor>(min_tenure, max_tenure)
  {}
  // states whether the move rc1 is the inverse of the tabu-active move rc2
  bool Inverse(const Recolor &rc1, const Recolor &rc2) const
  { return rc1.v == rc2.v && rc1.c_new == rc2.c_old; }
};
According to the above definition of the function Inverse(), we consider a move rc1 the inverse of another move rc2 if both the following conditions hold:
a) rc1 and rc2 insist on the same vertex v (i.e., rc1.v = rc2.v);
b) the move rc1 tries to restore the color changed by rc2 (i.e., rc1.c_new = rc2.c_old).
In the class FrequencyBasedTabuListManager, which adds a long-term, frequency-based prohibition mechanism on top of the classical tabu list, the frequencies of moves are stored in the STL std::map container class, which implements an associative array, i.e., it maps moves to the corresponding frequency value. For the purpose of using this container, it is necessary to distinguish among moves by defining an order on them. In practice, we have to define the C++ operator < that states whether a move comes before another one in the order. This implies that, for a suitable definition of the operator <, it is also possible to cluster moves within a single slot of the map, as we will see below.
In detail, the strategy implemented by the class is the following. First, we check whether the move belongs to the classical short-term tabu list and, if so, the move is prohibited. Afterwards, we look up the relative frequency of the given move and forbid it if that value is above a certain threshold. However, the latter mechanism is too restrictive in the early phases of the search (we recall that the relative frequency is computed as frequency/steps; therefore, if the number of steps is small, the relative frequencies are not yet meaningful). For this reason, we decided to activate the mechanism only after a given number of steps.
The core functions of the FrequencyBasedTabuListManager class are InsertMove and ProhibitedMove. The former updates the frequency of the move passed as a parameter. Its code is the following:
template <class Move>
void FrequencyBasedTabuListManager<Move>::InsertMove(const Move &mv,
    fvalue mv_cost, fvalue curr, fvalue best)
{
  TabuListManager<Move>::InsertMove(mv, mv_cost, curr, best);
  if (frequency_map.find(mv) != frequency_map.end())
    frequency_map[mv]++;
  else
    frequency_map[mv] = 1;
}
The function first inserts the move mv in the classical tabu list (managed as a queue) and then looks for mv in the frequency map. If mv is already present, its frequency is simply incremented; otherwise a new slot for the move is automatically created and its frequency is set to 1.
1 We recall that the relative frequency is computed as frequency/steps; therefore, if the number of steps is small, even a move performed only a few times gets a high relative frequency and would be prohibited.
The ProhibitedMove function, instead, is slightly more involved, since it has to manage the activation of the threshold mechanism.
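The original listing is not reported here. A rough sketch of the intended behavior could be the following; both the signature and the member names (number_of_steps, min_steps, threshold, frequency_map and its value type) are our assumptions, not the actual EasyLocal++ code.

// Hypothetical sketch: a move is prohibited if it is in the short-term tabu list,
// or if (after a warm-up period) its relative frequency exceeds the threshold.
template <class Move>
bool FrequencyBasedTabuListManager<Move>::ProhibitedMove(const Move &mv) const
{
  if (TabuListManager<Move>::ProhibitedMove(mv))   // classical short-term check
    return true;
  if (number_of_steps < min_steps)                 // frequency mechanism not active yet
    return false;
  typename std::map<Move,unsigned long>::const_iterator it = frequency_map.find(mv);
  if (it == frequency_map.end())
    return false;
  // forbid the move if its relative frequency exceeds the threshold
  return (double) it->second / number_of_steps > threshold;
}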
According to the proposed definition of the operator <, all the moves that insist on a common vertex are clustered together. However, if this definition is not satisfactory, it is still possible to modify it and, for example, distinguish among the different new colors assigned to the vertices. In that case the body of the operator becomes: return rc1.v < rc2.v || (rc1.v == rc2.v && rc1.c_new < rc2.c_new);. This mechanism gives the user complete freedom to specify at which level of granularity the frequency-based prohibition strategy should be applied.
9.3 Runners
Now we move to the runner level. We define three runners that implement the basic Local Search techniques using the Recolor move. No function needs to be defined for these runners, and their code reduces to a template instantiation. For example, the definition of the Hill Climbing runner is the following.
class HCColoring
: public HillClimbing<Graph,Coloring,Recolor>
{
public:
// constructs an instance of HC for the GraphColoring problem
HCColoring(StateManager<Graph,Coloring> *psm,
NeighborhoodExplorer<Graph,Coloring,Recolor> *pnhe,
Graph *g = NULL)
: HillClimbing<Graph,Coloring,Recolor>(psm,pnhe,g)
{}
};
This definition is entirely provided by the skeleton code included in EasyLocal++. In this case the user only needs to supply the names of the problem-specific classes.
The definition of the other runners is identical, and therefore it is omitted.
Notice that, according to the two exploration strategies we have defined, at run-time we will provide an instance of the LooseRecolorExplorer to the Simulated Annealing and the Hill Climbing runners. Conversely, we will pass a RecolorExplorer object to the Tabu Search runner.
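In code, this run-time wiring could look roughly as follows; the class names SAColoring and TSColoring are assumed to be template instantiations analogous to HCColoring, and the pointers psm, pnhe, ploose_nhe and pg (state manager, the two explorers and the input graph) are assumed to be created elsewhere. None of these names is taken from the original case study.

// Hypothetical wiring sketch: HC and SA get the loose explorer, TS the conflict-driven one.
HCColoring hc(psm, ploose_nhe, pg);   // Hill Climbing with the LooseRecolorExplorer
SAColoring sa(psm, ploose_nhe, pg);   // Simulated Annealing with the LooseRecolorExplorer
TSColoring ts(psm, pnhe, pg);         // Tabu Search with the RecolorExplorer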
9.4 Kickers
Since in this case study we are dealing with only one kind of move, we just define one simple kicker,
which handles the Recolor move. The class definition is as follows:
class RecolorKicker
: public SimpleKicker<Graph,Coloring,Recolor>
{
public:
// constructs a kicker for the Recolor move
RecolorKicker(NeighborhoodExplorer<Graph,Coloring,Recolor> *pnhe,
Graph *g)
: SimpleKicker<Graph,Coloring,Recolor>(pnhe,g)
{ }
// states whether the moves rc1 and rc2 are synergic
bool SynergicMoves(const Recolor &rc1, const Recolor &rc2) const
{ return p_in->Adjacent(rc1.v,rc2.v); }
};
For the kicker classes, the only function to be defined is SynergicMoves(), which is meant to accept only pairs of moves that are somehow “coordinated”. Even though it is possible to experiment with several definitions of synergy, in this case study we focus on kicks made up of moves that insist on adjacent vertices.
We remark that the kicker relies on a Neighborhood Explorer for performing the neighborhood exploration. For this reason, we have to choose which of the two Neighborhood Explorers defined above is the most suitable one to be used within the kicker.
The previous observation about the possible bias of the strategy implemented within the RecolorExplorer also applies in this case. Therefore, it is better to provide the RecolorKicker with the LooseRecolorExplorer for dealing with random kicks, and with the RecolorExplorer for best kicks.
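One way to realize this choice, purely for illustration, is to instantiate two kickers, one per exploration strategy; the pointers ploose_nhe, pnhe and pg are the same assumed objects used in the runner sketch above.

// Hypothetical sketch: one kicker instance per exploration strategy.
RecolorKicker random_kicker(ploose_nhe, pg);   // loose explorer, used for random kicks
RecolorKicker best_kicker(pnhe, pg);           // conflict-driven explorer, used for best kicks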
9.5 Solvers
We define three solvers. The first one is a simple solver used for running the basic techniques. The
solver can run different techniques by changing the runner attached to it by means of the function
SetRunner(). The second solver is used for running various tandems of two runners. The runners participating in the tandem are simply selected using AddRunner() and ClearRunners(), and the composition does not require any additional programming effort. Finally, the third solver
implements the Iterated Local Search strategy and handles one runner and one kicker. In this case,
the runner can be attached to the solver by means of the function SetRunner(), while the kicker
can be set by means of the SetKicker() function.
As for the three runners, the derivation of the solvers is just a template instantiation and, as in the previous case, this operation is fully supported by the skeleton code.
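As a purely illustrative usage sketch, and reusing the runner and kicker objects introduced above, the three solvers could be wired as follows. The solver object names are assumptions, and whether SetRunner(), AddRunner(), ClearRunners() and SetKicker() take pointers is also an assumption; only the function names come from the text.

// Hypothetical usage sketch of the three solvers described above.
simple_solver.SetRunner(&ts);           // first solver: run a single basic technique
tandem_solver.ClearRunners();           // second solver: compose a tandem of two runners
tandem_solver.AddRunner(&hc);
tandem_solver.AddRunner(&ts);
ils_solver.SetRunner(&ts);              // third solver: Iterated Local Search with one runner
ils_solver.SetKicker(&random_kicker);   // ... and one kicker for the perturbation phase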
Table 9.2: Results of the basic techniques (HC, SA, TS, and TS with long-term memory, TSl) on the DSJC instances

            HC                 SA                 TS                 TSl
Instance    T        V    S    T        V    S    T        V    S    T        V    S
DSJC125.1   0.10     0.0  10   0.11     0.0  10   0.19     0.0  10   2.24     0.0  10
DSJC250.1   2.02     0.0  10   2.07     0.0  10   4.21     0.0  10   3.89     0.0  10
DSJC500.1   11.88    0.0  10   11.70    0.0  10   59.82    0.0  10   59.84    0.0  10
DSJC125.5   20.86    1.5  1    4.95     2.0  2    6.29     0.0  10   5.58     0.0  10
DSJC250.5   41.97    5.5  0    148.14   3.5  0    193.19   0.0  8    123.80   1.0  3
DSJC500.5   174.92   4.0  0    548.63   0.5  5    784.01   0.0  10   789.04   0.0  10
DSJC125.9   15.63    0.0  7    19.11    0.0  8    21.45    0.0  10   19.40    0.0  10
DSJC250.9   50.44    1.0  2    58.35    0.0  10   153.39   0.0  10   172.51   0.0  10
DSJC500.9   148.05   2.0  2    1082.62  1.0  2    1128.08  0.0  10   1258.21  0.0  10
Total       465.87   14.0 12   1875.68  7.0  57   2350.63  0.0  88   2434.51  1.0  83
The number of idle iterations allowed depends on the size of the instance and varies from 5000 to
15000.
Table 9.2 shows quite clearly that for this set of instances the classical Tabu Search is superior
to the other techniques, since it can find a feasible solution in 97.8% of the runs. The Tabu Search
equipped with long-term memory, instead, is less effective (especially on one instance) and the rate
of successful runs is 92.2%. Simulated Annealing finds a feasible solution in 63.3% of the runs, while Hill Climbing performs very poorly, reaching feasibility in only 13.3% of the trials.
Evaluating the performance of the algorithms from the point of view of the running time, it is clear that the superiority of classical Tabu Search is achieved at the cost of a greater running time. The reason for this lies in the thoroughness of the neighborhood exploration performed by Tabu Search. In fact, at each step all the moves in the neighborhood must be evaluated by means of the function DeltaCost() of the DeltaColorClashes component, whereas for Hill Climbing and Simulated Annealing only a subset of the moves is sampled and evaluated. However, it is still possible, with a small programming effort, to store the “delta” data in the state, achieving much better performance for these functions.
Finally, the behavior of Tabu Search equipped with long-term memory is, overall, worse than that of classical Tabu Search in terms of running time. Furthermore, this algorithm performs poorly in finding solutions for the hardest instance of the set (namely, DSJC250.5). This indicates that further investigation of this mechanism is necessary.
            HC⊲TS              SA⊲TS
Instance    T        V    S    T        V    S
DSJC125.1   0.07     0.0  10   0.09     0.0  10
DSJC250.1   1.27     0.0  10   1.08     0.0  10
DSJC500.1   5.87     0.0  10   8.04     0.0  10
DSJC125.5   3.76     0.0  10   5.33     0.0  10
DSJC250.5   80.30    0.0  10   112.56   0.0  8
DSJC500.5   361.66   0.0  10   310.18   0.0  10
DSJC125.9   15.32    0.0  10   13.67    0.0  10
DSJC250.9   123.98   0.0  9    58.67    0.0  10
DSJC500.9   298.09   0.0  10   111.52   0.0  10
Total       890.32   0.0  89   621.14   0.0  88
            TS⊲Kr              TS⊲Kb
Instance    T        V    S    T        V    S
DSJC125.1   0.19     0.0  10   0.19     0.0  10
DSJC250.1   3.91     0.0  10   3.77     0.0  10
DSJC500.1   59.59    0.0  10   59.09    0.0  10
DSJC125.5   5.18     0.0  10   6.97     0.0  10
DSJC250.5   219.38   1.0  4    113.58   1.0  5
DSJC500.5   781.63   0.0  10   774.80   0.0  10
DSJC125.9   31.24    0.0  10   16.02    0.0  8
DSJC250.9   120.19   0.0  10   122.10   0.0  10
DSJC500.9   1156.36  0.0  10   1129.18  0.0  10
Total       2377.67  1.0  84   2210.70  1.0  83
Table 9.5: Comparison with a direct implementation of the tabu search solver
In order to obtain a fair comparison, the straight Tabu Search implementation relies on the
same data structures employed in the EasyLocal++ one. The overall amount of code written
for this algorithm is about 1700 lines.
It is worth noticing that the amount of code needed to implement a single Local Search solver
from scratch is comparable to the amount of code written for developing a whole family of solvers
using EasyLocal++.
We measure the performance of the two implementations in two different settings. First we compile the programs without any optimization and run the whole series of experiments on the test-bed. Then we turn on the -O3 compiler optimization flag and perform the experiments again.
The data collected in these experiments are presented in Table 9.5. We denote by T_el the running times of the EasyLocal++ implementation and by T_d the running times of the plain C++ solver. Moreover, we use the superscript o to indicate the optimized versions. In the third column of each set of experiments we report the performance loss of the EasyLocal++ implementation, computed as the ratio between the difference of the running times of the two implementations and the running time of the direct implementation.
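In formulas, denoting the running times as above, the reported loss is:

\[
\mathit{loss} = \frac{T_{el} - T_{d}}{T_{d}}
\]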
The table shows that the behavior of the two implementations is similar: the performance loss is about 5% if code optimization is disabled, whereas it is about 10% if the executable is fully optimized. Moreover, one can also notice that the “gap” between the two implementations becomes smaller for higher running times, and that the behavior of the non-optimized solvers is more stable than that of the optimized versions.
Although the performance loss of the optimized EasyLocal++ implementation is not negligible, this is the typical degradation of programs that make extensive use of virtual functions, and it is therefore unavoidable for this type of framework. We believe that this is an acceptable drawback compared with the advantages the framework brings.
In fact, the architecture of EasyLocal++ prescribes a precise methodology for the design of a Local Search algorithm. The user is required to identify exactly the entities of the problem at hand, which are factorized into groups of related classes in the framework: using EasyLocal++ the user is forced to place each piece of code in the “right” position. We believe that this feature helps in terms of conceptual clarity, and it eases both the reuse of the software and the overall design process.
IV
Appendix

A Current best results on the Examination Timetabling problems
In this appendix we report the best results found for the Examination Timetabling problem,
up to the time of publication of this thesis.
We report our best results for the different formulations of the problem taken into account and
we compare them with the works of Carter et al. [26], Burke et al. [19], Burke and Newall [17],
Caramia et al. [21], White and Xie [139], Merlot et al. [97], and Burke and Newall [18]. The tables
are adapted from [97], and the results are presented in chronological order.
Table A.1: Current Best Results on Formulation F1 (Eq. 5.5, on page 53)
Instance    p    Burke et al. [19]   Caramia et al. [21]   Di Gaspero and Schaerf [40]   Merlot et al. [97]
CAR-F-92    40   331                 268                   424                           158
CAR-S-91    51   81                  74                    88                            31
KFU-S-93    20   974                 912                   512                           237
TRE-S-92    35   3                   2                     4                             0
UTA-S-93    38   772                 680                   554                           334
NOTT        23   269                 —                     123                           83
NOTT        26   53                  44                    11                            2
Table A.2: Current Best Results on Formulation F2 (Eq. 5.6, on page 53)
Table A.3: Current Best Results on Formulation F3 (Eq. 5.7, on page 53)
Conclusions
In this final chapter we draw some general conclusions about the research lines pursued in this thesis. Since the detailed discussion of each subject is included as the final section of the corresponding chapter, in the following we outline only some general considerations about the different topics of this work.
In this study we have investigated the field of Local Search meta-heuristics. This research area has grown considerably in recent years and several new approaches have been proposed. Despite the great interest manifested by the research community, however, the techniques belonging to the Local Search paradigm are still far from full maturity. In fact, one of the main drawbacks of this class of methods is the possibility (actually the certainty, in practical cases) that the technique at hand gets stuck in local minima. As a consequence, the range of applicability of such techniques is limited to those practical instances for which the landscape is reasonably smooth. In other words, the techniques tend not to be robust enough to tackle the full variety of problems.
Among many other attempts to overcome this general limitation, our proposal is to employ
more than one neighborhood relation for a given Local Search method. In this thesis we dealt
specifically with this issue by introducing what we call the Multi-Neighborhood Search framework.
Multi-Neighborhood Search has been shown to be a promising technique for improving the basic
Local Search methods. Throughout the thesis we extensively applied these techniques in the
solution of several scheduling problems.
In particular, we performed a comprehensive experimentation of this approach, employing the Course Timetabling problem as a testbed. The results of this experimentation were somewhat counterintuitive, and suggested further investigation of this technique on other problems as well. Moreover, since one of the merits of Multi-Neighborhood Search is to increase the robustness of the algorithms, this fully justifies our study.
However, this is not the final word on the subject, and we consider the studies on the application of Multi-Neighborhood Search presented in this thesis only as a step toward a deeper understanding of the capabilities of the compound algorithms. Further work is still needed to assess the applicability of these techniques to different kinds of problems, and new operators and solving strategies should be considered and investigated. The classes of problems we intend to explore include other scheduling problems (e.g., Flow-Shop scheduling), routing problems and assignment-type problems.
Moreover, we plan to carefully look at the integration of the Multi-Neighborhood Search
paradigm with learning algorithms. A still unexplored, yet interesting, approach consists in the ap-
plication of learning-based methods for the selection of the search technique at each step (or at fixed
intervals) in the search. In our opinion, this blends well with the concepts of Multi-Neighborhood
operators: the selection algorithm should learn a strategy for exploring the compound neighbor-
hood. We intend to investigate this approach in our future research.
Moving to the experimental part of this work, we must remark that all the software developments presented in this thesis were made possible by the EasyLocal++ framework. Specifically, we could not have managed all the proposed algorithms by writing the software from scratch, “with pencil and paper”, every time. EasyLocal++ helped us in this task by allowing massive reuse of our code. This is particularly true for the abstract algorithms developed at the meta-heuristic level. In fact, thanks to the principles employed in the framework design, once provided with the basic features of the problem at hand, EasyLocal++ natively supported the actual implementation of Local Search techniques inspired by the Multi-Neighborhood approach. Moreover, good Object-Oriented programming practice, together with the testing features of EasyLocal++, allowed quick debugging of the developed algorithms, thus increasing our productivity.
As a final remark, even though we have collected here most of the work conducted during our graduate studies, not all the research lines presented in this thesis have reached the same level of maturity. Specifically, some insights are more mature than others. For example, we consider our contribution to the development of EasyLocal++ amply satisfactory. Furthermore, in our opinion, the research on the Examination Timetabling problem has also reached a good point. Problems in other domains, instead, need further effort before they can be considered acceptably solved.
In particular, we plan to extend the case study on the Course Timetabling problem by taking into account different formulations of the problem. In detail, we intend to compare our Multi-Neighborhood Search algorithms on a set of recently released benchmark instances1. Moreover, we are currently extending the work presented in this thesis on the min-Shift Design problem, in collaboration with other researchers. Finally, the remaining scheduling problems presented in the thesis should also be addressed in a more satisfactory way. We plan to return to all these problems in the near future.
1 The benchmark instances are part of the International Timetabling Competition sponsored by the Metaheuristics Network.

Bibliography
[1] E. H. Aarts, J. Korst, and P. J. van Laarhoven. Simulated annealing. In E. H. Aarts and
J. K. Lenstra, editors, Local Search in Combinatorial Optimization. John Wiley & Sons,
Chichester, 1997.
[2] E. H. Aarts and J. K. Lenstra. Local Search in Combinatorial Optimization. John Wiley &
Sons, Chichester, 1997.
[3] E. H. L. Aarts and J. Korst. Simulated Annealing and Boltzmann Machines. John Wiley &
Sons, New York, 1989.
[6] N. Balakrishnan and R. T. Wong. A network model for the rotating workforce scheduling
problem. Networks, 20:25–42, 1990.
[7] J. Bartholdi, J. Orlin, and H.Ratliff. Cyclic scheduling via integer programs with circular
ones. Operations Research, 28:110–118, 1980.
[9] P. Boizumault, Y. Delon, and L. Peridy. Constraint logic programming for examination
timetabling. Journal of Logic Programming, 26(2):217–233, 1996.
[10] G. Booch, J. Rumbaugh, and I. Jacobson. The unified modeling language user guide. Addison
Wesley, Reading (Mass.), 1999.
[11] J. A. Boyan and A. W. Moore. Learning evaluation functions for global optimization and
boolean satisfiability. In Proc. of the 15th Nat. Conf. on Artificial Intelligence (AAAI-98).
AAAI Press/MIT Press, 1998.
[12] D. Brélaz. New methods to color vertices of a graph. Communications of the ACM, 22:
251–256, 1979.
[13] S. Broder. Final examination scheduling. Communications of the ACM, 7:494–498, 1964.
[14] E. Burke and M. Carter, editors. Proc. of the 2nd Int. Conf. on the Practice and Theory of
Automated Timetabling, number 1408 in Lecture Notes in Computer Science, 1997. Springer-
Verlag.
[15] E. Burke and P. De Causmaecker, editors. Proc. of the 4th Int. Conf. on the Practice and
Theory of Automated Timetabling, Gent (Belgium), August 2002. KaHo St.-Lieven.
[16] E. Burke and W. Erber, editors. Proc. of the 3rd Int. Conf. on the Practice and Theory of
Automated Timetabling, number 2079 in Lecture Notes in Computer Science, 2000. Springer-
Verlag.
[17] E. Burke and J. Newall. A multi-stage evolutionary algorithm for the timetable problem.
IEEE Transactions on Evolutionary Computation, 3(1):63–74, 1999.
[18] E. Burke and J. Newall. Enhancing timetable solutions with local search methods. In E. Burke
and P. De Causmaecker, editors, Proc. of the 4th Int. Conf. on the Practice and Theory of
Automated Timetabling, pages 336–347, Gent, Belgium, August 2002. KaHo St.-Lieven.
[19] E. Burke, J. Newall, and R. Weare. A memetic algorithm for university exam timetabling.
In Proc. of the 1st Int. Conf. on the Practice and Theory of Automated Timetabling, pages
241–250, 1995.
[20] E. Burke and P. Ross, editors. Proc. of the 1st Int. Conf. on the Practice and Theory of
Automated Timetabling, number 1153 in Lecture Notes in Computer Science, 1995. Springer-
Verlag.
[21] M. Caramia, P. Dell’Olmo, and G. F. Italiano. New algorithms for examination timetabling.
In S. Näher and D. Wagner, editors, Algorithm Engineering 4th International Workshop,
WAE2000, Saarbrücken, Germany, volume 1982 of Lecture Notes in Computer Science, pages
230–241, Berlin-Heidelberg, September 2000. Springer-Verlag.
[22] M. W. Carter. A decomposition algorithm for practical timetabling problems. Working Paper
83-06, Industrial Engineering, University of Toronto, April 1983.
[27] S. Casey and J. Thompson. GRASPing the examination scheduling problem. In E. Burke
and P. De Causmaecker, editors, Proc. of the 4th Int. Conf. on the Practice and Theory of
Automated Timetabling, pages 400–403, Gent (Belgium), August 2002. KaHo St.-Lieven.
[28] D. J. Castelino, S. Hurley, and N. M. Stephens. A tabu search algorithm for frequency
assignment. Annals of Operations Research, 63:301–319, 1996.
[30] M. Chiarandini, A. Schaerf, and F. Tiozzo. Solving employee timetabling problems with
flexible workload using tabu search. In Proc. of the 3rd Int. Conf. on the Practice and
Theory of Automated Timetabling, pages 298–302, Konstanz, Germany, 2000.
[31] A. J. Cole. The preparation of examination timetables using a small store computer. Com-
puter Journal, 7:117–121, 1964.
[32] D. Corne, H.-L. Fang, and C. Mellish. Solving the modular exam scheduling problem with
genetic algorithms. Technical Report 622, Department of Artificial Intelligence, University
of Edinburgh, 1993.
[33] D. Costa. A tabu search algorithm for computing an operational timetable. European Journal
of Operational Research, 76:98–110, 1994.
[34] T. G. Crainic, M. Toulouse, and M. Gendreau. Toward a taxonomy of parallel tabu search
heuristics. INFORMS Journal of Computing, 9(1):61–72, 1997.
[35] B. De Backer, V. Furnon, and P. Shaw. An object model for meta-heuristic search in con-
straint programming. In Workshop On Integration of AI and OR techniques in Constraint
Programming for Combinatorial Optimization Problems (CP-AI-OR’99), 1999.
[36] A. de Bruin, G.A.P. Kindervater, H.W.J.M. Trienekens, R.A. van der Goot, and W. van
Ginkel. An object oriented approach to generic branch and bound. Technical Report EUR-
FEW-CS-96-10, Erasmus University, Department of Computer Science, P.O. Box 1738, 3000
DR Rotterdam, The Netherlands, 1996.
[37] M. Dell’Amico and M. Trubian. Applying tabu search to the job-shop scheduling problem.
Annals of Operations Research, 41:231–252, 1993.
[38] L. Di Gaspero. Recolour, shake and kick: a recipe for the examination timetabling problem.
In E. Burke and P. De Causmaecker, editors, Proc. of the 4th Int. Conf. on the Practice and
Theory of Automated Timetabling, pages 404–407, August 2002.
[40] L. Di Gaspero and A. Schaerf. A case-study for EasyLocal++: the course timetabling
problem. Technical Report UDMI/13/2001/RR, Dipartimento di Matematica e Informatica,
Università di Udine, 2001. Available at http://www.diegm.uniud.it/schaerf/projects/
local++.
[41] L. Di Gaspero and A. Schaerf. Tabu search techniques for examination timetabling. In
E. Burke and W. Erben, editors, Proc. of the 3rd Int. Conf. on the Practice and Theory of
Automated Timetabling, number 2079 in Lecture Notes in Computer Science, pages 104–117.
Springer-Verlag, Berlin-Heidelberg, 2001.
[42] L. Di Gaspero and A. Schaerf. Multi-neighbourhood local search for course timetabling. In
E. Burke and P. De Causmaecker, editors, Proc. of the 4th Int. Conf. on the Practice and
Theory of Automated Timetabling, pages 128–132, August 2002.
[43] L. Di Gaspero and A. Schaerf. Writing local search algorithms using EasyLocal++. In
Stefan Voß and David L. Woodruff, editors, Optimization Software Class Libraries, OR/CS
series. Kluwer Academic Publishers, Boston, 2002.
[45] L. Di Gaspero, J. Vian, and A. Schaerf. A review of neighborhood structures for the job-
shop scheduling problem, 2002. Extended abstract of the talk given at OR2002 (Quadrennial
International Conference on Operations Research), Klagenfurt, Austria.
[46] M. Dorigo, V. Maniezzo, and A. Colorni. The ant system: Optimization by a colony of
cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, B 26(1):29–41,
1996.
[47] K. A. Dowsland, N. Pugh, and J. Thompson. Examination timetabling with ants. In E. Burke
and P. De Causmaecker, editors, Proc. of the 4th Int. Conf. on the Practice and Theory of
Automated Timetabling, pages 397–399, Gent (Belgium), August 2002. KaHo St.-Lieven.
[48] B. Dunham, D. Fridshal, R. Fridshal, and J. H. North. Design by natural selection. Research
Report RC-476, IBM Research Department, 1961.
[49] S. Elmohamed, G. Fox, and P. Coddington. A comparison of annealing techniques for aca-
demic course scheduling. In Proc. of the 2nd Int. Conf. on the Practice and Theory of
Automated Timetabling, pages 146–166, April 1997.
[50] T.A. Feo and M.G.C. Resende. Greedy randomized adaptive search procedures.
Journal of Global Optimization, 6, 1995.
[51] J. A. Ferland, A. Hertz, and A. Lavoie. An object-oriented methodology for solving as-
signment type problems with neighborhood search techniques. Operations Research, 44(2):
347–359, 1996.
[52] A. Fink, S. Voß, and D. L. Woodruff. Building reusable software components for heuristic
search. In P. Kall and H.-J. Lüthi, editors, Proceedings of Operations Research 1998 (OR98),
Zürich, Switzerland, pages 210–219, Berlin-Heidelberg, 1999. Springer-Verlag.
[53] H. Fischer and G. Thompson. Probabilistic learning combinations of local job-shop scheduling
rules. In J. Muth and G. Thompson, editors, Industrial Scheduling. Prentice-Hall, Englewood
Cliffs, 1963.
[55] E. Foxley and K. Lockyer. The construction of examination timetables by computer. The
Computer Journal, 11:264–268, 1968.
[56] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns, Elements of Reusable
Object-Oriented Software. Addison Wesley, Reading (Mass.), 1994.
[57] M. R. Garey and D. S. Johnson. Computers and Intractability—A guide to the theory of
NP-completeness. W.H. Freeman and Company, San Francisco, 1979.
[58] J. Gärtner, N. Musliu, and W. Slany. Rota: a research project on algorithms for workforce
scheduling and shift design optimization. AI Communications, 14(2):83–92, 2001.
[59] M. Gendreau, A. Hertz, and G. Laporte. A tabu search heuristic for the vehicle routing
problem. Management Science, 40(10):1276–1290, 1994.
[60] F. Glover. Tabu search methods in artificial intelligence and operations research. ORSA
Artificial Intelligence, 1(2):6, 1987.
[61] F. Glover. Tabu search. Part I. ORSA Journal of Computing, 1:190–206, 1989.
[62] F. Glover. Tabu search. Part II. ORSA Journal of Computing, 2:4–32, 1990.
[63] F. Glover, M. Parker, and J. Ryan. Coloring by tabu branch and bound. In D. S. Johnson and
M. A. Trick, editors, Cliques, Coloring, and Satisfiability. Second DIMACS Implementation
Challenge, volume 26 of DIMACS Series in Discrete Mathematics and Theoretical Computer
Science. American Mathematical Society, 1996.
[64] F. Glover, E. Taillard, and D. de Werra. A user’s guide to tabu search. Annals of Operations
Research, 41:3–28, 1993.
[65] F. Glover and M. Laguna. Tabu search. Kluwer Academic Publishers, 1997.
[66] F. Glover and C. McMillan. The general employee scheduling problem: An integration of
MS and AI. Computers & Operations Research, 13(5):563–573, 1986.
[67] C. C. Gotlieb. The construction of class-teacher timetables. In C. M. Popplewell, editor,
IFIP congress 62, pages 73–77. North-Holland, 1963.
[68] P. Hansen and N. Mladenović. An introduction to variable neighbourhood search. In S. Voß,
S. Martello, I.H. Osman, and C. Roucairol, editors, Meta-Heuristics: Advances and Trends
in Local Search Paradigms for Optimization, pages 433–458. Kluwer Academic Publishers,
1999.
[69] A. Hertz. Tabu search for large scale timetabling problems. European Journal of Operational
Research, 54:39–47, 1991.
[70] A. Hertz and D. de Werra. Using tabu search techniques for graph coloring. Computing, 39:
345–351, 1987.
[71] Ilog. ILOG optimization suite — white paper. Available at http://www.ilog.com, 1998.
[72] J.R. Jackson. An extension of Johnson's results on job lot scheduling. Naval Research Logistics
Quarterly, 3:201–203, 1956.
[73] W. K. Jackson, W. S. Havens, and H. Dollard. Staff scheduling: A simple approach that
worked. Technical Report CMPT97-23, Intelligent Systems Lab, Centre for Systems Science,
Simon Fraser University, 1997. Available at http://citeseer.nj.nec.com/101034.html.
[74] D. S. Johnson. Timetabling university examinations. Journal of the Operational Research
Society, 41(1):39–47, 1990.
[75] D. S. Johnson. A theoretician’s guide to the experimental analysis of algorithms. In M. Gold-
wasser, D. S. Johnson, and C. C. McGeoch, editors, Proceedings of the 5th and 6th DIMACS
Implementation Challenges, Providence, RI, 2002. American Mathematical Society. to ap-
pear.
[76] D. S. Johnson, C. R. Aragon, L. A. McGeoch, and C. Schevon. Optimization by simulated
annealing: an experimental evaluation; part I, graph partitioning. Operations Research, 37
(6):865–892, 1989.
[77] D. S. Johnson, C. R. Aragon, L. A. McGeoch, and C. Schevon. Optimization by simulated
annealing: an experimental evaluation; part II, graph coloring and number partitioning.
Operations Research, 39(3):378–406, 1991.
[78] D. S. Johnson and M. A. Trick, editors. Cliques, Coloring, and Satisfiability. Second DI-
MACS Implementation Challenge, volume 26 of DIMACS Series in Discrete Mathematics
and Theoretical Computer Science. American Mathematical Society, 1996.
[79] S.M. Johnson. Optimal two- and three-stage production schedules with setup times included.
Naval Research Logistics Quarterly, 1:61–67, 1954.
[80] M. Jünger and S. Thienel. The design of the branch-and-cut system ABACUS. Technical
Report TR97.263, University of Cologne, Dept. of Computer Science, 1997.
[81] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal
of Artificial Intelligence Research, 4:237–285, 1996.
[82] L. Kang and G. M. White. A logic approach to the resolution of constraints in timetabling.
European Journal of Operational Research, 61:306–317, 1992.
[83] S. Kirkpatrick, C. D. Gelatt, Jr, and M. P. Vecchi. Optimization by simulated annealing.
Science, 220:671–680, 1983.
[84] G. Kortsarz and W. Slany. The minimum shift design problem and its relation to the
minimum edge-cost flow problem. Technical Report DBAI-TR-2000-46, Institut für Infor-
mationssysteme der Technischen Universität Wien, 2001. http://www.dbai.tuwien.ac.at/
staff/slany/pubs/dbai-tr-2001-46.pdf.
[85] F. Laburthe and Y. Caseau. SALSA: A language for search algorithms. In Proc. of the 4th
Int. Conf. on Principles and Practice of Constraint Programming (CP-98), number 1520 in
Lecture Notes in Computer Science, pages 310–324, Pisa, Italy, 1998.
[86] G. Laporte. The art and science of designing rotating schedules. Journal of the Operational
Research Society, 50:1011–1017, 1999.
[87] G. Laporte and S. Desroches. Examination timetabling by computer. Computers and Oper-
ational Research, 11(4):351–360, 1984.
[88] H. C. Lau. On the complexity of manpower scheduling. Computers & Operations Research,
23(1):93–102, 1996.
[89] M. Laurent and P. Van Hentenryck. Localizer++: An open library for local search. Technical
Report CS-01-02, Brown University, 2001.
[90] S. Lawrence. Resource Constrained Project Scheduling: an Experimental Investigation of
Heuristic Scheduling Techniques (Supplement). PhD thesis, Graduate School of Industrial
Administration, Carnegie Mellon University, Pittsburgh, Pennsylvania, 1984.
[91] C.-Y. Lee, L. Lei, and M. Pinedo. Current trends in deterministic scheduling. Annals of
Operations Research, 70:1–41, 1997.
[92] M. J. J. Lennon. Examination timetabling at the university of Auckland. New Zealand
Operational Research, 14:176–178, 1986.
[93] H. Ramalhino Lourenço, O. Martin, and T. Stützle. Applying iterated local search to the
permutation flow shop problem. In F. Glover and G. Kochenberger, editors, Handbook of
Metaheuristics. Kluwer, 2001. to appear.
[94] O. C. Martin, S. W. Otto, and E.W. Felten. Large-step markov chains for the TSP: Incor-
porating local search heuristics. Operations Research Letters, 11:219–224, 1992.
[95] K. Mehlhorn, S. Näher, M. Seel, and C. Uhrig. The LEDA User Manual. Max Planck
Institute, Saarbrücken, Germany, 1999. Version 4.0.
[96] N. K. Mehta. The application of a graph coloring method to an examination scheduling
problem. Interfaces, 11(5):57–64, 1981.
[97] L. T. G. Merlot, N. Boland, B. D. Hughes, and P. J. Stuckey. A hybrid algorithm for the
examination timetabling problem. In E. Burke and P. De Causmaecker, editors, Proc. of the
4th Int. Conf. on the Practice and Theory of Automated Timetabling, pages 348–371, Gent
(Belgium), August 2002. KaHo St.-Lieven.
[98] L. Michel and P. Van Hentenryck. Localizer: A modeling language for local search. In Proc. of
the 3rd Int. Conf. on Principles and Practice of Constraint Programming (CP-97), number
1330 in Lecture Notes in Computer Science, pages 238–252, Schloss Hagenberg, Austria,
1997.
[99] S. Minton, M. D. Johnston, A. B. Philips, and P. Laird. Minimizing conflicts: a heuristic
repair method for constraint satisfaction and scheduling problems. Artificial Intelligence, 58:
161–205, 1992.
[100] N. Musliu, J. Gärtner, and W. Slany. Efficient generation of rotating workforce schedules.
Discrete Applied Mathematics, 118(1-2):85–98, 2002.
[101] N. Musliu, A. Schaerf, and W. Slany. Local search for shift design (extended abstract). In
Proc. of the 4th Metaheuristics International Conference (MIC-01), pages 465–469, 2001.
[102] N. Musliu, A. Schaerf, and W. Slany. Local search for shift design. European Journal
of Operational Research, 2002. To appear, available at http://www.dbai.tuwien.ac.at/
proj/Rota/DBAI-TR-2001-45.ps.
[103] D. R. Musser, G. J. Derge, A. Saini, and A. Stepanov. STL Tutorial and Reference Guide.
Addison Wesley, Reading (Mass.), second edition edition, 2001.
[104] E. Nowicki and C. Smutnicki. A fast taboo search algorithm for the job shop problem.
Management Science, 42(6):797–813, 1996.
[107] T. Parr. ANTLR Version 2.7.1 Reference Manual, October 2000. Available at http://www.
antlr.org/doc/index.html.
[108] G. Pesant and M. Gendreau. A constraint programming framework for local search methods.
Journal of Heuristics, 5:255–279, 1999.
[109] E. Pesch and F. Glover. TSP ejection chains. Discrete Applied Mathematics, 76:175–181,
1997.
[110] M. Pinedo. Scheduling: theory, algorithms, and systems. Prentice-Hall, Englewood Cliffs,
1995.
[111] P. Ross, E. Hart, and D. Corne. Some observations about GA-based exam timetabling. In
Proc. of the 2nd Int. Conf. on the Practice and Theory of Automated Timetabling, pages
115–129, 1997.
[112] A. Schaerf. Tabu search techniques for large high-school timetabling problems. In Proc.
of the 13th Nat. Conf. on Artificial Intelligence (AAAI-96), pages 363–368, Portland, USA,
1996. AAAI Press/MIT Press.
[113] A. Schaerf. Combining local search and look-ahead for scheduling and constraint satisfaction
problems. In Proc. of the 15th Int. Joint Conf. on Artificial Intelligence (IJCAI-97), pages
1254–1259, Nagoya, Japan, 1997. Morgan Kaufmann.
[115] A. Schaerf, M. Cadoli, and M. Lenzerini. Local++: A C++ framework for local search
algorithms. Software—Practice and Experience, 30(3):233–257, 2000.
[116] A. Schaerf and A. Meisels. Solving employee timetabling problems by generalized local search.
In Proc. of the 6th Italian Conf. on Artificial Intelligence (AIIA-99), number 1792 in Lecture
Notes in Computer Science, pages 493–502. Springer-Verlag, 1999.
[118] B. Selman, H. A. Kautz, and B. Cohen. Noise strategies for improving local search. In Proc.
of the 12th Nat. Conf. on Artificial Intelligence (AAAI-94), pages 337–343, 1994.
[119] B. Selman, H. Levesque, and D. Mitchell. A new method for solving hard satisfiability
problems. In Proc. of the 10th Nat. Conf. on Artificial Intelligence (AAAI-92), pages 440–
446, 1992.
[121] W. E. Smith. Various optimizers for single stage production. Naval Research Logistics
Quarterly, 3:59–66, 1956.
[122] G. Solotorevsky, E. Gudes, and A. Meisels. RAPS: A rule-based language specifying re-
source allocation and time-tabling problems. IEEE Transactions on Knowledge and Data
Engineering, 6(5):681–697, 1994.
[123] T. Stützle. Iterated local search for the quadratic assignment problem. Technical Report
AIDA-99-03, FG Intellektik, TU Darmstadt, 1998.
[124] E. Taillard. Robust taboo search for the quadratic assignment problem. Parallel Computing,
17:433–445, 1991.
[127] J. Thompson and K. Dowsland. General cooling schedules for a simulated annealing-based
timetabling system. In Proc. of the 1st Int. Conf. on the Practice and Theory of Automated
Timetabling, pages 345–363, 1995.
[128] J. M. Tien and A. Kamiyama. On manpower scheduling algorithms. SIAM Review, 24(3):
275–287, 1982.
[129] E. Tsang and C. Voudouris. Fast local search and guided local search and their application to
british telecom’s workforce scheduling. Technical Report CSM-246, Department of Computer
Science, University of Essex, Colchester, UK, 1995.
[130] R. Vaessens, E. Aarts, and J. K. Lenstra. A local search template. Technical Report COSOR
92-11 (revised version), Eindhoven University of Technology, Eindhoven, NL, 1995.
[131] R. Vaessens, E. Aarts, and J. K. Lenstra. Job shop scheduling by local search. INFORMS
Journal of Computing, 8(3):302–317, 1996.
[132] P. J. M. van Laarhoven and E. H. L. Aarts. Simulated Annealing: Theory and Applications.
D. Reidel Publishing Company, Kluwer Academic Publishers Group, 1987.
[133] P.J.M. van Laarhoven, E.H.L. Aarts, and J.K. Lenstra. Job shop scheduling by simulated
annealing. Annals of Operations Research, 40:113–125, 1992.
[134] M. G. A. Verhoeven and E. H. L. Aarts. Parallel local search. Journal of Heuristics, 1:43–65,
1995.
[135] J. Vian. Soluzione di problemi di job-shop scheduling mediante tecniche di ricerca locale. Mas-
ter’s thesis, Undergraduate School of Management Engineering at the University of Udine,
Italy, 2002. In Italian.
[136] S. Voß and D. L. Woodruff, editors. Optimization Software Class Libraries. Operations
Research/Computer Science Interfaces series. Kluwer Academic Publishers, Boston, 2002.
[137] C. Voudouris. Guided Local Search for Combinatorial Optimisation Problems. Phd thesis,
University of Essex, ftp://ftp.essex.ac.uk/pub/csp/Voudouris-PhD97-pdf.zip, April
1997.
[138] D. J. A. Welsh and M. B. Powell. An upper bound to the chromatic number of a graph and
its application to timetabling problems. The Computer Journal, 10:85–86, 1967.
[139] G. M. White and B. S. Xie. Examination timetables and tabu search with longer-term
memory. In Proc. of the 3rd Int. Conf. on the Practice and Theory of Automated Timetabling,
volume 2079 of Lecture Notes in Computer Science, pages 85–103. Springer-Verlag, Berlin-
Heidelberg, 2000.
[140] D. C. Wood. A system for computing university examination timetables. The Computer
Journal, 11:41–47, 1968.
[141] D. C. Wood. A technique for coloring a graph applicable to large scale time-tabling problems.
The Computer Journal, 12:317–319, 1969.
[142] D. Woodruff and E. Zemel. Hashing vectors for tabu search. Annals of Operations Research,
41:123–137, 1993.
[143] M. Yoshikawa, K. Kaneko, T. Yamanouchi, and M. Watanabe. A constraint-based high
school scheduling system. IEEE Expert, 11(1):63–72, 1996.
[144] J. Zhang and H. Zhang. Combining local search and backtracking techniques for constraint
satisfaction. In Proc. of the 13th Nat. Conf. on Artificial Intelligence (AAAI-96), pages
369–374, Portland, USA, 1996. AAAI Press/MIT Press.