Manufacturing Scheduling Systems: An Integrated View on Models, Methods and Tools

Jose M. Framinan
Departamento de Organización Industrial y Gestión de Empresas
Escuela Superior de Ingenieros, Universidad de Sevilla
Isla de la Cartuja, Seville, Spain

Rainer Leisten
Fakultät für Ingenieurwissenschaften
Allgemeine Betriebswirtschaftslehre und Operations Management
Universität Duisburg-Essen
Duisburg, Germany

Rubén Ruiz García
Grupo de Sistemas de Optimización Aplicada
Instituto Tecnológico de Informática
Universitat Politècnica de València
Valencia, Spain
Preface
There are a few important prerequisites for this book. Knowledge of the most
basic concepts of production management is required, although an effort is made in
Chap. 2 to place scheduling into context. Mathematics will appear profusely in some
parts of the book, mostly in the chapters devoted to scheduling models.
The book is structured into five parts. In the first part, we introduce the main
definitions and notation, and present the framework that we will use throughout the
book. In this framework, a scheduling system is defined as a collection of models
(representations of scheduling problems), methods (procedures to obtain efficient
solutions out of scheduling models), and tools (software devices to embed models
and procedures in order to support the scheduling decision problem), together with
the human elements operating the system. Models, procedures, and tools will
constitute the next three parts of the book. A final part on scheduling systems is
devoted to assembling all these elements in a roadmap to guide the
development of a scheduling system. The structure of the book is summarised in
Fig. 1.
As mentioned before, the book does not have to be read comprehensively and
sequentially. Manufacturing scheduling is a topic of interest for different
professional profiles (including managers and top/intermediate staff in charge of
operations management) and academic profiles (including production/manufacturing
engineers, computer scientists, and graduates in operations research and
management science). When used as a textbook, instructors will have a clear
idea of the content, sequence, and depth in which the different topics contained
in the book should be learnt. Therefore, the subsequent lines are mainly intended
as rough advice whenever such supervision is not present.
We believe that a rather basic knowledge of the scheduling field can be gained
by reading the introductory chapters of each part, i.e. Chaps. 1, 3, 7, 11, and 13. A
more classical (although, in our opinion, also more theoretical) itinerary through the
scheduling field would exclude Chaps. 11 to 15, and would stress issues
related to modeling and solution procedures. Modeling would be reinforced by
adding Chaps. 4, 5, and 6. These three chapters are heavily interdependent and
therefore should be read together, although readers with some scheduling
background may skip some parts of Chaps. 4 and 5, and focus instead on the
discussions in Chap. 6. A rather comprehensive understanding of solution procedures is
provided in Chaps. 8 and 9, whereas multiobjective scheduling issues are treated in
Chap. 10. Since multiobjective scheduling is a hot topic with practical and
theoretical interest, we would favor its inclusion even in a basic-level course.
Nevertheless, most of it could be skipped if the reader adopts a more classical/basic
view of the topic.
We think that Chaps. 11 and 12 would be of particular value for an audience
interested in the design and implementation of scheduling tools. Chapter 11
serves to provide a general view of the topic, whereas Chap. 12 is intended for
readers with some background and experience in business information systems,
and, in our opinion, should be omitted from basic scheduling courses, unless these are
heavily geared toward information systems/computer science.
We hope that users and consultants of manufacturing scheduling systems will
find Chaps. 13 and 14 interesting, as the practical issues treated there are not usually
the subject of classical scheduling books. Despite the verbosity and apparent
simplicity of some of the ideas contained in these chapters, they are mainly intended
for an experienced reader who may better grasp the inherent complexity of the
deployment of manufacturing scheduling systems and their integration with the
human schedulers. We are not sure that many of the ideas contained there could be
fully appreciated at a more basic level, although we think that at least a glimpse of
those in Chap. 13 should be given in order to avoid an excessively technical
approach to the field, which in our opinion is and has been a rather common flaw in
scheduling teaching and research.
Finally, Chap. 15 may be used in many different itineraries, ranging from a
basic level in which modeling and solution procedures issues are seen into
Acknowledgements
In the process of writing this book, we are indebted to many people. The authors
benefited from the careful reading of earlier versions of the book done by Manu
Dios, Paz Perez-Gonzalez and Natalia Prischepov. The authors are also indebted to
the many members and colleagues of their institutions at Seville, Duisburg, and
Valencia. The collaborations, help and working atmosphere have proven to be key
when writing this book. They also had to endure many of our absences when
writing "the book" and we are really thankful for their support and patience.
Contents

4 Scheduling Constraints
  4.1 Introduction
  4.2 Process Constraints
    4.2.1 Job or Task Precedence Constraints
    4.2.2 Changeovers or Setup Times
    4.2.3 Machine Eligibility
    4.2.4 Permutation Sequences
    4.2.5 Machine Availability and Breakdowns
    4.2.6 Re-circulation, Re-processing and Skipping
    4.2.7 No-Idle Machines
    4.2.8 Batching Machines
  4.3 Operations Constraints
    4.3.1 Interruption, Preemption and Splitting
    4.3.2 Release Dates, Due Dates, Deadlines, Processing Windows
    4.3.3 No Wait, Minimum and Maximum Time Lags and Overlapping
    4.3.4 Special Processing Times
  4.4 Transportation Constraints
  4.5 Storage Constraints
  4.6 Other Constraints
  4.7 Conclusions and Further Readings
  References

5 Objectives
  5.1 Introduction
  5.2 A Rationale for Scheduling Objectives
  5.3 Performance Measures
  5.4 Scheduling Objectives
    5.4.1 Non Due Date Related Objectives
    5.4.2 Due Date Related Objectives
    5.4.3 Rescheduling-Related Objectives
    5.4.4 Additional Objectives
  5.5 Adding Weights, Priorities or Importance
  5.6 An Illustrative Example
  5.7 Dealing with Conflicting Criteria: Multiobjective Scheduling
  5.8 Conclusions and Further Readings
  References

Index
Part I
Introduction to Manufacturing Scheduling
This part of the book consists of Chaps. 1 and 2. In the first chapter, we move from
a general definition of scheduling as the allocation of resources to tasks along time
to a definition of manufacturing scheduling, where we identify its main features
and context in which it takes place. Then we define a scheduling system as a
collection of methods, models and tools that support scheduling-related decisions
in a company. These three elements are heavily influenced by the organisational
context in which the scheduling-related decisions have to be taken. These three
parts (models, methods and tools), together with the organisation (humans) involved
in the process, constitute a framework that we will use throughout the book.
The second chapter is devoted to describing the context in which manufacturing
scheduling decisions are taken. Quite often, specialized scheduling literature fails to
acknowledge that scheduling decisions are usually integrated into a more general
(explicit or implicit) decision system, which influences (and is influenced by)
scheduling decisions. In addition, the aforementioned decision system is not
restricted to an individual company, but extended along a whole supply network
composed of independent enterprises. Therefore, we review the concept of
operations management and its place in obtaining competitive advantages for a
firm. After exemplifying the complexity and interdependency of different decisions
that constitute the management of operations, we introduce the different approaches
to handle this complexity, including hierarchical planning and decentralised
approaches (agent-based systems). In the centralised approach, we identify the
main decision blocks, place scheduling decisions among them, and discuss their
relation with other decision blocks. The difference between scheduling and
rescheduling (which can be interpreted as the frontier between planning and
control) is discussed. Next, we
discuss the interrelation of scheduling decisions with other manufacturing-related
decisions, such as simultaneous lot-sizing and scheduling, and emphasize the need
to align these decisions with the goals of the company. Finally, we also
present the context of the supply chain, discuss the concept of supply chain
management and present how this affects scheduling decisions.
Chapter 1
Overview of Manufacturing Scheduling
1.1 Introduction
Broadly speaking, scheduling deals with the allocation of resources to tasks along
time. In general, such definition may encompass a huge number of real-life appli-
cations, such as allocating nurses to the shifts of their hospitals, aircraft scheduling
in airports, or processing units in a computing environment. In this book, we will
focus on production scheduling or manufacturing scheduling, i.e. assigning the
various resources in the company to the manufacturing of a range of products that
are requested by the customers. Since, as we will see later, a certain level of
abstraction is required to manage these resources, many of the models and methods
that will be shown here may be applied outside the manufacturing scope. Nevertheless,
manufacturing scheduling presents a number of features that cannot be, in general,
extrapolated to other decision sciences. Most notably, scheduling is not carried out
in an isolated manner, as on one hand it uses as input the results of a previous plan
that determined the set of products to be manufactured by the company along with
the real (or expected) demand of these products, among other issues. On the other
hand, the usually highly variable shop floor conditions may complicate or impede the
fulfilment of a schedule, no matter how detailed it is, so there are further decisions to
be taken, possibly implying the review or modification of the existing schedule. As
a consequence, manufacturing scheduling cannot be considered an isolated process,
but integrated into a set of managerial decisions collectively known as production
management (or operations management).
More specifically, in this chapter we:
• briefly introduce the basic concepts regarding manufacturing scheduling (Sect. 1.5),
and
• provide some hints for further analysis on the issues discussed in this chapter
(Sect. 1.6)
aggregated production plan, in which the input for this decision is not the estimation
of the actual products to be sold each day, but an estimation of the families or groups
of products with similar usage of resources and/or similar demand behaviour, which
are to be sold over a period of time ranging from weeks to months.
As a consequence, this aggregated production plan will serve to estimate the
(aggregated) capacity to be used, the amount of raw materials to be purchased, etc., but
not as a detailed indication of the production orders that have to be released to the
shop floor. This detailed plan is usually left for a later stage, in which the detailed
status of the shop floor and of the firm orders to be processed is known. This is
precisely the mission of manufacturing scheduling: matching the jobs (tasks) to be
executed against the resources in the company. The result of this process is, ideally,
a schedule specifying which job should enter each resource, and when.
As plans rarely occur as estimated (this may be an overstatement for many
environments, but certainly not for manufacturing scheduling), modifications to
accommodate unforeseen events (machine breakdowns, urgent orders, cancellation of
existing orders, ...) have to be carried out continuously. Therefore, production plans
may have to be rescheduled. Note that this rescheduling process may actually
consist of making a full, detailed new plan (schedule) capturing the new situation on the
shop floor; but for us, the distinction between scheduling and rescheduling
lies basically in the aims sought in the decision: while scheduling decisions
aim at improving a certain performance measure on the shop floor, rescheduling aims
to minimize some measure of the disturbance of the current status of the shop floor,
usually upon the arrival of unexpected events. An implicit assumption behind this is
that—at least under certain circumstances—it may be more interesting to modify
an existing schedule to accommodate new events occurring in the shop rather than
generating a new schedule (almost) from scratch in order to fully take into account
these new events. While under ideal circumstances the latter would always render a
better solution, there are costs associated with the so-called nervousness resulting from
the scheduling process.
Note that sometimes different definitions are employed to distinguish between
scheduling and rescheduling, as some authors use the terms predictive scheduling and
reactive scheduling instead. A predictive schedule is an instruction to the shop floor,
causing the shop to execute events in the sequence and time indicated in the schedule,
while the process of modifying the predictive schedule in the face of executional
disruptions is denoted as reactive scheduling. Note that the definition of reactive
scheduling not only encompasses our definition of rescheduling, but also generating
a completely new schedule that is followed until the next disruption occurs. These
aspects will be discussed in detail in Sect. 2.3.1.
It is clear that the scope and the time period in which these decisions are taken greatly
influence the quality of the data used to support the decision process. Broadly speaking,
long-term decisions usually involve speculations on the evolution of the market and
on the internal resources of the company, and thus are by nature less amenable to
formalisation using quantitative techniques, which require high-quality input data
to make a sensible decision. Besides, there is little sense in formalising the decision
process itself, since the changing nature of the business and the long intervals between
two consecutive decisions of this nature make each of them unique in practice. Here,
human judgement based on expertise and on a few key estimations is usually what
is required to make this type of decision. On the contrary, the repetitive nature of
short-term decisions makes them suitable to be formalised and possibly encapsulated
into decision models, which can produce remarkable results: one can reasonably
trust the data related to the current state of the shop floor, the average processing times
of the machines, and the (relative) stability of the list of orders to be completed, and
thus capture the decision process in a quantitative model that can possibly be
solved as an optimisation problem. Indeed, these two aspects (the repetitive nature of the
decision process and the high volume of data required to support it) make quantitative,
procedural techniques competitive as compared to subjective human
expertise.
It is also interesting to note that scheduling not only serves to generate
detailed plans to be executed, but can also be used as a source of information
for different business functions in the company. For instance, it may serve
to provide a capacity check for the Materials Requirement Planning (MRP) system,
to help quote an incoming order from customers, or to decide about its
acceptance/rejection. Again, we can see the close interrelation of scheduling with other
business functions; quite often, the distinction between scheduling and these
functions blurs. For purely practical purposes, in this book we exclude
from scheduling those decisions for which the level of detail considered with respect
to time, resources, products, or customers is not enough to provide a specific plan to
be immediately translated into actions on the shop floor.
Given the high level of interrelation among different production management
decisions (including scheduling) shown in the previous paragraphs, one may be
tempted to establish a prioritisation of these decisions, in order to more
efficiently allocate the usually scarce financial and human resources to those decisions
with a higher impact on the bottom line of the company. However, we believe that
discussing whether the scheduling function is more important than quality management,
supply chain coordination, or product design (to name a few examples of operations
management decisions) is pointless for any practical purpose. It is obvious
that, without overall good operations management, scheduling will not help
to improve the results of a company, no matter how well the scheduling function is
performed, and the same can be said about the other decisions. Nevertheless, it
should be clear that scheduling is a core business decision for industrial companies.
As such, its importance greatly depends on the strategic importance of manufacturing
decisions to gain competitive advantage. For instance, it may not be as critical for
companies trying to sell a small range of new products avidly accepted by the market
with little opposition from competitors, but it would help companies manufacturing
a large range of products to obtain a competitive advantage over their
competitors. Indeed, the quality of scheduling is often cited as a key factor if the
products and the factory are well-designed.
Since scheduling is closely related to the allocation of resources, it is also obvious
that its importance would increase with the possibility of having a conflict for the
usage of these resources. Along the history of manufacturing, this conflict for the
usage of resources (which is not desirable) has traditionally been avoided by
designing shop floors that manufacture a limited range of products with similar technological
requirements. Unfortunately, the trend towards mass customisation, the shortening of
product life-cycles, time-based competition, and globalisation (with the
corresponding increase in competition) has made this solution unacceptable for most
companies. Another option has been what is described as ‘management by buzzword’
(Hopp and Spearman 2008), meaning the vast stream of three-letter acronyms
encompassing the ultimate solutions for production management. While these apparent
business panaceas (one every few years, just another proof that they are not panaceas
at all) may contain valuable insights, most translate into approaches that neglect or
ignore the realities of manufacturing scheduling. Given the inadequate answers provided
by the two previous options, we believe that there is no other way than to address
scheduling in full detail, with its complexities and its enormous opportunities for
operational improvement.
Scheduling decisions may be very different from one company to another, but they all
share a number of common features. In this section, we discuss these features in
order to reach a degree of abstraction that may serve us to formulate a generic
framework useful for different companies. These features are:
• They are complex decisions, as they involve developing detailed plans for
assigning tasks to resources over time. Although this may vary greatly from one company
to another, there is a universal trend towards increasing product sophistication
and customisation, which in turn adds to the complexity of the
manufacturing process.
• Scheduling decisions are short-term decisions to be taken over and over again.
The average lifetime of a schedule is very short, and indeed many authors refer to
a continuous scheduling decision process. Here we mean that it is the decision
process that is repeated, which is different from stating that the outcome of a single
decision is put into practice rhythmically again (cyclic scheduling).
• Despite being a short-term decision, scheduling is relevant for a company's bottom
line, as it determines the lead times and the cost of the products, which in the long
run affects the service level of the company as well as its ability to compete both
on manufacturing costs and on delivery times.
• As a decision process at the core of the operations of a manufacturing company, the
constraints and objectives affecting scheduling are extremely company-specific.
The nature and usage of the resources in a plant producing chemical commodities
has little in common with the manufacturing of ball bearings, or the assembly of highly
customised electronic devices.
• Finally, scheduling decisions are—as we already discussed—relatively structured
decisions, at least as compared to other decision problems within the company.
In any case, we should recall that there is no way to escape from making (implicit
or explicit) scheduling decisions in manufacturing companies. Perhaps ironically,
this fact has sometimes led to an underestimation of the potential advantages of
efficient scheduling, since, as some plant managers put it, ‘we already do scheduling’.
The truth is that, most often, such scheduling is accomplished in a non-structured,
expertise-dependent manner, with partial information both about the inputs of the
process and about the desired output. The consequences of this approach
are obvious: underutilisation of resources, longer lead times than required, higher
inventories, ... to name a few. This book is about how to conduct this process in a
systematic, efficient manner.
• Fourth, although strictly speaking there is no need for a tool to support the schedulers
in their decisions, in most cases it is completely impractical to make them
without any help from a software tool. The sheer volume of data referring
to resources and tasks found in most shop floors would be sufficient to justify
the need for such a tool. Whether this software is a spreadsheet, an ‘off-the-shelf’
software, or an advanced planning and scheduling tool is a question that will be
discussed later in this book, but from now on we will assume that some type of
support exists.
From the above paragraph, it should be clear that, in order to effectively make
scheduling decisions, a system—i.e., a collection of pieces—is needed. The
combination of models, methods and tools—together with the human schedulers involved
in the process—is what we call a scheduling system. Scheduling models, solution
procedures and software tools each play an important part on their own, but they have
to be put together to support scheduling decisions. Therefore, a holistic view—in contrast
to the more extended technical or optimisation-related view—is needed to obtain a
complete picture of scheduling.
This book is thus adapted to the above view, which is summarised in Fig. 1.1. In
the next chapters, the different elements of the scheduling system will be described
one by one. To start this discussion, a number of basic concepts are defined in the
next section.
By formalising the ideas given in the previous sections, we can define manufacturing
scheduling as the decision-making process consisting of assigning the set of
operations/tasks required to manufacture a set of products to the existing resources on the
shop floor, as well as determining the time periods at which to initiate these
operations/tasks. A schedule is defined as a specific assignment of these
operations/tasks to the resources on a
time-scale. A schedule thus fully determines when each operation/task should start
and is regarded as the main output of manufacturing scheduling.
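To make this definition concrete, a schedule can be sketched as a plain list of assignments (job, machine, start time, duration). The data below is invented for illustration, and the only feasibility aspect checked is that no machine processes two operations at once:

```python
# A minimal, illustrative representation of a schedule (hypothetical data).
from collections import namedtuple

Assignment = namedtuple("Assignment", "job machine start duration")

schedule = [
    Assignment(job="J1", machine="M1", start=0, duration=3),
    Assignment(job="J2", machine="M1", start=3, duration=2),
    Assignment(job="J1", machine="M2", start=3, duration=4),
]

def machine_overlaps(schedule):
    """Return True if two operations overlap on the same machine."""
    by_machine = {}
    for a in schedule:
        by_machine.setdefault(a.machine, []).append((a.start, a.start + a.duration))
    for intervals in by_machine.values():
        intervals.sort()
        for (_, end1), (start2, _) in zip(intervals, intervals[1:]):
            if start2 < end1:
                return True
    return False

print(machine_overlaps(schedule))  # False: each machine runs one operation at a time
```

A real scheduling system would also verify each job's processing route and release dates; this sketch checks machine capacity only.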
The products to be manufactured, or the units of work into which the manufacturing
activity can be divided, are usually referred to in the scheduling literature simply
as jobs. Although in most of this book we will naturally assume the existence of
such units of work, the concept of a job is by no means unambiguous, as it might
depend on the company, context, and manufacturing setting. Clients’ orders
might be broken down into different Fabrication Orders (FO). Several FOs coming
from different client orders might be coalesced to form larger FOs that might be
more economical to produce. Later, these large FOs could be again broken down, at
a shop level, into smaller, but not necessarily and immediately consecutive, sublots
into what is called a lot streaming process. We will discuss these aspects in Sect. 3.4.
The operations/tasks required by the jobs are provided by different productive
resources available in the shop floor. These could range from cheap tooling and
fixtures to human resources and expensive machinery. Again, we will assume that it
is possible to identify units of resources that can perform these operations/tasks and,
in accordance with the scheduling literature, we will refer to them simply as machines. As
with the jobs, the concept of a ‘machine’ is an abstraction (or mental model) of real-
world operations and is therefore subject to the purposes of the modeling activity, i.e.:
for estimating the throughput of a factory, the whole factory with all its resources can
be considered as a ‘machine’, while for providing a daily timetabling of the work
that the operator of a drilling machine in this factory must perform, the previous
assumption of machine is clearly insufficient.
In addition, there are many technical/production/economic characteristics of the
jobs, of the machines, and of the operation of a job on a machine that must be
taken into account in the scheduling process. Although these will be formalised and
extended in Sect. 3.1, let us state perhaps the most obvious ones:
• There exists machine specialisation: not all machines can perform every operation
required by each job, as manufacturing resources are usually capable of performing
just one or at most a few operations. Even if such a thing is possible (e.g.
with polyvalent workers or general-purpose machines), it is extremely doubtful that
their efficiency is operation-independent, so at least preferences/exclusions would
have to be defined.
• The order in which the different operations have to be performed on a job is usually
fixed given the specific technology of the product to be manufactured, i.e. there
exists a predefined order (or orders) of operations for each job that must be strictly
followed. This order is labelled job processing route and will be further formalised
in Sect. 3.2.2.
• In general, different jobs have different requirements with respect to the technical,
production management, and economic aspects. From a technical viewpoint, even
if two jobs have the same processing route, some characteristics of the operation
(such as, e.g., processing times or set-up times) may be different. From a production
management/economic viewpoint, technically identical jobs may have different
due dates committed to the customer(s), may be sold with different benefits, or
may require raw materials which are not served by the provider at the same time.

(Fig. 1.2: Real world and formal model, problem sphere and solution sphere for decision-making)
• All or some of the operations/tasks may allow so-called preemption, which refers to
the possibility (for all or some jobs) of interrupting an operation once it has started.
If such a possibility exists, the execution of the operation can be stopped and
resumed later without any penalty or with some penalty (these cases are denoted as
resumable or semi-resumable operations, respectively). There are manufacturing examples
of both preemptive and non preemptive operations. Heat/Chemical treatments are
basically non preemptive operations, as they correspond to gradually changing the
physical or chemical properties of the material under treatment, a process that,
in general, cannot be reversed to the original properties of the material. On the
other hand, some assembly operations are preemptive, at least from a technological
viewpoint, although there may be managerial/economic considerations deterring
or impeding the preemption of such operations.
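The characteristics listed above are, in practice, plain data. The following sketch shows one way routes, machine eligibility and job-specific attributes might be encoded; all job names, operation types, machines and numbers are invented for illustration:

```python
# Hypothetical instance data: routes, due dates, processing times, eligibility.
jobs = {
    "J1": {"route": ["cut", "drill"],             # fixed processing route
           "due_date": 20,
           "proc_time": {"cut": 4, "drill": 3}},  # operation-specific times
    "J2": {"route": ["cut", "paint"],
           "due_date": 15,
           "proc_time": {"cut": 2, "paint": 6}},
}
# Machine specialisation: which machines may perform each operation type.
eligible = {"cut": ["M1"], "drill": ["M2"], "paint": ["M3", "M4"]}

for name, job in jobs.items():
    stations = [eligible[op] for op in job["route"]]
    print(name, "visits, in order:", stations)
```

Even this toy encoding captures machine specialisation (only listed machines may perform an operation), fixed processing routes, and job-specific data such as due dates.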
options. To do so, one or more metrics (or criteria) have to be defined to characterise
the desired features of the schedules. Once the criterion is established, it is (at least
technically) possible to compare different schedules, and to select one yielding the
best value (maximum or minimum) for this criterion among all feasible schedules
(optimal schedule).
In manufacturing, metrics are usually classified into the categories of costs (sometimes
profit), time, quality and flexibility. As will be discussed in Chap. 5, in
scheduling decision-making these categories are associated with criteria related to
the completion times of the jobs. Loosely speaking, the completion time of a job in
a specific schedule is the time spent by the job in the shop floor—according to the
schedule—before all its corresponding operations have been performed. Note that
the completion times of the jobs in a schedule make it possible to determine whether
executing this schedule would result in delays in the delivery of the jobs with respect
to their committed due dates, or the average time required, according to this schedule,
to process the whole set of jobs.
If the criterion employed is a non-decreasing function of the completion times
of the jobs, then this criterion is named regular. One obvious example of a regular
criterion is the average completion time of the jobs, while the average (absolute)
deviation of the completion times of the jobs from their due dates is an example of a
non-regular criterion. Finally, note that more than one criterion can be of interest, and that there might be
conflicts between them. An exhaustive discussion and classification of the different
criteria employed in scheduling is given in Chap. 5, while multicriteria scheduling is
discussed in Chap. 10.
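The distinction between regular and non-regular criteria can be checked mechanically. The following sketch (with made-up completion times and due dates, not taken from the book) contrasts the two: increasing a completion time can never improve a regular criterion, but it can improve a non-regular one.

```python
# Sketch: regular vs. non-regular criteria for a schedule given as a list of
# job completion times (all data below is hypothetical, for illustration only).

def mean_completion_time(completions):
    """Average completion time -- a regular criterion: it never decreases
    when any single completion time increases."""
    return sum(completions) / len(completions)

def mean_due_date_deviation(completions, due_dates):
    """Average absolute deviation from the due dates -- non-regular:
    finishing a job *earlier* than its due date can worsen the value."""
    return sum(abs(c - d) for c, d in zip(completions, due_dates)) / len(completions)

completions = [4, 7, 10]
due_dates = [5, 7, 9]

base = mean_due_date_deviation(completions, due_dates)     # (1 + 0 + 1) / 3
later = mean_due_date_deviation([5, 7, 10], due_dates)     # job 1 finishes later: (0 + 0 + 1) / 3
assert later < base                                        # non-regular: delaying a job improved it
assert mean_completion_time([5, 7, 10]) >= mean_completion_time(completions)  # regular: it cannot
```

The assertions make the point of the definition explicit: a delay can only keep or worsen a regular criterion, whereas it improved the due-date deviation here.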
In order to highlight the main issues related to scheduling decision-making, we
map in Fig. 1.2 the well-known flow of a decision-making process, at least for quan-
tifiable decision problems: Starting at the real-world problem, for which a decision is
required, a formal model is derived by means of simplification and formalisation. By
using some formal procedure or algorithm (see Sect. 7.2.1 for a more precise defini-
tion of algorithm), this formal model is solved, i.e. a formal solution for this problem
is derived. Afterwards, within implementation this formal solution is transferred
to a real-world solution which is implemented for solving the real-world decision
problem.
The scheme presented in Fig. 1.2 is rather basic and superficial, and it can be
complexified to almost every level. We just refer to this simplification because it
depicts the basic issues related to manufacturing scheduling decision-making. We
will go back to this figure in Chap. 6 (when discussing the transfer of real-world
relations into a formal, mathematical decision model and the techniques to set up
a respective formal model), and in Sect. 7.1 (when addressing the adequacy of the
formal solutions found by the algorithms to the real-world problem).
Since it has been established that obtaining (one or several) schedules is the main
goal of the manufacturing scheduling decision problem, in the next sections we will
present ways to represent schedules (Sect. 1.5.1), their main types (Sect. 1.5.2), and
the number of existing schedules (Sect. 1.5.3).
1.5 Basic Concepts 13
Fig. 1.3 Machine-oriented and job-oriented Gantt diagrams (Domschke et al. 1997)
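A machine-oriented Gantt diagram of the kind shown in Fig. 1.3 can be rendered even as plain text. A minimal sketch follows; the two-machine, two-job data is hypothetical and chosen only for illustration.

```python
# Sketch (hypothetical data): a machine-oriented Gantt chart rendered as text.
# Each operation is a triple (job_id, start, end); one row per machine,
# one character per time unit, '.' marking idle time.

def machine_gantt(machine_ops, horizon):
    lines = []
    for machine in sorted(machine_ops):
        row = ["."] * horizon
        for job, start, end in machine_ops[machine]:
            for t in range(start, end):
                row[t] = str(job)
        lines.append(f"M{machine} |" + "".join(row))
    return "\n".join(lines)

ops = {
    1: [(1, 0, 3), (2, 3, 5)],   # machine 1: job 1 in [0,3), then job 2 in [3,5)
    2: [(2, 0, 2), (1, 3, 6)],   # machine 2: job 2 first, idle in [2,3), then job 1
}
print(machine_gantt(ops, horizon=6))
# M1 |11122.
# M2 |22.111
```

A job-oriented diagram would simply transpose the grouping: one row per job, with the machine identifier written into the occupied time slots.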
For non-preemptive scheduling problems, several classes (or types) of schedules can
be identified. As we will see, these classes are used
1. to reduce the number of possible schedules which reduces the search effort for
good or optimal solutions as compared with a continuum of solutions, and
2. to identify properties of the solution space and/or good or optimal solutions to
adjust solution procedures accordingly.
The classes of schedules are further detailed in Fig. 1.5. Note that, in order to
take advantage of restricting the search of the optimal solution to a certain class of
schedules, it has to be checked in advance whether all optimal solutions (or at least
one of them) fulfil the respective property. In this case, the effort to find an adequate schedule can be reduced considerably.
Fig. 1.5 Classes of non-preemptive schedules for job shop problems (similar to Pinedo 2012;
T’Kindt and Billaut 2006)
Although the expressions schedule and sequence are often used synonymously in
the literature, one schedule uniquely defines a sequence but not vice versa. Often,
especially in a multi-stage context, (at least non-critical) operations will have some
time buffer within a given sequence. These time buffers will open ranges for the
assignment of these operations to the time-scale. Operations might be assigned ‘left-
shifted’ to their earliest possible processing period (as is often supposed in scheduling
procedures), ‘right-shifted’ to their latest possible processing period (as might make
sense from a profitability point of view, i.e. to minimise tied-up capital), or somewhere
in between (e.g. if externally specified due dates should be reached and neither early
nor late completion of operations or jobs, respectively, is intended).
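The left-shifted and right-shifted assignments described above can be computed directly for a fixed sequence. A minimal single-machine sketch (the processing times and the deadline are hypothetical):

```python
# Sketch (hypothetical data): for a fixed job sequence on one machine, compute
# the left-shifted (earliest possible) and right-shifted (latest possible)
# start times; the difference between the two is each operation's time buffer.

def left_shifted(p_times):
    """Earliest starts: each job begins as soon as its predecessor finishes."""
    starts, t = [], 0
    for p in p_times:
        starts.append(t)
        t += p
    return starts

def right_shifted(p_times, deadline):
    """Latest starts: work backwards from the deadline."""
    starts, t = [], deadline
    for p in reversed(p_times):
        t -= p
        starts.append(t)
    return list(reversed(starts))

proc = [3, 2, 4]                     # processing times in sequence order
print(left_shifted(proc))            # [0, 3, 5]
print(right_shifted(proc, 12))       # [3, 6, 8] -- every operation has a buffer of 3
```

One sequence thus corresponds to a whole continuum of feasible schedules between these two extremes, which is exactly why a schedule determines a sequence but not vice versa.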
In general, the number of feasible schedules for most manufacturing scheduling
problems will be infinite, while the number of sequences will be countable and even
finite, although possibly rather large. That is, although the number of sequences on
every single machine and in combination over all machines will be finite, a single
operation might often be shifted continuously on the time scale, at least within some
time limits. However, if we can prove, as holds in many scheduling settings, that
some optimal solution will always be active (at least in some non-preemptive
scheduling problems, see Sect. 1.5.2), then only a finite number of solutions, rather
than a continuum of left- or right-shifted variants, needs to be considered for the
determination of an optimal solution.
16 1 Overview of Manufacturing Scheduling
During the last decades, manufacturing scheduling has been identified, both in
science and in practice, as one of the most important decisions in the planning and
control of industrial plant operations. Scheduling is seen as a decision-making
process that is used in manufacturing industries as well as in service industries.
In this chapter we have introduced the conceptual framework which we will use
throughout this book. We have given a definition of manufacturing scheduling and
(even if only in a coarse manner) delineated which aspects are within the scope of
the book, and which are not. A framework for studying and connecting the differ-
ent topics involved in manufacturing scheduling (models, methods and tools under
human guidance) has been presented, and the basic scheduling concepts have been
introduced.
The remainder of the book follows the structure summarised in Fig. 1.1. First, in
Chap. 2 we finalise Part I, which has been devoted to analysing the manufacturing
context in which scheduling takes place. We then discuss the issues related to the
different parts in a scheduling system as depicted in Fig. 1.1 one by one: Part II of
the book (Chaps. 3, 4, 5 and 6) is devoted to scheduling models, Part III (Chaps. 7,
8, 9 and 10) to scheduling methods, and Part IV (Chaps. 11 and 12) to scheduling tools.
Finally, Part V (Chaps. 13, 14, and 15) relinks all these elements into the concept of
a scheduling system by discussing its relations with its organisational (i.e. human)
environment, and the process of developing such systems. This part concludes by
presenting a real implementation in which the main concepts and issues discussed
in the book are presented.
As scheduling is an area of utmost importance in Operations Research and Man-
agement Science, there is a wealth of books dealing with specific aspects of schedul-
ing. We omit those books of wider scope (e.g. in Operations Management) that
partially cover scheduling decisions in one or two chapters. Among these, we would
like to cite the book by Hopp and Spearman (Hopp and Spearman 2008), although
there are other excellent books on Operations Management. Interestingly, the reader
would detect that, in some of these books, the authors do not seem to feel very com-
fortable placing scheduling in the context of production management and the tone
and content of the chapter(s) devoted to scheduling are quite different than that of
the rest of the chapters.
1.6 Conclusions and Further Readings 17
Regarding the scheduling topic itself, the book acknowledged to be the first one
devoted to scheduling is Conway et al. (1967). This book, like most books pre-dating
1995, seems today a bit outdated, as the scheduling field has changed almost beyond
recognition since then, and its contents would be entirely different even if the topics
addressed were identical. Regarding introductory readings on the topic of manufacturing scheduling,
there are many excellent books, although most of them lean towards one or two of the
components of our framework. Among the principal books, here we mention those
by Baker (1974), French (1982), Błażewicz et al. (2002), Brucker (2007), Pinedo
(2012), Baker and Trietsch (2009) or Pinedo (2009). A less-known but interesting
reference is that of Domschke et al. (1997).
The terms predictive scheduling and reactive scheduling are discussed—although
in a slightly different manner than here—in Aytug et al. (2005). The last reference
is also a key source regarding the use of scheduling for providing information for
different business functions in the company. Specifically, the uses of scheduling for
due date quoting and acceptance/rejection decisions are discussed in Framinan and
Leisten (2010). An excellent book describing human performance in scheduling is
MacCarthy and Wilson (2001). Brucker (2007) is devoted mostly to models and
methods for scheduling problems, not confined to the manufacturing field.
References
Aytug, H., Lawley, M. A., McKay, K., Mohan, S., and Uzsoy, R. (2005). Executing production
schedules in the face of uncertainties: A review and some future directions. European Journal of
Operational Research, 161(1):86–110.
Baker, K. R. (1974). Introduction to Sequencing and Scheduling. John Wiley & Sons, New York.
Baker, K. R. and Trietsch, D. (2009). Principles of Sequencing and Scheduling. Wiley, New York.
Błażewicz, J., Ecker, K. H., Pesch, E., Schmidt, G., and Wȩglarz, J. (2002). Scheduling Computer
and Manufacturing Processes. Springer-Verlag, Berlin, second edition.
Błażewicz, J., Pesch, E., and Sterna, M. (2000). The disjunctive graph machine representation of
the job shop scheduling problem. European Journal of Operational Research, 127(2):317–331.
Brucker, P. (2007). Scheduling Algorithms. Springer, New York, fifth edition.
Conway, R. W., Maxwell, W. L., and Miller, L. W. (1967). Theory of Scheduling. Dover Publications,
New York. Unabridged publication from the 1967 original edition published by Addison-Wesley.
Corsten, H. (2009). Produktionswirtschaft - Einfuehrung in das industrielle Produktionsmanage-
ment. Oldenbourg, Muenchen. 12th, revised and upgraded edition.
Domschke, W., Scholl, A., and Voss, S. (1997). Produktionsplanung: Ablauforganisatorische
Aspekte. Springer, Berlin. 2nd, revised and upgraded edition.
Framinan, J. and Leisten, R. (2010). Available-to-promise (ATP) systems: A classification and
framework for analysis. International Journal of Production Research, 48(11):3079–3103.
French, S. (1982). Sequencing and Scheduling: An Introduction to the Mathematics of the Job-Shop.
Ellis Horwood Limited, Chichester.
Hopp, W. J. and Spearman, M. L. (2008). Factory Physics. McGraw-Hill, New York.
MacCarthy, B. L. and Wilson, J. R., editors (2001). Human performance in Planning and Scheduling.
Taylor & Francis.
Pinedo, M. (2009). Planning and Scheduling in Manufacturing and Services. Springer, New York,
second edition.
Pinedo, M. L. (2012). Scheduling: Theory, Algorithms, and Systems. Springer, New York, fourth
edition.
T’Kindt, V. and Billaut, J.-C. (2006). Multicriteria Scheduling: Theory, Models and Algorithms.
Springer, New York, second edition.
Chapter 2
The Context of Manufacturing Scheduling
2.1 Introduction
In the previous chapter, a unified view of manufacturing scheduling has been given.
Furthermore, we have also outlined that manufacturing scheduling is not carried out
in an isolated manner, but as part of a set of interrelated decisions—collectively
known as production management—dealing with efficiently ensuring the delivery of
goods provided by the company. Therefore, before analysing manufacturing
scheduling decisions in detail, it is worth looking at the context in which
manufacturing scheduling is embedded, i.e. the company's production management
and the supply network to which the company belongs.
More specifically, in this chapter we
• present a framework for scheduling decisions (Sect. 2.2),
• analyse manufacturing scheduling as an isolated decision process and study the
main aspects influencing these decisions (Sect. 2.3) and
• investigate the relationship of manufacturing scheduling with other decisions in
the company and its supply chain network (Sect. 2.4).
The basic definitions for manufacturing scheduling have been given in the previous
chapter. In this section, we now discuss the decision framework of manufacturing
scheduling, i.e. the main aspects influencing the scheduling decision process. We
adopt a systems analysis approach: a system can be characterised by its objects
and their relations, and by whether it is closed or open (i.e. whether or not it
includes relations to the off-system environment). Manufacturing scheduling can
be interpreted as a closed system when it is studied as an isolated decision process,
and also as an open system when the focus is set on its relationship to other decision
processes.
This dual (open and closed) view will be extended in the next sections. First
(Sect. 2.3), we study manufacturing scheduling as an isolated decision process and
will discuss the main aspects influencing these decisions. Next (Sect. 2.4), the rela-
tionship of scheduling with other decisions is investigated. These relationships can
be classified into three types:
• Relationship between manufacturing scheduling and the rest of decisions in
production management (Sect. 2.4.1).
• Relationship between manufacturing scheduling and the rest of decisions (apart
from those in production management) within the company (Sect. 2.4.2).
• Relationship between manufacturing scheduling and the rest of decisions in the
supply network in which the company is integrated (Sect. 2.4.3).
planning and control system, also for manufacturing scheduling, the effects of
such a limited scope have to be anticipated and examined carefully.
We can use the above structure to classify scheduling decisions. This classification
will be employed in the next subsections when we analyse the main aspects influenc-
ing manufacturing scheduling, namely time, complexity, variability and flexibility.
2.3.1 Time
As already discussed, time is the main reference point of scheduling and scheduling
decisions. Apart from duration, starting and ending specification of operations and
their assignment to the timescale by scheduling decisions, there are several further
time-related issues in scheduling, which are graphically described in Fig. 2.1. These
are:
• Point in time of scheduling. The point in time of a scheduling decision itself, its
level of information with respect to the real-world scheduling problem (e.g. the
degree to which operation durations, or even the set of operations itself, is already
determined) or, more precisely, its time lag to the first operation to be physically
executed, may play an important role concerning the quality of the scheduling
decision.
• Scheduling horizon. Scheduling horizon refers to the time horizon of operations
or jobs considered in the scheduling process. The scheduling horizon is strongly
related to the selection of jobs included in the scheduling decision problem.
• Scheduling frequency refers to how often, and upon which trigger, a schedule is
updated; through it, the inclusion into the new scheduling decision of information
that has arrived in the meantime can be controlled. This updating process might be
rhythmic or event-based, e.g. if a new job arrives or a machine becomes (un-)available.
• Realisation horizon. Closely connected to scheduling horizon and scheduling
frequency is the determination of the realisation horizon, i.e. that part of the
scheduling horizon which is implemented, usually at least until updating by the next
(possibly re-)scheduling decision takes place. This happens because scheduling is
often performed on the well-known rolling horizon basis, i.e. a schedule is con-
structed for the scheduling horizon, but it is executed only for the first few periods
of this horizon (i.e. its realisation horizon) before it is updated by a new plan using
updated input data.
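The interplay of scheduling horizon, realisation horizon and replanning described above can be sketched as a rolling-horizon loop. The data and the SPT ordering inside plan() are assumptions for illustration only; any scheduling method could take their place.

```python
# Sketch (hypothetical data and rule): rolling-horizon scheduling on one machine.
# A schedule is built for all pending jobs, but only the jobs whose start falls
# within the realisation horizon are executed before replanning.

def plan(jobs, start_time):
    """Build a schedule as (job, start) pairs using a simple SPT ordering --
    an assumption; this stands in for whatever scheduling method is used."""
    schedule, t = [], start_time
    for job, p in sorted(jobs, key=lambda jp: jp[1]):
        schedule.append((job, t))
        t += p
    return schedule

def rolling_horizon(jobs, realisation_horizon):
    proc = dict(jobs)
    executed, t, pending = [], 0, list(jobs)
    while pending:
        schedule = plan(pending, t)
        window_end = t + realisation_horizon
        released = [(j, s) for j, s in schedule if s < window_end]   # executed part
        executed.extend(released)
        done = {j for j, _ in released}
        pending = [(j, p) for j, p in pending if j not in done]      # replanned later
        t = max([window_end] + [s + proc[j] for j, s in released])   # next replanning point
    return executed

jobs = [("A", 5), ("B", 2), ("C", 4), ("D", 1)]
print(rolling_horizon(jobs, realisation_horizon=4))
# [('D', 0), ('B', 1), ('C', 3), ('A', 7)]
```

In a real setting the pending set would also absorb newly arrived jobs at each replanning point; that extension is omitted here for brevity.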
Additionally, it should be mentioned that the grid of the time scale might influence the
quality of the scheduling decision as well. However, usually, the time grid assumed
for the process, may it be minutes, hours, shifts or days, will be ‘adequately’ detailed,
i.e. will not contribute to significant infeasibilities and/or suboptimalities.
Closely related to time aspects in manufacturing scheduling is the consideration
whether a new or updated schedule is generated from scratch as a greenfield solution
or whether at least parts of it are predetermined by remaining operations from the
former schedule. The latter case is, obviously, more realistic and sometimes expressed
by something like ‘the tails of the former schedule are the heads of the new, updated
one’. However, many scheduling approaches in science, as well as their software
implementations in practice, disregard this predetermination of the beginning of a
schedule (at least on some machines) and generate the new schedule from scratch,
usually taking the remaining operations of the former schedule into the planning
process of the new iteration rather than fixing them at the beginning of the new
schedule.
In order to avoid the nervousness consequences already mentioned in Sect. 1.2, plan
revisions during this update should be avoided or at least minimised from a planning
point of view, and only new jobs should be appended to the old plan. However, the
relevance of this update has to be interpreted in at least two ways:
On one hand, jobs (or at least operations) already started should be more or less fixed
and their schedule might be, therefore, fixed as input data for the new schedule. The
time period covering such jobs/operations is denoted as frozen period. On the other
hand, if previously scheduled jobs have not yet been started or physically prepared
on the shop floor, their preliminary schedule might be known only to the planner,
while shop floor personnel may have received no information on these scheduled but
not yet released jobs/operations up to the replanning point in time. Therefore, these
latter jobs/operations may very well be considered anew in the replanning procedure
of the rolling horizon approach—without causing much additional nervousness to
the shop floor level.
A further approach to reduce nervousness on the shop floor is to extend the frozen
period beyond the plan revision point in time, so that plan revisions enter the shop
floor more ‘smoothly’. However, this advantage with respect to reduced nervousness
is balanced by a reduced reaction speed of the shop floor.
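The frozen-period idea can be sketched for a single machine: operations already started by the replanning point stay fixed, while the remainder, together with newly arrived jobs, is replanned once the frozen operations complete. The data and the SPT rule below are hypothetical stand-ins for whatever method is actually used.

```python
# Sketch (hypothetical data): rescheduling with a frozen period on one machine.
# Jobs whose start lies at or before the replanning point 'now' are frozen;
# everything else is replanned by a simple SPT rule (an assumption).

def reschedule(old_schedule, proc, now, new_jobs):
    frozen = [(j, s) for j, s in old_schedule if s <= now]          # already started: fixed
    free = [(j, proc[j]) for j, s in old_schedule if s > now] + new_jobs
    t = max([now] + [s + proc[j] for j, s in frozen])               # machine free again
    new_schedule = list(frozen)
    for j, p in sorted(free, key=lambda jp: jp[1]):                 # replan the rest (SPT)
        new_schedule.append((j, t))
        t += p
    return new_schedule

proc = {"A": 4, "B": 3, "C": 6, "D": 2}
old = [("A", 0), ("C", 4), ("B", 10)]        # previous plan; replanning happens at t = 5
print(reschedule(old, proc, now=5, new_jobs=[("D", 2)]))
# [('A', 0), ('C', 4), ('D', 10), ('B', 12)]
```

Note that A and C keep their old starts (the frozen period), while B, although previously scheduled, is reordered together with the new job D because it had not yet been released.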
Another way of updating schedules is event-oriented scheduling. Predefined
types of events, such as newly arriving and/or cancelled jobs, machine failures or
unavailability of material, tools and/or staff, processing times different from the
2.3 The Closed View of Scheduling Problems and Decisions 23
planned ones (a very common situation), etc. give reason for updating the schedule.
Sometimes, not a single modification but a certain amount (number) of modifications
is awaited before a plan update is initiated. The modifications might be just added
to the old schedule or a more or less complete replanning is executed while avoid-
ing nervousness, if possible, as mentioned above with respect to the rolling horizon
approach. It should be pointed out that event-oriented scheduling requires more or
less permanent observation (and availability) of all relevant actual data.
As already mentioned, the scheduling process itself might be static or dynamic.
Static in this context does not necessarily mean the absence of intermediate updating,
e.g. in a rolling horizon context, and dynamic does not simply mean more than one
planning period. Recall that static scheduling is separated from dynamic scheduling
by the fact that static scheduling derives a plan/schedule always from scratch,
without explicitly taking into account that changes might (and will) appear before
the end of the plan’s execution. The explicit provision of resources (time, capacity,
etc.) for these changes and/or scheduling with the explicit inclusion of these
possible changes (e.g. by applying scenario techniques and/or techniques of flexible
planning) characterise dynamic scheduling.
Within the dynamic context above mentioned, at least four different types of
scheduling approaches can be identified:
• Completely reactive scheduling. This type of scheduling is defined more or less as
online scheduling. On the assumption that centrally determined schedules are of
poor quality, it is supposed that quick and local reaction yields better scheduling
results. Therefore, simple scheduling approaches (known as scheduling policies
and described in Sect. 7.4), are locally applied on the shop floor level, possibly in
real time to determine the next job to be processed on a specific machine. This
approach is flexible and is able to include most recent information from the very
detailed shop floor level. However, because of its myopic perspective, the overall
performance of this approach is questionable since it usually includes an inherent
lack of coordination.
• Predictive-reactive scheduling. The most common dynamic scheduling approach
in manufacturing systems is called predictive-reactive scheduling. It combines
static-deterministic scheduling as a first step with event-based reactive, possi-
bly real-time updating of schedules in the meantime as second step until a new
static-deterministic schedule is generated. Apart from its time coordination effect
between the two steps, this approach is also intended to coordinate the broader view
of the overall system to be scheduled in the first step with the myopic perspective
of rescheduling in the second. However, as often confirmed empirically, large tol-
erances in the second step often significantly reduce the overall performance of the
system since the finally implemented schedule may deviate significantly from the
original schedule—and the deviations are a result of local, myopic considerations
exclusively.
• Robust predictive-reactive scheduling. Taking into account the problems of
predictive-reactive scheduling, additional robustness considerations should be
included, both with respect to the robustness of the first step’s overall schedule
and the limitation of the second step’s tolerances. The focus of robust predictive-
reactive scheduling is building predictive-reactive schedules which minimise or
at least limit the effects of disruption on the performance measure values of the
realised schedule.
• Robust proactive scheduling. Robust proactive scheduling intends to immunise
the predictive schedule in advance against possible stochastic disruptions. Usually,
specific time buffers are included to cope with this kind of uncertainty and to make
the predictive schedule robust. Determination of the predictability measures is the
main difficulty of this approach.
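The completely reactive approach listed above amounts to an online dispatching policy: whenever the machine becomes free, the next job is chosen locally from the queue. A minimal sketch with hypothetical arrivals, using SPT as the local rule (one of the scheduling policies discussed in Sect. 7.4):

```python
# Sketch (hypothetical data): completely reactive scheduling as online
# dispatching on a single machine with SPT as the local priority rule.

import heapq

def dispatch_spt(arrivals):
    """arrivals: list of (arrival_time, job, proc_time); returns processing order."""
    events = sorted(arrivals)                    # handle arrivals in time order
    queue, order, t, i = [], [], 0, 0
    while i < len(events) or queue:
        if not queue:                            # machine idle: jump to next arrival
            t = max(t, events[i][0])
        while i < len(events) and events[i][0] <= t:
            _, job, p = events[i]
            heapq.heappush(queue, (p, job))      # SPT: shortest available job first
            i += 1
        p, job = heapq.heappop(queue)            # dispatch, then advance the clock
        order.append(job)
        t += p
    return order

arrivals = [(0, "A", 8), (1, "B", 2), (2, "C", 3), (9, "D", 1)]
print(dispatch_spt(arrivals))   # ['A', 'B', 'D', 'C']
```

Note the myopic behaviour: at time 0 only the long job A is available, so the policy commits to it immediately, delaying the short jobs B and C; this is exactly the coordination drawback of purely local reaction mentioned above.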
Predictive scheduling is an integral part of manufacturing systems planning. Predic-
tive schedules are often produced in advance in order to direct production operations
and to support other planning activities. Since most manufacturing systems operate
in dynamic environments subject to various real-time events, this may render the pre-
dictive optimal schedule neither feasible nor optimal. Therefore, dynamic scheduling
is of great importance for the successful implementation of approaches to real-world
scheduling systems.
Rolling horizon approaches and reactive scheduling can be interpreted as manifes-
tations of rescheduling approaches. These rescheduling approaches may be triggered
externally, e.g. by newly arriving orders or by reaching the next planning point in
time. However, they might also be induced internally, i.e. by any significant deviation
from the planned, predictive schedule derived earlier. The reasons for these
deviations might be multifaceted as is well known. They can refer to resource avail-
ability, stochasticity of processing times, etc. The decision whether a deviation is
significant (i.e. it initiates an update of the schedule) or not, depends on the assess-
ment of the decision maker(s). In case of large deviations and subsequent adjustment
requirements, the existing schedule can be updated, repaired or rejected/stopped. In
the latter case, a completely new schedule will be determined.
The ‘core’ rescheduling approaches include updating and repairing of a schedule.
Rescheduling in this sense is said to be ‘the process of updating an existing
production schedule in response to disruptions or other changes. This includes the
arrival of new jobs, machine failures, and machine repairs’ (Vieira et al. 2003). Reasons for
rescheduling may also be due date changes, job cancellations, delay in the arrival or
shortage of materials, change in job priorities, rework or quality problems, overesti-
mation or underestimation of processing times, operator absenteeism, etc. Reaction
to rescheduling requirements does not only include the modification of the schedule
itself but may also refer to the modification of its preconditions, including the alter-
natives of overtime, assignment of utility persons, in-process subcontracting, process
change or re-routing, machine substitution, etc. These arrangements represent means
to augment the manufacturing capacity as the basis of a (re-)scheduling decision,
while opposite arrangements will reduce it.
Vieira et al. (2003) present a framework for rescheduling which includes reschedul-
ing environments, rescheduling strategies (including rescheduling policies) and
rescheduling methods as categories of classification. This framework is outlined
in Fig. 2.2.
The rescheduling environment can be either static or dynamic (see the classification
above): in a static environment, a given set of jobs/orders/operations has to be
scheduled, while in a dynamic setting, jobs may arrive after the scheduling decision
has been taken. Therefore, in principle, the set of jobs might be interpreted as infinite.
The static situation can be further separated into a deterministic and a stochastic
setting. Note that a deterministic setting either needs no rescheduling, because it
represents the ‘true’ situation, or an update is performed from scratch. In contrast,
a dynamic environment might be (a) deterministic with respect to job arrivals (i.e.
production cycles which are repeated again and again), (b) stochastic with respect
to jobs’ arrival times but with the same flow of every job through the system (flow
shop setting), or (c) stochastic with respect to arrival times and varying with respect
to jobs’ routes through the system (job shop setting).
Rescheduling strategies for simply adapting the current schedule or modifying
it myopically then may use (mostly myopic) dispatching rules (see Sect. 7.4) or
some control-theoretic approach which basically tries to keep the system in balance
and initiates adaptations if the system under consideration runs the risk of getting
imbalanced. Otherwise, the combination of predictive and reactive schemes which
have been described above can be applied, periodically, event-driven or in some way
hybrid. Finally, the possible rescheduling methods are obvious from Fig. 2.2, namely
separating schedule generation from schedule repair approaches.
2.3.2 Complexity
Complexity on the formal sphere refers to the complexity of the formal problem
and the corresponding algorithms for solving it. We refer to this type of complexity
as computational complexity, and it will be treated in detail in Chap. 7. Roughly
speaking, this concept of complexity serves to separate algorithms whose
computational effort can be bounded by some polynomial function of some
characteristics of the formal problem from those where such a polynomial bound has
not been derived yet, and will probably never be found. This separation is justified
by the fact that polynomial approaches can usually be accomplished within
reasonable computational effort while non-polynomial algorithms cannot.
In particular, algorithms performing an explicit or implicit enumeration of each
feasible solution are of this non-polynomial type. Since, as we have discussed in
Sect. 1.5.3, procedures to generate feasible and optimal schedules are basically enu-
merative, we can conclude that manufacturing scheduling is complex from this formal
perspective.
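The contrast between enumerative and polynomial approaches can be made concrete on a tiny instance: minimising total completion time on a single machine. Enumerating all n! sequences grows exponentially, while the SPT (shortest processing time) rule, known to be optimal for this criterion, needs only a sort. The processing times below are made up for illustration.

```python
# Sketch: enumerative vs. polynomial solution of 1 || sum C_j on a toy instance.
# Brute force inspects all n! sequences; SPT solves the same problem by sorting.

from itertools import permutations

def total_completion_time(seq):
    """Sum of completion times for processing times in the given order."""
    total, t = 0, 0
    for p in seq:
        t += p
        total += t
    return total

proc = [6, 2, 9, 4]

best = min(permutations(proc), key=total_completion_time)   # 4! = 24 sequences
spt = tuple(sorted(proc))                                   # O(n log n) shortcut

assert total_completion_time(best) == total_completion_time(spt)
print(spt, total_completion_time(spt))   # (2, 4, 6, 9) 41
```

For n = 4 the difference is invisible, but the enumeration count n! already exceeds 3.6 million at n = 10, which is exactly the behaviour the polynomial/non-polynomial separation captures.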
A final remark is that the polynomial/non-polynomial behaviour of an algorithm
refers to its worst-case performance and does not give any insight into its average
behaviour. In contrast, from a real-world application point of view, a non-polynomial
algorithm might reach a good or even the optimal solution in reasonable time.
However, the proof of this optimality might be an additional and very time-consuming
step. This, from a practical point of view, gives reason to prematurely stop an
optimising algorithm, especially if an available bound indicates that the maximum
deviation of the current solution from the optimal solution is acceptable.
It is a commonplace that real-world problems are complex and their respective formal
problems are complex as well—no matter how complexity is defined. Here we just
intend to structure real-world complexity and thereby to provide starting points for
handling (including reducing) complexity.
Real-world decision problems usually include complexity imposed by a complex
decision framework. This may include among others
• different, maybe even not clearly specified objectives,
• a large variety of sometimes not even clearly pre-specified constraints,
• a large, maybe even not clearly pre-specified number of possible actions,
• a possibly hierarchical or even not clearly specified system of planning and
decision-making and decision-makers,
• complexity which is induced by dynamics and uncertainty,
• the interaction of all aspects mentioned above, within their category and between
the categories.
As already indicated earlier, there is no clear definition of complexity. However,
there is a rough-and-ready classification of complexity by Reiss (1993a, b) which
can easily serve at least as a first approach to clarify the complexity issues of a
decision problem, and particularly of manufacturing scheduling. Reiss classifies
complexity aspects into
• Mass aspects, further divided into
– Multiplicity, i.e. number of elements and interactions in the system, and
– Variance, i.e. number of different elements and interactions in the system, and
• Chaos aspects, further divided into
– Ambiguity, i.e. degree of uncertainty about the characteristics of the elements
and interactions in the system, and
– Changeability, i.e. the change of the characteristics of the elements and interac-
tions over time (thus closely related to dynamics).
We point out that classifying (manufacturing scheduling) decision problems
by means of this scheme not only provides a systematisation but also gives hints on
how to treat the complexity of a decision problem, be it by simplification, by
identifying the relevant issues, by indicating basic solution strategies, etc.
By applying this scheme of complexity classification to manufacturing scheduling,
the multiplicity of a manufacturing system is determined, e.g. by the number of
jobs, the number of stages, etc. Variance might, e.g. refer to the (non-)homogeneity
of the jobs’ processing times, the diversity of job routes in the system, etc. Replacing
several small jobs by one larger job, replacing single products by product types,
ignoring possibly sequence-dependent setup times, or replacing several machines
on one (hopefully non-bottleneck) stage by one ‘formal’ machine with aggregated
capacity on this stage are examples of strategies addressing the mass aspect of
complexity in manufacturing scheduling. As is well known, these aspects are
intensively dealt with in scheduling science and practice; mostly, however, they are
only implicitly seen within a complexity context, if at all.
In contrast, the chaos aspects of Reiß's complexity classification, i.e. ambiguity/
uncertainty (deterministic vs. stochastic scheduling) and/or changeability (static vs.
dynamic scheduling), are addressed much more explicitly in the scheduling literature
and in applications, as discussed earlier. Replacing a dynamic problem by
a 'semi-dynamic' problem with a large time grid, or even by a static model (without
intermediate updates of information), or replacing probability distributions of process-
ing times by their expected values, are simplification strategies concerning the chaos
aspects of complexity.
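As a tiny illustration of the last strategy (all numbers hypothetical, not taken from the text), replacing a probability distribution of processing times by its expected value turns a stochastic instance into a deterministic one:

```python
# Hypothetical stochastic instance: each job's processing time is only
# known as a (min, max) range, assumed uniformly distributed here purely
# for illustration.
stochastic_times = {
    "job1": (6, 10),
    "job2": (2, 6),
    "job3": (8, 12),
}

# Simplification of the ambiguity (chaos) aspect: replace each
# distribution by its expected value, yielding a deterministic instance
# that standard deterministic scheduling models can handle.
deterministic_times = {
    job: (lo + hi) / 2 for job, (lo, hi) in stochastic_times.items()
}

print(deterministic_times)
```

The price of this simplification is, of course, that the variability information is lost and has to be re-introduced when the solution is implemented in the real system.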
Figure 2.3 summarises the discussion above. It should be mentioned that the com-
plexity of the decision-making problem (in our case, manufacturing scheduling)
refers not only to the structural characteristics of the planning object(s) and their rela-
tions but also to the planning procedure, including the time needed for executing
the procedure. On the one hand, with respect to the formal complexity discussed in the
previous section, this represents at least one interface to the approach to complexity
sketched out there. On the other hand, the complexity of the solution procedure is
closely connected to the speed of this procedure. Taking into account that real-time
solutions have emerged and have been required for many scheduling problems in recent
years, this aspect of the complexity of manufacturing scheduling becomes increas-
ingly important. It refers not only to the 'core' algorithms applied but also to related
aspects such as information and/or database management, from the input as well as
from the output perspective.
Using this concept of structuring complexity and applying it to manufacturing
scheduling problems can provide valuable hints on how to make the real-world
scheduling problem manageable, e.g. by well-defined simplification strategies in
model building and/or model solution, and by removing these simplifications when deriv-
ing and implementing a solution for the real-world problem.
2.3.3 Variability
will not occur. In high volume but not mass production, often a production calendar
will be generated which makes further scheduling considerations superfluous. Small
and medium sized lots of products with medium to large heterogeneity are the main
object of shop scheduling approaches. Many approaches and publications refer to
this case. The individual design of products and process sequences leads to (capacitated)
project planning in (job) shop environments, which is also a main subject of manufac-
turing scheduling approaches. Therefore, with respect to the long-term level, the
more product and especially process variability occurs, the more relevant
manufacturing scheduling approaches become.
Looking at the tactical (mid-term) level, i.e. referring to the allocation of resources/
capacities, availability and flexibility are the main drivers of manufacturing schedul-
ing with respect to the variability perspective. Flexible machines with small and
possibly sequence-independent changeover times but presumably longer process-
ing times per part are the counterpart to inflexible machines with long and/or
sequence-dependent setup times but smaller processing times per part. Availability
of machines also refers to the maintenance strategy. Small, more frequent interrup-
tions because of (preventive) maintenance usually make the system perform better
than long, infrequent interruptions for (probably curative) maintenance after
a machine breakdown. Similar considerations apply to workforce capacity,
possibly accompanied by labour regulations.
On the operating (short-term) level, where among others scheduling decisions
take place, we will refer to the scheme of Figs. 2.6 and 2.7 (see Sect. 2.4.1) for con-
sidering variability aspects. Input to scheduling from the upper levels of production
management (see Sect. 2.4.1) usually comprises extended demand data (including
type and size of demand, processing times, due dates, release dates, etc.). In most
cases/companies, these data are taken as unalterable input to the problem. However,
on the one hand, these data are often not as fixed as customers, the sales depart-
ment or somebody else outside the manufacturing system claims. In a make to stock
environment the targets derived from the demand prognosis might be discussed just
as well as customer-driven due dates in a make to order environment. This is, e.g.
reflected by the discussion on due date setting, which has been closely related to scheduling
analysis in the recent past. By introducing flexibility into due dates (e.g. by intro-
ducing due windows or by re-negotiating due dates after they have been preliminarily
fixed, see Sect. 2.3.4), the variability of the system can be reduced, uniformity can be
approached, and the performance of the system may be improved. Further down,
additional remarks on due date setting can be found. On the other hand, if demand
data show high and/or even increasing variability (which is not unusual in a world
where demand lots decrease and the degree of customer-individual requirements for
specific product features increases), this variability might not be reducible. In such
cases, variability usually can only be handled by introducing buffers (time, capacity
and/or inventory) or by allowing worse performance (lower service level).
In addition, at all levels, with respect to variability, attention has to be paid to the
performance of the system's bottleneck. On the long-term level, long-term capacity
considerations will take place to harmonise the long-term capacities of the system.
With respect to a mid-term horizon, availability of the bottleneck and/or guaranteeing
2.3.4 Flexibility
Clearly, one of the aspects affecting the decision process is the degree of flexibility
of the data at hand. While virtually all data under consideration are candidates to be
flexibilised, here we will refer to one type of flexibility with great impact in practice,
namely due date setting/quoting.
Although due dates are often seen as fixed input into manufacturing scheduling
problems, these due dates might be flexible in one way or another which is not
reflected in standard deterministic scheduling settings. Rather often, customers are
flexible with respect to due dates (at least within some limits) or the manufactur-
ing company is simply asked to propose due dates. So due date setting and/or due
date quoting represent additional decision problems closely related to manufactur-
ing scheduling which balance customer/sales needs on one hand and manufacturing
Fig. 2.4 Method for quoting lead times (Hopp and Spearman 2008)
capabilities on the other (Duenyas 1995). Thus people from the sales department and
from the manufacturing department will have to agree on a due date (Fig. 2.4).
Apart from this human resources aspect, formally a due date can be determined
rather easily on a coarse level. The determination of a reasonable due date has to
include three time components in the calculation, i.e.
• the time required to process the current work in process, w,
• the time required to process the currently waiting jobs in front of the system and
having higher priority than the job for which the due date is to be quoted, b, and
• the processing time of the job itself for which the due date is to be quoted, c.
Then the due date to be quoted is simply d = w + b + c. However, setting due dates
in this way may turn out not to be as easy as it might look at first glance. All three
components include lead times which become more difficult to calculate the more
complex/variable the manufacturing system is. This variability might refer to the jobs'
flow through the system as well as to their processing times.
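The d = w + b + c rule can be sketched as follows (the function name and the example numbers are illustrative; only w, b and c come from the text):

```python
def quote_due_date(w, b, c):
    """Coarse due date quote: time to clear the current work in process (w),
    plus the time to clear the waiting jobs with higher priority (b),
    plus the processing time of the job being quoted (c)."""
    return w + b + c

# Hypothetical example: 12 time units of WIP, 5 units of higher-priority
# backlog, and a processing time of 3 units.
print(quote_due_date(12, 5, 3))  # 20
```

In practice, w and b are themselves lead-time estimates, which is exactly where the variability of the system makes the quote unreliable.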
The first dimension refers to the scope of the decisions in production management. In
every company, many different decisions have to be taken on different levels (of hier-
archy and/or detail) and with different importance. Usually, these decisions are not of a
stand-alone type but have to be coordinated with many other decisions. Depending
on their relevance for the overall company, the time horizon of decisions’ impact
and their degree of detail of information and decision, these decisions are usually
characterised as strategic, tactical and operational decisions (see Chap. 1). Strate-
gic planning refers to long-term and conceptual decision problems to create and
maintain a dynamic setting for long-term success of the company (e.g. decisions
referring to locations, setting up a manufacturing hierarchy, (general and long-term)
choice of process types, organisation of information flows, etc.). Tactical planning
refers to mid-term and resource allocation tasks, i.e. setting up the infrastructure for
a successful company (according to the strategic implications), while operational
planning and control deals with short-term activities within a given infrastructure for
externally specified requirements such as fulfilling orders/jobs etc.
According to this dimension, scheduling is part of planning and controlling the
execution of manufacturing tasks within a given manufacturing infrastructure—and
not planning of the infrastructure itself. Accordingly, many decision problems related
to scheduling include a rather short-term and operational planning horizon.
The second dimension refers to the logistics of the manufacturing activity, which
is the generation of marketable or intermediate goods (or, for the sake of complete-
ness, also the elimination of bads, such as in waste combustion; we will not address
this aspect further here) that constitute the products/services and are denoted as
the outputs of the company. Manufacturing these goods requires inputs either pro-
cured externally or provided by own preceding production processes. This process
of transforming inputs into outputs is the manufacturing or production process and,
as it is the logistics stage between input and output, is sometimes called throughput.
Fig. 2.5 Manufacturing scheduling in its planning and control context (see, e.g. Corsten 2009,
p. 26)
Fig. 2.6 Multi-level scheme of PPC. a Hopp and Spearman (2008), b Pinedo (2012)
many textbooks on production management. Here we show in Fig. 2.6 two of these
presentations, taken from Hopp and Spearman (2008) and Pinedo (2012).
As can be seen, the widespread illustration scheme (a) includes the aggregate
production planning level, which is mainly used to provide adequate resources to
the system at the long-term end of this overall short-term system. Manufacturing
scheduling itself is included only rudimentarily in this scheme, namely by the expres-
sion 'job dispatching'.
Not as common, but in our view reflecting the short-term aspects of manu-
facturing scheduling more adequately, scheme (b) of Fig. 2.6 represents basically the
same system as (a) (aggregate production planning might be added at the top of this
scheme). However, manufacturing scheduling (labelled 'Detailed scheduling') is
presented in more detail, i.e. by separating the phases of scheduling and reschedul-
ing, dispatching, and shop floor management. 'Rescheduling' here means setting up
a more or less completely new schedule, while dispatching means reacting to the shop
status, e.g. by applying dispatching rules to the jobs waiting in front of a machine.
Therefore, this scheme reflects the aspects of predictive and reactive scheduling
already addressed in Sect. 2.3.1.
Figure 2.6b indicates that manufacturing scheduling is embedded into the classi-
cal hierarchical scheme of manufacturing through an interface with material requirements
planning and capacity planning: it provides implications from scheduling con-
straints to this level on one hand, and receives information on jobs (shop orders) and
their earliest release dates on the other. If scheduling/rescheduling and dispatching
are seen as the level of manufacturing scheduling, then its data provide the main
information with respect to job loading to the shop floor and its management, while
data collection on the jobs' current status gives feedback information for updating
the dispatching as well as the schedule as a whole.
Apart from this hierarchical integration of manufacturing scheduling into the plan-
ning and control context, it has to be mentioned here as well that the different levels of
this scheme involve different levels of detail (at least) with respect to products/jobs,
capacities/resources and time. Because of this, horizontal (logistical) and diagonal
coordination tasks additionally occur. Without going into the details of these coordina-
tion tasks, their main components should at least be mentioned:
• decomposition and composition,
• aggregation and disaggregation,
• hierarchical coordination (including anticipation, feed-forward and feedback com-
ponents),
• model building (including central and/or decentral components and their respective
decision structures),
• problem solving (for the single partial problems as well as for their integration),
and
• fitting the planning and control system into the organisational structure.
Numerous drawbacks of the classical PPC system, which have been widely dis-
cussed for decades in the literature, and the increasing ability to integrate different
planning systems and planning levels from an IT perspective, have led to the develop-
ment of so-called APS systems. Additionally, the Supply Chain Management (SCM)
'movement' of recent years, be it intra-organisational SCM (within one sys-
tem and tending towards control via hierarchical structures) or inter-organisational SCM
(possibly crossing systems, with coordination tending to be controlled via market-based
structures), has promoted modified structures of PPC systems within the planning and
control system of the logistics system. We only refer to some aspects of these sys-
tems which are relevant for manufacturing scheduling. More specifically, an APS
system:
• gives a framework for manufacturing planning and control which includes both
the hierarchical and the logistics perspective,
• comprises advanced planning approaches, including 'true' optimisation
approaches, and
• includes state-of-the-art IT technology, i.e., among many other things, the availability
of quickly (possibly in real time) updated data within a network structure.
The modular structure of APS systems can be seen in Fig. 2.7, in which the
information flows are indicated by arrows. Manufacturing scheduling is located
as a short-term module within the production hierarchy, and it is coordinated by infor-
mation flows related to production planning via lot sizes, and to procurement and
distribution via due dates. It provides the main information to the shop floor control
level.
Figure 2.8 presents the planning tasks in APS systems as assigned to every module.
As can be seen, the scheduling module includes machine scheduling, and scheduling
refers to data from the mid-term procurement, production and distribution levels,
using data on material availability, release dates, as well as due dates/deadlines.
Finally, it should be mentioned that both classical PPC systems and APS
systems are integrated into standard ERP software packages. Therefore, they are
applicable to real-world problems and, if they contain the respective modules, they are
able to perform automated manufacturing scheduling or to support the respective deci-
sion problems and decision makers. However, to perform these tasks, adequate data
must be available at the right time in the right place. Therefore, advanced database
structures and database management are most important features of these systems.
In recent years, data and model integration as well as real-time availability
of data have contributed significantly to the performance of PPC and APS systems
within ERP systems, including modules for manufacturing scheduling.
Fig. 2.7 APS modules from a logistics (supply chain) perspective (Reuter and Rohde 2008, p. 249)
Fig. 2.8 (Supply chain) planning tasks in APS systems (modified from Fleischmann et al. 2008)
Many problems, and their respective models and decisions, in manufacturing combine
machine scheduling aspects with other aspects of production management, such as
inventory control, workforce scheduling, maintenance scheduling, capacity control
or pricing. These approaches are manifold and will not be described further here.
Aspects of model decomposition and solution integration are relevant here as well,
but they are even more complicated, since the objects to be planned and coordinated are
even more complex than in the settings addressed earlier.
Spearman 1996, 2008). This is, in any case, a very small extract of the related vast
literature on scheduling topics. For a detailed discussion on planning and the PPC
system, the reader is referred to Domschke et al. (1997). Aspects of horizontal (logis-
tical) and diagonal coordination tasks within the PPC system are discussed in detail
in Stadtler (2000), while a comprehensive presentation of and an overview on APS
systems is given in the book chapter of Reuter and Rohde (2008). Finally, a nice
presentation of the main concepts related to supply chain management is Stadtler
and Kilger (2002).
Part II of the book consists of Chaps. 3–6 and is devoted to scheduling
models. In Chap. 3, we give an introduction to scheduling models. Next, in
Chap. 4, the different scheduling constraints found in practice (process-related,
operations-related, storage-related, transportation-related, ...) are reviewed and
classified, as well as their interactions. Chapter 5 deals with the different objec-
tives that may be considered, and reviews the main performance measures found in
scheduling research. Finally, we discuss in Chap. 6 the integration/alignment of
classical scheduling objectives with the rest of a company's objectives.
Chapter 3
Overview of Scheduling Models
3.1 Introduction
As discussed in a previous chapter (see Sect. 1.5), scheduling refers to a
decision-making process in which, in order to solve a real-world problem, a formal
model is obtained. This part of the book is devoted to (formal) scheduling models
and addresses the elements composing a scheduling model (jobs/machines/operations,
constraints, criteria) as well as the main issues related to building scheduling models.
In particular, this chapter overviews the basics of modelling scheduling decision
problems. As we will point out, the modelling process in scheduling first goes through
a rather detailed classification of scheduling models.
More specifically, in this chapter, we
• give an outline of the nature of modelling for problem solving (Sect. 3.1.1), together
with the main characteristics of models, types of models, and common errors and
shortcomings (Sect. 3.1.2),
• introduce the basic definitions for building manufacturing scheduling models
(Sect. 3.2.1), with special emphasis in the resources (Sect. 3.2.2) and process-
ing layouts (Sect. 3.2.3), discuss the main processing constraints (Sect. 3.2.4) and
outline the types of scheduling criteria (Sect. 3.2.5)
• present a classification for scheduling models (Sect. 3.3),
• discuss the identification of jobs as the processing unit in scheduling models
(Sect. 3.4).
Modelling lies within the very core of the scientific approach to problem solving.
Reality is utterly complex and varied. Models give some rough classification of
problems and settings in an attempt to guide the researcher in a fruitful direction.
Models range from simple pictograms (e.g. traffic signs) to complex distributed and
evolutionary agent systems. The following list is just a small summary of the types
of models and methodologies that result in models and that are of use in scheduling
problem solving:
• Mathematical models: mathematical programming, linear programming,
non-linear programming, deterministic, probabilistic, etc.,
• Graphical models: pictograms, drawings, Venn diagrams, flow charts, etc.,
• Statistical models,
• Simulation models, and
• Algorithmic models.
Mathematical and algorithmic modelling are the bulk of the types of models used
in deterministic scheduling. This will be explained later in Chap. 6.
Models are not the holy grail of the scientific method. Actually, they are a nec-
essary evil. This is so since no matter how perfect the analysis of the outcome of a
model might be, the conclusions of the whole research could be completely flawed
due to a countless series of problems that abound in model building. In the following,
we provide just a summarised list:
1. The model could be incomplete to the point of rendering the results impractical
for the studied reality.
2. The model might be unrelated to the studied reality. This is often referred to
as the 'engineering problem', i.e. 'perfectly solving the wrong problem'.
3. The model might be fed with inaccurate or incomplete data. Hence, the conclu-
sions from its study might be off from the modelled reality.
4. The model works only under severe assumptions and limitations that hinder its
generality and inferential power.
5. The model might be too complex to solve realistically sized instance problems.
Again, the generalisations might be hard to achieve under such circumstances.
6. Improper procedure when testing the results of the model. Lack of sound statis-
tical testing of the sampled results.
7. Related with the above, small and unrepresentative samples used in the tests.
8. Perceptual, interested bias or even fraud in the interpretation of the results of the
model.
9. Difficulty in reproducing the results. The lack of reproducibility is behind
many scientific faux pas. Many of the above items result in irreproducibility.
10. Reasoning errors: confusing causality with mere coincidence, mistaking correlation for
causation, etc.
All the previous problems also occur in building manufacturing scheduling
models. As we will study closely in the following chapters, especially in Chap. 6,
the most important problems in scheduling model construction are those listed
above as 1, 2, 3, 4 and 5. More specifically, if a scheduling model is
overly simple, the resulting production schedule will be indicative at best for the
modelled production scheduling problem. In many extreme cases, the scheduling
model studied is such an oversimplification that the results could be completely
disconnected from practice. Furthermore, many scheduling models work under so
many severe assumptions that they also end up disconnected from reality. Similarly,
complex models are often hard to solve with current technologies, and solutions
to all but the smallest problem instances are unattainable. As we will mention in later
chapters, the correct formulation of sufficiently precise manufacturing scheduling
models is one of the main issues in the scheduling field.
For the time being, we will assume that a scheduling model is determined by a known,
finite and deterministic number of jobs that have to be processed on an equally known
number of machines. Although we have already discussed in Chap. 1 that all these
assumptions are not easily and readily acceptable in all situations, we will start with
these simplest cases in order to gain the knowledge required for addressing more
complex settings.
More specifically, we will assume that there is a set N of jobs that are consecutively
indexed N = {1, 2, . . . , n}. Therefore, there is a total of n jobs to be processed.
Subindices j and k will be used to refer to jobs in the set N. Similarly, there is a set
M of m machines or productive resources, that are indexed as M = {1, 2, . . . , m}.
We will mainly use the subindex i to refer to any machine in the set M. If not
stated otherwise, machines and jobs are independent, and all the following data are
deterministic and known a priori:
• Task or Operation (Oij ). Each job j ∈ N has a predefined number of operations or
tasks, each of which is carried out on (at least) one machine. Therefore, Oij denotes
the task of job j on machine i. It is also common to refer to tasks simply as (i, j).1
Furthermore, each operation has to be scheduled, that is, each operation will end
up with a start and a finish (end) time as a result of the production scheduling. We
will refer to those times as SOij and EOij , respectively.
• Processing route. Each job has a predefined route through the production floor.
We will denote by Rj the ordered list of machine visits. For example,
this list could be (2, 4, 5, 1, 3) for a given job, meaning that the job has five different
operations and that it has to visit first machine 2, then machine 4, and so on until
the last machine, 3. A discussion of the reasons for the assumption of a processing
route is given in Chap. 1.
• Processing time (pij ). This is the known span of time that is needed at machine i
to process job j; machine i will be busy during this period processing job j. This
processing time is associated with the operation or task Oij . Usually, it has to be
satisfied that pij ≤ EOij − SOij . Notice that the previous expression does not
necessarily hold with equality, as the processing of a job could be interrupted several
times, as, for example, when long jobs spanning many days are stopped during the
night shifts.
• Release or ready dates (rj ). These are the instants in time from which jobs might start;
they denote the first point in time at which the job is available. They are helpful
when modelling earliest starting times due to the availability of raw materials or the order
placement time, just to name two possible examples. Jobs enter the production shop
no earlier than rj . This implies that ∀j ∈ N, i ∈ M, SOij ≥ rj . In other words, no
1 Note that, in order to keep the complexity at minimum, this notation excludes the situation where
a job has to visit the same machine more than once.
task of job j might start before rj . In some scenarios, each task might have a release
date rij , in which case it has to be satisfied that ∀j ∈ N, i ∈ M, SOij ≥ rij .
• Follow-up time (fuj ). While release dates refer to the beginning of the processing of a
job (or even of a single operation), a job or an operation may also have, after being
finished on the respective machine, a follow-up time during which it does not require any
additional resources or machines but cannot be processed further (such
settings occur, e.g. in cooling or ageing processes). We will see later that these
times are often modelled as waiting times or lags.
• Delivery or due dates (dj ). These are the instants in time at which jobs should ideally
be completed. They model the delivery dates agreed with customers and constitute a
commitment. Therefore, the last operation of each job should be completed before
dj . Usually, due dates are not fully mandatory and can be violated, subject to a
penalty or cost. Exceeding the due date, i.e. finishing tardy, is usually connected
to some penalty, e.g. a reduction of earnings.
• Mandatory due dates or deadlines (d j ). Contrary to due dates, deadlines cannot
be violated and it is mandatory, due to many possible reasons, to finish the job
before d j . Therefore, ∀j ∈ N, i ∈ M, EOij ≤ d j . As we will see in later chapters,
the presence of deadlines may cause infeasibilities in scheduling problems as it
might be physically impossible to satisfy all deadlines of a set of jobs in a given
production environment.
• Due windows. An extension of the concept of due dates is the so-called due
window, i.e. a time interval within which a job should be delivered. This interval
might be different for each job, or all or some jobs might have a common due window. These
windows can be modelled as [dj− , dj+ ], where dj− and dj+ denote the start and finish
of the delivery window, respectively.
• Costs, priorities, importance or weights (wj ). This figure, measured on any desired
scale, models the importance or priority (as well as many other possible indicators)
of each job. With the weights, it is easy to establish a relative hierarchy among the
set of n independent jobs. This weight could be established as a function of the
total production cost of the job, importance of the client that placed the order,
magnitude of penalty for finishing the job beyond its due date or many others.
• Release dates for machines (rmi ). Similarly to the job release dates, machines might
not be available before a given point in time. This implies that ∀i ∈ M, j ∈ N, SOij ≥ rmi .
Usually, reality is much more complex, as machines are seldom continuously
available from rmi onwards. As a matter of fact, machines might have unavail-
ability windows due to, for example, preventive maintenance or even unexpected
breakdowns.
As can easily be seen, the previous list is just a meagre extract of the sheer amount
of data that could be associated with jobs and/or machines (for a more exhaustive
list, see e.g. Pinedo 2012). More information will be added in subsequent sections
and chapters whenever needed.
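The per-job data items above can be collected in a simple record. The following sketch (field names are our own shorthand, not standard notation) reuses the route example (2, 4, 5, 1, 3) from the text with hypothetical processing times:

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class Job:
    j: int                            # job index in N = {1, ..., n}
    route: Tuple[int, ...]            # processing route R_j
    p: Dict[int, float]               # processing times p_ij, keyed by machine i
    r: float = 0.0                    # release date r_j
    d: Optional[float] = None         # due date d_j (violable at a cost)
    deadline: Optional[float] = None  # mandatory deadline (never violable)
    w: float = 1.0                    # weight / priority w_j

# Hypothetical job visiting machines 2, 4, 5, 1 and 3 in that order.
job = Job(j=1, route=(2, 4, 5, 1, 3),
          p={2: 8, 4: 4, 5: 10, 1: 5, 3: 3}, r=0.0, d=40.0, w=2.0)

# Total processing time of the job along its route:
total_p = sum(job.p[i] for i in job.route)
print(total_p)  # 30
```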
With the previous data, and among all possible classifications, the schedul-
ing literature is by far most commonly sectorised according to how the different machines
in the set M are laid out, and according to how the different job processing routes Rj are
specified. The most typical layouts are discussed next, together with some examples of
schedules. Unless stated otherwise, for reasons of simplicity we assume in the examples
that the jobs are readily available for processing and that preemption is not allowed.
As the name implies, the plant layout is formed by a single machine. Therefore,
each job has to be processed exactly once on that machine, so there are really no
processing routes.
Single machine models are relatively simple and are often viewed as too theoret-
ical. However, they can be seen as special cases of other more complex layouts and
therefore their study is of significant interest. In addition, we have already discussed in
Chap. 1 that a single machine can adequately represent, under certain circumstances,
larger real-world entities within the shop floor, or even the whole shop floor.
The flow layout schematics of a single machine model are clear: jobs enter the
machine, each one of them keeping it busy for at least pj processing time units (note
that the processing times pij simplify to just pj in this case), and leave the machine
once finished. The schematics are shown in Fig. 3.1.
Picture an example with six jobs, whose processing times are given in Table 3.1.
In a single machine model, each possible semi-active schedule (see Sect. 1.5.2)
can be represented by simply specifying the sequence in which the jobs are to be
processed in the machine. For instance, if we assume the following sequence π =
(1, 2, 3, 4, 5, 6), the resulting Gantt chart (see Sect. 1.5.1) of this single machine
example is shown in Fig. 3.2.
Since in semi-active schedules all jobs are processed without interruptions on
the single machine, SO1 = 0 and EO1 = SO1 + p1 = 0 + 8 = 8 for this sequence.
Similarly, SO2 = EO1 = 8 and EO2 = SO2 + 4 = 12. Overall, it is easy to see that
the machine is going to be busy during p1 + p2 + · · · + p6 = 42 units of time. As a matter of
fact, and as will be discussed later, the order of the jobs in the single machine setting
is not going to affect the total busy time of the machine.
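The start and end times of this example can be sketched as follows, a minimal implementation of a semi-active single machine schedule using the data of Table 3.1 (the function name is ours):

```python
# Processing times p_j from Table 3.1 (single machine, so p_ij = p_j).
p = {1: 8, 2: 4, 3: 10, 4: 5, 5: 3, 6: 12}

def semi_active_schedule(sequence, p):
    """Start (SO_j) and end (EO_j) times when the jobs in `sequence`
    are processed back to back without inserted idle time."""
    start, end, t = {}, {}, 0
    for j in sequence:
        start[j] = t
        end[j] = t + p[j]
        t = end[j]
    return start, end

start, end = semi_active_schedule((1, 2, 3, 4, 5, 6), p)
print(start[1], end[1])  # 0 8
print(start[2], end[2])  # 8 12
print(end[6])            # 42, the total busy time of the machine
```

Permuting the sequence changes the individual start and end times, but the end time of the last job always equals the sum of all processing times, 42.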
Table 3.1 Processing times (pj ) of the six jobs for the single machine example

Job (j)           1    2    3    4    5    6
pj (machine 1)    8    4   10    5    3   12
Fig. 3.2 Gantt chart with the result of the sequence π = (1, 2, 3, 4, 5, 6) for a single machine
scheduling example with the processing times of Table 3.1
Parallel machine models are interesting since a second dimension arises in the
scheduling model. Again, if we assume semi-active schedules, a schedule can be
represented by the assignment of each job to a machine plus the specific sequence
in which the jobs are processed on each machine.
The flow layout schematics of a parallel machine is also simple. Jobs enter the
shop, are assigned to one out of the m machines, processed and then leave the shop.
The schematics are shown in Fig. 3.3.
3.2 Formal Definitions 53
Table 3.2 Processing times (pij) of the eight jobs and three unrelated parallel machines example

    Job (j)      1    2    3    4    5    6    7    8
    Machine 1   37   25   25   11   48   68   28   28
    Machine 2   44   42   20   27   56   59   12   39
    Machine 3   44   66   58   87   53   41   47   76
Flow shops (often, this term is spelled together as flowshop, we will use both forms
interchangeably) are the first of a series of the so-called ‘shop’ layouts, which include
54 3 Overview of Scheduling Models
[Figure: machine 1 processes jobs 4, 2, 8; machine 2 processes jobs 7, 3, 1; machine 3 processes jobs 6, 5.]
Fig. 3.4 Gantt chart with the result of the assignment and sequencing of eight jobs on three parallel
machines
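Reading the assignment of Fig. 3.4 off the chart, the machine busy times follow directly from Table 3.2. A small sketch (ours, assuming each machine processes its assigned jobs back-to-back from time zero, as in a semi-active schedule without further constraints):

```python
# Unrelated parallel machine times of Table 3.2: p[machine][job - 1].
p = {1: [37, 25, 25, 11, 48, 68, 28, 28],
     2: [44, 42, 20, 27, 56, 59, 12, 39],
     3: [44, 66, 58, 87, 53, 41, 47, 76]}
# Assignment and per-machine sequences depicted in Fig. 3.4.
assignment = {1: (4, 2, 8), 2: (7, 3, 1), 3: (6, 5)}

# Busy time of each machine is the sum of the times of its assigned jobs.
loads = {i: sum(p[i][j - 1] for j in seq) for i, seq in assignment.items()}
# loads == {1: 64, 2: 76, 3: 94}; the schedule finishes at max(loads) = 94.
```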
flow shops, job shops and open shops (the latter two are the object of study in the
following sections).
In a flow shop, there are m machines, but instead of being arranged in parallel, they
are organised in series. It is assumed that each machine serves a different purpose.
Every job has to visit all machines, in the same specified order. This order, without
loss of generality, can be assumed to be 1, 2, . . . , m. As the order is the same for
all jobs, we have that ∀j ∈ N, Rj = (1, 2, . . . , m). Furthermore, pij denotes the time
job j needs at machine i. The term ‘flow’ refers to the fact that all jobs flow from
one machine to the next in the same order; this is clearly depicted in the flow layout
schematics of Fig. 3.5.
Note that the order of processing of each job on each machine is, in general,
different. Therefore, each semi-active schedule can be represented by giving, for
each machine, the sequence to process the jobs. In many cases, and mostly for
reasons of simplicity in scheduling and shop floor control, it is assumed that the
processing sequence is the same for all machines. This special case is referred to
as the permutation flow shop model. Clearly, in a permutation flow shop model,
each semi-active schedule can be represented by a single sequence of the jobs, which
means that the maximum number of possible (semi-active) schedules in a permutation
flow shop is reduced to n! as compared to the (n!)m in the (general) flow shop model.
Table 3.3 Processing times (pij) of the five jobs and four machines permutation flow shop model example

    Job (j)      1    2    3    4    5
    Machine 1   31   19   23   13   33
    Machine 2   41   55   42   22    5
    Machine 3   25    3   27   14   57
    Machine 4   30   34    6   13   19
Fig. 3.6 Gantt chart with the result of the sequence π = (4, 2, 5, 1, 3) for a permutation flow shop
model with the processing times of Table 3.3
Owing to its greater simplicity, let us start with an example of a permutation flow
shop with four machines arranged in series and five jobs to schedule. The processing
times are given in Table 3.3. Assuming the sequence π = (4, 2, 5, 1, 3), the resulting
Gantt chart is provided in Fig. 3.6.
Notice how some idle times start to appear on the machines. This differs from
the single and parallel machine settings depicted in Figs. 3.2 and 3.4, where the
machines were continuously busy. In Fig. 3.6, we see that the first machine is never
idle from the start of its first operation until the finishing time of its last operation,
as the jobs are being ‘launched’ to the shop one after another. However, at least
while the first job in the sequence is being processed on the first machine, all other
machines are left waiting. Other notable idle times occur at machines 3 and 4 after
processing the first job of the sequence (job 4). These large idle times are due to the
fact that job 2, the second in the sequence, has a very large processing time on the
second machine (55), and machine 3 has to wait until this job is completed on the
previous machine.
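The completion times underlying Fig. 3.6 follow the standard permutation flow shop recurrence C(i, k) = max(C(i − 1, k), C(i, k − 1)) + p(i, π(k)). A sketch of this computation with the data of Table 3.3 (our own code, for illustration):

```python
# Permutation flow shop data of Table 3.3: p[machine][job - 1].
p = {1: [31, 19, 23, 13, 33],
     2: [41, 55, 42, 22, 5],
     3: [25, 3, 27, 14, 57],
     4: [30, 34, 6, 13, 19]}
pi = (4, 2, 5, 1, 3)

def makespan(pi, p, m=4):
    """Makespan via C[i][k] = max(C[i-1][k], C[i][k-1]) + p[i][pi[k]]."""
    prev = [0] * len(pi)                  # completion times on machine i - 1
    for i in range(1, m + 1):
        cur, t = [], 0
        for k, j in enumerate(pi):
            t = max(t, prev[k]) + p[i][j - 1]
            cur.append(t)
        prev = cur
    return prev[-1]

# makespan(pi, p) -> 213 for this instance
```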
[Figure: the same job sequence (1, 2, 3, 4) is followed on all three machines.]
Fig. 3.8 Gantt chart of a 3 machine, four job permutation flow shop
[Figure: machine 1 follows the job sequence (2, 3, 1, 4), while machines 2 and 3 follow (2, 1, 3, 4).]
Fig. 3.9 Gantt chart of a 3 machine, four job (non-permutation) flow shop
problem. However, it might very well be that some of the non-permutation solutions
are worse than some of the permutation solutions.
The job shop is the second type of the so-called shop environments. As in flow
shops, in a job shop there are m machines. However, there is a big difference
compared to the flow shop: every job has a potentially different route. For example,
given three machines and four jobs, we could have R1 = (1, 2, 3),
R2 = (2, 1, 3), R3 = (3, 2, 1) and R4 = (2, 3, 1). Job shops are very frequent in
practice when modelling shops where each order is different from the others and
requires a different machine routing. In most theoretical job shop research, the jobs
visit all machines exactly once, with no missing machines and no re-visitation of
machines, although in Fig. 3.10 we give a (more realistic) flow layout schematics
including these aspects.
Fig. 3.10 Flow layout schematics of a job shop with machine skipping and recirculation
Let us also picture an example of a job shop. We have five jobs and four
machines. We take the processing times of the previous Table 3.3 and consider the
processing routes of the five jobs as indicated in Table 3.4. With these data, we can
construct one possible schedule, as given in the Gantt chart of Fig. 3.11.
Job shops are much more complex than flow shops, as the latter are the special
case in which all Rj are identical. Notice how the possibilities for improving
a sequence are endless. For example, in Fig. 3.11 there is a large idle time
on machine 3 between jobs 1 and 2. It would be better to start task O35 around time
110, just when task O15 finishes, delaying O32 a bit but in turn also advancing O25 .
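A schedule like that of Fig. 3.11 can be built mechanically. The sketch below uses the three-machine routes quoted earlier, with invented processing times in place of the Table 3.4 data, and constructs a semi-active schedule by repeatedly starting whichever next operation can begin earliest:

```python
# Sketch: building a semi-active schedule for the three-machine, four-job
# job shop routes quoted in the text. Processing times are invented for
# illustration only.
routes = {1: (1, 2, 3), 2: (2, 1, 3), 3: (3, 2, 1), 4: (2, 3, 1)}
p = {(1, 1): 5, (2, 1): 4, (3, 1): 6,   # p[(machine, job)]
     (2, 2): 3, (1, 2): 7, (3, 2): 2,
     (3, 3): 4, (2, 3): 6, (1, 3): 5,
     (2, 4): 2, (3, 4): 5, (1, 4): 4}

def build_schedule(routes, p):
    machine_free = {1: 0, 2: 0, 3: 0}
    job_free = {j: 0 for j in routes}
    nxt = {j: 0 for j in routes}          # index of next operation per job
    schedule = []                         # (job, machine, start, end)
    while any(nxt[j] < len(routes[j]) for j in routes):
        # among the next operation of every unfinished job, pick the one
        # that can start earliest (its machine and its job must be free)
        ready = [(max(machine_free[routes[j][nxt[j]]], job_free[j]), j)
                 for j in routes if nxt[j] < len(routes[j])]
        start, j = min(ready)
        i = routes[j][nxt[j]]
        end = start + p[(i, j)]
        machine_free[i] = job_free[j] = end
        nxt[j] += 1
        schedule.append((j, i, start, end))
    return schedule
```

The resulting list of operations respects both the routes and the machine capacities by construction.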
The open shop is the most general and most complex of the shop layouts. Again, as
in the flow and job shops, there are m machines and n jobs that have to visit all m
machines. However, the big difference is that the processing route of each job is not
fixed and is to be determined in the scheduling process. From another perspective,
it can be assumed that the routes Rj of the jobs are arbitrary and that the operations
of a job can be processed in any order on the machines. Figure 3.12 shows the flow
layout schematics of an open shop.
It is easy to see that flow shops and job shops are special cases of the open shop.
One way to see this is to consider the enormous number of possible solutions that
could be obtained following the example of Fig. 3.11, knowing that the only
constraints to be satisfied are that operations of the same job cannot overlap and that
[Figure: machine 1 follows the job sequence (1, 3, 4, 5, 2); machine 2, (4, 1, 2, 3, 5); machine 3, (3, 4, 1, 2, 5); machine 4, (2, 4, 5, 1, 3).]
Fig. 3.11 Gantt chart with a possible sequence in a job shop with the processing times of Table 3.3
and the processing routes of Table 3.4
[Figure: machine 1 follows the job sequence (1, 3, 5, 2, 4); machine 2, (2, 1, 4, 5, 3); machine 3, (5, 2, 3, 1, 4); machine 4, (3, 4, 2, 5, 1).]
Fig. 3.13 Gantt chart with a possible sequence for an open shop with the processing times of
Table 3.3
machines cannot process more than one job at the same time. One possible sequence,
following the processing time data of Table 3.3, is given in Fig. 3.13.
As can be seen, the open shop is very flexible. The schedule has been ‘squeezed’
and there are fewer idle times on the machines. This is because of the total routing
flexibility.
Table 3.5 Processing times (pij) of the five jobs, three stages and 10 machines in a hybrid flow shop

    Stage (i)   Machine (l)   Job (j):  1    2    3    4    5
    1           1                      27   45   56   40   30
                2                      80   85   25   12   75
                3                      44   17   39   29   25
    2           4                      25   80   12   16   25
                5                      39   45   97   24   88
                6                      44   71   28   25   96
                7                      38   26   10   27   44
    3           8                      93   30   67   29   10
                9                      45   63   66   88   89
                10                     20   80   35   15   25
greedy approach will not, in general, result in the best solution. Consider that the
objective is to finish the set of jobs as soon as possible. The fastest machine for job 1
at the third stage is machine 10 as p10,1 = 20. However, job 1 has been assigned to
machine 9, which has more than double the processing time. This is because jobs 3
and 4 have been already assigned to machine 10 (the fastest machine for them) and
machine 10 is already overloaded. Therefore, if we want to finish job 1 as soon as
possible, a slower machine has to be employed at the third stage. A second conclusion
is also that machine 6 in the second stage is not used for this schedule. This is because
the second stage has more machines than are actually needed.
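The effect just described can be reproduced with a small sketch (ours, an illustration only, ignoring the times at which jobs become available from stage 2): assigning every job at the third stage to its fastest machine overloads machine 10, while a greedy earliest-completion-time rule balances the load; machine 9 remains unused under both rules.

```python
# Stage 3 processing times from Table 3.5: p3[machine][job - 1].
p3 = {8: [93, 30, 67, 29, 10],
      9: [45, 63, 66, 88, 89],
      10: [20, 80, 35, 15, 25]}
jobs = [1, 2, 3, 4, 5]

# Per-job 'fastest machine' rule: ignores how loaded each machine already is.
fastest = {i: 0 for i in p3}
for j in jobs:
    i = min(p3, key=lambda m: p3[m][j - 1])
    fastest[i] += p3[i][j - 1]

# Greedy earliest-completion-time rule: assign each job to the machine
# that would finish it soonest given its current load.
ect = {i: 0 for i in p3}
for j in jobs:
    i = min(p3, key=lambda m: ect[m] + p3[m][j - 1])
    ect[i] += p3[i][j - 1]

# max(fastest.values()) -> 70, max(ect.values()) -> 69;
# machine 9 carries no load under either rule.
```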
The previous layouts are not, by a long shot, the only possible ones. They should be
seen as a broad classification inside which, with some degree of abstraction, some
[Figure: stage 1: machine 1 processes job 1; machine 2, jobs 4 and 3; machine 3, jobs 2 and 5. Stage 2: machine 4, jobs 1 and 5; machine 5, job 4; machine 6, no jobs; machine 7, jobs 2 and 3. Stage 3: machine 8, jobs 2 and 5; machine 9, job 1; machine 10, jobs 4 and 3.]
Fig. 3.15 Gantt chart with a possible sequence for a hybrid flow shop problem with the processing
times of Table 3.5 and the job sequences of Table 3.6
dry up or evaporate are clear examples. In extreme cases, products might not be
allowed to wait at all between stages. Frozen food is a good example of such
‘no waiting allowed’ scenarios.
• Similarly, a minimum waiting time in between operations might be necessary since
products might be too hot to handle after a furnace or kiln operation, to name an
example.
• Assuming that all data are known in advance and are deterministic is likely to
be unacceptable. In controlled environments and tested manufacturing systems,
the time needed for a given operation of a product might be known and pretty
much fixed. However, the nature of some production processes is stochastic, and
processing times (as well as any other data) might be hard to estimate or might
come in the form of a statistical distribution with known parameters.
• Production floors are highly dynamic. This means that jobs arrive over time, release
dates change, due dates might be re-arranged with clients, weights or importances
might vary. Basically, every piece of information, even if originally known in
advance, is subject to change.
• In-process inventory, or semi-finished products do not instantly move from one
machine to the other. Transportation times have to be taken into account. Some-
times, transportation is carried out by simple conveyor belts, other times, complex
robotic or automated guided vehicles are used to move the products inside produc-
tion plants. As a matter of fact, transporting the products can be seen as a routing
problem in complex plant layouts and large factories.
• Machines are operated by personnel. This has huge implications at so many levels.
From work shifts or timetables, to skill levels, learning effects and others. Machines
might operate at different speeds depending on the number of persons attending
to them. Similarly, more than one machine might be supervised by a single person
depending on the shop layout.
• Fabrication is not perfect. Frequently, rework, repairs and the reintroduction of
jobs into the lines occur.
• In today’s globalised economy, factories are no longer isolated elements.
Distributed manufacturing and complex supply chains bring implications that
reach into the scheduling decisions at manufacturing plants. The number of pos-
sible situations in such cases is very large.
• Jobs might have utterly complex processing routes, or even alternative processing
routes that might change the bill of materials depending on the chosen
manufacturing route.
• Processing times, even if known, can be worked on. Assigning more specialised
personnel or additional tooling or any other resource could shorten, for an addi-
tional cost, the processing times.
As a matter of fact, the previous list is just a glimpse of situations that may arise
in manufacturing scheduling. These will be treated in larger detail in Chap. 4.
As already noted in several parts of this book, there is a sheer variety of production
layouts, processing constraints and criteria in scheduling models. The traditional way
to handle the diversity of scheduling models has been to classify them using
taxonomies and notations that allow establishing the similarities and differences among
the different models. As we will discuss in Chap. 7, this classification is extremely
useful for building solution procedures for scheduling models based on those of
similar models. In the following sections, we present the two most popular
classification schemes.
From the shortcomings of the previous classification of Conway et al. (1967), other
authors proposed extensions, like Rinnooy Kan (1976), who proposed a notation
based on the triplet α/β/γ. This scheme was further extended and refined in the
works of Graham et al. (1979), Błazewicz et al. (1983) and Lawler et al. (1993).
In the following, we highlight the most important aspects of this notation, which
is pictured in Fig. 3.16.
• The first field in the triplet α defines the processing layouts. It is further split into
two sub-fields α1 α2 so that:
α1 = ∅ : single machine layout, sometimes also denoted as α1 = 1.
= P : identical parallel machines.
= Q : uniform parallel machines.
= R : unrelated parallel machines.
= F : flow shop.
= J : job shop.
= O : open shop.
α2 = 1, 2 . . . , m : fixed number of machines in the layout.
= ∅ : the number of machines is not fixed to a precise number (i.e. the problem studied is
not limited to a case with a specific machine count). Often this is also referred to as
simply m. Note that in single machine problems (α1 = 1), α2 has no meaning.
Therefore, with the α field, the machine layout can be easily defined. For example,
α = P3 means the considered layout is composed of three identical parallel
machines, while α = Fm or α = F represents a flow shop layout with any number of
machines.
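As an illustration, the α field can be decoded mechanically. The helper below is our own sketch, not part of the notation itself:

```python
# Hypothetical decoder (ours) for the alpha field of the alpha/beta/gamma
# triplet: returns a layout name and a machine count (None if left open).
LAYOUTS = {"1": "single machine", "P": "identical parallel machines",
           "Q": "uniform parallel machines", "R": "unrelated parallel machines",
           "F": "flow shop", "J": "job shop", "O": "open shop"}

def parse_alpha(alpha):
    a1, a2 = alpha[0], alpha[1:]
    layout = LAYOUTS[a1]
    if a1 == "1":
        return layout, 1            # alpha2 has no meaning here
    if a2 in ("", "m"):
        return layout, None         # machine count not fixed
    return layout, int(a2)

# parse_alpha("P3") -> ("identical parallel machines", 3)
# parse_alpha("Fm") -> ("flow shop", None)
```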
Note again that the previous α field suffers from some shortcomings, as there is
no predefined way of representing hybrid layouts. The review paper of Vignier
et al. (1999) introduced a significant extension. The authors split the α field into
four sub-fields α1 α2 α3 α4 as follows:
Note that the last two sub-fields α3 and α4 are repeated for every stage i specified
in the sub-field α2 . For example, a hybrid layout with α = HJ3, 1, P2, R3 denotes
a hybrid job shop with three stages where the first stage has one single machine,
the second stage has two identical parallel machines and the third stage has three
unrelated parallel machines. Another example could be α = HFm, (Pmi)i=1,...,m,
which stands for a general hybrid flow shop with m stages where each stage i
contains mi identical parallel machines.
• The second field in the triplet notation, β, indicates the processing constraints.
This field is split into as many sub-fields as there are processing constraints in
the studied problem, separated by commas. As we discussed in previous sections,
processing constraints are varied and numerous; they are the object of later study
in Chap. 4, where we will define many possible β fields.
• Lastly, the third field in the triplet notation, γ , gives information about the single
objective considered in the studied problem. As with the processing constraints,
we have devoted a specific chapter to scheduling objectives which is Chap. 5.
Furthermore, in Chap. 10 we will significantly extend the γ field in order to allow
the notation of multiobjective problems.
We have to bear in mind that even with the hybrid layout extensions given by
Vignier et al. (1999), it is simply not possible to capture every possible production
scheduling layout. Not only is this not possible, but it would also be unrealistic to
expect it. Notation schemes have to be succinct enough to facilitate a quick and
unambiguous identification of a layout. Complex layouts are simply approximated
by the closest easily representable layout, and additional characteristics of complex
layouts are given in plain text afterwards. As a result, other researchers and
practitioners can grasp the most important aspects of the represented layout, even
if there are complex job routes, stages and so on.
3.4 Production Planning, Lot Sizing and Lot Streaming 69
In Sect. 1.5 we already discussed that the concept of a job in manufacturing
scheduling, albeit probably clear in the scientific literature, is far from being a precise
concept in real manufacturing. As a matter of fact, probably the first barrier when
a scheduling practitioner at a company communicates with a scheduling
researcher is to agree on what a ‘job’ is. Given the extreme importance of
identifying this unit of information in order to build scheduling models, we devote this
section to discuss, first, how jobs (as they are interpreted in the scheduling models
presented in this chapter) result as the output of a production planning process (as
described in Sect. 2.4) and, second, some alternatives (lot sizing and lot streaming)
to aggregate/disaggregate job entities.
Although Sect. 2.4 has already provided additional insight into the planning process,
production management is a large and broad field whose full treatment is outside
the scope of this book. We will therefore simply stress that, at least with respect to
its implications for manufacturing scheduling, production planning is the process of
obtaining a list of products to be manufactured, along with their quantities.
Some traditional production planning techniques, and more particularly, Materials
Requirements Planning (MRP) suffer from several shortcomings that have been long
documented in the literature and identified in practice. Some of these problems are:
• Data integrity. Unless high integrity of the data is assured, the results are
unsatisfactory. Bills of materials, inventories, orders and all related data have to
be carefully checked in order to avoid what has sometimes been called a GIGO
(Garbage In, Garbage Out) situation.
• MRP systems usually assume constant lead times and are inflexible as regards
other data such as fixed reorder quantities, reorder points, safety stocks, etc.
• The major problem is that the planning carried out by MRP systems does not take
production capacity into account. Results might be impossible to implement.
• MRP typically does not issue early warnings for stock outs, demand spikes, short-
ages, etc.
Furthermore, some authors have criticised the MRP approach for its limited
structure, which basically breaks down the production demands into product families and
later into individual products without actually considering the interactions between
them and the underlying model structure. This myopic decomposition leads to
important productivity and flexibility losses. Although more advanced techniques such
as Manufacturing Resources Planning (MRP II) or Capacity Requirements Planning
(CRP) overcome some of these problems, alternatives involving the use of linear
and integer programming have been proposed to model and solve complex
production planning problems in which high-quality (usually optimal) solutions are of
interest. One of the basic models is the well-known Single-Item, Single-Level,
Uncapacitated Lot-Sizing
[Footnote 2] Note that, in some references, processing costs are time-independent and, consequently,
are removed from the objective function since processing costs are decision-irrelevant.
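The Single-Item, Single-Level, Uncapacitated Lot-Sizing model just mentioned can be solved exactly by the classical Wagner–Whitin dynamic programme. A minimal sketch (ours; demand, setup and holding figures are invented for illustration):

```python
# Wagner-Whitin dynamic programme for the uncapacitated single-item
# lot-sizing model: minimise setup plus linear holding costs.
def wagner_whitin(demand, setup, holding):
    T = len(demand)
    best = [0.0] + [float("inf")] * T   # best[t]: min cost covering periods 1..t
    for t in range(1, T + 1):
        for s in range(1, t + 1):       # last production run placed in period s
            # produce in s everything demanded in periods s..t; pay holding
            # for every unit carried from s to its demand period
            cost = best[s - 1] + setup
            cost += holding * sum((k - s) * demand[k - 1]
                                  for k in range(s, t + 1))
            best[t] = min(best[t], cost)
    return best[T]

# With demand (10, 10, 10), setup cost 25 and unit holding cost 1, producing
# everything in period 1 is optimal: wagner_whitin([10, 10, 10], 25, 1) -> 55.0
```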
[Figure: the sequence (4, 2, 5, 1, 3) of Fig. 3.6 processed in sublots; on machines 1 and 4, job 4 runs as two sublots, job 2 as three, job 5 as four, job 1 as three and job 3 as two.]
Fig. 3.17 Gantt chart for a streaming flow shop problem based on the example of Fig. 3.6
A final remark has to be made to discard the idea that, in principle, sublots could
simply be modelled as different jobs. Although theoretically appealing, this approach
quickly results in problems with a very large number of jobs (literally thousands)
and is therefore quite impractical.
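To see why streaming pays off despite this modelling cost, consider a single job processed on machines in series and split into s equal sublots; treating each sublot as a job in the permutation flow shop recurrence gives the makespan directly (our sketch, with illustrative data not from the book):

```python
# Makespan of one job split into s equal sublots on machines in series,
# treating each sublot as a job in the permutation flow shop recurrence.
def streamed_makespan(p, s):
    """p: whole-lot processing time per machine; s: number of equal sublots."""
    q = [t / s for t in p]              # per-sublot processing times
    prev = [0.0] * s                    # completion times on previous machine
    for qi in q:
        cur, t = [], 0.0
        for k in range(s):
            t = max(t, prev[k]) + qi    # flow shop recurrence on sublots
            cur.append(t)
        prev = cur
    return prev[-1]

# With p = [4, 4, 4]: a single lot gives a makespan of 12.0, while four unit
# sublots overlap the machines and reduce it to 6.0.
```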
We have already noted that our definition of a scheduling model would more or
less match what is termed in most (classically oriented) textbooks a scheduling
problem. Accordingly, in these sources the taxonomy presented in Sect. 3.3 is
referred to as a taxonomy for scheduling problems, and the scheduling methods
provide solutions to scheduling problems. However, the need for the modelling and
transference processes described in Sect. 3.2.1 is often overlooked in some sources,
and the reader is given the impression that a scheduling model arises naturally from a
(real-world) problem and that this problem is solved once a solution for the model has
been obtained. Therefore, we have opted for somewhat abandoning the mainstream
scheduling literature, but in our defence we can always state that our definition of
scheduling model (and instance) is taken from a source as early as Coffman (1976).
Regarding further readings, the literature is full of comprehensive works on sci-
entific modelling. Some examples are Gilbert (1991) or Mayer (1992) and also the
classical book of Churchman (1984) or more modern volumes like Da Costa and
French (2003). The formal definitions and notation given in Sects. 3.2 and 3.3 follow
the mainstream scheduling literature, such as for example, Conway et al. (1967),
Baker (1974), French (1982), Błazewicz et al. (2002), Brucker (2007) or more mod-
ern texts like Pinedo (2012), Baker and Trietsch (2009), Pinedo (2009). Production
planning is a very rich field where lots of books and papers have been published.
Some of these texts are Orlicky (1975), Plossl (1994), Wight (1995), Higgins et al.
(1996), Toomey (1996), Artiba and Elmaghraby (1997), Drexl and Kimms (1998),
Onwubolu (2002), Sheikh (2003), Voss and Woodruff (2003), Stadtler and Kilger
(2005) and Proud (2007). Especially interesting is the book of Pochet and Wolsey (2006),
where the mathematical and optimisation aspects of production planning are studied
in great detail. A good review paper for the Single-Item, Single Level, Uncapacitated
Lot-Sizing problem is given by Brahimi et al. (2006). Regarding lot streaming, some
basic references are Potts and Van Wassenhove (1992), Trietsch and Baker (1993) or
more recently, Chang and Chiu (2005). Additionally, in the case of the flow shop
layout, there is a book published that solely deals with the lot streaming variant (Sarin
and Jaiprakash 2007).
References
Artiba, A. and Elmaghraby, S. E., editors (1997). The planning and scheduling of production
systems: methodologies and applications. Chapman & Hall, London.
Baker, K. R. (1974). Introduction to Sequencing and Scheduling. John Wiley & Sons, New York.
Baker, K. R. and Trietsch, D. (2009). Principles of Sequencing and Scheduling. Wiley, New York.
Błazewicz, J., Ecker, K. H., Pesch, E., Schmidt, G., and Wȩglarz, J. (2002). Scheduling Computer
and Manufacturing Processes. Springer-Verlag, Berlin, second edition.
Błazewicz, J., Lenstra, J. K., and Rinnooy Kan, A. H. G. (1983). Scheduling Subject to Constraints:
Classification and Complexity. Discrete Applied Mathematics, 5:11–24.
Brahimi, N., Dauzère-Pérès, S., Najid, N. M., and Nordli, A. (2006). Single item lot sizing problems.
European Journal of Operational Research, 168(1):1–16.
Brucker, P. (2007). Scheduling Algorithms. Springer, New York, fifth edition.
Chang, J. H. and Chiu, H. N. (2005). Comprehensive review of lot streaming. International Journal
of Production Research, 43(8):1515–1536.
Churchman, C. W. (1984). The Systems Approach. Dell Publishing Company, New York. Revised
and upgraded edition from the 1968 original.
Coffman, E. G. (1976). Computer & Job/shop Scheduling Theory. John Wiley & Sons.
Conway, R. W., Maxwell, W. L., and Miller, L. W. (1967). Theory of Scheduling. Dover Publications,
New York. Unabridged publication from the 1967 original edition published by Addison-Wesley.
Da Costa, N. and French, S. (2003). Science and Partial Truth: A Unitary Approach to Models and
Scientific Reasoning. Oxford University Press, Oxford.
Drexl, A. and Kimms, A., editors (1998). Beyond Manufacturing Resource Planning (MRP II):
advanced models and methods for production planning. Springer, New York.
Dudek, R. A., Panwalkar, S. S., and Smith, M. L. (1992). The lessons of flowshop scheduling
research. Operations Research, 40(1):7–13.
French, S. (1982). Sequencing and Scheduling: An Introduction to the Mathematics of the Job-Shop.
Ellis Horwood Limited, Chichester.
Gilbert, S. W. (1991). Model-building and a definition of science. Journal of Research in Science
Teaching, 28(1):73–79.
Graham, R. L., Lawler, E. L., Lenstra, J. K., and Rinnooy Kan, A. H. G. (1979). Optimization
and Approximation in Deterministic Sequencing and Scheduling: A Survey. Annals of Discrete
Mathematics, 5:287–326.
Higgins, P., Le Roy, P., and Tierney, L., editors (1996). Manufacturing planning and control: beyond
MRP II. Springer, New York.
Hyer, N. and Wemmerlöv, U. (2002). Reorganizing the Factory: Competing Through Cellular
Manufacturing. Productivity Press, Portland.
Irani, S. A., editor (1999). Handbook of Cellular Manufacturing Systems. Manufacturing & Automa-
tion Engineering. John Wiley & Sons, New York.
Lawler, E. L., Lenstra, J. K., and Rinnooy Kan, A. H. G. (1993). Sequencing and Scheduling:
Algorithms and Complexity. In Graves, S. C., Rinnooy Kan, A. H. G., and Zipkin, P. H., editors,
Logistics of Production and Inventory, volume 4 of Handbooks in Operations Research and
Management Science, Amsterdam. Elsevier Science Publishers, B. V.
MacCarthy, B. L. and Liu, J. (1993). Addressing the gap in scheduling research: A review of
optimization and heuristic methods in production scheduling. International Journal of Production
Research, 31(1):59–79.
Mayer, R. E. (1992). Knowledge and thought: Mental models that support scientific reasoning.
In Duschl, R. A. and Hamilton, R. J., editors, Philosophy of science, cognitive psychology, and
educational theory and practice, pages 226–243, Albany. New York State University.
Morton, T. E. and Pentico, D. W. (1993). Heuristic Scheduling Systems With Applications to
Production Systems and Project Management. Wiley Series in Engineering & Technology Man-
agement. John Wiley & Sons, Hoboken.
Onwubolu, G. C. (2002). Emerging optimization techniques in production planning and control.
Imperial College Press, London.
Orlicky, J. (1975). Material Requirements Planning. McGraw-Hill, New York.
Pinedo, M. (2009). Planning and Scheduling in Manufacturing and Services. Springer, New York,
second edition.
Pinedo, M. L. (2012). Scheduling: Theory, Algorithms, and Systems. Springer, New York, fourth
edition.
Plossl, G. W. (1994). Orlicky’s Material Requirements Planning. McGraw-Hill, New York, second
edition.
Pochet, Y. and Wolsey, L. A. (2006). Production planning by mixed integer programming. Springer,
New York.
Potts, C. N. and Van Wassenhove, L. N. (1992). Integrating scheduling with batching and lot-
sizing: A review of algorithms and complexity. The Journal of the Operational Research Society,
43(5):395–406.
4.1 Introduction
This chapter belongs to the part of the book devoted to scheduling models, one of
the three elements (recall from Chap. 1 that the other two are methods and tools) that
constitute a scheduling system. More specifically, this chapter discusses the
constraints describing scheduling models. Already in Chap. 2 (Sect. 2.5), it was stated
that manufacturing scheduling is subject to a large variety of constraints. These
were refined in Sect. 3.2.4, where some constraints arising in real-life settings were
introduced. This chapter gives further insight into this issue by bringing a
characterisation and description of scheduling constraints. Since, as already mentioned in
Sect. 3.3.2, the scheduling constraints are gathered in current scheduling
classification schemes in the field β of the notation triplet α/β/γ, we will comprehensively
work with this notation and with the field β in this chapter as well.
In order to provide a (hopefully) coherent view of scheduling constraints, we
have necessarily simplified many of them and broadly classified them into process,
operations, transportation and storage constraints. Some other situations will also be
addressed. Despite this classification, note that scheduling constraints are rarely
independent from each other and that, even within the same constraint, the treatment
and the low-level details inevitably vary from one production floor to another. Finally,
note that some constraints only make sense in some production layouts and/or
under some specific criteria.
More specifically, in this chapter we
• describe constraints affecting the flow of operations in the shop floor (Sect. 4.2),
• introduce scheduling constraints that are more closely related to the operations or
tasks of the jobs (Sect. 4.3),
• discuss restrictions attached to the movement of material in the shop floor
(Sect. 4.4),
• present constraints arising when considering that storage capacity is not unlimited
(Sect. 4.5) and
• give some hints on further relevant constraints (Sect. 4.6).
Under process constraints we list all situations that affect the otherwise ‘standard’
flow of operations in single, parallel or shop environments.
are added. The most extreme case would be when every job has a predecessor except
the first, for example, 1 → 2 → 3 → 4 → 5. In this situation, only the sequence
π = (1, 2, 3, 4, 5) would be feasible. However, precedence constraints complicate
matters entirely. Picture for example a parallel machine layout. This layout was
presented in Sect. 3.2.3.2, where an example problem was introduced in Table 3.2
and pictured in the Gantt chart of Fig. 3.4. Now, let us assume that 7 → 4, 7 → 3
and 2 → 1. The resulting Gantt chart is shown in Fig. 4.5.
As we can see, idle times appear on the machines. This might not seem a big
deal, but in most parallel machine problems without such constraints, machines never
need to lie idle if we only consider semi-active schedules. As a matter of fact,
the regular parallel machine problem under some performance criteria basically
reduces to an assignment problem. Although more theoretically oriented, authors
like Lenstra and Rinnooy Kan (1978) pointed out long ago that adding
precedence constraints makes relatively simple problems much more difficult.
Precedence constraints also make it possible to capture more complex relationships.
For example, the precedences might be among tasks of different jobs (as in a flow shop)
and not only between the last task of a job and the first task of the succeeding job.
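Feasibility with respect to precedence constraints is easy to check once the start and completion times of the jobs are known, for instance when read off a Gantt chart. A sketch with the constraints of the example above and invented times:

```python
# Sketch: checking that a schedule respects precedence constraints such as
# 7 -> 4, 7 -> 3 and 2 -> 1 (a job may not start before all of its
# predecessors have completed). The times below are invented for illustration.
precedences = [(7, 4), (7, 3), (2, 1)]

def respects_precedences(start, completion, precedences):
    # every successor must start no earlier than its predecessor completes
    return all(start[succ] >= completion[pred]
               for pred, succ in precedences)

completion = {7: 12, 2: 20}
start = {4: 12, 3: 15, 1: 20}
# respects_precedences(start, completion, precedences) -> True
```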
78 4 Scheduling Constraints
Fig. 4.5 Gantt chart with the result of the assignment and sequencing of eight jobs on three parallel
machines with some precedence constraints
Generally speaking, operations carried out at machines that are not directly related
to the processing of the jobs are commonly referred to as ‘changeovers’.
Changeovers include, but are not limited to, adjustments, changes, cleaning, testing,
re-tooling, fixing or removing jobs, etc. Changeovers are also commonly known as
setup times, since they model the time needed to set up a machine prior to and/or
after processing a job.
4.2 Process Constraints 79
Fig. 4.6 Two job, two machine flow shop with an anticipatory setup on the second machine
Fig. 4.7 Two job, two machine flow shop with a non-anticipatory setup on the second machine
processing time of the job in the machine. After all, the machine is going to be busy
anyway. All other cases require special attention.
It is important to remark that in flexible manufacturing systems, as well as in
many production environments, setup times can amount to a significant portion of
the productive time if they are not handled with care. The time needed to clean, say,
a chemical reactor might exceed the processing time needed to produce a batch of
a given product.
The previous classifications are not the only possible ones. Other classifications
further break down setups into what is referred to as family, class or batch setup
times. In these cases, jobs are grouped into families so that setup times occur only
when finishing production of one family and before starting production of the next
one. Sometimes, there are minor setups in between jobs of the same family and
major setups between families. Some other classifications also include removal times,
which are seemingly identical to setup times but are performed after jobs are
completed rather than before they start.
It is safe to say that most production systems include, in one way or another,
setup times. In some cases, setup times might be sufficiently small to be considered
negligible, but in many other cases, setups have to be considered explicitly. Real
cases of setup times abound. A clear example comes from the paper cutting industry
where the paper cutting machines (guillotines) need to be adjusted when changing
from one cutting batch to another. Another example can be obtained from the ceramic
tile manufacturing sector. Ceramic tiles are produced in identical processing lines
composed of several processes; moulding press, dryer, glazing line, kiln, quality
control and finally, packing and delivery. If a production line is set up for a given
product format and colour, for example a 33 × 33 cm. black glazed ceramic tile,
changing to a 45 × 45 cm. white glazed ceramic tile will need significant setup times
to change the moulds in the moulding press and to clean and prepare the glazing line
for the white glaze. However, changing to a 33 × 33 cm. yellow glazed tile will only
require cleaning in the glazing line, as the moulding press is already set up. A more
detailed case study taken from the ceramic tile sector is given in Chap. 15.
As far as the field β in the notation triplet α/β/γ is concerned, we denote setup
and removal times as follows:
β = Snsd : There exist sequence-independent setup times. Sij denotes the known and deterministic
amount of setup time to be performed on machine i before processing job j.
= Ssd : There exist sequence-dependent setup times. In this case, for each machine, we have the
so-called ‘setup time matrix’, where Sijk denotes the setup time needed on machine i for
processing job k after having processed job j.
= Rnsd : Sequence-independent removal times.
= Rsd : Sequence-dependent removal times.
For families or batches, one can use, for example, SBnsd, SBsd, RBnsd and RBsd.
No specific notation is given to differentiate the anticipatory from the non-
anticipatory setup cases. However, anticipatory setups are largely assumed in the
scientific literature.
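As an illustration of how sequence-dependent setup times affect a schedule, the following sketch (names and data are hypothetical) computes the completion time of a sequence on a single machine, using a setup time matrix analogous to Sijk but restricted to one machine:

```python
def makespan_with_setups(sequence, p, S, initial=0):
    """Completion time of the last job on a single machine with a
    sequence-dependent setup matrix S[j][k]: setup needed before job k
    when it follows job j. `initial` is a dummy initial machine state
    indexing the setup row for the first job in the sequence."""
    t = 0
    prev = initial
    for j in sequence:
        t += S[prev][j] + p[j]  # setup from previous job, then processing
        prev = j
    return t

p = {1: 5, 2: 3}
S = {0: {1: 2, 2: 4}, 1: {1: 0, 2: 6}, 2: {1: 1, 2: 0}}
print(makespan_with_setups([1, 2], p, S))  # (2+5) + (6+3) = 16
print(makespan_with_setups([2, 1], p, S))  # (4+3) + (1+5) = 13
```

Note how the two sequences differ in total time only because of the setups: with sequence-independent setups both orders would take the same time.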
The flow shop layout, as commented in Sect. 3.2.3.3, is usually simplified to consider
only permutation sequences. This is notated as follows:
β = prmu : Only permutation schedules are allowed.
Basically, most scheduling research is carried out on the permutation version
of the flow shop. Also, in some hybrid environments, the term ‘permutation
schedules’ indicates that a single permutation of jobs is used as a simplified solution
for the problem. At each stage, jobs are launched to machines in the permutation
order, and the assignment to machines at each stage is solved separately.
Throughout most of this book, we have assumed that machines are con-
tinuously available over the whole time horizon. While this simplifying assumption
is very convenient, it is far from reality. Machines are subject to
preventive maintenance, corrective maintenance (breakdowns) or simply timetabling or
working-shift constraints, among other things. The result is the same: machines are
not continuously available.
The scientific literature about machine availability and breakdowns is abundant.
However, two main distinctions can be made. The first is when the intervals of
machine availability (or unavailability) are known in advance. In this case, for every
machine i ∈ M and for a given production scheduling horizon T, there is a set of h
machine availability intervals AVi = {(BAV1i, EAV1i), (BAV2i, EAV2i), . . . ,
(BAVhi, EAVhi)}, where BAVli and EAVli are the starting and ending times of the l-th machine
availability period, respectively, and BAVli < EAVli for 1 ≤ l ≤ h and for all i ∈ M.
A possible notation for this is as follows:
The second distinction in the machine availability literature is when some informa-
tion about the machine availability intervals is not known a priori, or when the machines
simply break down with some probabilities or following some probability distribu-
tions.
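For the case of known availability intervals, the earliest feasible start of an operation can be sketched as follows. This assumes, as one possible interpretation, that the operation is non-preemptable and must fit entirely inside a single availability interval; all names are illustrative:

```python
def earliest_start(intervals, ready, duration):
    """Earliest start time >= `ready` such that an operation of
    length `duration` fits entirely inside one availability interval
    (BAV, EAV). Returns None if it fits in no interval."""
    for bav, eav in sorted(intervals):
        start = max(ready, bav)
        if start + duration <= eav:
            return start
    return None

AV = [(0, 10), (15, 30)]
print(earliest_start(AV, ready=4, duration=5))  # 4  (fits inside [0, 10])
print(earliest_start(AV, ready=7, duration=5))  # 15 (must wait for the next interval)
```

Under resumable availability semantics, the operation could instead be split across intervals; that variant needs a different computation.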
Some important process constraints or special situations include the cases where jobs
might visit one specific machine or stage more than once. A similar situation is when
some jobs skip one or more machines or stages in the processing sequence. Some
of the processing layouts commented in Chap. 3, like the flow shop or the job shop
already consider what is called re-circulation. Re-circulation models, for example,
double firing processes in ceramic tile manufacturing, the repeated polishing opera-
tions in furniture manufacturing and any other environment where a job has to visit a
given machine or stage more than once in the processing sequence. Examples
of studies in the literature where re-circulation is allowed include, but are not limited
to, Bertel and Billaut (2004) and Choi et al. (2005).
As far as re-circulation goes, the notation is as follows:
β = recrc : Re-circulation exists as a process constraint. Indicates that at least one job visits at
least one machine more than once.
Some related scenarios arise when the outcome of the processing of a job is not
perfect and there is a possibility of needing re-processing. Note that this differs from
re-circulation in several ways. First, there is no certainty that re-processing will
be needed; therefore, re-processing is stochastic. Second, a re-process usually
involves the same job leaving and immediately re-entering the same machine, whereas
in re-circulation several machines could be visited in between two visits to the same
machine. There is no accepted notation for re-processing, but β = reproc looks like
a valid proposal.
Last, stage skipping could be a factor to consider in some problems. In some sce-
narios and layouts, like for example flow shops or hybrid flow shops, some jobs might
not need processing in some machines or might not need to visit some stages. This is
referred to as stage skipping and can be denoted by β = skip. Note that skipping (or
synonymously passing of stations or missing operations) is often (e.g. in flowshops)
An interesting situation arises when no-idle time is allowed at machines. This con-
straint models an important practical situation that arises when expensive machinery
is employed. Idling on expensive equipment is often not desired or even not possible
due to technological constraints. Some examples are the steppers used in the pro-
duction of integrated circuits by means of photolithography. Other examples come
from sectors where less expensive machinery is used but where machines cannot be
easily stopped and restarted, or where it is completely uneconomical to do so. Ceramic roller
kilns, for example, consume a large quantity of natural gas when in operation. Idling
is not an option because it takes several days to stop and restart the kiln due to its
very large thermal inertia. In all such cases, idling must be avoided. Notation goes
as follows:
β = no-idle: No machine can stay idle after starting to process operations.
High throughput machines usually accept more than one job at the same time.
Arriving jobs might be grouped into batches, and the machine then
completes the whole batch at the same time. This is very common in the semiconductor
industry (Lee et al. 1992) or in tyre manufacturing (Bellanger and Oulamara 2009).
Batching might be economical if the arriving jobs require the same setup and can
be processed in batches. Usually, two types of batches are considered. p − batch
(parallel batching) is when the processing time of the whole batch in the machine is
considered to be equal to that of the largest processing time of any job in the batch.
s − batch (serial batching) occurs when the processing time of the batch is equal to
the sum of the processing times of all the jobs in the batch. Additionally, b denotes
the batch size, i.e. the maximum number of jobs that can enter a batch. If this number
changes from machine to machine, then bi is used instead. As a result of all of the
above, the notation is as follows:
β = p − batch(b) (parallel batching) : All machines consider batches of maximum size b (or bi )
where the processing time of the batch is equal to that of the
largest processing time among the jobs in the batch.
= s − batch(b) (serial batching) : All machines consider batches of maximum size b (or bi )
where the processing time of the batch is equal to that of the
sum of the processing times of the jobs in the batch.
The literature on batching scheduling is extremely large, as there are many possible
sub-cases, generalisations and variations in the batching decisions.
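The two batch processing time definitions can be sketched directly (illustrative names):

```python
def batch_time(processing_times, mode):
    """Processing time of a batch: the maximum of the job times for
    parallel batching (p-batch), their sum for serial batching
    (s-batch)."""
    if mode == "p-batch":
        return max(processing_times)
    if mode == "s-batch":
        return sum(processing_times)
    raise ValueError(f"unknown batching mode: {mode}")

jobs = [4, 7, 3]                      # processing times of the batched jobs
print(batch_time(jobs, "p-batch"))    # 7
print(batch_time(jobs, "s-batch"))    # 14
```

The batch size limit b (or bi) would simply cap the length of the list passed in.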
The concept of preemption was discussed in this book as early as in Sect. 1.5. In
contrast, if it is assumed that, once a job or task starts in a machine, it cannot be inter-
rupted and the job has to be processed to completion, then preemption is not allowed.
There are many scenarios where preemption might be needed or even desirable:
• The arrival of new urgent job orders might require stopping jobs already being processed.
• It might be economical or beneficial to preempt a job and/or to continue it on
another machine and/or at a later time.
• Sudden cancellations of clients’ orders might require stopping the processing of a batch
of products.
• Breakdowns of machines or any other unexpected event might result in preemp-
tion.
There are many possible types of preemptions and varied classifications. If, once
interrupted, the preempted operation is lost and when resumed, processing has to
restart from the beginning, the preemption is denoted as non-resumable. This is the
case for example in a pre-heating phase in metallurgy processes. If one stops the pre-
heating operation, it will be required to start pre-heating from the beginning again at a
later time since the processed product might have cooled off by then. Note that setup
time operations can also be preempted and most of the time, if a setup operation is
aborted, a different setup is started (the next product in the sequence is changed) and
therefore, the initial setup will have to be restarted from the beginning. Setup times
are then denoted as non-resumable. Conversely, a preemption is resumable if the
interrupted job simply continues from the point where it was interrupted to completion.
Sometimes, an intermediate situation exists where the operation can be resumed, but
some penalty (time, costs, etc.) applies. This case is referred to as semi-resumable.
Jobs might be preempted just once, a limited number of times or an unlimited
number of times. Jobs that break down into operations, as in the shop layouts, might
complicate things, since preempting a non-resumable operation might result in the
whole job being repeated again (all operations). A preempted operation, for example
in a parallel machine scenario, might have to return to the same machine or might
be able to return to any other machine. As one can see, the possibilities are endless.
However, not all scenarios are so complex. Sometimes, preemptions are as simple as
stopping a machine because the working shift is over and the processing of the same
job, on the same machine, continues the next working shift or after the weekend with
no introduction of a different job in between.
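The three preemption types can be contrasted with a small sketch. The semi-resumable penalty is modelled here, as one common option, as a fraction alpha of the already-processed work that must be redone; the names and the alpha model are assumptions for illustration, not the book's notation:

```python
def remaining_after_preemption(p, done, kind, alpha=0.5):
    """Remaining processing time of an operation of total length p,
    interrupted after `done` time units.
    - resumable: continue from where it stopped;
    - non-resumable: restart from scratch;
    - semi-resumable: redo a fraction alpha of the finished part."""
    if kind == "resumable":
        return p - done
    if kind == "non-resumable":
        return p
    if kind == "semi-resumable":
        return (p - done) + alpha * done
    raise ValueError(f"unknown preemption type: {kind}")

print(remaining_after_preemption(10, 6, "resumable"))       # 4
print(remaining_after_preemption(10, 6, "non-resumable"))   # 10
print(remaining_after_preemption(10, 6, "semi-resumable"))  # 7.0
```

With alpha = 0 the semi-resumable case collapses to resumable, and with alpha = 1 to non-resumable, which matches its role as the intermediate situation described above.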
As far as the notation goes, basic preemption is denoted as follows:
β = prmp : Preemption is allowed.
Another aspect that might affect the different tasks of jobs is splitting. In
specific manufacturing environments, jobs/operations might be split into several
sub-jobs/sub-operations. This problem is partially related to preemption of jobs/
operations. However, the latter is often induced by processing requirements of other
(more urgent) operations while job splitting is usually caused by lot sizing or related
considerations.
Release dates, as well as due dates and deadlines, were already defined in Chap. 3
as input data for a scheduling problem. However, they also represent constraints that
affect operations. As commented, the first task of a job cannot start before the release
date, and the last task of a job should ideally be completed before the due date and
must be completed before the deadline.
When such data exist in a production setting, it is denoted as follows:
β = rj : There are release dates for the jobs. Omitting this field is the same as
assuming that all release dates are zero, i.e. rj = 0, ∀j ∈ N.
β = dj : There are distinct due dates for each job.
= dj = d : There is a single identical due date for all the jobs. This is referred to as
the common due date case.
= d̄j : The due dates are compulsory and are referred to as deadlines. Some early
research work uses a different symbol for deadlines.
Note that many times the due dates dj are not explicitly stated in the β field of
the notation since, as we will see in Chap. 5, many optimisation objectives implicitly
consider the due dates.
If one finds release dates rj and deadlines d̄j in the same production setting, the
result is the so-called ‘processing window’, as the entire processing of a job must be
completed within the d̄j − rj time interval. This is often referred to as a ‘time window’.
Deadlines in general and processing windows in particular raise the issue of schedule
feasibility, i.e. not all schedules or possible orderings of the jobs and tasks on the
machines might be feasible. It might even be impossible to complete all jobs within
their respective processing windows.
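Feasibility with respect to processing windows can be checked on a single machine with a sketch like the following (illustrative names; rj, pj and the deadline are the data defined above):

```python
def window_feasible(sequence, r, p, d_bar):
    """Check a single-machine sequence against processing windows:
    each job j must start no earlier than r[j] and finish no later
    than its deadline d_bar[j]."""
    t = 0
    for j in sequence:
        t = max(t, r[j]) + p[j]  # wait for release, then process
        if t > d_bar[j]:
            return False         # deadline violated: infeasible schedule
    return True

r = {1: 0, 2: 2}
p = {1: 4, 2: 3}
d_bar = {1: 5, 2: 8}
print(window_feasible([1, 2], r, p, d_bar))  # True  (C1 = 4, C2 = 7)
print(window_feasible([2, 1], r, p, d_bar))  # False (job 1 finishes at 9 > 5)
```

The second sequence illustrates the feasibility issue raised in the text: some orderings of the jobs simply cannot satisfy the windows.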
Normally, in shop or hybrid shop layouts, the next task of a given job can start as
soon as the previous task is fully finished. However, if the next machine or stage
is busy, then the task, in the form of semi-finished product or in-process inventory,
waits at some place until the machine is free. In other words, tasks are allowed to
wait indefinitely in between machines or stages.
There are many scenarios where waiting is not an option and the processing of
some or all jobs needs to be carried out without interruptions between machines.
This situation is commonly known as ‘no-wait’. A typical example comes from steel
production where the steel ingots are not allowed to wait between production stages
or otherwise they would cool off. Most prepared food production lines also have, in
some way or another, no-wait restrictions. A clear example is a frozen food line where
no-wait must be enforced or otherwise the cool chain might break. Other examples
can be found in chemical and pharmaceutical industries, among many others.
Constructing a no-wait schedule is not trivial. If the next machine needed by a job
is busy, it is required to go back to the previous tasks of the same job and to delay
their start, so that tasks never wait. Figure 4.9 shows a five job, four machine no-wait
flow shop example. From the figure, we see that, for job four, the three initial tasks
have to be delayed, even including an idle time on the first machine, to make sure
that job four does not find the last machine busy.
Note that in the flow shop layout, the no-wait constraint necessarily implies a
permutation sequence. It is of paramount importance to consider that many of these
constraints interact, and a seemingly easy way to deal with one constraint could
end up being hard to solve in conjunction with another special situation. As
regards notation, the no-wait situation is easily described:
β = nwt : Jobs cannot wait in between stages. Mathematically, this implies that SOij =
EOi−1,j, ∀j ∈ N, ∀i ∈ M\{1}. Note that this is sometimes notated as no-wait.
4.3 Operations Constraints 89
In some works, such as that of Hall and Sriskandarajah (1996), the authors
extend this notation to specify between which machines or stages waiting is
not allowed. For example, no-wait(3, 4) indicates that no waiting is allowed for
any job between stages or machines three and four.
The no-wait setting is in obvious contrast with the regular setting, in which indefi-
nite waiting is possible. It should be pointed out that the Gantt chart profile
of every single no-wait job is fixed, independent of the sequence of jobs. This infor-
mation can be used when developing optimization approaches to no-wait problems.
Intermediate scenarios between assuming indefinite waiting times and the
no-wait case are possible, as there are situations where jobs or tasks might wait
some time, but not too much. For example, in a frozen food cold chain, 10 min
waiting might be acceptable knowing that the temperature will not rise beyond a
microbial contamination threshold in those 10 min. Therefore, a maximum waiting
time or ‘maximum time lag’ is allowed: jobs or tasks can wait, but for no more than the
maximum time lag. In general, we denote by lag+ij the maximum time lag for job
j ∈ N after being processed on machine i ∈ M and before proceeding to machine
i + 1. In order to satisfy this maximum time lag, the following inequality must hold
for all tasks: EOij ≤ SOi+1,j ≤ EOij + lag+ij.
Similarly, there might be ‘minimum time lags’. This implies that subsequent tasks
of a job cannot start until a given minimum time has elapsed since the previous opera-
tion. There are many situations where minimum time lags could appear. Picture for
example painting operations that require a minimum drying time before subsequent
operations can be performed. Products that require some type of furnacing or kiln
firing might need some cooling-off time before being handled. We denote by lag−ij
the minimum time lag for job j ∈ N after being processed on machine i ∈ M and
before proceeding to machine i + 1. Minimum time lags effectively require jobs to satisfy
the following: SOi+1,j ≥ EOij + lag−ij.
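Both lag conditions reduce to a simple interval check on the gap between consecutive operations of a job; a sketch (illustrative names):

```python
def lags_ok(end_prev, start_next, lag_min=0, lag_max=float("inf")):
    """Check a pair of consecutive operations of the same job against
    minimum and maximum time lags: the next operation must start at
    least lag_min and at most lag_max time units after the previous
    one ends. A negative lag_min allows overlapping operations."""
    gap = start_next - end_prev
    return lag_min <= gap <= lag_max

print(lags_ok(end_prev=10, start_next=12, lag_min=1, lag_max=5))  # True
print(lags_ok(end_prev=10, start_next=17, lag_min=1, lag_max=5))  # False
print(lags_ok(end_prev=10, start_next=8, lag_min=-3))             # True (overlap)
```

The defaults (lag_min = 0, lag_max = ∞) reproduce the unconstrained case, while lag_min = lag_max = 0 reproduces the no-wait condition.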
A straightforward extension is when both maximum and minimum time lags exist.
This imposes a ‘time window’ interval inside which the next task of any job should
be started. Some authors even consider negative time lags, which allow a subsequent
task to start before the preceding task is finished.
This can be used to model, for example, large batches of products that can effectively
overlap in between stages, much like the lot streaming and lot sizing discussion of
Sect. 3.4 of Chap. 3.
Nothing has been mentioned about the nature of the processing times so far apart
from mentioning in Chap. 3 that processing times are supposed to be known and
fixed in advance. However, there are many situations that might affect this otherwise
strong assumption.
First of all, there is a sizeable part of the theoretical research in scheduling that
deals with very specific processing times. These are referred to as the unit processing
times and identical processing times cases, denoted as follows:
β = pij = 1 : All processing times are equal to 1. This is referred to as unit processing
times.
= pij = {0, 1} : Zero or unit processing times.
= pij = p : All processing times are equal to a value p.
Note that these special processing times cases might be erroneously classified as
too theoretical. While it is true that the bulk of research in these cases is theoretical,
there are some situations in practice that benefit from these special cases. For instance,
in Lee et al. (1992), batching machines for the burn-in operations in semiconductor
manufacturing are studied. In these machines, all the jobs that simultaneously enter
the machine are modelled as having the processing time equal to the largest job
that enters the batching machine. As we can see, such studies are highly relevant in
practice.
Some other specific situations regarding processing times are also often studied
in the literature. For instance, proportional processing times model the situation
in which the processing times of the tasks of a given job are proportional to each
other. For example, the first machine might be faster than the second machine, the
third faster than the fourth machine, and so on. These problems are usually denoted as
β = pij = pj/si, where si is a speed factor of machine i. There is a body of
literature about the proportionate flow shop, with some key works like the ones of
Ow (1985) and Hou and Hoogeveen (2003).
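Proportionate processing times pij = pj/si are straightforward to generate; a sketch (illustrative names):

```python
def proportionate_times(p, speeds):
    """Build the full processing time table p_ij = p_j / s_i from the
    base times p[j] and the machine speed factors speeds[i]."""
    return {(i, j): p[j] / s for i, s in enumerate(speeds) for j in p}

# Machine 1 is twice as fast, and machine 2 three times as fast, as machine 0:
print(proportionate_times({1: 6}, [1, 2, 3]))
# {(0, 1): 6.0, (1, 1): 3.0, (2, 1): 2.0}
```

With all speed factors equal to one, this collapses to the identical-processing-times case on every machine.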
So far, it has also been assumed that processing times, once determined, do not
depend on anything else. In reality, processing times can be acted upon. Picture a
painting operation of a large wood or metal piece. Painting might be carried out by
personnel operating spray paint guns. The more personnel assigned to that painting
operation, the shorter the processing time and the more expensive the operation will
be. In other words, sometimes, processing times depend on other factors and can
be somehow controlled. This situation is referred to in the scientific literature as
controllable processing times and there is also a large body of research. The variety
and diversity of controllable processing times scheduling is overwhelming as there
are literally hundreds of possible variants. Note that controllable processing times are
often also referred to as resource dependent processing times or even compressible
processing times. The interested reader is advised to read through the previously
cited reviews for more information.
No matter how the processing times are estimated, they are no more than that:
an estimate. In some production systems, processing times cannot be estimated
reliably. In these cases, if a fixed and deterministic processing
time is not acceptable, some statistical distribution for each processing time has to
be assumed. Under this situation, we enter the stochastic processing times case. The
body of research under stochastic processing times is huge. Note that when one refers
to stochastic scheduling, most of the time the meaning is stochastic processing times.
However, it has to be noted that analytical results are only possible for simplified
production settings. For complex production scenarios, simulation is believed to be
the only viable choice.
The above special cases for processing times are not the only possible ones. Time-
dependent processing times occur when the actual processing time depends on the
instant in which operations are started. A special case of the previous one is the
so-called deteriorating processing times where the processing time of a given task
increases as time passes by.
Last, it has been demonstrated that processing times decrease over time in repet-
itive production lines. This is the so-called learning effect. While it is debatable
that processing times with learning considerations should be addressed in short-term
scheduling, this has not deterred authors from proposing several models in which
the processing times are subject to the most varied learning effects.
Another situation that is very common in practice is that a task of a job cannot start
as soon as the previous task of the same job is finished. Assuming it can implies that the job,
whatever product it models, moves instantaneously from the output of the previous
machine to the input of the next machine. Normally, the job will need to be carried
over to the next machine. There are numerous ways of transporting products from
machine to machine: conveyor belts, robotic cells, robotic arms, wagons, product
machine to machine: conveyor belts, robotic cells, robotic arms, wagons, product
boxes, automated guided vehicles, overhead cranes, as well as many others. As a
matter of fact, the variety in product handling inside a factory is probably as high as
the variety in manufacturing systems themselves. Alongside with this variety, there
are many different approaches to transportation constraints in the scientific literature.
Basically, two types of transportation constraints can be considered. First, there
is the transportation time, so that jobs do not instantaneously move from machine to
machine. Should there be no more constraints, transportation times could be modeled
simply as a minimum time lag with the time it takes to transport the job from one
machine to the next. However, sometimes lot streaming (see Sect. 3.4) is employed
so jobs are able to be splitted for transport from one machine to the next which might
allow overlapping processing of jobs (sub-jobs or sub-lots of a job) on consecutive
machines.
There is a second group of transportation constraints: the number of transporters
is limited. This second situation results in a hard problem as the handling of products
must be solved together with the scheduling. Obviously, if all transporters are busy
transporting products, then the job has to wait somewhere until one is free. The
immediate result of this is that the transportation time is not a fixed and constant
term, but a time that depends on how the scheduling is being done and on the traffic
in the transportation.
In the most complex scenario, the scheduling problem becomes a combined
scheduling and routing problem in which countless routes for different transporters
Fig. 4.10 An automated guided vehicle (AGV) transporting a box full of ceramic tiles
have to be derived to transport all the goods inside the factory. Lately, special
emphasis has been placed on the routing of automated guided vehicles (AGVs) like
the one shown in Fig. 4.10.
The previous aspects of transportation constraints are not the only ones. Some-
times, transportation constraints go as far as the final delivery of products to clients.
This is referred to as product delivery coordination and has been widely studied.
Other scenarios include the necessary coordination of scheduling and transportation
in complex supply chains.
In all previous sections, we have shown some alternatives for the many assump-
tions of the theoretical scheduling problems. However, there is still an important one
remaining: storage. In-process inventory has to be stored inside the plant and obvi-
ously, space is not unlimited. When the storage areas are full, upstream machines
have to stop while downstream machines work over the in-process inventory to free
up some space in the storage areas.
Buffer capacity requirements occur per job between two consecutive operations
of this job, i.e. after the previous operation has been finished and before the next
operation starts. Often, unlimited buffers are assumed in manufacturing scheduling.
However, sometimes physical buffer limitations are explicitly present, or the goal
4.5 Storage Constraints 93
At the beginning of the present chapter it was stated that the reality of the production
systems is extremely complex and that there are countless possible constraints. In
previous sections we have just scratched the surface of this issue. Now we proceed
with a brief enumeration of many other possible constraints not mentioned before.
It has been mentioned that in supply chains, transportation operations have to be
considered. However, there is one basic assumption that has been considered up to
this point. There is one single production floor or factory, or, if there is more than
one, scheduling is carried out separately at each one. Large companies with many
factories face the additional decision of where to produce the products that form the
production plan, i.e. there is a prior decision on which factory each
product should be manufactured in. Then, at each factory, a scheduling problem has to
be solved. These situations have been referred to as ‘distributed scheduling
problems’ and are actively being studied. One example is the study of the distributed
permutation flow shop layout by Naderi and Ruiz (2010). Figure 4.11 shows
one full flow shop problem with 10 jobs and four machines that has to be distributed
between two factories, each one also with four machines. Obviously, the job factory
assignment decisions and the scheduling problem at each factory cannot be separated
if a good quality result is desired.
There are many other constraints. Setup times are a never-ending source of special
situations. Some machines, for example, cannot be set up unless a specific technician
is working; therefore, setups during the weekend, for example, have to be avoided.
Furthermore, setups often consist of cleaning and adjusting machines, tasks carried
out by plant personnel. In this situation, setup times are also controllable.
During this chapter we have introduced some of the situations that can be found in
production settings, grouped by process constraints, operations constraints, trans-
portation and storage constraints. Note that this is just one possible grouping as
the sheer variety of real situations easily overcomes any exhaustive classification
attempt. This variety of scheduling settings and constraints results in a vast literature
where most constraints are still studied in isolation. For
example, the literature on no-wait scheduling is plentiful, as is the literature on setup
times. However, at their intersection one finds very few research papers, i.e. on no-wait
scheduling with setup times. Moreover, it is safe to say that the intersection of
three or more sets of constraints for any shop layout would return an empty set of
references most of the time. Furthermore, the more constraints considered, the less
complex the studied scheduling layout usually is. That is, there are some studies with
4.7 Conclusions and Further Readings 95
many constraints considered simultaneously for the single machine problem, but just
a handful of papers have been published where several constraints are simultaneously
considered for complex hybrid job shop problems, to name an example.
The previous issue is a symptom that more work needs to be done in the area of
scheduling in order to cope with more realistic problems where several simultaneous
constraints are considered. Any production scheduler working at a company will
surely have identified several of the previously discussed constraints as important
in his/her production setting. In the next chapters we will make an attempt at
providing some general techniques and algorithms, as well as a case study, to help
bridge this gap between academic scheduling and real scheduling at production
shops.
Further readings on the topics discussed in this chapter stem from the main
textbooks in scheduling already mentioned. When describing the notation, we have
sought to balance simplicity against thoroughness, also taking into account its
acceptance. This is not possible in many cases: for instance, the notation on machine
availability is not widely accepted (see, for example, Lee et al. 1997 for a different
notation).
Regarding specific works, apart from the references cited in-line when describing
specific sets of constraints in the corresponding sections, state-of-the-art and
comprehensive reviews of the scheduling literature on setup times are available
from Yang and Liao (1999), Allahverdi et al. (1999), Cheng et al. (2000)
and Allahverdi et al. (2008). A good state-of-the-art review of scheduling problems
with machine availabilities is available from Schmidt (2000). Reviews on flow shop
are Park et al. (1984), Turner and Booth (1987), Framinan et al. (2004), Ruiz and
Maroto (2005), Hejazi and Saghafian (2005) or Gupta and Stafford (2006), while for
hybrid flow shops it is worth citing Linn and Zhang (1999), Vignier et al. (1999),
Wang (2005), Ruiz and Maroto (2006), Quadt and Kuhn (2007), Ribas et al. (2010)
and Ruiz and Vázquez-Rodríguez (2010). Stage skipping has been addressed by a
number of authors, see, e.g. Leisten and Kolbe (1998) and Urlings et al. (2010) for the
flow shop/hybrid flow shop settings, respectively. No-idle processing constraints have
been seldom studied in the literature. To the best of our knowledge, the first papers are
those of Adiri and Pohoryles (1982) and Vachajitpan (1982), and recent references on
the topic are Pan and Wang (2008) and Ruiz et al. (2009). A comprehensive review of
scheduling with batching decisions is due to Potts and Kovalyov (2000), although it
only covers research that is already more than a decade old. In the last decade, more
than double that number of papers on batch scheduling have appeared. Some
recent examples of work in preemption are Agnetis et al. (2009) and Hendel et al.
(2009).
Due dates are intensively studied in the research literature. Some examples are Sen
and Gupta (1984), Baker and Scudder (1990) and more recently, Vallada et al. (2008).
Some examples of processing windows research appear in Maffioli and Sciomachen
(1997) or Yeung et al. (2009). Minimum, maximum and both types of time lags have
been dealt with, among many others, by Riezebos and Gaalman (1998), Brucker et al.
(1999) and Fondrevelle et al. (2006). Negative time lags and overlaps are discussed
in network planning textbooks, a recent reference being Urlings et al. (2010).
Unit processing times are studied in Liu et al. (1999) and Averbakh et al. (2005)
just to name a few. Zero and unit processing times are examined in Lushchakova
and Kravchenko (1998), among others. Results for identical processing times have
been published in Crama and Spieksma (1996) and Lee et al. (1992). A recent survey
about scheduling with controllable processing times is that of Shabtay and Steiner
(2007).
In the book of Pinedo (2012) there is a whole part devoted to stochastic scheduling.
Regarding time-dependent processing times, a main source is Gawiejnowicz (2008).
An interesting study for the single machine layout with time-dependent processing
times is given by Chen (1996), and an example of deteriorating processing times,
where the processing time of a given task increases as time passes, is Kubiak and
van de Velde (1998). Some examples of processing times with learning effects are
the papers of Lee et al. (2004) and Eren and Guner (2009), just to name a few.
Some recent references on lot streaming are Defersha and Chen (2012) and
Feldmann and Biskup (2008), with Sarin and Jaiprakash (2007) being a book on the
topic. A good review of scheduling with transportation times is due to Lee and Chen
(2001). Product delivery coordination can be found in Chang and Lee (2004). Excel-
lent review papers on the coordination of scheduling and transportation are available
from Potts and Hall (2003), Li et al. (2005) and Chen and Vairaktarakis (2005).
The main results of scheduling with blocking are due to Hall and Sriskandarajah
(1996). Other relevant papers are those of Papadimitriou and Kanellakis (1980),
Leisten (1990), Brucker et al. (2003) and Mascis and Pacciarelli (2005). Finally, an
interesting application where setup times are considered controllable is Ruiz and
Andres-Romano (2011).
References
Adiri, I. and Pohoryles, D. (1982). Flowshop no-idle or no-wait scheduling to minimize the sum of
completion times. Naval Research Logistics, 29(3):495–504.
Agnetis, A., Alfieri, A., and Nicosia, G. (2009). Single-machine scheduling problems with gener-
alized preemption. INFORMS Journal on Computing, 21(1):1–12.
Allahverdi, A., Gupta, J. N. D., and Aldowaisan, T. (1999). A review of scheduling research involving
setup considerations. OMEGA, The International Journal of Management Science, 27(2):219–
239.
Allahverdi, A., Ng, C. T., Cheng, T. C. E., and Kovalyov, M. Y. (2008). A survey of scheduling
problems with setup times or costs. European Journal of Operational Research, 187(3):985–1032.
Averbakh, I., Berman, O., and Chernykh, I. (2005). The m-machine flowshop problem with unit-time
operations and intree precedence constraints. Operations Research Letters, 33(3):263–266.
Baker, K. R. and Scudder, G. D. (1990). Sequencing with earliness and tardiness penalties: a review.
Mathematics of Operations Research, 15:483–495.
Bellanger, A. and Oulamara, A. (2009). Scheduling hybrid flowshop with parallel batching machines
and compatibilities. Computers & Operations Research, 36(6):1982–1992.
Bertel, S. and Billaut, J.-C. (2004). A genetic algorithm for an industrial multiprocessor flow shop
scheduling problem with recirculation. European Journal of Operational Research, 159(3):651–
662.
Brucker, P., Heitmann, S., and Hurink, J. (2003). Flow-shop problems with intermediate buffers.
OR Spectrum, 25(4):549–574.
Brucker, P., Hilbig, T., and Hurink, J. (1999). A branch and bound algorithm for a single-machine
scheduling problem with positive and negative time-lags. Discrete Applied Mathematics, 94
(1–3):77–99.
Chang, Y. C. and Lee, C.-Y. (2004). Machine scheduling with job delivery coordination. European
Journal of Operational Research, 158(2):470–487.
Chen, Z.-L. (1996). Parallel machine scheduling with time dependent processing times. Discrete
Applied Mathematics, 70(1):81–93.
Chen, Z. L. and Vairaktarakis, G. L. (2005). Integrated scheduling of production and distribution
operation. Management Science, 51(4):614–628.
Cheng, T. C. E., Gupta, J. N. D., and Wang, G. Q. (2000). A review of flowshop scheduling research
with setup times. Production and Operations Management, 9(3):262–282.
Choi, S. W., Kim, Y. D., and Lee, G. C. (2005). Minimizing total tardiness of orders with reentrant
lots in a hybrid flowshop. International Journal of Production Research, 43(11):2149–2167.
Crama, Y. and Spieksma, F. C. R. (1996). Scheduling jobs of equal length: Complexity, facets and
computational results. Mathematical Programming, 72(3):207–227.
Defersha, F. and Chen, M. (2012). Jobshop lot streaming with routing flexibility, sequence-
dependent setups, machine release dates and lag time. International Journal of Production
Research, 50(8):2331–2352.
Eren, T. and Guner, E. (2009). A bicriteria parallel machine scheduling with a learning effect.
International Journal of Advanced Manufacturing Technology, 40(11–12):1202–1205.
Feldmann, M. and Biskup, D. (2008). Lot streaming in a multiple product permutation flow shop
with intermingling. International Journal of Production Research, 46(1):197–216.
Fondrevelle, J., Oulamara, A., and Portmann, M.-C. (2006). Permutation flowshop scheduling prob-
lems with maximal and minimal time lags. Computers & Operations Research, 33(6):1540–1556.
Framinan, J. M., Gupta, J. N. D., and Leisten, R. (2004). A review and classification of heuristics for
permutation flow-shop scheduling with makespan objective. Journal of the Operational Research
Society, 55(12):1243–1255.
Gawiejnowicz, S. (2008). Time-Dependent Scheduling. Springer.
Gupta, J. N. D. and Stafford, Jr, E. F. (2006). Flowshop scheduling research after five decades.
European Journal of Operational Research, 169(3):699–711.
Hall, N. G. and Sriskandarajah, C. (1996). A survey of machine scheduling problems with blocking
and no-wait in process. Operations Research, 44(3):510–525.
Hejazi, S. R. and Saghafian, S. (2005). Flowshop-scheduling problems with makespan criterion: a
review. International Journal of Production Research, 43(14):2895–2929.
Hendel, Y., Runge, N., and Sourd, F. (2009). The one-machine just-in-time scheduling problem
with preemption. Discrete Optimization, 6(1):10–22.
Hou, S. and Hoogeveen, H. (2003). The three-machine proportionate flow shop problem with
unequal machine speeds. Operations Research Letters, 31(3):225–231.
Kubiak, W. and van de Velde, S. (1998). Scheduling deteriorating jobs to minimize makespan.
Naval Research Logistics, 45(5):511–523.
Lawler, E. L. (1973). Optimal Sequencing of a Single Machine Subject to Precedence Constraints.
Management Science, 19(5):544–546.
Lee, C.-Y. and Chen, Z.-L. (2001). Machine scheduling with transportation considerations. Journal
of Scheduling, 4(1):3–24.
Lee, C.-Y., Lei, L., and Pinedo, M. (1997). Current trends in deterministic scheduling. Annals of
Operations Research, 70:1–41.
Lee, C.-Y., Uzsoy, R., and Martin-Vega, L. A. (1992). Efficient algorithms for scheduling semicon-
ductor burn-in operations. Operations Research, 40(4):764–775.
Lee, W. C., Wu, C. C., and Sung, H. J. (2004). A bi-criterion single-machine scheduling problem
with learning considerations. Acta Informatica, 40(4):303–315.
Leisten, R. (1990). Flowshop sequencing problems with limited buffer storage. International Jour-
nal of Production Research, 28(11):2085.
Leisten, R. and Kolbe, M. (1998). A note on scheduling jobs with missing operations in permutation
flow shops. International Journal of Production Research, 36(9):2627–2630.
Lenstra, J. K. and Rinnooy Kan, A. H. G. (1978). Complexity of Scheduling under Precedence
Constraints. Operations Research, 26(1):22–35.
Li, C. L., Vairaktarakis, G. L., and Lee, C.-Y. (2005). Machine scheduling with deliveries to multiple
customer locations. European Journal of Operational Research, 164(1):39–51.
Linn, R. and Zhang, W. (1999). Hybrid flow shop scheduling: A survey. Computers & Industrial
Engineering, 37(1–2):57–61.
Liu, Z., Yu, W., and Cheng, T. C. E. (1999). Scheduling groups of unit length jobs on two identical
parallel machines. Information Processing Letters, 69(6):275–281.
Lushchakova, I. N. and Kravchenko, S. A. (1998). Two-machine shop scheduling with zero and
unit processing times. European Journal of Operational Research, 107(2):378–388.
Maffioli, F. and Sciomachen, A. (1997). A mixed-integer model for solving ordering problems with
side constraints. Annals of Operations Research, 69:277–297.
Mascis, A. and Pacciarelli, D. (2005). Job-shop scheduling with blocking and no-wait constraints.
European Journal of Operational Research, 143(3):498–517.
Naderi, B. and Ruiz, R. (2010). The distributed permutation flowshop scheduling problem. Com-
puters & Operations Research, 37(4):754–768.
Ow, P. S. (1985). Focused scheduling in proportionate flowshops. Management Science, 31(7):852–
869.
Pan, Q.-K. and Wang, L. (2008). A novel differential evolution algorithm for no-idle permutation
flow-shop scheduling problems. European Journal of Industrial Engineering, 2(3):279–297.
Papadimitriou, C. H. and Kanellakis, P. C. (1980). Flowshop scheduling with limited temporary-
storage. Journal of the ACM, 27(3):533–549.
Park, Y. B., Pegden, C., and Enscore, E. (1984). A survey and evaluation of static flowshop schedul-
ing heuristics. International Journal of Production Research, 22(1):127–141.
Pinedo, M. L. (2012). Scheduling: Theory, Algorithms, and Systems. Springer, New York, fourth
edition.
Potts, C. N. and Hall, N. G. (2003). Supply chain scheduling: Batching and delivery. Operations
Research, 51(4):566–584.
Potts, C. N. and Kovalyov, M. Y. (2000). Scheduling with batching: A review. European Journal of
Operational Research, 120(2):222–249.
Quadt, D. and Kuhn, D. (2007). A taxonomy of flexible flow line scheduling procedures. European
Journal of Operational Research, 178(3):686–698.
Ribas, I., Leisten, R., and Framinan, J. M. (2010). Review and classification of hybrid flow shop
scheduling problems from a production system and a solutions procedure perspective. Computers
& Operations Research, 37(8):1439–1454.
Riezebos, J. and Gaalman, G. J. C. (1998). Time lag size in multiple operations flow shop scheduling
heuristics. European Journal of Operational Research, 105(1):72–90.
Ruiz, R. and Andres-Romano, C. (2011). Scheduling unrelated parallel machines with resource-
assignable sequence-dependent setup times. 57(5–8):777–794.
Ruiz, R. and Maroto, C. (2005). A comprehensive review and evaluation of permutation flowshop
heuristics. European Journal of Operational Research, 165(2):479–494.
Ruiz, R. and Maroto, C. (2006). A genetic algorithm for hybrid flowshops with sequence dependent
setup times and machine eligibility. European Journal of Operational Research, 169(3):781–800.
Ruiz, R., Vallada, E., and Fernández-Martínez, C. (2009). Scheduling in flowshops with no-idle
machines. In Chakraborty, U. K., editor, Computational Intelligence in Flow Shop and Job Shop
Scheduling, volume 230 of Studies in Computational Intelligence, pages 21–51, Berlin. Springer-
Verlag.
Ruiz, R. and Vázquez-Rodríguez, J. A. (2010). The hybrid flowshop scheduling problem. European
Journal of Operational Research, 205(1):1–18.
Sarin, S. C. and Jaiprakash, P. (2007). Flow Shop Lot Streaming. Springer, New York.
Schmidt, G. (2000). Scheduling with limited machine availability. European Journal of Operational
Research, 121(1):1–15.
Sen, T. and Gupta, S. K. (1984). A state-of-art survey of static scheduling research involving due
dates. OMEGA, The International Journal of Management Science, 12(1):63–76.
Shabtay, D. and Steiner, G. (2007). A survey of scheduling with controllable processing times.
Discrete Applied Mathematics, 155(13):1643–1666.
Sidney, J. B. (1975). Decomposition Algorithms for Single-Machine Sequencing with Precedence
Relations and Deferral Costs. Operations Research, 23(2):283–298.
Turner, S. and Booth, D. (1987). Comparison of Heuristics for Flow Shop Sequencing. OMEGA,
The International Journal of Management Science, 15(1):75–78.
Urlings, T., Ruiz, R., and Sivrikaya-Şerifoğlu, F. (2010). Genetic algorithms with different rep-
resentation schemes for complex hybrid flexible flow line problems. International Journal of
Metaheuristics, 1(1):30–54.
Vachajitpan, P. (1982). Job sequencing with continuous machine operation. Computers & Industrial
Engineering, 6(3):255–259.
Vallada, E., Ruiz, R., and Minella, G. (2008). Minimising total tardiness in the m-machine flowshop
problem: A review and evaluation of heuristics and metaheuristics. Computers & Operations
Research, 35(4):1350–1373.
Vignier, A., Billaut, J.-C., and Proust, C. (1999). Les problèmes d’ordonnancement de type flow-
shop hybride: État de l’art. RAIRO Recherche opérationnelle, 33(2):117–183. In French.
Wang, H. (2005). Flexible flow shop scheduling: optimum, heuristics and artificial intelligence
solutions. Expert Systems, 22(2):78–85.
Yang, W.-H. and Liao, C.-J. (1999). Survey of scheduling research involving setup times. Interna-
tional Journal of Systems Science, 30(2):143–155.
Yeung, W.-K., Oğuz, C., and Cheng, T.-C. E. (2009). Two-machine flow shop scheduling with
common due window to minimize weighted number of early and tardy jobs. Naval Research
Logistics, 56(7):593–599.
Chapter 5
Objectives
5.1 Introduction
Note that the objective categories mentioned here are highly interdependent.
Sometimes they act in parallel, as in the case of idle time costs and utilisation.
Unfortunately, most of them are often conflicting: the case of maximising machine
utilisation strongly conflicts with reducing work-in-progress or inventory levels.
High machine utilisation also conflicts with service level or with meeting due dates.
If machine utilisation is low, there is a certain guarantee that machines will be free
when needed for finishing a job that is about to be late. On the other hand, a very
high inventory level practically guarantees that lead times will approach zero, as all
products will be readily available.
In addition, there is the issue of establishing a good level of performance on the
shop floor for any of the previously cited criteria. A reference point, such as a cost
threshold or a customer satisfaction level, has to be set. However, this is in fact a
circular problem since usually measuring these criteria is, as mentioned, not easy.
As a result, the performance of the schedules is often measured against past
performance. Given that manufacturing systems are highly dynamic, comparing
against past performance inevitably introduces a large degree of bias in the process.
Another important issue is that, when measuring the performance of a schedule, the
schedule has to be broken down into specific time horizons (e.g. weeks) and each
time horizon is scheduled independently of the others. The result might be that a
very good optimisation for one period affects the following periods negatively.
Other problems arise at the boundaries between levels of management, for example,
between the schedule manager at a higher decision level and the plant manager at
an operational level.
While the scheduler might be interested in maximising customer’s satisfaction, the
• Cj : Completion time of job j, i.e. time at which job j finishes its processing in the
shop.
• Fj : Flowtime of job j. This models the time that the job stays in the shop while in
production or waiting for processing. Clearly, Fj = Cj − rj . Most importantly, in
the absence of release dates, flowtime and completion time are equal, i.e. Fj = Cj .
• Lj : Lateness or slack of job j with respect to its due date dj or deadline d̄j . As
such, it is calculated as Lj = Cj − dj or Lj = Cj − d̄j . The lateness function is
shown in Fig. 5.2.
• Tj : Tardiness of job j. This measures only jobs that finish beyond their due dates
and it is calculated as Tj = max{Lj , 0}. Tardiness ignores negative lateness values,
i.e. jobs finished before their due date are not tardy and are therefore not a cause
for concern. The tardiness function is shown in Fig. 5.3.
• Ej : Earliness of job j. Similarly to tardiness, it is only calculated for early jobs, i.e.
Ej = max{−Lj , 0}. Note that the earliness function is non-increasing with respect
to Cj as shown in Fig. 5.4.
• Uj : Tardy job or late job. This measure yields 1 if job j is late (i.e. Tj > 0 or
equivalently Lj > 0, or equivalently Cj > dj ), and zero if not. Figure 5.5 depicts
this function.
[Figs. 5.2 and 5.3: the lateness function Lj and the tardiness function Tj plotted against Cj , with the due date dj marked on the horizontal axis.]
• Vj : Early job. This measure yields 1 if job j is early (i.e. Lj < 0, or equivalently
Cj < dj ), and zero if not. Figure 5.6 depicts this function.
• ETj : Earliness-Tardiness of job j. This measure is defined as the sum of the tardiness
and the earliness of job j, i.e. ETj = Ej +Tj . The function of ETj is given in Fig. 5.7.
• JITj : Just-In-Time job. This measure yields 1 if job j is neither early nor late (i.e.
Lj = 0, or equivalently Cj = dj ), and zero if not. This is often referred to as
‘just-in-time’ or JIT. The function of JITj is given in Fig. 5.8.
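As a quick illustration (our own sketch, not the book's code; the function and variable names are made up), all of the above per-job measures follow directly from the completion time Cj together with rj and dj :

```python
# Illustrative sketch (not from the book): the per-job measures defined above,
# computed from a completion time C, release date r and due date d.

def job_measures(C, r, d):
    """Return the per-job performance measures of Sect. 5.3."""
    F = C - r                 # flowtime; equals C when r = 0
    L = C - d                 # lateness (may be negative)
    T = max(L, 0)             # tardiness
    E = max(-L, 0)            # earliness
    U = 1 if L > 0 else 0     # tardy-job indicator
    V = 1 if L < 0 else 0     # early-job indicator
    ET = E + T                # earliness-tardiness
    JIT = 1 if L == 0 else 0  # just-in-time indicator
    return {"F": F, "L": L, "T": T, "E": E, "U": U, "V": V, "ET": ET, "JIT": JIT}

print(job_measures(C=12, r=2, d=10))  # a tardy job: L = 2, T = 2, E = 0
```

Note how tardiness and earliness are complementary truncations of the same lateness value, which is why at most one of them can be positive for a given job.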
Aside from the data in the model (most notably rj and dj ), note that all above
measures depend on the completion time of the job. The completion time of the job
can be obtained for a particular schedule, i.e. the collection SOij of the starting times
of each operation of job j on machine i, as Cj = maxi {SOij + pij }, where pij denotes
the processing time of job j on machine i.
When some form of stochastic behaviour in the processing times is assumed, the
completion time of the job is a random variable that cannot be calculated by a
deterministic procedure. In these cases, statistical indicators of such a random
variable (such as its mean or variance) can either be obtained by a closed formula
(unfortunately, only for specific cases) or at least be estimated via samples.
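As a sketch of the sampling approach (our own illustration; the single-machine setting, the fixed job sequence and the exponential distribution of processing times are all assumptions made for the example), the mean completion times can be estimated as:

```python
import random

# Illustrative sketch: estimating E[C_j] by sampling when processing times are
# random. We assume a single machine processing jobs in a fixed sequence, with
# exponentially distributed processing times; the rates below are made up.

def sample_completion_times(mean_times, rng):
    C, t = {}, 0.0
    for j, m in enumerate(mean_times):
        t += rng.expovariate(1.0 / m)   # one realisation of p_j
        C[j] = t                        # job j completes when its operation ends
    return C

rng = random.Random(42)
runs = [sample_completion_times([3.0, 5.0, 2.0], rng) for _ in range(10000)]
est = [sum(r[j] for r in runs) / len(runs) for j in range(3)]
print(est)  # ≈ [3, 8, 10], the cumulative sums of the mean processing times
```

On a single machine the estimate converges to the cumulative sums of the means, but for general shops no such closed form is available and sampling is the fallback.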
[Figs. 5.7 and 5.8: the earliness-tardiness function ETj and the just-in-time indicator JITj plotted against Cj , with the due date dj marked on the horizontal axis.]
• Δj (ΠA , ΠB ): Time disruption of job j. It indicates the difference (in absolute value)
between the completion time of job j in schedule ΠA , and its completion time in
schedule ΠB , i.e. Δj (ΠA , ΠB ) = |Cj (ΠA ) − Cj (ΠB )|.
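A minimal sketch (ours, not from the book) of this disruption measure, representing each schedule simply by a mapping from job to completion time:

```python
# Illustrative sketch: the time disruption Δj between two schedules Π_A and
# Π_B, each given as a dict {job: completion time}.

def time_disruption(C_a, C_b):
    """Return Δj = |Cj(Π_A) − Cj(Π_B)| for every job j."""
    return {j: abs(C_a[j] - C_b[j]) for j in C_a}

C_a = {1: 10, 2: 14, 3: 20}   # completion times under schedule Π_A
C_b = {1: 12, 2: 14, 3: 17}   # completion times under schedule Π_B
print(time_disruption(C_a, C_b))  # {1: 2, 2: 0, 3: 3}
```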
The above measures are summarised in Table 5.1.
From the previous notation, we are ready to define the most common objectives.
A summary is provided in Fig. 5.2. According to the classification of scheduling
models introduced in Sect. 3.3.2, the γ field of this notation will be used to represent
the criteria under consideration. For practical purposes, we will classify them into
five general groups:
• Feasibility. In this case, the objective is to find a feasible solution for the model.
• Non due-date related. There is no information (or it is not required) regarding due
dates or deadlines.
• Due-date related. The objective involves the usage of information about due dates
or deadlines.
• Rescheduling related. The objective involves information from two different
schedules (usually associated with two different decision intervals).
• Additional objectives. Other objectives that can be considered.
Therefore, the model 1|prec, prmpt|− indicates the formal decision problem of
finding (at least) one feasible schedule for the single machine layout with precedence
constraints where pre-emption is allowed.
For the rest of the groups, a distinction can be made depending on the type of
function g applied to the completion times: the function f is of the max-form if
f = max1≤j≤n g(Cj ), whereas it is of the sum-form if f = Σnj=1 g(Cj ). Furthermore,
recall from Sect. 1.5 that g is said to be regular if it is a non-decreasing function of
(C1 , C2 , . . . , Cn ). Unless stated otherwise, the objectives presented in the following
sections are regular.
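To make the distinction concrete, here is a small sketch (our own illustration) applying the same regular function g, the tardiness of each job, in both forms:

```python
# Illustrative sketch: max-form vs. sum-form objectives. Here g(C_j) is the
# tardiness T_j, a regular (non-decreasing) function of the completion time.

def objective(C, d, form):
    g = [max(Cj - dj, 0) for Cj, dj in zip(C, d)]   # g(C_j) = T_j
    return max(g) if form == "max" else sum(g)

C = [7, 11, 15]          # completion times
d = [8, 9, 10]           # due dates; tardiness per job: 0, 2, 5
print(objective(C, d, "max"))   # T_max = 5
print(objective(C, d, "sum"))   # total tardiness = 7
```

The max-form only reacts to the worst job, while the sum-form accumulates every violation, which is why the two forms can rank schedules differently.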
A final remark can be made with respect to stochastic settings, as there is a
stochastic counterpart of all scheduling objectives considered here. In such cases,
the otherwise deterministic objectives are now random variables and therefore the
objective can be minimising some of their characteristics, most notably their mean
or variance. These objectives can also be integrated in the model. For instance, the
objective of minimising the expected value of an objective O can be notated as
γ = E[O].
In this section, we discuss all objectives that do not consider due dates in their
calculation. These are mainly concerned with the completion times and with the
total time each job spends in the shop.
Makespan
The maximum completion time or makespan is defined as Cmax = max{C1 ,
C2 , . . . , Cn }. Regarding notation, makespan minimisation is denoted as:
γ = Cmax : Makespan minimisation.
Makespan can be seen as the time needed to finish the whole production plan since
it measures from the time the first job starts processing (normally this is assumed to
be zero, unless release times or other constraints exist), to the time the last job in the
processing sequence is finished on the last machine it visits. It is thus connected with
the idle time. However, no standard definition of idle time exists, as it depends on
whether the ‘heads’ (i.e. the time that each machine is idle before starting processing
its first job) and ‘tails’ (i.e. the time that each machine is idle after processing its
last job) are both included, excluded, or just one of them is considered. For the
definition of idle time including both ‘heads’ and ‘tails’, it can be proved that
minimising makespan and minimising this definition of idle time are equivalent
objectives. Additionally, makespan is also closely connected to the production rate,
which can be calculated as PR = n/Cmax . The utilisation of machine i is calculated
as TCi /PR, where TCi is the total capacity of machine i.
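As a small numerical sketch (our own; the function names are made up), the makespan and the production rate PR = n/Cmax for a set of job completion times:

```python
# Illustrative sketch: makespan and production rate. C_max is the largest
# completion time; with n jobs, the production rate is PR = n / C_max.

def makespan(C):
    return max(C)

def production_rate(C):
    return len(C) / makespan(C)

C = [12, 20, 16, 20]       # completion times of four jobs on their last machine
print(makespan(C))         # 20
print(production_rate(C))  # 0.2 jobs per unit of time
```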
Maximising machine utilisation is interesting as far as cost accounting is con-
cerned. Capital equipment and expensive machinery have to be paid for. Therefore,
using machines to produce products that increase the revenue of the company yields
higher returns on investment and, in the end, higher profitability. On the contrary,
idle machines are not generating revenue and are seen as lost profit. However, it
has to be noted that excessively increasing machine utilisation can result in undesired
effects. For example, keeping machines busy churning out products that nobody
demands will only reduce the level of service, increase the level of inventory and,
in the end, reduce benefits. In a make-to-stock environment, increasing machine
utilisation makes all the sense. In addition, it should be mentioned that maximisation
of utilisation makes sense for bottleneck machines or bottleneck systems, albeit high
utilisation might in turn result in larger waiting times, as is well known from
queueing theory.
Maximum flowtime
The maximum flowtime objective is defined as max1≤j≤n Fj and it is denoted as
γ = Fmax : Maximum flowtime minimisation.
Maximum flowtime is a niche objective that has been seldom studied in the sci-
entific literature. The reason is that with no release times, i.e. rj = 0, ∀j ∈ N, we
have that Cmax = max Fj . Therefore, and especially for long production horizons,
the effect of the release date rj on Fmax decreases as Cmax increases unless
very large release dates are present. A weighted version of this objective can also be
formulated (see Table 5.2).
Meeting due dates is probably the highest regarded indicator of customer’s satis-
faction influenced by short-term operations management. As a result of this, many
different objectives exist that in one way or another capture this importance. All these
criteria are mainly based on the previous definitions of lateness (Lj ), tardiness (Tj )
and earliness (Ej ).
At this stage, it is important to make some clarifications. In a make-to-order
production environment, every job usually corresponds to a client order (or to several
orders from different clients that have been batched). Therefore, the due date set by,
or agreed with, the client is considered. Make-to-stock production systems are different
as client orders are ideally served from the stock immediately as they are placed.
As a result, there are no due dates. However, stock levels decline as orders are served
and reach a point where replenishment is needed. An ‘internal’ order is placed
to refill the stock with products. These internal orders are, as regards scheduling,
identical to make-to-order client orders. The reason is that if the internal orders are
produced late, client orders might go unserved due to stockouts.
In make-to-order systems, it is interesting to measure the service level which is
just the fraction of client orders served by or before the due date. Alternatively, in
make-to-stock production environments, the fill rate is the fraction of orders that are
served from the inventory. Other measures include the amount of time in backorder
or number of stockouts. All the following scheduling objectives are related to these
real-life goals.
Maximum lateness
The objective of maximum lateness minimisation can be defined as follows:
γ = Lmax : Maximum lateness minimisation, calculated as max Lj = max1≤j≤n Lj .
This objective tries to minimise the maximum deviation from the due date or dead-
line in the schedule. The idea is to limit as much as possible the delay with respect
to the committed date. Note that max Lj might be a negative number if all jobs in a
given schedule are completed before their corresponding due dates. Indeed, if all jobs
can be finished before their due dates (i.e. there are schedules for which Lj < 0 ∀j),
then maximum lateness minimisation leads to finishing the jobs as early as possible.
Maximum tardiness
Maximum tardiness minimisation can be defined as follows:
γ = Tmax : Maximum tardiness minimisation, calculated as max Tj = max1≤j≤n Tj .
Maximum earliness
In some scenarios, tardy jobs might be of no concern compared with having jobs
finished before the due date. Finishing a product early has several added costs. First
of all, the product has to be stocked if the due date is still far ahead in time, which
results in inventory handling and costs, especially so if the product is cumbersome or
voluminous. Furthermore, if the product is very expensive and/or the manufacturing
process is costly, finishing it early prevents the company from invoicing the client, which
can add a considerable financial burden. Some extreme cases could be perishable
products that have to be used shortly after production. Dairy foods or some chemical
products are clear examples. In these cases, minimising earliness might be not just
preferable, but mandatory.
Maximum earliness minimisation can be defined as follows:
γ = max Ej : Maximum earliness minimisation, calculated as max Ej = max1≤j≤n Ej .
Total/average lateness
Minimisation of total/average lateness can be denoted as follows:
γ = ΣLj : Total lateness minimisation, calculated as ΣLj = Σnj=1 Lj .
γ = L̄ : Average lateness minimisation, calculated as L̄ = (1/n) · Σnj=1 Lj .
Total lateness and average lateness are not very appropriate objectives. The reason
is that negative lateness values will compensate for the positive ones and the final total
or average lateness value might have no real meaning or interpretation. For example, a
small average lateness L could be the result of all jobs meeting due dates, something
that is desired, or it could be the outcome of a bad schedule where half of the jobs
are very early (large negative lateness values) and the other half of the products are
finished very late (large positive lateness values).
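The compensation effect just described is easy to reproduce numerically. The following sketch (our own, not from the book) computes total and average lateness, tardiness and earliness for a deliberately bad schedule:

```python
# Illustrative sketch: total/average lateness, tardiness and earliness
# from completion times and due dates.

def due_date_objectives(C, d):
    L = [Cj - dj for Cj, dj in zip(C, d)]
    T = [max(l, 0) for l in L]
    E = [max(-l, 0) for l in L]
    n = len(C)
    return {
        "total_L": sum(L), "avg_L": sum(L) / n,
        "total_T": sum(T), "avg_T": sum(T) / n,
        "total_E": sum(E), "avg_E": sum(E) / n,
    }

# One job very early, one very late: average lateness is 0, even though the
# schedule is poor — which is exactly why lateness aggregates can mislead.
print(due_date_objectives(C=[2, 18], d=[10, 10]))
```

Here total lateness is 0 while total tardiness and total earliness are both 8, so the tardiness and earliness aggregates expose the problem that the lateness aggregate hides.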
Total/average tardiness
As we have just discussed, total/average lateness minimisation does not seem to
make sense in many settings. Instead, total/average tardiness minimisation can be
used:
γ = ΣTj : Total tardiness minimisation, calculated as ΣTj = Σnj=1 Tj .
γ = T̄ : Average tardiness minimisation, calculated as T̄ = (1/n) · Σnj=1 Tj .
Note that, as was the case with the total and average completion time or flowtime,
there is no real difference from the optimisation point of view between minimising
the total and the average tardiness.
Total/average earliness
As with lateness minimisation, two variants are commonly studied:
γ = ΣEj : Total earliness minimisation, calculated as ΣEj = Σnj=1 Ej .
γ = Ē : Average earliness minimisation, calculated as Ē = (1/n) · Σnj=1 Ej .
Note that Uj is a regular performance
measure whereas Vj is non-regular.
An interesting feature of Uj and Vj is that they are very easy to interpret.
As a matter of fact, they can be easily transformed to measures like the percentage
of early or tardy jobs. From that perspective, it is very easy to assess the quality of
a given schedule. Note, however, that it might be impossible to finish all products
before (or, respectively, after) their due dates. As expected, the main drawback of
these measures is that they neglect extremely early or tardy jobs. Normally, in real
life, it is desired to have a small number of tardy (early) jobs and, for the jobs that
are tardy (early), a small tardiness (earliness).
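The percentage transformation mentioned above can be sketched as follows; the completion times and due dates of this 5-job instance are hypothetical:

```python
C = [188, 249, 316, 496, 594]   # completion times C_j
d = [156, 506, 522, 238, 445]   # due dates d_j

n = len(C)
tardy = sum(1 for c, dd in zip(C, d) if c > dd)   # the sum of the U_j
early = sum(1 for c, dd in zip(C, d) if c < dd)   # the sum of the V_j

pct_tardy = 100 * tardy / n
pct_early = 100 * early / n
print(pct_tardy, pct_early)  # 60.0 40.0
```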
Note that, in contrast to the rest of the objectives, here a maximisation criterion is
the one that makes sense.
In this section, we cover other potentially interesting, albeit not as thoroughly studied,
objectives. One common characteristic of some previous objectives is that they are
linear in Cj . Each unit of time that a job is completed after its due date adds one
unit to the tardiness value (or wj units if we are talking about weighted tardiness).
However, very large deviations from the due date are often more important than small
deviations. A simple way of penalising large deviations from the due date more
heavily is by simply squaring the deviations. A set of new objectives and criteria arises:
γ = max Dj : Maximum squared deviation from the due date, calculated as
max Dj = max{(C1 − d1)², (C2 − d2)², . . . , (Cn − dn)²}.
γ = ∑Dj : Total squared deviation from the due date, calculated as ∑Dj = ∑_{j=1}^n (Cj − dj)².
γ = D̄ : Average squared deviation from the due date, calculated as
D̄ = (1/n) · ∑_{j=1}^n (Cj − dj)².
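These squared-deviation objectives can be sketched in a few lines; the job data below are hypothetical:

```python
def squared_deviations(C, d):
    """D_j = (C_j - d_j)^2 for each job j."""
    return [(c - dd) ** 2 for c, dd in zip(C, d)]

C = [188, 249, 316]   # hypothetical completion times
d = [156, 506, 322]   # hypothetical due dates

D = squared_deviations(C, d)
max_D = max(D)             # max-form: worst squared deviation
total_D = sum(D)           # sum-form
avg_D = total_D / len(D)   # average form
print(max_D, total_D)      # 66049 67109
```

Note how job 2, which is 257 time units early, dominates the value: squaring makes large deviations overwhelmingly more important than small ones.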
Fig. 5.10 A more realistic non-linear earliness and tardiness function with a due date window
Again, we could have all other possibilities as regards weights: different weights
for earliness and tardiness, or even squared and weighted schemes. Picture, for
example, the more realistic function of Fig. 5.10.
When some specific constraints are present in the system, lots of additional
objectives can be derived. Setup times are a clear example. When setups require
organisation or when they involve lots of time, money or both, it might be inter-
esting to just minimise the number of setups, or to minimise the total setup time
of the sequence. Instead of measuring time, one can measure costs. This is interesting
when, for example, doing setups on a non-critical machine is cheap while carrying
out setups on a critical or expensive machine is costly. Furthermore, different setup
operations might require more personnel or special tooling. In these situations,
minimising the total setup cost is an interesting alternative.
Bringing up the cost minimisation objective raises a very complex and daunting
issue. When doing scheduling, one is tempted to just state ‘minimise the production
costs’. However, this is easy to say but very difficult to measure. While overall
production costs are more or less accountable, specific production costs that actually
depend on how the products are scheduled are much more difficult to determine.
Furthermore, the costs of not meeting due dates are not easily measured. Some clients
might be lost, and measuring the lost revenue in these situations is anything but easy.
However, in some specific production companies, there might be an accepted and
established production cost accounting. Under these settings, minimising produc-
tion cost might be an interesting alternative to all other aforementioned performance
measures.
5.5 Adding Weights, Priorities or Importance
We defined in Sect. 3.2.2 of Chap. 3 the weights wj of the jobs. These weights are
useful for expressing the importance, priority, cost or whatever index of relative
importance we might need. Given different weights for the jobs, we can modify
all previous objectives to consider weights. Some examples are ∑wjCj, denoting
∑_{j=1}^n wjCj, or max wjTj, denoting max_{1≤j≤n} wjTj.
One big drawback of adding weights to the objectives is that, in some cases, the
value of the objective cannot be interpreted in an easy way. However, weighting
allows for richer and more realistic objectives. Picture, for example, earliness-tardiness
minimisation. Not all jobs are equally important: not being just in time for a big order
of an important client is not the same as not being just in time for a make-to-stock
or stock-replenishment order. For this last case of earliness-tardiness minimisation,
we have the following notation:
γ = ∑wjETj : Weighted sum of earliness and tardiness minimisation, calculated as
∑wjETj = ∑_{j=1}^n wjETj = ∑_{j=1}^n (wjEj + wjTj).
γ = ∑(wjEj + w′jTj) : Weighted sum of earliness and tardiness minimisation with different
weights for earliness and tardiness, calculated as ∑(wjEj + w′jTj) = ∑_{j=1}^n (wjEj + w′jTj).
Notice that the last function is even more flexible: by giving each job a different
weight, and also different weights for earliness and tardiness, we can closely match
the importance of being early and of being late. Normally, the consequences of being
early are not as dire as those of being late. A more realistic function is given in
Fig. 5.11.
Fig. 5.11 Weighted earliness-tardiness function with different weights for earliness and tardiness
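The effect of separate earliness and tardiness weights, as in the function of Fig. 5.11, can be sketched as follows; the jobs, due dates and weights are hypothetical:

```python
def weighted_et(C, d, w_early, w_tardy):
    """Sum over jobs of w_j * E_j + w'_j * T_j."""
    total = 0
    for c, dd, we, wt in zip(C, d, w_early, w_tardy):
        earliness = max(dd - c, 0)   # E_j
        tardiness = max(c - dd, 0)   # T_j
        total += we * earliness + wt * tardiness
    return total

C = [8, 12]
d = [10, 10]
# Being late is penalised three times as heavily as being early.
value = weighted_et(C, d, w_early=[1, 1], w_tardy=[3, 3])
print(value)  # 1*2 (job 1, two units early) + 3*2 (job 2, two units tardy) = 8
```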
Table 5.4 Completion times of each job j on each machine mi for the sequence (1, 2, 3, 4, 5)
cij     j=1   j=2   j=3   j=4   j=5
m1       89   108   143   147   169
m2      126   199   292   311   320
m3      141   246   310   402   406
m4      188   249   316   496   594
In Table 5.5, the first column includes all performance measures of Table 5.1, computed
for each job. The last two columns show the maximum and the sum of these values,
respectively, thereby obtaining, for the given sequence, a great number of the objectives
presented in the previous sections. Note the completion times Cj of each job (they are
the same as the c4j values of the previous table).
5.6 An Illustrative Example
Fig. 5.12 Gantt chart for the permutation flow shop example with the sequence (1, 2, 3, 4, 5)
Table 5.5 Performance measures for each job j and objective values for the sequence (1, 2, 3, 4, 5)
Measure   j=1    j=2    j=3    j=4    j=5    Max-form              Sum-form
Cj        188    249    316    496    594    Cmax = 594            ∑Cj = 1843
wjCj      940   1992    632   4960   1782    max wjCj = 4960       ∑wjCj = 10306
Lj         32   −257   −206    258    149    max Lj = 258          ∑Lj = −24
wjLj      160  −2056   −412   2580    447    max wjLj = 2580       ∑wjLj = 719
Tj         32      0      0    258    149    max Tj = 258          ∑Tj = 439
wjTj      160      0      0   2580    447    max wjTj = 2580       ∑wjTj = 3187
Ej          0    257    206      0      0    max Ej = 257          ∑Ej = 463
wjEj        0   2056    412      0      0    max wjEj = 2056       ∑wjEj = 2468
Uj          1      0      0      1      1    –                     ∑Uj = 3
wjUj        5      0      0     10      3    –                     ∑wjUj = 18
Vj          0      1      1      0      0    –                     ∑Vj = 2
wjVj        0      8      2      0      0    –                     ∑wjVj = 10
ETj        32    257    206    258    149    max ETj = 258         ∑ETj = 902
wjETj     160   2056    412   2580    447    max wjETj = 2580      ∑wjETj = 5655
Note that max-form objectives for Uj and Vj do not make sense, and they have not
been included in the table.
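The sum-form values of Table 5.5 can be recomputed directly from the completion times Cj of Table 5.4; the due dates dj and weights wj below are those implied by the table rows (dj = Cj − Lj and wj = wjCj/Cj):

```python
C = [188, 249, 316, 496, 594]   # completion times C_j (Table 5.4, machine 4)
d = [156, 506, 522, 238, 445]   # due dates, recovered as C_j - L_j
w = [5, 8, 2, 10, 3]            # weights, recovered as (w_j C_j) / C_j

L = [c - dd for c, dd in zip(C, d)]       # lateness L_j
T = [max(l, 0) for l in L]                # tardiness T_j
E = [max(-l, 0) for l in L]               # earliness E_j
U = [1 if t > 0 else 0 for t in T]        # tardy-job indicator U_j
ET = [e + t for e, t in zip(E, T)]        # earliness-tardiness ET_j

print(sum(C), sum(L), sum(T), sum(E), sum(U), sum(ET))
# 1843 -24 439 463 3 902, matching the Sum-form column of Table 5.5
print(sum(wj * t for wj, t in zip(w, T)))  # 3187, the weighted total tardiness
```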
5.7 Dealing with Conflicting Criteria: Multiobjective Scheduling
At the beginning of this chapter, we mentioned that most of the presented criteria
may be conflicting. High machine utilisation usually results in high flowtime. Many
other examples can be drawn. As a result of all this, the optimisation problem cannot
be dealt with by optimising a single criterion, basically because by doing so, many
other criteria will be severely affected.
The ideal situation would be to translate everything into a ‘profitability’ measure,
so that maximising this profitability would be the solution. The problem, as has been
mentioned, is that this is an impossible task, as it is not possible to calculate the costs
and benefits of every possible schedule and production decision.
In any case, one has to consider the decision-making process as a more abstract
entity. T’Kindt and Billaut (2006) state several items that make the decision-making
process extremely complicated, and these items apply directly to production
scheduling.
Given all of the above, the only viable approach is to deal with several objec-
tives simultaneously. By simultaneously we mean that several different performance
measures have to be measured for every possible schedule and that a compromise
decision has to be made. For example, let us assume that, of five plant managers,
three agree that maximising machine utilisation is a key concern due to the very
expensive equipment employed. The other two agree, but insist that service level must
not be sacrificed. One possible solution to this problem is to minimise the makespan
Cmax subject to a minimum service level, which could be defined as a maximum
number of tardy jobs ∑Uj. Another possibility could be to minimise makespan and,
once the best makespan solution has been obtained, we could minimise the number
of tardy jobs subject to no or moderate makespan deterioration. A third alternative
would be to obtain several trade-off solutions between the best possible makespan
value and the number of tardy jobs, and the lowest possible number of tardy jobs and
the makespan value.
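The first approach, minimising makespan subject to a cap on the number of tardy jobs (an ε-constraint in multi-objective terms), can be sketched by full enumeration on a tiny, hypothetical two-machine permutation flow shop instance:

```python
from itertools import permutations

p = {1: (4, 3), 2: (2, 7), 3: (6, 2), 4: (3, 5)}  # p[j] = (p_1j, p_2j)
d = {1: 9, 2: 12, 3: 10, 4: 20}                    # due dates d_j

def evaluate(seq):
    """Return (makespan, number of tardy jobs) of a permutation schedule."""
    c1 = c2 = 0
    tardy = 0
    for j in seq:
        c1 += p[j][0]                # completion on machine 1
        c2 = max(c1, c2) + p[j][1]   # completion on machine 2
        tardy += c2 > d[j]
    return c2, tardy

best_cmax = min(evaluate(s)[0] for s in permutations(p))
# epsilon-constraint: best makespan among schedules with at most 2 tardy jobs
feasible = [evaluate(s) for s in permutations(p) if evaluate(s)[1] <= 2]
best_constrained = min(c for c, _ in feasible)
```

Full enumeration is of course only viable for toy instances; the point is the structure of the decision, not the search method.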
There are many different techniques aimed at multi-objective optimisation. Reality
is indeed multi-objective, and therefore real problems need to be approached from a
multi-objective perspective. However, before delving into this, we first need to
introduce single-objective scheduling methods. Multi-objective scheduling will
therefore be discussed later, in Chap. 10.
In the last two chapters, the different scheduling models were presented and many
possible scheduling constraints were detailed. It is therefore to be expected that
scheduling criteria are equally rich and varied. We have made two main classifications
of objectives: those based on the completion times of the jobs and those based on the
due dates. It is safe to say that a sizeable part of the scheduling literature is concerned
with makespan minimisation. However, the practical relevance of this objective is
debatable, especially given such a high prevalence.
In the long run, the best objective is the one that is trusted. This trust comes
after the realisation of several schedules for which the objectives were known and for
which production and delivery went as expected. Advanced scheduling at production
plants usually entails ad-hoc and evolved objectives that satisfy several criteria
simultaneously.
Regarding further readings, a discussion of the relation between scheduling
objectives and corporate objectives goes as far back as Gupta and Dudek (1971)
and, more recently and in more detail, Gary et al. (1995). This issue is also discussed in
the paper of Stoop and Wiers (1996). Some authors even apply quality control charts
to measure schedule performance (MacCarthy and Wilson 2001). The notation
employed in Sect. 5.3 is rather standard and can be found, e.g., in French (1982)
or Pinedo (2012), among many others. Proofs that the maximisation of utilisation
is equivalent to the minimisation of makespan and to the minimisation of
idle time, including heads and tails, are given in most textbooks, such as Conway
et al. (1967), Baker (1974), French (1982), Błazewicz et al. (2002), Brucker (2007),
Pinedo (2009, 2012) and Baker and Trietsch (2009). These textbooks usually also
comment on and detail the objectives presented here.
Regarding specific contributions, makespan minimisation is reviewed in many
different papers, as the review papers are mostly linked to the processing layout
or to specific constraints. For the flow shop layout, some interesting reviews centred
around makespan are those of Framinan et al. (2004), Ruiz and Maroto (2005) and Hejazi
and Saghafian (2005). Parallel machines have received attention in the reviews of
Cheng and Sin (1990) and Mokotoff (2001), although these last review papers are
not solely centred around makespan. The same is applicable to the hybrid shop
layouts (mainly hybrid flow shops), which are reviewed in Linn and Zhang (1999),
Vignier et al. (1999), Wang (2005), Ruiz and Maroto (2006), Quadt and Kuhn (2007),
Ribas et al. (2010) and Ruiz and Vázquez-Rodríguez (2010). There is a whole body of
scientific research dealing with inserted idle time in scheduling, with Kanet and
Sridharan (2000) being a good review on the topic. Good reviews of due-date related
research are available from Sen and Gupta (1984), Baker and Scudder (1990) and,
more recently, Vallada et al. (2008). Jozefowska (2007) is a book solely devoted to
Just-In-Time scheduling.
References
Baker, K. R. (1974). Introduction to Sequencing and Scheduling. John Wiley & Sons, New York.
Baker, K. R. and Scudder, G. D. (1990). Sequencing with earliness and tardiness penalties: a review.
Mathematics of Operations Research, 15:483–495.
Baker, K. R. and Trietsch, D. (2009). Principles of Sequencing and Scheduling. Wiley, New York.
Błazewicz, J., Ecker, K. H., Pesch, E., Schmidt, G., and Wȩglarz, J. (2002). Scheduling Computer
and Manufacturing Processes. Springer-Verlag, Berlin, second edition.
Brucker, P. (2007). Scheduling Algorithms. Springer, New York, fifth edition.
Cheng, T. C. E. and Sin, C. C. S. (1990). A state-of-the-art review of parallel-machine scheduling
research. European Journal of Operational Research, 47(3):271–292.
Conway, R. W., Maxwell, W. L., and Miller, L. W. (1967). Theory of Scheduling. Dover Publications,
New York. Unabridged publication from the 1967 original edition published by Addison-Wesley.
Framinan, J. M., Gupta, J. N. D., and Leisten, R. (2004). A review and classification of heuristics for
permutation flow-shop scheduling with makespan objective. Journal of the Operational Research
Society, 55(12):1243–1255.
French, S. (1982). Sequencing and Scheduling: An Introduction to the Mathematics of the Job-Shop.
Ellis Horwood Limited, Chichester.
Gary, K., Uzsoy, R., Smith, S. P., and Kempf, K. (1995). Measuring the quality of manufacturing
schedules. In Brown, D. E. and Scherer, W. T., editors, Intelligent Scheduling Systems, volume 4 of
Operations Research/Computer Science Interfaces, pages 129–154, Dordrecht. Kluwer Academic
Publishers.
Gupta, J. N. D. and Dudek, R. A. (1971). Optimality Criteria for Flowshop Schedules. IIE Trans-
actions, 3(3):199–205.
Hejazi, S. R. and Saghafian, S. (2005). Flowshop-scheduling problems with makespan criterion: a
review. International Journal of Production Research, 43(14):2895–2929.
Jozefowska, J. (2007). Just-in-Time Scheduling. Springer, Berlin.
Kanet, J. J. and Sridharan, V. (2000). Scheduling with inserted idle time: Problem taxonomy and
literature review. Operations Research, 48(1):99–110.
Linn, R. and Zhang, W. (1999). Hybrid flow shop scheduling: A survey. Computers & Industrial
Engineering, 37(1–2):57–61.
MacCarthy, B. L. and Wilson, J. R., editors (2001). Human performance in Planning and Scheduling.
Taylor & Francis.
Mokotoff, E. (2001). Parallel machine scheduling problems: A survey. Asia-Pacific Journal of
Operational Research, 18(2):193–242.
Pinedo, M. (2009). Planning and Scheduling in Manufacturing and Services. Springer, New York,
second edition.
Pinedo, M. L. (2012). Scheduling: Theory, Algorithms, and Systems. Springer, New York, fourth
edition.
Quadt, D. and Kuhn, D. (2007). A taxonomy of flexible flow line scheduling procedures. European
Journal of Operational Research, 178(3):686–698.
Ribas, I., Leisten, R., and Framinan, J. M. (2010). Review and classification of hybrid flow shop
scheduling problems from a production system and a solutions procedure perspective. Computers
& Operations Research, 37(8):1439–1454.
Ruiz, R. and Maroto, C. (2005). A comprehensive review and evaluation of permutation flowshop
heuristics. European Journal of Operational Research, 165(2):479–494.
Ruiz, R. and Maroto, C. (2006). A genetic algorithm for hybrid flowshops with sequence dependent
setup times and machine eligibility. European Journal of Operational Research, 169(3):781–800.
Ruiz, R. and Vázquez-Rodríguez, J. A. (2010). The hybrid flowshop scheduling problem. European
Journal of Operational Research, 205(1):1–18.
Sen, T. and Gupta, S. K. (1984). A state-of-art survey of static scheduling research involving due
dates. OMEGA, The International Journal of Management Science, 12(1):63–76.
Stoop, P. and Wiers, V. (1996). The complexity of scheduling in practice. International Journal of
Operations and Production Management, 16(10):37–53.
T’Kindt, V. and Billaut, J.-C. (2006). Multicriteria Scheduling: Theory, Models and Algorithms.
Springer, New York, second edition.
Vallada, E., Ruiz, R., and Minella, G. (2008). Minimising total tardiness in the m-machine flowshop
problem: A review and evaluation of heuristics and metaheuristics. Computers & Operations
Research, 35(4):1350–1373.
Vignier, A., Billaut, J.-C., and Proust, C. (1999). Les problèmes d’ordonnancement de type flow-
shop hybride: État de l’art. RAIRO Recherche opérationnelle, 33(2):117–183. In French.
Wang, H. (2005). Flexible flow shop scheduling: optimum, heuristics and artificial intelligence
solutions. Expert Systems, 22(2):78–85.
Chapter 6
Construction of Scheduling Models
6.1 Introduction
After the definition of the main elements constituting scheduling models, this chapter
is devoted to the process of actually building them. Manufacturing scheduling models
are intended to simplify and map the relevant real-world problem setting into a
(usually formal) model, which will be used to yield, via scheduling methods, a solution
to this formal model. This solution is then transferred to and implemented in the
real-world setting.
Mathematical (i.e. formal) models include a combination of logical dependencies,
mathematical relationships such as equations and inequalities, data structures and
criteria. The quality of the answers to the real-world problem’s questions which are
produced by a model obviously depends on the accuracy of the structure and of the
data of the model. Here, we describe and discuss the most prominent approaches to
construct scheduling models.
More specifically, in this chapter we
• present basic approaches to construction of scheduling models (Sect. 6.2),
• describe decomposition and reintegration as a basic tool for handling model
complexity (Sect. 6.3),
• address aggregation and disaggregation in manufacturing scheduling to reduce a
model’s complexity (Sect. 6.4),
• sketch out what validation and verification means in the context of (scheduling)
models (Sect. 6.5).
Recall from Fig. 1.2 in Chap. 1 that we mapped the flow of a decision process where
we started from a real-world problem for which a decision is required. Then a formal
model is derived by means of simplification and formalisation. By some formal
procedure, this formal model is solved and this formal solution is transferred to
a real-world solution which is implemented. In this section, we mainly address the
‘upper right corner’ of Fig. 1.2, i.e. we will focus on simplification and formalisation,
especially the transfer of real-world relations into a formal, mathematical decision
model and the techniques to set up a respective formal model.
Formulation of a decision model for manufacturing scheduling requires adequate
real-world assumptions about constraints, objectives, tasks and logical dependencies,
and their transfer to the formal sphere. Both decision variables and parameters/
coefficients of the model have to be defined precisely and at an adequate level
of detail (see also Chap. 2). ‘Adequate’ here means both that the real-world aspects of
the problem are sufficiently represented in the model and that the model itself is
expected to be tractable. Complexity aspects, therefore, are of eminent importance
in model construction.
In Sect. 2.3.2.2, a classification of complexity was given. According to this
classification, note that model-oriented complexity in manufacturing scheduling
might result both from mass aspects (multiplicity and variance) and from chaos aspects
(ambiguity and changeability), such as:
Fig. 6.1 Framework for selection of scheduling model types (Nagar et al. 1995)
problem under consideration. This is the reason why we do not further specify these
aspects here.
With respect to the availability of resources, every operation (or sub-operation)
on the one hand has to regard the availability of the machine(s) it is assigned to, i.e.
constraints have to guarantee that only a limited number of operations (usually at
most one operation) are executed simultaneously on the respective machine.
On the other hand, limitations on possibly required additional resources (manpower,
etc.) might have to be considered as well, if this is relevant for the problem under
consideration.
With respect to the availability of jobs (or operations), constraints relative to
predecessors and successors of the operation under consideration have to be included.
Usually, all predecessor operations have to be finished before the current operation
can start. This is what is called an end–start relation with minimum time distance
(usually, in scheduling problems, this minimum time distance is 0) in network
planning models. Supposing SOij is the starting time of the operation of job j on
machine i and Cij the respective finishing time, this constraint can be expressed as
SOij ≥ Ci′j′ + MinTimeDist((i, j), (i′, j′)), for all operations (i′, j′) being (immediate)
predecessors of operation (i, j). Both SOij and Cij are continuous-type variables
of the scheduling model.
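The end–start constraint just stated can be checked mechanically for any candidate schedule. The sketch below uses hypothetical operation data, with operations keyed as (machine, job) pairs:

```python
def respects_precedence(SO, C, predecessors, min_dist):
    """Check SO[op] >= C[pred] + MinTimeDist(pred, op) for every predecessor."""
    for op, preds in predecessors.items():
        for pred in preds:
            if SO[op] < C[pred] + min_dist.get((pred, op), 0):
                return False
    return True

# A candidate schedule: job 1 is processed on machine 1, then on machine 2.
SO = {(1, 1): 0, (2, 1): 6}     # starting times SO_ij
C = {(1, 1): 5, (2, 1): 10}     # finishing times C_ij
predecessors = {(2, 1): [(1, 1)]}
min_dist = {((1, 1), (2, 1)): 1}   # e.g. a transport time of one unit

print(respects_precedence(SO, C, predecessors, min_dist))  # True: 6 >= 5 + 1
```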
However, recall from Sect. 4.2.1 that other types of time distances, i.e. apart from
the above-described end–start relation, can also occur. Usually, not both predecessor
and successor relations have to be mapped in a model, since they result from each
other. (Jobs’ or operations’ release times and deadlines can be easily included in the
SOij, Cij notation.)
Basically, there are two different types of predecessors for every operation:
fixed and changeable ones. Fixed predecessors result from physical constraints of
operations or jobs, e.g. from pre-specified job/product routings. (MinTimeDist in the
above formula might represent, for fixed predecessors, transportation times between
two consecutive operations of one job.) Changeable predecessors result from the
different decisions/schedules which represent the different solutions of the scheduling
problem. A best choice of these changeable predecessors represents the optimal
solution to the manufacturing scheduling problem under consideration. (MinTimeDist
in the above formula might represent, for changeable predecessors, sequence-dependent
setup times.)
Standard scheduling objective functions can obviously also be easily described
using the above SOij, Cij notation (see also Sect. 5.3): for a given solution, i.e.
a complete assignment of operations to the time scale, the ending times of the jobs’
last operations immediately define makespan, flowtime, utilisation, etc.
Many other aspects might be included in a manufacturing scheduling model,
such as:
• predictable or unpredictable non-availability periods of machines/resources,
• non-availability of materials, fixtures or tools (also relevant within a short-term
horizon, e.g. in a just-in-time setting),
6.2 Basic Approaches to Construction of Scheduling Models
for all pairs of operations (i1, j1) and (i2, j2) for which both relative sequences are
relevant, i.e. which might produce an infeasibility if they are at least partly processed
in parallel. (This is obviously the case if both operations take place on the same
machine, which is supposed to be able to process no more than one operation at a
time, but it might sometimes also happen otherwise.)
Obviously, a necessary constraint for feasibility of a schedule is then a disjunctive
constraint, for all i, j1 and j2 with j1 ≠ j2, in which a large number M represents both
alternatives of precedence for the operations (i, j1) and (i, j2); see Williams (2013). This
representation of precedence alternatives on one machine requires only half of the
binary variables as compared with the full table, since (i, j1) preceding (i, j2) means
automatically (i, j2) not preceding (i, j1), and vice versa.
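The constraint itself did not survive extraction here, but one standard big-M formulation consistent with this description (cf. Williams 2013) uses a binary variable y_{i j1 j2} equal to 1 if job j1 precedes job j2 on machine i; this is a sketch, not necessarily the book's exact notation:

```latex
SO_{i j_1} + p_{i j_1} \le SO_{i j_2} + M \, (1 - y_{i j_1 j_2}) \qquad \forall i,\; j_1 \ne j_2
SO_{i j_2} + p_{i j_2} \le SO_{i j_1} + M \, y_{i j_1 j_2}
y_{i j_1 j_2} \in \{0, 1\}
```

A single binary variable per unordered pair of jobs switches between the two precedence alternatives, which is exactly the halving of binary variables mentioned in the text.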
Using (and possibly expanding or modifying) the approaches described in this
section generates a decision model for manufacturing scheduling problems, defining
variables, constraints and objective function(s) in one model, which is the
starting point for the optimisation or search process to generate a good, efficient or
even optimal solution.
So far in this section, we have exclusively addressed the design of scheduling models
with constraints that are strictly to be kept, i.e. hard constraints. However, in real-world
settings constraints are sometimes not as strict, be it because of the existence of
processing alternatives not mapped in the model, because of (time) buffers included
only implicitly in the model, or for other reasons. In such cases, a relaxation
might be included in the respective constraints. E.g., in the above constraints, a time
relaxation of size u (as a variable) might be included, meaning that the strict
precedence requirement between the operations is replaced by some allowance of
parallelity, e.g. in real-world terms by using a time buffer. However, to avoid unlimited
violation of the precedence requirement, u has to be bounded, either primally by an
additional constraint (addressing the size of u) or dually by penalising a positive
value of u in the objective function. An alternative way of handling such soft
constraints is by using fuzzy set approaches, where the degree of membership
corresponds to the amount by which the constraint is violated.
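The relaxed constraint referred to above was also lost in extraction; under the notation of this section it would plausibly read as follows (a sketch, with λ a hypothetical penalty coefficient):

```latex
SO_{ij} + u \ge C_{i'j'} + \mathrm{MinTimeDist}\bigl((i, j), (i', j')\bigr), \qquad u \ge 0,
```

with u either bounded by an additional constraint or penalised in the objective function by a term such as λ·u.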
Another setting demanding similar approaches is that of conflicting constraints in
the original model, resulting in the non-existence of a feasible solution. Relaxing
these constraints primally or dually, or using goal programming approaches, might be
adequate as well, because of implicit buffers and/or technological or process flexibility
not being included in the model.
Many other aspects of real-world scheduling settings can be mapped in formal
scheduling models. Lead times, release or processing patterns, late modifications of
jobs (with respect to time, size, design, . . .), quality aspects and many others might
increase the complexity of the real-world scheduling task under consideration. Also
variability aspects, e.g. with respect to processing times, might be a severe issue in
many situations, be it influenced by a human component, by planning deficiencies,
by technological variability, etc. The enormous diversity of these aspects
Constraints:
SO_{Rj(m), j} + p_{Rj(m), j} ≤ C    ∀ j    (6.5)
x_{ijk} ∈ {0, 1}    ∀ i, j, k    (6.6)
Combining constraints (6.5) and objective function (6.1), the minimum value for
C is the minimum makespan, since the left-hand side of (6.5) defines the finishing time
of every job’s last operation. Constraint set (6.2) guarantees the machine sequence
per job, stating that job j’s Rj(i − 1)-th operation must be finished before its Rj(i)-th
operation can start. Constraint set (6.3) guarantees that no two jobs are simultaneously
processed on the same machine. Finally, constraint set (6.4) guarantees that, on
machine i, either job j is processed before job k or vice versa.
Clearly, this model can easily be adjusted to other well-known settings, such
as, e.g. flowtime as objective function, or the inclusion of due dates (and related
objective functions) or deadlines.
However, keeping in mind the often rather short-term horizon of real-world
scheduling problems, ignoring uncertainty might not be as critical from an
application point of view. Nevertheless, the extent of uncertainty’s influence is often
not known in advance. Therefore, this approach seems appropriate in situations
where uncertainty will have only little influence on the problem and its solution. Only
slightly more elaborate are models which include buffers for uncertainty, e.g. by
expanding processing times or by reducing capacities. The ‘correct’ determination
of the size of these buffers is a most challenging task. Both approaches might run into
feasibility and/or optimality problems. In addition to the application of deterministic
approaches to non-deterministic problems, sensitivity analyses can be executed, be
it by using some kind of simulation or by applying explicit formal approaches
(if available).
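The simulation-based sensitivity analysis just mentioned can be sketched as follows for a fixed two-machine permutation flow shop sequence; all processing times and the ±20 % perturbation range are hypothetical:

```python
import random

random.seed(42)

p1 = [4, 2, 6, 3]  # nominal processing times on machine 1, in sequence order
p2 = [3, 7, 2, 5]  # nominal processing times on machine 2

def makespan(a, b):
    """Makespan of a fixed permutation in a 2-machine flow shop."""
    c1 = c2 = 0.0
    for x, y in zip(a, b):
        c1 += x
        c2 = max(c1, c2) + y
    return c2

nominal = makespan(p1, p2)  # 21.0 for the data above
# Resample the makespan under +/-20% uniform noise on every processing time.
samples = [
    makespan([x * random.uniform(0.8, 1.2) for x in p1],
             [y * random.uniform(0.8, 1.2) for y in p2])
    for _ in range(1000)
]
spread = max(samples) - min(samples)  # how sensitive the schedule is to noise
```

The spread of the sampled makespans around the nominal value gives a first, crude impression of whether uncertainty can safely be ignored for this instance.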
However, if uncertainty is expected to be an important issue with respect to the
problem under consideration, it has to be considered explicitly, if possible. Two
basic approaches are intensively discussed in the literature: stochastic scheduling
approaches and fuzzy logic approaches.
Restricting ourselves to modelling issues in this chapter, stochastic models (in
manufacturing scheduling) might be relatively easily formulated by just defining certain
parameters of an originally deterministic model to be stochastic. Problems then result
‘only’ from model solution and not from model formulation. However, this interpretation
is profoundly naive: modelling and solution aspects usually influence each other and
should be regarded jointly. Nevertheless, we will not deal with solution methods
for stochastic scheduling problems here. Instead, we point the reader to respective
references, e.g. the book of Pinedo (2012), where stochastic scheduling problems
and solution methods are discussed. Without going into detail, and as is well known
for many stochastic problem settings also outside manufacturing scheduling, stochastic
aspects significantly increase the complexity of a model and of the solution process.
A tractable deterministic model will often become intractable if stochastic model
components occur. Therefore, on the one hand, valid results for stochastic scheduling
problems will usually only be deducible for fairly simple problem settings, e.g.
single-stage problems with one or more machines, or simple flow shop or job shop
problems. On the other hand, results and interpretations for stochastic scheduling
problems will often differ from those for deterministic problems: (stochastic)
dominance results, or hints for scheduling policies or shop capacities (machines,
buffers, . . .) instead of detailed solutions, might be the focal aspect instead of deriving
single good or optimal solutions.
If uncertain problem data are available in some simple kind of distribution (e.g.
triangular or trapezoidal) or only in some qualitative or linguistic categories (e.g.
as low, medium or high, which are supposed to be representable by some trapezoidal
distribution), fuzzy set or fuzzy logic approaches might be applied to manufacturing
scheduling problems. For example, a due date for a job might not be as strict as is
often supposed in basic models. Traditionally, due dates are included in scheduling
problems via respective objective functions such as lateness and tardiness, and missing
the due date is then penalised. However, if due dates are not as strict, they might be
represented by a trapezoidal distribution, representing the categories early, normal
and late, as shown in Fig. 6.3, where µ denotes the so-called membership function,
with values between 0 and 1; a value of 1 indicates that the relationship under
consideration is completely fulfilled, while other values of µ can be interpreted accordingly.
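A minimal sketch of such a trapezoidal membership function (the breakpoints below are hypothetical, not taken from Fig. 6.3):

```python
def trapezoidal(x, a, b, c, d):
    """mu(x): 0 outside (a, d), 1 on [b, c], linear in between (a < b <= c < d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

# A 'normal' delivery between days 10 and 12, tolerated from day 8 to day 14.
print(trapezoidal(11, 8, 10, 12, 14))  # 1.0 -> fully 'normal'
print(trapezoidal(9, 8, 10, 12, 14))   # 0.5 -> halfway between 'early' and 'normal'
```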
Apart from single parameter data, the objective function(s) and/or the information
on constraint satisfaction might also be available only in linguistic terms. For example,
the makespan might be classified into short, normal and long. The same holds for the
classification of trade-offs between multiple objective functions, which, depending
on the specific problem, might be classified only linguistically. Flexible resources can
be addressed accordingly (see, e.g. Slowinski and Hapke 2000).
Compared with stochastic programming approaches to manufacturing scheduling,
fuzzy approaches are, on the one hand, obviously somewhat simpler, also because of
the rather simplified distributions supposed in fuzzy approaches. On the other hand,
solving a fuzzy scheduling model usually also yields fuzzy result information, giving
the decision-maker some impression of the expected effect of fuzziness, also with
respect to the robustness of a solution subject to the fuzzy components of the model.
Apart from those described above, there are several other approaches to model man-
ufacturing scheduling problems, e.g. Petri nets and object-oriented approaches. We
will just sketch out these approaches here.
From an application point of view, the main features of Petri net-based approaches
for scheduling are twofold. First, they can easily handle multiple lots and the
complex relations that may exist among jobs, routes, machines and material handling
devices, i.e. Petri nets provide an efficient method for representing concurrent
activities, shared resources, precedence constraints and multiple lot sizes. Second,
the generated schedule is event-driven (this facilitates real-time implementation),
deadlock-free (since the Petri net model of the system can be a detailed representation
of all the operations and resource-sharing cases, a generated schedule is a path
from the system’s initial condition to the finally desired one; it thus avoids any
deadlock) and optimal or near-optimal with respect to makespan.
Within the Petri net approach, a bottom-up method is used to synthesise the Petri
net model of a system for scheduling. First, a system is partitioned into sub-systems
according to the job types, then sub-models are constructed for each sub-system, and
a complete net model for the entire system is obtained by merging Petri nets of the
sub-systems via the shared resources.
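The following toy sketch (our illustration, with hypothetical place and transition names) shows how shared resources are represented in a Petri net: two jobs compete for one machine, and firing a transition consumes tokens from its input places and produces tokens in its output places:

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> number of tokens
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in inputs:               # consume input tokens
            self.marking[p] -= 1
        for p in outputs:              # produce output tokens
            self.marking[p] = self.marking.get(p, 0) + 1


# Two jobs sharing one machine: the 'machine_free' place models the
# shared resource, so only an enabled transition can seize it.
net = PetriNet({"j1_ready": 1, "j2_ready": 1, "machine_free": 1})
net.add_transition("process_j1", ["j1_ready", "machine_free"],
                   ["j1_done", "machine_free"])
net.add_transition("process_j2", ["j2_ready", "machine_free"],
                   ["j2_done", "machine_free"])
```

Firing `process_j1` and then `process_j2` yields an event-driven schedule in which the machine token is released after each operation, which is exactly the deadlock-avoidance property mentioned above.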
Although the underlying scheduling problems are highly complex, a
simplified model can sometimes be obtained using object-oriented techniques. As
described by Błażewicz et al. (2007) and Schmidt (1996), object-oriented modelling
attempts to overcome the disadvantage of modelling data, functions, and the interactions
between them separately. The different phases of the modelling process in
object-oriented approaches are analysis, design and programming. Analysis serves as the
main representation formalism to characterise the requirements from the viewpoint
of the application; design uses the results of analysis to obtain an implementation-
oriented representation, and programming means translating this representation using
some programming language into code. Comparing object-oriented modelling with
traditional techniques, its advantages lie in data abstraction, reusability and exten-
sibility of the models, better software maintenance and direct compatibility of the
models of different phases of the software development process. A model built by
object-oriented analysis consists of a set of objects communicating via messages
which represent dynamic relations of pairs of them. The main static relations between
pairs of objects are generalisation/specialisation and aggregation.
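A minimal sketch of such an object-oriented model might look as follows; the class and attribute names are hypothetical, with Job aggregating Operations, Machine as a generalisation, SetupMachine as a specialisation, and method calls playing the role of messages:

```python
class Operation:
    def __init__(self, job_id, machine_id, duration):
        self.job_id, self.machine_id, self.duration = job_id, machine_id, duration


class Job:
    """A Job *aggregates* Operations (static aggregation relation)."""
    def __init__(self, job_id, operations):
        self.job_id, self.operations = job_id, operations


class Machine:
    """Base class: generalisation of all machine types."""
    def __init__(self, machine_id):
        self.machine_id, self.ready = machine_id, 0

    def process(self, op):
        """A 'message' sent to the machine; returns (start, finish)."""
        start = self.ready
        self.ready = start + op.duration
        return start, self.ready


class SetupMachine(Machine):
    """Specialisation: a machine requiring a fixed setup before each operation."""
    def __init__(self, machine_id, setup):
        super().__init__(machine_id)
        self.setup = setup

    def process(self, op):
        self.ready += self.setup       # extend inherited behaviour
        return super().process(op)
```

Reusability and extensibility show up directly: `SetupMachine` reuses the scheduling behaviour of `Machine` and only extends it with setup handling.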
model, less important ones are added subsequently after a solution for the important
items has been generated.
• Some constraints are relaxed or even ignored, e.g. preemption is excluded from
consideration, sequence-dependent setup times are interpreted to be sequence-
independent (and are therefore included in the processing times), transmission to
a next machine is only possible for the whole job (although some sublot could also
be transferred earlier to the next machine allowing overlapping processing, . . .).
• The time horizon is cut off earlier, which usually results in fewer operations/jobs
to be considered.
• ...
All these and also more sophisticated approaches intend to reduce the number
of scheduling objects (operations, jobs), resources (machines), variables and/or
constraints, so as to arrive at a model complexity which can be handled.
For a given manufacturing scheduling problem, suppose that we either implicitly
have the impression of a model that is too complex, or an explicit model which cannot
be handled (solved) as is. To reduce this model’s complexity, decomposition and
subsequent coordination of the decomposed components of the model are required
(see Fig. 6.4). The basic idea of decomposition is to separate the overall model into
sub-models which are
1. tractable individually per sub-model (e.g. one-machine problems or a low number
of jobs), and
2. connected with each other by advantageously few and clearly defined relations
between the sub-models (to allow the subsequent integration of the partial solutions
into a solution of the overall problem).
(Decomposition might take place in several steps, on several levels and even
iteratively. However, here we only describe aspects of a one-step decomposition
approach.)
In most resource-based decomposition approaches, the overall scheduling problem
is split into several sub-problems, where each sub-problem contains a subset
of all machines. The subsets of machines are usually disjoint and in many cases
consist only of a single machine or of the machines of a single production stage. Each
sub-problem involves developing a schedule for the particular machine subset. By
clustering resources/machines, from a formal point of view, resource-based decom-
position approaches will primarily partition constraints referring to the respective
resources/machines.
In operation-based decomposition approaches, an orthogonal view to the resource-
oriented one is adopted. A complete shop schedule is interpreted as the integration
of several job schedules. For example, if n jobs are to be scheduled, (up to) n sub-problems
are created, where each involves the development of a schedule for a particular job
(or job group). Afterwards, these single job (job group) schedules are integrated into an
overall schedule. By clustering operations in this type of decomposition approach,
both constraints and variables are partitioned into the respective sub-problems.
In event-based decomposition approaches, each sub-problem involves making
a single scheduling decision, e.g. which job/operation to schedule on a particular
machine, or to which machine a job should be sent. The schedule is developed,
e.g. by an event-based simulation of the shop, as used in dispatching approaches to
scheduling. In event-based decomposition, information is chronologically grouped.
All information available at the time an event takes place is provided in a sub-problem.
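A minimal sketch of event-based decomposition on a single machine could look as follows (our illustration, not from the book): each time the machine becomes free, a sub-problem is solved with the information available at that event, here using the shortest processing time (SPT) dispatching rule:

```python
def spt_dispatch(jobs):
    """Event-based decomposition on one machine.

    Each time the machine becomes free (the 'event'), a sub-problem is
    solved with the jobs released by then -- here: pick the shortest one.

    jobs: list of (name, release_time, processing_time).
    Returns a dict of completion times per job.
    """
    pending = sorted(jobs, key=lambda j: j[1])   # ordered by release time
    t, done = 0, {}
    while pending:
        ready = [j for j in pending if j[1] <= t]
        if not ready:                # machine idles until the next release event
            t = pending[0][1]
            continue
        job = min(ready, key=lambda j: j[2])     # SPT rule on the sub-problem
        pending.remove(job)
        t += job[2]
        done[job[0]] = t
    return done
```

Note that each decision uses only chronologically grouped information; the overall schedule emerges from the sequence of local decisions.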
These three basic decomposition strategies may be used in isolation or in combination.
The degree of sub-problem interdependencies also depends on the scheduling
objective and the tightness of constraints in the specific problem instance. Therefore,
bottlenecks should be anticipated before a decision for a specific decomposition
approach is taken. If particular machines or jobs or time-periods/events are known
to be crucial or expected to be crucial for the problem under consideration and its
solution, then a focus should be laid on the respective aspects when decomposing
the problem. From both empirical evidence in more general cases and formal
analysis for smaller problem sizes, it is well known that the bottleneck type (machines,
operations, events) indicates the direction of decomposition. In addition, first,
the bottlenecks themselves should not be clumped together with non-bottlenecks
and, second, the other dimensions of decomposition/clustering
could be applied as well. For example, for a problem with a clear bottleneck machine,
one could separate this bottleneck machine from the other machines. The respective
one-machine problem for this bottleneck machine should be solved in detail, while the
other machines might be grouped together (and possibly aggregated) in the decom-
position approach as well as all or some jobs on these non-bottleneck machines.
When constructing the sub-models, they might include some anticipatory
information from the other sub-models, if this is possible. For example, as in
the well-known shifting bottleneck procedure, machine-by-machine sub-models
might include job availability data (i.e. machine-specific release dates) resulting
from information provided by the other sub-models. Also, the classical coordination
mechanisms, such as (primal) budgeting (e.g. of time) and/or (dual) price mechanisms,
might be included in the coordination schemes. The whole tool set of centralised or
decentralised coordination might be applied to manufacturing scheduling models in
one way or another. It should also be mentioned that the structure of the sub-models
might be induced by the organisational structure of the manufacturing setting under
consideration.
After the overall model has been decomposed into sub-models according to the
above description, every single sub-model has to be solved. We suppose that these
sub-models are solvable in one way or another without any severe problems—that
is the main reason why they have been built and how they should be constructed.
Finally, the solutions of the sub-models have to be (re-)integrated using an integration
or coordination approach. This step cannot be described comprehensively here. Basically,
the local information from the sub-models’ solutions will usually consist of some
(local) schedule. Therefore, the individual schedules have to be integrated on the
coordination level according to the requirements of the overall (original) model. If
the resulting ‘total’ solution does not meet the decision-maker’s requirements, the
process of decomposition, local solution and (re-) integration of local solutions might
be repeated subject to modified information delivered from the coordination process
to the sub-models. For example, this information could consist of modified release
times of jobs, different assignment of jointly used resources or different prices for
these resources, etc.
Concluding this section, we list some standard subjects of aggregation and
disaggregation in manufacturing scheduling:
1. An operation usually belongs to a job, and a job might belong to a larger customer
order or itself consist of several customer orders. Therefore, the aggregation of
jobs to orders (or vice versa), together with its influence on the respective operations’
structure, constitutes an aggregation/disaggregation approach. This aggregation/disaggregation
is sometimes not seen as part of the scheduling model itself
but is seen as external input to the scheduling model. However, clustering or
decomposing jobs and/or orders accordingly might have a more or less severe
influence on the quality of a schedule.
2. Parallel and/or sequential batching (pooling) of operations (or whole jobs) on one
machine might be decided outside of the scheduling problem (on an ‘aggregate’
decision level) or be a subject of the scheduling model itself. Such questions occur,
e.g. in process industries, if setups are relevant (with respect to time and/or costs,
for product family scheduling problems), if ovens should not run below a certain
utilisation rate (parallel batching), . . .
3. Opposite to batching, disaggregation of operations (lot streaming or lot splitting)
might become an issue, e.g. if due dates are tight and customers are satisfied with
partial shipments or if a next operation of a job is intended to start before the
previous operation is finished for the complete job. Lot streaming was introduced
in Chap. 3, Sect. 3.2.2 and further detailed in Sect. 3.4. Figure 6.5 shows the effect
of lot streaming for a small example.
4. Transportation times from one stage to another might be added to the processing
times of the respective operations if a separate consideration of transport opera-
tions is not required.
5. Considering every single machine of the shop structure in detail might be rather
complex on one hand. On the other, organisational issues might imply an aggre-
gate view on machines in manufacturing scheduling, e.g. if group technologies
are applied. In that case, the (macro) scheduling problem consists of assigning (macro)
operations/tasks to pre-determined groups of machines, while the detailed scheduling
task within each group is decentralised and executed by the group itself.
6. Scheduling settings with hybrid structures on specific stages might be simplified
by aggregating the capacities of a hybrid stage into one macro-machine and by
adjusting the operations’ processing times accordingly. Of course, disaggregating
a solution from the aggregate macro-model needs to regard the hybrid structure
explicitly.
7. Additional resources (apart from the machines, e.g. flexible workers) which
are limited and have to be assigned to different machines competing for these
resources might be considered in a detailed or atomic manner on one hand. On
the other, if these resources can be pooled and partitioned more flexibly,
the solution space will be enlarged and better schedules might be generated, e.g.
by improved balancing of production rates, idle time reduction, etc.
This type of resource aggregation also includes the determination of the number
of resource groups and the assignment of resource groups to machines (machine groups),
as well as an adequate expression for the aggregate capacity itself (Burdett and
Kozan 2003; Le Pape 1994).
8. Aggregation of time periods, i.e. reducing the granularity of the time grid, might
reduce the complexity of a scheduling model. Although, ceteris paribus, the number
of operations remains unchanged by time aggregation, dealing with weeks or days
instead of hours and minutes usually reduces the differences between the objective
function values of different schedules and thereby probably increases the number of
good schedules. However, this must be in line with the requirements of the
real-world scheduling problem; e.g. a schedule with a time horizon of one month
will usually not require an hourly time grid, while a day’s schedule will.
9. Connected with item 8, the length of the time horizon itself might be an issue for
complexity reduction, although shortening it is not a ‘true’ aggregation approach.
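As a small numerical illustration of item 3 (our sketch, with hypothetical data), the effect of lot streaming on a two-machine flow shop can be computed as follows:

```python
def two_machine_makespan(sublots, p1, p2):
    """Makespan of one job split into sublots on a 2-machine flow shop.

    sublots: list of sublot sizes (units); p1, p2: per-unit processing times.
    A sublot may start on machine 2 only after it leaves machine 1 and
    machine 2 has finished the previous sublot.
    """
    c1 = c2 = 0
    for q in sublots:
        c1 += q * p1                  # sublot leaves machine 1
        c2 = max(c2, c1) + q * p2     # machine 2 processes the sublot
    return c2
```

Splitting a lot of 4 units (unit processing times on both machines) into four sublots reduces the makespan from 8 to 5, since machine 2 can start as soon as the first sublot arrives; this is the overlapping-processing effect illustrated in Fig. 6.5.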
The above-listed approaches may be used in isolation or in combination. In
most models they have to be designed individually and, as mentioned before, because
of their coarsening of information, they will generate some feasibility and/or optimality
gap which has to be evaluated with respect to the detailed model or, better,
with respect to the real-world problem under consideration.
In more operational terms, model validation and model verification comprise one or more
of the following aspects:
• The restrictions (constraints) of the model must be feasible, redundancy-free and
in line with the real-world problem.
• The objective function(s) must be in line with the objectives for the real-world
problem.
• The decision space in the model must be in line with the real-world decisions.
• The formulated model must be error-free.
• The subjective influence of the decision-maker(s) must be minimised (according to
a purely rational point of view) or be in line with the real-world structural conditions,
e.g. when assigning weights to jobs, operations and/or objectives.
Summarising, model verification and model validation are two steps
applicable to all modelling approaches where a real-world problem is mapped into
one or more formal models or model levels. With respect to manufacturing scheduling,
both approaches have to be specified according to the specific setting under
consideration.
References
Błażewicz, J., Ecker, K. H., Pesch, E., Schmidt, G., and Weglarz, J. (2007). Handbook on scheduling:
from theory to applications. Springer, Berlin/Heidelberg/New York.
Bowman, E. H. (1959). The schedule-sequencing problem. Operations Research, 7(5):621–624.
Burdett, R. L. and Kozan, E. (2003). Resource aggregation issues and effects in mixed model
assembly. In Kozan, E., Beard, R., and Chattopadhyay, G., editors, Proceedings 5th Operations
Research Conference of the Australian Society for Operations Research Queensland Branch
on Operations research into the 21st century, pages 35–53, Brisbane, Queensland - Australia.
Queensland University of Technology.
Dantzig, G. B. (1960). A machine-job scheduling model. Management Science, 6(2):191–196.
Dubois, D., Fargier, H., and Fortemps, P. (2003). Fuzzy scheduling: Modelling flexible constraints
vs. coping with incomplete knowledge. European Journal of Operational Research, 147(2):231–
252.
Dubois, D., Fargier, H., and Prade, H. (1996). Possibility theory in constraint satisfaction problems:
Handling priority, preference and uncertainty. Applied Intelligence, 6(4):287–310.
Dubois, D. and Prade, H. (1988). Possibility theory. Plenum Press, New York.
Fayad, C. and Petrovic, S. (2005). A fuzzy genetic algorithm for real-world job shop scheduling.
Innovations in Applied Artificial Intelligence: 18th Int. Conf. on Industrial and Engineering
Application of Artificial Intelligence and Expert Systems, IEA/AIE 2005. Bari, Italy, 22–24 June
2005, 3533:524–533.
Gibson, J. P. (2001). Formal requirements models: simulation, validation and verification. Technical
Report NUIM-CS-2001-TR-02.
Greenberg, H. H. (1968). A branch-bound solution to the general scheduling problem. Operations
Research, 16(2):353–361.
Kuroda, M. and Wang, Z. (1996). Fuzzy job shop scheduling. International Journal of Production
Economics, 44(1–2):45–51.
Le Pape, C. (1994). Implementation of resource constraints in ILOG schedule: A library for the
development of constraint-based scheduling systems. Intelligent Systems Engineering, 3(2):55–
66.
Macal, C. M. (2005). Model verification and validation. Workshop on "Threat Anticipation: Social
Science Methods and Models".
Manne, A. S. (1960). On the job-shop scheduling problem. Operations Research, 8(2):219–223.
Nagar, A., Heragu, S. S., and Haddock, J. (1995). A branch-and-bound approach for a two-machine
flowshop scheduling problem. Journal of the Operational Research Society, 46(6):721–734.
Pinedo, M. L. (2012). Scheduling: Theory, Algorithms, and Systems. Springer, New York, fourth
edition.
Sargent, R. G. (2007). Verification and validation of simulation models. In Henderson, S. G., Biller,
B., Hsieh, M.-H., Shortle, J., Tew, J. D., and Barton, R. R., editors, Proceedings of the 2007
Winter Simulation Conference, pages 124–137, Piscataway, NJ. IEEE Operations Center.
Schmidt, G. (1992). A decision support system for production scheduling. Revue des Systemes de
decision, 1(2–3):243–260.
Schmidt, G. (1996). Modelling production scheduling systems. International Journal of Production
Economics, 46–47:109–118.
Shen, W. and Norrie, D. (1998). An agent-based approach for dynamic manufacturing scheduling.
Working Notes of the Agent-Based Manufacturing Workshop, Minneapolis, MN.
Slowinski, R. and Hapke, M. (2000). Foreword. In Slowinski, R. and Hapke, M., editors, Scheduling
under Fuzziness, Heidelberg, New York. Physica-Verlag.
Vancza, J., Kis, T., and Kovacs, A. (2004). Aggregation: the key to integrating production planning
and scheduling. CIRP Annals of Manufacturing Technology, 3(1):377–380.
Vlach, M. (2000). Single machine scheduling under fuzziness. In Slowinski, R. and Hapke, M.,
editors, Scheduling under Fuzziness, pages 223–245, Heidelberg, New York. Physica-Verlag.
Wagner, H. M. (1959). An integer linear-programming model for machine scheduling. Naval
Research Logistics Quarterly, 6(2):131–140.
Williams, H. P. (2013). Model building in mathematical programming. Wiley, Chichester, 5th ed.
Zadeh, L. A. (1975). Calculus of fuzzy restrictions. In Zadeh, L. A., Fu, K. S., Tanaka, K., and
Shimura, M., editors, Fuzzy sets and their applications cognitive and decision processes, pages
1–39, New York. Academic Press.
Zadeh, L. A. (1999). Fuzzy sets as a basis for a theory of possibility. Fuzzy sets and systems,
100:9–34.
Zhou, M. C. and Venkatesh, K. (2000). Modelling, simulation and control of flexible manufacturing
systems: A Petri net approach. World Scientific, Singapore.
Part III
Scheduling Methods
Part III of the book consists of Chaps. 7–10 and is devoted to scheduling
methods. Here we will review the main approaches related to solution procedures
of scheduling models. We emphasize general approaches rather than specific
solutions, and provide a general scheme to build and validate new procedures for
scheduling problems. We pay attention to the integration of heuristic and exact
approaches and devote a chapter to multiobjective procedures.
Chapter 7
Overview of Scheduling Methods
7.1 Introduction
In the previous part of the book, we have presented the concept of a scheduling model
as a way to formalise the decision-making scheduling problem. This part of the book
is devoted to presenting the methods that provide (good or even optimal) solutions for
these scheduling models. In this chapter, we give an overview of scheduling methods,
leaving for the next chapters the detailed discussion of specialised methods.
After a few basic definitions related to scheduling methods (most notably, the
concept of algorithm), we discuss, in a rather intuitive way, why scheduling problems
are tremendously hard to solve to optimality, thus giving a rationale for the traditional
use of scheduling policies, which try to ease these difficulties at the expense of an
unimpressive performance, at least as compared with ‘true’ scheduling algorithms.
We also give an overview on the main types of scheduling algorithms, and discuss
how the adequacy of a scheduling method for solving a manufacturing scheduling
decision problem can be assessed.
More specifically in this chapter, we
• provide the main definitions regarding scheduling methods, together with a discussion
on the main assumptions behind these (Sect. 7.2),
• discuss the concept of computational complexity and its consequences when
designing manufacturing scheduling methods (Sect. 7.3),
• review the main scheduling policies along with their rationale (Sect. 7.4),
• present an overview of the main types of scheduling algorithms (Sect. 7.5) and
• introduce the principal issues related to the (formal and real-world) assessment of
manufacturing scheduling methods (Sect. 7.6).
In the context of this book, a scheduling method is a formal procedure that can be
applied to any instance of a scheduling model in order to obtain a feasible schedule
that (presumably) obtains good scores with respect to a sought objective. Note that,
according to Chap. 3, a scheduling model is assumed to be a representation of a well-
defined manufacturing scheduling decision problem. Therefore, since the procedure
can be applied to any instance of a scheduling model, it can be said (rather informally,
as it has been noted in several places throughout the book) that a scheduling method
solves a scheduling problem.
A scheduling method can thus be viewed as a procedure taking an instance of a
scheduling model as an input in order to produce (at least) one schedule as an output.
Since the procedure has to be formal, it can be described using a finite number of
steps, and is thus amenable to being coded and eventually executed by a computer.
The technical name for such a finite procedure is algorithm.
It is customary to classify algorithms into exact and approximate. Exact algorithms
guarantee that no other schedule performs better than the one obtained with respect to
the objective sought. The so-obtained solution is named optimum or optimal solution.
In contrast, approximate algorithms do not guarantee that the solution is optimal
although, in some cases, we will see that it is possible to estimate the maximum
deviation from the optimum.
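The distinction can be illustrated with a sketch (ours, with hypothetical data) for minimising total tardiness on a single machine: the exact method enumerates all sequences, while the Earliest Due Date (EDD) rule is an approximate method with no optimality guarantee for this objective:

```python
from itertools import permutations

def total_tardiness(seq, p, d):
    """Sum of tardiness for a job sequence, given processing times p and due dates d."""
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += max(0, t - d[j])
    return total

def exact_1_sum_tj(p, d):
    """Exact: enumerate all n! sequences (only viable for tiny n)."""
    return min(permutations(p), key=lambda s: total_tardiness(s, p, d))

def edd(p, d):
    """Approximate: Earliest Due Date rule; no optimality guarantee for total tardiness."""
    return sorted(p, key=lambda j: d[j])
```

On this particular instance EDD happens to find the optimum; in general it may not, which is exactly why the maximum deviation from the optimum is of interest.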
7.2.2 Assumptions
As mentioned above, there are several explicit and implicit assumptions in the
previous definition of scheduling methods that are worth discussing:
by the concept of Pareto set which requires, for an ‘exact’ procedure, to deter-
mine the whole efficient boundary of solutions.
5. (Deterministic) exact procedure. Finally, to be called ‘exact’, a method for ful-
filling the requirements of the formal optimisation problem (i.e. here usually
to derive the optimal solution of every possible problem instance for a formal
manufacturing scheduling problem) must deliver the solution within a finite (but
possibly large) number of computation steps and in a completely deterministic
manner. Completely deterministic here means that the result should be identically
replicable in another run of the procedure for the same problem instance.
6. Stochastic setting. It should be mentioned that exact procedures might also
be defined for stochastic decision problems, including stochastic manufactur-
ing scheduling models. The exact nature of the algorithm is not in conflict with
possible stochastic components of the model formulation as long as the criterion
of the model is not a random variable. For instance, one may speak of an exact
procedure for the α|β|Cmax model as well as for the α|β|E[Cmax] model where
processing times are random variables.
So, in the sequel of this chapter, we will suppose that we have a precisely formu-
lated (i.e. also completely quantified) manufacturing scheduling problem (e.g. a flow
shop problem) with a unique objective function (e.g. makespan) which is intended to
be minimised, i.e. one (not all) optimal solution is to be determined by a well-defined
deterministic procedure, i.e. an exact procedure. These assumptions will hold until
Chap. 10, where several objectives will be discussed, and Chap. 14,
in which the relaxation of these hypotheses will be discussed in the context of a
manufacturing scheduling system.
Once these assumptions have been established, a natural question is why approximate
algorithms (without performance guarantee) should be employed if there are exact
methods at hand. The answer lies in the computational complexity of most scheduling
models, which will be discussed in the next section, where we will see that approximate
algorithms are quite often the only suitable alternative, given the long times that
some exact procedures take to complete. Indeed, the complexity of most real-world
scheduling models is so high that a particular class of scheduling methods,
the so-called scheduling policies (which will be studied in Sect. 7.4), are
widely employed in practice.
7.3.1 An Example
scale, i.e. 1,000,000,000) basic operations per second. This would be the equiva-
lent of a CPU running at 1.0 GHz. For simplicity, we are also assuming that one full
schedule can be calculated in just one basic operation. Normally, for calculating total
tardiness, we would need at least n basic operations as the individual tardiness of
all jobs has to be added. We can then calculate the time it would take for this ‘old’
computer to calculate all possible sequences. In order to put these CPU times into per-
spective, we compare these CPU times against those of a theoretical computer that is
5 million times faster, i.e. this computer would be running at 5 PHz or 5,000,000 GHz.
A computer running at such incredible speeds is way beyond current computers that,
at the time of the writing of this book, rarely operate at frequencies higher than 6
GHz. All this data is given in Table 7.1, where AU stands for ‘Age of the Universe’
which has been estimated to be 14 billion years.
As we can see, for trivial problems with about 12 jobs, both computers would give
the optimal solution in less than half a second in the worst case. However, adding
just 3 jobs to reach 15 jobs (a 25 % increase in the size of the problem) results in a CPU time
of 21.79 min for the slow computer (2,730 times slower). From 15 jobs onwards, the
CPU times of the slow computer quickly become infeasible, as anything longer than 12 h
or so is impractical for daily use. It can be observed that applying these complete (or
more elaborate implicit) enumeration approaches to combinatorial optimisation
problems (such as scheduling problems) often results in finding the optimal
solution quickly, but taking a long time to prove that this solution is indeed optimal.
Conversely, a computer no less than 5 million times faster would be expected to
solve problems of much larger size than the slow computer. However, if n = 17 is
already impractical for the slow computer, with the fast computer we would only be able
to solve problems of up to n = 21. In other words, a 5 million times faster computer
allows us to solve problems that are less than 24 % larger.
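The CPU times above can be reproduced with a short sketch (assuming, as in the text, one evaluated schedule per basic operation):

```python
import math

def enumeration_time_seconds(n, ops_per_second):
    """Seconds needed to evaluate all n! sequences at a given speed,
    assuming one full schedule per basic operation (as in the text)."""
    return math.factorial(n) / ops_per_second

SLOW = 1e9    # the 'slow' 1 GHz computer
FAST = 5e15   # the hypothetical 5 PHz computer
```

For n = 15 on the slow computer this yields about 1,307 s, i.e. the 21.79 min reported in Table 7.1.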
This single machine scheduling problem is an example of a problem with an
exponential number of solutions. Needless to say, realistically sized problems of
n = 150 are completely out of the question. Saying that enumerating all possible
sequences of a single machine problem with 150 jobs is impossible is actually an
underestimation. Consider that 150! results in 5.71 · 10^262 sequences. This number
is extremely huge. To have an idea, the number of atoms in the observable universe
is estimated to be between 10^78 and 10^80. Therefore, a simplistic problem with one
machine and some 150 jobs is impossible to enumerate. The permutation flow shop
problem also has a solution space of n!. However, other more complex problems, like
the regular flow shop or the job shop, have an even much larger number of solutions with
(n!)^m, or the parallel machine layout with m^n. A job shop with, let us say, 50 jobs
and 10 machines, will have a solution space of (50!)^10 = 6.77 · 10^644 sequences.
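Numbers of this magnitude cannot even be stored as ordinary floating-point values; a sketch using logarithms (via the standard lgamma function) confirms the orders of magnitude quoted above:

```python
import math

def log10_factorial(n):
    """log10(n!) computed via lgamma, avoiding the huge integers themselves."""
    return math.lgamma(n + 1) / math.log(10)

# 150! has about 263 decimal digits; (50!)^10 has about 645,
# since log10((50!)^10) = 10 * log10(50!).
```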
The fact that a problem has an extraordinarily huge number of possible sequences
does not necessarily imply that no efficient methods exist for obtaining a good solution
or even an optimal solution. For example, picture again the 1||Cmax problem
which also has n! possible sequences but any of them will return the minimum
makespan. The example given in Table 7.1 has an obvious caveat: evaluating all pos-
sible sequences in the search for the optimum is actually the worst possible method.
Table 7.1 Total CPU times for two types of computers for evaluating all solutions in single machine
problems as a function of the number of jobs n
n n! Computer time Computer time
(slow) (fast)
1 1 1 ns 0.20 fs
2 2 2 ns 0.40 fs
3 6 6 ns 1.20 fs
4 24 24 ns 4.80 fs
5 120 120 ns 24 fs
6 720 720 ns 144 fs
7 5,040 5.04 µs 1.01 ps
8 40,320 40.32 µs 8.06 ps
9 362,880 0.36 ms 72.58 ps
10 3,628,800 3.63 ms 726 ps
11 39,916,800 39.92 ms 7.98 ns
12 479,001,600 479 ms 95.80 ns
13 6,227,020,800 6.23 s 1.25 µs
14 87,178,291,200 87.18 s 17.44 µs
15 1,307,674,368,000 21.79 min 262 µs
16 20,922,789,888,000 349 min 4.18 ms
17 355,687,428,096,000 98.80 h 71.14 ms
18 6,402,373,705,728,000 74.10 days 1.28 s
19 121,645,100,408,832,000 3.85 years 24.33 s
20 2,432,902,008,176,640,000 77.09 years 8.11 min
21 51,090,942,171,709,400,000 16.19 centuries 170.30 min
22 1,124,000,727,777,610,000,000 356 centuries 62.44 h
23 25,852,016,738,885,000,000,000 819 millennia 59.84 days
24 620,448,401,733,239,000,000,000 19.66 million years 3.93 years
25 15,511,210,043,331,000,000,000,000 492 million years 98.30 years
26 403,291,461,126,606,000,000,000,000 12.78 billion years 25.56 centuries
27 10,888,869,450,418,400,000,000,000,000 345 billion years 690.09 centuries
28 304,888,344,611,714,000,000,000,000,000 690 AU 1.93 million years
29 8,841,761,993,739,700,000,000,000,000,000 20,013 AU 56.04 million years
30 265,252,859,812,191,000,000,000,000,000,000 600,383 AU 1.68 billion years
Slow computer running at 1 GHz. Fast computer running at 5 PHz
The previous example serves to illustrate two aspects mentioned earlier in this section: on the one hand, not all algorithms are equally efficient with respect to solving a scheduling model, so it is of interest to analyse their performance. On the other hand, given these staggering computer times, it seems that there are models for which it is difficult to obtain algorithms with a reasonable (not to mention good) performance.
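The arithmetic behind Table 7.1 is easy to reproduce. The sketch below uses the clock rates stated in the table and assumes one evaluated sequence per elementary operation (an assumption; the table's authors may have counted differently), yet it matches the tabulated entries closely:

```python
import math

# Clock rates from Table 7.1; one sequence evaluated per cycle is assumed.
SLOW_OPS = 1e9    # 1 GHz
FAST_OPS = 5e15   # 5 PHz
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def brute_force_seconds(n: int, ops_per_second: float) -> float:
    """Time needed to evaluate all n! sequences of a single machine problem."""
    return math.factorial(n) / ops_per_second

# n = 20 on the slow computer: about 77 years, as in Table 7.1.
years_slow = brute_force_seconds(20, SLOW_OPS) / SECONDS_PER_YEAR
```

On the fast computer the same instance takes `brute_force_seconds(20, FAST_OPS)` ≈ 487 s, i.e. the 8.11 min shown in the table.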
160 7 Overview of Scheduling Methods
Let us start with the running time of an algorithm. The running time is an estimation of the time needed by an algorithm (usually coded into a computer) to complete, as a function of the input size. Measuring raw time is a tricky business, since each computer architecture runs at a different speed, has different frequencies and many other specificities that make comparing the running times of the same algorithm on two different architectures a daunting task. To overcome this problem, the running time is not actually given in time units, but as the number of elementary operations performed by the algorithm. These elementary operations or 'steps' are a machine-independent measure, since each elementary operation takes a fixed amount of time to complete. As a result, the amount of time needed and the number of elementary operations differ by at most a constant factor.
For example, if we have an unsorted list of n elements, what is the time needed to locate a single item in the list (assuming that all items are different and unsorted)? A basic search algorithm will, in the worst case, have to examine each element, requiring n steps to complete. These are n steps regardless of the computing platform used but, obviously, they will translate into different CPU times in the end. Note that in the best case, the first element in the unsorted list could be the element searched for, so only one step is needed. On average, given a completely random input, n/2 steps will be needed to locate an element.
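The search procedure just described can be sketched as follows (the step counter is added purely for illustration):

```python
def linear_search(items, target):
    """Scan an unsorted list left to right; return (position, steps taken).

    Worst case: the target is last (or absent) and n steps are needed;
    best case: it is first and a single step suffices.
    """
    for steps, item in enumerate(items, start=1):
        if item == target:
            return steps - 1, steps
    return None, len(items)
```

For instance, `linear_search([7, 3, 9], 9)` examines all three elements, while searching for 7 stops after a single step.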
In complexity theory, the so-called 'Big-O' notation or, more specifically, the Bachmann–Landau notation is used. This notation describes the limiting behaviour of a function when the argument tends towards a particular value or infinity, usually in terms of simpler functions. Big-O notation simplifies the running times in order to concentrate on their growth rates, which requires suppressing multiplicative constants and lower-order terms. Therefore, running time expressed with the Big-O notation is said to be the asymptotical running time, i.e. as the input size goes to infinity.
Continuing with the search algorithm example, we would say that the best case
running time is O(1) or constant time, the average running time would be O(n/2)
or linear time and the worst case running time O(n) or linear time also. Note that,
for very large input sizes (big value of n), the large time needed is only divided by a
constant.
Normally, running times are measured for the average case and, for simplicity, the O notation is used. As a result, our basic searching algorithm has a running time complexity of O(n) or linear running time. Notice that this means that n basic operations are needed. However, several basic operations (at least three) are performed at each step j, j ∈ {1, 2, . . . , n}: reading the j-th element from the unsorted list, comparing this element with the desired element and, if the comparison is positive, ending the search; if not, increasing the counter and repeating for the next element in the list. However, for large values of n, O(3n) is equal to O(n).
Let us picture now a different algorithm. The algorithm just adds a constant value
to each element of a square matrix of size n × n. This algorithm will have one outer
7.3 Computational Complexity of Scheduling 161
Table 7.2 Total CPU times for two types of computers for applying the Quicksort algorithm of
O(n log n) computational complexity
n n log n Computer time (slow) Computer time (fast)
1,000 3,000 3.00 µs 0.60 ps
10,000 40,000 40.00 µs 8.00 ps
100,000 500,000 0.50 ms 100.00 ps
1,000,000 6,000,000 6.00 ms 1.20 ns
10,000,000 70,000,000 70.00 ms 14.00 ns
100,000,000 800,000,000 0.80 s 160.00 ns
1,000,000,000 9,000,000,000 9.00 s 1.80 µs
Slow computer running at 1 GHz. Fast computer running at 5 PHz
loop to traverse all rows (or columns) and an inner loop to traverse columns (or rows) with one basic addition operation. The running time of the algorithm will be O(n^2) or quadratic time. In general, many algorithms have polynomial running times, like for example O(n^3) or O(n^4). Note that something like O(10n^4 + 3n^3 + n^2 + n) is not correct under the Big-O notation and must be written as O(n^4). Recall that for large n values, the dominating term is n^4, not the remaining polynomial.
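The matrix algorithm just described, as a minimal sketch:

```python
def add_constant(matrix, c):
    """Add constant c to every entry of an n x n matrix.

    One outer loop over rows and one inner loop over columns give
    n * n basic additions, i.e. O(n^2) or quadratic time.
    """
    n = len(matrix)
    for i in range(n):        # outer loop: rows
        for j in range(n):    # inner loop: columns
            matrix[i][j] += c
    return matrix
```

Doubling n quadruples the number of additions, which is exactly what the quadratic bound expresses.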
There are many different categories of running times. For example, one of the most efficient comparison sort algorithms in practice, Quicksort, has an average running time of O(n log n) or linearithmic time in particular, or polynomial time in general. In the previous single machine problem example, the brute-force approach needed O(n!) steps, something referred to as factorial time in particular, or exponential time in general.
Not all polynomial running time algorithms are equal. After all, an algorithm with a polynomial running time of O(n^10) is surely going to be slower than another with O(n^3). Well, this is obviously true, but recall the previous single machine example depicted in Table 7.1. Exponential running times of O(n!) get out of hand even for small values of n. However, O(n^10) with n = 20 needs about 2.84 h on the 'slow' computer of the example, whereas the brute-force approach needed 77.09 years. Furthermore, the same algorithm would need 2.05 ms on the fast computer of the example (compared to 8.11 min of the other algorithm). In other words, for a polynomial running time algorithm, a computer that is 5,000,000 times faster reduces the real CPU times linearly, i.e. 2.05 ms is 5,000,000 times faster than 2.84 h. This is not the case for exponential running time algorithms. Table 7.2 shows the CPU times that the two types of computers would need for very big values of n with the Quicksort algorithm.
As can be seen, sorting one billion items with this algorithm would require barely 9 s on the slow computer and an almost instantaneous 1.8 µs on the fast computer.
Unfortunately, most scheduling problems belong to the NP-hard class. Only the simplest cases, most of them single machine layouts, do not belong to it. The question is: what to do then? The answer is at the very heart of the efforts that the scientific scheduling community has been making over the past decades. If the problem belongs to the NP-hard class, all that is known is that obtaining an optimal solution for large-sized instances is unlikely to be viable in acceptable, i.e. polynomial, time. However, for practical scheduling there are several alternatives to handle this computational complexity. These are discussed in the next section.
When faced with a computationally complex model (which happens in most cases), several alternatives are open, depending on whether the model complexity is acceptable in practical terms or not. Note that the running time is instance dependent, and that the Big-O notation neglects the size of constant factors or exponents in the running time, which may have practical importance. Therefore, an exact exponential running time algorithm might still return an optimal solution in an acceptable time for moderately sized problems. If these problems are of interest to the practical application, then the computational complexity is (at least from a practical viewpoint) acceptable.
If the computational complexity is not acceptable, then again two options (not
necessarily alternative) are available:
1. Try to reduce the complexity of the model and/or that of the associated algorithms to an acceptable level, or
2. Avoid the use of algorithms to solve the model.
With respect to the first option, recall that two types of complexity in manufacturing scheduling were introduced in Sect. 2.3.2, according to the discussion in Sect. 1.5: a computational complexity reduced to a formal sphere, and a real-world complexity. While the reduction of the latter was discussed in Sect. 6.3, we will address here the reduction of the computational complexity. A general framework for complexity reduction with respect to model and/or method is shown in Fig. 7.4.
Our point of origin is situation 1 in Fig. 7.4, in which an (original) model is shown in combination with an (original) exact algorithm. If the complexity of this combination of model and algorithm is not acceptable, the required complexity reduction might take place in two directions. On the one hand, the same original model might be solved approximately with an approximate method. This case is represented by situation 2 in Fig. 7.4. For example, a given model can be solved by an algorithm returning a solution without guarantee of optimality. On the other hand, the model might be simplified and solved with the same algorithm which is available for the original model (situation 3 in Fig. 7.4). For example, jobs of the original model might be clustered into job groups which are to be executed consecutively within their group, and the exact algorithm for the original model is applied only to the job group model, which is, hopefully, significantly less complex than the original one. Finally, both approaches can be combined (situation 4 in Fig. 7.4).¹
Note that, after having solved the model- and/or method-simplified setting, the solution derived for this simplification has to be adjusted to the requirements of a solution for the original model. Despite possible ambiguities, we will follow this intuitive line of thinking in our discussion of selected ideas for approximate methods which, accordingly, are seen as a simplification with respect to model and/or method, in combination with a subsequent transfer of the solution of the simplified setting to the original model.
With respect to the second option (avoiding the use of algorithms to solve the model), it may seem a bit unrealistic, yet it has been (and still is) one of the favourite options for the scheduler. While algorithms are designed to obtain a schedule, it may be that one can schedule jobs without a 'true' schedule. Instead, some rules can be given to assign jobs to machines on the fly, as soon as they enter the shop floor. Such a rule is called a dispatching rule, and the set of rules that one can use to avoid deriving a schedule is called a scheduling policy. More formally, a scheduling policy consists of a set of principles, usually based on 'common sense' or 'rules of thumb', that are used to generate a schedule. However, note that a schedule itself is not explicitly given, although it can be obtained from this set of principles. For instance, a reasonable scheduling policy may be to process first those jobs whose due date is closest, in order to honour the commitment with the customer. Clearly, given a known set of jobs with known due dates, this policy can be translated into a schedule. Quite clearly as well, this policy can lead to disastrous results, as it does not take into account the different processing times of the jobs: jobs with short processing times but relatively close due dates would be prioritised ahead of jobs with later due dates but longer processing times, and therefore less slack.
The two aspects mentioned before summarise the main features of using scheduling policies as a scheduling method: their simplicity and their myopic nature. The first feature is the reason why scheduling policies were and still are so popular in practice since, as we will see in this chapter, developing optimal or even good schedules is, in general, far from an easy task. In contrast, the second feature exposes their great disadvantage compared to other methods. There is no point in trying to make the principles more sophisticated so they can take into account most possible situations: every rule has an exception and, except for a few cases, the performance of dispatching rules is rather poor. Even worse, for most models the performance of a dispatching rule heavily depends on the problem instance, so they are not very reliable.
The most employed scheduling policies are described in detail in Sect. 7.4.
Industrial activity dates back to the dawn of the industrial revolution. The need for
scheduling is also very old and can be traced back to the onset of the production
line at the beginning of the twentieth century. Obviously, at that time there were
no computers and things were done manually. Even today, a large percentage of
companies rely on human schedulers or on simple spreadsheets and rules of thumb
to schedule jobs in the shop.
Manual schedulers do lots of iterations over production plans that evolve over time from production wishlists to more detailed production needs. Schedules are continually revised, changed, adapted and patched on a daily, even hourly, basis. Most manual methods work in a forward fashion, i.e. jobs are launched to the shop and sequenced at machines to be started as soon as those machines are ready. However, there is seldom any accuracy when determining when jobs will be completed. Priority choices are made at some decision points, subject to many judgmental exceptions and special cases. While this might sound chaotic, we have to bear in mind that it has worked and still works for countless firms. Human schedulers have a number of advantages, like the ability to react quickly to changes on the production floor, common sense, experience, etc. However, complex production systems are often beyond the reach of human beings. The intricacies and dynamics of several hundred jobs moving inside a shop are beyond the cognitive capability of even the best scheduling expert.
There are several ways of dealing with this situation. Sometimes, changing the whole production system to eliminate the need for scheduling is the best approach. If a production line has become too complex to handle because it is processing many different products, a possibility is to reduce the number of products or to break down that line into several different lines; ideally, with one line per product, the scheduling problem is mostly gone or at least greatly reduced.
7.4 Scheduling Policies 167
Manual scheduling is often based on so-called dispatching rules. As the name implies, these rules basically launch or dispatch tasks to machines according to a usually simple calculation that returns an index or relative importance for pending tasks. Dispatching rules are also known as priority rules.
Dispatching rules are classical methods and therefore cannot be easily attributed
to a single author and their origins are hard to trace. Basically, a dispatching rule
maintains a list of eligible tasks or pending tasks. These tasks are entire jobs in a
single machine layout or the tasks that make up a full job in a flow shop or job shop
layout. Tasks enter the eligible set whenever it is actually possible to start processing
them, i.e. when the preceding task is finished, when the release date has elapsed, etc.
Let us denote the list of eligible tasks as ℘. Similarly, |℘| denotes the number of
tasks in the eligible list.
In a nutshell, general dispatching rules carry out a simple calculation for all tasks in
℘ and dispatch the tasks according to the result of this calculation, sometimes referred
to as priority. Dispatching rules can be generally classified in two dimensions:
1. Local/Global rules. This refers to the data considered in the priority index calculation. Some rules only consider the data of each task in ℘ separately. These are called local dispatching rules. Rules that consider the data of more than one task in ℘ simultaneously, or additional information not just related to the task itself, are referred to as global dispatching rules.
2. Static/Dynamic rules. This refers to whether the result of the dispatching rule depends on the time at which it is applied. Static dispatching rules always return the same priority index, regardless of the state of the schedule or the list ℘. On the contrary, dynamic dispatching rules depend on the instant of time t at which they are calculated and hence on the information resulting from the partial schedule derived up to time t.
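All the rules listed below fit one common skeleton: repeatedly evaluate a priority index over the eligible set ℘ and dispatch the best-scoring task. A minimal sketch (the priority function stands for any of the rules; SPT is used for illustration, with ties broken by task identifier, an arbitrary choice):

```python
def dispatch_all(proc_times, priority):
    """Build a full sequence by repeatedly dispatching the eligible task
    with the smallest priority index."""
    eligible = set(proc_times)
    sequence = []
    while eligible:
        task = min(sorted(eligible), key=priority)  # ties: lowest task id
        sequence.append(task)
        eligible.remove(task)
    return sequence

# SPT as a priority index: simply the processing time of the task.
p = {"A": 7, "B": 3, "C": 9}
spt_sequence = dispatch_all(p, lambda t: p[t])
```

For this data SPT dispatches B first (shortest processing time), then A, then C.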
• First Come First Served (FCFS). The oldest pending task is scheduled. It is equivalent to ordering the tasks in ℘ in a First In First Out (FIFO) fashion. This rule is the simplest one and also seems the fairest when dealing with (human) customers, as jobs that arrived first are also processed first. Obviously, this rule is too simple and does not work well in complex settings. FCFS is a local and static rule. In the presence of release dates r_j, this rule can be easily modified to Earliest Release Date first (ERD), which sorts jobs in ascending order of their release dates.
• Last Come First Served (LCFS). It is the opposite of FCFS. The most recent task
is scheduled. It is equivalent to Last In First Out (LIFO). LCFS is a local and static
rule.
• Random Order. Pending tasks are sequenced in a random order. While this might
seem naive, we have a guarantee that no specific objective will be either favoured or
penalised in a consistent way. Random Order is often used as a reference solution
in simulation studies when comparing different dispatching rules.
• Shortest Processing Time first (SPT). The task in the pending list ℘ with the shortest processing time has the highest priority. More specifically, among m machines, indexed by i, the task k that satisfies the following expression has the highest priority:

p_{ik} = min_{j=1,…,|℘|} p_{ij}

SPT is a local and static rule. Most interestingly, SPT results in an optimal solution for the problem 1||∑C_j. SPT just requires the pending job list to be ordered. Furthermore, this can be done once, as it is a static rule. This results in a computational complexity for SPT of O(n log n).
• Longest Processing Time first (LPT). Contrary to SPT, LPT gives higher priority to the pending task with the longest processing time, i.e. the task k satisfying the following expression is given the highest priority:

p_{ik} = max_{j=1,…,|℘|} p_{ij}

LPT is a local and static rule. It also has a computational complexity of O(n log n).
• Earliest Due Date first (EDD). This rule considers the due dates of the pending tasks in ℘. The highest priority is given to the task with the smallest due date or deadline, i.e.:

d_k = min_{j=1,…,|℘|} d_j  ∨  d̄_k = min_{j=1,…,|℘|} d̄_j

EDD is also a local and static rule. EDD gives the optimum solution for the problems 1||T_max and 1||L_max. It has a computational complexity of O(n log n). EDD is sometimes referred to as Jackson's rule. EDD has some important drawbacks for objectives other than L_max. Take, for example, the ∑T_j or ∑_{j=1}^{n} w_j T_j criteria. It is very easy to get a very bad result if there is a job with a large p_j and an early d_j: that job would be sequenced first by EDD, even if doing so delays the start of many other jobs, possibly forcing them to finish later than their due dates, and even if the first job is impossible to finish on time because d_j < p_j. In order to solve this issue, a dynamic version of EDD is commonly used, which is described in the following.
• Minimum Slack first (MS). The slack with respect to the due date is the amount of time that a task still has before it would become tardy. This slack decreases as more and more jobs are scheduled and the current time t increases. Therefore, from the pending jobs list ℘, and at any given time t, the job k minimising the following expression is selected:

k = argmin_{j=1,…,|℘|} max(d_j − p_j − t, 0)

Therefore, a job with the earliest due date but a short processing time is not necessarily sequenced first. MS is an example of a dynamic dispatching rule, as it has to be recalculated each time a job is ready to be dispatched. Assuming that there are n jobs and that, after completing a job, the pending job list ℘ is examined again, the computational complexity of MS can be stated as O(n^2 log n). While considerably slower than the simpler EDD, with modern computers there is no actual difference in running times between EDD and MS, even for thousands of jobs.
• Largest Number of Successors (LNS). In problems with precedence relationships among jobs, it is interesting to process early those jobs that 'unlock' several other jobs once finished. These jobs correspond to those that have many successors. In order to work correctly, this rule has to be combined with others that further give priority to some specific objective, such as due dates or completion times. LNS is a local and static rule. However, this last classification is a bit blurry, depending on how LNS is implemented: if we only consider the immediate successors, then LNS is a local rule; however, if all successors are considered (i.e. the successors of the successors and so on), then LNS could be considered a global rule.
• Shortest Setup Time first (SST). This rule helps in problems where setup times
are a concern. Basically, from all the pending jobs in the ℘ list, the job with the
highest priority is the one with the shortest setup time. SST is a local and static
rule.
• Least Flexible Job first (LFJ). In problems with machine eligibility, it might be of use to sequence first those jobs that have very specific machine needs, in order to avoid scheduling them later, when those specific machines might already be busy with previously scheduled jobs. LFJ is a local and static rule.
• Shortest Remaining Work Next Machine first (SRWNM). This is another example of a global rule. In flow shop or job shop layouts, when assigning jobs to a specific machine, we might be interested in looking ahead at the information of the job regarding subsequent machines. For example, two jobs to be processed on a given machine might have similar requirements on that machine and similar due dates, so one could myopically think that it does not matter which job to assign to that specific machine. However, the next machine to be used by those jobs (picture that the next machine is not the same for the two jobs) could give us more information: from those two machines, we could look into the one that has the least remaining work left. Giving more work to that machine is crucial to avoid imminent idleness. Note that there are many possibilities as regards global dispatching rules, and SRWNM is just an example.
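Among the rules above, MS is the only dynamic one: the slack max(d_j − p_j − t, 0) must be recomputed at the current time t before each dispatch. A minimal single machine sketch (ties broken by job index, an arbitrary choice):

```python
def min_slack_sequence(p, d):
    """Minimum Slack first on a single machine: at each decision time t,
    dispatch the pending job minimising max(d[j] - p[j] - t, 0)."""
    pending = list(range(len(p)))
    t, sequence = 0, []
    while pending:
        # Recompute the slack at time t; break ties by job index.
        j = min(pending, key=lambda k: (max(d[k] - p[k] - t, 0), k))
        sequence.append(j)
        t += p[j]             # the machine is busy until t + p[j]
        pending.remove(j)
    return sequence

# Processing times and due dates of the six-job example in Table 7.3.
p = [6, 3, 9, 7, 8, 2]
d = [3, 7, 12, 10, 20, 29]
ms_sequence = min_slack_sequence(p, d)
```

Note that the resulting sequence can differ from the EDD order, since slacks of already-late jobs are all truncated to zero and the tie-break decides among them.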
Let us give an example of the EDD rule for the 1||L_max problem with six jobs. The processing times and due dates of these jobs are given in Table 7.3. EDD just needs to sort the jobs in ascending order of d_j; this requires the application of the Quicksort algorithm, which has a running time complexity of O(n log n). For the example, the sorted list and EDD solution is π_EDD = (1, 2, 4, 3, 5, 6). As we can see, the completion times are C_1 = p_1 = 6, C_2 = C_1 + p_2 = 6 + 3 = 9,
Table 7.3 Processing times (p_j) and due dates (d_j) for the single machine maximum lateness minimisation problem example
j 1 2 3 4 5 6
pj 6 3 9 7 8 2
dj 3 7 12 10 20 29
Fig. 7.5 Gantt chart with the EDD solution of the single machine maximum lateness minimisation problem example (maximum lateness: 13)
C_4 = C_2 + p_4 = 9 + 7 = 16, C_3 = C_4 + p_3 = 16 + 9 = 25, C_5 = C_3 + p_5 = 25 + 8 = 33 and, finally, C_6 = C_5 + p_6 = 33 + 2 = 35. As a result, the lateness values of the jobs are: L_1 = C_1 − d_1 = 6 − 3 = 3, L_2 = 9 − 7 = 2, L_3 = 25 − 12 = 13, L_4 = 16 − 10 = 6, L_5 = 33 − 20 = 13 and L_6 = 35 − 29 = 6. Therefore, the maximum lateness is L_max = L_3 = L_5 = 13, which is optimal for this example. The resulting Gantt chart is pictured in Fig. 7.5.
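The whole worked example can be reproduced in a few lines:

```python
# Data from Table 7.3 (jobs numbered 1..6).
p = {1: 6, 2: 3, 3: 9, 4: 7, 5: 8, 6: 2}      # processing times
d = {1: 3, 2: 7, 3: 12, 4: 10, 5: 20, 6: 29}  # due dates

sequence = sorted(p, key=lambda j: d[j])      # EDD: ascending due dates
t, lateness = 0, {}
for j in sequence:
    t += p[j]                  # completion time C_j on the single machine
    lateness[j] = t - d[j]     # lateness L_j = C_j - d_j
L_max = max(lateness.values())
```

The computed sequence is (1, 2, 4, 3, 5, 6) with L_max = 13, matching the text and Fig. 7.5.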
There are variants of dispatching rules that consider weights. For example, the Weighted Shortest Processing Time first (WSPT) rule is the weighted variant of SPT, where the task k that satisfies the following expression is given the highest priority:

k = argmax_{j=1,…,|℘|} w_j / p_{ij}

Basically, all previous dispatching rules can be modified in order to consider weights in a straightforward way.
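For instance, a minimal WSPT selection on made-up data (the weights and processing times below are purely illustrative):

```python
# Hypothetical weights and processing times for three pending tasks.
w = {"A": 2, "B": 1, "C": 6}
p = {"A": 7, "B": 3, "C": 9}

# WSPT: highest priority to the task maximising w_j / p_j.
first = max(p, key=lambda j: w[j] / p[j])
```

Here task C wins (6/9 ≈ 0.67 beats 1/3 ≈ 0.33 and 2/7 ≈ 0.29), even though it has the longest processing time: the weight dominates.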
In hybrid layouts, where several machines are eligible for a job, a machine assignment rule is also needed. Some common machine assignment rules are:
• First Available Machine (FAM). Assigns the job to the first eligible machine available. This is the machine that is freed earliest from its last scheduled job, or the one with the lowest availability date if no job is scheduled on the machine yet.
• Earliest Starting Time (EST). Chooses the machine that is able to start job j at the earliest time. Note that only in simple layouts with no constraints does this EST rule coincide with FAM. For example, the availability of the job, lags, overlaps and a possible setup time can be considered. The EST rule measures when the job will be able to start at the machine, not only when the machine is free from previous jobs.
• Earliest Completion Time (ECT). Takes the eligible machine capable of completing job j at the earliest possible time. Thus, the difference with the previous rule is that this rule includes (possibly machine-dependent) processing times and other constraints.
Note that the EST and ECT rules need more calculations than FAM, as all pending jobs in ℘ have to be tested for the earliest starting and completion times on all possible eligible machines in hybrid environments. However, in some cases this added CPU time might prove crucial for a good machine assignment decision.
Similarly, in recent years, some other more advanced machine assignment rules
have been proposed. Some of them are discussed below:
• Earliest Preparation Next Stage (EPNS). The machine able to prepare the job at the
earliest time for the next stage to be visited is chosen. It is a hybrid layout adaptation
of the SRWNM rule. Many possible special situations can be considered like time
lags between the current and the next stage, transportation constraints, etc. The rule
uses more information about the continuation of the job, without directly focusing
on the machines in the next stage. This is a global machine assignment rule.
• Earliest Completion Next Stage (ECNS). The availability of machines in the next
stage to be visited and the corresponding processing times are considered as well.
This rule also considers all the eligible machines at the next stage to make a
decision. ECNS is also a global machine assignment rule.
• Forbidden Machine (FM). Excludes machine l ∗ that is able to finish job k at the
earliest possible time, where job k is the next job in the sequence immediately
following job j. ECT is applied to the remaining eligible machines for job j. This
rule tries to minimise the excessive greediness of the ECT and similar rules. The
fastest machine for a given stage might create problems in downstream stages.
This rule is expected to obtain better results for later jobs, as it reserves some
capacity in the machine able to finish the next job earliest. As with the previous
rules, this is also a global or a ‘look ahead’ rule.
• Next Job Same Machine (NJSM). The assumption is made that two consecutive jobs in the sequence, j and k, are assigned to the same machine. The machine chosen for assignment is the one able to finish both jobs j and k at the earliest time, although only job j is actually assigned. This is a look-ahead rule that also considers the next job in the sequence. This rule is especially useful if setups are relatively large, as not only the setup for job j is considered, but also the setup between jobs j and k.
• Sum Completion Times (SCT). Completion times of job j and job k are calculated
for all eligible machine combinations. The machine chosen is the one where the
sum of both completion times is the smallest. This rule is similar to NJSM, but
without the assumption that job k is assigned to the same machine.
Note that extremely complex dispatching rules or machine assignment rules can
be devised. However, whether these more complicated rules perform better or not is
even today a matter of research. For example, Urlings et al. (2010) demonstrated that
the more complex machine assignment rules are often not the best ones. The added
CPU time needed for the more complex assignments did not pay off when compared
to faster and simpler rules. Let us not forget that the main benefit of dispatching rules
is their simplicity and speed; therefore, going in the opposite direction might cancel
out these benefits.
Note that I_j(t) has two main components. The first one corresponds to the WSPT rule and the second to the MS rule. p̄ is just the average of the processing times on the single machine, i.e. p̄ = (∑_{j=1}^{n} p_j)/n. K is referred to as the look-ahead parameter and has to be set in a particular way. For single machine problems, the due date tightness factor τ and the range of due dates R are employed as follows:

τ = 1 − (∑_{j=1}^{n} d_j)/(n · C_max)

R = (d_max − d_min)/C_max

where d_max and d_min are the maximum and minimum due dates, respectively. Note that in the single machine case, the makespan is trivially calculated as C_max = ∑_{j=1}^{n} p_j.
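Using the six-job instance of Table 7.3 as data (reused here purely for illustration), τ and R are computed as follows:

```python
# Table 7.3 instance.
p = [6, 3, 9, 7, 8, 2]       # processing times
d = [3, 7, 12, 10, 20, 29]   # due dates

n = len(p)
c_max = sum(p)                 # single machine: C_max = sum of the p_j
tau = 1 - sum(d) / (n * c_max) # due date tightness factor
R = (max(d) - min(d)) / c_max  # due date range
```

For this instance τ = 1 − 81/210 ≈ 0.614 (fairly tight due dates) and R = 26/35 ≈ 0.743 (a wide due date range).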
From τ and R there are several ways of calculating K. In any case, it can be seen that the larger the K value, the smaller the quotient in the exponent of e in the I_j(t) priority index, which translates into a lower participation of the 'MS part' in the ATC rule. In the extreme, if that quotient is 0, and knowing that e^0 = 1, the I_j(t) priority index reduces to the WSPT rule.
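The index I_j(t) itself is defined on the preceding page; for reference, the standard form of the ATC priority index found in the literature, consistent with the description above (a WSPT factor damped by an exponential 'MS part', with K p̄ in the denominator of the exponent), reads:

```latex
I_j(t) = \frac{w_j}{p_j}\,
         \exp\!\left(-\,\frac{\max\left(d_j - p_j - t,\; 0\right)}{K\,\bar{p}}\right)
```

With zero slack the exponential equals 1 and the index collapses to the WSPT ratio w_j/p_j, exactly as stated in the text.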
The main drawback of dispatching rules is their intrinsic myopic nature. Decisions are too static and too near-sighted for a good overall performance. Furthermore, dispatching rules tend to be tailored to specific objectives and favour some performance measures over others. However, they are general in the sense that they can be applied with little modification to most machine layouts and constraints. Considering that advanced approaches are only slowly gaining ground in real factories and are still not widespread, it is easy to understand that dispatching rules are still intensively used at companies nowadays.
Broadly speaking, there are two types of exact algorithms for scheduling problems:
• Exact constructive algorithms. These algorithms exploit some properties of the specific model in order to construct a solution which is guaranteed to be optimal. Constructive exact procedures will be discussed in detail in Sect. 8.2, but a hint of their logic is given here using the problem of scheduling a number of jobs on one machine with the goal of minimising the total completion time or flowtime. According to the notation introduced in Chap. 3, this problem can be modelled as 1||∑C_j. For a given sequence π = (π_1, …, π_n), the value of the objective function can be expressed as

∑_j C_j(π) = n · p_{π_1} + (n − 1) · p_{π_2} + ··· + 2 · p_{π_{n−1}} + p_{π_n}

where p_{π_k} is the processing time of the job in position k of the sequence π. It is easy to see (and can be formally proved) that the best value of the objective function is obtained by scheduling first those jobs with the lowest processing times. Therefore, it is possible to construct a solution which is optimal. Since the procedure requires sorting the jobs in ascending order of their processing times, its computational complexity is given by O(n log n).²
Although such constructive procedures cover only a very limited spectrum of rather simple, computationally manageable problems, they can be used as sub-modules of more complex approaches for the approximate solution of more complex scheduling problems. For example, if one machine or stage can be identified as a distinctive bottleneck of the real-world problem under consideration, an adequate solution procedure for a single machine might yield an initial solution for this bottleneck machine. Afterwards, this solution is complemented and possibly adjusted by a schedule for the operations before and after this bottleneck machine/level.
• Enumerative algorithms. These algorithms implicitly or explicitly guarantee to evaluate all possible solutions of the model. They include complete enumeration as well as branch and bound, cutting plane, or branch and cut approaches (as well as other implicit enumeration approaches, often based on some decision tree consideration), provided that these procedures are executed comprehensively (which might be problematic for reasons of computation time). The main types of enumerative algorithms for manufacturing scheduling are discussed in Sects. 8.3.1–8.3.3.
As already mentioned, enumeration approaches offer the comfort and safety of exact optimal solutions at the expense of a computational effort that, in many cases, grows exponentially with the problem size, making large instances practically intractable.
² The thorough reader will have recognised that such a procedure is identical to the SPT rule discussed in Sect. 7.4.
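The SPT construction just described can be sketched in a few lines of Python (a minimal illustration with hypothetical processing times, not code from the book):

```python
# SPT rule for the single-machine total flowtime problem 1||sum C_j:
# sorting the jobs by non-decreasing processing time is provably optimal.

def total_flowtime(processing_times):
    """Sum of completion times when jobs are processed in the given order."""
    completion, total = 0, 0
    for p in processing_times:
        completion += p          # completion time of the current job
        total += completion
    return total

def spt_schedule(processing_times):
    """Return an optimal job sequence by the SPT rule, O(n log n)."""
    return sorted(range(len(processing_times)),
                  key=lambda j: processing_times[j])

p = [4, 1, 3, 2]                 # hypothetical processing times
seq = spt_schedule(p)            # -> [1, 3, 2, 0]
flowtime = total_flowtime([p[j] for j in seq])  # -> 20
```

Any other ordering cannot improve the weighted sum n · p_{π_1} + (n − 1) · p_{π_2} + · · ·, which is exactly the argument given above.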
7.5 Scheduling Algorithms 175
the restriction of the consideration to a single machine or job. However, this refers
more to the decomposition and coordination aspects mentioned in Sect. 6.3 and is
basically not addressed here.) It should be mentioned that the myopia of this type of approach initially concerns operations as scheduling objects. Since operations combine jobs and machines/stages, operations might be prioritised with respect to jobs (on a given machine or group of machines), with respect to machines (for given jobs or groups of jobs), or with respect to a combination of both.
The strictness of pure greedy approaches can be reduced by simultaneously considering the inclusion of more than one operation into a sequence/schedule, and/or by not only appending an operation to a given partial sequence but also considering more than one position within this partial schedule for the insertion of one or more new operations.
Summarising, constructive heuristics in manufacturing scheduling usually work iteratively: first, they sort the objects (operations, jobs, etc.) to be scheduled, either statically or dynamically; second, they choose one or more of the not-yet-scheduled objects and insert them at one or several candidate slots in the sequences determined so far; third, one or more of the best insertions are kept and represent the sequence/schedule of the objects considered so far for the next iteration. This procedure continues until all objects have been scheduled.
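The three-step scheme above can be sketched as follows; single-machine total flowtime serves purely as an illustrative objective, and the descending-order priority sort is an assumption of this sketch (in the spirit of insertion heuristics such as NEH), not a prescription from the text:

```python
# Generic insertion-based constructive heuristic:
# (1) sort the jobs statically, (2) insert each not-yet-scheduled job at
# every slot of the partial sequence, (3) keep the best insertion, iterate.

def total_flowtime(seq, p):
    completion = total = 0
    for j in seq:
        completion += p[j]
        total += completion
    return total

def insertion_heuristic(p):
    order = sorted(range(len(p)), key=lambda j: -p[j])  # step 1: sort
    partial = []
    for job in order:
        # steps 2 and 3: try every slot, keep the best resulting sequence
        partial = min(
            (partial[:i] + [job] + partial[i:]
             for i in range(len(partial) + 1)),
            key=lambda s: total_flowtime(s, p),
        )
    return partial
```

For this objective the result coincides with the SPT optimum; for harder models (e.g. flow shops with makespan) the same skeleton yields good, though not necessarily optimal, schedules.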
Finally, a short remark with respect to so-called ties in the iterative process of generating a solution: both in constructive approaches and in the improvement approaches referred to in the next section, it might happen that the criterion for the choice of one or more new scheduling objects to be inserted into the partial schedule derived so far does not yield a unique selection. For example, if one out of several remaining operations is to be chosen to be scheduled next, the criterion might give more than one proposal for the next best insertion. (Also, more than one ‘best’ position in the sequence could appear for a single object to be inserted.) This ambiguity is called a tie. The scheduling literature does not provide many hints on how to deal with such a situation. Apart from the simple suggestion to consider all of these alternatives separately (which might result in exponential computational effort), only a few hints are given. For example, as an adaptation from multi-criteria optimisation, it has been proposed to use a secondary criterion, which then has to be determined carefully in addition.
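A secondary criterion can be implemented simply as a lexicographic sort key; the data below are hypothetical:

```python
# Tie-breaking with a secondary criterion: jobs with equal primary priority
# (processing time) are ordered by a secondary one (here, due date).

jobs = [(3, 10), (3, 6), (1, 8), (3, 7)]  # (processing_time, due_date)

# Without the second key, the order of the three jobs with p = 3 would be
# arbitrary; the lexicographic key resolves the tie deterministically.
order = sorted(range(len(jobs)), key=lambda j: (jobs[j][0], jobs[j][1]))
# -> [2, 1, 3, 0]
```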
stopping criterion to finish the improvement process. This holds for both exact and heuristic approaches. However, the stopping criterion for exact approaches is implicitly or explicitly the achievement of the optimal solution, while the stopping criterion for a heuristic improvement approach will not and cannot include this optimality condition.
A core problem in the (heuristic) improvement of most combinatorial optimisation problems, including most in manufacturing scheduling, is their non-convexity. Consequently, searching for improvements in the neighbourhood of a given solution (so-called local search procedures) will often lead to local optima, while the determination of global optima will in general not be possible by these heuristic approaches (see, e.g. Groetschel and Lovasz 1993). Here we distinguish two different approaches, which are described below.
Limited enumeration
Limited enumeration as a heuristic for combinatorial optimisation problems basically comprises enumerative approaches in which the number of solutions considered is limited, directly or indirectly, to a number below the overall number of solutions. Consequently, these approaches do not evaluate (explicitly or implicitly) all solutions and therefore cannot guarantee the achievement of an optimal solution.
The limitation might result from:
• a direct upper bound on the number of solutions considered,
• an indirect upper bound on the number of solutions considered, e.g. given by a
bound on the computation time or other resources,
• a limitation on the quality of the solution to be derived, e.g. some upper bound on the deviation from the optimal objective function value which has to be fulfilled by the final solution of the approach (this requires an additional procedure for determining tight bounds),
• another type of limitation on the number of solutions evaluated, e.g. some ‘distance’ constraint, meaning that only solutions in the neighbourhood of a given solution are explored, as in local search procedures.
Of course, these limitation procedures can also be applied in combination.
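A direct cap on the number of solutions evaluated, the first bullet above, can be sketched as follows (the instance data are hypothetical):

```python
import itertools

# Limited enumeration: evaluate at most `budget` sequences of a single-
# machine flowtime instance; optimality is no longer guaranteed.

def limited_enumeration(p, budget):
    best_seq, best_val = None, float("inf")
    for seq in itertools.islice(itertools.permutations(range(len(p))),
                                budget):
        completion = value = 0
        for j in seq:
            completion += p[j]
            value += completion
        if value < best_val:
            best_seq, best_val = list(seq), value
    return best_seq, best_val

partial_best = limited_enumeration([4, 1, 3, 2], budget=6)   # cap active
full_best = limited_enumeration([4, 1, 3, 2], budget=24)     # all 4! sequences
```

With the cap active, only the first six permutations are inspected and the best flowtime found (26) misses the true optimum (20) reached by full enumeration.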
Local Search
Local search approaches try to explore the neighbourhood of one or more given solutions. Defining local search procedures requires:
• the definition of a neighbourhood of a solution,
• a rule for the sequence in which the solutions in the neighbourhood are explored,
• a rule determining which of the solutions examined should be kept, i.e. used for the next iteration of the neighbourhood exploration,
• a rule determining whether the whole neighbourhood should be explored or only a limited part of it, e.g. only until a first better solution has been detected, as well as
• a stopping criterion to finish the search procedure (see also Domschke et al. 1997).
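These ingredients can be instantiated in a minimal sketch (adjacent-swap neighbourhood, first-improvement acceptance, and single-machine flowtime as a hypothetical objective):

```python
# Local search with an adjacent-swap neighbourhood:
# - neighbourhood: sequences obtained by swapping two adjacent jobs
# - exploration order: left to right
# - acceptance rule: the first improving neighbour replaces the solution
# - stopping criterion: a full pass without improvement (a local optimum)

def total_flowtime(seq, p):
    completion = total = 0
    for j in seq:
        completion += p[j]
        total += completion
    return total

def local_search(seq, p):
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            neighbour = seq[:]
            neighbour[i], neighbour[i + 1] = neighbour[i + 1], neighbour[i]
            if total_flowtime(neighbour, p) < total_flowtime(seq, p):
                seq, improved = neighbour, True   # first improvement
    return seq

p = [4, 1, 3, 2]
result = local_search([0, 2, 3, 1], p)
```

For this particular special case the local optimum happens to be global; in general, as stated above, local search stops at local optima.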
178 7 Overview of Scheduling Methods
7.5.2.3 Metaheuristics
solutions, formal adequacy then refers exclusively to the evaluation of the quality
of the solution(s) within this formal framework. The issue of formally evaluating
algorithms becomes particularly acute in the case of approximate algorithms, since
we already know that, for most scheduling decision-making problems, we will obtain
models that cannot be solved in an exact manner, at least not in reasonable time.
Since some algorithms, and in particular metaheuristics, depend on parameters, the term ‘different algorithms’ also includes comparing two versions (i.e. with different parameter settings) of the same algorithm.
The evaluation should refer to (at least) two indicators, e.g. objective function value and computation time, which results in a multi-dimensional evaluation of the algorithms. Usually, there will be a trade-off between these two dimensions, which has to be interpreted as in standard multi-criteria evaluations. Without loss of generality, we will usually refer to the first indicator, i.e. the objective function value.
With these two performance indicators in mind, two possibilities are usually
employed for the evaluation of algorithms:
• Benchmark performance. This often means giving an upper bound on the deviation of the algorithm’s solution from the optimal objective function value. For example, the algorithm can be shown to yield a solution that is at most x % worse than the optimal solution’s objective function value. However, results of this type can seldom be derived. With respect to computation time, an upper bound on the effort required by the algorithm, depending on the problem instance length (e.g. measured by the number of jobs or operations to be scheduled), might be given. This upper bound analysis of the computational effort is provided by the computational complexity, which has been addressed in Sect. 7.3.
• Comparing two or more algorithms among each other. In this case, two different
approaches (which will be discussed in the next sections) can be employed:
1. A formal proof can be given showing one algorithm to perform better than
another one, always or for some characteristics of the instances of the model.
2. A simulation study (also often referred to in the literature as a computational experiment) tries to derive some dominance results between the algorithms.
A formal proof could be given, e.g. for the trivial situation in which an algorithm is a refinement of another one, in the sense that it examines explicitly or implicitly all solutions of the non-refined algorithm. Then, obviously, the refined algorithm will never be worse than the non-refined one. However, conclusions of the above type are mostly either trivial or unavailable. An alternative is to compare the performance of several algorithms by means of worst-case analysis.
Let us formalise the main concepts behind worst-case analysis. Assume that, for a given problem instance l, the objective function value of the algorithm under consideration is OFV_a(l) and the respective objective function value of the reference algorithm is OFV_ref(l). A worst-case analysis result usually expresses that OFV_a(l) and OFV_ref(l) (or the respective optimal objective function values) are linked by some function, as

OFV_a(l) ≤ f(OFV_ref(l)) or OFV_a(l) ≤ f(OFV_ref(l), m, n, ...) for instance l,

or

OFV*_a ≤ f(OFV*_ref) or OFV*_a ≤ f(OFV*_ref, m, n, ...) for the respective optimal objective function values.
An example of the latter is given for the 2-machine flow shop problem with makespan objective and limited intermediate storage of b jobs between the two machines (Papadimitriou and Kanellakis 1980). These authors compare the (polynomially solvable) approach for the respective no-wait problem (which obviously does not need any buffer between the two machines), yielding makespan C*_max,NW, with the (optimal) solution to the buffer-constrained problem, C*_max,b, i.e. they apply the generation of the no-wait solution as an approximate method for the buffer-constrained problem and use an exact enumerative approach for the buffer-constrained problem as the reference approach. They prove that

C*_max,NW ≤ ((2b + 1)/(b + 1)) · C*_max,b

holds. Since (2b + 1)/(b + 1) < 2, the optimal solution to the no-wait problem, used as a heuristic solution to the buffer-constrained problem, is therefore never worse than twice the optimal solution of the buffer-constrained problem.
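The bound factor (2b + 1)/(b + 1) starts at 3/2 for a single buffer slot and approaches, but never reaches, 2 as b grows, which is easy to check numerically:

```python
# Worst-case factor of the Papadimitriou-Kanellakis bound for various
# buffer sizes b: (2b + 1)/(b + 1) = 2 - 1/(b + 1).
factors = {b: (2 * b + 1) / (b + 1) for b in (1, 2, 10, 100)}
# factors[1] == 1.5, and every factor stays strictly below 2
```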
Although these analytical results are elegant, their contribution to the evaluation of an approximate approach should be regarded with some reservation, for the following reasons:
• The results are only valid for the worst case. To what extent this worst case is
relevant for the problem solution has to be evaluated separately.
• The quality of worst-case conclusions might be somewhat disillusioning. In the above example, the result is definitely interesting from a mathematical point of view. However, to what extent a bound of 100 % on the worst-case deviation from the optimal solution provides valuable information for a possibly underlying real-world situation, or even just for the formal approximate solution approach via the no-wait problem, is debatable.
• Analytical results of the above nature are only available for a few and rather simple
problem settings.
Given the limitations of formal proofs, simulation studies using a set of sample instances are the most widespread method for evaluating the performance of approximate algorithms. There is no
7.6 Assessing Scheduling Methods 183
universally accepted approach for conducting such simulation studies, but they can be summarised in two steps:
1. Testbed generation. A certain number of instances of the model are either generated specifically for the model, or adapted from those employed for other models. This set of sample instances is usually denoted as a testbed. Note that this adaptation might take place in two ways: either the adaptation refers only to the method of generating test instances, while the instances themselves are generated anew, or the instances are taken directly from the references or from some public database.
2. Algorithm analysis. The algorithms to be compared are run on every single problem instance, and some metrics/analyses are employed to establish the relative performance of each algorithm on the testbed. If, according to the metrics/analysis, one algorithm is found to outperform the other on the testbed, then it is assumed that the former would, in general, perform better than the latter.
It is clear that the correctness of each step greatly affects the goal, which is to establish, in general, the relative performance of different algorithms. Next we give some remarks on each of these steps.
Regarding testbed generation, the following considerations should be taken into account:
• The size of the problem instances (e.g. indicated by the number of jobs and/or
machines) should be in line with real-world problem dimensions and/or the dimen-
sions of problem instances used in comparable studies. This includes also the
different combinations of number of jobs and number of machines, number of
operations, etc.
• The number of problem instances should be sufficient to derive useful results,
i.e. this number should reflect the real-world problem setting and/or the statistical
significance of the conclusions to be obtained in the next step. Note that, if the algorithms under comparison are deterministic, a single run of each algorithm on every instance will suffice in step 2. However, if the algorithms under comparison are stochastic in nature, then one option is to design a testbed large enough to average the performance of these algorithms over the whole testbed.
• The main parameters of the model (such as processing times, set up times, release
dates, etc.) should be chosen adequately with respect to the intended goal. For
instance, highly diverse processing times (e.g. randomly drawn from a uniform distribution between 1 and 99, as is often proposed) might yield complicated problem instances and therefore might give indications of some kind of robustness of an algorithm relative to the heterogeneity of processing times. However, compared with real-world problem settings, more or less significant correlations of processing times across the jobs on a specific machine/stage and/or across the operations of a specific job on different machines might be closer to real-world applications, and might yield different conclusions
with respect to the (relative) performance of the algorithms considered. Note that
a hypothesis often addressed with respect to correlations in processing times is
that the relative performance differences of algorithms decrease if the correlation
of processing times increases. However, this is by no means a conclusion valid for
all scheduling models. A way out of this problem with respect to real-world applications is to base the generation of test instances on observations from the specific real-world case.
• When extending/adapting an existing testbed to different models, the additional
features of the model under consideration, e.g. due dates, availability assumptions,
precedence constraints, have to be carefully determined and embedded into the
test instances. For instance, taking literature-based standard testbeds for permu-
tation flow shop models with makespan objective and using these instances for
problems with due dates and tardiness objective requires a careful determination
of the due dates. However, setting the due dates too tight on one hand, results in not
reachable due dates with the consequence that approaches which intend to max-
imise throughput will be adequate while very loose due dates on the other hand
will always enable to reach these due dates and eases the optimisation problem
from the opposite side. Although this observation may seem trivial, it is sometimes
ignored or at least not explicitly regarded in some references, i.e. the discussion
of the adequacy of adapted testbeds is often neglected or at least not dealt with
satisfactorily.
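The contrast between uniform and job-correlated processing times discussed above can be sketched as follows; the generator names, ranges and the base-plus-noise correlation scheme are illustrative assumptions, not a standard from the text:

```python
import random

# Testbed generation: uniform processing times U(1, 99), as often proposed,
# versus job-correlated times (a job-specific base level plus small noise).

def uniform_instance(n_jobs, n_machines, rng):
    return [[rng.randint(1, 99) for _ in range(n_machines)]
            for _ in range(n_jobs)]

def correlated_instance(n_jobs, n_machines, rng):
    instance = []
    for _ in range(n_jobs):
        base = rng.randint(1, 80)   # job-specific level (assumed scheme)
        instance.append([base + rng.randint(0, 19)
                         for _ in range(n_machines)])
    return instance

rng = random.Random(42)             # fixed seed for reproducibility
testbed = [uniform_instance(20, 5, rng) for _ in range(10)]
```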
Regarding testbed analysis, once the objective function values for each algorithm on each instance have been obtained, some metrics can be derived for analysis. Although one option would be to employ the objective function values themselves as a performance indicator, it is usually preferred to employ the Relative Percentage Deviation (RPD) metric for each algorithm and instance. The RPD of algorithm a on instance l (l = 1, . . . , L) is usually defined as

RPD_a(l) = 100 · (OFV_a(l) − OFV_ref(l)) / OFV_ref(l).

Recall from Sect. 7.6.2.1 that OFV_a(l) is the objective function value obtained by algorithm a when applied to instance l, and OFV_ref(l) the respective objective function value of the reference algorithm. Considering the whole set of L test instances, the Average Relative Percentage Deviation of algorithm a is defined as

ARPD_a,ref = (1/L) · Σ_{l=1..L} RPD_a(l).
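With the objective function values at hand, RPD and ARPD as described above reduce to a few lines (the numbers below are hypothetical):

```python
# RPD_a(l) = 100 * (OFV_a(l) - OFV_ref(l)) / OFV_ref(l), averaged over the
# L instances of the testbed to obtain ARPD_a,ref.

def rpd(ofv_a, ofv_ref):
    return 100.0 * (ofv_a - ofv_ref) / ofv_ref

def arpd(ofv_a_values, ofv_ref_values):
    pairs = list(zip(ofv_a_values, ofv_ref_values))
    return sum(rpd(a, r) for a, r in pairs) / len(pairs)

# hypothetical objective function values on L = 3 instances
average_dev = arpd([105, 100, 110], [100, 100, 100])   # -> 5.0
```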
Recently, more and more statistical test procedures become standard to evaluate
the relative performance of approximate methods. We explicitly approve such statis-
tical approaches because they give at least some statistical reasoning for the relative
performance of the approaches under consideration.
To apply statistical test procedures, first a hypothesis has to be formulated. After-
wards, an adequate test procedure has to be chosen and the number of problem
instances for a certain level of significance of the evaluation of the hypothesis has to
be determined or, vice versa, for a given number of instances a level of significance
can be derived.
For small sizes of problem instances, especially with respect to the number of
jobs, OFVref (l) might be the value of the optimal solution (obtained e.g. by means of
an enumeration approach). Then ARPD gives the average deviation from the optimal
objective function value. For larger problem sizes, an algorithm cannot be compared with the optimal solution but only with another algorithm or with (the best value of) a set of algorithms. An alternative is to use some bound on the optimal objective function value as a reference point.
For those instances for which the optimal solution can be determined, a hypothesis concerning the mean values may be formulated as follows:

Hypothesis: ARPD_a,ref ≤ α,

i.e. subject to a certain level of significance, the solution value of the approximate method will not deviate by more than α % from the optimal objective function value.
For larger problem sizes, a similar hypothesis can be stated as well. However, since
in this case the optimal objective function value is not available for comparison, the
deviation from the best approach among the ones considered can be covered by
respective hypotheses.
In addition, and more commonly for larger problem sizes, when comparing two different algorithms a1 and a2, an appropriate hypothesis can be stated as

Hypothesis: ARPD_a1,ref ≤ ARPD_a2,ref, or equivalently

DIST_a2,a1 := ARPD_a2,ref − ARPD_a1,ref ≥ 0.

The reference can then be defined as the best solution of a1 and a2 for every instance, i.e. OFV_ref(l) = min(OFV_a1(l), OFV_a2(l)). The confirmation of this hypothesis
(or the rejection of its opposite) would state that a1 performs better than a2, relative to
the reference approach and subject to a certain level of significance. (Remark: sometimes, choosing random solutions as one of the approximate methods and performing the respective statistical comparison proves or refutes the assertion that a proposed method is at least competitive with a random choice of solutions. However, in most, but not all, cases of proposed methods the result of this comparison should be obvious.)
Having precisely defined the hypothesis to be tested, the toolbox of statistical
test procedures yields adequate approaches to decide whether to accept or to reject
the hypothesis. We will not describe these test procedures in detail but refer the
reader to numerous textbooks on statistical testing (see Sect. 7.7 on further readings).
186 7 Overview of Scheduling Methods
Instead, we give some hints for the application of test procedures when evaluating
(approximate) methods in manufacturing scheduling.
If just one approximate method is compared with another one, paired t-test procedures can be applied. t-test approaches can be used if DIST_a2,a1 follows a normal distribution; this requirement is often assumed to be fulfilled without explicit proof. If the normality of DIST_a2,a1 is questionable, an approximate Gauss test (parametric test) or Wilcoxon’s signed-rank test (non-parametric test) is more appropriate, since these do not require this assumption on the distribution of DIST_a2,a1.
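A paired t statistic on the per-instance differences DIST_a2,a1 can be computed with the standard library alone; the RPD values below are hypothetical, and the resulting statistic would be compared against a t quantile with n − 1 degrees of freedom:

```python
import math
import statistics

# Paired t-test on per-instance RPD differences DIST_{a2,a1}(l):
# H0 is that the mean difference between the two algorithms is zero.

def paired_t_statistic(rpd_a1, rpd_a2):
    diffs = [d2 - d1 for d1, d2 in zip(rpd_a1, rpd_a2)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd = statistics.stdev(diffs)          # sample standard deviation
    return mean_d / (sd / math.sqrt(n))

rpd_a1 = [1.0, 2.0, 0.5, 1.5, 1.0]        # hypothetical RPD values of a1
rpd_a2 = [2.0, 2.5, 1.5, 2.0, 2.0]        # hypothetical RPD values of a2
t = paired_t_statistic(rpd_a1, rpd_a2)    # clearly positive: a1 looks better
```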
If more than two heuristics are to be compared with respect to their performance, an ANalysis Of VAriance (ANOVA) can be used to accept or reject the hypothesis that all algorithms under consideration perform equally with respect to their ARPD_a,ref values. If this hypothesis is rejected (which is desired in order to determine a best method, or at least significant differences between (some of) the methods under consideration), it has to be kept in mind that the identification of better or best approaches among those considered is based on the (pairwise) comparison of these approaches on the same testbed. This, however, requires statistical corrections when applying test procedures to more than one pair of approaches. Among others, the most popular of these correction approaches are those by Tukey, Tukey-Kramer, Bonferroni and Holm-Bonferroni. Again, we will not discuss this aspect in detail, but just mention that the correction procedure amounts to an adjusted interpretation of the p-values when determining the level of significance of the evaluation of the respective hypotheses.
Additionally, we point out that applying ANOVA requires the fulfilment of three assumptions, i.e.
1. Randomness and independence,
2. Normality: sample values are drawn from a normal distribution,
3. Homogeneity of variance (homoscedasticity): the variances of every indicator are the same.
Before applying ANOVA to hypothesis testing, these assumptions have to be checked. If they are not fulfilled, the Kruskal-Wallis rank test might be applied instead.
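The one-way ANOVA F statistic behind the equal-performance hypothesis can be computed by hand from the RPD samples of the algorithms (the samples below are hypothetical); checking the three assumptions listed above remains a separate step:

```python
import statistics

# One-way ANOVA F statistic: between-group variance over within-group
# variance; H0 is that all algorithms perform equally on the testbed.

def anova_f(groups):
    k = len(groups)                              # number of algorithms
    n = sum(len(g) for g in groups)              # total number of samples
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical RPD samples for three algorithms on the same testbed
f = anova_f([[1.0, 1.2, 0.9], [2.1, 2.0, 2.2], [1.1, 1.0, 1.2]])
# a large F speaks against the equal-performance hypothesis
```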
Finally, we point out that most of the statistical test procedures mentioned here, and the respective discussions concerning the relative performance of approximate scheduling approaches, explicitly or implicitly refer to the so-called α error (type 1 error), i.e. the risk of rejecting a hypothesis although it is true. The β error (type 2 error), i.e. the risk of accepting a hypothesis although it is false, is usually not considered when statistically evaluating the performance of scheduling approaches.
Concluding this section, we would like to point out that other, more or less different, statistical approaches can also yield additional insight into the performance of approximate methods. For example, referring to the complete distribution of objective function values for a problem and comparing this distribution with the solutions generated by an approximate method may yield additional insight into the approximate method’s performance.
Summarising the previous section, partly also from a real-world perspective, we point
to the following aspects:
1. We recall once again that a properly defined and executed statistical analysis of the performance of algorithms for manufacturing scheduling models should be standard when evaluating these methods. However, we have the impression that this is not yet generally accomplished.
2. Another aspect, also relevant from a real-world application point of view, is the (non-)necessity of a certain level of solution quality. Especially because of arguments referring to the uncertainty of many problem data, a medium-sized (whatever that means) deviation from a best solution might be accepted within the formal problem, since the real-world implementation will differ more or less significantly from the solution derived for the formal problem anyway. Therefore, pre-determining non-zero levels of acceptance (with respect to optimisation error and/or degree of infeasibility) might be a reasonable setting even in the formal sphere of the optimisation process. However, an inevitable but also rather trivial requirement for the solution of the formal problem is its ability to be transformed into an adequate solution of the real-world problem, or at least to deliver some valuable additional information for such a solution.
3. For an externally (by the decision maker) given level of significance for the
acceptance or the rejection of a hypothesis of the above type, the test procedures
will give a minimum number of problem instances to be generated and calculated
(or vice versa: for a given number of instances a level of significance can be
derived).
4. If several (i.e. more than one) hypotheses are to be accepted or rejected using the same testbed, i.e. more than two approaches are to be compared, the respective comparisons are no longer statistically independent. Therefore, in this case either the level of significance decreases (if the number of instances is not increased) or the number of instances has to be increased (to maintain a pre-specified level of significance). In statistical analyses of the performance of approximate methods for manufacturing scheduling (where often more than two methods are compared), this aspect is rather often ignored.
5. If a thorough statistical analysis of the performance of scheduling approaches is
performed, both types of errors in statistical test theory, i.e. α error (type 1 error)
as well as β error (type 2 error), should be addressed.
6. Finally, not only the quality of a given algorithm for a given formal manufacturing scheduling problem has to be considered; the effort for setting up the model itself, as well as the solution algorithm, might also have to be limited.
This chapter has presented a brief overview of many possible scheduling methods. We initially discussed the complexity of scheduling problems from a simple standpoint, avoiding as far as possible the technicalities of computational complexity. The conclusion is that most scheduling problems are computationally complex and have an astronomical number of possible solutions. We have also pointed out the relative importance of obtaining the exact optimal solution for a scheduling problem, given that what we are actually solving is a model of the studied reality, and the approximations and simplifications made in that model might well offset the gains of a precisely obtained optimal solution.
The main scheduling methods have also been introduced in this chapter. Each method has its own advantages and disadvantages. In a nutshell, dispatching rules are easy to understand, to implement and to compute. However, their results are often poor except for some simplistic problems, and they are easily surpassed by most scheduling algorithms. Exact approaches provide the optimal solution but are viable only for simple problems and small instances. Unless some very specific problem is being studied for which a very effective exact optimisation approach exists, they are not advisable in complex production shops where hundreds, potentially thousands, of jobs are to be scheduled. Heuristics are often tailored to specific problems and objectives, but they usually provide much better solutions than dispatching rules. With today’s computing power, heuristics are practically as fast as dispatching rules.
The final part of the chapter is devoted to discussing how to assess scheduling methods, from both formal and real-world points of view. From a formal viewpoint, the predominant approach for approximate methods is the simulation study, for which no standard procedure yet exists, although we give some insight into the basics of these analyses.
Interested readers wishing to delve into the world of computational complexity should start with the books of Garey and Johnson (1979), Papadimitriou (1979) or Arora and Barak (2009), among many other more specialised texts. Studies about
computational complexity for scheduling problems appeared in Garey et al. (1976),
Rinnooy Kan (1976), Błazewicz et al. (1983), Lawler et al. (1993) and many others.
The book of Brucker (2007) and that of Brucker and Knust (2006) contain a great deal of complexity results for scheduling problems. The same authors run a
website with complexity results for scheduling problems at http://www.mathematik.
uni-osnabrueck.de/research/OR/class/. This website is very detailed and contains
lots of up-to-date information.
Many dispatching rules and heuristics are covered by the works of Panwalkar and Iskander (1977), Blackstone et al. (1982), Haupt (1989), Rajendran and Holthaus (1999) or Jayamohan and Rajendran (2000). Some advanced dispatching rules
(including many global machine assignment rules) can be found in Urlings et al.
(2010).
7.7 Conclusions and Further Readings 189
A classical text for heuristics is the book of Morton and Pentico (1993). For a
much more detailed overview on exact algorithms, dispatching rules and heuristic
algorithms, the reader may consult the general books on scheduling: Conway et al.
(1967), Baker (1974), French (1982), Błazewicz et al. (2002), Brucker (2007), Pinedo
(2012), Baker and Trietsch (2009) and Pinedo (2009). Regarding the discussion of simulation studies, the reader might want to have a look at Bamberg and Baur (1998), Berenson et al. (2006) or Montgomery (2012), particularly for the technical/statistical issues. Finally, an interesting simulation study regarding correlations in processing times and the relative performance of algorithms is Watson et al. (2002).
References
Arora, S. and Barak, B. (2009). Computational Complexity: A Modern Approach. Cambridge Uni-
versity Press, Cambridge.
Baker, K. R. (1974). Introduction to Sequencing and Scheduling. John Wiley & Sons, New York.
Baker, K. R. and Trietsch, D. (2009). Principles of Sequencing and Scheduling. Wiley, New York.
Bamberg, G. and Baur, F. (1998). Statistik. Oldenbourg, Muenchen, Wien.
Berenson, M. L., Levine, D. M., and Krehbiel, T. C. (2006). Basic business statistics: Concepts and
applications. Pearson Education, Upper Saddle River, NJ.
Blackstone, Jr, J. H., Phillips, D. T., and Hogg, G. L. (1982). A state-of-the-art survey of dispatching rules for manufacturing job shop operations. International Journal of Production Research, 20(1):27–45.
Błazewicz, J., Ecker, K. H., Pesch, E., Schmidt, G., and Wȩglarz, J. (2002). Scheduling Computer
and Manufacturing Processes. Springer-Verlag, Berlin, second edition.
Błazewicz, J., Lenstra, J. K., and Rinnooy Kan, A. H. G. (1983). Scheduling Subject to Constraints:
Classification and Complexity. Discrete Applied Mathematics, 5:11–24.
Brucker, P. (2007). Scheduling Algorithms. Springer, New York, fifth edition.
Brucker, P. and Knust, S., editors (2006). Complex Scheduling. Springer-Verlag, Berlin.
Conway, R. W., Maxwell, W. L., and Miller, L. W. (1967). Theory of Scheduling. Dover Publications,
New York. Unabridged publication from the 1967 original edition published by Addison-Wesley.
Domschke, W., Scholl, A., and Voss, S. (1997). Produktionsplanung: Ablauforganisatorische
Aspekte. Springer, Berlin. 2nd, revised and upgraded edition.
Du, J. and Leung, J. Y. T. (1990). Minimising total tardiness on one machine is NP-hard. Mathematics
of Operations Research, 15(3):483–495.
French, S. (1982). Sequencing and Scheduling: An Introduction to the Mathematics of the Job-Shop.
Ellis Horwood Limited, Chichester.
Garey, M. R. and Johnson, D. S. (1979). Computers and Intractability. A Guide to the Theory of
NP-Completeness. W. H. Freeman and Company, New York.
Garey, M. R., Johnson, D. S., and Sethi, R. (1976). The Complexity of Flowshop and Jobshop
Scheduling. Mathematics of Operations Research, 1(2):117–129.
Grötschel, M. and Lovász, L. (1993). Geometric Algorithms and Combinatorial Optimization. Springer, Berlin. 2nd, corrected edition.
Haupt, R. (1989). A survey of priority rule-based scheduling. OR Spectrum, 11(1):3–16.
Jayamohan, M. S. and Rajendran, C. (2000). New dispatching rules for shop scheduling: a step
forward. International Journal of Production Research, 38(3):563–586.
Lawler, E. L., Lenstra, J. K., and Rinnooy Kan, A. H. G. (1993). Sequencing and Scheduling:
Algorithms and Complexity. In Graves, S. C., Rinnooy Kan, A. H. G., and Zipkin, P. H., editors,
8.1 Introduction
In the previous chapter we discussed that most scheduling models are hard to solve optimally and that there is little hope of obtaining optimal solutions for complex problems arising in real production settings. However, some simple problems might arise as subproblems or surrogate settings in complex shops. Additionally, exact algorithms help in closely and deeply understanding scheduling models and therefore have an intrinsic utility. Moreover, exact algorithms do exist for some special cases of scheduling problems and are worth studying. Finally, we have already mentioned that some exact procedures can be employed as approximate methods (e.g. by terminating them before optimality is reached), and can serve to inspire the construction of approximate methods for more complex problems.
After the introduction to exact methods for scheduling models carried out in Sect. 7.5.1, this chapter examines these methods in more depth. We discuss exact constructive algorithms as well as some widely employed enumerative exact algorithms, i.e. branch and bound approaches and dynamic programming procedures.
More specifically, in this chapter we
• discuss the most important exact constructive algorithms and their limitations
(Sect. 8.2),
• present exact algorithms based on integer programming (Sect. 8.3.1) and on
problem-specific branch and bound approaches (Sect. 8.3.2) and
• introduce the main concepts of dynamic programming employed in manufacturing
scheduling (Sect. 8.3.3).
Recall from Sect. 7.5.1 that exact constructive algorithms attempt to exploit specific
properties of the scheduling model in order to construct a solution which is guaranteed
to be optimal.
There are some cases for which finding exact constructive algorithms is, if not trivial, rather straightforward. For instance, for the single-machine model with makespan objective (model 1||Cmax), every schedule yields the same makespan if the operations are assigned left-shifted on the timescale; therefore every solution is optimal. For the single-machine model with total completion time objective (model 1||ΣCj), we have already discussed in Sect. 7.5.1 that sorting the jobs according to the shortest processing time rule (SPT rule) yields the optimal solution. This sorting can be performed with computational effort bounded by O(n · log n). Similarly, the single-machine problem with due dates and the objective of minimising maximum lateness (model 1||Lmax) can be solved easily by sorting the jobs according to the earliest due date rule (EDD rule).
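As a concrete illustration of these two rules, the following sketch (our own Python rendering, with hypothetical function names, not code from the book) sorts jobs by SPT and EDD and evaluates the corresponding objectives:

```python
def spt_sequence(p):
    """Job indices sorted by shortest processing time (optimal for 1||sum Cj)."""
    return sorted(range(len(p)), key=lambda j: p[j])

def total_completion_time(p, seq):
    """Sum of completion times of the single-machine sequence `seq`."""
    t, total = 0, 0
    for j in seq:
        t += p[j]      # completion time of job j
        total += t
    return total

def edd_sequence(d):
    """Job indices sorted by earliest due date (optimal for 1||Lmax)."""
    return sorted(range(len(d)), key=lambda j: d[j])

def max_lateness(p, d, seq):
    """Maximum lateness C_j - d_j over the sequence `seq`."""
    t, lmax = 0, float("-inf")
    for j in seq:
        t += p[j]
        lmax = max(lmax, t - d[j])
    return lmax
```

For instance, for processing times (6, 2, 4, 3), SPT yields the job order 2, 4, 3, 1 (0-based: 1, 3, 2, 0) with total completion time 2 + 5 + 9 + 15 = 31.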
For other models, deriving exact constructive algorithms is not trivial or straightforward. Indeed, apart from some single-machine layouts and some rather simplistic two-machine settings, there are not many other known exact constructive algorithms. However, exact constructive algorithms may play an important role in the development of approximate algorithms, as we will discuss in Chap. 9. Therefore, it is worth studying some of the most famous ones in the next subsections.
Probably the most famous case of an exact constructive algorithm is the well-known Johnson's rule (Johnson 1954) for solving the 2-machine permutation flow shop model with makespan criterion, or F2|prmu|Cmax. Note that in Conway et al. (1967), as well as in many other texts, it is actually demonstrated that permutation schedules suffice for obtaining the optimal makespan in 2-machine flow shops. Johnson's rule determines that a given job j should precede another job k in the sequence if the following expression is satisfied:

min{p1j, p2k} ≤ min{p1k, p2j}
In this way, jobs with a short processing time on the first machine go first in the sequence (this in turn minimises the idle time on the second machine). Jobs with a short processing time on the second machine are processed last, also to avoid idle times on the second machine. Note that in a 2-machine flow shop without further constraints, there are no idle times on the first machine. Therefore, minimising the makespan is equivalent to minimising idle times on the second machine. Johnson's algorithm is further described in Algorithm 1.
Table 8.1 Processing times for a six-job, two-machine flow shop example to be solved with Johnson's rule

Machine (i)    Job (j):   1    2    3    4    5    6
1                         8    4   10    5    3   12
2                        12    1    6    5    2    8
Fig. 8.1 Gantt chart with the resulting sequence (4, 1, 6, 3, 5, 2) of the Johnson's rule example; the makespan is 44
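Johnson's rule is short enough to sketch directly. The following Python rendering is our own (function names are ours), and it reproduces the sequence (4, 1, 6, 3, 5, 2) with makespan 44 on the instance of Table 8.1 (job indices are 0-based in the code):

```python
def johnson(p1, p2):
    """Johnson's rule for F2|prmu|Cmax.
    p1, p2: processing times on machines 1 and 2. Returns an optimal permutation:
    jobs with p1 <= p2 first (ascending p1), then the rest (descending p2)."""
    jobs = range(len(p1))
    set1 = sorted((j for j in jobs if p1[j] <= p2[j]), key=lambda j: p1[j])
    set2 = sorted((j for j in jobs if p1[j] > p2[j]), key=lambda j: -p2[j])
    return set1 + set2

def makespan_f2(p1, p2, seq):
    """Makespan of a permutation in a 2-machine flow shop."""
    c1 = c2 = 0
    for j in seq:
        c1 += p1[j]               # machine 1 is never idle
        c2 = max(c1, c2) + p2[j]  # machine 2 may wait for machine 1
    return c2
```

With p1 = (8, 4, 10, 5, 3, 12) and p2 = (12, 1, 6, 5, 2, 8), `johnson` returns the 0-based sequence [3, 0, 5, 2, 4, 1], i.e. jobs 4, 1, 6, 3, 5, 2, and `makespan_f2` evaluates it to 44, matching Fig. 8.1.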
This algorithm was shown to be optimal by Lawler (1973) for the model 1|prec|ν, with ν being any regular objective function of max-form. Let max gj be a regular max-form function. The algorithm is shown in Algorithm 2.
Add j* to π;
Delete j* from J;
Modify J′ to represent the new set of schedulable jobs;
return π
end
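A possible self-contained rendering of Lawler's algorithm is the following sketch (ours, not the book's code). In each iteration, among the jobs whose successors have all been scheduled, it picks the job minimising g_j evaluated at the total remaining processing time and places it last:

```python
def lawler(p, succ, g):
    """Lawler's algorithm for 1|prec|max g_j with a regular max-form objective.
    p: processing times; succ[j]: set of successors of job j;
    g: list of non-decreasing functions g_j(t)."""
    unscheduled = set(range(len(p)))
    P = sum(p)          # completion time of whichever job is placed last
    pi = []
    while unscheduled:
        # jobs whose successors are all already scheduled may go last
        candidates = [j for j in unscheduled if not (succ[j] & unscheduled)]
        jstar = min(candidates, key=lambda j: g[j](P))
        pi.append(jstar)
        unscheduled.remove(jstar)
        P -= p[jstar]
    pi.reverse()        # the sequence was built from the back
    return pi
```

With g_j(t) = t − d_j (i.e. maximum lateness) and no precedence constraints, the procedure reproduces the EDD sequence.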
Algorithm 3: Moore's Algorithm for 1||ΣUj (Moore 1968)
Input: Instance data
Output: Optimal sequence π
begin
    Let J = {1, . . . , n} verifying d1 ≤ d2 ≤ . . . ≤ dn;
    Let π = (1, . . . , n);
    Let Tardy = ∅;
    while ∃ l ∈ π such that Cl > dl do
        Let k be such that Ck > dk and ∀i < k, Ci ≤ di;
        Let j be such that j ≤ k and pj = max 1≤i≤k pi;
        π = π − {j}; Tardy = Tardy ∪ {j};
    end
    Append the jobs in Tardy (in any order) after π;
    return π
end
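The complete procedure can be sketched as follows (our own Python rendering of the Moore–Hodgson logic; names are ours):

```python
def moore(p, d):
    """Moore-Hodgson algorithm for 1||sum Uj (minimise the number of tardy jobs).
    Returns (sequence, number of tardy jobs)."""
    jobs = sorted(range(len(p)), key=lambda j: d[j])  # EDD order
    on_time, tardy, t = [], [], 0
    for j in jobs:
        on_time.append(j)
        t += p[j]
        if t > d[j]:
            # first tardy job found: drop the longest job scheduled so far
            k = max(on_time, key=lambda i: p[i])
            on_time.remove(k)
            tardy.append(k)
            t -= p[k]
    return on_time + tardy, len(tardy)
```

For p = (2, 3, 2, 1) and d = (3, 4, 5, 6), total work is 8 > 6, so at least one job must be tardy; the algorithm attains this minimum by moving job 2 (0-based: job 1) to the end.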
8.3 Enumerative Algorithms
In this section we consider two parallel machines scheduling models. Recall from
Sect. 3.2.3.2 that in this layout there is a set N of n independent jobs that have to
be processed on a set M of m machines arranged in parallel. The following (rather
standard) assumptions are adopted:
For now, we will consider that there are no further restrictions (later we will introduce sequence-dependent set-up times in this model), so this model can be denoted as R||Cmax. Note that this is, in reality, an assignment problem, since the processing order of the jobs assigned to a given machine does not alter the maximum completion time on that machine. Therefore, there are m^n possible solutions to the problem, one for each possible assignment. The R||Cmax problem has been shown to be NP-hard, since the special case with identical machines (denoted P||Cmax) was already demonstrated by Garey and Johnson (1979) to belong to that class. Even the 2-machine version (P2||Cmax) is NP-hard according to Lenstra et al. (1977).
As we can see, even the two identical parallel machines case is hard to solve. Furthermore, Lenstra et al. (1990) showed that no polynomial time algorithm exists for the general R||Cmax problem with a worst-case approximation ratio better than 3/2.
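To make the assignment nature of R||Cmax concrete, here is a minimal brute-force sketch (ours, viable only for toy instances) that enumerates all m^n job-to-machine assignments:

```python
from itertools import product

def rcmax_brute_force(p):
    """Exhaustive search for R||Cmax.
    p[i][j]: processing time of job j on machine i.
    Enumerates all m^n assignments, so only usable for tiny instances."""
    m, n = len(p), len(p[0])
    best, best_assign = float("inf"), None
    for assign in product(range(m), repeat=n):  # machine chosen for each job
        loads = [0] * m
        for j, i in enumerate(assign):
            loads[i] += p[i][j]          # order within a machine is irrelevant
        cmax = max(loads)
        if cmax < best:
            best, best_assign = cmax, assign
    return best, best_assign
```

For example, with two unrelated machines and processing times p = [[2, 3, 4], [4, 3, 2]], the optimal makespan is 5 (job 1 on machine 1, job 3 on machine 2, job 2 on either).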
A MILP model for R||Cmax would contain the following variables:

x_ij = 1, if job j is assigned to machine i; 0, otherwise
Cmax = maximum completion time or makespan

min Cmax   (8.1)

s. t.

Σ_{i=1}^{m} x_ij = 1   ∀ j ∈ N   (8.2)
Σ_{j=1}^{n} p_ij · x_ij ≤ Cmax   ∀ i ∈ M   (8.3)

x_ij ∈ {0, 1}   ∀ j ∈ N, ∀ i ∈ M   (8.4)
Note that constraint set (8.2) ensures that each job is assigned to exactly one machine. The second set of constraints (8.3) states that the makespan is equal to or greater than the sum of the processing times of the jobs assigned to each machine. This is repeated for all machines since, in this problem, the makespan is given by the machine that finishes all its assigned jobs latest. The last set of constraints (8.4) defines the nature of the variables.
The previous model was thoroughly tested by Fanjul-Peyró and Ruiz (2010) using
IBM CPLEX 11.0 and the results were very encouraging. From a complete set of
1,400 instances, and with a maximum CPU time limit of 2 h, CPLEX 11.0 was able
to solve instances of up to 1000 jobs and 50 machines in many cases, with optimality
rates reaching 88 % in some cases and maximum-observed optimality gaps for the
unsolved instances of 8.66 % and average-observed gaps usually below 2 %. As one
can see, although the MILP model does not result in optimal solutions in all cases,
the results are very good in most scenarios. This reinforces the aforementioned idea that although most scheduling problems are indeed NP-hard, usable solutions for sizeable instances are sometimes possible.
Let us add sequence-dependent set-up times to the previous setting. As the parallel machines are unrelated, we assume that there is a set-up time matrix per machine. Therefore, S_ijk denotes the machine-based sequence-dependent set-up time on machine i, i ∈ M, when processing job k, k ∈ N, after having processed job j, j ∈ N.
The resulting model can be denoted as R|Ssd|Cmax. One could think that adding sequence-dependent setup times to the problem is not a big deal. However, the implications of the setups are far-reaching as far as the model goes. Notice that when adding
setups, this is no longer just an assignment problem. Depending on how the different
jobs are sequenced among those assigned to a machine, the completion time of that
machine will differ. As a result, the sequence has to be determined by the model as
well, not just the assignment. The new MILP model involves the following decision
variables:
X_ijk = 1, if job j precedes job k on machine i; 0, otherwise
C_ij = completion time of job j on machine i
Cmax = maximum completion time

As one can see, there are n · (n − 1) · m binary variables, which quickly grow to unacceptable numbers in medium-sized problems. Note that X_ijj is not defined. However, the total number grows further because some dummy jobs are needed as well. Below are the details.
min Cmax   (8.5)

s. t.

Σ_{i∈M} Σ_{j∈{0}∪N, j≠k} X_ijk = 1   ∀ k ∈ N   (8.6)

Σ_{i∈M} Σ_{k∈N, k≠j} X_ijk ≤ 1   ∀ j ∈ N   (8.7)

Σ_{k∈N} X_i0k ≤ 1   ∀ i ∈ M   (8.8)

Σ_{h∈{0}∪N, h≠k, h≠j} X_ihj ≥ X_ijk   ∀ j, k ∈ N, j ≠ k, ∀ i ∈ M   (8.9)

C_ik + V · (1 − X_ijk) ≥ C_ij + S_ijk + p_ik   ∀ j ∈ {0} ∪ N, ∀ k ∈ N, j ≠ k, ∀ i ∈ M   (8.10)

C_i0 = 0   ∀ i ∈ M   (8.11)

C_ij ≥ 0   ∀ j ∈ N, ∀ i ∈ M   (8.12)

C_ij ≤ Cmax   ∀ j ∈ N, ∀ i ∈ M   (8.13)

X_ijk ∈ {0, 1}   ∀ j ∈ {0} ∪ N, ∀ k ∈ N, j ≠ k, ∀ i ∈ M   (8.14)
Constraint set (8.6) ensures that every job is assigned to exactly one machine and has exactly one predecessor. Notice the usage of the dummy job 0 in X_i0k, i ∈ M, k ∈ N. With constraint set (8.7) we limit the number of successors of every job to a maximum of one (a job could be the last one on a machine). Set (8.8) limits the number of successors of the dummy jobs to a maximum of one on each machine. With set (8.9), we ensure that jobs are properly linked on their machine: if a given job j is processed on a given machine i, a predecessor h must exist on the same machine. Constraint set (8.10) controls the completion times of the jobs at the machines. Basically, if a job k is processed on machine i after job j (i.e. X_ijk = 1), its completion time C_ik must be greater than the completion time of j, C_ij, plus the setup time between j and k and the processing time of k. If X_ijk = 0, the big constant V renders the constraint redundant. Sets (8.11) and (8.12) define completion times as 0 for dummy jobs and non-negative for regular jobs, respectively. Set (8.13) defines the makespan. Finally, set (8.14) defines the binary variables.
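The effect of the setups on a single machine's completion time can be illustrated with a small evaluation sketch (our own code, not part of the MILP); note how reversing a two-job sequence changes the completion time:

```python
def machine_completion_time(seq, p_i, s_i):
    """Completion time of machine i in R|Ssd|Cmax for the job sequence `seq`
    assigned to it. p_i[j]: processing time of job j on this machine;
    s_i[j][k]: sequence-dependent setup between consecutive jobs j and k.
    The first job is assumed to need no setup (the dummy job 0 of the MILP)."""
    t, prev = 0, None
    for k in seq:
        if prev is not None:
            t += s_i[prev][k]   # setup S_ijk incurred before processing k
        t += p_i[k]
        prev = k
    return t

def cmax_with_setups(assignment, p, s):
    """Makespan given per-machine sequences. assignment[i]: sequence on machine i."""
    return max(machine_completion_time(seq, p[i], s[i])
               for i, seq in enumerate(assignment))
```

With p_i = (3, 2) and setups s_i = [[0, 1], [2, 0]], the order (1, 2) finishes at 3 + 1 + 2 = 6, while (2, 1) finishes at 2 + 2 + 3 = 7: the sequence, not only the assignment, now matters.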
We can see how the sequence-dependent set-up times version of the unrelated parallel machines problem is significantly larger and more complex. As a matter of fact, and as shown in Vallada and Ruiz (2010), this model is hardly solvable in practice. The previous model, when tested also with CPLEX 11.0 under comparable settings, resulted in optimal solutions for problems of up to 8 jobs and 5 machines in all cases. However, only some problems of 10 and 12 jobs could be solved to optimality. These are extremely small problems.
Several MILP formulations of flow shop or job shop problems (especially with makespan objective) are due to Liao and You (1992), Manne (1960) and Wagner (1959). In this section, we sketch out three different MILP formulations for elementary models of the permutation flow shop with makespan objective, i.e. the models by Manne (1960), Wagner (1959) and Wilson (1989). We then analyse the different formulations in order to highlight their implications in terms of applicability beyond small-sized problems.
Manne’s model (adapted for permutation flow shop problems as in Tseng et al.
(2004):
The original version of the model by Manne is suited to solve the general job
shop problem. However, for reasons of simplification and comparability to other
approaches, we present the model in its permutation flow shop version which can be
found (Table 8.2), e.g. in Tseng et al. (2004).
Suppose a standard permutation flow shop problem setting with makespan objec-
tive. The model is formulated as follows:
Variables
D_{j,j′}  Binary variable with D_{j,j′} = 1 if job j is scheduled before (not necessarily immediately before) job j′, D_{j,j′} = 0 otherwise, j < j′
C_ij  Completion time of job j's operation on machine i
Cmax  Finishing time of the last operation on machine m

Parameters
p_ij  Processing time of job j on machine i
M  Large number

Model:

min Cmax   (8.15)

s. t.

C_1j ≥ p_1j   j = 1, . . . , n   (8.16a)

C_ij − C_{i−1,j} ≥ p_ij   i = 2, . . . , m; j = 1, . . . , n   (8.16b)

C_ij − C_ij′ + M · D_{j,j′} ≥ p_ij   i = 1, . . . , m; j, j′ = 1, . . . , n, j < j′   (8.17a)

C_ij − C_ij′ + M · (D_{j,j′} − 1) ≤ −p_ij′   i = 1, . . . , m; j, j′ = 1, . . . , n, j < j′   (8.17b)

Cmax ≥ C_mj   j = 1, . . . , n   (8.18)
This model is more or less straightforward. Equation (8.15) represents the objective function of makespan minimisation. Equations (8.16a, 8.16b) give the job availability constraints, i.e. they guarantee that an operation (i, j) of job j on machine i cannot be finished before its preceding operation (i − 1, j) of the same job on the preceding machine i − 1 is finished (there is no such preceding operation on machine 1 in (8.16a)) and its processing time p_ij has passed. Equations (8.17a, 8.17b) give the machine availability constraints. If job j precedes job j′ in a solution/permutation, constraint (8.17a) becomes a 'true' constraint, while if job j′ precedes job j, constraint (8.17b) is the relevant one; the respective other constraint becomes irrelevant in each situation. Constraints (8.18) give the lower limit of the makespan as the completion time of every single job j.
This model includes 0.5n(n − 1) binary variables D_{j,j′} and n · m continuous variables C_ij plus one continuous variable Cmax, i.e. in total n · m + 1 continuous variables. The number of constraints is nm (8.16a, 8.16b) plus mn(n − 1) (8.17a, 8.17b) plus n (8.18), i.e. in total n(mn + 1) constraints.
One characteristic of this adapted Manne model is that it only implicitly constructs the permutation, through the jobs' precedence relation imposed by the binary variables D_{j,j′}. In contrast, the following models of Wilson and Wagner explicitly define permutations as solutions via an assignment problem structure.
Wilson model:
By including an assignment problem, the Wilson model avoids the dichotomous constraints of the Manne model, which cover both cases of job precedence for all pairs of jobs, i.e. job j1 preceding job j2 and vice versa (Table 8.3).
Model:

min Cmax = S_{m[n]} + Σ_{j=1}^{n} p_mj · Z_{j[n]}   (8.19)

s. t.

Σ_{j=1}^{n} Z_{j[j′]} = 1   j′ = 1, . . . , n   (8.20a)
Variables
Z_{j[j′]}  Binary variable with Z_{j[j′]} = 1 if job j is assigned to position j′ of the permutation, Z_{j[j′]} = 0 otherwise
S_{i[j′]}  Starting time of the job in position j′ on machine i

Parameters
p_ij  Processing time of job j on machine i

Σ_{j′=1}^{n} Z_{j[j′]} = 1   j = 1, . . . , n   (8.20b)

S_{1[1]} = 0   (8.21a)

S_{1[j′]} + Σ_{j=1}^{n} p_1j · Z_{j[j′]} = S_{1,[j′+1]}   j′ = 1, . . . , n − 1   (8.21b)

S_{i[1]} + Σ_{j=1}^{n} p_ij · Z_{j[1]} = S_{i+1,[1]}   i = 1, . . . , m − 1   (8.21c)

S_{i[j′]} + Σ_{j=1}^{n} p_ij · Z_{j[j′]} ≤ S_{i+1,[j′]}   i = 1, . . . , m − 1; j′ = 2, . . . , n   (8.22)

S_{i[j′]} + Σ_{j=1}^{n} p_ij · Z_{j[j′]} ≤ S_{i,[j′+1]}   i = 2, . . . , m; j′ = 1, . . . , n − 1   (8.23)
The objective function, i.e. the makespan, is given by (8.19) as the sum of the starting time of the last operation on machine m plus its processing time. Observe that the sum contains only one 'true' summand, since only one job is assigned to the last position of the permutation and therefore all other binary variables in this sum are 0. Constraints (8.20a, 8.20b) represent the classical assignment constraints, i.e. every position j′ of the permutation has to be assigned exactly one job j (8.20a) and every job has to be assigned to exactly one position in the permutation (8.20b). Equations (8.21a, 8.21b, 8.21c) give the initialisation: the start of the first position on machine 1 (8.21a), the jobs' initialisation constraints on machine 1 (8.21b) and the machine initialisation constraints for the job in position [1] (8.21c). Constraints (8.22) represent the job availability constraints, while (8.23) give the machine availability constraints.
This model includes n² binary variables Z_{j[j′]} (n of which are non-zero in every solution) and nm continuous variables S_{i[j′]} plus one continuous variable Cmax, i.e. in total nm + 1 continuous variables. The number of constraints is 2n (8.20a, 8.20b) plus (n + m − 1) (8.21a, 8.21b and 8.21c) plus 2(m − 1)(n − 1) (8.22 and 8.23), i.e. in total 2mn + n − m + 1 constraints.
Wagner model (modified as in Tseng et al. 2004):
The Wagner model is probably the oldest MILP model for permutation flow shop scheduling. It combines the job availability constraints and the machine availability constraints of the Wilson model into one type of constraint (Table 8.4).
Model:

min Cmax = Σ_{j=1}^{n} p_mj + Σ_{j′=1}^{n} X_{m[j′]}   (8.24)

s. t.

Σ_{j=1}^{n} Z_{j[j′]} = 1   j′ = 1, . . . , n   (8.25a)

Σ_{j′=1}^{n} Z_{j[j′]} = 1   j = 1, . . . , n   (8.25b)

Σ_{j=1}^{n} p_ij · Z_{j[j′+1]} − Σ_{j=1}^{n} p_{i+1,j} · Z_{j[j′]} + X_{i[j′+1]} − X_{i+1,[j′+1]} + Y_{i[j′+1]} − Y_{i[j′]} = 0
   i = 1, . . . , m − 1; j′ = 1, . . . , n − 1   (8.26a)

Σ_{j=1}^{n} p_ij · Z_{j[1]} + X_{i[1]} − X_{i+1,[1]} + Y_{i[1]} = 0   i = 1, . . . , m − 1   (8.26b)
The objective function, i.e. the makespan, is given by (8.24) as the sum of the processing times of all jobs on the last machine (which is constant and decision-irrelevant) and the idle time on the last machine. (This means nothing but that the minimisation of the makespan is equivalent to minimising idle time on the last machine m.) Constraints (8.25a, 8.25b) represent the classical assignment constraints, as in the Wilson model above. Equations (8.26a, 8.26b) link the jobs in positions j′ and j′ + 1 on all machines: (8.26a) couples positions [1] through [n], while (8.26b) gives the initialisation for the job in position [1]. Equation (8.26b) indicates that
• the waiting time on machine i before the first job in the sequence, [1], starts on i, plus
• the processing time of [1] on i, i.e. p_i[1], plus
• this job's waiting time after it is finished on i
are equal to this first job's waiting time on the next machine i + 1. This can be seen easily by rearranging (8.26b) so that X_{i+1,[1]} is isolated on one side of the equation. Rearranging (8.26a) accordingly yields the expanded expression for the finishing time of the operation of the job in position j′ + 1 on machine i + 1 as follows:
Y_{i[j′]} + Σ_{j=1}^{n} p_{i+1,j} · Z_{j[j′]} + X_{i+1,[j′+1]} = X_{i[j′+1]} + Σ_{j=1}^{n} p_ij · Z_{j[j′+1]} + Y_{i[j′+1]}
This formula can be interpreted by means of Fig. 8.2. For reasons of clarity, the figure supposes that the underlying permutation is the identity permutation, i.e. j = [j] for all j = 1, . . . , n. The equation can be deduced from the time interval marked by the curly brace in Fig. 8.2, i.e. the situation after the operation of job j on machine i has been finished. (It should be mentioned that in many cases several of the X and Y variables sketched out in Fig. 8.2 will be zero for an efficient (with respect to makespan) schedule.)
This modification of Wagner's model includes n² binary variables Z_{j[j′]} (n of which are non-zero in every solution) and 2nm − (n − 1) continuous variables X_{i[j′]} and Y_{i[j′]}. The number of constraints is 2n (8.25a, 8.25b) plus n(m − 1) (8.26a, 8.26b), i.e. in total n(m + 1) constraints.
As already mentioned, we will not discuss the B&B optimisation approaches
with respect to these ‘pure’ MILP formulations of the problem under consideration.
However, we give a table that compares the different dimensions of these three models
since these model dimensions might be one crucial aspect for the solvability of the
models.
Fig. 8.2 Time calculation for the adapted Wagner MILP model for permutation flow shops relative
to operation of job j on machine i
As can be seen from Table 8.5, the number of binary variables is smallest in the Manne model. However, since the binary variables in the Wilson and Wagner models enter via an assignment-model structure, additional features of this problem type might be advantageous for the solution process. Therefore, the lower number of binary variables in the Manne model may not pay off in the solution process. The number of continuous variables can be neglected when evaluating the expected computational effort. However, the number of constraints might influence the performance of the model solution; with respect to this aspect, the Wagner formulation is the best of the three. For a more detailed numerical comparison of these models, see Tseng et al. (2004). Their numerical results indicate that the Wagner formulation performs best, the Wilson formulation is slightly worse and the Manne formulation performs worst, at least for exactly solvable small problem instances. This demonstrates that the same (formal) problem might be solved by different (formal) models, and that the performance of the model in terms of the computational effort to derive the same optimal solution might differ significantly from model to model.
Branch and bound approaches are one of the most widespread types of approaches to solve combinatorial optimisation problems exactly or, if prematurely stopped, in an approximate manner. We will not discuss their mathematical, formal aspects in detail but refer the interested reader to the relevant literature cited as further readings in Sect. 8.4.
The problem-oriented approaches to branch and bound procedures refer to the
underlying manufacturing scheduling problem itself. For these approaches we
present some aspects of how the design of the four components that could be
Table 8.5 Model sizes with respect to number of machines m and number of jobs n (see also Tseng et al. 2004)

                      Manne         Wilson           Wagner
Continuous variables  nm + 1        nm               2nm − (n − 1)
Binary variables      0.5n(n − 1)   n²               n²
Constraints           n(mn + 1)     2mn + n − m + 1  n(m + 1)
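The formulas of Table 8.5 can be turned into a small helper (our own sketch; the dictionary layout is ours) to compare model sizes for concrete instance dimensions:

```python
def model_sizes(n, m):
    """Variable and constraint counts of the three MILP formulations,
    following the formulas of Table 8.5 (n jobs, m machines)."""
    return {
        "Manne":  {"cont": n * m + 1,
                   "bin": n * (n - 1) // 2,
                   "constr": n * (m * n + 1)},
        "Wilson": {"cont": n * m,
                   "bin": n * n,
                   "constr": 2 * m * n + n - m + 1},
        "Wagner": {"cont": 2 * n * m - (n - 1),
                   "bin": n * n,
                   "constr": n * (m + 1)},
    }
```

For n = 20 jobs and m = 5 machines, for instance, Manne needs 2020 constraints, Wilson 216 and Wagner only 120, which illustrates why the constraint counts, rather than the continuous variables, are the interesting column of the table.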
(3) Bound computation: Bound computation is one of the most crucial items of a branch and bound approach. It substantially determines the quality of the approach, i.e. the ability to cut off subtrees of the overall decision tree before they are refined to the final leaves of complete solutions. In manufacturing scheduling (with a minimisation objective function), bounds are calculated to give a lower limit on the best solution/schedule which can be derived from a node (and all its successors). If this lower bound indicates that the best solution from this node and its successors is worse (or at least not better) than the objective function value of another, already known solution, this node can be 'cut off' and need not be considered further. Obviously, the earlier a subtree can be cut off (i.e. the better the bounds are), the smaller the computational effort.
(If only one optimal solution is to be determined, then 'not better' suffices as criterion. If all optimal solutions are to be determined, then 'worse' is required to cut off a subtree. In manufacturing scheduling problems, especially those with processing times of moderate variance or variability, this distinction might result in significantly different computational effort, since the number of optimal solutions might increase significantly with a decrease in the variance/variability of processing times.)
In manufacturing scheduling problems, at every node, lower bounds, based in principle on the objective function contribution of the partial solution S+, try to estimate the contribution of the remaining jobs/operations in S− to the final solution without computing all complete solutions (which can be generated departing from S+).
Sophisticated bounds for many branch and bound approaches are available in
the literature. To structure or even only to describe this variety of bounds would go
far beyond the scope of this book. Therefore, we will only present two common
bounds to the F| pr mu|Cmax problem, i.e. the above example, to demonstrate a basic
approach to bound computation in manufacturing scheduling. These bounds focus
on the remaining operations on the different machines (machine-based bound) or on
the jobs still to be scheduled (job-based bound). See Ignall and Schrage (1965) for
an early description of these bounds.
Denoting by Ci(S+) the completion time of the partial sequence S+ on machine i, the machine-based bound can be written as

LB = max_{i=1,...,m} [ Ci(S+) + Σ_{j∈S−} p_ij + min_{j∈S−} Σ_{k=i+1}^{m} p_kj ]

The second summand of the formula guarantees that every job not scheduled so far is processed on machine i (without idle time on i), while the third summand indicates that, after finishing all jobs on machine i, at least the job with the minimum remaining processing time has to be processed on all subsequent machines.
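A sketch of this machine-based bound for a partial permutation (our own rendering of the bound structure described above; names are ours, and S− is assumed non-empty):

```python
def machine_based_bound(p, scheduled, remaining):
    """Machine-based lower bound for the permutation flow shop makespan,
    in the spirit of Ignall and Schrage (1965).
    p[i][j]: processing times; `scheduled`: fixed partial sequence S+;
    `remaining`: unscheduled jobs S- (must be non-empty).
    For each machine i the bound adds (a) the completion time of S+ on i,
    (b) all remaining work on i and (c) the minimum tail of one remaining
    job on machines i+1..m-1, and takes the maximum over machines."""
    m = len(p)
    # completion times of the partial sequence on every machine
    c = [0] * m
    for j in scheduled:
        c[0] += p[0][j]
        for i in range(1, m):
            c[i] = max(c[i], c[i - 1]) + p[i][j]
    best = 0
    for i in range(m):
        work = sum(p[i][j] for j in remaining)
        tail = min(sum(p[k][j] for k in range(i + 1, m)) for j in remaining)
        best = max(best, c[i] + work + tail)
    return best
```

On a 2-machine instance with p = [[3, 2], [1, 4]] and nothing scheduled yet, the bound evaluates to 6, while the optimal makespan (Johnson's sequence) is 7: the bound is valid but, as usual, not tight.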
In Fig. 8.3, the bound values are shown within each node. If there is only one bound value, both values are the same. If two values are shown (only in ****, 3***, 23**), the first value gives the machine-based bound and the second the job-based bound, while the value in brackets is the worse of both.
Summarising with respect to bound calculation approaches, it should be mentioned that the basic ideas of bound computation schemes refer to two aspects.
1980) or branch and price methods are widely discussed in more mathematically oriented references on combinatorial optimisation. We will not discuss these approaches here since they more or less exclusively refer to the above-mentioned IP-based B&B approaches and emphasise the formal aspects of these problems. However, since these approaches have proven to be numerically rather efficient, they might become relevant also for the problem-oriented B&B approaches: if these newer formal approaches were to significantly outperform the problem-oriented approaches numerically, one might consider emphasising the mapping of manufacturing scheduling problems into formal IP problems (and solving them with the advanced formal IP techniques) instead of developing and refining more problem-specific approaches to manufacturing scheduling.
Other approaches try to identify good feasible solutions before a B&B procedure is started, e.g. by sophisticated constructive and/or improvement heuristics. This usually reduces the error bound of the B&B procedure at the very beginning (as compared with the simple upper bound on the objective function value which might be derived from a depth-first search approach), giving reason to stop the B&B procedure rather quickly if the remaining maximum error is below a pre-specified limit.
Another, heuristic, approach based on B&B is beam search, where, within a node under consideration, only a limited number of subsequent nodes, i.e. the most promising ones, are taken into account for the B&B procedure, while the other nodes are excluded from further consideration. Fixing the number of nodes to be explored further and determining the respective selection criterion are decisive for the quality of these approaches. Beam search approaches are usually combined with breadth-first B&B approaches to limit the number of open nodes to be stored simultaneously.
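A minimal beam search skeleton for the permutation flow shop with makespan objective along these lines (our own sketch; the selection criterion used here, the partial makespan, is only one of many possible choices) could look as follows:

```python
def beam_search_flowshop(p, width):
    """Heuristic beam search for F|prmu|Cmax: a breadth-first search in which
    only the `width` most promising partial sequences survive at each level.
    p[i][j]: processing time of job j on machine i."""
    m, n = len(p), len(p[0])

    def extend(c, j):
        """Machine completion times after appending job j to a partial schedule."""
        new = c[:]
        new[0] += p[0][j]
        for i in range(1, m):
            new[i] = max(new[i], new[i - 1]) + p[i][j]
        return new

    beam = [([], [0] * m)]          # (partial sequence, completion times)
    for _ in range(n):
        children = []
        for seq, c in beam:
            for j in range(n):
                if j not in seq:
                    children.append((seq + [j], extend(c, j)))
        # keep the `width` partial sequences with the smallest partial makespan
        children.sort(key=lambda node: node[1][-1])
        beam = children[:width]
    return beam[0]                  # best full sequence found and its completion times
```

With a beam width larger than the number of nodes per level the search becomes exhaustive; with a small width it degrades gracefully into a fast heuristic, trading optimality for effort.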
stage to be computed from the beginning stage (be it the first stage if forward calculation is used, or the last stage if backward calculation is executed) onwards. The (optimal sub-)decisions (or subdecision sequences) up to the previous stage remain unchanged, and only the decision of the current stage has to be linked with exactly these previous (sub-)decisions to get the best decisions up to the stage under consideration.
While in many dynamic programming applications the scale represents time and decisions might be taken at certain points in time (discrete dynamic programming approach) or continuously (continuous dynamic programming approach), in manufacturing scheduling mostly discrete approaches are used, and the discrete stages often represent the number of jobs/operations scheduled so far. On every stage, the different settings which might occur on this stage (i.e. the states on the stage) have to be identified. For a simple single-machine scheduling problem with n jobs and a forward recursion scheme, a stage might be defined by the number of jobs already scheduled (e.g. the stage index j′ − 1 indicates that j′ − 1 jobs have been scheduled already), while the maximum set of states on this stage consists of the (j′ − 1)! permutations of j′ − 1 jobs. Moving to stage j′ then means taking, one by one, every remaining job out of the n − (j′ − 1) so far non-scheduled jobs, constructing the expanded (sub-)sequences of length j′ and looking for the best combination for the specific job added.
We explain the approach by a very simple example problem, which tries to find the optimal solution for a single-machine problem with four jobs and flowtime objective and no further 'specialities'. (As is well known, this simple problem can be easily solved to optimality by using the shortest processing time rule. However, for the sake of simplicity of the demonstration of the approach, we stick to this trivial example and give hints on extensions later on.)
Suppose four jobs are to be scheduled, having processing times 6, 2, 4, and 3, respectively. The objective function is to minimise flowtime, i.e. the sum of the completion times of all jobs, Σj Cj. The stages are defined by the number of jobs j′ scheduled up to this stage, while the possible states on the stage are characterised by all subsequences of length j′ having a certain job k in the last position j′. We use a forward recursive approach. Then, the state transition from a sequence S_{j′−1} on stage j′ − 1 to a sequence including the so far non-scheduled job k in position j′ is S_{j′−1} → S_{j′} = (S_{j′−1}, k). The Bellman recursion equation per state on stage j′ is

CΣ(S_{j′} s.t. job k in position j′ of S_{j′}) =
    p_k   for j′ = 1
    min_{(S_{j′−1}, k) ∈ S_{j′}} { CΣ(S_{j′−1}) + Σ_{s∈S_{j′−1}} p_s + p_k }   for j′ = 2, . . . , n
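The recursion above can be implemented over (scheduled set, last job) states. The following exponential-time sketch (ours, purely illustrative, since SPT already solves this model) reproduces the optimal flowtime of 31 for the example instance:

```python
from itertools import combinations

def dp_flowtime(p):
    """Forward dynamic programming for 1||sum Cj over states (set S, last job k),
    mirroring the Bellman recursion above. Exponential in n."""
    n = len(p)
    # F[(S, k)] = minimum flowtime of a sequence over set S ending with job k
    F = {(frozenset([k]), k): p[k] for k in range(n)}
    for size in range(2, n + 1):
        for subset in combinations(range(n), size):
            S = frozenset(subset)
            load = sum(p[j] for j in S)   # completion time of the job placed last
            for k in S:
                F[(S, k)] = load + min(F[(S - {k}, j)] for j in S - {k})
    full = frozenset(range(n))
    return min(F[(full, k)] for k in range(n))
```

For p = (6, 2, 4, 3) the result is 31, matching the SPT sequence 2, 3, 4, 6 with completion times 2, 5, 9, 15. The state space (subset, last job) is what makes more general, non-linear objective functions tractable at all, at the price of exponential growth.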
Fig. 8.4 Dynamic programming approach for the example instance of the one-machine flowtime minimisation problem, a Results per stage, b Additionally indicating the calculation paths

Figure 8.4 shows the computational steps. In Fig. 8.4a, the states, i.e. the partial job sequences with a certain last job on each stage, are listed first, followed by the possible realisations of these subsequences. In italics, the flowtime values of every single possible subsequence are given; a '*' marks CΣ(S_{j′}), i.e. the optimal subsequence with minimum recursion value, and an additional '***' indicates the overall optimal solution (on stage 4). Figure 8.4b shows the same values but additionally gives the relevant state transitions considered in every step. Note that an infinite value (e.g. in xy2) indicates that there is no feasible sequence left to place job 2 in stage/position 3, since job 2 is already scheduled in positions 1 or 2 in all (!) optimal subsequences up to stage 2. Analogously, the same holds for xyz2 and xyz4.
As already mentioned, the example problem presented above can be easily solved
by just sorting the jobs according to the shortest processing time rule. For this type
of problem, a dynamic programming approach therefore is superfluous. However, if
the objective function is more complicated, the solution is not as easy and a dynamic
programming approach might become adequate. Early contributions concerning this
8.3 Enumerative Algorithms 213
Fig. 8.5 Dynamic programming calculation for the three machine, four jobs problem instance from
Sect. 8.3.2 for the permutation flow shop problem with flowtime objective
matter are by Held and Karp (1962), Rinnooy Kan et al. (1975) or Schrage and Baker (1978). While jobs' completion times influence the flowtime objective purely linearly, an initially constant and later linear influence on the objective function, as, e.g., in tardiness problems, results in problems which cannot be solved by polynomial approaches. Therefore, the above-mentioned papers and the dynamic programming approaches proposed therein allow rather general objective functions, with monotonicity (not necessarily strict monotonicity) in time of a job's objective function contribution being more or less the only relevant requirement for the application of these approaches. Also, other specific constraints, e.g. precedence relations between the operations, can be handled advantageously by dynamic programming approaches.
We will not deepen the discussion on dynamic programming in manufactur-
ing scheduling but give only some additional remarks on the application of these
approaches in manufacturing scheduling.
As in all dynamic programming approaches, the adequate definition of stages and their states, as well as the stage separability mentioned above, are the crucial aspects in setting up a dynamic programming procedure for the scheduling problem under consideration.
State separability may be guaranteed by an adequate separation of the set of jobs
or operations into operations/jobs scheduled so far and not scheduled so far.
The separability with respect to objective functions is often fulfilled for additive (or averaging) objective functions such as (mean) flowtime or (mean) tardiness. In contrast, separability for minimax objective functions cannot be expected in general. Therefore, e.g., makespan or maximum tardiness problems are often not well suited to be solved by means of dynamic programming approaches.
However, to demonstrate problems which might occur with respect to separa-
bility, we refer to the standard permutation flow shop with flowtime objective. We
214 8 Exact Algorithms
suppose the three machine, four jobs problem instance given as a B&B example in
Sect. 8.3.2. But now we suppose flowtime to be minimised. Performing the dynamic
programming approach as in the above example yields the results as in Fig. 8.5.
As can be seen, the dynamic programming approach yields two ‘optimal’ permu-
tations, i.e. 3421 and 3124, with a flowtime of 95. However, the true optimal solution
for this problem instance is 3412 with a flowtime of 94. This solution is not gen-
erated by the dynamic programming approach because the initial partial sequence
341 of the overall optimal solution is excluded from further consideration on stage
3 since 321 gives a better solution for the state xy1. Therefore, even in this simple
example, the separability requirement of dynamic programming is not fulfilled and
dynamic programming cannot be applied to this kind of problem—at least not with
this definition of stages.
Summarising, dynamic programming (in manufacturing scheduling) can be classified as a limited enumeration approach, like B&B. In contrast to B&B approaches, dynamic programming requires certain additional and specific structural features of the manufacturing scheduling problem considered, i.e. separability. If these requirements are fulfilled, dynamic programming will usually be competitive with the respective B&B methods. However, on the one hand, dynamic programming is also a non-polynomial enumerative approach which is, in general, only applicable to small problem instances if the optimal solution is to be determined. On the other hand, as with B&B, dynamic programming can be stopped prematurely and/or combined with other heuristic approaches.
References
Conway, R. W., Maxwell, W. L., and Miller, L. W. (1967). Theory of Scheduling. Dover Publications,
New York. Unabridged publication from the 1967 original edition published by Addison-Wesley.
Crowder, H. and Padberg, M. W. (1980). Solving large-scale symmetric travelling salesman prob-
lems to optimality. Management Science, 26(5):495–509.
Fanjul-Peyró, L. and Ruiz, R. (2010). Iterated greedy local search methods for unrelated parallel
machine scheduling. European Journal of Operational Research, 207(1):55–69.
French, S. (1982). Sequencing and Scheduling: An Introduction to the Mathematics of the Job-Shop.
Ellis Horwood Limited, Chichester.
Garey, M. R. and Johnson, D. S. (1979). Computers and Intractability. A Guide to the Theory of
NP-Completeness. W. H. Freeman and Company, New York.
Gupta, J. N. D. and Dudek, R. A. (1971). Optimality Criteria for Flowshop Schedules. IIE Trans-
actions, 3(3):199–205.
Held, M. and Karp, R. M. (1962). A dynamic programming approach to sequencing problems.
Journal of the Society for Industrial and Applied Mathematics, 10(1):196–210.
Ignall, E. and Schrage, L. E. (1965). Application of the branch-and-bound technique to some flow shop problems. Operations Research, 13(3):400–412.
Johnson, S. M. (1954). Optimal two- and three-stage production schedules with setup times included.
Naval Research Logistics Quarterly, 1:61–68.
Lawler, E. L. (1973). Optimal Sequencing of a Single Machine Subject to Precedence Constraints.
Management Science, 19(5):544–546.
Leisten, R. (1990). Flowshop sequencing problems with limited buffer storage. International Jour-
nal of Production Research, 28(11):2085.
Lenstra, J. K., Rinnooy Kan, A. H. G., and Brucker, P. (1977). Complexity of machine scheduling
problems. Annals of Discrete Mathematics, 1:343–362.
Lenstra, J. K., Shmoys, D. B., and Tardos, E. (1990). Approximation algorithms for scheduling
unrelated parallel machines. Mathematical Programming, 46(3):259–271.
Liao, C.-J. and You, C.-T. (1992). An improved formulation for the job-shop scheduling problem.
Journal of the Operational Research Society, 43(11):1047–1054.
Manne, A. S. (1960). On the job-shop scheduling problem. Operations Research, 8(2):219–223.
Moore, J. M. (1968). An n job, one machine sequencing algorithm for minimizing the number of late jobs. Management Science, 15(1):102–109.
Nemhauser, G. L. and Wolsey, L. A. (1988). Integer and combinatorial optimization. Wiley, New
York.
Pinedo, M. L. (2012). Scheduling: Theory, Algorithms, and Systems. Springer, New York, fourth
edition.
Potts, C. N. (1980). An adaptive branching rule for the permutation flow-shop problem. Elsevier,
Amsterdam.
Rinnooy Kan, A., Lageweg, B., and Lenstra, J. (1975). Minimizing total costs in one-machine
scheduling. Operations Research, 23(5):908–927.
Schrage, L. E. and Baker, K. R. (1978). Dynamic programming solution of sequencing problems
with precedence constraints. Operations Research, 26(3):444–449.
T’Kindt, V. and Billaut, J.-C. (2006). Multicriteria Scheduling: Theory, Models and Algorithms.
Springer, New York, second edition.
Tseng, F. T., Stafford, Jr., E. F., and Gupta, J. N. D. (2004). An empirical analysis of integer programming formulations for the permutation flowshop. OMEGA, The International Journal of Management Science, 32(4):285–293.
Vallada, E. and Ruiz, R. (2010). Genetic algorithms for the unrelated parallel machine scheduling
problem with sequence dependent setup times. European Journal of Operational Research. In
review.
Wagner, H. M. (1959). An integer linear-programming model for machine scheduling. Naval
Research Logistics Quarterly, 6(2):131–140.
Wilson, J. M. (1989). Alternative formulations of a flowshop scheduling problem. Journal of the Operational Research Society, 40(4):395–399.
Chapter 9
Approximate Algorithms
9.1 Introduction
In this chapter we deal with approximate algorithms. In principle, one may think that approximate algorithms are not a good option if exact approaches exist, and indeed we already discussed that their widespread use is basically justified by the computational complexity inherent to most manufacturing scheduling models. As mentioned in Sect. 7.5.2, approximate algorithms are usually divided into heuristics and metaheuristics, the main difference between them being that the former are specifically tailored to a particular model, while the latter constitute more generic procedures. This difference (although not entirely unambiguous) is very important when it comes to describing both approaches, as we intend to do in this chapter: while it is clear that it is possible to discuss the templates of the different metaheuristics that can be employed for manufacturing scheduling models, such a thing is not possible for heuristics, as they are heavily problem-dependent. Instead, we will first illustrate how heuristic algorithms work by presenting perhaps the two most popular heuristics for manufacturing scheduling, i.e. the NEH heuristic (Nawaz et al. 1983) for flowshop scheduling with makespan objective, and the Shifting Bottleneck Heuristic (Adams et al. 1988) for job shop scheduling. Then, in a sort of 'reverse engineering' attempt, we will discuss the main ideas and concepts behind (these and other) heuristics. We believe that this attempt to structure the behaviour of heuristics may be useful for the reader when faced with the task of designing new heuristics for existing or new scheduling models.
More specifically, in this chapter we
• present two well-known heuristics in order to grasp the general mechanics of
heuristic construction (Sect. 9.2),
• give some description of (some) underlying ideas of the approximate methods
under consideration (Sect. 9.3),
• present the main concepts behind metaheuristics, and discuss the most widely
employed in the manufacturing scheduling context (Sect. 9.4), and
• briefly introduce further approximation techniques (Sect. 9.5).
In this section, two heuristics are presented which, on one hand, have turned out to be rather efficient for the scheduling problems they were created for. On the other hand, the basic ideas of these heuristics have been the starting point for many adaptations to other heuristics and/or other problem classes.
First, in Sect. 9.2.1, we consider the heuristic by Nawaz et al. (1983), which basically resulted from a master's thesis in the early 1980s and has since become something like a benchmark for the respective type of problems. It deals with the standard permutation flow shop problem with makespan objective. Afterwards, in Sect. 9.2.2, we describe a basic version of the Shifting Bottleneck Heuristic (SBH), first published by Adams et al. (1988). This heuristic is designed for the standard job shop problem with makespan objective.
By and large, one of the best-known and most highly cited heuristics for the flow shop problem with makespan criterion is the NEH heuristic. This method has been widely recognised as being very high-performing, flexible, and efficient. Furthermore, NEH is actively used as a seed sequence for metaheuristic techniques (these will be discussed later in this chapter). Some examples where NEH is used as a seed sequence are Tabu Search, Simulated Annealing, Genetic Algorithms, Iterated Local Search, Ant Colony Optimisation and many other metaheuristic algorithms.
The NEH heuristic is dedicated to solving the basic permutation flow shop scheduling problem with makespan minimisation as objective function, i.e. Fm|prmu|Cmax. It reduces the exponential effort of completely enumerating all job permutations to a polynomial effort by sorting the jobs according to some importance index (see below) and then determining, job by job, each job's best position while keeping the relative sequence of the already scheduled jobs unchanged.
Due to the importance of the NEH, we describe it in detail. The NEH procedure
is based on the idea that jobs with high sum of processing times on all the machines
should be scheduled as early as possible. NEH is explained in Algorithm 1.
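Since Algorithm 1 is not reproduced here, the procedure just described can be sketched as follows: a plain O(n³m) version with our own naming, where p[job][machine] holds the processing times.

```python
def makespan(seq, p):
    """Cmax of a permutation: p[j][i] = processing time of job j on machine i."""
    m = len(p[0])
    c = [0] * m                        # running completion times per machine
    for j in seq:
        c[0] += p[j][0]
        for i in range(1, m):
            c[i] = max(c[i], c[i - 1]) + p[j][i]
    return c[-1]

def neh(p):
    n = len(p)
    # sort jobs by decreasing total processing time P_j
    order = sorted(range(n), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for k, job in enumerate(order[1:], start=2):
        # try all k insertion positions, keep the best partial sequence
        seq = min((seq[:pos] + [job] + seq[pos:] for pos in range(k)),
                  key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)
```

On a small two-machine instance such as p = [[3, 2], [1, 4], [2, 3]] this sketch returns the sequence (1, 2, 0) with Cmax = 10.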
Computing P_j has a complexity of O(nm), and sorting the n jobs one of O(n log n). Most CPU time is needed in the main loop of the heuristic, where we have a loop of n − 2 steps and, at each step k, we carry out k insertions of job k into a partial sequence that contains k − 1 jobs, and for each insertion we have to calculate the Cmax value of k jobs (including the inserted one). In total, there are n(n + 1)/2 − 3 insertions. Among these, only in the last iteration do we have to evaluate n complete sequences. As a result, the computational complexity of the whole method is O(n³m). This method can be slow for large instances. However, when inserting the k-th job in position, let us say, j, all C_{i,h}, h = j − 1, j − 2, …, 1 were already calculated in the previous insertion, and we do not need to recalculate these values. A similar approach was
9.2 Two Sample Heuristics 219
Fig. 9.1 First iteration of the NEH heuristic for the example of Table 9.1. a Sequence (1, 5) with
a Cmax value of 173. b Sequence (5, 1) with a Cmax value of 160
followed by Taillard (1990) and as a result the complexity of all insertions in a given
step can be calculated in O(nm) by a series of matrix calculations. This reduces
the total complexity of NEH to O(n²m). Such an improved method is often referred to as NEHT or NEH with Taillard's accelerations.
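A sketch of this idea, under our own naming and not Taillard's original code: heads e, tails q, and the relative completion times f of the inserted job give every insertion's makespan in one O(nm) pass.

```python
def taillard_best_insertion(seq, k, p):
    """Evaluate all len(seq)+1 insertions of job k into the partial sequence
    seq in O(n*m) overall (a sketch of Taillard's accelerations);
    p[job][machine] holds the processing times."""
    n, m = len(seq), len(p[0])
    # heads: e[pos][i] = completion time of seq[pos-1] on machine i
    e = [[0] * m for _ in range(n + 1)]
    for pos in range(1, n + 1):
        for i in range(m):
            e[pos][i] = max(e[pos - 1][i], e[pos][i - 1] if i else 0) + p[seq[pos - 1]][i]
    # tails: q[pos][i] = time from the start of seq[pos-1] on machine i to the end
    q = [[0] * m for _ in range(n + 2)]
    for pos in range(n, 0, -1):
        for i in range(m - 1, -1, -1):
            q[pos][i] = max(q[pos + 1][i], q[pos][i + 1] if i < m - 1 else 0) + p[seq[pos - 1]][i]
    best_pos, best_cmax = None, None
    for pos in range(1, n + 2):        # candidate insertion positions
        f, cmax = [0] * m, 0           # f[i]: earliest completion of job k on machine i
        for i in range(m):
            f[i] = max(f[i - 1] if i else 0, e[pos - 1][i]) + p[k][i]
            cmax = max(cmax, f[i] + q[pos][i])
        if best_cmax is None or cmax < best_cmax:
            best_pos, best_cmax = pos - 1, cmax   # 0-indexed insertion position
    return best_pos, best_cmax
```

Calling this once per NEH step replaces the k separate makespan evaluations, which is what brings the overall complexity down to O(n²m).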
Let us work with an example with five jobs and four machines. Actually, this
example was already given in Chap. 3, Table 3.3. However, in order to avoid flipping
pages back and forth, the table is replicated in Table 9.1.
We calculate in the first step the total processing times P_j, which result in: P_1 = 31 + 41 + 25 + 30 = 127, P_2 = 111, P_3 = 98, P_4 = 62 and P_5 = 114. Then we arrange the jobs in decreasing order of P_j and the resulting list is J = (1, 5, 2, 3, 4).
The first two jobs are extracted. Let us see how the two possible schedules, (1, 5) and (5, 1), end up. The corresponding Gantt charts appear in the two subfigures of Fig. 9.1. We can see that Cmax((5, 1)) < Cmax((1, 5)). Therefore, the partial (relative) sequence (5, 1) is fixed for the next iteration. There are three positions in which to insert the next job 2 and, therefore, three possible partial sequences are derived, which are depicted in the subfigures of Fig. 9.2. As we can see, the partial sequence with the least Cmax value is (2, 5, 1). It is selected for the next iteration. There are four possible partial sequences resulting from inserting job 3 in the four possible positions. The resulting schedules, along with their makespan values, are shown in Fig. 9.3. The last of the previous partial sequences, (2, 5, 1, 3), is also the one with the least makespan value; it is selected for the next iteration, where job 4 remains to be inserted. There are k = n = 5 possible positions in which to place this last job 4. These sequences are now full sequences, since each one contains n jobs. All of them are shown in Fig. 9.4. In this last step, the first sequence is the one with the minimum Cmax value and, therefore, it is the result of the application of the NEH procedure to the example: (4, 2, 5, 1, 3). This sequence has a Cmax value of 213.
The NEH has been applied to other objectives and even to other layouts. While it has been shown that it is not as effective and efficient for these other problems, it is still competitive.
In its original version, the SBH is dedicated to solving the basic job shop scheduling problem with makespan minimisation as objective function, i.e. Jm||Cmax. This problem is known to be NP-hard.
The basic idea is as follows: fix one machine as the bottleneck of the manufacturing system under consideration. Then, for this machine and for every job, it is considered what has to be accomplished before the job is processed on this machine (i.e. previous operations of this job on other machines). This results in release times for all jobs on the machine under consideration. Additionally, for this machine and
Fig. 9.2 Iteration for k = 3 of the NEH heuristic for the example of Table 9.1. a Sequence (2, 5, 1)
with a Cmax value of 191. b Sequence (5, 2, 1) with a Cmax value of 203. c Sequence (5, 1, 2) with
a Cmax value of 197
for every job, it is considered what has to be accomplished after the job is processed on this machine (i.e. subsequent operations of this job on other machines). This results in due dates for all jobs on the machine under consideration. Since the due dates might be exceeded by a current solution, the corresponding 1|rj, dj|Lmax problem is (heuristically) solved for the machine under consideration. The resulting
Fig. 9.3 Iteration for k = 4 of the NEH heuristic for the example of Table 9.1. a Sequence (3, 2, 5, 1)
with a Cmax value of 237. b Sequence (2, 3, 5, 1) with a Cmax value of 255. c Sequence (2, 5, 3, 1)
with a Cmax value of 218. d Sequence (2, 5, 1, 3) with a Cmax value of 199
Fig. 9.4 Final iteration (k = 5 = n) of the NEH heuristic for the example of Table 9.1. a Sequence (4, 2, 5, 1, 3) with a Cmax value of 213. b Sequence (2, 4, 5, 1, 3) with a Cmax value of 228. c Sequence (2, 5, 4, 1, 3) with a Cmax value of 217. d Sequence (2, 5, 1, 4, 3) with a Cmax value of 221. e Sequence (2, 5, 1, 3, 4) with a Cmax value of 220
job sequence is kept as the SBH's solution job sequence for the machine under consideration. 1|rj, dj|Lmax itself is NP-hard. However, very good heuristics exist for this problem.
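One such heuristic can be sketched as a non-delay list-scheduling rule (in the spirit of Schrage's earliest-due-date rule among released jobs); it is illustrative only, does not guarantee optimality, and all names are our own.

```python
def edd_schedule(r, p, d):
    """List-scheduling heuristic for 1|rj,dj|Lmax: whenever the machine
    frees up, start the released job with the earliest due date (EDD).
    Returns (sequence, Lmax)."""
    jobs = set(range(len(p)))
    t, lmax, seq = 0, float("-inf"), []
    while jobs:
        ready = [j for j in jobs if r[j] <= t]
        if not ready:                        # machine idles until the next release
            t = min(r[j] for j in jobs)
            continue
        j = min(ready, key=lambda x: d[x])   # earliest due date among released jobs
        t += p[j]
        lmax = max(lmax, t - d[j])
        seq.append(j)
        jobs.remove(j)
    return seq, lmax
```

For instance, with releases (0, 1, 3), processing times (3, 2, 1) and due dates (5, 4, 6) the rule sequences the jobs as (0, 1, 2) with Lmax = 1.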
Since the bottleneck machine of a system is usually not identifiable in advance and
might even change from solution to solution, the approach described above is applied
successively to all machines while fixing the job sequences on those machines which
have already been considered. The sequence of machine consideration is chosen
appropriately (see the following example). The procedure of the SBH is described
in Algorithm 2.
We exemplify the SBH by means of a simple example instance with three machines
and four jobs. Suppose the machine sequences per job and the processing times are
given by Tables 9.2 and 9.3.
We demonstrate the approach by means of the disjunctive graph representation
of the problem (see also Sect. 1.5.1).
The disjunctive graph for this problem is given by Fig. 9.5.
The nodes in Fig. 9.5 represent the respective operation of J_j on M_i, with the processing time noted in the respective circle. An initial node A and a final node Ω are added. The black conjunctive (uni-directional) arcs represent a job's machine sequence, which is given and fixed. The purple disjunctive (bi-directional) arcs represent the possible relative sequences of each two jobs on a specific machine. The
arcs represent the finishing conditions (finishing of jobs). Minimizing the makespan
in the corresponding job shop problem means to transfer every disjunctive arc into
a conjunctive one by giving it a uni-directional orientation (and maintaining logic
feasibility of the resulting sequence, e.g. no cycles are allowed) so that the longest
path from node A to the final node Ω in the network (whose length corresponds to
the makespan of the solution derived) is of minimum length.
Initialization:
Initialize the set of already scheduled machines: SM = ∅, i.e. the set of unscheduled machines is UM = (1, 2, 3).
Eliminate all disjunctive arcs from the overall problem's disjunctive graph, i.e. the constraints with respect to job sequences on each machine. (In a real-world interpretation, this means that all jobs could be processed in parallel on every machine, assigning every machine (for the time being) an unlimited capacity.)
Determine the longest path in the resulting network (see Fig. 9.6). In Fig. 9.6,
the green numbers assigned to every node indicate the earliest starting time of the
respective operation while the red numbers represent the latest finishing time of this
operation. These time values can be easily computed by the standard forward and
backward calculation scheme well-known from time planning in networks.
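This forward/backward scheme can be sketched for a generic acyclic operation network; the dict-based representation and all names are our own.

```python
from graphlib import TopologicalSorter  # Python 3.9+

def forward_backward(duration, preds):
    """CPM-style pass on an acyclic operation network:
    earliest start = max over predecessors' earliest finishes;
    latest finish = min over successors' latest starts, anchored at Cmax."""
    order = list(TopologicalSorter(preds).static_order())
    es = {v: 0 for v in order}                     # earliest start times
    for v in order:
        for u in preds.get(v, ()):
            es[v] = max(es[v], es[u] + duration[u])
    cmax = max(es[v] + duration[v] for v in order) # longest path length = makespan
    lf = {v: cmax for v in order}                  # latest finish times
    for v in reversed(order):
        for u in preds.get(v, ()):
            lf[u] = min(lf[u], lf[v] - duration[v])
    return es, lf, cmax
```

On a simple chain a → b → c with durations 3, 2, 4 this yields earliest starts 0, 3, 5, latest finishes 3, 5, 9 and a makespan of 9, mirroring the green and red numbers in the figures.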
The Cmax value of this solution is the maximum of every job’s last operation
finishing time, i.e.
The (green) earliest starting time of each operation is now interpreted as the release
time r j of the operation when it is considered on its respective machine. The (red)
latest finishing time of each operation accordingly is interpreted as the due date d j
of the operation when it is considered on its respective machine.
Iteration 1: SM = ∅, UM = (1, 2, 3)
Determine min Lmax for all single machine problems incl. release and due dates, i.e. solve 1|rj, dj|Lmax for all so far unscheduled machines. As already
Fig. 9.7 Solving 1|rj, dj|Lmax for M2 in the first iteration by complete enumeration
mentioned this problem is NP-hard, but good heuristics exist for it. Here, since there
are only 3! = 6 (on M2 ) or 4! = 24 (on M1 and M3 ) possible solutions for each of
these problems, we use complete enumeration to solve the problems exactly:
Cmax(SM) = max(11, 11, 11, 5) = 11, as well as the time value at Ω.
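Such a complete enumeration is only sensible for the tiny subproblems arising here (n! grows quickly); a sketch with our own naming:

```python
from itertools import permutations

def enumerate_lmax(r, p, d):
    """Solve 1|rj,dj|Lmax exactly by complete enumeration, as done for
    the small single-machine subproblems of the SBH example."""
    best_seq, best_lmax = None, None
    for seq in permutations(range(len(p))):
        t, lmax = 0, float("-inf")
        for j in seq:
            t = max(t, r[j]) + p[j]       # wait for release, then process
            lmax = max(lmax, t - d[j])
        if best_lmax is None or lmax < best_lmax:
            best_seq, best_lmax = list(seq), lmax
    return best_seq, best_lmax
```

For example, releases (0, 2, 0), processing times (2, 1, 3) and due dates (3, 4, 8) give the optimal sequence (0, 1, 2) with Lmax = −1, i.e. no due date is violated.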
Fig. 9.8 Reduced network in the SBH heuristic (SM = (1), UM = (2, 3))
Cmax (SM) = max (12, 12, 12, 6) = 12, as well as the time value at Ω.
Determine min Lmax for all single machine problems incl. release and due dates, i.e. solve 1|rj, dj|Lmax for all so far unscheduled machines.
Fig. 9.9 Reduced network in the SBH heuristic (SM = (1, 2), UM = (3))
Fig. 9.10 Final network in the SBH heuristic (SM = (1, 2, 3), UM = ∅)
The result can be seen in Fig. 9.10. The machine is uniquely determined, but the respective sequence is not unique (4213 and 4231). Here we choose 4231 on M3 and update SM = (1, 2, 3), UM = ∅.
Performing forward and backward calculation of earliest starting times and latest
finishing times of every operation yields the green and red numbers at every node in
Fig. 9.10.
The longest path gives the makespan of the SBH application to the given job
shop problem which is the maximum of every job’s last operation finishing time in
Cmax (SM) = max (12, 12, 12, 6) = 12, as well as the time value at Ω.
Since no unscheduled machines remain (UM = ∅), the SBH procedure is finished. (Note that it is only by chance that the sequence of considering the machines was 1-2-3 in this instance. In general, the sequence of machine indexes over the iterations need not be continuously increasing.)
Finally, we have obtained a complete solution to the original job shop problem instance with makespan minimization which is, in our example instance, also the true optimal solution to this instance.
The solution data from the SBH can be easily deduced from Fig. 9.10.
Job sequences on every machine:
M1 : 1432
M2 : 231
M3 : 4231.
Finishing times of every operation:
T(i, j)   J1   J2   J3   J4
M1         3   12    9    5
M2         9    2    6    –
M3        12    6   10    3
Once the detailed description of two representative heuristics has been presented,
in this section we will outline some of the main ideas behind many heuristics for
manufacturing scheduling. We first give some general ideas which are then developed
in several subsections. These general ideas are:
• Relaxation. A first idea of scheduling procedures is to relax the model under consideration until the remaining formal problem becomes tractable. Afterwards, the solution of the formal problem is adjusted and modified somehow into a solution of the real-world problem. This approach, as is well known, is common to almost all model building, optimisation-oriented or not, in almost every real-world setting.
• Decomposition. A second type is to decompose the problem under consideration until the remaining partial problems become tractable, to solve these partial
9.3 Some Ideas Behind Heuristics for Manufacturing Scheduling 231
other hand, these ideas are not necessarily confined to the manufacturing scheduling
field. Finally, some of these ideas could be incorporated into metaheuristics.
Some specific guidelines can be constructed upon the general ideas outlined above. They start with the consideration of the single operation as the basic unit of a scheduling model, thus ignoring the possibility of interrupting operations and thereby generating sub-operations. More specifically, we will assume that an operation
• belongs to a job and requires a machine, i.e. it combines a job and a machine,
• possibly requires additional resources,
• cannot be started until certain preceding operations (usually of the same job) have been finished (or must be finished before its successors can start),
• has to be assigned to one or more (consecutive) time periods,
• ...
Model-based complexity reduction according to the framework discussed in
Sect. 7.3.4 therefore implies, implicitly or explicitly,
• reduction of the number of operations to be considered in the reduced model,
• complexity-related modification of some or all operations themselves,
• omission or simplification/relaxation of (resource) constraints,
• ...
With these concepts in view some ways to reduce this complexity appear. Some
of them are discussed in the next subsections.
An obvious but, at first glance, not very manageable guideline for determining the operations which can be excluded from the simplified model is that these operations should not contribute significantly to the quality of the solution finally derived for the original model. However, there are at least three interpretations of this guideline which might be effective and which are used in model simplification.
First, a machine-oriented interpretation of this guideline proposes to completely
exclude non-relevant machines in the simplified model, i.e. to simplify the model
by omitting all operations of all jobs on these machines—and only consider the
remaining model with significantly less machines. The choice of the machines to be
excluded will often be based on their (non-) bottleneck property in the manufacturing
system considered. Bottleneck machines will be preserved in the simplified model
while non-bottleneck machines are deleted. Especially, if the number of machines in
the simplified model becomes very low (e.g. one or two), the remaining simplified
model might be solved with specialised algorithms or heuristics. A typical example
of this idea is the SBH already discussed in Sect. 9.2.2. In our context, one of this
approach’s basic ideas is to identify the bottleneck machine in advance and then con-
fine the simplified model to a one-machine problem which only considers exactly
this bottleneck machine.
However, to include this machine's interaction with the other machines, on one hand a release time is included for every job which incorporates the time needed for all of the job's previous operations or, better, the job's expected earliest availability for the bottleneck machine under consideration. On the other hand, a follow-up time is included for every job which represents the time needed to finish all its subsequent operations. (As we have already seen, the SBH is much more elaborate. In particular, its 'shifting' property is used to apply the idea iteratively to different machines which, from an application point of view, is highly relevant if there are several bottlenecks or a hierarchy of machine importance in the system. Several bottlenecks are not uncommon if the capacities of the machines in the manufacturing system under consideration are balanced, at least from a mid-term or long-term perspective.)
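The reduction just described can be sketched in a few lines; the data layout (a machine route per job, per-operation times) and all names are our own, and the workload-based bottleneck choice is only one plausible selection rule.

```python
def bottleneck_submodel(routes, ptimes):
    """Reduce a job shop to a single-machine model on the bottleneck:
    routes[j] = machine sequence of job j, ptimes[j][pos] = processing
    time of the pos-th operation of job j. The bottleneck is taken as
    the machine with the largest total workload; heads and tails
    approximate the influence of the omitted machines."""
    workload = {}
    for j, route in enumerate(routes):
        for pos, mach in enumerate(route):
            workload[mach] = workload.get(mach, 0) + ptimes[j][pos]
    bn = max(workload, key=workload.get)           # bottleneck machine
    sub = {}                                       # job -> (release, p on bn, tail)
    for j, route in enumerate(routes):
        if bn in route:
            pos = route.index(bn)
            head = sum(ptimes[j][:pos])            # work before the bottleneck op
            tail = sum(ptimes[j][pos + 1:])        # work after the bottleneck op
            sub[j] = (head, ptimes[j][pos], tail)
    return bn, sub
```

The returned heads and tails then feed a single-machine subproblem with release times and follow-up times on the bottleneck, in the spirit of the SBH.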
Second, a job-oriented interpretation of the above guideline suggests excluding whole jobs from a simplified model, i.e. omitting all operations belonging to these pre-specified jobs. If we remember that the number of jobs in manufacturing scheduling
is the main driver of complexity in many solution procedures, reducing the number
of jobs in a simplification approach turns out to be a rather attractive simplification
strategy from a computational point of view. From an application point of view, job
selection might be executed in two ways, i.e. positive selection and negative selection. Positive selection in our context means choosing 'important' jobs to remain in the scheduling model. The importance of jobs might result from their size, from customers' and/or products' importance, from their (expected) processing time variance,
etc. Negative selection means to first consider all jobs and successively eliminate
unimportant jobs in the simplification process until the number of jobs becomes
manageable.
Third, these two approaches to simplification, i.e. job selection and machine
selection, might be combined into a machine/job-oriented interpretation. And the
approach might be even more specific with respect to operations when single operations are included or excluded in the simplification process.
Without going into more detail here, it has also to be mentioned that the above-outlined simplification approaches might require some adjustment of the objective function in the simplified model. E.g., using bottleneck ideas might imply including opportunity cost considerations in the simplified model.
p^agg_{kj} = ∑_{i=lowind(k)}^{uppind(k)} g_i p_{ij}.
Here the processing time pi j of job j on machine i is weighted using the machine-
based weight gi and added up from machine lowind(k) through machine uppind(k)
while these machines define the k-th ‘macro’ machine in the simplified model by
using machine aggregation. The approximate approaches of Dannenbring (1977) and Park et al. (1984) to multi-machine permutation flow shop problems with makespan objective are examples of a 'true' weighting approach, while the procedure of Campbell et al. (1970) applies machine aggregation to the same type of problem by using equal weights gi = 1 for all machines. These approaches simplify, i.e. aggregate, the model into a two-machine model which can easily be solved to optimality by Johnson's algorithm. This solution is then applied, as an approximation, to the original multi-machine problem under consideration.
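To make the machine-aggregation idea concrete, the following sketch builds the aggregated two-machine instances with equal weights gi = 1 and solves each with Johnson's rule. The function and variable names are our own illustration of the Campbell et al. scheme, not the authors' implementation.

```python
def johnson(p1, p2):
    """Johnson's rule for the two-machine flow shop (makespan): jobs with
    p1 < p2 come first in increasing p1; the rest come last in decreasing p2."""
    n = len(p1)
    first = sorted((j for j in range(n) if p1[j] < p2[j]), key=lambda j: p1[j])
    last = sorted((j for j in range(n) if p1[j] >= p2[j]), key=lambda j: -p2[j])
    return first + last

def cds_sequences(p):
    """Machine aggregation with equal weights g_i = 1: p[i][j] is the time of
    job j on machine i. For k = 1..m-1, machines 1..k and m-k+1..m are summed
    into two 'macro' machines, and each aggregated two-machine instance is
    solved with Johnson's rule."""
    m, n = len(p), len(p[0])
    sequences = []
    for k in range(1, m):
        macro1 = [sum(p[i][j] for i in range(k)) for j in range(n)]
        macro2 = [sum(p[i][j] for i in range(m - k, m)) for j in range(n)]
        sequences.append(johnson(macro1, macro2))
    return sequences
```

Each of the m − 1 candidate sequences would then be evaluated on the original m-machine instance and the best one kept.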
Job-oriented approaches construct clusters of jobs which are usually interpreted
as one ‘macro’ job in the simplified model. It is supposed that jobs included in one of
these macro jobs are then processed consecutively. The processing times of a macro
job are usually constructed as
$p_{il}^{agg} = \sum_{j=\mathrm{lowind}(l)}^{\mathrm{uppind}(l)} g_j \, p_{ij}.$
Here the processing time pi j of job j on machine i is weighted using the job-based
weight g j and added up from job lowind(l) through job uppind(l) while these jobs
define the l-th ‘macro’ job in the simplified model by using job aggregation. It has
to be mentioned that this approach requires a prior assignment of jobs to macro
jobs. This is usually much less obvious as compared with the machine-aggregation
approach since actually the jobs might occur in (almost) every possible sequence
(per machine). Therefore, job-aggregation approaches are often used when the standard assumption of cluster construction is fulfilled, i.e. the objects (jobs) in a cluster are rather homogeneous while objects (jobs) from different clusters differ more or less significantly. This requirement, e.g., is often supposed to be fulfilled if
changeover/setup times between jobs of one cluster are negligible while changeover
times between jobs of different clusters are of significant size.
Clearly, machine-oriented and job-oriented aggregation approaches might be
applied simultaneously. A further generalisation of this machine-based and/or job-
based view on aggregation in scheduling problems can be derived from considering
the disjunctive graph presentation of the original model and looking for network
partitioning and network aggregation based on this problem representation. We will
not discuss this further here.
9.3 Some Ideas Behind Heuristics for Manufacturing Scheduling 235
Ignoring constraints of the original model in the simplified one represents another
group of simplification approaches. E.g. excluding resource constraints, precedence
constraints, the sequence dependency of changeover times, etc. usually simplifies
the model significantly. However, to receive a feasible solution to the original model
might require sophisticated adjustment procedures with respect to the solution of the
simplified model. We will not address these approaches in more detail here.
Another rather simple but somewhat ambivalent way to reduce the complexity of a model is to reduce the time horizon of the jobs to be considered in the model. On the one hand, this approach probably reduces the number of jobs to be considered, and thus mitigates a main complexity driver. On the other hand, excluding later periods from consideration might yield adjustment problems at the end of the scheduling horizon and cause additional feasibility problems.
Instead of reducing the number of jobs to be considered by reducing the time horizon of the model, this number can also be reduced by performing an add-on scheduling approach instead of starting the procedure from scratch every time a (re-)scheduling is executed. That is, instead of scheduling all jobs anew, only the new jobs are scheduled in an optimisation run while the former jobs' sequence (or only their relative sequence on every machine) remains fixed. Basically, these approaches
adapt the difference known from regenerative and net-change approaches in MRP to
manufacturing scheduling. This approach is rather common in dynamic scheduling
settings.
model probably yields two different solutions (where, of course, the solution of the symmetric model has to be reversed again to yield a solution to the original model). However, both solutions will probably be of adequate quality, and the better of the two can finally be chosen. Additionally, it should be mentioned that, when testing approaches on randomly generated test-beds, both the original instance and its symmetric counterpart very often represent valid problem instances. Therefore, from a numerical testing point of view, there is no difference between applying an approach to the original or to the symmetric model. Furthermore, the increase in effort from applying an approach also to the symmetric model is nothing but a duplication. And, by experience, in several problem settings considerable improvements can be achieved by this combined type of approach, i.e. applying a given approach to both the original and the symmetric model.
Symmetry ideas have been described, for instance, in the book of Pinedo (2012).
One paper where these ideas are employed for designing a heuristic is Ribas et al.
(2010).
sequence(s) of T(k − 1). The job k∗ which yields the best (sub-)sequences is transferred from R(k − 1) to S(k − 1), i.e.
T (k) consists of those (sub-) sequences which have been accepted in iteration k.
Of course, this approach can be expanded in many directions. E.g.,
• some already sequenced jobs can be withdrawn from S(k) and put back into R(k),
• several jobs might be assigned to S(k) simultaneously,
• the above approach might be applied to some tree structure, i.e. several jobs can be
alternatively assigned to S(k) and each of the resulting alternatives is considered
in the following iterations separately,
• ...
All these expansions have to be defined such that the resulting effort for the method
remains acceptable, i.e. the computational effort does not grow exponentially but
remains polynomially bounded.
inserted in every iteration of (b). This approach retains polynomial complexity and yields good results for the flowtime flow shop problem, but it is rather time-consuming.
• As already mentioned, ties may occur both in (a) and (b). A sophisticated treatment
of ties may improve the approach’s performance noticeably. Nevertheless, keeping
the computational effort polynomially bounded might be an issue in this case.
operations, or jointly for several operations. Note that the diversity of this type of approach is again enormous.
Another type of approach is to include the human planner's expertise for a given schedule, i.e. to explicitly address the man-machine interface. Corresponding approaches have empirically turned out to be more or less successful in many settings. Especially if formerly manually solved manufacturing scheduling problems are automated in practice, on the one hand the automated solution usually has to be competitive with the former manual processing, which then serves as a benchmark with respect to the relevant indicators or objective function(s). On the other hand, allowing the human planner 'to have a look' at the automated solution might improve even this solution and/or will probably increase the acceptance of the automated solution within the real-world manufacturing setting. However, note that these third-type approaches rely on the planner's intuition, which might be mostly non-quantifiable, including the application of specific methods 'just in time'.
9.4 Metaheuristics
[Fig. 9.12: example of the solution representation for three machines and 20 jobs; each machine carries an ordered list of its assigned jobs]
indicate the order in which they are processed on that machine. Notice that the term is not very accurate, as each job has to be assigned to exactly one machine. As a result, each machine does not really contain a permutation of all the jobs, but rather a list of the assigned jobs. An example of a three-machine, 20-job representation is shown in Fig. 9.12.
As we can see, every job appears once, but not all machines have the same number of jobs assigned. In the example, we can see that π(2,4) = 19, i.e. the job in the fourth position on the second machine is job 19. Notice that for complex problems, like, for example, a hybrid job shop layout with stage skipping and recirculation, much more complex representations are needed. Figure 9.13 shows a possible three-stage hybrid flow shop with two, three and two parallel machines at each stage, respectively.
9.4.1.2 Initialisation
In general, a metaheuristic needs at least one starting solution. This initial solution is usually referred to as the 'seed'. For some problems, this solution might simply be generated at random. However, a random solution is expected to be of very low quality. In some other cases, a random solution might even be infeasible. For these reasons, it is common in the scheduling methods literature to initialise metaheuristics with good known constructive heuristics. At the very least, there is nothing wrong with using a good dispatching rule to seed a metaheuristic algorithm.
Some metaheuristic methods, however, rely on a completely random initialisation, as a heuristic initialisation might bias the search towards a specific region of the solution space, possibly hindering the search for a globally optimal solution in some way or another. Additionally, for some problems there are no good known heuristics, and/or dispatching rules do not offer solutions of sufficiently good quality. In all these cases, it is common to initialise metaheuristic algorithms randomly.
[Fig. 9.13: solution representation for a three-stage hybrid flow shop with two, three and two parallel machines per stage; each machine carries an ordered list of its assigned jobs]
Given the solution representation, and also given the impossibility of examining the whole solution representation space, which is often of exponential size, the next concept needed in metaheuristics is the neighbourhood of a solution. The neighbourhood of a solution is the set of all solutions in the representation space that can be reached by carrying out a small local movement on the solution. For example, given a permutation, a simple neighbourhood is the one in which two adjacent jobs exchange their positions. This neighbourhood is called the adjacent swap neighbourhood. Let us define it in a more formal way. Suppose we have a sequence of jobs expressed as a permutation π = (π(1), π(2), …, π(n)). An adjacent swap neighbour is obtained by selecting a job in position j and its immediately adjacent job in position k, where j = 1, …, n − 1 and k = j + 1, and swapping the two jobs (or, equivalently, the jobs in these two positions of the permutation). Given these two positions j and k, and the corresponding jobs π(j) and π(k), the resulting sequence after the swap is π′ = (π(1), …, π(j−1), π(k), π(j), π(k+1), …, π(n)).
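As a sketch (our own code, not taken from the book), the adjacent swap neighbourhood of a permutation can be enumerated as follows; it contains exactly n − 1 neighbours:

```python
def adjacent_swap_neighbourhood(pi):
    """All sequences obtained from the permutation pi by swapping the jobs
    in positions j and j+1 (0-based here), for j = 0..n-2."""
    neighbours = []
    for j in range(len(pi) - 1):
        nb = list(pi)
        nb[j], nb[j + 1] = nb[j + 1], nb[j]
        neighbours.append(nb)
    return neighbours
```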
244 9 Approximate Algorithms
π^v = (π(1), …, π(j−1), π(j+1), …, π(k), π(j), π(k+1), …, π(n)) if j < k, or, in a sequence,
π^v = (π(1), …, π(k−1), π(j), π(k), …, π(j−1), π(j+1), …, π(n)) if j > k.
The set of insertion moves I is then defined as I = {(j, k) : j, k = 1, …, n, j ≠ k} and the insertion neighbourhood as N(I, π) = {π^v : v ∈ I}. Note that the cardinality of I is greater than that of A or S, as |I| = (n − 1)². A graphical example of the insertion neighbourhood is given in Fig. 9.16.
Of course, the previous three neighbourhoods are not the only possible ones.
There are many other neighbourhoods, like reverse, block swap, block insert, three
job insert, etc.
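A sketch of the insertion neighbourhood (our own illustration) that enumerates exactly the (n − 1)² distinct neighbours, skipping the move that leaves the sequence unchanged and the duplicate counterpart of each adjacent swap:

```python
def insertion_neighbourhood(pi):
    """All distinct sequences obtained by removing the job in position j and
    reinserting it at position k. Moves with k == j leave pi unchanged, and
    moving job j to k == j-1 duplicates moving job j-1 to k == j, so skipping
    both leaves exactly (n-1)^2 distinct neighbours."""
    n = len(pi)
    neighbours = []
    for j in range(n):
        for k in range(n):
            if k == j or k == j - 1:     # identity move or duplicate adjacent swap
                continue
            nb = list(pi)
            job = nb.pop(j)
            nb.insert(k, job)
            neighbours.append(nb)
    return neighbours
```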
Once the neighbourhood has been defined, a metaheuristic method usually
explores neighbours of a given solution in a given way. The general idea is that
good solutions are clustered close to each other, i.e. a very good solution will have
a hopefully even better one in the neighbourhood. Let us picture an example in the
single machine layout with a permutation representation. The objective is the total
tardiness minimisation. A very good starting solution has been found by applying
the Earliest Due Date (EDD) dispatching rule. It is straightforward to see that an
even better solution can be found by selecting a job that is being completed early
and swapping it with another job that is being completed late. Normally, the late job
will be processed earlier in the sequence and the early job later in the sequence. This
is an example of a local movement in the general swap neighbourhood. Hopefully,
after this swap, the early job is not tardy and the tardy job is now early. As a result,
the total tardiness will have been reduced even further.
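The effect described above can be checked on a small hypothetical instance (the numbers below are ours, chosen so that due dates tie and the EDD tie-break places a long job first); a single general swap then halves the total tardiness:

```python
def total_tardiness(seq, p, d):
    """Total tardiness on a single machine when jobs run in 'seq' order."""
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += max(0, t - d[j])
    return total

# hypothetical instance: one long job, three short ones, a common due date
p = [9, 1, 1, 1]              # processing times
d = [9, 9, 9, 9]              # due dates (EDD breaks the ties arbitrarily)
edd = [0, 1, 2, 3]            # one possible EDD sequence: the long job first
print(total_tardiness(edd, p, d))        # 6
swapped = [3, 1, 2, 0]        # swap the on-time long job with a tardy short job
print(total_tardiness(swapped, p, d))    # 3
```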
As in the previous example, neighbours of a solution can be examined sequentially or randomly. Also, given a solution, either all neighbours might be examined and the 'best' one selected, or the first 'better' neighbour might be selected. These strategies are known as 'best improvement' and 'first improvement', respectively.
There is a known trade-off between the cardinality of the neighbourhood, the
quality of the expected solutions and the time it takes to traverse the neighbourhood.
Obviously, larger cardinality neighbourhoods are more expensive to examine but
higher quality solutions are expected. What to do with new solutions is discussed
next in the acceptance criterion.
solution, the first improving neighbour replaces the current solution. An alternative would be to examine all neighbours of a given solution sequentially, and to replace the current solution only with the best neighbour.
With the computing power available today, it is fairly straightforward to add a
simple descent local search method as a post-processing stage after a solution has
been obtained with a dispatching rule or heuristic algorithm. In the worst case sce-
nario, for example in big instances where there might be thousands of improving
iterations, the descent local search procedure can be stopped after a given predefined
time has elapsed. However, there are many scheduling problems where neighbourhoods are hard to define and/or each candidate solution is costly to evaluate. In such scenarios, iterating through the neighbourhood can be an expensive procedure.
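A minimal first-improvement descent with a time budget, along the lines just described, might look as follows (our own sketch; `cost` and `neighbours` stand for any objective function and neighbourhood generator):

```python
import time

def descent(pi, cost, neighbours, time_limit=1.0):
    """First-improvement descent local search: move to the first strictly
    improving neighbour and rescan; stop at a local optimum or when the
    time budget (useful for big instances) is exhausted."""
    best, best_cost = list(pi), cost(pi)
    deadline = time.monotonic() + time_limit
    improved = True
    while improved and time.monotonic() < deadline:
        improved = False
        for nb in neighbours(best):
            nb_cost = cost(nb)
            if nb_cost < best_cost:
                best, best_cost = nb, nb_cost
                improved = True
                break                 # first improvement: restart the scan
    return best, best_cost
```

Best improvement would instead scan the whole neighbourhood before moving to the cheapest neighbour.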
One of the earliest methods to cope with the problem of being trapped in local optima is the simulated annealing metaheuristic. Simulated annealing, as the name implies, emulates the physical annealing process, in which materials of crystalline structure are slowly cooled off and re-heated in order to relieve tensions and to foster a perfect crystalline structure. The analogy to local search is the following: during the
search, neighbours are always accepted if they are better than the current solution.
However, if the first or best neighbour is worse, it will be probabilistically accepted
with a diminishing probability that depends on the time that has elapsed (cooling
schedule: the more time the algorithm has been running, the lower the probability to
accept worse solutions) and also on the difference between the current solution and
the best non-improving neighbour (the higher the difference, the lower the proba-
bility to accept a worse solution). The template of Simulated Annealing is given in
Algorithm 4.
Note that Random returns a uniformly distributed random real number between 0 and 1. The probabilistic expression $e^{-D/T}$ returns a diminishing number as D increases and/or as T decreases. Furthermore, 0 < r < 1 is referred to as the cooling rate, and it determines the cooling schedule.
As one can see, a high initial temperature and an r value close to 1 will ensure
that a large number of iterations will be carried out. Some more advanced designs
add a number of iterations L at each temperature level and other complex settings.
Simulated annealing was proposed for optimisation problems by Kirkpatrick et al.
(1983) and by Černý (1985). Scheduling applications appeared in the late eighties
and early nineties with the works of Osman and Potts (1989) and Ogbu and Smith
(1990) to name just a few.
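A minimal sketch of this scheme (our own code, not the book's Algorithm 4; parameter names such as `t_ini` and `r` are ours) could be:

```python
import math
import random

def simulated_annealing(pi, cost, random_neighbour, t_ini=100.0, r=0.95, iters=2000):
    """Minimal simulated annealing sketch: improving neighbours are always
    accepted; a worsening neighbour is accepted with probability exp(-D/T),
    where D is the cost increase and T the current temperature, which is
    cooled geometrically (T := r * T) after every iteration."""
    current, current_cost = list(pi), cost(pi)
    best, best_cost = list(pi), current_cost
    T = t_ini
    for _ in range(iters):
        nb = random_neighbour(current)
        nb_cost = cost(nb)
        D = nb_cost - current_cost
        if D < 0 or random.random() < math.exp(-D / T):
            current, current_cost = nb, nb_cost
            if current_cost < best_cost:
                best, best_cost = list(nb), nb_cost
        T *= r                        # geometric cooling schedule
    return best, best_cost
```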
Simulated Annealing, like many other metaheuristics, employs different parameters that might affect the performance of the algorithm. The listing of Algorithm 4 shows some parameters, such as the initial temperature Tini and the cooling rate r, apart
from the initial solution. These parameters have to be tuned for performance and
efficiency and there is a whole area in experimental algorithms devoted to the tuning
of metaheuristics. Some relevant texts are those of Birattari (2005) and Hoos et al.
(2005).
Tabu Search (or Taboo Search) is actually a family of methods that, similarly to simulated annealing, include mechanisms to escape from local optima. A key difference with respect to simulated annealing is that, in tabu search, the incumbent solution is always replaced with the best neighbour, regardless of whether this neighbour is actually better or worse than the current solution. Obviously, this approach can
lead to infinite loops in which the search moves from one solution π1 to another
neighbouring solution π2 and from that one again back to π1 . In order to avoid this,
tabu search methods keep a list of recently visited solutions. This list is what gives
T := r · T ;
return best solution π∗, best objective function value obj∗
end
the name to this family of metaheuristics as it is called the Tabu list. The tabu list
usually does not contain full visited solutions, but rather elements of the visited solu-
tions or the movements that were carried out in the solution elements at the most
recent iterations. Therefore, when moving from one solution to the best neighbour, the movement is not accepted if it leads to a tabu solution. The result is a methodology with a short-term memory that avoids the formation of cycles in the search.
A comprehensive reference on the mechanisms of Tabu Search is Glover and
Laguna (1997), while some examples of its application to manufacturing scheduling
are Widmer and Hertz (1989); Nowicki and Smutnicki (1996) and Nowicki and
Smutnicki (1998) to name just a small set.
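A minimal sketch of these mechanisms (our own illustration; the tabu list stores the applied moves, and the aspiration criterion that lets a tabu move through when it improves on the best solution found is a common practical addition, not something stated above) could be:

```python
from collections import deque

def tabu_search(pi, cost, neighbours_with_moves, iters=100, tenure=7):
    """Minimal tabu search sketch: always move to the best non-tabu
    neighbour, even if it is worse than the current solution; the moves
    applied most recently are kept in a fixed-length tabu list (short-term
    memory) to prevent cycling. 'neighbours_with_moves' yields
    (move, neighbour) pairs, where 'move' identifies the applied change."""
    current = list(pi)
    best, best_cost = list(pi), cost(pi)
    tabu = deque(maxlen=tenure)          # the tabu list
    for _ in range(iters):
        candidates = [(cost(nb), mv, nb)
                      for mv, nb in neighbours_with_moves(current)
                      if mv not in tabu or cost(nb) < best_cost]  # aspiration
        if not candidates:
            break                        # every move is currently tabu
        c, mv, nb = min(candidates, key=lambda x: x[0])
        current = nb
        tabu.append(mv)
        if c < best_cost:
            best, best_cost = list(nb), c
    return best, best_cost
```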
Genetic algorithms are also different from earlier metaheuristics in that they are
population based. The search is not carried out over one single solution but rather
over a set of solutions. This set of solutions is referred to as ‘population’ and each
member of the ‘population’ is called an individual. Individuals are actual solutions to
the problem, or rather, solution representations. In the genetic algorithm jargon, each
individual is formed by a ‘chromosome’ in which the solution elements are encoded.
For scheduling problems, chromosomes can just be the solution representations, for
example, job permutations that form sequences.
The population is evaluated and all individuals are assigned a ‘fitness value’.
This fitness value is directly related with the objective function value, in such a way
that better objective function values are assigned higher fitness values. A selection
operator is introduced. This operator randomly chooses individuals from the popu-
lation with a strong bias towards fitter solutions. As a result, the fittest individuals
have greater probabilities of being selected. Selected individuals undergo a crossover
procedure in which new individuals are generated. The crossover operator tries to
generate new solutions by combining the best characteristics of the fitter selected
individuals. A further mutation operator is introduced to randomly alter some char-
acteristics of the newly created individuals. Lastly, the new solutions are evaluated
and introduced in the population. The previous process is repeated generation after
generation.
Genetic algorithms are exotic as far as metaheuristic methods go. When first
exposed to them, the natural reaction is to wonder how this seemingly complex
mechanism could work. Actually, once individuals are seen as simple representations
of actual job schedules, the process is much better understood.
For example, a simple genetic algorithm using job permutations, for example, for
a single machine layout scheduling problem could be based on the template shown
in Algorithm 5.
Parent 1:    3 17 9 15 8 14 11 7 13 | 19 6 5 1 18 4 2 16 10 20 12
Parent 2:    3 9 14 15 11 19 6 18 5 | 8 7 17 1 16 4 2 10 13 20 12
Offspring 1: 3 17 9 15 8 14 11 7 13 | 19 6 18 5 1 16 4 2 10 20 12
Offspring 2: 3 9 14 15 11 19 6 18 5 | 17 8 7 13 1 4 2 16 10 20 12
(the vertical bar marks the random crossover point)
Fig. 9.17 One point order crossover in genetic algorithms. a Offspring inherit the jobs prior to the
random cut point from the direct parent. b Remaining jobs are inherited from the other parent in
their relative order
evaluated and usually replace the old solutions that were selected. In some more advanced genetic algorithms, an elitism operator is used in order to prevent the best individuals in the population from being replaced by new solutions of worse quality.
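The crossover mechanism of Fig. 9.17 can be sketched as follows (our own code; as described in the caption, the offspring copies the direct parent up to the cut point and takes the remaining jobs in the other parent's relative order):

```python
import random

def one_point_order_crossover(p1, p2, cut=None):
    """One-point order crossover for permutation chromosomes: the offspring
    copies the direct parent p1 up to a (random) cut point and then appends
    the remaining jobs in the relative order in which they appear in the
    other parent p2, so the result is again a valid permutation."""
    if cut is None:
        cut = random.randint(1, len(p1) - 1)
    head = p1[:cut]
    in_head = set(head)
    tail = [job for job in p2 if job not in in_head]
    return head + tail
```

Applying it to the two parents of Fig. 9.17 with the cut after nine jobs reproduces both offspring shown there.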
Applications of genetic algorithms to scheduling problems appeared as early as the mid-1980s in Davis (1985), and also in Mattfeld (1996); Syswerda (1996); Biegel and Davern (1990); Chen et al. (1995); Reeves (1995) and, more recently, Ruiz et al. (2006). Note that this is just a small list, as genetic algorithms are among the most prolific metaheuristics in the literature. The interested reader is referred to these texts for more information.
This chapter presented some main ideas for designing and evaluating approximate algorithms for manufacturing scheduling problems. However, the large variety of such algorithms prohibits a comprehensive overview of them, or even a comprehensive overview of the respective ideas. For instance, it should be mentioned here again that simply stopping a (possibly, but not necessarily, exact) optimisation approach before its pre-defined termination, e.g. for time or other resource reasons, obviously defines a class of heuristics as well.
In addition, apart from the different methods reviewed in this chapter, it should be mentioned that additional heuristic solution procedures might result from automated integrated or non-integrated decision support systems, such as solution modules provided by approaches from artificial intelligence or agent-based systems. These methods will not be addressed here; instead, some further remarks on a general technique for generating approximate solutions (Lagrangean relaxation) and on the combination of exact approaches with metaheuristics are presented.
From a formal point of view, Lagrangean relaxations are often able to provide bounds on the optimal objective function value, which enables the decision maker to estimate the maximum deviation of a current solution from the optimum. (However, the integrality requirements with respect to the bound interpretation have to be considered carefully, at least if mathematically valid bounds are to be derived.)
References
Adams, J., Balas, E., and Zawack, D. (1988). The shifting bottleneck procedure for job shop schedul-
ing. Management Science, 34(3):391–401.
Biegel, J. E. and Davern, J. J. (1990). Genetic Algorithms and Job Shop Scheduling. Computers
and Industrial Engineering, 19(1):81–91.
Birattari, M. (2005). The Problem of Tuning Metaheuristics as seen from a machine learning per-
spective. Intelligence-Infix, Berlin.
Campbell, H. G., Dudek, R. A., and Smith, M. L. (1970). A Heuristic Algorithm for the n Job, m
Machine Sequencing Problem. Management Science, 16(10):B-630–B-637.
Chen, C.-L., Vempati, V. S., and Aljaber, N. (1995). An application of genetic algorithms for flow
shop problems. European Journal of Operational Research, 80(2):389–396.
Clerc, M. (2006). Particle Swarm Optimization. Wiley-ISTE, New York.
Dannenbring, D. G. (1977). An evaluation of flowshop sequencing heuristics. Management Science,
23:1174–1182.
Dasgupta, D., editor (1998). Artificial Immune Systems and Their Applications. Springer, New York.
Davis, L. (1985). Job Shop Scheduling with Genetic Algorithms. In Grefenstette, J. J., editor,
Proceedings of the First International Conference on Genetic Algorithms and their Applications,
pages 136–140, Hillsdale. Lawrence Erlbaum Associates.
Davis, L., editor (1996). Handbook of Genetic Algorithms. International Thomson Computer Press,
London.
de Castro, L. N. and Timmis, J. (2002). Artificial Immune Systems: A New Computational Intelli-
gence Approach. Springer, London.
Doerner, K. F., Gendreau, M., Greistorfer, P., Gutjahr, W. J., Hartl, R. F., and Reimann, M., editors (2007). Metaheuristics: progress in complex systems optimization. Operations Research/Computer Science Interfaces. Springer, New York.
Dorigo, M. and Stützle, T. (2004). Ant Colony Optimization. Bradford Books, USA.
Eberhart, R. C. and Shi, Y. (2007). Computational Intelligence: Concepts to Implementations.
Morgan Kaufmann, San Francisco.
Engelbrecht, A. P. (2006). Fundamentals of Computational Swarm Intelligence. John Wiley & Sons,
New York.
Engelbrecht, A. P. (2007). Computational Intelligence: An Introduction. John Wiley & Sons, New
York, second edition.
Feo, T. A. and Resende, M. G. C. (1989). A probabilistic heuristic for a computationally difficult
set covering problem. Operations Research Letters, 8(2):67–71.
Feo, T. A. and Resende, M. G. C. (1995). Greedy randomized adaptive search procedures. Journal
of Global Optimization, 6:109–133.
Feoktistov, V. (2006). Differential Evolution. In Search of Solutions. Springer, New York.
Framinan, J. M., Leisten, R., and Rajendran, C. (2003). Different initial sequences for the heuristic
of Nawaz, Enscore and Ham to minimize makespan, idletime or flowtime in the static permutation
flowshop sequencing problem. International Journal of Production Research, 41(1):121–148.
Geem, Z. W., Kim, J. H., and Loganathan, G. V. (2001). A new heuristic optimization algorithm:
Harmony search. Simulation, 76(2):60–68.
Glover, F. and Kochenberger, G. A., editors (2003). Handbook of Metaheuristics. Kluwer Academic
Publishers, Dordrecht.
Glover, F. and Laguna, M. (1997). Tabu Search. Kluwer Academic Publishers, Dordrecht.
Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning.
Addison-Wesley, Reading.
Grabowski, J. and Wodecki, M. (2004). A very fast tabu search algorithm for the permutation flow
shop problem with makespan criterion. Computers & Operations Research, 31(11):1891–1909.
Hansen, P. and Mladenovic, N. (2001). Variable neighborhood search: Principles and applications.
European Journal of Operational Research, 130(3):449–467.
Holland, J. H. (1975). Adaptation in Natural and Artificial Systems. The University of Michigan
Press, Ann Arbor.
Hoos, H. H. and Stützle, T. (2005). Stochastic Local Search: Foundations and Applications.
Morgan Kaufmann, San Francisco.
Ibaraki, T., Nonobe, K., and Yagiura, M., editors (2005). Metaheuristics: progress as real problem
solvers. Operations Research/Computer Science Interfaces. Springer, New York.
Kennedy, J., Eberhart, R. C., and Shi, Y. (2001). Swarm Intelligence. Academic Press, San Diego.
Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. (1983). Optimization by simulated annealing.
Science, 220(4598):671–680.
Konar, A. (2005). Computational Intelligence: Principles, Techniques and Applications. Springer,
New York.
Laguna, M. and Martí, R. (2003). Scatter search: methodology and implementations in C. Operations
Research/Computer Science Interfaces. Kluwer Academic Publishers, New York.
Larrañaga, P. and Lozano, J. A., editors (2002). Estimation of distribution algorithms: A new tool
for evolutionary computation. Kluwer Academic Publishers, Dordrecht.
Lozano, J. A., Larrañaga, P., Inza, I. n., and Bengoetxea, E., editors (2006). Towards a New Evolu-
tionary Computation: Advances on Estimation of Distribution Algorithms. Springer, New York.
Mattfeld, D. C. (1996). Evolutionary Search and the Job Shop; Investigations on Genetic Algorithms
for Production Scheduling. Production and Logistics. Springer/Physica Verlag, Berlin.
Michalewicz, Z. (1996). Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag, Berlin, third edition.
Mladenovic, N. and Hansen, P. (1997). Variable neighborhood search. Computers & Operations
Research, 24(11):1097–1100.
Morton, T. E. and Pentico, D. W. (1993). Heuristic Scheduling Systems With Applications to Produc-
tion Systems and Project Management. Wiley Series in Engineering & Technology Management.
John Wiley & Sons, Hoboken.
Nawaz, M., Enscore, Jr, E. E., and Ham, I. (1983). A Heuristic Algorithm for the m-Machine, n-Job
Flow-shop Sequencing Problem. OMEGA, The International Journal of Management Science,
11(1):91–95.
Nowicki, E. and Smutnicki, C. (1996). A fast tabu search algorithm for the permutation flow-shop
problem. European Journal of Operational Research, 91(1):160–175.
Nowicki, E. and Smutnicki, C. (1998). The flow shop with parallel machines: A tabu search
approach. European Journal of Operational Research, 106(2–3):226–253.
Ogbu, F. A. and Smith, D. K. (1990). The Application of the Simulated Annealing Algorithm to the
Solution of the n/m/Cmax Flowshop Problem. Computers and Operations Research, 17(3):243–
253.
Osman, I. H. and Potts, C. N. (1989). Simulated Annealing for Permutation Flow-shop Scheduling.
OMEGA, The International Journal of Management Science, 17(6):551–557.
Park, Y. B., Pegden, C., and Enscore, E. (1984). A survey and evaluation of static flowshop schedul-
ing heuristics. International Journal of Production Research, 22(1):127–141.
Pinedo, M. L. (2012). Scheduling: Theory, Algorithms, and Systems. Springer, New York, fourth
edition.
Price, K., Storn, R. M., and Lampinen, J. A. (2005). Differential Evolution: A Practical Approach
to Global Optimization. Springer, New York.
Rajendran, C. and Ziegler, H. (2004). Ant-colony algorithms for permutation flowshop scheduling to minimize makespan/total flowtime of jobs. European Journal of Operational Research, 155(2):426–438.
Reeves, C. and Yamada, T. (1998). Genetic algorithms, path relinking, and the flowshop sequencing
problem. Evolutionary Computation, 6(1):45–60.
Reeves, C. R. (1995). A genetic algorithm for flowshop sequencing. Computers & Operations
Research, 22(1):5–13.
Resende, M. G. C., Pinho de Sousa, J., and Viana, A., editors (2004). Metaheuristics: computer
decision-making. Kluwer Academic Publishers, Dordrecht.
Ribas, I., Companys, R., and Tort-Martorell, X. (2010). Comparing three-step heuristics for the
permutation flow shop problem. Computers & Operations Research, 37(12):2062–2070.
Ruiz, R. and Maroto, C. (2005). A comprehensive review and evaluation of permutation flowshop
heuristics. European Journal of Operational Research, 165(2):479–494.
Ruiz, R., Maroto, C., and Alcaraz, J. (2006). Two new robust genetic algorithms for the flowshop
scheduling problem. OMEGA, The International Journal of Management Science, 34(5):461–
476.
Ruiz, R. and Stützle, T. (2007). A simple and effective iterated greedy algorithm for the permutation
flowshop scheduling problem. European Journal of Operational Research, 177(3):2033–2049.
Syswerda, G. (1996). Scheduling Optimization Using Genetic Algorithms. In Davis, L., editor,
Handbook of Genetic Algorithms, pages 332–349, London. International Thomson Computer
Press.
Taillard, E. (1990). Some efficient heuristic methods for the flow shop sequencing problem. Euro-
pean Journal of Operational Research, 47(1):67–74.
Turner, S. and Booth, D. (1987). Comparison of Heuristics for Flow Shop Sequencing. OMEGA,
The International Journal of Management Science, 15(1):75–78.
Černý, V. (1985). A thermodynamical approach to the travelling salesman problem: an efficient
simulation algorithm. Journal of Optimization Theory and Applications, 45(1):41–51.
Wang, L. and Zheng, D. Z. (2003). An effective hybrid heuristic for flow shop scheduling. The
International Journal of Advanced Manufacturing Technology, 21(1):38–44.
Widmer, M. and Hertz, A. (1989). A new heuristic method for the flow shop sequencing problem.
European Journal of Operational Research, 41(2):186–193.
Woo, H.-S. and Yim, D.-S. (1998). A heuristic algorithm for mean flowtime objective in flowshop
scheduling. Computers & Operations Research, 25(3):175–182.
Chapter 10
Multi-Objective Scheduling
10.1 Introduction
Starting with Chap. 3, where scheduling models were presented, followed by Chaps.
4 and 5 where constraints and scheduling objectives were outlined, a common
assumption so far in this book has been that every scheduling problem and its corre-
sponding model has one single objective or criterion to optimise. However, internal
goals, e.g. minimising tied-up capital and the related consequences with respect
to utilisation and work in process, are in conflict with other internal goals or with
externally oriented goals, e.g. maximising service level with its relation to due date-
oriented objectives. Therefore, it is not surprising that manufacturing scheduling
models have been the subject of many publications and considerations dealing with
multi-criteria aspects.
The consideration of multiple criteria greatly increases the types of models to be
considered, as well as the approaches and methods to solve these models. Addition-
ally, the assessment of the quality of the solutions obtained by these methods is far
from straightforward in many cases.
In this chapter we discuss these multi-objective models, the main approaches and methods to solve them, and how the quality of the resulting solutions can be assessed.
10.2 Multi-Objective Nature of Many Scheduling Problems

Strictly speaking, one could say that a multi-objective problem is no more than a poorly defined single-objective problem. In other words, if one needs to optimise a given scheduling problem with several simultaneous objectives, it is because it was not possible to verbalise and rationalise a single objective function, no matter how complex this function might be. However, agreeing upon a single objective is by no means an easy task in real production scheduling settings. As has been mentioned, real-life scheduling problems are never single-objective. This might seem a gratuitous claim, but a close examination of how scheduling works in practice should convince the reader that this is, in fact, the case. Below we state several issues concerning single- versus multi-objective optimisation in real production scheduling settings:
• Well-defined objectives are hard to find in practice. Shop managers have a fair
amount of intuition of what makes a good or a bad schedule, but not a precise and
specific objective function. Moreover, even if a well-defined objective exists, its
actual value might be of little relevance in practice. Even simple objectives like
minimising the number of tardy jobs can quickly turn into a debatable measure.
Some jobs are not as important as others and, therefore, being tardy for some jobs is not as critical. Trying to add weights to cope with this issue is only a temporary fix, since weights are often not enough: complaints like ‘this job cannot be tardy; I have added a weight of nine and the scheduling method still results in this job being tardy’ easily pop up. On other occasions, a simple phone call to advise a client in advance that the delivery date for a job or order is going to change suffices to make up for a tardy job.
• Preferences or objectives vary over time. They depend on past and current shop performance, on the mix of pending production and on the market situation. Therefore, some days the focus might be on reducing lead times, while on other days machine utilisation might be of paramount importance.
• Decisions on schedules are often taken by a committee rather than by a single individual. Agreeing on a single objective in these cases is seldom feasible. Moreover, different managers at the company will have various and often conflicting goals: top management often desires to increase revenue, the sales force desires to meet clients’ due dates, while plant managers care about machine utilisation, to name just a few examples. As a result, even deciding on an objective to optimise can be challenging.
• The concept of ‘optimal solution’ is often not well understood. Plant managers
have a fair understanding of what constitutes a bad or a good schedule. However,
discerning between a good and an optimal schedule is typically not within plant personnel’s capabilities and is often a moot issue. Furthermore, an optimal solution for one objective might degrade another objective more than necessary.
• Problem constraints might be relaxed in view of the performance of the sched-
ule. For example, working shifts might be reduced by sending workers home or
augmented via overtime hours. Similarly, production might be subcontracted in
order to cope with additional orders. Given this potential relaxation of problem
constraints, the mere concept of optimal solution is debatable at best.
• Human schedulers are more oriented towards goals or minimum/maximum achieve-
ment levels for a set of preferences rather than towards complex mathematical
formulations and precise objectives. Verbalising objectives as ‘reaching an 80 %
service level’ is much more meaningful than saying that the total tardiness is
1,700,000 time units.
• Real settings are marred by data that are unknown, imprecise, constantly changing and even wrong. Under these circumstances, focusing on improving a single criterion by a few percentage points makes little, if any, sense.
Although it turns out that, in some cases, it is desirable to consider more than one criterion when scheduling, multi-criteria scheduling (and multi-criteria decision-making in general) is substantially more difficult than dealing with a single criterion.
In the latter case, for a formal deterministic model involving the minimisation of objective f, establishing if schedule S is better than S′ reduces to checking if f(S) < f(S′). However, this is not possible, in general, for more than one objective. If we consider the minimisation objectives f1 and f2, and it turns out that f1(S) < f1(S′) but f2(S) > f2(S′), then it is clear that we cannot say that S is better than S′ or vice versa. Therefore, in order to advance into multi-criteria scheduling, we first need some specific definitions. These are provided in the next section.
10.3 Definitions and Notation

Let us assume that we are dealing with a scheduling model with a number H of minimisation¹ objectives, for which two solutions x1 and x2 exist. Let us derive the following definitions:
Strong domination: A solution x1 strongly dominates solution x2 (x1 →→ x2) if:

fh(x1) < fh(x2) ∀h

That is, x1 is strictly better than x2 for all objectives.

Domination: Solution x1 dominates solution x2 (x1 → x2) if:

fh(x1) ≤ fh(x2) ∀h and fh(x1) < fh(x2) for at least one h

This implies that x1 is not worse than x2 for all objectives and is strictly better for at least one of them.

¹ This does not pose a loss of generality, since all definitions can be easily adapted. Furthermore, most scheduling objectives, as shown in Chap. 5, are of the minimisation type.
Weak domination: Solution x1 weakly dominates solution x2 (x1 ⪯ x2) if:

fh(x1) ≤ fh(x2) ∀h

This last definition implies that x1 is not worse than x2 for all objectives.
Incomparability: Solutions x1 and x2 are said to be incomparable, denoted by x1 ∥ x2 or by x2 ∥ x1, if the following two conditions are simultaneously satisfied: neither x1 ⪯ x2 nor x2 ⪯ x1. In other words, if two solutions do not weakly dominate each other, they are said to be incomparable.
Given a set of solutions A, there might be dominance relations between the solutions it contains. More specifically, one is usually interested in the dominant solutions from the set:
Weak Pareto optimum: a solution x ∈ A is a weak Pareto optimum if and only if there is no x′ ∈ A for which x′ →→ x. In other words, x ∈ A is a weak Pareto optimum if no other solution in set A strongly dominates x. ‘Weakly efficient solution’ is an alternative name for a weak Pareto optimum.

Strict Pareto optimum: a solution x ∈ A is a strict Pareto optimum if and only if there is no x′ ∈ A for which x′ → x. In other words, x ∈ A is a strict Pareto optimum if no other solution in set A dominates x. ‘Efficient solution’ is an alternative name for a strict Pareto optimum.

Pareto set: a set A′ ⊆ A is denoted a Pareto set if and only if it contains only and all strict Pareto optima. Since it contains only efficient solutions, this set is also denoted as the efficient set. Alternative names are Pareto front or Pareto frontier.
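Under the assumption that all objectives are to be minimised, these dominance relations and the extraction of the Pareto set can be sketched in a few lines of code. This is an illustrative sketch, not part of the book; solutions are represented directly by their objective vectors:

```python
def strongly_dominates(f1, f2):
    # x1 strongly dominates x2: strictly better in every (minimisation) objective
    return all(a < b for a, b in zip(f1, f2))

def weakly_dominates(f1, f2):
    # x1 weakly dominates x2: not worse in any objective
    return all(a <= b for a, b in zip(f1, f2))

def dominates(f1, f2):
    # x1 dominates x2: not worse in any objective, strictly better in at least one
    return weakly_dominates(f1, f2) and any(a < b for a, b in zip(f1, f2))

def incomparable(f1, f2):
    # neither solution weakly dominates the other
    return not weakly_dominates(f1, f2) and not weakly_dominates(f2, f1)

def pareto_set(solutions):
    # strict Pareto optima: solutions not dominated by any other in the set
    return [s for s in solutions if not any(dominates(o, s) for o in solutions)]
```

For instance, `pareto_set([(1, 5), (2, 2), (5, 1), (3, 3)])` discards (3, 3), which is dominated by (2, 2), and keeps the other three mutually incomparable vectors.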
All previous definitions can be more clearly understood by a graphical representation. Let us consider a scheduling problem with two minimisation objectives. We depict the values of the objective functions along two axes: the X-axis represents the makespan (Cmax) values and the Y-axis the total tardiness (ΣTj) values. Note that every possible solution xi to this biobjective scheduling problem can be represented by a point in this graphic. We have represented some possible solutions in Fig. 10.1.
By looking at Fig. 10.1, we can also give some further definitions:
Utopian (or ideal) solution: a solution with the best possible objective function value for all objectives. Often, this ideal solution does not exist, as the multiple objectives are usually in conflict. It has been represented in Fig. 10.1 as solution ‘U’.

Nadir solution: the opposite of the utopian solution, i.e. a solution with the worst possible objective function value for all objectives. Luckily, such a solution is
[Fig. 10.1 here: solutions x1 to x13, together with the utopian point U and the nadir point N, plotted in the objective space with Cmax on the X-axis and ΣTj on the Y-axis.]

Fig. 10.1 Representation of the objective function values of several possible solutions
not likely to exist either. In Fig. 10.1 the nadir solution has been represented by ‘N’. Solutions U and N are often used for bounding (topologically speaking) the sets of solutions for multi-objective problems.
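Given a set of objective vectors, the componentwise best and worst values yield the ideal point and an estimate of the nadir point. A hedged sketch follows; note that for more than two objectives the true nadir over the efficient set is hard to compute, and this only bounds the given set:

```python
def ideal_and_nadir(vectors):
    # componentwise best (ideal/utopian) and worst (nadir estimate) objective values
    ideal = tuple(min(col) for col in zip(*vectors))
    nadir = tuple(max(col) for col in zip(*vectors))
    return ideal, nadir
```

For the three vectors (1, 5), (2, 2) and (5, 1), the ideal point is (1, 1) and the nadir estimate is (5, 5); neither corresponds to an actual solution, which illustrates why U and N usually do not exist as schedules.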
Following with Fig. 10.1 we have the following relations:
• Picture, for example, solutions x1 and x2 . Both solutions have the same total tardi-
ness value. However, solution x2 has a lower makespan value than x1 . Therefore,
we say that solution x2 regularly dominates solution x1 or x2 → x1 .
• Following the previous example we also have that x3 → x4 , x6 → x5 , x9 → x10 → x12
and x11 → x12 .
• It can also be seen that both x7 and x8 have both a lower makespan and a lower
total tardiness than x13 . Therefore, it can be stated that x7 →→ x13 and x8 →→ x13 .
• There are some incomparable solutions in the figure. Let us pick, for example, x1 and x3: x1 has a lower total tardiness value than x3, but a worse makespan value. Therefore, we cannot conclude, in a multi-objective sense, whether x1 is better than x3 or vice versa. We then state that x1 and x3 are incomparable, or x1 ∥ x3.
• The Pareto front for the example is formed by solutions x2, x3, x6, x7, x8, x9 and x11. This is further depicted in Fig. 10.2. Note the stepwise ‘frontier’: drawing straight lines between Pareto solutions would wrongly suggest that other non-dominated solutions in between have been found.
[Fig. 10.2 here: the Pareto front of the example, formed by solutions x2, x3, x6, x7, x8, x9 and x11, drawn as a stepwise frontier in the objective space with Cmax on the X-axis and ΣTj on the Y-axis.]
10.4 Multi-Objective Models

In Chaps. 3–5, the well-known α|β|γ notation was introduced. As already indicated, the third field γ covers the criterion for the manufacturing scheduling model. In order to represent the different classes of multi-objective scheduling models, this notation is extended below for each corresponding class.
In the lexicographical approach, the objectives are ranked and optimised one at a time. First, F1 is minimised and its optimal value, f1*, is recorded. Then, the second objective is minimised subject to no deterioration in the first one:

min F2
s.t.
F1 ≤ f1*

The procedure continues until the last objective is optimised. The final solution is the lexicographic optimal solution for all considered objectives in order.
Multi-criteria scheduling models of the lexicographical class are denoted as
follows:
γ = Lex(F1 , F2 , . . . , FH ): lexicographical optimisation of objectives F1 , F2 , . . . ,
FH . The order indicates that F1 is optimised first, then F2 is optimised subject to no
deterioration in F1 and so on until FH .
Note that the order of the objectives in the γ field in these models is very important,
as different orders define different models.
According to this notation, Fm|prmu, dj |Lex(max Tj , Cmax ) indicates a model of
a permutation flowshop with due dates where the maximum tardiness is minimised
first and then, the makespan is minimised as long as there is no deterioration in the
maximum tardiness value found first. Obviously, Fm|prmu, dj |Lex(Cmax , max Tj )
constitutes a different model.
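Over an enumerable set of candidate schedules (obviously impractical for real instances, but useful to make the semantics of Lex(·) precise), lexicographic optimisation can be sketched as follows:

```python
def lex_optimise(solutions, objectives):
    # optimise the objectives in the given order; after each objective,
    # keep only the solutions attaining its best value (the ties)
    candidates = list(solutions)
    for f in objectives:
        best = min(f(s) for s in candidates)
        candidates = [s for s in candidates if f(s) == best]
    return candidates
```

With solutions given directly as objective pairs, `lex_optimise([(2, 9), (2, 3), (3, 1)], [lambda s: s[0], lambda s: s[1]])` first keeps the two solutions attaining 2 in the first objective and then returns `[(2, 3)]`; reversing the objective order returns `[(3, 1)]` instead, illustrating that the order defines the model.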
The ε-constraint is another interesting approach in which only one objective is opti-
mised at a time, while all others are kept as constraints with a known bound on their
minimum or maximum possible values. In other words, it optimises one objective
while maintaining maximum or minimum acceptable levels for all others. These
levels are known as the ε-values or ε-bounds for each objective. These values are
hard to set appropriately and surely depend on the combination of objectives and on
the scheduling problem at hand. In general, large (or small) values for the ε-bounds
allow for larger feasible regions when optimising the chosen objective and therefore,
better objective values. However, this usually sacrifices all other ε-bounded criteria.
Equally important is the choice of which objective to optimise first. Some of these disadvantages are clearly offset by the fact that, in this method, a single objective is optimised at each run, so only an existing effective single-objective algorithm is needed.
An example with three minimisation objectives is given next:
10.4 Multi-Objective Models 269
min F1
s.t.
F2 ≤ ε2
F3 ≤ ε3
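Again over an enumerable candidate set, the ε-constraint scheme amounts to filtering by the ε-bounds and optimising the remaining objective. This is an illustrative sketch; in practice the bounded objectives become constraints of an exact or heuristic single-objective method:

```python
def epsilon_constraint(solutions, f_opt, bounded):
    # bounded is a list of (objective, epsilon) pairs acting as constraints
    feasible = [s for s in solutions
                if all(f(s) <= eps for f, eps in bounded)]
    return min(feasible, key=f_opt) if feasible else None
```

For example, `epsilon_constraint([(1, 9), (4, 4), (6, 2)], lambda s: s[0], [(lambda s: s[1], 5)])` rules out (1, 9) and returns (4, 4); raising the bound to 9 enlarges the feasible region and the optimum of the chosen objective improves to (1, 9), illustrating the sensitivity to the ε-values discussed above.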
In the goal programming approach, a target value (goal) is set for each objective, and the deviations from these goals are minimised. An example with three objectives and goals G1, G2 and G3 is the following:

min E1 + D2 + E3 + D3
s.t.
F1 = G1 + E1
F2 = G2 − D2
F3 = G3 + E3 − D3
Note that G1, G2 and G3 are the target values of objectives F1, F2 and F3, respectively. Similarly, the variables ‘D’ and ‘E’ model the defect and excess values below and above the desired target values, respectively. Therefore, by minimising the sum E1 + D2 + E3 + D3 we ensure that all three objectives are as close as possible to their desired goals.
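For a candidate schedule with objective values F = (F1, F2, F3), the deviation variables of the example above are fully determined, so the goal-programming objective can be evaluated directly. The sketch below mirrors one reading of the example (excess penalised for F1, defect for F2, both for F3); it is an illustration, not the book's formulation:

```python
def gp_objective(F, G):
    # deviations implied by F1 = G1 + E1, F2 = G2 - D2, F3 = G3 + E3 - D3
    e1 = max(F[0] - G[0], 0.0)   # excess of F1 over its goal
    d2 = max(G[1] - F[1], 0.0)   # defect of F2 below its goal
    e3 = max(F[2] - G[2], 0.0)   # excess of F3
    d3 = max(G[2] - F[2], 0.0)   # defect of F3
    return e1 + d2 + e3 + d3     # E1 + D2 + E3 + D3
```

For goals (10, 9, 5) and achieved values (12, 7, 5), the penalised deviations are E1 = 2 and D2 = 2, giving a total of 4.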
Goal programming shares the same dimensionality problems with the weighted
combination of objectives. In the previous example, the objective min E1 +D2 +E3 +
D3 assumes that all deviations are measured in the same units, or alternatively, the
orders of magnitude in which each objective is measured are comparable. If this is
not the case, the same normalisation problems appear. There are many extensions of goal programming, such as adding weights in order to penalise deviations in some objectives in favour of others.
Multi-criteria scheduling models of the goal programming class are denoted as
follows:
γ = GP(F1 , F2 , . . . , FH ): Goal Programming optimisation of objectives F1
to FH .
Here, the order of the objectives is not relevant for defining the model. According to this notation, F2|prmu, dj|GP(Cmax, ΣUj) indicates a two-machine permutation flowshop model with due dates and the objective of finding schedules for which the makespan and the number of tardy jobs are below given bounds.
Pareto optimisation models provide the decision-maker with the Pareto front. There-
fore, not only one solution, but many, are given to the decision-maker.
The Pareto optimisation approach, however, has several drawbacks. First and foremost, obtaining a good Pareto set approximation is extremely complicated: many scheduling models with a single objective are already NP-hard, so obtaining many Pareto solutions is usually hard for most scheduling settings. Second, the cardinality of the Pareto front might be extremely large, making the choice of the solution to be implemented in practice hard.
Multi-criteria scheduling models of the Pareto optimisation class are denoted as
follows:
γ = #(F1, F2, F3, . . . , FH): Pareto optimisation of objectives F1 to FH.

Note that in these models, the order of the objectives is not relevant. According to the notation, Fm||#(Cmax, ΣCj) denotes a model of an m-machine flowshop where the goal is finding the Pareto set for the makespan and the total completion time.
Multi-objective optimisation methods can be classified according to when the decision-maker states his or her preferences:

• ‘A priori’ approaches. There is user intervention before the optimisation run. This information is in the form of user preferences.
• ‘Interactive’ optimisation methods. The user intervention occurs during the opti-
misation run.
• ‘A posteriori’ methods. The user is presented with a Pareto front that is as complete as possible, and selects from it the desired trade-off solution to be implemented in practice.
In the ‘a priori’ category, the user must provide some information in order to
guide the search. This information often entails objective preferences or weights.
The ‘a priori’ optimisation methods often need a single optimisation run and usually
one single solution is obtained as a result. This single solution already considers
the information and preferences given by the user. A priori multi-objective methods
are very interesting as much of the work and results obtained in the single-objective
scheduling literature can be reused. The biggest drawback is that the ‘a priori’ infor-
mation must be precise and must reflect the preferences of the user accurately. Many
times it is not possible to provide this information without any knowledge of the
resulting sequences and solutions. A possible workaround to this problem is to enter a loop in which the user provides information, a solution is obtained and later analysed. Should this solution fail to address the user’s preferences, the ‘a priori’ information can be updated accordingly and a new solution can be obtained.
Conversely, in the ‘a posteriori’ approach, the user is presented with the Pareto
front (or at least an approximation of such Pareto front). The user can then examine as
many solutions as desired with the aid of graphical tools so as to select the solution to
be implemented in practice. This is an ideal approach as no preferences or any a priori
information must be provided. However, this approach is not exempt from problems. First, obtaining an approximation of the Pareto front might be computationally costly and/or challenging. Let us not forget that any a posteriori multi-objective optimiser has to obtain not just one but many non-dominated solutions that sufficiently cover the objective space. Due to these problems, most a posteriori optimisation methods are often limited to two simultaneous objectives. Second, there is a lingering problem of information overload: given a large set of non-dominated solutions, the user can easily become overwhelmed by the quantity of information. However, there are techniques that can be used to cluster solutions in the objective space so as to present the user with a given number of selected solutions.
Lastly, the interactive approaches seek to amalgamate the advantages of the two
previous categories. With some limited information, an optimisation run is carried
out and a solution is presented to the user. After examining the solution, the user
provides further information, like where to go, what to avoid or what to enforce.
A second optimisation is run and another solution is presented to the user. The process
goes on for as long as the user desires or until he or she finds an acceptable solution. With proper training, very good solutions (albeit usually not provably Pareto optimal) can be obtained. This methodology allows the inclusion of advanced human knowledge in the process. Surprisingly, not many interactive approaches have been proposed in the multi-objective scheduling literature.
The different classes of models discussed in Sect. 10.4 can be analysed under the
light of the approaches in which these models may be embedded:
• In principle, linear convex combination (LCC) models fall within the a priori approach, since the weights of the different objectives must be given to formulate the model. However, these models can be embedded into an interactive approach where, at each iteration, the weights are changed according to the opinion of the decision-maker in order to direct the search towards new solutions.
• Lexicographical optimisation requires very little data, but it should still be considered an ‘a priori’ technique, since the user must provide, at the very least, the order of the objectives to optimise.
• ε-constraint models are often regarded as adequate for a priori approaches. How-
ever, many authors also pinpoint the need to run the method several times when
adjusting the ε-bounds, which effectively turns the technique into an interactive approach. Under specific scenarios and following certain (actually rather strict) conditions (T’Kindt and Billaut 2006), the ε-constraint method can generate the Pareto set.
• Goal Programming is clearly an ‘a priori’ method since aspiration levels or goals
for each objective have to be set.
• Pareto optimisation is the typical example of an ‘a posteriori’ approach. Once the decision-maker is confronted with a set of dominant solutions, he or she knows that each given solution is non-dominated and that improving one objective always entails a deterioration of the other objectives; in such a case, the closest non-dominated solution is available in the given Pareto front approximation. The ‘a posteriori’ decision is then reduced to choosing one of the non-dominated solutions to be implemented in practice.
Recall from Sect. 7.3.3 that it is possible to establish a hierarchy of the computational complexity of different scheduling problems and thus construct reduction trees such as the ones in Figs. 7.1, 7.2 and 7.3. Clearly, the first two reduction trees, which correspond to layout and constraints, remain valid in the multi-criteria context. The third reduction tree is valid for the different objectives within the same model. In addition, another reduction tree with respect to the models can be devised.
On one hand, it is clear that multi-criteria scheduling models are computationally more complex than their single-criterion counterparts; therefore, any single-criterion model α|β|C can be reduced to model α|β|ε(C/ . . .) by taking very large upper bounds for the rest of the objectives. Also quite obviously, the solution obtained by any α|β|Lex(C, . . .) model serves to obtain the solution of the single-criterion model.
10.5 Multi-Objective Methods
Given the host of different multi-objective techniques, the existing methods and algorithms for solving multi-objective scheduling problems are incredibly numerous and, in most cases, extremely model- and method-specific. Discussing or even classifying them is beyond the scope of the book, and we refer the reader to some specific sources in Sect. 10.8. In what follows, we simply refer to one specific type of models (Pareto optimisation models) and algorithms (metaheuristics). Pareto optimisation models are picked because, as we have seen, they constitute the most general approach, in the sense that some other approaches reduce to these models. Analogously, metaheuristics are chosen as they are the least model-specific type of algorithms and, therefore, their study will be more valuable for a reader approaching this field for the first time.
Among the different types and variants of metaheuristics discussed in Sect. 9.4, Genetic Algorithms (GAs) seem to be particularly well-suited for Pareto optimisation
[Fig. 10.4 here: a population of 20 solutions in the objective space (Cmax on the X-axis, ΣTj on the Y-axis), partitioned by non-dominated sorting into FRONT 1 (x1–x5), FRONT 2 (y1–y10) and FRONT 3 (z1–z5).]
Fig. 10.4 An example of a set of solutions and the non-dominated sorting procedure
models, since they are population-based approaches and keep a set of solutions. The efficient solutions within this set can be selected so that this subset yields an approximation of the Pareto front. As a result, GAs are a natural choice for multi-objective problems.
Let us illustrate the main features of these algorithms by describing the well-known Non-dominated Sorting Genetic Algorithm (NSGA) by Srinivas and Deb (1994) and its subsequent version NSGA-II by Deb et al. (2002). These two versions of GA have attracted a lot of interest from researchers, and NSGA-II is considered a benchmark algorithm for Pareto optimisation models.
NSGA basically differs from a simple GA only in the selection operator. As the
name of the algorithm implies, there is a non-dominated sorting procedure (NDS)
that is applied to the current population at each generation. Randomly selecting any
individual from the population to do crossover and mutation is generally a bad idea
in genetic algorithms. In multi-objective optimisation, a selection bias should be
given to non-dominated solutions. NDS iteratively divides the entire population into
different sets or fronts. The first front is formed by the non-dominated individuals in
the population. Then, the individuals in the first front are removed from the population
and, from the remaining individuals, the non-dominated ones are extracted again to
constitute the second front. The procedure continues until all individuals have been
assigned a front. An example of this procedure is graphically shown in Fig. 10.4.
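The NDS peeling loop just described can be sketched as follows. This is a naive version whose repeated pairwise scans reflect the cubic running time of the original procedure; individuals are represented by their objective vectors:

```python
def non_dominated_sort(population):
    # iteratively peel off fronts of mutually non-dominated objective vectors
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    remaining = list(population)
    fronts = []
    while remaining:
        # current front: vectors not dominated by anything still remaining
        front = [s for s in remaining
                 if not any(dominates(o, s) for o in remaining)]
        fronts.append(front)
        remaining = [s for s in remaining if s not in front]
    return fronts
```

For the vectors (1, 4), (2, 2), (4, 1), (3, 3) and (4, 4), the first front contains the first three, then (3, 3) forms the second front, and (4, 4) the third.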
As we can see from Fig. 10.4, the population is formed by 20 individuals. Individuals x1, x2, . . . , x5 form the first front (denoted in the figure as FRONT 1), as they are not dominated by any other individual. If we remove these 5 individuals, the remaining fronts (FRONT 2 and FRONT 3 in the figure) are obtained by repeating the procedure. NSGA, however, suffers from some drawbacks:

• NSGA does not preserve the best individuals found from one generation to the next, something that could be achieved by using elitism or other ways of preserving the good individuals from one generation to another.
• NDS is actually a computationally demanding procedure. Its computational complexity is O(M · Pop³), where M is the number of objectives and Pop is the size of the population. Recall that this procedure has to be applied at each generation of the NSGA method.
• Setting the sharing parameter σshare has proven to be far from trivial and problem
dependent.
All these three problems motivated the design of NSGA-II, an improved version
of NSGA. NSGA-II deals with the three aforementioned caveats of the NSGA algo-
rithm. NSGA-II includes elitism since the old and new populations are combined
at the end of each generation to form a new augmented population of size 2 · Pop.
A more effective Fast Non-Dominated Sorting procedure (FNDS) is applied. This operator is used in order to select the best fronts for the final population, which is again of size Pop. Since all individuals from the old generation are considered, elitism is preserved. FNDS has a lower computational complexity of O(M · Pop²). A rank value is assigned to each individual of the population according to the front it belongs to. Two additional fast operators, the Crowding Distance calculation and assignment and the Crowded-Distance Sorting Operator, are employed to dispense with σshare while still achieving fitness sharing. NSGA-II performs markedly better than NSGA and than most well-known multi-objective approaches. For many continuous optimisation problems, it actually achieves state-of-the-art results. Given FNDS, the implementation of NSGA-II is not difficult. All these features are key to the success of NSGA-II in the literature.
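The crowding distance just mentioned can be sketched as follows for a single front of objective vectors. Boundary solutions receive infinite distance so that they are always preferred, and interior solutions accumulate the normalised side lengths of the cuboid spanned by their neighbours (an illustrative sketch of the standard calculation, not NSGA-II source code):

```python
def crowding_distance(front):
    # front: list of objective vectors, assumed mutually non-dominated
    n, m = len(front), len(front[0])
    if n <= 2:
        return [float('inf')] * n
    dist = [0.0] * n
    for h in range(m):
        # sort indices by the h-th objective
        order = sorted(range(n), key=lambda i: front[i][h])
        lo, hi = front[order[0]][h], front[order[-1]][h]
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue
        for k in range(1, n - 1):
            i = order[k]
            dist[i] += (front[order[k + 1]][h] - front[order[k - 1]][h]) / (hi - lo)
    return dist
```

For the front (1, 3), (2, 2), (3, 1), the two extreme points get infinite distance and the middle point accumulates 1.0 per objective, for a total of 2.0.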
The Iterated Greedy (IG) method discussed in Chap. 9 has also been extended to Pareto optimisation, resulting in the Restarted Iterated Pareto Greedy (RIPG) algorithm. Its main features are the following:

• Use of a population of solutions. While the standard IG works with only one incumbent solution and a best solution found so far, RIPG handles a population (of variable size) of non-dominated solutions as a working set. In order to provide an
initial population, constructive heuristics for each one of the objectives considered can be used. Note that there is no guarantee that these solutions are non-dominated, and a greedy phase of IG is applied to them in order to obtain an initial set of non-dominated solutions.
• Selection operator. Since the main logic of the IG, namely, the destruction/
reconstruction, is still carried out over a single-selected solution from the working
set, a selection operator is needed. RIPG employs a modified version of the Crowd-
ing Distance operator of NSGA-II where there is a selection counter keeping track
of the number of times that a solution has been already selected in order to add
diversity.
• Reconstruction. In RIPG, instead of fully reconstructing the selected solution after
the destruction, a whole set of non-dominated solutions is built by inserting each
removed job into a population of partial solutions. After the greedy phase, the
working set is updated with the recently created set of non-dominated solutions from the reconstruction procedure. After each update of the working set, dominated solutions are removed.
• Local search. The local search phase consists in randomly choosing one element from the selected solution, removing it and reinserting it into a number of adjacent positions (which can be seen as a curtailed local search). Furthermore, this procedure is repeated as many times as the number of times the solution has been previously selected (selection counter). This way, a deeper local search is carried out, leading to stronger local optima.
• Restarting. In order to prevent the method from getting stuck, a restart operator is proposed. This operator consists in archiving the current working set and then creating a new one with randomly generated solutions. Note that random generation of the new working set is both fast and simple. The restart mechanism is triggered after a number of iterations without changes in the cardinality of the working set.
With all previous explanations, a sketch of the RIPG method is outlined in Algorithm 2.
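Since Algorithm 2 is not reproduced here, the following heavily simplified sketch may help fix ideas. It is not the authors' Algorithm 2: the greedy reconstruction is collapsed to reinserting each removed job at its best position for one randomly chosen objective, and the restart, local search and selection-counter mechanisms are omitted. Solutions are job sequences, and `evaluate` returns the objective vector of a (possibly partial) sequence:

```python
import random

def weakly_dominates(fa, fb):
    return all(a <= b for a, b in zip(fa, fb))

def update_working_set(working, newcomers, evaluate):
    # merge newcomers into the working set and drop dominated solutions
    pool = working + [s for s in newcomers if s not in working]
    scored = [(s, evaluate(s)) for s in pool]
    return [s for s, fs in scored
            if not any(weakly_dominates(ft, fs) and ft != fs for _, ft in scored)]

def ripg_like(jobs, evaluate, iterations=100, d=2, seed=0):
    rng = random.Random(seed)
    working = update_working_set([], [list(jobs)], evaluate)
    for _ in range(iterations):
        current = list(rng.choice(working))
        # destruction: remove d jobs at random
        removed = [current.pop(rng.randrange(len(current))) for _ in range(d)]
        # simplified greedy reconstruction: best position for a random objective
        for job in removed:
            h = rng.randrange(len(evaluate(current)))
            current = min((current[:i] + [job] + current[i:]
                           for i in range(len(current) + 1)),
                          key=lambda s: evaluate(s)[h])
        working = update_working_set(working, [current], evaluate)
    return working
```

By construction, the returned working set contains only mutually non-dominated sequences, which is the set offered to the decision-maker.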
10.6 Comparison of Pareto Optimisation Algorithms

Regarding the quality of the solutions offered to the decision-maker, the only case
in which the comparison of the outcomes of two Pareto optimisation algorithms is
more or less straightforward is when the approximation of the Pareto front of the first
algorithm (set A) completely dominates that of the second (set B), i.e. each and every solution in B is dominated by at least one solution in A. We can see an example in
the previous Fig. 10.4. In that figure, we can assume that the FRONT 1 comes from a
given algorithm A and that FRONT 2 comes from a second algorithm B. It is clear—
again from the viewpoint of the solutions to be offered to the decision-maker—that
algorithm A is better than algorithm B, as all solutions in FRONT 2 are dominated
by at least one solution in FRONT 1.
However, as we have already noted, the quantity and diversity of the solutions offered to the decision-maker are also important. From this perspective, FRONT 2 contains twice as many solutions as FRONT 1. Furthermore, there are large gaps in the objective space between solutions x2 and x3 and between solutions x3 and x4 in FRONT 1.
Both aspects (quality and quantity/diversity) have to be balanced. For instance,
while it is clear that solution y4 from FRONT 2 is dominated by solution x4 from
FRONT 1, this dominance might be just by a few percentage points of improvement
at each objective and therefore of little practical relevance for the decision-maker.
He or she might very well be interested in a set with solutions of lower quality as long as it contains more, and more evenly spread, solutions, so as to have a richer basis for decision-making.
As we can see, assessing performance in multi-objective optimisation is a very
complex issue in its own right. There are actually two interrelated problems involved
when assessing the performance of multi-objective scheduling methods:
Both issues can be dealt with using the same techniques, which fall within one of the following three approaches:
• Performance measures. The set produced by an algorithm is summarised by a
number indicating its quality/quantity/diversity.
• Dominance ranking. It is possible to rank a given algorithm over another based on
the number of times that the resulting sets dominate each other.
• Empirical Attainment Functions (EAF). Attainment functions (AF) provide the relative frequency with which each region of the objective space is attained by the set of solutions given by an algorithm.
Each of these three approaches presents a number of advantages and disadvantages.
These are discussed in the following sections.
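As an illustration of the performance-measure approach, the hypervolume indicator mentioned later in this chapter condenses a front into a single number: for two objectives, the area dominated by the front up to a reference point. The following is a minimal sketch for two minimisation objectives; the fronts and reference point are made-up data:

```python
def hypervolume_2d(front, ref):
    """Hypervolume (area) dominated by a set of bi-objective minimisation
    points, bounded by the reference point `ref`; larger is better."""
    rx, ry = ref
    area, prev_y = 0.0, ry
    for x, y in sorted(front):           # sweep by the first objective
        if x >= rx or y >= prev_y:
            continue                     # outside the box or already covered
        area += (rx - x) * (prev_y - y)  # new strip added by this point
        prev_y = y
    return area

# Hypothetical front: the larger the dominated area, the better the front.
hv = hypervolume_2d([(1, 5), (2, 3)], ref=(10, 10))   # 61.0
```

The sweep adds, for each point taken in order of the first objective, only the strip of area not already covered by earlier points, so dominated points contribute nothing.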
Note that the last two measures can also be employed for comparing the perfor-
mance of one algorithm relative to another (pairwise measures). In addition, other
measures can be recorded, such as b, the number of non-dominated solutions of
each algorithm after the two algorithms are compared with each other, and b/a, the
percentage of non-dominated solutions (the quotient between b and a).
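The pairwise counts b and b/a just described can be sketched as follows, with a taken as the size of the algorithm's own front; the fronts are illustrative data and both objectives are assumed to be minimised:

```python
def dominates(u, v):
    """Pareto dominance for minimisation objectives."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated_count(front, other):
    """b: number of solutions of `front` that remain non-dominated once
    the two fronts are merged and compared with each other."""
    merged = front + other
    return sum(1 for p in front if not any(dominates(q, p) for q in merged))

# Hypothetical fronts (makespan, total tardiness), both minimised.
front_a = [(10, 50), (12, 40), (15, 30)]
front_b = [(11, 38), (13, 45), (16, 35)]
b = nondominated_count(front_a, front_b)   # (12, 40) is dominated by (11, 38)
ratio = b / len(front_a)                   # b/a, with a the size of front_a
```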
Note that there is an intrinsic loss of information when using performance measures,
as comparing two Pareto fronts, each with a (possibly) large number of solutions, is
obviously not the same as comparing just two performance measures.
10.6 Comparison of Pareto Optimisation Algorithms 281
EAFs basically depict the probability for an algorithm to dominate a given point of the
objective space in a single run. As many multi-objective algorithms are stochastic,
different approximations to the Pareto front might be obtained in different runs, even
for the same instance. EAFs use colour gradients to show the relative number of times
that each region of the objective space is dominated. Unlike the previous
performance measures, EAFs do not condense information, and the behaviour over
the whole Pareto front can be observed.
More formally, AFs are defined as follows: let SS be a set of non-dominated
solutions for a scheduling problem. AF(x), the Attainment Function of a solution x,
is defined as AF(x) = P(∃s ∈ SS : s ⪯ x), which describes the probability for a given
Pareto optimisation algorithm of producing, in a single run, at least one solution that
weakly dominates x.
In the case of stochastic algorithms, it is not possible to express this function in
a closed form but one can approximate it empirically using the outcomes of several
runs. This approximation is referred to as EAF. EAFs graphically show, for a certain
instance, which area is more likely to be dominated by a given algorithm. A plot is
given in Fig. 10.5 taken from Minella et al. (2011) for the application of the algorithm
MOSAIIM of Varadharajan and Rajendran (2005) for the Fm|prmu|#(Cmax , Tmax )
model.
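The empirical approximation of AF(x) over several runs can be sketched as follows; the runs are made-up outputs of a hypothetical stochastic algorithm, and both objectives are assumed to be minimised:

```python
def weakly_dominates(u, v):
    """u weakly dominates v: u is no worse than v in every objective."""
    return all(a <= b for a, b in zip(u, v))

def eaf(runs, z):
    """Empirical attainment at point z: fraction of runs whose output set
    contains at least one solution weakly dominating z."""
    attained = sum(any(weakly_dominates(s, z) for s in run) for run in runs)
    return attained / len(runs)

# Three hypothetical runs of a stochastic bi-objective algorithm.
runs = [[(10, 50), (14, 30)],
        [(11, 45), (15, 28)],
        [(9, 60)]]
p = eaf(runs, (14, 45))   # attained in 2 of the 3 runs
```

Evaluating this fraction over a grid of points of the objective space yields exactly the colour-graded picture described in the text.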
As we can see, the MOSAIIM algorithm manages to obtain a good set of
solutions which is evenly spread out. The solid blue areas indicate that those parts
of the objective space are dominated by solutions obtained by MOSAIIM 100 % of
the time. Solid white areas indicate the opposite. We also see that the transition
from solid blue to solid white is quick, which means that MOSAIIM is robust and
obtains fairly similar Pareto fronts at each run.
EAFs show the result, over many different runs, of an algorithm on one instance.
In order to compare two algorithms, the Differential EAF, or Diff-EAF, can be employed.
In short, a Diff-EAF subtracts the two EAFs generated by two different algorithms
for the same instance and puts the result in the same chart. By analysing Diff-EAF
images, one can see in which part of the objective space one algorithm is better than
the other.
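A Diff-EAF of this kind can be sketched by evaluating the two empirical attainment functions over a grid of points of the objective space and subtracting them (illustrative data, minimisation assumed):

```python
def weakly_dominates(u, v):
    return all(a <= b for a, b in zip(u, v))

def eaf(runs, z):
    """Fraction of runs attaining (weakly dominating) point z."""
    return sum(any(weakly_dominates(s, z) for s in run) for run in runs) / len(runs)

def diff_eaf(runs_a, runs_b, grid):
    """EAF(A) - EAF(B) at every grid point of the objective space:
    positive values mark zones attained more often by A, negative by B."""
    return {z: eaf(runs_a, z) - eaf(runs_b, z) for z in grid}

# Hypothetical runs of two algorithms and a coarse evaluation grid.
runs_a = [[(10, 40)], [(11, 42)]]
runs_b = [[(13, 45)], [(14, 50)]]
grid = [(12, 44), (14, 50), (9, 30)]
d = diff_eaf(runs_a, runs_b, grid)
```

In a plotted Diff-EAF the positive and negative values of `d` would be mapped to the two colour gradients described in the text.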
Fig. 10.5 Empirical attainment function after 100 runs of the algorithm MOSAIIM of Varadharajan
and Rajendran (2005) (modified by Minella et al. 2008) over instance Ta081 of Taillard (1993) with
100 jobs and 20 machines. Objectives of makespan and total tardiness
For instance, Fig. 10.6 shows a second EAF for the same instance as in Fig. 10.5
but in this case for the RIPG algorithm described in Sect. 10.5.2.1.
The Pareto front given by RIPG is closer to the axes and contains solutions
closer to the ideal point. Although the scales in Figs. 10.5 and 10.6 are identical, it
is not easy to compare the two plots. It is in such cases that Diff-EAFs come in handy.
We can subtract the two figures and plot a Diff-EAF with two colour gradients
(blue and red) assigned to the positive and negative values of the Diff-EAF, respectively.
This way, we are able to display the two EAFs in a single plot, which permits a quick
and graphical identification of which algorithm dominates each zone of the objective space.
The intensity of the colour assigned to each algorithm shows the probability of
that algorithm dominating a point in the space over the other algorithm. Points
of low colour intensity show smaller differences between the algorithms. Notice that
white, or no colour, indicates either that neither algorithm can generate solutions dominating
this region, or that both algorithms have the same probability of generating dominating
points, and hence the difference is zero. Figure 10.7 shows the corresponding Diff-
EAF. The ‘moustache’ type of plot indicates that, for this instance, RIPG dominates
MOSAIIM over the whole objective space. Recall, however, that EAFs need to be
examined instance by instance.
Although several algorithms and tools are available to construct EAF plots,
EAFs are hard to code and hard to manage. Another main drawback of both EAFs and
Diff-EAFs is that one plot has to be generated for each instance and for each pair of
algorithms. Furthermore, to the best of our knowledge, and at least at the time of the
writing of this book, EAFs and Diff-EAFs have not been applied to multi-objective
problems with more than two objectives.
Fig. 10.6 Empirical Attainment Function (EAF) after 100 runs of the algorithm RIPG of Minella
et al. (2011) over instance Ta081 of Taillard (1993) with 100 jobs and 20 machines. Objectives of
makespan and total tardiness.
Fig. 10.7 Differential empirical attainment function (Diff-EAF) of the two EAFs given in
Figs. 10.5 and 10.6 between algorithms MOSAIIM and RIPG. Zones in blue indicate MOSAIIM
dominance whereas zones in red show RIPG dominance. Instance Ta081 of Taillard (1993) with
100 jobs and 20 machines. Objectives of makespan and total tardiness
So far, there is no unified name for these models: some of the most popular
options are interfering jobs, heterogeneous-criteria scheduling or multi-agent schedul-
ing. These models are, in fact, a general case of multicriteria scheduling models (note that all
the previous multicriteria models are particular cases of the corresponding interfering
jobs models).
A detailed discussion on the performance assessment of multi-objective
problems can be found in Zitzler et al. (2008), following the previous studies of Zitzler
et al. (2003) and Knowles et al. (2006). The hypervolume and the epsilon indicators
were introduced in Zitzler and Thiele (1999) and in Zitzler et al. (2003). A fast
algorithm for the calculation of the hypervolume indicator, otherwise a very time-
consuming indicator, is provided in Deb (2001). The Iε1 version of this indicator is
provided by Knowles et al. (2006). EAFs were first proposed by Grunert da Fonseca
et al. (2001) and were later analysed in more detail by Zitzler et al. (2008). Differential
EAFs are described in López-Ibáñez et al. (2006) and López-Ibáñez et al. (2010).
Finally, a classification of interfering jobs problems is given in Perez-Gonzalez and
Framinan (2014).
References
Abraham, A., Jain, L., and Goldberg, R., editors (2005). Evolutionary Multiobjective Optimization:
Theoretical Advances and Applications. Springer-Verlag, London.
Bagchi, T. P. (1999). Multiobjective Scheduling by Genetic Algorithms. Kluwer Academic Pub-
lishers, Dordrecht.
Coello, C. A. (2004). Applications of Multi-Objective Evolutionary Algorithms. World Scientific
Publishing Company, Singapore.
Coello, C. A., Lamont, G. B., and van Veldhuizen, D. A. (2007). Evolutionary Algorithms for
Solving Multi-Objective Problems. Springer, New York, second edition.
Collette, Y. and Siarry, P. (2003). Multiobjective Optimization: Principles and Case Studies.
Springer-Verlag, Berlin-Heidelberg.
Daniels, R. L. and Chambers, R. J. (1990). Multiobjective flow-shop scheduling. Naval research
logistics, 37(6):981–995.
Deb, K. (2001). Multi-Objective Optimization Using Evolutionary Algorithms. John Wiley & Sons,
West Sussex, England.
Deb, K., Pratap, A., Agarwal, S., and Meyarivan, T. (2002). A fast and elitist multiobjective genetic
algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182–197.
Ehrgott, M. (2005). Multicriteria Optimization. Springer, Berlin-Heidelberg, second edition.
Emmons, H. (1975). A note on a scheduling problem with dual criteria. Naval Research Logistics
Quarterly, 22(3):615–616.
Geiger, M. J. (2007). On operators and search space topology in multi-objective flow shop schedul-
ing. European Journal of Operational Research, 181(1):195–206.
Grunert da Fonseca, V., Fonseca, C. M., and Hall, A. O. (2001). Inferential performance assessment
of stochastic optimisers and the attainment function. In Zitzler, E., Deb, K., Thiele, L., Coello
Coello, C. A., and Corne, D., editors, Proceedings of the First International Conference on Evolu-
tionary Multi-Criterion Optimization, volume 1993 of Lecture Notes in Computer Science, pages
213–225, Berlin. Springer.
Gupta, J. N. D. and Werner, F. (1999). On the solution of 2-machine flow and open shop scheduling
problems with secondary criteria. In 15th ISPE/IEE International Conference on CAD/CAM,
Robotic, and Factories of the Future, Aguas de Lindoia, Sao Paulo, Brasil. Springer-Verlag.
Knowles, J., Corne, D., and Deb, K. (2008). Multiobjective Problem Solving From Nature: From
Concepts to Applications. Springer-Verlag, Berlin-Heidelberg.
Knowles, J., Thiele, L., and Zitzler, E. (2006). A tutorial on the performance assessment of sto-
chastic multiobjective optimizers. Technical Report 214, Computer Engineering and Networks
Laboratory (TIK), ETH Zurich, Switzerland. revised version.
López-Ibáñez, M., Paquete, L. F., and Stützle, T. (2006). Hybrid population-based algorithms for the
bi-objective quadratic assignment problem. Journal of Mathematical Modelling and Algorithms,
5(1):111–137.
López-Ibáñez, M., Stützle, T., and Paquete, L. F. (2010). Graphical tools for the analysis of
bi-objective optimization algorithms. In Proceedings of the 12th annual conference on Genetic
and evolutionary computation (GECCO 2010), pages 1959–1962, New York, NY, USA. ACM.
Mc Cormick, S. T. and Pinedo, M. (1995). Scheduling n independent jobs on m uniform machines
with both flowtime and makespan objectives: a parametric analysis. ORSA Journal on Computing,
7(1):63–77.
Minella, G., Ruiz, R., and Ciavotta, M. (2008). A review and evaluation of multi-objective algorithms
for the flowshop scheduling problem. INFORMS Journal on Computing, 20(3):451–471.
Minella, G., Ruiz, R., and Ciavotta, M. (2011). Restarted iterated Pareto greedy algorithm for multi-
objective flowshop scheduling problems. Computers & Operations Research, 38(11):1521–1533.
Nagar, A., Heragu, S. S., and Haddock, J. (1995). A branch-and-bound approach for a two-machine
flowshop scheduling problem. Journal of the Operational Research Society, 46(6):721–734.
Paquete, L. F. (2005). Stochastic Local Search Algorithms for Multiobjective Combinatorial
Optimization: Method and Analysis. PhD thesis, Computer Science Department. Darmstadt Uni-
versity of Technology. Darmstadt, Germany.
Perez-Gonzalez, P. and Framinan, J. M. (2014). A common framework and taxonomy for multicriteria
scheduling problems with interfering and competing jobs: Multi-agent scheduling problems.
European Journal of Operational Research, 235(1):1–16.
Rajendran, C. (1992). Two-stage flowshop scheduling problem with bicriteria. Journal of the
Operational Research Society, 43(9):871–884.
Ruiz, R. and Stützle, T. (2007). A simple and effective iterated greedy algorithm for the permutation
flowshop scheduling problem. European Journal of Operational Research, 177(3):2033–2049.
Sarin, S. C. and Hariharan, R. (2000). A two machine bicriteria scheduling problem. International
Journal of Production Economics, 65(2):125–139.
Selen, W. J. and Hott, D. D. (1986). A mixed-integer goal-programming formulation of the standard
flowshop scheduling problem. Journal of the Operational Research Society, 37(12):1121–1128.
Sivrikaya-Şerifoğlu, F. and Ulusoy, G. (1998). A bicriteria two-machine permutation flowshop
problem. European Journal of Operational Research, 107(2):414–430.
Srinivas, N. and Deb, K. (1994). Multiobjective optimization using nondominated sorting in genetic
algorithms. Evolutionary Computation, 2(3):221–248.
Taillard, E. (1993). Benchmarks for basic scheduling problems. European Journal of Operational
Research, 64(2):278–285.
T’Kindt, V. and Billaut, J.-C. (2001). Multicriteria scheduling problems: A survey. RAIRO Recherche
operationnelle - Operations Research, 35(2):143–163.
T’Kindt, V. and Billaut, J.-C. (2006). Multicriteria Scheduling: Theory, Models and Algorithms.
Springer, New York, second edition.
Varadharajan, T. and Rajendran, C. (2005). A multi-objective simulated-annealing algorithm for
scheduling in flowshops to minimize the makespan and total flowtime of jobs. European Journal
of Operational Research, 167(3):772–795.
Vignier, A., Billaut, J.-C., and Proust, C. (1999). Les problèmes d’ordonnancement de type flow-
shop hybride: État de l’art. RAIRO Recherche opérationnelle, 33(2):117–183. In French.
Wilson, J. M. (1989). Alternative formulations of a flowshop scheduling problem. Journal of the
Operational Research Society, 40(4):395–399.
Zitzler, E., Knowles, J., and Thiele, L. (2008). Quality assessment of pareto set approximations. In
Multiobjective Optimization: Interactive and Evolutionary Approaches, pages 373–404, Berlin,
Heidelberg. Springer-Verlag.
Zitzler, E. and Thiele, L. (1999). Multiobjective evolutionary algorithms: A comparative case
study and the strength pareto approach. IEEE Transactions on Evolutionary Computation, 3(4):
257–271.
Zitzler, E., Thiele, L., Laumanns, M., Fonseca, C. M., and da Fonseca, V. G. (2003). Performance
assessment of multiobjective optimizers: an analysis and review. IEEE Transactions on Evolu-
tionary Computation, 7(2):117–132.
Part IV
Scheduling Tools
Part IV of the book includes Chaps. 11 and 12 and deals with software tools
to embed models and methods. We start by discussing in detail the requirements
and main features of modern scheduling tools and describe the software archi-
tecture of a generic scheduling tool. Advanced design issues, such as integration
with other decision systems in the company, data requirements and accuracy,
robustness and user interaction, are also treated in this part of the book.
Chapter 11
Overview of Scheduling Tools
11.1 Introduction
This is the first chapter of a series in the book in which we address a number of
issues related to building software tools that embed the models and solution procedures
introduced in the previous parts of the book, so that the piece of software may support
the scheduling decision process. This topic is not as commonly treated as some of the
aspects of manufacturing scheduling discussed in the preceding chapters and, in some
cases, it is plainly ignored. However, as we will see, the use of manufacturing scheduling
tools is key to the real-world applicability of the models and methods presented earlier.
More specifically in this chapter, we
• discuss the convenience of performing manufacturing scheduling decisions with
the aid of a software tool, i.e. a Decision Support System (DSS) (Sect. 11.2),
• summarise the (otherwise rather brief) history of scheduling tools in the last few
decades (Sect. 11.3),
• analyse a manufacturing scheduling tool under the perspective of a business infor-
mation system, and acknowledge the design of the architecture of such a system
as the main generic block of a scheduling tool (Sect. 11.4),
• present earlier approaches to the architectures of scheduling tools (Sect. 11.5),
• identify the different generic requirements of the scheduling tool (Sect. 11.6) and
• discuss an architecture of a scheduling tool fulfilling these requirements
(Sect. 11.7).
(1993), humans have a broad base of common sense, are intuitive and have an ability
to summarise knowledge. However, human expertise is difficult to teach to others
and is frequently overloaded by computation. Besides, there is a variability in the
response that may be expected from a human. On the other hand, computerised
systems possess very fast computation abilities and can be designed to cope with
an extraordinary number of different variables. In contrast, they cannot easily learn
from experience and definitely do not have common sense.
Therefore, the (so far) most successful approach to combining the strengths of
both human and computer is for the latter to offer interactive support
to the decisions that should be made by the human. By this approach, the computation
abilities of the computer are employed to iteratively present the human (the ultimate
decision-maker) with useful information that helps him/her to identify problems
and to make decisions. This type of tool is named Decision Support System or
DSS. Note that the term DSS encompasses a large variety of (mostly computer-
based) tools, including the so-called ‘expert systems’, which include a sophisticated
database (named the knowledge base) in which a number of rules (usually derived from
human behaviour) are stored. The majority of the manufacturing scheduling
systems described can be considered as DSS. However, there is some controversy
on the suitability of this type of systems for scheduling, as some authors claim that
very few of these tools have been successfully implemented (Wiers 1997).
In this section, we will conduct an extremely brief review of the main characteristics
of the scheduling tools employed in the last few decades. The field starts with the
early attempts of interactive scheduling tools in the 1960s and 1970s, which are
summarised in Godin (1978). It is worth noting that most of the systems discussed in this
reference were prototypes and that none of them was successful enough to lead to the
development of commercial software. All these tools shared a number of features and
difficulties (see, e.g. Morton and Pentico 1993 for a discussion). We have grouped
them into the following categories:
• Scheduling state-of-the-art. While there was a general consensus that interactive
scheduling systems could provide a response to scheduling decision-making, the
state of the art in the scheduling field was not ready for solving large combinatorial
problems. Research was very disjoint and authors did not communicate
with each other.
• Technology. Computer technology was still quite expensive and not affordable for
all companies, interactive systems were still in their infancy, and software houses
avoided the area due to the need for customised deployments and expensive user
training.
• Humans. First, schedulers did not understand their own job and were not moti-
vated by customer goals. Second, decision-makers did not fully understand the
• It shortens the development cycle of the tool, as it saves time (and money) that
would otherwise have to be allocated to the activities of requirements analysis and
design.
• It ensures that the main functionalities of a scheduling system are adequately
covered; thus, the architecture acts both as a checklist and as a design guide for the
developers. Note that a failure to properly identify the functionalities of a system
may not be discovered until later stages and may result in an expensive redesign
and recoding.
• It allows the reutilisation for future systems of part of the code developed, pro-
vided that the architecture is described in terms of blocks or function-specific
modules. This strategy is adopted in reuse-oriented process models for software
development.
• It serves to scale the effort by prioritising certain blocks in the development process,
thus allowing a gradual development of the system, keeping enhancements of the
system for a later stage.
In the following sections, we will focus on identifying a generic architecture for a
scheduling tool. To do so, we first review a seminal work dealing with architectures of
scheduling tools (Pinedo and Yen 1997) and discuss its advantages and limitations.
Building upon this early work, we will identify the main (generic) requirements
to be covered by a manufacturing scheduling tool. Finally, these requirements are
discussed and grouped together in order to propose an integrated modular architecture
implementing them.
Despite these facts, this seminal work in the field of scheduling tools includes
a number of issues in its architecture that we consider worth discussing before
proceeding further. These are the following:
• A very general critique of the architectural design refers to the main modules in
the architecture: although there is a database module, it is not clear which kind of
data it should contain. Initially, one may think of production process data (such as
the routing schema or the processing times of the machines) as well as job-
related data. However, in most real-life systems, a big part of these
data, particularly job-related data, are already stored in an information system
(such as an ERP system). The interoperability of the scheduling tool with the rest
of the business information system is not explicitly considered in the presentation of
the architecture.
• Another issue refers to the adopted model. As mentioned before, it follows the
three-tier architecture adopted in many information systems. Although this model
is quite useful for software reutilisation in many developments, we doubt that it
can be easily adopted for a scheduling tool. First, the user interface (the presentation
tier) should interact not only with the schedule generator, but also with the database,
as it is clear that many data should be entered into the system by the user in a
(possibly) graphical way. Therefore, the database should lie at the heart of the
system, and the rest of the modules of the tool should have direct access to it.
• The algorithm library itself could be part of the database, particularly if the imple-
mentation follows the pattern adopted by some commercial systems (e.g. in SAP
R/3, the functionalities of the system are stored in the database in the format of
an interpreted-like language that is loaded and executed in memory once it is
required).
• One may assume that at least part of the schedule generator must be included in the
user interface, as it is very difficult to separate the presentation of a schedule from
the business logic that has to be included in order to allow the user to manipulate
the solutions. This fact also speaks to the aforementioned difficulty of the three-
tier architecture adopted in the design.
• The architecture proposed adopts basically a technical, problem-solving approach,
and perhaps overlooks the cyclical nature of the scheduling process. As such,
functionalities such as the analysis and management of different scenarios, self-
learning issues, etc., are not explicitly considered in the description.
• Finally, it is not clear whether reactive scheduling can be carried out with these
modules, or whether it is considered out of the scope of the tool. This is a very important
issue, as we shall see in the next sections.
As mentioned before, the first step for developing a software tool consists of analysing
its requirements. In software engineering, it is usual to distinguish between functional
requirements and non-functional requirements. The former specify what the system
should do, i.e. what functions it should provide to its users. Non-functional require-
ments are general criteria that the system should meet apart from solving specific
application problems. Such criteria often refer to software quality and performance
of the system. Some examples are maintainability, reliability, scalability, robustness,
etc., and collectively define a number of characteristics of a good piece of software.
As such, it is hard to see how these requirements would differ from those of other
business decision support software, and thus there is nothing in them inherently
specific to scheduling tools. Therefore, we will omit this type of requirements in this
chapter and refer the interested reader to some of the many good books on software
engineering, such as Kurbel (2008). In contrast, we will concentrate on the functional
requirements, particularly on those generic requirements that allow us to identify an
architecture of the system when we move down to the design phase. These are grouped into
several categories and described in more detail in the next sections. In addition, it is
also very common in software engineering to prioritise the different functionalities
of the system to be developed. Since the development process rarely goes smoothly,
it is usual that some functionalities have to be limited or removed from the system
due to the lack of time or resources to accomplish all of them. In such cases, a
categorisation of the functionalities may help to identify which ones are regarded as
critical and which can be removed without greatly impacting the product. In our
case, we do not dare to make a strict categorisation, but at the end of each section we
provide some hints on how ‘basic’ or ‘advanced’ these categories are. A summary
of these categories and their related functionalities is shown in Fig. 11.2.
The scope of a manufacturing scheduling system refers to the set of business functions
targeted by the system within the context of supporting decisions for production
management. In this regard, there are two issues that should be discussed: the
first one is to define the set of business functions that constitute scheduling; the second,
whether there are additional production management decisions, usually considered
to be outside of the scheduling function, that are so intertwined with scheduling that
they should be included in a software tool.
With respect to the scheduling function, generally speaking, there are two views
or levels, depending on the time horizon (Aytug et al. 1994):
• A higher level that uses the output of production planning to set up the dates for
the beginning of each job on each machine. This level is often referred to as release
scheduling.
11.6 Requirements of a Scheduling Tool 301
• A lower level which is involved with real-time item movement planning. This level
is usually denoted as reactive scheduling.
There is a consensus that these two levels should be adequately covered by the
scheduling system, which means that a scheduling system’s architecture should inte-
grate functionalities regarding the monitoring and execution of the planned schedules.
This aspect is what McKay and Wiers (1999) label as ‘sustained control’, meaning
that the schedulers should be able to monitor the progress of production and to solve
problems if the actual situation deviates from the scheduled situation. In other words,
there is no scheduling problem to be solved, but a scheduling decision process which
may (or may not, depending on the situation) involve solving decision (optimisation)
problems. Therefore, in order to build a tool that supports the scheduling process,
the lower level of scheduling (reactive scheduling) should be contemplated in the
software.
In addition, it has been already mentioned that there are a number of manufacturing
activities that are related to scheduling. Perhaps the most obvious is production
planning, which covers production at a multi-week level and makes up the input
data for release scheduling. Despite this clear connection, planning and scheduling
activities are usually handled independently, except for a few attempts (see, e.g.
Huang et al. 1995; Kanet and Adelsberger 1987; Zhang and Mallur 1994). Indeed, it
has been argued that many small shops have little scope to make long-term plans and
that systemic complexities also force scheduling to be directed to very short-time
horizons (Higgins 1996). Therefore, we will not include planning functionalities into
our proposal, although the architecture should contain the necessary coordination
between planning and scheduling.
hundreds of constraints can be identified in the shop, and that some of them are
paramount while others can be safely ignored. Therefore, the system should use
(at least) two layers, or views, of a model: an aggregated view employed during
the search for solutions, and a detailed one employed to present the solutions to
the decision-maker. The suitability of the approach depends on ensuring that the
constraints that have been ignored do not have a major effect on the feasibility
of the so-obtained solutions. This can only be verified by a thorough validation
process during the deployment of the tool.
• Representation of the Solutions. The schedules provided by the solution proce-
dures have to be transformed into starting and finishing times for each job. This
functionality is mentioned by Pinedo and Yen (1997) as the ‘simulator’, and follows
from the fact, already mentioned, that the tool should work with a simplified sub-
set of shop floor constraints. Obviously, this level of abstraction with respect to the
constraints greatly influences the complexity of this module, as it may have to
‘rework’ the schedules provided by the schedule generator to make them feasible.
If this process is not properly bounded, it may greatly reduce the efficiency of the
scheduling engine, as there may be no correlation between good schedules (as
provided by the schedule generator) and good solutions (as finally presented to
the user), rendering any optimisation effort useless.
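As an illustration of the transformation performed by such a module, the following sketch turns a job permutation into explicit starting and finishing times for a simple permutation flow shop; a real tool would layer the remaining shop floor constraints on top of a routine like this. All data and names are hypothetical:

```python
def build_timetable(sequence, proc):
    """Turn a job permutation into explicit start and finish times on each
    machine of a permutation flow shop (no further shop constraints)."""
    m, n = len(proc), len(sequence)
    start = [[0] * n for _ in range(m)]
    finish = [[0] * n for _ in range(m)]
    for k, job in enumerate(sequence):
        for i in range(m):
            machine_free = finish[i][k - 1] if k > 0 else 0  # previous job done
            job_ready = finish[i - 1][k] if i > 0 else 0     # previous stage done
            start[i][k] = max(machine_free, job_ready)
            finish[i][k] = start[i][k] + proc[i][job]
    return start, finish

# proc[i][j]: processing time of job j on machine i (hypothetical data).
proc = [[3, 2],
        [2, 4]]
start, finish = build_timetable([0, 1], proc)
makespan = finish[-1][-1]   # finish of the last sequenced job on the last machine
```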
If the three functionalities have to be classified or prioritised, it is clear that the
two latter represent basic aspects of a scheduling tool, whereas the first one (Model
Detection) constitutes an advanced feature of the system. In addition, its usefulness
would depend on how well the rest of the functionalities are covered (e.g. if the
algorithm library does not contain algorithms for a large number of scheduling models,
their nearly automatic identification would not be very useful).
ated from solving actual instances of the problem (i.e. past scheduling problems),
while T’Kindt et al. (2005) propose a mechanism to generate benchmark instances
derived from actual instances of the problem to be solved that are subsequently
employed to test the different algorithms. Finally, Gupta et al. (2000) propose the
application of neural networks to select the best heuristic algorithm to solve a
given scheduling problem. In their paper, the objective is to choose the heuristic
with the highest statistical quality of the results.
• Incorporation of human expertise. This refers to the ability of the system to incor-
porate the knowledge of the human schedulers into the tool, and it has given rise to a
relatively robust literature on scheduling expert systems. A list of the potential ben-
efits of this functionality can be found, e.g. in Reinschmidt et al. (1990). However,
there is some controversy on the usefulness of the approach, since many researchers
are sceptical that true human experts exist in scheduling (Steffen 1986). This is
motivated by the belief that most real-life scheduling environments are beyond
the cognitive capability of most schedulers (Fox 1990). Indeed, some experiments
have been carried out (Baek et al. 1999) in order to analyse the performance of
a human scheduler who must develop a schedule from scratch against that of a
software tool. In these experiments it is shown that, if the problem is complex,
the performance of the solution obtained by the human scheduler from an initial
schedule provided by the software tool is better than that obtained from scratch by
the human scheduler. Besides, even if human experts exist, some authors consider
that the expert system would merely automate their (good or bad) decisions (Kanet
and Adelsberger 1987). As a consequence, the need for and suitability of this function-
ality is highly controversial. Since we are describing a general architecture of a
scheduling system, we believe that this functionality should be considered, leaving
its implementation to the particular cases that may arise.
Regarding the categorisation of this set of functionalities, the ability to support a
group of algorithms for scheduling and for rescheduling must be considered a basic
functionality of a scheduling tool. Without these features, a scheduling tool is so
severely limited that it hardly serves to represent and manipulate the schedules pro-
posed by the decision-maker. The ability to easily incorporate new algorithms may
be regarded as an advanced feature, as we suspect that in many implementations,
if new, specific algorithms are needed, this task will be accomplished by the
development team and not by the company’s staff. If this is the case, the advantages
of having a tool that helps building new algorithms quickly may be limited by the
developers’ knowledge of the software. Regarding the incorporation of human
expertise, this topic is so debatable, and there are so many voices against it, that it
cannot be considered a basic functionality. Finally, in our view, the state of the art
with respect to the evaluation of solutions is too limited for this functionality to
have a great impact, although its usefulness would clearly depend on how repetitive
the data are.
306 11 Overview of Scheduling Tools
This set of functionalities refers to how the solutions found by the problem-solving
functionalities described in Sect. 11.6.3 are evaluated. The following functionalities
have been identified from the literature review:
• Evaluation of solutions for different objectives. The tool must allow the evaluation
of a solution with respect to several objectives, including a combination of dif-
ferent objectives. This functionality is mentioned by Pinedo and Yen (1997) and
Sauer (1993). In practice, implementing this functionality in the tool translates—at
least—into two issues:
– A large number of objective functions should be implemented in the tool, ranging
from the typical literature-based scheduling criteria to cost-related objective
functions. Note also that the decision-maker is likely to want each objective
function available in this functionality to also be available as an objective function
for the algorithms in the algorithm library described in a previous section. In
practice, this means that at least some general-purpose algorithms should be
implemented in the tool, in order to cover those objectives for which efficient
specific algorithms are not known, or are simply not worth the development effort.
A further refinement would be to allow the user to include new objectives, much
in the manner of the ‘generation of new algorithms’ functionality.
– A smart way to present the evaluation of the algorithms to the user should be
implemented, as it is likely that the user would like to compare different solutions
for different objectives. If the objectives are numerous, it would be hard for the
user to make sense of the different results, so the tool could ideally detect the
objectives with relevant differences among the solutions under comparison and
hide those for which these differences are not significant. Another option would
be to make a hierarchy of the different objectives and show them in—at least—
two different levels. An alternative to limit this problem is to select a minimal
set of objectives, as in practice most of them would be related. This aspect is
discussed in Chap. 12.
• Stochastic evaluation of solutions. The solutions obtained by the problem-solving
functionalities are usually based on deterministic assumptions; it would therefore
be of interest to evaluate the performance of the solutions under the stochastic
conditions of the real shop floor. This functionality is suggested by Błażewicz
et al. (2001), who propose the use of queuing theory or simulation to accomplish
this evaluation. The combination of simulation and scheduling is discussed in detail
in Fowler et al. (2006). Note, however, that this is only one of the approaches that
can be employed to address variability in scheduling, which include the following
(see e.g. Black et al. 2006 for a summary of the different approaches):
– Using different methods for modelling uncertainty, such as, e.g. fuzzy logic,
– Researching issues associated with measuring robustness in the face of uncer-
tainty,
– Developing different techniques for constructing schedules (e.g., rescheduling
and rolling horizons), using real-time information for rescheduling or schedule
repair and
– Inserting idle or slack time, in a general manner or placing slack at tactical
locations in the schedule to allow for expansion and schedule adjustment caused
by variability.
• Analysis of scenarios. The system must allow the management of different solu-
tions for what-if analysis. This functionality is particularly useful if the user is
allowed to manually modify the schedules proposed by the system (see Sect. 11.6.7
for a discussion). It is mentioned in McKay and Wiers (2003).
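As a minimal sketch of the first functionality above, a single-machine sequence can be evaluated against several objectives at once; the four objectives, the job data and the dictionary-based interface are illustrative choices, not taken from the cited works:

```python
def evaluate(sequence, p, d):
    """Evaluate a job sequence on several classical scheduling objectives.

    p, d: processing times and due dates indexed by job id.
    Returning a dict lets new objectives be added without changing callers.
    """
    t = 0
    completion = {}
    for j in sequence:
        t += p[j]
        completion[j] = t
    tardiness = {j: max(0, completion[j] - d[j]) for j in sequence}
    return {
        "makespan": t,
        "total_flowtime": sum(completion.values()),
        "max_tardiness": max(tardiness.values()),
        "tardy_jobs": sum(1 for v in tardiness.values() if v > 0),
    }

p = {1: 4, 2: 2, 3: 5}   # processing times (illustrative)
d = {1: 5, 2: 4, 3: 12}  # due dates (illustrative)
print(evaluate([2, 1, 3], p, d))
```

Presenting several such dictionaries side by side, possibly hiding the objectives with negligible differences, is one way to realise the comparison discussed in the second point above.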
With respect to the categorisation of this set of functionalities, the evaluation of
solutions for different objectives and the analysis of scenarios have to be clearly
labelled as basic functionalities. In contrast, our view on the stochastic evaluation
of solutions indicates that, although it seems potentially interesting to combine the
optimisation results with a simulation, we are sceptical about its usefulness
on a general basis. On the one hand, detailed simulation models are extremely expensive
to construct and to maintain, and stochastic models based, e.g. on queuing theory
may not provide an accurate capacity check and thus may mislead the scheduler
in his/her decision. On the other hand, some of the methods later mentioned can
be used in a more or less sophisticated manner to perform an analysis of the effect
of the variability in the shop floor without expensive developments. Therefore, this
functionality cannot be regarded as basic.
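A low-cost analysis of variability of the kind alluded to above can be sketched as a Monte Carlo re-evaluation of a fixed sequence; the two-machine flow shop and the uniform noise model are illustrative assumptions:

```python
import random

def flowshop_makespan(seq, p):
    # p[m][j]: processing time of job j on machine m (two-machine flow shop)
    c1 = c2 = 0
    for j in seq:
        c1 += p[0][j]
        c2 = max(c1, c2) + p[1][j]
    return c2

def stochastic_eval(seq, p, noise=0.2, runs=1000, seed=42):
    """Monte Carlo re-evaluation of a fixed sequence: perturb every
    processing time by a uniform +/- noise factor and collect the
    resulting makespans. The uniform noise model is an assumption."""
    rng = random.Random(seed)
    samples = []
    for _ in range(runs):
        noisy = [{j: t * rng.uniform(1 - noise, 1 + noise)
                  for j, t in machine.items()} for machine in p]
        samples.append(flowshop_makespan(seq, noisy))
    mean = sum(samples) / runs
    worst = max(samples)
    return mean, worst

p = [{0: 3, 1: 5, 2: 2}, {0: 4, 1: 2, 2: 3}]  # illustrative instance
seq = [2, 0, 1]
det = flowshop_makespan(seq, p)  # deterministic makespan: 12
mean, worst = stochastic_eval(seq, p)
print(det, round(mean, 1), round(worst, 1))
```

Such a re-evaluation gives the scheduler a feel for the robustness of a proposed schedule without building a detailed simulation model of the shop floor.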
This set of functionalities refers to the ability of the tool to perform rescheduling.
Two different aspects can be considered:
• Monitoring of execution. While one option for the user is to decide whether to
reschedule or not in view of the shop floor information found outside the scheduling
system, the system could monitor the execution of planned schedules in order to
measure the deviation between the planned and the actual situations and trigger
some alarm under certain conditions. This functionality is mentioned in Błażewicz
et al. (2001), Collinot et al. (1988), Smith (1994) and Numao and Morishita (1989).
• Automatic triggering of scheduling/rescheduling functions. A step beyond monitoring
is to allow the tool to decide, in view of the deviation from the plans, whether the current
jobs have to be rescheduled, or a full schedule is to be developed. Although this
functionality is mentioned by several authors (see, e.g. Błażewicz et al. 2001; Collinot
et al. 1988; Smith 1994), we think that this may not be a good option except for
fully automated systems, as we believe that the approach of a scheduling tool is that
of a decision support system and not that of an automatic control system. Indeed,
post-evaluation case studies such as the one by Lin et al. (2007) indicate that the
schedulers want to be in full control of the scheduling tool. In contrast, it would be
interesting if the tool included the capability of learning under which conditions the
human scheduler triggers a rescheduling of the existing solution, and then raised the
corresponding alarms automatically. Even if this learning process is conducted
using relatively simple procedures, we believe that it would greatly help the
scheduler to make more consistent rescheduling decisions, as the system would
indicate that, in the past, a reschedule was triggered under similar conditions. This
would make the scheduler reflect on and modify his/her procedures, or at least
make explicit constraints or objectives not initially contemplated.
In summary, monitoring of execution should be regarded as a basic functionality,
helping the scheduler to be in control of the system, whereas the automatic triggering
of scheduling and rescheduling functions can be labelled as advanced.
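A minimal sketch of the monitoring functionality follows; the deviation metric (accumulated absolute start-time deviation) and the alarm threshold are illustrative choices, not taken from the cited works:

```python
def execution_deviation(planned, actual):
    """Per-operation deviation (in time units) between planned and actual
    start times; operations not yet started are simply ignored."""
    return {op: actual[op] - planned[op] for op in actual if op in planned}

def should_alarm(planned, actual, threshold=10.0):
    # Trigger an alarm when the accumulated absolute deviation of the
    # executed operations exceeds a user-set threshold
    dev = execution_deviation(planned, actual)
    return sum(abs(v) for v in dev.values()) > threshold

planned = {"J1-M1": 0, "J2-M1": 5, "J3-M1": 9}
actual = {"J1-M1": 0, "J2-M1": 12}    # J2 started 7 units late
print(should_alarm(planned, actual))  # prints False: total deviation is 7
```

In the spirit of the discussion above, the threshold would not be hard-wired: it could be adjusted over time from the conditions under which the human scheduler actually chose to reschedule.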
This set of functionalities refers to the ability of the tool to detect critical points and
slacks in the system, and consequently to help the scheduler focus his/her attention
on the most critical parts of the process. These functionalities can be grouped as
follows:
• Schedule capacity analysis. By using this functionality, the system must be able to
detect potential bottlenecks and under-loaded resources according to the planned
schedule (or even according to the actual schedule, if the system allows monitoring
of execution). This functionality may help the schedulers to focus their attention
during the execution of the schedule on the critical sections of the shop floor, and
to identify under-utilised resources to be loaded during the building of the next
schedule, or transferred to the critical sections. This functionality is mentioned by
Collinot et al. (1988), Kempf (1994), Prietula et al. (1994).
• Instance capacity analysis. By using this functionality, the system may detect
potential bottlenecks and under-loaded resources before a solution for the problem
instance is provided by the system. Note that this functionality is different from that
of schedule capacity analysis, as there is no planned schedule. This functionality
is called preprocessor by Pinedo and Yen (1997), and it can be linked with the
Model Detection functionality described in Sect. 11.6.2, since it can be regarded as
a wizard to help identify a suitable model for the shop floor (e.g. a preprocessor
that identifies a bottleneck machine regardless of the specific job schedule
may point to the single-machine scheduling model as the most suitable
model to schedule the jobs).
In our view, both functionalities are somewhat advanced, as they may not be
strictly needed for some scenarios. Among these two, schedule capacity analysis is
less advanced and requires less sophistication than the instance capacity analysis.
Instance capacity analysis may be quite complicated to implement, as the state of
the art on this issue is practically non-existent and many practical problems for
which there is no previous guidance may arise during the process.
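A schedule capacity analysis of the simplest kind can be sketched by computing machine utilisation over the planned schedule; the 85%/40% cut-offs separating bottleneck and under-loaded resources are illustrative, not taken from the chapter:

```python
def capacity_analysis(schedule, horizon):
    """Classify machines by utilisation over the planning horizon.

    schedule: {machine: [(start, end), ...]} of planned operations.
    """
    report = {}
    for m, ops in schedule.items():
        busy = sum(end - start for start, end in ops)
        util = busy / horizon
        if util >= 0.85:
            status = "potential bottleneck"
        elif util <= 0.40:
            status = "under-loaded"
        else:
            status = "normal"
        report[m] = (round(util, 2), status)
    return report

schedule = {
    "M1": [(0, 9), (9, 18)],   # busy 18 of 20 time units
    "M2": [(2, 6)],            # busy 4 of 20
    "M3": [(0, 5), (8, 14)],   # busy 11 of 20
}
print(capacity_analysis(schedule, horizon=20))
```

Run against the planned schedule this realises schedule capacity analysis; run against aggregate machine workloads of an unscheduled instance, a similar computation would serve as a first step towards instance capacity analysis.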
There are a number of functionalities regarding the user interface that are required for
a generic business information system, mostly referring to the ability of the system
to provide an interactive way to enter input data into the system, or to extract output
data (typically reports) from the system. Here, we focus on those features specific
to a manufacturing scheduling system, mostly referring to a user interface which acts
both as input and output of the system:
• Manipulation of schedules. Most authors highlight the need for human interaction
(see e.g. the works by McKay et al. 1995a or Trentesaux et al. 1998), meaning the
manipulation by the human scheduler of the solutions offered by the system; among
them are Collinot et al. (1988), Numao and Morishita (1989), Prietula et al. (1994),
Sauer and Bruns (1997), Sauer (1993), Sauer and Appelrath (1997), T’Kindt et al.
(2005) and McKay and Buzacott (2000). Note that this functionality should include
mechanisms for repairing the possible infeasibility of the solutions generated by
the human scheduler. Regarding the presentation of the solutions, while Gantt
charts are mostly used as the primary means for this interaction, Higgins (1996)
questions their adequacy and suggests employing the so-called Job Screens and
machine-loading boards as a starting point. In Chap. 12, this discussion will be
rejoined when introducing some different alternatives to represent schedules.
• Comparison of manual versus automatic solutions. It is clear that the manipulation
of the resulting solutions may result in a notable worsening of the original
objective function. While this is not necessarily a problem (for instance, it may
be that the original solution did not contemplate some complicated constraints
that the scheduler had in mind), the tool must provide reports indicating the
deviation of the so-modified schedules with respect to the ones suggested by the
algorithms. The purpose of this subfunctionality is twofold: in the short term, it
helps the scheduler to balance the importance of these ‘implicit’ objectives/
constraints against the overall performance of the schedule. In the long term, the
continuous monitoring of these deviations may help to notice important
objectives/constraints that were not initially taken into account but that have a
large influence on the final results.
From these two functionalities, it is clear that the first one is a basic requirement
of a scheduling tool. Even if a fine implementation of this functionality can be costly,
without at least some basic manipulation features, the tool gives little room for user
interaction and its value and usability decrease enormously. The second functionality
Integration in the organisational environment means that the scheduling tool should
be able to import the data required and to export the results obtained from/to the
rest of the systems. This is a critical aspect in the development of the tool, as several
surveys (see e.g. McKay et al. 1998) identify up to 25 different sources of data related
to scheduling. A number of functionalities can be mentioned regarding this aspect:
• Input data checking. The system must provide functionalities to check the consis-
tency and the comprehensiveness of the data automatically entered into the system.
Significant discrepancies between historical and new data could be checked so that
the system may trigger alerts to the Decision-Maker to allow/discard these changes.
• Feasibility analysis. The system can check the existence of all resources needed for
scheduling the jobs, namely the availability of raw materials, state of the machines,
etc. This functionality is mentioned in Błażewicz et al. (2001) and Hadavi et al.
(1990), and can constitute an output of the scheduling tool towards different infor-
mation systems, such as the MRP.
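A minimal sketch of input data checking along these lines follows; the field names, the checking rules and the tolerance against historical values are all illustrative assumptions:

```python
def check_input(jobs, historical_mean, tolerance=3.0):
    """Basic consistency checks on imported job data, plus a discrepancy
    alert against historical processing times (alert when a value exceeds
    `tolerance` times the historical mean for that product)."""
    alerts = []
    for job in jobs:
        if job.get("processing_time", 0) <= 0:
            alerts.append((job["id"], "non-positive processing time"))
        if "due_date" not in job:
            alerts.append((job["id"], "missing due date"))
        hist = historical_mean.get(job.get("product"))
        if hist and job.get("processing_time", 0) > tolerance * hist:
            alerts.append((job["id"], "processing time far above history"))
    return alerts

jobs = [
    {"id": "J1", "product": "A", "processing_time": 5, "due_date": 20},
    {"id": "J2", "product": "A", "processing_time": 40, "due_date": 25},
    {"id": "J3", "product": "B", "processing_time": -1},
]
historical = {"A": 6.0, "B": 4.0}
for alert in check_input(jobs, historical):
    print(alert)
```

Rather than rejecting data outright, such alerts would be presented to the decision-maker, who allows or discards the flagged changes as described above.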
An important issue in order to accomplish a seamless integration of the scheduling
system is to define a format or language that allows the precise specification of these
data for a wide range of scheduling problems. Given the diversity of scheduling
problems, this is not an easy task. So far, some research has been carried out on
languages for describing scheduling problems (see, e.g. Yen 1997 or Zentner et al.
1998). However, keeping in mind the aforementioned need for integration of
scheduling activities with existing manufacturing information systems, the development of
a standard language is a must. In this line, perhaps scheduling systems can benefit
from the XML standard and, more specifically, from the Business Process Modelling
Initiative (BPMI) (BPMG.org 2004). This initiative is seeking to establish an XML-
based language and notation (BPML and BPMN, respectively) for the specification
of business process models.
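For illustration, a scheduling instance could be described in a small XML dialect and parsed with standard tooling; the tags and attributes below are invented for the example and are not BPML/BPMN or any of the cited languages:

```python
import xml.etree.ElementTree as ET

# A minimal, invented instance description in the spirit of the
# scheduling description languages cited above
SPEC = """
<instance model="flowshop" objective="makespan">
  <job id="J1"><op machine="M1" time="4"/><op machine="M2" time="3"/></job>
  <job id="J2"><op machine="M1" time="2"/><op machine="M2" time="5"/></job>
</instance>
"""

root = ET.fromstring(SPEC)
jobs = {
    job.get("id"): [(op.get("machine"), int(op.get("time")))
                    for op in job.findall("op")]
    for job in root.findall("job")
}
print(root.get("model"), jobs["J1"])  # prints: flowshop [('M1', 4), ('M2', 3)]
```

An agreed schema of this kind would let the scheduling tool exchange instances and solutions with ERP or MRP systems without bespoke converters for each integration.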
Regarding the prioritisation of the different functionalities, since input data are
critical for the scheduling system and may come from many different sources of
different quality, such checking functionality is considered to be of great importance
(McKay and Wiers 2003). Indeed, the lack of quality in the input data is mentioned
as a reason to implement data filtering utilities and spreadsheets as a side product
of different scheduling tools (Berglund and Karltun 2007). The only exception to
the importance of this functionality would be if the scheduling tool were completely
autonomous with respect to the data, which may not be the most usual case. In
contrast, feasibility analysis can be regarded as a more advanced functionality.
This section summarises the contributions discussed in the earlier sections regard-
ing the functionalities that have been identified. The main outcome is presented in
Table 11.1, where the functionalities, the main references dealing with them and their
character (basic or advanced) are presented.
Usually, after a few iterations, the human scheduler will be satisfied with a final
production sequence. The relevant data of this sequence must be again passed on
to the ERP and all necessary reports must be generated. Once production starts, the
optimised production sequence becomes the ‘current plan’, which is then controlled
with the help of Supervisory Control and Data Acquisition (SCADA) production
control systems. If production events occur, resequencing or rescheduling might be
necessary. Furthermore, future production plans are scheduled upon user request
as a continuation of the current plan.
In this chapter, we have started to address an area that has received relatively little
attention in the manufacturing scheduling field, at least as compared to the over-
whelming wealth of literature dealing with scheduling models and procedures. How-
ever, in order to put these models and procedures into practice, they must adequately
capture the details of a company’s manufacturing scheduling process, which usually
entails modelling scenarios, constraints and objectives not often treated in the litera-
ture. In addition, the resulting models and solution procedures should be embedded
into a manufacturing scheduling system that feeds them with reliable and timely data
and provides an efficient way to interact with the users. This gives rise to the question
of what the main blocks or components of a manufacturing scheduling system are
and how they are related. Despite the specificity of scheduling systems, a number
of features or functionalities can be identified, thus constituting the architecture of
a manufacturing scheduling tool. In order to analyse these functionalities, we have
reviewed and classified existing contributions on the architecture of manufacturing
scheduling systems, as well as works dealing with the description of a manufac-
turing scheduling system’s requirements. From this analysis, we have proposed an
integrated, modular architecture of scheduling systems covering all the so-identified
functionalities. We do not claim that our proposal is the only way (or even the best
way) to represent the architecture of a scheduling system, although it has been vali-
dated both through literature analysis and through industrial practice. By presenting
it, we hope to enrich the rather scarce literature on the topic and thus help bridge
the widely recognised gap between the development of scheduling models and
procedures and their implementation in real-life industrial settings.
We believe that the opinions raised in this chapter may be highly influenced by
the experiences of the authors, and that there may be complex, highly constrained,
non-repetitive scenarios in which the human expertise may outperform an automated
manufacturing system, whereas an automated system could be preferable in other
scenarios with a large number of different orders to be scheduled over a number of
different machines on a repetitive basis. In any case, in our view, the automated system
should serve to support the decisions of the human scheduler, and not to replace
him/her. In the proposed architecture, this idea is explicitly supported by stressing
the need for the tool to allow manual modification of the schedules.
References
Adelsberger, H. and Kanet, J. (1991). The Leitstand: A new tool for computer-integrated manufac-
turing. Production and Inventory Management Journal, 32(1):43–48.
Aytug, H., Bhattacharyya, S., Koehler, G. J., and Snowdon, J. L. (1994). A review of machine
learning in scheduling. IEEE Transactions on Engineering Management, 41(2):165–171.
Aytug, H., Lawley, M. A., McKay, K., Mohan, S., and Uzsoy, R. (2005). Executing production
schedules in the face of uncertainties: A review and some future directions. European Journal of
Operational Research, 161(1):86–110.
Baek, D., Oh, S., and Yoon, W. (1999). A visualized human-computer interactive approach to job
shop scheduling. International Journal of Computer Integrated Manufacturing, 12(1):75–83.
Berglund, M. and Karltun, J. (2007). Human, technological and organizational aspects influ-
encing the production scheduling process. International Journal of Production Economics,
110(1–2):160–174.
Black, G., McKay, K., and Morton, T. (2006). Aversion scheduling in the presence of risky jobs.
European Journal of Operational Research, 175(1):338–361.
Błażewicz, J., Ecker, K., Pesch, E., Schmidt, G., and Wȩglarz, J. (2001). Scheduling Computer and
Manufacturing Processes. Springer, Berlin, second edition.
BPMG.org (2004). Business process modelling tools. http://www.bmpg.org.
Brandimarte, P., Rigodanza, M., and Roero, L. (2000). Conceptual modelling of an object-oriented
scheduling architecture based on the shifting bottleneck procedure. IIE Transactions, 32(10):
921–929.
Collinot, A., Le Pape, C., and Pinoteau, G. (1988). Sonia: A knowledge-based scheduling system.
Artificial Intelligence in Engineering, 3(2):86–94.
Ecker, K., Gupta, J., and Schmidt, G. (1997). A framework for decision support systems for schedul-
ing problems. European Journal of Operational Research, 101(3):452–462.
Fowler, J., Mönch, L., and Rose, O. (2006). Handbook of Production Scheduling, chapter Scheduling
and Simulation: The Role of Simulation in Scheduling, pages 109–133. Springer.
Fox, M. S. (1990). Constraint guided scheduling: A short history of research at CMU. Computers
in Industry, 14(1–3):79–88.
Fox, M. S. (1994). Intelligent Scheduling, chapter ISIS: A Retrospective, pages 3–28. Morgan
Kaufmann.
Framinan, J. and Ruiz, R. (2010). Architecture of manufacturing scheduling systems: Literature
review and an integrated proposal. European Journal of Operational Research, 205(2):237–246.
Godin, V. (1978). Interactive scheduling: Historical survey and state of the art. AIIE Transactions,
10:331–337.
Gupta, J. N. D., Sexton, R. S., and Tunc, E. A. (2000). Selecting scheduling heuristics using neural
networks. INFORMS Journal on Computing, 12(2):150–162.
Hadavi, K., Shahraray, M., and Voigt, K. (1990). REDS: A dynamic planning, scheduling, and control
system for manufacturing. Journal of Manufacturing Systems, 9(4):332–344.
Higgins, P. G. (1996). Interaction in hybrid intelligent scheduling. International Journal of Human
Factors in Manufacturing, 6(3):185–203.
Huang, S. H., Zhang, H.-C., and Smith, M. L. (1995). A progressive approach for the integration
of process planning and scheduling. IIE Transactions, 27(4):456–464.
Jacobson, I., Booch, G., and Rumbaugh, J. (1999). The Unified Software Development Process.
Addison-Wesley Professional.
Kanet, J. J. and Adelsberger, H. H. (1987). Expert systems in production scheduling. European
Journal of Operational Research, 29(1):51–59.
Kempf, K. (1994). Intelligent Scheduling, chapter Intelligent scheduling semiconductor wafer fab-
rication, pages 517–544. Morgan Kaufmann.
Kirchmer, M. (1999). Business Process Oriented Implementation of Standard Software. Springer,
Berlin, second edition.
Kurbel, K. (2008). The making of Information Systems. Springer.
Lamatsch, A., Morlock, M., Neumann, K., and Rubach, K. (1988). SCHEDULE: An expert-like system
for machine scheduling. Annals of Operations Research, 16(1–4):425–438.
Le Pape, C. (1994). Intelligent Scheduling, chapter Scheduling as intelligent control of decision-
making and constraint propagation, pages 67–98. Morgan Kaufmann.
Lee, C.-Y., Lei, L., and Pinedo, M. (1997). Current trends in deterministic scheduling. Annals of
Operations Research, 70:1–41.
Lin, C., Hwang, S., and Wang, E. (2007). A reappraisal on advanced planning and scheduling
systems. Industrial Management and Data Systems, 107(8):1212–1226.
McKay, K. and Buzacott, J. (2000). Application of computerized production control systems in job
shop environments. Computers in Industry, 42(2):79–97.
McKay, K., Safayeni, F., and Buzacott, J. (1988). Job-shop scheduling theory: What is relevant?
Interfaces, 18(4):84–90.
McKay, K. and Wiers, V. (2003). Integrated decision support for planning, scheduling, and dis-
patching tasks in a focused factory. Computers in Industry, 50(1):5–14.
McKay, K. N. and Black, G. W. (2007). The evolution of a production planning system: A 10-year
case study. Computers in Industry, 58(8–9):756–771.
McKay, K. N., Pinedo, M. L., and Webster, S. (2002). Practice-focused research issues for scheduling
systems. Production and Operations Management, 11(2):249–258.
McKay, K. N., Safayeni, F. R., and Buzacott, J. A. (1995a). “Common sense” realities of planning
and scheduling in printed circuit board production. International Journal of Production Research,
33(6):1587–1603.
McKay, K. N., Safayeni, F. R., and Buzacott, J. A. (1995b). Review of hierarchical production
planning and its applicability for modern manufacturing. Production Planning and Control, 6(5):
384–394.
McKay, K. N. and Wiers, V. C. S. (1999). Unifying the theory and practice of production scheduling.
Journal of Manufacturing Systems, 18(4):241–255.
Morton, T. E. and Pentico, D. W. (1993). Heuristic Scheduling Systems With Applications to
Production Systems and Project Management. Wiley Series in Engineering & Technology Man-
agement. John Wiley & Sons, Hoboken.
Numao, M. and Morishita, S. (1989). A scheduling environment for steel-making processes.
Proceedings of the 5th Conference on Artificial Intelligence Applications, pages 279–286.
Pinedo, M. L. (2009). Planning and Scheduling in Manufacturing and Services. Springer, New
York, second edition.
Pinedo, M. L. (2012). Scheduling: Theory, Algorithms, and Systems. Springer, New York, fourth
edition.
Pinedo, M. L. and Yen, B. P.-C. (1997). On the design and development of object-oriented scheduling
systems. Annals of Operations Research, 70(1):359–378.
Prietula, M., Hsu, W., Ow, P., and Thompson, G. L. (1994). Intelligent Scheduling, chapter MacMerl: Mixed-
Initiative Scheduling with Coincident Problem Spaces, pages 655–682. Morgan Kaufmann.
Reinschmidt, K. F., Slate, J. H., and Finn, G. A. (1990). Expert systems for plant scheduling using
linear programming. In Proceedings of the 4th International Conference on Expert Systems in
Production and Operations Management, Head Island, USA.
Sadeh, N. (1994). Intelligent Scheduling, chapter Micro-opportunistic scheduling: The Micro-Boss
Factory Scheduler, pages 99–135. Morgan Kaufmann.
Sauer, J. (1993). Meta-scheduling using dynamic scheduling knowledge. In Dorn, J. and Froeschl,
K., editors, Scheduling of Production Processes, pages 151–162, Upper Saddle River, N.J. Ellis
Horwood.
Sauer, J. and Appelrath, H.-J. (1997). Knowledge-based design of scheduling systems. In Nahavandi,
S. and Saadat, M., editors, Proceedings of WMC97. International Symposium on Manufacturing
Systems, Auckland.
Sauer, J. and Bruns, R. (1997). Knowledge-based scheduling systems in industry and medicine.
IEEE Expert, January-February:24–31.
Smith, S. (1994). Intelligent Scheduling, chapter OPIS: A Methodology and Architecture, pages
29–66. Morgan Kaufmann.
Steffen, M. S. (1986). A survey of artificial intelligence-based scheduling systems. In Proceedings
of the Fall Industrial Engineering Conference.
T’Kindt, V., Billaut, J.-C., Bouquard, J.-L., Lenté, C., Martineau, P., Néron, E., Proust, C., and
Tacquard, C. (2005). The e-OCEA project: towards an internet decision system for scheduling
problems. Decision Support Systems, 40(2):329–337.
Trentesaux, D., Moray, N., and Tahon, C. (1998). Integration of the human operator into respon-
sive discrete production management systems. European Journal of Operational Research,
109(2):342–361.
Wiers, V. (1997). Human-Computer Interaction in Production Scheduling: Analysis and Design
of Decision Support Systems for Production Scheduling Tasks. Technische Universiteit
Eindhoven.
Yen, B. P.-C. (1997). Scheduling description languages. Technical report, Dept. of Industrial
Engineering and Engineering Management, Hong Kong University of Science and Technology,
Hong Kong.
Zentner, M. G., Elkamel, A., Pekny, J. F., and Reklaitis, G. V. (1998). A language for describing
process scheduling problems. Computers and Chemical Engineering, 22(1–2):125–145.
Zhang, H.-C. and Mallur, S. (1994). An integrated model of process planning and production
scheduling. International Journal of Computer Integrated Manufacturing, 7(6):356–364.
Chapter 12
Advanced Design of Scheduling Tools
12.1 Introduction
This module mimics modern computer languages and development paradigms which
insist on abstraction, modularity and encapsulation. Databases evolve over time,
since new requirements often imply new fields and tables. The same can be said
about the user interface. If a scheduling tool is built in a monolithic way, a small
change in the database could have a wave effect on many other modules. This
would result in software that is both very rigid and difficult to maintain. To ensure
complete modularity and a high degree of independence, we propose the use of
elaborate class hierarchies that allow inter-module communication through published
interfaces (like those available, e.g. in the C# programming language). Notice that this
architecture is very different from standard approaches where data is simply retrieved
from databases and stored in memory structures. A distinct feature we propose is
data polymorphism, with which the same data entity might have different represen-
tations depending on the moment and the intended use. Picture, for example, the entity
that models a given quantity of a product to be produced. First of all, this might
be referred to as a job, task, lot, production order and many other different names,
depending on the company. Second, the data required to work with such an entity
depends on the module. When visualising the production plan, detailed information
like the family, colour or other characteristics of the product to be produced is required. In
addition, internal data employed by the tool (such as the status of a job) have to be
handled. Finally, different data (and data formats) may have to be provided to the
different information systems interoperating with the scheduling tool. However, all
this information is not relevant for generating schedules, as typically only the code
of the job and processing time might be all that is needed. As a result, the type of
interface published by a data class or entity to each module differs according to the
intended use.
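The idea of a single data entity publishing different interfaces to different modules can be sketched briefly. The following is a minimal illustration only, not part of the architecture proposed in the text: it uses Python `Protocol` classes to play the role of the published interfaces mentioned above (the text cites C# as an example language), and all class and method names (`SchedulingView`, `VisualisationView`, `ProductionOrder`) are hypothetical.

```python
from typing import Protocol

# Hypothetical published interfaces: each module sees only the view it needs.
class SchedulingView(Protocol):
    """Narrow view used by the Schedule Generator module."""
    def job_code(self) -> str: ...
    def processing_time(self, machine: str) -> float: ...

class VisualisationView(Protocol):
    """Richer view used by the User Interface module."""
    def job_code(self) -> str: ...
    def family(self) -> str: ...
    def colour(self) -> str: ...

class ProductionOrder:
    """One data entity satisfying both published interfaces."""
    def __init__(self, code, times, family, colour):
        self._code, self._times = code, times
        self._family, self._colour = family, colour

    def job_code(self) -> str:
        return self._code

    def processing_time(self, machine: str) -> float:
        return self._times[machine]

    def family(self) -> str:
        return self._family

    def colour(self) -> str:
        return self._colour

def makespan_input(jobs: list[SchedulingView]) -> list[tuple[str, float]]:
    # The optimisation module only ever touches the narrow SchedulingView,
    # even though the underlying entity carries much more information.
    return [(j.job_code(), j.processing_time("M1")) for j in jobs]
```

In this way the same `ProductionOrder` object can be handed to the visualisation module as a `VisualisationView` and to the schedule generator as a `SchedulingView`, which is one possible reading of the data polymorphism described above.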
With this architecture, the BLU/DAM acts as a hub between all modules, ensuring
maximum expandability, flexibility and reusability. In particular, it should be
straightforward to provide a Web-based user interface, as such a UI could be
plugged in to the BLU/DAM without knowing any detail of the other modules.
Although the BLU/DAM does not directly incorporate any scheduling functionality, it
includes the functionalities related to constraint abstraction discussed in Sect. 11.6.2.
12.4 User Interface Module
Although this module might seem straightforward, it is indeed one of the most
complex modules in the architecture. Effective, complete and easy-to-use
visualisation is a hard task that requires extensive coding. This comes as no surprise,
since different scheduling problems require not only different solution procedures
but also equally varied and complex solution representations.
When building this module, the following design principles should be taken into
account:
1. The tool should be flexible enough to allow the Decision-Maker to follow his/her
preferred logical process when building a schedule. There are numerous studies
describing the unstructured nature of the decision process adopted by human
schedulers (see, e.g. MacCarthy and Wilson 2001), so it does not seem practical
to force the schedulers to adopt a particular one, even if the schedulers themselves
have actively participated in the design of the tool.
2. The tool should allow the Decision-Maker to focus on a specific view or step of
the scheduling process, starting from a general view and moving (if required) to
a more specific one. While it may be very tempting to let the decision-maker
modify input data while analysing schedules, we believe that this should be done
via different interfaces, so the user does not lose track of what he/she is doing.
3. The tool should not overload the decision-maker with data/information outside
the context in which the information is originally presented. Although deciding
which information should be displayed and which should not is difficult, we
believe that this can be achieved by a careful selection of the information and by
drill-down menus that, in any case, may allow the scheduler to customise the
information that should be depicted.
4. The tool should be able to capture all relevant scenarios for the decision-making
process, i.e. the Decision-Maker might wish to try different alternatives of per-
forming the scheduling tasks, including different sets of jobs, modes of operations
for the machines, shifts, etc. For each of these scenarios, different schedules may
be stored (some of them generated by the Schedule Generator submodule, while
others are manually built/modified by the Decision-Maker). All this information
should be available in a structured manner so that the task of the Decision-Maker is
facilitated.
These principles naturally lead to the separation of this module into different
interfaces depending on the view of the scheduling process addressed. While the
connection among these interfaces should be enforced, the logical distinction may
help the scheduler remain aware of the step in the process in which he/she is
currently involved. Consequently, up to five submodules can be identified inside the
user interface. These are the following:
• Output Representation. This submodule implements a set of functionalities devoted
to presenting the information to the Decision-Maker. As we shall discuss, this submodule
can also be employed as an input channel from the Decision-Maker, if the manipulation
of the schedules is allowed.
• Scenario Management. This submodule serves to implement a number of function-
alities related to handling different scheduling scenarios. By scheduling scenarios
we mean different schedules or even different instances created from the original
schedule provided by the Schedule Generator module due to the manual intervention
of the scheduler. As we shall discuss, this submodule is instrumental in
providing the tool with the ability to perform a what-if analysis.
• System Maintenance. This submodule refers to the part of the user interface
devoted to ease the creation and maintenance of the data required by the scheduling
tool, including a number of functionalities so that the consistency and comprehensiveness
of the data entered are checked, and alarms are triggered if discrepancies
between past and current data are observed.
• Scheduling Control. This submodule deals with the functionalities related to
visualising, possibly in real time, the state of the shop floor. This submodule is closely
linked to the Production Monitoring submodule described in Sect. 12.3.
• Algorithm Generator Interface. This submodule may allow the user to edit new
algorithms in an easy manner.
These submodules are described in detail in the following sections.
Once a solution for a given scheduling problem or production plan has been provided
by the Schedule Generator Module, it has to be visualised. In addition, in order to
build an interactive tool, the same interface used to visualise the solution can be
employed to allow its modification. Moreover, it has been argued (Pinedo 2009)
that Output Representation interfaces should allow (at least up to a certain extent)
for the modification of the input data, as some of these data (for instance, due dates)
often have to be changed during a scheduling session.
Several interfaces for the presentation and manipulation of solutions can be con-
sidered. These are:
• Gantt Charts. Gantt charts are the most popular form of schedule manipulation
and have already been presented throughout this book. The typical form of the
Gantt chart is a horizontal bar chart in which the x-axis represents time and the
y-axis represents the different machines or stages in the manufacturing process.
Jobs or tasks are depicted as rectangles whose length corresponds to the processing
time of the tasks. The identification of the different jobs in a Gantt chart is done by
labelling these rectangles and/or depicting them with different colours or patterns.
This format can also be employed to depict some other characteristics of the jobs
and/or the resulting schedule, such as the tardiness or earliness of some jobs.
Furthermore, features related to the properties of the jobs, machines or
schedules can be represented in a Gantt chart, such as job earliest starting times,
due dates and deadlines (which can be shown as vertical lines), frozen jobs (which may
be rendered in a different colour/pattern), machine unavailabilities (which may be
pictured as shaded), etc. Gantt charts are a well-known and (relatively) easy-
to-understand visualisation tool and therefore constitute the primary form of
showing aggregate information in a scheduling tool (Wiers 1997).
Regarding the manipulation of the schedule shown in the Gantt chart by the
scheduler, several options are possible (see, e.g. Speranza and Woerlee 1991, or Baek
et al. 1999), namely:
– Add/Delete. New jobs are manually entered into, or removed from, the existing
schedule. This makes it necessary to show the Decision-Maker a zone
in the interface where non-scheduled jobs are shown (or, alternatively, where jobs
removed from the schedule can be stored).
– Interchange. The relative sequence of two or more jobs is interchanged. This
usually leads to a limited change in the schedule (at least as compared to other
options).
– Split/Join. A job is split into two or more tasks. Conversely, two or more
tasks are merged into a single job. The implications of this movement are not
straightforward, as the processing times of the machines for the joined/split
jobs are different from those in the initial schedule. Therefore, it would be
advisable to invoke the Schedule Generator once again in order to detect new
(possibly improved) schedules.
– Lock. A job is locked in the Gantt chart, i.e. its starting and finishing times
are fixed. An alternative would be to lock the relative position of a job on the
machine, so that the starting and finishing times can be changed, but not the
order of the job with respect to the rest of the jobs on this machine.
Note that some of these options can be performed by clicking, dragging and dropping
the individual jobs. However, providing the interface with these capabilities is far
from being a trivial task, since after changing the position of a job on a machine,
the rest of the jobs have to be pushed backward or forward to maintain feasibility.
The effect of this movement on the rest of the machines is referred to as cascading
or propagation (see, e.g. Pinedo 2009) and it may lead to substantial changes
with respect to the schedule provided by the application. Therefore, an option
that should be available to the decision-maker is whether or not to invoke a
rescheduling procedure, possibly after locking some of the jobs.
In addition to a detailed representation of the individual jobs, an aggregated view
of the whole schedule (possibly representing shifts or sets of related jobs, and
groups of machines or stages) would be interesting for the Decision-Maker. This
aggregated view should be static, i.e. it should not allow the manipulation of the
individual schedules. However, it has the advantage of showing a global view of
the process (particularly needed when the number of jobs/machines in the plant is
rather large) that can be useful when assessing the impact of the local changes in
the schedule.
After allowing all this manipulation, the Decision-Maker may want to save
the schedule, or to ‘undo’ certain modifications. The first option is discussed
when describing the functionalities of Scenario Management in Sect. 12.4.2. With
respect to ‘undoing’, we believe that this option should be available (in order to
recover from mistakes when manipulating schedules), but only for a limited number
of moves, as the Decision-Maker may easily lose track of what he/she
is doing. It is preferable, thus, that the tool offers to save the current schedule as a
scenario every time a small number of manipulations has been performed.
• Job Screens. A job screen (see, e.g. Higgins 1996) is an ordered list of the jobs
that should be processed on each machine. Another name for job screens is
dispatch lists (see Pinedo 2009). A number of characteristics of the jobs, such as
the processing times, their corresponding weights and/or priorities, the starting
and finishing times, and deadlines, etc. can be shown. Job screens provide a
representation of the schedule that is focused on each manufacturing resource, and
therefore seem quite suitable for the manipulation of the schedule. For
instance, given the list of the jobs to be processed in front of a machine, the scheduler
can alter this order according to constraints or considerations that have not
orders shipped. Note that the difference between the curves of orders produced
and shipped accounts for the work-in-process. In addition, the expected amount of
orders can also be depicted, so this diagram allows visualising the deviations from
the planned schedules, particularly if the data shown in the diagram are captured
in real time. Throughput diagrams are devised for the representation of schedules
and not for their manipulation, so there is no interactivity linked to this type
of diagram. For a detailed explanation on the uses of throughput diagrams, we
refer the reader to Wiendahl and Tonshoff (1988).
An alternative name for these diagrams is IOP (Input-Output Planning) diagrams
(Wiers and Van Der Schaaf 1997); they are available in many standard software
packages for production planning and control (Wortman et al. 1996).
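The cascading (or propagation) effect described above for Gantt-chart manipulation can be sketched minimally. The following is a simplified, single-machine illustration only (the full effect also propagates across machines): after the user drags one job to a new starting time, later jobs are pushed forward so that no two jobs overlap. The tuple layout `(code, start, duration)` is a hypothetical data format chosen for the sketch.

```python
def push_forward(jobs, moved_code, new_start):
    """Re-timetable a single machine after manually moving one job.

    jobs: list of (code, start, duration) tuples; returns a repaired list
    in which the moved job keeps its new start and later jobs are pushed
    forward just enough to restore feasibility.
    """
    jobs = sorted(jobs, key=lambda j: j[1])               # order by start time
    jobs = [(c, new_start if c == moved_code else s, d)   # apply the manual move
            for (c, s, d) in jobs]
    jobs.sort(key=lambda j: j[1])                         # re-order after the move
    repaired, machine_free_at = [], 0.0
    for code, start, dur in jobs:
        start = max(start, machine_free_at)   # push forward if it would overlap
        repaired.append((code, start, dur))
        machine_free_at = start + dur         # machine busy until here
    return repaired
```

For instance, moving job B of `[("A", 0.0, 3.0), ("B", 3.0, 2.0), ("C", 5.0, 4.0)]` to start at 4.0 forces C to be pushed from 5.0 to 6.0, which is precisely the kind of change the scheduler should be able to inspect before deciding whether to invoke a full rescheduling procedure.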
In addition to the types of diagrams described before, a number of reports containing
statistics pages and performance criteria sheets must be provided, among many
other possible data visualisations. It is tempting to overload the decision-maker with
reports and screens very rich in information, but in this case we believe that it is
preferable to concentrate on only one of the different views or perspectives of the
scheduling process. Most of these reports should be developed ad hoc, as they must
comply with company-specific forms and data.
As soon as the scheduler manually modifies the schedule proposed by the tool via
some of the interfaces described in the previous section, a new schedule is generated.
Analogously, if the scheduler modifies some input data (e.g. in order to anticipate a
delay in the arrival of raw materials required to start a certain job, its corresponding
release time is changed), a new instance is generated, and several schedules can
be offered by the tool, or be obtained after a manual modification by the scheduler.
Each one of these alternatives would eventually have to be confronted by the scheduler
in order to study the different possibilities and trade-offs offered. We will call
each one of these alternatives a scenario, noting that a scenario consists of an
instance plus a corresponding set of schedules. The tool must then allow the scheduler
to record each scenario (giving it a label or description) and to compare them.
These are the basic functionalities of the Scenario Management submodule.
Obviously, this submodule has to be tightly connected to that of Output Repre-
sentation, as the latter provides the starting point for the former. At some point while
modifying the schedules and/or the instances in the Output Representation submodule,
the scheduler must be able to create a new scenario by ‘dropping’ this solution
(together with the corresponding instance) into a ‘Scenario List’. A label and a
description of this scenario should be given, so that the scheduler can easily identify
it at a later stage. Once the initial selection of scenarios has finished, the user must be
able to see the main performance measures of the selected scenarios and to see the
full details of the scenario, perhaps by double-clicking on it. In order to make this
process systematic, our opinion is that the scheduler should not be able to modify
the scenario from this submodule. However, he/she can transfer the scenario back
to the Output Representation module (by ‘copying’ the scenario) and then make all
necessary manual modifications and save it as a new scenario in the ‘Scenario List’.
The differences among the scenarios in the Scenario List should be clearly visible,
both by forcing the Decision-Maker to introduce different, meaningful labels for
each scenario, and by allowing the Decision-Maker to see a summary of the main
characteristics of each scenario (related both to input and output data).
The submodule may allow the user to discard scenarios that have been initially selected,
but given the effort involved in generating a valid scenario, we believe
that scenarios should not be completely removed unless explicitly stated by the
decision-maker. To this end, a ‘folder’ of discarded scenarios should be kept apart from
those under scrutiny.
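The data organisation just described (a scenario as an instance plus its set of schedules, an active ‘Scenario List’, and a separate ‘folder’ of discarded scenarios) can be sketched as follows. This is an illustrative skeleton only; the class and field names (`Scenario`, `ScenarioList`, `drop_in`, `copy_back`) are hypothetical and not part of the architecture in the text.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """A scenario = an instance plus a corresponding set of schedules."""
    label: str
    description: str
    instance: dict                      # input data (jobs, machines, calendars, ...)
    schedules: list = field(default_factory=list)

class ScenarioList:
    """Active scenarios plus a 'folder' of discarded (never silently deleted) ones."""
    def __init__(self):
        self.active, self.discarded = {}, {}

    def drop_in(self, scenario):
        # 'Dropping' a solution (with its instance) into the Scenario List.
        self.active[scenario.label] = scenario

    def discard(self, label):
        # Discarding moves the scenario to a separate folder; it is not removed.
        self.discarded[label] = self.active.pop(label)

    def copy_back(self, label):
        """Scenarios are read-only here; a copy is sent back to Output
        Representation for further manual modification."""
        src = self.active[label]
        return Scenario(src.label + " (copy)", src.description,
                        dict(src.instance), list(src.schedules))
```

The `copy_back` method mirrors the rule stated above: the scheduler cannot modify a scenario from this submodule, but can transfer a copy back to the Output Representation module and save the result as a new scenario.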
We believe that this submodule may greatly enhance the decision support capa-
bilities of a scheduling tool, as the analysis of scenarios is pointed out by McKay
and Wiers (2003) as an important element in a scheduling tool. Let us not forget that
once a scheduling tool is available to the user, more and more strategic decisions
can be taken. For example, questions like ‘What would be the result with an
additional machine?’ or ‘What if I cancel the night shift?’ can be answered with a
Scenario Management submodule.
Of course, all the persistent information in the System Database must be maintained,
from the configuration of the shop and existing machines to the detailed production
characteristics of each product. As simple as this might seem, the sheer amount of
production data often results in a cluttered user interface. A number of interfaces can
be built to present and edit these data, such as the plant layout interface and the jobs
interface.
The plant layout interface may depict graphically the work centres and machines in
the plant. If possible, it should also represent the routing between the work centres,
although for many shop floors this may not be possible or may result in a very
confusing representation. In such cases, it is preferable to depict just the work centres
together with their description and characteristics. An important part is to include
(either in this interface or in a separate one) an entry point for the resource calendar,
as it is likely that the machines are not always available.
The jobs interface would serve to enter all data related to the jobs and to their
relationship with the machines. Note that some of these data may be particularly
awkward to enter, such as, e.g. those related to sequence-dependent setups.
Special care should be taken to devise simple screens for this type of interface, as it
relates to data that are frequently modified. Nevertheless, some of these data would
not change, so past configurations should be stored and the scheduler should
be asked whether or not these data must be pre-loaded in order to ease the task of
entering new information.
In order to be usable, maintenance screens should be easy to work with. We
advocate the usage of simple data wizards and automation tools. For example, when
purchasing an additional parallel machine, it is surely easier to copy an existing one
and later specify the small differences than to introduce a new machine from
scratch. The same can be said regarding entering new jobs into the system.
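The copy-and-adjust wizard just suggested can be illustrated with a few lines. This is a sketch under assumed conventions: machines are held in a plain dictionary keyed by identifier, and the helper name `clone_machine` is hypothetical.

```python
import copy

def clone_machine(machines, src_id, new_id, **overrides):
    """Create a new machine by copying an existing one and then
    specifying only the small differences (the overrides)."""
    new = copy.deepcopy(machines[src_id])   # deep copy so edits don't leak back
    new.update(overrides)                   # apply the few fields that differ
    machines[new_id] = new
    return new
```

For example, `clone_machine(machines, "M1", "M2", speed=1.2)` would register a second parallel machine identical to `M1` except for its speed, sparing the user a full data-entry session.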
Finally, note that even in the case of a scheduling tool seamlessly connected to
other information systems providing all sources of data, we believe that this submod-
ule should be kept, so that the tool is able to generate new scenarios for the Scenario
Management submodule.
Regarding the Scheduling Control submodule, a separate interface should be devised
for handling, in real time, the data coming from the BLU/DAM module, since such
real-time data must be handled independently. Should the current schedule receive
an event from the production floor, this event must generate a visual warning to the
user. Once this warning is acknowledged, a detailed report of changes to the current
schedule should be provided. Some of this information is needed in order to cope
with functionalities mentioned in Sect. 11.6.8. More specifically, input data checking
and feasibility analysis must be performed for each new scenario.
Last but not least, the scheduling tool should allow for the creation of new algorithms.
In order to hide the small details of the algorithms from the user, a friendly interface
should be provided.
12.5 Schedule Generator Module
This module contains four main submodules, i.e. Algorithm Library (which in turn
may be divided into Scheduling Algorithms and Rescheduling Algorithms), Algo-
rithm Generator, Scheduler and Dispatcher, and Preprocessor. In order to be adapt-
able, the optimisation algorithm must work on an abstraction of the scheduling model,
or better, on an abstraction of the representation of a possible solution. For example,
a typical permutation flow shop solution for makespan criterion can be represented
with a simple permutation of the jobs. The detailed schedule with all the start and
finish times of every task is often of little relevance for the optimisation methods.
As mentioned before, these abstractions are provided by the BLU/DAM. Notice that
once the solution to a problem is returned to the BLU/DAM, a complete and much
more detailed schedule might be built by the user interface module.
A key part of the architecture is the separation of all non-essential
constraints from the schedule generation process. Many constraints can be
accounted for when the detailed schedule is constructed. Of course, some level of
accuracy will be lost in the process, but it has to be carefully studied whether this
decreased accuracy is compensated for by a more general and adaptable optimisation
method. In order to reflect this, we separate the Algorithm Library submodule from
the Scheduler and Dispatcher submodule. A given—and more general—algorithm
produces an optimised production schedule where all major constraints have been
considered. In a second step, the Scheduler and Dispatcher submodule produces
an even more detailed schedule where minor constraints are also considered. By
operating in this way, the scheduling tool might accommodate new constraints and
new situations in a more flexible and agile way.
Similarly, the number and variety of optimisation criteria or performance measures
found in practice are extremely large. As a result, for the calculation of the performance
measure for a given solution, a two-stage approach is preferable. In the first stage,
a detailed schedule is derived from the abstract representation of the solution. All
relevant data, and most importantly, completion times of all tasks, are calculated. In
a second stage, specific performance criteria are derived from the completion times
calculated in the first stage. This two-phase approach allows for great flexibility and
independence when defining new and ad hoc criteria. With the previous premises,
a given scheduling algorithm needs to know in detail neither the specific
constraints and small details of the underlying model nor the particular scheduling
criterion.
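The two-stage approach can be made concrete for the permutation flow shop example mentioned above. The following sketch (illustrative function names, standard flow shop recursion) first expands the abstract solution, a job permutation, into completion times, and only then computes criteria from those times, so new criteria can be plugged in without touching the decoding stage.

```python
def completion_times(perm, p):
    """Stage 1: expand an abstract solution (a permutation of job ids)
    into completion times for a permutation flow shop.
    p[j] is the list of processing times of job j on machines 0..m-1."""
    m = len(p[perm[0]])
    C = {}
    prev_row = [0.0] * m          # completion times of the previous job
    for j in perm:
        row, ready = [], 0.0
        for k in range(m):
            # A task starts when both the job (ready) and machine (prev_row[k]) are free.
            start = max(ready, prev_row[k])
            ready = start + p[j][k]
            row.append(ready)
        C[j] = row
        prev_row = row
    return C

# Stage 2: any criterion is derived from the completion times alone.
def makespan(C):
    return max(row[-1] for row in C.values())

def total_completion_time(C):
    return sum(row[-1] for row in C.values())
```

Here the optimisation algorithm manipulates only the permutation; `makespan`, `total_completion_time` or any ad hoc criterion can be added later without the algorithm ever knowing which one is in use.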
Notice also that the Algorithm Library should contain not only full-scale optimisation
algorithms but also some smaller schedule repair and rescheduling algorithms
that might be invoked by the user interface module as a result of changes in a given
schedule.
On the other hand, it may happen that more than one algorithm can be employed
to solve a particular problem. Among these, the tool should suggest those most
suitable for each case.
Regarding the capacity of a manufacturing scheduling tool to be enhanced with
new algorithms, McKay et al. (2002) note that few available tools contain libraries of
configurable algorithms that plug and play. It has to be noted that the requirement is
not only that completely new algorithms can be included in the library, but also that
existing algorithms can be tailored and that certain parts of these algorithms can be
integrated into new ones. With respect to the extensibility of scheduling tools,
McKay et al. (2002) also state that few of them allow the enhancement or combination of
existing algorithms in a simple manner. For most commercial software, either these
algorithms cannot be enhanced or the procedure for their enhancement is extremely
complex (Stadtler and Kilger 2002).
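One simple way to obtain the plug-and-play library of configurable algorithms that McKay et al. call for is a registry in which each algorithm declares the problem features it supports, so the tool can suggest the suitable ones for each case. The sketch below is a hypothetical design (the decorator `register`, the feature tags and the placeholder algorithm bodies are all assumptions, not an existing API).

```python
_LIBRARY = []   # the Algorithm Library: one entry per registered algorithm

def register(name, supports):
    """Decorator: plug an algorithm into the library, tagged with the
    problem features it can handle."""
    def wrap(fn):
        _LIBRARY.append({"name": name, "supports": set(supports), "run": fn})
        return fn
    return wrap

def suitable(problem_features):
    """Suggest the algorithms whose supported features cover the problem."""
    need = set(problem_features)
    return [a["name"] for a in _LIBRARY if need <= a["supports"]]

@register("NEH-like heuristic", supports={"flowshop", "makespan"})
def neh(instance):            # placeholder body for illustration only
    return sorted(instance)

@register("dispatch-rule repair",
          supports={"flowshop", "makespan", "rescheduling"})
def repair(instance):         # a smaller rescheduling/repair algorithm
    return instance
```

New or tailored algorithms are added by registering another function, without modifying the tool itself; `suitable({"flowshop", "makespan", "rescheduling"})` would then propose only the algorithms able to handle rescheduling.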
The blocks described in this section serve to implement many functionalities
discussed in an earlier chapter. First, model detection functionalities (Sect. 11.6.2)
can be implemented in the Preprocessor submodule, although this requires intervention
from the user interface module. Regarding problem-solving functionalities
(Sect. 11.6.3), rescheduling algorithms are included in the Algorithm Library
submodule together with scheduling algorithms (thus allowing multi-algorithm
scheduling). An Algorithm Generator submodule is connected to the Algorithm Generator
Interface (already discussed in Sect. 12.4) to facilitate the generation of new
algorithms, which again requires a tight integration with the user interface. The Algorithm
Library can also contain functionalities for the evaluation of algorithms. The
only functionality presented in Sect. 11.6.3 and not included in our proposal is the
incorporation of human expertise, as its advantages are a subject of controversy.
However, its integration in the proposed architecture is relatively straightforward.
Finally, functionalities related to the evaluation of solutions for several objectives
(discussed in Sect. 11.6.4) are covered by the Scheduler and Dispatcher submodule.
Cleanly modularising this part of the architecture is a challenging task, as it is
tempting to incorporate algorithm logic in other modules. However, abstraction and
modularisation are a must in order to avoid re-implementing other modules at each
implementation of a scheduling tool.
12.6 Conclusions and Further Readings
In this chapter we have moved from the high-level, conceptual issues regarding the
development of a manufacturing scheduling tool described in Chap. 11 to detailed
design issues. This has resulted in an outline of the main contents of the modules
identified in the architecture in Chap. 11.
Summing up, the modules described in this chapter address a number of
functionalities discussed in Chap. 11: in addition to the representation of the solutions
(see Sect. 11.6.2), which is covered by the Output Representation submodule, and the
capability for the scheduler to manipulate the solutions found by the tool (discussed
in Sect. 12.4), the proposed architecture sets the basis for an intuitive generation
of new algorithms (see Sect. 11.6.3) by using the Algorithm Generation Interface
submodule, and eases the analysis of scenarios (see Sect. 11.6.4) by employing a
submodule for Scenario Management. Finally, functionalities related to reactive
scheduling (see Sect. 11.6.5) are provided by the Scheduling Control submodule.
There are a number of issues that can be concluded from this exposition. The
first one is the careful choice of the technology (computers, databases, programming
languages, etc.) to be employed in the manufacturing scheduling tool. Note that many
of the features described in this chapter (and particularly those referring to the user
interface) cannot be implemented, e.g. on a light client or a Web-based user interface.
Another important conclusion is the enormous amount of ‘overhead’
work related to the development of a manufacturing scheduling tool beyond the effort
to build appropriate models and solution procedures. Even for a tool designed to
support a single scheduling model and its corresponding solution procedure, a vast
number of functions for entering, registering, tracking and checking data have to be
written in order to make the software usable. This may be another reason explaining
both the difficulty of moving from optimisation-based scheduling to decision-making
scheduling and the high failure rate of some implementations of
scheduling tools if these supporting, yet vital, functions are not carefully addressed.
Despite the applicability of the issues treated in this chapter regarding the
implementation of a scheduling tool, the existing literature is quite scarce. This chapter has
been elaborated around the ideas in Framinan and Ruiz (2010), using much material
from Higgins (1996), who discussed the main problems related to user interaction in
scheduling software. Several articles cited in this chapter (such as, e.g.
Wiers 1997, or Pinedo 2009) constitute a good starting point for extending some of the topics
treated here. Finally, additional information on throughput diagrams can be found in
Wiendahl and Tonshoff (1988).
References
Baek, D., Oh, S., and Yoon, W. (1999). A visualized human-computer interactive approach to job
shop scheduling. International Journal of Computer Integrated Manufacturing, 12(1):75–83.
Framinan, J. and Ruiz, R. (2010). Architecture of manufacturing scheduling systems: Literature
review and an integrated proposal. European Journal of Operational Research, 205(2):237–246.
Higgins, P. G. (1996). Interaction in hybrid intelligent scheduling. International Journal of Human
Factors in Manufacturing, 6(3):185–203.
Hopp, W. J. and Spearman, M. L. (1996). Factory physics. Foundations of manufacturing manage-
ment. Irwin, New York, USA.
Jourdan, L., Basseur, M., and Talbi, E.-G. (2009). Hybridizing exact methods and metaheuristics:
A taxonomy. European Journal of Operational Research, 199(3):620–629.
Li, H., Li, Z., Li, L., and Hu, B. (2000). A production rescheduling expert simulation system.
European Journal of Operational Research, 124(2):283–293.
MacCarthy, B. L. and Wilson, J. R., editors (2001). Human performance in Planning and Scheduling.
Taylor & Francis.
McKay, K. and Wiers, V. (2003). Integrated decision support for planning, scheduling, and dis-
patching tasks in a focused factory. Computers in Industry, 50(1):5–14.
McKay, K. N., Pinedo, M. L., and Webster, S. (2002). Practice-focused research issues for scheduling
systems. Production and Operations Management, 11(2):249–258.
Pinedo, M. (2009). Planning and Scheduling in Manufacturing and Services. Springer, New York,
second edition.
Pinedo, M. L. and Yen, B. P.-C. (1997). On the design and development of object-oriented scheduling
systems. Annals of Operations Research, 70(1):359–378.
Speranza, M. and Woerlee, A. (1991). A decision support system for operational production schedul-
ing. European Journal of Operational Research, 55(3):329–343.
Stadtler, H. and Kilger, C. (2002). Supply chain management and advanced planning. Springer,
Heildelberg.
Wiendahl, H.-P. and Tonshoff, K. (1988). The throughput diagram - an universal model for the illus-
tration, control and supervision of logistic processes. CIRP Annals - Manufacturing Technology,
37(1):465–468.
Wiers, V. (1997). Human-Computer Interaction in Production Scheduling: Analysis and Design of
Decision Support Systems for Production Scheduling Tasks. Technische Universiteit Eindhoven.
Wiers, V. and Van Der Schaaf, T. (1997). A framework for decision support in production scheduling
tasks. Production Planning & Control, 8(6):533–544.
Wortman, J., Euwe, M., Taal, M., and Wiers, V. (1996). A review of capacity planning techniques
within standard software packages. Production Planning & Control, 7(2):117–128.
Part V
Scheduling Systems
Part V of the book rejoins the elements described in the previous parts and
focuses on the scheduling system as a whole. We first discuss in Chap. 13 the
organisational perspective in scheduling, paying attention to how humans conduct
the scheduling process. In Chap. 14, we present a roadmap for developing a
scheduling system, addressing all relevant issues throughout its life cycle and
pointing to the corresponding section of the book where these are treated. This part
concludes with Chap. 15, in which a real-life case study is described and analysed.
Chapter 13
Overview of Scheduling Systems
13.1 Introduction
In this chapter, we approach for the first time the integration of the three elements
discussed in precedent chapters (i.e. models, methods and tools) in a manufacturing
scheduling system. To do so, we need to gain an in-depth understanding on how
the manufacturing scheduling process is carried out and how the three elements can
contribute to improve this process.
More specifically in this chapter, we
• discuss the integration of the three elements within a system (Sect. 13.2),
• present the organisational perspective in scheduling (Sect. 13.3),
• investigate the main tasks carried out by human schedulers (Sect. 13.4) and how
these are carried out (Sect. 13.5),
• identify the main features of human scheduling in view of the outcome of the
previous item (Sect. 13.6) and
• discuss the integration of the human element in the scheduling system (Sect. 13.7).
In the previous chapters, most of the different elements constituting a scheduling
system have been discussed. Given the breadth and complexity of some of these issues,
some aspects have been only briefly mentioned, while a more extensive discussion
would have been possible for others. It is now the point where all these
elements are assembled together to make a scheduling system.
Recall that in Chap. 1 we define a scheduling system as a combination of
models, methods and tools together with the human schedulers involved in the
process. Issues regarding scheduling models have been discussed in Chaps. 3–6,
while the main aspects of scheduling methods are addressed in Chaps. 7–10.
Chapters 11 and 12 discuss the main issues related to the development of a manufac-
turing scheduling tool. In the next three chapters, we relink these different aspects
related to a scheduling system. Particularly, we will focus on the integration of the
manufacturing scheduling tool (with its models and methods) into an organisation. In
order to conduct such integration in an effective manner, the tool should first reflect
the needs and expectations of the company, and second, the so-developed tool should
be deployed in an effective manner.
The first of these issues (reflecting the needs and expectations of the company in
which the tool is installed) is closely related to the understanding of how scheduling
tasks are performed in companies. Since these tasks are, as we shall see later, generally
performed by specific staff (schedulers), this leads us to the need to understand the
nature of human scheduling, its main activities and features, and how these could
be integrated into the scheduling tool. All these aspects constitute the content of this
chapter.
The second issue (how to implement and deploy the tool into a company) obviously
encompasses many company-specific aspects, although there may be a number of
lessons that can be regarded as common for most scheduling systems and that can
be thus extracted from an analysis of past implementations. Therefore, guidelines
to effectively conduct the deployment of manufacturing scheduling systems can
be developed from an analysis of the evidence provided by the experiences described
in the literature. In Chap. 14, we discuss and classify these experiences and extract
such guidelines. Finally, a case study that can be employed to illustrate the process
of deploying a manufacturing scheduling system is provided in Chap. 15.
First, we start by trying to identify the staff in the company who make scheduling-related
decisions. In one survey, Halsall et al. (1994) found that 64 % of small-to-medium
enterprises had a member of staff specifically responsible for production scheduling.
It seems sensible to assume that this percentage is much higher for bigger companies.
However, in many companies this scheduler role is not exclusive: indeed, the job title
'scheduler' is very rarely found in practice (Vernon 2001).
The tasks usually allocated to this staff are:
• Assigning jobs to machines. Although (in theory) this seems to be the primary focus
of schedulers, field studies show that the time to build and manipulate schedules
only represents 10–20 % of their time (Fox and Smith 1984; Grant 1986).
• Dealing with problems affecting the schedules. This is, indeed, the most cited
task in many case studies (see, e.g. Crawford 2001). This task is usually addressed
by dynamic changes to resources, processes, quantities, materials, dates, operators,
etc. (McKay and Wiers 2006). An important characteristic of this task is
that, as we will discuss later, it is usually performed in an urgent fashion.
• Anticipating and avoiding future scheduling problems. To do so, schedulers spend a lot
of time checking and correcting inconsistencies between different sources, and
gathering new information (Webster 2001). In addition, schedulers may change
the current schedule in order to avoid problems in the future (Crawford 2001):
an example is a scheduler who knows that the raw materials required for some
specific orders will not be available, and therefore does not release these particular
orders to the shop floor, so as to avoid accumulating inventory and/or
creating future problems (Fig. 13.2).
Regarding how these tasks are carried out, McKay and Buzacott (2000) found that
a common scheduling strategy is adopted—explicitly or implicitly—across different
companies. This strategy (or process, in the following) consists of seven steps:
1. Situation assessment. The current shop situation is analysed and the differences
between the actual situation and the expected situation are determined. This usually
involves a walkabout, taking note of the most relevant aspects, the status of
the different orders, and how well the process is doing. Note that, in assessing
the situation, the scheduler adopts much of the role of an 'information investigator'
(Crawford 2001). Typical examples of the information required are the
changes in the daily shipping requirements, the jobs processed in the
last few hours, or the current inventory levels (McKay and Wiers 2006).
By accomplishing this step, the problem definition is refreshed and updated, and
a new view of what the problems are emerges.
Fig. 13.2 Scheduling tasks (adapted from McKay and Buzacott 2000)
2. Crisis identification. The next step is to identify the special problems or 'crises'
in the shop. These are likely to be the most constrained or the most important
activities in the shop, which are prioritised so that they are treated as soon as possible,
before reflecting on other problems in the shop. Typically, crises are jobs running
behind their processing estimates, jobs where the wrong material was used, jobs
where the yield is lower than required, jobs where the setups were substandard,
jobs which made the wrong parts, jobs where the inventory records were corrected
and significant losses occurred, etc.
Interestingly, part of this step is to classify a problem either as a crisis to be solved,
or simply as a potential issue that has to be monitored. There is evidence that expert
schedulers know that responding too quickly to a disturbance may be detrimental
to the schedule, as sequentially occurring disturbances can cancel out a previous
disturbance or compound it (Vernon 2001).
3. Immediate resequencing and task reallocation. Using the information about the
status of the shop (obtained from Step 1) and the priority of the different issues
(gathered in Step 2), a number of changes are devised to correct the deviations
from the expected plans. This is done in an urgent manner with the purpose of
altering the trends in the plan currently running in the shop, and may entail a broad
spectrum of activities, focusing on the jobs (job expediting, job preemption,
cancellation of existing jobs, changing the order of the current jobs, etc.), on
the resources (subcontracting, overtime, moving operators from one resource to
another, etc.) or even on the processes (alternative processes can be employed
for current jobs, or designed for the new jobs).
Note that, in this step, the crises are addressed by resequencing and reallocation,
meaning that there is no intention to provide a new plan for jobs other than those
directly affected by the crises to be solved (the 'hot' jobs). Indeed, by altering the
plan for the hot jobs, the plan may become infeasible for other jobs in the immediate
future (McKay and Wiers 2006). McKay and Wiers (2006) also state that, while
a decision-support system can be of help for this step, the ability to make and
negotiate dynamic changes to the problem definition is currently in the realm of
human capabilities.
4. Complete scenario update. The overall scenario (that is, not only the hot jobs that
have been altered in Step 3) is updated. A picture of the remaining
jobs thus emerges, so the scheduler knows the current tasks and allocations as a starting
point. As mentioned in the previous step, the reallocation may have caused new
problems for future jobs, so these new work requirements have to be considered
from this new starting point.
5. Future problem identification. Once the urgent problems are considered to be
fixed, the scheduler has a look at a longer time horizon in order to identify prob-
lems with the schedule or sequence outside the immediate dispatching horizon.
Some of them can be considered second-order effects or issues that break
the feasibility or desirability of the schedule (McKay and Wiers 2006), but other
future problems may be caused externally to the shop (see, e.g. the
case described by Crawford 2001).
At this point, McKay and Buzacott (2000) identify two variants in the process.
In a completely manual system or one with a traditional MRP mechanism, the
scheduler will look through the list of upcoming work and isolate work needing
special attention. In most cases, it is possible to obtain a list of planned work
by part or by work centre and restrict the time horizon for data selection. If the
scheduler has the benefit of a scheduling decision-support system, the work may
be displayed visually on the Gantt chart using simple rules such as earliest due
date. The basic information is the same as if a resource report were used, e.g.
the sequence of work orders for the resource.
In both cases, the schedulers scan the list looking for jobs that stick out. Since
planning is an iterative process that continues from day to day, completely new
work is usually towards the end of the time horizon and rolls in over time. The majority
of the work within the time horizon will already have been looked at in a past cycle
and corrective action taken. Schedulers will look for new work within the time horizon,
but in most cases, the decisions modified one day will hold for the next.
6. Constraint relaxation and future problem resolution. Following and during their
reflection in step five, schedulers identify hot jobs and processes in the future and try
to place them on the schedule that they think is best. This might involve changing
some other decisions or it might not. During this process, they identify possible
risks and try to set up the schedule to minimise or avoid the impacts. The scheduler
will attempt to solve these future problems with sequencing or allocation strategies
(McKay and Wiers 2006). However, if a problem cannot be addressed this way, the
situation will be dealt with by relaxing constraints. For example, a part might not have
been made for 6 months, and a test batch is needed before the full batch is run.
Or, certain resources are being overloaded and an alternative resource is made
available for a limited time and assigned the work. Once the scheduler relaxes the
constraint(s) that make the problem infeasible, he/she assumes that the problem
is fixed.
An important activity within this step is the development of contingency plans,
which is considered to be a key aspect of the scheduler’s behaviour (see, e.g.
McKay et al. 1995b or Vernon 2001). From the viewpoint of the development
of a manufacturing scheduling software tool, the need to create, store and
manipulate these contingency plans (i.e. alternative schedules) has to be considered.
Note that the 'analysis of scenarios' functionality described in Chap. 11
constitutes an implementation of this requirement.
7. Scheduling by rote. Finally, the routine work (i.e. jobs that have not been identified
as critical in the previous steps) is scheduled mechanically. The scheduler tries
to balance due dates and capacity constraints while trying to satisfy Industrial
Engineering requirements for preferred resources, etc. During this phase, he/she
tries to resolve conflicts as they appear by, e.g. looking at alternative resources,
batch splitting, etc.
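As an illustration of this last step, routine work is often sequenced with a simple dispatching rule such as earliest due date (EDD), as mentioned in step 5 above. The following is a minimal sketch; the job data and field names are hypothetical and not taken from the case studies:

```python
# Sketch: 'scheduling by rote' for routine jobs using the earliest-due-date
# (EDD) dispatching rule. Jobs already marked as 'hot' keep their priority;
# only routine jobs are (re)sequenced. All names here are illustrative.

def schedule_by_rote(jobs):
    """Return hot jobs first, then routine jobs sorted by due date (EDD)."""
    hot = [j for j in jobs if j.get("hot")]          # crisis jobs, already placed
    routine = [j for j in jobs if not j.get("hot")]  # everything else
    routine.sort(key=lambda j: j["due_date"])        # EDD: earliest due date first
    return hot + routine

jobs = [
    {"id": "J1", "due_date": 12},
    {"id": "J2", "due_date": 5, "hot": True},
    {"id": "J3", "due_date": 8},
    {"id": "J4", "due_date": 3},
]
print([j["id"] for j in schedule_by_rote(jobs)])  # → ['J2', 'J4', 'J3', 'J1']
```

In practice the rule would also have to respect the capacity and preferred-resource restrictions mentioned above; the sketch only captures the 'mechanical' sequencing aspect.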
From the description of this process, which is considered to be commonplace
for many schedulers, a number of clues can be obtained on how a decision-support
system can best aid the scheduling process. In addition, the process itself
may lead to certain features typical of human scheduling. These will be discussed in
the next section.
they perform their work. Nevertheless, this widely studied and accepted feature is
to be taken into consideration when accounting for human performance in schedul-
ing, and also when designing a scheduling software tool.
• Human schedulers reduce the problem space quickly (McKay et al. 1992). In some
cases, they (implicitly) adopt a Drum-Buffer-Rope approach (Goldratt and Cox
1992) and focus on the bottleneck of the plant (Webster 2001). They may also
create different sets of jobs according to their status in order to deal with smaller
(and more manageable) subsets.
This can be seen as an advantage in many situations where finding good schedules
(or even formalising the scheduling process) is difficult, but on the other hand it
can lead to an oversimplification which creates infeasible/suboptimal schedules
(Dessouky et al. 1995). In addition, the task of reducing the complexity of the
problem may represent a large percentage of schedulers' time, as studies suggest
that they spend 80–90 % of their time determining the constraints that will affect
scheduling decisions (see, e.g. Fox and Smith 1984; Grant 1986), which means
that only a small amount of time is spent on the generation and modification of schedules.
• There is a large variability in (human) scheduling practice, as field studies show
that the specific decisions taken depend on the particular scheduler
(Norman and Naveed 1990; Wiers 1996). As a consequence, in most cases the
developers of a specific manufacturing scheduling system in a company will not
have at their disposal explicit descriptions of scheduling procedures that can easily be
transferred to a software tool. On the contrary, such descriptions (or 'best practices')
would emerge (and possibly be negotiated) during the requirements elicitation
phase. This aspect will be elaborated further in Sect. 13.7.
• Human schedulers use their own sources of information. Instead of dealing with
the enormous amount of information required to make scheduling decisions (or
maybe due to this fact), schedulers establish and use their own 'information networks'
from which they collect the necessary information (McKay et al. 1995a).
Part of this information comes from verbal interactions on the shop floor (Vera
et al. 1993), or from 'informal' sources of information (Vernon 2001). Other
sources include interaction with other functions within the manufacturing
company that fall outside the scheduling area (Crawford 2001).
As they collect a great deal of information, one may think that schedulers also
constitute a source of information for different (related) functions in the company.
In this regard, some authors (Crawford 2001) consider the scheduler an 'information
hub', both using and disseminating information. True as this is, some
case studies find that schedulers may hide information from upper-level
planners, since they sometimes adopt objectives different from
those stated by these upper levels (Vernon 2001).
It should also be noted that not all the information gathered by human schedulers
is strictly required, as some information sources are used for verification,
leading to a duplication of effort in searching for information (Vernon 2001).
• Human schedulers tend to use untaught and non-routine heuristics in many cir-
cumstances, but particularly in conditions that they consider exceptional.
• Human scheduling is (specific) goal-driven. In most cases, rather than generating
the schedule by specified means or procedures, schedulers generate their schedules
to meet specific goals (Crawford and Wiers 2001). Some field studies (Vernon
2001) describe these goals as related to production performance rather than to
customer satisfaction, which does not always induce behaviour in the scheduler that is
congruent with management expectations.
• The role of humans when a decision-support system is employed is to take
control of the DSS whenever there exists a disruption of the schedule (Nakamura
and Salvendy 1994). Therefore, the major function of schedulers is dealing with
and preparing for contingencies (Thurley and Hamblin 1962).
In this section, we summarise the different features of human schedulers and extract
relevant conclusions within the context of a scheduling system.
Obvious as it may seem, the scheduling system should place humans at the top of the
decision-making system. Therefore, a scheduling tool must truly support, and not
replace, human decision-making (Crawford and Wiers 2001). This should also have
become clear from the scheduling process described in Sect. 13.5, where some
of the steps simply cannot be accomplished by a software tool. The variety of
sources from which human schedulers obtain information (some of them informal
and thus outside the scope of the company's information systems) and the instability of
the environment, which demands the ability to react to unexpected events, are at least two
reasons for this.
This view of the scheduling system is known as the 'interactive' or 'hybrid'
approach (Vernon 2001), under the assumption that this type of system outperforms
both fully automated and fully manual approaches (see, e.g. Haider et al. 1981;
Nakamura and Salvendy 1994; Higgins 1996; Baek et al. 1999).
It has been discussed earlier in the chapter that there is a large variability in scheduling
practice and that many scheduling heuristics are untaught and non-routine. The
implication of these facts is that, on one hand, scheduling performance may
depend not only on the specific scheduler, but also on the particular disposition, mood or
awareness of the same scheduler. The issue becomes more complicated when different
schedulers with different degrees of knowledge, experience, motivation, etc.
have to operate on the same shop floor.
On the other hand, it is likely that these heuristics and scheduling practices are
not explicit, not to mention documented in a formal way that allows their reuse and
improvement. Therefore, particular (i.e. individual) scheduling practices should be
investigated, formalised and documented. If possible, a collection of scheduling ‘best
practices’ should be extracted in order both to reduce the variability of the scheduling
process and to collectively improve scheduling performance.
A special remark should be made regarding the objectives to be pursued in the
scheduling activity, as these have to become a necessary part of the collection of
best practices. In order to align them with the expectations of the company's
management, the management goals should be integrated; for this reason, the final word
on these best practices should be given to top management and not confined to
the staff in charge of manufacturing.
As the untaught and non-routine heuristics tend to be used more in exceptional
conditions, a procedure for registering the 'new' heuristics that have been used can be
implemented.
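Such a registration procedure could be as simple as an append-only log of the heuristics applied and the conditions that triggered them, so that recurring 'exceptional' rules can later be reviewed and, if useful, formalised as best practices. A minimal sketch, with all field names hypothetical:

```python
# Sketch of a registry for ad-hoc scheduling heuristics: each time the
# scheduler applies a non-routine rule, it is logged together with the
# triggering condition, so recurring rules can later be formalised.
import datetime

class HeuristicRegistry:
    def __init__(self):
        self.entries = []

    def register(self, heuristic, condition, outcome=""):
        self.entries.append({
            "when": datetime.datetime.now().isoformat(timespec="seconds"),
            "heuristic": heuristic,    # free-text description of the rule used
            "condition": condition,    # what made the situation exceptional
            "outcome": outcome,        # optional note on how well it worked
        })

    def recurring(self, min_count=2):
        """Heuristics used at least min_count times: candidates for best practices."""
        counts = {}
        for e in self.entries:
            counts[e["heuristic"]] = counts.get(e["heuristic"], 0) + 1
        return [h for h, c in counts.items() if c >= min_count]

reg = HeuristicRegistry()
reg.register("split batch across M1/M2", "M1 overloaded")
reg.register("split batch across M1/M2", "rush order arrived")
reg.register("subcontract finishing op", "operator absent")
print(reg.recurring())  # → ['split batch across M1/M2']
```

A real implementation would persist the log and tie entries to schedule revisions, but even this simple form makes the otherwise tacit heuristics explicit and reviewable.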
As discussed earlier, human schedulers reduce the problem space. This characteristic
of human scheduling is more useful in the design phase of a scheduling system (where
schedulers may help to remove constraints/objectives that do not have an impact on
the performance of the system) than during the operation phase (where schedulers may
overlook important improvements as they oversimplify the problem). As a consequence,
human schedulers may be of great help when designing a scheduling system,
but, once it is finished, the system should provide them with all schedule information,
including all constraints and objectives.
In addition, it has been mentioned that schedulers build their own networks of
information, which makes them 'information hubs' (Crawford 2001). This
knowledge is relevant not only for designing decision-support systems, but also for gaining
an in-depth understanding of the shop floor.
It has been shown that the factors affecting scheduling success include schedulers'
cognitive abilities and how they use theoretical scheduling techniques (Stoop and
Wiers 1996). In addition, there is a wealth of literature stating that schedulers tend to
use primarily heuristics and procedures that they are able to understand (Baek et al.
1999; Wiers 2001; Freed et al. 2007, among others). Therefore, emphasis must be
placed on enhancing the knowledge of schedulers through training and upgrading.
13.7.3.2 Objectives
It has been shown in the previous section that, in many cases, schedulers generate
their schedules in order to meet specific goals, and that schedulers have a myopic
view of the process. All this naturally leads to the fact that their suggested scheduling
objectives may change over time, and that they are short-range objectives. A typical
example is achieving a certain output during a shift (see, e.g. the case study by
Vernon 2001), which may lead to speeding up 'easy' jobs by delaying others that are
more important, but also more complex or time-consuming.
Therefore, an effort to agree on scheduling objectives within the company, and to
make them explicit, has to be carried out in order to guard against local,
short-term objectives that may not be beneficial for the company in the long run.
The literature points to the need for scheduling to be carried out by personnel with
first-hand, intimate knowledge of the shop floor, given the complexity in scheduling
created by customer demands due to product customisation (see Thurley
and Hamblin 1962; Crawford and Wiers 2001, among others). Therefore, the training of
schedulers must not focus solely on the scheduling process in which they are primarily
actors, but on the whole process of the company. Note that the purpose here is not
that schedulers gain a deep knowledge of the rest of the activities of the company,
but that they understand these activities and their implications for their own task. Some
authors (e.g. MacCarthy 2006) state that ignoring the higher-level functions within
which lower-level allocation decisions are defined and constrained, and ignoring the
dynamic rolling pipeline of planned production with the problems and opportunities
it provides, constitutes a major drawback.
13.7.3.4 Practice
There are many field studies that (perhaps unsurprisingly) found relevant differences
in performance between expert schedulers and novice schedulers (Randel and Pugh
1996). One of the reasons is the fact that expert schedulers are more efficient at
searching for the relevant information when building the schedules (Vera et al. 1993),
which is an expertise that can be acquired mainly through practice.
The environment in which schedulers must make their decisions is almost unanimously
described as dynamic, complex and relatively unstable. It has to be noted as well that,
according to the findings in the previous section, the main way for the human scheduler
to reduce the complexity of the environment is via (over)simplification. Therefore,
efforts should be made to avoid the potential under-performance
due to this fact. Interestingly, some researchers have found that the parameters influencing
human scheduling behaviour are not plant-specific (Crawford and Wiers
2001); therefore, while the scheduler still needs an intimate knowledge
of the specific shop floor in which he/she is to perform his/her activities, it is
possible to devise a set of common guidelines to ease schedulers' activities.
Taking the environment as a whole, one (possibly obvious) solution mentioned
in the literature to reduce the level of uncertainty is to try to automate most of
the manufacturing environment, although some instability is inherent to
this environment. In order to minimise the level of risk and uncertainty that a human
scheduler has to deal with, it has been proposed that a higher-level solution needs
to be formulated. For instance, material, capacity and personnel shortages can be
reduced at a higher planning level. Basically, this can be done in three ways:
• Securing against uncertainty by building up inventories of material, as well as
human and machine slacks. This is the traditional form of dealing with uncer-
tainty, but it is far from being efficient or even feasible in today’s ever-competitive
environments.
• Taking into account scheduling-specific considerations at the planning level. In practice,
this leads to the monolithic approach to production planning that has been
widely regarded as infeasible in many production environments.
• Providing adequate coordination between the scheduling function and the higher-level
planning functions. This is probably the best approach, and it has been
discussed in Chap. 3. On the other hand, the issues mentioned in the previous
section regarding the tendency of human schedulers to hide information should be
taken into account.
Another approach is to break down the causes of instability and try to address
them, if possible, one by one. Instability refers to changes in the production process,
and in the materials and personnel employed, and it is usually due to internal and
external uncertainty (Halsall et al. 1994; but see Hopp and Spearman 1996 for a
slightly different classification under the names of internal and external variability).
Regarding internal uncertainty, companies appear to have some degree of certainty
about the stages in their production process, but the processing times and setup times
of these stages are less well known (Crawford and Wiers 2001). In some cases, the
uncertainty of the processing times may be inherent to the manufacturing process
(see, e.g. the tile production process described in Chap. 15), so there is little to be done.
For the rest of the cases, the lack of reliable data (regarding processing times) or of
documented, standard procedures (regarding setup times) can be fixed by proper
actions. Another source of instability is the breakdown of machines. Although there
is little that can be done by the scheduler, at least from a short-term perspective, one
way to handle this type of instability is to provide the scheduler with tools to assess
the criticality of the schedules with respect to the specific resources, so he/she can
think of alternative schedules in case a breakdown happens, or at least cap the overall
effect of the breakdown on the system.
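One simple way to support such a criticality assessment, sketched below under illustrative assumptions, is to estimate for each machine the share of scheduled work that would be stranded if it broke down; the machines carrying the largest share of the schedule are those for which contingency schedules are most needed:

```python
# Sketch: rank machines by the share of scheduled work that depends on them,
# as a crude proxy for schedule criticality under breakdowns.
# Machine names and times are illustrative, not from the case studies.

def criticality(schedule):
    """Map each machine to the fraction of total scheduled time it carries."""
    load = {}
    total = 0
    for machine, p_time in schedule:
        load[machine] = load.get(machine, 0) + p_time
        total += p_time
    return {m: t / total for m, t in load.items()}

# (machine, processing time) pairs in the current schedule
schedule = [("M1", 6), ("M2", 10), ("M1", 4), ("M3", 5)]
ranking = sorted(criticality(schedule).items(), key=lambda kv: -kv[1])
print(ranking[0])  # most critical machine and its load share
```

More refined measures would weigh in due dates, alternative routings or repair-time estimates, but even a load-share ranking tells the scheduler which resources deserve a prepared contingency schedule.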
Regarding external uncertainty, most problems come from the fact that either
the manufacturing process must start before the exact requirements of the customer
are known, or these requirements are subject to frequent changes. Note that this type
of uncertainty may in reality be a sort of 'internal' uncertainty, as in some companies
the necessary coordination between Sales and Manufacturing is missing. Whenever
this is the case, appropriate actions should be taken.
Addressing uncertainty coming from the customer is, however, a different thing.
Nevertheless, for certain products, customer demand is sensitive to price and lead
times, so the company can quantify the losses due to this source of uncertainty and
offer discounts based on demand stability.
A final source of external uncertainty is the lack of reliability (in time,
quantity or quality) of the suppliers. Some companies have embarked on just-in-time
programmes with their suppliers to reduce this uncertainty, while others have simply
transferred to them the responsibility of managing the supplies (together with the
associated costs).
In this part of the book, we have started to address the integration of manufacturing
scheduling models, procedures and tools into an organisation in order to constitute
a scheduling system. To do so, we have focused in this chapter on understanding the
nature of human scheduling, its main activities and features. From this analysis, it
turns out that most of the tasks carried out by the human scheduler are reactive (i.e.
avoiding disturbances in schedules) rather than predictive (developing schedules).
This clearly conditions how human schedulers perform their tasks, and gives a number
of valuable clues for better integrating them into a scheduling system (Fig. 13.3).
One of the, in our opinion, best sources regarding the issues discussed in this
chapter is the book Human Performance in Planning and Scheduling edited by
MacCarthy and Wilson. Several chapters of this book have been cited throughout
the chapter, including Crawford (2001); Vernon (2001); Crawford and Wiers (2001).
This book is one of the few works devoted to this topic, which is both bad news (as an
important issue in manufacturing scheduling is underrepresented) and good news for
the scholar (as new research avenues with relevant practical implications are open).
References
Baek, D., Oh, S., and Yoon, W. (1999). A visualized human-computer interactive approach to job
shop scheduling. International Journal of Computer Integrated Manufacturing, 12(1):75–83.
Beishon, R. (1974). The Human Operator in Process Control, chapter An Analysis and Simulation
of an Operator’s Behaviour in Controlling Continuous Baking Ovens, pages 79–90. Taylor &
Francis.
Buxey, G. (1989). Production scheduling: Practice and theory. European Journal of Operational
Research, 39(1):17–31.
Crawford, S. (2001). Human Performance in Planning and Scheduling, chapter Making Sense of
Scheduling: The Realities of Scheduling Practice in an Engineering Firm, pages 83–104. Taylor
& Francis.
Crawford, S. and Wiers, V. (2001). From anecdotes to theory: reviewing the knowledge of the
human factors in planning and scheduling. In MacCarthy, B. L. and Wilson, J. R., editors, Human
performance in planning and scheduling, pages 15–44. Taylor & Francis.
Dessouky, M., Moray, N., and Kijowski, B. (1995). Taxonomy of scheduling systems as a basis for
the study of strategic behavior. Human Factors, 37(3):443–472.
Fox, M. S. and Smith, S. (1984). ISIS: A knowledge-based system for factory scheduling. Expert
Systems Journal, 1(1).
Freed, T., Doerr, K., and Chang, T. (2007). In-house development of scheduling decision sup-
port systems: Case study for scheduling semiconductor device test operations. Int J Prod Res,
45(21):5075–5093.
Goldratt, E. and Cox, J. (1992). The Goal: A Process of Ongoing Improvement. North River Press,
2nd revised edition.
Grant, T. J. (1986). Lessons for O.R. from A.I.: A scheduling case study. Journal of the Operational
Research Society, 37(1):41–57.
Haider, S. W., Moodie, C. L., and Buck, J. R. (1981). An investigation of the advantages of using
a man-computer interactive scheduling methodology for job shops. International Journal of
Production Research, 19(4):381–392.
Halsall, D. N., Muhlemann, A. P., and Price, D. H. (1994). A review of production planning
and scheduling in smaller manufacturing companies in the UK. Production Planning & Control,
5(5):485–.
Herrmann, J. (2006). Handbook of Production Scheduling, chapter Decision-Making Systems in
Production Scheduling, pages 91–108. Springer.
Higgins, P. G. (1996). Interaction in hybrid intelligent scheduling. International Journal of Human
Factors in Manufacturing, 6(3):185–203.
Hopp, W. J. and Spearman, M. L. (1996). Factory physics. Foundations of manufacturing manage-
ment. Irwin, New York, USA.
MacCarthy, B. (2006). Handbook of Production Scheduling, chapter Organizational, systems and
human issues in production planning, scheduling, and control, pages 59–90. Springer.
MacCarthy, B. L. and Wilson, J. R., editors (2001). Human performance in Planning and Scheduling.
Taylor & Francis.
McKay, K. and Buzacott, J. (2000). Application of computerized production control systems in job
shop environments. Computers in Industry, 42(2):79–97.
McKay, K., Buzacott, J., Charness, N., and Safayeni, F. (1992). Artificial Intelligence In Operational
Research, chapter The Scheduler’s Predictive Expertise: An Interdisciplinary Perspective, pages
139–150. Macmillan.
McKay, K. and Wiers, V. (2006). Handbook of Production Scheduling, chapter The Human Factor
in Planning and Scheduling, pages 23–57. Springer.
McKay, K. N., Safayeni, F. R., and Buzacott, J. A. (1995a). “Common sense” realities of planning
and scheduling in printed circuit board production. International Journal of Production Research,
33(6):1587–1603.
McKay, K. N., Safayeni, F. R., and Buzacott, J. A. (1995b). Review of hierarchical production plan-
ning and its applicability for modern manufacturing. Production Planning and Control, 6(5):384–
394.
McKay, K. N. and Wiers, V. C. S. (1999). Unifying the theory and practice of production scheduling.
Journal of Manufacturing Systems, 18(4):241–255.
Nakamura, N. and Salvendy, G. (1994). Design of Work and Development of Personnel in Advanced
Manufacturing, chapter Human Planner and Scheduler, pages 331–354. Wiley.
Norman, P. and Naveed, S. (1990). A comparison of expert system and human operator performance
for cement kiln operation. Journal of the Operational Research Society, 41(11):1007–1019.
Randel, J. and Pugh, L. (1996). Differences in expert and novice situation awareness in naturalistic
decision making. International Journal of Human-Computer Studies, 45:579–597.
Sanderson, P. and Moray, N. (1990). Ergonomics of Hybrid Automated Systems II, chapter The
Human Factors of Scheduling Behaviour, pages 399–406. Elsevier.
Shobrys, D. and White, O. (2000). Planning, scheduling and control systems: Why can they not
work together. Computers and Chemical Engineering, 24(2–7):163–173.
Stoop, P. and Wiers, V. (1996). The complexity of scheduling in practice. International Journal of
Operations and Production Management, 16(10):37–53.
Thurley, K. and Hamblin, A. (1962). The supervisor’s role in production scheduling. International
Journal of Production Research, 1:1–12.
Vera, A., Lewis, R., and Lerch, F. (1993). Situated decision-making and recognition based learning:
Applying symbolic theories to interactive tasks. In 15th Annual Conference of the Cognitive
Science Society, University of Colorado-Boulder, pages 84–95.
Vernon, C. (2001). Lingering amongst the lingerie: An observation-based study into support for
scheduling at a garment manufacturer. In MacCarthy, B. and Wilson, J., editors, Human perfor-
mance in planning and scheduling, pages 135–163. Taylor & Francis.
Webster, S. (2001). A case study of scheduling practice at a machine tool manufacturer. In Mac-
Carthy, B. and Wilson, J., editors, Human performance in planning and scheduling, pages 67–81.
Taylor & Francis.
Wiers, V. (1996). A quantitative field study of the decision behaviour of four shopfloor schedulers.
Production Planning and Control, 7(4):383–392.
Wiers, V. (2001). Design of knowledge-based scheduling system for a sheet material manufacturer.
Human Performance in Planning and Scheduling, pages 201–215.
Chapter 14
Guidelines for Developing Scheduling Systems
14.1 Introduction
Up to this chapter, we have paid attention to how to assemble the different parts of
a scheduling system in order to make it a useful support for scheduling decisions. In
other words, we have focused on what a scheduling system should contain. In this
chapter, we address the process of how to put this system to work in the company.
This may certainly be regarded as the ‘last mile’ to be covered, but it is crucial for
the success of a scheduling system, and it is likely to be the root of some development
failures. It is therefore a critical issue that, unfortunately, has seldom been addressed
in the scheduling literature.
to both types of software, and just a few of them (particularly those related to the
design of solution procedures) may, in principle, be more useful for customised tools.
Like any other business information system, scheduling systems are best implemented
in companies using an incremental approach, where progress can be rapidly visualised,
rather than a ‘big-bang’ approach, where a final solution is first fully envisioned and
only then implemented. Despite their potential shortcomings (the most notable being
a longer deployment period), the literature on the implementation of business
information systems clearly stresses the suitability of an incremental approach. This
recommendation is particularly strong when the information system contains
company-specific features, which is precisely the case for most scheduling systems.
Indeed, several cases in the literature (see e.g. Leachman et al. 1996; Wiers 2002)
report the need for long, iterative approaches for the successful implementation of
scheduling systems, as opposed to the classical waterfall approach. This rationale is
also related to the better fit of a bottom-up approach (typical of incremental
approaches) as compared to a top-down approach (typical of big-bang approaches).
Therefore, even if the final goal is to capture with maximum detail the complexity
of the physical shop floor, a simpler model may suffice to start working with the
scheduling system and to make the first results of its adoption visible, which in turn
eases the subsequent deployment steps (Missbauer et al. 2009). Note that the lack of
sophistication with which the scheduling function is carried out in many companies
has led many authors to state that a mere visual representation of the Gantt schedules
is sometimes regarded as a big advance with respect to the previous situation (see
e.g. McKay and Black 2007), and can thus be used to make the advantages of
a scheduling system visible.
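Even a plain-text rendering of a schedule can serve as such a first visible result. The following sketch (with invented jobs and processing times) computes start and finish times for a single-machine sequence and draws a rudimentary text Gantt chart:

```python
# Minimal sketch (illustrative data): render a one-machine schedule as a
# text Gantt chart -- often a cheap first visible win of a scheduling system.

def gantt_rows(jobs):
    """jobs: list of (name, processing_time). Returns (name, start, finish) rows."""
    rows, t = [], 0
    for name, p in jobs:
        rows.append((name, t, t + p))
        t += p
    return rows

def render_gantt(rows):
    lines = []
    for name, start, finish in rows:
        # One '#' per time unit, offset by the start time of the job.
        lines.append(f"{name:>4} |" + " " * start + "#" * (finish - start))
    return "\n".join(lines)

rows = gantt_rows([("J1", 3), ("J2", 2), ("J3", 4)])
print(render_gantt(rows))
```

A real tool would of course use a graphical Gantt component, but the underlying computation of start/finish times is the same.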
Scheduling is about making and monitoring plans, not about capturing and responding
to all possible exceptions and alternative scenarios, which in many real-life
implementations are virtually infinite. There is a big temptation to stretch the limits of
a tool which is potentially able to make plans in great detail. But with the ever
While the integration of scheduling with related decisions (such as, e.g. lot sizing
or material flow control) has attracted a wealth of interest in recent years, such
complex models are difficult and expensive from the implementation viewpoint (see
e.g. Higgins 1996). There are a number of reasons for this:
• Data requirements. It is obvious that an integrated system requires more
data, which is not a trivial issue given the data-intensive nature of the different
operations management functions (see e.g. Kathawala and Allen 1993). In addition,
the required quality and granularity of these data may be substantially different:
for instance, aggregated estimations of processing times may be more than
sufficient for production planning yet unacceptable for scheduling. To make
an integrated system work, the only solution is to stick to the maximum quality
(and quantity) of data required by each of the submodules composing the system,
thereby increasing not only development expenses, but also maintenance costs.
• Number of stakeholders. Since a higher number of decision-makers have to be
involved in the development process of an integrated system, agreeing on con-
straints and objectives (among other issues) becomes harder. In addition, given
the hierarchical nature of some operations management functions (i.e. production
planning and production scheduling), it is likely that higher level considerations
override the desired functionality, goals and constraints of the scheduling system,
which may result in less useful support for the scheduling function.
• Changing nature of the business. Given the dynamic nature of companies,
integrated models exhibit a shorter life cycle and may require substantial upgrading
when manufacturing conditions change. A modular design and implementation
increases the reusability of those parts of the system not affected by the
changes.
Note that the layout model represents the most persistent abstraction of the scheduling
system from the physical factory, and it is therefore likely that this layout will
remain stable for longer periods. It is also possibly one of the main issues affecting
the shop-floor validation of the results obtained by the scheduling system, as
it determines the models to be employed, the data required to feed the models,
and the solution procedures to be employed for obtaining solutions for the scheduling
process. Therefore, these decisions should be considered carefully.
One advisable hint is to keep the focus on modeling relatively simple layouts, as far
as the physical shop floor allows it. Even if the physical layout of the factory
is very complex, there are a number of reasons indicating the suitability of
focusing on simpler structures, at least in the initial stages of the life cycle of the
deployment of the scheduling system. These reasons are detailed in the following
subsections.
Whereas theoretical scheduling models assume a fixed set of resources, this is hardly
the usual situation in practice. Quite often, additional resources are moved, purchased,
or removed, provoking changes in the physical layout that detailed, low-level
models cannot easily accommodate (see e.g. Benavides and Carlos Prado 2002; Freed
et al. 2007). Besides, the current shortening of product life cycles and of their
corresponding technological processes makes such changes more likely. This means that
detailed scheduling models would have to be updated, validated, fed with data and put
into production at such a pace that their maintenance costs may simply be so high that
they never pay off.
14.5 Data
Usually, some of the required data will already be present in the ERP or business
information systems of the company. However, it is unlikely that all the data needed
will be present, so data collection becomes an important part of the deployment of
a scheduling system. In fact, the data required for a scheduling system may well be
dispersed across more than 20 different sources (in this regard, see the case
study by McKay et al. 1988).
For introducing and editing model-based data, it may be interesting to develop a
graphical interface, but this may be extremely tedious and expensive to maintain for
instance-based data. Therefore, time and resources for the development of database
interfaces should be allocated in the deployment project. Also note that the
construction of such interfaces is usually considered one of the most time-consuming
items for packaged software, and that many maintenance contracts of business
information system vendors and/or IT consultants stipulate that no third party can
access these data (even in read-only mode). In such cases, semi-automatic
import/export utilities may be the only available option apart from contracting the
vendors or consultants to develop the interface.
Another source of concern is the quality of the data employed in the scheduling tool.
There are case studies reporting that less than 50 % of the required data
were consistent (see Leachman et al. 1996). Poor data quality is frequently mentioned
(see Berglund and Karltun 2007) as the reason why specific tools and spreadsheets have
to be used to complement standard scheduling software tools. Indeed, as mentioned
earlier, one of the tasks of the human scheduler is to filter the data to be employed in
the scheduling process.
Given how hard it is to gather the data, an equally detailed system should be put in
place to update and modify existing data. Integration with SCADA systems is a
promising way of automating data maintenance. Even modest improvements in the
automation of data can greatly contribute to the accuracy of the data in the system.
The ERP of the company hardly needs to know the full sequence with all the start
and finish times of the production schedule; furthermore, providing the ERP with
all these data could prove challenging. Probably, only the estimated finishing date
for each product is needed. As a result, when connecting a scheduling system to
existing ERP software, a good idea is to interface only the most basic information
and to extend the amount of communication as needs arise.
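As an illustration of such a minimal interface (all job names, times and the single-machine setting are invented), the following sketch computes only the estimated finishing date per job, which is the payload one might pass to the ERP:

```python
# Sketch: keep the ERP interface minimal -- export only the estimated
# finishing date per job, not the full machine-level schedule.
from datetime import datetime, timedelta

def estimated_finish_dates(sequence, proc_hours, start):
    """Single-machine sketch: completion time of each job in the sequence."""
    finish, t = {}, start
    for job in sequence:
        t += timedelta(hours=proc_hours[job])
        finish[job] = t
    return finish

start = datetime(2024, 1, 8, 6, 0)          # shift start (hypothetical)
erp_payload = estimated_finish_dates(["A", "B"], {"A": 4, "B": 3}, start)
```

Richer information (machine assignments, intermediate operation times) can be added to the payload later, as the integration needs arise.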
14.5.6 Performance
Advanced models require extensive datasets that have to be retrieved from the
database as needed. Special care should be put into the performance of queries to the
database; techniques such as distributed tables and threaded queries may be necessary.
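As a small illustration of query discipline (table and column names are invented), the sketch below loads all processing times in one set-based query over an indexed in-memory SQLite table, instead of issuing one round trip per job:

```python
# Sketch: fetch the whole dataset a model needs in one query rather than
# issuing one query per job (table and column names are hypothetical).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE op_times (job TEXT, machine TEXT, minutes REAL)")
conn.executemany("INSERT INTO op_times VALUES (?, ?, ?)",
                 [("J1", "M1", 10), ("J1", "M2", 5), ("J2", "M1", 7)])
# An index helps when the table is large and filtered by job:
conn.execute("CREATE INDEX idx_job ON op_times (job)")

# One set-based query instead of N round trips:
proc = {}
for job, machine, minutes in conn.execute(
        "SELECT job, machine, minutes FROM op_times"):
    proc[(job, machine)] = minutes
conn.close()
```

The same principle (bulk retrieval into an in-memory structure used by the solver) applies regardless of the database engine.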
14.6 Objectives 361
14.6 Objectives
Some of the proposed objectives may not actually conflict with each other, but may
simply be different ways to express the goals of the decision makers. Even if some
objectives could theoretically conflict, it has to be checked whether this happens in the
industrial setting where the model is to be deployed. A typical example is the number
of late jobs versus maximum tardiness. One may theoretically reduce the number of
late jobs by systematically delaying a small set of jobs, which will in
turn generate big tardiness values. However, this alternative would not be accepted by
many companies for which customer reliability is the key competitive advantage.
If this turned out to be the only available option, the company would find a way to
renegotiate its commitments and would immediately modify its order acceptance
policy to ensure that the situation does not happen again. Therefore, one of the two
indicators may suffice as a scheduling objective. Besides the burden of evaluating a
non-relevant objective, in a multi-objective optimisation context there is the added
risk of generating biased solutions, thus not providing the decision-maker with an
appropriate set of solutions.
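The trade-off can be made concrete with a small worked example (all processing times and due dates are made up): sacrificing one job reduces the number of late jobs but inflates the maximum tardiness.

```python
# Worked sketch (invented data): two single-machine sequences showing that
# minimising the number of late jobs can inflate maximum tardiness.

def tardiness_stats(sequence, p, d):
    """Return (number of late jobs, maximum tardiness) for a sequence."""
    t, n_late, max_tard = 0, 0, 0
    for job in sequence:
        t += p[job]
        tard = max(0, t - d[job])
        n_late += tard > 0
        max_tard = max(max_tard, tard)
    return n_late, max_tard

p = {j: 2 for j in "ABCD"}                 # processing times
d = {"A": 2, "B": 3, "C": 4, "D": 5}       # due dates

edd      = tardiness_stats(["A", "B", "C", "D"], p, d)  # earliest due date first
few_late = tardiness_stats(["B", "C", "D", "A"], p, d)  # sacrifice job A
```

Here the EDD sequence yields three slightly late jobs (maximum tardiness 3), while pushing job A to the end leaves only two late jobs but doubles the maximum tardiness to 6.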
In practice, some objectives can safely be turned into constraints. In our experience,
this applies to many penalty- or cost (profit)-related objectives, for which it may
not be easy to derive an accurate penalty/cost (profit) function. A typical example
is due dates: some companies accept the violation of committed due dates, and
recognise that this should imply a sort of penalty which is roughly related to
the number of violations (service level) and to the difference between the promised
due dates and the completion times (average waiting time). However, capturing
this relation in a meaningful function that adequately weights both aspects and can be
easily calculated for all jobs and customers is not so easy. A simple alternative
is therefore to establish a maximum allowed tardiness and/or a maximum fraction of
late jobs. Since most production departments already use objectives or indicators
linked to service level and lead times, obtaining these maximum allowance levels is
straightforward in many situations.
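Treating such targets as constraints amounts to a simple feasibility check on candidate schedules; a minimal sketch (thresholds and data invented for illustration) could look as follows:

```python
# Sketch: treat service-level targets as constraints instead of a penalty
# function -- reject schedules exceeding a maximum tardiness or fraction late.

def meets_service_targets(completion, due, max_tardiness=8.0, max_frac_late=0.25):
    """completion, due: {job: time}. Returns True iff both targets are met."""
    tard = [max(0.0, completion[j] - due[j]) for j in completion]
    frac_late = sum(t > 0 for t in tard) / len(tard)
    return max(tard) <= max_tardiness and frac_late <= max_frac_late

completion = {"J1": 5, "J2": 9, "J3": 14, "J4": 20}
due        = {"J1": 6, "J2": 8, "J3": 15, "J4": 25}
ok = meets_service_targets(completion, due)
```

A solution procedure can then optimise any other objective while discarding candidates that fail this check.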
14.7 Solution Procedures
A great deal of effort in the scheduling field has been devoted to solution procedures,
at least on the theoretical side. A general problem is that most existing solution
procedures are tightly linked to specific models, generally resulting in
algorithms with low performance outside the original models for which they were
conceived, if applicable at all. Since no algorithm can outperform all others for all
scheduling models, building specific algorithms may be justified from a theoretical
viewpoint, but it should also be clear that no scheduling system can (1) store the
myriad of specific algorithms and (2) select the most suitable one among them for
any scheduling decision. In addition, we have already discussed the relatively short
life cycle of scheduling models in view of the extremely dynamic nature of
manufacturing. As a consequence, the advantages of designing specific algorithms in
terms of solution quality should be balanced against their cost of development and
deployment. In addition, we note the following issues:
The effort invested in developing the solution procedures has to be balanced against
their advantages. As mentioned before, in some cases the problem is so constrained
that there are few feasible solutions (Graves 1981; Benavides and Carlos Prado 2002;
Cowling 2003). In these cases, most of the computational effort may be wasted in
checking that no better feasible solutions are available. Additionally, in some cases
the objective function is rather flat as compared to ‘classical’ objective functions;
again, most of the computational effort is then spent on confirming optimal
values. This speaks for the need to balance the effort spent on developing the solution
procedures, particularly taking into account the aforementioned changing nature of
the objectives sought.
The algorithms employed have to make the best use of the available running (decision)
time (Cowling 2003), so there is a need for solution procedures whose solution
quality scales with the decision interval. As the decision interval is very
context-specific (it may vary not only from company to company, but also within
a single company, depending on the shift, workload, etc.), it would be extremely
interesting to build algorithms whose performance improves (roughly) linearly with
time, i.e. they neither stall after a number of iterations nor require very long decision
intervals to obtain acceptable solutions.
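Such procedures are often called ‘anytime’ algorithms: they can be interrupted whenever the decision interval expires and still return the best solution found so far. A minimal sketch (random pairwise-swap improvement on a single-machine total-flowtime problem, with invented data) follows:

```python
# Sketch of an 'anytime' improvement heuristic: it can be stopped at any
# decision interval and always returns the best sequence found so far.
import random
import time

def total_flowtime(seq, p):
    t, total = 0, 0
    for job in seq:
        t += p[job]
        total += t
    return total

def anytime_search(p, budget_seconds=0.05, seed=0):
    rng = random.Random(seed)
    best = list(p)                       # initial order: as given
    best_val = total_flowtime(best, p)
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:   # quality scales with the time allowed
        cand = best[:]
        i, j = rng.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]
        val = total_flowtime(cand, p)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val

p = {"A": 7, "B": 2, "C": 5, "D": 1}
seq, val = anytime_search(p)
```

The same loop structure works for any objective and neighbourhood; only the time budget changes with the decision context.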
Note that the focus on this type of algorithm also implies the need to re-assess
the way some computational experiments on scheduling algorithms are
carried out. Usually, a new solution procedure for a specific scheduling problem is
labeled as ‘efficient’ because it yields better solutions than existing ones
when all of them are allowed the same CPU time. However, the relative performance
of this new solution procedure for different CPU times remains unknown.
A special effort has to be made to develop a transparent system that can be understood
by its users (see Wiers 1997, 2001; Freed et al. 2007). It does not seem reasonable
to believe that decision-makers with years of practice on a specific production
shop floor would immediately accept any solution packaged into a scheduling system
(a ‘black-box’ sequence, see Cowling 2003), particularly if it does not follow
their intuition. Even if this issue could be fixed (at least in part) by better training
of the schedulers (see Chap. 13), understanding the logic behind the solution
procedures, even at a rough level, will surely increase trust in the system and will
foster its use.
As a consequence, solution procedures based on well-defined manufacturing prin-
ciples may provide a more transparent system. Some of these principles could be the
following:
• Focusing on bottleneck resources. Focusing on the bottleneck is a well-understood
principle that helps managers to overcome the complexity of the factory and
concentrate on the part of the shop floor that most impacts the performance of the
system. The preprocessor functionality described in Chap. 11 may help to identify
whether there is a distinct bottleneck in the shop.
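A preprocessor check of this kind can be very simple; the sketch below (with an arbitrary dominance threshold and invented data) flags a machine as a distinct bottleneck when its total workload clearly exceeds that of the next most loaded machine:

```python
# Sketch of a simple preprocessor check: flag a distinct bottleneck when one
# machine's total workload clearly dominates the rest (threshold is arbitrary).

def find_bottleneck(proc, ratio=1.2):
    """proc: {(job, machine): time}. Returns a machine name or None."""
    load = {}
    for (_, machine), t in proc.items():
        load[machine] = load.get(machine, 0) + t
    ranked = sorted(load, key=load.get, reverse=True)
    top, runner_up = ranked[0], ranked[1]
    return top if load[top] >= ratio * load[runner_up] else None

proc = {("J1", "M1"): 9, ("J2", "M1"): 8, ("J1", "M2"): 4, ("J2", "M2"): 5}
bottleneck = find_bottleneck(proc)
```

If a distinct bottleneck exists, a bottleneck-centred solution procedure can then be selected; otherwise a more general one is needed.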
misuse them (see previous section). Therefore, we believe that either algorithms with
many parameters should be discarded, or the details of their tuning should be made
transparent to the decision-maker. Clearly, this does not include the running time of
the algorithm, whose importance we have already discussed.
If we accept that no mathematical model can pretend to capture all possible scenarios
and exceptions, it follows that we should allow the decision-maker to manipulate
the solutions obtained by the different procedures, as there may be extraneous
constraints and/or objectives which are not considered by the system and must be
satisfied by the resulting solution (Bensana et al. 1986). Several approaches could
be adopted. As seen in Sect. 12.4.1, one option would be to allow dragging and
dropping jobs on the Gantt chart. Another, more complex, option would be to allow
partly freezing jobs and/or partial schedules. This latter option is also very suitable
for considering jobs that are already being manufactured and for reducing the
nervousness caused by changes in the schedule, and it raises the interesting problem of
the advantages of rescheduling/partly scheduling the non-frozen jobs. Unfortunately,
this problem has, to the best of our knowledge, not been addressed in the scheduling
literature, but lessons could be learnt from the application of production planning
systems such as MRP and MRP II.
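The freezing mechanism itself is straightforward to sketch: keep the frozen prefix of the schedule fixed and re-sequence only the remaining jobs. Below, the re-sequencing rule is simply shortest processing time, chosen for illustration; any solution procedure could be plugged in instead (job names and times invented):

```python
# Sketch: keep a frozen prefix (e.g. jobs already in process) fixed and
# reschedule only the remaining jobs, here by shortest processing time.

def reschedule(frozen, pending, p):
    """frozen: fixed prefix of job names; pending: jobs free to reorder."""
    return frozen + sorted(pending, key=lambda j: p[j])

p = {"J1": 6, "J2": 2, "J3": 4, "J4": 1}
schedule = reschedule(frozen=["J1"], pending=["J2", "J3", "J4"], p=p)
```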
As mentioned in Chap. 11, excessive manipulation of the solutions proposed by the
system may worsen the original objective function. As a means to control and balance
this issue, it was proposed in Chap. 11 that the tool provide reports indicating the
deviation of the modified schedules with respect to the ones suggested by the
algorithms.
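Such a deviation report can be as simple as comparing the objective value and the number of repositioned jobs; a minimal sketch (with an invented total-completion-time objective and data) follows:

```python
# Sketch: quantify how far a manually edited schedule drifts from the one
# suggested by the algorithm, in objective value and in sequence positions.

def total_completion(seq, p):
    t, total = 0, 0
    for job in seq:
        t += p[job]
        total += t
    return total

def deviation_report(suggested, modified, objective):
    obj_s, obj_m = objective(suggested), objective(modified)
    moved = sum(a != b for a, b in zip(suggested, modified))
    return {"objective_change_pct": 100.0 * (obj_m - obj_s) / obj_s,
            "jobs_moved": moved}

p = {"A": 2, "B": 5, "C": 3}
report = deviation_report(["A", "C", "B"], ["B", "A", "C"],
                          lambda s: total_completion(s, p))
```

Presenting the percentage degradation alongside the edited schedule lets the scheduler judge whether a manual change is worth its cost.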
As more and more complex scheduling settings result in slow algorithms, possible
remedies are parallelisation and cooperative computing. By using grids or, at a
much simpler level, the multiple CPU cores of modern computers, results
of very high quality can be obtained in a fraction of the CPU time; similarly, very
complex scheduling problems can be solved in short CPU times.
Research in this field is rather scarce compared to the large body of research on
‘sequential’ scheduling. Although not as elegant as much of the scheduling research,
this field can greatly help in improving the performance of existing algorithms.
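A minimal concurrency sketch (invented data; for genuinely CPU-bound objective functions in Python, a process pool rather than a thread pool would be needed to exploit several cores) evaluates all candidate sequences of a tiny instance in parallel:

```python
# Sketch: evaluate candidate sequences concurrently and keep the best one.
from concurrent.futures import ThreadPoolExecutor
from itertools import permutations

p = {"A": 3, "B": 1, "C": 2}  # processing times (hypothetical)

def total_completion(seq):
    t, total = 0, 0
    for job in seq:
        t += p[job]
        total += t
    return total

candidates = list(permutations(p))
with ThreadPoolExecutor(max_workers=4) as pool:
    values = list(pool.map(total_completion, candidates))
best = min(zip(values, candidates))  # (best value, best sequence)
```

Realistic instances would of course enumerate only a sampled or heuristically generated candidate set, distributing the evaluations across workers in the same way.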
14.8 Constraints
The determination and satisfaction of a large variety of constraints is considered to be
the crux of scheduling (Fox and Smith 1984). The number of constraints that can be
identified on the shop floor can be extremely high (see e.g. the survey of 40
schedulers by McKay et al. (1988), where more than 200 constraints are identified).
Obviously, not all of them are relevant for scheduling.
While the scheduling literature deals extensively with certain types of constraints,
there are constraints so common in real environments that it is surprising
that most existing models do not address them. Most have already been
discussed in Chap. 4, so what follows is a recap of these constraints, grouped into
different categories:
• Machine constraints. Apart from specific machine constraints, most shops are
characterised by the non-availability of machines for all purposes. On the one hand,
not all machines may be able to process all jobs (or, even if this is technologically
possible, there may be economic reasons preventing it); there is thus a problem
of machine eligibility. On the other hand, in most cases eligible machines are not
available during the whole planning period. This is caused not only by breakdowns
or planned maintenance, but also by machines being busy processing already
scheduled jobs. However, existing research on this topic is merely anecdotal.
• Staff constraints. Staff is an almost ignored resource in many scheduling models
(see in this regard the results of the survey by Laforge and Craighead 2000).
However, it is of utmost importance in many real shops: even in automatic or
semi-automatic processes, staff plays an important role in set-up times. The existing
literature on scheduling with additional resources is rather scarce; there are some
studies for parallel machines, such as the one by Chen (2006), but not for complex
scheduling problems.
• Routing constraints. Many shop floors require some type of job re-entrance and/or
precedence constraints. The reasons range from purely process-related issues to
quality issues (including scrap and the need for reworking). The complexity of this
type of constraint (usually affecting a small percentage of the jobs in the system)
should be balanced against the effort of developing a system that explicitly takes it
into account.
• Transportation constraints. Transportation times between stages and the control
of AGVs also need to be modelled in many systems.
• Capacity constraints. Storage capacity (number of boxes) between stages
is also a key concern, as in some cases it is a limited resource.
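Two of the constraints above, machine eligibility and machine availability windows, can be captured by a simple feasibility check; the sketch below uses entirely invented job, machine and window data:

```python
# Sketch: a feasibility check for two common real-world constraints --
# machine eligibility and machine availability windows (data invented).

def assignment_feasible(job, machine, start, finish, eligible, available):
    """eligible: {job: set of machines}; available: {machine: (open, close)}."""
    if machine not in eligible[job]:
        return False
    window_open, window_close = available[machine]
    return window_open <= start and finish <= window_close

eligible = {"J1": {"M1", "M2"}, "J2": {"M2"}}
available = {"M1": (0, 16), "M2": (8, 16)}

ok = assignment_feasible("J1", "M1", start=2, finish=6,
                         eligible=eligible, available=available)
bad = assignment_feasible("J2", "M1", start=2, finish=6,
                          eligible=eligible, available=available)
```

In a real system, availability would be a list of windows per machine (maintenance, already-committed jobs), but the shape of the check is the same.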
procedures into practice, they must adequately capture the details of a company’s
scheduling process, which usually entails modelling scenarios, constraints and
objectives not often treated in the literature. In addition, the resulting models and
solution procedures should be embedded into a scheduling system that feeds them with
reliable and timely data and provides an efficient way to interact with the users. This
gives rise to the following interrelated questions: what are the components of a
scheduling system, and how can a scheduling system be developed and deployed?
An answer to the first question has been given in the previous chapters; regarding
the second question, we propose a number of guidelines to help drive the design
and implementation of scheduling systems, based on the previous analysis of the
literature on case studies and on our own experience.
As mentioned at the beginning of the chapter, we do not claim that our guidelines
are the only way (or even the best way) to address the aforementioned issues, although
they have been validated both through literature analysis and through industrial
practice. By presenting them, we hope to enrich the rather scarce literature on the topic
and thus help bridge the gap between the development of scheduling models and
procedures, and their implementation in real-life industrial settings.
This chapter is heavily based on the work by Framinan and Ruiz (2012), who in
turn review the—rather scarce—literature discussing implementation cases. Among
these, a number of interesting ideas can be found in Freed et al. (2007) regarding
modeling the layout. The study by Berglund and Karltun (2007) includes valuable
insights on the quality of the data required by a manufacturing scheduling system.
Cowling (2003) discusses some issues regarding the objectives to be considered in
the system. Finally, contributions on solution procedures are present in several works
(see e.g. MacCarthy and Wilson 2001; McKay et al. 2000).
References
Benavides, J. and Carlos Prado, J. (2002). Creating an expert system for detailed scheduling. Int. J.
Oper. Prod. Manage., 22(7–8):806–819.
Bensana, E., Correge, M., Bel, G., and Dubois, D. (1986). An expert-system approach to indus-
trial job-shop scheduling. In Robotics and Automation. Proceedings. 1986 IEEE International
Conference on, volume 3, pages 1645–1650.
Berglund, M. and Karltun, J. (2007). Human, technological and organizational aspects influenc-
ing the production scheduling process. International Journal of Production Economics,
110(1–2):160–174.
Bhattacharyya, S. and Koehler, G. (1998). Learning by objectives for adaptive shop-floor scheduling.
Decis. Sci., 29(2):347–375.
Chen, J.-F. (2006). Unrelated parallel machine scheduling with secondary resource constraints.
International Journal of Advanced Manufacturing Technology, 26(3):285–292.
Cowling, P. (2003). A flexible decision support system for steel hot rolling mill scheduling.
Computers and Industrial Engineering, 45:307–321.
Fox, M. S. and Smith, S. (1984). ISIS: A knowledge-based system for factory scheduling. Expert
Systems Journal, 1(1).
Framinan, J. and Ruiz, R. (2012). Guidelines for the deployment and implementation of manufac-
turing scheduling systems. International Journal of Production Research, 50(7):1799–1812.
Freed, T., Doerr, K., and Chang, T. (2007). In-house development of scheduling decision sup-
port systems: Case study for scheduling semiconductor device test operations. Int J Prod Res,
45(21):5075–5093.
Gao, C. and Tang, L. (2008). A decision support system for color-coating line in steel industry.
In Proceedings of the IEEE International Conference on Automation and Logistics, ICAL 2008,
pages 1463–1468.
Graves, S. C. (1981). A review of production scheduling. Operations Research, 29(4):646–675.
Higgins, P. G. (1996). Interaction in hybrid intelligent scheduling. International Journal of Human
Factors in Manufacturing, 6(3):185–203.
Kathawala, Y. and Allen, W. (1993). Expert systems and job shop scheduling. International Journal
of Operations & Production Management, 13(2):23–35.
Kerr, R. M. (1992). Expert systems in production scheduling: Lessons from a failed implementation.
Journal of Systems and Software, 19(2):123–130.
Knolmayer, G., Mertens, P., and Zeier, A. (2002). Supply chain management based on SAP systems.
Springer.
Laforge, R. and Craighead, C. (2000). Computer-based scheduling in manufacturing firms: Some
indicators of successful practice. Production and Inventory Management Journal, 41(1):29–34.
Leachman, R. C., Benson, R. F., Liu, C., and Raar, D. J. (1996). IMPReSS: An automated production-
planning and delivery-quotation system at Harris Corporation, Semiconductor Sector. Interfaces,
26:6–37.
MacCarthy, B. L. and Wilson, J. R., editors (2001). Human performance in Planning and Scheduling.
Taylor & Francis.
McKay, K. N., Morton, T., Ramnath, P., and Wang, J. (2000). ‘aversion dynamics’ scheduling when
the system changes. J. Scheduling, 3(2):71–88.
McKay, K. N., Safayeni, F., and Buzacott, J. (1988). Job-shop scheduling theory: What is relevant?
Interfaces, 18(4):84–90.
McKay, K. N. and Black, G. W. (2007). The evolution of a production planning system: A 10-year
case study. Computers in Industry, 58(8–9):756–771.
Missbauer, H., Hauber, W., and Stadler, W. (2009). A scheduling system for the steelmaking-
continuous casting process: A case study from the steel-making industry. International Journal
of Production Research, 47(15):4147–4172.
Pinedo, M. L. and Yen, B. P.-C. (1997). On the design and development of object-oriented scheduling
systems. Annals of Operations Research, 70(1):359–378.
Storer, R., Wu, S., and Vaccari, R. (1992). New search spaces for sequencing problems with appli-
cation to job shop scheduling. Management Science, 38:1495–1509.
Tang, L. and Wang, G. (2008). Decision support system for the batching problems of steelmaking
and continuous-casting production. Omega, 36(6):976–991.
Wiers, V. (1997). Human-Computer Interaction in Production Scheduling: Analysis and Design of
Decision Support Systems for Production Scheduling Tasks. Technische Universiteit Eindhoven.
Wiers, V. (2001). Design of knowledge-based scheduling system for a sheet material manufacturer.
Human Performance in Planning and Scheduling, pages 201–215.
Wiers, V. (2002). A case study on the integration of aps and erp in a steel processing plant. Production
Planning and Control, 13(6):552–560.
Chapter 15
A Case Study: Ceramic Tile Production
15.1 Introduction
In this chapter, a real case study dealing with the development and implementation of
a manufacturing scheduling system is presented in detail. The goal is twofold. On one
hand, we try to illustrate the application of some of the concepts described in earlier
chapters, most notably those relating to the architecture and design of a scheduling
tool and its deployment in an organisation. By doing so, we hope to ‘close the
circle’ of the book and show how the ideas discussed in the previous chapters can be
implemented in a real-life setting.
On the other hand, it has to be mentioned that despite the enormous size of the
scheduling literature, papers dealing with real problems like the one depicted in this
case study are much scarcer. Although the so-called ‘gap’ between scheduling theory
and practice, recognised by many authors, has already been commented on in other
parts of this book, it is interesting to recall it at this point. Graves (1981) already hinted at
this book, it is interesting to recall it at this point. Graves (1981) already hinted at
some solutions to bridge this gap. The famous studies about the usage of operations
research techniques in production management by Ledbetter and Cox (1977) and
by Ford et al. (1987) showed that there was very little application of scheduling
theory in practice. Similar conclusions can be seen in McKay et al. (1988) and in
Olhager and Rapp (1995). There are even papers where this gap is directly studied like
Dudek et al. (1992), MacCarthy and Liu (1993) and McKay et al. (2002). Quantita-
tive studies about scheduling research carried out by Reisman et al. (1997), showed
that from a total of 184 reviewed papers, only 5 (less than a 3 %) dealt with realis-
tic production settings. The conclusion is that when modeling realistic production
scheduling problems, there is little hope to find the specific problem at hand being
already studied in the scientific research. More recent reviews of hybrid flows hops,
like those of Ribas et al. (2010) and Ruiz and Vázquez-Rodríguez (2010), also reach
the same conclusion: realistic settings are seldom studied. Therefore, we feel that
the book would be somehow uncomplete without adding at least one case study to
the literature.
This case study deals with a specific scheduling system development and deploy-
ment experience in the Spanish ceramic tile sector. Before entering into the details
of the scheduling system developed, a brief description of the industrial problem
and scheduling model is given. We first describe the ceramic tile production process.
A model of the production process is also given. After this, we describe in detail the
development of a scheduling system, which mimics most of the content that has been
exposed throughout this book. For example, we place special emphasis on a crucial aspect
of the development, which is the data gathering process. Finally, after presenting
the developed scheduling system, some open issues are discussed.
15.2 The Production Process of Ceramic Tiles
[Figure: flowchart with the steps — raw materials preparation; kneading; extrusion; molding; drying; 1st slow firing; 1st fast firing; glaze preparation; glazing; classification and packing]
Fig. 15.1 Detailed flowchart of the production process of ceramic tiles with all variants
The first step is the molding, where the atomized clay powder is loaded into molds
(this is where the fluidity of the atomized clay plays an important role) and is
pressed by large hydraulic presses to form what is called the bisque. This process is
374 15 A Case Study: Ceramic Tile Production
[Figure: flowchart with the steps — raw materials preparation; milling and grinding; spray drying; molding; drying; glaze preparation; glazing; single kiln firing; selection and packing]
Fig. 15.2 Wet milling, atomization and single kiln firing flowchart: the most common ceramic tile
production procedure
very important as all further operations depend on the correct formation of the tile.
The physical dimensions of the tile, often referred to as the format of the tile, depend
on the characteristics of the mold employed. Additionally, the visible side of the tile
might not be entirely flat as it might have a ‘stone-like’ pattern. This is determined
by the plate of the mold.
Right after molding, the bisques are quite fragile and should undergo further
drying. This process is usually carried out in either vertical or horizontal dryers
(small kilns). After drying, the bisque is hardened to something resembling a chocolate
bar and can therefore withstand further processing operations. The bisque or ceramic
tile is then decorated in the glazing lines. Some specific products are not glazed;
however, this is not very common. In a glazing line, tiles move over conveyor belts
and different applications, decorations and other operations are performed on them.
These range from the most basic operation, the application of the base glaze by simple
bell or waterfall machines, to advanced random rotary decorations, referred to as
rotocolors. Modern glazing equipment includes 'printers' which can basically print
ceramic tiles with a large array of colours, patterns and decorations. Glazing lines can
contain more than 50 different machines and can be up to 50 m long, sometimes even
more. The different glaze applications determine the style, colour, surface and
characteristics of the tile.
Decorated bisques are stored in big boxes or ‘roller stillages’ that contain many
roller planes where bisques are fed into, stored and extracted from at later stages.
The big boxes are transported from the glazing lines to an intermediate storage area
by automated guided vehicles (AGVs).
The central stage of the production process is the single kiln firing. Ceramic kilns
are large horizontal ovens, usually with a single plane of rollers that move in one
direction, advancing the glazed bisques through the different stages. Each type of
clay and tile follows a different temperature curve inside the kiln. This temperature
might also be affected by the physical size or format of the tile. A number of physical
and chemical reactions occur inside the kiln to form a hardened and finished tile.
Most notably, the glaze fuses and hardens on top of the tile, conferring most of the
known and desired characteristics of ceramic tiles.
After firing, AGVs again transport the fired tiles to an intermediate storage area.
The final stage is the classification and packing of the fired tiles. Several classification
and packing lines are usually available. Visual or automated quality control for defects
15.2 The Production Process of Ceramic Tiles 375
is carried out. Depending on the number and importance of defects, tiles are usually
classified into first or second quality and discards. Due to small differences in the raw
materials and the complex production process, not all tiles result in the same tonality
and physical dimensions (calibers) so the tiles are actually assigned to different
batches, each one containing the same quality, caliber and tonality. As a final step,
tiles are boxed and palletised for distribution.
Nowadays, there are many other optional production steps for ceramic tiles; for
example, tiles might undergo one or more polishing steps to confer a stone
or mirror finish. Other processes include the rectification of the edges of the tile, or
cutting processes to produce special pieces like edges or reliefs. Additional detailed
decorations and further firing are also possible. As a result, the common production
process depicted in Fig. 15.2 can only be regarded as a simplification. In any case,
the bulk of the ceramic tile production undergoes this main production process.
In modern production plants, from start to finish, the product is rarely touched
and very little human intervention is needed. This results in the mass production of
ceramic tiles at affordable prices.
Given such high automation in the sector, companies have heavily invested
in machinery to achieve maximum efficiency in this mass production context.
The most expensive machinery in ceramic tile production is the kiln. Small
tile producers typically have 2–3 kilns, whereas large producers might have several
production plants with several kilns each. Therefore, in order to minimise the unit
costs due to machine depreciation, machinery usage should be maximised. It
is typical for tile producers to operate the kilns on a 24/7 schedule, stopping them
only once per year for maintenance. This naturally leads to the minimisation of the
makespan, which is the main objective chosen for the models.
15.3 Modeling the Ceramic Tile Production Process
The previous section detailing the ceramic tile production process, together with more
detailed studies about ceramic tile production, such as the one in
Vallada et al. (2005), allows us to provide a precise modeling of the scheduling prob-
lem arising in ceramic tile manufacturing. The first observable feature of the ceramic
tile production problem (CTPP from now on) is that all products follow a natural
flow through the plant, which suggests a flow shop structure. Production plants are
divided into sections: usually molding, glazing, kiln and classification. Furthermore,
even Small and Medium Enterprises (SMEs) have several glazing lines and, typically,
two or more kilns. As a result, we have a hybrid flow shop (HFS). An HFS, as already
commented in Chap. 3, is a combination of two well-known scheduling problems,
namely the flow shop and the parallel machine problem. The HFS problem has been
an active topic of research and many literature reviews have been published for it,
like the ones of Linn and Zhang (1999) and Vignier et al. (1999). More recent related
reviews include those of Wang (2005), Ruiz and Maroto (2006), Quadt and Kuhn
(2007), Ribas et al. (2010) and Ruiz and Vázquez-Rodríguez (2010).
Let us recall that in a flow shop, there is a set N of n jobs that have to be processed
on a set M of m machines. Each job visits each machine in the same order, which can
be, without loss of generality, machines 1, . . . , m. As a result, each job is composed
of m tasks. Common constraints apply, i.e. no machine might process more than one
job at the same time and the tasks of the same job cannot overlap. The only input
data needed to solve the problem is n, m and the processing times of each job on each
machine, commonly referred to as pij, i ∈ M, j ∈ N (more details can be consulted
in Chap. 3, Sect. 3.2.3.3).
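The makespan of a given permutation in such a flow shop can be computed with the classical forward recursion over completion times. The sketch below is a generic illustration of that recursion (function and variable names are ours, not from the book):

```python
def flowshop_makespan(p, sequence):
    """Makespan of a permutation flow shop.

    p[i][j]: processing time of job j on machine i (machines 0 .. m-1);
    sequence: the common order in which all jobs visit the machines.
    """
    m = len(p)
    C = [0] * m   # completion time of the last scheduled job on each machine
    for j in sequence:
        C[0] += p[0][j]
        for i in range(1, m):
            # a task starts once both the machine and the job's
            # previous task are finished
            C[i] = max(C[i], C[i - 1]) + p[i][j]
    return C[-1]
```

For instance, with two machines and processing times p = [[3, 2], [1, 4]], the order [1, 0] yields a makespan of 7 while [0, 1] yields 9, which already shows that the sequence matters.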
Conversely, in a parallel machine problem, the set of m machines are not arranged
in series, but in parallel. Each job has to be processed on exactly one out of the
m machines. The most general case is when the m parallel machines are unrelated,
meaning that the processing time for a job depends on the machine it has been
assigned to.
Given the two previous definitions, in a HFS we also have a set N of jobs, N =
{1, . . . , n}. However, each job has to visit not a set of machines but rather a set M
of stages, M = {1, . . . , m}. At every stage i, i ∈ M, there is a set Mi = {1, . . . , mi}
of parallel machines that can process the jobs, where there is at least one stage with
more than one machine, i.e. ∃ i ∈ M such that |Mi| > 1. Every job has to pass through all
stages and must be processed by exactly one machine at every stage. In the following,
pilj indicates the processing time of job j, j ∈ N, at machine l, l ∈ Mi, inside stage i.
Again, for more details on the HFS, the reader is referred to Chap. 3, Sect. 3.2.3.6.
With the above definitions, we can already model the different molding lines,
glazing lines, kilns and classification lines in the CTPP. However, such modeling is
extremely simplistic due to one important missing feature. In the CTPP, there are
significant changeover times at all stages. These times, frequently referred to in the
scheduling literature as setup times, are, for example, very important in the molds.
When the production of a given ceramic tile with a format of 33 × 33 cm. is
finished, a very large setup time is needed if the next batch of tiles to produce in that
same mold has a different format, like for example 60 × 60 cm. Large portions of the
molding machine have to be disassembled, several adjustments must be done and all
this usually takes up to a full working shift. Depending on the batch size of tiles to
produce, the setup time might be even larger than the processing time itself. Glazing
lines are another clear example: even if two batches of tiles with the same format are
being produced one after another, a slight difference in the decorations or colours for
each batch might result in a full working shift for cleaning and adjusting the glazing
line. As a matter of fact, almost all stages in the CTPP have strong setup times. What
is more, these setup times depend on the production sequence and on each machine
since some machines (even within the same stage) are more easily reconfigurable
than others. These types of setups are referred to as sequence dependent setup times
as it was already indicated in Chap. 4. The scheduling literature considering setups
is rich, albeit once again it lacks applications to real production problems. Reviews
of scheduling research with setups are available from Allahverdi et al. (1999) and
15.3 Modeling the Ceramic Tile Production Process 377
Allahverdi et al. (2008). In the context of the CTPP, we have a machine-based
sequence dependent setup time on every machine l at stage i when processing job
k, k ∈ N, after having processed job j, which is noted as Siljk. Setup times were
explained in Chap. 4, Sect. 4.2.2.
Another common situation found among ceramic tile producers is that not every
possible job—batch of a given quantity (in m2 ) of a certain type of ceramic tile from
the company's catalog—can be processed by every available parallel machine at a
given stage. Once again, a clear example comes from the molding presses. Large tile
formats like 60 × 120 cm. often require special molding presses with pressing forces
in excess of 45,000 kN. As a result, not every possible job might be processed on
every possible machine. This is known in the HFS problem as machine eligibility,
also commented in Chap. 4. For every job j and stage i, we have a set Eij of eligible
machines that can process job j. Obviously, at least one machine must be eligible at
each stage for all jobs, i.e. 1 ≤ |Eij| ≤ mi, ∀ i ∈ M, j ∈ N. Note also that pilj can be
considered 0 if l ∉ Eij; basically, the actual value of the processing time for a
non-eligible machine is irrelevant. As with setups, machine eligibility was introduced in
Chap. 4, Sect. 4.2.3.
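A minimal sketch of how such an instance, including the eligibility sets Eij and the convention pilj = 0 for non-eligible machines, could be represented and validated (the container and its field names are our own illustration, not part of the model in the text; all jobs are assumed to visit all stages):

```python
# Hypothetical container for an HFS instance with machine eligibility;
# the class and field names are our own, mirroring the notation in the text.
from dataclasses import dataclass, field

@dataclass
class HFSInstance:
    n: int                        # jobs are numbered 0 .. n-1
    machines_per_stage: list      # m_i for every stage i
    p: dict = field(default_factory=dict)  # p[(i, l, j)] = processing time
    E: dict = field(default_factory=dict)  # E[(i, j)] = set of eligible machines

    def validate_eligibility(self):
        """Check 1 <= |E_ij| <= m_i and p_ilj = 0 off eligible machines."""
        for i, m_i in enumerate(self.machines_per_stage):
            for j in range(self.n):
                elig = self.E.get((i, j), set())
                if not 1 <= len(elig) <= m_i:
                    return False
                for l in range(m_i):
                    # the processing time on a non-eligible machine is
                    # irrelevant and stored as 0 by convention
                    if l not in elig and self.p.get((i, l, j), 0) != 0:
                        return False
        return True
```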
The previous model, referred to as the HFS with sequence dependent setup times
and machine eligibility, is obviously a very hard combinatorial problem. Employing
the usual three-field notation α|β|γ for scheduling problems commented in Chap. 3
and the extension for HFSs proposed by Vignier et al. (1999) (also commented on
in the same chapter, see Sect. 3.3.2), this initial formulation of the CTPP is noted
as $FHm,\,((RM^{(i)})_{i=1}^{m})\,|\,S_{sd}, M_j\,|\,C_{max}$ if the optimisation objective is to minimise the
maximum completion time of the jobs or makespan (Cmax ). This problem was first
studied by Ruiz and Maroto (2002) and later by Ruiz and Maroto (2006). We will
refer to this first model as CTPP_Model_1. The details regarding the development
and deployment of the scheduling system suitable for this first model will be given
in the following sections.
Apart from the processing times, there is more information associated with a job,
like the release date of job j, rj . The first operation at the first visited stage of job j
cannot start before rj. We also refer to dj as the due date of job j. The last operation
of job j must preferably be completed before dj. A given job might also have an
importance or weight, referred to as wj.
Production floors are seldom found empty. When doing real scheduling, machines
are at a given moment processing materials. This means that new schedules must start
after existing jobs are finished. Therefore, rmil represents the release date of machine
l inside stage i. This date indicates when the machine is released from previous work
and can start processing new jobs. No operation of any job of the new schedule might
start on machine l before rmil .
In the previous initial modeling attempt, we already defined sequence dependent
setup times. For this enhanced model, we also consider the possibility of defining
each setup as anticipatory or non-anticipatory. As the name implies, an anticipatory
setup time, for example a cleaning, can be done in the corresponding machine as
soon as the machine is free without actually waiting for the next job in the sequence
to arrive. However, some setups on machines might require the jobs to be already
present on the machine (i.e. fixing the product to be processed on the machine) and
are therefore non-anticipatory. As a result, if Ailjk = 0 then the corresponding setup
Siljk is non-anticipatory and job k should already be on machine l for setting up. It
should be noted that Siljk = 0 if j = k, and that Siljk = 0, ∀ j, k ∈ N, l ∈ Mi, if one
or more of the following conditions are satisfied: i ∉ Fj, i ∉ Fk, l ∉ Eij, l ∉ Eik or
j ∈ Pk. The value of Ailjk in these cases is irrelevant and will be assumed 0.
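The distinction can be captured by a small rule for the earliest start of a setup; the following sketch (names are ours) assumes the machine finishes its previous job at machine_free and the incoming job arrives at the stage at job_ready:

```python
def setup_start(machine_free, job_ready, anticipatory):
    """Earliest time a setup may start on a machine.

    machine_free: when the machine finishes its previous job;
    job_ready:    when the incoming job arrives at the stage;
    anticipatory: True if the setup may run before the job arrives.
    """
    if anticipatory:
        # the machine can be prepared as soon as it is free
        return machine_free
    # a non-anticipatory setup must also wait for the job itself
    return max(machine_free, job_ready)
```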
Another common assumption in scheduling theory is to assume that the n jobs
are independent. While this might be acceptable in some situations, it is not for the
CTPP. Many times some batches of ceramic tiles have to be finished prior to other
batches since they are used as raw materials. Mosaics, stair steps, tile edges and
many other special tiles are clear examples. In short, the CTPP contains precedence
constraints. As such, Pj contains the set of predecessors of job j. Jobs may have zero
or more predecessors indicating that the first operation of job j should not start until
all the last operations of jobs in Pj have finished. All sets Pj define a precedence
graph among jobs which is directed and acyclic.
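Since the sets Pj must define a directed acyclic graph, the input data can be screened with a standard topological-sort check (Kahn's algorithm); a sketch under an assumed dictionary layout:

```python
from collections import deque

def precedence_is_acyclic(P, n):
    """True iff the precedence sets define a DAG (Kahn's algorithm).

    P[j] = set of predecessors of job j; jobs are numbered 0 .. n-1.
    """
    indeg = [len(P.get(j, ())) for j in range(n)]
    succ = {j: [] for j in range(n)}
    for j in range(n):
        for q in P.get(j, ()):
            succ[q].append(j)
    ready = deque(j for j in range(n) if indeg[j] == 0)
    ordered = 0
    while ready:
        v = ready.popleft()
        ordered += 1
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    return ordered == n   # every job can be topologically ordered => no cycle
```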
A job, defined before as a batch of tiles, does not need to be fully glazed before
entering the kiln. A large production batch of tiles might need several days of glazing
and several days of kiln firing. Batches are made out of thousands of tiles. This means
that when a sufficient quantity of tiles have been glazed, these can already enter the
kiln while the remaining lot is being glazed. This effectively represents an overlap
between operations. In some other situations, we have a waiting time in between
stages for a given batch. Some special glazes require an ‘air drying’ time before
entering the kiln. As a result, we might have overlaps and/or waiting times. In order
to model this situation, we denote by lagilj the time lag between the end of the
processing of job j at machine l inside stage i and the beginning in the next stage. An
overlap is represented by a negative lag (lagilj < 0) whereas a waiting time is given
by a positive lag (lagilj > 0). Note that for any given job j, lagilj = 0, ∀ l ∈ Mi, if
i ∉ Fj or if, for a given l ∈ Mi, l ∉ Eij.
The joint consideration of lags with the previous formulation makes the problem
especially tricky. For example, the input data has to satisfy some obvious relations:
if i is the last stage for job j, then lagilj = 0, ∀ l ∈ Mi. Overlaps can also
be problematic, since a given overlap has to be less than or equal to the processing time,
i.e. |lagilj| ≤ pilj, otherwise we could have successive operations starting before the
starting time of preceding operations. Similarly, |lagilj| ≤ pi+1,l′,j, ∀ l′ ∈ Ei+1,j, i.e.
the overlap should be smaller than or equal to the processing time on any machine
at the next stage, otherwise successive operations might finish before preceding
operations. While these relations are straightforward, in huge input datasets, unless
these relations are checked, strange race conditions might arise in solution methods.
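These consistency checks can be automated before any solution method is run; the sketch below uses an assumed dictionary-based data layout of ours, and only bounds overlaps (negative lags), since waiting times may legitimately exceed the processing times:

```python
def validate_lags(lag, p, E, next_stage):
    """Screen the lag data for the relations discussed above.

    lag[(i, l, j)] / p[(i, l, j)]: lags and processing times;
    E[(i, j)]: eligible machines at stage i for job j;
    next_stage[(i, j)]: stage visited by job j after stage i,
    or None if i is its last stage (assumed layout).
    """
    for (i, l, j), lg in lag.items():
        nxt = next_stage.get((i, j))
        if nxt is None:              # last visited stage: lag must be 0
            if lg != 0:
                return False
            continue
        if lg < 0:                   # an overlap: bound it by both tasks
            if -lg > p[(i, l, j)]:
                return False
            for l2 in E[(nxt, j)]:   # ... on every eligible next-stage machine
                if -lg > p[(nxt, l2, j)]:
                    return False
    return True
```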
This last model is much more comprehensive and captures many of the existing
constraints and situations arising in the CTPP. The full problem can be denoted
as $HFFLm,\,((RM^{(i)})_{i=1}^{m})\,/\,rm, lag, S_{ijk}, M_j, prec\,/\,C_{max}$ and is obviously a very hard
scheduling problem to solve. This problem was first studied from a mathematical
programming and heuristic perspective by Ruiz et al. (2008). In this work it was
shown that modern solvers are able to solve only small instances of up to a dozen
jobs at most. Real problems have at least ten times as many jobs.
The following example taken from Ruiz et al. (2008) will help in illustrating this
complex HFFL problem. We have an instance with five jobs and three stages with
two machines in the first two stages and only one machine in the third. Therefore
n = 5, m = 3, m1 = m2 = 2 and m3 = 1. Jobs 1 and 5 visit all three stages whereas
job 2 skips stage 3, job 3 skips stage 1 and job 4 skips stage 2. So we have that
F1 = F5 = {1, 2, 3}, F2 = {1, 2}, F3 = {2, 3} and F4 = {1, 3}. Furthermore, jobs
4 and 5 are preceded by jobs 2 and 3, i.e. P4 = P5 = {2, 3}. Table 15.1 gives the
eligibility for the jobs on the machines. Table 15.2 shows the processing times of
jobs and the release dates for machines.
Tables 15.3 and 15.4 show the lag values and the sequence dependent setup times,
respectively. In Table 15.4, the setup time and, between parentheses, the anticipatory
value for every machine at each stage are given. Note that '−' means that both
the setup time and the anticipatory value are 0.
Now, the example consists of a permutation of jobs π = {2, 1, 3, 4, 5} for which
we want to draw the corresponding schedule. The assignment of jobs to machines at
each stage is as follows:
Table 15.2 Processing times (pilj ) and release dates for machines (rmil ) for the example

            Stage i:      1          2         3
            Machine l:   1    2     1    2     1
  rmil                   4    3     8   16    23
  pilj   j = 1          10   15     0    8     6
         j = 2           6    9    11    4     0
         j = 3           0    0     9    0     8
         j = 4           0   10     0    0     6
         j = 5          11   14     6   12     3
$$\begin{pmatrix} 2 & 1 & 0 & 2 & 1\\ 1 & 2 & 1 & 0 & 2\\ 0 & 1 & 1 & 1 & 1 \end{pmatrix}$$
Each column in the previous matrix represents the machines assigned at each stage
to the job at the corresponding position in the permutation (a 0 entry means that the
job skips that stage). It should be clear that π
is feasible from the perspective of the precedence constraints among jobs and that
the assignment of jobs to machines at each stage is also feasible with respect to the
eligibility constraints. The resulting Gantt chart is shown in Fig. 15.3.
It can be observed that jobs 4 and 5 must wait for their predecessors before starting.
Also, sometimes the lag times allow for an overlap of the operations whereas in other
situations these lags force a waiting period between successive operations of the same
job. A closer look at job 4 reveals that, even with a missing stage 2, the lag still holds
for the third stage. Some setups are anticipatory (see e.g. the setup between jobs
1 and 5 on machine 1 in the first stage), whereas other setups are non-anticipatory
(see e.g. the setup between jobs 3 and 4 on machine 1 in the last stage). This means
that the setup between jobs 1 and 5 in the first stage, for example, could be started
anytime between 14 and 44, whereas the setup between jobs 3 and 4 in the last stage
can start only after job 4 arrives at that stage (and, in this case, after lag124 is also
considered).
Table 15.4 Sequence dependent setup times and anticipatory flags (Siljk and Ailjk ) for the example
(row j = 1; where a stage has two machines, the two values correspond to machines l = 1 and l = 2)

  j = 1, i = 1:  k = 1: −;  k = 2: 3(1), 6(1);  k = 3: −;  k = 4: −, 8(1);  k = 5: 4(1), 2(1)
  j = 1, i = 2:  k = 1: −;  k = 2: −, 6(1);  k = 3: −;  k = 4: −;  k = 5: −, 6(1)
  j = 1, i = 3:  k = 1: −;  k = 2: −;  k = 3: 6(1);  k = 4: 3(1);  k = 5: 9(1)
[Fig. 15.3: Gantt chart of the schedule for the example, showing machine release dates (rmil ), setups (Siljk ) and lags (lagilj ) on the five machines over a time axis from 0 to 75]
Note that calculating the Cmax value for this problem is rather costly. Let us define
the expressions that are needed for such calculation. First, some additional notation
is needed:
• Gi is the set of jobs that visit stage i (Gi ⊆ N and Gi = {j | i ∈ Fj}),
• Gil ⊆ Gi is the set of jobs that can be processed on machine l inside stage i, i.e.
Gil = {j | i ∈ Fj ∧ l ∈ Eij},
• Sk gives the complete and unchained set of successors of job k, i.e. Sk = {j | k ∈ Pj},
• FSk (LSk ) is the first (last) stage that job k visits.
We also have that the machine assigned to job j at stage i is denoted by Tij, or
by l in brief. The previous job that was processed by machine l is denoted by k(l).
Let stage i − 1 be the previous stage visited by job j before stage i (note that this is the
previously visited stage, which does not have to be the previous stage in the shop,
as that stage might not be visited by job j). Similarly, stage i + 1 is the next stage to be
visited.
With all previous notations, we can calculate the completion times for job j at all
visited stages with the following expressions:
$$C_{FS_j,j} = \max\Big\{\, rm_{FS_j,l}\,;\; \max_{p \in P_j} C_{LS_p,p}\,;\; C_{FS_j,k(l)} + A_{FS_j,l,k(l),j}\cdot S_{FS_j,l,k(l),j} \,\Big\} + (1 - A_{FS_j,l,k(l),j})\cdot S_{FS_j,l,k(l),j} + p_{FS_j,l,j}, \qquad j \in N \qquad (15.1)$$

Equation (15.1) considers the availability of the machine, the completion times of the
predecessors, the setup and the job's own processing time. For the other stages, Eq. (15.2) is
applied, which considers the availability of the machine, the availability of the job (including
the lag), the setup and the processing time.
If job j is assigned to machine l inside stage i, the time at which machine l
completes job j is denoted as Lilj . Following our notation, Lilj = Cij given Tij = l.
Furthermore, we refer to the job visiting stage i after job j as job q, and to an eligible
machine at the next stage for job j as l′ ∈ Ei+1,j.
Suppose now that we are scheduling job j in stage i, i ∈ Fj. We have to consider
all machines l ∈ Eij for assignment.
Obviously, the makespan is then easily calculated as the largest completion time over
the last visited stages:

$$C_{max} = \max_{j \in N} C_{LS_j,j}$$

Note that, as stated, for complex problems the calculation of the objective is no longer
straightforward.
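To illustrate the kind of bookkeeping these expressions require, the sketch below performs a forward pass for a fixed job order and machine assignment, combining machine release dates, predecessor completions, anticipatory/non-anticipatory setups and lags in the spirit of Eqs. (15.1) and (15.2). It is our own simplified decode (each job is pushed through all its stages before the next job is considered, and predecessors must appear earlier in the order), not the book's implementation:

```python
def schedule(jobs, stages_of, assign, p, rm, S, A, lag, P):
    """Simplified forward pass: completion times for a fixed job order.

    jobs: processing order (predecessors before successors);
    stages_of[j]: stages visited by job j, in order;
    assign[(i, j)]: machine chosen for job j at stage i;
    p[(i, l, j)]: processing times; rm[(i, l)]: machine release dates;
    S[(i, l, j, k)] / A[(i, l, j, k)]: setup before k after j on machine l
    and its anticipatory flag (1 = anticipatory);
    lag[(i, l, j)]: time lag after stage i; P[j]: predecessors of job j.
    """
    free = dict(rm)   # when each machine is next available
    last = {}         # last job processed on each machine
    C = {}            # completion time of each (stage, job) pair
    done = {}         # final completion time of each job
    for k in jobs:
        prev_c = None
        for i in stages_of[k]:
            l = assign[(i, k)]
            j = last.get((i, l))            # previous job on machine l
            s = S.get((i, l, j, k), 0) if j is not None else 0
            a = A.get((i, l, j, k), 1) if j is not None else 1
            if prev_c is None:              # first stage: wait for predecessors
                ready = max((done[q] for q in P.get(k, ())), default=0)
            else:                           # later stages: previous stage + lag
                ready = prev_c + lag.get((i_prev, l_prev, k), 0)
            # anticipatory setups run as soon as the machine is free;
            # non-anticipatory ones must also wait for the job
            start = max(free[(i, l)] + (s if a else 0), ready) + (0 if a else s)
            C[(i, k)] = start + p[(i, l, k)]
            free[(i, l)], last[(i, l)] = C[(i, k)], k
            prev_c, i_prev, l_prev = C[(i, k)], i, l
        done[k] = prev_c
    return C, max(done.values())
```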
For the sake of completeness, the mixed integer linear programming (MILP)
model is described below (taken also from Ruiz et al. 2008). We first define the
decision variables:

$$X_{iljk} = \begin{cases} 1, & \text{if job } j \text{ precedes job } k \text{ on machine } l \text{ at stage } i\\ 0, & \text{otherwise} \end{cases}$$

Cij = completion time of job j at stage i
Cmax = maximum completion time or makespan
$$\sum_{\substack{j \in G_i,\; j \neq k\\ j \notin P_k}}\;\sum_{l \in E_{ij} \cap E_{ik}} X_{ilkj} \le 1, \qquad k \in N,\; i \in F_k \qquad (15.6)$$

$$\sum_{\substack{h \in \{G_{il},0\},\; h \neq k\\ h \neq j,\; h \notin S_j}} X_{ilhj} \ge X_{iljk}, \qquad j, k \in N,\; j \neq k,\; j \notin S_k,\; i \in F_j \cap F_k,\; l \in E_{ij} \cap E_{ik} \qquad (15.7)$$

$$\sum_{l \in E_{ij} \cap E_{ik}} (X_{iljk} + X_{ilkj}) \le 1, \qquad j \in N,\; k = j+1, \dots, n,\; j \notin P_k,\; k \notin P_j,\; i \in F_j \cap F_k \qquad (15.8)$$

$$\sum_{k \in G_{il}} X_{il0k} \le 1, \qquad i \in M,\; l \in M_i \qquad (15.9)$$

$$C_{ik} + V(1 - X_{iljk}) \ge \max\Big\{ C_{i-1,k} + \sum_{\substack{h \in \{G_{i-1},0\}\\ h \neq k,\; h \notin S_k}}\;\sum_{l' \in E_{i-1,h} \cap E_{i-1,k}} lag_{i-1,l',k} \cdot X_{i-1,l',h,k}\,;\; rm_{il}\,;\; C_{ij} + A_{iljk} \cdot S_{iljk} \Big\} + (1 - A_{iljk}) \cdot S_{iljk} + p_{ilk},$$
$$k \in N,\; i \in \{F_k \setminus FS_k\},\; l \in E_{ik},\; j \in \{G_{il}, 0\},\; j \neq k,\; j \notin S_k \qquad (15.12)$$

$$C_{ij} \ge 0, \qquad j \in N,\; i \in F_j \qquad (15.15)$$
The first set of constraints (15.5) ensures that every job has exactly one predecessor
on only one machine at each stage. Note that only the possible variables are considered,
as in this model the binary variables that are zero are not defined (for example, if one
job is a successor of another, it might never precede it and, therefore, the corresponding
variable is not even defined). Note that for every stage and machine we introduce
a dummy job 0, which precedes the first job at each machine. This also allows for
the consideration of initial setup times, should they be needed. Constraint
set (15.6) is similar, ensuring that every job has at most one successor.
Constraint set (15.7) forces that if a job is processed on a given machine at a stage,
then it should have a predecessor on the same machine. This is a way of forcing
the assignments to be consistent on the machines. Constraint set (15.8) avoids the
occurrence of cross-precedences. Note again that only the possible alternatives are
considered.
With constraint set (15.9) we enforce that dummy job 0 can only be predecessor of
at most one job on each machine at each stage. Constraint set (15.10) simply ensures
that dummy job 0 is completed at time 0 in all stages. Constraint set (15.11) controls
the completion time of jobs at the first stage they start processing by considering all
eligible machines. The value V represents a big number, so as to make the constraint
redundant if the assignment variable is zero. Notice that precedence relationships
are considered by accounting for the completion of all the predecessors of a given
job. Note also that both types of sequence dependent setup times (anticipatory and
non-anticipatory) are also taken into account. Constraint set (15.12) gives the com-
pletion time on subsequent stages. Here the completion time of the same job in the
previous stage along with the lag time is considered. Constraint set (15.13) defines
the maximum completion time. Finally, (15.14) and (15.15) define just the decision
variables.
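Once such a model is solved, the machine sequences can be recovered from the X variables by following the successor chain that starts at dummy job 0 on each machine; a sketch under an assumed dictionary layout for the solution values (our own illustration):

```python
def decode_sequences(X, stages, machines_per_stage):
    """Recover per-machine job sequences from solved binary variables.

    X[(i, l, j, k)] = 1 means job j immediately precedes job k on machine l
    at stage i; job 0 is the dummy job heading every machine. Assumes X
    encodes a feasible (cycle-free) solution.
    """
    sequences = {}
    for i in stages:
        for l in range(machines_per_stage[i]):
            order, j = [], 0          # start the chain at the dummy job
            while True:
                successors = [k for (i2, l2, j2, k), v in X.items()
                              if (i2, l2, j2) == (i, l, j) and v == 1]
                if not successors:
                    break
                j = successors[0]
                order.append(j)
            sequences[(i, l)] = order
    return sequences
```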
The above model was shown by Ruiz et al. (2008) to be mostly unsolvable for
instances of 15 jobs. More precisely, among 1,536 instances with n = 15, only 5.21 %
(80) could be solved to optimality in 15 min of CPU time with CPLEX version 9.1
on a Pentium IV running at 3.2 GHz. While this solver version and hardware are
now dated, such percentages increase only slightly with modern solver versions and
current hardware (including threaded solvers).
Due to this poor performance of exact methods, Ruiz et al. (2008) also proposed
some effective heuristics. Later, Urlings and Ruiz (2007) presented some local search
methods. Metaheuristic algorithms, from simple genetic algorithms to methods with
complex solution representations were studied later by Urlings et al. (2010a) and by
Urlings et al. (2010b). We refer to this extended formulation as CTPP_Model_2 in
the following sections.
The set of straightforward initial data is the configuration of the shop, i.e. the number
of stages m and the number and main characteristics of each machine per stage mi .
This has to be carried out on a per-company basis. SMEs in the ceramic tile sector
typically have two or more kilns. The molds and glazing lines typically range from
3 to 6, and a somewhat smaller number of classification lines is also common (Vallada
et al. 2005). The division into the different stages is not entirely clear and must also be
studied case by case. For example, in most companies the first three stages, namely
molding, drying and glazing, are connected by conveyor belts. This means that one
mold is directly connected to a dryer, which in turn is connected to the glazing line.
In practice, if a ceramic tile is molded in that specific mold, it directly goes to that
dryer and glazing line. Obviously, there is no sense in dividing these three machines
into three stages. What we have is a large virtual stage (as far as CTPP_Model_1 goes)
that includes molding, drying and glazing.
A second stage is the kiln firing. All interviewed companies from Vallada et al.
(2005) had the bisques stored in big boxes in between glazing and firing. The third
stage is the selection, classification and packing. However, for small SMEs, this is
not always the case. For some companies, there is no storage area between the
kilns and the classification lines, and the kiln is directly connected to the classification
lines by conveyor belts. In this scenario, the initial CTPP problem consists of either
two (molding + drying + glazing; firing + classification) or three stages (molding +
drying + glazing; firing; classification). Once the stage divisions of the production
floor are clear, one has to count and characterise the existing machines.
The next step is to gather all the possible products that the company produces,
which later turn into 'jobs' for CTPP_Model_1. As simple as this might seem, obtaining
a list of products for production purposes only is often challenging. Depending
on the brand and instantiation of the Enterprise Resource Planning (ERP) system that
each company uses, the problems arising might range from simple to solve to almost
impossible to overcome. Inside a company, a product usually reduces to a code.
Companies try to codify the most important characteristics into the product code, like type
of product, family, colour, quality and so on. In the CTPP, stored products
in the inventory include information like the quality, tone and caliber, information
that does not exist when the product starts production, as these characteristics result
from the production process itself. Other problems arise with the packaging. Companies
usually produce products for bigger companies and therefore the packaging of
the product changes from client to client, and sometimes even between different export
targets. As a result of all this, the same original product, i.e. a type of ceramic tile,
might have dozens, sometimes even more than a hundred, different codings in the
databases.
The experienced reader might have realised that this situation poses no problem,
since the explosion of codes for the same product originates from the same base
code, and all that one needs to do is to differentiate the production products from the
base codes. However, in practice this is often challenging, as the data needed
for characterising each product is scattered among many different references. To
solve all these problems, careful work has to be performed in each case, in
intimate collaboration with the different companies.
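As an illustration of this differentiation step, the following sketch groups production references under base product codes. The coding scheme (the first eight characters as base code, with suffixes for quality, tone, caliber and packaging) is purely hypothetical; in practice, each company's codification has to be analysed individually.

```python
# Hypothetical illustration: collapsing many production references into base
# product codes. We assume (purely for illustration) that the first 8
# characters of a reference encode the base product, while trailing
# suffixes encode quality, tone, caliber and packaging variants.
from collections import defaultdict

def group_by_base(references, base_len=8):
    """Map each base code to the set of references derived from it."""
    groups = defaultdict(set)
    for ref in references:
        groups[ref[:base_len]].add(ref)
    return groups

refs = [
    "TILE0001-Q1-T2-C3-PKA",  # same base tile, different quality/tone/packaging
    "TILE0001-Q2-T1-C3-PKB",
    "TILE0002-Q1-T1-C1-PKA",
]
groups = group_by_base(refs)
# Two distinct base products remain out of three references.
```

The real difficulty, as noted above, is not the grouping itself but recovering the characterisation data scattered among the references.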
At this stage, all information relating to the production floor, machine characteris-
tics and products to produce is already available. The next step is to obtain the stages
15.4 Development and Deployment of the Scheduling System 387
that each product follows in production. As with previous steps, this might range from
very easy to somewhat challenging. In any case, CTPP_Model_1 is rather limited in
this regard. We will discuss the stage visitation information for each product in more
detail later, when approaching CTPP_Model_2.
In order to sequence products, one important piece of information is the production
times or pilj . One quickly realises the magnitude of the dataset that is needed here.
Picture, for example, a medium-sized company with three virtual stages: six molding,
drying and glazing lines, four kilns and four classification and packing lines. This makes a
total of 14 machines. For a company of this size, it is not at all uncommon to have
a total number of production references in excess of 15,000. Therefore, a total of
14 × 15,000 = 210,000 data items is needed. Introducing each one of these figures into the
scheduling system is an arduous and repetitive task, let alone maintaining this data
at a later stage. A smarter approach is needed in order to streamline this aspect of
the scheduling system.
In a nutshell, the ‘speed’ at which a machine can process a batch of ceramic tiles
depends on a set of characteristics, many of which are common to large families
of products. Therefore, the speed of the machines can be determined from these
characteristics, and it is not necessary to record this information for each product
code. A clear example is the molding press. The mold inside the molding press
determines how many tiles are pressed on each pressing cycle. The base material of
the tile (the type of clay), along with the thickness of the tile and the characteristics of
the molding press, determines how fast each pressing cycle goes. As a result, from
the type of molding press together with the type of mold, we can determine the speed of the
molding operation. Notice that the type of material used and the thickness of the tile
are already factored into the type of mold.
The speed of the conveyor belt is pretty much determined by the speed of the
molding press. Ceramic tiles advance along the conveyor belt, with enough space in
between for the remaining drying and glazing operations to be performed without
problems. The dryer usually accepts this speed and no further calculations are needed
in most cases. The glazing lines, as explained before, consist of large conveyor belts
with the tiles advancing over them. Therefore the processing time here is just derived
from the length of the glazing line.
From the above discussion, it is easy to calculate how many tiles are pressed per
minute in a given molding press. Then, with the speed and length of the conveyor
belt, we can calculate the cadence, or the time it takes for a tile to reach the end of
the glazing line after being pressed. With this data and the size of the batch of ceramic
tiles, we calculate the processing time.
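The calculation just described can be sketched as follows. All figures in the example are illustrative, not data from any of the interviewed companies.

```python
def glazing_line_processing_time(tiles_per_cycle, cycles_per_minute,
                                 line_length_m, belt_speed_m_per_min,
                                 batch_size):
    """Estimate, in minutes, the time for a batch to clear the glazing line.

    tiles_per_cycle   -- tiles pressed per cycle (set by the mold)
    cycles_per_minute -- pressing cycles per minute (clay, thickness, press)
    The cadence is the time a single tile needs to travel from the press
    to the end of the glazing line.
    """
    tiles_per_minute = tiles_per_cycle * cycles_per_minute
    cadence = line_length_m / belt_speed_m_per_min
    # Time until the last tile is pressed, plus its travel along the line.
    return batch_size / tiles_per_minute + cadence

# Illustrative values: a 4-tile mold at 5 cycles/min, a 120 m line moving
# at 6 m/min, and a batch of 2,000 tiles.
t = glazing_line_processing_time(4, 5, 120, 6, 2000)
```

The point of the sketch is that a handful of characteristic values replaces a per-product processing time entry.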
A similar situation arises at the kilns. Here, the firing time depends mainly on the
format and base material of the tile, with very small variations that depend on the
glaze employed. Large format and/or thick tiles need longer firing times, whereas
smaller and thinner tiles can be fired in as little as 25 min.
Lastly, classification lines are even easier, as the classification speed varies (and
not by a large margin) with the tile format.
At this point it is worth noticing that eligibility might help in reducing the amount
of data needed. Eligibility is in many cases determined by the base material and
the format of the tile. Simply speaking, some presses do not have enough power
to press large tiles, special decorations are only available at some glazing lines,
and some kilns are specially devised for small or large tiles. Therefore, eligibility
is calculated for each possible product and machine on the basis of
matching characteristics. If these characteristics match, then we can calculate the
processing time. All in all, and following the previous example, from the initial
210,000 processing times needed, the resulting distinct processing times
amount to less than 2,500 for the molding presses, less than 200 for the kilns and about
300 for the classification lines.
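A minimal sketch of such an eligibility check, assuming hypothetical characteristic fields, could read:

```python
# Hypothetical eligibility test: a machine can process a product only when
# their characteristics match. Field and set names are our own assumptions.
def eligible(machine, product):
    return (product["format"] in machine["formats"]
            and product["decoration"] in machine["decorations"])

press = {"formats": {"33x33", "45x45"}, "decorations": {"plain", "printed"}}
small_plain = {"format": "33x33", "decoration": "plain"}
large_relief = {"format": "60x60", "decoration": "relief"}

# Processing times need only be derived for eligible (machine, product)
# pairs, which is what shrinks the 210,000 potential entries to a few
# thousand distinct values.
```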
The most problematic data requirements of CTPP_Model_1 are the sequence
dependent setup times or Siljk . Following the previous example of 14 machines
and 15,000 products, the number of possible setups goes all the way up to 14 × 15,000 ×
15,000 = 3,150,000,000. It is easy to see that such an amount of setup times is impractical
to store and maintain in any database. Note, however, that not all setup combinations are
possible. For example, the setup time value Siljk is irrelevant (and can be assumed
to be zero) in the following cases: j = k, ∀ j, k ∈ N, and whenever l ∉ Eij or l ∉ Eik.
As a result, the real number of setups needed is much smaller and can only be
calculated once the eligibility data is known. Furthermore, some ceramic tile production
sequences require such large setups that they are impracticable in real life. For
example, producing a very small white tile after a large black tile with many other
additional differences probably needs half a week of setups in the mold, glazing line
and kiln. Such setups can therefore be approximated by a very large value,
without caring about the exact figure.
Furthermore, the setup times can also be calculated following the approach we
employed for the processing times, although calculating them is much more daunting.
For each pair of different products that are eligible for a
given machine, we can calculate the differences in their characteristics. For example,
consecutive products sharing the same mold in the molding press require no setup
time on the molding press and dryers. However, if they differ in colour or in
the number of glazing applications, significant cleaning and adjustment is needed
on the glazing lines. As a result, the setup operations can be accounted for from
the differences in the products’ characteristics. For example, changing the large mold
needed for a large format tile requires operators to use jib or even overhead traveling
cranes due to its weight, whereas smaller molds can be changed directly by the operators.
Hence, large differences in setup times occur. Similarly, some decoration machines
have self-cleaning routines (like the tile ‘printers’) whereas other machines require
manual and comprehensive cleaning.
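A rough sketch of this characteristic-based derivation for the molding press follows. Mold identifiers, times and the impracticable-pair rule are all illustrative assumptions, not measured data:

```python
# Illustrative sketch: deriving a press setup time from product
# characteristics instead of storing one value per product pair.
BIG_M = 10_000  # minutes; stands in for 'impracticable in real life'

def press_setup_time(prev, nxt, unmount, mount, impracticable):
    if prev["mold"] == nxt["mold"]:
        return 0                      # same mold: no setup on the press
    if (prev["format"], nxt["format"]) in impracticable:
        return BIG_M                  # approximate huge setups by a large value
    return unmount[prev["mold"]] + mount[nxt["mold"]]

unmount = {"M1": 30, "M2": 90}        # small mold vs. crane-handled large mold
mount = {"M1": 25, "M2": 80}
impracticable = {("20x20", "90x90")}  # hypothetical forbidden transition

a = {"mold": "M1", "format": "33x33"}
b = {"mold": "M2", "format": "45x45"}
```

In the same spirit, glazing line setups would be derived from differences in colour and number of applications, and kiln setups from differences in firing curves.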
The internal organisation of the production shop differs from company to com-
pany. Some companies have several setup squads that can re-configure the molding
press, dryer and glazing lines simultaneously; in other cases, these setups are done
sequentially by a single squad. In summary, to quantify the setup we ‘only’ need to
know the time it takes to unmount and mount every usable mold at each press, the
time to change the visible plate of the mold (given the same format), the time to change
the drying curve from format to format, and all the changes in applications from one
product to another on the glazing line. This still results in a very
large set of data, albeit many orders of magnitude smaller than the original num-
ber of possible setups. For example, a typical medium-sized company might have 80
different formats, and each molding press is usually able to press about 60 % of these formats.
As a result, we need a minimum of approximately 48 unmounting and 48 mounting
times to come up with a fairly accurate sequence dependent setup time matrix at the
molding press.
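The leverage of this approach can be checked with a quick back-of-the-envelope count. The format figures come from the text; the pairwise count is our own illustration:

```python
# Back-of-the-envelope count for one molding press.
formats_total = 80
molds_per_press = int(formats_total * 0.6)   # each press handles ~60 % of formats

# Measuring one unmounting and one mounting time per usable mold...
measurements_needed = 2 * molds_per_press    # 96 measured times

# ...yields, by combining unmount(a) + mount(b), an approximate setup value
# for every ordered pair of distinct molds on that press:
pairwise_setups = molds_per_press * (molds_per_press - 1)
```

Fewer than a hundred measurements thus stand in for thousands of matrix entries per press.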
With such enormous data sets, special care must be taken with storage and retrieval.
Carefully crafted relational databases, where the performance of the queries is considered
at the design stage, were developed. The details of such databases fall outside the
scope of this book. Suffice it to say that important implementation-specific details
such as distributed tables and multiple indexes were employed. The resulting databases
exceeded 20 GB of information in some cases.
Information retrieval is another key issue. Depending on the size of the company
and on the span of the scheduling setting, typically from as few as 50 to more
than 500 batches of ceramic tiles are to be scheduled. This means that all these
products, along with their predecessor structure, stage visitation, processing
times, setup times, and so on, must be loaded from the database. In order to carry out this
operation in an acceptable time frame, we launch several SQL queries in parallel,
using thread programming and the multiple processing cores available in modern
desktop CPUs. The relational databases are stored in powerful servers that are capable
of serving the SQL requests in parallel. By doing so, all the data required for a given
Fig. 15.4 Screenshot of ProdPlanner software, capable of producing solutions for CTPP_Model_1
guidelines described in Sect. 14.7. Most of the work was needed in the scheduling
system development and data gathering.
The outcome was really encouraging at the time, since makespan reductions of up to
16 % could be observed for the CTPP. Development of the user interface and the data
management modules was equally challenging, as providing the required flexibility
results in complex software applications. Overall, and considering all the modules
in the proposed extended architecture, only about 10 % of the time was devoted to
the development (not research) of the optimisation algorithms. More than 50 % of
the time was devoted to the User Interface, whereas the Database Management and
DAM required about 40 % of the time. Similarly, when a new company is interested
in SeKuen, most of the implementation effort goes into the User Interface and Database
Management modules, especially the Import/Export/Synchronisation submodule.
More information about SeKuen is available from http://www.sekuen.com.
Fig. 15.5 Screenshot of SeKuen software, based on ProdPlanner, suitable for CTPP_Model_2
Even though CTPP_Model_2 has only recently been developed, there are still
some open issues in the modelling. First of all, the machines are not the only resource
that must be considered on the production floor. Personnel, tools and other resources
are also necessary; these resources are limited and must also be considered in the
scheduling. For example, even though we might be considering sequence dependent
setup times at the molding presses, these setups are actually carried out by setup
squads, which might be limited in number. A setup requiring a squad that is not currently
available will have to be postponed and is therefore likely to result in a bad
schedule. Transportation times between stages and the control of the AGVs must
also be considered. Storage capacity (the number of boxes) between each stage is also a
key concern, as in some cases this is a limited resource. Additional work is needed in
order to significantly increase the number of realistic situations that SeKuen is able
to handle. As of 2012, we are working on a third installment of SeKuen that is able
to sequence production considering additional resources. This new version is also
being rewritten from Delphi to C# and the .NET platform 4.0.
More and more complex scheduling settings result in slow algorithms. Calcu-
lating the production sequences in the presence of so many constraints is really
slow. However, the metaheuristic techniques employed need to evaluate thousands
of schedules in order to produce high quality results. The solution is parallelisation
and cooperative computing, as already pointed out in Sect. 14.7.7. By using grids
or, at a much simpler level, the multiple CPU cores available in modern computers,
results of very high quality are obtainable in a fraction of the CPU time. Similarly, very
complex scheduling problems can be solved in short CPU times. Research in this
field is rather scarce when compared with the large body of research on ‘sequential’
scheduling. However, some promising results are already appearing, like the work
of Vallada and Ruiz (2009).
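As a minimal illustration of parallel schedule evaluation, the following sketch evaluates permutation flowshop makespans concurrently. Python threads merely illustrate the idea; a CPU-bound implementation would rely on processes or on a grid, as discussed above, and the two-job instance is invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def makespan(permutation, proc_times):
    """Makespan of a permutation flowshop schedule (classic recurrence)."""
    machines = len(proc_times[0])
    completion = [0.0] * machines
    for job in permutation:
        completion[0] += proc_times[job][0]
        for m in range(1, machines):
            completion[m] = max(completion[m], completion[m - 1]) \
                            + proc_times[job][m]
    return completion[-1]

def evaluate_in_parallel(permutations, proc_times, workers=4):
    """Evaluate many candidate schedules concurrently, as a metaheuristic
    must do thousands of times per run."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda p: makespan(p, proc_times), permutations))

# Toy instance: two jobs on two machines, proc_times[job][machine].
scores = evaluate_in_parallel([(0, 1), (1, 0)], [[3, 2], [1, 4]])
```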
Single objective optimisation is controversial in practice. A sequence obtained
after makespan minimisation will result in many due date violations and usually
a large number of setups. Minimising tardiness results again in large setups and a large
makespan. In practice, single objective optimisation is of little use. We are not aware
of any attempts at solving multi-objective complex scheduling problems of this kind.
However, promising results already exist (Minella et al. 2008), and more work is needed in this
direction.
References
Allahverdi, A., Gupta, J. N. D., and Aldowaisan, T. (1999). A review of scheduling research involv-
ing setup considerations. OMEGA, The International Journal of Management Science, 27(2):
219–239.
Allahverdi, A., Ng, C. T., Cheng, T. C. E., and Kovalyov, M. Y. (2008). A survey of schedul-
ing problems with setup times or costs. European Journal of Operational Research, 187(3):
985–1032.
Dudek, R. A., Panwalkar, S. S., and Smith, M. L. (1992). The lessons of flowshop scheduling
research. Operations Research, 40(1):7–13.
Ford, F. N., Bradbard, D. A., Ledbetter, W. N., and Cox, J. F. (1987). Use of operations research in
production management. Production and Inventory Management, 28(3):59–62.
Graves, S. C. (1981). A review of production scheduling. Operations Research, 29(4):646–675.
Ledbetter, W. N. and Cox, J. F. (1977). Operations research in production management: An inves-
tigation of past and present utilisation. Production and Inventory Management, 18(3):84–91.
Linn, R. and Zhang, W. (1999). Hybrid flow shop scheduling: A survey. Computers & Industrial
Engineering, 37(1-2):57–61.
MacCarthy, B. L. and Liu, J. (1993). Addressing the gap in scheduling research: A review of
optimization and heuristic methods in production scheduling. International Journal of Production
Research, 31(1):59–79.
McKay, K., Safayeni, F., and Buzacott, J. (1988). Job-shop scheduling theory: What is relevant?
Interfaces, 18(4):84–90.
McKay, K. N., Pinedo, M. L., and Webster, S. (2002). Practice-focused research issues for scheduling
systems. Production and Operations Management, 11(2):249–258.
Minella, G., Ruiz, R., and Ciavotta, M. (2008). A review and evaluation of multi-objective algorithms
for the flowshop scheduling problem. INFORMS Journal on Computing, 20(3):451–471.
Olhager, J. and Rapp, B. (1995). Operations research techniques in manufacturing planning and
control systems. International Transactions in Operational Research, 2(1):29–43.
Quadt, D. and Kuhn, D. (2007). A taxonomy of flexible flow line scheduling procedures. European
Journal of Operational Research, 178(3):686–698.
Reisman, A., Kumar, A., and Motwani, J. (1997). Flowshop scheduling/sequencing research: A
statistical review of the literature, 1952-1994. IEEE Transactions on Engineering Management,
44(3):316–329.
Ribas, I., Leisten, R., and Framinan, J. M. (2010). Review and classification of hybrid flow shop
scheduling problems from a production system and a solutions procedure perspective. Computers
& Operations Research, 37(8):1439–1454.
Ruiz, R. and Maroto, C. (2002). Flexible manufacturing in the ceramic tile industry. In Eighth
International Workshop on Project Management and Scheduling, PMS 2002. Abstracts, pages
301–304, Valencia. Universitat de València.
Ruiz, R. and Maroto, C. (2006). A genetic algorithm for hybrid flowshops with sequence dependent
setup times and machine eligibility. European Journal of Operational Research, 169(3):781–800.
Ruiz, R., Sivrikaya-Şerifoğlu, F., and Urlings, T. (2008). Modelling realistic hybrid flexible flowshop
scheduling problems. Computers & Operations Research, 35(4):1151–1175.
Ruiz, R. and Vázquez-Rodríguez, J. A. (2010). The hybrid flowshop scheduling problem. European
Journal of Operational Research, 205(1):1–18.
Urlings, T. and Ruiz, R. (2007). Local search in complex scheduling problems. In Stützle, T.,
Birattari, M., and Hoos, H. H., editors, First Workshop on Stochastic Local Search (SLS 2007).
Engineering Stochastic Local Search Algorithms. Designing, Implementing and Analyzing Effec-
tive Heuristics, volume 4628, pages 202–206, Berlin. Springer-Verlag.
Urlings, T., Ruiz, R., and Sivrikaya-Şerifoğlu, F. (2010a). Genetic algorithms with different rep-
resentation schemes for complex hybrid flexible flow line problems. International Journal of
Metaheuristics, 1(1):30–54.
Urlings, T., Ruiz, R., and Stützle, T. (2010b). Shifting representation search for hybrid flexible
flowline problems. European Journal of Operational Research. In review.
Vallada, E., Maroto, C., Ruiz, R., and Segura, B. (2005). Analysis of production scheduling in the
Spanish tile industry. Boletín de la Sociedad Española de Cerámica y Vidrio, 44(1):39–44. In
Spanish.
Vallada, E. and Ruiz, R. (2009). Cooperative metaheuristics for the permutation flowshop schedul-
ing problem. European Journal of Operational Research, 193(2):365–376.
Vignier, A., Billaut, J.-C., and Proust, C. (1999). Les problèmes d’ordonnancement de type flow-
shop hybride: État de l’art. RAIRO Recherche opérationnelle, 33(2):117–183. In French.
Wang, H. (2005). Flexible flow shop scheduling: optimum, heuristics and artificial intelligence
solutions. Expert Systems, 22(2):78–85.
Index
A
Advance Planning Systems, 32
Advanced Planning and Scheduling, 37
Agent-based approaches, 135
Algorithm
  introduction, 12
Automated guided vehicles, 374
Average relative percentage deviation, 184

B
Batch size, 85
Batching
  parallel, 85
  serial, 85
Bill of materials, 63, 69, 76
Blocking, 93
BLU/DAM, see Business logic unit/data abstraction management
Branch and bound, 174, 191, 204
Branch and cut, 174
Breakdowns, 82
Business information system
  definition, 295
Business logic unit, 319
Business logic unit/data abstraction management, 311, 313, 320, 321, 329, 330

C
Capacity buckets, 325, 327
Capacity requirements planning, 69
Ceramic tile, 372
Completion time, 104
Complexity, 26
  computational, 26
  real-world, 26
Complexity classes, 157
Complexity hierarchy, 162
Computational complexity, 156
Computational problems, 156
Corrective maintenance, 82
Cutting plane, 174
Cycle time, 111

D
Database management, 311, 320
Deadline, 87
Decision support system, 291
Decision-making, 11
Dispatching rules, 167
  apparent tardiness cost, 172
  dynamic, 167
  earliest completion next stage, 171
  earliest completion time, 171
  earliest due date first, 168
  earliest preparation next stage, 171
  earliest release date first, 167
  earliest starting time, 171
  first available machine, 170
  first come first served, 167
  forbidden machine, 171
  global, 167
  largest number of successors, 169
  last come first served, 167
  least flexible job first, 169
  local, 167
  longest processing time first, 168
  minimum slack first, 168
  next job same machine, 171
  random order, 167
  shortest processing time first, 168

H
Heterogenous-criteria scheduling, 285
Hybrid flow shop, 375

I
Instance, 48
Interfering jobs, 285

N
Nervousness, 5
No idle time, 84
No wait, 88

O
Operation, 49