Presenting a strong and clear relationship between theory and practice, Linear and Integer Optimization: Theory and Practice is divided into two main parts. The first part covers the theory of linear and integer optimization. More advanced topics are also presented, including interior point algorithms, the branch-and-bound algorithm, cutting planes, complexity, standard combinatorial optimization models, the assignment problem, minimum cost flow, and the maximum flow/minimum cut theorem.
The second part applies theory through real-world case studies. The authors discuss advanced techniques such as column generation, multiobjective optimization, dynamic optimization, machine learning (support vector machines), combinatorial optimization, approximation algorithms, and game theory.
Besides the fresh new layout and completely redesigned figures, this new edition
incorporates modern examples and applications of linear optimization. The book now
includes computer code in the form of models in the GNU Mathematical Programming
Language (GMPL). The models and corresponding data files are available for download
and can be readily solved using the provided online solver.
This new edition also contains appendices covering mathematical proofs, linear algebra,
graph theory, convexity, and nonlinear optimization. All chapters contain extensive
examples and exercises.
This textbook is ideal for courses for advanced undergraduate and graduate students
in various fields including mathematics, computer science, industrial engineering,
operations research, and management science.
LINEAR AND INTEGER
OPTIMIZATION
Theory and Practice
Third Edition
Advances in Applied Mathematics
Published Titles
Green’s Functions with Applications, Second Edition (Dean G. Duffy)
Linear and Integer Optimization: Theory and Practice, Third Edition (Gerard Sierksma and Yori Zwols)
Markov Processes (James R. Kirkwood)
Pocket Book of Integrals and Mathematical Formulas, 5th Edition (Ronald J. Tallarida)
Stochastic Partial Differential Equations, Second Edition (Pao-Liu Chow)
Gerard Sierksma
University of Groningen, The Netherlands
Yori Zwols
Google, London, United Kingdom
All code in this book is subject to the MIT open source license. See http://opensource.org/licenses/MIT.
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2015 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been
made to publish reliable data and information, but the author and publisher cannot assume responsibility for the valid-
ity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright
holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this
form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may
rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or uti-
lized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopy-
ing, microfilming, and recording, or in any information storage or retrieval system, without written permission from the
publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://
www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923,
978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For
organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
To Rita and Rouba
Contents
Preface
Appendices
A Mathematical proofs
  A.1 Direct proof
  A.2 Proof by contradiction
  A.3 Mathematical induction
D Convexity
  D.1 Sets, continuous functions, Weierstrass’ theorem
  D.2 Convexity, polyhedra, polytopes, and cones
Bibliography
List of Figures
15.2 Owen points for different distributions of farmers of Type 1, 2, and 3.
E.1 The functions f(x) = x² and h(x) = x̂² + 2x̂(x − x̂).
E.2 The functions f(x) = |x| and h(x) = |x̂| + α(x − x̂).
E.3 Global and local minimizers and maximizers of the function f(x) = cos(2πx)/x.
E.4 Consumer choice model.
E.5 The feasible region of the model in Example E.4.1.
E.6 Nonlinear optimization model for which the KKT conditions fail.
List of Tables
11.1 Relative letter frequencies (in percentages) of several newspaper articles.
11.2 Validation results for the classifier.
Preface

The past twenty years have shown an explosive growth in the applications of mathematical algorithms. New and improved algorithms and theories have appeared, mainly in practice-oriented scientific journals. Linear and integer optimization algorithms have established themselves among the most-used techniques for quantitative decision support systems.
Many universities all over the world have used the first two editions of this book for their linear optimization courses. The main reason for choosing this textbook has been the strong and clear relationship it shows between theory and practice.
Linear optimization, also commonly called linear programming, can be described as the process
of transforming a real-life decision problem into a mathematical model, together with the
process of designing algorithms with which the mathematical model can be analyzed and
solved, resulting in a proposal that may support the procedure of solving the real-life problem.
The mathematical model consists of an objective function that has to be maximized or
minimized, and a finite number of constraints, where both the objective function and the
constraints are linear in a finite number of decision variables. If the decision variables are
restricted to be integer-valued, the linear optimization model is called an integer (linear)
optimization model.
Linear and integer optimization are among the most widely and successfully used decision
tools in the quantitative analysis of practical problems where rational decisions have to be
made. They form a main branch of management science and operations research, and they
are indispensable for the understanding of nonlinear optimization. Besides the fundamental
role that linear and integer optimization play in economic optimization problems, they
are also of great importance in strategic planning problems, in the analysis of algorithms,
in combinatorial problems, and in many other subjects. In addition to its many practical
applications, linear optimization has beautiful mathematical aspects. It blends algorithmic
and algebraic concepts with geometric ones.
Organization
This book starts at an elementary level. All concepts are introduced by means of simple
prototype examples which are extended and generalized to more realistic settings. The
reader is challenged to understand the concepts and phenomena by means of mathematical
arguments. So besides the insight into the practical use of linear and integer optimization,
the reader obtains a thorough knowledge of its theoretical background. With the growing
need for very specific techniques, the theoretical knowledge has become more and more
practically useful. It is very often not possible to apply standard techniques in practical
situations. Practical problems demand specific adaptations of standard models, which are
efficiently solvable only with a thorough mathematical understanding of the techniques.
The book consists of two parts. Part I covers the theory of linear and integer optimization.
It deals with basic topics such as Dantzig’s simplex algorithm, duality, sensitivity analysis,
integer optimization models, and network models, as well as more advanced topics such as
interior point algorithms, the branch-and-bound algorithm, cutting planes, and complex-
ity. Part II of the book covers case studies and more advanced techniques such as column
generation, multiobjective optimization, and game theory.
All chapters contain an extensive number of examples and exercises. The book contains five appendices, a list of symbols, an author index, and a subject index. The literature list at the end of the book contains the relevant literature, mostly from after 1990.
Examples, computer exercises, and advanced material are marked with icons in the margin.
Overview of Part I
In Chapter 1, the reader is introduced to linear optimization. The basic concepts of linear
optimization are explained, along with examples of linear optimization models. Chapter 2
introduces the mathematical theory needed to study linear optimization models. The ge-
ometry and the algebra, and the relationship between them, are explored in this chapter. In
Chapter 3, Dantzig’s simplex algorithm for solving linear optimization problems is devel-
oped. Since many current practical problems, such as large-scale crew scheduling problems,
may be highly degenerate, we pay attention to this important phenomenon. For instance,
its relationship with multiple optimal solutions and with shadow prices is discussed in de-
tail. This discussion is also indispensable for understanding the output of linear optimization
computer software. Chapter 4 deals with the crucial concepts of duality and optimality, and
Chapter 5 offers an extensive account of the theory and practical use of sensitivity analysis.
In Chapter 6, we discuss the interior path version of Karmarkar’s interior point algorithm
for solving linear optimization problems. Among all versions of Karmarkar’s algorithm, the
interior path algorithm is one of the most accessible and elegant. The algorithm determines
optimal solutions by following the so-called interior path through the (relative) interior
of the feasible region towards an optimal solution. Chapter 7 deals with integer linear opti-
mization, and discusses several solution techniques such as the branch-and-bound algorithm,
and Gomory’s cutting-plane algorithm. We also discuss algorithms for mixed-integer lin-
ear optimization models. Chapter 8 can be seen as an extension of Chapter 7; it discusses
the network simplex algorithm, with an application to the transshipment problem. It also
presents the maximum flow/minimum cut theorem as a special case of linear optimization
duality. Chapter 9 deals with computational complexity issues such as polynomial solvability
and NP-completeness. With the use of complexity theory, mathematical decision problems
can be partitioned into ‘easy’ and ‘hard’ problems.
Overview of Part II
The chapters in Part II of this book discuss a number of (more or less) real-life case studies.
These case studies reflect both the problem-analyzing and the problem-solving ability of
linear and integer optimization. We have written them in order to illustrate several advanced
modeling techniques, such as network modeling, game theory, and machine learning, as well
as specific solution techniques such as column generation and multiobjective optimization.
The specific techniques discussed in each chapter are listed in Table 0.1.
Acknowledgments
We are grateful to a few people who helped and supported us with this book — to Vašek
Chvátal, Peter van Dam, Shane Legg, Cees Roos, Gert Tijssen, and Theophane Weber, to
the LaTeX community at tex.stackexchange.com, and to our families without whose
support this book would not have been written.
Groningen and London, January 2015 Gerard Sierksma and Yori Zwols
[Chapter dependency chart. All chapters build on Chapter 1 (Introduction); each Part II case study is paired with the Part I theory it uses:
Chapter 10, Designing a reservoir: Chapter 2, Geometry and algebra.
Chapter 11, Classification: Sections 3.1–3.8, Simplex algorithm, and Section 3.9, Revised simplex algorithm.
Chapters 12 and 13, Production planning: Sections 5.1–5.5, Sensitivity analysis; Sections 4.1–4.4 (except 4.3.4), Duality; Sections 4.6–4.7, Dual simplex algorithm; Section 4.3.4, Strong complementary slackness.
Chapter 17, Helicopter scheduling: Chapter 8, Network optimization.
Chapter 18, Catering problem: Chapter 9, Complexity theory.]

Chapter 1. Basic concepts of linear optimization
Overview
In 1827, the French mathematician Jean-Baptiste Joseph Fourier (1768–1830) published a method for solving systems of linear inequalities. This publication is usually seen as the first account of linear optimization. In 1939, the Russian mathematician Leonid V. Kantorovich (1912–1986) gave linear optimization formulations of resource allocation problems. Around the same time, the Dutch economist Tjalling C. Koopmans (1910–1985) formulated linear optimization models for problems arising in classical, Walrasian (Léon Walras, 1834–1910), economics. In 1975, both Kantorovich and Koopmans received the Nobel Prize in economic sciences for their work. During World War II, linear optimization models were designed and solved for military planning problems. In 1947, George B. Dantzig (1914–2005) invented what he called the simplex algorithm. The discovery of the simplex algorithm coincided with the rise of the computer, making it possible to computerize the calculations and to use the method for solving large-scale real-life problems. Since then, linear optimization has developed rapidly, both in theory and in application. At the end of the 1960s, the first software packages appeared on the market. Nowadays, linear optimization problems with millions of variables and constraints can readily be solved.
Linear optimization is presently used in almost all industrial and academic areas of quantitative decision making. For an extensive, but not exhaustive, list of fields of application of linear optimization, we refer to Section 1.6 and the case studies in Chapters 10–18. Moreover, the theory behind linear optimization forms the basis for more advanced nonlinear optimization.
In this chapter, the basic concepts of linear optimization are discussed. We start with a simple example of a so-called linear optimization model (abbreviated to LO-model) containing two decision variables. An optimal solution of the model is determined by means of the ‘graphical method’. This simple example is used as a warm-up exercise for more realistic cases, and for the general form of an LO-model. We present a few LO-models that illustrate
the use of linear optimization, and that introduce some standard modeling techniques. We
also describe how to use an online linear optimization package to solve an LO-model.
x1 = the number of boxes (×100,000) of long matches to be made the next year,
x2 = the number of boxes (×100,000) of short matches to be made the next year.
The company makes a profit of 3 (×$1,000) for every 100,000 boxes of long matches, which
means that for x1 (×100,000) boxes of long matches, the profit is 3x1 (×$1,000). Similarly,
for x2 (×100,000) boxes of short matches the profit is 2x2 (×$1,000). Since Dovetail aims
at maximizing its profit, and it is assumed that Dovetail can sell its full production, the
objective of Dovetail is:
maximize 3x1 + 2x2.

The function 3x1 + 2x2 is called the objective function of the problem. It is a function of the
decision variables x1 and x2 . If we only consider the objective function, it is obvious that
the production of matches should be taken as high as possible. However, the company also
has to take into account a number of constraints. First, the machine capacity is 9 (×100,000)
boxes per year. This yields the constraint:
x1 + x2 ≤ 9. (1.1)
Second, the amount of wood available for the production of the matches yields the constraint:

3x1 + x2 ≤ 18. (1.2)

Third, the numbers of available boxes for long and short matches are restricted, which means that x1 and x2 have to satisfy:

x1 ≤ 7, (1.3)
and x2 ≤ 6. (1.4)
The inequalities (1.1) – (1.4) are called technology constraints. Finally, we assume that only
nonnegative amounts can be produced, i.e.,
x1 , x2 ≥ 0.
The inequalities x1 ≥ 0 and x2 ≥ 0 are called nonnegativity constraints. Taking together the
six expressions formulated above, we obtain Model Dovetail:
Model Dovetail.

max  3x1 + 2x2
s.t.  x1 +  x2 ≤  9
     3x1 +  x2 ≤ 18
      x1       ≤  7
            x2 ≤  6
      x1, x2 ≥ 0.
In this model ‘s.t.’ means ‘subject to’. Model Dovetail is an example of a linear optimization
model. We will abbreviate ‘linear optimization model’ as ‘LO-model’. The term ‘linear’
refers to the fact that the objective function and the constraints are linear functions of the
decision variables x1 and x2 . In the next section we will determine an optimal solution (also
called optimal point) of Model Dovetail, which means that we will determine values of x1 and
x2 satisfying the constraints of the model, and such that the value of the objective function
is maximum for these values.
LO-models are often called ‘LP-models’, where ‘LP’ stands for linear programming. The word
‘programming’ in this context is an old-fashioned word for optimization, and has nothing
to do with the modern meaning of programming (as in ‘computer programming’). We
therefore prefer to use the word ‘optimization’ to avoid confusion.
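The preface notes that the book’s models are available in the GNU Mathematical Programming Language (GMPL). As a sketch of how Model Dovetail might be written in GMPL (the variable and constraint names are ours, not the book’s):

    /* Model Dovetail, a GMPL (GNU MathProg) sketch of ours. */
    var x1 >= 0;                      /* long matches (x 100,000 boxes) */
    var x2 >= 0;                      /* short matches (x 100,000 boxes) */
    maximize profit: 3*x1 + 2*x2;     /* profit (x $1,000) */
    s.t. machine: x1 + x2 <= 9;       /* machine capacity, (1.1) */
    s.t. wood: 3*x1 + x2 <= 18;       /* wood availability, (1.2) */
    s.t. long_boxes: x1 <= 7;         /* boxes for long matches, (1.3) */
    s.t. short_boxes: x2 <= 6;        /* boxes for short matches, (1.4) */
    solve;
    printf "x1 = %g, x2 = %g, profit = %g\n", x1, x2, 3*x1 + 2*x2;
    end;

Saved as, say, dovetail.mod, such a model can be solved with GLPK’s command line tool: glpsol --math dovetail.mod. The book provides an online solver; glpsol is simply one freely available alternative.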
[Figure 1.1: Nonnegativity constraints. Figure 1.2: The feasible region 0v1v2v3v4 of Model Dovetail.]
The points satisfying x1 + x2 ≤ 9 are located on and below the line defined by x1 + x2 = 9. Figure 1.2 is obtained by doing this for all constraints. We end up with the region 0v1v2v3v4, which is called the feasible region of the model; it contains exactly the points (x1, x2)^T that satisfy the constraints of the model. The points 0, v1, v2, v3, and v4 are called the vertices of the feasible region. It can easily be calculated that:

v1 = (6, 0)^T, v2 = (4½, 4½)^T, v3 = (3, 6)^T, and v4 = (0, 6)^T.
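Each vertex can be found by solving the equations of its two binding constraints simultaneously. As a short check (ours), v2 lies on the lines of constraints (1.1) and (1.2):

    % Solving the two binding constraints of v2:
    \begin{aligned}
      x_1 + x_2  &= 9\\
      3x_1 + x_2 &= 18
    \end{aligned}
    \quad\Longrightarrow\quad 2x_1 = 9
    \quad\Longrightarrow\quad x_1 = x_2 = 4\tfrac{1}{2}.

Subtracting the first equation from the second eliminates x2 and gives x1 directly.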
In Figure 1.2, we also see that constraint (1.3) can be deleted without changing the feasible region. Such a constraint is called redundant with respect to the feasible region. On the other hand, there are reasons for keeping this constraint in the model. For example, when the right hand side of constraint (1.2) is sufficiently increased (thereby moving the line in Figure 1.2 corresponding to (1.2) to the right), constraint (1.3) becomes nonredundant again. See Chapter 5.
Next, we determine the points in the feasible region that attain the maximum value of the objective function. To that end, we draw in Figure 1.3 a number of so-called level lines. A level line is a line for which all points (x1, x2)^T on it have the same value of the objective function. In Figure 1.3, five level lines are drawn, namely 3x1 + 2x2 = 0, 6, 12, 18, and 24. The arrows in Figure 1.3 point in the direction of increasing values of the objective function 3x1 + 2x2. These arrows are in fact perpendicular to the level lines.
In order to find an optimal solution using Figure 1.3, we start with a level line corresponding to a small objective value (e.g., 6) and then (virtually) ‘move’ it in the direction of the arrows, so that the values of the objective function increase. We stop moving the level line when it reaches the boundary of the feasible region, so that moving the level line any further would mean that no point of it would lie in the region 0v1v2v3v4. This happens for the level line 3x1 + 2x2 = 22½. This level line intersects the feasible region at exactly one point, namely (4½, 4½)^T. Hence, the optimal solution is x1∗ = 4½, x2∗ = 4½, and the optimal objective value is 22½.
[Figure 1.3: Level lines and the feasible region. Figure 1.4: Three-dimensional picture: the values z = 3x1 + 2x2 over the feasible region 0v1v2v3v4.]
Note that the optimal point is a vertex of the feasible region. This fact plays a crucial role in linear optimization; see Section 2.1.2. Also note that this is the only optimal point.
In Figure 1.4, the same model is depicted in three-dimensional space. The values of z = 3x1 + 2x2 on the region 0v1v2v3v4 form the region 0v1′v2′v3′v4′. From Figure 1.4 it is obvious that the point v2 with coordinate values x1 = 4½ and x2 = 4½ is the optimal solution. At v2, the value of the objective function is z∗ = 22½, which means that the maximum profit is $22,500. This profit is achieved by producing 450,000 boxes of long matches and 450,000 boxes of short matches.
max c1 x1 + . . . + cn xn , or min c1 x1 + . . . + cn xn ,
respectively. In the case of Model Dovetail the objective is max 3x1 + 2x2 and the
objective function is 3x1 + 2x2 . The value of the objective function at a point x is
called the objective value at x.
▶ Technology constraints. A technology constraint of an LO-model is either a ‘≤’, a ‘≥’, or an ‘=’ expression of the form:

ai1 x1 + . . . + ain xn (≤, ≥, =) bi,

where (≤, ≥, =) means that either the sign ‘≤’, or ‘≥’, or ‘=’ holds. The entry aij is the
coefficient of the j ’th decision variable xj in the i’th technology constraint. Let m be
the number of technology constraints. All (left hand sides of the) technology constraints
are linear functions of the decision variables x1 , . . . , xn .
▶ Nonnegativity and nonpositivity constraints. A nonnegativity constraint of an LO-
model is an inequality of the form xi ≥ 0; similarly, a nonpositivity constraint is of the
form xi ≤ 0. It may also happen that a variable xi is not restricted by a nonnegativity
constraint or a nonpositivity constraint. In that case, we say that xi is a free or unrestricted
variable. Although nonnegativity and nonpositivity constraints can be written in the
form of a technology constraint, we will usually write them down separately.
For i ∈ {1, . . . , m} and j ∈ {1, . . . , n}, the real-valued entries aij , bi , and cj are called
the parameters of the model. The technology constraints, nonnegativity and nonpositivity
constraints together are referred to as the constraints (or restrictions) of the model.
A vector x ∈ Rn that satisfies all constraints is called a feasible point or feasible solution of
the model. The set of all feasible points is called the feasible region of the model. An LO-
model is called feasible if its feasible region is nonempty; otherwise, it is called infeasible. An
optimal solution of a maximizing (minimizing) LO-model is a point in the feasible region with
maximum (minimum) objective value, i.e., a point such that there is no other point with a
larger (smaller) objective value. Note that there may be more than one optimal solution, or
none at all. The objective value at an optimal solution is called the optimal objective value.
Let x ∈ Rn . A constraint is called binding at the point x if it holds with equality at x. For
example, in Figure 1.2, the constraints (1.1) and (1.2) are binding at the point v2 , and the
other constraints are not binding. A constraint is called violated at the point x if it does not
hold at x. So, if one or more constraints are violated at x, then x does not lie in the feasible
region.
A maximizing LO-model with only ‘≤’ technology constraints and nonnegativity con-
straints can be written as follows:
max  c1 x1 + . . . + cn xn
s.t. a11 x1 + . . . + a1n xn ≤ b1
      . . .
     am1 x1 + . . . + amn xn ≤ bm
     x1 , . . . , xn ≥ 0.
Using the summation sign ‘Σ’, this can also be written as:

max  Σ_{j=1}^{n} cj xj
s.t. Σ_{j=1}^{n} aij xj ≤ bi   for i = 1, . . . , m
     x1 , . . . , xn ≥ 0.
In terms of matrices there is an even shorter notation. The superscript ‘T’ transposes a row
vector into a column vector, and an (m, n) matrix into an (n, m) matrix (m, n ≥ 1). Let
c = (c1 , . . . , cn)^T ∈ R^n ,  b = (b1 , . . . , bm)^T ∈ R^m ,  x = (x1 , . . . , xn)^T ∈ R^n ,

and let A ∈ R^{m×n} be the matrix with entry aij in row i and column j:

A = [ a11 . . . a1n
       . . .
      am1 . . . amn ].
The matrix A is called the technology matrix (or coefficients matrix), c is the objective vector, and
b is the right hand side vector of the model. The LO-model can now be written as:
max { c^T x | Ax ≤ b, x ≥ 0 },
where 0 ∈ Rn is the n-dimensional all-zero vector. We call this form the standard form of an
LO-model (see also Section 1.3). It is a maximizing model with ‘≤’ technology constraints,
and nonnegativity constraints. The feasible region F of the standard LO-model satisfies:
F = {x ∈ Rn | Ax ≤ b, x ≥ 0}.
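As a concrete check (ours), Model Dovetail fits this standard form with

    % Data of Model Dovetail in matrix notation:
    c = \begin{pmatrix} 3 \\ 2 \end{pmatrix}, \qquad
    A = \begin{pmatrix} 1 & 1 \\ 3 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad
    b = \begin{pmatrix} 9 \\ 18 \\ 7 \\ 6 \end{pmatrix},

so that max { c^T x | Ax ≤ b, x ≥ 0 } reproduces the objective 3x1 + 2x2 and the constraints (1.1)–(1.4).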
[Figure 1.5: The feasible region of Model Dovetail∗.]
Suppose that we want to add the following additional constraint to Model Dovetail. The
manager of Dovetail has an agreement with retailers to deliver a total of at least 500,000
boxes of matches next year. Using our decision variables, this yields the new constraint:
x1 + x2 ≥ 5. (1.5)
Instead of a ‘≤’ sign, this inequality contains a ‘≥’ sign. With this additional constraint,
Figure 1.3 changes into Figure 1.5. From Figure 1.5 one can graphically derive that the optimal solution (x1∗ = x2∗ = 4½) is not affected by adding the new constraint (1.5). Including constraint (1.5) in Model Dovetail yields Model Dovetail∗:
Model Dovetail∗.

max  3x1 + 2x2
s.t.  x1 +  x2 ≤  9
     3x1 +  x2 ≤ 18
      x1       ≤  7
            x2 ≤  6
      x1 +  x2 ≥  5
      x1, x2 ≥ 0.
In order to write this model in the standard form, the ‘≥’ constraint has to be transformed
into a ‘≤’ constraint. This can be done by multiplying both sides of it by −1. Hence,
x1 + x2 ≥ 5 then becomes −x1 − x2 ≤ −5. Therefore, the standard form of Model
Dovetail∗ is:

max { 3x1 + 2x2 | A (x1, x2)^T ≤ (9, 18, 7, 6, −5)^T , (x1, x2)^T ≥ (0, 0)^T },

where A is the 5×2 matrix with rows (1, 1), (3, 1), (1, 0), (0, 1), and (−1, −1).
Similarly, a constraint with ‘=’ can be put into standard form by replacing it by two ‘≤’ constraints. For instance, 3x1 − 8x2 = 11 can be replaced by 3x1 − 8x2 ≤ 11 and −3x1 + 8x2 ≤ −11. Also, the minimizing LO-model min { c^T x | Ax ≤ b, x ≥ 0 } can be written in standard form, since

min { c^T x | Ax ≤ b, x ≥ 0 } = −max { (−c)^T x | Ax ≤ b, x ≥ 0 }.
Consider again the machine capacity constraint (1.1) of Model Dovetail:

x1 + x2 ≤ 9. (1.1)

This constraint expresses the fact that the machine can produce at most 9 (×100,000) boxes
per year. We may wonder whether there is excess machine capacity (overcapacity) in the case
of the optimal solution. For that purpose, we introduce an additional nonnegative variable
x3 in the following way:
x1 + x2 + x3 = 9.
The variable x3 is called the slack variable of constraint (1.1). Its optimal value, called the
slack, measures the unused capacity of the machine. By requiring that x3 is nonnegative, we
can avoid the situation that x1 + x2 > 9, which would mean that the machine capacity is
exceeded and the constraint x1 + x2 ≤ 9 is violated. If, at the optimal solution, the value
of x3 is zero, then the machine capacity is completely used. In that case, the constraint is
binding at the optimal solution.
Introducing slack variables for all constraints of Model Dovetail, we obtain the following
model:
max  3x1 + 2x2
s.t.  x1 +  x2 + x3 = 9
     3x1 +  x2 + x4 = 18
      x1 + x5 = 7
            x2 + x6 = 6
      x1 , . . . , x6 ≥ 0.
In this model, x3 , x4 , x5 , and x6 are the nonnegative slack variables of the constraints (1.1),
(1.2), (1.3), and (1.4), respectively. The number of slack variables is therefore equal to the
number of inequality constraints of the model. In matrix notation the model becomes:
max { 3x1 + 2x2 | [A I4] (x1 , . . . , x6)^T = (9, 18, 7, 6)^T , (x1 , . . . , x6)^T ≥ 0 },

where

[A I4] = [ 1 1 1 0 0 0
           3 1 0 1 0 0
           1 0 0 0 1 0
           0 1 0 0 0 1 ].
If Im denotes the identity matrix with m rows and m columns (m ≥ 1), then the general
form of an LO-model with slack variables can be written as:
max { c^T x | [A Im] (x, xs)^T = b, x ≥ 0, xs ≥ 0 },

where xs denotes the vector of slack variables.
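As an illustration (a sketch of ours, not the book’s code), Model Dovetail with an explicit slack variable for the machine constraint can be written in GMPL as follows; at the optimal solution the slack x3 comes out as 0, confirming that constraint (1.1) is binding:

    /* Model Dovetail with an explicit slack variable x3 for (1.1). */
    var x1 >= 0;
    var x2 >= 0;
    var x3 >= 0;                      /* slack: unused machine capacity */
    maximize profit: 3*x1 + 2*x2;
    s.t. machine: x1 + x2 + x3 = 9;   /* (1.1) written as an equality */
    s.t. wood: 3*x1 + x2 <= 18;       /* (1.2) */
    s.t. long_boxes: x1 <= 7;         /* (1.3) */
    s.t. short_boxes: x2 <= 6;        /* (1.4) */
    solve;
    printf "unused machine capacity x3 = %g\n", x3;
    end;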
If the objective function of Model Dovetail is replaced by:

max x1 + x2 ,

then all points on the line segment v2v3 (see Figure 1.2) have the same optimal objective value, namely 9, and therefore all points on the line segment v2v3 are optimal. In this case, we say that there are multiple optimal solutions; see also Section 3.7 and Section 5.6.1. The feasible region has two optimal vertices, namely v2 and v3.
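A solver run makes this tangible. In the following GMPL sketch (ours), the simplex algorithm reports just one of the optimal vertices, even though every point of the segment v2v3 attains the optimal value 9:

    /* Dovetail with the alternative objective x1 + x2 (our sketch).
       All points between v2 = (4.5, 4.5) and v3 = (3, 6) are optimal;
       the solver returns one optimal vertex. */
    var x1 >= 0;
    var x2 >= 0;
    maximize total: x1 + x2;
    s.t. machine: x1 + x2 <= 9;
    s.t. wood: 3*x1 + x2 <= 18;
    s.t. long_boxes: x1 <= 7;
    s.t. short_boxes: x2 <= 6;
    solve;
    printf "x1 = %g, x2 = %g, total = %g\n", x1, x2, x1 + x2;
    end;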
Three types of feasible regions can be distinguished, namely:
[Figure 1.6: (a) Bounded model, bounded feasible region. (b) Unbounded model, unbounded feasible region. (c) Bounded model, unbounded feasible region.]
▶ Feasible region bounded and nonempty. A feasible region is called bounded if all
decision variables are bounded on the feasible region (i.e., no decision variable can take
on arbitrarily large values on the feasible region). An example is drawn in Figure 1.6(a). If
the feasible region is bounded, then the objective values are also bounded on the feasible
region and hence an optimal solution exists. Note that the feasible region of Model
Dovetail is bounded; see Figure 1.2.
▶ Feasible region unbounded. A nonempty feasible region is called unbounded if it is not
bounded; i.e., at least one of the decision variables can take on arbitrarily large values on
the feasible region. Examples of an unbounded feasible region are shown in Figure 1.6(b)
and Figure 1.6(c). Whether an optimal solution exists depends on the objective function.
For example, in the case of Figure 1.6(b) an optimal solution does not exist. Indeed, the
objective function takes on arbitrarily large values on the feasible region. Therefore, the
model has no optimal solution. An LO-model with an objective function that takes on
arbitrarily large values is called unbounded; it is called bounded otherwise. On the other
hand, in Figure 1.6(c), an optimal solution does exist. Hence, this is an example of an
LO-model with an unbounded feasible region, but with a (unique) optimal solution.
▶ Feasible region empty. In this case we have that F = ∅ and the LO-model is called infeasible. For example, if an LO-model contains the (contradictory) constraints x1 ≥ 6 and x1 ≤ 3, then its feasible region is empty. If an LO-model is infeasible, then it has no feasible points and, in particular, no optimal solution. If F ≠ ∅, then the LO-model is called feasible.
So, an LO-model either has an optimal solution, or it is infeasible, or it is unbounded.
Note that an unbounded LO-model necessarily has an unbounded feasible region, but the
converse is not true. In fact, Figure 1.6(c) shows an LO-model that is bounded, although it
has an unbounded feasible region.
Recall the LO-model

max { c^T x | Ax ≤ b, x ≥ 0 },
with A ∈ Rm×n . We call this form the standard form of an LO-model. The standard form
is characterized by a maximizing objective, ‘≤’ technology constraints, and nonnegativity
constraints.
In general, many different forms may be encountered, for instance with both ‘≥’ and ‘≤’ technology constraints, and both nonnegativity (xi ≥ 0) and nonpositivity (xi ≤ 0) constraints. All these forms can be reduced to the standard form max { c^T x | Ax ≤ b, x ≥ 0 }.
The following rules can be applied to transform a nonstandard LO-model into a standard
model:
▶ A minimizing model is transformed into a maximizing model by using the fact that minimizing a function is equivalent to maximizing minus that function. So, an objective of the form ‘min c^T x’ is equivalent to the objective ‘−max (−c)^T x’. For example, ‘min x1 + x2 ’ is equivalent to ‘−max −x1 − x2 ’.
▶ A ‘≥’ constraint is transformed into a ‘≤’ constraint by multiplying both sides of the
inequality by −1 and reversing the inequality sign. For example, x1 − 3x2 ≥ 5 is
equivalent to −x1 + 3x2 ≤ −5.
▶ A ‘=’ constraint of the form ‘a^T x = b’ can be written as ‘a^T x ≤ b and a^T x ≥ b’. The
second inequality in this expression is then transformed into a ‘≤’ constraint (see the
previous item). For example, the constraint ‘2x1 +x2 = 3’ is equivalent to ‘2x1 +x2 ≤ 3
and −2x1 − x2 ≤ −3’.
▶ A nonpositivity constraint is transformed into a nonnegativity constraint by replacing the corresponding variable by its negative. For example, the nonpositivity constraint ‘x1 ≤ 0’ is transformed into ‘x1′ ≥ 0’ by substituting x1 = −x1′.
▶ A free variable is replaced by the difference of two new nonnegative variables. For example, the expression ‘x1 free’ is replaced by ‘x1′ ≥ 0, x1″ ≥ 0’, substituting x1 = x1′ − x1″.
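Applied in combination, the rules look as follows on a small model of our own:

    % Our illustration: a nonstandard model and its standard form.
    % Nonstandard:  min  x1 - x2
    %               s.t. x1 + x2 >= 2,   x1 free,   x2 <= 0.
    % Substitute x1 = x1' - x1'' and x2 = -x2' (x1', x1'', x2' >= 0):
    \begin{aligned}
      \min\; x_1 - x_2 \;=\; -\max\; & -x_1' + x_1'' - x_2' \\
      \text{s.t. } & -x_1' + x_1'' + x_2' \le -2, \\
                   & x_1',\, x_1'',\, x_2' \ge 0 .
    \end{aligned}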
The following two examples illustrate these rules.
Example 1.3.1. Consider the nonstandard LO-model:
In addition to being a minimizing model, the model has a ‘≥’ constraint and a ‘=’ constraint. By
applying the above rules, the following equivalent standard form LO-model is found:
The point

x∗ = (x1∗ , x2∗)^T = (x1∗ , (x2′)∗ − (x2″)∗)^T = (3, −2)^T

is an optimal solution of model (1.8). Note that x̂′ = (3, 10, 12)^T is another optimal solution of (1.9) (why?), corresponding to the same optimal solution x∗ of model (1.8). In fact, the reader may verify that every point in the set

{ (3, α, 2 + α)^T ∈ R^3 | α ≥ 0 }

is an optimal solution of (1.9) that corresponds to the optimal solution x∗ of model (1.8).
We have listed six possible general nonstandard models below. Any method for solving one of the models (i)–(vi) can be used to solve the others, because they are all equivalent. The matrix A in (iii) and (vi) is assumed to be of full row rank (i.e., rank(A) = m; see Appendix B and Section 3.8). The alternative formulations are:

(i) max { c^T x | Ax ≤ b, x ≥ 0 };
(ii) max { c^T x | Ax ≤ b };
(iii) max { c^T x | Ax = b, x ≥ 0 };
(iv) min { c^T x | Ax ≥ b, x ≥ 0 };
(v) min { c^T x | Ax ≥ b };
(vi) min { c^T x | Ax = b, x ≥ 0 }.

For example, consider the reduction of (ii): replacing the free vector x by x′ − x″ with x′ ≥ 0 and x″ ≥ 0 gives

max { c^T x | Ax ≤ b } = max { [c^T −c^T] (x′, x″)^T | [A −A] (x′, x″)^T ≤ b, x′ ≥ 0, x″ ≥ 0 },
and this has the form (i). The reduction of (i) to (iii) follows by introducing slack variables in (i). Formulation (iii) can be reduced to (i) by noticing that the constraints Ax = b can be written as the two constraints Ax ≤ b and Ax ≥ b. Multiplying the latter by −1 on both sides yields −Ax ≤ −b. Therefore, (iii) is equivalent to:

max { c^T x | Ax ≤ b, −Ax ≤ −b, x ≥ 0 }.
The disadvantage of this transformation is that the model becomes considerably larger. In
Section 3.8, we will see an alternative, more economical, reduction of (iii) to the standard
form. Similarly, (iv), (v), and (vi) are equivalent. Finally, (iii) and (vi) are equivalent because
min { c^T x | Ax = b, x ≥ 0 } = −max { (−c)^T x | Ax = b, x ≥ 0 }.