Optimización Lineal
Linear Optimization
— a dynamic book —
by
Jon Lee
2013–2021
ReEx PrEsS
http://creativecommons.org/licenses/by/3.0/
where you will see the summary information below and can click through to the full
license information.
Go Forward
This is a book on linear optimization, written in LaTeX. I started it, aiming it at the course IOE 510, a masters-level course at the University of Michigan. Use it as is, or adapt it to your course! It is an ongoing project. It is alive! It can be used, modified (the LaTeX source is available) and redistributed as anyone pleases, subject to the terms of the Creative Commons Attribution 3.0 Unported License (CC BY 3.0). Please take special note that you can share (copy and redistribute in any medium or format) and adapt (remix, transform, and build upon for any purpose, even commercially) this material, but you must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests that I endorse you or your use. If you are interested in endorsements, speak to my agent.
I started this material, but I don't control so much what you do with it. Control is sometimes overrated — and I am a control freak, so I should know!
I hope that you find this material useful. If not, I am happy to refund what you paid
to me.
Jon Lee
Preface
This book is a treatment of linear optimization meant for students who are reasonably comfortable with matrix algebra (or willing to get comfortable rapidly). It is not a goal of mine to teach anyone how to solve small problems by hand. My goals are to introduce: (i) the mathematics and algorithmics of the subject at a beginning mathematical level, (ii) algorithmically-aware modeling techniques, and (iii) high-level computational tools for studying and developing optimization algorithms (in particular, Python/Gurobi¹).
Proofs are given when they are important in understanding the algorithmics. I make free use of the inverse of a matrix. But it should be understood, for example, that B⁻¹b is meant as a mathematical expression for the solution of the square linear system of equations Bx = b. I am not in any way suggesting that an efficient way to calculate the solution of a large (often sparse) linear system is to calculate an inverse! Also, I avoid the dual simplex algorithm (e.g., even in describing branch-and-bound and cutting-plane algorithms), preferring to just think about the ordinary simplex algorithm applied to the dual problem. Again, my goal is not to describe the most efficient way to do matrix algebra!
Conventional illustrations are woefully few. Though if Lagrange could not be bothered, who am I to aim higher? Still, I am gradually improving this aspect, and many of the algorithms and concepts are illustrated and verified in the modern way, with computer code.
The material that I present was mostly well known by the 1960's. As a student at Cornell in the late 70's and early 80's, I learned and got excited about linear optimization from Bob Bland, Les Trotter and Lou Billera, using [1] and [5]. The present book is a treatment of some of that material, with additional material on integer-linear optimization, most of which I originally learned from George Nemhauser and Les. But there is new material too; in particular, a “deconstructed post-modern” version of Gomory pure and mixed-integer cuts. There is nothing here on interior-point algorithms and the ellipsoid algorithm; don't tell Mike Todd!
Jon Lee
¹ New in the 4th Edition! But thanks for the very fond memories, AMPL.
Serious Acknowledgments
Throw me some serious funding for this project, and I will acknowledge you — seri-
ously!
Many of the pictures in this book were found floating around on the web. I am making “fair use” of them as they float through this document. Of course, I gratefully acknowledge those who own them.
Hearty thanks to many students and to Prof. Siqian Shen for pointing out typos in an
earlier version.
Dedication
For students (even Ohio students). Not for publishers — maybe next time.
The Nitty Gritty
You can always get the released edition of this book (in .pdf format) from my web page or github, and the materials to produce them (LaTeX source, etc.) from me.
I make significant use of software. Everything seems to work with:
Python 3.8.3 (default, Jul 2 2020, 17:30:36) [MSC v.1916 64 bit (AMD64)]
(via Anaconda distribution)
Jupyter Notebook server 6.0.3 (via Anaconda distribution)
Gurobi Optimizer version 9.1.2 build v9.1.2rc0 (win64)
WinEdt 10.3
MiKTeX 2.9
Use of older versions is inexcusable. Newer versions will surely break things. Nonetheless, if you can report success or failure on newer versions, please let me know.
I use lots of LaTeX packages (which, as you may know, makes things rather fragile). I could not possibly gather the version numbers of those — I do have a day job! (but WinEdt does endeavor to keep the packages up to date).
Contents

2 Modeling
  2.1 A Production Problem
  2.2 Norm Minimization
  2.3 Network Flow
  2.4 Modeling in Software
  2.5 Exercises
5 Duality
  5.1 The Strong Duality Theorem
  5.2 Complementary Slackness
  5.3 Duality for General Linear-Optimization Problems
  5.4 Theorems of the Alternative
  5.5 Exercises
6 Sensitivity Analysis
  6.1 Right-Hand Side Changes
    6.1.1 Local analysis
    6.1.2 Global analysis
    6.1.3 A brief detour: the column geometry for the Simplex Algorithm
  6.2 Objective Changes
    6.2.1 Local analysis
    6.2.2 Global analysis
  6.3 Exercises
8 Integer-Linear Optimization
  8.1 Integrality for Free
    8.1.1 Some structured models
    8.1.2 Unimodular basis matrices and total unimodularity
    8.1.3 Consequences of total unimodularity
  8.2 Modeling Techniques
    8.2.1 Disjunctions
    8.2.2 Forcing constraints
    8.2.3 Piecewise-linear univariate functions
  8.3 A Prelude to Algorithms
  8.4 Branch-and-Bound
  8.5 Cutting Planes
    8.5.1 Pure
    8.5.2 Mixed
    8.5.3 Finite termination
    8.5.4 Branch-and-Cut
  8.6 Exercises
Appendices
  A.1 LaTeX template
  A.2 MatrixLP.ipynb
  A.3 Production.ipynb
  A.4 Multi-commodityFlow.ipynb
  A.5 pivot_example.ipynb
  A.6 pivot_tools.ipynb
  A.7 Circle.ipynb
  A.8 Decomp.ipynb
  A.9 SubgradProj.ipynb
  A.10 CSP.ipynb
  A.11 UFL.ipynb
  A.12 pure_gomory_example_1.ipynb
  A.13 pure_gomory_example_2.ipynb
Bibliography
Indexes
  Index of definitions
  Index of Jupyter notebooks
Chapter 1

Let's Get Started
• Review ideas from linear algebra that we will make use of later.
  max  y′b
       y′A ≤ c′ .        (D)
It is worth emphasizing that (P) and (D) are both defined from the same data A , b
and c . We have the following very simple but key result, relating the objective values
of feasible solutions of the two linear-optimization problems.
Weak Duality Theorem
If x̂ is feasible for (P) and ŷ is feasible for (D), then c′x̂ ≥ ŷ′b.

Proof. We have

  c′x̂ ≥ ŷ′Ax̂ ,

because ŷ′A ≤ c′ (feasibility of ŷ in (D)) and x̂ ≥ 0 (feasibility of x̂ in (P)). Furthermore,

  ŷ′Ax̂ = ŷ′b ,

because Ax̂ = b (feasibility of x̂ in (P)). Therefore c′x̂ ≥ ŷ′b. ⊓⊔
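Weak duality is easy to check numerically. Below is a minimal sketch (not one of the book's notebooks) on made-up data: a feasible x̂ for (P) and a feasible ŷ for (D) are verified and the two objective values are compared.

    import numpy as np

    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0]])
    b = np.array([2.0, 2.0])
    c = np.array([1.0, 2.0, 1.0])

    x_hat = np.array([1.0, 1.0, 1.0])   # feasible for (P): A @ x_hat = b, x_hat >= 0
    y_hat = np.array([0.5, 1.0])        # feasible for (D): y_hat @ A <= c

    assert np.allclose(A @ x_hat, b) and np.all(x_hat >= 0)
    assert np.all(y_hat @ A <= c + 1e-12)
    print(c @ x_hat, ">=", y_hat @ b)   # 4.0 >= 3.0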
For a matrix A ∈ Rm×n, we denote the entry in row i and column j as aij. For a matrix A ∈ Rm×n, we denote the transpose of A by A′ ∈ Rn×m. That is, the entry in row i and column j of A′ is aji.
Except when we state clearly otherwise, vectors are “column vectors.” That is, we
can view a vector x ∈ Rn as a matrix in Rn×1 . Column j of A is denoted by A·j ∈ Rm .
Row i of A is denoted by Ai· , and we view its transpose as a vector in Rn . We will have
far greater occasion to reference columns of matrices rather than rows, so we will often
write Aj as a shorthand for A·j , so as to keep notation less cluttered.
For matrices A ∈ Rm×p and B ∈ Rp×n, the (matrix) product AB ∈ Rm×n is defined to be the matrix having ∑_{k=1}^{p} aik bkj as the entry in row i and column j. Note that for the product AB to make sense, the number of columns of A and the number of rows of B must be identical. It is important to emphasize that matrix multiplication is associative; that is, (AB)C = A(BC), and so we can always unambiguously write the product of any number of matrices without the need for any parentheses. Also, note that the product and transpose behave nicely together. That is, (AB)′ = B′A′.
The dot product or scalar product of vectors x, z ∈ Rn is the scalar ⟨x, z⟩ := ∑_{j=1}^{n} xj zj, which we can equivalently see as x′z or z′x, allowing ourselves to consider a 1 × 1 matrix to be viewed as a scalar. Thinking about matrix multiplication again, and freely viewing columns as vectors, the entry in row i and column j of the product AB is the dot product ⟨(Ai·)′, B·j⟩.
Matrix multiplication extends to “block matrices” in a straightforward manner. If

  A := [ A11 · · · A1p          B := [ B11 · · · B1n
         A21 · · · A2p    and          B21 · · · B2n
          ⋮   ⋱    ⋮                    ⋮   ⋱    ⋮
         Am1 · · · Amp ]               Bp1 · · · Bpn ] ,

where each of the Aij and Bij are matrices, and we assume that for all i and j the number of columns of Aik agrees with the number of rows of Bkj, then

  AB = [ ∑_{k=1}^{p} A1k Bk1  · · ·  ∑_{k=1}^{p} A1k Bkn
         ∑_{k=1}^{p} A2k Bk1  · · ·  ∑_{k=1}^{p} A2k Bkn
                  ⋮            ⋱               ⋮
         ∑_{k=1}^{p} Amk Bk1  · · ·  ∑_{k=1}^{p} Amk Bkn ] .

That is, block i, j of the product is ∑_{k=1}^{p} Aik Bkj, and Aik Bkj is understood as ordinary matrix multiplication.
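A small numerical check of the blockwise formula, on made-up blocks, is easy with numpy (this is only an illustration, not part of the book's notebooks):

    import numpy as np

    # block (i,j) of AB equals sum_k A_ik B_kj
    A11 = np.random.rand(2, 3); A12 = np.random.rand(2, 1)
    A21 = np.random.rand(1, 3); A22 = np.random.rand(1, 1)
    B11 = np.random.rand(3, 2); B12 = np.random.rand(3, 2)
    B21 = np.random.rand(1, 2); B22 = np.random.rand(1, 2)

    A = np.block([[A11, A12], [A21, A22]])
    B = np.block([[B11, B12], [B21, B22]])
    blockwise = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
                          [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])
    print(np.allclose(A @ B, blockwise))   # True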
For vectors x1, x2, . . . , xp ∈ Rn, and scalars λ1, λ2, . . . , λp, ∑_{i=1}^{p} λi xi is a linear combination of x1, x2, . . . , xp. The linear combination is trivial if all λi = 0. The vectors x1, x2, . . . , xp ∈ Rn are linearly independent if the only representation of the zero vector in Rn as a linear combination of x1, x2, . . . , xp is trivial. The set of all linear combinations of x1, x2, . . . , xp is the vector-space span of {x1, x2, . . . , xp}. The dimension of a vector space V, denoted dim(V), is the maximum number of linearly-independent vectors in it. Equivalently, it is the minimum number of vectors needed to span the space.
A set of dim(V) linearly-independent vectors that spans a vector space V is a basis for V. If V is the vector-space span of {x1, x2, . . . , xp}, then there is a subset of {x1, x2, . . . , xp} that is a basis for V. It is not hard to prove the following very useful result.

Theorem 1.2
If V is the vector-space span of {x1, x2, . . . , xp}, and some subset of {x1, x2, . . . , xp} is linearly independent, then that subset can be extended to a basis for V by appending appropriately-chosen further elements of {x1, x2, . . . , xp}.
The span of the rows of a matrix A ∈ Rm×n is the row space of A, denoted r.s.(A) := {y′A : y ∈ Rm}. Similarly, the span of the columns of a matrix A is the column space of A, denoted c.s.(A) := {Ax : x ∈ Rn}. It is a simple fact that, for a matrix A,
the dimension of its row space and the dimension of its column space are identical,
this common number being called the rank of A . The matrix A has full row rank if
its number of rows is equal to its rank. That is, if its rows are linearly independent.
Similarly, the matrix A has full column rank if its number of columns is equal to its
rank. That is, if its columns are linearly independent.
Besides the row and columns spaces of a matrix A ∈ Rm×n , there is another very
important vector space associated with A . The null space of A is the set of vectors
having 0 dot product with all rows of A , denoted n.s.(A) := {x ∈ Rn : Ax = 0} .
An important result is the following theorem relating the dimensions of the row
and null spaces of a matrix.
dim(r.s.(A)) + dim(n.s.(A)) = n .
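This dimension identity is easy to see numerically. Below is a quick illustration on made-up data (not from the book's notebooks): the rank is obtained from numpy, the nullity follows from the identity, and a null-space basis is read off the singular value decomposition.

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0, 4.0],
                  [2.0, 4.0, 6.0, 8.0],
                  [1.0, 0.0, 1.0, 0.0]])
    m, n = A.shape
    rank = np.linalg.matrix_rank(A)
    nullity = n - rank                     # dim(n.s.(A)) = n - dim(r.s.(A))
    _, s, Vt = np.linalg.svd(A)
    null_basis = Vt[rank:].T               # right singular vectors for zero singular values
    print(rank, nullity, np.allclose(A @ null_basis, 0))   # 2 2 True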
There are some simple operations on a matrix that preserve its row and null spaces. The following operations are elementary row operations:
• interchange a pair of rows;
• multiply a row by a non-zero scalar;
• add a scalar multiple of one row to another row.
There is one more operation that we allow, which is really one of convenience rather
than mathematics. It is convenient to be able to permute columns while also permuting
the corresponding column indices. That is, if A ∈ Rm×n , we regard the columns as
labeled, in left-to-right order: 1, 2, . . . , n . So we have
A = [A1 , A2 , . . . , An ] .
The identity matrix Ir in Rr×r is the matrix having 1 as every diagonal element and 0 as every off-diagonal element. Via elementary row operations, any matrix A, that is not all zero, can be transformed into one of the form

  [Ir , M ] .
A very useful fact is the Sherman-Morrison formula:

  (B + uv′)⁻¹ = B⁻¹ − (B⁻¹uv′B⁻¹) / (1 + v′B⁻¹u) ,

where the r × r matrix B is invertible, u, v ∈ Rr, and it must be assumed that 1 + v′B⁻¹u ≠ 0, for otherwise B + uv′ would not be invertible.
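The formula is easily verified numerically; here is a small check on made-up data (an illustration only):

    import numpy as np

    r = 4
    rng = np.random.default_rng(0)
    B = rng.random((r, r)) + r * np.eye(r)        # comfortably invertible
    u = rng.random(r); v = rng.random(r)
    Binv = np.linalg.inv(B)
    denom = 1.0 + v @ Binv @ u                    # must be nonzero
    lhs = np.linalg.inv(B + np.outer(u, v))
    rhs = Binv - (Binv @ np.outer(u, v) @ Binv) / denom
    print(np.allclose(lhs, rhs))                  # True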
Next, we define the determinant of a square r × r matrix B, which we denote det(B). We define the determinant in a non-standard but useful manner, via a recursive formula known as Laplace expansion.
If r = 1, then B = (b11), and we define det(B) := b11. For r > 1, choose any fixed column j of B, and we define

  det(B) = ∑_{i=1}^{r} (−1)^{i+j} bij det(B^{ij}) ,

where B^{ij} denotes the (r − 1) × (r − 1) matrix obtained from B by deleting row i and column j.
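For concreteness, here is a direct (and deliberately inefficient) Python rendering of Laplace expansion along the first column, checked against numpy; the function name and the data are mine, for illustration only:

    import numpy as np

    def laplace_det(B):
        r = B.shape[0]
        if r == 1:
            return B[0, 0]
        j = 0                                  # expand along the first column
        total = 0.0
        for i in range(r):
            Bij = np.delete(np.delete(B, i, axis=0), j, axis=1)   # delete row i, column j
            total += (-1) ** (i + j) * B[i, j] * laplace_det(Bij)
        return total

    B = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    print(laplace_det(B), np.linalg.det(B))    # both 8.0 (up to rounding)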
For an invertible r × r matrix B, we can apply elementary row operations to

  [B, Ir]

to obtain

  [Ir, B⁻¹] .

As we carry out the elementary row operations, we sometimes multiply a row by a non-zero scalar. If we accumulate the product of all of these multipliers, the result is det(B⁻¹); equivalently, the reciprocal is det(B).
Finally, for an invertible r × r matrix B and a vector b, we can express the unique solution x̄ of the system Bx = b via a formula involving determinants. Cramer's rule is the following formula:

  x̄j = det(B(j)) / det(B) ,  for j = 1, 2, . . . , r ,

where B(j) is defined to be the matrix B with its j-th column replaced by b. It is worth emphasizing that direct application of Cramer's rule is not to be thought of as a useful algorithm for computing the solution of a system of equations. But it can be very useful to have in the proof toolbox.
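A tiny numerical illustration of Cramer's rule, on a made-up 2 × 2 system (again, for proofs rather than for computation):

    import numpy as np

    B = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([3.0, 5.0])
    x_bar = np.empty(2)
    for j in range(2):
        Bj = B.copy()
        Bj[:, j] = b                              # replace column j by b
        x_bar[j] = np.linalg.det(Bj) / np.linalg.det(B)
    print(x_bar, np.linalg.solve(B, b))           # both [0.8, 1.4]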
1.4 Exercises
Exercise 1.0 (Learn LaTeX)
Learn to use LaTeX for writing all of your homework solutions. Personally, I use MiKTeX, which is an implementation of LaTeX for Windows. Specifically, within MiKTeX, I am using pdfLaTeX (it only matters for certain things like including graphics and also pdf into a document). I find it convenient to use the editor WinEdt, which is very LaTeX friendly. There are good books on LaTeX, and there are plenty of tutorials and beginner's guides on the web. In Appendix A.1 there is a template to get started.
  max  c′x
       Ax ≤ b ;        (P′)
        x ≥ 0 .

HINT: Convert (P′) to a standard-form problem, and then apply the ordinary Weak Duality Theorem for standard-form problems.
Exercise 1.6 (Weak duality for a complicated matrix form)

  min  c′x + f′w
       Ax + Bw ≤ b ;
       Dx       = g ;        (P′)
        x ≥ 0 , w ≤ 0 .

HINT: Convert (P′) to a standard-form problem, and then apply the ordinary Weak Duality Theorem for standard-form problems.
Exercise 1.7 (Weak duality for a complicated matrix form — with Python/Gurobi)
Python is an interpreted, general-purpose programming language. Anaconda is a free and open-source distribution of Python (and R). Via the Anaconda distribution, one also gets Jupyter Notebook, which is a convenient way to experiment with Python. Gurobi is state-of-the-art commercial software for linear and integer-linear optimization, with free temporary licensing for students. Gurobi can be easily accessed with gurobipy, a Python module. The Jupyter notebook MatrixLP.ipynb (see Appendix A.2) sets up and solves an instance of (P′) from Exercise 1.6. Run the code to see how it works. Now, extend the code to solve the dual of (P′). Also, after converting (P′) to standard form (as indicated in the HINT for Exercise 1.6), use Python/Gurobi to solve that problem and its dual. Make sure that you get the same optimal value for all of these problems.
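To give a feel for gurobipy (this is only a minimal sketch, not MatrixLP.ipynb; the data, model name, and dimensions are made up, and a working Gurobi license is assumed), a tiny instance in the shape of (P′) might be set up like this:

    import numpy as np
    import gurobipy as gp
    from gurobipy import GRB

    # made-up data: min c'x + f'w  s.t.  Ax + Bw <= b, Dx = g, x >= 0, w <= 0
    A = np.array([[1.0, 2.0], [3.0, 1.0]]); B = np.array([[1.0], [1.0]])
    D = np.array([[1.0, 1.0]])
    b = np.array([8.0, 10.0]); g = np.array([3.0])
    c = np.array([1.0, 2.0]);  f = np.array([-1.0])

    model = gp.Model("Pprime")
    x = model.addVars(2, lb=0.0, name="x")                      # x >= 0
    w = model.addVars(1, lb=-GRB.INFINITY, ub=0.0, name="w")    # w <= 0
    model.setObjective(gp.quicksum(c[j]*x[j] for j in range(2))
                       + gp.quicksum(f[j]*w[j] for j in range(1)), GRB.MINIMIZE)
    model.addConstrs((gp.quicksum(A[i, j]*x[j] for j in range(2))
                      + gp.quicksum(B[i, j]*w[j] for j in range(1)) <= b[i] for i in range(2)))
    model.addConstrs((gp.quicksum(D[i, j]*x[j] for j in range(2)) == g[i] for i in range(1)))
    model.optimize()
    if model.Status == GRB.OPTIMAL:
        print(model.ObjVal, [x[j].X for j in range(2)], [w[j].X for j in range(1)])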
Chapter 2
Modeling
“Norms” are very useful as a measure of the “size” of a vector. In some applica-
tions, we are interested in making the “size” small. There are many different “norms”
(for example, the Euclidean norm), but two are particularly interesting for linear opti-
mization.
For x ∈ Rn, the ∞-norm (or max-norm) of x is defined as ‖x‖∞ := max{|xj| : j = 1, 2, . . . , n}. The problem of finding an ∞-norm minimizing solution of the system of equations Ax = b can be formulated as the linear-optimization problem:

  min  t
       t − xi ≥ 0 ,  i = 1, 2, . . . , n ;
       t + xi ≥ 0 ,  i = 1, 2, . . . , n ;
       Ax = b ,

where t is a single auxiliary variable.
Now, we would like to formulate the problem of finding a 1-norm minimizing solution of the system of equations Ax = b. This is quite easy, via the linear-optimization problem:

  min  ∑_{j=1}^{n} tj
       tj − xj ≥ 0 ,  j = 1, 2, . . . , n ;
       tj + xj ≥ 0 ,  j = 1, 2, . . . , n ;
       Ax = b ,

where t ∈ Rn is a vector of n auxiliary variables. Notice how the minimization “pressure” ensures that an optimal solution (x̂, t̂) has t̂j = |x̂j|, for j = 1, 2, . . . , n (again, this would not work for maximization!), and so we will have ∑_{j=1}^{n} t̂j = ‖x̂‖₁.
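A minimal gurobipy sketch of this 1-norm formulation, on made-up data (the model and variable names are mine, and a working Gurobi installation is assumed):

    import numpy as np
    import gurobipy as gp
    from gurobipy import GRB

    A = np.array([[1.0, 1.0, 0.0, 2.0],
                  [0.0, 1.0, 1.0, 1.0]])
    b = np.array([1.0, 2.0])
    m, n = A.shape

    model = gp.Model("one_norm")
    x = model.addVars(n, lb=-GRB.INFINITY, name="x")    # x is free
    t = model.addVars(n, lb=0.0, name="t")              # auxiliary variables t_j >= |x_j|
    model.setObjective(gp.quicksum(t[j] for j in range(n)), GRB.MINIMIZE)
    model.addConstrs((t[j] - x[j] >= 0 for j in range(n)))
    model.addConstrs((t[j] + x[j] >= 0 for j in range(n)))
    model.addConstrs((gp.quicksum(A[i, j]*x[j] for j in range(n)) == b[i] for i in range(m)))
    model.optimize()
    print([x[j].X for j in range(n)], model.ObjVal)     # a solution of Ax=b of least 1-norm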
A finite network G is described by a finite set of nodes N and a finite set A of arcs. Each arc e has two key attributes, namely its tail t(e) ∈ N and its head h(e) ∈ N. We think of K ≥ 1 commodities as being allowed to “flow” along each arc, from its tail to its head. Indeed, we have “flow” variables x^k_e, for e ∈ A and k = 1, 2, . . . , K. Each arc e has a flow upper bound u_e and a per-unit cost c^k_e for commodity k, and each node v has a net supply b^k_v of commodity k, for v ∈ N. A flow is conservative if the net flow out of node v, minus the net flow into node v, is equal to the net supply at node v, for all nodes v ∈ N, and all commodities k = 1, 2, . . . , K.
The multi-commodity min-cost network-flow problem is to find a minimum-cost conservative flow that is non-negative and respects the flow upper bounds on the arcs. We can formulate this as follows:

  min  ∑_{k=1}^{K} ∑_{e∈A} c^k_e x^k_e
       ∑_{e∈A : t(e)=v} x^k_e − ∑_{e∈A : h(e)=v} x^k_e = b^k_v ,  ∀ v ∈ N , k = 1, 2, . . . , K ;
       ∑_{k=1}^{K} x^k_e ≤ u_e ,  ∀ e ∈ A ;
       x^k_e ≥ 0 ,  ∀ e ∈ A , k = 1, 2, . . . , K .
This objective name is accessible to us in Python and is not used internally by Gurobi. Finally, we define our resource constraints and attach them to model via:

    for i in M:
        model.addConstr(sum(A[i,j]*x[j] for j in N) <= b[i], name='r'+str(i))

Note that we have created names, ri, for i ∈ M, for the constraints inside Gurobi. This enables us to easily retrieve constraint “attributes” from Gurobi. A Jupyter notebook giving the full Python/Gurobi implementation is Production.ipynb (see Appendix A.3).
Next, we consider the Network Flow problem of Section 2.3. The model is specified as:

    x = model.addVars(ArcsCrossCommods)
    model.setObjective(sum(sum(CapacityCosts[i,j][k]*x[(i,j),k] for (i,j) in Arcs)
                           for k in Commods), GRB.MINIMIZE)
    model.addConstrs(sum(x[(i,j),k] for k in Commods) <= CapacityCosts[i,j][0]
                     for (i,j) in Arcs)
    model.addConstrs(
        (sum(x[(i, j),k] for j in Nodes if (i, j) in Arcs)
         - sum(x[(j, i),k] for j in Nodes if (j, i) in Arcs)
         == Supplies[i][k-1] for i in Nodes for k in Commods))
Example 2.1
Figure 2.1 depicts an 8-node network for a K = 2 commodity example. Each arc e is labeled [u_e, c^1_e, c^2_e]. Figures 2.2 and 2.3 depict the node supply data and the optimal solutions: Figure 2.2 corresponds to commodity 1 and Figure 2.3 corresponds to commodity 2. Node v is labeled v : b^k_v and arc e is labeled with the optimal value of x^k_e.
2.5 Exercises
Exercise 2.1 (Dual in Python/Gurobi)
Without changing the data specification in Production.ipynb (see Appendix A.3), use Python/Gurobi
to solve the dual of the Production Problem example, as described in Section 2.1. You will need
to modify the model in Production.ipynb appropriately.
We are given a set of tasks, numbered 1, 2, . . . , n that should be completed in the minimum
amount of time. For convenience, task 0 is a “start task” and task n+1 is an “end task”. Each task,
except for the start and end task, has a known duration di . For convenience, let d0 := 0 . Any
number of tasks can be carried out simultaneously, except that there are precedences between
tasks. Specifically, Ψi is the set of tasks that must be completed before task i can be started. Let
t0 := 0 , and for all other tasks i , let ti be a decision variable representing its start time.
Formulate the problem, mathematically, as a linear-optimization problem. The objective
should be to minimize the start time tn+1 of the end task. Then, model the problem with
Python/Gurobi, make up some data, try some computations, and report on your results.
dollar invested in investment vehicle #9 at the start of time period 4 yields 0.1 dollars of income for time periods 4–7, with the original dollar returned in time period 7, and no returns at all in the remaining time periods 8–12.
Note that at the start of time period t , the cash available is the external inflow of pt , plus
cash accumulated from all investment vehicles in prior periods that was not reinvested. Finally,
assume that cash held over in any time period earns interest of q percent.
Formulate the problem, mathematically, as a linear-optimization problem. Then, model the
problem with Python/Gurobi, make up some data, try some computations, and report on your
results.
Chapter 3

Algebra Versus Geometry
  min  c′x
       Ax = b ;        (P)
        x ≥ 0 .
A basic partition of the indices {1, 2, . . . , n} is a partition into an ordered list β = (β1, β2, . . . , βm) of basic indices and an ordered list η = (η1, η2, . . . , ηn−m) of non-basic indices, chosen so that Aβ := [Aβ1, Aβ2, . . . , Aβm] is an invertible m × m matrix. The connection with the standard “linear-algebra basis” is that the columns of Aβ form a “linear-algebra basis” for Rm. But for us, “basis” almost always refers to β.
We associate a basic solution x̄ ∈ Rn with the basic partition via:

  x̄η := 0 ∈ Rn−m ;
  x̄β := Aβ⁻¹b ∈ Rm .

We can observe that x̄β = Aβ⁻¹b is equivalent to Aβx̄β = b, which is the unique way to write b as a linear combination of the columns of Aβ. Of course this makes sense, because the columns of Aβ form a “linear-algebra basis” for Rm.
Note that every basic solution x̄ satisfies Ax̄ = b, because

  Ax̄ = ∑_{j=1}^{n} Aj x̄j = ∑_{j∈β} Aj x̄j + ∑_{j∈η} Aj x̄j = Aβ x̄β + Aη x̄η = Aβ Aβ⁻¹b + Aη 0 = b .

A basic solution x̄ is a basic feasible solution if it is feasible for (P). That is, if x̄β = Aβ⁻¹b ≥ 0.
It is instructive to have a geometry for understanding the algebra of basic solutions, but for standard-form problems, it is hard to draw something interesting in two dimensions. Instead, we observe that the feasible region of (P) is the solution set, in Rn, of

  xβ + Aβ⁻¹Aη xη = Aβ⁻¹b ;
  xβ ≥ 0 , xη ≥ 0 .
Example 3.1
For this system, it is convenient to draw pictures when n − m = 2 , for example n = 6 and
m = 4 . In such a picture, the basic solution x̄ ∈ Rn maps to the origin x̄η = 0 ∈ Rn−m , but
other basic solutions (feasible and not) will map to other points.
Suppose that we have the data:

  A := [  1    2   1  0  0  0
          3    1   0  1  0  0
         3/2  3/2  0  0  1  0
          0    1   0  0  0  1 ] ,

  b := (7, 9, 6, 33/10)′ ,
  β := (β0, β1, β2, β3) = (0, 1, 3, 5) ,
  η := (η0, η1) = (2, 4) .
Then

  Aβ = [Aβ0, Aβ1, Aβ2, Aβ3] = [  1    2   0  0
                                 3    1   1  0
                                3/2  3/2  0  0
                                 0    1   0  1 ] ,

  Aη = [Aη0, Aη1] = [ 1  0
                      0  0
                      0  1
                      0  0 ] ,

  xβ = (x0, x1, x3, x5)′ ,
  xη = (x2, x4)′ .

We can calculate

  Aβ⁻¹Aη = [ −1    4/3
              1   −2/3
              2  −10/3
             −1    2/3 ] ,

  Aβ⁻¹b = (1, 3, 3, 3/10)′ ,

and then we have plotted this in Figure 3.1. The plot has xη0 = x2 as the abscissa, and xη1 = x4 as the ordinate.
In the plot, besides the non-negativity of the variables x2 and x4, the four inequalities of Aβ⁻¹Aη xη ≤ Aβ⁻¹b are labeled with their slack variables — these are the basic variables x0, x1, x3, x5. The correct matching of the basic variables to the inequalities of Aβ⁻¹Aη xη ≤ Aβ⁻¹b is simply achieved by seeing that the i-th inequality has slack variable xβi.
The feasible region is colored cyan, while basic feasible solutions project to green points and basic infeasible solutions project to red points. We can see that the basic solution associated with the current basis is feasible, because the origin (corresponding to the non-basic variables being set to 0) is feasible.
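The computations of Example 3.1 can be reproduced with a few lines of numpy (this is only an illustration, not one of the book's notebooks; the 0-based index lists follow the text):

    import numpy as np

    A = np.array([[1.0, 2.0, 1.0, 0.0, 0.0, 0.0],
                  [3.0, 1.0, 0.0, 1.0, 0.0, 0.0],
                  [1.5, 1.5, 0.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0, 0.0, 1.0]])
    b = np.array([7.0, 9.0, 6.0, 3.3])
    beta = [0, 1, 3, 5]
    eta = [2, 4]
    A_beta = A[:, beta]
    A_eta = A[:, eta]
    print(np.linalg.solve(A_beta, A_eta))   # A_beta^{-1} A_eta, as displayed above
    print(np.linalg.solve(A_beta, b))       # x_bar_beta = (1, 3, 3, 0.3)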
A set S ⊂ Rn is a convex set if it contains the entire line segment between every pair of
points in S . That is,
λx1 + (1 − λ)x2 ∈ S , whenever x1 , x2 ∈ S and 0 < λ < 1 .
It is simple to check that the feasible region of every linear-optimization problem is a convex set
— do it!
For a convex set S ⊂ Rn , a point x̂ ∈ S is an extreme point of S if it is not on the interior of
any line segment wholly contained in S . That is, if we cannot write
  x̂ = λx1 + (1 − λ)x2 , with x1 ≠ x2 ∈ S and 0 < λ < 1 .
Theorem 3.2
Every basic feasible solution of standard-form (P) is an extreme point of its feasible region.
Proof. Suppose that x̄ is the basic feasible solution associated with the basic partition β, η; that is,

  x̄η := 0 ∈ Rn−m ;
  x̄β := Aβ⁻¹b ∈ Rm .
Figure 3.1: Feasible region projected into the space of non-basic variables
If

  x̄ = λx1 + (1 − λ)x2 , with x1 and x2 feasible for (P) and 0 < λ < 1 ,

then 0 = x̄η = λx1η + (1 − λ)x2η and 0 < λ < 1 implies that x1η = x2η = 0. But then Aβ xiβ = b implies that xiβ = Aβ⁻¹b = x̄β, for i = 1, 2. Hence x̄ = x1 = x2 (but we needed x1 ≠ x2), and so x̄ cannot be on the interior of a line segment between distinct feasible points; that is, x̄ is an extreme point of the feasible region of (P). ⊓⊔
Theorem 3.3
Every extreme point of the feasible region of standard-form (P) is a basic solution.
Proof. Suppose that x̂ is an extreme point of the feasible region of standard-form (P). We let

  ρ := {j ∈ {1, 2, . . . , n} : x̂j > 0} .

That is, ρ is the list of indices for the positive variables of x̂. Also, we let

  ζ := {j ∈ {1, 2, . . . , n} : x̂j = 0} .

That is, ζ is the list of indices for the zero variables of x̂. Together, ρ and ζ partition {1, 2, . . . , n}.
Our goal is to construct a basic partition, β, η, so that the associated basic solution is precisely x̂.
The first thing that we will establish is that the columns of Aρ are linearly independent. We will do that by contradiction. Suppose that they are linearly dependent. That is, there exists zρ ∈ R|ρ|, different from the zero vector, such that Aρ zρ = 0. Next we extend zρ to a vector z ∈ Rn, by letting zζ = 0. Clearly Az = Aρ zρ + Aζ zζ = 0 + Aζ 0 = 0; that is, z is in the null space of A. Next, let

  x1 := x̂ + εz
and
  x2 := x̂ − εz ,

with ε chosen to be sufficiently small so that x1 and x2 are non-negative. Because z is only non-zero on the ρ coordinates (where x̂ is positive), we can choose an appropriate ε. Notice that x1 ≠ x2, because zρ and hence z is not the zero vector. Now, it is easy to verify that Ax1 = A(x̂ + εz) = Ax̂ + εAz = b + 0 = b and similarly Ax2 = b. Therefore, x1 and x2 are feasible solutions of (P). Also, (1/2)x1 + (1/2)x2 = (1/2)(x̂ + εz) + (1/2)(x̂ − εz) = x̂. So x̂ is on the interior (actually it is the midpoint) of the line segment between x1 and x2, in contradiction to x̂ being an extreme point of the feasible region of (P). Therefore, it must be that the columns of Aρ are linearly independent.
In particular, we can conclude that |ρ| ≤ m, since we assume that A ∈ Rm×n has full row rank. If |ρ| < m, we choose (via Theorem 1.2) m − |ρ| columns of Aζ to append to Aρ in such a way as to form a matrix Aβ having m linearly-independent columns — we note that such a choice is not unique. As usual, we let η be a list of the n − m indices not in β. By definition, the associated basic solution x̄ has x̄η = 0, and we observe that it is the unique solution to the system of equations Ax = b having xη = 0. But x̂η = 0 because x̂η is a subvector of x̂ζ = 0. Therefore, x̂ = x̄. That is, x̂ is a basic solution of (P). ⊓⊔
Taken together, these last two results give us the main result of this section.
Corollary 3.4
For a feasible point x̂ of standard-form (P), x̂ is extreme if and only if x̂ is a basic solution.
For a point x̂ in a convex set S ⊂ Rn, a feasible direction relative to the feasible solution x̂ is a ẑ ∈ Rn such that x̂ + εẑ ∈ S, for sufficiently small positive ε ∈ R. Focusing now on the standard-form problem (P), for ẑ to be a feasible direction relative to the feasible solution x̂, we need A(x̂ + εẑ) = b. But A(x̂ + εẑ) = Ax̂ + εAẑ = b + εAẑ, so we need Aẑ = 0; that is, ẑ must be in the null space of A.
For a basic partition β, η and a choice of non-basic index ηj, the associated basic direction z̄ ∈ Rn is defined by

  z̄η := ej ∈ Rn−m ;
  z̄β := −Aβ⁻¹Aηj ∈ Rm .

So

  A(x̂ + εz̄) = b ,

for every feasible x̂ and every ε ∈ R. Moving a positive amount in the direction z̄ corresponds to increasing the value of xηj, holding the values of all other non-basic variables constant, and making appropriate changes in the basic variables so as to maintain satisfaction of the equation system Ax = b.
There is a related point worth making. We have just seen that for a given basic partition β, η, each of the n − m basic directions is in the null space of A — there is one such basic direction for each of the n − m choices of ηj. It is very easy to check that these basic directions are linearly independent — just observe that they are columns of the n × (n − m) matrix

  [      I
    −Aβ⁻¹Aη ] .

Because the dimension of the null space of A is n − m, these n − m basic directions form a basis for the null space of A.
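Continuing with the data of Example 3.1, one can check numerically that each basic direction lies in the null space of A (a small illustration only; the construction of the direction vector mirrors the definition above):

    import numpy as np

    A = np.array([[1.0, 2.0, 1.0, 0.0, 0.0, 0.0],
                  [3.0, 1.0, 0.0, 1.0, 0.0, 0.0],
                  [1.5, 1.5, 0.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0, 0.0, 1.0]])
    beta = [0, 1, 3, 5]
    eta = [2, 4]
    n = A.shape[1]
    Abar_eta = np.linalg.solve(A[:, beta], A[:, eta])    # A_beta^{-1} A_eta
    for j in range(len(eta)):
        z_bar = np.zeros(n)
        z_bar[eta[j]] = 1.0                              # z_bar_eta = e_j
        z_bar[beta] = -Abar_eta[:, j]                    # z_bar_beta = -A_beta^{-1} A_{eta_j}
        print(np.allclose(A @ z_bar, 0))                 # True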
Now, we focus on the basic feasible solution x̄ determined by the basic partition β, η. The basic direction z̄ is a basic feasible direction relative to the basic feasible solution x̄ if x̄ + εz̄ is feasible, for sufficiently small positive ε ∈ R. That is, if

  Aβ⁻¹b − εAβ⁻¹Aηj ≥ 0 ,

equivalently,

  x̄β − εĀηj ≥ 0 ,

equivalently,

  x̄βi − εāi,ηj ≥ 0 ,

for i = 1, 2, . . . , m. If āi,ηj ≤ 0, for some i, then this imposes no restriction at all on ε. So, the only condition that we need for z̄ to be a basic feasible direction relative to the basic feasible solution x̄ is that there exists ε > 0 satisfying

  ε ≤ x̄βi / āi,ηj , for all i such that āi,ηj > 0 .
Theorem 3.5
For a standard-form problem (P), suppose that x̄ is a basic feasible solution relative to the basic partition β, η. Consider choosing a non-basic index ηj. Then the associated basic direction z̄ is a feasible direction relative to x̄ if and only if x̄βi > 0 for all i such that āi,ηj > 0.

If the basic direction z̄ is a ray, then we call it a basic feasible ray. We have already seen that Az̄ = 0. Furthermore, z̄ ≥ 0 if and only if Āηj := Aβ⁻¹Aηj ≤ 0.
Therefore, we have the following result:
Theorem 3.6
The basic direction z̄ is a ray of the feasible region of (P) if and only if Āηj ≤ 0 .
Recall, further, that z̄ is a basic feasible direction relative to the basic feasible solution x̄ if x̄ + εz̄ is feasible, for sufficiently small positive ε ∈ R. Therefore, if z̄ is a basic feasible ray, relative to the basic partition β, η, and x̄ is the basic feasible solution relative to the same basic partition, then z̄ is a basic feasible direction relative to x̄.
A ray ẑ of a convex set S is an extreme ray if we cannot write
  ẑ = z1 + z2 , with z1 ≠ μz2 being rays of S and μ ≠ 0 .
Similarly to the correspondence between basic feasible solutions and extreme points for standard-
form problems, we have the following two results.
Theorem 3.7
Every basic feasible ray of standard-form (P) is an extreme ray of its feasible region.
Theorem 3.8
Every extreme ray of the feasible region of standard-form (P) is a positive multiple of a basic
feasible ray.
3.4 Exercises
Exercise 3.1 (Illustrate algebraic and geometric concepts)
Using the Jupyter notebook pivot_tools.ipynb (see Appendix A.6), make a small example,
say with six variables and four equations, to fully illustrate the concepts in this chapter. The
Jupyter notebook pivot_example.ipynb (see Appendix A.5) shows how to start to work with
pivot_tools.ipynb.
Exercise 3.3 (Extreme rays are positive multiples of basic feasible rays)
If you are feeling very ambitious, prove Theorem 3.8.
Exercise 3.4 (Dual basic direction — do this if you will be doing Exercise 4.2)
Let β, η be a basic partition for our standard-form problem (P). As you will see on the first page of the next chapter, we can associate with the basis β a dual solution

  ȳ′ := c′β Aβ⁻¹

of

  max  y′b
       y′A ≤ c′ .        (D)

It is easy to see that ȳ satisfies the constraints y′Aβ ≤ c′β (of (D)) with equality; that is, the dual constraints indexed from β are “active”.
Let us assume that ȳ is feasible for (D). Now, let βℓ be a basic index, and let w̄ := Hℓ· be row ℓ of H := Aβ⁻¹. Consider ỹ′ := ȳ′ − λw̄, and explain (with algebraic justification) what is happening to the activity of each constraint of (D), as λ increases. HINT: Think about the cases of (i) i = ℓ, (ii) i ∈ β, i ≠ ℓ, and (iii) j ∈ η.
Chapter 4

The Simplex Algorithm
With a basis β, we associate the dual solution

  ȳ′ := c′β Aβ⁻¹ ,

and the reduced costs c̄′η := c′η − c′β Aβ⁻¹Aη = c′η − ȳ′Aη .
Lemma 4.1
If β is a basis, then the primal basic solution x̄ (feasible or not) and the dual solution ȳ (feasible
or not) associated with β have equal objective value.
Proof. The objective value of x̄ is c′x̄ = c′β x̄β + c′η x̄η = c′β(Aβ⁻¹b) + c′η 0 = c′β Aβ⁻¹b. The objective value of ȳ is ȳ′b = (c′β Aβ⁻¹)b = c′β Aβ⁻¹b. ⊓⊔
Lemma 4.2
The dual solution of (D) associated with basis β is feasible for (D) if
c̄η ≥ 0 .
Proof. Because ȳ′ := c′β Aβ⁻¹, we have ȳ′Aβ = c′β. And c̄η ≥ 0 means precisely that

  ȳ′Aη ≤ c′η .

So we have

  ȳ′[Aβ, Aη] ≤ (c′β, c′η) ,

or, equivalently,

  ȳ′A ≤ c′ .

Hence ȳ is feasible for (D). ⊓⊔
Theorem 4.3
If the basis β is feasible for (P) (that is, x̄β = Aβ⁻¹b ≥ 0) and c̄η ≥ 0, then the associated basic solution x̄ and dual solution ȳ are optimal for (P) and (D), respectively.

Proof. We have already observed that c′x̄ = ȳ′b for the pair of primal and dual solutions associated with the basis β. If these solutions x̄ and ȳ are feasible for (P) and (D), respectively, then by weak duality these solutions are optimal. ⊓⊔
We can also take (P) and transform it into an equivalent form that is quite revealing. Clearly, (P) is equivalent to

  min  c′β xβ + c′η xη
       Aβ xβ + Aη xη = b ;
       xβ ≥ 0 , xη ≥ 0 .

Next, multiplying the equations on the left by Aβ⁻¹, we see that they are equivalent to

  xβ + Aβ⁻¹Aη xη = Aβ⁻¹b .

Using this equation to substitute for xβ in the objective function, we are led to the linear objective function

  c′β Aβ⁻¹b + min (c′η − c′β Aβ⁻¹Aη) xη = c′β Aβ⁻¹b + min c̄′η xη ,

which is equivalent to the original one on the set of points satisfying Ax = b. In this equivalent form, it is now solely expressed in terms of xη. Now, if c̄η ≥ 0, the best we could hope for in minimizing is to set xη = 0. But the unique solution having xη = 0 is the basic feasible solution x̄. So x̄ is optimal.
Example 4.4
This is a continuation of Example 3.1. In Figure 4.1, we have depicted the sufficient optimality criterion, in the space of a particular choice of non-basic variables — not the choice previously depicted. Specifically, we consider the equivalent problem

  min  c̄′η xη
       Āη xη ≤ x̄β ;
       xη ≥ 0 .

This plot demonstrates the optimality of β := (2, 5, 3, 4) (η := (0, 1)). The basic directions available from the basic feasible solution x̄ appear as standard unit vectors in the space of the non-basic variables. The solution x̄ is optimal because c̄η ≥ 0; we can also think of this as c̄η having a non-negative dot product with each of the standard unit vectors, hence neither direction is improving.
Improving direction. Often it is helpful to directly refer to individual elements of the vector c̄η; namely,

  c̄ηj = cηj − c′β Aβ⁻¹Aηj = cηj − c′β Āηj ,  for j = 1, 2, . . . , n − m .

If the sufficient optimality criterion is not satisfied, then we choose an ηj such that c̄ηj is negative, and we consider solutions that increase the value of xηj up from x̄ηj = 0, changing the values of the basic variables to ensure that we still satisfy the equations Ax = b, while holding the other non-basic variables at zero.
Operationally, we take the basic direction z̄ ∈ Rn defined by

  z̄η := ej ∈ Rn−m ;
  z̄β := −Aβ⁻¹Aηj = −Āηj ∈ Rm ,

and we consider solutions of the form x̄ + λz̄, with λ > 0. The motivation is based on the observations that
• c′(x̄ + λz̄) − c′x̄ = λc′z̄ = λc̄ηj < 0 ;
• A(x̄ + λz̄) = Ax̄ + λAz̄ = b + λ0 = b .
That is, the objective function changes at the rate of c̄ηj, and we maintain satisfaction of the Ax = b constraints.
Maximum step — the ratio test and a sufficient unboundedness criterion. By our choice of direction z̄, all variables that are non-basic with respect to the current choice of basis remain non-negative (xηj increases from 0 and the others remain at 0). So the only thing that restricts our movement in the direction z̄ from x̄ is that we have to make sure that the current basic variables remain non-negative. This is easy to take care of. We just make sure that we choose λ > 0 so that

  x̄β + λz̄β = x̄β − λĀηj ≥ 0 .

Notice that for i such that āi,ηj ≤ 0, there is no limit on how large λ can be. In fact, it can well happen that Āηj ≤ 0. In this case, x̄ + λz̄ is feasible for all λ > 0 and c′(x̄ + λz̄) → −∞ as λ → +∞, so the problem is unbounded.
Otherwise, to ensure that x̄ + λz̄ ≥ 0, we just enforce

  λ ≤ x̄βi / āi,ηj ,  for i such that āi,ηj > 0 .

Finally, to get the best improvement in the direction z̄ from x̄, we let λ equal

  λ̄ := min_{i : āi,ηj > 0} x̄βi / āi,ηj .
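The ratio test is a one-liner in Python. Here is a small helper sketch (the function name and tolerance are mine, and this is not the book's pivot_tools.ipynb), checked against the numbers of Example 3.1, where x̄β = (1, 3, 3, 3/10) and the column Āη0 = (−1, 1, 2, −1):

    import numpy as np

    def ratio_test(x_bar_beta, Abar_etaj):
        pos = np.where(Abar_etaj > 1e-10)[0]
        if pos.size == 0:
            return np.inf, None            # sufficient unboundedness criterion
        ratios = x_bar_beta[pos] / Abar_etaj[pos]
        i_star = pos[np.argmin(ratios)]
        return ratios.min(), i_star

    print(ratio_test(np.array([1.0, 3.0, 3.0, 0.3]), np.array([-1.0, 1.0, 2.0, -1.0])))
    # (1.5, 2): lambda_bar = 3/2, attained by the third basic variable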
Non-degeneracy. There is a significant issue in even carrying out one iteration of this algo-
rithm. If x̄βi = 0 for some i such that āi,ηj > 0 , then λ̄ = 0 , and we are not able to make any
change from x̄ in the direction z̄ . Just for now, we will simply assume away this problem, using
the following hypothesis that every basic variable of every basic feasible solution is positive.
The problem (P) satisfies the non-degeneracy hypothesis if for every feasible basis β, we have
x̄βi > 0 for i = 1, 2, . . . , m . Under the non-degeneracy hypothesis, λ̄ > 0 .
Another basic feasible solution. By our construction, the new solution x̄ + λ̄z̄ is feasible
and has lesser objective value than that of x̄ . We can repeat the construction as long as the new
solution is basic. If it is basic, there is a natural guess as to what an appropriate basis may be. The
variable xηj , formerly non-basic at value 0 has increased to λ̄ , so clearly it must become basic.
Also, at least one variable that was basic now has value 0 . In fact, under our non-degeneracy
hypothesis, once we establish that the new solution is basic, we observe that exactly one variable
that was basic now has value 0. Let
  i∗ := argmin_{i : āi,ηj > 0} x̄βi / āi,ηj .
If there is more than one i that achieves the minimum (which can happen if we do not assume
the non-degeneracy hypothesis), then we will see that the choice of i∗ can be any of these. We
can see that xβi∗ has value 0 in x̄ + λ̄z̄ . So it is natural to hope we can replace xβi∗ as a basic
variable with xηj .
Let
β̃ := (β1 , β2 , . . . , βi∗ −1 , ηj , βi∗ +1 , . . . , βm )
and
η̃ := (η1 , η2 , . . . , ηj−1 , βi∗ , ηj+1 , . . . , ηn−m ) .
Lemma 4.5
Aβ̃ is invertible.
Lemma 4.6
The unique solution of Ax = b having xη̃ = 0 is x̄ + λ̄z̄ .
Proof. (x̄ + λ̄z̄)j = 0, for j ∈ η̃. Moreover, x̄ + λ̄z̄ is the unique solution to Ax = b having xη̃ = 0, because Aβ̃ is invertible. ⊓⊔
Putting these two lemmata together, we have the following key result.
Theorem 4.7
x̄ + λ̄z̄ is a basic solution; in fact, it is the basic solution determined by the basic partition β̃, η̃ .
Passing from the partition β, η to the partition β̃, η̃ is commonly referred to as a pivot.
The Worry-Free Simplex Algorithm, applied to the standard-form problem

  min  c′x
       Ax = b ;        (P)
        x ≥ 0 ,

proceeds as follows:

0. Start with a basic partition β, η for which the associated basic solution x̄ is feasible for (P).
1. If c̄η ≥ 0 (the sufficient optimality criterion), then STOP: the associated x̄ and ȳ are optimal. Otherwise, continue.
2. Choose a non-basic index ηj with c̄ηj < 0.
3. If Āηj ≤ 0 (the sufficient unboundedness criterion), then STOP: (P) is unbounded. Otherwise, continue.
4. Carry out the ratio test: let i∗ := argmin_{i : āi,ηj > 0} x̄βi / āi,ηj , and pivot to the basic partition β̃, η̃ obtained by exchanging ηj and βi∗ .
5. GOTO 1.
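For concreteness, here is a compact numpy sketch of the algorithm as stated above. It is not the book's pivot_tools.ipynb: it assumes a starting feasible basis and non-degeneracy, it recomputes an explicit inverse at every iteration purely for exposition (something the Preface warns against for serious computation), and the data at the bottom are made up.

    import numpy as np

    def worry_free_simplex(A, b, c, beta):
        m, n = A.shape
        beta = list(beta)
        while True:
            eta = [j for j in range(n) if j not in beta]
            A_beta_inv = np.linalg.inv(A[:, beta])        # for exposition only
            x_bar_beta = A_beta_inv @ b
            y_bar = c[beta] @ A_beta_inv                  # dual solution y_bar' = c_beta' A_beta^{-1}
            c_bar_eta = c[eta] - y_bar @ A[:, eta]        # reduced costs
            if np.all(c_bar_eta >= -1e-9):                # Step 1: sufficient optimality criterion
                x_bar = np.zeros(n); x_bar[beta] = x_bar_beta
                return 'optimal', x_bar, y_bar
            j = eta[int(np.argmin(c_bar_eta))]            # Step 2: entering index eta_j
            Abar_j = A_beta_inv @ A[:, j]
            if np.all(Abar_j <= 1e-9):                    # Step 3: sufficient unboundedness criterion
                return 'unbounded', None, None
            pos = np.where(Abar_j > 1e-9)[0]              # Step 4: ratio test
            i_star = pos[np.argmin(x_bar_beta[pos] / Abar_j[pos])]
            beta[i_star] = j                              # pivot; Step 5: repeat

    A = np.array([[1.0, 1.0, 1.0, 0.0], [1.0, 3.0, 0.0, 1.0]])
    b = np.array([4.0, 6.0])
    c = np.array([-1.0, -2.0, 0.0, 0.0])
    print(worry_free_simplex(A, b, c, beta=[2, 3]))       # starts from the slack basis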
Example 4.8
This is a continuation of Example 3.1 / Example 4.4. In Figure 4.2, we have depicted the solution
one step after the initial solution depicted in Figure 3.1. The result of the next pivot is depicted
in Figure 4.3. Finally, in one more pivot, we reach the optimum depicted in Figure 4.1.
Theorem 4.9
Under the non-degeneracy hypothesis, the Worry-Free Simplex Algorithm terminates cor-
rectly.
Proof. Under the non-degeneracy hypothesis, every time we visit Step 1, we have a primal feasi-
ble solution with a decreased objective value. This implies that we never revisit a basic feasible
partition. But there are only a finite number of basic feasible partitions, so we must terminate,
after a finite number of pivots. But there are only two places where the algorithm terminates;
either in Step 1 where we correctly identify that x̄ and ȳ are optimal by our sufficient optimality
criterion, or in Step 3 because of our sufficient unboundedness criterion. ⊓⊔
Remark 4.10
There are two very significant issues remaining:
• How do we handle degeneracy? (see Section 4.3).
• How do we initialize the algorithm in Step 0? (see Section 4.4).
4.3 Anticycling

Consider the following algebraically-perturbed version of (P):

  min  c′x
       Ax = bε(B) ;        (Pε(B))
        x ≥ 0 ,

where
• bε(B) := b + Bε⃗ , and ε⃗ := (ε, ε², . . . , εᵐ)′ (these are exponents, not superscripts);
• the scalar ε is an arbitrarily small indeterminant; ε is not given a numerical value; it is simply considered to be a quantity that is positive, yet smaller than any positive real number;
• 0 denotes a vector in which all entries are the zero polynomial (in ε);
• the variables xj are polynomials in ε with real coefficients;
• the ordering of polynomials used to interpret the inequality ≥ is described next.
The ordering is actually quite simple, but for the sake of precision, we describe it formally.
An ordered ring. The set of polynomials in ε, with real coefficients, forms what is known in mathematics as an “ordered ring”. The ordering <ε is simple to describe. Let p(ε) := ∑_{j=0}^{m} pj εʲ and q(ε) := ∑_{j=0}^{m} qj εʲ. Then p(ε) <ε q(ε) if the least j for which pj ≠ qj has pj < qj. Another way to think about the ordering <ε is that p(ε) <ε q(ε) if p(ε) < q(ε) when ε is considered to be an arbitrarily small positive number. Notice how the ordering <ε is in a certain sense a more refined ordering than <. That is, if p(0) < q(0), then p(ε) <ε q(ε), but we can have p(0) = q(0) without having p(ε) =ε q(ε). Finally, we note that the zero polynomial “0” (all coefficients equal to 0) is the zero of this ordered ring, so we can speak, for example, about polynomials that are positive with respect to the ordering <ε. Concretely, p(ε) ≠ 0 is positive if the least i for which pi ≠ 0 satisfies pi > 0. Emphasizing that <ε is a more refined ordering than <, we see that p(ε) ≥ε 0 implies that p(0) = p0 ≥ 0.
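The ordering <ε is just lexicographic comparison of coefficient vectors. Here is a tiny sketch (the function name and the tuple representation are mine; both arguments are assumed to have the same length):

    def eps_less(p, q):
        """Return True if p(eps) <_eps q(eps), for coefficient tuples (p_0, ..., p_m)."""
        for pj, qj in zip(p, q):
            if pj != qj:
                return pj < qj       # the least index where they differ decides
        return False                 # equal polynomials

    p = (0.0, -1.0, 5.0)             # 0 - eps + 5 eps^2
    q = (0.0,  0.0, 1.0)             # eps^2
    print(eps_less(p, q), eps_less(q, p))   # True False: p <_eps q although p(0) = q(0)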
For an arbitrary basis β, the associated basic solution x̄ε has x̄εβ := Aβ⁻¹(b + Bε⃗) = x̄β + Aβ⁻¹Bε⃗. It is evident that x̄εβi is a polynomial, of degree at most m, in ε, for each i = 1, . . . , m. Because the ordering <ε refines the ordering <, we have that x̄εβ ≥ε 0 implies that x̄β ≥ 0. That is, any basic feasible partition for (Pε(B)) is a basic feasible partition for (P). This implies that applying the Worry-Free Simplex Algorithm to (Pε(B)), using the ratio test to enforce feasibility of x̄ε in (Pε(B)) at each iteration, implies that each associated x̄β is feasible for (P). That is, the choice of a leaving variable dictated by the ratio test when we work with (Pε(B)) is valid if we instead do the ratio test working with (P).
The objective value associated with x̄ε, c′β Aβ⁻¹(b + Bε⃗) = ȳ′b + ȳ′Bε⃗, is a polynomial (of degree at most m) in ε. Therefore, we can order basic solutions for (Pε(B)) using <ε, and that ordering refines the ordering of the objective values of the corresponding basic solutions of (P). This implies that if x̄ε is optimal for (Pε(B)), then the x̄ associated with the same basis is optimal for (P).
Lemma 4.11
The ε-perturbed problem (Pε(B)) satisfies the non-degeneracy hypothesis.
Proof. For an arbitrary basis matrix Aβ, the associated basic solution x̄ε has x̄εβ := Aβ⁻¹(b + Bε⃗) = x̄β + Aβ⁻¹Bε⃗. As we have already pointed out, x̄εβi is a polynomial, of degree at most m, in ε, for each i = 1, . . . , m. Moreover, because B is invertible, the matrix Aβ⁻¹B has no zero row, and so none of these polynomials is the zero polynomial. Therefore, whenever x̄εβi ≥ε 0, we in fact have x̄εβi >ε 0; that is, (Pε(B)) satisfies the non-degeneracy hypothesis. ⊓⊔
Theorem 4.12
Let β⁰ be a basis that is feasible for (P). Then the Worry-Free Simplex Algorithm applied to (Pε(Aβ⁰)), starting from the basis β⁰, correctly demonstrates that (P) is unbounded or finds an optimal basic partition for (P).

Proof. The first important point to notice is that we are choosing the perturbation of the original right-hand side to depend on the choice of a basis that is feasible for (P). Then we observe that x̄εβ⁰ := Aβ⁰⁻¹(b + Aβ⁰ε⃗) = Aβ⁰⁻¹b + ε⃗. Now because x̄ is feasible for (P), we have Aβ⁰⁻¹b ≥ 0. Then, the ordering <ε implies that x̄εβ⁰ = Aβ⁰⁻¹b + ε⃗ ≥ε 0. Therefore, the basis β⁰ is feasible for (Pε(Aβ⁰)), and the Worry-Free Simplex Algorithm can indeed be started for (Pε(Aβ⁰)) on β⁰.
Notice that it is only in Step 4 of the Worry-Free Simplex Algorithm that the computations really depend on whether we are considering (Pε(Aβ⁰)) or (P). The sufficient optimality criterion and the sufficient unboundedness criterion are identical for (Pε(Aβ⁰)) and (P). Because (Pε(Aβ⁰)) satisfies the non-degeneracy hypothesis, the Worry-Free Simplex Algorithm correctly terminates for (Pε(Aβ⁰)). ⊓⊔
(Pivot.mp4)
Figure 4.4: With some .pdf viewers, you can click above to see or download a short
video. Or just see it on YouTube (probably with an ad) by clicking here.
Otherwise, we have some work to do. We define a new non-negative variable xn+1, which we temporarily adjoin as an additional non-basic variable. So our basic indices remain as

  β̃ = (β̃1, β̃2, . . . , β̃m) ,

and our non-basic indices become η̃ = (η̃1, η̃2, . . . , η̃n−m, n + 1).
This variable xn+1 is termed an artificial variable. The column of the constraint matrix associated with xn+1 is defined as An+1 := −Aβ̃1, where 1 ∈ Rm is the all-ones vector. Hence Ān+1 = −1. Finally, we temporarily put aside the objective function from (P) and replace it with one of minimizing the artificial variable xn+1. That is, we consider the so-called phase-one problem

  min  xn+1
       Ax + An+1 xn+1 = b ;        (Φ)
       x , xn+1 ≥ 0 .

With this terminology, the original problem (P) is referred to as the phase-two problem.
It is evident that any feasible solution x̂ of (Φ) with x̂n+1 = 0 is feasible for (P). Moreover,
if the minimum objective value of (Φ) is greater than 0, then we can conclude that (P) has no
feasible solution. So, toward establishing whether or not (P) has a feasible solution, we focus
our attention on (Φ). We will soon see that we can easily find a basic feasible solution of (Φ).
Finding a basic feasible solution of (Φ). Choose i∗ so that x̄β̃i∗ is most negative. Then
we exchange β̃i∗ with η̃n−m+1 = n + 1 . That is, our new basic indices are
  β := (β̃1, β̃2, . . . , β̃i∗−1, n + 1, β̃i∗+1, . . . , β̃m) ,

and our new non-basic indices are

  η := (η̃1, η̃2, . . . , η̃n−m, β̃i∗) .
Lemma 4.13
The basic solution of (Φ) associated with the basic partition β, η is feasible for (Φ).
Proof. This pivot, from β̃, η̃ to β, η, amounts to moving in the basic direction z̄ ∈ Rn+1 defined by

  z̄η̃ := en−m+1 ∈ Rn−m+1 ;
  z̄β̃ := −Aβ̃⁻¹An+1 = 1 ∈ Rm ,

in the amount λ := −x̄β̃i∗ > 0. That is, x̄ + λz̄ is the basic solution associated with the basic partition β, η. Notice how, when we move in the direction z̄, all basic variables increase at exactly the same rate that xn+1 does. So, using this direction to increase xn+1 from 0 to −x̄β̃i∗ > 0 results in all basic variables increasing by exactly −x̄β̃i∗ > 0. By the choice of i∗, this causes all basic variables to become non-negative, and xβ̃i∗ to become 0, whereupon it can leave the basis in exchange for xn+1. ⊓⊔
The end game for (Φ). If (P) is feasible, then at the very last iteration of the Worry-Free
Simplex Algorithm on (Φ), the objective value will drop from a positive number to zero. As
this happens, xn+1 will be eligible to leave the basis, but so may other variables also be eligible.
That is, there could be a tie in the ratio-test of Step 4 of the Worry-Free Simplex Algorithm.
As is the case whenever there is a tie, any of the tying indices can leave the basis — all of the
associated variables are becoming zero simultaneously. For our purposes, it is critical that if
there is a tie, we choose i∗ so that βi∗ = n + 1 ; that is, xn+1 must be selected to become non-
basic. In this way, we not only get a feasible solution to (P), we get a basis for it that does not use
the artificial variable xn+1 . Now, starting from this basis, we can smoothly shift to minimizing
the objective function of (P).
In the presence of degeneracy, we apply the algebraic perturbation to the phase-one problem as well. That is, we consider

  min  xn+1
       Ax + An+1 xn+1 = bε ;        (Φε)
       x , xn+1 ≥ 0 ,

where bε := b + ε⃗, and ε⃗ := (ε, ε², . . . , εᵐ)′. Then we choose i∗ so that x̄εβ̃i∗ is most negative with respect to the ordering <ε, and exchange β̃i∗ with η̃n−m+1 = n + 1 as before. Then, as in Lemma 4.13, the resulting basis is feasible for (Φε).
We do need to manage the final iteration a bit carefully. There are two different ways we
can do this.
“Early arrival”. If (P) has a feasible solution, at some point the value of xn+1 will decrease to a homogeneous polynomial in ε. That is, the constant term will become 0. At this point, although xn+1 may not be eligible to leave the basis for (Φε), it will be eligible to leave for (P). So, at this point we let xn+1 leave the basis, and we terminate the solution process for (Φε), having found a feasible basis for (P). In fact, we have just constructively proved the following result.

Theorem 4.14
If standard-form (P) has a feasible solution, then it has a basic feasible solution.

Note that because xn+1 may not have been eligible to leave the basis for (Φε) when we apply the “early arrival” idea, the resulting basis may not be feasible for (Pε). So we will have to re-perturb (P).
“Be patient”. Perhaps a more elegant way to handle the situation is to fully solve (Φε). In doing so, if (P) has a feasible solution, then the minimum objective value of (Φε) will be 0 (i.e., the zero polynomial), and xn+1 will necessarily be non-basic. That is because, at every iteration, every basic variable in (Φε) is positive. Because xn+1 legally left the basis for (Φε) at the final iteration, the resulting basis is feasible for (Pε). So we do not re-perturb (P), and we simply revert to solving (Pε) from the final basis of (Φε).
Putting everything together, we get a mathematically complete algorithm for linear opti-
mization. That is:
1. Apply an algebraic perturbation to the phase-one problem;
2. Solve the phase-one problem using the Worry-Free Simplex Algorithm, adapted to alge-
braically perturbed problems, but always giving preference to xn+1 for leaving the basis
whenever it is eligible to leave for the unperturbed problem. Go to the next step, as soon
as xn+1 leaves the basis;
3. Starting from the feasible basis obtained for the original standard-form problem, apply an algebraic perturbation. (Note that the previous step may have left us with a basis that is feasible for the original unperturbed problem, but infeasible for the original perturbed problem — this is why we apply a perturbation anew; see the “Early arrival” paragraph in Section 4.4.2.)
4. Solve the problem using the Worry-Free Simplex Algorithm, adapted to algebraically per-
turbed problems.
It is important to know that the Simplex Algorithm will be used, later, to prove the cele-
brated Strong Duality Theorem. For that reason, it is important that our algorithm be math-
ematically complete. But from a practical computational viewpoint, there is substantial over-
head in working with the -perturbed problems. Therefore, in practice, no computer code that
is routinely applied to large instances worries about the potential for cycling associated with the
very-real possibility of degeneracy.
4.6 Exercises
Exercise 4.1 (Carry out the Simplex Algorithm)
pivot_tools.ipynb (see Appendix A.6) implements the primitive steps of the simplex algo-
rithm. Using these primitives only, write a Python function to carry out the simplex algorithm.
Initialize your data as is done in pivot_example.ipynb. Do not worry about degeneracy/anticycling. But I do want you to take care of algorithmically finding an initial feasible basis as
described in Section 4.4.1. Make some small examples to fully illustrate the different possibili-
ties for (P) (i.e., infeasible, optimal, unbounded).
Exercise 4.2 (Dual change — first do Exercise 3.4)
Let β, η be any basic partition for the standard-form problem (P). The associated dual solution is ȳ′ := c′β Aβ⁻¹. Now, suppose that we pivot, letting ηj enter the basis and βℓ leave the basis, so that the new partition β̃, η̃ is also a basic partition (in other words, Aβ̃ is invertible). Let ỹ be the dual solution associated with the basic partition β̃, η̃, and let Hℓ· be row ℓ of H := Aβ⁻¹. Prove that

  ỹ′ = ȳ′ + (c̄ηj / āℓ,ηj) Hℓ· .

HINT: Use the Sherman-Morrison formula; see Section 1.3.
Exercise 4.3 (Traditional phase one)
Instead of organizing the phase-one problem as (Φ), we could first scale rows of Ax = b as necessary so as to achieve b ≥ 0. Then we can formulate the “traditional phase-one problem”

  min  ∑_{j=1}^{m} xn+j
       Ax + ∑_{j=1}^{m} ej xn+j = b ;        (Φ̂)
       x , xn+1 , . . . , xn+m ≥ 0 .
Here we have m artificial variables: xn+1 , xn+2 , . . . , xn+m . It is easy to see that (i) β :=
{n + 1, n + 2, . . . , n + m} is feasible for (Φ̂), and (ii) the optimal value of (Φ̂) is zero if and only
if (P) has a feasible solution.
It may be that the optimal value of (Φ̂) is zero, but the optimal basis discovered by the
Simplex Algorithm applied to (Φ̂) contains some of the indices {n + 1, n + 2, . . . , n + m} of
artificial variables. Describe how we can take such an optimal basis and pass to a different
optimal basis that uses none of {n + 1, n + 2, . . . , n + m} (and is thus a feasible basis for (P)).
HINT: Use Theorem 1.2.
Let θ := 2π/k, with integer k ≥ 5. The idea is to use the symmetry of the geometric circle, and complete a cycle of the Worry-Free Simplex Algorithm in 2k pivots. Choose a constant γ satisfying 0 < γ < tan(θ/2). Let

  A1 := ( 1 , 0 )′ ,   A2 := ( 0 , γ )′ .

Let

  R := [ cos θ  −sin θ
         sin θ   cos θ ] .

Then, for j = 3, 4, . . . , 2k, let

  Aj := R^{(j−1)/2} A1 , for odd j ;
  Aj := R^{(j−2)/2} A2 , for even j .

We can observe that for odd j, Aj is a rotation of A1 by (j − 1)π/k radians, and for even j, Aj is a rotation of A2 by (j − 2)π/k radians.
Let cj := 1 − a1j − a2j/γ, for j = 1, 2, . . . , 2k, and let b := (0, 0)′. Because b = 0, the problem is fully degenerate; that is, x̄ = 0 for all basic solutions x̄. Notice that this implies that either the problem has optimal objective value zero, or the objective value is unbounded on the feasible region.
For k = 5, you can choose γ := (1/2) tan(θ/2), and then check that the following is a sequence of bases β that are legal for the Worry-Free Simplex Algorithm:
You need to check that for every pivot, the incoming basic variable xηj has negative reduced
cost, and that the outgoing variable is legally selected — that is that āi,ηj > 0 . Feel free to use
any software that you find convenient (e.g., Python, MATLAB, Mathematica, etc.).
Note that it may seem hard to grasp the picture at all. But see Section 6.1.3 and Figure 4.5;
you can look at it from different perspectives using the Jupyter notebook Circle.ipynb (see
Appendix A.7).
If you are feeling ambitious, check that for all k ≥ 5 , we get a cycle of the Worry-Free
Simplex Algorithm.
Exercise 4.5
Run the code pivot_example.ipynb, but with the following line uncommented:
See how this carries out the algebraic-perturbation method from Section 4.3.
Now, using this code as your starting point, solve the example from Exercise 4.4 with k = 5
to optimality, using the algebraic-perturbation method. Just change the data to correspond to
the example that we want to solve, and do the pivots one at a time, following the rules of the
simplex method.
Chapter 5
Duality
• See how duality and complementarity carry over to general linear-optimization problems.
min c0 x
Ax = b; (P)
x ≥ 0
Proof. If (P) has a feasible solution and (P) is not unbounded, then the Simplex Algorithm
will terminate with a basis β such that the associated basic solution x̄ and the associated dual
solution ȳ are optimal. □
It is important to realize that the Strong Optimal Basis Theorem and the Strong Duality
Theorem depend on the correctness of the Simplex Algorithm — this includes: (i) the correct-
ness of the phase-one procedure to find an initial feasible basis of (P), and (ii) the anti-cycling
methodology.
With respect to the standard-form problem (P) and its dual (D), the solutions x̂ and ŷ are complementary if
$$(c_j - \hat y' A_{\cdot j})\,\hat x_j = 0\,, \ \text{for } j = 1, 2, \ldots, n\,; \qquad \hat y_i\,(A_{i\cdot}\hat x - b_i) = 0\,, \ \text{for } i = 1, 2, \ldots, m\,.$$
Theorem 5.3
If x̄ is a basic solution (feasible or not) of standard-form (P), and ȳ is the associated dual
solution, then x̄ and ȳ are complementary.
Proof. Notice that if x̄ is a basic solution then Ax̄ = b. Then we can see that complementarity of
x̄ and ȳ amounts to
c̄j x̄j = 0 , for j = 1, 2, . . . , n .
It is clear then that x̄ and ȳ are complementary, because if x̄j > 0 , then j is a basic index, and
c̄j = 0 for basic indices. □
Theorem 5.4
If x̂ and ŷ are complementary with respect to (P) and (D), then c0 x̂ = ŷ 0 b .
Proof.
c0 x̂ − ŷ 0 b = (c0 − ŷ 0 A)x̂ + ŷ 0 (Ax̂ − b) ,
which is 0 by complementarity. □
Proof. This immediately follows from Theorem 5.4 and the Weak Duality Theorem. □
Proof. If x̂ and ŷ are optimal, then by the Strong Duality Theorem, we have c0 x̂ − ŷ 0 b = 0 .
Therefore, we have
Clearly this expression is equal to a non-negative number. Finally, we observe that this expres-
sion can only be equal to 0 if
□
Theorem 5.7
• Weak Duality Theorem: If $(\hat x_P, \hat x_N, \hat x_U)$ is feasible in (G) and $(\hat y_G, \hat y_L, \hat y_E)$ is feasible in (H), then
$$c_P'\hat x_P + c_N'\hat x_N + c_U'\hat x_U \;\ge\; \hat y_G' b_G + \hat y_L' b_L + \hat y_E' b_E\,.$$
• Strong Duality Theorem: If (G) has a feasible solution, and (G) is not unbounded, then there exist feasible solutions $(\hat x_P, \hat x_N, \hat x_U)$ for (G) and $(\hat y_G, \hat y_L, \hat y_E)$ for (H) that are optimal. Moreover,
$$c_P'\hat x_P + c_N'\hat x_N + c_U'\hat x_U \;=\; \hat y_G' b_G + \hat y_L' b_L + \hat y_E' b_E\,.$$
Proof. The Weak Duality Theorem for general problems can be demonstrated as easily as it was for the standard-form problem and its dual. But the Strong Duality Theorem for general problems is most easily obtained by converting our general problem (G) to the standard-form problem (P) and appealing to the Strong Duality Theorem for (P) and its dual (D). □
With respect to (G) and its dual (H), the solutions (x̂P , x̂N , x̂U ) and (ŷG , ŷL , ŷE ) are com-
plementary if
$$(c_j - \hat y_G' A_{Gj} - \hat y_L' A_{Lj} - \hat y_E' A_{Ej})\,\hat x_j = 0\,, \quad \text{for all } j\,;$$
$$\hat y_i\,(A_{iP}\hat x_P + A_{iN}\hat x_N + A_{iU}\hat x_U - b_i) = 0\,, \quad \text{for all } i\,.$$
Theorem 5.8
• Weak Complementary Slackness Theorem: If (x̂P , x̂N , x̂U ) and (ŷG , ŷL , ŷE ) are feasible
and complementary with respect to (G) and (H), then (x̂P , x̂N , x̂U ) and (ŷG , ŷL , ŷE ) are
optimal.
• Strong Complementary Slackness Theorem: If (x̂P , x̂N , x̂U ) and (ŷG , ŷL , ŷE ) are optimal for (G) and (H), then (x̂P , x̂N , x̂U ) and (ŷG , ŷL , ŷE ) are complementary (with respect to (G) and (H)).
Proof. Similarly to the proof for standard-form (P) and its dual (D), we consider the following
expression.
$$\begin{aligned}
0 = {} & \sum_{j\in P} \underbrace{(c_j - \hat y_G' A_{Gj} - \hat y_L' A_{Lj} - \hat y_E' A_{Ej})}_{\ge 0}\,\underbrace{\hat x_j}_{\ge 0}\\
{}+{} & \sum_{j\in N} \underbrace{(c_j - \hat y_G' A_{Gj} - \hat y_L' A_{Lj} - \hat y_E' A_{Ej})}_{\le 0}\,\underbrace{\hat x_j}_{\le 0}\\
{}+{} & \sum_{j\in U} \underbrace{(c_j - \hat y_G' A_{Gj} - \hat y_L' A_{Lj} - \hat y_E' A_{Ej})}_{= 0}\,\hat x_j\\
{}+{} & \sum_{i\in G} \underbrace{\hat y_i}_{\ge 0}\,\underbrace{(A_{iP}\hat x_P + A_{iN}\hat x_N + A_{iU}\hat x_U - b_i)}_{\ge 0}\\
{}+{} & \sum_{i\in L} \underbrace{\hat y_i}_{\le 0}\,\underbrace{(A_{iP}\hat x_P + A_{iN}\hat x_N + A_{iU}\hat x_U - b_i)}_{\le 0}\\
{}+{} & \sum_{i\in E} \hat y_i\,\underbrace{(A_{iP}\hat x_P + A_{iN}\hat x_N + A_{iU}\hat x_U - b_i)}_{= 0}\,.
\end{aligned}$$
The result follows easily using the Weak and Strong Duality Theorems for (G) and (H). □
The table below summarizes the duality relationships between the type of each primal con-
straint and the type of each associated dual variable. Highlighted in yellow are the relation-
ships for the standard-form (P) and its dual (D). It is important to note that the columns are
labeled “min” and “max”, rather than primal and dual — the table is not correct if “min” and
“max” are interchanged.
                  min                    max
constraints        ≥                     ≥ 0          variables
                   ≤                     ≤ 0
                   =                     unres.
variables          ≥ 0                   ≤            constraints
                   ≤ 0                   ≥
                   unres.                =
Proof. It is easy to see that there cannot simultaneously be a solution x̂ to (I) and ŷ to (II).
Otherwise we would have
$$0 \;\ge\; \underbrace{\hat y' A}_{\le\,0'}\,\underbrace{\hat x}_{\ge\,0} \;=\; \hat y' b \;>\; 0\,,$$
which is a contradiction. Next, suppose that (I) has no solution. Then the following problem is infeasible:
min 0′x
Ax = b ;     (P)
x ≥ 0 .
Its dual is
max y′b
y′A ≤ 0′ .     (D)
Because (P) is infeasible, (D) is either infeasible or unbounded. But ŷ := 0 is a feasible solution to (D), therefore (D) must be unbounded. Therefore, there exists a feasible solution ŷ to (D) having objective value greater than zero (or even greater than any fixed constant). Such a ŷ is a solution to (II). □
Remark 5.10
Geometrically, the Farkas Lemma asserts that exactly one of the following holds:
(I) b is in the “cone generated by the columns of A” (i.e., b is a non-negative linear combina-
tion of the columns of A), or
(II) there is ŷ ∈ Rm that makes an acute angle with b and a non-acute (i.e., right or obtuse)
angle with every column of A .
In the case of (II), considering the hyperplane H containing the origin having ŷ as its normal
vector, this H separates b from the cone generated by the columns of A . So, the Farkas Lemma
has the geometric interpretation as a “Separating-Hyperplane Theorem.” See Figure 5.1 for an
example with m = 2 and n = 4 . The cone is red and the point b that we separate from the
cone is blue. The green point is a solution ŷ for (II), and the dashed green line is the separating
hyperplane. Notice how the (solid) green vector makes an acute angle with the blue vector and
a non-acute angle with all points in the cone.
In a similar fashion to the Farkas Lemma, we can develop theorems of this type for feasible
regions of other linear-optimization problems.
Proof. It is easy to see that there cannot simultaneously be a solution x̂ to (I) and ŷ to (II).
Otherwise we would have
$$0 \;=\; \underbrace{\hat y' A}_{=\,0'}\,\hat x \;\ge\; \hat y' b \;>\; 0\,,$$
which is a contradiction. Next, suppose that (I) has no solution. Then the following problem is infeasible:
min 0′x
Ax ≥ b .     (P)
Its dual is
max y′b
y′A = 0′ ;     (D)
y ≥ 0 .
Because (P) is infeasible, then (D) is either infeasible or unbounded. But ŷ := 0 is a feasible
solution to (D), therefore (D) must be unbounded. Therefore, there exists a feasible solution ŷ
to (D) having objective value greater than zero (or even greater than any fixed constant). Such
a ŷ is a solution to (II). □
5.5 Exercises
Exercise 5.1 (Dual picture)
For the standard-form problem (P) and its dual (D), explain aspects of duality and comple-
mentarity using this picture:
where in (P̃), we regard the non-negativity constraints of (P) as ordinary structural constraints
— with dual variables.
Define w̄ ∈ Rn by
w̄β := 0 ∈ Rm ;
w̄η := c̄η ∈ Rn−m .
Prove that if (P) has an optimal solution, then there are always optimal solutions for (P) and
(D) that are overly complementary.
HINT: Let v be the optimal objective value of (P). For each j = 1, 2, . . . , n , consider
max x_j
    c′x ≤ v ;
    Ax = b ;     (P_j)
    x ≥ 0 .
(Pj ) seeks an optimal solution of (P) that has xj positive. Using the dual of (Pj ), show that
if no optimal solution x̂ of (P) has x̂j positive, then there is an optimal solution ŷ of (D) with
cj − ŷ 0 A·j positive. Once you do this you can conclude that, for any fixed j, there are optimal
solutions x̂ and ŷ with the property that exactly one of
Take all of these n pairs of solutions x̂ and ŷ and combine them appropriately to construct
optimal x̂ and ŷ that are overly complementary.
$$\begin{array}{rcll}
A^G_P x_P + A^G_N x_N + A^G_U x_U &\ge& b_G\,; & \\
A^L_P x_P + A^L_N x_N + A^L_U x_U &\le& b_L\,; & \quad(\text{I})\\
A^E_P x_P + A^E_N x_N + A^E_U x_U &=& b_E\,; & \\
x_P \ge 0\,, \quad x_N \le 0\,. & & &
\end{array}$$
min c0 x
Ax ≥ b; (P)
x ≥ 0.
a) Suppose that (P) is infeasible. Then, by a ‘Theorem of the Alternative’ there is a solution
to what system?
b) Suppose, further, that the dual (D) of (P) is feasible. Take a feasible solution ŷ of (D) and
a solution ỹ to your system of part (a) and combine them appropriately to prove that (D)
is unbounded.
Chapter 6
Sensitivity Analysis
• Learn how the optimal value of a linear-optimization problem behaves when the right-
hand side vector and objective vector are varied.
f (b) := min c0 x
Ax = b; (Pb )
x ≥ 0.
That is, (Pb ) is simply (P) with the optimal objective value viewed as a function of its right-hand
side vector b .
Consider a fixed basis β for (Pb ). Associated with that basis is the basic solution x̄β = A−1 β b
and the corresponding dual solution ȳ 0 = c0β A−1 β . Let us assume that ȳ is feasible for the dual
of (Pb) — or, equivalently, $c_\eta' - \bar y' A_\eta \ge 0'$. Considering the set B of b ∈ R^m such that β is an optimal basis, it is easy to see that B is just the set of b such that $\bar x_\beta := A_\beta^{-1} b \ge 0$. That is, B ⊂ R^m is the solution set of m linear inequalities (in fact, it is a “simplicial cone” — we will return to this point in Section 6.1.3). Now, for b ∈ B , we have f(b) = ȳ′b . Therefore, f is a linear function on b ∈ B . Moreover, as long as b is in the interior of B , we have $\partial f / \partial b_i = \bar y_i$. So we have that ȳ is the gradient of f , as long as b is in the interior of B . Now what does it mean for b to be in the interior of B ? It just means that $\bar x_{\beta_i} > 0$ for i = 1, 2, . . . , m .
Let us focus our attention on changes to a single right-hand side element bi . Suppose that
β is an optimal basis of (P) , and consider the problem
min c0 x
Ax = b + ∆i ei ; (Pi )
x ≥ 0,
where $\Delta_i \in \mathbb{R}$. The basis β is feasible (and hence still optimal) for (P_i) if $A_\beta^{-1}(b + \Delta_i e_i) \ge 0$. Let $h^i := A_\beta^{-1} e_i$. So $[h^1, h^2, \ldots, h^m] = A_\beta^{-1}$.
It is straightforward to check that β is feasible (and hence still optimal) for (P_i) as long as $\Delta_i$ is in the interval $[L_i, U_i]$, where
$$L_i := \max_{k\,:\,h^i_k > 0}\; -\bar x_{\beta_k}/h^i_k\,, \qquad\text{and}\qquad U_i := \min_{k\,:\,h^i_k < 0}\; -\bar x_{\beta_k}/h^i_k\,.$$
It is worth noting that it can be the case that $h^i_k \le 0$ for all k, in which case we define $L_i := -\infty$, and it could be the case that $h^i_k \ge 0$ for all k, in which case we define $U_i := +\infty$.
In summary, for all ∆i satisfying Li ≤ ∆i ≤ Ui , β is an optimal basis of (P) . It is important
to emphasize that this result pertains to changing one right-hand side element and holding all
others constant. For a result on simultaneously changing all right-hand side elements, we refer
to Exercise 6.3.
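As an illustration of these formulas, here is a minimal numpy sketch that computes [L_i, U_i] for every i, under the assumption that A, b and an index list beta describing an optimal basis are at hand (the explicit inverse mirrors the formulas and is for illustration only; all names are hypothetical):

import numpy as np

def rhs_ranges(A, b, beta):
    # A_beta^{-1}, the current basic values, and the columns h^i = A_beta^{-1} e_i
    Abinv = np.linalg.inv(A[:, beta])
    xbar = Abinv @ b
    L = np.full(len(b), -np.inf)
    U = np.full(len(b), np.inf)
    for i in range(len(b)):
        h = Abinv[:, i]
        pos, neg = h > 1e-12, h < -1e-12
        if pos.any():
            L[i] = np.max(-xbar[pos] / h[pos])   # L_i = max over k with h^i_k > 0
        if neg.any():
            U[i] = np.min(-xbar[neg] / h[neg])   # U_i = min over k with h^i_k < 0
    return L, U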
The domain of f is the set of b for which (Pb ) has an optimal solution. Assuming that the
dual of (Pb ) is feasible (note that this just means that y 0 A ≤ c0 has a solution), then (Pb ) is never
unbounded. So the domain of f is just the set of b ∈ Rm such that (Pb ) is feasible.
Theorem 6.1
The domain of f is a convex set.
Proof. Suppose that bj is in the domain of f , for j = 1, 2 . Therefore, there exist xj that are
feasible for (Pbj ) , for j = 1, 2 . For any 0 < λ < 1 , let b̂ := λb1 + (1 − λ)b2 , and consider
x̂ := λx1 + (1 − λ)x2 . It is easy to check that x̂ is feasible for (Pb̂ ) , so we can conclude that b̂ is
in the domain of f . □
for all u1 , u2 ∈ S and 0 < λ < 1 . That is, f is never underestimated by linear interpolation.
A function $f : \mathbb{R}^m \to \mathbb{R}$ is an affine function if it has the form $f(u_1, \ldots, u_m) = a_0 + \sum_{i=1}^m a_i u_i$, for constants $a_0, a_1, \ldots, a_m$. The function $\check f$ is a convex piecewise-linear function if it is the pointwise maximum of a finite number of affine functions.
It would be strange to refer to a function as being “convex piecewise-linear” if it were not convex!
The next result justifies the moniker.
Theorem 6.2
If fˇ is a convex piecewise-linear function, then it is a convex function.
Proof. Let
fˇ(u) := max {fi (u)} ,
1≤i≤k
for u in the domain of fˇ , where each fi is an affine function. That is, fˇ is the pointwise maximum
of a finite number (k) of affine functions.
For $u_1, u_2$ in the domain of $\check f$ and $0 < \lambda < 1$, we have
$$\check f(\lambda u_1 + (1-\lambda)u_2) = \max_{1\le i\le k}\{f_i(\lambda u_1 + (1-\lambda)u_2)\} = \max_{1\le i\le k}\{\lambda f_i(u_1) + (1-\lambda)f_i(u_2)\} \le \lambda \max_{1\le i\le k}\{f_i(u_1)\} + (1-\lambda)\max_{1\le i\le k}\{f_i(u_2)\} = \lambda \check f(u_1) + (1-\lambda)\check f(u_2)\,. \; \square$$
Theorem 6.3
f is a convex piecewise-linear function on its domain.
f(b) := max { y′b : y′A ≤ c′ } ,     (D_b)
the dual of (P_b). A basis β is feasible or not for (D_b), independent of b . Thinking about it this way, we can see that
$$f(b) = \max\bigl\{\, c_\beta' A_\beta^{-1} b \;:\; \beta \text{ is a dual feasible basis} \,\bigr\}\,,$$
6.1.3 A brief detour: the column geometry for the Simplex Algorithm
In this section, we will describe a geometry for visualizing the Simplex Algorithm. The ordinary geometry for a standard-form problem, in the space of the non-basic variables for some choice of basis, can be visualized when n − m = 2 or 3. The “column geometry” that we
will describe is in Rm+1 , so it can be visualized when m + 1 = 2 or 3. Note that the graph of
the function f (b) (introduced at the start of this chapter) is also in Rm+1 , which is why we take
the present detour.
We think of the first component of these points and of the line as the vertical dimension; so the requirement line is thought of as vertical. It is of particular interest to think of the cone generated by the n points. That is,
$$K := \left\{ \begin{pmatrix} c'x \\ Ax \end{pmatrix} \in \mathbb{R}^{m+1} : x \ge 0 \right\}.$$
Notice how the top coordinate of a point in the cone gives the objective value of the associated x
for (P). So the goal of solving (P) can be thought of as that of finding a point on the intersection
of the requirement line and the cone that is as low as possible.
Restricting ourselves to a basis β , we have the cone
$$K_\beta := \left\{ \begin{pmatrix} c_\beta' x_\beta \\ A_\beta x_\beta \end{pmatrix} \in \mathbb{R}^{m+1} : x_\beta \ge 0 \right\}.$$
The cone K_β is an “m-dimensional simplicial cone.” Next, we observe that if β is a feasible basis, then K_β intersects the requirement line uniquely at the point
$$\begin{pmatrix} c_\beta' \bar x_\beta \\ A_\beta \bar x_\beta \end{pmatrix},$$
“Here is what is needed for Occupy Wall Street to become a force for change: a clear, and
clearly expressed, objective. Or two.” — Elayne Boosler
equivalently, $A_\beta^{-1} b \ge 0$. Considering the set C of c ∈ R^n such that β is an optimal basis, it is easy to see that this is just the set of c such that $c_\eta' - c_\beta' A_\beta^{-1} A_\eta \ge 0'$. That is, C ⊂ R^n is the solution set of n − m linear inequalities (in fact, it is a cone). Now, for c ∈ C , we have $g(c) = c_\beta' \bar x_\beta$. Therefore, g is a linear function on c ∈ C .
Theorem 6.4
The domain of g is a convex set.
for all u1 , u2 ∈ S and 0 < λ < 1 . That is, f is never overestimated by linear interpolation.
The function g is a concave piecewise-linear function if it is the pointwise minimum of a finite
number of affine functions.
Theorem 6.5
g is a concave piecewise-linear function on its domain.
6.3 Exercises
Exercise 6.1 (Local sensitivity analysis with Python/Gurobi)
We can easily carry out some local sensitivity analysis with Python/Gurobi. See how this is done
in Production.ipynb (Appendix A.3). Verify the calculations of Python/Gurobi by ‘hand’,
using the ideas and formulas in Section 6.1.1 to make the calculations yourself; you may use
any convenient software (e.g., Python, MATLAB, Mathematica, etc.) to assist you, but only for
doing arithmetic on scalars, vectors, and matrices.
where $\lambda_i \ge 0$, when $\sum_{i=1}^m \lambda_i \le 1$. [Note that in the formula above, for each i we can pick either L_i (a decrease) or U_i (an increase).]
Chapter 7
Large-Scale Linear Optimization
• Also, via a study of the “cutting-stock problem,” we will have a first glimpse at some
issues associated with integer-linear optimization.
7.1 Decomposition
We will demonstrate that S = S′. It is very easy to check that S′ ⊂ S , and we leave that to the reader. For the other direction, suppose that x̂ ∈ S , and consider the system
$$\begin{array}{rcl}
\sum_{j\in J} \lambda_j \hat x^j + \sum_{k\in K} \mu_k \hat z^k &=& \hat x\,;\\
\sum_{j\in J} \lambda_j &=& 1\,; \qquad(\text{I})\\
\lambda_j \ge 0\,,\ j\in J\,; \quad \mu_k \ge 0\,,\ k\in K\,. & &
\end{array}$$
Keep in mind that in (I), x̂ is fixed, as are the x̂^j and the ẑ^k — the variables are the λ_j and the µ_k. By way of establishing that S ⊂ S′, suppose that x̂ ∉ S′ — that is, suppose that (I)
has no solution. Applying the Farkas Lemma to (I) , we see that the system
w0 x̂ + t > 0;
w0 x̂j + t ≤ 0, ∀j ∈J ; (II)
w0 ẑ k ≤ 0, ∀k∈K
min −ŵ0 x
Ax = b; (P̂)
x ≥ 0.
(P̂) cannot be unbounded, because −ŵ0 ẑ k ≥ 0 , for all k ∈ K . In addition, every basic feasible
solution of (P̂) has objective value at least t̂ . By Theorem 5.1 (the Strong Optimal Basis Theo-
rem), this implies that the optimal value of (P̂) is at least t̂ . But the objective value −ŵ0 x̂ of x̂ is
less than t̂ . Therefore, x̂ cannot be feasible. That is, x̂ ∉ S . □
for x in c0 x and in Ex ≥ h of (Q), and it is easy to see that (M) is equivalent to (Q). □
Decomposition is typically applied in a way such that the constraints defining (S) are some-
how relatively “nice,” and the constraints Ex ≥ h somehow are “complicating” the situation.
For example, we may have a problem where the overall constraint matrix has the form depicted
in Figure 7.1. In such a scenario, we would let E be the matrix formed by the “complicating” rows (which span all of the variables), and A the block-diagonal matrix formed by the remaining rows.
We note that there is nothing special here about the “nice” constraints being “=”, and the
complicating constraints being “≥”. The method, with small modifications, can handle any
types of constraints; we take the particular form that we do for some convenience.
We will temporarily put aside how we calculate values for y and σ , but for now we suppose
that we have a basic partition of (M̄) and an associated dual solution ȳ and σ̄ .
Entering variable. Notice that nonnegativity of the dual variables y in (M) is equivalently
realized in (M̄) via the reduced costs of the slack variables being nonnegative. Therefore, a slack
variable si is eligible to enter the basis if ȳi < 0.
The reduced cost of a variable λj is
c0 x̂j − ȳ 0 E x̂j − σ̄ = −σ̄ + (c0 − ȳ 0 E) x̂j .
It is noteworthy that with the dual solution fixed (at ȳ and σ̄), the reduced cost of λj is a constant
(−σ̄) plus a linear function of x̂j . A variable λj is eligible to enter the basis if its reduced cost
is negative. So we formulate the following optimization problem:
If the “subproblem” (SUB) has as optimal solution, then it has a basic optimal solution — that
is, an x̂j . In such a case, if the optimal objective value of (SUB) is negative, then the λj corre-
sponding to the optimal x̂j is eligible to enter the current basis of (M̄). On the other hand, if
the optimal objective value of (SUB) is non-negative, then we have a proof that no non-basic λj
is eligible to enter the current basis of (M̄).
If (SUB) is unbounded, then (SUB) has a basic feasible ray ẑ k having negative objective
value. That is, (c0 − ȳ 0 E) ẑ k < 0 . Amazingly, the reduced cost of µk is precisely c0 ẑ k −
ȳ 0 E ẑ k = (c0 − ȳ 0 E) ẑ k , so, in fact, µk is then eligible to enter the current basis of (M̄).
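To make the pricing step concrete, here is a rough Python/Gurobi sketch of solving (SUB) for a given ȳ and σ̄. It assumes c, b, ybar are numpy vectors and E, A numpy matrices, and simply reports the generated column and its reduced cost; detecting unboundedness and extracting a ray ẑ^k in that case is omitted from this sketch, and it is not the book's Decomp.ipynb code:

import numpy as np
import gurobipy as gp
from gurobipy import GRB

def price_column(c, E, A, b, ybar, sigbar):
    cbar = c - E.T @ ybar                       # objective coefficients c' - ybar'E of (SUB)
    sub = gp.Model("SUB")
    sub.Params.OutputFlag = 0
    n = len(c)
    x = sub.addVars(n, lb=0.0, name="x")
    sub.setObjective(gp.quicksum(cbar[j] * x[j] for j in range(n)), GRB.MINIMIZE)
    for i in range(A.shape[0]):
        sub.addConstr(gp.quicksum(A[i, j] * x[j] for j in range(n)) == b[i])
    sub.optimize()
    if sub.Status != GRB.OPTIMAL:
        return None, None                       # e.g. unbounded: a ray with negative cost exists
    xhat = np.array([x[j].X for j in range(n)])
    return xhat, -sigbar + cbar @ xhat          # candidate column x̂^j and its reduced cost

If the returned reduced cost is negative, the corresponding λ_j is eligible to enter the basis of (M̄).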
Leaving variable. To determine the choice of leaving variable, let us suppose that B is the
basis matrix for (M̄). Note that B consists of at least one column of the form
$$\begin{pmatrix} E\hat x^j \\ 1 \end{pmatrix}.$$
Calculation of basic primal and dual solutions. It is helpful to explain a bit about the
calculation of basic primal and dual solutions. As we have said, B consists of at least one column
of the form
$$\begin{pmatrix} E\hat x^j \\ 1 \end{pmatrix},$$
and columns of the form
$$\begin{pmatrix} E\hat z^k \\ 0 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} -e_i \\ 0 \end{pmatrix}.$$
So organizing the basic variables λj , µk and si into a vector ζ , with their order appropriately
matched with the columns of B , the vector ζ̄ of values of ζ is precisely the solution of
$$B\zeta = \begin{pmatrix} h \\ 1 \end{pmatrix}.$$
That is,
$$\bar\zeta = B^{-1}\begin{pmatrix} h \\ 1 \end{pmatrix}.$$
Finally, organizing the costs c0 x̂j , c0 ẑ k , and 0 of the basic variables λj , µk and si into a vector ξ ,
with their order appropriately matched with the columns of B , the associated dual solution
(ȳ, σ̄) is precisely the solution of
(y 0 , σ)B = ξ 0 .
That is,
(ȳ 0 , σ̄) = ξ 0 B −1 .
Starting basis. It is not obvious how to construct a feasible starting basis for (M̄); after all,
we may not have at hand any basic feasible solutions and rays of (S). Next, we give a simple
recipe. First, we take as x̂1 any basic feasible solution of (P) . Such a solution can be readily
obtained by using our usual (phase-one) methodology of the Simplex Algorithm. Our initial
basic variables are all of the slack variables si and also λ1 , associated with x̂1 . So we have the
initial basis matrix
$$B = \begin{pmatrix} -I & E\hat x^1 \\ 0' & 1 \end{pmatrix}.$$
It is very easy to see that this is an invertible matrix.
It is very important to realize that we have given a recipe for finding an initial basic solution
of (M̄). This basic solution is feasible precisely when x1 satisfies the Ex ≥ h constraints. If
this solution is not feasible, then we would introduce an artificial variable and do a phase-
one procedure. Following the methodology of Section 4.4.1, we introduce the single artificial
column
$$\begin{pmatrix} 1 - E\hat x^1 \\ 1 \end{pmatrix},$$
with cost 1. We let the artificial variable enter the basis, removing the slack variable that is the
most negative from the basis. This yields a feasible basis for the phase-one problem, with pos-
itive objective value. Now we carry out phase-one of the simplex method, using Decomposition,
minimizing the artificial variable, seeking to drive it down to zero.
A demonstration implementation.
It is not completely trivial to write a small Python/Gurobi code for the Decomposition Algo-
rithm. First of all, we solve the subproblems (SUB) using functionality of Gurobi. Another
point is that rather than carry out the simplex method at a detailed level on (M̄), we just ac-
cumulate all columns of (M̄) that we generate, and always solve linear-optimization problems,
using functionality of Gurobi, with all of the columns generated thus far. In this way, we do not
maintain bases ourselves, and we do not carry out the detailed pivots of the Simplex Algorithm.
Note that the linear-optimization functionality of Gurobi does give us a dual solution, so we do
not compute that ourselves. Our code is in the Jupyter notebook Decomp.ipynb (see Appendix
A.8)
In Figures 7.2 and 7.3, we see quite good behavior for the Decomposition Algorithm, for
a problem with 100 variables, 200 “complicating” constraints (i.e., rows of E), and 50 “nice”
constraints (i.e., rows of A).
Convergence and lower bounds. Practically speaking, the convergence behavior of the
Decomposition Algorithm can suffer from a tailing-off effect. That is, while the sequence of
objective values for successive iterates is non-increasing, at some point improvements can be-
come quite small. It would be helpful to know when we already have a very good but possibly
non-optimal solution. If we could rapidly get a good lower bound on z , then we could stop the
Decomposition when its objective value is close to such a lower bound. Lower bounds on
z can be obtained from feasible solutions to the dual of (Q) . But there is another way, closely
related to the dual of (Q), to rapidly get good lower bounds. We develop this in the next section.
Again, we consider
z := min c0 x
Ex ≥ h ;
(Q)
Ax = b ;
x ≥ 0,
but our focus now is on efficiently getting a good lower bound on z , with again the view that
we are able to quickly solve many linear-optimization problems having only the constraints:
Ax = b , x ≥ 0 .
For ŷ ≥ 0 , we consider
$$v(\hat y) := \hat y' h + \min\bigl\{\, (c' - \hat y'E)\,x \;:\; Ax = b\,,\ x \ge 0 \,\bigr\}\,. \qquad (\text{L}_{\hat y})$$
Note that the only variables in the minimization are x , because we consider ŷ to be fixed.
Theorem 7.3
v(ŷ) ≤ z , for all ŷ in the domain of v.
Proof. Let x∗ be an optimal solution for (Q). Clearly x∗ is feasible for (Lŷ ). Therefore
The last equation uses the fact that x∗ is optimal for (Q), so z = c0 x∗ , and also that Ex∗ ≥ h and
ŷ ≥ 0 . □
From what we learned in studying sensitivity analysis, it can be seen that v is a concave
(piecewise-linear) function on its domain (see Theorem 6.5). Because of this nice behavior, it is
plausible that we could calculate the maximum of v as a means of getting a good lower bound
on z . Before doing that, we examine the precise relationship between primal and dual solutions
of (Q), maximizers of v , and primal and dual solutions of the Lagrangian.
Theorem 7.4
Suppose that x∗ is optimal for (Q) , and suppose that ŷ and π̂ are optimal for the dual of (Q) .
Then x∗ is optimal for (Lŷ ) , π̂ is optimal for the dual of (Lŷ ) , ŷ is a maximizer of v(y) over
y ≥ 0, and the maximum value of v(y) over y ≥ 0 is z .
In the theorem above, we refer to two duals. The dual of (Q) is:
max y′h + π′b
    y′E + π′A ≤ c′ ;
    y ≥ 0 ,
and the dual of (Lŷ) is:
max π′b
    π′A ≤ c′ − ŷ′E .
Proof. x∗ is clearly feasible for (Lŷ ) . Because ŷ and π̂ are feasible for the dual of (Q) , we have
ŷ ≥ 0, and ŷ 0 E + π̂ 0 A ≤ c0 . The latter implies that π̂ is feasible for the dual of (Lŷ ) .
Using the Strong Duality Theorem for (Q) implies that c0 x∗ = ŷ 0 h + π̂ 0 b . Using that Ex∗ ≥ h
(feasibility of x∗ in (Q)), we then have that (c0 − ŷ 0 E) x∗ ≤ π̂ 0 b . Finally, using the Weak Duality
Theorem for (Lŷ ) , we have that x∗ is optimal for (Lŷ ) and π̂ is optimal for the dual of (Lŷ ) .
Next,
Therefore all of the inequalities are equations, and so ŷ is a maximizer of v and the maximum value is z . □
Theorem 7.5
Suppose that ŷ is a maximizer of v(y) over y ≥ 0 , and suppose that π̂ is optimal for the dual
of (Lŷ ) . Then ŷ and π̂ are optimal for the dual of (Q) , and the optimal value of (Q) is v(ŷ) .
Proof.
$$\begin{aligned}
v(\hat y) &= \max_{y\ge 0}\; \{v(y)\}\\
&= \max_{y\ge 0}\; \Bigl\{\, y'h + \min_x \{(c' - y'E)\,x : Ax = b\,,\ x \ge 0\} \,\Bigr\}\\
&= \max_{y\ge 0}\; \Bigl\{\, y'h + \max_\pi \{\pi'b : \pi'A \le c' - y'E\} \,\Bigr\}\\
&= \max_{y\ge 0,\,\pi}\; \{\, y'h + \pi'b : y'E + \pi'A \le c' \,\}\\
&= z\,.
\end{aligned}$$
The third equation follows from taking the dual of the inner (minimization) problem. The last
equation follows from seeing that the final maximization (over y ≥ 0 and π simultaneously) is
just the dual of (Q).
So, we have established that the optimal value z of (Q) is v(ŷ) . Looking a bit more closely,
we have established that z = ŷ 0 h + π̂ 0 b , and because π̂ 0 A ≤ c0 − ŷ 0 E and ŷ ≥ 0, we have that ŷ and π̂ are optimal for the dual of (Q) . □
Note that the conclusion of Theorem 7.5 gives us an optimal ŷ and π̂ for the dual of (Q), but
not an optimal x∗ for (Q) itself.
Theorem 7.6
Suppose that we fix ŷ , and solve for v(ŷ) . Let x̂ be the solution of (Lŷ ) . Let γ̂ := h − E x̂ .
Then
v(ỹ) ≤ v(ŷ) + (ỹ − ŷ)0 γ̂ ,
for all ỹ in the domain of v .
Proof.
v(ŷ) + (ỹ − ŷ)0 γ̂ = ŷ 0 h + (c0 − ŷ 0 E)x̂ + (ỹ − ŷ)0 (h − E x̂)
= ỹ 0 h + (c0 − ỹ 0 E)x̂
≥ v(ỹ) .
The inequality follows from the fact that x̂ is feasible (but possibly not optimal) for (Lỹ ). □
Subgradient. What is v(ŷ) + (ỹ − ŷ)0 γ̂ ? It is a linear estimation of v(ỹ) starting from the
actual value of v at ŷ . The direction ỹ − ŷ is what we add to ŷ to move to ỹ . The choice of
γ̂ := h − E x̂ is made so that Theorem 7.6 holds. That is, γ̂ is chosen in such a way that the linear
estimation is always an upper bound on the value v(ỹ) of the function, for all ỹ in the domain of v . The nice property of γ̂ demonstrated with Theorem 7.6 has a name: we say that γ̂ := h − E x̂
is a subgradient of (the concave function) v at ŷ (because it satisfies the inequality of Theorem
7.6).
Above, Proj_{R^m_+}(·) means projection onto the nonnegative orthant. That is, we take the closest point (in Euclidean norm) to the argument of the function. In fact, this means just zeroing-out the negative entries.
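A bare-bones numpy/scipy sketch of the resulting projected subgradient iteration ŷ^{k+1} = Proj_{R^m_+}(ŷ^k + λ_k γ̂^k). Here scipy's linprog stands in for the Lagrangian subproblem solver (the book's own notebook uses Gurobi); the harmonic-type step sizes, iteration limit, and data names are placeholders, and it assumes each subproblem has an optimal solution:

import numpy as np
from scipy.optimize import linprog

def subgradient_max_v(c, E, A, b, h, iters=200, alpha=1.0, beta=0.0):
    m = E.shape[0]
    y = np.zeros(m)
    best = -np.inf
    for k in range(1, iters + 1):
        # solve (L_y): min (c' - y'E) x  subject to  Ax = b, x >= 0
        res = linprog(c - E.T @ y, A_eq=A, b_eq=b, bounds=[(0, None)] * len(c))
        xhat = res.x
        v = y @ h + (c - E.T @ y) @ xhat          # v(y)
        best = max(best, v)
        gamma = h - E @ xhat                       # subgradient of v at y
        y = np.maximum(y + (alpha / (beta + k)) * gamma, 0.0)   # project onto R^m_+
    return best, y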
Convergence. We have neglected, thus far, to fully specify the Subgradient Optimization
Algorithm. We can stop if, at some iteration k , we have γ̂^k = 0 (or, more generally, if ŷ^k = Proj_{R^m_+}(ŷ^k + λ_k γ̂^k)), because the algorithm will make no further progress if this happens, and indeed we will have found that ŷ^k is a maximizer of v(y) over y ≥ 0 . But this is actually very
unlikely to happen. In practice, we may stop if k reaches some pre-specified iteration limit, or
if after many iterations, v is barely increasing.
We are interested in mathematically analyzing the convergence behavior of the algorithm, letting the algorithm iterate infinitely. We will see that the method converges (in a certain sense) if we take a sequence of λ_k > 0 that in some sense slowly diverges; specifically, we will require that $\sum_{k=1}^\infty \lambda_k^2 < +\infty$ and $\sum_{k=1}^\infty \lambda_k = +\infty$. That is, “square summable, but not summable.” For example, taking λ_k := α/(β + k) , with α > 0 and β ≥ 0 , we get a sequence of step sizes satisfying this property; in particular, for α = 1 and β = 0 we have the harmonic step sizes λ_k = 1/k , whose partial sums satisfy $\ln(k+1) < \sum_{i=1}^k 1/i \le \ln(k) + 1$, while $\sum_{k=1}^\infty 1/k^2 = \pi^2/6$.
To prove convergence of the algorithm, we must first establish a key technical lemma.
Lemma 7.7
Let y ∗ be any maximizer of v(y) over y ≥ 0. Suppose that λk > 0 , for all k . Then
$$\|y^* - \hat y^{k+1}\|^2 - \|y^* - \hat y^{1}\|^2 \;\le\; \sum_{i=1}^k \lambda_i^2 \|\hat\gamma^i\|^2 \;-\; 2\sum_{i=1}^k \lambda_i \bigl(v(y^*) - v(\hat y^i)\bigr)\,.$$
Proof. Let wk+1 := ŷ k + λk γ̂ k ; that is, the unprojected k + 1-st iterate. For k ≥ 1, we have
$$\begin{aligned}
\|y^* - \hat y^{k+1}\|^2 - \|y^* - \hat y^{k}\|^2
&\le \|y^* - w^{k+1}\|^2 - \|y^* - \hat y^{k}\|^2\\
&= \|(y^* - \hat y^{k}) - \lambda_k \hat\gamma^k\|^2 - \|y^* - \hat y^{k}\|^2\\
&= \lambda_k^2 \|\hat\gamma^k\|^2 - 2\lambda_k (y^* - \hat y^{k})'\hat\gamma^k\\
&\le \lambda_k^2 \|\hat\gamma^k\|^2 - 2\lambda_k \bigl(v(y^*) - v(\hat y^{k})\bigr)\,.
\end{aligned}$$
The first inequality uses the fact that the projection of a point onto a convex set is no farther from any point in that convex set than the unprojected point is. The final inequality uses the assumption
that λk > 0 and the subgradient inequality:
v(ỹ) ≤ v(ŷ k ) + (ỹ − ŷ k )0 γ̂ k ,
plugging in y ∗ for ỹ.
Finally, adding up the established inequality over k yields the result. □
Now, let
$v_k^* := \max_{1\le i\le k} v(\hat y^i)$ , for k = 1, 2, . . .
That is, vk∗ is the best value seen up through the k-th iteration.
Proof. Because the left-hand side of the inequality in the statement of Lemma 7.7 is non-negative,
we have
$$2\sum_{i=1}^k \lambda_i \bigl(v(y^*) - v(\hat y^i)\bigr) \;\le\; \|y^* - \hat y^1\|^2 + \sum_{i=1}^k \lambda_i^2 \|\hat\gamma^i\|^2\,.$$
Because $v(\hat y^i) \le v_k^*$ for i = 1, 2, . . . , k , this implies that
$$2\Bigl(\sum_{i=1}^k \lambda_i\Bigr)\bigl(v(y^*) - v_k^*\bigr) \;\le\; \|y^* - \hat y^1\|^2 + \sum_{i=1}^k \lambda_i^2 \|\hat\gamma^i\|^2\,,$$
or
$$v(y^*) - v_k^* \;\le\; \frac{\|y^* - \hat y^1\|^2 + \sum_{i=1}^k \lambda_i^2 \|\hat\gamma^i\|^2}{2\sum_{i=1}^k \lambda_i}\,.$$
Next, we observe that kγ̂ i k2 is bounded by some constant Γ , independent of i , because our
algorithm takes γ̂ := h − E x̂ , where x̂ is a basic solution of a Lagrangian subproblem. There
are only a finite number of bases. Therefore, we can take
$$\Gamma = \max\bigl\{\, \|h - E\hat x\|^2 \;:\; \hat x \text{ is a basic solution of } Ax = b\,,\ x \ge 0 \,\bigr\}\,.$$
So, we have
$$v(y^*) - v_k^* \;\le\; \frac{\|y^* - \hat y^1\|^2 + \Gamma\sum_{i=1}^k \lambda_i^2}{2\sum_{i=1}^k \lambda_i}\,.$$
Now, we get our result by observing that $\|y^* - \hat y^1\|^2$ is a constant, $\sum_{i=1}^k \lambda_i^2$ converges to a constant, and $\sum_{i=1}^k \lambda_i$ goes to +∞ (as k increases without limit), and so the right-hand side of the final inequality converges to zero. The result follows. □
A simple implementation. It is very easy to write a small Gurobi/Python code for Subgra-
dient Optimization. Our code is in the Jupyter notebook SubgradProj.ipynb (see Appendix
A.9). Typical behavior is a very bad first iteration, then some iterations to recover from that,
and then a slow and steady convergence to an optimum. The method is usually stopped after
a predetermined number of iterations or after progress becomes very slow. In Figure 7.4, we
see this typical behavior, for a problem with 100 variables, 200 “complicating” constraints (i.e.,
rows of E), and 50 “nice” constraints (i.e., rows of A).
Figure 7.4: Example: Projected subgradient optimization with harmonic step sizes
Practical steps. Practically speaking, in order to get a ŷ with a reasonably high value of v(ŷ) ,
it can be better to choose a sequence of λk that depends on a “good guess” of the optimal value
of v(ŷ), taking bigger steps when one is far away, and smaller steps when one is close (try to
develop this idea in Exercise 7.3). A further idea is to take shorter steps when the subgradient has a big norm. With these ideas, we can achieve faster practical convergence of the algorithm; see Figure 7.5.
Dual estimation. From Theorem 7.5, we see that the Subgradient Optimization Method is a
way to try and quickly find an estimate of an optimal solution to the dual of (Q). At each step,
ŷ together with the π̂ that is optimal for the dual of (Lŷ ) give a feasible solution of the dual of (Q), with objective value v(ŷ). But note that we give something up — we do not get an x∗ that solves (Q)
from a ŷ that maximizes v and a π̂ that is optimal for the dual of (Lŷ ) . There is no guarantee
Figure 7.5: Example: Projected subgradient optimization with better step sizes
The cutting-stock problem is a nice concrete topic at this point. We will develop a technique
for it, using column generation, but the context is different than for decomposition. Moreover,
the topic is a nice segue into integer linear optimization — the topic of the next chapter.
The story is as follows. We have stock rolls of some type of paper of (integer) width W .
But we encounter (integer) demand di for rolls of (integer) width wi < W , for i = 1, 2, . . . , m .
The cutting-stock problem is to find a plan for satisfying demand, using as few stock rolls as
possible.
We endeavor to compute a basic optimum (x̄, t̄) . Because of the nature of the formulation, we can see that ⌈x̄⌉ is feasible for (CSP). Moreover, we have produced a solution using 1′⌈x̄⌉ stock rolls, and we can give an a priori bound on its quality. Specifically, as we will see in the next theorem, the solution that we obtain wastes at most m − 1 stock rolls, in comparison with an optimal solution. Moreover, we have a practically-computable bound on the number of wasted rolls, which is no worse than the worst-case bound of m − 1. That is, our waste is at worst 1′⌈x̄⌉ − ⌈z⌉ .
Theorem 7.9
Proof. Because the continuous relaxation of (CSP) is indeed a relaxation, and because the optimal value of (CSP) is an integer, ⌈z⌉ is at most the optimal value of (CSP), where z denotes the optimal value of the relaxation. Because ⌈x̄⌉ is a feasible solution of (CSP), the optimal value of (CSP) is at most 1′⌈x̄⌉ . Now, $1'\lceil\bar x\rceil = \sum_{i=1}^m (\bar x_{\beta_i} + f_i)$, with each $f_i < 1$. But $\sum_{i=1}^m (\bar x_{\beta_i} + f_i) = 1'\bar x + \sum_{i=1}^m f_i \le \lceil 1'\bar x\rceil + \sum_{i=1}^m f_i$. Therefore, $1'\lceil\bar x\rceil \le \lceil 1'\bar x\rceil + \sum_{i=1}^m f_i$. Now the left-hand side of this last inequality is an integer, so we may round the right-hand side down, obtaining $1'\lceil\bar x\rceil \le \lceil 1'\bar x\rceil + \lfloor\sum_{i=1}^m f_i\rfloor \le \lceil z\rceil + (m-1)$, using $\lceil 1'\bar x\rceil = \lceil z\rceil$ and the fact that each of the m values $f_i$ is less than 1. □
We suppose that we have a feasible basis of (CSP) and that we have, at hand, the associated
dual solution ȳ . For each i , 1 ≤ i ≤ m , the reduced cost of ti is simply ȳi . Therefore, if ȳi < 0 ,
then ti is eligible to enter the basis.
So, moving forward, we may assume that ȳi ≥ 0 for all i . We now want to examine the
reduced cost of an xj variable. The reduced cost is simply
$$1 - \bar y' A_j = 1 - \sum_{i=1}^m \bar y_i a_{ij}\,.$$
The variable x_j is eligible to enter the basis, then, if $1 - \sum_{i=1}^m \bar y_i a_{ij} < 0$. Therefore, to check whether there is some column x_j with negative reduced cost, we can solve the so-called knapsack problem
$$\begin{array}{rl}
\max & \sum_{i=1}^m \bar y_i a_i \\
& \sum_{i=1}^m w_i a_i \le W\,;\\
& a_i \ge 0 \ \text{integer},\ i = 1,\ldots,m\,,
\end{array}$$
and check whether the optimal value is greater than one. If it is, then the new variable that we
associate with this solution pattern (i.e., column of the constraint matrix) is eligible to enter the
basis.
Our algorithmic approach for the knapsack problem is via recursive optimization (known
popularly as dynamic programming10 ). We will solve this problem for all positive integers up
through W . That is, we will solve
$$f(s) := \max\Bigl\{\, \sum_{i=1}^m \bar y_i a_i \;:\; \sum_{i=1}^m w_i a_i \le s\,;\ a_i \ge 0 \ \text{integer},\ i = 1,\ldots,m \,\Bigr\}\,,$$
for s = 0, 1, . . . , W , via the recursion $f(s) = \max\{\, \bar y_i + f(s - w_i) \;:\; i \text{ with } w_i \le s \,\}$, with f(s) := 0 when no $w_i$ is at most s .
It is important to note that we can always calculate f (s) provided that we have already calcu-
lated f (s0 ) for all s0 < s . Why does this work? It follows from a very simple observation: If
we have optimally filled a knapsack of capacity s and we remove any item i, then what remains
optimally fills a knapsack of capacity s − wi . If there were a better way to fill the knapsack of
capacity s − wi , then we could take such a way, replace the item i , and we would have found
a better way to fill a knapsack of capacity s . Of course, we do not know even a single item that
we can be sure is in an optimally filled knapsack of capacity s , and this is why in the recursion,
we maximize over all items that can fit in (i.e., i : wi ≤ s).
The recursion appears to calculate the value of f (s) , but it is not immediate how to recover
optimal values of the ai . Actually, this is rather easy.
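Here is a short pure-Python sketch of the recursion, including the easy recovery of an optimal pattern by remembering, for each capacity s, one item used in an optimal filling. The function and variable names are just illustrative; it assumes ȳ ≥ 0 and positive integer widths:

def knapsack_pattern(ybar, w, W):
    # f[s] = best value achievable with capacity s; take[s] = one item in such a filling
    m = len(w)
    f = [0.0] * (W + 1)
    take = [None] * (W + 1)
    for s in range(1, W + 1):
        for i in range(m):
            if w[i] <= s and ybar[i] + f[s - w[i]] > f[s]:
                f[s] = ybar[i] + f[s - w[i]]
                take[s] = i
    # recover an optimal pattern a for capacity W by peeling off remembered items
    a = [0] * m
    s = W
    while take[s] is not None:
        a[take[s]] += 1
        s -= w[take[s]]
    return f[W], a

If the returned value f(W) exceeds one, the recovered pattern a gives a column that is eligible to enter the basis.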
Basic solutions: dual and primal. At any iteration, the basis matrix B has some columns
corresponding to patterns and possibly other columns for ti variables. The column correspond-
ing to ti is −ei .
Organizing the basic variables xj and ti into a vector ζ , with their order appropriately
matched with the columns of B , the vector ζ̄ of values of ζ is precisely the solution of
Bζ = d .
That is,
ζ̄ = B −1 d .
The cost of an xj is 1, while the cost of a ti is 0. Organizing the costs of the basic variables
into a vector ξ , with their order appropriately matched with the columns of B , the associated
dual solution ȳ is precisely the solution of
y0 B = ξ0 .
That is,
ȳ 0 = ξ 0 B −1 .
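These two solves are easy to mirror in numpy; the tiny basis matrix below is purely hypothetical (two pattern columns and one −e_i column), chosen only to show the mechanics:

import numpy as np

B = np.array([[2., 0.,  0.],
              [0., 3.,  0.],
              [1., 0., -1.]])     # hypothetical basis: two patterns and the column -e_3
d = np.array([44., 30., 20.])     # demands
xi = np.array([1., 1., 0.])       # costs of the basic variables (1 for patterns, 0 for t's)

zeta_bar = np.linalg.solve(B, d)      # basic values:  B zeta = d
y_bar = np.linalg.solve(B.T, xi)      # dual solution: y'B = xi', i.e. B'y = xi
print(zeta_bar, y_bar)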
x[ 0 ]= 0.0
x[ 1 ]= 0.0
x[ 2 ]= 71.5
x[ 3 ]= 0.0
x[ 4 ]= 35.5556
x[ 5 ]= 205.0
x[ 6 ]= 1058.0
x[ 7 ]= 10.3333
***** y': [0.6111 0.3889 0.5 0.2222 0.3333]
x[ 5 ]= 205.0
x[ 6 ]= 1033.3846
x[ 7 ]= 18.5385
x[ 8 ]= 49.2308
***** Optimal LP objective value: 1377.6538461538462
***** rounds up to: 1378.0 (lower bound on rolls needed)
***** x rounded up:
x[ 0 ]= 0.0
x[ 1 ]= 0.0
x[ 2 ]= 72.0
x[ 3 ]= 0.0
x[ 4 ]= 0.0
x[ 5 ]= 205.0
x[ 6 ]= 1034.0
x[ 7 ]= 19.0
x[ 8 ]= 50.0
***** Number of rolls used: 1380.0
By solving a further integer-linear optimization problem to determine the best way to cover
demand using all patterns generated in the course of our algorithm, we improve the upper
bound to 1379.
***** Now solve the ILP over all patterns generated to try and get a better soution...
***** x:
x[ 0 ]= 0.0
x[ 1 ]= 0.0
x[ 2 ]= 72.0
x[ 3 ]= 1.0
x[ 4 ]= 1.0
x[ 5 ]= 205.0
x[ 6 ]= 1034.0
x[ 7 ]= 17.0
x[ 8 ]= 49.0
***** Number of rolls used: 1379.0
It remains unknown as to whether the optimal solution to this instance is 1378 or 1379.
7.4 Exercises
Exercise 7.1 (Dual solutions)
Refer to (Q) and (M) defined in the Decomposition Theorem (i.e., Corollary 7.2). What is the relationship between optimal dual solutions of (Q) and (M)?
Chapter 8
Integer-Linear Optimization
0 ≤ xe ≤ ue , ∀e∈A.
As we have stated this, it is just a structured linear-optimization problem. But there are many
situations where the given net supplies at the nodes and the given flow capacities on the arcs
are integer, and we wish to constrain the flow variables to be integers.
We will see that it is useful to think of the network-flow problem in matrix-vector language.
We define the network matrix of G to be a matrix A having rows indexed from N , columns
indexed from A , and entries
$$a_{ve} := \begin{cases} 1\,, & \text{if } v = t(e)\,;\\ -1\,, & \text{if } v = h(e)\,;\\ 0\,, & \text{if } v \notin \{t(e), h(e)\}\,, \end{cases}$$
for v ∈ N , e ∈ A . With this notation, and organizing the bv in a column-vector indexed
accordingly with the rows of A , and organizing the ce , xe and ue as three column-vectors
indexed accordingly with the columns of A , we can rewrite the network-flow formulation as
min c0 x
Ax = b;
x ≤ u;
x ≥ 0.
A finite bipartite graph G is described by two finite sets of vertices V1 and V2 , and a set E of edges, each one of which is an ordered pair (i, j) with i ∈ V1 and j ∈ V2 . A
perfect matching M of G is a subset of E such that each vertex of the graph meets exactly one
edge in M . We assume that there are given edge weights
and our goal is to find a perfect matching that has minimum (total) weight.
We can define
for all (i, j) ∈ E . Then we can model the problem of finding a perfect matching of G having
minimum weight via the formulation:
$$\begin{array}{rll}
\min & \displaystyle\sum_{(i,j)\in E} c_{ij} x_{ij} & \\
& \displaystyle\sum_{j\in V_2:\,(i,j)\in E} x_{ij} = 1\,, & \forall\ i \in V_1\,;\\
& \displaystyle\sum_{i\in V_1:\,(i,j)\in E} x_{ij} = 1\,, & \forall\ j \in V_2\,;
\end{array}$$
for v ∈ V1 ∪ V2 , (i, j) ∈ E . With this notation, and organizing the c_{ij} and x_{ij} as column-vectors indexed accordingly with the columns of A , we can rewrite the assignment-problem formulation as
min c0 x
Ax = 1 ;
x ∈ {0, 1}E ,
Staffing problem. In this problem, we have discrete time periods numbered 1, 2, . . . , m , and
we are given
min c0 x
Ax ≥ b;
x ≥ 0 integer.
As we have stated it, this staffing problem is really a very general type of integer-linear-
optimization problem because we have not restricted the form of A beyond it being 0, 1-valued.
In some situations, however, it may be reasonable to assume that shifts must consist of a
consecutive set of time periods. In this case, the 1’s in each column of A occur consecutively, so
we call A a consecutive-ones matrix.
In this section we explore the essential properties of a constraint matrix so that basic solu-
tions are guaranteed to be integer. This has important implications for the network-flow, assign-
ment, and staffing problems that we introduced.
Let A be an m × n real matrix. A basis matrix Aβ is unimodular if det(Aβ ) = ±1 . Checking
whether a large unstructured matrix has all of its basis matrices unimodular is not a simple mat-
ter. Nonetheless, we will see that this property is very useful for guaranteeing integer optimal solutions of linear-optimization problems, and certain structured constraint matrices have this property.
Theorem 8.1
If A is an integer matrix, all basis matrices of A are unimodular, and b is an integer vector, then
every basic solution x̄ of
Ax = b ;
x ≥ 0
is an integer vector.
where Aβ (i) is defined to be the matrix Aβ with its i-th column, Aβi , replaced by b . Because
we assume that A and b are all integer, the numerator above is the determinant of an integer
matrix, which is an integer. Next, the fact that A has unimodular basis matrices tells us that the
determinant of the invertible matrix Aβ is ±1 . That is, the denominator above is ±1 . So, we
have an integer divided by ±1 , which results in an integer value for x̄βi . □
We note that Theorem 8.1 asserts that all basic solutions are integer, whether or not they are
feasible. There is a converse to this theorem.
Theorem 8.2
Let A be an integer matrix in Rm×n . If the system
Ax = b;
x ≥ 0
has integer basic feasible solutions for every integer vector b ∈ Rm , then all basis matrices of
A are unimodular.
It is important to note that the hypothesis of Theorem 8.2 is weaker than the conclusion of
Theorem 8.1. For Theorem 8.2, we only require integrality for basic feasible solutions.
Proof. (Theorem 8.2). Let β be an arbitrary basis, choose an arbitrary i (1 ≤ i ≤ m), and consider
the associated basic solution when b := ei + ∆Aβ 1 . The basic solution x̄ has x̄β equal to the i-th
column of A−1β plus ∆1 . Note that if we choose ∆ to be an integer, then b is integer. Furthermore,
if we choose ∆ to be sufficiently large, then x̄β is non-negative. Therefore, we can choose ∆ so
that b is integer and x̄ is a basic feasible solution. Therefore, by our hypothesis, x̄ is integer.
So the i-th column of $A_\beta^{-1}$ plus ∆1 is an integer vector. But this clearly implies that the i-th column of $A_\beta^{-1}$ is an integer vector. Now, because i was arbitrary, we conclude that $A_\beta^{-1}$ is an integer matrix. Of course $A_\beta$ is an integer matrix as well. Now, it is a trivial observation that
an integer matrix has an integer determinant. Furthermore, the determinants of Aβ and A−1 β
are reciprocals. Of course the only integers with integer reciprocal are 1 and −1 . Therefore, the
determinant of Aβ is 1 or −1 . □
has only itself as a basis matrix, and its determinant is 1, but there is a 1 × 1 submatrix with
determinant 2, so A is not TU. Still, as the next result indicates, there is a way to get the TU
property from unimodularity of basis matrices.
Theorem 8.3
If every basis matrix of [A, Im ] is unimodular, then A is TU.
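For small matrices, total unimodularity can be checked by brute force, directly from the definition (every square submatrix has determinant 0 or ±1). The sketch below is exponential in the matrix size and is meant only for experimenting with small examples; the network-matrix example at the end is TU by Theorem 8.6:

import numpy as np
from itertools import combinations

def is_totally_unimodular(A, tol=1e-9):
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    for r in range(1, min(m, n) + 1):
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                d = np.linalg.det(A[np.ix_(rows, cols)])
                if min(abs(d), abs(d - 1), abs(d + 1)) > tol:
                    return False          # a square submatrix with determinant not in {0, +1, -1}
    return True

A = np.array([[ 1,  1,  0],
              [-1,  0,  1],
              [ 0, -1, -1]])              # a network matrix: one +1 and one -1 per column
print(is_totally_unimodular(A))           # True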
Next, we point out some simple transformations that preserve the TU property.
Theorem 8.4
If A is TU, then all of the following leave A TU.
(i) multiplying any rows or columns of A by −1 ;
(ii) duplicating any rows or columns of A ;
(iii) appending standard-unit columns (that is, all entries equal to 0 except a single entry of
1) ;
(iv) taking the transpose of A .
Theorem 8.6
If A is a network matrix, then A is TU.
Proof. A network matrix is simply a 0 , ±1-valued matrix with exactly one +1 and one −1 in
each column.
Let B be an r × r invertible submatrix of the network matrix A . We will demonstrate that
det(B) = ±1 , by induction on r . For the base case, r = 1 , the invertible submatrices have a
single entry which is ±1 , which of course has determinant ±1 . Now suppose that r > 1 , and
we inductively assume that all (r − 1) × (r − 1) invertible submatrices of A have determinant
±1 .
Because we assume that B is invertible, it cannot have a column that is a zero-vector.
Moreover, it cannot be that every column of B has exactly one +1 and one −1: in that case, simply adding up all the rows of B would give a non-trivial linear combination of the rows of B yielding the zero vector, and so B would not be invertible.
So, we only need to consider the situation in which B has a column with a single non-
zero ±1 . By expanding the determinant along such a column, we see that, up to a sign, the
determinant of B is the same as the determinant of an (r − 1) × (r − 1) invertible submatrix of
A . By the inductive hypothesis, this is ±1 . □
Corollary 8.7
The single-commodity min-cost network-flow formulation
$$\begin{array}{rll}
\min & \displaystyle\sum_{e\in\mathcal{A}} c_e x_e & \\
& \displaystyle\sum_{e\in\mathcal{A}:\,t(e)=v} x_e \;-\; \sum_{e\in\mathcal{A}:\,h(e)=v} x_e \;=\; b_v\,, & \forall\ v \in \mathcal{N}\,;\\
& 0 \le x_e \le u_e\,, & \forall\ e \in \mathcal{A}
\end{array}$$
has an integer optimal solution if: (i) it has an optimal solution, (ii) each bv is an integer, and
(iii) each ue is an integer or is infinite.
Proof. Recall that we can rewrite the single-commodity min-cost network-flow formulation as
min c0 x
Ax = b;
x ≤ u;
x ≥ 0,
where A is a network matrix. For the purpose of proving the theorem, we may as well assume
that the linear-optimization problem has an optimal solution. Next, we transform the formula-
tion into standard form:
min c0 x
Ax = b;
x + s = u;
x , s ≥ 0.
The constraint matrix has the form $\begin{pmatrix} A & 0 \\ I & I \end{pmatrix}$. This matrix is TU, by virtue of the fact that A
is TU, and that it arises from A using operations that preserve the TU property. Finally, we
delete any redundant equations from this system of equations, and we delete any rows that
have infinite right-hand side ue . The resulting constraint matrix is TU, and the right-hand side
is integer, so an optimal basic solution exists and will be integer. □
Remark 8.8
Considering Example 2.1, we can see that Corollary 8.7 does not extend to more than one com-
modity.
Assignments.
Theorem 8.9
If A is the vertex-edge incidence matrix of a bipartite graph, then A is TU.
Proof. The constraint matrix A for the formulation has its rows indexed by the vertices of G .
With each edge having exactly one vertex in V1 and exactly one vertex in V2 , the constraint
matrix has the property that for each column, the only non-zeros are a single 1 in a row indexed
from V1 and a single 1 in a row indexed from V2 .
Certainly multiplying any rows (or columns) of a matrix by −1 does not bear upon whether or not it is TU. It is easy to see that by multiplying the rows of A indexed from V1 by −1, we obtain a network matrix; thus, by Theorem 8.6, the result follows. □
Corollary 8.10
The continuous relaxation of the following formulation for finding a minimum-weight perfect
matching of the bipartite graph G has an 0, 1-valued solution whenever it is feasible.
$$\begin{array}{rll}
\min & \displaystyle\sum_{(i,j)\in E} c_{ij} x_{ij} & \\
& \displaystyle\sum_{j\in V_2:\,(i,j)\in E} x_{ij} = 1\,, & \forall\ i \in V_1\,;\\
& \displaystyle\sum_{i\in V_1:\,(i,j)\in E} x_{ij} = 1\,, & \forall\ j \in V_2\,;\\
& x_{ij} \ge 0\,, & \forall\ (i,j) \in E\,.
\end{array}$$
Proof. After deleting any redundant equations, the resulting formulation has a TU constraint matrix and integer right-hand side. Therefore, its basic solutions are all integer. The constraints imply that no variable can be greater than 1, therefore the optimal value is not unbounded, and the only integer solutions have all x_{ij} ∈ {0, 1} . The result follows. □
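Corollary 8.10 is easy to observe computationally: solve the continuous relaxation of a small (hypothetical) assignment instance and note that the solver's vertex solution is 0/1-valued, because the constraint matrix is TU. A scipy sketch with random weights, only for illustration:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 3
c = rng.integers(1, 10, size=(n, n)).astype(float)   # weights on the complete bipartite graph

# vertex-edge incidence matrix: 2n rows (V1 then V2), n*n columns (edge (i,j) -> column i*n+j)
A = np.zeros((2 * n, n * n))
for i in range(n):
    for j in range(n):
        A[i, i * n + j] = 1.0
        A[n + j, i * n + j] = 1.0
b = np.ones(2 * n)

res = linprog(c.reshape(-1), A_eq=A, b_eq=b, bounds=[(0, None)] * (n * n))
print(np.round(res.x.reshape(n, n), 6))   # a 0/1 matrix describing a minimum-weight perfect matching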
A matching M of G is a subset of E such that each vertex of the graph is met by no more
than one edge in M . An interesting variation on the problem of finding a perfect matching of G
Proof. We can formulate the problem of finding the maximum cardinality of a matching of G
as follows:
$$\begin{array}{rll}
\max & \displaystyle\sum_{(i,j)\in E} x_{ij} & \\
& \displaystyle\sum_{j\in V_2:\,(i,j)\in E} x_{ij} \le 1\,, & \forall\ i \in V_1\,;\\
& \displaystyle\sum_{i\in V_1:\,(i,j)\in E} x_{ij} \le 1\,, & \forall\ j \in V_2\,;
\end{array}$$
yi + yj ≥ 1 , ∀ (i, j) ∈ E ;
yv ≥ 0, ∀v∈V .
It is easy to see that after putting this into standard form via the subtraction of surplus variables,
the constraint matrix has the form [A0 , −I] , where A is the vertex-edge incidence matrix of G .
This matrix is TU, therefore an optimal integer solution exists.
Next, we observe that because of the minimization objective and the form of the constraints,
an optimal integer solution will be 0, 1-valued; just observe that if ȳ is an integer feasible solution
and ȳv > 1 , for some v ∈ V , then decreasing ȳv to 1 (holding the other components of ȳ
constant) produces another integer feasible solution with a lesser objective value. This implies
that every integer feasible solution ȳ with any ȳv > 1 is not optimal.
Next, let ŷ be an optimal 0, 1-valued solution. Let
W := {v ∈ V : ŷv = 1} .
It is easy to see that W is a vertex cover of G and that $|W| = \sum_{v\in V} \hat y_v$. The result now follows from the Strong Duality Theorem. □
For studying matching in non-bipartite graphs, one can have a look at [3, Chapter 4].
Staffing.
Theorem 8.12
If A is a consecutive-ones matrix, then A is TU.
where F and G are the submatrices indicated. Note that F and G are each consecutive-ones
matrices.
Next, we subtract the top row from all other rows that have a 1 in the first column. Such
row operations do not change the determinant of B , and we get a matrix of the form
$$\begin{pmatrix}
1 & 1\cdots 1 & 0\cdots 0\\
0 & 0\cdots 0 & \\
\vdots & \vdots & G\\
0 & 0\cdots 0 & \\
0 & & \\
\vdots & F & \\
0 & &
\end{pmatrix}.$$
Note that this resulting matrix need not be a consecutive-ones matrix — but that is not
needed. By expanding the determinant of this latter matrix along the first column, we see that
the determinant of this matrix is the same as that of the matrix obtained by striking out its first
row and column,
$$\begin{pmatrix}
0\cdots 0 & \\
\vdots & G\\
0\cdots 0 & \\
F &
\end{pmatrix}.$$
Corollary 8.13
Let A be a shift matrix such that each shift is a contiguous set of time periods, let c be a vector
of non-negative costs, and let b be a vector of non-negative integer demands for workers in the
time periods. Then there is an optimal solution x̄ of the continuous relaxation
min c0 x
Ax ≥ b;
x ≥ 0
of the staffing formulation that has x̄ integer, whenever the relaxation is feasible.
Proof. A is a consecutive-ones matrix when each shift is a contiguous set of time periods. There-
fore A is TU. After subtracting surplus variables to put the problem into standard form, the
constraint matrix takes the form [A, −I] , which is also TU. The result follows. □
8.2 Modeling techniques
8.2.1 Disjunctions
Example 8.14
Suppose that we have a single variable x ∈ R , and we want to model the disjunction
−12 ≤ x ≤ 2 or 5 ≤ x ≤ 20 .
x ≤ 2 + M1 y ,
x + M2 (1 − y) ≥ 5,
where the constant scalars M1 and M2 (so-called big M’s) are chosen to be appropriately large.
A little analysis tell us how large. Considering our assumption that x could be as large as 20 ,
we see that M1 should be at least 18 . Considering our assumption that x could be as small as
−12 , we see that M2 should be at least 17 . In fact, we should choose these constants to be as
small as possible so as make the feasible region with y ∈ {0, 1} relaxed to 0 ≤ y ≤ 1 as small as
possible. So, the best model for us is:
x ≤ 2 + 18y ,
x + 17(1 − y) ≥ 5.
It is interesting to see a two-dimensional graph of this in x − y space; see Figures 8.1 and 8.2.
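A tiny Python/Gurobi sketch of this model; the objective below is arbitrary, chosen only to exercise the disjunction, and this is a fresh illustration rather than any particular notebook of the book:

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("disjunction")
m.Params.OutputFlag = 0
x = m.addVar(lb=-12.0, ub=20.0, name="x")
y = m.addVar(vtype=GRB.BINARY, name="y")   # y = 0 picks [-12, 2], y = 1 picks [5, 20]
m.addConstr(x <= 2 + 18 * y)
m.addConstr(x + 17 * (1 - y) >= 5)
m.setObjective(x, GRB.MAXIMIZE)            # just as a test of the model
m.optimize()
print(x.X, y.X)                            # expect x = 20, y = 1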
cij := cost for satisfying all of customer j’s demand from facility i ,
for i = 1, . . . , m , and
for i = 1, . . . , m , j = 1, . . . , n .
−y_i + x_{ij} ≤ 0 ,   for i = 1, . . . , m , j = 1, . . . , n ;     (S)
x_{ij} ≥ 0 ,   for i = 1, . . . , m , j = 1, . . . , n .
These constraints simply enforce that for any feasible solution x̂, ŷ , we have that ŷi = 1 when-
ever x̂ij > 0 . It is an interesting point that this could also be enforced via the m constraints:
$$-n\,y_i + \sum_{j=1}^n x_{ij} \le 0\,, \quad \text{for } i = 1,\ldots,m\,. \qquad (\text{W})$$
We can view the coefficient −n of yi as a “big M”, rendering the constraint vacuous when yi = 1 .
Despite the apparent parsimony of the latter formulation, it turns out that the original formu-
lation is preferred. The Python/Gurobi code in the Jupyter notebook UFL.ipynb can be used to compare the use of (S) versus (W) (see Appendix A.11).
This adjacency condition means that we “activate” the interval [ξ j , ξ j+1 ] for approximating
f (x) . That is, we will approximate f (x) by
λj f (ξ j ) + λj+1 f (ξ j+1 ) ,
with
λj + λj+1 = 1;
λj , λj+1 ≥ 0.
for j = 1, 2, . . . , n − 1 .
The situation is depicted in Figure 8.3, where the red curve graphs the non-linear function
f.
We only want to allow one of the n − 1 intervals to be activated, so we use the constraint
n−1
X
yj = 1 .
j=1
We only want to allow λ1 > 0 if the first interval [ξ 1 , ξ 2 ] is activated. For an internal breakpoint
ξ j , 1 < j < n , we only want to allow λj > 0 if either [ξ j−1 , ξ j ] or [ξ j , ξ j+1 ] is activated. We
only want to allow λn > 0 if the last interval [ξ n−1 , ξ n ] is activated. We can accomplish these
restrictions with the constraints
λ1 ≤ y1 ;
λj ≤ yj−1 + yj , for j = 2, . . . , n − 1 ;
λn ≤ yn−1 .
Notice how if y_k is 1 , for some k (1 ≤ k ≤ n − 1), and necessarily all of the other y_j are 0 (j ≠ k), then only λ_k and λ_{k+1} can be positive.
How do we actually use this? If we have a model involving such a non-linear f(x), then wherever we have f(x) in the model, we simply substitute $\sum_{j=1}^n \lambda_j f(\xi^j)$ , and we incorporate
the further constraints:
$$\begin{array}{ll}
\sum_{j=1}^{n} \lambda_j = 1; & \\
\sum_{j=1}^{n-1} y_j = 1; & \\
\lambda_1 \le y_1; & \\
\lambda_j \le y_{j-1} + y_j\,, & \text{for } j = 2,\ldots,n-1;\\
\lambda_n \le y_{n-1}; & \\
\lambda_j \ge 0\,, & \text{for } j = 1,\ldots,n;\\
y_j \in \{0,1\}\,, & \text{for } j = 1,\ldots,n-1.
\end{array}$$
Of course a very non-linear f(x) will demand an $\hat f(x) := \sum_{j=1}^n \lambda_j f(\xi^j)$ with a high value for n , so as to get an accurate approximation. And higher values for n imply more binary variables y_j , which come at a high computational cost.
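The construction is mechanical enough to wrap in a small Python/Gurobi helper. The sketch below assumes, as is standard for this device, that x itself is represented through the same λ's (x = Σ_j λ_j ξ^j), and it returns the linear expression that stands in for f(x); the helper name and arguments are illustrative:

import gurobipy as gp
from gurobipy import GRB

def add_piecewise_linear(m, x, xi, fvals):
    # xi: list of breakpoints xi^1, ..., xi^n; fvals: the values f(xi^j)
    n = len(xi)
    lam = m.addVars(n, lb=0.0, name="lam")
    y = m.addVars(n - 1, vtype=GRB.BINARY, name="y")
    m.addConstr(lam.sum() == 1)
    m.addConstr(y.sum() == 1)
    m.addConstr(lam[0] <= y[0])                       # lambda_1 <= y_1
    for j in range(1, n - 1):
        m.addConstr(lam[j] <= y[j - 1] + y[j])        # lambda_j <= y_{j-1} + y_j
    m.addConstr(lam[n - 1] <= y[n - 2])               # lambda_n <= y_{n-1}
    m.addConstr(x == gp.quicksum(lam[j] * xi[j] for j in range(n)))   # tie x to the breakpoints
    return gp.quicksum(lam[j] * fvals[j] for j in range(n))           # stands in for f(x)

One would then use the returned expression wherever f(x) appears in the model.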
8.3 A prelude to algorithms
z := max y 0 b
y 0 A ≤ c0 ;
(DI )
y ∈ Rm ;
yi integer, for i ∈ I .
The set I ⊂ {1, 2, . . . , m} allows for a given subset of the variables to be constrained to be integer.
This linear-optimization problem has a non-standard form, but it is convenient that the dual of
the continuous relaxation has the standard form
min c0 x
Ax = b; (P)
x ≥ 0.
To prove that an algorithm for (DI ) is finite, it is helpful to assume that the feasible region
of the continuous relaxation (D) of (DI ) is non-empty and bounded.
We saw in Section 7.3.2 that there are situations in which rounding the solution of a continu-
ous relaxation can yield a good solution to an optimization problem involving integer variables.
But generally, this is not the case.
Example 8.15
Consider the problem
max y2
2ky1 + y2 ≤ 2k ;
− 2ky1 + y2 ≤ 0;
− y2 ≤ 0;
y1 , y2 integer,
where k ≥ 1 is a possibly large positive integer. It is easy to check that (y1 , y2 ) = (0, 0) and
(y1 , y2 ) = (1, 0) are both optimal solutions of this problem, but the optimal solution of the con-
tinuous relaxation is (y1 , y2 ) = (1/2, k). If we consider rounding y1 up or down in the continuous
solution, we do not get a feasible solution, and moreover we are quite far from the optimal
solutions.
Example 8.16
Consider the problem
Pm
max i=1 yi
yi + y` ≤ 1, for all 1 ≤ i < ` ≤ m;
−yi ≤ 0, for all 1 ≤ i ≤ m;
yi integer, for all 1 ≤ i ≤ m,
where m ≥ 3 is a possibly large positive integer. It is easy to check that each integer optimal
solution sets any single variable to one and the rest to zero, achieving objective value 1. While
the (unique) continuous optimal solution sets all variables to 1/2, achieving objective value m/2.
We can see that the continuous solution is not closely related to the integer solutions.
8.4 Branch-and-Bound
Every feasible solution of the original problem (DI ) with greater objective value than LB is feasible
for a problem on the list.
We stop when the list is empty, and because of the property that we maintain, we correctly
conclude that the optimal value of (DI ) is LB when we do stop.
At a general step of the algorithm, we select and remove a problem (D̃I ) on the list, and
we solve its continuous relaxation (D̃). If this continuous relaxation is infeasible, then we do
nothing further with this problem. Otherwise, we let ȳ be its optimal solution, and we proceed
as follows.
• If ȳ 0 b ≤ LB , then no feasible solution to the selected problem can have objective value
greater than LB, so we are done processing this selected problem.
• If ȳi is integer for all i ∈ I , then we have solved the selected problem. In this case, if
ȳ 0 b > LB , then we
– reset LB to ȳ 0 b ;
– reset ȳLB to ȳ .
• Finally, if ȳ′b > LB and ȳ_i is not an integer for some i ∈ I , then (it is possible that this selected problem has a feasible solution that is better than ȳ_LB , so) we
– select some i ∈ I such that ȳi is not integer;
– place two new “child” problems on the list, one with the constraint $y_i \le \lfloor\bar y_i\rfloor$ appended (the so-called down branch), and the other with the constraint $y_i \ge \lceil\bar y_i\rceil$ appended (the so-called up branch).
(observe that every feasible solution to a parent is feasible for one of its children, if
it has children.)
Because the key invariant for branch-and-bound is maintained by the processing rules, the
following result is evident.
Theorem 8.17
Suppose that the original (P) is feasible. Then at termination of branch-and-bound, we have
LB = −∞ if (DI ) is infeasible, or LB = z with ȳLB being an optimal solution of (DI ).
Finite termination. If the feasible region of the continuous relaxation (D) of (DI ) is a bounded
set, then we can guarantee finite termination. If we do not want to make such an assumption,
then if we assume that the data for the formulation is rational, it is possible to bound the region
that needs to be searched, and we can again assure finite termination.
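To make the processing rules above concrete, here is a minimal branch-and-bound sketch in Python/Gurobi. It solves each continuous relaxation from scratch (rather than warm-starting, as described next), selects subproblems last-in/first-out, and assumes, as above, that the feasible region of (D) is non-empty and bounded; the function and variable names are mine.

import math
import gurobipy as gp
from gurobipy import GRB

def branch_and_bound(A, b, c, I, tol=1e-6):
    """A, b, c: numpy arrays for (D_I): max y'b s.t. y'A <= c', y_i integer for i in I.
    Assumes the feasible region of the continuous relaxation (D) is non-empty and bounded."""
    m, n = A.shape

    def solve_relaxation(branches):
        # branches: list of (i, sense, bound) with sense '<=' (down) or '>=' (up)
        mod = gp.Model()
        mod.Params.OutputFlag = 0
        y = mod.addMVar(m, lb=-GRB.INFINITY)
        mod.addConstr(y @ A <= c)
        for (i, sense, bd) in branches:
            mod.addConstr(y[i] <= bd if sense == '<=' else y[i] >= bd)
        mod.setObjective(y @ b, GRB.MAXIMIZE)
        mod.optimize()
        return mod, y

    LB, ybar_LB = -math.inf, None
    todo = [[]]                                  # the list: each entry is a set of branching constraints
    while todo:
        branches = todo.pop()                    # last-in/first-out: "diving"
        mod, y = solve_relaxation(branches)
        if mod.status != GRB.Status.OPTIMAL:     # infeasible relaxation: discard
            continue
        if mod.ObjVal <= LB + tol:               # cannot beat the incumbent: discard
            continue
        frac = [i for i in I if abs(y.X[i] - round(y.X[i])) > tol]
        if not frac:                             # y_i integer for all i in I: new incumbent
            LB, ybar_LB = mod.ObjVal, y.X.copy()
            continue
        i = frac[0]                              # branch on a fractional y_i
        todo.append(branches + [(i, '<=', math.floor(y.X[i]))])   # down branch
        todo.append(branches + [(i, '>=', math.ceil(y.X[i]))])    # up branch
    return LB, ybar_LB

# usage: LB, ybar = branch_and_bound(A, b, c, I=[0, 1])   # with A, b, c as numpy arrays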
Solving continuous relaxations. Some remarks are in order regarding the solution of con-
tinuous relaxations. Conceptually, we apply the Simplex Algorithm to the dual (P̃) of the con-
tinuous relaxation (D̃) of a problem (D̃I ) selected and removed from the list. At the outset, for
an optimal basis β of (P̃), the optimal dual solution is given by ȳ′ := c′β Aβ⁻¹ . If i ∈ I is chosen,
such that ȳi is not an integer, then we replace the selected problem (D̃I ) with one child having
the additional constraint yi ≤ bȳi c (the down branch) and another with the constraint yi ≥ dȳi e
appended (the up branch).
Adding a constraint to (D̃) adds a variable to the standard-form problem (P̃). So, a basis
for (P̃) remains feasible after we introduce such a variable.
• The down branch: The constraint yi ≤ ⌊ȳi⌋ dualizes to a new variable xdown in (P̃). The
variable xdown has a new column Adown := ei and a cost coefficient of cdown := ⌊ȳi⌋ . Notice
that the fact that ȳi is not an integer (and hence ȳ violates yi ≤ ⌊ȳi⌋) translates into the
fact that the reduced cost c̄down of xdown is c̄down = cdown − ȳ′Adown = ⌊ȳi⌋ − ȳi < 0 , so xdown
is eligible to enter the basis.
• The up branch: Similarly, the constraint yi ≥ ⌈ȳi⌉ , or equivalently −yi ≤ −⌈ȳi⌉ , dualizes
to a new variable xup in (P̃). The variable xup has a new column Aup := −ei and a cost
coefficient of cup := −⌈ȳi⌉ . Notice that the fact that ȳi is not an integer (and hence ȳ
violates yi ≥ ⌈ȳi⌉) translates into the fact that the reduced cost c̄up of xup is c̄up = cup −
ȳ′Aup = −⌈ȳi⌉ + ȳi < 0 , so xup is eligible to enter the basis.
In either case, provided that we have kept the optimal basis for the (P̃) associated with a
problem (D̃I ), the Simplex Algorithm picks up on the (P̃) associated with a child (D̃I ) of that
problem, with the new variable of the child's (P̃) entering the basis.
Notice that the (P̃) associated with a problem (D̃I ) on the list could be unbounded. But
this just implies that the problem (D̃I ) is infeasible.
Partially solving continuous relaxations. Notice that as the Simplex Algorithm is applied
to the (P̃) associated with any problem (D̃I ) from the list, we generate a sequence of non-
increasing objective values, each one of which is an upper bound on the optimal objective value
of (D̃I ). That is, for any such (P̃), we start with the upper-bound value of its parent, and then we
gradually decrease it, step-by-step of the Simplex Algorithm. At any point in this process, if the
objective value of the Simplex Algorithm falls at or below the current LB, we can immediately
terminate the Simplex Algorithm on such a (P̃) — its optimal objective value will be no greater
than LB — and conclude that the optimal objective value of (D̃I ) is no greater than LB.
A global upper bound. As the algorithm progresses, if we let UBbetter be the maximum, over
all problems on the list, of the objective values of their continuous relaxations, then any feasible
solution ŷ with objective value greater than LB satisfies ŷ′b ≤ UBbetter . Of course, it may be
that no optimal solution of (DI ) is feasible for any problem on the list — for example if it happens that
LB = z . But we can see that UB := max{ LB , UBbetter } is always an upper bound on z .
It may be useful to have UB at hand, because we can always stop the computation early, say
when UB − LB < τ , returning the feasible solution ȳLB , with the knowledge that z − ȳ′LB b ≤ τ . But
notice that we do not readily have the objective value of the continuous relaxation for problems
on the list — we only solve the continuous relaxation for such a problem after it is selected
(for processing). But, for every problem on the list, we can simply keep track of the optimal
objective value of its parent’s continuous relaxation, and use that instead. Alternatively, we can
re-organize our computations a bit, solving continuous relaxations of subproblems before we
put them on the list.
Selecting a subproblem from the list. Which subproblem from the list should we process
next?
• A strategy of last-in/first-out, known as diving, often results in good increases in LB. To
completely specify such a strategy, one would have to decide which of the two children
of a subproblem is put on the list last (i.e., the down branch or the up branch). A good
choice can affect the performance of this rule, and such a good choice depends on the
type of model being solved.
8.4. BRANCH-AND-BOUND 109
• A strategy of first-in/first-out is very bad. It can easily result in an explosion in the size of
the list of subproblems.
• A strategy of choosing a subproblem to branch on having objective value for its contin-
uous relaxation equal to UB, known as best bound, is a sound strategy for seeking a
decrease in UB. If such a rule is desired, then it is best to solve continuous relaxations of
subproblems before we put them on the list.
A hybrid strategy, doing mostly diving at the start (to get a reasonable value of LB) and shifting
more and more to best bound (to work on proving that LB is at or near the optimal value) has
rather robust performance.
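As a sketch of how these selection disciplines can be organized in code (the class names are mine; each list entry carries the optimal value of its parent's continuous relaxation, which serves as an upper bound for the subproblem):

import heapq, itertools

class DivingPool:
    """Last-in/first-out selection: diving."""
    def __init__(self):
        self.items = []
    def push(self, parent_bound, sub):
        self.items.append((parent_bound, sub))
    def pop(self):
        return self.items.pop()
    def __bool__(self):
        return bool(self.items)

class BestBoundPool:
    """Always select a subproblem whose (parent) bound is largest."""
    def __init__(self):
        self.heap, self._tie = [], itertools.count()   # counter breaks ties in the heap
    def push(self, parent_bound, sub):
        heapq.heappush(self.heap, (-parent_bound, next(self._tie), sub))
    def pop(self):
        nb, _, sub = heapq.heappop(self.heap)
        return -nb, sub
    def __bool__(self):
        return bool(self.heap)

A hybrid strategy would start with a DivingPool and, once LB has stabilized, move the remaining entries into a BestBoundPool.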
Selecting a branching variable. Probably very many times, we will need to choose an i ∈ I
for which ȳi is fractional, in order to branch and create the child subproblems. Which such i
should we choose? Naïve rules such as choosing randomly or the so-called most-fractional rule
of choosing an i that maximizes min{ ȳi − ⌊ȳi⌋ , ⌈ȳi⌉ − ȳi } seem to have rather poor performance.
Better rules are based on estimates of how the objective value of the children will change relative
to the parent.
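For illustration, here are the two kinds of rules side by side (a sketch; the per-variable estimates used by the second rule are assumed to be maintained elsewhere, e.g., from the objective changes observed on earlier branchings):

import math

def most_fractional(ybar, I, tol=1e-6):
    """The naive rule criticized above: pick i maximizing min{frac, 1 - frac}."""
    frac = {i: ybar[i] - math.floor(ybar[i]) for i in I}
    candidates = [i for i in I if tol < frac[i] < 1 - tol]
    return max(candidates, key=lambda i: min(frac[i], 1 - frac[i]), default=None)

def estimate_based(ybar, I, down_rate, up_rate, tol=1e-6):
    """A rule in the spirit of pseudocost branching: down_rate[i] / up_rate[i] are
    (hypothetical) estimates of objective decrease per unit of rounding y_i down / up."""
    best, best_score = None, -1.0
    for i in I:
        f = ybar[i] - math.floor(ybar[i])
        if f < tol or f > 1 - tol:
            continue
        score = min(down_rate[i] * f, up_rate[i] * (1 - f))
        if score > best_score:
            best, best_score = i, score
    return best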
Using dual variables to bound the “other side” of an inequality. Our constraint
system y′A ≤ c′ can be viewed as y′A·j ≤ cj , for j = 1, 2, . . . , n ; that is, cj is an upper bound on
y′A·j . We may wonder whether we can also derive lower bounds on y′A·j .
Theorem 8.18
Let LB be the objective value of any feasible solution of (DI ). Let x̄ be an optimal solution of
(P), and assume that x̄j > 0 for some j . Then
cj + (LB − c′x̄)/x̄j ≤ y′A·j , for every solution y of y′A ≤ c′ having y′b ≥ LB .
z(∆j ) := max y′b
              y′A ≤ c′ + ∆j e′j ;                    (DI (∆j ))
              y ∈ Rm ;
              yi integer, for i ∈ I .
Let zR (∆j ) be defined the same way as z(∆j ), but with integrality relaxed. Using ideas from
Chapters 6 and 7, we can see that zR is a concave (piecewise-linear) function on its domain, and
x̄j is a subgradient of zR at ∆j = 0 . It follows that zR (∆j ) ≤ zR (0) + x̄j ∆j , for all ∆j in the domain of zR .
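As a quick numeric illustration of Theorem 8.18 (a sketch; the data is that of Example 8.23 below, where x̄ = (2, 3/2, 0, 0, 0)′ is optimal for (P) and LB = 460 is the objective value of the feasible solution y = (25, −10) of (DI )):

import numpy as np

c    = np.array([126., 141., -10., 5., 67.])
xbar = np.array([2., 1.5, 0., 0., 0.])
LB   = 460.0

cx = c @ xbar                                   # c'xbar = 463.5
for j in np.nonzero(xbar > 0)[0]:
    lower = c[j] + (LB - cx) / xbar[j]
    print(f"y'A_{j} >= {lower:.3f}")            # lower bounds on y'A_j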
8.5 Cutting Planes
8.5.1 Pure
In this section, we assume that all yi variables are constrained to be integer. That is, I =
{1, 2, . . . , m} .
We can choose any non-negative w ∈ Rn , and we see that
w ≥ 0 and y′A ≤ c′ =⇒ y′(Aw) ≤ c′w .
Note that this inequality is valid for all solutions of y′A ≤ c′ , integer or not. Next, if Aw is integer,
we can exploit the integrality of y . We see that
Aw ∈ Zm , y ∈ Zm =⇒ y′(Aw) ≤ ⌊c′w⌋ ,
Theorem 8.20
ȳ′b̃ is not an integer, and so y′b̃ ≤ ⌊ȳ′b̃⌋ cuts off ȳ .
At this point, we have an inequality y′b̃ ≤ ⌊ȳ′b̃⌋ which cuts off ȳ , but we have not established
its validity for (DI ).
Let H·i := Aβ⁻¹ ei , the i-th column of Aβ⁻¹ . Now let
w := H·i + r .
Theorem 8.21
Choosing r ∈ Zm satisfying (∗≥ ), we have that y′b̃ ≤ ⌊ȳ′b̃⌋ is valid for (DI ).
Proof. Because w := Aβ⁻¹ ei + r ≥ 0 and y′Aβ ≤ c′β , the inequality
y′Aβ (Aβ⁻¹ ei + r) ≤ c′β (Aβ⁻¹ ei + r)
is valid even for the continuous relaxation (D) of (DI ). Simplifying this, the left-hand side is y′b̃
and the right-hand side is ȳ′b̃ . So we have that y′b̃ ≤ ȳ′b̃ is valid even for (D). Finally, observing
that b̃ ∈ Zm and y is constrained to be in Zm for (DI ), we can round down the right-hand side
and get the result. □
So, given any non-integer basic dual solution ȳ , we have a way to produce a valid inequality
for (DI ) that cuts it off. This cut for (DI ) is used as a column for (P): the column is b̃ with objec-
tive coefficient ⌊ȳ′b̃⌋ . Taking β to be an optimal basis for (P), the new variable corresponding to
this column is the unique variable eligible to enter the basis in the context of the primal simplex
algorithm applied to (P) — the reduced cost is precisely ⌊ȳ′b̃⌋ − ȳ′b̃ < 0 .
The new column for A is b̃ , which is integer. The new objective coefficient for c is ⌊ȳ′b̃⌋ ,
which is an integer. So the original assumption that A and c are integer is maintained, and we
can repeat. In this way, we get a legitimate cutting-plane framework for (DI ) — though we
emphasize that we do our computations as column generation with respect to (P).
There is clearly a lot of flexibility in how r can be chosen. Next, we demonstrate that in a
very concrete sense, it is always best to choose a minimal r ∈ Zm satisfying (∗≥ ).
Theorem 8.22
Let r ∈ Zm be defined by
rk := −⌊hki⌋ , for k = 1, . . . , m ,                    (∗= )
and suppose that r̂ ∈ Zm satisfies (∗≥ ) and r ≤ r̂ . Then the cut determined by r dominates
the cut determined by r̂ .
Proof. Noting that c′β − y′Aβ ≥ 0 for all y that are feasible for (D), we see that the strongest inequality
is obtained by choosing r ∈ Zm to be minimal. □
Example 8.23
We work through an example in pure_gomory_example_1.ipynb (see Appendix A.12), which
again uses pivot_tools.ipynb (see Appendix A.6). The function library pivot_tools.ipynb
contains two (additional) useful tools for this: pure_gomory( ) and dual_plot( , ).
Let
A = [[ 7, 8, −1, 1, 3 ], [ 5, 6, −1, 2, 1 ]] ,   b = ( 26 , 19 )′ ,
and c′ = ( 126 , 141 , −10 , 5 , 67 ) .
So, the integer program (DI ) which we seek to solve is defined by five inequalities in the two
variables y0 and y1 . For the basis of (P), β = (0, 1), we have
Aβ = [[ 7, 8 ], [ 5, 6 ]] , and hence Aβ⁻¹ = [[ 3, −4 ], [ −5/2, 7/2 ]] .
Because both ȳ0 and ȳ1 are not integer, we can derive a cut for (DI ) from either. Recalling
the procedure, for any fractional ȳi , we start with the i-th column H·i of H := Aβ⁻¹ , and we get a
new column A·j := ei + Aβ r . Throughout we will choose r via (∗= ). So we have,
H·0 = ( 3 , −5/2 )′  ⇒  r = ( −3 , 3 )′  ⇒  b̃ = ( 1 , 0 )′ + Aβ ( −3 , 3 )′ = ( 4 , 3 )′ =: A·5 ;
H·1 = ( −4 , 7/2 )′  ⇒  r = ( 4 , −3 )′  ⇒  b̃ = ( 0 , 1 )′ + Aβ ( 4 , −3 )′ = ( 4 , 3 )′ .
In fact, for this iteration of this example, we get the same cut for either choice of i . To calculate
the right-hand side of the cut, we have
ȳ′b̃ = ( 51/2 , −21/2 ) ( 4 , 3 )′ = 70 1/2 ,
so the new objective coefficient is c5 := ⌊70 1/2⌋ = 70 . Incorporating this cut as a column for (P)
and re-optimizing, we obtain the dual basic solution depicted in Figure 8.4.
Figure 8.4:
We observe that the objective function has decreased, but unfortunately both ȳ0 and ȳ1 are
not integers. So we must continue. We have
Aβ = [[ 3, 4 ], [ 1, 3 ]] , and hence Aβ⁻¹ = [[ 3/5, −4/5 ], [ −1/5, 3/5 ]] .
Figure 8.5:
We observe that the objective function has decreased, but because both ȳ0 and ȳ1 are again not
integers, we can again derive a cut for (DI ) from either. We calculate
H·0 = ( 3/5 , −1/5 )′  ⇒  r = ( 0 , 1 )′  ⇒  b̃ = ( 1 , 0 )′ + Aβ ( 0 , 1 )′ = ( 5 , 3 )′ =: A·6 ;
H·1 = ( −4/5 , 3/5 )′  ⇒  r = ( 1 , 0 )′  ⇒  b̃ = ( 0 , 1 )′ + Aβ ( 1 , 0 )′ = ( 3 , 2 )′ =: A·7 .
Correspondingly, we have ȳ′A·6 = 96 1/5 and ȳ′A·7 = 55 2/5 , giving us c6 := 96 and c7 := 55 .
So, we have two possible cuts for (DI ): 5y0 + 3y1 ≤ 96 and 3y0 + 2y1 ≤ 55 .
Choosing to incorporate both as columns for (P), and letting index 7 enter the basis, index
4 leaves (according to the ratio test), and it turns out that we reach an optimal basis β = (7, 5)
after this single pivot. The associated dual basic solution is ȳ′ = ( 25 , −10 ) , with objective value
z = 460 ; it is depicted in Figure 8.6 (the second graphic is zoomed in).
Not only has the objective decreased, but now all of the ȳi are integers, so we have an optimal
solution for (DI ).
Figure 8.6:
8.5.2 Mixed
In this section, we no longer assume that all yi variables are constrained to be integer. That is,
we only assume that I ⊆ {1, 2, . . . , m} is non-empty. The cuts from the previous section cannot
be guaranteed to be valid, so we start anew.
Let β be any basis partition for (P), and let ȳ be the associated dual basic solution. Suppose
that ȳi ∉ Z , for some i ∈ I . We aim to find a cut, valid for (DI ) and violated by ȳ .
Let
b̃¹ := ei + Aβ r ,
and r ∈ Rm will be determined later. We will accumulate the conditions we need to impose on
r , as we go.
Let w¹ be the basic solution associated with the basis β and the “right-hand side” b̃¹ . So
wβ¹ = h·i + r , where h·i is defined as the i-th column of Aβ⁻¹ , and wη¹ = 0 . Choosing r ≥ −h·i ,
we can make w¹ ≥ 0 . Moreover, c′w¹ = c′β (h·i + r) = c′β h·i + c′β r = ȳi + c′β r , so because we
assume that ȳi ∉ Z , we can choose r ∈ Zm , and we have that c′w¹ ∉ Z .
Next, let
b̃² := Aβ r .
Let w² be the basic solution associated with the basis β and the “right-hand side” b̃² . So, now
further choosing r ≥ 0 , we have wβ² = r ≥ 0 , wη² = 0 , and c′w² = c′β r .
So, we choose r ∈ Zm so that
r ≥ −h·i and r ≥ 0 ; that is, rk ≥ max{ 0 , −hki } , for k = 1, . . . , m .                    (∗≥+ )
                                                                  slope
(1 + α′i r) yi + z ≤ ȳi + ȳ′Aβ r            −1/(1 + α′i r)            (B1)
(α′i r) yi + z ≤ ȳ′Aβ r                     −1/(α′i r)                (B2)
• α′i r ≥ 0 , in which case the first line has negative slope and the second line has more
negative slope (or infinite, when α′i r = 0);
• α′i r ≤ −1 , in which case the second line has positive slope and the first line has more
positive slope (or infinite, when α′i r = −1).
Subtracting, we have
z¹ − z² = (ȳi − ⌊ȳi⌋) − (1 + α′i r) ,
where ȳi − ⌊ȳi⌋ ∈ (0, 1) and 1 + α′i r ∈ Z ; in particular, z¹ − z² ≠ 0 . So the slope of the line
through the pair of points (z¹ , yᵢ¹) and (z² , yᵢ²) is just
1/(z¹ − z²) = 1/( (ȳi − ⌊ȳi⌋) − (1 + α′i r) ) .
Lemma 8.24
(F-BMI) is satisfied at equality by both of the points (z¹ , yᵢ¹) and (z² , yᵢ²).
Lemma 8.25
(F-BMI) is valid for
{ (yi , z) ∈ R² : (B1), yi ≥ ⌈ȳi⌉ } ∪ { (yi , z) ∈ R² : (B2), yi ≤ ⌊ȳi⌋ } .
Lemma 8.26
(F-BMI) is violated by the point (yᵢ∗ , z∗ ).
Proof. Plugging (yᵢ∗ , z∗ ) into (F-BMI), and making some if-and-only-if manipulations, we obtain
an inequality that can be rearranged as
−(ȳi − ⌊ȳi⌋ − 1) yi + y′Aβ r ≤ ȳ′Aβ r − (ȳi − ⌊ȳi⌋ − 1) ⌊ȳi⌋ ,
which finally has the convenient form (F-GMI).
We immediately have:
Theorem 8.27
(F-GMI) is violated by the point ȳ.
Finally, we have:
Theorem 8.28
(F-GMI) is valid for the following relaxation of the feasible region of (DI ):
{ y ∈ Rm : y′Aβ ≤ c′β , yi ≤ ⌊ȳi⌋ } ∪ { y ∈ Rm : y′Aβ ≤ c′β , −yi ≤ −⌊ȳi⌋ − 1 } .
Proof. The proof, maybe obvious, is by a simple disjunctive argument. We will argue that
(F-GMI) is valid for both S1 := { y ∈ Rm : y′Aβ ≤ c′β , −yi ≤ −⌊ȳi⌋ − 1 } and S2 := { y ∈
Rm : y′Aβ ≤ c′β , yi ≤ ⌊ȳi⌋ }.
The inequality (F-BMI) is simply the sum of (B1) and the scalar ȳi − ⌊ȳi⌋ times −yi ≤
−⌊ȳi⌋ − 1 . It follows then that taking (I1) plus ȳi − ⌊ȳi⌋ times −yi ≤ −⌊ȳi⌋ − 1 , we get an
inequality equivalent to (F-GMI).
Similarly, it is easy to check that the inequality (F-BMI) is simply (B2) plus 1 − (ȳi − ⌊ȳi⌋)
times yi ≤ ⌊ȳi⌋ . It follows then that taking (I2) plus 1 − (ȳi − ⌊ȳi⌋) times yi ≤ ⌊ȳi⌋ , we also get
an inequality equivalent to (F-GMI). □
In our algorithm, we append columns to (P), rather than cuts to (D). The column for (P)
corresponding to (F-GMI) is
Aβ r − (ȳi − ⌊ȳi⌋ − 1) ei ,
and the associated cost coefficient is
c′β r − (ȳi − ⌊ȳi⌋ − 1) ⌊ȳi⌋ .
So Aβ⁻¹ times the column is
r − (ȳi − ⌊ȳi⌋ − 1) h·i .
Agreeing with what we calculated in Lemma 8.26, we have the following result.
Proposition 8.29
The reduced cost of the column for (P) corresponding to (F-GMI) is (ȳi − ⌊ȳi⌋ − 1) (ȳi − ⌊ȳi⌋) .
Proof.
c′β r − (ȳi − ⌊ȳi⌋ − 1) ⌊ȳi⌋ − c′β ( r − (ȳi − ⌊ȳi⌋ − 1) h·i )
    = (ȳi − ⌊ȳi⌋ − 1) ( c′β h·i − ⌊ȳi⌋ )
    = (ȳi − ⌊ȳi⌋ − 1) (ȳi − ⌊ȳi⌋) . □
Theorem 8.30
Let r ∈ Zm be defined by
rk := max{ 0 , −⌊hki⌋ } , for k = 1, . . . , m ,
and suppose that r̂ ∈ Zm satisfies (∗≥+ ) and r ≤ r̂ . Then the cut determined by r dominates
the cut determined by r̂ .
Proof. Observing that c′β − y′Aβ ≥ 0 for y that are feasible for (D), we see that the tightest inequality
of this type, satisfying (∗≥+ ), arises by choosing a minimal r . The result follows. □
8.5.4 Branch-and-Cut
State-of-the-art algorithms for (mixed-)integer linear optimization (like Gurobi, CPLEX and Xpress)
combine cuts with branch-and-bound. There are a lot of software design and tuning issues that
make this work successfully.
8.6 Exercises
Exercise 8.1 (Task scheduling, continued)
Consider again the “task scheduling” Exercise 2.5. Take the dual of the linear-optimization
problem that you formulated. Explain how this dual can be interpreted as a kind of network
problem. Using Python/Gurobi, solve the dual of the example that you created for Exercise 2.5
and interpret the solution.
Exercise 8.2
A pivot in an m × n matrix A means choosing a row i and column j with aij ≠ 0 , subtracting
akj /aij times row i from all other rows k (≠ i) , and then dividing row i by aij . Note that after the
pivot, column j becomes the i-th standard-unit column. Prove that if A is TU, then it is TU after
a pivot.
Exercise 8.3 (Comparing formulations for a toy problem)
Consider the systems:
S1 : 2x1 + 2x2 + x3 + x4 ≤ 2;
xj ≤ 1;
−xj ≤ 0.
S2 : x1 + x2 + x3 ≤ 1;
x1 + x2 + x4 ≤ 1;
−xj ≤ 0.
S3 : x1 + x2 ≤ 1;
x1 + x3 ≤ 1;
x1 + x4 ≤ 1;
x2 + x3 ≤ 1;
x2 + x4 ≤ 1;
−xj ≤ 0.
Notice that each system has precisely the same set of integer solutions. In fact, each system
chooses, via its feasible integer (0/1) solutions, the “vertex packings” of the following graph.
A vertex packing of a graph is a set of vertices with no edges between them. For this particular
graph we can see that the packings are: ∅, {1}, {2}, {3}, {4}, {3, 4}.
Compare the feasible regions Si of the continuous relaxations, for each pair of these systems.
Specifically, for each choice of pair i ≠ j , demonstrate whether or not the solution set of Si is
contained in the solution set of Sj . HINT: To prove that the solution set of Si is contained in the
solution set of Sj , it suffices to demonstrate that every inequality of Sj is a non-negative linear
combination of the inequalities of Si . To prove that the solution set of Si is not contained in the
solution set of Sj , it suffices to give a solution of Si that is not a solution of Sj .
Exercise 8.4 (Comparing facility-location formulations)
We have seen two formulations of the forcing constraints for the uncapacitated facility-location
problem. We have a choice of the mn constraints: −yi + xij ≤ 0 , for i = 1, . . . , m and
j = 1, . . . , n , or the m constraints: −n yi + ∑_{j=1}^{n} xij ≤ 0 , for i = 1, . . . , m . Which formulation
is stronger? That is, compare (both computationally and analytically) the strength of the two as-
sociated continuous relaxations (i.e., when we relax yi ∈ {0, 1} to 0 ≤ yi ≤ 1 , for i = 1, . . . , m).
The Jupyter notebook UFL.ipynb can be used to perform experiments comparing the use of
(S) versus (W) (see Appendix A.11).
λ1 ≤ y1 ;
λj ≤ yj−1 + yj , for j = 2, . . . , n − 1 ;
λn ≤ yn−1 .
An alternative formulation is
∑_{i=1}^{j} yi ≤ ∑_{i=1}^{j+1} λi , for j = 1, . . . , n − 2 ;
∑_{i=j}^{n−1} yi ≤ ∑_{i=j}^{n} λi , for j = 2, . . . , n − 1 .
Explain why this alternative formulation is valid, and compare its strength to the original for-
mulation, when we relax yi ∈ {0, 1} to 0 ≤ yi ≤ 1 , for i = 1, . . . , n − 1 . (Note that for both
formulations, we require λi ≥ 0 , for i = 1, . . . , n , ∑_{i=1}^{n} λi = 1 , and ∑_{i=1}^{n−1} yi = 1 .)
Credit will be given for deft modeling, sophisticated use of Python/Gurobi, testing on meaningfully-
large instances, and insightful analysis. Try to play with Gurobi integer solver options (they can
be set through Python) to get better behavior of the solver.
Your grade on this problem will replace your grades on up to 6 homework problems (i.e.,
up to 6 homework problems on which you have lower grades than you get on this one). I will
not consider any re-grades on this one! If you already have all or mostly A’s (or not), do a good job
on this one because you want to impress me, and because you are ambitious, and because this
problem is what we have been working towards all during the course, and because you should
always finish strong.
Take rest
Appendices
This template can serve as a starting point for learning LATEX. You may download MiKTeX from
miktex.org to get started. Look at the source file for this document (in Section 5) to see how to get
all of the effects demonstrated.
The equations are automatically numbered, like x.y, where x is the section number and y is the y-th
equation in section x. By tagging the equations with labels, we can refer to them later, like (2.3) and (2.1).
Theorem 2.1. This is my favorite theorem.
Proof. Unfortunately, the space here does not allow for including my ingenious proof of Theorem 2.1.
min c0 x
(P) Ax = b;
x ≥ 0.
Notice that in this example, there are 4 columns separated by 3 &’s. The ‘rrcl’ organizes justification
within a column. Of course, one can make more columns.
4. Graphics
This is how to include and refer to Figure 1 with pdfLaTeX.
\title{\LaTeX~ Template}
\date{\today}
\maketitle
\href{mailto:[email protected]}
{Your actual name ([email protected])}
%
%\medskip
%
%(this identifies your work and it \emph{greatly} help’s me in returning homework to you by email
%---- just plug in the appropriate replacements in the \LaTeX~ source; then when I click on the
%hyperlink above, my email program opens up starting a message to you)
\bigskip
% ----------------------------------------------------------------
This template can serve as a starting point for learning \LaTeX. You may download MiKTeX from
{\tt miktex.org}
to get started. Look at the source file for this
document (in Section \ref{sec:appendix})
to see how to get all of the effects demonstrated.
You can typeset math inline, like $\sum_{j=1}^n a_{ij} x_j$, by just enclosing the math in dollar signs.
But if you want to \emph{display} the math, then you do it like this:
\[
\sum_{j=1}^n a_{ij} x_j~ \forall~ i=1,\ldots,m.
\]
\begin{thm}\label{Favorite}
This is my favorite theorem.
\end{thm}
\begin{proof}
Unfortunately, the space here does not allow for including my ingenious proof
of Theorem \ref{Favorite}.
\end{proof}
\[
\tag{P}
\begin{array}{rrcl}
\min & c’x & & \\
& Ax & = & b~; \\
& x & \geq & \mathbf{0}~.
\end{array}
\]
\section{Graphics}
\begin{figure}[h!!]
\includegraphics[width=0.4\textwidth]{yinyang.jpg}
\caption{Another duality}\label{nameoffigure}
\end{figure}
Look at the \LaTeX~ commands in this section to see how each of the elements
of this document was produced. Also, this section serves to show
how text files (e.g., programs) can be included in a \LaTeX~ document verbatim.
\bigskip
\hrule
\small
\verbatiminput{LaTeX_Template.tex}
\normalsize
% ----------------------------------------------------------------
\end{document}
% ----------------------------------------------------------------
A.2 MatrixLP.ipynb
MatrixLP
min c0 x + f 0 w
Ax + Bw ≤ b
Dx =g
x ≥ 0, w ≤ 0
Note that we have the following dual, but we don’t model it:
max y0 b + v0 g
y0 A + v0 D ≤ c0
y0 B ≥ f0
y ≤ 0, v unrestricted
class StopExecution(Exception):
def _render_traceback_(self):
pass
f=np.matmul(np.transpose(B),ys)-0.01*np.random.rand(n2)
[4]: model.optimize()
if model.status != GRB.Status.OPTIMAL:
print("***** Gurobi solve status:", model.status)
print("***** This is a problem. Model does not have an optimal solution")
raise StopExecution
w[ 0 ]= 0.
w[ 1 ]= 0.
w[ 2 ]= 0.
w[ 3 ]= 0.
w[ 4 ]= 0.
w[ 5 ]= 0.
w[ 6 ]= 0.
w[ 7 ]= 0.
w[ 8 ]= 0.
w[ 9 ]= 0.
w[ 10 ]= -4.7348
w[ 11 ]= 0.
w[ 12 ]= -4.392
w[ 13 ]= 0.
w[ 14 ]= 0.
v[ 0 ]= -0.8196
v[ 1 ]= -0.6668
v[ 2 ]= -0.0458
v[ 3 ]= 0.1904
A.3 Production.ipynb
Production
class StopExecution(Exception):
def _render_traceback_(self):
pass
[2]: # Some toy data
m=3
n=2
M=list(range(0,m))
N=list(range(0,n))
A = np.array([ [8, 5], [8, 6], [8, 7] ])
b = np.array([32, 33, 35])
c = np.array([3 ,2])
[4]: model.optimize()
if model.status != GRB.Status.OPTIMAL:
print("***** Gurobi solve status:", model.status)
print("***** This is a problem. Model does not have an optimal solution")
raise StopExecution
print(" ")
print("primal var, dual slack, obj delta-lb, obj delta-ub")
for j in N: print("x[",j,"]=",np.format_float_positional(np.ndarray.item(x[j].
,→X),4,pad_right=4),
print(" ")
print("dual vars, primal slack, rhs delta-lb, rhs delta-ub")
for i in M:
constr=model.getConstrByName('r'+str(i)) # retrieving from Gurobi the
,→named constraints r0,r1,r2,...
print("y[",i,"]=",np.format_float_positional(constr.Pi,4,pad_right=4),
" s[",i,"]=", np.format_float_positional(constr.
,→Slack,4,pad_right=4),
A.4 Multi-commodityFlow.ipynb
Multi-commodityFlow
min ∑_{k=1}^{K} ∑_{e∈A} c_e^k x_e^k
Notes: * K=1 is ordinary single-commodity network flow. Integer solutions for free when node-
supplies and arc capacities are integer. * K=2 example below with integer data gives a fractional
basic optimum. This example doesn’t have any feasible integer flow at all.
References: * Jon Lee, “A First Course in Linear Optimization”, Fourth Edition (Version 4.0), Reex
Press, 2013-20.
MIT License
Copyright (c) 2021 Jon Lee
Permission is hereby granted, free of charge, to any person obtaining a copy of this software
and associated documentation files (the “Software”), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARIS-
ING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
[1]: %reset -f
import itertools
import numpy as np
#%matplotlib notebook
%matplotlib inline
import matplotlib.pyplot as plt
import gurobipy as gp
from gurobipy import GRB
import networkx as nx
class StopExecution(Exception):
def _render_traceback_(self):
pass
[2]: # parameters
solveLPOnly=True # set False to solve as an IP
# CapacityCosts = {
# # arc (i,j): [capacity, cost commodity[1] ... cost commodity[K]],
# (1,2): [6., 2],
# (1,3): [8., -5],
# (2,4): [5., 3],
# (2,5): [7., 12],
# (3,5): [5., -9],
# (4,5): [8., 2],
# (4,6): [5., 0],
# (5,6): [5., 4]}
CapacityCosts = {
# arc (i,j): [capacity, cost commodity[1] ... cost commodity[K]],
(1,2): [1., 1, 1],
(1,3): [1., 1, 1],
(2,5): [1., 1, 1],
(3,4): [1., 1, 1],
(4,1): [1., 1, 1],
(4,7): [1., 1, 1],
(5,6): [1., 1, 1],
(6,2): [1., 1, 1],
(6,8): [1., 1, 1],
(7,3): [1., 1, 1],
(7,8): [1., 1, 1],
(8,5): [1., 1, 1]}
model.addConstrs(
(sum(x[(i, j),k] for j in Nodes if (i, j) in Arcs) - sum(x[(j, i),k] for j in
,→Nodes if (j,i) in Arcs)
***** Flows:
x[( 1 , 2 ), *]= 0.5 0.5 capacity: 1.0
x[( 1 , 3 ), *]= 0.5 0.0 capacity: 1.0
x[( 2 , 5 ), *]= 0.5 0.0 capacity: 1.0
x[( 3 , 4 ), *]= 0.5 0.5 capacity: 1.0
x[( 4 , 1 ), *]= 0.0 0.5 capacity: 1.0
x[( 4 , 7 ), *]= 0.5 0.0 capacity: 1.0
x[( 5 , 6 ), *]= 0.5 0.5 capacity: 1.0
x[( 6 , 2 ), *]= 0.0 0.5 capacity: 1.0
x[( 6 , 8 ), *]= 0.5 0.0 capacity: 1.0
x[( 7 , 3 ), *]= 0.0 0.5 capacity: 1.0
x[( 7 , 8 ), *]= 0.5 0.5 capacity: 1.0
x[( 8 , 5 ), *]= 0.0 0.5 capacity: 1.0
[7]: G = nx.DiGraph()
G.add_nodes_from(Nodes)
G.add_edges_from(Arcs)
plt.figure(figsize=(8,8))
edge_labels=nx.draw_networkx_edge_labels(G,edge_labels=CapacityCosts,
pos=nx.shell_layout(G), label_pos=0.3, font_size=10)
nx.draw_shell(G, with_labels=True, node_color='cyan', node_size=800,
font_size=20, arrowsize=20)
print("Network with node labels and capacities/costs on arcs")
Flow0=np.zeros(len(Arcs))
Flow=dict(zip(list(Arcs), Flow0))
for (i,j) in Arcs: Flow[i,j]= str(round(x[(i,j),k].X,4))
H=nx.relabel_nodes(G, Supply1_label)
plt.figure(figsize=(8,8))
edge_labels=nx.draw_networkx_edge_labels(H,edge_labels=Flow,
pos=nx.shell_layout(G), label_pos=0.7, font_size=10)
nx.draw_shell(H, with_labels=True, node_color='cyan',
node_size=1200, font_size=10, arrowsize=15)
print("Network with supplies and flows for commodity ",k)
A.5 pivot_example.ipynb
pivot_example
z = min c0 x (P)
Ax = b
x ≥ 0.
[4]: A
[4]: 1 2 1 0 0 0
3 1 0 1 0 0
3 3
0
2 2 0 0 1
0 1 0 0 0 1
[5]: c
[5]: 6
7
−2
0
4
9
2
[7]: b
[7]: 7
9
6
33
10
[8]: beta
[8]:
[0, 1, 3, 5]
[9]: eta
[9]:
[2, 4]
[10]: A_beta
[10]: 1 2 0 0
3 1 1 0
3 3
0 0
2 2
0 1 0 1
[11]: A_eta
[11]: 1 0
0 0
0 1
0 0
[12]: pivot_algebra()
pivot_algebra() done
[13]: sym.N(objval)
[13]:
28.35
[14]: xbar_beta
[14]: 1
3
3
3
10
[15]: cbar_eta
[15]: 3
2
− 73
[16]: pivot_ratios(1)
3
4
∞
∞
9
20
x̄ + λz̄ :
1 − 4λ
3
2λ + 3
3
0
10λ + 3
3
λ
3 2λ
10 − 3
[17]: c.dot(zbar) # agrees with cbar_eta(1)
[17]: 7
−
3
[18]: pivot_plot()
[19]: pivot_swap(1,3)
pivot_algebra() done
[21]: sym.N(objval)
[21]:
27.3
[22]: xbar_beta
[22]: 2
5
33
10
9
2
9
20
[23]: cbar_eta
[23]: −2
7
2
[24]: pivot_ratios(0)
2
5
∞
∞
∞
x̄ + λz̄ :
2
5 −λ
33
10
λ
3λ + 9
2
3λ + 9
2 20
0
[28]: pivot_algebra()
pivot_algebra() done
[29]: sym.N(objval)
[29]:
26.5
[30]: xbar_beta
[30]: 2
5
33
10
57
10
21
20
[31]: cbar_eta
[31]: 2
− 12
[32]: pivot_ratios(1)
∞
33
10
∞
∞
x̄ + λz̄ :
0
33
10 − λ2
2λ +
5
λ + 57
10
3λ + 21
2 20
λ
[36]: pivot_algebra()
pivot_algebra() done
[37]: sym.N(objval)
[37]:
24.85
[38]: xbar_beta
[38]: 7
33
10
9
6
[39]: cbar_eta
[39]: 2
1
2
[40]: pivot_plot()
[41]: xbar
[41]: 0
0
7
9
6
33
10
[42]: objval
[42]: 497
20
[43]: c.dot(xbar) # reality check
[43]: 497
20
[44]: c_beta.dot(xbar_beta) # reality check
[44]: 497
20
[45]: ybar.dot(b) # reality check
[45]: 497
20
[46]: sym.transpose(c)-sym.transpose(ybar)*A # reality check
[46]: 1
2 2 0 0 0 0
A.6 pivot_tools.ipynb
pivot_tools
z = min c0 x (P)
Ax = b
x ≥ 0.
Notes: * Can work with an ε-perturbed right-hand side * β = ( β0 , β1 , . . . , βm−1 ) has m entries from
{0, 1, . . . , n − 1}. * η = (η0 , η1 , . . . , ηn−m−1 ) has n − m entries from {0, 1, . . . , n − 1}. * So, for the
purpose of selecting j (corresponding to η j entering the basis), we view c̄η = (c̄η0 , c̄η1 , . . . , c̄ηn−m−1 ).
* For pivot_ratios(j): j must be in {0, 1, . . . , n − m − 1}. The output of pivot_ratios(j) is m numbers,
and they correspond to the basic variables numbered β 0 , β 1 , . . . , β m−1 . So, for the purpose of se-
lecting i (correspond to β i leaving the basis), i must be in {0, 1, . . . , m − 1}. * For pivot_swap(j,i): j
must be in {0, 1, . . . , n − m − 1} and i must be in {0, 1, . . . , m − 1}.
References: * Jon Lee, “A First Course in Linear Optimization”, Fourth Edition (Version 4.0), Reex
Press, 2013-20.
MIT License
Copyright (c) 2020 Jon Lee
Permission is hereby granted, free of charge, to any person obtaining a copy of this software
and associated documentation files (the “Software”), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARIS-
ING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
[ ]: import numpy as np
import sympy as sym
sym.init_printing()
import copy
import operator
eps = sym.symbols('epsilon')
lam = sym.symbols('lambda')
xlz = sym.Symbol('\\bar{x}+\\lambda \\bar{z} :')
from IPython.display import Latex, Math
#########################################################################
### CHOOSE A BACKEND --- IF YOU SWITCH, RESTART THE KERNEL
# evaluates faster than 'notebook':
%matplotlib inline
# evaluates slower than 'inline' (gives interactive plots, though delayed when
,→running all cells):
#%matplotlib notebook
#########################################################################
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
from matplotlib.ticker import AutoMinorLocator, MultipleLocator
from scipy.spatial import ConvexHull, convex_hull_plot_2d
import itertools
import seaborn as sns; sns.set(); sns.set_style("whitegrid"); color_list = sns.
,→color_palette("muted")
[ ]: # perturb
def pivot_perturb():
global m, b, Perturb, eps
Perturb = True
for i in range(m):
for j in range(m):
b[i] += A_beta[i,j]*eps**(j+1)
print('pivot_perturb() done')
[ ]: # algebra
def pivot_algebra():
global m, n, objval, xbar, xbar_beta, xbar_eta, ybar, cbar_eta, ratios
xbar_beta = A_beta.solve(b)
xbar_eta = sym.zeros(n-m,1)
objval = c_beta.dot(xbar_beta)
xbar = sym.zeros(n,1)
for i in range(m): xbar[beta[i]]=xbar_beta[i]
for j in range(n-m): xbar[eta[j]]=xbar_eta[j]
ybar = A_beta.transpose().solve(c[beta,0])
#cbar_eta = c_eta.transpose()- ybar.transpose()*A_eta
cbar_eta = c_eta- A_eta.transpose()*ybar
ratios=sym.oo*sym.ones(m,1)
print('pivot_algebra() done')
[ ]: # plot
def pivot_plot():
if n-m != 2 or Perturb == True:
display(Latex("Hey friend --- give me a break!"))
display(Latex("This plotting only works if there are $n-m=2$ nonbasic
,→variables and no rhs perturbation"))
return
A_beta_inv = A_beta.inv()
Abar_eta = A_beta_inv*A_eta
M = sym.zeros(n,n-m)
M[0:m,:] = Abar_eta
M[m:n,:] = -sym.eye(n-m)
h = sym.zeros(n,1)
h[0:m,0] = xbar_beta
feaspoints=np.empty((0,2))
infeaspoints=np.empty((0,2))
bbar=sym.zeros(2,1)
M2=sym.zeros(2,2)
for i in range(n-1):
for j in range(i+1,n):
bbar[0]=h[i]
bbar[1]=h[j]
M2[0,:]=M[i,:]
M2[1,:]=M[j,:]
if abs(sym.det(M2)) >0.0001:
xy = M2.solve(bbar)
if min(h - M*xy) >= -0.00001:
feaspoints=np.r_[feaspoints,np.transpose(xy)]
else:
infeaspoints=np.r_[infeaspoints,np.transpose(xy)]
hull = ConvexHull(feaspoints)
fig, ax = plt.subplots(figsize=(8,8))
ax.set(xlabel=r"$x_{}$".format(eta[0]), ylabel=r"$x_{}$".format(eta[1]))
ax.spines['left'].set_position(('data',0.0))
ax.spines['bottom'].set_position(('data',0.0))
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
plt.xlim(float(min(cbar_eta[0],min(feaspoints[:,0])))-1.25,
,→float(max(feaspoints[:,0]))+0.25)
plt.ylim(float(min(cbar_eta[1],min(feaspoints[:,1])))-0.25,
,→float(max(feaspoints[:,1]))+0.25)
x = np.linspace(float(min(feaspoints[:,0]))-0.5,float(max(feaspoints[:
,→,0]))+0.5,100)
for i in range(m):
if Abar_eta[i,1] != 0:
y = (xbar_beta[i] - Abar_eta[i,0]*x) / Abar_eta[i,1]
plt.plot(x, y, linewidth=3, label=r"$x_{}$".format(beta[i]))
else:
plt.vlines(float(xbar_beta[i]/ Abar_eta[i,0]),
,→float(min(cbar_eta[1],min(feaspoints[:,1]))),
float(max(feaspoints[:,0])), label=r"$x_{}$".
,→format(beta[i]))
Notes: * A and c MUST be integer * The variables are y0 , y1 , . . . , ym−1 , so valid input arguments
for pure_gomory(i) are i ∈ {0, 1, . . . , m − 1}.
Reference: * Qi He, Jon Lee. Another pedagogy for pure-integer Gomory. RAIRO – Operations
Research, 51:189–197, 2017.
[ ]: # pure gomory cut
def pure_gomory(i):
global A, c, A_beta, A_eta, c_eta, cbar_eta, m, n, beta, eta
if i>m-1:
display(Latex("error: $i$ is out of range."))
else:
ei = sym.zeros(m,1)
ei[i]=1 # ei is the i-th standard unit column
hi = A_beta.solve(ei) # i-th column of basis inverse
#r = -sym.floor(hi) # best choice of r
r = -(hi.applyfunc(sym.floor))
btilde = ei + A_beta*r # new column for P
A = A.row_join(btilde)
c = c.col_join(sym.Matrix(([sym.floor(ybar.dot(btilde))])))
eta.insert(n-m,n)
n += 1
A_eta = copy.copy(A[:,eta])
c_eta = copy.copy(c[eta,0])
cbar_eta = c_eta - A_eta.transpose()*ybar
print('*** PROBABLY WANT TO APPLY pivot_algebra()! ***')
[ ]: # dual plot
def dual_plot(delta=None,center=None):
if delta==None: delta=2
if center==None: center=ybar
if m != 2 or Perturb == True:
display(Latex("Hey friend --- give me a break!"))
display(Latex("This plotting only works if there are $m=2$ dual
,→variables and no rhs perturbation"))
return
M=sym.transpose(A)
feaspoints=np.empty((0,2))
infeaspoints=np.empty((0,2))
c2=sym.zeros(2,1)
M2=sym.zeros(2,2)
for i in range(n-1):
for j in range(i+1,n):
c2[0]=c[i]
c2[1]=c[j]
M2[0,:]=M[i,:]
M2[1,:]=M[j,:]
if abs(sym.det(M2)) > 0.0001:
y0y1 = M2.solve(c2)
if min(c - M*y0y1) >= -0.00001:
feaspoints=np.r_[feaspoints,np.transpose(y0y1)]
else:
infeaspoints=np.r_[infeaspoints,np.transpose(y0y1)]
hull = ConvexHull(feaspoints)
fig, ax = plt.subplots(figsize=(8,8))
ax.xaxis.set_label_coords(1.05, 0.49)
ax.yaxis.set_label_coords(0.5, 1.05)
ax.set(xlabel=r"$y_{}$".format(0), ylabel=r"$y_{}$".format(1))
ax.spines['left'].set_position(('data',ybar[0]))
ax.spines['bottom'].set_position(('data',ybar[1]))
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# set major ticks to show every 1 (integer)
ax.xaxis.set_major_locator(MultipleLocator(1))
ax.yaxis.set_major_locator(MultipleLocator(1))
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
plt.xlim(float(center[0])-delta,float(center[0])+delta)
plt.ylim(float(center[1])-delta,float(center[1])+delta)
plt.fill(feaspoints[hull.vertices,0], feaspoints[hull.vertices,1], 'cyan',
,→alpha=0.3)
y1 = np.linspace(float(min(feaspoints[:,0]))-0.5,float(max(feaspoints[:
,→,0]))+0.5,100)
for j in range(n):
if M[j,1] != 0:
y2 = (c[j] - M[j,0]*y1) / M[j,1]
plt.plot(y1, y2, linewidth=2, label=r"constraint ${}$".format(j))
else:
plt.vlines(float(c[j]/ M[j,0]), float(center[1])-delta,
float(center[1])+delta, linewidth=2, label=r"constraint
,→${}$".format(j))
arrow=plt.arrow(float(ybar[0]),float(ybar[1]),0.5*float(b[0]/(b.dot(b))**0.
,→5),0.5*float(b[1]/(b.dot(b))**0.5), color='magenta', width = 0.01*delta,
yp = np.arange(np.floor(float(center[1])-delta)-1, np.
,→ceil(float(center[1])+delta)+2)
pp = itertools.product(xp, yp)
plt.scatter(*zip(*pp), marker='o', s=5, color='black',zorder=9)
# sorting plot legend entries by label
handles, labels = ax.get_legend_handles_labels()
hl = sorted(zip(handles, labels), key=operator.itemgetter(1))
handles2, labels2 = zip(*hl)
ax.legend(handles2, labels2, loc="lower left",title="constraints")
ax.grid(which='major')
plt.show()
A.7 Circle.ipynb
Circle
Hoffman’s circle
Reference: * Jon Lee. Hoffman’s circle untangled. SIAM Review, 39(1):98-105, 1997.
MIT License
Copyright (c) 2020 Jon Lee
Permission is hereby granted, free of charge, to any person obtaining a copy of this software
and associated documentation files (the “Software”), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARIS-
ING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
[1]: import numpy as np
#%matplotlib notebook
%matplotlib inline
import mpl_toolkits.mplot3d as a3
import matplotlib.colors as colors
import pylab as pl
t = 2 * np.pi/5
c = np.cos(t)
s = np.sin(t)
M = np.array([[c, -s, 0], [s, c, 0], [(c - 1)/c, s/c, 1]])
x = np.array((1,0,0))
y = np.array((0, 0.5*np.tan(t/2), 0))
T=np.row_stack((x,y, M.dot(x), M.dot(y), M.dot(M.dot(x)), M.dot(M.dot(y)),
M.dot(M.dot(M.dot(x))), M.dot(M.dot(M.dot(y))),
M.dot(M.dot(M.dot(M.dot(x)))), M.dot(M.dot(M.dot(M.dot(y)))), x))
ax = a3.Axes3D(pl.figure(figsize=(5,8)),azim=42,elev=15)
for i in range(10):
vtx = np.row_stack(([0,0,0],T[i],T[i+1]))
tri = a3.art3d.Poly3DCollection([vtx])
tri.set_color(colors.rgb2hex(np.random.rand(3)))
tri.set_edgecolor('k')
ax.add_collection3d(tri)
ax.set_xlim3d(-1,1)
ax.set_ylim3d(-1,1)
ax.set_zlim3d(-3,4)
pl.show()
A.8 Decomp.ipynb
Decomp
z = min c0 x (Q)
Ex ≥ h
Ax = b
x ≥ 0,
class StopExecution(Exception):
def _render_traceback_(self):
pass
A=0.01*np.random.randint(-2,high=3,size=(m2,n)).astype(float) #np.random.
,→randn(m2,nt)
# set up the Subproblem model and get one basic feasible solution
modelS = gp.Model()
#modelS.setParam('OutputFlag', 0) # quiet the Gurobi output
x = modelS.addMVar(n)
constraintsS = modelS.addConstr(A@x == b)
#modelS.setObjective(c@x, GRB.MINIMIZE)
modelS.optimize()
if modelS.status != GRB.Status.OPTIMAL:
print("***** Gurobi (initial) Subproblem solve status:", modelS.status)
print("***** This is a problem. Stopping execution.")
raise StopExecution
xgen += 1
# construct a basis
XZ=np.reshape(x.X,(n,1))
#Z=np.r_[np.zeros((n-m1,m1)),np.eye(m1)]
#Z=np.empty((n,0), dtype=float)
h1=np.r_[h,(1)]
#B=np.c_[np.r_[np.eye(m1),np.zeros((1,m1))],np.r_[[email protected],(1)]]
B=np.c_[-np.r_[np.eye(m1),np.zeros((1,m1))],np.r_[[email protected],(1)]]
while True:
ITER += 1
print(" ")
print("***** Currently in Phase", Phase, ". Iteration number", ITER)
print("***** Solving Main LP...")
modelM.optimize()
if modelM.status != GRB.Status.OPTIMAL:
print("***** Gurobi Main solve status:", modelM.status)
print("***** This is a problem. Stopping execution.")
raise StopExecution
results1=np.append(results1,ITER-1)
results2=np.append(results2,modelM.Objval)
if Phase==1 and modelM.Objval < 0.0000001:
print("***** Phase I succeeded")
print("LP iter", " LP val")
print("--------- ---------")
for j in range(ITER):
print(np.int(results1[j]), " ", np.round(results2[j],9))
fig, ax = plt.subplots(figsize=(10,10))
ax.plot(results1[0:ITER], results2[0:ITER])
ax.set(xlabel='iteration', ylabel='LP objective value')
ax.set_xticks(ticks=results1, minor=False)
ax.grid()
plt.show()
ITERphaseI=ITER
Phase=2
# switch to the Phase II model
modelM=modelM2
modelM.optimize()
# overwrite last iteration result with phase-II objective value
results2[ITER-1]=modelM.Objval
if ITER == MAXIT: break
constraintsM=modelM.getConstrs()
for i in range(m1):
y[i]=constraintsM[i].Pi
sigma=constraintsM[m1].Pi
if Phase==1: modelS.setObjective((-y.dot(E))@x, GRB.MINIMIZE)
else: modelS.setObjective((c-y.dot(E))@x, GRB.MINIMIZE)
print(" ")
print("***** Solving Subproblem LP...")
modelS.optimize()
if modelS.status != GRB.Status.OPTIMAL and modelS.status != GRB.Status.
,→UNBOUNDED:
. . .
A.9 SubgradProj.ipynb
SubgradProj
z = min c0 x (Q)
Ex ≥ h
Ax = b
x ≥ 0,
class StopExecution(Exception):
def _render_traceback_(self):
pass
GUESS = -5.6 # but don't guess a target value higher than z!!
,→!!
A=0.01*np.random.randint(-2,high=3,size=(m2,n)).astype(float) #np.random.
,→randn(m2,nt)
[3]: # solve the problem as one big LP --- for comparison purposes
modelQ = gp.Model()
modelQ.reset()
x = modelQ.addMVar(n)
objective = modelQ.setObjective(c@x, GRB.MINIMIZE)
constraintsQ1 = modelQ.addConstr(E@x >= h)
constraintsQ2 = modelQ.addConstr(A@x == b)
modelQ.optimize()
if modelQ.status != GRB.Status.OPTIMAL:
print("***** Gurobi solve status:", modelQ.status)
print("***** This is a problem. Model Q does not have an optimal solution")
raise StopExecution
[4]: # 'SmartInitialization' chooses the initial y so that the dual of the Lagrangian
,→Subproblem has (pi=0 as)
# a feasible solution, thus making sure that the initial Lagrangian Subproblem
,→is not unbounded.
if SmartInitialization:
modelY = gp.Model()
modelY.reset()
yvar = modelY.addMVar(m1)
constraintsY = modelY.addConstr(np.transpose(E)@yvar <= c)
modelY.optimize()
y=yvar.X
else: y=np.zeros(m1)
# initialization
k=1
bestlb = -np.Inf
modelL.optimize()
if modelL.status != GRB.Status.OPTIMAL:
print("***** Gurobi solve status:", modelL.status)
print("***** This is a problem. Lagrangian Subproblem is unbounded.")
print("***** The algorithm cannot work with this starting y.")
raise StopExecution
v = y.dot(h) + modelL.Objval
results1=[0]
results2=[v]
bestlb = v
***** z: -5.611934358015312
***** first lower bound: -35.97487470911054
***** best lower bound: -6.309166317427381
[6]: if k > 1:
fig, ax = plt.subplots(figsize=(10,10))
ax.plot(results1, results2)
ax.plot(results1, modelQ.Objval*np.ones(k))
ax.set(xlabel='iteration', ylabel='v(y)')
ax.grid()
plt.show()
A.10 CSP.ipynb
CSP
min e0 x
Ax − t = d
x, t ≥ 0,
where the columns of A are cutting patterns, and d is the demand vector.
Notes: * In this implementation, we never delete generated columns (i.e., patterns) * Knapsack
subproblems solved by DP or ILP (Gurobi) or both [user options] * At the end, we solve the ILP
over all columns generated, aiming to improve on the rounded-up LP solution from column-
generation
References: * Jon Lee, “A First Course in Linear Optimization”, Fourth Edition (Version 4.0), Reex
Press, 2013-20.
MIT License
Copyright (c) 2020 Jon Lee
Permission is hereby granted, free of charge, to any person obtaining a copy of this software
and associated documentation files (the “Software”), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARIS-
ING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
[1]: %reset -f
import numpy as np
#%matplotlib notebook
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set(); sns.set_style("whitegrid"); color_list = sns.
,→color_palette("muted")
import gurobipy as gp
from gurobipy import GRB
class StopExecution(Exception):
def _render_traceback_(self):
pass
max ∑_{i=1}^{m} ȳi ai
     ∑_{i=1}^{m} wi ai ≤ W
     ai ≥ 0 and integer, for i = 1, . . . , m .
[5]: # set up for solving the knapsack subproblems: either by DP or IP (or both)
#
y=np.zeros(m)
if IP==True:
# set up the Subproblem ILP knapsack model for Gurobi
Knap = gp.Model()
Knap.setParam('OutputFlag', 0) #comment out to see more Gurobi output
a = Knap.addMVar(m,vtype=GRB.INTEGER)
knapsackobjective = Knap.setObjective(y@a, GRB.MAXIMIZE)
knapsackconstraint = Knap.addConstr(Widths@a <= W)
if DP==True:
# DP for knapsack. Local notation: max c'x, s.t. a'x <= b, x>=0 int.
def Knapf(a,b,c):
m=np.size(a)
f=np.zeros(b+1)
i=-np.ones(b+1,dtype=int)
v=-np.Inf*np.ones(m)
for s in range(min(a),b+1):
for j in range(m):
if a[j]<=s: v[j]=c[j] + f[s-a[j]]
else: v[j]=-np.Inf
f[s]=max(v)
i[s]=np.argmax(v) # save the index j where the max occurred for
,→that s
#
x=np.zeros(m)
s=b+0
while s>=min(a):
x[i[s]] += 1
s=s-a[i[s]]
return f[b], x
[6]: # fancy output function
def fancyoutput():
plt.figure()
print("***** Patterns / Widths:", Widths, "Stock roll width:", W)
Aw=np.zeros((m,nPAT))
for i in M:
for j in range(nPAT):
Aw[i,j]=A[i,j]*Widths[i]
Aw=np.c_[ Aw, np.zeros(m) ]
wlist=[''] * m
for i in M:
wlist[i]='w'+str(i)
K=np.diagflat(Widths)
Bw=np.c_[Aw,K]
T = np.arange(Bw.shape[1])
for i in range(Bw.shape[0]):
plt.bar(T, Bw[i],
tick_label = np.concatenate((np.arange(nPAT),np.array([' ']),wlist)),
bottom = np.sum(Bw[:i], axis = 0),
color = color_list[i % len(color_list)])
plt.show()
print("***** A:")
print(A)
raise StopExecution
print("***** Gurobi Knap objval:",Knap.Objval)
reducedcost = 1.0-Knap.Objval
pattern=a.X+np.zeros(m)
#
if DP==True:
results = Knapf(Widths.astype(int),W,y)
print("***** DP Knap objval: ",results[0])
reducedcost = 1.0-results[0]
pattern=results[1]
#
if reducedcost < -0.0001:
print("***** Column:",pattern)
A=np.c_[ A, pattern ]
nPAT += 1
newcol=gp.Column(pattern,demandconstraints)
LP.addVar(obj=1.0, column=newcol)
else:
print("***** No more improving columns")
break
***** A:
[[1. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 2. 0. 0. 0. 1. 2. 0. 1.]
[0. 0. 2. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 4. 0. 0. 1. 3. 0.]
[0. 0. 0. 0. 3. 0. 0. 1. 2.]]
LP iter LP val
--------- ---------
0 1748.25
1 1645.75
2 1381.25
3 1380.3889
4 1377.6538
***** Now solve the ILP over all patterns generated to try and get a better
solution...
***** x:
x[ 0 ]= 0.0
x[ 1 ]= 0.0
x[ 2 ]= 72.0
x[ 3 ]= 1.0
x[ 4 ]= 1.0
x[ 5 ]= 205.0
x[ 6 ]= 1034.0
x[ 7 ]= 17.0
x[ 8 ]= 49.0
***** Number of rolls used: 1379.0
***** Patterns / Widths: [70. 40. 55. 25. 35.] Stock roll width: 110
***** A:
[[1. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 2. 0. 0. 0. 1. 2. 0. 1.]
[0. 0. 2. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 4. 0. 0. 1. 3. 0.]
[0. 0. 0. 0. 3. 0. 0. 1. 2.]]
A.11 UFL.ipynb
UFL
∑_{i∈M} xij = 1 , for j ∈ N ;
xij ≥ 0 , for i ∈ M, j ∈ N ;
0 ≤ yi ≤ 1 , and integer, for i ∈ M .
Notes: * We make two solves, first with the weak forcing constraints
∑_{j∈N} xij ≤ n yi , for i ∈ M ,
and then with the strong forcing constraints
xij ≤ yi , for i ∈ M, j ∈ N .
* Random instances with m facilities and n customers. Play with m, n and possibly with demand
and scale factor in f .
References: * Jon Lee, “A First Course in Linear Optimization”, Fourth Edition (Version 4.0), Reex
Press, 2013-20.
MIT License
Copyright (c) 2020 Jon Lee
Permission is hereby granted, free of charge, to any person obtaining a copy of this software
and associated documentation files (the “Software”), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARIS-
ING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
[1]: %reset -f
import numpy as np
#%matplotlib notebook
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.spatial import Voronoi, voronoi_plot_2d
import gurobipy as gp
from gurobipy import GRB
class StopExecution(Exception):
def _render_traceback_(self):
pass
[2]: # parameters
m=75 # number of facilities
n=4000 # number of customers
M=list(range(0,m))
N=list(range(0,n))
np.random.seed(10) # set seed to be able to repeat the same random data
solveLPsOnly=False # set True to only solve LP relaxations
# cost data
demand=10*np.random.rand(n) # these will be 'baked' into the shipping costs
f=200*np.random.rand(m) # facility costs
c=np.zeros((m,n))
for i in range(0,m):
for j in range(0,n):
c[i,j]=demand[j]*np.sqrt(np.square(fPx[i]-cPx[j])+np.
,→square(fPy[i]-cPy[j]))
Cutting planes:
Implied bound: 11142
model.optimize()
if model.status != GRB.Status.OPTIMAL:
print("***** Gurobi solve status:", model.status)
print("***** This is a problem. Model does not have an optimal solution")
raise StopExecution
for i in M: print("y[",i,"]=",round(y[i].X,4))
print("y total =", round(sum (y[i].X for i in M),4))
A.12 pure_gomory_example_1.ipynb
pure_gomory_example_1
[4]: A
[4]: 7 8 −1 1 3
5 6 −1 2 1
[5]: c
[5]: 126
141
−10
5
67
[6]: #pivot_perturb()
[7]: b
[7]: 26
19
[8]: pivot_algebra()
pivot_algebra() done
[9]: xbar_beta
[9]: 2
3
2
[10]: cbar_eta
[10]: 5
1
2
1
[11]: ybar
[11]: 51
2
− 21
2
[12]: dual_plot()
[13]: pure_gomory(1)
[14]: pivot_algebra()
pivot_algebra() done
[15]: A
[15]: 7 8 −1 1 3 4
5 6 −1 2 1 3
[16]: c
[16]: 126
141
−10
5
67
70
[17]: eta
[17]:
[2, 3, 4, 5]
[18]: cbar_eta
[18]: 5
1
2
1
− 12
[19]: pivot_ratios(3)
∞
3
x̄ + λz̄ :
2
3 − λ
2 2
0
0
0
λ
[20]: pivot_swap(3,1)
[21]: pivot_algebra()
pivot_algebra() done
[22]: xbar_beta
[22]: 2
3
[23]: cbar_eta
[23]: 4
5
−3
1
[24]: pivot_ratios(2)
2
5
∞
x̄ + λz̄ :
2 − 5λ
0
0
0
λ
8λ + 3
[25]: pivot_swap(2,0)
[26]: pivot_algebra()
pivot_algebra() done
[27]: xbar_beta
[27]: 2
5
31
5
[28]: cbar_eta
[28]: 23
5
2
3
5
1
[29]: ybar
[29]: 131
5
− 58
5
[30]: dual_plot()
[31]: pure_gomory(0)
[32]: pure_gomory(1)
[33]: pivot_algebra()
pivot_algebra() done
[34]: cbar_eta
[34]: 23
5
2
3
5
1
− 1
5
− 52
[35]: pivot_ratios(5)
2
31
3
x̄ + λz̄ :
0
0
0
0
2 λ
−
5 5
31 − 3λ
5 5
0
λ
[36]: pivot_swap(5,0)
[37]: pivot_algebra()
pivot_algebra() done
[38]: xbar_beta
[38]: 2
5
[39]: cbar_eta
[39]:
5
0
1
1
1
2
[40]: ybar
[40]: 25
−10
[41]: dual_plot()
[42]: dual_plot(.1)
A.13 pure_gomory_example_2.ipynb
pure_gomory_example_2
[4]: A
[4]: 6 −6  0
     1  1 −1
[5]: c
[5]: (6, 0, 1)
[6]: #pivot_perturb()
[7]: b
[7]: (0, 1)
[8]: pivot_algebra()
pivot_algebra() done
[9]: dual_plot(2*k+1)
[10]: ybar
[10]: (1/2, 3)
[11]: pure_gomory(0)
[12]: pivot_algebra()
pivot_algebra() done
[13]: dual_plot()
[14]: cbar_eta
[14]: (4, −1/2)
[15]: pivot_ratios(1)
(6, 6/11)
x̄ + λz̄ : (1/2 − λ/12, 1/2 − 11λ/12, 0, λ)
[16]: pivot_swap(1,1)
[17]: pivot_algebra()
pivot_algebra() done
[18]: cbar_eta
[18]: (41/11, 6/11)
[19]: xbar_beta
[19]: (5/11, 6/11)
[20]: dual_plot()
[21]: ybar
[21]: (6/11, 30/11)
[22]: pure_gomory(1)
[23]: dual_plot()
[24]: pivot_algebra()
pivot_algebra() done
[25]: cbar_eta
[25]: (41/11, 6/11, −8/11)
[26]: pivot_ratios(2)
(1, 1)
x̄ + λz̄ : (5/11 − 5λ/11, 0, 0, 6/11 − 6λ/11, λ)
[27]: pivot_swap(2,0)
[28]: pivot_algebra()
pivot_algebra() done
[29]: cbar_eta
[29]: (3, 2/5, 8/5)
[30]: xbar_beta
[30]: (1, 0)
[31]: dual_plot()
[32]: ybar
[32]: (2/5, 2)
[33]: pure_gomory(0)
[34]: dual_plot()
[35]: pivot_algebra()
pivot_algebra() done
[36]: cbar_eta
[36]: (3, 2/5, 8/5, −2/5)
[37]: pivot_ratios(3)
(5, 0)
x̄ + λz̄ : (0, 0, 0, −4λ/5, 1 − λ/5, λ)
[38]: pivot_swap(3,1)
[39]: pivot_algebra()
pivot_algebra() done
[40]: cbar_eta
[40]: (3, 1, 1, 1/2)
[41]: xbar_beta
[41]: (1, 0)
[42]: ybar
[42]: (1/2, 2)
[43]: dual_plot()
[44]: pure_gomory(0)
[45]: pivot_algebra()
pivot_algebra() done
[46]: cbar_eta
[46]: (3, 1, 1, 1/2, −1/2)
[47]: pivot_ratios(4)
(4, 0)
x̄ + λz̄ : (0, 0, 0, 0, 1 − λ/4, −3λ/4, λ)
[48]: pivot_swap(4,1)
[49]: pivot_algebra()
pivot_algebra() done
[50]: cbar_eta
[50]: (3, 2, 0, 4/3, 2/3)
[51]: xbar_beta
[51]: (1, 0)
[52]: ybar
[52]: (2/3, 2)
[53]: pure_gomory(0)
[54]: pivot_algebra()
pivot_algebra() done
[55]: cbar_eta
[55]: (3, 2, 0, 4/3, 2/3, −2/3)
[56]: pivot_ratios(5)
(3, 0)
x̄ + λz̄ : (0, 0, 0, 0, 1 − λ/3, 0, −2λ/3, λ)
[57]: pivot_swap(5,1)
[58]: pivot_algebra()
pivot_algebra() done
[59]: cbar_eta
[59]: (3, 4, −2, 3, 2, 1)
[60]: pivot_ratios(2)
(1/4, ∞)
x̄ + λz̄ : (λ, 0, 0, 0, 1 − 4λ, 0, 0, 3λ)
[61]: pivot_swap(2,0)
[62]: pivot_algebra()
pivot_algebra() done
[63]: cbar_eta
[63]: (5/2, 3, 1/2, 9/4, 3/2, 3/4)
[64]: xbar_beta
[64]: (1/4, 3/4)
[65]: dual_plot()
[66]: ybar
[66]: (3/4, 3/2)
[67]: pure_gomory(1)
[68]: pivot_algebra()
pivot_algebra() done
[69]: dual_plot()
[70]: cbar_eta
[70]: (5/2, 3, 1/2, 9/4, 3/2, 3/4, −1/2)
[71]: pivot_ratios(6)
(1, 1)
x̄ + λz̄ : (1/4 − λ/4, 0, 0, 0, 0, 0, 0, 3/4 − 3λ/4, λ)
[72]: pivot_swap(6,1)
[73]: pivot_algebra()
pivot_algebra() done
[74]: cbar_eta
[74]: (2, 4, 1, 19/6, 7/3, 3/2, 2/3)
[75]: dual_plot()
[76]: ybar
[76]: (5/6, 1)
[77]: pure_gomory(0)
[78]: pivot_algebra()
pivot_algebra() done
[79]: cbar_eta
[79]: (2, 4, 1, 19/6, 7/3, 3/2, 2/3, −5/6)
[80]: pivot_ratios(7)
(0, 6/5)
x̄ + λz̄ : (−λ/6, 0, 0, 0, 0, 0, 0, 0, 1 − 5λ/6, λ)
[81]: pivot_swap(7,0)
[82]: pivot_algebra()
pivot_algebra() done
[83]: cbar_eta
[83]: (2, −1, 1, −1, −1, −1, −1, 5)
[84]: xbar_beta
[84]: (0, 1)
[85]: dual_plot()
[86]: pivot_ratios(6)
(∞, 1/3)
x̄ + λz̄ : (0, 0, 0, 0, 0, 0, 0, λ, 1 − 3λ, 2λ)
[87]: pivot_swap(6,1)
[88]: pivot_algebra()
pivot_algebra() done
[89]: cbar_eta
[89]: (5/3, 4/3, 4/3, 1, 2/3, 1/3, 1/3, 10/3)
[90]: dual_plot()
[91]: ybar
[91]: (1/3, 2/3)
[92]: pure_gomory(1)
[93]: pivot_algebra()
pivot_algebra() done
[94]: cbar_eta
[94]: (5/3, 4/3, 4/3, 1, 2/3, 1/3, 1/3, 10/3, −2/3)
[95]: pivot_ratios(8)
(1, 1)
x̄ + λz̄ : (0, 0, 0, 0, 0, 0, 0, 1/3 − λ/3, 0, 2/3 − 2λ/3, λ)
[96]: pivot_swap(8,1)
[97]: pivot_algebra()
pivot_algebra() done
[98]: cbar_eta
[98]: (1, 6, 2, 5, 4, 3, 1, 0, 2)
[99]: xbar_beta
[99]: (0, 1)
[100]: ybar
[100]: (1, 0)
[101]: dual_plot()
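As a small check on the listing above: with A, b, and c as shown in cells [4], [5], and [7], and assuming the starting basis β = (0, 1) (that basis is set in cells not reproduced here, but it is consistent with the printed ȳ of cell [10]), a few lines of exact arithmetic recover x̄_β = (1/2, 1/2) and ȳ = (1/2, 3). The last lines derive the classical row-form pure Gomory cut from the fractional basic row; this is shown only for orientation. It is not what pure_gomory() does in the listing (notice that each pure_gomory call there adds an index to eta while xbar_beta keeps just two components), so treat the sketch as a generic illustration, not the notebook's implementation.

# Sketch only (not the book's pivot tools): recompute, in exact arithmetic,
# the first quantities of pure_gomory_example_2, assuming starting basis (0, 1).
import numpy as np
from fractions import Fraction as F

A = np.array([[F(6), F(-6), F(0)],
              [F(1), F(1), F(-1)]], dtype=object)
b = np.array([F(0), F(1)], dtype=object)
c = np.array([F(6), F(0), F(1)], dtype=object)
beta = [0, 1]                                    # assumed starting basis

B = A[:, beta]
det = B[0, 0] * B[1, 1] - B[0, 1] * B[1, 0]
Binv = np.array([[B[1, 1], -B[0, 1]],
                 [-B[1, 0], B[0, 0]]], dtype=object) / det   # 2x2 adjugate formula
xbar_beta = Binv.dot(b)                          # -> [1/2, 1/2]
ybar = Binv.T.dot(c[beta])                       # -> [1/2, 3], as printed in cell [10]
print("xbar_beta =", xbar_beta, "  ybar =", ybar)

# Classical (row-form) pure Gomory cut from basic row 0, whose value 1/2 is
# fractional: take fractional parts of the tableau row and of the right-hand side.
row = Binv[0, :].dot(A)                          # tableau row 0: [1, 0, -1/2]
rhs = xbar_beta[0]                               # 1/2
frac = lambda q: q - (q.numerator // q.denominator)
print("cut:", [frac(q) for q in row], "· x >=", frac(rhs))   # (0, 0, 1/2)·x >= 1/2

The inequality produced here, (1/2)x2 ≥ 1/2, is just x2 ≥ 1, which the fractional basic solution (1/2, 1/2, 0) violates; integrality of x0 and x2 in the equation x0 − (1/2)x2 = 1/2 forces it.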
End Notes
1. “The reader will find no figures in this work. The methods which I set forth do not require either constructions or geometrical or mechanical reasonings: but only algebraic operations, subject to a regular and uniform rule of procedure.” — Joseph-Louis Lagrange, Preface to “Mécanique Analytique,” 1815.
2. “The testing of this hypothesis, however, will be postponed until it is programmed for an electronic computer.” — Ailsa H. Land and Alison G. Doig (inventors of branch-and-bound), last line of: An Automatic Method of Solving Discrete Programming Problems, Econometrica, 1960, Vol. 28, No. 3, pp. 497–520.
3. “It is easy to see that…,” “it is easy to conclude that…,” etc. — Pierre-Simon Laplace, frequently in “Traité de Mécanique Céleste” (translated from the French).
4. “One would be able to draw thence well some corollaries that I omit for fear of boring you.” — Gabriel Cramer, Letter to Nicolas Bernoulli, 21 May 1728. Translated from “Die Werke von Jakob Bernoulli,” by R.J. Pulskamp.
5. “Two months after I made up the example, I lost the mental picture which produced it. I really regret this, because a lot of people have asked me your question, and I can’t answer.” — Alan J. Hoffman, private communication with J. Lee, August, 1994.
6. “Fourier himself made many attempts at inequalities, but without notable success.” — Gyula Farkas, “Über die Theorie der Einfachen Ungleichungen,” Journal für die Reine und Angewandte Mathematik, vol. 124:1–27 (translated from the German).
7. “The particular geometry used in my thesis was in the dimension of the columns instead of the rows. This column geometry gave me the insight that made me believe the Simplex Method would be a very efficient solution technique for solving linear programs. This I proposed in the summer of 1947 and by good luck it worked!” — George B. Dantzig, “Reminiscences about the origins of linear programming,” Operations Research Letters vol. 1 (1981/82), no. 2, 43–48.
8. “George would often call me in and talk about something on his mind. One day in around 1959, he told me about a couple of problem areas: something that Ray Fulkerson worked on, something else whose details I forget. In both cases, he was using a linear programming model and the simplex method on a problem that had a tremendous amount of data. Dantzig in one case, Fulkerson in another, had devised an ad hoc method of creating the data at the moment it was needed to fit into the problem. I reflected on this problem for quite awhile. And then it suddenly occurred to me that they were all doing the same thing! They were essentially solving a linear programming problem whose data - whose columns - being an important part of the data, were too many to write down. But you could devise a procedure for creating one when you needed it, and creating one that the simplex method would choose to work with at that moment. Call it the column-generation method. The immediate, lovely looking application was to the linear programming problem, in which you have a number of linear programming problems connected only by a small number of constraints. That fit in beautifully with the pattern. It was a way of decomposing such a problem. So we referred to it as the decomposition algorithm. And that rapidly became very famous.” — Philip Wolfe, interviewed by Irv Lustig ∼2003.
9. “So they have this assortment of widths and quantities, which they are somehow supposed to make out of all these ten-foot rolls. So that was called the cutting stock problem in the case of paper. So Paul [Gilmore] and I got interested in that. We struck out (failed) first on some sort of a steel cutting problem, but we seemed to have some grip on the paper thing, and we used to visit the paper mills to see what they actually did. And I can tell you, paper mills are so impressive. I mean they throw a lot of junk in at one end, like tree trunks or something that’s wood, and out the other end comes – swissssssh – paper! It’s one damn long machine, like a hundred yards long. They smell a lot, too. We were quite successful. They didn’t have computers; believe me, no computer in the place. So we helped the salesman to sell them the first computer.” — Ralph E. Gomory, interviewed by William Thomas, New York City, July 19, 2010.
10. “I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision processes. An interesting question is, Where did the name, dynamic programming, come from? The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word research. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term research in his presence. You can imagine how he felt, then, about the term mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning is not a good word for various reasons. I decided therefore to use the word ‘programming’. I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying. I thought, let’s kill two birds with one stone. Let’s take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it’s impossible to use the word dynamic in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It’s impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities.” — Richard E. Bellman, “Eye of the Hurricane: An Autobiography,” 1984.
11. “Perhaps even more than to the contact of mankind with nature, graph theory owes its existence to the contact of people with one another.” — Dénes König, “Theorie Der Endlichen Und Unendlichen Graphen,” 1936 (translated from the German).
The Afterward
Bibliography
[1] Stephen P. Bradley, Arnoldo C. Hax, and Thomas L. Magnanti. Applied Mathematical Pro-
gramming. Addison Wesley, 1977. Available at http://web.mit.edu/15.053/www/. [Cited
on page vii]
[2] Qi He and Jon Lee. Another pedagogy for pure-integer Gomory. RAIRO-Operations Research,
51(1):189–197, 2017. [Cited on pages 110 and 120]
[3] Jon Lee. A First Course In Combinatorial Optimization. Cambridge Texts in Applied Mathe-
matics. Cambridge University Press, Cambridge, 2004. [Cited on page 97]
[4] Jon Lee and Angelika Wiegele. Another pedagogy for mixed-integer Gomory. EURO Journal
on Computational Optimization, 5:455–466, 2017. [Cited on pages 110 and 120]
[5] Katta G. Murty. Linear And Combinatorial Programming. John Wiley & Sons Inc., New York,
1976. [Cited on page vii]
Index of definitions