Linear or Integer Programming
Linear+Integer Programming Outline

Linear Programming
– General formulation and geometric interpretation
– Simplex method
– Ellipsoid method
– Interior point methods
Integer Programming
– Various reductions of NP-hard problems
– Linear programming approximations
– Branch-and-bound + cutting-plane techniques

Applications of Linear Programming

1. A substep in most integer and mixed-integer linear programming (MIP) methods
2. Selecting a mix: oil mixtures, portfolio selection
3. Distribution: how much of a commodity should be distributed to different locations
4. Allocation: how much of a resource should be allocated to different tasks
5. Network flows
Algorithms for Linear Programming

• Simplex (Dantzig 1947)
• Ellipsoid (Khachiyan 1979)
  first algorithm known to be polynomial time
• Interior Point
  first practical polynomial-time algorithms
  – Projective method (Karmarkar 1984)
  – Affine method (Dikin 1967)
  – Log-barrier methods (Frisch 1977, Fiacco 1968, Gill et al. 1986)
Many of the interior point methods can be applied to nonlinear programs, but are not known to be polynomial time there.

State of the art

1 million variables
10 million nonzeros
No clear winner between Simplex and Interior Point
– Depends on the problem
– Interior point methods are subsuming more and more cases
– All major packages supply both
The truth: the sparse matrix routines make or break both methods.
The best packages are highly sophisticated.
15-853 Page9 15-853 Page10
Formulations

The two most common formulations:

Canonical form:            Standard form:
  maximize cTx               maximize cTx
  subject to Ax ≤ b          subject to Ax = b
             x ≥ 0                      x ≥ 0

Slack variables convert one form to the other, e.g.
  7x1 + 5x2 ≤ 7     becomes     7x1 + 5x2 + y1 = 7
  x1, x2 ≥ 0                    x1, x2, y1 ≥ 0
where y1 is the slack variable. More on slack variables later.

Geometric View

A polytope in n-dimensional space:
Each inequality corresponds to a half-space.
The "feasible set" is the intersection of the half-spaces.
This corresponds to a polytope.
Polytopes are convex: if x and y are in the polytope, so is the line segment joining them.
The optimal solution is at a vertex (i.e., a corner).
Simplex moves around on the surface of the polytope.
Interior-point methods move within the polytope.
The Simple Essence of Simplex

Input: max f(x) = cx
       s.t. x in P = {x : Ax ≤ b, x ≥ 0}

Consider the polytope P from canonical form as a graph G = (V,E) with
  V = polytope vertices,
  E = polytope edges.

1) Find any vertex v of P.
2) While there exists a neighbor u of v in G with f(u) > f(v), update v to u.
3) Output v.

Choice of neighbor if several u have f(u) > f(v)?

Optimality and Reduced Cost

The optimal solution must include a corner.

The reduced cost for a hyperplane at a corner is the cost of moving one unit away from the plane along its corresponding edge:
  ri = -z · ei
where z is the objective direction and ei is a unit step along the edge leaving hyperplane pi.

For maximization, if all reduced costs are non-negative, then we are at an optimal solution.

Finding the most negative reduced cost is one often-used heuristic for choosing an edge to leave on.
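The claim that the optimum sits at a corner can be checked by brute force on a tiny instance: enumerate every pair of tight constraints, keep the feasible intersection points (the vertices), and take the best. This is a sketch for illustration only (it is not the simplex method, and it scales combinatorially); the objective z = 2x1 + 3x2 and the constraints are the running example used later in these slides.

```python
# Brute-force vertex enumeration for max 2x1 + 3x2 s.t.
#   x1 - 2x2 <= 4,  2x1 + x2 <= 18,  x2 <= 10,  x >= 0.
# Every vertex is the intersection of two tight constraints.
from itertools import combinations
import numpy as np

c = np.array([2.0, 3.0])
# Rows: the three <= constraints, then -x1 <= 0 and -x2 <= 0.
A = np.array([[1, -2], [2, 1], [0, 1], [-1, 0], [0, -1]], dtype=float)
b = np.array([4, 18, 10, 0, 0], dtype=float)

best_val, best_x = -np.inf, None
for i, j in combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:      # parallel constraints: no vertex
        continue
    x = np.linalg.solve(M, b[[i, j]])     # intersection point
    if np.all(A @ x <= b + 1e-9):         # keep only feasible vertices
        val = c @ x
        if val > best_val:
            best_val, best_x = val, x

print(best_val, best_x)   # optimum 38 at the corner (4, 10)
```

The best vertex, (4, 10) with z = 38, matches the corner simplex reaches in the worked example below.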
Example

[Figure: the feasible region for z = 2x1 + 3x2 in the (x1, x2) plane. Simplex starts at a corner, moves along an entering edge in step 1, and departs to the next corner in step 2.]

Simplifying

Problem:
– The Ax ≤ b constraints are not symmetric with the x ≥ 0 constraints.
  We would like more symmetry.
Idea:
– Leave only inequalities of the form x ≥ 0.
  Use "slack variables" to do this.
Convert into form: maximize cTx
                   subject to Ax = b
                              x ≥ 0
Using Matrices

If before adding the slack variables A has size m × n, then after it has size m × (n + m). m can be larger or smaller than n.

          n columns    m columns
        [           | 1 0 0 … ]
  A  =  [  original | 0 1 0 … ]   m rows
        [  columns  | 0 0 1 … ]
                     slack variables

Assuming rows are independent, the solution space of Ax = b is an n-dimensional subspace.

Simplex Algorithm, again

1. Find a corner of the feasible region
2. Repeat
   A. For each of the n hyperplanes intersecting at the corner, calculate its reduced cost
   B. If they are all non-negative, then done
   C. Else, pick the most negative reduced cost
      This is called the entering plane
   D. Move along the corresponding line (i.e. leave that hyperplane) until we reach the next corner (i.e. reach another hyperplane)
      The new plane is called the departing plane
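The slack-variable conversion above is mechanical: append an m × m identity to A and one nonnegative slack per row. A minimal sketch, using the example constraints from these slides:

```python
# Converting canonical form (Ax <= b, x >= 0) to standard form
# (A'x' = b, x' >= 0) by appending one slack variable per row.
import numpy as np

A = np.array([[1.0, -2.0],
              [2.0,  1.0],
              [0.0,  1.0]])          # m = 3 rows, n = 2 variables
b = np.array([4.0, 18.0, 10.0])

m, n = A.shape
A_std = np.hstack([A, np.eye(m)])    # shape m x (n + m): [A | I]

# Any feasible x for Ax <= b extends to x' = (x, b - Ax) with A'x' = b.
x = np.array([4.0, 10.0])            # the optimal corner of the example
x_std = np.concatenate([x, b - A @ x])
assert A_std.shape == (m, n + m)
assert np.allclose(A_std @ x_std, b)
assert np.all(x_std >= 0)
```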
Corner

Columns x1 x2 x3 x4 x5:
   1   -2    1    0    0 |  4
   2    1    0    1    0 | 18
   0    1    0    0    1 | 10
  -2   -3    0    0    0 |  0

Corner

Columns x1 x3 x2 x4 x5:
  -.5  -.5    1    0    0 | -2
  2.5   .5    0    1    0 | 20
   .5   .5    0    0    1 | 12
 -3.5 -1.5    0    0    0 | -6

Corner

Columns x3 x2 x1 x4 x5:
   1   -2    1    0    0 |  4
  -2    5    0    1    0 | 10
   0    1    0    0    1 | 10
   2   -7    0    0    0 |  8

Corner

Columns x3 x4 x1 x2 x5:
   .2   .4    1    0    0 |  8
  -.4   .2    0    1    0 |  2
   .4  -.2    0    0    1 |  8
  -.8  1.4    0    0    0 | 22

[Each slide also shows the feasible region in the (x1, x2) plane, with the constraint lines labeled x3, x4, x5 and the current corner marked.]
Corner

Columns x4 x2 x3 x1 x5:
  -.5 -2.5    1    0    0 | -5
   .5   .5    0    1    0 |  9
   0    1    0    0    1 | 10
   1   -2    0    0    0 | 18

Note that in general there are (n+m choose m) corners.

Simplex Method Again

Once you have found a basic feasible solution (a corner), we can move from corner to corner by swapping columns and eliminating.

The tableau has the form
  [ F | I | b' ]
  [ r | 0 | z  ]
where F holds the n columns of the free variables (whose values are 0), I holds the columns of the basic variables, b' gives the values of the basic variables, r is the vector of reduced costs, and z is the current cost.

ALGORITHM
1. Find a basic feasible solution
2. Repeat
   A. If r (reduced cost) ≥ 0, DONE
   B. Else, pick column with most negative r
   C. Pick row with least positive b'/(selected column)
   D. Swap columns
   E. Use Gaussian elimination to restore form
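The tableau algorithm above can be sketched in code. One deliberate simplification: the slides swap columns to keep the identity block on the left, while this sketch keeps the columns fixed and tracks which variable is basic in each row, which is equivalent and easier to implement. The pivot choice (most negative reduced cost, least positive ratio) and the Gauss-Jordan step follow the slides; degenerate and unbounded cases are not handled.

```python
# A minimal tableau simplex for max c^T x s.t. Ax <= b, x >= 0 (b >= 0).
import numpy as np

def simplex(c, A, b):
    m, n = A.shape
    # Tableau: constraint rows [A | I | b], objective row [-c | 0 | 0].
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + m))        # the slack variables start basic
    while True:
        j = int(np.argmin(T[-1, :-1]))   # most negative reduced cost
        if T[-1, j] >= -1e-9:
            break                        # all reduced costs >= 0: optimal
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-9 else np.inf
                  for i in range(m)]
        i = int(np.argmin(ratios))       # least positive b'/column entry
        T[i] /= T[i, j]                  # Gauss-Jordan pivot on (i, j)
        for k in range(m + 1):
            if k != i:
                T[k] -= T[k, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    for i, j in enumerate(basis):
        x[j] = T[i, -1]
    return x[:n], T[-1, -1]

x, z = simplex(np.array([2.0, 3.0]),
               np.array([[1.0, -2.0], [2.0, 1.0], [0.0, 1.0]]),
               np.array([4.0, 18.0, 10.0]))
print(x, z)   # x1 = 4, x2 = 10, z = 38
```

Run on the deck's example, the pivots visit the same corners as the worked tableaus below and stop at z = 38.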
Tableau Method

C. Move along the corresponding line (i.e. leave that hyperplane) until we reach the next corner (i.e. reach another hyperplane).
   The new plane is called the departing plane.

   [ F | I | b' ]   For the entering column u, pick the row with the minimum
   [ r | 0 | z  ]   positive ratio bj'/uj; that row identifies the departing variable.

Tableau Method

D. Swap columns: exchange the entering column with the departing variable's column. The tableau is no longer in proper form.

E. Gauss-Jordan elimination restores proper form:
   [ F_{i+1} | I | b'_{i+1} ]
   [ r_{i+1} | 0 | z_{i+1}  ]
Example

x1 – 2x2 + x3 = 4           x1  x2  x3  x4  x5
2x1 + x2 + x4 = 18           1  -2   1   0   0 |  4
x2 + x5 = 10                 2   1   0   1   0 | 18
z – 2x1 – 3x2 = 0            0   1   0   0   1 | 10
                            -2  -3   0   0   0 |  0

Find corner: x1 = x2 = 0 (start)

Example

Entering column: x2 (most negative reduced cost, -3).
Ratios bj/vj: 4/(-2) = -2, 18/1 = 18, 10/1 = 10; the minimum positive ratio is 10, so x5 departs.

  x1  x2  x3  x4  x5       bj/vj
   1  -2   1   0   0 |  4    -2
   2   1   0   1   0 | 18    18
   0   1   0   0   1 | 10    10  ← min positive
  -2  -3   0   0   0 |  0
Example

Swap columns x2 and x5:

  x1  x5  x3  x4  x2
   1   0   1   0  -2 |  4
   2   0   0   1   1 | 18
   0   1   0   0   1 | 10
  -2   0   0   0  -3 |  0

Gauss-Jordan elimination restores proper form:

  x1  x5  x3  x4  x2
   1   2   1   0   0 | 24
   2  -1   0   1   0 |  8
   0   1   0   0   1 | 10
  -2   3   0   0   0 | 30

Example

Entering column: x1 (reduced cost -2).
Ratios: 24/1 = 24, 8/2 = 4, ∞; the minimum positive ratio is 4, so x4 departs.

  x1  x5  x3  x4  x2       bj/vj
   1   2   1   0   0 | 24    24
   2  -1   0   1   0 |  8     4  ← min positive
   0   1   0   0   1 | 10     ∞
  -2   3   0   0   0 | 30
Example

Swap columns x1 and x4:

  x4  x5  x3  x1  x2
   0   2   1   1   0 | 24
   1  -1   0   2   0 |  8
   0   1   0   0   1 | 10
   0   3   0  -2   0 | 30

Gauss-Jordan elimination:

  x4  x5  x3  x1  x2
  -.5 2.5   1   0   0 | 20
   .5 -.5   0   1   0 |  4
   0    1   0   0   1 | 10
   1    2   0   0   0 | 38

All reduced costs are now non-negative, so this corner is optimal: x1 = 4, x2 = 10, z = 38.

Simplex Concluding remarks

For dense matrices, takes O(n(n+m)) time per iteration.
Can take an exponential number of iterations.
In practice, sparse methods are used for the iterations.
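The worked example can be cross-checked with an off-the-shelf LP solver. A sketch using scipy (assumed available; `linprog` minimizes, so the objective is negated to maximize):

```python
# Cross-check of the worked example: max 2x1 + 3x2 s.t.
#   x1 - 2x2 <= 4,  2x1 + x2 <= 18,  x2 <= 10,  x >= 0.
from scipy.optimize import linprog

res = linprog(c=[-2, -3],                       # negate: linprog minimizes
              A_ub=[[1, -2], [2, 1], [0, 1]],
              b_ub=[4, 18, 10],
              bounds=[(0, None), (0, None)],    # x >= 0
              method="highs")
print(res.x, -res.fun)   # x1 = 4, x2 = 10, z = 38
```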
Duality

Primal (P):
  maximize z = cTx
  subject to Ax ≤ b
             x ≥ 0      (n equations, m variables)

Dual (D):
  minimize z = yTb
  subject to ATy ≥ c
             y ≥ 0      (m equations, n variables)

Duality Theorem: if x is feasible for P and y is feasible for D, then cx ≤ yb, and at optimality cx = yb.

Duality (cont.)

[Figure: the feasible objective values of the primal (maximization) and of the dual (minimization) approach each other and meet at the optimal solution for both.]

Quite similar to the duality of Maximum Flow and Minimum Cut.
Useful in many situations.
Duality Example

Primal:
  maximize z = 2x1 + 3x2
  subject to:
    x1 – 2x2 ≤ 4
    2x1 + x2 ≤ 18
    x2 ≤ 10
    x1, x2 ≥ 0

Dual:
  minimize z = 4y1 + 18y2 + 10y3
  subject to:
    y1 + 2y2 ≥ 2
    -2y1 + y2 + y3 ≥ 3
    y1, y2, y3 ≥ 0
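The duality theorem can be verified numerically on this example: solving both the primal and the dual should give the same objective value. A sketch with scipy (assumed available; the dual's ≥ constraints are negated into the ≤ form `linprog` expects, and `linprog`'s default bounds already enforce nonnegativity):

```python
# Check cx = yb at optimality for the duality example.
import numpy as np
from scipy.optimize import linprog

A = [[1, -2], [2, 1], [0, 1]]
b = [4, 18, 10]
c = [2, 3]

# Primal: maximize c^T x s.t. Ax <= b, x >= 0 (negate c: linprog minimizes).
primal = linprog(c=[-v for v in c], A_ub=A, b_ub=b, method="highs")

# Dual: minimize b^T y s.t. A^T y >= c, y >= 0 (flip signs for <= form).
At = np.array(A).T
dual = linprog(c=b, A_ub=-At, b_ub=[-v for v in c], method="highs")

print(-primal.fun, dual.fun)   # equal at optimality, as the theorem states
```

Both optima come out to 38, with dual solution y = (0, 1, 2): the first primal constraint is slack at the optimum, so its multiplier y1 is zero.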