05 1 Optimization Methods NDP
Hongwei Zhang
http://www.cs.wayne.edu/~hzhang
Acknowledgment: the slides are based on those from Drs. Yong Liu, Deep Medhi, and Micha Piro.
Optimization Methods
solution methods
brute-force, analytical, and heuristic solutions
linear/integer/convex programming
Outline
Linear programming
Integer/mixed integer programming
NP-Completeness
Branch-Bound
LP decomposition methods
Stochastic heuristics
Matlab optimization toolbox
Linear Programming - a problem and its solution

maximize    z = x1 + 3x2
subject to  -x1 + x2 ≤ 1
            x1 + x2 ≤ 2
            x1 ≥ 0, x2 ≥ 0

[figure: feasible region bounded by the lines -x1 + x2 = 1 and x1 + x2 = 2; the objective contours x1 + 3x2 = z for z = 0, 3, 5 show the maximum z = 5 attained at the vertex (1/2, 3/2)]
Extreme point (vertex): a feasible point that cannot be expressed as a convex
linear combination of other feasible points.
Linear Program in Standard Form

indices
    j = 1, 2, ..., n   variables
    i = 1, 2, ..., m   equality constraints
constants
    c = (c1, c2, ..., cn)   cost coefficients
    b = (b1, b2, ..., bm)   constraint right-hand sides
    A = (aij)   m × n matrix of constraint coefficients
variables
    x = (x1, x2, ..., xn)

example (general form, before conversion to standard form):
maximize    z = x1 + x2
subject to  2x1 + 3x2 ≤ 6
            x1 + 7x2 ≥ 4
            x1 + x2 = 3
            x1 ≥ 0, x2 unconstrained in sign
Basic facts of Linear Programming
feasible solution - satisfying constraints
basis matrix - a non-singular m × m submatrix of A
basic solution to an LP - the unique vector determined by a basis
matrix: n-m variables associated with columns of A not in the
basis matrix are set to 0, and the remaining m variables result
from the square system of equations
basic feasible solution - basic solution with all variables
nonnegative (at most m variables can be positive)
Theorem 1.
The objective function, z, assumes its maximum at an
extreme point of the constraint set.
Theorem 2.
A vector x = (x1, x2,...,xn) is an extreme point of the
constraint set if and only if x is a basic feasible
solution.
Capacitated flow allocation problem - LP formulation

variables
    xdp   flow realizing demand d on path p
constraints
    Σp xdp = hd,   d = 1, 2, ..., D
    Σd Σp δedp xdp ≤ ce,   e = 1, 2, ..., E
    (δedp = 1 if link e belongs to path p of demand d, 0 otherwise)
    flow variables are continuous and non-negative
Property:
    at most D + E non-zero flows,
    depending on the number of saturated links;
    if all links are unsaturated: only D flows!
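The link-path formulation above can be written down directly as an LP in matrix form. A minimal sketch with SciPy's linprog, for a small invented instance (D = 2 demands with volumes h = [3, 2], E = 2 links with capacities c = [4, 3], three candidate paths; the per-unit path costs are made up for illustration):

```python
from scipy.optimize import linprog

# Variables: x = [x11, x12, x21]  (flow of demand d on its path p).
# Path-link incidence: path (1,1) uses link 1; paths (1,2) and (2,1) use link 2.
A_eq = [[1, 1, 0],   # demand 1: x11 + x12 = 3
        [0, 0, 1]]   # demand 2: x21 = 2
b_eq = [3, 2]
A_ub = [[1, 0, 0],   # load on link 1 <= 4
        [0, 1, 1]]   # load on link 2 <= 3
b_ub = [4, 3]
cost = [1, 2, 1]     # hypothetical per-unit routing costs

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3)
# the solver returns a vertex solution: x = [3, 0, 2], total cost 5
```

Note that the solution has only 2 non-zero flows, consistent with the D + E bound above.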
Solution Methods for Linear Programs (1): Simplex Method

the optimum must be at the intersection of constraints
intersections are easy to find: change inequalities to equalities
jump from one vertex to another
efficient for most problems, but exponential time in the worst case

[figure: simplex path along the vertices of the feasible polytope, moving in the direction cT]
Solution Methods for Linear Programs (2): Interior Point Methods

Benefits
    scale better than simplex
    certificate of optimality
IPs and MIPs

Integer program (IP):
maximize    z = cx
subject to  Ax ≤ b, x ≥ 0   (linear constraints)
            x integer        (integrality constraint)

Mixed-integer program (MIP):
maximize    z = cx + dy
subject to  Ax + Dy ≤ b, x, y ≥ 0   (linear constraints)
            x integer                (integrality constraint)
Complexity: NP-Complete Problems
Problem Size n: number of variables, constraints, value bounds.
Time Complexity: asymptotic growth as n becomes large.
polynomial: n^k
exponential: k^n
Why?
most people accept that it is probably intractable
don't need to come up with an efficient algorithm
can instead work on approximation algorithms
How?
reduce (transform) a well-known NP-Complete
problem P into your own problem Q
if P reduces to Q, P is no harder to solve than Q
IP (and MIP) is NP-Complete
Decision problem:
Instance: given n, A, b, C, and linear function f(x).
Question: is there x ∈ X such that f(x) ≤ C?
a variant: branch-and-cut
stochastic heuristics
evolutionary algorithms, simulated annealing, etc.
Why LPs, MIPs, and IPs are so Important?

[figure: the same feasible region solved as an LP and as an IP - the LP optimum lies at a vertex of the polytope, while the IP optimum is one of the integer points inside it; branching on a fractional variable splits the region into two subregions]
Solution Methods for Integer Programs

[figure: cutting plane - an added cut tightens the LP relaxation toward the integer hull]
General B&B algorithm for the binary case

Problem P:
minimize    z = cx
subject to  Ax ≥ b
            xi ∈ {0,1},  i = 1, 2, ..., k
            xi ≥ 0,      i = k+1, k+2, ..., n

Subproblem P(NU, N0, N1) - relaxation with:
            xi = 0,  i ∈ N0
            xi = 1,  i ∈ N1
initially zbest = +∞
B&B for the binary case - algorithm

procedure BBB(NU, N0, N1)
begin
    solution(NU, N0, N1, x, z);   { solve the relaxation P(NU, N0, N1) }
    if NU = ∅ or for all i ∈ NU xi are binary then
        if z < zbest then begin zbest := z; xbest := x end
    else
        if z ≥ zbest then
            return   { bounding }
        else
        begin   { branching }
            choose i ∈ NU such that xi is fractional;
            BBB(NU \ {i}, N0 ∪ {i}, N1);  BBB(NU \ {i}, N0, N1 ∪ {i})
        end
end { procedure }
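The procedure can be sketched in Python for the knapsack instance worked through on the next slides. Since that instance is a maximization, the bounding test flips to z ≤ zbest; the greedy value/weight-ratio solution of the knapsack LP relaxation and the dict-based variable fixing are our implementation choices, not part of the slides.

```python
from fractions import Fraction

# Knapsack instance (maximization):
# maximize 8x1 + 11x2 + 6x3 + 4x4  s.t.  5x1 + 7x2 + 4x3 + 3x4 <= 14
values, weights, capacity = [8, 11, 6, 4], [5, 7, 4, 3], 14

def relaxation(fixed):
    """Solve the LP relaxation with some variables fixed to 0/1,
    greedily by value/weight ratio. Returns (z, x) or None if infeasible."""
    x, cap, z = [None] * len(values), Fraction(capacity), Fraction(0)
    for i, v in fixed.items():          # apply the fixed variables
        x[i] = Fraction(v)
        cap -= weights[i] * v
        z += values[i] * v
    if cap < 0:
        return None                     # fixed part already exceeds capacity
    free = sorted((i for i in range(len(values)) if x[i] is None),
                  key=lambda i: Fraction(values[i], weights[i]), reverse=True)
    for i in free:                      # fill greedily; at most one fractional
        x[i] = min(Fraction(1), cap / weights[i])
        cap -= x[i] * weights[i]
        z += x[i] * values[i]
    return z, x

best = {"z": Fraction(-1), "x": None}

def bbb(fixed):
    res = relaxation(fixed)
    if res is None:
        return                                   # infeasible subproblem
    z, x = res
    if z <= best["z"]:
        return                                   # bounding
    fractional = [i for i, xi in enumerate(x) if 0 < xi < 1]
    if not fractional:                           # integer solution found
        best["z"], best["x"] = z, [int(xi) for xi in x]
        return
    i = fractional[0]                            # branching variable
    bbb({**fixed, i: 1}); bbb({**fixed, i: 0})

bbb({})
# best["z"] = 21, attained at x = (0, 1, 1, 1)
```

Exact Fractions avoid floating-point trouble in the `0 < xi < 1` fractionality test.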
B&B - example

(IP)  maximize cx
      subject to Ax ≤ b, x ≥ 0 and integer
linear relaxation:
(LR)  maximize cx
      subject to Ax ≤ b, x ≥ 0

The optimal objective value for (LR) is greater than or equal to the optimal objective for (IP).
If (LR) is infeasible then so is (IP).
If (LR) is optimised by integer variables, then that solution is feasible and optimal for (IP).
If the cost coefficients c are integer, then the optimal objective for (IP) is less than or equal to the round-down of the optimal objective for (LR).
B&B - knapsack problem

maximize    8x1 + 11x2 + 6x3 + 4x4
subject to  5x1 + 7x2 + 4x3 + 3x4 ≤ 14
            xj ∈ {0,1},  j = 1, 2, 3, 4

(LR) solution: x1 = 1, x2 = 1, x3 = 0.5, x4 = 0, z = 22
no integer solution will have value greater than 22
branch on the fractional variable x3 by adding the constraint x3 = 0 or x3 = 1 to (LR):
    x3 = 0:  fractional, z ≈ 21.67  (x1 = 1, x2 = 1, x3 = 0, x4 = 0.667)
    x3 = 1:  fractional, z ≈ 21.86  (x1 = 1, x2 = 0.714, x3 = 1, x4 = 0)
B&B example cntd.

we know that the optimal integer solution is not greater than 21.86 (21 in fact)
we will take a subproblem and branch on one of its variables
- we choose an active subproblem (here: not chosen before)
- we choose the subproblem with the highest solution value
branching on x2 in the subproblem with x3 = 1 (z ≈ 21.86):
    x3 = 1, x2 = 0:  integer, z = 18  (x1 = 1, x2 = 0, x3 = 1, x4 = 1)
        no further branching, not active
    x3 = 1, x2 = 1:  fractional, z = 21.8  (x1 = 0.6, x2 = 1, x3 = 1, x4 = 0)
B&B example cntd.

branching on x1 in the subproblem with x3 = 1, x2 = 1 (z = 21.8):
    x3 = 1, x2 = 1, x1 = 0:  integer, z = 21  (x1 = 0, x2 = 1, x3 = 1, x4 = 1)
    x3 = 1, x2 = 1, x1 = 1:  infeasible  (x1 = 1, x2 = 1, x3 = 1, x4 = ?)
the remaining subproblem (x3 = 0, z ≈ 21.67) cannot yield a better solution than 21: bounding
z = 21 is optimal
B&B example - summary
Solve the linear relaxation of the problem. If the solution is integer,
then we are done. Otherwise create two new subproblems by branching
on a fractional variable.
A subproblem is not active when any of the following occurs:
you have already used the subproblem to branch on
all variables in the solution are integer
the subproblem is infeasible
you can bound the subproblem by a bounding argument.
Choose an active subproblem and branch on a fractional variable.
Repeat until there are no active subproblems.
Remarks
If x is restricted to integer (but not necessarily to 0 or 1), then if x = 4.27
you would branch with the constraints x ≤ 4 and x ≥ 5.
If some variables are not restricted to integer you do not branch on them.
B&B algorithm - comments

idea: add valid inequalities which define the facets of the integer polyhedron
generating the valid inequalities is problem-dependent, not based on general formulas as in the cutting plane method (e.g., Gomory fractional cuts)
LP decomposition methods

form the Lagrangian by relaxing the coupling constraints
rearranging terms, the relaxed problem decomposes
dual function W(λ)
sub-gradient of W(λ)
iterative process (5.4.4b)
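As a stand-in for the slide's equations, here is a minimal sketch of the sub-gradient iteration on a toy problem; the instance, step sizes, and names are all invented for illustration. The coupling constraint x1 + x2 ≥ 1 of the LP "minimize x1 + x2 subject to x1 + x2 ≥ 1, 0 ≤ xi ≤ 1" is relaxed with a multiplier λ ≥ 0; the relaxed problem then separates per variable, and W(λ) is maximized by projected sub-gradient steps:

```python
# Lagrangian dual of the toy LP:
#   W(lam) = min_{0<=x<=1} (1-lam)*x1 + (1-lam)*x2 + lam

def solve_relaxed(lam):
    """The relaxed problem separates per variable: set xi = 1
    when its reduced cost (1 - lam) is negative, else xi = 0."""
    x = [1.0 if (1.0 - lam) < 0 else 0.0 for _ in range(2)]
    w = sum((1.0 - lam) * xi for xi in x) + lam
    return x, w

lam, best_w = 0.0, float("-inf")
for t in range(1, 200):
    x, w = solve_relaxed(lam)
    best_w = max(best_w, w)        # best dual bound seen so far
    g = 1.0 - sum(x)               # sub-gradient of W at lam
    lam = max(0.0, lam + g / t)    # diminishing step, projected onto lam >= 0
# best_w converges to 1.0, the optimum of the original LP (no duality gap here)
```

The same skeleton applies to the capacitated flow problem: relaxing the link-capacity constraints with one multiplier per link makes the relaxed problem decompose per demand.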
Stochastic heuristics

Local Search
Simulated Annealing
Evolutionary Algorithms
Simulated Allocation
Tabu Search
Others: greedy randomized adaptive search
Local Search: steepest descent

minimize f(x)
start from an initial point xc = x0
iteratively improve the value f(xc) of the current state xc by replacing it with the point in its neighborhood that has the lowest value
stop when no further improvement is possible

[figure: descent from the starting point along the descend direction into a local minimum, while the global minimum lies elsewhere]
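The loop above can be sketched in a few lines; the objective, starting point, and neighborhood below are invented for illustration:

```python
def local_search(f, x0, neighbors):
    """Steepest descent: repeatedly move to the best neighbor
    until no neighbor improves on the current state."""
    x = x0
    while True:
        best = min(neighbors(x), key=f)
        if f(best) >= f(x):
            return x          # local minimum reached
        x = best

# Toy example: minimize f(x) = (x - 3)^2 over the integers,
# with neighborhood {x - 1, x + 1}.
f = lambda x: (x - 3) ** 2
result = local_search(f, 10, lambda x: [x - 1, x + 1])
# result = 3
```

On this convex toy function the local minimum is also global; on a multimodal function the loop stops at the first local minimum it reaches, which is exactly the weakness the figure illustrates.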
desired effect: helps escape local optima
adverse effect: might pass the global optimum after reaching it (easy to avoid by keeping track of the best-ever state)
Simulated annealing (SAN): basic idea

from the current state, pick a random successor state
if it has a better value than the current state, accept the transition, that is, make the successor the current state
otherwise, do not give up: flip a coin and accept the transition with a probability that is lower the worse the successor is
so we sometimes accept un-optimizing the value function a little, with non-zero probability
Simulated Annealing

Kirkpatrick et al. 1983:
begin
    choose an initial solution i ∈ S;
    select an initial temperature T > 0;
    while stopping criterion not true
        count := 0;
        while count < L
            choose randomly a neighbour j ∈ N(i);
            ΔF := F(j) - F(i);
            if ΔF ≤ 0 then i := j
            else if random(0,1) < exp(-ΔF / T) then i := j;   { Metropolis test }
            count := count + 1
        end while;
        reduce temperature (T := αT)
    end while
end
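A direct transcription of the pseudocode in Python; the cooling factor α, epoch length L, stopping temperature, and the toy objective are invented for illustration:

```python
import math
import random

def simulated_annealing(F, i0, neighbour, T=10.0, alpha=0.95, L=20, T_min=1e-3):
    """Kirkpatrick-style SA: always accept improving moves, accept worsening
    moves with probability exp(-dF/T) (Metropolis test), then cool T := alpha*T."""
    i, best = i0, i0
    while T > T_min:                      # stopping criterion
        for _ in range(L):
            j = neighbour(i)              # random neighbour of i
            dF = F(j) - F(i)
            if dF <= 0 or random.random() < math.exp(-dF / T):
                i = j                     # Metropolis test passed
            if F(i) < F(best):
                best = i                  # keep track of the best-ever state
        T *= alpha                        # reduce temperature
    return best

random.seed(0)                            # reproducible run
F = lambda x: (x - 3) ** 2                # toy objective over the integers
best = simulated_annealing(F, 50, lambda x: x + random.choice([-1, 1]))
```

Returning the best-ever state (rather than the final one) is the safeguard against passing the global optimum mentioned earlier.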
Simulated Annealing - limit theorem: with a sufficiently slow cooling schedule, the algorithm converges in probability to the set of globally optimal solutions.
Evolutionary Algorithm for the flow problem

[figure: a chromosome encodes a candidate solution as the vector of genes x = (x1, ..., xD), one gene per demand; each gene xd lists the flows of demand d on its candidate paths]
Evolutionary Algorithm for the flow problem cntd.

crossover of two chromosomes
    each gene of the offspring is taken from one of the parents
    for each d = 1, 2, ..., D:  xd := xd(1) with probability 0.5,
                                xd := xd(2) with probability 0.5
mutation of a chromosome
    for each gene, shift some flow from one path to another
    everything at random
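A minimal sketch of the two operators; the gene encoding (a list of per-path flows per demand) and the flow unit `delta` are our illustrative choices:

```python
import random

def crossover(parent1, parent2):
    """Uniform crossover: each gene (one demand's path-flow pattern)
    is copied from either parent with probability 0.5."""
    return [g1 if random.random() < 0.5 else g2
            for g1, g2 in zip(parent1, parent2)]

def mutate(chromosome, delta=1):
    """Shift `delta` units of flow between two random paths
    of one randomly chosen demand (gene)."""
    chromosome = [list(g) for g in chromosome]      # copy, keep parents intact
    gene = chromosome[random.randrange(len(chromosome))]
    src, dst = random.sample(range(len(gene)), 2)
    if gene[src] >= delta:                          # keep flows non-negative
        gene[src] -= delta
        gene[dst] += delta
    return chromosome

random.seed(1)
p1 = [[3, 0], [2, 2]]      # two demands, two candidate paths each
p2 = [[1, 2], [4, 0]]
child = mutate(crossover(p1, p2))
```

Note that both operators preserve each demand's total flow (here 3 and 4), so every offspring still satisfies the demand constraints.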
Simulated Allocation (SAL)
modular links and modular flows dimensioning

SAL: general scheme
allocate(x)
    randomly pick one non-allocated demand module
    allocate the demand to the shortest path
        link weight 0 if unsaturated
        link weight set to the link price if saturated
    increase link capacity by one module on saturated links
disconnect(x)
    randomly pick one allocated demand module
    disconnect it from the path it uses
    decrease link capacity by one module on links with empty link modules
Optimization packages
Optimization toolbox
linprog: solve linear programming problems
bintprog: solve binary integer programming problems
Example: linprog
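The original slide's MATLAB listing is not reproduced here; as an equivalent sketch, SciPy's linprog solving the introductory LP (maximizing z = x1 + 3x2 becomes minimizing -z, since linprog minimizes):

```python
from scipy.optimize import linprog

# maximize x1 + 3x2  s.t.  -x1 + x2 <= 1,  x1 + x2 <= 2,  x >= 0
res = linprog(c=[-1, -3],                     # minimize -(x1 + 3x2)
              A_ub=[[-1, 1], [1, 1]], b_ub=[1, 2],
              bounds=[(0, None), (0, None)])
# optimum at the vertex (1/2, 3/2) with z = 5, as in the graphical solution
```

MATLAB's linprog takes the same (c, A, b) data in the same minimization convention.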
Summary
Linear programming
Integer/mixed integer programming
NP-Completeness
Branch-Bound
LP decomposition methods
Stochastic heuristics
Matlab optimization toolbox
Assignment
Exercise #2
Exercises 5.2 and 5.4
Additional slides: CPLEX/AMPL
Solving LP/IP/MIP with
CPLEX-AMPL
CPLEX is one of the leading LP/IP/MIP optimization engines.
AMPL is a standard programming interface for
many optimization engines.
Student version (windows/unix/linux): up to 300 variables
Maximal Software has a free student version (up to 300 variables); it uses the CPLEX engine
Maximal's format is slightly different from the CPLEX format
Essential Modeling Language Features
Sets and indexing
Simple sets
Compound sets
Computed sets
Objectives and constraints
Linear, piecewise-linear
Nonlinear
Integer, network
. . . and many more features
Express problems the various ways that people do
Support varied solvers
CPLEX example
At CPLEX prompt,
Cube Network - formulation
parameters
links
demands
routes
incidences
flow variables
AMPL: the model (II)
Objective
Constraints
AMPL: the data