NON-LINEAR PROGRAMMING
MULTIVARIABLE, CONSTRAINED
We will complete our investigation of the optimization of continuous, non-linear systems with the multivariable case.
THE BIG ENCHILADA!
min  f(x)
 x
s.t. h(x) = 0
     g(x) ≥ 0
     xmin ≤ x ≤ xmax
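This general constrained form maps directly onto off-the-shelf NLP solvers. As an illustration (the objective, constraints, and bounds below are made-up examples, not from these notes), SciPy's SLSQP solves a small instance:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance of the general form above (not from the notes):
#   min (x1-2)^2 + (x2-1)^2
#   s.t. h(x) = x1 - 2*x2    = 0     (equality)
#        g(x) = 2 - x1 - x2 >= 0     (inequality; SciPy's 'ineq' means >= 0)
#        0 <= x1, x2 <= 3            (box bounds)
f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2
h = lambda x: x[0] - 2.0 * x[1]
g = lambda x: 2.0 - x[0] - x[1]

res = minimize(f, x0=[0.5, 0.5], method="SLSQP",
               bounds=[(0.0, 3.0), (0.0, 3.0)],
               constraints=[{"type": "eq", "fun": h},
                            {"type": "ineq", "fun": g}])
# res.x approaches the constrained optimum (4/3, 2/3)
```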
Opportunity for optimization (Opt. Basics I: objective, feasible region, ...).

[Figure: distillation tower. Inputs: P, Xf, F, Tf, Tcw, Fcw; equipment: NT, NF, A, Ntubes, ...; adjusted flows: Freflux and Freboil, with Min ≤ Freflux ≤ Max and Min ≤ Freboil ≤ Max; outputs: XD, XB; concerns: operation, heat transfer, safety, weeping, pressure.]
Function values only

[Figure: objective contours (8.5, 8.9, 9, 11.3, 13) in the (x1, x2) plane with constraint g(x) > 0 and an infeasible point marked.]

What do we do now?
Function values only: COMPLEX

[Figure: COMPLEX search points in the (x1, x2) plane with constraint g(x) > 0 and an infeasible point marked.]

1. Cannot have equality constraints
2. Requires an initial feasible point
3. Very inefficient for a large number of variables
4. Does not effectively move along inequality constraints
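A minimal sketch of the COMPLEX (Box) direct-search idea: reflect the worst of a set of feasible points through the centroid of the others, clipping to the box bounds. The test function, bounds, and parameters are illustrative assumptions, and this bare-bones version can stall on degenerate configurations (practical codes add safeguards) — which illustrates the inefficiency noted above:

```python
import numpy as np

def complex_method(f, start, lo, hi, alpha=1.3, iters=200):
    """Box's COMPLEX sketch: needs only function values and a feasible start."""
    P = np.array(start, dtype=float)        # k >= n+1 feasible points (rows)
    for _ in range(iters):
        vals = np.array([f(p) for p in P])
        w = int(np.argmax(vals))            # index of the worst point
        centroid = (P.sum(axis=0) - P[w]) / (len(P) - 1)
        # reflect the worst point through the centroid, over-relaxed by alpha
        new = np.clip(centroid + alpha * (centroid - P[w]), lo, hi)
        while f(new) >= vals[w]:            # still worst: retreat toward centroid
            new = (new + centroid) / 2.0
            if np.allclose(new, centroid):
                break
        P[w] = new
    return P[int(np.argmin([f(p) for p in P]))]

# Illustrative quadratic with its optimum at (1, 2), inside the box
f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2
start = [[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]]
x_best = complex_method(f, start, lo=-10.0, hi=10.0)
```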
Sequential Linear Programming (SLP)
First order approximation

Linear Programming is very successful. Can we develop a method for NLP using a sequence of LP approximations?

[Figure: contours of increasing profit over (Variable x1, Variable x2) with the optimum marked.]
SLP, first order approximation

Original NLP:
min f(x)
 x
s.t. h(x) = bh
     g(x) ≥ bg
     xmin ≤ x ≤ xmax

LP subproblem at xk:
min f(xk) + ∇fT(xk)∆xk
s.t. h(xk) + ∇hT(xk)∆xk − bh = 0
     g(xk) + ∇gT(xk)∆xk − bg ≥ 0
     (∆xmin)k ≤ ∆xk ≤ (∆xmax)k
     xmin ≤ xk + ∆xk ≤ xmax
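The LP subproblem can be iterated with an off-the-shelf LP code. A minimal sketch using scipy.optimize.linprog on a made-up problem (the shrinking-box acceptance rule and tolerances are illustrative choices, not the notes' exact algorithm):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical NLP (not from the notes): min x1^2 + x2^2  s.t.  x1 + x2 = 2
# Known solution: x* = (1, 1).
f = lambda x: x[0]**2 + x[1]**2
grad_f = lambda x: 2.0 * x
h = lambda x: x[0] + x[1] - 2.0
grad_h = np.array([[1.0, 1.0]])

def slp(x, delta=0.5, tol=1e-8, max_iter=100):
    """SLP loop: linearize at xk, solve an LP in dx with a box |dx| <= delta."""
    for _ in range(max_iter):
        res = linprog(c=grad_f(x),                   # min  grad_f(xk)^T dx
                      A_eq=grad_h, b_eq=[-h(x)],     # s.t. h(xk) + grad_h dx = 0
                      bounds=[(-delta, delta)] * 2)  #      box on dx
        if res.status != 0:
            delta *= 0.5
            continue
        dx = res.x
        if f(x + dx) < f(x) - 1e-12:   # improving step: accept
            x = x + dx
        else:                          # no improvement: shrink the box (BOX rule)
            delta *= 0.5
        if delta < tol:
            break
    return x

x_opt = slp(np.array([2.0, 0.0]))
```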
[Figure sequence: successive SLP iterations over (Variable x1, Variable x2). Legend: • = starting point; lines = linearized constraints; box = limitation on ∆x. Each LP solution gives an improvement and a new linearization point.]
Convert the inequalities to equalities using slacks. Love that Taylor Series!

s.t. h(xk) + ∇hT(xk)∆xk − b = Spi − Sni    (penalty Σi (Spi + Sni) added to the objective)
     (∆xmin)k ≤ ∆xk ≤ (∆xmax)k
     xmin ≤ xk + ∆xk ≤ xmax
SLP, first order approximation

s.t. h(xk) + ∇hT(xk)∆xk − b = Spi − Sni
     (∆xmin)k ≤ ∆xk ≤ (∆xmax)k
     xmin ≤ xk + ∆xk ≤ xmax

BOX: The bounds on ∆x must be reduced to converge to an interior optimum.

See: Baker, T.E. and L.S. Lasdon, "Successive Linear Programming at Exxon," Management Science, 31(3), 264-274 (1985)
SLP, first order approximation

Good Features
1. Uses reliable LP codes
2. Handles both equality and inequality constraints
3. Requires only first derivatives
4. Uses an advanced basis for all iterations
5. Separates linear and non-linear terms; only the non-linear terms must be updated
First order approximation
SLP - Successive Linear Programming

Poor Features
1. Follows an infeasible path (for non-linear constraints)
2. Performance is often poor for problems that
   - are highly non-linear
   - have many non-linear variables
   - have a solution not at the constraints
First order approximation
SLP - Successive Linear Programming: Current Status

[Figure: pooling/blending example. Feed V1: 3% sulfur, 6 $/BL; feed V2: 1% sulfur, 16 $/BL; direct feed F: 2% sulfur, 10 $/BL. V1 and V2 mix in a pool (x = % sulfur; no change in inventory allowed). Blend 1: sulfur ≤ 2.5%, 0 ≤ B1 ≤ 100, 9 $/BL. Blend 2: sulfur ≤ 1.5%, 0 ≤ B2 ≤ 200, 10 $/BL.]
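One standard way to pose this pooling example as an NLP (a sketch: the flow variables p1, p2 into the blends, f1, f2 from F, and the balance equations are a reconstruction of the classic pooling formulation, not taken from the slide):

```latex
\begin{aligned}
\max\;& 9B_1 + 10B_2 - 6V_1 - 16V_2 - 10F \\
\text{s.t.}\;& V_1 + V_2 = p_1 + p_2 &&\text{(no change in pool inventory)}\\
& 3V_1 + 1V_2 = x\,(V_1 + V_2) &&\text{(pool sulfur balance; bilinear in } x\text{)}\\
& B_1 = p_1 + f_1,\qquad B_2 = p_2 + f_2,\qquad F = f_1 + f_2\\
& x\,p_1 + 2f_1 \le 2.5\,B_1 &&\text{(Blend 1 sulfur} \le 2.5\%\text{)}\\
& x\,p_2 + 2f_2 \le 1.5\,B_2 &&\text{(Blend 2 sulfur} \le 1.5\%\text{)}\\
& 0 \le B_1 \le 100,\quad 0 \le B_2 \le 200,\quad V_1, V_2, F, p_1, p_2, f_1, f_2 \ge 0
\end{aligned}
```

The products x·p1 and x·p2 make the problem non-linear (and non-convex), which is why an SLP-type approach is natural here.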
Second order approximation

Let's quickly review key results from Optimization Basics III.

min f(x)
 x
s.t. h(x) = 0
     g(x) ≥ 0
     xmin ≤ x ≤ xmax

L(x, λ, u) = f(x) + Σj λj hj(x) + Σk uk [gk(x)]

Stationarity: ∇x,λ,u L(x, λ, u) = 0
Curvature: [∇²xx L(x, λ, u)], restricted to h = 0 and active g = 0, is positive definite
Second order approximation

∆x = −[∇²x f(xk)]⁻¹ ∇x f(xk)
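Applied literally to a hypothetical smooth function (an illustrative example, not from the notes), the Newton step above iterates as:

```python
import numpy as np

# Hypothetical test function (not from the notes): f(x) = (x1-1)^4 + (x2+2)^2
# Minimum at (1, -2).
grad = lambda x: np.array([4.0 * (x[0] - 1.0)**3, 2.0 * (x[1] + 2.0)])
hess = lambda x: np.array([[12.0 * (x[0] - 1.0)**2, 0.0],
                           [0.0,                    2.0]])

x = np.array([3.0, 0.0])
for _ in range(30):
    dx = -np.linalg.solve(hess(x), grad(x))  # dx = -[Hess f(xk)]^-1 grad f(xk)
    x = x + dx
# x approaches (1, -2); the quartic direction converges only linearly here
# because the Hessian is singular at the solution.
```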
Second order approximation

L(x, λ, u) = f(x) + Σj λj hj(x) + Σk uk [gk(x)]

An equality constraint reduces the two-variable problem to a single-variable one:

max f(x1, x2)  s.t.  h(x1, x2) = 0    →    max f2(x1)
 x                                          x1

[Figure: surface f(x) over (x1, x2); all points on the drawn curve satisfy h(x1, x2) = 0, and following the curve traces out the reduced function f2(x1).]
max f(x)
 x
s.t. h(x) = 0

df = [∂f/∂x1 ⋯ ∂f/∂xm][dx1 ⋯ dxm]T  (Basic)
   + [∂f/∂xm+1 ⋯ ∂f/∂xn][dxm+1 ⋯ dxn]T  (Non-Basic)

df = (∇xB f)T dxB + (∇xNB f)T dxNB
Reduced Gradient - Second order approximation

max f(x)
 x
s.t. h(x) = 0

dh(x) = [∂hi/∂xj, basic columns j = 1..m] dxB + [∂hi/∂xj, non-basic columns j = m+1..n] dxNB

Is it true that we have satisfied the equations? To satisfy them, set dh = 0:

dxB = −(∇xB h)⁻¹ (∇xNB h) dxNB

Substituting into df:

df = −(∇xB f)T (∇xB h)⁻¹ (∇xNB h) dxNB + (∇xNB f)T dxNB
Rearranging, we obtain df/dxNB, which is the famous REDUCED GRADIENT!

df/dxNB |h=0 = (∇xNB f) − [(∇xB h)⁻¹ ∇xNB h]T (∇xB f)

max f(x1, x2)
 x
s.t. h(x1, x2) = 0

[Figure: surface f(x) over (x1, x2) with the equality constraint and the current point marked. From the current point, draw (a) the unconstrained gradient and (b) the reduced gradient.]
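Numerically, the reduced gradient is easy to form for a small example. Here is a sketch for a hypothetical problem (f = x1² + x2², h = x1 + x2 − 2, with x2 taken as the basic variable and x1 as non-basic; none of this is from the notes):

```python
import numpy as np

def reduced_gradient(x):
    """df/dxNB|h=0 = grad_NB f - [ (grad_B h)^-1 grad_NB h ]^T grad_B f."""
    grad_f = 2.0 * x                 # [df/dx1, df/dx2] for f = x1^2 + x2^2
    dh_dxB = np.array([[1.0]])       # dh/dx2 (basic)     for h = x1 + x2 - 2
    dh_dxNB = np.array([[1.0]])      # dh/dx1 (non-basic)
    gB, gNB = grad_f[1:], grad_f[:1]
    return gNB - (np.linalg.inv(dh_dxB) @ dh_dxNB).T @ gB

rg_away = reduced_gradient(np.array([1.5, 0.5]))  # feasible, not optimal: 2.0
rg_opt = reduced_gradient(np.array([1.0, 1.0]))   # constrained optimum: 0.0
```

A zero reduced gradient at a feasible point signals a constrained stationary point, which is exactly the stationarity condition reviewed earlier.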
Active Set: We do not know which inequality constraints will be active at the solution. Therefore, we allow constraints to change (active/inactive) status during the solution.

At an iteration, we divide the degrees of freedom (non-basic variables) into two groups:
1. Defined by an active inequality (become basic)
2. "Free" or unconstrained (superbasic) - the search space for the optimization

[Figure: contours (5, 6) with constraint boundary B. Which constraints are active at each iteration?]

Active Set: The active set can change as the method performs many iterations.
[Exercise: For the distillation tower (with controllers PC-1, LC-1, and LC-2), determine the dimension of the reduced space.]
[Figure: line search along the step direction, from 0 to the optimum step size α*. How do we use this within the NLP?]

Trust-region subproblem:

min f(xk) + ∇x f(xk)T [s] + 1/2 [s]T ∇²x f(xk) [s]
 s
s.t. ‖s‖ ≤ ∆k
s.t. ‖s‖ ≤ ∆k    (vector norm, e.g. the 2-norm: ‖x‖ = √(x1² + x2² + ⋯))

[Figure: trust region around the current point over (x1, x2), with constraints g(x) > 0 and h(x) = 0.]

• What are the advantages?
• Are there disadvantages?
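A crude way to honor the ‖s‖ ≤ ∆k bound (an illustrative sketch, not the notes' algorithm; practical codes solve the subproblem more carefully, e.g. by dogleg or nearly exact trust-region methods) is to compute the Newton step and scale it back to the boundary when it is too long:

```python
import numpy as np

def trust_region_step(g, H, delta):
    """Approximate  min_s g^T s + 0.5 s^T H s  s.t. ||s||_2 <= delta."""
    s = -np.linalg.solve(H, g)           # unconstrained Newton step
    norm = np.linalg.norm(s)
    if norm <= delta:
        return s                         # full step fits inside the region
    return s * (delta / norm)            # otherwise scale back to the boundary

g = np.array([2.0, -4.0])                # illustrative gradient
H = 2.0 * np.eye(2)                      # illustrative positive-definite Hessian
s = trust_region_step(g, H, delta=1.0)   # Newton step (-1, 2) scaled to length 1
```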
Second order approximation

min f(x)
 x
s.t. h(x) = 0
     g(x) ≥ 0
     xmin ≤ x ≤ xmax

Key elements: Reduced Gradient, Line Search, Active Set, Feasible Path

Augmented Lagrangian:
L(x, λ) = f(x) + Σj λj hj(x) + 1/2 ρ hT(x) h(x)
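The penalty-augmented Lagrangian above suggests the classic method of multipliers: minimize L(x, λ) in x, then update λ by a first-order rule. A minimal sketch on a made-up equality-constrained problem (the problem, ρ, and iteration count are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical problem (not from the notes): min x1^2 + x2^2  s.t.  x1 + x2 = 2
# Known solution: x* = (1, 1) with multiplier lambda* = -2.
f = lambda x: x[0]**2 + x[1]**2
h = lambda x: x[0] + x[1] - 2.0

lam, rho = 0.0, 10.0
x = np.zeros(2)
for _ in range(20):
    # L(x, lam) = f(x) + lam*h(x) + (1/2)*rho*h(x)^2
    L = lambda x, lam=lam: f(x) + lam * h(x) + 0.5 * rho * h(x)**2
    x = minimize(L, x).x          # inner unconstrained minimization
    lam = lam + rho * h(x)        # first-order multiplier update
# x approaches (1, 1) and lam approaches -2 as the iterations proceed
```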