Chapter 9: Non-linear Programming

• Introduction
• Unconstrained Optimization: Single Variable
• Unconstrained Optimization: Multiple Variables
• Constrained NLPs: Lagrange Multipliers (reading)
• Constrained NLPs: The Kuhn-Tucker Conditions (reading)
• Ref: Chapter 21 of Taha's book
1. Introduction
• Definition of NLP: Let x = (x1, x2, …, xn)
  (NLP): Maximize f(x)
         Subject to gi(x) ≤ bi, i = 1, 2, …, m
  – Nonlinear objective function f(x) and/or nonlinear constraints gi(x).
  – Nonnegativity conditions xi ≥ 0 can be included by substituting xi = yi², i = 1, …, n.
• Global vs. local optima: Let x be a feasible solution; then
  – x is a global max if f(x) ≥ f(y) for every feasible y.
  – x is a local max if f(x) ≥ f(y) for every feasible y sufficiently close to x
    (i.e., xj − ε ≤ yj ≤ xj + ε for all j and some small ε > 0).
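As a quick illustration of this problem form (not taken from the slides), the short sketch below solves a small NLP numerically with SciPy; the objective and the single constraint are assumptions chosen for the demo.

```python
# Minimal sketch (illustrative problem, not from the slides):
#   maximize x1*x2  subject to  x1^2 + x2^2 <= 8
import numpy as np
from scipy.optimize import minimize

obj = lambda x: -(x[0] * x[1])  # maximize by minimizing the negative
con = {"type": "ineq", "fun": lambda x: 8 - (x[0] ** 2 + x[1] ** 2)}  # "ineq" means fun(x) >= 0

res = minimize(obj, x0=np.array([1.0, 1.0]), constraints=[con])
print(np.round(res.x, 3))  # expected near (2, 2), where x1*x2 = 4
```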
Local or Global Optimum?
[Figure: a curve of f(x) with one peak labeled "Local optimum" and a higher peak labeled "Global optimum"]
Concavity and Convexity
• Convex functions: f(λy + (1−λ)z) ≤ λf(y) + (1−λ)f(z) for all y, z and 0 ≤ λ ≤ 1.
  Geometrically: any point on a line segment connecting two points on the graph of f(x) lies on or above f(x).
  "Strict" convexity ↔ strict inequality for 0 < λ < 1.
• Concave functions: f(λy + (1−λ)z) ≥ λf(y) + (1−λ)f(z) for all y, z and 0 ≤ λ ≤ 1.
  Geometrically: any point on a line segment connecting two points on the graph of f(x) lies on or below f(x).
  "Strict" concavity ↔ strict inequality for 0 < λ < 1.
• A local max (min) of a concave (convex) function on a convex feasible region is also a global max (min) → convex programming.
• Given this, we can solve exactly: max (min) problems with a concave (convex) objective function and linear constraints, because we know how good the local optimal solutions are.
• Strict concavity (convexity) → the global optimum is unique.
[Figure: a concave curve and a convex curve of f(x)]
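A tiny numerical check of the defining inequality (the function f(x) = x² and the sampled points are assumptions made for the demo):

```python
# Sampling check of the convexity inequality
#   f(λy + (1-λ)z) <= λf(y) + (1-λ)f(z)
# for the convex function f(x) = x**2 (illustrative choice).
import random

f = lambda x: x ** 2
for _ in range(1000):
    y, z, lam = random.uniform(-5, 5), random.uniform(-5, 5), random.random()
    assert f(lam * y + (1 - lam) * z) <= lam * f(y) + (1 - lam) * f(z) + 1e-12
print("convexity inequality held on all sampled points")
```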
Concave or Convex?
[Figure: curves of f(x) labeled "Concave", "Neither", "Convex", and "Both!" (a linear function is both)]
2. Unconstrained Optimization:
Single Variable NLP
• Classical approach:
  max (min) f(x)
  s.t. x ∈ [a, b]
  An optimal solution is one of:
  – a boundary point,
  – a stationary point: a < x < b with f′(x) = 0 and f″(x) < 0 (> 0), or
  – a point where f′(x) does not exist.
• Direct search method: seeks the optimum of a unimodal function (one with at most one local optimum) on [a, b].
  – Step 0: initialization; the current interval is I0 = (xL, xR) = (a, b).
  – Step i: the current interval is Ii−1 = (xL, xR). Choose x1, x2 such that xL < x1 < x2 < xR.
    The next interval Ii is determined as follows (for maximization):
      if f(x1) > f(x2) then xL < x* < x2; set Ii = (xL, x2)
      if f(x1) < f(x2) then x1 < x* < xR; set Ii = (x1, xR)
      if f(x1) = f(x2) then x1 < x* < x2; set Ii = (x1, x2)
  – Terminate when the length of Ii is ≤ ε, where ε is a user-defined level of accuracy.
  (A sketch of this scheme, using the dichotomous rule for placing x1 and x2, follows below.)
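A minimal sketch of the interval-reduction scheme above, instantiated with the dichotomous rule (x1 and x2 placed Δ/2 on either side of the midpoint); the test function is an assumption chosen so the maximizer is known.

```python
# Sketch of dichotomous search for the maximum of a unimodal f on [a, b].
# The test function below is an illustrative assumption, not the slides' example.
def dichotomous_max(f, a, b, eps=0.05, delta=0.01):
    while (b - a) > eps:
        mid = (a + b) / 2.0
        x1, x2 = mid - delta / 2.0, mid + delta / 2.0
        if f(x1) > f(x2):
            b = x2            # the maximum lies in (a, x2)
        elif f(x1) < f(x2):
            a = x1            # the maximum lies in (x1, b)
        else:
            a, b = x1, x2     # the maximum lies in (x1, x2)
    return (a + b) / 2.0

print(dichotomous_max(lambda x: -(x - 2.0) ** 2, 0.0, 3.0))  # ~ 2.0
```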
Other Methods
• Dichotomous method: x1 and x2 are placed Δ/2 on either side of the midpoint of the current interval.
• Golden section method: x1 and x2 are placed so that one of them can be reused in the next iteration, requiring only one new function evaluation per iteration.
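A golden-section sketch under the same unimodality setting (the test function is again only an illustrative assumption):

```python
# Sketch of golden-section search for the maximum of a unimodal f on [a, b];
# each iteration reuses one interior point, so only one new evaluation is needed.
def golden_section_max(f, a, b, eps=0.05):
    r = 0.6180339887  # (sqrt(5) - 1) / 2
    x1, x2 = b - r * (b - a), a + r * (b - a)
    f1, f2 = f(x1), f(x2)
    while (b - a) > eps:
        if f1 > f2:                       # the maximum lies in (a, x2)
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
        else:                             # the maximum lies in (x1, b)
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
    return (a + b) / 2.0

print(golden_section_max(lambda x: -(x - 2.0) ** 2, 0.0, 3.0))  # ~ 2.0
```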
Example
• a = 0; b = 3; ε = 0.05.
• Solve 2 iterations of the dichotomous method with ε = 0.05 and Δ = 0.1.
• I0 = (0, 3)

Solution
• The maximum value of f(x) occurs at x = 2 (with Δ = 0.1).
3. Unconstrained Optimization:
Multiple Variables NLP (reading)
• Consider the following NLP: Max (Min) z = f(X), X ∈ Rⁿ.
• For an n-variable function f(X), X = (x1, x2, …, xn), the gradient vector of f is the vector of first partial derivatives of f(X) at a given point with respect to the n variables:
    ∇f = ( ∂f/∂x1, …, ∂f/∂xn )
• The Hessian matrix of f(X) is a compact way of summarizing the second partial derivatives of f(X):
        | ∂²f/∂x1²     ∂²f/∂x1∂x2   …   ∂²f/∂x1∂xn |
    H = | …            …            …   …          |
        | ∂²f/∂xn∂x1   ∂²f/∂xn∂x2   …   ∂²f/∂xn²   |
• Theorem 1: A necessary condition for X0 to be an extreme point of f(X) is that ∇f(X0) = 0 → stationary points.
• Theorem 2: A sufficient condition for a stationary point X0 to be a local minimum is that Hk(X0) > 0 for k = 1, 2, …, n, where Hk(X0) is the k-th leading principal minor determinant of the Hessian matrix evaluated at X0.
Unconstrained Optimization:
Multiple Variables NLP
• Theorem 3: If, for k = 1, 2, …, n, Hk(X0) ≠ 0 and has the same sign as (−1)^k, a stationary point X0 is a local maximum.
• Theorem 4: If Hn(X0) ≠ 0 and the conditions of Theorems 2 and 3 do not hold, a stationary point X0 is not a local extremum (minimum or maximum).
• If a stationary point X0 is not a local extremum, it is called a saddle point.
• If Hn(X0) = 0, then X0 may be a local extremum or a saddle point, and the preceding tests are inconclusive.
• Example: Consider the function
    f(x1, x2, x3) = x1 + 2x3 + x2x3 − x1² − x2² − x3²
  – The necessary condition ∇f(X0) = 0 gives
      ∂f/∂x1 = 1 − 2x1 = 0
      ∂f/∂x2 = x3 − 2x2 = 0
      ∂f/∂x3 = 2 + x2 − 2x3 = 0
    The solution of these simultaneous equations is X0 = (1/2, 2/3, 4/3).
  – The sufficient condition: the Hessian at X0 is
        | −2   0   0 |
    H = |  0  −2   1 |
        |  0   1  −2 |
    Its leading principal minor determinants are −2, 4, −6, which alternate in sign as (−1)^k, so by Theorem 3 X0 is a local maximum.
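A quick symbolic check of this example (a sketch assuming SymPy is available):

```python
# Sketch: verifying the stationary point and the Hessian test for
#   f = x1 + 2*x3 + x2*x3 - x1**2 - x2**2 - x3**2
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1 + 2*x3 + x2*x3 - x1**2 - x2**2 - x3**2
X = (x1, x2, x3)

grad = [sp.diff(f, v) for v in X]
stationary = sp.solve(grad, X)                    # {x1: 1/2, x2: 2/3, x3: 4/3}

H = sp.hessian(f, X)
minors = [H[:k, :k].det() for k in (1, 2, 3)]     # [-2, 4, -6]: signs alternate as (-1)^k
print(stationary, minors)                         # -> X0 is a local maximum (Theorem 3)
```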
Multiple Variables NLP-UC:
Gradient Search Method
• Gradient Search Method (steepest ascent method): the gradient of the function at a point indicates the direction of the fastest rate of increase (decrease).
• Let X0 be the initial point, and define ∇f(Xk) as the gradient of f at the k-th point Xk. The idea of the method is to determine a particular path p along which df/dp is maximized at a given point.
• Select Xk+1 such that Xk+1 = Xk + rk∇f(Xk), where rk is the optimal step size, chosen so that h(r) = f[Xk + r∇f(Xk)] is maximized.
• Terminate when the gradient vector becomes null (for convex or concave f(X) this identifies the optimum):
    rk∇f(Xk) → 0  ⇔  ∇f(Xk) = 0
• Excel's Solver uses gradient search.
[Figure: gradient search can converge to a local optimum rather than the global optimum]
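A minimal steepest-ascent sketch; the test function, the starting point, and the use of a bounded one-dimensional line search for rk are assumptions made for illustration, not read from the slides.

```python
# Sketch of steepest ascent with an (approximately) optimal step size per iteration.
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_ascent(f, grad, x0, tol=1e-6, max_iter=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # stop when the gradient is (nearly) null
            break
        # choose r_k to maximize h(r) = f(x + r*g) via a 1-D line search
        r = minimize_scalar(lambda r: -f(x + r * g), bounds=(0.0, 5.0), method="bounded").x
        x = x + r * g
    return x

# Illustrative concave quadratic (an assumption, with maximizer (1/3, 4/3)):
f = lambda x: 4*x[0] + 6*x[1] - 2*x[0]**2 - 2*x[0]*x[1] - 2*x[1]**2
grad = lambda x: np.array([4 - 4*x[0] - 2*x[1], 6 - 2*x[0] - 4*x[1]])
print(steepest_ascent(f, grad, [1.0, 1.0]))  # ~ [0.333, 1.333]
```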
Example
• Maximize f(X): […]
• The exact optimum is: […]
• The gradient is: […]
• Assume we start at X0 = (1, 1).
• We have: […]
• Thus: […]
4. Constrained NLPs: Lagrange Multipliers (reading)
• Consider the problem:
    Minimize Z = f(X)
    Subject to g(X) = 0
  The functions f(X) and g(X) are assumed to be twice continuously differentiable.
  Let L(X, λ) = f(X) − λg(X), where λ is the vector of Lagrange multipliers.
• The necessary condition: ∂L/∂X = 0 and ∂L/∂λ = 0.
• The sufficient condition: define the bordered Hessian matrix
    H^B = | 0    P |
          | Pᵀ   Q |      ((m+n) × (m+n))
  where P = ∂g(X)/∂X is m × n and Q = [ ∂²L(X, λ)/∂xi∂xj ] is n × n.
  Given the stationary point (X0, λ0), X0 is
  – a maximum point if, starting with the principal minor determinant of order (2m+1), the last (n−m) principal minor determinants of H^B form an alternating sign pattern starting with (−1)^(m+1);
  – a minimum point if, starting with the principal minor determinant of order (2m+1), the last (n−m) principal minor determinants of H^B have the sign of (−1)^m.
Principal Minor Example
Example
• Consider the problem: Minimize f(X) = x1² + x2² + x3²
  Subject to
    g1(X) = x1 + x2 + 3x3 − 2 = 0
    g2(X) = 5x1 + 2x2 + x3 − 5 = 0
• The Lagrangean function (n = 3, m = 2) is
    L(X, λ) = f(X) − λg(X)
             = (x1² + x2² + x3²) − λ1(x1 + x2 + 3x3 − 2) − λ2(5x1 + 2x2 + x3 − 5)
• The necessary conditions ∇L = 0 (∂L/∂X = 0 and ∂L/∂λ = 0) give the solution
    X0 = (x1, x2, x3) = (0.81, 0.35, 0.28);  λ = (λ1, λ2) = (0.0867, 0.3067)
• To show that this point is a minimum, consider the bordered Hessian
          | 0  0  1  1  3 |
          | 0  0  5  2  1 |
    H^B = | 1  5  2  0  0 |
          | 1  2  0  2  0 |
          | 3  1  0  0  2 |
  Since n − m = 1, we check only det(H^B). The required sign is that of (−1)^m = (−1)² > 0, and det(H^B) = 460 > 0, so X0 is a minimum point.
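A quick numerical cross-check of this example (a sketch; the SLSQP solver and the zero starting point are assumptions):

```python
# Sketch: re-solving the equality-constrained example with SciPy.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2 + x[2]**2
cons = [
    {"type": "eq", "fun": lambda x: x[0] + x[1] + 3*x[2] - 2},    # g1(X) = 0
    {"type": "eq", "fun": lambda x: 5*x[0] + 2*x[1] + x[2] - 5},  # g2(X) = 0
]
res = minimize(f, x0=np.zeros(3), constraints=cons, method="SLSQP")
print(np.round(res.x, 2))  # close to the slide's X0 = (0.81, 0.35, 0.28)
```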
5. Constrained NLPs:
The Kuhn-Tucker Conditions (reading)
• Consider the generalized nonlinear problem:
    Maximize Z = f(X)
    Subject to g(X) ≤ 0
  where λi is the Lagrange multiplier associated with constraint i and Si is the slack or surplus variable associated with constraint i, i = 1, 2, …, m.
• The necessary conditions (Kuhn-Tucker conditions):
    λ ≥ 0
    ∇f(X) − λ∇g(X) = 0
    λi gi(X) = 0,  i = 1, 2, …, m
    g(X) ≤ 0
  For the case of minimization, only the first condition changes, to λ ≤ 0.
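As a toy illustration of these conditions (the one-variable problem below is an assumption, not the slides' example), the stationarity and complementary-slackness equations can be solved symbolically and then filtered by the sign and feasibility requirements:

```python
# Sketch: K-T conditions for  maximize f(x) = -(x - 3)**2  subject to  g(x) = x - 2 <= 0
# (illustrative problem; for a maximum we require lam >= 0).
import sympy as sp

x, lam = sp.symbols('x lam', real=True)
f = -(x - 3)**2
g = x - 2

stationarity = sp.diff(f, x) - lam * sp.diff(g, x)   # grad f - lam * grad g = 0
compl_slack = lam * g                                # lam * g(x) = 0

candidates = sp.solve([stationarity, compl_slack], [x, lam], dict=True)
kkt_points = [s for s in candidates if s[lam] >= 0 and g.subs(s) <= 0]
print(kkt_points)  # [{x: 2, lam: 2}] -> the constrained maximum is at x = 2
```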
• The sufficient condition:
    Sense of optimization   Objective function   Solution space
    Maximization            Concave              Convex set
    Minimization            Convex               Convex set
Example
• Minimize f(X) = x1² + x2² + x3²
  Subject to
    g1(X) = 2x1 + x2 − 5 ≤ 0
    …
• The K-T conditions are built from the Lagrangean
    L(X, S, λ) = (x1² + x2² + x3²) − λ1(2x1 + x2 − 5 + S1²) − …
• The solution is:
    x1 = 1, x2 = 2, x3 = 0;  λ1 = λ2 = λ5 = 0, λ3 = −2, λ4 = −4
• Since the function f(X) is convex and the solution space g(X) ≤ 0 is also convex, L(X, S, λ) must be convex, and the resulting stationary point yields a global constrained minimum.
The General K-T Sufficient Conditions
• Consider the problem:
    (Maximize or Minimize) Z = f(X)
  subject to constraints gi(X), i = 1, 2, …, m, where λi is the Lagrangean multiplier and Si is the slack or surplus variable associated with constraint i.
    Sense of        Required conditions
    optimization    f(X)       gi(X)      λi
    Maximization    Concave    Convex     ≥ 0            (1 ≤ i ≤ r)
                               Concave    ≤ 0            (r+1 ≤ i ≤ p)
                               Linear     Unrestricted   (p+1 ≤ i ≤ m)
    Minimization    Convex     Convex     ≤ 0            (1 ≤ i ≤ r)
                               Concave    ≥ 0            (r+1 ≤ i ≤ p)
                               Linear     Unrestricted   (p+1 ≤ i ≤ m)
Other Methods
• Separable Programming
• Quadratic Programming
• Geometric Programming
• Linear Combinations Method
• SUMT Algorithm