11 - Static Optimization Subject To Inequality Constraints

AE-777 (Lecture No. 11)
1. This lecture completes the discussion of constrained static optimization.

2. Optimization with respect to static variables subject to inequality constraints is discussed.

3. As an illustration of direct search algorithms, the Simplex nongradient method is discussed for finding minimum points of non-smooth functions.
Inequality Constraints
Most of the practical minimization problems
include inequality constraints on the control
and state variables. Such constraints provide
a bounded or feasible region for the solution of
the minimization problem.

Consider general inequality constraints of the form
f (x, u) ≤ 0 , (1)
while minimizing L(x, u) with respect to u. Extending the concept of Lagrange multipliers, λ, we adjoin L with the inequality constraint function into the following augmented objective function:

J(x, u) = L(x, u) + λT f (x, u) . (2)

There are two possibilities for a minimum point, (x̂, û) (if it exists), i.e., either

f (x̂, û) < 0 , (3)


or
f (x̂, û) = 0 . (4)
In the first case (Eq.(3)), the constraint is inactive, hence it can be ignored by putting λ = 0 in Eq.(2), which is treated as the unconstrained minimization problem.

In the second case (Eq.(4)), the constraint is active and the minimum point (if it exists) lies on the constraining boundary. A small but arbitrary control variation, δu, about the minimum point, (x̂, û), would result in either an increase of, or no change in, L:
δL = (∂L/∂u) δu ≥ 0 , (5)
where δu must satisfy the constraint, Eq.(1),
with λ > 0:
λT δf = λT (∂f/∂u) δu ≤ 0 . (6)
Since δu is arbitrary, there are only the following two possibilities of reconciling Eqs.(5) and (6), i.e., either we should have (with λ > 0)
∂L/∂u = −λT ∂f/∂u , (7)

or

∂L/∂u = 0 ; ∂f/∂u = 0 . (8)
The three possibilities represented by Eqs.(3),
(7), and (8), can be expressed in the following
compact form:
∂L/∂u + λT ∂f/∂u = 0 , (9)

where

λ = 0 if f(x, u) < 0 ; λ > 0 if f(x, u) = 0 . (10)
Eq.(9) is the necessary condition for minimization, and must be solved in an iterative manner to yield a minimum point (if it exists).
In summary, whenever inequality constraints are specified, they define a feasible region in which the search for any minimum points is to be performed. Such a search is not a trivial problem, and usually requires a numerical procedure called nonlinear programming.
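As a minimal illustration of condition (10), the following Python sketch (an addition to these notes, not part of the lecture; the quadratic L and the bound c are hypothetical choices) decides whether a scalar constraint f(u) = u − c ≤ 0 is active or inactive at the minimum of L(u) = (u − 2)², and recovers λ from Eq.(9) in the active case:

```python
# Hypothetical illustration of condition (10): minimize L(u) = (u - 2)^2
# subject to a single constraint f(u) = u - c <= 0, for two choices of c.
def L(u):  return (u - 2.0)**2
def dL(u): return 2.0 * (u - 2.0)

def solve(c):
    u = 2.0                 # unconstrained stationary point of L
    if u - c <= 0:          # constraint inactive: f(u_hat) < 0, so lambda = 0
        return u, 0.0
    u = c                   # constraint active: minimum lies on the boundary f = 0
    lam = -dL(u)            # from dL/du + lambda * df/du = 0 with df/du = 1
    return u, lam

print(solve(3.0))   # (2.0, 0.0): the bound u <= 3 is inactive
print(solve(1.0))   # (1.0, 2.0): the bound u <= 1 is active, with lambda = 2 > 0
```

Note that the recovered multiplier is positive exactly when the constraint is active, as Eq.(10) requires.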

If a point falls at the boundary of the feasible region, then it may not satisfy the necessary and sufficient conditions for minimization, yet it could be a minimum point simply because it has the smallest numerical value of the performance index in the feasible region.

The search for a minimum in the presence of inequality constraints could yield one of the following results:

1. The minimum point lies inside the feasible region, where both the necessary and sufficient conditions of minimization are satisfied.
2. The minimum point lies on a boundary of the feasible region and satisfies the necessary (but not sufficient) conditions of minimization. For example, we may have a singular point at which an inequality constraint supplies the additional information for determining the minimality of the given point.

3. The minimum point lies on a boundary of the feasible region, but does not satisfy the necessary conditions of minimization.
Example 1
For the minimization of

L(u) = u³ − 3u² + 1
with respect to u ∈ R, and subject to

0 ≤ u ≤ 1
find the minimum points, û (if any).

The inequality constraints can be expressed as follows:

f(u) = (−u, u − 1)T ≤ (0, 0)T ,
with the necessary condition for minimization
given by
∂L/∂u + λT ∂f/∂u = 0 ,
where
λ = 0 if f(u) < 0 ; λ > 0 if f(u) = 0
The given function, L(u), has two stationary
points, u∗ = 0 and u∗ = 2, but only the point
u∗ = 0 lies in the feasible region. Since Luu < 0
at u∗ = 0, it is a maximum point of L(u).

A plot of L(u) reveals that L(u) attains its smallest value in the feasible region at u = 1, which is at its boundary. Hence û = 1 is the minimum point of L(u), which satisfies the necessary condition with

λ = (0, 3)T .
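Example 1 can be checked numerically; the short sketch below (an illustrative addition, not part of the lecture) scans the feasible interval for the smallest value of L and recovers the multiplier of the active constraint f₂ = u − 1 from the necessary condition:

```python
# Numerical check of Example 1: L(u) = u^3 - 3u^2 + 1 on the feasible region [0, 1].
def L(u):  return u**3 - 3*u**2 + 1
def dL(u): return 3*u**2 - 6*u

# A grid scan of [0, 1] confirms the smallest value occurs at the boundary u = 1.
grid = [i / 1000.0 for i in range(1001)]
u_hat = min(grid, key=L)
print(u_hat)        # 1.0

# The active constraint is f2 = u - 1 = 0, so Eq.(9) gives lambda2 = -dL(1).
lam2 = -dL(1.0)
print(lam2)         # 3.0
```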
Example 2
Consider the minimization of

L(u) = 2u⁴ − 5u³ + 3u² − 5u


with respect to u ∈ R, subject to the following
constraints:

u > 1; u<2

From Example 3 of Lecture 7, we find that the only stationary point of L(u) is u∗ = 1.65. We also find that the stationary point satisfies the sufficient condition,

L′′(u∗) = 24(1.65)² − 30 × 1.65 + 6 = 21.84 > 0


Since the point u∗ = 1.65 lies within the feasible region given by the inequality constraints, and satisfies the sufficient condition of minimization, it is the minimum point of L(u). In this case, the inequality constraints are inactive.
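The stationary point quoted in Example 2 can be reproduced numerically, for instance by bisection on L′(u) over the feasible interval; the following sketch is an illustrative addition to the notes (the iteration count is an arbitrary choice):

```python
# Reproducing Example 2: find the stationary point of
# L(u) = 2u^4 - 5u^3 + 3u^2 - 5u inside 1 < u < 2 by bisection on L'(u).
def dL(u):  return 8*u**3 - 15*u**2 + 6*u - 5
def d2L(u): return 24*u**2 - 30*u + 6

a, b = 1.0, 2.0                 # dL(1) = -6 < 0 and dL(2) = 11 > 0 bracket a root
for _ in range(60):             # bisection down to machine precision
    m = 0.5 * (a + b)
    if dL(a) * dL(m) <= 0:
        b = m
    else:
        a = m
u_star = 0.5 * (a + b)
print(round(u_star, 2))         # 1.65
print(d2L(u_star) > 0)          # True: the sufficient condition L'' > 0 holds
```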
Example 3
Consider the minimization of

L(u) = (u1 − u2²)(u1 − 3u2²)
with respect to u = (u1, u2)T ∈ R2, subject to
the following constraint:

u1 > −1/4; u2 > −1/4; u1 ≤ 0; u2 ≤ 0

From Example 4 of Lecture 7, we find that u∗ = (0, 0)T is the only stationary point of L(u). Since one of the eigenvalues of Luu is zero at the stationary point, the sufficient condition of minimization is not satisfied, and the stationary point, u∗ = (0, 0)T, is a singular point.

However, since the feasible region defined by the inequality constraints is such that the singular point lies on its boundary, f(u∗) = 0, and L(u) achieves its smallest value L(u∗) = 0 in the feasible region at that point, it follows that the singular point, u∗ = (0, 0)T, is the minimum point.

In this case, the inequality constraint is said to be active because it determines the minimum point.
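A quick numerical check of Example 3 (an illustrative addition; the grid resolution is an arbitrary choice) confirms that L(u) never drops below zero over the feasible region and attains 0 at the origin:

```python
# Checking Example 3: L(u) = (u1 - u2^2)(u1 - 3 u2^2) is nonnegative on the
# feasible region -1/4 < u1, u2 <= 0, with its smallest value 0 at the origin.
def L(u1, u2): return (u1 - u2**2) * (u1 - 3*u2**2)

pts = [-0.25 + 0.25 * (i + 1) / 100.0 for i in range(100)]  # grid in (-1/4, 0]
vals = [L(u1, u2) for u1 in pts for u2 in pts]
print(min(vals) >= 0.0)   # True: no feasible grid point goes below zero
print(L(0.0, 0.0))        # 0.0
```

For u1 ≤ 0 and u2 ≤ 0 both factors of L are nonpositive, so their product is nonnegative, which is what the scan confirms.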
Example 4
For the minimization of

L(u) = (u1, u2, u3) |  1  −1  0 | | u1 |
                    | −1   2  1 | | u2 |
                    |  0   1  1 | | u3 |
 

with respect to u = (u1, u2, u3)T ∈ R3, and subject to

u1 ≥ 0; u2 ≥ 0; u3 ≥ 0
find the minimum points, û, (if any).

Since the given function L(u) is a quadratic function of u, its only stationary point is u∗ = (0, 0, 0)T. If no constraints are present, u∗ = (0, 0, 0)T is a singular point of L(u), because one eigenvalue of Luu is zero.

However, the inequality constraints give a feasible region of minimization, and L(u) achieves its smallest possible value in the feasible region at u∗ = (0, 0, 0)T. Hence û = (0, 0, 0)T is the only minimum point of L(u), subject to the inequality constraints.
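The two claims of Example 4 can be verified with a short sketch (an addition to the notes; the grid is an arbitrary choice): the matrix is singular, so one eigenvalue is zero, and the quadratic form stays nonnegative over the feasible region:

```python
# Checking Example 4: the matrix A is singular (so u* = 0 is a singular point),
# while L(u) = u^T A u stays nonnegative over the feasible region u >= 0.
A = [[1, -1, 0],
     [-1, 2, 1],
     [0, 1, 1]]

def det3(M):   # determinant of a 3x3 matrix by cofactor expansion
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def L(u):      # quadratic form u^T A u
    return sum(u[i] * A[i][j] * u[j] for i in range(3) for j in range(3))

print(det3(A))            # 0: one eigenvalue of A is zero
grid = [0.2 * i for i in range(6)]   # each u_k in {0, 0.2, ..., 1.0}
vals = [L((a, b, c)) for a in grid for b in grid for c in grid]
print(min(vals))          # 0.0, attained at u = (0, 0, 0)
```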
Nongradient Search: Simplex Method
The Simplex method* is a direct search algorithm for finding a minimum point of a function f(u) : Rn → R with respect to u ∈ Rn in some feasible space. The search begins by forming a Simplex, which is a generalized triangle in the n-dimensional space, and determining the value of f(u) at each vertex of the triangle. Let the vertices be given by u1, u2, and u3. Then the evaluation of f at the vertices gives three values arranged in increasing order:

B = u1 ; G = u2 ; W = u3 , where f(u1) ≤ f(u2) ≤ f(u3) (11)
where B stands for the best vertex (having the lowest value, f(B)), G stands for the good vertex (with the next lowest value, f(G)), and W stands for the worst vertex (with the highest value, f(W)).
* Nelder, J. A., and Mead, R., "A Simplex Method for Function Minimization", The Computer Journal, Vol. 7, 1965, pp. 308-313.
The Simplex method is a nongradient search,
which works without the need of taking the
gradient, fu = ∇f , to find the minimum point.

Mid-Point of Good-Best Line

The next configuration of the Simplex is selected by replacing the point W by a new point u3, where f(u3) ≤ f(W). This is done by sliding the vertex W towards the mid-point M of the line joining B and G (see Fig. 1):

M = (1/2)(B + G) (12)
which implies that the coordinates of M are the average of those of B and G:

u3 = (1/2)(u1 + u2) (13)

Reflection across Good-Best Line

If f(M) ≤ f(W), then it is possible that a lower value of f can be found on the other side of the line BG from W. Therefore, W is now reflected across BG, through M, to the reflected point R such that the distance, d, of R from M is the same as that from W to M (Fig. 1):

R = M + (M − W ) = 2M − W (14)

Extension
If f(R) < f(W), then the reflected point R is extended a further distance d away from W by moving it to E (see Fig. 2), where

E = R + (R − M) = 2R − M (15)

Contraction
If f(R) = f(W), then R must be replaced by another point. It is possible that f(M) < f(R), but M cannot be selected as a vertex of the Simplex BMG, because the three points B, M, and G lie on a straight line. Let us take the midpoints C1 and C2 of the line segments WM and MR, respectively (see Fig. 3). The smaller of f(C1) and f(C2) yields the new vertex C (either C1 or C2) of the Simplex BGC:

C1 = (1/2)(W + M) (16)

C2 = (1/2)(R + M) (17)

Shrink towards B
If f(C) ≥ f(W), then the points G and W are shrunk towards B by replacing G with M, and W with S, which is the midpoint of the segment BW (see Fig. 4):

S = (1/2)(W + B) (18)

Thus each new Simplex is formed by replacing W by a new vertex, until the triangle becomes very small, and converges towards a single point, B.
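The steps above can be collected into a small program. The following Python sketch is an illustrative addition, not the lecture's own code: it follows Eqs. (12)-(18) for n = 2, with the acceptance tests for the reflected and extended points filled in where the notes leave them implicit; the test function and iteration count are assumptions.

```python
# A 2-D implementation of the simplex search described above. The acceptance
# tests for R and E, the iteration count, and the test function are assumptions.
def simplex_min(f, v1, v2, v3, iters=200):
    verts = [list(v1), list(v2), list(v3)]

    def mid(p, q):   # midpoint of two vertices
        return [0.5 * (p[k] + q[k]) for k in range(2)]

    for _ in range(iters):
        verts.sort(key=f)                       # order as B, G, W  -- Eq. (11)
        B, G, W = verts
        M = mid(B, G)                           # good-best midpoint -- Eq. (12)
        R = [2*M[k] - W[k] for k in range(2)]   # reflection         -- Eq. (14)
        if f(R) < f(W):
            E = [2*R[k] - M[k] for k in range(2)]   # extension      -- Eq. (15)
            verts[2] = E if f(E) < f(R) else R
        else:
            C1, C2 = mid(W, M), mid(R, M)       # contraction points -- Eqs. (16)-(17)
            C = C1 if f(C1) < f(C2) else C2
            if f(C) < f(W):
                verts[2] = C
            else:                               # shrink towards B   -- Eq. (18)
                verts[1] = M
                verts[2] = mid(W, B)
    verts.sort(key=f)
    return verts[0]

# Hypothetical smooth test function with its minimum at (1, 2):
f = lambda v: (v[0] - 1.0)**2 + (v[1] - 2.0)**2
B = simplex_min(f, (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(B)   # close to [1.0, 2.0]
```

Note that no gradient of f is ever evaluated, which is what makes the method applicable to the non-smooth function of Example 5 below.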
Example 5
Consider the minimization of

L(u) = u1 sin u2 − u2 ,  (u1 > 0, u2 > 0)
L(u) = −u1 sin u2 + u2 , (u1 ≤ 0, u2 ≤ 0)

with respect to u = (u1, u2)T ∈ R2, and subject to

−1 ≤ u1 ≤ 1 ; −2 ≤ u2 ≤ 2

Find the minimum value of the function by the Simplex direct search method.

Since the given function, L(u), has a discontinuous partial derivative, Lu, at u = (0, 0)T, the necessary conditions of minimization can't be applied. Hence a direct search for the minima is the only alternative.

Taking the Simplex approach, we first form a triangle with the following vertices, and arrange them according to the value of L(u) on them:

B = u1 = (−1, −1)T ; L(u1) = −1.8415
G = u2 = (0, −1)T ; L(u2) = −1
W = u3 = (1, 1)T ; L(u3) = −0.1585
The midpoint M of the good-best line is the following:

M = (1/2)(B + G) = (−0.5, −1)T ; L(M) = −1.4207
Since L(M) < L(W), a lower value of L(u) may be obtained by reflecting W across the line BG to the new vertex R, which is given by

R = 2M − W = (−2, −3)T
However, since the inequality constraints will be violated by taking R beyond the feasible space, we must restrict R to be on the boundary of the feasible space:

R = (−1, −2)T ; L(R) = −2.9093


Hence, the new Simplex is given by the following triangle:

B = (−1, −2)T ; L(B) = −2.9093
G = (−1, −1)T ; L(G) = −1.8415
W = (0, −1)T ; L(W) = −1
We now shrink the Simplex towards B by replacing G with M = (1/2)(B + G) = (−1, −1.5)T, and W with S = (1/2)(W + B) = (−0.5, −1.5)T.

Based upon the values of L(u) on the new vertices, we arrange them as follows:

B = (−1, −2)T ; L(B) = −2.9093
G = (−1, −1.5)T ; L(G) = −2.4975
W = (−0.5, −1.5)T ; L(W) = −1.9987
Once again, the reflection across the BG line is performed to yield:

M = (1/2)(B + G) = (−1, −1.75)T ; L(M) = −2.7340

R = 2M − W = (−1.5, −2)T

But this reflected point is not feasible since it would violate the constraints, hence we take R = (−1, −2)T, which is the same as B. Hence, reflection has failed to update the Simplex. We now shrink the Simplex towards B by replacing G with M = (1/2)(B + G) = (−1, −1.75)T, and W with S = (1/2)(W + B) = (−0.75, −1.75)T. Based upon the values of L(u) on the new vertices, we arrange them as follows:

B = (−1, −2)T ; L(B) = −2.9093
G = (−1, −1.75)T ; L(G) = −2.7340
W = (−0.75, −1.75)T ; L(W) = −2.4880
Shrinking the Simplex again, we have

B = (−1, −2)T ; L(B) = −2.9093
G = (−1, −1.875)T ; L(G) = −2.8291
W = (−0.875, −1.875)T ; L(W) = −2.7098
After shrinking the Simplex several more times towards B, both G and W are seen to converge to B:

B = (−1, −2)T ; L(B) = −2.9093
G = (−1, −1.999)T ; L(G) = −2.9087
W = (−0.999, −1.999)T ; L(W) = −2.9078
and the search is ended with the minimum
point being

û = (−1, −2)T ; L(û) = −2.9093
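The vertex values quoted above can be reproduced directly from the piecewise definition of L(u); the sketch below is an illustrative addition (for convenience it applies the second branch to all points outside the first region, which covers every vertex used in this example):

```python
# Re-computing two of the vertex values quoted in Example 5.
import math

def L(u1, u2):
    if u1 > 0 and u2 > 0:
        return u1 * math.sin(u2) - u2
    return -u1 * math.sin(u2) + u2   # the (u1 <= 0, u2 <= 0) branch

print(round(L(-1.0, -1.0), 4))   # -1.8415  (initial best vertex B)
print(round(L(-1.0, -2.0), 4))   # -2.9093  (the minimum point found)
```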


Exercises

(1) For the minimization of

L(u) = (1/3)u³ + (5/2)u² + 4u + 1
with respect to u ∈ R, and subject to

u ≥ 0; u≤1
find the minimum points, û, (if any).

(2) For the minimization of

L(u) = −3u³
with respect to u ∈ R, and subject to

u≤0
find the minimum points, û, (if any).
(3) For the minimization of

L(u) = u³ − 3u² + 3u − 4
with respect to u ∈ R, and subject to

u ≥ 1; u<5
find the minimum points, û, (if any).

(4) For the minimization of

L(u) = u1² − 2u1u2 + 4u2²
with respect to u = (u1, u2)T ∈ R2, and
subject to

u1 ≥ 1; u2 ≥ 2
find the minimum points, û, (if any).

(5) For the minimization of

L(u) = −u1² + 2u1u2 + 3u2²
with respect to u = (u1, u2)T ∈ R2, and
subject to
u1 ≤ 0
find the minimum points, û, (if any).
