Lecture 5
Outline of the lecture
How do constraints influence the ability to minimize the objective function?
The concept of Lagrange multipliers
Feasible space
Active and inactive constraints
Necessary conditions
What we will learn:
Constrained optimum lies on the boundary of the feasible space.
Conditions for constrained local minimum; constraint qualification
Sensitivity of the constrained optimum to small changes in constraints.
Physical meaning of Lagrange multipliers.
ME 260 / G. K. Ananthasuresh, IISc Structural Optimization: Size, Shape, and Topology 2
Two variables and an equality constraint

$$\min_{x_1, x_2} f = -x_1 x_2 \quad \text{subject to} \quad h = x_1 + x_2 - 1 = 0$$

(The intent here is to maximize the product of two numbers such that their sum is equal to 1.) Eliminating $x_2 = 1 - x_1$ gives the equivalent unconstrained problem

$$\min_{x_1} f = -x_1 (1 - x_1)$$

To first order, about a candidate optimum $(x_1^*, x_2^*)$:

$$f(x_1, x_2) \approx f(x_1^*, x_2^*) + \left.\frac{\partial f}{\partial x_1}\right|_{x_1^*, x_2^*} \Delta x_1 + \left.\frac{\partial f}{\partial x_2}\right|_{x_1^*, x_2^*} \Delta x_2$$

$$h(x_1, x_2) \approx h(x_1^*, x_2^*) + \left.\frac{\partial h}{\partial x_1}\right|_{x_1^*, x_2^*} \Delta x_1 + \left.\frac{\partial h}{\partial x_2}\right|_{x_1^*, x_2^*} \Delta x_2 = 0$$

Since $h(x_1^*, x_2^*) = 0$, the constraint relates the two perturbations:

$$\Delta x_2 = -\frac{\left.\partial h / \partial x_1\right|_{x_1^*, x_2^*}}{\left.\partial h / \partial x_2\right|_{x_1^*, x_2^*}} \, \Delta x_1$$

Substituting into the expansion of $f$, the first-order change must be zero for any $\Delta x_1$:

$$\left\{ \left.\frac{\partial f}{\partial x_1}\right|_{x_1^*, x_2^*} - \left.\frac{\partial f}{\partial x_2}\right|_{x_1^*, x_2^*} \frac{\left.\partial h / \partial x_1\right|_{x_1^*, x_2^*}}{\left.\partial h / \partial x_2\right|_{x_1^*, x_2^*}} \right\} \Delta x_1 = 0$$

Defining the Lagrange multiplier

$$\lambda = -\frac{\left.\partial f / \partial x_2\right|_{x_1^*, x_2^*}}{\left.\partial h / \partial x_2\right|_{x_1^*, x_2^*}}$$

the necessary conditions take the symmetric form

$$\left.\frac{\partial f}{\partial x_1}\right|_{x_1^*, x_2^*} + \lambda \left.\frac{\partial h}{\partial x_1}\right|_{x_1^*, x_2^*} = 0, \qquad \left.\frac{\partial f}{\partial x_2}\right|_{x_1^*, x_2^*} + \lambda \left.\frac{\partial h}{\partial x_2}\right|_{x_1^*, x_2^*} = 0$$
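As a quick numerical check (not part of the original slides): for this example, stationarity of $L = -x_1 x_2 + \lambda(x_1 + x_2 - 1)$ gives $-x_2 + \lambda = 0$ and $-x_1 + \lambda = 0$, which together with feasibility form a linear system. A minimal sketch:

```python
import numpy as np

# Stationarity of L = -x1*x2 + lam*(x1 + x2 - 1):
#   dL/dx1 = -x2 + lam = 0
#   dL/dx2 = -x1 + lam = 0
# Feasibility:  x1 + x2 = 1
# Unknowns ordered as (x1, x2, lam).
A = np.array([[ 0.0, -1.0, 1.0],
              [-1.0,  0.0, 1.0],
              [ 1.0,  1.0, 0.0]])
b = np.array([0.0, 0.0, 1.0])
x1, x2, lam = np.linalg.solve(A, b)
print(x1, x2, lam)  # 0.5 0.5 0.5: equal numbers maximize the product
```

The solution $x_1^* = x_2^* = 1/2$ matches what elimination of $x_2$ gives directly.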
Short form

$$\min_{x_1, x_2, \ldots, x_n} f \quad \text{subject to} \quad h_j(x_1, x_2, \ldots, x_n) = 0, \quad j = 1, 2, \ldots, m$$

is written compactly as

$$\min_{\mathbf{x}} f(\mathbf{x}) \quad \text{subject to} \quad \mathbf{h}(\mathbf{x}) = \mathbf{0}$$

where $\mathbf{x} = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}^T$ and $\mathbf{h} = \begin{bmatrix} h_1 & h_2 & \cdots & h_m \end{bmatrix}^T$.

Can there be more constraints than variables?
m > n: No; feasible values may not exist. It is over-constrained.
m = n: Some discrete feasible values may exist, but one cannot do minimization.
m < n: This must be true in order to do minimization of the objective function.
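To illustrate the m = n case with an example of my own (not from the slides): with two variables and two independent linear equality constraints, the feasible set collapses to a single point, leaving no freedom to minimize over.

```python
import numpy as np

# Two constraints, two variables (m = n):
#   x1 + x2 = 1  and  x1 - x2 = 0
# The feasible "space" is the single point (0.5, 0.5).
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])
b = np.array([1.0, 0.0])
x = np.linalg.solve(A, b)
print(x)  # [0.5 0.5] -- one isolated feasible point; nothing left to optimize
```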
Feasible space; partitioning variables

If there are m constraints (note: m < n), we can choose only (n - m) variables freely because the remaining m variables can be found using the m equality constraints. So, we search in the (n - m)-dimensional feasible space: the reduced space that satisfies the constraints. When we say it is (n - m)-dimensional, we mean that m variables are somehow eliminated using the m equality constraints.

Let us partition the n variables into s (solution, or dependent) variables and d (decision, or independent) variables:

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \\ x_{m+1} \\ x_{m+2} \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_m \\ d_1 \\ d_2 \\ \vdots \\ d_{n-m} \end{bmatrix} = \begin{bmatrix} \mathbf{s} \\ \mathbf{d} \end{bmatrix}$$

Remember that we can take the inverse of the $m \times m$ matrix $\nabla_{\mathbf{s}} \mathbf{h}^T$ here only if it is not singular. This implies that the gradients of the constraints should be linearly independent. This is known as constraint qualification.
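The constraint-qualification check can be probed numerically. In this sketch (with an assumed pair of linear constraints, not taken from the slides), the constraint gradients are independent overall, yet one particular choice of s variables still gives a singular m x m block, so a different partition must be used:

```python
import numpy as np

# Assumed constraints (illustrative only):
#   h1 = x1 + x2 + x3 - 1 = 0
#   h2 = 2*x1 + 2*x2 + x3 - 2 = 0
# Constant Jacobian dh/dx (m = 2 rows, n = 3 columns):
J = np.array([[1.0, 1.0, 1.0],
              [2.0, 2.0, 1.0]])
m = J.shape[0]

# Constraint qualification: the gradient rows are linearly independent.
print(np.linalg.matrix_rank(J) == m)  # True

# But choosing s = (x1, x2) fails: that 2x2 block is singular...
print(abs(np.linalg.det(J[:, [0, 1]])) < 1e-12)  # True (bad partition)
# ...while s = (x1, x3) works:
print(abs(np.linalg.det(J[:, [0, 2]])) > 1e-12)  # True (valid partition)
```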
Reduced gradient

$$f(\mathbf{x}) - f(\mathbf{x}^*) \approx \nabla_{\mathbf{s}} f^T(\mathbf{x}^*) \, \Delta \mathbf{s}^* + \nabla_{\mathbf{d}} f^T(\mathbf{x}^*) \, \Delta \mathbf{d}^*$$

This should be zero for the necessary condition for a minimum. With

$$\Delta \mathbf{s}^* = -\left[ \nabla_{\mathbf{s}} \mathbf{h}^T(\mathbf{x}^*) \right]^{-1} \nabla_{\mathbf{d}} \mathbf{h}^T(\mathbf{x}^*) \, \Delta \mathbf{d}^*$$

we get

$$\nabla_{\mathbf{s}} f^T(\mathbf{x}^*) \, \Delta \mathbf{s}^* + \nabla_{\mathbf{d}} f^T(\mathbf{x}^*) \, \Delta \mathbf{d}^* = 0$$

$$-\nabla_{\mathbf{s}} f^T(\mathbf{x}^*) \left[ \nabla_{\mathbf{s}} \mathbf{h}^T(\mathbf{x}^*) \right]^{-1} \nabla_{\mathbf{d}} \mathbf{h}^T(\mathbf{x}^*) \, \Delta \mathbf{d}^* + \nabla_{\mathbf{d}} f^T(\mathbf{x}^*) \, \Delta \mathbf{d}^* = 0$$

$$\left\{ \nabla_{\mathbf{d}} f^T(\mathbf{x}^*) - \nabla_{\mathbf{s}} f^T(\mathbf{x}^*) \left[ \nabla_{\mathbf{s}} \mathbf{h}^T(\mathbf{x}^*) \right]^{-1} \nabla_{\mathbf{d}} \mathbf{h}^T(\mathbf{x}^*) \right\} \Delta \mathbf{d}^* = 0$$

The quantity in braces is the reduced gradient in the p = n - m dimensional space. Think of f as z(d), which depends on only the d variables.
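A small numerical sketch of the reduced gradient, using an assumed example that is not from the original slides (min f = x1^2 + x2^2 + x3^2 subject to x1 + x2 + x3 = 1, with s = x1 dependent and d = (x2, x3) independent):

```python
import numpy as np

# Example (assumed): min f = x1^2 + x2^2 + x3^2  s.t.  x1 + x2 + x3 = 1.
# Partition: s = (x1) dependent, d = (x2, x3) independent; m = 1, p = n - m = 2.
def grad_f(x):
    return 2.0 * x

x_star = np.array([1/3, 1/3, 1/3])   # candidate constrained minimum
dh_ds = np.array([[1.0]])            # m x m block of the constraint Jacobian
dh_dd = np.array([[1.0, 1.0]])       # m x p block

g = grad_f(x_star)
gs, gd = g[:1], g[1:]
# Reduced gradient: grad_d f - (dh/dd)^T [dh/ds]^{-T} grad_s f
reduced = gd - dh_dd.T @ np.linalg.solve(dh_ds.T, gs)
print(reduced)  # [0. 0.] -- zero at the constrained minimum, as required
```

At any feasible point that is not the minimum, the same expression comes out nonzero, which is exactly the search direction information a reduced-gradient method exploits.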
The multipliers, again

The reduced gradient being zero is the necessary condition. Defining

$$\boldsymbol{\lambda}^T = -\nabla_{\mathbf{s}} f^T(\mathbf{x}^*) \left[ \nabla_{\mathbf{s}} \mathbf{h}^T(\mathbf{x}^*) \right]^{-1}$$

the Lagrange multipliers appear again:

$$\nabla_{\mathbf{d}} f^T(\mathbf{x}^*) + \boldsymbol{\lambda}^T \nabla_{\mathbf{d}} \mathbf{h}^T(\mathbf{x}^*) = \mathbf{0}, \qquad \nabla_{\mathbf{s}} f^T(\mathbf{x}^*) + \boldsymbol{\lambda}^T \nabla_{\mathbf{s}} \mathbf{h}^T(\mathbf{x}^*) = \mathbf{0}$$

Notice that both equations have the same form; one is the gradient w.r.t. d and the other w.r.t. s. Compare with the two-variable case on slide 5 of this lecture. Same story here!
The Lagrangian

With the Lagrangian $L = f + \boldsymbol{\lambda}^T \mathbf{h}$, where $\boldsymbol{\lambda}^T$ is a row vector of m Lagrange multipliers, the necessary conditions for

$$\min_{\mathbf{x}} f(\mathbf{x}) \quad \text{subject to} \quad \mathbf{h}(\mathbf{x}) = \mathbf{0}$$

are

$$\frac{\partial f}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial h_j}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n$$

together with $\mathbf{h}(\mathbf{x}) = \mathbf{0}$. We have n scalar equations here, plus the m constraint equations.

Variables: n + m. Equations: n + m. So, we are fine.

Now add p inequality constraints:

$$\min_{\mathbf{x}} f(\mathbf{x}) \quad \text{subject to} \quad \mathbf{h}(\mathbf{x}) = \mathbf{0}, \quad \mathbf{g}(\mathbf{x}) \le \mathbf{0}$$

Variables: n + m + p. Equations: n + m + p. So, we are fine.

But we are not done yet. The Lagrange multipliers of the inequality constraints are restricted in sign. Let us discuss why.
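For a quadratic objective with linear equality constraints, the n + m equations above are linear and can be solved directly. A sketch with an assumed example (min f = x1^2 + x2^2 + x3^2 subject to x1 + x2 + x3 = 1; the example is mine, not from the slides):

```python
import numpy as np

# Necessary conditions for min x1^2 + x2^2 + x3^2  s.t.  x1 + x2 + x3 = 1:
#   2*xi + lam = 0  (i = 1, 2, 3)  and  x1 + x2 + x3 = 1
# -> an (n + m) x (n + m) linear system in (x1, x2, x3, lam).
n, m = 3, 1
A = np.zeros((n + m, n + m))
A[:n, :n] = 2.0 * np.eye(n)   # stationarity: gradient-of-f terms
A[:n, n:] = 1.0               # stationarity: lam * dh/dx terms
A[n:, :n] = 1.0               # feasibility: the constraint row
b = np.zeros(n + m)
b[n] = 1.0
sol = np.linalg.solve(A, b)
print(sol[:n], sol[n])  # x = [1/3 1/3 1/3], lam = -2/3
```

The count works out exactly as the slide says: n + m unknowns, n + m equations.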
Inequality constraint; simplest problem

[Figure: plot of f(x) vs. x, showing the feasible region on the x-axis and a constrained local minimum on the boundary of the feasible region.]

For $x^*$ to be a local minimum of

$$\min_{\mathbf{x}} f(\mathbf{x}) \quad \text{subject to} \quad \mathbf{h}(\mathbf{x}) = \mathbf{0}, \quad \mathbf{g}(\mathbf{x}) \le \mathbf{0}$$

the necessary conditions are

$$\frac{\partial f}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial h_j}{\partial x_i} + \sum_{k=1}^{p} \mu_k \frac{\partial g_k}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n$$

$$\mathbf{h}(\mathbf{x}^*) = \mathbf{0}, \qquad \mu_k g_k(\mathbf{x}^*) = 0, \qquad \mu_k \ge 0, \qquad g_k(\mathbf{x}^*) \le 0, \quad k = 1, 2, \ldots, p$$

Variables: n + m + p. Equations: n + m + p. Number of inequalities: 2p (the p conditions $\mu_k \ge 0$ and the p conditions $g_k \le 0$).

The first condition follows from the Lagrangian. These are the Karush-Kuhn-Tucker conditions. Karush had derived them in his master's thesis at the University of Chicago before Kuhn and Tucker.
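A one-variable check of the sign restriction (example assumed, not from the slides): for min f = (x - 2)^2 subject to g = x - 1 <= 0, the constraint is active at x* = 1 and the multiplier comes out non-negative, as the KKT conditions require.

```python
# min f = (x - 2)^2  s.t.  g = x - 1 <= 0.
# The unconstrained minimizer x = 2 is infeasible, so the constraint is
# active at x* = 1. Stationarity: df/dx + mu * dg/dx = 2*(x - 2) + mu = 0.
x_star = 1.0
mu = -2.0 * (x_star - 2.0)

print(mu)                    # 2.0 -- non-negative, as KKT requires
print(mu * (x_star - 1.0))   # 0.0 -- complementary slackness mu * g = 0
```

A negative mu at an active constraint would mean f could be decreased by moving into the feasible region, contradicting optimality; that is the geometric reason for the sign restriction.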
Main points of this lecture:
Feasible space
Reduced gradient with equality constraints
Necessary conditions
Lagrange multipliers
Constraint qualification