Constrained Optimization Rel. 01

This document summarizes the key concepts and theorems of constrained optimization. It defines what it means for a function to have a local minimum or maximum point under a set of constraints. The document then presents four theorems that establish conditions for a point to be an optimal solution when the objective function is constrained. Specifically, the theorems introduce Lagrangian functions and show that at an optimal point the gradients of the objective and constraint functions must be linearly related.


Notes of Mathematics:

Elements of Constrained Optimization

Roberto Monte

October 26, 2011

Abstract. These notes are still a work in progress and are intended for internal use. Please do not cite or quote.


Let $D$ be an open subset of $\mathbb{R}^N$, let $f \in C^1(D; \mathbb{R})$, let $g_j \in C^1(\mathbb{R}^N; \mathbb{R})$ for $j = 1, \ldots, J$, let $G \equiv \{x \in \mathbb{R}^N \mid g_j(x) = 0,\ j = 1, \ldots, J\}$, and let $x^* \in D \cap G$.

Definition 1 We say that $f$ has a local minimum [resp. maximum] point at $x^*$ under the constraint $G$, if there exists $r > 0$ such that $f(x) \geq f(x^*)$ [resp. $f(x) \leq f(x^*)$], for all $x \in D \cap G \cap B(x^*; r)$.
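As a simple illustration, take $N = 2$, $f(x_1, x_2) = x_1 + x_2$, and the single constraint $g_1(x_1, x_2) = x_1^2 + x_2^2 - 1 = 0$, so that $G$ is the unit circle. By the Cauchy--Schwarz inequality,
\[
x_1 + x_2 \leq \sqrt{2\,(x_1^2 + x_2^2)} = \sqrt{2}, \qquad (x_1, x_2) \in G,
\]
with equality at $x^* = (1/\sqrt{2}, 1/\sqrt{2})$. Hence $f$ has a global, and so in particular local, maximum point at $x^*$ under the constraint $G$, and likewise a minimum point at $-x^*$.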

Theorem 2 Assume the function $f$ has a local minimum or maximum point at $x^*$ under the constraint $G$, that is, there exists $r > 0$ such that
\[
f(x^*) = \max\{f(x),\ x \in D \cap G \cap B(x^*; r)\} \equiv \max_{g_1(x) = 0, \ldots, g_J(x) = 0} \{f(x),\ x \in D \cap B(x^*; r)\},
\]
and $\operatorname{rank}(J_{x^*}(g_1, \ldots, g_J)) = J \leq N - 1$. Then, introducing the Lagrangian function $L \colon D \times \mathbb{R}^J \to \mathbb{R}$ given by
\[
L(x, \lambda) \overset{\mathrm{def}}{=} f(x) - \sum_{j=1}^{J} \lambda_j g_j(x), \qquad (x, \lambda) \in D \times \mathbb{R}^J,
\]
there exists a unique $\lambda^* = (\lambda_1^*, \ldots, \lambda_J^*) \in \mathbb{R}^J$ such that $L$ has a critical point at $(x^*, \lambda^*)$.

Let $D$ be an open subset of $\mathbb{R}^2$, let $f \in C^1(D; \mathbb{R})$, let $g \in C^1(\mathbb{R}^2; \mathbb{R})$, let $G \equiv \{x \in \mathbb{R}^2 \mid g(x) \leq 0\}$, and let $x^* \in D \cap G$.

Theorem 3 Assume the function $f$ has a local maximum point at $x^*$ under the constraint $G$, that is, there exists $r > 0$ such that
\[
f(x^*) = \max\{f(x),\ x \in D \cap G \cap B(x^*; r)\} \equiv \max_{g(x) \leq 0} \{f(x),\ x \in D \cap B(x^*; r)\},
\]
and $\operatorname{rank}(J_{x^*}(g)) = 1$. Then, introducing the Lagrangian function $L \colon D \times \mathbb{R} \to \mathbb{R}$ given by
\[
L(x, \lambda) \overset{\mathrm{def}}{=} f(x) - \lambda g(x), \qquad (x, \lambda) \in D \times \mathbb{R},
\]
there exists $\lambda^* \in \mathbb{R}$ such that the following conditions are fulfilled:

optimality: $\partial_{x_n} L(x^*, \lambda^*) = 0$, for $n = 1, 2$;
slackness: $\lambda^* g(x^*) = 0$;
multiplier feasibility: $\lambda^* \geq 0$;
constraint feasibility: $g(x^*) \leq 0$.

Let $D$ be an open subset of $\mathbb{R}^N$, let $f \in C^1(D; \mathbb{R})$, let $g_j, h_k \in C^1(\mathbb{R}^N; \mathbb{R})$ for $j = 1, \ldots, J$, $k = 1, \ldots, K$, let $G \equiv \{x \in \mathbb{R}^N \mid g_j(x) \leq 0,\ j = 1, \ldots, J\}$, $H \equiv \{x \in \mathbb{R}^N \mid h_k(x) = 0,\ k = 1, \ldots, K\}$, and let $x^* \in D \cap G \cap H$.

Theorem 4 Assume the function $f$ has a local maximum point at $x^*$ under the constraint $G \cap H$, that is, there exists $r > 0$ such that
\[
f(x^*) = \max\{f(x),\ x \in D \cap G \cap H \cap B(x^*; r)\} \equiv \max_{\substack{g_1(x) \leq 0, \ldots, g_J(x) \leq 0 \\ h_1(x) = 0, \ldots, h_K(x) = 0}} \{f(x),\ x \in D \cap B(x^*; r)\},
\]

and
\[
\operatorname{rank} \begin{pmatrix}
\partial_{x_1} g_1(x^*) & \cdots & \partial_{x_N} g_1(x^*) \\
\vdots & \ddots & \vdots \\
\partial_{x_1} g_J(x^*) & \cdots & \partial_{x_N} g_J(x^*) \\
\partial_{x_1} h_1(x^*) & \cdots & \partial_{x_N} h_1(x^*) \\
\vdots & \ddots & \vdots \\
\partial_{x_1} h_K(x^*) & \cdots & \partial_{x_N} h_K(x^*)
\end{pmatrix} = J_b + K,
\]
where $J_b$ is the number of binding inequality constraints. Then, introducing the Lagrangian function $L \colon D \times \mathbb{R}^J \times \mathbb{R}^K \to \mathbb{R}$, given by
\[
L(x, \lambda, \mu) = f(x) - \sum_{j=1}^{J} \lambda_j g_j(x) - \sum_{k=1}^{K} \mu_k h_k(x), \qquad (x, \lambda, \mu) \in D \times \mathbb{R}^J \times \mathbb{R}^K,
\]

there exist $(\lambda^*, \mu^*) \in \mathbb{R}^J \times \mathbb{R}^K$, $\lambda^* = (\lambda_1^*, \ldots, \lambda_J^*)$, $\mu^* = (\mu_1^*, \ldots, \mu_K^*)$, such that the following conditions are fulfilled:

optimality: $\partial_{x_n} L(x^*, \lambda^*, \mu^*) = 0$, for $n = 1, \ldots, N$;
slackness: $\lambda_j^* g_j(x^*) = 0$, for $j = 1, \ldots, J$;
non-negativity: $\lambda_j^* \geq 0$, for $j = 1, \ldots, J$;
I feasibility: $g_j(x^*) \leq 0$, for $j = 1, \ldots, J$;
II feasibility: $h_k(x^*) = 0$, for $k = 1, \ldots, K$.

Remark 5 The optimality condition of Theorem 4 can be reformulated in vector form as
\[
\nabla f(x^*) = \sum_{j=1}^{J} \lambda_j^* \nabla g_j(x^*) + \sum_{k=1}^{K} \mu_k^* \nabla h_k(x^*),
\]

which highlights the circumstance that the gradient of the objective function $f$, computed at a candidate maximum point, has to be a linear combination of the gradients of the constraining functions. In economic and financial applications, it is customary to present the inequality constraints in a minimization problem in the form $g_1(x) \geq 0, \ldots, g_J(x) \geq 0$, instead of $g_1(x) \leq 0, \ldots, g_J(x) \leq 0$. That is, $G \equiv \{x \in \mathbb{R}^N \mid g_j(x) \geq 0,\ j = 1, \ldots, J\}$. In light of this, the analogue of Theorem 4 for a minimization problem takes the following formulation.

Theorem 6 Assume the function $f$ has a local minimum point at $x^*$ under the constraint $G \cap H$, that is, there exists $r > 0$ such that
\[
f(x^*) = \min\{f(x),\ x \in D \cap G \cap H \cap B(x^*; r)\} \equiv \min_{\substack{g_1(x) \geq 0, \ldots, g_J(x) \geq 0 \\ h_1(x) = 0, \ldots, h_K(x) = 0}} \{f(x),\ x \in D \cap B(x^*; r)\},
\]

and
\[
\operatorname{rank} \begin{pmatrix}
\partial_{x_1} g_1(x^*) & \cdots & \partial_{x_N} g_1(x^*) \\
\vdots & \ddots & \vdots \\
\partial_{x_1} g_J(x^*) & \cdots & \partial_{x_N} g_J(x^*) \\
\partial_{x_1} h_1(x^*) & \cdots & \partial_{x_N} h_1(x^*) \\
\vdots & \ddots & \vdots \\
\partial_{x_1} h_K(x^*) & \cdots & \partial_{x_N} h_K(x^*)
\end{pmatrix} = J_b + K,
\]

where $J_b$ is the number of binding inequality constraints. Then, introducing the Lagrangian function $L \colon D \times \mathbb{R}^J \times \mathbb{R}^K \to \mathbb{R}$, given by
\[
L(x, \lambda, \mu) = f(x) - \sum_{j=1}^{J} \lambda_j g_j(x) - \sum_{k=1}^{K} \mu_k h_k(x), \qquad (x, \lambda, \mu) \in D \times \mathbb{R}^J \times \mathbb{R}^K,
\]
there exist $(\lambda^*, \mu^*) \in \mathbb{R}^J \times \mathbb{R}^K$, $\lambda^* = (\lambda_1^*, \ldots, \lambda_J^*)$, $\mu^* = (\mu_1^*, \ldots, \mu_K^*)$, such that the following conditions are fulfilled:

optimality: $\partial_{x_n} L(x^*, \lambda^*, \mu^*) = 0$, for $n = 1, \ldots, N$;
slackness: $\lambda_j^* g_j(x^*) = 0$, for $j = 1, \ldots, J$;
non-negativity: $\lambda_j^* \geq 0$, for $j = 1, \ldots, J$;
I feasibility: $g_j(x^*) \geq 0$, for $j = 1, \ldots, J$;
II feasibility: $h_k(x^*) = 0$, for $k = 1, \ldots, K$.
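The conditions of Theorem 6 can be checked numerically on a small instance. The following Python sketch uses illustrative data chosen for this example only (not taken from the notes): $f(x) = x_1^2 + x_2^2$, a single inequality constraint $g_1(x) = x_1 + x_2 - 2 \geq 0$, no equality constraints ($J = 1$, $K = 0$), candidate point $x^* = (1, 1)$, and multiplier $\lambda^* = 2$.

```python
# Numerical check of the Kuhn-Tucker conditions of Theorem 6 on an
# illustrative problem: minimize f(x) = x1^2 + x2^2 subject to
# g(x) = x1 + x2 - 2 >= 0. With no equality constraints, the mu-terms
# of the Lagrangian L(x, lambda) = f(x) - lambda * g(x) are absent.

def f(x):
    return x[0] ** 2 + x[1] ** 2

def g(x):
    return x[0] + x[1] - 2.0

def grad_f(x):
    return [2.0 * x[0], 2.0 * x[1]]

def grad_g(x):
    return [1.0, 1.0]

x_star = [1.0, 1.0]   # candidate minimum point
lam = 2.0             # candidate multiplier lambda*

# optimality: each partial derivative of L at (x*, lambda*) vanishes,
# i.e. grad f(x*) - lambda* grad g(x*) = 0
stationarity = [df - lam * dg for df, dg in zip(grad_f(x_star), grad_g(x_star))]
assert all(abs(s) < 1e-12 for s in stationarity)

# slackness: lambda* g(x*) = 0 (the constraint is binding here)
assert abs(lam * g(x_star)) < 1e-12

# non-negativity: lambda* >= 0
assert lam >= 0.0

# I feasibility: g(x*) >= 0
assert g(x_star) >= -1e-12

print("all Kuhn-Tucker conditions hold at x* =", x_star)
```

Consistent with Remark 5, at this point $\nabla f(x^*) = (2, 2) = \lambda^* \nabla g_1(x^*)$: the objective gradient is a (non-negative) multiple of the gradient of the binding constraint.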
