
Chapter 2. Constrained Optimization

The document discusses constrained optimization problems where the objective function is optimized subject to certain constraints. It covers one-variable and two-variable problems with equality constraints and addresses them using techniques like Lagrange multipliers and the elimination method. It also discusses problems with multiple constraints and evaluating second-order conditions.


CHAPTER 2. CONSTRAINED OPTIMIZATION
INTRODUCTION
• In economic optimisation problems, the variables involved are often required to satisfy certain constraints.
• In unconstrained optimisation problems, no restrictions are placed on the values of the choice variables.
• In reality, however, optimisation of an economic function must respect certain resource requirements or availability.
• This arises from the problem of scarcity.
• For example:
  • Maximisation of production should be subject to the availability of inputs.
CONT’D
• Minimisation of costs should also satisfy a certain level of output.
• A common constraint in economics is the non-negativity restriction.
• Although negative values may sometimes be admissible, most functions in economics are meaningful only in the first quadrant.
• Thus, these constraints should be considered in the optimisation.
• Typical constraints in optimisation:
  • constraints on the availability of inputs
  • non-negative solutions
  • budget or money
CONT’D
• Constrained optimization deals with optimization of the objective function (the function to be optimized) subject to constraints (restrictions).
• When both the objective and the constraint functions are linear, we use the linear programming model.
• When we face nonlinear functions, however, we use the concept of derivatives for optimization. This chapter focuses on nonlinear constrained functions.
2.1 ONE VARIABLE CONSTRAINED OPTIMIZATION
A. With Equality Constraint
• Optimisation of a one-variable function subject to an equality constraint takes the form
  Max (Min): y = f(x) subject to x = x̄
• The solution for this type of problem is simply y* = f(x̄).
B. Non-Negativity Constraint
  Max (Min): y = f(x) s.t. x ≥ 0
• The optimum values of a function with a non-negativity constraint can be summarized as follows:
  Max: f'(x) = 0 if x > 0; f'(x) ≤ 0 if x = 0
NONNEGATIVITY CONSTRAINT
• Graphically: [figure omitted]
• These three conditions can be consolidated into:
  f'(x1) ≤ 0, x1 ≥ 0, and x1·f'(x1) = 0
• The condition x1·f'(x1) = 0 is the complementary slackness condition.
• The three conditions give us the first-order necessary condition for an extremum (local/global).
CONT’D
  Min: f'(x) = 0 if x > 0; f'(x) ≥ 0 if x = 0
• Example:
  Max y = −3x² − 7x + 2  s.t.  x ≥ 0
• In unconstrained optimization:
  • F.O.C: f'(x) = −6x − 7 = 0, so x = −7/6.
• However, imposing the non-negativity constraint we have:
  • x* = 0, and f(0) = −3(0²) − 7(0) + 2 = 2
  • f'(0) = −7 < 0
• The function is maximized at this critical value (a boundary solution).
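The boundary-solution logic above can be replayed as a short numerical check (a Python sketch; the helper names f and f_prime are ours, not from the text):

```python
def f(x):
    return -3 * x**2 - 7 * x + 2

def f_prime(x):
    return -6 * x - 7

# KKT check at the candidate x* = 0 for: Max f(x) s.t. x >= 0
# Conditions: f'(x*) <= 0, x* >= 0, and x* * f'(x*) = 0
x_star = 0
assert f_prime(x_star) <= 0            # -7 <= 0: boundary condition holds
assert x_star * f_prime(x_star) == 0   # complementary slackness
print(f(x_star))                       # value at the constrained optimum: 2
```

The unconstrained stationary point x = −7/6 fails x ≥ 0, so the check confirms the corner solution x* = 0.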
2.2 TWO VARIABLE PROBLEMS WITH EQUALITY CONSTRAINTS
• In the case of two choice variables, an optimization problem with an equality constraint takes the form
  Max (Min): z = f(x1, x2) subject to g(x1, x2) = c
• This type of optimization problem is commonly used in economics: for the purpose of simplification, two-variable cases are assumed in finding optimum values.
CONT’D
• For example, in maximization of utility using the indifference-curve approach, the consumer is assumed to consume bundles of two goods.
• Example:
  Max utility u(x1, x2)
  subject to the budget constraint p1x1 + p2x2 = M
• Two methods are commonly used for solving such optimization problems with equality constraints:
A. Elimination and Direct Substitution Method
  • Used for a two-variable constrained optimization problem with only one equality constraint.
  • A relatively simple method.
B. Lagrange Multiplier Method
A. ELIMINATION AND DIRECT SUBSTITUTION METHOD
• In this method, one variable is eliminated by substitution before applying the first-order condition.
• Consider the consumer problem in the above example: solve the budget constraint to express x2 as a function of x1.
• Substituting this expression eliminates x2 from the objective function; then apply the F.O.C.
CONT’D
• Example:
  max u = x1·x2  s.t.  x1 + 4x2 = 120
  From the constraint: 4x2 = 120 − x1, so x2 = 30 − x1/4.
  Substituting: u = x1(30 − x1/4) = 30x1 − (1/4)x1²
  F.O.C: du/dx1 = MU1 = 30 − (1/2)x1 = 0
  → x1 = 60 and x2 = 30 − 60/4 = 15
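The elimination steps can be checked in a few lines (a minimal Python sketch; the helper names u and du are ours):

```python
# Elimination method for: max u = x1*x2  s.t.  x1 + 4*x2 = 120.
# Substitute x2 = 30 - x1/4 so u becomes a one-variable function.
def u(x1):
    return x1 * (30 - x1 / 4)      # = 30*x1 - x1**2/4

def du(x1):
    return 30 - x1 / 2             # F.O.C.: du/dx1 = 0

x1_star = 60                       # root of 30 - x1/2 = 0
x2_star = 30 - x1_star / 4
assert du(x1_star) == 0            # first-order condition holds
print(x1_star, x2_star, u(x1_star))
```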
B. LAGRANGE MULTIPLIER METHOD
• When the constraint is a complicated function, or when there are several constraints, we resort to the method of Lagrange:
  Max (Min): f(x1, x2) subject to g(x1, x2) = c
• An interpretation of the Lagrange multiplier:
  • The Lagrange multiplier λ measures the effect of a one-unit change in the constant of the constraint function on the objective function.
  • If λ < 0, then for every one-unit increase (decrease) in the constant of the constraining function, the objective function will decrease (increase) by a value approximately equal to |λ|.
EXAMPLE
  L = f(x1, x2) + λ[c − g(x1, x2)]
• F.O.C:
  L1 = f1(x1, x2) − λg1(x1, x2) = 0
  L2 = f2(x1, x2) − λg2(x1, x2) = 0
  Lλ = c − g(x1, x2) = 0
CONT’D
• We can express the optimal choices of λ, x1, and x2 as implicit functions of the parameter c:
  x1* = x1*(c)
  x2* = x2*(c)
  λ* = λ*(c)
• Now, since the optimal value of L depends on λ*, x1*, and x2*, we may consider L to be a function of c; that is, L(λ*, x1*, x2*).
CONT’D
• S.O.C. for a constrained optimization problem
  Max: f(x, y) subject to g(x, y) = c
• F.O.C. for L = f(x, y) + λ[c − g(x, y)]:
  Lλ = c − g(x, y) = 0
  Lx = fx − λgx = 0
  Ly = fy − λgy = 0
CONT’D
• To find the S.O.C., find the second derivatives:
  Lλλ = 0,  Lλx = −gx,  Lλy = −gy
  Lxx = fxx − λgxx,  Lxy = fxy − λgxy,  Lyy = fyy − λgyy
• Put these in matrix form to construct a bordered Hessian; its determinant is denoted |H̄|:

  | Lλλ  Lλx  Lλy |   | 0   gx   gy  |
  | Lxλ  Lxx  Lxy | = | gx  Lxx  Lxy |
  | Lyλ  Lyx  Lyy |   | gy  Lyx  Lyy |

  (Writing the borders as gx, gy rather than −gx, −gy leaves the determinant unchanged, since a border row and column are multiplied by −1 together.)
CONT’D
• The bordered Hessian |H̄| is simply the plain Hessian

  | Lxx  Lxy |
  | Lyx  Lyy |

  bordered by the first-order derivatives of the constraint, with zero on the principal diagonal.
• Determinant criterion for the sign definiteness of d²z:

  d²z is positive definite subject to dg = 0 iff |H̄| < 0  →  minimum
  d²z is negative definite subject to dg = 0 iff |H̄| > 0  →  maximum

  where |H̄| = | 0   gx   gy  |
              | gx  Lxx  Lxy |
              | gy  Lyx  Lyy |

• Definiteness of the matrix is determined by looking at the signs of the bordered principal minors:
  • alternating signs (negative definite) … for a maximum
  • all of the same sign (positive definite) … for a minimum
MORE THAN ONE EQUALITY CONSTRAINT
  Max/Min: f(x1, x2, x3) subject to g¹(x1, x2, x3) = c1 and g²(x1, x2, x3) = c2
  L = f(x1, x2, x3) + λ1[c1 − g¹(x1, x2, x3)] + λ2[c2 − g²(x1, x2, x3)]
• F.O.C.:
  L1 = f1 − λ1g¹1 − λ2g²1 = 0
  L2 = f2 − λ1g¹2 − λ2g²2 = 0
  L3 = f3 − λ1g¹3 − λ2g²3 = 0
  Lλ1 = c1 − g¹(x1, x2, x3) = 0
  Lλ2 = c2 − g²(x1, x2, x3) = 0
MORE THAN ONE EQUALITY CONSTRAINT
• S.O.C.:

  |H̄| = | 0    0    g¹1  g¹2  g¹3 |
        | 0    0    g²1  g²2  g²3 |
        | g¹1  g²1  L11  L12  L13 |
        | g¹2  g²2  L21  L22  L23 |
        | g¹3  g²3  L31  L32  L33 |

• |H̄2| is the bordered principal minor that contains L22 as the last element of its principal diagonal.
• |H̄3| is the one that contains L33 as the last element of its principal diagonal.
THE BORDERED HESSIAN
• Plain Hessian:

  | Zxx  Zxy |
  | Zyx  Zyy |

• Bordered Hessian: the borders are gx, gy:

  |H̄| = | 0   gx   gy  |
        | gx  Zxx  Zxy |
        | gy  Zyx  Zyy |
SECOND ORDER CONDITION
• Determinantal criterion for sign definiteness:

  d²z is positive definite subject to dg = 0 iff |H̄| < 0
  d²z is negative definite subject to dg = 0 iff |H̄| > 0

  or, written out,

  | 0   gx   gy  |
  | gx  Zxx  Zxy |   < 0 → positive definite (minimum);  > 0 → negative definite (maximum)
  | gy  Zyx  Zyy |
EXAMPLE-1
• Find the extremum of z = xy subject to x + y = 6.
• First, form the Lagrangian function:
  Z = xy + λ(6 − x − y)
  Zλ = 6 − x − y = 0        x + y = 6
  Zx = y − λ = 0       or   −λ + y = 0
  Zy = x − λ = 0            −λ + x = 0
• By Cramer's rule or some other method, we can find
  λ* = 3, x* = 3, y* = 3, and Z* = z* = 9.
CONT’D
• Second-order condition: first find the second-order partial derivatives
  Zxx = 0, Zxy = Zyx = 1, Zyy = 0
  and the border elements gx = 1, gy = 1.
• Form the bordered Hessian determinant:

  |H̄| = | 0  1  1 |
        | 1  0  1 | = 2 > 0  →  z = 9 is a maximum.
        | 1  1  0 |
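Example 1 can be verified numerically (a pure-Python sketch; the det3 helper is ours, written out so no libraries are assumed):

```python
# Candidate from the F.O.C. of: z = x*y  s.t.  x + y = 6
x, y, lam = 3, 3, 3
# Z_lam = 6 - x - y, Z_x = y - lam, Z_y = x - lam must all vanish
assert (6 - x - y, y - lam, x - lam) == (0, 0, 0)

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Bordered Hessian: borders g_x = g_y = 1, Z_xx = Z_yy = 0, Z_xy = 1
H = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
print(det3(H))     # 2 > 0  ->  d2z negative definite  ->  maximum
```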
EXAMPLE 2
• Find the extremum of z = x1² + x2² s.t. x1 + 4x2 = 2.
• The Lagrangian function is
  Z = x1² + x2² + λ(2 − x1 − 4x2)
• Necessary conditions:
  Zλ = 2 − x1 − 4x2 = 0         x1 + 4x2 = 2
  Z1 = 2x1 − λ = 0         or   −λ + 2x1 = 0
  Z2 = 2x2 − 4λ = 0             −4λ + 2x2 = 0
• Solution:
  λ = 4/17, x1 = 2/17, x2 = 8/17, Z = z = 4/17
CONT’D
• Second-order condition: first find the second-order partial derivatives
  Z11 = 2, Z12 = Z21 = 0, Z22 = 2
  and the border elements g1 = 1, g2 = 4.
• Form the bordered Hessian determinant:

  |H̄| = | 0  1  4 |
        | 1  2  0 | = −34 < 0
        | 4  0  2 |

  → the value z = 4/17 is a minimum.
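Example 2 can likewise be checked with exact rational arithmetic (a Python sketch using the standard-library fractions module):

```python
from fractions import Fraction as F

# Example 2: min z = x1**2 + x2**2  s.t.  x1 + 4*x2 = 2.
# F.O.C.: 2*x1 = lam and 2*x2 = 4*lam  ->  x1 = lam/2, x2 = 2*lam;
# the constraint then gives lam/2 + 8*lam = 2, i.e. lam = 4/17.
lam = F(4, 17)
x1, x2 = lam / 2, 2 * lam
assert x1 + 4 * x2 == 2                    # constraint holds exactly
z = x1**2 + x2**2
print(lam, x1, x2, z)                      # 4/17 2/17 8/17 4/17

# Bordered Hessian with borders (1, 4) and Z11 = Z22 = 2, Z12 = 0:
# |H| = 0*(2*2 - 0) - 1*(1*2 - 0) + 4*(1*0 - 2*4) = -34 < 0 -> minimum
H_det = 0 * (2 * 2 - 0 * 0) - 1 * (1 * 2 - 0 * 4) + 4 * (1 * 0 - 2 * 4)
print(H_det)                               # -34
```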
N-VARIABLE CASE:
• Objective function: z = f(x1, x2, …, xn)
  subject to g(x1, x2, …, xn) = c
  with the Lagrangian Z = f(x1, x2, …, xn) + λ[c − g(x1, x2, …, xn)]
• Given a bordered Hessian

  |H̄| = | 0   g1   g2   …  gn  |
        | g1  Z11  Z12  …  Z1n |
        | g2  Z21  Z22  …  Z2n |
        | …   …    …    …  …   |
        | gn  Zn1  Zn2  …  Znn |
N-VARIABLE CASE:
• The n − 1 bordered principal minors are:

  |H̄2| = | 0   g1   g2  |        |H̄3| = | 0   g1   g2   g3  |
         | g1  Z11  Z12 |               | g1  Z11  Z12  Z13 |
         | g2  Z21  Z22 |               | g2  Z21  Z22  Z23 |   etc.
                                        | g3  Z31  Z32  Z33 |

• with the last one, |H̄n|, being the full bordered Hessian |H̄| above.
N-VARIABLE CASE:

  Condition                           Maximum                                Minimum
  First-order necessary condition     Zλ = Z1 = Z2 = … = Zn = 0              Zλ = Z1 = Z2 = … = Zn = 0
  Second-order sufficient condition   |H̄2| > 0, |H̄3| < 0, |H̄4| > 0, …,      |H̄2| < 0, |H̄3| < 0, …, |H̄n| < 0
                                      (−1)ⁿ|H̄n| > 0
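The sign test in the table can be mechanized. Below is a pure-Python sketch; classify and det are our own illustrative helpers (not from the text), and the routine assumes the full (n+1)×(n+1) bordered Hessian at the candidate point is supplied as nested lists:

```python
def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def classify(H):
    """Check bordered minors |H2|..|Hn|: 'max', 'min', or None."""
    n = len(H) - 1
    ks = range(2, n + 1)
    minors = [det([row[:k + 1] for row in H[:k + 1]]) for k in ks]
    if all(m < 0 for m in minors):
        return 'min'                             # all |Hk| < 0
    if all((m > 0) == (k % 2 == 0) for k, m in zip(ks, minors)):
        return 'max'                             # alternating, |H2| > 0
    return None

# Example 1 above: borders (1, 1), Zxx = Zyy = 0, Zxy = 1 -> maximum
print(classify([[0, 1, 1], [1, 0, 1], [1, 1, 0]]))
# Example 2 above: borders (1, 4), Z11 = Z22 = 2 -> minimum
print(classify([[0, 1, 4], [1, 2, 0], [4, 0, 2]]))
```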
EXAMPLE: LEAST COST COMBINATION OF INPUTS
  Minimize: C = PK·K + PL·L
  subject to: Q(K, L) = Q0
• First-order condition:
  Z = PK·K + PL·L + λ[Q0 − Q(K, L)]
  Zλ = Q0 − Q(K, L) = 0
  ZK = PK − λQK = 0
  ZL = PL − λQL = 0
  → PK/PL = QK/QL
CONT’D
• Second-order condition:

  |H̄| = | 0    QK     QL    |
        | QK  −λQKK  −λQKL | = λ(QKK·QL² − 2QKL·QK·QL + QLL·QK²) < 0
        | QL  −λQLK  −λQLL |

• Therefore, since |H̄| < 0, we have a minimum.
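The least-cost condition PK/PL = QK/QL can be illustrated with hypothetical numbers (a Python sketch; the production function Q = √(K·L) and all prices below are our own choices for illustration, not the self-activity problem):

```python
import math

# Hypothetical least-cost problem:
# minimize C = PK*K + PL*L  subject to  Q(K, L) = sqrt(K*L) = Q0.
# For Q = sqrt(K*L), the ratio QK/QL equals L/K, so the F.O.C.
# PK/PL = QK/QL derived above becomes L/K = PK/PL.
PK, PL, Q0 = 2.0, 8.0, 10.0

# From L = (PK/PL)*K and sqrt(K*L) = Q0:  K = Q0 * sqrt(PL/PK)
K = Q0 * math.sqrt(PL / PK)
L = (PK / PL) * K
C = PK * K + PL * L
print(K, L, C)        # 20.0 5.0 80.0
```

With capital four times cheaper per unit here, the optimum uses four times as much capital as labor, exactly as the price ratio dictates.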
SELF ACTIVITY
1. The production function of the firm is given as Q = 64·L^(3/4)·K^(1/4).
   Labor costs 96 dollars per unit and capital costs 162 dollars per unit, and the firm decides to produce 3456 units of output (Q).
   a. Determine the amounts of labor and capital that should be utilized so as to minimize costs.
   b. Calculate the minimum cost and economically interpret lambda.
2. Given the objective function f(x, y, z) = 4xyz²
   subject to x + y + z = 56:
   a. Use the Lagrange multiplier method to calculate the critical values of the choice variables.
   b. Check whether the objective function has a maximum or a minimum value at the obtained critical values using the bordered Hessian test.
   c. Estimate the effect on the value of the objective function of a one-unit change in the constant of the constraint.
2.3 INEQUALITY CONSTRAINTS AND THE THEORY OF KUHN-TUCKER
• In this section we deal with optimization problems in which the constraints are inequalities.
• The techniques we use here are categorized under the class of nonlinear programming.
• Here we extend the techniques of constrained optimization by allowing for inequality constraints.
• Since this is the standard and general approach to constrained optimization, we build the discussion of this section from the very basics of the technique, starting with the non-negativity constraint.
CONT’D
• Nonnegativity restrictions: consider a problem with nonnegativity restrictions on the choice variables, but with no other constraints, where the function f is assumed to be differentiable.
• Three possible situations may arise: an interior solution, a boundary solution, and a corner solution with a negative first derivative.
• These scenarios are shown in the figure below. [figure omitted]
CONT’D
• The non-negativity restriction with a single-variable case:
  Max: π = f(x1)  s.t.  x1 ≥ 0
• There are three possible situations at the restriction:
  • f'(x1) = 0 and x1 > 0 (point A)
  • f'(x1) = 0 and x1 = 0 (point B)
  • f'(x1) < 0 and x1 = 0 (points C & D)
• These three conditions can be consolidated into:
  f'(x1) ≤ 0, x1 ≥ 0, and x1·f'(x1) = 0
• The condition x1·f'(x1) = 0 is referred to as complementary slackness between x1 and f'(x1).
• The three conditions give us the first-order necessary condition for an extremum (local/global).
• Specifically:
  • for a minimum: f'(x1) ≥ 0, x1 ≥ 0 and x1·f'(x1) = 0
  • for a maximum: f'(x1) ≤ 0, x1 ≥ 0 and x1·f'(x1) = 0
CONT’D
• The above condition can easily be generalized to the n-variable case:
  Maximize π = f(x1, x2, …, xn)
  subject to xj ≥ 0 (j = 1, 2, …, n)
• This requires the Kuhn-Tucker F.O.C.:
  fj ≤ 0, xj ≥ 0, and xj·fj = 0 (j = 1, 2, …, n)
  where fj = ∂π/∂xj.
EFFECT OF INEQUALITY CONSTRAINTS
• Now let's see the effect of inequality constraints, with three choice variables (n = 3) and two constraints (m = 2):
  Maximize π = f(x1, x2, x3)
  subject to g¹(x1, x2, x3) ≤ r1
             g²(x1, x2, x3) ≤ r2
  and x1, x2, x3 ≥ 0
• Use two dummy (slack) variables s1 and s2 to transform the problem into equality form.
CONT’D
• Using the two slack variables s1 ≥ 0 and s2 ≥ 0, the constraints become g¹ + s1 = r1 and g² + s2 = r2.
• If the non-negativity restrictions were absent, we could form the Lagrangian following the classical approach as:
  Z = f(x1, x2, x3) + λ1[r1 − g¹(x1, x2, x3) − s1] + λ2[r2 − g²(x1, x2, x3) − s2]
• And write the first-order condition as: all partial derivatives of Z with respect to the xj, si, and λi set equal to zero.
CONT’D
• But since the xj and si variables have to be nonnegative, the F.O.C. on these variables has to be modified to:
  ∂Z/∂xj ≤ 0, xj ≥ 0, xj·(∂Z/∂xj) = 0
  ∂Z/∂si ≤ 0, si ≥ 0, si·(∂Z/∂si) = 0
• We can reduce the 2nd and 3rd Kuhn-Tucker conditions using the fact that ∂Z/∂si = −λi.
CONT’D
• Equivalently: λi ≥ 0, si ≥ 0, and λi·si = 0.
• Using the fact that si = ri − g^i(x1, x2, x3), we can rewrite the above condition as:
  λi ≥ 0, ri − g^i(x1, x2, x3) ≥ 0, and λi·[ri − g^i(x1, x2, x3)] = 0
• Therefore, the F.O.C. can be rewritten as:
  ∂Z/∂xj = fj − λ1g¹j − λ2g²j ≤ 0, xj ≥ 0, xj·(∂Z/∂xj) = 0 (j = 1, 2, 3)
  ri − g^i ≥ 0, λi ≥ 0, λi·(ri − g^i) = 0 (i = 1, 2)
• where g^i_j = ∂g^i/∂xj.
CONT’D
• The above conditions can be generalized to the case of n choice variables and m constraints.
• Consider:
  Maximize π = f(x1, …, xn)
  subject to g^i(x1, …, xn) ≤ ri (i = 1, …, m) and xj ≥ 0 (j = 1, …, n)
• The Kuhn-Tucker conditions for maximization will simply be:
  ∂Z/∂xj ≤ 0, xj ≥ 0, xj·(∂Z/∂xj) = 0
  ∂Z/∂λi ≥ 0, λi ≥ 0, λi·(∂Z/∂λi) = 0
• For minimization problems, the inequality signs in the marginal conditions above are reversed:
  ∂Z/∂xj ≥ 0, xj ≥ 0, xj·(∂Z/∂xj) = 0
  ∂Z/∂λi ≤ 0, λi ≥ 0, λi·(∂Z/∂λi) = 0
EXAMPLE
• Given the utility maximization problem:
  Maximize U = U(x, y)
  subject to Px·x + Py·y ≤ B
  and x, y ≥ 0
• Suppose further that a ration has been imposed on x such that x ≤ x0; then the problem can be written as:
  Maximize U = U(x, y)
  subject to Px·x + Py·y ≤ B
             x ≤ x0
  and x, y ≥ 0
CONT’D
• The Lagrangian function is:
  Z = U(x, y) + λ1(B − Px·x − Py·y) + λ2(x0 − x)
• And the Kuhn-Tucker conditions are:
  Zx = Ux − λ1Px − λ2 ≤ 0, x ≥ 0, x·Zx = 0
  Zy = Uy − λ1Py ≤ 0, y ≥ 0, y·Zy = 0
  Zλ1 = B − Px·x − Py·y ≥ 0, λ1 ≥ 0, λ1·Zλ1 = 0
  Zλ2 = x0 − x ≥ 0, λ2 ≥ 0, λ2·Zλ2 = 0
• The KKT condition λ1·(B − Px·x − Py·y) = 0 has an interesting economic implication:
• It requires that either λ1 = 0 or the budget constraint be binding.
CONT’D
• Therefore, we must have either λ1 = 0 or B − Px·x − Py·y = 0 (or both).
• If we interpret λ1 as the marginal utility of budget money (income), and if the budget constraint is nonbinding (satisfied as an inequality in the solution, with money left over), the marginal utility of B should be zero (λ1 = 0).
• Similarly, the condition λ2·(x0 − x) = 0 requires that either λ2 = 0 or x = x0.
• This property is called complementary slackness.
• Since λ2 is the marginal utility of relaxing the ration constraint, if the ration is nonbinding then λ2 = 0.
NUMERICAL EXAMPLE
• Maximize u = x·y subject to x + y ≤ 100, x ≤ 40, and x, y ≥ 0.
• The Lagrangian is
  Z = x·y + λ1(100 − x − y) + λ2(40 − x)
• And the KKT conditions become:
  Zx = y − λ1 − λ2 ≤ 0, x ≥ 0, x·Zx = 0
  Zy = x − λ1 ≤ 0, y ≥ 0, y·Zy = 0
  Zλ1 = 100 − x − y ≥ 0, λ1 ≥ 0, λ1·Zλ1 = 0
  Zλ2 = 40 − x ≥ 0, λ2 ≥ 0, λ2·Zλ2 = 0
• Solution: trial and error (start by setting some values to zero).
CONT’D
• We can start by setting the value of a choice variable to zero, or by making some of the constraints nonbinding, and then use the complementary slackness conditions.
• For the above example, making x = 0 or y = 0 makes no sense, as then u = x·y = 0.
• Therefore, assume x and y to be nonzero and deduce Zx = Zy = 0 from complementary slackness.
CONT’D
• Now assume the ration constraint to be nonbinding → λ2 = 0.
• Then Zx = Zy = 0 gives x = y; with the budget constraint x + y = 100, x = y = 50.
• But this violates the ration constraint x ≤ 40.
• Thus, we have to take the alternative: the ration constraint is binding → x = 40.
• Then y = 60 from the binding budget constraint, and using Zx = Zy = 0, λ1* = 40 and λ2* = 20.
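The trial-and-error solution above can be verified with a short KKT check (a Python sketch of the arithmetic, assuming the problem max u = x·y s.t. x + y ≤ 100, x ≤ 40 as stated):

```python
# Trial 1: ration nonbinding (lam2 = 0) -> x = y = 50, but 50 > 40
# violates x <= 40, so that trial is rejected.
# Trial 2: ration binding (x = 40), budget binding (y = 60):
x, y = 40, 60
lam1 = x                 # from Zy = x - lam1 = 0
lam2 = y - lam1          # from Zx = y - lam1 - lam2 = 0
assert x + y <= 100 and x <= 40     # both constraints satisfied
assert lam1 >= 0 and lam2 >= 0      # multipliers nonnegative
print(lam1, lam2, x * y)            # 40 20 2400
```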
ECONOMIC APPLICATION
• Wartime rationing
• Assume a consumer maximizes utility over two goods, with rationing on both goods: a coupon endowment C is given, and purchasing the goods requires cx and cy coupons per unit, respectively:
  Maximize u(x, y)
  subject to px·x + py·y ≤ B (budget constraint)
             cx·x + cy·y ≤ C (coupon constraint)
• The Lagrangian:
  Z = u(x, y) + λ1(B − px·x − py·y) + λ2(C − cx·x − cy·y)
CONT’D
• Since both constraints are linear, the constraint qualification is satisfied and the KKT conditions are necessary.
• Example: suppose the utility function is of the form u = x·y². Further, let B = 100 and px = py = 1, with C = 120, cx = 2, and cy = 1.
• The Lagrangian:
  Z = x·y² + λ1(100 − x − y) + λ2(120 − 2x − y)
CONT’D
• The KKT conditions are now:
  Zx = y² − λ1 − 2λ2 ≤ 0, x ≥ 0, x·Zx = 0
  Zy = 2xy − λ1 − λ2 ≤ 0, y ≥ 0, y·Zy = 0
  Zλ1 = 100 − x − y ≥ 0, λ1 ≥ 0, λ1·Zλ1 = 0
  Zλ2 = 120 − 2x − y ≥ 0, λ2 ≥ 0, λ2·Zλ2 = 0
• By the trial-and-error approach, assume λ2 = 0 and x, y, and λ1 positive, i.e. the coupon constraint is nonbinding.
• Then y² = λ1 and 2xy = λ1, so y = 2x; with the budget x + y = 100, solving for x and y gives the trial solution x = 100/3, y = 200/3.
CONT’D
• However, if we substitute this into the coupon constraint:
  2(100/3) + 200/3 = 400/3 > 120
• Thus, reject this solution.
• Now assume λ1 = 0 and λ2, x, y > 0.
• This gives us the marginal conditions y² = 2λ2 and 2xy = λ2, so y = 4x.
• Solving the system with the binding coupon constraint 2x + y = 120 gives x = 20 and y = 80 (the budget constraint, 20 + 80 ≤ 100, is satisfied).
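The two trials of the wartime-rationing example can be replayed numerically (a Python sketch of the arithmetic; the trial values follow the derivation above):

```python
# Problem: max u = x*y**2  s.t.  x + y <= 100 (budget),
# 2*x + y <= 120 (coupons), x, y >= 0.

# Trial 1: coupon constraint nonbinding (lam2 = 0) -> y = 2*x,
# budget binding: x = 100/3, y = 200/3.
x1t, y1t = 100 / 3, 200 / 3
coupons_used = 2 * x1t + y1t
print(coupons_used)            # 133.33... > 120 -> reject this trial

# Trial 2: budget multiplier lam1 = 0 -> y = 4*x, coupon binding:
# 2*x + 4*x = 120 -> x = 20, y = 80.
x2t, y2t = 20, 80
assert x2t + y2t <= 100 and 2 * x2t + y2t <= 120   # both feasible
print(x2t * y2t**2)            # utility at the optimum: 128000
```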
EXERCISES
1. Find the values of x and y that maximize the production
   Q = 64x − 2x² + 96y − 4y² − 13
   subject to x + y ≤ 20.
2. Given the following non-linear programming problem:
   Maximize Z = XY
   subject to −X − Y ≤ 1
              X + Y ≤ 2
              X, Y ≥ 0
   a. Find the critical points of the function.
   b. Calculate the maximum value.
   (Hint: only the first constraint is non-binding (inactive).)
CONT’D
3. Maximize 4x1 + 3x2
   subject to 2x1 + x2 ≤ 10
              x1, x2 ≥ 0
4. Maximize x1·x2
   subject to 5x1 + 4x2 ≤ 50
              3x1 + 6x2 ≤ 40
              x1, x2 ≥ 0
5. Minimize C = (x1 − 4)² + (x2 − 4)²
   subject to 2x1 + 3x2 ≥ 6
              −3x1 − 2x2 ≥ −12
              x1, x2 ≥ 0
