
Constrained Optimization

DC-1

Semester-II

Paper-IV: Mathematical Methods in Economics-II

Lesson: Constrained Optimization

Lesson Developer: Rakhi Arora & Vaishali Kapoor

College/Department: Department of Economics, University of Delhi

Institute of Lifelong Learning, University of Delhi



Table of Contents

1. Learning outcomes

2. Introduction

3. Geometrically analyzing constraint

4. Algebraically setting constraint into optimization equations

5. Lagrange multiplier method

a. Economic interpretation of the Lagrange multiplier

b. The Lagrange multiplier as a benefit-cost ratio

6. Sufficient conditions for constrained optimization

7. Envelope results

8. Exercises

9. References

Learning outcomes:

After you have read this chapter, you should be able to:

1. Explain the importance and relevance of constraints.
2. Differentiate between a free and a constrained optimum.
3. Solve problems of constrained optimization in economics.
4. Define parameters.
5. Interpret the Lagrange multiplier.


Introduction

In the last chapter, we covered optimization of an objective function of two or more choice variables. But that optimization was unconstrained: in the case of the discriminating monopolist, for example, there was no restriction on the level of output that could be produced. There could, however, have been a constraint that, given the available technique and machinery, the maximum total output that can be produced is, say, 1000 units. In such a case the optimized maximum may differ from the free maximum if the unconstrained extremum violates the constraint.

In this chapter, we will cover optimization with equality constraints. The new optimum referred to here is the constrained optimum, which is likely to differ from the free optimum.

This chapter is divided into sections. In the first section, we will analyze the geometric properties of a constraint.

1. Geometrically analyzing constraint

The primary purpose of imposing a constraint is to give due cognizance to certain limiting
factors present in the optimization problem under discussion. In the last chapter, we saw hills
and valleys in 2D and bowls and domes in 3D and found relative extrema in all such cases. But
there were no constraints.

Let us take the example of a consumer who consumes only one good and has the utility function:

U(x) = -(x-2)^2 + 4

Its free optimum (maximum) is at x = 2. But if the government imposes a restriction that no one can consume more than 1 unit of x, then the constrained optimum is at x = 1. This is shown in Fig. 1 below.

Fig.1
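As an aside, here is a minimal numerical sketch of this comparison (our own illustration, not part of the lesson; it assumes Python with scipy is available):

# Compare the free and the constrained maximum of U(x) = -(x-2)^2 + 4
# when consumption of x is capped at 1 unit, as in Fig. 1.
from scipy.optimize import minimize_scalar

U = lambda x: -(x - 2)**2 + 4

free = minimize_scalar(lambda x: -U(x))                         # unconstrained
capped = minimize_scalar(lambda x: -U(x), bounds=(0, 1), method='bounded')
print(free.x, U(free.x))        # x = 2, U = 4  (free optimum)
print(capped.x, U(capped.x))    # x ≈ 1, U ≈ 3  (constrained optimum)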


Let us consider another example, in which the utility of a consumer depends on two goods, x1 and x2, with the following utility function:

U = x1x2 + 2x1

If one takes the partial derivatives of U, one finds that the marginal utilities are positive and non-decreasing in x1 and x2. Hence, an unconstrained optimum would call for purchasing infinite amounts of the goods. But a consumer faces a constraint known as the budget constraint. If the price of x1 is Rs. 4, the price of x2 is Rs. 2, and the income of the consumer is Rs. 100, then the constraint becomes:

4x1 + 2x2 = 100

This constraint narrows down the choice of x1 and x2, and one can now find the optimal x1 and x2.

Fig.2

If one considers a general function z = f(x, y) and assumes it appears like a dome in 3D, then the free extremum is the peak of the dome, while the constrained extremum is at the peak of the u-shaped curve lying on the dome directly above the constraint. In Fig. 2, MN is the constraint line, indicating that the combinations of x and y cannot lie beyond this line. The constrained maximum is then at point B.


2. Algebraically setting the constraint into the optimization equations

All the points C, D, E, F, G and H are feasible (in fact, the entire section of the bowl to the right of the constraint is the feasible region). In Fig. 3, various level curves of the function z = f(x, y) are drawn; the figure is a 2D projection of the 3D dome. MN is the constraint on x and y, say of the form g(x, y) = c.

Fig.3

A' in Fig. 3 corresponds to A in Fig. 2 and is the unconstrained maximum. B' (corresponding to B in Fig. 2) is the constrained maximum. In Fig. 3, at B', the slope of the level curve f(x, y) = z1 is equal to the slope of the constraint g(x, y) = c.

Free and constrained maximum

A constrained maximum can be expected to have a lower value than the free maximum. It could also be that the free optimum is itself the constrained maximum, in which case the constraint is not binding. If the constraint is binding, as it generally is, the free maximum is higher than the constrained maximum. In any case, the constrained maximum can never exceed the free maximum.


Algebraically setting the constraint

The condition for an optimum of f(x, y) = z requires that the slope of the level curve f(x, y) = z1 equals the slope of the constraint g(x, y) = c, which can be expressed as follows:

-\frac{g_x'(x, y)}{g_y'(x, y)} = -\frac{f_x'(x, y)}{f_y'(x, y)}

Example 1:

Suppose one wishes to maximize f(x, y) = xy subject to 2x + y = m. The constraint can be rewritten as y = m - 2x. Then the objective function becomes f(x, y(x)) = x(m - 2x), a function of one variable. So, for its optimization,

\frac{df}{dx} = 0

\frac{df}{dx} = m - 4x = 0

\Rightarrow 4x = m

\Rightarrow x = \frac{m}{4}

and y = m - 2\cdot\frac{m}{4} = \frac{m}{2}.

The same result can also be derived by using the slope condition stated above:

\frac{f_x'(x, y)}{f_y'(x, y)} = \frac{g_x'(x, y)}{g_y'(x, y)} = \frac{2}{1}

\Rightarrow \frac{y}{x} = \frac{2}{1}

\Rightarrow y = 2x

Putting y = 2x into 2x + y = m yields x = m/4 and y = m/2.
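A minimal sympy sketch of the substitution method used above (our own illustration; sympy is assumed, and the symbols mirror the ones in the text):

# Maximize f(x, y) = x*y subject to 2x + y = m by substituting y = m - 2x.
import sympy as sp

x, m = sp.symbols('x m', positive=True)
f_substituted = x * (m - 2*x)                  # objective after substitution
x_star = sp.solve(sp.diff(f_substituted, x), x)[0]
y_star = m - 2*x_star
print(x_star, y_star)                          # m/4 and m/2, as derived above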


Lagrange Multiplier method

Suppose f(x0, y0) is the optimum value of f(x, y) = z subject to the constraint g(x, y) = c. Then we know that:

\frac{f_x'(x, y)}{f_y'(x, y)} = \frac{g_x'(x, y)}{g_y'(x, y)}

\Rightarrow \frac{f_x'(x, y)}{g_x'(x, y)} = \frac{f_y'(x, y)}{g_y'(x, y)}

At (x0, y0), the above ratios have a common value. This common value, denoted λ, is known as the Lagrange multiplier, and the above equations become:

f_x'(x, y) - \lambda g_x'(x, y) = 0

f_y'(x, y) - \lambda g_y'(x, y) = 0

Now let us define the Lagrangean function L by:

L(x, y) = f(x, y) - \lambda (g(x, y) - c)

The partial derivatives of L(x, y) with respect to x and y are L_x'(x, y) = f_x'(x, y) - λ g_x'(x, y) and L_y'(x, y) = f_y'(x, y) - λ g_y'(x, y), respectively.

Set these partial derivatives equal to zero and solve the resulting equations, along with the constraint g(x, y) = c, for the optimal values of x, y and λ.

Lagrangean function: a better technique

The advantage of the Lagrangean function over the slope-equality method is that it can handle more than two variables and more than one constraint (as we will see in the coming examples).

Institute of Lifelong Learning, University of Delhi


Constrained Optimization

Example 1 (contd.):

The Lagrangean is L(x, y) = xy - \lambda (2x + y - m)

L_x'(x, y) = y - 2\lambda = 0 (1)

L_y'(x, y) = x - \lambda = 0 (2)

L_\lambda'(x, y) = -(2x + y - m) = 0 (3)

Solving (1) and (2), we get y = 2x. Putting y = 2x into the third equation, we get x = m/4 and y = m/2. The result obtained here is the same as that obtained by the previous techniques; hence, any of these techniques is equally applicable. Also, λ = x from equation (2), so λ = m/4. Notice that x, y and λ are all functions of m. Here m is referred to as a parameter, because the optimal value of f(x, y) is also a function of m: the optimal value equals (m/4)(m/2) = m^2/8.
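The same first-order conditions can be generated and solved mechanically; the following sketch (ours, assuming sympy) forms the Lagrangean and solves (1)-(3):

# L = xy - lam*(2x + y - m); solve L_x = L_y = L_lam = 0 for x, y and lam.
import sympy as sp

x, y, lam, m = sp.symbols('x y lam m', positive=True)
Lag = x*y - lam*(2*x + y - m)
foc = [sp.diff(Lag, v) for v in (x, y, lam)]   # the three first-order conditions
sol = sp.solve(foc, [x, y, lam], dict=True)[0]
print(sol)                                     # {x: m/4, y: m/2, lam: m/4}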

Economic interpretation of the Lagrange multiplier

Consider the objective of maximizing f(x, y) subject to g(x, y) = c. Suppose that x* and y* are the values of x and y that solve this problem. In general, x* and y* depend on c, the parameter of this model; we assume that x = x*(c) and y = y*(c) are differentiable functions of c. The associated value f* of f(x, y) is then also a function of c:

f^*(c) = f(x^*(c), y^*(c))

Here f*(c) is called the optimal value function. The multiplier λ is also a function of the parameter c. Taking the differential of the above equation, we get:

df^*(c) = df(x^*, y^*) = f_x'(x^*, y^*)dx^* + f_y'(x^*, y^*)dy^*

We also know that f_x'(x^*, y^*) = \lambda g_x'(x^*, y^*) and f_y'(x^*, y^*) = \lambda g_y'(x^*, y^*), so

df^*(c) = \lambda g_x'(x^*, y^*)dx^* + \lambda g_y'(x^*, y^*)dy^*


Taking the total differential of the constraint g(x^*(c), y^*(c)) = c yields

g_x'(x^*, y^*)dx^* + g_y'(x^*, y^*)dy^* = dc

It follows that df^*(c) = \lambda \, dc.

In particular, if dc is a small change in c, then

f^*(c + dc) - f^*(c) \approx \lambda(c)\, dc

Also,

\frac{df^*(c)}{dc} = \lambda(c)
Thus, the Lagrange multiplier is the rate at which the optimal value of the objective function
changes with respect to changes in the parameter c.

In economic applications, c often denotes the available stock of a resource, which acts as a constraint on the utility or profit function f(x, y). λ is then the shadow price of the resource, as it indicates how utility or profit changes when dc more units of the resource are made available.

The Lagrange multiplier as a benefit-cost ratio

Consider again the objective of maximizing f(x, y) subject to g(x, y) = c. Then we know that

\lambda = \frac{f_x'}{g_x'} = \frac{f_y'}{g_y'}

In other words, at the maximum point the ratio of f_i' to g_i' is the same for every choice variable i (x and y here). The numerators f_i' are the marginal contributions of each choice variable to the function f: they show the marginal benefit that one more unit of x or y yields for the function to be maximized, f(x, y). The denominators g_i' are the marginal costs of each choice variable: they reflect the added burden on the constraint of using slightly more of x (or y).


Example 1 (contd.):

The objective was to maximize f(x, y) = xy subject to 2x + y = m. As solved earlier, x(m) = m/4, y(m) = m/2 and λ(m) = m/4. So the value function is f^*(m) = (m/4)(m/2) = m^2/8.

\frac{df^*(m)}{dm} = \frac{m}{4} = \lambda(m)

Suppose m is 100, so f^*(100) = 100^2/8 = 1250. If m increases to 101, the new optimized value is f^*(101) = 101^2/8 = 1275.125, so f^*(101) - f^*(100) = 25.125.

Also, from the above results, we know that

\frac{df^*(m)}{dm} = \lambda(m) = \lambda(100) = 25

which is a good approximation to the actual change in the optimal value function.
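A small numerical check of this shadow-price interpretation (our own sketch, assuming scipy; the helper max_value is hypothetical):

# The change in the optimal value when m rises from 100 to 101 is close to
# the multiplier lambda(100) = 25.
from scipy.optimize import minimize

def max_value(m):
    # maximize xy subject to 2x + y = m by minimizing the negative
    cons = ({'type': 'eq', 'fun': lambda v: 2*v[0] + v[1] - m},)
    return -minimize(lambda v: -v[0]*v[1], x0=[1.0, 1.0], constraints=cons).fun

print(max_value(101) - max_value(100))   # about 25.125, the exact change
print(100 / 4)                           # lambda(100) = 25, the approximation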


Sufficient Conditions

The conditions that we have studied so far are necessary conditions but not sufficient. To make this clear, let us consider the following example:

\max f(x, y) = 2x + 3y \quad \text{subject to} \quad \sqrt{x} + \sqrt{y} = 5

The Lagrangean is L(x, y) = 2x + 3y - \lambda(\sqrt{x} + \sqrt{y} - 5). So the three first-order conditions become:

L_x'(x, y) = 2 - \lambda \frac{1}{2\sqrt{x}} = 0

L_y'(x, y) = 3 - \lambda \frac{1}{2\sqrt{y}} = 0

\sqrt{x} + \sqrt{y} = 5


Solving the first two equations yields y = 4x/9. Putting this into the third equation yields x = 9 and y = 4. This is indicated by point P in Fig. 4. But, as is evident, (9, 4) does not solve the problem of maximizing f(x, y). Rather, the solution is Q = (0, 25), where the constraint is satisfied and 2x + 3y attains the value 75 (instead of 30 at point P).

Figure 4

This shows that the first-order conditions, though necessary, are not sufficient.
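A quick check of the two candidate points (our own illustration in Python; both points come from the text):

# P = (9, 4) satisfies the constraint but Q = (0, 25) gives a larger value.
import math

f = lambda x, y: 2*x + 3*y
g = lambda x, y: math.sqrt(x) + math.sqrt(y)

for point in [(9, 4), (0, 25)]:
    print(point, "g =", g(*point), "f =", f(*point))
# Both points satisfy g = 5, but f is 30 at P and 75 at Q.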

Consider the same problem of maximising z = f(x, y) subject to g(x, y) = c.

Let (x_0, y_0) be a stationary point. By the implicit function theorem, the equation g(x, y) = c defines y as a differentiable function of x in some neighbourhood of (x_0, y_0). Let this function be denoted by y = h(x); then

y' = h'(x) = -\frac{g_x'(x, y)}{g_y'(x, y)}

The problem of maximisation of f(x, y) is thus reduced to the maximisation of z = f(x, h(x)), i.e. a problem in the single variable x. Then


\frac{dz}{dx} = f_x'(x, y) + f_y'(x, y)\, y'

\frac{dz}{dx} = f_x'(x, y) - f_y'(x, y)\, \frac{g_x'(x, y)}{g_y'(x, y)}

The necessary condition becomes \frac{dz}{dx} = 0.
The sufficient condition for a maximum of z is that the second-order derivative of z with respect to x is less than zero. Differentiating dz/dx once more,

\frac{d^2z}{dx^2} = f_{xx}'' + f_{xy}'' y' - (f_{yx}'' + f_{yy}'' y')\frac{g_x'}{g_y'} - f_y' \cdot \frac{(g_{xx}'' + g_{xy}'' y')g_y' - (g_{yx}'' + g_{yy}'' y')g_x'}{(g_y')^2}

Substituting y' = -g_x'/g_y' and f_y' = \lambda g_y' (which hold at the stationary point),

\frac{d^2z}{dx^2} = \frac{1}{(g_y')^2}\left[(f_{xx}'' - \lambda g_{xx}'')(g_y')^2 - 2(f_{xy}'' - \lambda g_{xy}'')g_x' g_y' + (f_{yy}'' - \lambda g_{yy}'')(g_x')^2\right]

The expression in brackets equals -D(x, y), where D(x, y) is the bordered Hessian determinant

D(x, y) = \begin{vmatrix} 0 & g_x' & g_y' \\ g_x' & f_{xx}'' - \lambda g_{xx}'' & f_{xy}'' - \lambda g_{xy}'' \\ g_y' & f_{xy}'' - \lambda g_{xy}'' & f_{yy}'' - \lambda g_{yy}'' \end{vmatrix}

so that

\frac{d^2z}{dx^2} = -\frac{1}{(g_y')^2} D(x, y)

A sufficient condition for (x_0, y_0) to solve the constrained problem is therefore that (x_0, y_0) satisfies the first-order conditions and, moreover, that the bordered Hessian D(x_0, y_0) given above is > 0 in the maximization case and < 0 in the minimization case.
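As a concrete illustration (ours, assuming sympy), the bordered Hessian for Example 1, maximize xy subject to 2x + y = m, is positive, confirming a constrained maximum:

# Build D(x, y) for f = xy, g = 2x + y and L = f - lam*(g - m).
import sympy as sp

x, y, lam, m = sp.symbols('x y lam m', positive=True)
f, g = x*y, 2*x + y
Lag = f - lam*(g - m)
D = sp.Matrix([
    [0,             sp.diff(g, x),      sp.diff(g, y)],
    [sp.diff(g, x), sp.diff(Lag, x, x), sp.diff(Lag, x, y)],
    [sp.diff(g, y), sp.diff(Lag, x, y), sp.diff(Lag, y, y)],
]).det()
print(D)   # 4 > 0, so the stationary point is a constrained maximum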

Envelope Results

Optimization problems in economics usually involve functions that depend on a number of parameters, such as prices, income levels and taxes. These are parameters because they are held constant during the optimization but vary with the economic situation. For example, in the case of utility maximization, income is fixed and we find the optimal values of the x_i's that maximize utility. But if income changes in the next period, the optimal solution also changes. To see how the optimal solution changes when parameters change, we use the Envelope Theorem.

Consider the problem

\max_{\mathbf{x}} f(\mathbf{x}, \mathbf{r}) \quad \text{subject to} \quad g_j(\mathbf{x}, \mathbf{r}) = 0, \qquad j = 1, \dots, m

where r = (r_1, ..., r_k) is a vector of parameters and x = (x_1, ..., x_n) is a vector of choice variables.

The optimization gives the solution x_1^*(\mathbf{r}), \dots, x_n^*(\mathbf{r}) and the optimal value f^*(\mathbf{r}) of f(\mathbf{x}, \mathbf{r}), where f^*(\mathbf{r}) is the optimal value function for this problem:

f^*(\mathbf{r}) = f(\mathbf{x}^*(\mathbf{r}), \mathbf{r}) = f(x_1^*(\mathbf{r}), \dots, x_n^*(\mathbf{r}), \mathbf{r})

Suppose now that we wish to study how the optimal value function changes when the h-th parameter r_h changes. One way is to take the new value of r_h, set up the Lagrangean function again and recompute the optimal value of f(x, r). The Envelope Theorem lets us avoid this tedious process by telling us directly how the optimal value function changes as r_h changes. Differentiating f^*(r) with respect to r_h,

\frac{\partial f^*(\mathbf{r})}{\partial r_h} = \sum_{i=1}^{n} \frac{\partial f(\mathbf{x}^*(\mathbf{r}), \mathbf{r})}{\partial x_i} \cdot \frac{\partial x_i^*(\mathbf{r})}{\partial r_h} + \frac{\partial f(\mathbf{x}^*(\mathbf{r}), \mathbf{r})}{\partial r_h}

The above equation says that f^*(r) changes on two accounts: first, a change in r_h changes the vector r and hence changes f(x, r) directly; second, a change in r_h changes all the functions x_i^*(r) and hence changes f(x^*(r), r) indirectly. Let the Lagrangean be

L(\mathbf{x}, \mathbf{r}) = f(\mathbf{x}, \mathbf{r}) - \sum_{j=1}^{m} \lambda_j g_j(\mathbf{x}, \mathbf{r})

The first-order conditions for the given problem are:

\frac{\partial f(\mathbf{x}^*(\mathbf{r}), \mathbf{r})}{\partial x_i} = \sum_{j=1}^{m} \lambda_j \frac{\partial g_j(\mathbf{x}^*(\mathbf{r}), \mathbf{r})}{\partial x_i} \qquad \text{for all } i = 1, \dots, n

So

\frac{\partial f^*(\mathbf{r})}{\partial r_h} = \sum_{i=1}^{n} \left( \sum_{j=1}^{m} \lambda_j \frac{\partial g_j(\mathbf{x}^*(\mathbf{r}), \mathbf{r})}{\partial x_i} \right) \frac{\partial x_i^*(\mathbf{r})}{\partial r_h} + \frac{\partial f(\mathbf{x}^*(\mathbf{r}), \mathbf{r})}{\partial r_h}


\frac{\partial f^*(\mathbf{r})}{\partial r_h} = \sum_{j=1}^{m} \lambda_j \left( \sum_{i=1}^{n} \frac{\partial g_j(\mathbf{x}^*(\mathbf{r}), \mathbf{r})}{\partial x_i} \cdot \frac{\partial x_i^*(\mathbf{r})}{\partial r_h} \right) + \frac{\partial f(\mathbf{x}^*(\mathbf{r}), \mathbf{r})}{\partial r_h}

Differentiating the identity g_j(\mathbf{x}^*(\mathbf{r}), \mathbf{r}) = 0 with respect to r_h yields

\sum_{i=1}^{n} \frac{\partial g_j(\mathbf{x}^*(\mathbf{r}), \mathbf{r})}{\partial x_i} \cdot \frac{\partial x_i^*(\mathbf{r})}{\partial r_h} + \frac{\partial g_j(\mathbf{x}^*(\mathbf{r}), \mathbf{r})}{\partial r_h} = 0

which holds for all j = 1, \dots, m.

So

\frac{\partial f^*(\mathbf{r})}{\partial r_h} = -\sum_{j=1}^{m} \lambda_j \frac{\partial g_j(\mathbf{x}^*(\mathbf{r}), \mathbf{r})}{\partial r_h} + \frac{\partial f(\mathbf{x}^*(\mathbf{r}), \mathbf{r})}{\partial r_h} = \frac{\partial L(\mathbf{x}^*(\mathbf{r}), \mathbf{r})}{\partial r_h}

This is the Envelope Theorem: the derivative of the optimal value function with respect to a parameter equals the partial derivative of the Lagrangean with respect to that parameter, evaluated at the optimum.
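The result can be illustrated symbolically for the earlier problem, maximize xy subject to 2x + y = m, with m playing the role of the parameter (our own sketch, assuming sympy):

# dL/dm holding (x, y, lam) fixed equals lam; evaluated at lam* = m/4 it matches df*/dm.
import sympy as sp

x, y, lam, m = sp.symbols('x y lam m', positive=True)
Lag = x*y - lam*(2*x + y - m)          # Lagrangean with parameter m
dL_dm = sp.diff(Lag, m)                # partial derivative w.r.t. the parameter
f_star = (m/4) * (m/2)                 # optimal value function, m**2/8
print(dL_dm)                           # lam
print(dL_dm.subs(lam, m/4), sp.diff(f_star, m))   # both equal m/4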

Example 2: Utility maximization. Consider the problem of maximizing U(x, y) = 100 - e^{-x} - e^{-y} subject to the constraint px + qy = m.

The Lagrangean function is L(x, y) = 100 - e^{-x} - e^{-y} - \lambda(px + qy - m)

The first-order conditions are as follows:

L_x' = e^{-x} - \lambda p = 0 (1)

L_y' = e^{-y} - \lambda q = 0 (2)

L_\lambda' = -(px + qy - m) = 0 (3)

Dividing the first two equations yields

\frac{e^{-x}}{e^{-y}} = \frac{p}{q} \quad \Rightarrow \quad e^{-x} = \frac{p}{q} e^{-y}

Taking logarithms on both sides,

-x = -y + \ln\left(\frac{p}{q}\right)

\Rightarrow x = y - \ln\left(\frac{p}{q}\right)

Putting this value of x into (3),

p\left(y^* - \ln\frac{p}{q}\right) + q y^* = m

y^* = \frac{m + p \ln(p/q)}{p + q}

x^* = y^* - \ln\left(\frac{p}{q}\right) = \frac{m - q \ln(p/q)}{p + q}

Here x^* and y^* are functions of the parameters: the price of x (p), the price of y (q) and income (m).
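A sympy verification that this solution satisfies both the budget constraint and the tangency condition MU_x/MU_y = p/q (our own sketch; sympy assumed):

import sympy as sp

p, q, m = sp.symbols('p q m', positive=True)
y_star = (m + p*sp.log(p/q)) / (p + q)
x_star = y_star - sp.log(p/q)

print(sp.simplify(p*x_star + q*y_star - m))                  # 0: budget holds
print(sp.simplify(sp.exp(-x_star)/sp.exp(-y_star) - p/q))    # 0: tangency holds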

Example 3: Cost minimization

Consider a firm that uses capital K and labour L to produce a single output Q, with production function Q = F(K, L) = K^{1/2}L^{1/4}. The prices of labour and capital are w and r respectively.

So the problem is to minimize C = rK + wL subject to

K^{1/2}L^{1/4} = Q

L(K, L) = rK + wL - \lambda(K^{1/2}L^{1/4} - Q)

The necessary conditions are as follows:

L_K' = r - \frac{1}{2}\lambda K^{-1/2}L^{1/4} = 0 (1)

L_L' = w - \frac{1}{4}\lambda K^{1/2}L^{-3/4} = 0 (2)

L_\lambda' = -(K^{1/2}L^{1/4} - Q) = 0 (3)

Solving (1) and (2), we get

\lambda = 2rK^{1/2}L^{-1/4} = 4wK^{-1/2}L^{3/4}

\Rightarrow L = \left(\frac{r}{2w}\right)K

Putting this into (3), we get

K^{3/4} = 2^{1/4} r^{-1/4} w^{1/4} Q

K^* = 2^{1/3} r^{-1/3} w^{1/3} Q^{4/3}

L^* = \left(\frac{r}{2w}\right)K^* = 2^{-2/3} r^{2/3} w^{-2/3} Q^{4/3}

The corresponding minimal cost is

C^* = rK^* + wL^* = 3 \cdot 2^{-2/3} r^{2/3} w^{1/3} Q^{4/3}
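A numerical cross-check of the closed-form solution (our own sketch with illustrative values r = 2, w = 1, Q = 8; scipy assumed):

# Minimize rK + wL subject to K**0.5 * L**0.25 = Q and compare with K*, L*.
from scipy.optimize import minimize

r, w, Q = 2.0, 1.0, 8.0
cons = ({'type': 'eq', 'fun': lambda v: v[0]**0.5 * v[1]**0.25 - Q},)
res = minimize(lambda v: r*v[0] + w*v[1], x0=[10.0, 10.0],
               bounds=[(1e-6, None), (1e-6, None)], constraints=cons)

K_star = 2**(1/3) * r**(-1/3) * w**(1/3) * Q**(4/3)
L_star = 2**(-2/3) * r**(2/3) * w**(-2/3) * Q**(4/3)
print(res.x)                 # numerical optimum, approximately (16, 16)
print(K_star, L_star)        # closed-form optimum, exactly 16 and 16 here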

Example 4:

Suppose a firm produces TV sets at two different locations: x_1 units are produced at the first location and x_2 at the second. The joint cost function is given by

C = 0.1x_1^2 + 0.2x_2^2 + 0.2x_1x_2 + 180x_1 + 60x_2 + 25000

If the firm has to supply an order of 1000 sets, then, this being the constraint, x_1^* and x_2^* should be chosen so as to minimize cost.

L(x_1, x_2) = 0.1x_1^2 + 0.2x_2^2 + 0.2x_1x_2 + 180x_1 + 60x_2 + 25000 - \lambda(x_1 + x_2 - 1000)


L_{x_1}' = 0.2x_1 + 0.2x_2 + 180 - \lambda = 0 (1)

L_{x_2}' = 0.4x_2 + 0.2x_1 + 60 - \lambda = 0 (2)

L_\lambda' = -(x_1 + x_2 - 1000) = 0 (3)

Solving the above three equations yields:

x_1^* = 400

x_2^* = 600

So the firm should produce 400 TV sets at the first location and 600 at the second location.

Let us also check the sufficiency condition:

D(x_1, x_2) = \begin{vmatrix} 0 & 1 & 1 \\ 1 & 0.2 & 0.2 \\ 1 & 0.2 & 0.4 \end{vmatrix} = -0.2 < 0

Hence the stationary point (400, 600) is a minimum, and the minimum cost is Rs. 2,69,000.
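The three first-order conditions can also be solved mechanically (our own sketch, assuming sympy):

# Solve the FOCs of Example 4 and evaluate the cost at the solution.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
C = sp.Rational(1, 10)*x1**2 + sp.Rational(1, 5)*x2**2 + sp.Rational(1, 5)*x1*x2 \
    + 180*x1 + 60*x2 + 25000
Lag = C - lam*(x1 + x2 - 1000)
sol = sp.solve([sp.diff(Lag, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True)[0]
print(sol[x1], sol[x2], C.subs(sol))   # 400, 600 and a minimum cost of 269000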

Example 3 (contd.): Envelope results

Suppose now that we wish to know how C^* changes when r changes.

\frac{\partial C^*}{\partial r} = \frac{\partial}{\partial r}\left(3 \cdot 2^{-2/3} r^{2/3} w^{1/3} Q^{4/3}\right)

= \frac{2}{3} \cdot 3 \cdot 2^{-2/3} r^{-1/3} w^{1/3} Q^{4/3}

= 2^{1/3} r^{-1/3} w^{1/3} Q^{4/3}

= K^*

If instead the envelope theorem is used, which states that

\frac{\partial C^*(r, w, Q)}{\partial r} = \frac{\partial L(K^*, L^*, r, w, Q)}{\partial r} = K^*

we obtain the same result directly. Similarly, \frac{\partial C^*}{\partial w} = L^*.
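A symbolic check of this envelope result (our own sketch; sympy assumed):

import sympy as sp

r, w, Q = sp.symbols('r w Q', positive=True)
C_star = 3 * 2**sp.Rational(-2, 3) * r**sp.Rational(2, 3) * w**sp.Rational(1, 3) * Q**sp.Rational(4, 3)
K_star = 2**sp.Rational(1, 3) * r**sp.Rational(-1, 3) * w**sp.Rational(1, 3) * Q**sp.Rational(4, 3)
print(sp.simplify(sp.diff(C_star, r) - K_star))   # 0, confirming dC*/dr = K*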

Exercises

Q1. The utility of a consumer depends on two goods, x and y, and the utility function is given by U = (x+2)(y+1). If the prices of x and y are Rs. 2 and Rs. 5, respectively, and income is Rs. 51, find the optimal levels of x and y purchased by the consumer and the indirect utility function.

[Hint: the indirect utility function is utility expressed as a function of the parameters.]

Q2. The production function of a firm is given by X = and the prices of capital and labor are fixed at Rs. r and Rs. w, respectively.

i) Find the cost-minimizing combination of capital and labor.

ii) Derive the cost function of the firm.

Q3. The incomes of an individual in the current year and the next year are Rs. 500 and Rs. 792, respectively. His utility function over the two consumption expenditures x and y is U= . If the market interest rate is 10% p.a., determine the optimal consumption expenditures and the amount the consumer should borrow or lend in the current year.

[Hint: the constraint involves income in the two periods, but consumption in a period can differ from the income of that period since the consumer can borrow or lend in the market.]

Q4. A monopolist has the following demand functions for his two products X and Y: x = 72 - 0.5p_x and y = 120 - p_y. The combined cost is C = x^2 + xy + y^2 + 35, and the maximum total output is 40 units. Find

i) the profit-maximizing levels of output,

ii) the price of each product, and

iii) the total profit.

Institute of Lifelong Learning, University of Delhi


Constrained Optimization

Q5. Find the optimal input mix and its cost when a producer chooses an output corresponding to the isoquant k^2 l = 16 and the prices of capital and labor are Re. 1 and Rs. 2, respectively. Also find the expansion path.

[Hint: the expansion path shows how the optimal values change when the output parameter changes.]

References

K. Sydsaeter and P. Hammond, Mathematics for Economic Analysis, Pearson Education Asia, Delhi, 2002.
