
Chapter Four Constrained Optimization

In the previous chapter we studied the optimization of functions without constraints. In business and economics, however, there are many situations in which complete freedom of action is impossible. For example, a firm may maximize output subject to the constraint of a given budget for expenditure on inputs, or it may need to minimize cost subject to a certain minimum output being produced. Functions which involve constraints are called constrained functions, and the process of optimization is referred to as constrained optimization. This chapter explains ways of solving constrained optimization problems with equality and inequality constraints.

4.1 One-Variable Constrained Optimization with Non-Negativity Constraints
i) With an equality constraint

Maximize y = f(x), subject to x = x₀. In this case y* = f(x₀).

This simply involves determining the value of the objective function at the given point of the domain.

ii) With non-negativity constraints

Maximize the function y = f(x) subject to x ≥ 0.

(a)

At x = 0, f′(x) < 0.
For this function the unconstrained maximum is attained at some x < 0, at point b as shown in the above figure, whereas the function attains its constrained optimum at point a, where x = 0 and y* = f(0).

(b)

In this case the constrained and unconstrained maximum values of the function lie at the same point, i.e., they coincide at point a as shown above: y* = f(x*). At x = 0, f′(x) = 0.

(c)

In this case, similar to that of (b), the constrained and unconstrained maximum values of the function reside at the same point:
y* = f(0)
f′(0) = 0
Please try to minimize y = f(x), subject to x ≥ 0, in a similar way.

Given the function y = f(x) subject to x ≥ 0:

For maximization,

f′(x) = 0 if x > 0
f′(x) ≤ 0 if x = 0

For minimization,

f′(x) = 0 if x > 0
f′(x) ≥ 0 if x = 0

Example

1. Maximize the objective function y = -3x² - 7x + 2 subject to x ≥ 0.

First-order condition for maximization:
f′(x) = -6x - 7 = 0
-6x = 7
x = -7/6
Second-order condition for maximization:
f″(x) = -6 < 0
Thus, the unconstrained maximum value of the function is located at x = -7/6, i.e., x < 0, but the constrained maximum lies at x = 0, where f′(0) = -7 < 0. Thus the constrained maximum of the function is y = f(0) = 2.

2. Minimize y = x² + 2x + 5, subject to x ≥ 0.
First-order condition:
f′(x) = 2x + 2 = 0
x = -1 < 0
Second-order condition:
f″(x) = 2 > 0
Thus the function achieves its unconstrained minimum at x = -1, i.e., y = 4. However, at x = 0, f′(0) = 2 > 0. Therefore, the minimum value of the constrained function is y = f(0) = 5.
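Both worked examples can also be confirmed by brute force: evaluate the objective on a fine grid of feasible points x ≥ 0 and pick the best. A minimal sketch (the grid bounds and step are arbitrary choices, not from the text):

```python
def best_on_grid(f, xs, maximize=True):
    """Return (argbest, best value) of f over the feasible grid xs."""
    x = max(xs, key=f) if maximize else min(xs, key=f)
    return x, f(x)

xs = [i / 1000 for i in range(0, 10001)]  # feasible grid on [0, 10]
print(best_on_grid(lambda x: -3 * x**2 - 7 * x + 2, xs))             # (0.0, 2.0)
print(best_on_grid(lambda x: x**2 + 2 * x + 5, xs, maximize=False))  # (0.0, 5.0)
```

Both optima land on the boundary x = 0, matching the analysis above.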

Exercise
Solve the following questions based on the information in the above section.
1. Maximize y = -x² + 5x + 6, subject to x = 2. ------------------------------------------------
2. Maximize y = -2x² + 3x + 4, subject to x ≥ 0. -----------------------------------------------
3. Find the maximum value of the function y = -3x² + x + 7, subject to x ≥ 0. ------------------
4. Minimize the function y = x² + 3x + 4, subject to x ≥ 0. ------------------------------------
5. Find the minimum value of the function y = 2x² + x + 7, subject to x ≥ 0. -------------------

4.2 Two Variable Problems with Equality Constraints


This section focuses on two methods of constrained optimization. These are
- constrained optimization by substitution
- the Lagrange multiplier method

A. Constrained Optimization by Substitution

This method is mainly applicable to problems where an objective function with only two variables is maximized or minimized subject to one constraint.

Consider a firm that wants to maximize output given the production function Q = f(K, L); suppose P_K and P_L are the prices of K and L respectively, and the firm has a fixed budget B. Then we can determine the amounts of K and L that optimize the output Q using the method of substitution.

Example
1. A firm faces the production function Q = 12K^0.4 L^0.4. Assume it can purchase K and L at prices of 40 Birr and 5 Birr per unit respectively, and that it has a budget of 800 Birr. Determine the amounts of K and L which maximize output.

Solution
The problem is: Maximize Q = 12K^0.4 L^0.4
Subject to 40K + 5L = 800

According to the theory of production, the optimization condition is written in such a way that the ratio of the marginal product of every input to its price must be the same. That is,

MP_K / P_K = MP_L / P_L

The marginal products can be obtained using the method of partial differentiation as follows:

MP_K = 4.8K^-0.6 L^0.4 -------------------------------- (1)
MP_L = 4.8K^0.4 L^-0.6 -------------------------------- (2)

Substituting these marginal products and the given prices into the optimization condition gives us

4.8K^-0.6 L^0.4 / 40 = 4.8K^0.4 L^-0.6 / 5

K^-0.6 L^0.4 = 8K^0.4 L^-0.6

Multiplying both sides by K^0.6 L^0.6 results in

L = 8K ------------------------------------------------ (3)

Substituting (3) in the budget constraint we get
40K + 5(8K) = 800
40K + 40K = 800
80K = 800
K = 10
Thus, L = 8(10) = 80.
Therefore, this firm should employ 10 units of capital and 80 units of labor in the production process to optimize its output.
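As a numerical cross-check (not part of the text), one can search along the budget line 40K + 5L = 800, letting the constraint determine L for each trial K; the grid range and step below are arbitrary:

```python
def output(K, L):
    """Production function from the example: Q = 12 * K^0.4 * L^0.4."""
    return 12 * K**0.4 * L**0.4

# Along the budget line, L = (800 - 40K) / 5; search K on a grid in (0, 20).
grid = [k / 100 for k in range(1, 2000)]
best_K = max(grid, key=lambda K: output(K, (800 - 40 * K) / 5))
best_L = (800 - 40 * best_K) / 5
print(best_K, best_L)  # 10.0 80.0
```

The grid search lands exactly on the analytical answer K = 10, L = 80.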

2. Suppose the utility function of the consumer is given by U = 4xy - y² and the budget constraint is 2x + y = 6. Determine the amounts of x and y which will optimize the total utility of the consumer.

Solution
Utility is maximized when MU_x / P_x = MU_y / P_y.
In our example, MU_x = 4y and MU_y = 4x - 2y. Therefore, at the point of equilibrium,
4y / 2 = (4x - 2y) / 1
4y = 8x - 4y
4y + 4y = 8x
8y = 8x
x = y ------------------------------------------------- (4)

Substituting (4) above in the budget constraint gives us

2x + x = 6
3x = 6
x = 2 = y
Therefore, this consumer optimizes his utility when he consumes 2 units of good x and 2 units of good y.
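Again the result can be verified numerically by substituting y = 6 - 2x from the budget constraint and scanning x (a sketch with an arbitrary grid, not part of the text):

```python
def utility(x, y):
    """Utility function from the example: U = 4xy - y^2."""
    return 4 * x * y - y**2

grid = [i / 1000 for i in range(0, 3001)]  # x in [0, 3] keeps y = 6 - 2x >= 0
best_x = max(grid, key=lambda x: utility(x, 6 - 2 * x))
print(best_x, 6 - 2 * best_x)  # 2.0 2.0
```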

B. Lagrange Multiplier Method


What is the Lagrange multiplier method? What are the steps of this method? How do you interpret the Lagrange multiplier?

The essence of this method is to change a constrained optimization problem into a form such that the first-order condition of the unconstrained optimization problem can still be applied. This method can be used for most types of constrained optimization problems. Given the function Z = f(x, y) subject to the constraint g(x, y): P_x x + P_y y = M, to determine the amounts of x and y which maximize the objective function using the Lagrange multiplier method, we follow these steps.

Step 1: Rewrite the constraint function in its implicit form as
M - xP_x - yP_y = 0
Step 2: Multiply the constraint function by the Lagrange multiplier λ:
λ(M - xP_x - yP_y)
Step 3: Add the above term to the objective function and thereby formulate the Lagrange function, a modified form of the objective function which incorporates the constraint, as follows:
L(x, y, λ) = Z(x, y) + λ(M - xP_x - yP_y) -------------- (5)
The necessary condition, i.e. the first-order condition for maximization, is that the first-order partial derivatives of the Lagrange function should be equal to zero.

Differentiating L with respect to x, y and λ and equating each derivative to zero gives us
∂L/∂x = ∂Z/∂x - λP_x = 0 ------------------------------ (6)
∂L/∂y = ∂Z/∂y - λP_y = 0 ------------------------------ (7)
∂L/∂λ = M - xP_x - yP_y = 0 --------------------------- (8)

From equations (6) and (7) we get
λ = Z_x/P_x and λ = Z_y/P_y
This means Z_x/P_x = Z_y/P_y, or Z_x/Z_y = P_x/P_y.
Sufficient condition - To get the second-order condition, we partially differentiate equations (6), (7) and (8). Representing the second direct partial derivatives by Z_xx and Z_yy, and the second cross partial derivatives by Z_xy and Z_yx, the Hessian determinant bordered with 0, g_x and g_y is

        | 0    g_x   g_y  |   | 0     -P_x   -P_y |
|H̄| =   | g_x  L_xx  L_xy | = | -P_x  Z_xx   Z_xy |
        | g_y  L_yx  L_yy |   | -P_y  Z_yx   Z_yy |

d²Z is referred to as positive definite subject to dg = 0 iff |H̄| < 0;
d²Z is referred to as negative definite subject to dg = 0 iff |H̄| > 0.

Negative definiteness of d²Z implies that the function achieves its relative maximum, whereas positive definiteness is a sufficient condition for a relative minimum of the objective function.
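The sign test on |H̄| is easy to carry out directly. The sketch below (hypothetical helper names, not from the text) computes a 3×3 determinant by cofactor expansion and classifies the stationary point; the matrix shown is the one that arises in the consumer example of the next subsection:

```python
def det3(m):
    """Determinant of a 3x3 matrix (list of rows) by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def classify(H):
    """One constraint, two variables: |H| > 0 -> relative maximum, |H| < 0 -> relative minimum."""
    d = det3(H)
    return "maximum" if d > 0 else ("minimum" if d < 0 else "inconclusive")

H = [[0, 4, 6],   # border: 0, gx, gy
     [4, 0, 1],   # gx, Lxx, Lxy
     [6, 1, 0]]   # gy, Lyx, Lyy
print(det3(H), classify(H))  # 48 maximum
```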

i) Maximization

Example
Given the utility function of a consumer who consumes two goods x and y as
U(x, y) = (x + 2)(y + 1)
if the price of good x is P_x = 4 Birr, that of good y is P_y = 6 Birr and the consumer has a fixed budget of 130 Birr, determine the optimum values of x and y using the Lagrange multiplier method.

Solution
Maximize U(x, y) = xy + x + 2y + 2
Subject to 4x + 6y = 130
Now we should formulate the Lagrange function to solve this problem. That is,
L(x, y, λ) = xy + x + 2y + 2 + λ(130 - 4x - 6y) -------- (9)
The necessary conditions for utility maximization are ∂L/∂x = 0, ∂L/∂y = 0, ∂L/∂λ = 0.

∂L/∂x = (y + 1) - 4λ = 0
y = -1 + 4λ ------------------------------------------- (10)
∂L/∂y = (x + 2) - 6λ = 0
x = -2 + 6λ ------------------------------------------- (11)
∂L/∂λ = 130 - 4x - 6y = 0
4x + 6y = 130 ----------------------------------------- (12)
Substituting the values of x and y from equations (10) and (11) into (12) enables us to determine λ:
4(-2 + 6λ) + 6(-1 + 4λ) = 130
-8 + 24λ - 6 + 24λ = 130
48λ = 144
λ = 3
Therefore, x = -2 + 6(3) = -2 + 18 = 16
y = -1 + 4(3) = 11
The second-order sufficient condition for utility maximization is

        | 0    g_x   g_y  |
|H̄| =   | g_x  L_xx  L_xy |
        | g_y  L_yx  L_yy |

The second partial derivatives of the objective function and the first partial derivatives of the constraint function are
L_xx = ∂²L/∂x² = 0, L_yy = 0, L_xy = L_yx = 1
g_x = ∂g/∂x = 4, and g_y = ∂g/∂y = 6
Therefore, the bordered Hessian determinant of this function is

        | 0  4  6 |
|H̄| =   | 4  0  1 | = -4(0 - 6) + 6(4 - 0) = 48 > 0
        | 6  1  0 |

The second-order condition |H̄| > 0, i.e. d²U negative definite, is satisfied for maximization. Thus, the consumer maximizes utility when he consumes 11 units of good y and 16 units of good x. The maximum utility is U = (16 + 2)(11 + 1) = (18)(12) = 216 units, which equals the value of the Lagrange function at these values of x, y and λ.

The value of the Lagrange multiplier λ is 3. It indicates that a one-unit increase (decrease) in the budget of the consumer increases (decreases) his total utility by 3 units.
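A quick arithmetic check of the solution x = 16, y = 11, λ = 3 against first-order conditions (10)-(12) (a verification sketch, not part of the text):

```python
x, y, lam = 16, 11, 3
print((y + 1) - 4 * lam)    # 0: dL/dx = (y + 1) - 4*lambda holds
print((x + 2) - 6 * lam)    # 0: dL/dy = (x + 2) - 6*lambda holds
print(130 - 4 * x - 6 * y)  # 0: the budget is exactly exhausted
print((x + 2) * (y + 1))    # 216: the maximum utility
```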

2. Suppose a monopolist sells two products x and y whose respective demand functions are
P_x = 100 - 2x and P_y = 80 - y
The total cost function is given as TC = 20x + 20y, and the maximum joint output of the two products is 60 units. Determine the profit-maximizing level of each output and their respective prices.

Solution
As is known, profit (π) = TR - TC, where TR represents total revenue and TC represents total cost.
TR = xP_x + yP_y = (100x - 2x²) + (80y - y²)
Thus π = 100x - 2x² + 80y - y² - 20x - 20y
π = 80x + 60y - 2x² - y²
But this monopolist maximizes its profit subject to the production quota. Thus,
Maximize π = 80x + 60y - 2x² - y²
Subject to x + y = 60
To solve this problem, we should formulate the Lagrange function
L(x, y, λ) = 80x + 60y - 2x² - y² + λ(x + y - 60) ------ (13)
First-order conditions for maximum profit are
L_x = 80 - 4x + λ = 0
-4x = -80 - λ
x = 20 + (1/4)λ --------------------------------------- (14)
L_y = 60 - 2y + λ = 0
-2y = -60 - λ
y = 30 + (1/2)λ --------------------------------------- (15)
L_λ = x + y - 60 = 0
x + y = 60 -------------------------------------------- (16)
Substituting equations (14) and (15) into equation (16), we get
20 + (1/4)λ + 30 + (1/2)λ = 60
50 + (3/4)λ = 60
(3/4)λ = 10
λ = 40/3
Thus, x = 20 + (1/4)(40/3) = 20 + 3.33 = 23.33
y = 30 + (1/2)(40/3) = 30 + 6.67 = 36.67

Second-order condition for maximum profit:
L_xx = -4, L_yy = -2, L_xy = L_yx = 0
g_x = 1 and g_y = 1
Therefore, the bordered Hessian determinant of the given function is

        | 0   1   1 |
|H̄| =   | 1  -4   0 | = -1(-2 - 0) + 1(0 + 4) = 6 > 0
        | 1   0  -2 |

The second-order condition for a maximum is satisfied.

P_x = 100 - 2(70/3) = 160/3 ≈ 53.33        P_y = 80 - 110/3 = 130/3 ≈ 43.33
Therefore, the monopolist maximizes its profit when it sells 23.33 units of good x at a price of about 53.33 Birr per unit and 36.67 units of good y at a price of about 43.33 Birr per unit.
λ = 40/3 shows that a one-unit relaxation of the output quota increases the total profit of the monopolist by 40/3 units. In other words, if the constant of the constraint relaxes by one unit, that is x + y = 61, then the value of the objective function increases by approximately the value of the Lagrange multiplier.
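Working in exact fractions shows that the rounded figures above come from x = 70/3, y = 110/3 and λ = 40/3 (a verification sketch, not part of the text):

```python
from fractions import Fraction as F

x, y, lam = F(70, 3), F(110, 3), F(40, 3)
print(80 - 4 * x + lam)  # 0: condition (14) holds exactly
print(60 - 2 * y + lam)  # 0: condition (15) holds exactly
print(x + y)             # 60: the production quota is met
# The exact prices 160/3 and 130/3 round to about 53.33 and 43.33 Birr.
print(float(100 - 2 * x), float(80 - y))
```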

ii) Minimization
A firm can determine the least-cost combination of inputs for the production of a certain level of output Q. Given the production function Q = f(L, K) and the cost function of the firm C = LP_L + KP_K, where L = labor, K = capital and Q = output, and supposing the prices of both inputs to be exogenous, we can formulate the cost-minimization problem as
Minimize C = P_L L + P_K K
Subject to Q = f(L, K)
To determine the amounts of labor and capital that should be employed, we first formulate the Lagrange function. It is
L = LP_L + KP_K + λ(Q - f(L, K)) ---------------------- (17)
First-order conditions for a minimum cost are
L_L = P_L - λQ_L = 0
λ = P_L/Q_L = P_L/MP_L -------------------------------- (18)
L_K = P_K - λQ_K = 0
λ = P_K/Q_K = P_K/MP_K -------------------------------- (19)
L_λ = Q - f(L, K) = 0 --------------------------------- (20)
where Q_L and Q_K represent the marginal products of labor and capital respectively.
From equations (18) and (19), we get
λ = P_L/MP_L = P_K/MP_K ------------------------------- (21)

Equation (21) indicates that, at the point of optimal input combination, the price-to-marginal-product ratio has to be the same for each input. This ratio shows the amount of expenditure per unit of the marginal product of the input under consideration. Thus, the Lagrange multiplier is interpreted as the marginal cost of production at the optimum. In other words, it indicates the effect of a change in output on the total cost of production, i.e., it measures the comparative-static effect of the constraint constant on the optimal value of the objective function.

The first-order condition indicated in equation (21) can be analyzed in terms of isoquants and isocosts as

P_L/P_K = MP_L/MP_K ----------------------------------- (22)

The ratio MP_L/MP_K represents the negative of the slope of the isoquant, which measures the marginal rate of technical substitution of labor for capital (MRTS_LK).
The ratio P_L/P_K shows the negative of the slope of the isocost. An isocost is a line which indicates the locus of input combinations which entail the same total cost. It is given by the equation
C = P_L L + P_K K, or K = C/P_K - (P_L/P_K)L
P_L/P_K = MP_L/MP_K indicates that the isocost line and the isoquant are tangent to each other at the point of optimal input combination.

Second-order condition for minimization of cost

A negative bordered Hessian determinant is sufficient to say the cost is at its minimum value. That is,

        | 0    Q_L   Q_K  |
|H̄| =   | Q_L  L_LL  L_LK | < 0
        | Q_K  L_KL  L_KK |
Example

Suppose a firm produces an output Q using labor L and capital K with the production function Q = 10K^0.5 L^0.5. If output is restricted to 200 units, the price of labor is 10 Birr per unit and the price of capital is 40 Birr per unit, determine the amounts of L and K that should be employed at minimum cost. Find the minimum cost.

The problem is: Minimize C = 10L + 40K
Subject to 200 = 10K^0.5 L^0.5
Formulating the Lagrange function:
L(L, K, λ) = 10L + 40K + λ(200 - 10K^0.5 L^0.5) ------- (23)
First-order conditions:
L_L = 10 - 5λK^0.5 L^-0.5 = 0
λ = 2L^0.5/K^0.5 -------------------------------------- (24)
L_K = 40 - 5λK^-0.5 L^0.5 = 0
λ = 8K^0.5/L^0.5 -------------------------------------- (25)
L_λ = 200 - 10K^0.5 L^0.5 = 0
10K^0.5 L^0.5 = 200 ----------------------------------- (26)

From equations (24) and (25), we get

2L^0.5/K^0.5 = 8K^0.5/L^0.5
2L = 8K
L = 4K ------------------------------------------------ (27)
Substituting equation (27) into (26) gives us
K^0.5 (4K)^0.5 = 20 ----------------------------------- (28)
2K = 20
K = 10 and L = 4(10) = 40, λ = 2(40)^0.5/(10)^0.5 = 4

Second-order condition

Now we should check the second-order condition to verify that the cost of production is least at K = 10 and L = 40.

For cost minimization the determinant of the bordered Hessian matrix must be less than zero:

        | 0    Q_L   Q_K  |
|H̄| =   | Q_L  L_LL  L_LK | < 0
        | Q_K  L_KL  L_KK |

At L = 40 and K = 10:
Q_L = ∂Q/∂L = 5(K/L)^0.5 = 5(10/40)^0.5 = 2.5
Q_K = ∂Q/∂K = 5(L/K)^0.5 = 5(40/10)^0.5 = 10
L_LL = 2.5λK^0.5 L^-1.5 = 2.5(4)(10)^0.5 (40)^-1.5 = 0.125
L_KK = 2.5λK^-1.5 L^0.5 = 2.5(4)(10)^-1.5 (40)^0.5 = 2
L_KL = L_LK = -2.5λK^-0.5 L^-0.5 = -2.5(4)(10)^-0.5 (40)^-0.5 = -0.5
Therefore, the determinant of the bordered Hessian matrix is

        | 0     2.5    10   |
|H̄| =   | 2.5   0.125  -0.5 |
        | 10   -0.5     2   |

= -2.5(5 + 5) + 10(-1.25 - 1.25)
= -2.5(10) + 10(-2.5)
|H̄| = -50 < 0

Thus, the firm minimizes its cost when it employs 10 units of capital and 40 units of labor in the production process, and the minimum cost is
C = 10(40) + 40(10)
Min. C = 400 + 400 = 800 Birr
In this problem K, L and λ are endogenous. The Lagrange multiplier λ measures the responsiveness of the objective function to a change in the constant of the constraint function.
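These results are easy to verify numerically (a sketch, not part of the text; tolerances are arbitrary):

```python
K, L, lam = 10, 40, 4
Q = 10 * K**0.5 * L**0.5
print(round(Q, 6))      # 200.0: the output constraint is met
print(10 * L + 40 * K)  # 800: the minimum cost in Birr
MPL = 5 * (K / L)**0.5  # marginal product of labor at (K, L)
MPK = 5 * (L / K)**0.5  # marginal product of capital at (K, L)
print(10 - lam * MPL)   # 0.0: P_L = lambda * MP_L
print(40 - lam * MPK)   # 0.0: P_K = lambda * MP_K
```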

What happens to the value of the Lagrange function and the constrained function when total output increases from 200 to 201? What about the amounts of L and K? Compare the value of the constrained function with that of the Lagrange function at this point. Interpret the value of λ.

Optimization of the n-variable case

Given the objective function

Optimize Z = f(x1, x2, x3, ..., xn)
Subject to g(x1, x2, x3, ..., xn) = c

Similar to our earlier discussion, we ought first to formulate the Lagrange function. That is,
L = f(x1, x2, x3, ..., xn) + λ(c - g(x1, x2, x3, ..., xn))

The necessary condition for optimization of this function is that
L_λ = L_1 = L_2 = L_3 = ... = L_n = 0
The second-order condition for optimization of this function depends on the sign of d²L subject to dg = g_1 dx_1 + g_2 dx_2 + g_3 dx_3 + ... + g_n dx_n = 0, similar to our earlier discussion.

The positive or negative definiteness of d²L involves the bordered Hessian determinant test. However, in this case the conditions have to be expressed in terms of the bordered principal minors of the Hessian. Given the bordered Hessian as

        | 0    g_1   g_2   g_3   ...  g_n  |
        | g_1  L_11  L_12  L_13  ...  L_1n |
|H̄| =   | g_2  L_21  L_22  L_23  ...  L_2n |
        | g_3  L_31  L_32  L_33  ...  L_3n |
        | ...                              |
        | g_n  L_n1  L_n2  L_n3  ...  L_nn |

the successive bordered principal minors are

         | 0    g_1   g_2  |          | 0    g_1   g_2   g_3  |
|H̄_2| =  | g_1  L_11  L_12 |, |H̄_3| = | g_1  L_11  L_12  L_13 |, etc.
         | g_2  L_21  L_22 |          | g_2  L_21  L_22  L_23 |
                                      | g_3  L_31  L_32  L_33 |

Note that |H̄| = |H̄_n|.
|H̄_2| is the second principal minor of the Hessian bordered with 0, g_1 and g_2.

d²L is positive definite subject to dg = 0 if and only if |H̄_2|, |H̄_3|, ..., |H̄_n| < 0.
d²L is negative definite subject to dg = 0 if and only if |H̄_2| > 0, |H̄_3| < 0, |H̄_4| > 0, ...

A positive definite d²L is a sufficient condition for a minimum value, and a negative definite d²L is a sufficient condition for a maximum of the objective function.
In the analysis above, |H̄_2| is the minor which contains L_22 as the last element of its principal diagonal, |H̄_3| the one which includes L_33 as the last element of its principal diagonal, and so on.
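The successive bordered principal minors can be generated mechanically by taking larger and larger leading submatrices. The sketch below (hypothetical helper names; the example matrix is an illustration, not from the text) does this for the one-constraint case:

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def bordered_principal_minors(H):
    """|H2|, ..., |Hn| for a one-constraint bordered Hessian (border in row/column 0)."""
    n = len(H) - 1  # number of choice variables
    return [det([row[:k + 1] for row in H[:k + 1]]) for k in range(2, n + 1)]

# Hypothetical 3-variable illustration: g = (1, 1, 1), Lii = -2, Lij = 0.
H = [[0,  1,  1,  1],
     [1, -2,  0,  0],
     [1,  0, -2,  0],
     [1,  0,  0, -2]]
print(bordered_principal_minors(H))  # [4, -12]: alternating signs -> maximum
```

The alternating signs |H̄_2| > 0, |H̄_3| < 0 match the negative-definite (maximum) rule above.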

Optimization with more than one equality constraint

Let us consider an optimization problem involving three variables and two constraints:
Optimize Z = f(x1, x2, x3)
Subject to g^1(x1, x2, x3) = c^1
g^2(x1, x2, x3) = c^2

As usual we should construct the Lagrange function by using Lagrange multipliers. Since we have two constraint functions, we are required to incorporate two λs, i.e., λ1 and λ2, in our analysis.
The Lagrange function is
L = f(x1, x2, x3) + λ1(c^1 - g^1(x1, x2, x3)) + λ2(c^2 - g^2(x1, x2, x3))
First-order conditions for optimization:
L_1 = f_1 - λ1 g^1_1 - λ2 g^2_1 = 0
L_2 = f_2 - λ1 g^1_2 - λ2 g^2_2 = 0
L_3 = f_3 - λ1 g^1_3 - λ2 g^2_3 = 0
L_λ1 = c^1 - g^1(x1, x2, x3) = 0
L_λ2 = c^2 - g^2(x1, x2, x3) = 0
where g^j_i denotes ∂g^j/∂x_i.
When there are n variables and m constraints, the Lagrange function becomes
L = f(x1, x2, x3, ..., xn) + Σ_{j=1}^{m} λ_j [c^j - g^j(x1, x2, x3, ..., xn)]

In this case we will have m + n variables in the Lagrange function, and we will also have m + n simultaneous equations.

First-order conditions are
L_i = f_i - Σ_j λ_j g^j_i = 0, (i = 1, 2, 3, ..., n)
L_λj = c^j - g^j(x1, x2, x3, ..., xn) = 0, (j = 1, 2, 3, ..., m)

Second-order conditions for the three-variable, two-constraint problem are based on the bordered Hessian

        | 0      0      g^1_1  g^1_2  g^1_3 |
        | 0      0      g^2_1  g^2_2  g^2_3 |
|H̄| =   | g^1_1  g^2_1  L_11   L_12   L_13  |
        | g^1_2  g^2_2  L_21   L_22   L_23  |
        | g^1_3  g^2_3  L_31   L_32   L_33  |

In this case |H̄_3| = |H̄|, and since m + 1 = 3 = n it is the only bordered principal minor that needs to be checked. Thus for a maximum value,
|H̄_3| < 0 (the sign of (-1)^3),
and for a minimum,
|H̄_3| > 0 (the sign of (-1)^2).
With n variables and m constraints, the second-order condition is explained as follows.
14
        | 0      ...  0      |  g^1_1  ...  g^1_n |
        | ...         ...    |  ...         ...   |
        | 0      ...  0      |  g^m_1  ...  g^m_n |
|H̄| =   |--------------------|--------------------|
        | g^1_1  ...  g^m_1  |  L_11   ...  L_1n  |
        | ...         ...    |  ...         ...   |
        | g^1_n  ...  g^m_n  |  L_n1   ...  L_nn  |

We have now divided the bordered Hessian determinant into four parts. The upper-left area includes zeros only, and the lower-right area is simply the plain Hessian. The remaining two areas include the g^j_i derivatives. These derivatives have a mirror-image relationship to each other, with the principal diagonal of the bordered Hessian as reference.

We can create several bordered principal minors from |H̄|. It is possible to check the second-order sufficient condition for optimization using the signs of the following bordered principal minors:
|H̄_{m+1}|, |H̄_{m+2}|, ..., |H̄_n|
The objective function sufficiently achieves its maximum value when the successive bordered principal minors alternate in sign, the sign of |H̄_{m+1}| being that of (-1)^{m+1}. For a minimum value the sufficient condition is that all bordered principal minors have the same sign, namely that of (-1)^m. This indicates that for a minimum, if we have an odd number of constraints, the sign of all bordered principal minors will be negative, and positive with an even number of constraints.

Exercise

Solve the following questions based on the information above.

1. What is a constrained function? -------------------------------------------------------------
2. Explain the elasticity of substitution. -----------------------------------------------------
3. What does the Lagrange multiplier indicate? -------------------------------------------------
4. Suppose a firm faces the production function Q = 120L + 200K - L² - 2K² for positive values of Q. If it can buy L at 5 Birr per unit and K at 8 Birr per unit, and it has a budget of 70 Birr, determine the maximum output that it can produce using the substitution method. ----------
5. Suppose the prices of inputs K and L are 12 Birr and 3 Birr per unit respectively, and the production function of the firm is Q = 25K^0.5 L^0.5. Determine the minimum cost of producing 1,250 units of output using the Lagrange multiplier method. --------------------------
6. Suppose a consumer has a utility function of U = 40x^0.5 y^0.5. If the price of x is 20 Birr per unit, the price of y is 5 Birr per unit and the consumer has a budget of 600 Birr, determine the amounts of x and y which maximize utility using the Lagrange multiplier method.

4.3 Inequality Constraints, Kuhn-Tucker Theorems, and Mixed Constraints

Nonlinear Programming
The problem of optimizing an objective function subject to certain restrictions or constraints is a usual phenomenon in economics. Mostly, the method of maximizing or minimizing a function involves equality constraints. For instance, utility may be maximized subject to the fixed income that the consumer has, with the budget constraint given in the form of an equation. This type of optimization is referred to as classical optimization. But an objective function subject to inequality constraints can be optimized using the methods of mathematical programming. If the objective function as well as the inequality constraints are linear, we use the method of linear programming. However, if the objective function or the inequality constraints are nonlinear, we apply the technique of nonlinear programming to optimize the function.

Maximization problem
Maximize π = f(x1, x2, x3, ..., xn)
Subject to g^1(x1, x2, x3, ..., xn) ≤ k_1
g^2(x1, x2, x3, ..., xn) ≤ k_2
g^3(x1, x2, x3, ..., xn) ≤ k_3
: : :
g^m(x1, x2, x3, ..., xn) ≤ k_m
and x_j ≥ 0, (j = 1, 2, 3, ..., n)

Minimization problem
It can be expressed in the form
Minimize C = f(x1, x2, x3, ..., xn)
Subject to g^1(x1, x2, x3, ..., xn) ≥ k_1
g^2(x1, x2, x3, ..., xn) ≥ k_2
g^3(x1, x2, x3, ..., xn) ≥ k_3
: : :
g^m(x1, x2, x3, ..., xn) ≥ k_m, x_j ≥ 0 (j = 1, 2, 3, ..., n)

where C represents total cost, which is the objective function;
x_j is the amount of output produced;
k_i is the constant of the i-th constraint; and
g^i is the i-th constraint function.

We have observed from the above expressions that nonlinear programming also includes three ingredients. These are
- the objective function
- a set of (inequality) constraints
- non-negativity restrictions on the choice variables

The objective function as well as the inequality constraints are assumed to be differentiable with respect to each of the choice variables. As in linear programming, we apply ≤ constraints for maximization problems, while minimization problems involve only ≥ constraints.

Example 1
Find the values of x and y of the following functions graphically.
a) Minimize C = x² + y²
Subject to xy ≥ 25
x, y ≥ 0
First we should convert the inequality constraint into an equality, xy = 25, and draw the graph of this constraint function in the xy-plane.

x   1    2     3    4     5   6     7     ...  25
y   25   12.5  8.3  6.25  5   4.17  3.57  ...  1

Fig. (a)

The shaded region in the above figure represents the feasible region. Let us evaluate the objective function C at points A, B, C, D and E on the graph.

At point A (1, 25): C = 1² + 25² = 1 + 625 = 626
At point B (4, 6.25): C = 4² + (6.25)² = 16 + 39.06 = 55.06
At point C (5, 5): C = 5² + 5² = 25 + 25 = 50
At point D (6, 4.17): C = 6² + (4.17)² = 36 + 17.39 = 53.39
At point E (25, 1): C = 25² + 1² = 625 + 1 = 626

Therefore, the values of x and y which minimize the objective function are 5 and 5 respectively. The minimum value is C = 50.
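The graphical result can be confirmed by evaluating C along the binding boundary xy = 25, where y = 25/x (a sketch with an arbitrary grid, not part of the text):

```python
# Along the boundary xy = 25, y = 25 / x; evaluate C = x^2 + y^2 on a grid.
grid = [i / 100 for i in range(100, 2501)]  # x in [1, 25]
best_x = min(grid, key=lambda x: x**2 + (25 / x)**2)
best_y = 25 / best_x
print(best_x, best_y, best_x**2 + best_y**2)  # 5.0 5.0 50.0
```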

b) Maximize π = x² + (y - 2)²
Subject to 5x + 3y ≤ 15
and x, y ≥ 0

Solution
As in problem (a), we should convert the inequality constraint into an equality constraint and draw its graph in the xy-plane. It is 5x + 3y = 15.

x   0   1     2     3
y   5   3.3   1.67  0

Fig. (b)

The shaded region of the above figure represents the feasible region, as every point in this region satisfies the inequality constraint 5x + 3y ≤ 15.

Evaluating the objective function at points A, B, C and D of the above graph (Fig. b):
At point A (0, 5): π = 0² + (5 - 2)² = 0 + 9 = 9
At point B (1, 3.3): π = 1² + (3.3 - 2)² = 1 + 1.69 = 2.69
At point C (2, 1.67): π = 2² + (1.67 - 2)² = 4 + 0.1089 = 4.1089
At point D (3, 0): π = 3² + (0 - 2)² = 9 + 4 = 13

Therefore, the objective function is maximized when x = 3 and y = 0. The maximum profit is π = 13.
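Because the objective here is convex, its maximum over the feasible region lies at a corner; a brute-force scan of the whole region (the grid step is an arbitrary choice, not from the text) agrees with the corner evaluation above:

```python
def profit(x, y):
    """Objective from example (b): pi = x^2 + (y - 2)^2."""
    return x**2 + (y - 2)**2

feasible = [(x / 100, y / 100)
            for x in range(0, 301) for y in range(0, 501)
            if 5 * x / 100 + 3 * y / 100 <= 15]
best = max(feasible, key=lambda p: profit(*p))
print(best, profit(*best))  # (3.0, 0.0) 13.0
```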

In general, we can distinguish nonlinear programming from linear programming on the following points.

- In nonlinear programming the optimum does not necessarily lie at an extreme point of the feasible region.
- The number of constraints need not be the same as the number of choice variables.
- Continuing in the same direction of movement may not lead to a continually increasing (or decreasing) value of the objective function.
- The feasible region may not be a convex set.
- A local optimum may not be a global optimum.

Kuhn - Tucker Conditions

In the previous section of this chapter, we have discussed about optimization problems of
the objective function with equality constraints and without explicitly restricting the sing
of the choice variables. In this case, the first order condition is satisfied provided that the
first order partial derivative of the Lagrange function with respect of each choice variable
and with respect to the Lagrange multiplier is zero. For instance, in the problem

Maximize  = f ( x, y )
Subject to g ( x, y )  k

19
The Lagrange function is
L = f(x, y) + λ(k − g(x, y))
The first order condition states that
Lx = Ly = Lλ = 0

In non-linear programming, there is a similar first order condition, referred to as the Kuhn-Tucker conditions. As we discussed previously, in the classical optimization process the first order condition is a necessary condition. For the Kuhn-Tucker conditions to be necessary conditions, however, a certain qualification has to be fulfilled.

Now let us discuss Kuhn-Tucker conditions in two steps for the purpose of making the
explanation easy to understand.

Step 1
In the first step, let us take the problem of optimizing an objective function with non-negativity restrictions and no other constraints. In economics, the most common inequality constraint is the non-negativity constraint.
Maximize π = f(x)
Subject to x ≥ 0
where the function is assumed to be continuous and smooth. Based on the restriction x ≥ 0, we may have three possible results, as shown in the following figures.

When the local maximum resides inside the shaded feasible region, as at point B of diagram (i), we have an interior solution. In this case, the first order condition is the same as in the classical optimization process, i.e. dπ/dx = 0.

Diagram (ii) shows the local maximum located on the vertical axis, indicated by point C. At this point the choice variable is 0 and the first derivative is zero, i.e. dπ/dx = 0; at point C we have a boundary solution.

Diagram (iii) indicates that the local maximum may be located at point D or point E within the feasible region. In this case, the maximum point is characterized by the inequality dπ/dx < 0, because the curves are on their decreasing portion at these points.

From the above discussion it is clear that the following three conditions have to be met so as to determine the value of the choice variable which gives the local maximum of the objective function.

f′(x) = 0 and x > 0 (point B)
f′(x) = 0 and x = 0 (point C)
f′(x) < 0 and x = 0 (points D and E)
Combining these three conditions into one statement gives us
f′(x) ≤ 0, x ≥ 0 and x·f′(x) = 0

The first inequality gives the information concerning dπ/dx. The second shows the non-negativity restriction of the problem. The third requires the product of the two quantities x and f′(x) to be zero. This statement, which combines the three conditions, represents the first order necessary condition for the objective function to achieve its local maximum when the choice variable has to be non-negative.

If the problem involves n choice variables, as in

Maximize π = f(x1, x2, x3, …, xn)
Subject to xi ≥ 0

the first order condition in the classical optimization process is
f1 = f2 = f3 = ⋯ = fn = 0
The first order condition that should be satisfied to determine the values of the choice variables which maximize the objective function is
fi ≤ 0, xi ≥ 0 and xi·fi = 0 (i = 1, 2, 3, …, n)
where fi is the partial derivative of the objective function with respect to xi, i.e., fi = ∂π/∂xi.
Step 2
Now we continue to the second step, in which we incorporate inequality constraints in the problem. To simplify the analysis, let us first discuss a maximization problem with three choice variables and two constraints, as shown below.
Maximize π = f(x1, x2, x3)
Subject to g¹(x1, x2, x3) ≤ k1
g²(x1, x2, x3) ≤ k2
And x1, x2, x3 ≥ 0

Using the dummy variables s1 and s2 we can change the above problem into
Maximize π = f(x1, x2, x3)
Subject to g¹(x1, x2, x3) + s1 = k1
g²(x1, x2, x3) + s2 = k2
x1, x2, x3 ≥ 0 and s1, s2 ≥ 0

Ignoring for the moment the non-negativity constraints on the choice variables, we can formulate the Lagrange function by the classical method as
L = f(x1, x2, x3) + λ1[k1 − g¹(x1, x2, x3) − s1] + λ2[k2 − g²(x1, x2, x3) − s2]

It is possible to derive the Kuhn-Tucker conditions directly from this Lagrange function. Considering the above 3-variable, 2-constraint problem, the first order condition is

∂L/∂x1 = ∂L/∂x2 = ∂L/∂x3 = ∂L/∂s1 = ∂L/∂s2 = ∂L/∂λ1 = ∂L/∂λ2 = 0

However, the xj and si variables are restricted to be non-negative. As a result, the first order conditions on these variables ought to be modified as follows.
∂L/∂xj ≤ 0, xj ≥ 0 and xj·∂L/∂xj = 0
∂L/∂si ≤ 0, si ≥ 0 and si·∂L/∂si = 0
∂L/∂λi = 0   where (i = 1, 2 and j = 1, 2, 3)
i
However, we can combine the last two lines and thereby avoid the dummy variables in the above first order condition, as shown below. As ∂L/∂si = −λi, the second line shows that
−λi ≤ 0, si ≥ 0 and −si·λi = 0
or
λi ≥ 0, si ≥ 0 and si·λi = 0

But we know that si = ki − gⁱ(x1, x2, x3). By substituting this in place of si, we get

ki − gⁱ(x1, x2, x3) ≥ 0, λi ≥ 0 and λi[ki − gⁱ(x1, x2, x3)] = 0

Therefore, the first order condition without dummy variables is expressed as

∂L/∂xj ≤ 0, xj ≥ 0 and xj·∂L/∂xj = 0
∂L/∂λi = ki − gⁱ(x1, x2, x3) ≥ 0, λi ≥ 0 and λi[ki − gⁱ(x1, x2, x3)] = 0

These are the Kuhn-Tucker conditions for the given maximization problem.

How can we solve a minimization problem?

One method is to change it into a maximization problem and then apply the same procedure as for maximization: minimizing C is equivalent to maximizing (−C). However, keep in mind that we then have to multiply each constraint inequality by (−1). Alternatively, instead of converting the inequality constraints into equality constraints using dummy variables, we can apply the Lagrange multiplier method directly and state the minimization version of the Kuhn-Tucker conditions as
∂L/∂xj ≥ 0, xj ≥ 0 and xj·∂L/∂xj = 0
∂L/∂λi ≤ 0, λi ≥ 0 and λi·∂L/∂λi = 0 (minimization)

Example
2. Let us check whether the solutions of our example 1 satisfy the Kuhn-Tucker conditions or not.

a) Minimize C = x² + y²
Subject to xy ≥ 25
x, y ≥ 0

The Lagrange function for this problem is

L = x² + y² + λ(25 − xy)
It is a minimization problem. Therefore, the appropriate conditions are
∂L/∂x = 2x − λy ≥ 0, x ≥ 0 and x·∂L/∂x = 0
∂L/∂y = 2y − λx ≥ 0, y ≥ 0 and y·∂L/∂y = 0
∂L/∂λ = 25 − xy ≤ 0, λ ≥ 0 and λ·∂L/∂λ = 0
Can we determine a non-negative value of λ which satisfies all the above conditions together with the optimal solutions x and y? The optimal solutions in our earlier discussion are x = 5 and y = 5, which are nonzero. Thus, the complementary slackness conditions (x·∂L/∂x = 0 and y·∂L/∂y = 0) require that ∂L/∂x = 0 and ∂L/∂y = 0. We can therefore determine the value of λ by substituting the optimal values of the choice variables into either of these marginal conditions:

∂L/∂x = 2x − λy = 0
2(5) − λ(5) = 0
10 − 5λ = 0
λ = 2 > 0
The values λ = 2, x = 5 and y = 5 imply that ∂L/∂x = 0, ∂L/∂y = 0 and ∂L/∂λ = 0, which fulfils the marginal conditions and the complementary slackness conditions. In other words, all the Kuhn-Tucker conditions are satisfied.
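The verification just performed can be restated as a short numerical check. This sketch only confirms the example's candidate point (x = y = 5, λ = 2) against the minimization Kuhn-Tucker conditions; it is not a solver.

```python
# Verify the minimization Kuhn-Tucker conditions of example 2(a)
# for L = x^2 + y^2 + lam*(25 - x*y) at x = y = 5, lam = 2.

x, y, lam = 5.0, 5.0, 2.0

dLdx = 2 * x - lam * y   # must be >= 0, with x * dLdx = 0
dLdy = 2 * y - lam * x   # must be >= 0, with y * dLdy = 0
dLdlam = 25 - x * y      # must be <= 0, with lam * dLdlam = 0

ok = (dLdx >= 0 and x * dLdx == 0 and
      dLdy >= 0 and y * dLdy == 0 and
      dLdlam <= 0 and lam * dLdlam == 0 and
      min(x, y, lam) >= 0)
print(ok)  # True: all Kuhn-Tucker conditions hold
```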

3. Maximize Z = 10x − x² + 180y − y²
Subject to x + y ≤ 80
x, y ≥ 0
Solution

First we formulate the Lagrange function, treating the constraint as an equality and ignoring the non-negativity constraints.
L = 10x − x² + 180y − y² + λ(80 − x − y)
The first order conditions are
∂L/∂x = 10 − 2x − λ = 0  ⇒  λ = 10 − 2x -------------------- (1)
∂L/∂y = 180 − 2y − λ = 0  ⇒  λ = 180 − 2y -------------------- (2)
∂L/∂λ = 80 − x − y = 0  ⇒  x + y = 80 -------------------- (3)
Taking equations (1) and (2) simultaneously,
10 − 2x = 180 − 2y
2y − 2x = 170
2y = 170 + 2x
y = 85 + x -------------------- (4)
If we substitute equation (4) into (3), we get
x + 85 + x = 80
2x = −5  ⇒  x = −2.5
However, the values of the choice variables are restricted to be non-negative. Thus x = −2.5 is infeasible. We must set x = 0 since it has to be non-negative. Now we can determine the value of y by substituting zero in place of x in equation (3).
0 + y = 80
y* = 80
Therefore, λ = 180 − 2(80) = 20.
The possible solutions are x* = 0, y* = 80, λ = 20.

However, we must check the inequality constraints and the complementary slackness conditions to decide whether these values are solutions or not.
1) Inequality constraints
i) The non-negativity restrictions are satisfied since x = 0 ≥ 0, y = 80 ≥ 0 and λ = 20 ≥ 0.
ii) The inequality constraint holds:
x + y ≤ 80
0 + 80 ≤ 80
2) Complementary slackness conditions
i) x = 0, so x·∂L/∂x = 0 holds; since the problem is maximization we need ∂L/∂x ≤ 0:
∂L/∂x = 10 − 2(0) − 20 = −10 ≤ 0
ii) y = 80 > 0, so y·∂L/∂y = 0 requires ∂L/∂y = 0:
∂L/∂y = 180 − 2(80) − 20 = 0
iii) λ = 20 > 0, so λ·∂L/∂λ = 0 requires ∂L/∂λ = 0:
∂L/∂λ = 80 − 0 − 80 = 0
All the Kuhn-Tucker conditions are satisfied. Thus, the objective function is maximized when x* = 0, y* = 80, λ = 20.
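The checks of example 3 can likewise be run numerically. The sketch below evaluates the three partial derivatives of the Lagrange function at the candidate point and tests the maximization Kuhn-Tucker conditions; it verifies only this example.

```python
# Verify the maximization Kuhn-Tucker conditions of example 3 at
# x = 0, y = 80, lam = 20 for
#   L = 10x - x^2 + 180y - y^2 + lam*(80 - x - y).

x, y, lam = 0.0, 80.0, 20.0

dLdx = 10 - 2 * x - lam    # -10 <= 0, and x = 0 keeps x * dLdx = 0
dLdy = 180 - 2 * y - lam   # 0, as required since y > 0
dLdlam = 80 - x - y        # 0, as required since lam > 0

ok = (dLdx <= 0 and x * dLdx == 0 and
      dLdy <= 0 and y * dLdy == 0 and
      dLdlam >= 0 and lam * dLdlam == 0)
print(dLdx, dLdy, dLdlam, ok)  # -10.0 0.0 0.0 True
```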

Economic Application

Example
4. The revenue and cost conditions of a firm are given as R = 32x − x² and C = x² + 8x + 4, where x is output. Suppose the minimum profit is π₀ = 18. Determine the amount of output which maximizes revenue given this minimum profit. In this case, the revenue function is concave and the cost function is convex.

The problem is
Maximize R = 32x − x²
Subject to x² + 8x + 4 ≤ 32x − x² − 18
And x ≥ 0

Under these circumstances the Kuhn-Tucker conditions are necessary and sufficient conditions, as all of the above three conditions, i.e., (1), (2), (3), are satisfied.

The Lagrange function of this problem is

L = 32x − x² + λ(24x − 2x² − 22) -------------------- (1)
Thus,
∂L/∂x = 32 − 2x + 24λ − 4λx ≤ 0 -------------------- (2)
∂L/∂λ = 24x − 2x² − 22 ≥ 0 -------------------- (3)
When the constraint binds, equation (3) gives
2x² − 24x + 22 = 0 -------------------- (4)
Solving (4) we get x = 1 or x = 11, with λ = −3/2 or λ = 1/2 respectively.
2 2
However, we must check the inequality constraints and the complementary slackness conditions to decide whether these values are the solutions or not:
∂L/∂x ≤ 0, x ≥ 0 and x·∂L/∂x = 0 -------------------- (5)
∂L/∂λ ≥ 0, λ ≥ 0 and λ·∂L/∂λ = 0 -------------------- (6)

At x = 1: since x > 0, complementary slackness implies ∂L/∂x = 0. Thus 32 − 2 + 24λ − 4λ = 30 + 20λ = 0, giving λ = −3/2. This does not satisfy equation (6).

At x = 11: since x > 0, complementary slackness implies ∂L/∂x = 0. Thus 32 − 22 + 24λ − 44λ = 10 − 20λ = 0, giving λ = 1/2. This satisfies both equation (5) and equation (6), which means the Kuhn-Tucker conditions are fulfilled at x = 11. Therefore, revenue is maximized when the firm sells x = 11 units of output.
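The screening of the two roots can be sketched in a few lines. For each boundary candidate with x > 0, complementary slackness forces ∂L/∂x = 0, which pins down λ; only a non-negative λ passes the Kuhn-Tucker test. This reproduces the example's arithmetic only.

```python
# Screen the roots of 2x^2 - 24x + 22 = 0 from example 4.
# From dL/dx = 32 - 2x + 24*lam - 4*lam*x = 0:
#   lam = (2x - 32) / (24 - 4x)

results = {}
for x in (1.0, 11.0):
    lam = (2 * x - 32) / (24 - 4 * x)
    results[x] = lam

print(results)  # {1.0: -1.5, 11.0: 0.5} -> only x = 11 admits lam >= 0
```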

Mixed Constraints
An optimization problem with mixed constraints can be reformulated as either a maximization or a minimization problem. This procedure incorporates the following rules.
i) Maximizing the objective function Z(x) is equivalent to minimizing −Z(x), and vice versa.
ii) The constraint g(x) ≥ c can be presented as −g(x) ≤ −c.
iii) The constraint g(x) = c is equivalent to the double constraint g(x) ≤ c and −g(x) ≤ −c.
iv) The non-negativity constraint x ≥ 0 can be denoted by a new constraint g(x) = −x ≤ 0.
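Rule (iii) can be illustrated with a tiny check: the pair of "≤" constraints holds at a point exactly when the original equality holds there. This is a minimal sketch, useful when a solver accepts only "≤" constraints; the function name and test values are hypothetical.

```python
# Rule (iii): g(x) = c is equivalent to the pair
#   g(x) <= c  and  -g(x) <= -c.

def pair_holds(gx, c, tol=1e-12):
    # Both one-sided constraints hold only when gx equals c (within tol).
    return gx <= c + tol and -gx <= -c + tol

print(pair_holds(5.0, 5.0))  # True: the equality g(x) = c holds
print(pair_holds(4.0, 5.0))  # False: g(x) < c violates -g(x) <= -c
```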

Exercise
Solve the following questions based on the information given above.
1. Describe non linear programming ---------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------
2. Explain the difference between linear programming and nonlinear programming.-------
-----------------------------------------------------------------------------------------------------------
3. What are the ingredients of non linear programming problems? ---------------------------
-----------------------------------------------------------------------------------------------------------

4. Write the Kuhn-Tucker conditions of the problem
Maximize U = U(x1, x2, x3, …, xn)
Subject to p1x1 + p2x2 + p3x3 + … + pnxn ≤ B
x1, x2, x3, …, xn ≥ 0
where xi represents the goods consumed and pi represents the respective prices of these goods.
------------------------------------------------------------------------------------------------------------
5. Check whether the Kuhn - Tucker conditions are satisfied or not for the problem given
in example 1 (b) at the optimal values of x and y. ------------------------------------------------
-----------------------------------------------------------------------------------------------------------
6. Minimize C = x² + y²
Subject to x + y ≥ 2
x, y ≥ 0
Write out the Kuhn-Tucker conditions and use them to find the optimal solution by trial and error. What are the values of x and y? ---------------------------------------------------------

7. The demand function of the firm is given as P = 12 − x and the cost function is C = ½x². When the minimum profit is π₀ = 24,
Maximize R = f(x)
Subject to C(x) ≤ R(x) − 24
x ≥ 0
 Are the Kuhn-Tucker conditions satisfied or not?
 Determine the value of x using trial and error.

