Chapter Two:

UNCONSTRAINED & CONSTRAINED OPTIMIZATION

2.1. Unconstrained optimization

Functions which do not involve constraints are referred to as unconstrained functions, and the process of optimizing them is called unconstrained (free) optimization.
If y = f(x) is continuous and differentiable,
• it has a maximum value at a point where it changes from an increasing to a decreasing function;
• it has a minimum value at a point where it changes from a decreasing to an increasing function.
The values of x at which the function attains its minimum or maximum are known as critical values.
2.1.1. Functions of One Independent Variable
Given the function y = f(x),
a) Conditions for a minimum value
1. First order condition (necessary condition): $f'(x) = 0$
2. Second order condition (sufficient condition): $f''(x) > 0$
b) Conditions for a maximum value
1. First order condition (necessary condition): $f'(x) = 0$
2. Second order condition (sufficient condition): $f''(x) < 0$

When $f''(x) = 0$, the second derivative test is inconclusive.


Thus, we apply the successive derivative test to determine whether the function is at an extremum point or at a point of inflection.
• Evaluated at a critical point, if the first non-zero higher order derivative is an odd-numbered derivative, the function is at a point of inflection.
• If the first non-zero higher order derivative is an even-numbered derivative, the function is at a relative extremum:
• a positive value of that derivative indicates a relative minimum, and
• a negative value of that derivative indicates a relative maximum.

Example.
Find the minimum and maximum values of the function $f(x) = 3x^4 - 10x^3 + 6x^2 + 5$.
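For this example the first order condition $f'(x) = 12x^3 - 30x^2 + 12x = 6x(2x - 1)(x - 2) = 0$ gives the critical values $x = 0, \tfrac{1}{2}, 2$. The sketch below classifies each of them with the second derivative test; it assumes Python with sympy as the tool, which is our choice rather than the chapter's.

```python
import sympy as sp

x = sp.symbols('x')
f = 3*x**4 - 10*x**3 + 6*x**2 + 5

# First order condition: f'(x) = 0 gives the critical values.
critical = sp.solve(sp.diff(f, x), x)

# Second order condition: check the sign of f''(x) at each critical value.
f2 = sp.diff(f, x, 2)
for c in critical:
    sign = f2.subs(x, c)
    kind = "minimum" if sign > 0 else "maximum" if sign < 0 else "inconclusive"
    print(f"x = {c}: f''(x) = {sign} -> relative {kind}, f(x) = {f.subs(x, c)}")
```

Running it reports relative minima at $x = 0$ and $x = 2$ (with $f = 5$ and $f = -3$ respectively) and a relative maximum at $x = \tfrac{1}{2}$.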
2.1.2. Functions of Several Independent Variables
Given $Z = f(x, y)$, for Z to attain a maximum or a minimum it must satisfy both orders of conditions.

The first order conditions are $\frac{\partial Z}{\partial x} = 0$ and $\frac{\partial Z}{\partial y} = 0$,
i.e. the first order total differential of the function is zero: $dz = f_x\,dx + f_y\,dy = 0$.
However, there are two sets of second order conditions.
For a maximum: $\frac{\partial^2 Z}{\partial x^2} < 0$ and $\frac{\partial^2 Z}{\partial y^2} < 0$ ----------------------------(1)
For a minimum: $\frac{\partial^2 Z}{\partial x^2} > 0$ and $\frac{\partial^2 Z}{\partial y^2} > 0$
The other second order condition, required both for a maximum and for a minimum, is
$\left(\frac{\partial^2 Z}{\partial x^2}\right)\left(\frac{\partial^2 Z}{\partial y^2}\right) > \left(\frac{\partial^2 Z}{\partial x \partial y}\right)^2$ ---------------------------(2)
Alternatively, we can determine the second order sufficient condition using the concept of the total differential of the differential of the function, denoted by
$d^2 z = d(dZ) = f_{xx}\,dx^2 + 2 f_{xy}\,dx\,dy + f_{yy}\,dy^2$
Then, for any values of $dx$ and $dy$ not both zero,
$d^2 z < 0$ indicates that the function is at its maximum, whereas
$d^2 z > 0$ shows that the function is at its minimum point.
Moreover, for any values of $dx$ and $dy$,
$d^2 z < 0$ if and only if $f_{xx} < 0$, $f_{yy} < 0$ and $f_{xx} f_{yy} - f_{xy}^2 > 0$
$d^2 z > 0$ if and only if $f_{xx} > 0$, $f_{yy} > 0$ and $f_{xx} f_{yy} - f_{xy}^2 > 0$
Considering a function of n choice variables, $Z = f(x_1, x_2, \ldots, x_n)$,
the first order condition for an extremum of the function is $f_1 = f_2 = \cdots = f_n = 0$, which leads to
$dZ = f_1\,dx_1 + f_2\,dx_2 + \cdots + f_n\,dx_n = 0$
The second order sufficient conditions are identified using the Hessian determinant
$$|H| = \begin{vmatrix} f_{11} & f_{12} & \cdots & f_{1n} \\ f_{21} & f_{22} & \cdots & f_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ f_{n1} & f_{n2} & \cdots & f_{nn} \end{vmatrix}$$
The sufficient condition for a maximum of the function is satisfied when the leading principal minors alternate in sign:
$|H_1| < 0,\; |H_2| > 0,\; |H_3| < 0,\; \ldots,\; (-1)^n |H_n| > 0$,
whereas for a minimum of the function all of the leading principal minors of the Hessian must be positive.
Example
1. Given the function $Z = 160x - 3x^2 - 2xy - 2y^2 + 120y - 18$, find the maximum value of the function (a worked sketch follows this example list).
2. A firm produces two products that are sold in two markets with the demand schedules
$P_1 = 600 - 0.3 Q_1$ and $P_2 = 500 - 0.2 Q_2$.
Production costs are related, and the firm faces the total cost schedule
$TC = 16 + 1.2 Q_1 + 1.5 Q_2 + 0.2 Q_1 Q_2$

• Determine the profit maximizing level of output and price in each market.
• Determine the maximum profit of the firm.
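For Example 1, a minimal sympy sketch (the tooling is our assumption) that solves the first order conditions and verifies the second order conditions stated above:

```python
import sympy as sp

x, y = sp.symbols('x y')
Z = 160*x - 3*x**2 - 2*x*y - 2*y**2 + 120*y - 18

# First order conditions: Z_x = 0 and Z_y = 0.
sol = sp.solve([sp.diff(Z, x), sp.diff(Z, y)], [x, y])
print(sol)                          # {x: 20, y: 20}

# Second order conditions: Z_xx < 0, Z_yy < 0 and Z_xx*Z_yy - Z_xy**2 > 0.
Zxx, Zyy, Zxy = sp.diff(Z, x, 2), sp.diff(Z, y, 2), sp.diff(Z, x, y)
print(Zxx, Zyy, Zxx*Zyy - Zxy**2)   # -6, -4, 20 -> relative maximum
print(Z.subs(sol))                  # maximum value 2782
```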
2.1.3. Unconstrained Envelope Theorems
• We have discussed how the maximum or minimum value of a function depends on the values of the independent variables under consideration.
• What happens to the optimal value of the objective function when some parameters (such as tax rates) change, parameters which are assumed to be constant during the process of optimization but which may vary with the economic situation?
Example
1. Assume that the demand and the total cost functions of a monopolist are
$P = 24 - 3x$ and $C = x^2 + 8x$ respectively.
Determine the rate of change of the profit function with respect to the tax rate
when a tax of 4 Birr per unit of production is imposed.
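By the envelope theorem, the rate of change of the maximized profit with respect to the tax rate is the partial derivative of the profit function with respect to the tax, evaluated at the optimum: $\partial \pi^*/\partial t = -x^*$. A minimal sympy sketch for this example, writing the per-unit tax as $t$ (the symbol names are ours):

```python
import sympy as sp

x, t = sp.symbols('x t')
P = 24 - 3*x                  # inverse demand
C = x**2 + 8*x                # total cost
profit = P*x - C - t*x        # profit with a per-unit tax t

# Choose x so that d(profit)/dx = 0.
x_star = sp.solve(sp.diff(profit, x), x)[0]    # x* = (16 - t)/8

# Optimal-value function and the envelope result d(pi*)/dt = -x*.
profit_star = profit.subs(x, x_star)
print(sp.simplify(sp.diff(profit_star, t)))    # t/8 - 2, i.e. -x*
print(sp.diff(profit_star, t).subs(t, 4))      # -3/2 at t = 4
```

At a tax of 4 Birr per unit, $x^* = 1.5$, so maximized profit falls at the rate of 1.5 Birr per unit increase in the tax rate.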
2.2. Constrained optimization
• Functions which involve constraints are called constrained functions, and the process of optimization is referred to as constrained optimization.
Example:
A firm can maximize output subject to the constraint of a given budget for expenditure on inputs, or it may need to minimize cost subject to a certain minimum output being produced.
Utility may be maximized subject to the fixed income the consumer has, with the budget constraint given in the form of an equation (classical optimization).
2.2.1. One Variable Constrained Optimization with Non-Negativity Constraints
With an equality constraint: maximize $y = f(x)$ subject to $x = \bar{x}$.
In this case $y^* = f(\bar{x})$: it simply involves evaluating the objective function at the given point in the domain.

With a non-negativity constraint: maximize $y = f(x)$ subject to $x \geq 0$.

Examples (a sketch follows the list):
1. Maximize the objective function $y = -3x^2 - 7x + 2$ subject to $x \geq 0$.
2. Minimize $y = x^2 + 2x + 5$ subject to $x \geq 0$.
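With a single non-negativity constraint the logic is: find the unconstrained stationary point; if it satisfies $x \geq 0$ it is the solution, otherwise the optimum sits on the boundary $x = 0$. A hedged sympy sketch of that logic for the two examples (the helper `optimize_nonneg` is our own construction, valid for the concave/convex quadratics used here):

```python
import sympy as sp

x = sp.symbols('x')

def optimize_nonneg(f):
    """Optimum of a concave (max) or convex (min) quadratic f(x) s.t. x >= 0."""
    interior = sp.solve(sp.diff(f, x), x)[0]   # unconstrained stationary point
    x_star = interior if interior >= 0 else sp.Integer(0)  # else corner x = 0
    return x_star, f.subs(x, x_star)

# Example 1: maximize y = -3x^2 - 7x + 2 subject to x >= 0.
print(optimize_nonneg(-3*x**2 - 7*x + 2))   # (0, 2): stationary point -7/6 < 0
# Example 2: minimize y = x^2 + 2x + 5 subject to x >= 0.
print(optimize_nonneg(x**2 + 2*x + 5))      # (0, 5): stationary point -1 < 0
```

In both examples the unconstrained stationary point is negative, so the constrained optimum is the corner solution $x = 0$.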
Two Variable Problems with Equality Constraints
A. Constrained Optimization by Substitution
• This method is mainly applicable to problems in which an objective function of only two variables is maximized or minimized subject to one constraint.
B. Lagrange Multiplier Method
• This method can be used for most types of constrained optimization problems.
Example
1. A firm faces the production function $Q = 12 K^{0.4} L^{0.4}$. Assume it can purchase K and L at prices per unit of 40 Birr and 5 Birr respectively, and that it has a budget of 800 Birr. Determine the amounts of K and L which maximize output (a worked sketch follows this list).
2. Suppose the utility function of the consumer is given by $U = 4XY - Y^2$ and the budget constraint is $2X + Y = 6$. Determine the amounts of X and Y which will optimize the total utility of the consumer.
3. Given the utility function of a consumer who consumes two goods x and y as $U(x, y) = (x + 2)(y + 1)$, and given that the price of good x is $P_x = 4$ Birr, that of good y is $P_y = 6$ Birr, and the consumer has a fixed budget of 130 Birr, determine the optimum values of x and y using the Lagrange multiplier method.
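For Example 1, output is maximized where the ratio of marginal products equals the input price ratio, $MP_K/MP_L = P_K/P_L$, together with the budget constraint; this is exactly the condition the Lagrange method below delivers. A minimal sympy sketch (symbol names are ours):

```python
import sympy as sp

K, L = sp.symbols('K L', positive=True)
Q = 12 * K**sp.Rational(2, 5) * L**sp.Rational(2, 5)   # exponents 0.4 = 2/5

# Lagrange condition MP_K / MP_L = P_K / P_L = 40/5, plus the budget line.
ratio_cond = sp.Eq(sp.diff(Q, K) / sp.diff(Q, L), sp.Rational(40, 5))
budget = sp.Eq(40*K + 5*L, 800)

sol = sp.solve([ratio_cond, budget], [K, L], dict=True)[0]
print(sol)              # {K: 10, L: 80}
print(Q.subs(sol))      # maximum output 12*800**(2/5)
```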
Lagrange Multiplier Method
• Optimize $Z = f(x, y)$ subject to $g(x, y) = P_x x + P_y y = M$.
Step 1. Rewrite the constraint function in its implicit form:
$M - P_x x - P_y y = 0$
Step 2. Multiply the constraint function by the Lagrange multiplier:
$\lambda (M - P_x x - P_y y)$
Step 3. Add the above term to the objective function, thereby obtaining the Lagrange function:
$L(x, y, \lambda) = Z(x, y) + \lambda (M - P_x x - P_y y)$

Necessary condition
• The first order condition is that the first order partial derivatives of the Lagrange function should be equal to zero:
$\frac{\partial L}{\partial x} = \frac{\partial Z}{\partial x} - \lambda P_x = 0$ ----------------------- (1)
$\frac{\partial L}{\partial y} = \frac{\partial Z}{\partial y} - \lambda P_y = 0$ ----------------------- (2)
$\frac{\partial L}{\partial \lambda} = M - P_x x - P_y y = 0$ ----------------------- (3)
From equations (1) and (2) we get $\lambda = \frac{Z_x}{P_x} = \frac{Z_y}{P_y}$, or $\frac{Z_x}{Z_y} = \frac{P_x}{P_y}$.
Sufficient condition
• To get the second order condition, we partially differentiate equations (1), (2) and (3). Representing the second direct partial derivatives by $Z_{xx}$ and $Z_{yy}$ and the second cross partial derivatives by $Z_{xy}$ and $Z_{yx}$, the bordered Hessian determinant, bordered with 0, $g_x$ and $g_y$, is:
$$|\bar{H}| = \begin{vmatrix} 0 & g_x & g_y \\ g_x & L_{xx} & L_{xy} \\ g_y & L_{yx} & L_{yy} \end{vmatrix} = \begin{vmatrix} 0 & -P_x & -P_y \\ -P_x & Z_{xx} & Z_{xy} \\ -P_y & Z_{yx} & Z_{yy} \end{vmatrix}$$
• Relative minimum: $d^2 z$ is positive definite subject to $dg = 0$ iff $|\bar{H}| < 0$
• Relative maximum: $d^2 z$ is negative definite subject to $dg = 0$ iff $|\bar{H}| > 0$
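To make this concrete, the sketch below (tooling again our assumption) applies the first order conditions and the bordered Hessian test to Example 2 above, $U = 4XY - Y^2$ subject to $2X + Y = 6$:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')
U = 4*x*y - y**2
Lg = U + lam * (6 - 2*x - y)          # Lagrange function

# Necessary condition: all first order partials of Lg vanish.
sol = sp.solve([sp.diff(Lg, v) for v in (x, y, lam)], [x, y, lam], dict=True)[0]
print(sol)                            # {x: 2, y: 2, lam: 4}

# Sufficient condition: bordered Hessian determinant > 0 for a relative maximum.
gx, gy = 2, 1                         # partials of g(x, y) = 2x + y
H = sp.Matrix([[0,  gx,                gy               ],
               [gx, sp.diff(Lg, x, x), sp.diff(Lg, x, y)],
               [gy, sp.diff(Lg, y, x), sp.diff(Lg, y, y)]])
print(H.det())                        # 24
```

Since $|\bar{H}| = 24 > 0$, the point $X = 2$, $Y = 2$ is a constrained maximum, with $U = 12$.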
Generalization
i. Optimization of the n-variable case
Optimize $Z = f(x_1, \ldots, x_n)$ subject to $g(x_1, \ldots, x_n) = c$
• The Lagrange function: $L = f(x_1, \ldots, x_n) + \lambda (c - g(x_1, \ldots, x_n))$
• The necessary condition for optimization: $L_\lambda = L_1 = \cdots = L_n = 0$
• The second order condition for optimization depends on the sign of $d^2 Z$ subject to $dg = g_1\,dx_1 + \cdots + g_n\,dx_n = 0$
• For a relative minimum, $d^2 Z$ is positive definite subject to $dg = 0$ iff $|\bar{H}_2|, |\bar{H}_3|, \ldots, |\bar{H}_n| < 0$
• For a relative maximum, $d^2 Z$ is negative definite subject to $dg = 0$ iff $|\bar{H}_2| > 0, |\bar{H}_3| < 0, |\bar{H}_4| > 0, \ldots$ (alternating in sign)
ii. Optimization with more than one equality constraint

Optimize $Z = f(x_1, x_2, x_3)$ subject to $g^1(x_1, x_2, x_3) = c^1$ and $g^2(x_1, x_2, x_3) = c^2$
The Lagrange function is: $L = f(x_1, x_2, x_3) + \lambda_1 (c^1 - g^1(x_1, x_2, x_3)) + \lambda_2 (c^2 - g^2(x_1, x_2, x_3))$
First order conditions for optimization (a small worked sketch follows this list):
$L_1 = f_1 - \lambda_1 g^1_1 - \lambda_2 g^2_1 = 0$
$L_2 = f_2 - \lambda_1 g^1_2 - \lambda_2 g^2_2 = 0$
$L_3 = f_3 - \lambda_1 g^1_3 - \lambda_2 g^2_3 = 0$
$L_{\lambda_1} = c^1 - g^1(x_1, x_2, x_3) = 0$
$L_{\lambda_2} = c^2 - g^2(x_1, x_2, x_3) = 0$
• The second order condition for optimization:
• For a maximum: $|\bar{H}_2| > 0$, $|\bar{H}_3| = |\bar{H}| < 0$
• For a minimum: $|\bar{H}_2| < 0$, $|\bar{H}_3| < 0$
• The second order condition for the case of n variables and k constraints:
• For a maximum (negative definiteness): $|\bar{H}_n|$ should have the same sign as $(-1)^n$, and the largest $n - k$ leading principal minors alternate in sign, i.e. $|\bar{H}_2| > 0, |\bar{H}_3| < 0, \ldots$
• For a minimum (positive definiteness): the largest $n - k$ leading principal minors, including $|\bar{H}_n|$, have the same sign as $(-1)^k$.
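To illustrate the two-constraint first order conditions, here is a small sympy sketch with a made-up objective and constraints ($f$, $g^1$, $g^2$ below are hypothetical, chosen only so the system solves cleanly):

```python
import sympy as sp

x1, x2, x3, l1, l2 = sp.symbols('x1 x2 x3 l1 l2')

f = x1 * x2 * x3                            # hypothetical objective
# Hypothetical constraints g1: x1 + x2 + x3 = 6 and g2: x1 - x2 = 0.
Lg = f + l1*(6 - (x1 + x2 + x3)) + l2*(0 - (x1 - x2))

# First order conditions: L1 = L2 = L3 = L_l1 = L_l2 = 0.
focs = [sp.diff(Lg, v) for v in (x1, x2, x3, l1, l2)]
for s in sp.solve(focs, [x1, x2, x3, l1, l2], dict=True):
    print(s)    # solutions include {x1: 2, x2: 2, x3: 2, l1: 4, l2: 0}
```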
Inequality Constraints, the Kuhn-Tucker Theorem, and Mixed Constraints
• An objective function subject to inequality constraints can be optimized using the methods of mathematical programming.
• If the objective function as well as the inequality constraints are linear, we use the method of linear programming.
• If the objective function or the inequality constraints are nonlinear, we apply the techniques of nonlinear programming to optimize the function.

Examples
1. Minimize $C = x^2 + y^2$ subject to $xy \geq 25$ and $x, y \geq 0$.
• Check whether the solution fulfills the Kuhn-Tucker conditions (the marginal conditions and the complementary slackness conditions).
2. Maximize $Z = 10x - x^2 + 180y - y^2$ subject to $x + y \leq 80$ and $x, y \geq 0$ (a numerical sketch follows).
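As a numerical cross-check on both examples, the sketch below uses scipy's SLSQP solver (an assumed tool; the chapter's Kuhn-Tucker analysis is the analytical counterpart). SLSQP takes inequality constraints in the form $g(x) \geq 0$:

```python
from scipy.optimize import minimize

# Example 1: minimize C = x^2 + y^2 subject to xy >= 25 and x, y >= 0.
res1 = minimize(lambda v: v[0]**2 + v[1]**2, x0=[6.0, 6.0], method='SLSQP',
                bounds=[(0, None), (0, None)],
                constraints=[{'type': 'ineq', 'fun': lambda v: v[0]*v[1] - 25}])
print(res1.x, res1.fun)     # approx x = y = 5, C = 50 (xy = 25 binds)

# Example 2: maximize Z by minimizing -Z, subject to x + y <= 80 and x, y >= 0.
res2 = minimize(lambda v: -(10*v[0] - v[0]**2 + 180*v[1] - v[1]**2),
                x0=[40.0, 40.0], method='SLSQP',
                bounds=[(0, None), (0, None)],
                constraints=[{'type': 'ineq', 'fun': lambda v: 80 - v[0] - v[1]}])
print(res2.x, -res2.fun)    # approx x = 0, y = 80, Z = 8000
```

In Example 2 both the resource constraint $x + y \leq 80$ and the non-negativity constraint on $x$ bind, which is exactly the case the complementary slackness conditions are designed to detect.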
