Lecture 2 - Optimization With Equality Constraints

This document discusses constrained optimization problems with equality constraints. It defines binding and non-binding constraints and explains how the Lagrangean technique can be used to solve problems with binding equality constraints. The document provides an example of maximizing a function subject to a single equality constraint and outlines the procedure for finding the solution. It also interprets the economic meaning of the Lagrange multiplier and discusses conditions under which a stationary point is a local optimum.


Lecture 2 – Optimization with equality constraints

Constrained optimization

The idea of constrained optimization is that the choice of one variable often affects the amount of another variable that can be used:
- if a firm employs more labour, this may affect the amount of capital it can rent, if it is restricted (constrained) by how much it can spend on inputs
- when a consumer maximizes utility, income provides the constraint
- when a government sets expenditure levels, it faces constraints set by its income from taxes

Note that the optimal quantities obtained under a constraint may differ from the quantities obtained without the constraint.
Binding and non-binding constraints

In the solution we say that a constraint is binding if the constraint function holds with equality (sometimes called an equality constraint). Otherwise the constraint is non-binding or slack (sometimes called an inequality constraint).
When the constraint is binding we can use the Lagrangean technique.
In general we do not know in advance whether a constraint will be binding.
Sometimes we can use our economic understanding to tell us whether a constraint is binding.
– Example: a non-satiated consumer will always spend all her income, so the budget constraint will be satisfied with equality.

When we are not able to say whether constraints are binding, we use a technique which is related to the Lagrangean, but which is slightly more general (Kuhn-Tucker programming).
Objectives and constraints - example
A firm chooses output x to maximize a profit function
$$\pi = -x^2 + 10x - 6$$
Because of a staff shortage, it cannot produce an output higher than x = 4.
What are the objective and constraint functions?
The objective function: $\pi = -x^2 + 10x - 6$
The constraint: $x \le 4$

[Figure: the profit function plotted against x, with the constraint x = 4 marked]

Note that without the constraint the optimum is x = 5, so the constraint is binding (but a constraint of, say, x ≤ 6 would not be).

[Figure: the same profit function, with the constrained optimum x = 4 and the unconstrained optimum x = 5 marked]
Sometimes in the following (and in the textbook) we denote:
$$\frac{\partial f(x, y)}{\partial x} = f_1(x, y), \qquad \frac{\partial f(x, y)}{\partial y} = f_2(x, y)$$
and more in general
$$\frac{\partial f(x_1, x_2, \dots, x_j, \dots, x_n)}{\partial x_j} = f_j(x_1, x_2, \dots, x_j, \dots, x_n)$$
Constrained optimization with two variables and one constraint
The problem is:
$$\max_{x,y} f(x, y) \quad \text{s.t.} \quad g(x, y) = c, \quad (x, y) \in S$$
To get the solution we write the Lagrangean:
$$L(x, y, \lambda) = f(x, y) - \lambda\,(g(x, y) - c)$$
where λ is a new variable.

The candidates for the solution are the stationary points of the Lagrangean, i.e. all points that satisfy the following system of equations:
$$f_1(x, y) - \lambda g_1(x, y) = 0$$
$$f_2(x, y) - \lambda g_2(x, y) = 0$$
$$g(x, y) = c$$
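As a concrete sketch, this three-equation system can be solved symbolically, e.g. with sympy. The functions below, f(x, y) = xy and g(x, y) = x + y with c = 10, are an illustrative choice of mine, not from the slides:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

# Illustrative choice: f(x, y) = x*y, constraint g(x, y) = x + y = 10
f = x * y
g = x + y
c = 10

# Lagrangean L = f - lambda*(g - c)
L = f - lam * (g - c)

# Stationary points: set all first derivatives of L to zero
eqs = [sp.diff(L, v) for v in (x, y, lam)]
sol = sp.solve(eqs, (x, y, lam), dict=True)
print(sol)  # x = 5, y = 5, lambda = 5
```

Differentiating with respect to λ reproduces the constraint, so solving the system handles all three conditions at once.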
Intuition about the Lagrangean

[Figure: level curves of f and the constraint g(x, y) = c, tangent at the solution (x*, y*)]

At the solution the level curve of f is tangent to the constraint. Using the implicit function theorem, the two slopes are
$$\frac{dy}{dx} = -\frac{f_1(x^*, y^*)}{f_2(x^*, y^*)} = -\frac{g_1(x^*, y^*)}{g_2(x^*, y^*)}$$
so
$$\frac{f_1(x^*, y^*)}{f_2(x^*, y^*)} = \frac{g_1(x^*, y^*)}{g_2(x^*, y^*)}$$
and, rearranging,
$$\frac{f_1(x^*, y^*)}{g_1(x^*, y^*)} = \frac{f_2(x^*, y^*)}{g_2(x^*, y^*)}$$
Defining λ as the common value of these ratios,
$$\frac{f_1(x^*, y^*)}{g_1(x^*, y^*)} = \frac{f_2(x^*, y^*)}{g_2(x^*, y^*)} = \lambda$$
From $f_1(x^*, y^*)/g_1(x^*, y^*) = \lambda$ we get $f_1(x^*, y^*) - \lambda g_1(x^*, y^*) = 0$.
From $f_2(x^*, y^*)/g_2(x^*, y^*) = \lambda$ we get $f_2(x^*, y^*) - \lambda g_2(x^*, y^*) = 0$.
Moreover, the solution has to satisfy the constraint $g(x^*, y^*) = c$.
Then the solution has to satisfy the following three equations:
$$f_1(x^*, y^*) - \lambda g_1(x^*, y^*) = 0$$
$$f_2(x^*, y^*) - \lambda g_2(x^*, y^*) = 0$$
$$g(x^*, y^*) = c$$
These equations set the derivatives of the Lagrangean
$$L(x, y, \lambda) = f(x, y) - \lambda\,(g(x, y) - c)$$
with respect to x, y and λ equal to zero.
The first two equations are known as the first-order conditions.

Proposition (necessary conditions)
Let f and g be continuously differentiable functions of two variables defined on the set S, and let c be a number. Suppose that:
• $(x^*, y^*)$ is an interior point of S that solves the problem
$$\max_{x,y} f(x, y) \quad \text{s.t.} \quad g(x, y) = c, \quad (x, y) \in S$$
• either $g_1(x^*, y^*) \neq 0$ or $g_2(x^*, y^*) \neq 0$.
Then there is a unique number λ such that $(x^*, y^*)$ is a stationary point of the Lagrangean
$$L(x, y) = f(x, y) - \lambda\,(g(x, y) - c)$$
That is, $(x^*, y^*)$ satisfies the first-order conditions
$$L_1(x^*, y^*) = f_1(x^*, y^*) - \lambda g_1(x^*, y^*) = 0$$
$$L_2(x^*, y^*) = f_2(x^*, y^*) - \lambda g_2(x^*, y^*) = 0$$
and $g(x^*, y^*) = c$.
Procedure for the solution
1. Find all stationary points of the Lagrangean
2. Find all points (x, y) that satisfy $g_1(x, y) = 0$, $g_2(x, y) = 0$ and $g(x, y) = c$
3. If the set S has boundary points, find all boundary points (x, y) that satisfy $g(x, y) = c$
4. The points you have found at which f(x, y) is largest are the maximizers of f(x, y)
Example 1
$$\max_{x,y} x^a y^b \quad \text{s.t.} \quad x + y = c$$
where $a, b > 0$ and $x^a y^b$ is defined for $x \ge 0$, $y \ge 0$.

1. $L(x, y, \lambda) = x^a y^b - \lambda\,(x + y - c)$
$$a x^{a-1} y^b - \lambda = 0$$
$$b x^a y^{b-1} - \lambda = 0$$
$$x + y = c$$
Solving:
$$x = \frac{ca}{a+b}, \qquad y = \frac{cb}{a+b}, \qquad \lambda = \frac{a^a b^b}{(a+b)^{a+b-1}} \, c^{a+b-1}$$
The value of the objective function at the stationary point is:
$$x^a y^b = \frac{a^a b^b}{(a+b)^{a+b}} \, c^{a+b} > 0$$
2. $g_1(x, y) = 1$ and $g_2(x, y) = 1$, so there are no values for which $g_1(x, y) = 0$ and $g_2(x, y) = 0$.

3. The boundary of the set on which the objective function is defined is the set of points (x, y) with either x = 0 or y = 0. At every such point the value of the objective function is 0.

4. Then the solution of the problem is
$$x = \frac{ca}{a+b}, \qquad y = \frac{cb}{a+b}$$
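The closed-form solution of Example 1 can be checked symbolically. A minimal sketch with illustrative numeric values a = 2, b = 3, c = 10 (the slides keep these general):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', positive=True)
a, b, c = 2, 3, 10   # illustrative values; the slides keep a, b, c general

# Lagrangean of max x**a * y**b s.t. x + y = c
L = x**a * y**b - lam * (x + y - c)
sol = sp.solve([sp.diff(L, x), sp.diff(L, y), x + y - c],
               (x, y, lam), dict=True)[0]
print(sol)  # x = 4, y = 6, lambda = 1728
```

The positivity assumption on the symbols discards the boundary stationary points with x = 0 or y = 0, matching step 3 of the procedure.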
Interpretation of λ
$$\frac{\partial f^*(c)}{\partial c} = \lambda^*(c)$$
The value of the Lagrange multiplier at the solution of the problem is equal to the rate of change in the maximal value of the objective function as the constraint is relaxed.
Example 2:
$$\max_x x^2 \quad \text{s.t.} \quad x = c$$
The solution is x = c, so the maximized value of the objective function is $c^2$. Its derivative with respect to c is 2c.
Now consider the Lagrangean
$$L(x) = x^2 - \lambda(x - c)$$
The FOC is $2x - \lambda = 0$. Then $x = c$ and $\lambda = 2c$ satisfy the FOC and the constraint.
Note that λ is equal to the derivative of the maximized value of the function with respect to c.
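This envelope property of λ in Example 2 can be verified symbolically, as a quick sketch:

```python
import sympy as sp

x, lam, c = sp.symbols('x lambda c', real=True)

# Example 2: maximize x**2 subject to x = c
L = x**2 - lam * (x - c)
sol = sp.solve([sp.diff(L, x), x - c], (x, lam), dict=True)[0]

f_star = (x**2).subs(x, sol[x])   # maximized value: c**2
print(sol[lam], sp.diff(f_star, c))  # both equal 2*c
```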
From example 1:
$$x = \frac{ca}{a+b}, \qquad y = \frac{cb}{a+b}, \qquad \lambda = \frac{a^a b^b}{(a+b)^{a+b-1}} \, c^{a+b-1}$$
the maximized value of the objective function is:
$$x^a y^b = \frac{a^a b^b}{(a+b)^{a+b}} \, c^{a+b}$$
Its derivative with respect to c is
$$\frac{a^a b^b}{(a+b)^{a+b-1}} \, c^{a+b-1}, \quad \text{i.e. } \lambda$$
Conditions under which a stationary point is a local optimum
$$\max_{x,y} f(x, y) \quad \text{s.t.} \quad g(x, y) = c$$
$$L(x, y) = f(x, y) - \lambda\,(g(x, y) - c)$$
Bordered Hessian of the Lagrangean:
$$H_b(x, y, \lambda) = \begin{pmatrix} 0 & g_1(x, y) & g_2(x, y) \\ g_1(x, y) & f_{11}(x, y) - \lambda g_{11}(x, y) & f_{12}(x, y) - \lambda g_{12}(x, y) \\ g_2(x, y) & f_{21}(x, y) - \lambda g_{21}(x, y) & f_{22}(x, y) - \lambda g_{22}(x, y) \end{pmatrix}$$
Suppose that there exists a value λ* such that (x*, y*) is a stationary point of the Lagrangean. To check whether it is a local maximum:
1) Compute the bordered Hessian at the values (x*, y*, λ*), i.e. $H_b(x^*, y^*, \lambda^*)$
2) Compute its determinant, i.e. $|H_b(x^*, y^*, \lambda^*)|$
3) If $|H_b(x^*, y^*, \lambda^*)| > 0$ then (x*, y*) is a local maximizer

Note: if $|H_b(x^*, y^*, \lambda^*)| < 0$ then (x*, y*) is a local minimizer.
Example 3
$$\max_{x,y} x^3 y \quad \text{subject to} \quad x + y = 6, \quad x, y > 0$$
We simplify the problem using a log transformation:
$$\max_{x,y} 3 \ln x + \ln y \quad \text{subject to} \quad x + y = 6$$
$$L(x, y) = 3 \ln x + \ln y - \lambda\,(x + y - 6)$$
The FOCs are:
$$\frac{3}{x} - \lambda = 0, \qquad \frac{1}{y} - \lambda = 0, \qquad x + y = 6$$
The solution is $x = 4.5$, $y = 1.5$, $\lambda = \frac{2}{3}$
The bordered Hessian of the Lagrangean is
$$H_b(x, y, \lambda) = \begin{pmatrix} 0 & 1 & 1 \\ 1 & -\frac{3}{x^2} & 0 \\ 1 & 0 & -\frac{1}{y^2} \end{pmatrix}, \qquad H_b\!\left(4.5,\, 1.5,\, \tfrac{2}{3}\right) = \begin{pmatrix} 0 & 1 & 1 \\ 1 & -\frac{3}{4.5^2} & 0 \\ 1 & 0 & -\frac{1}{1.5^2} \end{pmatrix}$$
The determinant is
$$\frac{3}{4.5^2} + \frac{1}{1.5^2} > 0,$$
so the solution is a local maximizer.
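The sign of this determinant can be confirmed numerically. A sketch with numpy (the constraint is linear, so the second derivatives of g vanish and only the Hessian of the objective appears inside the border):

```python
import numpy as np

x, y = 4.5, 1.5

# Bordered Hessian of L = 3*ln(x) + ln(y) - lambda*(x + y - 6)
Hb = np.array([
    [0.0, 1.0,         1.0],
    [1.0, -3.0 / x**2, 0.0],
    [1.0, 0.0,         -1.0 / y**2],
])

det = np.linalg.det(Hb)
print(det)  # 3/4.5**2 + 1/1.5**2, about 0.593 > 0, so a local maximizer
```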
Conditions under which a stationary point is a global optimum

• Suppose that f and g are continuously differentiable functions defined on an open convex subset S of two-dimensional space, and
• suppose that there exists a number λ* such that (x*, y*) is an interior point of S that is a stationary point of the Lagrangean
$$L(x, y) = f(x, y) - \lambda^*(g(x, y) - c).$$
• Suppose further that $g(x^*, y^*) = c$.
• Then if L is concave, (x*, y*) solves the problem
$$\max_{x,y} f(x, y) \quad \text{s.t.} \quad g(x, y) = c$$
Example 4
Consider example 3:
$$\max_{x,y} x^3 y \quad \text{subject to} \quad x + y = 6, \quad x, y > 0$$
We found that the solution of the FOCs, $x = 4.5$, $y = 1.5$, $\lambda = \frac{2}{3}$, is a local maximizer. Is it a global maximizer?
For a global maximizer we need the Lagrangean to be concave:
$$L(x, y) = 3 \ln x + \ln y - \lambda\,(x + y - 6)$$
Given that the constraint is linear, we only need to check the objective function. The Hessian of the objective function is
$$H = \begin{pmatrix} -\frac{3}{x^2} & 0 \\ 0 & -\frac{1}{y^2} \end{pmatrix}$$
The leading principal minors are:
$$D_1 = -\frac{3}{x^2} < 0 \quad \text{and} \quad D_2 = \frac{3}{x^2 y^2} > 0 \quad \text{for all } x, y > 0$$
Then the Hessian is negative definite, so the objective function is strictly concave, and the point $x = 4.5$, $y = 1.5$ is a global maximum.
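The leading-principal-minor test for negative definiteness (minors alternate in sign, starting negative) can be sketched as a small helper. The function names below are my own, not library routines:

```python
import numpy as np

def leading_principal_minors(H):
    """Determinants of the upper-left 1x1, 2x2, ... submatrices."""
    n = H.shape[0]
    return [np.linalg.det(H[:k, :k]) for k in range(1, n + 1)]

def is_negative_definite(H):
    """Negative definite iff minor k has the sign of (-1)**k for every k."""
    return all((-1) ** k * d > 0
               for k, d in enumerate(leading_principal_minors(H), start=1))

# Hessian of 3*ln(x) + ln(y), evaluated at an arbitrary point with x, y > 0
x, y = 4.5, 1.5
H = np.array([[-3.0 / x**2, 0.0],
              [0.0, -1.0 / y**2]])
print(is_negative_definite(H))  # True
```

Evaluating at one point is only illustrative; the slide's argument holds symbolically for all x, y > 0.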
Optimization with equality constraints: n variables, m constraints: necessary conditions

Let f and $g_1, \dots, g_m$ be continuously differentiable functions of n variables defined on the set S, let $c_j$ for $j = 1, \dots, m$ be numbers, and suppose that x* is an interior point of S that solves the problem:
$$\max_x f(x) \quad \text{subject to} \quad g_j(x) = c_j \text{ for } j = 1, \dots, m$$
Suppose also that the rank of the Jacobian matrix is m:
$$J = \begin{pmatrix} \frac{\partial g_1}{\partial x_1} & \cdots & \frac{\partial g_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial g_m}{\partial x_1} & \cdots & \frac{\partial g_m}{\partial x_n} \end{pmatrix}$$
Then there are unique numbers $\lambda_1, \dots, \lambda_m$ such that x* is a stationary point of the Lagrangean function L defined by
$$L(x) = f(x) - \sum_{j=1}^{m} \lambda_j \,(g_j(x) - c_j)$$
That is, x* satisfies the first-order conditions:
$$L_i'(x^*) = f_i'(x^*) - \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i}(x^*) = 0 \quad \text{for } i = 1, \dots, n.$$
In addition, $g_j(x^*) = c_j$ for $j = 1, \dots, m$.
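A sketch of the m-constraint case, on an illustrative problem of my own choosing (maximize $-(x_1^2 + x_2^2 + x_3^2)$ subject to $x_1 + x_2 + x_3 = 3$ and $x_1 - x_2 = 0$), with one multiplier per constraint:

```python
import sympy as sp

x1, x2, x3, l1, l2 = sp.symbols('x1 x2 x3 lambda1 lambda2', real=True)

# Illustrative problem, not from the slides
f = -(x1**2 + x2**2 + x3**2)
g1, c1 = x1 + x2 + x3, 3
g2, c2 = x1 - x2, 0

# L = f - sum_j lambda_j * (g_j - c_j)
L = f - l1 * (g1 - c1) - l2 * (g2 - c2)
eqs = [sp.diff(L, v) for v in (x1, x2, x3)] + [g1 - c1, g2 - c2]
sol = sp.solve(eqs, (x1, x2, x3, l1, l2), dict=True)[0]
print(sol)  # x1 = x2 = x3 = 1, lambda1 = -2, lambda2 = 0
```

Here $\lambda_2 = 0$: the second constraint happens not to bind at the optimum of the relaxed problem, so relaxing it has no first-order effect on the maximal value.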
Conditions under which necessary conditions are sufficient

Suppose that f and $g_j$ for $j = 1, \dots, m$ are continuously differentiable functions defined on an open convex subset S of n-dimensional space, and let $x^* \in S$ be an interior stationary point of the Lagrangean:
$$L(x) = f(x) - \sum_{j=1}^{m} \lambda_j^* \,(g_j(x) - c_j)$$
Suppose further that $g_j(x^*) = c_j$ for $j = 1, \dots, m$.
Then if L is concave, x* solves the constrained maximization problem.
Interpretation of λ

Consider the problem
$$\max_x f(x) \quad \text{subject to} \quad g_j(x) = c_j \text{ for } j = 1, \dots, m$$
Let $x^*(c)$ be the solution of this problem, where $c = (c_1, \dots, c_m)$, and let $f^*(c) = f(x^*(c))$. Then we have
$$\frac{\partial f^*(c)}{\partial c_j} = \lambda_j(c) \quad \text{for } j = 1, \dots, m,$$
where $\lambda_j$ is the value of the Lagrange multiplier on the jth constraint at the solution of the problem.
The value of the Lagrange multiplier on the jth constraint at the solution of the problem is equal to the rate of change in the maximal value of the objective function as the jth constraint is relaxed.
If the jth constraint arises because of a limit on the amount of some resource, then we refer to $\lambda_j(c)$ as the shadow price of the jth resource.
Quasi-concave functions

Let f(x) be defined on the set S. Take any pair $x, x' \in S$ with $x \neq x'$, any $\lambda \in (0, 1)$, and let $x'' = \lambda x + (1 - \lambda)x'$:
- If $f(x'') \ge \min(f(x), f(x'))$ for all such pairs, then f(x) is quasi-concave
- If $f(x'') > \min(f(x), f(x'))$ for all such pairs, then f(x) is strictly quasi-concave
Note that these conditions must hold even if $f(x) = f(x')$.
A concave function is also quasi-concave, but the opposite is not true.
If $f(x) > f(x')$ implies $f(\lambda x + (1 - \lambda)x') > f(x')$, the function is explicitly quasi-concave.
f is quasi-convex if:
$$f(x'') = f(\lambda x + (1 - \lambda)x') \le \max(f(x), f(x'))$$
Note that a convex function is also quasi-convex, but the opposite is not true.

[Figures: a quasi-concave function that is not concave (bottom left), and a quasi-convex function that is not convex]
The importance of concavity and quasi-concavity
Consider the problem
$$\max_x f(x) \quad \text{subject to} \quad g_j(x) = c_j \text{ for } j = 1, \dots, m$$
and let x* be a stationary point of the Lagrangean.

If
1. f(x) is explicitly quasi-concave, and
2. the constraint set is convex,
then x* is a global maximum.

If
1. f(x) is strictly quasi-concave, and
2. the constraint set is convex,
then x* is the unique global maximum.
Convex sets
A convex set X is such that for any two elements of the set, x and x′, any convex combination of them is also a member of the set.

[Figure: a convex set containing the segment between two points x and x′]

More formally, X is convex if for all x and $x' \in X$, and $0 \le \lambda \le 1$,
$$x'' = \lambda x + (1 - \lambda)x' \in X.$$
Sometimes X is described as strictly convex if for any $0 < \lambda < 1$, x″ is in the interior of X (i.e. not on the edges).

[Figure: a set that is convex but not strictly convex]
If, for any two points in the set S, the line segment connecting these two points lies entirely in S, then S is a convex set.

[Figures: two convex sets, the budget set $p_1 x_1 + p_2 x_2 \le m$ and the upper level set $U(x) \ge \bar{U}$]
Non-convex sets

[Figure: the lower level set $U(x) \le \bar{U}$ of a quasi-concave utility function U, which is not convex]
A different definition of quasi-concavity
Let f be a multivariate function defined on the set S.
f is quasi-concave if, for any number a, the set of points for which $f(x) \ge a$ is convex.

For any real number a, the set
$$P_a = \{x \in S : f(x) \ge a\}$$
is called the upper level set of f for a.

The multivariate function f defined on a convex set S is quasi-concave if every upper level set of f is convex, that is, if $P_a = \{x \in S : f(x) \ge a\}$ is convex for every value of a.
Example 5
1. $f(x, y) = x^2 + y^2$.
The upper level set of f for a is the set of pairs (x, y) such that $x^2 + y^2 \ge a$.
Thus for $a > 0$ it is the set of points outside a disk of radius $\sqrt{a}$; this upper level set is not convex, so f is not quasi-concave.

2. $f(x, y) = -x^2 - y^2$.
The upper level set of f for a is the set of pairs (x, y) such that $-x^2 - y^2 \ge a$, or $x^2 + y^2 \le -a$.
Thus for $a > 0$ the upper level set $P_a$ is empty, and for $a < 0$ it is the set of points inside a disk of radius $\sqrt{-a}$. In both cases $P_a$ is convex, so f is quasi-concave.
Checking quasi-concavity
To determine whether a twice-differentiable function is quasi-concave or quasi-convex, we can examine the determinants of the bordered Hessian of the function, defined as follows:
$$B = \begin{pmatrix} 0 & f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1(x) & f_{11}(x) & f_{12}(x) & \cdots & f_{1n}(x) \\ f_2(x) & f_{21}(x) & f_{22}(x) & \cdots & f_{2n}(x) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ f_n(x) & f_{n1}(x) & f_{n2}(x) & \cdots & f_{nn}(x) \end{pmatrix}$$
We have to compute the determinants of the leading principal minors:
$$|B_1| = \begin{vmatrix} 0 & f_1(x) \\ f_1(x) & f_{11}(x) \end{vmatrix}, \qquad |B_2| = \begin{vmatrix} 0 & f_1(x) & f_2(x) \\ f_1(x) & f_{11}(x) & f_{12}(x) \\ f_2(x) & f_{21}(x) & f_{22}(x) \end{vmatrix}, \quad \dots$$
If f is quasi-concave then
$|B_k| \ge 0$ if k is even, and
$|B_k| \le 0$ if k is odd.

If $|B_k| > 0$ for k even and $|B_k| < 0$ for k odd, then f is strictly quasi-concave.

If f is quasi-convex then $|B_k| \le 0$ for all k.
If $|B_k| < 0$ for all k, then f is strictly quasi-convex.

These conditions must hold for all x in the set where the function f is defined.
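As a sketch of this test, the bordered-Hessian minors can be computed symbolically. The function $f(x, y) = xy$ on $x, y > 0$ is an illustrative choice of mine (quasi-concave on the positive orthant, though not concave):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = x * y   # illustrative choice, not from the slides

f1, f2 = sp.diff(f, x), sp.diff(f, y)
f11, f12, f22 = sp.diff(f, x, x), sp.diff(f, x, y), sp.diff(f, y, y)

# Bordered Hessian: gradient as the border, Hessian inside
B = sp.Matrix([[0,  f1,  f2],
               [f1, f11, f12],
               [f2, f12, f22]])

B1 = B[:2, :2].det()   # |B1| = -f1**2 = -y**2 <= 0 (odd minor)
B2 = B.det()           # |B2| = 2*x*y  >= 0 for x, y > 0 (even minor)
print(B1, B2)
```

The signs alternate as required ($|B_1| < 0$, $|B_2| > 0$ for x, y > 0), consistent with f being strictly quasi-concave there.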
Envelope theorem: unconstrained problem
Let f(x, r) be a continuously differentiable function where x is an n-vector of variables and r is a k-vector of parameters.
The maximal value of the function is given by $f(x^*(r), r)$, where $x^*(r)$ is the vector of variables x that maximize f, and which is a function of r.
Note that we can write $f(x^*(r), r)$ as $f^*(r)$ (because in this function only the parameters appear).
If the solution of the maximization problem is a continuously differentiable function of r, then:
$$\frac{\partial f^*(r)}{\partial r_i} = \frac{\partial f(x, r)}{\partial r_i} \quad \text{evaluated at } x^*(r)$$
The change in the maximal value of the function as a parameter changes is the change caused by the direct impact of the parameter on the function, holding the value of x fixed at its optimal value; the indirect effect, resulting from the change in the optimal value of x caused by a change in the parameter, is zero.
Example 6
$$\max_x \; p \ln x - cx$$
The FOC is
$$\frac{p}{x} - c = 0$$
then $x^* = \frac{p}{c}$, and
$$f^*(p, c) = p \ln\frac{p}{c} - p$$
The effect of a change of the parameter c on the maximized value is:
$$\frac{\partial f^*(p, c)}{\partial c} = -\frac{p}{c}$$
Now consider the derivative of the objective function with respect to c, evaluated at the solution x*:
$$\frac{\partial (p \ln x - cx)}{\partial c} = -x$$
Evaluating it at $x^* = \frac{p}{c}$ we get $-\frac{p}{c}$, as the envelope theorem states.
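Example 6 can be verified symbolically, as a quick sketch of the unconstrained envelope theorem:

```python
import sympy as sp

x, p, c = sp.symbols('x p c', positive=True)
f = p * sp.log(x) - c * x

x_star = sp.solve(sp.diff(f, x), x)[0]    # p/c
f_star = f.subs(x, x_star)                # p*log(p/c) - p

direct = sp.diff(f, c).subs(x, x_star)    # -x at x*, i.e. -p/c (x held fixed)
total = sp.diff(f_star, c)                # full derivative of the maximized value
print(sp.simplify(total - direct))  # 0: the two derivatives coincide
```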
Envelope theorem: constrained problems

Let f(x, r) be a continuously differentiable function where x is an n-vector of variables and r is a k-vector of parameters.
The maximal value of the function is given by $f(x^*(r), r)$, where $x^*(r)$ is the vector of variables x that maximize f subject to the constraints, and which is a function of r. Note that we can write $f(x^*(r), r)$ as $f^*(r)$.
Then
$$\frac{\partial f^*(r)}{\partial r_i} = \frac{\partial L(x, r)}{\partial r_i} \quad \text{evaluated at the solution } x^*(r),$$
where the function L is the Lagrangean of the problem.
Example 7
$$\max_{x,y} xy \quad \text{s.t.} \quad x + y = B$$
$$L(x, y, \lambda) = xy - \lambda\,(x + y - B)$$
We solve:
$$y - \lambda = 0, \qquad x - \lambda = 0, \qquad x + y = B$$
then
$$x^* = y^* = \lambda^* = \frac{B}{2} \quad \text{and} \quad f^*(B) = \frac{B^2}{4}$$
The effect of a change of the parameter B on the maximized value is:
$$\frac{df^*(B)}{dB} = \frac{B}{2}$$
Now consider the derivative of the Lagrangean with respect to B, evaluated at the solution:
$$\frac{\partial}{\partial B}\left(xy - \lambda\,(x + y - B)\right) = \lambda = \frac{B}{2}$$
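Example 7 can likewise be verified symbolically, as a sketch of the constrained envelope theorem:

```python
import sympy as sp

x, y, lam, B = sp.symbols('x y lambda B', positive=True)

L = x * y - lam * (x + y - B)
sol = sp.solve([sp.diff(L, x), sp.diff(L, y), x + y - B],
               (x, y, lam), dict=True)[0]

f_star = (x * y).subs(sol)           # maximized value: B**2/4
envelope = sp.diff(L, B).subs(sol)   # dL/dB = lambda at the solution: B/2
print(sp.diff(f_star, B), envelope)  # both B/2
```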
