
Problem Set 2: Solutions

Damien Klossner
damien.klossner@epfl.ch
Extranef 128

October 18, 2018

Exercise 1
There are m risky assets. The mean and covariance matrix of the returns R̃ = (R₁, R₂, ..., R_m)′ are

E[R̃] = α̃ = (α₁, α₂, ..., α_m)′ (1)

Cov(R̃) = Σ = [Σ₁₁ ⋯ Σ₁m; ⋮ ⋱ ⋮; Σm₁ ⋯ Σmm] (2)

For a given target mean return α₀, choose the portfolio weights w̃ = (w₁, w₂, ..., w_m)′ to

• minimize portfolio variance ½ w̃′Σw̃

• subject to w̃′α̃ = α₀.

Solution

Note that since w̃ are portfolio weights, and there are only m risky assets we can invest in, we need to impose the additional constraint that portfolio weights sum to 1, i.e. w̃′ẽ = 1, with ẽ denoting an m × 1 vector with each component equal to 1.
First, set up the Lagrangian

L(w̃, λ, µ) = ½ w̃′Σw̃ − λ(w̃′α̃ − α₀) − µ(w̃′ẽ − 1).
Second, find the critical point(s):

∂L(w̃, λ, µ)/∂λ = α̃′w̃ − α₀ = 0
∂L(w̃, λ, µ)/∂µ = ẽ′w̃ − 1 = 0
∂L(w̃, λ, µ)/∂w̃ = Σw̃ − λα̃ − µẽ = 0.

From the third equation, we have

w̃* = Σ⁻¹(λ*α̃ + µ*ẽ). (3)

We find the Lagrange multipliers by substituting (3) into the constraints.

α̃′w̃ = λ α̃′Σ⁻¹α̃ + µ α̃′Σ⁻¹ẽ = α₀
ẽ′w̃ = λ ẽ′Σ⁻¹α̃ + µ ẽ′Σ⁻¹ẽ = 1

or, in matrix form,

[A B; B C] [λ; µ] = [α₀; 1]

with

A = α̃′Σ⁻¹α̃
B = α̃′Σ⁻¹ẽ
C = ẽ′Σ⁻¹ẽ.

Therefore, letting ∆ = AC − B², we have

[λ; µ] = [A B; B C]⁻¹ [α₀; 1] = (1/∆) [C −B; −B A] [α₀; 1].

Hence

λ* = (Cα₀ − B)/∆
µ* = (A − Bα₀)/∆.

 
We still need to show that the matrix [A B; B C] is indeed invertible, i.e. that ∆ ≠ 0. Actually, as long as α̃ is not spanned by ẽ, we can show that ∆ > 0. First note that Σ must be positive definite: w̃′Σw̃ equals the variance of the portfolio w̃, and as all portfolios of risky assets have a return with strictly positive variance (i.e., a portfolio of risky assets remains risky), Σ is indeed positive definite. Second, the positive definiteness of Σ implies that Σ⁻¹ exists and is also positive definite.¹ This, in turn, implies that A > 0 and C > 0. Finally, write

A∆ = A(AC − B²) = (Bα̃ − Aẽ)′Σ⁻¹(Bα̃ − Aẽ) > 0.

Since A > 0 we conclude that ∆ > 0.


Finally, a few words about sufficiency. Let

L*(w̃) ≡ L(w̃, λ*, µ*) = ½ w̃′Σw̃ − λ*(w̃′α̃ − α₀) − µ*(w̃′ẽ − 1)

so that D²L* = Σ. L* is strictly convex.² Therefore, the stationary point which we have found is the unique minimizer of L*:

½ w̃*′Σw̃* − λ*(w̃*′α̃ − α₀) − µ*(w̃*′ẽ − 1) ≤ ½ w̃′Σw̃ − λ*(w̃′α̃ − α₀) − µ*(w̃′ẽ − 1), ∀w̃.

Since w̃* satisfies the constraints, this inequality is equivalent to

½ w̃*′Σw̃* ≤ ½ w̃′Σw̃ − λ*(w̃′α̃ − α₀) − µ*(w̃′ẽ − 1), ∀w̃.

¹ How do we know that Σ⁻¹ exists? Suppose Σw̃ = 0. Then w̃′Σw̃ = 0. Since Σ is positive definite, we see that w̃ = 0. Thus, Σ is nonsingular. Next, note that Σw̃ = λw̃ implies Σ⁻¹w̃ = (1/λ)w̃, i.e. the eigenvalues of Σ and Σ⁻¹ are reciprocals. Hence, if Σ is positive definite (all its eigenvalues are strictly positive), then so is Σ⁻¹. (Source: KC Border (2016), More than you wanted to know about quadratic forms.)
² This is just a special case of the fact that if the objective function f(x) is convex, and the constraint g(x) is concave, then the function L(x) = f(x) − λ*g(x) is convex. Recall slide 27 of lecture 3.

Hence, we conclude that ½ w̃*′Σw̃* ≤ ½ w̃′Σw̃ for all w̃ such that w̃′α̃ = α₀ and w̃′ẽ = 1.
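As a numerical sanity check, the closed-form solution can be verified in Python. The asset data below are illustrative assumptions of mine, not part of the exercise; the sketch builds A, B, C, ∆, the multipliers, and w̃*, then confirms both constraints and cross-checks against a direct solve of the first-order system.

```python
import numpy as np

# Illustrative data (m = 3 risky assets); assumptions, not from the exercise.
alpha = np.array([0.05, 0.08, 0.12])      # mean returns, alpha~
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])    # covariance matrix (positive definite)
e = np.ones(3)
alpha0 = 0.10                             # target mean return

Si = np.linalg.inv(Sigma)
A = alpha @ Si @ alpha
B = alpha @ Si @ e
C = e @ Si @ e
Delta = A * C - B**2                      # > 0 when alpha~ is not spanned by e

lam = (C * alpha0 - B) / Delta            # lambda* = (C alpha0 - B) / Delta
mu = (A - B * alpha0) / Delta             # mu*     = (A - B alpha0) / Delta
w = Si @ (lam * alpha + mu * e)           # w* = Sigma^{-1}(lambda* alpha~ + mu* e)

# Both constraints hold at the closed-form solution.
assert np.isclose(w @ alpha, alpha0) and np.isclose(w @ e, 1.0)

# Cross-check: solve the stacked first-order system directly.
top = np.hstack([Sigma, -alpha[:, None], -e[:, None]])   # Sigma w - lam alpha~ - mu e = 0
KKT = np.vstack([top, np.r_[alpha, 0, 0][None, :], np.r_[e, 0, 0][None, :]])
sol = np.linalg.solve(KKT, np.r_[np.zeros(3), alpha0, 1.0])
assert np.allclose(sol[:3], w)
```

Because the objective is strictly convex on the constraint set, this stationary point is the unique minimizer, matching the sufficiency argument above.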

Exercise 2
Suppose that a consumer has utility function U(x, y) = Ax^α y^(1−α), where x and y are two commodities with prices p_x and p_y, and α is a constant such that 0 < α < 1. The consumer's budget constraint is p_x x + p_y y = m. Find a stationary point of U subject to the budget constraint and verify whether it is a max or a min using the condition on the bordered Hessian.

Solution

First, set up the Lagrangian

L(x, y, µ) = Ax^α y^(1−α) + µ(m − p_x x − p_y y).

Second, find the critical point(s):

∂L(x, y, µ)/∂µ = m − p_x x − p_y y = 0
∂L(x, y, µ)/∂x = Aαx^(α−1) y^(1−α) − µp_x = 0
∂L(x, y, µ)/∂y = A(1−α)x^α y^(−α) − µp_y = 0.

From the second and third equations, we find

((1−α)/α)(x/y) = p_y/p_x

p_y y = ((1−α)/α) p_x x. (4)
α

Substituting into the budget constraint,

m = p_x x + ((1−α)/α) p_x x = p_x x / α
x* = αm / p_x.

Substituting back into (4) gives

p_y y = ((1−α)/α) p_x (αm / p_x)
y* = (1−α)m / p_y.

We now consider the second-order conditions. Throughout the analysis, we assume

p_x, p_y, m, A > 0.

This just says that prices are strictly positive, that the agent is endowed with a strictly positive budget (such that the optimal consumption levels x* and y* are strictly positive), and that positive levels of consumption yield positive utility for the agent.
Intuitively, we expect our stationary point to be a maximizer: levels of consumption for each good which maximize the utility of the agent subject to his budget constraint. In fact, we will prove the following result.³

Proposition 1 Let f and h be C² functions on R². Consider the problem of maximizing f on the constraint set C_h = {(x, y) : h(x, y) = c}. Form the Lagrangian

L(x, y, µ) = f(x, y) + µ(c − h(x, y)).

Suppose that (x*, y*, µ*) satisfies

(a) ∂L/∂x = 0, ∂L/∂y = 0, ∂L/∂µ = 0 at (x*, y*, µ*), and

(b) det [0, ∂h/∂x, ∂h/∂y; ∂h/∂x, ∂²L/∂x², ∂²L/∂x∂y; ∂h/∂y, ∂²L/∂y∂x, ∂²L/∂y²] > 0 at (x*, y*, µ*).

Then, (x*, y*) is a local max of f on C_h.

Proof. Note that (using e.g. Sarrus' rule to compute the determinant) condition (b) rewrites as

2 (∂²L/∂x∂y)(∂h/∂x)(∂h/∂y) − (∂²L/∂x²)(∂h/∂y)² − (∂²L/∂y²)(∂h/∂x)² > 0. (5)
³ Source: Simon and Blume (1994), Mathematics for Economists, pp. 462–63. This section largely repeats the arguments of slides 21–27 from Lecture 2. But sometimes, a little bit of repetition does not hurt!

Condition (b) also implies that

(∂h/∂x)(x*, y*) ≠ 0 or (∂h/∂y)(x*, y*) ≠ 0.

We assume that (∂h/∂x)(x*, y*) ≠ 0, w.l.o.g. Then, by the Implicit Function Theorem, the constraint set C_h can be considered as the graph of a C¹ function y = φ(x) around (x*, y*):

h(x, φ(x)) = c for all x near x*. (6)

Differentiating (6) yields

(∂h/∂x)(x, φ(x)) + (∂h/∂y)(x, φ(x)) φ′(x) = 0 (7)

or

φ′(x) = − (∂h/∂x)(x, φ(x)) / (∂h/∂y)(x, φ(x)).

Let

F(x) ≡ f(x, φ(x))

be f evaluated on C_h, a function of one unconstrained variable. By the usual first- and second-order conditions for such functions, if F′(x*) = 0 and F″(x*) < 0, then x* will be a strict local max of F and (x*, y*) = (x*, φ(x*)) will be a local constrained max of f.
So, we compute F′(x*) and F″(x*). We have

F′(x) = (∂f/∂x)(x, φ(x)) + (∂f/∂y)(x, φ(x)) φ′(x). (8)

Multiply equation (7) by −µ* and add it to (8), evaluating both at x = x*:

F′(x*) = [(∂f/∂x)(x*, y*) − µ*(∂h/∂x)(x*, y*)] + φ′(x*)[(∂f/∂y)(x*, y*) − µ*(∂h/∂y)(x*, y*)]
       = (∂L/∂x)(x*, y*) + φ′(x*)(∂L/∂y)(x*, y*)
       = 0,

where the last line follows from condition (a).

Now, take the second derivative of F(x) at x*:

F″(x*) = ∂²L/∂x² + 2(∂²L/∂x∂y) φ′(x*) + (∂²L/∂y²) φ′(x*)²
       = ∂²L/∂x² + 2(∂²L/∂x∂y)(−(∂h/∂x)/(∂h/∂y)) + (∂²L/∂y²)(−(∂h/∂x)/(∂h/∂y))²
       = (1/(∂h/∂y)²) [(∂²L/∂x²)(∂h/∂y)² − 2(∂²L/∂x∂y)(∂h/∂x)(∂h/∂y) + (∂²L/∂y²)(∂h/∂x)²],

which is negative by condition (b) (recall (5)). Since F′(x*) = 0 and F″(x*) < 0,

x ↦ F(x) = f(x, φ(x))

has a local max at x*, and therefore, f restricted to C_h has a local max at (x*, y*).
We can now easily verify that condition (5) is met for our problem. We have

∂²L/∂x² = −Aα(1−α)(x*)^(α−2)(y*)^(1−α) < 0
∂²L/∂y² = −Aα(1−α)(x*)^α (y*)^(−α−1) < 0
∂²L/∂x∂y = Aα(1−α)(x*)^(α−1)(y*)^(−α) > 0,

since A, x*, y* > 0 and α ∈ (0, 1). Moreover, writing the constraint as 0 = m − h(x, y) with h(x, y) = p_x x + p_y y,

∂h/∂x = p_x,  ∂h/∂y = p_y,

so that

2 (∂²L/∂x∂y)(∂h/∂x)(∂h/∂y) − (∂²L/∂x²)(∂h/∂y)² − (∂²L/∂y²)(∂h/∂x)² > 0, (9)

where the first term is positive and the two subtracted terms are negative.

Note that in this example, the determinant of the bordered Hessian can be expressed quite compactly:

|H| = det [0, p_x, p_y;
           p_x, −Aα(1−α)x^(α−2)y^(1−α), Aα(1−α)x^(α−1)y^(−α);
           p_y, Aα(1−α)x^(α−1)y^(−α), −Aα(1−α)x^α y^(−α−1)]
    = −p_x · Aα(1−α)(−p_x x^α y^(−α−1) − p_y x^(α−1) y^(−α))
      + p_y · Aα(1−α)(p_x x^(α−1) y^(−α) + p_y x^(α−2) y^(1−α))
    = Aα(1−α) x^(α−2) y^(−α−1) ((p_x x)² + 2 p_x p_y xy + (p_y y)²)
    = Aα(1−α) x^(α−2) y^(−α−1) (α² + 2α(1−α) + (1−α)²) m²
    = α(1−α) (m/(xy))² U(x, y) > 0.


Here I dropped the to avoid notational clutter; but do remember that the Hessian is
evaluated at (x∗ , y ∗ )!

A Small Digression⁴
In fact, Proposition 1 is just a special case (2 variables, 1 constraint) of the following general (n variables, k constraints) result, which we state without proof.

Theorem 1 Let f, h₁, ..., h_k be C² functions on R^n. Consider the problem of maximizing f on the constraint set

C_h ≡ {x : h₁(x) = c₁, ..., h_k(x) = c_k}.

Form the Lagrangian

L(x, µ₁, ..., µ_k) = f(x) + µ₁(c₁ − h₁(x)) + ··· + µ_k(c_k − h_k(x)), (10)

and suppose that:

(a) x* lies in the constraint set C_h,

⁴ This section follows Simon and Blume (1994), Mathematics for Economists. If you are short on time, you can jump directly to the solution of Exercise 3. It is meant as a reminder (and hopefully a "clarifier") of the material covered in slides 21–27 of Lecture 2.

(b) there exist µ₁*, ..., µ_k* such that

∂L/∂x₁ = 0, ..., ∂L/∂x_n = 0, ∂L/∂µ₁ = 0, ..., ∂L/∂µ_k = 0

at (x₁*, ..., x_n*, µ₁*, ..., µ_k*),

(c) the Hessian of L with respect to x at (x*, µ*), D²_x L(x*, µ*), is negative definite on the linear constraint set {v : Dh(x*)v = 0}; that is,

v ≠ 0 and Dh(x*)v = 0 ⟹ v′(D²_x L(x*, µ*))v < 0. (11)

Then, x* is a strict local constrained max of f on C_h.

The linear constraint set for this problem is the hyperplane which is tangent to the constraint set {x ∈ R^n : h(x) = c} at the point x*. The next theorem provides conditions for the general problem of determining the definiteness of

Q(x) = x′Ax, with A = (a_ij) an n × n symmetric matrix, (12)

on the linear constraint set

Bx = 0, with B = (b_ij) an m × n matrix. (13)

Theorem 2 To determine the definiteness of a quadratic form of n variables, Q(x) = x′Ax, when restricted to a constraint set (13) given by m linear equations Bx = 0, construct the (n + m) × (n + m) symmetric matrix H by bordering the matrix A above and to the left by the coefficients B of the linear constraints:

H = [0 B; B′ A].

Check the signs of the last n − m leading principal minors of H, starting with the determinant of H itself.

(a) If |H| has the same sign as (−1)^n and if these last n − m leading principal minors alternate in sign, then Q is negative definite on the constraint set Bx = 0, and x = 0 is a strict global max of Q on this constraint set.

(b) If |H| and these last n − m leading principal minors all have the same sign as (−1)^m, then Q is positive definite on the constraint set Bx = 0, and x = 0 is a strict global min of Q on this constraint set.

(c) If both conditions (a) and (b) are violated by nonzero leading principal minors, then Q is indefinite on the constraint set Bx = 0, and x = 0 is neither a max nor a min of Q on this constraint set.
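The sign test above can be illustrated numerically. In this sketch (matrices invented for illustration, with n = 3 variables and m = 1 constraint), we check the last n − m = 2 leading principal minors of the bordered matrix and cross-check against restricting Q to the null space of B.

```python
import numpy as np

# Illustrative quadratic form and constraint (assumptions, not from the text).
A = np.array([[-2.0, 1.0, 0.0],
              [1.0, -3.0, 1.0],
              [0.0, 1.0, -2.0]])    # Q(x) = x' A x
B = np.array([[1.0, 1.0, 1.0]])     # one linear constraint Bx = 0
n, m = 3, 1

H = np.block([[np.zeros((m, m)), B],
              [B.T, A]])            # bordered matrix from Theorem 2

# Last n - m leading principal minors: orders 2m + 1, ..., n + m (here 3 and 4).
minors = [np.linalg.det(H[:r, :r]) for r in range(2 * m + 1, n + m + 1)]

# Condition (a): |H| has sign (-1)^n and the minors alternate in sign,
# so Q is negative definite on {x : Bx = 0}.
assert np.sign(minors[-1]) == (-1) ** n
assert all(np.sign(minors[i]) == -np.sign(minors[i + 1]) for i in range(len(minors) - 1))

# Cross-check: restrict A to an orthonormal basis of the null space of B
# and verify all eigenvalues of the restricted form are negative.
_, _, Vt = np.linalg.svd(B)
N = Vt[m:].T                        # columns span {x : Bx = 0}
assert np.all(np.linalg.eigvalsh(N.T @ A @ N) < 0)
```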

Theorem 2 makes precise the exact sign pattern on bordered matrices for verifying the second-order condition (11). Border the n × n Hessian D²_x L(x*, µ*) with the k × n constraint matrix Dh(x*):

H ≡ [0 Dh(x*); Dh(x*)′ D²_x L(x*, µ*)]

  = ⎡ 0        ⋯  0          ∂h₁/∂x₁     ⋯  ∂h₁/∂x_n    ⎤
    ⎢ ⋮        ⋱  ⋮          ⋮           ⋱  ⋮           ⎥
    ⎢ 0        ⋯  0          ∂h_k/∂x₁    ⋯  ∂h_k/∂x_n   ⎥
    ⎢ ∂h₁/∂x₁  ⋯  ∂h_k/∂x₁   ∂²L/∂x₁²    ⋯  ∂²L/∂x_n∂x₁ ⎥   (14)
    ⎢ ⋮        ⋱  ⋮          ⋮           ⋱  ⋮           ⎥
    ⎣ ∂h₁/∂x_n ⋯  ∂h_k/∂x_n  ∂²L/∂x₁∂x_n ⋯  ∂²L/∂x_n²   ⎦

If the last (n − k) leading principal minors of matrix (14) alternate in sign, with the sign of the determinant of the (k + n) × (k + n) matrix H in (14) the same as that of (−1)^n, then condition (c) of Theorem 1 holds.
We will not present the rather intricate proof of this theorem in this course. However, a few remarks are in order.

Remark 1 Let us verify that the conclusions of Theorems 1 and 2 are consistent with the conclusion of Proposition 1 for the case of two variables and one constraint. For n = 2 and m = 1, we only need to compute n − m = 2 − 1 = 1 leading principal minor. Since we need to check the signs of the last n − m leading principal minors of H, this means that we only need to compute the determinant of H itself. Moreover, by condition (a), in order for x* to be a strict local constrained max, |H| must have the same sign as (−1)^n = (−1)² = 1. So, we need to check that |H| > 0. This is exactly the conclusion we reached in Proposition 1.

Remark 2 To provide some indication for why the conditions outlined in Theorem 2 are actually quite natural, note that the Hessian of the Lagrangian (10) with respect to all (n + k) variables µ₁, ..., µ_k, x₁, ..., x_n is

D²_(µ,x) L = [0 −Dh; −Dh′ D²_x L], (15)

since ∂²L/∂x_i∂µ_j = −∂h_j/∂x_i. If we multiply each of the last n rows and each of the last n columns in (15) by −1, we will not change the sign of |D²_(µ,x) L| or any of its principal minors, since this process involves an even number of multiplications by −1 in every case. The result is the bordered Hessian (14). So, the bordered Hessian in (14) has the same principal minors as the full Hessian (15) of the Lagrangian L.
Recall, however, that the second-order condition for the constrained maximization problem involves checking only the last n − k of the n + k leading principal minors of D²_(µ,x) L.

For our particular example, recall that

|H| = det [0, p_x, p_y;
           p_x, −Aα(1−α)x^(α−2)y^(1−α), Aα(1−α)x^(α−1)y^(−α);
           p_y, Aα(1−α)x^(α−1)y^(−α), −Aα(1−α)x^α y^(−α−1)].

The crucial point is that the second-order condition for the constrained maximization does not amount to checking whether the matrix H is negative definite or positive definite. For H to be negative definite, we would require that (−1)^r M_r > 0 for every leading principal minor M_r of order r = 1, ..., n.⁵ This clearly does not hold, since M₁ = 0. (H is also not positive definite, since M₁ = 0 and M₂ = −p_x² < 0.)
To get some intuition behind the fact that studying the second-order condition of our problem only requires checking one condition, note that with two variables and one constraint, our problem is really one-dimensional. More generally, if the problem has n variables and m constraints, we would expect that the problem is really (n − m)-dimensional and therefore that we will only have n − m conditions to check for the matrix H. Theorem 2 states precisely what these n − m conditions are.

Remark 3 Recall that, in the case of an unconstrained optimization problem, if Df(x) = 0 and D²f(x) is positive definite at some x, then x is a strict local minimum of f. Similarly, if Df(x) = 0 and D²f(x) is negative definite at some x, then x is a strict local maximum of f. To verify that x is a strict local minimum, we thus need to check that all the leading principal minors of D²f(x) are strictly positive. And to verify that x is a strict local maximum, we need to check that the leading kth-order principal minors of D²f(x) have sign (−1)^k for k = 1, ..., n.
The situation is quite similar in the context of constrained optimization. In the case of a minimization problem, we look for leading principal minors of the same sign. In the case of a maximization problem, we look for leading principal minors of alternating signs. But there is one difference to note. For positive definiteness under constraints, all the leading bordered principal minors of order greater than m have the same sign, but the sign depends on the number of constraints: as we have seen above, for the case of one constraint (m = 1), if D²f(x) is positive definite under constraints, then these minors are negative.

⁵ M_r = det(H_r), where H_r denotes the submatrix obtained by keeping the first r rows and r columns of H.

Exercise 3
Consider the problem: min x2 +y 2 subject to (x−1)3 −y 2 = 0. Find the minimum intuitively.
Show that the method of Lagrange multipliers does not work in this case. Why?

Solution

Note that y² is always nonnegative, hence (x − 1)³ must be nonnegative, which means that x must be greater than or equal to one. Since x² + y² is increasing in x ≥ 1 and in |y|, clearly its constrained minimum is achieved at (x*, y*) = (1, 0), where it takes the value 1.
Let g(x, y) = (x − 1)³ − y², so that

J(x, y) = (g_x  g_y) = (3(x − 1)²  −2y).

In particular, J(1, 0) = (0  0). Note that the constraint qualification fails, so we should expect the Lagrangian method to fail. Indeed, if we set up the Lagrangian

L(x, y, λ) = x² + y² + λ((x − 1)³ − y²)

and compute the critical point(s), we find

∂L(x, y, λ)/∂λ = (x − 1)³ − y² = 0 (16)
∂L(x, y, λ)/∂x = 2x + 3λ(x − 1)² = 0 (17)
∂L(x, y, λ)/∂y = 2y − 2λy = 0. (18)

From the third equation we have two cases: (i) y = 0 and (ii) λ = 1. Suppose first that y = 0. Then from (16), x = 1, and (17) then gives 2 = 0, a contradiction. Thus, y ≠ 0 and λ = 1. Hence, the second equation rewrites as 3x² − 4x + 3 = 0. The discriminant is 16 − 36 = −20 < 0, so this quadratic has no real root, which means that the second equation cannot be satisfied.
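The failure can also be seen numerically (a small sketch of my own): on the constraint, f(x, y) = x² + (x − 1)³ for x ≥ 1, which is increasing, so the minimum sits at (1, 0); yet the constraint gradient vanishes there, so ∇f = λ∇g has no solution.

```python
import numpy as np

# At the true constrained minimum (1, 0):
x, y = 1.0, 0.0
grad_g = np.array([3 * (x - 1)**2, -2 * y])   # gradient of g(x,y) = (x-1)^3 - y^2
grad_f = np.array([2 * x, 2 * y])             # gradient of f(x,y) = x^2 + y^2

assert np.allclose(grad_g, 0.0)               # constraint qualification fails: J(1,0) = (0, 0)
assert not np.allclose(grad_f, 0.0)           # but grad f(1,0) = (2, 0) != 0,
                                              # so grad f = lambda * grad g is unsolvable here.

# Brute-force the constrained problem: on the constraint, y^2 = (x-1)^3 with x >= 1,
# so f reduces to x^2 + (x-1)^3, which is increasing and minimized at x = 1 with value 1.
xs = np.linspace(1.0, 3.0, 10_001)
vals = xs**2 + (xs - 1)**3
assert xs[np.argmin(vals)] == 1.0 and np.min(vals) == 1.0
```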
