
Optimization methods (MFE)

Lecture 05

Elena Perazzi

EPFL

Fall 2018



Today’s topics: Dynamic optimization

Recap of the methods seen last week (for deterministic problems)


- Backward induction methods for finite-horizon problems
- Value function iteration for infinite-horizon problems
- “Forward iteration” of the Euler equation (for simple problems)
Today: solutions by linearization around steady state (assuming local
stability of steady state)
More examples of deterministic and stochastic problems



Deterministic problems: finite horizon
A general deterministic problem in finite horizon can be written as
V(x_0) = \max_{x_1, x_2, \dots, x_{T+1}} \sum_{t=0}^{T} \beta^t F(x_t, x_{t+1})

At each date t, x_t is the state variable and x_{t+1} is the choice variable or control variable.
In general x_t and x_{t+1} are n-dimensional: x_t, x_{t+1} \in \mathbb{R}^n.
Example: the optimal growth model

V(k_0) = \max_{c_0, c_1, \dots, c_T} \sum_{t=0}^{T} \beta^t U(c_t)

Since c_t = f(k_t) - k_{t+1}, this problem can be written as

V(k_0) = \max_{k_1, k_2, \dots, k_{T+1}} \sum_{t=0}^{T} \beta^t U(f(k_t) - k_{t+1})



Deterministic problems: finite horizon
The FOCs result in Euler equations of the form

F_2(x_t, x_{t+1}) + \beta F_1(x_{t+1}, x_{t+2}) = 0

for all t < T, and simply

F_2(x_T, x_{T+1}) = 0     (*)

for the last period.

F_1 and F_2 are to be interpreted as the gradients with respect to the first and second argument, respectively.
(*) allows us to solve for x_{T+1}, in general as a function of x_T. For the optimal growth problem, we had simply k_{T+1} = 0. Once we have x_{T+1} as a function of x_T, we use the Euler equation to find x_T as a function of x_{T-1}, and so on, until we find the whole sequence as a function of x_0.
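To make the backward-induction recipe concrete, here is a minimal numerical sketch for the finite-horizon optimal growth problem on a grid. The functional forms (log utility, f(k) = k^alpha), the parameter values and the grid are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

# A minimal backward-induction sketch for the finite-horizon optimal growth
# problem V(k_0) = max sum_{t=0}^{T} beta^t U(f(k_t) - k_{t+1}).
# Log utility, f(k) = k**alpha, the parameters and the grid are illustrative.

beta, alpha, T = 0.95, 0.3, 10
grid = np.linspace(1e-3, 0.3, 200)            # grid for the capital stock

def U(c):
    # -inf for infeasible consumption so it is never chosen
    return np.where(c > 0, np.log(np.maximum(c, 1e-300)), -np.inf)

f = lambda k: k**alpha

V = np.zeros(len(grid))                       # V_{T+1}(k) = 0 (k_{T+1} = 0 is optimal)
policies = []                                 # policies[t][i]: index of optimal k_{t+1}

for t in range(T, -1, -1):                    # backward induction from t = T to t = 0
    # rows index k_t, columns index the choice k_{t+1}
    values = U(f(grid)[:, None] - grid[None, :]) + beta * V[None, :]
    best = values.argmax(axis=1)
    policies.append(best)
    V = values[np.arange(len(grid)), best]    # V_t(k) on the grid

policies.reverse()                            # policies[0] now refers to date 0
i0 = np.argmin(np.abs(grid - 0.2))
print("V(k_0 = 0.2) ~", V[i0], " optimal k_1 ~", grid[policies[0][i0]])
```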
Deterministic problems: infinite horizon

General structure:

V(x_0) = \max_{\{x_t\}_{t=1}^{\infty}} \sum_{t=0}^{\infty} \beta^t F(x_t, x_{t+1})

Again the FOCs result in Euler equations

F_2(x_t, x_{t+1}) + \beta F_1(x_{t+1}, x_{t+2}) = 0

But now there is no last period! The optimal choice at t depends on the optimal choice at t + 1...
In simple problems: solution by forward iteration of Euler equations.
It worked with the optimal growth problem. We see another example
today.



Deterministic problems: infinite horizon

Consider a problem with F(x, y) (x an n-dimensional vector), concave and strictly increasing in x. Suppose that the state x_t is a non-negative vector. Then sufficient conditions for a sequence \{x_t^*, x_{t+1}^*, \dots\} to be an optimum are:

- The sequence \{x_t^*, x_{t+1}^*, \dots\} satisfies the Euler equation

  F_2(x_t^*, x_{t+1}^*) + \beta F_1(x_{t+1}^*, x_{t+2}^*) = 0

  for all t, and

- The transversality condition is satisfied:

  \lim_{t \to \infty} \beta^t F_2(x_t^*, x_{t+1}^*) \, x_{t+1}^* = 0



Example: the cake-eating problem
Initial wealth (the size of the cake) is w_0. Income is 0 for every period. The agent has to consume a piece of the cake in every period. The agent maximizes

V(w_0) = \max_{\{c_t\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t U(c_t)

s.t. \sum_{t=0}^{\infty} c_t \le w_0,   w_{t+1} = w_t - c_t

The problem can be written as

V(w_0) = \max_{\{w_t\}_{t=1}^{\infty}} \sum_{t=0}^{\infty} \beta^t U(w_t - w_{t+1})

Assume U(c) = ln(c). Then the Euler equations are

\frac{1}{c_t} = \frac{\beta}{c_{t+1}}



Example: the cake-eating problem
So

c_{t+1} = \beta c_t
c_{t+2} = \beta c_{t+1}
\dots

Putting everything together,

\sum_{t=0}^{\infty} c_t = \sum_{t=0}^{\infty} \beta^t c_0 = \frac{c_0}{1-\beta} = w_0

Finally,

c_0 = (1-\beta) w_0
c_t = \beta^t c_0 = (1-\beta) w_t
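A quick numerical check of this closed-form solution (a sketch with illustrative values for beta and w_0): it verifies the Euler equation c_{t+1} = beta c_t, that total consumption exhausts the cake, and that the transversality term beta^t F_2(w_t, w_{t+1}) w_{t+1} goes to 0.

```python
import numpy as np

# Quick numerical check of the cake-eating solution c_t = (1 - beta) w_t.
# beta and w0 are arbitrary illustrative values.
beta, w0, T = 0.9, 1.0, 200

w = np.empty(T + 1)
c = np.empty(T)
w[0] = w0
for t in range(T):
    c[t] = (1 - beta) * w[t]        # candidate policy
    w[t + 1] = w[t] - c[t]          # wealth evolution w_{t+1} = w_t - c_t

print(np.allclose(c[1:], beta * c[:-1]))        # Euler equation c_{t+1} = beta c_t
print(c.sum())                                  # total consumption -> w0 = 1
print(beta**(T - 1) * (-1 / c[T - 1]) * w[T])   # transversality term -> 0
```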



Dynamic programming methods & Value function iteration

If a bounded solution exists, the infinite-horizon problem can be written as a functional equation

V(x) = \max_{y \in \Gamma(x)} \left[ F(x, y) + \beta V(y) \right]

The above equation is a functional equation: the solution is the function V which is the fixed point of the operator defined by the RHS.
The Blackwell sufficient conditions tell us that this operator is a contraction, and the contraction mapping theorem tells us that a contraction has a unique fixed point.
Algorithm to find a solution (usually done numerically): start with a guess V_0(x), apply the operator on the RHS, find V_1(x), etc. Continue to apply the operator on the RHS until the procedure “converges”, i.e. until V_n(x) is sufficiently close to V_{n-1}(x).
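A minimal value-function-iteration sketch for the cake-eating problem, whose fixed point we already know in closed form (c = (1-beta) w, i.e. w' = beta w). The grid bounds, tolerance and iteration cap are illustrative assumptions.

```python
import numpy as np

# A minimal value-function-iteration sketch for the cake-eating problem
# V(w) = max_{0 <= w' <= w} ln(w - w') + beta V(w').
# The grid bounds, tolerance and iteration cap are illustrative assumptions.

beta = 0.9
grid = np.linspace(1e-4, 1.0, 500)            # grid for wealth w
V = np.zeros(len(grid))                       # initial guess V_0(w) = 0

for it in range(2000):
    # candidate values: rows index current wealth w, columns index w'
    c = grid[:, None] - grid[None, :]         # consumption c = w - w'
    vals = np.where(c >= 0,
                    np.log(np.maximum(c, 1e-300)) + beta * V[None, :],
                    -np.inf)
    V_new = vals.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:      # stop once V_n is close to V_{n-1}
        break
    V = V_new

policy = grid[vals.argmax(axis=1)]            # optimal w' as a function of w
# Compare with the closed-form policy w' = beta * w (i.e. c = (1 - beta) w):
print(np.max(np.abs(policy - beta * grid)))   # small, up to grid error
```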



Euler equations and dynamics around steady states



Steady states

A steady state x^* satisfies

F_2(x^*, x^*) + \beta F_1(x^*, x^*) = 0

(i.e. the sequence x_t = x^* for every t is a solution of the Euler equations, hence a solution of the maximization problem)
If the initial state is the steady state, the future state will always be
the steady state.
The steady state is locally stable if there is a region around the steady
state such that, if the initial state is in this region, the solution is a
sequence {xt } that converges to the steady state for t → ∞.
Many economic problems have solutions of this type, including the optimal growth model!
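As an illustration, the steady state of the optimal growth model can be found by solving the steady-state Euler equation numerically. The functional forms U(c) = ln(c), f(k) = k^alpha and the parameter values are assumptions made for this sketch; with these choices the steady state is also known in closed form, which provides a check.

```python
import numpy as np
from scipy.optimize import brentq

# A small sketch: find the steady state of the optimal growth model by solving
# F_2(x*, x*) + beta * F_1(x*, x*) = 0 numerically.
# Functional forms are illustrative assumptions: U(c) = ln(c), f(k) = k**alpha.

beta, alpha = 0.95, 0.3
f = lambda k: k**alpha
fprime = lambda k: alpha * k**(alpha - 1)
Uprime = lambda c: 1.0 / c

# For F(k, k') = U(f(k) - k'):
#   F_1(k, k') = U'(f(k) - k') * f'(k),   F_2(k, k') = -U'(f(k) - k')
def euler_at_steady_state(k):
    c = f(k) - k                       # steady-state consumption
    return -Uprime(c) + beta * Uprime(c) * fprime(k)

k_star = brentq(euler_at_steady_state, 1e-6, 0.9)
print(k_star, (alpha * beta)**(1 / (1 - alpha)))   # should coincide: beta * f'(k*) = 1
```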



Dynamics around the steady state
Suppose the initial state x_0 is not too far from the steady state. The solution of the problem is a sequence \{x_t^*\} satisfying, for all t,

F_2(x_t^*, x_{t+1}^*) + \beta F_1(x_{t+1}^*, x_{t+2}^*) = 0

with x_0^* = x_0. We can linearize the Euler equation around the steady state:

F_1(x_{t+1}^*, x_{t+2}^*) \approx F_1(x^*, x^*) + F_{11}(x_{t+1}^* - x^*) + F_{12}(x_{t+2}^* - x^*)
F_2(x_t^*, x_{t+1}^*) \approx F_2(x^*, x^*) + F_{21}(x_t^* - x^*) + F_{22}(x_{t+1}^* - x^*)

where F_{ij}, i, j = 1, 2, denotes F_{ij}(x^*, x^*). Hence

F_2(x_t^*, x_{t+1}^*) + \beta F_1(x_{t+1}^*, x_{t+2}^*) \approx (F_2(x^*, x^*) + \beta F_1(x^*, x^*)) + F_{21}(x_t^* - x^*) + (F_{22} + \beta F_{11})(x_{t+1}^* - x^*) + \beta F_{12}(x_{t+2}^* - x^*)

The term in parentheses equals 0 (it is the Euler equation for the steady state).


Dynamics around the steady state
Defining z_t \equiv x_t^* - x^*, the linearized Euler equation is

F_{21} z_t + (F_{22} + \beta F_{11}) z_{t+1} + \beta F_{12} z_{t+2} = 0

This is a second-order difference equation, which can be written as

\begin{pmatrix} z_{t+2} \\ z_{t+1} \end{pmatrix} = \underbrace{\begin{pmatrix} -\beta^{-1} F_{12}^{-1}(F_{22} + \beta F_{11}) & -\beta^{-1} F_{12}^{-1} F_{21} \\ I & 0 \end{pmatrix}}_{M} \begin{pmatrix} z_{t+1} \\ z_t \end{pmatrix}

We are looking for a solution that tends asymptotically to the steady state, i.e. such that

M^j \begin{pmatrix} z_1 \\ z_0 \end{pmatrix} \to \begin{pmatrix} 0 \\ 0 \end{pmatrix}

If we find such a solution, it is the unique solution. If we don't find it, then there is no optimal path starting from x_0 and ending in x^*.
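A sketch of how one might build M numerically for the optimal growth model and inspect its eigenvalues. The payoff F(k, k') = ln(k^alpha - k') and the parameters are illustrative assumptions, and the second derivatives F_ij(x*, x*) are approximated by finite differences rather than derived analytically.

```python
import numpy as np

# A sketch: build M numerically for the optimal growth model, with
# F(k, k') = ln(k**alpha - k'), and look at its eigenvalues at the steady state.
# Functional forms and parameters are illustrative; the second derivatives
# F_ij(k*, k*) are approximated with central finite differences.

beta, alpha = 0.95, 0.3
F = lambda x, y: np.log(x**alpha - y)
k = (alpha * beta)**(1 / (1 - alpha))        # steady state, from beta * f'(k*) = 1
h = 1e-5

def dF(x, y, i):                             # first derivative of F in argument i
    return ((F(x + h, y) - F(x - h, y)) / (2 * h) if i == 1
            else (F(x, y + h) - F(x, y - h)) / (2 * h))

def d2F(i, j):                               # F_ij evaluated at (k*, k*)
    if j == 1:
        return (dF(k + h, k, i) - dF(k - h, k, i)) / (2 * h)
    return (dF(k, k + h, i) - dF(k, k - h, i)) / (2 * h)

F11, F12, F21, F22 = d2F(1, 1), d2F(1, 2), d2F(2, 1), d2F(2, 2)

# n = 1 here, so the blocks of M are scalars
M = np.array([[-(F22 + beta * F11) / (beta * F12), -F21 / (beta * F12)],
              [1.0, 0.0]])
print(np.abs(np.linalg.eigvals(M)))          # for these parameters, one eigenvalue
                                             # should lie inside the unit circle
```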



Steady state stability in the optimal growth model

[Figure: K_{t+1} plotted against K_t for the optimal growth model; both axes range from 0 to 0.3.]



Blanchard-Kahn conditions
If M has n eigenvalues whose norm is smaller than 1, then there is a unique solution converging to the steady state.
Write

M = B^{-1} \Lambda B

where

\Lambda = \begin{pmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_{2n} \end{pmatrix}

and B is a non-singular matrix. It can be easily seen that

M^j = B^{-1} \Lambda^j B



Blanchard-Kahn conditions
Notice that

\Lambda^j = \begin{pmatrix} \lambda_1^j & 0 & \dots & 0 \\ 0 & \lambda_2^j & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_{2n}^j \end{pmatrix}

So

\begin{pmatrix} z_{j+1} \\ z_j \end{pmatrix} = M^j \begin{pmatrix} z_1 \\ z_0 \end{pmatrix} = B^{-1} \Lambda^j \begin{pmatrix} w_1 \\ w_0 \end{pmatrix}

with

\begin{pmatrix} w_1 \\ w_0 \end{pmatrix} \equiv B \begin{pmatrix} z_1 \\ z_0 \end{pmatrix}

Suppose that the first n eigenvalues are < 1 (in norm). Then I can choose z_1 such that

B \begin{pmatrix} z_1 \\ z_0 \end{pmatrix} = \begin{pmatrix} w_1 \\ 0 \end{pmatrix}



Blanchard-Kahn conditions

If

B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}

where each of the B_{ij} blocks is an n × n matrix, we impose

B_{21} z_1 + B_{22} z_0 = 0

i.e. we choose z_1 by solving

z_1 = -B_{21}^{-1} B_{22} z_0



Blanchard-Kahn conditions
It follows that

\begin{pmatrix} z_{j+1} \\ z_j \end{pmatrix} = M^j \begin{pmatrix} z_1 \\ z_0 \end{pmatrix} = B^{-1} \Lambda^j \begin{pmatrix} w_1 \\ 0 \end{pmatrix} \to \begin{pmatrix} 0 \\ 0 \end{pmatrix}

The policy function is

z_1 = -B_{21}^{-1} B_{22} z_0

or

x_1 = x^* - B_{21}^{-1} B_{22} (x_0 - x^*)

and the optimal sequence is obtained from (z_1, z_0)' by recursively multiplying by the matrix M.
The sequence constructed this way satisfies the Euler equation by construction and satisfies the transversality condition. Hence it is the unique solution of the linearized problem.
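The following sketch walks through this recipe for n = 1: diagonalize M, order the eigenvalues so the stable ones come first, read off the policy z_1 = -B_21^{-1} B_22 z_0, and iterate forward with M. The matrix M used here is an illustrative saddle-path example (one stable, one unstable eigenvalue), not one derived in the lecture.

```python
import numpy as np

# Sketch of the Blanchard-Kahn recipe for n = 1: order the eigenvalues of M,
# build B from the eigenvectors, and read off the policy z_1 = -B21^{-1} B22 z_0.
# M below is an illustrative 2x2 matrix with one stable and one unstable
# eigenvalue (the saddle-path case), not taken from the lecture.

M = np.array([[2.0, -0.95],
              [1.0,  0.00]])
n = 1                                         # dimension of the state

lam, V = np.linalg.eig(M)                     # M = V diag(lam) V^{-1}
order = np.argsort(np.abs(lam))               # stable eigenvalues first
lam, V = lam[order], V[:, order]
B = np.linalg.inv(V)                          # so that M = B^{-1} diag(lam) B

B21, B22 = B[n:, :n], B[n:, n:]               # lower blocks of B
policy = -np.linalg.solve(B21, B22)           # z_1 = policy @ z_0

z0 = np.array([0.1])                          # initial deviation from steady state
z = np.concatenate([policy @ z0, z0])         # stack (z_1, z_0)
for j in range(50):                           # iterate forward with M
    z = M @ z
print(z)                                      # should be close to (0, 0)
```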
Blanchard-Kahn conditions
In order for the solution to get closer and closer to the steady state, we need (z_1, z_0)' to be a linear combination of the eigenvectors corresponding to eigenvalues < 1 (in norm).
If there are n such eigenvalues, then, given z_0, there is a unique way to choose z_1 such that the above is satisfied.
Intuition for n = 1 (1-dimensional state variable and choice variable):

[Figure: the eigenvector associated with the eigenvalue with |λ| < 1, together with the given z_0.]



Stochastic problems

The time-t “payoff function” F now also depends on a k-dimensional shock \epsilon_t. Future payoffs are stochastic:

V(x_0, \{\epsilon_t\}_{t \le 0}) = \max_{\{x_{t+1} \in \Gamma(x_t, \epsilon_t)\}} F(x_0, x_1, \epsilon_0) + \sum_{t=1}^{\infty} \beta^t E[F(x_t, x_{t+1}, \epsilon_t)]

We also have to specify a transition matrix for the shocks P(\epsilon_t | \{\epsilon_s\}_{s \le t}) and a constraint correspondence \Gamma(x_t, \epsilon_t).
At time 0 the choice of x_1 depends on x_0 and \epsilon_0. It follows that the x_t with t > 1 are also stochastic.
A plan is a map \{x_0, \{\epsilon_t\}_{t=0}^{\infty}\} \to \{x_t\}_{t=1}^{\infty}.



Stochastic problems

The Bellman equation involves an expectation

V(x_0, \{\epsilon_t\}_{t \le 0}) = \max_{x_1} F(x_0, x_1, \epsilon_0) + \beta E[V(x_1, \{\epsilon_t\}_{t \le 1})]

If the distribution of future shocks only depends on the current shock (and not on past shocks),

V(x_0, \epsilon_0) = \max_{x_1} F(x_0, x_1, \epsilon_0) + \beta E[V(x_1, \epsilon_1)]

The Euler equations also involve expectations:

F_2(x_t, x_{t+1}, \epsilon_t) + \beta E[F_1(x_{t+1}, x_{t+2}, \epsilon_{t+1})] = 0

However, most of the methods and theorems seen for the deterministic case carry over to the stochastic case.



Example 1: a portfolio problem (finite horizon)
A risk-averse agent can invest in two assets.
Riskless asset paying gross return 1.
Risky asset paying gross return R: a random variable with support in [\underline{R}, \bar{R}], \underline{R} > 0.
One-period problem: invest at 0 and consume at 1. Initial wealth w_0.
Problem:

V(w_0) = \max_{s \in [0, w_0]} E[u(Rs + w_0 - s)]

Assume CRRA utility u(c) = c^{1-\gamma}/(1-\gamma) and define \theta \equiv s/w_0 (the fraction of wealth invested in the risky asset). The problem becomes

V(w_0) = \max_{\theta \in [0,1]} E[((R-1)\theta + 1)^{1-\gamma}] \, \frac{w_0^{1-\gamma}}{1-\gamma}
The optimal θ is independent of wealth.
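A small sketch of the one-period problem: maximize expected CRRA utility of terminal wealth (per unit of initial wealth) over theta by grid search. The two-point distribution for R and the value of gamma are illustrative assumptions.

```python
import numpy as np

# Sketch: solve the one-period portfolio problem max_theta E[u((R-1)theta + 1)]
# with CRRA utility. The two-point distribution of R and the value of gamma are
# illustrative assumptions; any distribution with support above 0 would do.

gamma = 3.0
R_vals = np.array([0.9, 1.3])                 # gross returns of the risky asset
probs = np.array([0.5, 0.5])

def u(c):
    return c**(1 - gamma) / (1 - gamma)

thetas = np.linspace(0.0, 1.0, 1001)          # fraction of wealth in the risky asset
# expected utility of terminal wealth per unit of initial wealth, for each theta
EU = np.array([probs @ u((R_vals - 1) * th + 1) for th in thetas])
theta_star = thetas[EU.argmax()]

print("optimal theta:", theta_star)           # independent of the level of wealth w0
```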



Example 1: a portfolio problem (finite horizon)
Defining

\xi \equiv \max_{\theta \in [0,1]} E[((R-1)\theta + 1)^{1-\gamma}]

we have

V(w_0) = \xi \, \frac{w_0^{1-\gamma}}{1-\gamma}

Notice that \xi is a constant.
Now consider a two-period problem.
- At time 0, invest s_0 in the risky asset and w_0 - s_0 in the riskless asset.
- At time 1, wealth is w_1 = s_0 R_1 + w_0 - s_0.
- At time 1, invest again: s_1 in the risky asset and w_1 - s_1 in the riskless asset.
- At time 2, consume w_2 = s_1 R_2 + w_1 - s_1, with utility w_2^{1-\gamma}/(1-\gamma).
- R_1 and R_2 are i.i.d.

We already know that, given w_1, E_1\left[ \frac{w_2^{1-\gamma}}{1-\gamma} \right] = \frac{w_1^{1-\gamma}}{1-\gamma} \, \xi



Example 1: a portfolio problem (finite horizon)
At t = 0 I want to invest with the objective of maximizing time-2 utility of consumption. So the problem is

V(w_0) = \max_{s_0, s_1(R_1)} E_0\left[ \frac{w_2^{1-\gamma}}{1-\gamma} \right]

But we know that, no matter the return R_1, it will always be s_1 = \theta w_1, and we know that an equivalent formulation of the problem is

V(w_0) = \max_{s_0} \xi \, E_0\left[ \frac{w_1^{1-\gamma}}{1-\gamma} \right]

Since \xi is a constant, the time-0 problem is identical to the time-1 problem. We will invest the same fraction \theta in the risky asset, so that

V(w_0) = \xi^2 \, \frac{w_0^{1-\gamma}}{1-\gamma}



Example 1: a portfolio problem (finite horizon)

N-period problem: every period poses the same problem! We will always invest the same fraction \theta in the risky asset.



Example 2: unemployed worker problem

Take an unemployed worker with a utility function linear in the present value of future wages.
As long as he is unemployed, the worker receives a job offer each
period.
Each job offer is characterized by a wage w drawn from a cdf F (w ).
The worker can accept or reject the offer. If he accepts an offer with wage w, he will from then on receive w every period. His discount factor is \beta, so the utility of accepting an offer is

V(w) = \frac{w}{1-\beta}
If the worker rejects the offer, he continues to be unemployed and he
waits for next period’s offer.



Example 2: unemployed worker problem
Call V^U the utility if unemployed (equal to the NPV of expected future wages, given that an offer will eventually be accepted). The value of V^U is what we need to determine. This is the utility at the beginning of the period, before the current-period offer arrives.
Each period the decision of the unemployed worker is

\max\left\{ \frac{w}{1-\beta}, \; \beta V^U \right\}

Call \hat{w} the wage that makes the worker indifferent between accepting and rejecting. He will accept if w > \hat{w} and reject if w < \hat{w}.
The Bellman equation is

V^U = \int_0^{\bar{w}} \max\left\{ \frac{w}{1-\beta}, \; \beta V^U \right\} dF(w)



Example 2: unemployed worker problem

As in deterministic problems, V^U is the fixed point of the Bellman equation, and can be found by value function iteration.
When we find V^U, we find \hat{w} by

\hat{w} = (1-\beta)\beta \int_0^{\bar{w}} \max\left\{ \frac{w}{1-\beta}, \; \beta V^U \right\} dF(w)

This uses the fact that for the indifference wage \hat{w}/(1-\beta) = \beta V^U.
Policy function: accept if w > \hat{w}, reject otherwise.
Exercise: find the solution for the uniform distribution F(w) = w/\bar{w}.
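For the exercise, here is a numerical sketch that iterates on the Bellman equation for V^U under a uniform F and then recovers the reservation wage from w_hat = (1-beta) beta V^U. The values of beta, w_bar and the grid are illustrative assumptions; the analytical answer can be checked against this output.

```python
import numpy as np

# Numerical sketch for the job-search exercise: iterate on the Bellman equation
# V^U = integral_0^wbar max{ w/(1-beta), beta V^U } dF(w) with F uniform on
# [0, wbar], then recover the reservation wage w_hat = (1-beta) * beta * V^U.
# beta, wbar and the grid size are illustrative assumptions.

beta, wbar = 0.9, 1.0
w = np.linspace(0.0, wbar, 10001)             # grid over wages; dF(w) = dw / wbar

VU = 0.0                                      # initial guess
for it in range(10000):
    integrand = np.maximum(w / (1 - beta), beta * VU)
    VU_new = integrand.mean()                 # ~ (1/wbar) * integral under uniform F
    if abs(VU_new - VU) < 1e-12:
        break
    VU = VU_new

w_hat = (1 - beta) * beta * VU                # reservation wage
print("V^U =", VU, " reservation wage =", w_hat)
```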

