
Dynamic Programming

(Lectures on Solution Methods for Economists I)

Jesús Fernández-Villaverde (University of Pennsylvania) and Pablo Guerrón (Boston College)

May 14, 2022
Theoretical Background
Introduction

• Introduce numerical methods to solve dynamic programming (DP) models.

• DP models with sequential decision making:

• Arrow, Harris, and Marschak (1951) → optimal inventory model.

• Lucas and Prescott (1971) → optimal investment model.

• Brock and Mirman (1972) → optimal growth model under uncertainty.

• Lucas (1978) and Brock (1980) → asset pricing models.

• Kydland and Prescott (1982) → business cycle model.

1
The basic framework

• Almost any DP problem can be formulated as a Markov decision process (MDP).

• An agent, given state $s_t \in S$, takes an optimal action $a_t \in A(s_t)$ that determines current utility
$u(s_t, a_t)$ and affects the distribution of next period's state $s_{t+1}$ via a Markov chain $p(s_{t+1}|s_t, a_t)$.

• The problem is to choose $\alpha = \{\alpha_1, \ldots, \alpha_T\}$, where $a_t = \alpha_t(s_t)$, that solves

$$V(s) = \max_{\alpha} \, \mathbb{E}_{\alpha}\left\{ \sum_{t=0}^{T} \beta^t u(s_t, a_t) \,\Big|\, s_0 = s \right\}$$

• The difficulty is that we are not looking for a set of numbers $a = \{a_1, \ldots, a_T\}$ but for a set of
functions $\alpha = \{\alpha_1, \ldots, \alpha_T\}$.

2
The DP problem

• DP simplifies the MDP problem, allowing us to find α = {α1 , . . . , αT } using a recursive procedure.

• Basically, it uses V as a shadow price to map a stochastic/multiperiod problem into a
deterministic/static optimization problem.

• We are going to focus on infinite horizon problems, where V is the unique solution to the Bellman
equation V = Γ(V).

• Here Γ is called the Bellman operator, defined as:

$$\Gamma(V)(s) = \max_{a} \left\{ u(s, a) + \beta \int V(s')\, p(s'|s, a) \right\}$$

• α(s) is the solution to the Bellman equation for each s.

3
The Bellman operator and the Bellman equation

• We will review the mathematical foundations of the Bellman equation.

• It has a very nice property: Γ is a contraction mapping.

• This will allow us to use some numerical procedures to find the solution to the Bellman equation
recursively.

4
Discrete vs. continuous MDPs

• Difference between discrete MDPs (whose state and control variables can only take a finite number
of values) and continuous MDPs (whose state and control variables can take a continuum of values).

• Value functions for discrete MDPs belong to a subset of the finite-dimensional Euclidean space $\mathbb{R}^{\#S}$.

• Value functions for continuous MDPs belong to a subset of the infinite-dimensional Banach space
B(S) of bounded, measurable real-valued functions on S.

• Therefore, we can solve discrete MDPs exactly (up to rounding errors), while we can only approximate the
solution to continuous MDPs.

• Discrete MDPs arise naturally in IO/labor type of applications while continuous MDPs arise
naturally in Macro.

5
Computation: speed vs. accuracy

• The approximating error ϵ introduces a trade-off: better accuracy (lower ϵ) versus shorter time to
find the solution (higher ϵ).

• The time needed to find the solution also depends on the dimension of the problem: d.

• We want the fastest method given a pair (ϵ, d).

• Why do we want the fastest method?

• Normally, these algorithms are nested inside a bigger optimization algorithm.

• Hence, we will have to solve the Bellman equation for various values of the “structural” parameters
defining β, u, and p.

6
Approximation to continuous DPs

• There are two ways to approximate continuous DPs.

• Discrete.

• Smooth.

• Discrete solves an equivalent discrete problem that approximates the original continuous DPs.

• Smooth treats the value function V and the decision rule α as smooth functions of s and a finite set
of coefficients θ.

7
Smooth approximation to continuous DPs

• Then we will try to find $\hat\theta$ such that the approximated value function $V_{\hat\theta}$ and the approximated
decision rule $\alpha_{\hat\theta}$ are close to V and α in some metric.

• In general, we will use a sequence of parametrizations that is dense in B(S).

• That means that for each V ∈ B(S), there exists a sequence $\{\theta_k\}_{k=1}^{\infty}$ such that

$$\lim_{k\to\infty} \inf_{\theta_k} \sup_{s\in S} |V_{\theta_k}(s) - V(s)| = 0$$

• Example:
1. Let S = [−1, 1].
2. Consider $V_\theta(s) = \sum_{i=1}^{k} \theta_i p_i(s)$ and let $p_i(s) = s^i$.

• Another example is $p_i(s) = \cos\left(i \cos^{-1}(s)\right)$. These are called the Chebyshev polynomials of the first
kind.
8
The Stone-Weierstrass approximation theorem

• Let ε > 0 and V be a continuous function on [−1, 1]; then there exists a polynomial $V_\theta$ such that

$$\|V - V_\theta\| < \varepsilon$$

• Therefore, the problem is to find θ that minimizes

$$\left( \sum_{i=1}^{N} \left[ V_\theta(s_i) - \hat{\Gamma}(V_\theta)(s_i) \right]^2 \right)^{1/2}$$

where $\hat{\Gamma}(V_\theta)$ is an approximation to the Bellman operator. Why is it an approximation?

• Faster to solve the previous problem than by brute force discretizations.

9
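A minimal sketch of the smooth-approximation idea in Python (assuming NumPy). Rather than solving the fixed-point problem above, it fits Chebyshev coefficients θ to a known stand-in function on [−1, 1] by least squares and checks the uniform error; the function log(2 + s) and all numerical choices are illustrative assumptions.

```python
import numpy as np

# Stand-in for an unknown value function on S = [-1, 1] (illustrative choice)
def v(s):
    return np.log(2.0 + s)

s_nodes = np.cos(np.pi * (np.arange(21) + 0.5) / 21)                  # Chebyshev nodes
theta = np.polynomial.chebyshev.chebfit(s_nodes, v(s_nodes), deg=8)   # least-squares fit

s_test = np.linspace(-1.0, 1.0, 1001)
sup_err = np.max(np.abs(np.polynomial.chebyshev.chebval(s_test, theta) - v(s_test)))
print(sup_err)   # shrinks quickly as deg grows, as Stone-Weierstrass suggests
```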
MDP definitions

• An MDP is defined by the following objects:

• A state space S.
• An action space A.
• A family of constraints A(s) for s ∈ S.
• A transition probability $p(ds'|s, a) = \Pr(s_{t+1} \in ds'\,|\,s_t = s, a_t = a)$.
• A single period utility u(s, a).

• The agent's problem is to choose $\alpha = \{\alpha_1, \ldots, \alpha_T\}$ such that:

$$\max_{\alpha} \int_{s_0} \cdots \int_{s_T} \left[ \sum_{t=0}^{T} \beta^t u(s_t, \alpha_t(s_t)) \right] p(ds_t|s_{t-1}, \alpha_{t-1}(s_{t-1}))\, p_0(ds_0)$$

• $p_0(ds_0)$ is the probability distribution over the initial state.

• This problem is very complicated: we search over a set of functions $\{\alpha_1, \ldots, \alpha_T\}$ and compute a
(T + 1)-dimensional integral.
10
The Bellman equation in the finite horizon problem

• If T < ∞ (the problem has a finite horizon), DP is equivalent to backward induction. In the terminal
period, $\alpha_T$ is:

$$\alpha_T(s_T) = \arg\max_{a_T \in A(s_T)} u(s_T, a_T)$$

• And $V_T(s_T) = u(s_T, \alpha_T(s_T))$.

• For periods t = 1, ..., T − 1, we can find $V_t$ and $\alpha_t$ by recursion:

$$\alpha_t(s_t) = \arg\max_{a_t \in A(s_t)} \left\{ u(s_t, a_t) + \beta \int V_{t+1}(s_{t+1})\, p(ds_{t+1}|s_t, a_t) \right\}$$

$$V_t(s_t) = u(s_t, \alpha_t(s_t)) + \beta \int V_{t+1}(s_{t+1})\, p(ds_{t+1}|s_t, \alpha_t(s_t))$$

• It could be the case that $a_t = \alpha_t(s_t, a_{t-1}, s_{t-1}, \ldots)$ depends on the whole history, but it can be shown
that separability and the Markovian property of p imply that $a_t = \alpha_t(s_t)$.
11
The Bellman equation in the infinite horizon problem I

• If T = ∞, we no longer have a final period from which to solve backward.

• On the other hand, the separability and the Markovian property of p imply that $a_t = \alpha(s_t)$, that is,
the problem has a stationary Markovian structure.

• The optimal policy only depends on s; it does not depend on t.

• Thus, the optimal stationary Markovian rule is characterized by:

$$\alpha(s) = \arg\max_{a \in A(s)} \left\{ u(s, a) + \beta \int V(s')\, p(ds'|s, a) \right\}$$

$$V(s) = u(s, \alpha(s)) + \beta \int V(s')\, p(ds'|s, \alpha(s))$$

• This equation is known as the Bellman equation.

• It is a functional equation (mapping from functions to functions).

• The function V is the fixed point of this functional equation.

12


The Bellman equation in the infinite horizon problem II

• To determine existence and uniqueness, we need to impose:

1. S and A are compact metric spaces.

2. u(s, a) is jointly continuous and bounded.

3. $s \longrightarrow A(s)$ is a continuous correspondence.

• Let B(S) be the Banach space of bounded, measurable real-valued functions on S.

• Let $\|f\| = \sup_{s\in S} |f(s)|$ for f ∈ B(S) be the sup norm.

• The Bellman operator is:

$$\Gamma(W)(s) = \max_{a \in A(s)} \left\{ u(s, a) + \beta \int W(s')\, p(ds'|s, a) \right\}$$

• The Bellman equation is then a fixed point of the operator:

$$V = \Gamma(V)$$
13
The Bellman equation in the infinite horizon problem II

• Blackwell (1965) and Denardo (1967) show that the Bellman operator is a contraction mapping: for
W, V in B(S),

$$\|\Gamma(V) - \Gamma(W)\| \le \beta \|V - W\|$$

• Contraction mapping theorem: if Γ is a contraction operator mapping on a Banach space B, then
Γ has a unique fixed point.

• Blackwell's theorem: the stationary Markovian α defined by:

$$\alpha(s) = \arg\max_{a \in A(s)} \left\{ u(s, a) + \beta \int V(s')\, p(ds'|s, a) \right\}$$

$$V(s) = u(s, \alpha(s)) + \beta \int V(s')\, p(ds'|s, \alpha(s))$$

solves the associated MDP problem.


14
A trivial example

• Consider u(s, a) = 1.

• Given that u is constant, let us assume that V is also constant.

• If we substitute this guess into the Bellman equation, we get:

$$V = \max_{a \in A(s)} \left\{ 1 + \beta \int V\, p(ds'|s, a) \right\}$$

• And the unique solution is $V = \frac{1}{1-\beta}$.

• Clearly, the MDP problem implies that $V = 1 + \beta + \beta^2 + \ldots$

• So, the two formulations are equivalent.

15
Phelps’ (1972) example I

• The agent has to decide between consuming and saving.

• The state variable, w, is the wealth of the agent and the decision variable, c, is how much to
consume.

• The agent cannot borrow, so the choice set is A(w) = {c | 0 ≤ c ≤ w}.

• The savings are invested in a single risky asset with iid return $R_t$ with distribution F.

• The Bellman equation is:

$$V(w) = \max_{c \in A(w)} \left\{ \log(c) + \beta \int_0^{\infty} V(R(w - c))\, F(dR) \right\}$$

16
Phelps’ (1972) example II

• Since the operator Γ is a contraction, we can start from V = 0.

• If that is the case, $V_t = \Gamma^t(0) = f_t \log(w) + g_t$ for constants $f_t$ and $g_t$.

• So, $V_\infty = \Gamma^\infty(0) = f_\infty \log(w) + g_\infty$.

• If we substitute $V_\infty$ into the Bellman equation and solve for $f_\infty$ and $g_\infty$, we get:

$$f_\infty = \frac{1}{1-\beta}$$

$$g_\infty = \frac{\log(1-\beta)}{1-\beta} + \frac{\beta \log(\beta)}{(1-\beta)^2} + \frac{\beta E\{\log(R)\}}{(1-\beta)^2}$$

and $\alpha(w) = (1-\beta)\, w$.

• Therefore, the permanent income hypothesis still holds in this environment.


17
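Because Γ maps f log(w) + g into a function of the same form, we can iterate on the two coefficients directly. A minimal sketch in Python (β = 0.95 and E[log R] = 0 are illustrative assumptions, not values from the slides) that converges to the closed forms above:

```python
import numpy as np

beta, ElogR = 0.95, 0.0        # illustrative values; ElogR stands for E{log(R)}
f, g = 0.0, 0.0                # start from V = 0, i.e. f_0 = g_0 = 0

for _ in range(5000):
    # Apply Gamma to V_t(w) = f_t*log(w) + g_t: optimal c = w/(1 + beta*f),
    # which delivers the coefficient updates below.
    c_share = 1.0 / (1.0 + beta * f)
    g = (np.log(c_share)
         + (beta * f * np.log(1.0 - c_share) if f > 0 else 0.0)
         + beta * f * ElogR + beta * g)
    f = 1.0 + beta * f

print(f, 1.0 / (1.0 - beta))                          # f_inf = 1/(1-beta)
print(g, np.log(1.0 - beta) / (1.0 - beta)
         + beta * np.log(beta) / (1.0 - beta)**2
         + beta * ElogR / (1.0 - beta)**2)            # g_inf from the slide
print(1.0 / (1.0 + beta * f), 1.0 - beta)             # policy: c = (1-beta)*w
```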
Numerical Implementation
Motivation

• Before, we reviewed some theoretical background on dynamic programming

• Now, we will discuss its numerical implementation

• Perhaps the most important solution algorithm to learn:


1. Wide applicability

2. Many known results

3. Template for other algorithms

• Importance of keeping the “curse of dimensionality” under control

• Two issues to discuss:

1. Finite versus infinite time

2. Discrete versus continuous state space.


18
Finite time

• Problems where there is a terminal condition.

• Examples:

1. Life cycle.

2. Investment with expiration date.

3. Finite games.

• Why are finite time problems nicer? Backward induction.

• You can think about them as a particular case of multivariate optimization.

19
Infinite time

• Problems where there is no terminal condition.

• Examples:

1. Industry dynamics.

2. Business cycles.

3. Infinite games.

• However, we will need the equivalent of a terminal condition: transversality condition.

20
Discrete state space

• We can solve problems up to floating point accuracy.

• Why is this important?

1. ε-equilibria.

2. Estimation.

• However, how realistic are models with a discrete state space?

21
Infinite state space

• More common cases in economics.

• Problem: we have to rely on a numerical approximation.

• Interaction of different approximation errors (computation, estimation, simulation).

• Bounds?

• Interaction of bounds?

22
Different strategies

• Four main strategies:

1. Value function iteration.

2. Policy function iteration.

3. Projection.

4. Perturbation.

• Many other strategies are actually particular cases of the previous ones.

23
Value function iteration

• Well-known, basic algorithm of dynamic programming. Also known as value improvement.

• We have tight convergence properties and bounds on errors.

• Well suited for parallelization.

• It will always (perhaps quite slowly) work.

• How do we implement the operator?

1. We come back to our two distinctions: finite versus infinite time and discrete versus continuous state
space.

2. Then we need to talk about:

• Initialization.

• Discretization.

24
Value function iteration in finite time

• We begin with the Bellman operator:

$$\Gamma(V^t)(s) = \max_{a \in A(s)} \left\{ u(s, a) + \beta \int V^t(s')\, p(ds'|s, a) \right\}$$

• Specify $V^T$ and apply the Bellman operator:

$$V^{T-1}(s) = \max_{a \in A(s)} \left\{ u(s, a) + \beta \int V^T(s')\, p(ds'|s, a) \right\}$$

• Iterate until the first period:

$$V^1(s) = \max_{a \in A(s)} \left\{ u(s, a) + \beta \int V^2(s')\, p(ds'|s, a) \right\}$$
25
Value function iteration in infinite time

• We begin with the Bellman operator:

$$\Gamma(V)(s) = \max_{a \in A(s)} \left\{ u(s, a) + \beta \int V(s')\, p(ds'|s, a) \right\}$$

• Specify $V^0$ and apply the Bellman operator:

$$V^1(s) = \max_{a \in A(s)} \left\{ u(s, a) + \beta \int V^0(s')\, p(ds'|s, a) \right\}$$

• Iterate until convergence:

$$V^T(s) = \max_{a \in A(s)} \left\{ u(s, a) + \beta \int V^{T-1}(s')\, p(ds'|s, a) \right\}$$

26
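A minimal sketch of this iteration on a discrete grid, using a deterministic growth model with log utility; the functional forms, parameter values, and grid below are illustrative assumptions, not the slides' model.

```python
import numpy as np

alpha, beta, delta = 0.36, 0.95, 0.1
k_grid = np.linspace(0.1, 10.0, 500)

# One-period return u(k, k') for every pair on the grid; infeasible choices get -inf
c = k_grid[:, None]**alpha + (1 - delta) * k_grid[:, None] - k_grid[None, :]
u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)

V = np.zeros(len(k_grid))                              # initial guess V^0 = 0
for it in range(10_000):
    # Bellman operator: V^{t+1}(k) = max_{k'} { u(k, k') + beta * V^t(k') }
    V_new = np.max(u + beta * V[None, :], axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8 * (1 - beta):  # sup-norm stopping rule
        V = V_new
        break
    V = V_new

policy = k_grid[np.argmax(u + beta * V[None, :], axis=1)]   # k'(k) on the grid
```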
Policy function iteration

• With infinite time, we can also apply policy function iteration (also known as the Howard improvement
algorithm):

1. We guess a policy function $a^0$.

2. We compute the $V^0$ associated with it (by matrix operations or iteration).

3. We compute the new policy function $a^1$ implied by $V^0$.

4. We iterate until convergence.

• Under some conditions, it can be faster than value function iteration (more on this later).

• Most of the next slides apply to policy function iteration without any (material) change.

27
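A sketch of the same illustrative problem as the value function iteration example above, solved by policy function iteration: evaluate the current policy exactly by solving the linear system V = u_a + βP_a V, then re-optimize. All functional forms and parameter values are assumptions for illustration.

```python
import numpy as np

alpha, beta, delta = 0.36, 0.95, 0.1
k_grid = np.linspace(0.1, 10.0, 500)
nk = len(k_grid)
c = k_grid[:, None]**alpha + (1 - delta) * k_grid[:, None] - k_grid[None, :]
u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -1e10)   # large penalty if infeasible

policy = np.arange(nk)            # initial guess: keep capital unchanged (feasible here)
for it in range(100):
    # Policy evaluation: V = (I - beta*P_policy)^{-1} u_policy (deterministic transition)
    P = np.zeros((nk, nk))
    P[np.arange(nk), policy] = 1.0
    V = np.linalg.solve(np.eye(nk) - beta * P, u[np.arange(nk), policy])
    # Policy improvement: one application of the max operator
    new_policy = np.argmax(u + beta * V[None, :], axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy
```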
Normalization

• Before initializing the algorithm, it is usually a good idea to normalize the problem:

$$V(s) = \max_{a \in A(s)} \left\{ (1-\beta)\, u(s, a) + \beta \int V(s')\, p(ds'|s, a) \right\}$$

• Three advantages:

1. We save one iteration.

2. Stability properties.

3. Convergence bounds are interpretable.

• More general case: reformulation of the problem.

28
Initial value in finite time problems

• Usually, economics of the problem provides natural choices.

• Example: final value of an optimal expenditure problem is zero.

• However, sometimes there are subtle issues.

• Example: what is the value of dying? And of bequests? OLG.

29
Initial guesses for infinite time problems

• Theorems tell us we will converge from any initial guess.

• That does not mean we should not be smart picking our initial guess.

• Several good ideas:

1. Steady state of the problem (if one exists). Usually saves at least one iteration.

2. Perturbation approximation.

3. Collapsing one or more dimensions of the problem. Which one?

30
Discretization

• In the case where we have a continuous state space, we need to discretize it into a grid.

• How do we do that?

• Dealing with the curse of dimensionality.

• Do we let future states lie outside the grid?

31
New approximated problem

• Exact problem:

$$V(s) = \max_{a \in A(s)} \left\{ (1-\beta)\, u(s, a) + \beta \int V(s')\, p(ds'|s, a) \right\}$$

• Approximated problem:

$$\hat{V}(s) = \max_{a \in A(s)} \left[ (1-\beta)\, u(s, a) + \beta \sum_{k=1}^{N} \hat{V}(s_k')\, p_N(s_k'|s, a) \right]$$

32
Grid generation

• Huge literature on numerical analysis on how to efficiently generate grids.

• Two main issues:

1. How to select points sk .

2. How to approximate p by pN .

• Answer to second issue follows from answer to first problem.

• We can (and we will) combine strategies to generate grids.

33
Uniform grid

• Decide how many points in the grid.

• Distribute them uniformly in the state space.

• What if the state space is not bounded?

• Advantages and disadvantages.

34
Non-uniform grid

• Use economic theory or error analysis to evaluate where to accumulate points.

• Standard argument: close to curvatures of the value function.

• Problem: this is a heuristic argument.

• Self-confirming equilibria in computations.

35
Discretizing stochastic process

• Important case: discretizing exogenous stochastic processes.

• Consider a general AR(1) process:

$$z' = (1-\rho)\mu_z + \rho z + \varepsilon', \qquad \varepsilon' \overset{iid}{\sim} N(0, \sigma_\varepsilon^2)$$

• Recall that $E[z] = \mu_z$ and $Var[z] = \sigma_z^2 = \frac{\sigma_\varepsilon^2}{1-\rho^2}$.

• The first step is to choose m (e.g., m = 3) and N, and define:

$$z_N = \mu_z + m\sigma_z, \qquad z_1 = \mu_z - m\sigma_z$$

• $z_2, z_3, \ldots, z_{N-1}$ are equispaced over the interval $[z_1, z_N]$ with $z_k < z_{k+1}$ for any $k \in \{1, 2, \ldots, N-1\}$

36
Example State Space (Con't)

[Figure omitted]

37

Transition I

[Figure omitted]

38

Transition II

[Figure omitted]

39
Transition probability

• Let $d = z_{k+1} - z_k$. Then

$$\begin{aligned}
\pi_{i,j} &= \Pr\{z' = z_j \,|\, z = z_i\} \\
&= \Pr\{z_j - d/2 < z' \le z_j + d/2 \,|\, z = z_i\} \\
&= \Pr\{z_j - d/2 < (1-\rho)\mu_z + \rho z_i + \varepsilon \le z_j + d/2\} \\
&= \Pr\left\{ \frac{z_j - d/2 - (1-\rho)\mu_z - \rho z_i}{\sigma_\varepsilon} < \frac{\varepsilon}{\sigma_\varepsilon} \le \frac{z_j + d/2 - (1-\rho)\mu_z - \rho z_i}{\sigma_\varepsilon} \right\} \\
&= \Phi\left( \frac{z_j + d/2 - (1-\rho)\mu_z - \rho z_i}{\sigma_\varepsilon} \right) - \Phi\left( \frac{z_j - d/2 - (1-\rho)\mu_z - \rho z_i}{\sigma_\varepsilon} \right)
\end{aligned}$$

• Adjust for the tails:

$$\pi_{i,j} = \begin{cases}
1 - \Phi\left( \dfrac{z_N - d/2 - (1-\rho)\mu_z - \rho z_i}{\sigma_\varepsilon} \right) & \text{if } j = N \\[6pt]
\Phi\left( \dfrac{z_j + d/2 - (1-\rho)\mu_z - \rho z_i}{\sigma_\varepsilon} \right) - \Phi\left( \dfrac{z_j - d/2 - (1-\rho)\mu_z - \rho z_i}{\sigma_\varepsilon} \right) & \text{otherwise} \\[6pt]
\Phi\left( \dfrac{z_1 + d/2 - (1-\rho)\mu_z - \rho z_i}{\sigma_\varepsilon} \right) & \text{if } j = 1
\end{cases}$$
40
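A compact sketch of the whole construction (grid plus transition matrix) in Python, assuming SciPy is available for the normal CDF; the AR(1) parameters in the usage line are illustrative.

```python
import numpy as np
from scipy.stats import norm

def tauchen(rho, sigma_eps, mu_z=0.0, N=9, m=3):
    """Tauchen discretization of z' = (1-rho)*mu_z + rho*z + eps, eps ~ N(0, sigma_eps^2)."""
    sigma_z = sigma_eps / np.sqrt(1.0 - rho**2)
    z = np.linspace(mu_z - m * sigma_z, mu_z + m * sigma_z, N)
    d = z[1] - z[0]
    P = np.empty((N, N))
    for i in range(N):
        cond_mean = (1.0 - rho) * mu_z + rho * z[i]
        # interior bins
        P[i, :] = (norm.cdf((z + d / 2 - cond_mean) / sigma_eps)
                   - norm.cdf((z - d / 2 - cond_mean) / sigma_eps))
        # tail adjustments
        P[i, 0] = norm.cdf((z[0] + d / 2 - cond_mean) / sigma_eps)
        P[i, -1] = 1.0 - norm.cdf((z[-1] - d / 2 - cond_mean) / sigma_eps)
    return z, P

z, P = tauchen(rho=0.95, sigma_eps=0.007)   # illustrative AR(1) parameters
print(P.sum(axis=1))                        # each row sums to one
```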
VAR(1) case: state space

• We can apply Tauchen's method to the VAR(1) case with $z \in \mathbb{R}^K$:

$$z' = Az + \varepsilon' \quad \text{where} \quad \varepsilon' \overset{iid}{\sim} N(0, \Sigma_\varepsilon)$$

• Pick $N_k$'s for k = 1, ..., K. We now have $N = N_1 \times N_2 \times \cdots \times N_K$ possible states.

• For each k = 1, ..., K, we can define

$$z^k_{N_k} = m\sigma_{z^k}, \qquad z^k_1 = -z^k_{N_k}$$

and the remaining points are equally spaced.

• $\sigma^2_{z^k}$ can be obtained from $vec(\Sigma_z) = (I - A\otimes A)^{-1} vec(\Sigma_\varepsilon)$.

41
VAR(1) case: transition probability

• Consider a transition from $z_i = (z^1_{i_1}, z^2_{i_2}, \ldots, z^K_{i_K})$ to $z_j = (z^1_{j_1}, z^2_{j_2}, \ldots, z^K_{j_K})$.

• The associated probability for each state variable k, going from state $i_k$ to $j_k$, is now:

$$\pi^k_{i_k,j_k} = \begin{cases}
1 - \Phi\left( \dfrac{z^k_{N_k} - d_k/2 - A_{kk} z^k_{i_k}}{\sigma_{\varepsilon^k}} \right) & \text{if } j_k = N_k \\[6pt]
\Phi\left( \dfrac{z^k_{j_k} + d_k/2 - A_{kk} z^k_{i_k}}{\sigma_{\varepsilon^k}} \right) - \Phi\left( \dfrac{z^k_{j_k} - d_k/2 - A_{kk} z^k_{i_k}}{\sigma_{\varepsilon^k}} \right) & \text{if } j_k \ne 1, N_k \\[6pt]
\Phi\left( \dfrac{z^k_1 + d_k/2 - A_{kk} z^k_{i_k}}{\sigma_{\varepsilon^k}} \right) & \text{if } j_k = 1
\end{cases}$$

• Therefore, $\pi_{i,j} = \prod_{k=1}^{K} \pi^k_{i_k,j_k}$.

• We can use this method for discretizing higher-order AR processes.

42
Example

• For simplicity, $\Sigma_\varepsilon = I$, and

$$\begin{pmatrix} z^1_{t+1} \\ z^2_{t+1} \end{pmatrix} = \begin{pmatrix} 0.72 & 0 \\ 0 & 0.5 \end{pmatrix} \begin{pmatrix} z^1_t \\ z^2_t \end{pmatrix} + \begin{pmatrix} \varepsilon^1_{t+1} \\ \varepsilon^2_{t+1} \end{pmatrix}$$

• Let m = 3, $N_1 = 3$, $N_2 = 5$. Thus, N = 3 × 5 states in total.

• In this case, $d_1 = 4.3229$, $d_2 = 1.7321$.

• The transition from $(z^1_2, z^2_3)$ to $(z^1_3, z^2_4)$ is given by $\pi^1_{2,3} \times \pi^2_{3,4}$ where

$$\pi^1_{2,3} = 1 - \Phi\left( z^1_3 - d_1/2 - 0.72\, z^1_2 \right) = 0.0153$$

$$\pi^2_{3,4} = \Phi\left( z^2_4 + d_2/2 - 0.5\, z^2_3 \right) - \Phi\left( z^2_4 - d_2/2 - 0.5\, z^2_3 \right) = 0.1886$$

43
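A short sketch that reproduces the numbers on this slide; it assumes $\sigma_\varepsilon = 1$ for both shocks (as implied by $\Sigma_\varepsilon = I$) and uses SciPy for Φ.

```python
import numpy as np
from scipy.stats import norm

rho1, rho2, m = 0.72, 0.5, 3
s1, s2 = 1.0 / np.sqrt(1 - rho1**2), 1.0 / np.sqrt(1 - rho2**2)   # sigma_{z^1}, sigma_{z^2}
z1 = np.linspace(-m * s1, m * s1, 3); d1 = z1[1] - z1[0]          # d1 = 4.3229
z2 = np.linspace(-m * s2, m * s2, 5); d2 = z2[1] - z2[0]          # d2 = 1.7321

pi1_23 = 1 - norm.cdf(z1[2] - d1 / 2 - rho1 * z1[1])              # top bin: ~0.0153
pi2_34 = (norm.cdf(z2[3] + d2 / 2 - rho2 * z2[2])
          - norm.cdf(z2[3] - d2 / 2 - rho2 * z2[2]))              # interior bin: ~0.1886
print(d1, d2, pi1_23, pi2_34, pi1_23 * pi2_34)
```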
Quadrature grid

• Tauchen and Hussey (1991).

• Motivation: quadrature points in integrals

$$\int f(s)\, p(s)\, ds \simeq \sum_{k=1}^{N} f(s_k)\, w_k$$

• Gaussian quadrature: we require the previous equation to be exact for all polynomials of degree less than
or equal to 2N − 1.

44
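Not the full Tauchen-Hussey construction, but a sketch of its quadrature building block: Gauss-Hermite nodes and weights used to approximate an expectation under a normal distribution (the μ and σ below are illustrative assumptions).

```python
import numpy as np

x, w = np.polynomial.hermite.hermgauss(9)        # 9 Gauss-Hermite nodes and weights
mu, sigma = 0.0, 0.02                            # illustrative N(mu, sigma^2)

# E[f(z)] with z ~ N(mu, sigma^2) ~= sum_k w_k f(mu + sqrt(2)*sigma*x_k) / sqrt(pi)
Ef = np.sum(w * np.exp(mu + np.sqrt(2.0) * sigma * x)) / np.sqrt(np.pi)
print(Ef, np.exp(mu + 0.5 * sigma**2))           # check against the exact lognormal mean
```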
Rouwenhorst (1995) Method
• Consider again $z' = \rho z + \varepsilon'$ with $\varepsilon' \overset{iid}{\sim} N(0, \sigma_\varepsilon^2)$.

• Again, we want to approximate it by an N-state Markov chain process with

• $\{z_1, \ldots, z_N\}$ state space.

• Transition probability $\Theta_N$.

• Set the endpoints as $z_N = \sigma_z \sqrt{N-1} \equiv \psi$ and $z_1 = -\psi$.

• $z_2, z_3, \ldots, z_{N-1}$ are equispaced.

• We will derive the transition matrix of size n recursively until n = N:

1. For n = 2, define $\Theta_2$.

2. For 2 < n ≤ N, derive $\Theta_n$ from $\Theta_{n-1}$.

45
State and transition probability

• Define $p = q = \frac{1+\rho}{2}$ (under the assumption of a symmetric distribution) and

$$\Theta_2 = \begin{bmatrix} p & 1-p \\ 1-q & q \end{bmatrix}$$

• Compute $\Theta_n$ by:

$$\Theta_n = p \begin{bmatrix} \Theta_{n-1} & 0 \\ 0' & 0 \end{bmatrix} + (1-p) \begin{bmatrix} 0 & \Theta_{n-1} \\ 0 & 0' \end{bmatrix} + (1-q) \begin{bmatrix} 0' & 0 \\ \Theta_{n-1} & 0 \end{bmatrix} + q \begin{bmatrix} 0 & 0' \\ 0 & \Theta_{n-1} \end{bmatrix}$$

where 0 is an (n − 1) column vector.

• Divide all but the top and bottom rows of $\Theta_n$ by 2 after each iteration.
46
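A sketch of the recursion in code; the AR(1) parameters in the usage line are illustrative assumptions.

```python
import numpy as np

def rouwenhorst(rho, sigma_eps, N):
    """Rouwenhorst discretization of z' = rho*z + eps, eps ~ N(0, sigma_eps^2)."""
    p = q = (1.0 + rho) / 2.0
    Theta = np.array([[p, 1 - p], [1 - q, q]])
    for n in range(3, N + 1):
        T = np.zeros((n, n))
        T[:-1, :-1] += p * Theta          # Theta_{n-1} in the top-left block
        T[:-1, 1:]  += (1 - p) * Theta    # top-right block
        T[1:, :-1]  += (1 - q) * Theta    # bottom-left block
        T[1:, 1:]   += q * Theta          # bottom-right block
        T[1:-1, :] /= 2.0                 # divide all but top and bottom rows by 2
        Theta = T
    psi = sigma_eps / np.sqrt(1 - rho**2) * np.sqrt(N - 1)   # z_N = sigma_z*sqrt(N-1)
    z = np.linspace(-psi, psi, N)
    return z, Theta

z, P = rouwenhorst(rho=0.99, sigma_eps=0.007, N=9)   # illustrative parameters
print(P.sum(axis=1))                                 # rows sum to one
```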
Why divide by two?

• For the n = 3 case, we have

$$\Theta_3 = p \begin{bmatrix} p & 1-p & 0 \\ 1-q & q & 0 \\ 0 & 0 & 0 \end{bmatrix} + (1-p) \begin{bmatrix} 0 & p & 1-p \\ 0 & 1-q & q \\ 0 & 0 & 0 \end{bmatrix} + (1-q) \begin{bmatrix} 0 & 0 & 0 \\ p & 1-p & 0 \\ 1-q & q & 0 \end{bmatrix} + q \begin{bmatrix} 0 & 0 & 0 \\ 0 & p & 1-p \\ 0 & 1-q & q \end{bmatrix}$$

• We can see that the 2nd row sums up to 2!

47
Invariant distribution

• The distribution generated by $\Theta_N$ converges to the invariant distribution $\lambda^{(N)} = (\lambda^{(N)}_1, \ldots, \lambda^{(N)}_N)$ with

$$\lambda^{(N)}_i = \binom{N-1}{i-1} s^{i-1} (1-s)^{N-i}$$

where

$$s = \frac{1-p}{2-(p+q)}$$

• From this invariant distribution, we can compute the moments associated with $\Theta_N$ analytically.

48
Which method is better?

• Kopecky and Suen (2010) argue that the Rouwenhorst method is the best approximation, especially for high
persistence (ρ → 1).

• Test bed:

$$V(k, a) = \max_{c, k' \ge 0} \left\{ \log(c) + \beta \int V(k', a')\, dF(a'|a) \right\}$$
$$\text{s.t. } c + k' = \exp(a)k^\alpha + (1-\delta)k$$
$$a' = \rho a + \varepsilon', \qquad \varepsilon' \overset{iid}{\sim} N(0, \sigma_\varepsilon^2)$$

• Compare statistics under the approximated stationary distribution to a quasi-exact solution using the
Chebyshev parameterized expectation algorithm.

• Comparison also with Adda and Cooper (2003).


49
Results

[Table of comparison results omitted]

50
Stochastic grid

• Randomly chosen grids.

• Rust (1995): it breaks the curse of dimensionality.

• Why?

• How do we generate random numbers in the best way?

51
Interpolation

• Discretization also generates the need for interpolation.

• Simpler approach: linear interpolation.

• Problem: in more than one dimension, linear interpolation may not preserve concavity.

• Shape-preserving splines: Schumaker scheme.

• Trade-off between speed and accuracy of the interpolation.

52
[Figure omitted: value function V(k_t) plotted against k_t]

53
Multigrid algorithms

• Old tradition in numerical analysis.

• Basic idea: solve first a problem in a coarser grid and use it as a guess for more refined solution.

• Examples:

1. Differential equations.

2. Projection methods.

3. Dynamic programming (Chow and Tsitsiklis, 1991).

• Great advantage: extremely easy to code.

54
Applying the algorithm

• After deciding on initialization and discretization, we still need to implement each step:

$$V^T(s) = \max_{a \in A(s)} \left\{ u(s, a) + \beta \int V^{T-1}(s')\, p(ds'|s, a) \right\}$$

• Two numerical operations:

1. Maximization.

2. Integral.

55
Maximization

• We need to apply the max operator.

• Most costly step of value function iteration.

• Brute force (always works): check all the possible choices in the grid.

• A sensible alternative: using a Newton or quasi-Newton algorithm.

• Fancier alternatives: simulated annealing, genetic algorithms,...

56
Brute force

• Sometimes we do not have any other alternative. Examples: problems with discrete choices,
non-differentiabilities, non-convex constraints, etc.

• Even if brute force is expensive, we can speed things up quite a bit:

1. Previous solution.

2. Monotonicity of choices.

3. Concavity (or quasi-concavity) of value and policy functions.

57
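A sketch of how the last two shortcuts can be wired into a brute-force grid search; the payoff array and the continuation value EV below are illustrative stand-ins, not the slides' model.

```python
import numpy as np

nk, beta = 500, 0.95
k_grid = np.linspace(0.1, 10.0, nk)
c = k_grid[:, None]**0.36 + 0.9 * k_grid[:, None] - k_grid[None, :]
u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)   # u(k_i, k'_j), illustrative
EV = np.zeros(nk)                                            # continuation value on the grid

policy = np.zeros(nk, dtype=int)
value = np.empty(nk)
j_start = 0
for i in range(nk):
    best, j_best = -np.inf, j_start
    for j in range(j_start, nk):
        val = u[i, j] + beta * EV[j]
        if val < best:          # concavity: objective is single-peaked in j, so stop early
            break
        best, j_best = val, j
    value[i], policy[i] = best, j_best
    j_start = j_best            # monotonicity: k'(k) is nondecreasing, restart search here
```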
Newton or Quasi-Newton

• Much quicker.

• However:

1. Problem of global convergence.

2. We need to compute derivatives.

• We can mix brute force and Newton-type algorithms.

58
Generalized policy iteration

• Maximization is the most expensive part of value function iteration.

• Often, while we update the value function, the optimal choices change very little.

• This suggests a simple strategy: apply the max operator only from time to time.

• This should remind you of an incomplete policy function iteration.

• Often known as generalized policy iteration.

• How do we choose the optimal timing of the max operator (i.e., the relative sweeps of value and
policy)?

• Related: asynchronous implementations of value and policy function iterations.

59
How do we integrate?

• Exact integration.

• Approximations: Laplace’s method.

• Quadrature.

• Monte Carlo.

60
Convergence assessment

• How do we assess convergence?

• By the contraction mapping property:

$$\left\| V - V^k \right\|_\infty \le \frac{1}{1-\beta} \left\| V^{k+1} - V^k \right\|_\infty$$

• Relation of value function iteration error with Euler equation error.

61
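A small helper implementing this bound as a stopping rule (a sketch; V_new and V_old are two successive value-function iterates stored as arrays on the grid).

```python
import numpy as np

def vfi_error_bound(V_new, V_old, beta):
    """Upper bound on ||V - V^k||_inf implied by the contraction property."""
    return np.max(np.abs(V_new - V_old)) / (1.0 - beta)

# Usage: stop iterating once vfi_error_bound(V_new, V_old, beta) < tol.
```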
Non-local accuracy test

• Proposed by Judd (1992) and Judd and Guu (1997).

• Example: Euler equation from a stochastic neoclassical growth model

$$\frac{1}{c^i(k_t, z_t)} = \beta E_t \left[ \frac{\alpha e^{z_{t+1}} k^i(k_t, z_t)^{\alpha-1}}{c^i(k^i(k_t, z_t), z_{t+1})} \right]$$

we can define:

$$EE^i(k_t, z_t) \equiv 1 - c^i(k_t, z_t)\, \beta E_t \left[ \frac{\alpha e^{z_{t+1}} k^i(k_t, z_t)^{\alpha-1}}{c^i(k^i(k_t, z_t), z_{t+1})} \right]$$

• Units of reporting.

• Interpretation.

62
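A sketch of computing this error with Gauss-Hermite quadrature over $z_{t+1}$. To keep it self-contained, it assumes the Brock-Mirman special case (log utility, full depreciation), where the exact policies are known in closed form, so the reported error should be essentially zero; all parameter values are illustrative.

```python
import numpy as np

alpha, beta, rho, sigma = 0.36, 0.95, 0.95, 0.007   # illustrative parameters
x, w = np.polynomial.hermite.hermgauss(11)          # Gauss-Hermite nodes and weights

def c_pol(k, z):  return (1 - alpha * beta) * np.exp(z) * k**alpha   # exact policies in
def k_pol(k, z):  return alpha * beta * np.exp(z) * k**alpha         # the Brock-Mirman case

def euler_error(k, z):
    kp = k_pol(k, z)
    zp = rho * z + np.sqrt(2.0) * sigma * x          # quadrature nodes for z'
    rhs = np.sum(w * alpha * np.exp(zp) * kp**(alpha - 1) / c_pol(kp, zp)) / np.sqrt(np.pi)
    return 1.0 - c_pol(k, z) * beta * rhs

print(euler_error(1.0, 0.0))   # ~0: the closed-form policies satisfy the Euler equation
```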
Error analysis

• We can use errors in Euler equation to refine grid.

• How?

• Advantages of procedure.

• Problems.

63
The endogenous grid method

• Proposed by Carroll (2005) and Barillas and Fernández-Villaverde (2006).

• Links with operations research: pre-action and post-action states.

• It is actually easier to understand with a concrete example: a basic stochastic neoclassical growth
model.

• The problem has a Bellman equation representation:

$$V(k_t, z_t) = \max_{k_{t+1}} \left\{ \frac{\left(e^{z_t} k_t^\alpha + (1-\delta)k_t - k_{t+1}\right)^{1-\tau}}{1-\tau} + \beta E_t V(k_{t+1}, z_{t+1}) \right\}$$
$$\text{s.t. } z_{t+1} = \rho z_t + \varepsilon_{t+1}$$

where V(·, ·) is the value function of the problem.

64
Changing state variables

• We will use a state variable called "market resources" or "cash-on-hand," instead of $k_t$:

$$Y_t = c_t + k_{t+1} = y_t + (1-\delta)k_t = e^{z_t} k_t^\alpha + (1-\delta)k_t$$

• We use a capital $Y_t$ to denote total market resources and a lowercase $y_t$ for the production function.

• More general point: changes of variables are often key in solving our problems.

• As a result, we write the problem recursively with the Bellman equation:

$$V(Y_t, z_t) = \max_{k_{t+1}} \left\{ \frac{(Y_t - k_{t+1})^{1-\tau}}{1-\tau} + \beta E_t V(Y_{t+1}, z_{t+1}) \right\}$$
$$\text{s.t. } z_{t+1} = \rho z_t + \varepsilon_{t+1}$$

• Note the difference between $V(k_t, z_t)$ and $V(Y_t, z_t)$.


65
Optimality condition

• Since $Y_{t+1}$ is only a function of $k_{t+1}$ and $z_{t+1}$, we can write:

$$\tilde{V}(k_{t+1}, z_t) = \beta E_t V(Y_{t+1}, z_{t+1})$$

to get:

$$V(Y_t, z_t) = \max_{k_{t+1}} \left\{ \frac{(Y_t - k_{t+1})^{1-\tau}}{1-\tau} + \tilde{V}(k_{t+1}, z_t) \right\}$$

• The first-order condition for consumption:

$$(c^*_t)^{-\tau} = \tilde{V}_{k_{t+1}}(k^*_{t+1}, z_t)$$

where $c^*_t = Y_t - k^*_{t+1}$.

66
Backing up consumption

• So, if we know $\tilde{V}(k_{t+1}, z_t)$, consumption is:

$$c^*_t = \left[ \tilde{V}_{k_{t+1}}(k_{t+1}, z_t) \right]^{-\frac{1}{\tau}}$$

for each point in a grid for $k_{t+1}$ and $z_t$.

• It should remind you of Hotz-Miller type estimators.

• Then, given $c^*_t$ and $k_{t+1}$, we can find $Y^*_t = c^*_t + k_{t+1}$ and obtain

$$V(Y^*_t, z_t) = \frac{(c^*_t)^{1-\tau}}{1-\tau} + \tilde{V}(k_{t+1}, z_t)$$

where we can drop the max operator, since we have already computed the optimal level of
consumption.

• Since $Y^*_t = e^{z_t}(k^*_t)^\alpha + (1-\delta)k^*_t$, an alternative interpretation of the algorithm is that, during the
iterations, the grid on $k_{t+1}$ is fixed, but the values of $k_t$ change endogenously. Hence, the name of
Endogenous Grid.
67
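A minimal sketch of the endogenous grid idea for this model, written in its Euler-equation (envelope-condition) form rather than by storing $\tilde{V}_{k_{t+1}}$ directly: the fixed grid is over $k_{t+1}$, and today's cash-on-hand grid comes out endogenously. The parameter values, the two-state Markov chain, and the interpolation choices are all illustrative assumptions.

```python
import numpy as np

alpha, beta, delta, tau = 0.36, 0.95, 0.1, 2.0     # illustrative parameters
z_grid = np.array([-0.05, 0.05])                   # hypothetical productivity states
P = np.array([[0.9, 0.1], [0.1, 0.9]])             # hypothetical transition matrix
kp_grid = np.linspace(0.1, 10.0, 200)              # fixed exogenous grid over k_{t+1}

# Next period's cash-on-hand Y' and gross return R' at each (z', k'): fixed objects
Yp = np.exp(z_grid)[:, None] * kp_grid**alpha + (1 - delta) * kp_grid
Rp = alpha * np.exp(z_grid)[:, None] * kp_grid**(alpha - 1) + (1 - delta)

# Initial guess: consume all cash-on-hand, with c(Y, z) stored on endogenous Y-grids
Y_endog = np.tile(kp_grid, (len(z_grid), 1))
c_endog = Y_endog.copy()

for it in range(2000):
    # Next period's consumption, interpolating the current guess c(Y, z') in Y
    cp = np.array([np.interp(Yp[j], Y_endog[j], c_endog[j]) for j in range(len(z_grid))])
    # Euler equation (envelope form): c^{-tau} = beta * E[ (c')^{-tau} * R' | z ]
    c_new = (beta * P @ (cp**(-tau) * Rp))**(-1.0 / tau)
    Y_new = c_new + kp_grid                        # endogenous cash-on-hand grid today
    if np.max(np.abs(c_new - c_endog)) < 1e-8:
        Y_endog, c_endog = Y_new, c_new
        break
    Y_endog, c_endog = Y_new, c_new
```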
Comparison with standard approach

• In the standard VFI, the optimality condition is:

$$(c^*_t)^{-\tau} = \beta E_t V_k(k^*_{t+1}, z_{t+1})$$

• Since $c_t = e^{z_t} k_t^\alpha + (1-\delta)k_t - k_{t+1}$, we have to solve

$$\left( e^{z_t} k_t^\alpha + (1-\delta)k_t - k^*_{t+1} \right)^{-\tau} = \beta E_t V_k(k^*_{t+1}, z_{t+1})$$

a nonlinear equation in $k^*_{t+1}$ for each point in a grid for $k_t$.

• The key difference is, thus, that the endogenous grid method defines a fixed grid over the values of
$k_{t+1}$ instead of over the values of $k_t$.

• This implies that we already know what values the policy function for next period's capital takes and,
thus, we can skip the root-finding.

68
