Dynamic Programming
Diego Corti
June 2, 2010
Contents
0.1 Stochastic Dynamic Programming
0.1 Stochastic Dynamic Programming
Wealth at date t is
w_t = (1 + r_t) a_t + x_t
and, one period ahead,
w_{t+1} = (1 + r_{t+1}) a_{t+1} + x_{t+1}    (2)
and from the budget constraint we obtain
a_{t+1} = x_t + (1 + r_t) a_t − c_t.
So,
w_{t+1} = (1 + r_{t+1})(x_t + (1 + r_t) a_t − c_t) + x_{t+1}
w_{t+1} = (1 + r_{t+1})(w_t − c_t) + x_{t+1}    (3)
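As a quick numerical sanity check of this substitution (a minimal sketch; all parameter values below are illustrative and not taken from the text), one can verify that computing w_{t+1} from its definition (2) and from the reduced form (3) gives the same number:

# Check that (1 + r_{t+1})(w_t - c_t) + x_{t+1} matches the
# definition w_{t+1} = (1 + r_{t+1}) a_{t+1} + x_{t+1}.
# All numbers are illustrative, not from the text.
r_t, r_t1 = 0.05, 0.04            # interest rates at t and t + 1
a_t, x_t, x_t1 = 10.0, 2.0, 2.5   # assets and labour income
c_t = 1.5                         # today's consumption

w_t = (1 + r_t) * a_t + x_t              # wealth at t
a_t1 = x_t + (1 + r_t) * a_t - c_t       # budget constraint
w_def = (1 + r_t1) * a_t1 + x_t1         # w_{t+1} from its definition (2)
w_red = (1 + r_t1) * (w_t - c_t) + x_t1  # w_{t+1} from equation (3)
assert abs(w_def - w_red) < 1e-12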
Now we can write the value function F(w_t) as
F(w_t) = max_{c_t} [U(c_t) + βF[w_{t+1}]]    (4)
where
w_{t+1} = (1 + r_{t+1})(w_t − c_t) + x_{t+1}.
Equation (4) is called Bellman's equation. F(w_t) is the maximum constrained value of lifetime utility V_t, so
V_t = U(c_t) + βF[w_{t+1}],
where F(w_{t+1}) is the maximum value of V_{t+1}. So c_t is our "policy variable": an optimal consumption plan must maximize V_{t+1} subject to the future wealth level produced by c_t (today's consumption). Note that w_{t+1} is a function of c_t.
With the Bellman equation we exchange the original problem of finding an infinite sequence of period-by-period solutions that maximizes expression (1) for the problem of finding the value F(w_t) and a function w_{t+1}(w_t) that solve the continuum of maximization problems (4), one maximization problem for each value of w_t (with t = 1, 2, ...).
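To make this concrete, here is a minimal value function iteration sketch for problem (4) in Python. Everything in it is an illustrative assumption, not something given in the text: log utility U(c) = ln c, a constant interest rate r, zero labour income x, and the grid sizes.

import numpy as np

# Value function iteration for F(w) = max_c [U(c) + beta * F(w')],
# with w' = (1 + r)(w - c), U(c) = ln(c), and x = 0.
# All parameter values are illustrative assumptions.
beta, r = 0.95, 0.04
grid = np.linspace(0.1, 10.0, 200)   # wealth grid; w' is chosen on the same grid
F = np.zeros_like(grid)              # initial guess for the value function

for _ in range(2000):
    F_new = np.empty_like(F)
    for i, w in enumerate(grid):
        c = w - grid / (1 + r)                  # consumption implied by each choice of w'
        ok = c > 0                              # feasibility of each choice
        vals = np.where(ok, np.log(np.where(ok, c, 1.0)) + beta * F, -np.inf)
        F_new[i] = vals.max()
    if np.max(np.abs(F_new - F)) < 1e-8:        # sup-norm stopping rule
        F = F_new
        break
    F = F_new

Because the Bellman operator is a contraction in the sup norm (with modulus β), the loop converges to an approximation of F on the grid, and the maximizing w′ at each grid point traces out the policy function.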
To handle stochastic maximization problems, we introduce stochastic dynamic programming. Now the maximization problem is represented by the function
max_{c_{t+s}} V_t = Σ_{s=0}^{∞} β^s E_t[U(c_{t+s})]    (5)
subject to the budget constraint
a_{t+1} + c_t = x_t + (1 + r_t) a_t,
then,
F(w_t) = max_{c_t} [U(c_t) + βE_t F[(1 + r_{t+1})(w_t − c_t) + x_{t+1}]]
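Numerically, the only new ingredient relative to the deterministic case is the conditional expectation E_t, which becomes a probability-weighted sum over the realizations of x_{t+1}. A minimal sketch, again under illustrative assumptions not from the text (log utility, constant r, a two-point i.i.d. distribution for x_{t+1}):

import numpy as np

# Iterating the stochastic Bellman operator:
# (TF)(w) = max_c [ln(c) + beta * E[F((1 + r)(w - c) + x')]],
# where x' takes the values in x_vals with probabilities x_probs.
# All parameter values are illustrative assumptions.
beta, r = 0.95, 0.04
x_vals = np.array([1.0, 3.0])         # possible labour-income draws
x_probs = np.array([0.5, 0.5])        # their probabilities
grid = np.linspace(0.1, 20.0, 100)    # wealth grid

def bellman_step(F):
    F_new = np.empty_like(F)
    for i, w in enumerate(grid):
        best = -np.inf
        for c in np.linspace(1e-3, w, 50):              # candidate consumption levels
            w_next = (1 + r) * (w - c) + x_vals         # wealth in each shock state
            EF = x_probs @ np.interp(w_next, grid, F)   # E_t F(w_{t+1}); np.interp clamps
            best = max(best, np.log(c) + beta * EF)     # values that fall beyond the grid
        F_new[i] = best
    return F_new

F = np.zeros_like(grid)
for _ in range(300):                  # iterate the operator toward its fixed point
    F = bellman_step(F)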
Note that an increment to wealth has the same effect on utility regardless of the use to which the wealth is put, consumption or saving; this implies the envelope condition
F′(w_t) = U′(c_t).
(The procedure follows Obstfeld and Rogoff.) Differentiating the right-hand side of the Bellman equation with respect to c_t gives the first-order condition U′(c_t) = (1 + r_{t+1})βE_t[F′(w_{t+1})], and combining it with the envelope condition dated t + 1, F′(w_{t+1}) = U′(c_{t+1}), we obtain from the Bellman equation the Euler equation
U′(c_t) = (1 + r_{t+1})βE_t[U′(c_{t+1})]
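As a concrete special case (a standard textbook specialization, not worked out in the text): with logarithmic utility U(c) = ln c we have U′(c) = 1/c, so the Euler equation becomes
1/c_t = (1 + r_{t+1}) β E_t[1/c_{t+1}],
and in the deterministic case it reduces to c_{t+1} = (1 + r_{t+1})β c_t: consumption grows over time exactly when (1 + r_{t+1})β > 1.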