
Habit Formation

Diego Corti

June 2, 2010

Contents
0.1 Stochastic Dynamic Programming

0.1 Stochastic Dynamic Programming

Dynamic programming is an alternative approach to intertemporal problems. Our problem is

\[
\max_{\{c_{t+s}\}} V_t = \sum_{s=0}^{\infty} \beta^s\, U(c_{t+s}) \tag{1}
\]

with the budget constraint, for each period,

\[
a_{t+1} + c_t = x_t + (1 + r_t)\,a_t
\]

This intertemporal problem is in discrete time and has a time-separable objective function, so it can be rewritten in recursive form. The recursive structure is obtained using a value function that gives the maximum constrained value of $V_t$ as a function of initial wealth $w_t$. We define wealth in each period as

\[
w_t = (1 + r_t)\,a_t + x_t, \qquad
w_{t+1} = (1 + r_{t+1})\,a_{t+1} + x_{t+1} \tag{2}
\]

From the budget constraint we obtain

\[
a_{t+1} = x_t + (1 + r_t)\,a_t - c_t
\]

so that

\[
w_{t+1} = (1 + r_{t+1})\big(x_t + (1 + r_t)\,a_t - c_t\big) + x_{t+1}
        = (1 + r_{t+1})(w_t - c_t) + x_{t+1} \tag{3}
\]
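As a quick numerical sanity check of the recursion (3), here is a minimal Python sketch; the numbers for assets, income, consumption and the interest rates are arbitrary illustrative values, not taken from the text.

\begin{verbatim}
# Check that the wealth recursion (3) agrees with the budget constraint.
# All numbers below are arbitrary illustrative values.
a_t, x_t, x_t1 = 10.0, 2.0, 2.5   # assets a_t, labor income x_t and x_{t+1}
r_t, r_t1 = 0.03, 0.04            # interest rates in t and t+1
c_t = 1.5                         # consumption chosen in period t

w_t = (1 + r_t) * a_t + x_t                       # wealth, eq. (2)
a_t1 = x_t + (1 + r_t) * a_t - c_t                # budget constraint
w_t1_direct = (1 + r_t1) * a_t1 + x_t1            # definition of w_{t+1}, eq. (2)
w_t1_recursive = (1 + r_t1) * (w_t - c_t) + x_t1  # recursion, eq. (3)

assert abs(w_t1_direct - w_t1_recursive) < 1e-12
\end{verbatim}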
Now we can write the value function $F(w_t)$ as

\[
F(w_t) = \max_{c_t}\,\big[U(c_t) + \beta F(w_{t+1})\big] \tag{4}
\]

where

\[
w_{t+1} = (1 + r_{t+1})(w_t - c_t) + x_{t+1}
\]
Equation (4) is called Bellman's equation. $F(w_t)$ is the maximum constrained value of $V_t$, so that

\[
V_t = U(c_t) + \beta F(w_{t+1})
\]

where $F(w_{t+1})$ is the maximum value of $V_{t+1}$. Thus $c_t$ is our policy variable: an optimal consumption plan must maximize $V_{t+1}$ subject to the future wealth level produced by $c_t$ (today's consumption). Note that $w_{t+1}$ is a function of $c_t$. With the Bellman equation we exchange the original problem of finding an infinite sequence of period-by-period choices that maximizes expression (1) for the problem of finding the value function $F(w_t)$ and a policy function for $w_{t+1}$ that solve the continuum of maximization problems in (4), one maximization problem for each value of $w_t$ (with $t = 1, 2, \ldots$). To consider stochastic maximization problems we introduce stochastic dynamic programming. Now the maximization problem is represented by the function

\[
\max_{\{c_{t+s}\}} V_t = \sum_{s=0}^{\infty} \beta^s\, E_t\big[U(c_{t+s})\big] \tag{5}
\]

subject to the budget constraint

\[
a_{t+1} + c_t = x_t + (1 + r_t)\,a_t
\]

Now Bellman's equation is

\[
F(w_t) = \max_{c_t}\,\big[U(c_t) + \beta E_t F(w_{t+1})\big] \tag{6}
\]

that is,

\[
F(w_t) = \max_{c_t}\,\big[U(c_t) + \beta E_t F\big((1 + r_{t+1})(w_t - c_t) + x_{t+1}\big)\big]
\]
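Equation (6) also suggests a numerical solution method: value function iteration on a grid for wealth. The following Python sketch is not part of the derivation above; it assumes, purely for illustration, CRRA utility, a constant interest rate, and a two-point i.i.d. labor-income process.

\begin{verbatim}
import numpy as np

# Illustrative assumptions (not from the text): CRRA utility, constant r,
# and labor income x drawn i.i.d. from two values with equal probability.
beta, r, gamma = 0.95, 0.04, 2.0
x_vals = np.array([0.5, 1.5])
x_prob = np.array([0.5, 0.5])

def U(c):
    return c**(1 - gamma) / (1 - gamma)

w_grid = np.linspace(0.1, 20.0, 200)   # grid for wealth w_t
F = np.zeros_like(w_grid)              # initial guess for F(w)

for _ in range(2000):
    F_new = np.empty_like(F)
    for i, w in enumerate(w_grid):
        c = np.linspace(1e-3, w, 100)              # candidate consumption levels
        # next-period wealth for every (c, x') pair, eq. (3)
        w_next = (1 + r) * (w - c[:, None]) + x_vals[None, :]
        # E_t F(w_{t+1}); np.interp clamps outside the grid (fine for a sketch)
        EF = np.interp(w_next, w_grid, F) @ x_prob
        F_new[i] = np.max(U(c) + beta * EF)        # Bellman operator, eq. (6)
    if np.max(np.abs(F_new - F)) < 1e-6:
        break
    F = F_new
\end{verbatim}

Iterating the Bellman operator in this way converges because the operator is a contraction with modulus $\beta$; the maximizing $c$ at each grid point approximates the optimal consumption policy $c(w_t)$.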

The first-order condition for the maximum in (6), taken with respect to $c_t$, is

\[
U'(c_t) - (1 + r_{t+1})\,\beta\, E_t F'(w_{t+1}) = 0
\]
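Spelling out the differentiation step (treating $r_{t+1}$ as known at date $t$, as the text implicitly does by writing $(1 + r_{t+1})$ outside the expectation), the derivative of the maximand in (6) with respect to $c_t$ is, using (3),

\[
\frac{\partial}{\partial c_t}\Big[U(c_t) + \beta E_t F\big((1 + r_{t+1})(w_t - c_t) + x_{t+1}\big)\Big]
= U'(c_t) - (1 + r_{t+1})\,\beta\, E_t F'(w_{t+1}),
\]

which is set to zero at the optimum.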

Note that an increment to wealth has the same effect on utility regardless of the use to which the wealth is put, consumption or saving; this envelope property implies

\[
F'(w_t) = U'(c_t)
\]

(the procedure follows Obstfeld and Rogoff; a sketch is given below). Shifting this condition one period forward gives $F'(w_{t+1}) = U'(c_{t+1})$, and substituting it into the first-order condition we obtain the Euler equation

\[
U'(c_t) = (1 + r_{t+1})\,\beta\, E_t\big[U'(c_{t+1})\big]
\]
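A sketch of the envelope argument: differentiate Bellman's equation (6) with respect to $w_t$ along the optimal policy $c_t = c(w_t)$, using $w_{t+1} = (1 + r_{t+1})(w_t - c_t) + x_{t+1}$:

\[
F'(w_t) = U'(c_t)\,\frac{\partial c_t}{\partial w_t}
+ (1 + r_{t+1})\,\beta\, E_t F'(w_{t+1})\Big(1 - \frac{\partial c_t}{\partial w_t}\Big)
\]
\[
= \frac{\partial c_t}{\partial w_t}\Big[U'(c_t) - (1 + r_{t+1})\,\beta\, E_t F'(w_{t+1})\Big]
+ (1 + r_{t+1})\,\beta\, E_t F'(w_{t+1}).
\]

The term in square brackets is zero by the first-order condition, and the remaining term equals $U'(c_t)$ for the same reason, so $F'(w_t) = U'(c_t)$: the marginal value of wealth equals the marginal utility of consumption, whichever use the extra wealth is put to.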
