1 - Dynamic Programming
Macroeconomía II
Maestría en Economía
UTDT
Francisco J. Ciocchini
2023
General Problem
• Consider the following dynamic problem with a finite horizon:

    max U(k, c)
    s.t. F_s(k, c) = 0,  s = 0, 1, ..., T
    k_0 given

• The solution expresses the controls and the states as functions of the initial state k_0:

    c_0 = H_0(k_0)
    ...
    c_T = H_T(k_0)
    k_1 = J_1(k_0)
    ...
    k_{T+1} = J_{T+1}(k_0)
• Notation: k ≡ (k_0, k_1, ..., k_{T+1}),  c ≡ (c_0, c_1, ..., c_T).
• The Lagrangian is L = U(k, c) + Σ_{s=0}^{T} λ_s F_s(k, c).
• First Order Conditions (FOC):

    ∂L/∂c_0 = 0  ⇒  ∂U(k,c)/∂c_0 + Σ_{s=0}^{T} λ_s ∂F_s(k,c)/∂c_0 = 0
    ...
    ∂L/∂c_T = 0  ⇒  ∂U(k,c)/∂c_T + Σ_{s=0}^{T} λ_s ∂F_s(k,c)/∂c_T = 0
    ∂L/∂k_1 = 0  ⇒  ∂U(k,c)/∂k_1 + Σ_{s=0}^{T} λ_s ∂F_s(k,c)/∂k_1 = 0
    ...
    ∂L/∂k_{T+1} = 0  ⇒  ∂U(k,c)/∂k_{T+1} + Σ_{s=0}^{T} λ_s ∂F_s(k,c)/∂k_{T+1} = 0
    ∂L/∂λ_0 = 0  ⇒  F_0(k, c) = 0
    ...
    ∂L/∂λ_T = 0  ⇒  F_T(k, c) = 0

• We have a system of 3(T+1) equations to solve for 3(T+1) unknowns, c_0, ..., c_T, k_1, ..., k_{T+1}, λ_0, ..., λ_T, in terms of k_0.
• Notice that, in general, the system has to be solved simultaneously for the 3(T+1) unknowns. This is usually hard.
Special Case
• Assume the objective is additively separable across periods:

    U(k, c) = u_0(k_0, c_0) + ... + u_T(k_T, c_T) + S(k_{T+1})

  where S(·) is a terminal-value function.
• Under the previous assumptions, our problem becomes:

    max Σ_{t=0}^{T} u_t(k_t, c_t) + S(k_{T+1})
    s.t. k_1 = G_0(k_0, c_0)
         k_2 = G_1(k_1, c_1)
         ...
         k_{T+1} = G_T(k_T, c_T)
    k_0 given

• We have:

    k_{τ+1} = G_τ(k_τ, c_τ)
    k_{τ+2} = G_{τ+1}(k_{τ+1}, c_{τ+1}) = G_{τ+1}(G_τ(k_τ, c_τ), c_{τ+1})
    k_{τ+3} = G_{τ+2}(k_{τ+2}, c_{τ+2}) = G_{τ+2}(G_{τ+1}(G_τ(k_τ, c_τ), c_{τ+1}), c_{τ+2})
    ...
• Also, each period's return u_τ(k_τ, c_τ) depends only on the current state and the current control, not on past or future ones.
• We'll show below that this key property gives the problem a recursive structure.
• The Lagrangian for the new problem is

    L = Σ_{t=0}^{T} u_t(k_t, c_t) + S(k_{T+1}) + Σ_{t=0}^{T} λ_t [G_t(k_t, c_t) − k_{t+1}]
• FOC:

    ∂L/∂c_0 = 0  ⇒  ∂u_0(k_0,c_0)/∂c_0 + λ_0 ∂G_0(k_0,c_0)/∂c_0 = 0
    ...
    ∂L/∂c_T = 0  ⇒  ∂u_T(k_T,c_T)/∂c_T + λ_T ∂G_T(k_T,c_T)/∂c_T = 0
    ∂L/∂k_t = 0  ⇒  ∂u_t(k_t,c_t)/∂k_t + λ_t ∂G_t(k_t,c_t)/∂k_t − λ_{t−1} = 0,  t = 1, ..., T
    ∂L/∂k_{T+1} = 0  ⇒  S'(k_{T+1}) − λ_T = 0
    ∂L/∂λ_0 = 0  ⇒  k_1 = G_0(k_0, c_0)
    ...
    ∂L/∂λ_T = 0  ⇒  k_{T+1} = G_T(k_T, c_T)
• The solution can be written in feedback form:

    c_t = h_t(k_t)
    k_{t+1} = g_t(k_t)

• The functions h_t(·) and g_t(·) are usually called policy functions (also called optimal feedback rules).
• Forward iteration over the functions above gives the solution in terms of k_0:

    c_0 = H_0(k_0) ≡ h_0(k_0)
    k_1 = J_1(k_0) ≡ g_0(k_0)
    c_1 = H_1(k_0) ≡ h_1(g_0(k_0))
    ...
Special Case: Recursive Solution
• Start at t = T.
• The FOC for period T are:

    ∂u_T(k_T,c_T)/∂c_T + λ_T ∂G_T(k_T,c_T)/∂c_T = 0
    λ_T = S'(k_{T+1})
    k_{T+1} = G_T(k_T, c_T)

• Notice that we have a system of three equations that can be solved for the three unknowns, c_T, k_{T+1}, and λ_T, in terms of k_T. We get:

    c_T = h_T(k_T)
    k_{T+1} = g_T(k_T) ≡ G_T(k_T, h_T(k_T))
    λ_T = ℓ_T(k_T)
• Now go one step back to t = T − 1.
• The FOC for period T − 1 are:

    ∂u_{T−1}(k_{T−1},c_{T−1})/∂c_{T−1} + λ_{T−1} ∂G_{T−1}(k_{T−1},c_{T−1})/∂c_{T−1} = 0
    λ_{T−1} = ∂u_T(k_T,c_T)/∂k_T + λ_T ∂G_T(k_T,c_T)/∂k_T
    k_T = G_{T−1}(k_{T−1}, c_{T−1})

• Using c_T = h_T(k_T) and λ_T = ℓ_T(k_T) from the period-T step, the middle equation involves only k_T.
• t = T − 1 (cont.)
• We obtained a system of three equations that can be solved for the three unknowns, c_{T−1}, k_T, and λ_{T−1}, in terms of k_{T−1}. We get:

    c_{T−1} = h_{T−1}(k_{T−1})
    k_T = g_{T−1}(k_{T−1}) ≡ G_{T−1}(k_{T−1}, h_{T−1}(k_{T−1}))
    λ_{T−1} = ℓ_{T−1}(k_{T−1})
• Now go one step back to t = T − 2.
• The FOC for period T − 2 are:

    ∂u_{T−2}(k_{T−2},c_{T−2})/∂c_{T−2} + λ_{T−2} ∂G_{T−2}(k_{T−2},c_{T−2})/∂c_{T−2} = 0
    λ_{T−2} = ∂u_{T−1}(k_{T−1},c_{T−1})/∂k_{T−1} + λ_{T−1} ∂G_{T−1}(k_{T−1},c_{T−1})/∂k_{T−1}
    k_{T−1} = G_{T−2}(k_{T−2}, c_{T−2})

• We know from the FOC for period T − 1 that c_{T−1} = h_{T−1}(k_{T−1}) and λ_{T−1} = ℓ_{T−1}(k_{T−1}). Hence, we can write:

    ∂u_{T−2}(k_{T−2},c_{T−2})/∂c_{T−2} + λ_{T−2} ∂G_{T−2}(k_{T−2},c_{T−2})/∂c_{T−2} = 0
    λ_{T−2} = ∂u_{T−1}(k_{T−1},h_{T−1}(k_{T−1}))/∂k_{T−1} + ℓ_{T−1}(k_{T−1}) ∂G_{T−1}(k_{T−1},h_{T−1}(k_{T−1}))/∂k_{T−1}
    k_{T−1} = G_{T−2}(k_{T−2}, c_{T−2})
• t = T − 2 (cont.)
• We obtained a system of three equations that can be solved for the three unknowns, c_{T−2}, k_{T−1}, and λ_{T−2}, in terms of k_{T−2}. We get:

    c_{T−2} = h_{T−2}(k_{T−2})
    k_{T−1} = g_{T−2}(k_{T−2}) ≡ G_{T−2}(k_{T−2}, h_{T−2}(k_{T−2}))
    λ_{T−2} = ℓ_{T−2}(k_{T−2})
• Keep going back in time until t = 0.
• The FOC for period 0 are:

    ∂u_0(k_0,c_0)/∂c_0 + λ_0 ∂G_0(k_0,c_0)/∂c_0 = 0
    λ_0 = ∂u_1(k_1,h_1(k_1))/∂k_1 + ℓ_1(k_1) ∂G_1(k_1,h_1(k_1))/∂k_1
    k_1 = G_0(k_0, c_0)
• t = 0 (cont.)
• We obtained a system of three equations that can be solved for the three unknowns, c_0, k_1, and λ_0, in terms of k_0. We get:

    c_0 = h_0(k_0)
    k_1 = g_0(k_0) ≡ G_0(k_0, h_0(k_0))
    λ_0 = ℓ_0(k_0)
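The backward recursion above is easy to implement numerically. The sketch below is not part of the original notes (all function names are my own): each period's policy is approximated by a grid search over the fraction of the state consumed, and it is illustrated with the cake-eating specification solved in Appendix I (u_t(k, c) = ln c, G_t(k, c) = k − c, S(k) = ln k, T = 2), whose known answer is c_t* = k_0/4.

```python
import math

def backward_induction(T, u, G, S, n_grid=60):
    """Solve max sum_t u(k_t, c_t) + S(k_{T+1}) s.t. k_{t+1} = G(k_t, c_t)
    by backward induction. Period-t policies are approximated by searching
    over the fraction of the state consumed, c = k * i / n_grid."""
    W_next = S                       # value of the tail problem from t+1 on
    policies = [None] * (T + 1)      # policies[t](k) -> approximately optimal c_t
    for t in range(T, -1, -1):
        def h(k, W_next=W_next):
            best_c, best_val = None, -math.inf
            for i in range(1, n_grid):
                c = k * i / n_grid
                val = u(k, c) + W_next(G(k, c))
                if val > best_val:
                    best_c, best_val = c, val
            return best_c
        def W(k, h=h, W_next=W_next):
            # W_t(k): value of following the period-t policy from state k on.
            c = h(k)
            return u(k, c) + W_next(G(k, c))
        policies[t] = h
        W_next = W                   # W_t becomes the next step's continuation
    return policies

def simulate(policies, k0, G):
    """Forward iteration: apply each period's policy starting from k0."""
    k, path = k0, []
    for h in policies:
        c = h(k)
        path.append((k, c))
        k = G(k, c)
    return path, k
```

With T = 2 and k_0 = 1, the simulated path reproduces the closed-form solution c_0* = c_1* = c_2* = k_3* = k_0/4 of Appendix I up to grid error (here the optimal fractions 1/4, 1/3, 1/2 lie exactly on the grid).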
Dynamic Programming and the Bellman Equation
• Now we'll show that the previous problem can be solved using an alternative method.
• Define the following sequence of optimization problems, for t ∈ {0, 1, ..., T}:

    W_t(k_t) ≡ max_{ {c_s, k_{s+1}}_{s=t}^{T} } Σ_{s=t}^{T} u_s(k_s, c_s) + S(k_{T+1})
    s.t. k_{s+1} = G_s(k_s, c_s),  s = t, ..., T
    k_t given

  with

    W_{T+1}(k_{T+1}) = S(k_{T+1})
• Consider the previous problem for t = 0:

    W_0(k_0) ≡ max Σ_{t=0}^{T} u_t(k_t, c_t) + S(k_{T+1})
    s.t. k_1 = G_0(k_0, c_0)
    ...
    k_0 given

• This is exactly our original problem.
• Consider period t = T.
• Then:

    W_T(k_T) ≡ max_{c_T, k_{T+1}} u_T(k_T, c_T) + W_{T+1}(k_{T+1})
    s.t. k_{T+1} = G_T(k_T, c_T)
    k_T given

• Lagrangian:

    L = u_T(k_T, c_T) + W_{T+1}(k_{T+1}) + λ_T [G_T(k_T, c_T) − k_{T+1}]

• FOC:

    ∂L/∂c_T = 0  ⇒  ∂u_T(k_T,c_T)/∂c_T + λ_T ∂G_T(k_T,c_T)/∂c_T = 0
    ∂L/∂k_{T+1} = 0  ⇒  λ_T = W'_{T+1}(k_{T+1})
    ∂L/∂λ_T = 0  ⇒  k_{T+1} = G_T(k_T, c_T)
• t = T (cont.)
• But W_{T+1}(k_{T+1}) = S(k_{T+1}) ⇒ W'_{T+1}(k_{T+1}) = S'(k_{T+1}). Then:

    ∂u_T(k_T,c_T)/∂c_T + λ_T ∂G_T(k_T,c_T)/∂c_T = 0
    λ_T = S'(k_{T+1})
    k_{T+1} = G_T(k_T, c_T)

• This system coincides with the period-T FOC of our original problem.
• Hence, we get:

    c_T = h_T(k_T)
    k_{T+1} = g_T(k_T)
    λ_T = ℓ_T(k_T)
• Now consider period t = T − 1:

    W_{T−1}(k_{T−1}) ≡ max_{c_{T−1}, k_T} u_{T−1}(k_{T−1}, c_{T−1}) + W_T(k_T)
    s.t. k_T = G_{T−1}(k_{T−1}, c_{T−1})
    k_{T−1} given

• Lagrangian:

    L = u_{T−1}(k_{T−1}, c_{T−1}) + W_T(k_T) + λ_{T−1} [G_{T−1}(k_{T−1}, c_{T−1}) − k_T]

• FOC:

    ∂L/∂c_{T−1} = 0  ⇒  ∂u_{T−1}(k_{T−1},c_{T−1})/∂c_{T−1} + λ_{T−1} ∂G_{T−1}(k_{T−1},c_{T−1})/∂c_{T−1} = 0
    ∂L/∂k_T = 0  ⇒  W'_T(k_T) − λ_{T−1} = 0
    ∂L/∂λ_{T−1} = 0  ⇒  k_T = G_{T−1}(k_{T−1}, c_{T−1})
• t = T − 1 (cont.)
• We know from our period-T problem that

    W_T(k_T) = u_T(k_T, h_T(k_T)) + S(G_T(k_T, h_T(k_T)))

  Differentiating with respect to k_T:

    W'_T(k_T) = ∂u_T/∂k_T + ℓ_T(k_T) ∂G_T/∂k_T + [∂u_T/∂c_T + ℓ_T(k_T) ∂G_T/∂c_T] h'_T(k_T)
    W'_T(k_T) = ∂u_T(k_T, h_T(k_T))/∂k_T + ℓ_T(k_T) ∂G_T(k_T, h_T(k_T))/∂k_T

  where we used S'(G_T(k_T, h_T(k_T))) = ℓ_T(k_T) and set [·] = 0 in the penultimate line, by the FOC of period T.
• The previous expression can be found directly by means of the Envelope Theorem.
• t = T − 1 (cont.)
• Substituting the previous expression for W'_T(k_T) into the FOC at T − 1 we get:

    ∂u_{T−1}(k_{T−1},c_{T−1})/∂c_{T−1} + λ_{T−1} ∂G_{T−1}(k_{T−1},c_{T−1})/∂c_{T−1} = 0
    λ_{T−1} = ∂u_T(k_T, h_T(k_T))/∂k_T + ℓ_T(k_T) ∂G_T(k_T, h_T(k_T))/∂k_T
    k_T = G_{T−1}(k_{T−1}, c_{T−1})

  which coincides with the period-(T − 1) system of the recursive solution.
• We can keep going backward in time and show that, in each period, the solution coincides with the recursive solution to our original problem.
• Bottom line: we can solve our original "big" problem by solving the sequence of "small" problems defined by the Bellman Equations.
• Recall we've found W'_{T+1}(k_{T+1}) = λ_T and W'_T(k_T) = λ_{T−1}. More generally: W'_t(k_t) = λ_{t−1} for all t ≥ 1. Hence, the Lagrange multipliers give the marginal value of the state variable.
• Remark: in our previous argument we assumed without proof that the functions h_t(·) and W_t(·) are differentiable; formal conditions for this have been established in the literature (e.g., Benveniste & Scheinkman (1979)).
• Another perspective on the recursive nature of the problem can be obtained from the following observations.
• First, notice that:

    W_0(k_0) = max_{c_0, k_1} {u_0(k_0, c_0) + W_1(k_1)}
• Bellman's Principle of Optimality
• Suppose the policy functions {c_t = h_t(k_t)}_{t=0}^{T} solve the original problem, with k_0 given. Then, for any τ ∈ {1, ..., T}, the truncated collection {c_t = h_t(k_t)}_{t=τ}^{T} solves the corresponding tail problem, with k_τ given.
• Particular case:

    u_t(k_t, c_t) = β^t u(k_t, c_t),

  where β ∈ (0, 1) is a discount factor.
• Bellman Equation:

    W_t(k_t) = max_{c_t, k_{t+1}} β^t u(k_t, c_t) + W_{t+1}(k_{t+1})
    s.t. k_{t+1} = G_t(k_t, c_t)
    k_t given

• Define V_t(k_t) ≡ β^{−t} W_t(k_t). Dividing the Bellman Equation by β^t:

    β^{−t} W_t(k_t) = max_{c_t, k_{t+1}} u(k_t, c_t) + β [β^{−(t+1)} W_{t+1}(k_{t+1})]

• Finally:

    V_t(k_t) = max_{c_t, k_{t+1}} u(k_t, c_t) + β V_{t+1}(k_{t+1})

• Notice that β^t is not included in the first term of the right-hand side, but V_{t+1}(k_{t+1}) is multiplied by β to discount it to period t.
Infinite Horizon
• Up to now, we've been dealing with the following problem:

    max_{ {c_t, k_{t+1}}_{t=0}^{T} } Σ_{t=0}^{T} u_t(k_t, c_t) + S(k_{T+1})
    s.t. k_{t+1} = G_t(k_t, c_t)
    k_0 given
• Now let T → ∞ and assume that the return and transition functions are time-invariant:

    u_t(k_t, c_t) = β^t u(k_t, c_t)
    G_t(k_t, c_t) = G(k_t, c_t)
• Hence, consider the problem:

    max_{ {c_t, k_{t+1}}_{t=0}^{∞} } Σ_{t=0}^{∞} β^t u(k_t, c_t) = u(k_0, c_0) + β u(k_1, c_1) + β² u(k_2, c_2) + ...
    s.t. k_{t+1} = G(k_t, c_t)
    k_0 given

• Notation: Σ_{t=0}^{∞} β^t u(k_t, c_t) denotes lim_{T→∞} Σ_{t=0}^{T} β^t u(k_t, c_t).
• Remark: in general,

    max lim_{T→∞} Σ_{t=0}^{T} β^t u(k_t, c_t) ≠ lim_{T→∞} max Σ_{t=0}^{T} β^t u(k_t, c_t),

  so we can't just take limits on the finite-horizon solution.
• Now consider the same problem starting at t = 1:

    max_{ {c_t, k_{t+1}}_{t=1}^{∞} } Σ_{t=1}^{∞} β^{t−1} u(k_t, c_t) = u(k_1, c_1) + β u(k_2, c_2) + β² u(k_3, c_3) + ...
    s.t. k_{t+1} = G(k_t, c_t)
    k_1 given

• This problem has exactly the same structure as the original one.
• Both the infinite horizon and the time invariance of the functions u(·) and G(·) are crucial for this.
• Moreover, this property holds for arbitrary periods, t and t + s.
• The property described above suggests that the solution to the original problem will display time-invariant policy functions:

    c_t = h(k_t)
    k_{t+1} = g(k_t)

• The Bellman Equation is now:

    V(k_t) = max_{c_t, k_{t+1}} u(k_t, c_t) + β V(k_{t+1})
    s.t. k_{t+1} = G(k_t, c_t)
    k_t given
• Substituting the constraint into the objective we get:

    V(k_t) = max_{c_t} u(k_t, c_t) + β V(G(k_t, c_t))
• Value-Function Iteration
• This method proceeds by constructing a sequence of value functions and associated policy functions. The sequence is created by iterating on

    V_{j+1}(k) = max_c u(k, c) + β V_j(G(k, c)),

  starting from some initial guess V_0.
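As an illustration (this code is not part of the notes; the names are my own), the iteration can be carried out exactly for an infinite-horizon, discounted version of the cake-eating example of Appendix I: u(k, c) = ln c and G(k, c) = k − c. Starting from V_0 = 0, every iterate has the log-linear form V_j(k) = a_j + b_j ln k, so it suffices to update the pair (a_j, b_j):

```python
import math

def vfi_cake(beta=0.9, iters=200):
    """Value-function iteration for V(k) = max_c { ln c + beta*V(k - c) },
    starting from V_0 = 0. Each iterate is V_j(k) = a_j + b_j*ln k."""
    a, b = 0.0, 0.0                      # V_0 = 0
    for _ in range(iters):
        if b == 0.0:
            # Flat continuation value: eating the whole cake is optimal,
            # so V_1(k) = ln k + beta*a.
            a, b = beta * a, 1.0
            continue
        # FOC of max_c ln c + beta*(a + b*ln(k - c)):  1/c = beta*b/(k - c)
        # => c = k/(1 + beta*b). Substituting back gives the next (a, b):
        bb = beta * b
        a = beta * a - (1.0 + bb) * math.log(1.0 + bb) + bb * math.log(bb)
        b = 1.0 + bb
    policy = lambda k: k / (1.0 + beta * b)   # c = h(k) implied by the limit V
    return a, b, policy
```

The coefficient b_j converges to 1/(1 − β), so the policy converges to the known closed form c = (1 − β)k, illustrating how the sequence of policy functions converges along with the value functions.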
The Euler Equation
• The solution methods described above require that we find the value function V. We show now that it may be possible to solve the problem without this knowledge.
• Consider the Bellman Equation for the infinite-horizon problem, with the constraint substituted into the objective:

    V(k_t) = max_{c_t} u(k_t, c_t) + β V(G(k_t, c_t))

• The FOC is:

    u_c(k_t, c_t) + β V'(G(k_t, c_t)) G_c(k_t, c_t) = 0
• The FOC implicitly defines the policy c_t = h(k_t). Substituting into the objective:

    V(k_t) = u(k_t, h(k_t)) + β V(G(k_t, h(k_t)))

• Differentiating V(k_t) with respect to k_t we get:

    V'(k_t) = u_k(k_t, h(k_t)) + β V'(G(k_t, h(k_t))) G_k(k_t, h(k_t))
              + [u_c(k_t, h(k_t)) + β V'(G(k_t, h(k_t))) G_c(k_t, h(k_t))] h'(k_t),

  where the term in square brackets equals zero by the FOC.
• Then:

    V'(k_t) = u_k(k_t, h(k_t)) + β V'(G(k_t, h(k_t))) G_k(k_t, h(k_t))

• Suppose we choose the state and control variables so that the transition equation does not depend on k_t, i.e., k_{t+1} = G(c_t). Then G_k = 0 and the Envelope Condition becomes:

    V'(k_t) = u_k(k_t, h(k_t)) = u_k(k_t, c_t)
• Shifting the Envelope Condition one period ahead: V'(k_{t+1}) = u_k(k_{t+1}, c_{t+1}).
• From the FOC we know that:

    u_c(k_t, c_t) + β V'(k_{t+1}) G_c(c_t) = 0

• Combining the two expressions we obtain the Euler Equation:

    u_c(k_t, c_t) + β u_k(k_{t+1}, c_{t+1}) G_c(c_t) = 0
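A quick numerical sanity check (not in the notes): for the cake-eating problem of Appendix I, redefine the control as next period's cake, x_t = k_{t+1}, so that the transition k_{t+1} = G(x_t) = x_t does not depend on k_t (G_c = 1) and the return is u(k_t, x_t) = ln(k_t − x_t), with consumption c_t = k_t − x_t. The Euler Equation above then reduces to 1/c_t = β/c_{t+1}, which the candidate policy c_t = (1 − β)k_t satisfies exactly:

```python
import math

def euler_residuals(beta=0.9, k0=1.0, T=20):
    """Evaluate u_c(t) + beta * u_k(t+1) * G_c along the candidate path of
    the infinite-horizon cake-eating problem, with control x_t = k_{t+1}:
    u(k, x) = ln(k - x), so u_k = 1/(k - x) and u_c (= u_x) = -1/(k - x),
    and G(x) = x, so G_c = 1. Candidate policy: x = beta*k."""
    residuals = []
    k = k0
    for _ in range(T):
        x = beta * k                  # next period's cake under the policy
        k_next = x
        x_next = beta * k_next
        c, c_next = k - x, k_next - x_next
        # u_c(t) + beta * u_k(t+1) * G_c(t)
        residuals.append((-1.0 / c) + beta * (1.0 / c_next) * 1.0)
        k = k_next
    return residuals
```

All residuals are zero up to floating-point error, confirming that the closed-form policy satisfies the Euler Equation period by period without ever computing V.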
Stochastic Dynamic Programming
• Consider the (stationary) infinite-horizon stochastic problem:

    max_{ {c_t, k_{t+1}}_{t=0}^{∞} } E_0 Σ_{t=0}^{∞} β^t u(k_t, c_t)
    s.t. k_{t+1} = G(k_t, c_t, ε_t)
    k_0, ε_0 given

• E_t x denotes the mathematical expectation of the random variable x, conditional on information available at time t.
• {ε_t}_{t=0}^{∞} is a sequence of random variables with conditional distribution function f(ε_{t+1}|ε_t) independent of t. It is assumed that ε_t becomes known at t, before c_t is chosen.
• Notice that k_{t+1} belongs to the period-t information set, since the three arguments in the transition function are known at t.
• In some specifications, the transition equation is of the form k_{t+1} = G(k_t, c_t, ε_{t+1}). In this case, ε_{t+1} and k_{t+1} are unknown at t. In other specifications, a stochastic shock is allowed to affect the return function: u(k_t, c_t, ε_t). The same tools we'll develop here can be used to study these cases.
• The stochastic problem described above continues to have a recursive structure, which follows from the additive separability of the objective function in pairs (k_t, c_t), and from the particular form assumed for the transition equation. This implies that dynamic programming methods remain appropriate.
• The Bellman Equation is:

    V(k_t, ε_t) = max_{c_t} u(k_t, c_t) + β E_t {V(G(k_t, c_t, ε_t), ε_{t+1})}

• The solution takes the form:

    c_t = h(k_t, ε_t)
    k_{t+1} = g(k_t, ε_t) ≡ G(k_t, h(k_t, ε_t), ε_t)

• Value-function iteration now operates on:

    V_{j+1}(k_t, ε_t) = max_{c_t} u(k_t, c_t) + β E_t {V_j(G(k_t, c_t, ε_t), ε_{t+1})}
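A minimal discrete-state sketch of this iteration (not from the notes; all names and parameter values are illustrative). For simplicity it uses the return-shock variant mentioned above, u(k_t, c_t, ε_t) = ε_t ln c_t with k_{t+1} = k_t − c_t, a two-state Markov shock, and next period's cake restricted to a grid; the treatment of the bottom gridpoint (eat everything) is a crude boundary approximation:

```python
import math

def stochastic_vfi(beta=0.9, n=60, tol=1e-10, max_iter=5000):
    """Discrete-state VFI for V(k, eps) = max_c { eps*ln c + beta*E[V(k', eps')|eps] }
    with k' = k - c, eps in {0.8, 1.2} (illustrative values), and k' on a grid."""
    grid = [(i + 1) / n for i in range(n)]         # cake sizes in (0, 1]
    eps = [0.8, 1.2]
    P = [[0.8, 0.2], [0.3, 0.7]]                   # P[s][sp] = Pr(eps_s -> eps_sp)
    V = [[0.0] * n for _ in eps]
    for _ in range(max_iter):
        V_new = [[0.0] * n for _ in eps]
        for s, e in enumerate(eps):
            # Bottom gridpoint: approximate by eating the whole cake.
            V_new[s][0] = e * math.log(grid[0])
            for i in range(1, n):
                k = grid[i]
                best = -math.inf
                for j in range(i):                 # choose k' = grid[j] < k
                    c = k - grid[j]
                    ev = sum(P[s][sp] * V[sp][j] for sp in range(len(eps)))
                    val = e * math.log(c) + beta * ev
                    if val > best:
                        best = val
                V_new[s][i] = best
        diff = max(abs(V_new[s][i] - V[s][i])
                   for s in range(len(eps)) for i in range(n))
        V = V_new
        if diff < tol:
            break
    return grid, V
```

Because β < 1 the iteration is a contraction and converges; away from the bottom gridpoint the computed value function is increasing in the cake size, as theory predicts.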
Stochastic Euler Equation
• Consider the problem above, whose solution is:

    c_t = h(k_t, ε_t)

• Substituting into the objective we get:

    V(k_t, ε_t) = u(k_t, h(k_t, ε_t)) + β E_t {V(G(k_t, h(k_t, ε_t), ε_t), ε_{t+1})}
• Differentiating V(k_t, ε_t) with respect to k_t we get:

    V_k(k_t, ε_t) = u_k(k_t, h(k_t, ε_t)) + u_c(k_t, h(k_t, ε_t)) h_k(k_t, ε_t)
                    + β E_t { V_k(G(k_t, h(k_t, ε_t), ε_t), ε_{t+1}) }
                      × [G_k(k_t, h(k_t, ε_t), ε_t) + G_c(k_t, h(k_t, ε_t), ε_t) h_k(k_t, ε_t)]

• Rearranging:

    V_k(k_t, ε_t) = u_k(k_t, h(k_t, ε_t))
                    + β E_t {V_k(G(k_t, h(k_t, ε_t), ε_t), ε_{t+1})} G_k(k_t, h(k_t, ε_t), ε_t)
                    + [u_c(k_t, h(k_t, ε_t)) + β E_t {V_k(G(k_t, h(k_t, ε_t), ε_t), ε_{t+1})} G_c(k_t, h(k_t, ε_t), ε_t)] h_k(k_t, ε_t)

• The last term on the right-hand side of the expression above equals zero because of the FOC. Then:

    V_k(k_t, ε_t) = u_k(k_t, h(k_t, ε_t))
                    + β E_t {V_k(G(k_t, h(k_t, ε_t), ε_t), ε_{t+1})} G_k(k_t, h(k_t, ε_t), ε_t)
• Suppose we manage to choose the state and control variables in such a way that the transition equation does not depend on k_t, so k_{t+1} = G(c_t, ε_t). Then G_k = 0 and the Envelope Condition becomes:

    V_k(k_t, ε_t) = u_k(k_t, h(k_t, ε_t)) = u_k(k_t, c_t)

• Shifting the expression above one period ahead we get:

    V_k(k_{t+1}, ε_{t+1}) = u_k(k_{t+1}, h(k_{t+1}, ε_{t+1})) = u_k(k_{t+1}, c_{t+1})

• From the FOC we know:

    u_c(k_t, c_t) + β E_t {V_k(k_{t+1}, ε_{t+1})} G_c(c_t, ε_t) = 0

• Combining the previous two expressions we obtain the stochastic Euler Equation:

    u_c(k_t, c_t) + β E_t {u_k(k_{t+1}, c_{t+1})} G_c(c_t, ε_t) = 0
• If we can use k_{t+1} = G(c_t, ε_t) to write c_t = m(k_{t+1}, ε_t), the Euler Equation becomes:

    u_c(k_t, m(k_{t+1}, ε_t)) + β E_t {u_k(k_{t+1}, m(k_{t+2}, ε_{t+1}))} G_c(m(k_{t+1}, ε_t), ε_t) = 0
Dynamic Programming and the Lucas Critique
• Recall that, at the optimum,

    V(k_t, ε_t) = u(k_t, h(k_t, ε_t)) + β E_t {V(G(k_t, h(k_t, ε_t), ε_t), ε_{t+1})}

• Notice that, in general, the optimal policy h(·) will depend on the return function u(·), the discount factor β, the probability distribution of the shocks f(·), and the transition function G(·).
• Even if we keep preferences fixed (u(·) and β), the optimal decision rule will depend on the transition function G(·).
• The implication is that, in dynamic decision problems, it is in general impossible to find a single decision rule h(·) that is invariant with respect to changes in the law of motion G(·).
Appendix I: A Simple Example
• Consider the following cake-eating problem: choose c_0, c_1, c_2, k_1, k_2 and k_3, in order to solve

    max ln c_0 + ln c_1 + ln c_2 + ln k_3
    s.t. k_1 = k_0 − c_0
         k_2 = k_1 − c_1
         k_3 = k_2 − c_2
    k_0 > 0 given
• Lagrange
• The Lagrangian is

    L = ln c_0 + ln c_1 + ln c_2 + ln k_3 + λ_1(k_0 − c_0 − k_1) + λ_2(k_1 − c_1 − k_2) + λ_3(k_2 − c_2 − k_3)

• FOC:

    c_0:  1/c_0 = λ_1
    c_1:  1/c_1 = λ_2
    c_2:  1/c_2 = λ_3
    k_1:  λ_1 = λ_2
    k_2:  λ_2 = λ_3
    k_3:  1/k_3 = λ_3
    + constraints
• From the FOC we get:

    λ_1 = λ_2 = λ_3

• Then:

    1/c_0 = 1/c_1 = 1/c_2 = 1/k_3

• Then:

    c_0 = c_1 = c_2 = k_3

• Combining the constraints:

    k_3 = k_2 − c_2 = k_1 − c_1 − c_2 = k_0 − c_0 − c_1 − c_2  ⇒
    c_0 + c_1 + c_2 + k_3 = k_0

• The two previous expressions imply:

    c_0 + c_0 + c_0 + c_0 = k_0  ⇒  4c_0 = k_0  ⇒  c_0* = (1/4) k_0

• Going back to c_0 = c_1 = c_2 = k_3 we get:

    c_1* = c_2* = k_3* = (1/4) k_0
• Substituting c_0* = (1/4)k_0 into the first constraint:

    k_1* = k_0 − c_0* = k_0 − (1/4)k_0  ⇒  k_1* = (3/4) k_0

• Substituting c_1* = (1/4)k_0 and k_1* = (3/4)k_0 into the second constraint:

    k_2* = k_1* − c_1* = (3/4)k_0 − (1/4)k_0 = (2/4)k_0  ⇒  k_2* = (1/2) k_0

• Finally, substituting the solution into the objective function we obtain the maximized value of utility:

    U* = ln c_0* + ln c_1* + ln c_2* + ln k_3*
    U* = ln((1/4)k_0) + ln((1/4)k_0) + ln((1/4)k_0) + ln((1/4)k_0) = 4 ln((1/4)k_0)
    U* = 4 ln(1/4) + 4 ln k_0  ⇒
    U* = −4 ln 4 + 4 ln k_0
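The "solve everything simultaneously" route can also be mimicked numerically (this check is not part of the notes; names are my own): a brute-force search over plans (c_0, c_1, c_2) on a grid recovers the interior optimum c_0* = c_1* = c_2* = k_3* = k_0/4, without any recursive structure:

```python
import math
from itertools import product

def cake_grid_search(k0=1.0, n=40):
    """Brute-force search over consumption plans (c0, c1, c2) on a grid,
    treating the problem 'simultaneously' instead of recursively."""
    steps = [k0 * i / n for i in range(1, n)]
    best_val, best_plan = -math.inf, None
    for c0, c1, c2 in product(steps, repeat=3):
        k3 = k0 - c0 - c1 - c2            # combined constraint
        if k3 <= 0:
            continue                      # infeasible plan
        val = math.log(c0) + math.log(c1) + math.log(c2) + math.log(k3)
        if val > best_val:
            best_val, best_plan = val, (c0, c1, c2, k3)
    return best_plan, best_val
```

Since k_0/4 lies exactly on the grid, the search returns the closed-form plan and the maximized utility U* = −4 ln 4 + 4 ln k_0. Note the cost: the search is cubic in the grid size, which is the numerical counterpart of "solving simultaneously is usually hard."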
• Dynamic Programming
• At t = 2:

    W_2(k_2) ≡ max_{c_2, k_3} ln c_2 + W_3(k_3)
    s.t. k_3 = k_2 − c_2
    k_2 given

  with W_3(k_3) = ln k_3.
• Then:

    W_2(k_2) ≡ max_{c_2, k_3} ln c_2 + ln k_3
    s.t. k_3 = k_2 − c_2
    k_2 given

• Then:

    W_2(k_2) ≡ max_{c_2} ln c_2 + ln(k_2 − c_2)

• FOC:

    1/c_2 = 1/(k_2 − c_2)
• Then: k_2 − c_2 = c_2 ⇒ k_2 = 2c_2 ⇒

    c_2 = (1/2) k_2

• Substituting into the constraint: k_3 = k_2 − c_2 ⇒ k_3 = k_2 − (1/2)k_2 ⇒

    k_3 = (1/2) k_2

• Substituting into the objective: W_2(k_2) = ln((1/2)k_2) + ln((1/2)k_2) ⇒
  W_2(k_2) = 2 ln((1/2)k_2) ⇒ W_2(k_2) = 2 ln(1/2) + 2 ln k_2 ⇒

    W_2(k_2) = −2 ln 2 + 2 ln k_2

• At t = 1:

    W_1(k_1) ≡ max_{c_1, k_2} ln c_1 + W_2(k_2)
    s.t. k_2 = k_1 − c_1
    k_1 given

  where W_2(k_2) was found in the previous step.
• Then:

    W_1(k_1) ≡ max_{c_1, k_2} ln c_1 − 2 ln 2 + 2 ln k_2
    s.t. k_2 = k_1 − c_1
    k_1 given

• Then:

    W_1(k_1) ≡ max_{c_1} ln c_1 − 2 ln 2 + 2 ln(k_1 − c_1)

• FOC:

    1/c_1 = 2/(k_1 − c_1)

• Then: k_1 − c_1 = 2c_1 ⇒ k_1 = 3c_1 ⇒

    c_1 = (1/3) k_1

• Substituting into the constraint: k_2 = k_1 − c_1 ⇒ k_2 = k_1 − (1/3)k_1 ⇒

    k_2 = (2/3) k_1
• Substituting into the objective:

    W_1(k_1) = ln((1/3)k_1) − 2 ln 2 + 2 ln((2/3)k_1)
    W_1(k_1) = ln(1/3) + ln k_1 − 2 ln 2 + 2 ln(2/3) + 2 ln k_1
    W_1(k_1) = −ln 3 − 2 ln 2 + 2 ln 2 − 2 ln 3 + 3 ln k_1  ⇒
    W_1(k_1) = −3 ln 3 + 3 ln k_1

• At t = 0:

    W_0(k_0) ≡ max_{c_0, k_1} ln c_0 + W_1(k_1)
    s.t. k_1 = k_0 − c_0
    k_0 given

  where W_1(k_1) was found in the previous step.
• Then:

    W_0(k_0) ≡ max_{c_0, k_1} ln c_0 − 3 ln 3 + 3 ln k_1
    s.t. k_1 = k_0 − c_0
    k_0 given
• Then:

    W_0(k_0) ≡ max_{c_0} ln c_0 − 3 ln 3 + 3 ln(k_0 − c_0)

• FOC:

    1/c_0 = 3/(k_0 − c_0)

• Then: k_0 − c_0 = 3c_0 ⇒ k_0 = 4c_0 ⇒

    c_0 = (1/4) k_0

• Substituting into the constraint: k_1 = k_0 − c_0 ⇒ k_1 = k_0 − (1/4)k_0 ⇒

    k_1 = (3/4) k_0

• Substituting into the objective:

    W_0(k_0) = ln((1/4)k_0) − 3 ln 3 + 3 ln((3/4)k_0)
    W_0(k_0) = ln(1/4) + ln k_0 − 3 ln 3 + 3 ln(3/4) + 3 ln k_0
    W_0(k_0) = −ln 4 − 3 ln 3 + 3 ln 3 − 3 ln 4 + 4 ln k_0  ⇒
    W_0(k_0) = −4 ln 4 + 4 ln k_0
• Collecting the policy functions obtained by backward induction:

    c_0 = (1/4) k_0   [c_0 = h_0(k_0)]
    k_1 = (3/4) k_0   [k_1 = g_0(k_0)]
    c_1 = (1/3) k_1   [c_1 = h_1(k_1)]
    k_2 = (2/3) k_1   [k_2 = g_1(k_1)]
    c_2 = (1/2) k_2   [c_2 = h_2(k_2)]
    k_3 = (1/2) k_2   [k_3 = g_2(k_2)]
• From the expressions above we obtain the solution in terms of k_0:

    c_0* = (1/4) k_0
    k_1* = (3/4) k_0
    c_1* = (1/3) k_1* = (1/4) k_0
    k_2* = (2/3) k_1* = (1/2) k_0
    c_2* = (1/2) k_2* = (1/4) k_0
    k_3* = (1/2) k_2* = (1/4) k_0

• Notice that this solution coincides with the one we found with the method of Lagrange.
• As expected, the values of maximized utility coincide:

    U* = W_0(k_0) = −4 ln 4 + 4 ln k_0
Appendix II: A Non-Recursive Problem
• Consider a utility-maximization problem of the form:

    max u_0(k_0, c_0, c_1) + u_1(k_1, c_1) + S(k_2)
    s.t. k_1 = G_0(k_0, c_0)
         k_2 = G_1(k_1, c_1)
    k_0 given

• Notice that u_0(·) depends not only on k_0 and c_0 but also on c_1.
• Now consider the following particular version of the problem above:

    max ln(c_0 − c_1) + ln c_1 + ln k_2
    s.t. k_1 = k_0 − c_0
         k_2 = k_1 − c_1
    k_0 > 0 given
• We start by solving the problem with the method of Lagrange.
• Notice that the two constraints can be combined into one:

    c_0 + c_1 + k_2 = k_0

• Lagrangian:

    L = ln(c_0 − c_1) + ln c_1 + ln k_2 + λ(k_0 − c_0 − c_1 − k_2)

• FOC:

    ∂L/∂c_0 = 0  ⇒  1/(c_0 − c_1) = λ
    ∂L/∂c_1 = 0  ⇒  −1/(c_0 − c_1) + 1/c_1 = λ
    ∂L/∂k_2 = 0  ⇒  1/k_2 = λ
    ∂L/∂λ = 0  ⇒  c_0 + c_1 + k_2 = k_0
• Solving the system above we get:

    c_0* = (1/2) k_0
    c_1* = (1/6) k_0
    k_2* = (1/3) k_0

• Using the first-period constraint, k_1 = k_0 − c_0, we get:

    k_1* = (1/2) k_0

• Evaluating the objective function we get:

    U* = −ln 54 + 3 ln k_0
• Now suppose we try to solve the problem by backward induction.
• At t = 1 (i.e., t = T) we face the following problem:

    W_1(k_1) ≡ max_{c_1, k_2} ln c_1 + ln k_2
    s.t. k_2 = k_1 − c_1
    k_1 given
• Solving (the FOC is 1/c_1 = 1/(k_1 − c_1)) gives:

    c_1 = (1/2) k_1
    k_2 = (1/2) k_1
    W_1(k_1) = −2 ln 2 + 2 ln k_1

• Evaluated at k_1* = (1/2)k_0, this rule prescribes c_1 = (1/4)k_0 ≠ c_1* = (1/6)k_0: re-optimizing at t = 1 departs from the original plan.
• The optimal plan (i.e., the solution to the original problem) is time-inconsistent: if given the chance to re-optimize at t = 1, the decision maker would choose to depart from the original plan. In other words, the optimal plan lacks the self-enforcing character of dynamic-programming solutions.
• Let's continue with our backward iteration.
• At t = 0 we face the following problem:

    W_0(k_0) ≡ max ln(c_0 − c_1) + W_1(k_1)
    s.t. k_1 = k_0 − c_0
         c_1 = (1/2) k_1
    k_0 given

  with W_1(k_1) = −2 ln 2 + 2 ln k_1.
• Since c_1 appears in the return function at t = 0, we need to take into account that c_1 = (1/2)k_1.
• Alternatively, we could write the problem as follows:

    W_0(k_0) ≡ max ln(c_0 − c_1) + ln c_1 + ln k_2
    s.t. k_1 = k_0 − c_0
         c_1 = (1/2) k_1
         k_2 = (1/2) k_1
    k_0 given
• Then, we need to solve:

    W_0(k_0) ≡ max_{c_0, k_1} ln(c_0 − (1/2)k_1) − 2 ln 2 + 2 ln k_1
    s.t. k_1 = k_0 − c_0
    k_0 given

• We get:

    ĉ_0 = (5/9) k_0
    k̂_1 = (4/9) k_0

• Substituting k̂_1 = (4/9)k_0 into c_1 = (1/2)k_1 and k_2 = (1/2)k_1 we get:

    ĉ_1 = (2/9) k_0
    k̂_2 = (2/9) k_0
• Finally, substituting the solution into the objective function we get:

    W_0(k_0) = −ln(243/4) + 3 ln k_0

• Comparing with the Lagrange solution:

    ĉ_0 = (5/9) k_0 > c_0* = (1/2) k_0
    k̂_1 = (4/9) k_0 < k_1* = (1/2) k_0
    ĉ_1 = (2/9) k_0 > c_1* = (1/6) k_0
    k̂_2 = (2/9) k_0 < k_2* = (1/3) k_0

• And:

    W_0(k_0) = −ln(243/4) + 3 ln k_0 < U* = −ln 54 + 3 ln k_0
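The welfare comparison can be verified directly (this check is not part of the notes; names are my own): evaluating both plans in the objective confirms the closed forms above and that the commitment (Lagrange) plan dominates the naive backward-induction plan.

```python
import math

def plan_utility(c0, c1, k2):
    """Realized utility ln(c0 - c1) + ln c1 + ln k2 of a feasible plan
    (feasibility: c0 + c1 + k2 = k0) in the non-recursive example."""
    return math.log(c0 - c1) + math.log(c1) + math.log(k2)

def compare_plans(k0=1.0):
    # Lagrange (commitment) solution vs. naive backward-induction plan.
    U_star = plan_utility(k0 / 2, k0 / 6, k0 / 3)
    U_hat = plan_utility(5 * k0 / 9, 2 * k0 / 9, 2 * k0 / 9)
    return U_star, U_hat
```

Both plans exhaust the cake, so the gap is entirely due to the naive recursion ignoring that c_1 enters the period-0 return.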
Appendix III: The Euler Equation
• Combining the FOC, u_c(k_t, c_t) + β V'(k_{t+1}) G_c(k_t, c_t) = 0, with the Envelope Condition gives

    V'(k_t) = u_k(k_t, c_t) − u_c(k_t, c_t) G_k(k_t, c_t)/G_c(k_t, c_t)

• Shifting the expression above one period ahead we get:

    V'(k_{t+1}) = u_k(k_{t+1}, c_{t+1}) − u_c(k_{t+1}, c_{t+1}) G_k(k_{t+1}, c_{t+1})/G_c(k_{t+1}, c_{t+1})

• Substituting the previous expression into the FOC we obtain the Euler Equation:

    u_c(k_t, c_t) + β [u_k(k_{t+1}, c_{t+1}) − u_c(k_{t+1}, c_{t+1}) G_k(k_{t+1}, c_{t+1})/G_c(k_{t+1}, c_{t+1})] G_c(k_t, c_t) = 0
Bibliography