Macroeconomics 1a: Fall 2024.
Problem Set 2
Professor: Marek Kapička. Teaching assistant: Jose Gomez Castro
Student: Kai Angel-Augusto Sanchez Pajuelo
CERGE-EI, 1st year PhD studies
2024/2025
Problem 1: Cake Eating with Habit Persistence
1.a Recursive Formulation and Solution for the Consumer’s Decision Problem
Problem Overview
We are tasked with formulating a recursive dynamic programming problem for a consumer who decides how
much of a cake to consume in each period to maximize utility over time. The consumer’s preferences follow
a CRRA (Constant Relative Risk Aversion) utility function:
\[
\sum_{t=0}^{\infty} \beta^t \frac{c_t^{1-\sigma}}{1-\sigma},
\]
where:
• $c_t$ is consumption at time $t$,
• $\beta \in (0, 1)$ is the discount factor,
• $\sigma > 0$ is the coefficient of relative risk aversion.
The resource constraint is:
\[
c_t + x_{t+1} \le x_t, \qquad x_{t+1} \ge 0, \quad c_t \ge 0,
\]
where $x_t$ represents the amount of cake available at the start of period $t$.
We are asked to:
1. Formulate the problem recursively.
2. Guess that the value function $v(x)$ and policy function $g(x)$ take the forms
\[
v(x) = \alpha \frac{x^{1-\sigma}}{1-\sigma}, \qquad g(x) = \theta x.
\]
3. Solve for $\alpha$ and $\theta$.
Solution
Recursive Formulation (Bellman Equation) The Bellman equation for the consumer's decision problem is
\[
v(x) = \max_{c} \left\{ \frac{c^{1-\sigma}}{1-\sigma} + \beta v(x') \right\},
\]
subject to the constraint $x' = x - c$.
Guessing the Value Function and Policy Function We guess that the value function and policy function take the following forms:
\[
v(x) = \alpha \frac{x^{1-\sigma}}{1-\sigma}, \qquad g(x) = \theta x.
\]
Substituting into the Bellman Equation Substitute the guessed form for $v(x)$ into the Bellman equation:
\[
\alpha \frac{x^{1-\sigma}}{1-\sigma} = \max_{c} \left\{ \frac{c^{1-\sigma}}{1-\sigma} + \beta\alpha \frac{(x-c)^{1-\sigma}}{1-\sigma} \right\}.
\]
The next step is to solve for the optimal consumption $c$.
First-Order Condition Take the derivative of the right-hand side with respect to $c$ to obtain the first-order condition (FOC):
\[
c^{-\sigma} = \beta\alpha (x-c)^{-\sigma}.
\]
Solving for $c$ Rearranging the FOC,
\[
\frac{1}{c^{\sigma}} = \beta\alpha \frac{1}{(x-c)^{\sigma}},
\]
and multiplying both sides by $c^{\sigma}(x-c)^{\sigma}$:
\[
(x-c)^{\sigma} = \beta\alpha\, c^{\sigma}.
\]
Taking the $\sigma$-th root:
\[
\frac{x-c}{c} = (\beta\alpha)^{1/\sigma}.
\]
Solving for $c$:
\[
c = \frac{x}{1 + (\beta\alpha)^{1/\sigma}}.
\]
Thus, the optimal policy function is
\[
g(x) = \frac{x}{1 + (\beta\alpha)^{1/\sigma}},
\]
which has the form $g(x) = \theta x$ with
\[
\theta = \frac{1}{1 + (\beta\alpha)^{1/\sigma}}.
\]
Finding $\alpha$ Finally, substituting the policy function $c = \frac{x}{1+(\beta\alpha)^{1/\sigma}}$ into the Bellman equation gives a nonlinear equation for $\alpha$:
\[
\alpha = \left(\frac{1}{1+(\beta\alpha)^{1/\sigma}}\right)^{1-\sigma} + \beta\alpha \left(\frac{(\beta\alpha)^{1/\sigma}}{1+(\beta\alpha)^{1/\sigma}}\right)^{1-\sigma}.
\]
\[
\alpha = \frac{1 + \beta\alpha\,(\beta\alpha)^{(1-\sigma)/\sigma}}{\left(1+(\beta\alpha)^{1/\sigma}\right)^{1-\sigma}}.
\]
Since $\beta\alpha\,(\beta\alpha)^{(1-\sigma)/\sigma} = (\beta\alpha)^{1/\sigma}$,
\[
\alpha = \frac{1+(\beta\alpha)^{1/\sigma}}{\left(1+(\beta\alpha)^{1/\sigma}\right)^{1-\sigma}} = \left(1+(\beta\alpha)^{1/\sigma}\right)^{\sigma}.
\]
Taking the $\sigma$-th root of both sides:
\[
\alpha^{1/\sigma} = 1 + (\beta\alpha)^{1/\sigma}.
\]
\[
\alpha^{1/\sigma} - (\beta\alpha)^{1/\sigma} = 1.
\]
\[
\alpha^{1/\sigma}\left(1 - \beta^{1/\sigma}\right) = 1.
\]
\[
\alpha^{1/\sigma} = \frac{1}{1-\beta^{1/\sigma}}.
\]
\[
\alpha = \left(\frac{1}{1-\beta^{1/\sigma}}\right)^{\sigma}.
\]
Conclusion
The recursive formulation leads to the following results:
• The value function is $v(x) = \alpha \frac{x^{1-\sigma}}{1-\sigma}$, where $\alpha = \left(\frac{1}{1-\beta^{1/\sigma}}\right)^{\sigma}$.
• The policy function is $g(x) = \frac{x}{1+(\beta\alpha)^{1/\sigma}}$, where $\theta = \frac{1}{1+(\beta\alpha)^{1/\sigma}}$.
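The closed-form coefficients can be checked numerically against the Bellman equation. A minimal sketch in Python; the parameter values ($\beta = 0.96$, $\sigma = 2$) and the cake size are illustrative assumptions, not taken from the problem:

```python
import numpy as np

beta, sigma = 0.96, 2.0  # illustrative parameters (assumed)

# Coefficients from the closed-form solution derived above
alpha = (1.0 / (1.0 - beta**(1.0 / sigma)))**sigma
theta = 1.0 / (1.0 + (beta * alpha)**(1.0 / sigma))

# Since alpha^(1/sigma) = 1 + (beta*alpha)^(1/sigma),
# theta simplifies to 1 - beta^(1/sigma)
assert abs(theta - (1.0 - beta**(1.0 / sigma))) < 1e-12

def v(x):
    # Guessed value function v(x) = alpha * x^(1-sigma) / (1-sigma)
    return alpha * x**(1.0 - sigma) / (1.0 - sigma)

# Verify the Bellman equation holds at an arbitrary cake size
x = 1.7
c = theta * x                                   # optimal consumption g(x)
lhs = v(x)
rhs = c**(1.0 - sigma) / (1.0 - sigma) + beta * v(x - c)
print(abs(lhs - rhs) < 1e-6)                    # → True
```

The internal assertion confirms the simplification $\theta = 1 - \beta^{1/\sigma}$, which follows directly from $\alpha^{1/\sigma} = 1 + (\beta\alpha)^{1/\sigma}$ in the derivation above.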
1.b Habit Persistence (One-Period Lag)
Problem Overview
Now, we assume that the consumer exhibits habit persistence, and the utility function is:
\[
\sum_{t=0}^{\infty} \beta^t \frac{(c_t - \gamma c_{t-1})^{1-\sigma}}{1-\sigma},
\]
where:
• $0 < \gamma < 1$ is the habit persistence parameter.
• The utility at time $t$ depends on both current consumption $c_t$ and past consumption $c_{t-1}$.
State Variables
In this case, the consumer’s problem depends on two state variables:
1. $x_t$: the cake available at time $t$,
2. $c_{t-1}$: the past period's consumption, which affects current utility.
Bellman Equation
The recursive Bellman equation now becomes:
\[
v(x_t, c_{t-1}) = \max_{c_t} \left\{ \frac{(c_t - \gamma c_{t-1})^{1-\sigma}}{1-\sigma} + \beta v(x_{t+1}, c_t) \right\},
\]
subject to the resource constraint $x_{t+1} = x_t - c_t$.
Conclusion
The introduction of habit persistence means that the previous period's consumption $c_{t-1}$ is now a state variable, and the utility function depends on both $c_t$ and $c_{t-1}$.
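To make the two-state formulation concrete, here is a minimal sketch of one Bellman update on a grid. The parameter values, grid sizes, and the large negative penalty standing in for $-\infty$ are all assumptions for illustration:

```python
import numpy as np

beta, sigma, gamma = 0.96, 2.0, 0.5    # illustrative parameters (assumed)
x_grid = np.linspace(0.0, 1.0, 40)     # cake remaining
c_grid = np.linspace(0.0, 1.0, 40)     # previous consumption / choices
PEN = -1e10                            # stands in for -infinity

def u(c, c_prev):
    s = np.asarray(c - gamma * c_prev, dtype=float)  # habit-adjusted consumption
    out = np.full(s.shape, PEN)
    ok = s > 0                         # utility only defined for c > gamma*c_prev
    out[ok] = s[ok]**(1 - sigma) / (1 - sigma)
    return out

def bellman(v):                        # v[i, j] = v(x_grid[i], c_grid[j])
    v_new = np.full_like(v, PEN)
    for j, cp in enumerate(c_grid):
        for i, x in enumerate(x_grid):
            # choices c restricted to c_grid so the habit state stays on-grid;
            # continuation value interpolated over the remaining cake x - c
            cont = np.array([np.interp(x - c, x_grid, v[:, k])
                             for k, c in enumerate(c_grid)])
            vals = u(c_grid, cp) + beta * cont
            vals[c_grid > x] = PEN     # cannot eat more cake than remains
            v_new[i, j] = vals.max()
    return v_new

v = bellman(np.zeros((len(x_grid), len(c_grid))))
```

Restricting the consumption choice to the same grid as $c_{t-1}$ keeps the next habit state on-grid, so only one-dimensional interpolation in $x$ is needed.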
1.c Habit Persistence with an Additional Lag
Problem Overview
In this case, the habit persistence operates with an additional lag. The utility function is:
\[
\sum_{t=0}^{\infty} \beta^t \frac{(c_t - \gamma c_{t-2})^{1-\sigma}}{1-\sigma},
\]
where $c_{t-2}$ is consumption from two periods ago.
State Variables
Now, the state space must include:
1. $x_t$: the cake available at time $t$,
2. $c_{t-1}$: consumption in the previous period,
3. $c_{t-2}$: consumption two periods ago.
Bellman Equation
The Bellman equation becomes:
\[
v(x_t, c_{t-1}, c_{t-2}) = \max_{c_t} \left\{ \frac{(c_t - \gamma c_{t-2})^{1-\sigma}}{1-\sigma} + \beta v(x_{t+1}, c_t, c_{t-1}) \right\},
\]
subject to the resource constraint $x_{t+1} = x_t - c_t$.
Conclusion
The addition of a two-period lag increases the dimensionality of the state space. Now, $c_{t-2}$ must be tracked as part of the consumer's decision problem, and the recursive problem includes three state variables: $x_t$, $c_{t-1}$, and $c_{t-2}$.
Problem 2: Dynamic Programming with Capital Accumulation
2.a Value Function Iteration for Capital Accumulation Model
Problem Overview
In this question, we solve for the value function v(k) and associated policy functions for a neoclassical growth
model using value function iteration. The model is characterized by the following components:
• Production Function: $y = k^{\alpha}$, where $k$ represents the capital stock and $\alpha = 0.4$ is the output elasticity of capital.
• Utility Function: The agent's preferences follow a logarithmic utility function, $u(c) = \log(c)$, where $c$ denotes consumption.
• Capital Accumulation Equation: The law of motion for capital is given by
\[
k_{t+1} = k_t^{\alpha} + (1-\delta)k_t - c_t,
\]
where $\delta = 0.04$ is the depreciation rate. The agent maximizes lifetime utility with a discount factor $\beta = 0.96$.
The objective is to use value function iteration to approximate $v(k)$, the maximum attainable utility for each capital level $k$, and to derive the policy functions for optimal consumption $c(k)$ and next-period capital $k'(k)$.
Solution Methodology
Recursive Formulation (Bellman Equation)
The Bellman equation for the agent's optimization problem is
\[
v(k) = \max_{k' \ge 0} \left\{ \log\!\left(k^{\alpha} + (1-\delta)k - k'\right) + \beta v(k') \right\},
\]
where $k'$ represents the capital stock in the next period, and the choice of $k'$ must satisfy $c = k^{\alpha} + (1-\delta)k - k' > 0$.
Value Function Iteration
The solution discretizes the capital grid and applies value function iteration to solve the Bellman equation until convergence. The iteration converged after 218 iterations under a stopping tolerance of $\epsilon = 0.0001$: the algorithm continued until the maximum absolute difference between successive iterations of the value function fell below $\epsilon$, ensuring an accurate approximation of the value function.
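The iteration just described can be sketched compactly in Python. The grid bounds and number of grid points below are assumptions for illustration, not taken from the original implementation:

```python
import numpy as np

alpha, beta, delta, tol = 0.4, 0.96, 0.04, 1e-4

k_grid = np.linspace(1.0, 25.0, 300)      # capital grid (bounds assumed)

# Consumption implied by each (k, k') pair; infeasible pairs get -inf utility
c = k_grid[:, None]**alpha + (1 - delta) * k_grid[:, None] - k_grid[None, :]
util = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)

v = np.zeros(len(k_grid))
for it in range(2000):
    v_new = (util + beta * v[None, :]).max(axis=1)   # Bellman update
    if np.max(np.abs(v_new - v)) < tol:              # sup-norm stopping rule
        v = v_new
        break
    v = v_new

# Policy functions recovered from the converged value function
k_next = k_grid[(util + beta * v[None, :]).argmax(axis=1)]   # k'(k)
c_pol = k_grid**alpha + (1 - delta) * k_grid - k_next        # c(k)
```

With $\beta = 0.96$ the Bellman operator shrinks the sup-norm error by a factor of roughly $0.96$ per iteration, which is consistent with the roughly 200 iterations reported above for a tolerance of $10^{-4}$.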
Results and Interpretation of Policy Functions
After implementing the value function iteration, we obtained three key outputs: the value function, the
consumption policy function, and the capital policy function. Each of these graphs provides insights
into the optimal behavior of the agent in the neoclassical growth model.
1. Value Function $v^*(k)$
Figure 1: Value Function $v^*(k)$
The value function $v^*(k)$ represents the maximum attainable lifetime utility for different levels of capital $k$. As shown in the plot, the value function is monotonically increasing and concave. This indicates that as capital $k$ increases, lifetime utility also increases, but at a diminishing rate due to the concavity of $v^*(k)$. The concavity reflects diminishing marginal returns to capital in utility terms, which aligns with the model's production function, where additional capital contributes less to output as the capital stock grows.
From an economic perspective, the increasing and concave shape of the value function implies that agents
have an incentive to accumulate capital, but this incentive decreases as the capital stock becomes large.
2. Consumption Policy Function c(k)
Figure 2: Consumption Policy Function c(k)
The consumption policy function c(k) shows the optimal level of consumption for each capital stock level k.
In this model, we observe that consumption is approximately linear with respect to capital. As capital k
increases, the optimal consumption c also increases, reflecting that agents with a higher capital endowment
can afford higher levels of consumption while still optimizing their lifetime utility.
This linearity is a feature of the logarithmic utility function, which suggests that agents will allocate a
roughly constant proportion of output to consumption at different levels of capital. The upward slope of
this function reinforces the intuition that more productive capacity (higher k) enables higher immediate
consumption without compromising future capital accumulation.
3. Capital Policy Function $k'(k)$
The capital policy function $k'(k)$ illustrates the optimal level of next-period capital as a function of current capital. The function is close to a 45-degree line, indicating a nearly one-to-one relationship between $k$ and $k'$ in this range. This behavior suggests that agents prefer to maintain or slightly increase their capital stock,
which aligns with a steady-state growth path.
Economically, this function implies that agents are investing a portion of their output back into capital to
maintain a stable growth path over time. The proximity to the 45-degree line hints at a potential steady-state
equilibrium where capital stock stabilizes, ensuring sustainable consumption and utility levels.
Figure 3: Capital Policy Function $k'(k)$
Conclusion
Together, these policy functions and the value function depict a coherent optimal strategy for agents in the
model. The results highlight:
1. Incentives for Capital Accumulation: The value function's shape shows that agents derive increasing utility from accumulating capital, though with diminishing marginal utility.
2. Balanced Consumption and Savings: The linearity of the consumption policy function indicates a
consistent consumption-saving decision that supports both immediate utility and future capital growth.
3. Sustainability of Capital Stock: The capital policy function’s alignment with the 45-degree line
suggests a self-sustaining capital path, guiding the economy toward a steady state.
2.b Steady-State Analysis and Constant Consumption Policy
Steady-State Consumption Fraction $\eta$
To find the steady-state consumption fraction $\eta$, we compute the ratio of steady-state consumption to steady-state output:
\[
\eta = \frac{c^{ss}}{F(k^{ss})} = \frac{c^{ss}}{(k^{ss})^{\alpha}}.
\]
Using the values from part (a), where the steady-state capital is $k^{ss} = 14.1262$ and steady-state consumption is $c^{ss} = 2.3191$, we obtain
\[
\eta = 0.8041.
\]
This means that, at the steady state, approximately 80.41% of output is consumed each period, with the
remaining 19.59% reinvested to maintain the capital stock.
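These steady-state values can be reproduced directly from the model's Euler equation, $1 = \beta(\alpha k^{\alpha-1} + 1 - \delta)$, together with the resource constraint. A minimal check:

```python
alpha, beta, delta = 0.4, 0.96, 0.04

# Steady-state capital from the Euler equation 1 = beta*(alpha*k^(alpha-1) + 1 - delta)
k_ss = ((1.0 / beta - (1.0 - delta)) / alpha)**(1.0 / (alpha - 1.0))

# Steady-state consumption from the resource constraint c = k^alpha - delta*k
c_ss = k_ss**alpha - delta * k_ss

eta = c_ss / k_ss**alpha          # fraction of output consumed

print(k_ss, c_ss, eta)            # close to 14.1262, 2.3191, 0.8041
```

The analytical values agree with the grid-based results quoted above to about four decimal places.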
Constant Consumption Policy and Discounted Lifetime Utility $\hat{v}_0(k)$
Assuming the agent consumes a constant fraction $\eta$ of output each period, the consumption policy can be defined as
\[
\hat{c}(k) = \eta k^{\alpha} \quad \text{for all } k.
\]
Given this consumption strategy, we calculate the discounted lifetime utility $\hat{v}_0(k)$ of an agent who follows this rule. Treating consumption as constant over time (exact at the steady state), the geometric series gives
\[
\hat{v}_0(k) = \sum_{t=0}^{\infty} \beta^t \log(\eta k^{\alpha}) = \frac{\log(\eta k^{\alpha})}{1-\beta}.
\]
Substituting $\eta = 0.8041$, $k^{ss} = 14.1262$, and $\alpha = 0.4$ into this formula, we find
\[
\hat{v}_0(k^{ss}) = 21.0289.
\]
This discounted lifetime utility, $\hat{v}_0(k)$, serves as the initial guess for the value function in the next iteration step.
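The value $\hat{v}_0(k^{ss}) = 21.0289$ can be reproduced from the closed-form sum; a small check, recomputing the steady state exactly rather than from the rounded values above:

```python
import math

alpha, beta, delta = 0.4, 0.96, 0.04

# Steady state as in the text, from the Euler equation
k_ss = ((1.0 / beta - (1.0 - delta)) / alpha)**(1.0 / (alpha - 1.0))
eta = (k_ss**alpha - delta * k_ss) / k_ss**alpha

# Geometric sum of discounted log utility under constant consumption eta*k^alpha
v_hat0 = math.log(eta * k_ss**alpha) / (1.0 - beta)
print(v_hat0)                     # close to 21.0289
```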
Value Function Iteration with $\hat{v}_0(k)$ as the Initial Guess
Using $\hat{v}_0(k)$ as the initial guess, we proceed with the value function iteration process. The objective is to iteratively refine the value function until it converges to the optimal value function $v_N(k)$.
With the convergence tolerance set to $10^{-6}$, the iteration process converged after 1 iteration, yielding a final value at the steady state of
\[
v_N(k^{ss}) = 21.0289.
\]
The rapid convergence (in a single iteration) suggests that the initial guess $\hat{v}_0(k)$ was very close to the true value function, supporting its effectiveness as an approximation.
Comparison of Initial Guess $\hat{v}_0(k)$ and Final Value Function $v_N(k)$
To visually assess the accuracy of $\hat{v}_0(k)$ as an initial guess, we plot both $\hat{v}_0(k)$ and $v_N(k)$ over a range of capital values.
• Initial Guess $\hat{v}_0(k)$ is shown as a dashed line, representing the initial approximation of lifetime utility under a fixed consumption fraction.
• Final Value Function $v_N(k)$ is shown as a solid line, illustrating the refined optimal lifetime utility for various levels of capital.
The plot demonstrates that $\hat{v}_0(k)$ and $v_N(k)$ align closely, with $v_N(k)$ only marginally higher for larger values of $k$. This slight difference reflects the additional flexibility in consumption allocation under the optimal policy, which provides slightly higher utility than the constant consumption rule as capital increases.
Figure 4: Comparison of the Initial Guess $\hat{v}_0(k)$ and the Final Value Function $v_N(k)$
Interpretation of Results
1. Convergence: The iteration process converged in a single step, indicating that $\hat{v}_0(k)$ was a very close approximation to the optimal value function. This confirms the effectiveness of using $\hat{v}_0(k)$ as a starting point for the value function iteration.
2. Comparison of $\hat{v}_0(k)$ and $v_N(k)$: The close alignment of the initial guess $\hat{v}_0(k)$ with the final value function $v_N(k)$ suggests that the constant-fraction consumption policy approximates the optimal policy well, particularly near the steady state. However, the optimal policy allows for more flexible consumption choices, yielding slightly higher utility as $k$ increases.
3. Economic Significance: The results indicate that the steady-state values $k^{ss} = 14.1262$ and $c^{ss} = 2.3191$ satisfy both the capital accumulation condition and the Euler equation, ensuring that the agent's consumption and investment decisions are optimal. The value function's concave shape confirms that each additional unit of capital yields diminishing returns in terms of lifetime utility.
2.c: Transition to Steady State in the Optimal Growth Model
Problem Statement
In this part, we examine the transition path of capital in a neoclassical growth model. Starting from an initial capital level 25% below the steady-state value $k^{ss}$, we aim to simulate and analyze the optimal path of capital as it converges to the steady state. This transition path is determined by the capital policy function $k' = k_N(k)$, which was derived through value function iteration in part 2.a.
Methodology
Initial Setup and Parameters To compute the transition, we follow these steps:
1. Define the Initial Capital Level: Set the initial capital $k_0$ 25% below the steady-state capital $k^{ss}$ obtained in part 2.b:
\[
k_0 = 0.75 \times k^{ss}.
\]
This initial condition represents a scenario where the economy begins with lower-than-optimal capital, simulating a convergence process toward the steady-state equilibrium.
2. Iterate Using the Capital Policy Function: Starting from $k_0$, we apply the capital policy function $k' = k_N(k)$ each period to determine the subsequent level of capital. The process continues iteratively until the capital level approaches the steady-state level $k^{ss}$, with a tolerance of $10^{-4}$ to ensure convergence accuracy.
3. Plot the Transition Path: The simulated path shows how capital adjusts over time, approaching the
steady-state value as the economy reaches equilibrium. This plot provides insights into the dynamics
of capital accumulation under optimal policy.
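The three steps above can be sketched end to end in Python. The policy function is recomputed here with a small value function iteration so the snippet is self-contained; the grid bounds, grid size, and iteration caps are assumptions for illustration:

```python
import numpy as np

alpha, beta, delta = 0.4, 0.96, 0.04

# Recompute the capital policy k'(k) by value function iteration (as in 2.a)
k_grid = np.linspace(1.0, 25.0, 300)
c = k_grid[:, None]**alpha + (1 - delta) * k_grid[:, None] - k_grid[None, :]
util = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
v = np.zeros(len(k_grid))
for _ in range(2000):
    v_new = (util + beta * v[None, :]).max(axis=1)
    if np.max(np.abs(v_new - v)) < 1e-4:
        v = v_new
        break
    v = v_new
k_pol = k_grid[(util + beta * v[None, :]).argmax(axis=1)]

# Step 1: start 25% below the steady state; Step 2: iterate the policy
k_ss = ((1.0 / beta - (1.0 - delta)) / alpha)**(1.0 / (alpha - 1.0))
path = [0.75 * k_ss]
while abs(path[-1] - k_ss) > 1e-4 and len(path) < 2000:
    path.append(float(np.interp(path[-1], k_grid, k_pol)))
# Step 3 would plot `path` against time to visualize the transition
```

On a discrete grid the simulated path converges to a fixed point of the interpolated policy, which lies within roughly one grid spacing of the analytical steady state.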
Figure 5: Transition Path of Capital to the Steady State